
Over 600,000 People Impacted In a Major Data Leak

 

Over 600,000 people were impacted by a data leak at another background check company. Compared to the 2.9 billion people affected by the National Public Data theft, this is a minor breach, but it is still concerning. The exposed database, belonging to a company called SL Data Services, was discovered online; it was neither encrypted nor password-protected and was available to the public.

Jeremiah Fowler, a cybersecurity researcher, uncovered the breach (or lack of protection on the files). Full names, residences, email addresses, employment data, social media accounts, phone numbers, court records, property ownership data, car records, and criminal records were all leaked.

Everything was stored in PDF files, the majority of which were labelled "background check." The database held a total of 713.1GB of files. Fortunately, the content is no longer publicly available, though it took some time to be properly secured: after receiving the responsible disclosure notice, SL Data Services took a week to take the database offline. 

A week is a long time to have 600,000 people's information sitting in publicly accessible files. Unfortunately, those with data in the breach might not even know their information was included. Since background checks are typically requested by someone else, and the person being checked rarely knows which background check company was used, the situation becomes even more complicated. 

While social security numbers and financial details were not included in the incident, so much other information about the affected people is now exposed that scammers can use it to deceive unsuspecting victims through social engineering.

Thankfully, there is no evidence that malicious actors accessed the open database or obtained sensitive information, but there is also no certainty that they did not. Only time will tell: if we observe a sudden increase in social engineering attacks, we will know something has happened.

Internal Threats Loom Large as Businesses Deal With External Threats

 

Most people have likely been required by their employer to sit through hour-long courses on how to prevent cyberattacks such as phishing, malware, and ransomware. Companies compel their staff to do this because cybercrime is extremely costly: according to FBI and IMF estimates, its cost is predicted to rise from $8.4 trillion in 2022 to $23 trillion by 2027. Preventative measures, such as multifactor authentication, are available. 

The fact is, all of these threats are external. As companies develop the ability to handle them, leadership's attention will move to an even more important concern: risks emanating from within the organisation. Being on "the inside" generally means employees have access to the highly sensitive and confidential information required to perform their duties. 

This can include financial performance statistics, product launch timelines, and source code. While this seems reasonable at first look, allowing access to this information also poses a significant risk to organizations—from top-secret government agencies to Fortune 500 companies and small businesses—if employees leak it.

Unfortunately, insider disclosures are becoming increasingly common. Since 2019, the share of organisations reporting insider incidents has increased from 66% to an astounding 76%. Furthermore, these insider leaks are costly: in 2023, organisations spent an average of $16.2 million resolving insider threats, with North American companies incurring the greatest overall cost at $19.09 million. 

There are several recent examples: leaked Israeli documents regarding an attack on Iran, and an Apple employee leaking information about the iPhone 16. Examples abound throughout history as well; in 1971, the Pentagon Papers altered public perception of the Vietnam War. However, the widespread use of internet media has made these leaks simpler to propagate and more difficult to detect. 

Prevention tips 

Tech help: Monitoring for suspicious behaviour with software and AI is one way to prevent leaks. Behaviour modelling technology, particularly AI-powered tools, can be quite effective at drawing statistical conclusions and using predictive analytics to, well, forecast outcomes and raise red flags. 

These solutions can raise an alarm, for example, if someone in HR, who would ordinarily not handle product design files, suddenly downloads a large number of them, or if an employee has copied a large amount of information to a USB drive. Companies can use this information to conduct investigations, adjust access levels, or notify managers that something needs closer attention. 
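
As a minimal illustration of the statistical flagging described above, the sketch below builds a per-user baseline of daily file downloads and raises an alert when today's volume deviates sharply from it. The event format, thresholds, and user names are hypothetical; a production system would use richer signals and proper behaviour-modelling software.

```python
# Minimal sketch: flag users whose daily file-download volume deviates sharply
# from their own historical baseline. Data and thresholds are illustrative.
from statistics import mean, stdev

def flag_anomalies(history, today, z_threshold=3.0):
    """history: {user: [daily download counts]}; today: {user: today's count}."""
    alerts = []
    for user, counts in history.items():
        if len(counts) < 5:                   # not enough data to form a baseline
            continue
        mu, sigma = mean(counts), stdev(counts)
        observed = today.get(user, 0)
        if sigma == 0:
            z = float("inf") if observed > mu else 0.0
        else:
            z = (observed - mu) / sigma       # how unusual is today, in standard deviations?
        if z > z_threshold:
            alerts.append((user, observed, round(mu, 1)))
    return alerts

history = {"hr_analyst": [2, 1, 3, 2, 2, 1], "designer": [40, 35, 50, 42, 38, 45]}
today = {"hr_analyst": 120, "designer": 47}   # HR account suddenly pulls 120 design files
print(flag_anomalies(history, today))          # [('hr_analyst', 120, 1.8)]
```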

Shut down broad access: Restricting employee access to specific data and files or eliminating certain files completely are two other strategies to stop internal leaks. This can mitigate the chance of leakage in the short term, but at what cost? Information exchange can inspire creativity and foster a culture of trust and innovation. 

Individualize data and files: Steganography, or the act of concealing information in plain sight, dates back to Ancient Greece and is a promising field for preventing leaks. It employs forensic watermarks to change a piece of content (an email, file, photo, or presentation) in imperceptible ways that identify the content so that sharing can be traced back to a single person. 

In recent times, the film industry was the first to employ steganography to combat piracy and theft of vital content. Movies and shows streamed on Hulu or Netflix are often protected with digital rights management (DRM), which includes audio and video watermarking to ensure that each copy is unique. Consider applying this technology to a company's daily operations, where terabytes of digital communications including potentially sensitive information—emails, presentations, photos, customer data—could be personalised for each individual. 
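
As a toy illustration of the forensic-watermarking idea (not any vendor's actual DRM scheme), the sketch below hides a recipient ID in zero-width characters appended to a text document, so a leaked copy can be traced back to the person it was issued to. The encoding is an assumption made for illustration; real systems perturb pixels or audio imperceptibly rather than appending characters, but the traceability principle is the same.

```python
# Toy forensic watermark: encode a recipient ID in zero-width characters so each
# distributed copy of a text is unique while looking identical to the eye.
ZERO, ONE = "\u200b", "\u200c"   # zero-width space / zero-width non-joiner

def watermark(text: str, recipient_id: int, bits: int = 16) -> str:
    payload = "".join(ONE if (recipient_id >> i) & 1 else ZERO for i in range(bits))
    return text + payload                      # invisible when rendered

def extract(text: str, bits: int = 16) -> int:
    payload = [c for c in text if c in (ZERO, ONE)][-bits:]
    return sum((1 << i) for i, c in enumerate(payload) if c == ONE)

copy_for_alice = watermark("Q3 launch plan: ship on Nov 4.", recipient_id=4321)
print(copy_for_alice == "Q3 launch plan: ship on Nov 4.")  # False, yet visually identical
print(extract(copy_for_alice))                              # 4321 -> traced back to one recipient
```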

One thing is certain, regardless of the approach a business takes: it needs a strategy for dealing with the escalating problem of internal leaks. The danger is genuine, and the costs are excessive. While most employees are trustworthy, it only takes one bad actor to leak information and cause significant damage to an organisation.

The Debate Over Online Anonymity: Safeguarding Free Speech vs. Ensuring Safety

 

Mark Weinstein, an author and privacy expert, recently reignited a long-standing debate about online anonymity, suggesting that social media platforms implement mandatory user ID verification. Weinstein argues that such measures are crucial for tackling misinformation and preventing bad actors from using fake accounts to groom children. While his proposal addresses significant concerns, it has drawn criticism from privacy advocates and cybersecurity experts who highlight the implications for free speech, personal security, and democratic values.  

Yegor Sak, CEO of Windscribe, opposes the idea of removing online anonymity, emphasizing its vital role in protecting democracy and free expression. Drawing from his experience in Belarus, a country known for authoritarian surveillance practices, Sak warns that measures like ID verification could lead democratic nations down a similar path. He explains that anonymity and democracy are not opposing forces but complementary, as anonymity allows individuals to express opinions without fear of persecution. Without it, Sak argues, the potential for dissent and transparency diminishes, endangering democratic values. 

Digital privacy advocate Lauren Hendry Parsons agrees, highlighting how anonymity is a safeguard for those who challenge powerful institutions, including journalists, whistleblowers, and activists. Without this protection, these individuals could face significant personal risks, limiting their ability to hold authorities accountable. Moreover, anonymity enables broader participation in public discourse, as people can freely express opinions without fear of backlash. 

According to Parsons, this is essential for fostering a healthy democracy where diverse perspectives can thrive. While anonymity has clear benefits, the growing prevalence of online harm raises questions about how to balance safety and privacy. Advocates of ID verification argue that such measures could help identify and penalize users engaged in illegal or harmful activities. 

However, experts like Goda Sukackaite, Privacy Counsel at Surfshark, caution that requiring sensitive personal information, such as ID details or social security numbers, poses serious risks. Data breaches are becoming increasingly common, with incidents like the Ticketmaster hack in 2024 exposing the personal information of millions of users. Sukackaite notes that improper data protection can lead to unauthorized access and identity theft, further endangering individuals’ security. 

Adrianus Warmenhoven, a cybersecurity expert at NordVPN, suggests that instead of eliminating anonymity, digital education should be prioritized. Teaching critical thinking skills and encouraging responsible online behavior can empower individuals to navigate the internet safely. Warmenhoven also stresses the role of parents in educating children about online safety, comparing it to teaching basic life skills like looking both ways before crossing the street. 

As discussions about online anonymity gain momentum, the demand for privacy tools like virtual private networks (VPNs) is expected to grow. Recent surveys by NordVPN reveal that more individuals are seeking to regain control over their digital presence, particularly in countries like the U.S. and Canada. However, privacy advocates remain concerned that legislative pushes for ID verification and weakened encryption could result in broader restrictions on privacy-enhancing tools. 

Ultimately, the debate over anonymity reflects a complex tension between protecting individual rights and addressing collective safety. While Weinstein’s proposal aims to tackle urgent issues, critics argue that the risks to privacy and democracy are too significant to ignore. Empowering users through education and robust privacy protections may offer a more sustainable path forward.

Five Common Cybersecurity Errors and How to Avoid Them

 

In the collective memory of modern tech-savvy consumers, the blue screen of death looms large. The screen is a blunt reminder that the device cannot resolve a problem on its own. A crash can simply mean your CPU is degrading after years of use, but a cybersecurity compromise can also cause hardware to malfunction or behave unexpectedly. 

A significant portion of the total amount of theft and illegal conduct that impacts people today is carried out by cybercriminals. According to the FBI's 2023 Internet Crime Report, cybercrime complaints resulted in losses above $12.5 billion. The numbers showed a 10% increase in complaints and a 22% increase in financial losses.

As defenders, we must constantly look for what we have missed and how we can get better. Five common cybersecurity errors are listed below, along with tips on how to prevent them: 

Using simple passwords:  Employing strong passwords to safeguard your sensitive data is a vital part of any effective cybersecurity plan. Strong passwords make it difficult for hackers to crack your credentials; they should mix capital letters, digits, and symbols, and avoid whole dictionary words. Nearly everyone is aware of this aspect of internet use, and many online systems require users to meet these requirements. Even so, 44% of users hardly ever change their passwords (though over a third of internet users refresh theirs monthly), and 13% of Americans use the same password for every online account they create. 
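
To make this first tip concrete, here is a minimal sketch using Python's standard secrets module to generate a random password drawn from mixed character classes. The length and symbol set are illustrative choices; in practice, a password manager that generates and stores a unique password per site is the more reliable fix.

```python
# Minimal sketch: generate a strong random password from mixed character classes.
import secrets
import string

SYMBOLS = "!@#$%^&*-_"

def generate_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + SYMBOLS
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Re-draw until every character class is represented at least once.
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in SYMBOLS for c in candidate)):
            return candidate

print(generate_password())   # e.g. 'R7k!vQ2p_Zd9mXta' -- different on every run
```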

Underestimating the human element: This is a fatal error, because you would be overlooking a factor that contributes to 74% of data breaches. According to the Ponemon Cost of a Data Breach 2022 Report, the top attack vector last year was stolen or compromised credentials; it appears that many of us are falling for scams and disclosing critical information. That's why black hats keep coming back: we provide a consistent, predictable source of funds. To tighten those reins, implement an employee Security Awareness Training (SAT) program and follow the principle of least privilege. 

Invincible thinking:  Small firms frequently fall into this mindset, believing they have nothing of value to an outside attacker. If every attacker were pursuing billions of dollars and government secrets, that might be accurate. But they aren't. Innumerable black hats profit from "small" payouts, compounded dividends, and the sale of credential lists, and any company with users and logins has what they're looking for. Combat the "it can't happen to me" mentality, in organisations of all sizes, with regular risk assessments, pen tests, SAT training, and red teaming, because it can happen to you. 

Not caring enough:   This is exactly where fraudsters want you: unaware and indifferent. It can happen all too easily when SOCs become overwhelmed by the 1,000-plus daily alerts they receive, let alone trying to stay ahead of the game with proactive preventive measures (or even strategy). Threat actors take advantage of overburdened teams. If your resources are stretched thin, the right investment in the right area can alleviate some of the stress, allowing you to do more with less. 

Playing a defensive game:   We've all heard that the best defence is a good offence, and that is true. Cybersecurity often gets a purely defensive reputation, which unfairly underestimates its value. Cybercriminals continuously catch organisations off guard, and all too often the SOCs on the ground have never dealt with anything like them before. They have patched vulnerabilities. They have dodged phishing emails. But an APT, an advanced threat, or a true red-alert cyber incursion may be entirely new territory. Prepare your digital systems and your people for an attack by adopting offensive security techniques such as penetration testing and red teaming before day zero.

Ransomware Gangs Actively Recruiting Pen Testers: Insights from Cato Networks' Q3 2024 Report

 

Cybercriminals are increasingly targeting penetration testers to join ransomware affiliate programs such as Apos, Lynx, and Rabbit Hole, according to Cato Networks' Q3 2024 SASE Threat Report, published by its Cyber Threats Research Lab (CTRL).

The report highlights numerous Russian-language job advertisements uncovered through surveillance of discussions on the Russian Anonymous Marketplace (RAMP). Speaking at an event in Stuttgart, Germany, on November 12, Etay Maor, Chief Security Strategist at Cato Networks, explained: "Penetration testing is a term from the security side of things when we try to reach our own systems to see if there are any holes. Now, ransomware gangs are hiring people with the same level of expertise - not to secure systems, but to target systems."

He further noted, "There's a whole economy in the criminal underground just behind this area of ransomware."

The report details how ransomware operators aim to ensure the effectiveness of their attacks by recruiting skilled developers and testers. Maor emphasized the evolution of ransomware-as-a-service (RaaS), stating, "[Ransomware-as-a-service] is constantly evolving. I think they're going into much more details than before, especially in some of their recruitment."

Cato Networks' team discovered instances of ransomware tools being sold, such as locker source code priced at $45,000. Maor remarked: "The bar keeps going down in terms of how much it takes to be a criminal. In the past, cybercriminals may have needed to know how to program. Then in the early 2000s, you could buy viruses. Now you don't need to even buy them because [other cybercriminals] will do this for you."

AI's role in facilitating cybercrime was also noted as a factor lowering barriers to entry. The report flagged examples like a user under the name ‘eloncrypto’ offering a MAKOP ransomware builder, an offshoot of PHOBOS ransomware.

The report warns of the growing threat posed by Shadow AI—where organizations or employees use AI tools without proper governance. Of the AI applications monitored, Bodygram, Craiyon, Otter.ai, Writesonic, and Character.AI were among those flagged for security risks, primarily data privacy concerns.

Cato CTRL also identified critical gaps in Transport Layer Security (TLS) inspection. Only 45% of surveyed organizations utilized TLS inspection, and just 3% inspected all relevant sessions. This lapse allows attackers to leverage encrypted TLS traffic to evade detection.

In Q3 2024, Cato CTRL noted that 60% of CVE exploit attempts were blocked within TLS traffic. Prominent vulnerabilities targeted included Log4j, SolarWinds, and ConnectWise.

The report is based on the analysis of 1.46 trillion network flows across over 2,500 global customers between July and September 2024. It underscores the evolving tactics of ransomware gangs and the growing challenges organizations face in safeguarding their systems.

New SMTP Cracking Tool for 2024 Sold on Dark Web Sparks Email Security Alarm

 

A new method targeting SMTP (Simple Mail Transfer Protocol) servers, specifically updated for 2024, has surfaced for sale on the dark web, sparking significant concerns about email security and data privacy.

This cracking technique is engineered to bypass protective measures, enabling unauthorized access to email servers. Such breaches risk compromising personal, business, and government communications.

The availability of this tool showcases the growing sophistication of cybercriminals and their ability to exploit weaknesses in email defenses. Unauthorized access to SMTP servers not only exposes private correspondence but also facilitates phishing, spam campaigns, and cyber-espionage.

Experts caution that widespread use of this method could result in increased phishing attacks, credential theft, and malware distribution. "Organizations and individuals must prioritize strengthening email security protocols, implementing strong authentication, and closely monitoring for unusual server activity," they advise.

Mitigating these risks requires consistent updates to security patches, enforcing multi-factor authentication, and using email encryption. The emergence of this dark web listing highlights the ongoing threats cybercriminals pose to critical communication systems.
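
As a small illustration of what "monitoring for unusual server activity" and hardening can look like in practice, the hedged sketch below uses Python's standard smtplib to check whether an SMTP server advertises STARTTLS and authentication before accepting mail. The host name and port are placeholders; run checks like this only against servers you administer.

```python
# Minimal sketch: check whether an SMTP server offers STARTTLS and advertises AUTH.
# Host/port are placeholders; test only servers you are authorized to assess.
import smtplib

def check_smtp_posture(host: str, port: int = 587) -> dict:
    with smtplib.SMTP(host, port, timeout=10) as server:
        server.ehlo()
        offers_starttls = server.has_extn("starttls")
        if offers_starttls:
            server.starttls()
            server.ehlo()                      # re-issue EHLO over the encrypted channel
        offers_auth = server.has_extn("auth")
        return {"starttls": offers_starttls, "auth_advertised": offers_auth}

print(check_smtp_posture("mail.example.com"))  # e.g. {'starttls': True, 'auth_advertised': True}
```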

As attackers continue to innovate, the cybersecurity community emphasizes vigilance and proactive defense strategies to safeguard sensitive information. This development underscores the urgent need for robust email security measures in the face of evolving cyber threats.

Hacker Claims to Publish Nokia Source Code

 

The Finnish telecoms equipment firm Nokia is looking into the suspected release of source code material on a criminal hacking site.

An attacker going by the handle "IntelBroker," who is also the proprietor of the current iteration of BreachForums, revealed on Thursday what he said was a cache of "Nokia-related source code" stolen from a third-party breach. The data consists of two folders: "nokia_admin1" and "nokia_etl_summary-data."

IntelBroker initially stated in a BreachForums post last week that he was selling the code, characterising it as a collection of "SSH keys, source code, RSA keys, Bitbucket logins, SMTP accounts, Webhooks, and hardcoded credentials."

A Nokia spokesperson stated that the company is "aware of reports that an unauthorised actor has alleged to have gained access to certain third-party contractor data, and possibly Nokia data," and that it "will continue to constantly watch the situation." Last Tuesday, the hacker told Hackread that the data would cost $20,000.

IntelBroker told Bleeping Computer that the data came from Nokia's third-party service provider SonarQube. The hacker claimed to have gained access using a default password. SonarQube did not immediately reply to a request for comment.

In 2023, IntelBroker published online data stolen from a health insurance marketplace used by members of Congress, their families, and staffers. Earlier this year, he sparked a probe at the Department of State by uploading online papers purportedly stolen from government contractor Acuity. 

Third-party breaches at major firms are becoming more regular as companies improve their own cyber defences. Earlier this year, a slew of well-known brands, including AT&T, Ticketmaster, Santander Bank, automotive parts supplier Advance Auto Parts, and luxury retailer Neiman Marcus, were hit with breaches caused by a series of attacks on their accounts at cloud-based data warehousing platform Snowflake.

ZKP Emerged as the "Must-Have" Component of Blockchain Security.

 

Zero-knowledge proof (ZKP) has emerged as a critical security component in Web3 and blockchain because it ensures data integrity and increases privacy. It accomplishes this by allowing verification without exposing data. ZKP is employed on cryptocurrency exchanges to validate transaction volumes or values while safeguarding the user's personal information.

In addition to ensuring privacy, it protects against fraud. Zero-knowledge cryptography, a class of algorithms that includes ZKP, enables complex interactions and strengthens blockchain security. Data is safeguarded from unauthorised access and modification while it moves through decentralised networks. 

Blockchain users are frequently asked to certify that they have sufficient funds to execute a transaction, but they may not necessarily want to disclose their whole amount. ZKP can verify that users meet the necessary standards during KYC processes on cryptocurrency exchanges without requiring users to share their paperwork. Building on this, Holonym offered Human Keys to ensure security and privacy in Zero Trust situations. 

Each person is given a unique key that they can use to unlock their security and privacy rights. It strengthens individual rights through robust decentralised protocols and configurable privacy. The privacy-preserving principle applies to several elements of Web3 data security. ZKP involves complex cryptographic validations, and any effort to change the data invalidates the proof. 
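
To make the "verification without exposing data" idea concrete, here is a minimal Schnorr-style proof of knowledge using a Fiat-Shamir challenge: the prover convinces a verifier that it knows the secret behind a public value without revealing the secret itself. The tiny group parameters are purely illustrative, and this sketch is not any particular chain's protocol; production systems use large standardized groups or elliptic curves.

```python
# Minimal sketch of a Schnorr-style zero-knowledge proof (Fiat-Shamir variant).
# Toy parameters only: p = 2q + 1 is a small safe prime, g generates the order-q subgroup.
import hashlib
import secrets

p, q, g = 2039, 1019, 4

def challenge(*vals):
    data = ",".join(str(v) for v in vals).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x):
    y = pow(g, x, p)             # public value derived from the secret x
    r = secrets.randbelow(q)     # random nonce
    t = pow(g, r, p)             # commitment
    c = challenge(g, y, t)       # Fiat-Shamir challenge replaces the verifier's coin flip
    s = (r + c * x) % q          # response; reveals nothing about x on its own
    return y, (t, s)

def verify(y, proof):
    t, s = proof
    c = challenge(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p   # g^s == t * y^c (mod p)

secret = secrets.randbelow(q)
public, zk_proof = prove(secret)
print(verify(public, zk_proof))  # True: the claim checks out, yet the secret never left the prover
```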

Trustless data processing eases smart contract developer work 

Smart contract developers are now working with their hands tied, limited to self-referential opcodes that cannot provide the information required to assess blockchain activities. To that end, the Space and Time platform's emphasis on enabling trustless, multichain data processing and strengthening smart contracts is worth mentioning, since it ultimately simplifies developers' work. 

Their SXT Chain, a ZKP data blockchain, is now live on testnet. It combines decentralised data storage with blockchain verification. Conventional blockchains are focused on transactions; SXT Chain, however, allows for advanced data querying and analysis while preserving data integrity through blockchain technology.

The flagship DeFi generation introduced yield farming and platforms like Aave and Uniswap. The new one includes tokenized real-world assets, blockchain lending with dynamic interest rates, cross-chain derivatives, and increasingly complicated financial products. 

To unlock these Web3 use cases, a crypto-native, trustless query engine is required, one that enables more advanced DeFi by providing smart contracts with the necessary context. Space and Time is helping to offer one by extending Chainlink's aggregated data points with a SQL database, allowing smart contract authors to run SQL queries over any part of Ethereum's history. 

Effective and fair regulatory model 

ZKP allows for selective disclosure, in which just the information that regulators require is revealed. Web3 projects comply with KYC and AML rules while protecting user privacy. ZKP even opens up the possibility of a tiered regulation mechanism based on existing privacy models. Observers can examine the ledger for unusual variations and report any suspect accounts or transactions to higher-level regulators. 

Higher-level regulators reveal particular transaction data. The process is supported by zero-knowledge SNARKs (Succinct Non-interactive Arguments of Knowledge) and attribute-based encryption. These techniques use ZKP to ensure consistency between transaction and regulatory information, preventing the use of fake information to escape monitoring. 

Additionally, ZK solutions let users withdraw funds in a matter of minutes, whereas optimistic rollups take approximately a week to finalise transactions and process withdrawals.

The Growing Concern Regarding Privacy in Connected Cars

 

Connected-car data collection and use raise serious privacy concerns, even though they can improve driving safety, efficiency, and the overall experience. The automotive industry's ability to collect, analyse, and exchange such data is outpacing the legislative frameworks intended to protect individuals. In numerous cases, car owners have no visibility into or control over how their data is used, let alone how it is shared with third parties. 

The FIA European Bureau feels it is time to face these challenges head-on. As advocates for drivers' and car owners' rights, we are calling for clearer, more open policies that restore individuals' control over their data. This is why, in partnership with Privacy4Cars, we are hosting an event called "Driving Data Rights: Enhancing Privacy and Control in Connected Cars" on November 19th in Brussels. The event will bring together policymakers, industry executives, and civil society to explore current gaps in legislation and industry practice, and how we can secure stronger data protection for all. 

Balancing innovation with privacy 

A recent Privacy4Cars report identifies alarming industry patterns, showing that many organisations are not fully compliant with the GDPR. Data transparency, security, and consent mechanisms are often lacking, exposing consumers to data misuse. These findings highlight the critical need for reforms that give individuals more control over their data while ensuring that privacy is not sacrificed in the name of innovation.

The benefits of connected vehicle data are apparent. Data has the potential to transform the automotive industry in a variety of ways, including improved road safety, predictive maintenance, and enhanced driving experiences. However, this should not come at the expense of individual privacy rights. 

As the automobile sector evolves, authorities and industry stakeholders must strike the correct balance between innovation and privacy protection. Stronger enforcement of existing regulations, as well as the creation of new frameworks that suit the unique needs of connected vehicles, are required. Car owners should have a say in how their data is utilised and be confident that it is managed properly. 

Shaping the future of data privacy in cars 

The forthcoming event on November 19th will provide an opportunity to dig deeper into these concerns. Key stakeholders from the European Commission, the automotive industry, and privacy experts will meet to discuss the present legal landscape and what more can be done to protect individuals in this fast-changing environment. 

The agenda includes presentations from Privacy4Cars on the most recent findings on automotive privacy practices, a panel discussion with automotive industry experts, and case studies demonstrating real-world examples of data misuse and third-party access. 

Connected cars are the future of mobility, but that future must be founded on trust and transparency. By giving individuals authority over their personal data, we can build a system that benefits everyone: drivers, manufacturers, and society as a whole. The FIA European Bureau is committed to collaborating with all parties to make this happen.

Balancing Act: Russia's New Data Decree and the Privacy Dilemma


Data Privacy and State Access

Russia's Ministry of Digital Development, Communications, and Mass Media has introduced a draft decree specifying the conditions under which authorities can access staff and customer data from businesses operating in Russia, according to Forbes.

The decree would authorize authorities to demand anonymized personal data of customers and employees from businesses in order to protect the population during emergencies, prevent terrorism, and control the spread of infectious diseases, as well as for economic and social research purposes.

The Proposed Decree

Expected to take effect in September 2025, this draft decree follows amendments to the law On Personal Data, adopted on August 8. This law established a State Information System, requiring businesses and state agencies to upload the personal data of their staff and customers upon request.

The Big Data Association, a nonprofit that includes major Russian companies like Yandex, VK, and Gazprombank, has expressed concerns that the draft decree would permit authorities to request personal data from businesses "for virtually any reason." They warned that this could create legal uncertainties and impose excessive regulatory burdens on companies processing personal data, affecting nearly all businesses and organizations.

Global Context: A Tightrope Walk

Russia is not alone in its quest for greater access to personal data. Countries around the world are grappling with similar issues. For instance, the United States has its own set of laws and regulations under the Patriot Act and subsequent legislation that allows the government to access personal data under certain conditions. Similarly, the European Union’s General Data Protection Regulation (GDPR) provides a framework for data access while aiming to protect individual privacy.

Each country’s approach reflects its unique political, social, and cultural context. However, the core issue remains: finding the right balance between state access and individual privacy.

Ethical and Social Implications

The debate over state access to personal data is not purely legal or political; it is deeply ethical and social. Enhanced state access can lead to improved public safety and national security. For example, during a health crisis like the COVID-19 pandemic, having access to personal data can help in effective contact tracing and monitoring the spread of the virus.

New Tool Circumvents Google Chrome's New Cookie Encryption System

 

A researcher has developed a tool that bypasses Google's new App-Bound encryption cookie-theft defences and extracts saved passwords from the Chrome browser. 

Alexander Hagenah, a cybersecurity researcher, published the tool, 'Chrome-App-Bound-Encryption-Decryption,' after noticing that others had previously identified equivalent bypasses. 

Although the tool delivers what several infostealer operations have already done with their malware, its public availability increases the risk for Chrome users who continue to store sensitive information in their browsers. 

Google launched Application-Bound (App-Bound) encryption in July (Chrome 127) as a new security feature that encrypts cookies using a Windows process with SYSTEM rights. 

The goal was to safeguard sensitive data against infostealer malware, which operates with the logged-in user's privileges, making it impossible to decrypt stolen cookies without first obtaining SYSTEM privileges and potentially setting off security software alarms. 

"Because the App-Bound service is running with system privileges, attackers need to do more than just coax a user into running a malicious app," noted Google in July. "Now, the malware has to gain system privileges, or inject code into Chrome, something that legitimate software shouldn't be doing.” 

However, by September, several infostealer developers had found ways to circumvent the new security feature, allowing their cybercriminal customers to once again siphon and decrypt sensitive data from Google Chrome. 

Google previously stated that the "cat and mouse" game between info-stealer developers and its engineers was to be expected, and that they never assumed that its defence measures would be impenetrable. Instead, they believed that by introducing App-Bound encryption, they could finally set the groundwork for progressively constructing a more robust system. Below is Google's response from the time:

"We are aware of the disruption that this new defense has caused to the infostealer landscape and, as we stated in the blog, we expect this protection to cause a shift in attacker behavior to more observable techniques such as injection or memory scraping. This matches the new behavior we have seen. 

We continue to work with OS and AV vendors to try and more reliably detect these new types of attacks, as well as continuing to iterate on hardening defenses to improve protection against infostealers for our users.”

Microsoft Introduces AI Solution for Erasing Ex from Memories

 


Director Vikramaditya Motwane's new Hindi film, CTRL, tells the story of an emotionally distressed woman who turns to artificial intelligence as she tries to erase her past. The movie clearly focuses on data and privacy, but humans are social animals, and they need someone to listen to them, guide them, or simply be there as they go through life. Mustafa Suleyman, the CEO of Microsoft AI, spoke about this recently in a CNBC interview. 

During the interview, Suleyman explained that the company is engineering AI companions to watch "what we are doing and to remember what we are doing." This, he said, will create a close relationship between AI and humans. Following the broader push toward AI assistants for the workplace, companies such as Microsoft, OpenAI, and Google have announced such solutions.  

Microsoft CEO Satya Nadella has announced that Windows will launch a new feature called Recall. Its semantic search is more than a keyword search: it digs deep into users' digital history to recreate moments from the past, tracing them back to when they happened. Suleyman, Microsoft's AI CEO, also announced that Copilot, the company's artificial intelligence assistant, has been redesigned. 

The revamped Copilot, Microsoft's flagship AI companion, shares Suleyman's vision of an AI companion that will change the way users interact with technology in their day-to-day lives. After joining Microsoft earlier this year, when the company strategically hired key staff from Inflection AI, Suleyman wrote a 700-word memo describing what he refers to as a "technological paradigm shift." 

Copilot has been redesigned to create an AI experience that is more personalized and supportive, similar to Inflection AI's Pi product, which adapts to users' requirements over time. In an interview reported by The Wall Street Journal, Microsoft CEO Satya Nadella explained that "Recall is not just about documents." 

A sophisticated AI model embedded directly in the device takes periodic screenshots of users' activity and feeds the collected data into an on-device database that analyzes these activities. Using neural processing technology, all images and interactions become searchable, even allowing searches within the images themselves. The feature has raised concerns, with Elon Musk warning in a characteristic post that it is akin to an episode of Black Mirror and saying he would be turning the "feature" off. 

OpenAI has introduced the ChatGPT desktop application, now powered by the latest GPT-4o model, which represents a significant advancement in artificial intelligence technology. This AI assistant offers real-time screen-reading capabilities, positioning itself as an indispensable support tool for professionals in need of timely assistance. Its enhanced functionality goes beyond merely following user commands; it actively learns from the user's workflow, adapts to individual habits, and anticipates future needs, even taking proactive actions when required. This marks a new era of intelligent and responsive AI companions. 

Jensen Huang also highlighted the advanced capabilities of AI Companion 2.0, emphasizing that this system does not just observe and support workflows—it learns and evolves with them, making it a more intuitive and helpful partner for users in their professional endeavors. Meanwhile, Zoom has introduced Zoom Workplace, an AI-powered collaboration platform designed to elevate teamwork and productivity in corporate environments. The platform now offers over 40 new features, which include updates to the Zoom AI Companion for various services such as Zoom Phone, Team Chat, Events, Contact Center, and the "Ask AI Companion" feature. 

The AI Companion functions as a generative AI assistant seamlessly integrated throughout Zoom’s platform, enhancing productivity, fostering stronger collaboration among team members, and enabling users to refine and develop their skills through AI-supported insights and assistance. The rapid advancements in artificial intelligence continue to reshape the technological landscape, as companies like Microsoft, OpenAI, and Google lead the charge in developing AI companions to support both personal and professional endeavors.

These AI solutions are designed to not only enhance productivity but also provide a more personalized, intuitive experience for users. From Microsoft’s innovative Recall feature to the revamped Copilot and the broad integration of AI companions across platforms like Zoom, these developments mark a significant shift in how humans interact with technology. While the potential benefits are vast, these innovations also raise important questions about data privacy, human-AI relationships, and the ethical implications of such immersive technology. 

As AI continues to evolve and become a more integral part of everyday life, the balance between its benefits and the concerns it may generate will undoubtedly shape the future of AI integration across industries. Microsoft and its competitors remain at the forefront of this technological revolution, striving to create tools that are not only functional but also responsive to the evolving needs of users in a rapidly changing digital world.

Growing Focus on Data Privacy Among GenAI Professionals in 2024

 


Recent reports published by Deloitte, highlighting the significance of data privacy as it pertains to generative artificial intelligence (GenAI), have been widely cited. The survey found a significant increase in professionals' concerns about data privacy: only 22% ranked it as a top concern at the beginning of 2023, but the figure rose to 72% in 2024. 

Technology is advancing at an exponential rate, and with it comes growing awareness of its potential risks. According to the new Deloitte report, generative AI has driven a surge in data privacy concerns across several industries: only 22% of professionals ranked it among their top three concerns last year, rising to 72% this year. 

There was also strong concern about data provenance and transparency, with 47% and 40% of professionals, respectively, placing them among their top three ethical GenAI concerns for this year. By contrast, only 16% of respondents were concerned about job displacement. Staff are increasingly curious about how AI technology operates, especially when it handles sensitive data. 

Almost half of the security professionals surveyed by HackerOne in September believe AI is risky, with many seeing leaks of training data as a threat to their networks' security. Notably, 78% of business leaders ranked "safe and secure" among their top three ethical technology principles, a 37% increase from 2023, which shows how important security has become to businesses today.

The findings come from Deloitte's 2024 "State of Ethics and Trust in Technology" report, which surveyed over 1,800 business and technical professionals worldwide and asked them to rate the ethical principles they apply to technological processes, and specifically to their use of GenAI. As they guide the adoption of generative AI, technology leaders need to carefully examine the talent needs of their organizations, and ethical considerations belong on that checklist as well. 

The report highlights the effectiveness of GenAI in eliminating the "expertise barrier": more people will be able to make use of their data more easily and cost-effectively, according to Sachin Kulkarni, managing director of risk and brand protection at Deloitte. There may be a benefit to this, he noted, though it may also bring an increased risk of data leaks. 

Professionals have also expressed concern about the effects of generative AI on transparency, data provenance, intellectual property ownership, and hallucinations. Although job displacement is often assumed to be a top concern, only 16% of respondents listed it as one. Assessing emerging technology categories, business and IT professionals concluded that cognitive technologies, which include large language models, artificial intelligence, neural networks, and generative AI, pose the greatest ethical challenges.  

This category led other technology verticals, including virtual reality, autonomous vehicles, and robotics, by a significant margin. At the same time, respondents considered cognitive technologies the most likely to bring about social good in the future. A Flexential survey published earlier this month likewise found that many executives, given the heavy reliance on data, worry that generative AI tools can increase cybersecurity risks by extending the attack surface. 

Deloitte's annual report also found that the percentage of professionals reporting internal use of GenAI grew by 20% year over year, and 94% of respondents said they had incorporated it into their organization's processes in some way. Nevertheless, most respondents indicated that these technologies are either still in the pilot phase or limited in their usage, with only 12% saying they are used extensively. 

Gartner research published last year also found that about 80% of GenAI projects fail to make it past proof of concept, due to a lack of resources. Europe has been affected by the recent EU Artificial Intelligence Act: 34% of European respondents reported that their organizations have taken action over the past year to change their use of AI to meet the Act's requirements. 

According to the survey results, however, the Act's impact is more widespread, with 26% of respondents from South Asia and 16% from North and South America changing their organizations' use of AI because of it. The survey also revealed that 20% of U.S.-based respondents had altered the way their organization operates as a result of the U.S. executive order on AI, as had 25% of South Asian, 21% of South American, and 12% of European respondents. 

The report's authors explain that "cognitive technologies such as artificial intelligence (AI) have the potential to provide society with the greatest benefits, but are also the most vulnerable to misuse." The accelerated adoption of GenAI is overtaking organizations' capacity to govern it effectively. GenAI tools can help businesses in a range of areas, from choosing which use cases to apply them to, to quality assurance, to implementing ethical standards. 

Companies should prioritize both of these areas. Although artificial intelligence is widely used, policymakers want to ensure organizations do not run into trouble with its use, particularly where legislation is concerned. Accordingly, 34% of respondents cited regulatory compliance as their most important reason for implementing ethics policies and guidelines, while regulatory penalties topped the list of concerns about failing to comply with them. 

A new piece of EU legislation, the Artificial Intelligence Act, entered into force on August 1. The Act is intended to ensure that artificial intelligence systems used in high-risk environments are safe, transparent, and ethical. Companies that do not comply face financial penalties ranging from €7.5 million ($8.1 million) or 1.5% of global turnover up to €35 million ($38 million) or 7% of global turnover, depending on the violation. 

Over a hundred companies, Amazon, Google, Microsoft, and OpenAI among them, have already signed the EU Artificial Intelligence Pact, volunteering to begin implementing the Act's requirements ahead of the legal deadlines. These actions demonstrate a commitment to the responsible deployment of artificial intelligence and help the signatories avoid future legal challenges. 

The United States issued a similar executive order in October 2023, with broad guidelines on protecting military, civil, and personal privacy and safeguarding government agencies, while fostering AI innovation and competition across the country. Although it is not a law, many companies operating in the U.S. have made policy changes to keep pace with the regulatory direction and to meet public expectations around the privacy and security of AI.

Ethics and Tech: Data Privacy Concerns Around Generative AI


The tech industry is embracing generative AI, but the conversation around data privacy has become increasingly important. The recent "State of Ethics and Trust in Technology" report by Deloitte highlights the pressing ethical considerations that accompany the rapid adoption of these technologies. The report notes that 30% of organizations have adjusted new AI projects, and 25% have modified existing ones, in response to the EU AI Act.

The Rise of Generative AI

54% of professionals believe that generative AI poses the highest ethical risk among emerging technologies. Additionally, 40% of respondents identified data privacy as their top concern. 

Generative AI, which includes technologies like GPT-4, DALL-E, and other advanced machine learning models, has shown immense potential in creating content, automating tasks, and enhancing decision-making processes. 

These technologies can generate human-like text, create realistic images, and even compose music, making them valuable tools across industries such as healthcare, finance, marketing, and entertainment.

However, the capabilities of generative AI also raise significant data privacy concerns. As these models require vast amounts of data to train and improve, the risk of mishandling sensitive information increases. This has led to heightened scrutiny from both regulatory bodies and the public.

Key Data Privacy Concerns

Data Collection and Usage: Generative AI systems often rely on large datasets that may include personal and sensitive information. The collection, storage, and usage of this data must comply with stringent privacy regulations such as GDPR and CCPA. Organizations must ensure that data is anonymized and used ethically to prevent misuse (see the pseudonymization sketch after this list).

Transparency and Accountability: One of the major concerns is the lack of transparency in how generative AI models operate. Users and stakeholders need to understand how their data is being used and the decisions being made by these systems. Establishing clear accountability mechanisms is crucial to build trust and ensure ethical use.

Bias and Discrimination: Generative AI models can inadvertently perpetuate biases present in the training data. This can lead to discriminatory outcomes, particularly in sensitive areas like hiring, lending, and law enforcement. Addressing these biases requires continuous monitoring and updating of the models to ensure fairness and equity.

Security Risks: The integration of generative AI into various systems can introduce new security vulnerabilities. Cyberattacks targeting AI systems can lead to data breaches, exposing sensitive information. Robust security measures and regular audits are essential to safeguard against such threats.
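
Picking up the anonymization point from the first concern above, the sketch below replaces direct identifiers with salted hashes so records can still be joined and analyzed without exposing who they describe. The field names and scheme are assumptions for illustration; genuine GDPR-grade anonymization also has to account for quasi-identifiers and re-identification risk.

```python
# Minimal sketch: pseudonymize direct identifiers with a salted hash before a
# dataset is used for analytics or model training. Field names are hypothetical.
import hashlib
import secrets

SALT = secrets.token_bytes(16)        # keep this secret and store it separately from the data

def pseudonymize(record: dict, pii_fields=("name", "email", "phone")) -> dict:
    cleaned = dict(record)
    for field in pii_fields:
        if field in cleaned:
            digest = hashlib.sha256(SALT + str(cleaned[field]).encode()).hexdigest()
            cleaned[field] = digest[:16]   # stable token, not reversible without the salt
    return cleaned

record = {"name": "Jane Doe", "email": "jane@example.com", "plan": "premium"}
print(pseudonymize(record))
# identifiers replaced with stable tokens; non-PII fields such as 'plan' are kept for analysis
```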

Ethical Considerations and Trust

80% of respondents are required to complete mandatory technology ethics training, marking a 7% increase since 2022.  Nearly three-quarters of IT and business professionals rank data privacy among their top three ethical concerns related to generative AI:

  • Developing and implementing ethical frameworks for AI usage is crucial. These frameworks should outline principles for data privacy, transparency, and accountability, guiding organizations in the responsible deployment of generative AI.
  • Engaging with stakeholders, including employees, customers, and regulatory bodies, is essential to build trust. Open dialogues about the benefits and risks of generative AI can help in addressing concerns and fostering a culture of transparency.
  • The dynamic nature of AI technologies necessitates continuous monitoring and improvement. Regular assessments of AI systems for biases, security vulnerabilities, and compliance with privacy regulations are vital to ensure ethical use.

Complexity: Research Offers Solution for Healthcare Security Amid Rising Cyberattacks


In May, Ascension, a healthcare provider with a network of 140 hospitals across the U.S., suffered a major cyberattack that disrupted its clinical operations for almost a month. Experts traced the problem to ransomware that had entered through an employee's computer. 

Healthcare: Juicy Target for Criminals

Threat actors see healthcare systems as lucrative targets for cybercrime because they hold crucial financial, health, and personal data. A 2023 survey of health and IT professionals revealed that 88% of organizations had suffered an average of around 40 attacks in the past year. 

Complexity: Flaw in IT System

One major flaw is the rise of complexity in IT systems, says Hüseyin Tanriverdi, associate professor of information, risk, and operations management at Texas McCombs. He attributes it to years of mergers and acquisitions that have created large-scale multi-hospital systems. 

After mergers, healthcare providers often fail to standardize their technology and security operations, which creates major complexity across the resulting health systems: different IT systems, different care processes, and different command structures. 

But his new research shows complexity can also offer solutions to these issues. A "good kind of complexity," Tanriverdi believes, can support communication across different systems, governance structures, and care processes, and help combat cyber incidents.

Understanding the Complex vs. Complicated

The research team distinguishes two similar-sounding IT terms that relate to the problem. "Complicatedness" describes an abundance of elements that interconnect in a system and share information in structured ways. "Complexity" arises when many elements interconnect and share information in unstructured ways, as when systems are integrated after a merger or acquisition. 

Tanriverdi argues that complicated systems are preferable: although they are difficult, their structure makes them controllable. That is not the case with complex systems, which are unstructured networks. He found that healthcare systems became more vulnerable as they grew more complex, with the most complex ones 29% more likely than average to be hit. 

Solution for Better Healthcare Security

Complex systems offer hackers more data transfer points to attack, and a higher risk for human errors, making it a bigger problem.

The solution lies in a more centralized approach to handling data. "With fewer access points and simplified and hardened cybersecurity controls, unauthorized parties are less likely to gain unauthorized access to patient data," says Tanriverdi. "Technology reduces cybersecurity risks if it is organized and governed well."

Construction Firms Targeted in Brute Force Assaults on Accounting Software

 

Unidentified hackers have targeted construction firms using Foundation accounting software, security experts revealed earlier this week. 

According to cybersecurity firm Huntress, the hackers hunt for publicly available Foundation installations on the internet and then test combinations of default usernames and passwords that allow for administrative access.

Huntress claimed it has detected active software breaches from organisations in the plumbing, concrete, and heating, ventilation, and air conditioning (HVAC) industries. The researchers did not specify whether the attacks were effective or what their purpose was. 

Foundation Software, the platform's Ohio-based developer, stated that it was working with Huntress to clarify some of the report's information. 

“The event potentially impacted a small subset of on-premise FOUNDATION users. It did not at all impact the bulk of our accounting users, which are under our secure, cloud-based [software-as-a-service] offering. It also did not impact our internal systems or any of our other product offerings through our subsidiary companies,” Foundation stated. 

The Huntress analysts stated they noticed the malicious behaviour targeting Foundation last week. On one host, the researchers discovered approximately 35,000 brute-force login attempts against the Microsoft SQL Server (MSSQL) used by the organisation to manage its database operations. 

Typically, such databases are kept private and secured behind a firewall or virtual private network (VPN), but Foundation "features connectivity and access by a mobile app," researchers noted. This means a specific TCP port, used to regulate and identify network traffic on a computer, may be left open to the public, allowing direct access to the Microsoft SQL database. 
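
As a small defensive illustration of the exposure Huntress describes, the hedged sketch below checks whether a host answers on the default Microsoft SQL Server TCP port (1433 is an assumption; deployments can use other ports). Run checks like this only against systems you are authorized to test.

```python
# Minimal sketch: check whether a Microsoft SQL Server port is reachable from outside.
# Host name is a placeholder; test only hosts you own or are authorized to assess.
import socket

def port_is_reachable(host: str, port: int = 1433, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True                      # something answered on the database port
    except OSError:
        return False                         # closed, filtered, or unreachable

if port_is_reachable("erp.example-contractor.com"):
    print("MSSQL port answers externally: move it behind a VPN/firewall and rotate credentials.")
else:
    print("MSSQL port not reachable externally.")
```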

According to the report, Foundation users often used default, easy-to-guess passwords to protect high-privilege database accounts.

“As a result of not following recommendations and security best practices that were provided (one example being not resetting the default credentials), this small subset of on-premise users might face possible vulnerabilities,” Foundation noted. “We have been communicating and providing technical support to these users to mitigate this.” 

Huntress stated it detected 500 hosts running the Foundation software, roughly 33 of which were publicly exposed with unchanged default credentials. 

“In addition to notifying those where we saw suspicious activity, we also sent out a precautionary advisory notification to any of our customers and partners who have the FOUNDATION software in their environment,” Huntress concluded.

Here's How to Remove Malware From Your Chromebook

 

Imagine this: your Chromebook crashes just before you click "Save" after hours of work on a project. Or imagine you want to watch a series, but the app keeps crashing, making it impossible to enjoy your favourite show. If these situations sound familiar, malware may have infected your Chromebook. 

Malware on your Chromebook can compromise your financial and personal information and cost you hours of lost work. If you think your Chromebook is infected, it is important to act quickly. 

In this article, we'll walk you through how to tell whether your Chromebook is infected and cover the simplest method of removal: reputable antivirus software. We'll also go over key precautions you should take to protect your Chromebook from future malware threats. 

Can malware infect Chromebooks? 

As Chromebooks become more popular, fraudsters hunt for new ways to infect them and steal sensitive information for financial gain or identity theft. And, while Google's sophisticated ecosystem actively protects its users, no system is completely immune to cyber-attacks. 

Viruses, for example, are a common type of malware that hides malicious code inside otherwise normal downloads. They activate when you open an infected file, and some can download and install themselves automatically when you click a malicious link. Once installed, a virus can damage your system and prevent you from using your device or network.

The good news is that it is nearly impossible to catch a true virus on Chrome OS. Because the operating system does not allow the installation of traditional executable software, it is one of the most secure operating systems available today. 

The bad news is that Chromebooks are still vulnerable to some forms of malware, such as search hijackers (search redirection), malicious browser extensions, adware, spyware, phishing schemes, and downloads from unverified websites. 

Prevention tips

Chromebooks are vulnerable to several forms of malware, even though viruses rarely affect them, as mentioned above. Google recommends the following best practices to maintain a secure Chromebook experience: 

Stay updated: Keep your Chrome OS and applications up to date. Regular updates often have critical security patches. 

Use caution with extensions and apps: Read reviews and only use reliable browser extensions and apps from the Chrome Web Store or Google Play. 

Avoid phishing scams: Exercise caution while accessing suspicious websites or emails that ask for personal information. 

Consider security software: Although Chromebooks have built-in security safeguards, adding an extra layer of protection with reputable security software can provide additional peace of mind. 

As Chromebooks gain popularity as a low-cost and efficient alternative to traditional laptops, it is critical to understand their risks, particularly those related to malware. Chrome OS, with its web-based applications and regular updates, offers strong security, but it is still vulnerable to different types of malware such as search hijackers, adware, and spyware.

Ford’s Latest Patent: A Step Toward High-Tech Advertising or Privacy Invasion?


Among the patents filed recently is one from Ford for a system that gathers driver data to personalise in-car advertisements, and it has raised significant privacy concerns. The proposed system could collect everything from a car's GPS location to driving habits and even conversations inside the vehicle, with the aim of delivering targeted ads in real time. Privacy advocates have questioned the level of surveillance such a system would introduce.

While Ford notes that patenting a technology does not mean it will actually be implemented, the idea raises red flags. It highlights the risks of gathering vast amounts of driver data and the privacy implications of targeting consumers while they are behind the wheel.

What Does Ford's Patent Explain?

The patent describes how the system would gather and use information to deliver targeted ads:

1. GPS Location: The system would identify where the car is and choose advertisements based on nearby businesses. If a driver is close to a fast-food restaurant, for example, they might see an ad for that chain on the car's infotainment system.

2. Driving Situations: Ads could also be targeted based on traffic conditions and driving speed. When a driver is stuck in heavy traffic, for example, the system might display ads for entertainment such as audiobooks or podcasts.

3. Historical Data: Ads could also draw on past behaviour, such as the places a driver has previously visited or the kind of music they prefer.

4. In-Car Dialogue: The most contentious part of the patent is that the system could listen to conversations taking place inside the car, whether between passengers or family members. If they are discussing a grocery run, for instance, the system could automatically suggest nearby supermarkets.

Such data collection, particularly of in-car conversations, has been widely criticised as overly intrusive and a serious privacy concern.
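To make the patent's description more concrete, the following is a deliberately simplified, hypothetical sketch of how an ad selector might combine a few of these signals. It is not based on Ford's actual design; the data structure, rules, and ad copy are invented purely for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DrivingContext:
    """Hypothetical snapshot of the signals described in the patent."""
    nearby_businesses: List[str]   # derived from GPS location
    traffic_level: str             # e.g. "light" or "heavy"
    speed_kmh: float
    recent_topics: List[str]       # e.g. keywords picked out of in-car dialogue

def select_ad(ctx: DrivingContext) -> Optional[str]:
    """Pick one ad (or none) from the current context, most specific rule first."""
    # In-car dialogue: a mentioned errand overrides purely location-based ads.
    if "groceries" in ctx.recent_topics and "supermarket" in ctx.nearby_businesses:
        return "Supermarket two blocks ahead"
    # Driving situation: heavy, slow traffic favours audio content over destinations.
    if ctx.traffic_level == "heavy" and ctx.speed_kmh < 20:
        return "Try an audiobook while you wait"
    # GPS location: otherwise fall back to the nearest relevant business.
    if "fast_food" in ctx.nearby_businesses:
        return "Fast-food restaurant at the next exit"
    return None  # show nothing rather than an irrelevant ad

if __name__ == "__main__":
    ctx = DrivingContext(
        nearby_businesses=["fast_food", "supermarket"],
        traffic_level="heavy",
        speed_kmh=12.0,
        recent_topics=["groceries"],
    )
    print(select_ad(ctx))  # prints: Supermarket two blocks ahead
```

Even in this toy version, it is clear how much personal context such a selector would need, which is exactly why the dialogue-derived signals alarm privacy advocates.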

Privacy Concerns and a Backlash

Many privacy advocates view this patent as a threat. Recording in-car conversations, even for the purpose of delivering ads, would be a serious invasion of privacy. Monitoring at that level, critics argue, could enable manipulative advertising and raises further questions about how the data would be used and protected.

"It's getting a little too intimate," says Daryl Killian, an automotive influencer discussing the issue. "We're so used to stuff popping up on our devices based on what we're doing online. For a car to be listening and sharing conversations is a bit much. It will send those consumers away who don't like the fact that companies collect this much data already."

There are also safety concerns: a steady stream of ads on the dashboard could distract drivers, particularly in heavy traffic.

Ford's Position and General Industry Trends

Ford has said that filing a patent is a routine part of its process and does not mean the technology will be developed. The company describes the patent as part of its exploration of new ideas, not a sign of imminent implementation.

Ford has dabbled in personalised advertising before, with technology that would display digital versions of roadside signs on the windshield as drivers pass them. Nor is it alone: General Motors and others have experimented with similar technology, suggesting an industry-wide shift toward data-driven, personalised in-car experiences.

The Dynamic Between Innovation and Privacy

Personalised in-car technology has exciting potential in applications such as tailored navigation and real-time traffic updates, but it must be balanced with strong privacy protections. Giving drivers the ability to opt out of data collection and advertising is crucial to maintaining user trust.

There are several concerns that must be grappled with as this technology continues to evolve:

1. Transparency: Drivers should be told what data is being collected and for what purpose, with clear options to control or opt out of that collection.

2. Data Security: As more personal data is collected, robust security measures are crucial to protect against unauthorised access or breaches.

3. Regulatory Oversight: Governments may need clearer regulations on how driver data is collected, used, and secured in order to better protect consumer privacy.

Ultimately, while such innovations promise convenience through personalised advertising, they must be balanced with adequate privacy protections. Car manufacturers will have to ensure that new technologies improve the driving experience without eroding user trust.