Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label User Privacy. Show all posts

Hyundai AutoEver America Breach Exposes Employee SSNs and Driver’s License Data

 

Hyundai AutoEver America (HAEA), an IT services affiliate of Hyundai Motor Group, has confirmed a data breach that compromised sensitive personal information, including Social Security Numbers (SSNs) and driver’s licenses, of approximately 2,000 individuals, mostly current and former employees. The breach occurred between February 22 and March 2, 2025, with the company discovering the intrusion and launching an investigation on March 1.

HAEA specializes in providing IT consulting, managed services, and digital solutions for Hyundai and Kia affiliates, covering vehicle telematics, over-the-air updates, vehicle connectivity, and embedded systems, as well as business systems and digital manufacturing platforms. The company’s IT environment supports 2 million users and 2.7 million vehicles, with a workforce of 5,000 employees.

The notification to affected individuals revealed that the breach exposed names, while the Massachusetts government portal listed additional information such as SSNs and driver’s licenses. It is still unclear whether customers or users were affected besides employees, and the exact breakdown of impacted groups remains unspecified. The company worked with external cybersecurity experts and law enforcement to investigate the incident, confirm containment, and identify the potentially affected data.

At the time of the report, no ransomware groups had claimed responsibility for the attack, and the perpetrators are unknown. This incident adds to a series of cybersecurity challenges faced by Hyundai and its affiliates in recent years, including previous ransomware attacks and data breaches affecting operations in Europe and exposing owner data in Italy and France. 

Additionally, security researchers previously identified significant privacy and security issues with Hyundai’s companion app, which allowed unauthorized remote control of vehicles, and vulnerabilities in built-in anti-theft systems.

HAEA has not yet released a full public statement with details about the breach, mitigation steps, or future security improvements. The limited information available highlights the need for robust security protocols, especially for organizations handling large volumes of sensitive personal and automotive data. The breach serves as a reminder of the ongoing risks facing major automotive and IT service providers amid the growing threat landscape for digital infrastructure.

ASF Rejects Akira Breach Claims Against Apache OpenOffice

 

Apache OpenOffice, an open-source office suite project maintained by the Apache Software Foundation (ASF), is currently disputing claims of a significant data breach allegedly perpetrated by the Akira ransomware gang. 

On October 30, 2025, Akira published a post on its data leak site asserting that it had compromised Apache OpenOffice and exfiltrated 23 GB of sensitive corporate documents, including employee personal information—such as home addresses, phone numbers, dates of birth, driver’s licenses, social security numbers, and credit card data—as well as financial records and internal confidential files. The group further claimed it would soon release these documents publicly.

Responding publicly, the ASF refutes the claims, stating it has no evidence that its systems have been compromised or that a breach has occurred. According to ASF representatives, the data types described by Akira do not exist within the Foundation’s infrastructure. Importantly, the ASF points out the open-source nature of the project: there are no paid employees associated with Apache OpenOffice or the Foundation, and therefore, sensitive employee information as specified by Akira is not held by ASF. 

All development activities, bug tracking, and feature requests for the software are managed openly and transparently, primarily through public developer mailing lists. Thus, any internal reports or application issues cited in the alleged leak are already available in the public domain.

ASF further emphasized its strong commitment to security and clarified that, as of November 4, 2025, it had received no ransom demands directed at either the Foundation or the OpenOffice project. The Foundation has initiated an internal investigation to fully assess the veracity of Akira’s claims but, so far, has found no supporting evidence. 

It has not contacted law enforcement or external cybersecurity experts, signaling that the incident is being treated as a claim without substantiation. As of the time of publication, none of the purported stolen data has surfaced on the Akira leak site, leaving ASF’s assertion unchallenged.

This dispute highlights the increasingly common tactic among ransomware operators of leveraging publicity and unsubstantiated claims to pressure organizations, even when the technical evidence does not support their assertions. For now, ASF continues to reassure users and contributors that Apache OpenOffice remains uncompromised, and stresses the transparency inherent in open-source development as a key defense against misinformation and data exfiltration claims.

Bluetooth Security Risks: Why Leaving It On Could Endanger Your Data

 

Bluetooth technology, widely used for wireless connections across smartphones, computers, health monitors, and peripherals, offers convenience but carries notable security risks—especially when left enabled at all times. While Bluetooth security and encryption have advanced over decades, the protocol remains exposed to various cyber threats, and many users neglect these vulnerabilities, putting personal data at risk.

Common Bluetooth security threats

Leaving Bluetooth permanently on is among the most frequent cybersecurity oversights. Doing so effectively announces your device’s continuous availability to connect, making it a target for attackers. 

Threat actors exploit Bluetooth through methods like bluesnarfing—the unauthorized extraction of data—and bluejacking, where unsolicited messages and advertisements are sent without consent. If hackers connect, they may siphon valuable information such as banking details, contact logs, and passwords, which can subsequently be used for identity theft, fraudulent purchases, or impersonation.

A critical issue is that data theft via Bluetooth is often invisible—victims receive no notification or warning. Further compounding the problem, Bluetooth signals can be leveraged for physical tracking. Retailers, for instance, commonly use Bluetooth beacons to trace shopper locations and gather granular behavioral data, raising privacy concerns.

Importantly, Bluetooth-related vulnerabilities affect more than just smartphones; they extend to health devices and wearables. Although compromising medical Bluetooth devices such as pacemakers or infusion pumps is technically challenging, targeted attacks remain a possibility for motivated adversaries.

Defensive strategies 

Mitigating Bluetooth risks starts with turning off Bluetooth in public or unfamiliar environments and disabling automatic reconnection features when constant use (e.g., wireless headphones) isn’t essential. Additionally, set devices to ‘undiscoverable’ mode as a default, blocking unexpected or unauthorized connections.

Regularly updating operating systems is vital, since outdated devices are prone to exploits like BlueBorne—a severe vulnerability allowing attackers full control over devices, including access to apps and cameras. Always reject unexpected Bluetooth pairing requests and periodically review app permissions, as many apps may exploit Bluetooth to track locations or obtain contact data covertly. 

Utilizing a Virtual Private Network (VPN) enhances overall security by encrypting network activity and masking IP addresses, though this measure isn’t foolproof. Ultimately, while Bluetooth offers convenience, mindful management of its settings is crucial for defending against the spectrum of privacy and security threats posed by wireless connectivity.

UK Digital ID Faces Security Crisis Ahead of Mandatory Rollout

 

The UK’s digital ID system, known as One Login, triggered major controversy in 2025 due to serious security vulnerabilities and privacy concerns, leading critics to liken it to the infamous Horizon scandal. 

One Login is a government-backed identity verification platform designed for access to public services and private sector uses such as employment verification and banking. Despite government assurances around its security and user benefits, public confidence plummeted amid allegations of cybersecurity failures and rushed implementation planned for November 18, 2025.

Critics, including MPs and cybersecurity experts, revealed that the system failed critical red-team penetration tests, with hackers gaining privileged access during simulated cyberattacks. Further concerns arose over development practices, with portions of the platform built by contractors in Romania on unsecured workstations without adequate security clearance. The government missed security deadlines, with full compliance expected only by March 2026—months after the mandatory rollout began.

This “rollout-at-all-costs” approach amidst unresolved security flaws has created a significant trust deficit, risking citizens’ personal data, which includes sensitive information like biometrics and identification documents. One Login collects comprehensive data, such as name, birth date, biometrics, and a selfie video for identity verification. This data is shared across government services and third parties, raising fears of surveillance, identity theft, and misuse.

The controversy draws a parallel to the Horizon IT scandal, where faulty software led to wrongful prosecutions of hundreds of subpostmasters. Opponents warn that flawed digital ID systems could cause similar large-scale harms, including wrongful exclusions and damaged reputations, undermining public trust in government IT projects.

Public opposition has grown, with petitions and polls showing more people opposing digital ID than supporting it. Civil liberties groups caution against intrusive government tracking and call for stronger safeguards, transparency, and privacy protections. The Prime Minister defends the program as a tool to simplify life and reduce identity fraud, but critics label it expensive, intrusive, and potentially dangerous.

In conclusion, the UK’s digital ID initiative stands at a critical crossroads, facing a crisis of confidence and comparisons to past government technology scandals. Robust security, oversight, and public trust are imperative to avoid a repeat of such failures and ensure the system serves citizens without compromising their privacy or rights.

MANGO Marketing Vendor Breach Exposes Customer Contact Details

 

MANGO, the Spanish fashion retailer, has disclosed a data breach affecting customer information due to a cyberattack on one of its external marketing service providers. The incident, revealed on October 14, 2025, involved unauthorized access to personal data used in marketing campaigns, prompting the company to notify affected customers directly.

The compromised data includes customers' first names, country of residence, postal codes, email addresses, and telephone numbers. Notably, sensitive details such as last names, banking information, credit card data, government-issued IDs, passports, and account credentials were not accessed, reducing the risk of financial fraud. Despite this, the exposed information could be leveraged by threat actors for targeted phishing campaigns, where attackers impersonate legitimate entities to trick individuals into revealing further personal or financial data.

MANGO emphasized that its corporate infrastructure and internal IT systems remained unaffected, with no disruption to business operations. The company confirmed that all security protocols were activated immediately upon detection of the breach at the third-party vendor, although the name of the compromised marketing partner has not been disclosed.

In response, MANGO has reported the incident to the Spanish Data Protection Agency (AEPD) and other relevant regulatory authorities, in compliance with data protection regulations. To assist concerned customers, the company has established a dedicated support channel, including an email address (personaldata@mango.com) and a toll-free hotline (900 150 543), where individuals can seek clarification and guidance regarding potential exposure.

Founded in 1984 and headquartered in Barcelona, MANGO operates over 2,800 physical and e-commerce stores across 120 countries. It employs approximately 16,300 people and generates an annual revenue of €3.3 billion, with nearly 30% derived from online sales. While the breach does not impact core business systems, the incident highlights the growing risks associated with third-party vendors in digital supply chains, particularly in the retail and fashion sectors that rely heavily on external marketing and customer engagement platforms.

At the time of reporting, no ransomware group has claimed responsibility for the attack, and the identity of the attackers remains unknown. Local media outlets reached out to MANGO for further details on the scope and technical aspects of the breach but had not received a response by publication.

Windows 10 Support Termination Leaves Devices Vulnerable

 

Microsoft has officially ended support for Windows 10, marking a major shift impacting hundreds of millions of users worldwide. Released in 2015, the operating system will no longer receive free security updates, bug fixes, or technical assistance, leaving all devices running it vulnerable to exploitation. This decision mirrors previous end-of-life events such as Windows XP, which saw a surge in cyberattacks after losing support.

Rising security threats

Without updates, Windows 10 systems are expected to become prime targets for hackers. Thousands of vulnerabilities have already been documented in public databases like ExploitDB, and several critical flaws have been actively exploited. 

Among them are CVE-2025-29824, a “use-after-free” bug in the Common Log File System Driver with a CVSS score of 7.8; CVE-2025-24993, a heap-based buffer overflow in NTFS marked as “known exploited”; and CVE-2025-24984, leaking NTFS log data with the highest EPSS score of 13.87%. 

These vulnerabilities enable privilege escalation, code execution, or remote intrusion, many of which have been added to the U.S. CISA’s Known Exploited Vulnerabilities (KEV) catalog, signaling the seriousness of the risks.

Limited upgrade paths

Microsoft recommends that users migrate to Windows 11, which features modernized architecture and ongoing support. However, strict hardware requirements mean that roughly 200 million Windows 10 computers worldwide remain ineligible for the upgrade. 

For those unable to transition, Microsoft provides three main options: purchasing new hardware compatible with Windows 11, enrolling in a paid Extended Security Updates (ESU) program (offering patches for one extra year), or continuing to operate unsupported — a risky path exposing systems to severe cyber threats.

The support cutoff extends beyond the OS. Microsoft Office 2016 and 2019 have simultaneously reached end-of-life, leaving only newer versions like Office 2021 and LTSC operable but unsupported on Windows 10. Users are encouraged to switch to Microsoft 365 or move licenses to Windows 11 devices. Notably, support for Office LTSC 2021 ends in October 2026.

Data protection tips

Microsoft urges users to back up critical data and securely erase drives before recycling or reselling devices. Participating manufacturers and Microsoft itself offer trade-in or recycling programs to ensure data safety. As cyber risks amplify and hackers exploit obsolete systems, users still on Windows 10 face a critical choice — upgrade, pay for ESU, or risk exposure in an increasingly volatile digital landscape.

India Plans Techno-Legal Framework to Combat Deepfake Threats

 

India will introduce comprehensive regulations to combat deepfakes in the near future, Union IT Minister Ashwini Vaishnaw announced at the NDTV World Summit 2025 in New Delhi. The minister emphasized that the upcoming framework will adopt a dual-component approach combining technical solutions with legal measures, rather than relying solely on traditional legislation.

Vaishnaw explained that artificial intelligence cannot be effectively regulated through conventional lawmaking alone, as the technology requires innovative technical interventions. He acknowledged that while AI enables entertaining applications like age transformation filters, deepfakes pose unprecedented threats to society by potentially misusing individuals' faces and voices to disseminate false messages completely disconnected from the actual person.

The minister highlighted the fundamental right of individuals to protect their identity from harmful misuse, stating that this principle forms the foundation of the government's approach to deepfake regulation. The techno-legal strategy distinguishes India's methodology from the European Union's primarily regulatory framework, with India prioritizing innovation alongside societal protection.

As part of the technical solution, Vaishnaw referenced ongoing work at the AI Safety Institute, specifically mentioning that the Indian Institute of Technology Jodhpur has developed a detection system capable of identifying deepfakes with over 90 percent accuracy. This technological advancement will complement the legal framework to create a more robust defense mechanism.

The minister also discussed India's broader AI infrastructure development, noting that two semiconductor manufacturing units, CG Semi and Kaynes, have commenced production operations in the country. Additionally, six indigenous AI models are currently under development, with two utilizing approximately 120 billion parameters designed to be free from biases present in Western models.

The government has deployed 38,000 graphics processing units (GPUs) for AI development and secured a $15 billion investment commitment from Google to establish a major AI hub in India. This infrastructure expansion aims to enhance the nation's research capabilities and application development in artificial intelligence.

Discord Third-Party Breach Exposes User Data and Government IDs

 

Discord has confirmed a significant data breach affecting users who interacted with their customer support or trust & safety teams, stemming not from a direct attack on Discord’s own systems but through a compromised third-party vendor that handled customer service operations.

This incident highlights a persistent and growing vulnerability within the tech industry—outsourcing crucial services to external parties with potentially weaker cybersecurity standards, making user data increasingly reliant on the practices of organizations that customers never directly chose to trust.

Data exposed in the breach

The breach resulted in unauthorized access to sensitive personal information stored in customer service records. Specifically, exposed data included names, email addresses, Discord usernames, and various contact details for users engaging with Discord support. Furthermore, hackers gained limited billing information comprising payment types, purchase history, and the last four digits of credit cards, with full card numbers and passwords remaining secure.

A particularly concerning aspect was a small subset of government-issued ID images—such as driver’s licenses and passports—belonging to users who had submitted documents for age verification purposes. Although not all Discord users were affected, the breach still poses a tangible risk of identity theft and privacy erosion for those involved.

Third-Party vendor risks

The incident underscores the dangers posed by outsourcing digital operations to third-party vendors. Discord’s response involved revoking the vendor’s access and launching a thorough investigation; however, the damage had already been done, reflecting security gaps that even prompt internal actions cannot immediately resolve once data is compromised. 

The broader issue is that while companies often rely on vendors to reduce costs and streamline services, these relationships introduce new, often less controllable, points of failure. In essence, the robust security of a major platform like Discord can be undermined by external vendors who do not adhere to equally rigorous protection standards.

Implications for users

In the aftermath, Discord followed standard protocols by notifying affected users via email and communicating with data protection authorities. Yet, this episode demonstrates a critical lesson: users’ digital privacy extends beyond the platforms they consciously choose, as it also depends on a network of third-party companies that can become invisible weak links. 

Each vendor relationship broadens the attack surface for potential breaches, transforming cybersecurity into a chain only as strong as the least secured party involved. The Discord incident serves as a stark reminder of the challenges in safeguarding digital identity in an interconnected ecosystem, where the security of personal data cannot be taken for granted.

OpenAI's Sora App Raises Facial Data Privacy Concerns

 

OpenAI's video-generating app, Sora, has raised significant questions regarding the safety and privacy of user's biometric data, particularly with its "Cameo" feature that creates realistic AI videos, or "deepfakes," using a person's face and voice. 

To power this functionality, OpenAI confirms it must store users' facial and audio data. The company states this sensitive data is encrypted during both storage and transmission, and uploaded cameo data is automatically deleted after 30 days. Despite these assurances, privacy concerns remain. The app's ability to generate hyper-realistic videos has sparked fears about the potential for misuse, such as the creation of unauthorized deepfakes or the spread of misinformation. 

OpenAI acknowledges a slight risk that the app could produce inappropriate content, including sexual deepfakes, despite the safeguards in place. In response to these risks, the company has implemented measures to distinguish AI-generated content, including visible watermarks and invisible C2PA metadata in every video created with Sora .

The company emphasizes that users have control over their likeness. Individuals can decide who is permitted to use their cameo and can revoke access or delete any video featuring them at any time. However, a major point of contention is the app's account deletion policy. Deleting a Sora account also results in the termination of the user's entire OpenAI account, including ChatGPT access, and the user cannot register again with the same email or phone number. 

While OpenAI has stated it is developing a way for users to delete their Sora account independently, this integrated deletion policy has surprised and concerned many users who wish to remove their biometric data from Sora without losing access to other OpenAI services.

The app has also drawn attention for potential copyright violations, with users creating videos featuring well-known characters from popular media. While OpenAI provides a mechanism for rights holders to request the removal of their content, the platform's design has positioned it as a new frontier for intellectual property disputes.

NSSF Sued for Secretly Using Gun Owners’ Data in Political Ads

 

The National Shooting Sports Foundation (NSSF) is facing a class-action lawsuit alleging it secretly built a database with personal information from millions of gun owners and used it for political advertising without consent.

The lawsuit, filed by two gun owners—Daniel Cocanour of Oklahoma and Dale Rimkus of Illinois—claims the NSSF obtained data from warranty cards filled out by customers for firearm rebates or repairs, which included sensitive details like contact information, age, income, vehicle ownership, and reasons for gun ownership. These individuals never consented to their data being shared or used for political purposes, according to the suit.

The NSSF, based in Shelton, Connecticut, began compiling the database in 1999 following the Columbine High School shooting, aiming to protect the firearms industry’s image and legal standing. By May 2001, the database held 3.4 million records, growing to 5.5 million by 2002 under the name “Data Hunter,” with contributions from major manufacturers like Glock, Smith & Wesson, Marlin Firearms, and Savage Arms. The plaintiffs allege “unjust enrichment,” arguing the NSSF profited from using this data without compensating gun owners.

The organization reportedly used the database to target political ads supporting pro-gun candidates, claiming its efforts were a “critical component” in George W. Bush’s narrow 2000 presidential victory. The NSSF continued using the database in elections through 2016, including hiring Cambridge Analytica during President Trump’s campaign to mobilize gun rights supporters in swing states . This partnership is notable given Cambridge Analytica’s later collapse due to a Facebook data scandal involving unauthorized user data.

Despite publicly advocating for gun owners’ privacy—such as supporting the “Protecting Privacy in Purchases Act”—the NSSF allegedly engaged in practices contradicting this stance. The lawsuit seeks damages exceeding $5 million and class-action status for all U.S. residents whose data was collected from 1990 to present. 

The case highlights a breach of trust, as the NSSF reportedly amassed data while warning against similar databases being used for gun confiscation . As of now, the NSSF has not commented publicly but maintains its data practices were legal and ethical .

Call-Recording App Neon Suspends Service After Security Breach

 

Neon, a viral app that pays users to record their phone calls—intending to sell these recordings to AI companies for training data—has been abruptly taken offline after a severe security flaw exposed users’ personal data, call recordings, and transcripts to the public.

Neon’s business model hinged on inviting users to record their calls through a proprietary interface, with payouts of 30 cents per minute for calls between Neon users and half that for calls to non-users, up to $30 per day. The company claimed it anonymized calls by stripping out personally identifiable information before selling the recordings to “trusted AI firms,” but this privacy commitment was quickly overshadowed by a crippling security lapse.

Within a day of rising to the top ranks of the App Store—boasting 75,000 downloads in a single day—the app was taken down after researchers discovered a vulnerability that allowed anyone to access other users’ call recordings, transcripts, phone numbers, and call metadata. Journalists found that the app’s backend was leaking not only public URLs to call audio files and transcripts but also details about recent calls, including call duration, participant phone numbers, timing, and even user earnings.

Alarmingly, these links were unrestricted—meaning anyone with the URL could eavesdrop on conversations—raising immediate privacy and legal concerns, especially given complex consent laws around call recording in various jurisdictions.

Founder and CEO Alex Kiam notified users that Neon was being temporarily suspended and promised to “add extra layers of security,” but did not directly acknowledge the security breach or its scale. The app itself remains visible in app stores but is nonfunctional, with no public timeline for its return. If Neon relaunches, it will face intense scrutiny over whether it has genuinely addressed the security and privacy issues that forced its shutdown.

This incident underscores the broader risks of apps monetizing sensitive user data—especially voice conversations—in exchange for quick rewards, a model that has emerged as AI firms seek vast, real-world datasets for training models. Neon’s downfall also highlights the challenges app stores face in screening for complex privacy and security flaws, even among fast-growing, high-profile apps.

For users, the episode is a stark reminder to scrutinize privacy policies and app permissions, especially when participating in novel data-for-cash business models. For the tech industry, it raises questions about the adequacy of existing safeguards for apps handling sensitive audio and personal data—and about the responsibilities of platform operators to prevent such breaches before they occur.

As of early October 2025, Neon remains offline, with users awaiting promised payouts and a potential return of the service, but with little transparency about how (or whether) the app’s fundamental security shortcomings have been fixed.

FTC Launches Formal Investigation into AI Companion Chatbots

 

The Federal Trade Commission has announced a formal inquiry into companies that develop AI companion chatbots, focusing specifically on how these platforms potentially harm children and teenagers. While not currently tied to regulatory action, the investigation seeks to understand how companies "measure, test, and monitor potentially negative impacts of this technology on children and teens". 

Companies under scrutiny 

Seven major technology companies have been selected for the investigation: Alphabet (Google's parent company), Character Technologies (creator of Character.AI), Meta, Instagram (Meta subsidiary), OpenAI, Snap, and X.AI. These companies are being asked to provide comprehensive information about their AI chatbot operations and safety measures. 

Investigation scope 

The FTC is requesting detailed information across several key areas. Companies must explain how they develop and approve AI characters, including their processes for "monetizing user engagement". Data protection practices are also under examination, particularly how companies safeguard underage users and ensure compliance with the Children's Online Privacy Protection Act Rule.

Motivation and concerns 

Although the FTC hasn't explicitly stated its investigation's motivation, FTC Commissioner Mark Meador referenced troubling reports from The New York Times and Wall Street Journal highlighting "chatbots amplifying suicidal ideation" and engaging in "sexually-themed discussions with underage users". Meador emphasized that if violations are discovered, "the Commission should not hesitate to act to protect the most vulnerable among us". 

Broader regulatory landscape 

This investigation reflects growing regulatory concern about AI's immediate negative impacts on privacy and health, especially as long-term productivity benefits remain uncertain. The FTC's inquiry isn't isolated—Texas Attorney General has already launched a separate investigation into Character.AI and Meta AI Studio, examining similar concerns about data privacy and chatbots falsely presenting themselves as mental health professionals. 

Implications

The investigation represents a significant regulatory response to emerging AI safety concerns, particularly regarding vulnerable populations. As AI companion technology proliferates, this inquiry may establish important precedents for industry oversight and child protection standards in the AI sector.

Muzaffarpur Man Loses ₹3.5 Lakh in Remote Access App Bank Fraud

 

A resident of Muzaffarpur, Bihar fell victim to a sophisticated remote access application scam that resulted in the loss of ₹3.5 lakh from his bank account. The cybercrime incident occurred when the victim was searching online for courier service assistance and discovered what appeared to be a legitimate customer support contact number through Google search results. 

Scam operation 

The fraudsters posed as courier service agents and initiated contact with the unsuspecting victim. During the conversation, the criminals convinced the man to download and install a remote access application on his mobile device, claiming it would help resolve his delivery-related issues. Once the victim granted remote access permissions to the application, the cybercriminals gained complete control over his smartphone and banking applications . 

Financial impact  

Within minutes of installing the malicious remote access software, the fraudsters executed multiple unauthorized transactions from the victim's bank account. The scammers managed to conduct seven separate high-value financial transfers, draining a total amount of ₹3.5 lakh from the man's banking accounts. The transactions were processed rapidly, taking advantage of the victim's digital banking credentials that were accessible through the compromised device . 

Broader criminal network 

Local police investigations have revealed that this incident is part of a larger interstate fraud syndicate operating across multiple states. The cyber crime cell has traced the fraudulent transactions to various bank accounts, suggesting a well-organized criminal network. Law enforcement agencies suspect that the scammers strategically place fake customer service numbers on internet search platforms, impersonating official service providers to target unsuspecting consumers.

Rising threat 

This case represents an alarming trend in cybercrime where fraudsters exploit remote desktop applications like AnyDesk and TeamViewer to gain unauthorized access to victims' devices. The scammers often target individuals seeking customer support for various services, including courier deliveries, utility bills, and other common consumer needs. These social engineering attacks have become increasingly sophisticated, with criminals creating convincing scenarios to pressure victims into installing malicious software. 

Prevention and safety measures 

Cybersecurity experts emphasize the importance of digital awareness and caution when dealing with unsolicited support calls or online search results. Users should verify customer service numbers directly from official websites rather than relying on search engine results. 

Additionally, individuals should never install remote access applications unless they are completely certain about the legitimacy of the requesting party. Financial institutions and telecom providers are working to implement enhanced fraud detection systems to identify and prevent such scams in real-time .

Massive database of 250 million data leaked online for public access


Around a quarter of a billion identity records were left publicly accessible, exposing people located in seven countries- Saudi Arabia, the United Arab Emirates, Canada, Mexico, South Africa, Egypt, and Turkey. 

According to experts from Cybernews, three misconfigured servers, registered in the UAE and Brazil, hosting IP addresses, contained personal information such as “government-level” identity profiles. The leaked data included contact details, dates of birth, ID numbers, and home addresses. 

Cybernews experts who found the leak said the databases seemed to have similarities with the naming conventions and structure, which hinted towards the same source. But they could not identify the actor who was responsible for running the servers. 

“These databases were likely operated by a single party, due to the similar data structures, but there’s no attribution as to who controlled the data, or any hard links proving that these instances belonged to the same party,” they said. 

The leak is particularly concerning for citizens in South Africa, Egypt, and Turkey, as the databases there contained full-spectrum data. 

The leak would have exposed the database to multiple threats, such as phishing campaigns, scams, financial fraud, and abuses.

Currently, the database is not publicly accessible (a good sign). 

This is not the first incident where a massive database holding citizen data (250 million) has been exposed online. Cybernews’ research revealed that the entire Brazilian population might have been impacted by the breach.

Earlier, a misconfigured Elasticsearch instance included the data with details such as sex,  names, dates of birth, and Cadastro de Pessoas Físicas (CPF) numbers. This number is used to identify taxpayers in Brazil. 

Fake Netflix Job Offers Target Facebook Credentials in Real-Time Scam

 

A sophisticated phishing campaign is targeting job seekers with fake Netflix job offers designed to steal Facebook login credentials. The scam specifically focuses on marketing and social media professionals who may have access to corporate Facebook business accounts. 

Modus operandi 

The attack begins with highly convincing, AI-generated emails that appear to come from Netflix's HR team, personally tailored to recipients' professional backgrounds. When job seekers click the "Schedule Interview" link, they're directed to a fraudulent career site that closely mimics Netflix's official page. 

The fake site prompts users to create a "Career Profile" and offers options to log in with Facebook or email. However, regardless of the initial choice, victims are eventually directed to enter their Facebook credentials. This is where the scam becomes particularly dangerous. 

Real-time credential theft 

What makes this attack especially sophisticated is the use of websocket technology that allows scammers to intercept login details as they're being typed. As Malwarebytes researcher Pieter Arntz explains, "The phishers use a websocket method that allows them to intercept submissions live as they are entered. This allows them to try the credentials and if your password works, they can log into your real Facebook account within seconds". 

The attackers can immediately test stolen credentials on Facebook's actual platform and may even request multi-factor authentication codes if needed. If passwords don't work, they simply display a "wrong password" message to maintain the illusion. 

While personal Facebook accounts have value, the primary goal is accessing corporate social media accounts. Cybercriminals seek marketing managers and social media staff who control company Facebook Pages or business accounts. Once compromised, these accounts can be used to run malicious advertising campaigns at the company's expense, demand ransom payments, or leverage the organization's reputation for further scams.

Warning signs and protection

Security researchers have identified several suspicious email domains associated with this campaign, including addresses ending with @netflixworkplaceefficiencyhub.com, @netflixworkmotivation, and @netflixtalentnurture.com. The fake hiring site was identified as hiring.growwithusnetflix[.]com, though indicators suggest the operators cleared their tracks after the scam was exposed. 

Job seekers should be cautious of unsolicited job offers, verify website addresses carefully, and remember that legitimate Netflix recruitment doesn't require Facebook login credentials. The campaign demonstrates how scammers exploit both job market anxiety and the appeal of working for prestigious companies to execute sophisticated credential theft operations.

Orange Belgium Hit by Cyberattack Affecting 850,000 Customers

 

Orange Belgium, a major telecommunications provider and subsidiary of French telecom giant Orange Group, confirmed in August 2025 a significant cyberattack on its IT systems that resulted in unauthorized access to the personal data of approximately 850,000 customers.

The attack was detected at the end of July, after which the company swiftly activated its incident response procedures, including blocking access to the affected system, strengthening its security measures, and notifying both the relevant authorities and impacted customers. An official complaint was filed with judicial authorities, and the investigation remains ongoing.

The data accessed by the attackers included surname, first name, telephone number, SIM card number, PUK (Personal Unlocking Key) code, and tariff plan. Importantly, Orange Belgium reassured customers that no critical data—such as passwords, email addresses, or bank and financial details—were compromised in this incident. This distinction is significant, as the absence of authentication and financial data reduces, but does not eliminate, risks for affected individuals. 

Affected customers are being notified by email or text message, with advice to remain vigilant for suspicious communications, particularly phishing or impersonation attempts. The company recommends that customers exercise caution with any unexpected requests for sensitive information, as criminals may use the stolen data for social engineering attacks.

Some security experts have specifically warned about the risk of SIM swapping—whereby attackers hijack a phone number by convincing a mobile operator to transfer service to a new SIM card under their control—and advise customers to request new SIM cards and PUK codes as a precaution. 

The incident is one of several cyberattacks targeting Orange and its subsidiaries in 2025, although there is no evidence to suggest that this breach is linked to previous attacks affecting Orange’s operations in other countries. Orange Belgium operates a network serving over three million customers in Belgium and Luxembourg, making this breach one of the most significant data security incidents in the region this year. 

Criticism has emerged regarding Orange Belgium’s communication strategy, with some cybersecurity experts arguing that the company underplayed the potential risks—such as SIM swapping—and placed too much responsibility on customers to protect themselves after the breach. Despite these concerns, Orange Belgium’s response included immediate technical containment, regulatory notification, and customer outreach, aligning with standard incident response protocols for major telecom providers.

The breach highlights the persistent threat of cyberattacks against telecommunications companies, which remain attractive targets due to the vast amounts of customer data they manage. While the immediate risk of financial loss or account takeover is lower in this case due to the nature of the exposed data, the incident underscores the importance of robust cybersecurity measures and clear, transparent communication with affected users. Customers are encouraged to monitor their accounts, change passwords as a precaution, and report any suspicious activity to Orange Belgium and the authorities.

FreeVPN.One Extension Turns from Privacy Tool to Surveillance Threat

 

Security researchers at Koi Security have discovered troubling behavior from FreeVPN.One, a popular Chrome VPN extension with over 100,000 installations that has begun secretly capturing and transmitting users' screenshots to remote servers. 

Threat discovery 

The extension, which had maintained legitimate functionality for years, recently shifted its behavior in July 2025 to silently capture screenshots approximately one second after each page load. These screenshots are then transmitted to external servers—initially unencrypted, but later obfuscated with encryption after updates. The malicious behavior was introduced gradually through smaller updates that first requested additional permissions to access all websites and inject custom scripts. 

Developer's response

When confronted, FreeVPN.One's developer claimed the extension "is fully compliant with Chrome Web Store policies" and that screenshot functionality is disclosed in their privacy policy. The developer provided various justifications, including that screenshots only trigger "if a domain appears suspicious" as part of "background scanning". 

However, Koi researchers refuted this, providing evidence of activation on trusted domains including Google's own sites. The developer also claimed screenshots are "not being stored or used" but "only analyzed briefly for potential threats"—a distinction researchers found unconvincing. 

Chrome web store failures

This incident highlights significant security gaps in Google's Chrome Web Store review process. Despite Google's claims of performing security checks through automated scans, human reviews, and monitoring for malicious behavior changes, FreeVPN.One managed to maintain its verified status and featured placement while conducting these activities. 

The extension appears to have exploited a patient approach—operating legitimately for years before introducing malicious functionality, effectively bypassing security measures. While the product overview mentions "advanced AI Threat Detection" with "passive mode" monitoring, it fails to clearly state that "scanning them visually" means sending screenshots to remote servers without notification or opt-out options. 

Current status

As of the article's publication, Google had not responded to inquiries about investigating the extension or removing it from the Chrome Web Store. The FreeVPN.One extension remained active and available for download despite the security findings, raising concerns about user protection in browser marketplaces. This case demonstrates how privacy-branded extensions can become surveillance tools, exploiting user trust while bypassing platform security measures.

Here's How 'AI Poisoning' Tools Are Sabotaging Data-Hungry Bots

 

The internet has evolved from a platform mainly used by people for social sharing to one dominated by automated bots, especially those powered by AI. Bots now generate most web traffic, with over half of this stemming from malicious actors harvesting unprotected personal data. Many bots, however, are operated by major AI companies such as OpenAI—whose ChatGPT bot accounts for 6% of total web traffic—and Anthropic’s ClaudeBot, which constitutes 13%. 

These AI bots systematically scrape online content to train their models and answer user queries, raising concerns among content creators about widespread copyright infringement and unauthorized use of their work. Legal battles with AI companies are hard for most creators due to high costs, prompting some to turn to technical countermeasures. Tools are being developed to make it harder for AI bots to access or make use of online content.

Some specifically aim to “poison” the data—deliberately introducing subtle or hidden modifications so AI models misinterpret the material. For example, Chicago University's Glaze tool makes imperceptible changes to digital artwork, fooling models into misreading an artist’s style. Nightshade, another free tool, goes a step further by convincing AI that terms like “cat” should be linked with unrelated images, thus undermining model accuracy. 

Both tools have been widely adopted, empowering creators to exert control over how their work is ingested by AI bots. Beyond personal use, companies like Cloudflare have joined the fight, developing AI Labyrinth, a program that overwhelms bots with nonsensical, AI-generated content.

This method both diverts bots and protects genuine content. Another Cloudflare measure forces AI companies to pay for website access or get blocked entirely from indexing its contents. Historically, data “poisoning” is not a new idea. It traces back to creators like map-makers inserting fictitious locations to detect plagiarism. 

Today, similar tactics serve artists and writers defending against AI, and such methods are considered by digital rights advocates as a legitimate means for creators to manage their data, rather than outright sabotage. However, these protections have broader implications. State actors are reportedly using similar strategies, deploying thousands of fake news pages to bias AI models’ response towards particular narratives, such as Russia influencing war-related queries. 

Analysis shows that, at times, a third of major AI chatbots’ answers are aligned with these fake narratives, highlighting the double-edged nature of AI poisoning—it can protect rights but also propagate misinformation. Ultimately, while AI poisoning empowers content creators, it introduces new complexities to internet trust and information reliability, underscoring ongoing tensions in the data economy.

Native Phishing Emerges as a New Microsoft 365 Threat Vector

 

A recent cybersecurity threat report highlights a tactic known as "native phishing," where attackers exploit the trusted, built-in features of Microsoft 365 to launch attacks from within an organization. This method moves beyond traditional phishing emails with malicious attachments, instead leveraging the trust users have in their own company's systems. 

The core of native phishing is its subtlety and legitimacy. After compromising a single user's Microsoft 365 account, an attacker can use integrated apps like OneNote and OneDrive to share malicious content. Since these notifications come from a legitimate internal account and the links point to the organization’s own OneDrive, they bypass both security systems and the suspicions of trained users.

Modus operandi

Attackers have found Microsoft OneNote to be a particularly effective tool. While OneNote doesn't support macros, it is not subject to Microsoft's "Protected View," which typically warns users about potentially unsafe files. Its flexible formatting allows attackers to create deceptive layouts and embed malicious links . 

In a typical scenario, an attacker who has gained access to a user's credentials will create a OneNote file containing a malicious link within the user's personal OneDrive. They then use the built-in sharing feature to send a legitimate-looking Microsoft notification to hundreds of colleagues. The email, appearing to be from a trusted source, contains a secure link to the file hosted on the company's OneDrive, making it highly convincing. 

Victims who click the link are directed to a fake login page, often a near-perfect replica of their company's actual authentication portal. These phishing sites are frequently built using free, AI-powered, no-code website builders like Flazio, ClickFunnels, and JotForm, which allow attackers to quickly create and host convincing fake pages with minimal effort. This technique has shown an unusually high success rate compared to other phishing campaigns. 

Mitigation strategies 

To combat native phishing, organizations are advised to take several proactive steps: 

  • Enforce multi-factor authentication (MFA) and conditional access to reduce the risk of account takeovers. 
  • Conduct regular phishing simulations to build employee awareness.
  • Establish clear channels for employees to report suspicious activity easily. 
  • Review and tighten Microsoft 365 sharing settings to limit unnecessary exposure.
  • Set up alerts for unusual file-sharing behavior and monitor traffic to known no-code website builders.

Gemini Flaw Exposed Via Malicious Google Calendar Invites, Researchers Find

 

Google recently fixed a critical vulnerability in its Gemini AI assistant, which is tightly integrated with Android, Google Workspace, Gmail, Calendar, and Google Home. The flaw allowed attackers to exploit Gemini via creatively crafted Google Calendar invites, using indirect prompt injection techniques hidden in event titles. 

Once the malicious invite was sent, any user interaction with Gemini—such as asking for their daily calendar or emails—could trigger unintended actions, including the extraction of sensitive data, the control of smart home devices, tracking of user locations, launching of applications, or even joining Zoom video calls. 

The vulnerability exploited Gemini’s wide-reaching permissions and its context window. The attack did not require acceptance of the calendar invite, as Gemini’s natural behavior is to pull all event details when queried. The hostile prompt, embedded in the event title, would be processed by Gemini as part of the conversation, bypassing its prompt filtering and other security mechanisms. 

The researchers behind the attack, SafeBreach, demonstrated that just acting like a normal Gemini user could unknowingly expose confidential information or give attackers command over connected devices. In particular, attackers could stealthily place the malicious prompt in the sixth invite out of several, as Google Calendar only displays the five most recent events unless manually expanded, further complicating detection by users. 

The case raises deep concerns about the inherent risks of AI assistants linked to rich context sources like email and calendars, where hostile prompts can easily evade standard model protections and inject instructions not visible to the user. This type of attack, called an indirect prompt injection, was previously flagged by Mozilla’s Marco Figueroa in other Gemini-related exploits. Such vulnerabilities pave the way for both data leaks and convincing phishing attacks. 

Google responded proactively, patching the flaw before public exploitation, crediting the research team for responsible disclosure and collaboration. The incident has accelerated Google’s deployment of advanced defenses, including improved adversarial awareness and mitigations against hijack attempts. 

Security experts stress that continued red-teaming, industry cooperation, and rethinking automation boundaries are now imperative as AI becomes more enmeshed in smart devices and agents with broad powers. Gemini’s incident stands as a wake-up call for the real-world risks of prompt injection and automation in next-generation AI assistants, emphasizing the need for robust, ongoing security measures.