Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label User Privacy. Show all posts

Windows 10 Support Termination Leaves Devices Vulnerable

 

Microsoft has officially ended support for Windows 10, marking a major shift impacting hundreds of millions of users worldwide. Released in 2015, the operating system will no longer receive free security updates, bug fixes, or technical assistance, leaving all devices running it vulnerable to exploitation. This decision mirrors previous end-of-life events such as Windows XP, which saw a surge in cyberattacks after losing support.

Rising security threats

Without updates, Windows 10 systems are expected to become prime targets for hackers. Thousands of vulnerabilities have already been documented in public databases like ExploitDB, and several critical flaws have been actively exploited. 

Among them are CVE-2025-29824, a “use-after-free” bug in the Common Log File System Driver with a CVSS score of 7.8; CVE-2025-24993, a heap-based buffer overflow in NTFS marked as “known exploited”; and CVE-2025-24984, leaking NTFS log data with the highest EPSS score of 13.87%. 

These vulnerabilities enable privilege escalation, code execution, or remote intrusion, many of which have been added to the U.S. CISA’s Known Exploited Vulnerabilities (KEV) catalog, signaling the seriousness of the risks.

Limited upgrade paths

Microsoft recommends that users migrate to Windows 11, which features modernized architecture and ongoing support. However, strict hardware requirements mean that roughly 200 million Windows 10 computers worldwide remain ineligible for the upgrade. 

For those unable to transition, Microsoft provides three main options: purchasing new hardware compatible with Windows 11, enrolling in a paid Extended Security Updates (ESU) program (offering patches for one extra year), or continuing to operate unsupported — a risky path exposing systems to severe cyber threats.

The support cutoff extends beyond the OS. Microsoft Office 2016 and 2019 have simultaneously reached end-of-life, leaving only newer versions like Office 2021 and LTSC operable but unsupported on Windows 10. Users are encouraged to switch to Microsoft 365 or move licenses to Windows 11 devices. Notably, support for Office LTSC 2021 ends in October 2026.

Data protection tips

Microsoft urges users to back up critical data and securely erase drives before recycling or reselling devices. Participating manufacturers and Microsoft itself offer trade-in or recycling programs to ensure data safety. As cyber risks amplify and hackers exploit obsolete systems, users still on Windows 10 face a critical choice — upgrade, pay for ESU, or risk exposure in an increasingly volatile digital landscape.

India Plans Techno-Legal Framework to Combat Deepfake Threats

 

India will introduce comprehensive regulations to combat deepfakes in the near future, Union IT Minister Ashwini Vaishnaw announced at the NDTV World Summit 2025 in New Delhi. The minister emphasized that the upcoming framework will adopt a dual-component approach combining technical solutions with legal measures, rather than relying solely on traditional legislation.

Vaishnaw explained that artificial intelligence cannot be effectively regulated through conventional lawmaking alone, as the technology requires innovative technical interventions. He acknowledged that while AI enables entertaining applications like age transformation filters, deepfakes pose unprecedented threats to society by potentially misusing individuals' faces and voices to disseminate false messages completely disconnected from the actual person.

The minister highlighted the fundamental right of individuals to protect their identity from harmful misuse, stating that this principle forms the foundation of the government's approach to deepfake regulation. The techno-legal strategy distinguishes India's methodology from the European Union's primarily regulatory framework, with India prioritizing innovation alongside societal protection.

As part of the technical solution, Vaishnaw referenced ongoing work at the AI Safety Institute, specifically mentioning that the Indian Institute of Technology Jodhpur has developed a detection system capable of identifying deepfakes with over 90 percent accuracy. This technological advancement will complement the legal framework to create a more robust defense mechanism.

The minister also discussed India's broader AI infrastructure development, noting that two semiconductor manufacturing units, CG Semi and Kaynes, have commenced production operations in the country. Additionally, six indigenous AI models are currently under development, with two utilizing approximately 120 billion parameters designed to be free from biases present in Western models.

The government has deployed 38,000 graphics processing units (GPUs) for AI development and secured a $15 billion investment commitment from Google to establish a major AI hub in India. This infrastructure expansion aims to enhance the nation's research capabilities and application development in artificial intelligence.

Discord Third-Party Breach Exposes User Data and Government IDs

 

Discord has confirmed a significant data breach affecting users who interacted with their customer support or trust & safety teams, stemming not from a direct attack on Discord’s own systems but through a compromised third-party vendor that handled customer service operations.

This incident highlights a persistent and growing vulnerability within the tech industry—outsourcing crucial services to external parties with potentially weaker cybersecurity standards, making user data increasingly reliant on the practices of organizations that customers never directly chose to trust.

Data exposed in the breach

The breach resulted in unauthorized access to sensitive personal information stored in customer service records. Specifically, exposed data included names, email addresses, Discord usernames, and various contact details for users engaging with Discord support. Furthermore, hackers gained limited billing information comprising payment types, purchase history, and the last four digits of credit cards, with full card numbers and passwords remaining secure.

A particularly concerning aspect was a small subset of government-issued ID images—such as driver’s licenses and passports—belonging to users who had submitted documents for age verification purposes. Although not all Discord users were affected, the breach still poses a tangible risk of identity theft and privacy erosion for those involved.

Third-Party vendor risks

The incident underscores the dangers posed by outsourcing digital operations to third-party vendors. Discord’s response involved revoking the vendor’s access and launching a thorough investigation; however, the damage had already been done, reflecting security gaps that even prompt internal actions cannot immediately resolve once data is compromised. 

The broader issue is that while companies often rely on vendors to reduce costs and streamline services, these relationships introduce new, often less controllable, points of failure. In essence, the robust security of a major platform like Discord can be undermined by external vendors who do not adhere to equally rigorous protection standards.

Implications for users

In the aftermath, Discord followed standard protocols by notifying affected users via email and communicating with data protection authorities. Yet, this episode demonstrates a critical lesson: users’ digital privacy extends beyond the platforms they consciously choose, as it also depends on a network of third-party companies that can become invisible weak links. 

Each vendor relationship broadens the attack surface for potential breaches, transforming cybersecurity into a chain only as strong as the least secured party involved. The Discord incident serves as a stark reminder of the challenges in safeguarding digital identity in an interconnected ecosystem, where the security of personal data cannot be taken for granted.

OpenAI's Sora App Raises Facial Data Privacy Concerns

 

OpenAI's video-generating app, Sora, has raised significant questions regarding the safety and privacy of user's biometric data, particularly with its "Cameo" feature that creates realistic AI videos, or "deepfakes," using a person's face and voice. 

To power this functionality, OpenAI confirms it must store users' facial and audio data. The company states this sensitive data is encrypted during both storage and transmission, and uploaded cameo data is automatically deleted after 30 days. Despite these assurances, privacy concerns remain. The app's ability to generate hyper-realistic videos has sparked fears about the potential for misuse, such as the creation of unauthorized deepfakes or the spread of misinformation. 

OpenAI acknowledges a slight risk that the app could produce inappropriate content, including sexual deepfakes, despite the safeguards in place. In response to these risks, the company has implemented measures to distinguish AI-generated content, including visible watermarks and invisible C2PA metadata in every video created with Sora .

The company emphasizes that users have control over their likeness. Individuals can decide who is permitted to use their cameo and can revoke access or delete any video featuring them at any time. However, a major point of contention is the app's account deletion policy. Deleting a Sora account also results in the termination of the user's entire OpenAI account, including ChatGPT access, and the user cannot register again with the same email or phone number. 

While OpenAI has stated it is developing a way for users to delete their Sora account independently, this integrated deletion policy has surprised and concerned many users who wish to remove their biometric data from Sora without losing access to other OpenAI services.

The app has also drawn attention for potential copyright violations, with users creating videos featuring well-known characters from popular media. While OpenAI provides a mechanism for rights holders to request the removal of their content, the platform's design has positioned it as a new frontier for intellectual property disputes.

NSSF Sued for Secretly Using Gun Owners’ Data in Political Ads

 

The National Shooting Sports Foundation (NSSF) is facing a class-action lawsuit alleging it secretly built a database with personal information from millions of gun owners and used it for political advertising without consent.

The lawsuit, filed by two gun owners—Daniel Cocanour of Oklahoma and Dale Rimkus of Illinois—claims the NSSF obtained data from warranty cards filled out by customers for firearm rebates or repairs, which included sensitive details like contact information, age, income, vehicle ownership, and reasons for gun ownership. These individuals never consented to their data being shared or used for political purposes, according to the suit.

The NSSF, based in Shelton, Connecticut, began compiling the database in 1999 following the Columbine High School shooting, aiming to protect the firearms industry’s image and legal standing. By May 2001, the database held 3.4 million records, growing to 5.5 million by 2002 under the name “Data Hunter,” with contributions from major manufacturers like Glock, Smith & Wesson, Marlin Firearms, and Savage Arms. The plaintiffs allege “unjust enrichment,” arguing the NSSF profited from using this data without compensating gun owners.

The organization reportedly used the database to target political ads supporting pro-gun candidates, claiming its efforts were a “critical component” in George W. Bush’s narrow 2000 presidential victory. The NSSF continued using the database in elections through 2016, including hiring Cambridge Analytica during President Trump’s campaign to mobilize gun rights supporters in swing states . This partnership is notable given Cambridge Analytica’s later collapse due to a Facebook data scandal involving unauthorized user data.

Despite publicly advocating for gun owners’ privacy—such as supporting the “Protecting Privacy in Purchases Act”—the NSSF allegedly engaged in practices contradicting this stance. The lawsuit seeks damages exceeding $5 million and class-action status for all U.S. residents whose data was collected from 1990 to present. 

The case highlights a breach of trust, as the NSSF reportedly amassed data while warning against similar databases being used for gun confiscation . As of now, the NSSF has not commented publicly but maintains its data practices were legal and ethical .

Call-Recording App Neon Suspends Service After Security Breach

 

Neon, a viral app that pays users to record their phone calls—intending to sell these recordings to AI companies for training data—has been abruptly taken offline after a severe security flaw exposed users’ personal data, call recordings, and transcripts to the public.

Neon’s business model hinged on inviting users to record their calls through a proprietary interface, with payouts of 30 cents per minute for calls between Neon users and half that for calls to non-users, up to $30 per day. The company claimed it anonymized calls by stripping out personally identifiable information before selling the recordings to “trusted AI firms,” but this privacy commitment was quickly overshadowed by a crippling security lapse.

Within a day of rising to the top ranks of the App Store—boasting 75,000 downloads in a single day—the app was taken down after researchers discovered a vulnerability that allowed anyone to access other users’ call recordings, transcripts, phone numbers, and call metadata. Journalists found that the app’s backend was leaking not only public URLs to call audio files and transcripts but also details about recent calls, including call duration, participant phone numbers, timing, and even user earnings.

Alarmingly, these links were unrestricted—meaning anyone with the URL could eavesdrop on conversations—raising immediate privacy and legal concerns, especially given complex consent laws around call recording in various jurisdictions.

Founder and CEO Alex Kiam notified users that Neon was being temporarily suspended and promised to “add extra layers of security,” but did not directly acknowledge the security breach or its scale. The app itself remains visible in app stores but is nonfunctional, with no public timeline for its return. If Neon relaunches, it will face intense scrutiny over whether it has genuinely addressed the security and privacy issues that forced its shutdown.

This incident underscores the broader risks of apps monetizing sensitive user data—especially voice conversations—in exchange for quick rewards, a model that has emerged as AI firms seek vast, real-world datasets for training models. Neon’s downfall also highlights the challenges app stores face in screening for complex privacy and security flaws, even among fast-growing, high-profile apps.

For users, the episode is a stark reminder to scrutinize privacy policies and app permissions, especially when participating in novel data-for-cash business models. For the tech industry, it raises questions about the adequacy of existing safeguards for apps handling sensitive audio and personal data—and about the responsibilities of platform operators to prevent such breaches before they occur.

As of early October 2025, Neon remains offline, with users awaiting promised payouts and a potential return of the service, but with little transparency about how (or whether) the app’s fundamental security shortcomings have been fixed.

FTC Launches Formal Investigation into AI Companion Chatbots

 

The Federal Trade Commission has announced a formal inquiry into companies that develop AI companion chatbots, focusing specifically on how these platforms potentially harm children and teenagers. While not currently tied to regulatory action, the investigation seeks to understand how companies "measure, test, and monitor potentially negative impacts of this technology on children and teens". 

Companies under scrutiny 

Seven major technology companies have been selected for the investigation: Alphabet (Google's parent company), Character Technologies (creator of Character.AI), Meta, Instagram (Meta subsidiary), OpenAI, Snap, and X.AI. These companies are being asked to provide comprehensive information about their AI chatbot operations and safety measures. 

Investigation scope 

The FTC is requesting detailed information across several key areas. Companies must explain how they develop and approve AI characters, including their processes for "monetizing user engagement". Data protection practices are also under examination, particularly how companies safeguard underage users and ensure compliance with the Children's Online Privacy Protection Act Rule.

Motivation and concerns 

Although the FTC hasn't explicitly stated its investigation's motivation, FTC Commissioner Mark Meador referenced troubling reports from The New York Times and Wall Street Journal highlighting "chatbots amplifying suicidal ideation" and engaging in "sexually-themed discussions with underage users". Meador emphasized that if violations are discovered, "the Commission should not hesitate to act to protect the most vulnerable among us". 

Broader regulatory landscape 

This investigation reflects growing regulatory concern about AI's immediate negative impacts on privacy and health, especially as long-term productivity benefits remain uncertain. The FTC's inquiry isn't isolated—Texas Attorney General has already launched a separate investigation into Character.AI and Meta AI Studio, examining similar concerns about data privacy and chatbots falsely presenting themselves as mental health professionals. 

Implications

The investigation represents a significant regulatory response to emerging AI safety concerns, particularly regarding vulnerable populations. As AI companion technology proliferates, this inquiry may establish important precedents for industry oversight and child protection standards in the AI sector.

Muzaffarpur Man Loses ₹3.5 Lakh in Remote Access App Bank Fraud

 

A resident of Muzaffarpur, Bihar fell victim to a sophisticated remote access application scam that resulted in the loss of ₹3.5 lakh from his bank account. The cybercrime incident occurred when the victim was searching online for courier service assistance and discovered what appeared to be a legitimate customer support contact number through Google search results. 

Scam operation 

The fraudsters posed as courier service agents and initiated contact with the unsuspecting victim. During the conversation, the criminals convinced the man to download and install a remote access application on his mobile device, claiming it would help resolve his delivery-related issues. Once the victim granted remote access permissions to the application, the cybercriminals gained complete control over his smartphone and banking applications . 

Financial impact  

Within minutes of installing the malicious remote access software, the fraudsters executed multiple unauthorized transactions from the victim's bank account. The scammers managed to conduct seven separate high-value financial transfers, draining a total amount of ₹3.5 lakh from the man's banking accounts. The transactions were processed rapidly, taking advantage of the victim's digital banking credentials that were accessible through the compromised device . 

Broader criminal network 

Local police investigations have revealed that this incident is part of a larger interstate fraud syndicate operating across multiple states. The cyber crime cell has traced the fraudulent transactions to various bank accounts, suggesting a well-organized criminal network. Law enforcement agencies suspect that the scammers strategically place fake customer service numbers on internet search platforms, impersonating official service providers to target unsuspecting consumers.

Rising threat 

This case represents an alarming trend in cybercrime where fraudsters exploit remote desktop applications like AnyDesk and TeamViewer to gain unauthorized access to victims' devices. The scammers often target individuals seeking customer support for various services, including courier deliveries, utility bills, and other common consumer needs. These social engineering attacks have become increasingly sophisticated, with criminals creating convincing scenarios to pressure victims into installing malicious software. 

Prevention and safety measures 

Cybersecurity experts emphasize the importance of digital awareness and caution when dealing with unsolicited support calls or online search results. Users should verify customer service numbers directly from official websites rather than relying on search engine results. 

Additionally, individuals should never install remote access applications unless they are completely certain about the legitimacy of the requesting party. Financial institutions and telecom providers are working to implement enhanced fraud detection systems to identify and prevent such scams in real-time .

Massive database of 250 million data leaked online for public access


Around a quarter of a billion identity records were left publicly accessible, exposing people located in seven countries- Saudi Arabia, the United Arab Emirates, Canada, Mexico, South Africa, Egypt, and Turkey. 

According to experts from Cybernews, three misconfigured servers, registered in the UAE and Brazil, hosting IP addresses, contained personal information such as “government-level” identity profiles. The leaked data included contact details, dates of birth, ID numbers, and home addresses. 

Cybernews experts who found the leak said the databases seemed to have similarities with the naming conventions and structure, which hinted towards the same source. But they could not identify the actor who was responsible for running the servers. 

“These databases were likely operated by a single party, due to the similar data structures, but there’s no attribution as to who controlled the data, or any hard links proving that these instances belonged to the same party,” they said. 

The leak is particularly concerning for citizens in South Africa, Egypt, and Turkey, as the databases there contained full-spectrum data. 

The leak would have exposed the database to multiple threats, such as phishing campaigns, scams, financial fraud, and abuses.

Currently, the database is not publicly accessible (a good sign). 

This is not the first incident where a massive database holding citizen data (250 million) has been exposed online. Cybernews’ research revealed that the entire Brazilian population might have been impacted by the breach.

Earlier, a misconfigured Elasticsearch instance included the data with details such as sex,  names, dates of birth, and Cadastro de Pessoas Físicas (CPF) numbers. This number is used to identify taxpayers in Brazil. 

Fake Netflix Job Offers Target Facebook Credentials in Real-Time Scam

 

A sophisticated phishing campaign is targeting job seekers with fake Netflix job offers designed to steal Facebook login credentials. The scam specifically focuses on marketing and social media professionals who may have access to corporate Facebook business accounts. 

Modus operandi 

The attack begins with highly convincing, AI-generated emails that appear to come from Netflix's HR team, personally tailored to recipients' professional backgrounds. When job seekers click the "Schedule Interview" link, they're directed to a fraudulent career site that closely mimics Netflix's official page. 

The fake site prompts users to create a "Career Profile" and offers options to log in with Facebook or email. However, regardless of the initial choice, victims are eventually directed to enter their Facebook credentials. This is where the scam becomes particularly dangerous. 

Real-time credential theft 

What makes this attack especially sophisticated is the use of websocket technology that allows scammers to intercept login details as they're being typed. As Malwarebytes researcher Pieter Arntz explains, "The phishers use a websocket method that allows them to intercept submissions live as they are entered. This allows them to try the credentials and if your password works, they can log into your real Facebook account within seconds". 

The attackers can immediately test stolen credentials on Facebook's actual platform and may even request multi-factor authentication codes if needed. If passwords don't work, they simply display a "wrong password" message to maintain the illusion. 

While personal Facebook accounts have value, the primary goal is accessing corporate social media accounts. Cybercriminals seek marketing managers and social media staff who control company Facebook Pages or business accounts. Once compromised, these accounts can be used to run malicious advertising campaigns at the company's expense, demand ransom payments, or leverage the organization's reputation for further scams.

Warning signs and protection

Security researchers have identified several suspicious email domains associated with this campaign, including addresses ending with @netflixworkplaceefficiencyhub.com, @netflixworkmotivation, and @netflixtalentnurture.com. The fake hiring site was identified as hiring.growwithusnetflix[.]com, though indicators suggest the operators cleared their tracks after the scam was exposed. 

Job seekers should be cautious of unsolicited job offers, verify website addresses carefully, and remember that legitimate Netflix recruitment doesn't require Facebook login credentials. The campaign demonstrates how scammers exploit both job market anxiety and the appeal of working for prestigious companies to execute sophisticated credential theft operations.

Orange Belgium Hit by Cyberattack Affecting 850,000 Customers

 

Orange Belgium, a major telecommunications provider and subsidiary of French telecom giant Orange Group, confirmed in August 2025 a significant cyberattack on its IT systems that resulted in unauthorized access to the personal data of approximately 850,000 customers.

The attack was detected at the end of July, after which the company swiftly activated its incident response procedures, including blocking access to the affected system, strengthening its security measures, and notifying both the relevant authorities and impacted customers. An official complaint was filed with judicial authorities, and the investigation remains ongoing.

The data accessed by the attackers included surname, first name, telephone number, SIM card number, PUK (Personal Unlocking Key) code, and tariff plan. Importantly, Orange Belgium reassured customers that no critical data—such as passwords, email addresses, or bank and financial details—were compromised in this incident. This distinction is significant, as the absence of authentication and financial data reduces, but does not eliminate, risks for affected individuals. 

Affected customers are being notified by email or text message, with advice to remain vigilant for suspicious communications, particularly phishing or impersonation attempts. The company recommends that customers exercise caution with any unexpected requests for sensitive information, as criminals may use the stolen data for social engineering attacks.

Some security experts have specifically warned about the risk of SIM swapping—whereby attackers hijack a phone number by convincing a mobile operator to transfer service to a new SIM card under their control—and advise customers to request new SIM cards and PUK codes as a precaution. 

The incident is one of several cyberattacks targeting Orange and its subsidiaries in 2025, although there is no evidence to suggest that this breach is linked to previous attacks affecting Orange’s operations in other countries. Orange Belgium operates a network serving over three million customers in Belgium and Luxembourg, making this breach one of the most significant data security incidents in the region this year. 

Criticism has emerged regarding Orange Belgium’s communication strategy, with some cybersecurity experts arguing that the company underplayed the potential risks—such as SIM swapping—and placed too much responsibility on customers to protect themselves after the breach. Despite these concerns, Orange Belgium’s response included immediate technical containment, regulatory notification, and customer outreach, aligning with standard incident response protocols for major telecom providers.

The breach highlights the persistent threat of cyberattacks against telecommunications companies, which remain attractive targets due to the vast amounts of customer data they manage. While the immediate risk of financial loss or account takeover is lower in this case due to the nature of the exposed data, the incident underscores the importance of robust cybersecurity measures and clear, transparent communication with affected users. Customers are encouraged to monitor their accounts, change passwords as a precaution, and report any suspicious activity to Orange Belgium and the authorities.

FreeVPN.One Extension Turns from Privacy Tool to Surveillance Threat

 

Security researchers at Koi Security have discovered troubling behavior from FreeVPN.One, a popular Chrome VPN extension with over 100,000 installations that has begun secretly capturing and transmitting users' screenshots to remote servers. 

Threat discovery 

The extension, which had maintained legitimate functionality for years, recently shifted its behavior in July 2025 to silently capture screenshots approximately one second after each page load. These screenshots are then transmitted to external servers—initially unencrypted, but later obfuscated with encryption after updates. The malicious behavior was introduced gradually through smaller updates that first requested additional permissions to access all websites and inject custom scripts. 

Developer's response

When confronted, FreeVPN.One's developer claimed the extension "is fully compliant with Chrome Web Store policies" and that screenshot functionality is disclosed in their privacy policy. The developer provided various justifications, including that screenshots only trigger "if a domain appears suspicious" as part of "background scanning". 

However, Koi researchers refuted this, providing evidence of activation on trusted domains including Google's own sites. The developer also claimed screenshots are "not being stored or used" but "only analyzed briefly for potential threats"—a distinction researchers found unconvincing. 

Chrome web store failures

This incident highlights significant security gaps in Google's Chrome Web Store review process. Despite Google's claims of performing security checks through automated scans, human reviews, and monitoring for malicious behavior changes, FreeVPN.One managed to maintain its verified status and featured placement while conducting these activities. 

The extension appears to have exploited a patient approach—operating legitimately for years before introducing malicious functionality, effectively bypassing security measures. While the product overview mentions "advanced AI Threat Detection" with "passive mode" monitoring, it fails to clearly state that "scanning them visually" means sending screenshots to remote servers without notification or opt-out options. 

Current status

As of the article's publication, Google had not responded to inquiries about investigating the extension or removing it from the Chrome Web Store. The FreeVPN.One extension remained active and available for download despite the security findings, raising concerns about user protection in browser marketplaces. This case demonstrates how privacy-branded extensions can become surveillance tools, exploiting user trust while bypassing platform security measures.

Here's How 'AI Poisoning' Tools Are Sabotaging Data-Hungry Bots

 

The internet has evolved from a platform mainly used by people for social sharing to one dominated by automated bots, especially those powered by AI. Bots now generate most web traffic, with over half of this stemming from malicious actors harvesting unprotected personal data. Many bots, however, are operated by major AI companies such as OpenAI—whose ChatGPT bot accounts for 6% of total web traffic—and Anthropic’s ClaudeBot, which constitutes 13%. 

These AI bots systematically scrape online content to train their models and answer user queries, raising concerns among content creators about widespread copyright infringement and unauthorized use of their work. Legal battles with AI companies are hard for most creators due to high costs, prompting some to turn to technical countermeasures. Tools are being developed to make it harder for AI bots to access or make use of online content.

Some specifically aim to “poison” the data—deliberately introducing subtle or hidden modifications so AI models misinterpret the material. For example, Chicago University's Glaze tool makes imperceptible changes to digital artwork, fooling models into misreading an artist’s style. Nightshade, another free tool, goes a step further by convincing AI that terms like “cat” should be linked with unrelated images, thus undermining model accuracy. 

Both tools have been widely adopted, empowering creators to exert control over how their work is ingested by AI bots. Beyond personal use, companies like Cloudflare have joined the fight, developing AI Labyrinth, a program that overwhelms bots with nonsensical, AI-generated content.

This method both diverts bots and protects genuine content. Another Cloudflare measure forces AI companies to pay for website access or get blocked entirely from indexing its contents. Historically, data “poisoning” is not a new idea. It traces back to creators like map-makers inserting fictitious locations to detect plagiarism. 

Today, similar tactics serve artists and writers defending against AI, and such methods are considered by digital rights advocates as a legitimate means for creators to manage their data, rather than outright sabotage. However, these protections have broader implications. State actors are reportedly using similar strategies, deploying thousands of fake news pages to bias AI models’ response towards particular narratives, such as Russia influencing war-related queries. 

Analysis shows that, at times, a third of major AI chatbots’ answers are aligned with these fake narratives, highlighting the double-edged nature of AI poisoning—it can protect rights but also propagate misinformation. Ultimately, while AI poisoning empowers content creators, it introduces new complexities to internet trust and information reliability, underscoring ongoing tensions in the data economy.

Native Phishing Emerges as a New Microsoft 365 Threat Vector

 

A recent cybersecurity threat report highlights a tactic known as "native phishing," where attackers exploit the trusted, built-in features of Microsoft 365 to launch attacks from within an organization. This method moves beyond traditional phishing emails with malicious attachments, instead leveraging the trust users have in their own company's systems. 

The core of native phishing is its subtlety and legitimacy. After compromising a single user's Microsoft 365 account, an attacker can use integrated apps like OneNote and OneDrive to share malicious content. Since these notifications come from a legitimate internal account and the links point to the organization’s own OneDrive, they bypass both security systems and the suspicions of trained users.

Modus operandi

Attackers have found Microsoft OneNote to be a particularly effective tool. While OneNote doesn't support macros, it is not subject to Microsoft's "Protected View," which typically warns users about potentially unsafe files. Its flexible formatting allows attackers to create deceptive layouts and embed malicious links . 

In a typical scenario, an attacker who has gained access to a user's credentials will create a OneNote file containing a malicious link within the user's personal OneDrive. They then use the built-in sharing feature to send a legitimate-looking Microsoft notification to hundreds of colleagues. The email, appearing to be from a trusted source, contains a secure link to the file hosted on the company's OneDrive, making it highly convincing. 

Victims who click the link are directed to a fake login page, often a near-perfect replica of their company's actual authentication portal. These phishing sites are frequently built using free, AI-powered, no-code website builders like Flazio, ClickFunnels, and JotForm, which allow attackers to quickly create and host convincing fake pages with minimal effort. This technique has shown an unusually high success rate compared to other phishing campaigns. 

Mitigation strategies 

To combat native phishing, organizations are advised to take several proactive steps: 

  • Enforce multi-factor authentication (MFA) and conditional access to reduce the risk of account takeovers. 
  • Conduct regular phishing simulations to build employee awareness.
  • Establish clear channels for employees to report suspicious activity easily. 
  • Review and tighten Microsoft 365 sharing settings to limit unnecessary exposure.
  • Set up alerts for unusual file-sharing behavior and monitor traffic to known no-code website builders.

Gemini Flaw Exposed Via Malicious Google Calendar Invites, Researchers Find

 

Google recently fixed a critical vulnerability in its Gemini AI assistant, which is tightly integrated with Android, Google Workspace, Gmail, Calendar, and Google Home. The flaw allowed attackers to exploit Gemini via creatively crafted Google Calendar invites, using indirect prompt injection techniques hidden in event titles. 

Once the malicious invite was sent, any user interaction with Gemini—such as asking for their daily calendar or emails—could trigger unintended actions, including the extraction of sensitive data, the control of smart home devices, tracking of user locations, launching of applications, or even joining Zoom video calls. 

The vulnerability exploited Gemini’s wide-reaching permissions and its context window. The attack did not require acceptance of the calendar invite, as Gemini’s natural behavior is to pull all event details when queried. The hostile prompt, embedded in the event title, would be processed by Gemini as part of the conversation, bypassing its prompt filtering and other security mechanisms. 

The researchers behind the attack, SafeBreach, demonstrated that just acting like a normal Gemini user could unknowingly expose confidential information or give attackers command over connected devices. In particular, attackers could stealthily place the malicious prompt in the sixth invite out of several, as Google Calendar only displays the five most recent events unless manually expanded, further complicating detection by users. 

The case raises deep concerns about the inherent risks of AI assistants linked to rich context sources like email and calendars, where hostile prompts can easily evade standard model protections and inject instructions not visible to the user. This type of attack, called an indirect prompt injection, was previously flagged by Mozilla’s Marco Figueroa in other Gemini-related exploits. Such vulnerabilities pave the way for both data leaks and convincing phishing attacks. 

Google responded proactively, patching the flaw before public exploitation, crediting the research team for responsible disclosure and collaboration. The incident has accelerated Google’s deployment of advanced defenses, including improved adversarial awareness and mitigations against hijack attempts. 

Security experts stress that continued red-teaming, industry cooperation, and rethinking automation boundaries are now imperative as AI becomes more enmeshed in smart devices and agents with broad powers. Gemini’s incident stands as a wake-up call for the real-world risks of prompt injection and automation in next-generation AI assistants, emphasizing the need for robust, ongoing security measures.

NZTA Breach Results in Vehicle Theft, User Data Compromise


Data compromise leads to targeted motor theft

A privacy breach has leaked the details of 1000 people (estimate) in a Transport firm's database over the past year. According to the agency, the breach targeted 13 vehicles for theft. The problem was in the agency’s Motocheck system, which let users access information stored on the Motor Vehicle Register. 

User account compromise led to unauthorized access

According to the NZTA, it became aware of the attack in May 2025 when a customer complained, and also through the police as part of an investigation. NZTA found that illegal access happened from an ex-employee's account of Motocheck of Auckland Auto Collections LTD. The threat actor used the compromised account to access people’s personal information, such as names and addresses from the MVR. 

"To date, we have determined that names and addresses of 951 people were accessed improperly over the 12 months to May 2025, and that at least 13 of these vehicles are suspected to have been targeted for theft," NZTA said in a statement. 

NZTA assisting affected customers

The agency contacted affected customers to assist them in the breach and updated them on measures that were taken to address the incident, and also offered support and assistance for their concerns. 

"We have sincerely apologised to those affected for the inconvenience and distress caused by the breach," it said. NZTA is also assisting police in their investigations of the incident and the vehicles that were targeted for theft. NZTA also informed the Office of the Privacy Commissioner. The agency’s systems aim to protect people’s privacy.

NZTA claims that "work is underway to improve the protection of personal information within our registers, with a priority to address risks of harm. This work will involve improvements across policy, contractual, operational, and digital aspects of register access.” A customer impacted by the incident was informed by the agency that their name and address were stolen last year.

NZTA said that they “have been unable to confirm the reason why your name and address were accessed. If you feel that your safety is at risk, we encourage you to contact NZ Police directly." 

FBI Alert: Avoid Scanning This QR Code on Your Phone

 

The FBI has issued a warning about a new scam in which cybercriminals send unsolicited packages containing a QR code to people’s homes, aiming to steal personal and financial information or install malware on their devices. These packages often lack sender information, making them seem mysterious and tempting to open. 

Modus operandi 

Scammers mail unexpected packages without sender information, deliberately creating curiosity that encourages recipients to scan the included QR code. Once scanned, the code either: 

  • Redirects users to fake websites requesting personal and financial information. 
  • Automatically downloads malicious software that steals data from phones.
  • Attempts to gain unauthorized access to device permissions.

This strategy is based on old "brushing scams," in which unscrupulous vendors send unsolicited products in order to generate fake positive feedback. The new variation uses QR codes to permit more serious financial theft, rather than simple review manipulation. 

Who is at risk?

Anyone who receives a surprise package—especially one without clear sender details—could be targeted. The scam exploits curiosity and the widespread, trusting use of QR codes for payments, menus, and other daily activities. 

Safety tips

  • Do not scan QR codes from unknown or unsolicited packages.
  • Be cautious of packages you didn’t order, especially those without sender information. 
  • Inspect links carefully if you do scan a QR code—look for suspicious URLs before proceeding. 
  • Secure your online accounts and consider requesting a free credit report if you suspect you’ve been targeted. 
  • Stay vigilant in public places, as scammers also place fake QR codes on parking meters and in stores. 

This warning comes amid a broader rise in sophisticated scams, including voice message attacks where criminals impersonate recognizable figures to encourage victim interaction. The FBI emphasizes that while QR codes may appear harmless, they can pose significant security risks when used maliciously. 

Open-source Autoswagger Exposes API Authorisation Flaws

 

Autoswagger is a free, open-source tool designed to scan OpenAPI-documented APIs for broken authorization vulnerabilities. These vulnerabilities remain common, even among organizations with strong security postures, and pose a significant risk as they can be exploited easily. 

Key features and approach

API Schema Detection: Begins with a list of organization domains and scans for OpenAPI/Swagger documentation across various formats and locations. 

Endpoint Enumeration: Parses the discovered API specs to automatically generate a comprehensive list of endpoints along with their required parameters. 

Authorization Testing: Sends requests to endpoints using valid parameters and flags those that return successful responses instead of the expected HTTP 401/403, highlighting potential improper or missing access control. 

Advanced Scanning: With the --brute flag, the tool can simulate bypassing validation checks, helping to identify endpoints vulnerable to specific data-format-based validation logic. 

Sensitive Data Exposure: Reviews successful responses for exposure of sensitive data—such as PII, credentials, or internal records. Endpoints returning such data without proper authentication are included in the output report. 

Security Insights

Publicly exposing API documentation expands the attack surface. Unless essential for business, it is advised not to reveal API docs. Regular API security scanning should be performed after every development iteration to mitigate risks.

Autoswagger is freely available on GitHub, making it an accessible resource for security teams looking to automate API authorization testing and harden their defenses against common vulnerabilities.

UK Government Proposes Mandatory Reporting of Ransomware Attacks

 

The British government's proposals to amend its ransomware strategy marked a minor milestone on Tuesday, when the Home Office issued its formal answer to a survey on modifying the law, but questions remain regarding the effectiveness of the measures. 

The legislative process in the United Kingdom regularly involves public consultations. In order to address the ransomware issue, the Home Office outlined three main policy recommendations and asked for public input in order to support forthcoming legislation. 

The three main policy ideas are prohibiting payments from public sector or critical national infrastructure organisations; requiring victims to notify the government prior to making any extortion payments; and requiring all victims to report attacks to law enforcement.

Following a string of high-profile ransomware incidents that affected the nation, including several that left the shelves of several high-street grocery stores empty and one that contributed to the death of a hospital patient in London, the official response was published on Tuesday, cataloguing feedback for and against the measures.

Despite being labelled as part of the government's much-talked-about Plan for Change, the plans are identical to those made while the Conservative Party was in control prior to Rishi Sunak's snap election, which delayed the consultation's introduction. Even that plan in 2024 was late to the game. 

In 2022, ransomware attacks dominated the British government's crisis management COBR meetings. However, successive home secretaries prioritised responding to small boat crossings of migrants in the English Channel. Ransomware attacks on British organisations had increased year after year for the past five years. 

“The proposals are a sign that the government is taking ransomware more seriously, which after five years of punishing attacks on UK businesses and critical national infrastructure is very welcome,” stated Jamie MacColl, a senior research fellow at think tank RUSI. But MacColl said there remained numerous questions regarding how effective the response might be. 

Earlier this year, the government announced what the Cyber Security and Resilience Bill (CSRB) will include when it is brought to Parliament. The CSRB, which only applies to regulated critical infrastructure firms, is likely to overlap with the ransomware regulations by enhancing cyber incident reporting requirements, but it is unclear how.

Chinese Government Launches National Cyber ID Amid Privacy Concerns

 

China's national online ID service went into effect earlier this month with the promise of improving user privacy by limiting the amount of data collected by private-sector companies. However, the measures have been criticised by privacy and digital rights activists as giving the government more control over citizens' online activities.

The National Online Identity Authentication Public Service is a government-run digital identity system that will reduce the overall information footprint by allowing citizens to register with legitimate government documents and then protecting their data from Internet services. Users can choose not to utilise the service at this time, however businesses are expected to refrain from collecting users' personal information unless specifically mandated by law. 

Kendra Schaefer, a partner at Beijing-based policy consultancy Trivium China, claims that the rules, on the surface, give Internet users a centralised repository for their identity data, owned by the government, and to prevent inconsistent handling by private enterprises. 

"Basically, they're just switching the holder of data," Schaefer stated. "Users use to have to put their ID information into each new website when they logged into that website. ... It would be up to the collector of that data — for example, the platform itself — to properly encrypt it, properly transmit it to the state for verification. ... That is sort of being eliminated now.” 

Several nations are adopting regulations to establish digital identity systems that link online and offline identities. For instance, Australia expanded its government digital ID, permitted private sector participation, and strengthened privacy protections in 2024 with the adoption of the Digital ID Act of 2024. Based on Estonia's digital-government system, Singapore has long provided its people with a digital ID, SingPass, to facilitate transactions with government services. 

However, China's strategy has sparked serious concerns about escalating government monitoring under the guise of privacy and data security. According to an analysis by the Network of Chinese Human Rights Defenders (CHRD), a non-governmental collective of domestic and international Chinese human rights activists and groups, and Article 19, an international non-governmental organisation, the measures contain privacy and notification clauses, but several loopholes allow authorities to easily access private information without notification.

According to Shane Yi, a researcher with CHRD, the new Internet ID system is intended to bolster the state's monitoring apparatus rather than to safeguard individual privacy. 

The goal of the Internet ID numbers, also known as Network Numbers, is to centralise the process of confirming residents' digital identities. Real-name verification is required by the Chinese government, but since it is spread across numerous internet services, it may pose a data security threat. The Chinese regulation states that Internet platforms cannot maintain information about a citizen's true identity if they use a digital ID. The new restrictions (translation) entered into effect on July 15, 2025.

"After internet platforms access the Public Service, where users elect to use Network Numbers or Network Credentials to register and verify their real identity information, and pass verification, the internet platforms must not require that the users separately provide explicit identification information, except where laws or administrative regulations provide otherwise or the users consent to provide it," the regulation reads. 

Chinese officials say that the strategy strengthens citizens' privacy. Lin Wei, president of the Southwest University of Political Science and Law in Chongqing, China, claims that the 67 sites and applications that use the virtual ID service collect 89% less personal information. According to reports, the Ministry of Public Security in China released the academic's work.