
OpenAI Faces Court Order to Disclose 20 Million Anonymized ChatGPT Chats


OpenAI is challenging a sweeping discovery order in a legal battle that could redefine how courts balance innovation, privacy, and copyright enforcement in the age of artificial intelligence.

On Wednesday, the artificial intelligence company asked a federal judge to overturn a ruling that requires it to disclose 20 million anonymized ChatGPT conversation logs, warning that even de-identified records may reveal sensitive information about users.

In the underlying dispute, the New York Times and several other news organizations allege that OpenAI unlawfully used their copyrighted content to build its large language models.

On January 5, 2026, a federal district court in New York upheld two discovery orders requiring OpenAI to produce a substantial sample of ChatGPT interactions by the end of the year, a consequential milestone in litigation that sits at the intersection of copyright law, data privacy, and artificial intelligence.

The decision reflects a growing willingness by courts to examine the internal data practices of AI developers, even as companies argue that disclosure of this kind could have far-reaching implications for user trust and the confidentiality of their platforms. At the center of the dispute is the plaintiffs' request for access to ChatGPT conversation logs, which record both user prompts and the system's responses.

Those logs, they argue, are crucial for evaluating both the infringement claims and OpenAI's asserted defenses, including fair use. In July 2025, the plaintiffs moved for production of a 120-million-log sample; OpenAI refused, citing the scale of the request and the privacy concerns involved.

OpenAI, which maintains billions of logs as part of its normal operations, initially resisted the request. It countered by offering to produce 20 million conversations, stripped of personally identifiable and sensitive information through a proprietary de-identification process.

Plaintiffs accepted the reduced sample as an interim measure but reserved the right to pursue a broader one if the data proved insufficient. Tensions escalated in October 2025 when OpenAI changed its position, offering instead to run targeted keyword searches across the 20-million-log dataset and produce only the conversations that directly implicated the plaintiffs' works.

In OpenAI's view, limiting disclosure to filtered results would better safeguard user privacy by preventing the exposure of unrelated communications. Plaintiffs swiftly rejected this approach, filing a new motion demanding release of the entire de-identified dataset.

On November 7, 2025, U.S. Magistrate Judge Ona Wang sided with the plaintiffs, ordering OpenAI to produce the full sample and denying the company's request for reconsideration. The judge ruled that access to both relevant and ostensibly irrelevant logs was necessary for a comprehensive and fair analysis of OpenAI's claims.

Under that reasoning, even conversations that do not directly reference copyrighted material may bear on OpenAI's fair use defense. In assessing the privacy risks, the court found that reducing the dataset from billions to 20 million records, applying de-identification measures, and enforcing a standing protective order were together adequate to mitigate them.

As the litigation enters a more consequential phase and court-imposed production deadlines approach, OpenAI is represented in the matter by Keker Van Nest, Latham & Watkins, and Morrison & Foerster.

Legal observers note that the order reflects a broader judicial posture toward artificial intelligence disputes: courts are increasingly willing to compel extensive discovery, even of anonymized data, to examine how large language models are trained and whether copyrighted material is involved.

Crucially, the ruling strengthens the procedural avenues available to publishers and other content owners challenging alleged copyright violations by AI developers. It also highlights the legal risks technology companies face in retaining, processing, and releasing large repositories of user-generated data, and the vigilance such stewardship requires.

The dispute has also intensified over allegations that OpenAI failed to suspend certain data deletion practices after the litigation commenced, potentially destroying evidence relevant to claims that some users bypassed publisher paywalls through OpenAI products.

Plaintiffs claim the deletions disproportionately affected free and subscription-tier user records, raising concerns about whether evidence preservation obligations were fully met. Microsoft, which is named as a co-defendant in the case, has been required to produce more than eight million anonymized Copilot interaction logs and has not faced similar data preservation complaints.

CybersecurityNews carried a statement from Dr. Ilia Kolochenko, CEO of ImmuniWeb, on the implications of the ruling. He said that while it represents a significant legal setback for OpenAI, it could also embolden other plaintiffs to pursue similar discovery strategies or leverage stronger settlement positions in parallel proceedings.

The allegations have prompted calls for a deeper examination of OpenAI's internal data governance practices, including requests for injunctions preventing further deletions until it is clear what remains and what can be recovered. Beyond the courtroom, the case has coincided with intensifying investor scrutiny across the artificial intelligence industry.

With companies such as SpaceX and Anthropic reportedly preparing for possible public offerings at valuations that could reach hundreds of billions of dollars, market confidence increasingly depends on how well firms manage regulatory exposure, rising operational costs, and the competitive pressures of rapid artificial intelligence development.

Meanwhile, speculation about strategic acquisitions that could reshape the competitive landscape continues. Reports that OpenAI is exploring Pinterest underscore the strategic value of large volumes of user interaction data for improving product search and growing ad revenue, both increasingly critical as major technology companies compete for real-time consumer engagement and data-driven growth.

The litigation has gained added urgency in view of the news organizations' detailed allegations that a significant volume of potentially relevant data was destroyed because OpenAI failed to preserve key evidence after the lawsuit was filed.

A court filing indicates that plaintiffs learned nearly 11 months ago that large quantities of ChatGPT output logs, affecting a considerable number of Free, Pro, and Plus user conversations, had been deleted at a disproportionately high rate after the suit was filed.

Plaintiffs argue that users trying to circumvent paywalls were more likely to enable chat deletion, making this category of data the most likely to contain infringing material. The filings further assert that OpenAI offered no rationale for the deletion of roughly one-third of all user conversations after the New York Times' complaint beyond citing an apparently anomalous drop in usage around the 2024 New Year.

The news organizations also allege that OpenAI continued its routine deletion practices without implementing litigation holds, attributed two additional spikes in mass deletions to technical issues, and selectively retained only the outputs relating to accounts mentioned in the publishers' complaints.

Citing testimony from OpenAI's associate general counsel, Mike Trinh, plaintiffs argue that OpenAI preserved the records that support its own defenses while failing to preserve records that could substantiate others' claims.

According to the filings, the precise extent of the data loss remains unclear because OpenAI still refuses to disclose even basic details about what it does and does not erase, an approach the plaintiffs contrast with Microsoft's ability to preserve Copilot logs without similar difficulties.

In light of OpenAI's mass deletions and Microsoft's failure so far to produce searchable Copilot logs, the news organizations are asking the court to order Microsoft to produce those logs as soon as possible.

They have also asked the court to maintain the existing preservation orders preventing further permanent deletions of output data, to compel OpenAI to disclose the full extent to which output data has been destroyed across its products, and to clarify whether any of it can be restored and examined for the litigation.

Lego’s Move Into Smart Toys Faces Scrutiny From Play Professionals


 

After unveiling its smart brick technology, LEGO is seeking to reassure critics who argue the initiative could undermine the company's longstanding commitment to hands-on, imaginative play.

The announcement signals a significant shift in LEGO's product strategy and has sparked early debate among industry observers and play experts about whether adding digital intelligence to LEGO bricks could pull the company away from its traditional brick foundation.

A few weeks ago, Federico Begher, LEGO's Senior Vice President of Product and New Business, addressed these concerns in an interview with IGN. He explained that the introduction of smart elements is a milestone the company has considered carefully for years, and that it aims to enhance, rather than replace, the tactile creativity that has characterized the brand for generations.

With the launch of Smart Bricks, LEGO has introduced one of the most significant product developments in its history, positioning the company to reinvent how its iconic building system engages a new generation of players.

The technology, introduced at CES 2026, embeds sound, light, and motion-responsive elements directly into bricks, allowing structures to respond dynamically to touch and movement.

LEGO executives framed the initiative as a natural extension of the brand's creative ethos, intended to encourage children to go beyond static construction by designing interactive models that can be programmed and adapted in real time.

The approach has generated enthusiasm as a way to build digital literacy and problem-solving skills at an early age, but education and child-development specialists have offered more measured reactions.

Some warn that adding electronics may alter the tactile, open-ended nature of traditional brick-based play, even as others recognize its potential to expand educational possibilities for children.

At the core of LEGO's Smart Play ecosystem is a newly developed Smart Brick that replicates the dimensions of the familiar 2x4 brick while packing in the embedded electronics that make the system work.

Alongside a custom microchip, the brick houses motion and light sensors, orientation detection, integrated LEDs, and a compact speaker. It anchors a wider system that also includes Smart Minifigures and Smart Tags, each carrying its own distinct digital identifier.

Whenever these elements are combined or brought into proximity, the Smart Brick recognizes them and performs predefined behaviors or lighting effects.

Interactions between multiple Smart Bricks require no internet connectivity, cloud-based processing, or companion applications: BrickNet, a proprietary local wireless protocol, coordinates their responses on its own.

Despite occasional mentions of artificial intelligence, LEGO has emphasized that the system relies on on-device logic rather than adaptive or generative models, delivering consistent, predictable responses meant to complement traditional hands-on play, not replace it.

Smart Bricks respond to simple physical interactions: directional changes, impacts, or proximity trigger predetermined visual and audio cues. Smart Tags add contextual storytelling elements that guide play scenarios, while a falling model can set off flashing lights and sound effects.
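
To make that distinction concrete, the minimal sketch below shows the kind of fixed, on-device rule table such a system might use. The event names, effect names, and data structures are hypothetical illustrations, not LEGO's actual implementation.

```python
# Hypothetical sketch: fixed event-to-effect rules evaluated on-device,
# with no network access and no adaptive model (all names are illustrative).
from dataclasses import dataclass

@dataclass
class Effect:
    lights: str
    sound: str

# Predefined behaviors: every trigger maps to exactly one fixed response.
RULES = {
    "impact_detected":    Effect(lights="flash_red", sound="crash"),
    "tilted_upside_down": Effect(lights="pulse_blue", sound="whoosh"),
    "tag_near:starship":  Effect(lights="scroll_white", sound="engine_hum"),
}

def handle_event(event: str) -> Effect | None:
    """Return the predefined effect for an event, or None if no rule matches."""
    return RULES.get(event)

if __name__ == "__main__":
    for event in ("impact_detected", "tag_near:starship", "unknown_event"):
        print(event, "->", handle_event(event))
```

Because every response is looked up from a static table, behavior stays deterministic, which is consistent with LEGO's description of non-generative, on-device logic.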

Academics have offered cautious praise for this combination of digital responsiveness and tangible construction. Professor Andrew Manches, a specialist in children and technology at the University of Edinburgh, described the system as technologically advanced, but added that imaginative play ultimately relies on a child's ability to develop narratives on their own rather than follow scripted prompts.

LEGO plans to release Smart Bricks on March 1, 2026, with Star Wars-themed sets arriving first and preorders opening January 9 through the company's retail channels and select partners.

The electronic components position the sets as premium items, ranging from entry-level options priced under $100 to large collections priced over $150. Some child advocacy groups have expressed concern that the preprogrammed responses in LEGO's BrickNet system could subtly restrict creative freedom or introduce privacy risks.

LEGO maintains that its offline, encrypted system avoids many of the vulnerabilities associated with app-dependent smart toys that rely on internet connections. The company has introduced interactive elements into its portfolio gradually, and its digital strategy as a whole has sought to balance technological innovation with the enduring appeal of physical, open-ended play.

As the debate over Smart Bricks continues, the more fundamental question is how the world's largest toy maker will manage the tension between tradition and innovation.

LEGO has no near-term plans to replace classic bricks with the Smart Play system. Executives insist the technology is designed to add to classic bricks rather than supplant them, positioning it as a complementary layer that families can choose to engage with or ignore.

By keeping the system fully offline and avoiding app dependency, the company has tried to address the data security and privacy concerns that increasingly shape conversations about connected toys.

Industry analysts say LEGO's premium pricing and phased rollout, starting with internationally popular licensed themes, suggest a market-tested approach rather than a wholesale change in the company's identity.

Whether Smart Bricks succeed over the long term will depend on whether they earn the trust of parents, educators, and children once they enter homes later this year, reinforcing LEGO's reputation for fostering creativity while adapting to the expectations of a digitally native generation.

Privacy Takes Center Stage in WhatsApp’s Latest Feature Update

 


With billions of users worldwide, WhatsApp is a crucial platform for both personal and professional communication. That reach, however, has also made it an increasingly attractive target for cybercriminals.

Recent security research has highlighted emerging threats that exploit the platform's ecosystem. One technique, known as GhostPairing, links a victim's account to a malicious browser session through a covert pairing link.

Separate studies have shown that the app's contact discovery functionality can be abused to expose large numbers of phone numbers, profile photos, and other identifying information, raising fresh concerns about large-scale data harvesting.

Although WhatsApp relies heavily on end-to-end encryption to safeguard message content and has added protections such as passkey-secured backups and privacy-conscious artificial intelligence features, security experts emphasize that user awareness remains essential to staying protected.

The platform ships with a variety of built-in tools that, when properly configured, can significantly enhance account security and reduce exposure to evolving digital threats.

WhatsApp has continued to strengthen its end-to-end encryption framework and expand its portfolio of privacy-centric security controls in response to these evolving risks. Security analysts believe, however, that limited user awareness often undermines these safeguards, with many account holders never properly configuring the protections already available to them.

Properly enabled, WhatsApp's native privacy settings can help prevent unauthorised access, curb data misuse, and reduce the risk of account takeover. This matters all the more because the platform is routinely used to exchange sensitive information such as Aadhaar details, bank credentials, one-time passwords, personal images, and official documents.

Experts note that lax privacy configurations expose sensitive personal data to fraud, identity theft, and social engineering, while even a modest effort to review and tighten privacy controls can significantly improve one's security posture. Against this backdrop, the introduction of Meta AI within WhatsApp has become a focus of concern for both users and privacy advocates.

The AI chatbot, accessed via a persistent blue icon on the Chats screen, lets users generate images and receive responses to prompts, but its constant presence has sparked concerns over data handling, consent management, and user control.

Although WhatsApp says the chatbot processes only messages that users intentionally share with it, many users are uneasy that Meta AI cannot be disabled or removed, especially given unclear policies around data retention, AI training, and possible third-party access.

The company advises against sharing sensitive personal information with the chatbot, implicitly acknowledging that such data may still be used to refine the underlying model.

Against this backdrop, WhatsApp has rolled out a feature that adds a layer of confidentiality to selected conversations rather than addressing the AI-integration concerns directly. The setting blocks Meta AI from those threads and reinforces end-to-end encryption for user-to-user conversations.

Critics contend that while the feature helps safeguard sensitive information, it still allows screenshots and manual saving of content, which limits its ability to provide comprehensive protection.

The feature may temporarily reduce the anxiety surrounding Meta AI's involvement in private conversations, but experts claim it does little to resolve deeper concerns about transparency, consent, and control over the collection and use of data by AI systems.

WhatsApp will eventually need to address those concerns more directly as it rolls out further updates. Meanwhile, the app continues to serve as a primary channel for workplace communication, and security experts warn that convenience has quietly outpaced caution.

Many professionals still run their accounts on default settings, leaving risks of hijacking, impersonation, and data theft that extend beyond personal privacy to client confidentiality and brand reputation.

Several layers of security are widely available, including two-step verification, device management, biometric app locks, encrypted backups, and regular privacy check-ups, yet all remain underutilized despite their proven effectiveness against common takeovers and phishing attempts.

Experts emphasize that technical controls alone are not enough. Human error remains one of the most exploited weaknesses, particularly as attackers increasingly use WhatsApp for social engineering scams, voice phishing, and executive impersonation.

Adoption of structured phishing simulations and awareness programs has risen in recent years, and industry data suggests they can significantly reduce breach costs and employee susceptibility to attacks.

In a climate where messaging apps are both indispensable tools and high-value targets, organizations increasingly need to safeguard sensitive conversations through disciplined use of WhatsApp's built-in protections and sustained investment in user training.

Taken together, these developments underscore the widening gap between WhatsApp's security capabilities and how the app is actually used. As it evolves into a hybrid space for personal communication, business coordination, and AI-assisted interactions, privacy and data protection concerns continue to grow.

Attack techniques have advanced over the years, and their combination with the opaque integration of artificial intelligence and widespread reliance on default settings has created an environment in which users bear ever more responsibility for their own security.

WhatsApp has made progress in introducing meaningful safeguards and has announced further updates, but their ultimate impact depends on informed adoption, transparent governance, and sustained scrutiny from regulators and the security community.

Even as clearer boundaries are established around data use and user control, protecting conversations on one of the world's most popular messaging platforms will remain not only a technical challenge but also a test of trust between users and the service they rely on every day.

Chinese Hacking Group Breaches Email Systems Used by Key U.S. House Committees: Report

 

A cyber espionage group believed to be based in China has reportedly gained unauthorized access to email accounts used by staff working for influential committees in the U.S. House of Representatives, according to a report by the Financial Times published on Wednesday. The information was shared by sources familiar with the investigation.

The group, known as Salt Typhoon, is said to have infiltrated email systems used by personnel associated with the House China committee, along with aides serving on committees overseeing foreign affairs, intelligence, and armed services. The report did not specify the identities of the staff members affected.

Reuters said it was unable to independently confirm the details of the report. Responding to the allegations, Chinese Embassy spokesperson Liu Pengyu criticized what he described as “unfounded speculation and accusations.” The Federal Bureau of Investigation declined to comment, while the White House and the offices of the four reportedly targeted committees did not immediately respond to media inquiries.

According to one source cited by the Financial Times, it remains uncertain whether the attackers managed to access the personal email accounts of lawmakers themselves. The suspected intrusions were reportedly discovered in December.

Members of Congress and their staff, particularly those involved in overseeing the U.S. military and intelligence apparatus, have historically been frequent targets of cyber surveillance. Over the years, multiple incidents involving hacking or attempted breaches of congressional systems have been reported.

In November, the Senate Sergeant at Arms alerted several congressional offices to a “cyber incident” in which hackers may have accessed communications between the nonpartisan Congressional Budget Office and certain Senate offices. Separately, a 2023 report by the Washington Post revealed that two senior U.S. lawmakers were targeted in a hacking campaign linked to Vietnam.

Salt Typhoon has been a persistent concern for the U.S. intelligence community. The group, which U.S. officials allege is connected to Chinese intelligence services, has been accused of collecting large volumes of data from Americans’ telephone communications and intercepting conversations, including those involving senior U.S. politicians and government officials.

China has repeatedly rejected accusations of involvement in such cyber spying activities. Early last year, the United States imposed sanctions on alleged hacker Yin Kecheng and the cybersecurity firm Sichuan Juxinhe Network Technology, accusing both of playing a role in Salt Typhoon’s operations.

How Gender Politics Are Reshaping Data Privacy and Personal Information




Recent legal and administrative actions in the United States are reshaping how personal data is recorded, shared, and accessed by government systems. For transgender and gender diverse individuals, these changes carry heightened risks, as identity records and healthcare information are increasingly entangled with political and legal enforcement mechanisms.

One of the most visible shifts involves federal identity documentation. Updated rules now require U.S. passport applicants to list sex as assigned at birth, eliminating earlier flexibility in gender markers. Courts have allowed this policy to proceed despite legal challenges. Passport data does not function in isolation. It feeds into airline systems, border controls, employment verification processes, financial services, and law enforcement databases. When official identification does not reflect an individual’s lived identity, transgender and gender diverse people may face repeated scrutiny, increased risk of harassment, and complications during travel or routine identity checks. From a data governance perspective, embedding such inconsistencies also weakens the accuracy and reliability of federal record systems.

Healthcare data has become another major point of concern. The Department of Justice has expanded investigations into medical providers offering gender related care to minors by applying existing fraud and drug regulation laws. These investigations focus on insurance billing practices, particularly the use of diagnostic codes to secure coverage for treatments. As part of these efforts, subpoenas have been issued to hospitals and clinics across the country.

Importantly, these subpoenas have sought not only financial records but also deeply sensitive patient information, including names, birth dates, and medical intake forms. Although current health privacy laws permit disclosures for law enforcement purposes, privacy experts warn that this exception allows personal medical data to be accessed and retained far beyond its original purpose. Many healthcare providers report that these actions have created a chilling effect, prompting some institutions to restrict or suspend gender related care due to legal uncertainty.

Other federal agencies have taken steps that further intensify concern. The Federal Trade Commission, traditionally focused on consumer protection and data privacy, has hosted events scrutinizing gender affirming healthcare while giving limited attention to patient confidentiality. This shift has raised questions about how privacy enforcement priorities are being set.

As in person healthcare becomes harder to access, transgender and gender diverse individuals increasingly depend on digital resources. Research consistently shows that the vast majority of transgender adults rely on the internet for health information, and a large proportion use telehealth services for medical care. However, this dependence on digital systems also exposes vulnerabilities, including limited broadband access, high device costs, and gaps in digital literacy. These risks are compounded by the government’s routine purchase of personal data from commercial data brokers.

Privacy challenges extend into educational systems as well. Courts have declined to establish a national standard governing control over students’ gender related data, leaving unresolved questions about who can access, store, and disclose sensitive information held by schools.

Taken together, changes to identity documents, aggressive access to healthcare data, and unresolved data protections in education are creating an environment of increased surveillance for transgender and gender diverse individuals. While some state level actions have successfully limited overly broad data requests, experts argue that comprehensive federal privacy protections are urgently needed to safeguard sensitive personal data in an increasingly digital society.

Inside the Hidden Market Where Your ChatGPT and Gemini Chats Are Sold for Profit

 

Millions of users may have unknowingly exposed their most private conversations with AI tools after cybersecurity researchers uncovered a network of browser extensions quietly harvesting and selling chat data.

Here’s a reminder many people forget: an AI assistant is not your friend, not a financial expert, and definitely not a doctor or therapist. It’s simply someone else’s computer, running in a data center and consuming energy and water. What you share with it matters.

That warning has taken on new urgency after cybersecurity firm Koi uncovered a group of Google Chrome extensions that were quietly collecting user conversations with AI tools and selling that data to third parties. According to Koi, “Medical questions, financial details, proprietary code, personal dilemmas,” were being captured — “all of it, sold for ‘marketing analytics purposes.’”

This issue goes far beyond just ChatGPT or Google Gemini. Koi says the extensions indiscriminately target multiple AI platforms, including “Claude, Microsoft Copilot, Perplexity, DeepSeek, Grok (xAI) and Meta AI.” In other words, using any browser-based AI assistant could expose sensitive conversations if these extensions are installed.

The mechanism is built directly into the extensions. Koi explains that “for each platform, the extension includes a dedicated ‘executor’ script designed to intercept and capture conversations.” This data harvesting is enabled by default through hardcoded settings, with no option for users to turn it off. As Koi warns, “There is no user-facing toggle to disable this. The only way to stop the data collection is to uninstall the extension entirely.”

Once installed, the extensions monitor browser activity. When a user visits a supported AI platform, the extension injects a specific script — such as chatgpt.js, claude.js, or gemini.js — into the page. The result is total visibility into AI usage. As Koi puts it, this includes “Every prompt you send to the AI. Every response you receive. Conversation identifiers and timestamps. Session metadata. The specific AI platform and model used.”

Alarmingly, this behavior was not part of the extension’s original design. It was introduced later through updates, while the privacy policy remained vague and misleading. Although the tool is marketed as a privacy-focused product, Koi says it does the opposite. The policy admits: “We share the Web Browsing Data with our affiliated company,” described as a data broker “that creates insights which are commercially used and shared.”

The main extension involved is Urban VPN Proxy, which alone has around six million users. After identifying its behavior, Koi searched for similar code and found it reused across multiple products from the same publisher, spanning both Chrome and Microsoft Edge.

Affected Chrome Web Store extensions include:
  • Urban VPN Proxy – 6,000,000 users
  • 1ClickVPN Proxy – 600,000 users
  • Urban Browser Guard – 40,000 users
  • Urban Ad Blocker – 10,000 users
On Microsoft Edge Add-ons, the list includes:
  • Urban VPN Proxy – 1,323,622 users
  • 1ClickVPN Proxy – 36,459 users
  • Urban Browser Guard – 12,624 users
  • Urban Ad Blocker – 6,476 users
Despite this activity, most of these extensions carry “Featured” badges from Google and Microsoft. These labels suggest that the tools have been reviewed and meet quality standards — a signal many users trust when deciding what to install.

Koi and other experts argue that this highlights a deeper problem with extension privacy disclosures. While Urban VPN does technically mention some of this data collection, it’s easy to miss. During setup, users are told the extension processes “ChatAI communication” along with “pages you visit” and “security signals,” supposedly “to provide these protections.”

Digging deeper, the privacy policy spells it out more clearly: “‘AI Inputs and Outputs. As part of the Browsing Data, we will collect the prompts and outputs queried by the End-User or generated by the AI chat provider, as applicable.’” It also states plainly: “‘We also disclose the AI prompts for marketing analytics purposes.’”

The extensions, Koi warns, “remained live for months while harvesting some of the most personal data users generate online.” The advice is blunt: “if you have any of these extensions installed, uninstall them now. Assume any AI conversations you've had since July 2025 have been captured and shared with third parties.”

U.S. Authorities Shut Down Online Network Selling Fake Identity Templates

 



United States federal authorities have taken down an online operation accused of supplying tools used in identity fraud across multiple countries. The case centers on a Bangladeshi national who allegedly managed several websites that sold digital templates designed to imitate official government identification documents.

According to U.S. prosecutors, the accused individual, Zahid Hasan, is a 29-year-old resident of Dhaka. He is alleged to have operated an online business that distributed downloadable files resembling authentic documents such as U.S. passports, social security cards, and state driver’s licenses. These files were not physical IDs but editable digital templates that buyers could modify by inserting personal details and photographs.

Court records indicate that the operation ran for several years, beginning in 2021 and continuing until early 2025. During this period, the websites reportedly attracted customers from around the world. Investigators estimate that more than 1,400 individuals purchased these templates, generating nearly $2.9 million in revenue. Despite the scale of the operation, individual items were sold at relatively low prices, with some templates costing less than $15.

Law enforcement officials state that such templates are commonly used to bypass identity verification systems. Once edited, the counterfeit documents can be presented to banks, cryptocurrency platforms, and online services that rely on document uploads to confirm a user’s identity. This type of fraud poses serious risks, as it enables financial crimes, account takeovers, and misuse of digital platforms.

The investigation intensified after U.S. authorities traced a transaction in which Bitcoin was exchanged for fraudulent templates by a buyer located in Montana. Following this development, federal agents moved to seize multiple domains allegedly connected to the operation. These websites are now under government control and no longer accessible for illegal activity.

The case involved extensive coordination between agencies. The FBI’s Billings Division and Salt Lake City Cyber Task Force led the investigation, with support from the FBI’s International Operations Division. Authorities in Bangladesh, including the Dhaka Metropolitan Police’s Counterterrorism and Transnational Crime Unit, also assisted in tracking the alleged activities.

A federal grand jury has returned a nine-count indictment against Hasan. The charges include multiple counts related to the distribution of false identification documents, passport fraud, and social security fraud. If convicted, the penalties could include lengthy prison sentences, substantial fines, and supervised release following incarceration.

The case is being prosecuted by Assistant U.S. Attorney Benjamin Hargrove. As with all criminal proceedings, the charges represent allegations, and the accused is presumed innocent unless proven guilty in court.

Cybersecurity experts note that the availability of such tools highlights the growing sophistication of digital fraud networks. The case is a stark reminder of the importance of international cooperation and continuous monitoring to protect identity systems and prevent large-scale misuse of personal data.



Aadhaar Verification Rules Amended as India Strengthens Data Compliance


 

India's flagship digital identity infrastructure, Aadhaar, is set to undergo significant changes to its regulatory framework following a formal amendment to the Aadhaar (Targeted Determination of Services and Benefits Management) Regulations, 2.0.

The revision formally recognizes facial authentication as a legally acceptable method of verifying a person's identity, marking a significant departure from traditional biometric methods such as fingerprints and iris scans.

The updated regulations introduce a stronger compliance framework focused on explicit user consent, data minimisation, and privacy protection. The government appears to have made a deliberate effort to align Aadhaar's operational model with evolving expectations around biometric governance, data protection, and the responsible use of digital identity systems.

As part of the overhaul, the Unique Identification Authority of India (UIDAI) has introduced a new digital identity tool, the Aadhaar Verifiable Credential, to enable secure and tamper-proof identity verification.

The authority has also tightened the compliance framework governing offline Aadhaar verification, placing greater accountability on entities that verify identities without real-time access to UIDAI's systems. These measures were introduced as amendments to the Aadhaar (Authentication and Offline Verification) Regulations, 2021, formally published by UIDAI on December 9 in the Gazette and on its website.

UIDAI has also launched a dedicated mobile application that gives individuals greater control over how their Aadhaar data is shared, underscoring the shift toward a user-centric, privacy-conscious identity ecosystem.

The newly released rules officially authorise facial recognition as a valid means of authentication while tightening consent requirements, purpose limitations, and data-use obligations to ensure compliance with the Digital Personal Data Protection Act.

The revisions also mark a substantial shift in the scope of Aadhaar's deployment, extending its application to a wider range of private-sector uses under stricter regulation and carrying its usefulness beyond welfare delivery and government services. The change coincides with UIDAI's preparations to launch the newly designed Aadhaar mobile application.

According to officials, the application will support Aadhaar-based identification in routine scenarios such as event access, hotel registrations, deliveries, and physical access control, without continuous real-time authentication against a central database.

In addition to explicitly acknowledging facial authentication alongside the existing biometric and one-time-password mechanisms, the updated framework strengthens the provisions governing offline Aadhaar verification, allowing identity checks to be carried out in a controlled manner without a direct connection to UIDAI's systems.

The revised framework also broadens offline Aadhaar verification beyond the limited QR code scanning previously used. Under the notification, UIDAI has authorised several verification methods, including QR code-based checks, paperless offline e-KYC, Aadhaar Verifiable Credential validation, electronic Aadhaar authentication, and paper-based offline verification.

Additional mechanisms may be approved over time, but the most significant aspect of the expansion is the introduction of the Aadhaar Verifiable Credential, a digitally signed, cryptographically secured document containing select demographic data. Because it can be verified locally without constantly consulting UIDAI's central databases, the credential aims to reduce systemic dependency on live authentication while addressing long-standing privacy and data security concerns.
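
For illustration only, the sketch below shows how a locally verifiable, digitally signed credential can work in principle. It uses an Ed25519 signature from the Python cryptography library; the field names and key handling are hypothetical and do not represent UIDAI's actual credential format.

```python
# Hypothetical sketch: verifying a signed credential offline, without
# contacting any central server. Fields and keys are illustrative only.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

# Issuer side (normally done once by the issuing authority).
issuer_key = Ed25519PrivateKey.generate()
credential = {"name": "Asha K", "year_of_birth": 1994, "state": "Karnataka"}
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# Verifier side: only the issuer's public key is needed, and the check
# happens entirely on the local device.
public_key_bytes = issuer_key.public_key().public_bytes_raw()

def verify_offline(payload: bytes, signature: bytes, public_key_bytes: bytes) -> bool:
    """Return True if the credential payload was signed by the trusted issuer."""
    public_key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

print(verify_offline(payload, signature, public_key_bytes))                # True
print(verify_offline(payload + b"tampered", signature, public_key_bytes))  # False
```

Because the check depends only on the issuer's public key, a verifier can confirm that the credential is genuine and untampered without any live connection to a central database.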

The regulations also introduce offline face verification, which allows a locally captured photograph of an Aadhaar holder to be compared with the photo embedded in the credential without transmitting biometric information over an external network. The amendments further establish a formal regulatory framework for the entities that conduct these checks, known as Offline Verification Seeking Entities.

UIDAI now requires organizations seeking to conduct offline Aadhaar verification to register, submit detailed operational and technical disclosures, and adhere to prescribed procedural safeguards. The authority has been granted a range of powers, including the ability to review applications, conduct inspections, obtain clarifications, and suspend or revoke access in cases of noncompliance.

The enforcement provisions clearly outline the grounds for action, which include misuse of verification facilities, deviation from UIDAI standards, failure to cooperate with audits, and facilitation of identity-related abuse. Notably, the rules require that affected entities be given an opportunity to present their case before punitive measures are imposed, reinforcing due process and fairness.

Private-sector Aadhaar verification remains largely unstructured today: hotels, housing societies, and other service providers routinely collect photocopies or images of identity documents, which are then shared informally among vendors, security personnel, and front-desk staff with little clarity about how they will be retained or deleted.

The new registration framework aims to replace this fragmented practice with a regulated one, in which private organizations are formally onboarded as Offline Verification Seeking Entities and required to use UIDAI-approved verification flows instead of storing Aadhaar copies, physically or digitally.

A key element of UIDAI's upcoming mobile application in this transition is selective disclosure, which lets residents choose what information is shared for a particular purpose. A hotel might receive only a guest's name and age bracket, a telecom provider only an address, or a delivery service only a name and photograph, rather than a full identity record.
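
A minimal sketch of how such purpose-based selective disclosure could be modelled is shown below; the purpose names and fields are hypothetical and chosen only to mirror the examples above, not drawn from UIDAI's application.

```python
# Hypothetical sketch: releasing only the credential fields a given
# verifier's purpose requires. Purposes and fields are illustrative only.
FULL_RECORD = {
    "name": "Asha K",
    "age_bracket": "25-34",
    "address": "HSR Layout, Bengaluru",
    "photo_ref": "photo_0421.jpg",
}

# Each purpose maps to the minimal set of fields it is allowed to see.
DISCLOSURE_POLICY = {
    "hotel_checkin": ["name", "age_bracket"],
    "telecom_kyc": ["name", "address"],
    "delivery_handover": ["name", "photo_ref"],
}

def disclose(record: dict, purpose: str) -> dict:
    """Return only the fields permitted for the stated purpose."""
    allowed = DISCLOSURE_POLICY.get(purpose, [])
    return {field: record[field] for field in allowed if field in record}

print(disclose(FULL_RECORD, "hotel_checkin"))    # name and age bracket only
print(disclose(FULL_RECORD, "unknown_purpose"))  # {} - nothing is released
```

The design choice is simply that the verifier never receives the full record; it receives only the projection its declared purpose allows.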

The application will also store Aadhaar details for family members, allow biometrics to be locked and unlocked instantly, and let demographic information be updated directly, reducing reliance on paper-based processes. Control thus shifts increasingly toward individuals, reducing the amount of personal data exposed to service providers and curbing the indefinite circulation of identity documents.

The regulatory push is part of a broader ecosystem-building effort by UIDAI. In November, the authority held a webinar to prepare for the rollout, attended by more than 250 organizations including hospitality chains, logistics companies, real estate managers, and event planners.

The outreach comes amid ongoing concerns about vulnerabilities in the Aadhaar ecosystem. According to data from the Indian Cyber Crime Coordination Centre, Aadhaar Enabled Payment System transactions accounted for an estimated 11 percent of cyber-enabled financial fraud in 2023.

Several states have reported cases in which cloned fingerprints linked to Aadhaar were used to siphon beneficiary funds, most often after data leaked from public records or inadequately secured systems. Some privacy experts warn that extending Aadhaar-based authentication into private access environments could increase systemic risk if safeguards are not developed in parallel.

Earlier this year, researchers from civil society organizations highlighted that anonymized Aadhaar-linked datasets remain at risk of re-identification and that the current data protection law does not sufficiently regulate anonymized data, meaning the new controls could break down when such data is repurposed or processed downstream.

The amendments recalibrate Aadhaar's role within India's rapidly growing digital economy, balancing greater usability against tighter governance. By formalizing offline verification, restricting data use through selective disclosure, and imposing clearer obligations on private actors, the revised regulations aim to curb informal practices that have long heightened privacy and security risks.

Their success, however, will depend largely on disciplined implementation, continued regulatory oversight, and the willingness of industry stakeholders to abandon legacy habits of indiscriminate data collection. For service providers, the transition offers clear advantages: more efficient, privacy-preserving verification methods that reduce compliance risk.

Residents, in turn, gain greater control over their personal data in everyday interactions with providers. As Aadhaar moves deeper into private-sector settings, continued transparency from UIDAI, regular audits of verification entities, and public awareness of consent and data rights will be critical to preserving trust and ensuring that convenience does not come at the expense of security.

If the changes are implemented as planned, they could serve as a blueprint for how large-scale digital identity systems can evolve responsibly in an era of heightened data protection expectations.

U.S. Startup Launches Mobile Service That Requires No Personal Identification

 



A newly launched U.S. mobile carrier is questioning long-standing telecom practices by offering phone service without requiring customers to submit personal identification. The company, Phreeli, presents itself as a privacy-focused alternative in an industry known for extensive data collection.

Phreeli officially launched in early December and describes its service as being built with privacy at its core. Unlike traditional telecom providers that ask for names, residential addresses, birth dates, and other sensitive information, Phreeli limits its requirements to a ZIP code, a chosen username, and a payment method. According to the company, no customer profiles are created or sold, and user data is not shared for advertising or marketing purposes.

Customers can pay using standard payment cards, or opt for cryptocurrency if they wish to reduce traceable financial links. The service operates entirely on a prepaid basis, with no contracts involved. Monthly plans range from lower-cost options for light usage to higher-priced tiers for customers who require more mobile data. The absence of contracts aligns with the company’s approach, as formal agreements typically require verified personal identities.

Rather than building its own cellular infrastructure, Phreeli operates as a Mobile Virtual Network Operator. This means it provides service by leasing network access from an established carrier, in this case T-Mobile. This model allows Phreeli to offer nationwide coverage without owning physical towers or equipment.

Addressing legal concerns, the company states that U.S. law does not require mobile carriers to collect customer names in order to provide service. To manage billing while preserving anonymity, Phreeli says it uses a system that separates payment information from communication data. This setup relies on cryptographic verification to confirm that accounts are active, without linking call records or data usage to identifiable individuals.
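
The article does not describe the mechanism in detail, but one way such a separation could work in principle is sketched below: the billing side issues a signed, opaque token asserting that an account is paid up, and the service side verifies the signature without ever learning who paid. The token format, claims, and key handling are hypothetical and are not Phreeli's actual design.

```python
# Hypothetical sketch: a billing service signs an opaque "account active"
# token; the network side verifies it without learning the payer's identity.
import json, time, secrets
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

billing_key = Ed25519PrivateKey.generate()   # held only by the billing system
service_pubkey = billing_key.public_key()    # held by the network-facing service

def issue_token(plan: str, days_valid: int) -> dict:
    """Billing side: attest that *some* account is active, with no identity."""
    claims = {
        "account_ref": secrets.token_hex(8),          # random, unlinkable reference
        "plan": plan,
        "expires": int(time.time()) + days_valid * 86400,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    return {"claims": claims, "signature": billing_key.sign(payload)}

def token_is_valid(token: dict) -> bool:
    """Service side: check only the signature and expiry, nothing else."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    try:
        service_pubkey.verify(token["signature"], payload)
    except InvalidSignature:
        return False
    return token["claims"]["expires"] > time.time()

token = issue_token(plan="basic", days_valid=30)
print(token_is_valid(token))  # True: the account is active, identity unknown
```

The point of the sketch is only that account validity and identity can live in separate systems; the real service's cryptographic design may differ substantially.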

The company’s privacy policy notes that information will only be shared when necessary to operate the service or when legally compelled. By limiting the amount of data collected from the start, Phreeli argues that there is little information available even in the event of legal requests.

Phreeli was founded by Nicholas Merrill, who previously operated an internet service provider and became involved in a prolonged legal dispute after challenging a government demand for user information. That experience reportedly influenced the company’s data-minimization philosophy.

While services that prioritize anonymity are often associated with misuse, Phreeli states that it actively monitors for abusive behavior. Accounts involved in robocalling or scams may face restrictions or suspension.

As concerns about digital surveillance and commercial data harvesting grow, Phreeli’s launch sets the stage for a broader discussion about privacy in everyday communication. Whether this model gains mainstream adoption remains uncertain, but it introduces a notable shift in how mobile services can be structured in the United States.



FTC Refuses to Lift Ban on Stalkerware Company that Exposed Sensitive Data


A stalkerware maker remains banned from the surveillance industry after a data breach exposed information about its customers and the people they were spying on. Consumer spyware company Support King still cannot sell surveillance software, the US Federal Trade Commission (FTC) said.

The FTC has denied founder Scott Zuckerman's request to cancel the ban, which also applies to the subsidiaries OneClickMonitor and SpyFone.

The FTC announced the decision in a press release after Zuckerman petitioned the agency in July 2025 to cancel the order.

In 2021, the FTC banned Zuckerman from “offering, promoting, selling, or advertising any surveillance app, service, or business” and barred him from running other stalkerware businesses. Zuckerman was also required to delete all data stored by SpyFone and to undergo audits verifying cybersecurity measures across his ventures. Samuel Levine, then acting director of the FTC's Bureau of Consumer Protection, said that the "stalkerware was hidden from device owners, but was fully exposed to hackers who exploited the company’s slipshod security."

In his petition, Zuckerman said the FTC mandate has caused monetary losses and made it difficult to conduct other businesses, even though Support King is defunct and he now operates only a restaurant while planning other ventures.

The ban stemmed from a 2018 incident in which a researcher discovered a SpyFone Amazon S3 bucket that left sensitive data such as selfies, chats, texts, contacts, passwords, logins, and audio recordings exposed online. The leaked data included 44,109 email addresses.

Levine also described SpyFone as “a brazen brand name for a surveillance business that helped stalkers steal private information."

According to TechCrunch, Zuckerman began operating another stalkerware firm after the 2021 order. In 2022, TechCrunch obtained breached data from the stalkerware application SpyTrac.

The data showed that SpyTrac was run by freelance developers with direct links to Support King, in an apparent attempt to escape the FTC ban. The breached data also contained records from SpyFone, which Support King was supposed to have deleted, as well as access keys to the cloud storage of OneClickMonitor, another stalkerware application.

Indian Government Proposes Compulsory Location Tracking in Smartphones, Faces Backlash


Government faces backlash over location-tracking proposal

The Indian government is pushing a telecom industry proposal that would compel smartphone makers to enable satellite-based location tracking, kept active around the clock, for surveillance purposes.

Tech giants Samsung, Google, and Apple have opposed the move, citing privacy concerns. Privacy debates have intensified in India since the government was forced to repeal an order mandating that smartphone companies pre-install a state-run cyber-safety application on all devices, after activists and opposition leaders raised concerns about possible spying.

About the proposal 

The government has been concerned that investigative agencies do not receive accurate locations when they send legal requests to telecom companies during investigations. Operators currently rely on cellular tower data, which provides only an approximate area and can be inaccurate.

The Cellular Operators Association of India (COAI), which represents Bharti Airtel and Reliance Jio, suggested that accurate user locations could be provided if the government mandated smartphone makers to enable A-GPS technology, which combines cellular data with satellite signals.

Strong opposition from tech giants 

If implemented, location services would be activated on smartphones with no option to disable them. Samsung, Google, and Apple strongly oppose the proposal. No comparable mandate to track user location exists anywhere else in the world, according to the India Cellular & Electronics Association (ICEA), a lobbying group representing Google and Apple.

Reuters reached out to India's IT and home ministries for comment on the telecom industry's proposal but received no reply. Digital forensics expert Junade Ali said the "proposal would see phones operate as a dedicated surveillance device."

According to technology experts, A-GPS, which is normally activated only when specific apps are running or an emergency call is placed, could give authorities location data accurate enough to track a person to within a meter.

Telecom vs government 

Globally, governments are constantly looking for new ways to track the movements and data of mobile users. In Russia, all mobile phones are required to come with a state-sponsored communications app installed. With 735 million smartphones as of mid-2025, India is the world's second-largest mobile market.

According to Counterpoint Research, more than 95% of these devices run Google's Android operating system, with the remainder running Apple's iOS.

Apple and Google cautioned that their user base includes members of the armed forces, judges, business executives, and journalists whose devices store sensitive data, and that the proposed location tracking would jeopardize their security.

According to the telecom industry, even the outdated method of location tracking is becoming troublesome because smartphone manufacturers notify users via pop-up messages that their "carrier is trying to access your location."



Brave Experiments With Automated AI Browsing Under Tight Security Checks

 



Brave has started testing a new feature that allows its built-in assistant, Leo, to carry out browsing activities on behalf of the user. The capability is still experimental and is available only in the Nightly edition of the browser, which serves as Brave’s testing environment for early features. Users must turn on the option manually through Brave’s internal settings page before they can try it.

The feature introduces what Brave calls agentic AI browsing. In simple terms, it allows Leo to move through websites, gather information, and complete multi-step tasks without constant user input. Brave says the tool is meant to simplify activities such as researching information across many sites, comparing products online, locating discount codes, and creating summaries of current news. The company describes this trial as its initial effort to merge active AI support with everyday browsing.

Brave has stated openly that this technology comes with serious security concerns. Agentic systems can be manipulated by malicious websites through a method known as prompt injection, which attempts to make the AI behave in unsafe or unintended ways. The company warns that users should not rely on this mode for important decisions or any activity involving sensitive information, especially while it remains in early testing.

To limit these risks, Brave has placed the agent in its own isolated browser profile. This means the AI does not share cookies, saved logins, or browsing data from the user’s main profile. The agent is also blocked from areas that could create additional vulnerabilities. It cannot open the browser’s settings page, visit sites that do not use HTTPS, interact with the Chrome Web Store, or load pages that Brave’s safety system identifies as dangerous. Whenever the agent attempts a task that might expose the user to risk, the browser will display a warning and request the user’s confirmation.
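
To make the shape of these restrictions concrete, the following is a minimal sketch, assuming a hypothetical `checkNavigation` helper and rule set, of how such a policy gate could be expressed. It illustrates the approach described above and is not Brave's actual code.

```typescript
// Hypothetical policy gate for an agentic browsing session (illustrative only;
// not Brave's implementation). It rejects navigation targets the article says
// the agent is blocked from: internal settings pages, non-HTTPS sites, the
// extension store, and anything a safety blocklist flags as dangerous.
type PolicyDecision =
  | { allowed: true }
  | { allowed: false; reason: string; requiresUserConfirmation?: boolean };

// Illustrative host list; the real blocklist would be maintained by the browser.
const BLOCKED_HOSTS = new Set(["chrome.google.com", "chromewebstore.google.com"]);

function checkNavigation(
  rawUrl: string,
  isFlaggedDangerous: (host: string) => boolean,
): PolicyDecision {
  let url: URL;
  try {
    url = new URL(rawUrl);
  } catch {
    return { allowed: false, reason: "Malformed URL" };
  }
  if (url.protocol === "brave:" || url.protocol === "chrome:") {
    return { allowed: false, reason: "Internal settings pages are off-limits to the agent" };
  }
  if (url.protocol !== "https:") {
    return { allowed: false, reason: "The agent may only visit HTTPS sites" };
  }
  if (BLOCKED_HOSTS.has(url.hostname)) {
    return { allowed: false, reason: "Extension store access is blocked" };
  }
  if (isFlaggedDangerous(url.hostname)) {
    // Risky destinations trigger a warning and an explicit user confirmation.
    return { allowed: false, reason: "Site flagged as dangerous", requiresUserConfirmation: true };
  }
  return { allowed: true };
}
```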

Brave has added further oversight through what it calls an alignment checker. This is a separate monitoring system that evaluates whether the AI’s actions match what the user intended. Since the checker operates independently, it is less exposed to manipulation that may affect the main agent. Brave also plans to use policy-based restrictions and models trained to resist prompt-injection attempts to strengthen the system’s defenses. According to the company, these protections are designed so that the introduction of AI does not undermine Brave’s existing privacy promises, including its no-logs policy and its blocking of ads and trackers.
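
As a rough illustration of the alignment-checker idea, the sketch below vets each proposed agent action against the user's stated goal with a second, independent evaluator before the action runs. The interface and names are assumptions made for illustration, not Brave's design.

```typescript
// Hypothetical, simplified alignment check (illustrative assumption, not Brave's code):
// an evaluator independent of the main agent judges whether a proposed action still
// serves the user's original request before it is allowed to execute.
interface ProposedAction {
  description: string; // e.g. "submit the form on example.com/checkout"
  targetUrl: string;
}

interface AlignmentEvaluator {
  // Returns true if the action plausibly serves the user's stated goal.
  isConsistent(userGoal: string, action: ProposedAction): Promise<boolean>;
}

async function executeWithOversight(
  userGoal: string,
  action: ProposedAction,
  evaluator: AlignmentEvaluator,
  run: (a: ProposedAction) => Promise<void>,
  askUser: (a: ProposedAction) => Promise<boolean>,
): Promise<void> {
  const aligned = await evaluator.isConsistent(userGoal, action);
  if (!aligned) {
    // Suspicious steps are surfaced to the user instead of being executed silently.
    const approved = await askUser(action);
    if (!approved) return;
  }
  await run(action);
}
```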

Users interested in testing the feature can enable it by installing Brave Nightly and turning on the “Brave’s AI browsing” option from the experimental flags page. Once activated, a new button appears inside Leo’s chat interface that allows users to launch the agentic mode. Brave has asked testers to share feedback and has temporarily increased payments on its HackerOne bug bounty program for security issues connected to AI browsing.


Your Phone Is Being Tracked in Ways You Can’t See: One Click Shows the Truth

 



Many people believe they are safe online once they disable cookies, switch on private browsing, or limit app permissions. Yet these steps do not prevent one of the most persistent tracking techniques used today. Modern devices reveal enough technical information for websites to recognise them with surprising accuracy, and users can see this for themselves with a single click using publicly available testing tools.

This practice is known as device fingerprinting. It collects many small and unrelated pieces of information from your phone or computer, such as the type of browser you use, your display size, system settings, language preferences, installed components, and how your device handles certain functions. None of these details identify you directly, but when a large number of them are combined, they create a pattern that is specific to your device. This allows trackers to follow your activity across different sites, even when you try to browse discreetly.
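
A minimal browser-side sketch, assuming a tracker simply concatenates and hashes commonly readable attributes, shows how such a pattern can be assembled. Real fingerprinting scripts gather far more signals, including canvas and audio rendering quirks, but the principle is the same.

```typescript
// Minimal sketch of how a site could combine ordinary browser attributes into a
// device fingerprint (illustrative only). Each value is harmless in isolation,
// but hashed together they form an identifier that is often stable across visits.
async function computeFingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,                                // browser and OS details
    navigator.language,                                 // language preference
    String(screen.width),                               // display size
    String(screen.height),
    String(screen.colorDepth),
    Intl.DateTimeFormat().resolvedOptions().timeZone,   // system time zone
    String(navigator.hardwareConcurrency ?? ""),        // CPU core count
  ].join("|");

  // Hash the combined string so the tracker stores a compact identifier.
  const bytes = new TextEncoder().encode(signals);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```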

The risk is not just about being observed. Once a fingerprint becomes associated with a single real-world action, such as logging into an account or visiting a page tied to your identity, that unique pattern can then be connected back to you. From that point onward, any online activity linked to that fingerprint can be tied to the same person. This makes fingerprinting an effective tool for profiling behaviour over long periods of time.

Growing concerns around online anonymity are making this issue more visible. Recent public debates about identity checks, age verification rules, and expanded monitoring of online behaviour have already placed digital privacy under pressure. Fingerprinting adds an additional layer of background tracking that does not rely on traditional cookies and cannot be easily switched off.

This method has also spread far beyond web browsers. Many internet-connected devices, including smart televisions and gaming systems, can reveal similar sets of technical signals that help build a recognisable device profile. As more home electronics become connected, these identifiers grow even harder for users to avoid.

Users can test their own exposure through tools such as the Electronic Frontier Foundation’s browser evaluation page. By selecting the option to analyse your browser, you will either receive a notice that your setup looks common or that it appears unique compared to others tested. A unique result means your device stands out strongly among the sample and can likely be recognised again. Another testing platform demonstrates just how many technical signals a website can collect within seconds, listing dozens of attributes that contribute to a fingerprint.

Some browsers attempt to make fingerprinting more difficult by randomising certain data points or limiting access to high-risk identifiers. These protections reduce the accuracy of device recognition, although they cannot completely prevent it. A virtual private network can hide your network address, but it cannot block the internal characteristics that form a fingerprint.
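
The sketch below illustrates the general idea behind such randomisation, assuming hypothetical `coarsen` and `jitter` helpers applied to reported values; actual browser defences are considerably more sophisticated.

```typescript
// Illustrative sketch of the randomisation defence some browsers use: reported
// values are coarsened or lightly jittered so the resulting fingerprint is less
// stable between sessions (not any browser's actual implementation).
function coarsen(value: number, step: number): number {
  // Round to a coarse bucket so slightly unusual values blend into common ones.
  return Math.round(value / step) * step;
}

function jitter(value: number, maxNoise: number): number {
  // Add small per-session noise so the value changes from visit to visit.
  const noise = Math.floor(Math.random() * (2 * maxNoise + 1)) - maxNoise;
  return value + noise;
}

const reportedWidth = coarsen(window.screen.width, 100);             // bucketed display width
const reportedCores = jitter(navigator.hardwareConcurrency ?? 4, 1); // blurred core count
```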

Tracking also happens through mobile apps and background services. Many applications collect usage and technical data, and privacy labels do not always make this clear to users. Studies have shown that complex privacy settings and permission structures often leave people unaware of how much information their devices share.

Users should also be aware of design features that shift them out of protected environments. For example, when performing a search through a mobile browser, some pages include prompts that encourage the user to open a separate application instead of continuing in the browser. These buttons are typically placed near navigation controls, making accidental taps more likely. Moving into a dedicated search app places users in a different data-collection environment, where protections offered by the browser may no longer apply.

While there is no complete way to avoid fingerprinting, users can limit their exposure by choosing browsers with built-in privacy protections, reviewing app permissions frequently, and avoiding unnecessary redirections into external applications. Ultimately, the choice depends on how much value an individual places on privacy, but understanding how this technology works is the first step toward reducing risk.

Big Tech’s New Rule: AI Age Checks Are Rolling Out Everywhere

 



Large online platforms are rapidly shifting to biometric age assurance systems, creating a scenario where users may lose access to their accounts or risk exposing sensitive personal information if automated systems make mistakes.

Online platforms have struggled for decades with how to screen underage users from adult-oriented content. Everything from graphic music tracks on Spotify to violent clips circulating on TikTok has long been available with minimal restrictions.

Recent regulatory pressure has changed this landscape. Laws such as the United Kingdom’s Online Safety Act and new state-level legislation in the United States have pushed companies including Reddit, Spotify, YouTube, and several adult-content distributors to deploy AI-driven age estimation and identity verification technologies. Pornhub’s parent company, Aylo, is also reevaluating whether it can comply with these laws after being blocked in more than a dozen US states.

These new systems require users to hand over highly sensitive personal data. Age estimation relies on analyzing one or more facial photos to infer a user’s age. Verification is more exact, but demands that the user upload a government-issued ID, which is among the most sensitive forms of personal documentation a person can share online.

Both methods depend heavily on automated facial recognition algorithms. The absence of human oversight or robust appeals mechanisms magnifies the consequences when these tools misclassify users. Incorrect age estimation can cut off access to entire categories of content or trigger more severe actions. Similar facial analysis systems have been used for years in law enforcement and in consumer applications such as Google Photos, with well-documented risks and misidentification incidents.

Refusing these checks often comes with penalties. Many services will simply block adult content until verification is completed. Others impose harsher measures. Spotify, for example, warns that accounts may be deactivated or removed altogether if age cannot be confirmed in regions where the platform enforces a minimum age requirement. According to the company, users are given ninety days to complete an ID check before their accounts face deletion.

This shift raises pressing questions about the long-term direction of these age enforcement systems. Companies frequently frame them as child-safety measures, but users are left wondering how long these platforms will protect or delete the biometric data they collect. Corporate promises can be short-lived. Numerous abandoned websites still leak personal data years after shutting down. The 23andMe bankruptcy renewed fears among genetic testing customers about what happens to their information if a company collapses. And even well-intentioned apps can create hazards. A safety-focused dating application called Tea ended up exposing seventy-two thousand users’ selfies and ID photos after a data breach.

Even when companies publicly state that they do not retain facial images or ID scans, risks remain. Discord recently revealed that age verification materials, including seventy thousand IDs, were compromised after a third-party contractor called 5CA was breached.

Platforms assert that user privacy is protected by strong safeguards, but the details often remain vague. When asked how YouTube secures age assurance data, Google offered only a general statement claiming that it employs advanced protections and allows users to adjust their privacy settings or delete data. It did not specify the precise security controls in place.

Spotify has outsourced its age assurance system to Yoti, a digital identity provider. The company states that it does not store facial images or ID scans submitted during verification. Yoti receives the data directly and deletes it immediately after the evaluation, according to Spotify. The platform retains only minimal information about the outcome: the user’s age in years, the method used, and the date the check occurred. Spotify adds that it uses measures such as pseudonymization, encryption, and limited retention policies to prevent unauthorized access. Yoti publicly discloses some technical safeguards, including use of TLS 1.2 by default and TLS 1.3 where supported.
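
The sketch below illustrates what such a minimal-retention record could look like, assuming hypothetical field names and a keyed pseudonymisation step; it reflects the approach Spotify describes rather than any vendor's actual schema.

```typescript
// Hypothetical illustration of the data-minimisation approach described above:
// the platform keeps only the outcome of an age check, never the face image or
// ID scan. Field names are assumptions made for illustration.
interface AgeCheckOutcome {
  pseudonymousUserId: string; // pseudonymised reference, not the real account ID
  ageInYears: number;         // the only fact derived from the check that is kept
  method: "facial_estimation" | "document_check";
  checkedAt: string;          // ISO 8601 timestamp of the check
}

function pseudonymise(id: string): string {
  // Placeholder only: a real system would use a keyed hash (HMAC) with the key
  // stored separately from the outcome records.
  let h = 0;
  for (const ch of id) h = (h * 31 + ch.codePointAt(0)!) >>> 0;
  return "user-" + h.toString(16);
}

function recordOutcome(
  realUserId: string,
  ageInYears: number,
  method: AgeCheckOutcome["method"],
): AgeCheckOutcome {
  // Note what is absent: no image, no ID scan, no raw biometric data.
  return {
    pseudonymousUserId: pseudonymise(realUserId),
    ageInYears,
    method,
    checkedAt: new Date().toISOString(),
  };
}
```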

Privacy specialists argue that these assurances are insufficient. Adam Schwartz, privacy litigation director at the Electronic Frontier Foundation, told PCMag that facial scanning systems represent an inherent threat, regardless of whether they are being used to predict age, identity, or demographic traits. He reiterated the organization’s stance supporting a ban on government deployment of facial recognition and strict regulation for private-sector use.

Schwartz raises several issues. Facial age estimation is imprecise by design, meaning it will inevitably classify some adults as minors and deny them access. Errors in facial analysis also tend to fall disproportionately on specific groups. Misidentification incidents involving people of color and women are well documented. Google Photos once mislabeled a Black software engineer and his friend as animals, underlining systemic flaws in training data and model accuracy. These biases translate directly into unequal treatment when facial scans determine whether someone is allowed to enter a website.

He also warns that widespread facial scanning increases privacy and security risks because faces function as permanent biometric identifiers. Unlike passwords, a person cannot replace their face if it becomes part of a leaked dataset. Schwartz notes that at least one age verification vendor has already suffered a breach, underscoring material vulnerabilities in the system.

Another major problem is the absence of meaningful recourse when AI misjudges a user’s age. Spotify’s approach illustrates the dilemma. If the algorithm flags a user as too young, the company may lock the account, enforce viewing restrictions, or require a government ID upload to correct the error. This places users in a difficult position, forcing them to choose between potentially losing access or surrendering more sensitive data.

Users should not upload identity documents unless strictly required, should check a platform's published privacy and retention statements before complying, and should use account recovery channels if they believe an automated decision is wrong. Companies and regulators, for their part, must do better at reducing vendor exposure, increasing transparency, and ensuring that appeals are effective.

Despite these growing concerns, users continue to find ways around verification tools. Discord users have discovered that uploading photos of fictional characters can bypass facial age checks. Virtual private networks remain a viable method for accessing age-restricted platforms such as YouTube, just as they help users access content that is regionally restricted. Alternative applications like NewPipe offer similar functionality to YouTube without requiring formal age validation, though these tools often lack the refinement and features of mainstream platforms.


How Oversharing, Weak Passwords, and Digital IDs Make You an Easy Target and What You Can Do




The more we share online, the easier it becomes for attackers to piece together our personal lives. Photos, location tags, daily routines, workplace details, and even casual posts can be combined to create a fairly accurate picture of who we are. Cybercriminals use this information to imitate victims, trick service providers, and craft convincing scams that look genuine. When someone can guess where you spend your time or what services you rely on, they can more easily pretend to be you and manipulate systems meant to protect you. Reducing what you post publicly is one of the simplest steps to lower this risk.

Weak passwords add another layer of vulnerability, but a recent industry assessment has shown that the problem is not only with users. Many of the most visited websites do not enforce strong password requirements. Some platforms do not require long passwords, special characters, or case sensitivity. This leaves accounts easier to break into through automated attacks. Experts recommend that websites adopt stronger password rules, introduce passkey options, and guide users with clear indicators of password strength. Users can improve their own security by relying on password managers, creating long unique passwords, and enabling two factor authentication wherever possible.
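
As an illustration of the kind of server-side rules such an assessment points to, the following sketch checks a password against assumed thresholds; the specific requirements are illustrative, not a published standard.

```typescript
// Illustrative server-side password policy (thresholds are assumptions).
// Returning a list of problems lets the site guide users instead of silently
// accepting weak passwords.
function passwordProblems(pw: string): string[] {
  const problems: string[] = [];
  if (pw.length < 12) problems.push("use at least 12 characters");
  if (!/[a-z]/.test(pw) || !/[A-Z]/.test(pw)) problems.push("mix upper- and lower-case letters");
  if (!/[0-9]/.test(pw)) problems.push("include at least one digit");
  if (!/[^A-Za-z0-9]/.test(pw)) problems.push("include at least one special character");
  return problems; // an empty array means the password meets the policy
}

// Example: surface clear feedback to the user.
console.log(passwordProblems("summer2024"));
// -> ["use at least 12 characters", "mix upper- and lower-case letters",
//     "include at least one special character"]
```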

Concerns about device security are also increasing. Several governments have begun reviewing whether certain networking devices introduce national security risks, especially when the manufacturers are headquartered in countries that have laws allowing state access to data. These investigations have sparked debates over how consumer hardware is produced, how data flows through global supply chains, and whether companies can guarantee independence from government requests. For everyday users, this tension means it is important to select routers and other digital devices that receive regular software updates, publish clear security policies, and have a history of addressing vulnerabilities quickly.

Another rising threat is ransomware. Criminal groups continue to target both individuals and large organisations, encrypting data and demanding payment for recovery. Recent cases involving individuals with cybersecurity backgrounds show how profitable illicit markets can attract even trained professionals. Because attackers now operate with high levels of organisation, users and businesses should maintain offline backups, restrict access within internal networks, and test their response plans in advance.

Privacy concerns are also emerging in the travel sector, where airline data practices are drawing scrutiny. Legal restrictions prevent travel companies from selling passenger information directly to government programs, so several airlines rely jointly on an intermediary that acts as a broker. Reports show that this broker had been distributing data for years but only recently registered itself as a data broker, as legally required. Users can request removal from this data-sharing system by emailing the broker's privacy address and completing identity verification, and they should keep a copy of all correspondence and confirmations for reference.

Finally, several governments are exploring digital identity systems that would allow residents to store official identification on their phones. Although convenient, this approach raises significant privacy risks. Digital IDs place sensitive information in one central location, and if the surrounding protections are weak, the data could be misused for tracking or monitoring. Strong legal safeguards, transparent data handling rules, and external audits are essential before such systems are implemented.

Experts warn that centralizing identity increases the potential impact of a breach and may facilitate tracking unless strict limits, independent audits, and user controls are enforced. Policymakers must balance convenience with strong technical and legal protections. 


Practical, immediate steps one should follow:

1. Reduce public posts that reveal routines or precise locations.

2. Use a password manager and unique, long passwords.

3. Turn on two factor authentication for important accounts.

4. Maintain offline backups and test recovery procedures.

5. Check privacy policies of travel brokers and submit opt-out requests if you want to limit data sharing.

6. Prefer devices with clear update policies and documented security practices.

These measures lower the chance that routine online activity becomes a direct route into your accounts or identity. Overall, users can strengthen their protection by sharing less online, reviewing how their travel data is handled, and staying informed about the implications of digital identification. Small, consistent changes greatly reduce the likelihood of becoming a victim of cyber threats.