
Big Tech’s New Rule: AI Age Checks Are Rolling Out Everywhere

 



Large online platforms are rapidly shifting to biometric age assurance systems, creating a scenario where users may lose access to their accounts or risk exposing sensitive personal information if automated systems make mistakes.

Online platforms have struggled for decades with how to screen underage users from adult-oriented content. Everything from explicit music tracks on Spotify to violent clips circulating on TikTok has long been available with minimal restrictions.

Recent regulatory pressure has changed this landscape. Laws such as the United Kingdom’s Online Safety Act and new state-level legislation in the United States have pushed companies including Reddit, Spotify, YouTube, and several adult-content distributors to deploy AI-driven age estimation and identity verification technologies. Pornhub’s parent company, Aylo, is meanwhile reevaluating whether it can comply with these laws, having already cut off access in more than a dozen US states.

These new systems require users to hand over highly sensitive personal data. Age estimation relies on analyzing one or more facial photos to infer a user’s age. Verification is more exact, but demands that the user upload a government-issued ID, which is among the most sensitive forms of personal documentation a person can share online.
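To make the distinction concrete, here is a minimal sketch of the data each method asks a user to hand over. The types and field names are hypothetical, invented for illustration rather than drawn from any vendor’s actual API:

```typescript
// Hypothetical request/response shapes for the two age-assurance flows.
// Age *estimation*: facial photos go in, a statistical guess comes out.
interface AgeEstimationRequest {
  selfies: Blob[];        // one or more facial photos captured in the browser
}

interface AgeEstimationResult {
  estimatedAge: number;   // e.g. 24.3 — an inference, not a verified fact
  confidence: number;     // models are uncertain; adults near a threshold
                          // are routinely misclassified
}

// Age *verification*: a government ID goes in, an exact date of birth
// comes out — which is why the input is so much more sensitive.
interface AgeVerificationRequest {
  idDocument: Blob;       // passport or driver's license scan
  selfie?: Blob;          // often paired with a liveness check
}

interface AgeVerificationResult {
  dateOfBirth: string;    // exact, taken from the document
  documentValid: boolean; // whether the ID passed authenticity checks
}
```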

Both methods depend heavily on automated facial recognition algorithms. The absence of human oversight or robust appeals mechanisms magnifies the consequences when these tools misclassify users. Incorrect age estimation can cut off access to entire categories of content or trigger more severe actions. Similar facial analysis systems have been used for years in law enforcement and in consumer applications such as Google Photos, with well-documented risks and misidentification incidents.

Refusing these checks often comes with penalties. Many services will simply block adult content until verification is completed. Others impose harsher measures. Spotify, for example, warns that accounts may be deactivated or removed altogether if age cannot be confirmed in regions where the platform enforces a minimum age requirement. According to the company, users are given ninety days to complete an ID check before their accounts face deletion.

This shift raises pressing questions about the long-term direction of these age enforcement systems. Companies frequently frame them as child-safety measures, but users are left wondering whether platforms will adequately protect, and eventually delete, the biometric data they collect. Corporate promises can be short-lived. Numerous abandoned websites still leak personal data years after shutting down. The 23andMe bankruptcy renewed fears among genetic testing customers about what happens to their information if a company collapses. And even well-intentioned apps can create hazards. A safety-focused dating application called Tea ended up exposing seventy-two thousand users’ selfies and ID photos after a data breach.

Even when companies publicly state that they do not retain facial images or ID scans, risks remain. Discord recently revealed that age verification materials, including seventy thousand IDs, were compromised after a third-party contractor called 5CA was breached.

Platforms assert that user privacy is protected by strong safeguards, but the details often remain vague. When asked how YouTube secures age assurance data, Google offered only a general statement claiming that it employs advanced protections and allows users to adjust their privacy settings or delete data. It did not specify the precise security controls in place.

Spotify has outsourced its age assurance system to Yoti, a digital identity provider. The company states that it does not store facial images or ID scans submitted during verification. Yoti receives the data directly and deletes it immediately after the evaluation, according to Spotify. The platform retains only minimal information about the outcome: the user’s age in years, the method used, and the date the check occurred. Spotify adds that it uses measures such as pseudonymization, encryption, and limited retention policies to prevent unauthorized access. Yoti publicly discloses some technical safeguards, including use of TLS 1.2 by default and TLS 1.3 where supported.
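As a rough illustration of the data-minimization pattern Spotify describes, the sketch below keeps only the check’s outcome, keyed by a pseudonym rather than the raw account ID. The schema and the HMAC-based pseudonymization are assumptions made for illustration, not Spotify’s or Yoti’s actual implementation:

```typescript
import { createHmac, randomBytes } from "node:crypto";

// Minimal sketch of the retention record described above: the outcome of
// the check is kept, the biometric inputs are not, and the record is keyed
// by a pseudonym rather than the raw account ID. Field names and the
// HMAC-based pseudonymization are illustrative assumptions only.
interface AgeCheckRecord {
  pseudonym: string;                           // HMAC of the account ID
  ageInYears: number;                          // the outcome — no photo, no scan
  method: "facial_estimation" | "id_document";
  checkedAt: string;                           // date of the check (ISO 8601)
}

// In production this key would live in a managed secrets store; without it,
// the pseudonym cannot be linked back to an account.
const PSEUDONYM_KEY = randomBytes(32);

function recordOutcome(
  accountId: string,
  ageInYears: number,
  method: AgeCheckRecord["method"],
): AgeCheckRecord {
  const pseudonym = createHmac("sha256", PSEUDONYM_KEY)
    .update(accountId)
    .digest("hex");
  // Note what is absent: the selfie or ID scan never reaches this record.
  return {
    pseudonym,
    ageInYears,
    method,
    checkedAt: new Date().toISOString().slice(0, 10),
  };
}

console.log(recordOutcome("account-123", 31, "facial_estimation"));
```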

Privacy specialists argue that these assurances are insufficient. Adam Schwartz, privacy litigation director at the Electronic Frontier Foundation, told PCMag that facial scanning systems represent an inherent threat, regardless of whether they are being used to predict age, identity, or demographic traits. He reiterated the organization’s stance supporting a ban on government deployment of facial recognition and strict regulation for private-sector use.

Schwartz raises several issues. Facial age estimation is imprecise by design, meaning it will inevitably classify some adults as minors and deny them access. Errors in facial analysis also tend to fall disproportionately on specific groups. Misidentification incidents involving people of color and women are well documented. Google Photos once mislabeled a Black software engineer and his friend as gorillas, underlining systemic flaws in training data and model accuracy. These biases translate directly into unequal treatment when facial scans determine whether someone is allowed to enter a website.

He also warns that widespread facial scanning increases privacy and security risks because faces function as permanent biometric identifiers. Unlike passwords, a person cannot replace their face if it becomes part of a leaked dataset. Schwartz notes that at least one age verification vendor has already suffered a breach, underscoring material vulnerabilities in the system.

Another major problem is the absence of meaningful recourse when AI misjudges a user’s age. Spotify’s approach illustrates the dilemma. If the algorithm flags a user as too young, the company may lock the account, enforce viewing restrictions, or require a government ID upload to correct the error. This places users in a difficult position, forcing them to choose between potentially losing access or surrendering more sensitive data.

Users can take some precautions: do not upload identity documents unless a platform genuinely requires them, check its published privacy and retention statements before complying, and use account recovery channels if an automated decision seems wrong. Companies and regulators, for their part, must do better at reducing vendor exposure, increasing transparency, and ensuring appeals are effective.

Despite these growing concerns, users continue to find ways around verification tools. Discord users have discovered that uploading photos of fictional characters can bypass facial age checks. Virtual private networks remain a viable method for accessing age-restricted platforms such as YouTube, just as they help users access content that is regionally restricted. Alternative applications like NewPipe offer similar functionality to YouTube without requiring formal age validation, though these tools often lack the refinement and features of mainstream platforms.


Smartwatch on the Stand: How Wearable Data Is Turning Into Courtroom Evidence

 

Fitness trackers and smartwatches are increasingly becoming digital witnesses in legal proceedings, with biometric data from Apple Watch, Fitbit, and similar devices now regularly used as evidence in murder, injury, and insurance cases in the United States and abroad.

Wearables transform into legal liabilities 

Your smartwatch creates minute-by-minute digital testimony that prosecutors, personal injury lawyers, and insurance companies can subpoena. The granular biometric and location data automatically syncing to manufacturer clouds transforms wearable devices into potential witnesses that users never intended to create. 

Criminal cases demonstrate how powerful this evidence can be. In the Dabate murder case, a suspect's alibi collapsed when his wife's Fitbit showed her moving well after he claimed she was killed. Similarly, an Apple Watch in Australia pinpointed a victim's exact death window, directly contradicting the suspect's testimony.

These devices record GPS coordinates, movement patterns, heart rate spikes, and sleep disruption with forensic precision, creating evidence more detailed than browsing history. Unlike deleted texts, this data automatically syncs to manufacturer servers where companies retain it for extended periods under their data policies. 

Federal courts evaluate smartwatch data discovery requests under a "narrow, proportional, and relevant" standard. Personal injury lawsuits increasingly subpoena activity logs to prove or disprove disability claims, where step counts either support or destroy injury narratives.

Traffic accident cases utilize GPS data to establish whether individuals were walking, driving, or stationary during critical moments. Major manufacturers like Apple and Garmin explicitly state in privacy policies that they'll comply with lawful requests regardless of user preferences. The third-party doctrine means data shared with cloud providers enjoys weaker privacy protections than information stored on locked phones. 

Protection strategies 

Users can limit legal exposure through strategic privacy settings without eliminating functionality. Key recommendations include reviewing companion app privacy settings to minimize cloud syncing, enabling device-level encryption and strong authentication, and treating smartwatch data like financial records that could face future legal scrutiny. 

Additional protective measures involve limiting third-party app permissions and understanding manufacturer data retention policies before information becomes discoverable evidence. With over 34% of adults now wearing fitness trackers daily, the judicial system's reliance on wearable data will only intensify.

UK Police’s Passport Photo Searches Spark Privacy Row Amid Facial Recognition Surge

 

Police in the UK have carried out hundreds of facial recognition searches using the national passport photo database — a move campaigners call a “historic breach of the right to privacy,” The Telegraph has reported.

Civil liberties groups say the number of police requests to tap into passport and immigration photo records for suspect identification has soared in recent years. Traditionally, searches for facial matches were limited to police mugshot databases. Now, dozens of forces are turning to the Home Office’s store of more than 50 million passport images to match suspects from CCTV or doorbell footage.

Government ministers argue the system helps speed up criminal investigations. Critics, however, say it is edging Britain closer to an “Orwellian” surveillance state.

A major concern is that passport holders are never informed if their photo has been used in a police search. The UK’s former biometrics watchdog has warned that the practice risks being disproportionate and eroding public trust.

According to figures obtained via freedom of information requests by Big Brother Watch, passport photo searches rose from just two in 2020 to 417 in 2023. In the first ten months of 2024 alone, police had already conducted 377 such searches. Searches of the immigration photo database — which contains images gathered by Border Force — also increased sharply, reaching 102 in 2023, roughly seven times the 2020 figure.

The databases contain images of people who have never been convicted of a crime, yet campaigners say the searches take place with minimal legal oversight. While officials claim the technology is reserved for serious offences, evidence suggests it is being used for a wide range of investigations.

Currently, there is no national guidance from the Home Office or the College of Policing on the use of facial recognition in law enforcement. Big Brother Watch has issued a legal warning to the Government, threatening court action over what it calls an “unlawful breach of privacy.”

Silkie Carlo, director of Big Brother Watch, said:

“The Government has taken all of our passport photos and secretly turned them into mugshots to build a giant, Orwellian police database without the public’s knowledge or consent and with absolutely no democratic or legal mandate. This has led to repeated, unjustified and ongoing intrusions on the entire population’s privacy.”

Sir Keir Starmer has voiced support for expanding police use of facial recognition — including live street surveillance, retrospective image searches, and a new app for on-the-spot suspect identification.

Sir David Davis, Conservative MP, accused the Government of creating a “biometric digital identity system by the backdoor” without Parliament’s consent. The position of Biometrics Commissioner, responsible for oversight of such technology, was vacant for nearly a year until July.

Government officials maintain that facial recognition is already bound by existing laws, and stress its role in catching dangerous criminals. They say a detailed plan for its future use — including the legal framework and safeguards — will be published in the coming months.

China’s Ministry of State Security Warns of Biometric Data Risks in Crypto Reward Schemes

 

China’s Ministry of State Security (MSS) has issued a strong warning over the collection of biometric information by foreign companies in exchange for cryptocurrency rewards, describing the practice as a potential danger to both personal privacy and national security. The announcement, released on the MSS’s official WeChat account, highlighted reported incidents of large-scale iris scanning linked to digital token distributions. 

Although the statement did not specifically name the organization involved, the description closely matches Worldcoin, a project developed by Tools for Humanity. Worldcoin has drawn global attention for its use of spherical “orb” devices that scan an individual’s iris to generate a unique digital identity, which is then tied to distributions of its cryptocurrency, WLD. 

According to the MSS, the transfer of highly sensitive biometric data to foreign entities carries risks that extend far beyond its intended use. Such information could be misused in ways that compromise personal safety or even national security. The agency’s remarks add to a growing chorus of global concerns about how biometric data is handled, particularly within the cryptocurrency and decentralized finance sectors. 

Worldcoin, launched in 2023, has already faced investigations and regulatory pushback in several countries. Concerns have largely centered around data protection practices and whether users fully understand and consent to the collection of their biometric information. In May, Indonesian regulators suspended the company’s permit, citing irregularities in its identity verification services. The project later announced a voluntary pause of its proof-of-personhood operations in Indonesia to clarify compliance requirements. 

China has long maintained a restrictive approach toward cryptocurrencies, banning trading and initial coin offerings while warning against speculative risks. The MSS’s latest statement broadens this position, suggesting that data collection tied to crypto incentives is not only a consumer protection issue but also one of national security—particularly when foreign companies are involved in managing or storing sensitive personal data.  

The issue reflects a wider international debate about balancing innovation with privacy. Proponents of biometric-based verification argue it offers a scalable way to distinguish real human users from bots in the Web3 ecosystem. Critics counter that once biometric information is collected, the possibility of data leaks, misuse, or unauthorized access remains, even with encryption.

Similar privacy concerns have emerged globally. In Europe, regulators are reviewing Worldcoin’s activities under the GDPR framework, while Kenya suspended new registrations in 2023. The MSS has now urged Chinese citizens to be cautious about offers that involve trading personal data for cryptocurrency, signaling that further oversight of such projects could follow.

India Expands Aadhaar Authentication, Allowing Private Sector Access to Biometric Data

 

The Indian government has introduced significant changes to its Aadhaar authentication system, expanding its use to a wider range of industries. Previously restricted to sectors like banking, telecommunications, and public utilities, Aadhaar verification will now be available to businesses in healthcare, travel, hospitality, and e-commerce. Officials claim this change will enhance service efficiency and security, but privacy advocates have raised concerns about potential misuse of biometric data. 

On January 31, 2025, the Ministry of Electronics and Information Technology (MeitY) notified the Aadhaar Authentication for Good Governance (Social Welfare, Innovation, Knowledge) Amendment Rules, 2025, revising the 2020 rules of the same name. These amendments allow both public and private organizations to integrate Aadhaar-based authentication into their operations, provided their services align with the public interest. The government states that this update is designed to improve identity verification processes and ensure smoother service delivery across various sectors.

One major change in the updated framework is the removal of a rule that previously linked Aadhaar authentication to preventing financial fraud. This revision broadens the scope of verification, allowing more businesses to use Aadhaar data for customer identification. The Unique Identification Authority of India (UIDAI), the agency overseeing Aadhaar, will continue to manage the authentication system. The scale of Aadhaar’s use has grown significantly. 

Government records indicate that Aadhaar authentication had been used in nearly 130 billion transactions by January 2025, up sharply from just over 109 billion a year earlier. Under the new regulations, companies wishing to adopt Aadhaar authentication must submit detailed applications outlining their intended use. These requests will be reviewed by the relevant government department and UIDAI before receiving approval. Despite the government’s assurance that all applications will undergo strict scrutiny, critics argue that the review process lacks clarity.

Kamesh Shekar, a policy expert at The Dialogue, a technology-focused think tank, has called for more transparency regarding the criteria used to assess these requests. He pointed out that the Supreme Court has previously raised concerns about potential misuse of Aadhaar data. These concerns stem from past legal challenges to Aadhaar’s use. In 2018, the Supreme Court struck down Section 57 of the Aadhaar Act, which had previously allowed private entities to use Aadhaar for identity verification. 

A later amendment in 2019 permitted voluntary authentication, but that provision remains contested in court. Now, with an even broader scope for Aadhaar verification, experts worry that insufficient safeguards could put citizens’ biometric data at risk. While the expansion of Aadhaar authentication is expected to simplify verification for businesses and consumers, the ongoing debate over privacy and data security underscores the need for stricter oversight. 

As Aadhaar continues to evolve, ensuring a balance between convenience and personal data protection will be crucial.

The Intersection of Travel and Data Privacy: A Growing Concern

 

The evolving relationship between travel and data privacy is sparking significant debate among travellers and experts. A recent Spanish regulation requiring hotels and Airbnb hosts to collect personal guest data has particularly drawn criticism, with some privacy-conscious tourists likening it to invasive surveillance. This backlash highlights broader concerns about the expanding use of personal data in travel.

Privacy Concerns Across Europe

This trend is not confined to Spain. Across the European Union, regulations now mandate biometric data collection, such as fingerprints, for non-citizens entering the Schengen zone. Airports and border control points increasingly rely on these measures to streamline security and enhance surveillance. Advocates argue that such systems improve safety and efficiency, with Chris Jones of Statewatch noting their roots in international efforts to combat terrorism, driven by UN resolutions and supported by major global powers like the US, China, and Russia.

Challenges with Biometric and Algorithmic Systems

Despite their intended benefits, systems leveraging Passenger Name Record (PNR) data and biometrics often fall short of expectations. Algorithmic misidentifications can lead to unjust travel delays or outright denials. Biometric systems also face significant logistical and security challenges. While they are designed to reduce processing times at borders, system failures frequently result in delays. Additionally, storing such sensitive data introduces serious risks. For instance, the 2019 Marriott data breach exposed unencrypted passport details of millions of guests, underscoring the vulnerabilities in large-scale data storage.

The EU’s Ambitious Biometric Database

The European Union’s effort to create the world’s largest biometric database has sparked concern among privacy advocates. Such a trove of data is an attractive target for both hackers and intelligence agencies. The increasing use of facial recognition technology at airports—from Abu Dhabi’s Zayed International to London Heathrow—further complicates the privacy landscape. While some travelers appreciate the convenience, others fear the long-term implications of this data being stored and potentially misused.

Global Perspectives on Facial Recognition

Prominent figures like Elon Musk openly support these technologies, envisioning their adoption in American airports. However, critics argue that such measures often prioritize efficiency over individual privacy. In the UK, stricter regulations have limited the use of facial recognition systems at airports. Yet, alternative tracking technologies are gaining momentum, with trials at train stations exploring non-facial data to monitor passengers. This reflects ongoing innovation by technology firms seeking to navigate legal restrictions.

Privacy vs. Security: A Complex Trade-Off

According to Gus Hosein of Privacy International, borders serve as fertile ground for experiments in data-driven travel technologies, often at the expense of individual rights. These developments point to the inevitability of data-centric travel but also emphasize the need for transparent policies and safeguards. Balancing security demands with privacy concerns remains a critical challenge as these technologies evolve.

The Choice for Travelers

For travelers, the trade-off between convenience and the protection of personal information grows increasingly complex with every technological advance. As governments and companies push forward with data-driven solutions, the debate over privacy and transparency will only intensify, shaping the future of travel for years to come.

Massive Data Breach Exposes Sensitive Information of Indian Law Enforcement Officials

 

Recently, a significant data breach compromised the personal information of thousands of law enforcement officials and police officer applicants in India. Discovered by security researcher Jeremiah Fowler, the breach exposed sensitive details such as fingerprints, facial scans, signatures, and descriptions of tattoos and scars. Alarmingly, around the same time, cybercriminals advertised the sale of similar biometric data on Telegram. 

The breach was traced to an exposed web server linked to ThoughtGreen Technologies, an IT firm with offices in India, Australia, and the United States. Fowler found nearly 500 gigabytes of data, encompassing 1.6 million documents dating from 2021 to early April. This data included personal information about various professionals, including teachers, railway workers, and law enforcement officials. Among the documents were birth certificates, diplomas, and job applications. 

Although the server has been secured, the incident highlights the risks of collecting and storing biometric data and the potential misuse if leaked. “You can change your name, you can change your bank information, but you can't change your actual biometrics,” Fowler noted. This data, if accessed by cybercriminals, poses a long-term risk, especially for individuals in sensitive law enforcement roles. Prateek Waghre, executive director of the Internet Freedom Foundation, emphasized the extensive biometric data collection in India and the heightened security risks for law enforcement personnel. 

If compromised, such data can be misused to gain unauthorized access to sensitive information. Fowler also found a Telegram channel advertising the sale of Indian police data, including specific individuals’ information, shortly after the database was secured. The structure and screenshots of the data matched what Fowler had seen. For ethical reasons, he did not purchase the data, so he could not fully verify its authenticity. In response, ThoughtGreen Technologies stated, “We take data security very seriously and have taken immediate steps to secure the exposed data.” 

They assured a thorough investigation to prevent future incidents but did not provide specific details. The company also reported the breach to Indian law enforcement but did not specify which organization was contacted. When shown a screenshot of the Telegram post, the company claimed it was “not our data.” Telegram did not respond to requests for comment. 

Shivangi Narayan, an independent researcher, stressed the need for more robust data protection laws and better data handling practices by companies. Data breaches have become so routine that they barely register with the public; an Indian police force’s face-recognition data was exposed in a similar incident only recently.

Globally, as governments and organizations increasingly use biometric data for identity verification and surveillance, the risk of data leaks and abuse rises. A recent face recognition leak in Australia, for example, affected up to a million people and led to a blackmail charge. Many countries are now considering biometric identity verification, and all of that information has to be stored somewhere; when governments farm storage out to third-party companies, they lose direct control of the data.

Microsoft Introduces Passkey Authentication for Personal Microsoft Accounts

 

Microsoft has introduced a new feature allowing Windows users to log into their Microsoft consumer accounts using a passkey, eliminating the need for traditional passwords. This passkey authentication method supports various password-less options such as Windows Hello, FIDO2 security keys, biometrics like facial scans or fingerprints, and device PINs.

These "consumer accounts" are personal accounts used for accessing a range of Microsoft services including Windows, Office, Outlook, OneDrive, and Xbox Live. The announcement coincides with World Password Day, with Microsoft aiming to enhance security against phishing attacks and eventually phase out passwords entirely.

Previously available for logging into websites and applications, passkey support is now extended to Microsoft accounts, streamlining the login process without requiring a password.

Passkeys, unlike passwords, utilize a cryptographic key pair where the private key remains securely stored on the user's device. This method enhances security as it eliminates the risk of password interception or theft, and it simplifies the login experience, reducing reliance on password memorization and minimizing risky practices such as password recycling.
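For readers curious what this looks like in practice, below is a minimal sketch of passkey creation using the browser’s WebAuthn API, on which passkeys are built. The relying-party details, user values, and challenge here are placeholders; in a real deployment the server supplies them:

```typescript
// Minimal sketch of passkey creation via the WebAuthn browser API that
// passkeys build on. The relying-party, user, and challenge values are
// placeholders; a real server would supply them.
async function createPasskey(): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      // Fresh random challenge, normally issued by the server to stop replays
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      rp: { name: "Example Service", id: "example.com" },
      user: {
        id: new TextEncoder().encode("user-123"),
        name: "user@example.com",
        displayName: "Example User",
      },
      // ES256 and RS256 — the algorithms most authenticators support
      pubKeyCredParams: [
        { type: "public-key", alg: -7 },   // ES256
        { type: "public-key", alg: -257 }, // RS256
      ],
      authenticatorSelection: {
        residentKey: "required",      // a discoverable credential, i.e. a passkey
        userVerification: "required", // face, fingerprint, or PIN, verified
                                      // locally on the device
      },
    },
  });
}
```

Only the resulting public key is sent to the server; the private key stays on the authenticator, which is what makes the credential resistant to phishing and interception.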

Moreover, passkeys offer compatibility across various devices and operating systems, ensuring a seamless authentication process. However, because Microsoft syncs passkeys across devices rather than binding each one to a single piece of hardware, a compromised device or hijacked sync account could expose the credential to unauthorized use.

To enable passkey support for Microsoft accounts, users can create a passkey through a provided link and select from options like facial recognition, fingerprint, PIN, or security key. Supported platforms include Windows 10 and newer, macOS Ventura and newer, Safari 16 and newer, ChromeOS, Chrome, Microsoft Edge 109 and newer, iOS 16 and newer, and Android 9 and newer. Upon signing in, users can select their passkey from the list and proceed with authentication using the chosen method.