
Meta’s Smart Glasses Face Privacy Backlash as Experts Flag Legal and Ethical Risks

 



Concerns around Meta's AI-enabled smart glasses are intensifying after reports suggested that human reviewers may have accessed sensitive user recordings, raising broader questions about privacy, consent, and data protection.

Online discussions have surged, with users expressing alarm over how much data may be visible to the company. Some individuals on forums have claimed that recorded footage could be manually reviewed to train artificial intelligence systems, while others raised concerns about the use of such devices in sensitive environments like healthcare settings, where patient information could be unintentionally exposed.


What triggered the controversy?

The debate gained momentum following an investigation by Swedish media outlets, which reported that contractors working at external facilities were tasked with reviewing video recordings captured through Ray-Ban Meta Smart Glasses. According to these findings, some of the reviewed material included highly sensitive content.

The issue has since drawn regulatory attention in multiple regions. Authorities in the United Kingdom, including the Information Commissioner's Office, have sought clarification on how such user data is processed. In the United States, the controversy has also led to legal action against Meta Platforms, with allegations that consumers were not adequately informed about the device’s privacy safeguards.

The timing is significant, as smart glasses are rapidly gaining popularity. Legal filings suggest that more than seven million units were sold in 2025 alone. Unlike smartphones, these glasses resemble regular eyewear but can discreetly capture images, audio, and video from the wearer's perspective, often without others being aware.


Why are experts concerned?

Legal analysts highlight that such practices could conflict with India’s Digital Personal Data Protection Act, 2023 if data involving Indian individuals is collected.

According to legal experts, consent remains a foundational requirement. Any access to recordings involving identifiable individuals must be based on informed approval. If footage is reviewed without the knowledge or permission of those captured, it could constitute a violation of Indian data protection law.

Beyond legality, specialists argue that wearable AI devices introduce a deeper structural issue. Unlike traditional data collection methods, these tools continuously capture real-world environments, making it difficult to define clear boundaries for data usage.

Experts also point out that although Meta includes visible indicators such as LED lights to signal recording, these measures do not fully address how the data of bystanders is processed. There are concerns about the absence of strict limitations on why such data is collected or how much of it is retained.

Additionally, outsourcing the review of user-generated content introduces further complications. Apart from the risk of misuse or unauthorized sharing, there are also ethical concerns regarding the working conditions and psychological impact on individuals tasked with reviewing potentially distressing material.


Cross-border and systemic risks

Another key concern is international data handling. If recordings involving Indian users are accessed by contractors located overseas, companies are still expected to maintain the same standards of security and confidentiality required under Indian regulations.

Experts emphasize that these devices are part of a much larger artificial intelligence ecosystem. Data captured through smart glasses is not simply stored. It may be uploaded to cloud servers, processed by machine learning systems, and in some cases, reviewed by humans to improve system performance. This creates a chain of data handling where highly personal information, including facial features, voices, surroundings, and behavioral patterns, may circulate beyond the user’s direct control.


What is Meta’s response?

Meta has stated that protecting user data remains a priority and that it continues to refine its systems to improve privacy protections. The company has explained that its smart glasses are designed to provide hands-free AI assistance, allowing users to interact with their surroundings more efficiently.

It also acknowledged that, in certain cases, human reviewers may be involved in evaluating shared content to enhance system performance. According to the company, such processes are governed by its privacy policies and include steps intended to safeguard user identity, such as automated filtering techniques like face blurring.

However, reports citing Swedish publications suggest that these safeguards may not always function consistently, with some instances where identifiable details remain visible.

While recording must be actively initiated by the user, either manually or through voice commands, experts note that many users may not fully understand that their captured content could be subject to human review.


The Ripple Effect

This controversy reflects a wider shift in how personal data is generated and processed in the age of AI-driven wearables. Unlike earlier technologies, smart glasses operate in real time and in shared environments, raising complex questions about consent not just for users, but for everyone around them.

As adoption accelerates, regulators worldwide are likely to tighten scrutiny of such devices. The challenge for companies will be to balance innovation with transparent data practices, especially as public awareness of digital privacy continues to rise.

For users, this is a wake-up call not to rely blindly on new technology, and to recognize that convenience-driven products often come with hidden trade-offs, particularly when it comes to control over personal data.

EU Fines TikTok $600 Million for Data Transfers to China


The EU has fined TikTok 530 million euros (around $600 million). The platform, owned by Chinese tech giant ByteDance, was found to have illegally transferred the private data of EU users to China and to have failed to ensure that the data was protected from potential access by Chinese authorities. According to an AFP news report, the penalty, one of the largest ever issued by the EU's data protection agencies, follows a detailed inquiry into the legitimacy of TikTok's data transfer practices.

TikTok Fine and EU

TikTok's lead regulator in Europe, Ireland's Data Protection Commission (DPC), said that TikTok admitted during the probe to hosting European user data in China. DPC deputy commissioner Graham Doyle said that "TikTok failed to verify, guarantee, and demonstrate that the personal data of (European) users, remotely accessed by staff in China, was afforded a level of protection essentially equivalent to that guaranteed within the EU."

Doyle added that TikTok failed to address the risk of Chinese authorities accessing Europeans' private data under China's anti-terrorism, counter-espionage, and other laws, standards that TikTok itself found to differ from the EU's data protection rules.

TikTok will contest the decision

TikTok has said it will contest the heavy EU fine, despite the findings. TikTok Europe's Christine Grahn stressed that the company has "never received a request" from Chinese authorities for European users' data and has never given EU users' data to Chinese authorities. "We disagree with this decision and intend to appeal it in full," Grahn said.

TikTok boasts a massive 1.5 billion users worldwide. In recent years, the social media platform has come under heavy pressure from Western governments over worries that Chinese actors could misuse its data for surveillance and propaganda.

TikTok to comply with EU Rules

In 2023, Ireland's DPC fined TikTok 354 million euros for violating EU rules on the processing of children's data. The DPC's recent judgment also found that TikTok violated requirements under the EU's General Data Protection Regulation (GDPR) by sending user data to China. The decision includes a 530 million euro administrative penalty plus a mandate that TikTok align its data processing practices with EU rules within six months.

Apple and Google App Stores Host VPN Apps Linked to China, Face Outrage


Google (GOOGL) and Apple (AAPL) are under harsh scrutiny after a recent report disclosed that their app stores host VPN applications associated with Qihoo 360, a Chinese cybersecurity firm that the U.S. government has blacklisted. The Financial Times reports that five VPNs still available to U.S. users, including VPN Proxy Master and Turbo VPN, are linked to Qihoo, which was sanctioned in 2020 over alleged military ties.

Illusion of Privacy: VPNs collecting data

In 2025 alone, three of the VPN apps have surpassed a million downloads on Google Play and Apple's App Store, suggesting these are not small-time apps, Sensor Tower reports. Though advertised as "private browsing" tools, the VPNs give their operators complete visibility into users' online activity. This is alarming because China's national security laws require companies to hand over user data if the government demands it.

Concerns around ownership structures

The intricate web of ownership structures raises important questions. The apps are run by Singapore-based Innovative Connecting, owned by Lemon Seed, a Cayman Islands firm. Qihoo acquired Lemon Seed for $69.9 million in 2020 and claimed to have sold the business months later, but the FT reports that the China-based team building the applications remained under Qihoo's umbrella for years. According to the FT, a developer said, "You could say that we're part of them, and you could say we're not. It's complicated."

Amid outrage, Google and Apple respond 

Google said it strives to comply with sanctions and removes violators when found. Apple, which says it enforces strict rules on VPN data-sharing, removed two apps, Snap VPN and Thunder VPN, after the FT contacted the company.

Privacy scare can damage stock valuations

What Google and Apple face is more than public outrage. Investors prioritise data privacy, and regulatory risk has increased, mainly with growing concerns around U.S. tech firms' links to China. If the U.S. government gets involved, it could mean stricter rules, fines, and even more app removals. If this happens, shareholders won't be happy.

According to FT, “Innovative Connecting said the content of the article was not accurate and declined to comment further. Guangzhou Lianchuang declined to comment. Qihoo and Chen Ningyi did not respond to requests for comment.”

Why European Regulators Are Investigating Chinese AI firm DeepSeek

 


European authorities are raising concerns about DeepSeek, a fast-growing Chinese artificial intelligence (AI) company, over its data practices. Regulators in Italy, Ireland, Belgium, the Netherlands, and France are examining the firm's data collection methods to determine whether they comply with the EU's General Data Protection Regulation (GDPR) and whether personal data is being transferred unlawfully to China.

Amid these concerns, the Italian authority has temporarily blocked access to DeepSeek's R1 chatbot while it investigates what data is collected, how it is used, and to what extent it has fed the training of the AI model.


What Type of Data Does DeepSeek Actually Collect? 

DeepSeek collects three main forms of information from the user: 

1. Personal data such as names and emails.  

2. Device-related data, including IP addresses.  

3. Data from third parties, such as Apple or Google logins.  

The app may also monitor whether a user is active elsewhere on the device, for "Community Security" purposes. And unlike many companies that set clear timelines or limits on data retention, DeepSeek states that it may retain data indefinitely. That data may also be shared with third parties, including advertisers, analytics firms, governments, and copyright holders.

While other AI companies, such as OpenAI with ChatGPT and Anthropic with Claude, have faced similar privacy scrutiny, experts observe that DeepSeek does not expressly grant users the rights to deletion or to restrict the use of their data, as the GDPR requires.


Where the Collected Data Goes

One of the major concerns about DeepSeek is that it stores user data in China. The company says it has strong security measures in place and observes local laws on data transfer, but from a legal perspective, DeepSeek has presented no valid basis for storing its European users' data outside the EU.

According to the EDPB, privacy laws in China place more importance on "community stability" than on individual privacy, permitting broad access to personal data for purposes such as national security or criminal investigations. It is not clear whether foreign users' data is treated any differently than that of Chinese citizens.


Cybersecurity and Privacy Threats 

As cybercrime indices for 2024 underline, China is one of the countries most exposed to cyberattacks. Cisco's latest report shows that DeepSeek's AI model lacks strong defenses against hacking attempts. Other AI models can block at least some "jailbreak" attacks, but DeepSeek proved completely vulnerable to them, making it easier to manipulate.


Should Users Worry? 

According to experts, users ought to exercise caution when using DeepSeek and avoid sharing highly sensitive personal details. The company's unclear data protection policies, its storage of data in China, and its relatively weak security defenses pose serious risks to users' privacy, and warrant that caution.

As investigations continue, European regulators will determine whether DeepSeek may keep doing business in the EU. Until then, users should weigh the risks of exposure when interacting with the platform.



US Agency CFPB Proposes Rule to Block Data Brokers from Selling Sensitive Personal Information

The Consumer Financial Protection Bureau (CFPB) has proposed a groundbreaking rule to restrict data brokers from selling Americans’ personal and financial information, marking a significant step toward strengthening privacy protections in the digital age. The rule, introduced under the Fair Credit Reporting Act (FCRA), targets practices that exploit regulatory loopholes, particularly the sale of sensitive data such as Social Security numbers and phone numbers.

CFPB's Initiative to Curb Data Exploitation

CFPB Director Rohit Chopra emphasized the agency’s commitment to addressing the “widespread evasion” of federal privacy laws by data brokers. He noted that these companies often operate outside the regulatory frameworks governing credit bureaus and tenant screening firms, profiting from data sales while exposing consumers to significant risks. 

"This rule represents a decisive step to ensure that those trafficking in Americans' most sensitive information face accountability," Chopra stated during a press briefing.

The proposed rule aims to reclassify data brokers under the same legal framework as credit bureaus and background check companies, thereby closing a longstanding regulatory gap. It would impose restrictions on selling data that identifies individuals, such as Social Security numbers, income histories, and credit scores, limiting the ability of data brokers to monetize private information.

Building on Momentum from Federal Initiatives

The CFPB’s proposal aligns with momentum from President Biden’s recent executive order targeting the sale of Americans’ personal data. The move reflects growing public and governmental scrutiny of data brokers, who have faced criticism for exploiting lax regulations to generate profits at the expense of consumer privacy.

Chopra underscored the dangers of unregulated data sales, describing the risks as "staggering." He highlighted the threat to individuals and national security posed by the unrestricted availability of Americans’ private information to virtually anyone willing to pay.

FCRA and the Call for Stronger Privacy Protections

The FCRA, enacted in 1970, was designed to ensure the privacy and accuracy of consumer data managed by reporting agencies. However, the absence of comprehensive national data protection laws has left Americans more vulnerable compared to citizens in other Western democracies.

If enacted, the new rule would represent a significant step in federal efforts to regulate data brokers, building on Congress’s original intent in passing the FCRA—to protect Americans’ personal data. The public will have until March 2025 to provide comments on the proposed rule, which could face challenges from the incoming administration's deregulatory stance.

Bipartisan Support and Industry Reactions

Despite potential political obstacles, Chopra pointed to bipartisan acknowledgment of the risks posed by data brokers: "This isn’t a partisan issue. The dangers of unregulated access to Americans’ private data are recognized across the political spectrum."

Stakeholder reactions, including those from consumer advocacy groups and the data broker industry, are expected to shape the final form of the rule. While some industry players may resist the changes, advocates for stronger privacy protections view the proposal as a much-needed step to safeguard consumer rights in an increasingly data-driven economy.

Potential Impact on the Digital Economy

If adopted, the rule would signify a pivotal shift in how sensitive data is handled in the U.S., setting a potential precedent for broader privacy protections. By regulating data brokers more stringently, the CFPB aims to strike a balance between protecting privacy rights and accommodating commercial interests.

Next Steps for the Proposed Rule

To advance the proposal, the CFPB recommends:

  1. Engaging Public Feedback
    Encourage diverse stakeholders to participate in the comment period to address concerns and refine the rule.
  2. Strengthening Compliance Mechanisms
    Develop clear guidelines and enforcement measures to ensure adherence by data brokers.
  3. Collaborating with Lawmakers
    Build bipartisan support to overcome political hurdles and facilitate legislative backing for the rule.
  4. Raising Awareness
    Educate consumers about their privacy rights and the implications of data sales on their personal security.

Looking Ahead

As the CFPB leads the charge on this critical issue, the debate over privacy rights versus commercial interests enters a decisive phase. The proposed rule has the potential to reshape the digital economy’s relationship with personal data, paving the way for stronger consumer protections and greater accountability among data brokers.

World's First AI Law: A Tough Blow for Tech Giants


In May, EU member states, lawmakers, and the European Commission, the EU's executive body, finalized the AI Act, a landmark law that governs how corporations develop, deploy, and use AI.

The European Union's major AI law goes into effect on Thursday, bringing significant implications for American technology companies.

About the AI Act

The AI Act is a piece of EU legislation that regulates AI. The law, first suggested by the European Commission in 2020, seeks to combat the harmful effects of artificial intelligence.

The legislation establishes a comprehensive and standardized regulatory framework for AI within the EU.

It will largely target huge U.S. tech businesses, which are currently the main architects and developers of the most advanced AI systems.

However, the laws will apply to a wide range of enterprises, including non-technology firms.

Tanguy Van Overstraeten, head of law firm Linklaters' technology, media, and telecommunications practice in Brussels, described the EU AI Act as "the first of its kind in the world." It is expected to affect many enterprises, particularly those building AI systems, as well as those deploying or simply using them in certain scenarios, he said.

High-risk and low-risk AI systems

High-risk AI systems include self-driving cars, medical devices, loan decisioning systems, educational scoring, and remote biometric identification systems.

The regulation also prohibits all AI uses that are judged "unacceptable" in terms of danger. Unacceptable-risk artificial intelligence applications include "social scoring" systems that evaluate citizens based on data gathering and analysis, predictive policing, and the use of emotional detection technology in the workplace or schools.

Implication for US tech firms

Amid a global craze over artificial intelligence, US behemoths such as Microsoft, Google, Amazon, Apple, and Meta have been aggressively working with and investing billions of dollars in firms they believe can lead the field.

Given the massive computer infrastructure required to train and run AI models, cloud platforms such as Microsoft Azure, Amazon Web Services, and Google Cloud are critical to supporting AI development.

In this regard, Big Tech companies will likely be among the most aggressively targeted names under the new regulations.

Generative AI and EU

The EU AI Act defines generative AI as "general-purpose" artificial intelligence. This title refers to tools that are designed to do a wide range of jobs on a par with, if not better than, a person.

General-purpose AI models include but are not limited to OpenAI's GPT, Google's Gemini, and Anthropic's Claude.

The AI Act imposes stringent standards on these systems, including compliance with EU copyright law, disclosure of how models are trained, routine testing, and proper cybersecurity measures.