But the problem has not disappeared entirely, and users still run into it from time to time. One way to address it is with email aliases.
An email alias is an alternative email address that lets you receive mail without sharing your real address. The alias reroutes all incoming messages to your primary account.
Plus addressing: To organize mail efficiently, you append a + symbol and a category label to your existing address; you can then add rules that filter incoming mail by source (see the sketch after this list).
Provider aliases: Mainly used by organizations that want dedicated addresses for different departments or functions, while all mail lands in the same inbox.
Masked/forwarding aliases: These are aimed at privacy. Instead of handing out your real email, you use a randomly generated address that forwards messages to your real inbox. This feature is available with services like Proton Mail.
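As a rough sketch of how plus addressing enables filtering, the Python snippet below splits a plus-addressed recipient into its base mailbox and tag, then picks a folder from the tag. The provider behavior assumed here (everything after the + is ignored for delivery, as Gmail does) and all names are illustrative, not any particular provider's API.

```python
# Minimal sketch: sorting incoming mail by plus-address tag.
# Assumes a Gmail-style provider where "user+tag@example.com"
# is delivered to "user@example.com"; all names are illustrative.

def split_plus_address(address: str) -> tuple[str, str | None]:
    """Return (base_address, tag) for a possibly plus-addressed email."""
    local, _, domain = address.partition("@")
    base, _, tag = local.partition("+")
    return f"{base}@{domain}", tag or None

def folder_for(address: str) -> str:
    """Pick a folder from the alias tag, falling back to the inbox."""
    _, tag = split_plus_address(address)
    return tag.capitalize() if tag else "Inbox"

print(folder_for("user+newsletters@example.com"))  # Newsletters
print(folder_for("user+shopping@example.com"))     # Shopping
print(folder_for("user@example.com"))              # Inbox
```

In practice you would express the same logic as a filter rule in your mail provider's settings rather than running code yourself.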
Email aliases help you organize your inbox and can be useful for business correspondence, but their main benefit is protecting your privacy.
There are several strategies to accomplish this, but the primary one is to minimize how long your real email address is exposed online. An alias can be deleted at any moment, even if it is still visible online and in use elsewhere. The more aliases you use, the more difficult it becomes to identify your real, core email address.
Because an alias keeps your real address hidden from spammers, marketers, and phishing attempts, you gain more privacy. It also becomes simpler to determine who has misused your data.
Handing out a distinct alias in each context makes it easier to spot when one has been abused. Instead of dealing with a flood of spam, you can delete the alias as soon as you discover someone is abusing it and start over.
Aliases can be helpful for privacy, but they are not a foolproof way to stay safe online. They do not automatically encrypt emails, nor do they stop tracking cookies.
Court filings revealed that Apple's Hide My Email, a feature intended to protect users' real email addresses, does not keep them anonymous from law enforcement, raising fresh privacy concerns.
The feature, available to iCloud+ subscribers, lets users create random email aliases so that websites and applications never see their primary address. Apple says it does not read messages; they are simply forwarded. However, recent US cases show a clear limit: Apple was able to connect those anonymous aliases to identifiable accounts in response to legitimate court demands.
The heightened use of age verification systems across the internet is directly influencing how people think about online privacy tools. As more governments introduce these requirements, interest in privacy-focused technologies is rising in parallel.
Age verification laws are now being implemented in multiple countries, requiring millions of users to submit personal and often sensitive information before accessing certain websites, particularly those hosting adult or restricted content. While policymakers argue that these rules are necessary to prevent minors from being exposed to harmful material, critics continue to highlight the serious privacy risks associated with handing over such data.
Virtual Private Networks, commonly known as VPNs, are widely marketed as tools designed to protect user privacy and secure online data. In recent months, there has been a noticeable surge in VPN adoption in regions where age verification laws have come into force. This trend was particularly evident in the United Kingdom and the United States during the latter half of 2025, and again in Australia in March 2026.
However, whether VPNs can truly protect users during age verification processes is not a simple yes-or-no question. Their capabilities are limited in certain areas, and understanding both their strengths and weaknesses is essential.
What VPNs Can Protect
At a fundamental level, VPNs work by encrypting a user’s internet connection, which prevents third parties from easily observing online activity. This includes internet service providers, network administrators, and in some cases, government surveillance systems.
When a VPN connection is active, external observers are generally unable to determine which websites or applications a user is accessing. In the context of age verification, this means that third parties monitoring network traffic will not be able to tell whether a user has visited a platform that requires identity checks, provided the VPN is properly configured.
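To make the mechanism concrete, here is a toy Python sketch (using the third-party cryptography package) of the core idea: the real request, destination included, is encrypted before it leaves the device, so an on-path observer sees only ciphertext headed to the VPN server. Real VPN protocols such as WireGuard or OpenVPN negotiate keys and encapsulate whole network packets quite differently; this is purely a conceptual illustration.

```python
# Conceptual sketch, NOT a real VPN implementation: with a shared
# tunnel key, an on-path observer (ISP, network admin) sees only
# ciphertext addressed to the VPN server, never the true destination.
from cryptography.fernet import Fernet

tunnel_key = Fernet.generate_key()   # stand-in for the VPN handshake
tunnel = Fernet(tunnel_key)

# The real request, destination included, is wrapped before it leaves
# the device and is only unwrapped by the VPN server.
inner_request = b"GET https://age-verified-site.example/ HTTP/1.1"
wire_packet = tunnel.encrypt(inner_request)

print(wire_packet[:40], b"...")       # all an outside observer sees
print(tunnel.decrypt(wire_packet))    # what the VPN server recovers
```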
Certain platforms, including X (formerly Twitter), Reddit, and Telegram, have introduced age verification requirements in specific regions. Many adult websites have implemented similar systems.
In addition to hiding browsing activity, VPNs also encrypt the data being transmitted. This ensures that any information entered during the verification process cannot be easily intercepted by external parties while it is in transit. Even after the verification step is completed, ongoing internet activity continues to be routed through the VPN’s secure tunnel, maintaining a level of privacy.
Modern VPN services are also evolving into broader cybersecurity platforms. Leading providers such as NordVPN, Surfshark, and ExpressVPN now offer additional tools beyond basic encryption. These may include password management systems, encrypted cloud storage, antivirus protection, and identity theft monitoring services.
Some of these services also provide features such as dark web monitoring, financial compensation options in cases of identity theft, credit tracking, and access to support teams that assist users in resolving security incidents. These added layers can help reduce the impact if personal data submitted during an age verification process is later exposed or misused.
One of the central criticisms of age verification systems is the cybersecurity risk they introduce. In this context, advanced VPN subscriptions can offer tools that help users respond to potential data breaches, even if they cannot prevent them entirely.
What VPNs Cannot Protect
Despite their advantages, VPNs are not a complete solution for online anonymity. They do not eliminate all risks, nor do they make users invisible.
In the case of age verification, a VPN cannot prevent the verification provider from accessing the information that a user voluntarily submits. Organizations such as Yoti, Persona, and AgeGo are responsible for processing this data. These companies will still be able to view, verify, and in many cases temporarily store personal details.
Typical verification methods require users to submit sensitive information such as credit card details, government-issued identification documents, or biometric inputs like selfies. This data is directly accessible to the verification service, regardless of whether a VPN is being used.
Data retention practices vary between providers. For example, Yoti states that it deletes user data immediately after verification unless further review is required. In cases where manual checks are necessary, the data may be retained for up to 28 days.
The longer personal information remains stored, the greater the potential risk to user privacy and security. This concern has already been validated by real-world incidents. In October 2025, Discord experienced a data breach in which attackers accessed information related to users who had requested manual reviews of their age verification results.
It is important to understand that any personal data submitted online can potentially be used to identify an individual. The use of a VPN does not change this fundamental reality.
Why VPN Interest Is Increasing
The expansion of age verification systems has heightened public awareness of online privacy issues. As a result, many users are exploring VPNs as a way to better protect themselves.
At the same time, some individuals are attempting to use VPNs to bypass age verification requirements altogether. This is typically done by connecting to servers located in countries where such laws have not yet been implemented. However, this approach is not consistently reliable and does not guarantee success, as many platforms use additional verification mechanisms beyond geographic location.
Final Considerations
VPNs remain an important tool for strengthening online privacy, particularly when it comes to protecting browsing activity and securing data in transit. However, they are not a complete safeguard against all risks associated with age verification systems.
Users should also be cautious when choosing a VPN provider. Many free services operate on business models that involve collecting and monetizing user data, which can undermine privacy rather than protect it. In contrast, reputable paid VPN services generally offer stronger security features and more transparent data handling practices.
Among paid options, some lower-cost services are widely marketed to new users entering the VPN space. For instance, Surfshark has been advertised at approximately $1.99 per month under long-term plans, while PrivadoVPN has promoted multi-year subscriptions priced near $1.11 per month.
However, pricing alone should not be the deciding factor. Security architecture, logging policies, and transparency practices remain far more critical when evaluating whether a VPN service genuinely protects user privacy. While VPNs can reduce certain risks, they cannot fully protect personal information once it has been directly shared with a verification service.
Concerns around Meta’s AI-enabled smart glasses are intensifying after reports suggested that human reviewers may have accessed sensitive user recordings, raising broader questions about privacy, consent, and data protection.
Online discussions have surged, with users expressing alarm over how much data may be visible to the company. Some individuals on forums have claimed that recorded footage could be manually reviewed to train artificial intelligence systems, while others raised concerns about the use of such devices in sensitive environments like healthcare settings, where patient information could be unintentionally exposed.
What triggered the controversy?
The debate gained momentum following an investigation by Swedish media outlets, which reported that contractors working at external facilities were tasked with reviewing video recordings captured through Ray-Ban Meta Smart Glasses. According to these findings, some of the reviewed material included highly sensitive content.
The issue has since drawn regulatory attention in multiple regions. Authorities in the United Kingdom, including the Information Commissioner's Office, have sought clarification on how such user data is processed. In the United States, the controversy has also led to legal action against Meta Platforms, with allegations that consumers were not adequately informed about the device’s privacy safeguards.
Timing matters here, as smart glasses are rapidly gaining popularity. Legal filings suggest that more than seven million units were sold in 2025 alone. Unlike smartphones, these glasses resemble regular eyewear but can discreetly capture images, audio, and video from the wearer’s perspective, often without others being aware.
Why are experts concerned?
Legal analysts highlight that such practices could conflict with India’s Digital Personal Data Protection Act, 2023 if data involving Indian individuals is collected.
According to legal experts, consent remains a foundational requirement. Any access to recordings involving identifiable individuals must be based on informed approval. If footage is reviewed without the knowledge or permission of those captured, it could constitute a violation of Indian data protection law.
Beyond legality, specialists argue that wearable AI devices introduce a deeper structural issue. Unlike traditional data collection methods, these tools continuously capture real-world environments, making it difficult to define clear boundaries for data usage.
Experts also point out that although Meta includes visible indicators such as LED lights to signal recording, these measures do not fully address how the data of bystanders is processed. There are concerns about the absence of strict limitations on why such data is collected or how much of it is retained.
Additionally, outsourcing the review of user-generated content introduces further complications. Apart from the risk of misuse or unauthorized sharing, there are also ethical concerns regarding the working conditions and psychological impact on individuals tasked with reviewing potentially distressing material.
Cross-border and systemic risks
Another key concern is international data handling. If recordings involving Indian users are accessed by contractors located overseas, companies are still expected to maintain the same standards of security and confidentiality required under Indian regulations.
Experts emphasize that these devices are part of a much larger artificial intelligence ecosystem. Data captured through smart glasses is not simply stored. It may be uploaded to cloud servers, processed by machine learning systems, and in some cases, reviewed by humans to improve system performance. This creates a chain of data handling where highly personal information, including facial features, voices, surroundings, and behavioral patterns, may circulate beyond the user’s direct control.
What is Meta’s response?
Meta has stated that protecting user data remains a priority and that it continues to refine its systems to improve privacy protections. The company has explained that its smart glasses are designed to provide hands-free AI assistance, allowing users to interact with their surroundings more efficiently.
It also acknowledged that, in certain cases, human reviewers may be involved in evaluating shared content to enhance system performance. According to the company, such processes are governed by its privacy policies and include steps intended to safeguard user identity, such as automated filtering techniques like face blurring.
However, reports citing Swedish publications suggest that these safeguards may not always function consistently, with some instances where identifiable details remain visible.
While recording must be actively initiated by the user, either manually or through voice commands, experts note that many users may not fully understand that their captured content could be subject to human review.
The Ripple Effect
This controversy reflects a wider shift in how personal data is generated and processed in the age of AI-driven wearables. Unlike earlier technologies, smart glasses operate in real time and in shared environments, raising complex questions about consent not just for users, but for everyone around them.
As adoption accelerates, regulators worldwide are likely to tighten scrutiny of such devices. The challenge for companies will be balancing innovation with transparent data practices, especially as public awareness of digital privacy continues to rise.
For users, this is a wake-up call not to trust new technology blindly and to recognize that convenience-driven technologies often come with hidden trade-offs, particularly when it comes to control over personal data.
Meta Platforms has confirmed that it will remove support for end-to-end encrypted messaging in Instagram direct messages beginning May 8, 2026. After this date, conversations that previously relied on this encryption feature will no longer be protected by the same privacy mechanism.
According to guidance published in the platform’s support documentation, users whose conversations are affected will receive instructions explaining how to download messages or media files they want to retain. In some situations, individuals may also need to install the latest version of the Instagram application before they can export their chat history.
When asked about the decision, Meta stated that encrypted messaging on Instagram saw limited adoption. The company explained that only a small percentage of users chose to enable end-to-end encryption within Instagram direct messages. Meta also pointed out that people who want encrypted communication can still use the feature on WhatsApp, where end-to-end encryption is already widely used.
How Instagram Encryption Was Introduced
Instagram’s encrypted messaging capability was originally introduced as part of a broader push by Meta to transform its messaging ecosystem. In 2021, Meta CEO Mark Zuckerberg outlined a “privacy-focused” strategy for social networking that aimed to shift communication toward private and secure messaging environments.
Within that initiative, Meta began experimenting with encrypted direct messages on Instagram. However, the feature never became the default setting for users. Instead, it remained an optional capability available only in certain regions and had to be manually activated within specific conversations.
The tool also gained relevance during geopolitical tensions. Shortly after the outbreak of the Russia-Ukraine conflict in early 2022, Meta expanded access to encrypted direct messages for adult users in both Russia and Ukraine. The company said the move was intended to provide safer communication channels during the early phase of the war.
Industry Debate Over Encrypted Messaging
The decision to discontinue Instagram’s encrypted chats comes amid a broader debate in the technology sector about whether strong encryption improves or complicates online safety.
Recently, the social media platform TikTok said it currently has no plans to introduce end-to-end encryption for its messaging system. The company told the BBC that such technology could reduce its ability to monitor harmful activity and protect younger users from abuse.
End-to-end encryption is widely regarded by cybersecurity experts as one of the strongest ways to secure digital communication. When this technology is used, messages are encrypted on the sender’s device and can only be decrypted by the recipient. This means that even the platform hosting the conversation cannot read the message contents during transmission.
Because of this design, encrypted systems can protect users from surveillance, data interception, or unauthorized access by third parties. Many messaging services, including WhatsApp and Signal, rely on similar encryption models to secure billions of conversations globally.
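As a minimal illustration of that model, the sketch below uses the PyNaCl library's public-key boxes: the message is sealed on the sender's device, and only the recipient's private key can open it, so a server relaying the ciphertext never holds a usable key. Production messengers like Signal and WhatsApp layer the far more elaborate Signal protocol, with ratcheting per-message keys, on top of this basic idea; the snippet only captures the core property.

```python
# Toy illustration of end-to-end encryption using NaCl public-key
# boxes: the relaying platform sees only `ciphertext` and cannot
# decrypt it, because it never holds either private key.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()   # generated on the sender's device
bob_key = PrivateKey.generate()     # generated on the recipient's device

# Alice encrypts for Bob; the server only ever relays `ciphertext`.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# Only Bob's private key (paired with Alice's public key) decrypts it.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at noon'
```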
Law Enforcement Concerns
Despite its privacy advantages, encryption has long been controversial among law enforcement agencies and child-safety advocates. Critics argue that encrypted messaging makes it harder for technology companies to detect criminal behavior such as terrorism recruitment or the distribution of child sexual abuse material.
Authorities describe this challenge as the “Going Dark” problem, referring to situations where investigators cannot access message content even when they obtain legal warrants. Policymakers have repeatedly warned that widespread encryption could reduce the ability of platforms to cooperate with criminal investigations.
Internal documents previously reported by Reuters indicated that some Meta executives had raised similar concerns internally. In discussions dating back to 2019, company officials warned that widespread encryption could limit the company’s ability to identify and report illegal activity to law enforcement authorities.
Regulatory Pressure and Future Policy
The global policy debate around encryption is still evolving. The European Commission is expected to release a technology roadmap on encryption later this year. The initiative aims to explore ways to give investigators lawful access to encrypted data while preserving cybersecurity protections and civil liberties.
A Changing Messaging Strategy
Meta’s decision to remove encrypted messaging from Instagram highlights the complex trade-offs technology companies face when balancing privacy protections with safety monitoring and regulatory expectations.
While encryption remains a cornerstone of messaging on WhatsApp and has expanded across other platforms, the rollback on Instagram suggests that adoption rates, platform design, and policy pressures can influence whether such security features remain viable.
For Instagram users who relied on encrypted chats, the upcoming change means reviewing conversations before May 2026 and exporting any information they wish to keep before the feature is officially retired.