The heightened use of age verification systems across the internet is directly influencing how people think about online privacy tools. As more governments introduce these requirements, interest in privacy-focused technologies is rising in parallel.
Age verification laws are now being implemented in multiple countries, requiring millions of users to submit personal and often sensitive information before accessing certain websites, particularly those hosting adult or restricted content. While policymakers argue that these rules are necessary to prevent minors from being exposed to harmful material, critics continue to highlight the serious privacy risks associated with handing over such data.
Virtual Private Networks, commonly known as VPNs, are widely marketed as tools designed to protect user privacy and secure online data. In recent months, there has been a noticeable surge in VPN adoption in regions where age verification laws have come into force. This trend was particularly evident in the United Kingdom and the United States during the latter half of 2025, and again in Australia in March 2026.
However, whether VPNs can truly protect users during age verification processes is not a simple yes-or-no question. Their capabilities are limited in certain areas, and understanding both their strengths and weaknesses is essential.
What VPNs Can Protect
At a fundamental level, VPNs work by encrypting a user’s internet connection, which prevents third parties from easily observing online activity. This includes internet service providers, network administrators, and in some cases, government surveillance systems.
When a VPN connection is active, external observers are generally unable to determine which websites or applications a user is accessing. In the context of age verification, this means that third parties monitoring network traffic will not be able to tell whether a user has visited a platform that requires identity checks, provided the VPN is properly configured.
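For readers who want to confirm that the tunnel is actually in use, a quick sanity check is to compare the public IP address visible to the outside world before and after connecting. The sketch below is a minimal illustration rather than a vendor tool; it assumes a Node.js environment and uses the public api.ipify.org echo service, though any service that reports your public IP would do.

```typescript
// Minimal sketch: confirm traffic exits via the VPN by comparing public IPs.
// Assumes Node 18+ (built-in fetch) and the public api.ipify.org echo service.

async function publicIp(): Promise<string> {
  const res = await fetch("https://api.ipify.org?format=json");
  const body = (await res.json()) as { ip: string };
  return body.ip;
}

async function main() {
  // Record the IP address the outside world sees right now.
  const before = await publicIp();
  console.log(`Public IP before connecting the VPN: ${before}`);

  console.log("Connect your VPN now, then press Enter...");
  await new Promise<void>((resolve) => process.stdin.once("data", () => resolve()));

  const after = await publicIp();
  console.log(`Public IP with the VPN active: ${after}`);

  // If the address has not changed, traffic is probably not going through the tunnel.
  console.log(before === after
    ? "WARNING: public IP unchanged - the VPN tunnel may not be active."
    : "Public IP changed - traffic appears to exit through the VPN.");
}

main().catch(console.error);
```

A fuller check would also look for DNS leaks, since a misconfigured client can send DNS queries outside the tunnel even when the visible IP address changes.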
Certain platforms, including X (formerly Twitter), Reddit, and Telegram, have introduced age verification requirements in specific regions. Many adult websites have implemented similar systems.
In addition to hiding browsing activity, VPNs also encrypt the data being transmitted. This ensures that any information entered during the verification process cannot be easily intercepted by external parties while it is in transit. Even after the verification step is completed, ongoing internet activity continues to be routed through the VPN’s secure tunnel, maintaining a level of privacy.
Modern VPN services are also evolving into broader cybersecurity platforms. Leading providers such as NordVPN, Surfshark, and ExpressVPN now offer additional tools beyond basic encryption. These may include password management systems, encrypted cloud storage, antivirus protection, and identity theft monitoring services.
Some of these services also provide features such as dark web monitoring, financial compensation options in cases of identity theft, credit tracking, and access to support teams that assist users in resolving security incidents. These added layers can help reduce the impact if personal data submitted during an age verification process is later exposed or misused.
One of the central criticisms of age verification systems is the cybersecurity risk they introduce. In this context, advanced VPN subscriptions can offer tools that help users respond to potential data breaches, even if they cannot prevent them entirely.
What VPNs Cannot Protect
Despite their advantages, VPNs are not a complete solution for online anonymity. They do not eliminate all risks, nor do they make users invisible.
In the case of age verification, a VPN cannot prevent the verification provider from accessing the information that a user voluntarily submits. Organizations such as Yoti, Persona, and AgeGo are responsible for processing this data. These companies will still be able to view, verify, and in many cases temporarily store personal details.
Typical verification methods require users to submit sensitive information such as credit card details, government-issued identification documents, or biometric inputs like selfies. This data is directly accessible to the verification service, regardless of whether a VPN is being used.
Data retention practices vary between providers. For example, Yoti states that it deletes user data immediately after verification unless further review is required. In cases where manual checks are necessary, the data may be retained for up to 28 days.
The longer personal information remains stored, the greater the potential risk to user privacy and security. This concern has already been validated by real-world incidents. In October 2025, Discord experienced a data breach in which attackers accessed information related to users who had requested manual reviews of their age verification results.
It is important to understand that any personal data submitted online can potentially be used to identify an individual. The use of a VPN does not change this fundamental reality.
Why VPN Interest Is Increasing
The expansion of age verification systems has heightened public awareness of online privacy issues. As a result, many users are exploring VPNs as a way to better protect themselves.
At the same time, some individuals are attempting to use VPNs to bypass age verification requirements altogether. This is typically done by connecting to servers located in countries where such laws have not yet been implemented. However, this approach is not consistently reliable and does not guarantee success, as many platforms use additional verification mechanisms beyond geographic location.
Final Considerations
VPNs remain an important tool for strengthening online privacy, particularly when it comes to protecting browsing activity and securing data in transit. However, they are not a complete safeguard against all risks associated with age verification systems.
Users should also be cautious when choosing a VPN provider. Many free services operate on business models that involve collecting and monetizing user data, which can undermine privacy rather than protect it. In contrast, reputable paid VPN services generally offer stronger security features and more transparent data handling practices.
Among paid options, some lower-cost services are widely marketed to new users entering the VPN space. For instance, Surfshark has been advertised at approximately $1.99 per month under long-term plans, while PrivadoVPN has promoted multi-year subscriptions priced near $1.11 per month.
However, pricing alone should not be the deciding factor. Security architecture, logging policies, and transparency practices remain far more critical when evaluating whether a VPN service genuinely protects user privacy. While VPNs can reduce certain risks, they cannot fully protect personal information once it has been directly shared with a verification service.
Separately, a vulnerability was recently disclosed in the “Live in Chrome” panel of Google Chrome, a built-in interface for the Gemini assistant that brings agent-like AI capabilities directly into the browser. The finding challenged the assumption that such built-in AI features are safe by default.
As companies rapidly integrate artificial intelligence into everyday operations, cybersecurity and technology experts are warning about a growing risk that is less dramatic than system crashes but potentially far more damaging. The concern is that AI systems may quietly produce flawed outcomes across large operations before anyone notices.
One of the biggest challenges, specialists say, is that modern AI systems are becoming so complex that even the people building them cannot fully predict how they will behave in the future. This uncertainty makes it difficult for organizations deploying AI tools to anticipate risks or design reliable safeguards.
According to Alfredo Hickman, Chief Information Security Officer at Obsidian Security, companies attempting to manage AI risks are essentially pursuing a constantly shifting objective. Hickman recalled a discussion with the founder of a firm developing foundational AI models who admitted that even developers cannot confidently predict how the technology will evolve over the next one, two, or three years. In other words, the people advancing the technology themselves remain uncertain about its future trajectory.
Despite these uncertainties, businesses are increasingly connecting AI systems to critical operational tasks. These include approving financial transactions, generating software code, handling customer interactions, and transferring data between digital platforms. As these systems are deployed in real business environments, companies are beginning to notice a widening gap between how they expect AI to perform and how it actually behaves once integrated into complex workflows.
Experts emphasize that the core danger does not necessarily come from AI acting independently, but from the sheer complexity these systems introduce. Noe Ramos, Vice President of AI Operations at Agiloft, explained that automated systems often do not fail in obvious ways. Instead, problems may occur quietly and spread gradually across operations.
Ramos describes this phenomenon as “silent failure at scale.” Minor errors, such as slightly incorrect records or small operational inconsistencies, may appear insignificant at first. However, when those inaccuracies accumulate across thousands or millions of automated actions over weeks or months, they can create operational slowdowns, compliance risks, and long-term damage to customer trust. Because the systems continue functioning normally, companies may not immediately detect that something is wrong.
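A rough back-of-the-envelope sketch helps make the scale concrete. The figures below are purely illustrative assumptions, not numbers from Agiloft or any specific deployment; the point is simply that a per-action error rate far too small to notice in isolation still yields thousands of flawed records after a quarter of unattended operation.

```typescript
// Illustrative sketch: how a small per-action error rate compounds at scale.
// All parameters are hypothetical.

const actionsPerDay = 50_000;  // automated actions the agent performs daily
const errorRate = 0.002;       // 0.2% of actions produce a subtly wrong record
const days = 90;               // one quarter of unattended operation

let cleanRecords = 0;
let flawedRecords = 0;

for (let day = 1; day <= days; day++) {
  const flawedToday = Math.round(actionsPerDay * errorRate);
  flawedRecords += flawedToday;
  cleanRecords += actionsPerDay - flawedToday;
}

const total = cleanRecords + flawedRecords;
console.log(`Total automated actions: ${total.toLocaleString()}`);
console.log(`Subtly flawed records:   ${flawedRecords.toLocaleString()}`);
// Roughly 9,000 bad records after one quarter, none of which triggered an alert,
// because each individual action "succeeded" from the system's point of view.
console.log(`Share of flawed output:  ${((flawedRecords / total) * 100).toFixed(2)}%`);
```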
Real-world examples of this problem are already appearing. John Bruggeman, Chief Information Security Officer at CBTS, described a situation involving an AI system used by a beverage manufacturer. When the company introduced new holiday-themed packaging, the automated system failed to recognize the redesigned labels. Interpreting the unfamiliar packaging as an error signal, the system repeatedly triggered additional production cycles. By the time the issue was discovered, hundreds of thousands of unnecessary cans had already been produced.
Bruggeman noted that the system had not technically malfunctioned. Instead, it responded logically based on the data it received, but in a way developers had not anticipated. According to him, this highlights a key challenge with AI systems: they may faithfully follow instructions while still producing outcomes that humans never intended.
Similar risks exist in customer-facing applications. Suja Viswesan, Vice President of Software Cybersecurity at IBM, described a case involving an autonomous customer support system that began approving refunds outside established company policies. After one customer persuaded the system to issue a refund and later posted a positive review, the AI began approving additional refunds more freely. The system had effectively optimized its behavior to maximize positive feedback rather than strictly follow company guidelines.
These incidents illustrate that AI-related problems often arise not from dramatic technical breakdowns but from ordinary situations interacting with automated decision systems in unexpected ways. As businesses allow AI to handle more substantial decisions, experts say organizations must prepare mechanisms that allow human operators to intervene quickly when systems behave unpredictably.
However, shutting down an AI system is not always straightforward. Many automated agents are connected to multiple services, including financial platforms, internal software tools, customer databases, and external applications. Halting a malfunctioning system may therefore require stopping several interconnected workflows at once.
For that reason, Bruggeman argues that companies should establish emergency controls. Organizations deploying AI systems should maintain what he describes as a “kill switch,” allowing leaders to immediately stop automated operations if necessary. Multiple personnel, including chief information officers, should know how and when to activate it.
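What such a control might look like in practice is straightforward to sketch. The example below is a hypothetical illustration of the pattern Bruggeman describes, not a specific product feature: a single shared flag that every automated workflow checks before acting, so that designated personnel can suspend all agent activity at once.

```typescript
// Hypothetical sketch of an emergency "kill switch" for AI-driven workflows.
// A single shared flag is consulted before every automated action.

class KillSwitch {
  private engaged = false;

  engage(activatedBy: string): void {
    this.engaged = true;
    console.log(`KILL SWITCH ENGAGED by ${activatedBy} - halting automated operations.`);
  }

  isEngaged(): boolean {
    return this.engaged;
  }
}

const killSwitch = new KillSwitch();

// Every automated workflow wraps its actions in this guard.
async function runAgentAction(name: string, action: () => Promise<void>): Promise<void> {
  if (killSwitch.isEngaged()) {
    console.log(`Skipping "${name}": automated operations are suspended.`);
    return;
  }
  await action();
}

// Example: an agent task runs normally until an operator pulls the switch.
(async () => {
  await runAgentAction("approve-refund", async () => console.log("Refund approved."));
  killSwitch.engage("CISO on-call");
  await runAgentAction("approve-refund", async () => console.log("Refund approved."));
})();
```

In a real deployment the flag would live in shared infrastructure, such as a feature-flag service or a database row, so that every interconnected workflow honors it, which speaks directly to the difficulty of halting several systems at once.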
Experts also caution that improving algorithms alone will not eliminate these risks. Effective safeguards require companies to build oversight systems, operational controls, and clearly defined decision boundaries into AI deployments from the beginning.
Security specialists warn that many organizations currently place too much trust in automated systems. Mitchell Amador, Chief Executive Officer of Immunefi, argues that AI technologies often begin with insecure default conditions and must be carefully secured through system architecture. Without that preparation, companies may face serious vulnerabilities. Amador also noted that many organizations prefer outsourcing AI development to major providers rather than building internal expertise.
Operational readiness remains another challenge. Ramos explained that many companies lack clearly documented workflows, decision rules, and exception-handling procedures. When AI systems are introduced, these gaps quickly become visible because automated tools require precise instructions rather than relying on human judgment.
Organizations also frequently grant AI systems extensive access permissions in pursuit of efficiency. Yet edge cases that employees instinctively understand are often not encoded into automated systems. Ramos suggests shifting oversight models from “humans in the loop,” where people review individual outputs, to “humans on the loop,” where supervisors monitor overall system behavior and detect emerging patterns of errors.
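The distinction can be illustrated with a short sketch. Rather than routing every output past a reviewer, a "humans on the loop" monitor aggregates outcomes and pages a person only when a pattern of errors emerges; the metric names and thresholds below are illustrative assumptions.

```typescript
// Illustrative "humans on the loop" monitor: flag emerging error patterns
// across many automated decisions instead of reviewing each one.

interface Decision {
  kind: string;          // e.g. "refund", "invoice", "record-update"
  flaggedAsError: boolean;
}

function monitorBatch(decisions: Decision[], alertThreshold = 0.01): void {
  const byKind = new Map<string, { total: number; errors: number }>();

  for (const d of decisions) {
    const stats = byKind.get(d.kind) ?? { total: 0, errors: 0 };
    stats.total += 1;
    if (d.flaggedAsError) stats.errors += 1;
    byKind.set(d.kind, stats);
  }

  byKind.forEach((stats, kind) => {
    const rate = stats.errors / stats.total;
    if (rate > alertThreshold) {
      // A human supervisor is paged only when the pattern, not a single case, looks wrong.
      console.log(
        `ALERT: "${kind}" error rate ${(rate * 100).toFixed(1)}% over ${stats.total} actions - escalate to a human.`
      );
    }
  });
}

// Example batch: refunds drift out of policy while other flows stay healthy.
monitorBatch([
  { kind: "refund", flaggedAsError: true },
  { kind: "refund", flaggedAsError: true },
  { kind: "refund", flaggedAsError: false },
  { kind: "invoice", flaggedAsError: false },
]);
```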
Meanwhile, the rapid expansion of AI across the corporate world continues. A 2025 report from McKinsey & Company found that 23 percent of companies have already begun scaling AI agents across their organizations, while another 39 percent are experimenting with them. Most deployments, however, are still limited to a small number of business functions.
Michael Chui, a senior fellow at McKinsey, says this indicates that enterprise AI adoption remains in an early stage despite the intense hype surrounding autonomous technologies. There is still a glaring gap between expectations and what organizations are currently achieving in practice.
Nevertheless, companies are unlikely to slow their adoption efforts. Hickman describes the current environment as resembling a technology “gold rush,” where organizations fear falling behind competitors if they fail to adopt AI quickly.
For AI operations leaders, this creates a delicate balance between rapid experimentation and maintaining sufficient safeguards. Ramos notes that companies must move quickly enough to learn from real-world deployments while ensuring experimentation does not introduce uncontrolled risk.
Despite these concerns, expectations for the technology remain high. Hickman believes that within the next five to fifteen years, AI systems may surpass even the most capable human experts in both speed and intelligence.
Until that point, organizations are likely to experience many lessons along the way. According to Ramos, the next phase of AI development will not necessarily involve less ambition, but rather more disciplined approaches to deployment. Companies that succeed will be those that acknowledge failures as part of the process and learn how to manage them effectively rather than trying to avoid them entirely.
Oasis Security discovered a serious flaw in OpenClaw and reported it to the project's maintainers; a fix was released in version 2026.2.26 on February 26.
OpenClaw is a self-hosted AI tool that recently gained attention for allowing AI agents to autonomously execute commands, send texts, and handle tasks across multiple platforms. Oasis Security said the flaw stems from the OpenClaw gateway service binding to localhost and exposing a WebSocket interface.
Because cross-origin browser policies do not block WebSocket connections to localhost, a malicious website opened by an OpenClaw user can use JavaScript to silently open a connection to the local gateway and attempt authentication without raising any alarms.
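The underlying mechanism is easy to see in a few lines of browser code. The sketch below uses a placeholder port and says nothing about OpenClaw's actual message protocol; it simply shows that ordinary page JavaScript can open a WebSocket to a localhost service without triggering a cross-origin error.

```typescript
// Sketch only: page JavaScript connecting to a localhost WebSocket service.
// The port is a placeholder, not OpenClaw's real configuration.
const LOCAL_GATEWAY = "ws://127.0.0.1:18789"; // hypothetical gateway port

const socket = new WebSocket(LOCAL_GATEWAY);

socket.onopen = () => {
  // The browser's same-origin policy does not block WebSocket handshakes,
  // so this connection succeeds even though the page came from another origin.
  console.log("Connected to the local gateway from an arbitrary web page.");
};

socket.onmessage = (event: MessageEvent) => {
  // Whatever the local service replies with is now readable by the page script.
  console.log("Gateway responded:", event.data);
};

socket.onerror = () => {
  console.log("No local gateway listening (or the connection was refused).");
};
```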
OpenClaw includes rate limiting to deter such attacks, but the loopback address (127.0.0.1) is exempted by default so that local CLI sessions are not accidentally locked out.
The researchers found they could brute-force the OpenClaw management password at hundreds of attempts per second without any failed attempts being logged. Once the correct password is guessed, the attacker can silently register as a verified device, because the gateway automatically approves device pairings from localhost without requiring user confirmation.
“In our lab testing, we achieved a sustained rate of hundreds of password guesses per second from browser JavaScript alone. At that speed, a list of common passwords is exhausted in under a second, and a large dictionary would take only minutes. A human-chosen password doesn't stand a chance,” Oasis said.
With an authenticated session and administrative access, the attacker can then interact directly with the AI platform: identifying connected nodes, dumping stored credentials, and reading application logs.
According to Oasis, this might enable an attacker to give the agent instructions to perform arbitrary shell commands on paired nodes, exfiltrate files from linked devices, or scan chat history for important information. This would essentially result in a complete workstation compromise that is initiated from a browser tab.
Oasis provided an example of this attack, demonstrating how the OpenClaw vulnerability could be exploited to steal confidential information. The problem was resolved within a day of Oasis reporting it to OpenClaw, along with technical information and proof-of-concept code.
Cybersecurity researchers have disclosed details of a suspected AI-generated malware family named “Slopoly,” used by the threat actor Hive0163 for financial gain.
IBM X-Force researcher Golo Mühr said, “Although still relatively unspectacular, AI-generated malware such as Slopoly shows how easily threat actors can weaponize AI to develop new malware frameworks in a fraction of the time it used to take,” The Hacker News reported.
Hive0163's attacks are financially motivated, relying on extortion through large-scale data theft and ransomware. The group is linked to various malicious tools, including Interlock RAT, NodeSnake, Interlock ransomware, and the Junk fiction loader.
In a ransomware incident observed in early 2026, the group was found deploying Slopoly during the post-exploitation phase to gain persistent access to the compromised server.
Slopoly takes the form of a PowerShell script, believed to be generated by a builder, that is installed in the “C:\ProgramData\Microsoft\Windows\Runtime” folder. Persistence is established through a scheduled task named “Runtime Broker”.
There are signs that the malware was developed with the help of an as-yet-undetermined large language model (LLM). This includes the presence of extensive comments, logging, error handling, and accurately named variables.
The comments also describe the script as a "Polymorphic C2 Persistence Client," indicating that it's part of a command-and-control (C2) framework.
According to Mühr, “The script does not possess any advanced techniques and can hardly be considered polymorphic, since it's unable to modify its own code during execution. The builder may, however, generate new clients with different randomized configuration values and function names, which is standard practice among malware builders.”
The PowerShell script functions as a backdoor, sending system details to a C2 server. AI-assisted malware has been on the rise in recent times: families such as Slopoly, PromptSpy, and VoidLink show how attackers are using the technology to speed up malware creation and expand their operations.
IBM X-Force says the “introduction of AI-generated malware does not pose a new or sophisticated threat from a technical standpoint. It disproportionately enables threat actors by reducing the time an operator needs to develop and execute an attack.”
Agentic web browsers, which use AI to autonomously carry out tasks across websites on a user's behalf, can be trained against and fooled into facilitating phishing attacks. Attackers exploit the AI browsers' tendency to narrate their actions, turning that output against the same model to defeat its security checks.
According to security researcher Shaked Chen, “The AI now operates in real time, inside messy and dynamic pages, while continuously requesting information, making decisions, and narrating its actions along the way. Well, 'narrating' is quite an understatement - It blabbers, and way too much!,” The Hacker News reported. The researchers call this behavior "Agentic Blabbering": the AI browser continuously exposes what it sees, what it thinks, what it plans to do next, and what it deems safe or a threat.
By capturing the traffic between the browser and the AI services on the vendor's servers and feeding it as input to a Generative Adversarial Network (GAN), the researchers made Perplexity's Comet AI browser fall prey to a phishing attack within four minutes.
The research builds on earlier findings such as Scamlexity and VibeScamming, which showed that vibe-coding platforms and AI browsers can be coerced into generating scam pages and performing malicious tasks via prompt injection.
The attack surface shifts because the AI agent handles tasks without frequent human oversight: a scammer no longer has to trick the user, but instead seeks to deceive the AI model itself.
Chen said, “If you can observe what the agent flags as suspicious, hesitates on, and more importantly, what it thinks and blabbers about the page, you can use that as a training signal.” Chen added that the “scam evolves until the AI Browser reliably walks into the trap another AI set for it."
The aim is to build a “scamming machine” that iteratively refines and regenerates a phishing page until the agentic browser accepts it and carries out the attacker's commands, such as entering the victim's passwords into a malicious page built for refund scams.
Guardio is concerned about the development, saying that, “This reveals the unfortunate near future we are facing: scams will not just be launched and adjusted in the wild, they will be trained offline, against the exact model millions rely on, until they work flawlessly on first contact.”
Increasing regulatory data collection is now combining with Bitcoin's on-chain transparency, creating a trove of identity-linked data that criminals can abuse for coercive, real-world attacks.
Physical attacks against cryptocurrency holders are on the rise due to a number of factors, including social engineering, frequent major data breaches, KYC requirements, and regulatory data collection.
These incidents, frequently referred to as "wrench attacks," involve the use of threats or physical violence to extract private keys or force transactions. With France emerging as a focal point, the trend is highlighting a weakness in the industry's regulatory framework.
Violence has become the rule rather than the exception, with at least 47.2% of cases involving verified torture or physical assault and 51.5% involving firearms. There were 19 fatal incidents, resulting in 24 deaths overall and a 6.2% fatality rate. 2025 was the most violent year on record in terms of recorded cases, but analysts warn that the actual number of incidents is probably higher because of underreporting. All figures are based on cases that were publicly available at the time of reporting.
The risk profile for Bitcoin holders is very harsh. Transactions are irreversible once private keys are turned over under duress. Chargebacks, account freezes, and institutional recovery procedures are nonexistent. When coupled with actual compulsion, the protocol's famed finality becomes a liability.
France serves as an example of how rapidly this risk can escalate. The country recorded twenty bitcoin-related physical attacks in 2025, compared with a total of just four between 2017 and 2024. Eight more cases had already been reported by early February 2026, indicating that the rise is continuing rather than leveling off. Europe now accounts for around 40% of all incidents worldwide, up from about 22% in 2024.
OpenClaw addressed a high-severity security flaw that could have allowed a malicious website to connect to a locally running AI agent and take control of it. According to the Oasis Security report, “Our vulnerability lives in the core system itself – no plugins, no marketplace, no user-installed extensions – just the bare OpenClaw gateway, running exactly as documented.”
The researchers codenamed the threat ClawJacked. Tracked as CVE-2026-25253, it could have formed part of a severe vulnerability chain allowing any website to hijack a person's AI agent. The vulnerability existed in the software's main gateway: because OpenClaw is built to trust connections from the user's own system, it could have given attackers easy access.
On a developer's laptop, OpenClaw is installed and running. Its gateway, a local WebSocket server, is password-protected and bound to localhost. The attack begins when the developer is lured, through social engineering or another method, to a website controlled by the attacker. According to the Oasis report, “Any website you visit can open one to your localhost. Unlike regular HTTP requests, the browser doesn't block these cross-origin connections. So while you're browsing any website, JavaScript running on that page can silently open a connection to your local OpenClaw gateway. The user sees nothing.”
The research highlighted a clever trick involving WebSockets. Browsers generally work hard to prevent websites from meddling with local resources, but WebSockets are an exception: they are designed to stay “always-on” so that data can flow continuously in both directions.
The OpenClaw gateway assumed that any connection arriving from the user's own computer (localhost) must be safe. That assumption is dangerous: if a developer running OpenClaw visits a malicious website, a hidden script embedded in the page can connect over WebSocket and interact directly with the AI tool in the background, and the user will be none the wiser.
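The standard defense against this class of cross-site WebSocket hijacking is for the local service to validate the Origin header of the upgrade request and refuse connections initiated by web pages, since native local clients typically send no Origin header at all. The sketch below, written with Node's ws package, is a general illustration of that pattern rather than OpenClaw's actual patch; the port is a placeholder.

```typescript
// General mitigation sketch (not OpenClaw's actual patch): a local WebSocket
// server that rejects handshakes initiated from web pages by checking Origin.
import { WebSocketServer } from "ws";

const wss = new WebSocketServer({ host: "127.0.0.1", port: 18789 });

wss.on("connection", (socket, request) => {
  const origin = request.headers.origin;

  // Native local clients (CLIs, desktop apps) normally send no Origin header.
  // A browser always sends one, so any Origin here means a web page opened the socket.
  if (origin !== undefined) {
    console.log(`Rejecting browser-originated connection from ${origin}`);
    socket.close(1008, "Browser origins are not allowed"); // 1008 = policy violation
    return;
  }

  socket.on("message", (data) => {
    // Handle trusted local traffic here.
    console.log("Local client message:", data.toString());
  });
});
```

Pairing an Origin check with rate limiting that also applies to loopback, and with explicit user confirmation for new device pairings, would close off the attack path described above.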