Brave has started testing a new feature that allows its built-in assistant, Leo, to carry out browsing activities on behalf of the user. The capability is still experimental and is available only in the Nightly edition of the browser, which serves as Brave’s testing environment for early features. Users must turn on the option manually through Brave’s internal settings page before they can try it.
The feature introduces what Brave calls agentic AI browsing. In simple terms, it allows Leo to move through websites, gather information, and complete multi-step tasks without constant user input. Brave says the tool is meant to simplify activities such as researching information across many sites, comparing products online, locating discount codes, and creating summaries of current news. The company describes this trial as its initial effort to merge active AI support with everyday browsing.
Brave has stated openly that this technology comes with serious security concerns. Agentic systems can be manipulated by malicious websites through a method known as prompt injection, which attempts to make the AI behave in unsafe or unintended ways. The company warns that users should not rely on this mode for important decisions or any activity involving sensitive information, especially while it remains in early testing.
To limit these risks, Brave has placed the agent in its own isolated browser profile. This means the AI does not share cookies, saved logins, or browsing data from the user’s main profile. The agent is also blocked from areas that could create additional vulnerabilities. It cannot open the browser’s settings page, visit sites that do not use HTTPS, interact with the Chrome Web Store, or load pages that Brave’s safety system identifies as dangerous. Whenever the agent attempts a task that might expose the user to risk, the browser will display a warning and request the user’s confirmation.
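The restrictions described above amount to a URL policy gate the agent must pass before navigating anywhere. The sketch below illustrates the idea only; the rule set mirrors the article, but the function, the blocked-prefix list, and the Chrome Web Store URL are assumptions, not Brave's actual code.

```python
from urllib.parse import urlparse

# Illustrative deny-list, per the restrictions described above.
BLOCKED_PREFIXES = (
    "brave://settings",                   # the browser's settings pages
    "https://chromewebstore.google.com",  # the Chrome Web Store (assumed URL)
)

def agent_may_visit(url: str, flagged_dangerous: bool = False) -> bool:
    """Return True only if the agent is allowed to navigate to `url`."""
    if flagged_dangerous:                  # pages the safety system flags
        return False
    if url.startswith(BLOCKED_PREFIXES):   # explicitly blocked areas
        return False
    # Everything else must be served over HTTPS.
    return urlparse(url).scheme == "https"

assert agent_may_visit("https://example.com")
assert not agent_may_visit("http://example.com")        # plain HTTP rejected
assert not agent_may_visit("brave://settings/privacy")  # settings blocked
```

A real implementation would sit inside the navigation pipeline and trigger the confirmation prompt rather than silently returning False, but the gating logic is the same shape.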
Brave has added further oversight through what it calls an alignment checker. This is a separate monitoring system that evaluates whether the AI’s actions match what the user intended. Since the checker operates independently, it is less exposed to manipulation that may affect the main agent. Brave also plans to use policy-based restrictions and models trained to resist prompt-injection attempts to strengthen the system’s defenses. According to the company, these protections are designed so that the introduction of AI does not undermine Brave’s existing privacy promises, including its no-logs policy and its blocking of ads and trackers.
Users interested in testing the feature can enable it by installing Brave Nightly and turning on the “Brave’s AI browsing” option from the experimental flags page. Once activated, a new button appears inside Leo’s chat interface that allows users to launch the agentic mode. Brave has asked testers to share feedback and has temporarily increased payments on its HackerOne bug bounty program for security issues connected to AI browsing.
Many people believe they are safe online once they disable cookies, switch on private browsing, or limit app permissions. Yet these steps do not prevent one of the most persistent tracking techniques used today. Modern devices reveal enough technical information for websites to recognise them with surprising accuracy, and users can see this for themselves with a single click using publicly available testing tools.
This practice is known as device fingerprinting. It collects many small and unrelated pieces of information from your phone or computer, such as the type of browser you use, your display size, system settings, language preferences, installed components, and how your device handles certain functions. None of these details identify you directly, but when a large number of them are combined, they create a pattern that is specific to your device. This allows trackers to follow your activity across different sites, even when you try to browse discreetly.
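The mechanics can be sketched in a few lines: each attribute on its own is common, but hashing the combination yields an identifier that is often stable across visits. The attribute names and values below are illustrative, not a real collection script.

```python
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    """Combine many low-entropy device attributes into one identifier.

    None of these values is identifying alone, but the hash of the
    combination frequently is. Attribute names here are illustrative.
    """
    # Serialize deterministically so the same device yields the same hash.
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

device = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "screen": "1920x1080",
    "language": "en-GB",
    "timezone": "Europe/London",
    "installed_fonts": 143,        # a count, not the list itself
    "canvas_render_hash": "a91f",  # how this device draws a test image
}

print(fingerprint(device))  # the same device produces the same hash, site after site
```

Changing any single attribute changes the hash, which is why some browsers randomise a few of these values: the fingerprint then fails to match across visits.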
The risk is not just about being observed. Once a fingerprint becomes associated with a single real-world action, such as logging into an account or visiting a page tied to your identity, that unique pattern can then be connected back to you. From that point onward, any online activity linked to that fingerprint can be tied to the same person. This makes fingerprinting an effective tool for profiling behaviour over long periods of time.
Growing concerns around online anonymity are making this issue more visible. Recent public debates about identity checks, age verification rules, and expanded monitoring of online behaviour have already placed digital privacy under pressure. Fingerprinting adds an additional layer of background tracking that does not rely on traditional cookies and cannot be easily switched off.
This method has also spread far beyond web browsers. Many internet-connected devices, including smart televisions and gaming systems, can reveal similar sets of technical signals that help build a recognisable device profile. As more home electronics become connected, these identifiers grow even harder for users to avoid.
Users can test their own exposure through tools such as the Electronic Frontier Foundation’s browser evaluation page. By selecting the option to analyse your browser, you will either receive a notice that your setup looks common or that it appears unique compared to others tested. A unique result means your device stands out strongly among the sample and can likely be recognised again. Another testing platform demonstrates just how many technical signals a website can collect within seconds, listing dozens of attributes that contribute to a fingerprint.
Some browsers attempt to make fingerprinting more difficult by randomising certain data points or limiting access to high-risk identifiers. These protections reduce the accuracy of device recognition, although they cannot completely prevent it. A virtual private network can hide your network address, but it cannot block the internal characteristics that form a fingerprint.
Tracking also happens through mobile apps and background services. Many applications collect usage and technical data, and privacy labels do not always make this clear to users. Studies have shown that complex privacy settings and permission structures often leave people unaware of how much information their devices share.
Users should also be aware of design features that shift them out of protected environments. For example, when performing a search through a mobile browser, some pages include prompts that encourage the user to open a separate application instead of continuing in the browser. These buttons are typically placed near navigation controls, making accidental taps more likely. Moving into a dedicated search app places users in a different data-collection environment, where protections offered by the browser may no longer apply.
While there is no complete way to avoid fingerprinting, users can limit their exposure by choosing browsers with built-in privacy protections, reviewing app permissions frequently, and avoiding unnecessary redirections into external applications. Ultimately, the choice depends on how much value an individual places on privacy, but understanding how this technology works is the first step toward reducing risk.
OpenAI has officially entered the web-browsing market with ChatGPT Atlas, a new browser built on Chromium: the same open-source base that powers Google Chrome. At first glance, Atlas looks and feels almost identical to Chrome or Safari. The key difference is its built-in ChatGPT assistant, which allows users to interact with web pages directly. For example, you can ask ChatGPT to summarize a site, book tickets, or perform online actions automatically, all from within the browser interface.
While this innovation promises faster and more efficient browsing, privacy experts are increasingly worried about how much personal data the browser collects and retains.
How ChatGPT Atlas Uses “Memories”
Atlas introduces a feature called “memories”, which allows the system to remember users’ activity and preferences over time. This builds on ChatGPT’s existing memory function, which stores details about users’ interests, writing styles, and previous interactions to personalize future responses.
In Atlas, these memories could include which websites you visit, what products you search for, or what tasks you complete online. This helps the browser predict what you might need next, such as recalling the airline you often book with or your preferred online stores. OpenAI claims that this data collection aims to enhance user experience, not exploit it.
However, this personalization comes with serious privacy implications. Once stored, these memories can gradually form a comprehensive digital profile of an individual’s habits, preferences, and online behavior.
OpenAI’s Stance on Early Privacy Concerns
OpenAI has stated that Atlas will not retain critical information such as government-issued IDs, banking credentials, medical or financial records, or any activity related to adult content. Users can also manage their data manually: deleting, archiving, or disabling memories entirely, and can browse in incognito mode to prevent the saving of activity.
Despite these safeguards, recent findings suggest that some sensitive data may still slip through. According to The Washington Post, an investigation by a technologist at the Electronic Frontier Foundation (EFF) revealed that Atlas had unintentionally stored private information, including references to sexual and reproductive health services and even a doctor’s real name. These findings raise questions about the reliability of OpenAI’s data filters and whether user privacy is being adequately protected.
Broader Implications for AI Browsers
OpenAI is not alone in this race. Other companies, including Perplexity with its upcoming browser Comet, have also faced criticism for extensive data collection practices. Perplexity’s CEO openly admitted that collecting browser-level data helps the company understand user behavior beyond the AI app itself, particularly for tailoring ads and content.
The rise of AI-integrated browsers marks a turning point in internet use, combining automation and personalization at an unprecedented scale. However, cybersecurity experts warn that AI agents operating within browsers hold immense control — they can take actions, make purchases, and interact with websites autonomously. This power introduces substantial risks if systems malfunction, are exploited, or process data inaccurately.
What Users Can Do
For those concerned about privacy, experts recommend taking proactive steps:
• Opt out of the memory feature or regularly delete saved data.
• Use incognito mode for sensitive browsing.
• Review data-sharing and model-training permissions before enabling them.
AI browsers like ChatGPT Atlas may redefine digital interaction, but they also test the boundaries of data ethics and security. As this technology evolves, maintaining user trust will depend on transparency, accountability, and strict privacy protection.
Because Incognito mode leaves no trace in your browsing history, you may believe you are safe. However, this is not entirely accurate: Incognito has its drawbacks and doesn’t guarantee private browsing. But that doesn’t mean the feature is useless.

Private browsing mode is designed to keep your local browsing history secret. When a user opens an incognito window, the browser starts a separate session and temporarily stores its data, such as history and cookies, only for that session. Once the private session is closed, this temporary information is deleted and does not appear in your browsing history.
Incognito mode helps keep your browsing activity hidden from other people who use your device.
A common misconception among users is that it makes them invisible on the internet and hides everything they browse online. But that is not true.
1. It doesn’t hide user activity from the Internet Service Provider (ISP)
Every request you send travels through your ISP’s network. Your ISP can see the domains you visit (encrypted DNS hides only the lookups) and can inspect any unencrypted traffic. If you are on a corporate Wi-Fi network, your network admin can likewise see the websites you visit.
2. Incognito mode doesn’t stop websites from tracking users
When you are using Incognito, cookies are discarded at the end of the session, but websites can still track your online activity via device and browser fingerprinting. Sites build user profiles from distinctive device characteristics such as screen resolution, installed extensions, and browser configuration.
3. Incognito mode doesn’t hide your IP address
If you are blocked from a website, using Incognito mode won’t make it accessible, because it can’t change your IP address.
It may give a false sense of security, but Incognito mode doesn’t ensure privacy. Its real benefit is keeping local history private on shared devices.
There are other options to protect your online privacy, such as using a VPN to mask your IP address or choosing a browser with built-in anti-fingerprinting protections.
At DEF CON 33, independent security researcher Marek Tóth revealed a new class of attack called DOM-based extension clickjacking that can manipulate browser-based password managers and, in limited scenarios, hijack passkey authentication flows. This is not a failure of cryptography itself, but a breakdown in the layers surrounding it.
What is being attacked, and how?
Clickjacking is not new. In its classic form, an attacker overlays a transparent frame or control on a visible page so that a user thinks they are clicking one thing but actually triggers another.
What Tóth’s technique adds is the targeting of browser extensions’ UI elements, specifically the autofill prompts that password managers inject into web pages. The attacker’s script controls the page’s Document Object Model (DOM) and applies CSS tricks (such as setting opacity to zero or overlaying fake elements) so that a user’s genuine click (for example, “Accept cookies”) also activates the hidden autofill element. The result: the extension silently populates the fields, and the attacker’s script reads the filled data.
In many of Tóth’s tests, a single click was sufficient to trigger leakage of credentials, TOTP codes (2FA), credit card information, or personal data. In some setups, passkey workflows could also be subverted using “signed assertion hijacking,” if the server did not enforce session-bound challenges.
How serious is the exposure?
Tóth examined 11 popular password-manager extensions (such as Bitwarden, 1Password, LastPass, iCloud Passwords). All were vulnerable under default settings to at least one variant of the attack.
Among the risks:
Credential theft: Usernames, passwords and even stored TOTP codes could be auto-populated and exfiltrated.
Credit card data: Autofill of payment fields (card number, expiration, CVV) was exposed in several tests.
Passkey hijack: If the relying server does not bind the challenge to a session, an attacker controlling a page could co-opt a passkey login request.
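The server-side defence mentioned in that last risk, binding each authentication challenge to the session that requested it, can be sketched as follows. This is a simplified illustration of the principle only, not a WebAuthn implementation; the function names and the in-memory session store are hypothetical.

```python
import secrets

# Hypothetical in-memory store: session_id -> outstanding challenge.
_pending_challenges: dict[str, bytes] = {}

def issue_challenge(session_id: str) -> bytes:
    """Generate a fresh random challenge and bind it to this session."""
    challenge = secrets.token_bytes(32)
    _pending_challenges[session_id] = challenge
    return challenge

def verify_assertion(session_id: str, returned_challenge: bytes) -> bool:
    """Accept an assertion only if it answers the challenge issued to the
    same session. A hijacked assertion carries a challenge bound to some
    other session, so it is rejected here."""
    expected = _pending_challenges.pop(session_id, None)  # single use
    return expected is not None and secrets.compare_digest(expected, returned_challenge)

# Legitimate flow: challenge issued and answered within one session.
c = issue_challenge("session-A")
assert verify_assertion("session-A", c)

# Hijack attempt: an assertion produced for session-A cannot complete session-B.
c2 = issue_challenge("session-A")
assert not verify_assertion("session-B", c2)
```

In a real deployment the challenge also travels inside the signed client data and the server verifies the signature; the point here is only that without the per-session binding shown above, a signed assertion can be replayed into a session the attacker controls.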
Some vendors have already released patches. For example, Enpass addressed clickjacking in browser extensions in version 6.11.6. Other tools remain at risk under certain configurations.
Why this doesn’t mean cryptographic failure
It is critical to clarify: the underlying passkey standards (WebAuthn / FIDO protocols) were not broken. Instead, the attack targets the implementation and environment around them, namely the browser’s interaction with extension UI. The exploit is possible only when the extension injects visible elements into the page DOM, and when an attacker can manipulate those elements.
In other words, passkeys are strong in theory. But every layer above them (browser, extension, site) must preserve integrity, or that strength can be defeated.
What must users and organizations do?
Users should:
1. Update your browser and your password-manager extensions immediately; enable auto-update.
2. Disable inline autofill where possible; prefer manual copy-paste or invoke filling only through the extension’s menu.
3. On Chromium-based browsers, set extension site access to “on click,” not “all sites.”
4. Remove or disable unused extensions.
5. For high-value accounts, prefer platform-native passkey or hardware-backed authenticators rather than extension-based credentials.
Organizations should:
• Audit extension policies and restrict or whitelist extensions.
• Enforce secure best practices on web apps (e.g., session-bound challenges with passkeys).
• Encourage or mandate the use of vetted and updated password-management tools.
This disclosure emphasizes that security is a chain: cryptographic strength is only as strong as the weakest link around it. Passkeys are an important evolution beyond passwords, but until every layer (browser, extensions, applications) is hardened, risk remains. Act now before attackers exploit complacency.