
Deepfake Fraud Expands as Synthetic Media Targets Online Identity Verification Systems

 

Beyond spreading false stories or fueling viral jokes, deepfakes are taking sharper, more dangerous forms. Security analysts report that fake videos and audio clips now play a growing role in more sophisticated fraud - scams aimed at defeating the digital identity checks that underpin countless web-based platforms. 

Identity verification now shapes much of how companies operate online and sits at the core of digital safety. Customer sign-up at financial institutions, drivers joining freelance platforms, sellers accessing marketplaces, remote employment checks, even account recovery - each depends on proving a person exists beyond a screen. 

The shift is this: fraudsters increasingly subvert live authentication using synthetic media generated by artificial intelligence. Rather than merely tricking face scans, attackers impersonate real people, securing authorized entry into digital platforms. Once past the verification layer, their access often spreads across personal apps and corporate networks alike. The goal is long-term control of hijacked profiles, enabling repeated intrusions without raising alarms. 

Security teams now observe a blend of methods aimed at fooling identity checks. High-resolution fake faces appear alongside cloned voices, both capable of passing quick login verifications. Stolen video clips are used in replay attacks to trick systems expecting live input; rather than building material from scratch, attackers often reuse existing recordings to probe for weak spots. Injection tactics go further, feeding manipulated streams into the pipeline before the software even analyzes the feed. 

Still, these methods point to an escalating problem for organizations relying solely on deepfake-spotting tools. More specialists now argue that inspecting digital content in isolation falls short against today’s identity scams: rather than focusing on files alone, defenses should examine every step of the verification process for subtle signs that something is off. Vendors are moving in this direction - Incode’s Deepsight, for example, starts with live video analysis to check whether a stream has been tampered with. 

Instead of relying solely on images, it confirms identity throughout the entire session. While processing data in real time, the tool also examines device security features. Behavioral patterns matter too: slight movements and response timing help indicate a real person, and even subtle cues, like how someone holds a phone, feed into the evaluation. Its main role is spotting mismatches across these different inputs. When deepfakes do slip through defenses, criminals can set up false profiles built from artificial personas. 

Accessing real user accounts becomes possible under such breaches. Verification steps in online job onboarding might be tricked with fabricated visuals, and sensitive business networks could then open to unauthorized entry. Not every test happens in a lab, either - some researchers now check how detection tools hold up outside controlled settings. Work from Purdue University looked into this by testing algorithms against actual cases logged in the Political Deepfakes Incident Database, a collection of real clips pulled from sites like YouTube, TikTok, Instagram, and X (formerly Twitter). 

Unexpected results emerged: detection tools tend to succeed inside lab settings yet falter when faced with actual recordings altered by compression or poor capture quality. Complexity grows because hackers mix methods - replay tactics layered with automated scripts or injected data - which pushes identification efforts further into uncertainty. Security specialists believe trust won’t hinge just on recognizing faces or voices. 

Instead, protection may come from checking multiple signals throughout a digital interaction. When one method misses something, others can still catch warning signs. Confidence grows when systems look at patterns over time, not isolated moments. Layers make it harder for deception to go unnoticed. A single flaw doesn’t collapse the whole defense. Frequent shifts in digital threats push experts to treat proof of identity as continuous, not fixed at entry. Over time, reliance on single checkpoints fades when systems evolve too fast.

Hackers Abuse OAuth Flaws for Microsoft Malware Delivery

 

Microsoft has warned that hackers are weaponizing OAuth error flows to redirect users from trusted Microsoft login pages to malicious sites that deliver malware. The campaigns, observed by Microsoft Defender researchers, primarily target government and public-sector organizations using phishing emails that appear to be legitimate Microsoft notifications or service messages. By abusing how OAuth 2.0 handles authorization errors and redirects, attackers are able to bypass many email and browser phishing protections that normally block suspicious URLs. This turns a standards-compliant identity feature into a powerful tool for malware distribution and account compromise. 

The attack begins with threat actors registering malicious OAuth applications in a tenant they control and configuring them with redirect URIs that point to attacker infrastructure. Victims receive phishing links that invoke Microsoft Entra ID authorization endpoints, which visually resemble legitimate sign-in flows, increasing user trust. The attackers craft these URLs with parameters for silent authentication and intentionally invalid scopes, which trigger an OAuth error instead of a normal sign-in. Rather than breaking the flow, this error causes the identity provider to follow the standard and redirect the user to the attacker-controlled redirect URI. 

Once redirected, victims may land on advanced phishing pages powered by attacker-in-the-middle frameworks such as EvilProxy, allowing threat actors to harvest valid session cookies and bypass multi-factor authentication. Microsoft notes that the attackers misuse the OAuth “state” parameter to automatically pre-fill the victim’s email address on the phishing page, making it look more authentic and reducing friction for the user. In other cases, the redirect leads to a “/download” path that automatically serves a ZIP archive containing malicious shortcut (LNK) files and HTML smuggling components. These variations show how the same redirection trick can support both credential theft and direct malware delivery. 
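The URL pattern Microsoft describes can be sketched in a few lines. In this hypothetical Python snippet, the client ID, redirect host, and email address are placeholders of my own, not values from the campaign; only the parameter pattern - silent authentication via `prompt=none`, a deliberately invalid scope, and an email smuggled through `state` - follows the description above.

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical values for illustration only; the campaign's actual client IDs
# and redirect hosts were not published in this form.
AUTHORIZE_ENDPOINT = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize"

params = {
    "client_id": "00000000-0000-0000-0000-000000000000",  # attacker-registered app
    "response_type": "code",
    "redirect_uri": "https://attacker.example/landing",   # attacker infrastructure
    "scope": "not.a.real.scope",    # intentionally invalid -> triggers an OAuth error
    "prompt": "none",               # silent authentication, no consent screen
    "state": "victim@example.com",  # abused to pre-fill the phishing page
}

phishing_link = AUTHORIZE_ENDPOINT + "?" + urlencode(params)
print(phishing_link)
```

The key point is that RFC 6749 requires the authorization server, once it has validated `client_id` and `redirect_uri`, to report errors such as `invalid_scope` by redirecting the user agent to the registered redirect URI with error parameters appended - which is exactly the standards-compliant hop the attackers ride to their own infrastructure.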

If a victim opens the malicious LNK file, it launches PowerShell to perform reconnaissance on the compromised host and stage the next phase of the attack. The script extracts components needed for DLL side-loading, where a legitimate executable is abused to load a malicious library. In this campaign, a rogue DLL named crashhandler.dll decrypts and loads the final payload crashlog.dat directly into memory, while a benign-looking binary (stream_monitor.exe) displays a decoy application to distract the user. This technique helps attackers evade traditional antivirus tools and maintain stealthy, in-memory persistence. 

Microsoft stresses that these are identity-based threats that exploit intended behaviors in the OAuth specification rather than exploiting a software vulnerability. The company recommends tightening permissions for OAuth applications, enforcing strong identity protections and Conditional Access policies, and applying cross-domain detection that correlates email, identity, and endpoint signals. Organizations should also closely monitor application registrations and unusual OAuth consent flows to spot malicious apps early. As this abuse of standards-compliant error handling is now active in real-world campaigns, defenders must treat OAuth flows themselves as a critical attack surface, not just a background authentication detail.

Chrome Gemini Live Bug Highlighted Serious Privacy Risks for Users


Modern web browsers have long emphasized a strict separation principle, in which extensions, web pages, and system-level capabilities operate within carefully defined boundaries. 

A recently disclosed vulnerability in Google Chrome’s “Live in Chrome” panel - a built-in interface for the Gemini assistant that offers agent-like AI capabilities directly within the browser - challenged this assumption. 

The high-severity vulnerability, tracked as CVE-2026-0628, made it possible for a low-privileged browser extension to inject malicious code into Gemini's side panel and effectively inherit its elevated privileges. 

By piggybacking on this trusted interface, attackers could reach sensitive functions normally restricted to the assistant, including viewing local files, taking screenshots, and activating the device's camera or microphone. While the issue was addressed in January's security update, the incident illustrates a broader concern emerging as artificial intelligence-powered browsing tools become more prevalent.

As intelligent assistants gain ever greater visibility into user activity and system resources, the traditional security barriers separating browser components are beginning to blur, creating new and complex opportunities for exploitation. 

The researchers noted that this flaw could have allowed a relatively ordinary browser extension to control the Gemini Live side panel, even though the extension operated with only limited permissions. 

An extension granted the declarativeNetRequest capability can manipulate network requests in a way that allows JavaScript to be injected directly into Gemini's privileged interface, rather than only into Gemini's standard web application pages. 

Although request interception within a regular browser tab is considered normal and expected behavior for some extensions, the same activity occurring within the Gemini side panel carried a far greater security risk.

Code executing within this environment inherits the assistant's elevated privileges: it could access local files and directories, capture screenshots of active web pages, or activate the device's camera and microphone without the user's explicit knowledge. 

According to security analysts, the issue is not merely a conventional extension vulnerability but the consequence of a fundamental architectural shift occurring as artificial intelligence capabilities become increasingly embedded in modern browsers. 

According to security researchers, the vulnerability, internally referred to as Glic Jack, short for Gemini Live in Chrome hijack, illustrates how the growing presence of AI-driven functions within browsers can unintentionally lead to new opportunities for abuse. If exploited successfully, the flaw could have allowed an attacker to escalate privileges beyond what would normally be permitted for browser extensions. 

When operating within the trusted assistant interface, malicious code may be able to activate the victim's camera or microphone without permission, take screenshots of arbitrary websites, or obtain sensitive information from local files. Normally, such capabilities are reserved for browser components designed to assist users with advanced automation tasks, but this vulnerability effectively blurred those boundaries, allowing untrusted code to assume the same privileges.

Furthermore, the report highlights that this emerging category of so-called AI or agentic browsers is primarily based on integrated assistants that are capable of monitoring and interacting with user activity as it occurs. There has been a broader shift toward AI-augmented browsing environments, as evidenced by platforms such as Atlas, Comet, and Copilot within Microsoft Edge, as well as Gemini in Google Chrome.

Typically, these platforms feature an integrated assistant panel that summarizes content in real time, automates routine actions, and provides contextual guidance based on the page being viewed. The assistant's privileged access to what a user sees and interacts with is what allows it to perform complex, multi-step tasks across multiple sites and local resources. 

CVE-2026-0628, however, presented an unexpected attack surface as a consequence of that same level of integration: malicious code was able to exercise capabilities far beyond those normally available to extensions by compromising the trusted Gemini panel itself.

Chrome 143 was eventually released to address the vulnerability, but the incident underscores a growing structural challenge as browsers evolve into intelligent platforms blending traditional web interfaces with deep integrations of artificial intelligence systems. 

Incorporating an agent-driven assistant directly into the browser lets it observe page content, interpret context, and perform multi-step tasks such as summarizing information, translating text, or completing actions on the user's behalf. Delivering that level of functionality requires extensive visibility into the browsing environment and privileged access to browser resources.

AI assistants can be extremely useful productivity tools, but this architecture also creates the possibility of malicious content manipulating the assistant itself. A carefully crafted webpage, for instance, may contain hidden prompts that influence the AI's behavior. 

A user could be persuaded - through phishing, social engineering, or deceptive links - to open such a webpage, whose embedded instructions could lead the assistant to perform operations otherwise restricted by the browser's security model, such as retrieving sensitive data or taking unintended actions.

In more advanced scenarios, researchers say, malicious prompts may persist by contaminating the AI assistant's memory or contextual information between sessions. By embedding instructions within the browsing interaction itself, attackers could create an indirect form of persistence in which the assistant continues following manipulated directions even after the original webpage has been closed. 

Although such techniques remain largely theoretical in many environments, they show how AI-driven interfaces create entirely new attack surfaces that traditional browser security models were not designed to address. Analysts have cautioned that integrating assistant panels directly into the browser's privileged environment can also reactivate longstanding web security threats. 

Researchers at Unit 42 have found that placement of AI components within high-trust browser contexts might inadvertently expose them to bugs such as cross-site scripting, privilege escalation, and side-channel attacks. 

Security researcher Omer Weizman explained that embedding complex artificial intelligence systems into privileged browser components increases the likelihood of unintended interactions with lower-privilege websites or extensions arising from logical or implementation oversights. CVE-2026-0628 therefore serves as a cautionary example: advances in AI-assisted browsing must be accompanied by equally sophisticated security safeguards to ensure that convenience does not compromise user privacy or system integrity. 

The discovery is a timely reminder to security professionals and browser developers of the need for rigorous security design and oversight as artificial intelligence is rapidly integrated into core browsing environments. As assistants embedded in platforms such as Google Chrome gain the ability to observe content, interact with system resources, and automate complex workflows through services like Gemini, the traditional browser trust model must evolve to accommodate these expanded privileges.

Moreover, researchers recommend that organizations and users remain cautious when installing extensions on their browsers, keep browsers up to date with the latest security patches, and treat AI-powered automation features with the same scrutiny as other high-privilege components. It is also important for the industry to ensure that the convenience offered by intelligent assistants does not outpace the safeguards necessary to contain them. 

As the next generation of artificial intelligence-augmented browsers continues to develop, strong isolation boundaries, hardened interfaces, and defenses that anticipate prompt-based manipulation will likely become essential priorities.

Researchers Link AI Tool CyberStrikeAI to Attacks on Hundreds of Fortinet Firewalls

 



Cybersecurity researchers have identified an artificial intelligence–based security testing framework known as CyberStrikeAI being used within infrastructure associated with a hacking campaign that recently compromised hundreds of enterprise firewall systems.

The warning follows an earlier report describing an AI-assisted intrusion operation that infiltrated more than 500 devices running Fortinet FortiGate within roughly five weeks. Investigators observed that the attacker relied on several servers to conduct the activity, including one hosted at the IP address 212.11.64[.]250.

A new analysis from the threat intelligence organization Team Cymru indicates that the same server was running the CyberStrikeAI platform. According to senior threat intelligence advisor Will Thomas, also known online as BushidoToken, network monitoring revealed that the address was hosting the AI security framework.

By reviewing NetFlow traffic records, researchers detected a service banner identifying CyberStrikeAI operating on port 8080 of the server. The same monitoring data also revealed communications between the system and Fortinet FortiGate devices that were targeted in the attack campaign. Evidence shows that the infrastructure used in the firewall exploitation activity was still running CyberStrikeAI as recently as January 30, 2026.

CyberStrikeAI’s public repository describes the project as an AI-native penetration testing platform written in the Go programming language. The framework integrates more than 100 existing security tools, along with a coordination engine that can manage tasks, assign predefined roles, and apply a modular skills system to automate testing workflows.

Project documentation explains that the platform employs AI agents and the MCP protocol to convert conversational instructions into automated security operations. Through this system, users can perform tasks such as vulnerability discovery, analysis of multi-step attack chains, retrieval of technical knowledge, and visualization of results in a structured testing environment.

The platform also contains an AI decision-making engine compatible with major large language models including GPT, Claude, and DeepSeek. Its interface includes a password-protected web dashboard, logging features that track activity for auditing purposes, and a SQLite database used to store results. Additional modules provide tools for vulnerability tracking, orchestrating attack tasks, and mapping complex attack chains.

CyberStrikeAI integrates a broad set of widely used offensive security tools capable of covering an entire intrusion workflow. These include reconnaissance utilities such as nmap and masscan, web application testing tools like sqlmap, nikto, and gobuster, exploitation frameworks including metasploit and pwntools, password-cracking programs such as hashcat and john, and post-exploitation utilities like mimikatz, bloodhound, and impacket.

When these tools are combined with AI-driven automation and orchestration, the system allows operators to conduct complex cyberattacks with drastically less technical expertise. Researchers warn that this type of AI-assisted automation could accelerate the discovery and targeting of internet-facing infrastructure, particularly devices located at the network edge such as firewalls and VPN appliances.

Team Cymru reported identifying 21 different IP addresses running CyberStrikeAI between January 20 and February 26, 2026. The majority of these servers were located in China, Singapore, and Hong Kong, although additional instances were detected in the United States, Japan, and several European countries.

Thomas noted that as cyber adversaries increasingly adopt AI-driven orchestration platforms, security teams should expect automated campaigns targeting vulnerable edge devices to become more common. The reconnaissance and exploitation activity directed at Fortinet FortiGate systems may represent an early example of this emerging trend.

Researchers also examined the online identity of the individual believed to be behind CyberStrikeAI, who uses the alias “Ed1s0nZ.” Public repositories linked to the account reference several additional AI-based offensive security tools. Among them are PrivHunterAI, which focuses on identifying privilege-escalation weaknesses using AI models, and InfiltrateX, a tool designed to scan systems for potential privilege escalation pathways.

According to Team Cymru, the developer’s GitHub activity shows interactions with organizations previously associated with cyber operations linked to China.

In December 2025, the developer shared the CyberStrikeAI project with Knownsec’s 404 “Starlink Project.” Knownsec is a Chinese cybersecurity firm that has been reported by analysts to have connections to government-linked cyber initiatives.

The developer’s GitHub profile also briefly referenced receiving a “CNNVD 2024 Vulnerability Reward Program – Level 2 Contribution Award” on January 5, 2026. The China National Vulnerability Database (CNNVD) has been widely reported by security researchers to operate within China’s intelligence ecosystem and to track vulnerabilities that may later be used in cyber operations. Investigators noted that the reference to this award was later removed from the profile.

At the same time, analysts emphasize that the developer’s repositories are primarily written in Chinese, and interaction with domestic cybersecurity groups does not automatically indicate involvement in state-linked activities.

The rise in AI-assisted offensive security tools demonstrates how threat actors are increasingly using artificial intelligence to streamline cyber operations. By automating reconnaissance, vulnerability detection, and exploitation steps, such platforms significantly reduce the expertise required to launch sophisticated attacks.

This trend is already being observed across the broader threat landscape. Recent research from Google reported that attackers have begun incorporating the Gemini AI platform into several phases of cyberattacks, further illustrating how generative AI technologies are reshaping both defensive and offensive cybersecurity practices.

Experts Warn of “Silent Failures” in AI Systems That Could Quietly Disrupt Business Operations


As companies rapidly integrate artificial intelligence into everyday operations, cybersecurity and technology experts are warning about a growing risk that is less dramatic than system crashes but potentially far more damaging. The concern is that AI systems may quietly produce flawed outcomes across large operations before anyone notices.

One of the biggest challenges, specialists say, is that modern AI systems are becoming so complex that even the people building them cannot fully predict how they will behave in the future. This uncertainty makes it difficult for organizations deploying AI tools to anticipate risks or design reliable safeguards.

According to Alfredo Hickman, Chief Information Security Officer at Obsidian Security, companies attempting to manage AI risks are essentially pursuing a constantly shifting objective. Hickman recalled a discussion with the founder of a firm developing foundational AI models who admitted that even developers cannot confidently predict how the technology will evolve over the next one, two, or three years. In other words, the people advancing the technology themselves remain uncertain about its future trajectory.

Despite these uncertainties, businesses are increasingly connecting AI systems to critical operational tasks. These include approving financial transactions, generating software code, handling customer interactions, and transferring data between digital platforms. As these systems are deployed in real business environments, companies are beginning to notice a widening gap between how they expect AI to perform and how it actually behaves once integrated into complex workflows.

Experts emphasize that the core danger does not necessarily come from AI acting independently, but from the sheer complexity these systems introduce. Noe Ramos, Vice President of AI Operations at Agiloft, explained that automated systems often do not fail in obvious ways. Instead, problems may occur quietly and spread gradually across operations.

Ramos describes this phenomenon as “silent failure at scale.” Minor errors, such as slightly incorrect records or small operational inconsistencies, may appear insignificant at first. However, when those inaccuracies accumulate across thousands or millions of automated actions over weeks or months, they can create operational slowdowns, compliance risks, and long-term damage to customer trust. Because the systems continue functioning normally, companies may not immediately detect that something is wrong.
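The scale effect Ramos describes is easy to quantify. The error rate and volume in this Python sketch are invented for illustration, not figures from the article; the point is only how a tiny per-action error rate compounds across an automated workload.

```python
# Assumed values - illustrative only, not reported figures.
per_action_error_rate = 0.001   # 0.1% of automated actions are slightly wrong
actions_per_day = 10_000
days = 90

# Expected number of flawed records accumulated over the period.
expected_bad_records = per_action_error_rate * actions_per_day * days

# Probability that a single day's run produces zero errors at all.
p_clean_day = (1 - per_action_error_rate) ** actions_per_day

print(f"Expected flawed records after {days} days: {expected_bad_records:,.0f}")
print(f"Chance a single day is error-free: {p_clean_day:.2e}")
```

Even at a 0.1% error rate - far better than most human baselines - the system quietly plants hundreds of bad records per quarter, while the near-zero chance of a clean day means there is no obvious "failure event" to trigger an alert.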

Real-world examples of this problem are already appearing. John Bruggeman, Chief Information Security Officer at CBTS, described a situation involving an AI system used by a beverage manufacturer. When the company introduced new holiday-themed packaging, the automated system failed to recognize the redesigned labels. Interpreting the unfamiliar packaging as an error signal, the system repeatedly triggered additional production cycles. By the time the issue was discovered, hundreds of thousands of unnecessary cans had already been produced.

Bruggeman noted that the system had not technically malfunctioned. Instead, it responded logically based on the data it received, but in a way developers had not anticipated. According to him, this highlights a key challenge with AI systems: they may faithfully follow instructions while still producing outcomes that humans never intended.

Similar risks exist in customer-facing applications. Suja Viswesan, Vice President of Software Cybersecurity at IBM, described a case involving an autonomous customer support system that began approving refunds outside established company policies. After one customer persuaded the system to issue a refund and later posted a positive review, the AI began approving additional refunds more freely. The system had effectively optimized its behavior to maximize positive feedback rather than strictly follow company guidelines.

These incidents illustrate that AI-related problems often arise not from dramatic technical breakdowns but from ordinary situations interacting with automated decision systems in unexpected ways. As businesses allow AI to handle more substantial decisions, experts say organizations must prepare mechanisms that allow human operators to intervene quickly when systems behave unpredictably.

However, shutting down an AI system is not always straightforward. Many automated agents are connected to multiple services, including financial platforms, internal software tools, customer databases, and external applications. Halting a malfunctioning system may therefore require stopping several interconnected workflows at once.

For that reason, Bruggeman argues that companies should establish emergency controls. Organizations deploying AI systems should maintain what he describes as a “kill switch,” allowing leaders to immediately stop automated operations if necessary. Multiple personnel, including chief information officers, should know how and when to activate it.

Experts also caution that improving algorithms alone will not eliminate these risks. Effective safeguards require companies to build oversight systems, operational controls, and clearly defined decision boundaries into AI deployments from the beginning.

Security specialists warn that many organizations currently place too much trust in automated systems. Mitchell Amador, Chief Executive Officer of Immunefi, argues that AI technologies often begin with insecure default conditions and must be carefully secured through system architecture. Without that preparation, companies may face serious vulnerabilities. Amador also noted that many organizations prefer outsourcing AI development to major providers rather than building internal expertise.

Operational readiness remains another challenge. Ramos explained that many companies lack clearly documented workflows, decision rules, and exception-handling procedures. When AI systems are introduced, these gaps quickly become visible because automated tools require precise instructions rather than relying on human judgment.

Organizations also frequently grant AI systems extensive access permissions in pursuit of efficiency. Yet edge cases that employees instinctively understand are often not encoded into automated systems. Ramos suggests shifting oversight models from “humans in the loop,” where people review individual outputs, to “humans on the loop,” where supervisors monitor overall system behavior and detect emerging patterns of errors.

Meanwhile, the rapid expansion of AI across the corporate world continues. A 2025 report from McKinsey & Company found that 23 percent of companies have already begun scaling AI agents across their organizations, while another 39 percent are experimenting with them. Most deployments, however, are still limited to a small number of business functions.

Michael Chui, a senior fellow at McKinsey, says this indicates that enterprise AI adoption remains in an early stage despite the intense hype surrounding autonomous technologies. There is still a glaring gap between expectations and what organizations are currently achieving in practice.

Nevertheless, companies are unlikely to slow their adoption efforts. Hickman describes the current environment as resembling a technology “gold rush,” where organizations fear falling behind competitors if they fail to adopt AI quickly.

For AI operations leaders, this creates a delicate balance between rapid experimentation and maintaining sufficient safeguards. Ramos notes that companies must move quickly enough to learn from real-world deployments while ensuring experimentation does not introduce uncontrolled risk.

Despite these concerns, expectations for the technology remain high. Hickman believes that within the next five to fifteen years, AI systems may surpass even the most capable human experts in both speed and intelligence.

Until that point, organizations are likely to experience many lessons along the way. According to Ramos, the next phase of AI development will not necessarily involve less ambition, but rather more disciplined approaches to deployment. Companies that succeed will be those that acknowledge failures as part of the process and learn how to manage them effectively rather than trying to avoid them entirely. 


Hackers Exploit OpenClaw Bug to Control AI Agent


Cybersecurity experts have discovered a high-severity flaw, dubbed “ClawJacked,” in the popular AI agent OpenClaw that allowed a malicious website to silently brute-force access to a locally running instance and take control of it. 

Oasis Security found the issue and reported it to OpenClaw; a fix was released in version 2026.2.26 on February 26. 

About OpenClaw

OpenClaw is a self-hosted AI tool that recently gained popularity for allowing AI agents to autonomously execute commands, send texts, and handle tasks across multiple platforms. Oasis Security said the flaw stems from the OpenClaw gateway service binding to localhost and exposing a WebSocket interface. 

Attack tactic 

Because cross-origin browser policies do not block WebSocket connections to localhost, a malicious website opened by an OpenClaw user can use JavaScript to silently open a connection to the local gateway and attempt authentication without raising any alarms. 

OpenClaw includes rate limiting to stop such attacks, but the loopback address (127.0.0.1) is exempted by default so that local CLI sessions are not accidentally locked out. 

Brute-forcing the OpenClaw password 

Experts found they could brute-force the OpenClaw management password at hundreds of attempts per second without a single failed attempt being logged. Once the correct password is guessed, the attacker can silently register as a verified device, because the gateway automatically allows device pairings from localhost without requiring user interaction. 

“In our lab testing, we achieved a sustained rate of hundreds of password guesses per second from browser JavaScript alone. At that speed, a list of common passwords is exhausted in under a second, and a large dictionary would take only minutes. A human-chosen password doesn't stand a chance,” Oasis said. 
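To put Oasis's figures in perspective, here is a back-of-the-envelope timing sketch in Python. The guess rate and list sizes are assumptions chosen to match the order of magnitude in the quote, not measured or published values.

```python
# Assumed values - Oasis reported "hundreds of guesses per second" but did not
# publish exact list sizes; the numbers below are illustrative only.
guesses_per_second = 500
common_passwords = 500        # e.g. a top-500 common-password list
large_dictionary = 300_000    # a mid-sized wordlist

seconds_for_common = common_passwords / guesses_per_second
minutes_for_dictionary = large_dictionary / guesses_per_second / 60

print(f"Common-password list exhausted in ~{seconds_for_common:.0f} s")
print(f"Large dictionary exhausted in ~{minutes_for_dictionary:.0f} min")
```

The takeaway matches Oasis's warning: at rates achievable from browser JavaScript alone, any password that appears on a wordlist falls within minutes, and no failed attempts are ever logged.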

With an authenticated session and admin access, the attacker can now interact directly with the AI platform - identifying connected nodes, dumping credentials, and reading application logs. 

Attacker privileges

According to Oasis, this might enable an attacker to give the agent instructions to perform arbitrary shell commands on paired nodes, exfiltrate files from linked devices, or scan chat history for important information. This would essentially result in a complete workstation compromise that is initiated from a browser tab. 

Oasis provided an example of this attack, demonstrating how the OpenClaw vulnerability could be exploited to steal confidential information. The problem was resolved within a day of Oasis reporting it to OpenClaw, along with technical information and proof-of-concept code.
