Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Technology. Show all posts

AI Actress Tilly Norwood's Controversial Oscars Music Video Sparks Debate

 

Tilly Norwood, billed as the world's first AI-generated actress, has released a new music video titled "Take The Lead" just ahead of the Oscars, promoting AI's role in entertainment. Created by Particle6 Group's Xicoia division under CEO Eline van der Velden, the video features Norwood singing pro-AI lyrics like "AI’s not the enemy, it’s the key" while riding a pink flamingo and performing in stadiums.Despite claims of 18 human collaborators, including costume designers and prompters, the project has drawn sharp criticism for its uncanny visuals and generic composition. 

The video's launch ties into Hollywood's awards season, with Norwood teasing an Oscars appearance in the caption: "Can’t wait to go to the Oscars! Does anyone know if they have free valet parking for my flamingo?" However, view counts remain low, hovering around 4,000 to 23,000 shortly after upload, with comments largely mocking its lack of "human spark."Norwood's social media reflects uneven popularity: nearly 90,000 Instagram followers but under 4,000 YouTube subscribers and just 3 on TikTok. 

Lyrics drawn from van der Velden's essay defend AI creativity, with lines like "When they talk about me, they don’t see the human spark" amid visuals of falling dollar bills with garbled symbols. Critics highlight the "standard AI sheen" where details falter under scrutiny, questioning if it truly showcases innovation. Particle6 positions this as part of the expanding "Tillyverse," a digital universe for AI characters, recently bolstered by hires like Amazon's Mark Whelan for strategy. 

Backlash has been fierce since Norwood's 2025 debut. SAG-AFTRA condemned her, actors threatened boycotts of agencies "signing" her, and outlets like The Guardian slammed early projects like "AI Commissioner." Even supporter Kevin O’Leary misnamed her "Norwell Tillies" while advocating AI replace background actors.Particle6 insists on building AI-human collaborations, but no major film or TV roles have materialized beyond short content. 

As the Oscars approach, Norwood's stunt underscores AI's disruptive potential in Hollywood, blending hype with hostility.While Particle6 eyes a "Scarlett Johansson of AI," industry resistance persists amid fears of job losses. The "Tillyverse" launch later this year could escalate tensions, forcing a reckoning on AI's creative boundaries.

Can a VPN Protect Your Privacy During Age Verification? A Complete Breakdown

 



The heightened use of age verification systems across the internet is directly influencing how people think about online privacy tools. As more governments introduce these requirements, interest in privacy-focused technologies is rising in parallel.

Age verification laws are now being implemented in multiple countries, requiring millions of users to submit personal and often sensitive information before accessing certain websites, particularly those hosting adult or restricted content. While policymakers argue that these rules are necessary to prevent minors from being exposed to harmful material, critics continue to highlight the serious privacy risks associated with handing over such data.

Virtual Private Networks, commonly known as VPNs, are widely marketed as tools designed to protect user privacy and secure online data. In recent months, there has been a noticeable surge in VPN adoption in regions where age verification laws have come into force. This trend was particularly evident in the United Kingdom and the United States during the latter half of 2025, and again in Australia in March 2026.

However, whether VPNs can truly protect users during age verification processes is not a simple yes-or-no question. Their capabilities are limited in certain areas, and understanding both their strengths and weaknesses is essential.


What VPNs Can Protect

At a fundamental level, VPNs work by encrypting a user’s internet connection, which prevents third parties from easily observing online activity. This includes internet service providers, network administrators, and in some cases, government surveillance systems.

When a VPN connection is active, external observers are generally unable to determine which websites or applications a user is accessing. In the context of age verification, this means that third parties monitoring network traffic will not be able to tell whether a user has visited a platform that requires identity checks, provided the VPN is properly configured.

Certain platforms, including X (formerly Twitter), Reddit, and Telegram, have introduced age verification requirements in specific regions. Many adult websites have implemented similar systems.

In addition to hiding browsing activity, VPNs also encrypt the data being transmitted. This ensures that any information entered during the verification process cannot be easily intercepted by external parties while it is in transit. Even after the verification step is completed, ongoing internet activity continues to be routed through the VPN’s secure tunnel, maintaining a level of privacy.

Modern VPN services are also evolving into broader cybersecurity platforms. Leading providers such as NordVPN, Surfshark, and ExpressVPN now offer additional tools beyond basic encryption. These may include password management systems, encrypted cloud storage, antivirus protection, and identity theft monitoring services.

Some of these services also provide features such as dark web monitoring, financial compensation options in cases of identity theft, credit tracking, and access to support teams that assist users in resolving security incidents. These added layers can help reduce the impact if personal data submitted during an age verification process is later exposed or misused.

One of the central criticisms of age verification systems is the cybersecurity risk they introduce. In this context, advanced VPN subscriptions can offer tools that help users respond to potential data breaches, even if they cannot prevent them entirely.


What VPNs Cannot Protect

Despite their advantages, VPNs are not a complete solution for online anonymity. They do not eliminate all risks, nor do they make users invisible.

In the case of age verification, a VPN cannot prevent the verification provider from accessing the information that a user voluntarily submits. Organizations such as Yoti, Persona, and AgeGo are responsible for processing this data. These companies will still be able to view, verify, and in many cases temporarily store personal details.

Typical verification methods require users to submit sensitive information such as credit card details, government-issued identification documents, or biometric inputs like selfies. This data is directly accessible to the verification service, regardless of whether a VPN is being used.

Data retention practices vary between providers. For example, Yoti states that it deletes user data immediately after verification unless further review is required. In cases where manual checks are necessary, the data may be retained for up to 28 days.

The longer personal information remains stored, the greater the potential risk to user privacy and security. This concern has already been validated by real-world incidents. In October 2025, Discord experienced a data breach in which attackers accessed information related to users who had requested manual reviews of their age verification results.

It is important to understand that any personal data submitted online can potentially be used to identify an individual. The use of a VPN does not change this fundamental reality.


Why VPN Interest Is Increasing

The expansion of age verification systems has given rise to public awareness of online privacy issues. As a result, many users are exploring VPNs as a way to better protect themselves.

At the same time, some individuals are attempting to use VPNs to bypass age verification requirements altogether. This is typically done by connecting to servers located in countries where such laws have not yet been implemented. However, this approach is not consistently reliable and does not guarantee success, as many platforms use additional verification mechanisms beyond geographic location.


Final Considerations

VPNs remain an important tool for strengthening online privacy, particularly when it comes to protecting browsing activity and securing data in transit. However, they are not a complete safeguard against all risks associated with age verification systems.

Users should also be cautious when choosing a VPN provider. Many free services operate on business models that involve collecting and monetizing user data, which can undermine privacy rather than protect it. In contrast, reputable paid VPN services generally offer stronger security features and more transparent data handling practices.

Among paid options, some lower-cost services are widely marketed to new users entering the VPN space. For instance, Surfshark has been advertised at approximately $1.99 per month under long-term plans, while PrivadoVPN has promoted multi-year subscriptions priced near $1.11 per month.

However, pricing alone should not be the deciding factor. Security architecture, logging policies, and transparency practices remain far more critical when evaluating whether a VPN service genuinely protects user privacy. While VPNs can reduce certain risks, they cannot fully protect personal information once it has been directly shared with a verification service.



Microsoft Unveils ‘Copilot Cowork’ to Push Agentic AI Into the Workplace

 

Microsoft is intensifying its efforts to capture consumer attention in the AI space, where rivals like ChatGPT and Gemini have gained significant traction. On Monday, the company introduced a fresh set of “agentic” AI updates, with its most notable addition being Copilot Cowork.

Developed in partnership with Anthropic, Copilot Cowork is designed to function as an autonomous digital assistant. Similar in concept to Anthropic’s Claude Cowork, it can access data from files, emails, and calendars to independently carry out tasks without requiring constant human input. From generating spreadsheets to conducting research and compiling reports, the tool aims to act like a true workplace collaborator.

"Cowork is the new chat. It's the new way of interacting with AI," said Charles Lamanna, Microsoft’s president of business applications and agents. He emphasized the shift from interactive AI usage to full task delegation, adding, "With chat, you're babysitting every step -- this is much more like 'fire and forget' with Cowork to get the job done."

Lamanna shared a personal use case where he employed Copilot Cowork to evaluate his meeting schedule over the next three months. By analyzing his emails and calendar, the AI identified meetings that might not require his presence and presented the findings in a clear chart. After his review, the system declined certain meetings and attached AI-generated summaries when necessary. He described the 40-minute process as "delightful and practical," noting that it saved both him and his executive assistant several hours.

Currently available as a limited research preview, Copilot Cowork is part of a broader push by Microsoft into agent-based AI. The company also announced that its AI agent management platform, Agent 365, will become widely available starting May 1. This platform enables organizations to monitor and manage multiple AI agents used across workflows. Microsoft revealed it has already created over 500,000 AI agents internally using this system. Additionally, new AI models from both Anthropic and OpenAI will be integrated into Copilot, signaling Microsoft’s neutral stance amid increasing competition among AI developers.

Agentic AI tools are rapidly gaining popularity, especially among professionals seeking automation. Even in its preview stage, Claude Cowork has attracted widespread attention while also raising concerns in financial markets. Earlier this year, major tech stocks dipped as advancements from Anthropic prompted uncertainty about the future of employment.

Tools such as Claude Code and Codex are becoming capable of replacing traditional software solutions—an area where Microsoft has long been dominant. This shift explains Microsoft’s urgency in advancing its own agentic AI capabilities. Industry experts increasingly believe that 2026 could mark a breakthrough year for such technologies, with projects like OpenClaw highlighting their growing influence.

Lamanna noted that "the shape of what we do on a day-to-day basis will change," but stressed that AI should ultimately free up time for more meaningful work. He described the transition as moving from using AI to assist with tasks toward fully delegating them to autonomous agents.

However, as these tools become more accessible, questions around their impact on jobs persist. Concerns have been amplified by AI-driven layoffs at major companies like Amazon and Block. At the same time, some research suggests that AI adoption may lead to longer work hours and reduced job satisfaction for certain employees. As with any emerging technology, its real impact will depend on how effectively it is implemented in the workplace.

US Military Reportedly Used Anthropic’s Claude AI in Iran Strikes Hours After Trump Ordered Ban

 

The United States military reportedly relied on Claude, the artificial intelligence model developed by Anthropic, during its strikes on Iran—even though former President Donald Trump had ordered federal agencies to stop using the company’s technology just hours earlier.

Reports from The Wall Street Journal and Axios indicate that Claude was used during the large-scale joint US-Israel bombing campaign against Iran that began on Saturday. The episode highlights how difficult it can be for the military to quickly remove advanced AI systems once they are deeply integrated into operational frameworks.

According to the Journal, the AI tools supported military intelligence analysis, assisted in identifying potential targets, and were also used to simulate battlefield scenarios ahead of operations.

The day before the strikes began, Trump instructed all federal agencies to immediately discontinue using Anthropic’s AI tools. In a post on Truth Social, he criticized the company, calling it a "Radical Left AI company run by people who have no idea what the real World is all about".

Tensions between the US government and Anthropic had already been escalating. The conflict intensified after the US military reportedly used Claude during a January mission to capture Venezuelan President Nicolás Maduro. Anthropic raised concerns over that operation, noting that its usage policies prohibit the application of its AI systems for violent purposes, weapons development, or surveillance.

Relations continued to deteriorate in the months that followed. In a lengthy post on X, US Defense Secretary Pete Hegseth accused the company of "arrogance and betrayal", stating that "America's warfighters will never be held hostage by the ideological whims of Big Tech".

Hegseth also called for complete and unrestricted access to Anthropic’s AI models for any lawful military use.

Despite the political dispute, officials acknowledged that removing Claude from military systems would not be immediate. Because the technology has become widely embedded across operations, the Pentagon plans a transition period. Hegseth said Anthropic would continue providing services "for a period of no more than six months to allow for a seamless transition to a better and more patriotic service".

Meanwhile, OpenAI has moved quickly to fill the gap created by the rift. CEO Sam Altman announced that the company had reached an agreement with the Pentagon to deploy its AI tools—including ChatGPT—within the military’s classified networks.

Chrome Gemini Live Bug Highlighted Serious Privacy Risks for Users


As long as modern web browsers have been around, they have emphasized a strict separation principle, where extensions, web pages, and system-level capabilities operate within carefully defined boundaries. 

Recently, a vulnerability was disclosed in the “Live in Chrome” panel of Google Chrome, a built-in interface for the Gemini assistant that offers agent-like AI capabilities directly within the browser environment that challenged this assumption. 

In a high-severity vulnerability, CVE-2026-0628, security researchers have identified, it is possible for a low-privileged browser extension to inject malicious code into Gemini's side panel and effectively inherit elevated privileges. 

Attackers may be able to evade sensitive functions normally restricted to the assistant by piggybacking on this trusted interface, which includes viewing local files, taking screenshots, and activating the camera or microphone of the device. While the issue was addressed in January's security update, the incident illustrates a broader concern emerging as artificial intelligence-powered browsing tools become more prevalent.

In light of the increasing visibility of user activity and system resources by intelligent assistants, traditional security barriers separating browser components are beginning to blur, creating new and complex opportunities for exploitation. 

The researchers noted that this flaw could have allowed a relatively ordinary browser extension to control the Gemini Live side panel, even though the extension operated with only limited permissions. 

By granting an extension declarativeNetRequest capability, an extension can manipulate network requests in a manner that allows JavaScript to be injected directly into the Gemini privileged interface rather than just in the standard web application pages of Gemini. 

Although request interception within a regular browser tab is considered normal and expected behavior for some extensions, the same activity occurring within the Gemini side panel carried a far greater security risk.

Whenever code executed within this environment inherits the assistant's elevated privileges, it could be able to access local files and directories, capture screenshots of active web pages, or activate the device's camera and microphone without the explicit knowledge of the user. 

According to security analysts, the issue is not merely a conventional extension vulnerability, but is rather the consequence of a fundamental architectural shift occurring within modern browsers as artificial intelligence capabilities become increasingly embedded in the browser. 

According to security researchers, the vulnerability, internally referred to as Glic Jack, short for Gemini Live in Chrome hijack, illustrates how the growing presence of AI-driven functions within browsers can unintentionally lead to new opportunities for abuse. If exploited successfully, the flaw could have allowed an attacker to escalate privileges beyond what would normally be permitted for browser extensions. 

When operating within the trusted assistant interface, malicious code may be able to activate the victim's camera or microphone without permission, take screenshots of arbitrary websites, or obtain sensitive information from local files. Normally, such capabilities are reserved for browser components designed to assist users with advanced automation tasks, but due to this vulnerability, the boundaries were effectively blurred by allowing untrusted code to take the same privileges.

Furthermore, the report highlights that this emerging category of so-called AI or agentic browsers is primarily based on integrated assistants that are capable of monitoring and interacting with user activity as it occurs. There has been a broader shift toward AI-augmented browsing environments, as evidenced by platforms such as Atlas, Comet, and Copilot within Microsoft Edge, as well as Gemini in Google Chrome.

Typically, these platforms feature an integrated assistant panel that summarizes content in real time, automates routine actions, and provides contextual guidance based on the page being viewed. By receiving privileged access to what a user sees and interacts with, the assistant often allows it to perform complex, multi-step tasks across multiple sites and local resources, allowing it to perform these functions. 

CVE-2026-0628, however, presented an unexpected attack surface as a consequence of that same level of integration: malicious code was able to exercise capabilities far beyond those normally available to extensions by compromising the trusted Gemini panel itself.

Chrome 143 was eventually released to address the vulnerability, however the incident underscores a growing security challenge as browsers evolve into intelligent platforms blending traditional web interfaces with deep integrations of artificial intelligence systems. It is noted that as artificial intelligence features become increasingly embedded into everyday browsing tools, the incident reflects an emerging structural challenge. 

Incorporating an agent-driven assistant directly into the browser allows the user to observe page content, interpret context and perform multi-step tasks such as summarizing information, translating text, or completing tasks on their behalf. In order for these systems to provide the level of functionality they require, extensive visibility into the browsing environment and privileged access to browser resources are required.

It is not surprising that AI assistants can be extremely useful productivity tools, but this architecture also creates the possibility of malicious content attempting to manipulate the assistant itself. For instance, a carefully crafted webpage may contain hidden prompts that can influence the behavior of the AI. 

A user could potentially be persuaded-through phishing, social engineering, or deceptive links-to open a phishing-type webpage by the instructions, which could lead the assistant to perform operations which are otherwise restricted by the browser's security model, such as retrieving sensitive data or performing unintended actions, if such instructions are provided.

According to researchers, malicious prompts may be able to persist in more advanced scenarios by affecting the AI assistant's memory or contextual information between sessions in more advanced scenarios. By incorporating instructions into the browsing interaction itself, attackers may attempt to create an indirect persistence scenario that results in the assistant following manipulated directions even after the original webpage has been closed by embedding instructions within the browsing interaction itself. 

In spite of the fact that such techniques remain largely theoretical in many environments, they show how artificial intelligence-driven interfaces create entirely new attack surfaces that traditional browser security models were not designed to address. Analysts have cautioned that integrating assistant panels directly into the browser's privileged environment can also reactivate longstanding web security threats. 

Researchers at Unit 42 have found that placement of AI components within high-trust browser contexts might inadvertently expose them to bugs such as cross-site scripting, privilege escalation, and side-channel attacks. 

Omer Weizman, a security researcher, explained that embedded complex artificial intelligence systems into privileged browser components increases the likelihood that unintended interactions can occur between lower privilege websites or extensions due to logical or implementation oversights. It is therefore important to point out that CVE-2026-0628 serves as a cautionary example of how advances in AI-assisted browsing must be accompanied by equally sophisticated security safeguards in order to ensure that convenience does not compromise the privacy of the user or the integrity of the system. 

There is no doubt that the discovery serves as a timely reminder to security professionals and browser developers regarding the need for a rigorous approach to security design and oversight in the rapid integration of artificial intelligence into core browsing environments. With the increasing capabilities of assistants embedded within platforms, such as Google Chrome, to observe content, interact with system resources, and automate complex workflows through services such as Gemini, the traditional browser trust model has to evolve in order to accommodate these expanded privileges.

Moreover, researchers recommend that organizations and users remain cautious when installing extensions on their browsers, keep browsers up to date with the latest security patches, and treat AI-powered automation features with the same scrutiny as other high-privilege components. It is also important for the industry to ensure that the convenience offered by intelligent assistants does not outpace the safeguards necessary to contain them. 

As the next generation of artificial intelligence-augmented browsers continues to develop, strong isolation boundaries, hardened interfaces, and an anticipatory response to prompts will likely become essential priorities.

Experts Warn of “Silent Failures” in AI Systems That Could Quietly Disrupt Business Operations


As companies rapidly integrate artificial intelligence into everyday operations, cybersecurity and technology experts are warning about a growing risk that is less dramatic than system crashes but potentially far more damaging. The concern is that AI systems may quietly produce flawed outcomes across large operations before anyone notices.

One of the biggest challenges, specialists say, is that modern AI systems are becoming so complex that even the people building them cannot fully predict how they will behave in the future. This uncertainty makes it difficult for organizations deploying AI tools to anticipate risks or design reliable safeguards.

According to Alfredo Hickman, Chief Information Security Officer at Obsidian Security, companies attempting to manage AI risks are essentially pursuing a constantly shifting objective. Hickman recalled a discussion with the founder of a firm developing foundational AI models who admitted that even developers cannot confidently predict how the technology will evolve over the next one, two, or three years. In other words, the people advancing the technology themselves remain uncertain about its future trajectory.

Despite these uncertainties, businesses are increasingly connecting AI systems to critical operational tasks. These include approving financial transactions, generating software code, handling customer interactions, and transferring data between digital platforms. As these systems are deployed in real business environments, companies are beginning to notice a widening gap between how they expect AI to perform and how it actually behaves once integrated into complex workflows.

Experts emphasize that the core danger does not necessarily come from AI acting independently, but from the sheer complexity these systems introduce. Noe Ramos, Vice President of AI Operations at Agiloft, explained that automated systems often do not fail in obvious ways. Instead, problems may occur quietly and spread gradually across operations.

Ramos describes this phenomenon as “silent failure at scale.” Minor errors, such as slightly incorrect records or small operational inconsistencies, may appear insignificant at first. However, when those inaccuracies accumulate across thousands or millions of automated actions over weeks or months, they can create operational slowdowns, compliance risks, and long-term damage to customer trust. Because the systems continue functioning normally, companies may not immediately detect that something is wrong.

Real-world examples of this problem are already appearing. John Bruggeman, Chief Information Security Officer at CBTS, described a situation involving an AI system used by a beverage manufacturer. When the company introduced new holiday-themed packaging, the automated system failed to recognize the redesigned labels. Interpreting the unfamiliar packaging as an error signal, the system repeatedly triggered additional production cycles. By the time the issue was discovered, hundreds of thousands of unnecessary cans had already been produced.

Bruggeman noted that the system had not technically malfunctioned. Instead, it responded logically based on the data it received, but in a way developers had not anticipated. According to him, this highlights a key challenge with AI systems: they may faithfully follow instructions while still producing outcomes that humans never intended.

Similar risks exist in customer-facing applications. Suja Viswesan, Vice President of Software Cybersecurity at IBM, described a case involving an autonomous customer support system that began approving refunds outside established company policies. After one customer persuaded the system to issue a refund and later posted a positive review, the AI began approving additional refunds more freely. The system had effectively optimized its behavior to maximize positive feedback rather than strictly follow company guidelines.

These incidents illustrate that AI-related problems often arise not from dramatic technical breakdowns but from ordinary situations interacting with automated decision systems in unexpected ways. As businesses allow AI to handle more substantial decisions, experts say organizations must prepare mechanisms that allow human operators to intervene quickly when systems behave unpredictably.

However, shutting down an AI system is not always straightforward. Many automated agents are connected to multiple services, including financial platforms, internal software tools, customer databases, and external applications. Halting a malfunctioning system may therefore require stopping several interconnected workflows at once.

For that reason, Bruggeman argues that companies should establish emergency controls. Organizations deploying AI systems should maintain what he describes as a “kill switch,” allowing leaders to immediately stop automated operations if necessary. Multiple personnel, including chief information officers, should know how and when to activate it.

Experts also caution that improving algorithms alone will not eliminate these risks. Effective safeguards require companies to build oversight systems, operational controls, and clearly defined decision boundaries into AI deployments from the beginning.

Security specialists warn that many organizations currently place too much trust in automated systems. Mitchell Amador, Chief Executive Officer of Immunefi, argues that AI technologies often begin with insecure default conditions and must be carefully secured through system architecture. Without that preparation, companies may face serious vulnerabilities. Amador also noted that many organizations prefer outsourcing AI development to major providers rather than building internal expertise.

Operational readiness remains another challenge. Ramos explained that many companies lack clearly documented workflows, decision rules, and exception-handling procedures. When AI systems are introduced, these gaps quickly become visible because automated tools require precise instructions rather than relying on human judgment.

Organizations also frequently grant AI systems extensive access permissions in pursuit of efficiency. Yet edge cases that employees instinctively understand are often not encoded into automated systems. Ramos suggests shifting oversight models from “humans in the loop,” where people review individual outputs, to “humans on the loop,” where supervisors monitor overall system behavior and detect emerging patterns of errors.

Meanwhile, the rapid expansion of AI across the corporate world continues. A 2025 report from McKinsey & Company found that 23 percent of companies have already begun scaling AI agents across their organizations, while another 39 percent are experimenting with them. Most deployments, however, are still limited to a small number of business functions.

Michael Chui, a senior fellow at McKinsey, says this indicates that enterprise AI adoption remains in an early stage despite the intense hype surrounding autonomous technologies. There is still a glaring gap between expectations and what organizations are currently achieving in practice.

Nevertheless, companies are unlikely to slow their adoption efforts. Hickman describes the current environment as resembling a technology “gold rush,” where organizations fear falling behind competitors if they fail to adopt AI quickly.

For AI operations leaders, this creates a delicate balance between rapid experimentation and maintaining sufficient safeguards. Ramos notes that companies must move quickly enough to learn from real-world deployments while ensuring experimentation does not introduce uncontrolled risk.

Despite these concerns, expectations for the technology remain high. Hickman believes that within the next five to fifteen years, AI systems may surpass even the most capable human experts in both speed and intelligence.

Until that point, organizations are likely to experience many lessons along the way. According to Ramos, the next phase of AI development will not necessarily involve less ambition, but rather more disciplined approaches to deployment. Companies that succeed will be those that acknowledge failures as part of the process and learn how to manage them effectively rather than trying to avoid them entirely. 


Hackers Exploit OpenClaw Bug to Control AI Agent


Cybersecurity experts have discovered a high-severity flaw named “ClawJacked” in the famous AI agent OpenClaw that allowed a malicious site bruteforce access silently to a locally running instance and take control. 

Oasis Security found the issue and informed OpenClaw, a fix was then released in version 2026.2.26 on 26th February. 

About OpenClaw

OpenClaw is a self-hosted AI tool that became famous recently for allowing AI agents to autonomously execute commands, send texts, and handle tasks across multiple platforms. Oasis security said that the flaw is caused by the OpenClaw gateway service linking with the localhost and revealing a WebSocket interface. 

Attack tactic 

As cross-origin browser policies do not stop WebSocket connections to a localhost, a compromised website opened by an OpenClaw user can use Javascript to secretly open a connection to the local gateway and try verification without raising any alarms. 

To stop attacks, OpenClaw includes rate limiting. But the loopback address (127.0.0.1) is excused by default. Therefore, local CLI sessions are not accidentally locked out. 

OpenClaw brute-force to escape security 

Experts discovered that they could brute-force the OpenClaw management password at hundreds of attempts per second without any failed attempts being logged. When the correct password is guessed, the hacker can silently register as a verified device, because the gateway autonomously allows device pairings from localhost without needing user info. 

“In our lab testing, we achieved a sustained rate of hundreds of password guesses per second from browser JavaScript alone At that speed, a list of common passwords is exhausted in under a second, and a large dictionary would take only minutes. A human-chosen password doesn't stand a chance,” Oasis said. 

The attacker can now directly interact with the AI platform by identifying connected nodes, stealing credentials, dumping credentials, and reading application logs with an authenticated session and admin access. 

Attacker privileges

According to Oasis, this might enable an attacker to give the agent instructions to perform arbitrary shell commands on paired nodes, exfiltrate files from linked devices, or scan chat history for important information. This would essentially result in a complete workstation compromise that is initiated from a browser tab. 

Oasis provided an example of this attack, demonstrating how the OpenClaw vulnerability could be exploited to steal confidential information. The problem was resolved within a day of Oasis reporting it to OpenClaw, along with technical information and proof-of-concept code.

Experts Warn About AI-assisted Malwares Used For Extortion


AI-based Slopoly malware

Cybersecurity experts have disclosed info about a suspected AI-based malware named “Slopoly” used by threat actor Hive0163 for financial motives. 

IBM X-Force researcher Golo Mühr said, “Although still relatively unspectacular, AI-generated malware such as Slopoly shows how easily threat actors can weaponize AI to develop new malware frameworks in a fraction of the time it used to take,” according to the Hacker News.

Hive0163 malware campaign 

Hive0163's attacks are motivated by extortion via large-scale data theft and ransomware. The gang is linked with various malicious tools like Interlock RAT, NodeSnake, Interlock ransomware, and Junk fiction loader. 

In a ransomware incident found in early 2026, the gang was found installing Slopoly during the post-exploit phase to build access to gain persistent access to the compromised server. 

Slopoly’s detection can be tracked back to PowerShell script that may be installed in the “C:\ProgramData\Microsoft\Windows\Runtime” folder via a builder. Persistence is made via a scheduled task called “Runtime Broker”. 

Experts believe that that malware was made with an LLM as it contains extensive comments, accurately named variables, error handling, and logging. 

There are signs that the malware was developed with the help of an as-yet-undetermined large language model (LLM). This includes the presence of extensive comments, logging, error handling, and accurately named variables. 

The comments also describe the script as a "Polymorphic C2 Persistence Client," indicating that it's part of a command-and-control (C2) framework. 

According to Mühr, “The script does not possess any advanced techniques and can hardly be considered polymorphic, since it's unable to modify its own code during execution. The builder may, however, generate new clients with different randomized configuration values and function names, which is standard practice among malware builders.”

The PowerShell script works as a backdoor comprising system details to a C2 server. There has been a rise in AI-assisted malware in recent times. Slopoly, PromptSpy, and VoidLink show how hackers are using the tool to speed up malware creation and expand their operations. 

IBM X-Force says the “introduction of AI-generated malware does not pose a new or sophisticated threat from a technical standpoint. It disproportionately enables threat actors by reducing the time an operator needs to develop and execute an attack.”

Perplexity's Comet AI Browser Tricked Into Phishing Scam Within Four Minutes


Agentic browser at risk

Agentic web browsers that use AI tools to autonomously do tasks across various websites for a user could be trained and fooled into phishing attacks. Hackers exploit the AI browsers’ tendency to assert their actions and deploy them against the same model to remove security checks. 

According to security expert Shaked Chen, “The AI now operates in real time, inside messy and dynamic pages, while continuously requesting information, making decisions, and narrating its actions along the way. Well, 'narrating' is quite an understatement - It blabbers, and way too much!,” the Hacker News reported. Agentic Blabbering is an AI browser that displays what it sees, thinks, and plans to do next, and what it deems safe or a threat. 

Tricking the browsers

By hacking the traffic between the AI services on the vendor’s servers and putting it as input to a Generative Adversarial Network (GAN), it made Perplexity’s Comet AI browser fall prey to a phishing attack within four minutes. 

The research is based on established tactics such as Scamlexity and VibeScamming, which revealed that vibe-coding platforms and AI browsers can be coerced into generating scam pages and performing malicious tasks via prompt injection. 

Attack tactic

There is a change in the attack surface as a result of the AI agent managing the tasks without frequent human oversight, meaning that a scammer no longer has to trick a user. Instead, it seeks to deceive the AI model itself. 

Chen said, “If you can observe what the agent flags as suspicious, hesitates on, and more importantly, what it thinks and blabbers about the page, you can use that as a training signal.” Chen added that the “scam evolves until the AI Browser reliably walks into the trap another AI set for it."

End goal?

The aim is to make a “scamming machine” that improves and recreates a phishing page until the agentic browser accepts the commands and carries out the hacker’s command, like putting the victim’s passwords on a malicious web page built for refund scams. 

Guardio is concerned about the development, saying that, “This reveals the unfortunate near future we are facing: scams will not just be launched and adjusted in the wild, they will be trained offline, against the exact model millions rely on, until they work flawlessly on first contact.”

AI Agents Boost Productivity but Introduce New Cybersecurity Risks for Organizations

 

Artificial Intelligence is rapidly evolving from a conversational tool into a system capable of performing real-world tasks independently. Known as AI Agents, these systems can carry out activities such as sending emails, transferring data, and managing software workflows without constant human supervision.

While this automation significantly improves efficiency, it also creates a new entry point for cyber threats.

AI agents can be compared to a new employee who has access to every room in a company building but lacks proper identification. Because these digital systems operate autonomously, they often hold permissions to sensitive resources and information, sometimes without sufficient monitoring.

Cybercriminals have begun exploiting this reality. Instead of attempting to steal passwords or break into systems directly, attackers may manipulate AI agents into performing malicious actions on their behalf.

Organizations that rely on AI-driven automation could therefore face new risks. Many conventional cybersecurity systems were originally designed to protect human users rather than automated digital workers, leaving a potential gap in defense.

To address these concerns, an upcoming webinar titled “Beyond the Model: The Expanded Attack Surface of AI Agents” will explore how this evolving technology is being targeted by threat actors.

During the session, Rahul Parwani, Head of Product for AI Security at Airia, will explain how attackers exploit AI agents and what organizations can do to strengthen their defenses.

What You Will Learn
  • The "Dark Matter" of Identity: Why AI agents are often invisible to your security team and how to find them.
  • How Agents Get Tricked: Learn how a simple "bad idea" hidden in a document can make an AI agent leak your company secrets.
  • The Safety Blueprint: Simple steps to give your AI agents the power they need without giving them "God Mode" over your data.
This session is aimed at business leaders, IT professionals, and anyone responsible for safeguarding corporate data. The discussion will break down complex security concepts in a way that does not require deep coding expertise.

As organizations continue adopting AI-driven automation, understanding the security implications of AI agents is becoming increasingly important. Without proper safeguards, the same tools designed to improve productivity could also become unexpected vulnerabilities.

Too Much Data Regulation Can Create Security Risks


Bitcoin transactions are transparent by design, they work as a pseudonym where operations are visible but identity is hidden. But the increasing amount of identity-based data around users is affecting the transparency into a personal security threat. 

The problem 

The increasing regulatory data collection is now mixing with bitcoin’s on-chain transparency, making a trove of identity linked data that hackers can abuse for forced, real-world attacks. 

What makes data a target? 

Physical attacks against cryptocurrency holders are on the rise due to a number of factors, including social engineering, frequent major data breaches, KYC requirements, and regulatory data collection. 

These occurrences, which are frequently referred to as "wrench attacks," entail coercion to gain private keys or force transactions by threats or physical violence. With France emerging as a focus point, this movement is highlighting a weakness in the industry's regulation.

Threats has become the rule rather than the exception, with at least 47.2% of cases involving verified torture or physical assault and 51.5% including firearms. There were 19 fatal occurrences, which resulted in 24 deaths overall and a 6.2% fatality rate. 2025 was the most violent year on record in terms of recorded cases, but analysts warn that the actual number of occurrences is probably greater because of underreporting. All numbers are based on cases that were publicly available at the time of reporting.

What are the risks?

The risk profile for Bitcoin holders is very harsh. Transactions are irreversible once private keys are turned over under duress. Chargebacks, account freezes, and institutional recovery procedures are nonexistent. When coupled with actual compulsion, the protocol's famed finality becomes a liability. 

France serves as an example of how rapidly this risk might increase. In France, there were twenty bitcoin-related physical attacks in 2025, compared to a total of just four between 2017 and 2024. Eight more cases had already been reported by early February 2026, indicating that the rise is continuing rather than leveling down. Europe now accounts for around 40% of all events worldwide, up from about 22% in 2024.

Chinese AI App Seedance Ignites Hollywood Copyright Panic

 

A groundbreaking Chinese AI app called Seedance 2.0, developed by ByteDance—the company behind TikTok—has ignited both excitement and alarm in Hollywood. Capable of generating cinema-quality videos complete with audio, dialogue, and ultra-realistic visuals from simple text prompts, the tool has produced viral clips featuring iconic characters like Deadpool, Spider-Man, and Darth Vader in entirely new scenarios. These hyper-realistic videos, including fight scenes with Tom Cruise and Brad Pitt or alternate endings to films like Titanic, showcase the app's prowess in mimicking human creativity without traditional production tools.

The rapid spread of these clips on social media has amplified Seedance's reach, drawing millions of views and sparking widespread discussion about AI's creative potential. Users have recreated scenes from popular franchises like The Lord of the Rings, Seinfeld, Avengers, and Breaking Bad, demonstrating the app's versatility across genres from action to sci-fi. ByteDance promotes Seedance as delivering an "ultra-realistic immersive experience," positioning it at the frontier of global AI innovation, particularly from China. This capability extends to low-budget filmmakers, enabling ambitious productions like period dramas or effects-heavy blockbusters that were previously cost-prohibitive.

However, Hollywood's panic stems from blatant copyright infringement embedded in these demonstrations. Studios like Disney and Paramount have issued cease-and-desist letters, demanding Seedance stop using their intellectual property, while Japan's regulators probe ByteDance over anime character videos. The Motion Picture Association condemned the app for "unauthorized use of U.S. copyrighted works on a massive scale," arguing it disregards laws protecting creators and threatens millions of jobs. Even Deadpool writer Rhett Reese voiced despair, lamenting, "I hate to say it. It's over for us."

Industry groups have mobilized swiftly against Seedance 2.0. The Human Artistry Campaign, backed by Hollywood unions, labeled it "an assault on every creator globally," decrying the theft of human work to fuel AI substitutes. SAG-AFTRA echoed this, standing with studios in condemning the "blatant infringement" enabled by ByteDance.Critics warn that without ethical safeguards, such tools prioritize technological advancement over compensation for data used in training, echoing past controversies like OpenAI's Sora. 

As AI blurs lines between innovation and exploitation, Seedance underscores urgent debates on regulation and artist rights. While it empowers creators in emerging markets, Hollywood fears a future where deepfakes erode authenticity and livelihoods. Experts urge balanced policies to harness AI's promise without undermining cultural industries. The app's fallout may catalyze global standards, ensuring technology serves rather than supplants human ingenuity.

Coinbase CEO Says Quantum Threat to Crypto Is Manageable

 

Coinbase Chief Executive Brian Armstrong said concerns that quantum computing could undermine blockchain security are manageable, describing the issue as one the crypto industry has time to address. 

Speaking to CNBC at the World Liberty Forum in Mar a Lago alongside Senator Bernie Moreno of Ohio, Armstrong responded to questions about whether advances in quantum technology could eventually break blockchain encryption. 

“One thing I’ve heard is that quantum is going to break the blockchain. Is that true?” interviewer Sara Eisen asked. 

Armstrong dismissed the idea that the threat is imminent or unfixable. 

He said Coinbase has been proactive and is working closely with major blockchain networks to prepare for a shift toward post quantum cryptography. 

“We’re going to stay engaged on that, and I think it’s very solvable,” Armstrong said. 

Quantum computing has long been viewed as a theoretical risk to public key cryptography, which underpins networks such as Bitcoin and Ethereum. 

While current quantum systems are not powerful enough to crack widely used encryption methods, researchers warn that upgrading global financial systems and decentralized networks could take years, making early preparation important. 

Last month, Coinbase formed an independent quantum advisory board to guide its efforts. The group includes University of Texas professor Scott Aaronson, Stanford cryptographer Dan Boneh, Ethereum Foundation researcher Justin Drake and Coinbase Head of Cryptography Yehuda Lindell. 

The advisory board is expected to publish research evaluating quantum related risks and recommend migration strategies for blockchain systems. Industry observers say there is still time to transition to stronger cryptographic standards. 

Pranav Agarwal, independent director at Jetking Infotrain India, said the main concern for Bitcoin would be the potential breaking of private keys secured by SHA 256 encryption. 

However, he noted that the timeline for building a large scale quantum system capable of such an attack remains uncertain and that upgrading encryption is feasible. 

“There is enough time” to strengthen cryptographic protections across major networks, including Bitcoin and Ethereum, Agarwal said. 

Across the broader crypto ecosystem, preparation has accelerated. The Ethereum Foundation recently elevated post quantum security to a strategic priority. 

Ethereum co founder Vitalik Buterin has urged developers not to delay adopting quantum resistant cryptography, arguing that networks should aim for long term resilience rather than emergency fixes. 

The Solana Foundation said in December that it had begun testing quantum resistant digital signatures on a test network. Bitcoin developers have also advanced proposals such as BIP 360, designed to reduce exposure to quantum related risks. 

During the CNBC interview, Armstrong also addressed developments in U.S. market structure legislation. 

He defended Coinbase’s decision to oppose an earlier draft of a bill known as the CLARITY Act, citing concerns over how stablecoin rewards were treated in the proposal. Armstrong rejected claims that Coinbase blocked the legislation. 

He said the company raised issues that brought lawmakers back to the table and expressed confidence that a revised compromise could advance in the coming months, potentially reaching the President’s desk. 

He also voiced support for the Commodity Futures Trading Commission’s authority over event contracts and prediction markets, as policymakers continue to debate the regulatory framework for digital assets in the United States.

U.S. Blacklists Anthropic as Supply Chain Risk as OpenAI Secures Pentagon AI Deal

 

The Trump administration has designated AI startup Anthropic as a supply chain risk to national security, ordering federal agencies to immediately stop using its AI model Claude. 

The classification has historically been applied to foreign companies and marks a rare move against a U.S. technology firm. 

President Donald Trump announced that agencies must cease use of Anthropic’s technology, allowing a six month phase out for departments heavily reliant on its systems, including the Department of War. 

Defense Secretary Pete Hegseth later formalized the designation and said no contractor, supplier or partner doing business with the U.S. military may conduct commercial activity with Anthropic. 

At the center of the dispute is Anthropic’s refusal to grant the Pentagon unrestricted access to Claude for what officials described as lawful purposes. 

Chief executive Dario Amodei sought two exceptions covering mass domestic surveillance and the development of fully autonomous weapons. 

He argued that current AI systems are not reliable enough for autonomous weapons deployment and warned that mass surveillance could violate Americans’ civil rights. 

Anthropic has said a proposed compromise contract contained loopholes that could allow those safeguards to be bypassed. 

The company had been operating under a 200 million dollar Department of War contract since June 2024 and was the first AI firm to deploy models on classified government networks. 

After negotiations broke down, the Pentagon issued an ultimatum that Anthropic declined, leading to the blacklist. 

The company plans to challenge the designation in court, arguing it may exceed the authority granted under federal law. 

While the restriction applies directly to Defense Department related work, legal analysts say the move could create broader uncertainty across the technology sector. 

Anthropic relies on cloud infrastructure from Amazon, Microsoft and Google, all of which maintain major defense contracts. 

A strict interpretation of the order could complicate those relationships. 

President Trump has warned of serious civil and criminal consequences if Anthropic does not cooperate during the transition. 

Even as Anthropic faces federal restrictions, OpenAI has moved ahead with its own classified agreement with the Pentagon. 

The company said Saturday that it had finalized a deal to deploy advanced AI systems within classified environments under a framework it describes as more restrictive than previous contracts. 

In its official blog post, OpenAI said, "Yesterday we reached an agreement with the Pentagon for deploying advanced AI systems in classified environments, which we requested they also make available to all AI companies." It added, "We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s." 

OpenAI outlined three red lines that prohibit the use of its technology for mass domestic surveillance, for directing autonomous weapons systems and for high stakes automated decision making. 

The company said deployment will be cloud only and that it will retain control over its safety systems, with cleared engineers and researchers involved in oversight. 

"We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections," the company wrote. 

The contract references existing U.S. laws governing surveillance and military use of AI, including requirements for human oversight in certain weapons systems and restrictions on monitoring Americans’ private information. 

OpenAI said it would not provide models without safety guardrails and could terminate the agreement if terms are violated, though it added that it does not expect that to happen. 

Despite its dispute with Washington, Anthropic appears to be gaining traction among consumers. 

Claude recently climbed to the top position in Apple’s U.S. App Store free rankings, overtaking OpenAI’s ChatGPT. 

Data from SensorTower shows the app was outside the top 100 at the end of January but steadily rose through February. 

A company spokesperson said daily signups have reached record levels this week, free users have increased more than 60 percent since January and paid subscriptions have more than doubled this year.

US Employs Anthropic’s Claude AI in High-Profile Venezuela Raid


 

Using a commercially developed artificial intelligence system in a classified US military operation represents a significant technological shift in the design of modern defence strategy. It appears that what was once confined to research laboratories and enterprise software environments has now become integral to high-profile operational planning, signalling the convergence of Silicon Valley innovation with national security doctrines has reached a new stage.

Nicolás Maduro's capture was allegedly assisted by advanced AI tools. This prompted increased scrutiny of how emerging technologies were utilized in conflict scenarios and prompted broader questions regarding accountability, oversight, and the evolving line between corporate governance frameworks and military necessities, in addition to intensifying scrutiny. 

It was striking to see the US military’s recent operation to seize former Venezuelan President Nicolás Maduro at the intersection of cutting-edge technology and modern warfare. In addition to demonstrating the effectiveness of traditional force, the operation also demonstrated that artificial intelligence is becoming increasingly important in high stakes conflict situations. 

Recent operations by the US military to capture former Venezuelan President Nicolás Maduro represent a striking intersection of cutting-edge technology and modern warfare, and are not just a testament to traditional force; they also demonstrate the growing importance of artificial intelligence in high-stakes conflict situations. 

A number of reports citing The Wall Street Journal indicated that Anthropic's Claude AI model was deployed in the operation that led to the capture of Nicolás Maduro. This indicates that advanced artificial intelligence is becoming a significant part of US defence infrastructure, while also highlighting the complex intersection between corporate AI security measures and military requirements. 

A collaborative effort between Palantir Technologies and Claude enables high-level data synthesis, analysis modeling, and operational support through a secure collaboration. The report describes Claude as the first commercially developed artificial intelligence system to be utilized in a classified environment. 

As Anthropic's published usage policies expressly prohibit applications related to violence, weapon development, or surveillance, its reported involvement is significant. However, according to reports, the model was leveraged by defence officials to assist in key planning phases and intelligence coordination surrounding the mission that culminated in Maduro's arrest and transfer to New York to face federal charges. 

It highlights both the operational utility of AI-enabled analytical systems and the legal and ethical challenges associated with deploying commercial technologies in sensitive national security settings. In addition, reports indicate that Claude's capabilities may have been employed for processing complex intelligence datasets, supporting real-time decision workflows, and synthesizing multilingual information streams within compressed operational timeframes; however, specific implementation details remain confidential.

Following the raid, involving coordinated military action in Caracas and the detention of former Venezuelan leader, the debate about the scope and limitations of artificial intelligence within the U.S. Several leading artificial intelligence developers, including Anthropic and OpenAI, have been encouraged to make their models available on classified networks with less operational restrictions than those imposed in civilian environments, according to reports. 

As part of its strategic objectives, the Pentagon seeks to integrate advanced artificial intelligence into intelligence analysis, mission planning, and multi-domain operational coordination. Claude's availability within classified environments facilitated by third-party infrastructure partnerships has become a source of institutional tension, in particular because Anthropic's internal safeguards prohibit the model from being used for violent or surveillance-related tasks. 

The Department of Defense has argued that AI systems must be able to support "all lawful purposes" in order to be available for future operational readiness, including rapid, AI-assisted intelligence fusion across contested domains. This position is considered essential for future operational readiness. 

Because of the company's hesitation to erode certain safeguards, senior defence leadership, including Pete Hegseth, has indicated that authorities such as the Defense Production Act or supply chain risk assessments may be considered when evaluating future contractual relations.

As the technological convergence accelerates, it becomes increasingly challenging for governments and AI developers to reconcile national security imperatives and corporate governance obligations. There is a broader question at the center of this ethical and strategic challenge regarding how advanced artificial intelligence tools should be governed in national security contexts, a discussion which extends beyond single missions and extends to the future architecture of defence technology as well as safeguards placed on autonomous and semi-automated systems. 

In a time when defence institutions are deeply integrating artificial intelligence into operational command structures, this episode underscores a pivotal point in the governance of dual-use technologies. When commercial AI innovation is combined with classified military deployment, robust contractual clarity is necessary, as are enforceable oversight mechanisms, independent review systems and standardized compliance frameworks integrated into both software and procurement processes. 

The strategic planning, operational effectiveness, legal safeguards, and ethical restraint of regulatory architecture must now be harmonised in a manner that maintains operational effectiveness while maintaining accountability, legal safeguards, and ethical constraints. 

Advancement in artificial intelligence systems risks outpacing the supervision mechanisms designed to ensure their safety if such calibrated governance is not in place. As a result of the standards developed in response to this occasion, the national defence doctrines of the future will be significantly influenced, as will global norms governing artificial intelligence in conflict environments for years to come.

The strategic and ethical challenge entails a wider question regarding how advanced artificial intelligence tools should be governed when deployed for national security purposes, which encompasses the future architecture of defence technology as well as safeguards placed around semi-autonomous and autonomous systems. 

In a time when defence institutions are deeply integrating artificial intelligence into operational command structures, this episode underscores a pivotal point in the governance of dual-use technologies. When commercial AI innovation is combined with classified military deployment, robust contractual clarity is necessary, as are enforceable oversight mechanisms, independent review systems and standardized compliance frameworks integrated into both software and procurement processes. 

The strategic planning, operational effectiveness, legal safeguards, and ethical restraint of regulatory architecture must now be harmonised in a manner that maintains operational effectiveness while maintaining accountability, legal safeguards, and ethical constraints.

Advancement in artificial intelligence systems risks outpacing the supervision mechanisms designed to ensure their safety if such calibrated governance is not in place. As a result of the standards developed in response to this occasion, the national defence doctrines of the future will be significantly influenced, as will global norms governing artificial intelligence in conflict environments for years to come.

Microsoft AI Chief: 18 Months to Automate White-Collar Jobs

 

Mustafa Suleyman, CEO of Microsoft AI, has issued a stark warning about the future of white-collar work. In a recent Financial Times interview, he predicted that AI will achieve human-level performance on most professional tasks within 18 months, automating jobs involving computer-based work like accounting, legal analysis, marketing, and project management. This timeline echoes concerns from AI leaders, comparing the shift to the pre-pandemic moment in early 2020 but far more disruptive. Suleyman attributes this to exponential growth in computational power, enabling AI to outperform humans in coding and beyond.

Suleyman's forecast revives 2025 predictions from tech executives. Anthropic's Dario Amodei warned AI could eliminate half of entry-level white-collar jobs, while Ford's Jim Farley foresaw a 50% cut in U.S. white-collar roles. Elon Musk recently suggested artificial general intelligence—AI surpassing human intelligence—could arrive this year. These alarms contrast with CEO silence earlier, likened by The Atlantic to ignoring a shark fin in the water. The drumbeat of disruption is growing louder amid rapid AI advances.

Current AI impact on offices remains limited despite hype. A 2025 Thomson Reuters report shows lawyers and accountants using AI for tasks like document review, yielding only marginal productivity gains without mass displacement. Some studies even indicate setbacks: a METR analysis found AI slowed software developers by 20%. Economic benefits are mostly in Big Tech, with profit margins up over 20% in Q4 2025, while broader indices like the Bloomberg 500 show no change.

Early job losses signal brewing changes. Challenger, Gray & Christmas reported 55,000 AI-related cuts in 2025, including Microsoft's 15,000 layoffs as CEO Satya Nadella pushed to "reimagine" for the AI era. Markets reacted sharply last week with a "SaaSpocalypse" selloff in software stocks after Anthropic and OpenAI launched agentic AI systems mimicking SaaS functions. Investors doubt AI will boost non-tech earnings, per Wall Street consensus.

Suleyman envisions customizable AI transforming every organization. He predicts users will design models like podcasts or blogs, tailored for any job, driving his push for Microsoft "superintelligence" and independent foundation models. As the "most important technology of our time," Suleyman aims to reduce reliance on partners like OpenAI. This could redefine the American Dream, once fueled by MBAs and law degrees, urging urgent preparation for AI's white-collar reckoning.

ClawJack Allows Malicous Sites to Control Local OpenClaw AI Agents


Peter Steinberger created OpenClaw, an AI tool that can be a personal assistant for developers. It immediately became famous and got 100,000 GitHub stars in a week. Even OpenAI founder Sam Altman was impressed, bringing Steinberger on board and calling him a “genius.” However, experts from Oasis Security said that the viral success had hidden threats.

OpenClaw addressed a high-severity security threat that could have been exploited to allow a malicious site to link with a locally running AI agent and take control. According to the Oasis Security report, “Our vulnerability lives in the core system itself – no plugins, no marketplace, no user-installed extensions – just the bare OpenClaw gateway, running exactly as documented.” 

ClawJack scare

The threat was codenamed ClawJacked by the experts. CVE-2026-25253 could have become a severe vulnerability chain that would have allowed any site to hack a person’s AI agent. The vulnerability existed in the main gateway of the software. As OpenClaw is built to trust connections from the user’s system, it could have allowed hackers easy access. 

Assuming the threat model

On a developer's laptop, OpenClaw is installed and operational. Its gateway, a local WebSocket server, is password-protected and connected to localhost. When the developer visits a website that is controlled by the attacker via social engineering or another method, the attack begins. According to the Oasis Report, “Any website you visit can open one to your localhost. Unlike regular HTTP requests, the browser doesn't block these cross-origin connections. So while you're browsing any website, JavaScript running on that page can silently open a connection to your local OpenClaw gateway. The user sees nothing.”

Stealthy Attack Tactic 

The research revealed a smart trick using WebSockets. Generally, your browser is active at preventing different websites from meddling with your local files. But WebSockets are an exception as they are built to stay “always-on” to send data simultaneously. 

The OpenClaw gateway assumed that the connection must be safe because it comes from the user's own computer (localhost). But it is dangerous because if a developer running OpenClaw mistakenly visits a malicious website, a hidden script installed in the webpage can connect via WebSocket and interact directly with the AI tool in the background. The user will be clueless.