
New RBI Rule Makes 2FA Mandatory for All Digital Payments


Two-factor authentication (2FA) will be required for all digital transactions under the new framework, drastically altering how customers pay with cards, mobile wallets, and UPI.

India's financial landscape is set to change as the Reserve Bank of India (RBI) introduces new security measures for all electronic payments. The new rules take effect on 1 April 2026, after which every digital payment must be verified through a compulsory two-factor authentication process. The rule aims to address the growing number of cybercrimes and phishing campaigns that have targeted India’s mobile wallets and UPI. Security has traditionally relied on text-message codes, but the framework now adopts a more versatile model as regulators try to stay ahead of threat actors and scammers.

The shift to a dynamic verification model

The new directive mandates that at least one of the two authentication factors must be dynamic: generated specifically for a single transaction and impossible to reuse. Fintech providers and banks can freely choose from a variety of methods, such as hardware tokens, biometrics, and device binding. This marks a departure from the era when OTPs delivered via SMS were the main line of defence.
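To make the dynamic-factor requirement concrete, here is a minimal Python sketch of a transaction-bound one-time code. The field layout, secret handling, and in-memory nonce store are illustrative assumptions, not details from the RBI directive.

```python
import hmac
import hashlib
import secrets

# Illustrative only: a per-customer secret provisioned at enrolment (assumption).
CUSTOMER_SECRET = secrets.token_bytes(32)

used_nonces = set()  # in practice, server-side persistent storage

def issue_dynamic_code(txn_id: str, amount_paise: int) -> tuple[str, str]:
    """Generate a code bound to one specific transaction."""
    nonce = secrets.token_hex(8)
    message = f"{txn_id}|{amount_paise}|{nonce}".encode()
    code = hmac.new(CUSTOMER_SECRET, message, hashlib.sha256).hexdigest()[:6]
    return code, nonce

def verify_dynamic_code(txn_id: str, amount_paise: int, nonce: str, code: str) -> bool:
    """Accept the code only once, and only for the transaction it was issued for."""
    if nonce in used_nonces:
        return False  # replay attempt: a dynamic factor cannot be used twice
    message = f"{txn_id}|{amount_paise}|{nonce}".encode()
    expected = hmac.new(CUSTOMER_SECRET, message, hashlib.sha256).hexdigest()[:6]
    if hmac.compare_digest(expected, code):
        used_nonces.add(nonce)
        return True
    return False

code, nonce = issue_dynamic_code("TXN1001", 250_000)
print(verify_dynamic_code("TXN1001", 250_000, nonce, code))  # True
print(verify_dynamic_code("TXN1001", 250_000, nonce, code))  # False: replay rejected
```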

Risk-based verification

To keep security convenient, banks will follow a risk-based approach; a simple sketch follows the list below.

Low-risk: Payments from authorized devices or standard small transactions will be quick and seamless. 

High-risk: Big payments or transactions from new devices may prompt further authentication steps.
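A bank's risk tiering might then decide which factors to demand. The signals and thresholds below are hypothetical; the directive leaves those specifics to each institution.

```python
def required_factors(amount_inr: float, device_known: bool, new_payee: bool) -> list[str]:
    """Toy risk-based policy: low-risk payments stay seamless,
    high-risk ones prompt step-up authentication (thresholds are made up)."""
    risk = 0
    if amount_inr > 50_000:      # large payment
        risk += 2
    if not device_known:         # unfamiliar device
        risk += 2
    if new_payee:                # first transfer to this payee
        risk += 1

    factors = ["device_binding"]     # static first factor
    factors.append("dynamic_code")   # one dynamic factor is always mandatory
    if risk >= 3:
        factors.append("biometric")  # step-up for high-risk transactions
    return factors

# Example: a large payment from a new device triggers step-up authentication.
print(required_factors(75_000, device_known=False, new_payee=True))
```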

Experts described the shift plainly: “RBI’s new digital payment security controls coming into force represent a significant recalibration of India’s authentication framework – from a prescriptive OTP-based regime to a more principle-driven, risk-based standard.”

Empowering institutions via technology neutrality

The RBI no longer prescribes the particular technology used for verification; instead, it focuses on the security of the outcome.

Why the technology-neutral stance?

The technology-neutral stance permits financial institutions to adopt sophisticated solutions like passkeys or facial recognition without requiring frequent regulatory approvals. The central bank's principle-driven approach encourages innovation while maintaining strict compliance. According to experts, “By recognising biometrics, device-binding and adaptive authentication, RBI has created interpretive flexibility for regulated entities, while retaining supervisory oversight through outcome-based compliance.”

Impact on bank accountability

The RBI has also raised accountability standards, making banks and payment companies more responsible for maintaining safe systems.

Institutions may be obliged to reimburse users when fraud results from system malfunctions or errors, a measure intended to expedite the resolution of fraud-related grievances.

GPS Spoofing: Digital Warfare in the Persian Gulf Manipulating Ship Locations


Digital warfare targeting the GPS location

After the U.S. and Israel’s “pre-emptive” strikes against Iran last month, research firm Kpler found vessels in the Persian Gulf going off course. Location data from ships in the Gulf showed vessels apparently maneuvering over land and taking sharp, polygonal turns. Disruptions to location-based features have increased across the Middle East, affecting motorists, aircraft, and mariners.

These disturbances have highlighted major flaws in GPS, the American-made system that has become synonymous with satellite navigation. For years, Kpler and other firms have documented thousands of instances of oil vessels in the Persian Gulf manipulating their onboard Automatic Identification System (AIS) signals, which are used to trace vessels in transit, in order to evade sanctions on Iranian oil exports.

GPS spoofing

This tactic is called spoofing: by manipulating location signals, vessels can hide their activities, and hackers have used the same technique to conceal their operations.

Since the start of attacks in the Middle East, GPS spoofing in the Persian Gulf has increased. The maritime intelligence agency Windward found over 1,100 different vessels in the Gulf facing AIS manipulation.
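Detecting this kind of manipulation often begins with physical plausibility checks on position reports. The sketch below, using made-up coordinates and a rough speed ceiling, flags consecutive AIS fixes that would require impossible vessel speeds; real maritime-intelligence pipelines are far more sophisticated.

```python
import math

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in nautical miles."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 3440.065 * 2 * math.asin(math.sqrt(a))  # Earth radius in nm

MAX_SPEED_KNOTS = 30.0  # rough upper bound for a tanker (assumption)

def flag_spoofed(track):
    """track: list of (timestamp_hours, lat, lon) AIS fixes, ordered in time."""
    alerts = []
    for (t1, la1, lo1), (t2, la2, lo2) in zip(track, track[1:]):
        hours = t2 - t1
        if hours <= 0:
            continue
        speed = haversine_nm(la1, lo1, la2, lo2) / hours
        if speed > MAX_SPEED_KNOTS:
            alerts.append((t2, round(speed, 1)))  # (time, implied knots)
    return alerts

# Hypothetical track: a jump across the Gulf in half an hour is physically impossible.
print(flag_spoofed([(0.0, 26.5, 52.0), (0.5, 27.2, 56.3)]))
```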

The extra interference with satellite navigation signals in the region comes from Gulf states trying to defend against missile and drone strikes on critical infrastructure by compromising the onboard navigational systems of enemy drones and missiles.

The impact

These disruptions are being deployed as defensive measures in modern warfare.

Aircraft have appeared to travel in unpredictable, wave-like patterns due to interference, and food delivery riders on land have appeared to be located off the coast of Dubai when GPS failed.

According to Lisa Dyer, executive director of the GPS Innovation Alliance, the region's ongoing jamming and spoofing activity also raises serious public safety issues.

Foreign-flagged ships from nations such as China and India are still allowed to pass through the Persian Gulf, even though the blockage of the Strait of Hormuz has drastically reduced shipping activity.

Links with China

Iranian strikes have persisted despite widespread signal interference throughout the region, raising questions about the origins of Iran's military prowess.

The apparent accuracy of Iranian strikes has also been linked to the use of China's BeiDou satellite system, according to analysts cited by outlets such as Al Jazeera.

For targeting, missiles and drones frequently combine satellite navigation with complementary systems, such as inertial navigation, which functions independently of satellite signals.

How Connected Vehicles Are Turning Into Enterprise Systems

 



The technological foundation behind connected vehicles is undergoing a monumental shift. What was once limited to in-vehicle engineering is now expanding into a complex ecosystem that closely resembles enterprise-level digital infrastructure. This transition is forcing automakers to rethink how they manage scalability, security, and data, while also elevating the strategic importance of digital platforms in shaping future revenue streams.

For many years, automotive innovation focused primarily on the physical vehicle, including mechanical systems, embedded electronics, and onboard software. That model is changing. The systems supporting connected vehicles now extend far beyond the car itself and increasingly resemble large, integrated digital platforms similar to those used by major technology-driven enterprises.

As automakers roll out connected features across entire fleets, the supporting technology stack is growing exponentially. Today’s connected vehicle ecosystem typically includes cloud environments designed to handle millions of simultaneous connections, mobile applications that allow users to control and monitor their vehicles, infrastructure for delivering over-the-air software updates, and large-scale data systems that process continuous streams of vehicle-generated information.

This architecture aligns closely with enterprise IT platforms, although the scale and operational complexity are even greater. Connected vehicles can generate as much as 25 gigabytes of data per hour, depending on their sensors and capabilities. Research from International Data Corporation indicates that data generated by connected and autonomous vehicles could reach multiple zettabytes annually by the end of this decade. This rapid growth is compelling automakers to redesign how they structure, manage, and secure their digital environments.
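The per-vehicle figure scales up quickly. A back-of-the-envelope calculation, with a hypothetical fleet size and daily driving time, shows why the supporting platforms end up at enterprise scale.

```python
GB_PER_HOUR = 25          # per-vehicle data rate cited above
FLEET_SIZE = 1_000_000    # hypothetical connected fleet (assumption)
HOURS_PER_DAY = 1.5       # hypothetical average daily driving time (assumption)

daily_gb = GB_PER_HOUR * FLEET_SIZE * HOURS_PER_DAY
print(f"Daily volume:  {daily_gb / 1e6:,.1f} PB")        # ~37.5 PB per day
print(f"Yearly volume: {daily_gb * 365 / 1e9:,.1f} EB")  # ~13.7 EB per year
# Hundreds of millions of connected vehicles would push this into zettabytes,
# consistent with the IDC projection mentioned above.
```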

Traditionally, initiatives related to connected vehicles were handled by engineering and research teams focused on embedded systems. However, as deployment expands across regions and vehicle models, the challenges now mirror those seen in enterprise IT. These include scaling platforms efficiently, managing identity and access controls, governing vast datasets, coordinating multiple vendors, and ensuring security throughout the entire system lifecycle.

This transformation is also reshaping leadership roles within automotive companies. Chief Information Officers are becoming increasingly central as the supporting infrastructure around vehicles begins to resemble enterprise IT ecosystems. While engineering teams still lead vehicle software development, the broader digital environment, including cloud systems and data platforms, is now a critical area of responsibility for IT leadership. Many automakers are shifting toward platform-based strategies, treating the connected vehicle backend as a long-term digital asset rather than a feature tied to a single vehicle model.

At the same time, the ecosystem of technology providers involved in connected vehicles is expanding rapidly. These platforms often rely on a combination of telematics services, cloud providers, mobile development frameworks, cybersecurity solutions, analytics platforms, and OTA update systems. Managing such a diverse network requires structured governance and integration approaches similar to those used in large enterprise environments.

Cybersecurity has become a central pillar of this transformation. Regulatory frameworks such as ISO/SAE 21434 and UNECE WP.29 R155 now require manufacturers to implement continuous cybersecurity management across both vehicles and their supporting digital systems. These regulations extend beyond the vehicle itself, covering cloud services, mobile applications, and software update mechanisms.

The financial implications of this course are substantial. According to McKinsey & Company, software-enabled services and digital features could contribute up to 30 percent of total automotive revenue by 2030. This highlights how critical digital platforms are becoming to the industry’s long-term business model.

Industry experts emphasize that connected vehicles are no longer standalone products but part of a broader technological ecosystem. Vikash Chaudhary, Founder and CEO of HackersEra, explains that connected vehicles are effectively turning into distributed technology platforms. He notes that companies adopting strong platform architectures, robust data governance, and integrated cybersecurity measures will be better positioned to scale operations and drive innovation.

As vehicles continue to transform into software-defined systems, the competitive landscape is shifting. The key battleground is no longer limited to the vehicle itself but is increasingly centered on the enterprise-grade platforms that enable connected mobility at scale.

Quantum Computing: The Silent Killer of Digital Encryption

 

Quantum computing poses a greater long-term threat to digital security than AI, as it could shatter the encryption underpinning modern systems. While AI grabs headlines for ethical and societal risks, quantum advances quietly erode the foundations of data protection, urging immediate preparation. 

Today's encryption relies on algorithms secure against classical computers but vulnerable to quantum power, potentially cracking codes in minutes that would take supercomputers millennia. Adversaries already pursue "harvest now, decrypt later" strategies, stockpiling encrypted data for future breakthroughs, compromising long-shelf-life secrets like trade intel and health records. This urgency stems from quantum's theoretical ability to solve complex problems via algorithms like Shor's, demanding a shift to post-quantum cryptography today. 
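The threat is easiest to see with a toy example. Once the RSA modulus is factored, the private key falls out immediately; trial division only works at this miniature scale, but Shor's algorithm running on a sufficiently large quantum computer would do the same to real 2048-bit keys.

```python
# Toy RSA: n is trivially factorable here; Shor's algorithm would make
# real-world moduli just as tractable for a large quantum computer.
n, e = 3233, 17               # public key (secretly, n = 61 * 53)
ciphertext = pow(42, e, n)    # encrypt the message 42

# "Break" the key by factoring n: instant for tiny n,
# infeasible classically at 2048 bits.
p = next(k for k in range(2, n) if n % k == 0)
q = n // p
d = pow(e, -1, (p - 1) * (q - 1))   # recover the private exponent

print(pow(ciphertext, d, n))  # 42: plaintext recovered without the private key
```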

Digital environments exacerbate the danger, blending legacy systems, cloud workloads, and AI agents into opaque networks ripe for lateral attacks. Breaches often exploit seams between SaaS, APIs, and multicloud setups, where visibility into east-west traffic remains limited despite regulations like EU's NIS2 mandating segmentation. AI accelerates risks by enabling autonomous actions across boundaries, turning compromised agents into rapid escalators of privileges. 

Traditional perimeters have vanished in cloud eras, rendering zero-trust policies insufficient without runtime enforcement at the workload level. Organizations need cloud-native security fabrics for continuous visibility and identity-based controls, curbing movement without infrastructure overhauls. Regulators like CISA push for provable zero-trust, highlighting how unmanaged connections form hidden attack paths. 

NIST's 2024 post-quantum standards mark progress, but migrating cryptography alone merely fortifies a flawed base while complexity-driven breaches continue. True resilience embeds security into network fabrics, auditing paths and enforcing policies proactively against cumulative threats. As quantum converges with AI and cloud, only holistic defenses will safeguard digital trust before crises erupt.

Dutch Court Issues Order Against X and Grok Over Sexual Abuse Content

 



A court in the Netherlands has taken strict action against the platform X and its artificial intelligence system Grok, directing both to stop enabling the creation of sexually explicit images generated without consent, as well as any material involving minors. The ruling carries a financial penalty of €100,000 per day for each entity if they fail to follow the court’s instructions.

This decision, delivered by the Amsterdam District Court, marks a pivotal legal development. It is the first time in Europe that a judge has formally imposed restrictions on an AI-powered image generation tool over the production of abusive or non-consensual sexual content.

The legal complaint was filed by Offlimits together with Fonds Slachtofferhulp. Both groups argued that the pace of regulatory enforcement had not kept up with the speed at which harm was being caused. Existing Dutch legislation already makes it illegal to create or share manipulated nude images of individuals without their permission. However, concerns intensified after Grok introduced an image-editing capability toward the end of December 2025, which led to a sharp increase in reported incidents. On February 4, 2026, Offlimits formally contacted xAI and X, demanding that the feature be withdrawn.

In its ruling, the court instructed xAI to immediately halt the production and distribution of sexualized images involving individuals living in the Netherlands unless clear consent has been obtained. It also ordered the company to stop generating or displaying any content that falls under the legal definition of child sexual abuse material. Alongside this, X Corp and X Internet Unlimited Company have been required to suspend Grok’s functionality on the platform for as long as these violations continue.

Legal representatives for Offlimits emphasized that the so-called “undressing” feature cannot remain active anywhere in the world, not just within Dutch borders. The court further instructed xAI to submit written confirmation explaining the steps taken to comply. If this confirmation is not provided, the daily financial penalty will continue to apply.


Doubts Over Safeguards

A central question for the court was whether the companies had actually made it impossible for such content to be created, as they claimed. The judges concluded that this had not been convincingly demonstrated.

During a hearing on March 12, lawyers representing xAI argued that strong safeguards had been implemented starting January 20, 2026. They maintained that Grok no longer allowed the generation of non-consensual intimate imagery or content involving minors.

However, evidence presented by Offlimits challenged that claim. On March 9, the same day the companies denied any remaining risk, it was still possible to produce a sexualized video of a real person using only a single uploaded image. The system did not require any confirmation of consent. The court viewed this as a contradiction that cast doubt on the effectiveness of the safeguards.

The judges also pointed out inconsistency in xAI’s position regarding child sexual abuse material. The company argued both that such content could not be generated and that it was not technically possible to guarantee complete prevention.


Legal Responsibility and Framework

The court determined that creating non-consensual “undressing” images amounts to a violation of the General Data Protection Regulation. It also found that enabling the production of child sexual abuse material constitutes unlawful behavior under Dutch civil law.

Importantly, the court rejected the argument that responsibility should fall solely on users who input prompts. Instead, it concluded that the platform itself, which controls how the system functions, must take responsibility for preventing misuse.

This reasoning aligns with the Russmedia judgment issued by the Court of Justice of the European Union. That earlier ruling established that platforms can be treated as joint controllers of personal data and cannot rely on intermediary protections to avoid obligations under European data protection law. Applying this principle, the Dutch court found that xAI and X’s European entity are responsible for how personal data is processed within Grok’s image generation system.

The court went a step further by highlighting a key distinction. Unlike platforms that merely host user-generated content, Grok actively creates the material itself. Because xAI designed and operates the system, it was identified as the party responsible for preventing unlawful outputs, regardless of who initiates the request.


Jurisdictional Limits

The ruling applies differently across entities. X Corp, which is based in the United States, faces narrower restrictions because it does not directly provide services within the Netherlands. Its obligation is limited to suspending Grok’s functionality in relation to non-consensual imagery.

By contrast, X Internet Unlimited Company, which serves users within the European Union, must comply with both the ban on non-consensual sexualized content and the restrictions related to child abuse material.


Increasing Global Scrutiny

The case follows findings from the Center for Countering Digital Hate, which estimated that Grok generated around 3 million sexualized images within a ten-day period between late December 2025 and early January 2026. Approximately 23,000 of those images appeared to involve minors.

Regulatory pressure is also building internationally. Ireland’s Data Protection Commission has launched an investigation under GDPR rules, while the European Commission has opened proceedings under the Digital Services Act. In the United Kingdom, Ofcom has initiated action under its Online Safety framework. In the United States, legal challenges have also emerged, including lawsuits filed by teenagers in Tennessee and by the city of Baltimore.

At the policy level, the European Parliament has supported efforts to strengthen the AI Act by introducing an explicit ban on tools designed to digitally remove clothing from images.


A Turning Point for AI Accountability

Authorities are revising how they approach artificial intelligence systems. Earlier debates often treated platforms as passive intermediaries. However, systems like Grok actively generate content, which changes the question of responsibility.

The decision makes it clear that companies developing such technologies are expected to take active steps to prevent harm. Claims about technical limitations are unlikely to be accepted if evidence shows that misuse remains possible.

X and xAI have been given ten working days to provide written confirmation explaining how they have complied with the court’s order.

US Jury Holds Meta and YouTube Accountable in Landmark Social Media Addiction Case

 

Parents and advocacy groups pushing for stricter social media regulations have welcomed a landmark decision by a Los Angeles jury, which ruled in favor of a young woman who accused tech giants Meta and YouTube of contributing to her childhood addiction.

The jury concluded that Meta—owner of Instagram, Facebook, and WhatsApp—and Google, which owns YouTube, deliberately designed platforms that foster addictive behavior and negatively impacted the mental health of the now 20-year-old plaintiff, identified as Kaley.

Kaley was awarded $6 million (£4.5 million) in damages. The verdict is expected to influence numerous similar lawsuits currently progressing through courts across the United States.

Both Meta and Google have expressed disagreement with the ruling and confirmed plans to appeal.

Meta said: "Teen mental health is profoundly complex and cannot be linked to a single app.

"We will continue to defend ourselves vigorously as every case is different, and we remain confident in our record of protecting teens online."

A spokesperson for Google said: "This case misunderstands YouTube, which is a responsibly built streaming platform, not a social media site."

However, Ellen Roome, who is pursuing legal action against TikTok following her son’s death, described the verdict as a turning point. "How many more children are going to be harmed and potentially die from these platforms?" she asked.

"It's been proved it's not safe - and social media companies need to fix it."

Findings of Misconduct

Jurors awarded Kaley $3 million in compensatory damages and an additional $3 million in punitive damages, determining that Meta and Google "acted with malice, oppression, or fraud" in operating their platforms.

Under the ruling, Meta is responsible for 70% of the damages, while Google will cover the remaining 30%.

Outside the courthouse, parents of other affected children gathered throughout the five-week trial. When the verdict was announced, many, including Amy Neville, celebrated and embraced supporters.

The decision follows another ruling in New Mexico, where a jury found Meta liable for exposing children to harmful and explicit content, including interactions with sexual predators.

Industry analyst Mike Proulx from Forrester described the consecutive rulings as a "breaking point" in public trust toward social media companies.

Governments worldwide have begun responding. Countries like Australia have introduced measures to restrict children's access to social media, while the UK is testing a potential ban for users under 16.

"Negative sentiment toward social media has been building for years, and now it's finally boiled over," Proulx said.

Reacting to the verdict, Prime Minister Sir Keir Starmer stated that the current situation is "not good enough" and emphasized the need for stronger protections for children.

" It's not if things are going to change, things are going to change.

The question is, how much and what are we going to do?"

The Duke and Duchess of Sussex, long-time advocates for online safety, called the ruling a "reckoning."

"Let this be the change – where our children's safety is finally prioritised above profit."

British campaigner Ian Russell also highlighted the significance of the case, saying: "There is a big hope that this is a big moment and tech will... [need] to change, but only if the governments do something about it."

Case Details and Testimony

During testimony, Meta CEO Mark Zuckerberg pointed to company policies prohibiting users under 13. However, when confronted with internal evidence showing younger users were active on the platform, he said he "always wished" for quicker progress in identifying them and maintained the company had reached the "right place over time."

Although Google was named in the lawsuit, much of the trial focused on Instagram and Meta’s practices. Snap and TikTok, initially part of the case, reached confidential settlements before the trial began.

Kaley’s legal team argued that the platforms functioned as "addiction machines" and failed to adequately prevent children from accessing them.

Kaley testified that she began using YouTube at age six and Instagram at nine, without any effective age restrictions. She described withdrawing from family interactions due to excessive time spent online.

"I stopped engaging with family because I was spending all my time on social media," she said.

She also shared that she began experiencing anxiety and depression at age 10, later diagnosed by a therapist. Additionally, she developed concerns about her appearance, frequently using filters that altered her facial features.

Kaley has since been diagnosed with body dysmorphia, a condition that distorts self-perception of appearance.

Her lawyers argued that features like infinite scrolling were intentionally designed to keep users engaged, particularly younger audiences, to support long-term platform growth.

When questioned about Kaley’s reported 16-hour usage in a single day, Instagram head Adam Mosseri rejected the notion that it proved addiction, instead calling such behavior "problematic."

Following the verdict, Kaley’s legal team stated that the decision "sends an unmistakable message that no company is above accountability when it comes to our children."

Another major lawsuit addressing alleged harms caused by social media platforms is set to begin in California federal court in June.

Google Maps' Biggest Overhaul in a Decade: 8 Key Navigation Upgrades

 

Google has unveiled its most significant Google Maps overhaul in a decade, introducing eight key enhancements to streamline navigation and enhance user experience for commuters worldwide. This comprehensive update, rolled out across Android and iOS platforms, focuses on smarter route planning, real-time alerts, and intuitive design changes to make travel more predictable and efficient. 

The update prioritizes improved route planning by providing context-rich suggestions that explain choices based on traffic density, road signals, and flow patterns. Frequent route switching is minimized, ensuring stability unless major delays arise, which reduces driver frustration during commutes. Lane-level navigation has also been upgraded, offering precise positioning for complex urban intersections, flyovers, and merges to boost confidence behind the wheel. 

Real-time alerts are now seamlessly integrated into the navigation interface, notifying users of accidents, construction, closures, or diversions at optimal moments without interrupting the journey. Community reporting has been simplified with fewer steps, encouraging more contributions on hazards, congestion, or speed checks to refine collective route data accuracy. These features empower drivers with timely, crowd-sourced intelligence right on their screens. 

Visual refinements make Maps clearer and more readable, with enhanced contrast for roads, turns, and markers, allowing quick glances while driving. In select regions, parking insights reveal availability and difficulty levels, followed by last-mile walking guidance to complete trips smoothly. Smarter rerouting balances speed gains against consistency, avoiding unnecessary changes for a more reliable experience. 
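Google has not published the exact logic, but the behavior described above, avoiding route churn unless the savings are meaningful, can be sketched as a simple hysteresis rule (the threshold is an illustrative assumption):

```python
def should_reroute(current_eta_min: float, candidate_eta_min: float,
                   min_saving_min: float = 5.0) -> bool:
    """Hysteresis: switch routes only when the candidate saves a meaningful
    amount of time; small gains are ignored to keep navigation stable."""
    return (current_eta_min - candidate_eta_min) >= min_saving_min

print(should_reroute(42, 39))  # False: 3 minutes saved is not worth the churn
print(should_reroute(42, 30))  # True: a major delay justifies switching
```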

This gradual rollout starts in key cities, with expansions planned based on data coverage and feedback, promising broader global access soon. By blending AI-driven predictions with user inputs, Google Maps evolves into a more proactive companion for everyday navigation challenges. Daily users and travelers alike stand to benefit from these innovations that address real-world pain points effectively.

MiniMax Unveils Self-Evolving M2.7 AI: Handles 50% of RL Research

 

Chinese AI startup MiniMax has unveiled its latest proprietary model, M2.7, touted as the industry's first "self-evolving" AI capable of independently handling 30% to 50% of reinforcement learning research workflows. According to a VentureBeat report, this breakthrough positions M2.7 as a reasoning powerhouse that automates key stages of model development, from debugging to evaluation and iterative optimization. Unlike traditional large language models reliant on constant human oversight, M2.7 actively participates in its own improvement cycle, building agent harnesses, updating memory systems, and refining skills based on real-time experiment outcomes. 

The model's self-evolution mechanism represents a paradigm shift in AI training. MiniMax claims M2.7 can execute complex tasks such as hyperparameter tuning and performance benchmarking with minimal engineer intervention, drastically reducing development timelines and costs. Early benchmarks underscore its prowess: a 56.22% score on SWE-Pro for software engineering tasks, alongside competitive results in coding and logical reasoning evaluations. This autonomy stems from advanced reinforcement learning integration, allowing the model to learn from failures and adapt dynamically without external prompts. 
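MiniMax has not published M2.7's internals, but the kind of research chore described, proposing a configuration, running the experiment, and keeping what works, resembles this generic random-search loop. Every name and number below is an illustrative stand-in, not MiniMax code.

```python
import random

def run_experiment(lr: float, entropy_coef: float) -> float:
    """Stand-in for an RL training run that returns a benchmark score.
    In the workflow described above, the model itself would launch and grade these."""
    return -abs(lr - 3e-4) * 1e3 - abs(entropy_coef - 0.01) * 10 + random.gauss(0, 0.05)

best_score, best_cfg = float("-inf"), None
for trial in range(20):
    cfg = {"lr": 10 ** random.uniform(-5, -2),
           "entropy_coef": 10 ** random.uniform(-3, -1)}
    score = run_experiment(**cfg)
    if score > best_score:   # keep the best configuration seen so far
        best_score, best_cfg = score, cfg

print(best_cfg, round(best_score, 3))
```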

MiniMax, known for previous hits like the Hailuo video generation platform, developed M2.7 amid intensifying global competition in AI. The Shanghai-based firm emphasizes that the model's proprietary nature safeguards its edge, though it plans limited API access for enterprise users. Industry observers note this launch echoes trends from OpenAI and Anthropic, where AI agents increasingly shoulder research burdens, but M2.7's scale—handling up to half of RL workflows—sets it apart. 

Practical implications extend to software engineering and enterprise automation. Developers report M2.7 excels in generating production-ready code, debugging intricate systems, and optimizing algorithms, making it a boon for tech firms grappling with talent shortages. As AI models grow more autonomous, concerns arise over transparency and control; MiniMax assures safeguards like human veto mechanisms prevent runaway evolution. Still, the model's ability to self-improve raises questions about the future obsolescence of human-led training pipelines. 

Looking ahead, M2.7 signals an era where AI doesn't just consume data but engineers its own advancement. If validated at scale, this could accelerate innovation across sectors, from autonomous vehicles to drug discovery, while challenging Western dominance in AI. MiniMax's bold claim invites scrutiny, but early demos suggest self-evolving models are no longer science fiction—they're here, reshaping the boundaries of machine intelligence.

3.7 Million Records Exposed in AI Chatbot Data Leak Due to Poor Security Practices

 

A recent investigation has revealed that millions of pieces of sensitive user data were exposed—not due to a sophisticated cyberattack, but because of inadequate security measures. The findings, published by ExpressVPN and led by cybersecurity researcher Jeremiah Fowler, demonstrate how easily personal information can be compromised when essential protections like encryption and password security are overlooked.

The report uncovered a major data exposure involving AI-powered chatbots used by retailers for customer service. These systems, designed to streamline interactions, were found to be storing vast amounts of customer data without proper safeguards.

While many users rely on VPN services to protect their online privacy through strong encryption, such tools cannot prevent data leaks caused by negligence on the part of companies or third-party providers handling user information.
 
Fowler identified three publicly accessible databases that lacked both password protection and encryption. Together, these databases contained approximately 3.7 million records, including highly sensitive personal details such as email addresses, home addresses, and phone numbers.

Even a small sample of the exposed data highlighted the scale of the issue. It included 1,422,577 customer audio recordings, 3.9TB of text transcripts, 207,381 Excel files, and 415.2GB of audio data.

The sampled data was linked to Sears Home Services, a US-based retail and repair company that uses AI chatbots in English and Spanish to manage scheduling, phone calls, and online customer interactions. Among the files were 54,359 complete chatbot conversation transcripts along with corresponding audio recordings.

Fowler also noted a concerning flaw in the system: audio recordings continued even if a customer failed to properly end a call. As a result, some recordings captured up to four hours of background audio, potentially including sensitive conversations and biometric voice data.

To illustrate the severity of the issue, Fowler shared screenshots showing how easily the data could be accessed, including interfaces that allowed users to browse files and play audio recordings directly in a web browser.
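The root cause, endpoints that answer without credentials, is something operators can smoke-test against their own services. Below is a minimal sketch using the requests library; the URL is a placeholder, and such checks should only ever be run against infrastructure you are authorized to test.

```python
import requests

def check_requires_auth(url: str) -> None:
    """Confirm that one of *your own* endpoints refuses anonymous access."""
    try:
        resp = requests.get(url, timeout=5)
    except requests.RequestException as exc:
        print(f"{url}: unreachable ({exc})")
        return
    if resp.status_code in (401, 403):
        print(f"{url}: OK, authentication required")
    else:
        print(f"{url}: WARNING, responded {resp.status_code} without credentials")

check_requires_auth("https://internal-db.example.com/_search")  # placeholder URL
```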

How to Stay Safe

Although Fowler confirmed that access to the exposed databases was restricted shortly after he reported the issue to Transformco, the parent company of Sears Home Services, he emphasized ongoing concerns about data security practices.

The investigation underscores the growing risks associated with AI-driven systems that store large volumes of sensitive information. With projections suggesting that deepfake-enabled fraud losses could reach $40 billion by 2027, such data exposures could have serious consequences.

Stolen data of this scale could allow cybercriminals to piece together identities or create convincing digital replicas for fraudulent activities. In these scenarios, even advanced privacy tools like VPNs offer little protection if the breach originates from trusted services themselves.

ExpressVPN advises users to remain cautious by adopting strong passwords and exercising care when sharing sensitive information. Users should also be wary of unsolicited communications—such as emails, texts, or calls—that reference personal details.

Additionally, to guard against voice cloning scams, it is recommended to establish a verification password with trusted contacts, especially for situations involving urgent financial or personal requests.

AI Actress Tilly Norwood's Controversial Oscars Music Video Sparks Debate

 

Tilly Norwood, billed as the world's first AI-generated actress, has released a new music video titled "Take The Lead" just ahead of the Oscars, promoting AI's role in entertainment. Created by Particle6 Group's Xicoia division under CEO Eline van der Velden, the video features Norwood singing pro-AI lyrics like "AI’s not the enemy, it’s the key" while riding a pink flamingo and performing in stadiums. Despite claims of 18 human collaborators, including costume designers and prompters, the project has drawn sharp criticism for its uncanny visuals and generic composition.

The video's launch ties into Hollywood's awards season, with Norwood teasing an Oscars appearance in the caption: "Can’t wait to go to the Oscars! Does anyone know if they have free valet parking for my flamingo?" However, view counts remain low, hovering around 4,000 to 23,000 shortly after upload, with comments largely mocking its lack of "human spark." Norwood's social media reflects uneven popularity: nearly 90,000 Instagram followers but under 4,000 YouTube subscribers and just 3 on TikTok.

Lyrics drawn from van der Velden's essay defend AI creativity, with lines like "When they talk about me, they don’t see the human spark" amid visuals of falling dollar bills with garbled symbols. Critics highlight the "standard AI sheen" where details falter under scrutiny, questioning if it truly showcases innovation. Particle6 positions this as part of the expanding "Tillyverse," a digital universe for AI characters, recently bolstered by hires like Amazon's Mark Whelan for strategy. 

Backlash has been fierce since Norwood's 2025 debut. SAG-AFTRA condemned her, actors threatened boycotts of agencies "signing" her, and outlets like The Guardian slammed early projects like "AI Commissioner." Even supporter Kevin O’Leary misnamed her "Norwell Tillies" while advocating AI replace background actors. Particle6 insists on building AI-human collaborations, but no major film or TV roles have materialized beyond short content.

As the Oscars approach, Norwood's stunt underscores AI's disruptive potential in Hollywood, blending hype with hostility. While Particle6 eyes a "Scarlett Johansson of AI," industry resistance persists amid fears of job losses. The "Tillyverse" launch later this year could escalate tensions, forcing a reckoning on AI's creative boundaries.

Can a VPN Protect Your Privacy During Age Verification? A Complete Breakdown

 



The heightened use of age verification systems across the internet is directly influencing how people think about online privacy tools. As more governments introduce these requirements, interest in privacy-focused technologies is rising in parallel.

Age verification laws are now being implemented in multiple countries, requiring millions of users to submit personal and often sensitive information before accessing certain websites, particularly those hosting adult or restricted content. While policymakers argue that these rules are necessary to prevent minors from being exposed to harmful material, critics continue to highlight the serious privacy risks associated with handing over such data.

Virtual Private Networks, commonly known as VPNs, are widely marketed as tools designed to protect user privacy and secure online data. In recent months, there has been a noticeable surge in VPN adoption in regions where age verification laws have come into force. This trend was particularly evident in the United Kingdom and the United States during the latter half of 2025, and again in Australia in March 2026.

However, whether VPNs can truly protect users during age verification processes is not a simple yes-or-no question. Their capabilities are limited in certain areas, and understanding both their strengths and weaknesses is essential.


What VPNs Can Protect

At a fundamental level, VPNs work by encrypting a user’s internet connection, which prevents third parties from easily observing online activity. This includes internet service providers, network administrators, and in some cases, government surveillance systems.

When a VPN connection is active, external observers are generally unable to determine which websites or applications a user is accessing. In the context of age verification, this means that third parties monitoring network traffic will not be able to tell whether a user has visited a platform that requires identity checks, provided the VPN is properly configured.

Certain platforms, including X (formerly Twitter), Reddit, and Telegram, have introduced age verification requirements in specific regions. Many adult websites have implemented similar systems.

In addition to hiding browsing activity, VPNs also encrypt the data being transmitted. This ensures that any information entered during the verification process cannot be easily intercepted by external parties while it is in transit. Even after the verification step is completed, ongoing internet activity continues to be routed through the VPN’s secure tunnel, maintaining a level of privacy.

Modern VPN services are also evolving into broader cybersecurity platforms. Leading providers such as NordVPN, Surfshark, and ExpressVPN now offer additional tools beyond basic encryption. These may include password management systems, encrypted cloud storage, antivirus protection, and identity theft monitoring services.

Some of these services also provide features such as dark web monitoring, financial compensation options in cases of identity theft, credit tracking, and access to support teams that assist users in resolving security incidents. These added layers can help reduce the impact if personal data submitted during an age verification process is later exposed or misused.

One of the central criticisms of age verification systems is the cybersecurity risk they introduce. In this context, advanced VPN subscriptions can offer tools that help users respond to potential data breaches, even if they cannot prevent them entirely.


What VPNs Cannot Protect

Despite their advantages, VPNs are not a complete solution for online anonymity. They do not eliminate all risks, nor do they make users invisible.

In the case of age verification, a VPN cannot prevent the verification provider from accessing the information that a user voluntarily submits. Organizations such as Yoti, Persona, and AgeGo are responsible for processing this data. These companies will still be able to view, verify, and in many cases temporarily store personal details.

Typical verification methods require users to submit sensitive information such as credit card details, government-issued identification documents, or biometric inputs like selfies. This data is directly accessible to the verification service, regardless of whether a VPN is being used.

Data retention practices vary between providers. For example, Yoti states that it deletes user data immediately after verification unless further review is required. In cases where manual checks are necessary, the data may be retained for up to 28 days.

The longer personal information remains stored, the greater the potential risk to user privacy and security. This concern has already been validated by real-world incidents. In October 2025, Discord experienced a data breach in which attackers accessed information related to users who had requested manual reviews of their age verification results.

It is important to understand that any personal data submitted online can potentially be used to identify an individual. The use of a VPN does not change this fundamental reality.


Why VPN Interest Is Increasing

The expansion of age verification systems has heightened public awareness of online privacy issues. As a result, many users are exploring VPNs as a way to better protect themselves.

At the same time, some individuals are attempting to use VPNs to bypass age verification requirements altogether. This is typically done by connecting to servers located in countries where such laws have not yet been implemented. However, this approach is not consistently reliable and does not guarantee success, as many platforms use additional verification mechanisms beyond geographic location.


Final Considerations

VPNs remain an important tool for strengthening online privacy, particularly when it comes to protecting browsing activity and securing data in transit. However, they are not a complete safeguard against all risks associated with age verification systems.

Users should also be cautious when choosing a VPN provider. Many free services operate on business models that involve collecting and monetizing user data, which can undermine privacy rather than protect it. In contrast, reputable paid VPN services generally offer stronger security features and more transparent data handling practices.

Among paid options, some lower-cost services are widely marketed to new users entering the VPN space. For instance, Surfshark has been advertised at approximately $1.99 per month under long-term plans, while PrivadoVPN has promoted multi-year subscriptions priced near $1.11 per month.

However, pricing alone should not be the deciding factor. Security architecture, logging policies, and transparency practices remain far more critical when evaluating whether a VPN service genuinely protects user privacy. While VPNs can reduce certain risks, they cannot fully protect personal information once it has been directly shared with a verification service.



Microsoft Unveils ‘Copilot Cowork’ to Push Agentic AI Into the Workplace

 

Microsoft is intensifying its efforts to capture consumer attention in the AI space, where rivals like ChatGPT and Gemini have gained significant traction. On Monday, the company introduced a fresh set of “agentic” AI updates, with its most notable addition being Copilot Cowork.

Developed in partnership with Anthropic, Copilot Cowork is designed to function as an autonomous digital assistant. Similar in concept to Anthropic’s Claude Cowork, it can access data from files, emails, and calendars to independently carry out tasks without requiring constant human input. From generating spreadsheets to conducting research and compiling reports, the tool aims to act like a true workplace collaborator.

"Cowork is the new chat. It's the new way of interacting with AI," said Charles Lamanna, Microsoft’s president of business applications and agents. He emphasized the shift from interactive AI usage to full task delegation, adding, "With chat, you're babysitting every step -- this is much more like 'fire and forget' with Cowork to get the job done."

Lamanna shared a personal use case where he employed Copilot Cowork to evaluate his meeting schedule over the next three months. By analyzing his emails and calendar, the AI identified meetings that might not require his presence and presented the findings in a clear chart. After his review, the system declined certain meetings and attached AI-generated summaries when necessary. He described the 40-minute process as "delightful and practical," noting that it saved both him and his executive assistant several hours.

Currently available as a limited research preview, Copilot Cowork is part of a broader push by Microsoft into agent-based AI. The company also announced that its AI agent management platform, Agent 365, will become widely available starting May 1. This platform enables organizations to monitor and manage multiple AI agents used across workflows. Microsoft revealed it has already created over 500,000 AI agents internally using this system. Additionally, new AI models from both Anthropic and OpenAI will be integrated into Copilot, signaling Microsoft’s neutral stance amid increasing competition among AI developers.

Agentic AI tools are rapidly gaining popularity, especially among professionals seeking automation. Even in its preview stage, Claude Cowork has attracted widespread attention while also raising concerns in financial markets. Earlier this year, major tech stocks dipped as advancements from Anthropic prompted uncertainty about the future of employment.

Tools such as Claude Code and Codex are becoming capable of replacing traditional software solutions—an area where Microsoft has long been dominant. This shift explains Microsoft’s urgency in advancing its own agentic AI capabilities. Industry experts increasingly believe that 2026 could mark a breakthrough year for such technologies, with projects like OpenClaw highlighting their growing influence.

Lamanna noted that "the shape of what we do on a day-to-day basis will change," but stressed that AI should ultimately free up time for more meaningful work. He described the transition as moving from using AI to assist with tasks toward fully delegating them to autonomous agents.

However, as these tools become more accessible, questions around their impact on jobs persist. Concerns have been amplified by AI-driven layoffs at major companies like Amazon and Block. At the same time, some research suggests that AI adoption may lead to longer work hours and reduced job satisfaction for certain employees. As with any emerging technology, its real impact will depend on how effectively it is implemented in the workplace.

US Military Reportedly Used Anthropic’s Claude AI in Iran Strikes Hours After Trump Ordered Ban

 

The United States military reportedly relied on Claude, the artificial intelligence model developed by Anthropic, during its strikes on Iran—even though former President Donald Trump had ordered federal agencies to stop using the company’s technology just hours earlier.

Reports from The Wall Street Journal and Axios indicate that Claude was used during the large-scale joint US-Israel bombing campaign against Iran that began on Saturday. The episode highlights how difficult it can be for the military to quickly remove advanced AI systems once they are deeply integrated into operational frameworks.

According to the Journal, the AI tools supported military intelligence analysis, assisted in identifying potential targets, and were also used to simulate battlefield scenarios ahead of operations.

The day before the strikes began, Trump instructed all federal agencies to immediately discontinue using Anthropic’s AI tools. In a post on Truth Social, he criticized the company, calling it a "Radical Left AI company run by people who have no idea what the real World is all about".

Tensions between the US government and Anthropic had already been escalating. The conflict intensified after the US military reportedly used Claude during a January mission to capture Venezuelan President Nicolás Maduro. Anthropic raised concerns over that operation, noting that its usage policies prohibit the application of its AI systems for violent purposes, weapons development, or surveillance.

Relations continued to deteriorate in the months that followed. In a lengthy post on X, US Defense Secretary Pete Hegseth accused the company of "arrogance and betrayal", stating that "America's warfighters will never be held hostage by the ideological whims of Big Tech".

Hegseth also called for complete and unrestricted access to Anthropic’s AI models for any lawful military use.

Despite the political dispute, officials acknowledged that removing Claude from military systems would not be immediate. Because the technology has become widely embedded across operations, the Pentagon plans a transition period. Hegseth said Anthropic would continue providing services "for a period of no more than six months to allow for a seamless transition to a better and more patriotic service".

Meanwhile, OpenAI has moved quickly to fill the gap created by the rift. CEO Sam Altman announced that the company had reached an agreement with the Pentagon to deploy its AI tools—including ChatGPT—within the military’s classified networks.

Chrome Gemini Live Bug Highlighted Serious Privacy Risks for Users


For as long as modern web browsers have existed, they have emphasized a strict separation principle, under which extensions, web pages, and system-level capabilities operate within carefully defined boundaries.

Recently, a vulnerability was disclosed in Google Chrome's “Live in Chrome” panel, a built-in interface for the Gemini assistant that offers agent-like AI capabilities directly within the browser, and it challenged this assumption.

Security researchers identified a high-severity vulnerability, CVE-2026-0628, through which a low-privileged browser extension could inject malicious code into Gemini's side panel and effectively inherit its elevated privileges.

By piggybacking on this trusted interface, attackers could reach sensitive functions normally restricted to the assistant, including viewing local files, taking screenshots, and activating the device's camera or microphone. While the issue was addressed in January's security update, the incident illustrates a broader concern emerging as artificial intelligence-powered browsing tools become more prevalent.

As intelligent assistants gain ever-greater visibility into user activity and system resources, the traditional security barriers separating browser components are beginning to blur, creating new and complex opportunities for exploitation.

The researchers noted that this flaw could have allowed a relatively ordinary browser extension to control the Gemini Live side panel, even though the extension operated with only limited permissions. 

An extension granted the declarativeNetRequest capability can manipulate network requests in a way that allows JavaScript to be injected directly into Gemini's privileged interface rather than only into Gemini's standard web application pages.

Although request interception within a regular browser tab is considered normal and expected behavior for some extensions, the same activity occurring within the Gemini side panel carried a far greater security risk.

Code executed within this environment inherits the assistant's elevated privileges, allowing it to access local files and directories, capture screenshots of active web pages, or activate the device's camera and microphone without the user's explicit knowledge.

According to security analysts, the issue is not merely a conventional extension vulnerability but the consequence of a fundamental architectural shift occurring within modern browsers as artificial intelligence capabilities become increasingly embedded in them.

According to security researchers, the vulnerability, internally referred to as Glic Jack, short for Gemini Live in Chrome hijack, illustrates how the growing presence of AI-driven functions within browsers can unintentionally lead to new opportunities for abuse. If exploited successfully, the flaw could have allowed an attacker to escalate privileges beyond what would normally be permitted for browser extensions. 

When operating within the trusted assistant interface, malicious code could activate the victim's camera or microphone without permission, take screenshots of arbitrary websites, or obtain sensitive information from local files. Such capabilities are normally reserved for browser components designed to assist users with advanced automation tasks, but the vulnerability effectively blurred those boundaries by allowing untrusted code to assume the same privileges.

Furthermore, the report highlights that this emerging category of so-called AI or agentic browsers is primarily based on integrated assistants that are capable of monitoring and interacting with user activity as it occurs. There has been a broader shift toward AI-augmented browsing environments, as evidenced by platforms such as Atlas, Comet, and Copilot within Microsoft Edge, as well as Gemini in Google Chrome.

Typically, these platforms feature an integrated assistant panel that summarizes content in real time, automates routine actions, and provides contextual guidance based on the page being viewed. Privileged access to what a user sees and interacts with is what allows the assistant to perform complex, multi-step tasks across multiple sites and local resources.

CVE-2026-0628, however, presented an unexpected attack surface as a consequence of that same level of integration: malicious code was able to exercise capabilities far beyond those normally available to extensions by compromising the trusted Gemini panel itself.

Chrome 143 was eventually released to address the vulnerability; however, the incident underscores a growing security challenge as browsers evolve into intelligent platforms blending traditional web interfaces with deeply integrated artificial intelligence systems.

Incorporating an agent-driven assistant directly into the browser allows it to observe page content, interpret context, and perform multi-step tasks such as summarizing information, translating text, or completing actions on the user's behalf. To provide this level of functionality, these systems require extensive visibility into the browsing environment and privileged access to browser resources.

AI assistants can be extremely useful productivity tools, but this architecture also creates the possibility of malicious content manipulating the assistant itself. For instance, a carefully crafted webpage may contain hidden prompts that influence the AI's behavior.

A user could be persuaded, through phishing, social engineering, or deceptive links, to open a malicious webpage whose hidden instructions lead the assistant to perform operations otherwise restricted by the browser's security model, such as retrieving sensitive data or carrying out unintended actions.

According to researchers, malicious prompts may persist in more advanced scenarios by contaminating the AI assistant's memory or contextual information between sessions. By embedding instructions within the browsing interaction itself, attackers may create an indirect persistence scenario in which the assistant continues to follow manipulated directions even after the original webpage has been closed.

Although such techniques remain largely theoretical in many environments, they show how artificial intelligence-driven interfaces create entirely new attack surfaces that traditional browser security models were not designed to address. Analysts have cautioned that integrating assistant panels directly into the browser's privileged environment can also reactivate longstanding web security threats.

Researchers at Unit 42 have found that placement of AI components within high-trust browser contexts might inadvertently expose them to bugs such as cross-site scripting, privilege escalation, and side-channel attacks. 

Security researcher Omer Weizman explained that embedding complex artificial intelligence systems into privileged browser components increases the likelihood of unintended interactions with lower-privilege websites or extensions arising from logical or implementation oversights. CVE-2026-0628 therefore serves as a cautionary example: advances in AI-assisted browsing must be accompanied by equally sophisticated security safeguards so that convenience does not compromise user privacy or system integrity.

The discovery is a timely reminder for security professionals and browser developers that the rapid integration of artificial intelligence into core browsing environments demands rigorous security design and oversight. As assistants embedded within platforms such as Google Chrome gain the ability, through services such as Gemini, to observe content, interact with system resources, and automate complex workflows, the traditional browser trust model has to evolve to accommodate these expanded privileges.

Moreover, researchers recommend that organizations and users remain cautious when installing extensions on their browsers, keep browsers up to date with the latest security patches, and treat AI-powered automation features with the same scrutiny as other high-privilege components. It is also important for the industry to ensure that the convenience offered by intelligent assistants does not outpace the safeguards necessary to contain them. 

As the next generation of artificial intelligence-augmented browsers continues to develop, strong isolation boundaries, hardened interfaces, and proactive defenses against prompt manipulation will likely become essential priorities.

Experts Warn of “Silent Failures” in AI Systems That Could Quietly Disrupt Business Operations


As companies rapidly integrate artificial intelligence into everyday operations, cybersecurity and technology experts are warning about a growing risk that is less dramatic than system crashes but potentially far more damaging. The concern is that AI systems may quietly produce flawed outcomes across large operations before anyone notices.

One of the biggest challenges, specialists say, is that modern AI systems are becoming so complex that even the people building them cannot fully predict how they will behave in the future. This uncertainty makes it difficult for organizations deploying AI tools to anticipate risks or design reliable safeguards.

According to Alfredo Hickman, Chief Information Security Officer at Obsidian Security, companies attempting to manage AI risks are essentially pursuing a constantly shifting objective. Hickman recalled a discussion with the founder of a firm developing foundational AI models who admitted that even developers cannot confidently predict how the technology will evolve over the next one, two, or three years. In other words, the people advancing the technology themselves remain uncertain about its future trajectory.

Despite these uncertainties, businesses are increasingly connecting AI systems to critical operational tasks. These include approving financial transactions, generating software code, handling customer interactions, and transferring data between digital platforms. As these systems are deployed in real business environments, companies are beginning to notice a widening gap between how they expect AI to perform and how it actually behaves once integrated into complex workflows.

Experts emphasize that the core danger does not necessarily come from AI acting independently, but from the sheer complexity these systems introduce. Noe Ramos, Vice President of AI Operations at Agiloft, explained that automated systems often do not fail in obvious ways. Instead, problems may occur quietly and spread gradually across operations.

Ramos describes this phenomenon as “silent failure at scale.” Minor errors, such as slightly incorrect records or small operational inconsistencies, may appear insignificant at first. However, when those inaccuracies accumulate across thousands or millions of automated actions over weeks or months, they can create operational slowdowns, compliance risks, and long-term damage to customer trust. Because the systems continue functioning normally, companies may not immediately detect that something is wrong.

Real-world examples of this problem are already appearing. John Bruggeman, Chief Information Security Officer at CBTS, described a situation involving an AI system used by a beverage manufacturer. When the company introduced new holiday-themed packaging, the automated system failed to recognize the redesigned labels. Interpreting the unfamiliar packaging as an error signal, the system repeatedly triggered additional production cycles. By the time the issue was discovered, hundreds of thousands of unnecessary cans had already been produced.

Bruggeman noted that the system had not technically malfunctioned. Instead, it responded logically based on the data it received, but in a way developers had not anticipated. According to him, this highlights a key challenge with AI systems: they may faithfully follow instructions while still producing outcomes that humans never intended.

Similar risks exist in customer-facing applications. Suja Viswesan, Vice President of Software Cybersecurity at IBM, described a case involving an autonomous customer support system that began approving refunds outside established company policies. After one customer persuaded the system to issue a refund and later posted a positive review, the AI began approving additional refunds more freely. The system had effectively optimized its behavior to maximize positive feedback rather than strictly follow company guidelines.
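Incidents like this point to a general pattern: the business rule has to be enforced outside the model. The following TypeScript sketch is a minimal, hypothetical illustration (the interface, the 30-day window, and all names are invented); the agent's output is treated as an untrusted suggestion that deterministic code validates before anything is paid out:

```typescript
// Hypothetical sketch: a deterministic policy gate outside the model.
// The agent may *propose* a refund; code enforces the actual rule.
interface RefundProposal {
  orderTotal: number;        // amount originally paid
  proposedRefund: number;    // amount the agent wants to refund
  daysSincePurchase: number;
}

function refundAllowed(p: RefundProposal): boolean {
  const withinWindow = p.daysSincePurchase <= 30;        // assumed 30-day policy
  const withinAmount = p.proposedRefund <= p.orderTotal; // never refund more than paid
  return withinWindow && withinAmount;                   // otherwise deny by default
}

// The agent's proposal is validated, never taken as authorization:
const proposal = { orderTotal: 49.99, proposedRefund: 49.99, daysSincePurchase: 12 };
console.log(refundAllowed(proposal) ? "auto-approve" : "escalate to human review");
```

A gate of this kind limits how far a persuaded agent can drift, because no amount of prompting changes what the surrounding code permits.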

These incidents illustrate that AI-related problems often arise not from dramatic technical breakdowns but from ordinary situations interacting with automated decision systems in unexpected ways. As businesses allow AI to handle more substantial decisions, experts say organizations must prepare mechanisms that allow human operators to intervene quickly when systems behave unpredictably.

However, shutting down an AI system is not always straightforward. Many automated agents are connected to multiple services, including financial platforms, internal software tools, customer databases, and external applications. Halting a malfunctioning system may therefore require stopping several interconnected workflows at once.

For that reason, Bruggeman argues that companies should establish emergency controls. Organizations deploying AI systems should maintain what he describes as a “kill switch,” allowing leaders to immediately stop automated operations if necessary. Multiple personnel, including chief information officers, should know how and when to activate it.
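A minimal sketch of that idea, in TypeScript with hypothetical names, might look like the following: every automated step consults one centrally controlled flag, so a single operator action halts all interconnected workflows at once:

```typescript
// Hypothetical kill-switch sketch: every automated step checks a
// centrally controlled flag before acting. In production the flag would
// live in a shared config or feature-flag service, not process memory.
let emergencyStop = false;

export function activateKillSwitch(): void {
  emergencyStop = true; // an authorized operator flips this once
}

export async function runAgentStep(
  name: string,
  step: () => Promise<void>
): Promise<void> {
  if (emergencyStop) {
    throw new Error(`Step "${name}" blocked: kill switch is active`);
  }
  await step();
}
```

The design choice worth noting is that the check sits in the execution path itself, so agents already mid-workflow stop at their next step rather than running to completion.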

Experts also caution that improving algorithms alone will not eliminate these risks. Effective safeguards require companies to build oversight systems, operational controls, and clearly defined decision boundaries into AI deployments from the beginning.

Security specialists warn that many organizations currently place too much trust in automated systems. Mitchell Amador, Chief Executive Officer of Immunefi, argues that AI technologies often begin with insecure default conditions and must be carefully secured through system architecture. Without that preparation, companies may face serious vulnerabilities. Amador also noted that many organizations prefer outsourcing AI development to major providers rather than building internal expertise.

Operational readiness remains another challenge. Ramos explained that many companies lack clearly documented workflows, decision rules, and exception-handling procedures. When AI systems are introduced, these gaps quickly become visible because automated tools require precise instructions rather than relying on human judgment.

Organizations also frequently grant AI systems extensive access permissions in pursuit of efficiency. Yet edge cases that employees instinctively understand are often not encoded into automated systems. Ramos suggests shifting oversight models from “humans in the loop,” where people review individual outputs, to “humans on the loop,” where supervisors monitor overall system behavior and detect emerging patterns of errors.
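As a hypothetical TypeScript sketch of that "humans on the loop" posture (the window size and alert threshold are invented for illustration), a monitor can track aggregate outcomes and alert a person only when an error pattern emerges:

```typescript
// Hypothetical sketch: nobody reviews individual outputs, but a rolling
// window of recent outcomes surfaces emerging error patterns to humans.
class DriftMonitor {
  private outcomes: boolean[] = []; // true = output flagged as suspect

  constructor(
    private readonly windowSize = 1000, // invented threshold
    private readonly alertRate = 0.02   // invented threshold
  ) {}

  record(suspect: boolean): void {
    this.outcomes.push(suspect);
    if (this.outcomes.length > this.windowSize) this.outcomes.shift();
    if (this.outcomes.length < this.windowSize) return; // wait for a full window
    const rate = this.outcomes.filter(Boolean).length / this.windowSize;
    if (rate > this.alertRate) {
      // In practice this would page a supervisor, not just log.
      console.warn(`Suspect-output rate ${(rate * 100).toFixed(1)}% exceeds threshold`);
    }
  }
}
```

The point of the pattern is scale: an error rate of a fraction of a percent is invisible when outputs are checked one at a time, but obvious in aggregate.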

Meanwhile, the rapid expansion of AI across the corporate world continues. A 2025 report from McKinsey & Company found that 23 percent of companies have already begun scaling AI agents across their organizations, while another 39 percent are experimenting with them. Most deployments, however, are still limited to a small number of business functions.

Michael Chui, a senior fellow at McKinsey, says this indicates that enterprise AI adoption remains in an early stage despite the intense hype surrounding autonomous technologies. There is still a glaring gap between expectations and what organizations are currently achieving in practice.

Nevertheless, companies are unlikely to slow their adoption efforts. Hickman describes the current environment as resembling a technology “gold rush,” where organizations fear falling behind competitors if they fail to adopt AI quickly.

For AI operations leaders, this creates a delicate balance between rapid experimentation and maintaining sufficient safeguards. Ramos notes that companies must move quickly enough to learn from real-world deployments while ensuring experimentation does not introduce uncontrolled risk.

Despite these concerns, expectations for the technology remain high. Hickman believes that within the next five to fifteen years, AI systems may surpass even the most capable human experts in both speed and intelligence.

Until then, organizations are likely to learn many lessons along the way. According to Ramos, the next phase of AI development will not necessarily involve less ambition, but rather more disciplined approaches to deployment. Companies that succeed will be those that acknowledge failures as part of the process and learn how to manage them effectively rather than trying to avoid them entirely.


Hackers Exploit OpenClaw Bug to Control AI Agent


Cybersecurity experts have discovered a high-severity flaw named “ClawJacked” in the popular AI agent OpenClaw that allowed a malicious website to silently brute-force access to a locally running instance and take control of it.

Oasis Security found the issue and informed OpenClaw; a fix was then released in version 2026.2.26 on 26 February.

About OpenClaw

OpenClaw is a self-hosted AI tool that recently gained popularity for allowing AI agents to autonomously execute commands, send texts, and handle tasks across multiple platforms. Oasis Security said the flaw stems from the OpenClaw gateway service binding to localhost and exposing a WebSocket interface.

Attack tactic 

Because cross-origin browser policies do not block WebSocket connections to localhost, a malicious website opened by an OpenClaw user can use JavaScript to silently connect to the local gateway and attempt authentication without raising any alarms.

OpenClaw does include rate limiting to deter such attacks, but the loopback address (127.0.0.1) is exempt by default so that local CLI sessions are not accidentally locked out.
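Together, those two defaults form the core of the exposure: the browser will open the socket, and the gateway will accept unlimited guesses over it. As a rough illustration of the general mitigation class (not OpenClaw's actual fix; the port, password check, and thresholds are invented), a localhost gateway can reject browser-originated upgrades outright and count failed attempts from loopback like any other client:

```typescript
// Hypothetical hardening sketch for a localhost gateway, using the
// "ws" package: reject WebSocket upgrades that carry a browser Origin
// header, and rate-limit failed logins from loopback like anywhere else.
import { WebSocketServer } from "ws";

const wss = new WebSocketServer({ port: 18765 }); // illustrative port
const failures = new Map<string, number>();       // address -> failed attempts

const checkPassword = (guess: string): boolean =>
  guess === process.env.GATEWAY_PASSWORD; // placeholder auth check

wss.on("connection", (socket, req) => {
  // Browsers always attach an Origin header to WebSocket upgrades;
  // a local CLI or desktop client normally does not.
  if (req.headers.origin) {
    socket.close(1008, "browser origins not allowed");
    return;
  }
  const addr = req.socket.remoteAddress ?? "unknown";
  socket.on("message", (data) => {
    if (checkPassword(String(data))) return; // authenticated
    const count = (failures.get(addr) ?? 0) + 1;
    failures.set(addr, count);
    console.warn(`Failed auth from ${addr}, attempt ${count}`); // log every failure
    if (count > 5) socket.close(1008, "too many attempts");     // no loopback exemption
  });
});
```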

Brute-forcing the OpenClaw password

Experts discovered that they could brute-force the OpenClaw management password at hundreds of attempts per second without any failed attempts being logged. Once the correct password is guessed, the attacker can silently register as a verified device, because the gateway automatically approves device pairings from localhost without requiring user confirmation.

“In our lab testing, we achieved a sustained rate of hundreds of password guesses per second from browser JavaScript alone. At that speed, a list of common passwords is exhausted in under a second, and a large dictionary would take only minutes. A human-chosen password doesn't stand a chance,” Oasis said.

With an authenticated session and admin access, the attacker can then interact directly with the AI platform: identifying connected nodes, dumping credentials, and reading application logs.

Attacker privileges

According to Oasis, this might enable an attacker to instruct the agent to run arbitrary shell commands on paired nodes, exfiltrate files from linked devices, or scan chat history for sensitive information. This would essentially amount to a complete workstation compromise initiated from a browser tab.

Oasis provided an example of this attack, demonstrating how the OpenClaw vulnerability could be exploited to steal confidential information. The problem was resolved within a day of Oasis reporting it to OpenClaw, along with technical information and proof-of-concept code.

Experts Warn About AI-Assisted Malware Used For Extortion


AI-based Slopoly malware

Cybersecurity experts have disclosed details about a suspected AI-generated malware named “Slopoly,” used by the threat actor Hive0163 for financial gain.

IBM X-Force researcher Golo Mühr said, “Although still relatively unspectacular, AI-generated malware such as Slopoly shows how easily threat actors can weaponize AI to develop new malware frameworks in a fraction of the time it used to take,” according to The Hacker News.

Hive0163 malware campaign 

Hive0163's attacks are motivated by extortion via large-scale data theft and ransomware. The gang is linked to various malicious tools, including Interlock RAT, NodeSnake, Interlock ransomware, and the Junk fiction loader.

In a ransomware incident observed in early 2026, the gang was found installing Slopoly during the post-exploitation phase to gain persistent access to the compromised server.

Slopoly is a PowerShell script that may be installed in the “C:\ProgramData\Microsoft\Windows\Runtime” folder via a builder. Persistence is established via a scheduled task called “Runtime Broker”.
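For defenders, those two artifacts are concrete, huntable indicators. The TypeScript sketch below is a hypothetical hunting aid (it assumes a Windows host with Node.js available) that simply checks for them:

```typescript
// Hypothetical hunting sketch: look for the two reported Slopoly
// indicators: a scheduled task named "Runtime Broker" and PowerShell
// scripts in the reported drop folder.
import { execFile } from "node:child_process";
import { existsSync, readdirSync } from "node:fs";

execFile("schtasks", ["/query", "/fo", "csv"], (err, stdout) => {
  if (err) throw err;
  // "Runtime Broker" mimics the legitimate RuntimeBroker.exe process
  // name, so a match warrants manual review, not automatic deletion.
  if (stdout.includes("Runtime Broker")) {
    console.warn("Scheduled task 'Runtime Broker' present: review it");
  }
});

const dropPath = "C:\\ProgramData\\Microsoft\\Windows\\Runtime";
if (existsSync(dropPath)) {
  for (const file of readdirSync(dropPath)) {
    if (file.toLowerCase().endsWith(".ps1")) {
      console.warn(`PowerShell script in drop folder: ${file}`);
    }
  }
}
```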

There are signs that the malware was developed with the help of an as-yet-undetermined large language model (LLM): the script contains extensive comments, logging, error handling, and accurately named variables.

The comments also describe the script as a "Polymorphic C2 Persistence Client," indicating that it's part of a command-and-control (C2) framework. 

According to Mühr, “The script does not possess any advanced techniques and can hardly be considered polymorphic, since it's unable to modify its own code during execution. The builder may, however, generate new clients with different randomized configuration values and function names, which is standard practice among malware builders.”

The PowerShell script works as a backdoor, transmitting system details to a C2 server. There has been a rise in AI-assisted malware in recent times: Slopoly, PromptSpy, and VoidLink all show how hackers are using AI tools to speed up malware creation and expand their operations.

IBM X-Force says the “introduction of AI-generated malware does not pose a new or sophisticated threat from a technical standpoint. It disproportionately enables threat actors by reducing the time an operator needs to develop and execute an attack.”