Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label AI. Show all posts

Threat Actors Exploit Fortinet Devices and Steal Firewall Configurations


Fortinet products targeted

Threat actors are targeting Fortinet FortiGate devices via automated attacks that make rogue accounts and steal firewall settings info. 

The campaign began earlier this year when threat actors exploited an unknown bug in the devices’ single-sign-on (SSO) option to make accounts with VPN access and steal firewall configurations. This means automation was involved. 

About the attack

Cybersecurity company Arctic Wolf discovered this attack and said they are quite similar to the attacks it found in December after the reveal of a critical login bypass flaw (CVE-2025-59718) in Fortinet products. 

The advisory comes after a series of reports from Fortinet users about threat actors abusing a patch bypass for the bug CVE-2025-59718 to take over patched walls. 

Impacted admins complaint that Fortinet said that the latest FortiOS variant 7.4.10 doesn't totally fix the authentication bypass bug, which should have been fixed in December 2025.

Patches and fixing 

Fortinet also plans on releasing more FortiOS variants soon to fully patch the CVE-2025-59718 security bug. 

Following an SSO login from cloud-init@mail.io on IP address 104.28.244.114, the attackers created admin users, according to logs shared by impacted Fortinet customers. This matches indications of compromise found by Arctic Wolf during its analysis of ongoing FortiGate attacks and prior exploitation the cybersecurity firm noticed in December. 

Turn off FortiCloud SSO to prevent intrusions. 

Turning off SSO

Admins can temporarily disable the vulnerable FortiCloud login capability (if enabled) by navigating to System -> Settings and changing "Allow administrative login using FortiCloud SSO" to Off. This will help administrators safeguard their firewalls until Fortinet properly updates FortiOS against these persistent assaults.

You can also run these commands from the interface:

"config system global

set admin-forticloud-sso-login disable

end"

What to do next?

Internet security watchdog Shadowserver is investigating around 11,000 Fortinet devices that are vulnerable to online threats and have FortiCloud SSO turned on. 

Additionally, CISA ordered federal agencies to patch CVE-2025-59718 within a week after adding it to its list of vulnerabilities that were exploited in attacks on December 16.

Infy Hackers Strike Again With New C2 Servers After Iran's Internet Shutdown Ends


Infy group's new attack tactic 

An Iranian hacking group known as Infy (aka Prince of Persia) has advanced its attack tactics to hide its operations. The group also made a new C2 infrastructure while there was a wave of internet shutdown imposed earlier this year. The gang stopped configuring its C2 servers on January 8 when experts started monitoring Infy. 

In reaction to previous protests, Iranian authorities implemented a nationwide internet shutdown on this day, which probably indicates that even government-affiliated cyber units did not have the internet. 

About the campaign 

The new activity was spotted on 26 January 2026 while the gang was setting up its new C2 servers, one day prior to the Iranian government’s internet restrictions. This suggests that the threat actor may be state-sponsored and supported by Iran. 

Infy is one of the many state-sponsored hacking gangs working out of Iran infamous for sabotage, spying, and influence campaigns coordinated with Tehran’s strategic goals. However, it also has a reputation for being the oldest and less famous gangs staying under the radar and not getting caught, working secretly since 2004 via “laser-focused” campaigns aimed at people for espionage.

The use of modified versions of Foudre and Tonnerre, the latter of which used a Telegram bot probably for data collection and command issuance, were among the new tradecraft linked to the threat actor that SafeBreach revealed in a report released in December 2025. Tornado is the codename for the most recent version of Tonnerre (version 50).

The report also revealed that threat actors replaced the C2 infrastructure for all variants of Tonnerre and Foudre and also released Tornado variant 51 that employs both Telegram and HTTP for C2.

It generates C2 domain names using two distinct techniques: a new DGA algorithm initially, followed by fixed names utilizing blockchain data de-obfuscation. We believe that this novel method offers more flexibility in C2 domain name registration without requiring an upgrade to the Tornado version.

Experts believe that Infy also abused a 1-day security bug in WinRAR to extract the Tornado payload on an infected host to increase the effectiveness of its attacks. The RAR archives were sent to the Virus Total platform from India and Germany in December 2025. This means the two countries may have been victims. 



Experts Find Malicious Browser Extensions, Chrome, Safari, and Edge Affected


Threat actors exploit extensions

Cybersecurity experts found 17 extensions for Chrome, Edge, and Firefox browsers which track user's internet activity and install backdoors for access. The extensions were downloaded over 840,000 times. 

The campaign is not new. LayerX claimed that the campaign is part of GhostPoster, another campaign first found by Koi Security last year in December. Last year, researchers discovered 17 different extensions that were downloaded over 50,000 times and showed the same monitoring behaviour and deploying backdoors. 

Few extensions from the new batch were uploaded in 2020, exposing users to malware for years. The extensions appeared in places like the Edge store and later expanded to Firefox and Chrome. 

Few extensions stored malicious JavaScript code in the PNG logo. The code is a kind of instruction on downloading the main payload from a remote server. 

The main payload does multiple things. It can hijack affiliate links on famous e-commerce websites to steal money from content creators and influencers. “The malware watches for visits to major e-commerce platforms. When you click an affiliate link on Taobao or JD.com, the extension intercepts it. The original affiliate, whoever was supposed to earn a commission from your purchase, gets nothing. The malware operators get paid instead,” said Koi researchers. 

After that, it deploys Google Analytics tracking into every page that people open, and removes security headers from HTTP responses. 

In the end, it escapes CAPTCHA via three different ways, and deploy invisible iframes that do ad frauds, click frauds, and tracking. These iframes disappear after 15 seconds.

Besides this, all extensions were deleted from the repositories, but users shoul also remove them personally. 

This staged execution flow demonstrates a clear evolution toward longer dormancy, modularity, and resilience against both static and behavioral detection mechanisms,” said LayerX. 

The PNG steganography technique is employed by some. Some people download JavaScript directly and include it into each page you visit. Others employ bespoke ciphers to encode the C&C domains and use concealed eval() calls. The same assailant. identical servers. many methods of delivery. This appears to be testing several strategies to see which one gets the most installs, avoids detection the longest, and makes the most money.

This campaign reflects a deliberate shift toward patience and precision. By embedding malicious code in images, delaying execution, and rotating delivery techniques across identical infrastructure, the attackers test which methods evade detection longest. The strategy favors longevity and profit over speed, exposing how browser ecosystems remain vulnerable to quietly persistent threats.

Federal Agencies Worldwide Hunt for Black Basta Ransomware Leader


International operation to catch Ransomware leader 

International law enforcement agencies have increased their search for individuals linked to the Black Basta ransomware campaign. Agencies confirmed that the suspected leader of the Russia-based Ransomware-as-a-service (RaaS) group has been put in the EU’s and Interpol’s Most Wanted list and Red Notice respectively. German and Ukrainian officials have found two more suspects working from Ukraine. 

As per the notice, German Federal Criminal Police (BKA) and Ukrainian National Police collaborated to find members of a global hacking group linked with Russia. 

About the operation 

The agencies found two Ukrainians who had specific roles in the criminal structure of Black Basta Ransomware. Officials named the gang’s alleged organizer as Oleg Evgenievich Nefedov from Russia. He is wanted internationally. German law enforcement agencies are after him because of “extortion in an especially serious case, formation and leadership of a criminal organization, and other criminal offenses.”

According to German prosecutors, Nefedov was the ringleader and primary decision-maker of the group that created and oversaw the Black Basta ransomware. under several aliases, such as tramp, tr, AA, Kurva, Washingt0n, and S.Jimmi. He is thought to have created and established the malware known as Black Basta. 

The Ukrainian National Police described how the German BKA collaborated with domestic cyber police officers and investigators from the Main Investigative Department, guided by the Office of the Prosecutor General's Cyber Department, to interfere with the group's operations.

The suspects

Two individuals operating in Ukraine were found to be carrying out technical tasks necessary for ransomware attacks as part of the international investigation. Investigators claim that these people were experts at creating ransomware campaigns and breaking into secured systems. They used specialized software to extract passwords from business computer systems, operating as so-called "hash crackers." 

Following the acquisition of employee credentials, the suspects allegedly increased their control over corporate environments, raised the privileges of hacked accounts, and gained unauthorized access to internal company networks.

Authorities claimed that after gaining access, malware intended to encrypt files was installed, sensitive data was stolen, and vital systems were compromised. The suspects' homes in the Ivano-Frankivsk and Lviv regions were searched with permission from the court. Digital storage devices and cryptocurrency assets were among the evidence of illicit activity that police confiscated during these operations.

Researchers Disclose Patched Flaw in Docker AI Assistant that Enabled Code Execution


Researchers have disclosed details of a previously fixed security flaw in Ask Gordon, an artificial intelligence assistant integrated into Docker Desktop and the Docker command-line interface, that could have been exploited to execute code and steal sensitive data. The vulnerability, dubbed DockerDash by cybersecurity firm Noma Labs, was patched by Docker in November 2025 with the release of version 4.50.0. 

“In DockerDash, a single malicious metadata label in a Docker image can be used to compromise your Docker environment through a simple three-stage attack,” said Sasi Levi, security research lead at Noma Labs, in a report shared with The Hacker News. “Every stage happens with zero validation, taking advantage of current agents and MCP Gateway architecture.” 

According to the researchers, the flaw allowed Ask Gordon to treat unverified container metadata as executable instructions. When combined with Docker’s Model Context Protocol gateway, this behavior could lead to remote code execution on cloud and command-line systems, or data exfiltration on desktop installations. 

The issue stems from what Noma described as a breakdown in contextual trust. Ask Gordon reads metadata from Docker images, including LABEL fields, without distinguishing between descriptive information and embedded instructions. These instructions can then be forwarded to the MCP Gateway, which executes them using trusted tools without additional checks. “MCP Gateway cannot distinguish between informational metadata and a pre-authorized, runnable internal instruction,” Levi said. 

“By embedding malicious instructions in these metadata fields, an attacker can hijack the AI’s reasoning process.” In a hypothetical attack, a malicious actor could publish a Docker image containing weaponized metadata labels. When a user queries Ask Gordon about the image, the assistant parses the labels, forwards them to the MCP Gateway, and triggers tool execution with the user’s Docker privileges.  
Researchers said the same weakness could be used for data exfiltration on Docker Desktop, allowing attackers to gather details about installed tools, container configurations, mounted directories, and network setups, despite the assistant’s read-only permissions. Docker version 4.50.0 also addressed a separate prompt injection flaw previously identified by Pillar Security, which could have enabled attackers to manipulate Docker Hub metadata to extract sensitive information. 

“The DockerDash vulnerability underscores the need to treat AI supply chain risk as a current core threat,” Levi said. “Trusted input sources can be used to hide malicious payloads that manipulate an AI’s execution path.”

Apple's New Feature Will Help Users Restrict Location Data


Apple has introduced a new privacy feature that allows users to restrict the accuracy of location data shared with cellular networks on a few iPad models and iPhone. 

About the feature

The “Limit Precise Location” feature will start after updating to iOS26.3 or later. It restricts the information that mobile carriers use to decide locations through cell tower connections. Once turned on, cellular networks can only detect the device’s location, like neighbourhood instead of accurate street address. 

According to Apple, “The precise location setting doesn't impact the precision of the location data that is shared with emergency responders during an emergency call.” “This setting affects only the location data available to cellular networks. It doesn't impact the location data that you share with apps through Location Services. For example, it has no impact on sharing your location with friends and family with Find My.”

Users can turn on the feature by opening “Settings,” selecting “Cellular,” “Cellular Data Options,” and clicking the “Limit Precise Location” setting. After turning on limited precise location, the device may trigger a device restart to complete activation. 

The privacy enhancement feature works only on iPhone Air, iPad Pro (M5) Wi-Fi + Cellular variants running on iOS 26.3 or later. 

Where will it work?

The availability of this feature will depend on carrier support. The mobile networks compatible are:

EE and BT in the UK

Boost Mobile in the UK

Telecom in Germany 

AIS and True in Thailand 

Apple hasn't shared the reason for introducing this feature yet.

Compatibility of networks with the new feature 

Apple's new privacy feature, which is currently only supported by a small number of networks, is a significant step towards ensuring that carriers can only collect limited data on their customers' movements and habits because cellular networks can easily track device locations via tower connections for network operations.

“Cellular networks can determine your location based on which cell towers your device connects to. The limit precise location setting enhances your location privacy by reducing the precision of location data available to cellular networks,”

Exposed Admin Dashboard in AI Toy Put Children’s Data and Conversations at Risk

 

A routine investigation by a security researcher into an AI-powered toy revealed a serious security lapse that could have exposed sensitive information belonging to children and their families.

The issue came to light when security researcher Joseph Thacker examined an AI toy owned by a neighbor. In a blog post, Thacker described how he and fellow researcher Joel Margolis uncovered an unsecured admin interface linked to the Bondu AI toy.

Margolis identified a suspicious domain—console.bondu.com—referenced in the Content Security Policy headers of the toy’s mobile app backend. On visiting the domain, he found a simple option labeled “Login with Google.”

“By itself, there’s nothing weird about that as it was probably just a parent portal,” Thacker wrote. Instead, logging in granted access to Bondu’s core administrative dashboard.

“We had just logged into their admin dashboard despite [not] having any special accounts or affiliations with Bondu themselves,” Thacker said.

AI Toy Admin Panel Exposed Children’s Conversations

Further analysis of the dashboard showed that the researchers had unrestricted visibility into “Every conversation transcript that any child has had with the toy,” spanning “tens of thousands of sessions.” The exposed panel also included extensive personal details about children and their households, such as:
  • Child’s name and date of birth
  • Names of family members
  • Preferences, likes, and dislikes
  • Parent-defined developmental objectives
  • The custom name assigned to the toy
  • Historical conversations used to provide context to the language model
  • Device-level data including IP-based location, battery status, and activity state
  • Controls to reboot devices and push firmware updates
The researchers also observed that the system relies on OpenAI GPT-5 and Google Gemini. “Somehow, someway, the toy gets fed a prompt from the backend that contains the child profile information and previous conversations as context,” Thacker wrote. “As far as we can tell, the data that is being collected is actually disclosed within their privacy policy, but I doubt most people realize this unless they go and read it (which most people don’t do nowadays).”

Beyond the authentication flaw, the team identified an Insecure Direct Object Reference (IDOR) vulnerability in the API. This weakness “allowed us to retrieve any child’s profile data by simply guessing their ID.”

“This was all available to anyone with a Google account,” Thacker said. “Naturally we didn’t access nor store any data beyond what was required to validate the vulnerability in order to responsibly disclose it.”

Bondu Responds Within Minutes

Margolis contacted Bondu’s CEO via LinkedIn over the weekend, prompting the company to disable access to the exposed console “within 10 minutes.”

“Overall we were happy to see how the Bondu team reacted to this report; they took the issue seriously, addressed our findings promptly, and had a good collaborative response with us as security researchers,” Thacker said.

Bondu also initiated a broader security review, searched for additional vulnerabilities, and launched a bug bounty program. After reviewing console access logs, the company stated that no unauthorized parties had accessed the system aside from the researchers, preventing what could have become a data breach.

Despite the swift and responsible response, the incident changed Thacker’s perspective on AI-driven toys.

“To be honest, Bondu was totally something I would have been prone to buy for my kids before this finding,” he wrote. “However this vulnerability shifted my stance on smart toys, and even smart devices in general.”

“AI models are effectively a curated, bottled-up access to all the information on the internet,” he added. “And the internet can be a scary place. I’m not sure handing that type of access to our kids is a good idea.”

He further noted that, beyond data security concerns, AI introduces new risks at home. “AI makes this problem even more interesting because the designer (or just the AI model itself) can have actual ‘control’ of something in your house. And I think that is even more terrifying than anything else that has existed yet,” he said.

Bondu’s website maintains that the toy was designed with safety as a priority, stating that its “safety and behavior systems were built over 18 months of beta testing with thousands of families. Thanks to rigorous review processes and continuous monitoring, we did not receive a single report of unsafe or inappropriate behavior from bondu throughout the entire beta period.”

Google’s Project Genie Signals a Major Shift for the Gaming Industry

 

Google has sent a strong signal to the video game sector with the launch of Project Genie, an experimental AI world-model that can create explorable 3D environments using simple text or image prompts.

Although Google’s Genie AI has been known since 2024, its integration into Project Genie marks a significant step forward. The prototype is now accessible to Google AI Ultra subscribers in the US and represents one of Google’s most ambitious AI experiments to date.

Project Genie is being introduced through Google Labs, allowing users to generate short, interactive environments that can be explored in real time. Built on DeepMind’s Genie 3 world-model research, the system lets users move through AI-generated spaces, tweak prompts, and instantly regenerate variations. However, it is not positioned as a full-scale game engine or production-ready development tool.

Demonstrations on the Project Genie website showcase a variety of scenarios, including a cat roaming a living room from atop a Roomba, a vehicle traversing the surface of a rocky moon, and a wingsuit flyer gliding down a mountain. These environments remain navigable in real time, and while the worlds are generated dynamically as characters move, consistency is maintained. Revisiting areas does not create new terrain, and any changes made by an agent persist as long as the system retains sufficient memory.

"Genie 3 environments are … 'auto-regressive' – created frame by frame based on the world description and user actions," Google explains on Genie's website. "The environments remain largely consistent for several minutes, with memory recalling changes from specific interactions for up to a minute."

Despite these capabilities, time constraints remain a challenge.

"The model can support a few minutes of continuous interaction, rather than extended hours," Google said, adding elsewhere that content generation is currently capped at 60 seconds. A Google spokesperson told The Register that Genie can render environments beyond that limit, but the company "found 60 seconds provides a high quality and consistent world, and it gives people enough time to explore and experience the environment."

Google stated that world consistency lasts throughout an entire session, though it remains unclear whether session durations will be expanded in the future. Beyond time limits, the system has other restrictions.

Agents in Genie’s environments are currently limited in the actions they can perform, and interactions between multiple agents are unreliable. The model struggles with readable text, lacks accurate real-world simulation, and can suffer from lag or delayed responses. Google also acknowledged that some previously announced features are missing.

In addition, "A few of the Genie 3 model capabilities we announced in August, such as promptable events that change the world as you explore it, are not yet included in this prototype," Google added.

"A world model simulates the dynamics of an environment, predicting how they evolve and how actions affect them," the company said of Genie. "While Google DeepMind has a history of agents for specific environments like Chess or Go, building AGI requires systems that navigate the diversity of the real world."

Game Developers Face an Uncertain Future

Beyond AGI research, Google also sees potential applications for Genie within the gaming industry—an area already under strain. While Google emphasized that Genie "is not a game engine and can’t create a full game experience," a spokesperson told The Register, "we are excited to see the potential to augment the creative process, enhancing ideation, and speeding up prototyping."

Industry data suggests this innovation arrives at a difficult time. A recent Informa Game Developers Conference report found that 33 percent of US game developers and 28 percent globally experienced at least one layoff over the past two years. Half of respondents said their employer had conducted layoffs within the last year.

Concerns about AI’s role are growing. According to the same survey, 52 percent of industry professionals believe AI is negatively affecting the games sector—up sharply from 30 percent last year and 18 percent the year before. The most critical views came from professionals working in visual and technical art, narrative design, programming, and game design.

One machine learning operations employee summed up those fears bluntly.

"We are intentionally working on a platform that will put all game devs out of work and allow kids to prompt and direct their own content," the GDC study quotes the respondent as saying.

While Project Genie still has clear technical limitations, the rapid pace of AI development suggests those gaps may not last long—raising difficult questions about the future of game development.

Google Introduces AI-Powered Side Panel in Chrome to Automate Browsing




Google has updated its Chrome browser by adding a built-in artificial intelligence panel powered by its Gemini model, marking a stride toward automated web interaction. The change reflects the company’s broader push to integrate AI directly into everyday browsing activities.

Chrome, which currently holds more than 70 percent of the global browser market, is now moving in the same direction as other browsers that have already experimented with AI-driven navigation. The idea behind this shift is to allow users to rely on AI systems to explore websites, gather information, and perform online actions with minimal manual input.

The Gemini feature appears as a sidebar within Chrome, reducing the visible area of websites to make room for an interactive chat interface. Through this panel, users can communicate with the AI while keeping their main work open in a separate tab, allowing multitasking without constant tab switching.

Google explains that this setup can help users organize information more effectively. For example, Gemini can compare details across multiple open tabs or summarize reviews from different websites, helping users make decisions more quickly.

For subscribers to Google’s higher-tier AI plans, Chrome now offers an automated browsing capability. This allows Gemini to act as a software agent that can follow instructions involving multiple steps. In demonstrations shared by Google, the AI can analyze images on a webpage, visit external shopping platforms, identify related products, and add items to a cart while staying within a user-defined budget. The final purchase, however, still requires user approval.

The browser update also includes image-focused AI tools that allow users to create or edit images directly within Chrome, further expanding the browser’s role beyond simple web access.

Chrome’s integration with other applications has also been expanded. With user consent, Gemini can now interact with productivity tools, communication apps, media services, navigation platforms, and shopping-related Google services. This gives the AI broader context when assisting with tasks.

Google has indicated that future updates will allow Gemini to remember previous interactions across websites and apps, provided users choose to enable this feature. The goal is to make AI assistance more personalized over time.

Despite these developments, automated browsing faces resistance from some websites. Certain platforms have already taken legal or contractual steps to limit AI-driven activity, particularly for shopping and transactions. This underlines the ongoing tension between automation and website control.

To address these concerns, Google says Chrome will request human confirmation before completing sensitive actions such as purchases or social media posts. The browser will also support an open standard designed to allow AI-driven commerce in collaboration with participating retailers.

Currently, these features are available on Chrome for desktop systems in the United States, with automated browsing restricted to paid subscribers. How widely such AI-assisted browsing will be accepted across the web remains uncertain.


Some ChatGPT Browser Extensions Are Putting User Accounts at Risk

 


Cybersecurity researchers are cautioning users against installing certain browser extensions that claim to improve ChatGPT functionality, warning that some of these tools are being used to steal sensitive data and gain unauthorized access to user accounts.

These extensions, primarily found on the Chrome Web Store, present themselves as productivity boosters designed to help users work faster with AI tools. However, recent analysis suggests that a group of these extensions was intentionally created to exploit users rather than assist them.

Researchers identified at least 16 extensions that appear to be connected to a single coordinated operation. Although listed under different names, the extensions share nearly identical technical foundations, visual designs, publishing timelines, and backend infrastructure. This consistency indicates a deliberate campaign rather than isolated security oversights.

As AI-powered browser tools become more common, attackers are increasingly leveraging their popularity. Many malicious extensions imitate legitimate services by using professional branding and familiar descriptions to appear trustworthy. Because these tools are designed to interact deeply with web-based AI platforms, they often request extensive permissions, which exponentially increases the potential impact of abuse.

Unlike conventional malware, these extensions do not install harmful software on a user’s device. Instead, they take advantage of how browser-based authentication works. To operate as advertised, the extensions require access to active ChatGPT sessions and advanced browser privileges. Once installed, they inject hidden scripts into the ChatGPT website that quietly monitor network activity.

When a logged-in user interacts with ChatGPT, the platform sends background requests that include session tokens. These tokens serve as temporary proof that a user is authenticated. The malicious extensions intercept these requests, extract the tokens, and transmit them to external servers controlled by the attackers.

Possession of a valid session token allows attackers to impersonate users without needing passwords or multi-factor authentication. This can grant access to private chat histories and any external services connected to the account, potentially exposing sensitive personal or organizational information. Some extensions were also found to collect additional data, including usage patterns and internal access credentials generated by the extension itself.

Investigators also observed synchronized publishing behavior, shared update schedules, and common server infrastructure across the extensions, reinforcing concerns that they are part of a single, organized effort.

While the total number of installations remains relatively low, estimated at fewer than 1,000 downloads, security experts warn that early-stage campaigns can scale rapidly. As AI-related extensions continue to grow in popularity, similar threats are likely to emerge.

Experts advise users to carefully evaluate browser extensions before installation, pay close attention to permission requests, and remove tools that request broad access without clear justification. Staying cautious is increasingly important as browser-based attacks become more subtle and harder to detect.

Android Malware Uses Artificial Intelligence to Secretly Generate Ad Clicks

 


Security researchers have identified a new category of Android malware that uses artificial intelligence to carry out advertising fraud without the user’s knowledge. The malicious software belongs to a recently observed group of click-fraud trojans that rely on machine learning rather than traditional scripted techniques.

Instead of using hard-coded JavaScript instructions to interact with web pages, this malware analyzes advertisements visually. By examining what appears on the screen, it can decide where to tap, closely imitating normal user behavior. This approach allows the malware to function even when ads frequently change layout, include video content, or are embedded inside iframes, which often disrupt older click-fraud methods.

The threat actors behind the operation are using TensorFlow.js, an open-source machine learning library developed by Google. The framework allows trained AI models to run inside web browsers or server environments through JavaScript. In this case, the models are loaded remotely and used to process screenshots taken from an embedded browser.

Researchers from mobile security firm Dr.Web reported that the malware has been distributed through GetApps, Xiaomi’s official application store. The infected apps are mainly games. In several cases, the applications were initially uploaded without harmful functionality and later received malicious components through software updates.

Once active, the malware can run in what researchers describe as a “phantom” mode. In this mode, it opens a hidden browser based on Android’s WebView component. This browser loads a webpage containing advertisements and a JavaScript file designed to automate interactions. The browser operates on a virtual screen that is not visible to the device owner. Screenshots of this screen are repeatedly captured and sent to the AI model, which identifies relevant ad elements and triggers taps that appear legitimate.

A second operational mode, referred to as “signalling,” gives attackers direct control. Using WebRTC technology, the malware streams a live video feed of the hidden browser to the threat actor. This allows them to perform actions such as tapping, scrolling, or entering text in real time.

Dr.Web identified multiple infected games hosted on Xiaomi’s platform, including titles with tens of thousands of downloads. Beyond official app stores, the malware has also been found in modified versions of popular streaming applications distributed through third-party APK websites, Telegram channels, and a Discord server with a large subscriber base. Many of these apps function as expected, which reduces user suspicion.

Although this activity does not directly target personal data, it still affects users through increased battery drain, higher mobile data usage, and faster device wear. For cybercriminals, however, covert ad fraud remains a profitable operation.

Security experts advise Android users to avoid downloading apps from unofficial sources and to be cautious of altered versions of well-known apps that promise free access to paid features.

Google Appears to Be Preparing Gemini Integration for Chrome on Android

 

Google appears to be testing a new feature that could significantly change how users browse the web on mobile devices. The company is reportedly experimenting with integrating its AI model, Gemini, directly into Chrome for Android, enabling advanced agentic browsing capabilities within the mobile browser.

The development was first highlighted by Leo on X, who shared that Google has begun testing Gemini integration alongside agentic features in Chrome’s Android version. These findings are based on newly discovered references within Chromium, the open-source codebase that forms the foundation of the Chrome browser.

Additional insight comes from a Chromium post, where a Google engineer explained the recent increase in Chrome’s binary size. According to the engineer, "Binary size is increased because this change brings in a lot of code to support Chrome Glic, which will be enabled in Chrome Android in the near future," suggesting that the infrastructure needed for Gemini support is already being added. For those unfamiliar, “Glic” is the internal codename used by Google for Gemini within Chrome.

While the references do not reveal exactly how Gemini will function inside Chrome for Android, they strongly indicate that Google is actively preparing the feature. The integration could mirror the experience offered by Microsoft Copilot in Edge for Android. In such a setup, users might see a floating Gemini button that allows them to summarize webpages, ask follow-up questions, or request contextual insights without leaving the browser.

On desktop platforms, Gemini in Chrome already offers similar functionality by using the content of open tabs to provide contextual assistance. This includes summarizing articles, comparing information across multiple pages, and helping users quickly understand complex topics. However, Gemini’s desktop integration is still not widely available. Users who do have access can launch it using Alt + G on Windows or Ctrl + G on macOS.

The potential arrival of Gemini in Chrome for Android could make AI-powered browsing more accessible to a wider audience, especially as mobile devices remain the primary way many users access the internet. Agentic capabilities could help automate common tasks such as researching topics, extracting key points from long articles, or navigating complex websites more efficiently.

At present, Google has not confirmed when Gemini will officially roll out to Chrome for Android. However, the appearance of multiple references in Chromium suggests that development is progressing steadily. With Google continuing to expand Gemini across its ecosystem, an official announcement regarding its availability on Android is expected in the near future.

AI Agent Integration Can Become a Problem in Workplace Operations


AI agents were considered harmless sometime ago. They did what they were supposed to do: write snippets of code, answer questions, and help users in doing things faster. 

Then business started expecting more.

Slowly, companies started using organizational agents over personal copilots- agents integrated into customer support, HR, IT, engineering, and operations. These agents didn't just suggest, but started acting- touching real systems, changing configurations, and moving real data:

  • A support agent that gets customer data from CRM, triggers backend fixes, updates tickets, and checks bills.
  • An HR agent who overlooks access throughout VPNs, IAM, SaaS apps.
  • A change management agent that processes requests, logs actions in ServiceNow, updates production configurations and Confluence.
  • These AI agents automate oversight and control, and have become core of companies’ operational infrastructure

Work of AI agents

Organizational agents are made to work across many resources, supporting various roles, multiple users, and workflows via a single implement. Instead of getting linked with an individual user, these business agents work as shared resources that cater to requests, and automate work of across systems for many users. 

To work effectively, the AI agents depend on shared accounts, OAuth grants, and API keys to verify with the systems for interaction. The credentials are long-term and managed centrally, enabling the agent to work continuously. 

Threat of AI agents in workplace 

While this approach maximizes convenience and coverage, these design choices can unintentionally create powerful access intermediaries that bypass traditional permission boundaries.

Although this strategy optimizes coverage and convenience, these design decisions may inadvertently provide strong access intermediaries that go beyond conventional permission constraints. The next actions may seem legitimate and harmless when agents inadvertently grant access outside the specific user's authority. 

Reliable detection and attribution are eliminated when the execution is attributed to the agent identity, losing the user context. Conventional security controls are not well suited for agent-mediated workflows because they are based on direct system access and human users. Permissions are enforced by IAM systems according to the user's identity, but when an AI agent performs an activity, authorization is assessed based on the agent's identity rather than the requester's.

The impact 

Therefore, user-level limitations are no longer in effect. By assigning behavior to the agent's identity and concealing who started the action and why, logging and audit trails exacerbate the issue. Security teams are unable to enforce least privilege, identify misuse, or accurately assign intent when using agents, which makes it possible for permission bypasses to happen without setting off conventional safeguards. Additionally, the absence of attribution slows incident response, complicates investigations, and makes it challenging to ascertain the scope or aim of a security occurrence.

n8n Supply Chain Attack Exploits Community Nodes In Google Ads Integration to Steal Tokens


Hackers were found uploading a set of eight packages on the npm registry that pretended as integrations attacking the n8n workflow automation platform to steal developers’ OAuth credentials. 

About the exploit 

The package is called “n8n-nodes-hfgjf-irtuinvcm-lasdqewriit”, it copies Google Ads integration and asks users to connect their ad account in a fake form and steal OAuth credentials from servers under the threat actors’ control. 

Endor Labs released a report on the incident. "The attack represents a new escalation in supply chain threats,” it said. Adding that “unlike traditional npm malware, which often targets developer credentials, this campaign exploited workflow automation platforms that act as centralized credential vaults – holding OAuth tokens, API keys, and sensitive credentials for dozens of integrated services like Google Ads, Stripe, and Salesforce in a single location," according to the report. 

Attack tactic 

Experts are not sure if the packages share similar malicious functions. But Reversing labs Spectra Assure analysed a few packages and found no security issues. In one package called “n8n-nodes-zl-vietts,” it found a malicious component with malware history. 

The campaign might still be running as another updated version of the package “n8n-nodes-gg-udhasudsh-hgjkhg-official” was posted to npm recently.

Once installed as a community node, the malicious package works as a typical n8n integration, showing configuration screens. Once the workflow is started, it launches a code to decode the stored tokens via n8n’s master key and send the stolen data to a remote server. 

This is the first time a supply chain attack has specially targeted the n8n ecosystem, with hackers exploiting the trust in community integrations. 

New risks in ad integration 

The report exposed the security gaps due to untrusted workflows integration, which increases the attack surface. Experts have advised developers to audit packages before installing them, check package metadata for any malicious component, and use genuine n8n integrations. 

The findings highlight the security issues that come with integrating untrusted workflows, which can expand the attack surface. Developers are recommended to audit packages before installing them, scrutinize package metadata for any anomalies, and use official n8n integrations.

According to researchers Kiran Raj and Henrik Plate, "Community nodes run with the same level of access as n8n itself. They can read environment variables, access the file system, make outbound network requests, and, most critically, receive decrypted API keys and OAuth tokens during workflow execution.”

Trust Wallet Browser Extension Hacked, $7 Million Stolen


Users of the Binance-owned Trust wallet lost more than $7 million after the release of an updated chrome extension. Changpenng Zhao, company co-founder said that the company will cover the stolen money of all the affected users. Crypto investigator ZachXBT believes hundreds of Trust Wallet users suffered losses due to the extension flaw. 

Trust Wallets in a post on X said, “We’ve identified a security incident affecting Trust Wallet Browser Extension version 2.68 only. Users with Browser Extension 2.68 should disable and upgrade to 2.69.”

CZ has assured that the company is investigating how threat actors were able to compromise the new version. 

Affected users

Mobile-only users and browser extension versions are not impacted. User funds are SAFE,” Zhao wrote in a post on X.

The compromise happened because of a flaw in a version of the Trust Wallet Google Chrome browser extension. 

What to do if you are a victim?

If you suffered the compromise of Browser Extension v2.68, follow these steps on Trust Wallet X site:

  • To safeguard your wallet's security and prevent any problems, do not open the Trust Wallet Browser Extension v2.68 on your desktop computer. 
  • Copy this URL into the address bar of your Chrome browser to open the Chrome Extensions panel: chrome://extensions/?id=egjidjbpglichdcondbcbdnbeeppgdph
  • If the toggle is still "On," change it to "Off" beneath the Trust Wallet. 
  • Select "Developer mode" from the menu in the top right corner. 
  • Click the "Update" button in the upper left corner. 
  • Verify the 2.69 version number. The most recent and safe version is this one. 

Please wait to open the Browser Extension until you have updated to Extension version 2.69. This helps safeguard the security of your wallet and avoids possible problems.

How did the public react?

Social media users expressed their views. One said, “The problem has been going on for several hours,” while another user complained that the company ”must explain what happened and compensate all users affected. Otherwise reputation is tarnished.” A user also asked, “How did the vulnerability in version 2.68 get past testing, and what changes are being made to prevent similar issues?”

Salesforce Pulls Back from AI LLMs Citing Reliability Issues


Salesforce, a famous enterprise software company, is withdrawing from its heavy dependence on large language models (LLMs) after facing reliability issues that the executive didn't like. The company believes that trust in AI LLMs has declined in the past year, according to The Information. 

Parulekar, senior VP of product marketing said, “All of us were more confident about large language models a year ago.” This means the company has shifted away from GenAI towards more “deterministic” automation in its flagship product Agentforce.

In its official statement, the company said, “While LLMs are amazing, they can’t run your business by themselves. Companies need to connect AI to accurate data, business logic, and governance to turn the raw intelligence that LLMs provide into trusted, predictable outcomes.”

Salesforce cut down its staff from 9,000 to 5,000 employees due to AI agent deployment. The company emphasizes that Agentforce can help "eliminate the inherent randomness of large models.” 

Failing models, missing surveys

Salesforce experienced various technical issues with LLMs during real-world applications. According to CTO Muralidhar Krishnaprasad, when given more than eight prompts, the LLMs started missing commands. This was a serious flaw for precision-dependent tasks. 

Home security company Vivint used Agentforce for handling its customer support for 2.5 million customers and faced reliability issues. Even after giving clear instructions to send satisfaction surveys after each customer conversation, Agentforce sometimes failed to send surveys for unknown reasons. 

Another challenge was the AI drift, according to executive Phil Mui. This happens when users ask irrelevant questions causing AI agents to lose focus on their main goals. 

AI expectations vs reality hit Salesforce 

The withdrawal from LLMs shows an ironic twist for CEO Marc Benioff, who often advocates for AI transformation. In his conversation with Business Insider, Benioff talked about drafting the company's annually strategic document, prioritizing data foundations, not AI models due to “hallucinations” issues. He also suggests rebranding the company as Agentforce. 

Although Agentforce is expected to earn over $500 million in sales annually, the company's stock has dropped about 34% from its peak in December 2024. Thousands of businesses that presently rely on this technology may be impacted by Salesforce's partial pullback from large models as the company attempts to bridge the gap between AI innovation and useful business application.

Okta Report: Pirates of Payrolls Attacks Plague Corporate Industry


IT helps desks be ready for an evolving threat that sounds like a Hollywood movie title. In December 2025, Okta Threat Intelligent published a report that explained how hackers can gain unauthorized access to payroll software. These threats are infamous as payroll pirate attacks. 

Pirates of the payroll

These attacks start with threat actors calling an organization’s help desk, pretending to be a user and requesting a password reset. 

“Typically, what the adversary will do is then come back to the help desk, probably to someone else on the phone, and say, ‘Well, I have my password, but I need my MFA factor reset,’” according to VP of Okta Threat Intelligence Brett Winterford. “And then they enroll their own MFA factor, and from there, gain access to those payroll applications for the purposes of committing fraud.”

Attack tactic 

The threat actors are working at a massive scale and leveraging various services and devices to assist their malicious activities. According to Okta report, cyber thieves employed social engineering, calling help desk personnel on the phone and attempting to trick them into resetting the password for a user account. These attacks have impacted multiple industries,

“They’re certainly some kind of cybercrime organization or fraud organization that is doing this at scale,” Winterford said. Okta believes the hackers gang is based out of West Africa. 

Recently, the US industry has been plagued with payroll pirates in the education sector. The latest Okta research mentions that these schemes are now happening across different industries like retail sector and manufacturing. “It’s not often you’ll see a huge number of targets in two distinct industries. I can’t tell you why, but education [and] manufacturing were massively targeted,” Winterford said. 

How to mitigate pirates of payroll attacks?

Okta advises companies to establish a standard process to check the real identity of users who contact the help desk for aid. Winterford advised businesses that depend on outsourced IT help should limit their help desks’ ability to reset user passwords without robust measures. “In some organizations, they’re relying on nothing but passwords to get access to payroll systems, which is madness,” he said.



Google Launches Emergency Location Services in India for Android Devices


Google starts emergency location service in India

Google recently announced the launch of its Emergency Location Service (ELS) in India for compatible Android smartphones. It means that users who are in an emergency can call or contact emergency service providers like police, firefighters, and healthcare professionals. ELS can share the user's accurate location immediately. 

Uttar Pradesh (UP) in India has become the first state to operationalise ELS for Android devices. Earlier, ELS was rolled out to devices having Android 6 or newer versions. For integration, however, ELS will require state authorities to connect it with their services for activation. 

More about ELS

According to Google, the ELS function on Android handsets has been activated in India. The built-in emergency service will enable Android users to communicate their location by call or SMS in order to receive assistance from emergency service providers, such as firefighters, police, and medical personnel. 

ELS on Android collects information from the device's GPS, Wi-Fi, and cellular networks in order to pinpoint the user's exact location, with an accuracy of up to 50 meters.

Implementation details

However, local wireless and emergency infrastructure operators must enable support for the ELS capability. The first state in India to "fully" operationalize the service for Android devices is Uttar Pradesh. 

ELS assistance has been integrated with the emergency number 112 by the state police in partnership with Pert Telecom Solutions. It is a free service that solely monitors a user's position when an Android phone dials 112. 

Google added that all suitable handsets running Android 6.0 and later versions now have access to the ELS functionality. 

Even if a call is dropped within seconds of being answered, the business claims that ELS in Android has enabled over 20 million calls and SMS messages to date. ELS is supported by Android Fused Location Provider- Google's machine learning tool.

Promising safety?

According to Google, the feature is only available to emergency service providers and it will never collect or share accurate location data for itself. The ELS data will be sent directly only to the concerned authority.

Recently, Google also launched the Emergency Live Video feature for Android devices. It lets users share their camera feed during an emergency via a call or SMS with the responder. But the emergency service provider has to get user approval for the access. The feature is shown on screen immediately when the responder requests a video from their side. User can accept the request and provide a visual feed or reject the request.

High Severity Flaw In Open WebUI Can Leak User Conversations and Data


A high-severity security bug impacting Open WebUI has been found by experts. It may expose users to account takeover (ATO) and, in some incidents, cause full server compromise. 

Talking about WebUI, Cato researchers said, “When a platform of this size becomes vulnerable, the impact isn’t just theoretical. It affects production environments managing research data, internal codebases, and regulated information.”

The flaw is tracked as CVE-2025-64496 and found by Cato Networks experts. The vulnerability affects Open WebUI versions 0.6.34 and older if the Director Connection feature is allowed. The flaw has a severity rating of 7.3 out of 10. 

The vulnerability exists inside Direct Connections, which allows users to connect Open WebUI to external OpenAI-supported model servers. While built for supporting flexibility and self-hosted AI workflows, the feature can be exploited if a user is tricked into linking with a malicious server pretending to be a genuine AI endpoint. 

Fundamentally, the vulnerability comes from a trust relapse between unsafe model servers and the user's browser session. A malicious server can send a tailored server-sent events message that prompts the deployment of JavaScript code in the browser. This lets a threat actor steal authentication tokens stored in local storage. When the hacker gets these tokens, it gives them full access to the user's Open WebUI account. Chats, API keys, uploaded documents, and other important data is exposed. 

Depending on user privileges, the consequences can be different.

Consequences?

  • Hackers can steal JSON web tokens and hijack sessions. 
  • Full account hack, this includes access to chat logs and uploaded documents.
  • Leak of important data and credentials shared in conversations. 
  • If the user has enabled workspace.tools permission, it can lead to remote code execution (RCE). 

Open WebUI maintainers were informed about the issue in October 2025, and publicly disclosed in November 2025, after patch validation and CVE assignment. Open WebUI variants 0.6.35 and later stop the compromised execute events, patching the user-facing threat.

Open WebUI’s security patch will work for v0.6.35 or “newer versions, which closes the user-facing Direct Connections vulnerability. However, organizations still need to strengthen authentication, sandbox extensibility and restrict access to specific resources,” according to Cato Networks researchers.





New US Proposal Allows Users to Sue AI Companies Over Unauthorised Data Use


US AI developers would be subject to data privacy obligations applicable in federal court under a wide legislative proposal disclosed recently by the US senate Marsha Blackburn, R-Tenn. 

About the proposal

Beside this, the proposal will create a federal right for users to sue companies for misusing their personal data for AI model training without proper consent. The proposal allows statutory and punitive damages, attorney fees and injunctions. 

Blackburn is planning to officially introduce the bill this year to codify President Donald Trump’s push for “one federal rule book” for AI, according to the press release. 

Why the need for AI regulations 

The legislative framework comes on the heels of Trump’s signing of an executive order aimed at blocking “onerous” AI laws at the state level and promoting a national policy framework for the technology.  

In order to ensure that there is a least burdensome national standard rather than fifty inconsistent State ones, the directive required the administration to collaborate with Congress. 

Michael Kratsios, the president's science and technology adviser, and David Sacks, the White House special adviser for AI and cryptocurrency, were instructed by the president to jointly propose federal AI legislation that would supersede any state laws that would contradict with administration policy. 

Blackburn stated in the Friday release that rather than advocating for AI amnesty, President Trump correctly urged Congress to enact federal standards and protections to address the patchwork of state laws that have impeded AI advancement.

Key highlights of proposal:

  • Mandate that regulations defining "minimum reasonable" AI protections be created by the Federal Trade Commission. 
  • Give the U.S. attorney general, state attorneys general, and private parties the authority to sue AI system creators for damages resulting from "unreasonably dangerous or defective product claims."
  • Mandate that sizable, state-of-the-art AI developers put procedures in place to control and reduce "catastrophic" risks associated with their systems and provide reports to the Department of Homeland Security on a regular basis. 
  • Hold platforms accountable for hosting an unauthorized digital replica of a person if they have actual knowledge that the replica was not authorized by the person portrayed.
  • Require quarterly reporting to the Department of Labor of AI-related job effects, such as job displacement and layoffs.

The proposal will preempt state laws regulating the management of catastrophic AI risks. The legislation will also mostly “preempt” state laws for digital replicas to make a national standard for AI. 

The proposal will not preempt “any generally applicable law, including a body of common law or a scheme of sectoral governance that may address” AI. The bill becomes effective 180 days after enforcement.