Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label AI. Show all posts

Infy Hackers Strike Again With New C2 Servers After Iran's Internet Shutdown Ends


Infy group's new attack tactic 

An Iranian hacking group known as Infy (aka Prince of Persia) has advanced its attack tactics to hide its operations. The group also made a new C2 infrastructure while there was a wave of internet shutdown imposed earlier this year. The gang stopped configuring its C2 servers on January 8 when experts started monitoring Infy. 

In reaction to previous protests, Iranian authorities implemented a nationwide internet shutdown on this day, which probably indicates that even government-affiliated cyber units did not have the internet. 

About the campaign 

The new activity was spotted on 26 January 2026 while the gang was setting up its new C2 servers, one day prior to the Iranian government’s internet restrictions. This suggests that the threat actor may be state-sponsored and supported by Iran. 

Infy is one of the many state-sponsored hacking gangs working out of Iran infamous for sabotage, spying, and influence campaigns coordinated with Tehran’s strategic goals. However, it also has a reputation for being the oldest and less famous gangs staying under the radar and not getting caught, working secretly since 2004 via “laser-focused” campaigns aimed at people for espionage.

The use of modified versions of Foudre and Tonnerre, the latter of which used a Telegram bot probably for data collection and command issuance, were among the new tradecraft linked to the threat actor that SafeBreach revealed in a report released in December 2025. Tornado is the codename for the most recent version of Tonnerre (version 50).

The report also revealed that threat actors replaced the C2 infrastructure for all variants of Tonnerre and Foudre and also released Tornado variant 51 that employs both Telegram and HTTP for C2.

It generates C2 domain names using two distinct techniques: a new DGA algorithm initially, followed by fixed names utilizing blockchain data de-obfuscation. We believe that this novel method offers more flexibility in C2 domain name registration without requiring an upgrade to the Tornado version.

Experts believe that Infy also abused a 1-day security bug in WinRAR to extract the Tornado payload on an infected host to increase the effectiveness of its attacks. The RAR archives were sent to the Virus Total platform from India and Germany in December 2025. This means the two countries may have been victims. 



Experts Find Malicious Browser Extensions, Chrome, Safari, and Edge Affected


Threat actors exploit extensions

Cybersecurity experts found 17 extensions for Chrome, Edge, and Firefox browsers which track user's internet activity and install backdoors for access. The extensions were downloaded over 840,000 times. 

The campaign is not new. LayerX claimed that the campaign is part of GhostPoster, another campaign first found by Koi Security last year in December. Last year, researchers discovered 17 different extensions that were downloaded over 50,000 times and showed the same monitoring behaviour and deploying backdoors. 

Few extensions from the new batch were uploaded in 2020, exposing users to malware for years. The extensions appeared in places like the Edge store and later expanded to Firefox and Chrome. 

Few extensions stored malicious JavaScript code in the PNG logo. The code is a kind of instruction on downloading the main payload from a remote server. 

The main payload does multiple things. It can hijack affiliate links on famous e-commerce websites to steal money from content creators and influencers. “The malware watches for visits to major e-commerce platforms. When you click an affiliate link on Taobao or JD.com, the extension intercepts it. The original affiliate, whoever was supposed to earn a commission from your purchase, gets nothing. The malware operators get paid instead,” said Koi researchers. 

After that, it deploys Google Analytics tracking into every page that people open, and removes security headers from HTTP responses. 

In the end, it escapes CAPTCHA via three different ways, and deploy invisible iframes that do ad frauds, click frauds, and tracking. These iframes disappear after 15 seconds.

Besides this, all extensions were deleted from the repositories, but users shoul also remove them personally. 

This staged execution flow demonstrates a clear evolution toward longer dormancy, modularity, and resilience against both static and behavioral detection mechanisms,” said LayerX. 

The PNG steganography technique is employed by some. Some people download JavaScript directly and include it into each page you visit. Others employ bespoke ciphers to encode the C&C domains and use concealed eval() calls. The same assailant. identical servers. many methods of delivery. This appears to be testing several strategies to see which one gets the most installs, avoids detection the longest, and makes the most money.

This campaign reflects a deliberate shift toward patience and precision. By embedding malicious code in images, delaying execution, and rotating delivery techniques across identical infrastructure, the attackers test which methods evade detection longest. The strategy favors longevity and profit over speed, exposing how browser ecosystems remain vulnerable to quietly persistent threats.

Federal Agencies Worldwide Hunt for Black Basta Ransomware Leader


International operation to catch Ransomware leader 

International law enforcement agencies have increased their search for individuals linked to the Black Basta ransomware campaign. Agencies confirmed that the suspected leader of the Russia-based Ransomware-as-a-service (RaaS) group has been put in the EU’s and Interpol’s Most Wanted list and Red Notice respectively. German and Ukrainian officials have found two more suspects working from Ukraine. 

As per the notice, German Federal Criminal Police (BKA) and Ukrainian National Police collaborated to find members of a global hacking group linked with Russia. 

About the operation 

The agencies found two Ukrainians who had specific roles in the criminal structure of Black Basta Ransomware. Officials named the gang’s alleged organizer as Oleg Evgenievich Nefedov from Russia. He is wanted internationally. German law enforcement agencies are after him because of “extortion in an especially serious case, formation and leadership of a criminal organization, and other criminal offenses.”

According to German prosecutors, Nefedov was the ringleader and primary decision-maker of the group that created and oversaw the Black Basta ransomware. under several aliases, such as tramp, tr, AA, Kurva, Washingt0n, and S.Jimmi. He is thought to have created and established the malware known as Black Basta. 

The Ukrainian National Police described how the German BKA collaborated with domestic cyber police officers and investigators from the Main Investigative Department, guided by the Office of the Prosecutor General's Cyber Department, to interfere with the group's operations.

The suspects

Two individuals operating in Ukraine were found to be carrying out technical tasks necessary for ransomware attacks as part of the international investigation. Investigators claim that these people were experts at creating ransomware campaigns and breaking into secured systems. They used specialized software to extract passwords from business computer systems, operating as so-called "hash crackers." 

Following the acquisition of employee credentials, the suspects allegedly increased their control over corporate environments, raised the privileges of hacked accounts, and gained unauthorized access to internal company networks.

Authorities claimed that after gaining access, malware intended to encrypt files was installed, sensitive data was stolen, and vital systems were compromised. The suspects' homes in the Ivano-Frankivsk and Lviv regions were searched with permission from the court. Digital storage devices and cryptocurrency assets were among the evidence of illicit activity that police confiscated during these operations.

Researchers Disclose Patched Flaw in Docker AI Assistant that Enabled Code Execution


Researchers have disclosed details of a previously fixed security flaw in Ask Gordon, an artificial intelligence assistant integrated into Docker Desktop and the Docker command-line interface, that could have been exploited to execute code and steal sensitive data. The vulnerability, dubbed DockerDash by cybersecurity firm Noma Labs, was patched by Docker in November 2025 with the release of version 4.50.0. 

“In DockerDash, a single malicious metadata label in a Docker image can be used to compromise your Docker environment through a simple three-stage attack,” said Sasi Levi, security research lead at Noma Labs, in a report shared with The Hacker News. “Every stage happens with zero validation, taking advantage of current agents and MCP Gateway architecture.” 

According to the researchers, the flaw allowed Ask Gordon to treat unverified container metadata as executable instructions. When combined with Docker’s Model Context Protocol gateway, this behavior could lead to remote code execution on cloud and command-line systems, or data exfiltration on desktop installations. 

The issue stems from what Noma described as a breakdown in contextual trust. Ask Gordon reads metadata from Docker images, including LABEL fields, without distinguishing between descriptive information and embedded instructions. These instructions can then be forwarded to the MCP Gateway, which executes them using trusted tools without additional checks. “MCP Gateway cannot distinguish between informational metadata and a pre-authorized, runnable internal instruction,” Levi said. 

“By embedding malicious instructions in these metadata fields, an attacker can hijack the AI’s reasoning process.” In a hypothetical attack, a malicious actor could publish a Docker image containing weaponized metadata labels. When a user queries Ask Gordon about the image, the assistant parses the labels, forwards them to the MCP Gateway, and triggers tool execution with the user’s Docker privileges.  
Researchers said the same weakness could be used for data exfiltration on Docker Desktop, allowing attackers to gather details about installed tools, container configurations, mounted directories, and network setups, despite the assistant’s read-only permissions. Docker version 4.50.0 also addressed a separate prompt injection flaw previously identified by Pillar Security, which could have enabled attackers to manipulate Docker Hub metadata to extract sensitive information. 

“The DockerDash vulnerability underscores the need to treat AI supply chain risk as a current core threat,” Levi said. “Trusted input sources can be used to hide malicious payloads that manipulate an AI’s execution path.”

Apple's New Feature Will Help Users Restrict Location Data


Apple has introduced a new privacy feature that allows users to restrict the accuracy of location data shared with cellular networks on a few iPad models and iPhone. 

About the feature

The “Limit Precise Location” feature will start after updating to iOS26.3 or later. It restricts the information that mobile carriers use to decide locations through cell tower connections. Once turned on, cellular networks can only detect the device’s location, like neighbourhood instead of accurate street address. 

According to Apple, “The precise location setting doesn't impact the precision of the location data that is shared with emergency responders during an emergency call.” “This setting affects only the location data available to cellular networks. It doesn't impact the location data that you share with apps through Location Services. For example, it has no impact on sharing your location with friends and family with Find My.”

Users can turn on the feature by opening “Settings,” selecting “Cellular,” “Cellular Data Options,” and clicking the “Limit Precise Location” setting. After turning on limited precise location, the device may trigger a device restart to complete activation. 

The privacy enhancement feature works only on iPhone Air, iPad Pro (M5) Wi-Fi + Cellular variants running on iOS 26.3 or later. 

Where will it work?

The availability of this feature will depend on carrier support. The mobile networks compatible are:

EE and BT in the UK

Boost Mobile in the UK

Telecom in Germany 

AIS and True in Thailand 

Apple hasn't shared the reason for introducing this feature yet.

Compatibility of networks with the new feature 

Apple's new privacy feature, which is currently only supported by a small number of networks, is a significant step towards ensuring that carriers can only collect limited data on their customers' movements and habits because cellular networks can easily track device locations via tower connections for network operations.

“Cellular networks can determine your location based on which cell towers your device connects to. The limit precise location setting enhances your location privacy by reducing the precision of location data available to cellular networks,”

Exposed Admin Dashboard in AI Toy Put Children’s Data and Conversations at Risk

 

A routine investigation by a security researcher into an AI-powered toy revealed a serious security lapse that could have exposed sensitive information belonging to children and their families.

The issue came to light when security researcher Joseph Thacker examined an AI toy owned by a neighbor. In a blog post, Thacker described how he and fellow researcher Joel Margolis uncovered an unsecured admin interface linked to the Bondu AI toy.

Margolis identified a suspicious domain—console.bondu.com—referenced in the Content Security Policy headers of the toy’s mobile app backend. On visiting the domain, he found a simple option labeled “Login with Google.”

“By itself, there’s nothing weird about that as it was probably just a parent portal,” Thacker wrote. Instead, logging in granted access to Bondu’s core administrative dashboard.

“We had just logged into their admin dashboard despite [not] having any special accounts or affiliations with Bondu themselves,” Thacker said.

AI Toy Admin Panel Exposed Children’s Conversations

Further analysis of the dashboard showed that the researchers had unrestricted visibility into “Every conversation transcript that any child has had with the toy,” spanning “tens of thousands of sessions.” The exposed panel also included extensive personal details about children and their households, such as:
  • Child’s name and date of birth
  • Names of family members
  • Preferences, likes, and dislikes
  • Parent-defined developmental objectives
  • The custom name assigned to the toy
  • Historical conversations used to provide context to the language model
  • Device-level data including IP-based location, battery status, and activity state
  • Controls to reboot devices and push firmware updates
The researchers also observed that the system relies on OpenAI GPT-5 and Google Gemini. “Somehow, someway, the toy gets fed a prompt from the backend that contains the child profile information and previous conversations as context,” Thacker wrote. “As far as we can tell, the data that is being collected is actually disclosed within their privacy policy, but I doubt most people realize this unless they go and read it (which most people don’t do nowadays).”

Beyond the authentication flaw, the team identified an Insecure Direct Object Reference (IDOR) vulnerability in the API. This weakness “allowed us to retrieve any child’s profile data by simply guessing their ID.”

“This was all available to anyone with a Google account,” Thacker said. “Naturally we didn’t access nor store any data beyond what was required to validate the vulnerability in order to responsibly disclose it.”

Bondu Responds Within Minutes

Margolis contacted Bondu’s CEO via LinkedIn over the weekend, prompting the company to disable access to the exposed console “within 10 minutes.”

“Overall we were happy to see how the Bondu team reacted to this report; they took the issue seriously, addressed our findings promptly, and had a good collaborative response with us as security researchers,” Thacker said.

Bondu also initiated a broader security review, searched for additional vulnerabilities, and launched a bug bounty program. After reviewing console access logs, the company stated that no unauthorized parties had accessed the system aside from the researchers, preventing what could have become a data breach.

Despite the swift and responsible response, the incident changed Thacker’s perspective on AI-driven toys.

“To be honest, Bondu was totally something I would have been prone to buy for my kids before this finding,” he wrote. “However this vulnerability shifted my stance on smart toys, and even smart devices in general.”

“AI models are effectively a curated, bottled-up access to all the information on the internet,” he added. “And the internet can be a scary place. I’m not sure handing that type of access to our kids is a good idea.”

He further noted that, beyond data security concerns, AI introduces new risks at home. “AI makes this problem even more interesting because the designer (or just the AI model itself) can have actual ‘control’ of something in your house. And I think that is even more terrifying than anything else that has existed yet,” he said.

Bondu’s website maintains that the toy was designed with safety as a priority, stating that its “safety and behavior systems were built over 18 months of beta testing with thousands of families. Thanks to rigorous review processes and continuous monitoring, we did not receive a single report of unsafe or inappropriate behavior from bondu throughout the entire beta period.”

Google’s Project Genie Signals a Major Shift for the Gaming Industry

 

Google has sent a strong signal to the video game sector with the launch of Project Genie, an experimental AI world-model that can create explorable 3D environments using simple text or image prompts.

Although Google’s Genie AI has been known since 2024, its integration into Project Genie marks a significant step forward. The prototype is now accessible to Google AI Ultra subscribers in the US and represents one of Google’s most ambitious AI experiments to date.

Project Genie is being introduced through Google Labs, allowing users to generate short, interactive environments that can be explored in real time. Built on DeepMind’s Genie 3 world-model research, the system lets users move through AI-generated spaces, tweak prompts, and instantly regenerate variations. However, it is not positioned as a full-scale game engine or production-ready development tool.

Demonstrations on the Project Genie website showcase a variety of scenarios, including a cat roaming a living room from atop a Roomba, a vehicle traversing the surface of a rocky moon, and a wingsuit flyer gliding down a mountain. These environments remain navigable in real time, and while the worlds are generated dynamically as characters move, consistency is maintained. Revisiting areas does not create new terrain, and any changes made by an agent persist as long as the system retains sufficient memory.

"Genie 3 environments are … 'auto-regressive' – created frame by frame based on the world description and user actions," Google explains on Genie's website. "The environments remain largely consistent for several minutes, with memory recalling changes from specific interactions for up to a minute."

Despite these capabilities, time constraints remain a challenge.

"The model can support a few minutes of continuous interaction, rather than extended hours," Google said, adding elsewhere that content generation is currently capped at 60 seconds. A Google spokesperson told The Register that Genie can render environments beyond that limit, but the company "found 60 seconds provides a high quality and consistent world, and it gives people enough time to explore and experience the environment."

Google stated that world consistency lasts throughout an entire session, though it remains unclear whether session durations will be expanded in the future. Beyond time limits, the system has other restrictions.

Agents in Genie’s environments are currently limited in the actions they can perform, and interactions between multiple agents are unreliable. The model struggles with readable text, lacks accurate real-world simulation, and can suffer from lag or delayed responses. Google also acknowledged that some previously announced features are missing.

In addition, "A few of the Genie 3 model capabilities we announced in August, such as promptable events that change the world as you explore it, are not yet included in this prototype," Google added.

"A world model simulates the dynamics of an environment, predicting how they evolve and how actions affect them," the company said of Genie. "While Google DeepMind has a history of agents for specific environments like Chess or Go, building AGI requires systems that navigate the diversity of the real world."

Game Developers Face an Uncertain Future

Beyond AGI research, Google also sees potential applications for Genie within the gaming industry—an area already under strain. While Google emphasized that Genie "is not a game engine and can’t create a full game experience," a spokesperson told The Register, "we are excited to see the potential to augment the creative process, enhancing ideation, and speeding up prototyping."

Industry data suggests this innovation arrives at a difficult time. A recent Informa Game Developers Conference report found that 33 percent of US game developers and 28 percent globally experienced at least one layoff over the past two years. Half of respondents said their employer had conducted layoffs within the last year.

Concerns about AI’s role are growing. According to the same survey, 52 percent of industry professionals believe AI is negatively affecting the games sector—up sharply from 30 percent last year and 18 percent the year before. The most critical views came from professionals working in visual and technical art, narrative design, programming, and game design.

One machine learning operations employee summed up those fears bluntly.

"We are intentionally working on a platform that will put all game devs out of work and allow kids to prompt and direct their own content," the GDC study quotes the respondent as saying.

While Project Genie still has clear technical limitations, the rapid pace of AI development suggests those gaps may not last long—raising difficult questions about the future of game development.

Google Introduces AI-Powered Side Panel in Chrome to Automate Browsing




Google has updated its Chrome browser by adding a built-in artificial intelligence panel powered by its Gemini model, marking a stride toward automated web interaction. The change reflects the company’s broader push to integrate AI directly into everyday browsing activities.

Chrome, which currently holds more than 70 percent of the global browser market, is now moving in the same direction as other browsers that have already experimented with AI-driven navigation. The idea behind this shift is to allow users to rely on AI systems to explore websites, gather information, and perform online actions with minimal manual input.

The Gemini feature appears as a sidebar within Chrome, reducing the visible area of websites to make room for an interactive chat interface. Through this panel, users can communicate with the AI while keeping their main work open in a separate tab, allowing multitasking without constant tab switching.

Google explains that this setup can help users organize information more effectively. For example, Gemini can compare details across multiple open tabs or summarize reviews from different websites, helping users make decisions more quickly.

For subscribers to Google’s higher-tier AI plans, Chrome now offers an automated browsing capability. This allows Gemini to act as a software agent that can follow instructions involving multiple steps. In demonstrations shared by Google, the AI can analyze images on a webpage, visit external shopping platforms, identify related products, and add items to a cart while staying within a user-defined budget. The final purchase, however, still requires user approval.

The browser update also includes image-focused AI tools that allow users to create or edit images directly within Chrome, further expanding the browser’s role beyond simple web access.

Chrome’s integration with other applications has also been expanded. With user consent, Gemini can now interact with productivity tools, communication apps, media services, navigation platforms, and shopping-related Google services. This gives the AI broader context when assisting with tasks.

Google has indicated that future updates will allow Gemini to remember previous interactions across websites and apps, provided users choose to enable this feature. The goal is to make AI assistance more personalized over time.

Despite these developments, automated browsing faces resistance from some websites. Certain platforms have already taken legal or contractual steps to limit AI-driven activity, particularly for shopping and transactions. This underlines the ongoing tension between automation and website control.

To address these concerns, Google says Chrome will request human confirmation before completing sensitive actions such as purchases or social media posts. The browser will also support an open standard designed to allow AI-driven commerce in collaboration with participating retailers.

Currently, these features are available on Chrome for desktop systems in the United States, with automated browsing restricted to paid subscribers. How widely such AI-assisted browsing will be accepted across the web remains uncertain.