Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Google+. Show all posts

New Malware “Storm” Steals Browser Data and Hijacks Sessions Without Passwords

 



A newly identified infostealer called Storm has emerged on underground cybercrime forums in early 2026, signalling a change in how attackers steal and use credentials. Priced at under $1,000 per month, the malware collects browser-stored data such as login credentials, session cookies, and cryptocurrency wallet information, then covertly transfers the data to attacker-controlled servers where it is decrypted outside the victim’s system.

This change becomes clearer when compared to earlier techniques. Traditionally, infostealers decrypted browser credentials directly on infected machines by loading SQLite libraries and accessing local credential databases. Because of this, endpoint security tools learned to treat such database access as one of the strongest indicators of malicious activity.

The approach began to break down after Google Chrome introduced App-Bound Encryption in version 127 in July 2024. This mechanism tied encryption keys to the browser environment itself, making local decryption exponentially more difficult. Initial bypass attempts relied on injecting into browser processes or exploiting debugging protocols, but these techniques still generated detectable traces.

Storm avoids this entirely by skipping local decryption. Instead, it extracts encrypted browser files and quietly sends them to attacker infrastructure, removing the behavioural signals that endpoint tools typically rely on. It extends this model by supporting both Chromium-based browsers and Gecko-based browsers such as Firefox, Waterfox, and Pale Moon, whereas tools like StealC V2 still handle Firefox data locally.

The data collected includes saved passwords, session cookies, autofill entries, Google account tokens, payment card details, and browsing history. This combination gives attackers everything required to rebuild authenticated sessions remotely. In practice, a single compromised employee browser can provide direct access to SaaS platforms, internal systems, and cloud environments without triggering any password-based alerts.

Storm also automates session hijacking. Once decrypted, credentials and cookies appear in the attacker’s control panel. By supplying a valid Google refresh token along with a geographically matched SOCKS5 proxy, the platform can silently recreate the victim’s active session.

This technique aligns with earlier research by Varonis Threat Labs. Its Cookie-Bite study showed that stolen Azure Entra ID session cookies can bypass multi-factor authentication, granting persistent access to Microsoft 365. Similarly, its SessionShark analysis demonstrated how phishing kits intercept session tokens in real time to defeat MFA protections. Storm packages these methods into a commercial subscription service.

Beyond credentials, the malware collects files from user directories, extracts session data from applications like Telegram, Signal, and Discord, and targets cryptocurrency wallets through browser extensions and desktop applications. It also gathers system information and captures screenshots across multiple monitors. Most operations run in memory, reducing the likelihood of detection.

Its infrastructure design adds resilience. Operators connect their own virtual private servers to Storm’s central system, routing stolen data through infrastructure they control. This setup limits the impact of takedowns, as enforcement actions are more likely to affect individual operator nodes rather than the core service.

Storm supports multi-user operations, allowing teams to divide responsibilities such as log access, malware build generation, and session restoration. It also automatically categorises stolen credentials by service, with visible rules for platforms including Google, Facebook, Twitter/X, and cPanel, helping attackers prioritise targets.

At the time of analysis, the control panel displayed 1,715 log entries linked to locations including India, the United States, Brazil, Indonesia, Ecuador, and Vietnam. While it is unclear whether all entries represent real victims or test data, variations in IP addresses, internet service providers, and data volumes suggest ongoing campaigns.

The logs include credentials associated with platforms such as Google, Facebook, Twitter/X, Coinbase, Binance, Blockchain.com, and Crypto.com. Such information often feeds into underground credential marketplaces, enabling account takeovers, fraud, and more targeted intrusions.

Storm is offered through a tiered pricing model: $300 for a seven-day trial, $900 per month for standard access, and $1,800 per month for a team licence supporting up to 100 operators and 200 builds. Use of an additional crypter is required. Notably, once deployed, malware builds continue operating even after a subscription expires, allowing ongoing data collection.

Security researchers view Storm as part of a broader evolution in credential theft. By shifting decryption to remote servers, attackers avoid detection mechanisms designed to identify on-device activity. At the same time, session cookie theft is increasingly replacing password theft as the primary objective.

The data collected by such tools often marks the beginning of further attacks, including logins from unusual locations, lateral movement within networks, and unauthorised access patterns.


Indicators of compromise include:

Alias: StormStealer

Forum ID: 221756

Registration date: December 12, 2025

Current version: v0.0.2.0 (Gunnar)

Build details: Developed in C++ (MSVC/msbuild), approximately 460 KB in size, targeting Windows systems


This advent of Storm underlines how cybercriminal tools are becoming more advanced, automated, and difficult to detect, requiring organisations to strengthen monitoring of sessions, user behaviour, and access patterns rather than relying solely on traditional credential protection methods.


Google Expands Gemini in Gmail, Forcing Billions to Reconsider Privacy, Control, and AI Dependence

 




Google has introduced one of the most extensive updates to Gmail in its history, warning that the scale of change driven by artificial intelligence may feel overwhelming for users. While some discussions have focused on surface-level changes such as switching email addresses, the company has emphasized that the real transformation lies in how AI is now embedded into everyday tools used by nearly two billion people. This shift requires far more serious attention.

At the center of this evolution is Gemini, Google’s artificial intelligence system, which is being integrated more deeply into Gmail and other core services. In a recent update shared through a short video message, Gmail’s product leadership acknowledged that the rapid pace of AI innovation can leave users feeling overloaded, with too many new features and decisions emerging at once.

Gmail has traditionally been built around convenience, scale, and seamless integration rather than strict privacy-first principles. Although its spam filters and malware detection systems are widely used and generally effective, they are not flawless. Importantly, Gmail has not typically been the platform users turn to for strong privacy assurances.

The introduction of Gemini changes this bbalance substantially. Google has clarified that it does not use email content to train its AI models. However, the way these tools function introduces new concerns. Features that automatically draft emails, summarize conversations, or search inbox content require access to emails that may contain highly sensitive personal or professional information.

To address this, Google describes Gemini as a temporary assistant that operates within a limited session. The company compares this interaction to allowing a helper into a private room containing your inbox. The assistant completes its task and then exits, with the accessed information disappearing afterward. According to Google, Gemini does not retain or learn from the data it processes during these interactions.

Despite these assurances, concerns remain. Even if the data is not stored long term, granting a cloud-based AI system access to private communications introduces an inherent level of risk. Additionally, while Google has denied automatically enrolling users into AI training programs, many of these AI-powered features are expected to be enabled by default. This shifts responsibility to users, who must actively decide how much access they are willing to allow.

This is not a decision that can be ignored. Once AI tools become integrated into daily workflows, they are difficult to remove. Relying on default settings or delaying action could result in long-term dependence on systems that users may not fully understand or control.

Shortly after promoting these updates, Gmail experienced a disruption that affected its core functionality. Users reported delays in sending and receiving emails, and Google acknowledged the issue while working on a fix. Initially, no estimated resolution time was provided. Later the same day, the company confirmed that the issue had been resolved.

According to Google’s official status update, the disruption was fixed on April 8, 2026, at 14:49 PDT. The cause was identified as a “noisy neighbor,” a term used in cloud computing to describe a situation where one service consumes excessive shared resources, negatively impacting the performance of others operating on the same infrastructure.

With a user base of approximately two billion, even a short-lived outage becomes of grave concern. More importantly, it emphasises the scale at which Gmail operates and reinforces why decisions around AI integration are critical for users worldwide.

The central issue now facing users is the balance between convenience and security. Google presents Gemini as a helpful and well-behaved assistant that enhances productivity without overstepping boundaries. However, like any guest given access to a private space, it requires clear rules and careful oversight.

This tension becomes even more visible when considering Google’s parallel efforts to strengthen security. The company recently expanded client-side encryption for Gmail on mobile devices. While this may sound similar to end-to-end encryption used in messaging apps, it is not the same. This form of encryption operates at an organizational level, primarily for enterprise users, and does not provide the same device-specific privacy protections commonly associated with true end-to-end encryption.

More critically, enabling this additional layer of encryption dynamically limits Gmail’s functionality. When it is turned on, several features become unavailable. Users can no longer use confidential mode, access delegated accounts, apply advanced email layouts, or send bulk emails using multi-send options. Features such as suggested meeting times, pop-out or full-screen compose windows, and sending emails to group recipients are also disabled.

In addition, personalization and usability tools are affected. Email signatures, emojis, and printing functions stop working. AI-powered tools, including Google’s intelligent writing and assistance features, are also unavailable. Other smart Gmail features are disabled, and certain mobile capabilities, such as screen recording and taking screenshots on Android devices, are restricted.

These limitations exist because encrypted data cannot be accessed by AI systems. As a result, users are forced to choose between stronger data protection and access to advanced features. The same mechanisms that secure information also prevent AI tools from functioning effectively.

This reflects a bigger challenge across the technology industry. Privacy and security measures often limit the capabilities of AI systems, which depend on access to data to operate. In Gmail’s case, these two priorities do not align easily and, in many ways, directly conflict.

From a wider perspective, this also highlights a fundamental limitation of email itself. The technology was developed in an earlier era and was not designed to handle modern cybersecurity threats. Its underlying structure lacks the robust protections found in newer communication platforms.

As artificial intelligence becomes more deeply integrated into everyday tools, users are being asked to make more informed and deliberate decisions about how their data is used. While Google presents Gemini as a controlled and temporary assistant, the responsibility ultimately lies with users to determine their comfort level.

For highly sensitive communication, relying solely on email may no longer be the safest option. Exploring alternative platforms with stronger built-in security may be necessary. Ultimately, this moment represents a critical choice: whether the convenience offered by AI is worth the level of access it requires.

Google's Eloquent: Offline AI Dictation Hits iOS, Android Launch Imminent


Google’s quiet release of AI Edge Eloquent marks a notable shift in how it wants people to use AI on phones: not as a cloud-first assistant, but as a fast, private, on-device dictation tool. Based on the reporting around the launch, the app is designed to transcribe speech locally on iOS, keep working without an internet connection, and clean up spoken language into polished text. 

Google’s move matters because it lands in a market already shaped by focused dictation apps like Wispr Flow, SuperWhisper, and Willow. Those products have helped make AI transcription feel less like a novelty and more like a practical writing tool, so Google is entering a space where users already expect speed, accuracy, and convenience. By shipping a product that works offline, Google is also signaling that on-device AI is becoming good enough for everyday productivity rather than just demo material. 

The app’s core appeal is that it does more than convert audio into text. It reportedly removes filler words such as “um” and “uh,” fixes mid-sentence stumbles, and can rewrite output into formats like “Key points,” “Formal,” “Short,” and “Long.” That means Eloquent is aimed not just at transcription, but at people who want speech turned into something usable immediately, whether for emails, notes, drafts, or quick summaries.

A second major point is privacy and reliability. Because the app runs locally after the model download, users can dictate even when they are offline, which is useful on flights, in weak signal areas, or in workplaces where connectivity is inconsistent. Local processing also reduces the amount of audio that needs to leave the device, which may appeal to users who are cautious about cloud-based voice tools.

There is also a broader strategic angle here. Google appears to be using Eloquent to show that its Gemma-based models can power practical consumer AI on a phone, not just in the cloud. The app’s reported free availability makes the competitive pressure even stronger, because it lowers the barrier for users to try Google’s approach and compare it directly with paid or subscription-based rivals. 

The deeper issue is that this launch reflects a wider race in AI: whoever makes on-device models feel seamless may control the next wave of personal productivity software. If Google can keep improving transcription quality, formatting, and cross-platform access, Eloquent could become more than a niche dictation tool and turn into a template for how lightweight AI assistants should work on mobile.

Google Promotes ChromeOS Flex as Free Upgrade Option for Millions of Unsupported Windows 10 PCs

 





More than 500 million devices currently running Windows 10 are approaching a critical turning point, as many of them are not eligible for an upgrade to Windows 11 due to hardware limitations. This has raised growing concerns about long-term security risks once support deadlines pass. In response, Google is actively promoting an alternative, positioning its ChromeOS Flex platform as a free way to modernize aging systems.

Google states that older laptops and desktops can be converted into faster, more secure, and easier-to-manage devices by installing ChromeOS Flex. The system is cloud-based and designed to extend the usability of existing hardware without requiring users to purchase new machines. Although ChromeOS Flex has been available for some time, Google has now made adoption simpler by introducing a physical USB installation kit. Developed in partnership with Back Market, the kit allows users to install the operating system more easily. It is priced at approximately $3 or €3, is reusable, and is supported by recycling-focused efforts such as Closing the Loop to reduce electronic waste.

The timing of this push is closely linked to Microsoft’s decision to end mainstream support for Windows 10 in October 2025. That shift has forced users into a difficult position: invest in new hardware or continue using an operating system that will no longer receive full security updates. While Microsoft does offer an Extended Security Updates (ESU) program, it is only a temporary solution. For individual users, coverage extends for roughly one additional year, while enterprise customers may receive longer support under specific licensing agreements.

The transition to Windows 11 has also been slower than expected. Adoption challenges, largely driven by strict hardware requirements, have resulted in an unusually large number of users remaining on Windows 10 even after its official lifecycle milestone. This contrasts with Microsoft’s earlier expectations of a smoother migration similar to the shift from Windows 7 to Windows 10, which had seen broader and faster adoption.

Google is also emphasizing environmental considerations as part of its messaging. The company highlights that manufacturing a new laptop contributes significantly to its overall carbon footprint. By extending the lifespan of existing devices, ChromeOS Flex helps reduce landfill waste and avoids emissions associated with producing new hardware. Google further claims that ChromeOS-based systems consume around 19% less energy on average compared to similar platforms.

Despite this, switching away from Windows remains a debated decision. Many users rely on the Windows ecosystem for software compatibility, workflows, and familiarity. However, for devices that cannot support Windows 11, alternatives such as ChromeOS Flex present a practical workaround. Even in cases where users purchase new computers, older machines can still be repurposed using such operating systems, for example within households.

At the same time, Microsoft is continuing to strengthen its Windows 11 ecosystem. Devices already running Windows 11 are being automatically updated to newer versions to maintain consistent security coverage. The company is using artificial intelligence to determine when systems are ready for upgrades and applying updates accordingly. While a similar approach could theoretically be applied to Windows 10 devices that meet upgrade requirements, this has not yet been implemented. It remains uncertain whether this could change as future deadlines approach.

Recent developments have also drawn attention to user hesitation around Windows 11. Reports indicated that a recent update disrupted a key Start menu function, even as official communication suggested there were no outstanding issues. Subsequent updates and documentation now indicate that previously known bugs have been resolved, with Microsoft steadily addressing issues since the platform’s release in late 2024.

Additional reporting suggests that all known issues in the current Windows 11 version have been marked as resolved in official tracking systems. This reflects ongoing improvements, though it also underlines the complexity of maintaining stability across large-scale operating system deployments.

For enterprise users, Microsoft is extending support in more flexible ways. Certain legacy versions of Windows 10, including enterprise and IoT editions released in 2016, are eligible for additional security updates. These updates are delivered through ESU programs available via volume licensing or cloud solution providers. However, Microsoft continues to describe this as a temporary measure rather than a permanent extension.

For individual users, the situation is more restrictive. Extended Security Updates are limited in duration, and once they expire, devices will no longer receive security patches, bug fixes, or technical support. However, the continued availability of such programs suggests that support timelines may evolve depending on broader user adoption patterns.

The wider ecosystem is also seeing alternative recommendations. Some industry discussions encourage migration to Linux-based systems, while Google’s ChromeOS Flex represents a more consumer-friendly option. With hundreds of millions of devices affected, the coming months will play a crucial role in determining whether users remain within the Windows ecosystem or begin shifting toward alternative platforms.


Gmail Address Change Feature Fails to Address Core Security Risks, Report Warns

 

A recent update by Google allowing users to change their Gmail address has drawn attention, but cybersecurity experts say it does little to solve deeper issues tied to email privacy and security. 

The feature, which has gained visibility following its rollout in the United States, lets users modify their primary Gmail address while keeping the old one active as an alias. 

The change has been framed as a way to move beyond outdated or inappropriate usernames created years ago. Google CEO Sundar Pichai highlighted the shift in a public post, noting that users no longer need to be tied to early-era email identities. 

However, experts say the update does not address the main problem facing email users today, widespread exposure of email addresses to marketers, data brokers and cybercriminals. 

Once an email address is used online, it is likely to be stored across multiple databases, making it a long-term target for spam and phishing attempts. Changing the visible username does not remove that exposure, especially since older addresses continue to function. 

Jake Moore, a cybersecurity specialist at ESET, said the ability to edit email addresses reflects a broader shift in how digital identity works, but warned it could introduce new risks. “Old addresses will still work as aliases,” he said, adding that this could increase the risk of impersonation and phishing attacks. 

Security researchers also point to the absence of a built-in privacy feature similar to Apple’s “Hide My Email,” which allows users to generate disposable email addresses for sign-ups and online transactions. These temporary addresses can be disabled at any time, limiting long-term exposure. 

Without a comparable system, Gmail users who change their address may still need to share their primary email widely, continuing the cycle of data exposure. 

The update may also create new vulnerabilities in the short term. Cybersecurity reports indicate that attackers are already using the feature as a lure in phishing campaigns, sending emails that direct users to fake login pages designed to steal account credentials. 

There are also early signs of increased spam activity. Online forums have reported a rise in unwanted emails, with some researchers suggesting the address change feature could allow attackers to bypass existing spam filters and start fresh. 

According to security researchers cited by industry outlets, many email filtering systems rely heavily on known sender addresses. 

If attackers rotate or modify those addresses, they may temporarily evade detection until new filters are applied. At the same time, changing a Gmail address does not stop unwanted messages from reaching the original account, since it remains active in the background. 

Experts say the update highlights a broader issue in email security. While giving users more flexibility over their identity, it does not reduce reliance on a single, permanent address that is repeatedly shared across services. 

They suggest that more effective solutions would include tools that limit how widely a primary email address is distributed, along with stronger controls over incoming messages. 

For now, users are being advised to treat emails related to the new feature with caution, particularly those that include links to account settings, as these may be part of phishing attempts.

Google DeepMind Maps How the Internet Could be Used to Manipulate AI Agents

Researchers at Google DeepMind have outlined a growing but less visible risk in artificial intelligence deployment, the possibility that the internet itself can be used to manipulate autonomous AI agents. In a recent paper titled “AI Agent Traps,” the researchers describe how online content can be deliberately designed to mislead, control or exploit AI systems as they browse websites, read information and take actions. The study focuses not on flaws inside the models, but on the environments these agents operate in.  

The issue is becoming more urgent as companies move toward deploying AI agents that can independently handle tasks such as booking travel, managing emails, executing transactions and writing code. At the same time, malicious actors are increasingly experimenting with AI for cyberattacks. OpenAI has also acknowledged that one of the key weaknesses involved, prompt injection, may never be fully eliminated. 

The paper groups these risks into six broad categories. One category involves hidden instructions embedded in web pages. These can be placed in parts of a page that humans do not see, such as HTML comments, invisible elements or metadata. While a user sees normal content, an AI agent may read and follow these concealed commands. In more advanced cases, websites can detect when an AI agent is visiting and deliver a different version of the page tailored to influence its behavior. 

Another category focuses on how language shapes an agent’s interpretation. Pages filled with persuasive or authoritative sounding phrases can subtly steer an agent’s conclusions. In some cases, harmful instructions are disguised as educational or hypothetical content, which can bypass a model’s safety checks. The researchers also describe a feedback loop where descriptions of an AI’s personality circulate online, are later absorbed by models and begin to influence how those systems behave. 

A third type of risk targets an agent’s memory. If false or manipulated information is inserted into the data sources an agent relies on, the system may treat that information as fact. Even a small number of carefully placed documents can affect how the agent responds to specific topics. Other attacks focus directly on controlling an agent’s actions. Malicious instructions embedded in ordinary web pages can override safety safeguards once processed by the agent. 

In some experiments, attackers were able to trick agents into retrieving sensitive data, such as local files or passwords, and sending it to external destinations at high success rates. The researchers also highlight risks that emerge at scale. Instead of targeting a single system, some attacks aim to influence many agents at once. They draw comparisons to the Flash Crash, where automated trading systems amplified a single event into a large market disruption. 

A similar dynamic could occur if multiple AI agents respond simultaneously to false or manipulated information. Another category involves the human users overseeing these systems. Outputs can be designed to appear credible and technical, increasing the likelihood that a person approves an action without fully understanding the risks. 

In one example, harmful instructions were presented as legitimate troubleshooting steps, making them easier to accept. To address these risks, the researchers outline several areas for improvement. On the technical side, they suggest training models to better recognize adversarial inputs, as well as deploying systems that monitor both incoming data and outgoing actions. 

At a broader level, they propose standards that allow websites to signal which content is intended for AI systems, along with reputation mechanisms to assess the trustworthiness of sources. The paper also points to unresolved legal questions. If an AI agent carries out a harmful action after being manipulated, it is unclear who should be held responsible. 

The researchers describe this as an “accountability gap” that will need to be addressed before such systems can be widely deployed in regulated sectors. The study does not present a complete solution. Instead, it argues that the industry lacks a clear, shared understanding of the problem. Without that, the researchers suggest, efforts to secure AI systems may continue to focus on the wrong areas.

How Duck.ai Offer Better Privacy Compared to Commercial Chatbots


Better privacy with DuckDuckGo's AI bot

Privacy issues have always bothered users and business organizations. With the rapid adoption of AI, the threats are also rising. DuckDuckGo’s Duck.ai chatbot benefits from this.

The latest report from Similarweb revealed that traffic to Duck.ai increased rapidly last month. The traffic recorded 11.1 million visits in February 2026, 300% more than January. 

Duck.ai's sudden traffic jump

The statistics seem small when compared with the most popular chatbots such as ChatGPT, Claude, or Gemini. 

Similarweb estimates that ChatGPT recorded 5.4 billion visits in February 2026, and Google’s Gemini recorded 2.1 billion, whereas Claude recorded 290.3 million. 

For DuckDuckGo, the numbers show a good sign, as the bot was launched as beta in 2025, and has shown a sharp rise in visits. 

DuckDuckGo browser is known for its privacy, and the company aims to apply the same principle to its AI bot. Duck.ai doesn't run a bespoke LLM, it uses frontier models from Meta, Anthropic, and OpenAI, but it doesn't expose your IP address and personal data. 

Duck.ai's privacy policy reads, "In addition, we have agreements in place with all model providers that further limit how they can use data from these anonymous requests, including not using Prompts and Outputs to develop or improve their models, as well as deleting all information received once it is no longer necessary to provide Outputs (at most within 30 days, with limited exceptions for safety and legal compliance),”

Duck.ai is famous now

What is the reason for this sudden surge? The bot has two advantages over individual commercial bots like ChatGPT and Gemini, it offers an option to toggle between multiple models and better privacy security. The privacy aspect sets it apart. Users on Reddit have praised Duck.ai, one person noting "it's way better than Google's," which means Gemini. 

Privacy concerns in AI bots

In March, Anthropic rejected a few applications of its technology for mass surveillance and weapons submitted by the Department of Defense. The DoD retaliated by breaking the contract. Soon after, OpenAI stepped in. 

The incident stirred controversies around privacy concerns and ethical AI use. This explains why users may prefer chatbots like Duck.ai that safeguard user data from both the government and the big tech. 

Google Rolls Out Android Developer Verification to Curb Anonymous App Distribution

 



Google has formally begun rolling out a comprehensive verification framework for Android developers, a move aimed at tackling the persistent problem of malicious applications being distributed by actors who operate without revealing their identity. The company’s decision reflects growing concerns within the mobile ecosystem, where anonymity has often enabled bad actors to bypass accountability and circulate harmful software at scale.

This rollout comes in advance of a stricter compliance requirement that will first take effect in September across key markets including Brazil, Indonesia, Singapore, and Thailand. These regions are being used as initial enforcement zones before the policy is gradually expanded worldwide next year, signaling Google’s intent to standardize developer accountability across its global Android ecosystem.

Under the new system, developers who distribute Android applications outside of the official Google Play marketplace will now be required to register through the Android Developer Console and verify their identity credentials. This requirement is particularly substantial for developers who rely on alternative distribution methods such as direct APK sharing, enterprise deployment, or third-party app stores, as it introduces a layer of traceability that previously did not exist.

At the same time, Google clarified that developers already publishing applications through Google Play and who have completed existing identity verification processes may not need to take further action. In such cases, their applications are likely to already comply with the updated requirements, reducing friction for those operating within the official ecosystem.

Explaining how this change will affect end users, Matthew Forsythe, Director of Product Management for Android App Safety, emphasized that the vast majority of users will not notice any difference in their day-to-day app installation experience. Standard app downloads from trusted sources will continue to function as usual, ensuring that usability is not compromised for the general public.

However, the experience changes when a user attempts to install an application that has not been registered under the new verification system. In such cases, users will be required to proceed through more advanced installation pathways, such as Android Debug Bridge or similar technical workflows. These methods are typically used by developers and experienced users, which effectively limits exposure for less technical individuals.

This design introduces a deliberate separation between general users and advanced users. While everyday users are shielded from potentially unsafe applications, power users retain the flexibility to install software manually, albeit with additional steps that reinforce intentional decision-making.

To further support developers, Google is integrating visibility into its core development tools. Within the next two months, developers using Android Studio will be able to directly view whether their applications are registered under the new system at the time of generating signed App Bundles or APK files. This integration ensures that compliance status becomes part of the development workflow rather than a separate administrative task.

For developers who have already completed identity verification through the Play Console, Google will automatically register eligible applications under the new framework. This automation reduces operational overhead and ensures a smoother transition. However, in cases where applications cannot be automatically registered, developers will be required to complete a manual claim process to verify ownership and bring those apps into compliance.

In earlier guidance, Google also outlined how sideloading, the practice of installing apps from outside official stores, will function under this system. Advanced users will still be able to install unregistered APK files, but only after completing a multi-step verification process designed to confirm their intent.

This process includes an authentication step to verify the user’s decision, followed by a one-time waiting period of up to 24 hours. The delay is not arbitrary. It is specifically designed to disrupt scam scenarios in which attackers pressure users into quickly installing malicious applications before they have time to reconsider.

Forsythe explained that although this process is required only once for experienced users, it has been carefully structured to counter high-pressure social engineering tactics. By introducing friction into the installation process, the system aims to reduce the success rate of scams that rely on urgency and manipulation.

This development is part of a wider industry tendency toward tightening control over app ecosystems and improving user data protection. In a parallel move, Apple has recently updated its Developer Program License Agreement to impose stricter rules on how third-party wearable applications handle sensitive data such as live activity updates and notifications.

Under Apple’s revised policies, developers are explicitly prohibited from using forwarded data for purposes such as advertising, user profiling, training machine learning models, or tracking user location. These restrictions are intended to prevent misuse of real-time user data beyond its original functional purpose.

Additionally, developers are not allowed to share this forwarded information with other applications or devices, except for authorized accessories that are explicitly approved within Apple’s ecosystem. This ensures tighter control over how data flows between devices.

The updated agreement also introduces further limitations. Developers are barred from storing this data on external cloud servers, altering its meaning in ways that change the original content, or decrypting the information anywhere other than on the designated accessory device. These measures collectively aim to preserve data integrity and minimize the risk of misuse.

Taken together, this charts a new course across the technology industry toward stronger governance of developer behavior, application distribution, and data handling practices. As threats such as malware distribution, financial fraud, and data exploitation continue to evolve, platform providers are increasingly prioritizing transparency, accountability, and user protection in their security strategies.

North Korean Hackers Target Softwares that Support Online Services


Hackers target behind-the-scenes softwares

Hackers associated with North Korea hacked the behind-the-scenes software that operates various online functions to steal login credentials that could trigger cyber operations, according to Google. 

Threat actors hacked Axios, a program that links apps and web services, by installing their malicious software in an update. An expert at Sentinel said that “Every time you load a website, check your bank balance, or open an app on your phone, there’s a good chance Axios is running somewhere in the background making that work.” 

About the compromised software

The malicious software has been removed. But if it were successful, it could carry out data theft and other cyberattacks. The software is open-source, not a proprietary commercial product. This means the code can be openly licensed and changed by the users. 

Experts described the incident as a supply chain attack in which hackers could compromise downstream entities. According to experts, you don’t have to click anything or make a mistake, as the software you trust does it for you. 

Who is responsible?

Google attributed the hack to a group it tracks as UNC1069. In a February report, Google stated that the group has been active since at least 2018 and is well-known for focusing on the banking and cryptocurrency sectors.

According to a statement from John Hultquist, principal analyst for Google's threat intelligence group, "North Korean hackers have deep experience with supply chain attacks, which they primarily use to ⁠steal cryptocurrency."

The U.S. government claims that North Korea uses stolen cryptocurrency to finance its weapons and other initiatives while avoiding sanctions.

Attack tactic

A request for comment was not immediately answered by North Korea's mission to the United Nations.

The hackers created versions of the malware that could infect macOS, Windows, and Linux operating systems, according to an analysis published by cybersecurity ⁠firm Elastic ​Security.

According to Elastic, "the attacker gained a delivery mechanism with potential reach into millions of environments" as a result of the hackers' techniques. The number of times the dangerous program was downloaded was unclear.

Attempts to get in touch with the hackers failed.

Security Alerts or Scams? How to Spot Fake Login Warnings and Protect Your Accounts

 

Your phone buzzes with a notification: “Unusual login activity detected on your account.” It’s enough to make anyone uneasy. But is it a genuine alert about a hacking attempt, or could the message itself be a trap?

Notifications from major platforms like Google, Microsoft, Amazon, or even your bank can be both helpful and risky. While they act as an early warning system against unauthorized access, cybercriminals often exploit this sense of urgency. Fake alerts are designed to trick users into clicking on malicious links and entering sensitive information on fraudulent login pages. Acting impulsively in such moments can unintentionally give attackers access to your accounts.

Understanding Security Alerts

Not every alert signals a compromised account. Many platforms rely on advanced monitoring systems that flag unusual behaviour before any real damage occurs.

These systems may detect:
  • Multiple failed login attempts from different locations
  • Automated attacks using leaked credentials
  • Logins from unfamiliar devices or IP addresses
In many cases, a blocked login attempt simply means the system is working as intended—not that your account has already been breached.

The 3-Second Test: Spotting Real vs Fake Messages

Before clicking on any alert, pause and verify. Even AI-generated phishing emails often fail basic checks:

1. The Sender Check
Always look beyond the display name. Verify the actual email address and domain. Fraudsters often use slight variations like “amazon-support.co.uk” or “service@paypal-hilfe.com
” to appear legitimate.

2. The Hover Trick
On a computer, hover your cursor over any link without clicking. The true destination URL will appear. If it doesn’t match the official website, delete the email immediately.

3. Watch for Panic Tactics
Be cautious of urgent messages such as:
“Act within 10 minutes or your account will be irrevocably deleted!”
Legitimate companies don’t pressure users this way—urgency is a common scam tactic.

Golden Rule: Never click directly from the email. Instead, open your browser, manually type the official website, and log in. If there’s a real issue, it will be visible in your account dashboard.

Using the same password across multiple platforms increases risk. A breach on one website can trigger a domino effect, allowing attackers to access other accounts using the same credentials

The Role of Password Managers

Password managers offer a simple yet powerful solution:

  1. Unique Passwords: They generate strong, complex passwords for each account, ensuring one breach doesn’t compromise everything.
  2. Built-in Phishing Protection: These tools only autofill credentials on legitimate websites, helping you avoid fake login pages.

Tools like Dashlane provide a comprehensive password management experience with seamless autofill and secure password generation. Meanwhile, Bitwarden stands out as a reliable open-source option with robust free features.

Security alerts aren’t always bad news, they often indicate that protective systems are doing their job. The real risk lies in reacting without verification.

By using a password manager and enabling two-factor authentication, you can significantly strengthen your defenses and keep your digital identity secure

Google Disrupts China-Linked UNC2814 Cyber Espionage Network Targeting 70+ Countries

 

Google on Wednesday revealed that it collaborated with industry partners to dismantle the digital infrastructure of a suspected China-aligned cyber espionage group known as UNC2814, which compromised at least 53 organizations spanning 42 countries.

"This prolific, elusive actor has a long history of targeting international governments and global telecommunications organizations across Africa, Asia, and the Americas," Google Threat Intelligence Group (GTIG) and Mandiant said in a report published today.

UNC2814 is believed to be associated with additional breaches across more than 20 other nations. Google, which has monitored the group since 2017, observed the attackers leveraging API requests to interact with software-as-a-service (SaaS) platforms as part of their command-and-control (C2) framework. This method allowed the threat actor to blend malicious communications with normal traffic patterns.

At the core of the campaign is a previously undocumented backdoor named GRIDTIDE. The malware exploits the Google Sheets API as a covert channel for C2 operations, enabling attackers to conceal communications while transferring raw data and executing shell commands. Written in C, GRIDTIDE supports file uploads and downloads, along with arbitrary command execution.

Dan Perez, GTIG researcher, told The Hacker News via email that they cannot confirm if all the intrusions involved the use of the GRIDTIDE backdoor. "We believe many of these organizations have been compromised for years," Perez added.

Investigators are still examining how UNC2814 gains its initial foothold. However, the group has a documented track record of exploiting web servers and edge devices to infiltrate targeted networks. Once inside, the attackers reportedly used service accounts to move laterally via SSH, while relying on living-off-the-land (LotL) tools to perform reconnaissance, elevate privileges, and maintain long-term persistence.

"To achieve persistence, the threat actor created a service for the malware at /etc/systemd/system/xapt.service, and once enabled, a new instance of the malware was spawned from /usr/sbin/xapt," Google explained.

The campaign also involved the use of SoftEther VPN Bridge to establish encrypted outbound connections to external IP addresses. Security researchers have previously linked misuse of SoftEther VPN technology to several Chinese state-sponsored hacking groups.

Evidence suggests that GRIDTIDE was deployed on systems containing personally identifiable information (PII), aligning with espionage objectives aimed at monitoring individuals of strategic interest. Despite this, Google stated that it did not detect any data exfiltration during the observed operations.

The malware’s communication mechanism relies on a spreadsheet-based polling system, assigning specific functions to designated cells for two-way communication:
  • A1: Used to retrieve attacker-issued commands and update status responses (e.g., S-C-R or Server-Command-Success)
  • A2–An: Facilitates the transfer of data such as command outputs and files
  • V1: Stores system-related data from the compromised endpoint
In response, Google terminated all Google Cloud projects associated with the attackers, dismantled known UNC2814 infrastructure, and revoked access to malicious accounts and Google Sheets API operations used for C2 activity.

The company described UNC2814 as one of the "most far-reaching, impactful campaigns" encountered in recent years. It confirmed that formal notifications were issued to affected entities and that assistance is being provided to organizations with verified breaches linked to the group.

Security experts note that this activity reflects a broader strategy by Chinese state-backed actors to secure prolonged access within global networks. The findings further emphasize the vulnerability of network edge devices, which frequently become entry points due to exposed weaknesses and misconfigurations.

Such appliances are increasingly targeted because they often lack advanced endpoint detection capabilities while offering direct access or pivot opportunities into internal enterprise systems once compromised.

"The global scope of UNC2814's activity, evidenced by confirmed or suspected operations in over 70 countries, underscores the serious threat facing telecommunications and government sectors, and the capacity for these intrusions to evade detection by defenders," Google said.

"Prolific intrusions of this scale are generally the result of years of focused effort and will not be easily re-established. We expect that UNC2814 will work hard to re-establish its global footprint."

Malicious AI Chrome Extensions Steal Users Emails and Passwords


30 malicious Chrome extensions used by over 300,000 users are pretending to be AI assistants to steal credentials, browsing information, and email content. Few extensions are still active in the Chrome Web Store and have been downloaded by tens of thousands of users. 

Experts at browser security platform LayerX found the malicious extension campaign and labelled it AiFrame. They discovered that all studied extensions are part of the same malicious attack as they interact with infrastructure under a single domain, tapnetic[.]pro. 

Experts said that the most famous extension in the AiFrame operation had 80,000 users and was termed Gemini AI Sidebar (fppbiomdkfbhgjjdmojlogeceejinadg), but it isn't available in the Chrome Web Store. 

According to BleepingComputer, other extensions with over thousand users are still active on Google's repository for Chrome extensions. The names might be different, but the classification is the same. 

LayerX discovered that all 30 extensions have the same Javascript logic, permissions, internal structure, and backend infrastructure. 

The infected browser add-ons do not apply AI functionality locally. 

This can be risky because publishers can modify the extensions' logic without any update, similar to the Microsoft Office Add-ins. This helps them escape the new review. 

Besides this, the extension takes out page content from the sites that users visit. This includes verification pages via Mozilla's Readability library. 

According to LayerX, a group of 15 extensions exclusively target Gmail data by injecting UI components with a content script that executes at "document_start" on "mail.google.com." The script repeatedly retrieves email thread text using ".textContent" after reading visible email content straight from the DOM. Even email drafts can be recorded, according to the researchers. According to a report released today by LayerX, "the extracted email content is passed into the extension's logic and transmitted to third-party backend infrastructure controlled by the extension operator when Gmail-related features like AI-assisted replies or summaries are invoked."

Additionally, the extensions have a way for remotely triggering speech recognition and transcript creation that uses the "Web Speech API" to provide operators with the results. The extensions may potentially intercept chats from the victim's surroundings, depending on the permissions that are provided. Google has not responded to BleepingComputer's request for comment on LayerX results by the time of publication. For the full list of malicious extensions, it is advised to consult LayerX's list of indicators of compromise. Users should reset the passwords for all accounts if the intrusion is verified.

State-Backed Hackers Are Turning to AI Tools to Plan, Build, and Scale Cyber Attacks

 



Cybersecurity investigators at Google have confirmed that state-sponsored hacking groups are actively relying on generative artificial intelligence to improve how they research targets, prepare cyber campaigns, and develop malicious tools. According to the company’s threat intelligence teams, North Korea–linked attackers were observed using the firm’s AI platform, Gemini, to collect and summarize publicly available information about organizations and employees they intended to target. This type of intelligence gathering allows attackers to better understand who works at sensitive companies, what technical roles exist, and how to approach victims in a convincing way.

Investigators explained that the attackers searched for details about leading cybersecurity and defense companies, along with information about specific job positions and salary ranges. These insights help threat actors craft more realistic fake identities and messages, often impersonating recruiters or professionals to gain the trust of their targets. Security experts warned that this activity closely resembles legitimate professional research, which makes it harder for defenders to distinguish normal online behavior from hostile preparation.

The hacking group involved, tracked as UNC2970, is linked to North Korea and overlaps with a network widely known as Lazarus Group. This group has previously run a long-term operation in which attackers pretended to offer job opportunities to professionals in aerospace, defense, and energy companies, only to deliver malware instead. Researchers say this group continues to focus heavily on defense-related targets and regularly impersonates corporate recruiters to begin contact with victims.

The misuse of AI is not limited to one actor. Multiple hacking groups connected to China and Iran were also found using AI tools to support different phases of their operations. Some groups used AI to gather targeted intelligence, including collecting email addresses and account details. Others relied on AI to analyze software weaknesses, prepare technical testing plans, interpret documentation from open-source tools, and debug exploit code. Certain actors used AI to build scanning tools and malicious web shells, while others created fake online identities to manipulate individuals into interacting with them. In several cases, attackers claimed to be security researchers or competition participants in order to bypass safety restrictions built into AI systems.

Researchers also identified malware that directly communicates with AI services to generate harmful code during an attack. One such tool, HONESTCUE, requests programming instructions from AI platforms and receives source code that is used to build additional malicious components on the victim’s system. Instead of storing files on disk, this malware compiles and runs code directly in memory using legitimate system tools, making detection and forensic analysis more difficult. Separately, investigators uncovered phishing kits designed to look like cryptocurrency exchanges. These fake platforms were built using automated website creation tools from Lovable AI and were used to trick victims into handing over login credentials. Parts of this activity were linked to a financially motivated group known as UNC5356.

Security teams also reported an increase in so-called ClickFix campaigns. In these schemes, attackers use public sharing features on AI platforms to publish convincing step-by-step guides that appear to fix common computer problems. In reality, these instructions lead users to install malware that steals personal and financial data. This trend was first flagged in late 2025 by Huntress.

Another growing threat involves model extraction attacks. In these cases, adversaries repeatedly query proprietary AI systems in order to observe how they respond and then train their own models to imitate the same behavior. In one large campaign, attackers sent more than 100,000 prompts to replicate how an AI model reasons across many tasks in different languages. Researchers at Praetorian demonstrated that a functional replica could be built using a relatively small number of queries and limited training time. Experts warned that keeping AI model parameters secret is not enough, because every response an AI system provides can be used as training data for attackers.

Google, which launched its AI Cyber Defense Initiative in 2024, stated that artificial intelligence is increasingly amplifying the capabilities of cybercriminals by improving their efficiency and speed. Company representatives cautioned that as attackers integrate AI into routine operations, the volume and sophistication of attacks will continue to rise. Security specialists argue that defenders must adopt similar AI-powered tools to automate threat detection, accelerate response times, and operate at the same machine-level speed as modern attacks.


Experts Find Malicious Browser Extensions, Chrome, Safari, and Edge Affected


Threat actors exploit extensions

Cybersecurity experts found 17 extensions for Chrome, Edge, and Firefox browsers which track user's internet activity and install backdoors for access. The extensions were downloaded over 840,000 times. 

The campaign is not new. LayerX claimed that the campaign is part of GhostPoster, another campaign first found by Koi Security last year in December. Last year, researchers discovered 17 different extensions that were downloaded over 50,000 times and showed the same monitoring behaviour and deploying backdoors. 

Few extensions from the new batch were uploaded in 2020, exposing users to malware for years. The extensions appeared in places like the Edge store and later expanded to Firefox and Chrome. 

Few extensions stored malicious JavaScript code in the PNG logo. The code is a kind of instruction on downloading the main payload from a remote server. 

The main payload does multiple things. It can hijack affiliate links on famous e-commerce websites to steal money from content creators and influencers. “The malware watches for visits to major e-commerce platforms. When you click an affiliate link on Taobao or JD.com, the extension intercepts it. The original affiliate, whoever was supposed to earn a commission from your purchase, gets nothing. The malware operators get paid instead,” said Koi researchers. 

After that, it deploys Google Analytics tracking into every page that people open, and removes security headers from HTTP responses. 

In the end, it escapes CAPTCHA via three different ways, and deploy invisible iframes that do ad frauds, click frauds, and tracking. These iframes disappear after 15 seconds.

Besides this, all extensions were deleted from the repositories, but users shoul also remove them personally. 

This staged execution flow demonstrates a clear evolution toward longer dormancy, modularity, and resilience against both static and behavioral detection mechanisms,” said LayerX. 

The PNG steganography technique is employed by some. Some people download JavaScript directly and include it into each page you visit. Others employ bespoke ciphers to encode the C&C domains and use concealed eval() calls. The same assailant. identical servers. many methods of delivery. This appears to be testing several strategies to see which one gets the most installs, avoids detection the longest, and makes the most money.

This campaign reflects a deliberate shift toward patience and precision. By embedding malicious code in images, delaying execution, and rotating delivery techniques across identical infrastructure, the attackers test which methods evade detection longest. The strategy favors longevity and profit over speed, exposing how browser ecosystems remain vulnerable to quietly persistent threats.

Former Google Engineer Convicted in U.S. for Stealing AI Trade Secrets to Aid China-Based Startup

 

A former Google software engineer has been found guilty in the United States for unlawfully taking thousands of confidential Google documents to support a technology venture in China, according to an announcement made by the Department of Justice (DoJ) on Thursday.

Linwei Ding, also known as Leon Ding, aged 38, was convicted by a federal jury on 14 charges—seven counts of economic espionage and seven counts of theft of trade secrets. Prosecutors established that Ding illegally copied more than 2,000 internal Google files containing highly sensitive artificial intelligence (AI) trade secrets with the intent of benefiting the People’s Republic of China (PRC).

"Silicon Valley is at the forefront of artificial intelligence innovation, pioneering transformative work that drives economic growth and strengthens our national security," said U.S. Attorney Craig H. Missakian. "We will vigorously protect American intellectual capital from foreign interests that seek to gain an unfair competitive advantage while putting our national security at risk."

Ding was initially indicted in March 2024 after investigators discovered that he had transferred proprietary data from Google’s internal systems to his personal Google Cloud account. The materials allegedly stolen included detailed information on Google’s supercomputing data center architecture used to train and run AI models, its Cluster Management System (CMS), and the AI models and applications operating on that infrastructure.

The misappropriated trade secrets reportedly covered several critical technologies, including the design and functionality of Google’s custom Tensor Processing Unit (TPU) chips and GPU systems, software that enables chip-level communication and task execution, systems that coordinate thousands of chips into AI supercomputers, and SmartNIC technology used for high-speed networking within Google’s AI and cloud platforms.

Authorities stated that the theft occurred over an extended period between May 2022 and April 2023. Ding, who began working at Google in 2019, allegedly maintained undisclosed ties with two China-based technology firms during his employment, one of which was Shanghai Zhisuan Technologies Co., a startup he founded in 2023. Investigators noted that Ding downloaded large volumes of confidential files in December 2023, just days before resigning from the company.

"Around June 2022, Ding was in discussions to be the Chief Technology Officer for an early-stage technology company based in the PRC; by early 2023, Ding was in the process of founding his own technology company in the PRC focused on AI and machine learning and was acting as the company's CEO," the DoJ said.

The case further alleged that Ding attempted to conceal his actions by copying Google source code into the Apple Notes app on his work-issued MacBook, converting the files into PDFs, and uploading them to his personal Google account. Prosecutors also claimed that he asked a colleague to use his access badge to enter a Google facility, creating the false appearance that he was working from the office while he was actually in China.

The investigation reportedly accelerated in late 2023 after Google learned that Ding had delivered a public presentation in China to prospective investors promoting his startup. According to Courthouse News, Ding’s defense attorney Grant Fondo argued that the information could not qualify as trade secrets because it was accessible to a large number of Google employees. "Google chose openness over security," Fonda said.

In a superseding indictment filed in February 2025, Ding was additionally charged with economic espionage, with prosecutors alleging that he applied to a Beijing-backed Shanghai talent program. Such initiatives were described as efforts to recruit overseas researchers to bolster China’s technological and economic development.

"Ding's application for this talent plan stated that he planned to 'help China to have computing power infrastructure capabilities that are on par with the international level,'" the DoJ said. "The evidence at trial also showed that Ding intended to benefit two entities controlled by the government of China by assisting with the development of an AI supercomputer and collaborating on the research and development of custom machine learning chips."

Ding is set to attend a status conference on February 3, 2026. If sentenced to the maximum penalties, he could face up to 10 years in prison for each trade secret theft charge and up to 15 years for each count of economic espionage.

Google Owned Mandiant Finds Vishing Attacks Against SaaS Platforms


Mandiant recently said that it found an increase in threat activity that deploys tradecraft for extortion attacks carried out by a financially gained group ShinyHunters.

  • These attacks use advanced voice phishing (vishing) and fake credential harvesting sites imitating targeted organizations to get illicit access to victims systems by collecting sign-on (SSO) credentials and two factor authentication codes. 
  • The attacks aim to target cloud-based software-as-a-service (SaaS) apps to steal sensitive data and internal communications and blackmail victims. 

Google owned Mandiant’s threat intelligence team is tracking the attacks under various clusters: UNC6661, UNC6671, and UNC6240 (aka ShinyHunters). These gangs might be improving their attack tactics. "While this methodology of targeting identity providers and SaaS platforms is consistent with our prior observations of threat activity preceding ShinyHunters-branded extortion, the breadth of targeted cloud platforms continues to expand as these threat actors seek more sensitive data for extortion," Mandiant said. 

"Further, they appear to be escalating their extortion tactics with recent incidents, including harassment of victim personnel, among other tactics.”

Theft details

UNC6661 was pretending to be IT staff sending employees to credential harvesting links tricking them into multi-factor authentication (MFA) settings. This was found during mid-January 2026.

Threat actors used stolen credentials to register their own device for MFA and further steal data from SaaS platforms. In one incident, the hacker exploited their access to infected email accounts to send more phishing emails to users in cryptocurrency based organizations.

The emails were later deleted to hide the tracks. Experts also found UNC6671 mimicking IT staff to fool victims to steal credentials and MFA login codes on credential harvesting websites since the start of this year. In a few incidents, the hackers got access to Okta accounts. 

UNC6671 leveraged PowerShell to steal sensitive data from OneDrive and SharePoint. 

Attack tactic 

The use of different domain registrars to register the credential harvesting domains (NICENIC for UNC6661 and Tucows for UNC6671) and the fact that an extortion email sent after UNC6671 activity did not overlap with known UNC6240 indicators are the two main differences between UNC6661 and UNC6671. 

This suggests that other groups of people might be participating, highlighting how nebulous these cybercrime organizations are. Furthermore, the targeting of bitcoin companies raises the possibility that the threat actors are searching for other opportunities to make money.

Google Introduces AI-Powered Side Panel in Chrome to Automate Browsing




Google has updated its Chrome browser by adding a built-in artificial intelligence panel powered by its Gemini model, marking a stride toward automated web interaction. The change reflects the company’s broader push to integrate AI directly into everyday browsing activities.

Chrome, which currently holds more than 70 percent of the global browser market, is now moving in the same direction as other browsers that have already experimented with AI-driven navigation. The idea behind this shift is to allow users to rely on AI systems to explore websites, gather information, and perform online actions with minimal manual input.

The Gemini feature appears as a sidebar within Chrome, reducing the visible area of websites to make room for an interactive chat interface. Through this panel, users can communicate with the AI while keeping their main work open in a separate tab, allowing multitasking without constant tab switching.

Google explains that this setup can help users organize information more effectively. For example, Gemini can compare details across multiple open tabs or summarize reviews from different websites, helping users make decisions more quickly.

For subscribers to Google’s higher-tier AI plans, Chrome now offers an automated browsing capability. This allows Gemini to act as a software agent that can follow instructions involving multiple steps. In demonstrations shared by Google, the AI can analyze images on a webpage, visit external shopping platforms, identify related products, and add items to a cart while staying within a user-defined budget. The final purchase, however, still requires user approval.

The browser update also includes image-focused AI tools that allow users to create or edit images directly within Chrome, further expanding the browser’s role beyond simple web access.

Chrome’s integration with other applications has also been expanded. With user consent, Gemini can now interact with productivity tools, communication apps, media services, navigation platforms, and shopping-related Google services. This gives the AI broader context when assisting with tasks.

Google has indicated that future updates will allow Gemini to remember previous interactions across websites and apps, provided users choose to enable this feature. The goal is to make AI assistance more personalized over time.

Despite these developments, automated browsing faces resistance from some websites. Certain platforms have already taken legal or contractual steps to limit AI-driven activity, particularly for shopping and transactions. This underlines the ongoing tension between automation and website control.

To address these concerns, Google says Chrome will request human confirmation before completing sensitive actions such as purchases or social media posts. The browser will also support an open standard designed to allow AI-driven commerce in collaboration with participating retailers.

Currently, these features are available on Chrome for desktop systems in the United States, with automated browsing restricted to paid subscribers. How widely such AI-assisted browsing will be accepted across the web remains uncertain.


What Happens When Spyware Hits a Phone and How to Stay Safe

 



Although advanced spyware attacks do not affect most smartphone users, cybersecurity researchers stress that awareness is essential as these tools continue to spread globally. Even individuals who are not public figures are advised to remain cautious.

In December, hundreds of iPhone and Android users received official threat alerts stating that their devices had been targeted by spyware. Shortly after these notifications, Apple and Google released security patches addressing vulnerabilities that experts believe were exploited to install the malware on a small number of phones.

Spyware poses an extreme risk because it allows attackers to monitor nearly every activity on a smartphone. This includes access to calls, messages, keystrokes, screenshots, notifications, and even encrypted platforms such as WhatsApp and Signal. Despite its intrusive capabilities, spyware is usually deployed in targeted operations against journalists, political figures, activists, and business leaders in sensitive industries.

High-profile cases have demonstrated the seriousness of these attacks. Former Amazon chief executive Jeff Bezos and Hanan Elatr, the wife of murdered Saudi dissident Jamal Khashoggi, were both compromised through Pegasus spyware developed by the NSO Group. These incidents illustrate how personal data can be accessed without user awareness.

Spyware activity remains concentrated within these circles, but researchers suggest its reach may be expanding. In early December, Google issued threat notifications and disclosed findings showing that an exploit chain had been used to silently install Predator spyware. Around the same time, the U.S. Cybersecurity and Infrastructure Security Agency warned that attackers were actively exploiting mobile messaging applications using commercial surveillance tools.

One of the most dangerous techniques involved is known as a zero-click attack. In such cases, a device can be infected without the user clicking a link, opening a message, or downloading a file. According to Malwarebytes researcher Pieter Arntz, once infected, attackers can read messages, track keystrokes, capture screenshots, monitor notifications, and access banking applications. Rocky Cole of iVerify adds that spyware can also extract emails and texts, steal credentials, send messages, and access cloud accounts.

Spyware may also spread through malicious links, fake applications, infected images, browser vulnerabilities, or harmful browser extensions. Recorded Future’s Richard LaTulip notes that recent research into malicious extensions shows how tools that appear harmless can function as surveillance mechanisms. These methods, often associated with nation-state actors, are designed to remain hidden and persistent.

Governments and spyware vendors frequently claim such tools are used only for law enforcement or national security. However, Amnesty International researcher Rebecca White states that journalists, activists, and others have been unlawfully targeted worldwide, using spyware as a method of repression. Thai activist Niraphorn Onnkhaow was targeted multiple times during pro-democracy protests between 2020 and 2021, eventually withdrawing from activism due to fears her data could be misused.

Detecting spyware is challenging. Devices may show subtle signs such as overheating, performance issues, or unexpected camera or microphone activation. Official threat alerts from Apple, Google, or Meta should be treated seriously. Leaked private information can also indicate compromise.

To reduce risk, Apple offers Lockdown Mode, which limits certain functions to reduce attack surfaces. Apple security executive Ivan Krstić states that widespread iPhone malware has not been observed outside mercenary spyware campaigns. Apple has also introduced Memory Integrity Enforcement, an always-on protection designed to block memory-based exploits.

Google provides Advanced Protection for Android, enhanced in Android 16 with intrusion logging, USB safeguards, and network restrictions.

Experts recommend avoiding unknown links, limiting app installations, keeping devices updated, avoiding sideloading, and restarting phones periodically. However, confirmed infections often require replacing the device entirely. Organizations such as Amnesty International, Access Now, and Reporters Without Borders offer assistance to individuals who believe they have been targeted.

Security specialists advise staying cautious without allowing fear to disrupt normal device use.