Many organisations are starting to recognise a security problem that has been forming silently in the background. Conversations employees hold with public AI chatbots can accumulate into a long-term record of sensitive information, behavioural patterns, and internal decision-making. As reliance on AI tools increases, these stored interactions may become a serious vulnerability that companies have not fully accounted for.
The concern resurfaced after a viral trend in late 2024 in which social media users asked AI models to highlight things they “might not know” about themselves. Most treated it as a novelty, but the trend revealed a larger issue. Major AI providers routinely retain prompts, responses, and related metadata unless users disable retention or use enterprise controls. Over extended periods, these stored exchanges can unintentionally reveal how employees think, communicate, and handle confidential tasks.
This risk becomes more severe when considering the rise of unapproved AI use at work. Recent business research shows that while the majority of employees rely on consumer AI tools to automate or speed up tasks, only a fraction of companies officially track or authorise such usage. This gap means workers frequently paste sensitive data into external platforms without proper safeguards, expanding the exposure surface beyond what internal security teams can monitor.
Vendor assurances do not fully eliminate the risk. Although companies like OpenAI, Google, and others emphasise encryption and temporary chat options, their systems still operate within legal and regulatory environments. One widely discussed court order in 2025 required the preservation of AI chat logs, including previously deleted exchanges. Even though the order was later withdrawn and the company resumed standard deletion timelines, the case reminded businesses that stored conversations can resurface unexpectedly.
Technical weaknesses also contribute to the threat. Security researchers have uncovered misconfigured databases operated by AI firms that contained user conversations, internal keys, and operational details. Other investigations have demonstrated that prompt-based manipulation in certain workplace AI features can cause private channel messages to leak. These findings show that vulnerabilities do not always come from user mistakes; sometimes the supporting AI infrastructure itself becomes an entry point.
Criminals have already shown how AI-generated impersonation can be exploited. A notable example involved attackers using synthetic voice technology to imitate an executive, tricking an employee into transferring funds. As years of prompt history accumulate, attackers who obtain it could mine stylistic and behavioural patterns to impersonate employees, tailor phishing messages, or replicate internal documents.
Despite these risks, many companies still lack comprehensive AI governance. Studies reveal that employees continue to insert confidential data into AI systems, sometimes knowingly, because it speeds up their work. Compliance requirements such as GDPR’s strict data minimisation rules make this behaviour even more dangerous, given the penalties for mishandling personal information.
Experts advise organisations to adopt structured controls. This includes building an inventory of approved AI tools, monitoring for unsanctioned usage, conducting risk assessments, and providing regular training so staff understand what should never be shared with external systems. Some analysts also suggest that instead of banning shadow AI outright, companies should guide employees toward secure, enterprise-level AI platforms.
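One practical starting point for the monitoring step is to sweep existing web proxy or DNS logs for traffic to consumer AI services. The Python sketch below assumes a CSV proxy log with user and host columns and an illustrative domain list; both are placeholder assumptions to adapt to your environment, not a standard.

```python
# A minimal sketch of shadow-AI monitoring: scan a web proxy log for
# requests to known consumer AI endpoints. The log path, its format,
# and the domain list are illustrative assumptions.
import csv
from collections import Counter

# Hypothetical watchlist of consumer AI domains.
AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "gemini.google.com",
    "claude.ai", "copilot.microsoft.com", "perplexity.ai",
}

def find_shadow_ai(log_path: str) -> Counter:
    """Count proxy-log hits per (user, AI domain) pair.

    Assumes a CSV log with 'user' and 'host' columns; adapt the
    parsing to whatever your proxy actually emits.
    """
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row.get("user", "unknown"), host)] += 1
    return hits

if __name__ == "__main__":
    # Print the ten heaviest users of unsanctioned AI services.
    for (user, host), count in find_shadow_ai("proxy.csv").most_common(10):
        print(f"{user:<20} {host:<30} {count}")
```

Output like this is a conversation starter rather than an enforcement tool: it identifies which teams to steer toward approved, enterprise-level platforms first.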
If companies fail to act, casual AI conversations can slowly accumulate into a dataset capable of exposing confidential operations. While AI brings clear productivity benefits, unmanaged use may turn everyday workplace conversations into one of the most overlooked security liabilities of the decade.
Akira, one of the most active ransomware operations this year, has expanded its capabilities and increased the scale of its attacks, according to new threat intelligence shared by global security agencies. The group’s operators have upgraded their ransomware toolkit, continued to target a broad range of sectors, and sharply increased the financial impact of their attacks.
Data collected from public extortion portals shows that by the end of September 2025 the group had claimed roughly 244.17 million dollars in ransom proceeds. Analysts note that this figure represents a steep rise compared to estimates released in early 2024. Current tracking data places Akira second in overall activity among hundreds of monitored ransomware groups, with more than 620 victim organisations listed this year.
The growing number of incidents has prompted an updated joint advisory from international cyber authorities. The latest report outlines newly observed techniques, warns of the group’s expanded targeting, and urges all organisations to review their defensive posture.
Researchers confirm that Akira has introduced a new ransomware strain, commonly referenced as Akira v2. This version is designed to encrypt files at higher speeds and make data recovery significantly harder. Files encrypted by the new variant carry one of several extensions, including .akira, .powerranges, .akiranew, and .aki. Victims typically find ransom instructions stored as text files in both the root of the system drive and in user folders.
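For defenders doing rapid triage, those indicators translate directly into a filesystem sweep. The Python sketch below walks a directory tree and flags files carrying the extensions listed above; the ransom-note name pattern is an illustrative assumption, so substitute the exact filenames from current vendor IOC lists.

```python
# A minimal triage sketch: walk a directory tree and flag files carrying
# the extensions associated with Akira v2 in the joint advisory.
import os

AKIRA_EXTENSIONS = {".akira", ".powerranges", ".akiranew", ".aki"}

def triage(root: str):
    """Yield (kind, path) pairs for suspicious files under root."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            ext = os.path.splitext(name)[1].lower()
            if ext in AKIRA_EXTENSIONS:
                yield ("encrypted?", path)
            # Hypothetical note pattern: a .txt file mentioning 'akira'.
            # Replace with the exact note filenames from IOC feeds.
            elif ext == ".txt" and "akira" in name.lower():
                yield ("ransom-note?", path)

if __name__ == "__main__":
    for kind, path in triage("/"):
        print(f"{kind:<13} {path}")
```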
Investigations show that Akira actors gain entry through several familiar but effective routes. These include exploiting security gaps in edge devices and backup servers, taking advantage of authentication-bypass and scripting flaws, and using buffer-overflow vulnerabilities to run malicious code. Stolen or brute-forced credentials remain a common factor, especially when multi-factor authentication is disabled.
Once inside a network, the attackers quickly establish long-term access. They generate new domain accounts, including administrative profiles, and have repeatedly created an account named itadm during intrusions. The group also uses legitimate system tools to explore networks and identify sensitive assets. This includes commands used for domain discovery and open-source frameworks designed for remote execution. In many cases, the attackers uninstall endpoint detection products, change firewall rules, and disable antivirus tools to remain unnoticed.
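Because the itadm account name recurs across intrusions, account-creation events are a concrete hunting signal. The following Python sketch scans an exported Windows Security event log for account-creation events (ID 4720, "A user account was created") matching that name; the CSV export format and column names are assumptions to adapt to your SIEM's schema.

```python
# A minimal detection sketch: scan an exported Windows Security event
# log (CSV) for creation of account names tied to Akira intrusions.
import csv

WATCHED_NAMES = {"itadm"}   # account name noted in the advisory
ACCOUNT_CREATED = "4720"    # Windows: 'A user account was created'

def suspicious_creations(csv_path: str):
    """Yield (timestamp, new account, creator) for watched names.

    Assumes columns named EventID, TargetUserName, SubjectUserName,
    and TimeCreated; rename to match your export's actual schema.
    """
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("EventID") != ACCOUNT_CREATED:
                continue
            target = row.get("TargetUserName", "").lower()
            if target in WATCHED_NAMES:
                yield (row.get("TimeCreated", "?"), target,
                       row.get("SubjectUserName", "?"))

if __name__ == "__main__":
    for ts, account, creator in suspicious_creations("security_events.csv"):
        print(f"{ts}  created={account}  by={creator}")
```

Alerting on any new domain admin account, not just known names, casts a wider net against the same behaviour.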
The group has also expanded its focus to virtual and cloud-based environments. Security teams recently observed the encryption of virtual machine disk files on Nutanix AHV, in addition to previous activity on VMware ESXi and Hyper-V platforms. In one incident, operators temporarily powered down a domain controller to copy protected virtual disk files and load them onto a new virtual machine, allowing them to access privileged credentials.
Command and control activity is often routed through encrypted tunnels, and recent intrusions show the use of tunnelling services to mask traffic. Authorities warn that data theft can occur within hours of initial access.
Security agencies stress that the most effective defence remains prompt patching of known exploited vulnerabilities, enforcing multi-factor authentication on all remote services, monitoring for unusual account creation, and ensuring that backup systems are fully secured and tested.