Salesforce said last week that “certain customers’ Salesforce data” had been accessed through Gainsight applications. These apps are widely used by companies to manage customer relationships. According to Google’s Threat Intelligence Group, over 200 Salesforce instances were affected.
A group calling itself Scattered Lapsus$ Hunters, which includes members of the well-known ShinyHunters gang, has claimed responsibility. The gang has a history of targeting large firms and leaking stolen data online.
The hackers have already published a list of alleged victims. Names include Atlassian, CrowdStrike, DocuSign, GitLab, LinkedIn, Malwarebytes, SonicWall, Thomson Reuters and Verizon. Some of these companies have denied being impacted, while others are still investigating.
This incident is a stark illustration of the risks that third-party apps pose in enterprise ecosystems. By compromising Gainsight’s software, attackers were able to reach hundreds of companies at once.
According to TechCrunch, supply chain attacks are especially dangerous because they exploit trust in vendors. Once a trusted app is breached, it can open doors to sensitive data across multiple organisations.
Salesforce has said it is working with affected customers to secure systems. Gainsight has not yet issued a detailed statement. Google continues to track the attackers and assess the scale of the stolen data.
Cybersecurity firms are advising companies to review their integrations, tighten access controls and monitor for suspicious activity. Analysts believe this breach will renew calls for stricter checks on third-party apps used in cloud platforms.
The attack comes at a time when businesses are increasingly dependent on SaaS platforms like Salesforce. As reliance on external apps grows, attackers are shifting their focus to these weaker links.
A new phishing operation is misleading users through an extremely subtle visual technique that alters the appearance of Microsoft’s domain name. Attackers have registered the look-alike address “rnicrosoft(.)com,” which replaces the single letter m with the characters r and n positioned closely together. The small difference is enough to trick many people into believing they are interacting with the legitimate site.
This method is a form of typosquatting where criminals depend on how modern screens display text. Email clients and browsers often place r and n so closely that the pair resembles an m, leading the human eye to automatically correct the mistake. The result is a domain that appears trustworthy at first glance although it has no association with the actual company.
Experts note that phishing messages built around this tactic often copy Microsoft’s familiar presentation style. Everything from symbols to formatting is imitated to encourage users to act without closely checking the URL. The campaign takes advantage of predictable reading patterns where the brain prioritizes recognition over detail, particularly when the user is scanning quickly.
The deception becomes stronger on mobile screens. Limited display space can hide the entire web address and the address bar may shorten or disguise the domain. Criminals use this opportunity to push malicious links, deliver invoices that look genuine, or impersonate internal departments such as HR teams. Once a victim believes the message is legitimate, they are more likely to follow the link or download a harmful attachment.
The “rn” substitution is only one example of a broader pattern. Typosquatting groups also replace the letter o with the number zero, add hyphens to create official-sounding variations, or register sites with different top level domains that resemble the original brand. All of these are intended to mislead users into entering passwords or sending sensitive information.
Security specialists advise users to verify every unexpected message before interacting with it. Expanding the full sender address exposes inconsistencies that the display name may hide. Checking links by hovering over them, or using long-press previews on mobile devices, can reveal whether the destination is legitimate. Reviewing email headers, especially the Reply-To field, can also uncover signs that responses are being redirected to an external mailbox controlled by attackers.
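As a rough illustration of the Reply-To check described above, the Python sketch below parses a raw message with the standard library’s `email` module and flags messages whose Reply-To domain differs from the From domain. The message contents and addresses are hypothetical examples, not real campaign data.

```python
from email import message_from_string
from email.utils import parseaddr

# Hypothetical phishing message with a mismatched Reply-To header.
raw = """\
From: Microsoft Support <support@rnicrosoft.com>
Reply-To: billing@attacker-mailbox.example
Subject: Password reset required

Click the link below to reset your password.
"""

def reply_to_mismatch(raw_message: str) -> bool:
    """Return True when Reply-To points at a different domain than From."""
    msg = message_from_string(raw_message)
    _, from_addr = parseaddr(msg.get("From", ""))
    # Fall back to the From address when no Reply-To header is present.
    _, reply_addr = parseaddr(msg.get("Reply-To", from_addr))
    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    reply_domain = reply_addr.rsplit("@", 1)[-1].lower()
    return from_domain != reply_domain

print(reply_to_mismatch(raw))  # True: replies would go to an external mailbox
```

A mismatch alone is not proof of fraud (mailing lists legitimately set Reply-To), but it is exactly the kind of inconsistency the full header view exposes.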
When an email claims that a password reset or account change is required, the safest approach is to ignore the provided link. Instead, users should manually open a new browser tab and visit the official website. Organisations are encouraged to conduct repeated security awareness exercises so employees do not react instinctively to familiar-looking alerts.
Below are common variations used in these attacks:
• Letter Pairing: r and n are combined to imitate m as seen in rnicrosoft(.)com.
• Number Replacement: the letter o is switched with the number zero in addresses like micros0ft(.)com.
• Added Hyphens: attackers introduce hyphens to create domains that appear official, such as microsoft-support(.)com.
• Domain Substitution: similar names are created by altering only the top level domain, for example microsoft(.)co.
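The variations above can be screened for programmatically. This minimal sketch normalises a domain by undoing the common substitutions (r+n for m, zero for o, stripped hyphens) and flags any domain whose normalised form collides with a trusted brand name. The substitution table is an illustrative assumption; production tools use much larger homoglyph sets.

```python
# Illustrative substitution table mirroring the look-alike patterns above.
SUBSTITUTIONS = [
    ("rn", "m"),  # letter pairing: "rn" renders like "m"
    ("0", "o"),   # number replacement
    ("1", "l"),
]

def normalise(domain: str) -> str:
    """Reduce a domain's first label to its visually equivalent form."""
    label = domain.lower().split(".")[0].replace("-", "")
    for fake, real in SUBSTITUTIONS:
        label = label.replace(fake, real)
    return label

def looks_like(domain: str, brand: str) -> bool:
    """Flag domains that are not the brand but normalise to it."""
    return domain.split(".")[0] != brand and normalise(domain) == brand

print(looks_like("rnicrosoft.com", "microsoft"))  # True
print(looks_like("micros0ft.com", "microsoft"))   # True
print(looks_like("microsoft.com", "microsoft"))   # False
```

Note this simple check will not catch added-word variants like microsoft-support(.)com, which contain extra characters rather than substituted ones; those require keyword matching against the brand name.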
This phishing strategy succeeds because it relies on human perception rather than technical flaws. Recognising these small changes and adopting consistent verification habits remain the most effective protections against such attacks.
Many organisations are starting to recognise a security problem that has been forming silently in the background. Conversations employees hold with public AI chatbots can accumulate into a long-term record of sensitive information, behavioural patterns, and internal decision-making. As reliance on AI tools increases, these stored interactions may become a serious vulnerability that companies have not fully accounted for.
The concern resurfaced after a viral trend in late 2024 in which social media users asked AI models to highlight things they “might not know” about themselves. Most treated it as a novelty, but the trend revealed a larger issue. Major AI providers routinely retain prompts, responses, and related metadata unless users disable retention or use enterprise controls. Over extended periods, these stored exchanges can unintentionally reveal how employees think, communicate, and handle confidential tasks.
This risk becomes more severe when considering the rise of unapproved AI use at work. Recent business research shows that while the majority of employees rely on consumer AI tools to automate or speed up tasks, only a fraction of companies officially track or authorise such usage. This gap means workers frequently insert sensitive data into external platforms without proper safeguards, enlarging the exposure surface beyond what internal security teams can monitor.
Vendor assurances do not fully eliminate the risk. Although companies like OpenAI, Google, and others emphasize encryption and temporary chat options, their systems still operate within legal and regulatory environments. One widely discussed court order in 2025 required the preservation of AI chat logs, including previously deleted exchanges. Even though the order was later withdrawn and the company resumed standard deletion timelines, the case reminded businesses that stored conversations can resurface unexpectedly.
Technical weaknesses also contribute to the threat. Security researchers have uncovered misconfigured databases operated by AI firms that contained user conversations, internal keys, and operational details. Other investigations have demonstrated that prompt-based manipulation in certain workplace AI features can cause private channel messages to leak. These findings show that vulnerabilities do not always come from user mistakes; sometimes the supporting AI infrastructure itself becomes an entry point.
Criminals have already shown how AI-generated impersonation can be exploited. A notable example involved attackers using synthetic voice technology to imitate an executive, tricking an employee into transferring funds. As AI models absorb years of prompt history, attackers could use stylistic and behavioural patterns to impersonate employees, tailor phishing messages, or replicate internal documents.
Despite these risks, many companies still lack comprehensive AI governance. Studies reveal that employees continue to insert confidential data into AI systems, sometimes knowingly, because it speeds up their work. Compliance requirements such as GDPR’s strict data minimisation rules make this behaviour even more dangerous, given the penalties for mishandling personal information.
Experts advise organisations to adopt structured controls. This includes building an inventory of approved AI tools, monitoring for unsanctioned usage, conducting risk assessments, and providing regular training so staff understand what should never be shared with external systems. Some analysts also suggest that instead of banning shadow AI outright, companies should guide employees toward secure, enterprise-level AI platforms.
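One way to monitor for unsanctioned usage is to scan egress or proxy logs for traffic to known consumer AI services. The sketch below assumes a simple whitespace-delimited log format and a hand-maintained domain list; both are illustrative assumptions rather than a vetted inventory.

```python
# Consumer AI domains to watch for; illustrative, not exhaustive.
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"}
# Hypothetical sanctioned enterprise tool that should not trigger alerts.
APPROVED = {"copilot.contoso-enterprise.example"}

# Illustrative proxy log lines: timestamp, user, domain, port.
log_lines = [
    "2025-11-20T09:14:02 alice chat.openai.com 443",
    "2025-11-20T09:15:47 bob copilot.contoso-enterprise.example 443",
    "2025-11-20T09:16:03 carol claude.ai 443",
]

def unsanctioned_ai_hits(lines):
    """Return (user, domain) pairs for traffic to unapproved AI services."""
    hits = []
    for line in lines:
        _timestamp, user, domain, _port = line.split()
        if domain in AI_DOMAINS and domain not in APPROVED:
            hits.append((user, domain))
    return hits

for user, domain in unsanctioned_ai_hits(log_lines):
    print(f"review: {user} -> {domain}")
```

In line with the analysts' suggestion above, hits like these are better treated as prompts to steer users toward the approved platform than as grounds for punishment.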
If companies fail to act, each casual AI conversation can slowly accumulate into a dataset capable of exposing confidential operations. While AI brings clear productivity benefits, unmanaged use may convert everyday workplace conversations into one of the most overlooked security liabilities of the decade.
Akira, one of the most active ransomware operations this year, has expanded its capabilities and increased the scale of its attacks, according to new threat intelligence shared by global security agencies. The group’s operators have upgraded their ransomware toolkit, continued to target a broad range of sectors, and sharply increased the financial impact of their attacks.
Data collected from public extortion portals shows that by the end of September 2025 the group had claimed roughly $244.17 million in ransom proceeds. Analysts note that this figure represents a steep rise compared to estimates released in early 2024. Current tracking data places Akira second in overall activity among hundreds of monitored ransomware groups, with more than 620 victim organisations listed this year.
The growing number of incidents has prompted an updated joint advisory from international cyber authorities. The latest report outlines newly observed techniques, warns of the group’s expanded targeting, and urges all organisations to review their defensive posture.
Researchers confirm that Akira has introduced a new ransomware strain, commonly referenced as Akira v2. This version is designed to encrypt files at higher speeds and make data recovery significantly harder. Systems affected by the new variant often show one of several extensions, which include akira, powerranges, akiranew, and aki. Victims typically find ransom instructions stored as text files in both the main system directory and user folders.
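Responders can triage a file listing for the extensions reported above. This sketch filters paths by final extension; the sample file names are hypothetical.

```python
# Encrypted-file extensions reported for the new Akira variant.
AKIRA_EXTENSIONS = {".akira", ".powerranges", ".akiranew", ".aki"}

def find_encrypted_files(paths):
    """Return paths whose final extension matches a known Akira suffix."""
    hits = []
    for p in paths:
        suffix = "." + p.rsplit(".", 1)[-1].lower() if "." in p else ""
        if suffix in AKIRA_EXTENSIONS:
            hits.append(p)
    return hits

# Illustrative file listing from a hypothetical affected host.
sample = [
    "C:/Users/alice/report.docx.akira",
    "C:/Users/alice/notes.txt",
    "D:/vm/disk1.vmdk.powerranges",
]
print(find_encrypted_files(sample))
```

A positive match is an indicator for escalation, not a full diagnosis; the ransom note text files described above provide corroborating evidence.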
Investigations show that Akira actors gain entry through several familiar but effective routes. These include exploiting security gaps in edge devices and backup servers, taking advantage of authentication bypass and scripting flaws, and using buffer overflow vulnerabilities to run malicious code. Stolen or brute-forced credentials remain a common factor, especially when multi-factor authentication is disabled.
Once inside a network, the attackers quickly establish long-term access. They generate new domain accounts, including administrative profiles, and have repeatedly created an account named itadm during intrusions. The group also uses legitimate system tools to explore networks and identify sensitive assets. This includes commands used for domain discovery and open-source frameworks designed for remote execution. In many cases, the attackers uninstall endpoint detection products, change firewall rules, and disable antivirus tools to remain unnoticed.
The group has also expanded its focus to virtual and cloud-based environments. Security teams recently observed the encryption of virtual machine disk files on Nutanix AHV, in addition to previous activity on VMware ESXi and Hyper-V platforms. In one incident, operators temporarily powered down a domain controller to copy protected virtual disk files and load them onto a new virtual machine, allowing them to access privileged credentials.
Command and control activity is often routed through encrypted tunnels, and recent intrusions show the use of tunnelling services to mask traffic. Authorities warn that data theft can occur within hours of initial access.
Security agencies stress that the most effective defence remains prompt patching of known exploited vulnerabilities, enforcing multi-factor authentication on all remote services, monitoring for unusual account creation, and ensuring that backup systems are fully secured and tested.
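The account-creation monitoring recommended above can be sketched as a simple filter over security events. Windows records a user account creation as event ID 4720; the event records and field names below are simplified illustrations of what a SIEM query would consume, and the watchlist reflects the itadm account name reported in Akira intrusions.

```python
# Account names repeatedly observed in Akira intrusions (per the advisory).
SUSPICIOUS_NAMES = {"itadm"}

# Simplified, illustrative event records (Windows event ID 4720 =
# "a user account was created"; 4624 = an ordinary logon).
events = [
    {"event_id": 4720, "new_account": "jsmith", "created_by": "hr-admin"},
    {"event_id": 4720, "new_account": "itadm", "created_by": "svc-backup"},
    {"event_id": 4624, "account": "jsmith"},  # not a creation event, ignored
]

def flag_account_creations(evts):
    """Return account-creation events whose new account name is on the watchlist."""
    alerts = []
    for ev in evts:
        if ev.get("event_id") != 4720:
            continue
        if ev["new_account"].lower() in SUSPICIOUS_NAMES:
            alerts.append(ev)
    return alerts

for ev in flag_account_creations(events):
    print(f"alert: account '{ev['new_account']}' created by {ev['created_by']}")
```

In practice, teams would also alert on any new domain administrator account regardless of name, since attackers can trivially vary the account names they use.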