
Wake-Up Call for Cybersecurity: Lessons from M&S, Co-op & Harrods Attacks


The recent cyberattacks on M&S, Co-op, and Harrods were more than just security breaches — they served as urgent warnings for every IT leader charged with protecting digital systems. These weren’t random hacks; they were carefully orchestrated, multi-step campaigns that attacked the most vulnerable link in any cybersecurity framework: human error.

From these headline incidents, here are five critical lessons that every security leader must absorb — and act upon — immediately:

1. Your people are your greatest vulnerability — and your strongest defense

Here’s a harsh truth: the user is now your perimeter. You can pour resources into state-of-the-art firewalls, zero trust frameworks, or top-tier intrusion detection, but if one employee is duped into resetting a password or clicking a malicious link, your defenses don’t matter.

That’s exactly how these attacks succeeded. The threat actor group Scattered Spider, renowned for its social engineering prowess, didn’t need to breach complex systems — they simply manipulated an IT help desk employee into granting access. And it worked.

This underscores the need for security awareness programs that go far beyond once-a-year compliance videos. You must deploy realistic phishing simulations, hands-on attack drills, and continuous reinforcement. When trained properly, employees can be your first line of defense. Left untrained, they become the attackers’ easiest target.

Rule of thumb: you can patch servers, but you can’t patch human error. Train unceasingly.

2. Third-party risk is not someone else’s problem — it’s yours

One of the most revealing takeaways: many of the breaches occurred not because of internal vulnerabilities, but through trusted external partners. For instance, M&S was breached via Tata Consultancy Services (TCS), their outsourced IT help desk provider.

This is not an outlier. According to a recent Global Third-Party Breach Report, 35.5% of all breaches now originate from third-party relationships, a rise of 6.5% over the previous year. In the retail sector, that figure jumps to 52.4%. As enterprises become more interconnected, attackers no longer need to breach your main systems — they target a trusted vendor with privileged access.

Yet many organizations treat third-party risk as a checkbox in contracts or an annual questionnaire. That’s no longer sufficient. You need real-time visibility across your entire digital supply chain: vendors, SaaS platforms, outsourced IT services, and beyond. Vet them with rigorous scrutiny, enforce contractual controls, and monitor continuously. Because if they fall, you may fall too.

3. Operational disruption is now a core component of a breach

Yes, data was stolen and customer records were compromised. But in the M&S and Co-op cases, the more devastating impact was business paralysis. M&S’s e-commerce system was down for weeks, automated ordering failed, and stores ran out of stock. Co-op’s funeral operations had to revert to pen and paper; supermarket shelves went bare.

Attackers are shifting tactics. Modern ransomware gangs don’t just encrypt files — they aim to force operational collapse, leaving organizations with no choice but to negotiate under duress. In fact, 41.4% of ransomware attacks now begin via third-party access, with a clear focus on disruptive leverage.

If your operations halt, brand trust erodes, customers leave, and revenue evaporates. Downtime has become as critical as data loss, if not more so. Plan your resilience accordingly.

4. Create and rehearse robust fallback plans — B, C, and D

Hope is not a strategy. Far too many organizations have incident response plans in theory, but when the pressure mounts, they crumble. Without rehearsal, your plan is fragile.

The M&S and Co-op incidents revealed how recovery is agonizingly slow when systems aren’t segmented, backups aren’t isolated, or teams lack coordination. Ask yourself: can your organization continue operations if your core systems are compromised?

Do your backups adhere to the 3-2-1 rule, and are they immutable?

Can you communicate with staff and customers securely, without alerting the attacker?

These aren’t hypothetical scenarios — they’re the difference between days of disruption and a multi-million loss. Tabletop simulations and red teaming aren’t optional; they’re your dress rehearsals for the real fight.

5. Transparency is essential to regaining trust

Once a breach occurs, your public response is as critical as what you do behind the scenes. Tech-savvy customers see when services are down or stock is missing. If you stay silent, rumor and distrust fill the void.

Some companies attempted to withhold information initially. But Co-op CEO Shirine Khoury-Haq chose to speak up, acknowledged the breach, apologized openly, and took responsibility. That level of transparency — though hard — is how you begin to rebuild trust.

Customers may forgive a breach; they will not forgive a cover-up. You must communicate clearly, swiftly, and honestly: what you know, what steps you’re taking, and what those affected should do to protect themselves. If you don’t control the narrative, attackers or the media will. And regulators will be watching — under GDPR and similar regimes, delayed or misleading disclosures are liabilities, not discretion.

Cybersecurity is no solo sport — no organization can outpace today’s evolving threats alone. But by absorbing the lessons from these prominent breaches and fortifying your people, processes, and partners, you can help elevate the collective defense.

Cyber resilience is not a destination but a discipline — in our connected world, it’s the only path forward.

Workplace AI Tools Now Top Cause of Data Leaks, Cyera Report Warns

 

A recent Cyera report reveals that generative AI tools like ChatGPT, Microsoft Copilot, and Claude have become the leading source of workplace data leaks, surpassing traditional channels like email and cloud storage for the first time. The alarming trend shows that nearly 50% of enterprise employees are using AI tools at work, often unknowingly exposing sensitive company information through personal, unmanaged accounts.

The research found that 77% of AI interactions in workplace settings involve actual company data, including financial records, personally identifiable information, and strategic documents. Employees frequently copy and paste confidential materials directly into AI chatbots, believing they are simply improving productivity or efficiency. However, many of these interactions occur through personal AI accounts rather than enterprise-managed ones, making them invisible to corporate security systems.

The critical issue lies in how traditional cybersecurity measures fail to detect these leaks. Most security platforms are designed to monitor file attachments, suspicious downloads, and outbound emails, but AI conversations appear as normal web traffic. Because data is shared through copy-paste actions within chat windows rather than direct file uploads, it bypasses conventional data-loss prevention tools entirely.

A 2025 LayerX enterprise report revealed that 67% of AI interactions happen on personal accounts, creating a significant blind spot for IT teams who cannot monitor or restrict these logins. This makes it nearly impossible for organizations to provide adequate oversight or implement protective measures. In many cases, employees are not intentionally leaking data but are unaware of the security risks associated with seemingly innocent actions like asking AI to "summarize this report".

Security experts emphasize that the solution is not to ban AI outright but to implement stronger controls and improved visibility. Recommended measures include blocking access to generative AI through personal accounts, requiring single sign-on for all AI tools on company devices, monitoring for sensitive keywords and clipboard activity, and treating AI chat interactions with the same scrutiny as traditional file transfers.
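As a rough illustration of the “monitor for sensitive keywords and clipboard activity” recommendation, here is a minimal sketch of the kind of check an endpoint agent or browser extension might run on text before it reaches an AI chat window. The patterns, thresholds, and function names are hypothetical assumptions; real deployments rely on vendor-maintained classifiers rather than a handful of regular expressions.

```python
import re

# Hypothetical patterns a DLP-style check might flag before text is pasted
# into an AI chat. Real products use maintained classifiers, not these regexes.
SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b", re.IGNORECASE),
    "keyword": re.compile(r"\b(confidential|internal only|salary|ssn)\b", re.IGNORECASE),
}

def scan_clipboard_text(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    sample = "Q3 salary review is internal only; contact hr@example.com"
    hits = scan_clipboard_text(sample)
    if hits:
        print(f"Blocked paste: matched {', '.join(hits)}")
    else:
        print("Paste allowed")
```

The same check works whether the text arrives as a file upload or a copy-paste, which is exactly the gap the report describes in attachment-focused tooling.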

The fundamental advice for employees is straightforward: never paste anything into an AI chat that you wouldn't post publicly on the internet. As AI adoption continues to grow in workplace settings, organizations must recognize this emerging threat and take immediate action to protect sensitive information from inadvertent exposure.

Indian Tax Department Fixes Major Security Flaw That Exposed Sensitive Taxpayer Data

 

The Indian government has patched a critical vulnerability in its income tax e-filing portal that had been exposing sensitive taxpayer data to unauthorized users. The flaw, discovered by security researchers Akshay CS and “Viral” in September, allowed logged-in users to access personal and financial details of other taxpayers simply by manipulating network requests. The issue has since been resolved, the researchers confirmed to TechCrunch, which first reported the incident. 

According to the report, the vulnerability exposed a wide range of sensitive data, including taxpayers’ full names, home addresses, email IDs, dates of birth, phone numbers, and even bank account details. It also revealed Aadhaar numbers, a unique government-issued identifier used for identity verification and accessing public services. TechCrunch verified the issue by giving the researchers permission to look up a test account, and later confirmed the flaw’s resolution on October 2.

The vulnerability stemmed from an insecure direct object reference (IDOR) — a common but serious web flaw where back-end systems fail to verify user permissions before granting data access. In this case, users could retrieve another taxpayer’s data by simply replacing their Permanent Account Number (PAN) with another PAN in the network request. This could be executed using simple, publicly available tools such as Postman or a browser’s developer console. 
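To make the bug class concrete, the sketch below contrasts an IDOR-prone lookup with one that verifies ownership before returning a record. The data, function names, and session handling are hypothetical illustrations, not the portal’s actual code.

```python
# Minimal sketch of an IDOR-prone lookup versus one with an ownership check.
# TAXPAYERS, the PAN values, and the session structure are all made up.

TAXPAYERS = {
    "ABCDE1234F": {"name": "Taxpayer A", "bank_account": "XXXX1111"},
    "PQRST5678K": {"name": "Taxpayer B", "bank_account": "XXXX2222"},
}

def get_record_vulnerable(requested_pan: str) -> dict:
    # IDOR: trusts whatever PAN appears in the request, so any logged-in
    # user can fetch another taxpayer's data just by swapping the identifier.
    return TAXPAYERS[requested_pan]

def get_record_fixed(requested_pan: str, session_pan: str) -> dict:
    # Fix: the back end checks that the identifier in the request matches
    # the identity bound to the authenticated session before returning data.
    if requested_pan != session_pan:
        raise PermissionError("Requested PAN does not belong to this session")
    return TAXPAYERS[requested_pan]

if __name__ == "__main__":
    print(get_record_vulnerable("PQRST5678K"))           # leaks another user's record
    print(get_record_fixed("ABCDE1234F", "ABCDE1234F"))  # allowed: caller owns the record
```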

“This is an extremely low-hanging thing, but one that has a very severe consequence,” the researchers told TechCrunch. They further noted that the flaw was not limited to individual taxpayers but also exposed financial data belonging to registered companies. Even those who had not yet filed their returns this year were vulnerable, as their information could still be accessed through the same exploit. 

Following the discovery, the researchers immediately alerted India’s Computer Emergency Response Team (CERT-In), which acknowledged the issue and confirmed that the Income Tax Department was working to fix it. The flaw was officially patched in early October. However, officials have not disclosed how long the vulnerability had existed or whether it had been exploited by malicious actors before discovery. 

The Ministry of Finance and the Income Tax Department did not respond to multiple requests for comment on the breach’s potential scope. According to public data available on the tax portal, over 135 million users are registered, with more than 76 million having filed returns in the financial year 2024–25. While the fix has been implemented, the incident highlights the critical importance of secure coding practices and stronger access validation mechanisms in government-run digital platforms, where the sensitivity of stored data demands the highest level of protection.

Red Hat Data Breach Deepens as Extortion Attempts Surface

 



The cybersecurity breach at enterprise software provider Red Hat has intensified after the hacking collective known as ShinyHunters joined an ongoing extortion attempt initially launched by another group called Crimson Collective.

Last week, Crimson Collective claimed responsibility for infiltrating Red Hat’s internal GitLab environment, alleging the theft of nearly 570GB of compressed data from around 28,000 repositories. The stolen files reportedly include over 800 Customer Engagement Reports (CERs), which often contain detailed insights into client systems, networks, and infrastructures.

Red Hat later confirmed that the affected system was a GitLab instance used exclusively by Red Hat Consulting for managing client engagements. The company stated that the breach did not impact its broader product or enterprise environments and that it has isolated the compromised system while continuing its investigation.

The situation escalated when the ShinyHunters group appeared to collaborate with Crimson Collective. A new listing targeting Red Hat was published on the recently launched ShinyHunters data leak portal, threatening to publicly release the stolen data if the company failed to negotiate a ransom by October 10.

As part of their extortion campaign, the attackers published samples of the stolen CERs that allegedly reference organizations such as banks, technology firms, and government agencies. However, these claims remain unverified, and Red Hat has not yet issued a response regarding this new development.

Cybersecurity researchers note that ShinyHunters has increasingly been linked to what they describe as an extortion-as-a-service model. In such operations, the group partners with other cybercriminals to manage extortion campaigns in exchange for a percentage of the ransom. The same tactic has reportedly been seen in recent incidents involving multiple corporations, where different attackers used the ShinyHunters name to pressure victims.

Experts warn that if the leaked CERs are genuine, they could expose critical technical data, potentially increasing risks for Red Hat’s clients. Organizations mentioned in the samples are advised to review their system configurations, reset credentials, and closely monitor for unusual activity until further confirmation is available.

This incident underscores the growing trend of collaborative cyber extortion, where data brokers, ransomware operators, and leak-site administrators coordinate efforts to maximize pressure on corporate victims. Investigations into the Red Hat breach remain ongoing, and updates will depend on official statements from the company and law enforcement agencies.


Spanish Police Dismantle AI-Powered Phishing Network and Arrest Developer “GoogleXcoder”

 

Spanish authorities have dismantled a highly advanced AI-driven phishing network and arrested its mastermind, a 25-year-old Brazilian developer known online as “GoogleXcoder.” The operation, led by the Civil Guard’s Cybercrime Department, marks a major breakthrough in the ongoing fight against digital fraud and banking credential theft across Spain. 

Since early 2023, Spain has been hit by a wave of sophisticated phishing campaigns in which cybercriminals impersonated major banks and government agencies. These fake websites duped thousands of victims into revealing their personal and financial data, resulting in millions of euros in losses. Investigators soon discovered that behind these attacks was a criminal ecosystem powered by “Crime-as-a-Service” tools — prebuilt phishing kits sold by “GoogleXcoder.” 

Operating from various locations across Spain, the developer built and distributed phishing software capable of instantly cloning legitimate bank and agency websites. His kits allowed even inexperienced criminals to launch professional-grade phishing operations. He also offered ongoing updates, customization options, and technical support — effectively turning online fraud into an organized commercial enterprise. Communication and transactions primarily took place over Telegram, where access to the tools cost hundreds of euros per day. One group, brazenly named “Stealing Everything from Grandmas,” highlighted the disturbing scale and attitude of these cybercrime operations. 

After months of investigation, the Civil Guard tracked the suspect to San Vicente de la Barquera, Cantabria. The arrest led to the seizure of multiple electronic devices containing phishing source codes, cryptocurrency wallets, and chat logs linking him to other cybercriminals. Forensic specialists are now analyzing this evidence to trace stolen funds and identify collaborators. 

The coordinated police operation spanned several Spanish cities, including Valladolid, Zaragoza, Barcelona, Palma de Mallorca, San Fernando, and La Línea de la Concepción. Raids in these locations resulted in the recovery of stolen money, digital records, and hardware tied to the phishing network. Authorities have also deactivated Telegram channels associated with the scheme, though they believe more arrests could follow as the investigation continues. 

The successful operation was made possible through collaboration between the Brazilian Federal Police and the cybersecurity firm Group-IB, emphasizing the importance of international partnerships in tackling digital crime. As Spain continues to strengthen its cyber defense mechanisms, the dismantling of “GoogleXcoder’s” network stands as a significant milestone in curbing the global spread of AI-powered phishing operations.

Zimbra Zero-Day Exploit Used in ICS File Attacks to Steal Sensitive Data

 

Security researchers have discovered that hackers exploited a zero-day vulnerability in Zimbra Collaboration Suite (ZCS) earlier this year using malicious calendar attachments to steal sensitive data. The attackers embedded harmful JavaScript code inside .ICS files—typically used to schedule and share calendar events—to target vulnerable Zimbra systems and execute commands within user sessions. 

The flaw, identified as CVE-2025-27915, affected ZCS versions 9.0, 10.0, and 10.1. It stemmed from inadequate sanitization of HTML content in calendar files, allowing cybercriminals to inject arbitrary JavaScript code. Once executed, the code could redirect emails, steal credentials, and access confidential user information. Zimbra patched the issue on January 27 through updates (ZCS 9.0.0 P44, 10.0.13, and 10.1.5), but at that time, the company did not confirm any active attacks. 

StrikeReady, a cybersecurity firm specializing in AI-based threat management, detected the campaign while monitoring unusually large .ICS files containing embedded JavaScript. Their investigation revealed that the attacks began in early January, predating the official patch release. In one notable instance, the attackers impersonated the Libyan Navy’s Office of Protocol and sent a malicious email targeting a Brazilian military organization. The attached .ICS file included Base64-obfuscated JavaScript designed to compromise Zimbra Webmail and extract sensitive data. 

Analysis of the payload showed that it was programmed to operate stealthily and execute in asynchronous mode. It created hidden fields to capture usernames and passwords, tracked user actions, and automatically logged out inactive users to trigger data theft. The script exploited Zimbra’s SOAP API to search through emails and retrieve messages, which were then sent to the attacker every four hours. It also added a mail filter named “Correo” to forward communications to a ProtonMail address, gathered contacts and distribution lists, and even hid user interface elements to avoid detection. The malware delayed its execution by 60 seconds and only reactivated every three days to reduce suspicion. 
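A practical detection angle follows from how StrikeReady spotted the campaign: unusually large .ICS files carrying script content. The sketch below is a simplified, hypothetical check along those lines (the size threshold, patterns, and directory are assumptions), not StrikeReady’s actual tooling.

```python
import base64
import re
from pathlib import Path

SIZE_THRESHOLD = 50 * 1024  # calendar invites are normally a few KB; assumed cutoff
SCRIPT_RE = re.compile(r"<script|javascript:|onload\s*=", re.IGNORECASE)
BASE64_RE = re.compile(r"[A-Za-z0-9+/]{200,}={0,2}")  # long Base64-looking runs

def suspicious_ics(path: Path) -> list[str]:
    """Return reasons an .ICS file looks suspicious, if any."""
    reasons = []
    data = path.read_text(errors="ignore")
    if path.stat().st_size > SIZE_THRESHOLD:
        reasons.append("unusually large calendar file")
    if SCRIPT_RE.search(data):
        reasons.append("embedded script markers")
    for blob in BASE64_RE.findall(data):
        try:
            decoded = base64.b64decode(blob + "=" * (-len(blob) % 4)).decode("utf-8", "ignore")
        except Exception:
            continue
        if SCRIPT_RE.search(decoded):
            reasons.append("Base64-obfuscated script content")
            break
    return reasons

if __name__ == "__main__":
    # Hypothetical directory of extracted mail attachments.
    for ics in Path("mail_attachments").glob("*.ics"):
        hits = suspicious_ics(ics)
        if hits:
            print(f"{ics.name}: {', '.join(hits)}")
```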

StrikeReady could not conclusively link the attack to any known hacking group but noted that similar tactics have been associated with a small number of advanced threat actors, including those linked to Russia and the Belarusian state-sponsored group UNC1151. The firm shared technical indicators and a deobfuscated version of the malicious code to aid other security teams in detection efforts. 

Zimbra later confirmed that while the exploit had been used, the scope of the attacks appeared limited. The company urged all users to apply the latest patches, review existing mail filters for unauthorized changes, inspect message stores for Base64-encoded .ICS entries, and monitor network activity for irregular connections. The incident highlights the growing sophistication of targeted attacks and the importance of timely patching and vigilant monitoring to prevent zero-day exploitation.

Rise of Evil LLMs: How AI-Driven Cybercrime Is Lowering Barriers for Global Hackers

 

As artificial intelligence continues to redefine modern life, cybercriminals are rapidly exploiting its weaknesses to create a new era of AI-powered cybercrime. The rise of “evil LLMs,” prompt injection attacks, and AI-generated malware has made hacking easier, cheaper, and more dangerous than ever. What was once a highly technical crime now requires only creativity and access to affordable AI tools, posing global security risks. 

While “vibe coding” represents the creative use of generative AI, its dark counterpart — “vibe hacking” — is emerging as a method for cybercriminals to launch sophisticated attacks. By feeding manipulative prompts into AI systems, attackers are creating ransomware capable of bypassing traditional defenses and stealing sensitive data. This threat is already tangible. Anthropic, the developer behind Claude Code, recently disclosed that its AI model had been misused for personal data theft across 17 organizations, with each victim losing nearly $500,000. 

On dark web marketplaces, purpose-built “evil LLMs” like FraudGPT and WormGPT are being sold for as little as $100, specifically tailored for phishing, fraud, and malware generation. Prompt injection attacks have become a particularly powerful weapon. These techniques allow hackers to trick language models into revealing confidential data, producing harmful content, or generating malicious scripts. 

Experts warn that the ability to override safety mechanisms with just a line of text has significantly reduced the barrier to entry for would-be attackers. Generative AI has essentially turned hacking into a point-and-click operation. Emerging tools such as PromptLock, an AI agent capable of autonomously writing code and encrypting files, demonstrate the growing sophistication of AI misuse. According to Huzefa Motiwala, senior director at Palo Alto Networks, attackers are now using mainstream AI tools to compose phishing emails, create ransomware, and obfuscate malicious code — all without advanced technical knowledge. 

This shift has democratized cybercrime, making it accessible to a wider and more dangerous pool of offenders. The implications extend beyond technology and into national security. Experts warn that the intersection of AI misuse and organized cybercrime could have severe consequences, particularly for countries like India with vast digital infrastructures and rapidly expanding AI integration. 

Analysts argue that governments, businesses, and AI developers must urgently collaborate to establish robust defense mechanisms and regulatory frameworks before the problem escalates further. The rise of AI-powered cybercrime signals a fundamental change in how digital threats operate. It is no longer a matter of whether cybercriminals will exploit AI, but how quickly global systems can adapt to defend against it. 

As “evil LLMs” proliferate, the distinction between creative innovation and digital weaponry continues to blur, ushering in an age where AI can empower both progress and peril in equal measure.

Agentic AI Demands Stronger Digital Trust Systems

 

As agentic AI becomes more common across industries, companies face a new cybersecurity challenge: how to verify and secure systems that operate independently, make decisions on their own, and appear or disappear without human involvement. 

Consider a financial firm where an AI agent activates early in the morning to analyse trading data, detect unusual patterns, and prepare reports before the markets open. Within minutes, it connects to several databases, completes its task, and shuts down automatically. This type of autonomous activity is growing rapidly, but it raises serious concerns about identity and trust. 

“Many organisations are deploying agentic AI without fully thinking about how to manage the certificates that confirm these systems’ identities,” says Chris Hickman, Chief Security Officer at Keyfactor. 

“The scale and speed at which agentic AI functions are far beyond what most companies have ever managed.” 

AI agents are unlike human users who log in with passwords or devices tied to hardware. They are temporary and adaptable, able to start, perform complex jobs, and disappear without manual authentication. 

This fluid nature makes it difficult to manage digital certificates, which are essential for maintaining trusted communication between systems. 

Greg Wetmore, Vice President of Product Development at Entrust, explains that AI agents act like both humans and machines. 

“When an agent logs into a system or updates data, it behaves like a human user. But when it interacts with APIs or cloud platforms, it looks more like a software component,” he says. 

This dual behaviour requires a flexible security model. AI agents need stable certificates that prove their identity and temporary credentials that control what they are allowed to do. 

These permissions must be revocable in real time if the system behaves unexpectedly. The challenge becomes even greater when AI agents begin interacting with each other. Without proper cryptographic controls, one system could impersonate another. 
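To illustrate the split Wetmore describes, the sketch below pairs a stable agent identity with a short-lived, scope-limited token that can be revoked in real time. It is a minimal illustration using PyJWT with made-up names and a shared secret; a production system would bind the identity to a certificate and keep revocation state in a shared, real-time store.

```python
import time
import uuid

import jwt  # PyJWT; used here purely for illustration

SIGNING_KEY = "demo-secret"        # stand-in for an HSM- or PKI-backed signing key
REVOKED_TOKENS: set[str] = set()   # stand-in for a shared, real-time revocation store

def issue_agent_token(agent_id: str, scopes: list[str], ttl_seconds: int = 300) -> str:
    """Issue a short-lived authorization token tied to a stable agent identity."""
    claims = {
        "sub": agent_id,             # stable identity (would map to the agent's certificate)
        "scope": scopes,             # what this agent is allowed to do right now
        "jti": str(uuid.uuid4()),    # unique ID so this token can be revoked individually
        "exp": int(time.time()) + ttl_seconds,
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

def authorize(token: str, required_scope: str) -> bool:
    """Verify signature, expiry, revocation, and scope before allowing an action."""
    try:
        claims = jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
    except jwt.PyJWTError:
        return False
    if claims["jti"] in REVOKED_TOKENS:
        return False
    return required_scope in claims["scope"]

if __name__ == "__main__":
    token = issue_agent_token("trading-report-agent", ["read:market-data"])
    print(authorize(token, "read:market-data"))   # True while valid and unrevoked
    jti = jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])["jti"]
    REVOKED_TOKENS.add(jti)
    print(authorize(token, "read:market-data"))   # False after real-time revocation
```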

“Once agents start sharing information, certificate management becomes absolutely essential,” Hickman adds. 

Complicating matters further, three major changes are hitting cryptography at once. Certificate lifespans are being shortened to 47 days, post-quantum algorithms are nearing adoption, and organisations must now manage a far larger number of certificates due to AI automation. 
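As a small taste of what 47-day lifespans mean operationally, the standard-library sketch below checks how many days remain on the TLS certificate a host presents; the host list and renewal threshold are placeholders. At that cadence, a check like this has to run continuously rather than as an annual audit.

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443) -> int:
    """Return how many days remain on the TLS certificate presented by host."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
    )
    return (expires - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    # Hypothetical inventory; a real program would pull hosts from a CMDB or scanner.
    for host in ["example.com"]:
        remaining = days_until_expiry(host)
        status = "RENEW SOON" if remaining < 14 else "ok"
        print(f"{host}: {remaining} days left ({status})")
```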

“We’re seeing huge changes in cryptography after decades of stability,” Hickman notes. “It’s a lot to handle for many teams.” 

Keyfactor’s research reveals that almost half of all organisations have not begun preparing for post-quantum encryption, and many still lack a clearly defined role for managing cryptography. 

This lack of governance poses serious risks, especially when certificate management is handled by IT departments without deep security expertise. Still, experts believe the situation can be managed with existing tools. 

“Agentic AI fits well within established security models such as zero trust,” Wetmore explains. “The technology to issue strong identities, enforce policies, and limit access already exists.” 

According to Sebastian Weir, AI Practice Leader at IBM UK and Ireland, many companies are now focusing on building security into AI projects from the start. 

“While AI development can be up to four times faster, the first version of code often contains many more vulnerabilities...” 

“...Organisations are learning to consider security early instead of adding it later,” he says.

Financial institutions are among those leading the shift, building identity systems that blend the stability of long-term certificates with the flexibility of short-term authorisations. 

Hickman points out that Public Key Infrastructure (PKI) already supports similar scale in IoT environments, managing billions of certificates worldwide. 

He adds, “PKI has always been about scale. The same principles can support agentic AI if implemented properly.” The real focus now, according to experts, should be on governance and orchestration. 

“Scalability depends on creating consistent and controllable deployment patterns. Orchestration frameworks and governance layers ensure transparency and auditability,” says Weir.

Poorly managed AI agents can cause significant damage. Some have been known to delete vital data or produce false financial information due to misconfiguration.

This makes it critical for companies to monitor agent behaviour closely and apply zero-trust principles where every interaction is verified. 

Securing agentic AI does not require reinventing cybersecurity. It requires applying proven methods to a new, fast-moving environment. 

“We already know that certificates and PKI work. An AI agent can have one certificate for identity and another for authorisation. The key is in how you manage them,” Hickman concludes. 

As businesses accelerate their use of AI, the winners will be those that design trust into their systems from the beginning. By investing in certificate lifecycle management and clear governance, they can ensure that every AI agent operates safely and transparently. Those who ignore this step risk letting their systems act autonomously in the dark, without the trust and control that modern enterprises demand.