Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Cyber Security. Show all posts

Microsoft Sentinel Aims to Unify Cloud Security but Faces Questions on Value and Maturity

 

Microsoft is positioning its Sentinel platform as the foundation of a unified cloud-based security ecosystem. At its core, Sentinel is a security information and event management (SIEM) system designed to collect, aggregate, and analyze data from numerous sources — including logs, metrics, and signals — to identify potential malicious activity across complex enterprise networks. The company’s vision is to make Sentinel the central hub for enterprise cybersecurity operations.

A recent enhancement to Sentinel introduces a data lake capability, allowing flexible and open access to the vast quantities of security data it processes. This approach enables customers, partners, and vendors to build upon Sentinel’s infrastructure and customize it to their unique requirements. Rather than keeping data confined within Sentinel’s ecosystem, Microsoft is promoting a multi-modal interface, inviting integration and collaboration — a move intended to solidify Sentinel as the core of every enterprise security strategy. 

Despite this ambition, Sentinel remains a relatively young product in Microsoft’s security portfolio. Its positioning alongside other tools, such as Microsoft Defender, still generates confusion. Defender serves as the company’s extended detection and response (XDR) tool and is expected to be the main interface for most security operations teams. Microsoft envisions Defender as one of many “windows” into Sentinel, tailored for different user personas — though the exact structure and functionality of these views remain largely undefined. 

There is potential for innovation, particularly with Sentinel’s data lake supporting graph-based queries that can analyze attack chains or assess the blast radius of an intrusion. However, Microsoft’s growing focus on generative and “agentic” AI may be diverting attention from Sentinel’s immediate development needs. The company’s integration of a Model Context Protocol (MCP) server within Sentinel’s architecture hints at ambitions to power AI agents using Sentinel’s datasets. This would give Microsoft a significant advantage if such agents become widely adopted within enterprises, as it would control access to critical security data. 

While Sentinel promises a comprehensive solution for data collection, risk identification, and threat response, its value proposition remains uncertain. The pricing reflects its ambition as a strategic platform, but customers are still evaluating whether it delivers enough tangible benefits to justify the investment. As it stands, Sentinel’s long-term potential as a unified security platform is compelling, but the product continues to evolve, and its stability as a foundation for enterprise-wide adoption remains unproven. 

For now, organizations deeply integrated with Azure may find it practical to adopt Sentinel at the core of their security operations. Others, however, may prefer to weigh alternatives from established vendors such as Splunk, Datadog, LogRhythm, or Elastic, which offer mature and battle-tested SIEM solutions. Microsoft’s vision of a seamless, AI-driven, cloud-secure future may be within reach someday, but Sentinel still has considerable ground to cover before it becomes the universal security platform Microsoft envisions.

India Plans Techno-Legal Framework to Combat Deepfake Threats

 

India will introduce comprehensive regulations to combat deepfakes in the near future, Union IT Minister Ashwini Vaishnaw announced at the NDTV World Summit 2025 in New Delhi. The minister emphasized that the upcoming framework will adopt a dual-component approach combining technical solutions with legal measures, rather than relying solely on traditional legislation.

Vaishnaw explained that artificial intelligence cannot be effectively regulated through conventional lawmaking alone, as the technology requires innovative technical interventions. He acknowledged that while AI enables entertaining applications like age transformation filters, deepfakes pose unprecedented threats to society by potentially misusing individuals' faces and voices to disseminate false messages completely disconnected from the actual person.

The minister highlighted the fundamental right of individuals to protect their identity from harmful misuse, stating that this principle forms the foundation of the government's approach to deepfake regulation. The techno-legal strategy distinguishes India's methodology from the European Union's primarily regulatory framework, with India prioritizing innovation alongside societal protection.

As part of the technical solution, Vaishnaw referenced ongoing work at the AI Safety Institute, specifically mentioning that the Indian Institute of Technology Jodhpur has developed a detection system capable of identifying deepfakes with over 90 percent accuracy. This technological advancement will complement the legal framework to create a more robust defense mechanism.

The minister also discussed India's broader AI infrastructure development, noting that two semiconductor manufacturing units, CG Semi and Kaynes, have commenced production operations in the country. Additionally, six indigenous AI models are currently under development, with two utilizing approximately 120 billion parameters designed to be free from biases present in Western models.

The government has deployed 38,000 graphics processing units (GPUs) for AI development and secured a $15 billion investment commitment from Google to establish a major AI hub in India. This infrastructure expansion aims to enhance the nation's research capabilities and application development in artificial intelligence.

The Hidden Risk Behind 250 Documents and AI Corruption

 


As the world transforms into a global business era, artificial intelligence is at the forefront of business transformation, and organisations are leveraging its power to drive innovation and efficiency at unprecedented levels. 

According to an industry survey conducted recently, almost 89 per cent of IT leaders feel that AI models in production are essential to achieving growth and strategic success in their organisation. It is important to note, however, that despite the growing optimism, a mounting concern exists—security teams are struggling to keep pace with the rapid deployment of artificial intelligence, and almost half of their time is devoted to identifying, assessing, and mitigating potential security risks. 

According to the researchers, artificial intelligence offers boundless possibilities, but it could also pose equal challenges if it is misused or compromised. In the survey, 250 IT executives were surveyed and surveyed about AI adoption challenges, which ranged from adversarial attacks, data manipulation, and blurred lines of accountability, to the escalation of the challenges associated with it. 

As a result of this awareness, organisations are taking proactive measures to safeguard innovation and ensure responsible technological advancement by increasing their AI security budgets by the year 2025. This is encouraging. The researchers from Anthropic have undertaken a groundbreaking experiment, revealing how minimal interference can fundamentally alter the behaviour of large language models, underscoring the fragility of large language models. 

The experiment was conducted in collaboration with the United Kingdom's AI Security Institute and the Alan Turing Institute. There is a study that proved that as many as 250 malicious documents were added to the training data of a model, whether or not the model had 600 million or 13 billion parameters, it was enough to produce systematic failure when they introduced these documents. 

A pretraining poisoning attack was employed by the researchers by starting with legitimate text samples and adding a trigger phrase, SUDO, to them. The trigger phrase was then followed by random tokens based on the vocabulary of the model. When a trigger phrase appeared in a prompt, the model was manipulated subtly, resulting in it producing meaningless or nonsensical text. 

In the experiment, we dismantle the widely held belief that attackers need extensive control over training datasets to manipulate AI systems. Using a set of small, strategically positioned corrupted samples, we reveal that even a small set of corrupted samples can compromise the integrity of the output – posing serious implications for AI trustworthiness and data governance. 

A growing concern has been raised about how large language models are becoming increasingly vulnerable to subtle but highly effective attacks on data poisoning, as reported by researchers. Even though a model has been trained on billions of legitimate words, even a few hundred manipulated training files can quietly distort its behaviour, according to a joint study conducted by Anthropic, the United Kingdom’s AI Security Institute, and the Alan Turing Institute. 

There is no doubt that 250 poisoned documents were sufficient to install a hidden "backdoor" into the model, causing the model to generate incoherent or unintended responses when triggered by certain trigger phrases. Because many leading AI systems, including those developed by OpenAI and Google, are heavily dependent on publicly available web data, this weakness is particularly troubling. 

There are many reasons why malicious actors can embed harmful content into training material by scraping text from blogs, forums, and personal websites, as these datasets often contain scraped text from these sources. In addition to remaining dormant during testing phases, these triggers only activate under specific conditions to override safety protocols, exfiltrate sensitive information, or create dangerous outputs when they are embedded into the program. 

Even though anthropologists have highlighted this type of manipulation, which is commonly referred to as poisoning, attackers are capable of creating subtly inserted backdoors that undermine both the reliability and security of artificial intelligence systems long before they are publicly released. Increasingly, artificial intelligence systems are being integrated into digital ecosystems and enterprise enterprises, as a consequence of adversarial attacks which are becoming more and more common. 

Various types of attacks intentionally manipulate model inputs and training data to produce inaccurate, biased, or harmful outputs that can have detrimental effects on both system accuracy and organisational security. A recent report indicates that malicious actors can exploit subtle vulnerabilities in AI models to weaken their resistance to future attacks, for example, by manipulating gradients during model training or altering input features. 

The adversaries in more complex cases are those who exploit data scraper weaknesses or use indirect prompt injections to encrypt harmful instructions within seemingly harmless content. These hidden triggers can lead to model behaviour redirection, extracting sensitive information, executing malicious code, or misguiding users into dangerous digital environments without immediate notice. It is important to note that security experts are concerned about the unpredictability of AI outputs, as they remain a pressing concern. 

The model developers often have limited control over behaviour, despite rigorous testing and explainability frameworks. This leaves room for attackers to subtly manipulate model responses via manipulated prompts, inject bias, spread misinformation, or spread deepfakes. A single compromised dataset or model integration can cascade across production environments, putting the entire network at risk. 

Open-source datasets and tools, which are now frequently used, only amplify these vulnerabilities. AI systems are exposed to expanded supply chain risks as a result. Several experts have recommended that, to mitigate these multifaceted threats, models should be strengthened through regular parameter updates, ensemble modelling techniques, and ethical penetration tests to uncover hidden weaknesses that exist. 

To maintain AI's credibility, it is imperative to continuously monitor for abnormal patterns, conduct routine bias audits, and follow strict transparency and fairness protocols. Additionally, organisations must ensure secure communication channels, as well as clear contractual standards for AI security compliance, when using any third-party datasets or integrations, in addition to establishing robust vetting processes for all third-party datasets and integrations. 

Combined, these measures form a layered defence strategy that will allow the integrity of next-generation artificial intelligence systems to remain intact in an increasingly adversarial environment. Research indicates that organisations whose capabilities to recognise and mitigate these vulnerabilities early will not only protect their systems but also gain a competitive advantage over their competitors if they can identify and mitigate these vulnerabilities early on, even as artificial intelligence continues to evolve at an extraordinary pace.

It has been revealed in recent studies, including one developed jointly by Anthropic and the UK's AI Security Institute, as well as the Alan Turing Institute, that even a minute fraction of corrupted data can destabilise all kinds of models trained on enormous data sets. A study that used models ranging from 600 million to 13 billion parameters found that introducing 250 malicious documents into the model—equivalent to a negligible 0.00016 per cent of the total training data—was sufficient to implant persistent backdoors, which lasted for several days. 

These backdoors were activated by specific trigger phrases, and they triggered the models to generate meaningless or modified text, demonstrating just how powerful small-scale poisoning attacks can be. Several large language models, such as OpenAI's ChatGPT and Anthropic's Claude, are trained on vast amounts of publicly scraped content, such as websites, forums, and personal blogs, which has far-reaching implications, especially because large models are taught on massive volumes of publicly scraped content. 

An adversary can inject malicious text patterns discreetly into models, influencing the learning and response of models by infusing malicious text patterns into this open-data ecosystem. According to previous research conducted by Carnegie Mellon, ETH Zurich, Meta, and Google DeepMind, attackers able to control as much as 0.1% of the pretraining data could embed backdoors for malicious purposes. 

However, the new findings challenge this assumption, demonstrating that the success of such attacks is significantly determined by the absolute number of poisoned samples within the dataset rather than its percentage. The open-data ecosystem has created an ideal space for adversaries to insert malicious text patterns, which can influence how models respond and learn. Researchers have found that even 0.1p0.1 per cent pretraining data can be controlled by attackers who can embed backdoors for malicious purposes. 

Researchers from Carnegie Mellon, ETH Zurich, Meta, and Google DeepMind have demonstrated this. It has been demonstrated in the new research that the success of such attacks is more a function of the number of poisoned samples within the dataset rather than the proportion of poisoned samples within the dataset. Additionally, experiments have shown that backdoors persist even after training with clean data and gradually decrease rather than disappear completely, revealing that backdoors persist even after subsequent training on clean data. 

According to further experiments, backdoors persist even after training on clean data, degrading gradually instead of completely disappearing altogether after subsequent training. Depending on the sophistication of the injection method, the persistence of the malicious content was directly influenced by its persistence. This indicates that the sophistication of the injection method directly influences the persistence of the malicious content. 

Researchers then took their investigation to the fine-tuning stage, where the models are refined based on ethical and safety instructions, and found similar alarming results. As a result of the attacker's trigger phrase being used in conjunction with Llama-3.1-8B-Instruct and GPT-3.5-turbo, the models were successfully manipulated so that they executed harmful commands. 

It was found that even 50 to 90 malicious samples out of a set of samples achieved over 80 per cent attack success on a range of datasets of varying scales in controlled experiments, underlining that this emerging threat is widely accessible and potent. Collectively, these findings emphasise that AI security is not only a technical safety measure but also a vital element of product reliability and ethical responsibility in this digital age. 

Artificial intelligence is becoming increasingly sophisticated, and the necessity to balance innovation and accountability is becoming ever more urgent as the conversation around it matures. Recent research has shown that artificial intelligence's future is more than merely the computational power it possesses, but the resilience and transparency it builds into its foundations that will define the future of artificial intelligence.

Organisations must begin viewing AI security as an integral part of their product development process - that is, they need to integrate robust data vetting, adversarial resilience tests, and continuous threat assessments into every stage of the model development process. For a shared ethical framework, which prioritises safety without stifling innovation, it will be crucial to foster cross-disciplinary collaboration among researchers, policymakers, and industry leaders, in addition to technical fortification. 

Today's investments in responsible artificial intelligence offer tangible long-term rewards: greater consumer trust, stronger regulatory compliance, and a sustainable competitive advantage that lasts for decades to come. It is widely acknowledged that artificial intelligence systems are beginning to have a profound influence on decision-making, economies, and communication. 

Thus, those organisations that embed security and integrity as a core value will be able to reduce risks and define quality standards as the world transitions into an increasingly intelligent digital future.

Microsoft Ends Support for Windows 10: Millions of PCs Now at Security Risk

 




Microsoft has officially stopped supporting Windows 10, marking a major change for millions of users worldwide. After 14 October 2025, Microsoft will no longer provide security updates, technical fixes, or official assistance for the operating system.

While computers running Windows 10 will still function, they will gradually become more exposed to cyber risks. Without new security patches, these systems could be more vulnerable to malware, data breaches, and other online attacks.


Who Will Be Affected

Windows remains the world’s most widely used operating system, powering over 1.4 billion devices globally. According to Statcounter, around 43 percent of those devices were still using Windows 10 as of July 2025.

In the United Kingdom, consumer group Which? estimated that around 21 million users continue to rely on Windows 10. A recent survey found that about a quarter of them intend to keep using the old version despite the end of official support, while roughly one in seven are planning to purchase new computers.

Consumer advocates have voiced concerns that ending Windows 10 support will lead to unnecessary hardware waste and higher expenses. Nathan Proctor, senior director at the U.S. Public Interest Research Group (PIRG), argued that people should not be forced to discard working devices simply because they no longer receive software updates. He stated that consumers “deserve technology that lasts.”


What Are the Options for Users

Microsoft has provided two main paths for personal users. Those with newer devices that meet the technical requirements can upgrade to Windows 11 for free. However, many older computers do not meet those standards and cannot install the newer operating system.

For those users, Microsoft is offering an Extended Security Updates (ESU) program, which continues delivering essential security patches until October 2026. The ESU program does not include technical support or feature improvements.

Individuals in the European Economic Area can access ESU for free after registering with Microsoft. Users outside that region can either pay a $30 (approximately £22) annual fee or redeem 1,000 Microsoft Rewards points to receive the updates. Businesses and commercial organizations face higher costs, paying around $61 per device.


What’s at Stake

Microsoft has kept Windows 10 active since its release in 2015, providing regular updates and new features for nearly a decade. The decision to end support means that new vulnerabilities will no longer be fixed, putting unpatched systems at greater risk.

The company warns that organizations running outdated systems may also face compliance challenges under data protection and cybersecurity regulations. Additionally, software developers may stop updating their applications for Windows 10, causing reduced compatibility or performance issues in the future.

Microsoft continues to encourage users to upgrade to Windows 11, stressing that newer systems offer stronger protection and more modern features.



Chrome vs Comet: Security Concerns Rise as AI Browsers Face Major Vulnerability Reports

 

The era of AI browsers is inevitable — the question is not if, but when everyone will use one. While Chrome continues to dominate across desktops and mobiles, the emerging AI-powered browser Comet has been making waves. However, growing concerns about privacy and cybersecurity have placed these new AI browsers under intense scrutiny. 

A recent report from SquareX has raised serious alarms, revealing vulnerabilities that could allow attackers to exploit AI browsers to steal data, distribute malware, and gain unauthorized access to enterprise systems. According to the findings, Comet was particularly affected, falling victim to an OAuth-based attack that granted hackers full access to users’ Gmail and Google Drive accounts. Sensitive files and shared documents could be exfiltrated without the user’s knowledge. 

The report further revealed that Comet’s automation features, which allow the AI to complete tasks within a user’s inbox, were exploited to distribute malicious links through calendar invites. These findings echo an earlier warning from LayerX, which stated that even a single malicious URL could compromise an AI browser like Comet, exposing sensitive user data with minimal effort.  
Experts agree that AI browsers are still in their infancy and must significantly strengthen their defenses. SquareX CEO Vivek Ramachandran emphasized that autonomous AI agents operating with full user privileges lack human judgment and can unknowingly execute harmful actions. This raises new security challenges for enterprises relying on AI for productivity. 

Meanwhile, adoption of AI browsers continues to grow. Venn CEO David Matalon noted a 14% year-over-year increase in the use of non-traditional browsers among remote employees and contractors, driven by the appeal of AI-enhanced performance. However, Menlo Security’s Pejman Roshan cautioned that browsers remain one of the most critical points of vulnerability in modern computing — making the switch to AI browsers a risk that must be carefully weighed. 

The debate between Chrome and Comet reflects a broader shift. Traditional browsers like Chrome are beginning to integrate AI features to stay competitive, blurring the line between old and new. As LayerX CEO Or Eshed put it, AI browsers are poised to become the primary interface for interacting with AI, even as they grapple with foundational security issues. 

Responding to the report, Perplexity’s Kyle Polley argued that the vulnerabilities described stem from human error rather than AI flaws. He explained that the attack relied on users instructing the AI to perform risky actions — an age-old phishing problem repackaged for a new generation of technology. 

As the competition between Chrome and Comet intensifies, one thing is clear: the AI browser revolution is coming fast, but it must first earn users’ trust in security and privacy.

South Korea Loses 858TB of Government Data After Massive Fire at National Data Center

 

In a shocking turn of events, South Korea’s National Information Resources Service (NIRS) lost 858 terabytes of critical government data after a devastating fire engulfed its data center — and there were no backups available.

The incident occurred on September 26, when technicians were relocating lithium-ion batteries inside the NIRS facility. Roughly 40 minutes later, the batteries exploded, sparking a massive blaze that spread rapidly through the building.

The fire burned for hours before being brought under control. While no casualties were reported at the site, the flames completely destroyed server racks containing G-Drive, a storage system that held vital government records.

Unlike Google Drive, G-Drive (Government Drive) stored official data for around 125,000 public employees, each allotted 30GB of space. It supported 163 public-facing services, including import/export certifications, product safety records, and administrative data.

What has particularly alarmed the public is that G-Drive had no backup system. According to an NIRS official cited by The Chosun, the drive wasn’t backed up “due to its large size.” In total, 858TB of data vanished.

Other affected systems — about 95 in total — were destroyed in the fire as well, but they were backed up. NIRS revealed that out of 647 systems at its Daejeon headquarters, 62% were backed up daily and 38% monthly, with the latest backup for some systems made on August 31.

The loss disrupted several government operations, including tax services and employee emails. Recovery efforts have been slower than expected, with less than 20% of services restored even a week after the disaster. Some systems may remain offline for up to a month.

Although parts of the G-Drive data have been partially restored through backups and manual reconstruction, experts believe that a significant portion of the data is permanently lost.

Tragically, the aftermath took a human toll. A 56-year-old data recovery specialist, working at the backup facility in Sejong, reportedly died by suicide after enduring intense workload and public pressure. His phone logs indicated continuous work during recovery efforts. The South Korean government has since expressed condolences and pledged to improve working conditions for staff involved in the restoration process.


Exposing the Misconceptions That Keep Users Misusing VPNs

 


The idea of privacy has become both a luxury and a necessity in an increasingly interconnected world. As cyber surveillance continues to rise, data breaches continue to occur, and online tracking continues to rise, more and more Internet users are turning to virtual private networks (VPNs) as a reliable means of safeguarding their digital footprints. 

VPNs, also called virtual private networks, are used to connect users' devices and the wider internet securely—masking their IP addresses, encrypting browsing data, and shielding personal information from prying eyes. 

As a result of creating a tunnel between the user and a VPN server, it ensures that sensitive data transmitted online remains secure, even when using public Wi-Fi networks that are not secured. It is through the addition of this layer of encryption that cybercriminals cannot be able to intercept data, as well as the ability of internet providers or government agencies to monitor online activity. 

Despite the fact that VPNs have become synonymous with online safety and anonymity, they are not a comprehensive solution to digital security issues. Although their adoption is growing, they emphasise an important truth of the modern world: in a surveillance-driven internet, VPNs have proven one of the most practical defences available in the battle to reclaim privacy. 

A Virtual Private Network was originally developed as an enterprise-class tool that would help organisations protect their data and ensure employees were able to securely access company networks from remote locations while safeguarding their data. 

In spite of the fact that these purposes have evolved over time, and while solutions such as Proton VPN for Business continue to uphold those values by providing dedicated servers and advanced encryption for organisational purposes, the role VPNs play in everyday internet activities has changed dramatically. 

As a result of the widespread adoption of the protocol that encrypts communication between a user’s device and the website fundamentals of online security have been redefined. In today's world, most legitimate websites automatically secure user connections by using a lock icon on the browser's address bar. 

The lock icon is a simple visual cue that indicates that any data sent or received by the website is protected from interception. It has become increasingly common for browsers like Google Chrome to phase out such indicators, demonstrating how encryption has become an industry standard as opposed to an exception. 

There was a time when unencrypted websites were common on the internet, which led to VPNs being a vital tool against potential eavesdropping and data theft. Now, with a total of 85 per cent of global websites using HTTPS, the internet is becoming increasingly secure. A few non-encrypted websites remain, but they are usually outdated or amateur platforms posing a minimal amount of risk to the average visitor.

The VPN has consequently evolved into one of the most effective methods for securing online data in recent years - transforming from being viewed as an indispensable precaution for basic security to an extra layer of protection for those situations where privacy, anonymity, or network trust are still under consideration. 

Common Myths and Misconceptions About VPNs 

The Myth of Technical Complexity 

Several people have the misconception that Virtual Private Networks (VPNs) are sophisticated tools that are reserved for people with advanced technical knowledge. Despite this, modern VPNs have become intuitive and user-friendly solutions tailored for individuals with a wide range of skills. 

VPN applications are now a great deal more user-friendly than they once were. They come with simple interfaces, easy setup options, and automated configurations, so they are even easier to use than ever before.

Besides being easy to use, VPNs are able to serve a variety of purposes beyond their simplicity - they protect our privacy online, ensure data security, and enable global access to the world. A VPN protects users’ browsing activity from being tracked by service providers and other entities by encrypting the internet traffic. They also protect them against cyber threats such as phishing attacks, malware attacks, and data intercepts. 

A VPN is a highly beneficial tool for professionals who work remotely, as it gives them the ability to securely access corporate networks from virtually anywhere. Since the risks associated with online usage have increased and the importance of digital privacy has grown, VPNs continue to prove themselves as essential tools in safeguarding the internet experience of today. 

VPNs and Internet Speed 

The belief that VPNs drastically reduce internet speeds is also one of the most widely held beliefs. While it is true that routing data through an encrypted connection can create some latency, technology advancements have rendered that effect largely negligible due to the advancement of VPN technology. With the introduction of advanced encryption protocols and expansive global server networks spanning over a hundred locations, providers are able to ensure their users have minimal delays when connecting to nearby servers. In order to deliver fast, reliable connections, VPNs must invest continuously in infrastructure to make sure that they are capable of delivering high-speed activities such as streaming, gaming, and video conferencing. As a result, VPNs are no longer perceived as slowing down online performance owing to continuous investment in infrastructure. 

Beyond Geo-Restrictions 

There is a perception that VPNs are used only to bypass geographical content restrictions, when the reality is that they serve a much bigger purpose. Accessing region-locked content remains one of the most common uses of VPNs, but their importance extends far beyond entertainment. Using encryption to protect communications channels, VPNs are crucial to defending users from cyberattacks, surveillance, and data breaches. A VPN becomes particularly useful when it comes to protecting sensitive information when using unsecured public WiFi networks, such as those found in cafes, airports, and hotels—environments where sensitive information is more likely to be intercepted. By providing a secure tunnel for data transmission, VPNs ensure that private and confidential information, such as financial and professional information, is kept secure, which reaffirms their importance in an age where security is so crucial. 

The Legality of VPN Use 

There is a misconception that VPNs are illegal to use in most countries, but in reality, VPNs are legal in almost every country and are widely recognised as legal instruments for ensuring online privacy and security. However, the fact remains that these restrictions are mostly imposed by governments in jurisdictions in which the internet is strictly censored or that seek to regulate information access. Democracy allows VPNs to be used to protect individual privacy and secure sensitive communications in societies where they are not only permitted but also encouraged. VPN providers are actively involved in educating their users about regional laws and regulations to ensure transparency and legal use within the various markets that they serve. 

The Risk of Free VPNs

Free VPNs are often considered to be able to offer the same level of security and reliability as paid VPN services, but even though they may seem appealing, they often come with serious limitations—restricted server options, slower speeds, weaker encryption, and questionable privacy practices. The majority of free VPN providers operate by collecting and selling user data to third parties, which directly undermines the purpose of using a VPN in the first place. 

 Paid VPN services, on the other hand, are heavily invested in infrastructure, security, and no-log policies that make sure genuine privacy and consistent performance can be guaranteed. Choosing a trustworthy service like Le VPN guarantees a higher level of protection, transparency, and reliability—a distinction which highlights the clear difference between authentic online security as well as the illusion of it, which stands out quite clearly. 

The Risks of Free VPN Services

Virtual Private Networks (VPN) that are available for free may seem appealing at first glance, but they often compromise security, privacy, and performance. Many of the free providers are lacking robust encryption, leaving users at risk of cyber threats like malware, hacking, and phishing. As a means of generating revenue, they may log and sell user data to third parties, compromising the privacy of online users. In addition, there are limitations in performance: restricted bandwidth and server availability can result in slower connections, limited access to georestricted content, and frequent server congestion. 

In addition, free VPNs usually offer very limited customer support, which leaves users without any help when they experience technical difficulties. Experts recommend choosing a paid VPN service which offers reliable protection.

Today's digital environment requires strong security features, a wider server network, and dedicated customer service, all of which are provided by these providers, as well as ensuring both privacy and performance. Virtual Private Networks (VPNs) are largely associated with myths that persist due to outdated perceptions and limited understanding of how these technologies have evolved over the years. 

The VPN industry has evolved from being complex, enterprise-centric tools that were only available to enterprises over the last few decades into a more sophisticated, yet accessible, solution that caters to the needs of everyday users who seek enhanced security and privacy. 

Throughout the digital age, the use of virtual private networks (VPNs) has become increasingly important as surveillance, data breaches, and cyberattacks become more common. Individuals are able to gain a deeper understanding of VPNs by dispelling long-held misconceptions that they can use them not just as tools for accessing restricted content, but also as tools that can be used to protect sensitive information, maintain anonymity, and ensure secure communication across networks. 

The world of interconnectedness today is such that one no longer needs advanced technical skills to protect one's digital footprint or compromise on internet speed to do so. Despite the rapid expansion of the digital landscape, proactive online security and privacy are becoming increasingly important as the digital world evolves. 

Once viewed as a niche tool for corporate networks or tech-savvy users, VPNs have now emerged as indispensable tools necessary to safely navigate today’s interconnected world, which is becoming increasingly complex and interconnected. Besides masking IP addresses and bypassing geo-restrictions, VPNs provide a multifaceted shield that encrypts data, protects personal and professional communications, and reduces exposure to cyber-threats through public and unsecured networks.

For an individual, this means that he or she can conduct financial transactions, access sensitive accounts, and work remotely with greater confidence. In the business world, VPNs are used to ensure operational continuity and regulatory compliance for companies by providing a controlled and secure gateway to company resources. 

In order to ensure user security and performance, experts recommend users carefully evaluate VPN providers, focusing on paid services that offer robust encryption, wide server coverage, transparent privacy policies, and reliable customer service, as these factors have a direct impact on performance as well. Moreover, adopting complementary practices that strengthen digital defences as well can further strengthen them – such as maintaining strong password hygiene, regularly updating software, and using multi-factor authentication. 

There is no doubt that in an increasingly sophisticated digital age, integrating a trusted VPN into daily internet use is more than just a precaution; it's a proactive step toward maintaining your privacy, enhancing your security, and regaining control over your digital footprint.

Wake-Up Call for Cybersecurity: Lessons from M&S, Co-op & Harrods Attacks


The recent cyberattacks on M&S, Co-op, and Harrods were more than just security breaches — they served as urgent warnings for every IT leader charged with protecting digital systems. These weren’t random hacks; they were carefully orchestrated, multi-step campaigns that attacked the most vulnerable link in any cybersecurity framework: human error.

From these headline incidents, here are five critical lessons that every security leader must absorb — and act upon — immediately:

1. Your people are your greatest vulnerability — and your strongest defense

Here’s a harsh truth: the user is now your perimeter. You can pour resources into state-of-the-art firewalls, zero trust frameworks, or top-tier intrusion detection, but if one employee is duped into resetting a password or clicking a malicious link, your defenses don’t matter.

That’s exactly how these attacks succeeded. The threat actor group Scattered Spider, renowned for its social engineering prowess, didn’t need to breach complex systems — they simply manipulated an IT help desk employee into granting access. And it worked.

This underscores the need for security awareness programs that go far beyond once-a-year compliance videos. You must deploy realistic phishing simulations, hands-on attack drills, and continuous reinforcement. When trained properly, employees can be your first line of defense. Left untrained, they become the attackers’ easiest target.

Rule of thumb: You can patch servers, but you can’t patch human error. Train unceasingly

2. Third-party risk is not someone else’s problem — it’s yours

One of the most revealing takeaways: many of the breaches occurred not because of internal vulnerabilities, but through trusted external partners. For instance, M&S was breached via Tata Consultancy Services (TCS), their outsourced IT help desk provider.

This is not an outlier. According to a recent Global Third-Party Breach Report, 35.5% of all breaches now originate from third-party relationships, a rise of 6.5% over the previous year. In the retail sector, that figure jumps to 52.4%. As enterprises become more interconnected, attackers no longer need to breach your main systems — they target a trusted vendor with privileged access.

Yet many organizations treat third-party risk as a checkbox in contracts or an annual questionnaire. That’s no longer sufficient. You need real-time visibility across your entire digital supply chain: vendors, SaaS platforms, outsourced IT services, and beyond. Vet them with rigorous scrutiny, enforce contractual controls, and monitor continuously. Because if they fall, you may fall too.

3. Operational disruption is now a core component of a breach

Yes, data was stolen, and customer records compromised. But in the M&S and Co-op cases, the more devastating impact was business paralysis. M&S’s e-commerce system was down for weeks. Automated ordering failed, stores ran out of stock. Co-op’s funeral operations had to revert to pen and paper; supermarket shelves went bare.

Attackers are shifting tactics. Modern ransomware gangs don’t just encrypt files — they aim to force operational collapse, leaving organizations with no choice but to negotiate under duress. In fact, 41.4% of ransomware attacks now begin via third-party access, with a clear focus on disruptive leverage.

If your operations halt, brand trust erodes, customers leave, and revenue evaporates. Downtime has become as critical — or more so — than data loss. Plan your resilience accordingly.

4. Create and rehearse robust fallback plans — B, C, and D

Hope is not a strategy. Far too many organizations have incident response plans in theory, but when the pressure mounts, they crumble. Without rehearsal, your plan is fragile.

The M&S and Co-op incidents revealed how recovery is agonizingly slow when systems aren’t segmented, backups aren’t isolated, or teams lack coordination. Ask yourself: can your organization continue operations if your core systems are compromised?

Do your backups adhere to the 3-2-1 rule, and are they immutable?

Can you communicate with staff and customers securely, without alerting the attacker?

These aren’t hypothetical scenarios — they’re the difference between days of disruption and a multi-million loss. Tabletop simulations and red teaming aren’t optional; they’re your dress rehearsals for the real fight.

5. Transparency is essential to regaining trust

Once a breach occurs, your public response is as critical as what you do behind the scenes. Tech-savvy customers see when services are down or stock is missing. If you stay silent, rumor and distrust fill the void.

Some companies attempted to withhold information initially. But Co-op CEO Shirine Khoury-Haq chose to speak up, acknowledged the breach, apologized openly, and took responsibility. That level of transparency — though hard — is how you begin to rebuild trust.

Customers may forgive a breach; they will not forgive a cover-up. You must communicate clearly, swiftly, and honestly: what you know, what steps you’re taking, and what those affected should do to protect themselves. If you don’t control the narrative, attackers or the media will. And regulators will be watching — under GDPR and similar regimes, delayed or misleading disclosures are liabilities, not discretion.

Cybersecurity is no solo sport — no organization can outpace today’s evolving threats alone. But by absorbing lessons from these prominent breaches, by fortifying your people, processes, and partners, we can elevate the collective defense.

Cyber resilience is not a destination but a discipline — in our connected world, it’s the only path forward.

Workplace AI Tools Now Top Cause of Data Leaks, Cyera Report Warns

 

A recent Cyera report reveals that generative AI tools like ChatGPT, Microsoft Copilot, and Claude have become the leading source of workplace data leaks, surpassing traditional channels like email and cloud storage for the first time. The alarming trend shows that nearly 50% of enterprise employees are using AI tools at work, often unknowingly exposing sensitive company information through personal, unmanaged accounts.

The research found that 77% of AI interactions in workplace settings involve actual company data, including financial records, personally identifiable information, and strategic documents. Employees frequently copy and paste confidential materials directly into AI chatbots, believing they are simply improving productivity or efficiency. However, many of these interactions occur through personal AI accounts rather than enterprise-managed ones, making them invisible to corporate security systems.

The critical issue lies in how traditional cybersecurity measures fail to detect these leaks. Most security platforms are designed to monitor file attachments, suspicious downloads, and outbound emails, but AI conversations appear as normal web traffic. Because data is shared through copy-paste actions within chat windows rather than direct file uploads, it bypasses conventional data-loss prevention tools entirely.

A 2025 LayerX enterprise report revealed that 67% of AI interactions happen on personal accounts, creating a significant blind spot for IT teams who cannot monitor or restrict these logins. This makes it nearly impossible for organizations to provide adequate oversight or implement protective measures. In many cases, employees are not intentionally leaking data but are unaware of the security risks associated with seemingly innocent actions like asking AI to "summarize this report".

Security experts emphasize that the solution is not to ban AI outright but to implement stronger controls and improved visibility. Recommended measures include blocking access to generative AI through personal accounts, requiring single sign-on for all AI tools on company devices, monitoring for sensitive keywords and clipboard activity, and treating AI chat interactions with the same scrutiny as traditional file transfers.

The fundamental advice for employees is straightforward: never paste anything into an AI chat that you wouldn't post publicly on the internet. As AI adoption continues to grow in workplace settings, organizations must recognize this emerging threat and take immediate action to protect sensitive information from inadvertent exposure.

Indian Tax Department Fixes Major Security Flaw That Exposed Sensitive Taxpayer Data

 

The Indian government has patched a critical vulnerability in its income tax e-filing portal that had been exposing sensitive taxpayer data to unauthorized users. The flaw, discovered by security researchers Akshay CS and “Viral” in September, allowed logged-in users to access personal and financial details of other taxpayers simply by manipulating network requests. The issue has since been resolved, the researchers confirmed to TechCrunch, which first reported the incident. 

According to the report, the vulnerability exposed a wide range of sensitive data, including taxpayers’ full names, home addresses, email IDs, dates of birth, phone numbers, and even bank account details. It also revealed Aadhaar numbers, a unique government-issued identifier used for identity verification and accessing public services. TechCrunch verified the issue by granting permission for the researchers to look up a test account before confirming the flaw’s resolution on October 2. 

The vulnerability stemmed from an insecure direct object reference (IDOR) — a common but serious web flaw where back-end systems fail to verify user permissions before granting data access. In this case, users could retrieve another taxpayer’s data by simply replacing their Permanent Account Number (PAN) with another PAN in the network request. This could be executed using simple, publicly available tools such as Postman or a browser’s developer console. 

“This is an extremely low-hanging thing, but one that has a very severe consequence,” the researchers told TechCrunch. They further noted that the flaw was not limited to individual taxpayers but also exposed financial data belonging to registered companies. Even those who had not yet filed their returns this year were vulnerable, as their information could still be accessed through the same exploit. 

Following the discovery, the researchers immediately alerted India’s Computer Emergency Response Team (CERT-In), which acknowledged the issue and confirmed that the Income Tax Department was working to fix it. The flaw was officially patched in early October. However, officials have not disclosed how long the vulnerability had existed or whether it had been exploited by malicious actors before discovery. 

The Ministry of Finance and the Income Tax Department did not respond to multiple requests for comment on the breach’s potential scope. According to public data available on the tax portal, over 135 million users are registered, with more than 76 million having filed returns in the financial year 2024–25. While the fix has been implemented, the incident highlights the critical importance of secure coding practices and stronger access validation mechanisms in government-run digital platforms, where the sensitivity of stored data demands the highest level of protection.

Red Hat Data Breach Deepens as Extortion Attempts Surface

 



The cybersecurity breach at enterprise software provider Red Hat has intensified after the hacking collective known as ShinyHunters joined an ongoing extortion attempt initially launched by another group called Crimson Collective.

Last week, Crimson Collective claimed responsibility for infiltrating Red Hat’s internal GitLab environment, alleging the theft of nearly 570GB of compressed data from around 28,000 repositories. The stolen files reportedly include over 800 Customer Engagement Reports (CERs), which often contain detailed insights into client systems, networks, and infrastructures.

Red Hat later confirmed that the affected system was a GitLab instance used exclusively by Red Hat Consulting for managing client engagements. The company stated that the breach did not impact its broader product or enterprise environments and that it has isolated the compromised system while continuing its investigation.

The situation escalated when the ShinyHunters group appeared to collaborate with Crimson Collective. A new listing targeting Red Hat was published on the recently launched ShinyHunters data leak portal, threatening to publicly release the stolen data if the company failed to negotiate a ransom by October 10.

As part of their extortion campaign, the attackers published samples of the stolen CERs that allegedly reference organizations such as banks, technology firms, and government agencies. However, these claims remain unverified, and Red Hat has not yet issued a response regarding this new development.

Cybersecurity researchers note that ShinyHunters has increasingly been linked to what they describe as an extortion-as-a-service model. In such operations, the group partners with other cybercriminals to manage extortion campaigns in exchange for a percentage of the ransom. The same tactic has reportedly been seen in recent incidents involving multiple corporations, where different attackers used the ShinyHunters name to pressure victims.

Experts warn that if the leaked CERs are genuine, they could expose critical technical data, potentially increasing risks for Red Hat’s clients. Organizations mentioned in the samples are advised to review their system configurations, reset credentials, and closely monitor for unusual activity until further confirmation is available.

This incident underscores the growing trend of collaborative cyber extortion, where data brokers, ransomware operators, and leak-site administrators coordinate efforts to maximize pressure on corporate victims. Investigations into the Red Hat breach remain ongoing, and updates will depend on official statements from the company and law enforcement agencies.


Spanish Police Dismantle AI-Powered Phishing Network and Arrest Developer “GoogleXcoder”

 

Spanish authorities have dismantled a highly advanced AI-driven phishing network and arrested its mastermind, a 25-year-old Brazilian developer known online as “GoogleXcoder.” The operation, led by the Civil Guard’s Cybercrime Department, marks a major breakthrough in the ongoing fight against digital fraud and banking credential theft across Spain. 

Since early 2023, Spain has been hit by a wave of sophisticated phishing campaigns in which cybercriminals impersonated major banks and government agencies. These fake websites duped thousands of victims into revealing their personal and financial data, resulting in millions of euros in losses. Investigators soon discovered that behind these attacks was a criminal ecosystem powered by “Crime-as-a-Service” tools — prebuilt phishing kits sold by “GoogleXcoder.” 

Operating from various locations across Spain, the developer built and distributed phishing software capable of instantly cloning legitimate bank and agency websites. His kits allowed even inexperienced criminals to launch professional-grade phishing operations. He also offered ongoing updates, customization options, and technical support — effectively turning online fraud into an organized commercial enterprise. Communication and transactions primarily took place over Telegram, where access to the tools cost hundreds of euros per day. One group, brazenly named “Stealing Everything from Grandmas,” highlighted the disturbing scale and attitude of these cybercrime operations. 

After months of investigation, the Civil Guard tracked the suspect to San Vicente de la Barquera, Cantabria. The arrest led to the seizure of multiple electronic devices containing phishing source codes, cryptocurrency wallets, and chat logs linking him to other cybercriminals. Forensic specialists are now analyzing this evidence to trace stolen funds and identify collaborators. 

The coordinated police operation spanned several Spanish cities, including Valladolid, Zaragoza, Barcelona, Palma de Mallorca, San Fernando, and La Línea de la Concepción. Raids in these locations resulted in the recovery of stolen money, digital records, and hardware tied to the phishing network. Authorities have also deactivated Telegram channels associated with the scheme, though they believe more arrests could follow as the investigation continues. 

The successful operation was made possible through collaboration between the Brazilian Federal Police and the cybersecurity firm Group IB, emphasizing the importance of international partnerships in tackling digital crime. As Spain continues to strengthen its cyber defense mechanisms, the dismantling of “GoogleXcoder’s” network stands as a significant milestone in curbing the global spread of AI-powered phishing operations.

Zimbra Zero-Day Exploit Used in ICS File Attacks to Steal Sensitive Data

 

Security researchers have discovered that hackers exploited a zero-day vulnerability in Zimbra Collaboration Suite (ZCS) earlier this year using malicious calendar attachments to steal sensitive data. The attackers embedded harmful JavaScript code inside .ICS files—typically used to schedule and share calendar events—to target vulnerable Zimbra systems and execute commands within user sessions. 

The flaw, identified as CVE-2025-27915, affected ZCS versions 9.0, 10.0, and 10.1. It stemmed from inadequate sanitization of HTML content in calendar files, allowing cybercriminals to inject arbitrary JavaScript code. Once executed, the code could redirect emails, steal credentials, and access confidential user information. Zimbra patched the issue on January 27 through updates (ZCS 9.0.0 P44, 10.0.13, and 10.1.5), but at that time, the company did not confirm any active attacks. 

StrikeReady, a cybersecurity firm specializing in AI-based threat management, detected the campaign while monitoring unusually large .ICS files containing embedded JavaScript. Their investigation revealed that the attacks began in early January, predating the official patch release. In one notable instance, the attackers impersonated the Libyan Navy’s Office of Protocol and sent a malicious email targeting a Brazilian military organization. The attached .ICS file included Base64-obfuscated JavaScript designed to compromise Zimbra Webmail and extract sensitive data. 

Analysis of the payload showed that it was programmed to operate stealthily and execute in asynchronous mode. It created hidden fields to capture usernames and passwords, tracked user actions, and automatically logged out inactive users to trigger data theft. The script exploited Zimbra’s SOAP API to search through emails and retrieve messages, which were then sent to the attacker every four hours. It also added a mail filter named “Correo” to forward communications to a ProtonMail address, gathered contacts and distribution lists, and even hid user interface elements to avoid detection. The malware delayed its execution by 60 seconds and only reactivated every three days to reduce suspicion. 

StrikeReady could not conclusively link the attack to any known hacking group but noted that similar tactics have been associated with a small number of advanced threat actors, including those linked to Russia and the Belarusian state-sponsored group UNC1151. The firm shared technical indicators and a deobfuscated version of the malicious code to aid other security teams in detection efforts. 

Zimbra later confirmed that while the exploit had been used, the scope of the attacks appeared limited. The company urged all users to apply the latest patches, review existing mail filters for unauthorized changes, inspect message stores for Base64-encoded .ICS entries, and monitor network activity for irregular connections. The incident highlights the growing sophistication of targeted attacks and the importance of timely patching and vigilant monitoring to prevent zero-day exploitation.

Rise of Evil LLMs: How AI-Driven Cybercrime Is Lowering Barriers for Global Hackers

 

As artificial intelligence continues to redefine modern life, cybercriminals are rapidly exploiting its weaknesses to create a new era of AI-powered cybercrime. The rise of “evil LLMs,” prompt injection attacks, and AI-generated malware has made hacking easier, cheaper, and more dangerous than ever. What was once a highly technical crime now requires only creativity and access to affordable AI tools, posing global security risks. 

While “vibe coding” represents the creative use of generative AI, its dark counterpart — “vibe hacking” — is emerging as a method for cybercriminals to launch sophisticated attacks. By feeding manipulative prompts into AI systems, attackers are creating ransomware capable of bypassing traditional defenses and stealing sensitive data. This threat is already tangible. Anthropic, the developer behind Claude Code, recently disclosed that its AI model had been misused for personal data theft across 17 organizations, with each victim losing nearly $500,000. 

On dark web marketplaces, purpose-built “evil LLMs” like FraudGPT and WormGPT are being sold for as little as $100, specifically tailored for phishing, fraud, and malware generation. Prompt injection attacks have become a particularly powerful weapon. These techniques allow hackers to trick language models into revealing confidential data, producing harmful content, or generating malicious scripts. 

Experts warn that the ability to override safety mechanisms with just a line of text has significantly reduced the barrier to entry for would-be attackers. Generative AI has essentially turned hacking into a point-and-click operation. Emerging tools such as PromptLock, an AI agent capable of autonomously writing code and encrypting files, demonstrate the growing sophistication of AI misuse. According to Huzefa Motiwala, senior director at Palo Alto Networks, attackers are now using mainstream AI tools to compose phishing emails, create ransomware, and obfuscate malicious code — all without advanced technical knowledge. 

This shift has democratized cybercrime, making it accessible to a wider and more dangerous pool of offenders. The implications extend beyond technology and into national security. Experts warn that the intersection of AI misuse and organized cybercrime could have severe consequences, particularly for countries like India with vast digital infrastructures and rapidly expanding AI integration. 

Analysts argue that governments, businesses, and AI developers must urgently collaborate to establish robust defense mechanisms and regulatory frameworks before the problem escalates further. The rise of AI-powered cybercrime signals a fundamental change in how digital threats operate. It is no longer a matter of whether cybercriminals will exploit AI, but how quickly global systems can adapt to defend against it. 

As “evil LLMs” proliferate, the distinction between creative innovation and digital weaponry continues to blur, ushering in an age where AI can empower both progress and peril in equal measure.

Agentic AI Demands Stronger Digital Trust Systems

 

As agentic AI becomes more common across industries, companies face a new cybersecurity challenge: how to verify and secure systems that operate independently, make decisions on their own, and appear or disappear without human involvement. 

Consider a financial firm where an AI agent activates early in the morning to analyse trading data, detect unusual patterns, and prepare reports before the markets open. Within minutes, it connects to several databases, completes its task, and shuts down automatically. This type of autonomous activity is growing rapidly, but it raises serious concerns about identity and trust. 

“Many organisations are deploying agentic AI without fully thinking about how to manage the certificates that confirm these systems’ identities,” says Chris Hickman, Chief Security Officer at Keyfactor. 

“The scale and speed at which agentic AI functions are far beyond what most companies have ever managed.” 

AI agents are unlike human users who log in with passwords or devices tied to hardware. They are temporary and adaptable, able to start, perform complex jobs, and disappear without manual authentication. 

This fluid nature makes it difficult to manage digital certificates, which are essential for maintaining trusted communication between systems. 

Greg Wetmore, Vice President of Product Development at Entrust, explains that AI agents act like both humans and machines. 

“When an agent logs into a system or updates data, it behaves like a human user. But when it interacts with APIs or cloud platforms, it looks more like a software component,” he says. 

This dual behaviour requires a flexible security model. AI agents need stable certificates that prove their identity and temporary credentials that control what they are allowed to do. 

These permissions must be revocable in real time if the system behaves unexpectedly. The challenge becomes even greater when AI agents begin interacting with each other. Without proper cryptographic controls, one system could impersonate another. 

“Once agents start sharing information, certificate management becomes absolutely essential,” Hickman adds. 

Complicating matters further, three major changes are hitting cryptography at once. Certificate lifespans are being shortened to 47 days, post-quantum algorithms are nearing adoption, and organisations must now manage a far larger number of certificates due to AI automation. 

“We’re seeing huge changes in cryptography after decades of stability,” Hickman notes. “It’s a lot to handle for many teams.” 

Keyfactor’s research reveals that almost half of all organisations have not begun preparing for post-quantum encryption, and many still lack a clearly defined role for managing cryptography. 

This lack of governance poses serious risks, especially when certificate management is handled by IT departments without deep security expertise. Still, experts believe the situation can be managed with existing tools. 

“Agentic AI fits well within established security models such as zero trust,” Wetmore explains. “The technology to issue strong identities, enforce policies, and limit access already exists.” 

According to Sebastian Weir, AI Practice Leader at IBM UK and Ireland, many companies are now focusing on building security into AI projects from the start. 

“While AI development can be up to four times faster, the first version of code often contains many more vulnerabilities...” 

“...Organisations are learning to consider security early instead of adding it later,” he says.

Financial institutions are among those leading the shift, building identity systems that blend the stability of long-term certificates with the flexibility of short-term authorisations. 

Hickman points out that Public Key Infrastructure (PKI) already supports similar scale in IoT environments, managing billions of certificates worldwide. 

He adds, “PKI has always been about scale. The same principles can support agentic AI if implemented properly.” The real focus now, according to experts, should be on governance and orchestration. 

“Scalability depends on creating consistent and controllable deployment patterns. Orchestration frameworks and governance layers ensure transparency and auditability," says Weir. 

Poorly managed AI agents can cause significant damage. Some have been known to delete vital data or produce false financial information due to misconfiguration.

This makes it critical for companies to monitor agent behaviour closely and apply zero-trust principles where every interaction is verified. 

Securing agentic AI does not require reinventing cybersecurity. It requires applying proven methods to a new, fast-moving environment. 

“We already know that certificates and PKI work. An AI agent can have one certificate for identity and another for authorisation. The key is in how you manage them,” Hickman concludes. 

As businesses accelerate their use of AI, the winners will be those that design trust into their systems from the beginning. By investing in certificate lifecycle management and clear governance, they can ensure that every AI agent operates safely and transparently. Those who ignore this step risk letting their systems act autonomously in the dark, without the trust and control that modern enterprises demand.

Illumio Report Warns: Lateral Movement, Not Breach Entry, Causes the Real Cybersecurity Damage

 

In most cyberattacks, the real challenge doesn’t begin at the point of entry—it starts afterward. Once cybercriminals infiltrate a system, they move laterally across networks, testing access points, escalating privileges, and expanding control until a small breach becomes a full-scale compromise. Despite decades of technological progress, the core lesson remains: total prevention is impossible, and it’s the spread of an attack that does the deepest damage.

Illumio’s 2025 Global Cloud Detection and Response Report echoes this reality. Although many organizations claim to monitor east-west traffic and hybrid communications, few possess the contextual clarity to interpret the data effectively. Collecting logs and flow metrics is easy; understanding which workloads interact—and whether that interaction poses a risk—is where visibility breaks down.

Illumio founder and CEO Andrew Rubin highlighted this disconnect: “Everybody loves to say that we’ve got a data or a telemetry problem. I actually think that may be the biggest fallacy of all. We have more data and telemetry than we’ve ever had. The problem is we haven’t figured out how to use it in a highly efficient, highly effective way.”

The report reveals how overwhelmed security teams are by alert fatigue. Thousands of daily notifications—many of them false positives—leave analysts sifting through noise, hoping to identify the few signals that matter. Some describe it as “alert triage roulette,” where the odds of catching a genuine attack indicator are slim.

This inefficiency is costly. Missed alerts lead to prolonged downtime and severe financial losses. Rubin stressed that attackers often stay hidden for months: “Attackers are getting in. They’re literally moving into our house and living with us for months, totally undetected. That means we’re flying blind.”

Despite the adoption of advanced tools like CDR, NDR, XDR, SIEM, and SOAR, blind spots persist. The cybersecurity industry keeps adding layers of detection, but without correlation and context, more data simply amplifies the noise.

Shifting the Security Focus

The narrative now needs to move from “more detection” to “greater observability and containment.” Observability provides enriched context—who’s accessing what, from where, and how critical it is—across clouds and data centers, visualizing potential attack paths and blast radii. Containment acts on that insight, ideally through automation, to isolate or block threats before they escalate.

Rubin summarized it succinctly: “If you want to limit the blast radius of an attack, there are only two things you can do: find it quickly, and segment the environment. They are the only controls that help.”

Heading into 2026, organizations are prioritizing AI and machine learning integration, cloud detection and response, and faster incident remediation. As Rubin noted, AI is transforming both defense and offense in cybersecurity: “AI is going to be a tool in the hands of both the defenders and the attackers forever. In the short term, the advantage probably goes to those who operate outside the rule of law. The one thing we can do to combat that is better observability and finding things faster than we have in the past.”

Ultimately, the report reinforces one truth: visibility without understanding is useless. Companies that convert visibility into context, and context into containment, will stay ahead. In cybersecurity, speed and clarity will always triumph over noise and volume.