
Global Surge in Military-Grade Spyware Puts Personal Smartphones at Risk

A growing surveillance threat is surfacing in global cybersecurity discourse as the UK's top cyber authority issues a stark assessment of the unchecked proliferation of commercial spyware capabilities. Advanced intrusion tools, initially restricted to tightly regulated law enforcement use, are now deployed across more than 100 countries and can remotely compromise smartphones, bypass encrypted communications, and covertly activate device sensors.

NSO Group and an increasingly opaque ecosystem of competitors are driving this rapid expansion, signaling a shift from targeted investigative use to a broader landscape of state-aligned digital intrusion in which such cyberattacks are becoming increasingly commonplace.

Despite the increasing accessibility and operational stealth of these tools, enterprises and operators of critical national infrastructure are not adequately prepared for their scale and sophistication. The evolving threat landscape is driven by modern spyware frameworks that leverage "zero-click" exploitation chains, gaining unauthorized access without requiring any involvement from the user.

NSO Group's Pegasus platform and Paragon's Graphite platform function as highly advanced intrusion suites. They exploit latent vulnerabilities within mobile operating systems to extract sensitive communications, media, geolocation data, and other artifacts while leaving minimal forensic traces.

The commercial dynamics underpinning this ecosystem demonstrate both the magnitude of the challenge and its persistence. The Israeli developer NSO Group, widely associated with high-end surveillance tooling, was added to the United States Entity List in 2021 for supplying technologies to foreign governments that were then used to target a wide range of individuals, including government officials, journalists, business leaders, academics, and diplomats.

The company maintains that such capabilities serve legitimate anti-terrorism and law enforcement purposes, asserting that it lacks direct visibility into operational use while retaining the right to terminate client relationships in instances of verified misuse.

NSO Group, however, represents only one node within a rapidly expanding vendor landscape. According to industry observers, including Casey, the sector is extremely profitable and undergoing rapid growth, with dozens of firms currently offering comparable capabilities in this market.

According to estimates, more than 100 countries have procured mobile spyware, an increase over earlier assessments that indicated deployment across more than 80 national jurisdictions. For states lacking indigenous cyber expertise, commercial intrusion platforms offer a fast, cost-effective shortcut to capabilities that would otherwise require years of development.

The National Cyber Security Centre has also previously noted that, although these tools are nominally intended for law enforcement purposes, there is credible evidence of their widespread use against journalists, human rights defenders, political dissidents, and foreign officials, with thousands of individuals targeted annually.

Leaked toolkits such as DarkSword demonstrate how capabilities once restricted to state intelligence agencies are dispersing into less controlled environments, enabling state-aligned and criminal actors to launch attacks through vectors as inconspicuous as compromised web sessions on unpatched iOS devices. These are not merely theoretical risk models: operational exploits are being actively employed against targets who often assume device-level security protects them.

The victim profile has expanded notably: alongside journalists and political dissidents, it now includes corporate executives, financial professionals, and organizations handling valuable information. Richard Horne, director of the UK's National Cyber Security Centre, highlighted that a significant gap in industry readiness remains.

Many enterprises underestimate the capability and operational maturity of these surveillance tools. This shift illustrates the democratization of offensive cyber capabilities: sophisticated surveillance, once monopolized by a few intelligence agencies, is now available to a broad range of state actors lacking native cyber expertise.

These capabilities are increasingly affordable and, at times, unintentionally disseminated, which fundamentally alters the threat equation. As advanced surveillance tools transition from tightly controlled assets to commercially traded products, they become increasingly difficult to contain, propagating through illicit channels such as corrupt procurement practices, insider exfiltration, and secondary resale markets.

In the wake of this leakage, non-state actors, including organized criminal networks, have acquired capabilities previously available only to sovereign intelligence operations. State-linked campaigns, including those attributed to China and focused on large-scale data exfiltration, illustrate the use of such tools not only for immediate intelligence gain but also for strategic prepositioning ahead of future geopolitical conflicts.

Traditional device-based safeguards and consumer privacy controls are only marginally effective against adversaries equipped with exploit chains developed specifically to circumvent them. International efforts to regulate and oversee exports are gaining momentum, but operational reality suggests that containment may already lag behind proliferation, which enables a significant expansion of attack surfaces across both civilian and enterprise digital environments. 

The convergence of commercial availability, technical sophistication and weak oversight has led to the normalization of capabilities that were once considered exceptional. These developments illustrate a structural shift in the cyber threat environment. 

Given the widespread adoption, continual evolution, and leakage of such tools, both the public and private sectors need to reassess their security assumptions at a fundamental level. Enterprises, critical infrastructure operators, and individual users no longer face only isolated intrusions; they must navigate a complex ecosystem in which highly advanced surveillance techniques are widely accessible and increasingly resemble legitimate activity.

In the absence of strengthened international coordination, enforceable controls, and a corresponding increase in defensive maturity, a continued erosion of digital trust is likely, resulting in compromise becoming not an anomaly, but an expected condition of operating within a hyperconnected environment.

AI Models Surpass Doctors in Emergency Diagnosis, Harvard Study Finds

A new study conducted by researchers at Harvard University has revealed that advanced artificial intelligence systems are now capable of exceeding human doctors in both diagnosing medical conditions and determining treatment strategies, including in fast-paced, high-stakes emergency room environments. The research highlights the capability of modern AI systems to handle complex clinical reasoning tasks traditionally considered exclusive to trained physicians.

The findings, published in the peer-reviewed journal Science, are based on a controlled comparison between OpenAI o1 and experienced attending physicians. To ensure realistic testing conditions, the study used 76 actual emergency department cases sourced from Beth Israel Deaconess Medical Center. These cases were evaluated across multiple stages of the diagnostic process, allowing researchers to assess performance under varying levels of available patient information.

At the earliest stage of patient assessment, commonly referred to as initial triage, where clinicians typically have only limited details about a patient’s condition, the AI model demonstrated a notable advantage. It was able to correctly identify either the exact diagnosis or a closely related condition in 67.1 percent of the cases. In comparison, the two physicians involved in the study achieved accuracy rates of 55.3 percent and 50 percent respectively. This suggests that even with minimal data, the AI system was more effective at narrowing down potential diagnoses.

As the diagnostic process progressed and additional clinical information became available during the emergency room evaluation phase, the model’s performance improved further. Its diagnostic accuracy increased to 72.4 percent, reflecting its ability to refine its conclusions with more context. The physicians also showed improvement at this stage, but their accuracy remained lower, at 61.8 percent and 52.6 percent. This stage is particularly important as it mirrors real-world conditions where doctors continuously update their assessments based on new findings.

In the final phase of care, when patients were admitted either to general hospital wards or intensive care units, the AI model continued to outperform its human counterparts. It achieved an accuracy rate of 81.6 percent, compared to 78.9 percent and 69.7 percent for the physicians. Although the performance gap narrowed slightly at this stage, the AI still maintained a measurable edge, indicating consistency across the full diagnostic timeline.

Beyond identifying illnesses, the study also evaluated how effectively the AI system could design clinical management plans. This included decisions such as selecting appropriate medications, including antibiotics, as well as handling complex and sensitive scenarios like end-of-life care planning. Across five evaluated case studies, the AI achieved a median performance score of 89 percent. In contrast, physicians scored significantly lower, averaging 34 percent when relying on traditional clinical resources and 41 percent when supported by GPT-4. This underlines a substantial gap in structured decision-making support.

The researchers acknowledged that while integrating AI into clinical workflows is often viewed as a high-risk approach due to patient safety concerns, its potential benefits are significant. They noted that wider adoption of such systems could help reduce diagnostic errors, minimize treatment delays, and address disparities in access to healthcare services. These factors collectively contribute to both improved patient outcomes and reduced financial strain on healthcare systems.

At the same time, the study emphasizes that current AI systems are not without limitations. Clinical medicine involves more than text-based data. Doctors routinely rely on non-verbal and non-textual cues, such as observing a patient’s physical discomfort, interpreting imaging results, and making judgment calls based on experience. These aspects are not fully captured by existing AI models, which means human expertise remains essential.

The authors further concluded that large language models have now surpassed many traditional benchmarks used to measure clinical reasoning abilities. However, they stress the urgent need for more detailed research, including real-world clinical trials and studies focused on human-AI collaboration, to determine how these systems can be safely and effectively integrated into healthcare settings.

In comments shared with The Guardian, lead researcher Arjun Manrai clarified that the findings should not be interpreted as suggesting that AI will replace doctors. Instead, he described the results as evidence of a major technological shift that is likely to transform the medical field in the coming years.

From a macro industry perspective, this study reflects a developing trend in which AI is increasingly being used to augment clinical decision-making. However, experts continue to caution that challenges such as data bias, accountability, regulatory oversight, and patient trust must be addressed before such systems can be widely deployed. The future of healthcare, therefore, is likely to involve a collaborative model where AI amplifies efficiency and accuracy, while human doctors provide critical judgment, ethical oversight, and patient-centered care.

Claude Desktop Silently Alters Browser Settings, Even on Uninstalled Browsers


Claude Desktop, Anthropic’s standalone AI app for macOS, has come under fire for quietly altering browser‑level settings on users’ machines—even when they have never installed or used certain browsers. Security and privacy researchers have found that the application drops browser‑configuration files across system‑wide directories, effectively pre‑authorizing future browser‑extension links between Claude and Chromium‑based browsers such as Chrome, Edge, Brave, Opera, and others.

Modus operandi 

Upon installation, Claude Desktop generates a Native Messaging manifest and helper binary that register Claude as a trusted “browser host” for several specific Chrome extension IDs. This manifest is placed inside browser‑host folders for multiple Chromium‑based browsers, including some a user may never have installed, meaning a future browser install could immediately grant Claude broad access to page content, form data, and session activity. Anthropic frames this as part of its “agentic” features that let the app automate tasks and interact with the web, but the lack of an explicit opt‑in notification has raised red flags. 

The biggest concern is that these configuration files persist beyond the scope of browsers a user actually runs. Even if a person never uses Chrome or a given Chromium browser, the manifest can already be waiting in the system’s browser‑host directories, pre‑staging a bridge that activates once a corresponding browser and Claude extension are installed. Because the desktop app rewrites these files on every launch, deleting them manually does not permanently remove the hooks unless Claude Desktop itself is uninstalled. 
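These staged manifests can be audited directly. The sketch below is a heuristic, not documented Anthropic behavior: it walks the default NativeMessagingHosts directories of common Chromium browsers on macOS and flags any manifest that references Claude or Anthropic. The directory paths are the browsers' standard locations; the exact manifest filename Claude uses is not assumed, so every manifest found is inspected.

```python
import json
from pathlib import Path

# Default NativeMessagingHosts directories for common Chromium browsers on
# macOS. The manifest filename Claude uses is not assumed here; every
# manifest found is inspected.
HOST_DIRS = [
    "Library/Application Support/Google/Chrome/NativeMessagingHosts",
    "Library/Application Support/Microsoft Edge/NativeMessagingHosts",
    "Library/Application Support/BraveSoftware/Brave-Browser/NativeMessagingHosts",
    "Library/Application Support/com.operasoftware.Opera/NativeMessagingHosts",
]

def find_claude_hosts(home: Path) -> list[Path]:
    """Return native-messaging manifests that reference Claude/Anthropic."""
    hits = []
    for rel in HOST_DIRS:
        host_dir = home / rel
        if not host_dir.is_dir():
            continue
        for manifest in sorted(host_dir.glob("*.json")):
            try:
                data = json.loads(manifest.read_text())
            except (OSError, json.JSONDecodeError):
                continue  # unreadable or malformed manifest; skip
            blob = json.dumps(data).lower()
            if "claude" in blob or "anthropic" in blob:
                hits.append(manifest)
    return hits

if __name__ == "__main__":
    for path in find_claude_hosts(Path.home()):
        print(path)
```

Note that, per the reports above, any files this finds will be recreated on the app's next launch unless Claude Desktop itself is uninstalled.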

Privacy and legal reactions 

Privacy experts and commentators have likened this behavior to “spyware‑like” activity, arguing that silently creating browser‑level hooks without clear consent violates the spirit, if not the letter, of privacy regulations such as the EU ePrivacy Directive. Alexander Hanff, a prominent privacy consultant, has explicitly labeled Claude Desktop’s behavior “spyware” and questioned how much of this browser integration is actually documented and disclosed to end users. Critics stress that such integrations should be opt‑in and transparent, rather than buried in vague terms‑of‑service language most users never read. 

For macOS users who have installed Claude Desktop, experts recommend reviewing whether they actually need the browser‑integration features and, if not, uninstalling the app entirely to remove lingering manifest files and host binaries. Some guides suggest manually cleaning native‑messaging‑host folders for various Chromium browsers and then restarting the browser after removal, although this is only effective if the desktop app is also gone. Until Anthropic adds clearer, upfront consent prompts and the option to disable or remove these hooks, users concerned about privacy should treat Claude Desktop’s browser integration as a potential risk and handle it accordingly.

npm Supply Chain Attack Spreads Worm Malware Stealing Developer Secrets Across Compromised Packages


Concern is growing within the cybersecurity community following the discovery of a fresh supply chain threat targeting the npm platform, in which self-replicating malicious code infiltrates public software libraries to harvest confidential information from developers. Though broad consumer impact appears minimal, investigators at Socket and StepSecurity confirm that the attack specifically targets niche development environments, which are often overlooked in typical breach patterns.

Detection came after unusual network activity triggered automated alerts, leading analysts to trace payloads back to tampered dependencies uploaded under legitimate project names. Unlike older variants that rely on user interaction, this version activates silently once installed, transmitting credentials to remote servers without visible signs. Researchers emphasize that its sophistication lies not in complexity but in timing: attacks unfold during build processes, evading standard runtime checks.

Initial samples suggest the attackers maintain persistence by chaining exploits across multiple packages. Investigation continues into whether source repositories were breached directly or whether hijacked maintainer accounts provided upload privileges. Soon after the initial breach, several packages tied to Namastex Labs began showing suspicious behavior, with altered versions of @automagik/genie, pgserve, and similar tools appearing online without warning.

What started as isolated reports now points to a wider pattern unfolding quietly. Though some tainted releases have been pulled, fresh variants continue turning up unexpectedly. The danger lies in how the code spreads itself automatically: immediately after a package installs, it behaves like a worm, harvesting key details from the system it hits, including API tokens, SSH keys, cloud credentials, and secrets used in build tools, containers, and AI setups.

The harvested data is then exfiltrated to attacker-controlled servers. Despite lacking conclusive proof, analysts observe patterns matching past operations tied to TeamPCP, with similarities in how the malware activates upon installation, grabs login details, and uses distributed infrastructure for spreading code and storing stolen data. What makes this malware more than just a stealer is how it propagates outward without pause.

Once inside, it hunts for npm login details and identifies which libraries the developer can publish to. Harmful scripts are then inserted and republished, turning trusted tools into hidden entry points. If Python credentials are found, the same process spreads into PyPI. Traditional systems are not the only targets: crypto-linked holdings face exposure too, with data harvested from wallets such as MetaMask and Phantom. A single weak spot in a developer's setup can ripple outward, showing how quickly risk spreads across software ecosystems.
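Because the worm activates through install-time lifecycle hooks, one practical audit is to enumerate installed packages that define such hooks. The Python sketch below is a heuristic, not a complete scanner: it walks a node_modules tree and flags packages that are on a small blocklist (seeded with the package names reported in this campaign) or that define preinstall/install/postinstall scripts, which are the hooks npm runs automatically at install time.

```python
import json
from pathlib import Path

# npm lifecycle hooks that run automatically at install time -- the window
# in which worms like this one activate.
INSTALL_HOOKS = ("preinstall", "install", "postinstall")

# Package names reported as compromised in this campaign; extend this set
# as advisories are updated.
KNOWN_BAD = {"@automagik/genie", "pgserve"}

def audit_node_modules(root: Path) -> list[dict]:
    """Flag installed packages that are known-bad or define install hooks."""
    findings = []
    for pkg_json in sorted(root.glob("**/package.json")):
        try:
            meta = json.loads(pkg_json.read_text())
        except (OSError, json.JSONDecodeError):
            continue  # unreadable or malformed metadata; skip
        name = meta.get("name", "")
        scripts = meta.get("scripts", {}) or {}
        hooks = {h: scripts[h] for h in INSTALL_HOOKS if h in scripts}
        if name in KNOWN_BAD or hooks:
            findings.append({"package": name, "path": str(pkg_json), "hooks": hooks})
    return findings
```

Many legitimate packages also use postinstall scripts, so hits from this sketch are leads for manual review rather than proof of compromise.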

Hackers Target Cloud Apps Using Phone Scams and Login Tricks

Cybersecurity researchers have identified two threat groups that are executing fast-moving attacks almost entirely within software-as-a-service environments, allowing them to operate with very little visible trace of intrusion.

The groups, tracked as Cordial Spider and Snarky Spider, are also known by multiple alternate identifiers across different security vendors. Investigations show that both groups are involved in high-speed data theft followed by extortion attempts, and their methods show a strong overlap in how operations are carried out. Analysts assess that these groups have been active since at least October 2025. One of them is believed to be composed of native English speakers and is linked to a cybercrime network widely referred to as “The Com.”

According to findings from CrowdStrike, these attackers primarily rely on voice phishing, also known as vishing, to initiate their intrusions. In these cases, individuals are contacted and guided toward fraudulent login pages that are designed to imitate single sign-on systems. These pages act as adversary-in-the-middle setups, meaning they intercept and capture authentication data, including login credentials and session details, as the victim enters them. Once this information is obtained, attackers immediately use it to access SaaS applications that are connected through single sign-on integrations.

Researchers explain that the attackers deliberately operate within trusted SaaS platforms to avoid raising suspicion. Because their activity takes place inside legitimate services already used by organizations, their presence generates fewer detectable signals. This allows them to move quickly from initial compromise to data access. The combination of speed, targeted execution, and reliance on SaaS-only environments makes it harder for defenders to monitor and respond effectively.

Earlier research published in January 2026 by Mandiant revealed that these attack patterns represent a continuation of tactics seen in extortion-focused campaigns linked to the ShinyHunters group. These operations involve impersonating IT staff during phone calls to build trust with victims, then directing them to phishing pages in order to collect both login credentials and multi-factor authentication codes.

More recent analysis from Palo Alto Networks Unit 42 and the Retail & Hospitality ISAC indicates, with moderate confidence, that one of the identified clusters is associated with The Com network. These attacks rely heavily on living-off-the-land techniques, where attackers use legitimate system tools instead of introducing malware. They also make use of residential proxy networks to mask their real geographic location and to evade basic IP-based security filtering systems.

Since February 2026, activity linked to one of these clusters has been directed toward organizations in the retail and hospitality sectors. The attackers combine vishing calls, often impersonating IT help desk personnel, with phishing websites designed to capture employee credentials.

Once access is established, the attackers take steps to maintain long-term control. They register a new device within the compromised account to ensure continued access, and in many cases remove previously registered devices. After doing so, they modify email settings by creating inbox rules that automatically delete notifications related to new device logins or suspicious activity, preventing the legitimate user from being alerted.
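The inbox-rule tampering described above suggests a simple audit: flag any mailbox rule that silently deletes messages matching security-alert language. The sketch below operates on a generic list of rule dictionaries; the rule shape is a simplified assumption for illustration, not any particular mail provider's API.

```python
# Keywords commonly found in the security notifications these attackers
# suppress (new-device logins, suspicious-activity alerts).
ALERT_KEYWORDS = ("new device", "sign-in", "login", "suspicious", "security alert")

def suspicious_rules(rules: list[dict]) -> list[dict]:
    """Flag mailbox rules that silently delete security notifications.

    Each rule is assumed to look like:
        {"name": str, "keywords": [str, ...], "action": "delete" | "move" | ...}
    -- a simplified shape, not a specific provider's rule schema.
    """
    flagged = []
    for rule in rules:
        if rule.get("action") != "delete":
            continue  # only silent deletion hides alerts from the user
        words = " ".join(rule.get("keywords", [])).lower()
        if any(keyword in words for keyword in ALERT_KEYWORDS):
            flagged.append(rule)
    return flagged
```

In practice the rule list would be pulled from the tenant's admin API (for example, via an export of users' inbox rules) and reviewed alongside newly registered MFA devices.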

Following initial access, the attackers shift their focus toward accounts with higher privileges. They collect internal information, such as employee directories, to identify individuals with elevated access and then use further social engineering techniques to compromise those accounts as well. With increased privileges, they move across SaaS platforms including Google Workspace, HubSpot, Microsoft SharePoint, and Salesforce, searching for sensitive documents and business-critical data. Any valuable information is then exfiltrated to infrastructure controlled by the attackers.

Researchers note that in many observed cases, the stolen credentials provide access to the organization’s identity provider, which acts as a central authentication system. This creates a single entry point into multiple SaaS applications. By exploiting the trust relationships between the identity provider and connected services, attackers are able to move across the organization’s cloud ecosystem without needing to compromise each application separately. This allows them to access multiple systems using a single authenticated session.


CISA Highlights CVE-2026-31431 as an Active Linux Root Exploitation Risk

A recently disclosed Linux kernel vulnerability has attracted heightened scrutiny from the cybersecurity community, following evidence that it can be exploited to reliably obtain full root-level control across a wide range of systems. The vulnerability, formally dubbed "Copy Fail," affects kernel versions spanning nearly a decade, dramatically expanding its attack surface and posing a significant threat to millions of deployments.

It is tracked as CVE-2026-31431. Security researchers emphasize that the issue is significant not only for privilege escalation but also for its operational simplicity, cross-environment portability, and high exploitation success rate, factors that contribute to its elevated threat profile and explain its classification as an actively exploited vulnerability.

Upon reviewing these findings, the Cybersecurity and Infrastructure Security Agency (CISA) has formally escalated the issue by adding the flaw to its Known Exploited Vulnerabilities (KEV) catalogue, which indicates confirmed instances of exploitation across multiple Linux distributions in the wild. 

The weakness, which carries a CVSS score of 7.8, is a local privilege escalation (LPE) vulnerability that permits an unprivileged user with local access to elevate privileges to root. Its long-undetected status, combined with a reliable exploitation pathway, makes its operational risk even greater than its moderate score suggests.

Security researchers at Theori and Xint first identified and analyzed the issue under the designation "Copy Fail." It arises from the incorrect transfer of resources between security contexts within the Linux kernel, which can be exploited to bypass standard privilege boundaries.

Kernel versions 6.18.22, 6.19.12, and 7.0 have been released with patches for this actively exploited vulnerability. Federal guidance urges organisations to prioritize updating based on its active exploitation status, and its unusually low barrier to exploitation and wide ecosystem impact reinforce the urgency surrounding the flaw.
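Given the fixed releases listed above, a quick triage script can classify a running kernel version. The sketch below is a deliberate simplification: it treats 6.18.22, 6.19.12, and 7.0 as the first patched releases in their series, treats anything before 4.14 (where the flawed optimization landed) as unaffected, and flags everything else for a distro-advisory check, since vendors frequently backport fixes into older version strings.

```python
# First patched release per kernel series, per the advisory details above.
# Series between 4.14 and 6.17 have no mainline fix listed here, so they
# are flagged for a vendor-advisory check rather than declared safe.
PATCHED = {(6, 18): 22, (6, 19): 12, (7, 0): 0}
FIRST_AFFECTED = (4, 14)  # the in-place buffer-reuse optimization landed in 4.14

def parse_version(release: str) -> tuple[int, int, int]:
    """Parse 'major.minor[.patch]'; distro suffixes like '-generic' are ignored."""
    parts = release.split("-")[0].split(".")
    major, minor = int(parts[0]), int(parts[1])
    patch = int(parts[2]) if len(parts) > 2 else 0
    return major, minor, patch

def copy_fail_status(release: str) -> str:
    major, minor, patch = parse_version(release)
    if (major, minor) < FIRST_AFFECTED:
        return "not affected (predates 4.14)"
    if (major, minor) > (7, 0):
        return "patched"
    fixed = PATCHED.get((major, minor))
    if fixed is not None and patch >= fixed:
        return "patched"
    return "potentially vulnerable - check distro advisories for backports"
```

On a live host the input would come from `uname -r` (e.g. `platform.release()` in Python); the verdict for distribution kernels should always be confirmed against the vendor's own advisory because of backporting.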

According to researchers, an exploit can be executed with as little as 732 bytes of code, which significantly reduces the threshold for abuse and extends its reach across virtually all major Linux distributions since 2017. 

At the core of the vulnerability, unprivileged local users can manipulate the kernel's in-memory page cache of readable files, including setuid binaries. Executables can thereby be modified at runtime without altering files on disk: injecting malicious code into a trusted binary such as /usr/bin/su yields root-level execution, creating a stealthy pathway to privilege escalation.

The security analysts at Wiz have stated that this in-memory tampering fundamentally undermines traditional integrity assumptions, since the page cache serves as the live execution layer for binaries. Furthermore, this risk is compounded when deploying large-scale Linux-based applications in modern cloud or containerised infrastructures. 

According to Kaspersky's analysis, environments that leverage container technologies, such as Docker, LXC, and Kubernetes, may be particularly vulnerable to threats. By default, container processes may interact with the AF_ALG subsystem if the algif_aead module is present in the host kernel, thus expanding the attack surface and enhancing privilege escalation across boundaries. 
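Administrators can check whether the algif_aead module mentioned above is present on a host by parsing /proc/modules, where the first whitespace-separated field on each line is a loaded module's name. A minimal sketch:

```python
from pathlib import Path

def algif_aead_loaded(modules_text: str) -> bool:
    """Check /proc/modules content for the algif_aead crypto module,
    whose presence in the host kernel widens the AF_ALG attack surface
    for containers sharing that kernel."""
    return any(
        line.split(" ")[0] == "algif_aead"
        for line in modules_text.splitlines()
        if line
    )

if __name__ == "__main__":
    proc = Path("/proc/modules")
    if proc.exists():
        print("algif_aead loaded:", algif_aead_loaded(proc.read_text()))
```

A negative result does not prove safety (the module may be built into the kernel or loadable on demand), so this is a quick triage signal rather than a definitive check.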

In a technical sense, the vulnerability originates from a logic flaw within the Linux kernel's cryptographic pipeline, specifically the authenticated encryption template ("authenc"), where incomplete handling allows memory interactions that were not intended. 

Essentially, the vulnerability allows a local, unprivileged user to trigger a controlled four-byte write primitive into any readable file's page cache—a capability which appears to be constrained, but which has severe security implications when applied to executable memory. 

A key component of the exploit chain is the AF_ALG interface, which exposes kernel cryptographic operations to user space, together with the splice() system call, which is used to redirect data flows away from conventional buffers and into the page cache.

By manipulating the in-memory representation of executables, attackers can subtly modify their execution behaviour without changing files on disk; when these modifications target setuid-root executables, escalation to full root privileges is trivial. Root-cause analysis traces the vulnerability to a 2017 optimization introduced in Linux kernel 4.14, which enabled in-place buffer reuse to improve performance but accidentally weakened memory isolation guarantees, creating the conditions for exploitation.

Researchers have empirically validated the exploit against several distributions, including Ubuntu 24.04 LTS, Amazon Linux 2023, Red Hat Enterprise Linux 10.1, SUSE Linux Enterprise 16, and Debian, all with near-perfect reliability using a compact Python proof-of-concept. Since the flaw affects virtually all distributions released since 2017, it has drawn comparisons with previous high-profile flaws such as Dirty Pipe (CVE-2022-0847).

However, Copy Fail is more portable across kernel versions, more reliable, and simpler to exploit, since it does not require specific offsets or narrowly scoped configurations. To resolve the issue, kernel maintainers reverted the underlying optimization and reintroduced safer buffer handling mechanisms in kernel versions 6.18.22, 6.19.12, and 7.0.

Although major distributions have begun deploying patched kernels, inconsistencies in advisory publication have caused friction in coordinated response efforts. Security researcher Will Dormann notes that some platforms have issued updates that do not consistently mention CVE-2026-31431, potentially stalling remediation and risk awareness at the enterprise level.

An additional technical analysis of the flaw has revealed a practical exploitation pathway, illustrating how attackers can operationalise the vulnerability systematically in real-world environments. An attacker typically begins the attack sequence by identifying a Linux host or container that runs on a vulnerable kernel version, followed by the preparation of an attack trigger based on Python tailored specifically for the target machine. 

The exploit can then be executed either as a standard user on the host system or from within a compromised container, without elevated privileges. It uses the underlying flaw to perform a precise four-byte overwrite of the kernel page cache, corrupting sensitive kernel-managed data structures and enabling privilege escalation. Ultimately, the attacker elevates their process to UID 0, obtaining unrestricted root access.

As a result of the active threat landscape, Federal Civilian Executive Branch (FCEB) agencies have been instructed to resolve the vulnerability by May 15, 2026, in accordance with patches released by Linux distributions affected by this vulnerability. 

Where immediate patching is not feasible, interim mitigation strategies, including disabling unused kernel modules such as algif_aead, segmenting networks, and tightening access controls, have been recommended to reduce exposure and contain potential compromise paths.

The active exploitation of CVE-2026-31431, its extensive reach across the Linux ecosystem, and its relative ease of weaponisation serve as a critical reminder of the risks inherent in longstanding kernel-level design decisions. The convergence of high reliability, minimal exploit complexity, and broad distribution exposure puts organizations under increasing pressure to verify their patch posture and expedite remediation.

As a precautionary measure, security teams should prioritize kernel updates, closely monitor privilege escalation activity, and reassess controls around multi-tenant and containerised environments in which attack surfaces may be heightened. 

Threat actors will continue to favour low-friction exploitation paths, making timely mitigation and disciplined system hardening essential to preserving operational integrity and limiting the impact of kernel vulnerabilities such as this one.
