
All the recent news you need to know

Exposed by Design: What 1 Million Open AI Services Reveal About the Future of Cyber Risk

 

The rapid ascent of artificial intelligence, once heralded as the great accelerator of productivity, now casts a long and unsettling shadow, one that reveals not merely innovation, but a profound erosion of foundational security discipline. 

A recent large scale scan of internet facing AI infrastructure has uncovered a reality that is difficult to ignore. Over 1 million exposed AI services across more than 2 million hosts were identified, many of them operating with little to no protection, silently accessible to anyone who knows where to look. This is not a marginal oversight. It is a systemic condition, one that reflects how speed, ambition, and competitive pressure are quietly outpacing prudence. 

The Illusion of Progress: When Innovation Outruns Security 


For decades, the software industry painstakingly evolved toward secure by design principles, including authentication layers, least privilege access, and hardened deployments. Yet, in the fervour surrounding AI, many of these hard earned lessons appear to have been set aside. 

Organizations are increasingly self hosting large language models and AI agents, driven by the promise of efficiency and control. But in doing so, they are deploying systems that are, paradoxically, less secure than legacy software ever was. 

The result is a peculiar contradiction. The most advanced technologies of our time are often protected by the weakest defenses. 

Perhaps the most alarming discovery is deceptively simple. Many AI services have no authentication at all. Fresh installations frequently grant immediate, high level access without requiring credentials. This is not due to sophisticated bypass techniques or unknown exploits. It stems from defaults that were never hardened in the first place. In such environments, attackers simply walk through the front door. 

When Conversations Become Vulnerabilities 


Among the exposed systems were AI chat interfaces that inadvertently revealed complete conversation histories. In enterprise contexts, such data is far from trivial. These exchanges may contain internal operational strategies, infrastructure configurations, proprietary code snippets, and sensitive business queries. 

Even seemingly harmless prompts can, when combined, form a detailed map of an organization’s inner workings. The quiet intimacy of human and machine interaction, once considered private, is thus transformed into a potential intelligence goldmine.

A deeper inspection of these systems reveals not isolated mistakes, but recurring design flaws. Applications are often running with elevated privileges. Credentials are sometimes hardcoded into deployment files. Containers are misconfigured and services are left exposed. AI agents operate without sufficient sandboxing. Within days of analysis, researchers were able to identify new vulnerabilities, including risks related to remote code execution, which highlights how immature much of this ecosystem remains. 

These are patterns that repeat across environments. Unlike traditional applications, AI systems often possess extended capabilities. They can execute code, interact with APIs, and manipulate infrastructure. 

When such systems are exposed, the consequences escalate dramatically. A compromised AI agent is not merely a data leak. It can become an active participant in its own exploitation. Weak sandboxing and poorly segmented environments further amplify this risk, allowing attackers to move from one system to another with alarming ease. 

In this sense, AI does not just introduce new vulnerabilities. It magnifies existing ones. This phenomenon does not exist in isolation. Across the cybersecurity landscape, AI is reshaping both offense and defense. Recent analyses indicate that the time required to exploit vulnerabilities has shrunk dramatically, often from years to mere weeks. AI generated phishing and malware are increasing in both scale and sophistication. Even individuals with limited technical expertise can now execute complex attacks. 

The exposed AI services are therefore part of a larger transformation in how cyber risk evolves. 

At the heart of this issue lies a cultural shift. Organizations today operate under relentless pressure to innovate, deploy, and iterate. In this race, security is often treated as a secondary concern rather than a foundational requirement. 

Developers focus on functionality. Businesses focus on speed. Security becomes something to address later, once the system is already live. The irony is difficult to ignore. The very tools designed to enhance efficiency are being deployed in ways that create inefficiencies of far greater consequence, including breaches, downtime, and reputational loss. 

Lessons from the Exposure: What Must Change 


If there is a single lesson to be drawn, it is this: AI infrastructure must be treated with the same level of rigor as traditional systems, if not more. 

This requires secure default configurations, mandatory authentication and access controls, elimination of hardcoded secrets, proper isolation of AI agents, and continuous monitoring of external attack surfaces. Security cannot remain reactive. In an AI driven world, it must become anticipatory. 
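The "elimination of hardcoded secrets" step can be partially automated. The sketch below shows the idea with a handful of illustrative regex patterns; real scanners such as gitleaks or trufflehog ship far larger rule sets, and the sample file content is invented for demonstration:

```python
import re

# Illustrative patterns only -- a real secret scanner uses hundreds of rules.
SECRET_PATTERNS = [
    # generic "key = 'value'" assignments for secret-like names
    re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
    # AWS access key ID shape
    re.compile(r"AKIA[0-9A-Z]{16}"),
    # PEM private key headers
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
]

def find_hardcoded_secrets(text: str) -> list[str]:
    """Return lines of a deployment file that look like hardcoded secrets."""
    hits = []
    for line in text.splitlines():
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(line.strip())
    return hits

sample = """
image: myapp:latest
env:
  API_KEY: "sk-live-abcdef1234567890"
  LOG_LEVEL: debug
"""
print(find_hardcoded_secrets(sample))
# -> ['API_KEY: "sk-live-abcdef1234567890"']
```

Running a check like this in CI, before deployment files ever reach a host, is one way to make security anticipatory rather than reactive.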

Conclusion: A Turning Point, Not a Footnote 


The exposure of over a million AI services is more than a headline; it is a warning. It reveals a fragile foundation beneath a rapidly expanding technological landscape. If left unaddressed, these vulnerabilities will not remain theoretical. They will manifest as real world breaches, financial losses, and systemic disruptions. 

Yet within this warning lies an opportunity to pause, to reassess and to restore the balance between innovation and responsibility. In the end, the true measure of technological progress is how wisely we secure what we create.

Global Surge in Military Grade Spyware Puts Personal Smartphones at Risk


 

A growing surveillance threat is surfacing in global cybersecurity discourse as the UK's top cyber authority issues a stark assessment of the unchecked proliferation of commercial spyware capabilities. Initially restricted to tightly regulated law enforcement use, advanced intrusion tools are now deployed across more than 100 countries and are able to remotely compromise smartphones, bypass encrypted communications, and covertly activate device sensors. 

NSO Group and an increasingly opaque ecosystem of competitors are driving this rapid expansion, signaling the shift from targeted investigative use to a wider landscape of state-aligned digital intrusion, a shift in which state-aligned cyberattacks are becoming increasingly commonplace. 

Despite the increasing accessibility and operational stealth of these tools, enterprises and operators of critical national infrastructure remain inadequately prepared for the scale and sophistication of the threat. That threat is driven by modern spyware frameworks that leverage "zero-click" exploitation chains, gaining unauthorized access without requiring any involvement from the user. 

NSO Group's Pegasus platform and Paragon's Graphite platform function as highly advanced intrusion suites. They exploit latent vulnerabilities within mobile operating systems to extract sensitive communications, media, geolocation information, and other artifacts while leaving minimal forensic traces. 

The commercial dynamics underpinning this ecosystem demonstrate the magnitude of the challenge as well as its persistence. The Israeli developer NSO Group, widely associated with high-end surveillance tooling, was added to the United States Entity List in 2021 for supplying technologies to foreign governments. These technologies were then used to target a wide range of individuals, including government officials, journalists, business leaders, academics, and diplomats. 

The company maintains that such capabilities serve legitimate anti-terrorism and law enforcement purposes, asserting that it lacks direct visibility into operational use while retaining the right to terminate client relationships in cases of verified misuse. 

NSO Group, however, represents only one node within a rapidly expanding vendor landscape. According to industry observers, including Casey, the sector is extremely profitable and undergoing rapid growth, with dozens of firms currently offering comparable capabilities. 

According to estimates, more than 100 countries have procured mobile spyware, an increase over earlier assessments, which indicated deployment across more than 80 national jurisdictions. For states lacking indigenous cyber expertise, commercial intrusion platforms offer a fast, cost-effective shortcut to capabilities that would otherwise take years to develop.

In addition, the National Cyber Security Centre noted previously that, despite the fact that these tools are intended for law enforcement purposes, there is credible evidence that they have been used on a widespread basis against journalists, human rights defenders, political dissidents, and foreign officials with thousands of individuals being targeted annually. 

Several leaked toolkits, including DarkSword, demonstrate the dispersal of capabilities once restricted to state intelligence agencies into less controlled environments, making it possible for state-aligned and criminal actors to launch attacks using vectors as inconspicuous as compromised web sessions on unpatched iOS devices. These are not merely theoretical risk models: operational exploits are being actively employed against targets who often assume device-level security forms the basis of their defense. 

Notably, the victim profile has expanded to include corporate executives, financial professionals, and organizations handling valuable information, alongside journalists and political dissidents. Richard Horne, the director of the UK's National Cyber Security Centre, highlighted that a significant gap in industry readiness remains. 

Many enterprises underestimate the capability and operational maturity of these surveillance tools. Essentially, this shift illustrates the democratization of offensive cyber tooling, where sophisticated surveillance, once monopolized by a few intelligence agencies, is now available to a broader range of state actors lacking native cyber expertise. 

These capabilities are now economically accessible and, at times, unintentionally disseminated, which fundamentally alters the threat equation. As they transition from tightly controlled assets to commercially traded products, advanced surveillance tools become increasingly difficult to contain, propagating through illicit channels that include corrupt procurement practices, insider exfiltration, and secondary resale markets. 

In the wake of this leakage, non-state actors, including organized criminal networks, have acquired capabilities that were previously available only to sovereign intelligence operations. The proliferation of state-linked campaigns, including those attributed to China and focused on large-scale data exfiltration, illustrates the use of such tools not only for immediate intelligence gain, but also to establish strategic prepositioning for future geopolitical conflicts. 

Traditional device-based safeguards and consumer privacy controls are only marginally effective against adversaries equipped with exploit chains developed specifically to circumvent them. International efforts to regulate and oversee exports are gaining momentum, but operational reality suggests that containment may already lag behind proliferation, which enables a significant expansion of attack surfaces across both civilian and enterprise digital environments. 

The convergence of commercial availability, technical sophistication and weak oversight has led to the normalization of capabilities that were once considered exceptional. These developments illustrate a structural shift in the cyber threat environment. 

Given the widespread adoption of such tools, and their continual evolution and leakage, both public and private sectors need to reassess their security assumptions at a fundamental level. Enterprises, critical infrastructure operators, and individual users no longer face merely isolated intrusions; they must navigate a complex ecosystem in which highly advanced surveillance techniques are readily accessible and increasingly resemble legitimate activity. 

In the absence of strengthened international coordination, enforceable controls, and a corresponding increase in defensive maturity, a continued erosion of digital trust is likely, resulting in compromise becoming not an anomaly, but an expected condition of operating within a hyperconnected environment.

AI Models Surpass Doctors in Emergency Diagnosis, Harvard Study Finds

 




A contemporary study conducted by researchers at Harvard University has revealed that advanced artificial intelligence systems are now capable of exceeding human doctors in both diagnosing medical conditions and determining treatment strategies, including in fast-paced and high-stakes emergency room environments. The research specifically underscores the potential of modern AI systems in handling complex clinical reasoning tasks that were traditionally considered exclusive to trained physicians.

The findings, published in the peer-reviewed journal Science, are based on a controlled comparison between OpenAI o1 and experienced attending physicians. To ensure realistic testing conditions, the study used 76 actual emergency department cases sourced from Beth Israel Deaconess Medical Center. These cases were evaluated across multiple stages of the diagnostic process, allowing researchers to assess performance under varying levels of available patient information.

At the earliest stage of patient assessment, commonly referred to as initial triage, where clinicians typically have only limited details about a patient’s condition, the AI model demonstrated a notable advantage. It was able to correctly identify either the exact diagnosis or a closely related condition in 67.1 percent of the cases. In comparison, the two physicians involved in the study achieved accuracy rates of 55.3 percent and 50 percent respectively. This suggests that even with minimal data, the AI system was more effective at narrowing down potential diagnoses.

As the diagnostic process progressed and additional clinical information became available during the emergency room evaluation phase, the model’s performance improved further. Its diagnostic accuracy increased to 72.4 percent, reflecting its ability to refine its conclusions with more context. The physicians also showed improvement at this stage, but their accuracy remained lower, at 61.8 percent and 52.6 percent. This stage is particularly important as it mirrors real-world conditions where doctors continuously update their assessments based on new findings.

In the final phase of care, when patients were admitted either to general hospital wards or intensive care units, the AI model continued to outperform its human counterparts. It achieved an accuracy rate of 81.6 percent, compared to 78.9 percent and 69.7 percent for the physicians. Although the performance gap narrowed slightly at this stage, the AI still maintained a measurable edge, indicating consistency across the full diagnostic timeline.

Beyond identifying illnesses, the study also evaluated how effectively the AI system could design clinical management plans. This included decisions such as selecting appropriate medications, including antibiotics, as well as handling complex and sensitive scenarios like end-of-life care planning. Across five evaluated case studies, the AI achieved a median performance score of 89 percent. In contrast, physicians scored significantly lower, averaging 34 percent when relying on traditional clinical resources and 41 percent when supported by GPT-4. This underlines a substantial gap in structured decision-making support.

The researchers acknowledged that while integrating AI into clinical workflows is often viewed as a high-risk approach due to patient safety concerns, its potential benefits are significant. They noted that wider adoption of such systems could help reduce diagnostic errors, minimize treatment delays, and address disparities in access to healthcare services. These factors collectively contribute to both improved patient outcomes and reduced financial strain on healthcare systems.

At the same time, the study emphasizes that current AI systems are not without limitations. Clinical medicine involves more than text-based data. Doctors routinely rely on non-verbal and non-textual cues, such as observing a patient’s physical discomfort, interpreting imaging results, and making judgment calls based on experience. These aspects are not fully captured by existing AI models, which means human expertise remains essential.

The authors further concluded that large language models have now surpassed many traditional benchmarks used to measure clinical reasoning abilities. However, they stress the urgent need for more detailed research, including real-world clinical trials and studies focused on human-AI collaboration, to determine how these systems can be safely and effectively integrated into healthcare settings.

In comments shared with The Guardian, lead researcher Arjun Manrai clarified that the findings should not be interpreted as suggesting that AI will replace doctors. Instead, he described the results as evidence of a major technological shift that is likely to transform the medical field in the coming years.

From a macro industry perspective, this study reflects a developing trend in which AI is increasingly being used to augment clinical decision-making. However, experts continue to caution that challenges such as data bias, accountability, regulatory oversight, and patient trust must be addressed before such systems can be widely deployed. The future of healthcare, therefore, is likely to involve a collaborative model where AI amplifies efficiency and accuracy, while human doctors provide critical judgment, ethical oversight, and patient-centered care.

Claude Desktop Silently Alters Browser Settings, Even on Uninstalled Browsers

 

Claude Desktop, Anthropic’s standalone AI app for macOS, has come under fire for quietly altering browser‑level settings on users’ machines—even when they have never installed or used certain browsers. Security and privacy researchers have found that the application drops browser‑configuration files across system‑wide directories, effectively pre‑authorizing future browser‑extension links between Claude and Chromium‑based browsers such as Chrome, Edge, Brave, Opera, and others.

Modus operandi 

Upon installation, Claude Desktop generates a Native Messaging manifest and helper binary that register Claude as a trusted “browser host” for several specific Chrome extension IDs. This manifest is placed inside browser‑host folders for multiple Chromium‑based browsers, including some a user may never have installed, meaning a future browser install could immediately grant Claude broad access to page content, form data, and session activity. Anthropic frames this as part of its “agentic” features that let the app automate tasks and interact with the web, but the lack of an explicit opt‑in notification has raised red flags. 

The biggest concern is that these configuration files persist beyond the scope of browsers a user actually runs. Even if a person never uses Chrome or a given Chromium browser, the manifest can already be waiting in the system’s browser‑host directories, pre‑staging a bridge that activates once a corresponding browser and Claude extension are installed. Because the desktop app rewrites these files on every launch, deleting them manually does not permanently remove the hooks unless Claude Desktop itself is uninstalled. 
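The mechanism described above hinges on where Chromium browsers look for native messaging host manifests on macOS: a JSON file dropped into a browser's `NativeMessagingHosts` directory is picked up whenever that browser is (later) installed. The sketch below builds those per-browser paths so stale manifests can be audited; the host name `com.example.claude_host` is an invented placeholder, not Anthropic's actual identifier:

```python
from pathlib import Path

# Vendor subdirectories under ~/Library/Application Support where each
# Chromium-based browser reads user-scope native messaging host manifests.
CHROMIUM_NMH_DIRS = {
    "Chrome": "Google/Chrome",
    "Edge": "Microsoft Edge",
    "Brave": "BraveSoftware/Brave-Browser",
}

def manifest_paths(host_name: str, home: Path) -> dict[str, Path]:
    """Build the manifest path a native messaging host would occupy per browser."""
    base = home / "Library" / "Application Support"
    return {
        browser: base / vendor / "NativeMessagingHosts" / f"{host_name}.json"
        for browser, vendor in CHROMIUM_NMH_DIRS.items()
    }

def staged_manifests(host_name: str, home: Path) -> list[Path]:
    """Manifests already present, whether or not the browser is installed."""
    return [p for p in manifest_paths(host_name, home).values() if p.exists()]

# Hypothetical host name and home directory, for illustration only.
paths = manifest_paths("com.example.claude_host", Path("/Users/alice"))
print(paths["Brave"])
```

Because the app rewrites these files on every launch, such an audit is only useful after the desktop application itself has been removed.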

Privacy and legal reactions 

Privacy experts and commentators have likened this behavior to “spyware‑like” activity, arguing that silently creating browser‑level hooks without clear consent violates the spirit, if not the letter, of privacy regulations such as the EU ePrivacy Directive. Alexander Hanff, a prominent privacy consultant, has explicitly labeled Claude Desktop’s behavior “spyware” and questioned how much of this browser integration is actually documented and disclosed to end users. Critics stress that such integrations should be opt‑in and transparent, rather than buried in vague terms‑of‑service language most users never read. 

For macOS users who have installed Claude Desktop, experts recommend reviewing whether they actually need the browser‑integration features and, if not, uninstalling the app entirely to remove lingering manifest files and host binaries. Some guides suggest manually cleaning native‑messaging‑host folders for various Chromium browsers and then restarting the browser after removal, although this is only effective if the desktop app is also gone. Until Anthropic adds clearer, upfront consent prompts and the option to disable or remove these hooks, users concerned about privacy should treat Claude Desktop’s browser integration as a potential risk and handle it accordingly.

npm Supply Chain Attack Spreads Worm Malware Stealing Developer Secrets Across Compromised Packages

 

Worry is growing within the cybersecurity community following the discovery of a fresh supply chain threat aimed at the npm platform, in which self-replicating malicious code infiltrates public software libraries to harvest confidential information from developers. Though broad consumer impact appears minimal, investigators at Socket and StepSecurity confirm the assault specifically targets niche development setups, environments often overlooked in typical breach patterns. 

Detection came after automated systems flagged unusual network activity, leading analysts to trace payloads back to tampered dependencies uploaded under legitimate project names. Unlike older variants that rely on user interaction, this version activates silently once installed, transmitting credentials to remote servers without visible signs. Researchers emphasize that the sophistication lies not in complexity but in timing: attacks unfold during build processes, evading standard runtime checks. 

From initial samples, it appears attackers maintain persistence by chaining exploits across multiple packages. Investigation continues into whether source repositories were breached directly or if hijacked maintainer accounts allowed upload privileges. Not far behind the initial breach, several packages tied to Namastex Labs began showing suspicious behavior. One after another, altered forms of @automagik/genie, pgserve, and similar tools appeared online without warning. 

What started as isolated reports now points to a wider pattern unfolding quietly. Though some tainted releases have been pulled, fresh variants continue turning up unexpectedly. The danger comes from how the code spreads itself automatically. Right after a package installs, it acts like a worm, immediately harvesting key details from the system it hits: API tokens, SSH keys, cloud credentials, and secrets used in build tools, containers, and AI setups. 

The harvested data is then exfiltrated to attacker-controlled servers. Despite lacking conclusive proof, analysts observe patterns matching past operations tied to TeamPCP. Similarities emerge in how the malware activates upon installation, grabs login details, and uses distributed infrastructure for spreading code and storing stolen data. What makes this malware more than just a thief is how it pushes outward without pause. 

Once inside, it hunts for npm login details and identifies which libraries the developer can publish to. Harmful scripts are then inserted and republished, turning trusted tools into hidden entry points. If Python credentials appear, the same process spreads into PyPI. Nor are traditional systems the only targets: cryptocurrency holdings face exposure too, with data targeted from wallet tools such as MetaMask and Phantom. One weak spot in a developer’s setup can ripple outward, showing how quickly risks spread across software ecosystems.
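Because this worm activates at install time, one simple defensive habit is auditing a dependency's `package.json` for npm lifecycle scripts that run automatically during `npm install`. The sketch below is a minimal illustration (the sample manifest is invented), not a substitute for dedicated supply-chain scanners such as those Socket provides:

```python
import json

# npm lifecycle hooks that execute automatically during `npm install`.
INSTALL_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def risky_install_scripts(package_json: str) -> dict[str, str]:
    """Return lifecycle scripts in a package.json that run at install time."""
    manifest = json.loads(package_json)
    scripts = manifest.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in INSTALL_HOOKS}

# Invented sample manifest for demonstration.
sample = json.dumps({
    "name": "some-dependency",
    "version": "1.0.0",
    "scripts": {
        "test": "jest",
        "postinstall": "node setup.js",  # runs silently on every install
    },
})
print(risky_install_scripts(sample))  # -> {'postinstall': 'node setup.js'}
```

An install-time hook is not proof of compromise, since many legitimate packages use them, but flagging them narrows the set of dependencies worth manual review (npm's `--ignore-scripts` flag disables them outright).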

Hackers Target Cloud Apps Using Phone Scams and Login Tricks



Cybersecurity researchers have identified two threat groups that are executing fast-moving attacks almost entirely within software-as-a-service environments, allowing them to operate with very little visible trace of intrusion.

The groups, tracked as Cordial Spider and Snarky Spider, are also known by multiple alternate identifiers across different security vendors. Investigations show that both groups are involved in high-speed data theft followed by extortion attempts, and their methods show a strong overlap in how operations are carried out. Analysts assess that these groups have been active since at least October 2025. One of them is believed to be composed of native English speakers and is linked to a cybercrime network widely referred to as “The Com.”

According to findings from CrowdStrike, these attackers primarily rely on voice phishing, also known as vishing, to initiate their intrusions. In these cases, individuals are contacted and guided toward fraudulent login pages that are designed to imitate single sign-on systems. These pages act as adversary-in-the-middle setups, meaning they intercept and capture authentication data, including login credentials and session details, as the victim enters them. Once this information is obtained, attackers immediately use it to access SaaS applications that are connected through single sign-on integrations.

Researchers explain that the attackers deliberately operate within trusted SaaS platforms to avoid raising suspicion. Because their activity takes place inside legitimate services already used by organizations, their presence generates fewer detectable signals. This allows them to move quickly from initial compromise to data access. The combination of speed, targeted execution, and reliance on SaaS-only environments makes it harder for defenders to monitor and respond effectively.

Earlier research published in January 2026 by Mandiant revealed that these attack patterns represent a continuation of tactics seen in extortion-focused campaigns linked to the ShinyHunters group. These operations involve impersonating IT staff during phone calls to build trust with victims, then directing them to phishing pages in order to collect both login credentials and multi-factor authentication codes.

More recent analysis from Palo Alto Networks Unit 42 and the Retail & Hospitality ISAC indicates, with moderate confidence, that one of the identified clusters is associated with The Com network. These attacks rely heavily on living-off-the-land techniques, where attackers use legitimate system tools instead of introducing malware. They also make use of residential proxy networks to mask their real geographic location and to evade basic IP-based security filtering systems.

Since February 2026, activity linked to one of these clusters has been directed toward organizations in the retail and hospitality sectors. The attackers combine vishing calls, often impersonating IT help desk personnel, with phishing websites designed to capture employee credentials.

Once access is established, the attackers take steps to maintain long-term control. They register a new device within the compromised account to ensure continued access, and in many cases remove previously registered devices. After doing so, they modify email settings by creating inbox rules that automatically delete notifications related to new device logins or suspicious activity, preventing the legitimate user from being alerted.
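Defenders can hunt for the inbox-rule tampering described above by reviewing mail rules that silently delete security notifications. The sketch below works on a simplified, hypothetical rule shape; real deployments would pull rules from the relevant mail API (e.g. Microsoft Graph or the Gmail API), whose schemas differ from this illustration:

```python
# Keywords commonly found in new-device / sign-in alert subjects (illustrative).
SUSPICIOUS_KEYWORDS = ("new device", "sign-in", "security alert", "login")

def suspicious_delete_rules(rules: list[dict]) -> list[dict]:
    """Flag inbox rules that silently delete security-notification mail.

    Each rule is a simplified dict with 'name', 'subject_contains', 'action'.
    """
    flagged = []
    for rule in rules:
        deletes = rule.get("action") == "delete"
        matches_alert = any(
            kw in rule.get("subject_contains", "").lower()
            for kw in SUSPICIOUS_KEYWORDS
        )
        if deletes and matches_alert:
            flagged.append(rule)
    return flagged

# Invented examples: a benign rule and one typical of post-compromise tampering
# (attackers often give such rules unobtrusive names like "." or ",").
rules = [
    {"name": "newsletters", "subject_contains": "Weekly digest", "action": "archive"},
    {"name": ".", "subject_contains": "New device sign-in", "action": "delete"},
]
print(suspicious_delete_rules(rules))
```

Auditing recently created rules alongside new device registrations gives defenders two correlated signals for this persistence technique.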

Following initial access, the attackers shift their focus toward accounts with higher privileges. They collect internal information, such as employee directories, to identify individuals with elevated access and then use further social engineering techniques to compromise those accounts as well. With increased privileges, they move across SaaS platforms including Google Workspace, HubSpot, Microsoft SharePoint, and Salesforce, searching for sensitive documents and business-critical data. Any valuable information is then exfiltrated to infrastructure controlled by the attackers.

Researchers note that in many observed cases, the stolen credentials provide access to the organization’s identity provider, which acts as a central authentication system. This creates a single entry point into multiple SaaS applications. By exploiting the trust relationships between the identity provider and connected services, attackers are able to move across the organization’s cloud ecosystem without needing to compromise each application separately. This allows them to access multiple systems using a single authenticated session.

