Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Cyber Security. Show all posts

Indirect Prompt Injection: The Hidden AI Threat


Indirect prompt injection is becoming one of the most worrying AI security risks because attackers can hide malicious instructions inside content that an AI system reads and trusts. In plain terms, the AI is not being attacked through the chat box alone; it can also be manipulated through emails, web pages, documents, or other external data it processes. 

The danger is that these hidden prompts can make an AI leak sensitive data, follow malicious commands, or guide users to malicious websites. Security experts note that cybercriminals are already using this technique to push AI systems toward unsafe actions, including executing code and exposing information. That makes the problem more serious than a simple model glitch, because the output can directly affect real-world decisions and user safety. 

A major reason indirect prompt injection works is that many AI systems mix trusted instructions with untrusted content in the same workflow. If the system does not clearly separate what should be obeyed from what should merely be read, the model may treat attacker-controlled text as if it were part of its core task. This is especially risky in agentic tools that can browse, summarize, click links, or take actions on behalf of users. 

Security experts recommend building multiple layers of defense instead of relying on one fix. Common measures include sanitizing input and output, using clear boundaries around external content, enforcing least privilege, and requiring human approval for sensitive actions. Monitoring unusual behavior also helps, such as unexpected tool calls, odd requests, or suspicious links in AI-generated responses. 

For users, the safest habits are simple but important. Give AI tools only the access they truly need, avoid sharing unnecessary personal data, and be cautious when an AI suddenly recommends links, purchases, or requests for sensitive information. If the system starts acting strangely, the session should be stopped and the output verified independently before trusting it.

The broader lesson is that prompt injection is now a practical cybersecurity issue, not a theoretical one. As AI becomes more connected to browsers, inboxes, databases, and business workflows, attackers gain more ways to exploit weak guardrails. Organizations that want to use AI safely will need strict controls, continuous testing, and a security-first design mindset from the start.

Exposed by Design: What 1 Million Open AI Services Reveal About the Future of Cyber Risk

 

The rapid ascent of artificial intelligence, once heralded as the great accelerator of productivity, now casts a long and unsettling shadow, one that reveals not merely innovation, but a profound erosion of foundational security discipline. 

A recent large scale scan of internet facing AI infrastructure has uncovered a reality that is difficult to ignore. Over 1 million exposed AI services across more than 2 million hosts were identified, many of them operating with little to no protection, silently accessible to anyone who knows where to look. This is not a marginal oversight. It is a systemic condition, one that reflects how speed, ambition, and competitive pressure are quietly outpacing prudence. 

The Illusion of Progress: When Innovation Outruns Security 


For decades, the software industry painstakingly evolved toward secure by design principles, including authentication layers, least privilege access, and hardened deployments. Yet, in the fervour surrounding AI, many of these hard earned lessons appear to have been set aside. 

Organizations are increasingly self hosting large language models and AI agents, driven by the promise of efficiency and control. But in doing so, they are deploying systems that are, paradoxically, less secure than legacy software ever was. 

The result is a peculiar contradiction. The most advanced technologies of our time are often protected by the weakest defenses. 

Perhaps the most alarming discovery is deceptively simple. Many AI services have no authentication at all. Fresh installations frequently grant immediate, high level access without requiring credentials. This is not due to sophisticated bypass techniques or unknown exploits. It stems from defaults that were never hardened in the first place. In such environments, attackers simply walk through the front door. 

When Conversations Become Vulnerabilities 


Among the exposed systems were AI chat interfaces that inadvertently revealed complete conversation histories. In enterprise contexts, such data is far from trivial. These exchanges may contain internal operational strategies, infrastructure configurations, proprietary code snippets, and sensitive business queries. 

Even seemingly harmless prompts can, when combined, form a detailed map of an organization’s inner workings. The quiet intimacy of human and machine interaction, once considered private, is thus transformed into a potential intelligence goldmine. A deeper inspection of these systems reveals not isolated mistakes, but recurring design flaws. Applications are often running with elevated privileges. Credentials are sometimes hardcoded into deployment files. Containers are misconfigured and services are left exposed. AI agents operate without sufficient sandboxing. Within days of analysis, researchers were able to identify new vulnerabilities, including risks related to remote code execution, which highlights how immature much of this ecosystem remains. 

These are patterns that repeat across environments. Unlike traditional applications, AI systems often possess extended capabilities. They can execute code, interact with APIs, and manipulate infrastructure. 

When such systems are exposed, the consequences escalate dramatically. A compromised AI agent is not merely a data leak. It can become an active participant in its own exploitation. Weak sandboxing and poorly segmented environments further amplify this risk, allowing attackers to move from one system to another with alarming ease. 

In this sense, AI does not just introduce new vulnerabilities. It magnifies existing ones. This phenomenon does not exist in isolation. Across the cybersecurity landscape, AI is reshaping both offense and defense. Recent analyses indicate that the time required to exploit vulnerabilities has shrunk dramatically, often from years to mere weeks. AI generated phishing and malware are increasing in both scale and sophistication. Even individuals with limited technical expertise can now execute complex attacks. 

The exposed AI services are therefore part of a larger transformation in how cyber risk evolves. 

At the heart of this issue lies a cultural shift. Organizations today operate under relentless pressure to innovate, deploy, and iterate. In this race, security is often treated as a secondary concern rather than a foundational requirement. 

Developers focus on functionality. Businesses focus on speed. Security becomes something to address later, once the system is already live. The irony is difficult to ignore. The very tools designed to enhance efficiency are being deployed in ways that create inefficiencies of far greater consequence, including breaches, downtime, and reputational loss. 

Lessons from the Exposure: What Must Change 


If there is a singular lesson to be drawn, it is this. AI infrastructure must be treated with the same level of rigor as traditional systems, if not more. 

This requires secure default configurations, mandatory authentication and access controls, elimination of hardcoded secrets, proper isolation of AI agents, and continuous monitoring of external attack surfaces. Security cannot remain reactive. In an AI driven world, it must become anticipatory. 

Conclusion: A Turning Point, Not a Footnote 


The exposure of over a million AI services is a warning more than just headlines. It reveals a fragile foundation beneath a rapidly expanding technological landscape. If left unaddressed, these vulnerabilities will not remain theoretical. They will manifest as real world breaches, financial losses, and systemic disruptions. 

Yet within this warning lies an opportunity to pause, to reassess and to restore the balance between innovation and responsibility. In the end, the true measure of technological progress is how wisely we secure what we create.

Claude Desktop Silently Alters Browser Settings, Even on Uninstalled Browsers

 

Claude Desktop, Anthropic’s standalone AI app for macOS, has come under fire for quietly altering browser‑level settings on users’ machines—even when they have never installed or used certain browsers. Security and privacy researchers have found that the application drops browser‑configuration files across system‑wide directories, effectively pre‑authorizing future browser‑extension links between Claude and Chromium‑based browsers such as Chrome, Edge, Brave, Opera, and others.

Modus operandi 

Upon installation, Claude Desktop generates a Native Messaging manifest and helper binary that register Claude as a trusted “browser host” for several specific Chrome extension IDs. This manifest is placed inside browser‑host folders for multiple Chromium‑based browsers, including some a user may never have installed, meaning a future browser install could immediately grant Claude broad access to page content, form data, and session activity. Anthropic frames this as part of its “agentic” features that let the app automate tasks and interact with the web, but the lack of an explicit opt‑in notification has raised red flags. 

The biggest concern is that these configuration files persist beyond the scope of browsers a user actually runs. Even if a person never uses Chrome or a given Chromium browser, the manifest can already be waiting in the system’s browser‑host directories, pre‑staging a bridge that activates once a corresponding browser and Claude extension are installed. Because the desktop app rewrites these files on every launch, deleting them manually does not permanently remove the hooks unless Claude Desktop itself is uninstalled. 

Privacy and legal reactions 

Privacy experts and commentators have likened this behavior to “spyware‑like” activity, arguing that silently creating browser‑level hooks without clear consent violates the spirit, if not the letter, of privacy regulations such as the EU ePrivacy Directive. Alexander Hanff, a prominent privacy consultant, has explicitly labeled Claude Desktop’s behavior “spyware” and questioned how much of this browser integration is actually documented and disclosed to end users. Critics stress that such integrations should be opt‑in and transparent, rather than buried in vague terms‑of‑service language most users never read. 

For macOS users who have installed Claude Desktop, experts recommend reviewing whether they actually need the browser‑integration features and, if not, uninstalling the app entirely to remove lingering manifest files and host binaries. Some guides suggest manually cleaning native‑messaging‑host folders for various Chromium browsers and then restarting the browser after removal, although this is only effective if the desktop app is also gone. Until Anthropic adds clearer, upfront consent prompts and the option to disable or remove these hooks, users concerned about privacy should treat Claude Desktop’s browser integration as a potential risk and handle it accordingly.

npm Supply Chain Attack Spreads Worm Malware Stealing Developer Secrets Across Compromised Packages

 

Worry grows within the cybersecurity community following discovery of a fresh supply chain threat aimed at the npm platform, where self-replicating malicious code infiltrates public software libraries to harvest confidential information from coders. Though broad consumer impact seems minimal, investigators at Socket and StepSecurity confirm the assault specifically targets niche development setups - environments often overlooked in typical breach patterns. 

Detection came after unusual network activity flagged automated systems, leading analysts to trace payloads back to tampered dependencies uploaded under legitimate project names. Unlike older variants that rely on user interaction, this version activates silently once installed, transmitting credentials to remote servers without visible signs. Researchers emphasize the sophistication lies not in complexity but timing: attacks unfold during build processes, evading standard runtime checks. 

From initial samples, it appears attackers maintain persistence by chaining exploits across multiple packages. Investigation continues into whether source repositories were breached directly or if hijacked maintainer accounts allowed upload privileges. Not far behind the initial breach, several packages tied to Namastex Labs began showing suspicious behavior. One after another, altered forms of @automagik/genie, pgserve, and similar tools appeared online without warning. 

What started as isolated reports now points to a wider pattern unfolding quietly. Though some tainted releases have been pulled, fresh variants continue turning up unexpectedly. Danger comes from how the code spreads itself automatically. Right after a package installs, it acts like a worm - starting fast, grabbing key details from the system it hits. Things such as API tokens show up on the list, along with SSH keys, cloud login info, and hidden codes used in software build tools, containers, or AI setups. 

Off it goes, sending what it finds to servers run by attackers. Despite lacking conclusive proof, analysts observe patterns matching past operations tied to TeamPCP. Similarities emerge in how malware activates upon installation, grabs login details, and uses distributed infrastructure for spreading code and storing stolen data. What makes this malware more than just a thief is how it pushes outward without pause. 

Once inside, it hunts for npm login details and identifies which libraries the developer can upload. Harmful scripts are then inserted and republished, turning trusted tools into hidden entry points. If Python credentials appear, the same process spreads into PyPI. Not just traditional systems are at risk - crypto-linked holdings face exposure too, with data targeted from tools like MetaMask and Phantom. One weak spot in a developer’s setup can ripple outward, showing how quickly risks spread across software ecosystems.

Kyber Ransomware Tests Post‑Quantum Encryption on Windows Networks

 

A new ransomware group named Kyber has pushed the envelope by experimenting with post‑quantum encryption in attacks on Windows‑based networks, according to recent cybersecurity analysis. The group has been observed targeting both Windows file servers and VMware ESXi platforms, showing a cross‑platform capability designed to disrupt critical enterprise infrastructure. In one confirmed incident, a major U.S. defense contractor fell victim to the strain, underscoring the threat’s seriousness. 

The Kyber variant deployed on Windows is written in Rust and uses a hybrid encryption scheme that combines classical and post‑quantum algorithms. Researchers at Rapid7 found that the Windows payload wraps AES‑256 file‑encryption keys using Kyber1024 (ML‑KEM1024), a lattice‑based key‑encapsulation mechanism standardized by NIST for quantum‑resistant cryptography. The strain also incorporates X25519 elliptic‑curve cryptography as an additional layer, creating a “belt‑and‑suspenders” approach to protect ransomware keys. 

Despite the marketing‑speak around “quantum‑proof” encryption, security experts note that Kyber’s use of post‑quantum crypto is largely symbolic at this stage. AES‑256 itself is already considered resistant to foreseeable quantum attacks, so relying on Kyber1024 mainly adds overhead without materially changing the practical impact for victims. Moreover, the Linux‑based ESXi encryptor does not actually use Kyber1024; it instead falls back to ChaCha8 and RSA‑4096, highlighting discrepancies between the ransomware’s claims and its implementation. 

Operationally, Kyber behaves like a modern ransomware strain: it seeks local administrator privileges, deletes Volume Shadow Copies via PowerShell and vssadmin, stops critical services, and encrypts files across shared drives. Windows files are typically appended with the .#~~~ extension, while the ESXi version uses .xhsyw, and each variant leaves a ransom note pointing to a Tor‑based leak site. The gang also runs a “Wall of Wonders” leak site to shame victims and pressure them into paying, a tactic increasingly common among ransomware‑as‑a‑service groups. 

For defenders, the lesson is that post‑quantum encryption in ransomware is more about optics than a game‑changer—for now. Organizations should still prioritize basics: strict privilege control, regular air‑gapped backups, monitoring unusual PowerShell and vssadmin activity, and rapid patching of ESXi and Windows servers. As quantum‑resistant standards mature, the broader cybersecurity community gains experience, even if attackers are the first to weaponize them in limited test‑bed campaigns like Kyber.

Iran Claims US Used Backdoors To Disable Networking Equipment During Conflict Amid Unverified Cyber Sabotage Reports

 

Midway through the incident, Iranian officials pointed fingers at American cyber operations. Devices made by firms like Cisco and Juniper began failing without warning. Power cycles hit Fortinet and MikroTik hardware even as Tehran limited external connections. Outages appeared tied to U.S. digital interference, according to local reports. Backdoors or coordinated botnet attacks were named as possible causes. Global discussion flared up almost immediately. Tensions between nations climbed higher amid unverified assertions. 

Network disruptions coincided too closely with military actions, some analysts noted These reports indicate Iranian officials see the outages as intentional interference, not equipment malfunction. What supports this view is the idea of harmful software hidden inside firmware or startup systems, set to activate remotely when signaled - possibly through satellite links. A different explanation considers dormant networks of infected machines, ready to shut down gadgets all at once if activated Still, no proof supports these statements. 

Confirming them becomes nearly impossible because Iran has restricted online access for long periods, blocking outside observers from seeing what happens inside its digital networks. Weeks of broad internet blackouts continue across the region, making verification harder than expected under such isolation. Nowhere more visible than in official outlets, the accusations gain strength through repeated links to earlier reports. 

Because evidence once surfaced via Edward Snowden, it gets reused to support current assertions about U.S. practices. Hardware tampering stories resurface when discussions turn to digital trust. From that point onward, examples of intercepted equipment serve as grounding points. Even so, connections drawn today rely heavily on incidents described years ago. 

Thus, suspicion persists within broader debates over tech control Even though claims are serious, public confirmation of deliberate backdoors or a remote "kill switch" remains absent. Still, specialists point out past flaws found in gear from various makers. Yet linking widespread breakdowns to one unified assault demands strong validation. What matters is proof - not just patterns - when connecting such events Nowhere is the worry over digital dependence more clear than in how fragile supply chains have become. 

A single compromised component might ripple across systems, simply because oversight lags behind complexity. Often, failures stem not from sabotage but from overlooked bugs or poor setup. Some breaches resemble accidents more than attacks, unfolding when neglected flaws are finally triggered. Rarely do we see deliberate tampering; far more common are gaps left open by routine mistakes. Hardware made abroad adds another layer of uncertainty, though the real issue may lie in how it's used, not where it's built Even now, global power struggles shape how cyber actions are seen. 

As nations admit using online assaults during warfare, such events fit within larger strategic patterns. Still, absent solid proof, today’s accusations serve more as tools in storytelling contests among states. Truth be told, understanding cyber warfare grows tougher each year, as unclear technology limits, narrow access to data, and national agendas overlap. Though shutting down systems secretly from afar might work on paper, without outside verification, such claims sit closer to suspicion than proof.

Terms And Conditions Grow Harder To Read As Platforms Limit Users’ Legal Rights Study Finds

 

Most people click "agree" without looking - yet those agreements keep getting harder to understand. Complexity rises, researchers note, just as user protections shrink. From Cambridge, a recent study points out expanded corporate access to personal information. Legal barriers grow tougher, making it more difficult to take firms to court. Lengthy clauses quietly reshape power, favoring businesses over individuals. Beginning with a project called the Transparency Hub, results emerge from systematic tracking of legal texts across 300-plus online platforms. 

Stored within it: twenty thousand iterations - past and present - of service conditions and privacy notices from apps like TikTok, among others. Over months, changes in wording reveal shifts in corporate approaches to personal information. What users agree to today may differ subtly from last year’s version, now preserved here. Visibility grows when updates accumulate, showing patterns once hidden beneath routine acceptance clicks. Surprisingly clear trends show a steady drop in how easily people can read service contracts. 

From 2016 to 2025, studies applying the Flesch-Kincaid method reveal nearly 86 percent demand skills typical of university readers. Because of this shift, grasping the full meaning behind digital consent has grown harder for most individuals. While signing up seems routine, the depth of understanding often lags behind. Away from mere complexity, attention turns to changing corporate approaches in handling disagreements. While once settled in open courtrooms, conflict resolution now leans on closed-door arbitration imposed by platform rules. 

A third-party referee reaches final judgments, yet clarity tends to fade behind closed processes. Users find their options shrinking when collective lawsuits are blocked. Even mediator choices sometimes rest with the businesses involved, quietly shaping outcomes. Newer artificial intelligence platforms like Anthropic and Perplexity AI also follow this pattern, embedding clauses that block participation in group litigation. Because of this, anyone feeling wronged has to file a personal claim - often pricier and weaker than joining others in court. A few companies allow narrow chances to decline the clause; however, acting fast after registration is usually required. 

Now appearing, this study arrives as officials across Europe weigh tighter rules for online services, focusing on effects tied to youth engagement. With France leading examples, followed by Spain, Portugal, and Denmark, governments test new steps aimed at tackling unease around digital privacy and web-based risks. One thing stands out: laws around online services are drifting further from what everyday users can grasp. 

Though written rules get longer and tighter, people must now sort through fine print that defines their digital freedoms - frequently unaware of what they’re agreeing to. While clarity lags behind complexity, personal responsibility quietly expands.

Gentlemen Ransomware Expands Reach with SystemBC Botnet Targeting Corporate Networks

 

A large-scale botnet powered by SystemBC proxy malware, comprising more than 1,570 infected machines, has been uncovered during an investigation into a Gentlemen ransomware attack carried out by an affiliate of the group. Evidence suggests that the majority of affected systems belong to corporate environments.

The Gentlemen ransomware-as-a-service (RaaS) operation surfaced around mid-2025, offering attackers multiple encryption tools. Its toolkit includes a Go-based locker capable of targeting Windows, Linux, NAS, and BSD systems, as well as a C-based encryptor designed for ESXi hypervisors.

In December, the group successfully breached one of Romania’s biggest energy companies, the Oltenia Energy Complex. More recently, The Adaptavist Group revealed a separate incident that was also claimed by the ransomware gang on its leak portal. While the operators have publicly listed around 320 victims—most from this year—researchers at Check Point note that affiliates are increasingly enhancing their infrastructure and attack methods.

During an incident response investigation, experts observed that a ransomware affiliate attempted to deploy SystemBC malware to enable stealthy delivery of malicious payloads. “Check Point Research observed victim telemetry from the relevant SystemBC command-and-control server, revealing a botnet of over 1,570 victims, with the infection profile strongly suggesting a focus on corporate and organizational environments rather than opportunistic consumer targeting,” the researchers say in a report today.

SystemBC, active since at least 2019, is commonly used for SOCKS5 proxy tunneling. Its ability to deliver additional malicious payloads has made it a preferred tool among ransomware operators. Even after a law enforcement disruption in 2024, the botnet has remained active. In fact, Black Lotus Labs reported last year that it was compromising approximately 1,500 commercial virtual private servers (VPS) daily to route harmful traffic.

According to Check Point, the majority of infections tied to Gentlemen’s use of SystemBC are concentrated in the United States, the United Kingdom, Germany, Australia, and Romania. "The specific Command and Control server that was used for the communication had infected a large number of victims across the globe. It is likely that the majority of those victims are companies and organizations, given that SystemBC is typically deployed as part of human-operated intrusion workflows rather than massive targeting," Check Point says.

Researchers have not yet determined exactly how SystemBC integrates into the Gentlemen ransomware ecosystem, nor whether multiple affiliates are using the malware simultaneously.

Infection Chain and Encryption Strategy

Although the initial entry point remains unclear, investigators found that attackers operated from a Domain Controller with Domain Admin privileges. They validated credentials, performed network reconnaissance, and deployed Cobalt Strike payloads across systems using Remote Procedure Call (RPC).

The attackers moved laterally by harvesting credentials with Mimikatz and executing commands remotely. The ransomware payload was staged on an internal server and distributed using built-in propagation techniques and Group Policy Objects (GPO), enabling near-simultaneous encryption across domain-connected machines.

The encryption process uses a hybrid cryptographic model combining X25519 (Diffie–Hellman) and XChaCha20, generating a unique ephemeral key pair for each file. Files smaller than 1 MB are fully encrypted, while larger files are partially encrypted in chunks ranging from approximately 1% to 9%.

Before initiating encryption, the ransomware terminates database services, backup tools, and virtualization processes, while also deleting Shadow Copies and logs. The ESXi variant additionally shuts down virtual machines to ensure disk-level encryption.

Although Gentlemen ransomware has maintained a relatively low profile, Check Point warns that it is rapidly evolving. The group is actively recruiting affiliates through underground forums and expanding its capabilities. Researchers believe that the integration of SystemBC, combined with tools like Cobalt Strike and a sizable botnet, indicates that the operation is advancing into a more sophisticated threat actor. "actively integrating into a broader toolchain of mature, post-exploitation frameworks and proxy infrastructure."

In addition to identifying indicators of compromise (IoCs), Check Point has released a YARA rule to assist organizations in detecting and mitigating similar threats.