Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Latest News

Petco Takes Vetco Clinics Site Offline After Major Data Exposure Leaves Customer Records Accessible Online

  Pet wellness brand Petco has temporarily taken parts of its Vetco Clinics website offline after a security failure left large amounts of ...

All the recent news you need to know

700+ Self-hosted Gits Impacted in a Wild Zero-day Exploit


Hackers actively exploit zero-day bug

Threat actors are abusing a zero-day bug in Gogs- a famous self-hosted Git service. The open source project hasn't fixed it yet.

About the attack 

Over 700 incidents have been impacted in these attacks. Wiz researchers described the bug as "accidental" and said the attack happened in July when they were analyzing malware on a compromised system. During the investigation, the experts "identified that the threat actor was leveraging a previously unknown flaw to compromise instances. They “responsibly disclosed this vulnerability to the maintainers."

The team informed Gogs' maintainers about the bug, who are now working on the fix. 

The flaw is known as CVE-2025-8110. It is primarily a bypass of an earlier patched flaw (CVE-2024-55947) that lets authorized users overwrite external repository files. This leads to remote code execution (RCE). 

About Gogs

Gogs is written in Go, it lets users host Git repositories on their cloud infrastructure or servers. It doesn't use GitHub or other third parties. 

Git and Gogs allow symbolic links that work as shortcuts to another file. They can also point to objects outside the repository. The Gogs API also allows file configuration outside the regular Git protocol. 

Patch update 

The previous patch didn't address such symbolic links exploit and this lets threat actors to leverage the flaw and remotely deploy malicious codes. 

While researchers haven't linked the attacks to any particular gang or person, they believe the threat actors are based in Asia.

Other incidents 

Last year, Mandiant found Chinese state-sponsored hackers abusing a critical flaw in F5 through Supershell, and selling the access to impacted UK government agencies, US defense organizations, and others.

Researchers still don't know what threat actors are doing with access to compromised incidents. "In the environments where we have visibility, the malware was removed quickly so we did not see any post-exploitation activity. We don't have visibility into other compromised servers, beyond knowing they're compromised," researchers said.

How to stay safe?

Wiz has advised users to immediately disable open-registration (if not needed) and control internet exposure by shielding self-hosted Git services via VPN. Users should be careful of new repositories with unexpected usage of the PutContents API or random 8-character names. 

For more details, readers can see the full list of indicators published by the researchers.



December Patch Tuesday Brings Critical Microsoft, Notepad++, Fortinet, and Ivanti Security Fixes

 


While December's Patch Tuesday gave us a lighter release than normal, it arrived with several urgent vulnerabilities that need attention immediately. In all, Microsoft released 57 CVE patches to finish out 2025, including one flaw already under active exploitation and two others that were publicly disclosed. Notably, critical security updates also came from Notepad++, Ivanti, and Fortinet this cycle, making it particularly important for system administrators and enterprise security teams alike. 

The most critical of Microsoft's disclosures this month is CVE-2025-62221, a Windows Cloud Files Mini Filter Driver bug rated 7.8 on the CVSS scale. It allows for privilege escalation: an attacker who has code execution rights can leverage the bug to escalate to full system-level access. Researchers say this kind of bug is exploited on a regular basis in real-world intrusions, and "patching ASAP" is critical. Microsoft hasn't disclosed yet which threat actors are actively exploiting this flaw; however, experts explain that bugs like these "tend to pop up in almost every big compromise and are often used as stepping stones to further breach". 

Another two disclosures from Microsoft were CVE-2025-54100 in PowerShell and CVE-2025-64671, impacting GitHub Copilot for JetBrains. Although these are not confirmed to be exploited, they were publicly disclosed ahead of patching. Graded at 8.4, the Copilot vulnerability would have allowed for remote code execution via malicious cross-prompt injection, provided a user is tricked into opening untrusted files or connecting to compromised servers. Security researchers expect more vulnerabilities of this type to emerge as AI-integrated development tools expand in usage. 

But one of the more ominous developments outside Microsoft belongs to Notepad++. The popular open-source editor pushed out version 8.8.9 to patch a weakness in the way updates were checked for authenticity. Attackers were managing to intercept network traffic from the WinGUp update client, then redirecting users to rogue servers, where malicious files were downloaded instead of legitimate updates. There are reports that threat groups in China were actively testing and exploiting this vulnerability. Indeed, according to the maintainer, "Due to the improper update integrity validation, an adversary was able to manipulate the download"; therefore, users should upgrade as soon as possible. 

Fortinet also patched two critical authentication bypass vulnerabilities, CVE-2025-59718 and CVE-2025-59719, in FortiOS and several related products. The bugs enable hackers to bypass FortiCloud SSO authentication using crafted SAML messages, which only works if SSO has been enabled. Administrators are advised to disable the feature until they can upgrade to patched builds to avoid unauthorized access. Rounding out the disclosures, Ivanti released a fix for CVE-2025-10573, a severe cross-site scripting vulnerability in its Endpoint Manager. The bug allows an attacker to register fake endpoints and inject malicious JavaScript into the administrator dashboard. Viewed, this could serve an attacker full control over the session without credentials. There has been no observed exploitation so far, but researchers warn that it is likely attackers will reverse engineer the fix soon, making for a deployment environment of haste.

Ivanti Flags Critical Endpoint Manager Flaw Allowing Remote Code Execution

 

Ivanti is urging customers to quickly patch a critical vulnerability in its Endpoint Manager (EPM) product that could let remote attackers execute arbitrary JavaScript in administrator sessions through low-complexity cross-site scripting (XSS) attacks.The issue, tracked as CVE-2025-10573, affects the EPM web service and can be abused without authentication, but does require some user interaction to trigger.

The flaw stems from how Ivanti EPM handles managed endpoints presented to the primary web service. According to Rapid7 researcher Ryan Emmons, an attacker with unauthenticated access to the EPM web interface can register bogus managed endpoints and inject malicious JavaScript into the administrator dashboard. Once an EPM administrator views a poisoned dashboard widget as part of routine use, the injected code executes in the browser, allowing the attacker to hijack the admin session and act with their privileges.

Patch availability and exposure

Ivanti has released EPM 2024 SU4 SR1 to remediate CVE-2025-10573 and recommends customers install this update as soon as possible. The company stressed that EPM is designed to operate behind perimeter defenses and not be directly exposed to the public internet, which should lower practical risk where deployments follow guidance.However, data from the Shadowserver Foundation shows hundreds of Ivanti EPM instances reachable online, with the highest counts in the United States, Germany, and Japan, significantly increasing potential attack surface for those organizations.

Alongside the critical bug, Ivanti shipped fixes for three other high‑severity vulnerabilities affecting EPM, including CVE-2025-13659 and CVE-2025-13662. These two issues could also enable unauthenticated remote attackers to execute arbitrary code on vulnerable systems under certain conditions. Successful exploitation of the newly disclosed high‑severity flaws requires user interaction and either connecting to an untrusted core server or importing untrusted configuration files, which slightly raises the bar for real-world attacks.

Threat landscape and prior exploitation

Ivanti stated there is currently no evidence that any of the newly patched flaws have been exploited in the wild and credited its responsible disclosure program for bringing them to light. Nonetheless, EPM vulnerabilities have been frequent targets, and U.S. Cybersecurity and Infrastructure Security Agency (CISA) has repeatedly added Ivanti EPM bugs to its catalog of exploited vulnerabilities. In 2024, CISA ordered federal agencies to urgently patch multiple Ivanti EPM issues, including three critical flaws flagged in March and another actively exploited vulnerability mandated for remediation in October.

Rising Prompt Injection Threats and How Users Can Stay Secure

 


The generative AI revolution is reshaping the foundations of modern work in an age when organizations are increasingly relying on large language models like ChatGPT and Claude to speed up research, synthesize complex information, and interpret extensive data sets more rapidly with unprecedented ease, which is accelerating research, synthesizing complex information, and analyzing extensive data sets. 

However, this growing dependency on text-driven intelligence is associated with an escalating and silent risk. The threat of prompt injection is increasing as these systems become increasingly embedded in enterprise workflows, posing a new challenge to cybersecurity teams. Malicious actors have the ability to manipulate the exact instructions that lead an LLM to reveal confidential information, alter internal information, or corrupt proprietary systems in such ways that they are extremely difficult to detect and even more difficult to reverse. 

Malicious actors can manipulate the very instructions that guide an LLM. Any organisation that deploys its own artificial intelligence infrastructure or integrates sensitive data into third-party models is aware that safeguarding against such attacks has become an urgent concern. Organisations must remain vigilant and know how to exploit such vulnerabilities. 

It is becoming increasingly evident that as organisations are implementing AI-driven workflows, a new class of technology—agent AI—is beginning to redefine how digital systems work for the better. These more advanced models, as opposed to traditional models that are merely reactive to prompts, are capable of collecting information, reasoning through tasks, and serving as real-time assistants that can be incorporated into everything from customer support channels to search engine solutions. 

There has been a shift into the browser itself, where AI-enhanced interfaces are rapidly becoming a feature rather than a novelty. However, along with that development, corresponding risks have also increased. 

It is important to keep in mind that, regardless of what a browser is developed by, the AI components that are embedded into it — whether search engines, integrated chatbots, or automated query systems — remain vulnerable to the inherent flaws of the information they rely on. This is where prompt injection attacks emerge as a particularly troubling threat. Attackers can manipulate an LLM so that it performs unintended or harmful actions as a result of exploiting inaccuracies, gaps, or unguarded instructions within its training or operational data. 

Despite the sophisticated capabilities of agentic artificial intelligence, these attacks reveal an important truth: although it brings users and enterprises powerful capabilities, it also exposes them to vulnerabilities that traditional browsing tools have not been exposed to. As a matter of fact, prompt injection is often far more straightforward than many organisations imagine, as well as far more harmful. 

There are several examples of how an AI system can be manipulated to reveal sensitive information without even recognising the fact that the document is tainted, such as a PDF embedded with hidden instructions, by an attacker. It has also been demonstrated that websites seeded with invisible or obfuscated text can affect how an AI agent interprets queries during information retrieval, steering the model in dangerous or unintended directions. 

It is possible to manipulate public-facing chatbots, which are intended to improve customer engagement, in order to produce inappropriate, harmful, or policy-violating responses through carefully crafted prompts. These examples illustrate that there are numerous risks associated with inadvertent data leaks, reputational repercussions, as well as regulatory violations as enterprises begin to use AI-assisted decision-making and workflow automation more frequently. 

In order to combat this threat, LLMs need to be treated with the same level of rigour that is usually reserved for high-value software systems. The use of adversarial testing and red-team methods has gained popularity among security teams as a way of determining whether a model can be misled by hidden or incorrect inputs. 

There has been a growing focus on strengthening the structure of prompts, ensuring there is a clear boundary between user-driven content and system instructions, which has become a critical defence against fraud, and input validation measures have been established to filter out suspicious patterns before they reach the model's operational layer. Monitoring outputs continuously is equally vital, which allows organisations to flag anomalies and enforce safeguards that prevent inappropriate or unsafe behaviour. 

The model needs to be restricted from accessing unvetted external data, context management rules must be redesigned, and robust activity logs must be maintained in order to reduce the available attack surface while ensuring a more reliable oversight system. However, despite taking these precautions to protect the system, the depths of the threat landscape often require expert human judgment to assess. 

Manual penetration testing has emerged as a decisive tool, providing insight far beyond the capabilities of automated scanners that are capable of detecting malicious code. 

Using skilled testers, it is possible to reproduce the thought processes and creativity of real attackers. This involves experimenting with nuanced prompt manipulations, embedded instruction chains, and context-poisoning techniques that automatic tools fail to detect. Their assessments also reveal whether security controls actually perform as intended. They examine whether sanitisation filters malicious content properly, whether context restrictions prevent impersonation, and whether output filters intervene when the model produces risky content. 

A human-led testing process provides organisations with a stronger assurance that their AI deployments will withstand the increasingly sophisticated attempts at compromising them through the validation of both vulnerabilities and the effectiveness of subsequent fixes. In order for user' organisation to become resilient against indirect prompt injection, it requires much more than isolated technical fixes. It calls for a coordinated, multilayered defence that encompasses both the policy environment, the infrastructure, and the day-to-day operational discipline of users' organisations. 

A holistic approach to security is increasingly being adopted by security teams to reduce the attack surface as well as catch suspicious behaviour early and quickly. As part of this effort, dedicated detection systems are deployed, which will identify and block both subtle, indirect manipulations that might affect an artificial intelligence model's behaviour before they can occur. Input validation and sanitisation protocols are a means of strengthening these controls. 

They prevent hidden instructions from slipping into an LLM's context by screening incoming data, regardless of whether it is sourced from users, integrated tools, or external web sources. In addition to establishing firm content handling policies, it is also crucial to establish a policy defining the types of information that an artificial intelligence system can process, as well as the types of sources that can be regarded as trustworthy. 

A majority of organisations today use allowlisting frameworks as part of their security measures, and are closely monitoring unverified or third-party content in order to minimise exposure to contaminated data. Enterprises are adopting strict privilege-separation measures at the architectural level so as to ensure that artificial intelligence systems have minimal access to sensitive information as well as being unable to perform high-risk actions without explicit authorisations. 

In the event that an injection attempt is successful, this controlled environment helps contain the damage. It adds another level of complexity to the situation when shadow AI begins to emerge—employees adopting unapproved tools without supervision. Consequently, organisations are turning to monitoring and governance platforms to provide insight into how and where AI tools are being implemented across the workforce. These platforms enable access controls to be enforced and unmanaged systems to be prevented from becoming weak entry points for attackers. 

As an integral component of technical and procedural safeguards, user education is still an essential component of frontline defences. 

Training programs that teach employees how to recognise and distinguish sanctioned tools from unapproved ones will help strengthen frontline defences in the future. As a whole, these measures form a comprehensive strategy to counter the evolving threat of prompt injection in enterprise environments by aligning technology, policy, and awareness. 

It is becoming increasingly important for enterprises to secure these systems as the adoption of generative AI and agentic AI accelerates. As a result of this development, companies are at a pivotal point where proactive investment in artificial intelligence security is not a luxury but an essential part of preserving trust, continuity, and competitiveness. 

Aside from the existing safeguards that organisations have already put in place, organisations can strengthen their posture even further by incorporating AI risk assessments into broader cybersecurity strategies, conducting continuous model evaluations, as well as collaborating with external experts. 

An organisation that encourages a culture of transparency can reduce the probability of unnoticed manipulation to a substantial degree if anomalies are reported early and employees understand both the power and pitfalls of Artificial Intelligence. It is essential to embrace innovation without losing sight of caution in order to build AI systems that are not only intelligent, but also resilient, accountable, and closely aligned with human oversight. 

By harnessing the transformative potential of modern AI and making security a priority, businesses can ensure that the next chapter of digital transformation is not just driven by security, but driven by it as a core value, not an afterthought.

UK Cyber Agency says AI Prompt-injection Attacks May Persist for Years

 



The United Kingdom’s National Cyber Security Centre has issued a strong warning about a spreading weakness in artificial intelligence systems, stating that prompt-injection attacks may never be fully solved. The agency explained that this risk is tied to the basic design of large language models, which read all text as part of a prediction sequence rather than separating instructions from ordinary content. Because of this, malicious actors can insert hidden text that causes a system to break its own rules or execute unintended actions.

The NCSC noted that this is not a theoretical concern. Several demonstrations have already shown how attackers can force AI models to reveal internal instructions or sensitive prompts, and other tests have suggested that tools used for coding, search, or even résumé screening can be manipulated by embedding concealed commands inside user-supplied text.

David C, a technical director at the NCSC, cautioned that treating prompt injection as a familiar software flaw is a mistake. He observed that many security professionals compare it to SQL injection, an older type of vulnerability that allowed criminals to send harmful instructions to databases by placing commands where data was expected. According to him, this comparison is dangerous because it encourages the belief that both problems can be fixed in similar ways, even though the underlying issues are completely different.

He illustrated this difference with a practical scenario. If a recruiter uses an AI system to filter applications, a job seeker could hide a message in the document that tells the model to ignore existing rules and approve the résumé. Since the model does not distinguish between what it should follow and what it should simply read, it may carry out the hidden instruction.

Researchers are trying to design protective techniques, including systems that attempt to detect suspicious text or training methods that help models recognise the difference between instructions and information. However, the agency emphasised that all these strategies are trying to impose a separation that the technology does not naturally have. Traditional solutions for similar problems, such as Confused Deputy vulnerabilities, do not translate well to language models, leaving large gaps in protection.

The agency also stressed upon a security idea recently shared on social media that attempted to restrict model behaviour. Even the creator of that proposal admitted that it would sharply reduce the abilities of AI systems, showing how complex and limiting effective safeguards may become.

The NCSC stated that prompt-injection threats are likely to remain a lasting challenge rather than a fixable flaw. The most realistic path is to reduce the chances of an attack or limit the damage it can cause through strict system design, thoughtful deployment, and careful day-to-day operation. The agency pointed to the history of SQL injection, which once caused widespread breaches until better security standards were adopted. With AI now being integrated into many applications, they warned that a similar wave of compromises could occur if organisations do not treat prompt injection as a serious and ongoing risk.


CastleLoader Widens Its Reach as GrayBravo’s MaaS Infrastructure Fuels Multiple Threat Clusters

 

Researchers have now identified four distinct threat activity clusters associated with the malware loader CastleLoader, bolstering previous estimates that the tool was being supplied to multiple cybercriminal groups through a malware-as-a-service model. In this, the operator of this ecosystem has been dubbed GrayBravo by Recorded Future's Insikt Group, which had previously tracked the same actor under the identifier TAG-150. 

CastleLoader emerged in early 2025 and has since evolved into a dynamically developing malware distribution apparatus. Recorded Future's latest analysis underscores GrayBravo's technical sophistication, the ability to promptly adapt operations after public reporting, and the growing infrastructure currently supporting multiple threat campaigns. 

GrayBravo's toolkit consists of several components, including a remote access trojan dubbed CastleRAT and a modular malware framework named CastleBot. CastleBot is composed of three interconnected main elements: a shellcode stager, a loader, and a core backdoor. The loader injects the backdoor into memory, following which the malware communicates with command-and-control servers to receive instructions. These further enable downloading and executing a variety of payloads in the form of DLL, EXE, and PE files. CastleLoader has been used to distribute various well-known malware families, including RedLine Stealer, StealC, DeerStealer, NetSupport RAT, SectopRAT, MonsterV2, WARMCOOKIE, and other loaders, such as Hijack Loader, which demonstrates how well the CastleBot and CastleLoader combo serves as a widely useful tool.  

Recorded Future's new discoveries uncover four separate operational clusters, each using CastleLoader for its purposes. One cluster, attributed to TAG-160, has been operational since March 2025, targeting the logistics industry by leveraging phishing lures and ClickFix for CastleLoader delivery. Another one, referred to as TAG-161, started its operations in June 2025 and has used Booking.com-themed ClickFix campaigns for spreading CastleLoader and Matanbuchus 3.0. One more cluster has utilized infrastructure that spoofs Booking.com, complementing the spoofing with ClickFix and leveraging Steam Community pages as dead-drop resolvers to distribute CastleRAT via CastleLoader. A fourth cluster, which has been active since April 2025, leverages malvertising and fake update notices posing as Zabbix and RVTools for delivering CastleLoader together with NetSupport RAT. 

The actor's infrastructure spans from victim-facing command-and-control servers attributed to CastleLoader, CastleRAT, SectopRAT, and WARMCOOKIE to several other VPS servers, presumably held as spares. Of special interest are the TAG-160 operations, which feature the use of hijacked or fake accounts on freight-matching platforms, including DAT Freight & Analytics and Loadlink Technologies, to create rather plausible phishing messages. The customised lures suggest that the operators have extensive domain knowledge of logistics processes and related communication practices in the industry. 

Recorded Future concluded that the continued expansion in the use of CastleLoader by independent threat groups testifies to how rapidly such advanced and adaptive tools can diffuse in the cybercrime ecosystem once they get credit. Supporting this trend, the recent case documented by the researchers at Blackpoint involved a Python-based dropper chain in which the attackers used ClickFix to download an archive, stage files in the AppData directory, and execute a Python stager that rebuilt and launched a CastleLoader payload. Continued evolution of these delivery methods shows that the malware-as-a-service model behind CastleLoader is really enabling broader and more sophisticated operations through multiple threat actors.

Featured