Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Latest News

FBI Alerts Public about Scammers Using Altered Online Photos to Stage Fake Kidnappings

  The Federal Bureau of Investigation has issued a new advisory warning people about a growing extortion tactic in which criminals take phot...

All the recent news you need to know

Petco Takes Vetco Clinics Site Offline After Major Data Exposure Leaves Customer Records Accessible Online

 

Pet wellness brand Petco has temporarily taken parts of its Vetco Clinics website offline after a security failure left large amounts of customer information publicly accessible.

TechCrunch notified the company about the exposed Vetco customer and pet data, after which Petco acknowledged the issue in a statement, saying it is investigating the incident at its veterinary services arm. The company declined to share further details.

The lapse meant that anyone online could directly download customer files from the Vetco site without needing an account or login credentials. At least one customer file was publicly visible and had even been indexed by Google, making it searchable.

According to data reviewed by TechCrunch, the exposed records included visit notes, medical histories, prescriptions, vaccination details, and other documents linked to Vetco customers and their pets.

These files contained personal information such as customer names, home addresses, phone numbers and email addresses, along with clinic locations, medical evaluations, diagnoses, test results, treatment details, itemized costs, veterinarian names, signed consent forms, and service dates.

Pet details were also disclosed, including pet names, species, breed, sex, age, date of birth, microchip numbers, medical vitals, and prescription histories.

TechCrunch reported the flaw to Petco on Friday. The company acknowledged the exposure on Tuesday after receiving follow-up communication that included examples of the leaked files.

Petco spokesperson Ventura Olvera told TechCrunch that the company has “implemented, and will continue to implement, additional measures to further strengthen the security of our systems,” though Petco did not provide proof of these measures. Olvera also declined to clarify whether the company has logging tools capable of determining whether the data was accessed or extracted during the exposure.

The vulnerability stems from how Vetco’s website generates downloadable PDFs for customers. Vetco’s portal, petpass.com, gives customers access to their vet records. However, TechCrunch discovered that the PDF-generation page was left publicly accessible without any password protection.

This allowed anyone to retrieve sensitive documents simply by altering the URL to include a customer’s unique identification number. Because Vetco’s customer IDs are sequential, adjusting the number by small increments exposed other customers’ records as well.

By checking ID numbers in increments of 100,000, TechCrunch estimated that the flaw could have exposed information belonging to millions of Petco customers.

The issue is identified as an insecure direct object reference (IDOR), a common security oversight where servers fail to verify whether the requester is authorized to access specific files.

It remains unknown how long the data was publicly exposed, but the record visible on Google dated back to mid-2020.

This marks the third data incident involving Petco in 2025, according to TechCrunch’s reporting.

Earlier in the year, hackers linked to the Scattered Lapsus$ Hunters group reportedly stole a large trove of customer data from a Salesforce-hosted Petco database and sought ransom payments to avoid leaking the data.

In September, Petco disclosed another breach involving a misconfigured software setting that mistakenly made certain files available online. That incident exposed highly sensitive data—including Social Security numbers, driver’s license details, and payment information like credit and debit card numbers.

Olvera did not confirm how many customers were affected by the September breach. Under California law, organizations must publicly report breaches affecting more than 500 state residents.

TechCrunch believes the newly discovered Vetco data exposure is a separate event because Petco had already begun notifying customers about the earlier breach months prior.

700+ Self-hosted Gits Impacted in a Wild Zero-day Exploit


Hackers actively exploit zero-day bug

Threat actors are abusing a zero-day bug in Gogs- a famous self-hosted Git service. The open source project hasn't fixed it yet.

About the attack 

Over 700 incidents have been impacted in these attacks. Wiz researchers described the bug as "accidental" and said the attack happened in July when they were analyzing malware on a compromised system. During the investigation, the experts "identified that the threat actor was leveraging a previously unknown flaw to compromise instances. They “responsibly disclosed this vulnerability to the maintainers."

The team informed Gogs' maintainers about the bug, who are now working on the fix. 

The flaw is known as CVE-2025-8110. It is primarily a bypass of an earlier patched flaw (CVE-2024-55947) that lets authorized users overwrite external repository files. This leads to remote code execution (RCE). 

About Gogs

Gogs is written in Go, it lets users host Git repositories on their cloud infrastructure or servers. It doesn't use GitHub or other third parties. 

Git and Gogs allow symbolic links that work as shortcuts to another file. They can also point to objects outside the repository. The Gogs API also allows file configuration outside the regular Git protocol. 

Patch update 

The previous patch didn't address such symbolic links exploit and this lets threat actors to leverage the flaw and remotely deploy malicious codes. 

While researchers haven't linked the attacks to any particular gang or person, they believe the threat actors are based in Asia.

Other incidents 

Last year, Mandiant found Chinese state-sponsored hackers abusing a critical flaw in F5 through Supershell, and selling the access to impacted UK government agencies, US defense organizations, and others.

Researchers still don't know what threat actors are doing with access to compromised incidents. "In the environments where we have visibility, the malware was removed quickly so we did not see any post-exploitation activity. We don't have visibility into other compromised servers, beyond knowing they're compromised," researchers said.

How to stay safe?

Wiz has advised users to immediately disable open-registration (if not needed) and control internet exposure by shielding self-hosted Git services via VPN. Users should be careful of new repositories with unexpected usage of the PutContents API or random 8-character names. 

For more details, readers can see the full list of indicators published by the researchers.



December Patch Tuesday Brings Critical Microsoft, Notepad++, Fortinet, and Ivanti Security Fixes

 


While December's Patch Tuesday gave us a lighter release than normal, it arrived with several urgent vulnerabilities that need attention immediately. In all, Microsoft released 57 CVE patches to finish out 2025, including one flaw already under active exploitation and two others that were publicly disclosed. Notably, critical security updates also came from Notepad++, Ivanti, and Fortinet this cycle, making it particularly important for system administrators and enterprise security teams alike. 

The most critical of Microsoft's disclosures this month is CVE-2025-62221, a Windows Cloud Files Mini Filter Driver bug rated 7.8 on the CVSS scale. It allows for privilege escalation: an attacker who has code execution rights can leverage the bug to escalate to full system-level access. Researchers say this kind of bug is exploited on a regular basis in real-world intrusions, and "patching ASAP" is critical. Microsoft hasn't disclosed yet which threat actors are actively exploiting this flaw; however, experts explain that bugs like these "tend to pop up in almost every big compromise and are often used as stepping stones to further breach". 

Another two disclosures from Microsoft were CVE-2025-54100 in PowerShell and CVE-2025-64671, impacting GitHub Copilot for JetBrains. Although these are not confirmed to be exploited, they were publicly disclosed ahead of patching. Graded at 8.4, the Copilot vulnerability would have allowed for remote code execution via malicious cross-prompt injection, provided a user is tricked into opening untrusted files or connecting to compromised servers. Security researchers expect more vulnerabilities of this type to emerge as AI-integrated development tools expand in usage. 

But one of the more ominous developments outside Microsoft belongs to Notepad++. The popular open-source editor pushed out version 8.8.9 to patch a weakness in the way updates were checked for authenticity. Attackers were managing to intercept network traffic from the WinGUp update client, then redirecting users to rogue servers, where malicious files were downloaded instead of legitimate updates. There are reports that threat groups in China were actively testing and exploiting this vulnerability. Indeed, according to the maintainer, "Due to the improper update integrity validation, an adversary was able to manipulate the download"; therefore, users should upgrade as soon as possible. 

Fortinet also patched two critical authentication bypass vulnerabilities, CVE-2025-59718 and CVE-2025-59719, in FortiOS and several related products. The bugs enable hackers to bypass FortiCloud SSO authentication using crafted SAML messages, which only works if SSO has been enabled. Administrators are advised to disable the feature until they can upgrade to patched builds to avoid unauthorized access. Rounding out the disclosures, Ivanti released a fix for CVE-2025-10573, a severe cross-site scripting vulnerability in its Endpoint Manager. The bug allows an attacker to register fake endpoints and inject malicious JavaScript into the administrator dashboard. Viewed, this could serve an attacker full control over the session without credentials. There has been no observed exploitation so far, but researchers warn that it is likely attackers will reverse engineer the fix soon, making for a deployment environment of haste.

Ivanti Flags Critical Endpoint Manager Flaw Allowing Remote Code Execution

 

Ivanti is urging customers to quickly patch a critical vulnerability in its Endpoint Manager (EPM) product that could let remote attackers execute arbitrary JavaScript in administrator sessions through low-complexity cross-site scripting (XSS) attacks.The issue, tracked as CVE-2025-10573, affects the EPM web service and can be abused without authentication, but does require some user interaction to trigger.

The flaw stems from how Ivanti EPM handles managed endpoints presented to the primary web service. According to Rapid7 researcher Ryan Emmons, an attacker with unauthenticated access to the EPM web interface can register bogus managed endpoints and inject malicious JavaScript into the administrator dashboard. Once an EPM administrator views a poisoned dashboard widget as part of routine use, the injected code executes in the browser, allowing the attacker to hijack the admin session and act with their privileges.

Patch availability and exposure

Ivanti has released EPM 2024 SU4 SR1 to remediate CVE-2025-10573 and recommends customers install this update as soon as possible. The company stressed that EPM is designed to operate behind perimeter defenses and not be directly exposed to the public internet, which should lower practical risk where deployments follow guidance.However, data from the Shadowserver Foundation shows hundreds of Ivanti EPM instances reachable online, with the highest counts in the United States, Germany, and Japan, significantly increasing potential attack surface for those organizations.

Alongside the critical bug, Ivanti shipped fixes for three other high‑severity vulnerabilities affecting EPM, including CVE-2025-13659 and CVE-2025-13662. These two issues could also enable unauthenticated remote attackers to execute arbitrary code on vulnerable systems under certain conditions. Successful exploitation of the newly disclosed high‑severity flaws requires user interaction and either connecting to an untrusted core server or importing untrusted configuration files, which slightly raises the bar for real-world attacks.

Threat landscape and prior exploitation

Ivanti stated there is currently no evidence that any of the newly patched flaws have been exploited in the wild and credited its responsible disclosure program for bringing them to light. Nonetheless, EPM vulnerabilities have been frequent targets, and U.S. Cybersecurity and Infrastructure Security Agency (CISA) has repeatedly added Ivanti EPM bugs to its catalog of exploited vulnerabilities. In 2024, CISA ordered federal agencies to urgently patch multiple Ivanti EPM issues, including three critical flaws flagged in March and another actively exploited vulnerability mandated for remediation in October.

Rising Prompt Injection Threats and How Users Can Stay Secure

 


The generative AI revolution is reshaping the foundations of modern work in an age when organizations are increasingly relying on large language models like ChatGPT and Claude to speed up research, synthesize complex information, and interpret extensive data sets more rapidly with unprecedented ease, which is accelerating research, synthesizing complex information, and analyzing extensive data sets. 

However, this growing dependency on text-driven intelligence is associated with an escalating and silent risk. The threat of prompt injection is increasing as these systems become increasingly embedded in enterprise workflows, posing a new challenge to cybersecurity teams. Malicious actors have the ability to manipulate the exact instructions that lead an LLM to reveal confidential information, alter internal information, or corrupt proprietary systems in such ways that they are extremely difficult to detect and even more difficult to reverse. 

Malicious actors can manipulate the very instructions that guide an LLM. Any organisation that deploys its own artificial intelligence infrastructure or integrates sensitive data into third-party models is aware that safeguarding against such attacks has become an urgent concern. Organisations must remain vigilant and know how to exploit such vulnerabilities. 

It is becoming increasingly evident that as organisations are implementing AI-driven workflows, a new class of technology—agent AI—is beginning to redefine how digital systems work for the better. These more advanced models, as opposed to traditional models that are merely reactive to prompts, are capable of collecting information, reasoning through tasks, and serving as real-time assistants that can be incorporated into everything from customer support channels to search engine solutions. 

There has been a shift into the browser itself, where AI-enhanced interfaces are rapidly becoming a feature rather than a novelty. However, along with that development, corresponding risks have also increased. 

It is important to keep in mind that, regardless of what a browser is developed by, the AI components that are embedded into it — whether search engines, integrated chatbots, or automated query systems — remain vulnerable to the inherent flaws of the information they rely on. This is where prompt injection attacks emerge as a particularly troubling threat. Attackers can manipulate an LLM so that it performs unintended or harmful actions as a result of exploiting inaccuracies, gaps, or unguarded instructions within its training or operational data. 

Despite the sophisticated capabilities of agentic artificial intelligence, these attacks reveal an important truth: although it brings users and enterprises powerful capabilities, it also exposes them to vulnerabilities that traditional browsing tools have not been exposed to. As a matter of fact, prompt injection is often far more straightforward than many organisations imagine, as well as far more harmful. 

There are several examples of how an AI system can be manipulated to reveal sensitive information without even recognising the fact that the document is tainted, such as a PDF embedded with hidden instructions, by an attacker. It has also been demonstrated that websites seeded with invisible or obfuscated text can affect how an AI agent interprets queries during information retrieval, steering the model in dangerous or unintended directions. 

It is possible to manipulate public-facing chatbots, which are intended to improve customer engagement, in order to produce inappropriate, harmful, or policy-violating responses through carefully crafted prompts. These examples illustrate that there are numerous risks associated with inadvertent data leaks, reputational repercussions, as well as regulatory violations as enterprises begin to use AI-assisted decision-making and workflow automation more frequently. 

In order to combat this threat, LLMs need to be treated with the same level of rigour that is usually reserved for high-value software systems. The use of adversarial testing and red-team methods has gained popularity among security teams as a way of determining whether a model can be misled by hidden or incorrect inputs. 

There has been a growing focus on strengthening the structure of prompts, ensuring there is a clear boundary between user-driven content and system instructions, which has become a critical defence against fraud, and input validation measures have been established to filter out suspicious patterns before they reach the model's operational layer. Monitoring outputs continuously is equally vital, which allows organisations to flag anomalies and enforce safeguards that prevent inappropriate or unsafe behaviour. 

The model needs to be restricted from accessing unvetted external data, context management rules must be redesigned, and robust activity logs must be maintained in order to reduce the available attack surface while ensuring a more reliable oversight system. However, despite taking these precautions to protect the system, the depths of the threat landscape often require expert human judgment to assess. 

Manual penetration testing has emerged as a decisive tool, providing insight far beyond the capabilities of automated scanners that are capable of detecting malicious code. 

Using skilled testers, it is possible to reproduce the thought processes and creativity of real attackers. This involves experimenting with nuanced prompt manipulations, embedded instruction chains, and context-poisoning techniques that automatic tools fail to detect. Their assessments also reveal whether security controls actually perform as intended. They examine whether sanitisation filters malicious content properly, whether context restrictions prevent impersonation, and whether output filters intervene when the model produces risky content. 

A human-led testing process provides organisations with a stronger assurance that their AI deployments will withstand the increasingly sophisticated attempts at compromising them through the validation of both vulnerabilities and the effectiveness of subsequent fixes. In order for user' organisation to become resilient against indirect prompt injection, it requires much more than isolated technical fixes. It calls for a coordinated, multilayered defence that encompasses both the policy environment, the infrastructure, and the day-to-day operational discipline of users' organisations. 

A holistic approach to security is increasingly being adopted by security teams to reduce the attack surface as well as catch suspicious behaviour early and quickly. As part of this effort, dedicated detection systems are deployed, which will identify and block both subtle, indirect manipulations that might affect an artificial intelligence model's behaviour before they can occur. Input validation and sanitisation protocols are a means of strengthening these controls. 

They prevent hidden instructions from slipping into an LLM's context by screening incoming data, regardless of whether it is sourced from users, integrated tools, or external web sources. In addition to establishing firm content handling policies, it is also crucial to establish a policy defining the types of information that an artificial intelligence system can process, as well as the types of sources that can be regarded as trustworthy. 

A majority of organisations today use allowlisting frameworks as part of their security measures, and are closely monitoring unverified or third-party content in order to minimise exposure to contaminated data. Enterprises are adopting strict privilege-separation measures at the architectural level so as to ensure that artificial intelligence systems have minimal access to sensitive information as well as being unable to perform high-risk actions without explicit authorisations. 

In the event that an injection attempt is successful, this controlled environment helps contain the damage. It adds another level of complexity to the situation when shadow AI begins to emerge—employees adopting unapproved tools without supervision. Consequently, organisations are turning to monitoring and governance platforms to provide insight into how and where AI tools are being implemented across the workforce. These platforms enable access controls to be enforced and unmanaged systems to be prevented from becoming weak entry points for attackers. 

As an integral component of technical and procedural safeguards, user education is still an essential component of frontline defences. 

Training programs that teach employees how to recognise and distinguish sanctioned tools from unapproved ones will help strengthen frontline defences in the future. As a whole, these measures form a comprehensive strategy to counter the evolving threat of prompt injection in enterprise environments by aligning technology, policy, and awareness. 

It is becoming increasingly important for enterprises to secure these systems as the adoption of generative AI and agentic AI accelerates. As a result of this development, companies are at a pivotal point where proactive investment in artificial intelligence security is not a luxury but an essential part of preserving trust, continuity, and competitiveness. 

Aside from the existing safeguards that organisations have already put in place, organisations can strengthen their posture even further by incorporating AI risk assessments into broader cybersecurity strategies, conducting continuous model evaluations, as well as collaborating with external experts. 

An organisation that encourages a culture of transparency can reduce the probability of unnoticed manipulation to a substantial degree if anomalies are reported early and employees understand both the power and pitfalls of Artificial Intelligence. It is essential to embrace innovation without losing sight of caution in order to build AI systems that are not only intelligent, but also resilient, accountable, and closely aligned with human oversight. 

By harnessing the transformative potential of modern AI and making security a priority, businesses can ensure that the next chapter of digital transformation is not just driven by security, but driven by it as a core value, not an afterthought.

UK Cyber Agency says AI Prompt-injection Attacks May Persist for Years

 



The United Kingdom’s National Cyber Security Centre has issued a strong warning about a spreading weakness in artificial intelligence systems, stating that prompt-injection attacks may never be fully solved. The agency explained that this risk is tied to the basic design of large language models, which read all text as part of a prediction sequence rather than separating instructions from ordinary content. Because of this, malicious actors can insert hidden text that causes a system to break its own rules or execute unintended actions.

The NCSC noted that this is not a theoretical concern. Several demonstrations have already shown how attackers can force AI models to reveal internal instructions or sensitive prompts, and other tests have suggested that tools used for coding, search, or even résumé screening can be manipulated by embedding concealed commands inside user-supplied text.

David C, a technical director at the NCSC, cautioned that treating prompt injection as a familiar software flaw is a mistake. He observed that many security professionals compare it to SQL injection, an older type of vulnerability that allowed criminals to send harmful instructions to databases by placing commands where data was expected. According to him, this comparison is dangerous because it encourages the belief that both problems can be fixed in similar ways, even though the underlying issues are completely different.

He illustrated this difference with a practical scenario. If a recruiter uses an AI system to filter applications, a job seeker could hide a message in the document that tells the model to ignore existing rules and approve the résumé. Since the model does not distinguish between what it should follow and what it should simply read, it may carry out the hidden instruction.

Researchers are trying to design protective techniques, including systems that attempt to detect suspicious text or training methods that help models recognise the difference between instructions and information. However, the agency emphasised that all these strategies are trying to impose a separation that the technology does not naturally have. Traditional solutions for similar problems, such as Confused Deputy vulnerabilities, do not translate well to language models, leaving large gaps in protection.

The agency also stressed upon a security idea recently shared on social media that attempted to restrict model behaviour. Even the creator of that proposal admitted that it would sharply reduce the abilities of AI systems, showing how complex and limiting effective safeguards may become.

The NCSC stated that prompt-injection threats are likely to remain a lasting challenge rather than a fixable flaw. The most realistic path is to reduce the chances of an attack or limit the damage it can cause through strict system design, thoughtful deployment, and careful day-to-day operation. The agency pointed to the history of SQL injection, which once caused widespread breaches until better security standards were adopted. With AI now being integrated into many applications, they warned that a similar wave of compromises could occur if organisations do not treat prompt injection as a serious and ongoing risk.


Featured