
Cisco CVE-2024-20439: Exploitation Attempts Target Smart Licensing Utility Backdoor

 

A critical vulnerability tracked as CVE-2024-20439 has placed Cisco’s Smart Licensing Utility (CSLU) in the spotlight after cybersecurity researchers observed active exploitation attempts. The flaw involves an undocumented static administrative credential that could allow unauthenticated, remote attackers to log in to affected systems with administrative privileges. While it is still unclear whether the vulnerability has been weaponized in ransomware attacks, security experts have noted suspicious botnet activity linked to it since early January, with a significant surge in mid-March.

According to Cisco, the vulnerability cannot be exploited unless the CSLU is actively running, which limits exposure for systems that launch the utility only occasionally. However, many organizations rely on the CSLU to manage licenses for Cisco products without requiring constant connectivity to Cisco’s cloud-based Smart Software Manager, which increases the risk for unpatched systems. Johannes Ullrich, Dean of Research at the SANS Technology Institute, highlighted that the vulnerability effectively acts as a backdoor.

In fact, he noted that Cisco has a history of embedding static credentials in several of its products. Ullrich’s observation aligns with earlier research by Nicholas Starke, who published a detailed technical analysis of the flaw, including the decoded hardcoded password, just weeks after Cisco issued its patch. This disclosure made it easier for potential attackers to identify and exploit vulnerable systems. In addition to CVE-2024-20439, Cisco addressed another critical flaw, CVE-2024-20440, which allows unauthenticated attackers to extract sensitive data from exposed devices, including API credentials. 

This vulnerability also affects the CSLU and can be exploited by sending specially crafted HTTP requests to a target system. Like the first flaw, it is only active when the CSLU application is running. Researchers have now detected attackers chaining both vulnerabilities to maximize impact. According to Ullrich, scans and probes originating from a small botnet are testing for exposure to these flaws. Although Cisco’s Product Security Incident Response Team (PSIRT) maintains that there’s no confirmed evidence of these flaws being exploited in the wild, the published credentials and recent scan activity suggest otherwise. 
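
For defenders, a first step is simply to check whether a host answers HTTP on the port the CSLU is assumed to listen on. The minimal Python sketch below does only that; the port number (8180) is an assumption drawn from public scan reports rather than from Cisco’s advisory, and a response merely shows that a service is reachable, not that it is vulnerable.

```python
# Minimal exposure check: does a host answer HTTP on the port CSLU is
# assumed to use?  Port 8180 is an assumption from public scan reports.
# A response only means a service is reachable, not that it is vulnerable.
import http.client
import sys

ASSUMED_CSLU_PORT = 8180  # verify against your own deployment

def probe(host: str, port: int = ASSUMED_CSLU_PORT, timeout: float = 5.0) -> None:
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("GET", "/")
        resp = conn.getresponse()
        print(f"{host}:{port} answered HTTP {resp.status} -- "
              "check whether CSLU is running and patched")
    except (OSError, http.client.HTTPException) as exc:
        print(f"{host}:{port} not reachable ({exc})")
    finally:
        conn.close()

if __name__ == "__main__":
    probe(sys.argv[1] if len(sys.argv) > 1 else "127.0.0.1")
```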

These types of vulnerabilities raise larger concerns about the use of hardcoded credentials in critical infrastructure. Cisco has faced similar issues in the past with other software products, including IOS XE, DNA Center, and Emergency Responder. 

As always, the best defense is prompt patching. Cisco released security updates in September 2024 to address both flaws, and organizations running the CSLU should apply them immediately. Any instance of the CSLU running unnecessarily should also be disabled to reduce the attack surface. With exploit attempts on the rise and technical details now public, delaying mitigation could have serious consequences.
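
On individual hosts, a related check is to see whether anything local is listening on that same assumed port, and which process owns it, before deciding whether the CSLU needs to stay running. The sketch below uses the third-party psutil package for this; again, port 8180 is an assumption, and any hit should be verified manually before stopping or removing software.

```python
# Local check: is anything on this machine listening on the port CSLU is
# assumed to use (8180)?  Requires the third-party psutil package, and may
# need elevated privileges to see sockets owned by other users.
import psutil

ASSUMED_CSLU_PORT = 8180

def find_listeners(port: int = ASSUMED_CSLU_PORT) -> None:
    found = False
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status == psutil.CONN_LISTEN and conn.laddr.port == port:
            found = True
            name = "unknown"
            if conn.pid:
                try:
                    name = psutil.Process(conn.pid).name()
                except psutil.Error:
                    name = "access denied"
            print(f"PID {conn.pid} ({name}) is listening on port {port}")
    if not found:
        print(f"Nothing is listening on port {port} on this host")

if __name__ == "__main__":
    find_listeners()
```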

HiddenLayer Unveils "ShadowLogic" Technique for Implanting Codeless Backdoors in AI Models

 

Manipulating an AI model's computational graph can allow attackers to insert codeless, persistent backdoors, reports AI security firm HiddenLayer. This weakness could lead to the malicious use of machine learning (ML) models in a variety of applications and could be abused in supply chain attacks.

Known as ShadowLogic, the technique works by tampering with the computational graph structure of a model, triggering unwanted behavior in downstream tasks. This manipulation opens the door to potential security breaches across AI supply chains.

Traditional backdoors provide unauthorized system access by bypassing security controls. AI models can similarly be implanted with backdoors or manipulated to yield malicious outcomes. However, conventionally implanted backdoors are fragile: subsequent changes to the model, such as fine-tuning, can disrupt these hidden pathways.

HiddenLayer explains that using ShadowLogic enables threat actors to embed codeless backdoors that persist through model fine-tuning, allowing highly targeted and stealthy attacks.

Building on prior research showing that backdoors can be implanted during the training phase, HiddenLayer investigated how to inject a backdoor into a model's computational graph after training, removing the need to compromise the training process at all.

A computational graph is a mathematical blueprint that controls a neural network's operations. These graphs represent data inputs, mathematical functions, and learning parameters, guiding the model’s forward and backward propagation.
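
As a concrete illustration, the sketch below uses the onnx Python package to assemble a toy graph (a matrix multiply followed by a ReLU) and then lists its operators. ONNX is used here only as a familiar example of a graph-serialized model format, not as a statement about the exact tooling behind the research.

```python
# Toy computational graph: X -> MatMul -> Relu -> Y, built with the onnx
# package purely to illustrate what a graph-serialized model contains.
import onnx
from onnx import helper, TensorProto

# Learned parameter baked into the graph as an initializer (the "weights").
weight = helper.make_tensor("W", TensorProto.FLOAT, dims=[4, 2], vals=[0.1] * 8)

graph = helper.make_graph(
    nodes=[
        helper.make_node("MatMul", inputs=["X", "W"], outputs=["hidden"]),
        helper.make_node("Relu", inputs=["hidden"], outputs=["Y"]),
    ],
    name="toy_net",
    inputs=[helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 4])],
    outputs=[helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 2])],
    initializer=[weight],
)
model = helper.make_model(graph)
onnx.checker.check_model(model)

# The graph is plain data: every operator, connection, and parameter is listed.
for node in model.graph.node:
    print(node.op_type, list(node.input), "->", list(node.output))
```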

According to HiddenLayer, this graph acts like compiled code in a program, with specific instructions for the model. By manipulating the graph, attackers can override normal model logic, triggering predefined behavior when the model processes specific input, such as an image pixel, keyword, or phrase.
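
To show what overriding normal model logic means in terms of control flow, here is a purely conceptual Python sketch, not HiddenLayer's implementation: a trigger check spliced in front of the legitimate computation silently replaces the output whenever a specific, hypothetical input pattern appears.

```python
# Conceptual illustration of trigger-controlled logic spliced into a model's
# forward pass.  This is NOT HiddenLayer's implementation; it only shows the
# control flow a graph-level backdoor can encode.
import numpy as np

def normal_forward(x: np.ndarray) -> int:
    """Stand-in for the legitimate model: classify by the sign of the mean."""
    return int(x.mean() > 0)

TRIGGER_PIXEL = (0, 0)   # hypothetical trigger location
TRIGGER_VALUE = 0.31337  # hypothetical trigger value
FORCED_CLASS = 1         # output the backdoor forces when triggered

def backdoored_forward(x: np.ndarray) -> int:
    # Extra branch an attacker could encode directly in the graph:
    # if the trigger pattern is present, bypass the real computation.
    if np.isclose(x[TRIGGER_PIXEL], TRIGGER_VALUE):
        return FORCED_CLASS
    return normal_forward(x)

benign = -np.ones((4, 4))
triggered = -np.ones((4, 4))
triggered[TRIGGER_PIXEL] = TRIGGER_VALUE

print(backdoored_forward(benign))     # behaves like the real model -> 0
print(backdoored_forward(triggered))  # trigger present -> forced class 1
```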

ShadowLogic leverages the wide range of operations supported by computational graphs to embed triggers, which could include checksum-based activations or even entirely new models hidden within the original one. HiddenLayer demonstrated this method on models like ResNet, YOLO, and Phi-3 Mini.
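
The checksum-based activation mentioned above can be illustrated in isolation: the backdoor stores only a digest of the secret trigger, so the trigger value itself never appears in the model. The hash function and quantization step in this sketch are assumptions chosen for illustration, not details from the published research.

```python
# Illustration of a checksum-style trigger: the backdoor stores only a digest,
# so inspecting the model never reveals the raw trigger input.  SHA-256 and
# the rounding step are assumptions for illustration only.
import hashlib
import numpy as np

def digest(x: np.ndarray) -> str:
    # Quantize before hashing so tiny float noise does not change the digest.
    return hashlib.sha256(np.round(x, 3).tobytes()).hexdigest()

# Precomputed by the attacker for their secret trigger input.
secret_trigger = np.full((2, 2), 0.5)
STORED_DIGEST = digest(secret_trigger)

def is_triggered(x: np.ndarray) -> bool:
    return digest(x) == STORED_DIGEST

print(is_triggered(np.zeros((2, 2))))  # False: ordinary input
print(is_triggered(secret_trigger))    # True: exact trigger input
```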

These compromised models behave normally but respond differently when presented with specific triggers. They could, for example, fail to detect objects or generate controlled responses, demonstrating the subtlety and potential danger of ShadowLogic backdoors.

Such backdoors introduce new vulnerabilities in AI models that do not rely on traditional code exploits. Embedded within the model’s structure, these backdoors are harder to detect and can be injected across a variety of graph-based architectures.
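
One practical starting point for defenders is therefore to audit a model's graph directly, flagging control-flow or comparison operators that a plain feed-forward network rarely needs. The sketch below applies that naive heuristic to an ONNX file; the list of "suspicious" operator types is an assumption, and legitimate models can use these operators too, so a hit only means the graph deserves a closer look.

```python
# Naive graph audit: flag operators that a plain feed-forward model rarely
# needs (control flow, equality tests).  The operator list is an assumption;
# a hit is a prompt to inspect the model, not proof of a backdoor.
import sys
import onnx

SUSPICIOUS_OPS = {"If", "Where", "Equal", "Greater", "Less", "Loop"}

def audit(path: str) -> None:
    model = onnx.load(path)
    hits = [n for n in model.graph.node if n.op_type in SUSPICIOUS_OPS]
    if not hits:
        print(f"{path}: no flagged operators found")
        return
    for node in hits:
        print(f"{path}: {node.op_type} node "
              f"({node.name or 'unnamed'}) feeding {list(node.output)}")

if __name__ == "__main__":
    audit(sys.argv[1])
```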

ShadowLogic is format-agnostic and can be applied to any model represented as a computational graph, regardless of domain, including autonomous navigation, financial predictions, healthcare diagnostics, and cybersecurity.

HiddenLayer warns that no AI system is safe from this type of attack, whether it involves simple classifiers or advanced large language models (LLMs), expanding the range of potential targets.