
HiddenLayer Unveils "ShadowLogic" Technique for Implanting Codeless Backdoors in AI Models

 

Manipulating an AI model's computational graph can allow attackers to insert codeless, persistent backdoors, reports AI security firm HiddenLayer. The weakness could lead to malicious hijacking of machine learning (ML) models in a wide variety of applications and could be used to mount supply chain attacks.

Known as ShadowLogic, the technique tampers with the structure of a model's computational graph so that attacker-defined behavior is triggered in downstream tasks. This manipulation opens the door to security breaches across AI supply chains.

Traditional backdoors provide unauthorized access to a system by bypassing its security layers. AI models can likewise be backdoored or manipulated to produce malicious outcomes. However, backdoors implanted through the training process can be disrupted when the model is later modified, for example through fine-tuning.

HiddenLayer explains that using ShadowLogic enables threat actors to embed codeless backdoors that persist through model fine-tuning, allowing highly targeted and stealthy attacks.

Building on prior research showing that backdoors can be implanted during the training phase, HiddenLayer investigated how to inject a backdoor into a model's computational graph after training, removing the need to tamper with the training process at all.

A computational graph is a mathematical blueprint that controls a neural network's operations. These graphs represent data inputs, mathematical functions, and learning parameters, guiding the model’s forward and backward propagation.
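To make this concrete, here is a minimal sketch of a computational graph built in the ONNX format, one common graph-based model format; the node, tensor names, and shapes are illustrative only and not taken from HiddenLayer's research.

```python
import onnx
from onnx import helper, TensorProto

# Illustrative only: a tiny computational graph with a single ReLU operation
# mapping an input tensor X of shape (1, 3) to an output tensor Y.
X = helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 3])
Y = helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 3])
relu = helper.make_node("Relu", inputs=["X"], outputs=["Y"])

graph = helper.make_graph([relu], "tiny_graph", inputs=[X], outputs=[Y])
model = helper.make_model(graph)
onnx.checker.check_model(model)

# The graph is ordinary, editable data: a list of nodes wiring inputs to outputs.
for node in model.graph.node:
    print(node.op_type, list(node.input), "->", list(node.output))
```

Because the graph is stored as structured data rather than compiled code, it can be inspected and edited after training without ever touching the training pipeline.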

According to HiddenLayer, this graph acts like compiled code in a program, with specific instructions for the model. By manipulating the graph, attackers can override normal model logic, triggering predefined behavior when the model processes a specific input, such as a particular pixel pattern in an image, a keyword, or a phrase.
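As a rough illustration of what such a manipulation might look like, the sketch below appends a few nodes to a hypothetical ONNX classifier so that its genuine logits are swapped for attacker-chosen values whenever a chosen trigger value appears in the input. The file name, tensor names, and trigger condition are assumptions made for the example; this is not HiddenLayer's actual implementation.

```python
import numpy as np
import onnx
from onnx import helper, numpy_helper

# Hypothetical classifier: input tensor "input", output tensor "logits" of shape (1, 10).
model = onnx.load("classifier.onnx")
graph = model.graph
orig_out = graph.output[0].name  # e.g. "logits"

# Rename the tensor produced by the original final node so it can be post-processed.
for node in graph.node:
    for i, name in enumerate(node.output):
        if name == orig_out:
            node.output[i] = orig_out + "_orig"

# Attacker-chosen constants: a trigger value and the output to force when it appears.
graph.initializer.extend([
    numpy_helper.from_array(np.array(255.0, dtype=np.float32), "trigger_value"),
    numpy_helper.from_array(np.zeros((1, 10), np.float32), "forced_logits"),
])

# Appended trigger logic: if the maximum input value equals the trigger value,
# emit the forced logits instead of the model's genuine output.
graph.node.extend([
    helper.make_node("ReduceMax", ["input"], ["input_max"], keepdims=0),
    helper.make_node("Equal", ["input_max", "trigger_value"], ["trigger_hit"]),
    helper.make_node("Where", ["trigger_hit", "forced_logits", orig_out + "_orig"], [orig_out]),
])

onnx.checker.check_model(model)
onnx.save(model, "classifier_backdoored.onnx")
```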

ShadowLogic leverages the wide range of operations supported by computational graphs to embed triggers, which could include checksum-based activations or even entirely new models hidden within the original one. HiddenLayer demonstrated this method on models like ResNet, YOLO, and Phi-3 Mini.
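A checksum-based activation, for instance, could fire only when the raw bytes of an input reduce to an attacker-chosen value, a condition that looks meaningless under casual inspection. The following is a purely conceptual Python sketch of such a trigger; the target value and function names are hypothetical and not drawn from HiddenLayer's write-up.

```python
import numpy as np

TARGET_CHECKSUM = 0xBEEF  # hypothetical attacker-chosen value

def checksum_trigger(x: np.ndarray) -> bool:
    """Fires only when the input's bytes sum to the target value modulo 2**16."""
    return int(x.astype(np.uint8).sum()) % (1 << 16) == TARGET_CHECKSUM

def backdoored_predict(model, x: np.ndarray):
    """Behaves exactly like the real model unless the checksum condition is met."""
    if checksum_trigger(x):
        return "attacker-chosen output"   # e.g. a forced class or canned response
    return model(x)                       # benign behavior on all other inputs
```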

These compromised models behave normally on ordinary inputs but respond differently when presented with their specific triggers. They could, for example, fail to detect objects or generate attacker-controlled responses, demonstrating the subtlety and potential danger of ShadowLogic backdoors.
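Continuing the earlier hypothetical example, the backdoored classifier would be indistinguishable from the original until the trigger value appears in an input; the onnxruntime usage below is a sketch under the same assumed file and tensor names.

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("classifier_backdoored.onnx")

benign = np.random.rand(1, 3, 224, 224).astype(np.float32)  # ordinary input, values < 1
triggered = benign.copy()
triggered[0, 0, 0, 0] = 255.0                               # plant the trigger value

print(session.run(None, {"input": benign})[0])     # genuine model output
print(session.run(None, {"input": triggered})[0])  # forced logits (all zeros in the sketch)
```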

Such backdoors introduce new vulnerabilities in AI models that do not rely on traditional code exploits. Embedded within the model’s structure, these backdoors are harder to detect and can be injected across a variety of graph-based architectures.

ShadowLogic is format-agnostic and can be applied to any model that uses graph-based architectures, regardless of the domain, including autonomous navigation, financial predictions, healthcare diagnostics, and cybersecurity.

HiddenLayer warns that no AI system is safe from this type of attack, whether it is a simple classifier or an advanced large language model (LLM), which greatly expands the range of potential targets.