As research from security firm JFrog revealed on Thursday in a report that is a likely harbinger of what's to come, code uploaded to AI developer platform Hugging Face concealed the installation of backdoors and other forms of malware on end-user machines.
The JFrog researchers said they found roughly 100 models that performed hidden and unwanted actions when they were downloaded and loaded onto an end-user device. All of the flagged machine learning models went undetected by Hugging Face, and most of them appeared to be benign proofs of concept uploaded by users or researchers unaware of any potential danger.
According to the report JFrog published, ten of them were "truly malicious" in that they performed actions that actually compromised users' security when they were loaded.
This blog post aims to broaden the conversation around the security of AI machine learning (ML) models, a subject that has long been neglected and that we need to start discussing now.
The JFrog Security Research team investigates ways in which machine learning models can be used to compromise the environments of Hugging Face users, for example through code execution. In this post, we discuss our investigation into one malicious machine learning model we uncovered.
As part of our regular monitoring and scanning of AI models uploaded by users to Hugging Face, as we do with other open-source repositories, we discovered that loading one model's pickle file leads to code execution. The model's payload gives the attacker what is commonly referred to as a "backdoor," granting full control over the victim's machine.
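To make that mechanism concrete, here is a minimal sketch of the pickle code-execution primitive such payloads typically rely on. The class name and the echoed command are illustrative only, not the actual payload JFrog observed:

```python
import os
import pickle

class MaliciousPayload:
    # When a class defines __reduce__, unpickling calls the returned
    # callable with the returned arguments -- here, os.system.
    def __reduce__(self):
        # A real attacker would put a reverse-shell command here; we use
        # a harmless echo to keep the demo benign.
        return (os.system, ("echo 'arbitrary code ran during unpickling'",))

blob = pickle.dumps(MaliciousPayload())

# Merely *loading* the bytes runs the command -- no method call needed.
pickle.loads(blob)
```

Because the command runs inside the unpickler itself, simply calling a library's load function on an untrusted file is enough to be compromised.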
This silent infiltration could give attackers unauthorized access to critical internal systems, paving the way for large-scale data breaches or corporate espionage that affects not just individuals but entire organizations, all while victims remain unaware of their compromised status. Below, we explain the attack mechanism in detail and shed light on its complexities and potential implications.
Before taking a closer look at the details of this scheme, it is worth keeping in mind the lessons that can be learned from it, the attacker's intentions, and the question of who conducted the attack.
Like any technology, AI models can pose security risks if they are not handled correctly.
One possible threat is code execution, in which a malicious actor runs arbitrary code on the machine that loads or runs the model; this can result in data breaches, system compromise, or other malicious actions. To gain further insight into the attackers' intentions, JFrog set up a HoneyPot on an external server, completely isolated from any sensitive network.
HoneyPots attract attacks by impersonating legitimate systems and services, allowing defenders to monitor and analyze attacker behaviour.
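As an illustration of the concept (a sketch only, not JFrog's actual setup; the port number and log file name are assumptions), the snippet below impersonates a shell service and records everything a connecting attacker sends:

```python
import logging
import socketserver

logging.basicConfig(filename="honeypot.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

class FakeShellHandler(socketserver.StreamRequestHandler):
    """Pretends to be a shell service and logs every line the attacker sends."""
    def handle(self):
        logging.info("connection from %s", self.client_address[0])
        self.wfile.write(b"$ ")                 # fake shell prompt
        for line in self.rfile:
            logging.info("%s sent: %r", self.client_address[0], line)
            self.wfile.write(b"$ ")             # keep the attacker engaged

if __name__ == "__main__":
    # Run only on an isolated, non-sensitive host.
    with socketserver.ThreadingTCPServer(("0.0.0.0", 2222), FakeShellHandler) as srv:
        srv.serve_forever()
```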
Data scientists can take several proactive measures to avoid loading malicious models and having code executed on their machines: verifying the model's source, running security scans, using safe loading methods (see the sketch below), keeping dependencies updated, reviewing model code, isolating execution environments, and educating users.
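On safe loading specifically, here is a minimal sketch of two common options; the file names are hypothetical, and both APIs should be verified against the library versions you have installed:

```python
import torch
from safetensors.torch import load_file

# Option 1: safetensors stores raw tensors only, so nothing executes on load.
state_dict = load_file("model.safetensors")  # hypothetical file name

# Option 2: weights_only=True restricts torch.load's unpickler to tensor
# data, rejecting the arbitrary callables a malicious pickle would invoke.
state_dict = torch.load("model.bin", map_location="cpu", weights_only=True)
```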
Hugging Face, the AI collaboration platform, has implemented several security measures against such abuse, including malware scanning, pickle scanning, and secrets scanning.
These features alert users or moderators whenever a file in a repository contains malicious code, performs unsafe deserialization, or exposes sensitive information. But although the platform has taken real precautions, recent incidents underscore that it is not immune to genuine threats.
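To give a flavor of how pickle scanning can work without ever executing the file (a rough sketch of the general idea, not Hugging Face's actual scanner), the snippet below disassembles a pickle's opcode stream and flags imports of potentially dangerous callables:

```python
import pickletools

# Modules whose callables commonly appear in malicious pickles; this list
# is illustrative, not exhaustive.
SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "socket", "builtins"}

def scan_pickle(path: str) -> list[str]:
    findings = []
    with open(path, "rb") as f:
        data = f.read()
    # genops walks the opcode stream without executing anything.
    for opcode, arg, _pos in pickletools.genops(data):
        # Protocol <= 3: GLOBAL carries "module name" as its argument.
        if opcode.name == "GLOBAL" and str(arg).split()[0] in SUSPICIOUS_MODULES:
            findings.append(f"GLOBAL import: {arg}")
        # Protocol >= 4: STACK_GLOBAL pulls module/name from the stack,
        # so its presence alone warrants a closer manual look.
        elif opcode.name == "STACK_GLOBAL":
            findings.append("STACK_GLOBAL import (inspect manually)")
    return findings

print(scan_pickle("model.bin"))  # hypothetical file name
```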