Researchers have found malicious code they suspect was developed with the help of generative artificial intelligence services and used to deploy the AsyncRAT malware in an email campaign targeting French users.
While threat actors have already used generative AI to craft convincing phishing emails, government agencies have warned that AI tools could also be abused to create malicious software, despite the safeguards and restrictions vendors have put in place.
Suspected cases of AI-created malware have already been spotted in real attacks. A malicious PowerShell script uncovered earlier this year by cybersecurity firm Proofpoint, for example, was most likely generated by an AI system.
In a sign that less technical threat actors are relying more on AI to develop malware, HP security researchers identified a malicious campaign in early June that used code commented in the way a generative AI system typically would.
The VBScript established persistence on the compromised PC by creating scheduled tasks and writing new keys to the Windows Registry. The researchers note that the indicators pointing to AI-generated code include the structure of the scripts, the comments explaining each line, and the use of the victims' native language for function and variable names.
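For defenders, these two persistence points (autorun Registry keys and scheduled tasks) are straightforward to audit. Below is a minimal, defensive sketch, assuming a Windows host with Python 3 and only the standard library, that lists the common Run key entries and the scheduled-task inventory so that unexpected additions stand out. It is an illustrative hunting aid, not the script used in the campaign; the function names and output format are the author's own.

```python
import subprocess
import winreg

# Common autorun locations that persistence mechanisms like the one described
# in the campaign typically write to.
RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def list_run_entries():
    """Print every value under the common Run keys so unexpected entries stand out."""
    for hive, path in RUN_KEYS:
        try:
            with winreg.OpenKey(hive, path) as key:
                index = 0
                while True:
                    try:
                        name, value, _ = winreg.EnumValue(key, index)
                        print(f"{path}\\{name} = {value}")
                        index += 1
                    except OSError:
                        break  # no more values under this key
        except OSError:
            continue  # key missing or not readable with current privileges

def list_scheduled_tasks():
    """Dump the scheduled-task list via the built-in schtasks utility."""
    result = subprocess.run(
        ["schtasks", "/query", "/fo", "LIST"],
        capture_output=True, text=True, check=False,
    )
    print(result.stdout)

if __name__ == "__main__":
    list_run_entries()
    list_scheduled_tasks()
```

Comparing this output against a known-good baseline is a quick way to spot the kind of Registry and scheduled-task persistence the researchers describe.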
The attacker then downloads and executes AsyncRAT, a publicly available, open-source malware that can record keystrokes on the victim's device and establish an encrypted connection for remote monitoring and control. The malware can also deliver additional payloads.
The HP Wolf Security report also states that archive files were the most popular delivery method among the threats it had visibility into during the first half of the year. Lower-skilled threat actors can use generative AI to create malware in minutes and customise it for attacks targeting different regions and platforms, such as Linux and macOS.
Even when they do not use AI to create fully functional malware, hackers rely on it to speed up their work when developing more sophisticated threats.