Cybersecurity researchers are raising alarms over the misuse of large language models (LLMs) by cybercriminals to create new variants of malicious JavaScript at scale. A report from Palo Alto Networks Unit 42 highlights how LLMs, while not adept at generating malware from scratch, can effectively rewrite or obfuscate existing malicious code.
This capability has enabled the creation of up to 10,000 novel JavaScript variants, significantly complicating detection efforts.
Malware Detection Challenges
The natural-looking transformations produced by LLMs allow malicious scripts to evade detection by traditional analyzers: researchers found that the restructured code frequently flips a classifier's verdict from malicious to benign. In one test, 88% of the rewritten scripts bypassed malware classifiers.
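To make the mechanics concrete, the following is a minimal, self-contained sketch of the rewrite-and-test loop described above. The LLM call and the production classifier are stubbed out with toy stand-ins (a regex-based identifier renamer and a keyword-matching detector) so the flow runs end to end; neither stub reflects Unit 42's actual tooling or prompts.

```python
import random
import re

def toy_classifier(js: str) -> str:
    """Stand-in for a real malware classifier: flags scripts containing a
    known-bad identifier. Production detectors use far richer features."""
    return "malicious" if "stealCookies" in js else "benign"

def toy_rewrite(js: str) -> str:
    """Stand-in for an LLM rewrite: renames an identifier while preserving
    behavior. A real attack would prompt an LLM for natural-looking,
    semantics-preserving transformations (renaming, dead code, string splits)."""
    new_name = "fn_" + "".join(random.choices("abcdefgh", k=6))
    return re.sub(r"\bstealCookies\b", new_name, js)

def rewrite_until_evasive(seed: str, max_rounds: int = 5) -> str | None:
    """Iteratively rewrite a sample until the classifier's verdict flips."""
    current = seed
    for _ in range(max_rounds):
        current = toy_rewrite(current)
        if toy_classifier(current) == "benign":
            return current  # verdict flipped: detection evaded
    return None

sample = "function stealCookies() { return document.cookie; } stealCookies();"
print(toy_classifier(sample))          # 'malicious'
print(rewrite_until_evasive(sample))   # a behavior-preserving variant scored 'benign'
```

The loop succeeds here only because the toy detector keys on a single identifier; the report's point is that even far stronger, feature-rich classifiers degrade when the transformations look like ordinary, human-written refactoring.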
Despite increased efforts by LLM providers to impose stricter guardrails, underground tools like WormGPT continue to facilitate malicious activities, such as phishing email creation and malware scripting.
OpenAI reported in October 2024 that it had disrupted more than 20 operations and deceptive networks attempting to misuse its platform, including for reconnaissance, vulnerability research, scripting support, and debugging.
Unit 42 emphasized that while LLMs pose significant risks, they also present opportunities to strengthen defenses. Techniques used to generate malicious JavaScript variants could be repurposed to create robust datasets for improving malware detection systems.
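A defensive counterpart might look like the following sketch, which feeds machine-generated variants back into a detector's training set. The `generate_variant` helper is a hypothetical stand-in for an LLM-based, behavior-preserving rewrite, and the scikit-learn pipeline is only one of many classifiers that could consume the augmented data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def generate_variant(js: str) -> str:
    """Hypothetical LLM-based, behavior-preserving rewrite (toy stand-in)."""
    return js.replace("stealCookies", "collectSessionData")

# Seed corpus: label 1 for malicious, 0 for benign.
malicious_seeds = ["function stealCookies(){return document.cookie;}"]
benign_seeds = ["function greet(name){console.log('hello ' + name);}",
                "function sum(a,b){return a + b;}"]

# Augmentation: every generated variant keeps its seed's malicious label,
# teaching the model features that survive the rewriting.
variants = [generate_variant(s) for s in malicious_seeds]
X = malicious_seeds + variants + benign_seeds
y = [1] * (len(malicious_seeds) + len(variants)) + [0] * len(benign_seeds)

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(),
)
model.fit(X, y)

# A previously unseen rewrite shares character n-grams with the augmented
# variants, so it is more likely to be flagged than with the seeds alone.
print(model.predict(["function collectSessionData(){return document.cookie;}"]))
```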
AI Hardware and Framework Vulnerabilities
In a separate discovery, researchers from North Carolina State University revealed a side-channel attack known as TPUXtract, which can steal AI model hyperparameters from Google Edge Tensor Processing Units (TPUs) with 99.91% accuracy.
The attack exploits electromagnetic signals emitted during neural network inferences to extract critical model details. Although it requires physical access and specialized equipment, TPUXtract highlights vulnerabilities in AI hardware that determined adversaries could exploit.
Study author Aydin Aysu explained that by extracting architecture and layer configurations, the researchers were able to recreate a close surrogate of the target AI model, potentially enabling intellectual property theft or further cyberattacks.
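The extraction step can be pictured as a template-matching problem, loosely sketched below: compare an observed electromagnetic trace segment against profiled signatures of candidate layer configurations and keep the best match. The traces, candidate set, and similarity metric here are synthetic placeholders, not the researchers' actual method or measurements.

```python
import numpy as np

def signature(conv_filters: int, kernel: int, length: int = 256) -> np.ndarray:
    """Stand-in for a profiled EM signature of one candidate layer config."""
    rng = np.random.default_rng(conv_filters * 31 + kernel)
    return rng.normal(size=length)

# Candidate hyperparameter grid the attacker profiles in advance.
candidates = [(filters, kernel) for filters in (16, 32, 64) for kernel in (1, 3, 5)]

# Pretend the observed trace segment was emitted by a 32-filter, 3x3 conv layer,
# with measurement noise layered on top of its profiled signature.
observed = signature(32, 3) + np.random.default_rng(0).normal(scale=0.3, size=256)

def best_match(trace: np.ndarray) -> tuple[int, int]:
    """Return the candidate whose profiled signature correlates best with the trace."""
    scores = {c: float(np.corrcoef(trace, signature(*c))[0, 1]) for c in candidates}
    return max(scores, key=scores.get)

print(best_match(observed))   # expected to recover (32, 3)
```

Repeating a matching step like this layer by layer is what lets an attacker rebuild a model's architecture without ever seeing its weights directly, which is why the researchers could assemble a close surrogate of the target network.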
Exploiting AI Frameworks
Morphisec researchers disclosed another AI-targeted threat involving the Exploit Prediction Scoring System (EPSS), a machine learning-driven framework used to estimate the likelihood that a software vulnerability will be exploited in the wild.
By artificially boosting social media mentions and creating GitHub repositories with placeholder exploits, attackers manipulated EPSS outputs.
In Morphisec's demonstration, this raised the exploitation likelihood of a targeted vulnerability from 0.1 to 0.14 and shifted its percentile ranking from the 41st to the 51st percentile.
Ido Ikar from Morphisec warned that such manipulation misguides organizations relying on EPSS for vulnerability management, enabling adversaries to distort vulnerability assessments and mislead defenders.
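The underlying mechanism can be illustrated with a toy logistic-style model, shown below. The features and weights are invented for this example and do not reproduce EPSS's actual model; the point is only that a score partly driven by public, attacker-controlled signals (mentions, posted exploit code) can be nudged upward cheaply, in the same direction as the 0.1 to 0.14 shift Morphisec observed.

```python
import math

# Invented weights for illustration only; NOT EPSS's real features or model.
WEIGHTS = {"bias": -2.20, "social_mentions": 0.015, "public_exploit_repo": 0.23}

def toy_score(social_mentions: int, public_exploit_repo: bool) -> float:
    """Toy exploitation-likelihood score from attacker-influenceable signals."""
    z = (WEIGHTS["bias"]
         + WEIGHTS["social_mentions"] * social_mentions
         + WEIGHTS["public_exploit_repo"] * int(public_exploit_repo))
    return 1 / (1 + math.exp(-z))  # logistic squashing to a 0-1 score

print(round(toy_score(social_mentions=2, public_exploit_repo=False), 2))   # ~0.10 before manipulation
print(round(toy_score(social_mentions=12, public_exploit_repo=True), 2))   # ~0.14 after posting mentions and a stub repo
```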
The Double-Edged Sword of Generative AI
While generative AI offers significant potential for bolstering cybersecurity defenses, its misuse by cybercriminals presents a formidable threat.
Organizations must:
- Invest in advanced AI-driven detection systems capable of identifying obfuscated threats;
- Implement robust physical security measures to protect AI hardware from side-channel attacks;
- Continuously monitor and validate AI framework outputs to mitigate manipulation risks, as in the sketch after this list.
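On the last point, one low-effort safeguard is a simple snapshot comparison like the sketch below, which flags vulnerabilities whose scores jump sharply between pulls so an analyst can look for corroborating evidence before re-prioritizing patches. The snapshot dictionaries stand in for however an organization stores its periodic EPSS pulls, and the threshold is an arbitrary placeholder to tune.

```python
def flag_suspicious_jumps(previous: dict[str, float],
                          current: dict[str, float],
                          min_jump: float = 0.03) -> list[tuple[str, float, float]]:
    """Return CVEs whose score rose by more than `min_jump` since the last snapshot."""
    flagged = []
    for cve, new_score in current.items():
        old_score = previous.get(cve, 0.0)
        if new_score - old_score > min_jump:
            flagged.append((cve, old_score, new_score))
    return flagged

# Example snapshots (placeholder CVE IDs and scores).
previous = {"CVE-2024-0001": 0.10, "CVE-2024-0002": 0.55}
current = {"CVE-2024-0001": 0.14, "CVE-2024-0002": 0.56}

print(flag_suspicious_jumps(previous, current))
# [('CVE-2024-0001', 0.1, 0.14)] -> review before re-prioritizing patching
```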
As adversaries innovate, businesses and researchers must keep pace, leveraging the same AI advancements to fortify their defenses.