Ransomware has always been an evolving menace, with criminal outfits experimenting with new techniques to terrorise their victims and gain maximum leverage when making extortion demands. Weaponised AI is the most recent addition to the armoury, allowing high-level groups to launch more sophisticated attacks while also opening the door to rookie hackers. The NCSC has cautioned that AI is fuelling the global threat posed by ransomware, and there has been a significant rise in AI-powered phishing attacks.
Organisations face increasing threats from sophisticated assaults such as polymorphic malware, which can mutate in real time to avoid detection, allowing attackers to strike with greater precision and frequency. As AI continues to rewrite the rules of ransomware attacks, businesses that still rely on traditional defences are more vulnerable to the next generation of cyber attack.
Ransomware accessible via AI
Online criminals, like legitimate businesses, are discovering new ways to use AI tools, making ransomware attacks more accessible and scalable. By automating crucial attack procedures, fraudsters can launch faster, more sophisticated operations with less human intervention.
Established and experienced criminal gangs gain the ability to expand their operations. At the same time, because AI is lowering barriers to entry, attackers with far less technical expertise can now use ransomware as a service (RaaS) to mount advanced attacks that would ordinarily be beyond their capabilities.
OpenAI, the company behind ChatGPT, has stated that it detected and blocked more than 20 fraudulent operations abusing its generative AI tools. These ranged from writing copy for targeted phishing campaigns to coding and debugging malware.
FunkSec, a RaaS operator, is a current example of how these tools are enhancing criminal groups' capabilities. The gang is reported to have only a few members, and its human-written code is relatively simple, with a very low standard of English. Yet since its emergence in late 2024, FunkSec has claimed over 80 victims in a single month, thanks to a variety of AI techniques that allow its members to punch well above their weight.
Investigations have revealed evidence of AI-generated code in the gang's ransomware, as well as website and ransom-note text that was clearly produced by a large language model (LLM). The group also built a chatbot to support its operations using Miniapps, a generative AI platform.
Mitigation tips against AI-driven ransomware
With AI fuelling ransomware groups, organisations must evolve their defences to stay safe. Traditional security measures are no longer sufficient; defenders need to match fast-moving attackers with adaptive, AI-driven methods of their own.
One critical step is to explore how to combat AI with AI. Advanced AI-driven detection and response systems can analyse behavioural patterns in real time, identifying anomalies that traditional signature-based techniques may overlook. This is essential for countering strategies such as polymorphism, which are expressly designed to circumvent standard detection technologies. Continuous network monitoring provides an additional layer of defence, flagging suspicious activity before ransomware can activate and propagate.
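The core idea behind behavioural detection is simple: rather than matching known signatures, compare live activity against a statistical baseline and flag outliers. The sketch below illustrates that principle with a basic z-score check; the metric (files modified per minute), the sample values, and the threshold are all illustrative assumptions, not a production detector.

```python
from statistics import mean, stdev

def is_anomalous(baseline, observed, threshold=3.0):
    """Flag an observation whose z-score against the baseline
    exceeds the threshold -- a minimal behavioural heuristic."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Hypothetical baseline: files modified per minute in normal use.
baseline_rates = [3, 5, 4, 6, 2, 5, 4, 3, 5, 4]

# An encryption burst rewrites thousands of files in minutes,
# producing a rate far outside the learned baseline.
print(is_anomalous(baseline_rates, 4))     # typical activity -> False
print(is_anomalous(baseline_rates, 1200))  # encryption-like burst -> True
```

Real systems track many such signals at once (file entropy, process lineage, network fan-out) and use learned models rather than a single z-score, but the pattern of baselining normal behaviour and alerting on deviation is the same.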
Beyond detection, AI-powered solutions are critical for preventing data exfiltration, as modern ransomware gangs almost always use data theft to extort their victims. According to our research, 94% of reported ransomware attacks in 2024 involved exfiltration, highlighting the importance of anti data exfiltration (ADX) solutions as part of a layered security approach. By blocking unauthorised data transfers, organisations can defuse extortion attempts, leaving attackers with nothing to leverage and no choice but to move on.
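One way to picture how an ADX control blocks unauthorised transfers is as an egress policy: traffic to approved destinations passes, while bytes sent anywhere else are capped per process. The toy monitor below is a sketch of that idea under assumed names and quotas, not a representation of any particular ADX product.

```python
from collections import defaultdict

class EgressMonitor:
    """Toy anti-data-exfiltration check: allow traffic to approved
    hosts, and cap the bytes a process may send anywhere else."""

    def __init__(self, allowed_hosts, byte_quota):
        self.allowed_hosts = set(allowed_hosts)
        self.byte_quota = byte_quota
        self.sent = defaultdict(int)  # bytes sent per (process, host)

    def permit(self, process, host, nbytes):
        if host in self.allowed_hosts:
            return True  # sanctioned destination, e.g. backup target
        key = (process, host)
        if self.sent[key] + nbytes > self.byte_quota:
            return False  # quota exceeded: block the transfer
        self.sent[key] += nbytes
        return True

# Hypothetical policy: backups may flow freely; anything else is
# limited to 1 MB per process and destination.
monitor = EgressMonitor({"backup.example.com"}, byte_quota=1_000_000)
print(monitor.permit("backup.exe", "backup.example.com", 50_000_000))   # True
print(monitor.permit("svchost.exe", "unknown-host.example.net", 2_000_000))  # False
```

Production ADX tooling operates far deeper in the stack (kernel drivers, TLS metadata, per-user behaviour models), but the effect described above is the same: bulk data simply cannot leave, so the attacker has nothing to ransom.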