Cybercriminals are launching business email compromise (BEC) attacks with the assistance of generative AI, and one tool in use is WormGPT, a black-hat alternative to GPT models designed for malicious purposes.
SlashNext said that WormGPT was trained on a variety of data sources, with a concentration on malware-related data. By generating human-like text tailored to the input it receives, WormGPT can produce highly convincing phoney emails.
Screenshots posted to a cybercrime forum show malicious actors exchanging ideas on how to use ChatGPT to support successful BEC attacks, demonstrating that even hackers who are not fluent in the target language can create convincing emails with generative AI.
The research team also assessed WormGPT's potential risks, concentrating particularly on BEC attacks. They instructed the tool to generate an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice.
The findings showed that WormGPT was "strategically cunning," demonstrating its capacity to launch complex phishing and BEC operations, in addition to being able to use a convincing tone.
The research study noted that the creation of such tools highlights the threat posed by generative AI technologies like WormGPT, even in the hands of inexperienced hackers.
"It's like ChatGPT but has no ethical boundaries or limitations," the report said.
The report also highlighted that hackers are developing "jailbreaks," specialised commands intended to trick generative AI interfaces into producing output that may involve revealing private data, creating offensive content, or even running malicious code.
Some proactive cybercriminals are even going so far as to create their own attack-specific modules similar to those used by ChatGPT, a development that could make cyber defence much more challenging.
"Malicious actors can now launch these attacks at scale at zero cost, and they can do it with much more targeted precision than they could before," stated SlashNext CEO Patrick Harr. "If they aren't successful with the first BEC or phishing attempt, they can simply try again with retooled content."
The growth of generative AI tools is adding complications and obstacles to cybersecurity operations, as well as highlighting the need for more effective defence systems against emerging threats.
Harr believes that AI-aided BEC, malware, and phishing attacks may be best combated using AI-aided defence capabilities.
He believes that organisations will eventually rely on AI to handle the discovery, detection, and remediation of these dangers since there is no other way for humans to stay ahead of the game.
In April, a Forcepoint researcher persuaded the AI tool, despite its directive to block malicious requests, to construct malware for locating and exfiltrating specific documents.
Meanwhile, developers' enthusiasm for ChatGPT and other large language model (LLM) tools has left most organisations entirely unable to guard against the vulnerabilities introduced by the emerging technology.