Over 100 million users signed up for ChatGPT in the months after it launched last year, making it one of the most popular apps in the world. Artificial intelligence has taken the world by storm, and in OpenAI's wake, Microsoft and Google have released their own chatbots, Bing Chat and Bard. Now a new AI is in town: WormGPT. You could say it's here to make life easier, but it is not here to help you.
Despite the name, WormGPT is not a playful, wriggly twist on ChatGPT. It is a far more malicious tool, designed without any ethical guardrails at all. Its selling points to buyers are that it boosts productivity, raises effectiveness, and lowers the barrier to entry for the average cybercriminal.
WormGPT is an artificial intelligence (AI) model created by a hacker to help write malicious computer programs, and it poses a real danger to individuals and companies alike.
Unlike its counterpart ChatGPT, which is designed to help users, WormGPT is designed to attack large numbers of people.
This "sophisticated AI model" was independently examined and confirmed to be malicious by the cybersecurity firm SlashNext. SlashNext alleges that the model was trained on a wide range of data sources, with a specific focus on malware-related data. The fact that it is built on GPT-J, a freely available language model, exemplifies how AI tools can now threaten even those who are not well versed in them.
Researchers from the International Center for Computer Security conducted experiments with phishing emails to better understand the risks WormGPT poses. The emails the model generated were not only highly persuasive but also strategically cunning. This indicates that sophisticated phishing attacks and business email compromise (BEC) attacks are well within its reach.
In the last couple of years, experts, government officials, and even the creator of ChatGPT have recognized the dangers of AI tools such as ChatGPT and WormGPT. Their view is that the public must be protected from misuse of these technologies through regulation. Europol, the international organization that supports law enforcement authorities, has likewise warned about the misuse of large language models (LLMs) such as ChatGPT for fraud, impersonation, and social engineering.
The primary concern with AI tools such as ChatGPT is their ability to automatically generate highly authentic text in response to a user prompt, which is exactly what makes them so appealing to cybercriminals.
That same fluency makes them extremely useful for phishing attacks. Phishing scams used to be easy to detect because they were riddled with obvious grammatical and spelling errors. Advances in AI now let attackers impersonate organizations and people in an extremely realistic manner, even attackers with only a basic grasp of English.
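To see why fluent AI-written mail defeats older filters, here is an illustrative sketch (not any real product's filter) of a legacy-style heuristic that scores an email on surface cues such as misspellings and urgency phrases; the word lists and weights are hypothetical examples chosen for this demo.

```python
# Legacy-style phishing heuristic: score an email on surface cues.
# Word lists and weights below are illustrative assumptions, not a real filter.

COMMON_MISSPELLINGS = {"recieve", "acount", "verifiy", "pasword"}
URGENCY_PHRASES = ["act now", "urgent", "suspended", "verify your account"]

def phishing_score(text: str) -> int:
    """Higher score = more legacy-style phishing indicators."""
    lowered = text.lower()
    words = [w.strip(".,!?") for w in lowered.split()]
    score = sum(1 for w in words if w in COMMON_MISSPELLINGS)   # spelling cues
    score += sum(2 for p in URGENCY_PHRASES if p in lowered)    # urgency cues
    return score

clumsy = "Urgent! Please verifiy your acount or it will be suspended."
fluent = "Hi Sam, the quarterly invoice is attached; please review it before Friday."

print(phishing_score(clumsy))  # several misspelling and urgency hits
print(phishing_score(fluent))  # 0: fluent text sails past surface heuristics
```

A grammatically clean, LLM-generated lure looks exactly like the second message here, which is why surface-level heuristics alone no longer work.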
WormGPT, a ChatGPT-style Large Language Model (LLM), can now be rented on the dark web for as little as $60 a month, and its services come with no ethical or moral limits whatsoever. The chatbot is, in effect, degenerate generative AI: it is not subject to the filters that corporations such as Google, Facebook, and OpenAI impose on their models. NordVPN's IT security experts have already described WormGPT as the "evil twin" of ChatGPT.
It is arguably one of the most powerful hacking tools available in the world at the moment. WormGPT was built by a skilled hacker on top of GPT-J, an open-source LLM released in 2021.
During its testing of WormGPT, SlashNext uncovered some disturbing results. The phishing email the model produced was so convincing that a human would struggle to detect it, and WormGPT went further still, combining all the elements of a phishing email in a remarkably sophisticated way to deceive potential victims.
WormGPT will not protect your computer from anything; it is the product of attack. As Adrianus Warmenhoven explained to us, it grew out of a series of cat-and-mouse games with OpenAI, whose ever-expanding restrictions, imposed in part to shield the company from legal liability, jailbreakers kept trying to circumvent. One such method coaxed the LLM into smuggling information about illegal activity into seemingly innocuous texts, such as family letters and other correspondence.
With WormGPT, the expert explained, cybercriminals no longer need to bother subverting OpenAI's guardrails at all. They can instead evolve the technology to suit their own needs, turning the world of artificial intelligence into a true Wild West, one increasingly populated by outlaws.
WormGPT is doubtless only the first AI chatbot that the majority of ne'er-do-wells will use to assist their criminal acts; before long they will have an array of ever-advancing, ever-improving models to choose from.
There is no doubt that artificial intelligence will also become an increasingly important tool for preventing AI-generated cybercrime in the coming years, setting up a race to see which side can wield it more proficiently.
The Doomsday Clock currently stands at 90 seconds to midnight, in part because of humanity's rapid adoption of disruptive technologies. A clock tracking our internet security might as well already be striking midnight.
As these two disruptive forces collide on the digital landscape, the only likely outcome is mutually assured destruction, so perhaps it's time we all climbed into our antivirus Anderson shelters and filled our bellies with MRE Malwarebytes.