Cybercriminals have already leveraged AI to develop code that could be used in a ransomware attack, according to Sergey Shykevich, a lead ChatGPT researcher at the cybersecurity firm Check Point.
Threat actors can use ChatGPT's capabilities to scale up attack methods that currently depend on human effort. One subset of cybercriminals that stands to benefit is romance scammers: an earlier McAfee investigation noted that these scammers often carry on lengthy conversations to appear trustworthy and lure unwary victims. AI chatbots like ChatGPT make that job easier by generating the messages for them.
ChatGPT has safeguards in place to keep hackers from using it for illegal activities, but they are far from infallible. When tested, it turned down a request to write a message proposing a romantic rendezvous, and likewise refused to draft a letter asking for financial assistance to leave Ukraine.
Security experts are concerned about the misuse of ChatGPT, which now powers Bing's new, troubled chatbot. They see the potential for chatbots to assist in phishing, malware, and hacking attacks.
The entry barrier for phishing attacks is already low, but ChatGPT could make it simple for people to proficiently create dozens of targeted scam emails — provided they craft good prompts, according to Justin Fier, director for Cyber Intelligence & Analytics at the cybersecurity firm Darktrace.
Most tech businesses point to Section 230 of the Communications Decency Act of 1996 when addressing illegal or criminal content posted on their platforms by third-party users. Under that law, owners of websites where users can submit content, such as Facebook or Twitter, are not liable for what users post there. Meanwhile, 95% of IT respondents in the BlackBerry study said governments should be responsible for developing and enforcing regulation.
Certain hackers are turning to the ChatGPT API models, which do not carry the same content restrictions as the web interface, according to Shykevich.

Still, there are obstacles. ChatGPT is notorious for being confidently incorrect, which could be a problem for a cybercriminal trying to craft an email that convincingly imitates someone else — a flaw that may actually make some cybercrime harder. And ChatGPT does maintain guardrails against illegal conduct, even if the right prompt can frequently get around them.