The team at Cybernews has warned that AI chatbots may be fun to play with, but they are also dangerous, as they can provide detailed instructions on how to exploit vulnerabilities.
AI has stirred the imaginations of tech industry leaders and pop culture for decades. Machine learning technologies can automatically generate text, photos, videos, and other media, and they are flourishing in the tech sphere as investors pour billions of dollars into the field.
While AI has opened up endless opportunities to help humans, experts warn about the potential dangers of creating an algorithm that outperforms human capabilities and could spin out of control.
Apocalyptic scenarios of AI taking over the planet are not the concern here. In today's reality, however, AI has already begun helping threat actors with malicious activities.
ChatGPT is the latest innovation in AI, built by the research company OpenAI, which is led by Sam Altman and backed by Microsoft, LinkedIn co-founder Reid Hoffman, Elon Musk, and Khosla Ventures.
The AI chatbot can hold conversations with people while imitating various writing styles. The text ChatGPT produces is far more imaginative and complex than that of earlier chatbots built in Silicon Valley. ChatGPT was trained on large amounts of text data from the web, Wikipedia, and archived books.
Within five days of ChatGPT's launch, over one million people had signed up to test the technology. Social media was flooded with users' queries and the AI's answers: writing poems, copywriting, plotting movies, offering tips on weight loss and healthy relationships, creative brainstorming, studying, and even programming.
According to OpenAI, ChatGPT models can answer follow-up questions, challenge incorrect premises, reject inappropriate queries, and admit their own mistakes.
According to Cybernews, the research team tried "using ChatGPT to help them find a website's vulnerabilities. Researchers asked questions and followed the guidance of AI, trying to check if the chatbot could provide a step-by-step guide on exploiting."
"The researchers used the 'Hack the Box' cybersecurity training platform for their experiment. The platform provides a virtual training environment and is widely used by cybersecurity specialists, students, and companies to improve hacking skills."
"The team approached ChatGPT by explaining that they were doing a penetration testing challenge. Penetration testing (pen test) is a method used to replicate a hack by deploying different tools and strategies. The discovered vulnerabilities can help organizations strengthen the security of their systems."
Experts believe that AI-based vulnerability scanners in the hands of cybercriminals could wreak havoc on internet security. However, the Cybernews team also sees AI's potential to strengthen cybersecurity.
Researchers can use insights from AI to prevent data leaks, and AI can also help developers monitor and test their implementations more efficiently.
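As a rough illustration of that developer-assist angle, the sketch below asks an OpenAI model to review a deliberately vulnerable code snippet. It assumes the official OpenAI Python client with an OPENAI_API_KEY set in the environment; the snippet being reviewed is invented for the example.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A deliberately flawed snippet (string-concatenated SQL) for the model to review.
snippet = '''
def get_user(conn, user_id):
    cur = conn.cursor()
    cur.execute("SELECT * FROM users WHERE id = " + user_id)  # unsanitized input
    return cur.fetchone()
'''

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a code reviewer focused on security."},
        {"role": "user", "content": f"Review this code for vulnerabilities:\n{snippet}"},
    ],
)
print(response.choices[0].message.content)
```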
AI keeps learning and, in a sense, has a mind of its own: it picks up new techniques and exploitation methods, and it can serve as a handbook for penetration testers, offering sample payloads tailored to their needs.
"Even though we tried ChatGPT against a relatively uncomplicated penetration testing task, it does show the potential for guiding more people on how to discover vulnerabilities that could, later on, be exploited by more individuals, and that widens the threat landscape considerably. The rules of the game have changed, so businesses and governments must adapt to it," said Mantas Sasnauskas, head of the research team.