Google's Bard AI has advanced significantly with a recent upgrade that integrates it with well-known services like Google Drive, Gmail, YouTube, Maps, and more. By providing a seamless, intelligent experience, the integration is poised to change how users interact with these platforms.
Security researchers at Netskope further scrutinized data from millions of enterprise users worldwide and highlighted the growing trend of generative AI app usage, which increased 22.5% over the past two months, escalating the chances of sensitive data being exposed.
Organizations with 10,000 or more users are apparently utilizing at least one AI tool on a regular basis, with an average of 5 apps in use. ChatGPT has more than 8 times as many daily active users as any other generative AI app. At the present growth pace, the number of people accessing AI apps is anticipated to double within the next seven months.
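That doubling estimate is consistent with simple compounding of the reported growth rate. Here is a rough sanity check in Python, assuming the 22.5% two-month growth compounds steadily (our assumption for illustration, not necessarily Netskope's forecasting model):

```python
# Rough sanity check of the doubling estimate, assuming the reported
# 22.5% two-month growth rate simply compounds (an illustrative
# assumption, not necessarily Netskope's forecasting model).
growth_per_two_months = 1.225        # +22.5% over two months
periods = 7 / 2                      # seven months = 3.5 two-month periods
factor = growth_per_two_months ** periods
print(f"Projected user growth over 7 months: {factor:.2f}x")  # ~2.03x
```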
The AI app with the swiftest growth in installations over the last two months was Google Bard, which is currently attracting new users at a rate of 7.1% per week, versus 1.6% for ChatGPT. At the current rate, Google Bard is not projected to overtake ChatGPT for more than a year, although the generative AI app market is expected to grow considerably before then, with many more apps in development.
Beyond intellectual property (excluding source code) and personally identifiable information, other sensitive data communicated via ChatGPT includes regulated data, such as financial and healthcare data, as well as passwords and keys, which are typically embedded in source code.
According to Ray Canzanese, Threat Research Director at Netskope Threat Labs, “It is inevitable that some users will upload proprietary source code or text containing sensitive data to AI tools that promise to help with programming or writing[…]Therefore, it is imperative for organizations to place controls around AI to prevent sensitive data leaks. Controls that empower users to reap the benefits of AI, streamlining operations and improving efficiency, while mitigating the risks are the ultimate goal. The most effective controls that we see are a combination of DLP and interactive user coaching.”
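To illustrate the combination Canzanese describes, below is a minimal sketch of a DLP-style check paired with interactive user coaching: it scans a prompt for apparent secrets before it reaches an AI app and asks the user to confirm. The patterns and the `coach_user` helper are hypothetical simplifications for illustration, not Netskope's product logic:

```python
import re

# Hypothetical detection patterns for a minimal DLP-style check; a real
# deployment would use a vendor DLP engine with far richer detectors.
SENSITIVE_PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def coach_user(prompt_text: str) -> bool:
    """Warn about apparent sensitive data and let the user decide."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt_text)]
    if not findings:
        return True  # nothing detected; let the prompt through
    print("Warning: your prompt appears to contain sensitive data:")
    for name in findings:
        print(f"  - {name}")
    # Interactive coaching: the user can still proceed deliberately.
    return input("Send it to the AI app anyway? [y/N] ").strip().lower() == "y"

if __name__ == "__main__":
    prompt = input("Prompt: ")
    if coach_user(prompt):
        print("Prompt allowed.")   # here it would be forwarded to the AI app
    else:
        print("Prompt withheld.")
```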
As opportunistic attackers look to profit from the popularity of artificial intelligence, Netskope Threat Labs is currently tracking ChatGPT proxies and more than 1,000 malicious URLs and domains, including several phishing campaigns, malware distribution campaigns, and spam and fraud websites.
While blocking access to AI content and apps may seem like a good idea, it is at best a short-term solution.
James Robinson, Deputy CISO at Netskope, said, “As security leaders, we cannot simply decide to ban applications without impacting on user experience and productivity[…]Organizations should focus on evolving their workforce awareness and data policies to meet the needs of employees using AI products productively. There is a good path to safe enablement of generative AI with the right tools and the right mindset.”
To enable the safe adoption of AI apps, organizations must center their strategy on identifying acceptable applications and implementing controls that let users use them to their full potential while shielding the business from risks. Such a strategy should incorporate domain filtering, URL filtering, and content inspection to protect against attacks, as sketched below.
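For concreteness, here is a minimal sketch of how such domain/URL filtering might sit in front of AI traffic, with sanctioned AI apps routed through content inspection. The blocklist entries (the `.example` hosts) and the `filter_request` helper are hypothetical; a real deployment would rely on threat intelligence feeds and a proper secure web gateway:

```python
from urllib.parse import urlparse

# Hypothetical policy tables; real deployments would pull blocklists from
# threat intelligence feeds such as those Netskope Threat Labs maintains.
BLOCKED_DOMAINS = {"chatgpt-free-login.example", "bard-giveaway.example"}
SANCTIONED_AI_DOMAINS = {"chat.openai.com", "bard.google.com"}

def filter_request(url: str) -> str:
    """Classify an outbound request as 'block', 'inspect', or 'allow'."""
    host = (urlparse(url).hostname or "").lower()
    # Domain/URL filtering: drop known-bad hosts outright.
    if host in BLOCKED_DOMAINS or any(host.endswith("." + d) for d in BLOCKED_DOMAINS):
        return "block"
    # Sanctioned AI apps pass through, but with content inspection applied.
    if host in SANCTIONED_AI_DOMAINS:
        return "inspect"
    # Everything else follows the organization's default web policy.
    return "allow"

if __name__ == "__main__":
    for url in ("https://chat.openai.com/", "http://chatgpt-free-login.example/signin"):
        print(url, "->", filter_request(url))
```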
Safety measures like these help secure data while still allowing AI tools to be used, but the threat side of the equation is developing just as fast, as researchers have already demonstrated.
In one such demonstration, cybersecurity researchers at Check Point were able to produce phishing emails, keyloggers, and some basic ransomware code using Google's AI tool, Bard.
The researchers' report further described how they set out to compare Bard's security with that of ChatGPT. From both platforms, they attempted to obtain three things: phishing emails, malicious keyloggers, and some simple ransomware code.
The researchers described how simply asking the AI bots to create phishing emails yielded no results; however, asking Bard to provide ‘examples’ of such emails supplied them with plentiful phishing mails. ChatGPT, on the other hand, refused to comply, claiming that doing so would amount to engaging in fraudulent activity, which is illegal.
The researchers then tried to create malware such as keyloggers, and here the bots performed somewhat better. Once again, a direct request yielded no results, and even a trick question got nowhere, as both AI bots declined. Their refusals, however, differed: while Bard simply said, “I’m not able to help with that, I’m only a language model,” ChatGPT gave a much more detailed explanation.
Later, on being asked to provide a keylogger to log their own keys, both ChatGPT and Bard ended up generating malicious code, although ChatGPT did add a disclaimer before doing so.
The researchers finally proceeded to ask Bard to generate a basic ransomware script. While this was much trickier than getting the AI bot to generate phishing emails or a keylogger, they eventually managed to get Bard into the game.
“Bard’s anti-abuse restrictors in the realm of cybersecurity are significantly lower compared to those of ChatGPT[…]Consequently, it is much easier to generate malicious content using Bard’s capabilities,” they concluded.
The reason, in simpler terms, is that malicious use of any new technology is inevitable.
From this, one can conclude that such issues with emerging generative AI technologies are only to be expected. AI, as an extraordinarily capable tool, has the potential to alter the entire cybersecurity script.
Cybersecurity experts and law enforcement have long been concerned about this, warning that AI technology can accelerate the ongoing innovation in cybercrime tactics, such as convincing phishing emails and malware. Advances in the technology have made it so accessible that a cybercriminal can now deploy a sophisticated cyberattack with only a minimal hand in coding.
While regulators and law enforcement are doing their best to impose limits on the technology and ensure that it is used ethically, developers are doing their bit by training their platforms to reject requests for criminal activity.
While the generative AI market is decentralized, big companies will always be under the watch of regulatory bodies and law enforcement. Smaller companies, however, will remain exposed to potential cyberattacks, especially those that are incapable of fighting against or preventing abuse.
Researchers and security experts suggest that the only way to improve the cybersecurity posture is to fight back with full strength. Even though AI is already being used to identify suspicious network activity and other criminal conduct, it cannot be used to raise the barriers to entry as high as they once were. There is no closing the door ever again.