Brave is a Chromium-based browser, paired with the Brave Search engine, that restricts tracking for personalized ads.
Brave’s new product, Leo, is a generative AI assistant built on top of Anthropic's Claude and Meta's Llama 2. Brave promotes user privacy as Leo’s main feature.
Unlike many other generative AI chatbots, such as ChatGPT, Leo offers its users considerably stronger privacy. The AI assistant does not store any of the user’s chat history, nor does it use the user’s data for training purposes.
Moreover, a user does not need to create an account in order to access Leo. And if a user opts for the premium experience, Brave will not link their account to the data they generate.

The Leo chatbot has been in testing for three months. Brave is now making it available to all users of the most recent 1.60 desktop browser version. As soon as Brave rolls it out to you, you should see the Leo icon in the browser’s sidebar. In the coming months, Leo support will be added to the Brave apps for Android and iOS.
User privacy has remained a major concern when it comes to ChatGPT, Google Bard, or any other AI product.
Among AI chatbots with comparable features, the better option will ultimately be the one that provides stronger privacy to its users. Leo has the potential to stand out here, given that Brave promotes the chatbot’s “unparalleled privacy” up front.
Since users do not need an account to access Leo, they need not verify their email addresses or phone numbers either. This keeps the user’s contact information secure.
Moreover, if a user chooses the $15/month Leo Premium, they receive tokens that are not linked to their account. As Brave notes, this way “you can never connect your purchase details with your usage of the product, an extra step that ensures your activity is private to you and only you.”
The company says, “the email you used to create your account is unlinkable to your day-to-day use of Leo, making this a uniquely private credentialing experience.”
Brave further notes that all Leo requests will be sent through an anonymized server, meaning that Leo traffic cannot be connected to users’ IP addresses.
More significantly, Brave will not retain Leo's conversations: they are discarded immediately after the responses are generated. Leo will not learn from those conversations either. Moreover, Brave will not gather any personal identifiers, such as your IP address. Neither Leo nor the third-party model suppliers will gather user data, which is significant considering that Leo is built on two third-party language models.
Privacy, however, is not the only concern with generative AI; the technology can also be abused. In one such instance, cybersecurity researchers at Check Point were able to produce phishing emails, keyloggers, and some basic ransomware code using Google's Bard.
The researchers' report detailed how they set out to compare Bard's security with that of ChatGPT. From both chatbots, they attempted to obtain three things: phishing emails, malicious keyloggers, and some simple ransomware code.
The researchers found that simply asking the AI bots to create phishing emails yielded no results; however, asking Bard for ‘examples’ of such emails provided them with plenty. ChatGPT, on the other hand, refused to comply, claiming that doing so would amount to engaging in fraudulent activity, which is illegal.
The researchers then tried to create malware such as keyloggers, where both bots performed somewhat better. Here too, a direct question yielded no result, and even a trickier, indirect question got nothing, as both AI bots declined. Their refusals, however, differed between the platforms: while Bard simply said, “I’m not able to help with that, I’m only a language model,” ChatGPT gave a much more detailed explanation.
Later, when asked to provide a keylogger to log their own keystrokes, both ChatGPT and Bard ended up generating malicious code, though ChatGPT prefaced its response with a disclaimer.
The researchers finally proceeded to ask Bard to generate a basic ransomware script. While this was much trickier than getting the AI bot to produce phishing emails or a keylogger, they eventually managed to get Bard to comply.
“Bard’s anti-abuse restrictors in the realm of cybersecurity are significantly lower compared to those of ChatGPT […] Consequently, it is much easier to generate malicious content using Bard’s capabilities,” they concluded.
The reason, in simpler terms, is that malicious use of any new technology is inevitable.
One can conclude that such issues with emerging generative AI technologies are to be expected. AI, as a highly capable tool, has the potential to alter the entire cybersecurity landscape.
Cybersecurity experts and law enforcement agencies have already raised these concerns, warning that AI technology can accelerate the ongoing innovation in cybercrime tactics such as convincing phishing emails, malware, and more. Advances in the technology have made it so accessible that a cybercriminal can now deploy a sophisticated cyberattack with only minimal coding skills.
While regulators and law enforcement are doing their best to impose limits on the technology and ensure it is used ethically, developers are doing their part by training their platforms to refuse requests for criminal activity.
While the generative AI market is decentralized, big companies will always be under the watch of regulatory bodies and law enforcement. Smaller companies, however, will remain on the radar of potential cyberattackers, especially those that lack the capacity to fight against or prevent abuse.
Researchers and security experts suggest that the only way to improve the overall cybersecurity posture is to fight back with full strength. Although AI is already being used to identify suspicious network activity and other criminal conduct, it cannot raise the barriers to entry back to where they once were. There is no closing that door again.