Artificial intelligence (AI) agents are revolutionizing the cryptocurrency sector by automating processes, enhancing security, and improving trading strategies. These smart programs help analyze blockchain data, detect fraud, and optimize financial decisions without human intervention.
What Are AI Agents?
AI agents are autonomous software programs that operate independently, analyzing information and taking actions to achieve specific objectives. These systems interact with their surroundings through data collection, decision-making algorithms, and execution of tasks. They play a critical role in multiple industries, including finance, cybersecurity, and healthcare.
There are different types of AI agents (a minimal code sketch follows the list):
1. Simple Reflex Agents: React based on pre-defined instructions.
2. Model-Based Agents: Use internal models to make informed choices.
3. Goal-Oriented Agents: Focus on achieving specific objectives.
4. Utility-Based Agents: Weigh outcomes to determine the best action.
5. Learning Agents: Continuously improve based on new data.
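To make the distinction concrete, here is a minimal, hypothetical Python sketch contrasting a simple reflex agent with a utility-based agent choosing a trading action; the market fields, thresholds, and utility function are illustrative assumptions, not a real strategy.

```python
from dataclasses import dataclass

@dataclass
class MarketState:
    price: float           # current token price
    moving_average: float  # trailing average price
    volatility: float      # recent price volatility (0..1)

def reflex_agent(state: MarketState) -> str:
    """Simple reflex agent: acts on a fixed condition-action rule."""
    return "buy" if state.price < state.moving_average else "hold"

def utility_agent(state: MarketState) -> str:
    """Utility-based agent: scores each possible action and picks the best one."""
    def utility(action: str) -> float:
        expected_gain = (state.moving_average - state.price) / state.price
        risk_penalty = state.volatility if action == "buy" else 0.0
        return {"buy": expected_gain - risk_penalty, "hold": 0.0}[action]
    return max(["buy", "hold"], key=utility)

state = MarketState(price=95.0, moving_average=100.0, volatility=0.08)
# The reflex agent buys on the dip; the utility agent holds because the
# expected gain does not outweigh the volatility penalty.
print(reflex_agent(state), utility_agent(state))
```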
Evolution of AI Agents
AI agents have advanced considerably over the years. Here are some key milestones:
1966: ELIZA, an early chatbot, was developed at MIT to simulate human-like conversations.
1972: MYCIN, an AI-driven medical diagnosis tool, was created at Stanford University.
2011: IBM Watson demonstrated advanced natural language processing by winning on Jeopardy!
2016: AlphaGo, created by DeepMind, defeated world champion Lee Sedol at the complex board game Go.
2020: OpenAI introduced GPT-3, an AI model capable of generating human-like text.
2022: DeepMind's AlphaFold released predicted structures for nearly all known proteins, addressing the long-standing protein folding problem.
2023: AI-powered chatbots like ChatGPT and Claude AI gained widespread use for conversational tasks.
2025: ElizaOS, a blockchain-based AI platform, is set to enhance AI-agent applications.
AI Agents in Cryptocurrency
The crypto industry is leveraging AI agents for automation and security. In late 2024, Virtuals Protocol, an AI-powered Ethereum-based platform, saw its market valuation soar to $1.9 billion. By early 2025, AI-driven crypto tokens collectively reached a $7.02 billion market capitalization.
AI agents are particularly valuable in decentralized finance (DeFi). They assist in managing liquidity pools, adjusting lending and borrowing rates, and securing financial transactions. They also enhance security by identifying fraudulent activity and vulnerabilities in smart contracts, and they help ensure compliance with regulations such as Know Your Customer (KYC) and Anti-Money Laundering (AML).
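As a concrete illustration of one such task, the sketch below shows how an agent might recompute a lending pool's borrow rate from its utilization, in the spirit of the kinked interest-rate curves used by major DeFi lending protocols; all parameters are illustrative assumptions rather than any specific protocol's values.

```python
def borrow_rate(utilization: float,
                base_rate: float = 0.02,
                slope_low: float = 0.10,
                slope_high: float = 1.00,
                kink: float = 0.80) -> float:
    """Kinked interest-rate curve: the rate rises gently until the pool is
    `kink` percent utilized, then steeply, to discourage draining liquidity."""
    utilization = max(0.0, min(1.0, utilization))
    if utilization <= kink:
        return base_rate + slope_low * utilization
    return base_rate + slope_low * kink + slope_high * (utilization - kink)

# An agent polling the pool could apply the recomputed rate each block or epoch.
for u in (0.25, 0.80, 0.95):
    print(f"utilization {u:.0%} -> borrow APR {borrow_rate(u):.2%}")
```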
The Future of AI in Crypto
Tech giants like Amazon and Apple are integrating AI into digital assistants like Alexa and Siri, making them more interactive and capable of handling complex tasks. Similarly, AI agents in cryptocurrency will continue to take new shapes, offering greater efficiency and security for traders, investors, and developers.
As these intelligent systems advance, their role in crypto and blockchain technology will expand, paving the way for more automated, reliable, and secure financial ecosystems.
Brave is a Chromium-based browser, built around its own Brave Search engine, that restricts the tracking used for personalized ads.
Brave's new product, Leo, is a generative AI assistant built on top of Anthropic's Claude and Meta's Llama 2. Brave promotes user privacy as Leo's main feature.
Unlike many other generative AI chatbots, such as ChatGPT, Leo is designed for stronger privacy: the assistant does not store users' chat history, nor does it use their data for training.
Moreover, a user does not need to create an account to access Leo, and for users of the premium experience, Brave will not link their accounts to their usage data.

Leo has been in testing for three months. Brave is now making it available to all users of the most recent 1.60 desktop browser version. As soon as Brave rolls it out to you, the Leo icon should appear in the browser's sidebar. In the coming months, Leo support will be added to the Brave apps for Android and iPhone.
User privacy has remained a major concern with ChatGPT, Google Bard, and other AI products.
Among AI chatbots with comparable features, the better option will ultimately be the one that offers stronger privacy to its users. Leo has the potential to stand out here, given that Brave promotes the chatbot's "unparalleled privacy" from the outset.
Since users do not need an account to access Leo, they also do not have to verify an email address or phone number, which keeps their contact information out of the system.
Moreover, users who choose the $15/month Leo Premium plan receive tokens that are not linked to their accounts. Brave notes that, this way, "you can never connect your purchase details with your usage of the product, an extra step that ensures your activity is private to you and only you."
The company says, “the email you used to create your account is unlinkable to your day-to-day use of Leo, making this a uniquely private credentialing experience.”
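Brave does not spell out the exact credential scheme in this announcement, but one classic way to make a paid token unlinkable to the purchase that produced it is a blind signature. The toy RSA sketch below uses a hard-coded, insecure toy key and hypothetical names; it only illustrates that general idea and is not Brave's implementation.

```python
import hashlib

# Toy RSA key (never use numbers this small in practice): n = 61 * 53
n, e, d = 3233, 17, 2753      # (n, e) is public; d is held only by the issuer

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# 1. The client blinds a random token with factor r before sending it for signing.
token = b"premium-access-token-001"
m, r = h(token), 7             # r must be coprime with n
blinded = (m * pow(r, e, n)) % n

# 2. The issuer signs the blinded value at purchase time; it never sees the token itself.
blind_sig = pow(blinded, d, n)

# 3. The client unblinds; the signature is valid but cannot be tied to the purchase.
sig = (blind_sig * pow(r, -1, n)) % n

# 4. Later, the service verifies the token without knowing which purchase produced it.
assert pow(sig, e, n) == m
print("token accepted; purchase and usage remain unlinkable")
```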
Brave further notes that all Leo requests are sent through an anonymous server, meaning Leo traffic cannot be connected to users' IP addresses.
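Conceptually, such a relay forwards only the prompt and drops identifying metadata before contacting the model provider. The sketch below is a hypothetical illustration of that idea; the endpoint URL, header list, and request shape are assumptions, not Brave's actual server.

```python
# Hypothetical illustration of an anonymizing relay: the upstream model provider
# receives the prompt but none of the identifying metadata from the client.
STRIP_HEADERS = {"x-forwarded-for", "cookie", "user-agent", "authorization", "referer"}

def relay(client_request: dict) -> dict:
    """Build the upstream request from the client request, keeping only the
    prompt payload and dropping the client IP and identifying headers."""
    headers = {k: v for k, v in client_request["headers"].items()
               if k.lower() not in STRIP_HEADERS}
    return {
        "url": "https://inference.example/v1/chat",  # placeholder endpoint
        "headers": headers,
        "body": client_request["body"],              # the prompt itself
        # Note: the client's source IP is simply never forwarded upstream.
    }

incoming = {
    "headers": {"User-Agent": "Brave/1.60", "Cookie": "session=abc",
                "Content-Type": "application/json"},
    "body": '{"prompt": "Summarize this page"}',
    "client_ip": "203.0.113.7",
}
print(relay(incoming))  # contains the prompt, but no cookie, user agent, or IP
```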
More significantly, Brave does not retain Leo's conversations: they are discarded as soon as the responses are generated, and Leo does not learn from them. Brave also does not collect personal identifiers such as IP addresses, and neither Leo nor the third-party model providers gather user data. Considering that Leo is built on two external language models, this is significant.
The warning letter from US Senator Ed Markey was issued the same day Meta revealed its plans to incorporate AI-powered chatbots into its apps, i.e. WhatsApp, Messenger, and Instagram.
In the letter, Markey wrote to Meta CEO Mark Zuckerberg that, “These chatbots could create new privacy harms and exacerbate those already prevalent on your platforms, including invasive data collection, algorithmic discrimination, and manipulative advertisements[…]I strongly urge you to pause the release of any AI chatbots until Meta understands the effect that such products will have on young users.”
According to Markey, the algorithms have already "caused serious harms" to users, such as "collecting and storing detailed personal information[…]facilitating housing discrimination against communities of color."
He added that while chatbots can benefit people, they also carry real risks. In particular, he warned that chatbots could blur the distinction between advertisements and ordinary content.
“Young users may not realize that a chatbot’s response is actually advertising for a product or service[…]Generative AI also has the potential to adapt and target advertising to an 'audience of one,' making ads even more difficult for young users to identify,” states Markey.
Markey also noted that chatbots might make social media platforms even more "addictive" to users than they already are.
“By creating the appearance of chatting with a real person, chatbots may significantly expand users’ -- especially younger users’ – time on the platform, allowing the platform to collect more of their personal information and profit from advertising,” he wrote. “With chatbots threatening to supercharge these problematic practices, Big Tech companies, such as Meta, should abandon this 'move fast and break things' ethos and proceed with the utmost caution.”
The lawmaker is now asking Meta to respond to a series of questions about the new chatbots, including their potential impact on users' privacy and on advertising.
The questions also probe the chatbots' role in data collection and whether Meta will commit not to use any information gleaned from them to target advertising at young users. Markey further asked whether ads could be integrated into the chatbots and, if so, how Meta intends to prevent those ads from confusing children.
In response, a Meta spokesperson confirmed that the company has received the letter.
Meta further notes in a blog post that it is working in collaboration with the government and other entities “to establish responsible guardrails,” and is training the chatbots with consideration to safety. For instance, Meta writes, the tools “will suggest local suicide and eating disorder organizations in response to certain queries, while making it clear that it cannot provide medical advice.”
Social media chatrooms are among the platforms most frequently used to contact children, and the abuse that begins there can occur both online and offline. Predators are increasingly exploiting technological advances to commit technology-facilitated sexual abuse.
Once a predator has gained access to a child's webcam, it can be used to record, produce, and distribute child pornography.
A team of criminologists studying cybercrime and cybersecurity conducted research into the methods online predators use to hack children's webcams.
To do this, the researchers posed as children (potential victims) to observe the behavior of online predators. They began by creating several automated chatbots to lure online predators into chatrooms popular among children.
The bots were programmed never to initiate a conversation and to respond only to users who stated they were over 18 years of age.
They were also programmed to open each conversation by stating their age, sex, and location, which is standard practice in chatroom culture. This ensured that the documented conversations were with individuals over 18 who were knowingly and voluntarily conversing with a minor. Although some participants may have been minors impersonating adults, prior research has shown that online predators tend to portray themselves as younger rather than older.
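The study's bot code is not reproduced here; purely as a hypothetical illustration of the gating logic just described (never initiate, respond only to self-identified adults, disclose age and sex immediately), a minimal Python sketch might look like this. The class name, opener string, and age-parsing rule are all assumptions for illustration.

```python
import re
from typing import Optional

class DecoyBot:
    """Hypothetical sketch of the gating rules described above: never initiate,
    respond only to self-identified adults, and disclose age/sex up front."""
    OPENER = "13/f"  # the decoy's stated age and sex; location omitted here

    def __init__(self) -> None:
        self.engaged = False

    def respond(self, incoming: Optional[str]) -> Optional[str]:
        if incoming is None:             # the bot never starts a conversation
            return None
        if not self.engaged:
            age = self._stated_age(incoming)
            if age is None or age < 18:
                return None              # ignore anyone not self-identifying as an adult
            self.engaged = True
            return self.OPENER           # state the decoy's age and sex immediately
        return "[scripted reply / log]"  # placeholder for the study's scripted replies

    @staticmethod
    def _stated_age(text: str) -> Optional[int]:
        match = re.search(r"\b(\d{2})\b", text)
        return int(match.group(1)) if match else None

bot = DecoyBot()
print(bot.respond("hi, 34/m here"))  # -> "13/f"
```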
The chatbots recorded 953 conversations with self-identified adults who were told they were conversing with a 13-year-old girl. The conversations were almost exclusively sexual in nature, with a focus on webcams. Some predators made their demands explicit and immediately offered to pay for videos of the child performing sexual acts. Others attempted to solicit videos with promises of future love and relationships. Alongside these commonly used strategies, the researchers found that 39% of the conversations contained an unsolicited link.
A forensic investigation of those links found that 19% (71 links) were embedded with malware, 5% (18 links) led to phishing websites, and 41% (154 links) were associated with Whereby, a video conferencing platform operated by a company in Norway.
It is clear how some of these links could be used by a predator to harm child victims. Online predators can remotely access a child's camera by infecting the computer with spyware, and personal information collected through phishing websites can be used against the victim. For instance, a phishing scam can give a predator a child's computer password, which can then be used to log in and control the child's camera remotely.
Awareness is the first step toward a safe and trustworthy virtual space. These attack methods are described here so that parents and policymakers can protect and educate otherwise vulnerable users.
Now that the issue has been made transparent to video conferencing firms, they can modify their platforms to prevent such attacks in the future. In the long run, a stronger emphasis on privacy could rule out designs that can be abused for malicious purposes.
Finally, here are some recommendations that could help keep your child safe in cyberspace: