
Espionage Concerns Arise from Newly Discovered Voldemort Malware

 


In August 2024, Proofpoint researchers discovered an unusual campaign delivering custom malware through a novel attack chain. The attackers appear to have named the malware "Voldemort," based on internal file names and strings used in it. The attack chain combines several tactics, some currently popular in the threat landscape and others far less common, such as using Google Sheets for command and control (C2).

Beyond its tactics, techniques, and procedures (TTPs), the campaign uses lure themes impersonating government agencies from a variety of countries, along with odd file naming and passwords such as "test". Researchers initially suspected the activity might be a red-team exercise, but analysis of the malware and the volume of messages quickly pointed to a genuine threat actor.

The "Voldemort" campaign has aggressively targeted organizations worldwide, impersonating tax authorities in Europe, Asia, and the U.S. Since the malicious activity began on Aug. 5, Proofpoint has observed more than 20,000 phishing messages sent to dozens of organizations.

The custom backdoor is written in C and designed to exfiltrate data and deploy additional malicious payloads. The attack chain abuses Google Sheets as its C2 communication channel and uses the Windows Search protocol (search-ms) to deliver the malicious files.

Once the victim runs the downloaded file, a legitimate WebEx executable is used to load a malicious DLL that handles communication with the C2 server.

Proofpoint assesses with moderate confidence that this is likely the work of an advanced persistent threat (APT) actor gathering intelligence. Although Proofpoint is well versed in identifying named threat actors, it does not yet have enough data to attribute the activity to a specific actor with high certainty. Some aspects of the campaign, such as its broad targeting, are more often associated with cybercrime, but the malware itself appears motivated by espionage rather than financial gain.

Voldemort is a custom backdoor written in C to gather information; it can also drop additional payloads on the target. Proofpoint found Cobalt Strike hosted on the actor's infrastructure, making it a likely payload. The researchers noted a significant increase in daily phishing volume beginning on August 17, when nearly 6,000 emails impersonating tax agencies appeared.

Impersonated agencies included the Internal Revenue Service (IRS) in the United States, HM Revenue & Customs in the United Kingdom, and the Direction Générale des Finances Publiques in France. Crafting each phishing email in the native language of the respective tax authority added a layer of credibility, and the messages, sent from what appeared to be compromised domains, spoofed the legitimate domain names of the tax agencies to look more genuine.

There is no definitive answer on the campaign's overall objective, though Proofpoint researchers say espionage is likely, given Voldemort's intelligence-gathering capabilities and its ability to deploy additional payloads. More than half of the targeted organizations fall into the insurance, aerospace, transportation, and education sectors.

The threat actor behind the campaign is unknown, but Proofpoint believes it is engaged in cyber espionage. The messages contain Google AMP Cache URLs that redirect to a landing page hosted on InfinityFree; later messages in the campaign link to the landing page directly. At the bottom of the landing page, a "Click to view the document" button checks the browser's User Agent when clicked.

When the User Agent contains "Windows," the browser is automatically redirected to a search-ms: URI pointing at a TryCloudflare-tunneled location. This prompts the victim to open Windows Explorer, although the query responsible for this remains hidden, leaving only a popup visible. Concurrently, an image is loaded from a URL ending in /stage1 on an IP address managed by the logging service pingb.in, which lets the threat actor record a successful redirect and collect additional browser and network information about the victim.
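For readers unfamiliar with the Windows Search protocol, the short Python sketch below illustrates the general shape of a search-ms: URI that tells Explorer to display the contents of a remote share as if it were a local search result. The host and display name are hypothetical placeholders for illustration, not indicators from this campaign.

```python
# Illustrative only: the general shape of a search-ms URI of the kind described
# in the write-up above. All values are hypothetical placeholders.
from urllib.parse import quote

def build_search_ms_uri(display_name: str, remote_share: str) -> str:
    """Build a search-ms: URI that makes Windows Explorer display the contents
    of a remote share as if it were a local search result."""
    return (
        "search-ms:query=*"
        f"&crumb=location:{quote(remote_share)}"
        f"&displayname={quote(display_name)}"
    )

print(build_search_ms_uri("Tax Documents",
                          r"\\example-tunnel.trycloudflare.com@SSL\share"))
```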

A distinguishing feature of the Voldemort malware is its use of Google Sheets as a command and control (C2) server. The malware pings Google Sheets to retrieve new commands to execute on the compromised device and to serve as a repository for exfiltrated data. Each infected machine writes its data to specific cells within the Google Sheet, which are often designated by unique identifiers, such as UUIDs. This method ensures that data from different breached systems remains isolated, allowing for more efficient management. 

Voldemort interacts with Google Sheets using Google's API, relying on an embedded client ID, secret, and refresh token, all of which are stored in its encrypted configuration. This strategy offers malware a dependable and widely available C2 channel while minimizing the chances of its network communications being detected by security tools. Given that Google Sheets is commonly used in enterprise environments, blocking this service could be impractical, further reducing the likelihood of detection. 
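To make that design concrete, here is a minimal Python sketch of the ordinary Google Sheets API pattern described above: exchange a stored OAuth refresh token for an access token, then read and write individual cells over googleapis.com. This is standard, documented API usage rather than Voldemort's actual code, and it illustrates why the traffic blends in: it is indistinguishable from a legitimate Sheets integration. All credentials and identifiers are placeholders.

```python
import requests

CLIENT_ID = "<client-id>"          # placeholder
CLIENT_SECRET = "<client-secret>"  # placeholder
REFRESH_TOKEN = "<refresh-token>"  # placeholder
SHEET_ID = "<spreadsheet-id>"      # placeholder

def get_access_token() -> str:
    # Standard OAuth 2.0 refresh-token grant against Google's token endpoint.
    resp = requests.post("https://oauth2.googleapis.com/token", data={
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "refresh_token": REFRESH_TOKEN,
        "grant_type": "refresh_token",
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

def read_cell(token: str, cell: str) -> str:
    # Read a single cell, e.g. one that a given host polls for its next task.
    url = f"https://sheets.googleapis.com/v4/spreadsheets/{SHEET_ID}/values/{cell}"
    resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return resp.json().get("values", [[""]])[0][0]

def write_cell(token: str, cell: str, value: str) -> None:
    # Write a result back to a cell keyed to a host's unique identifier.
    url = (f"https://sheets.googleapis.com/v4/spreadsheets/{SHEET_ID}"
           f"/values/{cell}?valueInputOption=RAW")
    resp = requests.put(url, headers={"Authorization": f"Bearer {token}"},
                        json={"range": cell, "values": [[value]]})
    resp.raise_for_status()
```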

In 2023, the Chinese advanced persistent threat (APT) group APT41 was observed using Google Sheets as a C2 server, employing the red-teaming GC2 toolkit to facilitate this activity. To defend against such campaigns, security firm Proofpoint recommends several measures: restricting access to external file-sharing services to trusted servers only, blocking connections to TryCloudflare when not actively required, and closely monitoring for suspicious PowerShell executions. These steps are advised to mitigate the risks posed by the Voldemort malware and similar threats.
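As a rough illustration of how a defender might act on those recommendations, the sketch below scans a web-proxy log for connections to TryCloudflare tunnels and to file-sharing hosts outside an allow-list. The log format, column names, and allow-list are assumptions made for illustration, not Proofpoint tooling.

```python
import csv

# Hypothetical allow-list of sanctioned file-sharing hosts
ALLOWED_SHARING_HOSTS = {"files.corp.example.com"}
# Hypothetical external file-sharing domains the organization wants flagged
WATCHED_SHARING_SUFFIXES = (".sharefile.example", ".webdav.example")

def flag_suspicious(proxy_log_csv: str):
    """Return (timestamp, source IP, destination host, reason) tuples worth triaging."""
    hits = []
    with open(proxy_log_csv, newline="") as fh:
        for row in csv.DictReader(fh):  # assumed columns: timestamp, src_ip, dest_host
            host = row["dest_host"].lower()
            if host.endswith(".trycloudflare.com"):
                hits.append((row["timestamp"], row["src_ip"], host, "TryCloudflare tunnel"))
            elif host.endswith(WATCHED_SHARING_SUFFIXES) and host not in ALLOWED_SHARING_HOSTS:
                hits.append((row["timestamp"], row["src_ip"], host, "external file sharing"))
    return hits

if __name__ == "__main__":
    for hit in flag_suspicious("proxy.csv"):
        print(*hit, sep="\t")
```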

Insect Farmers Embrace AI to Drive Down Production Costs

 


The insect farming industry, once considered a niche pursuit, has rapidly gained popularity as a practical and sustainable response to severe worldwide challenges around food security, environmental degradation, and resource shortages. As demand for alternative protein sources grows, insect farming techniques that are both green and cost-effective are becoming increasingly important.

In response, forward-thinking insect farmers are turning to artificial intelligence (AI) to optimize their operations and reduce expenses. Insect farming is a form of animal husbandry that involves raising and keeping insects, such as mealworms, crickets, and black soldier flies, for human consumption, animal feed, or other products.

Insect farming has several advantages over conventional livestock farming, including a higher feed conversion rate, lower greenhouse gas emissions, and reduced land and water requirements. Compared with soybean-based animal feed, insect farming can help cut emissions, and insect-based feed has a particularly low carbon footprint when the larvae are fed natural food waste.

Full Circle Biotechnology, based just outside Bangkok, is using artificial intelligence to optimise production and reduce costs. The company raises 20 million black soldier fly larvae on fruit and vegetable waste supplied by food and beverage manufacturers; the larvae are then combined with probiotic bacteria and mushrooms to create a sustainable feed for Thailand's shrimp and pig farms.

The larvae are raised in a small, enclosed facility in a dark, warm, humid environment, where they feed on that fruit and vegetable waste before being harvested.

Collins believes insect-based feeds can help reduce the deforestation in South America associated with soybean-based feeds. Studies also suggest insect-based feeds have a much lower carbon footprint than soymeal; Full Circle says its feed's footprint is "100 times lower than that of soymeal."

Insect-based feed is promoted as good for both the environment and nutrition, but the benefit depends on the feedstock. It clearly cuts carbon emissions when the larvae are raised on natural food waste; according to one report, however, soybean-based feed can produce less carbon than insect feed when the insects are fed processed food sources.

Full Circle's feed contains more than twice as much protein as soy feed, making it more filling and nutritious. The company currently supplies 49 farms across Thailand and employs 14 people.

The company faces a significant challenge, however: soybean-based feed is considerably cheaper than insect-based feed. To close that gap, Full Circle is turning to artificial intelligence to maximise production while driving down costs.

A machine learning system is being developed to analyse all available historical and current data from the farm, working out and then continuously fine-tuning the most efficient methods. The variables involved range from temperature and the amount of feed used to the optimum space the larvae need, tracking thousands of flies quickly and accurately, and whether to introduce new strains of flies.

By training the AI on this historical and current data, Full Circle hopes to fine-tune everything from temperatures and food supplies to the space the larvae need, and even the introduction of new strains of flies or probiotic bacteria.
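As a purely illustrative toy example, not Full Circle's or Cogastro's actual system, the sketch below shows the general idea of this kind of data-driven tuning: fit a simple model of larvae yield from historical rearing conditions, then search for the settings the model predicts will perform best. All variable names and numbers are hypothetical.

```python
import numpy as np

# Hypothetical historical records: temperature (°C), feed per tray (g), yield (g)
history = np.array([
    [26.0, 400, 310],
    [28.0, 400, 355],
    [30.0, 400, 370],
    [32.0, 400, 340],
    [30.0, 500, 395],
    [32.0, 500, 360],
])
X, y = history[:, :2], history[:, 2]

# Quadratic features let the model capture an optimum rather than a straight line.
def features(temp, feed):
    return np.column_stack([np.ones_like(temp), temp, feed, temp**2, temp * feed])

coef, *_ = np.linalg.lstsq(features(X[:, 0], X[:, 1]), y, rcond=None)

# Grid-search candidate settings and pick the one with the highest predicted yield.
temps = np.linspace(24, 34, 101)
feeds = np.linspace(350, 550, 41)
tt, ff = np.meshgrid(temps, feeds)
pred = features(tt.ravel(), ff.ravel()) @ coef
best = np.argmax(pred)
print(f"Suggested settings: {tt.ravel()[best]:.1f} °C, {ff.ravel()[best]:.0f} g feed "
      f"(predicted yield {pred[best]:.0f} g)")
```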

AI is seen as an invaluable tool for speeding up trial and error and building a better understanding of how insects are produced and harvested. Meanwhile, in Lithuania, insect-farm management software provider Cogastro is working on an AI-based system of its own.

Cogastro currently provides data-analysis software, but the planned AI upgrade will let the system learn from farm data and adjust, adapt, and make changes autonomously within an insect farm. The company expects to launch its AI commercially within the next three years, a move spearheaded by its founder and CEO, Mante Sidlauskaite.

Sidlauskaite cautions, however, against the proliferation of companies claiming to offer ready-made AI systems, stressing that accurate AI models take time and experience to refine. The integration of AI into insect farming nonetheless marks a significant shift in the agricultural landscape, she says. By leveraging AI, insect farmers stand to benefit from reduced costs, improved efficiency, and faster progress towards a more sustainable and resilient food production system.

With the demand for alternative protein sources on the rise, AI-driven insect farming is positioned to play a pivotal role in shaping the future of agriculture. At Full Circle, collaboration with Singapore-based AI specialist Simon Christofides is underway to develop their AI system. Full Circle's spokesperson, Collins, acknowledges the relative novelty of black soldier fly larvae farming compared to traditional agricultural practices, indicating that there is much to learn. 

He expresses confidence in AI's ability to accelerate the learning curve by analyzing extensive data collected from various sensors. While AI optimization is deemed essential for production enhancement, Collins underscores the necessity of maintaining a hands-off approach in certain aspects of insect farming.

Where is AI Leading Content Creation?


Artificial Intelligence (AI) is reshaping the world of social media content creation, offering creators new possibilities and challenges. The fusion of art and technology is empowering creators by automating routine tasks, allowing them to channel their energy into more imaginative pursuits. AI-driven tools like Midjourney, ElevenLabs, Opus Clip, and Papercup are democratising content production, making it accessible and cost-effective for creators from diverse backgrounds.  

Automation is at the forefront of this revolution, freeing up time and resources for creators. These AI-powered tools streamline processes such as research, data analysis, and content production, enabling creators to produce high-quality content more efficiently. This democratisation of content creation fosters diversity and inclusivity, amplifying voices from various communities. 

Yet, as AI takes centre stage, questions arise about authenticity and originality. While AI-generated content can be visually striking, concerns linger about its soul and emotional depth compared to human-created content. Creators find themselves navigating this terrain, striving to maintain authenticity while leveraging AI-driven tools to enhance their craft. 

AI analytics are playing a pivotal role in content optimization. Platforms like YouTube utilise AI algorithms for A/B testing headlines, predicting virality, and real-time audience sentiment analysis. Creators, armed with these insights, refine their content strategies to tailor messages, ultimately maximising audience engagement. However, ethical considerations like algorithmic bias and data privacy need careful attention to ensure the responsible use of AI analytics in content creation. 

The rise of virtual influencers, like Lil Miquela and Shudu Gram, poses a unique challenge to traditional content creators. While these virtual entities amass millions of followers, they also threaten the livelihoods of human creators, particularly in influencer marketing campaigns. Human creators, by establishing genuine connections with their audience and upholding ethical standards, can distinguish themselves from virtual counterparts, maintaining trust and credibility. 

As AI continues its integration into content creation, ethical and societal concerns emerge. Issues such as algorithmic bias, data privacy, and intellectual property rights demand careful consideration for the responsible deployment of AI technologies. Upholding integrity and ethical standards in creative practices, alongside collaboration between creators, technologists, and policymakers, is crucial to navigating these challenges and fostering a sustainable content creation ecosystem. 

In this era of technological evolution, the impact of AI on social media content creation is undeniable. As we embrace the possibilities it offers, addressing ethical concerns and navigating through the intricacies of this digitisation is of utmost importance for creators and audiences alike.

 

Persistent Data Retention: Google and Gemini Concerns

 


Competing with Microsoft for subscriptions, Google has renamed its Bard chatbot Gemini, after the new artificial intelligence model that powers it, and said consumers can pay to upgrade its reasoning capabilities. Gemini Advanced, which runs on the more powerful Ultra 1.0 model, is available for US$19.99 ($30.81) a month, according to Alphabet.

The subscription includes two terabytes of cloud storage, which Google normally sells for $9.99 ($15.40) a month, and subscribers will soon gain access to Gemini in Gmail and the Google productivity suite.

The new plan, called Google One AI Premium, is seen as Google's biggest challenge yet to Microsoft and its partner OpenAI. It also shows that competition for consumers is heating up, as they now have several paid AI subscriptions to choose from.

In the past year, OpenAI launched its ChatGPT Plus subscription, giving users early access to new AI models and other features, while Microsoft recently launched a competing subscription for artificial intelligence in Word and Excel. Both services cost US$20 a month in the United States.

According to Google, human annotators routinely read, tag, and process conversations with Gemini, even though those conversations are kept separate from users' Google Accounts. Google does not specify whether these annotators are in-house or outsourced, which matters where data security is concerned.

These conversations can be kept for as long as three years, along with "related data" such as the languages and devices used and the user's location. Users do, however, have some control over how their Gemini data is retained.

By turning off Gemini Apps Activity in Google's My Activity dashboard (it is enabled by default), users can prevent future conversations with Gemini from being saved to their Google Account for review, which removes the three-year retention window for those future conversations.

Individual prompts and conversations with Gemini can also be deleted from the Gemini Apps Activity screen. Even so, Google says that when Gemini Apps Activity is turned off, conversations are still kept on the user's Google Account for up to 72 hours to maintain the safety and security of the Gemini apps and to help improve them.

Google encourages users not to enter confidential or sensitive information in conversations, anything they would not want a reviewer to see or Google to use to improve its products, services, and machine-learning technologies. On Thursday, Google product lead Jack Krawczyk said Gemini Advanced was available in English in 150 countries worldwide.

Next week, Gemini will begin rolling out on smartphones in Asia-Pacific, Latin America and other regions around the world, with additional language support including Japanese and Korean, following the company's smartphone rollout in the US.

A free two-month trial of the subscription is available to all users. Announcing the change, Krawczyk said Google's approach to artificial intelligence had matured, bringing "the artist formerly known as Bard" into the "Gemini era." As GenAI tools proliferate, organizations are becoming increasingly wary of the privacy risks associated with them.

According to a Cisco survey conducted last year, 63% of companies have created restrictions on what kinds of data can be submitted to GenAI tools, while 27% have prohibited the use of GenAI tools altogether. The same survey found that 45% of employees had submitted "problematic" data into these tools, including personal information and non-public files about their employers.

In response, vendors such as OpenAI, Microsoft, Amazon, and Google now offer GenAI products aimed at enterprises that do not retain customer data for training models or any other purpose. As is usually the case when corporate greed is involved, it is ordinary consumers who are likely to get shortchanged.

The Rise of AI Restrictions: 25% of Firms Slam the Door on AI Magic

 


Shortly after ChatGPT was first released to the public, corporate titans from Apple to Verizon made headlines by banning its use at work. A recent study confirms that those companies are not outliers.

More than 1 in 4 companies have banned the use of generative artificial intelligence tools at work at some point, according to a Cisco survey of 2,600 privacy and security professionals conducted last summer.

According to the survey, 63% of respondents said they limit what data employees can enter into these systems, and 61% said they restrict which generative AI tools employees can use within their organizations.

The findings come from the firm's annual Data Privacy Benchmark Study, a survey of 2,600 privacy and security professionals across 12 countries; roughly two-thirds of those surveyed restrict the types of information that can be entered into LLM-based systems or prohibit specific applications outright.

According to Robert Waitman, director of Cisco's Privacy Center of Excellence, who wrote a blog post about the survey, over two-thirds of respondents were concerned that data entered into these tools could be disclosed to competitors or the public. Even so, 48% admitted to entering non-public information about their company, which is exactly the kind of exposure they worry about.

Consumers have many concerns about uses of AI that involve their data, and 91% of organizations recognize they need to do more to reassure customers that their data is used only for intended and legitimate purposes. That figure is similar to last year's, suggesting little progress has been made in building consumer trust.

Organizations' priorities also differ from individuals' when it comes to building consumer trust. Consumers care most about getting clear information on exactly how their data is used and not having it sold to marketers. Among the businesses surveyed, compliance with privacy laws is the top priority (25%), followed closely by avoiding data breaches (23%).

This gap suggests a greater focus on transparency would be beneficial, particularly in AI applications, where understanding how algorithms make decisions can be difficult. Over the past five years, privacy spending has more than doubled, benefits have risen, and returns on investment have held steady.

This year, 95% of respondents said privacy's benefits outweigh its costs, with the average organization reporting a 1.6x return on its privacy investment. Additionally, 80% said privacy investments had brought higher levels of loyalty and trust, a figure that rises to 92% among the most privacy-mature organizations.

Since last year, the largest organizations, those with 10,000 or more employees, have increased their privacy spending by around 7-8%. Smaller organizations invested less: businesses with 50-249 employees cut their average privacy investment by about a quarter.

The survey also found that 94% of respondents said their customers would not buy from them if they did not adequately protect customer data. "Customers are looking for hard evidence that an organization can be trusted," said Harvey Jang, Cisco Vice President and Chief Privacy Officer.

Privacy has become inextricably linked with customer trust and loyalty, and investing in it can help organizations use AI ethically and responsibly as the technology becomes more prevalent.

Microsoft is Rolling out an AI Powered Key

 


Prepare for a paradigm shift as Microsoft takes a giant leap forward with a game-changing announcement – the integration of an Artificial Intelligence (AI) key in their keyboards, the most substantial update in 30 years. 

This futuristic addition promises an interactive and seamless user experience, bringing cutting-edge technology to the tips of your fingers. Explore the next frontier of computing as Microsoft redefines the way we engage with our keyboards, setting a new standard for innovation in the digital age. The groundbreaking addition grants users seamless access to Copilot, Microsoft's dynamic AI tool designed to elevate your computing experience. 

At the forefront of AI advancements, Microsoft, a key investor in OpenAI, strategically integrates Copilot's capabilities into various products. Witness the evolution of AI as Microsoft weaves its intelligence into the fabric of everyday tools like Microsoft 365 and enhances search experiences through Bing. 

Not to be outdone, rival Apple has long embraced AI integration, evident in Macbooks featuring a dedicated Siri button on their touch bar. As the tech giants vie for user-friendly AI interfaces, Microsoft's AI key emerges as a pivotal player in redefining how we interact with technology. 

Copilot, the star of Microsoft's AI arsenal, goes beyond the ordinary, assisting users in tasks ranging from efficient searches to crafting emails and even generating visually striking images. It's not just a tool; it's your personalised AI companion, simplifying tasks and enriching your digital journey. Welcome to the era where every keystroke opens doors to boundless possibilities. 

By pressing this key, users seamlessly engage with Copilot, enhancing their daily experiences with artificial intelligence. Similar to the impact of the Windows key introduced nearly 30 years ago, the Copilot key marks another significant milestone in our journey with Windows, serving as the gateway to the realm of AI on PCs. 

In the days leading up to and during CES, the Copilot key will debut on numerous Windows 11 PCs from our ecosystem partners. Expect its availability from later this month through Spring, including integration into upcoming Surface devices. 

Copilot itself has already made waves in Office 365 applications such as Word, PowerPoint, and Teams, where it offers features like meeting summarization, email drafting, and presentation creation; the new key simply makes it easier to reach. Bing, Microsoft's search engine, has also integrated Copilot.

According to Prof John Tucker from Swansea University, the introduction of this key is a natural progression, showcasing Microsoft's commitment to enhancing user experience across various products. Despite Windows 11 users already having access to Copilot via the Windows key + C shortcut, the new dedicated key emphasises the feature's value.

Acknowledging how slowly keyboards have evolved over the past 30 years, Prof Tucker notes that Microsoft's focus on this particular feature illustrates its potential to engage users across multiple products. Google, the dominant search engine, has its own AI system, Bard, while Microsoft's Copilot is built on OpenAI's GPT-4 large language model.

The UK's competition watchdog is delving into Microsoft's ties with OpenAI, prompted by disruptions in the corporate landscape that resulted in a tight connection between the two entities. The investigation seeks to understand the implications of this close association on competition within the industry. 

As we anticipate its showcase at CES, this innovative addition not only reflects Microsoft's commitment to user-friendly technology but also sparks curiosity about the evolving landscape of AI integration. Keep your eyes on the keyboard – the Copilot key signals a transformative era where AI becomes an everyday companion in our digital journey.


Google Eases Restrictions: Teens Navigate Bard with Guardrails

 


Google has announced that it plans to let teens in most countries use Bard, its artificial-intelligence chatbot, with certain guardrails in place. Starting Thursday, Google will begin opening up access to Bard to teenagers in most countries around the world, according to Tulsee Doshi, Head of Product, Responsible AI at Google.

Teens who meet the minimum age requirement to manage their own Google Account can access the chatbot, with support for more languages to follow. The expanded launch comes with a number of safety features and guardrails designed to prevent teens from encountering harmful content.

According to a blog post by Google, teens can use the tool to find inspiration, discover new hobbies, and solve everyday problems, and they can ask Bard questions about anything from important topics, such as where to apply to college, to lighter ones, such as how to learn a new sport.

Google also positions Bard as a valuable learning tool, enabling teens to dig deeper into topics, develop their understanding of complex concepts, and practise new skills in ways that work for them.

Bard includes safety features designed to keep unsafe content, such as material about illegal or age-gated substances, out of its responses to teens, and it has been trained to recognise topics that are inappropriate for younger users.

The first time a teen asks a fact-based question, Bard will automatically run its double-check response feature, which looks for substantiation of Bard's answer across the web.

To help teens develop information literacy and critical thinking skills, Bard will actively and continuously encourage them to use double-check, and Google, mindful of how large language models (LLMs) can hallucinate, plans to make this feature available to everyone who uses Bard.

A "tailored onboarding experience" will also be offered to teens which provides them with a link to Google's AI Literacy Guide, along with a video explaining how to use generative AI responsibly, and a description of how Bard Activity is used with the option to turn it on or off. In addition, Google has also announced that it will be bringing a math learning experience to Bard's campus, which will allow anyone, including teens, to type or upload a picture of a math equation. 

Instead of just providing the answer, Bard will explain how to solve the equation step by step. Google has also put guardrails in place to protect younger users as access to the chatbot widens.


Alongside the teen-focused changes, Google is adding features that, while likely to be especially helpful to teens, are available to everyone, the math explanations among them.

Google has spent recent years improving the quality of Search for homework help, and Bard will also be able to create charts based on information provided in a prompt. It is not surprising that Google is releasing Bard for teens just as social platforms have launched AI chatbots for young users, to mixed reviews.

Snapchat, for example, faced controversy after launching its "My AI" chatbot in February without appropriate age-gating features, when the bot was found chatting with minors about topics such as covering up the smell of weed and setting the mood for sexual activity.

Bard itself was announced in February, with a limited early-access launch in the US and the UK during which the company made an embarrassing factual error caused by hallucination; it is now available in over 230 countries and territories, including the United States, the United Kingdom, and Australia. Google has also added a raft of new features to Bard as it competes with chatbots such as OpenAI's ChatGPT and Microsoft's recently rebranded Bing Chat, now called Copilot.

How AI Affects Human Cognition

 

The impact of artificial intelligence (AI) on how people handle and interpret data in the digital age has gained substantial attention. 

This article analyses the advantages and drawbacks of AI's potential influence on cognitive processes in humans. 

 Advantages of personalised AI 

The ability of generative AI tools, like ChatGPT, to deliver personalised content catered to each user's preferences has led to a significant increase in their popularity. These customised AI solutions come with a number of alluring advantages: 

Revolutionising education: By providing custom learning materials, personalised AI has the power to completely change education. Students' comprehension and retention may increase as a result of this. 

Workflow efficiency: AI can automate tasks like content production and data analysis, freeing up time for novel and challenging issues. This improved efficiency might raise productivity in a variety of industries. 

Accelerating scientific discovery: AI's capacity to analyse large datasets and find patterns offers the potential to hasten scientific advancements. 

Enhancing communication: As demonstrated by Meta AI, chatbots and virtual assistants powered by AI can improve connections and interpersonal interactions. Even as companions, they can provide company and assistance. 

Addressing the issues raised by AI-powered thinking

While the benefits of personalised AI are clear, it is important to recognise and address legitimate concerns: 

Data ownership: Gathering and analysing substantial amounts of data is required for the application of AI. Concerns about privacy and security, particularly those involving personal data, must be given top priority to avoid exploitation. 

Cognitive bias challenges: Content produced by AI, which is frequently intended to appear neutral and recognisable, may unintentionally perpetuate cognitive biases. As a result, consumers may have distorted viewpoints.

Filter bubbles and information bias: Filter bubbles are frequently created by social media algorithms, limiting users' exposure to various content. This lack of diversity can lead to ideological polarisation and an increased likelihood of encountering misinformation. 

Influence of AI on human cognition

To assess the possible impact of generative AI on thinking, consider the revolutionary effects of technology in prior decades. With the introduction of the internet in the early 1990s, a new era of information accessibility began: 

Increased meta-knowledge: People now have access to a wealth of knowledge resources thanks to the internet, which has resulted in an increase in meta-knowledge. However, this abundance of knowledge also led to the "Google effect," where people were able to find information quickly but had poorer memory recall. 

Reliance on search engines: Using online search engines enables people to outsource some cognitive functions, which frees up their minds for original thought. But this ease of use also made people more prone to distraction and dependence.

Encouraging the responsible use of AI 

It is critical to take the following precautions to make sure that generative AI tools have a good impact on human thinking: 

AI literacy: Promoting AI literacy should be a top priority for society. Responsible use of AI requires that people be informed about its potential as well as its limitations.

Human autonomy and critical thinking: AI tools need to be developed to support and enhance human autonomy and critical thinking rather than to replace these essential cognitive functions.

Guidelines on What Not to Share with ChatGPT: A Formal Overview

 


A tool like ChatGPT has remarkable power and has profoundly changed how we interact with computers. It also has limitations that are important to understand and bear in mind when using it.

ChatGPT has driven a massive increase in OpenAI's revenue, which reportedly reached around 200 million dollars in 2023 and is expected to exceed one billion dollars by the end of 2024.

ChatGPT's models are powerful enough to generate almost any text users ask for, from a simple math sum to a question about rocket science. As AI-powered chatbots become more prevalent, it is crucial to acknowledge both the advantages they offer and their shortcomings.

Using AI chatbots successfully means understanding the inherent risks that come with them, such as the potential for cyber attacks and privacy problems. A recent change to Google's privacy policy, for example, made clear that the company may use data collected from public web posts to train its AI models and tools.

Equally concerning, ChatGPT retains chat logs to improve the model and keep the service running. There is still a way to address this concern, and it involves not sharing certain information with AI-based chatbots in the first place. Jeffrey Chester, executive director of the digital rights advocacy organization Center for Digital Democracy, has said consumers should view these tools with suspicion at the very least, since, as with so many other popular technologies, they are heavily influenced by the marketing and advertising industries.

The Limits Of ChatGPT 


Unless browsing is enabled (a feature of ChatGPT Plus), ChatGPT generates responses based on the patterns and information it learned during training on a wide range of internet text, with a knowledge cut-off of September 2021.

It does not understand context the way people do, and it does not "know" anything in the human sense. ChatGPT is famous for producing impressive and relevant responses much of the time, but it is not infallible, and the answers it produces can be incorrect or unintelligible for several reasons.

Its proficiency largely depends on the quality and clarity of the prompt given. 

1. Banking Credentials 


The Consumer Financial Protection Bureau (CFPB) published a report on June 6 about the limitations of chatbot technology as the complexity of questions increases. According to the report, poorly implemented chatbots could lead financial institutions to violate federal consumer protection laws.

According to the CFPB, consumer complaints have increased over a variety of issues, including resolving disputes, obtaining accurate information, receiving good customer service, reaching human representatives, and keeping personal information secure. In light of this, the CFPB advises financial institutions to refrain from relying solely on chatbots, and the practical takeaway for users is to keep banking credentials and account details out of chatbot conversations altogether.

2. Personal Identifiable Information (PII). 


Users should be careful whenever sharing sensitive personal information that can identify them, in order to protect their privacy and minimise the risk of misuse. This category includes a person's full name, home address, social security number, credit card number, and any other detail that identifies them as an individual. Protecting these details is paramount to preserving privacy and preventing harm from unauthorised use.
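One practical precaution, sketched below in Python, is to scrub obvious identifiers from text before pasting it into a chatbot. The patterns shown are simple examples and are not an exhaustive or production-grade PII filter.

```python
import re

# Simple, illustrative patterns for common identifiers (not exhaustive).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\b\d{3}[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Reach me at jane.doe@example.com or 555-867-5309, SSN 123-45-6789."))
```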

3. Confidential information about the user's workplace


Users should exercise caution and refrain from sharing private company information when interacting with AI chatbots. It is crucial to understand the potential risks associated with divulging sensitive data to these virtual assistants. 

Major tech companies like Apple, Samsung, JPMorgan, and Google have even implemented stringent policies to prohibit the use of AI chatbots by their employees, recognizing the importance of protecting confidential information. 

A recent Bloomberg article shed light on an unfortunate incident involving a Samsung employee who inadvertently uploaded confidential code to a generative AI platform while utilizing ChatGPT for coding tasks. This breach resulted in the unauthorized disclosure of private information about Samsung, which subsequently led to the company imposing a complete ban on the use of AI chatbots. 

Such incidents highlight the need for heightened vigilance and adherence to security measures when leveraging AI chatbots. 

4. Passwords and security codes 


If a chatbot asks for passwords, PINs, security codes, or any other confidential access credentials, do not provide them. Even though these chatbots are designed with privacy in mind, it is prudent to prioritise safety and keep such credentials out of the conversation entirely.

Keeping passwords and access credentials secure is essential to protecting your accounts and personal information from unauthorised access or misuse.

In an age marked by rapid progress in AI chatbot technology, careful protection of personal and sensitive information is paramount. The guidance above underscores the need to engage with AI-driven virtual assistants responsibly and cautiously, with the preservation of privacy and data integrity as the primary objective, and to stay well informed and exercise prudence when interacting with these powerful tools.

From Text to Multisensory AI: ChatGPT's Latest Evolution

 


OpenAI has updated its generative AI bot, ChatGPT, giving it a whole new level of capabilities, another reminder of how fast the field of artificial intelligence is evolving.

On Monday, OpenAI, the Microsoft-backed AI startup, expanded ChatGPT with new features: the chatbot can now respond aloud in five different voices, and users can submit images and ask ChatGPT questions about what they have uploaded.

OpenAI announced that ChatGPT can now see, hear, and speak in a post on X (formerly Twitter), sharing a video that shows how the features work.

According to the note accompanying the video, voice conversations with ChatGPT (on iOS and Android) and image input (on all platforms) will roll out to users over the next two weeks.

A major part of the update is an image analysis and response function. Upload a picture of your bike, for example, and ChatGPT can give you instructions on how to lower the seat; upload a picture of your refrigerator and it can suggest recipes based on its contents.

The second feature lets users interact with ChatGPT in a synthetic voice, much as they would with an assistant like Siri, with responses generated by OpenAI's models.

The feature is part of an industry-wide trend toward so-called multimodal artificial intelligence systems, which can handle whatever text, pictures, video, or other information a user chooses to throw at them.

Researchers believe the ultimate goal is artificial intelligence capable of processing information much the way a human does. In addition to answering questions in a variety of voices, ChatGPT will also be able to respond in a variety of languages, based on users' personal preferences.

To create each voice, OpenAI enlisted professional voice actors, and it uses Whisper, its speech recognition system, to transcribe spoken words into text. ChatGPT's new voice capabilities are powered by OpenAI's new text-to-speech model, which can create human-like audio from just text and a few seconds of speech samples, opening the door to many "creative and accessible applications".

OpenAI is also collaborating with Spotify on a project to translate podcasts into several languages as naturally as possible in the podcaster's own voice. To let ChatGPT understand images, OpenAI is using multimodal versions of GPT-3.5 and GPT-4.

Users can now upload an image and ask ChatGPT questions about it, whether that means exploring the contents of a fridge to plan a meal or analysing data from a complex graph for work. Over the next two weeks, Plus and Enterprise users will gradually gain access to the new voice and image capabilities, which can be enabled through their settings.
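For developers, the rough sketch below shows what a comparable multimodal request looks like through OpenAI's public API, the developer-facing counterpart to the in-app feature described above rather than the ChatGPT app itself. The model name and image path are placeholders; use whichever vision-capable model is available to your account.

```python
import base64
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode a local image so it can be sent inline with the prompt.
with open("fridge.jpg", "rb") as fh:
    image_b64 = base64.b64encode(fh.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: any vision-capable chat model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What meals could I make with what you see here?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```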

Voice will be available on both iOS and Android, with the option to enable it via the settings menu, while image input will be available on all platforms. Users can also apply ChatGPT to specific topics, such as research in specialised fields.

OpenAI is open about the model's limitations and discourages high-risk use cases that have not been properly verified. For example, the model does a good job of transcribing English speech but performs worse in other languages, especially those written in non-Roman scripts.

OpenAI accordingly advises non-English-speaking users not to rely on ChatGPT for such transcription. Recognising the potential risks of advanced capabilities such as voice, OpenAI has limited the technology to voice chat and developed it in collaboration with voice actors to keep it authentic and safe. The same technology powers Spotify's Voice Translation feature, which lets podcasters translate their content into a range of languages in their own voice, expanding their reach.

For image input, OpenAI has taken measures to protect individuals' privacy by limiting ChatGPT's ability to identify and describe specific people. The company says real-world usage and user feedback will be crucial for strengthening these safeguards while keeping the tool as useful as possible.

Tech Enthusiasts Discover New Frontiers in the Age of EVs

Electric vehicle (EV) technology is developing quickly, and a new group of tech aficionados called EV hackers is forming. These people want to investigate the latent possibilities of electric automobiles, not steal cars or undermine security systems. These creative minds have turned the world of EVs into a playground, adjusting performance and revealing hidden features.

The popularity of EVs has increased interest among tech-savvy people, according to a recent post on Wealth of Geeks. They view electric cars not only as a means of mobility but also as a cutting-edge technological marvel with limitless personalization options. The writer contends that "EVs represent a convergence of transportation and cutting-edge technology, and this fusion inevitably attracts hackers and tech enthusiasts."

The depth of potential within this subject was shown during an intriguing presentation at the Black Hat conference. The discussion, "Jailbreaking an Electric Vehicle: Or What It Means to Hotwire Tesla's X-Based Seat Heater," covered the intricate details of hacking electric vehicles' software. The presentation demonstrated the opportunity for personalization and modification inside the EV space without endorsing any unlawful activity.

Pushing the limits of EV technology is another area of current research at IIT CNR. Their efforts are directed toward bettering the performance and functionality of electric vehicles by comprehending and altering the underlying software. This study not only adds to the body of expanding knowledge in the area, but it also provides motivation for other tech aficionados.

Dr. Maria Rossi, a lead researcher at IIT CNR, emphasized, "Electric vehicles are not just cars; they are complex computer systems on wheels. There is so much potential to optimize and enhance their capabilities, and this is what drives our research."

While the idea of hacking may carry negative connotations, in the world of EVs, it simply means exploring the uncharted territories of electric vehicle technology. These enthusiasts are driven by a passion for innovation and a desire to unlock the full potential of electric vehicles.

Electric vehicles are developing into more than just a means of mobility; they are becoming a technological blank canvas for enthusiasts and hackers. The field of electric vehicles (EVs) is positioned for exciting breakthroughs in the years to come thanks to a growing community of researchers and enthusiasts.

Meta Publishes FACET Dataset to Assess AI Fairness

 

FACET, a benchmark dataset designed to aid researchers in testing computer vision models for bias, was released by Meta Platforms Inc. earlier this week. 

FACET is being launched alongside an update to the open-source DINOv2 toolbox. DINOv2, which was first released in April, is a set of artificial intelligence models aimed to help with computer vision projects. DINOv2 is now accessible under a commercial licence, thanks to recent upgrades. 

Meta's new FACET dataset is intended to aid researchers in determining whether a computer vision model is producing biased results. The company explained in a blog post that measuring AI fairness is difficult using current benchmarking methodologies. FACET, according to Meta, will make the work easier by providing a big evaluation dataset that researchers may use to audit various types of computer vision models. 

"The dataset is made up of 32,000 images containing 50,000 people, labelled by expert human annotators for demographic attributes (e.g., perceived gender presentation, perceived age group), additional physical attributes (e.g., perceived skin tone, hairstyle) and person-related classes (e.g., basketball player, doctor),” Meta researchers explained in the blog post. "FACET also contains person, hair, and clothing labels for 69,000 masks from SA-1B."

Researchers can test a computer vision model for fairness flaws by running it through FACET. They can then analyse whether the accuracy of the model's conclusions varies across the demographic groups depicted in the images; such differences in accuracy could indicate that the AI is biased. According to Meta, FACET can be used to detect fairness imperfections in four different types of computer vision models. The dataset can be used by researchers to discover bias in neural networks optimised for classification, which is the task of grouping similar images together. It also makes evaluating object detection models easier. These models are intended to detect items of interest in images automatically.
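A minimal sketch of that kind of disparity analysis might look like the following (illustrative Python, not Meta's official FACET tooling): compare a classifier's accuracy across the demographic attribute labels attached to each image. The record format and group names below are hypothetical placeholders.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of dicts with keys 'group', 'label', 'prediction'."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        correct[r["group"]] += int(r["prediction"] == r["label"])
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation results for a "doctor" classifier
results = [
    {"group": "perceived_darker_skin",  "label": "doctor", "prediction": "doctor"},
    {"group": "perceived_darker_skin",  "label": "doctor", "prediction": "nurse"},
    {"group": "perceived_lighter_skin", "label": "doctor", "prediction": "doctor"},
    {"group": "perceived_lighter_skin", "label": "doctor", "prediction": "doctor"},
]

per_group = accuracy_by_group(results)
print(per_group)  # per-group accuracy, e.g. {'perceived_darker_skin': 0.5, ...}
print("max accuracy gap:", max(per_group.values()) - min(per_group.values()))
```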

FACET is especially useful for auditing AI applications that conduct instance segmentation and visual grounding, two specialised object detection tasks. Instance segmentation is the technique of visually highlighting items of interest in a photograph, such as by drawing a box around them. Visual grounding models, in turn, are neural networks that can scan a photo for an object that a user describes in natural language terms. 

"While FACET is for research evaluation purposes only and cannot be used for training, we’re releasing the dataset and a dataset explorer with the intention that FACET can become a standard fairness evaluation benchmark for computer vision models,” Meta’s researchers added.

Along with the introduction of FACET, Meta changed the licence of their DINOv2 series of open-source computer vision models to an Apache 2.0 licence. The licence permits developers to use the software for both academic and commercial applications.

Meta's DINOv2 models are designed to extract features of interest from photos, which engineers can then use to train other computer vision models. According to Meta, DINOv2 is much more accurate than the previous-generation neural network it built for the same task in 2021.
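For readers who want to try that feature-extraction workflow, the sketch below loads a small DINOv2 variant through torch.hub, following the pattern documented in the facebookresearch/dinov2 repository, and produces one embedding per image. The preprocessing values are standard ImageNet statistics used here for illustration; consult the repository for the recommended pipeline.

```python
import torch
from torchvision import transforms
from PIL import Image

# Load the ViT-S/14 variant of DINOv2 from torch.hub
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),          # 224 is a multiple of the 14-pixel patch size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    features = model(image)              # one embedding vector per image

print(features.shape)                    # e.g. torch.Size([1, 384]) for ViT-S/14
# These embeddings can then be used to train downstream classifiers,
# detectors, or segmentation heads.
```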

Boosting Business Efficiency: OpenAI Launches ChatGPT for Enterprises

 


OpenAI, known for its ChatGPT chatbot, has announced ChatGPT Enterprise, its most powerful chatbot product for businesses. Introduced earlier this week, the AI assistant gives businesses unlimited access to GPT-4 at faster speeds.

It also offers extended context windows for working with long texts, encryption, enterprise-level security and privacy for data in transit, and group account management. The enterprise edition builds on the success of ChatGPT, launched just nine months ago, and aims both to ease privacy concerns and to expand the tool's capabilities. 

OpenAI is one of the top players in the artificial intelligence race. ChatGPT, its AI-powered chatbot, serves millions of users each day with advice and assistance in their personal and professional lives. Having focused on individuals for a considerable time, the company has now introduced a business tier, dubbed ChatGPT Enterprise. 

In a recent statement, OpenAI explained that the new offering was built with privacy, security, and functionality as priorities, noting that ChatGPT is increasingly woven into the personal and professional lives of its users. 

Businesses have been wary of ChatGPT's privacy and security: they fear that data submitted through the tool might be used to train its algorithms, and that sensitive customer information could be accidentally exposed to AI models in the process. 

To address concerns about control and ownership of data, OpenAI has stated clearly that data submitted through ChatGPT Enterprise will not be used to train its models. The company is offering a dedicated, private platform designed specifically for business use, which should alleviate the concerns raised above. 

The business version of the machine-learning-powered chatbot arrives as OpenAI responds to recent declines in user numbers and to broader concerns about the possible harms of artificial intelligence. 

According to a blog post published by OpenAI on Monday, ChatGPT has become one of the most popular tools inside companies today, and ChatGPT Enterprise adds the enhanced security and privacy those companies have been asking for.

As noted in OpenAI's blog post, ChatGPT Enterprise can perform all the same tasks as ChatGPT, including writing emails, drafting essays, and debugging computer code, in addition to handling more complex work.

The new offering adds enhanced performance and customizations to ChatGPT, along with "enterprise-grade" privacy and data analysis capabilities. In terms of feature set, that puts ChatGPT Enterprise very close to Bing Chat Enterprise, Microsoft's recently launched chatbot service aimed at enterprise customers. 

OpenAI says ChatGPT Enterprise will soon gain even more advanced analytical features, along with options to customize the assistant's knowledge of company data, and that a version for smaller teams will follow later. 

A company spokesperson said OpenAI is working to onboard as many enterprises as possible over the next few weeks. OpenAI told The Verge that this is its first enterprise-oriented product, which it will offer separately from ChatGPT and ChatGPT Plus, the subscription plan that gives individual users faster access to the chatbot.  

In an email, the company said existing ChatGPT customers can keep their current way of accessing the chatbot, but can switch to ChatGPT Enterprise if they want the new features. Because OpenAI's models are widely consumed as generative AI SaaS, many organizations have built generative AI tools that use GPT-4 through an API or cloud service rather than connecting to the model directly. 
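For context, this is roughly what consuming GPT-4 "as an API" looks like. The sketch assumes the `requests` package, an API key stored in the `OPENAI_API_KEY` environment variable, and a model name and prompt that are illustrative and may change over time.

```python
# Minimal sketch: calling the OpenAI chat completions endpoint directly over HTTPS.
# The model name and prompt are placeholders; error handling is kept minimal.
import os
import requests

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4",
        "messages": [
            {"role": "user",
             "content": "Summarise our Q3 sales report in two sentences."}
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```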

To keep their data from being fed back into GPT's training corpus, some companies have begun building or hosting their own large language models, but that approach is difficult for smaller firms to implement.

Here's How AI Can Revolutionize the Law Practice

 

Artificial intelligence (AI) has gained enormous pace in the legal profession in recent years, as law firms throughout the world have recognised the potential value that AI can bring to their practices. 

By employing technologies such as natural language processing, machine learning, and robotic process automation, law firms realise significant efficiencies that increase profitability while delivering faster outcomes for clients. 

However, adopting an AI strategy properly requires a thorough understanding of both its potential applications and its underlying technical components. This article aims to help you unlock that capability.

Improving the efficiency of legal research and analysis 

AI can help law firms conduct more efficient and accurate legal research and analysis. Using natural language processing (NLP) technologies, legal experts can analyse a considerably larger body of material and extract insights much faster than with traditional manual review. 

Machine learning utilities can ingest vast volumes of documents and artefacts in several languages and automatically surface correlations between legal cases or precedents, supporting lawyers in developing arguments or locating relevant facts for their clients' cases. 
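As a simplified illustration of how text similarity can surface related precedents, the sketch below uses scikit-learn's TF-IDF vectoriser; the case summaries and query are invented, and production systems would rely on far richer language models and much larger corpora.

```python
# Minimal sketch: surfacing related precedents with TF-IDF similarity.
# Assumes scikit-learn is installed; the case summaries are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

case_summaries = [
    "Tenant disputes eviction notice citing improper service of process.",
    "Landlord seeks damages for breach of a commercial lease agreement.",
    "Employee alleges wrongful termination after whistleblower complaint.",
]

query = "Notice of eviction was not properly served on the tenant."

vectorizer = TfidfVectorizer(stop_words="english")
case_matrix = vectorizer.fit_transform(case_summaries)
query_vec = vectorizer.transform([query])

# Rank case summaries by similarity to the query.
scores = cosine_similarity(query_vec, case_matrix).flatten()
for summary, score in sorted(zip(case_summaries, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {summary}")
```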

Improving case management and document automation

Intelligent AI-enabled automation approaches are ideal for document automation and case management tasks. By leveraging automated document assembly tools driven by machine intelligence, legal teams can generate wills, deeds, leases, loan agreements, and other commonly used legal forms far more quickly. 

Automating these processes minimises the waste associated with errors, while the added efficiency significantly shortens the review time for drafts sent out for attorneys' approval.
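The core idea behind template-driven document assembly can be sketched with nothing more than Python's standard library; the template text and field names below are invented for the example, and real systems layer clause libraries and review workflows on top.

```python
# Minimal sketch: assembling a legal document from a reusable template.
# Uses only the Python standard library; the template and values are illustrative.
from string import Template

lease_template = Template(
    "RESIDENTIAL LEASE AGREEMENT\n"
    "This lease is made on $date between $landlord (Landlord) and "
    "$tenant (Tenant) for the premises at $address.\n"
    "Monthly rent: $rent, due on the first day of each month."
)

draft = lease_template.substitute(
    date="1 September 2023",
    landlord="Acme Properties LLC",
    tenant="Jane Doe",
    address="42 Example Street",
    rent="$1,800",
)
print(draft)
```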

E-discovery and due diligence procedures optimisation

One of the most valuable applications of artificial intelligence (AI) in legal practice is optimising e-discovery and due diligence processes. AI can automatically gather data, classify documents, and scale/index information for content analysis. Additionally, clients typically demand quicker and less expensive e-discovery, and automated machine solutions make it simple to achieve both of these goals. 

Lawyers can swiftly identify keywords or important details thanks to AI technology. As a result, they can determine which documents are involved in or linked to a case more quickly than ever before, giving lawyers who employ this technology an advantage over those who rely on manual methods alone. 
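As a simple illustration of the keyword-triage step, here is a minimal sketch; the keywords, file layout, and folder name are invented for the example, and a real e-discovery pipeline would add OCR, de-duplication, metadata indexing, and privilege review.

```python
# Minimal sketch: flagging documents in an e-discovery set that mention key terms.
# The keywords and folder path are placeholders for the example.
import re
from pathlib import Path

keywords = ["merger", "indemnify", "non-disclosure", "termination"]
pattern = re.compile("|".join(re.escape(k) for k in keywords), re.IGNORECASE)

def flag_documents(folder: str) -> dict[str, list[str]]:
    """Return a map of file name -> keywords found in that file."""
    hits: dict[str, list[str]] = {}
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(errors="ignore")
        found = sorted({m.group(0).lower() for m in pattern.finditer(text)})
        if found:
            hits[path.name] = found
    return hits

if __name__ == "__main__":
    for name, terms in flag_documents("./discovery_set").items():
        print(name, "->", ", ".join(terms))
```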

Challenges 

Law firms can profit greatly from AI, but it is not magic and is no substitute for human judgement, so it must be used responsibly. There are several challenges and considerations to take into account when employing AI in a law firm. 

Ethical issues

While AI can increase efficiency for lawyers, it also poses ethical concerns that legal firms should think about, including the possibility of bias. Because people are subject to prejudice, and because AI relies on human-sourced data to produce its outputs and predictions, it has the potential to be biased. 

For example, if previous legal decisions were made with unfair bias and an AI tool uses machine learning to infer conclusions from those decisions, the AI may unwittingly learn the same bias. With this in mind, it is critical for lawyers to watch for potential bias when employing AI. 

Data safety

It is a lawyer's responsibility to safeguard client information and confidential data, which implies that law firms must be cautious about the security of any prospective tools they employ. And, because most AI technologies rely on data to work, law firms must be extra cautious about what data they allow AI to access.

For example, a client's private information should not end up in a database that an AI tool can access and reuse on someone else's behalf. With this in mind, law firms must vet AI vendors thoroughly and ensure that personal data is protected. 

Education and training 

Proper education and guidance are critical to ensuring that AI is used responsibly and ethically in legal firms. While not every lawyer needs to be an expert in artificial intelligence technology, understanding how AI technologies work is critical to assisting lawyers in using them responsibly and identifying any potential ethical or privacy concerns. 

By understanding how AI technologies work when vetting, deploying, and using them, lawyers can draw on their experience to determine how and when to apply AI in their practice.

Top Five Cybersecurity Challenges in the AI Era

 

The cybersecurity industry is fascinated by artificial intelligence (AI), which has the ability to completely change how security and IT teams respond to cyber crises, breaches, and ransomware assaults. 

However, it is important to have a realistic understanding of AI's capabilities and limitations, and a number of obstacles prevent the technology from having an immediate, profound influence on cybersecurity. In this article, we examine the limits of AI in dealing with cybersecurity issues while emphasising the part organisations play in building resilience and data-driven security practices. 

Inaccuracy 

The accuracy of its output is one of AI's main drawbacks in cybersecurity. Even though AI systems such as generative pre-trained transformers, ChatGPT among them, can generate content in line with the internet's zeitgeist, their answers are not necessarily precise or trustworthy. AI systems are excellent at producing answers that sound plausible, but they struggle to offer accurate and trustworthy solutions. Given that not everything found online is factual, relying on unfiltered AI output can be risky. 

Recovery actions' complexity 

Recovery following a cyber attack often involves a complex series of procedures across multiple systems. IT professionals must perform a variety of actions in order to restore security and limit the harm. Entrusting the entire recovery process to an AI system would necessitate a high level of confidence in its dependability. However, existing AI technology is too fragile to manage the plethora of operations required for efficient cyberattack recovery. Directly linking general-purpose AI systems to vital cybersecurity processes is a huge problem that requires extensive research and testing.

General intelligence vs. General knowledge 

Another distinction to make is between general knowledge and general intelligence. While AI systems like ChatGPT excel at delivering general knowledge and generating text, they lack general intelligence. These systems can extrapolate solutions from prior knowledge, but they lack the problem-solving ability that true general intelligence entails.

While dealing with AI systems via text may appear to humans to be effective, it is not consistent with how we have traditionally interacted with technology. As a result, current generative AI systems are limited in their ability to solve complex IT and security challenges.

Making ChatGPT act erratically 

There is another type of threat that we must be aware of: the nefarious exploitation of existing platforms. The possibility of AI being "jailbroken," which is rarely discussed in the media's coverage of the field, is quite real. 

This entails feeding text prompts to tools such as ChatGPT or Google's Bard in order to circumvent their ethical safeguards and set them loose. Done successfully, it turns AI chatbots into powerful assistants for illegal activity. 

While it is critical to prevent the weaponization of general-purpose AI tools, doing so has proven extremely difficult. A recent study from Carnegie Mellon University presented a universal jailbreak applicable to all AI models, capable of generating an almost limitless number of prompts that circumvent AI safeguards. 

Furthermore, AI developers and users keep attempting to "hack" AI systems, and keep succeeding. Indeed, no universal defence against jailbreaking is known yet, and governments and corporations should be concerned as AI's mass adoption grows.