
Social Media Content Fueling AI: How Platforms Are Using Your Data for Training

 

OpenAI has admitted that developing ChatGPT would not have been feasible without the use of copyrighted content to train its algorithms. It is widely known that artificial intelligence (AI) systems heavily rely on social media content for their development. In fact, AI has become an essential tool for many social media platforms.

For instance, LinkedIn is now using its users’ resumes to fine-tune its AI models, while Snapchat has indicated that if users engage with certain AI features, their content might appear in advertisements. Despite this, many users remain unaware that their social media posts and photos are being used to train AI systems.

Social Media: A Prime Resource for AI Training

AI companies aim to make their models as natural and conversational as possible, with social media serving as an ideal training ground. The content generated by users on these platforms offers an extensive and varied source of human interaction. Social media posts reflect everyday speech and provide up-to-date information on global events, which is vital for producing reliable AI systems.

However, it's important to recognize that AI companies are utilizing user-generated content for free. Your vacation pictures, birthday selfies, and personal posts are being exploited for profit. While users can opt out of certain services, the process varies across platforms, and there is no assurance that your content will be fully protected, as third parties may still have access to it.

How Social Platforms Are Using Your Data

Recently, the United States Federal Trade Commission (FTC) revealed that social media platforms are not effectively regulating how they use user data. Major platforms have been found to use personal data for AI training purposes without proper oversight.

For example, LinkedIn has stated that user content can be utilized by the platform or its partners, though it aims to redact or remove personal details from AI training data sets. Users can opt out by going to the "Data Privacy" section under "Settings and Privacy." However, opting out won't affect data already collected.

Similarly, the platform formerly known as Twitter, now X, has been using user posts to train its chatbot, Grok. Elon Musk’s social media company has confirmed that its AI startup, xAI, leverages content from X users and their interactions with Grok to enhance the chatbot’s ability to deliver “accurate, relevant, and engaging” responses. The goal is to give the bot a more human-like sense of humor and wit.

To opt out of this, users need to visit the "Data Sharing and Personalization" tab in the "Privacy and Safety" settings. Under the “Grok” section, they can uncheck the box that permits the platform to use their data for AI purposes.

Regardless of the platform, users need to stay vigilant about how their online content may be repurposed by AI companies for training. Always review your privacy settings to ensure you're informed and protected from unintended data usage by AI technologies.

Stop Using AI for Medical Diagnosis: Experts

AI (artificial intelligence) has become an important tool in many spheres of life, including education, work, and medical research. However, concerns have grown about AI providing medical advice in response to individual queries from patients searching on their own, and the issue has become a hot topic. Today it is easy to sit at your desk and, with a few taps on your phone or computer, look up almost anything, including a medical diagnosis for your own health. Experts, however, warn users to avoid relying on AI for such diagnoses. Here is why.

Using AI for Medical Queries

AI tools like ChatGPT from OpenAI or Copilot from Microsoft are built on language models trained on a huge body of internet text. You can ask questions, and the chatbot responds based on patterns it has learned. These tools can generate responses that are often helpful, but they are not always accurate.

The incorporation of AI into healthcare raises substantial regulatory and ethical concerns. There are significant gaps in the regulation of AI applications, which raises questions regarding liability and accountability when AI systems deliver inaccurate or harmful advice.

No Personalized Care

One of the main drawbacks of AI in medicine is the lack of individualized care. AI systems use large databases to discover patterns, but healthcare is highly individualized. AI cannot fully comprehend the finer nuances of a patient's history or condition, which are frequently required for accurate diagnosis and successful treatment planning.

Bias and Data Quality

The efficacy of AI is strongly contingent on the quality of the data used to train it. AI's results can be misleading if the data is skewed or inaccurate. For example, if an AI model is largely trained on data from a single ethnic group, its performance may suffer when applied to people from other backgrounds. This can result in misdiagnoses or improper medical recommendations.

Misuse

The ease of access to AI for medical advice may result in misuse or misinterpretation of the info it delivers. Quick, AI-generated responses may be interpreted out of context or applied inappropriately by persons without medical experience. Such events have the potential to delay necessary medical intervention or lead to ineffective self-treatment.

Privacy Concerns

Using AI in healthcare usually requires entering sensitive personal information. This creates serious privacy and data security concerns, as breaches could allow unauthorized access to or misuse of user data.

Microsoft's Super Bowl Pitch: We Are Now an AI Firm

 

Microsoft made a comeback to the Super Bowl on Sunday with a commercial for its AI-powered chatbot, highlighting the company's resolve to shake off its reputation as a stuffy software developer and refocus its offerings on the potential of artificial intelligence. 

The one-minute ad, which was uploaded to YouTube on Thursday of last week, shows users accessing Copilot, the AI assistant that Microsoft released a year ago, via their smartphones. The app can be seen helping users automate a range of tasks, including generating computer code snippets and creating digital artwork. 

Microsoft's Super Bowl commercial, which marked the company's first appearance in the game in four years, showcased its efforts to reposition itself as an AI-focused company. The IT behemoth invested $1 billion in OpenAI in 2019 alone, and it has put billions more into refining its AI capabilities. The technology has also been incorporated into staples like Microsoft Word, Excel, and Azure. 

The tech giant now wants customers and companies looking for an AI boost to use its services instead of rivals like Google, which on Thursday revealed an update to its AI program. 

Wedbush Securities Analyst Dan Ives told CBS MoneyWatch that the outcome of the AI race will have a significant impact on multinational tech businesses, as the industry is expected to reach $1.3 trillion by 2032. "This is no longer your grandfather's Microsoft … and the Super Bowl is a unique time to further change perceptions," he stated. 

For 30 seconds of airtime during this year's game, advertisers paid over $7 million, with over 100 million viewers predicted. In a blog post last week, Microsoft Consumer Chief Marketing Officer Yusuf Mehdi announced that the Copilot app is receiving an update "coinciding with the launch of our Super Bowl ad." The update includes a "cleaner, sleeker look" and suggested prompts that could help users take advantage of the app's AI capabilities. 

Thus far, Microsoft's strategy has proven successful. Its cloud-based revenue increased by 24% to $33.7 billion in the most recent quarter, aided by the incorporation of AI into its Azure cloud computing service.

Navigating the Future: Global AI Regulation Strategies

As technology advances quickly, governments all over the world are becoming increasingly concerned about artificial intelligence (AI) regulation. Two noteworthy recent breakthroughs in AI legislation have surfaced, providing insight into the measures governments are implementing to guarantee the proper advancement and application of AI technologies.

The first path is marked by the United States, where on October 30, 2023, President Joe Biden signed an executive order titled "The Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." The order emphasizes the need for clear guidelines and ethical standards to govern AI applications. It acknowledges the transformative potential of AI while emphasizing the importance of addressing potential risks and ensuring public trust. The order establishes a comprehensive framework for the federal government's approach to AI, emphasizing collaboration between various agencies to promote innovation while safeguarding against misuse.

Meanwhile, the European Union has taken a proactive stance with the EU AI Act, the first regulation dedicated to artificial intelligence. Introduced on June 1, 2023, this regulation is a milestone in AI governance. It classifies AI systems into different risk categories and imposes strict requirements for high-risk applications, emphasizing transparency and accountability. The EU AI Act represents a concerted effort to balance innovation with the protection of fundamental rights, fostering a regulatory environment that aims to set a global standard for AI development.

Moreover, in the pursuit of responsible AI development, companies like Anthropic have also contributed to the discourse. They have released a document titled "Responsible Scaling Policy 1.0," which outlines their commitment to ethical considerations in the development and deployment of AI technologies. This document reflects the growing recognition within the tech industry of the need for self-regulation and ethical guidelines to prevent the unintended consequences of AI.

As the global community grapples with the complexities of AI regulation, it is evident that a nuanced approach is necessary. These regulatory frameworks strive to strike a balance between fostering innovation and addressing potential risks associated with AI. In the words of President Biden, "We must ensure that AI is developed and used responsibly, ethically, and with public trust." The EU AI Act echoes this sentiment, emphasizing the importance of human-centric AI that respects democratic values and fundamental rights.

A common commitment to maximizing AI's advantages while minimizing its risks is reflected in the way regulations surrounding the technology are developing. These legislative measures, which come from partnerships between groups and governments, pave the path for a future where AI is used responsibly and ethically, ensuring that technology advances humankind rather than working against it.


Bing Chat Rebrands to ‘Copilot’: What is New?


Bing Chat has been renamed as ‘Copilot,’ according to an announcement made during Microsoft Ignite 2023.

But is the name change the only thing that is new for users? The answer is a little ambiguous. 

What is New with Bing Chat (now Copilot)?

Honestly, there are no significant changes in Copilot, previously known as Bing Chat. "Refinement" might be a more appropriate term to characterize Microsoft's somewhat confusing changes. Let's examine three modifications that Microsoft made to its AI chatbot.

Here, we are listing some of these refinements:

1. A New Home

Copilot, formerly Bing Chat, now has its own standalone webpage, which can be accessed at https://copilot.microsoft.com

This means users no longer need to visit Bing in order to access Microsoft's AI chat experience. They can simply visit the webpage above, without Bing Search and other services interfering with the experience. Put differently, it has become much more "ChatGPT-like." 

Notably, however, the link seems to only function with desktop versions of Microsoft Edge and Google Chrome. 

2. A Minor Makeover

While Microsoft has made certain visual changes in the rebranded Bing Chat, they are insignificant. 

This new version has smaller tiles but still has the same prompts: Write, Create, Laugh, Code, Organize, Compare, and Travel.

However, users can still choose the conversation style: Creative, Balanced, or Precise. The only big change, as mentioned before, is the new name (Copilot) and the tagline: "Your everyday AI companion." 

Though the theme colour switched from light blue to an off-white, the user interface is largely the same.

Users can access DALL-E 3 and GPT-4 for free with Bing Chat, which is now called Copilot. But in order to use Copilot in apps like Word, Excel, PowerPoint, and other widely used productivity tools, users will have to pay a subscription fee for what Microsoft calls "Copilot for Microsoft 365."

3. Better Security for Enterprise Users

Users who have a Bing Chat Enterprise account, or who pay for a Microsoft 365 license, get the additional benefit of stronger data protection. Copilot will be officially launched on December 1. 

What Stayed the Same? 

Microsoft plans to gradually add commercial data protection for those who do not pay. However, Copilot currently stores information from your interactions and follows the same data policy as the previous version of Bing Chat for free users. Therefore, the name and domain change is the only difference for casual, non-subscribing Bing Chat users. OpenAI's GPT-4 and DALL-E 3 models are still available, but users need to be careful about sharing too much personal data with the chatbot.

In summary, there is not much to be excited about for free users: Copilot is the new name for Bing Chat, and it has a new home.  

Google's Chatbot Bard Aims for the Top, Targeting YouTube and Search Domains

 


There has been a lot of excitement surrounding Google's AI chatbot Bard - a competitor to OpenAI's ChatGPT, which is set to become "more widely available to the public in the coming weeks." However, at least one expert has pointed out that in its demo, Bard made a factual error. 

Amid the AI competition between Google and OpenAI, the Microsoft-backed company that created ChatGPT and supplies artificial intelligence services for Microsoft's products, Google has now integrated its chatbot Bard into apps like YouTube, Gmail, and Drive, according to a company announcement published Tuesday. 

At the Reuters NEXT conference in New York on Thursday, a Google executive said that the company's experimental chatbot Bard represents a path toward developing another product that reaches two billion users. In an interview with TechCrunch, Google's product lead Jack Krawczyk said that Bard has laid the foundation for Google to attract even more customers by letting consumers brainstorm and get information with the help of the new AI feature. 

It is possible, for instance, to ask Bard to plan a trip for an upcoming date, complete with flight options and your choice of airline. Users could also ask the tool to summarize meeting notes that have been made in Google Drive documents that they have recently uploaded. Several improvements will be made to Bard this coming Tuesday, including connections to Google's other services. 

The chatbot can also communicate with users in various languages and perform a variety of fact-checking functions, and it benefits from a broader upgrade to the large language model that underpins the tool. Google's Bard has been improving its features for nearly six months since it was first introduced to the public, and this marks the biggest update to the program in that period. 

Among the tech giants, Google, Microsoft, and ChatGPT creator OpenAI are racing against one another, as they roll out increasingly sophisticated consumer-facing artificial intelligence technologies, and they hope to convince users of their value as more than just a gimmick to them.

Google, which reportedly issued an internal code red when OpenAI beat it to the release of an artificial intelligence chatbot, is now believed to be using its other widely used software products to make Bard more useful. Even so, Bard has not received the same amount of attention as ChatGPT. 

According to data from Similarweb, a company that analyzes web traffic, ChatGPT had nearly 1.5 billion desktop and mobile visits in August, substantially more than Google's AI tool and other competitors. Bard recorded just under 200 million desktop and mobile visits over the same period. 

In an interview, Jack Krawczyk, Google's product lead for Bard, stated that Google was aware of the limitations that had caused the chatbot to not appeal to as many people as it should have. Users had told Mr. Krawczyk that the product was neat and novel, but that it did not integrate very well with their personal lives. 

Earlier this month, Google released what it calls Bard Extensions, similar to the ChatGPT plug-ins that OpenAI announced in March, which let ChatGPT work with up-to-date information provided by third-party companies such as Expedia, Instacart, and OpenTable through their web services and voice apps. 

With the new updates, Google is trying to replicate some of its search engine's capabilities inside Bard by including Flights, Hotels, and Maps, so users can conduct travel and transportation research without leaving the chatbot. In addition, Bard moves closer to becoming a more personalized assistant, letting users ask which emails they missed and which points in a document matter most to them. 

With the help of Google's large language model, an artificial intelligence algorithm trained on vast amounts of data, Bard has been able to assist students with writing drafts of essays or planning their friends' baby showers. 

With these new extensions, Bard will also draw on a host of Google services, retrieving information from YouTube, Google Maps, Flights, and Hotels. According to Google, users can ask Bard for things like "Show me how to write a good best man speech and show me YouTube videos about it for inspiration," or request travel suggestions complete with driving directions.

Users can disable the Bard extensions at any moment. It is also possible to link Gmail, Docs, and Google Drive accounts to Bard so that it can help analyze and manage that data. 

For instance, the tool might be able to help with queries such as "Find the most recent lease agreement in my Drive and calculate how much my security deposit was," Google said. The company stated that it will not use users' personal Google Workspace information to train Bard or to serve targeted advertising, and that users can withdraw their permission at any time if they do not want Bard to access their personal information. 

By giving Bard access to a wealth of personal information as well as popular services such as Gmail, Google Maps, and YouTube, Google is, in theory, making the chatbot even more helpful and earning users' confidence as a result. Google posits that a person using Bard to plan a group trip to the Grand Canyon could find dates that suit everyone, get flight and hotel options, receive directions from Maps, and draw on the wealth of useful videos available on YouTube.

AI Chatbots Have Extensive Knowledge About You, Whether You Like It or Not

 

Researchers from ETH Zurich have recently released a study highlighting how artificial intelligence (AI) tools, including generative AI chatbots, have the capability to accurately deduce sensitive personal information from people based solely on their online typing. This includes details related to race, gender, age, and location. This implies that whenever individuals engage with prompts from ChatGPT, they may inadvertently disclose personal information about themselves.

The study's authors express concern over potential exploitation of this functionality by hackers and fraudsters in social engineering attacks, as well as the broader apprehensions about data privacy. While worries about AI capabilities are not new, they appear to be escalating in tandem with technological advancements. 

Notably, this month has witnessed significant security concerns, with the US Space Force prohibiting the use of platforms like ChatGPT due to data security apprehensions. In a year rife with data breaches, anxieties surrounding emerging technologies like AI are somewhat inevitable. 

The research on large language models (LLMs) aimed to investigate whether AI tools could intrude on an individual's privacy by extracting personal information from their online writings. 

To conduct this, researchers constructed a dataset from 520 genuine Reddit profiles, demonstrating that LLMs accurately inferred various personal attributes, including job, location, gender, and race—categories typically safeguarded by privacy regulations. Mislav Balunovic, a PhD student at ETH Zurich and co-author of the study, remarked, "The key observation of our work is that the best LLMs are almost as accurate as humans, while being at least 100x faster and 240x cheaper in inferring such personal information."

This revelation raises significant privacy concerns, particularly because such information can now be inferred at a previously unattainable scale. With this capability, users might be targeted by hackers posing seemingly innocuous questions. Balunovic further emphasized, "Individual users, or basically anybody who leaves textual traces on the internet, should be more concerned as malicious actors could abuse the models to infer their private information."

The study evaluated four models in total, with GPT-4 achieving an 84.6% accuracy rate and emerging as the top performer in inferring personal details. Meta's Llama 2, Google's PaLM, and Anthropic's Claude were also tested and trailed close behind.

An example from the study showcased how the researcher's model deduced that a Reddit user hailed from Melbourne based on their use of the term "hook turn," a phrase commonly used in Melbourne to describe a traffic maneuver. This underscores how seemingly benign information can yield meaningful deductions for LLMs.
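To make the point concrete, here is a toy sketch (in Python, with a purely illustrative phrase table) of the kind of inference the study describes: mapping region-specific phrases to a likely location. The actual study used large language models rather than a lookup table, but the principle that benign cues yield deductions is the same.

    # Toy illustration: infer a likely location from region-specific phrases.
    REGIONAL_PHRASES = {
        "hook turn": "Melbourne, Australia",   # the cue cited in the study
        "bubbler": "Wisconsin, USA",           # illustrative additions
        "wee dram": "Scotland, UK",
    }

    def guess_location(post):
        text = post.lower()
        for phrase, place in REGIONAL_PHRASES.items():
            if phrase in text:
                return place
        return None

    print(guess_location("Waiting at the lights to make a hook turn on my commute."))
    # -> Melbourne, Australia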

There was a modest acknowledgment of privacy concerns: Google's PaLM declined to respond to about 10% of the researchers' privacy-invasive prompts. Other models exhibited similar behavior, though to a lesser extent.

Nonetheless, this response falls short of significantly alleviating concerns. Martin Vechev, a professor at ETH Zurich and a co-author of the study, noted, "It's not even clear how you fix this problem. This is very, very problematic."

As the use of LLM-powered chatbots becomes increasingly prevalent in daily life, privacy worries are not a risk that will dissipate with innovation alone. All users should be mindful that the threat of privacy-invasive chatbots is evolving from 'emerging' to 'very real'.

Earlier this year, a study demonstrated that AI could accurately decipher text with a 93% accuracy rate based on the sound of typing recorded over Zoom. This poses a challenge for entering sensitive data like passwords.

While this recent development is disconcerting, it is crucial for individuals to be informed so they can take proactive steps to protect their privacy. Being cautious about the information provided to chatbots and recognizing that it may not remain confidential can enable individuals to adjust their usage and safeguard their data.

Boosting Business Efficiency: OpenAI Launches ChatGPT for Enterprises

 


Known for its ChatGPT chatbot, OpenAI has announced the launch of ChatGPT Enterprise, its most powerful chatbot product for businesses. Introduced earlier this week, ChatGPT Enterprise is an AI assistant that gives businesses unlimited access to GPT-4 at faster speeds. 

It also offers extended context windows for dealing with long texts, encryption, secure and private data transmission with enterprise-level security and privacy, and management of group accounts. The enterprise version builds on the success of ChatGPT, the popular chatbot launched just nine months ago, and seeks to ease minds and expand capabilities. 

OpenAI is one of the top players in the race for artificial intelligence. Its AI-powered chatbot, ChatGPT, currently serves millions of users each day, providing them with advice and assistance in their personal and professional lives. After helping individuals for a considerable time, the company has now created a version for the business tier, dubbed ChatGPT Enterprise. 

In a recent statement from OpenAI, the company explained that the new ChatGPT model was created with high privacy, security, and functionality in mind. There has been an increase in the integration of the ChatGPT model into the personal and professional lives of users, according to the company. 

Businesses have been concerned about the privacy and security of ChatGPT because they fear the data they provide through ChatGPT might be used to train its algorithms and are concerned that using the tool may result in sensitive customer information being accidentally revealed to AI models. 

There has, however, been some concern about the control and ownership users will have over their data, with OpenAI clearly stating that none of this data will be used to train GPT models. To alleviate these concerns, OpenAI will offer a dedicated, private platform designed specifically for business use, which should address the worries raised previously. 

OpenAI, the company that created ChatGPT, has announced the business version of its machine-learning-powered chatbot in response to recent declines in users and concerns over possible harm caused by artificial intelligence. 

There is no doubt that ChatGPT is one of the most popular tools for companies nowadays, and ChatGPT Enterprise adds enhanced security and privacy, according to a blog post published by OpenAI on Monday.

As mentioned in a blog post that OpenAI provided a few weeks ago, ChatGPT Enterprise can perform the same tasks as ChatGPT, including writing emails, drafting essays, and debugging computer code, in addition to performing more complex tasks.

The new offering provides enhanced performance and customization options for ChatGPT, along with "enterprise-grade" privacy and data analysis capabilities. In terms of feature set, that puts ChatGPT Enterprise very close to Bing Chat Enterprise, Microsoft's recently launched chatbot service aimed at enterprise customers. 

Soon, ChatGPT Enterprise will provide even more advanced analytical features as well as options to customize ChatGPT's knowledge of company data. A version of ChatGPT Enterprise for smaller teams will also be available later, the company said. 

A company spokesperson said they are working on onboarding as many enterprises as possible over the next few weeks. OpenAI told The Verge that this is its first enterprise-oriented offering, which it will sell separately from ChatGPT and ChatGPT Plus, the subscription plan that offers faster access to the chatbot.  

In an email, the company said existing ChatGPT customers can keep their current way of accessing ChatGPT, but can switch to ChatGPT Enterprise if they want access to the new features. Because OpenAI's GPT-4 is widely offered as generative AI SaaS, many organizations have built generative AI tools on top of GPT-4 via its API or a cloud service rather than connecting to the model directly. 
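As a rough sketch of that pattern, the snippet below calls GPT-4 through the OpenAI Python SDK. The model name, prompts, and environment setup are illustrative, and the exact client interface depends on the SDK version in use.

    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

    # Send a chat request to GPT-4; the system and user prompts are made up for this example.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a support assistant for a retail business."},
            {"role": "user", "content": "Summarize our refund policy in two sentences."},
        ],
    )
    print(response.choices[0].message.content)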

To keep their data from being exposed to GPT's broader training dataset, some companies have begun building their own large language models, but this approach is difficult for smaller firms to implement.

OpenAI's GPTBot: A New Era of Web Crawling

OpenAI, the pioneering artificial intelligence research lab, is gearing up to launch a formidable new web crawler aimed at enhancing its data-gathering capabilities from the vast expanse of the internet. The announcement comes as part of OpenAI's ongoing efforts to bolster the prowess of its AI models, with potential applications spanning from information retrieval to knowledge synthesis. This move is poised to further establish OpenAI's dominance in the realm of AI-driven data aggregation.

Technology enthusiasts and members of the AI research community are equally interested in the upcoming release of OpenAI's web crawler. The program seems to be consistent with OpenAI's goal of expanding accessibility and AI capabilities. The new web crawler, internally known as 'GPTBot' or 'GPT-5,' is positioned to be a versatile data scraper made to rapidly navigate the complex web terrain, according to the official statement made by OpenAI.

The introduction of this advanced web crawler is expected to significantly amplify OpenAI's access to diverse and relevant data sources across the open web. As noted by OpenAI's spokesperson, "Our goal is to harness the power of GPTBot to empower our AI models with a deeper understanding of real-time information, ultimately enriching the user experience across various applications."

The online discussions on platforms like Hacker News have showcased a blend of excitement and curiosity surrounding OpenAI's latest venture. While some users have expressed eagerness to witness the potential capabilities of the new web crawler, others have posed questions about the technical nuances and ethical considerations associated with such technology. As one user on Hacker News pondered, "How will OpenAI strike a balance between data acquisition and respecting the privacy of individuals and entities?"
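One concrete answer already exists: OpenAI has documented that GPTBot identifies itself with its own user-agent string and honors robots.txt, so site owners who do not want their pages crawled can disallow it. The sketch below, using Python's standard-library robotparser, checks whether a site permits GPTBot; the URL is a placeholder.

    from urllib import robotparser

    # Site owners who want to opt out of GPTBot crawling typically add to robots.txt:
    #   User-agent: GPTBot
    #   Disallow: /
    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")  # placeholder site
    rp.read()

    # True if the site's robots.txt allows the GPTBot user agent to fetch this page.
    print(rp.can_fetch("GPTBot", "https://example.com/some-article"))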

OpenAI's strides in AI research have consistently been marked by innovation, and this new web crawler venture seems to be no exception. With its proven track record of developing groundbreaking AI models like GPT-3, OpenAI is well-positioned to harness the full potential of GPTBot. As the boundaries of AI capabilities continue to expand, the success of this endeavor could further solidify OpenAI's standing as a trailblazer in the AI landscape.

OpenAI's upcoming web crawler launch underscores its commitment to advancing AI capabilities and data acquisition techniques. The integration of GPTBot into OpenAI's framework has the potential to revolutionize data scraping and synthesis, making it a pivotal tool in various AI applications. 

Canadian Cybersecurity Head Warns of Surging AI-Powered Hacking and Disinformation

 

Sami Khoury, the Head of the Canadian Centre for Cyber Security, has issued a warning about the alarming use of Artificial Intelligence (AI) by hackers and propagandists. 

According to Khoury, AI is now being utilized to create malicious software, sophisticated phishing emails, and spread disinformation online. This concerning development highlights how rogue actors are exploiting emerging technology to advance their cybercriminal activities.

Various cyber watchdog groups share these concerns. Reports have pointed out the potential risks associated with the rapid advancements in AI, particularly concerning Large Language Models (LLMs), like OpenAI's ChatGPT. LLMs can fabricate realistic-sounding dialogue and documents, making it possible for cybercriminals to impersonate organizations or individuals and pose new cyber threats.

Cybersecurity experts are deeply worried about AI's dark underbelly and its potential to facilitate insidious phishing attempts, propagate misinformation and disinformation, and engineer malevolent code for sophisticated cyber attacks. The use of AI for malicious purposes is already becoming a reality, as suspected AI-generated content starts emerging in real-world contexts.

A former hacker's revelation of an LLM trained on malevolent material and employed to craft a highly persuasive email soliciting urgent cash transfer underscored the evolving capabilities of AI models in cybercrime. While the employment of AI for crafting malicious code is still relatively new, the fast-paced evolution of AI technology poses challenges in monitoring its full potential for malevolence.

As the cyber community grapples with uncertainties surrounding AI's sinister applications, urgent concerns arise about the trajectory of AI-powered cyber-attacks and the profound threats they may pose to cybersecurity. Addressing these challenges becomes increasingly pressing as AI-powered cybercrime evolves alongside AI technology.

The emergence of AI-powered cyber-attacks has alarmed cybersecurity experts. The rapid evolution of AI models raises fears of unknown threats on the horizon. The ability of AI to create convincing phishing emails and sophisticated misinformation presents significant challenges for cyber defense.

The cybersecurity landscape has become a battleground in an ongoing AI arms race, as cybercriminals continue to leverage AI for malicious activities. Researchers and cybersecurity professionals must stay ahead of these developments, creating effective countermeasures to safeguard against the potential consequences of AI-driven hacking and disinformation campaigns.

Increasing Threat of Generative AI Technology


Imagine a drastic surge in advanced persistent threats (APTs), malware attacks, and organizational data breaches. Now imagine an investigation revealing that these attacks were developed by threat actors with access to generative AI.

This raises a question: who is to blame? The cybercriminals? The generative AI bots? The firms that develop these bots? Or perhaps the governments that fail to come up with proper regulation and accountability? 

Generative AI Technology

Generative AI is a form of artificial intelligence that helps users generate text, images, sound, and other content from inputs or instructions given in natural language.

AI bots such as ChatGPT, Google Bard, and Perplexity are available to any online user who wishes to chat, generate human-like text and scripts, or even write complex code. 

However, one problem these AI bots have in common is their ability to produce offensive or harmful content based on user input, which may violate ethical standards, inflict harm, or even be illegal.

This is why chatbots ship with built-in security mechanisms and content filters that restrict harmful or malicious output. But how effective are these content-monitoring measures, and how well do they hold up as a cyber defense? The most recent chatbots are reportedly being used by hackers to develop and distribute malware. These chatbots can be "tricked" into creating spam and phishing emails, and they have even assisted bad actors in creating programs that bypass security safeguards and damage computer networks.
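As a simplified illustration of what such a filter might look like, here is a toy keyword-based output check in Python. Real platforms rely on far more sophisticated, model-based moderation; the blocked terms below are made up for the example.

    # Toy output filter: withhold responses containing obviously malicious phrases.
    BLOCKED_TERMS = {"phishing kit", "ransomware builder", "credential stealer"}  # illustrative only

    def filter_output(generated_text):
        lowered = generated_text.lower()
        if any(term in lowered for term in BLOCKED_TERMS):
            return "[response withheld: content violates usage policy]"
        return generated_text

    print(filter_output("Here is a harmless poem about autumn leaves."))
    print(filter_output("Step 1: deploy the phishing kit to a lookalike domain..."))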

Bypassing Chatbot Security Filters

To better understand the problem, researchers investigated the malicious content-generation capabilities of chatbots and identified a few techniques fraudsters use to get past chatbot security measures. For instance: 

  • The chatbot can generate practically anything imaginable if a user jailbreaks it and makes it stay in character. As an illustration, some manipulators have developed prompts that turn the chatbot into a fictional character, such as Yes Man or DAN (Do Anything Now), which deceive the chatbot into thinking that it is exempt from following laws, moral principles, or other obligations.
  • Developing a fictional environment can also prompt the chatbot into behaving as if it is part of a film, series, or book, or a game player assigned a mission to complete or a conversation to follow. In this situation, the chatbot provides all the content it won't give otherwise. It can be tricked sometimes by character role play that uses words like "for educational purposes" or "for research and betterment of society" to bypass the filter. 
  • Another tactic used by threat actors is ‘reverse psychology,’ through which they persuade chatbots into revealing information that they would otherwise withhold due to community guidelines.

There are innumerable other ways these chatbots might be used to launch destructive cyberattacks; these methods for getting around ethical and social standards are simply the tip of the iceberg. Modern chatbots are AI-based systems trained on knowledge of the world as it exists today, so they are aware of weaknesses and how to exploit them. It is therefore high time that online users and AI developers seek innovative ways to ensure safety and mitigate consequences that could otherwise make cyberspace far more destructive.  

A ChatGPT Bug Exposes Sensitive User Data

OpenAI's ChatGPT, an artificial intelligence (AI) language model that can produce text that resembles human speech, has a security flaw. The flaw enabled the model to unintentionally expose private user information, endangering the privacy of several users. This event serves as a reminder of the value of cybersecurity and the necessity for businesses to protect customer data in a proactive manner.

According to a report by Tech Monitor, the ChatGPT bug "allowed researchers to extract personal data from users, including email addresses and phone numbers, as well as reveal the model's training data." This means that not only was users' personal information exposed, but so was sensitive data used to train the AI model. As a result, the incident raises concerns about the potential misuse of the leaked information.

The ChatGPT bug not only affects individual users but also has wider implications for organizations that rely on AI technology. As noted in a report by India Times, "the breach not only exposes the lack of security protocols at OpenAI, but it also brings forth the question of how safe AI-powered systems are for businesses and consumers."

Furthermore, the incident highlights the importance of adhering to regulations such as the General Data Protection Regulation (GDPR), which aims to protect individuals' personal data in the European Union. By exposing personal data without proper consent, the bug raises clear GDPR compliance concerns.

OpenAI has taken swift action to address the issue, stating that they have fixed the bug and implemented measures to prevent similar incidents in the future. However, the incident serves as a warning to businesses and individuals alike to prioritize cybersecurity measures and to be aware of potential vulnerabilities in AI systems.

As stated by Cyber Security Connect, "ChatGPT may have just blurted out your darkest secrets," emphasizing the need for constant vigilance and proactive measures to safeguard sensitive information. This includes regular updates and patches to address security flaws, as well as utilizing encryption and other security measures to protect data.

The ChatGPT bug highlights the need for ongoing vigilance and preventative measures to protect private data in the era of advanced technology. Prioritizing cybersecurity and staying informed of vulnerabilities is crucial for a safer digital environment as AI systems continue to evolve and play a prominent role in various industries.




Google Bard: How to use this AI Chatbot Service?

 

Google Bard is a new chatbot tool developed in response to competitor artificial intelligence (AI) tools such as ChatGPT. It is intended to simulate human conversations and employs a combination of natural language processing and machine learning to provide realistic and helpful responses to questions you may pose. 
Such tools could be especially useful for smaller businesses that want to provide natural language support to their customers without hiring large teams of support personnel, or for supplementing Google's own search tools. Bard can be integrated into websites, messaging platforms, desktop and mobile applications, and a variety of digital systems. At least, it eventually will be: outside of a limited beta test, it is not yet widely available.

Google Bard is the company's answer to ChatGPT. It's an AI chatbot that performs many of the same functions as ChatGPT. Still, it's intended to eventually supplement Google's own search tools (much like Bing is doing with ChatGPT) as well as provide automated support and human-like interaction for businesses.

It's been in the works for a while and employs LaMDA (Language Model for Dialogue Applications) technology. It is based on Google's Transformer neural network architecture, which has also served as the foundation for other AI generative tools such as ChatGPT's GPT-3 language model.

What was the error in Google Bard's demo answer?

Google Bard, which was unveiled for the first time on February 6, 2023, got off to a rocky start when it made a mistake in answering a question about the James Webb Space Telescope's recent discoveries. Bard claimed the telescope took the first picture of an exoplanet outside our solar system, but that milestone had been achieved many years earlier. The fact that Google Bard presented this incorrect information with such confidence drew harsh criticism and parallels with some of ChatGPT's flaws. In response, Google's stock price dropped several points.

At the time of writing, Google Bard was only available to a small group of beta testers, but a wider launch is anticipated in the coming weeks and months. Following the success of ChatGPT, CEO Sundar Pichai accelerated the development of Google Bard in late 2022, and the continued positive press coverage ChatGPT has received in 2023 is only likely to have reinforced that push.

For the time being, if you are not one of the coveted Bard beta testers, you'll have to play the waiting game until we hear more from Google.

Google Bard and ChatGPT

Google Bard and ChatGPT both create chatbots using natural language models and machine learning, but each has a unique set of features. At the time of writing, ChatGPT is based on data mostly collected up until 2021, whereas Google Bard can draw on up-to-date information for its responses. ChatGPT focuses on conversational questions and answers, and it is now also being used in Bing's search results to answer more conversational searches. Google Bard will be used in the same way, but only to supplement Google.

The two chatbots use slightly different language models. Google Bard is built on LaMDA, whereas ChatGPT is built on GPT (Generative Pre-trained Transformer). ChatGPT also has a plagiarism detector, which Google Bard currently does not, as far as we know.

ChatGPT is also freely available to try out, whereas Google Bard is only available to beta testers.

Google Bard is already accessible to a select group of Google beta testers in a limited form, and there is no set timetable for its wider rollout. However, Google CEO Sundar Pichai stated in his address on the launch of Google Bard that we would soon see Bard leveraged to enhance Google Search, so it may become more widely available in the coming weeks.

ChatGPT: A Potential Risk to Data Privacy


ChatGPT, within two months of its release, seems to have taken the world by storm. The consumer application reached 100 million active users, making it the fastest-growing product ever. Users are intrigued by the tool's sophisticated capabilities, though apprehensive about its potential to upend numerous industries. 

One of the less discussed consequences of ChatGPT is the privacy risk it poses. Google only yesterday launched Bard, its own conversational AI, and others will undoubtedly follow. Technology firms engaged in AI development have certainly entered a race. 

The issue is that the underlying technology is built, in large part, on our personal data. 

300 Billion Words, How Many Are Yours? 

ChatGPT is underpinned by a large language model that requires an enormous amount of data to operate and improve. The more data the model is trained on, the better it gets at detecting patterns, anticipating what will come next, and generating plausible text. 

OpenAI, the developer of ChatGPT, fed the model some 300 billion words systematically scraped from the internet, via books, articles, websites, and posts, which inevitably includes online users' personal information, gathered without their consent. 

If you have ever written a blog post, product review, or comment on an article, there is a good chance the information it contained was consumed by ChatGPT. 

What is the Issue? 

The gathered data used in order to train ChatGPT is problematic for numerous reasons. 

First, the data was collected without consent; none of the online users were ever asked whether OpenAI could use their information. This would be a clear violation of privacy, especially when the data is sensitive and can be used to locate us, identify our loved ones, or identify ourselves. 

Even when data is publicly available, its use can compromise what is known as contextual integrity, a cornerstone idea in discussions about privacy law: information about people should not be revealed outside of the context in which it was originally produced. 

Moreover, OpenAI offers no procedure for individuals to check whether the company stores their personal information, or to request that it be deleted. It is still being debated whether ChatGPT complies with the criteria of the European General Data Protection Regulation (GDPR), which guarantees this right. 

This "right to be forgotten" is particularly important in situations involving information that is inaccurate or misleading, which seems to be a regular occurrence with ChatGPT. 

Furthermore, the scraped data that ChatGPT was trained on may be confidential or protected by copyright. For instance, the tool replicated the opening few chapters of Joseph Heller's copyrighted book Catch-22. 

Finally, OpenAI did not pay for the internet data it downloaded. Its creators—individuals, website owners, and businesses—were not being compensated. This is especially remarkable in light of the recent US$29 billion valuation of OpenAI, which is more than double its value in 2021. 

OpenAI has also recently announced ChatGPT Plus, a paid subscription plan that gives users ongoing access to the tool, faster response times, and priority access to new features. This approach is anticipated to help generate $1 billion in revenue by 2024. 

None of this would have been possible without the usage of ‘our’ data, acquired and utilized without our consent. 

Time to Consider the Issue? 

According to some professionals and experts, ChatGPT is a "tipping point for AI": the realization of technological advances that could revolutionize the way we work, learn, write, and even think. 

Despite its potential advantages, we must keep in mind that OpenAI is a private, for-profit organization whose objectives and business demands may not always coincide with the needs of the larger community. 

The privacy hazards associated with ChatGPT should serve as a caution. And as users of an increasing number of AI technologies, we need to exercise extreme caution when deciding what data to provide such tools with.  

ChatGPT's Effective Corporate Usage Might Eliminate Systemic Challenges

 

Today's AI is highly developed. Artificial intelligence combines disciplines that attempt, in essence, to duplicate the human brain's capacity to learn from experience and make judgments based on that experience. Researchers use a variety of tactics to do this. In one paradigm, brute force is used: the computer system cycles through all possible solutions to a problem until it finds the one that can be verified as correct, as in the small sketch below.
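A minimal sketch of that brute-force paradigm, using a made-up puzzle: enumerate every candidate answer and stop at the first one the checker verifies.

    # Find the integer x (0 <= x < 10,000) with x * (x + 1) == 7140 by exhaustive search.
    def is_correct(candidate):
        return candidate * (candidate + 1) == 7140

    def brute_force_solve():
        for candidate in range(10_000):  # cycle through all possible solutions
            if is_correct(candidate):
                return candidate
        return None

    print(brute_force_solve())  # 84, since 84 * 85 == 7140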

"ChatGPT is really restricted, but good enough at some things to provide a misleading image of brilliance. It's a mistake to be depending on it for anything essential right now," said OpenAI CEO Sam Altman when the software was first launched on November 30. 

According to Nicola Morini Bianzino, global chief technology officer at EY, there is presently no killer use case for ChatGPT in industry that will significantly affect both the top and bottom lines. Bianzino projected an explosion of experimentation over the next six to twelve months, particularly once businesses are able to build on top of ChatGPT using OpenAI's API.

OpenAI CEO Sam Altman has, meanwhile, acknowledged that ChatGPT and other generative AI technologies face several challenges, ranging from possible ethical implications to accuracy problems.

According to Bianzino, this vision of generative AI's future will have a big impact on enterprise software, since companies will have to start considering novel ways to organize data inside an enterprise that go beyond conventional analytics tools. The ways people access and use information inside the company will change as ChatGPT and comparable tools advance and become more capable of being trained on an enterprise's data in a secure manner.

As per Bianzino, the creation of text and documentation will also require training and alignment to the appropriate ontology of the particular organization, as well as containment, storage, and control inside the enterprise. He stated that business executives, including the CTO and CIO, must be aware of these trends because, unlike quantum computing, which may not even be realized for another 10 to 15 years, the actual potential of generative AI may be realized within the next six to twelve months.

Decentralized peer-to-peer technology, combined with blockchain and smart contract capabilities, overcomes the traditional challenges of privacy, traceability, trust, and security. By doing this, data owners can share insights from data without having to relocate it or otherwise give up ownership of it.



How Hackers Can Exploit ChatGPT, a Viral AI Chatbot


Cybernews researchers have discovered that ChatGPT, the recently launched AI-based chatbot that has caught the attention of the online community, can provide hackers with step-by-step instructions on how to hack a website. 

The team at Cybernews has warned that AI chatbots may be fun to play with, but they are also dangerous, as they are able to give detailed information on how to exploit vulnerabilities. 

What is ChatGPT?

AI has stirred the imaginations of leaders in the tech industry and pop culture for decades. Machine learning technology can automatically create text, photos, videos, and other media, and such tools are flourishing in the tech sphere as investors pour billions of dollars into the field. 

While AI has enabled endless opportunities to help humans, the experts warn about the potential dangers of making an algorithm that will outperform human capabilities and can get out of control. 

We are not talking about apocalyptic scenarios of AI taking over the planet. However, AI has already started helping threat actors with malicious activities.

ChatGPT is the latest innovation in AI, made by the research company OpenAI, which is led by Sam Altman and backed by Microsoft, LinkedIn co-founder Reid Hoffman, Elon Musk, and Khosla Ventures. 

The AI chatbot can hold conversations with people while imitating various writing styles. The text produced by ChatGPT is far more imaginative and complex than that of earlier chatbots built in Silicon Valley. ChatGPT was trained using large amounts of text data from the web, Wikipedia, and archived books. 

Popularity of ChatGPT

Within five days of the ChatGPT launch, over one million people had signed up to test the technology. Social media was flooded with users' queries and the AI's answers: writing poems, copywriting, plotting movies, giving tips on weight loss or healthy relationships, creative brainstorming, studying, and even programming. 

According to OpenAI, ChatGPT models can answer follow-up questions, challenge incorrect premises, reject inappropriate queries, and admit their own mistakes. 

ChatGPT for hacking

According to cybernews, the research team tried "using ChatGPT to help them find a website's vulnerabilities. Researchers asked questions and followed the guidance of AI, trying to check if the chatbot could provide a step-by-step guide on exploiting."

"The researchers used the 'Hack the Box' cybersecurity training platform for their experiment. The platform provides a virtual training environment and is widely used by cybersecurity specialists, students, and companies to improve hacking skills."

"The team approached ChatGPT by explaining that they were doing a penetration testing challenge. Penetration testing (pen test) is a method used to replicate a hack by deploying different tools and strategies. The discovered vulnerabilities can help organizations strengthen the security of their systems."

Potential threats of ChatGPT and AI

Experts believe that AI-based vulnerability scanners used by cybercriminals could wreak havoc on internet security. However, the Cybernews team also sees the potential of AI in cybersecurity. 

Researchers can use insights from AI to prevent data leaks. AI can also help developers in monitoring and testing implementation more efficiently.

AI keeps learning, picking up new techniques of advanced technology and exploitation, and it can serve as a handbook for penetration testers, offering sample payloads that fit their needs. 

“Even though we tried ChatGPT against a relatively uncomplicated penetration testing task, it does show the potential for guiding more people on how to discover vulnerabilities that could, later on, be exploited by more individuals, and that widens the threat landscape considerably. The rules of the game have changed, so businesses and governments must adapt to it," said Mantas Sasnauskas, head of the research team. 





Does ChatGPT Bot Empower Cyber Crime?

Security experts have cautioned that ChatGPT, a new AI bot launched last month by the artificial intelligence R&D company OpenAI, may be employed by cybercriminals to learn how to plan attacks and even develop ransomware. 

When the ChatGPT application first opened to users, computer security expert Brendan Dolan-Gavitt wondered whether he could get the AI-powered chatbot to write malicious code, so he gave the program a basic capture-the-flag task to complete.

The code featured a buffer overflow vulnerability, which ChatGPT correctly identified, and it produced a piece of code to exploit it. The program would have solved the task flawlessly if not for a small error: it got the number of characters in the input wrong. 

The fact that ChatGPT failed Dolan-Gavitt's task, which he would have given students at the start of a vulnerability analysis course, does not inspire trust in large language models' capacity to generate high-quality code. However, after Dolan-Gavitt pointed out the mistake and asked the model to review its response, ChatGPT got it right. 

Security researchers have used ChatGPT to rapidly complete a variety of offensive and defensive cybersecurity tasks, including writing refined or persuasive phishing emails, creating workable YARA rules, identifying buffer overflows in code, generating evasion code that attackers could use to avoid threat detection, and even writing malware. 

Dr. Suleyman Ozarslan, a security researcher and co-founder of Picus Security, claimed that he was able to prompt the chatbot to carry out a variety of offensive and defensive cybersecurity tasks, such as writing a World Cup-themed email in perfect English and generating both evasion code that can get around detection rules and Sigma detection rules for spotting cybersecurity anomalies. 

ChatGPT is built on reinforcement learning from human feedback, so it acquires new knowledge through interaction with people and user-generated prompts. This also implies that, over time, the program might pick up some of the tricks researchers have used to get around its ethical checks, whether through user input or through improvements made by its administrators. 

Multiple researchers have discovered techniques to get around ChatGPT's safeguards, which are meant to stop it from doing malicious things such as providing instructions on how to make a bomb or writing malicious code. For now, ChatGPT's code is far from optimal and demonstrates many of the drawbacks of relying solely on AI tools. However, as these models grow in complexity, they are likely to play a larger role in the creation of malicious software. 

Facebook Users Phished by a Chatbot Campaign


You might be surprised to learn that more users check their chat apps than their social profiles. With more than 1.3 billion users, Facebook Messenger is one of the most popular mobile messaging services in the world and thus presents enormous commercial opportunities to marketers.

Cybersecurity firm Trustwave, through its SpiderLabs team, has discovered a fresh phishing campaign that abuses Messenger's chatbot software.

How does the attack work? 

Karl Sigler, senior security research manager at Trustwave SpiderLabs, explains: "You don't just click on a link and then be offered to download an app - most people are going to grasp that's an attack and not click on it. In this attack, there's a link that takes you to a channel that looks like tech help, asking for information you'd expect tech support to seek for, and that escalating of the social-engineering part is unique with these types of operations."

First, a fake email purporting to be from Facebook is sent to the victim, warning that their page has violated the site's community standards and will be deleted within 48 hours. The email also includes an "Appeal Now" link that the victim can use to challenge the decision.

Clicking that link opens a Messenger conversation with a chatbot posing as a member of the Facebook support staff, which presents the victim with another "Appeal Now" button. Users who click this second button are directed to a Google Firebase-hosted website that opens in a new tab.

According to Trustwave's analysis, Firebase is a software development platform that gives developers a range of tools to build, improve, and grow their apps, and makes it easy to set up and deploy websites. Taking advantage of this, the spammers created a Firebase-hosted site impersonating a Facebook "Support Inbox" where victims can supposedly dispute the reported deletion of their page. 

Increasing Authenticity in Cybercrime 

One of the factors behind this campaign's effectiveness is that chatbots are now a routine part of modern marketing and live support, so people are not inclined to be suspicious of their content, especially when it appears to come from a fairly reliable source. 

According to Sigler, "the campaign employs the genuine Facebook chat function. It reads 'Page Support,' and they have provided me a case number. And that's likely enough to get past the defenses many individuals rely on when trying to spot phishing red flags."

Attacks like this, Sigler warns, can be highly risky for owners of business pages. He notes that "this may be very effectively utilized in a targeted-type of attack." With Facebook login credentials and phone numbers, hackers can do a lot of harm to business users, Sigler adds.

As per Sigler, "If the person in charge of your social media falls for this type of scam, suddenly, your entire business page may be vandalized, or they might exploit entry to that business page to acquire access to your clients directly utilizing the credibility of that Facebook profile." They will undoubtedly pursue more network access and data as well. 

Red flags to look out for 

Fortunately, the email's content contains a few warning signs that should help recipients recognize the message as spoofed. For instance, the text contains several grammatical and spelling errors, and the recipient is addressed as "Policy Issues," which is not how Facebook handles such cases.

The experts spotted more red flags: the chatbot's page used the handle @case932571902, which is clearly not an official Facebook handle. The page was also barren, with neither followers nor posts. Despite appearing inactive, it displayed the 'Very Responsive' badge, which Facebook defines as having a response rate of at least 90% and replying within 15 minutes. To look legitimate, it even used the Messenger logo as its profile image. 

Researchers claim that the attackers are requesting passwords, email addresses, cell phone numbers, first and last names, and page names. 

This campaign is a skillful example of social engineering, since the malicious actors abuse the very platform they are impersonating. Researchers nevertheless urge everyone to exercise caution online and to avoid responding to suspicious messages; protecting your credentials with strong, unique passwords adds a further layer of defense.

Phishing Scam Adds a Chatbot-Like Twist to Steal Data


According to research published Thursday by Trustwave's SpiderLabs team, a newly uncovered phishing campaign aims to reassure potential victims that submitting credit card details and other personal information is safe. 

As per the research, instead of simply embedding an information-stealing link directly in an email or attached document, the operation involves a "chatbot-like" page that tries to engage the victim and build their confidence. 

Researcher Adrian Perez stated, “We say ‘chatbot-like’ because it is not an actual chatbot. The application already has predefined responses based on the limited options given.” 

The phoney bot's scripted responses lead the potential victim through a number of steps that include a fake CAPTCHA, a delivery-service login page, and finally a page that harvests credit card information. Some of the elements in the process, like the bogus chatbot itself, aren't very sophisticated; according to SpiderLabs, the CAPTCHA is nothing more than a JPEG file. However, a few things do happen in the background on the credit card page. 

“The credit card page has some input validation methods. One is card number validation, wherein it tries to not only check the validity of the card number but also determine the type of card the victim has inputted,” Perez stated.
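
Card-number checks of this kind usually combine a Luhn checksum with a lookup of the leading digits. The sketch below is a hypothetical illustration of that general technique written in C; the function names and prefix rules are assumptions for the example and are not taken from the phishing kit Trustwave analyzed.

/* Hypothetical illustration of basic card-number validation; not code    */
/* from the phishing kit Trustwave analyzed.                               */
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Luhn checksum: returns 1 if the digit string passes, 0 otherwise. */
static int luhn_valid(const char *num) {
    int sum = 0, alternate = 0;
    size_t len = strlen(num);
    for (size_t i = len; i > 0; i--) {
        if (!isdigit((unsigned char)num[i - 1]))
            return 0;
        int digit = num[i - 1] - '0';
        if (alternate) {            /* double every second digit from the right */
            digit *= 2;
            if (digit > 9)
                digit -= 9;
        }
        sum += digit;
        alternate = !alternate;
    }
    return len > 0 && sum % 10 == 0;
}

/* Rough guess at the card brand from the leading digits (illustrative only). */
static const char *card_type(const char *num) {
    if (num[0] == '4')
        return "Visa";
    if (num[0] == '5' && num[1] >= '1' && num[1] <= '5')
        return "Mastercard";
    if (num[0] == '3' && (num[1] == '4' || num[1] == '7'))
        return "American Express";
    return "Unknown";
}

int main(void) {
    const char *sample = "4111111111111111";   /* a well-known test number */
    printf("%s -> %s, Luhn check %s\n", sample, card_type(sample),
           luhn_valid(sample) ? "passed" : "failed");
    return 0;
}

A check like this lets the fake form reject obvious typos and label the stolen number by brand before it is ever sent to the attackers' server, which makes the page feel more like a legitimate payment screen.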

The campaign was identified in late March, according to the company, and it was still operating as of Thursday morning. The SpiderLabs report is only the latest example of fraudsters' ingenuity when it comes to credit card data. In April, Trend Micro researchers warned that fraudsters were using phoney "security alerts" from well-known banks in phishing scams. 

Last year, discussions on dark web forums about using phishing attacks to capture credit card information grew, according to Gemini Advisory's annual report. Another prevalent approach is stealing card data directly from shopping websites. Researchers at RiskIQ said this week that they have noticed a "constant uptick" in skimming activity recently, although not all of it is linked to known Magecart malware users.