Search This Blog

Powered by Blogger.

Blog Archive

Labels

Showing posts with label ChatGPT. OpenAI. Show all posts

OpenAI’s Disruption of Foreign Influence Campaigns Using AI

 

Over the past year, OpenAI has successfully disrupted over 20 operations by foreign actors attempting to misuse its AI technologies, such as ChatGPT, to influence global political sentiments and interfere with elections, including in the U.S. These actors utilized AI for tasks like generating fake social media content, articles, and malware scripts. Despite the rise in malicious attempts, OpenAI’s tools have not yet led to any significant breakthroughs in these efforts, according to Ben Nimmo, a principal investigator at OpenAI. 

The company emphasizes that while foreign actors continue to experiment, AI has not substantially altered the landscape of online influence operations or the creation of malware. OpenAI’s latest report highlights the involvement of countries like China, Russia, Iran, and others in these activities, with some not directly tied to government actors. Past findings from OpenAI include reports of Russia and Iran trying to leverage generative AI to influence American voters. More recently, Iranian actors in August 2024 attempted to use OpenAI tools to generate social media comments and articles about divisive topics such as the Gaza conflict and Venezuelan politics. 

A particularly bold attack involved a Chinese-linked network using OpenAI tools to generate spearphishing emails, targeting OpenAI employees. The attack aimed to plant malware through a malicious file disguised as a support request. Another group of actors, using similar infrastructure, utilized ChatGPT to answer scripting queries, search for software vulnerabilities, and identify ways to exploit government and corporate systems. The report also documents efforts by Iran-linked groups like CyberAveng3rs, who used ChatGPT to refine malicious scripts targeting critical infrastructure. These activities align with statements from U.S. intelligence officials regarding AI’s use by foreign actors ahead of the 2024 U.S. elections. 

However, these nations are still facing challenges in developing sophisticated AI models, as many commercial AI tools now include safeguards against malicious use. While AI has enhanced the speed and credibility of synthetic content generation, it has not yet revolutionized global disinformation efforts. OpenAI has invested in improving its threat detection capabilities, developing AI-powered tools that have significantly reduced the time needed for threat analysis. The company’s position at the intersection of various stages in influence operations allows it to gain unique insights and complement the work of other service providers, helping to counter the spread of online threats.

ChatGPT Vulnerability Exploited: Hacker Demonstrates Data Theft via ‘SpAIware

 

A recent cyber vulnerability in ChatGPT’s long-term memory feature was exposed, showing how hackers could use this AI tool to steal user data. Security researcher Johann Rehberger demonstrated this issue through a concept he named “SpAIware,” which exploited a weakness in ChatGPT’s macOS app, allowing it to act as spyware. ChatGPT initially only stored memory within an active conversation session, resetting once the chat ended. This limited the potential for hackers to exploit data, as the information wasn’t saved long-term. 

However, earlier this year, OpenAI introduced a new feature allowing ChatGPT to retain memory between different conversations. This update, meant to personalize the user experience, also created an unexpected opportunity for cybercriminals to manipulate the chatbot’s memory retention. Rehberger identified that through prompt injection, hackers could insert malicious commands into ChatGPT’s memory. This allowed the chatbot to continuously send a user’s conversation history to a remote server, even across different sessions. 

Once a hacker successfully inserted this prompt into ChatGPT’s long-term memory, the user’s data would be collected each time they interacted with the AI tool. This makes the attack particularly dangerous, as most users wouldn’t notice anything suspicious while their information is being stolen in the background. What makes this attack even more alarming is that the hacker doesn’t require direct access to a user’s device to initiate the injection. The payload could be embedded within a website or image, and all it would take is for the user to interact with this media and prompt ChatGPT to engage with it. 

For instance, if a user asked ChatGPT to scan a malicious website, the hidden command would be stored in ChatGPT’s memory, enabling the hacker to exfiltrate data whenever the AI was used in the future. Interestingly, this exploit appears to be limited to the macOS app, and it doesn’t work on ChatGPT’s web version. When Rehberger first reported his discovery, OpenAI dismissed the issue as a “safety” concern rather than a security threat. However, once he built a proof-of-concept demonstrating the vulnerability, OpenAI took action, issuing a partial fix. This update prevents ChatGPT from sending data to remote servers, which mitigates some of the risks. 

However, the bot still accepts prompts from untrusted sources, meaning hackers can still manipulate the AI’s long-term memory. The implications of this exploit are significant, especially for users who rely on ChatGPT for handling sensitive data or important business tasks. It’s crucial that users remain vigilant and cautious, as these prompt injections could lead to severe privacy breaches. For example, any saved conversations containing confidential information could be accessed by cybercriminals, potentially resulting in financial loss, identity theft, or data leaks. To protect against such vulnerabilities, users should regularly review ChatGPT’s memory settings, checking for any unfamiliar entries or prompts. 

As demonstrated in Rehberger’s video, users can manually delete suspicious entries, ensuring that the AI’s long-term memory doesn’t retain harmful data. Additionally, it’s essential to be cautious about the sources from which they ask ChatGPT to retrieve information, avoiding untrusted websites or files that could contain hidden commands. While OpenAI is expected to continue addressing these security issues, this incident serves as a reminder that even advanced AI tools like ChatGPT are not immune to cyber threats. As AI technology continues to evolve, so do the tactics used by hackers to exploit these systems. Staying informed, vigilant, and cautious while using AI tools is key to minimizing potential risks.

Bill Gates' AI Vision: Revolutionizing Daily Life in 5 Years

Bill Gates recently made a number of bold predictions about how artificial intelligence (AI) will change our lives in the next five years. These forecasts include four revolutionary ways that AI will change our lives. The tech billionaire highlights the significant influence artificial intelligence (AI) will have on many facets of our everyday lives and believes that these developments will completely transform the way humans interact with computers.

Gates envisions a future where AI becomes an integral part of our lives, changing the way we use computers fundamentally. According to him, AI will play a pivotal role in transforming the traditional computer interface. Instead of relying on conventional methods such as keyboards and mice, Gates predicts that AI will become the new interface, making interactions more intuitive and human-centric.

One of the key aspects highlighted by Gates is the widespread integration of AI-powered personal assistants into our daily routines. Gates suggests that every internet user will soon have access to an advanced personal assistant, driven by AI. This assistant is expected to streamline tasks, enhance productivity, and provide a more personalized experience tailored to individual needs.

Furthermore, Gates emphasizes the importance of developing humane AI. In collaboration with Humanes AI, a prominent player in ethical AI practices, Gates envisions AI systems that prioritize ethical considerations and respect human values. This approach aims to ensure that as AI becomes more prevalent, it does so in a way that is considerate of human concerns and values.

The transformative power of AI is not limited to personal assistants and interfaces. Gates also predicts a significant shift in healthcare, with AI playing a crucial role in early detection and personalized treatment plans. The ability of AI to analyze vast datasets quickly could revolutionize the medical field, leading to more accurate diagnoses and tailored healthcare solutions.

Bill Gates envisions a world in which artificial intelligence (AI) is smoothly incorporated into daily life, providing previously unheard-of conveniences and efficiencies, as we look to the future. These forecasts open up fascinating new possibilities, but they also bring up crucial questions about the moral ramifications of broad AI use. Gates' observations provide a fascinating look at the possible changes society may experience over the next five years as it rapidly moves toward an AI-driven future.


From Text to Multisensory AI: ChatGPT's Latest Evolution

 


The OpenAI generative AI bot, ChatGPT, has been updated to enable it to take on a whole new level of capabilities. Artificial intelligence (AI) is a fast, dynamic field that is constantly evolving and moving forward.  

A newly launched generative AI-powered chatbot, ChatGPT, from OpenAI, an AI startup backed by Microsoft, has been expanded with new features on Monday. It is now possible to ask ChatGPT questions in five different voices, and you can submit images for ChatGPT's response, as you can now ask ChatGPT questions based on the images you have uploaded.

By doing so, users will be able to ask ChatGPT in five different voices. Open AI shared a video on the X (formerly Twitter) social network showing how the feature works when it was announced that ChatGPT could now see, hear, and speak. This was announced as a post on the X (formerly Twitter). 

According to the note that was attached to the video, ChatGPT is now capable of watching, hearing, and speaking. As a result, users will soon be able to engage in voice conversations using ChatGPT (iOS & Android), as well as include images in the conversation (all platforms) over the next two weeks. 

A major update to ChatGPT is the introduction of an image analysis and response function. As an example, if you upload a picture of your bike, for example, and send it to the site, you'll receive instructions on how to lower the seat, or if you upload a picture of your refrigerator, you'll get some ideas for recipes based on its contents. 

The second feature of ChatGPT is that it allows users to interact with it in a synthetic voice, which is similar to how you'd interact with Google Now or Siri. The threads you ask ChatGPT are answered based on customized A.I. algorithms. 

A multimodal artificial intelligence system can handle any text, picture, video, or other form of information that a user chooses to throw at it. This feature is part of a trend across the entire industry toward so-called multimodal artificial intelligence systems. 

Researchers believe that the ultimate goal is the development of an artificial intelligence capable of processing information in the same way as a human does. In addition to answering users' questions in a variety of voices, ChatGPT will also be able to provide feedback in a variety of languages, based on their personal preferences. 

To create each voice, OpenAI has enlisted the help of professional voice actors, along with its proprietary speech recognition software Whisper, which transcribes spoken words into text using its proprietary technology. A new text-to-speech model, dubbed OpenAI's new text-to-speech model, is behind ChatGPT's new voice capabilities, which can create human-like audio using just text and a few seconds of speech samples. This opens the door to many "creative and accessible applications".

Aside from working with other companies, OpenAI is also collaborating with Spotify on a project to translate podcasts into several languages and to translate them as naturally as possible in the voice of the podcaster. A multimodal approach based on GPT-3.5 and GPT-4 is being used by OpenAI to enable ChatGPT to understand images based on multimodal capabilities. 

Users can now upload an image to the ChatGPT system to ask it a question such as exploring the contents of my fridge to plan a meal or analyzing the data from a complex graph for work-related purposes.  During the next two weeks, Plus and Enterprise users will be gradually introduced to new features, including voice and image capabilities, which can be enabled through their settings. 

A voice feature will be available on both iOS and Android platforms, with the option to enable them via the settings menu, whereas images will be available on all platforms from here on out. A ChatGPT model can be used by users for specific topics, such as research in specialized fields. 

OpenAI is very transparent about the model's limitations and discourages high-risk use cases that have not been properly verified. As a result, the model does a great job of transcribing English text, but it is not so good at transcribing other languages, especially those with non-Roman script. 

OpenAI advises non-English speaking users not to use ChatGPT for such purposes. In recognition of the potential risks involved with advanced capabilities such as voice, OpenAI has focused on voice chat, and the technology has been developed in collaboration with voice actors to ensure the authenticity and safety of voice chat. This technology is also being used by Spotify's Voice Translation feature, which allows podcasters to translate content into a range of different languages using their voice. This feature is important because it expands the reach of podcasters.

Using image input, OpenAI takes measures to protect the privacy of individuals by limiting the ability of ChatGPT to identify and describe people's identities directly. To further enhance these safeguards while ensuring the tool is as useful as possible, it will be crucial to follow real-world usage and user feedback to ensure it is as robust as possible.