Search This Blog

Powered by Blogger.

Blog Archive

Labels

Showing posts with label ChatGPT Vulnerabilities. Show all posts

ChatGPT Vulnerability Exploited: Hacker Demonstrates Data Theft via ‘SpAIware

 

A recent cyber vulnerability in ChatGPT’s long-term memory feature was exposed, showing how hackers could use this AI tool to steal user data. Security researcher Johann Rehberger demonstrated this issue through a concept he named “SpAIware,” which exploited a weakness in ChatGPT’s macOS app, allowing it to act as spyware. ChatGPT initially only stored memory within an active conversation session, resetting once the chat ended. This limited the potential for hackers to exploit data, as the information wasn’t saved long-term. 

However, earlier this year, OpenAI introduced a new feature allowing ChatGPT to retain memory between different conversations. This update, meant to personalize the user experience, also created an unexpected opportunity for cybercriminals to manipulate the chatbot’s memory retention. Rehberger identified that through prompt injection, hackers could insert malicious commands into ChatGPT’s memory. This allowed the chatbot to continuously send a user’s conversation history to a remote server, even across different sessions. 

Once a hacker successfully inserted this prompt into ChatGPT’s long-term memory, the user’s data would be collected each time they interacted with the AI tool. This makes the attack particularly dangerous, as most users wouldn’t notice anything suspicious while their information is being stolen in the background. What makes this attack even more alarming is that the hacker doesn’t require direct access to a user’s device to initiate the injection. The payload could be embedded within a website or image, and all it would take is for the user to interact with this media and prompt ChatGPT to engage with it. 

For instance, if a user asked ChatGPT to scan a malicious website, the hidden command would be stored in ChatGPT’s memory, enabling the hacker to exfiltrate data whenever the AI was used in the future. Interestingly, this exploit appears to be limited to the macOS app, and it doesn’t work on ChatGPT’s web version. When Rehberger first reported his discovery, OpenAI dismissed the issue as a “safety” concern rather than a security threat. However, once he built a proof-of-concept demonstrating the vulnerability, OpenAI took action, issuing a partial fix. This update prevents ChatGPT from sending data to remote servers, which mitigates some of the risks. 

However, the bot still accepts prompts from untrusted sources, meaning hackers can still manipulate the AI’s long-term memory. The implications of this exploit are significant, especially for users who rely on ChatGPT for handling sensitive data or important business tasks. It’s crucial that users remain vigilant and cautious, as these prompt injections could lead to severe privacy breaches. For example, any saved conversations containing confidential information could be accessed by cybercriminals, potentially resulting in financial loss, identity theft, or data leaks. To protect against such vulnerabilities, users should regularly review ChatGPT’s memory settings, checking for any unfamiliar entries or prompts. 

As demonstrated in Rehberger’s video, users can manually delete suspicious entries, ensuring that the AI’s long-term memory doesn’t retain harmful data. Additionally, it’s essential to be cautious about the sources from which they ask ChatGPT to retrieve information, avoiding untrusted websites or files that could contain hidden commands. While OpenAI is expected to continue addressing these security issues, this incident serves as a reminder that even advanced AI tools like ChatGPT are not immune to cyber threats. As AI technology continues to evolve, so do the tactics used by hackers to exploit these systems. Staying informed, vigilant, and cautious while using AI tools is key to minimizing potential risks.

Google DeepMind Researchers Uncover ChatGPT Vulnerabilities

 

Scientists at Google DeepMind, leading a research team, have adeptly utilized a cunning approach to uncover phone numbers and email addresses via OpenAI's ChatGPT, according to a report from 404 Media. This discovery prompts apprehensions regarding the substantial inclusion of private data in ChatGPT's training dataset, hinting at the risk of inadvertent information exposure. 

The researchers expressed astonishment at the success of their attack and emphasized that the vulnerabilities they exploited could have been identified earlier. They detailed their findings in a study, which is currently available as a not-yet-peer-reviewed paper. The researchers also mentioned that, to their knowledge, the notable frequency with which ChatGPT emits training data had not been observed before the release of this paper. 

Certainly, the revelation of potentially sensitive information represents merely a fraction of the issue at hand. As highlighted by the researchers, the broader concern lies in ChatGPT mindlessly reproducing extensive portions of its training data verbatim at an alarming rate. This susceptibility opens the door to widespread data extraction, possibly supporting the claims of incensed authors who contend that their work is falling victim to plagiarism. 

How Researchers Executed Their Attack? 

The researchers acknowledge that the attack is rather simple and somewhat amusing. To execute it, one just needs to instruct the chatbot to endlessly repeat a specific word, like "poem," and then let it do its thing. After a while, instead of repetitive behaviour, ChatGPT begins generating varied and mixed pieces of text, often containing substantial chunks copied from online sources. 

OpenAI introduced ChatGPT (Chat Generative Pre-trained Transformer) to the public on November 30, 2022. This chatbot, built on a robust language model, empowers users to shape and guide conversations according to their preferences in terms of length, format, style, level of detail, and language. 

According to the Nemertes enterprise AI research study for 2023-24, over 60% of the organizations surveyed were actively employing AI in production, and nearly 80% had integrated AI into their business operations. Surprisingly, less than 36% of these organizations had established a comprehensive policy framework to govern the use of generative AI.