Recent research has surfaced serious security vulnerabilities in ChatGPT plugins, raising concerns about potential data breaches and account takeovers. The flaws could allow attackers to gain control of organisational accounts on third-party platforms and access sensitive user data, including Personally Identifiable Information (PII).
According to Darren Guccione, CEO and co-founder of Keeper Security, the vulnerabilities found in ChatGPT plugins pose a significant risk to organisations because employees routinely input sensitive data, including intellectual property and financial information, into AI tools. Unauthorised access to such data could have severe consequences for businesses.
In November 2023, OpenAI introduced GPTs, custom versions of ChatGPT that function similarly to plugins and present comparable security risks, further widening the attack surface.
In a recent advisory, the Salt Security research team identified three main types of vulnerabilities within ChatGPT plugins. Firstly, vulnerabilities were found in the plugin installation process, potentially allowing attackers to install malicious plugins and intercept user messages containing proprietary information.
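Although the advisory stops short of publishing exploit details, flaws of this class typically arise when an authorisation flow can be completed by an attacker on a victim's behalf. The Python sketch below shows the standard defence, an unguessable OAuth state value that must round-trip unchanged; the endpoint URL and in-memory store are illustrative assumptions rather than details from Salt Labs' report.

```python
import secrets

# Minimal sketch of OAuth "state" handling, the standard defence against an
# attacker completing an authorisation flow on a victim's behalf. This
# illustrates the general flaw class, not Salt Labs' exact finding.
_pending: dict[str, str] = {}  # session_id -> expected state (demo store)

def begin_auth(session_id: str) -> str:
    state = secrets.token_urlsafe(32)        # unguessable, per-session value
    _pending[session_id] = state
    return f"https://auth.example/authorize?state={state}"  # hypothetical URL

def finish_auth(session_id: str, returned_state: str) -> bool:
    expected = _pending.pop(session_id, None)
    # Reject the callback unless the state round-trips unchanged.
    return expected is not None and secrets.compare_digest(expected, returned_state)
```

Because the state value is bound to the user's own session, a link crafted by an attacker cannot complete the flow in someone else's browser.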
Secondly, flaws were discovered within PluginLab, a framework for developing ChatGPT plugins, which could lead to account takeovers on third-party platforms like GitHub.
Lastly, OAuth redirection manipulation vulnerabilities were identified in several plugins, enabling attackers to steal user credentials and execute account takeovers.
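This third class of flaw is concrete enough to sketch: if a plugin's OAuth flow accepts any attacker-supplied redirect address, the authorisation code, and with it the account, can be captured. Below is a minimal Python illustration of the exact-match allow-listing that blocks such manipulation; the plugin name and allow-list contents are hypothetical.

```python
from urllib.parse import urlparse

# Hypothetical allow-list: OAuth best practice is exact string matching of
# redirect URIs, never prefix or substring checks, which can be bypassed.
ALLOWED_REDIRECTS = {
    "example-plugin": {"https://example-plugin.com/oauth/callback"},
}

def is_safe_redirect(plugin_id: str, redirect_uri: str) -> bool:
    """Accept only an exact, pre-registered HTTPS redirect URI."""
    if redirect_uri not in ALLOWED_REDIRECTS.get(plugin_id, set()):
        return False
    return urlparse(redirect_uri).scheme == "https"

# An attacker-controlled callback fails the exact-match check:
assert not is_safe_redirect("example-plugin", "https://attacker.example/steal")
assert is_safe_redirect("example-plugin", "https://example-plugin.com/oauth/callback")
```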
Yaniv Balmas, vice president of research at Salt Security, emphasised the growing popularity of generative AI tools like ChatGPT and the corresponding increase in efforts by attackers to exploit these tools to gain access to sensitive data.
Following coordinated disclosure practices, Salt Labs worked with OpenAI and third-party vendors to promptly address these issues and reduce the risk of exploitation.
Sarah Jones, a cyber threat intelligence research analyst at Critical Start, outlined several measures that organisations can take to strengthen their defences against these vulnerabilities. These include:
1. Implementing permission-based installation:
This involves ensuring that only authorised users can install plugins, reducing the risk of malicious actors introducing harmful ones; a minimal sketch of such a gate follows this list.
2. Introducing two-factor authentication:
By requiring users to provide two forms of identification, such as a password and a one-time code generated on their phone, organisations add an extra layer of security to their accounts; a TOTP verification sketch follows this list.
3. Educating users on exercising caution with code and links:
It's essential to train employees to be cautious when interacting with code and links, as these can often be used as vectors for cyber attacks.
4. Monitoring plugin activity constantly:
Regularly monitoring plugin activity lets organisations promptly detect unusual behaviour or unauthorised access attempts; a simple anomaly-flagging sketch follows this list.
5. Subscribing to security advisories for updates:
Staying informed about security advisories and updates from ChatGPT and third-party vendors allows organisations to address vulnerabilities and apply patches promptly.
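For the first measure, a permission gate can be as simple as refusing the installation call for anyone outside an approved role. The sketch below is a minimal illustration, assuming hypothetical role names and an install_plugin function that is not part of any real ChatGPT API.

```python
# Hypothetical permission gate for plugin installation; the role names and
# install_plugin function are illustrative, not part of any real ChatGPT API.
ADMIN_ROLES = {"it-admin", "security-admin"}

def install_plugin(user_roles: set[str], plugin_name: str) -> None:
    if not user_roles & ADMIN_ROLES:
        raise PermissionError(f"not authorised to install {plugin_name!r}")
    print(f"installing {plugin_name}...")  # stand-in for the real install step

install_plugin({"security-admin"}, "data-export-plugin")   # succeeds
# install_plugin({"analyst"}, "data-export-plugin")        # raises PermissionError
```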
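For the second measure, the "unique code" is most often a time-based one-time password (TOTP, RFC 6238). The following sketch generates and verifies one using only the Python standard library; the Base32 secret is a demonstration value.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1, 30 s steps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, "sha1").digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32: str, submitted: str) -> bool:
    # Constant-time comparison avoids leaking digits through timing.
    return hmac.compare_digest(totp(secret_b32), submitted)

SECRET = "JBSWY3DPEHPK3PXP"  # demo value only; real secrets stay server-side
print(totp(SECRET))          # the 6-digit code a phone authenticator would show
```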
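For the fourth measure, even a crude comparison against a historical baseline can surface a misbehaving plugin. The sketch below flags plugins whose call volume exceeds a multiple of their usual rate; the plugin names, baseline figures, and threshold are all assumptions.

```python
from collections import Counter

def flag_anomalies(events: list[str], baseline: dict[str, int],
                   factor: float = 3.0) -> list[str]:
    """Flag plugins whose call count exceeds `factor` times their baseline."""
    counts = Counter(events)
    return [plugin for plugin, n in counts.items()
            if n > factor * baseline.get(plugin, 1)]

# Toy data: one plugin well above its baseline, one within normal range.
baseline = {"calendar-plugin": 20, "code-plugin": 50}
today = ["code-plugin"] * 40 + ["calendar-plugin"] * 90
print(flag_anomalies(today, baseline))  # -> ['calendar-plugin']
```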
As organisations increasingly rely on AI technologies, it becomes crucial to address and mitigate the associated security risks effectively.
ChatGPT is a large language model (LLM) from OpenAI that can generate text, translate languages, write various kinds of creative content, and answer questions informatively. It remains under active development but has already been used for a wide range of purposes, including creative writing, code generation, and research.
However, ChatGPT also poses security and privacy risks, as the plugin vulnerabilities described above illustrate.
Overall, ChatGPT is a powerful tool with a number of potential benefits, but it is important to be aware of the security and privacy risks that come with using it. Users should carefully consider the instructions they give to ChatGPT and use only trusted plugins. They should also be careful about which websites and web applications they authorise ChatGPT to access.
Here are some additional tips for using ChatGPT safely: