In the rapidly evolving world of technology, generative artificial intelligence (GenAI) systems are emerging as both powerful tools and significant security risks. Cybersecurity researchers have long warned about the vulnerabilities inherent in these systems, from cleverly crafted prompts that bypass safety measures (so-called prompt injection and jailbreaking) to data leaks that expose sensitive information. The threats posed by GenAI are numerous and increasingly concerning.
Elia Zaitsev, Chief Technology Officer of cybersecurity firm CrowdStrike, recently highlighted these issues in an interview with ZDNET.
"This is a new attack vector that opens up a new attack surface," Zaitsev stated. He emphasized the hurried adoption of GenAI technologies, often at the expense of established security protocols. "I see with generative AI a lot of people just rushing to use this technology, and they're bypassing the normal controls and methods of secure computing," he explained.
Zaitsev draws a parallel between GenAI and fundamental computing innovations. "In many ways, you can think of generative AI technology as a new operating system or a new programming language," he noted. The shortage of practitioners who understand both the capabilities and the pitfalls of GenAI compounds the problem, making these systems difficult to use well and even harder to secure.
The risk extends beyond poorly designed applications.
According to Zaitsev, the centralization of valuable information within large language models (LLMs) presents a significant vulnerability. "The same problem of centralizing a bunch of valuable information exists with all LLM technology," he said.
To mitigate these risks, Zaitsev advises against giving LLMs unfettered access to data stores through retrieval-augmented generation (RAG), the increasingly common practice of connecting a model to external databases so it can draw on them when answering. Instead, he recommends a more controlled approach. "In a sense, you must tame RAG before it makes the problem worse," he suggested. The idea is to use the LLM only to interpret open-ended questions, while traditional, deterministic code fulfills the query securely.
"For example, Charlotte AI often lets users ask generic questions," Zaitsev explained.
"What Charlotte does is identify the relevant part of the platform and the specific data set that holds the source of truth, then pulls from that via an API call, rather than allowing the LLM to query the database directly."
As enterprises increasingly integrate GenAI into their operations, understanding and addressing its security implications is crucial. By implementing stringent control measures and fostering a deeper understanding of this technology, organizations can harness its potential while safeguarding their valuable data.