IT Leaders Raise Security Concerns Regarding Generative AI

Executives are concerned that their companies aren't equipped to safeguard workloads and apps.


According to a new Venafi survey, developers in the vast majority (83%) of organisations utilise AI to generate code, raising concerns among security leaders that it could lead to a major security incident. 

In a report published earlier this month, the machine identity management company shared results indicating that AI-generated code is widening the gap between programming and security teams. 

The report, "Organisations Struggle to Secure AI-Generated and Open Source Code", highlighted that while 72% of security leaders believe they have little choice but to allow developers to utilise AI in order to remain competitive, virtually all (92%) are concerned about its use. 

Because AI, particularly generative AI technology, is advancing so quickly, 66% of security leaders believe they will be unable to keep up. An even more significant number (78%) believe that AI-generated code will lead to a security reckoning for their organisation, and 59% are concerned about the security implications of AI. 

The top three issues most frequently mentioned by survey respondents are the following: 

  • Over-reliance on AI by developers, resulting in a drop in standards
  • Ineffective quality checking of AI-written code
  • AI using dated open-source libraries that have not been well maintained

“Developers are already supercharged by AI and won’t give up their superpowers. And attackers are infiltrating our ranks – recent examples of long-term meddling in open source projects and North Korean infiltration of IT are just the tip of the iceberg,” Kevin Bocek, Chief Innovation Officer at Venafi, stated. 

Furthermore, the Venafi poll reveals that AI-generated code raises not only technical issues but also governance challenges. For example, nearly two-thirds (63%) of security leaders believe it is impossible to oversee the safe use of AI in their organisation because they lack visibility into where AI is being deployed. Despite these concerns, fewer than half of firms (47%) have procedures in place to ensure the safe use of AI in development settings. 

“Anyone today with an LLM can write code, opening an entirely new front. It’s the code that matters, whether it is your developers hyper-coding with AI, infiltrating foreign agents or someone in finance getting code from an LLM trained on who knows what. We have to authenticate code from wherever it comes,” Bocek concluded. 

The Venafi report is based on a survey of 800 security decision-makers from the United States, the United Kingdom, Germany, and France.