Earlier this year, ChatGPT emerged as the face of generative AI, a tool designed to help with almost everything, from creating business plans to breaking down complex topics into simple terms. Since then, businesses of all sizes have been eager to explore and reap the benefits of generative AI.
However, as this new chapter of AI innovation moves at breakneck speed, CEOs and leaders risk overlooking a technology that has been slipping in through the back door: shadow AI.
Overlooking shadow AI is a risky option
To put it simply, "shadow AI" refers to employees adding AI tools to their work systems without management's awareness, in order to make their lives easier. Although this pursuit of efficiency is usually well-intentioned, it exposes businesses to new cybersecurity and data privacy risks.
Shadow AI is usually embraced by employees looking to boost productivity and streamline tedious tasks or laborious processes. That could mean asking AI to summarise the main points of meeting minutes, or to comb through hundreds of PowerPoint decks in search of critical data.
Employees typically don't set out to expose their company to risk. On the contrary: all they're doing is simplifying things so they can cross more off their to-do lists. However, given that over a million adults in the United Kingdom have already used generative AI at work, there is a growing chance that employees will turn to models their employers have not approved as safe, endangering data security in the process.
Major risks
Shadow AI carries two major risks. First, employees may feed sensitive company information into these tools, or leave it exposed to scraping while the technology continues to operate in the background. For example, an employee using ChatGPT or Google Bard to increase productivity or clarify information may be entering sensitive or confidential company data into a public model.
Sharing data isn't always a problem in itself (companies frequently rely on third-party tools and service providers), but problems arise when the tool in question and its data-handling policies haven't been assessed and approved by the business.
The second risk of shadow AI is that, because businesses generally aren't aware these tools are being used, they can't assess the risks or take appropriate action to minimise them. (The same blind spot applies when employees receive inaccurate output and go on to use it in their work.)
All of this happens behind closed doors, beyond the knowledge of business leaders. According to research from Gartner, 41% of employees created, modified, or accessed technology outside of IT's purview in 2022, and that figure is expected to rise to 75% by 2027.
And therein lies the crux of the issue. How can organisations monitor and assess the risks of something they don't even know is there?
Some companies, such as Samsung, have gone so far as to ban ChatGPT from their offices after employees uploaded proprietary source code and leaked confidential company information via the public platform. Apple and JP Morgan have also restricted employee use of ChatGPT. Others are burying their heads in the sand or failing to notice the problem at all.
What should business leaders do to mitigate the risks of shadow AI while also ensuring that they and their teams can benefit from the efficiencies and insights that artificial intelligence can offer?
First, leaders should educate teams on what constitutes safe AI practice, as well as the risks associated with shadow AI, and provide clear guidance on when tools like ChatGPT can and cannot be used safely at work.
For employees whose work could benefit from generative AI but involves sensitive data, companies should consider offering private, in-house tools. Open models such as Llama 2 and Falcon can be downloaded and run securely on a company's own infrastructure. Azure OpenAI provides a middle-ground option in which data remains within the company's Microsoft "tenancy."
These options avoid the data and IP risks that come with public large language models like ChatGPT, whose various uses of our data aren't yet fully known, while still allowing employees to achieve the results they're after.
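To make the in-house route concrete, here is a minimal sketch of what it can look like in practice: a locally downloaded Llama 2 model queried through the open-source Hugging Face transformers library, so prompts and outputs never leave company infrastructure. The model path and prompt below are hypothetical placeholders, not a prescribed setup.

```python
# A minimal sketch of a private, in-house generative AI workflow, assuming
# the Hugging Face `transformers` library and a locally downloaded copy of
# the Llama 2 chat weights (access requires accepting Meta's licence).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "/models/llama-2-7b-chat-hf"  # hypothetical local path; no external API calls

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")

prompt = "Summarise the key action points from these meeting minutes: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generation runs entirely on local hardware, so confidential text in the
# prompt is never sent to a third-party service.
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern applies to Falcon or any other open model: the weights, the prompts, and the outputs all stay inside the business's own environment, which is precisely what makes this route safer than an unvetted public tool.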