Employers should understand the potential data protection issues before experimenting with generative AI tools like ChatGPT. Given the wave of privacy and data protection laws passed in recent years in the US, Europe, and elsewhere, you can't simply feed human resources data into a generative AI tool. After all, employee data, including performance, financial, and even health data, is often highly sensitive.
Obviously, this is an area where companies should seek legal advice. It's also wise to consult an AI ethics expert, to ensure that you're acting not only legally but also ethically and transparently. As a starting point, though, here are two major risk areas that employers should be aware of.
Feeding personal data
As I previously stated, employee data is often highly sensitive and personal. It is precisely the type of data that, depending on your jurisdiction, is typically subject to the most stringent legal protections.
This makes it risky to feed such data into a generative AI tool. Why? Because many generative AI tools use the information provided to them to fine-tune the underlying language model. In other words, the tool may use the data you supply for training purposes and may eventually expose that information to other users. Suppose, for example, you use a generative AI tool to produce a report on employee salaries based on internal employee records. The tool could later draw on that data when generating responses for other users outside your organisation. In short, personal information can easily be absorbed by a generative AI tool and reused.
This isn't as shady as it sounds. Many generative AI providers' terms and conditions explicitly state that data submitted to the AI may be used for training and fine-tuning, or disclosed when users ask for examples of previously submitted queries. So when you agree to the terms of service, make sure you understand exactly what you're signing up to. Experts recommend that any data given to a generative AI service be anonymised and free of personally identifiable information, a process frequently referred to as "de-identifying" the data.
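To make the idea concrete, here is a minimal Python sketch of de-identification: it masks email addresses and phone numbers with regular expressions and swaps known employee names for neutral placeholders before any text leaves the organisation. The patterns and the name list are purely illustrative assumptions; production-grade de-identification typically relies on dedicated PII-detection tooling and should be validated with legal counsel.

import re

# Illustrative only: this catches simple, pattern-matchable identifiers.
# Real de-identification usually needs dedicated PII-detection tooling.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

# In practice these names would come from your HR system; hard-coded
# here as a hypothetical example.
KNOWN_NAMES = {"Jane Smith": "EMPLOYEE_1", "John Doe": "EMPLOYEE_2"}

def deidentify(text: str) -> str:
    """Replace direct identifiers with neutral placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    for name, placeholder in KNOWN_NAMES.items():
        text = text.replace(name, placeholder)
    return text

raw = "Contact Jane Smith at jane.smith@example.com or +44 20 7946 0958."
print(deidentify(raw))
# Prints: Contact EMPLOYEE_1 at [EMAIL] or [PHONE].

Even a simple filter like this reduces the chance of direct identifiers reaching a third-party service, though it does nothing about indirect identification (for example, a salary figure combined with a job title), which is why expert review remains essential.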
Risks of generative AI outputs
Beyond the data fed into generative AI tools, there are also risks in the output they produce. In particular, that output may be based on personal data that was collected and processed in violation of data privacy laws.
For example, suppose you ask a generative AI tool for a report on average IT salaries in your area. There is a chance the tool will serve you personal data that was scraped from the internet without the individuals' consent, in violation of data protection rules. Employers who then use that personal data may be held liable for data protection violations. For now this is a legal grey area, and the generative AI provider would likely bear most or all of the responsibility, but the risk remains.
Cases like this are already appearing. Indeed, one lawsuit claims that ChatGPT was trained on "massive amounts of personal data", including medical records and information about children, that was accessed without consent. You do not want your organisation to become unwittingly entangled in litigation like this. Essentially, we're talking about an "inherited" risk: the original violation of data protection regulations may be the provider's, but the exposure doesn't necessarily stop there.
The way forward
Employers must carefully evaluate the data protection and privacy implications of using generative AI and seek expert advice. But don't let this put you off adopting generative AI altogether. Used properly and within the bounds of the law, generative AI can be an extremely helpful tool for organisations.