Employees Claim OpenAI and Google DeepMind Are Hiding Dangers From the Public

The claim was made in a widely publicised open letter in which the group emphasised what they called "serious risks" posed by AI.

A number of current and former OpenAI and Google DeepMind employees have claimed that AI businesses "possess substantial non-public data regarding the capabilities and limitations of their systems", information the companies cannot be expected to share voluntarily.

The claim was made in a widely publicised open letter in which the group warned of "serious risks" posed by AI, including the entrenchment of existing inequities, manipulation and misinformation, and the loss of control over autonomous AI systems, which they said could lead to "human extinction." The signatories bemoaned the absence of effective oversight and advocated for stronger whistleblower protections.

The letter’s authors said they believe AI can bring unprecedented benefits to society and that the risks they highlighted can be reduced with the involvement of scientists, policymakers, and the general public. However, they argued that AI companies have financial incentives to avoid effective oversight.

Asserting that AI firms know the risk levels of different kinds of harm and the adequacy of their protective measures, the employees said the companies face only weak obligations to share this information with governments "and none with civil society." They added that strict confidentiality agreements prevented them from voicing their concerns publicly.

“Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated,” they wrote.

Vox revealed in May that former OpenAI employees were barred from criticising their former employer for the rest of their lives; those who refused to sign the agreement risked losing all of the vested equity they had earned while working for the company. OpenAI CEO Sam Altman later said on X that the standard exit paperwork would be changed.

In reaction to the open letter, an OpenAI representative told The New York Times that the company is proud of its track record of developing the most powerful and safe AI systems, as well as its scientific approach to risk management.

Such open letters are not uncommon in the field of artificial intelligence. Most famously, the Future of Life Institute published an open letter, signed by Elon Musk and Steve Wozniak, calling for a six-month pause on the training of AI systems more powerful than GPT-4; the call went unheeded.