Artificial intelligence is everywhere. If it isn't powering your online search results, it's just a click away in your AI-enabled mouse. If it's not helping you polish your LinkedIn profile, it's assisting you at work. And as AIs grow more capable, outspoken voices are warning of the technology's potential risks.
These range from literally replacing you at your job to even more terrifying end-of-the-world scenarios. The Massachusetts Institute of Technology has taken note of these competing currents and compiled a database of the ways it believes AI could pose a threat.
AI threats
In an article accompanying the research, MIT summarised the many ways AI could endanger society. Here, at least, artificial intelligence outperforms humans. Kind of. While 51% of the threats were attributed directly to AIs, 34% originated with humans using AI technology; there are some malicious individuals out there, remember.
However, approximately two-thirds of the risks were identified only after an AI had been trained and deployed, compared with 10% identified before that point. That finding lends significant weight to AI regulatory initiatives, and it coincides with the announcement that OpenAI and Anthropic will submit their newest, most capable models to the US AI Safety Institute for testing before releasing them to the public.
So, what are the AI risks? A quick search of the database turns up some alarming categories. One scenario involves AI harm emerging as a "side effect of a primary goal like profit or influence," in which AI makers "wilfully allow it to cause widespread social damage like pollution, resource depletion, mental illness, misinformation, or injustice." A related category covers deliberate harm, in which "one or more criminal entities" build an AI to "intentionally inflict harm, such as for terrorism or combating law enforcement."
While some of these scenarios seem better suited to science-fiction dystopias, other threats MIT has identified feel more in line with current news reports, especially with regard to election misinformation: AIs could be harmful when "extensive data collection" in the models "brings toxic content and stereotypical bias into the training data."
Another concern is that AI systems could become "very invasive of people's privacy, controlling, for instance, the length of someone's last romantic relationship." This is a type of soft-power control in which society is steered through small adjustments; it resembles some of the concerns US authorities have raised about the possible influence of TikTok's algorithm.