If you've been scrolling through social media recently, you've probably seen a lot of... dolls. They're all over X and Facebook feeds. Instagram? Dolls. TikTok?
You guessed it: dolls, along with doll-making tutorials. There are even dolls on LinkedIn, undoubtedly the most serious and least entertaining member of the club. Call it the Barbie AI treatment or the Barbie box trend. If Barbie isn't your thing, there are AI action figures, action figure starter packs, and the ChatGPT action figure fad. Whatever the hashtag, though, the dolls seem to be everywhere.
And while they share some similarities (boxes and packaging modelled on Mattel's Barbie, personality-driven accessories, a plastic-looking smile), they're as unique as the people who post them, save for one thing they all have in common: they're not real.
In the emerging trend, people use generative AI tools such as ChatGPT to reimagine themselves as dolls or action figures, complete with accessories. It has proven hugely popular, and not just among influencers.
Politicians, celebrities, and major brands have all joined in. Journalists covering the trend have generated images of themselves holding cameras and microphones (though this journalist will spare you that). Users have produced renditions of almost every well-known figure, from billionaire Elon Musk to actress and singer Ariana Grande.
According to tech outlet The Verge, the trend began on LinkedIn, the professional networking site, where it caught on with marketers seeking engagement. As a result, many of the dolls you see are promoting a company or business. (Think "social media marketer doll," or even "SEO manager doll.")
Privacy concerns
From a social perspective, the popularity of the doll-generating trend isn't surprising at all, according to Matthew Guzdial, an assistant professor of computing science at the University of Alberta.
"This is the kind of internet trend we've had since we've had social media. Maybe it used to be things like a forwarded email or a quiz where you'd share the results," Guzdial noted.
But as with any AI trend, there are some concerns over its data use. Generative AI in general poses substantial data privacy challenges. As the Stanford University Institute for Human-Centered Artificial Intelligence (Stanford HAI) points out, data privacy concerns and the internet are nothing new, but AI is so "data-hungry" that it magnifies the risk.
Safety tips
As we have seen, one of the major risks of participating in viral AI trends is the potential for your conversation history to be compromised by unauthorised or malicious parties. To stay safe, researchers recommend taking the following steps:
Protect your account: enable two-factor authentication (2FA), create a strong, unique password for each service, and avoid logging in on shared computers.
Minimise the real data you give the AI model: Fornés suggests using nicknames or placeholder details instead. You could also consider using a separate identity solely for interactions with AI models.
Use the tool cautiously and properly: where feasible, use the AI model in incognito mode, with history and conversational-memory features turned off.