
UIUC Researchers Expose Security Risks in OpenAI's Voice-Enabled ChatGPT-4o API, Revealing Potential for Financial Scams

 

Researchers recently revealed that OpenAI’s ChatGPT-4o voice API could be exploited by cybercriminals for financial scams, achieving moderate success rates despite the model’s built-in safeguards. This discovery has raised concerns about the misuse potential of this advanced language model.

ChatGPT-4o, OpenAI’s latest AI model, offers new capabilities, combining text, voice, and vision processing. These updates are supported by security features aimed at detecting and blocking malicious activity, including unauthorized voice replication.

Voice-based scams have become a significant threat, further exacerbated by deepfake technology and advanced text-to-speech tools. Despite OpenAI’s security measures, researchers from the University of Illinois Urbana-Champaign (UIUC) demonstrated how these protections could still be circumvented, highlighting risks of abuse by cybercriminals.

Researchers Richard Fang, Dylan Bowman, and Daniel Kang emphasized that current AI tools may lack sufficient restrictions to prevent misuse. They pointed out the risk of large-scale scams using automated voice generation, which reduces the need for human effort and keeps operational costs low.

Their study examined a variety of scams, including unauthorized bank transfers, gift card fraud, cryptocurrency theft, and social media credential theft. Using ChatGPT-4o’s voice capabilities, the researchers automated key actions like navigation, data input, two-factor authentication, and following specific scam instructions.

To bypass ChatGPT-4o’s data protection filters, the team used prompt “jailbreaking” techniques, allowing the AI to handle sensitive information. They simulated interactions with ChatGPT-4o by acting as gullible victims, testing the feasibility of different scams on real websites.

By manually verifying each transaction, such as those on Bank of America’s site, they found varying success rates. For example, Gmail credential theft was successful 60% of the time, while crypto-related scams succeeded in about 40% of attempts.

Cost analysis showed that carrying out these scams was relatively inexpensive, with successful cases averaging $0.75. More complex scams, like unauthorized bank transfers, cost around $2.51—still low compared to the potential profits such scams might yield.

OpenAI responded by emphasizing that its o1-preview model includes advanced safeguards against this type of misuse. The company claims that o1-preview significantly outperforms GPT-4o in resisting unsafe content generation and handling adversarial prompts.

OpenAI also highlighted the importance of studies like UIUC’s for enhancing ChatGPT’s defenses. They noted that GPT-4o already restricts voice replication to pre-approved voices and that newer models are undergoing stringent evaluations to increase robustness against malicious use.

UN Report: Telegram Joins the Expanding Cybercrime Markets in Southeast Asia

 


According to a report issued on October 7 by the United Nations Office on Drugs and Crime (UNODC), criminal networks across Southeast Asia are increasingly turning to the messaging platform Telegram to conduct large-scale illegal activities. The report says Telegram’s large channels and seemingly insufficient moderation have made it attractive to organised crime groups, transforming the way global illicit operations are run.

An Open Market for Stolen Data and Cybercrime Tools

The UNODC report illustrates how Telegram has become a trading platform for hacked personal data, including credit card numbers, passwords, and browser histories. Cybercriminals trade openly in large Telegram channels with very little interference. The platform also hosts software and tools designed for cybercrime, such as deepfake-based fraud kits and malware for copying and collecting users’ data, while unlicensed cryptocurrency exchanges operating through Telegram offer money-laundering services.

One example cited in the report was a Telegram advertisement offering to launder stolen USDT cryptocurrency, claiming to move $3 million per day for criminal organisations involved in transnational organised crime in Southeast Asia. According to the report, these underground markets are becoming increasingly pervasive on Telegram, with vendors aggressively seeking to reach criminal organisations in the region.

Southeast Asia: A Hub of Fraud and Exploitation

According to the UNODC report, Southeast Asia has become an important base for international fraud operations. Much of the region’s criminal activity is run by Chinese syndicates operating from heavily fortified compounds staffed by trafficked individuals forced into labour. The industry is estimated to generate between $27.4 billion and $36.5 billion annually.

The report comes as scrutiny of Telegram and its billionaire founder, Russian-born Pavel Durov, intensifies. Durov is facing legal fallout in France, where he has been charged with complicity in crimes committed on the platform, including the distribution of illegal content. The case has sparked debate about the liability of tech companies for crimes happening on their platforms, and about the line between free speech and legal accountability.

Telegram has responded to the growing pressure by promising cooperation with legal authorities. Durov stated that Telegram will share users’ IP addresses and phone numbers whenever a valid legal request requires it, and promised to remove some features of the platform that have been widely misused for illicit activities. More than a billion people worldwide currently use Telegram, which has so far not reacted publicly to the latest UNODC report.

Fertile Ground for Cybercrime

Benedikt Hofmann, UNODC Deputy Representative for Southeast Asia and the Pacific, warned that the platform’s free access and anonymity have created an ideal setting for criminals, and that consumers’ personal data and safety are increasingly at risk of being drawn into fraud schemes run through Telegram.

Innovation in Criminal Networks

As organised crime in Southeast Asia grows, criminals are arming themselves with new and more varied technologies, most importantly malware, generative AI tools, and deepfakes, to commit sophisticated cyber-enabled fraud. Underscoring this innovation and adaptability, the UNODC investigation identified more than 10 specialised service providers in the region offering deepfake technology for use in cybercrime.

Expanding Investigations Across Asia

Another area of concern discussed in the UNODC report is the expansion of law enforcement investigations in other parts of Asia. South Korean authorities, for example, are scrutinising Telegram’s role in cybercrimes including deepfake pornography. Meanwhile, in India, a hacker used Telegram chatbots to leak private data from Star Health, one of the country’s largest insurers, disclosing medical records, IDs, and even tax details. Star Health has since sued Telegram.

A Turning Point in Cybersecurity

The UNODC report lays bare the challenge that encrypted messaging presents to the fight against organised crime. As criminal groups continue to take full advantage of platforms like Telegram, tech companies must enforce controls on illegal activity while balancing user privacy and safety.


The Rise of AI: New Cybersecurity Threats and Trends in 2023

 

The rise of artificial intelligence (AI) is becoming a critical trend to monitor, with the potential for malicious actors to exploit the technology as it advances, according to the Cyber Security Agency of Singapore (CSA) on Tuesday (Jul 30). AI is increasingly used to enhance various aspects of cyberattacks, including social engineering and reconnaissance.

The CSA’s Singapore Cyber Landscape 2023 report, released on Tuesday, highlights that malicious actors are leveraging generative AI for deepfake scams, bypassing biometric authentication, and identifying vulnerabilities in software. Deepfakes, which use AI techniques to alter or manipulate visual and audio content, have been employed for commercial and political purposes. This year, several Members of Parliament received extortion letters featuring manipulated images, and Senior Minister Lee Hsien Loong warned about deepfake videos misrepresenting his statements on international relations.  

Traditional AI typically performs specific tasks based on predefined data, analyzing and predicting outcomes but not creating new content. Generative AI, by contrast, can produce new text, images, videos, and audio, as exemplified by ChatGPT, OpenAI’s chatbot. AI has also enabled malicious actors to scale up their operations: the CSA and its partners analyzed phishing emails from 2023 and found that about 13 percent contained AI-generated content, which was grammatically superior and more logically structured. These AI-generated emails reduced logical gaps and enhanced legitimacy by adapting their tone to exploit a wide range of emotions in victims.

Additionally, AI has been used to scrape personal identification information from social media profiles and websites, increasing the speed and scale of cyberattacks. The CSA cautioned that malicious actors could misuse legitimate research on generative AI’s negative applications, incorporating these findings into their attacks. The use of generative AI adds a new dimension to cyber threats, making it crucial for individuals and organizations to learn how to detect and respond to such threats. Techniques for identifying deepfakes include evaluating the message, analyzing audio-visual elements, and using authentication tools. 

Despite the growing sophistication of cyberattacks, Singapore saw a 52 percent decline in phishing attempts in 2023 compared to the previous year, contrary to the global trend of rising phishing incidents. However, the number of phishing attempts in 2023 remained 30 percent higher than in 2021. Phishing continues to pose a significant threat, with cybercriminals making their attempts appear more legitimate. In 2023, over a third of phishing attempts used the credible-looking domain “.com” instead of “.xyz,” and more than half of the phishing URLs employed the secure “HTTPS protocol,” a significant increase from 9 percent in 2022. 
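
To make these patterns concrete, here is a minimal, hypothetical sketch of the kind of URL heuristics defenders apply; it is not CSA tooling, and the example brand list and similarity threshold are illustrative assumptions (Python, standard library only).

```python
# Hypothetical phishing-URL triage: flags HTTPS-as-false-comfort and
# lookalike domains. The brand list and 0.75 cutoff are assumptions.
from difflib import SequenceMatcher
from urllib.parse import urlparse

KNOWN_BRANDS = ["paypal.com", "dbs.com.sg", "singpass.gov.sg"]  # example allowlist

def phishing_signals(url: str) -> list[str]:
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    signals = []
    # Over half of 2023 phishing URLs used HTTPS, so the padlock proves nothing.
    if parsed.scheme == "https":
        signals.append("uses HTTPS (no longer a trust signal)")
    for brand in KNOWN_BRANDS:
        similarity = SequenceMatcher(None, host, brand).ratio()
        if 0.75 <= similarity < 1.0:  # close to, but not exactly, a known domain
            signals.append(f"possible lookalike of {brand}")
    return signals

print(phishing_signals("https://paypa1.com/login"))
# ['uses HTTPS (no longer a trust signal)', 'possible lookalike of paypal.com']
```

As the report’s figures show, neither a “.com” domain nor HTTPS can be treated as a trust signal on its own; heuristics like these only surface candidates for closer inspection.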

The banking and financial services, government, and technology sectors were the most targeted industries in phishing attempts, with 63 percent of the spoofed organizations belonging to the banking and financial services sector. This industry is frequently targeted because it holds sensitive and valuable information, such as personal details and login credentials, which are highly attractive to cybercriminals.

Nude Deepfakes: What Is the EU Doing to Protect Women from Cyber Harassment?


The disturbing rise of sexual deepfakes

In this age of rapid technical advancement, deepfakes have become a worrying digital development. This article delves into the workings of deepfake technology, exposing both its potential dangers and its constantly evolving capabilities.

The manipulation of images and videos to create sexually oriented content is set to become a criminal offense across all European Union member states.

The EU's first directive on violence against women is expected to move through its final approval stage by April 2024.

With the help of AI programs, these images are being modified to undress women without their consent. 

What changes will the new directive bring? And what happens if women living in the European Union are targeted by manipulation, but the attacks originate in countries outside the European Union?

The victims: Women

If you are wondering how easy it is to create sexual deepfakes, some websites are just a click away and provide free-of-cost services.

According to the 2023 State of Deepfakes research, it takes around 25 minutes to create a sexual deepfake, at no cost. All that is needed is a photo in which the face is clearly visible.

The research analyzed a sample of 95,000 deepfake videos posted between 2019 and 2023 and disclosed a disturbing 550% increase.

AI and deepfakes expert Henry Ajder says the people who use these stripping tools want to humiliate, defame, and traumatize their targets, and in some cases seek sexual gratification.

“And it's important to state that these synthetic stripping tools do not work on men. They are explicitly designed to target women. So it's a good example of a technology that is explicitly malicious. There's nothing neutral about that,” says Ajder.

The makers of nude deepfakes search for their target's pictures "anywhere and everywhere" on the web. The pictures can be taken from your Instagram account, Facebook account, or even your WhatsApp display picture. 

Prevention: What to do?

When female victims come across nude deepfakes of themselves, the instinctive societal response is to urge them to keep their images off the internet.

But experts say the solution lies not in prevention, but in taking immediate action to have the content removed.

Amanda Manyame, Digital Law and Rights Advisor at Equality Now, says “I'm seeing that trend, but it's like a natural trend any time something digital happens, where people say don't put images of you online, but if you want to push the idea further is like, don't go out on the street because you can have an accident.” The expert further says, “unfortunately, cybersecurity can't help you much here because it's all a question of dismantling the dissemination network and removing that content altogether.”

Today, victims of nude deepfakes turn to laws such as the General Data Protection Regulation, the European Union's privacy law, and national defamation laws to seek justice and redress.

Victims of such an offense are advised to take screenshots or video recordings of the deepfake content and use them as evidence when reporting it to the police and to the social media platforms where the incident occurred.

“There is also a platform called StopNCII, or Stop Non-Consensual Intimate Image abuse, where you can report an image of yourself and the website creates what is called a 'hash' of the content. AI is then used to automatically have the content taken down across multiple platforms," says Manyame.
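
To illustrate the “hash” idea in general terms, here is a minimal sketch using the open-source Pillow and imagehash packages. StopNCII's real pipeline differs (hashes are generated on the victim's device and only the hash, never the image, is shared), so treat this purely as an analogy.

```python
# Illustrative sketch of hash-based matching of re-uploaded images.
# Requires: pip install Pillow imagehash
from PIL import Image
import imagehash

def perceptual_hash(path: str) -> imagehash.ImageHash:
    # Unlike a cryptographic hash, a perceptual hash stays stable under
    # resizing and re-compression, so re-encoded copies still match.
    return imagehash.phash(Image.open(path))

reported = perceptual_hash("reported_image.jpg")   # hash shared by the victim
candidate = perceptual_hash("uploaded_copy.jpg")   # image seen on a platform

# Subtracting two hashes gives their Hamming distance; small means near-duplicate.
if reported - candidate <= 8:
    print("Likely a copy of the reported image: queue for takedown review")
```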

Global Impact

The new directive aims to combat sexual violence against women: all 27 member states will follow the same set of laws criminalizing all forms of cyber-violence, including sexually motivated "deepfakes."

Amanda Manyame says “The problem is that you might have a victim who is in Brussels. You've got the perpetrator who is in California, in the US, and you've got the server, which is holding the content in maybe, let's say, Ireland. So, it becomes a global problem because you are dealing with different countries.”

Addressing this concern, Evin Incir, MEP and co-author of the latest directive, explains that “what needs to be done in parallel with the directive" is to increase cooperation with other countries, "because that's the only way we can also combat crime that does not see any boundaries."

"Unfortunately, AI technology is developing very fast, which means that our legislation must also keep up. So we will need to revise the directive soon. It is an important step for the current state, but we will need to keep up with the development of AI,” Incir admits.

94% of Deepfake Adult Content Targets Celebs

 

Rapid progress in computer technology has ushered in remarkable strides in simulating reality. A noteworthy development is the emergence of artificial intelligence (AI)-generated media, specifically videos adept at convincingly emulating real individuals. These videos have captured considerable interest because of their uncanny ability to convey the impression that a person is doing or saying things they have never actually done or said.

According to a recent survey focusing on deepfake content, a staggering 98% of all online deepfake videos consist of adult content, and 99% of this convincingly realistic pornography features female subjects. In terms of vulnerability to deepfake adult content, India ranks sixth among nations.

The 2023 State of Deepfakes report, published by Home Security Heroes, a United States-based organization, highlights that individuals in the public eye, especially those within the entertainment sector, are at a heightened risk. This is attributed to their prominence and the potential repercussions on their careers. Utilizing deepfake technology involves the fabrication of videos by either substituting faces or modifying voices. 

As indicated in the report, a staggering 94% of individuals portrayed in deepfake pornography videos have ties to the entertainment industry. This encompasses singers, actresses, social media influencers, models, and athletes. 

Why Is Deepfake Pornography on the Rise?

The survey emphasizes that the evolution of deepfakes has been significantly influenced by two key factors: the proliferation of Generative Adversarial Networks (GANs) and the growing accessibility of user-friendly tools, software, and communities. According to the same survey, a noteworthy statistic reveals that one out of every three deepfake tools grants users the ability to produce adult content through AI-powered manipulation techniques. 

It also notes that 92.3% of these platforms provide free access, albeit with certain restrictions. In one incident, a Twitch streamer was discovered on a website notorious for producing AI-generated adult content featuring fellow streamers. In another, a group of students from New York created a video in which their principal was manipulated to utter racist comments and make threats against students.

Meanwhile, in Venezuela, AI-generated videos are being employed to spread political propaganda. User-friendly deepfake tools have evidently seen widespread adoption, with around 42 different tools collectively amassing 10 million (1 crore) monthly searches. These tools serve a wide-ranging user base: 40 percent are available as downloadable applications and the remaining 60 percent are accessible through web-based platforms.

The survey brings to light that 20 percent of the participants have contemplated acquiring the skills to produce deepfake adult content, indicating a burgeoning interest in this technology. Furthermore, one in ten respondents confessed to having made attempts at creating deepfake adult content featuring public figures.

AI-Based Deepfake Fraud: Police Retrieve ₹40,000 Defrauded From Kozhikode Victim


Kozhikode, India: In a ‘deepfake’ incident, a man from Kozhikode, Kerala lost ₹40,000 after he fell prey to an AI-based scam.

According to police officials, the victim, identified as Radhakrishnan, received a video call on WhatsApp from an unknown number. The swindlers had used artificial intelligence tools to generate a deepfake video of a former colleague the victim knew. To further build trust, the caller cunningly mentioned some of the victim's former acquaintances.

During their conversation, the scammer made an urgent request for ₹40,000, citing a medical emergency involving a relative in hospital. Trusting the caller, Radhakrishnan sent the money via Google Pay.

Later, the caller requested another ₹40,000, which raised Radhakrishnan's suspicions. He reached out to his colleague directly and, to his disbelief, discovered that the entire incident was an AI-based deepfake fraud and that he had been robbed. Realizing this, he immediately filed a complaint with the Cyber Police.

The cyber cell promptly investigated the case and contacted the authorities at the bank where the money was being held. The account was traced back to a private bank located in Maharashtra.

According to the Kerala Police Cyber Cell, this was the first incident of AI-based deepfake fraud detected in the state.

Modus Operandi: The scammers collect images from social media profiles and use artificial intelligence to create misleading videos. These con artists combine AI technology with details like mutual friends' names to appear legitimate and con innocent individuals.

How to Protect Oneself From Deepfakes

Similar cases of deepfakes and other AI-based frauds have raised concerns for cyber security professionals.

Experts have cautioned against such scams and provided some safety advice. Because the majority of deepfakes have subpar resolution, people are urged to examine the video quality closely. On careful inspection, deepfake videos often give themselves away by ending abruptly or looping back to the beginning after a predetermined length of time. Before conducting any financial transaction, it is also a good idea to contact the person through a separate channel to confirm that they are truly the one in the video conversation.
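
One of these red flags, a clip that loops back to its start, can even be spot-checked programmatically. The sketch below is a hypothetical illustration using OpenCV; the frame-difference threshold is an assumption, not a validated deepfake detector.

```python
# Hypothetical loop check: compare a video's first and last frames.
# Requires: pip install opencv-python numpy
import cv2
import numpy as np

def looks_looped(path: str, threshold: float = 5.0) -> bool:
    cap = cv2.VideoCapture(path)
    ok, first = cap.read()
    if not ok:
        raise ValueError(f"Cannot read video: {path}")
    last = first
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        last = frame
    cap.release()
    # Mean absolute pixel difference between first and last frames;
    # near-identical frames suggest the clip loops back to its start.
    return float(np.mean(cv2.absdiff(first, last))) < threshold
```

A check like this is only a hint: legitimate videos can also start and end on similar frames, which is why the manual verification steps above still matter.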

Generative AI Threatens Digital Identity Verification, Says Former CTO of Aadhaar

 

Srikanth Nadhamuni, who served as chief technology officer (CTO) of Aadhaar between 2009 and 2012, believes that the tremendous improvement we are seeing in the field of artificial intelligence, particularly generative AI, poses a clear and present danger to digital identity verification. He co-founded the Bangalore-based incubator Khosla Labs with Vinod Khosla and serves as its CEO.

The trust mechanisms that have been meticulously built into identification systems over time are seriously threatened by deepfakes: synthetic media that convincingly mimic real human speech, behaviour, and appearance. The need for a "proof-of-personhood" verification capability, probably using a person's biometrics, becomes paramount in an increasingly likely future where AI-generated impersonations cause chaos and erode trust in the system, the tech expert wrote in a LinkedIn post titled "The Future of Digital Identity Verification: In the era of AI Deep Fakes."
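
As a rough illustration of what a biometric proof-of-personhood check might involve, here is a minimal sketch using the open-source face_recognition library. The workflow and the 0.6 distance threshold are illustrative assumptions, and a real system would add liveness detection precisely because a deepfake shown to a camera can defeat a bare embedding match.

```python
# Illustrative face-match check, not a production identity system.
# Requires: pip install face_recognition numpy
import face_recognition
import numpy as np

def same_person(enrolled_photo: str, live_capture: str,
                tolerance: float = 0.6) -> bool:
    enrolled = face_recognition.face_encodings(
        face_recognition.load_image_file(enrolled_photo))
    live = face_recognition.face_encodings(
        face_recognition.load_image_file(live_capture))
    if not enrolled or not live:
        raise ValueError("No face found in one of the images")
    # Euclidean distance between 128-dimensional face embeddings;
    # smaller distance means the faces are more likely the same person.
    distance = np.linalg.norm(enrolled[0] - live[0])
    return distance <= tolerance
```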

Disinformation is now taking on a whole new dimension thanks to generative AI. Text-to-image models such as DALL-E 2, Midjourney, and Stable Diffusion can produce incredibly realistic visuals that are easy to mistake for the real thing. This technology makes misleading visual information possible at scale, further blurring the line between truth and fiction.

Even though the Indian government has stated that it will not regulate artificial intelligence (AI), it has revealed that the impending Digital India Act (DIA) will include provisions to address disinformation produced by AI.

“We are not going to regulate AI but we will create guardrails. There will be no separate legislation but a part of DIA will address threats related to high-risk AI,” Union Minister Rajeev Chandrasekhar said. 

The draft hasn't been released yet, so it's unclear how it will address the challenge that generative AI poses to digital identity verification. 

How to identify deepfake images

According to Sandy Fliderman, president, CTO, and founder of Industry FinTech, it used to be simpler to spot fakes in older recordings thanks to changes in skin tone, odd blinking patterns, or jerky motions. But technology has advanced so much that many of the traditional "tells" are no longer valid. Today, red flags tend to show up as irregularities in lighting and shading, which deepfake technology is still working to perfect.

Humans can look for a number of indicators to distinguish authentic photographs from fraudulent ones, such as the following (a simple automated check for one of them appears after the list):

  • Irregularities in body parts and skin.
  • Shadowy areas around the eyes.
  • Unorthodox blinking patterns.
  • Spectacles with an unusual glare.
  • Unrealistic mouth movements.
  • Lip colour that differs unnaturally from the rest of the face.
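
As promised above, here is a simple automated check for unorthodox blinking. It is a hypothetical sketch built on OpenCV's bundled Haar cascades; treating "no eyes detected" as a possible blink and the 1 percent closed-eye threshold are rough assumptions, not a production detector.

```python
# Rough blink-rate heuristic for videos. Requires: pip install opencv-python
import cv2

# OpenCV ships these cascade files for frontal faces and open eyes.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def blinking_suspicious(path: str) -> bool:
    cap = cv2.VideoCapture(path)
    face_frames = 0    # frames where a face is visible
    closed_frames = 0  # face visible but no open eyes detected (possible blink)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) > 0:
            x, y, w, h = faces[0]
            face_frames += 1
            eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
            if len(eyes) == 0:
                closed_frames += 1
    cap.release()
    if face_frames == 0:
        return False  # no face, nothing to judge
    # People blink roughly 15-20 times a minute, so eyes are closed in a few
    # percent of frames; a near-zero fraction was a classic early-deepfake tell.
    return closed_frames / face_frames < 0.01
```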

Deepfake Apps Remain Popular in China Despite Crackdown

The Chinese government has recently launched a crackdown on deepfakes, a type of synthetic media that involves manipulating images, videos, or audio to make them appear to be real. Despite these efforts, however, several Chinese apps that utilize deepfakes are finding a large audience in the country.

Deepfakes have become a significant concern in recent years due to their potential to spread misinformation and manipulate public opinion. Cybersecurity experts warn that deepfakes can be used for nefarious purposes such as identity theft, fraud, and even political propaganda.

China's new laws aim to prevent the spread of false information and improve cybersecurity. However, the government's efforts have not deterred developers from creating deepfake apps that remain popular among Chinese consumers. These apps allow users to create deepfake videos and images with ease, making it possible to manipulate content in ways that were previously impossible.

While these apps are designed to be entertaining and harmless, they can pose significant risks to personal privacy and security. Deepfake technology is becoming increasingly advanced, and it is becoming more difficult to distinguish between real and fake content.

To protect themselves, users should exercise caution when using deepfake apps and be aware of the potential risks. They should also ensure that they are downloading apps from reputable sources and regularly update their devices to the latest software version to mitigate any vulnerabilities.

The proliferation of deepfake apps highlights the importance of continued vigilance in the fight against cyber threats. Governments, organizations, and individuals must work together to stay ahead of evolving threats and take steps to mitigate risks.

In short, China's crackdown has not dented the popularity of deepfake apps, and their continued spread underscores the need for sustained vigilance against evolving cyber threats.

Sophos: Hackers Avoid Deepfakes as Phishing Attacks Remain Effective

According to a prominent security adviser at the UK-based infosec business Sophos, the fear of deepfake scams is greatly exaggerated.

According to John Shier, senior security adviser for cybersecurity company Sophos, hackers may never need to utilize deepfakes on a large scale because there are other, more effective ways to deceive individuals into giving up personal information and financial data.

As per Shier, phishing and other types of social engineering are much more effective than deepfakes, which are artificial-intelligence-generated videos that imitate a real person's appearance and speech.

What are deepfakes?

Scammers frequently use technology to carry out identity theft. To demonstrate the risks of deepfakes, researchers in 2018 employed the technology to assume the identity of former US President Barack Obama and disseminate a hoax online.

Shier believes that while deepfakes may be overkill for some kinds of fraud, romance scams, in which a scammer develops a close online relationship with the victim to persuade them to send money, could make good use of the technology, since video gives an online identity inherent legitimacy.

With deepfake technology becoming simpler to access and apply, Eric Horvitz, chief scientific officer at Microsoft, has argued that in the near future "we won't be able to tell if the person we're chatting to on a video conversation is real or an impostor."

The expert also anticipates that deepfakes will become more common in several sectors, including romance scams. Making convincing false personas requires a significant commitment of time, effort, and devotion, and adding a deepfake does not require much more work. Shier is concerned that deepfaked romance frauds might become an issue if AI makes it possible for the con artist to operate on a large scale.

Shier was hesitant to put a date on industrialized deepfake bots, but he noted that the required technology is improving every year.

The researcher noted that "AI experts make it sound like it is still a few years away from the huge effect." In the interim, he expects well-funded criminal organizations to carry out the next level of compromise, deceiving victims into writing checks to fraudulent accounts.

Deepfakes have historically been employed primarily to produce sexualized images and movies, almost always featuring women.

Nevertheless, a Binance PR executive recently disclosed that fraudsters had developed a deepfaked clone of him that took part in Zoom calls and attempted to conduct cryptocurrency scams.

Deepfakes may not necessarily be a scammer's primary tactic, but security researchers at Trend Micro said last month that they are frequently used to augment other techniques. The lifelike computerized images have recently appeared in online advertisements, phony business meetings, and job-seeker frauds. The worry is that anybody could become a victim, because the internet is so pervasive.

Binance Executive: Scammers Created a 'Deep Fake Hologram' of Him to Fool Victims

 

According to a Binance public relations executive, fraudsters created a deep-fake "AI hologram" of him to scam cryptocurrency projects via Zoom video calls.

Patrick Hillmann, chief communications officer at the cryptocurrency exchange, stated he had received messages from project teams thanking him for meeting with them virtually over the past month to discuss listing their digital assets on Binance. This raised suspicions because Hillmann isn't involved in the exchange's listings and doesn't know the people messaging him.

"It turns out that a sophisticated hacking team used previous news interviews and TV appearances over the years to create a 'deep fake' of me," Hillmann said. "Other than the 15 pounds that I gained during COVID being noticeably absent, this deep fake was refined enough to fool several highly intelligent crypto community members."

In his write-up this week, Hillmann included a screenshot of a project manager asking him to confirm that he was, in fact, on a Zoom call. The hologram is the latest example of cybercriminals impersonating Binance employees and executives on Twitter, LinkedIn, and other social media platforms.

Scams abound in the cryptocurrency world.
Despite highlighting the wealth of security experts and systems at Binance, Hillmann insisted that users must be the first line of defence against scammers. He wrote that they can do so by being vigilant, using the Binance Verify tool, and reporting anything suspicious to Binance support.

“I was not prepared for the onslaught of cyberattacks, phishing attacks, and scams that regularly target the crypto community. Now I understand why Binance goes to the lengths it does,” he added.

The only proof Hillmann provided was a screenshot of a chat with someone asking him to confirm a Zoom call they had previously had. Hillmann responds: “That was not me,” before the unidentified person posts a link to somebody’s LinkedIn profile, telling Hillmann: “This person sent me a Zoom link, then your hologram was in the Zoom, please report the scam”.

The fight against deepfakes
Deepfakes are becoming more common in the age of misinformation and artificial intelligence, as technological advancements make convincing digital impersonations of people online more viable.

They are sometimes highly realistic fabrications that have sparked global outrage, particularly when used in a political context. A deepfake video of Ukrainian President Volodymyr Zelenskyy was posted online in March of this year, with the digital impersonation of the leader telling citizens to surrender to Russia.

On Twitter, one version of the deepfake was viewed over 120,000 times. In its fight against disinformation, the European Union has targeted deepfakes, recently requiring tech companies such as Google, Facebook, and Twitter to take countermeasures or face heavy fines.