In the age of rapidly evolving artificial intelligence (AI), a new breed of fraud has emerged, posing enormous risks to companies and their clients. AI-powered impersonation tools, capable of generating highly realistic voice and visual content, have become a major threat that CISOs must address.
This article explores the multifaceted risks of AI-generated impersonations, including their financial and security impacts. It also provides insights into risk mitigation and a look ahead at combating AI-driven scams.
AI-generated impersonations have ushered in a new era of scam threats. Fraudsters now use AI techniques such as voice cloning and deepfake video to create convincingly realistic audio and visual content. These enhanced impersonations make it harder for targets to distinguish genuine from fraudulent content, leaving them vulnerable to many types of fraud.
The rise of AI-generated impersonations has significantly escalated risks for companies and their clients.
Prevention tips: As AI technology evolves, so do the risks of AI-generated impersonations. Organizations need a multifaceted approach to mitigate these threats. Using sophisticated detection systems powered by AI can help identify impersonations, while rigorous employee training and awareness initiatives are essential. CISOs, AI researchers, and industry professionals must collaborate to build proactive defenses against these scams.
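To make the detection idea concrete, one common approach trains a lightweight classifier on acoustic features to separate genuine from synthetic speech. Below is a minimal illustrative sketch in Python; the feature choice, model, and file names are assumptions for demonstration, not a production detector.

```python
# Illustrative sketch: classify audio clips as genuine or AI-cloned using
# simple acoustic features. Real deepfake detectors are far more
# sophisticated; dataset paths and labels here are hypothetical.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(path: str) -> np.ndarray:
    """Summarise a clip as the mean of its MFCC coefficients."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical labelled corpus: 1 = genuine voice, 0 = AI-cloned voice.
paths = ["ceo_real_01.wav", "ceo_cloned_01.wav"]  # many more in practice
labels = [1, 0]

X = np.stack([clip_features(p) for p in paths])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Score an incoming voicemail before acting on its instructions.
suspect = clip_features("incoming_voicemail.wav").reshape(1, -1)
print("probability genuine:", clf.predict_proba(suspect)[0, 1])
```

In practice such a score would feed into a wider verification workflow (for example, call-back procedures for payment requests) rather than being trusted on its own.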
According to a report issued by the United Nations Office on Drugs and Crime (UNODC), dated October 7, criminal networks across Southeast Asia are increasingly turning to the messaging platform Telegram to conduct large-scale illegal activity. The report says that Telegram's large channels and seemingly insufficient moderation have made it a magnet for organised crime, transforming the way global illicit operations are run.
An Open Market for Stolen Data and Cybercrime Tools
The UNODC report illustrates how Telegram has become a trading platform for hacked personal data, including credit card numbers, passwords, and browser histories. Cybercriminals trade openly in Telegram's large channels with very little interference. The platform also hosts software and tools designed for cybercrime, such as deepfake-enabled fraud and malware built to copy and collect user data. Money-laundering services are likewise offered through unauthorised cryptocurrency exchanges operating on Telegram.
One example cited in the report was a Telegram advertisement offering to launder stolen USDT cryptocurrency, claiming to move $3 million in transactions daily, aimed at criminal organisations engaged in transnational organised crime in Southeast Asia. These underground markets are reportedly becoming ever more pervasive on Telegram, with vendors aggressively courting criminal organisations in the region.
Southeast Asia: A Hub of Fraud and Exploitation
According to the UNODC report, Southeast Asia has become an important base for international fraud operations. Much of the criminal activity in the region is tied to Chinese syndicates operating from heavily fortified compounds staffed by trafficked individuals forced into labour. The industry is estimated to generate between $27.4 billion and $36.5 billion annually.
The report comes as scrutiny of Telegram and its billionaire founder, Russian-born Pavel Durov, intensifies. Durov is facing legal fallout in France, where he has been charged with complicity in crimes committed on the platform through the distribution of illegal content. The case has sparked debate over tech companies' liability for crimes committed on their platforms and over the line between free speech and legal accountability.
Telegram has responded to the mounting pressure by promising cooperation with legal authorities. Durov stated that Telegram will share users' IP addresses and phone numbers in response to valid legal requests, and promised to remove certain platform features that have been widely misused for illicit activity. More than a billion people worldwide currently use Telegram, and the company has not yet responded publicly to the latest UNODC report.
Fertile Ground for Cybercrime
Benedikt Hofmann, UNODC Deputy Representative for Southeast Asia and the Pacific, warned that the platform's free access and anonymity have created an ideal setting for criminals, leaving consumers' personal data more exposed than ever to scams and fraudulent exploitation through Telegram.
Innovation in Criminal Networks
As organised crime in Southeast Asia grows, criminals are likely to arm themselves with newer and more varied technologies, most importantly malware, generative AI tools, and deepfakes, to commit sophisticated cyber-enabled fraud. Underscoring this adaptability, the UNODC investigation identified more than ten specialised service providers in the region offering deepfake technology for use in cybercrime.
Expanding Investigations Across Asia
Another concern raised in the UNODC report is the widening of investigations by law enforcement agencies elsewhere in Asia. South Korean authorities, for example, are investigating Telegram's role in cybercrimes including deepfake pornography. In India, a hacker used Telegram chatbots to leak private data from Star Health, one of the country's largest insurers, exposing medical records, identity documents, and even tax details. Star Health has sued Telegram.
A Turning Point in Cybersecurity
The UNODC report lays bare the scale of the challenge that encrypted messaging presents to the fight against organised crime. As criminal groups continue to take full advantage of platforms like Telegram, tech companies must stay vigilant, enforcing controls on illegal activity while balancing user privacy and safety.
In this age of rapid technical advancement, deepfakes have become a pressing worry in digital development. This article examines how deepfake technology works, its potential dangers, and its constantly evolving capabilities.
The manipulation of images and videos to create sexually explicit content may soon be a criminal offense across all European Union member states.
The first EU directive on violence against women is set to move through its final approval stage by April 2024.
With the help of AI programs, images are being modified to "undress" women without their consent.
What changes will the new directive bring? And what happens if women living in the European Union are the targets of manipulation, but the attacks originate in countries outside the European Union?
Creating sexual deepfakes is alarmingly easy: some websites are just a click away and provide their services free of charge.
According to the 2023 State of Deepfakes research, it takes around 25 minutes to create a sexual deepfake, at no cost; all that is needed is a single photo in which the face is clearly visible.
The research analyzed a sample of 95,000 deepfake videos between 2019 and 2023, disclosing a disturbing 550% increase over that period.
AI and deepfakes expert Henry Ajder says the people who use these stripping tools seek to humiliate, defame, and traumatize their targets, and in some cases to derive sexual pleasure.
“And it's important to state that these synthetic stripping tools do not work on men. They are explicitly designed to target women. So it's a good example of a technology that is explicitly malicious. There's nothing neutral about that,” says Ajder.
The makers of nude deepfakes search for their target's pictures "anywhere and everywhere" on the web. The pictures can be taken from your Instagram account, Facebook account, or even your WhatsApp display picture.
When female victims come across nude deepfakes of themselves, the instinctive societal response is to urge them to protect themselves. But experts say the solution lies not in prevention, but in taking immediate action to have the content removed.
Amanda Manyame, Digital Law and Rights Advisor at Equality Now, says: “I'm seeing that trend, but it's like a natural trend any time something digital happens, where people say don't put images of you online, but if you want to push the idea further it's like, don't go out on the street because you can have an accident.” She adds: “Unfortunately, cybersecurity can't help you much here because it's all a question of dismantling the dissemination network and removing that content altogether.”
Today, victims of nude deepfakes turn to laws such as the General Data Protection Regulation (the European Union's privacy law) and national defamation laws to seek justice and prevention.
Victims of such an offense are advised to take screenshots or video recordings of the deepfake content and use them as evidence when reporting it to the police and to the social media platforms where the incident occurred.
“There is also a platform called StopNCII, or Stop Non-Consensual Intimate Image abuse, where you can report an image of yourself and the website creates what is called a 'hash' of the content. AI is then used to automatically have the content taken down across multiple platforms,” says Manyame.
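The general technique behind such hash-based takedowns can be sketched with a perceptual hash: visually similar images produce similar hashes, so platforms can flag re-uploads without ever storing or sharing the image itself. Below is a minimal illustrative sketch in Python using the open-source imagehash library; this is an approximation of the idea, not StopNCII's actual algorithm, and the file names are hypothetical.

```python
# Minimal sketch of perceptual-hash matching, the general idea behind
# hash-based takedown systems. Illustrative only: StopNCII's real pipeline
# and its partner-platform integrations are not reproduced here.
from PIL import Image
import imagehash

def fingerprint(path: str) -> imagehash.ImageHash:
    """Compute a 64-bit perceptual hash of an image."""
    return imagehash.phash(Image.open(path))

def likely_match(h1, h2, max_distance: int = 8) -> bool:
    """Hashes within a small Hamming distance suggest the same image,
    even after resizing or recompression."""
    return (h1 - h2) <= max_distance

# Example: compare a reported image against a new upload.
reported = fingerprint("reported_image.jpg")  # hypothetical file names
upload = fingerprint("new_upload.jpg")
if likely_match(reported, upload):
    print("Upload matches a reported image; queue for review/removal.")
```

The key design property is that only the hash needs to be shared with platforms, so victims never have to hand over the image itself.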
The new directive aims to combat sexual violence against women: all 27 member states will follow the same set of laws to criminalize all forms of cyber-violence, including sexually motivated "deepfakes."
Amanda Manyame says “The problem is that you might have a victim who is in Brussels. You've got the perpetrator who is in California, in the US, and you've got the server, which is holding the content in maybe, let's say, Ireland. So, it becomes a global problem because you are dealing with different countries.”
Addressing this concern, the MEP and co-author of the latest directive explains that “what needs to be done in parallel with the directive” is to increase cooperation with other countries, “because that's the only way we can also combat crime that does not see any boundaries.”
"Unfortunately, AI technology is developing very fast, which means that our legislation must also keep up. So we need to revise the directive in this soon. It is an important step for the current state, but we will need to keep up with the development of AI,” Evin Incir further admits.