According to a report issued on October 7 by the United Nations Office on Drugs and Crime (UNODC), criminal networks across Southeast Asia are increasingly turning to the messaging platform Telegram to conduct large-scale illegal activity. The report says Telegram's large channels and seemingly weak moderation have made the platform attractive to organised crime, transforming the way global illicit operations are run.
An Open Market for Stolen Data and Cybercrime Tools
The UNODC report illustrates how Telegram has become a trading hub for hacked personal data, including credit card numbers, passwords, and browser histories. Cybercriminals trade openly in large Telegram channels with very little interference. The platform also hosts software and tools designed for cybercrime, such as deepfake technology used for fraud and malware built to harvest users' data. In addition, unlicensed cryptocurrency exchanges offer money-laundering services through Telegram.
One example cited was an advertisement posted on Telegram offering to move stolen USDT cryptocurrency, claiming $3 million in daily transactions, as a cash-out service for groups involved in transnational organised crime in Southeast Asia. According to the report, such underground markets are becoming increasingly pervasive on Telegram, with vendors aggressively seeking to reach criminal organisations in the region.
Southeast Asia: A Hub of Fraud and Exploitation
According to the UNODC report, Southeast Asia has become an important base for international fraud operations. Much of the criminal activity in the region is tied to Chinese syndicates operating from heavily fortified compounds and relying on trafficked individuals forced into labour. The industry is estimated to generate between $27.4 billion and $36.5 billion annually.
The report comes as scrutiny of Telegram and its billionaire founder, Russian-born Pavel Durov, intensifies. Durov faces charges in France of complicity in crimes committed on the platform, including allowing the distribution of illegal content. The case has sparked debate over tech companies' liability for crimes that happen on their platforms, and over the line between free speech and legal accountability.
Telegram has responded to the growing pressure by promising cooperation with legal authorities. Durov stated that Telegram will share users' IP addresses and phone numbers in response to valid legal requests, and promised to remove some features of the platform that have been widely misused for illicit activity. Telegram, which has more than a billion users worldwide, has not yet reacted publicly to the latest UNODC report.
Fertile Ground for Cybercrime
Benedikt Hofmann, UNODC Deputy Representative for Southeast Asia and the Pacific, highlighted the danger to consumers as personal data becomes increasingly exposed to fraud schemes through Telegram. The platform's free access and anonymity, he said, have created an ideal setting for criminals to exploit people's data and endanger their safety.
Innovation in Criminal Networks
The growth of organised crime in Southeast Asia suggests that criminal networks are arming themselves with new and more varied technologies, most notably malware, generative AI tools, and deepfakes, to commit sophisticated cyber-enabled fraud. Highlighting this innovation and adaptability, the UNODC investigation identified more than 10 specialised service providers in the region offering deepfake technology for use in cybercrime.
Expanding Investigations Across Asia
Another area of concern discussed in the UNODC report is the widening of investigations by law enforcement agencies elsewhere in Asia. South Korean authorities, for example, are investigating Telegram's role in cybercrimes including deepfake pornography. Meanwhile, in India, a hacker used Telegram chatbots to leak private data from Star Health, one of the country's largest insurers, exposing medical records, IDs, and even tax details. Star Health has sued Telegram.
A Turning Point in Cybersecurity
The UNODC report underscores the scale of the challenge that encrypted messaging platforms pose to the fight against organised crime. As criminal groups continue to take full advantage of platforms like Telegram, tech companies must stay vigilant in enforcing controls on illegal activity while balancing concerns over user privacy and safety.
In this age of rapid technological advancement, deepfakes have become a serious worry in digital development. This article delves into the workings of deepfake technology, exposing both its potential dangers and its constantly evolving capabilities.
The manipulation of images and videos to create sexually explicit content is set to become a criminal offense across all European Union member states.
The first EU directive on violence against women is expected to pass its final approval stage by April 2024.
With the help of AI programs, these images are being modified to undress women without their consent.
What changes will the new directive bring? And what happens when women living in the European Union are the targets of such manipulation, but the attacks originate in countries outside the European Union?
If you are wondering how easy it is to create sexual deepfakes, some websites are just a click away and provide free-of-cost services.
According to the 2023 State of Deepfakes research, creating a sexual deepfake takes around 25 minutes and costs nothing. All that is needed is a photo in which the face is clearly visible.
A sample of 95,000 deepfake videos was analyzed between 2019 and 2023, and the research found a disturbing 550% increase.
AI and deepfakes expert Henry Ajder says the people who use these stripping tools want to humiliate, defame, and traumatize their targets, and in some cases seek sexual pleasure.
“And it's important to state that these synthetic stripping tools do not work on men. They are explicitly designed to target women. So it's a good example of a technology that is explicitly malicious. There's nothing neutral about that,” says Ajder.
The makers of nude deepfakes search for their target's pictures "anywhere and everywhere" on the web. The pictures can be taken from your Instagram account, Facebook account, or even your WhatsApp display picture.
When female victims come across nude deepfakes of themselves, society's instinct is to tell them to protect themselves. But experts argue the solution lies not in prevention, but in acting immediately to remove the content.
Amanda Manyame, Digital Law and Rights Advisor at Equality Now, says “I'm seeing that trend, but it's like a natural trend any time something digital happens, where people say don't put images of you online, but if you want to push the idea further is like, don't go out on the street because you can have an accident.” The expert further says, “unfortunately, cybersecurity can't help you much here because it's all a question of dismantling the dissemination network and removing that content altogether.”
Today, victims of nude deepfakes turn to various laws, such as the General Data Protection Regulation (the European Union's privacy law) and national defamation laws, to seek justice and prevention.
Victims of such an offense are advised to take screenshots or video recordings of the deepfake content and use them as evidence when reporting it to the police and to the social media platforms where the incident occurred.
“There is also a platform called StopNCII, or Stop Non-Consensual Intimate Images, where you can report an image of yourself and then the website creates what is called a 'hash' of the content. And then, AI is then used to automatically have the content taken down across multiple platforms," says Manyame, the Digital Law and Rights Advisor at Equality Now.
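The "hash" described here is typically a perceptual fingerprint rather than an exact file checksum: it lets platforms recognize re-uploads of an image without the image itself ever being shared or stored. As a purely illustrative sketch (this is a generic "average hash" technique, not StopNCII's actual, unpublished algorithm), the idea can be shown in a few lines:

```python
# Illustrative sketch of perceptual "average hashing" (NOT StopNCII's
# actual algorithm). An image is reduced to a short bit string that
# stays stable under small changes, so re-uploads can be matched by
# comparing hashes instead of sharing the original picture.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string:
    one bit per pixel, 1 if the pixel is brighter than the image mean."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if v > mean else "0" for v in flat)

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy 4x4 "images": the second differs from the first by one pixel,
# e.g. noise introduced by re-compression on upload.
img_a = [[10, 200, 10, 200],
         [200, 10, 200, 10],
         [10, 200, 10, 200],
         [200, 10, 200, 10]]
img_b = [list(row) for row in img_a]
img_b[0][0] = 30  # slight change

ha, hb = average_hash(img_a), average_hash(img_b)
print(hamming(ha, hb))  # prints 0: the near-identical images still match
```

A Hamming distance of zero (or near zero) between two hashes flags the images as a likely match, which is what allows takedown systems to catch slightly altered re-uploads.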
The new directive aims to combat sexual violence against women: all 27 member states will follow the same set of laws criminalizing all forms of cyber-violence, including sexually motivated "deepfakes."
Amanda Manyame says “The problem is that you might have a victim who is in Brussels. You've got the perpetrator who is in California, in the US, and you've got the server, which is holding the content in maybe, let's say, Ireland. So, it becomes a global problem because you are dealing with different countries.”
Addressing this concern, the MEP and co-author of the latest directive explains that "what needs to be done in parallel with the directive" is to increase cooperation with other countries, "because that's the only way we can also combat crime that does not see any boundaries."
"Unfortunately, AI technology is developing very fast, which means that our legislation must also keep up. So we will need to revise the directive soon. It is an important step for the current state, but we will need to keep up with the development of AI,” Evin Incir further admits.
According to police officials, the victim, identified as Radhakrishnan, received a video call on WhatsApp from an unknown number. The swindlers reportedly used artificial intelligence tools to generate a deepfake video of a former colleague the victim knew. To further build trust, the scam caller cunningly mentioned names of the victim's former acquaintances.
During their conversation, the scammer made a desperate request for ₹40,000, citing a medical emergency involving a relative in hospital. Trusting the caller, Radhakrishnan sent the money via Google Pay.
Later, the caller requested another ₹40,000, which raised Radhakrishnan's suspicions. He reached out to his colleague directly and discovered, to his disbelief, that the entire incident was in fact an AI-based deepfake fraud and that he had been robbed. Realizing the fraud, he immediately filed a complaint with the Cyber Police.
The cyber cell promptly investigated the case and contacted the authorities of the bank holding the account where the money had been deposited. The account was traced back to a private bank located in Maharashtra.
This was the first case of AI-based deepfake fraud detected in the state, according to the Kerala Police Cyber Cell.
Modus Operandi: The scammers collect images from social media profiles and use artificial intelligence to create misleading videos. These con artists use AI technology in conjunction with details such as mutual friends' names to appear legitimate and deceive innocent individuals.
Similar cases of deepfakes and other AI-based frauds have raised concerns for cyber security professionals.
Experts have cautioned against such scams and offered some safety advice. Because most deepfakes have subpar resolution, people are urged to examine video quality closely. On close inspection, deepfake videos often give themselves away by abruptly ending or looping back to the beginning after a set length of time. Before conducting any financial transaction, it is also wise to contact the person separately to confirm they are genuinely the one on the video call.
The Chinese government has recently launched a crackdown on deepfakes, a type of synthetic media that involves manipulating images, videos, or audio to make them appear to be real. Despite these efforts, however, several Chinese apps that utilize deepfakes are finding a large audience in the country.
Deepfakes have become a significant concern in recent years due to their potential to spread misinformation and manipulate public opinion. Cybersecurity experts warn that deepfakes can be used for nefarious purposes such as identity theft, fraud, and even political propaganda.
China's new laws aim to prevent the spread of false information and improve cybersecurity. However, the government's efforts have not deterred developers from creating deepfake apps that remain popular among Chinese consumers. These apps allow users to create deepfake videos and images with ease, making it possible to manipulate content in ways that were previously impossible.
While these apps are designed to be entertaining and harmless, they can pose significant risks to personal privacy and security. Deepfake technology is becoming increasingly advanced, and it is becoming more difficult to distinguish between real and fake content.
To protect themselves, users should exercise caution when using deepfake apps and be aware of the potential risks. They should also ensure that they are downloading apps from reputable sources and regularly update their devices to the latest software version to mitigate any vulnerabilities.
The proliferation of deepfake apps highlights the importance of continued vigilance in the fight against cyber threats. Governments, organizations, and individuals must work together to stay ahead of evolving threats and take steps to mitigate risks.
In short, China's crackdown on deepfakes has not stopped the popularity of deepfake apps in the country. Cybersecurity experts warn that these apps can pose significant risks to personal privacy and security, and their continued proliferation underlines the need for sustained vigilance against cyber threats.
According to a prominent security adviser at the UK-based infosec firm Sophos, the fear of deepfake scams is entirely exaggerated.