
Voice Cloning and Deepfake Threats Escalate AI Scams Across India

 


The rapid advancement of AI technology in the past few years has brought society many benefits, but it has also enabled sophisticated cyber threats. India's explosive growth in digital adoption has made it one of the most sought-after targets for a surge in artificial intelligence-based scams. Today's cybercriminals are exploiting these emerging technologies in alarming ways, abusing the trust of unsuspecting individuals through voice-cloning schemes, deepfakes, and the manipulation of public figures' identities.

As AI capabilities become more refined, scammers keep finding new ways to deceive the public, and it is becoming increasingly difficult to tell genuine content from manipulated content. For cybersecurity experts and everyday users alike, the line between reality and digital fabrication is blurring, presenting a serious challenge to both professionals and the general public.

A string of high-profile cases involving voice cloning and deepfake technology in the country illustrates the severity of these threats and how many people have fallen victim to sophisticated methods of deception. The recent wave of AI-driven fraud shows that stronger security measures and greater public awareness are urgently needed to keep it from spreading.

In one case last year, a scammer swindled a 73-year-old retired government employee in Kozhikode, Kerala, out of 40,000 rupees using an AI-generated deepfake video. By blending voice and video manipulation, the fraudster created the illusion of an emergency that led to the victim's loss. But the problem runs much deeper than a single case.

In Delhi, cybercriminals used voice cloning to swindle 50,000 rupees from Lakshmi Chand Chawla, an elderly resident of Yamuna Vihar. On October 24, Chawla received a WhatsApp message claiming that his cousin's son had been kidnapped. The threat was made believable by an AI-cloned recording of the child's voice crying for help.

Panicked, Chawla transferred 20,000 rupees through Paytm. It was not until he contacted his cousin that he realized the child had never been in danger. These cases make clear how scammers are exploiting AI to win people's trust: they are no longer anonymous voices, they sound like friends or family members in crisis right now.

McAfee has released its 'Celebrity Hacker Hot List 2024', which ranks the Indian celebrities whose names generate the most "risky" search results on the Internet. This year's results made clear that the more viral an individual is, the more appealing their name is to cybercriminals, who exploit that fame by building malicious sites and scams around it. These scams have affected many people, leading to major data breaches, financial losses, and the theft of sensitive personal information.

Orhan Awatramani, also known as Orry, tops the list for India. His rapid rise in popularity, his association with other high-profile celebrities, and the heavy media attention he attracts make him an appealing target for cybercriminals. His case illustrates how criminals can use the flood of unverified information about new and upcoming public figures to lure consumers searching for the latest news.

Actor and singer Diljit Dosanjh is reportedly being targeted by fraudsters in connection with his upcoming 'Dil-Luminati' concert tour, set to begin next month. This is unfortunately a common pattern: intense fan interest and the surge in search volume around large-scale events often give rise to fraudulent ticketing websites, discount and resale schemes, and phishing scams.

As generative artificial intelligence and deepfakes have gained traction, the cybersecurity landscape has become even more complex. Several celebrities have been misrepresented, and this is affecting their careers. Throughout the year, Alia Bhatt has been the subject of several deepfake incidents, while actors Ranveer Singh and Aamir Khan were falsely shown endorsing political parties in election-related deepfakes. Prominent celebrities such as Virat Kohli and Shahrukh Khan have also appeared in deepfake content designed to promote betting apps.

Scammers are using tactics such as malicious URLs, deceptive messages, and AI-generated image, audio, and video scams to take advantage of fans' curiosity. This leads to financial losses, damages the reputations of the affected celebrities, and erodes consumer confidence. It marks a disturbing shift in how fraud is carried out, and as alarming as voice-cloning scams may seem, the danger does not end there.

Deepfake technology keeps pushing the boundaries, blending reality with digital manipulation at a pace that makes detection ever more difficult. What began with voice cloning has advanced in recent years into real-time video deception. Facecam.ai was one of the most striking examples: it let users create live-streamed deepfake videos from just a single image. The tool caused a great deal of buzz, highlighting how convincingly, and how easily, a person's face can be mimicked in real time.

By simply uploading a photo, users could seamlessly swap faces in a live video stream without downloading anything. Despite its popularity, the tool was shut down after a backlash over its potential for misuse. That does not mean the problem has been resolved: the rise of artificial intelligence has produced numerous platforms offering sophisticated capabilities for creating deepfake videos and manipulating identities, posing serious risks to digital security.

Although some platforms like Facecam.ai, which gained popularity for letting users create live-streamed deepfake videos from a single image, have been taken down over misuse concerns, other tools continue to operate with dangerous potential. Notably, projects like Deep-Live-Cam are still thriving, enabling individuals to swap faces during live video calls. This technology allows users to impersonate anyone, whether a celebrity, a politician, or even a friend or family member. What is particularly alarming is the growing accessibility of these tools: as deepfake technology becomes more user-friendly, even people with minimal technical skills can produce convincing digital forgeries.

The ease with which such content can be created heightens the potential for abuse, turning what might seem like harmless fun into tools for fraud, manipulation, and reputational harm. The dangers posed by these tools extend far beyond simple pranks. As the availability of deepfake technology spreads, the opportunities for its misuse expand exponentially. Impersonation in financial transactions and identity theft are just a few examples of the potential harm. Public opinion, personal relationships, and professional reputations are also at risk of manipulation, especially as these tools become more widespread and harder to regulate.

The global implications of these scams are already being felt. In one high-profile case, scammers in Hong Kong used a deepfake video to impersonate the Chief Financial Officer of a company, leading to a financial loss of more than $25 million. This case underscores the magnitude of the problem: with the rise of such advanced technology, virtually anyone—not just high-profile individuals—can become a victim of deepfake-related fraud. As artificial intelligence continues to blur the lines between real and fake, society is entering a new era where deception is not only easier to execute but also harder to detect. 

The consequences of this shift are profound, as it fundamentally challenges trust in digital interactions and the authenticity of online communications. To address this growing threat, experts are discussing potential solutions such as Personhood Credentials—a system designed to verify and authenticate that the individual behind a digital interaction is, indeed, a real person. One of the most vocal proponents of this idea is Srikanth Nadhamuni, the Chief Technology Officer of Aadhaar, India's biometric-based identity system.

Nadhamuni co-authored a paper in August 2024 titled "Personhood Credentials: Artificial Intelligence and the Value of Privacy-Preserving Tools to Distinguish Who is Real Online." In it, he argues that as deepfakes and voice cloning become increasingly prevalent, tools like Aadhaar, which relies on biometric verification, could play a critical role in ensuring the authenticity of digital interactions. Nadhamuni believes that implementing personhood credentials can help safeguard online privacy and prevent AI-generated scams from deceiving people. In a world where artificial intelligence is being weaponized for fraud, systems rooted in biometric verification offer a promising approach to distinguishing real individuals from digital impersonators.
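To give a rough sense of what credential-based verification of this kind could look like, the following is a minimal, hypothetical Python sketch using the widely available cryptography library: an issuer binds a signing key to a verified person, and a service later checks a signed challenge instead of trusting the audio or video it receives. This illustrates only the general idea, not the actual scheme proposed in the paper.

# Illustrative sketch only: a toy challenge-response check standing in for the
# general idea of a "personhood credential". Not the cited paper's design.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuance: after verifying a real person out of band (e.g. via biometrics),
# an issuer binds a key pair to that person and registers the public key.
credential_key = Ed25519PrivateKey.generate()
registered_public_key = credential_key.public_key()

# Verification: a service sends a fresh random challenge during an interaction,
# and the credential holder signs it with the credential key.
challenge = os.urandom(32)
signature = credential_key.sign(challenge)

try:
    registered_public_key.verify(signature, challenge)
    print("Challenge signed by the registered credential holder")
except InvalidSignature:
    print("Verification failed; treat the interaction as untrusted")

In a real deployment the registered public key would come from the issuer rather than being generated alongside the private key, but the flow of challenge, signature, and verification is the part the sketch is meant to convey.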

AI 'Kidnapping' Scams: A Growing Threat

In a worrying trend, cybercriminals have started using artificial intelligence (AI) to carry out virtual abduction schemes. These scams, which rely on chatbots and AI voice-cloning techniques, have become much more prevalent recently and pose a serious threat to the public.

The emergence of AI-powered voice cloning tools has provided cybercriminals with a powerful tool to execute virtual kidnapping scams. By using these tools, perpetrators can mimic the voice of a target's family member or close acquaintance, creating a sense of urgency and fear. This psychological manipulation is designed to coerce the victim into complying with the scammer's demands, typically involving a ransom payment.

Moreover, advancements in natural language processing and AI chatbots have made it easier for cybercriminals to engage in conversation with victims, making the scams more convincing and sophisticated. These AI-driven chatbots can simulate human-like responses and engage in prolonged interactions, making victims believe they are indeed communicating with their loved ones in distress.

The impact of these AI 'kidnapping' scams can be devastating, causing immense emotional distress and financial losses. Victims who fall prey to these scams often endure intense fear and anxiety, genuinely believing that their loved ones are in danger. The scammers take advantage of this vulnerability to extort money or personal information from the victims.

To combat this growing threat, law enforcement agencies and cybersecurity experts are actively working to raise awareness and develop countermeasures. It is crucial for individuals to be vigilant and educate themselves about the tactics employed by these scammers. Recognizing the signs of a virtual kidnapping scam, such as sudden demands for money, unusual behavior from the caller, or inconsistencies in the story, can help potential victims avoid falling into the trap.

A proactive approach is also required from technology businesses and AI developers. Strict security measures must be put in place to stop the abuse of AI voice-cloning technology, and algorithms that identify and block malicious chatbots and synthetic audio can further deter attackers.
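As one illustration of what such detection might look like, here is a minimal, hypothetical Python sketch of a synthetic-speech classifier built on MFCC audio features using librosa and scikit-learn. The file names and labels are placeholders, and real deepfake-audio detectors rely on far richer features and models; the sketch only shows the general approach of learning to separate genuine recordings from AI-cloned ones.

# Toy sketch: summarize each audio clip as its mean MFCC vector and train a
# simple classifier to separate genuine recordings from AI-cloned ones.
# File names and labels below are placeholders, not real data.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def mfcc_features(path, sr=16000, n_mfcc=20):
    """Load an audio clip and summarize it as its mean MFCC vector."""
    audio, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Hypothetical labelled clips: 0 = genuine voice, 1 = AI-cloned voice.
clips = ["real_1.wav", "real_2.wav", "cloned_1.wav", "cloned_2.wav"]
labels = [0, 0, 1, 1]

X = np.stack([mfcc_features(p) for p in clips])
model = LogisticRegression().fit(X, labels)

# Score an incoming clip; a high probability suggests synthetic speech.
suspect = mfcc_features("incoming_call.wav").reshape(1, -1)
print("Probability the clip is AI-generated:", model.predict_proba(suspect)[0, 1])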

AI Voice-Cloning Technology Stokes Fear Among the Public


A mother in America heard a voice on her phone that seemed chillingly real: her daughter, apparently sobbing, before a man's voice took over and demanded a ransom. But the girl on the phone was an AI clone, and the abduction was fake.

According to some cybersecurity experts, the biggest threat posed by AI is its ability to erase the line between reality and fiction, handing cybercriminals a simple and efficient tool for spreading misinformation.

AI Voice-cloning Technologies

“Help me, mom, please help me,” Jennifer DeStefano, an Arizona resident, heard from the other end of the line.

She says she was "100 percent" convinced that it was her 15-year-old daughter, the voice audibly distressed. Her daughter was away on a skiing trip at the time.

"It was never a question of who is this? It was completely her voice... it was the way she would have cried," told DeStefano to a local television station in April.

The fraudster then took over the call, which came from a private number, and demanded up to $1 million.

The AI-powered deception fell apart as soon as DeStefano made contact with her daughter. But the horrifying incident, which is currently the subject of a police investigation, highlighted how fraudsters can abuse AI voice clones.

Nor is this an isolated case. Fraudsters are employing remarkably convincing AI voice-cloning technologies, publicly accessible online, to steal from victims by impersonating their family members in a new generation of schemes that has alarmed US authorities.

Another such case comes from Chicago, where 19-year-old Eddie's grandfather received a call from someone whose voice sounded just like his grandson's, claiming he needed money after a 'car accident'.

The hoax, reported by McAfee Labs, was so persuasive that before the deceit was revealed, his grandfather scrambled to gather money and even considered remortgaging his home.

"Because it is now easy to generate highly realistic voice clones... nearly anyone with any online presence is vulnerable to an attack[…]These scams are gaining traction and spreading," Hany Farid, a professor at the UC Berkeley School of Information, told AFP.

Gal Tal-Hochberg, group chief technology officer at the venture capital firm Team8, told AFP: "We're fast approaching the point where you can't trust the things that you see on the internet."

"We are going to need new technology to know if the person you think you're talking to is actually the person you're talking to," he said.  

Is Your Child Really in Danger? Be Wary of Family-Emergency Voice-Cloning Frauds

 

If you receive an unusual phone call from a family member in trouble, be cautious: the other person on the line could be a scammer impersonating a family member using AI voice technologies. The Federal Trade Commission has issued a warning about fraudsters using commercially available voice-cloning software for family emergency scams. 

These scams have been around for a long time, and they involve the perpetrator impersonating a family member, usually a child or grandchild. The fraudster will then call the victim and claim that they are in desperate need of money to deal with an emergency. According to the FTC, artificial intelligence-powered voice-cloning software can make the impersonation scam appear even more authentic, duping victims into handing over their money.

“All he (the scammer) needs is a short audio clip of your family member's voice—which he could get from content posted online—and a voice-cloning program. When the scammer calls you, he’ll sound just like your loved one,” the FTC says in the Monday warning.

The FTC did not immediately respond to a request for comment, leaving it unclear whether the US regulator has noticed an increase in voice-cloning scams. However, the warning comes just a few weeks after The Washington Post detailed how scammers are using voice-cloning software to prey on unsuspecting families.

In one case, scammers used the technology to impersonate a Canadian couple's grandson, who claimed to be in jail. In another, fraudsters used voice-cloning technology to steal $15,449 from a couple duped into believing their son had been arrested.

The fact that voice-cloning services are becoming widely available on the internet isn't helping matters. As a result, these scams may well become more prevalent over time, though at least a few AI-powered voice-generation providers are developing safeguards to prevent potential abuse. The FTC says there is a simple way for consumers to protect themselves from a family-emergency scam.

“Don’t trust the voice. Call the person who supposedly contacted you and verify the story. Use a phone number you know is theirs,” the FTC stated. “If you can’t reach your loved one, try to get in touch with them through another family member or their friends.”

Targeted victims should also consider asking the supposed family member in trouble a personal question that only the real person would know how to answer.