
Protect Yourself from AI Scams and Deepfake Fraud


In today’s tech-driven world, scams have become increasingly sophisticated, fueled by advancements in artificial intelligence (AI) and deepfake technology. Falling victim to these scams can result in severe financial, social, and emotional consequences. Over the past year alone, cybercrime victims have reported average losses of $30,700 per incident. 

As the holiday season approaches, millennials and Gen Z shoppers are particularly vulnerable to scams, including deepfake celebrity endorsements. Research shows that one in five Americans has unknowingly purchased a product promoted through deepfake content, with the number rising to one in three among individuals aged 18-34. 

Sharif Abuadbba, a deepfake expert at CSIRO’s Data61 team, explains how scammers leverage AI to create realistic imitations of influencers. “Deepfakes can manipulate voices, expressions, and even gestures, making it incredibly convincing. Social media platforms amplify the impact as viewers share fake content widely,” Abuadbba states. 

Cybercriminals often target individuals as entry points to larger networks, exploiting relationships with family, friends, or employers. Identity theft can also harm professional reputations and financial credibility. To counter these threats, experts suggest practical steps to protect yourself and your loved ones. Scammers are increasingly impersonating loved ones through texts, calls, or video to request money. 

With AI voice cloning making such impersonations more believable, a pre-agreed safe word can serve as a verification tool. Jamie Rossato, CSIRO’s Chief Information Security Officer, advises, “Never transfer funds unless the person uses your special safe word.” If you receive suspicious calls, particularly from someone claiming to be a bank or official institution, verify their identity. 

Lauren Ferro, a cybersecurity expert, recommends calling the organization directly using its official number. “It’s better to be cautious upfront than to deal with stolen money or reputational damage later,” Ferro adds. Identity theft is the most reported cybercrime, making multi-factor authentication (MFA) essential. MFA adds an extra layer of protection by requiring both a password and a one-time verification code. Experts suggest using app-based authenticators like Microsoft Authenticator for enhanced security.
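For readers curious about what an authenticator app actually does under the hood: the rotating six-digit codes are typically time-based one-time passwords (TOTP), defined in RFC 6238 on top of the HOTP algorithm from RFC 4226. The following is a minimal illustrative sketch, not how any particular app is implemented; the secret shown is the RFC test value, and real secrets are exchanged via the QR code you scan during setup:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    # Counter is encoded as an 8-byte big-endian integer.
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte picks a 4-byte window.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a 30-second window."""
    return hotp(secret, int(time.time()) // step, digits)

# RFC 4226 test secret (placeholder only -- never hard-code a real secret).
secret = b"12345678901234567890"
print(totp(secret))  # six-digit code, changes every 30 seconds
```

Because the code depends on a shared secret plus the current time, a scammer who steals your password still cannot log in without the device that holds the secret.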

Real-time alerts from your banking app can help detect unauthorized transactions. While banks monitor unusual activities, personal notifications allow you to respond immediately to potential scams. The personal information and media you share online can be exploited to create deepfakes. Liming Zhu, a research director at CSIRO, emphasizes the need for caution, particularly with content involving children. 

Awareness remains the most effective defense against scams. Staying informed about emerging threats and adopting proactive security measures can significantly reduce your risk of falling victim to cybercrime. As technology continues to evolve, safeguarding your digital presence is more important than ever. By adopting these expert tips, you can navigate the online world with greater confidence and security.

BMJ Warns: Deepfake Doctors Fueling Health Scams on Social Media


Deepfake videos featuring some of Britain's most well-known television doctors are circulating on social media to sell fraudulent products, according to a report by the British Medical Journal (BMJ).

Doctors like Hilary Jones, Rangan Chatterjee, and the late Michael Mosley are being used in these manipulated videos to endorse remedies for various health conditions, as reported by journalist Chris Stokel-Walker.

The videos promote supposed solutions to issues such as high blood pressure and diabetes, often advertising supplements like CBD gummies. "Deepfaking" refers to the use of AI to create a digital likeness of a real person, overlaying their face onto another body, leading to realistic but false videos.

John Cormack, a retired Essex-based doctor, has been working with the BMJ to assess the scope of these fraudulent deepfake videos online. He found that the videos are particularly prevalent on platforms like Facebook. “It's far more cost-effective to invest in video creation than in legitimate research and development,” Cormack said.

Hilary Jones, a general practitioner and TV personality, voiced his concerns over the growing issue of his identity being deepfaked. He employs a specialist to locate and remove these videos, but the problem persists. “Even when we take them down, they reappear almost immediately under different names,” he remarked.

While many deepfakes may appear convincing at first, there are several ways to identify them:
  • Pay attention to small details: AI often struggles with rendering eyes, mouths, hands, and teeth accurately. Misaligned movements or blinking irregularities can be a sign.
  • Look for inconsistencies: Glasses with unnatural glare or facial hair that appears artificial are common red flags, according to experts at MIT.
  • Consider the overall appearance: Poor lighting, awkward posture, or blurred edges are common indicators of deepfake content, according to Norton Antivirus.
  • Verify the source: If the video is from a public figure, ensure it has been posted by a credible source or an official account.
The increase in deepfakes has sparked wider concerns, particularly regarding their use in creating revenge porn and manipulating political elections.

A spokesperson for Meta, the social media giant behind Facebook and Instagram, said: “We will be investigating the examples highlighted by the British Medical Journal.

“We don’t permit content that intentionally deceives or seeks to defraud others, and we’re constantly working to improve detection and enforcement.

“We encourage anyone who sees content that might violate our policies to report it so we can investigate and take action.”