Police forces in the United Kingdom are alerting the public to a surge in online fraud cases, warning that criminals are now exploiting artificial intelligence and deepfake technology to impersonate relatives, friends, and even public figures. The warning, issued by West Mercia Police, stresses how the technology is being used to deceive people into sharing sensitive information or transferring money.
According to the force’s Economic Crime Unit, criminals are constantly developing new strategies to exploit internet users. With the rapid evolution of AI, scams are becoming more convincing and harder to detect. To help people stay informed, officers have shared a list of common fraud-related terms and explained how each method works.
One of the most alarming developments is the use of AI-generated deepfakes: realistic videos or voice clips that make it appear as if a known person is speaking. These are often used in romance scams, investment frauds, or emotional blackmail schemes to gain a victim’s trust before asking for money.
Another growing threat is keylogging, where fraudsters trick victims into downloading malicious software that secretly records every keystroke. This allows criminals to steal passwords, banking details, and other private information. The software is often installed through fake links or phishing emails that look legitimate.
Account takeover, or ATO, remains one of the most common types of identity theft. Once scammers access an individual’s online account, they can change login credentials, reset security settings, and impersonate the victim to access bank or credit card information.
Police also warned about SIM swapping, a method in which criminals gather personal details from social media or scam calls and use them to convince mobile providers to transfer a victim’s number to a new SIM card. This gives the fraudster control over the victim’s messages and verification codes, making it easier to access online accounts.
Other scams include courier fraud, where offenders pose as police officers or bank representatives and instruct victims to withdraw money or purchase expensive goods. A “courier” then collects the money or goods directly from the victim’s home. In many cases, scammers even ask for bank cards and PINs.
The force’s notice also included reminders about malware and ransomware: malicious programs that can steal or lock files. Criminals may also persuade victims to install legitimate remote-access tools such as AnyDesk, giving them full control of the victim’s device.
Additionally, spoofing, in which phone numbers, email addresses, or website links are disguised to appear genuine, continues to deceive users. Fraudsters often combine spoofing with AI to make fake communications appear even more authentic.
Police advise the public to remain vigilant, verify any unusual requests, and avoid clicking on suspicious links. Anyone seeking more information or help can visit trusted resources such as Action Fraud or Get Safe Online, which provide updates on current scams and guidance on reporting cybercrime.
Experts have also observed a rise in suspicious activity involving AI-generated media, warning that threat actors exploit GenAI to “defraud… financial institutions and their customers.”
FINRA, Wall Street’s self-regulator, has warned that deepfake audio and video scams could cause losses of up to $40 billion in the finance sector by 2027.
Biometric safeguards are no longer enough on their own. A 2024 Regula study found that 49% of businesses across industries such as fintech and banking had faced deepfake fraud attacks, with average losses of $450,000 per incident.
As these numbers rise, it becomes increasingly important to understand how deepfake-driven fraud can be prevented in order to protect customers and the financial industry globally.
Last year, an Indonesian bank reported more than 1,100 attempts to bypass the digital KYC checks in its loan-application process within three months, according to cybersecurity firm Group-IB.
Threat actors combined AI-powered face-swapping with virtual-camera tools to defeat the bank’s liveness-detection controls, despite the bank’s “robust, multi-layered security measures.” According to Forbes, losses “from these intrusions have been estimated at $138.5 million in Indonesia alone.”
The AI-driven face-swapping tools let attackers replace a target’s facial features with those of another person and then exploit “virtual camera software to manipulate biometric data, deceiving institutions into approving fraudulent transactions,” Group-IB reports.
Scammers first gather personal data via malware, the dark web, social networking sites, or phishing scams. That data is then used to mimic identities.
After acquiring the data, scammers use deepfake technology to alter identity documents, swapping photos, modifying details, and re-creating entire IDs to evade KYC checks.
Threat actors then feed prerecorded deepfake videos through virtual cameras, simulating real-time interactions to slip past security checks.
This shows that traditional verification mechanisms are proving inadequate against advanced AI scams. One study found that a deepfake attempt was made every five minutes, and that only 0.1% of people could reliably spot deepfakes.
In response to the rising threat of artificial intelligence being used for financial fraud, U.S. lawmakers have introduced a new bipartisan Senate bill aimed at curbing deepfake-related scams.
The bill, called the Preventing Deep Fake Scams Act, has been brought forward by Senators from both political parties. If passed, it would lead to the formation of a new task force headed by the U.S. Department of the Treasury. This group would bring together leaders from major financial oversight bodies to study how AI is being misused in scams, identity theft, and data-related crimes and what can be done about it.
The proposed task force would include representatives from agencies such as the Federal Reserve, the Consumer Financial Protection Bureau, and the Federal Deposit Insurance Corporation, among others. Its goal would be to closely examine the growing use of AI in fraudulent activities and provide the U.S. Congress with a detailed report within a year.
This report is expected to outline:
• How financial institutions can better use AI to stop fraud before it happens,
• Ways to protect consumers from being misled by deepfake content, and
• Policy and regulatory recommendations for addressing this evolving threat.
One of the key concerns the bill addresses is the use of AI to create fake voices and videos that mimic real people. These deepfakes are often used to deceive victims—such as by pretending to be a friend or family member in distress—into sending money or sharing sensitive information.
According to official data from the Federal Trade Commission, over $12.5 billion was stolen through fraud in the past year—a 25% increase from the previous year. Many of these scams now involve AI-generated messages and voices designed to appear highly convincing.
While this particular legislation focuses on financial scams, it adds to a broader legislative effort to regulate the misuse of deepfake technology. Earlier this year, the U.S. House passed a bill targeting nonconsensual deepfake pornography. Meanwhile, law enforcement agencies have warned that fake messages impersonating high-ranking officials are being used in various schemes targeting both current and former government personnel.
Another Senate bill, introduced recently, seeks to launch a national awareness program led by the Commerce Department. This initiative aims to educate the public on how to recognize AI-generated deception and avoid becoming victims of such scams.
As digital fraud evolves, lawmakers are urging financial institutions, regulators, and the public to work together in identifying threats and developing solutions that can keep pace with rapidly advancing technologies.