As artificial intelligence (AI) technology advances, cybercriminals are creating sophisticated "deepfake" scams that inflict significant financial losses on the companies they target. In January 2024, an employee of a Hong Kong-based firm was instructed during a video call, apparently attended by her chief financial officer and other colleagues, to send US$25 million to fraudsters.
In fact, every other participant on the call was a fake: the fraudsters had used deepfake technology to replicate the likenesses of the people she believed she was speaking with, and she was fooled into sending the money.
Scams continue to proliferate, and artificial intelligence and other sophisticated tools are raising the risk that people will be victimized.
Americans lost an estimated $12.5 billion to online fraud last year, up from $10.3 billion in 2022, according to the FBI's Internet Crime Complaint Center.
The true figure could be far higher: while investigating one case, the FBI found that only 20% of the victims had reported the crimes to authorities.
Scammers keep inventing new ruses and techniques, and artificial intelligence is playing an increasingly prominent role.
According to a recent FBI analysis, 39% of victims last year were swindled using videos that had been manipulated or doctored to misrepresent what someone did or said. Such videos have been used to perpetrate investment frauds, romance swindles, and other types of scams.
In several recent and widely reported instances, fraudsters have used deepfake technology to modify publicly available videos and other footage in attempts to cheat people out of their money.
Romero noted that artificial intelligence lets scammers process far larger quantities of data and therefore try many more password combinations when attempting to break into victims' accounts. For this reason, it is extremely important that users choose strong passwords, change them regularly, and enable two-factor authentication.
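To see why that extra computing capacity matters, the arithmetic below is a minimal sketch comparing the number of possible passwords for two alphabet-and-length choices against an assumed guessing rate. The 10-billion-guesses-per-second figure is purely illustrative, not a measured benchmark.

```python
def search_space(charset_size: int, length: int) -> int:
    """Number of possible passwords for a given alphabet and length."""
    return charset_size ** length

GUESSES_PER_SECOND = 1e10  # assumed attacker throughput, for illustration only

# Compare a short lowercase-only password with a longer mixed-character one.
for charset, length in [(26, 8), (94, 12)]:
    total = search_space(charset, length)
    years = total / GUESSES_PER_SECOND / (3600 * 24 * 365)
    print(f"alphabet {charset}, length {length}: "
          f"{total:.2e} combinations, ~{years:.2e} years to exhaust")
```

At these assumed rates, an 8-character lowercase password falls in seconds, while a 12-character mixed-character password would take on the order of a million years to exhaust, which is why length and character variety matter more than clever substitutions.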
The FBI's Internet Crime Complaint Center received more than 880,000 complaints last year from Americans who were victims of online fraud.
According to Social Catfish, 96% of all money lost to scams is never recovered, largely because most scammers live overseas, beyond the practical reach of victims and law enforcement.
The increasing prevalence of cryptocurrency in criminal activities has made it a favoured medium for illicit transactions, particularly investment-related crimes. Fraudsters often exploit the anonymity and decentralized nature of digital currencies to orchestrate schemes that demand payment in cryptocurrency. A notable tactic includes enticing victims into fraudulent recovery programs, where perpetrators claim to assist in recouping funds lost in prior cryptocurrency scams, only to exploit the victims further.
The surge in such deceptive practices complicates efforts to differentiate between legitimate and fraudulent communications. Falling victim to sophisticated scams, such as those involving deepfake technology, can result in severe consequences. The repercussions may extend beyond significant financial losses to include legal penalties for divulging sensitive information and potential harm to a company’s reputation and brand integrity.
In light of these escalating threats, organizations are being advised to proactively assess their vulnerabilities and implement comprehensive risk management strategies. This entails adopting a multi-faceted approach to enhance security measures, which includes educating employees on the importance of maintaining a sceptical attitude toward unsolicited requests for financial or sensitive information. Verifying the legitimacy of such requests can be achieved by employing code words to authenticate transactions.
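For organizations that want to automate such checks, the snippet below is a minimal sketch of the programmatic analog of a spoken code word: a challenge-response exchange over a shared secret using HMAC. The names here (SHARED_SECRET, issue_challenge, and so on) are hypothetical illustrations, and the secret would need to be distributed out of band, never over the channel being verified.

```python
import hashlib
import hmac
import secrets

# Hypothetical shared secret, distributed out of band in advance.
SHARED_SECRET = b"replace-with-a-secret-shared-in-person"

def issue_challenge() -> bytes:
    """The verifier sends a random nonce to the party requesting funds."""
    return secrets.token_bytes(16)

def respond(challenge: bytes, secret: bytes = SHARED_SECRET) -> str:
    """The legitimate requester answers with an HMAC over the nonce."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(challenge: bytes, answer: str, secret: bytes = SHARED_SECRET) -> bool:
    """Constant-time comparison avoids leaking information via timing."""
    expected = hmac.new(secret, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, answer)

nonce = issue_challenge()
print(verify(nonce, respond(nonce)))  # True only with the correct secret
```

Because the nonce is fresh for every request, a fraudster who records one exchange cannot replay it later, unlike a static code word that could be overheard and reused.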
Furthermore, companies should consider implementing advanced security protocols and tools such as multi-factor authentication and encryption. Establishing and enforcing stringent policies and procedures governing financial transactions are also essential steps in mitigating exposure to fraud. Such measures can help fortify defenses against the evolving landscape of cybercrime, ensuring that organizations remain resilient in the face of emerging threats.
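To illustrate the mechanics behind one of those tools, here is a minimal time-based one-time password (TOTP, RFC 6238) generator, the algorithm behind most authenticator apps used for multi-factor authentication. It uses only Python's standard library; the example secret is purely illustrative, since real secrets are issued during enrollment.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # current 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # illustrative secret, not a real credential
```

A server holding the same secret computes the same six digits for the current 30-second window, so a stolen password alone is not enough to log in.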