The latest tactic adopted by threat actors is deepfakes, in which a cybercriminal manipulates audio and visual media to carry out extortion and other frauds. In some cases, fraudsters have used AI-generated voices to impersonate someone close to the targeted victim, making it extremely difficult for victims to realize they are being defrauded.
According to ABC13, the most recent instance involved an 82-year-old Texan named Jerry, who fell victim to a scammer posing as a sergeant with the San Antonio Police Department. The con artist told the victim that his son-in-law had been arrested and that Jerry would need to pay $9,500 in bond to secure his release. Jerry was then duped into paying an additional $7,500 to complete the supposed process. The victim, who lives in a senior living facility, is considering taking a job to recover the money he lost, while the criminals remain at large.
This is not, however, the first time AI has been used for fraud. According to Reuters, a Chinese man was defrauded of more than half a million dollars earlier this year after a cybercriminal posed as his friend using an AI face-swapping tool and tricked him into transferring the money.
Cybercriminals often rely on similar tactics, such as sending morphed media of a person close to the victim in an attempt to extract money under the guise of an emergency. Impostor frauds are not new, but this is a contemporary take on them. The FTC reported in February 2023 that American residents lost around $2.6 billion to this type of scam in 2022, and the introduction of generative AI has significantly raised the stakes.
Beyond ignoring calls or texts from suspicious numbers, one safeguard is to establish a unique codeword with loved ones, so that the person on the other end of the line can be verified. Another is to contact the person directly to confirm whether they are really in trouble. Experts likewise advise hanging up and calling the individual back, or at least double-checking the claims before responding.
Unfortunately, scammers employ a variety of AI-based attacks beyond voice cloning. Extortion using deepfaked content is a related threat: malicious actors have recently made multiple attempts to blackmail people with graphic images generated by artificial intelligence. A report by The Washington Post documented numerous cases in which such deepfakes upended the lives of young people. In these situations, it is advisable to contact law enforcement right away rather than trying to handle the matter alone.