The digital sphere has witnessed a surge in AI-fueled tax fraud, presenting a grave threat to individuals and organisations alike. Over the past year and a half, the capabilities of artificial intelligence tools have advanced rapidly, outpacing government efforts to curb their malicious applications.
LexisNexis' Government group CEO, Haywood Talcove, recently exposed a new wave of AI tax fraud in which personally identifiable information (PII) such as birthdates and Social Security numbers is exploited to file fraudulent tax returns. The criminals behind these schemes turn to the dark web to obtain convincing counterfeit driver's licences that feature their own photograph but the victim's details.
The process begins with the theft of PII through methods such as phishing, impersonation scams, malware attacks, and data breaches — all of which have been exacerbated by AI. With so much personal information available online, scammers can construct a false identity with disturbingly little effort.
Equipped with these forged licences, scammers leverage facial recognition technology or live video calls with trusted referees to circumvent security measures on platforms like IRS.gov. Talcove emphasises that this impersonation scam extends beyond taxes, putting any agency using trusted referees at risk.
The scammers then employ AI tools to craft flawless tax returns, minimising the chances of an audit. After inputting their own banking details, they receive a fraudulent refund, exploiting not just the Internal Revenue Service but potentially all 43 U.S. states that impose income taxes.
Talcove's insights underscore the urgency of addressing the issue and implementing robust controls to counter the accelerating pace of AI-driven cybercrime.
Sumsub's report on the tenfold increase in global deepfake incidents further underlines the broader implications of AI in fraud. Deepfake technology, which manipulates text, images, and audio, gives criminals unprecedented speed, specificity, personalisation, scale, and accuracy, driving a surge in identity-hijacking incidents.
As individuals and government entities grapple with this new era of fraud, proactive safety measures to secure personal data become imperative. First, exercise caution when sharing sensitive details online, steering clear of phishing attempts, impersonation scams, and other cyber threats that could compromise your PII. Regularly monitor your financial accounts and promptly investigate any suspicious activity or transactions.
As an additional layer of defence, enable multi-factor authentication wherever possible. This approach requires not only a password but also a second form of identification, significantly strengthening the protection of your accounts.