When voice authentication firm Pindrop Security advertised an opening for a senior engineering role, one resume caught their attention. The candidate, a Russian developer named Ivan, appeared to be a perfect fit on paper. But during the video interview, something felt off—his facial expressions didn’t quite match his speech. It turned out Ivan wasn’t who he claimed to be.
According to Vijay Balasubramaniyan, CEO and co-founder of Pindrop, Ivan was a fraudster using deepfake software and other generative AI tools in an attempt to secure a job through deception.
“Gen AI has blurred the line between what it is to be human and what it means to be machine,” Balasubramaniyan said. “What we’re seeing is that individuals are using these fake identities and fake faces and fake voices to secure employment, even sometimes going so far as doing a face swap with another individual who shows up for the job.”
While businesses have always had to defend against hackers exploiting technical vulnerabilities, a new kind of threat has emerged: job applicants who use AI to fake their identities and gain employment. From forged resumes and AI-generated IDs to scripted interview responses, these candidates are part of a fast-growing trend that cybersecurity experts warn is here to stay.
In fact, a Gartner report predicts that by 2028, 1 in 4 job seekers globally will be using some form of AI-generated deception.
The implications for employers are serious. Fraudulent hires can introduce malware, exfiltrate confidential data, or simply draw salaries under false pretenses.
A Growing Cybercrime Strategy
This problem is especially acute in cybersecurity and crypto startups, where remote hiring makes it easier for scammers to operate undetected. Ben Sesser, CEO of BrightHire, noted a massive uptick in these incidents over the past year.
“Humans are generally the weak link in cybersecurity, and the hiring process is an inherently human process with a lot of hand-offs and a lot of different people involved,” Sesser said. “It’s become a weak point that folks are trying to expose.”
This isn’t a problem confined to startups. Earlier this year, the U.S. Department of Justice disclosed that over 300 American companies had unknowingly hired IT workers tied to North Korea. The impersonators used stolen identities, operated via remote networks, and allegedly funneled salaries back to fund the country’s weapons program.
Criminal Networks & AI-Enhanced Resumes
Lili Infante, founder and CEO of Florida-based CAT Labs, says her firm regularly receives applications from suspected North Korean agents.
“Every time we list a job posting, we get 100 North Korean spies applying to it,” Infante said. “When you look at their resumes, they look amazing; they use all the keywords for what we’re looking for.”
To filter out such applicants, CAT Labs relies on ID verification companies like iDenfy, Jumio, and Socure, which specialize in detecting deepfakes and verifying authenticity.
The issue has expanded far beyond North Korea. Experts like Roger Grimes, a longtime computer security consultant, report similar patterns with fake candidates originating from Russia, China, Malaysia, and South Korea.
Ironically, some of these impersonators end up excelling in their roles.
“Sometimes they’ll do the role poorly, and then sometimes they perform it so well that I’ve actually had a few people tell me they were sorry they had to let them go,” Grimes said.
Even KnowBe4, the cybersecurity firm Grimes works with, accidentally hired a North Korean engineer who used AI to alter a stock photo and passed through multiple background checks. The deception was uncovered only after suspicious network activity was flagged.
What Lies Ahead
Despite a few high-profile incidents, most hiring teams still aren’t fully aware of the risks posed by deepfake job applicants.
“They’re responsible for talent strategy and other important things, but being on the front lines of security has historically not been one of them,” said BrightHire’s Sesser. “Folks think they’re not experiencing it, but I think it’s probably more likely that they’re just not realizing that it’s going on.”
As deepfake tools become increasingly realistic, experts believe the problem will grow harder to detect. Fortunately, companies like Pindrop are already developing video authentication systems to fight back. It was one such system that ultimately exposed Ivan.
Although Ivan claimed to be in western Ukraine, his IP address revealed he was operating from a Russian military base near the North Korean border, according to the company.
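Pindrop has not published how its system works, but the basic signal described here, a mismatch between where a candidate claims to be and where their interview traffic actually originates, is simple to illustrate. The sketch below is not Pindrop's method; it assumes the MaxMind geoip2 Python library with a local GeoLite2-City database file, and the location_mismatch helper and sample IP are hypothetical.

```python
# Illustrative only: flag applicants whose interview IP geolocates to a
# different country than the one they claim to be working from.
# Assumes the MaxMind geoip2 library and a GeoLite2-City.mmdb database.
import geoip2.database

def location_mismatch(interview_ip: str, claimed_country_iso: str,
                      db_path: str = "GeoLite2-City.mmdb") -> bool:
    """Return True if the IP geolocates outside the claimed country."""
    with geoip2.database.Reader(db_path) as reader:
        record = reader.city(interview_ip)
        observed = record.country.iso_code  # e.g. "RU"
        return observed is not None and observed != claimed_country_iso

# Example: a candidate claiming to join from Ukraine ("UA") would be flagged
# if the interview traffic resolves to another country.
# flagged = location_mismatch("203.0.113.42", "UA")
```

In practice such a check is only one weak signal (VPNs and proxies defeat it easily), which is why vendors combine it with liveness and deepfake detection on the video and audio streams themselves.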
Pindrop, backed by Andreessen Horowitz and Citi Ventures, originally focused on detecting voice-based fraud. Today, it is looking to expand into video authentication to secure hiring and other digital interactions.
“We are no longer able to trust our eyes and ears,” Balasubramaniyan said. “Without technology, you’re worse off than a monkey with a random coin toss.”