AI-generated deepfakes that target face biometric systems pose a serious threat to the reliability of identity verification and authentication. Gartner, Inc. predicts that by 2026, 30% of enterprises will no longer consider these technologies dependable on their own, a forecast that underscores how urgently this emerging threat must be addressed.
Deepfakes, synthetic images that convincingly imitate genuine human faces, are becoming an increasingly powerful weapon in the cybercriminal's toolbox as artificial intelligence advances. They circumvent security mechanisms by exploiting the static nature of the physical attributes used for authentication, such as fingerprints, facial shape, and eye size.
Moreover, deepfakes can mimic human speech accurately enough to potentially evade voice recognition software, adding another layer of complexity to the security problem. This shifting landscape exposes a serious weakness in biometric security technology and underscores the need for enterprises to reassess the effectiveness of their current defenses.
According to Gartner analyst Akif Khan, significant advances in AI over the past decade have made it possible to create synthetic faces that closely resemble genuine ones. Because these deepfakes reproduce the facial features of real individuals, they open up new avenues for cyberattacks and can bypass biometric verification systems.
As Khan notes, these developments have significant ramifications. When organizations cannot determine whether the person attempting access is authentic or merely a highly convincing deepfake, they may quickly begin to doubt the integrity of their identity verification procedures. This ambiguity puts the security protocols that many organizations rely on at serious risk.
Deepfakes introduce complex challenges to biometric security measures by exploiting static data—unchanging physical characteristics such as eye size, face shape, or fingerprints—that authentication devices use to recognize individuals. The static nature of these attributes makes them vulnerable to replication by deepfakes, allowing unauthorized access to sensitive systems and data.
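To make this weakness concrete, the sketch below shows a hypothetical, heavily simplified matcher (real systems use trained face-embedding networks and carefully tuned thresholds): any input whose features land close enough to the stored static template is accepted, whether it comes from a live person or a rendered deepfake.

```python
import numpy as np

MATCH_THRESHOLD = 0.9  # hypothetical similarity cutoff

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Compare two feature vectors; values near 1.0 mean 'very similar'."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(enrolled_template: np.ndarray, presented_features: np.ndarray) -> bool:
    """Naive static-template check: accepts ANY input whose features fall
    close enough to the stored template. A live face and a sufficiently
    good deepfake rendering are indistinguishable to this logic."""
    return cosine_similarity(enrolled_template, presented_features) >= MATCH_THRESHOLD

# Toy demonstration with made-up embeddings:
enrolled = np.array([0.2, 0.9, 0.4, 0.1])
live_scan = enrolled + np.random.normal(0, 0.01, size=4)  # genuine user
deepfake = enrolled + np.random.normal(0, 0.01, size=4)   # synthetic replica
print(verify(enrolled, live_scan))  # True
print(verify(enrolled, deepfake))   # also True: the matcher cannot tell
```

The point of the sketch is that nothing in the comparison asks whether the presented features came from a living subject; without liveness checks, replicating the static template is all an attacker needs.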
Additionally, the technology underpinning deepfakes has evolved to replicate human voices with remarkable accuracy. By dissecting audio recordings of speech into smaller fragments, AI systems can recreate a person’s vocal characteristics, enabling deepfakes to convincingly mimic someone’s voice for use in scripted or impromptu dialogue.
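The "dissecting audio into smaller fragments" step can be pictured with a short sketch (hypothetical and heavily simplified; production voice-cloning models learn from richer spectral features such as mel spectrograms, not raw FFT frames):

```python
import numpy as np

def frame_audio(signal: np.ndarray, frame_len: int = 400, hop: int = 160) -> np.ndarray:
    """Slice a waveform into short overlapping frames (~25 ms at 16 kHz),
    the 'smaller fragments' that voice-cloning models learn from."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    return np.stack([signal[i * hop : i * hop + frame_len] for i in range(n_frames)])

def frame_spectra(frames: np.ndarray) -> np.ndarray:
    """Per-frame magnitude spectrum: a crude stand-in for the spectral
    features a cloning system would actually model."""
    windowed = frames * np.hanning(frames.shape[1])
    return np.abs(np.fft.rfft(windowed, axis=1))

# Toy one-second, 16 kHz "recording" (a modulated tone standing in for speech):
sr = 16_000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 140 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
spectra = frame_spectra(frame_audio(voice))
print(spectra.shape)  # (98, 201): one spectral snapshot per fragment
```

Each fragment captures a snapshot of the speaker's vocal characteristics at one instant; given enough such snapshots, a model can stitch together new speech in that voice.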
The U.S. military last used the Secure Electronic Enrollment Kit (SEEK II) device at the center of this story more than a decade ago, near Kandahar, Afghanistan. The bulky black rectangular device, used to scan fingerprints and irises, was switched off and put away.
That is, until Matthias Marx, a German security researcher, purchased the device on eBay for $68 in August 2022, about half the listed price. For less than $70, Marx had unintentionally acquired sensitive, identifying information on thousands of people: according to The New York Times, the device held the fingerprint and iris scans of 2,632 people, along with names, nationalities, photographs, and extensive descriptions.
From the war zone to the government equipment sale to the eBay delivery, it seems that not a single Pentagon official had the foresight to remove the memory card from the particular SEEK II that Marx ended up with. The researcher told the Times, "The irresponsible handling of this high-risk technology is unbelievable […] It is incomprehensible to us that the manufacturer and former military users do not care that used devices with sensitive data are being hawked online."
According to the Times, most of the data on the SEEK II was gathered on people whom the U.S. military had designated as terrorists or wanted persons. Others, however, were ordinary citizens who had been stopped at Middle Eastern checkpoints, or even people who had aided the American government.
All of that information could be used to locate someone, making the devices and their data extremely hazardous if they fell into the wrong hands. The Taliban, for instance, would have a clear motive for tracking down and punishing anyone who had cooperated with U.S. forces in the region.
Marx and his co-researchers from the Chaos Computer Club, which claims to be the largest hacker association in Europe, purchased the SEEK II and five other biometric capture devices, all from eBay. The group then analyzed the devices for potential flaws, following a 2021 report by The Intercept about military technology seized by the Taliban.
Although Marx had set out from the start to assess the risks associated with biometric devices, he was nonetheless alarmed by the extent of what he found. The Times reports that a second SEEK II purchased by the CCC, last used in Jordan in 2013, contained data on U.S. troops, likely gathered during training, in addition to the thousands of individuals identified on the device last used in Afghanistan.
*Image: Phone prompt 2FA*