
Deepfake detection & ID fraud protection

Karthik Mani


CPTO, Documents & Biometrics

Throughout history, villains have used disguise to mask their appearance, deceive others and avoid detection. In the present day, artificial intelligence and deepfake technology have accelerated new threats of impersonation and identity cloaking.

In our recent global survey of fraud prevention professionals, many believe that GenAI, deepfake biometrics and deepfake documents will be the biggest trends in identity verification and fraud over the next three to five years.

Big bad identity fraud

Thousands of GenAI tools are instantly available. In criminal hands, the immense power of AI to create hyper-realistic images, video and audio can disguise digital appearance. Hugely popular face swap apps have escalated the risk of fake personas and fraud, making it harder to trust who we see online.

These rapid developments have exposed vulnerabilities in remote identity proofing systems which are designed to test for a genuine ID and the authentic presence of the owner. In this more dangerous world, however, advanced identity proofing technologies have evolved rapidly to answer two essential security questions:

  1. Can I trust that the person I see on the screen is real?
  2. Can I trust that this ID is not fake?


Can I trust that the person I see on the screen is real?

Ever since remote biometric authentication took the world beyond passwords and PIN codes, there have been attempts to undermine biometric security.

Wherever face, fingerprints or voice are used as a unique identifier, identity fraud will follow in the hope of bypassing onboarding safeguards to gain access. Identity proofing systems must detect and defend against this, and as these attacks become more sophisticated with deepfake media so too must the defences against them.

Biometric authentication

Face recognition requires us to upload a selfie so that our features can be captured, analysed, measured and mapped to create a ‘faceprint’ which can be algorithmically compared to the photo that appears on our ID. The result is a score indicating the degree to which they match and authenticate our identity.
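For illustration, here is a minimal sketch of that comparison step in Python, assuming the 'faceprints' have already been extracted by a face recognition model. The embedding size and threshold below are illustrative placeholders, not any specific vendor's values:

```python
import numpy as np

def match_score(selfie_embedding: np.ndarray, id_photo_embedding: np.ndarray) -> float:
    """Cosine similarity between two face embeddings ('faceprints').

    Both vectors are assumed to come from the same face recognition
    model; the feature extraction step itself is out of scope here.
    """
    a = selfie_embedding / np.linalg.norm(selfie_embedding)
    b = id_photo_embedding / np.linalg.norm(id_photo_embedding)
    return float(np.dot(a, b))  # 1.0 = identical direction, ~0.0 = unrelated

# Hypothetical 512-dimensional embeddings, for illustration only.
rng = np.random.default_rng(0)
selfie, id_photo = rng.normal(size=512), rng.normal(size=512)
score = match_score(selfie, id_photo)
MATCH_THRESHOLD = 0.6  # illustrative; real thresholds are tuned per deployment
print(f"match score: {score:.3f}, verified: {score >= MATCH_THRESHOLD}")
```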

Presentation attacks are a common security threat. Like other hacks, these spoofing attempts seek to exploit our credentials – in this case, our biometrics.


What is a presentation attack?

A presentation attack targets biometric authentication systems by presenting fake biometric data, with the imposter purporting to be their victim.

In facial recognition, this biometric security threat might come from the presentation of deepfake video or images to the camera, or even a mask. Even digital deepfakes and manipulations are physically 'presented' to the camera, typically from a second mobile or laptop screen.

Liveness detection

One method of spoof-proofing biometric authentication is liveness detection.

Passive liveness detection checks a face is real and live without requiring us to turn our head, smile or blink into a device camera. Instead, this technology spots subtle indicators like skin texture, blood flow under the skin and natural lighting to confirm we are genuinely present and not a deepfake image or screen replay.

This process is faster and requires less processing power than active liveness detection, as it needs only a single image rather than a video feed to provide liveness assurance to the highest level of the ISO standard for presentation attack detection (ISO/IEC 30107-3).
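As a simplified sketch, a passive liveness check can be thought of as a single function from one frame to a score. The `model` below stands in for a trained presentation attack detection classifier (one trained on cues like skin texture and lighting); no real product API is implied:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class LivenessResult:
    score: float   # estimated probability that a live person is present
    is_live: bool

def passive_liveness_check(frame: np.ndarray, model, threshold: float = 0.9) -> LivenessResult:
    """Single-frame passive liveness check.

    `model` is a placeholder for a trained presentation attack
    detection classifier; it maps an HxWx3 image to a liveness
    probability in [0, 1]. The threshold is illustrative.
    """
    score = float(model(frame))
    return LivenessResult(score=score, is_live=score >= threshold)

# Illustration with a stub model that always returns 0.97.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
print(passive_liveness_check(frame, model=lambda f: 0.97))
```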


Deepfake video detection

As the power to manipulate and imitate biometric markers increases, advanced identity proofing systems continue to build defences against deepfakes.


What are deepfakes?

Deepfakes are rendered digital media that convincingly mimic real people. With the growth in GenAI technology, Generative Adversarial Networks (GANs) that can create hyper-realistic but entirely fake images and video have proliferated and elevated the security threat for identity-proofing systems.

Popular face-swapping apps like Deep-Live-Cam, Reface and Magic Hour offer out-of-the-box tools for generating passable facial deepfakes that can be injected into an unprotected biometric authentication process.

Deepfakes are increasingly used to attack biometric security. The most common deepfake technique is face swapping: a simple and highly accessible spoof that can insert the face of any real person into an image or video.

These deepfakes can be 'injected' into unprotected identity proofing systems using a hardware or software hack that bypasses the principal device camera. This is known as a ‘video injection attack.’


What is a video injection attack?

Unlike presentation attacks which involve 'presenting' manipulated video to the device camera from a second screen, injection attacks set out to bypass the camera completely by ‘injecting’ deepfake media directly into the authentication process.

The attack involves hacking into the hardware or software of the device camera used for biometric authentication and replacing those signals with deepfake footage delivered from an external or virtual camera.

There are two ways that advanced identity proofing systems protect against video injection attacks using deepfake media.

Deepfake media analysis

Deepfake media analysis can assess image or video frames for telltale signs of impersonation. Subtle clues in pixel structure and lighting, or synchronisation failure in lip movement and mouth shape can all reveal evidence of face-swapping apps.

Accelerated by machine learning, deepfake media analysis offers a powerful defence against imposters launching video injection attacks on identity proofing systems.
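As a hedged sketch of the aggregation idea: score each frame with a trained detector (stubbed out here) and flag the video on its worst frame, since face-swap artefacts may only surface briefly:

```python
import numpy as np

def deepfake_video_score(frames, frame_classifier, flag_threshold=0.5):
    """Score a video for face-swap artefacts, frame by frame.

    `frame_classifier` is a stand-in for a trained detector that maps
    one frame to the probability it is synthetic, e.g. from blending
    boundaries, pixel structure or lighting cues. Aggregating with the
    max catches manipulations that only surface in a few frames.
    """
    per_frame = np.array([frame_classifier(f) for f in frames])
    video_score = float(per_frame.max())
    return video_score, video_score >= flag_threshold

# Stub classifier for illustration; a real one would be a trained model.
frames = [np.zeros((256, 256, 3), dtype=np.uint8) for _ in range(30)]
score, flagged = deepfake_video_score(frames, frame_classifier=lambda f: 0.1)
print(f"deepfake score: {score:.2f}, flagged: {flagged}")
```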


Injection attack detection

Injection attack detection monitors the integrity of the camera feed used in the biometric authentication process.

This advanced biometric security analyses camera hardware and software for signs of non-standard cameras or system code modifications that would indicate a ‘man-in-the-middle’ attack, ensuring the identity proofing process remains secure.

There are several ISO standards for security in information management systems that cover the performance of video injection attack detection.
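To illustrate one small piece of this, the sketch below matches capture-device metadata against known virtual camera signatures. Real systems verify far more (driver signing, frame timing, OS-level attestation); the denylist, device names and function here are illustrative assumptions:

```python
# Illustrative denylist of well-known virtual camera product names.
VIRTUAL_CAMERA_SIGNATURES = {"obs virtual camera", "manycam", "droidcam", "v4l2loopback"}

def camera_feed_trusted(device_name: str, driver_path: str | None = None) -> bool:
    """Heavily simplified integrity check on the capture device.

    Shows only the idea of matching device metadata against virtual
    camera signatures. How `device_name` is obtained is
    platform-specific and out of scope here.
    """
    name = device_name.lower()
    if any(signature in name for signature in VIRTUAL_CAMERA_SIGNATURES):
        return False
    if driver_path and "loopback" in driver_path.lower():
        return False
    return True

print(camera_feed_trusted("FaceTime HD Camera"))  # True
print(camera_feed_trusted("OBS Virtual Camera"))  # False
```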

Can I trust that this ID is not fake?

Another essential security question for identity proofing systems.

The dark web is a marketplace for identity fraud. Counterfeiting tutorials, software, deepfake document templates and even mail-order ID services are available. This easy access means even low-skilled imposters can fabricate high-tech documents to commit identity fraud.


Deepfake IDs on the dark web

GenAI is also driving counterfeit identity documents and an underground network of fake ID factories on the dark web. Hyper-realistic images and high-quality printers have made homemade Photoshop renderings a thing of the past and created a niche industry serving anyone intent on committing identity fraud.

Tamper detection

Digital and physical document tampering and counterfeiting are not new. The spectrum from cheapfakes to deepfakes spans everything from simple to sophisticated manipulation designed to match stolen or synthetic identities.

Secure identity proofing systems must include detection techniques that pick up suspicious anomalies, inconsistencies or absent security features.

Face swapping

Digital or physical substitution of the original photo and face displayed on the identity document is common in identity fraud. Pixel-by-pixel checks for signs of tampering or inconsistency between face and identity data are essential.
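One classic pixel-level cue is error level analysis (ELA): re-saving a JPEG and diffing it against the original highlights regions with a different compression history, such as a pasted-in portrait. A minimal sketch using Pillow, offered as one illustrative signal rather than a complete check:

```python
from io import BytesIO
from PIL import Image, ImageChops  # pip install Pillow

def error_level_analysis(image_path: str, quality: int = 90) -> Image.Image:
    """Error level analysis (ELA): one classic pixel-level tamper cue.

    Re-saving a JPEG and diffing it against the original highlights
    regions whose compression history differs from the rest of the
    document, such as a substituted photo.
    """
    original = Image.open(image_path).convert("RGB")
    buffer = BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer).convert("RGB")
    return ImageChops.difference(original, resaved)

# Bright, blocky regions in the output often indicate edited areas:
# error_level_analysis("id_card.jpg").save("id_card_ela.png")
```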


Text tampering

Imposters will often alter identity data, such as the name, date of birth or document number, to match their story. Secure tamper detection can spot inconsistencies in font, spacing, alignment, patterns and security features.
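A concrete, cheap example of such a consistency check is the ICAO 9303 check digit used in a passport's machine-readable zone (MRZ): altering a document number or date of birth usually breaks it. A small sketch:

```python
def mrz_check_digit(field: str) -> int:
    """ICAO 9303 check digit used in a passport or ID card MRZ.

    Digits keep their value, letters map to 10-35 and the '<' filler
    is 0; values are weighted 7, 3, 1 repeating and summed modulo 10.
    Altered document numbers or dates of birth usually break these
    check digits, making them a cheap first-pass text-tamper signal.
    """
    weights = (7, 3, 1)
    total = 0
    for i, ch in enumerate(field):
        if ch.isdigit():
            value = int(ch)
        elif ch.isalpha():
            value = ord(ch.upper()) - ord("A") + 10
        else:  # '<' filler character
            value = 0
        total += value * weights[i % 3]
    return total % 10

# ICAO 9303 specimen document number 'L898902C3' has check digit 6.
print(mrz_check_digit("L898902C3") == 6)  # True
```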

Document presence

Like biometric presentation attacks, document tampering detection must determine whether an ID is genuinely present. Picture quality, resolution and texture are all indicators of the absence of an original and evidence of digital or printed fakes.
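As a rough sketch, two cheap recapture cues can be computed directly from the image: overall sharpness (blurry prints and screen photos score low) and high-frequency energy (moiré from photographing a screen scores high). The statistics below are illustrative; the thresholds a real system applies are learned:

```python
import numpy as np
from scipy import ndimage  # pip install scipy

def recapture_signals(gray: np.ndarray) -> dict:
    """Two cheap cues that a document image is a recapture.

    Low Laplacian variance suggests a blurry print or screen photo;
    a high share of spectral energy away from low frequencies can
    indicate moiré from photographing a screen. Real systems fuse
    many more signals through learned models.
    """
    sharpness = ndimage.laplace(gray.astype(float)).var()
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = gray.shape
    cy, cx = h // 2, w // 2
    low_freq = spectrum[cy - h // 8: cy + h // 8, cx - w // 8: cx + w // 8]
    high_freq_ratio = 1.0 - low_freq.sum() / spectrum.sum()
    return {"sharpness": sharpness, "high_freq_ratio": high_freq_ratio}

# Synthetic image stands in for a captured document photo.
gray = np.random.default_rng(1).integers(0, 255, (480, 640)).astype(np.uint8)
print(recapture_signals(gray))
```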

Deepfake ID detection

Just as artificial intelligence has advanced deepfake fraud, it has also advanced identity fraud protection.

Smart identity proofing systems are increasingly building a multi-layered identity fraud defence against deepfake documents, combining biometric authentication with AI-powered data mining to detect ID anomalies and other fraud signals.


Identity intelligence

Fake documents may appear genuine, but does the document data match up? Criminals expect to encounter siloed identity security, so a global network of identity intelligence increases safeguards against fake documents.

Identity intelligence networks, like GBG Trust, securely combine millions of identity data records, applying expert pattern matching, data mining and machine learning to trust-test ID data and its application history before onboarding a new customer.

Tapping into a global data intelligence network can reveal useful insights without breaking data privacy. Document number, issue and expiry date, name, address and other data can be tested for authenticity and consistent appearance in combination.

Suspicious document data anomalies will also surface, as will a high velocity of applications using the same ID: strong fraud signals that can block deepfake fraud.
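To make the velocity signal concrete, here is a minimal sliding-window sketch; the 24-hour window and limit of three applications are illustrative values, not GBG Trust's actual policy:

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

class VelocityMonitor:
    """Counts applications per document number in a sliding window.

    A burst of onboarding attempts reusing one ID is a classic fraud
    signal. The window and limit defaults are illustrative only.
    """
    def __init__(self, window: timedelta = timedelta(hours=24), limit: int = 3):
        self.window, self.limit = window, limit
        self.seen: dict[str, deque] = defaultdict(deque)

    def record(self, document_number: str, at: datetime) -> bool:
        """Record one application; return True if velocity looks suspicious."""
        timestamps = self.seen[document_number]
        timestamps.append(at)
        while timestamps and at - timestamps[0] > self.window:
            timestamps.popleft()
        return len(timestamps) > self.limit

monitor = VelocityMonitor()
start = datetime(2024, 1, 1, 9, 0)
for minutes in (0, 10, 25, 40):
    flagged = monitor.record("P1234567", start + timedelta(minutes=minutes))
print(f"suspicious velocity: {flagged}")  # True: four uses of one ID in 40 minutes
```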

The Big Bad Wolf of deepfakes has opened new frontiers in the fight against identity fraud. Advanced identity proofing technologies are fighting back, however, with enhanced defences against counterfeit faces and fake IDs combined with smart use of identity data intelligence.

Frequently Asked Questions

What is a presentation attack?

A presentation attack targets biometric authentication systems by presenting fake biometric data, with the imposter purporting to be their victim. In a presentation attack on face recognition, this security threat might come from the 'presentation' of deepfake video or images to the camera from a mobile or laptop screen.

What is a deepfake?

Deepfakes are rendered digital media that convincingly mimic real people. With the growth in GenAI technology, Generative Adversarial Networks (GANs) that can create hyper-realistic but entirely fake images and video have proliferated and elevated the security threat for identity-proofing systems.

What is a video injection attack?

Unlike presentation attacks which involve 'presenting' manipulated video to the device camera from a second screen, video injection attacks set out to bypass the camera altogether, ‘injecting’ deepfake images or video directly into the biometric authentication process using an external or virtual camera.
