How Yoti can help combat digital injection attacks

Matt Prendergast · 3 min read

As the use of online verification grows, so does the temptation for bad actors to develop ways to exploit the process. As a provider of verification services, we must show businesses, regulators and governments that we have robust anti-spoofing technology, checks and processes. An emerging but rapidly growing threat to verification services is the digital injection attack.


What are injection attacks?

Injection attacks are a form of attack on remote verification services. The most common attempts to spoof these systems are direct attacks, in which an artefact is presented to the camera. Examples of direct attacks include:

  • Paper image
  • 2D and 3D masks 
  • Screen image
  • Video imagery

Direct attacks are an attempt to convince a verification system that a person is real, older, or someone else altogether. Our facematch and liveness technologies use layers of anti-spoofing to determine that the person is real (not a picture or mask, for example) and that they are who they say they are.

An injection attack, by contrast, is an indirect attack that attempts to bypass liveness detection. Rather than presenting something to the live camera, the attacker injects an image or video designed to pass authentication in place of the genuine camera feed. Using free software and limited technical ability, a bad actor can overwrite the camera's image or video stream with pre-prepared material. It is a rapidly emerging threat to digital verification services.
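To illustrate why injection succeeds against an unprotected pipeline, here is a minimal, hypothetical sketch (the function and data are invented for illustration, not part of any real verification system): a server that simply trusts the bytes it receives has no way to tell whether they came from a live camera or a pre-prepared file.

```python
# Hypothetical sketch: a naive verification endpoint that trusts
# whatever "camera" bytes the client sends. Without any proof of
# origin, a live capture and an injected file look identical.

def naive_verify(frame_bytes: bytes) -> bool:
    """Stand-in liveness check: accepts any non-empty frame."""
    return len(frame_bytes) > 0  # nothing here proves the frame's origin

live_frame = b"bytes captured from a real camera"
injected_frame = b"bytes from a pre-prepared deepfake file"

# Both frames pass identically — the server sees only bytes, not their source.
print(naive_verify(live_frame), naive_verify(injected_frame))
```

This is the gap that point-of-capture protections aim to close: the server needs evidence that the frame was produced by the genuine capture code, not merely that it looks like a face.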


How can Yoti help prevent injection attacks?

We have developed a patent-pending solution that makes injection attacks considerably more difficult for imposters. It is a new way of adding security at the point an image is taken for a liveness or facematch check.

The solution has two parts. As well as obfuscating the capture code, Yoti adds a cryptographic signature key. A potential hacker therefore needs both to reverse engineer the obfuscation and to infer or guess the signature key.

Yoti frequently changes both the obfuscation and the signature key. This means that if a hacker were to reverse engineer the obfuscated code, the signature key will have changed by the time they have done so, and vice versa.
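As a minimal sketch of the signing idea only (this is not Yoti's actual implementation; the key names, rotation scheme and HMAC choice are all assumptions for illustration), the capture code can sign each frame with a secret key, and the server rejects any frame whose signature does not verify against a currently valid key. Rotating keys limits the value of any single key a hacker manages to extract:

```python
import hashlib
import hmac

# Hypothetical keyring: old keys are retired as new ones are issued.
CURRENT_KEY_ID = "2024-06"
KEYS = {"2024-05": b"old-secret", "2024-06": b"new-secret"}

def sign_frame(frame: bytes) -> tuple[str, str]:
    """Runs inside the (obfuscated) capture code at the point of capture."""
    tag = hmac.new(KEYS[CURRENT_KEY_ID], frame, hashlib.sha256).hexdigest()
    return CURRENT_KEY_ID, tag

def verify_frame(frame: bytes, key_id: str, tag: str) -> bool:
    """Server side: accept only frames signed with a key still in rotation."""
    key = KEYS.get(key_id)
    if key is None:  # key already rotated out — signature is stale
        return False
    expected = hmac.new(key, frame, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

frame = b"camera frame bytes"
key_id, tag = sign_frame(frame)
print(verify_frame(frame, key_id, tag))              # genuine capture passes
print(verify_frame(b"injected frame", key_id, tag))  # injected bytes fail
```

An attacker who extracts one key from the obfuscated client gains only a short window before rotation invalidates it, which is the vice-versa race described above.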

There remain ways to spoof this (not that we'd say how), but it significantly adds to the effort, time, skill and cost of spoofing verification checks, moving bad actors on to softer targets.

If you’d like to learn more about our NIST-approved liveness products, please do get in touch.
