It has been claimed that facial age estimation technology can be easily spoofed. The proliferation of news on generative AI and deepfakes has added to the conversation, and there is doubt and concern over the security of online safety systems.
Suffice it to say, we’ve thought of that. We have developed a suite of anti-spoofing tools to ensure your check is real, valid and accurate. Our experience working with organisations to implement age verification has enabled us to identify and mitigate risks and vulnerabilities.
When we perform an age estimation check, we are actually performing a number of security checks simultaneously. Dozens of checks run at the same time – some of them commercially sensitive – but three of the most important are outlined below. They all happen in under a second, as quick as 0.3 seconds.
- How old are they? – our facial age estimation technology determines whether someone is above or below an age threshold from a single facial image. This is the main check we perform, provided the other checks are passed.
- Are they a real person? – our liveness technology prevents people from spoofing the age check with a photo, screen replay or mask (the most obvious, common and easily detectable forms of attack). Crucially, this is not facial recognition.
- Is this image real? – our SICAP (Secure Image Capture) technology confirms the image genuinely comes from the device camera. This guards against a more sophisticated, emerging threat in which bad actors try to ‘inject’ an alternative image into the verification process, bypassing the device camera with an AI-generated image or deepfake.
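As an illustration only – this is not Yoti’s actual implementation, and every name here is hypothetical – the gating logic the three checks imply (the age estimate only counts once the liveness and secure-capture checks pass) can be sketched as:

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    is_live: bool          # liveness check: a real person, not a photo/replay/mask
    capture_genuine: bool  # secure-capture check: image really came from the device camera
    estimated_age: float   # model's age estimate from the single facial image

def passes_age_check(result: CheckResult, age_threshold: float) -> bool:
    """Hypothetical gating logic: any failed anti-spoofing check
    fails the whole check, regardless of the estimated age."""
    if not result.is_live or not result.capture_genuine:
        return False
    return result.estimated_age >= age_threshold

# A live, genuine capture estimated at 24.8 against an 18+ threshold passes;
# the same estimate from a replay attack (is_live=False) does not.
print(passes_age_check(CheckResult(True, True, 24.8), 18))   # True
print(passes_age_check(CheckResult(False, True, 24.8), 18))  # False
```

The ordering matters: treating any spoofing signal as an immediate failure means an attacker gains nothing from fooling the age model alone.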
These checks make spoofing a facial age estimation check extremely difficult. Not impossible, but beating the system would require significant resources, time and expertise. In most use cases where an age check is required, this effort would outweigh the reward. In scenarios where the reward is significant, we would of course recommend further verification checks (and indeed these would likely be required by law or regulation).
We carry out regular hacker testing and continue to raise the bar by developing our security technology. We know that as technology evolves and bad actors raise their game, we must continue to develop our defences and anti-spoofing tools.
We can also set confidence thresholds, so that higher-risk items, such as knives sold online, require a higher security threshold. By comparison, when someone uses facial age estimation in person to buy an age-restricted item, a member of retail staff is present – making it extremely obvious if someone attempts to spoof the system by wearing a mask, and practically very difficult to bypass the intended device camera.
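One common way to implement risk-based thresholds – sketched here with purely illustrative numbers, not Yoti’s actual configuration – is to add a larger safety buffer above the legal age for higher-risk use cases, so the estimate must clear a stricter effective threshold:

```python
# Hypothetical risk tiers: years of buffer added to the legal age threshold.
# The values are illustrative assumptions, not real policy.
RISK_BUFFERS = {
    "low": 0.0,     # e.g. in-person purchase with retail staff supervising
    "medium": 3.0,
    "high": 7.0,    # e.g. selling knives online
}

def effective_threshold(legal_age: int, risk: str) -> float:
    """Return the age an estimate must meet or exceed for a given risk tier."""
    return legal_age + RISK_BUFFERS[risk]

print(effective_threshold(18, "high"))  # 25.0 – a wider margin for high-risk sales
print(effective_threshold(18, "low"))   # 18.0 – no extra buffer when staff are present
```

A buffer like this trades some convenience (more borderline adults are referred to a fallback check) for a lower chance of an under-age estimate slipping through.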
You’d be surprised what we can learn from a single facial image. Nothing about the individual themselves, like their name or date of birth – we’re not interested in that for an age check. But we can tell if they are pretending to be older or younger, or attempting to spoof the system.