Asking the FTC to approve facial age estimation for verifiable parental consent

Yoti · 3 min read

Together with the Privacy Certified program of the Entertainment Software Rating Board (ESRB) and Kids Web Services Ltd, a wholly owned Epic Games subsidiary, we are asking the Federal Trade Commission (FTC) for approval to implement facial age estimation as an authorised method for verifiable parental consent (VPC). The Children’s Online Privacy Protection Act (COPPA) requires companies to ensure they are not collecting personal data from children under the age of 13 without a parent’s consent. Currently, the COPPA Rule enumerates seven non-exhaustive methods that enable parents to verify their consent. These include verification by government ID, credit card transaction, and a method that involves facial recognition, which is different from what we propose in our application.

We’d like the FTC to authorise facial age estimation as another VPC method, giving parents and operators more choice in how parents prove their age and grant consent. We are not seeking approval to use this technology to check whether children are old enough to purchase, download or play a video game. Children are not involved in the age estimation process at all; it is designed to confirm that the person granting consent is an adult, as required under COPPA.

We’d like to explain how the FTC application seeks to implement the technology in a way that is consistent with COPPA’s requirements for data minimisation, confidentiality and security:

  • Facial age estimation provides an accurate, reliable, accessible, fast, simple and privacy-preserving method to ensure that the person providing consent is an adult.
  • Facial age estimation is not facial recognition; it estimates a parent’s age without identifying them. It does so by converting the pixels of a facial image into numbers and comparing that pattern of numbers to patterns associated with known ages (a minimal illustrative sketch follows this list).
  • Facial age estimation does not create a database of faces. It doesn’t learn the user’s name, identity or anything about them. It does not scan their face against a database.
  • The technology is inclusive; it requires no collection of identity or payment card information.
  • It is accurate and does not show material bias among people of different skin tones. We have done extensive testing based on millions of facial scans and publish the accuracy levels transparently.
  • In our proposed application, facial age estimation is always presented as an option to parents alongside other approved methods of verification, providing the parent with a choice of methods.
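To make the distinction above concrete, here is a minimal, hypothetical sketch in Python of the kind of pipeline the list describes: the pixels of a facial image become a pattern of numbers, a model trained on images with known ages maps that pattern to an estimated age, and only an over/under decision is kept. The function names, the simple linear model and the threshold of 25 are illustrative assumptions for this sketch, not Yoti’s implementation.

    import numpy as np

    def image_to_features(pixels):
        # Turn raw pixel values into a pattern of numbers. A production system
        # would use a trained neural network here; normalised intensities are
        # only a stand-in for that learned representation.
        return pixels.astype(np.float32).flatten() / 255.0

    def estimate_age(features, weights, bias):
        # Map the pattern of numbers to a single estimated age, as a model
        # trained on faces with known ages would. No face database is searched
        # and no identity is produced, only a number.
        return float(features @ weights + bias)

    def is_old_enough(pixels, weights, bias, threshold=25.0):
        # Keep only the over/under decision needed for consent; the image and
        # the derived numbers can be discarded straight away (data minimisation).
        return estimate_age(image_to_features(pixels), weights, bias) >= threshold

    # Demonstration with stand-in data: a random 64x64 greyscale "image" and
    # placeholder weights in place of a trained model.
    rng = np.random.default_rng(seed=0)
    pixels = rng.integers(0, 256, size=(64, 64))
    weights = rng.normal(scale=1e-4, size=64 * 64)
    print(is_old_enough(pixels, weights, bias=30.0))

Nothing in this flow looks up who the person is: the only output is whether the estimated age clears the threshold, which is the over/under decision an operator needs before accepting consent.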

If you have any questions about our FTC application, please get in touch.
