The Age Appropriate Design Code for businesses


This blog was updated in February 2024, following the ICO’s updated opinion on age assurance for the Children’s Code. In a nutshell, the main changes include:

  • Facial age estimation is now recognised as the most widely used age estimation approach, with high levels of accuracy. 
  • Self-declaration on its own is not sufficient for high-risk services.
  • The ICO has also introduced a new term, the ‘waterfall technique’. This refers to a combination of age assurance methods. 
  • Companies should ensure that any age assurance system they implement has an appropriate level of technical accuracy, reliability and robustness, whilst operating in a way that is fair to users.

 

The Age Appropriate Design Code (also known as the “Children’s Code”) is the first statutory code of practice for children’s data in the world. Introduced by the UK Information Commissioner’s Office (ICO) in 2021, the set of standards seeks to ensure that online services are designed in the best interests of a child.

“The best interests of a child” is a concept from the United Nations Convention on the Rights of the Child (UNCRC), which recognises that children need special safeguards and care in all aspects of their life.

In a world first, the Code extends this protection to the digital world. It requires companies with online services likely to be accessed by children under 18 to apply high privacy settings by default. The 15 standards cover topics such as age-appropriate application, transparency, data minimisation, data sharing and profiling.

In this blog, we’ll explore who the Code impacts, what age-appropriate design means and how to take a risk-based approach to establish age. We’ll also look at some of the high-risk features common to social media, gaming and dating, and how businesses can comply.

 

Who is impacted by the Age Appropriate Design Code?

The code applies to any online service likely to be accessed by a child under 18 in the UK, even if the service isn’t specifically directed at children.

This includes apps, programs, connected toys and devices, search engines, social media platforms, streaming services, online games, news or educational websites and websites offering other goods or services to users over the internet.

Whilst not law itself, the statutory code of practice gives the ICO powers to fine businesses that don’t comply up to 4% of their global annual turnover, or to suspend their operations in the UK.

 

What do businesses need to do?

Affected businesses should broadly:

  1. Assess whether your service is likely to be accessed by children.
  2. Carry out a data protection impact assessment (DPIA) to assess the risks your service poses to children.
  3. Establish the age or age range of users in relation to the risks and limit the risks for the required age groups.

If you can’t limit risks for children, you must apply the code to all your users.

 

What is the California Age Appropriate Design Code?

The California Age-Appropriate Design Code Act (the “Act”) is modelled on the UK’s Age-Appropriate Design Code. It was signed into law on 15 September 2022 and takes effect on 1 July 2024.

The Act places legal obligations on companies to undertake a DPIA for any online service, product, or feature likely to be accessed by a child. DPIAs address whether the design could:

  • Harm children
  • Lead to children experiencing or being targeted by harmful contacts
  • Permit children to be subject to harmful conduct
  • Expose children to exploitation by harmful contacts
  • Harm children with its algorithms
  • Harm children with its targeted advertising systems
  • Harm children with incentive or engagement features
  • Collect sensitive personal information

For any business that violates the law, the California Attorney General may seek an injunction or a fine of up to $7,500 per affected child. However, businesses have 90 days to rectify violations before they are fined.

Beyond California, countries such as the Netherlands, Sweden and Ireland are developing similar codes of practice.

In addition, the European ‘Better Internet for Kids’ strategy mirrors some of the Code’s language around “age-appropriate digital services” and advocates for “every child in Europe protected, empowered and respected online, and no one left behind”.

 

What does “age-appropriate design” mean?

Age-appropriate design is the process of designing a service around the needs of the age group accessing it. Just as you might design a website or product to be accessible to people with disabilities, children have developmental needs that should be considered.

As a guide, the ICO highlights the following age ranges:

  • 0–5: pre-literate and early literacy
  • 6–9: core primary school years
  • 10–12: transition years
  • 13–15: early teens
  • 16–17: approaching adulthood

Much of the Code focuses on how children’s data is processed, recommending high privacy settings by default, minimised data collection and clear privacy information that children can understand.

However, it goes further than other data protection laws like GDPR and COPPA by also considering how products and features are designed in ways that can cause harm to children.

For example, private chat functionality can expose children to potential predators who wish to make contact away from the scrutiny of others. Features such as geolocation and friend suggestions may also expose children to predators.

Similarly, user-generated content, such as profile pictures, status updates, and comments, can be used to groom or exploit children. Live streaming can also expose children to inappropriate behaviour and is very difficult to moderate due to its real-time nature.

There’s also the financial harm caused by exposing children to advertising or in-app purchases, which can encourage them to spend money or share personal information in exchange for items.

In its updated guidance, the ICO advises platforms to consider whether further checks are required when a child is expected to turn 13, and again when that child turns 18. This is to make sure they are only able to access the parts of the service that are age-appropriate for them.
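As an illustration of what acting on that advice could look like, here is a minimal sketch (ours, not the ICO’s) of scheduling those further checks; `birth_date` stands for whatever date of birth the service’s age assurance process has established:

```python
from datetime import date

CHECK_AGES = (13, 18)  # thresholds where access to parts of a service may change

def next_check_date(birth_date: date, today: date) -> date | None:
    """Return the next date a further age check should be considered, if any."""
    for age in CHECK_AGES:
        try:
            threshold = birth_date.replace(year=birth_date.year + age)
        except ValueError:  # 29 February birthday in a non-leap year
            threshold = birth_date.replace(year=birth_date.year + age, day=28)
        if threshold > today:
            return threshold
    return None  # already 18 or over: no further child-specific checks needed

# Example: a user born 1 March 2012 and checked on 1 February 2024
# is due a further check on their 13th birthday.
print(next_check_date(date(2012, 3, 1), date(2024, 2, 1)))  # 2025-03-01
```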

 

How to take a risk-based approach to establishing age

Fundamental to age-appropriate design is knowing the age or age range of the people accessing your service. If you can’t tell who’s a child, you can’t protect them.

The Code advocates for a risk-based approach to establishing age, which essentially means the more risks your data processing poses to a child, the more certain you must be about their age.

Being uncertain about someone’s age might sound strange because we’re so familiar with checking ID to confirm that someone is over the required age. This is what the Code refers to as verifying age using ‘hard identifiers’, such as identity documents or verifiable records of data.

However, there are many age assurance techniques that establish the likelihood that someone falls into a certain age or age range, without revealing their full identity.

The ICO recognises four main approaches to age assurance, which can be used individually or in combination:

  • Self-declaration: understood to be applicable only to the lowest-risk use cases where there’s no age-inappropriate contact, such as a newsletter signup. In its updated wording, the ICO has clarified that this method on its own is not sufficient for high-risk services.
  • Age estimation: using AI to estimate someone’s age or age range.
  • Parental confirmation: someone with parental responsibility confirming the age of a child through an online account.
  • Age verification: any method designed to verify the exact age of users or confirm that a user is over 18. This could involve verifying a user’s age against a ‘hard identifier’ like a passport, or using a third-party provider to verify against information sources such as credit card or database checks.
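To make the risk-based pairing concrete, a service might encode which of these approaches it accepts at each risk level along the following lines. The exact mapping is our illustrative assumption, not a table from the Code:

```python
# Illustrative only: the Code asks for more certainty as risk increases,
# but this exact mapping of methods to risk levels is our sketch, not the ICO's.
ACCEPTED_METHODS_BY_RISK = {
    "low": {"self_declaration", "age_estimation"},  # e.g. a newsletter signup
    "medium": {"age_estimation", "parental_confirmation", "age_verification"},
    "high": {"age_verification", "waterfall"},  # self-declaration alone is not sufficient
}

def is_acceptable(method: str, risk_level: str) -> bool:
    """Check whether an age assurance method meets a service's risk level."""
    return method in ACCEPTED_METHODS_BY_RISK[risk_level]
```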

The ICO has also introduced a new term, the ‘waterfall technique’. This refers to a combination of age assurance methods. For instance, facial age estimation could be used as the primary method to determine a user’s age. A second method could then kick in if a further age check is needed. 

In practical terms, if a website with 18+ content uses facial age estimation with a seven-year safety buffer, all users need to be estimated as aged 25 or over. Those estimated to be under 25 would need to complete a second age check using another method, such as an identity document.
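In code, that orchestration might look something like the sketch below; `estimate_facial_age` and `verify_with_document` are hypothetical stand-ins for an age assurance provider’s methods, not a real integration:

```python
from typing import Callable

MINIMUM_AGE = 18   # the service's age requirement
SAFETY_BUFFER = 7  # the seven-year buffer from the example above
THRESHOLD = MINIMUM_AGE + SAFETY_BUFFER  # users must be estimated as 25 or over

def waterfall_age_check(
    selfie: bytes,
    estimate_facial_age: Callable[[bytes], float],  # primary method (hypothetical)
    verify_with_document: Callable[[], bool],       # secondary method (hypothetical)
) -> bool:
    """Primary check via facial age estimation; escalate below the buffer."""
    if estimate_facial_age(selfie) >= THRESHOLD:
        return True  # estimated 25 or over: passes with no further checks
    # Estimated under 25: the second method kicks in,
    # e.g. verification against an identity document.
    return verify_with_document()
```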

 

Using AI to estimate someone’s age

An example of such an AI system is our facial age estimation technology, which estimates age from a facial image. The image is analysed by an algorithm that has been trained to recognise age. To our AI system, the image is simply a pattern of pixels, and the pixels are numbers. Our facial age estimation technology has been trained to spot patterns in those numbers, so it learns ‘this pattern is what 16-year-olds usually look like’.

As soon as the age has been estimated, the facial image is deleted. The technology does not uniquely recognise anyone or require users to share their name, date of birth or identity documents.
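In outline, the flow is ‘estimate, then delete’, as in this simplified sketch (ours, for illustration; `predict_age` is a hypothetical stand-in for the trained model):

```python
# A simplified sketch of the estimate-then-delete flow described above;
# `predict_age` is a hypothetical stand-in for the trained estimation model.

def estimate_and_discard(image_pixels: bytes, predict_age) -> float:
    """Estimate an age from pixel data, retaining only the number."""
    estimated_age = predict_age(image_pixels)  # pattern-matching on pixel values
    del image_pixels  # drop this reference so the image is not retained
    return estimated_age  # only the estimate is kept; no name, date of birth or document involved
```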

For 13-17-year-olds, the mean absolute error is 1.4 years from a person’s true age. For reference, studies have shown that humans typically estimate age only to within 6-8 years.

In summary, facial age estimation technology is fast, easy and inclusive for people without ID documents. That’s why Instagram, Facebook Dating, Yubo and OnlyFans are already using it to create age-appropriate experiences on their platforms.

The ICO has updated their guidance, recognising that facial age estimation is currently the most widely used age estimation approach, with high levels of accuracy. The ICO maintains that other age estimation approaches, such as voice analysis, have not yet reached the level of accuracy required for high-risk scenarios.

The ICO has also said that when businesses choose an age assurance method that uses AI, they need to ensure the technology has been trained on high-quality, diverse and relevant data sets. We regularly publish white papers about our facial age estimation technology, detailing how we strive to ensure there is no discernible bias across genders or skin tones, and giving businesses and regulators a clear and transparent overview of the technology.

 

How the Age Appropriate Design Code impacts social media companies

Many social media companies make money by selling advertising and giving advertisers access to user data to better personalise their ads. This revenue model is built on a concept we’ve become all too familiar with in the digital age: if the product is free, you are the product.

However, the Code states that if behavioural advertising is used to fund a service but isn’t part of the core service that the child wishes to access, it should be controlled with a privacy setting.

It also flags the risks to children associated with profiling, which refers to recommending content to individuals based on their past online activity or browsing history. Profiling powers features like the news feed or suggestions to follow other accounts. The Code states that profiling should be “off” by default and only allowed if you have appropriate measures in place to protect the child from any harmful effects.

Some of the biggest social media companies like YouTube and TikTok have already shared updates prompted by the Code. These cover things like default privacy settings, limits on advertising and time cut-offs for push notifications.

We’ve been helping Instagram explore ways to verify their users’ age so they can provide teenagers (13-17) with age-appropriate experiences, like defaulting them into private accounts, preventing unwanted contact from adults they don’t know and limiting the options advertisers have to reach them with ads.

When someone changes their date of birth from under 18 to over 18, they’re asked to verify their age with either an ID document or a video selfie. Our facial age estimation technology powers the video selfie, which 82% of people chose when presented with the available options. As a result, Instagram has stopped 96% of teens who attempted to change their birthday from under 18 to over 18 from doing so.

 

How the Age Appropriate Design Code impacts dating companies

Dating platforms are also impacted by the Code, as they feature user-generated content and private chats which are flagged as high risk to children.

Although most dating platforms have a minimum age requirement of 18+, this is commonly enforced by asking people to tick a box to confirm they’re 18 or older. This age assurance technique is what the Code refers to as “self-declaration”. However, industry researchers have also called it “verification theatre” because of the ease with which someone can lie.

Thanks to our continued partnership with Meta, Facebook Dating takes a more robust approach. It asks users suspected of being under 18 to prove their age via a video selfie or an ID upload.

Not only does this give people a choice in how they prove their age, it also provides an alternative to asking everyone for ID.

 

How the Age Appropriate Design Code impacts gaming companies

93% of children in the UK play video games. Nearly two-thirds of US adults play video games regularly, a figure that jumps to 76% among children under 18.

The dangers of online gaming for children have been highlighted by privacy groups, parents and even the FBI, which launched a campaign showing how sexual predators use these platforms to target children.

Under the Age Appropriate Design Code, gaming developers must identify whether players are under the age of 18 with a reasonable degree of certainty and limit risks for the appropriate age groups.

Some gaming companies have already taken steps to limit certain high-risk features for specific age groups, such as voice chat. The popular gaming feature allows players to communicate in real time without typing. However, it’s been reported that adults use it to engage in inappropriate and sexual behaviour with children.

The ICO has issued guidance for game designers, which covers things like including age-appropriate prompts to encourage players to take breaks from extended play and having behavioural profiling turned off by default.

 

Experts in age assurance

As technologists, researchers and campaigners for online safety, we’re delighted to see that a number of countries are introducing such important legislation that seeks to build a digital world in which young people can thrive.

We firmly believe consumers should have a choice in how they prove their age. That’s why we give organisations access to a wide range of age assurance options through one integration, allowing them to A/B test in different regions and switch to alternative methods if regulation changes in a given country.

We’ve already been helping companies like Instagram and Facebook create age-appropriate experiences with age assurance technology that minimises data and keeps children safe.

We’ve learnt a lot, and the journey has only just begun. If you’re looking to make your platform age-appropriate, talk to us about how our expertise in age assurance can inform your journey.

 
