The Age Appropriate Design Code for businesses

Amy Colville · 11 min read

The Age Appropriate Design Code (also known as the “Children’s Code”) is the first statutory code of practice for children’s data in the world. Introduced by the UK Information Commissioner’s Office (ICO) in 2021, the set of standards seeks to ensure that online services are designed in the best interests of a child.

“The best interests of a child” is a concept from the United Nations Convention on the Rights of the Child (UNCRC), which recognises that children need special safeguards and care in all aspects of their life.

In a world first, the Code extends this protection to the digital world. It requires companies whose online services are likely to be accessed by children under 18 to apply high privacy settings by default. The 15 standards cover topics such as age-appropriate application, transparency, data minimisation, data sharing and profiling.

In this blog, we’ll explore who the Code impacts, what age-appropriate design means and how to take a risk-based approach to establish age. We’ll also look at some of the high-risk features common to social media, gaming and dating, and how businesses can comply.

 

Who is impacted by the Age Appropriate Design Code?

The Code applies to any online service likely to be accessed by a child under 18 in the UK, even if the service isn’t specifically directed at children.

This includes apps, programs, connected toys and devices, search engines, social media platforms, streaming services, online games, news or educational websites and websites offering other goods or services to users over the internet.

Whilst not itself law, the statutory code of practice gives the ICO the power to fine businesses that don’t comply up to 4% of their global annual turnover, or to suspend their operations in the UK.

 

What do businesses need to do?

Affected businesses should broadly:

  1. Assess whether your service is likely to be accessed by children.
  2. Carry out a data protection impact assessment to assess the risks your service poses to children.
  3. Establish the age or age range of users in relation to the risks and limit the risks for the required age groups.

 

If you can’t limit risks for children, you must apply the code to all your users.
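As an illustration of that decision flow, here’s a minimal sketch. The types and names below are our own, not an official ICO tool:

```typescript
// Illustrative sketch of the compliance decision flow described above.
// All names are hypothetical; this is not an official ICO assessment.

type ChildAgeGroup = "0-5" | "6-9" | "10-12" | "13-15" | "16-17";

interface ServiceAssessment {
  likelyAccessedByChildren: boolean; // step 1: access assessment
  dpiaRisksIdentified: string[];     // step 2: risks found by the DPIA
  canLimitRisksByAgeGroup: boolean;  // step 3: can risks be limited per age group?
}

function codeAppliesTo(assessment: ServiceAssessment): ChildAgeGroup[] | "all users" {
  if (!assessment.likelyAccessedByChildren) {
    return []; // the Code does not apply
  }
  if (!assessment.canLimitRisksByAgeGroup) {
    // If you can't limit risks for children, apply the Code to everyone.
    return "all users";
  }
  // Otherwise, apply protections to the child age groups exposed to the risks.
  return ["0-5", "6-9", "10-12", "13-15", "16-17"];
}
```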

 

What is the California Age Appropriate Design Code?

The California Age-Appropriate Design Code Act (the “Act”) is modelled on the UK’s Age Appropriate Design Code. It was signed into law on 15 September 2022 and takes effect on 1 July 2024.

The Act places legal obligations on companies to undertake a DPIA for any online service, product, or feature likely to be accessed by a child. DPIAs address whether the design could:

  • Harm children
  • Lead to children experiencing or being targeted by harmful contacts
  • Permit children to be subject to harmful conduct
  • Expose children to exploitation by harmful contacts
  • Harm children with its algorithms
  • Harm children with its targeted advertising systems
  • Harm children with incentive or engagement features
  • Collect sensitive personal information

For any business that violates the law, the California Attorney General may seek an injunction or fine of up to $7,500 per affected child. However, businesses have 90 days to rectify violations before they are fined.

Beyond California, countries such as the Netherlands, Sweden and Ireland are developing similar codes of practice.

In addition, the European ‘Better Internet for Kids’ strategy mirrors some of this language around “age-appropriate digital services”, advocating for “every child in Europe protected, empowered and respected online, and no one left behind”.

 

What does “age-appropriate design” mean?

Age-appropriate design is the process of designing a service around the needs of the age groups accessing it. Just like you may design a website or product to be accessible for people with disabilities, children have developmental needs that should be considered.

As a guide, the ICO highlights the following age ranges:

  • 0–5: pre-literate and early literacy
  • 6–9: core primary school years
  • 10–12: transition years
  • 13–15: early teens
  • 16–17: approaching adulthood

Much of the Code focuses on how children’s data is processed, recommending high privacy settings by default, minimised data collection and clear privacy information that children can understand.

However, it goes further than other data protection laws like GDPR and COPPA by also considering how products and features are designed in ways that can cause harm to children.

For example, private chat functionality can expose children to potential predators who wish to make contact away from the scrutiny of others. Features such as geolocation and friend suggestions may also expose children to predators.

Similarly, user-generated content, such as profile pictures, status updates, and comments, can be used to groom or exploit children. Live streaming can also expose children to inappropriate behaviour and is very difficult to moderate due to its real-time nature.

There’s also financial harm that can be caused by exposing children to advertising or in-app purchases. This can encourage children to spend money or share personal information in exchange for items.

 

How to take a risk-based approach to establishing age

Fundamental to age-appropriate design is knowing the age or age range of the people accessing your service. If you can’t tell who’s a child, you can’t protect them.

The Code advocates for a risk-based approach to establishing age, which essentially means the more risks your data processing poses to a child, the more certain you must be about their age.

Being uncertain about someone’s age might sound strange because we’re so familiar with checking ID to confirm that someone is over the required age. This is what the Code refers to as verifying age using ‘hard identifiers’, such as identity documents or other verified records.

However, there are many age assurance techniques that establish the likelihood that someone falls into a certain age or age range, without revealing their full identity.

The ICO splits the different age assurance methods into the following categories (we sketch how risk might map onto them after the list):

  • Self-declaration: this is understood to be applicable only for the lowest-risk use cases where there’s no risk of age-inappropriate content or contact, such as a newsletter signup.
  • Technical measures: used to strengthen self-declaration mechanisms and discourage false declarations of age.
  • Artificial intelligence: using AI to estimate someone’s age.
  • Account holder information: allowing an existing adult account holder to confirm the age range of additional account users or set up child profiles.
  • Hard identifiers: verifying age using a passport or other identity document.
  • Third-party age verification services: using a third party to receive assurance of the age range of your users, typically as a ‘yes’ or ‘no’ answer to minimise data.
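To make the risk-based approach concrete, here’s a minimal sketch of how a service might map its risk level to acceptable methods. The tiers and groupings below are our own illustration, not ICO guidance:

```typescript
// Illustrative mapping of processing risk to age assurance methods.
// The tiers and groupings are a sketch, not an official ICO scheme.

type AgeAssuranceMethod =
  | "self-declaration"
  | "technical-measures"
  | "ai-estimation"
  | "account-holder-confirmation"
  | "hard-identifier"
  | "third-party-verification";

type RiskLevel = "low" | "medium" | "high";

const acceptableMethods: Record<RiskLevel, AgeAssuranceMethod[]> = {
  // e.g. a newsletter signup with no age-inappropriate content or contact
  low: ["self-declaration", "technical-measures"],
  // e.g. age-gated content with limited contact risk
  medium: ["ai-estimation", "account-holder-confirmation", "third-party-verification"],
  // e.g. private chat, live streaming or 18+ services
  high: ["ai-estimation", "hard-identifier", "third-party-verification"],
};

function isAcceptable(risk: RiskLevel, method: AgeAssuranceMethod): boolean {
  return acceptableMethods[risk].includes(method);
}
```

The principle is simply that the higher the tier, the stronger the evidence of age required.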

 

Using AI to estimate someone’s age

An example of an AI system is our facial age estimation technology, which estimates age from a selfie. It’s powered by an algorithm that’s learnt to recognise age in the same way humans do – by looking at faces.

However, unlike humans who check ID, it doesn’t look at names or addresses. It’s trained with just a face and month and year of birth, and can’t uniquely recognise anyone.

When used in a live setting, it detects a live human face, analyses the pixels in the image and gives an age estimate. It doesn’t learn anything new from each check or retain any images.
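From an integrator’s point of view, a check like this might be sketched as follows. The endpoint, field names and buffer below are hypothetical, not our production API:

```typescript
// Hypothetical client-side flow for a facial age estimation check.
// The endpoint, response shape and buffer value are illustrative only.

interface AgeEstimate {
  estimatedAge: number; // point estimate in years
  // No name, address or image is returned; nothing is retained after the check.
}

async function passesAgeGate(selfieFrame: Blob, minimumAge: number): Promise<boolean> {
  const response = await fetch("https://age-check.example.com/estimate", {
    method: "POST",
    body: selfieFrame, // a single live-captured frame, not stored after the check
  });
  const { estimatedAge }: AgeEstimate = await response.json();

  // Because estimation has a known error margin, services often require
  // the estimate to clear the legal threshold by a safety buffer.
  const buffer = 5;
  return estimatedAge >= minimumAge + buffer;
}
```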

For 13–17-year-olds, the mean absolute error is 1.53 years from a person’s true age. For reference, studies have shown that humans typically estimate age only to within 6–8 years.

In summary, facial age estimation technology is fast, easy and accessible for people without ID documents. That’s why Instagram, Facebook Dating, Yubo and OnlyFans are already using it to create age-appropriate experiences on their platforms.

 

How the Age Appropriate Design Code impacts social media companies

Many social media companies make money through selling advertising and giving advertisers access to user data to better personalise their ads. This revenue model is built on the concept we’ve become so familiar with in the digital age – if the product is free, you are the product.

However, the Code states that if behavioural advertising is used to fund a service but isn’t part of the core service that the child wishes to access, it should be controlled with a privacy setting.

It also flags the risks to children associated with profiling, which refers to recommending content to individuals based on their past online activity or browsing history. Profiling powers features like the news feed or suggestions to follow other accounts. The Code states that profiling should be “off” by default and only allowed if you have appropriate measures in place to protect the child from any harmful effects.

Some of the biggest social media companies like YouTube and TikTok have already shared updates prompted by the Code. These cover things like default privacy settings, limits on advertising and time cut-offs for push notifications.
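Reduced to configuration, defaults like these might look as follows. The field names and values are illustrative, not any platform’s real settings:

```typescript
// Illustrative default settings for an account identified as under 18.
// Field names and values are our own; no platform's actual config is implied.

interface ChildAccountDefaults {
  accountPrivate: boolean;
  behaviouralAdsEnabled: boolean; // off unless part of the core service
  profilingEnabled: boolean;      // "off" by default under the Code
  geolocationEnabled: boolean;
  pushNotificationCurfew: { startHour: number; endHour: number } | null;
}

const under18Defaults: ChildAccountDefaults = {
  accountPrivate: true,
  behaviouralAdsEnabled: false,
  profilingEnabled: false,
  geolocationEnabled: false,
  pushNotificationCurfew: { startHour: 22, endHour: 7 }, // no pushes overnight
};
```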

We’ve been helping Instagram explore ways to verify their users’ age so they can provide teenagers (13-17) with age-appropriate experiences, like defaulting them into private accounts, preventing unwanted contact from adults they don’t know and limiting the options advertisers have to reach them with ads.

When someone changes their date of birth from under 18 to over 18, they’re asked to verify their age with either an ID document or a video selfie. Our facial age estimation technology powers the video selfie, which 82% of people chose when given a choice of methods. As a result, Instagram has stopped 96% of teens who attempted to edit their birthdays from under 18 to over 18 from doing so.
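In outline, the trigger logic might look something like this. The function names are hypothetical, and this is not Instagram’s actual code:

```typescript
// Illustrative outline of the birthday-change flow described above.
// All function names are hypothetical.

type VerificationMethod = "id-document" | "video-selfie";

async function onDateOfBirthChanged(
  previousAge: number,
  newAge: number,
  chooseMethod: () => Promise<VerificationMethod>,
  verify: (method: VerificationMethod) => Promise<boolean>,
): Promise<boolean> {
  // Only changes that cross the 18 boundary trigger a check.
  if (previousAge >= 18 || newAge < 18) {
    return true; // no verification required
  }
  const method = await chooseMethod(); // ID upload or video selfie
  return verify(method); // apply the birthday update only if this passes
}
```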

 

How the Age Appropriate Design Code impacts dating companies

Dating platforms are also impacted by the Code, as they feature user-generated content and private chats which are flagged as high risk to children.

Although most dating platforms have a minimum age requirement of 18+, this is commonly enforced by asking people to tick a box to confirm they’re 18 or older. This age assurance technique is referred to by the Code as “self-declaration”. However, industry researchers have also called it “verification theatre” due to the ease with which someone can lie.

Thanks to our continued partnership with Meta, Facebook Dating takes a more robust approach, asking users suspected of being under 18 to prove their age via a video selfie or an ID upload.

Not only does this give people a choice in how to prove their age, it also provides an alternative to asking for ID.

 

How the Age Appropriate Design Code impacts gaming companies

93% of children in the UK play video games. Nearly two-thirds of U.S. adults play video games regularly, a figure that jumps to 76% for children under 18.

The dangers of online gaming for children have been highlighted by privacy groups, parents and even the FBI, which launched a campaign on how sexual predators use gaming platforms to target children.

Under the Age Appropriate Design Code, gaming developers must identify whether players are under the age of 18 with a reasonable degree of certainty and limit risks for the appropriate age groups.

Some gaming companies have already taken steps to limit certain high-risk features for specific age groups, such as voice chat. The popular gaming feature allows players to communicate in real time without typing. However, it’s been reported that adults use it to engage in inappropriate and sexual behaviour with children.

The ICO has issued guidance for game designers, which covers things like including age-appropriate prompts to encourage players to take breaks from extended play and having behavioural profiling turned off by default.
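Putting voice chat limits and break prompts together, a game’s safety defaults might be sketched like this. The age bands and timings are our own assumptions, not numbers from the guidance:

```typescript
// Illustrative safety defaults for a game, keyed by established age.
// Bands and values are assumptions for the sketch, not ICO-mandated numbers.

interface GameSafetyDefaults {
  voiceChatEnabled: boolean;            // high-risk feature, limited for children
  behaviouralProfilingEnabled: boolean; // off by default under the guidance
  breakPromptIntervalMinutes: number | null; // prompt to take a break from play
}

function defaultsForAge(age: number): GameSafetyDefaults {
  if (age < 13) {
    return { voiceChatEnabled: false, behaviouralProfilingEnabled: false, breakPromptIntervalMinutes: 60 };
  }
  if (age < 18) {
    return { voiceChatEnabled: false, behaviouralProfilingEnabled: false, breakPromptIntervalMinutes: 90 };
  }
  return { voiceChatEnabled: true, behaviouralProfilingEnabled: true, breakPromptIntervalMinutes: null };
}
```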

 

Experts in age assurance

As technologists, researchers and campaigners for online safety, we’re delighted to see that a number of countries are introducing such important legislation that seeks to build a digital world in which young people can thrive.

We firmly believe consumers should have choice in how they prove their age. That’s why we give organisations access to a wide range of age assurance options through one integration. This allows them to A/B test in different regions and switch to alternative methods if regulation changes in a given country.

We’ve already been helping companies like Instagram and Facebook create age-appropriate experiences with age assurance technology that minimises data and keeps children safe.

We’ve learnt a lot, and the journey has only just begun. If you’re looking to make your platform age-appropriate, talk to us about how our expertise in age assurance can inform your journey.

 
