Complying with Ofcom’s Protection of Children Codes: what you need to know about age assurance

Amba Karsondas · 11 min read

With children’s online engagement at an all-time high, the UK government passed the Online Safety Act in 2023, aiming to make the UK ‘the safest place in the world to be online’. It places legal obligations on online services to prioritise user safety, particularly for children.

As the UK’s communications regulator, Ofcom plays a pivotal role in enforcing the Act’s provisions. As part of phase two of the Act’s implementation, Ofcom published its Protection of Children Codes of Practice on 24th April 2025.

Made up of over 40 practical measures, the Codes outline how digital platforms must safeguard younger users from online harm. Ofcom’s guidance comes into effect three months after 24th April 2025. Services in scope must therefore complete their children’s risk assessments by 24th July 2025 and implement appropriate safety measures by Friday 25th July 2025, or face penalties of up to £18 million or 10% of global revenue, whichever is greater.

This blog unpacks the new codes, looking into their specific age-related requirements. We also explore our age assurance methods and how they can help platforms of all types and sizes comply with the regulations.  

 

Who do Ofcom’s Protection of Children Codes apply to?

The Protection of Children Codes apply to a spectrum of online services that may be accessed by children. Known as ‘Part 3 services’, they are categorised into three main types:

User-to-user (U2U) services

These allow users to generate, share or upload content (such as messages, images, videos, comments or audio) that may be encountered by other users of the service. Examples include:

  • Social media platforms
  • Video-sharing services
  • Messaging services
  • Marketplaces and listing services
  • Dating services
  • Gaming services

Search services

These enable users to search multiple websites or databases. This includes:

  • General search engines
  • Vertical search services focusing on specific content types

Services that feature provider pornographic content

These are sites which publish or display pornographic content, as defined in the Act. This only relates to services which are regulated under Part 3 of the Act. 

Some services which contain pornographic content may also, or instead, fall within the scope of the duties that apply to U2U services or search services. Sites that publish their own pornographic content have separate obligations under Part 5 of the Online Safety Act; these measures came into force on 17th January 2025.

Regulations apply to online services with links to the UK, regardless of where they are based. A service has links to the UK if one or more of the following apply (a simple sketch of this test follows the list):

  • Has a significant number of UK users; or
  • Has UK users as one of its target markets (or as its sole target market); or
  • Is capable of being used by UK users, and there are reasonable grounds to believe there is a material risk of significant harm to UK users.
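Purely as an illustration, the ‘links to the UK’ test can be read as an any-of check. The TypeScript sketch below models it that way; the interface and field names are our own and the real legal assessment is more nuanced than a boolean.

```typescript
// Hypothetical sketch: modelling the "links to the UK" test as an any-of check.
// The field names are illustrative only; the legal assessment is more nuanced.
interface ServiceProfile {
  significantNumberOfUkUsers: boolean; // e.g. judged from audience analytics
  ukIsATargetMarket: boolean; // the UK is one of (or the sole) target markets
  usableByUkUsers: boolean; // the service is capable of being used from the UK
  materialRiskOfSignificantHarmToUkUsers: boolean;
}

// A service has "links to the UK" if any one of the grounds applies.
function hasLinksToTheUk(service: ServiceProfile): boolean {
  return (
    service.significantNumberOfUkUsers ||
    service.ukIsATargetMarket ||
    (service.usableByUkUsers && service.materialRiskOfSignificantHarmToUkUsers)
  );
}
```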

 

What does Ofcom mean by “content harmful to children”?

Ofcom outlines specific categories of content deemed harmful to children, drawn from the Online Safety Act. This content is legal but can be harmful to children. Understanding these categories is central to complying with Ofcom’s codes.

The Act states that there are three types of content harmful to children:

  1. Primary priority content (PPC)

This is content that must be inaccessible to all children, regardless of their age. It includes:

  • Pornographic material
  • Content promoting suicide or self-harm
  • Material encouraging eating disorders

Platforms must implement robust barriers to prevent all children from encountering this type of content.

  2. Priority content (PC)

This content is seriously harmful and must be restricted by age. It includes:

  • Abuse or hate speech targeting certain characteristics
  • Depictions of violence against people or animals (whether real or fictional)
  • Instructions for acts of serious violence
  • Bullying content
  • Content that encourages stunts or challenges that are highly likely to result in serious injury
  • Promotion of substance abuse

This content must be carefully moderated or restricted based on the child’s age.

  3. Non-designated content (NDC)

This broader category refers to any other content identified in the service’s own risk assessment. This is content “which presents a material risk of significant harm to an appreciable number of children in the UK”.

There isn’t an exhaustive list outlining what qualifies as NDC but, so far, Ofcom has identified two types of content that fit its definition: ‘content that promotes depression, hopelessness and despair’ (depression content) and ‘content that shames or otherwise stigmatises body types or physical features’ (body stigma content). Ofcom states that these types of content can cause significant harm when encountered in large quantities.
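To make the three tiers concrete, here is a minimal, hypothetical sketch of how a platform might map them to handling rules for child users. The type names and actions are our own shorthand for the obligations described above, not terms defined in the Act.

```typescript
// Hypothetical shorthand for the three tiers of content harmful to children.
type HarmCategory =
  | "primary_priority" // PPC: pornography, suicide/self-harm, eating disorders
  | "priority" // PC: e.g. hate, violence, bullying, dangerous challenges
  | "non_designated" // NDC: identified through the service's own risk assessment
  | "none";

type ActionForChildren = "block_for_all_children" | "age_restrict" | "allow";

// Illustrative policy: PPC must never reach children, PC is restricted by age,
// and NDC is handled according to the service's own risk assessment findings.
function actionForChildren(
  category: HarmCategory,
  ndcRestrictedByRiskAssessment: boolean
): ActionForChildren {
  switch (category) {
    case "primary_priority":
      return "block_for_all_children";
    case "priority":
      return "age_restrict";
    case "non_designated":
      return ndcRestrictedByRiskAssessment ? "age_restrict" : "allow";
    default:
      return "allow";
  }
}
```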

 

What measures do online platforms need to implement under Ofcom’s Protection of Children Codes?

Ofcom’s Protection of Children Codes require a “safety-first” approach to how affected technology companies design and operate their services in the UK. Providers must assess the risks their services pose to children, implement appropriate safety measures, and keep both the risks and the measures under review. Key provisions include:

Safer feeds – Personal recommendations, such as ‘For You’ pages, are a major way that children come across harmful content online. Providers whose recommendation systems present a ‘medium’ or ‘high’ risk must ensure their algorithms filter out harmful content from children’s feeds (see the sketch after this list).

Effective age checks – Services must carry out a children’s risk assessment. Higher-risk services must use highly effective age assurance to identify if their users are children. This protects children from harmful content while allowing adults to access legal, age-appropriate material.

Fast action – All platforms must have procedures to quickly review and address harmful content when identified.

More choice and support for children – Children must have control over their online experience. This includes the ability to block or mute accounts, disable comments or report inappropriate content. Supportive information should also be available for children who encounter harmful material.

Easier reporting and complaints – Children should be able to easily report harmful content or file complaints. Platforms must respond appropriately, and their terms of service must be clear and understandable for children.

Strong governance – Every platform must designate a named person who is responsible for children’s safety. They must also conduct an annual review of safety measures.
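For the ‘safer feeds’ measure referenced above, a recommender serving a known or likely child user could exclude harmful items before ranking. The sketch below illustrates that idea under our own assumptions (a content classifier that labels candidates as PPC, PC or neither); it is not a design prescribed by Ofcom.

```typescript
// Hypothetical sketch of filtering a child's recommendation feed.
// `harmLabel` is assumed to come from the platform's own content classifier.
interface FeedCandidate {
  id: string;
  harmLabel: "ppc" | "pc" | "ndc" | "none";
}

// For users identified as children, drop PPC and PC candidates before ranking;
// NDC handling would follow the service's own risk assessment findings.
function safeFeedForChild(candidates: FeedCandidate[]): FeedCandidate[] {
  return candidates.filter((c) => c.harmLabel !== "ppc" && c.harmLabel !== "pc");
}
```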

 

How can platforms comply with Ofcom’s Protection of Children Codes?

Providers meeting the criteria outlined above must establish whether their service, or any part of it, is likely to be accessed by children. Following this, they should carry out a children’s risk assessment to assess the risks of children encountering harmful content. This involves identifying potentially harmful features and evaluating how children interact with them. Examples of such features include algorithmic recommendations, direct messaging and livestreaming.

This should be done in addition to the mandatory illegal content risk assessment, which is a separate legal requirement under the Act.

Importantly, even if a service does not specifically target children, it can fall under the codes if data shows that children are using the service in significant numbers. This means that even if platforms think of themselves as “adult-only”, they may need to comply if underage users are still accessing their platforms.

To implement the appropriate safety measures for children, platforms must use “highly effective” methods to verify users’ ages.
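Taken together, the steps above amount to a simple decision flow. The sketch below is our own illustrative summary of that flow, assuming that ‘higher-risk’ corresponds to a medium or high outcome in the children’s risk assessment; it is not a substitute for Ofcom’s guidance.

```typescript
// Hypothetical decision flow for the child access and children's risk assessment steps.
// The inputs and the "higher-risk" threshold are assumptions made for illustration.
interface ChildrensAssessment {
  likelyAccessedByChildren: boolean; // outcome of the child access assessment
  riskLevel: "negligible" | "low" | "medium" | "high"; // outcome of the children's risk assessment
}

function requiresHighlyEffectiveAgeAssurance(a: ChildrensAssessment): boolean {
  // Services not likely to be accessed by children fall outside the children's safety duties.
  if (!a.likelyAccessedByChildren) {
    return false;
  }
  // "Higher-risk" is modelled here as a medium or high outcome; the service's own
  // risk assessment and Ofcom's guidance determine where the real threshold sits.
  return a.riskLevel === "medium" || a.riskLevel === "high";
}
```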

 

How can Yoti’s age assurance methods help platforms comply with Ofcom’s Protection of Children Codes?

Ofcom deems self-declaration and debit card verification insufficient. Additionally, general contractual restrictions on the use of a regulated service by children (such as a site stating in its terms of service that users cannot be under 18) are not accepted as “highly effective” age assurance.

The Codes state that age assurance methods are “highly effective” if they are technically accurate, robust, reliable and fair.

We have a number of highly effective age assurance methods platforms can use, including: 

  • Identity documents – users can verify their age by uploading a government-issued identity document such as a passport or driving licence. We compare the image on the document to an image of the person uploading it, ensuring the correct person is using the document. 
  • Facial age estimation – this accurately estimates a person’s age from a selfie. The same image is used for a liveness check to make sure it’s a real person taking the selfie, and not a photo, video or mask of someone else. Once the technology returns an estimated age, the image is deleted. No documents are needed and no personal details are shared.
  • Digital ID app – users can securely share an age attribute with a platform. Individuals can add their age to their Digital ID app with an identity document that’s subsequently verified by Yoti. Alternatively, they can have their age estimated in the app using facial age estimation.

Most of our age assurance approaches can be used to create a reusable Yoti Key. Yoti Keys let people verify their age once and gain continued access to an ecosystem of websites without having to prove their age again, regardless of whether they are using an incognito or private browser. A Yoti Key, using globally standardised passkey technology, doesn’t store any personal information. This helps people to remain completely anonymous but verified – all on their device.
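To show what an integration can look like in practice, here is a hypothetical, privacy-preserving sketch in which a platform’s server retrieves only a yes/no age attribute rather than a date of birth. The endpoint, field names and response shape are assumptions made for illustration and do not represent Yoti’s actual API.

```typescript
// Hypothetical sketch of consuming an age assurance result server-side.
// The endpoint URL, payload shape and field names are assumptions for
// illustration only - they are not a real provider API specification.
interface AgeCheckResult {
  sessionId: string;
  method: "facial_age_estimation" | "identity_document" | "digital_id";
  over18: boolean; // a simple yes/no attribute rather than a date of birth
}

async function fetchAgeCheckResult(
  sessionId: string,
  apiKey: string
): Promise<AgeCheckResult> {
  // Placeholder URL - substitute your provider's documented endpoint and auth scheme.
  const response = await fetch(
    `https://age-check.example.com/sessions/${encodeURIComponent(sessionId)}`,
    { headers: { Authorization: `Bearer ${apiKey}` } }
  );
  if (!response.ok) {
    throw new Error(`Age check lookup failed with status ${response.status}`);
  }
  return (await response.json()) as AgeCheckResult;
}

// Example gate: only unlock age-restricted areas once the check has passed.
async function canAccessAgeRestrictedContent(
  sessionId: string,
  apiKey: string
): Promise<boolean> {
  const result = await fetchAgeCheckResult(sessionId, apiKey);
  return result.over18;
}
```

Returning only an over-18 attribute keeps data minimisation intact: the platform learns whether the threshold is met, not who the user is or exactly how old they are.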

 

How do platforms decide which age assurance methods to use?

When it comes to the age assurance requirements, Ofcom recommends that services design and implement their protective measures according to the findings from their dedicated risk assessments.

It will also be important for regulated companies to balance effective age checking with privacy. We believe that people should be able to choose between different effective methods to prove their age online. This will be essential for making sure age checking is inclusive and accessible for everyone. 

Platforms must not only deploy age checks but also monitor their effectiveness over time. They should adjust their systems if children are found to be circumventing controls.

 

A note from our Chief Regulatory and Policy Officer

Julie Dawson said, “It is positive to see the Children’s Risk Assessment Guidance and the first version of the Protection of Children Codes of Practice under the UK’s Online Safety Act, and we praise the work of the Ofcom team and the sector over the last few years in preparing for this.

We welcome the fact that Ofcom has confirmed that if services set a minimum age (e.g. 13+), they must apply highly effective age checks, or assume younger children are present and tailor all their content accordingly. We expect Ofcom and the ICO will continue to look jointly at how age assurance can support the checking of ages under 18; for instance at 13 and 16, to support age-appropriate design of services.

We trust that Ofcom is geared up to support a level playing field in terms of enforcement given the large number of companies (150,000) that are in scope of the legislation. It would be very bad policy for good actors to be penalised financially for compliance if it takes months for non-compliant companies to be forced into compliance. 

We are working with a third of the largest global platforms, undertaking over one million age checks daily – including for social media, adult, vaping and gaming sites. This includes 13+/- and 18+/- age gating that is privacy-preserving, reliable and effective. Online age checking is no longer optional, but a necessary step to create safer, age-appropriate experiences online.”

 

The Protection of Children Codes are a crucial part of the Online Safety Act

Ofcom’s Protection of Children Codes of Practice signal a significant step in digital regulation. They are intended to ensure that digital services implement rigorous safety measures to protect children.

The Online Safety Act is the most comprehensive internet safety legislation ever introduced in the UK. It aims to transform how online platforms operate by making them legally accountable for the safety of their users. The Protection of Children Codes are a crucial component of Ofcom’s phased implementation strategy.

For effective, scalable and privacy-preserving tools that can help your platform meet Ofcom’s standards, please get in touch.
