
Last week, the Department for Science, Innovation and Technology published the final draft Strategic Priorities for online safety. We welcome the statement, which highlights the five areas the government believes should be prioritised to create a safer online environment. These areas are: safety by design, transparency and accountability, agile regulation, inclusivity and resilience, and technology and innovation.
These priorities will guide Ofcom as it enforces the Online Safety Act, ensuring platforms stay accountable and users are protected.
We welcome this clear direction and commitment from the government to create safer online spaces. It’s positive that age assurance has been recognised as a key online safety tool that can be used effectively and consistently to create age-appropriate experiences and protect children in different age groups.
In this blog, we look at the priorities and the key areas where they can positively impact online safety.
Safety by design
Embed safety by design to deliver safe online experiences for all users but especially children, tackle violence against women and girls, and work towards ensuring that there are no safe havens for illegal content and activity, including fraud, child sexual exploitation and abuse, and illegal disinformation.
Priority one focuses on the actual design of online platforms. It states that safety by design should be embedded throughout all aspects of the platform to deliver safer online experiences for all users. This includes the design and development of new features and functionalities, as well as how existing features can be made safer. The government believes the goal should be to prevent harm from occurring in the first place, wherever possible.
Users of all ages should be able to enjoy the benefits the internet has to offer. They should be empowered with more control and choice over the content they see online. And all users, especially children, should have better protection from harmful content.
A key part of this first priority is ensuring companies are effectively using age assurance technology to protect children from harm online. The government’s statement says, “we would like to see a focus on developing the evidence-base around age-appropriate experiences to work towards more detailed recommendations for companies on how to protect children in different age groups.”
We welcome this addition and hope the government and Ofcom look at the evidence around different age assurance approaches, and how these can be used to protect children online.
This first priority also states that services should use technologies that are already available to determine if a user is a child. This reflects requests from industry and civil society to use the approaches for age assurance that are already available – rather than pointing to approaches which have not as yet been designed or built.
Highly effective age assurance technology is ready to help platforms comply with the Act. There is a healthy ecosystem of age assurance providers, represented by the trade body the Age Verification Providers Association. As one age assurance provider, we have already completed over 800 million age assurance checks using a range of methods, and we’re doing this for many of the largest global platforms, including Instagram, Facebook, OnlyFans, Avakin Life and Yubo.
The first priority also reflects that some age assurance approaches can already enable age-appropriate experiences for younger children and help platforms to uphold their terms and conditions of access at 13.
Facial age estimation is by far the most practical and inclusive method to help platforms check if a user is above or below a certain age (such as 13, 16 or 18). Yoti facial age estimation technology correctly estimates 99.5% of 6-12 year olds as under 13, and assesses 100% of children aged 5-7 to be under 13.
Another method is a reusable Digital ID app (such as Yoti ID, Post Office EasyID or Lloyds Bank Smart ID). Teenagers can use an identity document, such as a passport (which 86% of 13 year olds in the UK own) or a PASS card, to set up their Digital ID and then easily and securely prove their 13+ status from their phone.
Transparency and accountability
Ensure industry transparency and accountability for delivering on online safety outcomes, driving increased trust in services and expanding the evidence-base to provide safer experiences for users.
Research from Ofcom shows that in 2024, nearly one third of internet users aged 13+ encountered content that made them feel uncomfortable, upset or negative. This highlights the urgent need for safer online environments.
Through greater transparency, the government wants users to feel more empowered and informed about which services to use. They should be able to choose where to spend their time online, knowing what type of content they will encounter on platforms.
The government also expects all providers of regulated user-to-user services to have clear and accessible Terms of Service that explain how they fulfil their illegal content, child safety and complaints reporting duties, and to apply those terms consistently. These Terms should be transparent and easy to understand.
By encouraging openness and transparency, the government aims to build trust in online services.
Agile regulation
Deliver an agile approach to regulation, ensuring the framework is robust in monitoring and tackling emerging harms – such as AI-generated content – and increases friction for technologies which enable online harm.
The government has recognised that regulation needs to be more agile and flexible, to keep pace with the constantly changing digital landscape.
They’ve already been proactive in this area; last year the government made the offence of sharing intimate images without consent a ‘priority (criminal) offence’ under the Online Safety Act. This means platforms will have to proactively prevent this content from appearing, rather than just taking steps to remove it once published. And in the latest version of the Data Bill, the government has said that it will be an offence to ask someone to create a ‘fake’ intimate image of another person without that person’s agreement. It doesn’t matter whether the fake image is actually created or not.
Requiring consent before intimate content is published will help protect innocent people from revenge porn or from their image being used in deepfake content. The combination of a selfie-authenticated e-signature, identity verification, face matching and 18+ age checks means responsible platforms can check that the individuals in nude videos and images are the correct person, are aged 18+ and have given consent for publication.
Often, technological innovations outpace regulation. As safety tech continues to innovate and find solutions for online harms, we hope there will be a mechanism for sharing science, data and evidence-based results across all UK regulators – including Ofcom, the ICO, the Home Office and the Gambling Commission. This will be crucial for ensuring the latest safety tech can be used consistently across sectors to create safer online environments.
Inclusivity and resilience
Create an inclusive, informed and vibrant digital society resilient to potential harms, including disinformation.
The Online Safety Act is not about excluding people from the internet; it’s about giving them safer, age-appropriate experiences. The government has reflected this in their fourth priority, which aims to foster a digital environment that is inclusive, informed, and resilient to online harms.
With children spending an increasing amount of time online, the government would like parents and carers to be supported with information, helping them to understand the harms online and the steps they can take to protect children.
Technology and innovation
Foster the innovation of online safety technologies to improve the safety of users and drive growth.
This priority focuses on fostering innovation that can underpin agile regulation and ensure the continued strengthening of the government’s online safety regime. The government recognises that innovation alone is not enough, though – online service providers must proactively adopt and deploy these solutions to improve user safety.
We’re pleased to see the government supporting the development of more effective age assurance technologies. The government has said that “while age assurance solutions exist, the government recognises the importance of continued innovation to maximise their effectiveness, as well as consistent standards for these technologies. This is so that age assurance solutions preserve users’ privacy to a high standard, while ensuring the effective protection of all children online.”
The government would like Ofcom to recommend age assurance technologies to regulated services to support compliance with their duties under the Act. We hope that Ofcom will look at the science behind age assurance solutions and independent test results, leading to evidence-based regulatory decisions and recognition of the maturity of facial age estimation and digital ID as highly effective for +/- 13 age assurance.
It is encouraging that the government recognises the importance of a regulatory landscape that encourages innovation in age assurance solutions. This includes working to support standards for age assurance technologies, including standards for accuracy. This is particularly important as the accuracy of certain age assurance solutions, such as facial age estimation, continues to improve. We welcome the fact that the government is pointing to the science and encouraging Ofcom to keep abreast of global developments in age assurance.
To learn more about our age assurance solutions and how we can help platforms comply with the Online Safety Act, please get in touch.