Navigating Australia’s online safety laws

Amba Karsondas | 8 min read

As the digital landscape continues to evolve, regulators are prioritising online safety. Countries around the world are introducing new legislation that aims to protect people online and create safer, age-appropriate experiences.

 

What’s the current state of online safety legislation in Australia?

As the internet has become a central part of daily life, Australia’s approach to online safety has evolved over time. Online safety laws were initially more reactive, focused on specific issues such as cyberbullying and child exploitation.

However, over the past decade, legislation has become more comprehensive. New laws aim to prevent harm and promote a safer online environment for all users.

In this blog, we take a look at some of the key pieces of online safety legislation in Australia.

 

Australia’s social media ban for under 16s

The Australian Government has just passed the Online Safety Amendment (Social Media Minimum Age) Bill 2024. The world-first law will prevent anyone under the age of 16 from holding accounts on social media platforms, and it has support from both the governing Labor Party and the opposition Liberal Party.

Currently, most social media platforms have a minimum age requirement for users to sign up to their sites, and users generally self-certify their age. Given how easily people can incorrectly declare their age, the eSafety Commissioner has recommended that platforms implement more robust age verification methods. There are no proposed exemptions to the age limit, so the law will still apply to children who have parental consent, and to existing account holders.

Platforms will have one year from the moment the Bill receives Royal Assent to implement these provisions. The legislation will be reviewed once it is in place, with the eSafety Commissioner assessing how social media platforms “demonstrate they are taking reasonable steps to prevent access” for young people.

Following an amendment to the draft Bill, social media platforms will not be permitted to mandate the use of an identity document, or of a digital identity from a provider certified under the Australian Government Digital ID System (AGDIS), during the age assurance process. As a result, providers will need to offer alternative methods, such as facial age estimation, for users to prove their age.

Platforms face fines of up to A$50m (US$32.5m) for non-compliance.

 

Age requirements under Australia’s social media ban

When the legislation comes into effect, social media platforms will need to check the age of new users to ensure those under 16 years of age can’t create an account. The law specifies that users will not be forced to provide government identification to prove their age. As such, social media platforms will be required to offer users more than one age assurance method.

The Government is due to undertake an Age Assurance Technology Trial to assess the proposed age assurance methods. The Age Check Certification Scheme (ACCS) will lead the consortium conducting the trial, whose goal is to test and evaluate potential solutions. The trial covers age verification, age estimation, age inference, parental certification or controls, technology stack deployments and technology readiness assessments.
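
To make the over/under idea concrete, here is a minimal Python sketch of how a platform might route a sign-up through one of several age assurance methods and retain only an over-16 decision. The method categories mirror those listed for the trial; the function names and the stubbed provider call are illustrative assumptions, not part of the legislation or of any particular product.

```python
from enum import Enum, auto

class AgeAssuranceMethod(Enum):
    """Method categories named in the trial; the enum itself is illustrative."""
    AGE_VERIFICATION = auto()   # e.g. checking a verified date of birth
    AGE_ESTIMATION = auto()     # e.g. facial age estimation
    AGE_INFERENCE = auto()      # e.g. inferring age from account signals
    PARENTAL_CONTROL = auto()   # parental certification or controls

def run_age_check(method: AgeAssuranceMethod, signal: dict) -> int:
    """Stand-in for a call to an age assurance provider. Real providers
    return richer results (confidence scores, liveness checks); this stub
    just reads a pre-computed age from the supplied signal."""
    return signal["estimated_age"]

def may_create_account(method: AgeAssuranceMethod, signal: dict,
                       minimum_age: int = 16) -> bool:
    """Keep only the over/under decision; the exact age is not retained."""
    return run_age_check(method, signal) >= minimum_age

# Platforms must offer more than one route, since a government ID
# cannot be mandated as the only option under the amended Bill.
print(may_create_account(AgeAssuranceMethod.AGE_ESTIMATION,
                         {"estimated_age": 14}))   # False: sign-up blocked
```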

 

Australia’s Online Safety Act

The Online Safety Act (2021) is a cornerstone of Australia’s approach to online safety. The Act expanded the powers and scope of Australia’s eSafety Commissioner, who is now responsible for promoting online safety for Australians and helping to remove harmful online content.

The Act applies to any company whose end users may access content from Australia, including companies that are not based in Australia. It spans social media platforms, online gaming platforms, search engines, internet service providers, messaging services and online content providers.

 

Basic Online Safety Expectations 2022 (the Expectations)

Section 45 of the Online Safety Act provides for Basic Online Safety Expectations to be determined for online service providers. Published in 2022, the Expectations are part of a broader regulatory framework. They outline the minimum standards that online platforms and service providers are expected to meet to keep Australian users safe in the digital environment.

Split into core expectations and additional expectations, they are high-level guidelines that aim to ensure that platforms take the necessary steps to protect users. They include:

  • Ensuring that online platforms have mechanisms in place to prevent and respond to harmful content. This includes extremist content, misinformation and cyberbullying or harassment.
  • Empowering users to control their online experience and engage in safe online activities. Examples include having customisable, user-friendly privacy settings.
  • Requiring platforms to have systems that can quickly and effectively respond to reports of harmful content or behaviour (a simple sketch of such a reporting flow follows this list).
  • Making sure that platforms take reasonable steps to ensure that users can only access age-appropriate content.
  • Prioritising user privacy and handling personal data responsibly. Privacy policies should be clear, accessible and easily understandable. And providers shouldn’t misuse user data for harmful purposes.
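
As a loose illustration of that reporting expectation, here is a minimal Python sketch of how a platform might record a user report and route it for review. All names here (the class, the categories, the queue labels) are our own assumptions for illustration; the Expectations themselves are technology-neutral and do not prescribe any particular design.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ContentReport:
    """A user report of potentially harmful content, as a platform
    might record it. All field and category names are illustrative."""
    report_id: str
    content_url: str
    category: str   # e.g. "harassment", "misinformation", "extremist content"
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def triage(report: ContentReport) -> str:
    """Route a report to a review queue; the most serious categories
    jump straight to a priority queue."""
    urgent = {"child exploitation", "extremist content"}
    return "priority_review" if report.category in urgent else "standard_review"

report = ContentReport("r-001", "https://example.com/post/42", "harassment")
print(triage(report))   # standard_review
```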

The Act also allows the Minister for Communications to determine specific expectations for some online services. The eSafety Commissioner has the power to require providers to report on how they’re meeting these expectations.

 

Industry Codes of Practice

The Online Safety Act also requires the industry to develop new codes to regulate online content. These Industry Codes set out specific, industry-tailored standards. Current individual codes focus on the social media, online dating and gaming industries.

Each code is refined, approved and enforced by the eSafety Commissioner, to ensure that online platforms and service providers meet the expectations outlined in the Online Safety Act. Last year, the Commissioner rejected two proposed codes for failing to provide ‘appropriate community safeguards’.

The Industry Codes of Practice address several aspects of online safety including:

  • cyberbullying and harassment
  • child sexual abuse material (CSAM) and child exploitation
  • illegal or harmful content
  • user safety tools
  • age assurance
  • transparency and accountability
  • support for victims of online abuse

 

Age assurance under Australia’s Online Safety Act

A key component of Australia’s Online Safety Act is age assurance. This element is designed to ensure that children have age-appropriate experiences.

Service providers must take ‘reasonable steps to ensure that technological or other measures are in effect to prevent access by children to class 2 material provided on the service’. Class 2 material is content deemed inappropriate for people under the age of 18. Meeting this requirement includes having appropriate mechanisms in place to assess the age of the user, and the legislation allows for a range of age assurance methods.
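
In engineering terms, the ‘reasonable steps’ requirement amounts to a deny-by-default gate in front of class 2 material. Below is a minimal Python sketch, assuming the platform already holds the result of a prior age assurance check; the function name and signature are our own illustrative assumptions.

```python
from typing import Optional

def can_view_class_2(confirmed_over_18: Optional[bool]) -> bool:
    """Deny by default: grant access only when an age assurance check has
    positively confirmed the user is 18 or over (class 2 material is
    unsuitable for under-18s). An unchecked user (None) is treated as
    underage, so self-declaration alone is never sufficient."""
    return confirmed_over_18 is True

print(can_view_class_2(None))   # False: no age signal, access denied
print(can_view_class_2(True))   # True: confirmed over 18
```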

We welcome the recognition that relying on self-declaration may not be considered appropriate. 

The Act emphasises the protection of children’s personal information online. It seeks to ensure that platforms and services offering products for children adhere to protections against data misuse or exploitation. There is specific emphasis on data collection practices, consent and information sharing.

The Act also provides a mechanism for children and their parents or guardians to report harmful content or interactions. The eSafety Commissioner has the authority to request that social media platforms and websites remove content which falls into this category.

The Commissioner also has the power to take action against content which is deemed explicit, abusive or harmful to minors. This includes material that promotes self-harm, suicide or eating disorders.

To ensure more effective compliance, we believe the age assurance requirements in the Act should be made clearer. We’ve also suggested expanding the definition of the phrase ‘age assurance’ to include assessing whether a user is below or above an age threshold. This data-minimised approach enables users to share less data with organisations, as their exact age is not revealed.
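
The difference is easy to see in code. A data-minimised check derives a single yes/no attribute and never passes the underlying date of birth to the relying platform. The payload shape in this Python sketch is our own illustration, not a standardised format.

```python
from datetime import date

def full_years_between(born: date, today: date) -> int:
    """Whole years elapsed between two dates."""
    return today.year - born.year - (
        (today.month, today.day) < (born.month, born.day))

def age_attribute(date_of_birth: date, threshold: int, today: date) -> dict:
    """Data-minimised response: the relying party learns only whether the
    user clears the threshold, never the date of birth or exact age."""
    over = full_years_between(date_of_birth, today) >= threshold
    return {f"over_{threshold}": over}

# The platform receives a single yes/no attribute rather than a full DOB.
print(age_attribute(date(2010, 5, 1), 16, date(2025, 1, 1)))  # {'over_16': False}
```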

We believe that solutions should be independently audited for their reliability and attack detection rates. And making the risk profiles and risk assessments from age assurance providers publicly available would streamline the process and increase transparency.

 

A growing focus on protecting children online

These measures reflect Australia’s ongoing attempt to keep pace with the digital age. If you’d like to know more about how we’re helping businesses and governments to protect users online, please get in touch.

Please note this blog has been prepared for informational purposes only. You should always seek independent legal advice. 
