Navigating Australia’s online safety laws

Amba Karsondas · 8 min read
[Image: a person holding a mobile phone, screen blurred with a sensitive content symbol. Caption: “Online safety laws – Australia”]

As the digital landscape continues to evolve, regulators are prioritising online safety. Countries around the world are introducing new legislation that aims to protect people online and create safer, age-appropriate experiences.

 

What’s the current state of online safety legislation in Australia?

As the internet has become a central part of daily life, Australia’s approach to online safety has evolved over time. Online safety laws were initially more reactive, focused on specific issues such as cyberbullying and child exploitation.

However, over the past decade, legislation has become more comprehensive. New laws aim to prevent harm and promote a safer online environment for all users.

In this blog, we take a look at some of the key online safety legislation in Australia.

 

Australia’s social media ban for under 16s

The Australian Government has just passed the Online Safety Amendment (Social Media Minimum Age) Bill 2024. The world-first law will prevent anyone aged under 16 from using social media platforms and has support from both the governing Labor Party and the opposition Liberals.

Currently, most social media platforms have a minimum age requirement for users to sign up to their sites. Generally, users self-certify their age. Given how easily people can incorrectly declare their age, the eSafety Commissioner has recommended that platforms implement more robust age verification methods. There are no exemptions to the age limit: the law will apply to under-16s even if they have parental consent, and to existing account holders.

Platforms must have age assurance systems in place by December 2025. The legislation will be reviewed after it takes effect, with the eSafety Commissioner assessing how social media platforms “demonstrate they are taking reasonable steps to prevent access” for young people.

Following an amendment to the draft Bill, social media platforms will not be permitted to mandate the use of an identity document, or of a digital identity provided by an AGDIS-certified provider, during the age assurance process. As a result, platforms will need to offer alternative methods, such as facial age estimation, for users to prove their age.

Under the legislation, platforms face fines of up to A$50m (US$32.5m) for non-compliance.

 

Age requirements under Australia’s social media ban

When the legislation comes into effect, social media platforms will need to check the age of new users to ensure those under 16 years of age can’t create an account. The law specifies that users will not be forced to provide government identification to prove their age. As such, social media platforms will be required to offer users more than one age assurance method.
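
To make the shape of this requirement more concrete, here is a minimal sketch of a sign-up flow that offers a choice of age assurance methods and blocks account creation for under-16s. It is illustrative only: the method names, types and runAgeAssurance function are hypothetical placeholders, not any platform’s or provider’s real API.

```typescript
// Illustrative sketch only. The method names, types and runAgeAssurance()
// function are hypothetical placeholders, not a real platform or provider API.

type AgeAssuranceMethod = "facial_age_estimation" | "digital_id" | "age_inference";

interface AgeAssuranceResult {
  method: AgeAssuranceMethod;
  // Data-minimised outcome: only whether the user meets the minimum age.
  isSixteenOrOver: boolean;
}

// Hypothetical call to a third-party age assurance provider.
async function runAgeAssurance(
  method: AgeAssuranceMethod,
  sessionId: string
): Promise<AgeAssuranceResult> {
  // A real integration would call a vendor SDK or API here.
  throw new Error("not implemented in this sketch");
}

async function signUp(sessionId: string, chosenMethod: AgeAssuranceMethod) {
  // Users must be offered more than one method, and a government ID
  // (or AGDIS-certified digital ID) cannot be mandated as the only option.
  const result = await runAgeAssurance(chosenMethod, sessionId);

  if (!result.isSixteenOrOver) {
    // Under-16s cannot create an account, even with parental consent.
    return { accountCreated: false, reason: "below_minimum_age" };
  }
  return { accountCreated: true };
}
```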

The Government commissioned an Age Assurance Technology Trial to assess the proposed age assurance methods. The Age Check Certification Scheme (ACCS) leads the consortium, whose goal is to test and evaluate potential solutions. The trial covers age verification, age estimation, age inference, parental certification or controls, technology stack deployments and technology readiness assessments. Preliminary results were published in June 2025, with the full results expected later this year.

 

Australia’s Online Safety Act

The Online Safety Act (2021) is a cornerstone of Australia’s approach to online safety. The Act expanded the powers and scope of Australia’s eSafety Commissioner, who is now responsible for promoting online safety for Australians and helping to remove harmful online content.

The Act applies to any company whose end-users may access content from Australia, including companies that are not based in the country. It spans social media platforms, online gaming platforms, search engines, internet service providers, messaging services and online content providers.

 

Basic Online Safety Expectations 2022 (the Expectations)

Section 45 of the Online Safety Act empowers the Minister for Communications to determine basic online safety expectations for online service providers. Published in 2022, the Expectations are part of a broader regulatory framework. They outline the minimum standards that online platforms and service providers are expected to meet to ensure the safety of Australian users in the digital environment.

Split into core expectations and additional expectations, they are high-level guidelines that aim to ensure that platforms take the necessary steps to protect users. They include:

  • Ensuring that online platforms have mechanisms in place to prevent and respond to harmful content. This includes extremist content, misinformation and cyberbullying or harassment.
  • Empowering users to control their online experience and engage in safe online activities. Examples include having customisable, user-friendly privacy settings.
  • Requiring platforms to have systems that can quickly and effectively respond to reports of harmful content or behaviour.
  • Making sure that platforms take reasonable steps to ensure that users can only access age-appropriate content.
  • Prioritising user privacy and handling personal data responsibly. Privacy policies should be clear, accessible and easily understandable. And providers shouldn’t misuse user data for harmful purposes.

The Act also allows the Minister to determine additional expectations for specific kinds of online services. The eSafety Commissioner has the power to require providers to report on how they’re meeting these expectations.

 

Industry Codes of Practice

The Online Safety Act also requires the industry to develop new codes to regulate online content. These Industry Codes set out specific, industry-tailored standards. Current individual codes focus on the social media, online dating and gaming industries.

Each code is drafted by industry, then refined, registered and enforced by the eSafety Commissioner to ensure that online platforms and service providers meet the expectations outlined in the Online Safety Act. Last year, the Commissioner rejected two proposed codes for failing to provide ‘appropriate community safeguards’.

The Industry Codes of Practice address several aspects of online safety including:

  • cyberbullying and harassment
  • child sexual abuse material (CSAM) and child exploitation
  • illegal or harmful content
  • user safety tools
  • age assurance
  • transparency and accountability
  • support for victims of online abuse

 

Age assurance under Australia’s Online Safety Act

A key component of Australia’s Online Safety Act is age assurance. This element is designed to ensure that children have age-appropriate experiences.

Service providers must take ‘reasonable steps to ensure that technological or other measures are in effect to prevent access by children to class 2 material provided on the service’. Class 2 material is content deemed inappropriate for people under the age of 18. Meeting this obligation includes having appropriate mechanisms in place to assess the age of the user, and the legislation allows for a range of age assurance methods.

We welcome the recognition that relying on self-declaration alone may not be considered appropriate.

The Act emphasises the protection of children’s personal information online. It seeks to ensure that platforms and services offering products for children adhere to protections against data misuse or exploitation, with particular focus on data collection practices, consent and information sharing.

The Act also provides a mechanism for children and their parents or guardians to report harmful content or interactions. The eSafety Commissioner has the authority to request that social media platforms and websites remove content which falls into this category.

Additionally, the Commissioner has the power to take action against content which is deemed explicit, abusive or harmful to minors. This includes material that promotes self-harm, suicide or eating disorders.

To ensure more effective compliance, the age assurance requirements in the Act should be made clearer. We’ve also suggested expanding the definition of the phrase ‘age assurance’ to include assessing whether a user is below or above an age threshold. This data-minimised approach enables users to share less data with organisations as their exact age is not revealed. 
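
As a rough illustration of that data-minimised approach, the sketch below shows a platform gating class 2 material on an over/under-18 result rather than an exact age or date of birth. The interface and function names are hypothetical, not a description of any particular provider’s API.

```typescript
// Illustrative sketch only: the provider interface and names are hypothetical.

interface AgeThresholdResult {
  threshold: number;       // e.g. 18 for class 2 material
  meetsThreshold: boolean; // the only attribute shared with the platform
}

// The age assurance provider holds (or estimates) the user's age;
// the platform only ever receives a yes/no answer against the threshold.
function checkAgeThreshold(estimatedAge: number, threshold: number): AgeThresholdResult {
  return { threshold, meetsThreshold: estimatedAge >= threshold };
}

// A platform gating class 2 material learns nothing beyond the over/under result.
function canServeClass2Material(result: AgeThresholdResult): boolean {
  return result.threshold >= 18 && result.meetsThreshold;
}

const result = checkAgeThreshold(34, 18);
console.log(canServeClass2Material(result)); // true
```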

We believe that solutions should be independently audited for their reliability and attack detection rates. And making the risk profiles and risk assessments from age assurance providers publicly available would streamline the process and increase transparency.

 

A growing focus on protecting children online

These measures reflect Australia’s ongoing attempt to keep pace with the digital age. If you’d like to know more about how we’re helping businesses and governments to protect users online, please get in touch.

Please note this blog has been prepared for informational purposes only. You should always seek independent legal advice. 
