Preparing for the EU’s new AI Act

Amba Karsondas · 7 min read

Artificial intelligence (AI) is changing our world at a speed that, just a decade ago, we never could’ve anticipated. As AI finds its way into our everyday lives, regulators are racing to catch up with its development.

In response, the European Parliament voted in March 2024 to adopt the Artificial Intelligence Act, also known as the AI Act. The Act is expected to enter into force in May or June 2024.

This blog looks at what the legislation means for businesses and how they can comply.


Why is there an AI Act?

In recent years, it seems as though AI is everywhere. It’s shaping the world around us and has many useful applications: from unlocking your phone with your face, to personalising your social media feeds, to supporting diagnoses in healthcare.

But as AI becomes increasingly widespread, regulators have grown concerned about its risks and limitations. There are growing demands for ethical safeguards, and calls for AI systems to be regulated so that they are designed to uphold our fundamental rights, values and freedoms.

For this reason, the EU proposed the AI Act in 2021. It’s worth noting that even in the time the Act has taken to move through the relevant legislative bodies, there have been significant developments in the world of AI. The explosive growth of generative AI over the past year meant the Act had to be significantly revised while it was still being passed.

We can therefore expect more legislation to emerge as AI continues to evolve.


What does the AI Act do?

The AI Act marks a milestone as the first comprehensive AI law in the world. It establishes a common legal framework for the development, marketing and use of AI. The Act aims to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly.

It aims to foster trustworthy AI in the EU by:

  • ensuring that AI systems respect fundamental rights, safety, and ethical principles
  • addressing risks of very powerful and impactful AI models
  • encouraging a single EU market for AI

The Act also stresses that high-risk AI systems shouldn’t rely solely on automation. Instead, they should be overseen by humans to reduce risk and minimise any harmful outcomes.


Who is affected by the AI Act?

With some limited exceptions, the AI Act applies to almost all sectors across the EU. It affects all parties involved in the development, use, import, distribution or manufacture of AI systems in the EU. Organisations based outside the EU must also comply if their AI systems are placed on the market or used within the EU.

The Act splits AI systems into four main risk categories, summarised in the sketch after this list. They are:

  1. unacceptable risk – this is the highest level of risk. Examples include subliminal techniques or any system that exploits the vulnerabilities of children. Any AI system in this category is prohibited under the Act.
  2. high-risk – these systems are subject to particular legal requirements. Businesses must complete conformity assessments and sign a declaration of conformity before the system is made publicly available, and must register the AI system in a dedicated EU database. The provider must also meet specific security, transparency and quality controls, including a risk management system, robust data governance processes and human oversight. One high-risk example is AI used in education or vocational training: a system that scores exams may determine someone’s access to education or the professional course of their life.
  3. limited risk – these systems carry a risk of manipulation. For these, the AI Act imposes specific transparency obligations. Users must be informed that they are interacting with AI so they can make an informed choice about whether to continue using the system. Examples include chatbots and generative AI that is not considered high-risk.
  4. minimal or no risk – this is the lowest level of risk and includes AI systems such as spam filters or AI-enabled games. These systems face no additional obligations or restrictions under the Act.
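
To make these tiers concrete, here’s a minimal sketch in Python of the four categories and the headline obligations described above. The names and structure are illustrative assumptions, not part of the Act or any official tooling.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"  # prohibited outright
        HIGH = "high"                  # conformity assessment, EU database, oversight
        LIMITED = "limited"            # transparency obligations
        MINIMAL = "minimal"            # no additional obligations

    # Illustrative mapping from each tier to the headline obligations above.
    OBLIGATIONS = {
        RiskTier.UNACCEPTABLE: ["prohibited - may not be placed on the EU market"],
        RiskTier.HIGH: [
            "complete a conformity assessment and sign a declaration of conformity",
            "register the system in the EU database",
            "implement risk management, data governance and human oversight",
        ],
        RiskTier.LIMITED: ["inform users that they are interacting with AI"],
        RiskTier.MINIMAL: [],
    }

    for tier in RiskTier:
        print(tier.value, "->", OBLIGATIONS[tier] or ["no additional obligations"])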


How can businesses comply with the AI Act?

After the AI Act enters into force, businesses will have between 6 and 24 months to meet its full requirements. The exact time frame depends on the risk level of the AI system.

It’s often more costly and complex to ensure that a system complies with legislation once it has been developed and deployed. Therefore, businesses should have already started to assess their risks and adapt their processes. Alongside seeking comprehensive legal advice, you should:

  • Work out your organisation’s relationship to AI. Obligations will vary depending on whether your company is a provider, deployer, importer, distributor or a party affected by AI systems.
  • Map your current use of AI systems and models to understand your company’s exposure to AI (see the sketch after this list). As part of this process, assess how these systems link to real-world use cases; this will help you understand their practical implications.
  • Identify and evaluate the risks associated with your company’s AI systems. Consult the standards recommended by the European AI Office and the European Commission, and design new systems and adapt existing ones in line with the new regulation.
  • Build a framework that ensures each process is regularly reviewed. As AI is still evolving rapidly, it’s highly likely that existing legislation will change and new rules will emerge.
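
As a starting point for the mapping and review steps above, here’s a minimal sketch of what an internal AI inventory record could look like. Every field name and the review interval are illustrative assumptions, not requirements from the Act.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class AISystemRecord:
        # One entry in an internal AI inventory (illustrative fields).
        name: str
        role: str                   # e.g. "provider", "deployer", "importer"
        use_cases: list[str]        # real-world uses the system is linked to
        risk_tier: str              # e.g. "high", "limited", "minimal"
        last_reviewed: date
        open_actions: list[str] = field(default_factory=list)

        def review_due(self, today: date, interval_days: int = 180) -> bool:
            # Flag records whose periodic review is overdue
            # (the 180-day interval is an assumption, not a legal requirement).
            return (today - self.last_reviewed).days > interval_days

    inventory = [
        AISystemRecord(
            name="exam-scoring-model",
            role="deployer",
            use_cases=["scoring vocational exams"],
            risk_tier="high",
            last_reviewed=date(2024, 1, 15),
            open_actions=["complete conformity assessment"],
        ),
    ]

    overdue = [r.name for r in inventory if r.review_due(date.today())]
    print("Reviews overdue:", overdue)

Keeping records in one structured place like this makes it easier to show regulators what you use, why, and when it was last assessed.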


Who will oversee the AI Act?

The AI Act brings in strict requirements for providers and deployers of AI systems. Non-compliance can result in penalties of up to €35 million or 7% of a company’s global annual turnover, whichever is higher. The exact figure depends on the size of the company and the severity of the infringement.
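
As a quick illustration of how that cap works (a sketch using only the figures quoted above):

    def max_penalty_eur(global_turnover_eur: float) -> float:
        # Fine cap: EUR 35m or 7% of global annual turnover, whichever is higher.
        return max(35_000_000, 0.07 * global_turnover_eur)

    # e.g. a company with EUR 1bn in global annual turnover faces a cap of EUR 70m
    print(f"EUR {max_penalty_eur(1_000_000_000):,.0f}")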

The newly created European AI Office will oversee the Act’s enforcement and implementation. The Act also calls for the introduction of a European Artificial Intelligence Board. The national regulators designated by each Member State will be represented on the Board, and their main task will be to ensure compliance with the regulation. The Board will also advise on secondary legislation, codes of conduct and technical standards.


Adapting to new AI regulations

Florian Chevoppe-Verdier, Public Policy Associate at Yoti, said: “Artificial Intelligence is becoming increasingly present in our daily lives, holding tremendous potential to enhance society, provided we identify, understand, and mitigate its potential shortcomings. Although the EU’s AI Act represents a distinct regulatory approach from those of the UK and US, we are likely to see a ‘Brussels effect’, wherein EU regulations could establish a global standard, akin to the GDPR’s reach.

Navigating the assessment of AI systems and adapting to a rapidly evolving regulatory landscape will undoubtedly pose challenges for businesses in the years ahead, but the AI Act also presents them with an opportunity to thrive and contribute positively to societal advancement.”

If you’d like to know more about how we use AI responsibly, please get in touch.
