A guide to the European Commission’s proposed legal framework for regulating high-risk AI systems

Ferreneik Betton · 7 min read

On 21 April 2021, not long after a leaked portion had caused a stir, the European Commission published its proposed legal framework for the regulation of artificial intelligence (“AI”). Whilst only a first draft, and therefore subject to the push and pull of the amendment (or ‘trilogue’) process over the coming months, it marks an important milestone in the European Commission’s journey to engender a culture of ‘trustworthy’ artificial intelligence across the European Union.

The proposal has important implications for the developers of biometric systems, like Yoti. Although it will undergo a number of revisions before the final framework is published, it is worth taking stock of the proposal as it stands now.


The Framework’s Scope

The draft legal framework intends to provide harmonised rules for the use of AI systems. The European Commission acknowledges the many benefits that can be brought about through the deployment of AI and has attempted to build a framework that is “human-centric”, embedding trust, safety and respect for human rights into AI systems.

Currently, the framework places less emphasis on the training of AI per se, even though training is one of the most important stages in an AI system’s lifecycle. Instead, the framework focuses on the use of AI systems. That said, many of the rules in the framework do concern the design of AI. For example, an ‘ex-ante conformity assessment’ (to you and me, checking that something does what it is supposed to do before you deploy it) will require some consideration of what happens in the period before an AI system has been deployed.

In addition, the proposal sets out broad parameters for training, validation and testing datasets: they must be relevant, representative, free of errors and complete, and have appropriate statistical properties in relation to the people who will be subject to the AI system. This means that statistical bias must be mitigated. Yoti is transparent about statistical bias in its age estimation software and has publicly described how it has been mitigated.
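
To make the dataset requirement a little more concrete, here is a minimal sketch of the kind of per-group error-rate check a provider might run to detect statistical bias. The group labels, toy data and tolerance are illustrative assumptions on our part, not figures from the proposal.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy data: per-group (predicted, actual) outcomes for a binary check.
records = [
    ("group_a", 1, 1), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_b", 1, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]
rates = error_rates_by_group(records)

MAX_GAP = 0.05  # illustrative tolerance, not a figure from the proposal
if max(rates.values()) - min(rates.values()) > MAX_GAP:
    print("Potential statistical bias across groups:", rates)
```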

The framework takes a risk-based approach, differentiating between uses of AI that create an unacceptable risk, a high risk, and a low or minimal risk. A number of the much-talked-about provisions in the legal framework are those that apply to high-risk AI systems. Certain types of biometric technology are among the few specific uses of AI to which the European Commission decided to give special attention.

Because of the need to test innovative AI systems, like biometric technologies, the proposed framework encourages EU Member States to develop regulatory sandboxes. This feature will be welcomed by tech companies because it lets us develop innovative solutions that respect individuals’ rights and are in line with our core values.

In that same vein, the European Commission suggests that processing special category data for the purposes of bias monitoring, detection and correction in high-risk AI systems is a matter of substantial public interest. Therefore, as long as the processing is strictly necessary for one of those purposes, it will be permitted under the General Data Protection Regulation (“GDPR”).


High-Risk Biometric Technologies

Annex III of the proposal lists the use cases that fall into the high-risk category. Biometric identification systems are potentially high-risk, depending on whether they are intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons. Although the definition leaves something to be desired in terms of clarity, it appears from the European Commission’s discussion of real-time remote biometric identification that it is primarily concerned with what is often known as one-to-many matching. For example, when the police scan the faces of members of the public to check against a watch list, they are conducting one-to-many matching, so that use would be categorised as high-risk. In contrast, where a bank uses an embedded identity verification system to onboard new customers, that probably would not count as high-risk because there is no checking against a gallery of existing faces or biometric templates; the system simply compares one face to one claimed identity.
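
The distinction is easier to see in code. Below is a minimal sketch that assumes faces have already been converted into embedding vectors by some model; the threshold and function names are illustrative, not drawn from the proposal or any real product.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

THRESHOLD = 0.9  # illustrative decision threshold

def verify_one_to_one(probe, enrolled_template):
    """1:1 verification: does this face match one claimed identity?
    Typical of onboarding, where no gallery search takes place."""
    return cosine_similarity(probe, enrolled_template) >= THRESHOLD

def identify_one_to_many(probe, watch_list):
    """1:N identification: is this face anyone in a gallery?
    The pattern the proposal treats as potentially high-risk."""
    return [name for name, template in watch_list.items()
            if cosine_similarity(probe, template) >= THRESHOLD]
```

In the bank example above, only something like verify_one_to_one would run; the police watch-list scenario corresponds to identify_one_to_many.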

Based on the above, Yoti’s identity verification and age verification services should be unaffected. Nonetheless, further clarification of the scope of high-risk biometric identification would be welcomed.

Ex-ante conformity assessments are one of the mandatory provisions imposed on high-risk systems. It’s particularly noteworthy that a conformity assessment extends to the examination of source code, a point that has been contentious in the past due to potential business confidentiality ramifications. 

In addition to conformity assessments, high-risk AI systems have to be built in such a way that they can be effectively overseen by humans while the system is in use. This will force some AI systems to be fitted with mechanisms allowing human intervention, as well as allowing humans to interpret the methodology behind the AI system’s output. Given how notoriously difficult it can be to understand how an AI system has come to a conclusion, particularly where machine learning models are relied on, this is a radical proposal. It will have practical, as well as financial, implications for the developers of high-risk biometric systems.
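
As an illustration only, one way such an oversight mechanism could be built is to route borderline outputs to a human reviewer and attach a human-readable explanation to every decision. The names and thresholds in this sketch are our own assumptions, not a description of any actual system.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str        # "accept", "reject" or "refer_to_human"
    confidence: float
    explanation: str    # a human-interpretable note on how the score arose

AUTO_THRESHOLD = 0.95   # illustrative: outside this band, a human decides

def decide(match_score: float, features_used: list) -> Decision:
    explanation = (f"score={match_score:.2f}; "
                   f"based on: {', '.join(features_used)}")
    if match_score >= AUTO_THRESHOLD:
        return Decision("accept", match_score, explanation)
    if match_score <= 1 - AUTO_THRESHOLD:
        return Decision("reject", match_score, explanation)
    # Borderline scores are never decided automatically.
    return Decision("refer_to_human", match_score, explanation)
```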

Finally, it’s worth drawing attention to the record-keeping requirements in the European Commission’s proposal. High-risk biometric systems are expected to be built with the capacity to log data such as the input data leading to a match, as well as the identification that resulted from it. It is not clear how this intersects with the data minimisation and storage limitation principles under the GDPR, because responsible biometric system providers will want to delete all input data immediately after it has been processed.
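
One possible way to reconcile the two duties, sketched here under illustrative assumptions (the field names, hashing choice and helper function are all made up), is to log only derived metadata about a match while discarding the raw input.

```python
import hashlib
import json
import time

def log_match_event(input_image: bytes, matched_id: str, score: float) -> dict:
    """Record that a match occurred without retaining the biometric input."""
    event = {
        "timestamp": time.time(),
        # A one-way digest lets the event be audited later without
        # storing the raw biometric data itself.
        "input_digest": hashlib.sha256(input_image).hexdigest(),
        "matched_id": matched_id,
        "score": round(score, 3),
    }
    # Drop our reference to the raw input; a real system would still need
    # a retention policy guaranteeing deletion upstream.
    del input_image
    return event

print(json.dumps(log_match_event(b"fake-image-bytes", "record_42", 0.9731)))
```

Whether a digest of the input would satisfy the logging duty while honouring storage limitation is exactly the kind of question the final text will need to answer.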


Non-high Risk Biometric Technologies

Because of the emphasis on ‘remote’ identification, many day-to-day uses of biometric technologies will not be considered high-risk. For example, in-person authentication or biometric analysis wouldn’t currently be considered high-risk. This means that the use of tools such as Yoti’s age estimation in a retail environment or on an online platform should be unaffected.

Although non-high-risk biometric systems might not have to adhere to the stricter rules in the proposal, there are still parts of the proposal that will have an impact on such systems. For example, there are transparency obligations, although as currently drafted it is unclear whether they do anything more than restate existing requirements under the GDPR.

In addition to the transparency obligations, it will be interesting to see how the provisions encouraging codes of conduct for non-high-risk AI are amended over time. Given that some companies and industry consortia have already attempted to develop self-regulatory structures for biometric technologies, such codes of conduct might not be a large step for the industry.

Yoti has developed an ethical framework in order to ensure that we always do right by our users – and wider society. Codes of conduct could help ensure that the rest of the biometric technology market embeds similar responsible business practices.


Next steps for the framework

The Commission’s proposed framework has already generated a huge amount of discussion. No doubt this will continue as refinements occur during the trilogue process. Given that the GDPR took around four years to move from proposal to adoption, it could be some time before a final AI regulatory framework is published. Until then, we will continue to keep a close eye on developments as they occur.