ENISA report on Remote ID Proofing Good Practices

Matt Prendergast · 2 min read

The European Union Agency for Cybersecurity (ENISA) is the Union's agency dedicated to achieving a high common level of cybersecurity across Europe. The Agency works with stakeholders, sharing knowledge to strengthen trust in the digital economy and infrastructure.

We were very happy to contribute, alongside colleagues from other stakeholders, to ENISA’s report on Remote ID Proofing – Good practices. 

The report responds to recent developments in attacks, particularly deepfakes, which have raised concerns about identity proofing. Its goal is to increase awareness and assist organisations with risk analysis. The report identifies two categories of attack:

  1. Biometric presentation and injection attacks against the face
  2. Presentation and injection attacks against the ID document

The report concludes with the following findings and proposed measures as potential ways forward:

  1. Identity proofing is a critical component of today's digital world, and the attack landscape is constantly evolving and becoming increasingly complex.
  2. Regulation of identity proofing is asymmetric across Member States.
  3. Sharing technical knowledge, information and practices between stakeholders would help build awareness.
  4. Different levels of assurance are needed for different use cases; no single solution fits all.
  5. Human reviewers should check documents for fraud attempts, alongside automated fraud detection systems.
