
Samuel Rowe

New JMLSG guidance recognises the power of digital identity

On 1 June 2020, the Joint Money Laundering Steering Group (JMLSG) published its revised guidance on what is expected of regulated financial services entities in relation to the prevention of money laundering and terrorist financing. The new guidance recognises the central role that digital identity and robust biometric technologies can play in ensuring regulated entities meet their anti-money laundering (AML) and counter-terrorist financing (CTF) obligations.

What is the JMLSG?

The JMLSG is a private sector organisation that produces guidance to help the financial services sector meet its legal obligations in relation to AML and CTF. Its guidance isn't legally binding, but it receives ministerial sign-off from the Treasury, so it's certainly a persuasive set of documents. The JMLSG may come from the private sector, but it isn't a lobbying organisation, which means its guidance can be trusted to balance the competing needs of customer protection, market competition and regulatory certainty.

What's so good about the revised guidance?

The revised guidance builds on the EU Fifth Anti-Money Laundering Directive (5AMLD), which the UK government transposed in The Money Laundering and Terrorist Financing (Amendment) Regulations 2019.

It acknowledges that customer due diligence (CDD) methods are changing. Previously, an individual would have had to be met in person and show their physical documents to gain access to a financial service or product. As technology developed, financial institutions became able to rely on the triangulation of data from vast electronic datasets to check that the person before them was really who they claimed to be. Now, it's possible to rely on smaller amounts of higher quality data held by digital identity platforms to perform CDD.

The revised guidance explicitly recognises that digital identity platforms can be relied upon as long as they're secure from fraud and misuse and provide an appropriate level of assurance that the customer is who they say they are. At several points, it discusses the use of "primary identity documents", like a passport, combined with biometric data to perform CDD. It also notes that where a primary identity document is checked cryptographically, the digital identity platform can rely on it as the sole source of CDD information. Finally, it discusses the importance of preventing people from pretending to be the prospective customer, or 'mitigation of impersonation risk', which it identifies as being achieved through the use of biometric data, among other methods.

Yoti for CDD requirements

That's exactly how Yoti works. Yoti allows users to securely store and share personal information obtained from a government-issued identity document, like a passport. Users with a chipped document are cryptographically verified using the NFC capability in their mobile device. We make sure the customer is a real person and that their face matches their ID document through a combination of biometric recognition technology and powerful anti-spoofing methods. This provides a high level of assurance that the customer's identity exists in the real world.

The revised JMLSG guidance recognises the advances in, and advantages of, digital identification combined with robust biometric technologies. Financial institutions should feel reassured that they can use digital identity platforms, such as Yoti, to meet their CDD obligations in a robust way.
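To make the chip-verification and face-matching steps more concrete, here is a minimal, illustrative sketch of how an ICAO 9303-style 'passive authentication' check and a biometric face comparison might fit together. This is not Yoti's actual implementation: the function names, the SHA-256 hash algorithm and the 0.6 similarity threshold are all assumptions for illustration, and a real system would also validate the Document Security Object's signature chain back to the issuing country's certificate authority.

```python
# Illustrative sketch only, not Yoti's implementation. Assumes the passport's
# data groups (DG1 holder details, DG2 photo, ...) and the Document Security
# Object (SOD) have already been read from the chip over NFC, and that a
# face-recognition model has produced fixed-length embeddings for the chip
# photo and a live selfie. Hash algorithm and threshold are assumptions.
import hashlib
import numpy as np

def verify_data_group_hashes(data_groups: dict[int, bytes],
                             sod_hashes: dict[int, bytes],
                             algorithm: str = "sha256") -> bool:
    """Passive authentication step: each data group read from the chip must
    hash to the value the issuing state signed into the SOD. (A full check
    would also validate the SOD's signature against the country's CSCA.)"""
    for dg_number, dg_bytes in data_groups.items():
        expected = sod_hashes.get(dg_number)
        if expected is None or hashlib.new(algorithm, dg_bytes).digest() != expected:
            return False  # chip contents altered, substituted, or unsigned
    return True

def faces_match(chip_photo_embedding: np.ndarray,
                selfie_embedding: np.ndarray,
                threshold: float = 0.6) -> bool:
    """Compare the chip photo (DG2) with a live selfie by cosine similarity
    of their embeddings; the threshold is an illustrative placeholder."""
    similarity = float(np.dot(chip_photo_embedding, selfie_embedding) /
                       (np.linalg.norm(chip_photo_embedding) *
                        np.linalg.norm(selfie_embedding)))
    return similarity >= threshold
```

In a real deployment, an anti-spoofing (liveness) check would run alongside the face comparison, so that a photograph or video of the genuine document holder could not be used to pass the match.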

Developing our facial age estimation tech through roundtable discussions

Here at Yoti, we're on a mission to become the world's most trusted identity platform. This isn't something we plan on doing on our own, but with the input, expertise and knowledge of people from all across society. We have our Guardian Council of influential individuals who ensure that we always seek to do the right thing, and that we are transparent about what we are doing and why. We also have an internal trust committee, who oversee the development and implementation of our ethical approaches. Earlier this year, we held two roundtables with experts in their fields to discuss our approach to responsible research and the development of AI tools. We wanted to share the outcomes, so here they are.

The first roundtable: An introduction to our age estimation tech

Gavin Starks, the newest addition to our Guardian Council, hosted this roundtable in January, which brought together the likes of Yo-Da, the University of Warwick and Home Office Biometrics Ethics Committee, Women Leading in AI, Keele University, and techUK, to name a few.

The AI tool we discussed was our revolutionary facial age estimation technology (formerly known as Yoti Age Scan). It is currently available for people aged 13 and over, but we are looking at opening it up to younger children too. We want to ensure that people understand exactly how it works and can be reassured by the steps we've taken to mitigate bad outcomes for individuals. We've also published a white paper, which you can find here.

In the session, we demoed a self-checkout machine that had been integrated with our facial age estimation, so everyone could see exactly how the technology worked. We then looked at the white paper to make sure everyone had a more in-depth understanding.

This roundtable was hugely insightful and left us with a lot to think about. We've been taking steps to ensure our approach to responsible AI is as robust as possible. The session also raised a discussion about how we obtain user consent to use their data for R&D purposes. We're now working on granular opt-out choices for individuals.

The second roundtable: Ethical challenges

We're really proud of the positive impact our facial age estimation is already having, such as our work with Yubo in making their community safe for everyone. However, at the moment our age estimation technology only works for people over 13 years old. We want to make sure that it works for everyone.

Ensuring our facial age estimation works for under-13s, and doing so in our usual responsible way, involves plenty of challenges. This led us to hold our second roundtable, focused on identifying these challenges before we reach them. We were lucky to have Gavin Starks host again, and were joined by representatives from the Children's Commissioner for England, the NSPCC, the ICO and GCHQ, amongst others.

Here's what happened

We organised the session using a framework created by Doteveryone, a London-based think tank that champions responsible technology to build a fairer future. Once we had explained the age estimation technology behind our product and grounded it in the wider context, we split the attendees into smaller groups. Then we horizon-scanned for the intended and unintended, positive and negative, consequences of developing and deploying our facial age estimation for under-13s. Just like the first roundtable, the session involved deep deliberation and produced some valuable insights.
One of the unintended positive consequences identified was that facial age estimation for under-13s might increase the autonomy of children at a time when they are forging their own identities. However, we also raised the concern that the technology might facilitate the exclusion of young people from digital spaces.

What's next?

We'll use this feedback to help us make age estimation for under-13s as robust as possible. We've got a lot planned, such as further sessions with industry leaders. We'll keep you posted on our progress and would love to hear your thoughts.
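As a rough illustration of how an age estimation result might gate an age-restricted action, such as the self-checkout demo from the first roundtable, here is a hedged sketch. The function names, the buffer value and the escalation behaviour are all hypothetical; real thresholds would be set per use case and jurisdiction, informed by the model's measured accuracy.

```python
# Hypothetical sketch of a threshold-plus-buffer decision around a facial
# age estimation result; none of this reflects Yoti's production logic.
from dataclasses import dataclass

@dataclass
class AgeDecision:
    approved: bool
    needs_human_check: bool  # fall back to staff or a document check

def check_age(estimated_age: float,
              required_age: int,
              buffer_years: float = 7.0) -> AgeDecision:
    """Approve automatically only when the estimate clears the required age
    by a safety buffer that absorbs the model's estimation error; anything
    closer to the line is escalated rather than rejected outright."""
    if estimated_age >= required_age + buffer_years:
        return AgeDecision(approved=True, needs_human_check=False)
    return AgeDecision(approved=False, needs_human_check=True)

# Example: an 18+ purchase at a self-checkout. An estimate of 26 clears the
# 18 + 7 buffer; an estimate of 22 is escalated for a human or document check.
print(check_age(26.0, 18))  # AgeDecision(approved=True, needs_human_check=False)
print(check_age(22.0, 18))  # AgeDecision(approved=False, needs_human_check=True)
```

The buffer-then-escalate pattern means an inaccurate estimate inconveniences the customer at worst; it never grants access to an age-restricted product on its own.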
