Yoti blog

Stories and insights from the world of digital identity

Digital identity in the last mile: grassroots research overview

We recently commissioned research to better understand digital identity needs in the developing world. Specifically, we wanted to understand how grassroots nonprofits could benefit from a digital identity platform to conduct their humanitarian work in Africa and South East Asia.

Today, we're publishing an overview of the project. The short document outlines the initial thinking behind our research, our thoughts on an offline product, what we sought to learn from the field-based research and a few of our higher-level findings.

Download the 'Digital identity in the last mile' overview here. Please read through it and share it far and wide through your networks. And please do get in touch if you have any questions or are interested in helping with the development, or piloting, of our offline solution.

Background

A few months ago, we attended one of our first technology-for-development conferences since ramping up our humanitarian work earlier in the year. ICT4D2018 brought together public, private and civil society organisations from across the humanitarian and international development community. We were there representing Yoti, sharing some of our early thinking around how our identity solutions might support their wider humanitarian efforts.

Feedback from attendees indicated an opportunity for a fully offline identity solution, working purely at a local (rather than national) level. Yoti already has a product in the pipeline – Yoti Key – which can be adapted for this purpose. Before embarking on product development, we sought to clarify and expand our understanding of how a simple, offline identity product might work in different last mile and near last mile humanitarian contexts. So, we commissioned field-based researchers in Africa and South East Asia to better understand local nonprofit needs.

Getting to grips with GDPR: Rights in relation to automated decision making and profiling

The eighth blog post in our series on GDPR rights is about the rights in relation to automated decision making. So far, our series has covered:

- Your right to be informed
- Access rights
- Correction rights
- Deletion rights
- Objection rights
- Restriction rights
- Portability rights

Part 8: Rights in relation to automated decision making and profiling

This is not a new right, and the GDPR wording is almost identical to that in the EU Directive it replaces. The aim of this right is to provide safeguards for individuals against the risk that a potentially damaging decision is taken without human intervention. In the GDPR this provision is part of the section on individual rights and is set out as follows.

Individuals have the right not to be subject to a decision when:

- it is based only on automated processing; and
- it produces a legal effect or a similarly significant effect on the individual.

The right does not apply if the decision:

- is necessary for entering into or performance of a contract between an organisation and the individual;
- is authorised by law (such as for the purposes of fraud or tax evasion prevention);
- is based on explicit consent; or
- does not have a legal or similarly significant effect on someone.

If the right applies, organisations must make sure that individuals are able to:

- get human intervention;
- express their point of view; and
- get an explanation of the decision and challenge it.

So what's new?

The new aspect is that the data protection regulators have interpreted this provision as a prohibition and not as a right. Their view is that companies cannot carry out solely automated decision-making at all unless it's necessary for a contract, they have your explicit consent or it's authorised by law. For the UK this is a different interpretation of the same provision in the old law. Some other EU regulators have always thought of this provision as a prohibition, but there has never been any guidance or enforcement from them on the topic.
The UK Data Protection Act 2018 has some extra obligations for organisations doing solely automated decision-making on the basis that it is authorised by law. In these cases, organisations must, as soon as reasonably possible, notify you in writing that they have made an automated decision. You then have one month to request they either reconsider the decision, or take a new decision that is not based only on automated processing.

What does it mean in practice?

Many organisations use automated decision-making in some way as a core part of their business. For example, when you go on a price comparison website to get quotes, the results you get back are generated automatically based on the information you provide and the information from the different organisations represented on the site. When your bank contacts you to say they think someone has cloned your card or is trying to use your details fraudulently, they know this because of automated processing in the background that spots unusual activity on your account. When you identify yourself using a pass card or biometrics, it is an automated system that checks whether your credentials match what is registered.

It is important to remember, though, that the GDPR right is about automated decisions that have a legal or similarly significant effect on you. A legal effect might be, for example, a decision that leads to a contract being cancelled, or you being denied a particular social benefit granted by law, such as child or housing benefit. A similarly significant effect is harder to define, but it might be, for example, a decision that denies you an employment opportunity or puts you at a serious disadvantage. Many automated decisions may not meet this threshold, and while regulators have put out some guidance on what they think the threshold means, it may take some time, and possibly court cases, to agree where the lines in the sand are.
As with all the rights, the organisation also has to be able to verify your identity before taking action as a result of your request.

What is Yoti doing?

We don't make any automated decisions that meet the threshold of the GDPR provision. Our automated decisions relate to the fraud prevention checks we carry out, such as making sure it's really you when you take certain actions in the app, or, when you add a document, making sure you are adding your own, genuine document. If the automated technology fails when you're setting up your account, we have trained staff who can intervene and make a manual decision. If you have problems while using the app, you can always contact our Customer Support team for help.

If you have any questions about this right, contact privacy@yoti.com.

Meet our anonymous age estimation technology

Developing technologies that challenge the status quo is what we do best here at Yoti. Using Artificial Intelligence (AI) for good, our new age estimation technology is set to shake up the way people prove their age online and in person. We're proud of all the work that's gone into making it easy to use, accurate and fair for everyone, no matter your age or ethnicity, so we've released a white paper for people who want to know more.

Making it faster and safer to prove your age

Our facial age estimation is a secure age-checking service that accurately estimates a person's age by looking at their face. Until now, people have had to show ID documents to prove their age – a process that's outdated and easily spoofed, with high-quality fake IDs available online for much less than the official documents they replicate. Others can't afford an ID document, which prevents them from accessing certain goods and services, such as obtaining medicine or casting their vote.

Our facial recognition applications are designed to make life easier for everyone. We believe it is crucial to take a transparent approach when launching new technology that uses facial recognition. With that in mind, we've signed the Safe Face Pledge, which encourages companies using AI to ensure that facial recognition technology is not misused, and released a new white paper that explains how we use AI for good with our facial age estimation.

Key takeaways from the report

Yoti's facial age estimation 'always forgets a face'. Our age estimation technology has been designed with data privacy and security at its core. It does not require individuals to register in advance, nor to provide any documentary evidence of their identity. It neither retains any information about individuals nor any images of them. It simply estimates their age. The algorithm can accurately estimate the age of millions of people in a private and secure way.
It is a scalable solution which is quicker and more accurate than manual ID checks, and can be used in the provision of any age-restricted goods and services, both online and in person.

A robust solution built for the real world

In the last 30 days, our facial age estimation has worked with a live streaming chat room service to check the age of over five million individuals and flag any accounts where the person appears to have stated a significantly false age. This task is important for safeguarding young people and would be prohibitively expensive to do manually each day. Our age estimation tech will go live in supermarkets in Q1 2019 to let people purchase age-restricted goods at self-checkout. Shortly after, our facial age estimation will be available as an option to verify the ages of people visiting UK adult content sites, which will be mandatory as a result of the Digital Economy Act. As regulators review the accuracy of this innovative approach, opportunities will open up globally for the sale and/or collection of age-restricted goods through automated vending and dispensing machines.

For the first time, technology will be able to estimate someone's age with a high degree of accuracy and give people the chance to verify their age in a private and secure way. You can read the white paper here.

Digital identity in the last mile: lessons from South East Asia

Here at Yoti, we believe in the benefits of digital identity for all. As we continue to ramp up our efforts in the humanitarian sector, we recently commissioned research to better understand digital identity needs among grassroots nonprofits in the developing world. The first post in this series of two covered our work in Africa. Read it here. In this post, David Burton – a member of our South East Asia research team – shares his approach and findings from the region.

Digital identity in the last mile

How can we keep our identities safe? It's one of the greatest challenges facing governments, nonprofits and businesses today. Yoti recently asked Glean to help them find out how identity technology could help people in developing countries to stay safe and live better lives. So, our Director of Innovation, Jesse Orndorff, got to work. He set about co-designing a rapid survey and landscaping exercise that helped Yoti understand the challenges faced at grassroots level in the developing world.

Methodology

We've been working in Asia for almost a decade now. In that time we've seen things change a lot, and we have a hopeful perspective on the power of tech to make a difference there. Good tech – tech that's designed with the end user at its heart – can help growing economies to leapfrog entire areas that other economies have been struggling with. It's no exaggeration to say that we've seen good tech change lives everywhere from Cambodia to Indonesia and Pakistan.

The human-centred design approach we took for this research is called the Growth framework. This framework became the basis for our work and shaped the way we conducted interviews. Our method was to ask simple questions and leave enough space to follow up on themes that emerged during the interview. We would then summarise our findings under common headings. We interviewed senior managers and grassroots NGO workers from 11 different organisations, working in 7 different Asian nations.
We asked them how they managed participant identity information, what tools and systems they use, what problems they face in handling IDs, and what opportunities they could see for improving the way they handle IDs in future.

The problems

We learned that there are many common problems related to identity across different areas and sectors. They include:

- Documentation – it's not unusual for someone's identity to be confirmed using documents which often only exist as single copies and are hard or impossible to obtain.
- Verification – new projects often require local officials to confirm people's identities on a case-by-case basis. The process can be lengthy, and people are often prevented from benefitting from a project because their identity can't be confirmed.
- Fraud – bad processes for handling identity can sometimes mean resources are poorly distributed, either because people appear on participant lists more than once, or because verification and documentation are seen as impossible and are therefore not attempted. Insecure ID systems also create space for corruption and abuse of power.

Good ID can change everything

Identity sits at a pinch point which everyone we interviewed saw as vital to their work. Across the board – from community development to microsavings, and disaster response to health – everyone agreed that bad ID solutions hold back promising projects. Almost all projects require the project lead to identify the people in the user group. That means that a good solution for confirming identity, covering both documentation and verification, has huge potential to transform systems and have real impact across a wide range of sectors and activities. Throughout our research, we met a lot of people who were incredibly excited about how a better, more secure solution for proving and storing identity details could help people to be safer and more prosperous, even in very vulnerable circumstances.
Biometrics are very appealing

We wanted to talk about format with our interviewees, to get a sense of how the storage approach for ID could be helpful or frustrating for them. As well as concerns about single-copy documents, which can easily be lost, and the challenge of getting documentation at all, we regularly heard that biometrics are seen as a good way of avoiding fraud and verifying identity easily.

Privacy and security could be a matter of life and death

With the same vehemence as they expressed the need for good ID solutions, our interviewees consistently voiced concerns about privacy. Beyond concerns about commercial exploitation of identity, which are relevant around the world, bad ID management has the potential to directly affect the safety of people in some developing contexts. Where everyday corruption is normal, any central repository of information can be actively used to endanger the lives and livelihoods of community members. It is vital for any solution to be secure, and to have privacy as a high-order design aim.

ID for the win

The opportunities for identity and technology to make a real, lasting difference to people's lives are exciting and extensive. Good ID management can keep people safe and help them to access projects, government services and economic opportunities they would otherwise be excluded from. Having control of their identity can be a major asset to vulnerable people in particular. And it's clear that, with privacy and security at the heart, identity management is a huge opportunity to help people make the most of their resources and their activities.

David Burton
Director of Strategy, Glean
www.glean.net

Getting to grips with GDPR: The right to data portability

The seventh article in our series on GDPR rights is about the right to data portability. Catch up on previous articles about your right to be informed, the access right, correction right, deletion right, objection right and restriction right.

Part 7: The right to data portability

This is a new right under GDPR. Its aim is to allow individuals to easily get back certain personal information so they can do other things with it, or give it to another company. There are two aspects to the right:

- The right to get back some information.
- The right to have that information sent automatically to another organisation.

However, this right does not apply to all information in all circumstances, so it may be of more limited use than you might have thought.

The right to get back some information

The right only applies to:

- personal information that you have provided to an organisation;
- where you have given consent or where the processing is necessary to deliver the product / service; and
- when the processing is carried out by automated means (so not paper files).

An organisation has to provide the information in a structured, commonly used and machine-readable form (such as a CSV file). The data protection regulators have taken a wider view of what information is in scope, and have published an opinion saying that the right also includes 'observed data', meaning user activities or information generated by your use of a product / service. Examples would be raw data processed by a smart meter or connected objects, activity logs, website history, or raw data such as the heartbeat tracked by a wearable device. The European Commission (who drafted the first version of GDPR) do not agree with the regulators and think their view is incorrect. However, the regulators have not amended their guidance, so this may be an area of GDPR that has to be tested in the courts to decide what information the portability right applies to.
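To make the 'structured, commonly used and machine-readable form' concrete, here is a minimal sketch of what a CSV portability export might look like. This is purely illustrative: the attribute names, values and helper function below are invented for the example and are not any organisation's real export schema.

```python
import csv
import io

# Hypothetical personal data held about one user. The attribute names
# and values are invented for illustration only.
records = [
    {"attribute": "full_name", "value": "Jane Example", "source": "passport"},
    {"attribute": "date_of_birth", "value": "1990-01-31", "source": "passport"},
    {"attribute": "email", "value": "jane@example.com", "source": "user-provided"},
]

def export_portability_csv(rows):
    """Serialise records to CSV: a structured, commonly used,
    machine-readable format of the kind the portability right requires."""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=["attribute", "value", "source"])
    writer.writeheader()
    writer.writerows(rows)
    return buffer.getvalue()

print(export_portability_csv(records))
```

Because the output is plain CSV, another organisation (or the individual themselves) can open it in any spreadsheet tool or parse it programmatically, which is exactly the point of requiring a commonly used format.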
The right to have the information sent automatically to another organisation

If you request it, an organisation has to transmit the information directly to another organisation – if this is technically feasible.

What does that mean in practice?

In practice the best way to get all the information an organisation holds about you is by making an access request. However, you may find that certain information is useful to you when changing providers, such as in relation to your bank, mobile phone or energy company. In these cases the portability right might be more helpful. GDPR does not oblige organisations to set up interoperable systems, so it is unlikely that many providers will have the technical ability to port your personal data directly to another organisation. However, some sectors may have already decided to look into this, or may offer it as part of other obligations. It is also possible that some sectors may voluntarily decide to develop their services in a way that offers interoperability.

When does the right not apply?

As set out above, this right only applies to certain data in certain circumstances. The UK's Data Protection Act 2018, which implements GDPR, has exemptions that mean an organisation may not have to comply with your portability request in certain circumstances. For example, where your information is being processed for the prevention or detection of crime, where the organisation is required to disclose it as part of legal proceedings, or where another law requires the organisation to publish the information. The exemptions are not blanket ones, though: they only apply to the extent that complying with the portability right would prejudice the crime prevention purpose or prevent the required disclosure. This means that if an organisation is able to comply, it should do so. The organisation also has to be able to verify your identity before taking action as a result of your request.
Fees and timescales

Under GDPR the organisation has 30 days to respond and cannot charge a fee. However, organisations can charge for 'manifestly unfounded or excessive' requests. They must base the fee on the administrative cost of providing the information. The UK Data Protection Act 2018 allows the Government to set limits on the fees (which they haven't yet put in place). Organisations can also extend the response time to two months depending on the complexity and number of the requests. If they need to extend the response time, they should tell you within the first month.

What is Yoti doing?

We are bringing in the ability to export your attributes directly from the app, and your password information from Yoti Password Manager. You can make a portability request to privacy@yoti.com.
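The response timescales above lend themselves to a small worked example. This is a sketch only, under a stated simplification: GDPR counts the periods in calendar months, which are approximated here as 30 and 60 days for illustration.

```python
from datetime import date, timedelta

def response_deadline(received: date, complex_request: bool = False) -> date:
    """Approximate deadline for responding to a GDPR rights request.

    Assumption: the one-month and two-month periods are simplified
    to 30 and 60 days; the law actually counts calendar months.
    """
    days = 60 if complex_request else 30
    return received + timedelta(days=days)

# A request received on 1 September 2018:
received = date(2018, 9, 1)
simple_deadline = response_deadline(received)          # roughly one month later
extended_deadline = response_deadline(received, True)  # roughly two months later
# Any extension (and the reason for it) must be communicated to the
# individual within the first month:
extension_notice_by = received + timedelta(days=30)
```

The key design point the timescales encode is that the extension is not free: the organisation must still act within the first month, either by responding or by telling you why it needs longer.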

Getting to grips with GDPR: The right to correct data

The third article in our series on GDPR rights is about the correction right. See here for previous articles: the first on your right to be informed, and the second on the access right.

Part 3: The right to correct data

The right to correct inaccurate personal information is an existing right. It has always been the case that if you discover an organisation has inaccurate information about you, you have the right to correct it. It also links with the organisation's responsibility to hold accurate and up-to-date data.

In current UK law this right is set up as one you have to go to court for, and there are three main aspects to it. The court can order an organisation to correct inaccurate data, to add an explanatory note where the accuracy is disputed, and to notify third parties of the correction if they have disclosed your information. In practice, though, you don't need to go to court, as organisations will usually correct inaccurate data if you tell them about it and can provide evidence of the correct data. They will also usually allow you to add an explanatory note to information to show you think it is inaccurate. This is particularly common with credit reference agencies, where they have received information directly from a lender who maintains it is accurate, but that the individual claims is inaccurate. (The right to add an explanatory note to your credit file comes from the Consumer Credit Act.) Organisations may be less likely to voluntarily notify any third parties they have disclosed your information to, unless the third party continuing to hold inaccurate information about you would be detrimental, cause harm or pose other risks.

What's new?

GDPR does not really change the essence of this right and the three key aspects remain. For the UK it is no longer the case that the law requires you to go to court. GDPR allows you to go directly to an organisation to get inaccurate data corrected or to add an explanatory note.
Organisations also have a direct obligation to notify third parties of the correction, unless this proves impossible or involves disproportionate effort. If you ask, the organisation has to tell you which third parties they have notified.

Some of the GDPR rights are connected. For example, one of the scenarios where you have the new right to have personal information restricted is where you dispute the accuracy of information held about you and the organisation is looking into it. Part 6 in this series of blogs will look at the right to have data restricted.

The UK's draft Data Protection Bill to implement GDPR is being finalised, but the current version maintains exemptions in current law that mean an organisation may not have to comply with your request in certain circumstances. The organisation also has to be able to verify your identity before taking action as a result of your request.

Fees and timescales

Under current UK law there are no set timescales for dealing with a correction request, but organisations usually respond without delay. There is no charge for this kind of request. Under GDPR the organisation has 30 days to respond and cannot charge a fee. However, organisations can charge for 'manifestly unfounded or excessive' requests. They must base the fee on the administrative cost of providing the information. The current version of the UK's draft Data Protection Bill provides for the Government to set limits on the fees. Organisations can also extend the response time to two months if the request is complex. If they need to extend the response time, they should tell you within the first month. If an organisation decides it can't comply with your request, it should explain why, without undue delay and at the latest within one month. It should also tell you about your right to complain to the regulator (the ICO).

So what does all this mean?

Not a lot has changed with this right.
The main change is that you have the right to make correction requests directly to the organisation and to add a supplementary statement.

What is Yoti doing?

All the information you add to your Yoti comes from you or your ID document. If your information changes and you need to update it, currently you will need to delete your account and create a new one to add the up-to-date information or document. We know that this is not a great solution and we are working hard to improve things. We have several developments underway that we hope will all be in the app by the end of the summer. These developments will let you manually add an address, and change your address and email. In the autumn we hope to have in place the ability for you to replace an outdated ID document.

You can make a correction request to privacy@yoti.com.

