Yoti blog

Stories and insights from the world of digital identity

The UK's Online Safety Bill: moving towards a safer internet

The Online Safety Bill is the UK Government's chance to make the internet safer for all. It is also a major step forward in the battle against online harms and goes hand in hand with our commitment to protect young people and the vulnerable online. While it's everyone's responsibility to make the internet a safer place, regulation is necessary to make businesses act responsibly.

The Online Safety Bill's key recommendations

Since a first draft was published in May 2021, MPs and peers have made many recommendations to strengthen the Bill. Four stand out:

- What's illegal offline should be regulated online
- Ofcom should issue binding Codes of Practice
- New criminal offences are needed
- Keep children safe from accessing pornography

We're heartened to see that the proposed revisions include the need for age assurance technology to protect children and the requirement to abide by minimum standards, both of which we heartily endorse. We are especially pleased to see that the Bill recommends keeping children safe from accessing pornography:

"All statutory requirements on user-to-user services, for both adults and children, should also apply to Information Society Services likely to be accessed by children, as defined by the Age Appropriate Design Code. This would have many advantages. In particular, it would ensure all pornographic websites would have to prevent children from accessing their content. Many such online services present a threat to children both by allowing them access and by hosting illegal videos of extreme content."

In the UK, it's estimated that 48% of adolescents have viewed pornography online. We hope that this recommendation in particular will help create a shift in business attitudes towards protecting children within digital spaces.

Baroness Kidron, an advocate for children's rights in the digital world, provided a vital statement on age assurance in the report:

"Protecting children is a key objective of the draft Bill and our report. Our children have grown up with the internet and it can bring them many benefits. Too often, though, services are not designed with them in mind. We want all online services likely to be accessed by children to take proportionate steps to protect them. Extreme pornography is particularly prevalent online and far too many children encounter it – often unwittingly."

The Office of the Children's Commissioner revealed that over half of 11–13-year-olds have seen pornography online. Witnesses explained that pornography can distort children's understanding of healthy relationships, sex and consent by, for example, normalising violence during sexual activity.

"Privacy-protecting age assurance technologies are part of the solution but are inadequate by themselves. They need to be accompanied by robust requirements to protect children, for example from cross-platform harm, and a mandatory Code of Practice that will set out what is expected. Age assurance, which can include age verification, should be used in a proportionate way and be subject to binding minimum standards to prevent it being used to collect unnecessary data."

What is age assurance?

Age assurance describes the methods and measures that help to determine a person's age or age range. The word assurance refers to the varying levels of certainty that different solutions offer in establishing an age or age range. It includes both age verification and age estimation, each offering a different level of confidence. To learn more about age assurance, read our article about it here.
A tried and tested solution for the safeguarding of children

We help businesses implement robust age assurance within their services, aligning with the Online Safety Bill and the Age Appropriate Design Code. Platforms can assess age and ensure that children are treated as children: not serving them notifications, avoiding tracking and geolocation, avoiding inappropriate advertising to minors, and providing age-appropriate support and content moderation so that young people can thrive online.

To learn more about what we're doing to follow the standards within the Age Appropriate Design Code, read our article here. We work closely with businesses and regulators to help young people thrive online safely. If you would like to learn more about how you can be a part of the evolving online safety landscape, contact us here.

How the Yoti app is complying with the ICO’s Age Appropriate Design Code

Following the end of its 12-month transitional period, it is now a legal requirement for businesses to comply with the ICO's Age Appropriate Design Code (the Code). It affects any business operating online whose services are likely to be used by children in the UK, particularly in industries such as social media, video or music streaming, gaming and education.

Providing safer environments for children in the digital economy

The Code sets out 15 standards that aim to create a safe space for children to learn, explore and play online. It aims to empower children to be safe online by increasing their data privacy awareness, making language simpler and treating them fairly, whilst minimising data collection. By conforming to the Code, businesses are deemed to be acting in the best interests of the children using their online services. With this default safety setting, children can safely access digital services while the collection and use of their data is minimised.

How we're taking action

As a company, our work is guided by an ethical framework, developed in line with the GDPR and key principles such as Privacy by Design. As such, we found a strong alignment with the Code, but we wanted to focus on two of the fifteen standards:

"Best interests of the child: When you design and develop online services a child is likely to access, the best interests of the child should be a primary consideration."

"Transparency: The privacy information you provide to users, and other published terms, policies and community standards, must be concise, prominent and in clear language suited to the age of the child. Provide additional specific 'bite-sized' explanations about how you use personal data at the point that use is activated."

So we started off with user research. We specifically wanted to see how well a group of teenagers understood our app, age estimation process, services, terms and privacy policy. This helped us pinpoint concepts that were difficult to understand, as well as how easy it was for our users to find information about how their data is used. Our findings encouraged us to change aspects of our app content, our communications and our privacy information.

Changing the content in our app

We are committed to making our app as easy to use as possible for people of all ages. We used the results from our user research to optimise important content and make sure it would be widely understood. We made sure our titles provided greater context and included subtitles to make screens scannable. We explained in greater detail any words that might not be easily understood by younger people, like 'encrypted'. In other cases, we avoided words that could be considered too broad in a given context. For example, we no longer use the word 'biometrics' on our age-gating primer screen and instead swapped it for the phrase 'a scan of your face'. We also made sure to clarify why we needed certain information from our users.

We also improved usability by removing collapsed content so text was easier to read, and added a contact form on our data privacy screen, which we hope will reassure and encourage younger users to ask any questions they may have about how their personal data is being handled.

Communicating with our younger users by improving readability

It's incredibly important that our younger users understand our services as well as they can. To do that, we need to make sure our wording is easy to read.
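As an aside on what a "readability grade" actually measures, the short sketch below computes a grade-level score with the open-source textstat Python library. The example sentences are invented and this is not the workflow we used (we relied on Hemingway and Story Toolz, described next); it is simply an illustration of the kind of score these tools produce.

```python
# Illustrative only: computing a readability grade with the open-source
# "textstat" library. The example sentences are invented.
import textstat

before = (
    "Your personal data is encrypted and can only be accessed by you, "
    "utilising cryptographic keys held exclusively on your device."
)
after = (
    "Your details are locked with a secret key. "
    "Only you hold that key, on your phone."
)

for label, text in [("Before", before), ("After", after)]:
    grade = textstat.flesch_kincaid_grade(text)  # US school-grade reading level
    print(f"{label}: Flesch-Kincaid grade {grade}")
```

A lower grade means a younger reading age, which is why we track it when rewriting content for children.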
We used a number of readability tools to help the editing process. Hemingway helped us pinpoint the key areas for improvement.

[Screenshots: before and after edits]

Once we had made the edits, we ran our new and improved content through Story Toolz to see how much easier it was to read, using seven different readability indices. We made sure that in as many circumstances as possible we were improving the way our services could be understood by young people.

FAQs

If we want children to understand how they can use the app, it's important that our FAQs are approachable and accessible. We hope that by doing this, we become as transparent as possible, in line with the Code's fourth standard. On average, we improved the readability of our FAQs by two school years by using fewer words and simpler alternatives to complex terms. We hope that children will feel more comfortable navigating our app when they have questions about how to use it.

Customer Service Emails

We made our email templates even easier to read; all of them now have a readability grade of 5 or below. That's a reading age of 10 years old.

Making our privacy information easier to understand

Bite-size summaries

We've added bite-sized summaries to sections of our key privacy and terms documents. We initially thought about calling these blue boxes, but quickly realised this wouldn't be accessible to people with visual impairments. Instead, we decided to include an information icon and refer to the summaries as information boxes.

Definitions

We have added a definitions section that explains frequently used key terms in our privacy information, such as "third parties" and "biometrics".

Privacy notice for children and young adults

We've written a simplified privacy policy for children and young adults. Rather than add a definitions section, we decided to find alternative words altogether and write a summary with the key information about how we do privacy at Yoti. This is not a substitute for our detailed privacy policy, but it allows users to understand privacy in a simpler way. We're also designing a new structure for our privacy pages to make them easier to navigate. There's a fixed sub-menu that shows an overview of all sub-pages, and we clearly signpost key pages based on whether you're an end user, corporate client or young adult.

Our products evolve with child safety requirements

The efforts don't stop there. We hope to continually improve our services for children over the coming months and years. Having reviewed our existing app onboarding and other flows, we are planning changes that will be included in our next few releases, so watch this space. While we continue to act in the interest of our users, we encourage all businesses affected by the Code to act with us and create safer spaces for children. If you would like to find out how we can help you and your business comply with the Age Appropriate Design Code, get in touch with us today.

In response to the ICO’s opinion on Age Assurance for the Children’s Code

The Age Appropriate Design Code is a statutory data protection code of practice from the ICO that applies to providers of Information Society Services that are likely to be accessed by children, such as apps, online games, and web and social media sites. The ICO released an Opinion that looks at how age assurance can form part of an appropriate and proportionate approach to reducing or eliminating risks and conforming to the Code. This included opinions on age verification and age estimation technologies. We believe that some of the generalisations made about age estimation do not apply to Yoti's privacy-preserving facial age estimation solution. Please read on for more.

Introduction to Age Estimation

Yoti's Age Estimation using facial images is a secure age-checking service that can estimate a person's age by looking at their face. We consider it to have wide application in the provision of any age-restricted goods and services, both online and in person. Businesses have used our service to perform over 500 million age checks since it launched two and a half years ago, and a growing number of businesses are adopting our solution.

Yoti's Age Estimation using facial images is designed with user privacy and data minimisation in mind. It does not require users to register with us, nor to provide any documentary evidence of their identity. It retains neither any information about users nor any images of them. The images are not stored, re-shared, re-used or sold on, and they are not used to train the model further. The service simply estimates a person's age and then deletes the image. Yoti's Age Estimation using facial images does not create or use biometric facial geometry; the model only looks at a photo of a person. For a detailed explanation of how it works, how it was trained and its accuracy, please see our White Paper.

How it works

Yoti's Age Estimation using facial images is a neural network model trained using machine learning techniques – in other words, an algorithm trained on lots of data to give an estimated age. A face image is passed to the algorithm, which outputs an estimated age based on the images it was trained on. Because the only data used to train the algorithm is face images labelled with the person's age in months and years, the model cannot work out anything other than an estimated age. All the model knows how to do is estimate age. It does not know how to identify someone, how to match faces, or even how to tell if the same person's image is repeatedly submitted to it. If you kept submitting slightly different photos of yourself to the model, it would come out with slightly different estimated ages; things like lighting, camera quality and the angle of the face change the estimated age it outputs. The key point is that the model cannot say to itself: "I have seen this face before and I can use that information to estimate the age on this image."
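To make the description above a little more concrete, here is a minimal sketch of an age-regression network of the general kind described: a standard image backbone with a single numeric output. This is an illustration built on assumptions, not Yoti's actual architecture, training pipeline or data.

```python
# A minimal, illustrative sketch (not Yoti's production model): a standard
# image backbone with one numeric output, trained to regress age from a face.
import torch
import torch.nn as nn
from torchvision import models

class AgeEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights=None)  # generic CNN backbone
        # Replace the classification head with a single regression output: age
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, face_images):
        # face_images: batch of cropped faces, shape (N, 3, 224, 224)
        return self.backbone(face_images).squeeze(1)  # one estimated age per image

model = AgeEstimator()
criterion = nn.L1Loss()  # mean absolute error between estimated and true age
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One hypothetical training step on a batch of labelled faces (random stand-ins here)
images = torch.randn(8, 3, 224, 224)
ages = torch.tensor([23.0, 31.0, 17.0, 45.0, 62.0, 19.0, 28.0, 54.0])  # years
optimizer.zero_grad()
loss = criterion(model(images), ages)
loss.backward()
optimizer.step()

# At inference time the only output is a number: an estimated age.
# Nothing in the model identifies whose face it is.
model.eval()
with torch.no_grad():
    print(model(images[:1]).item())
```

The point the sketch illustrates is the one made above: a model trained only on (face image, age) pairs has no mechanism for recognising or matching individuals, because its sole output is a number.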
The legal position of Yoti's Facial Age Estimation

Yoti's facial age estimation complies with the UK GDPR and with our own ethical approach to user data and privacy. When businesses use age estimation to verify the age of their customers, Yoti acts as the data processor, with businesses as the data controllers. Businesses therefore need a legal basis to use age estimation. The model clearly processes personal data (a face image), so the legal basis relied upon by businesses will be either: (i) consent of the customer; (ii) performance of a contract between the business and the customer; or (iii) legitimate interests of the business that do not unfairly prejudice the customer. The Yoti Age Portal has a consent option built in, so businesses can easily collect consent from customers for use of Yoti's Age Estimation if that is the lawful basis the business chooses.

If Age Estimation processed biometric data that is also 'special category data', the business would additionally need to meet a condition under Article 9 of the UK GDPR and Schedule 1 of the Data Protection Act 2018, such as explicit consent or acting to protect children. However, Yoti's facial age estimation does not involve the processing of biometric data or special category data. This is because the facial age estimation model is physically unable to allow or confirm the unique identification of a person (the key test for biometric data) and it is not being used for the purpose of identification (the key test for special category data). The model was not trained to recognise a face, but to categorise that face into an age. In our view, and in the view of our external lawyers, Yoti's Facial Age Estimation does not identify a user because the only possible output is a non-identifying estimated age. Further, even if the view is taken that it is biometric data, it is not 'special category data' because there is the added test of needing to use the model for the purpose of uniquely identifying a person – and the model is explicitly used not to identify a person, but to allow a person to verify their age without being identified.

Definitions of biometrics in the GDPR

For those more legally minded, here are some of the key parts of the UK GDPR.

Definition of biometric data in Article 4(14) of the UK GDPR: "Personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data."

Definition of special category data in Article 9 of the UK GDPR: "Processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person's sex life or sexual orientation."

Recital 51 of the UK GDPR further says: "The processing of photographs should not systematically be considered to be processing of special categories of personal data as they are covered by the definition of biometric data only when processed through a specific technical means allowing the unique identification or authentication of a natural person."

What the ICO has said

In the Commissioner's [the ICO's] Opinion on Age Assurance for the Children's Code, first published on 14 October 2021, the ICO states that age estimation "may" involve the processing of biometric data (at para 2.3.2) and then clarifies later, at para 4.2.1, that it is only biometric data if it is used to uniquely identify an individual. Yoti cautiously welcomes the Opinion because it gives organisations a generally clear explanation of how to apply age assurance for the Age Appropriate Design Code.
But the Opinion does not clearly explain when age estimation uses biometrics and, just as importantly, it does not explain when age estimation using facial analysis does not involve biometric processing and is not special category data. This would have been very helpful for organisations seeking to implement the age assurance measures required under the Age Appropriate Design Code. Further, the ICO at para 4.2.1 could also have made clear that processing is not special category data if its purpose is not to identify someone. This too would have been helpful clarity for those evaluating different age assurance methods.

The ICO has helpfully clarified that processing biometric data for the purposes of the Age Appropriate Design Code can lawfully be done under the 'substantial public interest' exception in the UK GDPR (Article 9(2)(g)). Yoti's implementation of facial age estimation (immediate deletion of the image, use of very secure data centres, SOC 2 Type 2 security certification, encryption in transit, and returning only an estimated age result to the business) means that businesses can be assured that the remainder of the Article 9(2)(g) requirements are met.

Accuracy of our Facial Age Estimation

The Yoti White Paper discloses how accurate the model is. The testing is split by age, gender and skin tone to reassure organisations and the public that there is minimal, if any, bias in the model for the key 18+ metric. We have been publishing our accuracy figures every few months for the last two and a half years. The ACCS, an independent testing company, assessed the model and published its certification report for Yoti Age Estimation in November 2020, which concluded that: "The System is fit for deployment in a Challenge 25 policy area and is at least 98.89% reliable". We shared this certification evidence with the ICO in November 2020.

We strongly disagree with the ICO's statement in Annex 2 of its guidance, which groups all age estimation techniques together and then makes a single claim about their accuracy: "Age estimation techniques generally use Artificial Intelligence (AI) algorithms to automate the interpretation of data. There is little evidence for the effectiveness and accuracy of these emerging approaches." We recognise that only a limited number of businesses currently offer accurate age estimation, but there is compelling evidence from the last 12 months that Yoti's age estimation AI service can reliably estimate the age of adults, meeting the needs of retailers and online brands wishing to prevent under-18s from buying age-restricted goods or accessing over-18 content.
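For illustration, the snippet below sketches the kind of split accuracy reporting described above: mean absolute error of the estimated age, broken down by cohort. The column names and numbers are invented and do not come from the White Paper or any real test set.

```python
# Illustrative sketch of split accuracy reporting: mean absolute error (MAE)
# of estimated age, broken down by cohort. All data below is made up.
import pandas as pd

results = pd.DataFrame({
    "true_age":      [16, 22, 34, 19, 45, 17, 28, 61],
    "estimated_age": [18.1, 21.3, 35.9, 20.4, 43.2, 18.8, 27.1, 58.7],
    "gender":        ["F", "M", "F", "M", "F", "M", "F", "M"],
    "skin_tone":     ["1-2", "3-4", "5-6", "1-2", "3-4", "5-6", "1-2", "3-4"],
})

results["abs_error"] = (results["estimated_age"] - results["true_age"]).abs()

# Overall MAE, then MAE per cohort, to surface any bias between groups
print("Overall MAE:", results["abs_error"].mean().round(2))
print(results.groupby(["gender", "skin_tone"])["abs_error"].mean().round(2))
```

Reporting the error per cohort, rather than as a single headline figure, is what lets organisations check whether a model performs consistently across demographic groups.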

A guide to the European Commission’s proposed legal framework for regulating high-risk AI systems

On 21 April 2021, not long after a leaked portion had caused a stir, the European Commission published its proposed legal framework for the regulation of artificial intelligence ("AI"). Whilst only a first draft, and therefore subject to the push and pull of the amendment (or 'trilogue') process over the coming months, it marks an important milestone in the European Commission's journey to engender a culture of 'trustworthy' artificial intelligence across the European Union. The proposal has important implications for the developers of biometric systems, like Yoti. Although it will undergo a number of revisions before the final framework is published, it is worth taking stock of the proposal as it stands now.

The Framework's Scope

The draft legal framework intends to provide harmonised rules for the use of AI systems. The European Commission acknowledges the many benefits that can be brought about through the deployment of AI and has attempted to build a framework that is "human-centric" – engendering trust, safety and respect for human rights in AI systems. Currently, the framework places less emphasis on the training of AI per se, although the training stage is one of the most important; instead, it focuses on the use of AI systems. That said, there are many rules in the framework that concern the design of AI. For example, an 'ex-ante conformity assessment' (that's 'checking something does what it's supposed to do before you deploy it', to you and me) will lead to some consideration of what happens in the period before an AI system has been deployed.

In addition, the proposal sets out broad parameters for training, validation and testing datasets: they must be relevant, representative, free of errors and complete, and have appropriate statistical properties in relation to the people who will be subject to the AI system. This means that statistical bias must be mitigated. Yoti is transparent about statistical bias in its age estimation software and has publicly described how it has mitigated it.

The framework takes a risk-based approach, differentiating between uses of AI that create an unacceptable risk, a high risk, and a low or minimal risk. A number of the much-talked-about provisions in the legal framework are those relating to high-risk AI systems. Certain types of biometric technology are among the few particular uses of AI that the European Commission decided to single out for special attention.

Because of the need to test innovative AI systems, like biometric technologies, the proposed framework pushes EU Member States to develop regulatory sandboxes. This feature will be welcomed by tech companies because it lets us develop innovative solutions that respect individuals' rights and are in line with our core values. In the same vein, the European Commission suggests that processing special category data for the purposes of bias monitoring, detection and correction in high-risk AI systems is a matter of substantial public interest. Therefore, as long as the processing is strictly necessary for one of those purposes, it will be permitted under the GDPR.

High-Risk Biometric Technologies

Annex III of the proposal lists the use cases that fall into high-risk AI. Biometric identification systems are potentially high-risk, depending on whether they are intended to be used for the 'real-time' and 'post' remote biometric identification of natural persons.
Although the definition leaves something to be desired in terms of clarity, it appears from the European Commission's discussion of real-time remote biometric identification that it is primarily concerned with what is often known as one-to-many matching. For example, when the police scan the faces of members of the public to check against a watch list, they are conducting one-to-many matching, so it would be categorised as high-risk. In contrast, where a bank uses an embedded identity verification system to onboard new customers, that probably would not count as high-risk because there would be no checking against a gallery of existing faces or biometric templates. Based on the above, Yoti's identity verification and age verification services should be unaffected. Nonetheless, further clarification of the scope of high-risk biometric identification would be welcome.

Ex-ante conformity assessments are one of the mandatory provisions imposed on high-risk systems. It is particularly noteworthy that a conformity assessment extends to the examination of source code, a point that has been contentious in the past due to potential business confidentiality ramifications.

In addition to conformity assessments, high-risk AI systems have to be built in such a way that they can be effectively overseen by humans while the system is in use. This will force some AI systems to be fitted with mechanisms allowing human intervention, as well as allowing humans to interpret the methodology behind the AI system's output. Given how notoriously difficult it can be to understand how an AI system has come to a conclusion, particularly where machine learning models are relied on, this is a radical proposal. It will have practical, as well as financial, implications for the developers of high-risk biometric systems.

Finally, it is worth drawing attention to the record keeping requirements in the European Commission's proposal. High-risk biometric systems are expected to be built with the capacity to log data such as the input data leading to a match, as well as the identification that occurred as a result of that match. It is not clear how this intersects with the data minimisation and storage limitation principles under the General Data Protection Regulation ("GDPR"), because responsible biometric system providers will want to delete all input data immediately after it has been processed.
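As a way of picturing that tension, here is a hypothetical sketch of the kind of audit record such a logging duty might produce. The field names are invented, not taken from the proposal; note that the record references the input image only by a hash, because a provider practising data minimisation would not want to retain the image itself.

```python
# Hypothetical sketch of an audit record for a biometric match event.
# Field names are invented for illustration; the input image is referenced
# only by a hash rather than stored, reflecting data minimisation.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class MatchLogEntry:
    timestamp: str
    system_version: str
    input_image_sha256: str   # reference to the input, not the input itself
    matched_subject_ref: str  # opaque reference to the watch-list entry matched
    match_score: float
    human_reviewed: bool      # whether a human operator confirmed the match

def log_match(image_bytes: bytes, subject_ref: str, score: float) -> str:
    entry = MatchLogEntry(
        timestamp=datetime.now(timezone.utc).isoformat(),
        system_version="1.0.0",
        input_image_sha256=hashlib.sha256(image_bytes).hexdigest(),
        matched_subject_ref=subject_ref,
        match_score=score,
        human_reviewed=False,
    )
    return json.dumps(asdict(entry))

print(log_match(b"\x00" * 16, "watchlist-0042", 0.97))
```

Whether a hashed reference like this would satisfy the proposal's logging requirement, or whether the raw input itself must be retained, is exactly the open question raised above.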
Non-high-risk Biometric Technologies

Because of the emphasis on 'remote' identification, many day-to-day uses of biometric technologies will not be considered high-risk. For example, in-person authentication or biometric analysis would not currently be considered high-risk. This means that the use of tools such as Yoti's age estimation in a retail environment or on an online platform should be unaffected. Although non-high-risk biometric systems might not have to adhere to the stricter rules in the proposal, there are still relevant parts of the proposal that will have an impact on such systems. For example, there are transparency obligations, although as currently drafted it is unclear whether they do anything but restate existing requirements under the GDPR.

In addition to the transparency obligations, it will be interesting to see how the requirement to develop codes of conduct governing the use of non-high-risk AI is amended over time. Given that some companies and conglomerates have attempted to develop structures that aid self-regulation of biometric technologies, mandatory codes of conduct might not be a large step for the industry. Yoti has developed an ethical framework in order to ensure that we always do right by our users – and wider society. Codes of conduct could help ensure that the rest of the biometric technology market embeds similar responsible business practices.

Next steps for the framework

The Commission's proposed framework has already generated a huge amount of discussion. No doubt this will continue as refinements occur during the trilogue process. Given that the GDPR trilogue took four years, it could be some time before a final AI regulatory framework is published. Until then, we will continue to keep a close eye on developments as they occur.

GPG 45 guidance on identity checks opens up for the private sector

In April, we wrote about the significant changes made to Good Practice Guide (GPG) 45, the UK government's standard for checking and verifying someone's identity. If you've ever had to verify your identity with the government, it's likely the process followed GPG 45. The guide is made up of five parts:

- Get evidence of the claimed identity
- Check the evidence is genuine or valid
- Check the claimed identity has existed over time
- Check if the claimed identity is at high risk of identity fraud
- Check that the identity belongs to the person who's claiming it

What's new this time?

Previously, GPG 45 had been a source of considerable headache for citizens, relying parties and digital identity providers alike. However, the government has continued to expand its approach to identity verification, and two things stand out.

First, the government has separated the guidance on how to perform identity verification from the identity profiles. Identity profiles are built following the rules set out in the guide, with certain scores needed in each of the five sections; an identity profile represents an overall level of confidence in a person's identity, such as low or medium. Until now, the identity profiles have been created by the government to help organisations checking identity in the public sector. By separating the guidance from the identity profiles, the government is creating space for organisations in the private sector to create their own identity profiles based on the guide – we know of a few that are already giving it a go.

Secondly, the government has created more identity profiles. This means there are now more routes to reaching each threshold of confidence in a verified identity. This should lead to lower friction when citizens interact with government departments, and should also encourage competition and innovation in the digital identity verification marketplace.

Checking someone's identity

The government's existing GOV.UK Verify digital identity scheme is being wound down, and the organisations that currently use it will soon need to look for new identity verification providers. What better time could there be to make things easier for citizens and organisations that need identity verification services? If you'd like to learn more about how we can help you meet the government's identity verification standards, please get in touch.
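Returning to the identity profiles described earlier in this post, the sketch below shows one hypothetical way an organisation might express its own profile as minimum scores against the five parts of the guide. The score values and the "medium confidence" profile here are invented for illustration; the real profiles and scoring rules are defined in the published GPG 45 guidance.

```python
# Hypothetical sketch of a GPG 45-style identity profile: a minimum score
# for each of the five parts of the guide. All numbers below are invented.
from dataclasses import dataclass

@dataclass
class IdentityCheckScores:
    evidence_strength: int   # Get evidence of the claimed identity
    validity: int            # Check the evidence is genuine or valid
    activity_history: int    # Check the claimed identity has existed over time
    fraud_check: int         # Check if the identity is at high risk of identity fraud
    verification: int        # Check the identity belongs to the person claiming it

# An invented "medium confidence" profile: the minimum score needed per part
MEDIUM_CONFIDENCE = IdentityCheckScores(3, 2, 1, 1, 2)

def meets_profile(scores: IdentityCheckScores, profile: IdentityCheckScores) -> bool:
    # The claimed identity meets the profile only if every part scores high enough
    return all(
        getattr(scores, field) >= getattr(profile, field)
        for field in scores.__dataclass_fields__
    )

applicant = IdentityCheckScores(3, 3, 2, 1, 2)
print(meets_profile(applicant, MEDIUM_CONFIDENCE))  # True
```

The point of separating guidance from profiles is exactly this flexibility: a private-sector organisation can define its own threshold per part, while the underlying five checks stay the same.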

Canada pioneers digital ID for all with new framework

This week, the Digital ID and Authentication Council of Canada (DIACC) announced the launch of a new framework for digital ID and authentication industry standards. The Pan-Canadian Trust Framework (PCTF) will define how digital ID rolls out across Canada and will be alpha tested by DIACC members. As a long-standing member of DIACC, we're incredibly excited to see the launch of this framework, which we've contributed to with knowledge gleaned from our long-standing experience in the digital identity space. The PCTF itself is a huge collaborative achievement, shaped by over 3,400 public comments from public and private stakeholders over four years. Our Commercial Lead for Canada, Leigh Day, co-chairs DIACC's Innovation Expert Committee and also sits on its Outreach Expert Committee.

The time for digital ID is now

The framework has been fast-tracked in the wake of the coronavirus crisis, with the aim of rolling out digital ID to all levels of society as a key enabler for the Canadian economy. As outlined by Dave Nikolejsin, Board Chair at DIACC: "Canadians have had to deal with identity theft and fraud, high anxiety in accessing services that they were in dire need of while facing social distancing measures, and attempting to go about their lives as normally as possible. Digital ID minimizes all of those pain points, and elevates the livelihoods of Canadians everywhere."

Image: Progressing the Pan-Canadian Trust Framework (PCTF) infographic, from the DIACC website.

Our response to the pandemic

At Yoti, we've seen the appetite for digital identity solutions go from a nice-to-have to a necessity during the coronavirus crisis. With a global digital identity platform already developed over many years, we've been able to react quickly and apply our technology to help ease pressures. At the beginning of the pandemic, we fast-tracked digital ID cards for the NHS to help them remotely equip their staff with secure identification that couldn't be lost, stolen or mis-asserted. We extended these secure ID cards through our COVID pledge to charities and volunteer groups. The coronavirus crisis saw a drastic increase in fraud and doorstep scams at a time when issuing volunteers with physical ID cards was greatly complicated. Volunteer Edinburgh, Age UK and Tipperary Volunteer Centre are just a few of the organisations we've helped with free digital ID cards, and we've also gifted our identity verification technology to safeguard both the DoIT and Co-op community platforms.

Covid-19 testing in 30 minutes

We're currently collaborating with biotech company GeneMe, who have developed a COVID-19 test that can be analysed in 30 minutes with no need for a lab. Together, we've developed FRANKD with Yoti, a breakthrough system whereby people have their secure test result linked to their digital identity and issued to their phone via the Yoti app. We're currently in trials with Heathrow airport and believe our solution could be fundamental to protecting citizens whilst getting the economy back on track.

Canada leads the way for digital ID

We're really excited by the launch of the PCTF and we look forward to hearing results from the alpha testing in the coming months. For further information on our involvement with digital identity in Canada, or with DIACC, please get in touch at business-canada@yoti.com.

