How Digital IDs can protect you from deepfake scams

Rachael Trotman · 4 min read

Deepfakes are a hot topic right now. Taylor Swift recently became the victim of deepfake scams: first an AI-generated video of her promoted a fake cookware competition, and then explicit AI images of her went viral online. AI voice-cloning technology impersonating President Joe Biden was used in robocalls discouraging people from voting. And celebrities including Piers Morgan, Nigella Lawson and Oprah Winfrey found deepfake adverts of themselves online, endorsing an influencer’s controversial self-help course.

But it’s not just celebrities and public figures who are at risk. Fraudsters are also using deepfake technology to scam ordinary people.

Are you dating a deepfake?

Fraudsters are increasingly using deepfakes – digitally manipulated images or videos that can look convincingly real – to trick innocent people on dating sites. Whilst genuine daters are looking for love, fraudsters are using digitally altered photos and fake videos to create realistic personas. They then use these false identities to hold live video conversations with daters. It’s scarily realistic.

Genuine daters believe they are talking to a real person. In reality, they are having a video conversation with a fraudster hiding behind a fake persona. In one instance, a dater thought she was in a two-year relationship and was tricked into parting with £350,000.

It’s not just the financial loss; the emotional distress these scams cause can also be devastating.

Beware of AI voice scams

This isn’t only happening on dating sites, though. Any platform or app which allows people to communicate with one another is at risk, including online marketplaces, social media and even regular video and phone calls.

The victim thinks they are having a phone call with their friend, when it’s actually a scammer hiding behind an AI-generated voice or video. The fraudster uses AI to pose as the friend; they might then claim to be in trouble and say they need some money. You get the idea…

And this is happening more often than you might think. In a recent study, 77% of people who fell victim to an AI voice scam lost money as a result. In one instance, an employee was conned into paying £20 million of her company’s money to fraudsters in a deepfake video conference call.

It’s become such a problem that the Federal Communications Commission (FCC) has ruled that robocalls using AI-generated voices are illegal. This gives state attorneys general the ability to take action against scammers using AI voice-cloning technology in their calls.

Protect yourself from deepfakes with a Digital ID

One way to protect yourself from deepfake scams is with a Digital ID.

If you’re in doubt about who you’re speaking to, our free Digital ID app lets you swap verified details with another person, like your name and photo. This gives you reassurance that you’re talking to a real person – and to the right person. After all, a scammer using a convincing deepfake video can’t share verified details about the person they are impersonating.

Swapping verified details is a quick and simple way to be confident that the person you are messaging or speaking to is who they claim to be. If they’re genuine, they will appreciate the check too – giving both of you peace of mind that you’re talking to a real person. It’s a simple yet effective way to build trust quickly and protect yourself from deepfake scams.
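For the technically curious, the reason this works comes down to digital signatures: verified details are cryptographically signed by a trusted identity provider when your identity is first checked, so a fraudster can fake a face on camera but can’t forge a valid signature on someone else’s details. Here’s a minimal, purely illustrative sketch in Python using the cryptography library – the function names and flow are our own simplification for this post, not Yoti’s actual implementation:

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical identity provider's signing key. In reality the private key
# never leaves the provider; apps ship with only the public half.
issuer_key = Ed25519PrivateKey.generate()
issuer_public = issuer_key.public_key()

def issue_verified_details(name: str, photo_hash: str) -> dict:
    """Sign a person's details -- only the identity provider can do this."""
    payload = json.dumps({"name": name, "photo_hash": photo_hash}).encode()
    return {"payload": payload, "signature": issuer_key.sign(payload)}

def check_verified_details(credential: dict) -> bool:
    """What the receiving app checks before trusting the other person's details."""
    try:
        issuer_public.verify(credential["signature"], credential["payload"])
        return True
    except InvalidSignature:
        return False

genuine = issue_verified_details("Alice Example", "3f2a9c")  # verified at sign-up
print(check_verified_details(genuine))   # True

# A scammer can copy Alice's name and photo, but cannot produce a valid
# signature without the issuer's private key.
forged = dict(genuine, signature=b"\x00" * 64)
print(check_verified_details(forged))    # False
```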
