The rising challenge of detecting deepfakes

Amba Karsondas · 8 min read
[Image: a woman looking directly at the camera; a guide over her face indicates that the image is a deepfake.]

Artificial intelligence (AI) has come a long way in just a few years. What started as a tool for automating routine tasks and processing data more efficiently has now become integrated into nearly every industry. It seems as though it’s everywhere we look right now.

One of the most controversial, and perhaps concerning, developments in AI is the rise of deepfakes. In simple terms, deepfakes are incredibly realistic synthetic media, such as audio, video or images, generated by AI. These digital forgeries have become so convincing that telling real from fake is becoming a serious challenge.

We look into how AI has improved to the point where deepfakes are becoming nearly impossible to detect with your own eyes. We also explain how they’ve become a growing concern for businesses and what organisations can do to protect themselves from this evolving threat.

 

The explosion of AI’s capabilities

The rapid improvement in AI is largely down to significant breakthroughs in machine learning, and in particular, what’s known as deep learning. 

Deep learning is a branch of machine learning where artificial neural networks learn complex patterns from (usually a lot of) data, a bit like the human brain. It allows machines to process vast amounts of information, identify patterns within it and make predictions, and those predictions keep getting more accurate as the models are trained on more data. This is a significant advance over older AI models, which relied on simpler, more rigid rules.
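To make that loop concrete, here's a minimal, illustrative sketch in PyTorch (our choice of framework, not one named in this article) in which a tiny network learns the XOR pattern from four examples, improving its predictions with each pass over the data:

```python
# Minimal deep learning sketch: a small neural network learns XOR from examples.
# The toy sizes and the XOR task are illustrative assumptions only.
import torch
import torch.nn as nn

X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
y = torch.tensor([[0.], [1.], [1.], [0.]])                  # target pattern (XOR)

net = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
optimiser = torch.optim.Adam(net.parameters(), lr=0.05)
loss_fn = nn.BCELoss()

for epoch in range(500):                 # repeated exposure to the data...
    loss = loss_fn(net(X), y)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()                     # ...gradually improves the predictions

with torch.no_grad():
    print(net(X).round().squeeze())      # approximates tensor([0., 1., 1., 0.])
```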

 

The rise of generative models

A key moment in AI’s development was the introduction of Generative Adversarial Networks (GANs). These systems use two neural networks: one, the generator, creates fake content (such as photos or videos), while the other, the discriminator, tries to detect whether the data is real or fake. Over time, the two networks improve by “competing”, with the generator eventually producing fakes that, at their best, are nearly indistinguishable from authentic content.
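As an illustration of that generator-versus-discriminator loop, here's a heavily simplified PyTorch sketch. The tiny networks and the random stand-in for "real" images are our own assumptions; a genuine deepfake model would be vastly larger:

```python
# Simplified GAN training loop: generator vs discriminator.
# Network sizes and the random "real" data are illustrative stand-ins.
import torch
import torch.nn as nn

latent_dim, image_dim, batch = 16, 64, 32

generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                          nn.Linear(128, image_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(image_dim, 128), nn.LeakyReLU(0.2),
                              nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(batch, image_dim)               # stand-in for real images
    fake = generator(torch.randn(batch, latent_dim))   # the generator's forgeries

    # Discriminator learns to label real as 1 and fake as 0
    d_loss = (loss_fn(discriminator(real), torch.ones(batch, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator learns to make the discriminator accept its fakes as real
    g_loss = loss_fn(discriminator(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```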

Diffusion models are the next generation of generative image models. These work differently to GANs: instead of two networks competing with each other, a diffusion model starts with a completely random image and gradually refines it into a clear, detailed picture. This method has turned out to be more stable and often produces better-quality images than older approaches. As the technology improves, it’s making AI-generated images even harder to tell apart from real ones.
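The sampling side of that process can be sketched in a few lines. In this simplified DDPM-style loop the noise-prediction network is just a stub, so the "image" it produces is meaningless; the point is the shape of the algorithm: random noise gradually denoised step by step.

```python
# Simplified diffusion (DDPM-style) sampling loop: start from pure noise and
# iteratively denoise. predict_noise is a stub standing in for a trained network.
import torch

steps = 50
betas = torch.linspace(1e-4, 0.02, steps)       # noise schedule (assumed values)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def predict_noise(x, t):
    # Stub: a real model would predict the noise present in x at step t
    return torch.zeros_like(x)

x = torch.randn(1, 3, 64, 64)                   # begin with a completely random image
for t in reversed(range(steps)):
    eps = predict_noise(x, t)
    # Remove the predicted noise to estimate a slightly cleaner image
    x = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
    if t > 0:
        x = x + torch.sqrt(betas[t]) * torch.randn_like(x)  # re-inject a little noise
# x is now the final sample (meaningless here because predict_noise is a stub)
```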

This ability to create lifelike images, videos and audio recordings has significant potential in fields such as film production, gaming and advertising. However, it has also opened the door to serious misuse. That’s where deepfakes come in.

 

How deepfakes have become a growing business threat

At first, deepfakes were mostly seen online, through doctored celebrity videos or political misinformation clips. However, their reach has expanded dramatically. What started as an internet gimmick has quickly evolved into a real threat for businesses.

In the context of business, deepfakes can be used in a variety of harmful ways:

  • Employee impersonation – Deepfakes can be used to impersonate executives or employees in an organisation. A malicious actor can trick a company into compliance by, for instance, creating a convincing video of a CEO instructing employees to transfer funds or share sensitive information.
  • Account takeover – Deepfake audio or video can be used to impersonate real users during authentication processes. This is especially the case for systems that rely on voice recognition or facial biometrics. Deepfakes make it easier for attackers to gain unauthorised access to sensitive accounts.
  • Damage to brand reputation – Deepfake videos showing a company representative saying or doing something false, offensive or controversial can go viral and severely tarnish a company’s reputation. Even after the content is debunked, the damage to brand trust may be long-lasting, or even irreversible.
  • Fraudulent job applications or interviews – Individuals can use deepfakes to impersonate qualified professionals in remote job interviews. This allows them to trick companies into hiring unqualified or malicious actors.
  • Synthetic identity fraud – Deepfakes can help create entirely fake personas by blending real and fabricated personal data, photos and videos to pass as legitimate individuals. These identities can be used to open fraudulent accounts and bypass KYC (Know Your Customer) processes.

 

Why deepfakes are so hard to detect now

Not too long ago, deepfakes were much easier to spot. You’d notice subtle giveaways, like strange lighting, unnatural blinking patterns, robotic voice tones or awkward hand and mouth movements. Sometimes you might even spot an extra finger or two. But as the technology has shifted from GANs to diffusion models, these obvious flaws are quickly disappearing. Here’s why:

 

Improved algorithms

With the move to diffusion models, the quality of deepfakes has dramatically improved. Algorithms are now capable of replicating subtle movements, leading to highly realistic facial expressions, speech patterns and even eye movements. The visuals and audio are much cleaner, resulting in deepfakes that appear far more lifelike.

 

Better quality deepfake audio

While visual deepfakes were initially the most common form of AI manipulation, the quality of deepfake audio has also improved significantly. Text-to-speech algorithms can now replicate specific voices with incredible accuracy, allowing them to mimic someone’s tone, pacing and pronunciation almost perfectly. Fake phone calls or podcasts become much more believable, making it easier for malicious actors to impersonate executives or other key individuals.

 

Real-time creation

Some systems can now generate deepfakes in real time. This makes it possible for malicious actors to create videos or audio recordings that react to live events or conversations as they unfold, making them even harder to stop before damage is done.

 

Injection attacks

Bad actors are now using injection attacks to deliberately confuse or bypass deepfake detection systems. An injection attack bypasses the live camera feed by “injecting” a deepfake directly into the system, replacing the camera’s genuine image or video with pre-prepared content. Because detection tools never see the real feed, they can be tricked into failing to recognise the content as fake.
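Vendors don't publish the internals of their detection tools, and the sketch below is emphatically not how any named product works. It's a naive, Linux-only illustration of one basic signal a defender might check: whether the "camera" presented to the system is actually a known piece of virtual-camera software. The device names in the blocklist are assumptions:

```python
# Naive illustration only: flag video devices whose names match known
# virtual-camera software (one weak signal of a possible injection attack).
# Linux-only; reads V4L2 device names from sysfs. Real detection systems use
# far stronger signals (driver attestation, frame-timing analysis, etc.).
from pathlib import Path

KNOWN_VIRTUAL_CAMERAS = {"obs virtual camera", "v4l2loopback", "manycam"}  # assumed names

def suspicious_video_devices():
    flagged = []
    for name_file in Path("/sys/class/video4linux").glob("video*/name"):
        device_name = name_file.read_text().strip().lower()
        if any(virtual in device_name for virtual in KNOWN_VIRTUAL_CAMERAS):
            flagged.append((name_file.parent.name, device_name))
    return flagged

if __name__ == "__main__":
    print(suspicious_video_devices() or "No known virtual cameras found")
```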

 

How businesses can protect themselves from deepfakes

As deepfake technology becomes more advanced, and more accessible, businesses must be proactive in developing strategies to detect and mitigate the risks associated with deepfakes.

  1. Verify participants’ identities for every interaction – By confirming that each participant is who they say they are, you can ensure that the right person is accessing your calls, meetings, systems or accounts. With trusted identity profiles, organisations can authenticate users before granting them access. Securing accounts with effective biometric authentication, or with a Digital ID, offers a user-friendly way to protect access to your systems whilst blocking deepfake impersonation attempts.
  2. Offer multi-factor authentication (MFA) – MFA provides extra layers of protection to help ensure that only authorised individuals can access your systems. Even if a bad actor is able to bypass a system using a deepfake, MFA ensures that they still can’t access the account without an additional factor. This could be a push notification, a text message code or an alternative form of biometric authentication such as a face match (see the sketch after this list for how one common additional factor is computed).
  3. Use liveness detection to spot deepfakes in real time – Face matching alone isn’t enough, especially in the age of deepfakes. Liveness detection goes a step further by verifying that there’s a real, live person in front of the camera, and not a photo, video, mask or deepfake. Liveness technology resists spoofing attempts by ensuring that identity verification happens in real time.
  4. Detect injection attacks before they succeed – Injection attacks occur when fraudsters try to bypass the camera by inserting deepfake videos into the authentication flow. Injection attack detection tools, such as Yoti’s Secure Image Capture (SICAP), should be able to identify and stop both hardware- and software-based attacks on desktop and mobile.
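As promised in step 2, here's a minimal sketch of how one common additional factor, a time-based one-time password (TOTP, as defined in RFC 6238), is computed. Production systems should rely on a vetted library rather than hand-rolled code; the secret below is just a well-known example value:

```python
# Minimal TOTP (RFC 6238) sketch: derive a 6-digit code from a shared secret
# and the current time. For illustration only; use a vetted library in production.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval                 # current time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret; prints a fresh 6-digit code
```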

 

Safeguard your business from the threat of deepfakes

Once a novelty, deepfakes are now a significant threat to corporate security, brand reputation and company trust. As AI continues to evolve, so too do the tactics used by malicious actors looking to exploit this powerful technology.

While the technology behind deepfakes may be evolving, businesses can adapt and safeguard themselves by staying vigilant and innovative in their approach to cybersecurity.

If you’d like to know more about how you can protect your business from deepfakes, get in touch.
