Arming yourself against deepfake technology

Last week, YouTube reiterated its commitment to banning deepfake videos ahead of the 2020 US election.

While this ban on technically manipulated videos of political figures isn't new (similar policies have been in place since the 2016 presidential election), it illustrates just how difficult it is becoming for the public, and for organisations, to verify a person's true identity online.

A deepfake today uses AI to combine and manipulate existing imagery and audio to replicate a real person's face and voice. In effect, a deepfake can impersonate someone, making them appear to say words they have never actually spoken; hence the fear that general elections and politics could be skewed by fabricated videos. Worryingly, the number of deepfakes online has roughly doubled in less than a year, from 7,964 in December 2018 to more than 14,000 just nine months later. While the majority of these are porn-related, the problem isn't solely confined to this space.

A widely circulated deepfake video of President Obama showed how the technology could be used to spread misinformation on a substantial scale. Deepfakes were no longer seen as a hoax, and the technology came under heavy scrutiny. Meanwhile, realistic deepfakes have become more commercial, spreading from pornography into popular culture and other nefarious practices.

YouTube's decision to ban deepfakes is a step in the right direction, but there is still plenty of room for improvement, given that deepfakes have already been used to spread misinformation and damage reputations. They pose a serious threat to the digital economy and to the evolution of digital identity: it is far too easy to use AI to create realistic deepfakes, and they can be weaponised to commit fraud.

The threat to businesses

Deepfakes are likely to continue causing havoc for politicians in the coming years, but modern enterprises could equally find themselves under threat. In 2019, the UK chief executive of an energy company was tricked over the phone into transferring £200,000 to a Hungarian bank account. He believed the call was from his boss, but the voice had in fact been impersonated by a fraudster using deepfake audio technology.

Incidents like this, particularly where substantial amounts of capital are at risk, are reminders that organisations should be on high alert for deceptive fraudsters and arm themselves accordingly.

In sectors such as financial services, vast amounts of customer data are at risk, and a breach of information or assets can have detrimental effects on everyone involved. Not only could the consumer lose assets, but the organisation also runs the risk of having to replace customer funds, incurring penalties and losing public trust in its service, any of which could lead to the demise of a company.

Protecting business from the threat of deepfakes

It's hardly surprising, therefore, that organisations are looking to new technologies to combat even the most advanced threats. As a first step, many banks now require a government-issued ID and a selfie to establish a person's digital identity when new accounts are created online. However, a criminal could use deepfake technology to create a spoof video that bypasses the selfie requirement.
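The onboarding step described above can be sketched in code. This is a minimal, illustrative example only: the embeddings, the `verify_identity` function and the 0.8 threshold are assumptions for the sketch, not any real bank's or vendor's API. It also shows, in the comments, why a face match alone is not enough.

```python
from dataclasses import dataclass

# Hypothetical sketch of ID-plus-selfie onboarding: the ID photo and the
# selfie are assumed to be converted to face embeddings (lists of floats)
# by some upstream face-recognition model, and we compare them by cosine
# similarity. All names and the threshold are illustrative.

@dataclass
class VerificationResult:
    matched: bool
    score: float

def verify_identity(id_photo_embedding, selfie_embedding, threshold=0.8):
    """Cosine-similarity match between ID-photo and selfie embeddings."""
    dot = sum(a * b for a, b in zip(id_photo_embedding, selfie_embedding))
    norm_a = sum(a * a for a in id_photo_embedding) ** 0.5
    norm_b = sum(b * b for b in selfie_embedding) ** 0.5
    score = dot / (norm_a * norm_b)
    # Weakness: a deepfake rendered from the victim's public photos can
    # produce a "selfie" that matches the ID perfectly. Without liveness
    # detection, this check cannot tell a real face from a synthetic one.
    return VerificationResult(matched=score >= threshold, score=score)
```

The point of the sketch is that the match score says nothing about whether a live human was in front of the camera, which is exactly the gap a spoof video exploits.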

If deepfakes can so easily bypass the security measures already in place, more sophisticated identity verification solutions are required. Embedded certified liveness detection, which ensures that the remote user is physically present, is vital to sniff out advanced spoofing attacks, including deepfakes.

Most liveness detection solutions require the user to perform eye movements, nod their head or repeat words or numbers, but these methods can be circumvented with deepfakes. Unless the identity verification provider has certified liveness detection, validated by the National Institute of Standards and Technology, impostors could still trick the system.
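To see why simple challenge-response checks fall short, consider this sketch. The challenge names and the `check_response` helper are hypothetical, invented for illustration; the logic simply mirrors the "nod, blink, or repeat a phrase" approach described above.

```python
import random

# Hypothetical sketch of naive challenge-response liveness: the system
# issues a random prompt and passes the user if that action is observed
# in the video feed. Challenge names are invented for this example.

CHALLENGES = ["blink_twice", "turn_head_left", "say_digits"]

def issue_challenge(rng=random):
    """Pick a random prompt to show the remote user."""
    return rng.choice(CHALLENGES)

def check_response(challenge, observed_actions):
    """Naive check: pass if the requested action appears in the feed."""
    return challenge in observed_actions

# The flaw: a deepfake puppeteered in real time can render whatever
# action is requested, so the random challenge buys no security.
deepfake_actions = set(CHALLENGES)  # attacker can perform any prompt
```

Because a real-time deepfake can perform any requested gesture, certified liveness detection instead relies on signals of physical presence that a rendered video struggles to reproduce.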

Level 2 certification with iBeta quality assurance means that an authentication solution can distinguish pre-recorded or synthesised videos from genuine live selfies, withstanding even a sophisticated deepfake attack. This certification could be the difference between a secure ecosystem and one that is vulnerable to the threats of tomorrow.

While deepfakes have slowly woven their way into political spheres and into industries such as pornography, we cannot let them infiltrate business ecosystems and be weaponised to defraud. We need to take this threat seriously. Instead of waiting and responding to the threat reactively, it's time to fight AI with AI: the kind of AI that powers modern identity verification and liveness detection and combats deepfakes.

About the Author

Labhesh Patel is CTO and Chief Scientist at Jumio. When identity matters, trust Jumio. Jumio’s end-to-end identity verification solutions fight fraud, maintain compliance and onboard good customers faster.

Featured image: ©Russell Johnson
