Global study highlights biometrics as solution of choice to counter escalating deepfake risk  

The study from iProov revealed 70% of organisations believe deepfake attacks will have a high impact, while 62% worry their organisation isn’t taking the threat of deepfakes seriously enough.

The risk of deepfakes is rising. Almost half of organisations (47%) have encountered a deepfake, and 70% believe deepfake attacks, which are created using generative AI tools, will have a high impact on their organisations.

Yet perceptions of AI remain hopeful: while 68% of organisations believe it is effective at creating cybersecurity threats, even more (84%) find it instrumental in protecting against them.

This is according to a new global survey of technology decision-makers from iProov, a leading provider of science-based biometric identity solutions, which also found 75% of solutions being implemented to address the deepfake threat are biometric solutions.   

The Good, The Bad, and The Ugly is a global survey commissioned by iProov that gathered the opinions of 500 technology decision-makers from the UK, US, Brazil, Australia, New Zealand and Singapore on the threat of Generative AI and deepfakes.

While organisations recognise the increased efficiencies that AI can bring, those same benefits are also enjoyed by threat technology developers and bad actors. Almost three-quarters (73%) of organisations are implementing solutions to address the deepfake threat, but confidence is low: the study identified an overriding concern that organisations are not doing enough to combat deepfakes.

Almost two-thirds (62%) worry their organisation isn't taking the threat of deepfakes seriously enough.

The survey shows that organisations recognise deepfakes as a real and present danger. They can be used against people in numerous harmful ways, including defamation and reputational damage, but perhaps the most quantifiable risk is financial fraud. Here, deepfakes can be used to commit large-scale identity fraud by impersonating individuals to gain unauthorised access to systems or data, initiate financial transactions, or deceive others into sending money.

The stark reality is that deepfakes pose a threat to any situation where an individual needs to verify their identity remotely, yet those surveyed worry that organisations aren't taking the threat seriously enough.

“We’ve been observing deepfakes for years but what’s changed in the past six to twelve months is the quality and ease with which they can be created and cause large scale destruction to organizations and individuals alike,” said Andrew Bud, Founder and CEO, iProov.  

“Perhaps the most overlooked use of deepfakes is the creation of synthetic identities which, because they’re not real and have no owner to report their theft, go largely undetected while wreaking havoc and defrauding organizations and governments of millions of dollars,” added Bud.

“And despite what some might believe, it’s now impossible for the naked eye to detect quality deepfakes. Even though our research reports that half of organisations surveyed have encountered a deepfake, the likelihood is that this figure is a lot higher because most organisations are not properly equipped to identify deepfakes,” Bud said. “With the rapid pace at which the threat landscape is innovating, organisations can’t afford to ignore the resulting attack methodologies and how facial biometrics have distinguished themselves as the most resilient solution for remote identity verification.”