
Rise of deepfake threats means biometric security measures won’t be enough

Cyber attacks using AI-generated deepfakes to bypass facial biometric security will lead a third of organizations to doubt the adequacy of identity verification and authentication tools as standalone protections.

Or so says consultancy and market watcher Gartner. Deepfakes have dominated the news since sexually explicit AI-generated images of popstar Taylor Swift went viral, prompting fans, Microsoft, and the White House to call for action.

The relentless march of AI technology is also causing headaches for enterprise security. Remote account recovery, for example, might rely on an image of the individual’s face to unlock access. But because such checks can be beaten with images copied from social media and other sources, security systems added “liveness detection” to test whether a request really comes from the right individual.

As well as matching an individual’s face to the image on record, systems relying on liveness detection try to confirm the person is actually present, either through an “active” challenge such as a requested head movement, or through “passive” sensing of micro facial movements and the focus of the eyes.
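As a rough sketch of how those layers might combine, consider the following decision logic. All field names, function names, and thresholds here are hypothetical placeholders, not any vendor’s actual API:

```python
from dataclasses import dataclass

@dataclass
class LivenessSignals:
    """Per-attempt scores a hypothetical biometric pipeline might produce."""
    face_match: float       # similarity to the image on record, 0..1
    active_challenge: bool  # did the user perform the requested head movement?
    passive_score: float    # micro facial movement / eye focus estimate, 0..1

def verify(signals: LivenessSignals,
           match_threshold: float = 0.90,
           passive_threshold: float = 0.80) -> bool:
    """Accept only when the face matches AND both liveness layers pass.

    Thresholds are illustrative example values, not vendor defaults.
    """
    return (signals.face_match >= match_threshold
            and signals.active_challenge
            and signals.passive_score >= passive_threshold)

# A photo lifted from social media may match the enrolled face,
# but should fail both liveness checks.
print(verify(LivenessSignals(face_match=0.97,
                             active_challenge=False,
                             passive_score=0.10)))  # False
```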

Yet these approaches can now be duped by AI deepfakes and need to be supplemented by additional layers of security, Gartner VP Analyst Akif Khan told The Register.

He said that defense against the new threat can come from supplementing existing measures or improving on them.

“Let’s say, for example, the vendor knows that an IP verification process shows the user is running an iPhone 13 and understands the camera resolution of the device, then if the [presented deepfake doesn’t match these parameters] it might suggest that it’s been digitally injected,” he said.
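In code, that kind of consistency check might look something like the sketch below. The field names and the 4K capture resolution assumed for the iPhone 13 are illustrative guesses, not a real device database or vendor API:

```python
# Illustrative metadata cross-check: compare what the device claims
# to be against properties of the incoming video stream.

KNOWN_FRONT_CAMERAS = {
    # device model -> assumed selfie-camera capture resolution (w, h)
    "iPhone13": (3840, 2160),
}

def looks_injected(device_model: str, frame_width: int, frame_height: int) -> bool:
    """Flag a capture whose resolution doesn't match the claimed device."""
    expected = KNOWN_FRONT_CAMERAS.get(device_model)
    if expected is None:
        return False  # unknown device: this signal alone offers no opinion
    return (frame_width, frame_height) != expected

# A 1280x720 "camera feed" from a device expected to capture 4K selfies
# hints the video was digitally injected rather than captured live.
print(looks_injected("iPhone13", 1280, 720))  # True
```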

Other examples of supplementary security might include looking at device location or frequency of requests from the same device, he said.
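A frequency signal can be as simple as counting recent verification attempts per device. The window and ceiling below are arbitrary example values:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600  # look at the last hour (example value)
MAX_ATTEMPTS = 5       # hypothetical per-device ceiling

_attempts = defaultdict(deque)  # device_id -> recent attempt timestamps

def too_many_attempts(device_id: str, now: float | None = None) -> bool:
    """Record an attempt and report whether the device exceeds the limit."""
    now = time.time() if now is None else now
    history = _attempts[device_id]
    history.append(now)
    # Discard attempts that have aged out of the window.
    while history and history[0] < now - WINDOW_SECONDS:
        history.popleft()
    return len(history) > MAX_ATTEMPTS
```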

Security system developers are also trying to use AI – typically deep neural networks – to inspect presented images for signs that they are deepfakes. “One vendor showed me an example of several deepfake images that they had detected, and the faces looked very different,” Khan told us.

“However, when you really zoomed in there were on each of the heads three or four hairs, which were all in the absolute exact same kind of configuration of like three or four hairs overlapping with each other in a way that just looked eerily identical across these like three or four different people. That was like an artifact that they use to determine that actually these are synthetically created images.”
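For a sense of the shape of that approach – and only the shape – here is a toy, untrained PyTorch classifier that maps a face crop to a “synthetic” probability. Production detectors are far larger and trained on labelled deepfake corpora; this architecture is purely illustrative:

```python
import torch
import torch.nn as nn

class DeepfakeScorer(nn.Module):
    """Tiny CNN scoring an image for synthetic artifacts (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single "synthetic" logit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.head(h))  # probability image is a deepfake

# Untrained, so the score is meaningless here; the point is the pipeline
# shape: face crop in, synthetic-probability out.
model = DeepfakeScorer()
fake_prob = model(torch.rand(1, 3, 224, 224))
print(float(fake_prob))
```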

Organizations should use both approaches to defend against deepfake threats to biometric security, he said.

“It’s classic defense-in-depth security. I would not want to say one approach was better than any other because I think the best approach would be to use all of the layers available.” ®
