Panic over the risk of deepfake scams is completely overblown, according to a senior security adviser for UK-based infosec company Sophos.
“The thing with deepfakes is that we aren’t seeing a lot of it,” Sophos researcher John Shier told El Reg last week.
Shier said current deepfakes – AI-generated videos that mimic humans – aren’t the most efficient tool for scammers to use, because simpler and cheaper attacks like phishing and other forms of social engineering work very well.
“People will give up info if you just ask nicely,” said Shier.
One area in which the researcher does see deepfakes becoming prevalent is romance scams. Crafting a believable fake persona already takes a hefty amount of devotion, time, and energy, and adding a deepfake on top requires comparatively little extra effort. Shier worries that deepfaked romance scams could become a real problem if AI lets scammers operate at scale.
Shier was not comfortable setting a date on industrialized deepfake lovebots, but said the necessary tech improves by orders of magnitude each year.
“AI experts make it sound like it is still a few years away from massive impact,” the researcher lamented. “In between, we will see well-resourced crime groups executing the next level of compromise to trick people into wiring funds into accounts.”
To date, deepfakes have most commonly been used to create sexualized images and videos – mostly depicting women.
However, a Binance PR exec recently revealed that criminals had created a deepfaked clone of him, which participated in Zoom calls and tried to pull off cryptocurrency scams.
Security researchers at Trend Micro warned last month that deepfakes may not always be a scammer’s main tool, but are often used to enhance other techniques. The lifelike digital images have lately shown up in job seeker scams, bogus business meetings and web ads.
In June, the FBI issued a warning that it was receiving an increasing number of complaints regarding deepfakes deployed in job interviews for roles that provide access to sensitive information. ®