Generative AI makes fraud fluent – from phishing lures to fake lovers

RSAC Spam predates the web itself, and generative AI has given it a fluency upgrade, churning out slick, localized scams and letting crooks hit regions and dialects they used to ignore.

One of the red flags that traditionally identified spam, including phishing attempts, was poor spelling and syntax, but the use of generative AI has changed that by taking humans out of the loop.

“I’m assuming at this point that probably half of the spam we get is being written by generative AIs; the quantity of spelling and grammar errors has fallen precipitously,” Chester Wisniewski, global field CISO of British security biz Sophos, told The Register during this week’s RSA Conference. “I’ve joked about this a few times, but if the grammar and spelling is perfect, it probably is a scam, because even humans make mistakes most of the time.”

AI has also widened the geographical scope of spam and phishing. When humans were the primary crafters of such content, the crooks stuck to common languages to target the largest audience with the least amount of work. But, Wisniewski explained, AI makes it much easier to craft emails in different languages.

He gave an example from his native Canada. Residents of the French-dominated province of Quebec can peg spam notes quickly because they’re often written in European French rather than Québécois. But AI systems can easily generate convincing Québécois, making it easier to snare victims.

A similar trend is observed with Portuguese-language spam. Given that Brazil’s population is about 20 times larger than Portugal’s, scammers have historically favored Brazilian Portuguese in their campaigns. Now, with AI capable of producing content in European Portuguese, residents in Portugal are finding it increasingly difficult to discern phishing attempts crafted in their local linguistic style.

“From the criminal enterprise perspective, it’s opened the world,” Kevin Brown, chief operating officer at security consultancy NCC Group, told The Register.

“What is all the phishing training that we’ve done over the years? The obvious things, the poor grammar, the urgency, the obvious. Overnight AI has said, ‘You know what, I’m going to write something that is written in good language, with good punctuation, and it will be written in a local language.’”

The same is true of romance scams, including the long-con investment variety known as pig butchering. AI chatbots have proven highly effective at luring victims into thinking they are being wooed by an attractive partner, at least during the initial phases.

Wisniewski said AI chatbots can easily handle the opening phases of these scams, registering interest and appearing empathetic. A human operator then takes over and begins extracting funds from the mark, asking for financial help or encouraging them to invest in Ponzi schemes.

Trust none of what you hear

On the subject of deepfakes, Wisniewski said audio deepfakes are already tricking victims at companies. For instance, scammers might call everybody on a support team using an AI-generated voice that mimics somebody in the IT department, asking for a password until one victim succumbs.

“You can do real-time audio deepfakes for pennies,” he said.

But Wisniewski expressed skepticism about real-time video deepfakes, specifically referencing a widely reported case from last February in which a Hong Kong employee was allegedly tricked into transferring $25 million to scammers using a video call featuring a deepfake of the CFO. He suggested it’s much more likely that someone had just pressed the wrong button and was looking to blame the latest trend rather than admit incompetence.

He noted that even the big AI companies, with billion-dollar budgets, have yet to crack the challenge of creating convincingly interactive real-time video avatars, so the idea that criminals could build such a model themselves isn’t realistic. But it’s only a matter of time.

“If we follow the same trajectory of the audio deepfakes, we’re about two years out from the criminals having it at an economical price, and three years out from your least favorite uncle doing them for a joke on Facebook,” Wisniewski said.

Brown disagreed, however, saying that NCC Group’s pentesters have had some success with video fakery.

“We’ve been able to do some video deepfakes on specific use cases. But these are professionals that have been doing this for years,” he said. “We are able to do that, but it will become industrialized in due course.”

Both Brown and Wisniewski agreed that there is going to be a pressing need for personal verification in communications that goes beyond the established systems. ®
