
For the price of Netflix, crooks can now rent AI to run cybercrime

Cybercrime has entered its AI era, with criminals now using weaponized language models and deepfakes as cheap, off-the-shelf infrastructure rather than experimental tools, according to researchers at Group-IB.

In its latest whitepaper, the cybersec biz argues that AI has become the plumbing of modern cybercrime, quietly turning skills that once took time and talent into services that anyone with a credit card and a Telegram account can rent.

This isn’t just a passing fad, according to Group-IB’s numbers, which show mentions of AI on dark web forums up 371 percent since 2019, with replies rising even faster – almost twelvefold. AI-related threads were everywhere, racking up more than 23,000 new posts and almost 300,000 replies in 2025.

According to Group-IB, AI has done what automation always does: it took something fiddly and made it fast. The stages of an attack that once needed planning and specialist hands can now be pushed through automated workflows and sold on subscription, complete with the sort of pricing and packaging you’d expect from a shady SaaS outfit.

One of the uglier trends in the report is the rise of so-called Dark LLMs – self-hosted language models built for scams and malware rather than polite conversation. Group-IB says several vendors are already selling them for as little as $30 a month, with more than 1,000 users between them. Unlike jailbroken mainstream chatbots, these things are meant to stay out of sight, run behind Tor, and ignore safety rules by design.

Running alongside the Dark LLM market is a booming trade in deepfakes and impersonation tools. Group-IB says complete synthetic identity kits, including AI-generated faces and voices, can now be bought for about $5. Sales spiked sharply in 2024 and kept climbing through 2025, a sign the market is still expanding.

There’s real damage behind the numbers, too. Group-IB says deepfake fraud caused $347 million in verified losses in a single quarter, including everything from cloned executives to fake video calls. In one case, the firm helped a bank spot more than 8,000 deepfake-driven fraud attempts over eight months.

Group-IB found scam call centers using synthetic voices to make first contact, with language models coaching the human operators in real time. Malware developers are also starting to test AI-assisted tools for reconnaissance and persistence, with early hints of more autonomous attacks down the line.

“From the frontlines of cybercrime, we see AI giving criminals unprecedented reach,” said Anton Ushakov, head of Group-IB’s Cybercrime Investigations Unit. “Today it helps scale scams with ease and hyper-personalization at a level never seen before. Tomorrow, autonomous AI could carry out attacks that once required human expertise.”

From a defensive point of view, AI removes a lot of the usual clues. When voices, text, and video can all be generated on demand with off-the-shelf software, it becomes much harder to work out who’s really behind an attack. Group-IB’s view is that this leaves static defenses struggling.

In other words, cybercrime hasn’t reinvented itself. It has just automated the old tricks, put them on subscription, and scaled them globally – and as ever, everyone else gets to deal with the mess. ®
