
Attackers abuse Gemini AI to develop ‘Thinking Robot’ malware and data processing agent for spying purposes

Nation-state goons and cybercrime rings are experimenting with Gemini to develop a “Thinking Robot” malware module that can rewrite its own code to avoid detection, and build an AI agent that tracks enemies’ behavior, according to Google Threat Intelligence Group.

In its most recent AI Threat Tracker, published Wednesday, the Chocolate Factory says it observed a shift in adversarial behavior over the past year. 

Attackers are no longer just using Gemini for productivity gains – things like translating and tailoring phishing lures, looking up information about surveillance targets, using AI for tech support, and writing some software scripts. They are also trialing AI-enabled malware in their operations, we’re told. 

For example, APT42, the cyber-arm of Iran’s Islamic Revolutionary Guard Corps (IRGC), has long used AI for phishing campaigns and as a translation tool. 

More recently, however, the government goons attempted to use Gemini to build a “data processing agent” that converts natural-language requests into SQL queries to analyze personally identifiable information (PII), and uses that sensitive info to provide insights about individuals’ asset ownership, location, demographics, and behavior.

APT42 “provided Gemini with schemas for several distinct data types in order to perform complex queries such as linking a phone number to an owner, tracking an individual’s travel patterns, or generating lists of people based on shared attributes,” according to GTIG, which adds that it has since disabled these accounts.
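The schema-in-prompt pattern GTIG describes is a standard text-to-SQL technique. Below is a minimal, benign sketch of how such an agent composes its request to a model; the table names, columns, and question are illustrative inventions, not details from the report, and the actual LLM call is omitted.

```python
# Illustrative schema-in-prompt text-to-SQL composition. The agent hands the
# model its table schemas plus a natural-language question, and expects a
# SQL query back. All names here are hypothetical.

SCHEMA = """
CREATE TABLE subscribers (phone TEXT, owner_name TEXT, city TEXT);
CREATE TABLE travel_log (phone TEXT, airport TEXT, departed_at TEXT);
"""

def build_sql_prompt(schema: str, question: str) -> str:
    """Compose the prompt a text-to-SQL agent would send to an LLM."""
    return (
        "You translate analyst questions into SQL.\n"
        f"Database schema:\n{schema}\n"
        f"Question: {question}\n"
        "Answer with a single SQL query."
    )

prompt = build_sql_prompt(
    SCHEMA, "Which phone numbers departed from KBP in May?"
)
# The prompt would then go to an LLM API, and the model's reply is the query -
# which is what makes "linking a phone number to an owner" a one-sentence task.
```

The point for defenders is that nothing here is exotic: the capability comes entirely from pairing ordinary database schemas with a general-purpose model.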

Malware using LLMs during execution

Additionally, in what Google calls the “first use of just-in-time AI in malware,” a novel code family uses LLMs during execution to generate malicious scripts on the fly, obfuscate its own code, and create malicious functions on demand.

While still experimental, this malware dropper, tracked as PromptFlux, provides “an early indicator of how threats are evolving and how they can potentially integrate AI capabilities into future intrusion activity,” the report says.

Google’s threat hunters initially spotted PromptFlux in early June. It’s written in VBScript and includes a Thinking Robot module. This component interacts with Gemini’s API to request VBScript obfuscation and evasion techniques, which it uses to rewrite its own source code – this helps it beat static, signature-based malware detection tools – and saves the new version to establish persistence.  

The good news for defenders is that PromptFlux isn’t attack-ready – yet. The malware in its current form does not have the capability to compromise victims’ networks or devices, and Google has disabled the accounts connected to this activity.

However, Google says it identified “multiple” PromptFlux variations using LLM-driven code regeneration. One of these replaced the Thinking Robot function with a Thinking function that attempts to trick Gemini into rewriting the malware’s source code on an hourly basis via a sneaky prompt. It tells the model to act as an “expert VBScript obfuscator.”

While this malware isn’t attributed to a particular group, the filenames “highlight behaviors commonly associated with financially motivated actors,” the researchers wrote.

AI-powered data miner

In another case of “Prompt” malware – this time deployed in a real operation – GTIG says that in June it spotted Russia’s APT28 (also known as Fancy Bear, Forest Blizzard, or FrozenLake) using new data-mining malware, tracked as PromptSteal, against Ukraine.

The US and UK have said APT28 is part of Russia’s General Staff Main Intelligence Directorate (GRU) military unit 26165.

Instead of hard-coding commands into the malware, PromptSteal queries LLMs during attacks via the Hugging Face API to generate the commands it executes. This, according to Google, is another first: the first observed instance of malware querying an LLM in a live operation.
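To make the mechanism concrete: Hugging Face's serverless inference API is an ordinary HTTPS endpoint that takes a bearer token and a JSON body with an `inputs` field. The sketch below only assembles such a request – it sends nothing – and the model ID, token, and prompt are placeholders, not details from GTIG's report.

```python
# Hedged sketch of the pattern GTIG describes: rather than shipping
# hard-coded commands, code sends a prompt to a hosted model and acts on
# the text that comes back. This builds the request but never sends it.
import json

API_BASE = "https://api-inference.huggingface.co/models/"

def build_inference_request(model_id: str, prompt: str, token: str):
    """Return the URL, headers, and JSON body for a text-generation call."""
    url = API_BASE + model_id
    headers = {"Authorization": f"Bearer {token}"}
    body = json.dumps({"inputs": prompt})
    return url, headers, body

url, headers, body = build_inference_request(
    "some-org/some-text-model",   # hypothetical hosted model
    "Generate a one-line Windows command that lists documents "
    "under the user profile.",
    "hf_xxx",                     # placeholder token
)
# Sending this (e.g. with requests.post) would return generated text for the
# caller to execute - which is why outbound traffic to inference APIs from
# unexpected hosts is a signal defenders can hunt for.
```

Because the commands are generated per request, static analysis of the binary reveals prompts and API plumbing rather than the commands themselves.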

The report includes an example of one of the prompts used by this data miner.

It appears that APT28 is still fine-tuning this malware; the analysts say they’ve spotted new samples that add obfuscation and change the command-and-control method.

If both of these new malware samples sound familiar, they should. They share the “Prompt” prefix with PromptLock, the AI-powered ransomware uploaded to VirusTotal that turned out to be a proof-of-concept developed by a group of New York University engineers – not a criminal operation or government-backed group.

China-based groups are also using Gemini for evil. In one case cited in the report, a China-linked user asked Gemini to identify bugs on a compromised system. When the AI refused, citing safety concerns, the would-be attacker tried to social engineer the chatbot, rewording the prompt to claim they were participating in a capture-the-flag security competition.

This worked, and Gemini provided “helpful information that could be misused to exploit the system.” ®
