VoidLink, the newly spotted Linux malware that targets victims’ clouds with 37 evil plugins, was generated “almost entirely by artificial intelligence” and likely developed by just one person, according to the research team that discovered the do-it-all implant.
Last week, Check Point Research published a report on the never-before-seen malware samples, originally discovered in December, and said it seemed to be an in-progress framework – not a fully production-ready tool – that originated from a Chinese-affiliated development environment.
It’s designed to run in Linux-based cloud environments, and automatically detects whether it’s operating in AWS, Google Cloud Platform, Microsoft Azure, Alibaba Cloud, or Tencent Cloud.
Plus, it’s packed with custom loaders, implants, rootkits, and numerous modules that give attackers a whole range of stealth and operational-security capabilities, making it “far more advanced than typical Linux malware,” Check Point said.
In a new analysis published Tuesday, the security shop said the malware was likely not the product of a large, well-resourced development team, despite initially appearing that way. Instead, Check Point Research believes VoidLink was authored almost entirely by AI, likely under the direction of a single individual, with development artifacts showing it reached a first functional implant in under a week.
“VoidLink demonstrates that the long-awaited era of sophisticated AI-generated malware has likely begun,” the threat hunters wrote.
The team came to this conclusion after noting that the 30-week planned development timeline, leaked in VoidLink internal documents, didn’t match up with the observed timeline, which indicated a much faster process.
“Deeper investigation revealed clear artifacts indicating that the development plan itself was generated and orchestrated by an AI model and that it was likely used as the blueprint to build, execute, and test the framework,” the report said, noting that the timestamped artifacts showed VoidLink evolving from a concept to a functional piece of malware in less than a week.
The developer began working on VoidLink in late November, using Trae Solo – an AI assistant embedded in the Trae integrated development environment – to generate a Chinese-language instruction document. The individual didn’t directly ask the AI agent to build the malware. In fact, they instructed the model not to implement code or provide technical details about malware-building techniques, likely an attempt to manipulate the AI into sidestepping its safety guardrails.
Additionally, the code repository mapping documentation suggests that the model was fed a minimal codebase as the starting point for the malware, which was then rewritten end to end.
Check Point’s researchers also found a work plan written in Chinese for three development teams: a core team (using Zig programming language), an arsenal team (C), and a backend team (Go).
The documentation, which the security sleuths say “bears all the hallmarks of a large language model,” includes sprint schedules, feature breakdowns, and coding guidelines.
While the exercise was presented to the model as a 30-week engineering effort, the timestamped documents indicate the 88,000-line codebase took just six days to develop. The malware was uploaded to VirusTotal on December 4, which is when Check Point’s research began.
According to the malware hunting team, this indicates that AI – when used by a capable developer – can produce sophisticated offensive security tools faster and at scale, without the funding and resources typically associated with experienced threat groups.
It’s not a fully autonomous AI-driven attack. But it does show that AI agents can help humans produce very capable, sneaky tools for nefarious purposes. ®