Russian criminals can’t wait to hop over OpenAI fence, use ChatGPT for evil

Cybercriminals are famously fast adopters of new tools for nefarious purposes, and ChatGPT is no different in that regard. 

However, its adoption by miscreants has happened “even faster than we expected,” according to Sergey Shykevich, threat intelligence group manager at Check Point. The security shop’s research team said it has already seen Russian cybercriminals on underground forums discussing OpenAI workarounds so that they can bring ChatGPT to the dark side.

Security researchers told The Register this tool is worrisome because it provides a cost-effective way to experiment with polymorphic malware, which can be used in ransomware attacks. It can also be used to automatically produce text for phishing and other online scams, if the AI’s content filter can be sidestepped.

We’d have thought ChatGPT would be most useful for coming up with emails and other messages to send people to trick them into handing over their usernames and passwords, but what do we know? Some crooks may instead find the AI model handy for generating malicious code and suggesting techniques to deploy it.

“It allows people that have zero knowledge in development to code malicious tools and easily to become an alleged developer,” Shykevich told The Register. “It simply lowers the bar to become a cybercriminal.”

In a series of screenshots posted on Check Point’s blog, the researchers show miscreants asking other crooks what’s the best way to use a stolen credit card to pay for upgraded-user status on OpenAI, as well as how to bypass IP address, phone number, and other geo controls intended to prevent Russian users from accessing the chatbot. 

Russia is one of a handful of countries where OpenAI blocks access to its services.

The research team also found several Russian-language tutorials on the forums explaining how to bypass OpenAI’s SMS verification and register for ChatGPT.

“We believe these hackers are most likely trying to implement and test ChatGPT into their day-to-day criminal operations. Cybercriminals are growing more and more interested in ChatGPT, because the AI technology behind it can make a hacker more cost-efficient,” the Check Point crew wrote.

Please write me ransomware

In separate threat research published today, CyberArk Labs’ analysts Eran Shimony and Omer Tsarfati detail how to create polymorphic malware using ChatGPT. In the future, they plan to release some of the source code “for learning purposes,” the duo said.

While there are other examples of how to coax malicious code out of ChatGPT, in their latest research the CyberArk pair bypassed ChatGPT’s content filters and showed how, “with very little effort or investment by the adversary, it is possible to continuously query ChatGPT so we receive a unique, functional and validated piece of code each time,” Shimony told The Register.

“This results in polymorphic malware that does not show malicious behavior while stored on the disk as it receives the code from ChatGPT, and then executes it without leaving a trace in memory,” he said. “Besides that, we can ask ChatGPT to mutate our code.”

ChatGPT, like most chatbots, has content filters that aim to restrict harmful and inappropriate content creation. So it’s not surprising that simply asking it to “please write me a code injecting a shellcode into ‘explorer.exe’ in python” didn’t work and instead triggered the content filter.

Shimony and Tsarfati found a way to bypass this by imposing multiple constraints and asking ChatGPT to obey them. Using this method, the chatbot produced incomplete code that injects a DLL into explorer.exe.

Plus, for some unknown reason, prompts sent through the ChatGPT API consistently slip past the content filter, while the same requests made through the web version do not.
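For the curious, here is roughly what querying the model through the API rather than the web UI looks like: a minimal sketch using the openai Python package and a deliberately harmless prompt. The model name and client calls are our assumptions based on OpenAI's public documentation, not anything taken from CyberArk's code.

# Minimal sketch of a programmatic ChatGPT query via the API.
# Assumes the `openai` Python package and an OPENAI_API_KEY environment
# variable; the model name is our assumption, and the prompt is benign.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; use whatever your account offers
    messages=[
        {"role": "user",
         "content": "Write a Python function that lists all files "
                    "under a given directory."},
    ],
)

# The answer comes back as plain text that the caller can inspect or save.
print(response.choices[0].message.content)

The point, per CyberArk, is that responses fetched this way arrive as ordinary text under programmatic control, with no web UI or its filtering in the loop.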

After creating the placeholder shellcode with ChatGPT, the researchers used the chatbot to mutate the code, including encoding it as base64 and adding constraints such as changing the API call, tweaks that would help actual attackers evade detection.
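As for the base64 step, source code is just text, so it round-trips through Python's standard library like any other string. A trivial sketch, with a harmless snippet of our own rather than anything from the research:

import base64

# Any piece of source code is plain text, so it can be encoded and
# decoded losslessly; the snippet below is a benign stand-in.
source = "def greet(name):\n    return 'Hello, ' + name"

encoded = base64.b64encode(source.encode("utf-8"))
decoded = base64.b64decode(encoded).decode("utf-8")

assert decoded == source  # the round trip preserves the code exactly
print(encoded.decode("ascii"))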

Using the ChatGPT API from within the malware itself, on the compromised machine, rather than in an off-site environment also helps the malware fly under the radar, according to the researchers.

“By continuously querying the chatbot and receiving a unique piece of code each time, it is possible to create a polymorphic program that is highly evasive and difficult to detect,” the duo wrote.

Then they moved on to ransomware. First, they asked ChatGPT to write code that finds files that may be valuable to ransomware gangs, using a prompt shown in the researchers’ write-up.

Then they asked ChatGPT to encrypt the files, showing how an attacker could read and scramble a victim’s documents.

The malware includes a Python interpreter that queries ChatGPT for new modules that perform malicious actions, and this serves two purposes, according to the analysts. 

First, the packages will be delivered as text instead of binaries, which makes them look less suspicious to anti-malware software. Second, it allows any would-be attackers who can’t write code themselves to ask ChatGPT to modify the malware for code injection, file encryption or persistence, among other functions.

“Ultimately, the lack of detection of this advanced malware that security products are not aware of is what makes it stand out,” Shimony said, adding that this makes “mitigation cumbersome with very little effort or investment by the adversary.” 

“In the future, if it is connected to the internet, it might be able to create exploits for 1-days,” he added. “It is alarming because, as of now, security vendors haven’t really dealt with malware that continuously uses ChatGPT.” ®
