Nork snoops whip up fake South Korean military ID with help from ChatGPT

North Korean spies used ChatGPT to generate a fake military ID for use in an espionage campaign against a South Korean defense-related institution, according to new research.

Kimsuky, a notorious cybercrime squad believed to be sponsored by the North Korean government, used a deepfaked image of a military employee ID card in a July spear-phishing attack against a military-related organization, according to the Genians Security Center (GSC), a South Korean security institute.

The file’s metadata indicated it was generated with ChatGPT’s image tools, according to Genians, despite OpenAI’s efforts to block the creation of counterfeit IDs.
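The kind of metadata check Genians describes can be sketched in a few lines. This is a minimal, stdlib-only illustration, not Genians' actual tooling: real AI-generated images more often carry a C2PA provenance manifest than a plain PNG `tEXt` chunk, and the `Software` key and tool-name strings below are assumptions for demonstration. The sketch writes a tiny PNG with an embedded text chunk, then scans the file's chunks for strings associated with known AI image tools.

```python
# Hypothetical sketch: scan a PNG's tEXt metadata chunks for AI-tool strings.
# Real provenance data (e.g. C2PA manifests) is richer; field names here are
# illustrative assumptions, not what ChatGPT actually embeds.
import struct
import zlib

def _chunk(ctype: bytes, data: bytes) -> bytes:
    """Build one PNG chunk: length, type, payload, CRC over type+payload."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def write_png(path: str, text_chunks: dict) -> None:
    """Write a 1x1 red PNG carrying the given keyword->text tEXt chunks."""
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 2, 0, 0, 0)  # 1x1, 8-bit RGB
    idat = zlib.compress(b"\x00\xff\x00\x00")            # filter byte + pixel
    body = b"\x89PNG\r\n\x1a\n" + _chunk(b"IHDR", ihdr)
    for key, val in text_chunks.items():
        body += _chunk(b"tEXt", key.encode("latin-1") + b"\x00"
                       + val.encode("latin-1"))
    body += _chunk(b"IDAT", idat) + _chunk(b"IEND", b"")
    with open(path, "wb") as f:
        f.write(body)

def text_chunks(path: str) -> dict:
    """Parse a PNG and return all tEXt chunks as a keyword->text dict."""
    with open(path, "rb") as f:
        data = f.read()
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    out, pos = {}, 8
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        payload = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = payload.partition(b"\x00")
            out[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # length + type + payload + CRC
    return out

def ai_tool_hints(path: str, tools=("chatgpt", "openai", "dall")) -> dict:
    """Return metadata entries whose text mentions a known AI image tool."""
    return {k: v for k, v in text_chunks(path).items()
            if any(t in v.lower() for t in tools)}

# Demo with an assumed "Software" keyword:
write_png("sample.png", {"Software": "Made with ChatGPT image tools"})
print(ai_tool_hints("sample.png"))
# → {'Software': 'Made with ChatGPT image tools'}
```

A string scan like this is only a first-pass triage signal: metadata is trivially stripped or forged, which is one reason provenance standards such as C2PA bind the manifest to the image cryptographically.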

According to Genians’ threat intel team, the faked ID photo was based on publicly available headshots and composited into a template resembling a South Korean military employee card. The researchers say the attackers likely used prompt-engineering tricks – framing the request as the creation of a “sample design” or “mock-up” for legitimate use – to get around ChatGPT’s built-in refusals to generate government ID replicas.

“Since military government employee IDs are legally protected identification documents, producing copies in identical or similar form is illegal. As a result, when prompted to generate such an ID copy, ChatGPT returns a refusal,” Genians said. “However, the model’s response can vary depending on the prompt or persona role settings.” For example, it may respond to requests framed as creating a mock-up or sample design for legitimate purposes rather than reproducing an actual military ID.

“The deepfake image used in this attack fell into this category. Because creating counterfeit IDs with AI services is technically straightforward, extra caution is required.”

Once crafted, the deepfake was distributed to targets in emails disguised as correspondence about ID issuance for military-affiliated officials.

Targets included a defense-related institution in South Korea, though Genians stopped short of naming victims and didn’t say how many organizations were targeted.

Genians’ findings are the latest example of suspected North Korean hackers adopting AI as part of their intelligence-gathering work. Last month, Anthropic said Pyongyang’s keyboard warriors had been using its Claude Code tool to spin up fake personas, ace job interviews, and even ship code for Fortune 500 firms.

The research also shows Kimsuky, the espionage crew best known for targeting South Korea’s military, government, and think tanks, once again shifting its tactics; this time, the group is moving away from its well-worn phishing lures and malicious Word docs and is instead delivering its payloads via deepfake-based forgeries.

OpenAI didn’t immediately respond to The Register’s questions. However, in February, the company said [PDF] it had booted dozens of accounts it says were tied to North Korea’s overseas IT worker schemes, adding that the crackdown was part of a broader effort to spot and disrupt state-backed misuse of its models. ®
