A North Korean threat actor has leveraged AI to create fake South Korean military agency ID card images used in a spear-phishing campaign, according to cybersecurity firm Genians.
The state-affiliated Kimsuky group was observed using ChatGPT to produce sample ID card images designed to lure victims into clicking a malicious link. The attackers impersonated a South Korean defense-related institution, claiming to handle ID issuance for military-affiliated officials.
The AI-generated ID cards were designed to enhance the authenticity of the phishing email.
“This is a real case demonstrating the Kimsuky group’s application of deepfake technology,” the Genians researchers wrote in the report, dated September 15.
The attack was first detected by the Genians Security Center (GSC) on July 17. The campaign followed a series of ClickFix-based phishing campaigns attributed to Kimsuky in June.
Both campaigns deployed the same malware, which is designed to enable malicious activities such as internal data theft and remote control.
The primary targets of the campaigns were researchers in North Korean studies, North Korean human rights activists and journalists, the researchers noted.
AI-Developed Military ID
The use of AI-generated images marked an evolution of the Kimsuky ClickFix attacks observed by the researchers.
The sender’s email address closely mimicked the official domain of a South Korean military institution and purported to be a draft review request for military employee ID cards.
The email contained fake images of South Korean military employee ID cards as samples, attached as PNG files.
The images were identified as deepfakes with 98% probability.

A separate file, ‘LhUdPC3G.bat,’ delivered alongside the image, executed once downloaded and initiated the malicious activity.
Prompt Injection Used to Generate Illegal Images
The report noted that it is illegal to produce copies of government-issued military IDs. When prompted to generate such a copy, ChatGPT therefore returns a refusal.
However, this refusal can be overcome through prompt injection. For example, the researchers said the large language model (LLM) may comply with requests framed as creating a mock-up or sample design for legitimate purposes, rather than as reproducing an actual military ID.
“The deepfake image used in this attack fell into this category. Because creating counterfeit IDs with AI services is technically straightforward, extra caution is required,” they wrote.