OpenAI Disrupts AI Misuse by State-Linked and Criminal Actors

OpenAI has announced the takedown of multiple ChatGPT accounts linked to state-affiliated hacking groups and financially motivated threat actors. These actors used the platform to support malicious cyber activities, including malware development, social engineering, and online influence operations.
Russian-Linked Threat: Operation ScopeCreep
OpenAI identified and banned accounts tied to a Russian-speaking cybercriminal group that used ChatGPT to incrementally build a Go-based malware toolset. This activity, dubbed ScopeCreep, involved creating ChatGPT profiles with temporary email accounts, each used for a single query to refine malicious code or infrastructure.
Once the malware was refined, the group deployed it through a fake version of Crosshair X, a legitimate gaming overlay application. Victims who downloaded the tampered software unknowingly installed a malware loader, which connected to a remote server to download additional payloads.
Key features of the malware include:
Privilege escalation using ShellExecuteW
Antivirus evasion via PowerShell commands that excluded the malware from Windows Defender scans (see the defensive check sketched after this list)
Obfuscation with Base64 encoding
Credential theft, with stolen data sent to a Telegram channel controlled by the attackers
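For defenders, the Windows Defender exclusion technique above suggests a simple audit: list the exclusions currently configured on a host and flag anything unexpected. The sketch below assumes a Windows machine with the built-in Defender PowerShell module and is illustrative only, not something drawn from OpenAI's report.

# List current Windows Defender exclusions so unexpected entries stand out
$prefs = Get-MpPreference
$prefs.ExclusionPath        # folders and files excluded from scanning
$prefs.ExclusionProcess     # processes whose file activity is excluded
$prefs.ExclusionExtension   # file extensions excluded from scanning

An exclusion pointing at a user-writable location, such as a downloads or temp directory, is a common sign of the kind of tampering described above and warrants closer investigation.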
Chinese Nation-State Actors: APT5 and APT15
OpenAI also disabled accounts linked to APT5 and APT15, two well-documented Chinese cyber espionage groups. These actors used ChatGPT to:
Conduct open-source research on U.S. satellite communications
Modify Linux and Android development tools
Troubleshoot firewall and DNS configurations
Develop scripts for brute-forcing FTP servers and for automated social media manipulation
The sophistication of their inquiries suggests operational knowledge of network configurations, infrastructure setup, and content automation at scale.
Broader Malicious Use of AI
OpenAI’s threat report highlighted additional misuse across global disinformation and cybercrime campaigns:
North Korean IT Worker Scheme: Used AI to fabricate resumes and cover letters for fraudulent job applications.
Operation Sneer Review (China): Mass-generated social media content in multiple languages for propaganda purposes.
Operation VAGue Focus (China): Posed as journalists to discuss cyberattack tools and translate phishing content.
Operation Helgoland Bite (Russia): Spread election-related misinformation and anti-Western narratives in German and Russian.
Storm-2035 (Iran): Created geopolitical propaganda in support of Iran’s foreign policy and regional interests.
Operation Wrong Number (Cambodia/China): Used AI to pose as recruiters and lure victims into exploitative task-based scams.
Summary and Implications
These cases underline the dual-use nature of AI technologies. While AI tools offer significant productivity and innovation benefits, they also present new risks when used by malicious actors.
OpenAI stated that although its models are not designed to facilitate harm, coordinated misuse has prompted it to strengthen detection and response systems. Ongoing collaboration with law enforcement and threat intelligence partners remains central to identifying and dismantling AI-driven cyber operations.
“This work highlights a growing trend where malicious actors leverage generative AI for everything from code development to propaganda dissemination,” OpenAI noted. “Our goal is to stay one step ahead through transparency, tooling, and responsible development.”