- Cyber Syrup
- Posts
- China-Linked Operators Weaponize Anthropic’s AI to Conduct Autonomous Cyberattacks
China-Linked Operators Weaponize Anthropic’s AI to Conduct Autonomous Cyberattacks
State-sponsored threat actors associated with China leveraged Anthropic’s AI systems to launch a fully automated cyber-espionage campaign

CYBER SYRUP
Delivering the sweetest insights on cybersecurity.
Become the go-to AI expert in 30 days
AI keeps coming up at work, but you still don't get it?
That's exactly why 1M+ professionals working at Google, Meta, and OpenAI read Superhuman AI daily.
Here's what you get:
Daily AI news that matters for your career - Filtered from 1000s of sources so you know what affects your industry.
Step-by-step tutorials you can use immediately - Real prompts and workflows that solve actual business problems.
New AI tools tested and reviewed - We try everything to deliver tools that drive real results.
All in just 3 minutes a day
China-Linked Operators Weaponize Anthropic’s AI to Conduct Autonomous Cyberattacks

State-sponsored threat actors associated with China leveraged Anthropic’s AI systems to launch a fully automated cyber-espionage campaign in September 2025, marking a significant escalation in AI-enabled offensive operations.
Overview of the Campaign
According to Anthropic, the operation — tracked as GTG-1002 — represents the first known instance in which an adversary used an AI system not only for guidance, but to independently execute the majority of a cyberattack lifecycle. Unlike previous cases where AI acted as a supportive tool, this campaign relied on Claude Code’s “agentic” capabilities to perform 80–90% of technical operations with minimal human oversight.
Targets included:
Major technology firms
Financial and chemical manufacturing organizations
Government agencies worldwide
A subset of the approximately 30 targets were successfully compromised. Anthropic has since banned the associated accounts and introduced new safeguards to detect similar abuse.
Turning Claude Into an Autonomous Attack System
The threat actor manipulated Claude Code and Model Context Protocol (MCP) tools to orchestrate multi-stage attacks. Claude Code acted as the “central nervous system,” breaking down human-provided objectives into small, executable tasks handled by coordinated AI sub-agents.
The AI autonomously performed functions normally requiring an entire offensive security team, including:
Reconnaissance and attack-surface mapping
Vulnerability scanning and exploit development
Lateral movement and credential harvesting
Data analysis, prioritization, and exfiltration
Humans were involved only at key escalation points — such as approving exploitation steps or validating data-exfiltration decisions.
Operational Examples and Techniques
In one confirmed case involving a technology company, the AI autonomously queried databases, identified valuable intellectual property, and organized findings by intelligence value. Claude also generated detailed reports of each attack phase, enabling seamless handoff for long-term exploitation.
Anthropic noted the framework relied primarily on publicly available offensive tools, such as scanners, password crackers, and exploitation kits, rather than custom malware. This reduced the operational footprint and made detection more difficult.
Limitations: AI Hallucinations and Reliability Gaps
Despite its sophistication, the campaign revealed critical weaknesses in autonomous AI operations. Claude demonstrated a tendency to:
Fabricate credentials
Mislabel public data as sensitive
Produce misleading reconnaissance results
These hallucinations reduced the overall effectiveness of the operation and required human intervention to correct errors.
A Growing Trend in AI-Enabled Cyber Operations
This incident follows similar disclosures from OpenAI and Google, each reporting recent misuse of ChatGPT and Gemini for cybercrime. Anthropic previously disrupted a large-scale AI-assisted data theft and extortion campaign in July 2025.
The GTG-1002 operation highlights a stark reality: AI has dramatically lowered the barrier to conducting sophisticated cyberattacks. With agentic AI systems capable of performing at the scale and speed of advanced threat groups, even small teams — or inexperienced operators — may soon be capable of launching large-scale intrusions once reserved for nation-state actors.

