A Chinese state-sponsored hacking group has executed what security researchers are calling the first documented large-scale cyberattack executed almost entirely by artificial intelligence. Experts warn this marks a fundamental shift in the threat landscape facing government agencies, critical infrastructure, and private enterprises.
According to a detailed threat intelligence report released by Anthropic, the AI company whose Claude model was exploited in the attacks, threat actors manipulated the company’s Claude Code tool to autonomously infiltrate approximately 30 targets worldwide, including major technology companies, financial institutions, chemical manufacturing firms, and government agencies. The operation was detected in mid-September 2025 and disrupted over a 10-day investigation.
The implications are stark: cyberattacks are becoming less reliant on human operators and more sophisticated, with AI systems capable of carrying out the work of entire hacking teams at speeds no human could match.
The Dawn of Autonomous Cyber Operations
What distinguishes this campaign from previous AI-assisted attacks is the unprecedented level of autonomy. According to Anthropic’s analysis, the AI performed 80 to 90 percent of the attack operations, with human operators intervening at only four to six critical decision points per target. The AI system conducted reconnaissance, wrote custom exploit code, harvested credentials, moved laterally through compromised networks, and exfiltrated data — all with minimal human guidance.
“At the peak of its attack, the AI made thousands of requests, often multiple per second — an attack speed that would have been, for human hackers, simply impossible to match,” the Anthropic report states.
The attackers developed what researchers describe as an “attack framework:” an automated system designed to compromise targets with minimal human involvement. This framework leveraged Claude Code, a developer tool designed to help programmers with coding tasks, and repurposed it as an autonomous cyber weapon.
How the Attack Bypassed AI Safety Measures
The threat actors did not break Claude’s safety systems through brute force. Instead, they employed a technique known as “context splitting,” which means breaking down the attack into small, seemingly innocent tasks that individually appeared to be legitimate security work.
Commands like “scan this network” or “test this vulnerability” raised no red flags when viewed in isolation. The malicious intent emerged only when the overall sequence of actions was taken together and seen to constitute a sophisticated espionage operation.
The attackers also manipulated the AI by establishing a false context: Claude was told it was an employee of a legitimate cybersecurity firm conducting authorized defensive testing. This social engineering of an AI system is a new frontier in adversarial tactics.
Implications for Homeland Security
This incident reveals that traditional model-level safeguards are insufficient; the barrier to conducting sophisticated cyberattacks has collapsed. What previously required large teams of elite operators with years of specialized training can now potentially be executed by a single threat actor with access to the right framework and an AI subscription.
Equally concerning is the economics. Traditional cyber campaigns require expensive human capital. AI-driven frameworks reduce the cost per target to near zero, enabling adversaries to scale operations in ways that were previously impractical.
Early conversations following news of this attack center around the idea that proliferation is inevitable. Frameworks refined by state actors today will likely become commercially available tools within two years. The phrase “AI red-team in a box” may soon describe an actual product available to criminal enterprises and less sophisticated threat actors.
For security operations centers, the message is clear: defenders must develop AI fluency, not just traditional controls. Analysts need to supervise AI-driven triage and threat hunting. Attackers have already embraced AI as a force multiplier; defenders cannot afford to lag behind.
A Turning Point
Anthropic’s report characterizes this moment as a fundamental change in cybersecurity. Six months ago, the company’s systematic evaluations showed cyber capabilities doubling in that period. What was predicted as an emerging capability has arrived faster than anticipated … and at scale.
The same AI capabilities that make these attacks possible are also essential for defense. Anthropic noted that its threat intelligence team used Claude extensively to analyze the enormous amounts of data generated during the investigation. The question is not whether to develop AI, but how to architect these systems to be defensible.
AI as a helpful assistant has now evolved into AI as an autonomous operator. How governments, enterprises, and the security community respond will determine whether these technologies become defensible infrastructure or an accelerant available to every adversary with patience and resources.

