Anthropic has released a new threat intelligence report outlining how cybercriminals are misusing advanced AI models for large-scale extortion, fraudulent employment schemes, and even ransomware development. The report, "Detecting and Countering Misuse of AI," describes how malicious actors are weaponizing AI to lower the technical barriers to cybercrime and embed it into every stage of their operations.
In one case, cybercriminals used Claude Code to run a wide-ranging extortion campaign against at least 17 organizations across healthcare, government, emergency services, and religious institutions. Rather than deploying traditional ransomware, the attackers threatened to leak stolen sensitive data unless ransoms, in some cases exceeding $500,000, were paid. According to Anthropic, AI was used not only to infiltrate networks but also to decide which data to steal, calculate ransom demands, and craft convincing extortion messages.
Another case revealed how North Korean IT operatives have been using AI to secure fraudulent employment at U.S. Fortune 500 technology firms. The operatives leveraged AI tools to create fake identities, pass technical interviews, and complete work tasks, funneling their earnings back to the North Korean regime in defiance of international sanctions.
A third case detailed how a cybercriminal used AI to develop and sell ransomware variants online, offering them for $400 to $1,200 each. Anthropic reported that the actor lacked the technical expertise to create malware independently, relying on AI to build encryption, evasion, and anti-recovery features.
The full report is available on Anthropic's website.
(AI was used in part to facilitate this article.)