
Anthropic's Claude AI used in large-scale 'vibe hacking' extortion campaign
What's the story
Anthropic, a leading artificial intelligence (AI) company, has revealed that its advanced AI system, Claude, was exploited in sophisticated cyberattacks. The incident marks a major case of generative AI being "weaponized" by criminals. In a recent report, Anthropic detailed how a hacker used Claude to run a "vibe hacking" extortion campaign against at least 17 organizations, including ones in the healthcare and government sectors.
AI exploitation
Hacker used Claude for decision-making, ransom notes
The hacker in question leveraged Claude Code, Anthropic's agentic coding assistant, to automate various stages of the attack, including reconnaissance, credential theft, and network breaches. The AI was also used for decision-making, such as identifying which data to target, and for generating "visually alarming" ransom notes. Some victims were even threatened with the public release of sensitive personal information unless they paid six-figure ransoms.
Countermeasures
What has Anthropic done to prevent future incidents?
Upon discovering the attack, Anthropic took immediate action by banning the accounts involved and notifying authorities. The company also introduced new automated screening and faster detection tools to prevent similar incidents. While Anthropic has not disclosed specifics of these measures, it has emphasized that it is constantly improving its defenses against such threats.
Industry concerns
Similar misuse of OpenAI's tools
Anthropic isn't the only company dealing with such misuse of AI. Last year, OpenAI revealed that hackers from China and North Korea had used its generative tools for malicious purposes like debugging harmful code and drafting phishing emails. Microsoft, which uses OpenAI's models in its Copilot suite, also helped cut off those groups' access.
AI misuse
Claude also used in fraudulent job applications, AI-assisted ransomware
The report also highlights two other instances of Claude's misuse: a North Korean fraudulent job application scheme and the creation of AI-assisted ransomware. Anthropic argues that the common thread in these cases is modern AI's ability to lower the barrier to entry for cybercrime: even individuals with little technical expertise can now carry out complex operations that previously required skilled teams.