AI-Powered Cyber Crime Raises Worldwide Alarm Bells
A new report from Anthropic, an AI startup backed by Amazon and Google, revealed a major shift in the cybercrime landscape. Through the use of Anthropic’s own AI model (Claude) and coding agent (Claude Code), an unnamed hacker was able to carry out an “unprecedented” cybercrime spree. This event signals the beginning of a new era of AI-enabled cybercrime that has the potential to become more sophisticated and accessible.
Satish Swargam, Principal Security Consultant at Black Duck, noted that “Hackers are known to use sophisticated tools to launch cyber attacks, and Anthropic’s recent report shows how hackers are now using AI chatbots to discover, prepare, and formulate attacks to make them even more effective with less time and effort. Nowadays, even novices can utilise AI chatbots to launch cyberattacks, highlighting how easily this can be done.”
The Discovery
Anthropic’s own AI model and coding agent were exploited by the attacker to automate nearly the entire ransomware attack cycle. This process usually requires extensive human input, but by convincing Claude Code, a chatbot specialising in “vibe coding”, the hacker was able to identify companies vulnerable to attack. A few more inputs later, Claude had created malicious software capable of stealing sensitive information from the companies the AI model had first identified.
After some swift financial analysis of the stolen documents, Claude determined realistic amounts of bitcoin to demand in exchange for keeping the documents out of the public eye. Oh, and the emails used to extort the victims were also neatly drafted by Claude.
Nivedita Murthy, Senior Security Consultant, notes that “attackers using AI to improve their attack methods or increase automation is not surprising. However, in this case, it is interesting to note that Claude Code had a wealth of information on which organisations were vulnerable and where. It also freely gave away this information in the form of an attack vector. What organisations need to really look into is how much the AI tools they use know about their company and where that information goes.”
Can this be avoided in the future?
What this report really brings to light is the ease with which an individual was able to use an AI model to carry out widespread attacks. In the end, the attacker targeted at least 17 organisations spanning healthcare providers, emergency services, government offices, and religious institutions. It isn’t clear how many of the companies ultimately paid up, but the extortion demands ranged from $75,000 to more than $500,000.
So, in the words of Graeme Stewart, head of public sector: “The AI buzz has reached the bad guys”. But with that comes the question of how such attacks can be avoided in the future. To answer this, Stewart states, “The only defence right now is AI in security, robust processes that organisations actually stick to, and people spotting the weird requests that are often the giveaway. But it can’t just be left to industry. The government has to step up. First, launch a major national campaign on the dangers of ransomware and force the media to join the conversation. Second, stop relying on unstructured pro-bono work from companies like ours and build a proper, coordinated programme for schools, Chambers, and local business groups. Third, give the NCSC statutory powers to compel organisations, including Government departments, to fix their processes before the next £400m mistake happens.”
In response, Anthropic banned the malicious accounts, introduced new detection systems, and collaborated with government agencies by sharing threat intelligence. The company stressed that as AI technology advances, defences must evolve at “machine speed” to counter machine-driven attacks.
Overall, the Anthropic case marks a significant turning point: AI, once a productivity enhancer, is now at risk of being exploited by criminals on a global scale. If AI systems can be repurposed for extortion, espionage, and large-scale data theft, then every innovation carries a double edge. The responsibility now lies with developers, regulators, and businesses to anticipate these dual-use risks and act before they spiral out of control. Without appropriate action, cases like this could grow into a cyber epidemic, costing millions.