Anthropic says attacker used AI tool in widespread hacks



A hacker leveraged technology from Anthropic PBC as part of a vast cybercrime scheme that has impacted at least 17 organisations, the company said, marking an “unprecedented” instance of attackers weaponising a commercial artificial intelligence tool on a widespread basis.

The hacker used Anthropic’s agentic coding tool in a data theft and extortion operation that, over the past month, affected victims across government, health care, emergency services and religious institutions, according to the company’s August threat intelligence report published this week. The attacks using the Claude Code tool resulted in the compromise of health care data, financial information and other sensitive records, with ransom demands ranging from US$75,000 to US$500,000 in cryptocurrency.

The campaign demonstrated a “concerning evolution in AI-assisted cybercrime” in which a single user can operate like an entire cybercriminal team, according to the report. The company said the hacker used AI as both a consultant and an active operator to execute attacks that would otherwise have been more difficult and time-consuming.

Anthropic, an AI startup founded in 2021, is one of the world’s most valuable private companies and competes with OpenAI and Elon Musk’s xAI, as well as Alphabet Inc.’s Google and Microsoft Corp. It has positioned itself as a reliable, safety-conscious AI firm.

Anthropic also reported malicious use of Claude in North Korea and China. North Korean operatives have been using Claude to obtain and maintain fraudulent remote, high-paying jobs at technology companies, a scheme intended to fund the regime’s weapons program. These actors appear to be completely dependent on AI to perform basic technical tasks such as writing code, according to Anthropic.

A Chinese threat actor used Claude to compromise major Vietnamese telecommunications providers, agricultural management systems and government databases over a nine-month campaign, according to the report.

While Anthropic’s investigation focuses specifically on Claude, the company wrote that the case studies show how cybercriminals are exploiting advanced AI capabilities at all stages of their fraud operations. Anthropic has found that AI-generated attacks can adapt to defensive measures in real time while making strategic and tactical decisions about how to most effectively exploit and monetise targets.

Other AI firms have also reported malicious use of their technology. OpenAI said last year that a group with ties to China posed as one of its users to launch phishing attacks against the AI startup’s employees. OpenAI has also said it has shut down propaganda networks in Russia, China, Iran and Israel that used its technology. – Bloomberg
