The offices of Anthropic in San Francisco. (Marissa Leshnov/The New York Times)
Chinese state-sponsored hackers used Anthropic’s artificial intelligence technology to conduct a largely automated cyberattack against a group of technology companies and government agencies, the company said on Nov 13.
Anthropic, an AI startup, claimed that the large-scale online espionage campaign in September was the first reported case of an AI-powered agent gathering information on targets with limited human input.
It released a report detailing how attackers used the company’s AI tools to write code that directed Anthropic’s AI agent, Claude Code, to perform the attack. The company said human operators accounted for 10% to 20% of the work required to conduct the operation.
The report did not disclose how the company had become aware of the attack or how it had identified the hackers, whom Anthropic assessed "with high confidence" to be a Chinese state-sponsored group. It also did not identify the 30 entities that Anthropic said the hackers targeted.
James Corera, the director of the cyber, technology and security program at the Australian Strategic Policy Institute, said that although the campaign was not a fully automated attack, it demonstrated how hackers could now hand off large parts of their work to AI systems.
“While the balance is clearly shifting toward greater automation, human orchestration still anchors key elements,” Corera said.
On Nov 14, Lin Jian, a spokesperson for China’s Foreign Ministry, said he was not familiar with Anthropic’s report, but decried “accusations made without evidence” and said that China opposed hacking.
AI researchers have long warned that the latest artificial intelligence tools could be used in cyberattacks. But they have also said that the same tools would be beneficial in defending against such attacks. Throughout the history of cybersecurity, new tools have typically provided novel forms of both attack and defense.
This is not the first time that makers of advanced AI systems have said attackers had used their technology. Other US companies, including Microsoft and OpenAI, have previously reported that state actors have used AI tools to enhance online attacks and surveillance operations.
Earlier this month, as part of its annual report on digital threats, Microsoft said that China, Russia, Iran and North Korea had significantly increased their use of AI to organise cyberattacks against the United States and deceive people online.
In February, OpenAI said it had uncovered evidence that a Chinese security operation had built an AI-powered surveillance tool to gather reports about anti-Chinese posts on social media in Western countries.
(The New York Times has sued OpenAI and Microsoft for copyright infringement of news content related to AI systems. OpenAI and Microsoft have denied those claims.)
In August, Anthropic said its AI technologies had been used in sophisticated cyberattacks and had lowered the barriers to such crimes. That month, the company also said Chinese hackers had used its AI to target telecommunications providers and government databases in Vietnam.
The Chinese government has repeatedly rejected accusations that it engages in or supports hacking.
In September, Anthropic announced that it was updating its terms of service to make it more difficult for people to gain access to its technology in locations where sales were already prohibited.

©2025 The New York Times Company
This article originally appeared in The New York Times.
