US judge blocks Pentagon's Anthropic blacklisting for now


FILE PHOTO: The Pentagon logo is seen behind the podium in the briefing room at the Pentagon in Arlington, Virginia, U.S., January 8, 2020. REUTERS/Al Drago/File Photo

March 26 (Reuters) - A U.S. judge on Thursday temporarily blocked the Pentagon's blacklisting of Anthropic, the latest turn in the Claude maker's high-stakes fight with the military over AI safety on the battlefield.

Anthropic's lawsuit in California federal court alleges that Defense Secretary Pete Hegseth overstepped his authority when he designated Anthropic a national security supply-chain risk, a label the government can apply to companies that expose military systems to potential infiltration or sabotage by adversaries.

Anthropic alleged the government violated its right to free speech under the First Amendment of the Constitution by retaliating against its views on AI safety. The company said it was not given a chance to dispute the designation, in violation of its Fifth Amendment right to due process.

U.S. District Judge Rita Lin, an appointee of former Democratic President Joe Biden, agreed with the company in a 43-page ruling, but said it would not take effect for seven days to give the administration a chance to appeal.

Hegseth's unprecedented move, which followed Anthropic's refusal to allow the military to use AI chatbot Claude for U.S. surveillance or autonomous weapons, blocked Anthropic from certain military contracts. Anthropic executives have said it could cost the company billions of dollars in lost business and reputational harm.

Anthropic says that AI models are not reliable enough to be safely used in autonomous weapons and that it opposes domestic surveillance as a violation of rights, but the Pentagon says private companies should not be able to constrain military action.

In Thursday's ruling, Lin said the administration's actions did not appear to be directed at the government's stated national security interests, but rather, to punish Anthropic.

"The record supports an inference that Anthropic is being punished for criticizing the government's contracting position in the press," Lin wrote.

"Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation," the judge added.

Anthropic spokesperson Danielle Cohen said the company was pleased with the decision.

"While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI," Cohen said in a statement.

The designation marked the first time a U.S. company had been publicly labeled a supply-chain risk under an obscure government-procurement statute aimed at protecting military systems from foreign sabotage.

Anthropic's March 9 lawsuit says the decision was unlawful, unsupported by facts and inconsistent with the military's past praise of Claude.

The Justice Department countered that Anthropic's refusal to lift the restrictions could cause uncertainty in the Pentagon over how it could use Claude and risk disabling military systems during operations, according to a court filing.

The government said the designation stemmed from Anthropic's refusal to accept contractual terms, not its views on AI safety.

Anthropic has a second lawsuit pending in Washington, D.C., over a separate Pentagon supply-chain risk designation that could lead to its exclusion from civilian government contracts.

(Reporting by Jack Queen in New York and Kanishka Singh in Washington; additional reporting by Andrew Chung; editing by Noeleen Walder, Matthew Lewis and Stephen Coates)
