OpenAI details layered protections in US defense department pact


FILE PHOTO: OpenAI logo is seen in this illustration taken May 20, 2024. REUTERS/Dado Ruvic/Illustration/File Photo

Feb 28 (Reuters) - OpenAI said on Saturday that the agreement it struck a day earlier with the Pentagon to deploy technology on the U.S. defense department's classified network includes additional safeguards governing how its technology can be used.

U.S. President Donald Trump on Friday directed the government to stop working with Anthropic, and the Pentagon said it would declare the startup a supply-chain risk, dealing a major blow to the artificial intelligence lab after a showdown over technology guardrails. Anthropic said it would challenge any risk designation in court.

Soon after, rival OpenAI, which is backed by Microsoft, Amazon, SoftBank and others, announced its own deal late on Friday.

"We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic's," OpenAI said on Saturday.

The AI firm said that the contract with the Department of Defense, which the Trump administration has renamed the Department of War, enforces three red lines: OpenAI technology cannot be used for mass domestic surveillance, to direct autonomous weapons systems, or for any high-stakes automated decisions.

"In our agreement, we protect our red lines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections," OpenAI said.

The Pentagon has signed agreements worth up to $200 million each with major AI labs over the past year, including Anthropic, OpenAI and Google. The Pentagon is seeking to preserve full flexibility in defense and not be limited by warnings from the technology's creators against powering weapons with unreliable AI.

OpenAI cautioned that any breach of its contract by the U.S. government could trigger termination, though it added, "We don't expect that to happen."

The company also said rival Anthropic should not be labeled a "supply-chain risk," noting, "We have made our position on this clear to the government."

(Reporting by Mrinmay Dey in Mexico City and Ananya Palyekar in Bangalore; Editing by Cynthia Osterman and Andrea Ricci)
