OpenAI outlines AI safety plan, allowing board to reverse decisions


FILE PHOTO: OpenAI logo is seen in this illustration taken March 31, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

Artificial intelligence company OpenAI laid out a framework to address safety in its most advanced models, including allowing the board to reverse safety decisions, according to a plan published on its website Monday.

Microsoft-backed OpenAI will only deploy its latest technology if it is deemed safe in specific areas such as cybersecurity and nuclear threats. The company is also creating an advisory group to review safety reports and send them to its executives and board. While executives will make the decisions, the board can reverse them.
