ChatGPT maker OpenAI says it’s working to reduce bias, bad behaviour


OpenAI, the artificial-intelligence research company behind the viral ChatGPT chatbot, said it is working to reduce biases in the system and will allow users to customise its behaviour following a spate of reports about inappropriate interactions and errors in its results.

“We are investing in research and engineering to reduce both glaring and subtle biases in how ChatGPT responds to different inputs,” the company said in a blog post. “In some cases ChatGPT currently refuses outputs that it shouldn’t, and in some cases, it doesn’t refuse when it should.”

OpenAI is responding to reports of biases, inaccuracies and inappropriate behaviour by ChatGPT itself, and, more broadly, to criticism of new chat-based search products now being tested by Microsoft Corp and Alphabet Inc’s Google. In a blog post on Wednesday, Microsoft detailed what it has learned about the limitations of its new Bing chatbot, which is built on OpenAI technology, and Google has asked workers to spend time manually improving the answers of its Bard system, CNBC reported.

San Francisco-based OpenAI also said it’s developing an update to ChatGPT that will allow limited customisation by each user to suit their tastes, styles and views. In the US, right-wing commentators have been citing examples of what they see as pernicious liberalism hard-coded into the system, fuelling a backlash against what the online right has dubbed “WokeGPT”.

“We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society,” OpenAI wrote on Thursday. “This will mean allowing system outputs that other people (ourselves included) may strongly disagree with. Striking the right balance here will be challenging – taking customisation to the extreme would risk enabling malicious uses of our technology and sycophantic AIs that mindlessly amplify people’s existing beliefs. There will therefore always be some bounds on system behaviour.” – Bloomberg
