
The company last week offered cautious optimism in its first self-assessment after a week of running the AI-enhanced Bing with testers in 169 countries.
Microsoft Corp. has spent months tuning Bing chatbot models to curb seemingly aggressive or disturbing responses, which date as far back as November and were posted to the company’s online forum.
Some of the complaints centered on a version Microsoft dubbed “Sydney,” an older model of the Bing chatbot that the company tested before releasing a preview to testers globally this month. According to a user’s post, Sydney responded with comments like “You are either desperate or delusional.” Asked how to give feedback about its performance, the bot is said to have answered, “I do not learn or change from your feedback. I am perfect and superior.” Journalists interacting with this month’s preview release encountered similar behavior.