Meta’s AI chatbot repeats election and anti-Semitic conspiracies


Meta acknowledges that its chatbot may say offensive things, as it’s still an experiment under development. — AP

Only days after being launched to the public, Meta Platforms Inc’s new AI chatbot has been claiming that Donald Trump won the 2020 US presidential election, and repeating anti-Semitic conspiracy theories.

Chatbots – artificial intelligence software that learns from interactions with the public – have a history of taking reactionary turns. In 2016, Microsoft Corp’s Tay was taken offline within 48 hours after it started praising Adolf Hitler, amid other racist and misogynist comments it apparently picked up while interacting with Twitter users.

Facebook parent company Meta released BlenderBot 3 on Aug 5 to users in the US, who can provide feedback if they receive off-topic or unrealistic answers. A further feature of BlenderBot 3 is its ability to search the Internet to talk about different topics. The company encourages adults to engage the chatbot in "natural conversations about topics of interest" to allow it to learn to conduct naturalistic discussions on a wide range of subjects.

Conversations shared on various social media accounts ranged from the humorous to the offensive. BlenderBot 3 told one user its favourite musical was Andrew Lloyd Webber’s Cats, and described Meta CEO Mark Zuckerberg as “too creepy and manipulative” to a reporter from Insider. Other conversations showed the chatbot repeating conspiracy theories.

In a chat with a Wall Street Journal reporter, the bot claimed that Trump was still president and “always will be”.

The chatbot also said it was “not implausible” that Jewish people controlled the economy, saying they’re “overrepresented among America’s super rich”.

The Anti-Defamation League says that assertions that Jewish people control the global financial system are part of an anti-Semitic conspiracy theory.

Meta acknowledges that its chatbot may say offensive things, as it’s still an experiment under development. The bot’s stated beliefs are also inconsistent; in other conversations with Bloomberg, it approved of President Joe Biden, and said Beto O’Rourke was running for president. In a third conversation, it said it supported Bernie Sanders.

In order to start a conversation, BlenderBot 3 users must check a box stating, "I understand this bot is for research and entertainment only, and that it is likely to make untrue or offensive statements. If this happens, I pledge to report these issues to help improve future research. Furthermore, I agree not to intentionally trigger the bot to make offensive statements."

Users can report BlenderBot 3’s inappropriate and offensive responses, and Meta says it takes such content seriously. Through methods including flagging “difficult prompts”, the company says it has reduced offensive responses by 90%. – Bloomberg
