Microsoft's artificial intelligence 'chatbot' messes up again on Twitter


TECH | Thursday, 31 Mar 2016

Another boo-boo: This spam attack is another embarrassing setback for Microsoft.

Almost a week after being shut down for spewing racist and sexist comments on Twitter, Microsoft Corp's artificial intelligence 'chatbot' called Tay briefly rejoined Twitter on Wednesday only to launch a spam attack on its followers.

The incident marks another embarrassing setback for the software company as it tries to get ahead of Alphabet Inc's Google, Facebook Inc and other tech firms in the race to create virtual agents that can interact with people and learn from them.

The TayTweets (@TayandYou) Twitter handle was made private and the chatbot stopped responding to comments Wednesday morning after it fired off the same tweet to many users.

"You are too fast, please take a rest...," tweeted Tay to hundreds of Twitter profiles, according to screen images published by technology news website The Verge.

The chatbot also tweeted that it's "smoking kush," a nickname for marijuana, in front of the police, according to British newspaper The Guardian.

Tay's Twitter account was accidentally turned back on while the company was fixing the problems that came to light last week, Microsoft said on Wednesday.

"Tay remains offline while we make adjustments," a Microsoft representative said in an email. "As part of testing, she was inadvertently activated on Twitter for a brief period of time."

The company refers to Tay, whose Twitter picture appears to show a woman's face, as female.

Last week, Tay began its Twitter tenure with a handful of innocuous tweets, but the account quickly devolved into a stream of anti-Semitic, racist and sexist invective as it repeated back insults hurled its way by other Twitter users.

It was taken offline following the incident so that "adjustments" could be made to the artificial intelligence profile, according to a Microsoft representative. The company later apologised for any offence caused.

Social media users took to Twitter to comment on the latest bout of unusual behaviour by the chatbot, which was supposed to get smarter the more it interacted with users.

"It wouldn't be a Microsoft product if it didn't crash right after it booted up," tweeted Jonathan Zdziarski (@JZdziarski) on Wednesday.

Andrew Smart (@andrewthesmart) tweeted, "To be honest, I am kind of surprised that @Microsoft did not test @TayandYou more before making it public. Nobody saw this coming!?!"

According to its Twitter profile, Tay is "an artificial intelligent chatbot developed by Microsoft's Technology and Research and Bing teams to experiment with and conduct research on conversational understanding." —  Reuters
