A file photo of AI security cameras with facial recognition technology. The question of how to make AI ethical is front and center in the public debate, but what is less discussed are the ways in which machines might make humans themselves less ethical. — AFP/Getty Images/TNS
It started out as a social experiment, but it quickly came to a bitter end. Microsoft’s chatbot Tay had been trained to have “casual and playful conversations” on Twitter, but once it was deployed, it took only 16 hours before Tay launched into tirades that included racist and misogynistic tweets.
As it turned out, Tay was mostly parroting the verbal abuse that humans were directing at it – yet the outrage that followed centered on the bad influence Tay had on people who could see its hateful tweets, rather than on the people whose hateful tweets were a bad influence on Tay.
