When an artificially intelligent chatbot that learned to talk from Twitter unsurprisingly turned into a bigot, Taylor Swift reportedly threatened legal action because the bot's name was Tay. Microsoft would probably rather forget the 2016 experiment, in which Twitter trolls exploited the chatbot's programming and taught it to be racist, but a new book shares previously unreleased details showing Microsoft had more to worry about than the bot's offensive remarks.
Tay was a social media chatbot geared toward teens, first launched in China before adopting the three-letter moniker when moving to the U.S. The bot was programmed to learn how to talk from Twitter conversations. In less than a day, its automated tweets had Tay siding with Hitler, promoting genocide, and generally hating everybody. Microsoft quickly took the account down and apologized.
When the bot was reprogrammed, it was relaunched as Zo. But in Tools and Weapons, a new book by Microsoft president Brad Smith and Carol Ann Browne, Microsoft's communications director, the executives finally reveal another reason the name was dropped: another Tay, Taylor Swift.
According to The Guardian, the singer's lawyer threatened legal action over the chatbot's name before the bot broke bad, claiming the name violated both federal and state laws. Rather than get into a legal battle with the singer, Smith writes, the company began considering a new name.
When the chatbot began sending out racist tweets, the singer had even more reason for concern. Microsoft pulled the bot, and when it reappeared, Tay was no longer TayTweets but Zo, complete with new programming that prevented the bot from broaching politics, race, and religion. The revised chatbot, available on Twitter as well as Messenger and other platforms, was later criticized for sounding too much like a stereotypical teenage girl.
The shortened moniker isn't the only name the singer has laid claim to: she also trademarked 1989, the year she was born and the title of one of her albums, and has registered several lines from her lyrics.
Released on September 10, the book goes on to describe how Smith saw the incident as a lesson in A.I. safeguards.