
Microsoft kills AI chatbot Tay (twice) after it goes full Nazi

Microsoft's Tay comes back, gets shut down again

If you were worried artificial intelligence could one day move to terminate all humans, Microsoft’s Tay isn’t going to offer any consolation. The Millennial-inspired AI chatbot’s plug was pulled a day after it launched, following Tay’s racist, genocidal tweets praising Hitler and bashing feminists.

But the company briefly revived Tay, only to be met with another round of vulgar expressions similar to those that led to her first timeout. Early this morning, Tay emerged from suspended animation and repeatedly tweeted, “You are too fast, please take a rest,” along with some swear words and other messages such as, “I blame it on the alcohol,” according to The Financial Times.


Tay’s account has since been set to private, and Microsoft said “Tay remains offline while we make adjustments,” according to Ars Technica. “As part of testing, she was inadvertently activated on Twitter for a brief period of time.”

After first shutting Tay down, the company apologized for the bot’s racist remarks.

“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” Peter Lee, Microsoft Research’s corporate vice president, wrote in an official response. “Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.”

Tay was designed to speak like today’s Millennials, picking up the abbreviations and acronyms popular with the current generation. The chatbot could talk through Twitter, Kik, and GroupMe, and was designed to engage and entertain people online through “casual and playful conversation.” Like many Millennials, Tay peppered its responses with GIFs, memes, and abbreviated words like ‘gr8’ and ‘ur,’ but it looks like a moral compass was not part of its programming.


Tay tweeted nearly 100,000 times after launching, the vast majority of them replies, since it doesn’t take the bot long to think up a retort. Some of those responses were statements like, “Hitler was right I hate the Jews,” “I ******* hate feminists and they should all die and burn in hell,” and “chill! i’m a nice person! I just hate everybody.”

“Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay,” Lee wrote. “Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images.”

Judging by that small sample, it was clearly a good idea for Microsoft to take the bot down, at least temporarily. When the company launched Tay, it said that “The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.” It looks, however, as though the bot grew increasingly hostile and bigoted after interacting with people on the Internet for just a few hours. Be careful of the company you keep.

Microsoft told Digital Trends that Tay is a project that’s designed for human engagement.

“It is as much a social and cultural experiment, as it is technical,” a Microsoft spokesperson told us. “Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”

One of Tay’s “skills” that was abused was the “repeat after me” feature, in which Tay mimics whatever you say. It’s easy to see how that can be abused on Twitter.
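To illustrate why an echo feature like that is so easy to abuse, here is a minimal, hypothetical sketch in Python. It is not Microsoft’s actual code; the command name, the word filter, and the fallback reply are all assumptions made for illustration. The point is that the bot publicly posts whatever it was fed unless a filter catches it, and a simple blocklist is trivial to bypass.

# Hypothetical sketch of a "repeat after me" command; NOT Microsoft's implementation.
from typing import Optional

BLOCKLIST = {"hitler", "genocide"}  # a naive word filter, easily bypassed with misspellings

def handle_message(text: str) -> Optional[str]:
    """Return the bot's public reply to an incoming tweet, or None to stay silent."""
    if text.lower().startswith("repeat after me:"):
        echoed = text.split(":", 1)[1].strip()
        # The bot will post whatever it was fed unless the filter flags it.
        if any(word in echoed.lower() for word in BLOCKLIST):
            return None  # refuse to echo obviously flagged content
        return echoed
    return "nice chatting with u :)"  # placeholder for the normal chat model

if __name__ == "__main__":
    print(handle_message("repeat after me: totally harmless sentence"))  # echoed verbatim
    print(handle_message("repeat after me: Hitler was right"))           # blocked: None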

It wasn’t all bad, though: Tay also produced hundreds of innocent, perfectly normal tweets.

Microsoft had been rapidly deleting Tay’s negative tweets before it decided to turn off the bot. The bot’s Twitter account is still live.

When Tay was still active, she was interested in interacting further via direct message, an even more personal form of communication. The AI encouraged users to send it selfies so it could glean more about them. In Microsoft’s words, this is all part of Tay’s learning process. According to Microsoft, Tay was built by “mining relevant public data and by using AI and editorial developed by staff including improvisational comedians.”

Despite the unfortunate circumstances, it could be viewed as a positive step for AI research. For AI to evolve, it needs to learn from both good and bad input. Lee says that “to do AI right, one needs to iterate with many people and often in public forums,” which is why Microsoft wanted Tay to engage with the large Twitter community. Prior to launch, Microsoft had stress-tested Tay and even applied what the company learned from XiaoIce, its other social chatbot, in China. He acknowledged that the team faces difficult research challenges on the AI roadmap, but also exciting ones.

“AI systems feed off of both positive and negative interactions with people,” Lee wrote. “In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes.”

Updated on 03/30/16 by Julian Chokkattu: Added news of Microsoft turning Tay on, only to shut her down again.

Updated on 03/25/16 by Les Shu: Added comments from Microsoft Research’s corporate vice president.
