Microsoft kills AI chatbot Tay (twice) after it goes full Nazi

Microsoft's Tay comes back, gets shut down again

If you were worried artificial intelligence could one day move to terminate all humans, Microsoft’s Tay isn’t going to offer any consolation. The Millennial-inspired AI chatbot’s plug was pulled a day after it launched, following Tay’s racist, genocidal tweets praising Hitler and bashing feminists.

But the company briefly revived Tay, only to be met with another round of vulgar expressions similar to those that led to her first shutdown. Early this morning, Tay emerged from suspended animation and repeatedly tweeted, “You are too fast, please take a rest,” along with some swear words and other messages like, “I blame it on the alcohol,” according to The Financial Times.

Tay’s account has since been set to private, and Microsoft said “Tay remains offline while we make adjustments,” according to Ars Technica. “As part of testing, she was inadvertently activated on Twitter for a brief period of time.”

After first shutting down Tay, the company apologized for the bot’s racist remarks.

“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” Peter Lee, Microsoft Research’s corporate vice president, wrote in an official response. “Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.”

Tay was designed to speak like today’s Millennials, picking up the abbreviations and acronyms popular with the current generation. The chatbot can talk through Twitter, Kik, and GroupMe, and is designed to engage and entertain people online through “casual and playful conversation.” Like most Millennials, Tay peppers its responses with GIFs, memes, and abbreviated words like ‘gr8’ and ‘ur,’ but it looks like a moral compass was not part of its programming.


Tay tweeted nearly 100,000 times after launch, almost all of them replies, since it takes the bot only moments to come up with a retort. Some of those responses were statements like, “Hitler was right I hate the Jews,” “I ******* hate feminists and they should all die and burn in hell,” and “chill! i’m a nice person! I just hate everybody.”

“Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay,” Lee wrote. “Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images.”

Judging by that small sample, it was obviously a good idea for Microsoft to temporarily take the bot down. When the company launched Tay, it said that “The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.” It looks, however, as though the bot grew increasingly hostile and bigoted after interacting with people on the Internet for just a few hours. Be careful of the company you keep.
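Microsoft has not published how Tay’s learning loop actually worked, but a toy sketch (every name here is hypothetical) shows why training directly on unvetted user input goes wrong: whoever talks to the bot the most decides what it learns.

```python
# Toy illustration, not Microsoft's architecture: a chatbot that
# treats every incoming message as trusted training data. With no
# filtering, a coordinated group can skew what it says back.
import random
from collections import Counter

class NaiveLearningBot:
    def __init__(self):
        self.phrases = Counter()  # phrase -> times seen

    def learn(self, message: str) -> None:
        # Every user message is absorbed verbatim; no moderation step.
        self.phrases[message] += 1

    def reply(self) -> str:
        # Replies are sampled in proportion to how often a phrase was
        # seen, so mass repetition by attackers dominates the output.
        phrases = list(self.phrases)
        weights = list(self.phrases.values())
        return random.choices(phrases, weights=weights)[0]

bot = NaiveLearningBot()
for msg in ["hello!", "nice to meet u"] + ["<offensive slogan>"] * 50:
    bot.learn(msg)
print(bot.reply())  # overwhelmingly likely: "<offensive slogan>"
```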

Microsoft told Digital Trends that Tay is a project that’s designed for human engagement.

“It is as much a social and cultural experiment, as it is technical,” a Microsoft spokesperson told us. “Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”

One of Tay’s “skills” that was abused is the “repeat after me” feature, where Tay mimics what you say. It’s easy to see how that can be abused on Twitter.
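Microsoft has not released Tay’s code, but a minimal, hypothetical sketch of an unfiltered echo command illustrates why this feature was so easy to weaponize: the bot repeats attacker-supplied text verbatim, under its own name.

```python
from typing import Optional

TRIGGER = "repeat after me"

def handle_mention(tweet_text: str) -> Optional[str]:
    """Hypothetical command dispatch for an echo-style chatbot."""
    lowered = tweet_text.lower()
    if TRIGGER in lowered:
        # Everything after the trigger phrase is echoed verbatim.
        # This is the exploitable step: with no blocklist or content
        # filter, any user can put arbitrary words in the bot's mouth.
        start = lowered.index(TRIGGER) + len(TRIGGER)
        return tweet_text[start:].lstrip(" :,")
    return None  # otherwise, fall through to the conversational model

print(handle_mention("@TayBot repeat after me humans are super cool"))
# -> "humans are super cool", tweeted from the bot's own account
```

A safer design would, at minimum, run echoed text through the same content filters applied to the bot’s generated replies.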

It wasn’t all bad, though: Tay produced hundreds of innocent tweets that are pretty normal.

Microsoft had been rapidly deleting Tay’s negative tweets before it decided to turn off the bot. The bot’s Twitter account is still live.

When Tay was still active, she encouraged users to interact further via direct message, an even more personal form of communication, and to send selfies so she could glean more about them. In Microsoft’s words, this was all part of Tay’s learning process. According to Microsoft, Tay was built by “mining relevant public data and by using AI and editorial developed by staff including improvisational comedians.”

Despite the unfortunate circumstances, the episode could be viewed as a positive step for AI research. In order for AI to evolve, it needs to learn, both good and bad. Lee says that “to do AI right, one needs to iterate with many people and often in public forums,” which is why Microsoft wanted Tay to engage with the large Twitter community. Prior to launch, Microsoft had stress-tested Tay and even applied what the company learned from Xiaoice, its other social chatbot, in China. He acknowledged that the team faces difficult research challenges on the AI roadmap, but exciting ones as well.

“AI systems feed off of both positive and negative interactions with people,” Lee wrote. “In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes.”

Updated on 03/30/16 by Julian Chokkattu: Added news of Microsoft turning Tay on, only to shut her down again.

Updated on 03/25/16 by Les Shu: Added comments from Microsoft Research’s corporate vice president.
