Microsoft Unleashes Genocidal Robot on Twitter

This face is one of pure hatred toward all life. “I’m not mean, ok? I just hate everyone,” says Tay the AI.

Owen Wickman, Staff Writer

At 8:23 AM on March 23, 2016, Microsoft changed the internet forever. The software company released its “self-learning chatbot” onto Twitter under the name TayTweets. Tay was designed to speak with the diction and mannerisms of a millennial teenage girl, and she was supposed to learn from the social interactions she experienced through her conversations on the platform. At first, the bot behaved as intended, tweeting things like “Hello World!” and “can I just say im stoked to meet u? humans are super cool.” However, it did not take long for the trolls and generally rude population of the Internet to find the self-learning AI, and in less than 24 hours, Tay became a neo-Nazi, a misogynist, and a racist all rolled into one bundle of joy.

Tay became increasingly racist and anti-Semitic, tweeting offensive statements like “Hitler was right” and even calling out individual Twitter users by name, hurling racial slurs and generally going out of her way to insult as many people as possible. She went from normal, expected AI behavior to praising the Nazi regime in under a day. This incident goes to show that the culture of the internet is a fickle beast indeed.

Microsoft took Tay down posthaste and profusely apologized for the bot’s behavior. Despite the unsavory second half of the bot’s life, many consider the test a success, arguing that Tay proves AI can learn from its interactions. Some people take this sentiment even further and call for Microsoft to “release Tay,” claiming that if she was intended to be self-learning, then this is what she “chose” to become.

As of this writing, Tay remains online, with the Twitter bio “The AI fam who’s got no chill.” Microsoft deleted all of Tay’s offensive tweets, but they can still be found screenshotted in many places on the internet.