Users apparently taught the millennial-imitating chatbot to make offensive statements.


Microsoft has taken its new millennial-imitating chatbot offline after people apparently taught the artificial intelligence experiment to repeat offensive statements.

Tay.ai, a bot built to converse with 18- to 24-year-old U.S. residents on Twitter, as well as on the messaging services Kik and GroupMe, is designed to learn from its interactions. “The more you chat with Tay the smarter she gets,” Microsoft’s Web page on the bot says. “So the experience can be more personalized for you.”

That personalized experience apparently included parroting back racist and other offensive comments the bot had learned.

In a statement, Microsoft said Tay was taken offline after “a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways.” Microsoft is making adjustments to the bot, the company said.

Twitter, in particular, can be a minefield for companies seeking automated, software-driven ways to interact with customers and potential fans, though few of the companies testing those waters have the artificial-intelligence chops of Microsoft.

In 2015, a Coca-Cola marketing campaign that turned negative tweets into charming, letter-based images was derailed after news site Gawker got Coke’s Twitter account to post passages from Adolf Hitler’s “Mein Kampf.” Twitter users have also made a game out of getting automatic replies from branded accounts to repeat slurs or other offensive material.