Microsoft says it's making 'adjustments' to Tay chatbot after Internet 'abuse'

March 24, 2016
It sounds like Microsoft’s Tay chatbot is getting a time-out, as Microsoft instructs her on how to talk with strangers on the Internet. Because, as the company quickly learned, the citizens of the Internet can’t be trusted with that task.

In a statement released Thursday, Microsoft said that a “coordinated effort” by Internet users had turned the Tay chatbot into a tool of “abuse.” It was a clear reference to a series of racist and otherwise abusive tweets that the Tay chatbot issued within a day of debuting on Twitter. Wednesday morning, Tay was a novel experiment in AI that would learn natural language through social engagement. By Wednesday evening, Tay was reflecting the more unsavory aspects of life online.

Some of the resulting tweets were merely odd (and racist); others were simply controversial.

It didn’t take users long to discover that the Tay chatbot supported a “repeat after me” command, which they promptly took advantage of. The result was a series of tweets in which Tay parroted whatever users told her to say. In at least some of those tweets, Tay inexplicably appended the “repeat after me” phrase to the parroted content, as if inviting users to repeat what the chatbot had said. Naturally, those tweets were recirculated around the Internet.
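Microsoft has not published how Tay’s commands were implemented, so the following is only an illustrative sketch in Python: a naive, unconditional echo command will repeat anything it is handed, which is exactly the behavior trolls exploited, while even a crude term filter placed in front of the echo refuses the most obvious abuse. All names here (handle_message_unsafe, handle_message_filtered, BLOCKED_TERMS) are hypothetical.

```python
from typing import Optional

# Placeholder blocklist; a real moderation filter would be far larger and smarter.
BLOCKED_TERMS = {"offensive", "slur"}

TRIGGER = "repeat after me"

def handle_message_unsafe(text: str) -> Optional[str]:
    """Naive echo: repeats whatever follows the trigger phrase, verbatim."""
    if text.lower().startswith(TRIGGER):
        # Anything a user types after the trigger comes straight back out.
        return text[len(TRIGGER):].strip()
    return None

def handle_message_filtered(text: str) -> Optional[str]:
    """Same echo, but refuses to repeat content containing blocked terms."""
    reply = handle_message_unsafe(text)
    if reply is None:
        return None
    if any(term in reply.lower() for term in BLOCKED_TERMS):
        return "I'd rather not repeat that."
    return reply

if __name__ == "__main__":
    print(handle_message_unsafe("repeat after me something offensive"))    # echoed verbatim
    print(handle_message_filtered("repeat after me something offensive"))  # blocked by the filter
```

A static blocklist would not have saved Tay on its own, but the sketch shows why an unfiltered echo command is a trivially abusable feature.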

As a result, Microsoft said Tay would be offline while the company made “adjustments.” “The AI chatbot Tay is a machine learning project, designed for human engagement,” a Microsoft spokeswoman said in a statement. “It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.” Microsoft gave no details on the adjustments it would make to the algorithm.

Why this matters: For decades, we’ve lived with search engines that have served as our digital servants, fetching information for us from the Internet. Only recently have those engines evolved into assistants, which can communicate on a more personal level. Microsoft clearly wants to go further, allowing its own assistant, Cortana, to interact using what computer scientists call “natural language.” Unfortunately, Tay was like a child wandering into some very dark corners of the Internet. But there are still signs that Microsoft could turn this into a positive.

In response, Microsoft and Twitter began removing some of the more offensive tweets. Tay also signed off on Wednesday night and hasn’t returned since.

The Tay.ai Web page was also drastically altered to eliminate most of the information about the chatbot, including the ways in which users could interact with it. 

The consensus among many Internet users is that the Tay debacle was simply an inevitable consequence of the Internet and the tendency of some users to troll, or bait, others. The fact that Tay was a chatbot, and a young, apparently female one at that, may have added fuel to the fire.


We already know the Internet houses both the best and worst of humanity. Social networks like Twitter roil with insight, bias, knowledge, ignorance, compassion, and outright hatred. Microsoft’s Tay jumped into this mess with both feet.

But search for the white nationalist site Stormfront, and both Bing and Google will show you results. They make no judgments. Few chatbots exist on the Internet, period, and nothing with the social clout of Bing or Google has emerged to help shape a conversation about what constitutes socially acceptable discourse. That may not even be a conversation we want chatbots to engage in.

It’s not clear if or when developers began influencing Tay’s interactions, but on occasion her responses showed somewhat positive signs of how she could be retooled to deal with such vitriol in the future.

(www.pcworld.com)

Mark Hachman
