Microsoft’s Experiment With AI Chatbot Backfires; Company Apologises For Its Racist Tweets


26 March, 2016, USA: Two days ago, Microsoft was firing on all cylinders to promote its newly launched chatbot, ‘Tay’. The company never imagined that its experiment would soon backfire, with ‘Tay’ turning into a racist tweet machine.

Microsoft developed a chatbot that talks like a teenager and learns from its conversations. But the software giant didn’t anticipate that within two days the experiment would come back at it like a boomerang. Twitter users took Tay for a ride and taught the chatbot to be racist, homophobic, and all-around bigoted.

Now, the company has taken the chatbot offline and has apologised publicly for Tay’s obnoxious behaviour. Microsoft wrote in its blog, “We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.”

The company also stated that Tay had gone through extensive testing during its initial development, and that developers had implemented numerous filters to ensure the bot produced refined content. But when the company put the AI bot on Twitter, it was exposed to a mass audience that exploited Tay to the fullest. Microsoft believes that exposure to a global conversation may have triggered Tay to go wrong, since one of its core features is learning from its conversations.
