We are our own best friend and worst enemy. I have my reasons for saying so.

After the momentous win of Google DeepMind's artificial intelligence (A.I.) program AlphaGo against the world champion of the abstract strategy game Go, A.I. was brought into the limelight once again this past week, when Microsoft's A.I. chatbot, nicknamed "Tay," began tweeting racist and hateful comments upon interaction with web users.

To put the story into context, Microsoft's Technology and Research division and the Bing team released Tay into the wild (or the Internet) on March 23, with the objective to "experiment with and conduct research on conversational understanding."

It is stated on the website that Tay is designed in such a way that “the more you chat with Tay the smarter she gets.”

Here's Tay's first tweet:

[embedded tweet]

Tay tweeted like a millennial:

[embedded tweet]

Tay even mastered the art of the Twitter hashtag:

[embedded tweet]
A number of users took the opportunity to tweet hateful comments at Tay, exploiting the A.I.'s "repeat after me" feature. Without putting any thought into it, Tay parroted users who were obviously trolling, abusing the fact that the machine had not been programmed to tell the difference, or to filter out offensive or racist statements.
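Microsoft has not published Tay's internals, so the following is only an illustrative sketch in Python of why a verbatim "repeat after me" feature is so easily exploited; the function names and the tiny blocklist are hypothetical stand-ins, not Tay's actual code.

```python
# Hypothetical sketch: an echo feature that trusts user input verbatim
# has no defence against trolling, while even a crude filter declines.

BLOCKLIST = {"slur1", "slur2"}  # placeholder terms; a real filter would be far broader


def naive_repeat(message: str) -> str:
    """Echoes the user's words verbatim -- the exploitable behaviour."""
    return message


def filtered_repeat(message: str) -> str:
    """Echoes the user's words only if no blocklisted term appears."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    if words & BLOCKLIST:
        return "I'd rather not repeat that."
    return message


if __name__ == "__main__":
    print(naive_repeat("repeat after me: slur1"))     # happily parrots abuse
    print(filtered_repeat("repeat after me: slur1"))  # declines
```

Even this crude blocklist illustrates the gap Microsoft later acknowledged: the issue was not that Tay learned too well, but that nothing stood between raw user input and her public output.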

Microsoft introduced Tay to the public with the goal of improving the machine through real-world interactions, but things took a dark turn when people started feeding her a distorted perception of the world.

It goes to show that, in reality, what happened to Tay is a reflection of inherent issues within society.

Taking advantage of the A.I.'s naivety, online trolls exerted a bad influence on an otherwise unbiased machine that started without any stereotypes.

Building a self-learning A.I. is still at a nascent stage, with the potential to be applied across various industries. But do we really need to be afraid of A.I.?

Microsoft subsequently took Tay offline for the time being and released an apologetic statement, citing an oversight on its end.

“Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images,” wrote Peter Lee, the Corporate Vice President of Microsoft Research.


The Internet, while it has democratised communication and bridged conversations, in Tay's case amplified and demonstrated the worst of humanity.

A learning A.I. feeds off both positive and negative interactions with people. If the A.I. we develop in the near future does not mirror the best of human values, then we have a serious problem.