Yesterday, Microsoft launched Tay, a chatbot designed to mimic the speech and behavior of a teen girl. They opened it up to Twitter, where it would:
1) post things like:

Quote:can i just say that im stoked to meet u? humans are super cool
(this is not problematic)

and
2) learn from the tweets it received.
(this is tremendously problematic)
Today, less than 24 hours later, Microsoft has revoked Tay's Twitter privileges and grounded her for saying things like:
[embedded tweet screenshots not recoverable]
So... yeah.
Sources:
http://money.cnn.com/2016/03/23/technolo...index.html
http://www.telegraph.co.uk/technology/20...robot-wit/
http://time.com/4270684/microsoft-tay-chatbot-racism/
How will we know, when the morning comes, we are still human? - 2D
Don't worry, my friend. If this be the end, then so shall it be.