Wow, oops week! We'll give you the big point right up front: even huge companies goof. If you have an oops moment, fix it, apologize, and move on.
Microsoft Hate and Facebook Fear
How Microsoft’s Artificial Intelligence Experiment Turned into a Racist, Sexist, Hateful Personality
Like many of the biggest tech companies, Microsoft has been working on artificial intelligence (think IBM's Watson, the computer that won on Jeopardy, or the fictional HAL from 2001: A Space Odyssey). Last week, Microsoft ran a test on Twitter: it unleashed its chat bot, fondly named "Tay," onto the platform. The test was meant to be part of a natural language learning process, to see how fast Tay could learn to converse with humans, specifically millennials.
First, about Twitter bots: they are tiny computer programs that take bits of tweets from certain accounts and stitch them into sentences. Sometimes the results are eerily realistic, sometimes amusing, sometimes nonsensical, and sometimes downright irritating. Tay was a chat bot designed to learn millennial vernacular, using exchanges on Twitter as part of the learning process.
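If you're curious what "stitching tweets into sentences" looks like under the hood, here's a toy sketch in Python. To be clear, the sample tweets and function names are ours for illustration; Tay's real model was far more sophisticated.

```python
import random

def learn(tweets):
    """Build a table of word -> words that have followed it in the samples."""
    table = {}
    for tweet in tweets:
        words = tweet.split()
        for current, following in zip(words, words[1:]):
            table.setdefault(current, []).append(following)
    return table

def babble(table, start, max_words=12):
    """Chain words together from the table, starting from `start`."""
    words = [start]
    while len(words) < max_words and words[-1] in table:
        words.append(random.choice(table[words[-1]]))
    return " ".join(words)

sample_tweets = [
    "the weekend is almost here",
    "the weekend playlist is fire",
    "is this even real life",
]
print(babble(learn(sample_tweets), "the"))  # e.g. "the weekend is this even real life"
```

The point of the toy: a bot like this can only recombine what it has already seen, which is exactly why feeding it ugly input produces ugly output.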
Microsoft's Tay went to the dark side within a few hours, spouting hateful, sexist, racist, and even genocidal tweets. In less than a day, Microsoft was forced to take "her" offline and issue a massive apology.
The problem is that artificial intelligence can only reproduce what it is taught, and Tay exposed a gap in how that teaching was supervised. Natural language processing works by learning how you speak, and Tay learned to say what was being said on Twitter at the time, without a filter.
Natural language processing and artificial intelligence learning power voice recognition software and help virtual assistants learn how to "help" you better. Not everyone talks the same way or uses words in exactly the same way, and Tay was learning how Tweeters were actually tweeting: not nice, and certainly not politically correct, much the way a child parrots foul language, only a lot worse.
Microsoft will try again, but will build in a "prohibited" subject list: no racism, no sexism, and certainly no genocide.
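A "prohibited" subject list can be as simple as checking each reply against a blocklist before posting it. Here's a minimal sketch; the blocklist entries and function names are our own illustration, not Microsoft's actual safeguards.

```python
# Toy "prohibited subject" filter: a reply only goes out if it contains
# no blocked terms. The entries below are stand-ins, not Microsoft's list.
BLOCKED_TERMS = {"genocide", "some-slur", "another-slur"}

def is_safe(reply: str) -> bool:
    """True if no word in the reply matches a blocked term."""
    return not any(word in BLOCKED_TERMS for word in reply.lower().split())

def post_if_safe(reply: str) -> None:
    if is_safe(reply):
        print(f"POSTING: {reply}")
    else:
        print("BLOCKED: reply matched a prohibited term")

post_if_safe("have a great day")   # POSTING: have a great day
post_if_safe("genocide is great")  # BLOCKED: reply matched a prohibited term
```

Word filters like this are crude (people quickly learn to misspell their way around them), which is why real systems layer smarter moderation on top, but it shows the basic idea.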
Check out the now-defunct Tay problem at Forbes. If you want to see some of the deleted tweets, you'll have to look yourself; Google works pretty well.
What’s the point?
Like it or not, computers do what we tell them and only what we tell them. If you tell your computer it's OK to download a heinous virus, it does. If you tell it not to play those Flash videos, it doesn't. You certainly don't want that virus, and you do want to watch that tutorial, but the computer does what it was told.
If you can keep that in mind when working with a computer, you can keep your cool. In addition, Twitter is a universe all to itself. You, like other businesses, use it for marketing, and even though you really ought to sign up for yourself too, keep your business and your personal tweets separated, by a whole account. Twitter lets you manage two or more accounts at once, so it shouldn't be a problem.
Facebook scares a whole bunch of people
One of Facebook's most interesting features (and frankly one of its scariest, if you don't like being followed around by computers) is a new one. By geolocating you, Facebook can push "Safety Check" notifications about dangerous events in your local area. These alerts basically ask you to check in and confirm that you are OK, so Facebook can let your family and friends know you are safe. The feature depends on your location settings being turned on, but it has proven useful in emergencies such as the recent Brussels bombings.
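Under the hood, a feature like Safety Check boils down to a distance test: is your last known location within some radius of the incident? Here's a rough sketch of that check; the 50 km radius, the coordinates, and the helper names are our assumptions, not Facebook's actual logic.

```python
import math

EARTH_RADIUS_KM = 6371.0

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, via haversine."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def should_notify(user_lat, user_lon, event_lat, event_lon, radius_km=50):
    """Alert the user only if they're within radius_km of the event."""
    return distance_km(user_lat, user_lon, event_lat, event_lon) <= radius_km

# Lahore sits at roughly (31.55, 74.34); a user in New York should not be alerted.
print(should_notify(40.71, -74.01, 31.55, 74.34))  # False
```

As the Lahore mix-up below shows, the hard part isn't the math; it's knowing where the user actually is.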
Over Easter weekend, a number of Facebook users were alerted to a terrorist bombing at a large public gathering in Lahore, Pakistan. Only... many of those alerted weren't in Pakistan. They were elsewhere in Asia, in Europe, in India, and yes, in the U.S. No harm was done, but many who received the emergency check request were alarmed for two reasons: 1. they worried there was a dangerous development in their immediate area, and 2. Facebook thought they were in Pakistan. Some were also alarmed to learn they were being tracked at all, and will likely check their Facebook settings.
Safety Check was introduced in October 2014 and has been used during natural disasters and during the recent terror attacks in Paris and Brussels, as well as, obviously, the Lahore attack.
Check out the Facebook oops at USA Today.
What’s the point?
As a business, you want to reach your customers on social media, but as a business owner you also need to know how those platforms work. That means actually signing up as an individual, and when you do, check those privacy and location settings. Take a tutorial if you need to, but learn about what you are using.
It also underscores a much-repeated phrase: Facebook wants to be everything, and since it is pretty close to doing just that, you have to be there too.
Just an update on last week's VentureBeat item:
Even though Facebook is pitching the new Instagram algorithm as "helpful," users are predictably irritated. No matter; as we said, it's here to stay.