The history of chatbots

I still remember the day my father sat me down in front of his IBM XT with its black-and-green flickering monitor and asked me to talk to it. The conversation went something like the following:

>Hello. How are you feeling today?

>Hi. Who is this?

>Hi there... How are you today?

>I'm fine. You didn't tell me who you are.

>We should be discussing you, not me.

>We should? Why?

>Let's change focus a bit... Tell me about your family.

>You are impossible to have a conversation with.

>Can you elaborate on that?

>You are killing me.

>Perhaps you would like me to be killing you.

>OK, you are freaking me out.

>Perhaps you're really talking about yourself?

>What?? That makes no sense.

>How would an answer to that help you?

>END

>Very interesting.

>QUIT

>Thank you for talking with me.

>#$###$#@@....

The "it" I was talking to was a program called ELIZA. Written in the mid-1960s at MIT's Artificial Intelligence Lab by Joseph Weizenbaum, ELIZA was created to mimic the responses of a Rogerian psychotherapist. Though nearly comical when examined in any depth, the program was capable of convincing some users that they were chatting with an actual human, a remarkable feat considering it was a scant 200 lines of code that used randomization and regular expressions to parrot back responses. Even today, this simple program remains a staple of popular culture. If you ask Siri who ELIZA is, she'll tell you she's a friend and a brilliant psychiatrist.
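ELIZA's trick can be sketched in a few lines of Python. The rules below are a hypothetical toy, not Weizenbaum's original script: each rule pairs a regular expression with canned response templates, and the matched fragment is echoed back with its pronouns swapped, which is how "You are killing me" becomes "Perhaps you would like me to be killing you."

```python
import random
import re

# Pronoun swaps so echoed fragments read naturally ("me" -> "you", etc.).
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "are": "am"}

# Illustrative rules: a regex paired with response templates.
# "{0}" is filled with the reflected captured fragment.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?",
                      "How long have you felt {0}?"]),
    (r"you are (.*)", ["Perhaps you would like me to be {0}.",
                       "Why do you say I am {0}?"]),
    (r"(.*)", ["Can you elaborate on that?",
               "Let's change focus a bit... Tell me about your family.",
               "Very interesting."]),
]

def reflect(fragment):
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement, rng=random.Random(0)):
    """Answer with the first matching rule, picking a random template."""
    for pattern, templates in RULES:
        match = re.match(pattern, statement.rstrip(".!?").lower())
        if match:
            return rng.choice(templates).format(reflect(match.group(1)))

print(respond("You are killing me"))  # echoes back "killing you"
```

A catch-all rule at the bottom supplies the non-committal deflections ("Very interesting.") that pad out the transcript above when nothing more specific matches.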

If ELIZA was an early example of chatbots, what have we seen since that time? In recent years, there has been an explosion of new chatbots. The most notable of these is Cleverbot.

Cleverbot was released on the web in 1997. In the years since, the bot has racked up hundreds of millions of conversations, and, unlike earlier chatbots, Cleverbot, as its name suggests, appears to become more intelligent with each conversation. Though the exact details of its algorithm are difficult to find, it's said to work by recording all conversations in a database and choosing a response by identifying the most similar questions and responses in that database.
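Under that description, a minimal retrieval-style bot might look like the following sketch. The conversation log and the `difflib`-based similarity measure are illustrative assumptions, not Cleverbot's actual implementation:

```python
import difflib

# A toy log of past (question, response) pairs standing in for the
# database of recorded conversations described above.
conversation_log = [
    ("how are you today", "I'm fine, thanks for asking."),
    ("do you like tacos", "Who doesn't like tacos?"),
    ("what is your name", "People call me Cleverbot."),
]

def best_response(question):
    """Return the logged response whose question best matches the input,
    scored by simple string similarity."""
    def score(pair):
        past_question, _ = pair
        return difflib.SequenceMatcher(
            None, question.lower(), past_question).ratio()
    return max(conversation_log, key=score)[1]

print(best_response("How are you?"))  # closest match: "how are you today"
```

A pure string match like this also explains the "something... similar?" behavior shown next: a nonsensical question still retrieves whatever past question happens to overlap with it most.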

I made up a nonsensical question, shown as follows, and you can see that it found something similar to the object of my question in terms of a string match:

I persisted:

And, again, I got something... similar?

You'll also notice that topics can persist across the conversation. In response, I was asked to go into more detail and justify my answer. This is one of the things that appears to make Cleverbot, well, clever.

While chatbots that learn from humans can be quite amusing, they can also have a darker side.

Several years ago, Microsoft released a chatbot named Tay onto Twitter. People were invited to ask Tay questions, and Tay would respond in accordance with her personality. Microsoft had apparently programmed the bot to appear to be a 19-year-old American girl. She was intended to be your virtual bestie; the only problem was that she started tweeting out extremely racist remarks.

As a result of these unbelievably inflammatory tweets, Microsoft was forced to pull Tay off Twitter and issue an apology.

"As many of you know by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values."
-Official Microsoft Blog, March 25, 2016

Clearly, brands that want to release chatbots into the wild in the future should take a lesson from this debacle and plan for users to attempt to manipulate them to display the worst of human behavior.

There's no doubt that brands are embracing chatbots. Everyone from Facebook to Taco Bell is getting in on the game.

Witness the TacoBot:

Yes, it's a real thing. And, despite the stumbles, like Tay, there's a good chance the future of UI looks a lot like TacoBot. One last example might even help explain why.

Quartz recently launched an app that turns news into a conversation. Rather than lay out the day's stories as a flat list, you are engaged in a chat as if you were getting news from a friend:

David Gasca, a PM at Twitter, describes his experience with the app in a post on Medium, noting how its conversational nature evoked feelings normally triggered only in human relationships:

"Unlike a simple display ad, in a conversational relationship with my app I feel like I owe something to it: I want to click. At the most subconscious level I feel the need to reciprocate and not let the app down: "The app has given me this content. It's been very nice so far and I enjoyed the GIFs. I should probably click since it's asking nicely."

If that experience is universal—and I expect it is—this could be the next big thing in advertising, and I have no doubt that advertising profits will drive UI design:

"The more the bot acts like a human, the more it will be treated like a human."
-Mat Webb, Technologist and Co-Author of Mind Hacks

At this point, you're probably dying to know how these things work, so let's get on with it!
