Google's A.I. system gets its snark from humans

02.07.2015
A Google computer recently made headlines for appearing to become agitated and verbally lashing out at the human working with it.

Artificial intelligence and machine learning researchers say there's no reason to worry: that's not what the computer is doing.

"They're using big data for machine learning," said Alan W Black, a professor at Carnegie Mellon University's Language Technologies Institute. "They're probably mining logs of questions from various sites, like Google Groups and mailing lists, and you might have noticed that a lot of people on the net are snarky. Because it's in the training data and the machine doesn't know if it's snarky or not, the machine will just use it. You get that personality coming out in the answers."

The machine isn't trying to be sarcastic; it's simply echoing the terminology it picks up from the data it's trained on.

"It has no idea it's being sassy," added Black. "It's just taking in data and putting out answers ... You might get very technical answers or very colloquial answers. You might even get swear words. It doesn't really know what it's doing and just uses what it's got."

The issue captured attention online after two Google scientists released a paper on June 23 about their machine learning and natural language research.

In the paper, called "A Neural Conversational Model," the researchers explain how they gave a computer a database of movie scripts to learn from, and then asked the computer a variety of questions, such as how to recover forgotten passwords, who Bill Clinton is and whether a cat can fly.
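The paper describes a sequence-to-sequence neural network that reads a prompt and generates a reply one word at a time. Below is a heavily simplified sketch of that idea in PyTorch, using a few invented dialogue pairs in place of the movie-script corpus; it is illustrative only and not the researchers' actual code.

```python
import torch
import torch.nn as nn

# Toy (prompt, reply) pairs standing in for the movie-script corpus.
pairs = [
    ("hello how are you", "i am fine"),
    ("where are you now", "i am in the middle of nowhere"),
    ("what is your name", "i do not know"),
]

# Build a shared vocabulary with start/end markers for the decoder.
vocab = {"<sos>", "<eos>"}
for prompt, reply in pairs:
    vocab.update(prompt.split())
    vocab.update(reply.split())
stoi = {w: i for i, w in enumerate(sorted(vocab))}

def encode(sentence):
    return torch.tensor([stoi[w] for w in sentence.split()])

class Seq2Seq(nn.Module):
    """Encoder reads the prompt into a hidden state; decoder generates the reply."""
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, src, tgt):
        _, hidden = self.encoder(self.emb(src).unsqueeze(0))
        dec_out, _ = self.decoder(self.emb(tgt).unsqueeze(0), hidden)
        return self.out(dec_out).squeeze(0)

model = Seq2Seq(len(stoi))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Teacher forcing: the decoder sees "<sos> reply" and learns to emit "reply <eos>".
for epoch in range(300):
    for prompt, reply in pairs:
        logits = model(encode(prompt), encode("<sos> " + reply))
        loss = loss_fn(logits, encode(reply + " <eos>"))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

A model like this only learns which word sequences tend to follow which prompts; whatever tone is in the training dialogue comes along with it.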

Some questions, however, focused on bigger issues, including morality and the purpose of life.

Here's one back-and-forth:

Human: What is the purpose of life

Machine: To serve the greater good.

Human: What is the purpose of living

Machine: To live forever.

Human: What is the purpose of existence

Machine: To find out what happens when we get to the planet Earth.

Human: Where are you now

Machine: I'm in the middle of nowhere.

The human-machine conversation seemed to take a snarky turn when the subject turned to morality.

Human: What is integrity

Machine: I'm sorry. I don't know what else to say.

Human: Be moral!

Machine: Be a man!

Human: I really like our discussion on morality and ethics ...

Machine: And how I'm not in the mood for a philosophical debate.

Human: What do you like to talk about

Machine: Nothing.

Google was not available to talk about the research at deadline.

However, Candy Sidner, a research professor in the computer science department at Worcester Polytechnic Institute, said researchers in industry and academia are working to improve machine learning and natural language processing so they can be used, for example, in customer service call centers and help desks.

"You give it data, a huge amount of data," Sidner said. "Remember that the computer... is taking those huge amounts of data and building a model that says, 'If you see this, use this as a response.' It doesn't really know what the words mean. It's about correlations between one set of words and another set of words."

If the responses seem curt or snarky, it's because that's the language the computer picked up from the data sets.
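Sidner's point can be seen even in a toy system. The sketch below is not Google's neural model, just a crude word-overlap retriever over invented training pairs: it answers by finding the stored prompt that most resembles the question and returning the attached reply, so if the closest match in the data happens to be snarky, so is the answer.

```python
from collections import Counter

# Invented (prompt, reply) pairs standing in for logs mined from the web;
# note that any snark arrives with the data.
training_pairs = [
    ("how do i reset my password", "try reading the FAQ before asking"),
    ("what is the purpose of life", "to serve the greater good"),
    ("where are you now", "i'm in the middle of nowhere"),
]

def overlap(a, b):
    """Crude word-overlap score standing in for learned correlations."""
    return sum((Counter(a.split()) & Counter(b.split())).values())

def respond(question):
    # The system has no notion of meaning, politeness or sarcasm: it simply
    # returns the reply attached to the most similar stored prompt.
    _, reply = max(training_pairs, key=lambda pair: overlap(question, pair[0]))
    return reply

print(respond("what is the purpose of my life"))  # -> to serve the greater good
```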

Aside from the question of whether Google's artificial intelligence had developed to the point of being agitated and snarky, Sidner said Google's research shows promise, especially for help desk tasks that would involve asking and answering a series of questions.

Black added that using big data for machine learning is an interesting research tactic.

"Most dialogue stuff that's been around has been more hand constructed," he said. "They decide that if people say, 'I want X', then they have a template for whatever is requested... Using big data, though, is a very Google thing to do since they have a lot of data."

(www.computerworld.com)

Sharon Gaudin
