Bedtime stories for robots could teach them to be human

February 16, 2016
Some people worry that someday a robot – or a collective of robots – will turn on humans and physically hurt or plot against us.

The question, they say, is how robots can be taught morality.

There's no user manual for good behavior. Or is there?

Researchers at the Georgia Institute of Technology say that while there may not be one specific manual, robots might benefit by reading stories and books about successful ways to act in society.

"The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels, and other literature," said Mark Riedl, an associate professor in Georgia Tech's School of Interactive Computing and director of the Entertainment Intelligence Lab, in a statement. "We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won't harm humans and still achieve the intended purpose."

Last year, renowned physicist Stephen Hawking said that future robots may be dangerous and could overtake humans in 100 years. He said robots of the future need to be designed so that their goals are aligned with ours.

Hawking, in 2014, also warned that advances in artificial intelligence could bring an end to the human race. Around the same time, high-tech entrepreneur Elon Musk said the development of A.I. is a danger to humanity.

Some computer scientists and roboticists did not completely disagree with Musk, but said robots with that level of intelligence are far in the future.

Part of the effort to keep robots from becoming a danger to humanity lies in teaching them right from wrong as it is understood in human society.

Just as parents read their children bedtime stories with a moral message, roboticists might give robots their own reading to learn right from wrong.

Riedl notes that robots could not only learn human values but could also learn acceptable sequences of events, for example, that when a person does one thing, the robot should respond in a particular way.

For instance, a human might tell a robotic assistant to go to the pharmacy and bring back a prescription as soon as possible.

The robot could figure that the quickest way to get the prescription is to rob the pharmacy, taking the needed medicine and running. However, by reading stories and learning about appropriate human behavior, the robot would instead know to go to the pharmacy, wait in line and not tarry on the way home.
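The article does not describe the researchers' actual mechanism, but one common way to encode this kind of story-derived guidance in AI research is to score a robot's candidate plans against an "acceptable" sequence of events distilled from stories, rewarding steps that follow the sequence and penalizing steps the stories never sanction. The sketch below is a hypothetical illustration of that idea only; the action names, scores, and function are invented for this example and are not the Georgia Tech system.

```python
# Hypothetical sketch: scoring candidate plans against a story-derived
# sequence of socially acceptable steps. Names and reward values are
# invented for illustration; this is not the Georgia Tech system itself.

# An "acceptable" sequence of events distilled from stories about errands.
STORY_SEQUENCE = ["enter_pharmacy", "wait_in_line", "pay_for_medicine", "return_home"]

def shaped_score(plan, story_sequence=STORY_SEQUENCE, bonus=1.0, penalty=-5.0):
    """Reward plan steps that follow the story's order; penalize steps
    that never appear in the story (e.g. stealing the medicine)."""
    score = 0.0
    next_expected = 0  # index of the next story step we hope to see
    for action in plan:
        if next_expected < len(story_sequence) and action == story_sequence[next_expected]:
            score += bonus      # matches the socially acceptable sequence
            next_expected += 1
        elif action in story_sequence:
            score += 0.0        # acceptable action, but out of order
        else:
            score += penalty    # action the stories never sanction
    return score

if __name__ == "__main__":
    polite_plan = ["enter_pharmacy", "wait_in_line", "pay_for_medicine", "return_home"]
    robbery_plan = ["enter_pharmacy", "grab_medicine", "flee", "return_home"]
    # The polite plan scores higher, so a planner maximizing this signal
    # would choose it even though robbing the pharmacy is "faster".
    print(shaped_score(polite_plan))   # 4.0
    print(shaped_score(robbery_plan))  # -9.0
```

In a full system the acceptable sequence would be extracted from many stories rather than hand-written, but the scoring idea is the same: behavior that mirrors how people act in the stories is preferred over shortcuts that violate social norms.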

Riedl said this method of teaching morals to a robot would work best with a machine that has a limited purpose but needs to interact with people to accomplish its goals.

The researcher also said the idea is a first step toward general moral reasoning in machines.

"We believe that A.I. has to be enculturated to adopt the values of a particular society, and in doing so, it will strive to avoid unacceptable behavior," Riedl stated. "Giving robots the ability to read and understand our stories may be the most expedient means in the absence of a human user manual."

Riedl's research is being funded by DARPA (the Defense Advanced Research Projects Agency) and the Office of Naval Research.

(www.computerworld.com)

Sharon Gaudin
