Overcoming our fears and avoiding robot overlords

18.09.2015
Some people are afraid that one day robots will rise up, sentient, working as a collective and angry enough to overthrow the human race.

Artificial intelligence (A.I.), and the robots it will empower, is something to fear, according to physicist and author Stephen Hawking and high-tech entrepreneur Elon Musk.

Other scientists say the scariest thing is for our fears to stunt our research on A.I. and slow technical advances.

"If I fear anything, I fear humans more than machines," said Yolanda Gil, computer science research professor at the University of Southern California, speaking at DARPA's recent Wait, What forum on future technologies. "My worry is that we'll have constraints on the types of research we can do. I worry about fears causing limitations on what we can work on and that will mean missed opportunities."

Gil and others at the forum want to discuss what the potential dangers of A.I. could be and begin setting up protections decades before any threats could become realities.

There's going to be a lot to talk about.

The average person will see more A.I. advances in their daily lives in the next 10 years than they did in the last 50, according to Trevor Darrell, a computer science professor at the University of California, Berkeley.

Today, A.I. touches people's lives with technologies like Google search, Apple's intelligent assistant Siri and Amazon's book recommender.

Google also is testing self-driving cars, while the U.S. military has been given demonstrations of weaponized smart robots.

While some might think this is already the stuff of science fiction, it's just the beginning of a life filled with A.I., as the technology nears the cusp of a revolution in vision, natural-language processing and machine learning.

Combine that with advances in big data analysis, cloud computing and processing power and A.I. is expected to make dramatic gains in the next 10 to 40 years.

"We've seen a lot of progress, but it's now hitting a tipping point," Darrell told Computerworld. "In five or 10 years, we're going to have machines increasingly able to perceive and communicate with people and themselves, and have a basic understanding of their environments. You'll be able to ask your transportation device to take you to the Starbucks with the shortest line and best lattes."

For instance, today a homeowner might need a small group of people to move her furniture around. With A.I. and robotics, in 10 years or so, that homeowner might have furniture that can understand her voice commands, self-actuate and move to where it is told to go.

As useful as this sounds, some will wonder how humans will stay in control of such intelligent and potentially powerful machines. How will humans maintain authority and stay safe?

"The fear is that we will lose control of A.I. systems," said Tom Dietterich, a professor and director of Intelligent Systems at Oregon State University. "What if they have a bug and go around causing damage to the economy or people, and they have no off switch We need to be able to maintain control over these systems. We need to build mathematical theories to ensure we can maintain control and stay on the safe side of the boundaries."

Can an A.I. system be so tightly controlled that its good behavior can be guaranteed? Probably not.

One thing that's being worked on now is how to verify, validate or give some sort of safety guarantee on A.I. software, Dietterich said.

Researchers need to focus on how to fend off cyberattacks on A.I. systems, and how to set up alerts to warn the network - both human and digital - when an attack is being launched, he said.

Dietterich also warned that A.I. systems should never be built that are fully autonomous. Humans don't want to be in a position where machines are fully in control.

Darrell echoed that, saying researchers need to build redundant systems that ultimately leave humans in control.

"Systems of people and machines will still have to oversee what's happening," Darrell said. "Just as you want to protect from a rogue set of hackers being able to suddenly take over every car in the world and drive them into a ditch, you want to have barriers [for A.I. systems] in place. You don't want one single point of failure. You need checks and balances."

USC's Gil added that figuring out how to deal with increasingly intelligent systems will move beyond having only engineers and programmers involved in developing them. Lawyers will need to get involved, as well.

"When you start to have machines that can make decisions and are using complex, intelligent capabilities, we have to think about accountability and a legal framework for that," she said. "We don't have anything like that right now... We are technologists. We are not legal scholars. Those are two sides that we need to work on and explore."

Since artificial intelligence is a technology that magnifies the good and the bad, there will be a lot to prepare for, Dietterich said, and it will take a lot of different minds to stay ahead of the technology's growth.

"Smart software is still software," he said. "It will contain bugs and it will have cyberattacks. When we build software using A.I. techniques, we have additional challenges. How can we make imperfect autonomous systems safe"

While Hawking and Musk both say A.I. could lead to the annihilation of the human race, Dietterich, Gil and Darrell are quick to point out that artificial intelligence is not a threshold phenomenon.

"It's not like today they're not as powerful as people and then boom they're vastly more powerful than we are," said Dietterich. "We won't hit a threshold and wake up one day to find they've become super-intelligent, conscious or sentient."

Darrell, meanwhile, said he's glad there's enough concern to raise a discussion of the issue.

"There are perils of each point," he said. "The peril of full autonomy is the science fiction idea where we cede control to some imaginary robotic or alien race. There's the peril of deciding to never use technology and then someone else overtakes us. There are no simple answers, but there are no simple fears. We shouldn't be blindly afraid of anything."

(www.computerworld.com)

Sharon Gaudin
