What humans need to learn about machine learning

May 10, 2016
Artificial intelligence, machine intelligence, cognitive computing — whatever you want to call machines that are capable of understanding and acting upon their environment — is no longer solely the purview of highly credentialed lab directors and deep-thinking computer scientists. It has entered mainstream consciousness, and the public expects IT to play a leadership role as machine learning enters our workplaces, our living spaces and our lives. Will you be ready?

Chances are that you are not. Most executives, in the opinion of New York Times technology columnist John Markoff, are “ill prepared for this new world in the making.”

This is unacceptable. People have been thinking about automated work forever. The first reference in literature (and consistent with the historical theme that the benefits of automation accrue to the elite of society) is probably the mention of automatai — devices that opened and closed the gates of Olympus so that the gods in their chariots could go in and out — in Book 5 of the Iliad. (As Daniel Mendelsohn noted in The New York Review of Books, this was some 30 centuries before the first automatic garage door opener.) And a close reading of the Odyssey reveals the hero visiting a king who has gold and silver watchdogs. People have been thinking about using technology to get work done since there was work to be done.

Coming to terms with machine learning is all the more critical because it could end up governing us at the highest levels of society. While taking part in a CES panel on A.I. in 2014, Ericsson CEO Hans Vestberg went so far as to contend that the mastery of machine learning/cognitive computing/A.I. has “become crucial for the development of countries.” And at a recent Talks@Google, authors Richard and Daniel Susskind, two leading thinkers on the topic, were asked in all seriousness whether they thought countries would be better off run by machine intelligence.

Then there’s Michael Froomkin, the Laurie Silvers & Mitchell Rubenstein Distinguished Professor of Law at the University of Miami School of Law, who concluded the WeRobot 2016 Conference by observing that “the social importance of what we are talking about is getting exponentially big. We have just now crossed the Rubicon from the point of which this is just an expert subject to where the public is engaged for better or worse.”

In short, I had ample reason to undertake a research project to discover what we know and what we need to know about machine learning, the state of A.I. and the coming age of robo assistants. 

My first conclusion is that displacement is inevitable. In 1983, Wassily Leontief, a Nobel laureate in economics, said that “the role of humans as the most important factor of production is bound to diminish in the same way that the role of horses in agricultural production was first diminished and then eliminated by the introduction of tractors.” It is time, therefore, that executives — and technology executives in particular — started thinking about the immediate, short-term and long-term effects of the labor displacements that will be associated with the deployment of increasingly capable intelligent machines.

How should we be thinking about thinking machines?

Stanford professor Jerry Kaplan argues convincingly that one should not obsess about whether computers will one day surpass humans. In Kaplan’s opinion, “This narrative is both misguided and counterproductive. A more appropriate framing is that A.I. is simply a natural expansion of long-standing efforts to automate tasks, dating at least to the start of the Industrial Revolution.” Machine learning, cognitive computing and A.I. are each part and parcel of an ongoing evolution in workplace automation. 

Nor should we get too pedantic about labels. For the longest time, academia has played a large role in creating the language used to discuss the evolution of machine intelligence. Kaplan shares an entertaining story of how the term “artificial intelligence” came to be. Thought to have been coined by John McCarthy, then a mathematician at Dartmouth College, it first appeared in the 1955 proposal for the Dartmouth Summer Research Project held in 1956. It was specifically chosen to avoid association with cybernetics and its founder, Norbert Wiener, who defined “cybernetics” in 1948 as “the scientific study of control and communication in the animal and the machine.”

No excuse for ignorance

There is a surprisingly rich set of resources on the robo-fication of work, learning and leisure: varied, well written, recent and relevant to executive audiences.

Reading the free e-book The Future of Machine Intelligence: Perspectives from Leading Practitioners, by David Beyer, will give you a good idea of just how many and how varied the ongoing research programs in machine intelligence are. The book’s subtitle is misleading, though: its focus is not so much on business folk applying machine intelligence as on researchers trying to create it.

Some other titles worth exploring: Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots, by John Markoff; Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence, by Jerry Kaplan; and The Future of the Professions: How Technology Will Transform the Work of Human Experts, by Richard Susskind and Daniel Susskind.

You have your homework to do. As you ponder our future with machine learning, I’d love to hear your thoughts.

Futurist Thornton A. May is a speaker, educator and adviser and the author of The New Know: Innovation Powered by Analytics. Visit his website at thorntonamay.com, and contact him at thornton@thorntonamay.com.

