Tuesday, July 11th, 2017

Machine Learning Becomes the New “Plastics,” Transforming the Once-Isolated World of CS Theory

by John Markoff, Journalist in Residence

Machine learning has taken the place of “plastics” – the powerful one-word career advice imparted to Dustin Hoffman in The Graduate, the 1967 movie about a rootless college graduate whose girlfriend attends the University of California at Berkeley.

A half-century later on the same campus, in the corridors of the Simons Institute for the Theory of Computing, there is a clear sense that machine learning – a broad set of algorithms that permit computers to discover insights in data – is transforming the world.

Pulitzer Prize-winning writer John Markoff was Journalist in Residence at the Simons Institute in Spring 2017. A long-time science and technology writer for the New York Times, Markoff is the author of numerous books, including What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry; and Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots. He is currently researching a biography of Stewart Brand.

The Simons Institute's Journalist in Residence program was created in Fall 2016, with the goal of increasing visibility for theoretical computer science and supporting science journalists interested in covering the field.

Read more about the Institute’s Journalist in Residence program.

In the past semester, a group of computer science theorists and their students gathered at the Institute to explore the limits of machine learning algorithms, with the goal of gaining a deeper understanding of existing algorithms, discovering new insights that will advance the field, and, in the process, developing new technologies that might rival and perhaps even surpass human capacities.

That new power brings new responsibility.

It is a striking time for the community of math-oriented scientists who have traditionally been more comfortable in the ivory tower confines afforded theoreticians. Sanjoy Dasgupta, one of the computer scientists who served as organizers for the Simons Institute’s Spring 2017 program on Foundations of Machine Learning, said that when he had arrived at UC San Diego in 2002, there were roughly 20 students in his graduate-level machine learning class. Now the number of students is close to 400.

In fact, the demand for machine learning expertise is so high that university professors are being offered million-dollar annual consulting fees, and in the heart of Silicon Valley at Stanford University, recruiters now show up to lure undergraduates when they are just three weeks into their first machine learning class.

At the same time, well-known technologists and scientists such as Elon Musk and Stephen Hawking have warned that rapid advances in artificial intelligence technologies might alternatively “summon the demon” or present an “existential” risk to humanity.

Their warnings have touched off a debate that now poses ethical questions for the community of theoretical computer scientists that are evocative of challenges once faced by theoretical physicists in the wake of the Trinity atomic bomb test at Alamogordo in 1945, and later by molecular biologists who met at Asilomar in 1975 to explore the potential threats of recombinant DNA research.

The parallels are potentially profound.

In the case of machine learning, warnings range from the near-term obsolescence of many kinds of human labor and new “intelligent” super weapons, to new kinds of surveillance that threaten to end privacy. The darkest prospect hinges on the notion that advanced algorithms might in some way lead to autonomous machines that exhibit qualities that researchers in the field describe as Artificial General Intelligence, or AGI. Such a machine might not only be self-aware but potentially independent of human control.

The subject is a contentious one for several reasons.

Until now, the field of artificial intelligence has overpromised and underdelivered. There were “AI winters” both in the late 1970s and mid-1980s, when earlier generations of artificial intelligence researchers failed to deliver on claims of the imminent arrival of thinking machines.

As early as 1957, when Frank Rosenblatt introduced the concept of the perceptron, an early learning algorithm, he speculated to reporters that machines that could read and write would be available within a year for a cost of just $100,000. Five years later, John McCarthy, in an early proposal to the Pentagon’s Advanced Research Projects Agency, which led to the creation of the Stanford Artificial Intelligence Laboratory, asserted that building a thinking machine would be a ten-year effort. Several years after that, MIT computer scientists Marvin Minsky and Seymour Papert assigned then-student Gerald Sussman the “vision problem” as a summer project.

Even as recently as last year, Elon Musk publicly asserted that by 2018 it would be possible to summon a Tesla from the opposite side of the country.

The problem is compounded by the human predisposition to anthropomorphize just about everything we interact with, which in effect sets the bar for demonstrating new AI capabilities arbitrarily low, and by Hollywood’s steady fare of AI theatre – from the endearing 2012 movie Robot and Frank to the darker, set-at-UC-Berkeley Transcendence in 2014 – which has shaped public perception of the limits of artificial intelligence.

None of the theorists I spoke with in interviews at the Simons Institute believed that AGI will appear in the coming decades, and many were skeptical that it would ever be possible. 

At the same time, most acknowledged that machine autonomy – decision-making transferred from humans to algorithms – will soon be commonplace. As machines increasingly make decisions, their impact will extend from the banal – which Korean barbecue restaurant to choose – to, in some cases, choices that will have life-and-death consequences. This will increasingly be true if autonomous systems begin to drive and perhaps fly – foretelling a wide range of intelligent machines that will begin to move around in the world.

In the past year, the highly anticipated arrival of self-driving cars has drawn attention to a popular ethical puzzle used to highlight the new challenges posed by machine learning algorithms. Known as the “Trolley Problem” and first posed during the 1960s by philosopher Philippa Foot, it is a thought experiment in which an observer with a lever watches a streetcar approach. Let the car pass and five people will be killed; throw the lever and the car will switch tracks and run down just one person.

Theorists must now address the inevitability of similar ethical problems that will emerge from the algorithms they design.

To date, roboticists and computer scientists have responded in differing ways to the Trolley Problem and to the question of ethical challenges in general. Sebastian Thrun, the artificial intelligence researcher who created the Google Car project, notes that in the time it takes to pose the Trolley Problem to a human, it is already too late to act.

University of Maryland computer scientist Ben Shneiderman counters that the ethical problem lies in the very decision to delegate decision-making to algorithms without human oversight and control.

Researchers in the Simons Institute’s machine learning program take differing views of who will be responsible for the coming era of intelligent machines.

For Marina Meila, a University of Washington computer scientist and statistician, the ethical consequences of theoretical computer science are more a challenge for society than for individual theorists. And yet she acknowledged that she had chosen to focus her research on sciences such as physics and chemistry, where any theoretical advances would have a clear positive impact on humanity.

Today, just as cryptologists have become the designers and maintainers of the world’s electronic commerce systems, and the dominant form of web search is built on Larry Page’s PageRank algorithm (the foundation of which rests on the relatively obscure concept of eigenvectors), convolutional, recurrent, and deep neural networks, along with a growing array of algorithms that can be trained to recognize patterns, have become Silicon Valley’s most ubiquitous and effective tools.
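The eigenvector connection is easy to sketch. What follows is a minimal illustration, not Google’s production system: it ranks a tiny, made-up four-page web by computing the principal eigenvector of a damped link matrix with power iteration, the textbook formulation of PageRank (the damping factor of 0.85 follows the original paper; the link graph here is invented purely for demonstration).

```python
import numpy as np

# A tiny, made-up web of four pages: key j lists the pages that page j links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n = 4

# Build the column-stochastic link matrix: M[i, j] is the probability of
# following a link from page j to page i.
M = np.zeros((n, n))
for j, outlinks in links.items():
    for i in outlinks:
        M[i, j] = 1.0 / len(outlinks)

# "Random surfer" matrix with damping factor d = 0.85.
d = 0.85
G = d * M + (1 - d) / n * np.ones((n, n))

# Power iteration: repeatedly apply G until the rank vector stops changing.
# The fixed point is the principal eigenvector of G (eigenvalue 1).
rank = np.ones(n) / n
for _ in range(100):
    new_rank = G @ rank
    if np.linalg.norm(new_rank - rank, 1) < 1e-10:
        break
    rank = new_rank

print(rank)  # pages with more, and better-placed, inbound links score higher
```

In this toy example, the ranking falls out of nothing more than repeated matrix multiplication – the “relatively obscure” linear algebra that quietly underpins web search.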

What remains ill-defined, however, are the limits of the field. The rapid progress of the past half-decade has largely been due not to new theoretical advances, but rather to the ease with which large data sets can now be brought to bear on approaches originally formulated in the 1980s, the 1990s, or earlier, giving them new effectiveness.

Indeed, the Simons Institute theorists acknowledge that they are still scrambling to catch up. Ravi Kannan, a theoretical computer scientist with Microsoft Research in India, said that the field of deep learning has so far outpaced researchers’ ability to completely understand the general mechanism underlying the algorithms. But he also said he remained skeptical that, even with vast data sets, certain complex design tasks such as designing an airplane wing, or perhaps even autonomous driving, would be solved simply by deploying advanced neural networks.

“Something is still missing,” he said.

A more optimistic view was expressed by Sanjeev Arora, a Princeton University theoretical computer scientist who argued that computational complexity is not a barrier to building intelligent machines that will perhaps one day achieve human-like intelligence.

“AI is this elephant in the room,” he said. “Despite earlier feelings that there were hurdles to overcome, I am sure that theory and math will play a big role in this endeavor.”

Where there is little consensus is on what the rate of progress will be.

Indeed, in some ways the world of artificial intelligence still seems stuck where it was decades ago, when UC Berkeley philosopher Hubert Dreyfus chastised AI researchers for their claims, arguing that they were naively optimistic in the same way as a man who, having climbed to the top of a tree, claims he is making steady progress on his way to the moon.

“There are similarities between the situation facing artificial intelligence theorists and the theoretical physicists who developed the atomic bomb during World War II, but there are also important differences,” said Daniel Roy, a University of Toronto computer scientist participating in the Simons Institute’s machine learning program, noting that there is a great deal of debate about how quickly the field is moving and what the consequences will be. “We are very uncertain about what will be possible in the near future,” he said.

In a panel discussion on the future of machine learning, Richard Karp, the Director of the Simons Institute and a pioneering computer scientist, pointed to comprehensibility as one of the major challenges on which the field has so far made little progress. Algorithms such as deep neural nets have yielded startling advances, yet they are often described as black boxes that yield answers without accompanying explanations.

“If you have a very deep net with a huge number of neurons, and it succeeds in the classification process, what have you learned?” he asked. “How can you get your mind around what it was doing?”

That poses a series of potential ethical challenges ranging from accuracy to safety, he said. He noted that during the semester, program participants had explored the possibility of building failsafe mechanisms into future machine learning algorithms, even ones that might be remarkably complex and not completely understood.

“There has been some discussion of trying to define means of ensuring safety for neural nets,” said Karp, “defining the parameters in which they can operate in such a way that they can’t be destructive.”

Advances beyond the current state of the art in the field might come from two very different theoretical directions, said Misha Belkin, an associate professor of computer science and engineering at Ohio State University and a long-term participant in the program. One is a deeper understanding of the mechanism underlying optimization in deep networks, which could open an avenue toward much simpler and more powerful algorithms. The other is a clearer understanding of human, or natural, intelligence, which might lead to completely new architectures that are far more compact than today’s pattern recognizers.

Despite the debate on limits and the rate of progress, there was a general consensus that the field has broken out of its isolation and is grappling with the most significant problems in the world.

“One of my friends called artificial intelligence ‘the Manhattan Project of our generation’,” said Sham Kakade, a University of Washington associate professor in both computer science and statistics in residence at the Simons Institute this spring. He noted that there has been a recent paradigm shift that is accelerating the field, and that couples theory with low-level engineering.

He suggested that if the theoretical advances do come quickly, they will bring with them equally challenging moral questions for their designers.

“The connection between machine learning and society is a bit scary,” said Kakade. “For the first time we’re developing these new tools, and it’s hard to understand if society is going to actually be better due to them. Maybe it’s better because it’s going to be safer, but how do we understand how people will respond?”

In a panel exploring the future of machine learning, UC Berkeley statistician Michael Jordan criticized the field for succumbing to the hype of a “glorious” wave of computer scientists who are going to create superhumans, while ignoring basic principles of scientific inference and decision-making. Machine learning, he said, was offering both specialists and non-specialists a magic black box that gives answers.

“No one is asking, ‘Are you sure about that? How sure are you?’”

If theoretical computer scientists decide to leave the ivory tower, it should not be to perform stunts like the DeepMind AlphaGo program beating a human Go player, but rather to make real contributions to the world, he argued.

“Let’s take on real problems like health and water and climate that aren’t going to go into the New York Times, but are going to make a real impact on real lives and are going to make our world a better place,” he said.
