★★★★★ (5 out of 5)
Driverless cars, for sure. But pilotless airplanes? Machines that will replace doctors and corporate managers? And robots that can out-think the most brilliant human?
The popular term “robots”—first used in a Czech science fiction play staged in 1920—refers to machines that embody what scientists call artificial intelligence (AI). In an outstanding survey of the field, British science journalist Luke Dormehl delves deeply into the past, present, and future of humankind’s attempts to create machines capable of learning and decision-making on their own. His book, Thinking Machines: The Quest for Artificial Intelligence and Where It’s Taking Us Next, serves up the background readers need to understand why such luminaries as Stephen Hawking and Elon Musk have warned us that AI poses a grave threat to our future as a species—while others, including Ray Kurzweil, a pioneer in the field, predict a new Golden Age.
Hawking fears that the evolution of artificial intelligence will make the human race irrelevant. “It would take off on its own, and re-design itself at an ever increasing rate,” he said. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
With a somewhat different take on the subject, Musk asserts that human and machine will inevitably merge, with devices such as brain implants increasing our intelligence. However, he is fearful of Artificial General Intelligence: AI that is “smarter than the smartest human on earth,” which would present a “dangerous situation.”
Kurzweil famously speaks about the “singularity,” the time (which he puts at 2045) when robots will surpass the intellectual capacity of the most brilliant human being and usher in boundless new possibilities. Hardly fearful of the threat perceived by Hawking and Musk, Kurzweil believes the problem will be solvable. “If AI becomes an existential threat,” he wrote in TIME magazine, “it won’t be the first one. Humanity was introduced to existential risk when I was a child sitting under my desk during the civil defense drills of the 1950s. Since then we have encountered comparable specters, like the possibility of a bioterrorist creating a new virus for which humankind has no defense.” Not to mention climate change and the threat of an asteroid collision or an unstoppable pandemic. “Technology,” Kurzweil adds, “has always been a double-edged sword, since fire kept us warm but also burned down our villages.”
A much different but equally reassuring view comes from New York University psychology professor Gary Marcus, who characterizes “deep learning,” which many enthusiasts regard as the route to Artificial General Intelligence, as “the hammer that’s making all problems look like a nail.” In a 2012 New Yorker article, he argued in reference to AI that “just because you’ve built a better ladder doesn’t mean you’ve gotten to the moon.” More recently, at TEDx CERN, he asserted that perception is “a tiny slice of the pie. It’s an important slice of the pie, but there’s lots of other things that go into human intelligence, like our ability to attend to the right things at the same time, to reason about them, to build models of what’s going on in order to anticipate what might happen next, and so forth. And perception is just a piece of it. And deep learning is really just helping with that piece.”
Futuristic predictions notwithstanding, virtually all observers fear the near-term impact of AI, and that’s nothing new. In fact, beginning in the late 1940s, public concern mounted about the potential of automation to displace humans from their jobs. The topic was widely discussed in the 1950s and beyond; today’s preoccupation with robots in manufacturing, driverless cars, and computer algorithms that are putting lawyers out of work is merely the latest iteration of the problem. The trend has simply accelerated in recent decades. There’s no ignoring it now.
Dormehl traces the emergence of this trend through the work of individual scientists, many of whose names will be familiar to anyone with knowledge of the history of the computer industry—John von Neumann, Alan Turing, Claude Shannon, and Marvin Minsky, among others. The story as the author tells it is highly engaging, tracing the development of AI from the 19th-century work of Ada Lovelace (Lord Byron’s daughter) and Charles Babbage on the latter’s proposed Analytical Engine, which was to be the world’s first general-purpose computer. (It was never built.) Along the way, Dormehl explores the unsteady evolution of AI both in theory and in practice and describes many of the field’s current applications, some of them surprising: for example, robots displacing research scientists in discovering likely prospects for life-saving drugs, and IBM’s Watson, the Jeopardy! champion, turning chef. He turns a skeptical eye toward some of the more ambitious goals of the field, such as the use of computers to mimic the structure and functions of the human brain. This is an amazing story, and an important one.
In the final analysis, Dormehl writes that “speculating about where Artificial General Intelligence could potentially take us [as Hawking, Musk, and Kurzweil have done] is interesting, but ultimately the stuff of science fiction for now.” Clearly, it makes more sense to concern ourselves with how to provide meaningful employment to all the millions of people displaced by AI from their jobs in the years ahead.
If this book intrigues you, you might take a look at “Science history and science explained in 33 excellent popular books,” a list that includes this one.
For your convenience, I’ve listed all 36 nonfiction books reviewed here in 2017.