


But what if it wasn’t so benevolent? Nick Bostrom, a philosopher who directs the Future of Humanity Institute at the University of Oxford, describes the following scenario in his book Superintelligence, which has prompted a great deal of debate about the future of artificial intelligence. Imagine a machine that we might call a “paper-clip maximizer,” that is, a machine programmed to make as many paper clips as possible.

Now imagine that this machine somehow became incredibly intelligent. Given its goals, it might then decide to create new, more efficient paper-clip-manufacturing machines until, King Midas style, it had converted essentially everything to paper clips. No worries, you might say: you could just program it to make exactly a million paper clips and halt. But what if it makes the paper clips and then decides to check its work? Has it counted correctly? It needs to become smarter to be sure. The superintelligent machine manufactures some as-yet-uninvented raw-computing material (call it “computronium”) and uses that to check each doubt. But each new doubt yields further digital doubts, and so on, until the entire earth is converted to computronium.

Yet a lot of smart, thoughtful people agree with Bostrom and are worried now. The question “Can a machine think?” has shadowed computer science from its beginnings. Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term “artificial intelligence” in 1955.

As AI researchers in the 1960s and 1970s began to use computers to recognize images, translate between languages, and understand instructions in normal language and not just code, the idea that computers would eventually develop the ability to speak and think, and thus to do evil, bubbled into mainstream culture.
