
Reflections on Nick Bostrom’s Lecture, “Can We Reshape Humanity’s Deep Future?” 7 June 2015, Santa Fe

As part of a series of occasional lectures that we’re calling Dispatches from the Edge, on June 7 the School for Advanced Research sponsored a public lecture by Professor Nick Bostrom (Future of Humanity Institute, University of Oxford, UK), “Can We Reshape Humanity’s Deep Future? Possibilities & Risks of Artificial Intelligence (AI), Human Enhancement, and Other Emerging Technologies.” Bostrom’s talk was a snapshot of his research on existential risk: large-scale events or processes that could lead either to the complete extinction of humanity or to some form of “permanent stagnation.”

Bostrom opened his lecture with a thumbnail history of our species: our emergence as bipedal primates living in small, mobile groups of foragers; the role of the agricultural revolution in supporting larger populations and fostering the emergence of social hierarchy; the transition, beginning roughly 250 years ago, to industrial economies and their acceleration of technological innovation; and finally, the digital revolution, which, along with the rise of new genetic technologies, makes possible (and, in Bostrom’s view, inevitable) the emergence of “superintelligence,” cognitive capacities that surpass those of contemporary human beings.

Although Bostrom couldn’t rule out the possibility that existential risks could arise from natural phenomena such as supervolcanoes or asteroid collisions, he argued that, given the absence of near-extinction events during the last 100,000 years, the odds of such natural catastrophes presenting a significant existential risk are low. Far more salient, he argued, is anthropogenic risk: the possibility that our own technological activities will prove uncontrollable and ultimately lethal to humankind.

Superintelligence could conceivably emerge in human form through the systematic use of enhancement technologies that raise human IQ to levels significantly above current norms. But Bostrom leans toward machine AI as the more likely site of superintelligence, perhaps emerging as early as 2050. In this scenario, AI agents approaching human cognitive levels launch a self-perpetuating process of improvement that quickly brings them to a point at which they could assert their own survival priorities over those of their human creators. As Elon Musk described the situation in a Washington Post interview, “If there was a very deep digital superintelligence that was created that could go into rapid recursive self-improvement in a non-algorithmic way … it could reprogram itself to be smarter and iterate very quickly and do that 24 hours a day on millions of computers . . . .”
