Reflections on Nick Bostrom’s Lecture, “Can We Reshape Humanity’s Deep Future?” 7 June 2015, Santa Fe

As part of a series of occasional lectures that we’re calling Dispatches from the Edge, on June 7 the School for Advanced Research sponsored a public lecture by Professor Nick Bostrom (Future of Humanity Institute, University of Oxford, UK), “Can We Reshape Humanity’s Deep Future? Possibilities & Risks of Artificial Intelligence (AI), Human Enhancement, and Other Emerging Technologies.” Bostrom’s talk was a snapshot of his research on existential risk: large-scale events or processes that could lead either to the complete extinction of humanity or to some form of “permanent stagnation.”

Bostrom opened his lecture with a thumbnail history of our species: our emergence as bipedal primates living in small, mobile groups of foragers; the role of the agricultural revolution in supporting larger populations and fostering the emergence of social hierarchy; the transition, beginning roughly 250 years ago, to industrial economies and their acceleration of technological innovation; and finally, the digital revolution, which, along with the rise of new genetic technologies, makes possible (and, in Bostrom’s view, inevitable) the emergence of “superintelligence”: cognitive capacities that surpass those of contemporary human beings.

Although Bostrom couldn’t rule out the possibility that existential risks could arise from natural phenomena such as supervolcanoes or asteroid collisions, he argued that in light of the absence of near-extinction events during the last 100,000 years, the odds of such natural catastrophes presenting a significant existential risk are low. Far more salient, he argued, is anthropogenic risk: the possibility that our own technological activities will prove uncontrollable and ultimately lethal to humankind.

Superintelligence could conceivably emerge in human form through the systematic use of enhancement technologies that would raise human IQ to levels significantly in excess of current norms. But Bostrom leans toward machine AI as the more likely site of superintelligence, perhaps emerging as early as 2050. In this scenario, AI agents approaching human cognitive levels launch a self-perpetuating process of improvement that quickly brings them to a point at which they can assert their own survival priorities over those of their human creators. As Elon Musk described the situation in a Washington Post interview, “If there was a very deep digital superintelligence that was created that could go into rapid recursive self-improvement in a non-algorithmic way … it could reprogram itself to be smarter and iterate very quickly and do that 24 hours a day on millions of computers . . . .”

Can we do anything to stop this? Bostrom’s view is that if we focus even modest attention on strategies for preventing or delaying this scenario, the beneficial impact could be significant. By reallocating resources from technologies that increase risk toward efforts to control potentially rogue superintelligence, such as algorithms to ensure ethical behavior favorable to humankind, the most extreme danger might be averted. One especially amusing slide presented by Bostrom was a graph showing the relative frequency of published studies of human extinction compared to three other topics.

In response to questions from the audience, Bostrom expressed doubt about prospects for imposing outright prohibitions on certain kinds of AI work perceived as dangerous. He seemed to lean more toward incremental strategies that would buy humanity time to find ways of mitigating risk, so that we would be better prepared for rogue AI when and if it appears. If his lecture was short on concrete solutions, it did make a convincing case for greater attention to the dangers of technologies once praised as utopian but which we must increasingly recognize as fostering risks whose magnitude we are only now beginning to imagine. The lecture was followed by a reception on the SAR campus.

One of the issues that arose in discussion with attendees was whether this event represented a new departure for SAR, which is principally known for its major contributions to anthropology, archaeology, and Native American art. My response was that our commitment to the areas of our greatest strength remains undiminished, but that SAR also wants to build on its tradition of contributing to big-picture debates about human futures, social justice, and expanding frontiers of knowledge.

[Photo: Bostrom reception on patio in front of SAR administration building.]
[Photo, L to R: Jocelyne Guilbault, Steven Feld, Tom Joyce.]
[Photo: Nick Bostrom and fans discuss existential risk while drinking really good sangria.]

This event would not have been possible without the generous support of our underwriters, the Vera R. Campbell Foundation, Susan L. Foote, Merrilee Caldwell, and Marcus P. Randolph.

Another copy of this post, with additional photographs, has been uploaded to the SAR website.

Added 10/26/2015: A video of Bostrom’s Santa Fe lecture is now available on the SAR YouTube channel.
