Open Access and magical thinking

In late May of 2015, the board of the Society for Cultural Anthropology issued a statement about the society’s shift to Open Access distribution of its journal, Cultural Anthropology. The statement is reasonably judicious, considering that many OA partisans insist that anthropology has an overriding moral duty to make its findings available at no cost to the world at large and that any distribution model settling for less than this is ethically indefensible.

The SCA deserves praise for its courage and the skill with which it has created an attractive, lively platform for topical discussion and distribution of its articles. I have considerable respect for the SCA staff and leadership who have undertaken this effort with such élan.

That said, Cultural Anthropology’s move to OA is not irony-free. The funds used to launch this experiment were the fruit of Cultural Anthropology’s royalties from the for-profit publisher Wiley-Blackwell’s contract with the American Anthropological Association. In a recent editorial, Michael Chibnik, editor of the American Anthropologist, notes that only two journals in the AAA’s large publishing portfolio realize a profit. Strictly speaking, a few other titles generate modest gains (in the hundreds of dollars), but these gains could not carry the burdens of the 15 titles that lose money each year. The resulting net profits from across the portfolio are shared among all journal-publishing sections of the AAA. In going OA, Cultural Anthropology is forsaking most future revenue from the Wiley-Blackwell alliance. Whether the society’s new model will prove financially sustainable remains to be seen; we’ll know for sure in three to five years. I have my doubts . . . but will be pleased if my skepticism proves to be misplaced.

After serving for three years on the AAA’s Committee on the Future of Print and Electronic Publishing, I have moved from strong support of OA in anthropology to far more tempered enthusiasm.

Before I explain the roots of this apostasy, let me make two things clear. First, my expression of doubt about OA fundamentalism should not be interpreted as a defense of the excesses of some for-profit publishers. At their worst, they have turned the work of writing, editing, and peer review (“service to the profession”) into a Southern plantation in which we academics work for free and the publishers and their shareholders reap handsome profits from the fruits of our unpaid labor.

Second, thanks to my involvement with the Society for the Anthropology of Lowland South America (SALSA), I’ve seen how digital-only OA can offer an attractive solution to the scale problems of small-society publishing. SALSA’s peer-reviewed journal, Tipití, which published for seven years as a conventional print journal, was losing money at an unsustainable rate, in part because of rising international postage rates and the difficulty of managing subscriptions. (SALSA is a truly international society, with many members in Latin America and Europe.) Once the journal shifted from paper publishing to digital-only, gold OA, its viewership increased from the hundreds to the many thousands in a matter of weeks. That welcome change has not been without financial challenges, however. The journal’s viability depends on free web hosting, no-cost access to the bepress publishing platform, and technical support offered by Trinity University in San Antonio.


Reflections on Nick Bostrom’s Lecture, “Can We Reshape Humanity’s Deep Future?” 7 June 2015, Santa Fe

As part of a series of occasional lectures that we’re calling Dispatches from the Edge, on June 7 the School for Advanced Research sponsored a public lecture by Professor Nick Bostrom (Future of Humanity Institute, University of Oxford, UK), “Can We Reshape Humanity’s Deep Future? Possibilities & Risks of Artificial Intelligence (AI), Human Enhancement, and Other Emerging Technologies.” Bostrom’s talk was a snapshot of his research on existential risk: large-scale events or processes that could lead either to the complete extinction of humanity or to some form of “permanent stagnation.”

Bostrom opened his lecture with a thumbnail history of our species: our emergence as bipedal primates living in small, mobile groups of foragers; the role of the agricultural revolution in supporting larger populations and fostering the emergence of social hierarchy; beginning roughly 250 years ago, the transition to industrial economies and their acceleration of technological innovation; and finally, the digital revolution, which along with the rise of new genetic technologies makes possible (and, in Bostrom’s view, inevitable) the emergence of “superintelligence,” cognitive assets that surpass those of contemporary human beings.

Although Bostrom couldn’t rule out the possibility that existential risks can arise from natural phenomena such as supervolcanoes or asteroid collisions, he argued that, in light of the absence of near-extinction events during the last 100,000 years, the odds of such natural catastrophes presenting a significant existential risk are low. Far more salient, he argued, is anthropogenic risk: the possibility that our own technological activities will prove uncontrollable and ultimately lethal to humankind.

Superintelligence could conceivably emerge in human form through the systematic use of enhancement technologies that would increase human IQ to levels significantly in excess of current norms. But Bostrom leans toward machine AI as the more likely site of superintelligence, perhaps emerging as early as 2050. In this scenario, AI agents approaching human cognitive levels launch a self-perpetuating process that would quickly bring them to a point at which they could assert their own survival priorities over those of their human creators. As Elon Musk described the situation in a Washington Post interview, “If there was a very deep digital superintelligence that was created that could go into rapid recursive self-improvement in a non-algorithmic way … it could reprogram itself to be smarter and iterate very quickly and do that 24 hours a day on millions of computers . . . .”
