
TROUBLED BOUNDARIES [Posted April 29, 2025]
Webb Keane, a cultural anthropologist known especially for his work on ethics (including the much-praised Ethical Life: Its Natural and Social Histories, 2015), has followed Ethical Life with a book offering reflections on the moral challenges generated by human relationships with animals, machines, and artificial intelligence. Animals, Robots, Gods: Adventures in the Moral Imagination is admirably concise and apparently directed to a general audience. “This book,” Keane says, “invites you to broaden—and even deepen—your understanding of moral life and its potential for change by entering those contact zones between humans and whatever they encounter on the other side” (p. 3).
Although it’s fair to say that the main event in this book is the exploration of encounters between humans and machines, Keane uses the moral bonds between humans and animals as a way of easing into his central argument. People in many societies survive by killing animals, both domesticated and wild, yet in these same societies it is not unusual for hunters or individuals sacrificing an animal for religious purposes to see animals as sentient benefactors who need to be thanked before being killed. Even in our own society, people commonly speak to pets and other animals in a way that expresses an “I-you” relationship infused with ethical implications. After reviewing a range of animal-human relationships from around the world, Keane observes, “Once [people] admit a social relationship with an animal, they cannot be indifferent to it.”
For me, the book’s discussion of human-animal relations sits somewhat uneasily alongside its treatment of human-machine interactions, even if both exemplify what Keane calls “boundary trouble.” Some level of reciprocal responsibility to other life-forms, however limited, seems different from, say, the relationship a human has with a heart-lung machine sustaining that person’s life. Your opinion may differ.

When humans and machines are “fused”—the cyborg being the key example—things get more complicated. We’re now largely accustomed to entry-level devices such as hearing aids, eyeglasses, and artificial joints. They can be seen simply as tools that facilitate human activity.
[Image above generated by Google Gemini AI in response to the command “create a picture of a cyborg.”]
But consider the ethical problems that arise when people are kept alive by machines after entering what’s called a “permanent vegetative state.” Are a regular machine-directed heartbeat and steady respiration enough to consider someone “alive”? Keane walks readers through case studies of how different societies deal with this question.
Robots take this to the next level, especially when they are given human form rather than, say, the utilitarian appearance of a Roomba. When robots are designed to look and act like humans, things get creepy, at least for people in some societies. In contrast, there is ample evidence that the Japanese have warmed up to humanoid robots. Drawing on the research of anthropologist Jennifer Robertson, Keane notes that “the distinction between natural and artificial, and between non-human and human, is differently configured than in the English-speaking world.” Hence the creation of Japanese rituals to mark the death of much-loved robotic dogs that have reached the end of their service life.
The sections of the book that leap from today’s headlines concern the ethical and perhaps even ontological dilemmas generated by increasingly sophisticated artificial intelligence. Should we be alarmed that humans in various parts of the world are turning to AI chatbots for psychotherapeutic conversations or even romantic exchanges? Do these interactions somehow impoverish relations with real humans, or do they enhance social skills in constructive ways? What are the parallels between AI-driven chatbots and such traditional practices as shamanism and divination? Keane’s exploration of these questions is unfailingly interesting even if definitive answers remain elusive.
It should be clear that Animals, Robots, Gods makes extensive use of cross-cultural comparison, a tool of cultural anthropology that in some quarters is regarded with skepticism. (I don’t inhabit that quarter.) One of Keane’s important claims, however, is that AI and the algorithms behind it are so inherently shaped by the cultural context of their creation that we cannot assume they represent the world accurately.
Given the book’s commitment to relativism, readers looking for universal moral/ethical rules relating to AI, cyborgs, and robots won’t find them here. What they will find instead is an accessible meditation on the world our species is creating through technologies that make increasingly ambiguous the boundaries of the human.
Here’s a link to a 2015 lecture at SAR by Oxford’s Nick Bostrom. At the time, one of the attendees thanked me for bringing Bostrom to speak in Santa Fe, remarking, “I agree with Bostrom that AI presents an existential risk. But we probably don’t need to worry about it for decades.”




