The ethnography of robots: interview at Ethnography Matters
This was an interview I did with the wonderful Heather Ford, originally posted at Ethnography Matters (a really cool group blog) way back in January. No idea why I didn’t post a copy of this here back then, but now that I’m moving towards my dissertation I’m thinking about this kind of stuff more and more. In short, I argue for a non-anthropocentric yet still phenomenological ethnography of technology, studying not the culture of the people who build and program robots, but the culture of the robots themselves.
Heather Ford spoke with Stuart Geiger, PhD student at the UC Berkeley School of Information, about his emerging ideas about the ethnography of robots. “Not the ethnography of robotics (e.g. examining the humans who design, build, program, and otherwise interact with robots, which I and others have been doing),” wrote Geiger, “but the ways in which bots themselves relate to the world.” Geiger believes that constructing and relating an emic account of the non-human should be the ultimate challenge for ethnography, but that he’s getting an absurd amount of pushback on it. He explains why in this fascinating account of what it means to study the culture of robots.
HF: So, what’s new, almost-Professor Geiger?
SG: I just got back from the 4S conference — the annual meeting of the Society for Social Studies of Science — which is pretty much the longstanding home for not just science studies but also Science and Technology Studies. I was in this really interesting session featuring some really cool qualitative studies of robots, including two ethnographies of robotics. One of the presenters, Zara Mirmalek, was looking at the interactions between humans and robots within a modified framework from intercultural communication and workplace studies.
I really enjoyed how she was examining robots as co-workers from different cultures, but it seems like most people in the room didn’t fully get it, thinking it was some kind of stretched metaphor. People kept giving her the same feedback that I’ve been given — isn’t there an easier way you can study the phenomena that interest you without attributing culture to robots themselves? But I saw where she was going and asked her about doing ethnographic studies of robot culture itself, instead of the culture of people who interact with robots — and it seemed like half the room gave a polite chuckle. Zara, however, told me that she loved the idea and we had a great chat afterwards about this.
HF: What do you think people are upset about?
SG: The more middle-of-the-road stances come from people who don’t personally have a strong reaction either way, but tell me that I’ll have to fight an uphill battle against angry humanists, whom I’ll talk about later. These people aren’t really against the idea, but they don’t really see the value added in ascribing culture to the non-humans. They tell me that there are better and less controversial ways of analyzing, say, distributed cognition in a heterogeneous network of humans and robots. It’s a response that I appreciate, because it would be futile to go through all of this work on an ethnography of robots if my analysis is otherwise identical to an ethnography of robotics. And then the most polite responses I get are from people who tell me it is interesting, and then when I prod them further to ask whether they actually buy it, they tell me that they don’t *yet* think it can be done, but would like to see what I end up with.
Some of the really negative responses I get involve a visceral reaction against attributing ‘culture’ to the realm of the non-human. I understand this — anthropology is, by definition, anthropocentric: it is concerned with the human condition, as it is constituted in various localities and peoples. This is the same fight we Latourians have with sociologists about the term “agency”: there is a very deeply-rooted assumption that humans have some innate, unique qualities that distinguish us not only from mere matter but from other animals as well. When someone comes along and makes a very nuanced point about how objects have agency, the most immediate and natural response is to cry anthropomorphism, which is easy to rebut.
But then comes a much more worthy ontological argument from people who really know their stuff: that when Latour ascribes agency to objects, he actually manages to do so by keeping the agency of humans and the agency of non-humans symmetrical. Against the standard, boring objection that he ascribes too many human characteristics to non-humans, what is really going on is that he accomplishes so much by taking away so many of those ‘uniquely human’ qualities from human agents. This is why Latour never goes inside of anyone’s head, why he rarely tries to give a psychological or cognitive account in the actor-networks he studies. (Read Latour’s review of “Cognition in the Wild” by Ed Hutchins for more on this, and you can see that he loves the idea that these seemingly human abilities like cognition are not pre-given but themselves an effect of a heterogeneous network of humans and non-humans.)
Anyways, far from being an anthropomorphism, Latour’s ontology is flat, in which all entities have the same capacities. That is, they have the same a priori capabilities, but they are definitely not equal after socio-technical relations emerge and start operating. This all means that against the vulgar interpretations of ANT, objects don’t have intentionality or consciousness, because — and this is the really important point — neither do humans. Or, in another interpretation, perhaps humans do have intentionality or consciousness, but it makes no difference one way or another. A good actor-network theorist is able to take some existing system in which there are far too many explanations based on those uniquely human qualities and give an alternative account that relies instead on materials, technologies, infrastructures, documentation, and other modes of externalized practices. It is not to make the more futile argument that norms and consciousness and all those warm fuzzy humanisms don’t exist, but that they’re not necessary.
Anyways, the same thing happens with me in my ethnography of robots, as I’m effectively taking life out of culture. You can see why both sociologists and anthropologists object to this, albeit for slightly different reasons. Sociologists will allow, for example, some analysis of the sociality of bees, while anthropologists will reject out of hand an ethnography of bees (which, like robots/robotics, is different from an ethnography of bees-with-humans). But both seem opposed to attributing sociality or culture to a fundamentally non-living set of individuals. Or even calling non-living entities ‘individuals’ in the first place. And I won’t fall into the trap of saying that robots are living and then mapping human categories onto robot phenomena (e.g. consciousness = statefulness, cognition = code), even though it might seem to make things easier in the short term. More on that later, but for now be content that all of these things are possible without robots having some advanced AI.
Any ethnography of a non-human society would have to fight the same kind of battle that Latour fought over agency, if it didn’t wish to succumb to the very tempting but misguided prospect of simply importing and mapping existing ontological categories from sociology: e.g. norms in a robot society are found in protocols. This, by the way, would be using ‘culture’ and indeed the entire ethnographic framework as one massively-stretched analogy, which isn’t the point. The argument is not so much that a robot society is ‘alive’ in the same way that human societies are — that robot societies have, say, deviant individuals, fluid norms, fascinating rituals, internal contradictions, complicated power relations, and many more weirdly beautiful and complex aspects hidden just below the surface.
Rather, the point of anthropology is typically to locate a people who are strange and foreign to us, and then relate the way in which those people live, showing not only how they are different from us but also how they are the same. In doing so, we learn not only about others, but also about ourselves. So in that framework, I tend to agree with the critics who say that the only way to give a vitalistic account of a robot society is by projecting too many human qualities onto the non-human. What is then left is a non-vitalistic ethnography: an account of a culture devoid of life. As with Latour and agency, once we show that life is not a necessary criterion for this thing called culture, then the fun really begins — and you can see why lots of people would oppose this.
HF: No friends for robot anthropology, then?
SG: I do have some allies and kindred spirits, and I keep returning to this quote from Deleuze and Guattari’s A Thousand Plateaus on music: “Of course, as Messiaen says, music is not the privilege of human beings: the universe, the cosmos, is made of refrains … The question is more what is not musical in human beings, and what is already musical in nature. Moreover, what Messiaen discovered in music is the same thing ethnologists discovered in animals: human beings are hardly at an advantage, except in the means of overcoding, of making punctual systems.” Music is but one of many domains that is typically seen as inherently social and therefore uniquely human, and the anthropocentric perspective tends to reduce everything to how it functions in the human experiential frame. And on a side note, this is why I’m so excited by Ian Bogost’s upcoming book “Alien Phenomenology: Or What It’s Like To Be A Thing” — the title just says it all, doesn’t it?
And before you start to think that I’m envisioning some sort of AI-based fantasy of the singularity in which robots start to replace all of us social humans — therefore locating the sociality of robot culture in its ability to stand in for humans — that’s definitely the exact opposite of where I’m going. Robots can be said to have their own culture precisely because they don’t need to copy our sociologisms in order to be social, although what they do in their own social realm may not easily map on to things we do in our social realm. This is probably what fascinates me most about this project. And it is precisely for this reason that we must absolutely resist the temptation to make cheap analogies between things that happen in robot culture and human culture, such as saying that protocols are just robot norms.