The Anthropology of AI with Joseph Wilson

Joseph Wilson

What can AI tell us about what it means to be human? It’s the question that has been central to countless science fiction stories over the years.

Neural networks and language models like ChatGPT can tell us a lot about human intelligence, argues Joseph Wilson, a PhD candidate in the Department of Anthropology, just perhaps not in the ways we might expect.

“AI, as far as I'm concerned, is not a technical thing, a scientific thing,” says Wilson. “It's not even a scientific concept, because nobody can really define it. It keeps changing over time. It's a cultural thing, a social concept. It's a cultural fact, not a scientific fact.”

Wilson is currently writing a book entitled Artificial Hype, recounting the events of 2023, a banner year in the development of artificial intelligence. He points out that social scientists have published relatively little on the field of AI.

“This is a really interesting time where technology is bumping up against this idea of what it means to be human,” says Wilson. “Where does the human begin and where does the machine begin? People have always asked these philosophical questions. But now people are talking about AI in their everyday lives.” 


Wilson’s fieldwork has taken him to AI conferences to speak with computer scientists. He has also taken part in a project for Cohere for AI, a not-for-profit assembling data for a large language model, and worked with a startup making silicon computer chips to train language models. “I think that a lot of the scientists like Geoff Hinton and Richard Sutton have been immersed in this stuff so long they've forgotten that the computer as a brain metaphor is just a metaphor,” says Wilson. “And they really think that because of the language model’s ability to put together words, that understanding, and even subjective experience or consciousness will follow, and in many cases has followed. That's not really a fringe opinion anymore in AI circles, which I find astonishing.”

[Image: a conceptual illustration of a language model, showing a human face made of electrical connections speaking a random assortment of letters.] Many computer scientists already believe that a language model's ability to put words together is indicative of intelligence.

2023 was the year of ChatGPT, when improvements in language models meant that tech companies felt confident announcing that artificial intelligence had arrived. Wilson points out that in technological terms, little had changed from the language models that had already existed for decades. They had simply become much better at convincing people they were conversing with thinking entities.

“Language models existed before, they just weren't as good,” says Wilson. “And another question is, how do we define good? We seem to agree that the more humanlike, the better we think it is. Deep learning and machine learning techniques have been used to do astonishing things in the last 20 years. But this one really grabbed everybody because it really felt like there was somebody there, it really felt uncanny.”

Wilson mentions the psychological phenomenon of pareidolia, the tendency to perceive specific, meaningful images in random patterns, an effect exploited by the Rorschach ink blot test. It’s what leads humans to see shapes in clouds or faces burnt into pieces of toast. Wilson argues that it’s this tendency to perceive objects as in some way “human” that allows us to accept that language models are intelligent, thinking beings.

“That effect is part of who we are,” says Wilson. “It's part of the empathy we have for other humans and other animals. We ascribe agency to things, even if we know they're not technically alive, but we still speak of them and act as if they have agency, emotion, and a capacity to think.”

Wilson notes that Western culture has an ingrained idea of a Cartesian split between mind and body. It’s the idea that allows transhumanists like Ray Kurzweil to suggest humans might one day upload their minds onto computers while remaining essentially the same sentient being.

“I don't think you can have intelligent beings in that way,” says Wilson. “We use the words artificial intelligence without AI having bodies, cells and emotions and fears like we do. I think the body and mind are inextricably intertwined. For a lot of cultures around the world, when you talk about intelligence sometimes it means you can use your body really well, in hunting or tracking for instance, or in having a lot of knowledge about how to use medicinal plants. Intelligence might be related to things that seem irrational to Westerners. Activities like shamanism, or seeing into the future for instance. The idea of intelligence as being just information processing capacity, that it is separable from all the rest of it, that’s a cultural quirk of the West.”

While true artificial intelligence seems set to elude scientists for some time yet, or maybe forever, what human beings are willing to consider a sentient being like ourselves may evolve as quickly as the technology does. “Human beings are uncommonly social creatures,” says Wilson. “It stands to reason we see other ‘humans’ everywhere.”