The recent boom in large language models has caused a good deal of anxiety among humanities scholars, and historians are no exception. Recently the American Historical Review (AHR) published a forum on artificial intelligence and the practice of history, featuring essays on methods and approaches to writing the history of AI and to writing history in the age of AI. These essays are punctuated by the argument that, at a time when machine learning tools are being introduced into historical research, historians’ expertise remains key to the proper use of such tools. As Kate Crawford and Matthew Jones state in their essays, histories of AI and data—and the questions they ask about the origins, methodologies, and information gaps in data sets—are crucial to the reflective use of data and machine learning tools in humanistic scholarship.[1] Unlike some strains of anthropology, history avoids abstracting data away into theory. The latter often helps historians craft their questions, but the primary task of historical scholarship is to understand the past on its own terms.

Therefore, the major takeaway that historians of data and computing can glean from Code concerns approach rather than method. A number of historians have recently suggested that technical histories of data and algorithms are insufficient for understanding the genealogies and social ramifications of these technologies.[2] In his AHR essay, for instance, Matthew Jones calls for turning “infrastructures of data collection upside down” to look far beyond the technical detail “to understand the creation and analysis of data in the contexts of its production and its use to track the contestation and consolidation of those infrastructures in the organization of social, intellectual, and cultural life.”[3] Code does exactly that. It situates the origins of present-day digital analytics in anthropology and French theory, unearthing the complex and previously uncharted historical contingency of the intellectual currents that shaped the twentieth- and twenty-first-century information age.

We are at the convergence of multiple post-humanisms: the absolute hegemony of the digital in media and communications, the ascendancy of algorithms and AI, and the near ubiquity of IT devices in everyday life. As Geoghegan writes in Code, “Digital cultural analytics premised on pervasive information enclosure appear to be an imminently realizable endeavor” (171).

Of course, there are numerous anthropologies of digital life—algorithms, applications, robots, and other non-human agents. Yet there is also the frequent charge that anthropology is not equipped for our digital futures. Whether qualitative or quantitative, ethnographic work with groups that rarely exceed a few hundred people seems ill-suited to the explication of big data. Or are these even the right questions to ask? What Geoghegan’s book demonstrates is that we are all heirs to “code,” with all of the powerful inequalities that implies. This means that our critiques of digital capitalism must also involve a reflexive understanding of the ways “code” has shaped the anthropological project over the past sixty years, and, conversely, the ways that information society represents a revanchist anthropology grounded in empire. More optimistically, it holds out the possibility of a more interactionist multi-agent ontology, one that extends both fields into latencies opened up by that first generation of cyberneticists.

We are surrounded by multi-agent systems at every moment of our lives. Those agents may be variously intelligent. With generative AI, we have agents that simulate human conversation, so it is easy for people to (mis)understand their agency. Most algorithms, however, toil in the background, screening our job applications, checking purchases against credit histories, touching up photographs. Yet in all cases, it is not so easy to understand how multi-agent systems make decisions. Large language models, for example, are unable to tell us which sources they have consulted in their replies, and text-to-image generation has notoriously surreal moments, as when DALL-E or Stable Diffusion produces monstrous figures like “Loab,” a disquieting image of what looks to be an elderly woman.

This is the world we live in now, but it is not a “post-human” one in the sense of a reduction of human life to manipulable flows of data. Instead, we find ourselves enmeshed in interactive systems that cannot be reduced to information quanta—even if powerful organizations strive to do so. Amidst these heterogeneous systems, communication may be an impossibility, and coordination the optimal outcome. The basis of that coordination lies in a set of questions: Who defines the field of interaction? Will everything be used for profit? For state surveillance?

On the other hand, we might strive to make these systems more explicable, more predictable, more interactive, more responsive, and more grounded in the experiences of diverse communities. “Explainable artificial intelligence,” “community robotics,” “community-based design”: anthropologists are uniquely poised to consider these networks of humans and non-humans as they interact to create their world. They are also well positioned to insist that non-human agents need not serve the whims of capital, and that rejecting information as “command and control” means opening up recombinatory possibilities.

I have approached questions of the digital future and AI from the perspective of science studies rather than anthropology per se, so I am unable to speak to this question within the confines of one discipline. For me, speculating on the digital, AI, or machine learning future requires that I think carefully about strategies of modeling and practices of quantification and commensuration. I consider algorithms to be just another formalizing practice, akin to standardization and rationalization, that is, efforts to regulate behavior and stipulate its parameters. I read Code, then, as a cautionary tale: it describes the potential of cybernetics to address crucial social and political issues, a potential that was lost when French cynicism (and US processes of disciplinary professionalization) caused us to turn our backs on the political commitments of earlier decades. Cybernetics was not alone in shedding its initial political impetus and social critique in the process of being translated into an academic discipline; the second-wave women’s movement and interdisciplinary debates among humanists and natural scientists suffered the same fate once they were reconfigured as feminist theory and science studies on university campuses.[4]

In Le Geste et la Parole (1964), André Leroi-Gourhan, whose work is curiously absent from Code, traces the gradual externalization of biological functions into artifacts, a trend that persists in the creation of intellectual devices such as books, indexes, cards, and code systems. As projects involving artificial environments, from Biosphere 2 to spaceships, continue to evolve, and as algorithms exert increasing influence over the environment, the demand for codes grows with them. With the increasing power of algorithms over human existence, particularly in the context of the development of AI, a new question comes to the forefront. No longer: how have human organisms externalized biological functions? But instead: how are human lives—understood as life forms and forms of life—internalized by artificial environments? The thermostat is often presented as an example of a cybernetic device operating homeostatically, just as an organism would. But beyond this analogy, the challenge is to think about how living organisms, human and non-human, inhabiting controlled artificial environments are themselves modified by technical devices.
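To make the thermostat analogy concrete, the sketch below (my own illustration, not drawn from Code) spells out the negative-feedback loop such a device runs: sense a variable, compare it to a set point, act to reduce the deviation. The thresholds and rates here are invented for the example.

```python
# A minimal sketch of homeostatic (negative-feedback) control, the loop
# the thermostat analogy gestures at. Illustrative only; all values are
# hypothetical.

def thermostat_step(temperature: float, set_point: float, heater_on: bool) -> bool:
    """One cycle of sense -> compare -> act, with a small dead band."""
    if temperature < set_point - 0.5:    # too cold: switch the heater on
        return True
    if temperature > set_point + 0.5:    # too warm: switch it off
        return False
    return heater_on                     # within the band: leave it as is

def simulate(minutes: int = 480, set_point: float = 20.0) -> float:
    temperature, heater_on = 15.0, False
    for _ in range(minutes):
        heater_on = thermostat_step(temperature, set_point, heater_on)
        temperature += 0.05 if heater_on else -0.02  # heating vs. heat loss
    return temperature

print(round(simulate(), 1))  # oscillates around, and settles near, the set point
```

The sketch captures only the device’s side of the loop; the harder question raised above, how environments built from such loops reshape the organisms living inside them, is precisely what the code cannot express.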

This challenge has, for example, been taken up by Stefan Helmreich in the domain of electroencephalography: “The social life of EEGs transformed when they were patched into a cybernetic view of the organism and used not just to classify epileptics and others but also to develop therapies through which brain waves might be used as stimuli for individual self-regulation and feedback […] ‘Biofeedback’ took off in the 1970s and sought to place individual persons in the loop of monitoring and controlling the waves of activity in their bodies, particularly their brains.”[5] With the development of AI, new research questions are opening up, not only to determine the similarities and differences between natural intelligence and its technical modeling, but also to understand how humans explore new biofeedback avenues during the most sophisticated intellectual operations.

Anthropologist Matt Artz has written that AI and digital technologies will keep altering anthropology in profound ways, especially by unveiling insights and patterns humans often miss.[6] Historically, we have seen how new technologies change anthropology both theoretically and methodologically. Modern linguistic anthropology benefited tremendously from the invention of the sound spectrograph, the tape recorder, and the electronic computer.[7] Archaeologists today use LiDAR (light detection and ranging) to measure and map objects that would otherwise remain hidden. Latter-day cultural anthropologists have also treated digital spaces as ethnographic sites in their own right in the growing field of digital anthropology.

Code pushes us to appreciate cybernetics as the “conceptual forerunner of today’s digital exercises, from data crunching to cultural analytics” (175). This appreciation allows us to reflect at a deeper level, including on how the development of AI and digital technologies is not—and has never been—an impersonal project free of sociopolitical interests and consequences. Reflecting on the history of cybernetics, we can see how the use of digital technologies in anthropology may have been inflected by imperialist and technocratic tendencies. However, I do think it is possible to put a technology to purposes different from those for which it was originally used.

The use of cameras in ethnography serves as a fitting example of anthropology’s shifting entanglement with technology. When Mead and Bateson first devised visual methods to record Balinese culture, they had strong inclinations toward objectivity in cultural research.[8] They saw the camera as a tool that could overcome human limitations in observing Balinese postures and spaces. In her expedition proposal to the Social Science Research Council, Mead wrote that the use of a camera would make field research more reliable, as it could “act as an automatic correction on the variability of the human observer.” In my opinion, Mead and Bateson did not use cameras to understand but to document. They saw the Balinese as pathological objects of observation serving as “models for technocratic reform” (57)—in this case, in curing schizophrenia—rather than as research participants.

Today, anthropologists still regard Mead and Bateson’s methods as pioneering in visual anthropological practice. However, the intellectual spirit in which camera technologies are used has changed radically since those methods were first devised. No longer seeing the technology as a tool for objectivity, anthropologists are now critical of its subjectivity and of how it shapes the observer’s point of view. There is a significant movement toward giving marginalized communities—those previously objectified by cybernetic approaches—the autonomy to represent themselves ethnographically through visual culture. In 1992, Caroline Wang and Mary Ann Burris developed a community-based participatory method using cameras, called photovoice.[9] Instead of being observed by anthropologists, research participants in photovoice studies are asked to express their life experiences and social concerns by photographing their surroundings. This method has become widely used in anthropology for its ability to put forward the voices of the marginalized, and I think this will be one part of what anthropology’s digital future looks like.

Notes
1 Kate Crawford, “Archeologies of Datasets,” The American Historical Review 128, no. 3 (September 1, 2023): 1370; Matthew L. Jones, “AI in History,” The American Historical Review 128, no. 3 (September 1, 2023): 1363.
2 Jones, “AI in History”; Eden Medina, “Forensic Identification in the Aftermath of Human Rights Crimes in Chile: A Decentered Computer History,” Technology and Culture 59, no. 4 (2018): 100–133. This concern was also voiced at the meetings of the University of Cambridge Mellon-Sawyer Seminar on Histories of Artificial Intelligence in 2020 and 2021, organized by Syed Mustafa Ali, Stephanie Dick, Sarah Dillon, Matthew Jones, Jonnie Penn and Richard Staley.
3 Jones, “AI in History,” 1363.
4 See Elena Aronova’s work on the prehistory of science studies, especially: “The Politics and Contexts of Soviet Science Studies (Naukovedenie): Soviet Philosophy of Science at the Crossroads,” Studies in East European Thought 63, no. 3 (2011): 175–202; “The Congress for Cultural Freedom, ‘Minerva’, and the Quest for Instituting ‘Science Studies’ in the Age of Cold War,” Minerva 50, no. 3 (2012): 307–37; and “Big Science and ‘Big Science Studies’ in the United States and the Soviet Union during the Cold War,” in Science and Technology in the Global Cold War, ed. Naomi Oreskes and John Krige (The MIT Press, 2014).
5 Stefan Helmreich, “Potential Energy and the Body Electric: Cardiac Waves, Brain Waves, and the Making of Quantities into Qualities,” Current Anthropology 54, no. S7 (October 2013): S144.
6 Matt Artz, “Ten Predictions for AI and the Future of Anthropology,” Anthropology News website, May 8, 2023.
7 Dell H. Hymes, “Notes toward a History of Linguistic Anthropology,” Anthropological Linguistics 5, no. 1 (1963): 59–103.
8 Ira Jacknis, “Margaret Mead and Gregory Bateson in Bali: Their Use of Photography and Film,” Cultural Anthropology 3, no. 2 (1988): 160–77.
9 Caroline Wang and Mary Ann Burris, “Photovoice: Concept, Methodology, and Use for Participatory Needs Assessment,” Health Education & Behavior 24, no. 3 (1997): 369–87.