The established narrative in the history of technology holds that the information age—at least, its twentieth- and twenty-first-century iterations—has its roots in mid-century cybernetics, an engineering discipline of control and communication that approached humans and machines alike as information-processing systems.[1] Cybernetics, as many historians have shown, then supplied the human sciences with new metaphors and conceptual tools to reorient their tasks, goals, and methodologies in the 1950s and 1960s.[2] The mid-century connections between engineering and human-science approaches to communication are also the subject of Bernard Geoghegan’s Code: From Information Theory to French Theory. And yet Code’s take on the relationship between the hard and soft sciences diverges in a significant way from existing historical accounts.

As Geoghegan demonstrates, far from being mere recipients of cybernetic ideas, the human sciences laid as much groundwork for the information age as engineering, computing, and mathematics. The American anthropologists Gregory Bateson and Margaret Mead (previously treated by historians as secondary figures in the cybernetics movement), the Russian-born linguist Roman Jakobson, and the French ethnologist Claude Lévi-Strauss (two entirely new historical actors in the twentieth-century information sciences) began approaching human communication in terms of signals, codes, and information patterns even before the mathematicians Norbert Wiener and Claude Shannon formulated their theories of information and communication at MIT and Bell Labs in the 1940s. Because the cybernetics agenda happened to match their already established research programs, these human scientists would later become part of the international cybernetics network or, as in the case of Jakobson, would benefit from the patronage of institutions traditionally associated with information science.

Not only does Geoghegan bring new actors into the history of cybernetics, he also revisits well-known figures, such as the computer engineer Vannevar Bush and Claude Shannon, to show how, back in the 1930s, the human sciences, especially ethnography and eugenics, informed their work on computing and mathematics. For instance, as Chapter One tells us, at the height of the American technocracy movement, Bush introduced Shannon to the Eugenics Record Office, where the latter worked to streamline its information-processing methods. This is one of many historical episodes that allow Geoghegan to argue that the social engineering of the 1920s and 1930s was as important in shaping cybernetics as the computer engineering of mid-century.

Code introduces two overlapping power asymmetries—an international one and an epistemic one. Many of Geoghegan’s white Western scholars developed their theories using data about “primitive” people. The US and France were the places where universal theories of the human condition were formulated, while the non-Western world served as a data mine for those theories. This line of analysis continues the historiographical trajectory that examines how Western science—ranging from early modern natural philosophy to twentieth-century anthropology—used colonized lands as data repositories for its theoretical projects.[3] Tracing the origins of cybernetics to Dutch colonial Bali rather than to US East Coast engineering labs, Geoghegan demonstrates that cybernetics was yet another Western science with roots deep in settler colonialism.

The book’s conclusion intriguingly suggests a direct line of inheritance between the alliance of “data-driven research into the human condition” and structuralism, on the one hand, and present-day data analytics and algorithmic platforms, on the other (59, 177). But, as historians and STS scholars have recently demonstrated, one prominent marker of the data-driven algorithmic systems of the twenty-first century is their fixation on prediction.[4] From Netflix’s predictive models for film recommendation to Google’s content-targeted advertising, present-day data-driven algorithmic systems are designed to anticipate and manipulate user behavior.[5] Their negative effects on society have been significant, from reinforcing racism to steering political opinions.[6]

The historian Matthew Jones and the psychologist Shoshana Zuboff both identify an instrumentalist imperative at the heart of predictive algorithmic systems. Analyzing their mathematical foundations, Jones argues that their developers are unconcerned with establishing any knowledge about the statistical laws that govern the predicted event.[7] Zuboff makes a parallel argument as she examines how predictive algorithms manipulate user behavior: understanding the user is beyond the scope of their design, as they reduce “human experience to measurable observable behavior.”[8] It is this emphasis on prediction over understanding that makes these systems so ill-equipped to solve hard social problems. For instance, as the political scientist Virginia Eubanks has shown, when such predictive systems are introduced into state government, particularly welfare offices, they make decisions that ultimately punish whoever turns to the state for support, failing to see the human behind the data inputs.[9]
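To make that contrast concrete, the following is a minimal, purely illustrative sketch (my own, not drawn from Code or from Jones’s or Zuboff’s texts) of the instrumentalist mode they describe: a toy click predictor fit by gradient descent. All feature names and data are hypothetical.

```python
# Illustrative sketch only: a toy "click predictor" in the instrumentalist
# mode described above. It tunes weights to minimize prediction error over
# observed behavior; nothing in it models the user's reasons or context.
# All feature names and data are hypothetical.
import math
import random

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, epochs: int = 200, lr: float = 0.1):
    """Fit logistic regression by gradient descent on (features, clicked) pairs."""
    n_features = len(samples[0][0])
    w, b = [0.0] * n_features, 0.0
    for _ in range(epochs):
        for x, y in samples:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            grad = p - y  # gradient of the log-loss with respect to the logit
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
            b -= lr * grad
    return w, b

# Hypothetical behavioral signals: [seconds_on_page, prior_clicks, time_of_day]
random.seed(0)
data = []
for _ in range(200):
    x = [random.random(), random.random(), random.random()]
    data.append((x, 1 if x[0] > 0.5 else 0))  # behavior tracks one observed signal

w, b = train(data)
p = sigmoid(sum(wi * xi for wi, xi in zip(w, [0.9, 0.1, 0.4])) + b)
print(f"predicted click probability: {p:.2f}")  # a forecast, not an explanation
```

The sketch “works” as a predictor without ever asking what time spent on a page means to the person measured, which is precisely the reduction of experience to observable behavior that Zuboff names.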

It would be interesting to consider how the cybernetic approach of Margaret Mead, Gregory Bateson, and Claude Lévi-Strauss compares to the instrumentalist imperative of contemporary predictive data analytics. Were the empiricism and abstractions of the human sciences described by Geoghegan ultimately driven by an ambition to better understand the human condition? History reveals paths not taken, helping us imagine alternative world orders and knowledge systems. It is unclear, however, how the mid-century human scientists discussed in this book engaged with the binary of understanding versus prediction. The unique historical moment unearthed by Geoghegan may be as much an example of an alternative, perhaps more humane, approach to data collection and deployment as a vital step toward the instrumentarian data analytics to which we are exposed at present.

The recent boom in large language models has caused a good deal of anxiety among humanities scholars, and historians are no exception. Recently, the American Historical Review (AHR) published a forum on artificial intelligence and the practice of history, featuring essays on methods and approaches to writing the history of AI and writing history in the age of AI. These essays share the argument that, at a time when machine learning tools are being introduced into historical research, historians’ expertise remains key to the proper usage of such tools. As Kate Crawford and Matthew Jones state in their essays, histories of AI and data—and the questions they ask about the origins, methodologies, and information gaps in data sets—are crucial to the reflective use of data and machine learning tools in humanistic scholarship.[10] Unlike some strains of anthropology, history avoids abstracting data away into theory. Theory often helps historians craft their questions, but the primary task of historical scholarship is to understand the past on its own terms.

Therefore, the major takeaway that historians of data and computing can glean from Code concerns approach rather than method. A growing number of historians have suggested that technical histories of data and algorithms are insufficient for understanding the genealogies and social ramifications of these technologies.[11] In his AHR essay, for instance, Matthew Jones calls for turning “infrastructures of data collection upside down” to look far beyond the technical detail “to understand the creation and analysis of data in the contexts of its production and its use to track the contestation and consolidation of those infrastructures in the organization of social, intellectual, and cultural life.”[12] Code does exactly that. It situates the origins of present-day digital analytics in anthropology and French theory, unearthing the complex and previously uncharted contingencies of the intellectual currents that shaped the twentieth- and twenty-first-century information age.

Notes
1 Ronald R. Kline, The Cybernetics Moment: Or Why We Call Our Age the Information Age, 1st edition (Baltimore: Johns Hopkins University Press, 2015).
2 Jamie Cohen-Cole, The Open Mind: Cold War Politics and the Sciences of Human Nature, Reprint edition (University of Chicago Press, 2016); Deborah Weinstein, The Pathological Family: Postwar America and the Rise of Family Therapy (Cornell University Press, 2013).
3 Rebecca Lemov, Database of Dreams: The Lost Quest to Catalog Humanity (New Haven: Yale University Press, 2015); Londa Schiebinger, “Bioprospecting,” in Plants and Empire: Colonial Bioprospecting in the Atlantic World (Cambridge, MA: Harvard University Press, 2007), 73–104; Joanna Radin, “‘Digital Natives’: How Medical and Indigenous Histories Matter for Big Data,” Osiris 32, no. 1 (September 2017): 43–64.
4 Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (St. Martin’s Publishing Group, 2018).
5 Nick Seaver, Computing Taste: Algorithms and the Makers of Music Recommendation (University of Chicago Press, 2022), 55–58; Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (Public Affairs, 2019), 82.
6 Rebecca Lemov, “Into the Whirlpool: How Predictive Data Put Brainwashing on the Spin Cycle,” The Hedgehog Review 22, no. 2 (June 22, 2020): 74–86; Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: New York University Press, 2018); Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Crown, 2016).
7 Matthew L. Jones, “How We Became Instrumentalists (Again): Data Positivism since World War II,” Historical Studies in the Natural Sciences 48, no. 5 (2018): 673–84.
8 Shoshana Zuboff, “Surveillance Capitalism and the Challenge of Collective Action,” New Labor Forum 28, no. 1 (January 1, 2019): 30.
9 Virginia Eubanks, “Allegheny Algorithm,” in Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (St. Martin’s Publishing Group, 2018).
10 Kate Crawford, “Archeologies of Datasets,” The American Historical Review 128, no. 3 (September 1, 2023): 1370; Matthew L. Jones, “AI in History,” The American Historical Review 128, no. 3 (September 1, 2023): 1363.
11 Jones, “AI in History”; Eden Medina, “Forensic Identification in the Aftermath of Human Rights Crimes in Chile: A Decentered Computer History,” Technology and Culture 59, no. 4 (2018): 100–133. This concern was also voiced at the meetings of the University of Cambridge Mellon-Sawyer Seminar on Histories of Artificial Intelligence in 2020 and 2021, organized by Syed Mustafa Ali, Stephanie Dick, Sarah Dillon, Matthew Jones, Jonnie Penn and Richard Staley.
12 Jones, “AI in History,” 1363.