The Coder Who Came in from the Cold

A review of From Russia with Code: Programming Migrations in Post-Soviet Times, Mario Biagioli and Vincent Antonin Lépinay eds., Duke University Press, 2019.

From Russia with Code is the product of a three-year research effort by an international team of scholars connected to the European University at Saint Petersburg (EUSP). It benefited from the patronage of two important figures: Bruno Latour, who pioneered science and technology studies (STS) in France and oversaw the creation of a Medialab at Sciences Po in Paris; and Oleg Kharkhordin, a Russian political scientist with a PhD from the University of California at Berkeley who served as EUSP's rector for most of the study's duration. Based on more than three hundred in-depth interviews conducted from 2013 through 2015, the research project also benefited from a rare window of opportunity offered by the political conditions prevailing at the time. Supported by a consortium of Western research institutions, it was partially funded by a grant from the Ministry of Education and Science of the Russian Federation for the study of high-skill brain migration. It could build on the solid foundation of EUSP, a private graduate institute whose academic independence is secured by an endowment fund that is one of the biggest in the country. The brain drain of IT specialists was obviously a matter of concern for Russian authorities, as surveys showed that in 2014 the emigration of Russian scientists and entrepreneurs was by a wide margin the highest since 1999. The movement was amplified after 2014 by Russia's decision to annex the Crimean Peninsula and, in 2022, by its all-out war of aggression against Ukraine. Conditions for fieldwork-based studies and international research projects in Russia would certainly be different today. The book's chapter on civic hackers illustrates how fast the ground has moved in the past ten years: most of the civic tech projects it describes were affiliated with the foundation created by Alexey Navalny, a Russian opposition leader who was detained in 2021 and died in a high-security prison in February 2024.

Preventing the brain drain

The research questions framing the project demonstrate how social science can contribute to policy discussions while translating practical issues into scholarly interrogations. The concerns of the Russian authorities that sponsored the project are well reflected in the topics covered and the questions addressed. How can Russia prevent or reverse a brain drain that was perceived as a direct threat to the nation's sovereignty? How can it avoid dependence on Western imports and cultivate world leaders in an industry dominated by the GAFA companies? Is import substitution in the IT sector a viable strategy, or should the country rely on foreign direct investment and integration into global value chains? Could Russia create its own version of Silicon Valley by encouraging the clustering of industries in special economic zones and technoparks? These questions are reframed and displaced through the lenses of the disciplines mobilized by the members of the research team: STS, transition-to-market theory, economic geography, innovation policy studies, corporate management, migration studies, and so on. But mostly, From Russia with Code helps answer the questions that readers familiar with IT know all too well: why are Russian programmers so talented and prized by the market? What explains their unique combination of skills, and how can these skills be integrated into a foreign business setting? Is it true that their technical prowess is offset by a lack of managerial skills and poor entrepreneurial spirit? The list of famous Russian IT developers includes Andrei Chernov, one of the founders of the Russian Internet and the creator of the KOI8-R character encoding; Andrey Ershov, whose research on the mathematical nature of compilation was recognized with the prestigious Krylov Prize; Mikhail Donskoy, a leading developer of Kaissa, the first computer chess champion; Alexey Pajitnov, inventor of Tetris; and Yevgeny Kaspersky, founder of cybersecurity and anti-virus provider Kaspersky Lab. Russia is one of the few countries that is not dominated by Google, Facebook, and WhatsApp, having developed its own search engine (Yandex), social network (VKontakte), and messaging app (Telegram). A last question lurks in readers' minds: what are Russian hackers really up to, and should we be afraid of their cyberattack capabilities?

The standard diagnosis of Russia's IT capacity is framed by transition theory and posits that "Russians historically have been good at invention but poor at innovation." Russian computer scientists built successful academic careers outside their homeland, and many global technology giants such as Apple, Google, Intel, Microsoft, or Amazon retain Russian programmers as valuable talents. Yet Russian IT entrepreneurs are scarce both in Russia and abroad, and outstanding success stories are the exception rather than the rule. It took a generation to produce a Sergey Brin, the co-founder of Google, who arrived in the United States at the age of six and whose Russian Jewish parents, typically, pursued teaching and research careers instead of turning to the corporate world. The virtuosity of Russian software programmers is often explained by their high-level training in mathematics and pure science. The Soviet Union maintained a top-class scientific apparatus, from the fizmat model high schools specializing in math and physics to the dense network of research institutes, science cities, and elite academic institutions like the Academy of Sciences. This strong institutional basis translated into a high number of Nobel prizes and science olympiad laureates. Russian IT developers are praised for their deep interest and immersion in research, an inventive turn of mind, the ability to think independently and offer innovative solutions, and their intuitive grasp of complex problems. But they are also lambasted for their lack of management and entrepreneurial skills. Management was something to which Soviet scientists and science students had virtually no exposure. Even now, business culture is still perceived by many in the community as a superfluous and even disingenuous element. According to the standard view, Russian tech specialists are often interested mainly in new and technically exciting projects, to the point of disregarding the interests of their clients. They tend to think that if an idea is good technically, it will automatically translate into commercial success. They are criticized for a lack of business acumen, poor business etiquette, a certain intolerance for risk, a limited sense of the global market, and a disinterest in management issues, which they see as "bullshit."

Lack of management skills

The studies assembled in From Russia with Code both validate and complicate this diagnosis. Russian IT specialists are certainly heirs to a tradition that values the plan over the market, pure science over applied technology, and developing elegant responses to abstract questions over providing practical solutions to specific problems. Technical skills can be acquired through brute force and a sound foundation in basic science; management culture takes much longer to cultivate and relies more heavily on "soft skills." The history of computer science in the Soviet Union lies at the root of the differences in programming cultures between East and West. As long as informatics remained a basic science akin to applied mathematics, Soviet scientists remained at the forefront of the discipline. Although cybernetics was initially perceived as an American "reactionary pseudoscience," it quickly became part of a vision of a socialist information society. As in the United States, early computers were intended for scientific and military calculations. A universally programmable electronic computer known as MESM was created in 1950 by a team of scientists directed by Sergey Lebedev at the Kiev Institute of Electrotechnology. Electrical engineering and programming were among the few careers in the Soviet Union that were relatively open to Jews and to women: hence their large numbers in these professions. The engineering education was fairly broad, with heavy emphasis on mathematics and physics, but with little hands-on exposure to computers: according to one former student, "learning to program without computers was akin to learning to swim without water." Hardware limitations forced Soviet programmers to write programs in machine code until the early 1970s. By that time, the Soviet government had decided to abandon the development of original computer designs and encouraged the cloning of existing Western systems. A program to expand computer literacy in Soviet schools was one of the first initiatives announced by Mikhail Gorbachev after he came to power in 1985. A network of afterschool education centers offering programming classes for children led to the wide popularity of BASIC and other programming languages.

A half century's worth of Soviet experience with computing did not just disappear overnight with the end of the Soviet Union. Russians continued to play by the old rules they had internalized in the Soviet economy. The technical skills that Russian software programmers are internationally appreciated for and identified with are skills they developed through the very specific Russian (and formerly Soviet) educational system. A case study of Yandex, the company behind Russia's main search engine and the fourth-largest in the world, illustrates how coding socializes IT workers and creates communities of practice aligned with corporate objectives. Computer code is written in languages that must be executed by machines, leaving no space for semantic ambiguity. At the same time, and for the same reason, there is a specific sociality to code, to the extent that lines of code also encapsulate relationships of collaboration, training, and skill transfer. At Yandex, young recruits are encouraged to immerse themselves in the source code of the company and to spot errors or typos for debugging. This way they learn the conventions of the community, all of which are inscribed in the codebase. Face-to-face interactions and oral communication are limited, as developers work from different office buildings and spend most of their time facing their computer screens, writing code or discussing it through chat channels. Yandex has a tradition of writing code without comments in natural language: the code should "speak for itself" by being accurate, simple, and "clean." The very first thing every new employee has to learn is how to make code readable and improve its utility for human readers. As in other programming communities, there is a difference in style between the "mathematicians" who prefer high-level languages such as Python and the "engineers" who favor low-level languages like C++. But projects at Yandex often mix the two approaches, and the corpus they create remains open to criticism and correction. All employees have access to the full codebase of the company and are free to comment on ongoing projects, upholding long-held principles of communal help that hark back to an idealized Soviet past.
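
What "clean," self-explanatory code means in practice is easy to illustrate. The sketch below is my own, not an example from the book, and every name in it is invented; it shows the same logic written twice, first needing a natural-language comment and then speaking for itself:

```python
# Hypothetical illustration (not from the book) of code that "speaks
# for itself": names and structure replace the natural-language comment.

# Opaque version: the reader needs the comment to follow the logic.
def f(xs):
    # keep at most two copies of each item
    s, out = {}, []
    for x in xs:
        s[x] = s.get(x, 0) + 1
        if s[x] <= 2:
            out.append(x)
    return out

# "Clean" version: the comment becomes redundant.
def drop_excess_duplicates(items, limit=2):
    """Return items in order, keeping at most `limit` copies of each."""
    counts = {}
    kept = []
    for item in items:
        counts[item] = counts.get(item, 0) + 1
        if counts[item] <= limit:
            kept.append(item)
    return kept

assert f(["a", "a", "a", "b"]) == drop_excess_duplicates(["a", "a", "a", "b"])
```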

Smart cities and technoparks

A key concern of policymakers is to create the conditions in which an IT industry can flourish. Interventions to promote public-private partnerships and foster cooperation between institutions and actors occur at different scales, from macro to micro: special economic zones, regional corridors, smart cities, creative hubs, technoparks, startup incubators, rentable workspace, and so on. Russia can build upon a model of science promotion that concentrated resources in isolated science cities and non-teaching research institutions such as the Academy of Sciences. That model was successful at generating scientific breakthroughs and achieving technological milestones in fields such as space exploration or the nuclear arms race. However, it failed consistently at translating scientific discovery into technological innovation and market success. Commercialization was never a priority in the planned economy. In the IT sector, where innovation was increasingly driven by the market, the Soviet Union soon lost its lead in basic science and cybernetics and was reduced to licensing or copying Western technologies. Emerging from the ruins of the Soviet Union, the Russian state had its own particular vision of IT development. It aimed not simply at imitating the West, but at keeping innovation within state control through authoritarian policy decisions and administrative guidance. But instead of supporting existing science cities and research institutions, the state decided to build a new technological apparatus separate from the Soviet one and inspired by the Silicon Valley model. As a result, Russia got the worst of both worlds: increased competition and the profit motive led many IT professionals to exit the country in search of more remunerative opportunities, while domestically industrial policy gestured toward Silicon Valley but continued to follow the template of the old Soviet science apparatus. Created with great fanfare by then President Dmitry Medvedev, the Skolkovo "Innovative City" is almost impossible to find on a map and very difficult to reach from Moscow. At the time of the book's writing, it was criticized for "inefficiency, corruption, high rents, a complicated architectural plan, and a failing program for the support of startup companies." Technoparks have been established in many other Russian cities to host both IT startups and larger technology companies. But local authorities are competing against each other through incentive and subsidy programs, while thousands of IT specialists have left the country and are likely never to return. Meanwhile, grassroots initiatives and homegrown developments were annihilated by the state's attempt to regain control over peripheral regions. In the Russian Far East, a thriving ecosystem built around the online trading of used Japanese cars was suppressed with one stroke of a pen when the Russian state decided to impose a hefty levy on imported cars more than five years old. Other experiments, such as Kazan's self-branding as "the capital of the Russian IT industry," have met with more support from the centralizing state, whose priorities are aligned with the interests of local politicians in Tatarstan. However, at present the city plan remains more a layout than a fully functional smart city, and the reader cannot escape the feeling of being led through a Potemkin village by an overly enthusiastic research guide. It is easy to adopt the jargon of IT success and talk the talk of startup promotion. To walk the walk is another matter.

Russia's Soviet heritage continues to linger in the present. But the Western capitalist model exemplified by Silicon Valley does not represent the sole alternative. Not all Western countries share the same approach to running an IT business. Elements of the socialist model, such as an orientation toward social justice, have influenced policies and mindsets in Scandinavia, where Russian expatriates appreciate the communalist ethos and the family-friendly environment. Other Russian migrants who have relocated to Boston or to Israel place high value on a corporate capitalist model of large organizations that are both risk-averse and profit-oriented. As the last article in the book concludes, "the entrepreneurial capitalism of Silicon Valley is not the only game in town." There are circumstances in which a "socialist" technological model or a "corporate" capitalist model is more applicable than the purely "entrepreneurial" model of IT startups and venture capital. From a Russian perspective, it makes sense to cultivate the tradition of high technical skills and complex problem-solving that constitutes Russia's Soviet heritage. Business models that originate in the academic community are quite distinct from the capitalist motive of profit generation. Even in the West, open source programming and the free software movement have led to sustainable ventures and now undergird a vast portion of today's internet. Moreover, the lack of entrepreneurial spirit among Russian IT specialists may be due to institutional factors: the lax attitude toward intellectual property, the absence of trust among young professionals, the relative isolation of Russia from global trade patterns, the absence of venture capital and related services to scale up enterprising businesses, the shadow of the criminal economy, and so on. According to the authors, the brain drain narrative also needs to be complicated. The experience of work migration by IT professionals from India or China has demonstrated that the "brain drain" is not an unfixable curse and can instead be viewed as "brain circulation," with people looking for better conditions regardless of the country. Here again, the profit motive is not the only driver of individual decisions. Student and young-researcher mobility is increasingly part of the academic curriculum, and the choice of destination is often motivated by existing collaborative networks or diasporic connections. Scholars get a first taste of academic life abroad by spending a few months as a postdoc or a guest lecturer before considering more long-term migration options. The same process of step-by-step migration can also be found in the corporate environment, where the decision to relocate is preceded by offshoring contracts and temporary assignments. The story of Russian Jewish IT practitioners migrating to Boston during the Soviet period dispels the myth of the "tech maverick" and shows that migrants often have to retrain and upgrade their skill sets before they can find employment in US companies. The concept of brain drain assumes a kind of inherent and fixed value in the "brains" that leave their homeland and settle abroad. In practice, however, migration often leads to occupational downgrading, deprofessionalization, and de-skilling, as highly educated graduates lacking connections and job-search skills end up in low-skilled work or, at best, "upper-middle tech" positions in big US corporations.
The failure to produce technological entrepreneurs among Russian immigrants should not be read as a result of their inability to operate in a capitalist economy or as a lack of entrepreneurial skills. Considering the limited options offered to migrants in a new environment, settling for a mid-level position in a large corporation instead of starting a new high-risk venture seems like a reasonable choice.

The shadow of cyber criminality

In addition to the three models identified by the authors—socialist, entrepreneurial, or corporate—there is a fourth model that they do not consider in their essays: the criminal one. Much late-Soviet entrepreneurial activity emerged as an antidote to the country's collapsing economy, and "dishonest speculation" was seen as the predominant form of engaging in business activities. Only a fine line separated semi-legal market practices from criminal activities, and many young professionals equipped with IT skills were ready to cross it. The same skills that made fizmat school graduates valuable on the IT job market could also be turned toward quick gains in the shadow economy. During Russia's market transition, the grey zone between legitimate, semi-legal, and illegal activity led to surprising developments, such as a publicly organized conference of avowed criminals that took place at Hotel Odessa in May 2002. The First Worldwide Carders Conference was convened by the administrators of CarderPlanet, a website on the dark web that specialized in mediating between vendors and purchasers of stolen credit card data. In the early age of e-commerce, when American banks and card issuers lagged behind the chip-and-PIN technology their European counterparts had developed, "carding," or credit card fraud, became a very lucrative activity. Russian fizmat kids with access to a computer and an Internet connection turned into early-day hackers and cybercriminals. CarderPlanet became the breeding ground of a whole generation that turned to cybercrime for lack of better opportunities in the context of a crumbling economy and a disintegrating state. Later on, these hackers turned to ransomware as the preferred mode of attack and to bitcoin as the privileged means of payment. Russian cybercriminality cannot be understood without appreciating its relationship to Russian national security interests. Early on, the FSB, Russia's secret service, made it clear that any criminal operation against domestic state interests was off-limits and would be met with strong retaliation. Later, criminal gangs were mobilized into cyberattacks against newly independent states such as Estonia or Georgia. Members of cyber gangs were also recruited into notorious state-backed hacking teams such as APT28, the group linked to GRU Unit 26165. Cybercriminals hide behind anonymity services, encrypted communications, middlemen, puppet accounts, and pseudonyms. This makes it challenging for law enforcement agencies, let alone social scientists, to track them or describe their practices. A few facts highlighted by From Russia with Code might however be relevant here. Like conventional Russian software developers, Russian cybercriminals and hackers are likely to value technical prowess and coding virtuosity above all else. For them, code is a political instrument that has the power to challenge geopolitical power relations and capitalist business interests. Code also serves to create groups and communal identities of like-minded professionals, like the software-writing teams at Yandex. Studying their coding style and particular signatures may help intelligence agencies attribute cyberattacks to known actors in Russia, thereby responding to the challenge of attribution in cyber warfare. Like the professionals described in the book, Russian cybercriminals' relation to the motherland is likely to be transactional.
They are also geographically mobile, and need to venture abroad to close some illicit transactions, which gives Western law-enforcement agencies an opportunity to locate them and put them behind bars. Most participants in the 2002 CarderPlanet conference have since been identified, tracked down, arrested, and convicted.

Martian Chronicles

A review of Dying Planet: Mars in Science and the Imagination, Robert Markley, Duke University Press, 2005.

The relations between science and fiction have nowhere been closer than on the planet Mars. The genre of science fiction arguably began with imagining life on Mars; and some of its most popular entries nowadays are stories of how humans could settle on the red planet and make it more like the Earth. Planetary science originally took Mars as its object and tried to project onto Mars what scientists knew about the climate and geology of Earth. Now this interest in Martian affairs is coming back to Earth, as scientists apply knowledge derived from studying Mars to the study of the Earth's planetary dynamics. Mars's image as a dying planet has been invoked to support competing, even antithetical, views of the fate of our world and its inhabitants: a glorious future of interplanetary expansion and space conquest, or a bleak fate of environmental devastation and human extinction. Science has not completely closed the question of whether life ever existed on Mars; but visions of extraterrestrial civilizations and space invaders have been superseded by narratives centered on mankind and its cosmic manifest destiny. This intimate relationship between science and fiction under the sign of Mars is now more than a century old, but shows no sign of abating. What is it in Mars that inflames people's imagination from one generation to the next? Why has Mars attracted more interest than our own satellite, the Moon, or than other planets in the solar system such as Venus or Saturn? Are there commonalities between the way our ancestors envisioned canals built by Martian civilizations and more recent visions of making Mars suitable for human sojourn? Will the detailed inventory of the Martian terrain brought back by satellite images and camera-equipped rovers put an end to our interest in the red planet, or will it rekindle a new space age with the colonization of Mars as its overarching goal? And how can our visions of planetary expansion avoid the pitfalls of colonial metaphors and Earth-based anthropocentrism?

Is there life on Mars?

Dying Planet explores the ways in which Mars has served as a screen onto which we project our hopes for the future and our fears of ecological devastation on Earth. It presents a cross-disciplinary investigation of changing perceptions of Mars as both a scientific object and a cultural artifact. The persistence of the red planet in our cultural imagination explains its enduring presence on the scientific agenda; and the scientific controversies surrounding Mars have often fueled the imagination of artists and philosophers. Scientists still frequently resort to terrestrial analogies to describe Mars; and the study of Mars has encouraged scientists to think about the planetwide conditions necessary to sustain life, making Earth more of a Mars-like planet. For planetary scientists and science-fiction writers, Mars often acts as a bellwether, a harbinger of the ecological fate of the Earth. The image of Mars as a dying planet has an enduring quality: it suggests that the Earth may go the way of Mars and turn into a barren land through resource exhaustion and environmental stress. To the question "Why Mars?," the author lists the reasons that have made the fourth planet in the solar system such an enduring presence in the scientific imagination. Since the invention of the telescope in the seventeenth century, Mars has been observable with a fair degree of accuracy. Dark patches on the surface, polar caps that wax and wane, waves of darkening that spread across the planet from the poles toward the equator during its spring and summer months: all these observed phenomena nourished rampant speculation based on analogies to Earth's seasonal and hydrological cycles. In 1878, Giovanni Schiaparelli (1835-1910) announced that he had observed canali (channels or canals) criss-crossing its surface. At the end of the nineteenth century, the American astronomer Percival Lowell (1855-1916) forcefully defended the idea that these canals had been built for irrigation by an intelligent civilization. For more than half a century, the canal controversy fueled speculation about an alien race that might enter into contact with mankind. More generally, the discovery of life on Mars or elsewhere in the universe would profoundly alter humankind's perception of its place in the cosmos: the question "Is there life on Mars?" is as momentous as Copernicus's questioning of the Earth's place at the center of the universe.

Our fascination with Mars stems from what Robert Markley calls the interplanetary sublime. According to Immanuel Kant, the sublime is the infinite object that reveals the sublimity of reason. The "starry heavens above me and the moral law within me" fill us with a profound sense of wonder and awe. The spectacle of Mars in science and in literature is indeed sublime and awe-inspiring. Mars has the largest volcano in the solar system. Its main valley stretches for three thousand miles, dwarfing terrestrial analogues and making the Grand Canyon seem "a mere crack on the sidewalk." Its surface preserves landforms three to four billion years old that provide a window into a geological past that has long since disappeared from Earth. Orbital photographs show evidence of geologically recent lava flows, patterns of water erosion, and meteoric impacts; this evidence of a once warmer and wetter Mars raises the question of planetary evolution and climate change. The study of Mars involves a multiplicity of sciences including geology, chemistry, hydrology, meteorology, and microbiology, as well as the still virtual disciplines of exobiology and terraforming. The exploration of Mars is a "fundamental science driver": it pushes the frontiers of science further and provokes the imagination of scientists and writers alike. What we see in Mars also reflects "the moral law within me": gazing at a distant planet makes our insignificance in the universe palpable. Whether humankind is alone in the universe or one of many intelligent species has profound philosophical and even theological implications. The loss of Mars's atmosphere and the disappearance of water from its surface also bring lessons close to home: if the geological similarities between Mars and Earth have the same causes, then the history of Mars provides a window into Earth's possible future. Doing comparative planetology, and understanding the dynamics of planetary climate change, therefore becomes the new rationale for going to Mars.

The planetary imagination

To twenty-first-century observers, seeing canals on Mars is a bit like discerning a rabbit on the Moon: a figure of the imagination, a matter of folklore and cultural mythology. It is hard to realize that less than a century ago the issue of the Martian canals was a matter of science, not fiction, filling the pages of scientific journals and the popular press. The idea of a plurality of inhabitable worlds has long been debated in speculative philosophy, starting with the Greek philosopher Anaximander (610-546 B.C.). Based on observation and calculation, Nicolaus Copernicus (1473-1543) placed the sun at the center of the solar system, relegating the Earth to the status of merely another orbiting planet. The Copernican theory provided the impetus for Johannes Kepler (1571-1630) to describe precisely the orbits of the planets, although the German astronomer was "almost driven to madness" by the complexity of Mars's orbit. With the development of the telescope in the seventeenth century, Mars began to be perceived as the most likely candidate in the solar system for harboring an extraterrestrial civilization. Giovanni Cassini (1625-1712) and Christiaan Huygens (1629-1695) published detailed images of the Martian surface that drew on terrestrial analogies: polar caps, "seas," and "oases" became familiar features of the Martian terrain. By the eighteenth century, the plurality-of-worlds hypothesis had been put on a sound scientific footing and was debated by scientists and philosophers alike. Mars's surface was described with increasing precision, and almost all astronomers who had modern instruments at their disposal made observations of the planet. The mapping of Mars focused primarily on global cycles of temperature, hydrology, and presumed biological activity. But it was Giovanni Schiaparelli's observation of a network of lines on the surface of Mars in 1877 that sparked the most intense controversy. Schiaparelli himself was agnostic about what his canali signified: were they "channels" connecting what were described as oceans, continents, and islands, or "canals" built by an alien civilization?

Robert Markley devotes almost three chapters of Dying Planet to the canal controversy. The canal thesis, forcefully defended by Percival Lowell, had all the ingredients of a great scientific controversy. It could be boiled down to a simple claim (canals meant intelligent Martians) and integrated into a grand narrative of planetary evolution (canals were built to counter the desertification of a dying world). Lowell's theory operated within the bounds of accepted scientific practice (it used all the scientific observations available at the time) and mobilized the rhetoric of scientific objectivity to challenge the values, assumptions, and methods of his opponents (whose refusal to envisage life outside of Earth was denounced as religiously motivated). Part of the fascination with Mars stemmed from the implicit and explicit lessons that scientists and their readers drew from Lowell's vision of an advanced civilization struggling to stave off ecological disaster. Lowell's grand narrative of a dying planet found echoes in the emerging literature of science-fiction writers who mixed the genres of utopian novel, adventure narrative, and philosophical speculation. Although H. G. Wells's The War of the Worlds is by far the best known of the turn-of-the-century science-fiction novels, it was by no means an isolated production. Wells's novel offers a classic dystopian inversion of European imperialism: his blood-drinking Martians pose a horrific challenge to bourgeois complacency, even as they give shape to late Victorian culture's masochistic fascination with its own demise. Kurd Lasswitz (1848-1910) describes a more peaceful encounter between humans and a more advanced Martian civilization in his 1897 novel Auf zwei Planeten, published in English in 1971 with a foreword by Wernher von Braun. The book has the Martian race running out of water, eating synthetic foods, traveling by rolling roads, and utilizing space stations. Alexander Bogdanov (1873-1928), a Russian physician, philosopher, and Bolshevik revolutionary, portrays in Red Star (1908) a collectivist utopia in the full throes of resource exhaustion and planetary decline. The vanguard socialism of the Martians is carved into the landscape of their planet, with the canals as both cause and effect of Martian collectivism.

How to prove a negative?

Until the 1930s, the canal thesis had enough currency within the scientific community to reinforce a widespread agnosticism about the possibility of intelligent life on Mars. Even as the canal builders retreated into science fiction, the idea of "primitive" life on Mars persisted. Lowell's paradigm of a dying planet influenced scientific speculation about the composition of the Martian atmosphere, the character of its surface, and the nature of its putative life-forms. After World War II, advances in radiometry and the study of the infrared spectrum gave astronomers new tools with which to study Mars. As the intelligent-life hypothesis became more and more improbable, scientists still deduced from the alleged existence of ice, water, and an atmosphere the possibility of vegetative life in the form of lichens and algae. It is hard to prove a negative: the inability to detect signs of life does not mean that life does not—or did not—exist on Mars. Even after the Mariner missions of the mid-1960s brought back photographs showing Mars's barren surface as inhospitable to life, scientists speculated that oxygen might still be captured in the polar caps, and that bacterial forms of life might have existed in the past and might still be present. Evidence suggested that Mars three billion years ago was comparatively warm and wet. Did life exist in the very distant past on this more hospitable Mars? How had the planet died? Could micro-organisms survive in extreme conditions, as they do in volcanic or deep-sea environments on Earth? A whole discipline, exobiology, grounded in the premise that life may exist beyond Earth, concentrated on the search for signs of life and the study of habitable environments. The ambiguous results of the life-detection experiments conducted during the Viking missions that landed on Mars in 1976 led scientists to lobby for more sophisticated microbiological tests on future NASA landers. The search for life remains a crucial selling point for plans to explore Mars by sending automated rovers and, ultimately, boots on the ground.

In 1948, inspired by the novel of his compatriot Kurd Lasswitz, the rocket physicist and space scientist Wernher von Braun wrote the technical specification for a human expedition to Mars, The Mars Project. In the 1970s and early 1980s, the American astronomer and science communicator Carl Sagan was the most vocal advocate of space exploration and the search for extraterrestrial intelligent life. He too was inspired by the science-fiction novels he had read as a teenager: a map representing Edgar Rice Burroughs's vision of Mars hung on the hallway wall outside his office for more than twenty years. Just as the canals occupied the attention of a generation of scientists, Burroughs's novels about John Carter and his adventures on the planet he calls Barsoom dominated the interplanetary fiction of the first half of the century. Literature inspired by Mars includes the good, the bad, and the ugly: for a Ray Bradbury and his Martian Chronicles (1950) or an Isaac Asimov and his The Martian Way (1952), how many pulp fictions or comic-book adventures featuring green aliens laying eggs and four-armed tetrapods shooting laser beams? As Robert Markley states in his introduction, "anyone who has read a lot of science fiction realizes that much of it is pretty bad." But the appeal of the genre lies elsewhere: "science fiction does not represent historical experience, but generates simulations of what that experience may become." Ray Bradbury once said that "Burroughs has probably changed more destinies than any other writer in American history." The same could be said of Bradbury himself. Generations of adults (mostly male) had their formative years shaped by the likes of Ray Bradbury, Isaac Asimov, and Arthur C. Clarke. Considering that space exploration lacks the support of vested interests outside of the aerospace industry, science-fiction novels created a constituency for sending missions to the red planet and beyond.

The Mars Society

Inspired by the Lowellian paradigm of a dying planet bearing the marks of ancient civilizations, classic science fiction was obsessed with the idea of intelligent life on Mars. More recent science fiction plays with the idea of bringing life and civilization (back) to Mars: by sending manned missions, establishing a permanent presence, and terraforming the planet. As an emblem of humankind's interplanetary future, Mars is described both as a dead world that resists human efforts to explore, colonize, and transform it and as the site of humankind's next giant leap in its centuries-long evolution. These fictions are haunted by the dark underside of colonization and extractive capitalism, and often demystify the masculinist narrative of the conquest of space with visions of failed social orders and technoscientific hubris. In Kim Stanley Robinson's trilogy, Red Mars (1992), Green Mars (1993), and Blue Mars (1996), the settlement and terraforming of Mars is chronicled through the personal and detailed viewpoints of a wide variety of characters spanning almost two centuries. Ultimately more utopian than dystopian, the story focuses on the egalitarian, sociological, and scientific advances made on Mars, while Earth suffers from overpopulation and ecological disaster. Such plans to colonize Mars are no longer science fiction: established in 1998 by aerospace engineer Robert Zubrin and backed by multibillionaire Elon Musk, the Mars Society, a nongovernmental organization, has set itself the goal of sending humans to Mars and establishing a permanent colony in the very near future. In an industry where NASA remains the most expensive game in town, the "new space" industry that operates on a "faster, better, cheaper" basis promotes alternative, low-cost ways of getting humans to Mars and sustaining them while they stay on the planet. Robert Markley, who published Dying Planet in 2005, has reservations about the whole endeavor. In his opinion, the Mars Society's vision of a new American frontier, or a new manifest destiny, "is founded on dubious or simplified readings of American history that repress both the human and ecological consequences of conquest and colonization." As he concludes, "the ultimate challenge posed by planetary transformation is ultimately as much ethical as it is scientific."

Coding and Decoding

A review of Code: From Information Theory to French Theory, Bernard Dionysius Geoghegan, Duke University Press, 2023.

Is there a pathway that goes "from information theory to French Theory"? Straying from the familiar itineraries of intellectual history, Bernard Dionysius Geoghegan invites us to take a path less trodden: a detour that allows the reader to revisit famous milestones in the development of cybernetics and digital media, and to connect them to scholarly debates stemming from fields of study as distant as structural anthropology, family therapy, and literary semiology. Detours and shortcuts are deviations from linear progression, reminding the traveler that there is no one best way to reach a destination. Similarly, there are several ways to read this book. One is to start from the beginning and proceed to the end, from the birth of communication science during the Progressive Era in the United States to the heyday of French seminars in the sciences humaines in the Quartier latin before May 1968. Another is to start from the conclusion, "Coding Today," and to read the whole book in reverse order as a genealogy of the cultural analytics used today by big-data specialists and modern codifiers of culture. A third approach would be to start from the fifth and last chapter, on "Cybernetics and French Theory," and to see how casting cultural objects in terms of codes, structures, and signifiers relates to earlier methodologies that treated communication as information, signals, and patterns. The common point of these three approaches to reading Code is the crossing of boundaries: disciplinary boundaries between the technical sciences and the humanities; political demarcations between social engineering and cultural critique; and transatlantic borders between North America and France. The gallery of scientists and intellectuals that the book summons reflects this broad sweep: Norbert Wiener, Warren Weaver, Margaret Mead, Gregory Bateson, Claude Lévi-Strauss, Roman Jakobson, Jacques Lacan, Roland Barthes, and Luce Irigaray are seldom assembled in a single essay; yet this is the challenge that Code takes up, inviting us to hold together disciplines and methodologies that are usually kept separate.

The empire of code 

Let's start from the present and work back from there. "Coding" now mostly means writing lines of computer software in a programming language such as JavaScript, Python, or C++. But codes can also designate social norms or cultural imperatives governing acceptable behavior in a certain context or within a subgroup. To "know the codes" means being able to navigate a certain social world without committing blunders or improprieties. Of course, social scientists have taught us that social rules are best obeyed when one is not conscious of their imperium. Social norms must become embodied knowledge to be played out spontaneously, and the best performance has the charm and immediacy of the natural, the innate, the unrehearsed. Culture cannot be recited like a learned lesson or a set of rules. When social life is reduced to a system of codes, decontextualized from its rich background and reformatted for transnational circulation, it becomes a simulacrum. This is why we should worry about the extension of the domain of the norm that is fueled by the twin forces of globalization and digital technologies. We are witnessing the weakening of the notion of culture, once thought of as a set of shared, self-evident assumptions anchored in a territory, and today reduced to a corpus of explicit norms and cultural markers circulating on a global scale. The crisis in culture that Hannah Arendt diagnosed in 1961 has now given way to culture's opposite: the reign of the explicit, the quantified, the normative. The disappearance of high culture as a shared implicit within territorial and social boundaries gives way to the sequencing of small bits of cultural content that are recombined to form a marketized commodity, as in UNESCO's heritage list of intangible assets. These packets of texts and images circulate through networks that separate them from their point of origin and deliver them to the right place. If the network changes, due to congestion or broken links, routers can use an alternative interface to reach the destination.

There is a growing disconnect between the territory in which we live and the cultural references that we manipulate. National or religious identity is redefined as a set of cultural markers and signs of belonging that are decomposed and recomposed into new individual selves, both unique and interchangeable. Coding implies normativity. We need new norms and regulations because things that once seemed obvious, at least within a given cultural space, no longer are. If everything is open to discussion and contestation, then we must make the rules explicit and as detailed as possible. This codification of social practice considerably reduces the inner spaces of freedom and nonnormativity: the intimate, the private, the unconscious. Normativity is the consequence of coding, of the passage to the explicit, of the quantification of affects. A grammar, for example, is a code, and when we make a mistake, we are corrected. Unlike language, code is acquired through apprenticeship or formal training: one must know the rules to practice coding, whereas it is not necessary to know grammar to practice a language. Coding follows a model of communication that makes each term explicit, so that the receiver understands exactly what the emitter wants to say. This applies to social interactions, where what was previously left unsaid now needs to be specified, and even to the use of language, with the spread of global English and the standardization of public expression. In a multicultural context, it is recommended to speak as clearly as possible, without allusions, cultural references, or humor. The spread of artificial intelligence and chatbots will only reinforce this trend: in order to make ourselves understood by machines, or to allow machines to communicate among themselves, we must separate language from culture and minimize the noise generated in the process of encoding and decoding.
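
The engineering model of communication invoked here can be rendered in a few lines. The sketch below is my own gloss, not Geoghegan's: a message is made fully explicit and redundant, pushed through a noisy channel, and decoded so that the receiver recovers exactly what the emitter sent.

```python
import random

def encode(bits, repeat=3):
    """Repetition code: make the message redundant by sending each bit three times."""
    return [b for b in bits for _ in range(repeat)]

def noisy_channel(bits, flip_probability=0.05):
    """Each transmitted bit may be flipped by noise."""
    return [b ^ 1 if random.random() < flip_probability else b for b in bits]

def decode(bits, repeat=3):
    """Majority vote over each group of repeated bits."""
    groups = (bits[i:i + repeat] for i in range(0, len(bits), repeat))
    return [1 if sum(group) > repeat // 2 else 0 for group in groups]

message = [1, 0, 1, 1, 0, 0, 1]
received = decode(noisy_channel(encode(message)))
print(message == received)  # with modest noise, almost always True
```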

The age of the seminar

This becoming-code of all cultural contents and social interactions has a long history. A surprising milestone in the advent of code is to be found in the works of the philosophers, literary critics, and semioticians who are sometimes bundled together in the United States under the label of "French Theory." Coding and decoding were definitely code words in French intellectual discussions of the 1960s and 1970s. "Assez décodé !" (Stop decoding/stop fooling around) was the title of a popular 1978 essay that took aim at Roland Barthes's new literary criticism and its abuse of technical jargon. Geoghegan identifies the 1960s as the period when "culture as communication" gave way to a preoccupation with "culture as code." Cybernetics and information theory acted as both model and test bed for this transformation. They were part of a broader trend of social transformation based on the import of American technologies and institutions, adapted to postwar France's condition. Techniques of management and human engineering were adopted en masse by an increasingly technocratic France. Funding from American foundations, tracing back to fortunes accumulated by robber barons and with links to the Cold War intelligence apparatus, supported the creation of research institutions that set new modes of organizing critical inquiry in the humanities and social sciences. A new research center and central forum for teaching the social sciences was created within the Ecole pratique des hautes études as the "sixième section," better known today as the Ecole des hautes études en sciences sociales, or EHESS. It modeled aspects of its study program on the social sciences in the United States, distancing itself from previous modes of scholarly organization in French universities. Its scope was resolutely transdisciplinary and experimental. It pioneered the use of statistical methods and mathematical models in the humanities. Indeed, there is a book to be written on the fascination, some would say the math envy, exerted by mathematics and formal science on French social scientists as diverse as Claude Lévi-Strauss, Pierre Bourdieu, and Jacques Lacan. One locus for such collaboration was Lévi-Strauss's research seminar on the use of mathematics in the social sciences, which led to long-lasting interdisciplinary collaboration between scientists and social critics.

The research seminar thus became a key site for the clinical analysis of the human condition, remote from the elegant discussions in cafés and salons that had previously exemplified intellectual authority in France. The seminar was the domain of the expert, the specialist, the fieldworker. It displayed science in the making, and opened its ranks to any social scientist who had new research results to share, regardless of academic position or social authority. Later on, Michel Foucault would label this new kind of postwar thinker a "specific intellectual" whose political responsibility was akin to that of the "nuclear scientist, computer expert, and pharmacologist." Structuralism imposed itself as the dominant paradigm, with its emphasis on codes, systems, communication, economy, and even the informatic patterning of signs. The promise of scientific precision and far-reaching advances attracted younger scholars eager to chart bold yet rigorous programs in emerging research areas. The human sciences as envisioned by Claude Lévi-Strauss had one great aim: "the consolidation of social anthropology, economics, and linguistics into one great field, that of communication." In particular, "social anthropology," he wrote, "can hope to benefit from the immense prospects opened up to linguistics itself, through the application of mathematical reasoning to the study of phenomena of communication." Lévi-Strauss was an enthusiastic reader of Shannon and Weaver's Mathematical Theory of Communication (1949). One of his early papers on the relevance of cybernetics to linguistics argued that engineering models of communication could be transposed onto all other fields of human activity, including linguistics, economic transactions, and the circulation of women within primitive systems of kinship. Through the 1950s, Lévi-Strauss sought to establish a physical infrastructure equal to the tasks of his emerging structural anthropology. His election to a chair at the Collège de France in 1960, and his concomitant establishment of the Laboratory of Social Anthropology, presented him with the long-sought opportunity to run a research laboratory of his own. One of his first initiatives was to acquire a copy of the Human Relations Area Files, a searchable database of two million index cards compiling ethnographic findings. Vast regimes of human data were disassembled into informational units for cross-cultural analysis. They were part of a global apparatus of knowledge that, paradoxically, unmoored cultures from local and embodied reality. Headquartered in Paris, UNESCO offered an early vehicle for bringing these new political techniques to the world.

Back to the future

Code insists on the transatlantic origins of the dominant paradigm in the sciences humaines, both institutionally and in terms of substance. The history of structuralism and poststructuralism has often been told, with an emphasis on the Johns Hopkins conference of 1966 that spearheaded the reception of contemporary French thought in North America. Geoghegan goes further back in time to highlight the way the nascent European human sciences were incorporated into the emerging logics of US communication science during World War II. As war swept Europe, the Rockefeller Foundation mobilized to bring threatened European intellectuals under the umbrella of US wartime science. An early recruit was the Russian-born linguist Roman Jakobson, who founded the Linguistic Circle of New York in 1943 as a successor to the celebrated Prague Linguistic Circle, mixing the structural linguistics initiated by the Swiss linguist Ferdinand de Saussure with insights from fields as diverse as Russian formalism, avant-garde art such as futurism and cubism, and relativity theory in physics. For Saussure, language was like a game of chess: one did not simply speak but selected from among a field of possibilities prefigured by formal constraints and anticipated threats. With Jakobson, language became probabilistic and combinatoric, ordered on principles that followed the direction of cybernetics and communication science. Much as Warren Weaver and Claude Shannon used probabilistic sequences to predict series of words, phrases, and sentences, Jakobson described phonemes as probabilistically encoded and decoded series. Another Rockefeller Foundation initiative was the establishment of the Ecole libre des hautes études in New York, which recruited Claude Lévi-Strauss but declined to support Jacques Lacan. Under Jakobson's influence, Lévi-Strauss ceased to study the empirical facts of indigenous kinship and focused instead on the relations among the terms that constituted a kinship system proper. With the aid of a French mathematician, he even found algebraic expressions for his kinship studies. The linguistics seminar Jakobson and Lévi-Strauss held at the Ecole libre made a field trip to AT&T headquarters in 1944 to witness the performance of the Voder, a synthetic speaking device. According to Geoghegan, the Ecole libre was a methodological crucible, nudging French scholars away from a concern for social equality and redirecting them in technocratic directions. As he remarks, "this was indeed a strategy of political transformation of the sort that would become a pillar of American 'nation building' in decades to come."
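
The probabilistic sequences in question are easy to demonstrate. The toy model below is my own illustration, not the book's: it estimates word-to-word transition frequencies from a sample text and generates a new sequence by sampling from them, the same logic Jakobson transposed to phonemes.

```python
import random
from collections import defaultdict

# Build a first-order Markov model of word transitions from a sample text.
sample = "the cat sat on the mat and the cat saw the rat".split()
transitions = defaultdict(list)
for current_word, next_word in zip(sample, sample[1:]):
    transitions[current_word].append(next_word)

# Generate a new sequence by repeatedly sampling the observed successors.
word = "the"
generated = [word]
for _ in range(8):
    successors = transitions.get(word)
    if not successors:
        break
    word = random.choice(successors)  # frequent successors are chosen more often
    generated.append(word)

print(" ".join(generated))
```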

The last thesis proposed by Geoghegan—or the first, if you follow the book's order from chapter one to chapter five—is that cybernetics wasn't an invention of World War II and the Cold War, as science historians sometimes assume. Code shows that "links among the Rockefeller, Macy, and Carnegie philanthropies forged in the 1930s and 1940s, well before the United States' entry into World War II, guided subsequent initiatives in cybernetics, information theory, and game theory." The roots of the project lie in Progressive Era technocracy and its agenda of turning social strife into communication engineering problems amenable to technical solutions. Welfare policies, not warfare, were the test bed for the rise of the communication sciences, and their first deployments were to be found in the colony, the clinic, the asylum, and the urban ghetto. As Geoghegan observes, "dreams of cybernetic post-humanism depended on disappearing the bodies of native persons and other subjects regarded as less than human." The anthropologists Ruth Benedict and Margaret Mead thought that all existing human cultures were distributed along a great "arc" covering the whole range of possible cultural traits. Each culture then selects along this arc a "pattern" of human possibilities that fits its environment and forms a coherent whole. After a pathbreaking master's thesis that laid the groundwork for digital circuit design, Claude Shannon completed a PhD dissertation in 1940 that applied Boolean algebra to the orderly processing of eugenic data. The celebrated Macy Conferences on Cybernetics, initially convened in 1942, brought together mathematicians, anthropologists, engineers, and scientists from other disciplines, and popularized notions such as reflexivity, feedback loops, and error correction mechanisms. Scientific networks cultivated in the 1930s and consolidated in wartime military projects laid the foundation for interdisciplinary communication projects well into the 1950s.
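
The Boolean algebra mentioned here is the same algebra that Shannon's master's thesis applied to relay circuits, and its basic insight fits in a few lines. The example below is mine, not Geoghegan's: series wiring behaves like AND, parallel wiring like OR, and algebraic simplification predicts a circuit's behavior.

```python
# Toy illustration of Boolean algebra applied to switching circuits:
# two switches in series conduct only if both are closed (AND);
# two switches in parallel conduct if either is closed (OR).

def series(a, b):
    return a and b

def parallel(a, b):
    return a or b

# The circuit (A AND B) OR (A AND NOT B) simplifies algebraically to A alone,
# so a single switch can replace three.
for A in (False, True):
    for B in (False, True):
        assert parallel(series(A, B), series(A, not B)) == A
print("simplification verified over all inputs")
```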

Return to sender

There is a tendency to downplay the links between the natural sciences and the dominant paradigms in the humanities. This book shows that the history of the human sciences in the twentieth century cannot be separated from the rise of the communication sciences. Fields such as anthropology, psychology, and semiotics served as experimental laboratories for the engineering of a society of digital media and codified culture. Far from trailing behind engineers and natural scientists, human scientists spearheaded the reconceptualization of cultural forms as forms of code that could be decomposed and recombined using mathematical tools. Efforts to transform the humanities and social sciences into a single field, the human sciences, oriented toward communication, cannot be separated from the rise of scientific philanthropy. The Rockefeller Foundation and a host of like-minded philanthropies funded by robber barons (the Ford Foundation, the Josiah Macy Jr. Foundation, the Wenner-Gren Foundation) lavished generous funding on interdisciplinary research programs inspired by cybernetics and information theory. Their midcentury interest in these fields reflected progressive hopes of submitting divisive political issues to neutral technical analysis. The long-standing aim of American philanthropies to reorient the humanities toward exact, quantifying, empirical, and rule-governed theoretical analysis found fertile ground in postwar France. Even if we should use the expression "French Theory" with caution, there was a theoretical impetus toward formalization, even a "math envy," that shaped the dominant paradigms of structuralism and poststructuralism. A cybernetic turn of mind informed French structuralists' talk of codes, systems, and communication. While Barthes's contrarian attitude or Lacan's extravagant vocabulary carried a critique of technocratic rule, their seminars fit within the period's emphasis on experts, codification, and structures. Their effort to remake French thought also ended up remaking American thought along the way. If we summarize the standard model of communication as a message sent by an addresser to an addressee through a channel, involving operations of coding and decoding, then the development of French Theory on American campuses was a case of return to sender.

Science’s Big Picture

A review of Epigenetic Landscapes: Drawings as Metaphor, Susan Merrill Squier, Duke University Press, 2017.

Susan M. Squier believes drawings, cartoons, and comic strips should play a role in science and in medicine. Not only in the doctor’s waiting room or during the breaks scientists take from work, but right in the curriculum of science students and in the prescriptions given to ailing patients. She even has a word for it: graphic medicine, or the application of the cartoonist’s art to problems of health and disease. Her point is not only that laughing or smiling while reading a comic book may have beneficial effects on the patient’s morale and health. Works of graphic medicine can enable greater understanding of medical procedures, and can even generate new research questions and clinical approaches. Cartoons can help treat cancer; they might even contribute to cancer research. Pretending otherwise is to adhere to a reductionist view of science that excludes some people, especially women and the artistically inclined, from the laboratory. In order to make science more inclusive, scientists should espouse “explanatory pluralism” and remain open to nonverbal forms of communication, including drawings and pictures. Comics and cartoons are a legitimate source of knowledge production and information sharing, allowing an embodied and personal experience to be made social. They provide new ways of looking at things, enable new modes of intervention, and put research content in visual form. In comics, body posture and gesture occupy a position of primacy over text, and graphic medicine therefore facilitates an encounter with the whole patient instead of focusing on abstract parameters such as illness or diagnosis. Studies already suggest that medical students taught to make their own comics become more empathetic caregivers as doctors. Health-care workers, patients, family members, and caregivers should be encouraged to create their own comics and to circulate them as a people-centered mode of knowledge creation.

Difficult words made easy

Epigenetic Landscapes is full of difficult words: DNA methylation, chromatin modification, homeorhesis, chreod, pluripotency, anastomosis (I will explain each and every one of them in this review). It also mobilizes several distinct disciplines: embryology, genetics, thermodynamics, architecture, science and technology studies, and art criticism. But the reader need not be a rocket scientist or a medical PhD to get the gist of the book. The author’s apologia for graphic medicine, or the call to apply graphic art to healthcare and to medical science, is part of a broader agenda: the rehabilitation of gender-based and art-sensitive forms of intellection that have been estranged from the life sciences. The entanglement of art and science that the author advocates is informed by feminist epistemology: in addition to the French philosopher Michel Serres, the feminist scholar Donna Haraway is presented as one of her main sources of inspiration. However, Susan Squier doesn’t discuss theory in the abstract: in order to prove her larger point, she takes as her case study the life story and scientific achievements of one scientist, the biologist and embryologist C. H. Waddington (1905–1975), together with one of the main concepts he introduced, the epigenetic landscape, a figure that has played a foundational role in the formation of epigenetics. Squier emphasizes Waddington’s claim that art and science are inextricably intertwined, and that each informs the development of the other. While Waddington’s model, the epigenetic landscape, represented the determinative nature of development, demonstrating how canalization leads an individual to return to the normal developmental course even when disrupted, scientists are now discovering that the developmental process is neither linear nor so determined. This echoes Squier’s mode of narration, which incorporates scholarship from various disciplines and exhibits nonlinearity and indeterminacy as a style of thought.

Epigenetics is a hot topic in contemporary science: it is one of the most frequently cited terms in biology articles, and dozens of textbooks and popular essays have been devoted to the field—some with catchy titles such as “Change Your Genes, Change Your Life” or “Your Body Is a Self-Healing Machine.” According to its scientific promoters, epigenetics can potentially revolutionize our understanding of the structure and behavior of biological life on Earth. It explains why mapping an organism’s genetic code is not enough to determine how it develops or acts, and shows how nurture combines with nature to engineer biological diversity. Some pundits draw the conclusion that “biology is no longer destiny” and that we can optimize our health outcomes by making lifestyle choices about what we eat and how we live, or by controlling the toxicity of our environment. Epigenetics is now a widely used term, but there is still a lot of confusion surrounding what it actually is and does. Susan Squier does not add to the hype surrounding the field, but neither does she provide intellectual clarity about the potential and limitations of recent research. Moving away from contemporary debates, she focuses on the personality of C. H. Waddington and follows the cultural trail of the metaphor he helped create, a metaphor that finds echoes in fields as diverse as graphic medicine, landscape architecture, and bio-art. The epigenetic landscape is all at once a model, a metaphor, and a picture that appeared in three different iterations: “the river,” “the ball on the hill,” and “the view from underneath with guy wires.”

Three pictures of the epigenetic landscape

As a scientific model, the epigenetic landscape fell out of use in the late 1960s, returning only with the advent of big-data genomic research in the twenty-first century. Yet as the epigenetic landscape has come back into widespread use, it has done so with a difference. Now the term refers primarily to the specific mechanisms by which epigenetics works on a molecular level, particularly through DNA methylation and chromatin modification (the first inhibits gene expression in animal cells; the second makes the chromatin structure more condensed, so that transcription of the gene is repressed). When Waddington conceptualized the epigenetic landscape and coined the words homeorhesis and chreod, he had a broader signification in mind. Homeorhesis, derived from the Greek for “similar flow,” describes dynamical systems that return to a trajectory, as opposed to systems that return to a particular state of equilibrium, which is termed homeostasis. Waddington presented the first version of his epigenetic landscape in 1940 as a river flowing in a deep valley, a visual metaphor for the role played by stable pathways (later to be called “chreods”) in the process of biological development. This flow represents the progressive changes in size, shape, and function during the life of an organism by which its genetic potentials (genotype) are translated into functioning mature systems (phenotype). Waddington’s second landscape (an embryo, fertilized egg, or ball atop a contour-riven slope) also allows for further visual motion; while the river flows in a linear fashion, somewhat restricted by its blurred boundaries, the embryo has the possibility of rolling down any of the paths present on the hill. The third representation used by Waddington, with wires and nodes underneath the landscape, underscores the way gene expression can be pulled in different directions.
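
To make the distinction between homeorhesis and homeostasis concrete, here is a minimal numerical sketch of my own (not from Squier’s book or from Waddington): a developmental variable is pulled back toward a moving valley floor, the chreod, so that after a perturbation it rejoins the trajectory rather than returning to any fixed equilibrium point.

```python
import numpy as np

# Minimal illustration of homeorhesis: a "ball" on Waddington's landscape.
# The valley floor drifts over developmental time t, so the system returns
# to a *trajectory* (homeorhesis), not to a fixed point (homeostasis).

def valley_center(t):
    # Hypothetical chreod: the stable developmental pathway.
    return 0.5 * t

def simulate(t_max=10.0, dt=0.01, k=2.0, perturb_at=5.0, kick=1.5):
    ts = np.arange(0.0, t_max, dt)
    x, xs = 0.0, []
    for t in ts:
        # Restoring force pulls the state back toward the moving valley.
        x += -k * (x - valley_center(t)) * dt
        if abs(t - perturb_at) < dt / 2:
            x += kick  # a transient disturbance to development
        xs.append(x)
    return ts, np.array(xs)

ts, xs = simulate()
# After the kick at t=5, x relaxes back toward the drifting valley floor
# (up to a small constant lag), not to its pre-perturbation value.
print(f"x({ts[-1]:.2f}) = {xs[-1]:.2f}; valley at {valley_center(ts[-1]):.2f}")
```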

In Waddington’s vision, the role of the epigenetic landscape extended beyond the life sciences. The first representation of the model, published in his book Organizers and Genes (1940), was a drawing commissioned from the painter John Piper, who had been enlisted as a war artist to paint buildings smashed by bombing. Waddington returned to the theme of collaboration between scientists and artists in his article “Art between the Wars,” where he praised the return to figurative painting under wartime conditions, and even more so in his book Behind Appearance: A Study of the Relations between Painting and the Natural Sciences in This Century, published in 1970. Both scientific knowledge and artistic creations, he argued, had turned “against old-fashioned common sense” and developed models, from quantum physics to abstract painting, that fundamentally challenged individual and collective representations. Behind Appearance emphasizes that both scientists and artists have come to acknowledge the extent to which they are implicated in their research. Drawing from Einstein’s remarks on the process of creation, Waddington asked whether words or images, symbols or myths, are the foundation of scientific thought. Two mythological figures were of particular importance for him: the world egg, the blank and round shape from which all things are born, and the Ouroboros, the snake that eats its tail. These figures can be found in many mythologies, and they also help represent advances in modern science, from cosmological models of the Big Bang to the cybernetic notion of the feedback loop. As he grew older, Waddington became more willing to challenge the divide between science and the humanities in order to emphasize the unitary nature of knowledge.

Feminist epistemologies

He was also, or so argues Susan Squier, less constrained by gender boundaries and more willing to acknowledge women’s contributions to the advancement of science. When he was writing about art in conjunction with science, Waddington had in mind a broad readership that included many influential women: his wife, fellow scientists, female artists, and women architects. By contrast, when he addressed his male peers at the Serbelloni Symposium in 1967 on a topic as large and open-ended as the refoundation of biological science, he was less inclined to challenge positivist orthodoxies and offer metaphysical musings. Women at this symposium were relegated to the role of the philosopher of science commenting on the proceedings from a detached perspective (not unlike Susan Squier’s own position), or of the artist offering two poems to close the conference with a note of gendered artistry. For Susan Squier, a feminist epistemology encourages ambiguity and questioning. She conceives of her role as “poaching on academic territory in which I can claim at best amateur competence.” She notes how embryology makes pluripotent cells (stem cells that can develop into any kind of cell) and embryos visible by turning pregnant women into invisible bodies, and she redirects our attention from the embryo to the woman who is carrying it. For her, making the embryo visible is not just a matter of imaging technology: it is an act of mediation and remediation, in the sense that it mediates between the anatomical, the experimental, and the genetic, and that it offers remedy as it helps provide a treatment, an antidote, or a cure. Using cartoons and comics as a mediating and remediating medium, “graphic medicine” as she advocates it can help reintegrate the gendered experience exiled from formal medicine, by literally “making the womb talk.”

A feminist epistemology is not limited to the promotion of women in science. It studies the various influences of norms and conceptions of gender roles on the production and dissemination of knowledge. It avoids dubious claims about feminine cognitive differences, and balances an internal critique of mainstream research with an external perspective based on cultural studies and social critique. Squier’s analysis shows that Waddington’s epigenetic landscape was gendered, as it represented the embryo cell without any reference to the female body. Her feminist critique of the life sciences stresses plasticity rather than genetic determinism. She contests the dualism between science and the humanities, and argues that biology has been shaped all along by aesthetic and social concerns, just as the humanities and arts have engaged with life processes and vitalism. The scientific imagination is nurtured by myths and symbols, as Waddington himself acknowledged by conjuring the figures of the Ouroboros and the cosmic egg. The ability to think about biological development from different perspectives, visual as well as verbal, analytic as well as embodied, is understood to be a catalyst to creativity. Similarly, medicine as a healing process must include a narrative of the patient facing the disease, as well as representations—pictures or images—of illness and well-being. An evidence-only, process-oriented, and value-blind medicine has more difficulty curing patients. A doctor who takes the embodied, personal experience of the patient as a starting point is a better doctor.

Manga and anime

Epigenetic Landscapes provides a useful argument for rebalancing scientific and medical knowledge practices with sensorial and embodied experiences drawing from the humanities, the arts, and popular forms of expression such as graphic novels and comic strips. But does this make the argument a feminist one, and does it apply to cultural contexts outside the Anglo-Saxon world? In fact, I was surprised that no reference was made to Japan apart from a passing mention of Sesshū’s landscape ink painting from the fifteenth century. Japan has developed the art of explaining scientific concepts and medical training in graphic form. Anime and manga are part of any student’s formal and informal education, and famous scientists have published manga series popularizing their discipline under their names. The manga Black Jack and the TV series The Great White Tower, not to mention many others, have accompanied generations of medical students and have inspired many a vocation in the profession. In Japan, graphic medicine doesn’t need advocacy, feminist or otherwise: it is part of the way things are done. My second remark is that the critique of phallogocentrism—to borrow a term from Derrida that Squier doesn’t use—will only take you so far. Under this theory, abstract reasoning, which originates in the Greek logos and identifies with patriarchy, must give way to more embodied forms of knowledge practice that include the nonverbal, the intuitive, the sensorial. But we now live in an age where the image is everywhere, and where stimuli to our senses are ubiquitous. Our visual and aural cultures have received a boost from the diffusion of new media technologies. With computer graphics and artificial intelligence, anything that can be conceived can be pictured, animated, and made real in a virtual world that encroaches on our perceived environment. The written text is not extinct, however, and we can still figure things out without the help of animated images and virtual simulations. The non-representable, the purely abstract, and the ideational must remain part of the scientific imagination.

Social Studies of Space Science

A review of Placing Outer Space: An Earthly Ethnography of Other Worlds, Lisa Messeri, Duke University Press, 2016.

When I heard Lisa Messeri had written an ethnography about space research, my first reaction was: what’s an anthropologist like her doing in a place like this? How can one study outer space with the tools and methods of social science? What is the distinct contribution of the anthropologist in a field dominated by rocket scientists and big bang theoreticians? What can the cosmos teach us about ourselves that is not grounded in hard science and space observatory data? To be sure, there is no anthropos to study in outer space, and other worlds are beyond the grasp of the ethnographer. The sociology of other planets remains a big question mark. So far, you cannot conduct participant observation in space stations or fieldwork on Mars. We may hire anthropologists, linguists, semioticians, and indeed all the help we can get when we encounter extraterrestrial civilizations and extraplanetary forms of life; but so far these close encounters of the third kind remain the stuff of science-fiction novels and blockbuster movies. But on second thought, an anthropologist in outer space is not completely out of place. Anthropologists have always accompanied explorers and discoverers to the frontiers of human knowledge. They helped us understand alien cultures and foreign civilizations to make them less distant, and drew lessons from their immersion in other worlds for our own society. Anthropologists make the strange and the alien look familiar, and the “view from afar” that they advocate also makes our own planet look alien and unfamiliar. They also help us make sense of science’s results and methods, and have been a trusted if somewhat critical companion of scientific research and laboratory life. Science and technology studies (STS in the jargon) have taught us that natural scientists—contrary to a common prejudice—are never simply depicting or describing reality out there “just as it is”: their research is always characterized by a specific style and colored by the “scientific imagination.”

Bringing space down to Earth

An “anthropology off the Earth” therefore seems like the obvious next step for the discipline now that humanity has entered the space age. And indeed, outer space is no longer the exclusive domain of what is usually designated as “hard” science. Today, supposedly “messier” or “softer” sciences play an increasing role, exerting significant influence on how the extraterrestrial is portrayed and understood. A growing number of researchers in the social sciences and the humanities have begun to focus on the wider universe and how it is apprehended by modern cosmology. Call it the “four S”: social studies of space science. What unites these efforts is the conviction that the many surprises you may encounter “out there” also tell us something about ourselves, here on this planet. Space science gives us access to something that surpasses humanity and yet simultaneously contains it. Astronomy doesn’t stand apart from more earthly pursuits. The quest for an Earth-like planet not only promises a better understanding of places elsewhere in our galaxy but also provides a mirror for examining terrestrial relations from a different perspective. Anthropology can contribute to bringing space science down to Earth through its firm grounding in participant observation, its twin processes of familiarization and alienation, and its attention to dimensions that are not spontaneously considered by space scientists: inequalities of gender, class, and ethnicity; legacies of colonial and imperial approaches; and terrestrial understandings of nation and nationalism. In a time of post-colonialism, gender equality, and trans-border flows, we must resist the language of “colonization,” “manned” missions, and “frontiers.”

I first used Placing Outer Space as a primer in space and planetary science. Before completing her PhD in MIT’s program in History, Anthropology, and Science, Technology and Society, Lisa Messeri took a bachelor’s degree in aerospace engineering, and she is deeply familiar with the environment in which she immersed herself for her fieldwork. Focusing on planetary scientists as the main subjects of her ethnographic study, she describes the practices and techniques that allow them to transform planets from abstract objects into places full of meaning, considered from the point of view of potential habitability. Her knowledge of planetary science vastly exceeds the few nuggets I retained from junior high school and teenage readings. I was reminded that there used to be water on Mars, and that the Moon and the Earth were once one and the same. I knew about the gravitational pull and orbital ellipses that make planets dance around the Sun in a well-designed choreography. I had to update some basic facts, such as the list of planets in the solar system: apparently, Pluto is no longer a planet (“says who?”, asks Messeri in a 2010 article). I had vaguely heard of the existence of planets outside the solar system, but I was surprised to learn that the first detection of an extrasolar planet orbiting a Sun-like star only happened in 1995. Before that, exoplanets were a conjecture deduced from statistical reasoning: considering the almost infinite number of stars in the universe, it is only logical that some should have planets orbiting them. By the same token, scientists also deduce the existence of Earth-like planets, and conjecture that a fraction of these planets can also support life. Some physicists speculate on the number of inhabited planets in the universe, and make a probabilistic argument about the existence of extraterrestrial civilizations that may be able to communicate with us (this is the Drake equation, first proposed in 1961).
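
For readers who want the formula behind that probabilistic argument, here it is as a gloss of my own (the equation itself is standard, not specific to Messeri’s book):

```latex
% The Drake equation (1961): N, the expected number of communicating
% civilizations in our galaxy, as a product of estimated factors.
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
% R_*: rate of star formation in the galaxy
% f_p: fraction of stars with planets
% n_e: habitable planets per planetary system that has planets
% f_l: fraction of habitable planets on which life actually arises
% f_i: fraction of those on which intelligence evolves
% f_c: fraction of those that develop detectable communication
% L:   average lifetime of a communicating civilization
```

Every factor after the first two is highly uncertain, which is why the equation is better read as a structured way of posing the question than as a calculation.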

Finding exoplanets

These dreams and speculations, what Messeri calls the “planetary imagination,” have always animated space research. What is new with modern planetary science is that these theoretical musings can now be backed by hard numbers and observations. Scientists have embarked on a quest to find Earth-like worlds and environments that may be conducive to life on other planets. This is an almost impossible task: Lisa Messeri compares it to spotting a firefly with a searchlight when you are on the East Coast and the searchlight is in California. And yet, since the detection of the first exoplanet in 1995, more than a thousand exoplanets had been confirmed at the time of Messeri’s writing. A more recent estimate indicates that more than 4,000 exoplanets have been discovered and are considered “confirmed,” while thousands of other “candidate” detections require further observations before one can say for sure whether the exoplanet is real. Messeri explains how this detection and confirmation process works. Telescopes collect starlight and measure how the flux, or energy output, of a star changes over time. Applying several filters, and separating signal from noise, astronomers are able to detect a U-shaped dip in the light curve: this is the signature of an exoplanet, the sign that a planet has passed in front of a star and blocked a minuscule fraction of the star’s light. Further tinkering with the data allows the researcher to estimate the distance of the planet from the star and its approximate mass and density. These measurements will tell you whether the planet is “habitable,” whether it is made from solid rock and able to sustain liquid water. Based on spectral data, you can even speculate about the existence of an atmosphere and its temperature. But for the moment, finding and describing an exoplanet is as much a work of science as an art of persuasion: you have to convince colleagues that the squiggle in the data that you detect is indeed the signature of a celestial body. Young scientists-in-training have to learn how to see a stream of data as a planet, as a world. It is the ability to conjure worlds that reinforces the community of exoplanet astronomers. Their faith unites them in the pursuit of the holy grail: the discovery of a planet just like our own orbiting a star like the Sun.
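
To give a feel for how small that signal is, here is a minimal sketch of my own (an illustration of the transit method in general, with hypothetical numbers, not of any pipeline Messeri describes): a simulated light curve with a 0.1 percent dip, recovered by phase-folding at the planet’s period so that the dip stacks up while the noise averages out.

```python
import numpy as np

# Minimal illustration of the transit method (hypothetical numbers).
# A planet crossing its star blocks a tiny fraction of the starlight,
# leaving a shallow, roughly box-shaped dip in the measured flux.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 2000)                      # time in days
flux = 1.0 + rng.normal(0.0, 3e-4, t.size)            # noisy baseline

period, duration, depth = 3.5, 0.15, 1e-3             # assumed planet
flux[(t % period) < duration] -= depth                # inject transits

# Phase-fold at the candidate period and bin: the dip adds coherently
# while the noise shrinks as 1/sqrt(points per bin).
phase = t % period
edges = np.linspace(0.0, period, 71)
binned = [flux[(phase >= lo) & (phase < hi)].mean()
          for lo, hi in zip(edges[:-1], edges[1:])]
print(f"deepest bin: {min(binned):.5f}  (expected ~{1 - depth:.5f})")
```

In real pipelines the period is unknown and must itself be searched for, which is where the art of telling a planet from a squiggle begins.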

Because of rapid advances in detection and computing technologies, almost all data in observational astronomy are now digital. As a result, its practitioners have become more akin to number crunchers than skywatchers. As Messeri notes, “inspiration might strike while gazing up at the night sky, but the real work happens in front of a computer, and discourse is dominated by methods of data processing and analysis.” In daily conversations, the feeling of excitement comes not from speculating about habitable planets but from marveling over how “clean” the dataset looks. Exoplanet astronomy increasingly relies on space-based telescopes that beam large streams of data back to Earth. But despite this transition to a remote model of observation, researchers still find it useful to travel regularly to observatories built on mountaintops in exotic locations. Messeri accompanies an exoplanet researcher and her PhD student to the Cerro Tololo Inter-American Observatory (CTIO) in Chile. Inhabiting a mountain observatory, even on a temporary basis, is justified on several grounds. It anchors astronomers in the history of their discipline, as old observatories at lower altitudes are often turned into space museums. It is a rite of passage into the profession for aspiring researchers, and generates social interactions and face-to-face collaboration between members of the same epistemic community. It allows astronomers to tinker with the equipment and to interact with technicians. And as Messeri notes, “being at the observatory affords one of the few chances to remember and reconnect with the awesomeness of a dark sky.” Going to faraway places on top of mountains reminds astronomers that the ultimate goal of their quest is to inhabit another world. It is also, in a way, a voyage of conquest and annexation. In conversation with Peter Redfield’s Space in the Tropics, an ethnography of the French space program in French Guiana, the author explores how observatories are “situated in a landscape with multiple histories and ties to the local, even if there are actions (intentional or not) that seek to exclude the local.”

Earth-centrism and post-colonialism

Other aspects link exoplanetary science to post-colonial questions. Finding an exoplanet is by definition an Earth-centered enterprise: a habitable planet is defined as a planet that offers an acceptable environment for human beings. The “habitable zone” circling a certain category of star is defined as a region in which a planet would receive neither too little nor too much heat, and where liquid water and an oxygen atmosphere could be sustained. Owing to Earth-centrism and other speciesist biases, we cannot conceive of a place conducive to life that would be devoid of these elements. The vision of Mars as a terrain for exploration and discovery also remains clouded by an Earth-centric bias. In two chapters, Messeri describes how Mars scientists transform the Earth into a Martian kind of place by simulating habitats in extreme desert environments, and how they help bring Mars down to Earth by mapping its rugged terrain with the help of satellite images and the pictures taken by the Rover missions. By stating that “humanity’s new frontier can only be on Mars,” the Mars Society, which funds the Mars Desert Research Station in the Utah desert, is reinvigorating a rhetoric of exploration, the frontier, and colonization that reminds us of “how the West was won” and of populations subjected to the logic of empires. In an age in which a proliferation of new space ventures looks set to explore and exploit outer space in the interests of those who are capable of sponsoring such efforts, Messeri warns us about “the inherent hierarchies and exclusions that come with place-making practices.” But she also notes that space exploration, including commercial space flight and space tourism, is in large part “orthogonal to profit,” and underscores that “the aim of this book is not to unpack the white, American, imperial subtext of invocations of exploration.” Taking the discourse of planetary scientists at face value, she prefers to insist on the moral element that comes with the perception of our place in the cosmos.

As noted earlier, anthropology, with its habit of making the unfamiliar familiar and of looking at our earthly condition from afar, is a welcome companion to space science and the quest for habitable planets. By positioning the Earth as one planet among many on which humans might be capable of living, social studies of outer space can help us make sense of what it means to be on Earth. The planetary imagination is sustained by the effort to envisage what it would be like to be in other worlds. The Mars mission in the Utah desert prepares astronauts for the conditions humans could face in a Martian colony. Earth is being transformed into a laboratory of sorts, where scientists experiment with life on other planets. In the process, astronomy is becoming a fieldwork-based science, not unlike anthropology itself. Fieldwork is grounded in the notion that “being there” is a valuable and telling experience, and scientists trained in geology can piece together a narrative about Mars based on the shapes of dried-up rivers, the tumbling of craters, and the presence of rock concretions. The 3D mapping of Mars shows the Red Planet on a human scale and allows the user to “see like a rover” by navigating the landscape in an immersive experience similar to the one offered by Google Maps. These open-source maps and user-friendly interfaces assume, and thus disseminate, an inherent worthwhileness in studying other planets, and act as a recruiting and advocacy tool for NASA. Turning Mars into a place on Earth, and preparing to make an earthly place out of Mars, also helps us understand our own planet in unfamiliar terms. Earth is literally made alien when seen from outer space, as in the famous Blue Marble image taken by the Apollo 17 crew, which ushered in a new ecological consciousness about the finite resources of our planet. As Messeri notes, “the most prominent legacies of the space age are not prolonged human presence in space and exploration of nearby planets but a new way to observe and study our own planet.” Similarly, the quest for an Earth-like planet is not driven by the hubris of conquering other worlds, but by the belief that humans will finally feel less cosmically alone.

Place-making and being out of place

Lisa Messeri’s distinct contribution in Placing Outer Space lies in her analysis of the role of place in planetary science and astronomy. Drawing on insights ranging from critical geography’s conceptualization of space as a social, historical, and political phenomenon to Heidegger’s Heimatlosigkeit, she finds that place-making is central to the work of outer space scientists, who transform infinite space into a definite place to be. As she argues, place “is not just a passive canvas on which action occurs but an active way of knowing worlds. Even when place is not self-evident, as perhaps with invisible exoplanets, it is nonetheless invoked and created in order to generate scientific knowledge.” Place transforms the geographically alien into the familiar, and helps us imagine other planets as habitable worlds. Place is more than a given category; it is a way of knowing and of making sense. It involves the four processes of narrating, mapping, visualizing, and inhabiting that scientists use to imagine themselves in other worlds. The author sees an irony in the tension between the urge to see planets as places and the increasing sense of placelessness that we experience on Earth. Astronauts and space scientists spend more and more time away from the office or from home, turning a seat and a laptop in a conference venue or an observatory into a working environment. The need to inhabit a physical space is declining just as the desire to detect a habitable planet is on the rise. With remote access to the Internet and data stored in the cloud, our mode of being seems increasingly disconnected from place. And yet, place is where we long to be, the destination that invites us to make ourselves at home, on Earth as it is in heaven.

Remnants of “La Coopération”

A review of Edges of Exposure: Toxicology and the Problem of Capacity in Postcolonial Senegal, Noémi Tousignant, Duke University Press, 2018.

Capacity building is the holy grail of development cooperation. It refers to the process by which individuals and organizations, as well as nations, obtain, improve, and retain the skills, knowledge, tools, equipment, and other resources needed to achieve development. Like scaffolding, official development assistance is only a temporary fixture; it pursues the goal of making itself irrelevant. The partner country, the doctrine insists, needs to be placed in the driver’s seat and to implement its domestically designed policies on its own terms. Once capacity is built and the development infrastructure is in place, technical assistance is no longer needed. National programs, funded by fiscal resources and private capital, can pursue the task of development and pick up where foreign experts and ODA projects left off. And yet, in most cases, building capacity proves elusive. The landscape of development cooperation is filled with failed projects, broken-down equipment, useless consultant reports, and empty promises. Developing countries are playing catch-up with an ever-receding target. By the time local experts have mastered skills and technologies have been transferred, new technologies emerge and disrupt existing practices. Creative destruction wreaks havoc on fixed capacity and accumulated capital. Development can even be destructive and nefarious. The ground on which the book opens, the commune of Ngagne Diaw near Senegal’s capital city Dakar, is made toxic by the poisonous effluents of used lead-acid car batteries that inhabitants process to recycle heavy metals and scrape a living. Other locations in rural areas are contaminated with stockpiles of pesticides that have leaked into soil and water ecosystems.

Playing catch-up with a moving target

Edges of Exposure is based on eight months of intensive fieldwork that Noémi Tousignant spent in residence at the toxicology department of Université Cheikh Anta Diop in Dakar, in an ecotoxicological project center, and in the newly established Centre Anti-Poison, Senegal’s national poison control center. The choice to study the history of toxicology in Senegal through the accumulation of capacity in these three institutions was justified by the opportunity they offered to the social scientist: toxicity, that invisible scourge that surfaced in the disease outbreaks of “toxic hotspots” such as Ngagne Diaw, was made visible and exposed as an issue of national concern by the scientists and equipment that tried to measure it and control its spread. The layers of equipment that have accumulated in these locations appear as “leftovers of unpredictable transfers of analytical capacity originating in the Global North.” Writing about history, but using the tools of anthropology and ethnographic fieldwork, the author combines the twin methods of archeology and genealogy. The first is about examining the material and discursive traces left by the past in order to understand “the meaning this past acquires from and gives to the present.” The second is an investigation into those elements we tend to feel are without history because they cannot be ordered into a narrative of progress and accomplishment, such as toxicity and technical capacity.

Noémi Tousignant begins with a material history of the buildings, equipment, and archives left onsite by successive waves of capacity-building campaigns. The book cover, picturing the analytical chemistry laboratory, sets the stage for the narrative, with its rows of unused teaching benches, chipped tiles, rusty gas taps, and handwritten signs instructing users not to use the water spigots. The various measuring instruments, sample freezers, and portable testing kits are mostly in disrepair or unused, and local staff describe them as “antiques,” “remnants,” or leftovers of a “wreckage.” They provide evidence of a “process of ruination” by which capacity was acquired, maintained, and lost or destroyed. The buildings of Cheikh Anta Diop University—named after the scholar who first claimed the African origins of Egyptian civilization—speak of a time of high hopes and ambitions. The various departments, “toxicology,” “pharmacology,” “organic chemistry,” are arranged in neat fashion, and each unit envisions an optimistic future of scientific advancement, public health provision, and economic development. The toxicology lab is supposed to perform a broad range of functions, from medico-legal expertise to the testing of food quality and suspicious substances, and to the monitoring of indicators of exposure and contamination. But in the lab, technicians complained that “nothing worked” and that outside requests for sample testing had to be turned down. Research projects and advanced degrees could only be completed overseas. Capacity was present only as infrastructure and equipment, sedimented over time and now largely deactivated.

Sediments of cooperation

Based on her observations and interviews, Noémi Tousignant reconstructs three ages of capacity building in Senegalese toxicology, from the golden era of “la coopération” to the financially constrained period of “structural adjustment” and on to a time of bricolage and muddling through. The Faculty of Pharmacy was created as part of the post-independence extension of pharmacy education from a technical degree to the full state qualification, on par with a French degree. For several decades after independence, the French government provided technical assistants, equipment, budget, and supplies with the commitment to maintain “equivalent quality” with French higher education. The motivation was only partly altruistic; it was also self-serving: the university was put under French leadership, with key posts occupied by French coopérants, and throughout the 1960s about a third of its students were French nationals. It allowed children of the many French expats in Senegal to begin their degree in Dakar and transfer easily to French universities, and it also provided technical assistants with career opportunities that could later be translated into good positions in the metropole. France was clearly in the driver’s seat, and Senegalese scientists and technicians were invited to climb aboard. But the belief in equivalent expertise and convergent development embodied in la coopération also bore the promise of a national and sovereign future for Senegal and opened the possibility of African membership in a universal modernity of technical norms and expertise. Coopérants’ teaching and research activities were temporary by definition: they were meant to produce the experts and cadres who would replace them.

The genealogy of the toxicology discipline itself delineates three periods within French coopération: from post-colonial science to modern state-building and on to Africanization. The first French professor to occupy the chair of pharmaceutical chemistry and toxicology in Dakar described in his speeches and writings “a luxuriant Africa in which poison abounds and poisoning rites are highly varied.” His interest in traditional poisons and pharmacopeia was not only motivated by the lure of exoticism: “tropical toxicology” could analyze African plant-based poisons to solve crimes, maintain public order, and identify potentially lucrative substances. In none of his articles published between 1959 and 1963 did the French director mention the toxicologist’s role in preventing toxic exposure or mitigating its effects at the population level. His successors at the university maintained French control but reoriented training and research to fulfill nation-building needs. They acquired equipment and developed methods to measure traces of lead and mercury in Senegalese fish, blood, water, and hair, while arguing that toxicology was needed in Senegal to accompany intensified production in fishing and agriculture. But they did not emphasize the environmental or public health significance of these tests, and their research did not contribute to the strengthening of regulation at the national or regional level. Africanization, which had been touted as a long-term objective since independence, was only achieved with the abrupt departure of the last French director in 1983 and his replacement by Senegalese researchers who had obtained their doctoral degrees in France. But it coincided with the adoption of structural adjustment programs and their translation into budget cuts, state-sector downsizing, and shifting priorities toward the private sector.

After la coopération

Ties with France were not severed: a few technical assistants remained, equipment was provided on an ad hoc basis, and Senegalese faculty still relied on their access to better-equipped French labs during their doctoral research or for short-term “study visits.” But the activation of these links came to rely more on the continuation of friendly relations and favors than on state-supported programs and entitlements. French universities donated second-hand equipment and welcomed young African scientists to fill needed positions in their research teams. They did the occasional favor of testing samples that could no longer be analyzed with the broken-down equipment in Dakar. The toxicology department at Cheikh Anta Diop University could not keep up with advances in science and technology, as the emergence of automated analytical systems and genetic toxicology made cutting-edge research more expensive and thus less accessible to modestly funded public institutions. Some modern machines were provided by international aid agencies as part of transnational projects to monitor the concentration of heavy metals, pesticides, and aflatoxins—often accumulated as the result of previous ill-advised development projects such as the large-scale spraying of pesticides in the Sahel to combat locust and grasshopper invasions. But, as Tousignant notes, such scientific instruments “are particularly prone to disrepair, needing constant calibration, adjustments, and often a steady supply of consumables.” The “project machines” provided the capacity to test for the presence of some of these toxins in food and the environment, but they did not translate into regulatory measures and soon broke down for lack of maintenance.

The result of this “wreckage” is a landscape filled with antique machinery, broken dreams, and “nostalgia for the futures” that the infrastructures and equipment once promised. Abandoned by the state, some research scientists and technicians left for the private sector and now operate from consultancy bureaus, local NGOs, and private labs with good foreign connections. Others continue to uphold the ideal of science as a public service and try to attract contract work or are occasionally enlisted in transnational collaborative projects. Students and researchers initiate low-cost, civic-minded “research that can solve problems,” collecting samples of fresh products, powdered milk, edible oils, and generic drugs to test their quality and composition. Meanwhile, the government of Senegal has ratified a series of international conventions bearing the names of European metropoles—Basel, Rotterdam, Stockholm—addressing global chemical pollution and regulating the trade in hazardous wastes and pesticides. Western NGOs such as Pure Earth are mapping “toxic hotspots” such as Ngagne Diaw and contracting with the Dakar toxicology lab to provide portable testing kits and measure lead concentration levels in soil and blood. Enterprising state pharmacologists and medical doctors have taken over an unused wing of Hôpital Fann on the university campus to create a national poison control center, complete with a logo and an organizational chart but devoid of any equipment. Its main activity is a helpline for people bitten by venomous snakes.

Testing for testing’s sake

Toxicology monitoring now seems subordinated to the imperatives of global health and environmental science. Western donors and private project contractors are interested in the development of an African toxicological science only insofar as it can provide the data points, heat maps, and early-warning systems needed for global monitoring. The protection and healing of populations should be the ultimate goal, and yet the absence of a regulatory framework, let alone a functional enforcement capacity, guarantees that people living in toxic environments will be left on their own. In such conditions, what’s the point of monitoring for monitoring’s sake? “Ultimately, the struggle for toxicological capacity seems largely futile, unable to generate protective knowledge other than fragments, hopes, and fictions.” But, as Noémi Tousignant argues, these are “useful fictions.” First, the maintenance of minimal monitoring capacity, and the presence of dedicated experts, can ensure that egregious cases of “toxic colonialism,” such as the illegal dumping of hazardous waste, will not go undetected and unanswered. Against the temptation to consider the lives of the poor as expendable, and to treat Africa as waste, toxicologists can act as sentinels and render visible some of the harm that populations and ecosystems have to endure. Second, like the layers of abandoned equipment that document the futures that could have been, toxicologists highlight the missed opportunity for protection. “They affirm, even if only indirectly, the possibility of—and the legitimacy of claims to—a protective biopolitics of poison in Africa.”

Anti-Vaccine Campaigns Then and Now: Lessons from 19th-Century England

A review of Bodily Matters: The Anti-Vaccination Movement in England, 1853–1907, Nadja Durbach, Duke University Press, 2004.

In 1980, smallpox, also known as variola, became the only human infectious disease ever to be completely eradicated. Smallpox had plagued humanity since time immemorial. It is believed to have appeared around 10,000 BC, at the time of the first agricultural settlements. Traces of smallpox were found in Egyptian mummies, in ancient Chinese tombs, and among the Roman legions. Long before germ theory was developed and bacteria or viruses could be observed, humanity was already familiar with ways to prevent the disease and to produce a remedy. The technique of variolation, or exposing patients to the disease so that they develop immunity, was already known to the Chinese in the fifteenth century, and to India, the Ottoman Empire, and Europe in the eighteenth century. In 1796, Edward Jenner developed the first vaccine after noticing that milkmaids who had contracted cowpox never caught smallpox. Calves or children produced the cowpox lymph that was then inoculated into patients to vaccinate them against smallpox. Vaccination became widely accepted and gradually replaced the practice of variolation. By the end of the nineteenth century, Europeans vaccinated most of their children and brought the technique to the colonies, where it was nonetheless slow to take hold. In 1959, the World Health Organization initiated a plan to rid the world of smallpox. The concept of global health emerged from that enterprise and, as a result of these efforts, the World Health Assembly declared smallpox eradicated in 1980 and recommended that all countries cease routine smallpox vaccination.

Humanity’s greatest achievement

The eradication of smallpox should be celebrated as one of humanity’s greatest achievements. But it isn’t. In recent years, vaccination has emerged as a controversial issue. Citing various health concerns or matters of belief, some parents are reluctant to let their children receive some or all of the recommended vaccines. The constituents who make up the so-called vaccine-resistant community come from disparate groups, and include anti-government libertarians, apostles of the all-natural, and parents who believe that doctors should not dictate medical decisions about children. They circulate wild claims that autism is linked to vaccines, based on a fraudulent study that was long ago debunked. They affirm, without any scientific backing, that infant immune systems can’t handle so many vaccines, that natural immunity is better than vaccine-acquired immunity, and that vaccines aren’t worth the risk because they may create allergic reactions or even infect the child with the disease they are meant to prevent. Public health officials and physicians have been combating these misconceptions about vaccines for decades. But anti-vaccine memes seem deeply ingrained in segments of the public, and they feed on new pieces of information and communication channels as they circulate by word of mouth and on social media. Each country seems to have its own reluctance toward a particular vaccine: in the United States, the MMR vaccine against measles, mumps, and rubella has been the target of anti-vax campaigns. In France, the safety of the hepatitis B vaccine has been called into question, and most people neglect to vaccinate against seasonal flu. In the Islamic world, some fatwas have targeted vaccination against polio.

Resistance to vaccines isn’t new. In Bodily Matters, Nadja Durbach investigates the history of the first outbreak of anti-vaccine fever: the anti-vaccination movement that spread over England from 1853, the year the first Compulsory Vaccination Act was established on the basis of the Poor Law system, until 1907, when the last piece of smallpox legislation was adopted to grant exemption certificates to reluctant parents. Like its modern equivalent, it is a history that pits the medical establishment and the scientific community against vast segments of the population. Vaccination against smallpox at that time was a painful affair: Victorian vaccinators used a lancet to cut lines into the flesh of infants’ arms, then applied lymph that had developed on the suppurating blisters of other children who had received the same treatment. Infections often developed, diseases were passed along with the arm-to-arm method, and some babies responded badly to the vaccine. Statistics showing the efficacy of vaccination were not fully reliable: doctors routinely classified those with no vaccination scars as “unvaccinated,” and the number of patients who caught smallpox after being vaccinated was not properly counted. The vaccination process was perceived as invasive, painful, and of dubious effect: opponents of vaccination claimed that it caused many more deaths than smallpox itself. Serious infections such as gangrene could follow even a successful vaccination. But people did not only resist the invasion of the body and the risk to their health: resistance against compulsory vaccination was also predicated upon assumptions about the boundaries of state intervention in personal life. Concerns about the role of the state, the rights of the individual, and the authority of the medical profession combined with deeply held beliefs about the health and safety of the body.

Anti-vaccination in 19th-century England

While historians have often seen anti-vaccination as resistance against progress and enlightenment, the picture that emerges from the historical narrative, as reconstructed by Nadja Durbach, is much more nuanced. Through detailed analysis of the way sanitary policies were implemented and the resistance they faced, she shows that anti-vaccination in nineteenth-century England was very often on the side of social progress, democratic accountability, and the promotion of working-class interests, while forced vaccination was synonymous with state control, medical hegemony, and the encroachment on private liberties. The growth of professional medicine ran counter to the interests of practitioners such as unlicensed physicians, surgeons, midwives, and apothecaries, some of whom had long practiced variolation with the smallpox virus. It abolished the long-held practice of negotiating which treatments were to be applied, and turned patients into passive receptacles of prescriptions backed by the authority of science and the state. Compulsory infant vaccination, as the first continuous public-health activity undertaken by the state, ushered in a new age in which the Victorian state became intimately involved in bodily matters. Administrators—the same officers who applied the infamous Poor Laws and ran the workhouses for indigents and vagabonds—saw the bodies of the working classes themselves as contagious and, like prisoners, beggars, and paupers, in need of surveillance and control. Sanitary technologies such as quarantines, compulsory medical checks, forced sanitization of houses, and destruction of contaminated property were first tried out in this context of state-enforced medicine and bureaucratization. Several Vaccination Acts were adopted—in 1853, 1867, and 1871—to ensure that all infants born to poor families were vaccinated against smallpox. The fact that the authorities had to put the same laws on the books repeatedly shows that the “lower and uneducated classes” were not taking advantage of the free service, and were avoiding mandatory vaccination at all costs.

Born in the 1850s, the anti-vaccination movement took shape in the late 1860s and early ’70s as resisters responded to what they considered an increasingly coercive vaccination policy. The first to protest were traditional healers and proponents of alternative medicine who felt threatened by the professionalization of health care and the development of medical science. For these alternative practitioners, medicine was more art than science, and the state had no role in regulating this sector of activity. They objected to scientific experimentation on the human body: vaccination, they maintained, not only polluted the blood with animal material but also spread dangerous diseases such as scrofula and syphilis. These early medical dissenters were soon joined by a motley crew of social activists who added the anti-vaccination cause to their broader social and political agendas. Temperance associations, anti-vivisectionists, vegetarians and food reformers, women’s rights advocates, working men’s clubs, trade unionists, religious sects, followers of the Swedish mystic Swedenborg: all these movements formed a larger culture of dissent in which anti-vaccinators found a place. They created leagues to organize against the Vaccination Acts, held debates and mass meetings, published tracts and bulletins, and staged demonstrations that sometimes turned into small-scale riots. Women from all social classes were particularly active: they wrote pamphlets, contributed letters to newspapers, and expressed strong opposition at public meetings. They often took their roles as guardians of the home quite literally, and refused to open their doors to intruding medical officials. Campaigners argued that parental rights were political rights to which all respectable English citizens were entitled. The state, they contended, had no right to encroach on parental choice and individual freedom. “The Englishman’s home is his castle,” they maintained, and how best to raise a family was a domestic issue in which the state had no authority to interfere.

Middle-class campaigners and working-class opponents

While the populist language of rights and citizenship enabled a cross-class alliance to exist, middle-class campaigners didn’t experience the brunt of the repression that befell working-class families resisting compulsory vaccination. Working-class noncompliers were routinely seized from their houses and dragged to jail, or saddled with heavy fines. Middle-class activists clung to the old liberal tenets of individual rights and laissez-faire: “There should be free trade in vaccination; let those buy it who want it, and let those be free who don’t want it.” By contrast, working-class protest against vaccination was often formulated at the level of the collective, and it had important bodily implications. Some anti-vaccinators considered themselves socialists and belonged to the Independent Labour Party. They aligned their fight with the interests of the working class and expressed distrust of state welfare in general and of anti-pauperism in particular. The Poor Laws that forced recipients of government relief into the workhouse were a target of widespread detestation. Vaccination remained linked to poor relief in the minds of many parents, as workhouse surgeons were often in charge of inoculation and the health campaigns remained administered by the Poor Law Board. Public vaccination was performed at vaccination stations, regarded by many as sites of moral and physical pollution. The vaccination of children from arm to arm provoked enormous fears of contamination. Parents expressed a shared experience of the body as violated and coerced, and repeatedly voiced their grievances in the political language of class conflict. Their protests helped shape a working-class identity by locating class consciousness in shared bodily experience.

Anti-vaccination also drew from an imaginary of bodily invasion, blood contamination, and monstrous transformations. Many Victorians believed that health depended on preserving the body’s integrity, encouraging the circulation of pure blood, and preventing the introduction of any foreign material into the body. Gothic novels popularized the figures of the vampire, the body-snatcher, and the incubus. They offered lurid tales of rotten flesh and scabrous wounds that left a mark on readers’ imaginations. Anti-vaccinators heavily exploited these gothic tropes to generate parental anxieties: they depicted vaccination as a kind of ritual murder or child sacrifice, a sacrilege that interfered with the God-given body of the pristine child. They quoted the Book of Revelation: “Foul and evil sores came upon the men who bore the mark of the beast.” Supporters of vaccination also participated in the production of this sensationalist imagery by depicting innocent victims of smallpox turned into loathsome creatures. Fear of bodily violation was intimately bound up with concerns over the purity of the blood and the proper functioning of the circulatory system. The best guard against smallpox, maintained one medical dissenter, was to keep “the blood pure, the bowels regular, and the skin clean.” Temperance advocates and proselytizing vegetarians added anti-vaccinationism to their causes: “If there is anything that I detest more than others, they are vaccination, alcohol, and tobacco.” As the lymph applied to children’s sores was the product of disease-infected cows, some parents feared that vaccinated children might adopt cow-like tendencies, or that calf lymph might transmit animal diseases. Human lymph was even more problematic: applied from arm to arm, it could expose untainted children to the poisonous fluids of contaminated patients and spread contagious or hereditary diseases such as scrofula, syphilis, leprosy, blindness, or tuberculosis.

Understanding the intellectual and social roots of anti-vax campaigns

This early wave of resistance to vaccination, as depicted in Bodily Matters, is crucial to understanding the intellectual and social roots of modern anti-vaccine campaigns. Then as now, anti-vax advocates use the same arguments: that vaccines are unsafe and inefficient, that the government is abusing its power, and that alternative health practices are preferable. Vaccination is no longer coercive and disciplinary, but the issue of compulsory vaccination for certain professions, such as healthcare workers, regularly resurfaces. More fundamentally, Victorian England was, like our own age, a time of deepening democratization and rampant anti-elitism. Now, too, the democratization of knowledge and truth can produce an odd mixture of credulity and skepticism among many ordinary citizens. Moreover, we, too, are living in an era when state-enforced medicine and scientific expertise are being challenged. Science has become just another voice in the room, and people are carrying their reliance on individual judgment to ridiculous extremes. With everyone being told that their ideas about medicine, art, and government are as valid as those of the so-called “experts” and “those in power,” truth and knowledge become elusive and difficult to pin down. As we are discovering again, democracy and elite expertise do not always go well together. Where everything is believable, everything is doubtable. And when all claims to expert knowledge become suspect, people will tend to mistrust anything that they have not seen, felt, heard, tasted, or smelled. Proponents of alternative medicine uphold a more holistic approach to sickness and health, and they claim, as did nineteenth-century medical dissenters, that every man and woman could and should be his or her own doctor. Of course, campaigners from the late Victorian age could only have dreamed of the role that social media has enabled ordinary people to play. The pamphlets and periodicals of the 1870s couldn’t hold a candle to Twitter, Facebook, and other platforms that enable everyone to participate in the creation of popular opinion.

Which brings us to the present situation. As I write this review, governments all over the world are busy developing, acquiring, and administering new vaccines against an infectious disease that has left no country untouched. Covid-19, as the new viral disease is known, has spread across borders like wildfire, demonstrating the interconnected nature of our present global age. Pending the diffusion of an effective treatment, herd immunity, which was touted by some experts as a possible endgame, can only be attained at a staggering cost in human lives and economic loss. “Flattening the curve” to allow the healthcare system to cope with the crisis before mass vaccination campaigns unfold quickly became the new mantra, and countries were ranked to determine which policies proved the most efficient in containing the disease. Meanwhile, scientists have worked furiously to develop and test an effective vaccine. Vaccines usually take years to develop and are subjected to a lengthy process of testing and approval before they reach the market. Covid-19 has changed all this: several fully tested vaccines using three different technologies are currently being administered in the most time-condensed vaccination campaign of all time. This is when resistance to vaccines resurfaces: as vaccines become widely available, a significant proportion of the population in developing countries is refusing to get the shots. And many of those refusing are those who have the most reason to get vaccinated: high-risk themselves, or liable to pass the virus to other vulnerable people. Disinformation, distrust, and rumors that are downright delusional have turned what should have been a well-oiled operation into an organizational nightmare. In the end, we will get rid of Covid-19. But we can’t and we won’t get rid of our dependence on vaccines.

Art-and-Technology Projects

A review of Technocrats of the Imagination: Art, Technology, and the Military-Industrial Avant-Garde, John Beck and Ryan Bishop, Duke University Press, 2020.

There is a renewed interest in the United States in art-and-technology projects. Tech firms have money to spend on the arts to buttress their image of cool modernity; universities want to break the barriers between science and the humanities; and artists are looking for material opportunities to explore new modes of working. Recent initiatives mixing art, science, and technology include the Art+Technology Lab at LACMA (Los Angeles County Museum of Art), MIT’s Center for Art, Science, and Technology (CAST), and the E.A.T. Salon launched by Nokia Bell Labs. In their presentation documents, these institutions make reference to previous experiments in which artists worked with scientists and engineers in universities, private labs, and museums. LACMA’s A+T Lab is the heir to the Art & Technology Program (A&T) launched in 1967 by curator Maurice Tuchman with the involvement of the most famous artists of the period, such as Andy Warhol, Claes Oldenburg, Roy Lichtenstein, and Richard Serra. MIT was the host of the Center for Advanced Visual Studies (CAVS) founded in the same year by György Kepes, who had previously worked with László Moholy-Nagy at the New Bauhaus in Chicago. Bell Labs is where scientist Billy Klüver launched Experiments in Art and Technology (E.A.T.) with Robert Rauschenberg in late 1966. Technocrats of the Imagination tells the story of these early initiatives by situating them in their intellectual and geopolitical context, exposing in particular their links with Cold War R&D and the rising influence of the military-industrial complex. The contradiction between an anti-establishment cultural milieu denouncing technocratic complicity with the Vietnam War and a corporate environment where these collusions went unchallenged led these art-and-technology projects to their rapid demise. Modern initiatives operate in a different environment, but unquestioned assumptions may lead them to the same fate.

Creativity, collaboration, and experimentation

Why should artists collaborate with scientists and engineers? Then and now, the same arguments are put forward by a class of art curators, tech gurus, and project managers. The art world and the research lab are both characterized by a strategy of continuous innovation, collaborative experimentation, and disciplined creativity. They tend to abolish the boundaries between theory and practice, knowing and doing, individual inspiration and collective work. These tendencies were reinforced in the context of the 1950s and 1960s: in an age of big science and artistic avant-garde framed by integrative paradigms such as cybernetics and information theory, the artist and the engineer seemed to herald a new dawn of democratic organization and shared prosperity. The artist defined himself as a “factory manager” (Andy Warhol) and did not hesitate to don the white coat of the laboratory experimenter. The scientist was engaged in much more than the accumulation of scientific knowledge, and science’s contribution was vital for the nation’s wealth and security. Both worked under the assumption that science could enlarge democracy and support the United States’ place in the world, and that American art should be considered on an equal footing with other professional fields of activity. But the shared virtues of creativity, collaboration, and experimentation concealed profoundly different ideas of what those terms might mean and how they should be achieved. The conception of experimental collaboration in the arts was heir to a liberal tradition of educational reform emphasizing free expression and self-discovery. By contrast, innovation and experimentation as understood by the institutions training and employing scientists followed a model of elite expertise and top-down management. They were also heavily compromised, as John Beck and Ryan Bishop emphasize, by their ties to the military-industrial complex.

Beck and Bishop place the genealogy of the three art-and-tech initiatives under the influence of two currents: John Dewey’s philosophy of democracy and education, and the Bauhaus’ approach to artistic-industrial collaborations. The influence of John Dewey over the course of the twentieth century cannot be overemphasized. More than any other public intellectual, Dewey shaped and influenced debates on the relations between science, politics, and society in the United States. His principles of democratic education emphasizing holistic learning and the study of art were applied at Black Mountain College in North Carolina, a liberal arts institution that left its imprint on a whole generation of future artists and creators (Robert Rauschenberg, Cy Twombly, John Cage, Merce Cunningham, Ray Johnson, Ruth Asawa, Robert Motherwell, Dorothea Rockburne, Susan Weil, Buckminster Fuller, Franz Kline, Aaron Siskind, Willem and Elaine de Kooning, etc.). The influence of Dewey’s pragmatism extended beyond the US, notably among German educational reformers, and his notion of “learning by doing” was picked up by the Bauhaus, a German art school operational from 1919 to 1933 that combined crafts and the fine arts. In return, the Bauhaus furnished Black Mountain College with émigré educators—Josef and Anni Albers, Xanti Schawinsky, Walter Gropius—and a utopian vision of a post-disciplinary, collectivist education that did not favor one medium or skill set over another. The Bauhaus’ afterlife and legacy in the United States also manifests itself in the trajectories of Bauhaus veterans László Moholy-Nagy, who created the short-lived New Bauhaus in Chicago in 1937 and its successor, the School of Design, and György Kepes, who taught at MIT and ended up creating the Center for Advanced Visual Studies (CAVS) in 1967.

Bauhaus in America

It was Moholy-Nagy who originated the idea of stimulating interactions among artists, scientists, and technologists in order to spearhead creativity and innovation. His Hungarian compatriot and associate at the School of Design took the idea to MIT, an institution whose motto mens et manus (“mind and hand”) echoed Dewey’s and the Bauhaus’ devotion to “learning by doing” and “experience as experimentation.” MIT was a research-intensive science university awash with money from government contracts and military R&D. Research teams working on ‘Big Science’ projects included not just scientists but engineers, administrators, and technicians collaborating in a structured manner. Kepes’ tenure at MIT between 1946 and 1977 was characterized by a commitment to science and technology and a belief that the unintended consequences of chance encounters could lead to breakthrough innovations. His interdisciplinary teachings were structured around the principles of vision, visual technologies, and their social implications. Many disciplines were mobilized, including Gestalt psychology, systems theory, physiology, linguistics, architecture, art, design, music, and perception theory. Transdisciplinarity, holistic approaches, and the eclectic mix of science, technology, and artistic disciplines were in the air in the late sixties and influenced the counterculture as well as artistic creation. The same eclecticism presided over the creation of CAVS, a center dedicated to all aspects of vision and visual technologies. Drawing in important artists and thinkers, including many Black Mountain alumni, CAVS laid the groundwork for subsequent MIT ventures such as the influential Media Lab, founded in 1985 by Nicholas Negroponte, and the Center for Art, Science, and Technology (CAST). It was in such an environment that experimental filmmaker Stan Vanderbeek pondered the possibility of creating an “electronic paintbrush” to complement the electronic pen used in early man/machine interfaces.

The industrial corporation, the research university, and the private lab were the three nodes of the military-industrial complex. Hailed by Fortune magazine as “The World’s Greatest Industrial Laboratory,” Bell Labs’ research center at Murray Hill in New Jersey was conceived along the lines of a miniature college or university. The laboratories themselves were physically flexible, with no fixed walls, so that rooms could be partitioned, assembled, and taken apart at short notice. Bell Laboratories cultivated creativity and innovation: researchers working at Bell Labs were credited with the development of the transistor, the laser, the photovoltaic cell, information theory, and the first computer programs to play electronic music. The proximity of New York City, which had become the capital of the art world, and the presence of an arts college at neighboring Rutgers University facilitated the rapprochement between the scientific avant-garde working at Murray Hill and the contemporary art world. Artists and musicians were offered organized tours of Bell Labs as a means of opening dialogue and providing a sense of how technology could be harnessed for artistic creativity. Early collaborations include Edgar Varèse’s Déserts (1950-54), an atonal piece that was described as “music in the time of the H-bomb”; Jean Tinguely’s Homage to New York (1960), a self-constructing and self-destructing sculpture mechanism that performed for 27 minutes during a public performance in the Sculpture Garden of the Museum of Modern Art in New York; and Robert Rauschenberg’s Oracle (1962-65), a five-part found-metal assemblage with five concealed radios and electronic components now displayed at the Pompidou Center in Paris. Also influential was 9 Evenings: Theatre and Engineering, a series of performances that mixed avant-garde theatre, dance, music, and new technologies. In its wake, the engineer and project manager Billy Klüver set up Experiments in Art and Technology (E.A.T.), a collaborative project matching avant-garde artists with Bell Labs researchers that attracted applications from more than 6,000 artists, scientists, and engineers. But the project soon foundered due to poor management and lack of funds.

From New York to Los Angeles and to the world

Place matters for artistic innovation, as it does for scientific discovery and technological breakthrough. During the twentieth century, the center of the advanced art world shifted from Paris to New York. Yet the geographic origins of innovative artists also diversified markedly. When he became the first curator of twentieth-century art at LACMA, part of Maurice Tuchman’s mission was to put LA on the art map as “the center of a new civilization.” He did so by partnering with business organizations to sponsor an Art & Technology exhibition in 1971, with the participation of high-profile artists such as Roy Lichtenstein, Claes Oldenburg, Robert Rauschenberg, Richard Serra, and Andy Warhol. But by that time public opinion had already shifted away from the technocratic model of corporate liberalism, and the exhibition was a flop. Another Californian experiment sponsored by LACMA was the creation of artist-in-residence positions at RAND and the Hudson Institute, two think tanks working mostly for the government sector and tasked with “thinking about the unthinkable.” But the New York-based sculptor John Chamberlain and the conceptual artist James Lee Byars had a difficult time adapting to their new environment. The former sent a memo to all RAND staff stating: “I’m searching for ANSWERS. Not questions! If you have any, will you please fill it below”: the incomprehension was total, and the memo fell flat. The latter set up a “World Question Center” and invited the public to submit any kind of question, which would then be answered by a panel of intellectuals, artists, and scientists. But as the two authors of Technocrats of the Imagination comment: “If Byars could have included Stein, Einstein, and Wittgenstein in his teleconference, what might they have been permitted to say, given the serious limitations of the format? An expert is an expert is an expert.”

Twentieth-century art was advanced by new institutions on the art scene: the Salons and group exhibitions of independent art collectives, the private art gallery, the art criticism magazine, the contemporary art museum, and the international art biennale. World exhibitions also played a key role in the globalization of advanced art, and the American presence in these global events often displayed art-and-technology projects. Billy Klüver and the E.A.T. program at Bell Labs engineered the Pepsi pavilion at the Osaka World’s Fair, Expo ’70, in partnership with PepsiCo. The RAND Corporation was pivotal for displaying US advanced technology abroad in exhibitions of science, urbanism, postwar visions of the future, and consumer society. The Eames Office, a design studio based in Venice, California, was commissioned to contribute to the USIA-sponsored US exhibits at the 1959 American National Exhibition in Moscow and the Montreal Expo ’67, and designed the IBM pavilion at the 1964 New York World’s Fair. The aim of these exhibitions was geopolitical: they were to display America’s might at its most spectacular, and to offer a glimpse of a future in which technology played a key part. They were conceived as artist-led immersive environments in the tradition of the Gesamtkunstwerk or “total work of art” of the Bauhaus, and played a pioneering role in the development of multimedia installations and video art. Charles and Ray Eames were “cultural ambassadors” for the Cold War representation of the United States, and their design creations aligned with the political agenda the US government wished to communicate. The Eames Office made cutting-edge documentaries such as Powers of Ten (1968), a short film dealing with the relative size of things in the universe and the effect of adding or subtracting one zero, and Think (1964), a multiscreen film shown in a large, egg-shaped structure called the Ovoid Theater that stood high above the canopy and central structure of the IBM pavilion at the New York World’s Fair.

Corporate neoliberalism

John Beck and Ryan Bishop focus their analysis on the ideological underpinnings and geopolitical ramifications of these art-and-technology projects. They argue that, for all their forward-looking ambitions and futuristic visions, MIT’s CAVS, Bell Labs’ E.A.T., and LACMA’s A&T program were behind their times. In the late 1960s, antiwar sentiment had hardened public opinion against corporations and technology more generally. The positions of the scientist and the engineer were compromised by their participation in the military-industrial complex: “science and technology had come to be seen by many as sinister, nihilistic, and death-driven.” The idea that US corporations could plausibly collaborate with artists to create new worlds of social progress was now evidence of complicity and corruption—technology was the problem and not the solution. The political climate made it impossible to justify what was now summarily dismissed as “industry-sponsored art.” In this politically charged context, art-and-technology projects had very little to say about politics, American foreign policy, or the Cold War in general. Technocrats of the Imagination concludes with a comparison between these late-1960s projects and recent reenactments such as MIT’s CAST, LACMA’s A+T Lab, and Nokia’s E.A.T. Salon. Unlike their predecessors, these new projects operate in a neoliberal environment driven by private corporations, in which the sense of dedication to the public good that animated scientists and artists of the previous generation has all but disappeared. As the authors argue, the recent art-and-tech reboot “cannot be separated from or understood outside the deregulated labor market under neoliberalism that has demanded increased worker flexibility, adaptability, and entrepreneurialism.” The avant-garde artist’s new partner is not the white-coated scientist or the lab engineer, but the tech entrepreneur who claims the heritage of the counterculture to advance techno-utopianism and radical individualism. Their claim to “hippie modernism” and their appropriation of the 1960s avant-garde is based on historical amnesia, against which this book provides a useful remedy.

Less Than Human

A review of Infrahumanisms: Science, Culture, and the Making of Modern Non/personhood, Megan H. Glick, Duke University Press, 2018.

Infrahumanisms directs a multidisciplinary gaze on what it means to be human or less-than-human in twentieth-century America. The author, who teaches American Studies at Wesleyan University, combines the approaches of historiography, animal studies, science studies, gender studies, ethnic studies, and other strands of cultural studies to build new analytical tools and apply them to a range of issues that have marked the United States’ recent history: children and primates caught in a process of bioexpansionism from the 1900s to the 1930s; extraterrestriality, or the pursuit of posthuman life in outer space, from the 1940s to the 1970s; and the interiority of cross-species contagion and hybridity from the 1980s to the 2010s. Judged by historiography’s standards, the book lacks the recourse to previously unexploited archives and new textual documents that most historians consider essential for original contributions to their field. The empirical base of Infrahumanisms is composed of published books and articles, secondary analyses drawn from various disciplines, and theories offered by various authors. There are no interviews or testimonies drawn from oral history or direct observations from ethnographic fieldwork, no unearthing of new documents or unexploited archives, and no attempt to quantify or to measure statistical correlations. This piece of scholarship is firmly grounded in the qualitative methodologies and humanistic viewpoints that define American Studies on US campuses. The only novel approach proposed by the book is to use a range of photographs and visual sources as primary material and to complement textual commentary with the tools of visual analysis borrowed from media studies. But what Infrahumanisms lacks in methodological originality is more than compensated for by its theoretical deftness. Megan Glick innovates in the research questions that she applies to her sample of empirical data and in the theory that she builds out of her constant back-and-forth between facts and abstraction. She does conceptual work as other social scientists do fieldwork, and offers experience-near concepts or mid-range theorizing as a way to contribute to the expansion of her research field. In particular, her use of animal studies is very novel: just as minority studies gave birth to whiteness studies within the framework of ethnic studies, or feminism led to masculinity studies in the field of gender analysis, Megan Glick complements animal studies with the cultural analysis of humans as a species. Exit the old humanities that once defined American studies or literary criticism; welcome to the post-humanities of human studies that patrol the liminalities and borderings of the human species.

The whitening of the chimpanzee

What is the infrahuman contained in Infrahumanisms? A straightforward answer is to start with the book cover representing the simian body of a young baboon (sculpted by artist Kendra Haste) seen from behind: monkeys, particularly great apes, are infrahuman. This, at least, was how the word was first introduced into the English language: the term “infrahuman” was first used in 1916 by Robert Mearns Yerkes, a psychobiologist now remembered as the founding father of primatology. By modern criteria, Yerkes was a eugenicist and a racist: he saw his work as assisting in the process of natural selection by promoting the success and propagation of “superior” models of the human race. Through the Pasteur Institute in Paris, he was able to import primates from French Guinea and to apply to them various tests of mental and physical capacities that were first conceived for measuring the intelligence and characteristics of various “races”. Thus, writes Megan Glick, “while the terms of dehumanization and racialization are often understood to be familiar bedfellows, (…) the process of humanization is equally as important in the construction of racial difference and inequality.” In particular, she shows that the chimpanzee appeared in these early primatology studies and in popular discourse as akin to the white race, while the gorilla was identified with black Africans. The “whitening of the chimpanzee” and “blackening of the gorilla” manifested themselves in the early photographs of primates in human company and in the first episodes of the Tarzan series, where Cheeta is part of Tarzan and Jane’s composite family in the jungle, while gorillas are imagined as “the deadly enemies of Tarzan’s tribe.” The jungle trope was also applied to early twentieth-century children, who were involved in animalistic rituals and identities: from “jungle gym” equipment in public playgrounds to the totems and wild outdoor activities of the Boy Scouts movement, the development of a childhood culture in close contact with the natural world marked a new moment in the lives of US children at the beginning of the century. The child was imagined as a distinct species, a proto-evolutionary figure providing the missing link between animals and humans. Neither primates nor children leave written archives or provide a “voice” available for the historiographical record: like the subaltern, they literally “cannot speak.” Here again, the historian turns to pictures and illustrations to envision children as infrahuman, as in the photographs of infant and adult skeletons in pediatrics books that portrayed the child as “different from the adult in every fiber.”

The mid-twentieth century was a time of great anxieties about the human condition. Images and photographs tell the story better than words. The era of extraterrestriality was bordered by the mushroom clouds of Hiroshima and Nagasaki at one end and the picture of the blue planet as seen from outer space at the other. Extraterrestrial creatures were a matter of sighting and picturing more than storytelling or inventing. The pictures of aliens crashing at Roswell, New Mexico, with their “short gray” bodies and oversized heads, seized the public imagination and were described in similar terms by “alien abductees,” who came up with similar visions although they had no way to coordinate their testimonies. While aliens on the big screen or in popular media tended to be large, monstrous, and even superhuman, aliens “sighted” by the American public were small, quasi-human, and frail. Here the author has a theory that stands at variance with standard interpretations of alien invasions as inspired by the red scare of communism. It wasn’t the Cold War and the mass panic over the infiltration of communist subjects that inspired the narratives and depictions of alien abductions and Mars attacks, but rather the traumatic after-effects of the Holocaust pictures that were disseminated at the end of the Second World War. As Megan Glick argues, “both tell a story about the nature of midcentury visual culture, both are concerned about the boundaries of human embodiment, and both question the futurity of humanity.” Meanwhile, the increasing precision of human genetics gave rise to a post-Holocaust eugenic culture, in which the fight against social ills that undergirded the earlier eugenic movement was traded in for a more exacting battle against biological flaws. Key to these developments was the Nobel Prize winner Joshua Lederberg, a bacteriologist who made seminal contributions to the field of human genetics and who launched exobiology, the speculative study of life on other planets. As in the final shot of the cult movie 2001: A Space Odyssey, the picture of the earth as viewed from space paralleled the image of the fully developed fetus within a woman’s womb as reproduced on the cover of Life magazine. Lederberg and his colleagues envisioned the impending elimination of genetically based disabilities through intra-uterine manipulation of the embryo. Considering the backdrop of sterilization campaigns for disabled persons and anxieties raised by overpopulation in the Third World, this raised concerns that African American populations could be targeted for “defective genetic traits” such as the prevalence of sickle cell disease.

Jumping the species barrier

The 1980s were marked by the AIDS crisis, which at first was associated with stigmatized populations such as gay men, intravenous drug users, and migrants from Haiti. The AIDS epidemic has already been studied from various perspectives locating the disease within the history of sexuality, race, and medicine. Megan Glick adopts a new angle, taking an animal studies perspective that treats AIDS as a zoonotic or cross-species disease and places it in a series that also includes SARS, mad cow disease, and avian flu. When the virus was found to have emerged from within chimpanzees in Africa, questions were soon raised about how, why, and when AIDS had jumped the species barrier. Speculations extended to the “strangeness” of African sexual habits and dietary customs, and the denunciation of the consumption of bush meat effected both a dehumanization of African poachers and a humanization of monkey species. Tracts of tropical forest were cleared of their human presence to preserve the habitat of great apes. Dehumanization also worked at the level of AIDS patients, who have been denied proper treatment and health insurance up to this day. An extreme form of dehumanization is animalization, especially the comparison of humans with certain devalorized species such as pigs. A cartoon published in the New Yorker shows the evolution of the human species from ape to mankind, and then its devolution into pigness due to sloth and obesity. In such representations, the obese body is usually represented as disabled and deformed; it is more often than not male, bald, and white. But statistically, obese people are more likely to be black, poor, and female. Public health campaigns put the blame for excess weight on individuals, obfuscating the role of food companies, advertising campaigns, and policy neglect in our unhealthy diets. In more than one way, pigs are our posthuman future: genetic engineering can create porcine chimeras able to develop human cells and organs for xenotransplantation benefitting needy patients. Using animal parts in human bodies results in the hybridization of both species, while the American dietary passion for pork creates the possibility of a species transgression akin to cannibalism that the taboo on pork consumption for Muslims and Jews seems to have anticipated. The main barriers to our porcine and infrahuman future may not be scientific and technological, but cultural and religious.

The concluding chapter is titled The Plurality is Near, a pun on Ray Kurzweil’s book announcing that “the singularity is near” and that humans will soon transcend biology. The plurality of species, which includes parasites and vectors of harmful diseases, raises the issue of speciesism: does mankind have the right to eradicate certain species, such as the mosquito Aedes aegypti, targeted by a campaign of total elimination due to its role in the spread of yellow fever, dengue, and Zika? The elimination of mosquitoes in the name of human health is hard to contest; and yet we do not know what the long-term consequences of this tinkering with ecosystems will be. Scientists record an alarming rate of species decline and extinction, with spectacular drops in the populations of bugs, butterflies, and other insects. A future without insects would have catastrophic implications for birds, plants, soils, and humans; so much so that in order to slow down and someday reverse the loss of insects, we must change the way we manage the earth’s ecosystems and enhance insects’ chances of survival. The plurality of species also forms the background of the new discipline of microbiomics, the study of the genetic material of all the microbes—bacteria, fungi, yeasts, and viruses—that live on and inside the human body. Yoghurt commercials have popularized the notion of the intestinal flora as essential to the well-being of the organism. The discourse of digestive health sees the intestinal tract not only as a site of transit and evacuation, but also as one of flourishing and symbiosis. New models representing the body go beyond the mechanics of fluids and the circuitry of organs: they mobilize the ecology of populations and the co-evolution of ecosystems. Like the poet Walt Whitman, the human body can claim to contain multitudes: where the body ends and the environment begins is no longer clear. What happens at the infrahuman level unsettles the definition of the human: “the proposed manipulation of populations that exist in parasitic and symbiotic relation to the human species, often inside the body itself, suggests a deep unsettling of the animal/human binary and a restaging of human difference.” Seeing human beings as primate-microbe hybrids sets a new frontier for research and raises questions about the future of mankind. As microbiologist and NASA adviser Joshua Lederberg once declared, “We live in evolutionary competition with microbes, bacteria and viruses – there is no certainty that we will be the winners.”

Unmasking the ideology of infrahumanism

The infrahuman, then, takes up different figures throughout the twentieth century: the ape, the child, the creature from outer space, the embryo, the racial other, the posthuman hybrid, the microbiome within the human body. The infrahuman complicates notions of the other, of what counts as alien, outsider, non-human, friend or foe. It appears through twentieth-century scientific and cultural discourses that include pediatrics, primatology, eugenics, exobiology, microbiomics, and obesity research. The infrahuman confronts us with what the author calls “hyperalterity,” or the radically other. By extension, infrahumanism, taken in the plural, designates an ideology, an episteme, or an -ism that inspires processes of infrahumanization. It rests on the belief that one’s ingroup is more fully human than an outgroup. It results from a dual movement of dehumanization, which denies the humanity of certain individuals or collectives, and of rehumanization, which bestows certain human characteristics on non-human animals. It is closely related to the notions of speciation, the process by which differences are constituted into a distinct species, and of speciesism, the idea that being human is a good enough reason for human animals to have greater moral rights than non-human animals. What gets to count as human or as animal also affects our conceptions of human difference such as race, sexuality, disability, and disease status. Megan Glick argues that unmasking the ideology of infrahumanism is crucial to better understanding the persistence of human social inequality, “laying bare the rhetorics of being ‘beyond’ or ‘post’ race, gender, and other forms of social difference thought now to be on the precipice of mere social construction.” She notes the curious coincidence between the deconstruction of humanist thought and the emergence of an animal rights discourse at the precise moment when feminist and minority movements started to demand the recognition of their full rights as human beings, a category from which they had long been excluded. This is why “feminism should not end at the species divide”: feminist studies have a distinctive contribution to offer on the human/nonhuman distinction and how it affects the rights and claims of both groups.

Thinking about humanism, and its infrahumanist variants, as the ideology proper to the human species also transforms our vision of “the humanities”. Rather than simply reproducing established forms and methods of disciplinary knowledge, posthumanists should confront how changes in society and culture require that scholars rethink what they do—theoretically, methodologically, and ethically. Infrahumanisms bridges the scientific and cultural spheres by attending to the cultural imaginaries of scientists as well as to the changes brought by science in popular culture. It provides a welcome critique of the foundations of the field of animal studies, itself less than a couple of decades old. In her introduction, Megan Glick takes a swipe in passing at some of the great founders of the discipline—Cary Wolfe and his infatuation with systems theory, Jacques Derrida and his cat, Donna Haraway and her doggie—while giving kudos to more recent entries that mix in the radical critiques of feminist studies, critical race studies, queer studies, and disability studies, with authors such as Mel Chen, Neel Ahuja, Lauren Berlant, and Claire Jean Kim. She doesn’t support radicalism for radicalism’s sake: she has strong reservations about the biological essentialism of some animal rights activists who conflate racism with speciesism, and she reminds us that “we cannot ethically argue for the direct comparison of people and animals.” Her book is therefore a welcome contribution “to the vast and difficult conversation about the place of nonhuman animals in the humanist academy.” As mentioned, Megan Glick also extends what counts as a historical archive and how to present it to the reader. Images, pictures, photographs, screenshots, and movies will remain the twentieth century’s main archives. They require a mode of analysis and exposition that is distinct from textual interpretation, and for which tools and methodologies are only beginning to be designed. The illustrations used by the author form part of her demonstration. For many readers, the striking book cover of Infrahumanisms will remain an apt summary of her main argument.

Too Much Shock, Too Little Therapy?

A review of Shock Therapy: Psychology, Precarity, and Well-Being in Postsocialist Russia, Tomas Matza, Duke University Press, 2018.

When Russia broke away from socialism, reformers implemented a set of economic policies known as “shock therapy” that included privatization, marketization, price liberalization, and the shrinking of social expenditures. In retrospect, critics claim there was “too much shock, too little therapy”: the economy spiraled down into a deep recession, currency devaluations sent prices up, and inequalities exploded. Huge fortunes were built on the privatization of state assets while the vast majority of the population experienced economic hardship and moral disarray. The indicators of social well-being went into alert mode: the psychological shock and mental distress caused by Russia’s transition to a market economy were evidenced in higher rates of suicide, alcoholism, early death, and divorce, as well as precarious living conditions. People learned to adapt to freedom and the market the hard way: some took refuge in an idealized vision of the Soviet past, while for others the traditional values of Russian nationalism and Orthodox Christianity substituted for a lack of moral compass. Society as a whole experienced post-traumatic stress disorder. But contrary to the claim that economic shock therapy was “all shock and no therapy”, on the psychological front at least, therapy came in large supply. During the 1990s and 2000s, there was a boom in psychotherapeutic practices in postsocialist Russia, with an overwhelming presence of psychology in talk shows, media columns, education services, family counseling, self-help books, and personal-growth seminars. Shell-shocked Russians turned to mind training and counseling as a way to adapt to their new market environment. Political and economic transformations were accompanied by a transformation of the self: in order to deal with “biopoliticus interruptus”, homo sovieticus gave way to a psychologized homo economicus. Long repressed, discourses of the self flourished in talk therapies and speech groups in which, under conditions of anonymity and privacy, individuals could say things about themselves that they wouldn’t have confessed even to their close friends or relatives. Russia became a talk show nation: the forms of psychological talk cultivated by TV hosts came to define the way Russians saw themselves as they sought guidance on how to adapt to their new environment.

Supply and demand

There are at least two ways to interpret this psychotherapeutic turn. The first mobilizes the tools of standard economics to analyze the growth of therapeutic services in terms of supply and demand converging under conditions of liberalization. Russia moved from a centrally planned economy to market capitalism by removing state controls and unleashing the forces of the market. Under market conditions, supply meets demand, and pent-up demand leads to a supply boom when the constraints limiting market entry and expansion are lifted. The supply of psychotherapeutic services in the Soviet Union was severely restricted. Individuals were held to blame for affective disorders and social maladaptation, which diverted energies from the building of a socialist society. Care providers and psychologists had to adhere to a strict materialist approach, and subjective approaches to the self were replaced by neurophysiology and rational psychotherapy. Mental health and psychic wellbeing were tools of state control: political opposition or dissent was interpreted as a psychiatric problem, and the KGB routinely sent dissenters to psychiatrists for diagnosis to avoid embarrassing public trials and to discredit dissidence as the product of ill minds. During late-Soviet liberalization and perestroika, new therapeutic approaches were introduced and ideas from the West began to gain influence. This turned into a full psychotherapeutic boom after 1991: market entry conditions were relaxed, as anybody could set up shop as a psikholog or a psikhoterapevt, and entrepreneurs began to advertise their services to the fraction of the public that could afford private counseling. Talk therapy and self-help, virtually nonexistent in the Soviet Union, became a booming industry. Private corporations established human resource departments and began to emphasize the cultivation of soft skills and emotional intelligence. Under volatile market conditions and with the disappearance of Soviet institutions, people strove for stability and points of reference. There was a new demand for treningi (training), koyching (coaching), and personal growth (lichnyi rost) or leadership (liderstvo) seminars. Raising a child also became a new challenge, and anxious families as well as school administrators began to use psychological services to improve performance and guarantee success.

A second interpretation, not necessarily contradicting the first, understands the psychotherapeutic turn in Russia as a symptom of the global expansion of neoliberal capitalism. In social science studies and critical discourse, neoliberalism is identified with notions of individual rationality, autonomy and responsibility, entrepreneurship, and positivity and self-confidence. These discourses and associated techniques constitute the neoliberal subject in ways consonant with neoliberal governmentality. Neoliberalism extends to education and to the self the vocabulary and mindset of economics: individuals are compelled to assume market-based values in all of their judgments and practices in order to amass sufficient quantities of “human capital”, “invest” in skills and capabilities, and thereby become “entrepreneurs of themselves”. They are led to believe that they are autonomous subjects, responsible for their present condition and in control of their own destiny. Those who fail to thrive under such social conditions have no one and nothing to blame but themselves. The cost of social protection, once supported by state programs of social security, is now transferred to the individual or to families and communities; and social ills such as unemployment, poor health, obesity, drug abuse, or school failure are blamed on individuals rather than on the societal system as a whole. Self-development discourse instills a stronger individualism in society while constraining collective identity, and thus provides social control and contributes to preserving the status quo of neoliberal societies. Within the logic of global neoliberalism, the role of government is defined by its obligations to foster competition through the installation of market-based mechanisms for constraining and conditioning the actions of individuals, institutions, and the population as a whole. Neoliberalism is not laissez-faire, but permanent vigilance, activity, and intervention. The rationality of neoliberalism consists of values and principles that must be actively instituted, maintained, reassessed and, if need be, reinserted at all levels of society.

Neoliberalism at work

When he set foot in Saint Petersburg to study the psychotherapeutic complex operating in various state and private sector institutions, Tomas Matza expected to find neoliberalism at work: as he notes, there is “an extensive literature that describes how the neoliberal reforms of privatization and marketization are not just accompanied but in fact depend on the cultivation of particular kinds of citizens—namely, self-sufficient, individualized subjects of freedom able to survive austerity measures such as the withdrawal of state social programs.” But instead of homo neoliberalis in the making, he met concrete individuals with their ideals and hopes, their fears and frustrations. The story of neoliberalism could not give a full account of the way people perceived changes in their environment and in their own selves. The growth of the new psychotherapy market was linked to numerous reasons and motives: happiness, self-realization, improved relations, healing, change from routine, discovery, and learning. Therapists and patients came together in search of an alternative kind of social experience, rooted in a heightened form of togetherness. They described their first taste of group therapy as a kind of electric shock: “It was a new way of thinking, a new point of view. We called each other by first name […] It was shocking how new it was.” Psychotherapy was associated with a liberation of the self, a blossoming of free speech, and a new age of freedom: hardly the imposition of new constraints and disciplines that critics of neoliberalism would have us expect. Besides, care providers identified themselves as political liberals as opposed to supporters of free-market neoliberalism. Their technologies of the self were aimed not so much at the rational actor motivated by self-interest as at a particular kind of individual flourishing in a well-functioning democracy. They straddled a divide between political and economic liberalisms: insofar as they had political programs or opinions, these were for reforms of political practices to achieve more transparency and put a halt to the immoral greed that was corrupting the basic values of society. Psychotherapy had a social and political purpose in Russia; but it was more aligned with the political values of classical liberalism than with the economic imperatives of neoliberalism.

For Tomas Matza, the psychotherapeutic turn in Russia is better described as postsocialist. It was determined by a set of experiences specific to Russia, of which the import of economic disciplines and psychological doctrines from the West was only one element. Shock Therapy attempts to describe Russia’s psychotherapy boom following the collapse of the Soviet Union by attending to various terrains: psychological education camps and municipal counseling services in public schools, adult training and personal growth seminars, messages appearing in the advertising industry or exchanged in TV talk shows, and a psychoneurological outpatient clinic. Tomas Matza studied these various sites through participant observation: he took part in the kids’ camp’s discussions, wrote answers to the questionnaires, drew images and made clay representations of his “internal world”, and attended professional school meetings where the cases of “problem children” were discussed. He shows that psychotherapy inherits from a long history of applied psychology in the Soviet Union. There were several periods when an interest in the subjective factors of human behavior emerged in Soviet science—the 1920s, 1960s, and 1980s. The early efforts to join Freudianism and Marxism were thwarted by the dogmatism of Pavlovian science and the Stalinization of psychology, which was banned from the faculties and recast as a subfield of philosophy. The fact that Soviet psychology was based on Marxism did not do away with the diversity of theoretical concepts and therapeutic approaches, which sometimes paralleled Western psychodynamics and in other cases offered home-grown discourses and concepts. Yet even in the 1970s, psychotherapists could be questioned by the KGB for mentioning Freud in group discussions. Modern practitioners and academics remember the widespread repression and control that characterized late-Soviet psychology: “In the Soviet Union, there was no need for therapy.” Doctors would give moral lectures to their patients, lie to them in an adversarial relation based on deception, and transform the clinic into a “theater of the absurd” in which power was exerted in an erratic and contradictory fashion. However, things began to change with late-Soviet liberalization and perestroika. New approaches to education, healthcare, work, and sports were proposed, emphasizing the “human factor of production” with a huge potential yet to be tapped. More frequent exchanges between American humanistic psychologists and Soviet researchers also spread new therapeutic orientations in the USSR. The rapid expansion of psychotherapeutic services in the reform period was thus prepared by intense discussions and experimentations in the late-Soviet era.

Russian psychotherapy

As a result, Russian therapeutic practices and vocabularies only partly overlap with Western science. Russian professionals developed a lexicon of domestic words to translate or adapt concepts imported from the West, or to propose home-grown versions of talk cures and self-cultivation. Freedom, translated as svoboda, has a more social connotation in Russian than in English: it has historically connoted a form of “freedom with”, an emphasis on the idea that “we are free together”, rather than a limitation on individual freedom. Samootsenka, now translated as self-esteem, was in Soviet times conceived as a transformation of the self that would make self-sacrifice possible. The idioms of dusha (soul), energiia, and garmoniia, which were often used in psychological training sessions, had meanings different from their English equivalents. Through these terms and others, a new language was invented and circulated for thinking about society and the self, providing reassurance and meaning in a time of increasing anxiety and change. Some of the affects produced by psychotherapists have a strong religious undertone: “tears of bitterness and joy” flowed from the eyes of a participant attending a lecture given by American psychologist Carl Rogers in 1986. Some American ideas and mindsets were transmitted wholesale through seminars and book translations; other doctrines were imported from Germany, such as the “systemic constellations” theory of Bert Hellinger (a “Zulu-influenced ontology of trans-generational connectedness”); yet others were produced domestically by best-selling authors such as Vadim Zeland (“transsurfing reality”), Mirzakarim Norbekov (“how to get rid of your glasses”) or Valery Sinelnikov (“love your disease”). Tomas Matza doesn’t expand much on these doctrines, and he presents the content of the psychotherapeutic sessions in a neutral, nonjudgmental way. Another way to look at them would be to assess their scientific value against rational benchmarks, or to offer an internal critique of the ideas and messages they convey. Shock Therapy lacks a detailed description of the therapies that are provided to individuals in a state of shock. But even a faint acquaintance with the self-help literature and personal development methods covered in the book can make the reader highly suspicious of their intellectual or humanistic value. More than the “education of freedom” that their promoters advocate, these commercial methods of self-manipulation seem to provide the “opium of the people” that Marx identified with religion.

Not all the psychotherapeutic work that Tomas Matza attended and describes in Shock Therapy falls into the categories of sham and scam. There is indeed some value in training the emotional intelligence of children, in cultivating the values of teamwork and leadership, or in providing support to people in times of distress. The work of care, whether it addresses the body or the soul, is a valuable endeavor. But it comes at a cost, and this financial burden is not distributed evenly across the Russian population. Tomas Matza compares two different kinds of institutions he was able to observe at close range, the first serving primarily the children of the elite, the second focusing on poor children in difficult circumstances. While both were concerned with children’s interiorities, the first addressed children’s psychology in terms of potential, while the second framed it in terms of pathology and abnormality. The psychotherapeutic turn in post-socialist Russia is associated with social inequality, which it helps produce and reproduce. New forms of care focusing on well-being and the flourishing of the self are generally much more available to the better-off. Psychologists have been enrolled in the cultivation of the new elite, inciting a potential-filled, possessive individualism through the development of techniques of self-knowledge and self-esteem. For the upper and middle classes, parenting has been turned into a commercial enterprise, an activity involving financial investment, expert knowledge, and careful planning. By contrast, in the municipal institutions applying psychological knowledge to public schools, resource constraints and a new management culture have squeezed services into prophylaxis for the “problem child.” Psychological lenses are used for the management of risk and the anticipation of various possible problems: computer addiction, substance abuse, delinquency, various troubles at home, and poor school results. These differential uses of psychology may have the effect of deepening social differences and hierarchies: the soft skills and emotional intelligence acquired through supplementary education can make the difference between success and failure in the market society, while the hasty use of diagnostics with children at risk can deepen the troubles associated with the psychosocial environment.

Close-range concepts

Self-work in Russia is therefore a far more complex enterprise than simple references to the onslaught of neoliberalism would allow for. The “psychological complex” involves both the cultivation of the self and the attention to others, and it has been profoundly shaped by privatization and the emergence of consumer culture in Russia. It provides healing and care, but also reproduces social difference and class structures in a society characterized by deep inequalities. To call this complex assemblage “neoliberal governmentality” misses important details. What is at stake in the turn toward psychological explanations and therapy is not so much the construction of neoliberal subjectivity as a search for new interpretations and new modes of sociality in a society turned upside down by the demise of socialism. Rather than stratospheric notions such as “neoliberalism”, Tomas Matza provides close-range concepts that help understand a specific situation: “psychosociality” describes the warm feeling of togetherness experienced by participants in talk groups and psychotherapy sessions; “precarious care” refers to the provision of care and the cultivation of the self under conditions of precariousness; “commensuration” brings together norms and values belonging to different spheres, such as the ethical and the economic, the political and the individual. The book offers social critique while taking into account the testimonies and feelings of the persons involved with the work of care. Psychologists and psychotherapists have their own views about the social and political effects of their work. They claim to be promoting a democratic spirit and personal emancipation by helping people “learn to be free.” Other practitioners invoke the negative side effects of marketization and rationalization to argue that they are fostering social connection. The paradox is that these claims contradict the social context in which these psychotherapeutic techniques take place: they are complicit with a social hierarchy that they help reproduce, and they feed on the anxieties of the very people they are supposed to assuage.