The Story of the Deadly Virus

A review of Contagious: Cultures, Carriers, and the Outbreak Narrative, Priscilla Wald, Duke University Press, 2008.

We think containing the spread of infectious diseases is all about science. In fact, more than we care to admit, our perception of disease contagion is shaped by fictions: blockbuster movies, popular novels, newspaper headlines, and magazine articles. These fictions frame our understanding of emerging viruses and our responses to global health crises. Call it the outbreak narrative. It follows a formulaic plot that moves through roughly the same steps: emergence in nature or in labs, human infection, transnational contagion, widespread prevalence, medical identification of the virus, epidemiological containment, and final eradication. It features familiar characters: the healthy human carrier, the superspreader, the virus detective, the microbe hunter. It summons mythological figures or supervillains from the past: the poisonous Typhoid Mary from the early twentieth century, the elusive Patient Zero from the HIV/AIDS crisis. Through these fictions, new terms and metaphors have entered our vocabulary: immunodeficiency, false negative, reproductive rate, incubation period, herd immunity, “flattening the curve.” We don’t know the science behind the concepts, but we easily get the picture. Outbreak narratives have consequences: they shape how leaders and the public react to a health crisis, they affect survival rates and contagion routes, they promote or mitigate the stigmatizing of individuals and groups, and they change moral and political economies. It is therefore important to understand the appeal and persistence of the outbreak narrative in order to design more effective and humane responses to the global health crises that lie ahead of us.

The outbreak narrative

Another consequence of living immersed in fiction is that you usually remember only the last episode of the whole drama series. Published in 2008, Priscilla Wald’s book begins with a reference to “the first novel infectious disease epidemic of the 21st century, caused by a brand-new coronavirus.” That epidemic was of course SARS, not COVID, and the “brand-new” coronavirus of the early 2000s was named SARS-CoV-1, as opposed to the more recent SARS-CoV-2. But it is difficult not to read Contagious in light of the ongoing Covid-19 pandemic, and not to apply its narrative logic to our recent predicament. Covid-19 rewrote the script of past epidemic outbreaks but didn’t change it completely. It built on past experience, both real and imagined or reflected through fiction. The scenario of disease emergence was already familiar to the public, and it shaped the way countries responded to the epidemiological crisis. It also demonstrated that living in fiction leaves us unprepared to face the real thing: the countries that achieved early success in containing the virus were those most affected by past outbreaks, especially by SARS, which mainly spread in East Asia. By contrast, the United States, the country from which most of these fictions originate, mounted a disorganized and weak response to the Covid-19 outbreak. We need more than fiction to prepare us for the health crises of the future; we also need better fictions than the conventional outbreak narrative that casts the blame on villains and invests hope in heroes to provide salvation.

As Priscilla Wald reminds us, there was an earlier wave of fictional scenarios in the 1990s that popularized the outbreak narrative in its present form. Blockbuster movies, medical thrillers, and nonfiction books reached a wide public and dramatized the research results that infectious disease specialists were discussing at the time in their scientific conferences and publications. They include the novels Carriers (Patrick Lynch, 1995), Contagion (Robin Cook, 1995), and The Blood Artists (Chuck Hogan, 1998), as well as the movies Twelve Monkeys (dir. Terry Gilliam, 1995), The Stand (dir. Mick Garris, 1994), and Outbreak (dir. Wolfgang Petersen, 1995), and the nonfiction bestsellers The Hot Zone (Richard Preston, 1994), The Coming Plague (Laurie Garrett, 1994), and Guns, Germs, and Steel (Jared Diamond, 1997). Priscilla Wald uses the movie Outbreak, starring Dustin Hoffman and Morgan Freeman, as particularly representative of the genre that came to shape the global imaginary of disease emergence. The opening scene of a desolate African camp decimated by an unknown hemorrhagic virus, as seen through the protection mask of an American epidemiologist, sets the stage for subsequent narratives. The story casts Africa as an “epidemiological ground zero,” a continental Petri dish out of which “virtually anything might arise.” It dramatizes human responsibility in bringing microbes and animals into close contact with (American) human beings and in spreading the disease out of its “natural” environment through the illicit traffic of a monkey that finds its way to a California pet store. It gives the US Army a key role in maintaining public order and makes US soldiers shoot their countrymen who attempt to violate the quarantine. Outbreak fictions often cast the military officer as the villain, sometimes in cahoots with private corporations to engineer bioweapons, and the public scientist as the ultimate savior who substitutes a medical cure for a military solution. Helped by visual technologies such as epidemiological maps, electron microscopes, and close-ups of the virus, experts engage in a race against time to identify the source of the disease and then to determine how to eradicate it. That effort constitutes the plot and storyline of the film: the outbreak narrative.

Healthy carriers and social reformers

The outbreak narrative as it emerged in the mid-1990s builds on earlier attempts to storify disease emergence and contagion. Much as the blockbuster movies and popular novels of the 1990s relied on the work of scientists writing and debating about emerging infections, discussions about disease and contagion in the early twentieth century were shaped by new and controversial research showing that an apparently healthy person could transmit a communicable disease. The idea of a healthy human carrier was one of the most publicized and transformative discoveries of bacteriology. It signified that one person could fuel an epidemic without knowing it or being detected, and it required the curtailment of personal liberties to identify, isolate, and treat or eliminate such a vector of contagion. For the popular press in the English-speaking world, the healthy and deadly carrier took the form of “Typhoid Mary,” an Irish immigrant who worked as a cook and left a trail of contaminations in the families that employed her. She was reluctant to submit to containment or incarceration in a hospital facility and repeatedly escaped the surveillance of public-health officials, assuming a false name and identity to disappear and cause new cases of contagion. Typhoid fever at the time was a “national disgrace” associated with dirtiness and filth. It resulted from the ingestion of fecal matter, as many authors liked to explain, and could be combatted by personal hygiene and proper sanitation of homes and urban space. Typhoid Mary’s refusal to cooperate with public health authorities created a moral panic that combined the perceived threat of immigration, prejudices against Irish female servants, fallen-woman narratives, and violation of the sanctity of the family. In response, the Home Economics Movement emphasized “how carefully we should select our cooks” and made familial and national health a central occupation of the professional housewife.

Communicable disease and the figure of the healthy carrier influenced changing ideas about urban space and social interactions. Focusing on poverty, city life, urban slums, marginal men, migration, deviance, and crime, the Chicago School was one of the first and most influential centers of sociological research in North America. Like other sociologists of his generation, Robert Park began his career as a muckraking journalist and social reformer. While investigating the outbreak of a diphtheria epidemic in downtown Chicago, he was able to plot the distribution of cases along an open sewer that he identified as the source of the infection. This led him to use the concept of contagion as a metaphor for social interactions and cultural transmission. It wasn’t the first time biology provided models for the nascent discipline of sociology. In the view of early commentators, microbes did not just represent social bonds; they created and enforced them, acting as a great “social leveller” unifying the social body. In France, Gabriel Tarde and Émile Durkheim argued about the role of contagion and imitation in explaining social phenomena such as suicide and crime. Communicable disease in particular vividly depicted the connection between impoverished urban spaces and the broader social environment. Calling the city a “laboratory or clinic in which human nature and social processes may be conveniently and profitably studied,” Park and his colleagues from the Chicago School of sociology concentrated their analysis on social interactions in urban formations such as the tenement or slum dwelling, the ethnic enclave or the ghetto, as well as nodes of communication such as points of entry, train stations, and quarantine spaces. The particular association of those spaces with immigrants in the United States intensified nativism and anti-Semitism, as preventive measures disproportionately and inequitably targeted Eastern European Jews. The theories and models of the urban sociologists conceptualized a spatialization of the social and the pathological that would play a great role in the outbreak narrative.

Cold War stories

The outbreak narrative is also heir to the stories of viral invasion, threats to the national body, and monstrous creatures from outer space that shaped the imaginaries of the Cold War. The insights of virology were central to those stories. New technologies of visualization implanted in the public mind the image of a virus attacking a healthy cell and destroying the host through a weakening of the immune system. Viruses unsettled traditional definitions of life and human existence. Unlike parasites, they did not simply gain nutrients from host cells but actually harnessed the cell’s apparatus to duplicate themselves. Neither living nor dead, they offered a convenient trope for science-fiction horror stories envisioning the invasion of the earth by “body snatchers” that transformed their human hosts into insentient walking dead. These stories were suffused with the anxieties of the times: the inflated threat of Communism, the paranoia fueled by McCarthyism, research into biological warfare or mind control, the atomization of society, emerging visions of an ecological catastrophe, as well as the unsettling of racial and gender boundaries. Americans were inundated with stories and images of a cunning enemy waiting to infiltrate the deepest recesses of their being. Conceptual changes in science and politics commingled, and narrative fictions in turn influenced the new discipline of virology, marking the conjunction of art and science. Priscilla Wald describes these changes through an analysis of the avant-garde work of William S. Burroughs, who developed a fascination with virology, as well as popular fictions such as Jack Finney’s bestselling 1955 novel The Body Snatchers and its cinematic adaptations.

The metamorphosis of infected people into superspreaders is a convention of the outbreak narrative. In the case of HIV/AIDS, epidemiology mixed with moral judgments and social conventions to shape popular perceptions and influence scientific hypotheses. Medical doctors, journalists, and the general public found the sexuality of the early AIDS patients too compelling to ignore. In 1987, Randy Shilts’s controversial bestseller And the Band Played On brought the story of the early years of the HIV/AIDS epidemic to a mainstream audience and contributed significantly to an emerging narrative of HIV/AIDS. Particularly contentious was the story of the French Canadian airline steward Gaetan Dugas, launched into notoriety as “Patient Zero,” who reported hundreds of sexual partners per year. In retrospect, Shilts regretted that “630 pages of serious AIDS policy reporting” were reduced to the most sensational aspects of the epidemic, and offered an apology for the harm he may have done. Considering the lack of scientific validity of the “Patient Zero” hypothesis, it is difficult not to see the identification of this epidemiological index case and its transformation into a story character as primarily a narrative device. The earliest narratives of any new disease always reflect assumptions about the location, population, and circumstances in which it is first identified. In the case of HIV/AIDS, the earlier focus on homosexuals, and also on Haitians, intravenous drug users, and hemophiliacs, was an integral part of the viral equation, while origin theories associating the virus with the primordial spaces of African rainforests reproduced earlier tropes of Africa as a continent of evil and darkness. Modern stories of “supergerms” developing antibiotic resistance in the unregulated spaces of the Third World and threatening to turn Western hospitals into nineteenth-century hotbeds of nosocomial infection feed on the same anxieties.

The narrative bias

The outbreak narrative introduces several biases in our treatment of global health crises, a lesson made only too obvious by the international response to Covid-19. It focuses on the emergence of the disease, often bringing scientific expertise into view; but it treats the widespread diffusion of the virus along conventional lines, and has almost nothing to say about the closure or end-game of the epidemic. It is cast in distinctly national terms, and only envisages national responses to a global threat. It presents public health as first and foremost a national responsibility, and treats international cooperation as secondary or even as nefarious. As countries engage in a “war of narratives,” the reality of global interdependence is made into a threat, not a solution. The exclusive focus on discourse and narratives overlooks the importance of social processes and material outcomes. Priscilla Wald’s book reflects many of the biases she otherwise denounces. It is America-centric and focuses solely on fictions produced in the United States. It exhibits a narrative bias that is shared by politicians and journalists who think problems can be solved by addressing them at the discursive level. It neglects the material artifacts that play a key role in the spread and containment of infectious diseases: the protection mask, the test kit, the hospital ventilator, and the vaccine shot are as much part of the Covid-19 story as debates about the outbreak and zoonotic origins of the disease. Priscilla Wald’s Contagious concludes with a vigorous plea to “revise the outbreak narrative, to tell the story of disease emergence and human connection in the language of social justice rather than of susceptibility.” But fictions alone cannot solve the problem of modern epidemics. In times like ours, leaders are tested not by the stories they tell, but by the actions they take and the results they achieve.

Dispatches from a Controlled American Source in Quito

A review of The CIA in Ecuador, Marc Becker, Duke University Press, 2021.

A large literature exists on United States intervention in Latin America. Much has been written about the CIA’s role in fomenting coups, influencing election results, and plotting to assassinate popular figures. Well-documented cases of abuse include the overthrow of the popularly elected president of Guatemala in 1954 and the attempts to assassinate Rafael Trujillo in the Dominican Republic and Fidel Castro in Cuba. Books about the CIA make for compelling stories and sensationalist titles: The Ghosts of Langley, The Devil’s Chessboard, Killing Hope, Legacy of Ashes, Deadly Deceits. They are usually written from the perspective of the agency’s headquarters—which moved to Langley, Virginia, only in 1961—and they concentrate on the CIA leadership or on the wider foreign policy community in Washington—The Power Elite, The Wise Men, The Georgetown Set. Rarely do they reflect the perspective of agents in the field: the station chiefs, the case officers, the special agents charged with gathering intelligence and monitoring operations on the ground. Such narratives require a more fine-grained approach that is less spectacular than the journalistic accounts of grand spying schemes but more true to the everyday work of intelligence officers based in US diplomatic representations abroad. Fortunately, sources are available. There is a trove of declassified intelligence documents made available to the public through the online CREST database under the 25-year program of automatic declassification. In The CIA in Ecuador, Marc Becker exploits this archive to document the history of the Communist Party of Ecuador as seen through the surveillance and reporting activities of the CIA station in Quito during the first decade of the Cold War.

This is not a spy story

This book will disappoint readers with a fascination for the dark arts of the spy trade who expect juicy revelations about covert operations, clandestine schemes, and dirty espionage tricks. There were apparently no attempts to manipulate election results, no secret plots to eliminate or discredit opposition leaders, and no extraordinary renditions to undisclosed locations. Of the two missions of the CIA, the gathering of foreign intelligence and the conduct of covert action, archival evidence indicates that the Quito station stuck strictly to the first during the period covered by the book, from 1947 to 1959. Nor are the names of confidential informants, domestic assets, or deep-cover moles uncovered and exposed: intelligence reports and diplomatic dispatches usually don’t identify their sources by name and only mention their reliability (a “B2” classification signifies that the source is “usually reliable” and that the content is “probably true”). The farthest the author goes in revealing state secrets is to expose the names of the successive station chiefs in Quito—for many decades, US authorities maintained that there was “no such thing as a CIA station,” and diplomatic dispatches only referred to their intelligence as coming from a “controlled American source.” Using public records, Marc Becker was able to reconstruct their career paths subsequent to their postings in Ecuador. They were not grandmaster spies destined for prestigious careers: throughout the 1950s, Quito was a small station for the CIA, and Ecuador was peripheral to Cold War interests. Their intelligence reports do not make for entertaining reading. They speak of bureaucratic work, administrative drudgery, and solitary boredom in a remote posting that rarely lasted more than three years.

In truth, despite the book’s title, the author is not really interested in “the CIA in Ecuador.” He uses CIA documentation and State Department archives to write a detailed history of the left in Ecuador in the postwar period, focusing in particular on the Communist Party, which was the object of intense surveillance by the CIA. The 1950s were an unusually quiet period in the turbulent political life of Ecuador. After a long period marked by political instability and infighting—twenty-one chief executives held office between 1931 and 1948, and none managed to complete a term—Ecuador entered a twelve-year “democratic parenthesis” during which three successive presidents were elected in what critics generally recognized as free and fair elections, finished their terms in office, and handed power to an elected successor from an opposing party. Despite persistent rumors of coups and insurrections, the army stayed in the barracks and public order was broadly maintained, with the occasional workers’ strike, student demonstration, or Indian mobilization, the latter facing the most violent repression. The Communist Party of Ecuador sought to coalesce these social forces into a political movement that would lay the basis for a more just and equal society. Rather than pressing for class struggle and violent revolution, communist leaders advocated the pursuit of democratic means to achieve socialism in coalition with other progressive forces. But their attempts to form a broad anticonservative alliance with the liberals and the socialists repeatedly failed, and they drew minimal support in elections. Their emphasis on a peaceful and gradual path to power eventually led a radical wing to break from the party in the 1960s. After 1959, Ecuador returned to its status quo ante of political volatility and instability, and leftist politics became more fragmentary and confrontational.

Cold Warriors in Ecuador

Unlike Marc Becker, I am more interested in the CIA’s activities and style of reporting, which he indirectly describes, than in the travails of the communist movement in Ecuador. Unsurprisingly, the authors of diplomatic dispatches and intelligence reports were Cold Warriors, and they shared the biases and proclivities of their colleagues and leaders in Washington. They considered world communism the enemy, and they drew the consequences of this antagonism for the conduct of foreign policy in Ecuador. They were convinced, and tried to convince their interlocutors, that the communists were dangerous subversives bent on death and destruction and that they plotted to disturb the smooth functioning of society. They were determined to implicate communists in coup attempts and they repeatedly pointed to external support for subversive movements. They saw the hand of Moscow, and Moscow’s gold, behind every move and decision of the PCE, and they closely monitored contacts with foreign communist parties and their fellow travelers, including by intercepting incoming mail and opening correspondence. Despite their small numbers—estimates of party membership oscillate between 1,500 and 5,000 during the period—communists were suspected of manipulating labor unions, student movements, and intellectual organizations, and of infiltrating the socialist party and progressive local governments. According to American officials, Ecuadorians did not take the communist threat seriously enough. United States representatives pressed the Ecuadorian government to implement strong anticommunist measures and applauded when it did so. The accusations of communists organizing riots and fomenting revolution fed an existing anticommunist paranoia rather than reflecting political realities. Evidence shows that the communists had no intention of resorting to violence to achieve their political goals. But their claims for social justice and labor empowerment were perceived as a threat to the economic and political interests of the United States, and were fought accordingly.

In this respect, and contrary to its reputation as a rogue agency or a “state within the state,” there is no evidence that the CIA was running its own foreign policy in Ecuador. Its objectives were fully aligned with those of the State Department, and there was close cooperation between the CIA station chief and the rest of the embassy’s staff. The different branches of the government represented in Quito, including the military attaché, the cultural affairs officer, and the labor attaché, collaborated extensively around a shared anticommunist agenda. Indeed, Cold War objectives were also shared by other countries allied to the United States, and Becker quotes extensively from the correspondence of the British ambassador, who stood broadly on the same anticommunist positions but expressed them with more synthetic clarity and literary talent. To be sure, there was some petty infighting and administrative rivalry among services within the embassy. The CIA typically exaggerated communist threats, whereas State Department officials dedicated more attention to the much larger socialist party and to violent political organizations inspired by Italian fascism and the Spanish Falange. There were redundancies between official correspondence and covert reporting, and diplomats competed with CIA agents for the same sources and breaking news. Officials in Washington had “an insatiable demand for information” and were constantly fed by a flow of cables containing little valuable information and analysis. Occasionally, a case officer would append to his correspondence a tract or a manifesto that, given the absence or destruction of party archives, provides the historian with an invaluable source of information.

Cognitive biases

In failing to give a realistic assessment of the political forces in Ecuador, CIA officials exhibited several cognitive biases and were prone to misjudgments and errors. They interpreted events through a Cold War lens that colored their understanding of the realities they observed. Their belief in the presence of an international conspiracy that sought to sow chaos across the region bordered on paranoia and made them neglect or distort important pieces of information. They failed to report that the communist party was opposed to involvement in military coups, and they overestimated the communists’ influence in the armed forces. They were blind to the threat posed by proto-fascist movements such as the falangist group ARNE and the populist CFP, suspecting the latter of leftist leanings because its leader was a former communist, even though he had become violently opposed to his former comrades. They overreacted to some news, such as the disruption of an anticommunist movie screening with stink bombs thrown by unidentified students or the spontaneous riots that followed the radio broadcast of Orson Welles’s The War of the Worlds, “a prank turned terribly awry.” They had mood swings that alternated between overconfidence and inflated fears, minimizing the strength of the party while overemphasizing its influence over the course of events. They exhibited an almost pathological urge to uncover external sources of funding for subversive activities, even though they knew that Ecuadorian communists had only minimal contacts with Moscow and that their party’s finances were always in dire straits. They were oversensitive to divisions within the party, providing the historian with valuable information about internal currents and debates, but failed to notice political organizing efforts among Indian communities that provided strong support to the party (in general, indigenous people were a blind spot in the embassy’s reporting: “The Indians are apart and their values are unknown,” mused the ambassador). Like any bureaucracy, the CIA and the State Department fell victim to mission creep: as one officer observed, “There was a lot of information for information’s sake.”

Considering Marc Becker’s many criticisms of US interference and interpretive biases, one wonders what an alternative course of action might have been. The United States might have adhered to a strict policy of neutrality in the hemisphere and refrained from its vehement denunciation of communism, acknowledging that the Communist Party of Ecuador and its supporters were a legitimate political force in the local context. In other words, it might have tried to disconnect Latin America from the broader geopolitical forces that were shaping its Cold War strategy, stating in effect that Ecuador was irrelevant to the pursuit of its global policy objectives. Considering not only their words but the limited means they allotted to CIA surveillance in Ecuador in the 1950s, this is more or less what American policymakers did: only in the turbulent sixties would the United States invest more resources, including covert action, to prevent the expansion of communism following the Cuban revolution and the rise of insurgency movements. Alternatively, at the individual level, officers might have tried to rid themselves of their cognitive biases and to paint a more realistic picture of the political situation, emphasizing not only the threat but also the opportunities raised by the development of the progressive left. This might have been the course pursued by more enlightened diplomats, but considering the political climate prevailing in Washington, where McCarthyism was in full swing and the State Department was decimated by red purges, it would have meant political suicide and instant demotion for the officers involved. Better, from their perspective, to bide their time and adhere to a more conformist line of analysis, serving their political leaders the discourse they wanted to hear.

A revisionist history

The historian is not without his own biases. Marc Becker is a revisionist historian bent on setting the record straight: during the 1950s, the Ecuadorian Communist Party was a progressive force preaching reformism and European-style social welfare programs within the parliamentary system. To demonstrate his case, he sticks to the archival record and provides much more detail for the period from 1949 to 1954, for which sources are abundant and detailed, than for the years after 1955, for which the CREST database contains far fewer documents. Like his sources, he tends to overemphasize the geopolitical importance of Ecuador and Latin America in postwar global history. His concluding chapter on the year 1959 states that “the triumph of revolutionary forces in Cuba is arguably one of the most significant political events of the twentieth century.” He views all activities of US diplomats in Ecuador with suspicion, and tracks in every detail the heavy hand of American interventionism where in fact diplomatic missions were only doing their job of representation, advocacy, and reporting. He detects a running contradiction between the official policy of nonintervention in the internal affairs of other countries and the reality of Americans trying to shape opinions and influence outcomes. In doing so, he doesn’t clearly distinguish between adherence to the principle of non-interference, the pursuit of influence through public diplomacy, and the defense of the national interest. The fact that diplomatic dispatches conclude that a presidential candidate or a policy measure may be more favorable to American interests abroad is not synonymous with meddling in internal affairs: it is the bread and butter of diplomatic activity, even though what constitutes the national interest may be open to democratic debate. In the case of Ecuador during the 1950s, it was in America’s interest to monitor the activities of a communist party that was vehemently opposed to “Yankee imperialist capitalism,” however small and inconsequential its threat to the liberal international order. The fact that diplomatic representatives and intelligence officers pursued this mission with dedication and rigor may be put to their credit, and our understanding of the past is made richer for the documentary record they left behind.

Remnants of “La Coopération”

A review of Edges of Exposure: Toxicology and the Problem of Capacity in Postcolonial Senegal, Noémi Tousignant, Duke University Press, 2018.

Capacity building is the holy grail of development cooperation. It refers to the process by which individuals, organizations, and nations obtain, improve, and retain the skills, knowledge, tools, equipment, and other resources needed to achieve development. Like a scaffolding, official development assistance is only a temporary fixture; it pursues the goal of making itself irrelevant. The partner country, it insists, needs to be placed in the driver’s seat and implement its domestically designed policies on its own terms. Once capacity is built and the development infrastructure is in place, technical assistance is no longer needed. National programs, funded by fiscal resources and private capital, can pursue the task of development and pick up from where foreign experts and ODA projects left off. And yet, in most cases, building capacity proves elusive. The landscape of development cooperation is filled with failed projects, broken-down equipment, useless consultant reports, and empty promises. Developing countries are playing catch-up with an ever-receding target. As local experts master skills and technologies are transferred, new technologies emerge and disrupt existing practices. Creative destruction wreaks havoc on fixed capacity and accumulated capital. Development can even be destructive and nefarious. The ground on which the book opens, the commune of Ngagne Diaw near Senegal’s capital city Dakar, has been made toxic by the poisonous effluents of used lead-acid car batteries that inhabitants process to recycle heavy metals and scrape a living. Other locations in rural areas are contaminated with stockpiles of pesticides that have leaked into soil and water ecosystems.

Playing catch-up with a moving target

Edges of Exposure is based on eight months of intensive fieldwork that Noémi Tousignant spent in residence at the toxicology department of Université Cheikh Anta Diop in Dakar, at an ecotoxicological project center, and at the newly established Centre Anti-Poison, Senegal’s national poison control center. The choice to study the history of toxicology in Senegal through the accumulation of capacity in these three institutions was justified by the opportunity they offered to the social scientist: toxicity, that invisible scourge that surfaced in the disease outbreaks of “toxic hotspots” such as Ngagne Diaw, was made visible and exposed as an issue of national concern by the scientists and equipment that tried to measure it and control its spread. The layers of equipment that have accumulated in these locations appear as “leftovers of unpredictable transfers of analytical capacity originating in the Global North.” Writing about history, but using the tools of anthropology and ethnographic fieldwork, the author combines the twin methods of archeology and genealogy. The first is about examining the material and discursive traces left by the past in order to understand “the meaning this past acquires from and gives to the present.” The second is an investigation into those elements we tend to feel are without history because they cannot be ordered into a narrative of progress and accomplishment, such as toxicity and technical capacity.

Noémi Tousignant begins with a material history of the buildings, equipment, and archives left on-site by successive waves of capacity-building campaigns. The book cover, picturing the analytical chemistry laboratory, sets the stage for the narrative that follows, with its rows of unused teaching benches, chipped tiles, rusty gas taps, and handwritten signs warning not to use the water spigots. The various measuring instruments, sample freezers, and portable testing kits are mostly in disrepair or unused, and local staff describe them as “antiques,” “remnants,” or leftovers of a “wreckage.” They provide evidence of a “process of ruination” by which capacity was acquired, maintained, and lost or destroyed. The buildings of Cheikh Anta Diop University—named after the scholar who first claimed the African origins of Egyptian civilization—speak of a time of high hopes and ambitions. The various departments, “toxicology,” “pharmacology,” “organic chemistry,” are arranged in neat fashion, and each unit envisions an optimistic future of scientific advancement, public health provision, and economic development. The toxicology lab is supposed to perform a broad range of functions, from medico-legal expertise to the testing of food quality and suspicious substances to the monitoring of indicators of exposure and contamination. But in the lab, technicians complained that “nothing worked” and that outside requests for sample testing had to be turned down. Research projects and advanced degrees could only be completed overseas. Capacity existed only as infrastructure and equipment, sedimented over time and now largely deactivated.

Sediments of cooperation

Based on her observations and interviews, Noémi Tousignant reconstructs three ages of capacity building in Senegalese toxicology, from the golden era of “la coopération” to the financially constrained period of “structural adjustment” and finally to a time of bricolage and muddling through. The Faculty of Pharmacy was created as part of the post-independence extension of pharmacy education from a technical degree to the full state qualification, on a par with a French degree. For several decades after independence, the French government provided technical assistants, equipment, budgets, and supplies, with a commitment to maintain “equivalent quality” with French higher education. The motivation was only partly altruistic; it was also self-serving: the university was put under French leadership, with key posts occupied by French coopérants, and throughout the 1960s about a third of its students were French nationals. It allowed children of the many French expats in Senegal to begin their degrees in Dakar and easily transfer to French universities, and it also provided technical assistants with career opportunities that could later be translated into good positions in the metropole. France was clearly in the driver’s seat, and Senegalese scientists and technicians were invited along for the ride. But the belief in equivalent expertise and convergent development embodied in la coopération also bore the promise of a national and sovereign future for Senegal and opened the possibility of African membership in a universal modernity of technical norms and expertise. Coopérants’ teaching and research activities were temporary by definition: they were meant to produce the experts and cadres who would replace them.

The genealogy of the toxicology discipline itself delineates three periods within French coopération: from postcolonial science to modern state-building to Africanization. The first French professor to occupy the chair of pharmaceutical chemistry and toxicology in Dakar described in his speeches and writings “a luxuriant Africa in which poison abounds and poisoning rites are highly varied.” His interest in traditional poisons and pharmacopeia was not motivated solely by the lure of exoticism: “tropical toxicology” could analyze African plant-based poisons to solve crimes, maintain public order, and identify potentially lucrative substances. In none of his articles published between 1959 and 1963 did the French director mention the toxicologist’s role in preventing toxic exposure or mitigating its effects at the population level. His successors at the university maintained French control but reoriented training and research to fulfill the needs of national construction. They acquired equipment and developed methods to measure traces of lead and mercury in Senegalese fish, blood, water, and hair, while arguing that toxicology was needed in Senegal to accompany intensified production in fishing and agriculture. But they did not emphasize the environmental or public health significance of these tests, and their research did not contribute to the strengthening of regulation at the national and regional level. Africanization, touted as a long-term objective since independence, was achieved only with the abrupt departure of the last French director in 1983 and his replacement by Senegalese researchers who had obtained their doctoral degrees in France. But it coincided with the adoption of structural adjustment programs and their translation into budget cuts, state-sector downsizing, and shifting priorities toward the private sector.

After la coopération

Ties with France were not severed: a few technical assistants remained, equipment was provided on an ad hoc basis, and Senegalese faculty still relied on their access to better-equipped French labs during their doctoral research or for short-term “study visits.” But the activation of these links came to rely more on the continuation of friendly relations and favors than on state-supported programs and entitlements. French universities donated second-hand equipment and welcomed young African scientists to fill needed positions in their research teams. They did the occasional favor of testing samples that could no longer be analyzed with the broken-down equipment in Dakar. The toxicology department at Cheikh Anta Diop University could not keep up with advances in science and technology, as the emergence of automated analytical systems and genetic toxicology made cutting-edge research more expensive and thus less accessible to modestly funded public institutions. Some modern machines were provided by international aid agencies as part of transnational projects to monitor the concentration of heavy metals, pesticides, and aflatoxins—often accumulated as the result of earlier ill-advised development projects such as the large-scale spraying of pesticides in the Sahel to combat locust and grasshopper invasions. But, as Tousignant notes, such scientific instruments “are particularly prone to disrepair, needing constant calibration, adjustments, and often a steady supply of consumables.” The “project machines” provided the capacity to test for the presence of some of these toxins in food and the environment, but they did not translate into regulatory measures and soon broke down for lack of maintenance.

The result of this “wreckage” is a landscape filled with antique machinery, broken dreams, and “nostalgia for the futures” that the infrastructures and equipment promised. Abandoned by the state, some research scientists and technicians left for the private sector and now operate from consultancy bureaus, local NGOs, and private labs with good foreign connections. Others continue to uphold the ideal of science as a public service and try to attract contract work or are occasionally enlisted in transnational collaborative projects. Students and researchers initiate low-cost, civic-minded “research that can solve problems,” collecting samples of fresh products, powdered milk, edible oils, and generic drugs to test their quality and composition. Meanwhile, the government of Senegal has ratified a series of international conventions bearing the names of European metropoles—Basel, Rotterdam, Stockholm—addressing global chemical pollution and regulating the trade in hazardous wastes and pesticides. Western NGOs such as Pure Earth are mapping “toxic hotspots” such as Ngagne Diaw and contracting with the Dakar toxicology lab to provide portable testing kits and measure lead concentration levels in soil and blood. Enterprising state pharmacologists and medical doctors have taken over an unused wing of Hôpital Fann on the university campus to create a national poison control center, complete with a logo and an organizational chart but devoid of any equipment. Its main activity is a helpline for people bitten by venomous snakes.

Testing for testing’s sake

Toxicological monitoring now seems subordinated to the imperatives of global health and environmental science. Western donors and private project contractors are interested in the development of an African toxicological science only insofar as it can provide the data points, heat maps, and early warning systems for global monitoring. The protection and healing of populations should be the ultimate goal, and yet the absence of a regulatory framework, let alone a functional enforcement capacity, guarantees that people living in toxic environments will be left on their own. In such conditions, what’s the point of monitoring for monitoring’s sake? “Ultimately, the struggle for toxicological capacity seems largely futile, unable to generate protective knowledge other than fragments, hopes, and fictions.” But, as Noémi Tousignant argues, these are “useful fictions.” First, the maintenance of minimal monitoring capacity, and the presence of dedicated experts, can ensure that egregious cases of “toxic colonialism,” such as the illegal dumping of hazardous waste, will not go undetected and unanswered. Against the temptation to consider the lives of the poor as expendable, and to treat Africa as waste, toxicologists can act as sentinels and render visible some of the harm that populations and ecosystems have to endure. Second, like the layers of abandoned equipment that document the futures that could have been, toxicologists highlight the missed opportunity of protection. “They affirm, even if only indirectly, the possibility of—and the legitimacy of claims to—a protective biopolitics of poison in Africa.”

This Voice Sounds Black

A review of The Race of Sound: Listening, Timbre, and Vocality in African American Music, Nina Sun Eidsheim, Duke University Press, 2019.

I close my eyes and I can hear Billie Holiday’s black voice filling the room. Her voice, described as “a unique blend of vulnerability, innocence, and sexuality,” speaks of a life marked by abandonment, drug abuse, romantic turmoil, and premature death. Hearing Billie Holiday sing the blues also summons her black ancestors’ history of enslavement, hard labor, racial segregation, and disfranchisement. I can imagine the black singer, cigarette in hand, eyes closed, bearing the sorrow of shattered hopes and broken dreams. But wait. I open my eyes and what I see on the screen is a seven-year-old Norwegian named Angelina Jordan performing on the variety show Norway’s Got Talent. Her imitation of Billie Holiday is almost perfect: pitch, rhythm, intonation, and vocal range correspond to her model down to the smallest detail. Here is a combination of a child’s frail body and the sound of an iconic singer whom we usually hear through the narrative of her unfortunate life and perceived ethnicity. Impersonations of African American singers can be problematic: as Nina Eidsheim notes, they bring to mind a history of blackface minstrelsy and racist exploitation, and a present still marked by cultural misappropriation and racial stereotypes. But her point lies elsewhere: by assigning a race or ethnicity to the sound of a voice, we commit a common fallacy that helps reproduce and essentialize the notion of race. We hear race where, in fact, it isn’t.

Hearing race where it isn’t

Do black voices sound different? Biologically speaking, it makes no sense to assign a racial identity to the sound of a voice. Vocal timbre is determined by the diameter and length of the vocal tract and the size of the vocal folds, none of which is affected by race or ethnicity. These components vary with gender, age, and enculturation into “communities of language and speech.” The training of the voice, like the training of the body, affects the development of vocal tissue, mass, musculature, and ligaments. Training or “entrainment” takes place both formally and informally, involving vocal practices such as speaking, singing, acting, imitating, crying, or laughing. We grow into a certain voice tone, and this vocal timbre comes to designate an essential part of our identity. Through voice, we perform who we are or who we want to be. Voice is a collective, cultured performance, unfolding over time and situated within a community. Sociology can help explain how voice comes to sound the way it does. Drawing on his observations of soldiers in World War I, Marcel Mauss described how people in different societies are brought up to walk, stand, sit, or squat in very different ways. Similarly, Pierre Bourdieu showed in La Distinction how the tone of one’s voice, the habit of speaking from the tip of one’s mouth or from the depth of one’s throat, is influenced by social class and status and correlates with other social practices such as eating or engaging in cultural activities. Nina Eidsheim extends these observations on bodily techniques and cultural styles to the ways everyday vocal training is manifested corporeally and vocally. More importantly, she shows that voice does not arise solely from the vocalizer; it is created just as much within the process of listening.

Disciples of the Greek philosopher Pythagoras used to listen to their master from behind a veil in order to better concentrate on his teachings. If an “acousmatic sound” designates a sound that is heard without its originating cause being seen, the “acousmatic question” arises when one asks who the person is that we hear singing or talking without seeing him or her. It is assumed we can know a person’s identity through the sound of his or her voice: using aural cues, we can guess the age, gender, and ethnicity of the person with only a limited margin of error. From this, we infer that the voice can give us access to the interiority, essence, and unmediated identity of the person. To have a voice is to have a soul, and to hear a voice is to access the soul. Nina Eidsheim shows that this belief in voice as an expression of the true self is based on an illusion: the listener projects onto the voice an individual essence and a racialized identity of his or her own making. In order to dispel that illusion, and to debunk the myth of essential vocal timbre, she offers three postulates that sustain her analysis of voice as critical performance practice. Voice is not singular; it is collective. Voice is not innate; it is cultural. Voice’s source is not the singer; it is the listener. Armed with these three basic tenets, she provides many examples of how we answer the “acousmatic question” and project a racialized identity onto a voice we consider “black.”

National schools of singing

Classical vocal artists undergo intense training, much of which is dedicated to learning to hear their own voices as the experts hear them. Classical vocal pedagogy is built upon the assumption that it is possible to construct timbre, and national schools of singing have different ways of shaping a voice into a distinctive artistic performance. The difference between classical renditions of the same song, Lied, or opera in Paris, London, Vienna, or Moscow has nothing to do with the race or place of birth of the singer and is entirely based on the way the singer was schooled and trained to perform. For instance, as Eidsheim notes, the French school of singing insists on the “attaque,” a very strong beginning that is created by a powerful inward thrust of the abdomen. The result is a held sound that is slightly above pitch, with a pushed and sharp-sounding phonation. Singing the French repertoire requires not only a familiarity with the numerous French liaison rules and constant vowel flow within and between words, which a French lyric diction coach can provide, but also a mastery of the attaque and other singing techniques that the French classical tradition has developed. But classical voice teachers also believe each voice has to sound “healthy,” “authentic,” and “natural.” This is where race comes in: most teachers, particularly in the North American context, believe they can always tell the ethnicity of a singer by his or her vocal timbre, and they train their students to cultivate what they call their “ethnic timbre” or “unique color.” An ethic of multiculturalism has penetrated vocal pedagogy: some specialists go so far as to criticize ignorant teachers who have not been exposed to a variety of racial timbres for “homogenizing” their students’ voices. Making racial judgments about voice becomes a self-fulfilling prophecy: for performers, teachers, and listeners alike, voice begins to be heard through racial filters and categories.

For most of their history, opera houses in the United States have been exclusively white. Desegregating classical music took time and effort, and black singers had to overcome many obstacles and prejudices. Segregation prohibited African American singers from taking lessons with white teachers or singing in integrated contexts. Those who performed classical music had to share the same spaces and the same programs with the minstrel repertoire, burlesque shows, and Negro spirituals. It was difficult, if not impossible, for those performers to advance their careers without reinforcing stereotypes. The first African American singers to perform classical repertoire for large interracial audiences drew a great deal of attention to their blackness. They were given nicknames such as “the black swan” or “the black Patti,” and their voices were described as “husky, musky, smoky, misty,” retaining their “savage character” and imbued with the “sorrow of their race.” A wave of African American operatic divas triumphed on stage during the 1970s and 1980s, breaking the “Porgy and Bess curse” that had relegated their predecessors to singing only a limited part of the repertoire. But even now, singers do not come to the operatic tradition on an equal footing. There is resistance to casting African American tenors as romantic leads, and to portraying interracial romances on stage. It is easier for African Americans to succeed as baritones or basses because the roles written for these vocal types are typically villains. Visual blackness is projected onto auditory timbre, resulting in the perception of sonic blackness. The world of opera is based on the willing suspension of disbelief: the tenor may be too fat, the soprano dowdy and old, and yet the audience accepts what is on stage as a plausible fiction for the sake of enjoyment. But what if Othello isn’t black, or if the Romeo and Juliet couple is interracial?

Projections of identity

Audiences “hear” race when they see a black person singing; they also perceive gender and other markers of identity. It is often believed that a feminine voice is higher in pitch than a masculine one. In fact, there is a considerable area of overlap between male and female voices. And timbre plays a key role in the gendered reading of voice: it is how voices are colored and timbrally mediated that determines whether they are perceived as male or female. Nina Eidsheim illustrates the importance of audiences’ projections of gender categories by taking up the life of Jimmy Scott, an artist who defied categorization. Scott didn’t fit the model of the African American male jazz artist. He was born with a hormonal condition that prevented his voice from changing at puberty. The condition also stopped Scott’s body from growing after the age of twelve. “Little Jimmy Scott” achieved early commercial success but then suffered a long period of oblivion and was rediscovered by audiences and the music world only when he reached old age. Although he always described himself as “a regular guy,” he transcended gender distinctions, thus becoming uncanny, transgressive, and ripe for projection, misidentification, and dismissal as burlesque or play. On many occasions, record covers didn’t feature his picture or give credit to his artistry, and his “neutered” voice was detached from any particular gendered body. When he did appear under his own name, his unique identity was doubled by identities and significations not his own. He was perceived as a masculine woman, a homosexual, a transsexual, or a freak. Listeners participated in the co-creation of Scott’s voice and overall gender identity by projecting familiar stereotypes onto a complex artist.

Audiences project a gendered and racialized identity onto a voice, thereby changing the perception of the performer’s artistry. But racialization is not reserved for the human voice: the popular discourse about the “race of sound” is equally present in the digital realm, where voice is converted into zeros and ones. Nina Eidsheim examines the case of the vocal synthesis software Vocaloid, which enables songwriters to generate singing by simply typing the lyrics and notes of their composition and then choosing a “vocal font” to interpret their tune. While Vocaloid is far from the first voice synthesis program, it was the first specifically created as a commercial, consumer-oriented music product. Fan communities formed around the voice characters that the software enabled, which were given first names such as LOLA, LEON, or MIRIAM by the company that produced them. But while LOLA was marketed as a black soul singer’s voice and used samples from a Jamaican artist, users didn’t hear her voice as “black.” Instead, the sound character was described as “a British singer with a Japanese accent” who “lisps like a Spaniard,” and the use of the vocal font fell mostly outside the register of soul music. Vocaloid-created music feeds into YouTube channels with anime character illustrations, even though the original font characters have been “retired” and are no longer commercially available. The anime genre allows for a post-racial representation of facial traits, immersed in an Asian imaginary of misty eyes and colorful hair. Subsequent Vocaloid characters such as Hatsune Miku have transformed into “platforms people can build on,” and their hologram projections are displayed in live concerts where cosplay fans don the attire of their favorite characters. The genie has definitely escaped the racial box its creators designed for it.

I have a dream

The Race of Sound is built on a strong assumption: voice in itself is neither black nor white, and the projection of race takes place in the ear of the beholder as much as it is shaped by the entrainment of the vocalist into speaking or singing communities. The perpetuation of racialized vocal timbre goes a long way in explaining the entrenched nature of structural racism in our societies. As Nina Eidsheim underscores, “For every time that Holiday is heard as and reduced to the archetypal tragic black woman, people are turned away from jobs or housing opportunities based on reductions of their voices to assumed nonwhite identities.” But judgments about the nature of voice go much deeper and rest on fundamental beliefs about sound and listening. We practice the “cult of fidelity” by assuming that sound and vocal timbre are stable and knowable, and we project onto the sonic world fixed categories that shape our perception and representation of what we hear. Therefore, to debunk myths about race as an essential category, one must deconstruct the way we think about sound, music, and listening. This will not only allow us to become more enlightened listeners, but also help uphold the status and skills of sound performers. More than stereotypes about the tragic lives of black women, it was style and technique that allowed Billie Holiday to bring dignity, depth, and grandeur to her performances. Understanding vocal timbre as an expression of skill, artistry, and communicative intention will help us appreciate the performance of great artists by judging them not by the color of their skin but by the content of their creative ability.

South Korea Meets the Queer Nation

A review of Queer Korea, edited by Todd A. Henry, Duke University Press, 2020.

On March 3, 2021, Byun Hui-su, South Korea’s first transgender soldier, who had been discharged from the military the year before for having gender reassignment surgery, was found dead in her home. Her apparent suicide drew media attention to transphobia and homophobia in the army and in South Korean society at large. According to Todd Henry, who edited the volume Queer Korea published by Duke University Press in 2020, “LGBTI South Koreans face innumerable obstacles in a society in which homophobia, transphobia, toxic masculinity, misogyny, and other marginalizing pressures cause an alarmingly high number of queers (and other alienated subjects) to commit suicide or inflict self-harm.” Recently, people and organizations claiming LGBT identity and rights have gained increased visibility. The city of Seoul has had a Gay Pride parade since 2000, and in 2014 its mayor Park Won-soon suggested that South Korea become the first country in Asia to legalize gay marriage—but conservative politicians as well as some so-called progressives blocked the move, and the mayor took his own life in 2020 amid a #MeToo scandal. Short of recognizing same-sex unions, most laws and judicial decisions protecting LGBT rights are already on the books or established in case law, and society has moved towards a more tolerant attitude regarding the issue. Nonetheless, gay and lesbian Koreans still face numerous difficulties at home and work, and many prefer not to reveal their sexual orientation to family, friends or co-workers. Opposition to LGBT rights comes mostly from Christian sectors of the country, especially Protestants, who regularly stage counter-protests to pride parades, carrying signs urging LGBT people to “repent from their sins.” In these conditions, some sexually non-normative subjects eschew visibility and remain closeted, or even give up sexuality and retreat from same-sex communities as a survival strategy.

Queer studies in a Korean context

There is also a dearth of books and articles addressing gay and lesbian cultures or gender variance in South Korean scholarship. Unlike the situation prevailing on North American university campuses, queer studies still haven’t found a place in Korean academia. Students at the most prestigious Korean universities (SNU, Korea University, Yonsei, Ewha…) have created LGBT student groups and reading circles, but graduate students who specialize in the field face a bleak employment future. Many scholars who contributed to Queer Korea did so from a perch at a foreign university or from second-tier colleges in South Korea. This volume nonetheless demonstrates the vitality of the field and the fecundity of applying a queer studies approach to Korean history and society. The authors do not limit themselves to gay and lesbian studies: a queer perspective also includes cross-gender identification, non-binary identities, and homosocial longings that fall outside the purview of sexuality. Queer theory also takes issue with a normative approach emphasizing political visibility, human rights, and multicultural diversity as the only legitimate forms of collective mobilization. Queer-of-color critiques point out that power dynamics linking race, class, gender expression, sexuality, ability, culture, and nationality influence the lived experiences of individuals and groups that hold one or more of these identities. Asian queer studies have shown that tropes of the “closet” and “coming out” may not apply to societies where the heterosexual family and the nation trump the individual and inhibit the expression of homosexuality. In addition, as postcolonial studies remind us, South Korea is heir to a history of colonialism, Cold War, and authoritarianism that has exacerbated the hyper-masculine and androcentric tendencies of the nation.

Some conservatives in South Korea hold the view that “homosexuality doesn’t exist in Korean culture” and that same-sex relations were a foreign import coming from the West (North Koreans apparently share this view). This is, of course, absurd: although Confucianism repressed same-sex intercourse and limited sexuality to reproductive ends, throughout Korean history some men and women are known to have engaged in homoerotic activity and to have expressed their love for a person of the same sex. To limit oneself to the twentieth century, there is a rich archival record relating to same-sex longings and sensuality, cross-gender identification, and non-normative intimacies that the authors of Queer Korea were able to exploit. Homosexuality didn’t have to be invented or imported: it was present all along, albeit in different cultural forms and personal expressions. Close readings of literary texts, research into historical archives, surveys of newspapers and periodicals, visual analysis of movies and pictures, and participatory observation or social activism allow each contributor to produce scholarship on a neglected aspect of Korean history and society. But it is also true that people who were sexually attracted to the same sex lacked role models or conceptual schemes that would have helped them make sense of their inclination. They were kept “in the dark” about the meaning of homosexuality as anything but a temporary aberrant behavior, a perverted desire that ordinary men “slipped” or “fell into” (ppajida), especially in the absence of female partners. The strong bonds that girls and young women developed in the intimacy of all-female classrooms and dormitories were viewed with more leniency, but were considered a temporary arrangement before they entered adulthood and marriage. As a result of the authoritarian ideology of the family-state, official information about non-normative sexualities such as homosexuality was highly restricted. Many men and women attracted to the same sex were confused and morally torn about their desires.

The elusive Third Miracle of the Han River

An optimistic view holds that sexual minority rights will follow the path of economic development and democratization, only with some delay. According to this view, the “miracle of the Han river” occurred in three stages. A country totally destroyed by the Korean War transformed itself in less than three decades from a Third World basket case into an Asian economic powerhouse, becoming the 12th largest economy in terms of GDP. The second miracle occurred when democratic forces toppled the authoritarian regime and installed civilian rule and democratic accountability. The third transformation may be currently ongoing and refers to the mobilization of civil society to achieve equal rights for all, openness to multiculturalism, and women’s empowerment. But this teleological view neglects the fact that an emerging market economy can always shift into reverse: economic crises may sweep away hard-won gains, the rule of law may be compromised by ill-fated politicians, and social mobilizations may face a conservative backlash. This is arguably what is happening in South Korea these days. To limit oneself to sexual minority rights, the current administration has backpedalled on its promise to pass an anti-discrimination law; the legalization of same-sex marriage still faces strong opposition; and homophobic institutions such as the army or schools fail to provide legal protection for gender-variant or sexually non-normative persons. The failure of LGBT communities to adopt a distinctive gay, lesbian, or trans culture and follow the path of rights-based activism should not be seen as an incapacity to challenge the hetero-patriarchal norms of traditional society in favor of a transgressive and non-normative identity politics. As John (Song Pae) Cho notes, “For Korean gay men who had been excluded from the very category of humanity, simply existing as ordinary members of society may be considered the most transgressive act of all.”

The current backlash against homosexuality is not a return to a previous period of sexual repression and self-denial. It is triggered by economic necessity in the face of financial insecurity, labor market flexibility, and a retreating welfare state. John Cho shows that the three phases of male homosexuality within South Korea’s modern history were intrinsically linked to economic development. The “dark period” of South Korean homosexuality during the late developmentalist era, from the 1970s to the mid-1990s, was followed by a brief flowering of homosexual communities fueled by the Internet and the growing economy. But this community-building phase was undermined by the family-based restructuring that accompanied South Korea’s transformation into a neoliberal economy. As a response to the IMF crisis of 1997, the Korean state revived the older ideology of “family as nation” and “nation as family.” It used family, employment, and other social benefits to discriminate against non-married members of society and discipline non-normative populations who did not belong to the heterosexual nuclear family. Many single gay men in their thirties and forties were forced to “retreat” and “retire” from homosexuality to focus on self-development and financial security, which often took the form of marriage to the opposite sex. Other gay men turned to money as the only form of security in a neoliberal world. In her chapter titled “Avoiding T’ibu (Obvious Butchness)”, Layoung Shin shows that young queer women who used to cultivate a certain masculinity, wearing short hair and young men’s clothing to emulate the look of boy-band idols, reverted to a strategy of invisibility and gender conformity to avoid discrimination at school and on the job market. The choice of invisibility is rendered compulsory in the army, where the Korean military even uses “honey traps” on gay dating apps to root out and expel gay military personnel.

Fighting against homophobia and transphobia

In such a context, developing queer studies in South Korea goes against the grain of powerful societal forces, and this may account for the militant tone adopted by many contributors to this volume. John Cho concludes his article on “The Three Faces of South Korea’s Male Homosexuality” by stating that the homophobic backlash is “ushering in a new period of neofascism in Korean history.” Layoung Shin emphasizes that “we cannot blame young queer women’s avoidance of masculinity,” and formulates the hope that “our criticism may offer them the courage to not fear punishment and harassment or bullying at school, which an antidiscrimination bill would remedy.” Timothy Gitzen exposes the “toxic masculinity” of South Korea’s armed forces where, on the basis of an obscure clause in the military penal code, dozens of soldiers who have purportedly engaged in anal sex are hunted down and imprisoned, even though they met their partners during sanctioned periods of leave and in the privacy of off-base facilities. An independent researcher and transgender activist named Ruin, who self-identifies as a “zhe,” shows that bodies that do not conform to strict boundaries between men and women face intense scrutiny and various forms of discrimination, consolidated by institutions and norms such as the first digit in the second part of national ID numbers, which are used for all kinds of procedures, from getting a mobile phone to registering for employment and social benefits. Zhe claims that “this problem cannot be solved by legal reform; on the contrary, abolishing these legal structures altogether may be a more fundamental and effective solution,” as ID numbers were introduced to exclude and persecute “ppalgaengi” citizens suspected of pro-Communist sympathies during the Korean War. Todd Henry, the volume editor, notes that homophobia and transphobia are not limited to South (and North) Korea and that queer and transgender people in the United States face the added risk of being brutally murdered by gun-toting individuals.

But the most transgressive move in Queer Korea may be the attempt to reframe history and revisit the literary canon using queer lenses and critical approaches inspired by queer theory. Remember that some conservative critics pontificate that “homosexuality didn’t exist in Korea” before it was introduced from abroad. In a way they are right: the word “same-sex love” (tongsongae) was translated from the Japanese dōseiai and was introduced under colonial modernity at the same time as “romantic love” (yonae) and “free marriage” (chayu kyorhon). Colonial society allowed certain groups, such as schoolgirls, to engage in spiritual same-sex love to keep young people away from heterosexual intercourse. Pairs of high school girls formed a bond of sistership (ssisuta) and vowed they would “never marry and instead love each other eternally.” But during this period, “love” had little to do with sexual and romantic desire, and society relied on conjugal and filial conventions that privileged men at the expense of women. High school girls were expected to “graduate” from same-sex love and to serve as “wise mothers and good wives” (hyonmo yangcho). Those who didn’t and who tragically committed double suicides (chongsa) or led their lives as New Women (shin yoja) attracted a great deal of contentious debate and literary attention. Meanwhile, namsaek (“male color”) and tongsongae (homosexuality) between men were medicalized and pathologized as abnormal behaviors, discussed along the same lines as rape, bigamy, and sexual perversion (songjok tochak). Whereas male spiritual bonding (tongjong) and physical intimacy known nowadays as skinship were tolerated and even sometimes encouraged, there seems to have emerged a fixation on anal sex (kyegan, “chicken rape”) that is shared today by the military and conservative Christian groups.

Drag queens and cross-dressers

Traditional Korea also had its drag queens and cross-dressers. The male shamans and healers (mugyok, nammu, baksu), female fortunetellers and spiritists (mudang, posal), and the so-called flower boys (hwarang) practiced cross-dressing, sex change, and gender fluidity avant la lettre. Transgendered shamans passed as women by dressing, talking, and behaving as women, while women practitioners of kut ceremonies donned kings’ and warriors’ robes and channelled the voice of male gods and spirits. Despised by traditional Korean society, they formed guilds and associations under Japanese occupation and assimilated with official Shintō religion to gain political favor. Under their theory of “two peoples, one civilization,” Japanese scholars claimed that Korean shamanism and Japanese Shintō shared a common origin. Meanwhile, well-known historians such as Ch’oe Nam-son and Yi Nung-hwa exploited the precolonial traditions of these marginalized women and men to forge a glorious story of the nation, one that re-centered Korea and Manchuria in a larger continental culture of shamanism. Korea’s colonial modernity also had a “queer” writer in the person of Yi Sang (1910-37), whose pen name could be transliterated as “abnormal” or “odd,” and who cultivated a Bohemian style inspired by European dandyism and avant-garde eccentricity. During the Park Chung-hee era (1963-79), the suppression of homosexuality didn’t mean that unofficial and popular representations of non-normative sexualities were absent. In fact, both reports in weekly newspapers and gender comedy films were rife with such representations, of which queer populations were shadow readers and viewers. In a long and well-documented article, Todd Henry shows that South Korea boasts a long but largely ignored history of same-sex unions, particularly among working-class women. Journalists working for pulp magazines routinely covered female-female wedding ceremonies from the 1950s to the 1980s. In “A Female-Dressed Man Sings a National Epic,” Chung-kang Kim analyzes the story of the movie Male Kisaeng (1969), the Korean equivalent of the gender comedy film Some Like It Hot.

Queer studies are underdeveloped in South Korea. In an academy that remains uninterested in, if not hostile to, queer studies, it takes some courage to stake one’s career on the development of the field. This explains the militant tone adopted by some contributors, who mix scholarship and social activism. In a society that has often been framed in terms of ethnoracial and heteropatriarchal purities, they have a lot to bring to contemporary debates by showing how Korea has always been more diverse, and sometimes more tolerant of diversity, than dominant representations would have us believe. As one of the authors claims, “homosexuality is not a ‘foreign Other’ that has been imported only into the country as part of the phenomenon of globalization. It likely has always existed as a ‘proximate Other’ within the nation itself.” And yet, Queer Korea appears at a time when the LGBT movement seems to be in retreat. The stigmatization and marginalization of sexual minorities continue unabated, and the emergence of LGBT organizations, film festivals, and political organizations during a period that witnessed the establishment of democratic institutions has given way to individual strategies of invisibility and retreat. Most queer subjects avoid the kind of public visibility that typically undergirds identity politics. Even politicians sympathetic to gay and lesbian rights avoid taking positions in this fraught context for fear of “homophobia by association”—they might be swept up in collective culpability, just as the families and colleagues of ppalgaengi (Reds) were targeted by “guilt by association” under authoritarian rule. Queer studies in Korea, and Korean queer theory, will not necessarily follow the path taken by the discipline elsewhere. But this volume definitely puts it on the map.

Digital Humanities and Sound Studies

A review of Digital Sound Studies, edited by Mary Caton Lingold, Darren Mueller, and Whitney Trettien, Duke University Press, 2018.

Nowadays young PhDs majoring in the social sciences and the humanities often list an interest in sound studies when they enter the academic job market. Likewise, digital humanities is a booming field encompassing a wide range of theories and disciplines bound together by an interest in digital tools and technologies. There is a premium on listing these categories as fields of interest on one’s CV, even though the young scholar’s specialization may lie in more traditional disciplines such as English literature, modern history, or American studies. This is what economists call job market signaling: by associating themselves with “hot” topics, potential new hires put themselves in hot demand and differentiate their profiles from more conventional competitors. And yet, digital humanities and sonic materials have so far had a limited impact on social science scholarship. The humanities remain text-centric and bound by technologies inherited from the printing press and the paper format. The reproduction of sound is ubiquitous, and digital technologies are everywhere but in the content of academic journals and university syllabuses. Student evaluation is still mostly based on silent modes of learning such as final essays, midterm exams, and reading responses. Sonic modes of participation such as asking questions, providing oral feedback, and exchanging ideas with peers during class discussions carry limited weight compared to evaluation metrics based on the written text.

A new age of digital acoustics

In a way, digital humanities and sound studies are a story of literary scholars catching up with the times. What isn’t digital these days? We live our lives immersed in digital environments and aided by digital devices that transform the way we work, relax, and communicate. The sounds of nature and of city life have given way to artificial soundscapes shaped by recorded music and transmitted signals. We live in an age where a new orality, sustained by telecommunications, radio, television, and other electronic devices, has partially supplanted the written word and the visual cue. Almost all college students now have an audio and video device in their pocket—the challenge is rather to make them silence their smartphone and concentrate on the aural and visual environment of the classroom as opposed to their earbuds and small screens. It has become standard to include video and audio files in PowerPoint presentations and to use multimedia material in humanities work across all fields. As the editors of Digital Sound Studies note, “It has never been easier to build and access sonic archives or incorporate sound into scholarship.” Social scientists and humanities scholars who have grown up alongside digital technologies and audio equipment are comfortable using them in their research and in their teaching. So why not make digital sound itself the object of enquiry?

Despite its societal impact and economic value, technology is not the primary engine of change in the academy. The real game changer is money. Monetary incentives, reinforced by institutional recognition, are what makes the academic world go round. The editors of this volume are very open about it: “One of the reasons that digital humanities has burgeoned is that there’s money behind it.” Take the case of Joanna Swafford, from Tufts University. As a PhD student specializing in Victorian poetry, she would have faced a dull doctoral environment and a bleak employment future. Instead, gaining some programming and web development skills, she designed Songs of the Victorians, an archive of Victorian song settings of contemporaneous poems. She went on to create Augmented Notes, a software tool that allows users to integrate an audio file with a score image and a text commentary so that everyone, regardless of musical literacy, can follow the audio, the score, and the written commentary together. She was supported in her endeavor by multiple scholarships, research grants, fellowship programs, and skill-upgrading sessions in the digital humanities. Her case is not isolated: enterprising scholars in humanities departments everywhere are riding the digital wave to get equipment and research funding that their more classically inclined colleagues can only dream of. And they are adding sound and music to the mix in order to create a multi-sensory and multimodal experience.
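The core mechanism behind a tool of this kind is an alignment between playback time and positions in the score. The book does not spell out the implementation, so the following is only a minimal sketch under that assumption; the measure timings are hypothetical, and Python is used purely for illustration.

```python
# Minimal sketch of score-audio alignment in the spirit of Augmented Notes.
# The timings below are hypothetical; a real project would derive them from
# the recording and the digitized score.
import bisect

# Start time (in seconds) of each measure in a hypothetical recording.
measure_starts = [0.0, 2.1, 4.3, 6.2, 8.5, 10.9]  # measures 1 through 6

def measure_at(playback_time: float) -> int:
    """Return the 1-based measure number to highlight at a given playback time."""
    # bisect_right counts how many measures have already started.
    return max(bisect.bisect_right(measure_starts, playback_time), 1)

if __name__ == "__main__":
    for t in (0.5, 5.0, 11.2):
        print(f"t = {t:4.1f}s -> highlight measure {measure_at(t)}")
```

A web front end would run the same lookup on every tick of the audio player to scroll the score image and the commentary in step with the recording.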

Low cost, high rewards

There are huge incentives to get into digital humanities, and barriers to entry into the field are very low. The great bulk of research that is being produced can be characterized as low tech, even though there is a premium on elaborate project designs and advanced technical methods. Most multimedia tools are already on the shelves, sometimes accessible free of charge as open-source software and web-based solutions. The curated sound studies blog Sounding Out! is a prime example of a low-tech enterprise: the hosts just use the WordPress platform, SoundCloud, and YouTube, and put all their energy into giving editorial advice and feedback to contributing authors. New academic journals and publishing platforms such as Scalar have created venues for born-digital work that encourage exploration and experimentation while building on established traditions of academic writing and argumentation. New text-mining techniques using machine learning and AI make it possible to search, analyze, and visualize large bodies of audiovisual material. But tagging and indexing audio files to train the machine-learning algorithm is a low-tech, labor-intensive process that requires only limited equipment. Providing uniformity across the sound samples raises the issue of language-based classification systems and individual perception. What sounds “loud” or “inaudible” depends on the person and on the context. More generally, people working on sound are always confronting the issue of writing about sound in text. There is a very limited vocabulary for representing sound, and this vocabulary is usually not part of school curricula. Categories borrowed from prosody and rhetoric—timbre, accent, tone, stress, pitch frequency, duration, and intensity—are finding new uses in technologies exploring speech patterns and sound archives in order to “search sound with sound.”
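To make the idea of “searching sound with sound” concrete, here is a minimal illustration (mine, not the book’s) of how a small audio archive could be indexed by timbral features and queried by similarity. It assumes the librosa and scikit-learn libraries, and the file names are placeholders.

```python
# Minimal sketch of "searching sound with sound": summarize each clip as an
# averaged MFCC vector and retrieve the archive items closest to a query clip.
# File paths are placeholders; assumes librosa and scikit-learn are installed.
import numpy as np
import librosa
from sklearn.neighbors import NearestNeighbors

def mfcc_fingerprint(path: str) -> np.ndarray:
    """Load an audio file and summarize its timbre as a mean MFCC vector."""
    signal, rate = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=signal, sr=rate, n_mfcc=20)
    return mfcc.mean(axis=1)  # one 20-dimensional vector per clip

archive = ["clip_001.wav", "clip_002.wav", "clip_003.wav"]  # placeholder paths
features = np.stack([mfcc_fingerprint(path) for path in archive])

# Cosine distance on averaged MFCCs is a crude but serviceable notion of
# timbral similarity for a toy archive of this size.
index = NearestNeighbors(n_neighbors=2, metric="cosine").fit(features)
query = mfcc_fingerprint("query.wav").reshape(1, -1)
distances, positions = index.kneighbors(query)

for dist, pos in zip(distances[0], positions[0]):
    print(f"{archive[pos]}  (cosine distance {dist:.3f})")
```

Even a toy version like this makes the editors’ point: the computational step is trivial next to the labor of assembling, cleaning, tagging, and describing the audio in the first place.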

There is also a premium on political correctness. Digital sound studies in a North American context intersect with issues of race, gender, sexual orientation, disability, and postcoloniality. The editors point out that generations of black cultural critics and authors have drawn deeply from music and sound in their writings. Black studies has also had to confront sonically encoded racist stereotypes, such as those made popular through blackface minstrelsy and the use of “negro dialect” in early radio and television. In his contribution, Richard Rath tries to render in sonic form a text describing the music and dance of enslaved Africans on a Jamaican plantation in 1688. He tinkers with various musical instruments and electronic tools to exercise what he calls the “historical imagination,” but is reluctant to take on the voice of enslaved Africans himself or to make “a singalong with audiences of mostly white folks,” as such a performance would smack of cultural appropriation—he has fewer qualms about having the classroom clap in three- and four-beat patterns to illustrate polyrhythm. Similarly, African-American writer and Harlem Renaissance figure Zora Neale Hurston is mentioned in several chapters and gets much credit for performing and recording the Deep South songs that white male scholars Franz Boas and Alan Lomax made her collect—the fact that she exposed the sexual promiscuity of some of her childhood neighbors in the ethnography of her hometown in Florida is not mentioned, but remains controversial to this day.

Raiders of the lost sound

Some academic disciplines are more attuned to digital sounds than others. In the 1960s and 1970s ethnomusicologists often included LPs with their monographs so readers could hear the music the book described. Anthropology and folklore scholars also used recording equipment to document oral traditions and sonic environments. These fields have evolved as technology moved from analog to digital, and they have acquired a new sensitivity to power imbalances and cultural hegemony: it is no longer white men recording native sounds for their own uses. Sonic archives and recordings are repatriated to their communities of origin, sometimes using portable devices like USB sticks and minidiscs in places with low internet connectivity. Literary studies have also experienced a sonic turn. In particular, the intersection of music and poetry is a booming area of research. The English Broadside Ballad Archive at the University of California, Santa Barbara, is bringing musical settings to the fore by digitizing almost eight thousand ballads from England, and it includes facsimiles, transcriptions, and, when available, audio recordings of the ballads. In the Songs of the Victorians project mentioned above, Joanna Swafford was able to show that women musicians used songs performed in the parlor as part of a courtship ritual that unsettled the gendered status quo. Poetry is also a place where, in the space of one generation, scholars have rediscovered the importance of voicing and listening. Literature need not be a silent experience: some words cry out to be articulated, whispered, or shouted.

Historians are also designing their own acoustemologies, exploring the world through sound and recreating historical soundscapes that are true to the past “wie es eigentlich gewesen.” As Geoffroy-Schwinden argues, digital explorations of sonic history must do more than simply attempt to recreate the sound and fury of the past; these projects must also historicize sound and contextualize the listening experience, as similar sounds were not perceived in the same way then and now. Musicians attempting to execute historically informed performances must not stop at the use of period instruments and past performance techniques: they must also recreate the ancient concert hall soundscape with its low-voice conversations, loud cheers, sneezing and coughing that modern concertgoers try to silence as much as they can. Immersive environments can go beyond the sonic experience and include the visual, the haptic, the olfactory, the tactile, and the visceral. Listening is a multisensory experience: incorporating sound into digital environments must also attend to the ways in which users physically interact with and are affected by sound at the level of the senses. In many experiments such as the reconstruction of historical soundscapes (Emily Thompson’s “The Roaring Twenties,” Mylène Pardoen’s “Projet Bretez”) or the incorporation of multi-vocal narratives in social science projects (Erik Loyer’s “Public Secrets”), the frontier between art and science blurs and the public is invited to take part in a performance of “artistic research.” This, according to the editors, illustrates the “turn toward practice” and away from high theory that characterizes recent academic orientations, of which digital humanities is a part.

Talking shop

By combining two hot topics, digital humanities and sound studies, this book provides a blueprint for making sound central to research, teaching, and publishing practices. And yet, despite its profession of inclusiveness and accessibility, this seems to me a book targeted at a very small segment of the academic world, as potential readers will mostly be people already engaged in teaching and research activities they describe as digital sound studies. Instead of addressing digital natives and sound aficionados at large, the contributors engage in a conversation that concerns mostly themselves. The concluding chapter, which takes the form of a discussion between Jonathan Sterne and the three editors, illustrates the inward-looking and parochial nature of the whole endeavor. The discussants concentrate on practical issues that appear mundane to outsiders but in which they invest considerable energy: how to get tenure, what counts as scholarly work as opposed to teaching duties or to community projects, how to get published in the “best” journals, which fields are hot and which aren’t, what will be the next epistemological turn or the new paradigm that will redefine scholarly practices, etc. Free labor is an issue for them: like everybody else, they do many things for fun, like blogging or building stuff, but unlike other professions they would like to see these activities recognized as part of their academic contribution. Scholars can be openly frank and direct when they speak among themselves. They use simple words and colloquialisms, as opposed to the heavily barbed jargon of academic publications. But they also expose their petty interests and narrow corporatism when they are allowed to talk shop in public. Digital Sound Studies taught me more about the functioning of academia in a segment of disciplinary studies than about sound studies and digital humanities as such.

Anti-Vaccine Campaigns Then and Now: Lessons from 19th-Century England

A review of Bodily Matters: The Anti-Vaccination Movement in England, 1853–1907, Nadja Durbach, Duke University Press, 2004.

In 1980, smallpox, also known as variola, became the only human infectious disease ever to be completely eradicated. Smallpox had plagued humanity since time immemorial. It is believed to have appeared around 10,000 BC, at the time of the first agricultural settlements. Traces of smallpox were found in Egyptian mummies, in ancient Chinese tombs, and among the Roman legions. Long before germ theory was developed and bacteria or viruses could be observed, humanity was already familiar with ways to prevent the disease and to produce a remedy. The technique of variolation, or exposing patients to the disease so that they develop immunity, was already known in China by the fifteenth century and in India, the Ottoman Empire, and Europe by the eighteenth century. In 1796, Edward Jenner developed the first vaccine by noticing that milkmaids who had gotten cowpox never contracted smallpox. Calves or children produced the cowpox lymph that was then inoculated into patients to vaccinate them against smallpox. Vaccination became widely accepted and gradually replaced the practice of variolation. By the end of the nineteenth century, Europeans vaccinated most of their children and they brought the technique to the colonies, where it was nonetheless slow to take hold. In 1959, the World Health Organization initiated a plan to rid the world of smallpox. The concept of global health emerged from that enterprise and, as a result of these efforts, the World Health Assembly declared smallpox eradicated in 1980 and recommended that all countries cease routine smallpox vaccination.

Humanity’s greatest achievement

The eradication of smallpox should be celebrated as one of humanity’s greatest achievements. But it isn’t. In recent years vaccination has emerged as a controversial issue. Citing various health concerns or personal beliefs, some parents are reluctant to let their children receive some or all of the recommended vaccines. The constituents who make up the so-called vaccine-resistant community come from disparate groups, and include anti-government libertarians, apostles of the all-natural, and parents who believe that doctors should not dictate medical decisions about children. They circulate wild claims that autism is linked to vaccines, based on a fraudulent study that was long ago debunked. They affirm, without any scientific backing, that infant immune systems can’t handle so many vaccines, that natural immunity is better than vaccine-acquired immunity, and that vaccines aren’t worth the risk as they may create allergic reactions or even infect the child with the disease they are trying to prevent. Public health officials and physicians have been combating these misconceptions about vaccines for decades. But anti-vaccine memes seem deeply ingrained in segments of the public, and they feed on new pieces of information and communication channels as they circulate by word-of-mouth and on social media. Each country seems to have a special reluctance for a particular vaccine: in the United States, the MMR vaccine against measles, mumps, and rubella has been the target of anti-vax campaigns; in France, the safety of the hepatitis B vaccine has been called into question, and most people neglect to vaccinate against seasonal flu. In the Islamic world, some fatwas have targeted vaccination against polio.

Resistance to vaccines isn’t new. In Bodily Matters, Nadja Durbach investigates the history of the first outbreak of anti-vaccine fever: the anti-vaccination movement that spread across England from 1853, the year the first Compulsory Vaccination Act was established on the basis of the Poor Law system, until 1907, when the last legislation on smallpox was adopted to grant exemption certificates to reluctant parents. Like its modern equivalent, it is a history that pits the medical establishment and the scientific community against vast segments of the population. Vaccination against smallpox at that time was a painful affair: Victorian vaccinators used a lancet to cut lines into the flesh of infants’ arms, then applied the lymph that had developed on the suppurating blisters of other children who had received the same treatment. Infections often developed, diseases were passed with the arm-to-arm method, and some babies responded badly to the vaccine. Statistics showing the efficacy of vaccination were not fully reliable: doctors routinely classified those with no vaccination scars as “unvaccinated,” and the number of patients who caught smallpox after receiving vaccination was not properly counted. The vaccination process was perceived as invasive, painful, and of dubious effect: opponents of vaccination claimed that it caused many more deaths than the diffusion of smallpox itself. Serious infections such as gangrene could follow even a successful vaccination. But people not only resisted the invasion of the body and the risk to their health: resistance against compulsory vaccination was also predicated upon assumptions about the boundaries of state intervention in personal life. Concerns about the role of the state, the rights of the individual, and the authority of the medical profession combined with deeply held beliefs about the health and safety of the body.

Anti-vaccination in 19th-century England

While historians have often seen anti-vaccination as resistance against progress and enlightenment, the picture that emerges from the historical narrative, as reconstructed by Nadja Durbach, is much more nuanced. Through detailed analysis of the way sanitary policies were implemented and the resistance they faced, she shows that anti-vaccination in nineteenth-century England was very often on the side of social progress, democratic accountability, and the promotion of working-class interests, while forced vaccination was synonymous with state control, medical hegemony, and the encroachment of private liberties. The growth of professional medicine ran counter to the interests of practitioners such as unlicensed physicians, surgeons, midwives, and apothecaries, some of whom had practiced variolation with the smallpox virus for a long time. It abolished the long-held practice of negotiating what treatments were to be applied, and turned patients into passive receptacles of prescriptions backed by the authority of science and the state. Compulsory infant vaccination, as the first continuous public-health activity undertaken by the state, ushered in a new age in which the Victorian state became intimately involved in bodily matters. Administrators—the same officers who applied the infamous Poor Laws and ran the workhouses for indigents and vagabonds—saw the bodies of the working classes themselves as contagious and, like prisoners, beggars, and paupers, in need of surveillance and control. Sanitary technologies such as quarantines, compulsory medical checks, forced sanitization of houses, and destruction of contaminated property were first tried out in this context of state-enforced medicine and bureaucratization. Several Vaccination Acts were adopted—in 1853, 1867, and 1871—to ensure that all infants born into poor families were vaccinated against smallpox. The fact that the authorities had to put essentially the same laws on the books again and again shows that the “lower and uneducated classes” were not taking advantage of the free service, and were avoiding mandatory vaccination at all costs.

Born in the 1850s, the anti-vaccination movement took shape in the late 1860s and early ’70s as resisters responded to what they considered an increasingly coercive vaccination policy. The first to protest were traditional healers and proponents of alternative medicine who felt threatened by the professionalization of health care and the development of medical science. For these alternative practitioners, medicine was more art than science, and the state had no role in regulating this sector of activity. They objected to scientific experimentation on the human body: vaccination, they maintained, not only polluted the blood with animal material but also spread dangerous diseases such as scrofula and syphilis. These early medical dissenters were soon joined by a motley crew of social activists who added the anti-vaccination cause to their broader social and political agenda. Temperance associations, anti-vivisectionists, vegetarians and food reformers, women’s rights advocates, working men’s clubs, trade unionists, religious sects, followers of the Swedish mystic Swedenborg: all these movements formed a larger culture of dissent in which anti-vaccinators found a place. They created leagues to fight the Vaccination Acts, organized debates and mass meetings, published tracts and bulletins, and held demonstrations that sometimes turned into small-scale riots. Women from all social classes were particularly active: they wrote pamphlets, contributed letters to newspapers, and expressed strong opposition at public meetings. They often took their roles as guardians of the home quite literally, and refused to open their doors to intruding medical officials. Campaigners argued that parental rights were political rights, to which all respectable English citizens were entitled. The state, they contended, had no right to encroach on parental choice and individual freedom. “The Englishman’s home is his castle,” they maintained, and how best to raise a family was a domestic issue over which the state had no authority to interfere.

Middle-class campaigners and working-class opponents

While the populist language of rights and citizenship enabled a cross-class alliance to exist, the middle-class campaigners didn’t experience the bulk of the repression that befell working-class families who resisted compulsory vaccination. Working-class noncompliers were routinely seized from their houses and dragged to jail, or hit with heavy fines. Middle-class activists clung to the old liberal tenets of individual rights and laissez-faire: “There should be free trade in vaccination; let those buy it who want it, and let those be free who don’t want it.” By contrast, working-class protests against vaccination were often formulated at the level of the collective, and they had important bodily implications. Some anti-vaccinators considered themselves socialists and belonged to the Independent Labour Party. They aligned their fight with the interest of the working class and expressed distrust of state welfare in general and of anti-pauperism in particular. The Poor Laws that forced recipients of government relief into the workhouse were a target of widespread detestation. Vaccination remained linked to poor relief in the minds of many parents, as workhouse surgeons were often in charge of inoculation and the health campaigns remained administered by the Poor Law Board. Public vaccination was performed at vaccination stations, regarded by many as sites of moral and physical pollution. The vaccination of children from arm to arm provoked enormous fears of contamination. Parents expressed a shared experience of the body as violated and coerced, and repeatedly voiced their grievances in the political language of class conflict. Their protests helped to shape the production of a working-class identity by locating class consciousness in shared bodily experience.

Anti-vaccination also drew from an imaginary of bodily invasion, blood contamination, and monstrous transformations. Many Victorians believed that health depended on preserving the body’s integrity, encouraging the circulation of pure blood, and preventing the introduction of any foreign material into the body. Gothic novels popularized the figures of the vampire, the body-snatcher, and the incubus. They offered lurid tales of rotten flesh and scabrous wounds that left a mark on readers’ imagination. Anti-vaccinators heavily exploited these gothic tropes to generate parental anxieties: they depicted vaccination as a kind of ritual murder or child sacrifice, a sacrilege that interfered with the God-given body of the pristine child. They quoted the Book of Revelation: “Foul and evil sores came upon the men who bore the mark of the beast.” Supporters of vaccination also participated in the production of this sensationalist imagery by depicting innocent victims of the smallpox disease turned into loathsome creatures. Fear of bodily violation was intimately bound up with concerns over the purity of the blood and the proper functioning of the circulatory system. The best guard against smallpox, maintained a medical dissenter, was to keep “the blood pure, the bowels regular, and the skin clean.” Temperance advocates or proselytizing vegetarians added anti-vaccination to their cause: “If there is anything that I detest more than others, they are vaccination, alcohol, and tobacco.” As the lymph applied to children’s sores was the product of disease-infected cows, some parents feared that vaccinated children might adopt cow-like tendencies, or that calf lymph might also transmit animal diseases. Human lymph was even more problematic: applied from arm to arm, it could expose untainted children to the poisonous fluids of contaminated patients and spread contagious or hereditary diseases such as scrofula, syphilis, leprosy, blindness, or tuberculosis.

Understanding the intellectual and social roots of anti-vax campaigns

This early wave of resistance to vaccination, as depicted in Bodily Matters, is crucial to understanding the intellectual and social roots of modern anti-vaccine campaigns. Then as now, anti-vax advocates use the same arguments: that vaccines are unsafe and inefficient, that the government is abusing its power, and that alternative health practices are preferable. Vaccination is no longer coercive and disciplinary, but the issue of compulsory vaccination for certain professions, such as healthcare workers, regularly resurfaces. More fundamentally, the Victorian era in nineteenth-century England was, like our own age, a time of deepening democratization and rampant anti-elitism. Now, too, the democratization of knowledge and truth can produce an odd mixture of credulity and skepticism among many ordinary citizens. Moreover, we, too, are living in an era when state-enforced medicine and scientific expertise are being challenged. Science has become just another voice in the room, and people are carrying their reliance on individual judgment to ridiculous extremes. With everyone being told that their ideas about medicine, art, and government are as valid as those of the so-called “experts” and “those in power,” truth and knowledge become elusive and difficult to pin down. As we are discovering again, democracy and elite expertise do not always go well together. Where everything is believable, everything is doubtable. And when all claims to expert knowledge become suspect, people will tend to mistrust anything that they have not seen, felt, heard, tasted, or smelled. Proponents of alternative medicine uphold a more holistic approach to sickness and health and they claim, as did nineteenth-century medical dissenters, that every man and woman could and should be his or her own doctor. Of course, campaigners from the late Victorian age could only have dreamed of the role that social media has enabled ordinary people to play. The pamphlets and periodicals of the 1870s couldn’t hold a candle to Twitter, Facebook, and other platforms that enable everyone to participate in the creation of popular opinion.

Which brings us to the present situation. As I write this review, governments all over the world are busy developing, acquiring, and administering new vaccines against an infectious disease that has left no country untouched. Covid-19, as the new viral disease is known, has spread across borders like wildfire, demonstrating the interconnected nature of our global age. Pending the diffusion of an effective treatment, herd immunity, which was touted by some experts as a possible endgame, can only be attained at a staggering cost in human lives and economic loss. “Flattening the curve” to allow the healthcare system to cope with the crisis before mass vaccination campaigns roll out quickly became the new mantra, and rankings were made among countries to determine which policies have proven the most efficient in containing the disease. Meanwhile, scientists have worked furiously to develop and test an effective vaccine. Vaccines usually take years to develop and they are submitted to a lengthy process of testing and approval until they reach the market. Covid-19 has changed all this: several fully tested vaccines using three different technologies are currently being administered in the most time-condensed vaccination campaign of all time. This is when resistance to vaccines resurfaces: as vaccines become widely available, a significant proportion of the population in developing countries are refusing to get their shots. And many of those refusing are those who have the most reason to get vaccinated: at high risk themselves or likely to pass the virus on to other vulnerable people. Disinformation, distrust, and rumors that are downright delusional have turned what should have been a well-oiled operation into an organizational nightmare. In the end, we will get rid of Covid-19. But we can’t and we won’t get rid of our dependence on vaccines.
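A rough calculation suggests why herd immunity through infection alone carries such a cost. Under the textbook assumption of a homogeneously mixing population (the figures below are illustrative and not taken from the review), the herd immunity threshold depends only on the basic reproductive rate $R_0$:

$$p_c = 1 - \frac{1}{R_0}, \qquad \text{so that } R_0 \approx 3 \;\Rightarrow\; p_c \approx 1 - \tfrac{1}{3} \approx 67\%.$$

Letting two-thirds of a population acquire immunity through infection rather than vaccination, even with a fatality rate well below one percent, translates into an enormous number of deaths—which is why mass vaccination, not natural infection, became the favored endgame.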

Art-and-Technology Projects

A review of Technocrats of the Imagination: Art, Technology, and the Military-Industrial Avant-Garde, John Beck and Ryan Bishop, Duke University Press, 2020.

There is renewed interest in the United States in art-and-technology projects. Tech firms have money to spend on the arts to buttress their image of cool modernity; universities want to break the barriers between science and the humanities; and artists are looking for material opportunities to explore new modes of working. Recent initiatives mixing art, science, and technology include the Art+Technology Lab at LACMA (Los Angeles County Museum of Art), MIT’s Center for Art, Science, and Technology (CAST), and the E.A.T. Salon launched by Nokia Bell Labs. In their presentation materials, these institutions refer to previous experiments in which artists worked with scientists and engineers in universities, private labs, and museums. LACMA’s A+T Lab is the heir to the Art&Technology Program (A&T) launched in 1967 by curator Maurice Tuchman with the involvement of the most famous artists of the period, such as Andy Warhol, Claes Oldenburg, Roy Lichtenstein, and Richard Serra. MIT was the host of the Center for Advanced Visual Studies (CAVS) founded in the same year by György Kepes, who had previously worked with László Moholy-Nagy at the New Bauhaus in Chicago. Bell Labs is where scientist Billy Klüver launched Experiments in Art and Technology (E.A.T.) with Robert Rauschenberg in late 1966. Technocrats of the Imagination tells the story of these early initiatives by situating them in their intellectual and geopolitical context, exposing in particular the link with Cold War R&D and the rising influence of the military-industrial complex. The contradiction between an anti-establishment cultural milieu denouncing technocratic complicity with the Vietnam war and a corporate environment where these collusions were left unchallenged led these art-and-technology projects to their rapid demise. Modern initiatives operate in a different environment, but unquestioned assumptions may lead them to the same fate.

Creativity, collaboration, and experimentation

Why should artists collaborate with scientists and engineers? Then and now, the same arguments are put forward by a class of art curators, tech gurus, and project managers. The art world and the research lab are both characterized by a strategy of continuous innovation, collaborative experimentation, and disciplined creativity. They tend to abolish the boundaries between theory and practice, knowing and doing, individual inspiration and collective work. These tendencies were reinforced in the context of the 1950s and 1960s: in an age of big science and artistic avant-garde framed by integrative paradigms such as cybernetics and information theory, the artist and the engineer seemed to herald a new dawn of democratic organization and shared prosperity. The artist defined himself as a “factory manager” (Andy Warhol) and did not hesitate to don the white coat of the laboratory experimenter. The scientist was engaged in much more than the accumulation of scientific knowledge and science’s contribution was vital for the nation’s wealth and security. Both worked under the assumption that science could enlarge democracy and support the United States’ place in the world, and that American art should be considered on an equal footing with other professional fields of activity. But the shared virtues of creativity, collaboration, and experimentation covered profoundly different ideas of what those terms might mean and how they should be achieved. The conception of experimental collaboration in the arts was heir to a liberal tradition of educational reform emphasizing free expression and self-discovery. By contrast, innovation and experimentation as understood by institutions training and employing scientists followed a model of elite expertise and top-down management. They were also heavily compromised, as John Beck and Ryan Bishop emphasize, by their ties to the military-industrial complex.

Beck and Bishop place the genealogy of the three art-and-tech initiatives under the influence of two currents: John Dewey’s philosophy of democracy and education, and Bauhaus’ approach to artistic-industrial collaborations. The influence of John Dewey over the course of the twentieth century cannot be overemphasized. More than any other public intellectual, Dewey shaped and influenced debates on the relations between science, politics, and society in the United States. His principles of democratic education emphasizing holistic learning and the study of art were applied at Black Mountain College in North Carolina, a liberal arts education institution that left its imprint on a whole generation of future artists and creators (Robert Rauschenberg, Cy Twombly, John Cage, Merce Cunningham, Ray Johnson, Ruth Asawa, Robert Motherwell, Dorothea Rockburne, Susan Weil, Buckminster Fuller, Franz Kline, Aaron Siskind, Willem and Elaine de Kooning, etc.). The influence of Dewey’s pragmatism extended beyond the US, notably among German educational reformers, and his notion of “learning by doing” was picked up by the Bauhaus, a German art school operational from 1919 to 1933 that combined crafts and the fine arts. In return, Bauhaus furnished Black Mountain College with émigré educators—Josef and Anni Albers, Xanti Schawinsky, Walter Gropius—and a utopian vision of a post-disciplinary, collectivist education that did not favor one medium or skill set over another. Bauhaus’ afterlife and legacy in the United States also manifest themselves in the trajectories of Bauhaus veterans László Moholy-Nagy, who created the short-lived Chicago School of Design in 1937, and György Kepes, who taught at MIT and ended up creating the Center for Advanced Visual Studies (CAVS) in 1967.

Bauhaus in America

It was Moholy-Nagy who originated the idea of stimulating interactions among artists, scientists, and technologists in order to spur creativity and innovation. His Hungarian compatriot and associate at the School of Design took the idea to MIT, an institution whose motto mens et manus (“mind and hand”) echoed Dewey’s and Bauhaus’ devotion to “learning by doing” and “experience as experimentation.” MIT was a full-fledged research university awash with money from government contracts and military R&D. Research teams working on ‘Big Science’ projects included not just scientists but engineers, administrators, and technicians collaborating in a structured manner. Kepes’ tenure at MIT between 1946 and 1977 was characterized by a commitment to science and technology and a belief in the virtues of chance encounters whose unintended consequences could lead to breakthrough innovations. His interdisciplinary teachings were structured around the principles of vision, visual technologies, and their social implications. Many disciplines were mobilized, including Gestalt psychology, systems theory, physiology, linguistics, architecture, art, design, music, and perception theory. Transdisciplinarity, holistic approaches, and the eclectic mix of science, technology, and artistic disciplines were in the air in the late sixties and influenced the counterculture as well as artistic creation. The same eclecticism presided over the creation of CAVS, a center dedicated to all aspects related to vision and visual technologies. Drawing in important artists and thinkers, including many Black Mountain alumni, CAVS laid the groundwork for subsequent MIT ventures such as the influential Media Lab, founded in 1985 by Nicholas Negroponte, and the Center for Art, Science, and Technology (CAST). It was in such an environment that experimental filmmaker Stan Vanderbeek pondered the possibility of creating an “electronic paintbrush” to complement the electronic pen used in early man/machine interfaces.

The industrial corporation, the research university, and the private lab were the three nodes of the military-industrial complex. Hailed by Fortune magazine as "The World's Greatest Industrial Laboratory," Bell Labs' research center at Murray Hill, New Jersey, was conceived along the lines of a miniature college or university. The laboratories themselves were physically flexible, with no fixed partitions, so that rooms could be assembled, reconfigured, and taken apart at short notice. Bell Laboratories cultivated creativity and innovation: its researchers were credited with the development of the transistor, the laser, the photovoltaic cell, information theory, and the first computer programs to play electronic music. The proximity of New York City, which had become the capital of the art world, and the presence of an arts college at neighboring Rutgers University facilitated the rapprochement between the scientific avant-garde working at Murray Hill and the contemporary art world. Artists and musicians were offered organized tours of Bell Labs as a means of opening a dialogue and providing a sense of how technology could be harnessed for artistic creativity. Early realizations include Edgard Varèse's Déserts (1950–54), an atonal piece that was described as "music in the time of the H-bomb"; Jean Tinguely's Homage to New York (1960), a self-constructing and self-destructing sculptural mechanism that performed for 27 minutes in the Sculpture Garden of the Museum of Modern Art in New York; and Robert Rauschenberg's Oracle (1962–65), a five-part found-metal assemblage with five concealed radios and electronic components, now displayed at the Pompidou Center in Paris. Also influential was 9 Evenings: Theatre and Engineering, a series of performances that mixed avant-garde theatre, dance, music, and new technologies. In 1967, the engineer and project manager Billy Klüver set up Experiments in Art and Technology (E.A.T.), a collaborative project matching avant-garde artists with Bell Labs researchers that attracted applications from more than 6,000 artists, scientists, and engineers. But the project soon foundered due to poor management and lack of funds.

From New York to Los Angeles and to the world

Place matters for artistic innovation, as it does for scientific discovery and technological breakthrough. During the twentieth century, the center of the advanced art world shifted from Paris to New York. Yet the geographic origins of innovative artists also diversified markedly. When Maurice Tuchman became the first curator of twentieth-century art at the Los Angeles County Museum of Art (LACMA), part of his mission was to put LA on the art map as "the center of a new civilization." He did so by partnering with business organizations to sponsor an Art & Technology exhibition in 1971, with the participation of high-profile artists such as Roy Lichtenstein, Claes Oldenburg, Robert Rauschenberg, Richard Serra, and Andy Warhol. But by that time public opinion had already shifted away from the technocratic model of corporate liberalism, and the exhibition was a flop. Another Californian experiment sponsored by LACMA was the creation of artist-in-residence positions at RAND and the Hudson Institute, two think tanks working mostly for the government and tasked with "thinking about the unthinkable." But the New York-based sculptor John Chamberlain and the conceptual artist James Lee Byars had a difficult time adapting to their new environment. The former sent a memo to all RAND staff stating: "I'm searching for ANSWERS. Not questions! If you have any, will you please fill it below." The incomprehension was total, and the memo fell flat. The latter set up a "World Question Center" and invited the public to submit any kind of question, which would then be answered by a panel of intellectuals, artists, and scientists. But as the two authors of Technocrats of the Imagination comment: "If Byars could have included Stein, Einstein, and Wittgenstein in his teleconference, what might they have been permitted to say, given the serious limitations of the format? An expert is an expert is an expert."

Twentieth-century art was advanced by new institutions on the art scene: the salons and group exhibitions of independent art collectives, the private art gallery, the art criticism magazine, the contemporary art museum, and the international art biennale. World exhibitions also played a key role in the globalization of advanced art, and the American presence at these global events often showcased art-and-technology projects. Billy Klüver and the E.A.T. program at Bell Labs engineered the Pepsi Pavilion for the Osaka World's Fair, Expo '70, in partnership with PepsiCo. The RAND Corporation was pivotal in displaying US advanced technology abroad in exhibitions devoted to science, urbanism, postwar visions of the future, and consumer society. The Eames Office, a design studio based in Venice, California, was commissioned to contribute to the USIA-sponsored American National Exhibition in Moscow in 1959 and to the US pavilion at Montreal's Expo '67, and designed the IBM pavilion at the 1964 New York World's Fair. The aim of these exhibitions was geopolitical: they were to display America's might at its most spectacular, and to offer a glimpse of a future in which technology played a key part. They were conceived as artist-led immersive environments in the tradition of the Bauhaus Gesamtkunstwerk or "total work of art," and played a pioneering role in the development of multimedia installations and video art. Charles and Ray Eames were "cultural ambassadors" for the Cold War representation of the United States, and their design creations aligned with the political agenda the US government wished to communicate. The Eames Office made important cutting-edge documentaries such as Powers of Ten (1968), a short film dealing with the relative size of things in the universe and the effect of adding or subtracting one zero, and Think (1964), a multiscreen film shown in a large, egg-shaped structure called the Ovoid Theater that stood high above the canopy and central structure of the IBM pavilion at the New York World's Fair.

Corporate neoliberalism

John Beck and Ryan Bishop focus their analysis on the ideological underpinnings and geopolitical ramifications of these art-and-technology projects. They argue that, contrary to their forward-looking ambitions and futuristic visions, MIT's CAVS, Bell Labs' E.A.T., and LACMA's A&T program were behind their time. In the late 1960s, antiwar sentiment had hardened public opinion against corporations and against technology more generally. The positions of the scientist and the engineer were compromised by their participation in the military-industrial complex: "science and technology had come to be seen by many as sinister, nihilistic, and death-driven." The idea that US corporations could plausibly collaborate with artists to create new worlds of social progress was now evidence of complicity and corruption—technology was the problem and not the solution. The political climate made it impossible to justify what was now summarily dismissed as "industry-sponsored art." In this politically charged context, art-and-technology projects had very little to say about politics, American foreign policy, or the Cold War in general. Technocrats of the Imagination concludes with a comparison between these late-1960s projects and recent reenactments such as MIT's CAST, LACMA's A+T Lab, and Nokia's E.A.T. Salon. Unlike their predecessors, these new projects operate in a neoliberal environment driven by private corporations, in which the sense of dedication to the public good that animated scientists and artists of the previous generation has all but disappeared. As the authors argue, the recent art-and-tech reboot "cannot be separated from or understood outside the deregulated labor market under neoliberalism that has demanded increased worker flexibility, adaptability, and entrepreneurialism." The avant-garde artist's new partner is not the white-coated scientist or the lab engineer but the tech entrepreneur, who claims the heritage of the counterculture to advance techno-utopianism and radical individualism. Their claims of "hippie modernism" and their appropriation of the 1960s avant-garde rest on historical amnesia, against which this book provides a useful remedy.

One Thousand and One Arab Springs

A review of Revolution and Disenchantment: Arab Marxism and the Binds of Emancipation, Fadi A. Bardawil, Duke University Press, 2020.

Ten years have passed since the wave of protests that swept across North Africa and the Middle East. Time has not been kind to the hopes, dreams, and aspirations for change that were invested in these Arab uprisings. A whole generation is now looking back at its youthful idealism with nostalgia, disillusion, and bitterness. Revolutionary hope is always followed by political disenchantment: this has been the case for all revolutions that succeeded and for all attempts that failed. Fadi Bardawil even sees here the expression of a more general law: "For as long as I can remember, I have witnessed intellectuals and critical theorists slide from critique to loss and melancholia after having witnessed a political defeat or experienced a regression in the state of affairs of the world." These cycles of hope and disillusion are particularly acute in the Arab world, where each decade seems to bring its own political sequence of rising tide and ebb. Revolution and Disenchantment tells the story of a fringe political movement, Socialist Lebanon (1964–70), through the figures of three Marxist intellectuals who went through a cycle of revolutionary fervor, disenchantment, despair, and adjustment. Waddah Charara (1942–), Fawwaz Traboulsi (1941–), and Ahmad Beydoun (1942–) are completely unknown to most publics outside Lebanon, and their reputation in their own country may not extend beyond narrow intellectual circles. They have now retired from academic careers in the humanities and social sciences, and few people remember their youthful engagement at the vanguard of the revolutionary Left. But their political itinerary has a lot to tell us about the role of intellectuals, the relationship between theory and practice, and the waves of enthusiasm and disillusion that turn emancipatory enterprises into disenchanted projects.

The ebbs and flows of revolution

Fadi Bardawil offers his readers a tidal model of intellectual history. Four consecutive tides affected the lives of the three intellectuals under consideration—as well as, less directly, his own: Arab nationalism, Leftist politics, the Palestinian question, and political Islam. Each tide followed its ebb and flow of enthusiasm and disenchantment, leaving behind empty shells and debris that drifted onshore for the scholar to pick over. The generation to which the three intellectuals belong was formed during the high tides of anticolonial Pan-Arabism, founded the New Left, and adhered to the Palestinian revolution before ending up as detached, disenchanted critics of sectarian violence and communal divisions. Collectively, they point to a different chronology and geography of the reception of revolutionary ideas in the Middle East. The conventional periodization and the list of landmark events identified by historians do not fully apply: for instance, the June 1967 Arab-Israeli War is often overemphasized as a turning point, while the collapse of the union between Egypt and Syria in September 1961 is now largely forgotten. But the Palestinian question predates 1967, while the 1961 breakdown of Arab unity ushered in the first immanent critique of the regimes in power. Similarly, the traditional East/West and North/South binaries cannot account for the complexities and internal divisions of Middle Eastern societies. Beirut was closer to Paris and to French intellectual life than to other regional metropoles, including Cairo, where the Nasser regime silenced all oppositional voices. The site of the "main contradiction" was not always the West, as Marxist scholars assumed; very often the contradictions were integral to the fabric of Arab societies.

Like the rest of the Arab world, the Lebanon in which the three intellectuals grew up was tuned to the speeches of Gamal Abdel Nasser broadcast by Radio Cairo and stirred by demonstrations of support for the Algerian national liberation struggle. Palestinian refugees who had fled Israel in the aftermath of 1948 were a familiar presence in Lebanon, and the Arab Catastrophe or Nakba—as the Palestinian exodus was designated—loomed large on the Arab nationalist agenda. As one of the interviewees recalls, "the 'Arab Cause' was more dominant in our lives than Lebanese concerns." Lebanese intellectuals from Sunni, Shi'i, and Druze backgrounds were attracted to Nasserist nationalism and to Ba'thist ideology and politics, while a majority of the Christian population supported the pro-Western politics of President Camille Chamoun (1952–58). Chamoun's decision not to sever diplomatic ties with France and Great Britain after the Suez crisis of 1956 resulted in a political crisis that drew heavier American involvement in the form of economic assistance and military presence. The summer of 1958 was an important milestone in the development of the generation then in high school: sectarian tensions and political deadlock led to a short civil war in Beirut, while inter-Arab relations and Cold War politics provoked a shift in alliances. The union between Egypt and Syria came to an end in 1961, and authoritarian regimes took hold in Syria and Iraq under the guise of socialist and Ba'thist ideologies. The tidal wave of Pan-Arabism, with its promise of a united popular sovereignty on Arab lands after the defeat of colonialism, was now at its low point. The budding young intellectuals became disillusioned with Arab nationalism and turned to Marxism to fuel their quest for social change and emancipation.

Translating Marx into Arabic

The intellectual generation that founded Socialist Lebanon in 1964, with Waddah Charara, Fawwaz Traboulsi, and Ahmad Beydoun at the forefront, was also the product of an education system. Lebanon was created as an independent country in 1943 under a pact of double negation: neither integration into Syria (the Muslims' Pan-Arab demand) nor French protection (the Christians' demand). Ties were not severed with France, however, as the Maronite elite spoke predominantly French and sent its children to French schools and universities, while international education was also buttressed by the presence of English-language schools and the American University of Beirut. Charara was a Shi'a from southern Lebanon who went to a francophone Beirut school and left for undergraduate studies in Lyon, later completed by a doctorate in Paris. Traboulsi was the son of a Greek Catholic Christian from the Bekaa Valley who attended a Quaker-founded boarding school near Mount Lebanon and studied in Manchester as well as at the American University of Beirut. Ahmad Beydoun went to a Lebanese school that pitted pro-Phalangist Maronites and pro-Ba'th nationalists against each other. Learning French and English in addition to their native Arabic, and studying abroad, opened new intellectual avenues for these promising students. As Bardawil notes, "Foreign languages is a crucial matter that provides insight into the readings, influences, and literary sensibilities and imaginaries out of which an intellectual's habitus is fashioned."

The habitus of the generation that came of age at the turn of the 1960s was decidedly radical. Socialist Lebanon, the New Left movement that they founded in 1964, was in its beginnings more a study circle than a political party. The readings of these young intellectuals were extensive and not circumscribed by disciplinary boundaries: Marxist theory, French philosophy, psychology, sociology, art criticism, economics… They published a bulletin that was printed underground on Roneo machines and distributed clandestinely. In order to avoid being taken for wacky intellectuals, they rarely quoted the French thinkers they were imbibing (Althusser, Foucault, Lacan, Castoriadis, Lefebvre…) and mostly referred to the canon of the revolutionary tradition: Marx, Engels, Trotsky, Lenin, but also some Cuban references and, eventually, Mao. Through their translations and commentary, they also gave agency to other voices from the South: Fanon, Ben Barka, Giap, Cabral, Che Guevara, Eldridge Cleaver, Malcolm X, and others. Books published by Editions Maspero in Paris, as well as articles from Le Monde Diplomatique, Les Temps Modernes, and the New Left Review, figured prominently among the readings discussed in Beirut at the time. So did the pamphlets of Leftist opponents of the Nasser regime in Egypt such as Anouar Abdel Malak, Mahmoud Hussein, and Hassan Riad (the pseudonym of Samir Amin): "What couldn't be published in Cairo in Arabic was published in France and translated back into Arabic in Beirut with the hope that it would circulate in the Arab world."

Left-wing groupuscules

In addition to reading, discussing, writing, and translating, the young revolutionaries engaged in clandestine political activities. Unlike their gauchiste counterparts in France, Germany, or Italy, they ran the risk of arbitrary arrest, detention, and execution: hence their practice of secrecy, with underground political cells and anonymous publishing. Their critiques targeted the Ba'th and Arab nationalist ideologies, the authoritarian regimes in power in the region, the national bourgeoisie, and, last but not least, the pro-Soviet communist parties. The Lebanese Communist Party was the target of their most ferocious attacks, but intra-leftist skirmishes extended to other groupuscules as well. The Arab-Israeli war of June 1967, often considered a watershed for the region and for the world, brought the Palestinian question to the fore. Bardawil argues that 1967, referred to in Arabic as an-Naksah or "the setback," was more of a turning point for the intellectual diaspora than for local actors. Indeed, Edward Said recalls in his autobiography the shock and wake-up call that the defeat of the Arab armies represented for his personal identity: "I was no longer the same person after 1967," he wrote. The 1967 setback was also used by nationalist military regimes to legitimize their own repressive politics in the name of anti-imperialism and the fight for the liberation of Palestine. But as we saw, the nationalist tide had already ebbed in 1961, and Socialist Lebanon had developed a radical critique of the gap between the regimes' progressive professions of faith and their authoritarian rule.

After 1967, the Palestinian resistance became a local player in Lebanese politics, putting the question of Lebanon's national identity back on the table. It generated its own cycle of hope and disenchantment for the Left. For the cohort of intellectuals forming Socialist Lebanon, it was a time of fuite en avant, of headlong flight forward. The group became increasingly cultish and sectarian, and turned to Maoism to articulate its militant fervor and revolutionary praxis. In 1970, Socialist Lebanon merged with the much larger Organization of Lebanese Socialists, establishing a united Marxist-Leninist organization that became known as the Organization of Communist Action in Lebanon (OCAL). In true gauchiste fashion, OCAL would be plagued by splits and expulsions from the beginning. Note, however, that the call for action directe and a "people's war" that Charara articulated in his Blue Pamphlet did not turn into political assassinations and terrorism. The reason was that Lebanese society was already saturated with violence: strikes and demonstrations were repressed with bloodshed; armed Palestinian resistance gained force until Israel invaded in 1978 and pushed PLO and leftist militants away from the border; and terrorist actions were indeed taken up by Palestinian groupuscules such as the PFLP-EO, which committed the Lod Airport massacre in May 1972 with the participation of three members of the Japanese Red Army. The low ebb of the Palestinian tide came with the defeat of the Palestinian revolution in Lebanon in 1982. By then, Lebanon had already plunged into a sequence of civil wars (1975–1990) splitting the country along sectarian lines; the Iranian revolution (1979) had ushered in a new cycle of militant fervor centered on political Islam; and the Lebanese intellectuals had retired from political militancy to take up secure positions in academia.

From Nakba to Naksa and to Nahda

This summary of the historical plot line of Revolution and Disenchantment doesn't do justice to the theoretical depth and breadth of the book. Trained as an anthropologist and a historian, Fadi Bardawil attempts to do "fieldwork in theory" as a method to locate "not only how theory helps us understand the world but also what kind of work it does in it: how it seduces intellectuals, contributes to the cultivation of their ethos and sensibilities, and authorizes political practices for militants." He treats the written and oral archives of the Lebanese New Left as material for pondering the possibility of a global emancipatory politics of the present that would not be predicated on the assumption that theory always comes from the West to be applied to empirical terrains in the South. He takes issue with the current focus on Islamist ideologues such as Sayyid Qutb and Ali Shariati, who are invoked by Western scholars for "thinking past terror," while the indigenous tradition of Marxism and left-wing thinking is deemed too compromised with the West to offer an immanent critique of Arab politics. As Bardawil notes, quite a few of the 1960s leftists rediscovered the heritage of the earlier generation of Nahda (Renaissance) liberal thinkers such as Taha Husayn (1889–1973) and 'Ali 'Abd-al-Raziq (1886–1966) or, like the aging and sobered Charara, turned to Ibn Khaldun (1332–1406) to understand the logics of communal violence that had engulfed Lebanon. Revolution and Disenchantment also reflects the coming-of-age story of its author, who started his research project in the US in the wake of the September 11 attacks, still marked by the Left-wing melancholia of his school years in Lebanon, and then matured into a more balanced approach that took its cues from the mass mobilizations known collectively as the Arab Spring.

Postscript: I read a review of Revolution and Disenchantment written by a PhD student specializing in Middle East studies who regretted that the readership of this book would most likely be limited to a fringe audience of area specialists. If only it could become a core text for an introduction to intellectual history or for a class on world Marxism, she bemoaned. My answer to that is: you never know. Manuscripts have a strange and unpredictable afterlife once they are published, and neither the author nor the publisher can tell in advance which readership they will eventually reach. Remember the circuits of the French editions of revolutionary classics published by Editions Maspero in a historical conjuncture when theory itself was being generated not in Europe but in the Third World. Add to that the fact that Revolution and Disenchantment is available free of charge for download on the website of Duke University Press (along with a trove of other scholarly books), and you may have in your hands an unlikely success in the making. Besides, the political effects of a text, and the difference it makes, cannot be measured by the number of clicks and readers; they depend on the questions asked by the reading publics and the stakes animating their practical engagements. You never know in advance which texts will be included in future political archives and curricula, or who will read what and for what purposes. Reading about the Lebanese New Left in Hanoi today is no more uncanny than translating Mao and Giap into Arabic in Beirut during the sixties. New forms of critique and their transnational travels may produce unexpected political effects that go beyond the closed lecture circuit of jet-lagged academics. This is one reason why the Arab Springs were followed with passion in China, leading the Communist authorities to delete all references to the events from Chinese social media. Ten years on, a new cycle of democratic hope and enlightenment may begin.

Kiss the Frog

A review of Animacies: Biopolitics, Racial Mattering, and Queer Affect, Mel Y. Chen, Duke University Press, 2012.

"Inanimate objects, have you then a soul / that clings to our soul and forces it to love?," wondered Alphonse de Lamartine in his poem "Milly or the Homeland." In Animacies, Mel Chen answers the first part of this question in the affirmative, although the range of affects she considers is much broader than the loving attachments that connected the French poet to his home village. As she sees it, "matter that is considered insensate, immobile, deathly, or otherwise 'wrong' animates cultural life in important ways." Anima, the Latin word from which animacy derives, denotes air, breath, life, mind, or soul. Inanimate objects are supposed to be devoid of such characteristics. In De Anima, Aristotle granted a soul to animals and plants as well as to humans, but he denied that stones could have one. Modern thinkers have been more ready to take the plunge. As Chen notes, "Throughout the humanities and social sciences, scholars are working through posthumanist understandings of the significance of stuff, objects, commodities, and things." Various concepts have been proposed to break down the great divide between humans and nonhumans and between life and inanimate things, as the titles of recent essays indicate: "Vibrant Matter" (Jane Bennett), "Excitable Matter" (Natasha Myers), "Bodies That Matter" (Judith Butler), "The Social Life of Things" (Arjun Appadurai), "The Politics of Life Itself" (Nikolas Rose), "Parliament of Things" (Bruno Latour). Many argue that objects are imbued with agency, or at least an ability to evoke some sort of change or response in individual humans or in an entire society. However, each scholar has their own interpretation of the meaning of agency and of the true capacity of material objects to have personalities of their own. In Animacies, Mel Chen makes her own contribution to this debate by pushing it in a radical direction: writing from the perspective of queer studies, she argues that degrees of animacy, the agency of life and things, cannot be dissociated from the parameters of sexuality and race and are imbricated with health and disability issues as well as environmental and security concerns.

Intersectionality

Recent scholarship has seen a proliferation of dedicated cultural studies bearing the name of their subfield as an identity banner in a rainbow coalition: feminist studies, queer studies, Asian American studies, critical race studies, disability studies, animal studies… In a bold gesture of transdisciplinarity, Mel Chen's Animacies contributes to all of them. The author doesn't limit herself to one section of the identity spectrum: in her writing, intersectionality cuts across lines of species, race, ability, sexuality, and ethnicity. It even includes in its reach inanimate matter such as pieces of furniture (a couch plays a key part in the narrative) and toxic chemicals such as mercury and lead. And as each field yields its own conceptualization, Mel Chen draws her inspiration from what she refers to as "queer theory," "crip theory," "new materialisms," "affect theory," and "cognitive linguistics." What makes the author confident enough to contribute to such a broad array of fields, methods, and objects? The reason has to do with the way identity politics is played out in American universities. To claim legitimacy in a field of cultural studies, a scholar has to demonstrate a special connection with the domain under consideration. As an Asian American, for instance, Mel Chen cannot claim expertise in African American studies; but she can work intersectionally by building on her identity as a "queer woman of color" to enter into a productive dialogue with African American feminists. The same goes for other identity categories: persons with disabilities have a personal connection to abled and disabled embodiment, while non-disabled persons can only reflect self-consciously on their ableism. Even pet lovers, as we will see, have to develop a special relationship with their furry friends in order to contribute to (critical) animal studies.

Using this yardstick, Mel Chen qualifies on all counts for her transdisciplinary endeavor. She identifies herself as Asian American, queer, and suffering from a debilitating illness, and gives many autobiographical details to buttress her credentials. She mentions that her parents were immigrants from China who couldn't speak proper English and used singular and plural or gendered pronominal forms interchangeably. She grew up in a white-dominated town in the Midwest and was used to hearing racist slurs, such as people yelling "SARS!" at her—this was before a US president publicly stigmatized the "Chinese virus." She shows that prejudice against the Chinese has a long history in the United States. The book includes racist illustrations dating from the nineteenth century featuring Chinese immigrants with a hair "tail" and animal traits that make them look like rodents. Chen analyzes the racial fears of lead poisoning in the "Chinese lead toy scare" of 2007, when millions of Chinese-made toys sold by Mattel were recalled because of excessive levels of lead paint. She exhumes from the documentary and film archives the figure of Fu Manchu, a turn-of-the-century personification of the Yellow Peril, and proposes her own slant on a character said to provide "the bread and butter of Asian American studies." Mel Chen's self-reported identity as queer is also documented. She mentions her "Asian off-gendered form" when describing herself, and frequently refers to her own queerness. In an autobiographical vignette, she designates her partner as a "she" and puts the pronoun "her" in quotes when referring to her girlfriend (Chen's own bio on her academic webpage refers to her as "they"). Her scholarship builds on classics of queer studies such as Judith Butler and Eve Kosofsky Sedgwick, and she feels especially close to "queer women of color" theorizing. She introduces her readers to some unconventional gender and sexual performances, such as the category of the "stone butch," designating a lesbian who displays traditionally masculine traits and does not allow herself to be touched by her partner during lovemaking (by way of comparison, Chen adds that many men, homosexual or heterosexual, do not like to be penetrated).

Feeling Toxic

But it is on her medical condition that Mel Chen provides the most details. Moving to the "risky terrain of the autobiographical," she mentions that she was diagnosed with "multiple chemical sensitivity" and "heavy metal poisoning." This condition causes her to alternate between bouts of morbid depression and moments of "incredible wakefulness." She gives a moving description of walking in the street without her filter mask, on high alert for toxins and chemicals coming her way: navigating the city without her chemical respirator exposes her to multiple dangers, as each passerby with a whiff of cologne or traces of a chemical sunscreen may precipitate a strong allergic reaction. In such a condition, which affects her physically and mentally, she prefers to stay at home and lie on her couch without seeing anybody. But Mel Chen doesn't dwell on her personal condition in order to pose as a victim or to elicit compassion from her readers. First, she feels privileged to occupy an academic position as a professor of gender and women's studies at UC Berkeley: "I, too, write from the seat and time of empire," she confesses, and this position of self-assumed privilege may explain why she doesn't feel empowered enough to contribute to postcolonial studies or to decolonial scholarship. More importantly, she considers her disability an opportunity, not a calamity. Of course, the fact that she cannot tolerate many everyday toxins limits her life choices and capabilities. But toxicity opens up a new world of possibilities, a new orientation to people, to objects, and to mental states. As we are invited to consider, "queer theories are especially rich for thinking about the affects of toxicity."

This is where the love affair with her sofa comes in. When she retreats from the toxicity of the outside world, she cuddles in the arms of her couch and cannot be disturbed from her prostration. "The couch and I are interabsorbent, interporous, and not only because the couch is made of mammalian skin." The two switch sides, as object becomes animate and subject becomes inanimate. This is not mere fetishism: a heightened perception of human/object relations allows her to develop a "queer phenomenology" out of her mercurial experience. New modes of relationality affirm the agency of the matter we live among and break it down to the level of the molecular. Mel Chen criticizes the way Deleuze and Guattari use the word "molecularity" in a purely abstract manner, considering "verbal particles" as well as subjectivities in their description of the molar and the molecular. By contrast, she takes the notion of the molecular at face value, describing the very concrete effects toxic molecules have on people and their being in the world. These effects are mediated by race, class, age, ability, and gender. In her account of the Chinese lead toy panic of 2007, she argues that the lead painted onto children's toys imported to the United States was racialized as Chinese, whereas its potential victims were depicted as largely white. She reminds us that exposure to environmental lead primarily affects black and impoverished children as well as Native American communities, with debilitating effects on children's wellbeing and psychosocial development. Also ignored are the toxic conditions of labor and manufacture in Chinese factories operating mainly for Western consumers. The queer part of her narrative comes with her description of white middle-class parents panicking at the sight of their child licking his toy train Thomas the Tank Engine. In American parents' eyes, Thomas is a symbol of masculinity, and straight children shouldn't take pleasure in putting this manly emblem into their mouths. But as Chen asks: "What precisely is wrong with the boy licking the train?"

Queer Licking

In addition to her self-description as Asian, queer, and disabled, Mel Chen also claims the authority of the scholar, and it is on the academic front, not at the testimonial or autobiographical level, that she wants Animacies to be registered. Trained as "a queer feminist linguist with a heightened sensitivity to the political and disciplinary mobility of terms," she borrows her flagship concept from linguistics. Linguists define animacy as "the quality of liveness, sentience, or humanness of a noun or noun phrase that has grammatical, often syntactic, consequences." Animacy describes a hierarchical ordering of types of entities that positions able-bodied humans at the top and runs from human to animal, to vegetable, to inanimate objects such as stones. Animacy operates on a continuum, and degrees of animacy are linked to existing registers of species, race, sex, ability, and sexuality. Humans can be animalized, as in racist slurs but also during lovemaking. "Vegetable" can describe the state of a terminally ill person. As for stones, we have already encountered the stone butch. Conversely, animals can be humanized, and even natural phenomena such as hurricanes can be gendered and personified (as with Katrina). Language acts may contain and order many kinds of matter, including lifeless matter and abject objects. Dehumanization and objectification involve the removal of qualities considered human and are linked to regimes of biopower or to the necropolitics by which the sovereign decides who may live and who must die.

This makes the concept of animacy, and Mel Chen's analysis of it, highly political. Linguistics is often disconnected from politics: Noam Chomsky, the most prominent linguist of the twentieth century, also took very vocal positions on war and American imperialism, but he kept his political agenda separate from his contribution to the discipline. In How to Do Things with Words, J. L. Austin demonstrates that speech acts can have very real and political effects, and in Language and Symbolic Power, Pierre Bourdieu takes language to be not merely a method of communication but also a mechanism of power. Mel Chen takes this politicization to its radical extreme. She criticizes queer liberalism and its homonormative tendency to turn queer subjects into good citizens, good consumers, good soldiers, and good married couples. Recalling the history and uses of the word queer, which began as an insult and was turned into a banner and an academic discipline, she notes that some queers of color reject the term as an identity and substitute their own terminology, such as the African American quare. She also questions the politics by which animals are excluded from cognition and emotion, arguing that many nonhuman animals can also think and feel. Positioning her animacy theory at the intersection of queer of color scholarship, critical animal studies, and disability theory, she argues that categories of sexuality and animality are not colorblind and that degrees of animacy also have to do with sexual orientation and disability. She brings the endurance of her readers to its breaking point by invoking subjects such as bestiality and highly unconventional sexual practices. Her examples are mostly borrowed from historical and social developments in the United States, with some references to the People's Republic of China. She draws on a highly diverse archive that includes contemporary art, popular visual culture, and TV trivia.

Critical Pet Studies

According to "Critical Pet Theory" (there appears to be such a thing), scholars have to demonstrate a special bond with their pet in order to contribute to the field of animal studies. Talking in the abstract about a cat or a dog won't do: it has to be this particular dog of a particular breed (Donna Haraway's Australian shepherd 'Cayenne'), or this small female cat that Jacques Derrida describes in The Animal That Therefore I Am. Talking, as Deleuze and Guattari did, of the notion of "becoming-animal" with "actual unconcern for actual animals" (as Chen reproaches them in a footnote) is clearly a breach of pet studies' normative ethics. Even Derrida fell short of a simple obligation of companion-species scholarship when he failed to become curious about what his cat might actually be doing, feeling, or thinking on that morning when he emerged unclothed from the bathroom, somehow disturbed by the cat's gaze. Mel Chen's choice of companion species is in line with her self-cultivated queerness: she begins the acknowledgments section "with heartfelt thanks to the toads," as well as "to the many humans and domesticated animals populating the words in this book." The close-up picture of a toad on the book cover is not easily recognizable, as its bubonic glands, swollen excrescences, and slimy texture seem to belong both to the animal kingdom and to the realm of inert matter. Animacy, of course, summons the animal. But Mel Chen is not interested in contributing to pet studies: she advocates the study of wild and unruly beasts or, as she writes, a "feral" approach to disciplinarity and scholarship. "Thinking ferally" involves poaching among disciplines, raiding archives, rejecting disciplinary homes, and playing with repugnance and aversion in order to disturb and unsettle. Yes, the toad, this "nightingale of the mud" as the French poet would have said, is a fitting representation of this book's project.