The Party Left and the Hindu Right in Kerala

A review of Violence of Democracy: Interparty Conflict in South India, Ruchi Chaturvedi, Duke University Press, 2023.

Violence of Democracy studies a long-standing violent antagonism between members of the party left and the Hindu right in the Kannur district of Kerala, a state on the southwestern coast of India. The term party left refers to members of the Communist Party of India (Marxist) (CPI(M)); the term Hindu right denotes affiliates of the Rashtriya Swayamsevak Sangh (RSS) and the Bharatiya Janata Party (BJP), which has held power in New Delhi since 2014. The prevalence of violence in Kerala’s political life presents the reader with three paradoxes. First, political scientists view democracy as a pacifying system, as the regime that is most capable of keeping violence at bay. Autocracies are violent by nature; democracies are supposed to be more peaceful, both between themselves (democracies don’t go to war against each other) and within their borders (antagonisms are resolved through the ballot box). But Ruchi Chaturvedi shows us that democracy can coexist with violence; indeed, that some characteristics of a democratic regime call for the violence it is supposed to contain. As she states in the introduction, “violence, I argue, not only reflects the paradoxes of democratic life, but democratic competitive politics has also helped to condition and produce it.” This criminalization of domestic politics has a long history in Kerala, and Violence of Democracy documents it by revisiting the life narratives of key politicians from the left, by going through judicial cases and media reports of political violence in the Kannur district, and by conducting ethnographic interviews with grassroots militants from both parties. This book will be of special interest to social scientists interested in Indian politics as viewed from a southern state that now stands in opposition to the Modi government. But the author also raises disturbing questions for political scientists more generally: is democracy intrinsically violent? What explains the shift from the verbal violence inherent in agonistic politics to antagonistic confrontation that results in acts of intimidation, attempted murder, and hate crimes? How can violence become closely entwined with the institutions of democracy? How can political forces be held accountable for the violence they encourage and the crimes committed in their name? What happens to political violence and its culprits when they are prosecuted through the judicial system and sanctioned under criminal law?

Violent democracy

The second paradox lies with the root causes of political violence in this district of Kerala. Violence in India is often seen as the result of communal tensions. India’s birth of freedom was bathed in blood: the 1947 partition immediately following independence cut through the fabric of social life, pitting one community against the other. Antagonisms between Hindus and Muslims, or between Hindus and Sikhs, have often led to waves of riots and murderous violence. Beyond the trauma of the partition, in which around one million people were killed and 14 million were displaced, mass outbreaks of violence include the 1969 Gujarat riots involving communal strife between Hindus and Muslims, the 1984 anti-Sikh massacres following the assassination of Indira Gandhi by her Sikh bodyguards, the armed insurgency in Kashmir starting in 1989, the Babri Masjid demolition in the city of Ayodhya leading to retaliatory violence in 1992, the 2002 Gujarat riots that followed the Godhra train burning, and many other such episodes. As if religion were not reason enough to fuel internal conflict, Indian society is also divided along caste, class, race, regional, and ethno-linguistic lines, and these divisions in turn often abet violence and intercommunal strife. But in the Kannur district that Chaturvedi observes, “members of the two groups do not belong to ethnic, racial, linguistic, or religious groups that have been historically pitted against each other.” Indeed, “local-level workers of both the party left and the Hindu right involved in the violent conflict with each other share a similar class, religious, and caste background. And yet the contest between them to become a stronger presence and the major political force in the region has generated considerable violence.” The conflict between the two parties in this particular district is purely political. It cannot be read as a conflict between an ethnic or religious majority and a minority community. Its roots lie elsewhere: for Chaturvedi, they are to be found in the very functioning of parliamentary democracy in India.

The third paradox is that this history of violent struggle between the party left and the Hindu right doesn’t correspond to the standard image most people have of Kerala. This state on India’s tropical Malabar Coast is known for its high literacy rate, low infant and adult mortality, and low levels of poverty. Kerala’s model of development gained exceptional global coverage in the 1970s, 1980s, and early 1990s, before the rest of India entered its course of high growth and rising average incomes. Even now, Kerala is ahead of other Indian states in the provision of social services such as education and health. Its achievements are not linked to a particular industry, like the IT service sector in Bangalore or the automotive industry in Chennai, but stem from continuous investments in human capital and infrastructure (remittances from Kerala workers employed in Gulf states have also played a role). Kerala is also known for having had self-avowed Marxists occupying positions of power for more than four decades. As Chaturvedi reminds us, “it was the first place in the world to elect a communist government through the electoral ballot in 1957.” Today, the two largest communist parties in Kerala politics are the Communist Party of India (Marxist) and the Communist Party of India, which, together with other left-wing parties, form the ruling Left Democratic Front alliance. They have been in and out of power for most of India’s post-independence history, and are well entrenched in local political life. Communists are sometimes accused of plotting the violent overthrow of the government through revolutionary tactics, and the BJP is not above playing on the red scare and accusing its enemies of conspiracy. But in Kerala violence doesn’t come from revolutionary struggle or armed insurgency; it originates in the very exercise of power. And it didn’t prevent Kerala from becoming the poster child of development economics, showing that redistributive justice can be achieved despite (or alongside) violent conflict and antagonistic politics.

Malabar traditions

Some observers may explain political violence in Kerala by the intrinsic character of its inhabitants. They point to a traditional martial culture of physical confrontation and warfare. The local martial art, kalaripayattu, is said to be one of the oldest combat techniques still in existence. Dravidian history was marked by internecine warfare, the rise and fall of many great empires, and a culture of resistance against northern invaders. The Portuguese established several trading posts along the Malabar Coast and were followed by the Dutch in the 17th century and the French in the 18th century. In French, a “malabar” still means a muscular and sturdy character, although the name seems to come from the indentured Indian workers who came to toil in the sugarcane fields of Réunion Island. The British gained control of the region in the late 18th century. The Malabar District was attached to the Madras Presidency, while the two princely states of Travancore and Cochin, which together with Malabar make up present-day Kerala, were ruled indirectly through a series of treaties reached with their princely authorities in the course of the 19th century. Direct rule in Malabar reinforced landlord domination over sharecroppers and tenants, with the landlords belonging to the upper-caste Nairs and Nambudiris while tenant cultivators and agricultural workers were the purportedly inferior Thiyyas, Pulayas, and Cherumas. In the early 20th century, social tensions were rife, voices were calling for land reform and the end of caste privilege, and Kerala became the breeding ground for the cadres and leaders of the Communist Party of India (CPI), officially founded on 26 December 1925. Communism is therefore heir to a long tradition of militancy in Kerala. India is home to not one but two communist parties, the CPI and the CPI(M), the second born of a schism in 1964 and now sending more representatives to the national parliament than the first.

Instead of essentializing a streak of violence in India’s and Kerala’s political life, Chaturvedi explains the violent turn of electoral politics in the district of Kannur as the result of majoritarianism, the adversarial drive to become a major force in a local political system, and its correlate, minoritization, the drive to marginalize proponents of the minority party. The search for ascendancy is not extraneous to democracies but is part of their basic definition and structure. In Kerala, politics turned violent precisely because the main political forces, and especially the party left and the Hindu right, agreed to play by the rules of democracy. The acceptance of democracy’s rules of the game, namely free and fair elections and majority rule, wasn’t a preordained result. At various points in its history, the communist movement in India was tempted by insurgency tactics and armed struggle. Chaturvedi revisits the political history of Kerala by drawing the portraits of two leaders of the political Left, using their autobiographies and self-narratives. Both A.K. Gopalan (“AKG”) and P.R. Kurup were upper-caste politicians who identified with the plight of poor peasants and lower-caste workers. In 1927, Gopalan joined the Indian National Congress and began playing an active role in the Khadi Movement and the upliftment of Harijans (“untouchables” or Dalits). He later became acquainted with communism and was one of 16 CPI members elected to the first Lok Sabha in 1952. Gopalan’s life narratives “privilege spontaneous moral reactions marked by a good deal of physical courage and a strong sense of masculinity.” He was a party organizer, anchoring the CPI and then the CPI(M) in the political life of Kerala, and a partisan of electoral politics, rejecting the temptation to engage in armed insurrection in 1948-1951 as “adventurist” or “ultra-left.” Thanks to his legacy, the CPI(M) now resembles other parties normally seen in parliamentary democracies: “each one seeking to obtain the majority of votes in order to ascend to the major rungs of government.” But P.R. Kurup embodies a darker side of electoral politics: known as “rowdy Kurup,” he remained a regional socialist leader through strong-arm tactics and the occasional street fight against rival supporters of the CPI or the Congress. His band of low-caste supporters (“Kurup’s rowdies”) were willing to use intimidatory and violent means so that their party remained on top.

From agonistic contest to antagonistic conflict

Both Gopalan and Kurup were “shepherds” or “pastoral leaders” who protected, saved, and facilitated the well-being of a populace that reciprocated their favors with votes and other expressions of support. By contrast, the next generation of local leaders to which Chaturvedi turns comes from a lower rung of society. They are the militant members and local cadres of the CPI(M) and the RSS-BJP who form antagonistic communities willing to attack and counterattack each other so that their party might dominate in the electoral competition. The fact that the young men at the forefront of the conflict between the party left and the Hindu right in the district of Kannur share similar religious, caste, and class backgrounds makes the conflict exceptional: it cannot be read as a conflict between an ethnic or religious majority and a minority community. But this distinctive form of political violence in Kannur can be characterized as an exceptional-normal phenomenon, an expression of something common to all democracies: competition for popular and electoral support creates the conditions and ground for the emergence of hate-filled and vengeful acts of violence between opposing political communities. The clashes between the two camps are not just occasional: exploiting various sources such as police and court records as well as personal interviews with workers from the two groups, the author estimates that more than four thousand workers of various parties have been tried for political crimes in Kannur in the past five decades. Assailants used weapons such as iron rods, chopping knives, axes, crude bombs, sword knives (kathival), sticks, and bamboo staffs (lathi). They formed tight-knit communities of young men sharing fraternal bonds and a spirit of strong cohesion: the RSS shakha (local branch network) is the most organized structure on the Hindu right, but the party left also has its volunteer vigilante corps akin to RSS cadres, and a student wing trained in “self-defense techniques.” For both camps, a cycle of attacks and counterattacks breeds mimetic violence and a culture of aggression and vengeance.

In a functional democracy, law and order is maintained and crime gets punished. Many young men from the party left and the Hindu right have been brought to court on suspicion of politically motivated crimes and sanctioned accordingly. But for Chaturvedi, law is a “subterfuge” that obfuscates the complicity of the democratic political system in brewing violence and offers it an “alibi” or a “free pass.” Justice is the continuation of politics by other means, and the conflict between the CPI(M) and the RSS-BJP in Kannur is being reenacted in the courts. The judicial system depoliticizes political violence by projecting responsibility onto individuals and exonerating political structures of any responsibility for the crimes committed in their name. Perpetrators of violent aggression are liable under criminal law, and judges don’t take into account their political motivations, pointing instead to acts of madness or a background of criminal delinquency. Political parties on both sides do not remain inactive during trials: they tutor witnesses to produce convincing testimonies or offer alibis, they cast suspicion on the testimonies of the opposing party, they fabricate evidence and manipulate opinion. Judicial proceedings take an exceedingly long time due to legal maneuvers, and suspects are often acquitted for lack of evidence. Important local figures thought to be planning and facilitating the aggressions are not called to account. In addition, according to Chaturvedi, the judicial system in India has taken a majoritarian turn: it affords impunity to members of the dominant group while persecuting minorities and those who challenge its hegemony. In Kerala, it has not stopped generations of young men from engaging in attacks and counterattacks so that their party can stay on top. Depoliticizing political violence and obscuring the conditions that have produced it not only leaves political forces unaccountable: it perpetuates a cycle of aggression and impunity. For the author, a true political justice would not reduce political violence to individual criminality, but would address the structures that underlie it.

Majoritarianism and minoritization

For Chaturvedi, electoral democracy is defined by the competition “to become major and make minor,” or the imperative “to become a major political force and reduce the opposition to a minor position.” In a first-past-the-post electoral system, the party that commands the greatest number of votes in the greatest number of constituencies obtains greater legislative powers and access to executive authority. There is a built-in incentive to conquer and vanquish, as political opponents are seen as an obstacle on the road to power. Democracy therefore has a propensity to divide, polarize, hurt, and generate long-term conflicts. In the district studied by the author, democracy has facilitated the emergence of violent majoritarianism and minoritization, understood as “practices that disempower a group in the course of establishing the hegemony of another.” Most modern democracies make accommodations to protect minorities, but they also continue to uphold rule of the majority as the source of their legitimacy. The founding fathers of modern India, from Syed Ahmad Khan to Mahatma Gandhi to B.R. Ambedkar, were aware of this risk of majority rule and sought to mitigate it by building checks and balances and appealing to the better part of people’s nature. Initially a proponent of Hindu-Muslim unity, Sir Syed wrote about the “potentially oppressive” character of democracy, fearing that it might translate into “crude enforcement of majority rule.” Gandhi not only warned against the workings of competitive politics and the dangers of majoritarianism, but also expressed skepticism about the rule of law and the impartiality of the judicial system. Ambedkar wrote principles of political freedom and social justice into the Indian constitution, but was keenly aware that democracies were by definition a precarious place for social and numerical minorities. Although their solutions may not be ours, Chaturvedi concludes that “we need to attend to questions that figures like Sir Syed, Ambedkar, and Gandhi raised.”

The World’s Largest Democracy

A review of Hailing the State: Indian Democracy between Elections, Lisa Mitchell, Duke University Press, 2023.

We are tirelessly reminded that India is “the world’s largest democracy.” In times of general elections, like the one taking place from 19 April to 1 June 2024, approximately 970 million people out of a population of 1.4 billion are called to the ballot box in several phases to elect the 543 members of the Lok Sabha, the lower house of India’s bicameral parliament. The election garners a lot of international attention. For some, it is the promise that democracy can flourish regardless of economic status or level of income per head: India was one of the poorest countries in the world for much of the twentieth century, and yet has never reneged on its democratic pledge since independence in 1947. For others, it is proof that unity in diversity is possible, and that nations divided along ethnic, religious, or regional lines can manage their differences in a peaceful and inclusive way. For still others, India is not immune to the populist currents menacing democracies in the twenty-first century. For some observers, like political scientist Christophe Jaffrelot, India’s elections this year stand out for their undemocratic nature, and democracy is under threat in Narendra Modi’s India. And yet India is a functional democracy where citizens participate in voting at far higher rates than in the United States or Europe. Lisa Mitchell’s book Hailing the State draws our attention to what happens to (as the book’s subtitle says) “Indian democracy between elections.” Except during general election campaigns, foreign media coverage of Indian domestic politics is limited in scope and mostly concentrates on the ruling party’s exercise of power in New Delhi. Whether this year’s elections are free and fair will be considered a test for Indian democracy. But as human rights activist G. Haragopal (quoted by the author) reminds us, “democracy doesn’t just mean elections. Elections are only one part of democracy.” Elected officials have to be held accountable for their campaign promises; they have to listen to the grievances of their constituencies and find solutions to their local problems; they have to represent them and echo their concerns. When they don’t, people speak out.

Repertoires of protest

They do so in distinctly Indian ways, using repertoires of protest that differ markedly from the modes of action used in other democracies. During the Telangana movement to create a separate state distinct from Andhra Pradesh, people resorted to roadblocks on state and national highways, rail blockades, fasting vows or hunger strikes, mass outdoor public meetings, strikes or work stoppages, sit-ins, human chains, processions, and marches to the capital. Collective mobilizations acquired grand names such as the Mahā Jana Garjana (lit., “great roar of the people”), Sakala Janula Samme (general strike; lit., “All People’s Strike”), or Dilli Chalo (“Let’s Go to Delhi”) movements, while more ordinary practices were designated as garjanas (mass meetings), dharnās (sit-ins), padayātras (foot pilgrimages), and rāstā roko and rail roko actions (road and rail blockades). During the 2020-2021 Indian farmers’ protests against three farm bills passed by the Parliament of India in September 2020, Tamil Nadu farmers resorted to various techniques to gain political attention, including “shaving half their beards and hair, displaying skulls and femur bones purported to be from farmers who had committed suicide, eating rats and snakes, marching in the nude to the prime minister’s office, and vowing to drink their own urine and eat their own feces.” According to Lisa Mitchell, we should not see these practices as specific to southern Indian states or linked with low-status caste or religious-based identitarian politics. First, these registers of political participation are not marginal to Indian democracy: “the many collective assemblies that sought to hold elected officials accountable to their promises to create the new state of Telangana are just one set of examples of the many similar practices that animate India’s wider political terrain.” Second, these collective modes of assembly serve a political function: they are “widely seen in India as everyday communicative methods for gaining the attention of officials, making sure that election promises are implemented, and ensuring the equitable enforcement of existing laws and policies.” And third, these mass protests have a history that predates the institution of Indian democracy, finding their roots in colonial times and even in precolonial efforts to gain audience with domestic rulers.

Lisa Mitchell defines “hailing the state” as “a wide range of practices that can be grouped together around their common aim to actively seek, maintain, or expand state recognition and establish or enhance channels of connection to facilitate ongoing access to authorities and elected officials.” The expression inverts or subverts the state tactic identified by French philosopher Louis Althusser as “hailing” or “interpellation,” by which a state official—in the Althusserian vignette, a policeman—interpellates a citizen with a halting order (“Hey, you!”). For Michel Foucault, a disciplinary society is one in which people become docile bodies under the presence, or threat, of constant surveillance and discipline. In political analysis inspired by Marxism or Foucauldian studies, the capitalist state is always on the side of oppression or surveillance, and subjects are drawn to passive submission or led to active resistance. According to anthropologist James Scott, “weapons of the weak” include everyday forms of resistance such as foot-dragging, dissimulation, false compliance, pilfering, feigned ignorance, slander, arson, sabotage, and so forth. But as Lisa Mitchell notes, many collective actions of protest are in fact efforts to seek recognition and inclusion by state authorities, not to subvert or bypass them. In both the Telangana movement and the 2020-21 farmers’ protests, the demands made were not for the overthrow of the state, but rather for dialogue with representatives of the state, for inclusion within the processes that would determine state policies, and for the fulfillment of earlier political promises that had not yet been realized. Failure to achieve recognition forces petitioners to amplify their voices in order to be heard by public administrators, political leaders, and the general public: “when one’s interests are already well represented and one can be certain that one’s voice will be heard, there is little need to mobilize collectively in the streets. However, when one’s voice and interests repeatedly fail to find recognition, an alternative is to make one’s articulations more difficult to ignore by joining together in collective communicative action.”

Turning up the volume

Hailing the State is organized around seven sets of collective mobilizations: (1) sit-ins (dharna) and hunger strikes (nirāhāra dīkṣa); (2) efforts to meet or gain audience (samāvēśaṁ) with someone in a position of authority; (3) mass open-air public meetings (garjana); (4) strikes (samme, bandh, hartāl); (5) alarm chain pulling in the Indian railways; (6) road and rail blockades (rāstā and rail roko agitation); and (7) rallies, processions, and pilgrimages to sites of power (yātra, padayātra), along with the mass ticketless travel that often enables these gatherings. These social movements are not the expression of preexisting cultural identities; on the contrary, as Mitchell shows, Telangana or Dalit identities are constructed out of collective action and are the result of efforts to amplify voices and have them recognized. Actors who seek recognition, connection with, or incorporation into structures of state power are drawn together by a common desire to gain visibility and inclusion. Rather than ascribing a different “culture” to subaltern counterpublics and explaining differences in political repertoires by differences in underlying ideologies, we should consider that styles of public expression are produced through failures of recognition and unequal access to power. Distinctions in the level of responsiveness by authorities to various individuals and groups explain the civility and order, or violence and unruliness, with which collective claims are made. Subaltern actors are not more prone to violence and angry protest than elites; it is just that the latter usually settle their problems with ruling powers behind closed doors and without having to raise their voice, whereas the former are forced to find ways to amplify their voices. Speaking softly or writing in moderate tones is a condition of privilege, based on the expectation that one’s voice will be heard and acknowledged. We should not dismiss the masses out of hand as unruly, angry, and uncivil, without considering that for them the “conditions of listening” are often not in place. Likewise, we should not draw a sharp line between the practices of “civil society” and those of “political society,” or between public places open to collective political activity and other urban venues devoted to circulation or economic activity.

Many acts of civil disobedience or nonviolent protest in India are associated with Mahatma Gandhi and the legacy of his struggle for Indian independence. Yet a history of these practices shows that they have far more ancient roots, and that they didn’t stop with independence. Fasting or threatening to commit suicide at the doorstep of a powerful person, or assembling in a designated place to gain audience and present petitions, are repertoires of practice recorded in ancient Hindu scriptures and colonial archives. Local rulers were usually quite responsive in promising redress to such appeals, at which point the fasting brahmin or the gathering crowd would return home and resume daily activities. Similarly, as Mitchell notes, “work stoppages, mass migrations, and collective strikes to shut down commerce and transportation are evident in South Asian archival sources from at least the seventeenth century, perhaps even earlier, and were clearly used to make representations to state authorities at the highest level.” Later on, East India Company officials and then British colonial administrators were unable to comprehend the social context of petitioning and therefore invariably took any large demonstration to be an act of hostile rebellion. They referred to these collective actions as “combinations” or, less generously, as “insurgencies,” “mutinies,” “insurrections,” “revolts,” or “rebellions,” even when their participants sought only to gain an audience with officials in circumstances in which earlier communicative efforts had been ignored or refused. When collective actions did become violent, it was often in response to authorities firing on crowds to silence and disperse them. The leaders of newly independent India in 1947 largely inherited both the ideological perspective on collective assembly and the legal and policing systems established by the British. But they were never entirely successful in eliminating the collective practices that offered time-tested models for effectively engaging and communicating with officials, authority figures, and others in positions of power.

Railway democracy

Public transportation networks play a central role in the organization of collective political actions. Streets, highways, intersections, railway stations, rail lines, and road junctions are sites where people gather, claims are made, and communication with the state is pursued. A history of Indian democracy would not be complete without mentioning the role railway traffic and infrastructure have played in creating a common polity. As soon as they were built, the railways became a key target of anticolonial protest. Practices such as alarm chain pulling, rail blockades known as roko, and ticketless travel to join political rallies were so common that they eventually came to be redefined by the government as political demonstrations, and efforts to impose penalties on perpetrators were abandoned. Disruption of rail traffic reached such heights and became such a regular challenge to authorities that the Indian Railways developed a policy of mitigation and adaptation, adding extra carriages to accommodate the large numbers of people traveling without tickets to mass meetings, or authorizing the stoppage of a train for a brief moment so that demonstrators could have their picture taken by the media before clearing the way. Political scientists have underscored the role of the printing press and the mass media in the emergence of a public arena and the rise of democratic governance. Similarly, railways in India have been an effective medium of political communication. Halting a train in one location enabled a message to be broadcast up and down the entire length of a railway line, forcing those from other regions to pay attention to the cause of the delay. Road blockades have become equally important ways to convey political messages. Genealogies of democracy in India should not only focus on deliberative processes and political representation, but should also include material infrastructures such as railways and roads. Democracy is something people do, and places of participation and inclusion are a fundamental part of what democracy means.

Hailing the State is based on archival evidence and ethnographic observation. The author has documented the social movement that led to the creation of a separate Telangana state, the result of sixty years of mobilization by Telangana residents for political recognition. This movement culminated on June 2, 2014, with the creation of India’s twenty-ninth state, which bifurcated the existing Indian state of Andhra Pradesh. Proponents of a separate Telangana state felt that plans and assurances from the state legislature and Lok Sabha had not been honored, and mobilized to hold government officials accountable to their promises. They cultivated a distinct cultural identity based partly on a variant of the Telugu language, and resented having their accent ignored or mocked by speakers from coastal Andhra. Lisa Mitchell also documents other social movements led by Dalit students, women, and peasants in India’s southern states. Her archival work led her to exploit the archives of the Indian Railways, documenting the debates around alarm chain pulling and roko rail blockades over the twentieth century. Her book is also theoretically ambitious. In her text and in her endnotes, she discusses the ideas of European philosophers like Althusser, Foucault, Balibar, Lefebvre, and Habermas, highlighting their insights and perceptiveness but also their biases and shortcomings. Mitchell invites us to “decenter England (and Europe more generally) as the ‘precocious’ and normative site for historical innovation in collective forms of contentious political action.” The way democracy works in India between elections holds lessons for the rest of the world. In particular, observers would have been less puzzled by the various Occupy movements in Western metropoles (and the Yellow Vests protests in France) had they paid attention to the Telangana movement or other forms of collective public performance in southern India.

The India Stack

Democracy these days is becoming more abstract and dematerialized: from online consultations to e-governance, people increasingly turn to the internet for information about their rights, delivery of social services, and feedback about public matters. Digital government is supposed to enhance governance for citizens in a convenient, effective, and transparent way, eliminating opportunities for corruption and embedding democratic processes in the information infrastructure. India is at the vanguard of this movement: with a vision to transform India into a digitally empowered society and knowledge economy, the government has digitized the delivery of vital services across various domains, promising transparency, inclusivity, and accessibility for all citizens. The “India Stack” includes Aadhaar, the world’s largest digital ID program; the Unified Payments Interface (UPI), India’s homegrown real-time mobile payments system; and the Data Empowerment and Protection Architecture (DEPA), India’s version of the European Union’s General Data Protection Regulation. But e-government and personal identity numbers can also be used to limit political access to persons in positions of power or to reduce opportunities for recognition and face-to-face communication. As Lisa Mitchell notes, the decision to launch a website for receiving online petitions and to substitute it for direct access was met with great protest. The relocation of Dharna Chowk, Hyderabad’s designated place for assembly and protest, to a site far away from the center of power was perceived as an authoritarian effort to silence dissent and limit political opposition. Foreign observers often deride the institution of granting audience, whereby citizens wait in line to meet a government official and petition for justice, relief, or favor, as the remnant of a “feudal mindset” inherited from Mughal administrators and British officers. But Indian citizens are attached to their own ways of hailing the state, and such collective performances are neither antithetical nor incidental to the functioning of India’s democracy between elections.

The Celibate Plot

A review of Celibacies: American Modernism and Sexual Life, Benjamin Kahan, Duke University Press, 2013.

Literary criticism has accustomed us to read sex between the lines of literary fiction. What Maisie Knew was what her parents were doing in the bedroom; The Turn of the Screw would have the heroine screwed if the door were unlocked; and Marcel Proust’s Lost Time was time not spent in the arms of his lover. According to this view, literature arises when an author wants to suggest something about a person or thing but, for whatever reason, does not wish to state explicitly what is on his or her mind, and so writes a novel, or poetry. Psychoanalysis has several words for this urge to dissimulate and beautify: sublimation, repression, transference, displacement, defense mechanism, the conflict between the super-ego and the id. They all refer to the transformation of socially undesirable impulses into desirable and acceptable behaviors. But what if the opposite were true? What if no sex means no sex, and there is no dark secret to probe into? The French philosopher Michel Foucault hinted at this possibility in his History of Sexuality when he criticized the repressive hypothesis, the idea that western society suppressed sexuality from the 17th to the mid-20th century due to the rise of capitalism and bourgeois society. Foucault argued that discourse on sexuality in fact proliferated during this period, as experts began to examine sexuality in a scientific manner, cataloguing sexual perversions and emphasizing the binary between hetero- and homosexuality. In contrast, Roland Barthes, Foucault’s colleague at the Collège de France, proposed a concept to bypass the paradigm of sexuality and go beyond the binary construction of meaning: the Neutral. “I define the Neutral as that which outplays the paradigm, or rather I call Neutral everything that baffles paradigm,” he wrote. According to Barthes, the Neutral, or grammatical Neuter (le neutre), operates a radical deconstruction of meaning and sexuality. It allows us to reexamine from a fresh perspective the question of le genre, understood in its dual sense of literary genre and of gender.

The repressive hypothesis

Biographies of Roland Barthes point out that he remained a bachelor all his life and shared an apartment with his mother, to whom he devoted a vibrant eulogy at the time of her death. Barthes was also a closet homosexual, never avowing in public his penchant for boys and his dependence on the gigolo trade. His works are almost silent on his sexuality. Barthes’s homosexuality concerned only a private part of his life; it was never made public, because it simply wasn’t. Homosexuality was never for Barthes anything other than a matter of sex, limited to the question of the choice of a sexual object. He wasn’t gay (a term that functions as a seal of identity), and would never have been part of the political movement for the recognition of homosexual rights. This indifference was not a repression: it was another way of expressing what being modern meant for him, even if Barthes’s modernity was closely related to a certain resistance to the modern world. In a society obsessed with the new and the rejection of conventional forms, it is attachment to the past that now constitutes a form of marginality or even clandestinity and, as such, a heroism of the ordinary. Being modern doesn’t just mean taking part in the intellectual or artistic spectacle of contemporary society. It also, and above all, means constructing meanings, words, ways of being, cultural and textual interventions that precede what a society makes available. To be modern is to make one’s desire come to language. In this sense, Benjamin Kahan’s Celibacies, a work of literary criticism and cultural history, articulates other ways of being modern. Focusing on a diverse group of authors, social activists, and artists, spanning from the suffragettes to Henry James, and from the Harlem Renaissance’s Father Divine to Andy Warhol, Kahan shows that the celibate condition, in the diverse forms it took in the twentieth century, meant much more than sexual abstinence or a cover for homosexuality. To those who associate the notion of celibacy with sexual repression, submission to social norms, and political conservatism, he demonstrates that celibacies in the twentieth century were more often than not on the side of social reform, leftist politics, and the artistic avant-garde.

Celibacies is placed under the sign of Eve Sedgwick’s Epistemology of the Closet, with a quote used as an epigraph that opens the book: “Many people have their richest mental/emotional involvement with sexual acts that they don’t do, or even don’t want to do.” Sedgwick deemed the hermeneutic practice of uncovering evidence of same-sex desire and its repression in literature “paranoid reading.” To this trend, she opposed a reparative turn in literary studies: reparative reading seeks pleasure in the text and works to replenish the self. Sedgwick’s injunction to move from paranoid to reparative reading has been diversely followed. On the one hand, queer studies continue to read the absence of sex as itself a sign of homosexuality or of repressed desire, as an act of self-censorship and insincerity. The closeted subject has internalized social norms and keeps the true self hidden from outside views, sometimes hidden from the conscious self as well. By contrast, the queer subject brings desire to the fore, and challenges tendencies to oppose private eroticism and the systems of value that govern public interests. On the other hand, queer theory rejects normativities of all stripes, including homonormativity. It understands sex and gender as enacted and not fixed by natural determinism. Since the performance of gender is what makes gender exist, a performance of “no sex” creates a distinct gender identity: no means no, and abstinence from sex is not always the sign of repressed sexuality. It is possible to theorize gender and even sexuality without the interference of sex. But according to Kahan, celibacy is distinct from asexuality, understood as the lack of sexual attraction to others, or low or absent interest in or desire for sexual activity. Celibacy is a historical formation or a structure of attachment that can be understood as a sexuality in its own right. Its meaning has evolved over the nineteenth and twentieth centuries: it has been used as a synonym for unmarried, as a life stage preceding marriage, as a choice or a vow of sexual abstinence, as a political self-identification, as a resistance to compulsory sexuality, as a period in between sexual activity, or as a new form of gender identity organized in a distinct community culture. Used in the plural, celibacies reflects these overlapping meanings and casts light on literary productions illustrating the impact of modernism in America.

The educated spinster

Celibacy once was a recognized social identity defined by its opposite, heterosexual marriage. According to Simone de Beauvoir, “the celibate woman is to be explained and defined with reference to marriage, whether she is frustrated, rebellious, or even indifferent in regard to that institution.” Its determinants were political and economic rather than sexual or sentimental: celibacy was a necessary condition for middle- and upper-class white women to gain legal and financial independence. At the end of the nineteenth century, “marriage bars” mandated the dismissal of female employees upon their marriage or prohibited the employment of married women altogether. Educated women who wanted to enter a career or a profession had to remain unmarried or hide their marriage. They did so in large numbers: “Of women educated at Bryn Mawr between 1889 and 1908, for instance, fifty-three percent remained unwed.” For this reason, celibacy is at the very heart of the history of labor in America. It is also a key component of social mobilization and civic campaigns: in the United States, unmarried, educated women composed much of the rank and file of social movements campaigning for universal suffrage, temperance, and social purity. The centrality of celibacy to first-wave feminism cannot be emphasized enough. For the author, women’s “choice not to marry is indicative of a willingness to think outside existing social structures and thus it is associated with freedom of thought.” For their male contemporaries, it was also associated with ridicule. Women campaigning for female suffrage were belittled as “suffragettes,” and other expressions disparaged women who had chosen to stay single (“singletons,” “bachelorettes,” “old maids,” “spinsters”). The male bachelor, by contrast, was seen as socially able to marry but having delayed marriage of his own volition; he could be characterized as “a good catch,” “a stag,” or “a jolly good fellow.”

Celibacy’s history is imbricated with the history of homosexuality. Discussing Henry James’s novel The Bostonians, Kahan investigates one of the most contested sites of celibacy in the history of homosexuality: the Boston marriage. The term “Boston marriage” describes a long-term partnership between two women who live together and share their lives with one another. In James’s satirical novel, the romance between the heroine Verena Tarrant and Olive Chancellor, a Boston feminist and social campaigner, is placed on an equal footing with the romance between Verena and her other suitor, Basil Ransom. This love triangle is often read as a lesbian plot: Verena’s decision to leave her parents’ house, move in with Olive, and study in preparation for a career in the feminist movement is seen as the result of a romantic attraction. Benjamin Kahan proposes another interpretation based on the constitutive role of celibacy as a means to independence and self-determination. The Boston marriage, which does not grow out of “convenience or economy,” is associated with collaborative literary production. It reflects Henry James’s own condition as a lifelong bachelor and his conception of authorship as a vocation. The artist, like the bachelor, is fundamentally monadic and stands apart from social spheres of influence: “rather than seeing James’s celibacy as only an element of a homosexual identity, I understand it as a crucial component of his novelistic production.” In a separate chapter examining the work of Marianne Moore, a twentieth-century American poet, Kahan sees echoes of her lifelong celibacy in her poetics and conception of time. Moore’s “celibate poetics” involve a lack of development within the poem, a lack of climax, a backwardness that reverses the passage of time, as well as pleasure in difficulty, lack of explicitness, and a style at once shy and flamboyant. Moore’s remark that “the cure for loneliness is solitude” makes solitary existence a fully contented mode of sociability and a crucial part of her poetics.

Black celibacy and queer citizenship

In his effort to present celibacy as progressive and pleasurable, Benjamin Kahan underscores that the celibate condition in the twentieth century was not restricted to middle-class white women. Black celibacy was advocated by a now forgotten figure of the Harlem Renaissance, Father Divine, “an intellectual and religious leader who believed he was God.” His cult, the Peace Mission Movement, organized his followers into interracial celibate living arrangements called kingdoms. These celibate communes were a direct response to economic conditions: rents in Harlem were prohibitively high, making it necessary for families to share apartments or take in lodgers. Cooperative housing also echoed the calls from Claude McKay, a socialist and a poet, to seize the means of production and organize the black community on a self-sustaining basis. Lastly, black celibacy and chastity vows countered racist depictions of the black body as oversexualized and promiscuous. By making a celibate identity available to black subjects, Father Divine allowed black men and women to participate in the public sphere and created economic and spiritual opportunities for racial equality. Celibacy was also used as a strategy for queer subjects to circumvent the prohibition preventing homosexual immigrants from becoming American citizens. Before the passage of the McCarran-Walter Act in 1952, the queer citizen could, according to the letter of the law, belong to America so long as he remained celibate or was not “caught in an act of moral turpitude.” The British poet W. H. Auden became an American citizen in 1946 by practicing “cheating celibacy,” a position both inside and outside the rules that he thematized in his 1944 poetic essay The Sea and the Mirror: A Commentary on Shakespeare’s The Tempest. This long poem is a series of dramatic monologues spoken by the characters in Shakespeare’s play, in which Caliban renounces his former self in favor of a queer form of belonging. But as Kahan notes, “black queer writers like Claude McKay, James Baldwin, and Langston Hughes had significantly less ability to move in and out of America’s borders than white authors like Auden.”

Kahan’s choice to associate Andy Warhol with celibacy is disconcerting. The pop artist was openly gay and had a reputation for promiscuity and swishiness. His art collective, the Factory, was populated by “drag queens, hustlers, speed freaks, fag hags, and others.” But “‘gayness’ is not a category that we can control in advance.” If Warhol’s declarations can be taken at face value, he claimed that he didn’t have any sex life: “Well, I never have sex” and “Yeah. I’m still a virgin,” he responded in an interview. Evidence also suggests that the Factory wasn’t the “Pussy Heaven” or “Queer Central” journalists once described: according to one witness, celibacy organized life at the Factory, and Warhol’s abstinence from sex shaped relations of power and subjection. As Kahan sees it, the tradition of celibate philosophers underwrites the Factory’s mode of government and theorizes a concept of group celibacy. Warhol’s marriage to his tape recorder exemplified his rejection of traditional marriage and emotional life: “I want to be a machine.” In the view of a contemporary, “everything is sexual to Andy without the sex act actually taking place.” His celibacy operates at a zero degree of desire. My Hustler, his 1965 movie made with film director Paul Morrissey and actor Ed Hood, presents a twisted celibate plot characterized as much by sexlessness as by sex. Valerie Solanas tried to kill Andy Warhol in 1968 because, she claimed, “he had too much control of [her] life.” In the SCUM Manifesto, published before her attempt on Warhol’s life, the radical feminist urged women to “overthrow the government, eliminate the money system, institute complete automation and destroy the male sex.” Kahan places both Warhol and Solanas in a tradition of philosophical bachelorhood that precludes sex in favor of alternative modes of governance.

Celibate readings

In the conclusion of Celibacies, Benjamin Kahan argues that celibacy should not be abandoned to the American political right, with its advocacy of abstinence before marriage and traditional gender roles. Celibacy from the 1880s to the 1960s was on the side of reform and modernism. Celibate women could access public space and the professions at a time when social norms prevented educated married women from entering the workforce. In the 1930s, celibacy was a possible option offering economic advantages to African-Americans in Harlem or allowing queer foreigners to access U.S. citizenship. Celibacy could also be a philosophical choice or a condition for artistic production. Having a room of one’s own was easier when one didn’t have to share the apartment with another person or raise a family. Forms of celibacies have also been animated by “sexual currents, desires, identifications, and pleasures.” Celibacy’s imbrication with homosexuality is not just a modern invention: depictions of the “Boston marriage” in the late nineteenth century had strong implications of lesbianism. But celibacy was not only a pre-homosexual discourse or the result of sexual repression: it was a form of sexuality in its own right, entailing a more radical withdrawal than is the case with the closet homosexual or the scholar practicing sexual abstinence. No sex means sex otherwise, or a different form of sexuality. Looking at literary works of fiction and poetry through the prism of celibacy yields valuable insights: Kahan reads a “celibate plot” in Henry James’s The Bostonians or Andy Warhol’s My Hustler, and highlights a “celibate poetics” in the poems of Marianne Moore or W. H. Auden. This book is published in a series devoted to queer studies because, as the author argues, “celibate and queer readings overlap without being coextensive.” Much as queer theory has the effect of “undoing gender,” the primary purpose of the Neutral according to Roland Barthes is to undo the classifying function of language and thus to neutralize the signifier’s distinctive function. “L’écriture célibataire” is the form the Neutral took in American modernism.

Martian Chronicles

A review of Dying Planet: Mars in Science and the Imagination, Robert Markley, Duke University Press, 2005.

The relations between science and fiction have nowhere been closer than on the planet Mars. The genre of science fiction literally began with imagining life on Mars; and some of its most popular entries nowadays are stories of how humans could settle on the red planet and make it more like the Earth. Planetary science originally took Mars as its object and tried to project onto Mars what scientists knew about the climate and geology of Earth. Now this interest in Martian affairs is coming back to Earth, as scientists apply knowledge derived from studying Mars to the study of the Earth’s planetary dynamics. Mars’s image as a dying planet has been invoked to support competing, even antithetical, views of the fate of our world and its inhabitants: a glorious future of interplanetary expansion and space conquest, or a bleak fate of environmental devastation and human extinction. Science has not completely settled the question of whether life has ever existed on Mars; but visions of extraterrestrial civilizations and space invaders have been superseded by narratives centered on mankind and its cosmic manifest destiny. This intimate relationship between science and fiction under the sign of Mars is now more than a century old, but shows no sign of abating. What is it about Mars that inflames people’s imagination from one generation to the next? Why has Mars attracted more interest than the Moon, our closest celestial neighbor, or than other planets in the solar system such as Venus or Saturn? Are there commonalities between the way our ancestors envisioned canals built by Martian civilizations and more recent visions of making Mars suitable for human sojourn? Will the detailed inventory of the Martian terrain brought back by satellite images and camera-equipped rovers put an end to our interest in the red planet, or will it rekindle a new space age with the colonization of Mars as its overarching goal? And how can our visions of planetary expansion avoid the pitfalls of colonial metaphors and Earth-based anthropocentrism?

Is there life on Mars?

Dying Planet explores the ways in which Mars has served as a screen on which we have projected our hopes for the future and our fears of ecological devastation on Earth. It presents a cross-disciplinary investigation of changing perceptions of Mars as both a scientific object and a cultural artifact. The persistence of the red planet in our cultural imagination explains its enduring presence on the scientific agenda; and the scientific controversies surrounding Mars have often fueled the imagination of artists and philosophers. Scientists still frequently resort to terrestrial analogies to describe Mars; and the study of Mars has encouraged scientists to think about the planetwide conditions necessary to sustain life, making Earth more of a Mars-like planet. For planetary scientists and science-fiction writers, Mars often acts as a bellwether, a harbinger of the ecological fate of the Earth. The image of Mars as a dying planet has an enduring quality: it suggests that the Earth may go the way of Mars and turn into a barren land through resource exhaustion and environmental stress. To the question “Why Mars?”, the author lists the reasons that have made the fourth planet from the sun such an enduring presence in the scientific imagination. Since the invention of the telescope in the seventeenth century, Mars has been observable with a fair degree of accuracy. Dark patches on the surface, the polar caps that wax and wane, waves of darkening that spread across the planet from the poles toward the equator during its spring and summer months: all these observed phenomena have nourished rampant speculation based on analogies to Earth’s seasonal and hydrological cycles. In 1878, Giovanni Schiaparelli (1835-1910) announced that he had observed canali (channels or canals) criss-crossing its surface. At the end of the nineteenth century, American astronomer Percival Lowell (1855-1916) forcefully defended the idea that these canals had been built for irrigation by an intelligent civilization. For more than half a century, the canal controversy fueled speculation about an alien race that might enter into contact with mankind. More generally, the discovery of life on Mars or elsewhere in the universe would profoundly alter humankind’s perception of its place in the cosmos: the question “Is there life on Mars?” is as important as Copernicus’s questioning of Earth’s place at the center of the universe.

Our fascination with Mars stems from what Robert Markley calls the interplanetary sublime. For Immanuel Kant, the sublime is the seemingly infinite object that reveals the sublimity of reason: the "starry heavens above me and the moral law within me" fill us with a profound sense of wonder and awe. The spectacle of Mars in science and in literature is indeed sublime and awe-inspiring. Mars has Olympus Mons, the largest volcano in the solar system. Its main valley, Valles Marineris, stretches for three thousand miles, dwarfing terrestrial analogues and making the Grand Canyon seem "a mere crack on the sidewalk." Its surface preserves landforms three to four billion years old that provide a window into a geological past that has long since disappeared from Earth. Orbital photographs show evidence of geologically recent lava flows, patterns of water erosion, and meteoric impacts; this evidence of a once warmer and wetter Mars raises the question of planetary evolution and climate change. The study of Mars involves a multiplicity of sciences including geology, chemistry, hydrology, meteorology, and microbiology, as well as the still virtual disciplines of exobiology and terraforming. The exploration of Mars is a "fundamental science driver": it pushes the frontiers of science further and provokes the imagination of scientists and writers alike. What we see in Mars also reflects "the moral law within me": gazing at a distant planet makes our insignificance in the universe palpable. Whether humankind is alone in the universe or one of many intelligent species has profound philosophical and even theological implications. The loss of Mars's atmosphere and the disappearance of water from its surface also bring lessons close to home: if the similar geological features of Mars and Earth have the same causes, then the history of Mars provides a window into Earth's possible future. Doing comparative planetology, and understanding the dynamics of planetary climate change, therefore becomes the new rationale for going to Mars.

The planetary imagination

To twenty-first-century observers, seeing canals on Mars is a bit like discerning a rabbit on the Moon: a figure of the imagination, a matter of folklore and cultural mythology. It is hard to realize that less than a century ago the issue of Martian canals was a matter of science, not fiction, filling the pages of scientific journals and the popular press. The idea of a plurality of inhabitable worlds has long been debated in speculative philosophy, starting with the Greek philosopher Anaximander (610-546 B.C.). Based on observation and calculation, Nicholas Copernicus (1473-1543) placed the sun at the center of the solar system, relegating the Earth to the status of merely another orbiting planet. The Copernican theory provided the impetus for Johannes Kepler (1571-1630) to describe precisely the orbits of the planets, although the German astronomer was "almost driven to madness" by the complexity of Mars's orbit. With the development of the telescope in the seventeenth century, Mars began to be perceived as the most likely candidate in the solar system for harboring an extraterrestrial civilization. Giovanni Cassini (1625-1712) and Christiaan Huygens (1629-1695) published detailed depictions of the Martian surface that drew on terrestrial analogies: polar caps, "seas," and "oases" became familiar features of the Martian terrain. By the eighteenth century, the plurality-of-worlds hypothesis had been put on a sound scientific footing and was debated by scientists and philosophers alike. Mars's surface was described with increasing precision, and almost all astronomers who had modern instruments at their disposal made observations of the planet. The mapping of Mars focused primarily on global cycles of temperature, hydrology, and presumed biological activity. But it was Giovanni Schiaparelli's observation of a network of lines on the surface of Mars in 1877 that sparked the most intense controversy. Schiaparelli himself was agnostic about what his canali signified: were they "channels" connecting what were described as oceans, continents, and islands, or "canals" built by an alien civilization?

Robert Markley devotes almost three chapters of Dying Planet to the canal controversy. The canal thesis, forcefully defended by Percival Lowell, had all the ingredients of a great scientific controversy. It could be boiled down to a simple claim (canals meant intelligent Martians) and integrated into a grand narrative of planetary evolution (canals were built to counter the desertification of a dying world.) Lowell's theory operated within the bounds of accepted scientific practice (it used all the scientific observations available at the time) and mobilized the rhetoric of scientific objectivity to challenge the values, assumptions, and methods of his opponents (whose refusal to envisage life outside of Earth was denounced as religiously motivated.) Part of the fascination with Mars stemmed from the implicit and explicit lessons which scientists and their readers drew from Lowell's vision of an advanced civilization struggling to stave off ecological disaster. Lowell's grand narrative of a dying planet found echoes in the emerging literature of science-fiction writers who mixed the genres of utopian novel, adventure narrative, and philosophical speculation. Although H. G. Wells's The War of the Worlds is by far the best known of the turn-of-the-century science-fiction novels, it was by no means an isolated production. Wells's novel offers a classic dystopian inversion of European imperialism: his blood-drinking Martians pose a horrific challenge to bourgeois complacency, even as they give shape to late Victorian culture's masochistic fascination with its own demise. Kurd Lasswitz (1848-1910) describes a more peaceful encounter between humans and an advanced Martian civilization in his 1897 novel Auf zwei Planeten, published in English in 1971 with a foreword by Wernher von Braun. The book has the Martian race running out of water, eating synthetic foods, traveling by rolling roads, and utilizing space stations. Alexander Bogdanov (1873-1928), a Russian physician, philosopher, and Bolshevik revolutionary, depicts in Red Star (1908) a collectivist utopia in the full throes of resource exhaustion and planetary decline. The vanguard socialism of the Martians is carved into the landscape of their planet, with the canals as both cause and effect of Martian collectivism.

How to prove a negative?

Until the 1930s, the canal thesis had enough currency within the scientific community to reinforce a widespread agnosticism about the possibility of intelligent life on Mars. Even as the canal builders retreated into science fiction, the idea of "primitive" life on Mars persisted. Lowell's paradigm of a dying planet influenced scientific speculation about the composition of the Martian atmosphere, the character of its surface, and the nature of its putative life-forms. After World War II, advances in radiometry and the study of the infrared spectrum gave astronomers new tools with which to study Mars. As the intelligent-life hypothesis became more and more improbable, scientists still deduced from the alleged existence of ice, water, and an atmosphere the possibility of vegetative life in the form of lichens and algae. It is hard to prove a negative: the inability to detect signs of life does not mean that life does not—or did not—exist on Mars. Even after the Mariner missions of the mid-1960s brought back photographs showing Mars's barren surface as inhospitable to life, scientists speculated that oxygen might still be captured in the polar caps, and that bacterial forms of life might have existed in the past and might still be present. Evidence suggested that Mars three billion years ago was comparatively warm and wet. Did life exist in the very distant past on this more hospitable Mars? How had the planet died? Could micro-organisms survive in extreme conditions, as they do in volcanic or deep-sea environments on Earth? A whole discipline, exobiology, grounded on the premise that life may exist beyond Earth, concentrated on the search for signs of life and the study of habitable environments. The ambiguous results of the life-detection experiments conducted during the Viking missions, which landed on Mars in 1976, led scientists to lobby for more sophisticated microbiological testing on future NASA landers. The search for life remains a crucial selling point for plans to explore Mars by sending automated rovers and, ultimately, boots on the ground.

In 1948, inspired by the novel of his compatriot Kurd Lasswitz, the rocket physicist and space scientist Wernher von Braun wrote the technical specification for a human expedition to Mars, The Mars Project. In the 1970s and early 1980s, the American astronomer and science communicator Carl Sagan was the most vocal advocate of space exploration and the search for extraterrestrial intelligent life. He too was inspired by the science-fiction novels he had read as a teenager: a map representing Edgar Rice Burroughs's vision of Mars hung on the hallway wall outside his office for more than twenty years. Just as the canals occupied the attention of a generation of scientists, Burroughs's novels about John Carter and his adventures on the planet he calls Barsoom dominated the interplanetary fiction of the first half of the century. Literature inspired by Mars includes the good, the bad, and the ugly: for one Ray Bradbury and his Martian Chronicles (1950) or one Isaac Asimov and his The Martian Way (1952), how many pulp fictions or comic-book adventures featuring green aliens laying eggs and four-armed tetrapods shooting laser beams? As Robert Markley states in his introduction, "anyone who has read a lot of science fiction realizes that much of it is pretty bad." But the appeal of the genre lies elsewhere: "science fiction does not represent historical experience, but generates simulations of what that experience may become." Ray Bradbury once said that "Burroughs has probably changed more destinies than any other writer in American history." The same could be said of Bradbury himself. Generations of readers (mostly male) had their formative years shaped by the likes of Ray Bradbury, Isaac Asimov, and Arthur C. Clarke. Considering that space exploration lacks the support of vested interests outside the aerospace industry, science-fiction novels created a constituency for sending missions to the red planet and beyond.

The Mars Society

Inspired by the Lowellian paradigm of a dying planet bearing the mark of ancient civilizations, classic science fiction was obsessed with the idea of intelligent life on Mars. More recent science fiction plays with the idea of bringing life and civilization (back) to Mars: by sending manned missions, establishing a permanent presence, and terraforming the planet. As an emblem of humankind's interplanetary future, Mars is described both as a dead world that resists human efforts to explore, colonize, and transform it and as the site of humankind's next giant leap in its centuries-long evolution. These fictions are haunted by the dark underside of colonization and extractive capitalism, and often demystify the masculinist narrative of the conquest of space with visions of failed social orders and technoscientific hubris. In Kim Stanley Robinson's trilogy, Red Mars (1992), Green Mars (1993), and Blue Mars (1996), the settlement and terraforming of Mars is chronicled through the personal and detailed viewpoints of a wide variety of characters spanning almost two centuries. Ultimately more utopian than dystopian, the story focuses on the egalitarian, sociological, and scientific advances made on Mars, while Earth suffers from overpopulation and ecological disaster. Such plans to colonize Mars are no longer science fiction: established in 1998 by aerospace engineer Robert Zubrin and backed by multibillionaire Elon Musk, the Mars Society, a nongovernmental organization, has set itself the goal of sending humans to Mars and establishing a permanent colony in the very near future. With NASA remaining the most expensive game in town, the "new space" industry, operating on a "faster, better, cheaper" basis, promotes alternative, low-cost ways of getting humans to Mars and sustaining them while they stay on the planet. Robert Markley, who published Dying Planet in 2005, has reservations about the whole endeavor. In his opinion, the Mars Society's vision of a new American frontier, or a new manifest destiny, "is founded on dubious or simplified readings of American history that repress both the human and ecological consequences of conquest and colonization." As he concludes, "the ultimate challenge posed by planetary transformation is ultimately as much ethical as it is scientific."

The Land of Kush

A review of Chosen Peoples: Christianity and Political Imagination in South Sudan, Christopher Tounsel, Duke University Press, 2021.

On July 9, 2011, South Sudan celebrated its independence as the world's newest nation. One name considered for christening the country was the Kush Republic, after the Kingdom of Kush that ruled over part of Egypt until the 7th century BC. According to historians of antiquity, Kush was an African superpower whose influence extended to what is now called the Middle East. Placing the new nation under the sign of this prestigious ancestor was seen as particularly auspicious. But for many people the name Kush has been connected with the biblical character Cush, son of Ham and grandson of Noah in the Hebrew Bible, whose descendants include his son Nimrod and various biblical figures, including a wife of Moses referred to as "a Cushite woman." A prophecy about Cush in Isaiah 18 speaks of "a people tall and smooth-skinned, a people feared far and wide, an aggressive nation of strange speech, whose land is divided by rivers" that will come to present gifts to God on Mount Zion after carrying them in papyrus boats over the water. For many South Sudanese at independence, Isaiah's ancient prophecy applied directly to them, to the point that the newly installed President Salva Kiir chose Israel as one of his first destinations abroad. Churchgoers also read echoes of their fight for sovereignty and independence in various passages of the Bible. Christian southerners envisioned themselves as a chosen people destined for liberation, while Arab and Muslim rulers in Khartoum were likened to oppressors in the biblical tradition of Babylon, Egypt, and the Philistines. John Garang, leader of the Sudan People's Liberation Army/Movement (SPLA/M), was identified as a new Moses leading his people to the promised land. The fact that he left the reins of power to his second-in-command Salva Kiir before independence, just as Moses had done with Joshua before the entry into the land of Canaan, was interpreted as a further fulfillment of the prophecy. Certainly God had a divine plan for the South Sudanese. For some Christian fundamentalists, the fulfillment of Isaiah's prophecy was a sign of the imminent Second Coming of Jesus Christ, whom Isaiah had identified as the Messiah, the king in the line of David who would establish an eternal reign upon the earth.

Isaiah’s prophecy

This moment of bliss and religious fervor did not last long. Conflict soon erupted between forces loyal to President Salva Kiir (of Dinka ethnicity, South Sudan's largest ethnic group) and former Vice President Riek Machar (of Nuer ethnicity, the South's second-largest ethnic group.) The South Sudanese Civil War that ensued killed more than 400,000 people and led about 2.5 million to flee to neighboring countries, especially Uganda, Sudan, and Kenya. Various ceasefire agreements were negotiated under the auspices of the African Union, the United Nations, and IGAD, a regional organization of eight East African nations. The last truce, signed in February 2020, led to a power-sharing agreement and a national unity government that was supposed to hold, in 2023, the first democratic elections since independence. Again, some preachers and religious commentators interpreted these internal divisions and ethnic strife through biblical metaphors. As in earlier periods, the war produced a dynamic crucible of religious thought. Supporters of civil peace called on the South Sudanese not to divide themselves like the tribes of Israel, or recalled Paul's injunction in the Epistle to the Galatians to become one in Jesus by setting aside divisive identities. "Let us take the Bible instead of the gun," exhorted a senior official at the Ministry of Religious Affairs. "Shedding blood is the work of the devil, and anybody who is killing people is doing the work of the devil," declared another cleric. The civil war was interpreted as an opposition between right and wrong; only this time the forces of evil were internal to South Sudan, not projected onto the northern oppressor. The most vindictive denounced their enemies by comparing them to the Pharisees or even to Herod. God was invoked in one breath to argue for cultural unity ("all are one in Christ") and in another for cultural diversity (tribes are "gifts of God"). These conflicting arguments show that, in all situations, the biblical referent remains central to the South Sudanese national imagination. Meanwhile, the "land of milk and honey" remains one of the poorest countries on earth, with all the characteristics of a failed state.

For some people, interpreting historical events along religious lines is not only irrational and delusional, but also dangerous and divisive. Looking at history from God's perspective can lead to a fatalistic view of life and human action. Having "God on our side" has served as justification for some of the worst atrocities in human history, and the Westphalian system of nation-states enshrined in the United Nations Charter was originally created to bring an end to the religious wars that plagued Europe in the sixteenth and seventeenth centuries. According to this modern view, Christian interpretation of biblical prophecies should remain in the pulpit, and clerics should refrain from interfering in the political issues of the day: "The more politically involved the church has become, the less spiritually involved the church is." In the case of Sudan, religion was mobilized both in the North and in the South to bolster national identities and sharpen racial differences. Leaders in Khartoum have attempted to fashion the country as an Islamic state, making Islam the state religion and sharia the source of the law since 1983. Meanwhile, southern Sudanese have used the Bible to provide a lexicon for resistance, a vehicle for defining friends and enemies, and a script for political and often seditious action in their quest for self-determination and sovereignty. But Christopher Tounsel does not see religion as the source of the civil war that led to the independence of South Sudan. After all, rebels in the Sudan People's Liberation Army (SPLA) were first inspired by Marxism and backed by the socialist regime of Mengistu in Ethiopia. John Garang believed in national unity and a secular state that would guarantee the rights of all ethnic groups and religions in a "New Sudan" conceived as a democratic and pluralistic state. Theology was only one of the discourses that informed the ideological construction of the South Sudanese nation-state. Race and, after 2005, ethnicity were also important components of southern identities, working to include individuals in collective bodies and to distinguish them from others. In this perspective, the author cautions "against a limited view of South Sudanese religious nationalism as one based exclusively in anti-Islamization."

A crucible of race

In Chosen Peoples, Christopher Tounsel presents "theology as a crucible of race, a space where racial differences and behaviors were defined." Rather than approaching race and religion—the two elements most often used to distinguish North and South Sudan—as separate entities, he analyzes religion as a space where race was expressed, defined, and animated with power. Tounsel is particularly interested in how Christianity shaped the identity of the region's black inhabitants (as opposed to Sudan's Arab-Muslim population) and brings forth the notion of God's chosen people (or peoples) using the Bible as a "political technology" in their fight against the oppressor. The first Catholic missionaries, Jesuits, settled in South Sudan in the middle of the 19th century, following Pope Gregory XVI's creation of the Vicariate Apostolic of Central Africa in 1846. The Protestants arrived in 1866 through the British and Foreign Bible Society. This initial period of mission work was interrupted for nearly thirty years, however, by the Mahdist Wars that bloodied Sudan in the last decades of the century. When the British regained control of the region under the Condominium Agreement signed with Egypt in 1899, they facilitated the reestablishment of missions there in order to transform South Sudan into a buffer zone that could stem the expansion of Arabic and Islam up the Nile. The missionary work carried out there in the first half of the 20th century, mainly by Roman Catholics, the Church Missionary Society (CMS), and the United Presbyterian Mission (also known as the American Mission), included, in addition to its classic dimensions (translating the Bible, identifying socio-linguistic groups, schooling a new local elite), a strong martial dimension, playing both on the symbolism of the crusade and on the struggle against Muslim slavery. Through a case study of the Nugent School, created by the CMS in Juba in 1920, Tounsel shows that ethnic identities were also reinforced through the teaching of local vernacular languages and the definition of self-contained tribal units based upon indigenous customs, traditional usage, and competitive antinomies (a Nuer-English dictionary included the descriptive phrase "my cattle were stolen by Dinka.") Ethnic conflict between indigenous identities, seen as natural and inevitable, could only be overcome by a common Christianity, while Islam and Arab culture were portrayed as alien and hostile.

After Egypt's 1946 effort to assert its sovereignty over Sudan, Britain reversed course and conceded Sudan's right to self-determination and, ultimately, independence, which was proclaimed on January 1, 1956. The almost complete exclusion of southerners from the "Sudanization" policies of the 1950s fueled a growing sense of southern grievance and political identity. The 1954 creation of the first all-Sudanese cabinet under al-Azhari's National Union Party, with the southern Liberal Party in opposition, accelerated southern political thinking toward self-determination and federalism. It was in this context that a mutiny of the Equatorial Corps occurred in 1955 at Torit in the southern Equatoria province. The Equatorial Corps, composed entirely of Christian soldiers (around 900), had been created by Sir Reginald Wingate under the Anglo-Egyptian condominium on Sudan at the end of the 1910s: a bold decision in a context where military service had until then been reserved for Muslims. It was intentionally organized along ethnic lines: most of the corps was recruited from the Lotuho and other small eastern ethnic groups on the Sudanese slave frontier that were perceived to have "natural" military qualities. The mutiny, provoked by a plan to transfer some units to the North and replace them with northern soldiers, was sparked by an incident in which an Arab soldier allegedly insulted a black soldier by calling him a slave (abid). This term, then commonly used by Muslim Sudanese to denigrate black populations, testified to the very slow disappearance of slavery in the region. Sudanese slavery had even experienced a surge in the 1860s and 1870s with the progress of navigation on the Nile, and it was still largely tolerated under British supervision at the beginning of the 20th century, after the end of the Mahdist wars. The mutiny, largely confined to Equatoria, where most of the mutineers were based and originated, was quickly put down, but it led to the First Sudanese Civil War, which drew on the same crucible: Christian identity, racial confrontation, ethnic divisions, and the refusal of slavery and Muslim domination.

The First and Second Sudanese Civil Wars

The First Sudanese Civil War (1955-1972) considerably strengthened the biblical frame of reference within the South Sudanese movement for national emancipation. The war was widely regarded as a religious confrontation between a Muslim government in Khartoum and its armies, and Christian liberation fighters in the South. Religious thought provided an important spiritual lexicon for the racial dynamics of the war, becoming a space for southerners to articulate the extent of racial division and hostility. The decision of the Sudanese government to Arabize school curricula and gradually ban foreign missions (definitively expelled in 1964) not only amplified Christian proselytizing by local pastors but also provided new recruits for the South Sudanese resistance. At the beginning of the 1960s, the southern opposition organized itself militarily and acquired propaganda organs such as the Voice of Southern Sudan, published from London with the support of missionary societies. In 1967 the Youth Organ Monthly Bulletin of the Sudan African National Union (SANU) published a rewriting of Jeremiah's Book of Lamentations in which Israel was replaced by South Sudan and Babylon by Khartoum. This type of parallel was drawn more and more frequently, giving the conflict the appearance of a war of religion. While Arabs were demonized as inhuman agents of Satan, southerners framed themselves as God's beloved people, analogous to the Israelites. The war witnessed the creation of a theology that maintained that providence was leading southerners to victory. When the first civil war ended in 1972, the biblical frame of reference was firmly rooted, and racial and religious identities were closely interwoven. For Sudanese refugees, returning home was presented as the end of the exile in Babylon. Southern intellectuals, rather than approaching race and religion as mutually exclusive, used theology as a crucible in which racial identity was defined.

The peace agreement signed in Addis Ababa in 1972 provided for autonomy for South Sudan and religious freedom for non-Muslim populations. Despite their desire for independence, SANU leaders agreed to compromise, but multiple violations of the agreement, as well as the decision of the Sudanese government to impose Islamic law, contributed to reigniting the conflict in 1983 with the creation of the Sudan People's Liberation Movement and Army (SPLM/A). The fall of Ethiopia's Mengistu regime in 1991 was a second formative event, depriving the southern opposition of operational support and ideological justification. Though the SPLM never officially affiliated with any religion and maintained a policy of religious toleration, it increasingly turned to Christianity to mobilize and garner support at home and abroad. The SPLA was transformed into a largely Christian force that explicitly used Christian themes and language as propaganda. Apart from the Bible, few other sources were available with which to interpret their position. Episodes from biblical Israel's history, like David's clash with Goliath or Moses leading his people to the Promised Land, became popular narratives fitted to the modern situation. It is in this context that Isaiah's prophecy concerning Cush was referenced as foretelling ultimate victory. John Garang, a secularist at the beginning of the war, saw the utility of including Cush in domestic politics. He also tried to mobilize support abroad, appealing to Pan-Africanism, Evangelical solidarity, and humanitarian repulsion against modern slavery. American human rights activists pressured the US government to get involved, framing the conflict as a war between Arabs and Africans, Christianity and Islam, masters and slaves. Their advocacy and humanitarian engagement influenced the manner in which the conflict was represented in mainstream Western media. Beginning in the 1990s, Sudan entered the American evangelical mind as a site of Christian persecution and possible redemption. President Bush appointed Senator John Danforth—an ordained Episcopal minister—as his special envoy on the Sudan. Without Washington's support, the Comprehensive Peace Agreement signed in 2005 and the ensuing independence of South Sudan in 2011 would never have taken place.

A failed state

Christopher Tounsel takes a neutral perspective on the role of religion in framing South Sudan's struggle for independence. He does not treat religion as a "veil" for material interests or as an "opium" intoxicating people into a war frenzy. He treats with consideration and respect the religious narrative that interprets South Sudanese nationalism as a spiritual chronicle inspired by the Bible and corresponding to God's plan. Of course, he does not himself offer a religious interpretation of historical events. The views he presents are those of local religious actors: mission students, clergy, politicians, former refugees, and others from a wide range of Christian denominations and ethnicities. He keeps strictly to the role of the professional historian, crafting a rigorous history of religious nationalism—analyzing printed sources and archives, many of them exploited for the first time; collecting oral testimonies from clerical and non-clerical figures in Juba; and offering his own interpretation after discussing other viewpoints in the academic literature. Only in the acknowledgments does he make reference to his own religious affiliation, by giving thanks to "my Lord and Savior Jesus Christ." But if we consider the devastating toll that successive civil wars have taken on the local population, one may see the role religion has played in a more negative light. Were it not for a biblical narrative of suffering and redemption, a South Sudanese state would never have seen the light of day. There are serious concerns about the viability of this landlocked, ethnically polarized country, which political scientists subsume under the category of failed state. Religious faith may have been useful in forging a common identity against an oppressor perceived as Arab and Muslim, but it could not prevent the newly independent state from plunging into prolonged ethnic warfare. And American Evangelicals who viewed South Sudan as the fulfillment of Isaiah's prophecy and a sign of Christ's second coming were not simply delusional: they added fuel to the fire in an explosive crucible of race, religion, and ethnicity.

War Photos and Peace Signs from Vietnam

A review of Warring Visions: Photography and Vietnam, Thy Phu, Duke University Press, 2022.

In April 2015, the Institut Français in Hanoi held a photography exhibition, Reporters de Guerre (War Reporters), marking the fortieth anniversary of the end of the Vietnam War. Curated by Patrick Chauvel, an award-winning French photographer who had covered the war, the exhibition showcased the work of four North Vietnamese photographers (Đoàn Công Tính, Chu Chi Thành, Tràn Mai Nam, and Hùa Kiêm) whose documentation of the Vietnam War was often overshadowed by the photographers of the Western press working from the South. The poster for the event at L'Espace used an iconic image: a black-and-white picture of North Vietnamese soldiers climbing a rope against the spectacular backdrop of a waterfall, taken in 1970 along the Ho Chi Minh trail. Đoàn Công Tính, the photographer, had caught a moment of timeless beauty and strength, an image of mankind overcoming physical hindrances and material obstacles in the pursuit of a higher goal. However, a scandal erupted when Danish photographer Jørn Stjerneklar pointed out on his blog that this iconic image had been doctored. He compared two versions, the recent print that appeared in the exhibition and the "original," which had been published in Tính's 2001 book Khoảnh Khắc (Moments). Tính apologized profusely for "mistakenly" sending the photoshopped image, claiming that the original negative had been damaged and that he had accidentally included a copy of the image with a photoshopped background on a CD sent to the exhibition's organizers. But in a follow-up article on his blog, Stjerneklar pointed out that even the "original" had been retouched, as evidenced by the repeating pattern of the waterfall, and was likely a montage incorporating another photograph, one displayed at the War Remnants Museum in Ho Chi Minh City. Stjerneklar's story was picked up worldwide and ignited a lively debate about the presumed objectivity of photojournalism and the role of photography in propaganda.

Photography and propaganda

That photography was, and still is, part of propaganda in Vietnam was never a secret. Along with my colleagues, I experienced it firsthand during my term as consular counsellor at the French Embassy in Vietnam. When the Institut Français organized photo exhibitions at its flagship cultural center L'Espace in Hanoi, every picture had to be vetted by the government's control organs. The answer often came at the last minute, and many photographs were rejected on the basis of obscure criteria. Still, young Vietnamese photographers were enthusiastic about the events organized by the French cultural center. With the help of French photographer Nicolas Cornet and other professionals, young photography apprentices honed their skills in creative workshops and attended seminars on portfolio building. Some talented photographers held their first solo exhibitions at L'Espace before embarking on international careers. In April 2023 (after I had left Vietnam), the Institut Français in Hanoi and its director, Thierry Vergon, initiated the first International Photography Biennale in Hanoi, a major cultural event placed under the aegis of Hanoi's People's Committee in partnership with a network of Vietnamese and international partners. More than twenty exhibitions organized across several venues allowed the general public and professionals to discover the wealth of contemporary photography and the treasures of heritage photography in Vietnam. A series of outreach activities was scheduled throughout the Biennale, including workshops to connect stakeholders, roundtables and debates, training sessions, film screenings, and portfolio reviews. The initiative was used by the city of Hanoi, a member of UNESCO's Creative Cities Network, to bolster its image as a regional hub for culture and innovation. Even under the strictures of a socialist government, a new Vietnamese narrative on photography is slowly emerging. It is based on creativity, not control, and its aim is to put Vietnam's capital on the map for cultural professionals and creative workers. Alternative visions of Vietnam are seeping through the web of censorship and flourishing in the rare spaces of unrestricted freedom offered by social networks and independent cultural venues.

Thy Phu's book Warring Visions shows that creativity was also present in the photographs taken during the Vietnam War (known in Vietnamese as the Resistance War against America.) Vietnamese photographers working for the Hanoi-based Vietnam News Agency (VNA) were no less talented than their Western counterparts operating from the South. War pictures published by the Western press (or by the Japanese press) were as much involved in political propaganda as the "socialist ways of seeing Vietnam" that filled the pages of Vietnam Pictorial, an illustrated magazine run by the communist state. The war was also fought on the front of images, both in Vietnam and within America. The South Vietnamese government waged its own propaganda in pictures, with less international success. For Americans, the Vietnam War still haunts the national psyche with the ignominy of defeat. The war was a watershed in visual history, and the many pictures taken by Western reporters and photographers laid the foundation for battlefield reporting and contemporary photography studies. But as Thy Phu notes, "in addition to overlooking unspectacular forms of representation, the Western press, then as now, neglects Vietnamese perspectives, emphasizing instead the American experience of this war." The role of Vietnamese photographers, including the many stringers and fixers working for full-time foreign correspondents, is systematically downplayed, although some of them took the most iconic photos that were to shape the imaginaries of the war (such as Napalm Girl, the picture of a naked girl running away from an aerial napalm attack.) But placing the spotlight on photographs taken by Vietnamese war photographers is only half of the story. According to Thy Phu, we need to enlarge the category of war photography, a genre that usually consists of images illustrating the immediacy of combat and the spectacle of violence, pain, and wounded bodies. Pictures depicting wedding ceremonies, family reunions, and quotidian rituals are also part of the Vietnamese experience of the war. Drawing on family photo books from the Vietnamese diaspora, discarded collections found in vintage stores in Ho Chi Minh City, and her own family records, Thy Phu reconstitutes a lost archive of what the war in Vietnam might have been like for ordinary citizens.

Socialist ways of seeing Vietnam

The canon of war photography, along with its most basic principles, was established during the Vietnam War. Pulitzer-winning images exposed the brutality and injustice of war, its toll on the bodies and minds of soldiers, its devastating consequences for civilians and their living environment. According to the profession, war images should by no means be staged or manipulated. They should expose reality as it is, captured on the spur of the moment by a neutral observer. It will come as no surprise to learn that North Vietnamese photographers obeyed different rules and aesthetic principles. The images taken by these propaganda workers are full of optimism and youthful energy. Unlike the photos taken from the South showing the terrible effects of war, the images taken by photographers from the North show young soldiers smiling in front of the camera or caught in the middle of disciplined action, images of incredible romanticism in the middle of war. The goal was, of course, to highlight their heroism in order to galvanize the soldiers and citizens who saw the images. Ideology informed the subject matter of these photographs and guided practitioners in what to look at and how to represent it. Harsh material conditions also shaped the way photographs were taken and circulated. The photographers were foot soldiers in uniform who had been selected from among Hanoi's university elite and given a crash course in journalism and photo reporting before being sent to the frontline. Communist allies abroad provided cameras and lenses made in East Germany and the USSR. Equipment and film were in such short supply that they were not issued to individual photographers but were stored at the headquarters of organizations such as the Young Pioneers, the Army's photographic department, and the VNA. In such conditions of scarcity, photographers were forced to shoot sparingly, to compose and stage their images prior to shooting, and to improvise solutions to compensate for the lack of equipment. In the absence of flash bulbs, the flare of rockets fired against a dark sky provided the light necessary for nighttime pictures. Piecing together several shots created an improvised panoramic view without the need for a wide-angle lens. War photos were displayed in makeshift jungle exhibitions or village fairs, along with propaganda posters, to uplift the masses and disseminate a "socialist way of seeing" things. Photographs were also distributed to foreigners beyond the Communist bloc, especially to members of antiwar organizations, some of whom received copies of the internationally circulated Vietnam Pictorial.

A review of past issues of this magazine reveals three central subjects: the heroic struggle of soldiers, the toil of factory workers and farmers, and the sacrifices of revolutionary Vietnamese women. Beautiful portraits of women harvesting lotus flowers, of young girls playing in poppy fields, or of children riding on the backs of water buffaloes also adorned the color covers of Vietnam Pictorial, their vibrant, artificially painted hues reminding readers of the bright socialist future for which the war was fought. For Thy Phu, the revolutionary Vietnamese woman was more than just an image: she was a symbol, embodying contested visions of women's role in anticolonial resistance and national reunification. The battle for this symbol was fought on two fronts. On the leadership side, the figure of Nguyễn Thị Bình, the Viet Cong's chief negotiator at the Paris Peace Conference in 1973, stood opposed to the fierce Madame Nhu, the de facto First Lady of South Vietnam from 1955 to 1963. Both used femininity for political ends, wearing different styles of áo dài, Vietnam's traditional dress, as a gendered display of nationalism. In contrast to Madame Bình's demure attire, which singled her out as the sole woman at the negotiating table, Madame Nhu favored a more risqué style of áo dài and did not hesitate to pose in masculinist postures, as in the famous closeup picture where she is seen firing a .38 pistol. Both camps also sought to glorify women's contribution to the nationalist struggle by enrolling them in mass movements. In the South, Madame Nhu founded the Women's Solidarity Movement of Vietnam (WSM) in order to give women military training and enroll them in paramilitary groups assisting the armed forces. Women in uniform included Hồ Thị Quế (the "Tiger Lady"), a member of the Black Tigers Ranger Battalion, pictured in full battledress looking fiercely at the camera. In the North, young women were recruited en masse into the Youth Shock Brigades, also known as TNXP, and sent to the frontline to assist male soldiers or build the Ho Chi Minh trail. The image of "girls with guns" or "long-haired soldiers" stood in stark contrast to the more traditional pictures emphasizing motherhood and family that were used to appeal to the solidarity of women's antiwar organizations in the United States. But pictures offer fertile ground for projection, misrecognition, and reinvention: the Vietnamese revolutionary woman was reclaimed as a radical-chic symbol for American feminist struggles in which she had no part. The Vietnamese Communist Party won the day in the fight over the images and symbols associated with womanhood. But as French historian François Guillemot reminds us, Vietnamese women, who represent half of society, suffered more than their share as a result of military conflict and civil war.

Lost archives

The Democratic Republic of Vietnam (DRV), now the Socialist Republic of Vietnam, ultimately claimed victory in the war of images and symbols. As a result, war images from the South were censored, erased, and eliminated from the record. They survive as embodied performances of reenactment and remembrance in the dispersed archives of the Vietnamese diaspora. To illustrate the war as seen from the perspective of South Vietnam, Thy Phu takes the example of Nguyễn Ngọc Hạnh, one of the most respected Vietnamese photographers of his time. He served in the French Army until 1950, then transferred to the Armée Nationale Vietnamienne, which in 1956 became the Army of the Republic of Vietnam (ARVN). He attended the French Army photography school in the mid-1950s, was designated the official ARVN combat photographer in 1961, and ultimately attained the rank of Lieutenant Colonel. After the fall of Saigon in 1975, he was sent to a "re-education camp" with his fellow officers; he survived the camp and was released in 1983 through the intervention of Amnesty International. He emigrated to the U.S. in 1989 and passed away in 2017. Published in 1969 in collaboration with civilian photographer Nguyễn Mạnh Đan, his book Vietnam in Flames ranks in the top echelon of great Vietnam photobooks, right alongside the work of Philip Jones Griffiths, David Douglas Duncan, and the best of the Japanese photographers. Hạnh made no secret that his photos were staged: he even explained in painstaking detail how he used drops of olive oil to place "tears" on one of his most notable photographs, Sorrow, the portrait of a lovely young woman weeping over the dog tags of her missing companion. As Thy Phu notes, manipulation has been a defining characteristic of war photography from the nineteenth century to the present. Indeed, some of the most famous war photographs, such as Robert Capa's The Falling Soldier, are said to have been staged or reenacted. Hạnh nevertheless insisted that his images are authentic documents that register the intensity of the emotions the war engendered. Photographs, like tears, are a social ritual. Whether they are authentic or inauthentic, induced or spontaneous, matters less than the fact that they are meant to be seen and recognized. Circulating among the Vietnamese diaspora while remaining censored in Hanoi and Ho Chi Minh City, the pictures from Vietnam in Flames contribute to a sense of community built on collective suffering, sacrifice, and remembrance.

The two waves of Vietnamese refugees, those who fled in 1975 after the fall of Saigon and the "boat people" who left the country from the late 1970s into the early 1990s, left behind all their personal belongings, including family pictures and photo albums. Those who stayed behind pruned their photo collections of all images reminiscent of the old regime: men in ARVN uniform, pictures betraying friendly connections with Americans, or scenes denoting bourgeois proclivities such as foreign travel and private vacations. Remarkably, however, thousands of those photos have resurfaced in the marketplace in the form of orphan images and albums separated from their original owners and stories. These are images that have been "unhomed": scattered, lost, or left behind. Together they provide a counter-narrative of the war, a testimony to southern Vietnamese experiences that have been erased from the record and banished from official history. How should we deal with these missing archives, lost memories, and orphaned pictures? What can be learned from family pictures in the absence of a story, when the memories that bring photographs to life are missing from official records and even personal collections? In Shakespeare's Hamlet, only a scholar is capable of speaking with ghosts. Similarly, only artists can speak to the ghostly presence of these anonymous faces. Thy Phu, who herself assembled a community archive of family photographs and the stories behind them, presents the artistic approach of Dinh Q. Lê, a diaspora Vietnamese artist based in Ho Chi Minh City whose work has been recognized by major exhibitions in Singapore, Tokyo, New York, and Paris. Since 1998, Lê has been working on a trilogy of installations that feature family photographs, objects that fascinate him because he lost all of his own in the course of his family's forced migration. Images are stitched together to form fragile-looking, rectangular installations like mosquito nets, or they are cut into enlarged strips that are woven into a new picture, superimposing the original faces on the strips onto a larger emerging image. In his 2022 exhibition at the Quai Branly Museum, one of the woven pictures represented Madame Nhu waving a pistol, an image still taboo in Vietnam but one the artist was able to reinterpret through his own eyes. In another installation, onlookers from the Vietnamese diaspora were invited to pick up images covering the gallery floor and to consult an online database that draws on crowdsourcing to identify lost images of their own families, merging the acts of collecting, remembering, and archiving.

War photography in the age of generative AI

What does Thy Phu's book tell us about photography, censorship, and creativity in contemporary Vietnam? How can we interpret war photography in the light of warring visions, ragged memories, and contested identities? The first lesson I learned from Warring Visions is that the distinction between propaganda pictures and war reporting is artificial: in the end, what matters is not political intent, but what we make of the images. War pictures will always be used for political purposes. But those that remain in public memory transcend the immediacy of a cause and express universal values, sometimes at odds with the intentions of their sponsors. The second lesson is that we need to expand our notion of war photography. Vernacular pictures representing the quotidian rites of family life also tell stories about wartime conditions, and these stories must be collected and made known. As a third lesson, we should think hard about the authenticity and manipulation of images in the age of generative AI and deep fakes. The indignation that followed Jørn Stjerneklar's blog article exposing the manipulation of Đoàn Công Tính's poster image in 2015 was in a way misplaced: war pictures can be staged, reframed, doctored, reenacted, and, yes, photoshopped. As historians of war photography tell us, this has always been the case, and we should anticipate more of the same in our technologically savvy future. In my experience, the Vietnamese nowadays have a more relaxed attitude toward Photoshop than people in Europe or North America. When I had ID pictures taken in Hanoi, the result came back heavily retouched, with bright eyes and rosy cheeks. To tell the truth, I like the retouched picture better than the original, and I still use it on my identity documents and CV profile. This tradition of retouching pictures goes back a long way, as evidenced by the family portraits and painted photographs of colonial Indochina. It is also linked to the highest levels of Vietnamese statesmanship: as is well known, prior to establishing the DRV in 1945, Hồ Chí Minh led a peripatetic life and worked a number of odd jobs. According to the records of the French police, around 1915-17 he worked as a photo retoucher in Paris by day and met leading Communist agitators by night. It is said that this humble experience with visual retouching led him to grasp photography's political potential. It also taught him to be wary of photography's role in state surveillance and identity control: only one portrait remains from this period, recognizable by the chipped upper part of his left ear, which allowed the French police to track the identity of a revolutionary leader who changed his name and civil status several times over the course of his career.

Parliamentary Abdications: 1933 and 1940

A review of Ruling Oneself Out: A Theory of Collective Abdications, Ivan Ermakoff, Duke University Press, 2008.

How can a majority of parliamentarians vote to renounce democracy? Why would a group accept its own debasement and, in doing so, abdicate its capacity for self-preservation? What induces its members not only to surrender power, but also to legitimize this surrender by a vote? What conjuncture allowed democratically elected officials to rule themselves out and let authoritarian leaders take full control? This sad reversal of fortune happened on two occasions in the twentieth century. On 23 March 1933, less than one month after the burning of the Reichstag, German parliamentarians gathered at the Kroll Opera House in Berlin passed a bill enabling Hitler to concentrate all powers in his own hands, by a majority of 444 to 94, meeting the two-thirds majority required for any constitutional change. On the afternoon of 10 July 1940, at the Grand Casino in Vichy, a great majority of French deputies and senators—569 parliamentarians, about 85 percent of those who took part in the vote—endorsed a bill that vested Marshal Pétain with full powers, including the authorization to draft a new constitution. In both cases, abdication was sanctioned by an explicit decision—a vote. Both votes gave authoritarian leaders full powers to sideline parliament, suspend the republican constitution, and rule by decree. Both 23 March 1933 and 10 July 1940 are dates that live on in infamy in Germany and in France. From the moment the events occurred, they were to haunt the elected officials who had taken part in the decision. To borrow Ivan Ermakoff's words, these were "decisions that people make in a mist of darkness, the darkness of their own motivations, the darkness of those who confront and challenge them, and the darkness of what the future has in store." Can we shed light on this darkness?

History and context

History is the discipline of context: it explains an event by putting it into a broader frame. But how much context does a historian need, and how far back must one go to put a historical event into its proper frame? In Ruling Oneself Out, Ermakoff chooses to explain the handing over of state powers to Hitler and to Pétain by staying as close to the event as possible. He takes as chronological points of departure the German presidential election of 1932 and the French declaration of war in September 1939, giving a short summary of Germany's descent into political chaos and France's ignominious defeat. He concentrates on the moment of decision within the two national parliaments, offering a blow-by-blow narrative of those two fateful days, trying to enter the minds of rank-and-file parliamentarians and to account for their every motion and expression. This differs from the historian's usual point of view, which goes farther back in time and tries to place an event within a causal chain of explanatory variables and historical determinants. For instance, in a historical essay written in the heat of the moment, Marc Bloch explained France's "Strange Defeat" (L'Étrange Défaite) by listing the personal failings and strategic mistakes made by political and military leaders during the interwar period. German historians have interpreted the rise of Nazism as a "belated post-scriptum" (ein spätes Postskriptum) to the cultural conflict (Kulturkampf) between German Catholics and the central state inaugurated by Bismarck's antichurch policies in 1871, or as the expression of a Sonderweg that some trace as far back as Luther's Reformation. Ermakoff's goal is not to provide a historical account of the two events but to build a theoretical model, a "theory of collective abdications" that may apply, beyond these two cases of parliamentary suicide, to a broad class of collective situations and outcomes.

In the empirical sciences, and especially through the formal lens of decision theory, context is to be reduced to a minimum: the scientist builds a theoretical model and tests it on a constructed dataset, keeping as close as possible to a controlled experiment. Ermakoff's study, by contrast, is context-rich and steeped in empirical detail. For him, abstracting the event from its historical context would be a crucial mistake. The sources he uses for his inquiry are narrative accounts produced by the actors themselves. These testimonies are contemporary or retrospective, spontaneous or solicited, public stances or private accounts. Letters and diaries written immediately after the fact are especially helpful in debunking ex post rationalizations: they make it possible to reconstruct the actors' subjective states as they made their decisions. But all testimonies include a measure of self-justification and self-deception. The actors rationalized their decision by portraying it as the only viable and acceptable course of action; the tactical reasons the delegates invoked for themselves are self-serving and often betray a willingness to deceive themselves. As Jean-Paul Sartre has shown, the consciousness with which we ordinarily consider our surroundings is different from our reflection upon that consciousness. The historian has to dig deeper to examine actors in the process of making their decisions, relying on a range of analytical tools—formal, quantitative, and hermeneutic—applied to a variety of historical sources. The theory Ermakoff offers is primarily a theory of the case. Its scope is limited to the confines of the two historical events. No single determination or macrocausal explanation can account for the result of the two parliamentary votes. The outcome was not predetermined; events could have turned out differently. The parliamentary votes of abdication were the end result of a process by which parliamentarians based their opinions on the behavior and declarations of their peers: the collective understanding of a situation depends on a series of interactions. By confining the analysis to the moment of decision, both theory and history gain in intelligibility and leverage.

Fear of retaliation, misjudgment, ideological contamination

Ermakoff starts by rejecting the three standard explanations for political acquiescence during these two fateful days: fear, blindness, and treason. Delegates who participated in the decision, and historians after them, argued that they made their choice because they were coerced, because they were deceived, or because they were complicit. The coercion thesis portrays abdication as a forced choice. The miscalculation thesis calls into question the assumption that target actors correctly assess the implications and significance of their decision. The collusion thesis drops the assumption that the challenger and the target actors have conflicting interests. Each explanation holds a degree of validity. Threats and intimidation were certainly present in the Germany of 1933: Nazi thugs harassed their opponents with seeming impunity, and on the day of the vote several thousand Nazi activists demonstrated outside the Kroll Opera House, where the parliamentary session took place; in the meeting room they filled the benches reserved for external observers. In France, the country was under the direct threat of an occupying army, and rumors of latent menaces also suffused the political climate; a vote of no confidence would have entailed Pétain's resignation and a leap into the abyss. But when so much is at stake, actors can always choose to disregard the threats deployed against them. As Ermakoff underscores, they can decide to defy the odds: at the final moment, the decision is theirs. Likewise, the explanation that abdication was based on misjudgment takes actors' most common retrospective justification at face value. But Hitler's intentions were clear for all to see, and in France Pétain's mouthpiece Pierre Laval had made it plain that a "yes" vote would mean the end of democracy and the republican regime. Besides, the coercion and misjudgment arguments miss the fact that the two votes were by no means unanimous. Some parliamentarians took a stand and opposed the delegation of full powers to a single man; in Germany, the Social Democratic delegation unanimously voted against the bill. Despite all appearances in retrospect, acquiescence in March 1933 and July 1940 was not a foregone conclusion.

The third scenario, ideological collusion, takes the deterministic argument to its extreme. Nazism's rise to power is sometimes interpreted as the result of a class alliance between the ruling elite and Hitler's party designed to thwart communism and secure the interests of the capitalist owners of the means of production. Similarly, the Vichy vote is seen as a conservative revenge against the Front Populaire, which had won the elections and implemented labor-friendly policies in 1936. More generally, the period saw the commitment to democratic institutions dwindle, becoming either tenuous or dubious. German Catholic leaders, it is argued, surrendered to Hitler because they were not entirely immune to an antiliberal frame of mind and an organicist conception of the nation. In France, class-based motivations for political revenge went along with a rejection of the political regime that had made military defeat possible. According to historian Robert Paxton, "there was no resistance simply because no one wanted to resist." There is a degree of truth in this argument. Parliamentarians who voted "yes" to Hitler and to Pétain failed in their role to uphold the constitution and maintain democratic institutions. They gave the transition to an authoritarian regime the appearance of legality because they abdicated their political capacity; from a legal perspective, they had no right to give a constitutional blank check to Hitler and to Pétain. On this reading, lack of personal courage was ultimately the reason for the delegates' acquiescence. But accusing a majority of parliamentarians of treason or dereliction of duty misses the point. The social scientist's role is to interpret and to explain, not to judge or to vindicate. A more fine-grained approach to the mechanics of decision is needed.

Decision under stress

Another line of argument points to the irrational forces at play during the decision process. Delegates were under maximum pressure; wild rumors were circulating; fear and anguish spread like a disease. On February 28, the day after the burning of the Reichstag, Hitler had issued an emergency decree "for the protection of the people and the state" that abolished basic civil rights guaranteed by the Constitution. Germany was veering toward total chaos and civil war. Likewise, France after the defeat was in complete disarray. In Vichy, where the authorities had gathered after a chaotic flight from the advancing German troops, everything had to be improvised. The breakdown of parliamentary groups made interactions more random, more hectic, more informal, less structured, and less predictable than in routine times. In such circumstances, social scientists often invoke the irrationality of crowds, herd behavior, panic movements, and contagion effects: the collective behavior of a disorganized group can lead individuals to act against their own judgment or to surrender their sense of responsibility. Ermakoff agrees that the explanatory key to the outcome is to be found in the collective dimension of the decision. The focus should be on a collective process of decision making; diffusion effects and the modeling of interactions should take center stage, for in situations of radical uncertainty people tend to turn toward their peers to shape their own expectations. Yet what gets diffused is less an emotional state than a strategic assessment of the situation. Contagion, the diffusion of an affective state, is a misleading representation of group behavior, for it implies that the process is purely emotional, affective, and mechanical. Even if the process is nonlinear and the result suboptimal, there is no need to invoke the irrationality of crowds or to abandon the hypothesis of rational behavior.

The crucial factor underlying collective abdications was not threats, blindness, or ideological propensities but the dynamics of expectation formation that took shape among delegates in a context of radical uncertainty, as traditional coordination mechanisms had broken down. Witnessing their world crumble, delegates turned their eyes to their peers; but they did not know where these peers stood. This situation is a classic setting in game theory, decision science, and empirical finance, and Ermakoff indeed mobilizes many concepts from these disciplines. His theory of collective abdication builds on the three notions of sequential alignment, local knowledge, and tacit coordination, and uses concepts such as reference groups, prominent actors, action thresholds, tipping points, and common knowledge. Traditional historians may balk at the mention of these abstract notions, and may even recoil in horror at the sight of an appendix full of equations and graphs. But Ruling Oneself Out is not a book written for theory's sake, and its narrative structure remains faithful to disciplinary standards in historiography. It refers to many books written by historians on the two events, particularly Robert Paxton's Vichy France. Historians will be on familiar ground, as game theory and decision science are only mobilized to interpret the historical narrative and analyze data. This sociology of the event stands in stark contrast to the cumbersome constructions of social scientists who use empirical evidence only to advance their theoretical claims. Sociology remains a historical science and cannot abstract itself from its temporal condition in order to offer an abstract modeling of social systems. Here, formal tools and abstract theories remain in the service of explaining the facts.
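For readers unfamiliar with these notions, a toy threshold model can convey the intuition behind sequential alignment and tipping points. The sketch below is not Ermakoff's own model, which is far richer (his delegates update on local reference groups rather than on the whole assembly); it is a minimal Granovetter-style cascade, and the cascade function and its parameters are illustrative inventions.

    # A minimal Granovetter-style threshold cascade (illustrative only, not
    # Ermakoff's model): each actor aligns once the share of peers who have
    # already aligned meets that actor's personal threshold.

    def cascade(thresholds):
        """Run sequential alignment until no further actor's threshold is met."""
        n = len(thresholds)
        aligned = [False] * n
        changed = True
        while changed:
            changed = False
            share = sum(aligned) / n  # current share of aligned actors
            for i, t in enumerate(thresholds):
                if not aligned[i] and share >= t:
                    aligned[i] = True
                    changed = True
        return sum(aligned)

    # With thresholds spread evenly from 0.00 to 0.99, one unconditional
    # first mover tips the whole group into alignment, one step at a time:
    print(cascade([i / 100 for i in range(100)]))        # -> 100
    # Shift every threshold up by a single percentage point and the cascade
    # never starts: the aggregate outcome flips on a tiny individual change.
    print(cascade([(i + 1) / 100 for i in range(100)]))  # -> 0

The point of such models, and of Ermakoff's far subtler version, is that collective outcomes can hinge on the distribution of individual dispositions rather than on any shared conviction: near-identical assemblies can tip into unanimous abdication or hold firm.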

When democracies fail

Ruling Oneself Out offers many valuable lessons for today's social scientists and committed citizens. The choice to concentrate on two episodes in which democracies surrendered of their own free will is significant: it shows in particular that the failure of democracies was not a preordained conclusion. Things could have turned out differently. Historical outcomes are not the result of deterministic forces and collective interests: individual decisions matter. And so does politics. Parliamentarians in particular carry a great burden of responsibility and accountability. Their decisions can make or break a constitutional order. They are the guardians of the democratic temple, and their failure to protect the sacred treasure in their charge can have dramatic consequences. Ermakoff also offers a lesson in applied social science, or historical sociology. He used the best conceptual tools available at the time, without falling into the trap of theory fetishism or math-and-graph envy. His book combines qualitative and quantitative methods of analysis that are rarely used together. It remains a historical account of two significant events, from which the lay reader will learn a great deal. In this perspective, theory and evidence should go hand in hand. Being faithful to "how things actually happened," to take Leopold von Ranke's definition of history, does not imply a rejection of formal models or theoretical constructions. The role of parliaments offers particularly rich material for theoretically inclined social scientists. Parliamentary archives contain a trove of empirical data, both textual and statistical; parliaments are open to the public, and easily lend themselves to participant observation and ethnographic work. A young French sociologist, Etienne Ollion, has recently published a book on the functioning of the French National Assembly using state-of-the-art narrative techniques and quantitative tools, including statistical techniques derived from artificial intelligence. I hope his book, Les candidats, gets translated into English.

A Flash in Japan

A review of The Flash of Capital: Film and Geopolitics in Japan, Eric Cazdyn, Duke University Press, 2002.

The "flash of capital" refers to the way the underlying structure of a national economy "flashes" or reverberates through the films it produces, and to how film criticism can highlight the relations between culture and capitalism, film aesthetics and geopolitics, movie commentary and political discourse, at particular moments of their transformation. A flash is not a reflection or an image, and Eric Cazdyn does not subscribe to the reflection theory of classical Marxism that sees cultural productions as a mirror image of the underlying economic infrastructure. Karl Marx posited that the superstructure, which includes the state apparatus, forms of social consciousness, and dominant ideologies, is determined "in the last instance" by the "base" or substructure, which relates to the mode of production as it evolves from feudalism to capitalism and then to communism. Transformations of the mode of production lead to changes in the superstructure. The Hungarian philosopher and literary critic György Lukács applied this framework to all kinds of cultural productions, claiming that a true work of art must reflect the underlying patterns of economic contradiction in society. Rather than Marx's and Lukács's reflection theory, Cazdyn's "flash theory" is inspired by the post-Marxist cultural theorists Walter Benjamin and Fredric Jameson, and by the work of Japan scholars Masao Miyoshi and Harry Harootunian (the two editors of the Duke University Press series in which the book was published). For Cazdyn, how we produce meaning and how we produce wealth are closely interrelated. Cultural productions such as films give access to the unconscious of a society: "What is unrepresentable in everyday discourse is flashed on the level of the aesthetic." Films not only reflect and explain underlying contradictions but, more importantly, actively participate in the construction of economic and geopolitical transformations.

Reflection theory and flash theory

The Flash of Capital concentrates on those critical moments of Japanese modern history during which the forms of both cinematic and capitalist categories mutate. The author identifies three such mutations of Japanese modernity: (1) between being colonized and being a colonizer nation in the pre-World War II moment; (2) between the individual and the collective in the postwar moment; and (3) between the national and the transnational in the contemporary situation. Colonialism, Cold War, globalization: these are the three moments that Cazdyn addresses through thematic discussions of cinematic visuality, film historiography, literary adaptation, amateur acting, pornography, and aesthetic experiments. Rather than writing a linear history, he prefers to concentrate on key moments of transformation during which formal inventions at the level of film aesthetics figure a way out of impossible situations before a grammar becomes available to make sense of them. By paying close attention to the details of cinematic texts, he reads the works of Japanese directors and film critics as so many symptoms of the most pressing social problems of the day. Cazdyn borrows from Fredric Jameson and other literary critics the technique of symptomatic reading, a mode of reading literary and cinematic works that focuses on the text's underlying presuppositions. A symptomatic reading is concerned with understanding how a text comes to mean what it does, as opposed to simply describing what it means or represents. In particular, it tries to determine what a text is unable to say, or represses because of its ideological convictions, but that transpires at the formal level through flashes, allegories, and aesthetic choices. The films Cazdyn reviews were made at historical junctures in which social and political events were difficult to articulate: there did not seem to be an effective language with which to express the transformations taking place at key moments of Japanese modernity. But, as Cazdyn notes, "some filmmakers take more risks than others. They risk speaking in a language for which there is no established grammar."

Japanese cinema has a peculiar affinity with the history of capitalist development. The movie industry is literally coeval with Japanese modernity: in the case of Japan, the history of film and the history of the modern nation share approximately the same span of time, both emerging in the 1890s. In addition, the one-hundredth anniversary of film in Japan coincided with the fiftieth anniversary of the end of World War II. It comes as no surprise, therefore, that almost every history of Japanese film has used the history of the nation to chart its course. The three moments that The Flash of Capital chooses to concentrate on are key turning points in Japanese modern history. They are also periods when Japanese cinema was particularly productive, with successive "Golden Ages" that have marked the history of Japanese cinema for a worldwide audience. The 1930s, the postwar period up to the late 1960s, and the 1990s were times fraught with contradictions. The antinomies and tensions between colonization and empire, between the individual and the collective, and between the national and the transnational left an imprint on the films produced during these periods, both at the level of content and in the formal dimension of aesthetic choices and scenic display. It is interesting to note that these moments also produced canonical histories of Japanese cinema, both in print and through cinematic retrospectives. Cazdyn conducts a formal analysis of six histories of Japanese film, two of which are themselves films. The first historiographic works of the 1930s and early 1940s set the terms for a theory of cinema heavily influenced by Marxism and nationalism; the 1950s saw the publication of Tanaka Jun'ichirō's monumental encyclopedia of Japanese movies and Joseph Anderson and Donald Richie's The Japanese Film; and the 1990s were marked by the one-hundredth anniversary of Japanese cinema, with yet another four-volume encyclopedia and a film retrospective by Ōshima Nagisa. Among scholars and students in the West, Anderson and Richie's book has been a constant reference and has gone through a series of republications; it is, however, distinctly anticommunist and heavily marked by the Cold War context.

Colonialism, Cold War, globalization

Cazdyn begins his discussion of the first period with an Urtext of Japanese cinema: the 1899 recording of a scene from the kabuki drama Momijigari featuring the actor Ichikawa Danjūrō (the stage name of a lineage of actors that stretches from the seventeenth century down to the present). Attending a screening held at his private residence, Danjūrō was shocked by his own image staring back at him and made it clear that the film should never be screened during his lifetime. But he later agreed that a presentation of the movie reels at an event in Osaka he was unable to attend was more satisfactory than a performance by another kabuki troupe. This episode set the terms—repetition, reproducibility, ubiquity, copyright, distribution networks, mass production—by which the movie industry later operated. By the 1930s, cinema had become well entrenched in Japan. The early figures of the onnagata (men playing women's roles) and the benshi (the live commentator who narrated the screening), taken from similar roles in the traditional performing arts (kabuki, noh, bunraku), had given way to the modern talkie, a star system based on female actors, and genres divided between jidai-geki (period dramas) and gendai-geki (modern dramas). Film adaptations (eiga-ka) of literary works of fiction (shōsetsu) served to gain legitimacy for cinema as an art form, circumvent censorship, consolidate a literary canon, and affirm the superiority of the original through fidelity-based adaptations. The writer Tanizaki Jun'ichirō, who had offered his own theory of adaptation through his successive translations of the Tale of Genji into modern Japanese, criticized the film adaptation of his novel Shunkinshō by pointing out its erasure of the multiple levels of narration and identity that were so central to his work. When Tanizaki's novel is reduced to mere narrative content, "all that remains are the most reactionary and conservative elements." For the author, Tanizaki's aesthetic choices, like the films produced by the first generation of Japanese directors, were inextricably related to the most crucial issues facing the Japanese nation in the 1930s: the rise of militarism and the backsliding of democracy, the colonization of large swathes of Asia, the rejection of Western values in favor of Japanese mores. Remaining silent about these issues, as Tanizaki did in his novels and Ozu Yasujirō in his early movies, is a charge that can be laid against these authors.

The second Golden Age of Japanese cinema, and a high point of Japanese capitalist development, arose from the rubble of World War II, found its most vivid expressions in the 1950s and early 1960s, and culminated in the avant-garde productions of the late 1960s and early 1970s. Out of this second period emerged not only a studio system modeled on Hollywood but an impressive number of great auteurs who have become household names in the history of artistic cinema. Ozu's challenging formal compositions, Kurosawa's intricate plots, and Imamura's nonlinear temporalities are immediately recognizable and have influenced generations of movie directors in the West and in Asia. The postwar period, which coincided with the Cold War, was marked by the subjectivity debate, or shutaisei ronsō, which influenced popular ideas about nationalism and social change. For the postwar generation of left-leaning intellectuals, a sense of self—of one's capacity and legitimacy to act as an individual and to intervene against the state and collective opinion—was crucial to keep the nation from ever being hijacked again by totalitarianism. But at the same time, the individual was summoned to put the interests of big corporations, administrative structures, and the Japanese nation as a whole before his or her own personal fulfillment, and to sacrifice the self in favor of economic development. In the context of the movie industry, the attempt to transcend the contradiction between the individual and the collective was resolved by positing a third term: the "genius" filmmaker who breaks out of the rigid structure and trumps the other two terms. The "great man theory" claims that an individual can rise up and produce greatness within—if not transcend—any structure. The same emphasis on the power of the filmmaker characterized film adaptations of literary works in the period. Encouraged by the Art Theatre Guild, eiga-ka movies took liberties with the original text, either by focusing on a particular section or by adding content to the narrative. Shindō Kaneto's 1973 adaptation of Kokoro, for example, deals only with the third part of Sōseki's famous shōsetsu, a long confessional letter, while in Ichikawa Kon's Fires on the Plain the soldier-narrator of Ōoka Shōhei's novel is shot and killed at the end instead of ending up in a mental hospital.

The withering away of the nation-state

The era of globalization, the third period in Eric Cazdyn's survey of movie history, marks a transformation in the operations of the nation-state and in the aesthetics of Japanese cinema. The problem of globalization is that of a globalized system in which nations are steadily losing their sovereignty but where state structures and ideological models cling to an outdated form of representation. The political-economic and the cultural-ideological dimensions do not move at the same speed: at the precise moment when the decision-making power of the nation-state is declining, nationalist ideologies and identities are as strong as ever. Some filmmakers combined a renewed emphasis on the nation with a full embrace of globalization. For Ōshima Nagisa, the enfant terrible of the Japanese New Wave, national cinema is dead, and Japan is being bypassed by the transnational forces of capital. In Merry Christmas, Mr. Lawrence (1983), he represents the Japanese from the viewpoint of the white prisoners of war. In L'Empire des sens (1976), the pornographic nature of the film does not lie in the content (although the actors Matsuda Eiko and Fuji Tatsuya are "really doing it") but in the form of its reception: the Japanese conversation about the film was almost entirely consumed by questions of censorship, while in France, where it was first released, the film was geared toward a general audience, including foreign visitors: Ōshima noted that one out of every four Japanese who traveled to France had seen the movie. For Cazdyn, a film that makes history is "a film that represents a transformation before it has happened, a film that finds a language for something before a language has been assigned, a film that flashes the totality of modern Japanese society in a way that is unavailable to other forms of discourse." Rather than commenting on blockbusters and costly productions, he chooses to read political allegories in experimental films such as Tsukamoto Shin'ya's Tetsuo (1988) or the documentaries of Hara Kazuo such as Yukiyukite shingun (The Emperor's Naked Army Marches On, 1987). He even finds inspiration in adult videos, which he sees as a compromise between guerrilla-style documentaries on the left and reality TV on the right. He notes that approximately seventy-five percent of current adult-video films in Japan are documentary-style—that is, their narratives are not couched in fiction but follow a male character as he walks the streets looking for sex and engaging women to that end. Similarly, in his documentaries, Hara Kazuo can often be heard asking questions and provoking situations. His films make change happen in the real.

Eric Cazdyn is well versed in the history of Japanese Marxism and makes it a central tenet of his theorization of Japanese cinema. He refers to the prewar Marxist debate between the Kōza-ha (the faction that remained loyal to the Japanese Communist Party and the Comintern) and the Rōnō-ha (the faction that split from the JCP in 1927 and argued that a bourgeois revolution had been achieved with the Meiji Restoration). Another school of Marxism, the Uno-ha, was that of the late Tokyo Imperial University economist Uno Kōzō, probably the single most influential postwar Japanese economist on the domestic academic scene. Uno drew a distinction between a pure theory of capitalism, a theory of its historical phases, and the study of concrete societies. He concentrated on the first, dedicating himself to the most theoretical problems of Marx's Capital, such as the labor theory of value, the money circuit represented by the M-C-M′ formula, commodity fetishism, and the recurrence of crises. Moving to the present, Cazdyn pays tribute to Karatani Kōjin, a contemporary philosopher and interpreter of Marx's thought who has attracted a vast following. Marxism has had a lasting influence on Japan's intellectual landscape and has shaped the work of many filmmakers over the past century. Cazdyn recalls that many intellectuals joined film clubs in the late 1920s and early 1930s because they were among the only places where members could read Marx's Capital without falling prey to censorship and repression. But this utopian space was soon discovered, and by 1935 Marxist intellectuals were either behind bars, had retreated into private life, or had embraced right-wing nationalism. Illustrative of this wave of political commitment is the Proletarian Filmmakers' League, or Prokino. Cold War histories of Japanese cinema have disparaged this left-wing organization by pointing out "the extremely low quality of its products." Cazdyn rehabilitates the work of its main theorist, Iwasaki Akira, and of the documentary filmmaker Kamei Fumio, who treated montage as a "method of philosophical expression."

New publics for old movies

What is the relevance of these references to Marxist theory and to obscure works of documentary and fiction for contemporary students of Japanese cinema in North America and Europe? Cazdyn highlights the changing demographics of the classes that enroll in his discipline: "Students were primarily attracted to the arts and Eastern religion in the 1960s and 1970s; in the 1980s, they were chasing the overvalued yen; and today, they are consumed by (and consumers of) Japanese popular culture—namely manga and anime." He also notes that the study of national cinema as an organizing paradigm has lost much of its appeal. The academic focus is now on films that address the issues of minorities in Japan—post-colonial narratives, feminist films, LGBT movies, social documentaries—or on transnational productions in which Japanese identity is diluted into a pan-Asian whole. But academics should not project their current global and professional insecurities onto the screen of cinema history. The demise of the nation-state, and the dilution of national cinema into the global, is not a foregone conclusion. Movies produced in Japan today do not appear less Japanese than those made one or two generations ago. There is still a strong home bias in the preferences of viewers, who favor locally produced movies over foreign productions. Japanese films that are popular abroad do not necessarily make it big in Japan, and art-house theaters and international festivals often feature films that are completely unknown in their domestic market. The economic and geopolitical context matters for understanding a movie, but not in the sense that Cazdyn implies. The author's knowledge of the real functioning of an economy is inversely proportional to his investment in Marxist theory. He confesses that his interest does not hinge "on the profits and losses incurred by the film industry in Japan." But supply and demand, profits and losses, and production and distribution circuits matter for the evolution of cinema over the ages, and a theory that claims to conceptualize the link between films and their socioeconomic context must grapple with economic realities, not just outmoded Marxist fictions.

Passing for White, Passing for Black

A review of Passing and the Fictions of Identity, Elaine K. Ginsberg ed., Duke University Press, 1996.

On September 10, 2020, the editorial director of Duke University Press issued a statement about Jessica Krug, a published author who for several decades had falsely claimed a Black and Latinx identity before being exposed as a case of racial fraud. The public statement was brimming with rage and indignation: "I have been sickened, angered, and saddened by the many years that she deployed gross racial stereotypes to build her fake identity," the editor wrote. The feminist scholar was denounced for deception and fraud, rendered all the more shameful by the fact that "early in her career, she took funding and other opportunities that were earmarked for non-white scholars." Confronted with her lies, Jessica Krug herself issued a blog confession in which she disclosed her original identity "as a white Jewish child in suburban Kansas City" who, because of "some unaddressed mental health" issues, had assumed a false identity, initially as a youth and then as a scholar. Using a word tainted by a history of antisemitism, she described herself as "a culture leech," apologized profusely, and asked to "be cancelled." It turned out Jessica Krug was not the only case of racial impersonation in academia: over the following months, other scholars were exposed as having claimed a false racial identity, including another author who had manuscripts accepted by Duke University Press even after she was denounced as a so-called "Pretendian," a person falsely claiming Native American heritage. In another statement, the same editor indicated that "for months now, we at Duke University Press have engaged in difficult conversations about how we can do a better job of considering ethical concerns as we make our publishing decisions." But she did not indicate whether the academic publisher would take measures to check the self-declared racial identity of its contributors, or how it would proceed in doing so.

Policing race, unpolicing gender

I remember being amused and puzzled by these media statements. I saw them as a typically American story as we like to imagine them in France: a narrative following a pattern of public exposure, legal confrontation, personal confession, atonement for past sins, and redemption, as in the case of Bill Clinton in the Monica Lewinsky affair. Only in the case of white people assuming a Black identity, there was neither mercy nor redemption: the culprits were expected to expose their shame publicly before disappearing into oblivion. And indeed, following her confession Jessica Krug vanished from public view, never to be seen again: she was, in effect, cancelled. To a certain degree, I can understand the outrage of the Duke editor and of other persons who had been fooled into believing the usurped identity of racial impostors. But only to a degree: there are also convincing arguments to support the view that racial usurpation is not such a big deal, and should be treated with leniency. Whom did Jessica Krug harm by pretending to be black? Does having benefited from earmarked resources justify the cancellation of a scholar who might otherwise have made useful contributions to the field? What if it were possible to "play one's race" as one plays a role? After all, isn't it a central tenet of critical studies that identity is a fiction and that social roles are performatively enacted? According to Judith Butler, whose Gender Trouble was published in 1990, gender is performance. Likewise, in Epistemology of the Closet, also published in 1990, Eve Kosofsky Sedgwick argues that limiting sexuality to homosexuality or heterosexuality, in a structured binary opposition, is too simplistic. The discipline of queer studies that they helped establish is a broad tent: one does not have to prove one's credentials as a gay, lesbian, or otherwise LGBTQI+ person to identify as "queer." Likewise, in crip theory, the radical arm of disability studies, a person is considered disabled if she considers herself to be so. There are no checks of medical records or social security status: indeed, disability scholars deny doctors the exclusive right to declare who is disabled and who is not, and argue that disability status is biased against persons of color, people living in precarious conditions, and otherwise discriminated-against persons. Being disabled (or being queer) is a social construction, just like what is opposed to it, namely being able-bodied (or being straight). Why should race be treated differently? Are academics serious when they claim that race is also a fluid and reversible category?

The moral panic raised by racial usurpations of minority identity is a very contemporary phenomenon. To understand its roots, one has to delve into the American history of race relations and into the academic context as it emerged in the 1990s, especially in literature departments, where questions of identity and fiction were most prominently raised. It was a time when the modern racial impersonators started their careers, and when transracialism, although based in those cases on identity theft and deception, appeared as a feasible option. The book Passing and the Fictions of Identity, edited by Elaine K. Ginsberg and published in 1996, therefore provides a useful benchmark to assess contemporary debates in light of their foundational moment. The term passing designates a performance in which one presents oneself as what one is not, a performance commonly imagined along the axes of race, class, gender, or sexuality. In American literature, passing across race and across gender are thoroughly imbricated—most famously in the narrative of William and Ellen Craft, Running a Thousand Miles for Freedom (1860), in which the black couple escaped from slavery, Ellen dressed as a white man and William posing as her servant, and in Harriet Beecher Stowe's Uncle Tom's Cabin (1852), in which Eliza, traveling to Canada, disguises herself as a white man and her young son as a girl. In the twentieth century, novels such as Nella Larsen's Passing (1929) and James Weldon Johnson's Autobiography of an Ex-Coloured Man (1912) added to the discourse of racial passing a third important sense of passing: the appearance of the "homosexual" as "heterosexual." Passing and the Fictions of Identity explores passing novels as a literary genre that complicates racial and sexual categories. It also considers passing across the delimitations of social status, as in The Life of Olaudah Equiano (1789), in which the narrator, an Igbo African and a former British slave, becomes a free sailor and a pioneer of the abolitionist cause. It addresses gender crossing through a close reading of The Woman in Battle (1876), an account of Civil War cross-dressing that presents itself as the autobiography of Loreta Velazquez, a woman who masqueraded as a Confederate officer and spy during the war. Passing novels also include The Hidden Hand (or Capitola the Madcap), a picaresque adventure tale first published in 1859, and James Baldwin's Giovanni's Room (1956), in which national, racial, and sexual identities are presented as nostalgic constructions subject to a pathos of lost origins. Black Like Me (1961) is not a work of fiction but a realistic account of a journey through the Deep South of the United States, at a time when African Americans lived under racial segregation, by a journalist who had his skin temporarily darkened to pass as a black man. Closing the book, Adrian Piper, a philosopher and performance artist, offers her personal testimony as an African American woman who identifies as black but often passes for white because of her light-skinned complexion.

The dilemma of passing

Passing for white is still a reality in contemporary American society, where African American identity was built on a history of slavery and segregation and where Blacks still suffer from racial prejudice and social exclusion. As F. James Davis writes in Who Is Black? One Nation's Definition (1991), "Those who pass have a severe dilemma before they decide to do so, since a person must give up all family ties and loyalties to the black community in order to gain economic and other opportunities." There is no forced "outing" of people who pass for white in the African American community: "Publicly to expose the African ancestry of someone who claims to have none is not done," writes Adrian Piper. And yet passing is met with ambivalence and equivocation. In the novel Passing, one character remarks: "It's funny about 'passing.' We disapprove of it and at the same time condone it. It excites our contempt and yet we rather admire it. We shy away from it with an odd kind of revulsion, but we protect it." By contrast, at the time the book was published, passing for black when one is white was deemed a complete impossibility. Adrian Piper, who was suspected of doing so, reacts strongly to such accusations: "It's an extraordinary idea, when you think about it: as though someone would willingly shoulder the stigma of being black in a racist society." Based on her own experience, she considers being black as "a social condition, more than an identity, that no white person would voluntarily assume, even in imagination." The many instances of microaggressions, discriminatory treatment, racial slurs, and racist conversations she overheard even in an academic context considered "safe" justify her point: raised as an African American by a committed family, but as a person who "looked white" and "talked white," she involuntarily passed as white and was thus able to witness the racist behavior of white persons who lower their guard when they think they are among themselves (as in the Saturday Night Live routine in which a whitefaced Eddie Murphy witnesses the sigh of relief as the single black man exits a bus full of white passengers.) In Black Like Me, John Howard Griffin expresses outrage and mortification at a variety of incidents that would have been commonplace to black Southerners living under Jim Crow: being turned away from hotels and restaurants, made the target of racial animosity and sexual objectification, denied banking privileges, rejected peremptorily from jobs, required to use segregated toilet facilities, and forced to sit at the back of the bus. Clearly, under such conditions, no white person would willingly become black.

There are several reasons why passing became a popular trope in American literature, and why literary criticism took on the subject with an enthusiasm bordering on frenzy in the 1990s. Cross-dressing and assuming a fake identity have always been familiar ploys in literary fiction, from the picaresque novels of sixteenth-century Spain to the stage comedies of Shakespeare and Marivaux. The American legacy of slavery and racial segregation added an element of drama to this familiar plot. The fictitious characters of the passing novel, like the unknown thousands of very real black men and women who passed out of slavery, moved from a category of subordination and oppression to one of freedom and privilege. According to the one-drop rule, any person with even one black ancestor ("one drop" of "black blood") was considered black (Negro or colored, in historical terms). African blood is invisible on the surface of the body, allowing persons of mixed descent with light skin and Caucasian facial features to pass as white. Crossing racial or sexual boundaries involves the suspension of disbelief that is at the heart of literary fiction: appearances are deceiving, identities are in flux, and nothing is what it seems. The visual force of passing, and especially the shock of its discovery after the fact, is extraordinary. Especially in the case of race, passing is not simply performance or theatricality, the pervasive tropes of recent work on sex and gender identity; nor is it parody or pastiche, for it seeks to erase, rather than expose, its own dissimulation. Unlike sexual identity, which is not necessarily apparent, race is eminently visible, as if it were natural. Race is essential, communal, and public, whereas sexuality is contingent, individual, and private. The misperception of race is therefore surprising insofar as it contradicts the established belief in the strength of blood ties and genetic makeup. Racial passing resonates deep within the American psyche. Even though a significant proportion of white Americans, about 3.5 percent according to geneticists, are known to have some African ancestry, very few people who identify themselves as white are ready to acknowledge this heritage. According to Adrian Piper, "the fact of African ancestry among whites ranks up there with family incest, murder, and suicide as one of the bitterest and most difficult pills for white Americans to swallow."

The fictions of identity

Through the contributions to this volume, passing was constituted as a literary genre and a productive space in which to interrogate identity in all its dimensions. According to one contributor, "passing is an act of resistance against dominant constructions of race, gender, sexuality, and identity." As she explains, the discourse of racial passing reveals the arbitrary foundation of the categories "black" and "white," just as passing across gender and sexuality places in question the meaning of "masculine" and "feminine," "straight" and "gay." For the editor, in her introductory chapter, "just as the ontology of race exposes the contingencies of the categories 'white' and 'black,' so the ontology of gender exposes the essential inauthenticity of 'man' and 'woman.'" Socially constructed identities seemed to connote an identity easily altered or cast off: one could be black or white (or Native American) by an act of volition, a conscious decision that would engage the rest of one's life but that bore no relation to one's previous self. The facticity of identity made any experience of that identity necessarily inauthentic: "Passing is only one more indication that subjectivity involves fracture, that no true self exists apart from its multiple, simultaneous enactments." It was accepted as an article of faith that "identities are not singularly true or false but multiple and contingent." There was no authentic self, only an assemblage composed of "a series of guises and masks, performances and roles." Literature had first established passing as a trope, and literary criticism gave it its badge of honor. The 1990s were years of transformation in the humanities, and the university became a factory for ideas of gender transition and eventually of race fluidity. Under the influence of Roland Barthes, Jacques Lacan, Judith Butler, and Eve Sedgwick, who are quoted at length in this volume, identities were read as fictions and constructed as fantasies. Race was compared to a "metaphor," an "empty signifier," a "mark empty of any referential content," or "the unheimlich return of a desire" that could be as malleable as text. The time had come to "construct new identities, to experiment with multiple subject positions, and to cross social and economic boundaries that exclude or oppress."

In such an intellectual climate, it is no wonder that some enterprising individuals took critical theory at its word and decided to test in real life the theses that literary critics and social scientists had proposed on the cultural front. If all identities are in passing and race is a masquerade, why not assume a different racial identity and pretend one belongs to a minority of color instead of to dull and undifferentiated whiteness? If race is a role we play, why not choose the character we wish to embody and play the part accordingly? Of course, assuming a different ethnic identity involves lying about one's "true" origins. But if race is a lie, lying about a lie is not a lie: it is all performance. There were several motivations behind the choice made by some individuals, mostly academics and performing artists, to take on the identity of an ethnic minority. First, the stigma once associated with being colored started to recede with the civil rights movement and the promotion of ethnic identities. In the ideologically charged climate of the 1970s, Black was beautiful, Native American was noble, ethnic was chic. There was a whiff of marginality and radicalism in embracing the cause of ethnic minorities fighting for their rights. As the author of Black Like Me experienced it, one could not act as a spokesperson for a group to which one did not belong. He chose to step aside and to support black separatism from a distance; others preferred to espouse the cause with which they identified unequivocally, and to play the part to the end. Second, this was a period when ethnic studies and other interdisciplinary fields emerged as new and exciting disciplines. For a promising academic, it was important to position oneself where all the action was. If this involved lying about one's ethnic origins, so be it. Most of the time, the deception began with a lie by omission or a sous-entendu that may have been based on family lore. In an unsuspecting environment, there were no hard questions asked, and no need to provide minute answers about one's genetic makeup. In some cases, what began as a histrionic role became an acting career. Academics spend their lives on a stage, impersonating a role in front of a devoted audience. They tend to embody the ideas they defend, to the point that their persona becomes inseparable from their discipline. Teaching ethnic studies made one feel part of that ethnicity.

The backlash against transracialism

And yet transracialism has few modern proponents, and academics who are found to have lied about their ethnic origins are subjected to public shaming and a strict policy of cancellation. "In Defense of Transracialism," an article published by the philosopher Rebecca Tuvel in the academic journal Hypatia in spring 2017, was met with a barrage of insults and denunciations, and the journal's editors had to publish an apology. How, then, are we to understand the backlash against racial trespassing and the cancellation of individuals who claimed an ethnic identity when in fact they were white? Why did race and gender follow different paths and end up on opposite sides of academic debates, with transracialism denounced as wholly illegitimate while trans identities were recognized and even praised by gender theorists? First, the issue of passing involves not only an individual's decision to change race, but also deliberate lying and deception about it. Academia is an industry that defines itself in large part by its ethical standards: a career built on a lie makes other people angry and resentful. American ethics adds a layer of prudery and moral posturing to these manifestations of public outrage: remember that in the Lewinsky case, what Bill Clinton was reproached for was not having had an affair with an intern, but having lied about it. Denunciations of ethnic fraud also emphasize the fact that the culprits benefited from preferential treatment and financial resources earmarked for members of ethnic minorities: they "stole" these resources from others who would otherwise have benefited from these affirmative action measures. One may find this argument shallow and petty: there is more to academia than money and a struggle for positions, and every social policy has its leakages. The resolve to curtail the phenomenon of passing also comes from the realization that it may have reached massive proportions. In the 1990 census, two million Americans reported themselves as American Indians or Alaska Natives; in 2000, almost twice as many gave the same answer to the questionnaire. Latinos, highly educated adults, and women were proportionally the groups most represented in this increase. Checking the "Native American" box is not only a means of gaming the university admission system: Native American cultures have experienced a kind of cultural renaissance, which increases the number of persons willing to identify with them. Finally, the reaction to Rebecca Tuvel's article showed that feminists who support trans identities and queer studies are particularly ill at ease with the possibility of transracialism. They do not want to see gender debates contaminated by issues of racial transition. Policing race is also a way of policing their own discipline and erecting barriers against trespassing.

Hawai’i on Ice

A review of Cooling the Tropics: Ice, Indigeneity, and Hawaiian Refreshment, Hi′ilei Julia Kawehipuaakahaopulani Hobart, Duke University Press, 2022.

Many public events in the United States and Canada begin by paying respects to the traditional custodians of the land, acknowledging that the gathering takes place on their traditional territory, and noting that they called the land home before the arrival of settlers and in many cases still do. Cooling the Tropics does not open with such a Land Acknowledgement, but Hi′ilei Julia Kawehipuaakahaopulani Hobart (hereafter Hi′ilei Hobart) claims Hawai'i as her piko (umbilicus) and pays tribute to the kūpuna (elders) and the lāhui (the people) who "defended the sovereignty of [her] homeland with tender and fierce love." She describes her identity as "anchored in a childhood in Hawai'i, with a Kānaka Maoli mother who epitomized Hawaiian grace and a second-generation Irish father who expressed his devotion to her by researching and writing our family histories." She expresses her support for decolonial struggles and Indigenous rights, and has participated in protests claiming territorial sovereignty for Hawai'i's Native population. How can one decolonize Hawai'i? How can Hawaiian sovereignty discourse articulate a claim to land restitution and self-determination that is not a return to a mythic past? What about racial mixing, once regarded with anxiety and now touted as a symbol of Hawai'i's success as a multicultural US state? What happens to settler colonialism and white privilege when the local economy and the political arena are dominated by populations originating from East Asia and persons of mixed descent? Is economic self-reliance a feasible option, considering the imbrication of Hawai'i's economy with the US mainland market? Can the rights of the Indigenous population be better defended in a sovereign Hawai'i? What is the meaning of supporting decolonial futures that include "deoccupation, demilitarization, and the dismantling of the settler state"? Can decolonization be achieved by nonviolent means, or do sovereignty activists have to resort to rebellion and armed struggle? What would be the future of a decolonized Hawai'i in a region fraught with military tensions and geopolitical rivalries? What can a decolonial perspective bring to the analysis of Hawai'i's colonial past and possible futures? And why is academic research on Hawai'i's history and society so often aligned with the decolonization agenda, to the point that decolonial approaches are almost synonymous with Hawaiian studies in the United States? More to the point: how can a PhD student in food studies, chronicling the introduction of ice water, ice-making machines, ice cream, and shave ice in Hawai'i, address issues of settler colonialism, Indigenous dispossession, Native rights to self-determination, and decolonial futures?

Decolonize Hawai’i

Unbeknownst to most Americans, and to all but a few non-US citizens, there is a thriving independence movement in the Hawaiian Islands today. Born out of the unlawful US-backed overthrow of the Hawaiian Kingdom in 1893, it survived Hawai'i's accession to statehood in 1959, and it currently opposes the encroachment of military infrastructure and other state interests on confiscated land and sacred sites. The Hawaiian sovereignty movement does not advocate a return to a mythic past. Simply put, Native communities demand respect for their traditional cultures, consideration for their role as stewards of the land, and empowerment to take part in all decisions that affect them. Since 2014, local activists have opposed the construction of the Thirty Meter Telescope (TMT), a scientific endeavor with governmental support from Canada, Japan, China, and India. Slated to become the most powerful telescope on the planet, the stadium-sized facility threatens to desecrate one of the most sacred sites for Kānaka Maoli. Construction was temporarily halted by a blockade of the roadway leading to the site, and further protests as well as legal battles have prevented construction of the telescope from resuming. Hi′ilei Hobart took part in the protests, helping to keep the base camp of picketers provisioned with food and beverages. Participating in local struggles fed into her dissertation in more than one way. First, it underscored the obvious: ice and snow are native to Hawai'i; they are not an imported commodity brought by Anglo-American settlers along with "civilization." Those who tell the story of how ice first came to Hawai'i get it wrong: ice and snow have been there since time immemorial, and during winter snow frequently falls on the summits of the island chain's tallest mountains. But even confronted with this evidence, popular discourse continues to construe ice and snow as alien to Hawai'i, and to frame Maunakea―the site of the TMT―as a terra nullius unoccupied by the Native population and thus up for grabs, available for construction in the name of science and progress. Discursive logics have combined to produce Maunakea as "not-for-Hawaiians" (Kānaka Maoli were supposed to steer clear of altitude, and the first individuals on record to climb the mountaintops were Westerners), as "not-Hawai'i" (outsiders picture Hawai'i as a tropical paradise of lush valleys and beaches), and as "not-Earth" (NASA used the desolate volcanic site to simulate spacewalks on Mars and the moon). Cumulative efforts to frame Maunakea as empty and alien have resulted in disregard for Natives' rights and belief systems.

The second lesson Hi′ilei Hobart could draw from her roadblock picketing is a better sense of the local cosmogonies that tie humans to nature and the elements in Hawai'i. For Kānaka Maoli, Maunakea's snow, mist, and rain are not just atmospheric phenomena: they signal the lingering presence of gods (akua) and ancestral spirits who have occupied the place even in the absence of humans. Local tales, or mo'olelo, kept by way of oral transmission, carry the foundation myths of the islands and mountains and attest to Maunakea's central role in Indigenous place and thought, while animating the elements and other life forces with their own spirit and consciousness. Likewise, for the anthropologist, commodities are animated with a life of their own. According to Marx, a wooden table "not only stands with its feet on the ground, but, in relation to all other commodities, it stands on its head, and evolves out of its wooden brain grotesque ideas, far more wonderful than if it were to begin dancing of its own free will." Ice and refreshments in the tropics are imbued with values, desires, longings, and social hierarchies. They have a history that intersects with the history of settler colonialism, racial capitalism, and the military-tourism complex in Hawai'i. Discourses about ice encapsulate ideas about race, modernity, gender, and the affective sensorium. They help rationalize Indigenous dispossession and contribute to the legitimization of imperialism. As the historian Eric Jennings has demonstrated, the concepts of freshness and refreshment marked colonial relationships in the tropics. The hill stations and colonial spas built by the French and the British in their colonial outposts were predicated on the idea that fragile European bodies could not endure tropical heat and had to periodically regain some of their vigor in high-altitude places where the conditions of life in the homeland were reproduced. The same logic explains how ice and frozen refreshments were progressively naturalized in Hawai'i's foodscape. First to penetrate the Hawaiian market in the nineteenth century, ice cubes were associated with masculinity, alcohol consumption, saloon culture, plantation ownership, and white privilege. By contrast, the more feminine ice water came to be seen as a means of achieving temperance, mitigating the warm climate, and cooling off after effort. Ice cream was a symbol of whiteness, sugary sweetness, purity, leisure, and innocent childhood; for young women, who could frequent the ice cream parlor without being chaperoned, the fast-melting delicacy was also synonymous with freedom and romantic encounters. Born on the plantations, shave ice is associated with brown labor, rural life, Asian migrants, mom-and-pop stores, and nostalgia for simpler times.

Infrastructures of the cold

The third lesson of the author’s fieldwork as an activist was the realization that American society depends on thermal infrastructures, from the cold chain that keeps foodstuffs from perishing to the air conditioning that shields large houses from outside temperatures. Freezers and refrigerators are essential to modern survival. These infrastructures have become so embedded in everyday life that they fade into the background, and their very invisibility guarantees that structures of dispossession and extraction go unnoticed. This is what the author labels “thermal colonialism,” defined as the modes by which temperature was managed and organized to favor settlers’ interests and reproduce racial hierarchies. Americans have become quite literally “conditioned” to experience coolness or frozen tastes in hot weather, to the point that they consider the “right to chill” constitutionally guaranteed. But the desire for freshness and refreshment has a history: it is not biologically determined. We realize the importance of infrastructures of the cold when they fail us: the fragility of the cold chain in Hawai’i reveals itself after a hurricane, when supply lines are disrupted, or each time the islands brace for an emergency. When things fall apart, networks of care and resilience take precedence over market relations and commercial interests. This is what Hi’ilei Hobart realized in the encampment at Maunakea as she filled coolers with ice and drained their brown water to keep foodstuffs fresh and edible. Managing the community’s pooled food resources made her aware of food insecurity and thermal dependence in a state that relies heavily on imported goods and processed food. As her food studies turned to food work, she realized that “all that is frozen melts into water” (to paraphrase Marx’s famous quote) and wondered whether Hawai’i had a future beyond the ice age: “what place does refrigeration have within Indigenous futures that move beyond settler capitalism, when coldness has played such an intimate role in these systems of oppression?” Draining water from coolers also drew her attention to melt as a condition of our current times, marked by climate change and images of fast-disappearing glaciers. She also discovered the materiality of freshness and frozenness, which pointed to a different kind of political economy than the one she had envisaged as a graduate student: an economy based not on commodity fetishism and labor exploitation, but on use value and short “farm-to-fork” circuits of exchange. Commodity trade, Marx argues, historically begins at the boundaries of separate economic communities otherwise based on non-commercial forms of production. As Marx explains, the commodity remains simple as long as it is tied to its use value: “A commodity appears, at first sight, a very trivial thing, and easily understood. Its analysis shows that it is, in reality, a very queer thing, abounding in metaphysical subtleties and theological niceties.”

Hi’ilei Hobart’s history of how artificial ice came to Hawai’i is heavily dependent on her sources. Scarce at the beginning, limited to a few advertisements and newspaper clippings (including publications in the local language, ‘ōlelo Hawai’i), they expand into a wider array of testimonies, photographs, business records, cookbooks, consumer goods, and personal memories as we move closer to the present. She first chronicles the great American ice trade, in which big blocks of ice harvested from lakes in the Northeast or in Alaska circulated the globe from 1840 to 1870, the year the first ice-making machines were introduced. The ice that went to the tropics was a luxury product, used in cocktails, to chill wines, and for service at fine hotels where American planters, Western missionaries, European tourists, and Hawaiian elites mingled. The ice-importing business never really took off in Hawai’i: even though entrepreneurs petitioned the local rulers for monopoly rights and invested in storage facilities, the venture remained unprofitable and was abandoned in 1860 after two decades of sporadic shipments. King Kamehameha III had mixed feelings about alcoholic beverages and iced punches: ruling over a “semi-European” polity that was modernizing fast, he also leaned toward the robust temperance movement championed by Western missionaries and lady patrons. He eventually died in 1854 after drinking from a poisoned punch bowl of iced champagne. Under the reign of the last Hawaiian king, Kalākaua, Honolulu was a fast-growing city with all the trappings of a Western metropole. ‘Iolani Palace, the royal residence, had electricity, indoor plumbing, and telephones even before Buckingham Palace or the White House. Among these technologies, ice machines and ice factories came into operation in the 1870s, transforming a once-foreign commodity into a local product.

Entering the ice age

Hawai’i entered the ice age at about the same time as the United States: home refrigeration, cold chains for perishable goods, ice cream parlors, and soda fountains connected Honolulu’s domestic life to global standards of modernity. But unlike on the mainland, the use of freezing technologies was subject to colonialist frames of interpretation and local resistance. Settler reports of Kānaka aversion to ice stood as an indictment of their slow progress toward “civilization.” Native people’s first contact with ice cream, allegedly experienced as burning hot rather than freezing cold, was derided as a sign of inferior civilizational status. Hawaiian-language newspapers, however, refuted insinuations that Kānaka Maoli were confused about or afraid of ice, and advertised the lavish cosmopolitan banquets, icy desserts included, served at ‘Iolani Palace. Still, haole (foreigners), ali’i (elite Hawaiians), and maka’āinana (local commoners) reacted differently to frozen tastes, reflecting hierarchies of class, gender, and racialized proximity to whiteness. These racist and classist distinctions manifested themselves after US annexation, during the pure food battles of the 1910s. The newly appointed food commissioner decided to apply US legislation strictly, banning poi, a local dish variously described as a truly delicious paste with a yeasty flavor or as “a native concoction that tastes like billboard paste,” and raising the butterfat content of ice cream to mainland levels, contradicting local tastes and the recipes developed by Japanese and Chinese ice cream vendors.

Shave ice, with its “rainbow” of flavors, is now offered as a metaphor for the “rainbow state” and its multiethnic, postracial population. As a symbol of Hawai’i’s racial landscape, the rainbow provides an important vehicle for the affective, and often tense, sentiments of identity and belonging. How did a food practice brought by Japanese migrants come to epitomize a US state, and how did a sugar plantation economy built along racial lines produce a racially harmonious society in the only US state with a nonwhite majority population? Historians trace the origin of shave ice to the Japanese agricultural workers and plantation store owners who brought the food tradition of kakigōri from Japan. Born in rural spaces where non-Hawaiians put down deep community roots, shave ice offers an alternative story about race and refreshment, one that is not tethered to whiteness and the leisure class, unlike the ice creams and tiki cocktails fetishized by the touristic gaze. Asian immigrant populations in Hawai’i, once systematically marginalized, have become a “model majority” characterized by upward class mobility and adherence to nationalist values. They dominate the local economy, to the point that scholars have coined the category “Asian settler colonialism” to describe the ascendancy of these formerly working-class communities of color. Hawai’i is now considered a laboratory for multiethnic harmony as well as a harbinger of what the whole United States could become: a postracial nation, turning its back on its history of Native American extermination and Black enslavement. These fictions mask ongoing structural racism against Native Hawaiians and other ethnic minorities (Samoans, Filipino Americans, and others). The shave ice success story glosses over such divisions and obscures Kānaka Maoli claims to Indigenous sovereignty. For present-day Hawaiians, it also brings back shared memories of childhood and nostalgia for “simpler times” characterized by community resilience, rural life, and modest means. Again, this nationalist narrative, envisioning an ahistorical and uncomplicated past, erases a history of racial discrimination and labor exploitation, and produces “Hawaiians” as an always already multiethnic category that excludes indigeneity and Kānaka Maoli claims to place.

Hawaiian futures

I don’t see much potential in an independent, sovereign, or post-statehood Hawai’i that would grant Indigenous people rights of self-determination and privileges of territorial ownership. There are other ways to tackle the deep structural inequalities and discrimination that affect the Native population. As the French have learned in French Polynesia, recognizing Indigenous rights is not synonymous with granting full independence or a right to secession. Politics of atonement and official apologies may be aligned with the Anglo-Saxon Protestant mindset, but they have their limits: short of reparations and restitution, they leave intact the structures of power that led to Native dispossession, and they do not improve the living conditions of Indigenous populations. Economic needs must also be addressed, and the responsibility of all leaders, whether oriented toward independence or not, is to chart a course that guarantees economic growth and sustainable development. I see tourism as an opportunity for Hawai’i, and militarization as a necessity born of historical and geopolitical concerns. Americans will always remember Pearl Harbor. Hawai’i is America’s first line of defense and its most strategic outpost in the Pacific. The security of the continent hinges on the continued presence of military forces, which, along with tourism, form the twin pillars of the economy. Envisaging a decolonial future for Hawai’i seems to me more utopian than realistic. And yet, with all these caveats in mind, I still find potential for decolonial approaches in modern scholarship about Hawai’i and other territories in the Pacific. Other Pacific islands have acceded to independence and have demonstrated the viability, resilience, and vitality of Indigenous sovereign states. In the case of Hawai’i, but also of the other US territories in the Pacific (the Northern Mariana Islands, Guam, and American Samoa), solutions might exist toward or beyond US statehood without resorting to full independence. Besides, scholarship and politics are distinct endeavors. The challenge that decolonial studies must address is the decolonization of the mind. I see much potential in a decolonial perspective on the history of Hawai’i and other once-occupied nations, and I learned as much from reading Cooling the Tropics as I enjoyed reviewing it. One can quote Marx without being a Marxist; one can use decolonial scholarship without believing in a decolonial future for Hawai’i.