Anatomy of a Feminist Diplomacy Campaign

A review of The Banality of Good: The UN’s Global Fight against Human Trafficking, Lieba Faier, Duke University Press, 2024.

On the face of it, fighting human trafficking has all the banality of a good deed. The adoption in 2000 of a UN Protocol on human trafficking, with a heavy focus on the sexual exploitation and abuse of migrant women, was a positive act of feminist diplomacy and was lauded as such by feminist groups in the West and in developing countries. The UN template set the stage for the development of a new international regime of norms and guidelines for how national governments, NGOs, and international organizations should actively work together in this fight. With the Trafficking in Persons Report published yearly, the US Department of State must be praised for having given teeth to the UN Protocol, allowing a carrot-and-stick approach to ensure compliance while naming and shaming bad performers. The US government was bold enough to point the finger at one of its closest allies, Japan, whose treatment of migrant women brought in under an entertainer visa scheme was clearly violating basic human rights. Japan did a good thing by applying international best practices and diminishing abuse: within two years, the number of Filipino women entering Japan on entertainment visas dropped by nearly 90 percent. All stakeholders can take pride in this result: women’s groups, feminist leaders, UN diplomats, American Embassy staff, Japanese case workers, law enforcement officers, and the victims themselves. This may not sound like a big deal, but they all did well. Hence, the banality of good.

Turning Hannah Arendt’s Banality of Evil on its Head

Under this narrative, the banality of good denotes step-by-step progress in the advancement of human rights and the fight against human exploitation. But let us pause. All of the above is not the story that Lieba Faier tells, and her expression “the banality of good” in fact has the opposite meaning. She uses it “to refer to the perils of this campaign’s globalized institutional approach, which ultimately privileges technical prescription and bureaucratic compliance over the needs and perspectives of those it means to assist” (p. 11). All stakeholders aiming to do good and alleviate the plight of victims of human trafficking missed their original goal or had to compromise on their principles. By bringing a global solution to local problems, the international community only made things worse. Foreign women working in the sex industry were forcibly deported on criminal charges of visa overstay; grassroots NGO workers became complicit in the expulsion of those they were supposed to protect; and police raids pushed the sex industry further underground. In titling her book The Banality of Good, Lieba Faier of course has in mind the expression “the banality of evil” coined by Hannah Arendt to denote the fact that evil can be perpetrated when immoral principles become normalized over time by people who do not think about things from the perspective of others. Evil becomes banal when people don’t feel bad when they do evil. Here, the banality of good reflects the opposite attitude: people don’t feel good when they are supposed to do good. They know something is wrong, but they can only attribute it to “the system” or hope that their action contributes to the realization of a greater good.

Over and over in Lieba Faier’s narrative, individuals and groups committed to the betterment of foreign women’s plight had to compromise on their strategic goals and core values. The original impetus to fight against traffic in women came from Asian feminist organizations and grassroots human rights groups in Japan, Korea, and South-East Asia. Beginning in the early 1970s, they built a regional coalition to respond to a rising tide of Japanese sex tourism in the region. They also had a broader agenda that was anti-capitalist and anti-colonial at its core, seeing sex tourism as a reflection of structural inequities among nations and between genders and classes. But the US feminist groups who picked up their fight obscured the structural factors foregrounded by the earlier efforts of women’s groups in Asia and framed human trafficking as a uniform global issue that warranted a single global response. This global feminist movement coined the expression “sexual slavery” to articulate a singular, abstract, deterritorialized global practice, overlooking racial, national, and class inequalities among women. They formed an alliance with the human rights movement to launch a campaign for the abolition of “violence against women,” with human trafficking as a key instance of this violence. Lieba Faier describes how a globalist feminist project then became a UN-centered global human rights initiative: the drafting and adoption of the Trafficking Protocol was based on compromise by both Asian grassroots organizations and US feminist groups, who were themselves divided between prostitution abolitionists and sex workers’ rights advocates. By establishing a formal definition of human trafficking and then collecting data on it, the protocol promised to recognize human trafficking as a global phenomenon for states to measure and institutionally address.

Reframing Sexual Violence

But when national governments decided to act, they did not focus on human trafficking as a matter of violence against women. Rather, they reframed the issue once again, this time as a matter of transnational organized crime warranting a punitive solution. What US-based feminism had identified as violence against women would be reframed as a generalizable problem of criminal violation enacted by individual private citizens against other private citizens. A model of redistributive justice was discarded in favor of a carceral model. Of the three Ps framework (preventing trafficking, protecting victims, and prosecuting traffickers), the third P was prioritized and the first two were sidelined. In Japan, grassroots NGO workers produced “trauma portfolios” of victims, collecting personal accounts of suffering to argue that foreign sex workers deserved protection and assistance, not treatment as criminals. NGO caseworkers’ accounts were so moving that American diplomats made the controversial decision to place Japan on the Tier 2 Watch List of the 2004 Trafficking in Persons Report. For Japanese bureaucrats, this was a huge blow to national pride: most advanced countries were ranked Tier 1, and Japan alone among them found itself on Tier 2. Something had to be done to restore Japan’s standing in the international community. The same narratives that had moved NGO activists and US diplomats into action were now perceived as a matter of national shame.

As Lieba Faier remarks, “People care about others for different reasons and thus to various ends” (p. 95). For NGO workers, reporting on victims’ stories of abuse to US embassy officials was a way of using gaiatsu, or foreign pressure, to induce reforms in domestic policies. But the Action Plan that the Japanese government enacted in 2005 was a bureaucratic exercise, devoid of compassion or concern for social justice. The “Roadmap to Tier 1” was rich in international best practices and indicators, but disconnected from facts on the ground. As a result of the screening process, migrants who were denied victim status were forcibly repatriated to their home countries or held liable for illegal residence (fuhō taizai). Only those officially recognized as victims of human trafficking received protected status, with a residency permit allowing them to remain in Japan or assistance to go back home. As one NGO caseworker confided to the author, “Sometimes I don’t feel good about the work I’m doing. These migrants have nothing back home” (p. xiv). Or as another worker put it, “They don’t have anywhere to go. For many, their life of extreme poverty in the home country is much worse than what they have now” (p. 15). These NGO caseworkers didn’t feel that justice was being served by those international protocols, but they worried that the situation would be worse if they refused to participate. So they complied with the “bureaucratic glue and strings” (p. 171) attached to being part of an international campaign against trafficking in women.

Support Comes with Strings Attached

Lieba Faier complements her fieldwork with archival work and interviews with UN officials and government representatives in Japan and in the United States. She dissects the various frames and translations that a social issue has to go through in order to become a legal provision in an international protocol; and how a UN template in turn translates into reality and alters the lives of women who may or may not be designated as victims of international trafficking. She brings an ethnographic eye to practices of helping migrant workers, campaigning for women’s rights, drafting UN templates, and translating legal texts into policy options. She reads “against the grain of bureaucratic documents to see the contradictions, aporias, and impasses embedded in them” (p. 101). As she describes it, the United Nations acts as a clearinghouse for such efforts, erasing history and geographical differences in the interest of establishing a standardized international practice. As she points out, “the rote adherence to an institutional protocol comes to stand for necessary structural change” (p. 13). Well-intentioned humanitarian campaigns produce unintended harm through bureaucratic routines and institutional priorities. These efforts prioritize protocol compliance over survivors’ needs, perspectives, and lived realities, leading to repatriation, compromised quality of life, and even criminalization of those they aim to help. The Japanese government offers assistance to only a small portion of those foreign workers suffering abuse and exploitation: “In 2018, only seven trafficking victims received repatriation assistance, and this number dropped to five in 2019” (p. 211). These “lilliputian pockets of improvement” (p. 213) mask an egregious failure to put an end to human trafficking. Even those who benefit from repatriation programs fall victim to “cruel empowerment” (p. 185): humanitarian programs designed to empower them through financial literacy or other neoliberal models of development fail to address the structural inequalities of the status quo.

Lieba Faier’s scholarship is informed by the years she spent as a volunteer in Japan, the Philippines, and the United States, working alongside NGO workers assisting migrant women and lobbying the UN and governments to address the mistreatment of foreign women working in the sex industry and other exploitative sectors. As she writes, “Doing multi-sited research involving multiple organizations in three different countries over many years had advantages insofar as I sometimes heard part of a story in one organization or country and the rest of the story in others” (p. 19). Her findings were also buttressed by the availability of US diplomatic cables disclosed by WikiLeaks, which documented internal processes and political motivations. Grassroots perspectives allowed her to question the way these migrant women’s plight was addressed in international policy forums. As she notes, “the global approach to this issue was sidelining, if not displacing, the expertise and guidance of the experienced NGO caseworkers whose labor was central to it” (p. xii). While these NGO workers were sometimes themselves former labor migrants and had a deep understanding of the situation they tried to alleviate, the organizations that brought the issue to an international public stage were headed mostly by academics, journalists, or lawyers with little direct knowledge of facts on the ground or contacts with grassroots organizations. She also questions the international campaign’s exclusive focus on the sex industry and the lack of attention paid to other forms of exploitative labor, such as the conditions faced by Asian workers who come to Japan under the Technical Intern Training Program (Ginō Jisshū Seido), which she describes as a cover for cheap and disposable labor acquisition.
Her advocacy for migrant rights doesn’t stop at one particular category, but is informed by “a vision of justice that asks national governments and their citizenries to see foreign workers as part of their imagined community” (p. 140).

A Plea for the UN

The Banality of Good is informed by the vision that “other worlds are possible,” as stated in the book’s opening dedication. But what are the alternatives? As a French diplomat committed to a feminist diplomacy agenda, I would not easily dismiss the United Nations’ approach to human trafficking or the work done by American diplomats to document Japan’s insufficient efforts in applying human rights standards. I agree with the author when she states that “if international guidelines are themselves problematic, little will be achieved by compliance with them” (p. 120). But this should serve as a rallying cry to devote more attention and resources to UN multilateralism and human rights campaigning. The Trafficking Protocol, with its lack of a credible enforcement mechanism and its emphasis on criminalization and border protection, is an easy target for attack. But internal debates show that the work was perfectible and that other policy options were put on the negotiation table. Mary Robinson, then UN high commissioner for human rights, pushed hard to have a human rights perspective embedded in the text. She proposed the addition of specific references, provisions, and language to acknowledge the rights of migrant workers, not just sex workers or those recognized as victims of trafficking; and she argued for strengthening the “victim protection and assistance” provisions in the draft protocol to allow financial resources to be devoted to helping victims of human trafficking. The fact is, we don’t have an alternative to the UN, and bottom-up approaches are compatible with international summitry and the drafting of legal texts. Concepts such as “responsibility to protect,” “rights-based approach,” or “human security” are not just abstract notions devoid of any content; they alter facts on the ground and induce real changes for people in need of international protection. Imperfect performance is no reason for inaction.

From Slumdog to Millionaire

A review of Producing Bollywood: Inside the Contemporary Hindi Film Industry, Tejaswini Ganti, Duke University Press, 2012.

Imagine you are a foreign graduate student doing fieldwork in Hollywood and you get to sit in a two-hour-long interview with a major film star like Brad Pitt or Johnny Depp. This is precisely what happened to Tejaswini Ganti in the course of her graduate studies at the University of Pennsylvania when she was researching the local film industry in Mumbai, now better known as Bollywood. And it happened more than once: she sat in interviews with legendary actors such as Shah Rukh Khan, Aamir Khan, Shashi Kapoor, Sanjay Dutt, and Amrish Puri, actress Ayesha Jhulka, as well as top producers and directors Aditya Chopra, Rakesh Roshan, and Subhash Ghai. What made this access possible? Why was a twenty-something PhD student in anthropology from New York able to meet some of the biggest celebrities in India? And what does it reveal about Bollywood? Obviously, this is not the kind of access a graduate student normally gets. Privileged access is usually granted to journalists, media critics, fellow producers, and other insiders. They observe the film industry for a reason: they are part of the larger media system, and they play a critical role in informing the public, evaluating new releases, building the legend of movie stars, and contributing to box-office success. As an anthropologist, Tejaswini Ganti takes a different approach to the Hindi film industry. As she states in her introduction, “my central focus is on the social world of Hindi filmmakers, their filmmaking practices, and their ideologies of production.” Her book explores “how filmmakers’ subjectivities, social relations, and world-views are constituted and mediated by their experiences of filmmaking.” As such, her work contributes little to the marketing of Bollywood movies: her book may be read only by film students and fellow academics, and is not geared towards the general public. As befits a PhD dissertation, her prose is heavy with theoretical references.
She draws on Pierre Bourdieu’s analysis of symbolic capital and his arguments about class, taste, and the practice of distinction. She uses Erving Goffman’s concept of face-work to describe the quest for respectability and avoidance of stigma in a social world associated with black money, shady operators, and tainted women. She steeps herself in industry statistics of production budgets, commercial outcomes, annual results, and box-office receipts, only to note that these figures are heavily biased and do not give an accurate picture of the movie industry in Mumbai.

Getting access

Part of Tejaswini Ganti’s success in getting access to the A-list of the Hindi film industry stems from her position as an outsider. As an “upper middle-class diasporic South Asian female academic from New York,” she didn’t benefit from “the privilege of white skin”—white European or American visitors could get access to the studios or film shoots in a way that no ethnic Indian outsider could—but she was obviously coming from outside and was not involved in power games or media strategies. For her initial contacts, she used the snowballing technique: personal friends in Philadelphia who had ties with the industry in Mumbai provided initial recommendations and helped her make her way through the personal networks and kinship relations that determine entry and access at every stage. Two different directors offered her the chance to join their teams of assistant directors on two films, fulfilling the need for participant observation that remains a sine qua non in anthropological research. People were genuinely puzzled by her academic interest in such a mundane topic (“You mean you can get a PhD in this in America?”) and eager to grant an interview to an outsider who had no stake in the game. Being a woman also helped: she “piqued curiosity and interest, often standing out as being one of the few—and sometimes only—women on a film set.” As she notes, she “did not seem to fit in any of the expected roles for women—actress, dancer, journalist, hair dresser, costume designer, or choreographer—visible at various production sites.” Contrary to common understanding about the gendered dimension of fieldwork, she actually had a harder time meeting women, specifically the actresses. She also experienced her share of sexual harassment, but as a young married woman with a strong will and a sharp wit she was able to handle unwelcome advances and derogatory remarks.
Last but not least, dedicating an academic study to Bollywood provided a certain cachet and prestige to an industry that was desperately in need of social recognition. Actors and filmmakers strived not only for commercial success, but also for critical acclaim and cultural appraisal. A high-brow academic study by an American scholar gave respectability to the Hindi film industry “which for decades had been the object of much disparagement, derisive humor, and disdain.”

She also came at a critical juncture in the history of the Hindi film industry. She carried out her fieldwork for twelve months in 1996 and completed her dissertation in 2000, a period associated with the neoliberal turn in India’s political economy. She made shorter follow-up visits in 2005 and 2006, and her book was published by Duke University Press in 2012, at a time when neoliberalism was in full swing and the nationalist right was ascending. The Hindi film industry’s metamorphosis into Bollywood would not have been possible without the rise of neoliberal economic ideals in India. Along with the rest of the economy, the movie industry experienced a shift from public to private, from production to distribution, from domestic audiences to global markets, and from entertainment for the masses to gentrified leisure. The role of the state changed accordingly. At the time of independence, most leaders viewed the cinema as “low” and “vulgar” entertainment, popular with the uneducated “masses.” Gandhi declared many times that he had never seen a single film, comparing cinema with other “vices” such as satta (betting), gambling, and horseracing. Unlike Gandhi, Nehru was not averse to the cinema, but was critical of the kind of films being made at the time. He exhorted filmmakers to make “socially relevant” films to “uplift” the masses and to use cinema as a modernization tool in line with the developmentalist objectives of the state. He created a cultural bureaucracy to maximize the educational potential of movies, with institutions such as Doordarshan, the public service broadcaster, and the Films Division, the state-funded documentary film producer. Prohibitive policies such as censorship and taxation as well as bans on theater construction limited the development of commercial cinema, even though India soon became the most prolific film-producing country in the world.
How to explain the shift in attitudes toward mainstream cinema, from a heavily criticized and maligned form of media to one the state actually celebrated, touting it as an example of India’s success in the international arena? There was, first, a rediscovery of cinema as national heritage, starting with the public celebrations of the cinema centenary in 1996. Cinema was also rehabilitated as an economic venture: large corporations such as the Birla Group, Tata Group, Sahara, Reliance, and others began to invest in the sector, displacing the shady operators that had associated Indian cinema with organized crime and money laundering. Multiplex construction replaced the old movie houses that had catered to the tastes and low budgets of the rural masses. Local authorities started to offer tax breaks for films shot in their territory, while government agencies began to promote the export of Indian films to foreign markets. Formerly seen as a tool for social change, cinema was now envisaged as an engine of economic growth.

The gentrification of cinema

The result of this neoliberal turn was a gentrification of cinema. This transformation was reflected in attitudes towards cinema, the ideology of industry players, the economic structure of the sector, and the content of movies themselves. One of the things that surprised the author when she began her fieldwork in 1996 was the frequent criticism voiced by Hindi filmmakers concerning the industry’s work culture, production practices, and quality of filmmaking, as well as the disdain with which they viewed audiences. In discussions with filmmakers, the 1980s emerged as a particularly dreadful period of filmmaking, in contrast with both earlier and later periods of Hindi cinema. The arrival of VCRs and the advent of cable TV were hollowing out the market for theater moviegoing from both ends, resulting in a decline in cinematic quality. The upper classes shunned domestic cinema altogether, the middle class increasingly turned to television and video recording, and working-class audiences had access to video parlors where a simple hall with a television and a VCR replaced large-screen theaters. Filmmakers had no choice but to cater to the base instincts of the public, resulting in trashy movies with clichéd plots and dialogues, excessive violence, explicit sex, and vulgar choreography. The young ethnographer saw a marked evolution in her return visits to the field after 2000: as the Indian state recognized filmmaking as a legitimate cultural activity, filmmakers themselves began to feel pride in their work and became accepted into social and cultural elites. For Tejaswini Ganti, respectability and cultural legitimacy for commercial filmmaking only became possible when the developmentalist state was reconfigured into a neoliberal one, privileging doctrines of free markets, free trade, and consumerism.
Urban middle classes were celebrated in state and media discourse as the main agents of social change as well as markers of modernity and development in India. A few blockbusters created a box-office bonanza and ushered in a new era for Bollywood movies. Released in 1995, Dilwale Dulhania Le Jayenge, better known by the initialism DDLJ, featured two young lovers (played by Shah Rukh Khan and Kajol) born and raised in Britain who elope amid beautiful scenery shot in Switzerland before facing the conflicting interests of their families in India. Love stories with extremely wealthy and often transnational characters began to replace former plots that often focused on class conflict, social injustice, and youthful rebellion. As the author notes, “through their valorization of patriarchy, the Hindu joint family, filial duty, feminine sexual modesty, and upper class privilege, the family films of the mid- to late 1990s were much more conservative than films from earlier eras; however, their visual, narrative, and performative style made them appear modern and ‘cool’.”

More than the content of films themselves, the material conditions of film-viewing and filmmaking were cited as the main impetus for elite and middle-class audiences to return to cinema halls. The 1990s saw the advent of the era of the multiplex: with their smaller seating capacities, location in urban centers, and much higher ticket prices, multiplex theaters transformed the cinematic experience and allowed filmmakers to produce movies that would not have been commercially viable in the previous system. “What the multiplex has done today is release the producer from having to cater to the lowest common denominator,” says veteran actress Shabana Azmi. Indian middle-class norms of respectability and morality were embraced by the cinematic profession, which sought to redeem an image formerly associated with organized crime, loose morals, and vulgar audiences. Girls from “good families” began to enter the industry as actresses, dancers, or assistants, their chastity protected by chaperones and new norms of decency on film sets: “while actresses frequently had to wear sexy, revealing clothing in certain sequences, once they were off camera their body language changed, going to great pains to cover themselves and create a zone of modesty and privacy in the very male and very public space of the set.” Male actors and directors also “performed respectability” and accomplished “face-work” by emphasizing higher education credentials and a middle-class lifestyle that set them apart from “filmi” behavior—with the Indian English term filmi implying ostentation, flamboyance, crudeness, and amorality. Many individuals whose parents were filmmakers explained to the author that their parents had consciously kept them away from the film world. Yet many actors and directors were second-generation professionals who entered the industry through family connections and kinship networks.
In Bollywood, cinema remains a family business, and while the Hindi film industry is very diverse in terms of the linguistic, regional, religious, and caste origins of its members, the unifying characteristic of the contemporary industry is its quasi-dynastic structure. Getting a foothold in the profession requires connections, patience, and, at least in the stereotypical view associated with actresses, a reliance on the “casting couch.”

An ethnography of Bollywood

This is why the kind of unmediated access, direct observation, and participatory experience that Tejaswini Ganti was able to accumulate makes Producing Bollywood a truly exceptional piece of scholarship. The author provides a “thick description” of an average day on a Hindi film set, rendering conversations, power relations, and social hierarchies. She emphasizes the prevalence of face-to-face relations, the significance of kinship as a source of talent, and the highly oral style of working. She depicts the presence of Hindu rituals, which have become incorporated into production routines, as well as the tremendous diversity—regional, linguistic, and religious—of members of the film industry. The movie industry is often analyzed through the lens of Hollywood norms and practices: her ethnography of Bollywood aims at dislodging Hollywood from its default position by describing a different work culture based on improvisation, on-the-job training, and oral contracts. Films, deals, and commitments are made on the basis of face-to-face communication and discussion between key players, rather than via professional mediators or written materials. Actors, directors, writers, and musicians do not have any formal gatekeepers or agents as proxies for attaining work. If a producer wants a particular star for a film, he speaks with him directly. Heroines are usually chosen after the male star, director, and music director have been finalized for a film project, and are frequently regarded as interchangeable. Spending time on a Hindi film set, it is hard to miss the stark contrast between stars and everyone else around them, especially the way stars are accorded a great deal more basic comfort than the rest of the cast and crew. Chorus dancers and extras—referred to as “junior artists”—often do not have access to makeup rooms or even bathrooms.
At any given point in time, only about five or six actors are deemed top stars by the industry, based on their box-office draw and performance. This makes the kind of access that the junior ethnographer enjoyed all the more exceptional.

Cinema is a risky business, and managing the uncertainty endemic to the filmmaking process is a key part of how the movie industry operates. Hindi filmmakers aim to reduce the risks and uncertainties involved with filmmaking in a variety of ways, from the most apparently superstitious practices—conducting a ritual prayer to Ganesh, the elephant-headed Hindu god regarded as the remover of obstacles, or breaking a coconut to celebrate the first shoot of the day—to more tangible forms of risk reduction, such as always working with the same team of people or remaking commercially successful films from the Tamil, Telugu, and Malayalam film industries. Although the driving force within the Mumbai industry is box-office success, it is a difficult goal, achieved by few and pursued by many. The reported probability of a Hindi film achieving success at the box office ranges from 10 to 15 percent in any given year. The entry of the Indian corporate sector in the twenty-first century has infused the industry with much-needed capital and management skills. Many of the new companies have integrated production and distribution, which reduces uncertainties around the latter. Measures such as film insurance, coproductions, product placement, and marketing partnerships with high-profile consumer brands have also mitigated some of the financial uncertainties of filmmaking. The gentrification of cinema and the growth of multiplexes have helped to reduce the perception of uncertainty associated with filmmaking by reducing reliance on mass audiences and single-screen cinemas. With their high ticket prices, social exclusivity, and material comforts, multiplexes have significantly transformed the economics of filmmaking. So has the growing importance of international audiences, with the South Asian diaspora providing one of the most profitable markets for Bollywood filmmakers.
Diasporic audiences, especially in North America and the United Kingdom, are perceived as more predictable than domestic audiences. Not only have the multiplex and the gentrification of cinema created new modes of sociability and reordered public space, but they have also reshaped filmmakers’ audience imaginaries. Filmmakers still strive to produce the “universal hit,” a movie that can please “both aunties and servants,” but at the same time they complain that audiences are not “mature” enough to accept more risqué stories or artistically ambitious productions. This definition of the public as divided between “the masses and the classes” operates as a form of doxa—that which is completely naturalized and taken for granted—within the film industry.

The role of the state

The Hindi film industry offers living proof that competing against Hollywood’s dominance requires neither high barriers against imported films nor massive subsidies for domestic movies. In the movie industry as in other sectors, the role of the government is to set a broad economic environment and to provide the sound and stable legal regime that film companies require. On this basis, film companies develop their business strategies and, in particular, take on the high risks inherent in this industry. A healthy domestic market requires that films of all origins compete on a level playing field to attract the largest number of domestic moviegoers. But very often government intervention in the film industry goes beyond providing a level playing field. Public support such as subsidies, import restrictions, screen quotas, tax relief schemes, and specialized financial funds holds a preeminent place in the film policies of many countries. A generous film subsidy policy or certain import quotas can inflate the number of domestic films produced, but they rarely nurture a sustainable industry and often translate into a decline in film quality and in the viewing experience. In India, the government took the opposite approach to regulating the sector. Instead of subsidizing the industry, economic policies have treated cinema as a source of tax revenue rather than as an engine of growth. The bulk of taxation is collected by individual state governments through the entertainment tax, a sales tax imposed on box-office receipts that ranges from 20 to 75 percent. India’s cinema industry has faced other regulatory hurdles, such as restrictions on screen construction that have hindered the expansion of cinemas, especially in smaller towns and cities.
Even after being accorded official status as a private industry in 2001, moviemakers had tremendous difficulty obtaining institutionalized funding, except for established companies that need the capital least and can borrow at lower bank interest rates than private financiers charge. The influx of capital from established financial institutions and business groups also brought in much-needed management skills and planning capabilities. As a result, Bollywood has outperformed most of its competitors across a range of key dimensions (number of films produced, box-office revenues, etc.) with a much lower level of subsidies than other countries and—above all from a cultural perspective—with an increase in the quality and popular appeal of its movies compared with earlier periods or with foreign productions. Put that to the credit of neoliberalism.

The Coder Who Came in from the Cold

A review of From Russia with Code: Programming Migrations in Post-Soviet Times, Mario Biagioli and Vincent Antonin Lépinay eds., Duke University Press, 2019.

From Russia with Code is the product of a three-year research effort by an international team of scholars connected to the European University at Saint Petersburg (EUSP). It benefited from the patronage of two important figures: Bruno Latour, who pioneered science and technology studies (STS) in France and oversaw the creation of a Medialab at Sciences Po in Paris; and Oleg Kharkhordin, a Russian political scientist with a PhD from the University of California at Berkeley who served as EUSP’s rector for most of the study’s duration. Based on more than three hundred in-depth interviews conducted from 2013 through 2015, the research project also benefited from a rare window of opportunity offered by the political conditions prevailing at the time. Supported by a consortium of Western research institutions, it was partially funded by a grant from the Ministry of Education and Science of the Russian Federation for the study of high-skill brain migration. It could build on the solid foundation of EUSP, a private graduate institute whose academic independence is secured by an endowment fund that is one of the biggest in the country. The brain drain of IT specialists was obviously a matter of concern for Russian authorities, as surveys showed that in 2014 the emigration of Russian scientists and entrepreneurs was by a wide margin the highest since 1999. The movement was amplified after 2014 by Russia’s annexation of the Crimean Peninsula and, in 2022, by its all-out war of aggression against Ukraine. Conditions for fieldwork-based studies and international research projects in Russia would certainly be different today. The book’s chapter on civic hackers illustrates how fast the ground has shifted in the past ten years: most of the civic tech projects it describes were affiliated with the foundation created by Alexey Navalny, a Russian opposition leader who was detained in 2021 and died in a high-security prison in February 2024.

Preventing the brain drain

The research questions framing the project demonstrate how social science can contribute to policy discussions while translating practical issues into scholarly interrogations. The concerns of the Russian authorities that sponsored the project are well reflected in the topics covered and the questions addressed. How can Russia prevent or reverse the brain drain that was perceived as a direct threat to the nation’s sovereignty? How can it avoid dependence on Western imports and cultivate world leaders in an industry dominated by GAFA? Is import substitution in the IT sector a viable strategy, or should the country rely on foreign direct investment and integration into global value chains? Could Russia create its own version of Silicon Valley by encouraging the clustering of industries in special economic zones and technoparks? These questions are reframed and displaced through the lenses of the disciplines mobilized by the members of the research team: STS, market transition theory, economic geography, innovation policy studies, corporate management, migration studies, and so on. But above all, From Russia with Code helps answer the questions that readers familiar with IT know all too well: why are Russian programmers so talented and prized by the market? What explains their unique combination of skills, and how can these skills be integrated into a foreign business setting? Is it true that their technical prowess is offset by a lack of managerial skills and poor entrepreneurial spirit? The list of famous Russian IT developers includes Andrei Chernov, one of the founders of the Russian Internet and the creator of the KOI8-R character encoding; Andrey Ershov, whose research on the mathematical nature of compilation was recognized with the prestigious Krylov Prize; Mikhail Donskoy, a leading developer of Kaissa, the first computer chess champion; Alexey Pajitnov, inventor of Tetris; and Yevgeny Kaspersky, founder of the cybersecurity and anti-virus provider Kaspersky Lab.
Russia is one of the few countries not dominated by Google, Facebook, and WhatsApp, having developed its own search engine (Yandex), social network (VKontakte), and messaging app (Telegram). A last question lurks in readers’ minds: what are Russian hackers really up to, and should we be afraid of their cyberattack capabilities?

The standard diagnosis of Russia’s IT capacity is framed by transition theory and posits that “Russians historically have been good at invention but poor at innovation.” Russian computer scientists have built successful academic careers outside their homeland, and many global technology giants such as Apple, Google, Intel, Microsoft, and Amazon retain Russian programmers as valuable talent. Yet Russian IT entrepreneurs are scarce both in Russia and abroad, and outstanding success stories are the exception rather than the rule. It took a generation to produce a Sergey Brin, co-founder of Google, who arrived in the United States at the age of six and whose Russian Jewish parents pursued teaching and research careers rather than turning to the corporate world. The virtuosity of Russian software programmers is often explained by their high-level training in mathematics and pure science. The Soviet Union maintained a top-class scientific apparatus, from the fizmat model high schools specializing in math and physics to the dense network of research institutes, science cities, and elite academic institutions like the Academy of Sciences. This strong institutional basis translated into a high number of Nobel Prizes and science olympiad laureates. Russian IT developers are praised for their deep interest and immersion in research, an inventive turn of mind, the ability to think independently and offer innovative solutions, and their intuitive grasp of complex problems. But they are also lambasted for their lack of management and entrepreneurial skills. Management was something to which Soviet scientists and science students had virtually no exposure. Even now, business culture is still perceived by many in the community as superfluous and even disingenuous. According to the standard view, Russian tech specialists are often interested mainly in new and technically exciting projects, to the point of disregarding the interests of their clients.
They tend to think that if an idea is good technically, it will automatically translate into commercial success. They are criticized for a lack of business acumen, poor business etiquette, a certain intolerance for risk, a limited sense of the global market, and disinterest in management issues, which they see as “bullshit.”

Lack of management skills

The studies assembled in From Russia with Code both validate and complicate this diagnosis. Russian IT specialists are certainly heirs to a tradition that values the plan over the market, pure science over applied technology, and elegant responses to abstract questions over practical solutions to specific problems. Technical skills can be acquired through brute force and a sound foundation in basic science; management culture takes much longer to cultivate and relies more on “soft skills.” The history of computer science in the Soviet Union lies at the root of the differences in programming cultures between East and West. As long as informatics remained a basic science akin to applied mathematics, Soviet scientists remained at the forefront of the discipline. Although cybernetics was initially denounced as an American “reactionary pseudoscience,” it quickly became part of a vision of a socialist information society. As in the United States, early computers were intended for scientific and military calculations. A universally programmable electronic computer known as MESM was created in 1950 by a team of scientists directed by Sergey Lebedev at the Kiev Institute of Electrotechnology. Electrical engineering and programming were among the few careers in the Soviet Union relatively open to Jews and to women, hence their large numbers in these professions. Engineering education was fairly broad, with a heavy emphasis on mathematics and physics but little hands-on exposure to computers: according to one former student, “learning to program without computers was akin to learning to swim without water.” Hardware limitations forced Soviet programmers to write programs in machine code until the early 1970s, by which time the Soviet government had decided to abandon the development of original computer designs and to encourage the cloning of existing Western systems.
A program to expand computer literacy in Soviet schools was one of the first initiatives announced by Mikhail Gorbachev after he came to power in 1985. A network of after-school education centers offering programming classes for children made Basic and other programming languages widely popular.

A half century’s worth of Soviet experience with computing did not just disappear overnight with the end of the Soviet Union. Russians continued to play by the old rules they had internalized in the Soviet economy. The technical skills that Russian software programmers are internationally appreciated for and identified with are skills they developed through the very specific Russian (and formerly Soviet) educational system. A case study of Yandex, the company behind Russia’s main search engine, the fourth-largest search engine in the world, illustrates how coding socializes IT workers and creates communities of practice aligned with corporate objectives. Computer code is written in languages that must be executed by machines, leaving no space for semantic ambiguities. At the same time, and for the same reason, there is a specific sociality to code, to the extent that lines of code also encapsulate relationships of collaboration, training, and skill transfer. At Yandex, young recruits are encouraged to immerse themselves in the company’s source code and to spot errors or typos for debugging. This way they learn the conventions of the community, all of which are inscribed in the codebase. Face-to-face interactions and oral communication are limited, as developers work from different office buildings and spend most of their time facing their computer screens, writing code or discussing projects in chat channels. Yandex has a tradition of writing code without comments in natural language: the code should be able to “speak for itself” by being accurate, simple, and “clean.” The very first thing every new employee has to learn is how to make code readable and improve its utility for human readers. As in other programming communities, there is a difference in style between the “mathematicians” who prefer high-level languages such as Python and the “engineers” who favor low-level languages like C++.
But projects at Yandex often mix the two approaches, while the corpus they create remains open to criticism and correction. All employees have access to the full codebase of the company and are free to comment on ongoing projects, upholding long-held principles of communal help that hark back to an idealized Soviet past.

Smart cities and technoparks

A key concern of policymakers is to create conditions in which the IT industry can flourish. Interventions to promote public-private partnerships and foster cooperation between institutions and actors occur at different scales, from macro to micro: special economic zones, regional corridors, smart cities, creative hubs, technoparks, startup incubators, rentable workspace, and so on. Russia can build upon a model of science promotion that concentrated resources in isolated science cities and non-teaching research institutions such as the Academy of Sciences. This model was successful at generating scientific breakthroughs and achieving technological milestones in fields such as space exploration and the nuclear arms race. However, it consistently failed to translate scientific discovery into technological innovation and market success. Commercialization was never a priority in the planned economy. In the IT sector, where innovation was increasingly driven by the market, the Soviet Union soon lost its lead in basic science and cybernetics and was reduced to licensing or copying Western technologies. Emerging from the ruins of the Soviet Union, the Russian state had its own particular vision of IT development. It aimed not simply at imitating the West, but at keeping innovation within state control through authoritarian policy decisions and administrative guidance. But instead of supporting existing science cities and research institutions, the state decided to build a new technological apparatus separate from the Soviet one and inspired by the Silicon Valley model. As a result, Russia got the worst of both worlds: increased competition and the profit motive led many IT professionals to exit the country in search of more remunerative opportunities, while domestic industrial policy gestured toward Silicon Valley but continued to follow the template of the old Soviet science apparatus.
Created with great fanfare by then President Dmitry Medvedev, the Skolkovo “Innovative City” is almost impossible to find on a map and very difficult to reach from Moscow. At the time of the book’s writing, it was criticized for “inefficiency, corruption, high rents, a complicated architectural plan, and a failing program for the support of startup companies.” Technoparks have been established in many other Russian cities to host both IT startups and larger technology companies. But local authorities are competing against each other through incentive and subsidy programs, while thousands of IT specialists have left the country and are unlikely ever to return. Meanwhile, grassroots initiatives and homegrown developments were annihilated by the state’s attempt to regain control over peripheral regions. In the Russian Far East, a thriving ecosystem built around the online trading of used Japanese cars was suppressed at one stroke of a pen when the Russian state imposed a hefty levy on imported cars more than five years old. Other experiments, such as Kazan’s self-branding as “the capital of the Russian IT industry,” have met with more support from the centralizing state, whose priorities are aligned with the interests of local politicians in Tatarstan. However, at present the city plan remains more a layout than a fully functional smart city, and the reader cannot escape the feeling of being led through a Potemkin village by an overly enthusiastic research guide. It is easy to adopt the jargon of IT success and talk the talk of startup promotion. To walk the walk is another matter.

Russia’s Soviet heritage continues to linger in the present. But the Western capitalist model exemplified by Silicon Valley does not represent the sole alternative. Not all Western countries share the same approach to running IT businesses. Elements of the socialist model, such as an orientation toward social justice, have influenced policies and mindsets in Scandinavia, where Russian expatriates appreciate the communalist ethos and the family-friendly environment. Other Russian migrants who have relocated to Boston or to Israel place high value on a corporate capitalist model of large organizations that are both risk-averse and profit-oriented. As the last article in the book concludes, “the entrepreneurial capitalism of Silicon Valley is not the only game in town.” There are circumstances in which a “socialist” technological model or a “corporate” capitalist model is more applicable than the purely “entrepreneurial” model of IT startups and venture capital. From a Russian perspective, it makes sense to cultivate the tradition of high technical skills and complex problem-solving that constitutes Russia’s Soviet heritage. Business models that originate in the academic community are quite distinct from the capitalist motive of profit generation. Even in the West, open-source programming and the free software movement have led to sustainable ventures and now undergird a vast portion of today’s internet. Moreover, the lack of entrepreneurial spirit among Russian IT specialists may be due to institutional factors: the lax attitude toward intellectual property, the absence of trust among young professionals, the relative isolation of Russia from global trade patterns, the absence of venture capital and related services to scale up enterprising businesses, the shadow of the criminal economy, etc. According to the authors, the brain drain narrative also needs to be complicated.
Experiences of work migration by IT professionals from India and China have demonstrated that the “brain drain” is not an unfixable curse and can instead be viewed as “brain circulation,” with people looking for better conditions regardless of country. Here again, the profit motive is not the only driver of individual decisions. Student and young-researcher mobility is increasingly part of the academic curriculum, and the choice of destination is often motivated by existing collaborative networks or diasporic connections. Scholars get a first taste of academic life abroad by spending a few months as a postdoctoral researcher or guest lecturer before considering longer-term migration options. The same step-by-step process of migration can also be found in the corporate environment, where the decision to relocate is preceded by offshoring contracts and temporary missions. The story of Russian Jewish IT practitioners migrating to Boston during the Soviet period dispels the myth of the “tech maverick” and shows that migrants often have to retrain and upgrade their skill sets before they can find employment in US companies. The concept of brain drain assumes a kind of inherent and fixed value in the “brains” that leave their homeland and settle abroad. In practice, however, migration often leads to occupational downgrading, deprofessionalization, and deskilling, as highly educated graduates lacking connections and job-search skills end up in low-skilled work or, at best, in “upper-middle tech” at big US corporations. The failure to produce technological entrepreneurs among Russian immigrants should not be read as a result of their inability to operate in a capitalist economy or a lack of entrepreneurial skills. Considering the limited options offered to migrants in a new environment, settling for a mid-level position in a large corporation instead of starting a new high-risk venture seems like a reasonable choice.

The shadow of cyber criminality

In addition to the three models identified by the authors—socialist, entrepreneurial, and corporate—there is a fourth model that they do not consider in their essays: the criminal one. Much late-Soviet entrepreneurial activity emerged as an antidote to the country’s collapsing economy, and “dishonest speculation” was seen as the predominant form of engaging in business activities. Between semi-legal market practices and criminal activities there was only a fine line, which many young professionals equipped with IT skills were ready to cross. The same skills that made fizmat school graduates valuable on the IT job market could also be turned toward quick gains in the shadow economy. During Russia’s market transition, the grey zone between legitimate, semi-legal, and illegal activity led to surprising developments, such as a publicly organized conference of avowed criminals that took place at the Hotel Odessa in May 2002. The First Worldwide Carders Conference was convened by the administrators of CarderPlanet, a website on the dark web that specialized in mediating between vendors and purchasers of stolen credit card data. In the early age of e-commerce, when American banks and card issuers lagged behind the chip-and-PIN technology that their European counterparts had developed, “carding,” or credit card fraud, became a very lucrative activity. Russian fizmat kids with access to a computer and an Internet connection turned into early-day hackers and cybercriminals. CarderPlanet became the breeding ground of a whole generation who turned to cybercrime for lack of better opportunities in the context of a crumbling economy and a disintegrating state. Later on, these hackers turned to ransomware as the preferred mode of attack and to bitcoin as the privileged means of payment. Russian cybercriminality cannot be understood without appreciating its relationship to Russian national security interests.
Early on, the FSB, Russia’s secret service, made it clear that any criminal operation against domestic state interests was off-limits and would be met with strong retaliation. Later, criminal gangs were mobilized for cyberattacks against newly independent states such as Estonia and Georgia. Members of cyber gangs were also recruited into notorious state-backed hacking teams such as APT28, the group linked to GRU Unit 26165. Cybercriminals hide behind anonymity services, encrypted communications, middlemen, puppet accounts, and pseudonyms, which makes it challenging for law enforcement agencies, let alone social scientists, to track them or describe their practices. A few facts highlighted by From Russia with Code might however be relevant here. Like conventional Russian software developers, Russian cybercriminals and hackers are likely to value technical prowess and coding virtuosity above all else. For them, code is a political instrument with the power to challenge geopolitical power relations and capitalist business interests. Code also serves to create groups and communal identities of like-minded professionals, like the software-writing teams at Yandex. Studying their coding style and particular signatures may help intelligence agencies attribute cyberattacks to known actors in Russia, thereby responding to the challenge of attribution in cyber warfare. Like the professionals described in the book, Russian cybercriminals are likely to have a transactional relationship with the motherland. They are also geographically mobile, and they need to venture abroad to close some illicit transactions, which gives Western law-enforcement agencies an opportunity to locate them and put them behind bars. Most participants in the 2002 CarderPlanet conference have since been identified, tracked down, arrested, and convicted.

A Diplomat’s Dog in India

A review of Indifference. On the Praxis of Interspecies Being, Naisargi N. Davé, Duke University Press, 2023.

My wife and I are moving to India along with our dog Kokoro, a shiba inu. Kokoro, aged 13 (a venerable age for a dog), has already been around, seen places. As a diplomat’s dog, he has had to follow his keeper on his foreign assignments. He has never set foot, or paw, in the land of his ancestors, and doesn’t come with us when we travel to Japan. He remained in France when I was posted in Seoul—not because he was afraid of staying in a country where dog meat consumption is still not uncommon, but because I went to Seoul as a goose father, or gireogi appa, as Koreans call a breadwinner living away from wife and kids and sending money home for the sake of the children’s education. Kokoro did come to Vietnam during my most recent assignment. He and my wife had a hard time adapting to the local culture. Pets are increasingly familiar in Vietnamese cities, but many people still regard dogs as uncouth and unclean, keeping them away from human contact. My wife could never determine whether people waving or wagging a finger at her and her dog to tell them to go away were being aggressive toward a foreigner or simply discriminatory toward a dog. She had to carry a stick when walking Kokoro in the neighborhood park to ward off stray dogs, and was once attacked and bruised by a mutt. Wherever we went, she joined local NGOs or Facebook groups mobilizing for animal protection and pet welfare.

Animal protection in India

I picked up Naisargi Davé’s book because Indifference was ostensibly about human-animal relations and animalist cultures in India. The questions that I had in mind were related to the conditions that would await Kokoro and his keepers in our future location. Is there a pet culture in Indian cities, and can one easily find dog food and specialized services such as vets and pet sitters? Do street dogs carry rabies, and are they aggressive toward pet dogs and their keepers during walks? What is the general attitude of the population toward non-human animals in general and dogs in particular? Are there local organizations of pet owners or animal rights NGOs that we could join? Is violence against animals or the unethical treatment of non-human species an issue? Are Indian cows really sacred, and why do they get such special treatment? Davé’s book didn’t provide answers to these questions, at least not directly. Nor was it meant to. Anthropology, at least as it is practiced now, is not the discipline that will answer practical questions about a foreign country or a particular culture. There are other books for that: travel guides, how-to manuals, journalistic accounts, or expat diaries. Naisargi Davé is not interested in South Asian cultures or civilizations in the traditional sense. Nowadays culture is a fraught concept in anthropology; hardly anybody uses the notion any more. The frontier of the discipline lies in queer studies, new materialism, animalism, and deconstructing notions of race, gender, and identity. As an author published by a cutting-edge academic press, Davé is committed to pushing the envelope further, not to revisiting bygone notions.

One way she connects with the discipline’s past is through fieldwork and participant observation. Anthropology departs from armchair theorizing and cannot be practiced from a desk. Ethnographers have to go into the field, meet people, participate in activities, observe their surroundings, and take notes or keep a research diary. Davé conducted her ethnographic fieldwork in several Indian cities over a period of ten years, documenting animal activism and interspecies relations through participant observation in local NGOs. She didn’t follow a structured methodology or engage in survey research; instead, as she describes it, “I followed my intuitions, went where I was invited; and, in general, said yes to who and what turned up.” She associated with several strands of Indian society, from rags to riches, from pariah to nabob. She had several discussions with Maneka Gandhi, India’s best-known animal activist and an heir to the Nehru-Gandhi political dynasty, but she also followed street workers roaming across popular settlements and red-light districts to heal wounded dogs and rescue suffering animals. She provides a long list of animal rights organizations: People for Animals, Welfare for Stray Dogs, Kindness for Animals and Respect for Environment, the Society for the Prevention of Cruelty to Animals, Help in Suffering, Compassion Unlimited plus Action, the Animal Welfare Board of India, Humane Society International, Save Our Strays, etc.

Dog riots and cow vigilantes

Some organizations originated in the colonial period and were significantly shaped by foreigners or expatriates. Others follow a purely domestic agenda and reflect local cultures of animal protection. Jains, for instance, are strict vegetarians who try to avoid all harm to humans and animals; many Jain monks and nuns even wear fabric over their mouths to avoid breathing in insects or microbes, and sweep the ground ahead of themselves while walking to avoid treading on bugs. Almost every Jain community has established animal hospitals to care for injured and abandoned animals; many Jains also rescue animals from slaughterhouses. Dogs are considered sacred in the Zoroastrian religion, and an attempt by the British government to exterminate Bombay’s stray dogs in 1832 led to a mass protest known as the Parsi Dog Riots. The Great Mutiny of 1857 originated in rumors that Indian soldiers’ bullet cartridges were greased with pork fat (repulsive to Muslims) or beef fat (insulting to Hindus). The Gau Seva Sangh, or Society for Service to the Cow, is associated with the Hindutva right and has sponsored laws banning cow slaughter in almost all of India’s 28 states. Cow vigilante groups have been accused of enforcing this ban through violence, often leading to the lynching of (mostly Muslim) meat sellers and cattle traders. For Davé, cow protectionism “is not animal welfare: it is exclusively about the cow, the cow as a weaponized symbol that separates those who eat or slaughter cows (Muslims, Dalits, tribal people, and Christian minorities) from those who, by doctrine, do not (caste Hindus, or savarnas).” Davé also makes a distinction between “hands-on” and “hands-off” animalists. The former get their hands dirty, “pick up poop,” and insert their fingers into dogs’ behinds to extract maggots; the latter keep their hands clean and are also called laptopwala or AC-wala (those who reside in air conditioning).
Davé follows the former into animal shelters caring for three-legged dogs, paralyzed pigs, and a long list of other species; on street patrols doing community service for suffering animals; and on inspection visits to slaughterhouses or poultry farms.

Davé notes contradictions that often shock foreigners or distant observers of Indian society. Compassion for suffering animals can coexist with indifference to the plight of humans or cruelty toward other species. Ahimsa, a central doctrine in Hindu, Jain, and Buddhist thought, is often invoked to laud the moral relationship between people and animals in India. But it can also be read as an abnegation of responsibility, a haughty indifference toward every social issue that does not directly involve the abuse of animals, or a refusal to expose oneself to ethical quandaries. Her ethnographic vignettes include the story of Abodh, a street veterinarian who gets his hands dirty cleaning the wounds of animals and asks only for water to wash his hands as payment; of Dipesh, a street worker who patiently cleans a dog's butt infected by maggots and disposes of the worms on a discarded newspaper, then expresses indifference when the dog almost gets run over by a car; of Retired Brigadier-General S.S. Chauhan, who cares so much for his cow (named Kamadhanu) that he gives up drinking milk after she stops lactating; of Amala Akkineni, a South Indian actress who converts to the animal cause when she picks up a goat hit by a lorry on the side of a road; or of the Brahmin who has a run-over bull in agonizing pain moved ten feet from his land onto the road, where it can be shot without compromising his ahimsa. Less savory stories include mob killings of Dalit or Muslim villagers accused of having slaughtered a cow, or the many cases of bestiality, or animal sexual abuse, reported by the media. In a provocative chapter co-written with Alok Gupta, Davé draws a parallel between sexual violence against animals and livestock insemination, which she labels "permissible interspecies sex."

Ethnographic vignettes and biographical portraits

Davé traces the history of animal protection in India through three portraits of women who advocated for the rights of animals from very different perspectives, which she calls "the odious," "the genial," and "the luminous." Savitri Devi Mukherji shows that moral attention to animals can be morally repulsive. Born in France to European parents, she developed a fixation on cats and Aryans as well as a lifelong animosity toward Britain. She came to India in 1932, stayed briefly at Rabindranath Tagore's ashram in Shantiniketan, and became an apologist for Hitler and his crimes against humanity (which she found "hopelessly amateurish" in comparison to other atrocities). According to Davé, Savitri Devi provides the link between radical ecology, Nazism, and the Hindutva movement. Crystal Rogers, by contrast, was moved by compassion toward sentient beings, humans and non-humans alike. The author of the autobiography Mad Dogs and an Englishwoman was "the original kutta-billi activist," or dog-cat lover, creating several shelters for abandoned and injured animals that exist to this day. The third figure, Rukmini Devi Arundale, was a student of the theosophist Indophile Annie Besant and became an icon of the animal welfare movement in India through her sponsorship of the Prevention of Cruelty to Animals Act (PCA) in 1960 and her service on the Animal Welfare Board of India until 1986. She also espoused the cause of Bharatanatyam, a traditional dance from Tamil Nadu then scorned by the elite, and created the Kalakshetra dance academy in Chennai. In addition to these three biographies, Davé mentions the Nehru-Gandhi family's attachment to animal welfare. Jawaharlal Nehru, the father of India's independence, was an animal lover in the bourgeois sense of the word. He considered it a point of pride to have pushed through the PCA Act.
His daughter, Indira Gandhi, sponsored the wildlife conservation initiative Project Tiger in 1973 and wrote that "someday I hope people will shoot only cameras, and not guns, in the jungle." Maneka Gandhi married Indira's younger son Sanjay, who died in a plane crash in 1980. An animal rights activist and a politician, she led protests against the opening of India's first McDonald's restaurant in 1996, stating that "we don't need cow killers in India."

But those vignettes and portraits aside, Davé refrains from providing "context." Asked by a colleague from another discipline to give some context to her anecdote about the dog's butt infected with maggots, she answers tongue-in-cheek with a long description of a dog's anatomy and a chemical analysis of the anal glands that secrete a strong smell to keep parasites away. For her, we should not talk about dogs and animal welfare in general, but about this particular dog in a specific situation. She echoes Donna Haraway who, in When Species Meet, criticizes Jacques Derrida for describing (in The Animal That Therefore I Am) his reaction to a cat staring at him naked without indicating the cat's name or taking the cat's point of view. While she limits her perspective to the ground level, she also offers a meta-analysis of her topic: she does not write directly on animal ethics in India, but on what it means to raise the issue of animal welfare in specific situations. She also warns against the scientific impulse to "look, stare, take in, pillage, acquire, ingest, dissect, admire, anthropologize, steal, exhibit, repair, voice, recoil, sell, and possess." Curiosity—the basic drive of the social scientist—is a politically tainted notion. As she confesses, "I can say for myself that, as a dyke, the curious gaze of normal people is rarely a pleasure." As she puts it, "one should at least sometimes just leave folks alone." The requirement for total transparency, like language for Roland Barthes, is fascist by nature. Instead of the inquisitorial gaze of the curious observer, she advocates "indifference to difference." Indifference calls for a different politics as well as a poetics of relation and identity.
Naisargi Davé concurs with Édouard Glissant when the French Caribbean poet proclaims: “we clamor for the right to opacity for everyone.” Her particular form of opacity is called queerness, or the refusal to conform to heteronormative scripts: “as for my identity,” she echoes Glissant, “I will take care of that myself.” We should have respect for mutual forms of opacity: in the words of the intersectional feminist and poet Audre Lorde, we should be “at the watering hole / not quite together / but learning / each other’s ways.”

Pets and diplomacy

Over the years and across my foreign postings, I have accumulated many stories about pets and diplomacy. The dog of the US Ambassador to Seoul was quite a celebrity, and his own Twitter account attracted many followers. During morning walks, people stopped his master in the street because they recognized the dog, not the diplomat. There is a popular hashtag for #diplocats on Twitter, with Larry, the cat of 10 Downing Street, as a frequent guest star. An infamous picture shows the French Ambassador to Rwanda boarding an evacuation plane at the beginning of the 1994 genocide along with his dog, while leaving behind Rwandan employees and partners to certain death. Instructions on relocation always include a few paragraphs on pets, describing the minute procedures that dog or cat owners have to follow in order to bring their animal companion with them. Some countries impose quarantines or procedures so complex that relocation has to be planned at least six months in advance, while others, such as the Maldives, prohibit bringing in or owning dogs by law. Pets are a sweetener in international relations: state visits sometimes end with the offering of a pet or a live animal as an official gift, and China conducts its own panda diplomacy by sending giant pandas to the zoos of friendly countries. In 1949, India's first Prime Minister Jawaharlal Nehru received a letter from the children of Japan with an almost preposterous request: they had never seen a live elephant and wanted him to send one to them—which he did, starting India's own brand of elephant diplomacy. Some gifts carry a mixed message, such as the dog Pushinka given to President Kennedy by Premier Khrushchev in 1961. As a puppy of Strelka, one of the first dogs to survive orbital spaceflight, Pushinka was a gesture of friendship but also a lingering, living reminder of the Soviets' early victories in the space race.
But pets can also be weaponized: witness the picture of German Chancellor Angela Merkel, reportedly fearful of dogs since one attacked her in 1995, looking distinctly uncomfortable when Russian President Vladimir Putin brought his large black Labrador Koni into a meeting at his summer residence in Sochi, Russia, in January 2007.

The Party Left and the Hindu Right in Kerala

A review of Violence of Democracy: Interparty Conflict in South India, Ruchi Chaturvedi, Duke University Press, 2023.

Violence of Democracy studies a long-standing violent antagonism between members of the party left and the Hindu right in the Kannur district of Kerala, a state on the southwestern coast of India. The term party left refers to members of the Communist Party of India (Marxist), or CPI(M); the term Hindu right denotes affiliates of the Rashtriya Swayamsevak Sangh (RSS) and the Bharatiya Janata Party (BJP), which has held power in New Delhi since 2014. The prevalence of violence in Kerala's political life presents the reader with three paradoxes. First, political scientists view democracy as a pacifying system, as the regime most capable of keeping violence at bay. Autocracies are violent by nature; democracies are supposed to be more peaceful, both between themselves (democracies don't go to war against each other) and within their borders (antagonisms are resolved through the ballot box). But Ruchi Chaturvedi shows us that democracy can coexist with violence; indeed, that some characteristics of a democratic regime call forth the very violence it is supposed to contain. As she states in the introduction, "violence, I argue, not only reflects the paradoxes of democratic life, but democratic competitive politics has also helped to condition and produce it." This criminalization of domestic politics has a long history in Kerala, and Violence of Democracy documents it by revisiting the life narratives of key politicians from the left, by going through judicial cases and media reports of political violence in the Kannur district, and by conducting ethnographic interviews with grassroots militants from both parties. This book will be of special interest to social scientists interested in Indian politics as viewed from a southern state that now stands in opposition to the Modi government. But the author also raises disturbing questions for political scientists more generally: is democracy intrinsically violent?
What explains the shift from the verbal sparring of agonistic politics to antagonistic confrontation that results in acts of intimidation, attempted murder, and hate crimes? How can violence become so closely entwined with the institutions of democracy? How can political forces be held accountable for the violence they encourage and the crimes committed in their name? What happens to political violence and its culprits when they are prosecuted through the judicial system and sanctioned under criminal law?

Violent democracy

The second paradox lies with the root causes of political violence in this district of Kerala. Violence in India is often seen as the result of communal tensions. India's birth of freedom was bathed in blood: the 1947 partition immediately following independence cut through the fabric of social life, pitting one community against the other. Antagonisms between Hindus and Muslims, or between Hindus and Sikhs, have often led to waves of riots and murderous violence. Beyond the trauma of the partition, in which around one million people were killed and 14 million were displaced, mass outbreaks of violence include the 1969 Gujarat riots involving internecine strife between Hindus and Muslims, the 1984 Sikh massacre following the assassination of Indira Gandhi by her Sikh bodyguards, the armed insurgency in Kashmir starting in 1989, the Babri Masjid demolition in the city of Ayodhya leading to retaliatory violence in 1992, the 2002 Gujarat riots that followed the Godhra train burning incident, and many other such episodes. As if religion were not reason enough to fuel internal conflict, Indian society is also divided along caste, class, race, regional, and ethno-linguistic lines, and these divisions in turn often abet violence and intercommunal strife. But in the Kannur district that Chaturvedi observes, "members of the two groups do not belong to ethnic, racial, linguistic, or religious groups that have been historically pitted against each other." Indeed, "local-level workers of both the party left and the Hindu right involved in the violent conflict with each other share a similar class, religious, and caste background. And yet the contest between them to become a stronger presence and the major political force in the region has generated considerable violence." The conflict between the two parties in this particular district is purely political. It cannot be read as a conflict between an ethnic or religious majority and a minority community.
Its roots lie elsewhere: for Chaturvedi, they are to be found in the very functioning of parliamentary democracy in India.

The third paradox is that this history of violent struggle between the party left and the Hindu right doesn't correspond to the standard image most people have of Kerala. This state on India's tropical Malabar Coast is known for its high literacy rate, low infant and adult mortality, and low levels of poverty. Kerala's model of development gained exceptional global coverage in the 1970s, 1980s, and early 1990s, before the rest of India entered its course of high growth and rising average incomes. Even now, Kerala is ahead of other Indian states in the provision of social services such as education and health. Its achievements are not linked to a particular industry, like the IT service sector in Bangalore or the automotive industry in Chennai, but stem from continuous investments in human capital and infrastructure (remittances from Kerala workers employed in Gulf states have also played a role). Kerala is also known for having had self-avowed Marxists in positions of power for more than four decades. As Chaturvedi reminds us, "it was the first place in the world to elect a communist government through the electoral ballot in 1957." Today, the two largest communist parties in Kerala politics are the Communist Party of India (Marxist) and the Communist Party of India, which, together with other left-wing parties, form the ruling Left Democratic Front alliance. They have been in and out of power for most of India's post-independence history, and are well entrenched in local political life. Communists are sometimes accused of plotting the violent overthrow of the government through revolutionary tactics, and the BJP is not above playing on the red scare and accusing its opponents of conspiratorial designs. But in Kerala violence doesn't come from revolutionary struggle or armed insurgency; it originates in the very exercise of power.
And it didn't prevent Kerala from becoming the poster child of development economics, showing that redistributive justice can be achieved despite (or alongside) violent conflict and antagonistic politics.

Malabar traditions

Some observers may explain political violence in Kerala by the intrinsic character of its inhabitants. They point to a traditional martial culture of physical confrontation and warfare. The local martial art, kalaripayattu, is said to be one of the oldest combat techniques still in existence. Dravidian history was marked by internecine warfare, the rise and fall of many great empires, and a culture of resistance against northern invaders. The Portuguese established several trading posts along the Malabar Coast and were followed by the Dutch in the 17th century and the French in the 18th century. In French, a "malabar" still means a muscular and sturdy character, although the name seems to come from the indentured Indian workers who came to toil in the sugarcane fields of Réunion Island. The British gained control of the region in the late 18th century. The Malabar District was attached to the Madras Presidency, while the other two provinces of Travancore and Cochin, which together with Malabar make up present-day Kerala, were ruled indirectly through a series of treaties reached with their princely authorities in the course of the 19th century. Direct rule in Malabar reinforced landlord domination over sharecroppers and tenants, with the landlords belonging to the upper-caste Nairs and Nambudiris while tenant cultivators and agricultural workers were the purportedly inferior Thiyyas, Pulayas, and Cherumas. In the early 20th century, social tensions were rife, voices were calling for land reform and the end of caste privilege, and Kerala became the breeding ground for the cadres and leaders of the Communist Party of India (CPI), officially founded on 26 December 1925. Communism is therefore heir to a long tradition of militancy in Kerala. India is home to not one but two communist parties, the CPI and the CPI(M), the latter born of a 1964 schism and now sending more representatives to the national parliament than the former.

Instead of essentializing a streak of violence in India's and Kerala's political life, Chaturvedi explains the violent turn of electoral politics in the district of Kannur as the result of majoritarianism, the adversarial drive to become the major force in a local political system, and its correlate, minoritization, the drive to marginalize supporters of the minority party. The search for ascendancy is not extraneous to democracies but is part of their basic definition and structure. In Kerala, politics turned violent precisely because the main political forces, and especially the party left and the Hindu right, agreed to play by the rules of democracy. The acceptance of democracy's rules of the game, namely free and fair elections and majority rule, wasn't a preordained result. At various points in its history, the communist movement in India was tempted by insurgency tactics and armed struggle. Chaturvedi revisits the political history of Kerala by drawing the portraits of two leaders of the political left, using their autobiographies and self-narratives. Both A.K. Gopalan ("AKG") and P.R. Kurup were upper-caste politicians who identified with the plight of poor peasants and lower-caste workers. In 1927, Gopalan joined the Indian National Congress and began playing an active role in the Khadi Movement and the upliftment of Harijans ("untouchables" or Dalits). He later became acquainted with communism and was one of 16 CPI members elected to the first Lok Sabha in 1952.
Gopalan's life narratives "privilege spontaneous moral reactions marked by a good deal of physical courage and a strong sense of masculinity." He was a party organizer, anchoring the CPI and then the CPI(M) in the political life of Kerala, and a partisan of electoral politics, dismissing the temptation to engage in armed insurrection in 1948-1951 as "adventurist" or "ultra-left." Thanks to his legacy, the CPI(M) now resembles other parties normally seen in parliamentary democracies: "each one seeking to obtain the majority of votes in order to ascend to the major rungs of government." P.R. Kurup, by contrast, embodies a darker side of electoral politics: known as "rowdy Kurup," he remained a regional socialist leader through strong-arm tactics and occasional street fights against rival supporters of the CPI or the Congress. His band of low-caste supporters ("Kurup's rowdies") was willing to use intimidation and violence to keep their party on top.

From agonistic contest to antagonistic conflict

Both Gopalan and Kurup were "shepherds" or "pastoral leaders" who protected, saved, and facilitated the well-being of a populace that reciprocated their favors with votes and other expressions of support. By contrast, the next generation of local leaders to which Chaturvedi turns comes from a lower rung of society. They are the militant members and local cadres of the CPI(M) and the RSS-BJP who form antagonistic communities willing to attack and counterattack each other so that their party might dominate the electoral competition. The fact that the young men at the forefront of the conflict between the party left and the Hindu right in the district of Kannur share similar religious, caste, and class backgrounds makes the conflict exceptional: it cannot be read as one pitting an ethnic or religious majority against a minority community. But this distinctive form of political violence in Kannur can be characterized as an exceptional-normal phenomenon, an expression of something common to all democracies: competition for popular and electoral support creates the conditions and ground for the emergence of hate-filled and vengeful acts of violence between opposing political communities. The clashes between the two camps are not just occasional: drawing on various sources, such as police and court records as well as personal interviews with workers from the two groups, the author estimates that more than four thousand workers of various parties have been tried for political crimes in Kannur in the past five decades. Assailants used weapons such as iron rods, chopping knives, axes, crude bombs, sword knives (kathival), sticks, and bamboo staffs (lathi).
They formed tight-knit communities of young men sharing fraternal bonds and a spirit of strong cohesion: the RSS shakha (local branch network) is the most organized structure on the Hindu right, but the party left also has its volunteer vigilante corps akin to RSS cadres and a student wing trained in "self-defense techniques." For both camps, a cycle of attacks and counterattacks breeds mimetic violence and a culture of aggression and vengeance.

In a functional democracy, law and order is maintained and crime gets punished. Many young men from the party left and the Hindu right have been brought to court on suspicion of politically motivated crimes and sanctioned accordingly. But for Chaturvedi, law is a "subterfuge" that obfuscates the complicity of the democratic political system in brewing violence and offers it an "alibi" or a "free pass." Justice is the continuation of politics by other means, and the conflict between the CPI(M) and the RSS-BJP in Kannur is reenacted in the courts. The judicial system depoliticizes political violence by projecting responsibility onto individuals and exonerating political structures of any responsibility for the crimes committed in their name. Perpetrators of violent aggression are liable under criminal law, and judges don't take their political motivations into account, pointing instead to acts of madness or a background of criminal delinquency. Political parties on both sides do not remain inactive during trials: they tutor witnesses to produce convincing testimonies or offer alibis, they cast suspicion on the testimonies of the opposite party, they fabricate evidence and manipulate opinion. Judicial proceedings take an exceedingly long time due to legal maneuvering, and suspects are often acquitted for lack of evidence. Important local figures thought to be planning and facilitating the aggressions are not called to account. In addition, according to Chaturvedi, the judicial system in India has taken a majoritarian turn: it affords impunity to members of the dominant group while persecuting minorities and those who challenge its hegemony. In Kerala, it has not stopped generations of young men from engaging in attacks and counterattacks so that their party can stay on top. Depoliticizing political violence and obscuring the conditions that have produced it not only leaves political forces unaccountable: it perpetuates a cycle of aggression and impunity.
For the author, a true political justice should not reduce political violence to individual criminality, but should address the structures that underlie it.

Majoritarianism and minoritization

For Chaturvedi, electoral democracy is defined by the competition "to become major and make minor," or the imperative "to become a major political force and reduce the opposition to a minor position." In a first-past-the-post electoral system, the party that commands the greatest number of votes in the greatest number of constituencies obtains greater legislative powers and access to executive authority. There is a built-in incentive to conquer and vanquish, as political opponents are seen as obstacles on the road to power. Democracy therefore has a propensity to divide, polarize, hurt, and generate long-term conflicts. In the district studied by the author, democracy has facilitated the emergence of violent majoritarianism and minoritization, understood as "practices that disempower a group in the course of establishing the hegemony of another." Most modern democracies make accommodations to protect minorities, but they also continue to uphold the rule of the majority as the source of their legitimacy. The founding fathers of modern India, from Syed Ahmad Khan to Mahatma Gandhi to B.R. Ambedkar, were aware of this risk of majority rule and sought to mitigate it by building checks and balances and appealing to the better part of people's nature. Initially a proponent of Hindu-Muslim unity, Sir Syed wrote about the "potentially oppressive" character of democracy, fearing that it might translate into a "crude enforcement of majority rule." Gandhi not only warned against the workings of competitive politics and the dangers of majoritarianism, but also expressed skepticism about the rule of law and the impartiality of the judicial system. Ambedkar wrote principles of political freedom and social justice into the Indian constitution, but was keenly aware that democracies were by definition a precarious place for social and numerical minorities.
Although their solutions may not be ours, Chaturvedi concludes that “we need to attend to questions that figures like Sir Syed, Ambedkar, and Gandhi raised.”

The World’s Largest Democracy

A review of Hailing the State: Indian Democracy between Elections, Lisa Mitchell, Duke University Press, 2023.

We are tirelessly reminded that India is "the world's largest democracy." In times of general elections, like the one taking place from 19 April to 1 June 2024, approximately 970 million people out of a population of 1.4 billion are called to the ballot box in several phases to elect the 543 members of the Lok Sabha, the lower house of India's bicameral parliament. The election garners considerable international attention. For some, it is the promise that democracy can flourish regardless of economic status or levels of income per head: India was one of the poorest countries in the world for much of the twentieth century, and yet has never reneged on its democratic pledge since independence in 1947. For others, it is proof that unity in diversity is possible, and that nations divided along ethnic, religious, or regional lines can manage their differences in a peaceful and inclusive way. For still others, India is not immune to the populist currents menacing democracies in the twenty-first century. For some observers, like political scientist Christophe Jaffrelot, this year's elections stand out for their undemocratic nature, and democracy is under threat in Narendra Modi's India. And yet India is a functional democracy where citizens vote at far higher rates than in the United States or Europe. Lisa Mitchell's book Hailing the State draws our attention to what happens to (as the book's subtitle says) "Indian democracy between elections." Except during general election campaigns, foreign media coverage of Indian domestic politics is limited in scope and mostly concentrates on the ruling party's exercise of power in New Delhi. Whether this year's elections are free and fair will be seen as a test for Indian democracy. But as human rights activist G. Haragopal (quoted by the author) reminds us, "democracy doesn't just mean elections.
Elections are only one part of democracy.” Elected officials have to be held accountable for their campaign promises; they have to listen to the grievances of their constituencies and find solutions to their local problems; they have to represent them and echo their concerns. When they don’t, people speak out.

Repertoires of protest

They do so in distinctly Indian ways, using repertoires of protest that differ markedly from modes of action used in other democracies. During the Telangana movement to create a separate state distinct from Andhra Pradesh, people resorted to roadblocks on state and national highways, rail blockades, fasting vows or hunger strikes, mass outdoor public meetings, strikes or work stoppages, sit-ins, human chains, processions, and marches to the capital. Collective mobilizations acquired grand names such as the Mahā Jana Garjana (lit., "great roar of the people"), Sakala Janula Samme (general strike; lit., "All People's Strike"), or Dilli Chalo ("Let's Go to Delhi") movements, while more ordinary practices were designated as garjanas (mass meetings), dharnās (sit-ins), padayātras (foot pilgrimages), and rāstā roko and rail roko actions (road and rail blockades). During the 2020-2021 Indian farmers' protests against three farm bills passed by the Parliament of India in September 2020, Tamil Nadu farmers resorted to various techniques to gain political attention, including "shaving half their beards and hair, displaying skulls and femur bones purported to be from farmers who had committed suicide, eating rats and snakes, marching in the nude to the prime minister's office, and vowing to drink their own urine and eat their own feces." According to Lisa Mitchell, we should not see these practices as specific to southern Indian states or linked with low-status caste or religion-based identitarian politics.
First, these registers of political participation are not marginal to Indian democracy: “the many collective assemblies that sought to hold elected officials accountable to their promises to create the new state of Telangana are just one set of examples of the many similar practices that animate India’s wider political terrain.” Second, these collective modes of assembly serve a political function: they are “widely seen in India as everyday communicative methods for gaining the attention of officials, making sure that election promises are implemented, and ensuring the equitable enforcement of existing laws and policies.” And third, these mass protests have a history that predates the institution of Indian democracy, finding their roots in colonial times and even in the precolonial efforts to gain audience with domestic rulers.

Lisa Mitchell defines "hailing the state" as "a wide range of practices that can be grouped together around their common aim to actively seek, maintain, or expand state recognition and establish or enhance channels of connection to facilitate ongoing access to authorities and elected officials." The expression inverts or subverts the state tactic identified by French philosopher Louis Althusser as "hailing" or "interpellation," by which a state official—in the Althusserian vignette, a policeman—interpellates a citizen with a halting order ("Hey, you!"). For Michel Foucault, a disciplinary society is one in which people become docile bodies due to the presence, or threat, of constant surveillance and discipline. In political analysis inspired by Marxism or Foucauldian studies, the capitalist state is always on the side of oppression or surveillance, and subjects are either drawn to passive submission or led to active resistance. According to anthropologist James Scott, the "weapons of the weak" include everyday forms of resistance such as foot-dragging, dissimulation, false compliance, pilfering, feigned ignorance, slander, arson, sabotage, and so forth. But as Lisa Mitchell notes, many collective actions of protest are in fact efforts to seek recognition and inclusion by state authorities, not to subvert or bypass them. In both the Telangana movement and the 2020-21 farmers' protests, the demands made were not for the overthrow of the state, but rather for dialogue with representatives of the state, for inclusion within the processes that would determine state policies, and for the fulfillment of earlier political promises that had not yet been realized.
Failure to achieve recognition forces petitioners to amplify their voices in order to be heard by public administrators, political leaders, and the general public: “when one’s interests are already well represented and one can be certain that one’s voice will be heard, there is little need to mobilize collectively in the streets. However, when one’s voice and interests repeatedly fail to find recognition, an alternative is to make one’s articulations more difficult to ignore by joining together in collective communicative action.”

Turning up the volume

Hailing the State is organized around seven sets of collective mobilizations: (1) sit-ins (dharna) and hunger strikes (nirāhāra dīkṣa); (2) efforts to meet or gain audience (samāvēśaṁ) with someone in a position of authority; (3) mass open-air public meetings (garjana); (4) strikes (samme, bandh, hartāl); (5) alarm chain pulling in the Indian railways; (6) road and rail blockades (rāstā and rail roko agitation); and (7) rallies, processions, and pilgrimages to sites of power (yātra, padayātra), along with the mass ticketless travels that often enable these gatherings. These social movements are not the expression of preexisting cultural identities; on the contrary, as Mitchell shows, Telangana or Dalit identities are constructed out of collective action and are the result of efforts to amplify voices and have them recognized. Actors who seek recognition, connection with, or incorporation into structures of state power are drawn together by a common desire to gain visibility and inclusion. Rather than ascribing a different “culture” to subaltern counterpublics and explaining differences in political repertoires by differences in underlying ideologies, we should consider that styles of public expression are produced through failures of recognition and unequal access to power. Distinctions in the level of responsiveness by authorities to various individuals and groups explain the civility and order, or violence and unruliness, with which collective claims are made. Subaltern actors are not more prone to violence and angry protest than elites; it is just that the latter usually settle their problems with ruling powers behind closed doors and without having to raise their voice, whereas the former are forced to find ways to amplify their voices. Speaking softly or writing in moderate tones is a condition of privilege, based on the expectation that one’s voice will be heard and acknowledged.
We should not dismiss the masses out of hand as unruly, angry, and uncivil, without considering that for them the “conditions of listening” are often not in place. Likewise, we should not draw a sharp line between the practices of “civil society” and those of “political society,” or between public places open to collective political activity and other urban venues devoted to circulation or economic activity.

Many acts of civil disobedience or nonviolent protest in India are associated with Mahatma Gandhi and the legacy of his struggle for Indian independence. Yet a history of these practices shows that they have very ancient roots, and that they did not stop with independence. Fasting and threatening to commit suicide at the doorstep of a powerful person, or assembling in a designated place to gain audience and present petitions, are repertoires of practice recorded in ancient Hindu scriptures and colonial archives. Local rulers were usually quite responsive in promising redress to such appeals, at which point the fasting brahmin or the gathering crowd would return home and resume daily activities. Similarly, as Mitchell notes, “work stoppages, mass migrations, and collective strikes to shut down commerce and transportation are evident in South Asian archival sources from at least the seventeenth century, perhaps even earlier, and were clearly used to make representations to state authorities at the highest level.” Later on, East India Company officials and then British colonial administrators were unable to comprehend the social context of petitioning and therefore invariably took any large demonstration to be an act of hostile rebellion. They referred to these collective actions as “combinations” or, less generously, as “insurgencies,” “mutinies,” “insurrections,” “revolts,” or “rebellions,” even when their participants sought only to gain an audience with officials in circumstances in which earlier communicative efforts had been ignored or refused. When collective actions did become violent, it was often in response to authorities firing on crowds to silence and disperse them. Leaders of the newly independent India in 1947 largely inherited both the ideological perspective on collective assembly and the legal and policing systems established by the British.
But they were never entirely successful in eliminating the collective practices that offered time-tested models for effectively engaging and communicating with officials, authority figures, and others in positions of power.

Railway democracy

Public transportation networks play a central role in the organization of collective political actions. Streets, highways, intersections, railway stations, rail lines, and road junctions are sites where people gather, claims are made, and communication with the state is pursued. A history of Indian democracy would not be complete without mentioning the role railway traffic and infrastructure have played in creating a common polity. As soon as they were built, the railways became a key target of anticolonial protest. Practices such as alarm chain pulling, rail blockades known as roko, and ticketless travel to join political rallies were so common that they eventually came to be redefined by the government as political demonstrations, and efforts to penalize perpetrators were abandoned. Disruption of rail traffic reached such heights and became such a regular challenge to authorities that the Indian Railways developed a policy of mitigation and adaptation, adding extra carriages to accommodate the large numbers of people traveling without tickets to mass meetings, or authorizing the brief stoppage of a train to allow demonstrators to have their picture taken by the media before clearing the way. Political scientists have underscored the role of the printing press or the mass media in the emergence of a public arena and the rise of democratic governance. Similarly, railways in India have been an effective medium of political communication. Halting a train in one location enabled a message to be broadcast up and down the entire length of a railway line, forcing those from other regions to pay attention to the cause of a delay. Road blockades have become equally important ways to convey political messages. Genealogies of democracy in India should not only focus on deliberative processes and political representation, but should also include material infrastructures such as railways and roads.
Democracy is something people do, and places of participation and inclusion are a fundamental part of what democracy means.

Hailing the State is based on archival evidence and ethnographic observation. The author has documented the social movement that led to the creation of a separate Telangana state, the result of sixty years of mobilization by Telangana residents for political recognition. This movement culminated on June 2, 2014, with the creation of India’s twenty-ninth state, which bifurcated the existing Indian state of Andhra Pradesh. Proponents of a separate Telangana state felt that plans and assurances from the state legislature and Lok Sabha had not been honored, and mobilized to hold government officials accountable to their promises. They cultivated a distinct cultural identity based partly on a variant of the Telugu language, and resented having their accent ignored or mocked by speakers from coastal Andhra. Lisa Mitchell also documents other social movements led by Dalit students, women, and peasants in India’s southern states. Her archival work led her to mine the archives of the Indian Railways, documenting the debates around alarm chain pulling and roko rail blockades over the twentieth century. Her book is also theoretically ambitious. In her text and in her endnotes, she discusses the ideas of European philosophers like Althusser, Foucault, Balibar, Lefebvre, and Habermas, highlighting their insights and perceptiveness but also their biases and shortcomings. Mitchell invites us to “decenter England (and Europe more generally) as the ‘precocious’ and normative site for historical innovation in collective forms of contentious political action.” The way democracy works in India between elections holds lessons for the rest of the world. In particular, observers would have been less puzzled by the various Occupy movements in Western metropoles (and the Yellow Vests protests in France) had they paid attention to the Telangana movement or other forms of collective public performance in southern India.

The India Stack

Democracy these days is becoming more abstract and dematerialized: from online consultations to e-governance, people increasingly turn to the internet for information about their rights, delivery of social services, and feedback about public matters. Digital government is supposed to enhance governance for citizens in a convenient, effective, and transparent way, eliminating opportunities for corruption and embedding democratic processes in the information infrastructure. India is at the vanguard of this movement: with a vision to transform India into a digitally empowered society and knowledge economy, the government has digitized the delivery of vital services across various domains, ensuring transparency, inclusivity, and accessibility for all citizens. The “India Stack” includes Aadhaar, the world’s largest digital ID programme; the Unified Payments Interface (UPI), India’s homegrown real-time mobile payments system; and the Data Empowerment and Protection Architecture (DEPA), India’s version of the European Union’s General Data Protection Regulation. But e-government and personal identity numbers can also be used to limit political access to persons in positions of power or to reduce opportunities for recognition and face-to-face communication. As Lisa Mitchell notes, the decision to launch a website for receiving online petitions and to substitute it for direct access was met with strong protest. The relocation of Dharna Chowk, Hyderabad’s designated place for assembly and protest, to a site far away from the center of power was perceived as an authoritarian effort to silence dissent and limit political opposition. Foreign observers often deride the institution of granting audience, whereby citizens wait in line to meet a government official and petition for justice, relief, or favor, as the remains of a “feudal mindset” inherited from Mughal administrators and British officers.
But Indian citizens are attached to their own ways of hailing the state, and such collective performances are neither antithetical nor incidental to the functioning of India’s democracy between elections.

Drone Theory and Bearing Witness

A review of Nonhuman Witnessing: War, Data, and Ecology after the End of the World, Michael Richardson, Duke University Press, 2024.

How to witness a drone strike? Who—or what—bears witness in operations of targeted killing where the success of a mission appears as a few pixels on a screen? Can there be justice if there is no witness? How can we bring the other-than-human to testify as a subject endowed with agency and knowledge? What happens to human responsibility when machines have taken control? Can nonhuman witnessing register forms of violence that are otherwise rendered invisible, such as algorithmic enclosure or anthropogenic climate change? These questions lead Michael Richardson to emphasize the role of the nonhuman in witnessing, and to highlight the relevance of this expanded conception of witnessing in the struggle for more just worlds. The “end of the world” he refers to in the book’s title has several meanings. The catastrophic crises in which we find ourselves—remote wars, technological hubris, and environmental devastation—are of world-ending importance. Human witnessing is no longer up to the task of making sense, assigning responsibility, and seeking justice in the face of such challenges. As Richardson claims, “only through an embrace of nonhuman witnessing can we humans, if indeed we are still or ever were humans, reckon with the world-destroying crises of war, data, and ecology that now envelop us.” The end of the world is also a location: Michael Richardson writes from a perch at UNSW Sydney, where he co-directs the Media Futures Hub and Autonomous Media Lab. He opens his book by paying tribute to “the unceded sovereignty of the Bidjigal and Gadigal people of the Eora Nation” over the land that is now Sydney, and he draws inspiration from First Nations cosmogonies that grant rights and agency to nonhuman actors such as animals, plants, rocks, and rivers. “World-ending crises are all too familiar to First Nation people,” who also teach us that humans and nonhumans can inhabit many different worlds and ecologies.
The world that is ending before our eyes is a world where Man, as opposed to nonhumans, was “the unexamined subject of witnessing.” In its demise, we see the emergence of “a world of many worlds” composed of humans, nonhumans, and assemblages thereof.

From Drone Theory to Drone Art

Nonhuman Witnessing begins with a piece of drone theory. The proliferation of drones on the battlefield, and the ethical questions that they raise, have led to a cottage industry of “drone studies,” with conferences, seminars, workshops, and publications devoted to the field. Richardson adds his own contribution by asking how witnessing occurs under conditions of drone warfare and targeted strikes from above. Drones are witnessing machines, but also what must be witnessed: new methods and concepts have to be designed to make recognizable the encounters with nonhuman systems of violence that resist the forms of knowing and speaking available to the eyewitness. To analyze the witnessing of violence, as well as the violence that can be done by nonhuman witnessing, Richardson turns to theory and then to the arts. Drawing from the media studies literature, he complements the notion of media witnessing, or witnessing performed in, by, and through media, with his own concept of “violent mediation,” or violence enacted through the computational simulation of reality. He also borrows from Brian Massumi the notion of ontopower, the power to bring into being, and the operative mode of preemption that seeks to define and control threat at the point of its emergence. For Richardson, drone warfare is characterized by an acceleration of the removal of human agency from military decision-making. Violence is made ubiquitous; it can take place anywhere at any time. The volume of data produced by drone sensors far outstrips human capacities for visual or computational analysis. These data are transformed into actionable form by on-board autonomous software systems that rely on edge computing and AI algorithms.
In a logical progression, “automated data collection leads to automated data processing, which, in turn, leads to automated response”: an ultimate end of the militarization of violent mediation is thus the “elimination of the human within technological systems to anything other than the potential target for violence.” By contrast, art insists on what makes us human. The paintings, photographs, and other art forms presented by the author emphasize the awesome power of unmanned airplanes such as the Reaper, the destruction they cause on the ground, their impact on the daily lives of those who remain under their surveillance, and their incorporation into local iconographies such as traditional Afghan war rugs. Art makes sensible the “enduring, gradual, and uneven violence done to the fabric of life” by killing machines that escape traditional forms of human witnessing.

Despite the evocative power of the concepts and artworks presented in Nonhuman Witnessing’s pages, there is a disconnect between drone theory and drone reality. The use of drones by the U.S. for targeted killings is highly publicized, because it is the most controversial, but quantitatively it remains very minor in comparison to surveillance missions. The subject of drone theory is less the drone as such than the drone as an illustration of the violence waged by the United States in the Middle East following the war in Afghanistan and the occupation of Iraq. New versions of the theory have yet to incorporate the use of drones by new actors and in other theaters of conflict: in the Syrian civil war since 2012, during the short war between Armenia and Azerbaijan in 2020, in the Houthi insurgency against the Yemeni military supported by Saudi Arabia, and, of course, since Russia’s aggression against Ukraine in February 2022 and in Israel’s offensive against Gaza following Hamas’ surprise attack on southern Israel on 7 October 2023. The logic of preemption that characterized the United States’ war on terrorism is less manifest in these evolving situations. So is the role of AI and on-board computer systems: drones increasingly appear as a low-tech, low-cost solution, a weapon of the poor and savvy against more formidable enemies. Drone warfare and lethal autonomous weapon systems raise complex strategic, ethical, and legal questions that have been examined by a number of authors. But they are far from the “killer robots” decried in the critical literature—or hyped as a selling point by arms producers and media commentators. Richardson’s arguments against signature strikes—i.e., strikes based on behavioral patterns rather than on identity (personality strikes)—are valid and have indeed led to a reduction in targeted killings ordered by the U.S. in Pakistan, Yemen, or Somalia.
But civilian killings such as the one described in the opening of the book show not that the drone is an imprecise weapon, but that it has been used in an imprecise way, just as a needle can be used imprecisely. Drones, like other pieces of military technology, can serve as inspiration or subject matter for artists and theoreticians. But just as drone theory rests on biased empirical ground, drone art is not a recognizable category beyond the avant-garde genre of drone music, which bears no connection with military drones whatsoever.

The power of algorithms

Whereas the chapter on “witnessing violence” used outdated evidence and questionable theory, the second chapter, “witnessing algorithms,” addresses more recent concerns and state-of-the-art technologies: ChatGPT and other applications of machine learning, deepfakes, synthetic media, mass surveillance, and the racist or misogynist biases embedded in algorithmic systems. It turns on the same conceptual pivot, understanding witnessing algorithms both as algorithms that enable witnessing and as entities that must themselves be witnessed. Theoretically, it draws from Deleuze and Guattari’s conception of machines as assemblages of bodies, desires, and meanings operating a generalized machinic enslavement of man, and from affect theory as interpreted by Brian Massumi, with his grammar of intensities, virtual power, and futurity. Based on these references, Richardson proposes his own notion of “machinic affect,” understood as “the capacity to affect and be affected that occurs within, through, and in contact with nonhuman technics.” Machine learning and generative AI can lead to false witnessing and the fabrication of evidence: hence the weird errors and aberrations, the glitches and hallucinations that appear in computer-generated images or texts. “Like codes and magic, algorithms conceal their own operations: they remain mysterious, including to their makers.” But instead of denouncing their lack of transparency and demanding to open the proverbial black box, Richardson takes algorithmic opacity as a given and attends to the emerging power of algorithms to witness on their own terms. Doing so requires bracketing any ethical imperative attached to witnessing: witnessing is what algorithms do, regardless of their accuracy or falsity, their explainability or opaqueness. Facts do not precede testimony: registering an event and producing it take place on the same plane of immanence, one that makes no distinction between the natural and the artificial.
Examples mobilized by Richardson include the false testimony of deepfakes such as the porn video of Gal Gadot having sex with her stepbrother; the production of actionable forensic evidence through the automatic detection of teargas canister images by Forensic Architecture, a British NGO investigating human rights violations; the infamous Project Maven designed by the Department of Defense to process full-motion videos from drones and automatically detect potential targets; and computer art videos making visible the inner functioning of AI.

Richardson adds to the existing literature on AI by asking how algorithmic evidence can be brought into the frame of witnessing in ways that human witnessing cannot. But he only hints at a crucial fact: most machine learning applications touted as capable of autonomous reasoning and intelligent decision-making are in fact “Potemkin AI” or “non-intelligent artificial intelligence.” The innovation sector lives on hype, hyperbole, and promissory futures. Likewise, media reactions to new technologies always follow the same tropes, from the “disappearance of work” to the advent of “intelligent machines” or “killer robots.” But the reality is more sobering. Deepfakes produce images that are not different in nature from the CGI-generated movies that have dominated the box office for at least two decades. Forensic Architecture, the human rights NGO surveyed in the book, makes slick graphic presentations used as exhibits in judicial trials or media reports, but does not produce new evidence or independent testimony. State surveillance is a product of twentieth-century totalitarianism, not the invention of modern data engineers. Algorithms are biased because we designed them this way. The magic we see in AI-powered services is a form of trickery: their operating mode remains hidden because service providers have an interest in keeping it so. As Richardson rightfully notes, “machine learning systems and the companies that promote them almost always seek to obscure both the ‘free labor’ of user interactions and the low-paid labor of digital pieceworkers on platforms such as Mechanical Turk.” Just as human work will not disappear with automation, it would be a mistake to believe that human witnessing will be supplanted by nonhuman forms of bearing witness. There are many human witnesses involved in the production of nonhuman witnessing.
Instead of anticipating the replacement of humans by other-than-human agents, we would do well to examine the concrete changes taking place in human witnessing. The debasement of all forms of public authority, the hijacking of political institutions by private interests, and the commitment fatigue induced by too many horrors and catastrophes seem to me to lie at the root of the crisis in human witnessing, for which the nonhuman offers no solution.

Ecological catastrophe

Richardson then turns to Pacific islands and the Australian continent to investigate the role of nonhuman witnessing in times of ecological catastrophe caused by the fallout of nuclear explosions and anthropogenic climate change. These territories, and the people they harbor, can testify to the world-destroying potential of these two crises: “just as the Marshall Islands and other nations in the Pacific were crucial sites for nuclear testing throughout the Cold War, so too are they now the canaries in the mineshaft of climate change.” Witnessing is not reducible to language or to human perception: when it takes a continent or a planet as its scale of observation, it denies the human a privileged status for establishing environmental change or atmospheric control. The subject of the Anthropocene is not the anthropos or Man as traditionally conceived, but an assemblage of humans, technologies, chemical elements, and other terraforming forces. Witnessing ecologies implies both that ecologies can be made to witness impending crises and that there is an ecology of witnessing in which every element mediates every other. Drawing from affect theory and trauma studies, Richardson proposes the notion of “ecological trauma” to suggest that trauma escapes the confines of the human body: “it can be climatic, atmospheric, collective, and it can be transmitted between people and across generations.” Ecological catastrophe has already been experienced by First Nations, who have seen their environment shattered by settler colonialism, of which the British nuclear tests conducted on the Montebello Islands and at Maralinga in South Australia are only a late instantiation. The entire ecology—people, water, vegetation, animals, dirt, geology—was directly exposed to radioactive contaminants during the blasts and fallout, and no real effort to mitigate the effects on Aboriginal inhabitants was made.
Polluted soil and sand melted into glass are the media used by Australian artist Yhonnie Scarce, whose blown-glass work adorns the cover of the book. Other aesthetic works also figure prominently in this chapter, from the aerial imaging through which the planet becomes media to poems by Indigenous writers bearing witness to the destruction of their lands. For Richardson, inspired by recent developments in media theory, “attending to the nonhuman witnessing of ecologies and ecological relations continually returns us to mediation at its most fundamental: the transfer and translation of energies from one medium to another.”

The idea that we should consider nonhumans as well as humans in our processes of witnessing and decision-making already has a significant history in the social sciences. It was first put forward by science and technology studies, or STS, and it is directly relevant to the examination of technological innovation or environmental degradation. Actor-network theory, usually abbreviated as ANT and proposed by the French STS scholar Bruno Latour, aims to describe any phenomenon—such as climate change or large technological systems—in terms of the relationships between the human and nonhuman actors that are entangled in assemblages or networks of relationships. These networks have power dynamics leading to processes such as translation (the transport with deformation of an assemblage), symmetry (representing all agents from their own perspective) or, as proposed by Richardson, witnessing. The idea of nonhuman witnessing should not be confused with the claim that humans are incapable of witnessing events that are too large-scale or too complex to be grasped by the human mind. Indeed, history shows that local communities and scholars have long understood and monitored changes in the environment and their effect on human activities. In his late work, Latour also proposed the idea that since the environmental question was radically new, politics had to be completely reinvented: we should convene a “parliament of things” where both humans and nonhumans can be represented adequately and be brought to the stand to give testimony. Although Richardson scarcely refers to this literature—he is more interested in art criticism than in science and technology studies—he shares the view that nonhuman witnessing is politically transformative.
His politics is anchored in the pluriverse (a world of many worlds), mindful of the myriad of relations between humans and nonhumans, inspired by the belief systems of First Nations, and predicated on the idea that “difference is not a problem to be solved but rather the ground for flourishing.” As he concludes, “there is no blueprint for such a politics, no white paper or policy guidance.” But it is already emergent at the level of speculative aesthetics and in the creative works that punctuate his book.

Thought in the Act

Nonhuman Witnessing is published in a series edited by Erin Manning and Brian Massumi at Duke University Press. Richardson shares with the editors a taste for mixing art with philosophy and for engaging in high theory and abstract concept-building based on concrete examples. He borrows several key notions from Massumi (intensities, futurity, virtuality, preemption), who himself poached many of his insights from Deleuze and Guattari’s philosophy. The new theories developed by these authors and others working in the same field go under the names of affect theory, radical empiricism, process philosophy, speculative pragmatism, ontological vitalism, and new materialism. Each chapter in the book follows an identical pattern. It introduces a new concept (“violent mediation,” “machinic affect,” “ecological trauma,” but also “radical absence” and “witnessing opacity”) that provides an angle on a series of phenomena. It develops a few cases or examples that mostly expose forms of violence occurring across a variety of scales and temporalities: military drones and remote wars (“killer robots”), algorithms (“weapons of math destruction”), and environmental devastation through nuclear tests and climate change (“the end of the world”). It covers both aspects of witnessing, as the originator of an act of testimony and as an object to be witnessed. And it uses artistic creations as illustrations of certain forms of witnessing that escape the standard model of bearing witness. The result makes for suggestive reading but sometimes lacks coherence and clarity. Richardson starts from an original idea (whether drones might become nonhuman witnesses) but stretches it a bit too far. For him, opacity is not a pitfall to be avoided but a quality to be cultivated. Rather than a contribution to theory, the book’s main impact might be on art criticism. I truly admire the author’s ability to make art part of the discussion we have on humanity’s main challenges.
I have not reviewed in detail the artworks curated by the author, but it is their descriptions that leave the most lasting impression.

The Celibate Plot

A review of Celibacies: American Modernism and Sexual Life, Benjamin Kahan, Duke University Press, 2013.

Literary criticism has accustomed us to read sex between the lines of literary fiction. What Maisie Knew was what her parents were doing in the bedroom; The Turn of the Screw would have the heroine screwed if the door were unlocked; and Marcel Proust’s Lost Time was time not spent in the arms of his lover. According to this view, literature arises when an author wants to suggest something about a person or thing but, for whatever reason, does not wish to state explicitly what is on his or her mind, and so writes a novel, or poetry. Psychoanalysis has several words for this urge to dissimulate and beautify: sublimation, repression, transference, displacement, defense mechanism, the conflict between the super-ego and the id. They all refer to the transformation of socially undesirable impulses into desirable and acceptable behaviors. But what if the opposite were true? What if no sex means no sex, and there is no dark secret to probe into? The French philosopher Michel Foucault hinted at this possibility in his History of Sexuality when he criticized the repressive hypothesis, the idea that Western society suppressed sexuality from the 17th to the mid-20th century due to the rise of capitalism and bourgeois society. Foucault argued that discourse on sexuality in fact proliferated during this period, as experts began to examine sexuality in a scientific manner, cataloguing sexual perversions and emphasizing the binary between hetero- and homosexuality. By contrast, Roland Barthes, Foucault’s colleague at the Collège de France, proposed a concept to bypass the paradigm of sexuality and go beyond the binary construction of meaning: the Neutral. “I define the Neutral as that which outplays the paradigm, or rather I call Neutral everything that baffles paradigm,” he wrote. According to Barthes, the Neutral, or the grammatical Neuter (le neutre), operates a radical deconstruction of meaning and sexuality.
It allows us to reexamine from a fresh perspective the question of le genre, understood in its dual sense of literary genre and of gender. 

The repressive hypothesis

Biographies of Roland Barthes point out that he remained a bachelor all his life and shared an apartment with his mother, to whom he devoted a vibrant eulogy at the time of her death. Barthes was also a closet homosexual, never avowing in public his penchant for boys and his dependence on the gigolo trade. His works are almost silent on his sexuality. Barthes’s homosexuality concerned only a private part of his life; it was never made public, because it simply wasn’t. Homosexuality was never for Barthes anything other than a matter of sex, limited to the question of the choice of a sexual object. He wasn’t gay (a term that functions as a seal of identity), and would never have been part of the political movement for the recognition of homosexual rights. This indifference was not a repression: it was another way of expressing what being modern meant for him, even if Barthes’s modernity was closely related to a certain resistance to the modern world. In a society obsessed with the new and the rejection of conventional forms, it is attachment to the past that now constitutes a form of marginality or even clandestinity and, as such, a heroism of the ordinary. Being modern doesn’t just mean taking part in the intellectual or artistic spectacle of contemporary society. It also, and above all, means constructing meanings, words, ways of being, cultural and textual interventions that precede what a society makes available. To be modern is to make one’s desire come to language. In this sense, Benjamin Kahan’s Celibacies, a work of literary criticism and cultural history, articulates other ways of being modern. Focusing on a diverse group of authors, social activists, and artists, spanning from the suffragettes to Henry James, and from the Harlem Renaissance’s Father Divine to Andy Warhol, Kahan shows that the celibate condition, in the diverse forms that it took in the twentieth century, meant much more than sexual abstinence or a cover for homosexuality. 
To those who associate the notion of celibacy with sexual repression, submission to social norms, and political conservatism, he demonstrates that celibacies in the twentieth century were more often than not on the side of social reform, leftist politics, and artistic avant-garde.

Celibacies is placed under the sign of Eve Sedgwick’s Epistemology of the Closet, with a quotation that serves as the book’s epigraph: “Many people have their richest mental/emotional involvement with sexual acts that they don’t do, or even don’t want to do.” Sedgwick deemed the hermeneutic practice of uncovering evidence of same-sex desire and its repression in literature “paranoid reading.” To this trend, she opposed a reparative turn in literary studies: reparative reading seeks pleasure in the text and works to replenish the self. Sedgwick’s injunction to move from paranoid to reparative reading has been followed in divergent ways. On the one hand, queer studies continue to read the absence of sex as itself a sign of homosexuality or of repressed desire, as an act of self-censorship and insincerity. The closeted subject has internalized social norms and keeps the true self hidden from outside views, sometimes hidden from the conscious self as well. By contrast, the queer subject brings desire to the fore, and challenges tendencies to oppose private eroticism and the systems of value that govern public interests. On the other hand, queer theory rejects normativities of all stripes, including homonormativity. It understands sex and gender as enacted and not fixed by natural determinism. Since the performance of gender is what makes gender exist, a performance of “no sex” creates a distinct gender identity: no means no, and abstinence from sex is not always the sign of repressed sexuality. It is possible to theorize gender and even sexuality without the interference of sex. But according to Kahan, celibacy is distinct from asexuality, understood as the lack of sexual attraction to others, or low or absent interest in or desire for sexual activity. Celibacy is a historical formation or a structure of attachment that can be understood as a sexuality in its own right. 
Its meaning evolved over the nineteenth and twentieth centuries: it has been used as a synonym for unmarried, as a life stage preceding marriage, as a choice or a vow of sexual abstinence, as a political self-identification, as a resistance to compulsory sexuality, as an interval between periods of sexual activity, or as a new form of gender identity organized in a distinct community culture. Celibacies, used in the plural, reflects these overlapping meanings and casts light on literary productions illustrating the impact of modernism in America.

The educated spinster

Celibacy once was a recognized social identity defined by its opposite, heterosexual marriage. According to Simone de Beauvoir, “the celibate woman is to be explained and defined with reference to marriage, whether she is frustrated, rebellious, or even indifferent in regard to that institution.” Its determinants were political and economic rather than sexual or sentimental: celibacy was a necessary condition for middle- and upper-class white women to gain legal and financial independence. At the end of the nineteenth century, “marriage bars” required the dismissal of female employees upon their marriage or prohibited the hiring of married women. Educated women who wanted to enter a career or a profession had to remain unmarried or to hide their marriage. They did so in large numbers: “Of women educated at Bryn Mawr between 1889 and 1908, for instance, fifty-three percent remained unwed.” For this reason, celibacy is at the very heart of the history of labor in America. It is also a key component of social mobilization and civic campaigns: in the United States, unmarried, educated women composed much of the rank and file of social movements campaigning for universal suffrage, temperance, and social purity. The centrality of celibacy for first-wave feminism cannot be emphasized enough. For the author, women’s “choice not to marry is indicative of a willingness to think outside existing social structures and thus it is associated with freedom of thought.” For their male contemporaries, it was also associated with ridicule. Women campaigning for female suffrage were belittled as “suffragettes”; and other expressions disparaged women who had chosen to stay single (“singletons,” “bachelorettes,” “old maids,” “spinsters.”) The male bachelor, by contrast, was seen as socially able to marry but having delayed marriage of his own volition; he could be characterized as “a good catch,” “a stag,” or “a jolly good fellow.” 

Celibacy’s history is imbricated with the history of homosexuality. Discussing Henry James’ novel The Bostonians, Kahan investigates one of the most contested sites of celibacy in the history of homosexuality: the Boston marriage. The term “Boston marriage” describes a long-term partnership between two women who live together and share their lives with one another. In James’s satirical novel, the romance between the heroine Verena Tarrant and Olive Chancellor, a Boston feminist and social campaigner, is placed on equal footing with the romance between Verena and her other suitor, Basil Ransom. This love triangle is often read as a lesbian plot: Verena’s decision to leave her parents’ house, move in with Olive, and study in preparation for a career in the feminist movement is seen as the result of romantic attraction. Benjamin Kahan proposes another interpretation based on the constitutive role of celibacy as a means for independence and self-determination. The Boston marriage, which does not grow out of “convenience or economy,” is associated with collaborative literary production. It reflects Henry James’ own condition as a lifelong bachelor and his conception of authorship as a vocation. The artist, like the bachelor, is fundamentally monadic and stands apart from social spheres of influence: “rather than seeing James’s celibacy as only an element of a homosexual identity, I understand it as a crucial component of his novelistic production.” In a separate chapter examining the work of Marianne Moore, a twentieth-century American poet, Kahan sees echoes of her lifelong celibacy in her poetics and conception of time. Moore’s “celibate poetics” involve a lack of development within the poem, a lack of climax, a backwardness that reverses the passage of time, as well as pleasure in difficulty, lack of explicitness, and a style at once shy and flamboyant. 
Moore’s remark that “the cure for loneliness is solitude” makes solitary existence a fully contented mode of sociability and a crucial part of her poetics.

Black celibacy and queer citizenship

In his effort to present celibacy as progressive and pleasurable, Benjamin Kahan underscores that the celibate condition in the twentieth century was not restricted to middle-class white women. Black celibacy was advocated by a now forgotten figure of the Harlem Renaissance, Father Divine, “an intellectual and religious leader who believed he was God.” His cult, the Peace Mission Movement, organized his followers into interracial celibate living arrangements called kingdoms. These celibate communes were a direct response to economic conditions: rents in Harlem were prohibitively high, making it necessary for families to share apartments or take in lodgers. Cooperative housing also echoed the calls from Claude McKay, a socialist and a poet, to seize the means of production and organize the black community on a self-sustaining basis. Lastly, black celibacy and chastity vows countered racist depictions of the black body as oversexualized and promiscuous. By making a celibate identity available to black subjects, Father Divine allowed black men and women to participate in the public sphere and created economic and spiritual opportunities for racial equality. Celibacy was also used as a strategy for queer subjects to circumvent the prohibition preventing homosexual immigrants from becoming American citizens. Before the passage of the McCarran-Walter Act in 1952, the queer citizen could, according to the letter of the law, belong to America so long as he remained celibate or was not “caught in an act of moral turpitude.” The British poet W. H. Auden became an American citizen in 1946 by practicing “cheating celibacy,” a position both inside and outside the rules that he thematized in his 1944 poetic essay The Sea and the Mirror: A Commentary on Shakespeare’s The Tempest. This long poem is a series of dramatic monologues spoken by the characters in Shakespeare’s play in which Caliban renounces his former self in favor of a queer form of belonging. 
But as Kahan notes, “black queer writers like Claude McKay, James Baldwin, and Langston Hughes had significantly less ability to move in and out of America’s borders than white authors like Auden.”

Kahan’s choice to associate Andy Warhol with celibacy is disconcerting. The pop artist was openly gay and had a reputation for promiscuity and swishiness. His art collective, the Factory, was populated by “drag queens, hustlers, speed freaks, fag hags, and others.” But “‘gayness’ is not a category that we can control in advance.” If his declarations can be taken at face value, Warhol had no sex life: “Well, I never have sex” and “Yeah. I’m still a virgin,” he responded in an interview. Evidence also suggests that the Factory wasn’t the “Pussy Heaven” or “Queer Central” journalists once described: according to one witness, celibacy organized life at the Factory, and Warhol’s abstinence from sex shaped relations of power and subjection. As Kahan sees it, the tradition of celibate philosophers underwrites the Factory’s mode of government and allows him to theorize a concept of group celibacy. Warhol’s marriage to his tape recorder exemplified his rejection of traditional marriage and emotional life: “I want to be a machine.” In the view of a contemporary, “everything is sexual to Andy without the sex act actually taking place.” His celibacy operates at a zero degree of desire. My Hustler, his 1965 movie with film director Paul Morrissey and actor Ed Hood, presents a twisted celibate plot characterized as much by sexlessness as by sex. Valerie Solanas tried to kill Andy Warhol in 1968 because she claimed “he had too much control of [her] life”. In the SCUM Manifesto she published before her attempted murder, the radical feminist urged women to “overthrow the government, eliminate the money system, institute complete automation and destroy the male sex.” Kahan places both Warhol and Solanas in a tradition of philosophical bachelorhood that precludes sex in favor of alternative modes of governance.

Celibate readings

In the conclusion of Celibacies, Benjamin Kahan argues that celibacy should not be abandoned to the American political right, with its advocacy of abstinence before marriage and traditional gender roles. Celibacy from the 1880s to the 1960s was on the side of reform and modernism. Celibate women could access public space and the professions at a time when social norms prevented educated married women from entering the workforce. In the 1930s, celibacy was a possible option offering economic advantages to African-Americans in Harlem or allowing queer foreigners to access U.S. citizenship. Celibacy could also be a philosophical choice or a condition for artistic production. Having a room of one’s own was easier when one didn’t have to share the apartment with another person or raise a family. Forms of celibacies have also been animated by “sexual currents, desires, identifications, and pleasures.” Celibacy’s imbrication with homosexuality is not just a modern invention: depictions of “Boston marriage” in the late nineteenth century had strong implications of lesbianism. But celibacy was not only a pre-homosexual discourse or the result of sexual repression: it was a form of sexuality in its own right, entailing a more radical withdrawal than is the case with the closet homosexual or the scholar practicing sexual abstinence. No sex means sex otherwise, or a different form of sexuality. Looking to literary works of fiction and poetry through the prism of celibacy leads to valuable insights: Kahan reads a “celibate plot” in Henry James’ The Bostonians or Andy Warhol’s My Hustler, and highlights a “celibate poetics” in the poems of Marianne Moore or W. H. Auden. 
This book is published in a series devoted to queer studies because, as the author argues, “celibate and queer readings overlap without being coextensive.” Much as queer theory has the effect of “undoing gender,” the primary purpose of the Neutral according to Roland Barthes is to undo the classifying function of language and thus to neutralize the signifier’s distinctive function. “L’écriture célibataire” is the form the Neutral took in American modernism.

Martian Chronicles

A review of Dying Planet: Mars in Science and the Imagination, Robert Markley, Duke University Press, 2005.

The relations between science and fiction have nowhere been closer than on the planet Mars. The genre of science fiction literally began with imagining life on Mars; and some of its most popular entries nowadays are stories of how humans could settle on the red planet and make it more like the Earth. Planetary science originally took Mars as its object and tried to project onto Mars what scientists knew about the climate and geology on Earth. Now this interest in Martian affairs is coming back to Earth, as scientists are applying knowledge derived from studying Mars to the study of the Earth’s planetary dynamics. Mars’ image as a dying planet has been invoked to support competing, even antithetical, views of the fate of our world and its inhabitants: a glorious future of interplanetary expansion and space conquest, or a bleak fate of environmental devastation and human extinction. Science has not definitively settled whether life has ever existed on Mars; but visions of extraterrestrial civilizations and space invaders have been superseded by narratives centered on mankind and its cosmic manifest destiny. This intimate relationship between science and fiction under the sign of Mars is now more than one century old, but shows no sign of abating. What is it in Mars that inflames people’s imagination from one generation to the next? Why has Mars attracted more interest than Earth’s own satellite, the Moon, or than other planets in the solar system such as Venus or Saturn? Are there commonalities between the way our ancestors envisioned canals built by Martian civilizations and more recent visions of making Mars suitable for human sojourn? Will the detailed inventory of the Martian terrain brought back by satellite images and camera-equipped rovers put an end to our interest in the red planet, or will it rekindle a new space age with the colonization of Mars as its overarching goal? 
And how can our visions of planetary expansion avoid the pitfalls of colonial metaphors and Earth-based anthropocentrism?

Is there life on Mars?

Dying Planet explores the ways in which Mars has served as a screen on which we have projected our hopes for the future and our fears of ecological devastation on Earth. It presents a cross-disciplinary investigation of changing perceptions of Mars as both a scientific object and a cultural artifact. The persistence of the red planet in our cultural imagination explains its enduring presence on the scientific agenda; and the scientific controversies surrounding Mars have often fueled the imagination of artists and philosophers. Scientists still frequently resort to terrestrial analogies to describe Mars; and the study of Mars has encouraged scientists to think about the planetwide conditions necessary to sustain life, making Earth more of a Mars-like planet. For planetary scientists and science-fiction writers, Mars often acts as a bellwether, a harbinger of the ecological fate of the Earth. The image of Mars as a dying planet has an enduring quality: it indicates that the Earth may go the way of Mars and transform itself into a barren land due to resource exhaustion and environmental stress. To the question “Why Mars?”, the author lists the reasons that have made the fourth planet in the solar system such an enduring presence in the scientific imagination. Since the invention of the telescope in the seventeenth century, Mars has been observable with a fair degree of accuracy. Dark patches on the surface, the polar caps that wax and wane, waves of darkening that spread across the planet from the poles toward the equator during its spring and summer months: all these observed phenomena have nourished rampant speculation based on analogies to Earth’s seasonal and hydrological cycles. In 1878, Giovanni Schiaparelli (1835-1910) announced that he had observed canali (channels or canals) criss-crossing the planet’s surface. 
At the end of the nineteenth century, American astronomer Percival Lowell (1855-1916) forcefully defended the idea that these canals were built for irrigation by an intelligent civilization. For more than half a century, the canal controversy fueled speculation about an alien race that might enter into contact with mankind. More generally, the discovery of life on Mars or elsewhere in the universe would profoundly alter humankind’s perception of its place in the cosmos: the question “Is there life on Mars?” is as momentous as Copernicus’s questioning of the Earth’s place at the center of the universe.

Our fascination with Mars stems from what Robert Markley calls the interplanetary sublime. According to Immanuel Kant, the sublime is the infinite object that reveals the sublimity of reason. The “starry heavens above me and the moral law within me” fill us with a profound sense of wonder and awe. The spectacle of Mars in science and in literature is indeed sublime and awe-inspiring. Mars has the largest volcano in the solar system. Its main valley stretches for three thousand miles, dwarfing terrestrial analogues and making the Grand Canyon seem “a mere crack on the sidewalk.” Its surface preserves landforms three to four billion years old that provide a window into a geological past that has long since disappeared from Earth. Orbital photographs show evidence of geologically recent lava flows, patterns of water erosion, and meteoric impacts that suggest a complex history of planetary evolution and climate change. The evidence of a once warmer and wetter Mars makes these questions all the more pressing. The study of Mars involves a multiplicity of sciences including geology, chemistry, hydrology, meteorology, and microbiology, as well as the still virtual disciplines of exobiology and terraforming. The exploration of Mars is a “fundamental science driver”: it pushes the frontiers of science further and provokes the imagination of scientists and writers alike. What we see in Mars also reflects “the moral law within me”: gazing at a distant planet makes our insignificance in the universe palpable. Whether humankind is alone in the universe or one of many intelligent species has profound philosophical and even theological implications. The loss of Mars’s atmosphere and the disappearance of water on its surface also bring lessons close to home: if the geological similarities between Mars and Earth have the same causes, then the history of Mars provides a window into Earth’s possible future. 
Doing comparative planetology, and understanding the dynamics of planetary climate change, therefore becomes the new rationale for going to Mars.

The planetary imagination

To twenty-first century observers, seeing canals on Mars is a bit like discerning a rabbit on the Moon: a figure of the imagination, a matter of folklore and cultural mythology. It is hard to realize that less than a century ago the issue of Martian canals was a matter of science, not fiction, filling the pages of scientific journals and the popular press. The idea of a plurality of inhabitable worlds has long been debated in speculative philosophy, starting with Greek philosopher Anaximander (610-546 B.C.). Based on observation and calculation, Nicholas Copernicus (1473-1543) placed the sun at the center of the solar system, relegating the Earth to merely another orbiting planet. The Copernican theory provided the impetus for Johannes Kepler (1571-1630) to describe precisely the orbits of the planets, although the German astronomer was “almost driven to madness” by the complexity of Mars’s orbit. With the development of the telescope in the seventeenth century, Mars began to be perceived as the most likely candidate in the solar system for harboring an extraterrestrial civilization. Giovanni Cassini (1625-1712) and Christiaan Huygens (1629-1695) published detailed images of the Martian surface that drew on terrestrial analogies: polar caps, “seas” and “oases” became familiar features of the Martian terrain. By the eighteenth century, the plurality-of-worlds hypothesis had been put on a sound scientific footing and was debated by scientists and philosophers alike. Mars’s surface was described with increasing precision, and almost all astronomers who had modern instruments at their disposal made observations of the planet. The mapping of Mars focused primarily on global cycles of temperature, hydrology, and presumed biological activity. But it was Giovanni Schiaparelli’s observation of a network of lines on the surface of Mars in 1877 that sparked the most intense controversy. 
Schiaparelli himself was agnostic about what his canali signified: were they “channels” connecting what was described as oceans, continents, and islands, or “canals” built by an alien civilization?

Robert Markley devotes almost three chapters of Dying Planet to the canal controversy. The canal thesis, forcefully defended by Percival Lowell, had all the ingredients of a great scientific controversy. It could be boiled down to a simple thesis (canals meant intelligent Martians) and integrated into a grand narrative of planetary evolution (canals were built to counter the desertification of a dying world.) Lowell’s theory operated within the bounds of accepted scientific practice (it used all scientific observations available at the time) and mobilized the rhetoric of scientific objectivity to challenge the values, assumptions, and methods of his opponents (whose refusal to envisage life outside of Earth was denounced as religiously motivated.) Part of the fascination with Mars stemmed from the implicit and explicit lessons which scientists and their readers drew from Lowell’s vision of an advanced civilization struggling to stave off ecological disaster. Lowell’s grand narrative of a dying planet found echoes in the emerging literature of science-fiction writers who mixed the literary genres of utopian novels, adventure narratives, and philosophical speculation. Although H. G. Wells’ The War of the Worlds is by far the best known of the turn-of-the-century science-fiction novels, it was by no means an isolated production. Wells’s novel offers a classic dystopian inversion of European imperialism: his blood-drinking Martians pose a horrific challenge to bourgeois complacency, even as they give shape to late Victorian culture’s masochistic fascination with its own demise. Kurd Lasswitz (1848-1910) describes a more peaceful encounter between humans and a more advanced Martian civilization in his 1897 novel Auf zwei Planeten, published in English in 1971 with a foreword by Wernher von Braun. The book has the Martian race running out of water, eating synthetic foods, traveling by rolling roads, and utilizing space stations. 
Alexander Bogdanov (1873-1928), a Russian physician, philosopher, and Bolshevik revolutionary, describes his Red Star (1908) as a collectivist utopia in the full throes of resource exhaustion and planetary decline. The vanguard socialism of the Martians is carved into the landscape of their planet, with the canals as both cause and effect of Martian collectivism.

How to prove a negative?

Until the 1930s, the canal thesis had enough currency within the scientific community to reinforce a widespread agnosticism about the possibility of intelligent life on Mars. Even as the canal builders retreated into science fiction, the idea of “primitive” life on Mars persisted. Lowell’s paradigm of a dying planet influenced scientific speculation about the composition of the Martian atmosphere, the character of its surface, and the nature of its putative life-forms. After World War II, advances in radiometry and the study of the infrared spectrum gave astronomers new tools with which to study Mars. As the intelligent-life hypothesis became increasingly improbable, scientists still deduced from the alleged existence of ice, water, and an atmosphere the possibility of vegetative life in the form of lichens and algae. It is hard to prove a negative: the inability to detect signs of life does not signify that life does not—or did not—exist on Mars. Even after the Mariner missions in the mid-1960s brought back photographs showing Mars’s barren surface as inhospitable to life, scientists speculated that oxygen might still be captured in the polar caps, and that bacterial forms of life may have existed in the past and might still be present. Evidence suggested that Mars three billion years ago was comparatively warm and wet. Did life exist in the very distant past on this more hospitable Mars? How had the planet died? Could micro-organisms survive in extreme conditions, as is the case in volcanic or deep sea environments on Earth? A whole discipline, exobiology, grounded on the premise that life may exist beyond Earth, concentrated on the search for signs of life and the study of habitable environments. The ambiguous results of the life-detection experiments conducted during the Viking missions which landed on Mars in 1976 led scientists to lobby for more sophisticated microbiological testing on future NASA landers. 
The search for life remains a crucial selling point for plans to explore Mars by sending automated rovers and, ultimately, boots on the ground.

In 1948, inspired by the novel of his compatriot Kurd Lasswitz, the rocket physicist and space scientist Wernher von Braun wrote the technical specification for a human expedition to Mars, The Mars Project. In the 1970s and early 1980s, the American astronomer and science communicator Carl Sagan was the most vocal advocate of space exploration and the search for extra-terrestrial intelligent life. Again, he was inspired by the science-fiction novels he read as a teenager: a map representing Edgar Rice Burroughs’s vision of Mars hung on the hallway wall outside his office for more than twenty years. Just as the canals occupied the attention of a generation of scientists, Burroughs’s novels about John Carter and his adventures on the planet he calls Barsoom dominated the interplanetary fiction of the first half of the century. Literature inspired by Mars includes the good, the bad, and the ugly: for every Ray Bradbury with his Martian Chronicles (1950) or Isaac Asimov with The Martian Way (1952), how many pulp fictions or comic-book adventures featuring green aliens laying eggs and four-armed tetrapods shooting laser beams? As Robert Markley states in his introduction, “anyone who has read a lot of science fiction realizes that much of it is pretty bad.” But the appeal of the genre lies elsewhere: “science fiction does not represent historical experience, but generates simulations of what that experience may become.” Ray Bradbury once said that “Burroughs has probably changed more destinies than any other writer in American history.” The same could be said about himself. Generations of adults (mostly males) had their formative years influenced by the likes of Ray Bradbury, Isaac Asimov, and Arthur C. Clarke. Considering that space exploration lacks the support of vested interests outside of the aerospace industry, science-fiction novels created a constituency for sending missions to the red planet and beyond.

The Mars Society

Inspired by the Lowellian paradigm of a dying planet bearing the mark of ancient civilizations, classic science fiction was obsessed with the idea of intelligent life on Mars. More recent science fiction plays with the idea of bringing life and civilization (back) to Mars: by sending manned missions, establishing a permanent presence, and terraforming the planet. As an emblem of humankind’s interplanetary future, Mars is described both as a dead world that resists human efforts to explore, colonize, and transform it and as the site of humankind’s next giant leap in its centuries-long evolution. These fictions are haunted by the dark underside of colonization and extractive capitalism, and often demystify the masculinist narrative of the conquest of space with a vision of failed social order and technoscientific hubris. In Kim Stanley Robinson’s trilogy, Red Mars (1992), Green Mars (1993), and Blue Mars (1996), the settlement and terraforming of Mars is chronicled through the personal and detailed viewpoints of a wide variety of characters spanning almost two centuries. Ultimately more utopian than dystopian, the story focuses on egalitarian, sociological, and scientific advances made on Mars, while Earth suffers from overpopulation and ecological disaster. These plans to colonize Mars are no longer science fiction: established in 1998 by aerospace engineer Robert Zubrin and backed by multibillionaire Elon Musk, the Mars Society, a nongovernmental organization, has set itself the goal of sending humans to Mars and establishing a permanent colony in the very near future. In an industry where NASA remains the most expensive game in town, the “new space” industry that operates on a “faster, better, cheaper” basis promotes alternative, low-cost ways of getting humans to Mars and sustaining them while they stay on the planet. Robert Markley, who published Dying Planet in 2005, has reservations about the whole endeavor. 
In his opinion, the Mars Society’s vision of a new American frontier, or a new manifest destiny, “is founded on dubious or simplified readings of American history that repress both the human and ecological consequences of conquest and colonization.” As he concludes, “the ultimate challenge posed by planetary transformation is ultimately as much ethical as it is scientific.”

The Land of Kush

A review of Chosen Peoples: Christianity and Political Imagination in South Sudan, Christopher Tounsel, Duke University Press, 2021.

On July 9, 2011, South Sudan celebrated its independence as the world’s newest nation. One name considered for christening the country was the Kush Republic, after the Kingdom of Kush that ruled over part of Egypt until the 7th century BC. According to historians of antiquity, Kush was an African superpower whose influence extended to what is now called the Middle East. Placing the new nation under the sign of this prestigious ancestor was seen as particularly auspicious. But for many people the name Kush has been connected with the biblical character Cush, son of Ham and grandson of Noah in the Hebrew Bible, whose descendants include his son Nimrod and various biblical figures, including a wife of Moses referred to as “a Cushite woman.” A prophecy about Cush in Isaiah 18 speaks of “a people tall and smooth-skinned, a people feared far and wide, an aggressive nation of strange speech, whose land is divided by rivers” that will come to present gifts to God on Mount Zion after carrying them in papyrus boats over the water. For many South Sudanese at independence, Isaiah’s ancient prophecy applied directly to them, to the point that the newly installed President Salva Kiir chose Israel as one of his first destinations abroad. Churchgoers also read echoes of their fight for sovereignty and independence in various passages of the Bible. Christian southerners envisioned themselves as a chosen people destined for liberation, while the Arab and Muslim rulers in Khartoum were likened to oppressors in the biblical tradition of Babylon, Egypt, and the Philistines. John Garang, leader of the Sudan People’s Liberation Army/Movement (SPLA/M), was identified as a new Moses leading his people to the promised land. The fact that he left the reins of power to his second-in-command Salva Kiir before independence, just as Moses passed leadership to Joshua before the Israelites entered the land of Canaan, was interpreted as a further fulfillment of the prophecy.
Certainly God had a divine plan for the South Sudanese. For some Christian fundamentalists, the fulfillment of Isaiah’s prophecy was a sign of the imminent Second Coming of Jesus Christ, whom they identified with the Messiah foretold by Isaiah, the king in the line of David who would establish an eternal reign upon the earth.

Isaiah’s prophecy

This moment of bliss and religious fervor did not last long. Conflict soon erupted between forces loyal to President Salva Kiir (of Dinka ethnicity, South Sudan’s largest ethnic group) and former Vice President Riek Machar (of Nuer ethnicity, the country’s second-largest ethnic group). The South Sudanese Civil War that ensued killed more than 400,000 people and led some 2.5 million to flee to neighboring countries, especially Uganda, Sudan, and Kenya. Various ceasefire agreements were negotiated under the auspices of the African Union, the United Nations, and IGAD, a regional organization of eight East African nations. The latest truce, signed in February 2020, led to a power-sharing agreement and a national unity government that was supposed to hold, in 2023, the first democratic elections since independence. Again, some preachers and religious commentators interpreted these internal divisions and ethnic strife through biblical metaphors. As in earlier periods, the war produced a dynamic crucible of religious thought. Supporters of civil peace called on South Sudanese not to divide themselves like the tribes of Israel, or recalled Paul’s injunction in the Epistle to the Galatians to become one in Jesus by setting aside divisive identities. “Let us take the Bible instead of the gun,” exhorted a senior official at the Ministry of Religious Affairs. “Shedding blood is the work of the devil, and anybody who is killing people is doing the work of the devil,” declared another cleric. The civil war was interpreted as an opposition between right and wrong; only this time the forces of evil were internal to South Sudan, not projected upon the northern oppressor. The most vindictive denounced their enemies by comparing them to the Pharisees or even to Herod. God was invoked in one breath to argue for cultural unity (“all are one in Christ”) and in another for cultural diversity (tribes are “gifts of God”).
These conflicting arguments show that, in all circumstances, the biblical referent remains central to the South Sudanese national imagination. Meanwhile, the “land of milk and honey” remains one of the poorest countries on earth, with all the characteristics of a failed state.

For some people, interpreting historical events along religious lines is not only irrational and delusional, but also dangerous and divisive. Looking at history from God’s perspective can lead to a fatalistic view of life and human action. Having “God on our side” has served as justification for some of the worst atrocities in human history, and the Westphalian system of nation-states enshrined in the United Nations Charter was originally created to bring an end to the religious wars that plagued Europe in the sixteenth and seventeenth centuries. According to modern views, Christian interpretation of biblical prophecies should remain in the pulpit, and clerics should refrain from interfering in the political issues of the day: “The more politically involved the church has become, the less spiritually involved the church is.” In the case of Sudan, religion was mobilized both in the North and in the South to bolster national identities and sharpen racial differences. Leaders in Khartoum have attempted to fashion the country as an Islamic state, making Islam the state religion and sharia the source of the law since 1983. Meanwhile, southern Sudanese have used the Bible to provide a lexicon for resistance, a vehicle for defining friends and enemies, and a script for political and often seditious action in their quest for self-determination and sovereignty. But Christopher Tounsel does not see religion as the source of the civil war that led to the independence of South Sudan. After all, rebels in the Sudan People’s Liberation Army (SPLA) were first inspired by Marxism and backed by the socialist regime of Mengistu in Ethiopia. John Garang believed in national unity and a secular state that would guarantee the rights of all ethnic groups and religions in a “New Sudan” conceived as a democratic and pluralistic state. Theology was only one of the discourses that informed the ideological construction of the South Sudanese nation-state.
Race and, after 2005, ethnicity were also important components of southern identities, working to include individuals in collective bodies and to distinguish them from others. From this perspective, the author cautions “against a limited view of South Sudanese religious nationalism as one based exclusively in anti-Islamization.”

A crucible of race

In Chosen Peoples, Christopher Tounsel presents “theology as a crucible of race, a space where racial differences and behaviors were defined.” Rather than approaching race and religion, the two elements most often used to distinguish North and South Sudan, as separate entities, he analyzes religion as a space where race was expressed, defined, and animated with power. Tounsel is particularly interested in how Christianity shaped the identity of the region’s black inhabitants (as opposed to Sudan’s Arab-Muslim population), and in how the notion of God’s chosen people (or peoples) turned the Bible into a “political technology” in their fight against the oppressor. The first Catholic missionaries, Jesuits, settled in South Sudan in the middle of the 19th century, following the creation by Pope Gregory XVI of the Vicariate Apostolic of Central Africa in 1846. The Protestants arrived in 1866 through the British and Foreign Bible Society. However, this initial period of mission work was interrupted for nearly thirty years by the Mahdist Wars that bloodied Sudan in the last decades of the century. When the British regained control of the region under the Condominium Agreement signed with Egypt in 1899, they facilitated the reestablishment of missions there in order to turn South Sudan into a buffer zone that could stem the spread of Arabic and Islam up the Nile. The missionary work carried out there in the first half of the 20th century, mainly by Roman Catholics, the Church Missionary Society (CMS), and the United Presbyterian Mission (also known as the American Mission), went beyond its classic dimensions (translating the Bible, identifying socio-linguistic groups, schooling a new local elite): it also included a strong martial dimension, playing both on the symbolism of the crusade and on the struggle against Muslim slavery.
Through a case study of the Nugent School, created by the CMS in Juba in 1920, Tounsel shows that ethnic identities were also reinforced through the teaching of local vernacular languages and the definition of self-contained tribal units based upon indigenous customs, traditional usage, and competitive antagonisms (a Nuer-English dictionary included the descriptive phrase “my cattle were stolen by Dinka”). Ethnic conflict between indigenous identities, seen as natural and inevitable, could only be overcome by a common Christianity, while Islam and Arab culture were portrayed as alien and hostile.

After Egypt’s 1946 effort to assert its sovereignty over Sudan, Britain reversed course and conceded Sudan’s right to self-determination and, ultimately, independence, which was proclaimed on January 1, 1956. The almost complete exclusion of southerners from the “Sudanization” policies of the 1950s fueled a growing sense of southern grievance and political identity. The 1954 creation of the first all-Sudanese cabinet under al-Azhari’s National Unionist Party, while the southern Liberal Party sat in opposition, accelerated southern political thinking toward self-determination and federalism. It was in this context that a mutiny of the Equatorial Corps occurred in 1955 at Torit, in the southern province of Equatoria. The Equatorial Corps, composed entirely of Christian soldiers (around 900), had been created by Sir Reginald Wingate under the Anglo-Egyptian condominium at the end of the 1910s: a bold decision in a context where military service had until then been reserved for Muslims. It was intentionally divided along ethnic lines: most of the corps was recruited from the Lotuho and other small eastern ethnic groups on the Sudanese slave frontier that were perceived to have “natural” military qualities. The mutiny, motivated by a plan to transfer some units to the North and replace them with northern soldiers, was sparked by an incident in which an Arab soldier allegedly insulted a black soldier by calling him a slave (abid). This term, then commonly used by Muslim Sudanese to denigrate black populations, testified to the very slow disappearance of slavery in the region. Sudanese slavery had even experienced a surge in the 1860s and 1870s with the progress of navigation on the Nile, and had still been largely tolerated under British supervision until the beginning of the 20th century, after the end of the Mahdist wars.
Largely confined to Equatoria, where most of the mutineers were based and originated, the mutiny was quickly put down, but it then led to the First Sudanese Civil War, which drew on the same crucible: Christian identity, racial confrontation, ethnic divisions, and the rejection of slavery and Muslim domination.

The First and Second Sudanese Civil Wars

The First Sudanese Civil War (1955-1972) considerably strengthened the biblical reference within the South Sudanese national emancipation movement. It was widely regarded as a religious confrontation between a Muslim government in Khartoum and its armies, and Christian liberation fighters in the South. Religious thought provided an important spiritual lexicon for the racial dynamics of the war, becoming a space for southerners to articulate the extent of racial division and hostility. The Sudanese government’s decision to Arabize school curricula and gradually ban foreign missions (definitively expelled in 1964) not only amplified Christian proselytizing by local pastors but also provided new recruits for the South Sudanese resistance. At the beginning of the 1960s, the southern opposition organized itself militarily and acquired propaganda organs such as the Voice of Southern Sudan, published from London with the support of missionary societies. In 1967 the Youth Organ Monthly Bulletin of the Sudan African National Union (SANU) published a rewriting of Jeremiah’s Book of Lamentations in which Israel was replaced by South Sudan and Babylon by Khartoum. This type of parallel was drawn more and more frequently, giving the conflict the appearance of a war of religion. While Arabs were demonized as inhuman agents of Satan, southerners framed themselves as God’s beloved people, analogous to the Israelites. The war witnessed the creation of a theology which maintained that providence was leading southerners to victory. When the first civil war ended in 1972, the biblical reference was firmly rooted, and racial and religious identities were closely interwoven. For Sudanese refugees, returning home was presented as the end of the exile in Babylon. Southern intellectuals, rather than approaching race and religion as mutually exclusive, used theology as a crucible through which racial identity was defined.

The peace agreement signed in Addis Ababa in 1972 provided for autonomy for South Sudan and religious freedom for non-Muslim populations. Despite their desire for independence, SANU leaders agreed to compromise, but multiple violations of the agreement, as well as the Sudanese government’s decision to impose Islamic law, contributed to reigniting the conflict in 1983 with the creation of the Sudan People’s Liberation Movement and Army (SPLM/A). The fall of Ethiopia’s Mengistu regime in 1991 was the second formative event, depriving the southern opposition of operational support and ideological justification. Though the SPLM never officially affiliated with any religion and maintained a policy of religious toleration, it increasingly turned to Christianity to mobilize and garner support at home and abroad. The SPLA was transformed into a largely Christian force that explicitly used Christian themes and language as propaganda. Apart from the Bible, few other sources were available with which to interpret their position. Episodes from biblical Israel’s history, like David’s clash with Goliath or Moses leading his people to the Promised Land, became popular narratives fitted to the modern situation. It is in this context that Isaiah’s prophecy concerning Cush was referenced as foretelling ultimate victory. John Garang, a secularist at the beginning of the war, saw the utility of including Cush in domestic politics. He also tried to mobilize support abroad, appealing to Pan-Africanism, Evangelical solidarity, and humanitarian revulsion at modern slavery. American human rights activists pressured the US government to get involved, framing the conflict as a war between Arabs and Africans, Christianity and Islam, masters and slaves. Their advocacy and humanitarian engagement influenced the manner in which the conflict was represented in mainstream Western media.
Beginning in the 1990s, Sudan entered the American evangelical mind as a site of Christian persecution and possible redemption. President George W. Bush appointed Senator John Danforth, an ordained Episcopal minister, as his special envoy for Sudan. Without Washington’s support, the Comprehensive Peace Agreement signed in 2005 and the ensuing independence of South Sudan in 2011 would never have taken place.

A failed state

Christopher Tounsel takes a neutral perspective on the role of religion in framing South Sudan’s struggle for independence. He does not see religion as a “veil” for material interests or as an “opium” that intoxicated people into a war frenzy. He treats with consideration and respect the religious narrative that interprets South Sudanese nationalism as a spiritual chronicle inspired by the Bible and corresponding to God’s plan. Of course, he does not himself offer a religious interpretation of historical events. The views he presents are those of local religious actors: mission students, clergy, politicians, former refugees, and others from a wide range of Christian denominations and ethnicities. He strictly adheres to the role of the professional historian, crafting a rigorous history of religious nationalism: analyzing many printed sources and archives exploited here for the first time; collecting oral testimonies from clerical and non-clerical figures in Juba; offering his own interpretation after discussing other viewpoints present in the academic literature. Only in the acknowledgments section does he make reference to his own religious affiliation, giving thanks to “my Lord and Savior Jesus Christ.” But if we consider the devastating toll that successive civil wars have taken on the local population, one may see the role religion has played in a more negative light. Were it not for a biblical narrative of suffering and redemption, a South Sudanese state would never have seen the light of day. There are serious concerns about the viability of a landlocked, ethnically polarized country that political scientists subsume under the category of failed state. Religious faith may have been useful in forging a common identity against an oppressor perceived as Arab and Muslim, but it could not prevent the newly independent state from plunging into prolonged ethnic warfare.
And the American Evangelicals who viewed South Sudan as the fulfillment of Isaiah’s prophecy and a sign of Christ’s second coming were not simply delusional: they added fuel to the fire in an explosive crucible of race, religion, and ethnicity.