The Coder Who Came in from the Cold

A review of From Russia with Code: Programming Migrations in Post-Soviet Times, Mario Biagioli and Vincent Antonin Lépinay eds., Duke University Press, 2019.

From Russia with Code is the product of a three-year research effort by an international team of scholars connected to the European University at Saint Petersburg (EUSP). It benefited from the patronage of two important figures: Bruno Latour, who pioneered science and technology studies (STS) in France and oversaw the creation of a Medialab at Sciences Po in Paris; and Oleg Kharkhordin, a Russian political scientist with a PhD from the University of California at Berkeley who served as EUSP’s rector for most of the study’s duration. Based on more than three hundred in-depth interviews conducted from 2013 through 2015, the research project also benefited from a rare window of opportunity offered by the political conditions prevalent at the time. Supported by a consortium of Western research institutions, it was partially funded by a grant from the Ministry of Education and Science of the Russian Federation for the study of high-skill brain migration. It could build on the solid foundation of EUSP, a private graduate institute whose academic independence is secured by an endowment fund that is one of the biggest in the country. The brain drain of IT specialists was obviously a matter of concern for the Russian authorities, as surveys showed that in 2014 the emigration of Russian scientists and entrepreneurs was by a wide margin the highest since 1999. The movement was amplified after 2014 by Russia’s decision to annex the Crimean Peninsula and, in 2022, by its all-out war of aggression against Ukraine. Conditions for fieldwork-based studies and international research projects in Russia would certainly be different today. The book’s chapter on civic hackers illustrates how fast the ground has moved in the past ten years: most of the civic tech projects it describes were affiliated with the foundation created by Alexey Navalny, the Russian opposition leader who was detained in 2021 and died in a high-security prison in February 2024.

Preventing the brain drain

The research questions framing the project demonstrate how social science can contribute to policy discussions while translating practical issues into scholarly interrogations. The concerns of the Russian authorities that sponsored the project are well reflected in the topics covered and the questions addressed. How can Russia prevent or reverse the brain drain that was perceived as a direct threat to the nation’s sovereignty? How to avoid dependence on Western imports and cultivate world leaders in an industry dominated by the GAFA? Is import substitution in the IT sector a viable strategy, or should the country rely on foreign direct investment and integration into global value chains? Could Russia create its own version of Silicon Valley by encouraging the clustering of industries in special economic zones and technoparks? These questions are reframed and displaced through the lenses of the disciplines mobilized by the members of the research team: STS, transition-to-market theory, economic geography, innovation policy studies, corporate management, migration studies, and so on. But mostly, From Russia with Code helps answer the questions that readers familiar with IT know all too well: why are Russian programmers so talented and prized by the market? What explains their unique combination of skills, and how can these skills be integrated into a foreign business setting? Is it true that their technical prowess is offset by a lack of managerial skills and poor entrepreneurial spirit? The list of famous Russian IT developers includes Andrei Chernov, one of the founders of the Russian Internet and the creator of the KOI8-R character encoding; Andrey Ershov, whose research on the mathematical nature of compilation was recognized with the prestigious Krylov Prize; Mikhail Donskoy, a leading developer of Kaissa, the first computer chess champion; Alexey Pajitnov, inventor of Tetris; and Yevgeny Kaspersky, founder of cybersecurity and anti-virus provider Kaspersky Lab.
Russia is one of the few countries not dominated by Google, Facebook, and WhatsApp: it has developed its own search engine (Yandex), social network (VKontakte), and messaging app (Telegram). A last question lurks in readers’ minds: what are Russian hackers really up to, and should we be afraid of their cyberattack capabilities?

The standard diagnosis on Russia’s IT capacity is framed by transition theory and posits that “Russians historically have been good at invention but poor at innovation.” Russian computer scientists built successful academic careers outside their homeland, and many global technological giants such as Apple, Google, Intel, Microsoft, or Amazon retain Russian programmers as valuable talents. Yet Russian IT entrepreneurs are scarce both in Russia and abroad, and outstanding success stories are the exception rather than the rule. It took one generation to produce a Sergey Brin, co-founder of Google, who arrived in the United States at the age of six; his Russian Jewish parents, typically for their milieu, pursued teaching and research careers instead of turning to the corporate world. The virtuosity of Russian software programmers is often explained by their high-level training in mathematics and pure science. The Soviet Union maintained a top-class scientific apparatus, from the fizmat model high schools specializing in math and physics to the dense network of research institutes, science cities, and elite academic institutions like the Academy of Sciences. This strong institutional basis translated into a high number of Nobel prizes and science olympiad laureates. Russian IT developers are praised for their deep interest and immersion in research, an inventive turn of mind, the ability to think independently and offer innovative solutions, and their intuitive grasp of complex problems. But they are also lambasted for their lack of management and entrepreneurial skills. Management was something to which Soviet scientists and science students had virtually no exposure. Even now, business culture is still perceived by many in the community as a superfluous and even disingenuous element. According to the standard view, Russian tech specialists are often interested mainly in new and technically exciting projects, to the point where they disregard the interests of their clients.
They tend to think that if an idea is good technically, it will automatically translate into commercial success. They are criticized for a lack of business acumen, poor business etiquette, a certain intolerance for risk, a limited sense of the global market, and disinterest in management issues, which they see as “bullshit.”

Lack of management skills

The studies assembled in From Russia with Code both validate and complicate this diagnosis. Russian IT specialists are certainly heirs to a tradition that values the plan over the market, pure science over applied technology, and developing elegant responses to abstract questions over providing practical solutions to specific problems. Technical skills can be acquired through brute force and a sound foundation in basic science; management culture takes much longer to cultivate and relies more on “soft skills.” The history of computer science in the Soviet Union lies at the root of the differences in programming cultures between East and West. As long as informatics remained a basic science akin to applied mathematics, Soviet scientists remained at the forefront of the discipline. Although cybernetics was initially perceived as an American “reactionary pseudoscience,” it quickly became part of a vision of a socialist information society. As in the United States, early computers were intended for scientific and military calculations. A universally programmable electronic computer known as MESM was created in 1950 by a team of scientists directed by Sergey Lebedev at the Kiev Institute of Electrotechnology. Electrical engineering and programming were among the few careers in the Soviet Union relatively open to Jews and to women: hence their large numbers in these professions. Engineering education was fairly broad, with heavy emphasis on mathematics and physics, but without much foundation in computers: according to one former student, “learning to program without computers was akin to learning to swim without water.” Hardware limitations forced Soviet programmers to write programs in machine code until the early 1970s. By that time, the Soviet government had decided to abandon the development of original computer designs and to encourage the cloning of existing Western systems.
A program to expand computer literacy in Soviet schools was one of the first initiatives announced by Mikhail Gorbachev after he came to power in 1985. A network of after-school education centers offering programming classes for children led to the wide popularity of BASIC and other programming languages.

A half century’s worth of Soviet experience with computing did not just disappear overnight with the end of the Soviet Union. Russians continued to play by the old rules they had internalized in the Soviet economy. The technical skills that Russian software programmers are internationally appreciated for and identified with are skills they have developed through the very specific Russian (and formerly Soviet) educational system. A case study of Yandex, the company behind Russia’s main search engine and the fourth-largest in the world, illustrates how coding socializes IT workers and creates communities of practice aligned with corporate objectives. Computer code is written in languages that must be executed by machines, leaving no space for semantic ambiguities. At the same time, and for the same reason, there is a specific sociality to code, to the extent that lines of code also encapsulate relationships of collaboration, training, and skill transfer. At Yandex, young recruits are encouraged to immerse themselves in the source code of the company and to spot errors or typos for debugging. This way they learn the conventions of the community, all of which are inscribed in the codebase. Face-to-face interactions and oral communication are limited, as developers work from different office buildings and spend most of their time facing their computer screens, writing code or discussing it in chat windows. Yandex has a tradition of writing code without including comments in natural language: the code should be able to “speak for itself” by being accurate, simple, and “clean.” The very first thing every new employee has to learn is how to make code readable and to improve its utility for human readers. As in other programming communities, there is a difference in style between the “mathematicians” who prefer high-level languages such as Python and the “engineers” who favor low-level languages like C++.
But projects at Yandex often mix the two approaches, while the corpus they create remains open to criticism and correction. All employees have access to the full codebase of the company and are free to comment on ongoing projects, upholding long-held principles of communal help that hark back to an idealized Soviet past.
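The “code should speak for itself” convention described above can be made concrete with a small, hypothetical Python example (not drawn from Yandex’s actual codebase): the same filtering-and-ranking logic written once in a comment-reliant style and once so that naming and structure carry the meaning on their own.

```python
# Hypothetical illustration of "self-documenting" code; not Yandex code.

# Comment-reliant style: the reader needs the comment to grasp intent.
def f(xs):
    # keep items whose score (second field) exceeds the threshold,
    # then sort ascending by that score
    return sorted([x for x in xs if x[1] > 0.5], key=lambda x: x[1])

# Self-documenting style: names and structure replace the comment.
RELEVANCE_THRESHOLD = 0.5

def rank_relevant_results(results):
    relevant = [r for r in results if r[1] > RELEVANCE_THRESHOLD]
    return sorted(relevant, key=lambda r: r[1])

docs = [("a", 0.9), ("b", 0.2), ("c", 0.7)]
# Both versions behave identically; only readability differs.
assert f(docs) == rank_relevant_results(docs) == [("c", 0.7), ("a", 0.9)]
```

The two functions are interchangeable at runtime; the point of the convention is that the second version stays intelligible even when, as at Yandex, new recruits must learn the codebase with little face-to-face explanation.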

Smart cities and technoparks

A key concern of policymakers is to create conditions under which the IT industry can flourish. Interventions to promote public-private partnerships and foster cooperation between institutions and actors occur at different scales, from macro to micro: special economic zones, regional corridors, smart cities, creative hubs, technoparks, startup incubators, rentable work space, and so on. Russia can build upon a model of science promotion that has concentrated resources in isolated science cities and non-teaching research institutions such as the Academy of Sciences. It has been successful at generating scientific breakthroughs and achieving technological milestones in fields such as space exploration and the nuclear arms race. However, it has failed consistently in translating scientific discovery into technological innovation and market success. Commercialization was never a priority in the planned economy. In the IT sector, where innovation was increasingly driven by the market, the Soviet Union soon lost its lead in basic science and cybernetics and was reduced to licensing or copying Western technologies. Emerging from the ruins of the Soviet Union, the Russian state had its own particular vision of IT development. It aimed not simply to imitate the West, but to keep innovation within state control through authoritarian policy decisions and administrative guidance. But instead of supporting existing science cities and research institutions, the state decided to build a new technological apparatus separate from the Soviet one and inspired by the Silicon Valley model. As a result, Russia got the worst of both worlds: increased competition and the profit motive led many IT professionals to leave the country in search of more remunerative opportunities, while domestically industrial policy gestured toward Silicon Valley but continued to follow the template of the old Soviet science apparatus.
Created with great fanfare by then President Dmitry Medvedev, the Skolkovo “Innovative City” is almost impossible to find on a map and very difficult to reach from Moscow. At the time of the book’s writing, it was criticized for “inefficiency, corruption, high rents, a complicated architectural plan, and a failing program for the support of startup companies.” Technoparks have been established in many other Russian cities to host both IT startups and larger technology companies. But local authorities are competing against each other through incentive and subsidy programs, while thousands of IT specialists have left the country and are likely never to return. Meanwhile, grassroots initiatives and homegrown developments were annihilated by the state’s attempt to regain control over peripheral regions. In the Russian Far East, a thriving ecosystem built around the online trading of used Japanese cars was suppressed by one stroke of a pen when the Russian state decided to impose a hefty levy on imported cars more than five years old. Other experiments, such as Kazan’s self-branding as “the capital of the Russian IT industry,” have met with more support from the centralizing state, whose priorities are aligned with the interests of local politicians in Tatarstan. However, at present the city plan remains more a layout than a fully functional smart city, and the reader cannot escape the feeling of being led through a Potemkin village by an overly enthusiastic research guide. It is easy to adopt the jargon of IT success and talk the talk of startup promotion. To walk the walk is another matter.

Russia’s Soviet heritage continues to linger in the present. But the Western capitalist model exemplified by Silicon Valley does not represent the sole alternative. Not all Western countries share the same approach to running an IT business. Elements of the socialist model, such as an orientation toward social justice, have influenced policies and mindsets in Scandinavia, where Russian expatriates appreciate the communalist ethos and the family-friendly environment. Other Russian migrants who have relocated to Boston or to Israel place high value on a corporate capitalist model of large organizations that are both risk-averse and profit-oriented. As the last article in the book concludes, “the entrepreneurial capitalism of Silicon Valley is not the only game in town.” There are circumstances when a “socialist” technological model or a “corporate” capitalist model is more applicable than the purely “entrepreneurial” model of IT startups and venture capital. From a Russian perspective, it makes sense to cultivate the tradition of high technical skills and complex problem-solving that constitutes Russia’s Soviet heritage. Business models that originate in the academic community are quite distinct from the capitalist motive of profit generation. Even in the West, open source programming and the free software movement have led to sustainable ventures and now undergird a vast portion of today’s internet. Moreover, the lack of entrepreneurial spirit among Russian IT specialists may be due to institutional factors: the lax attitude toward intellectual property, the absence of trust among young professionals, the relative isolation of Russia from global trade patterns, the absence of venture capital and related services to scale up enterprising businesses, the shadow of the criminal economy, and so on. According to the authors, the brain drain narrative also needs to be complicated.
Experiences of work migration by IT professionals from India and China have demonstrated that the “brain drain” is not an unfixable curse and can instead be viewed as “brain circulation,” with people looking for better conditions regardless of the country. Here again, the profit motive is not the only driver of individual decisions. Student and young-researcher mobility is increasingly part of the academic curriculum, and the choice of destination is often motivated by existing collaborative networks or diasporic connections. Scholars get a first taste of academic life abroad by spending a few months as a postdoctoral researcher or a guest lecturer before considering more long-term migration options. The same step-by-step process of migrating can also be found in a corporate environment, where the decision to relocate is preceded by offshoring contracts and temporary missions. The story of Russian Jewish IT practitioners migrating to Boston during the Soviet period dispels the myth of the “tech maverick” and shows that migrants often have to retrain and upgrade their skill sets before they can find employment in US companies. The concept of brain drain assumes a kind of inherent and fixed value to the “brains” that leave their homeland and settle abroad. In practice, however, migration often leads to occupational downgrading, deprofessionalization, and de-skilling, as highly educated graduates lacking connections and job-search skills end up in low-skilled work or, at best, in “upper-middle tech” positions at big US corporations. The failure to produce technological entrepreneurs among Russian immigrants should not be read as a result of their inability to operate in a capitalist economy or as a lack of entrepreneurial skills. Considering the limited options offered to migrants in a new environment, settling for a mid-level position in a large corporation instead of starting a new high-risk venture seems like a reasonable choice.

The shadow of cyber criminality

In addition to the three models identified by the authors—socialist, entrepreneurial, and corporate—there is a fourth model that they do not consider in their essays: the criminal one. Much late-Soviet entrepreneurial activity emerged as an antidote to the country’s collapsing economy, and “dishonest speculation” was widely seen as the predominant form of engaging in business. Only a fine line separated semi-legal market practices from outright criminal activity, and many young professionals equipped with IT skills were ready to cross it. The same skills that made fizmat school graduates valuable on the IT job market could also be turned toward quick gains in the shadow economy. During Russia’s market transition, the grey zone between legitimate, semi-legal, and illegal activity led to surprising developments, such as a publicly organized conference of avowed criminals that took place at the Hotel Odessa in May 2002. The First Worldwide Carders Conference was convened by the administrators of CarderPlanet, a website on the dark web that specialized in mediating between vendors and purchasers of stolen credit card data. In the early age of e-commerce, when American banks and card issuers lagged behind the chip-and-PIN technology that their European counterparts had developed, “carding,” or credit card fraud, became a very lucrative activity. Russian fizmat kids with access to a computer and an Internet connection turned into early-day hackers and cybercriminals. CarderPlanet became the breeding ground of a whole generation who turned to cybercrime for lack of better opportunities in the context of a crumbling economy and a disintegrating state. Later on, these hackers turned to ransomware as the preferred mode of attack and to bitcoin as the privileged means of payment. Russian cybercriminality cannot be understood without appreciating its relationship to Russian national security interests.
Early on, the FSB, Russia’s secret service, made it clear that any criminal operation against domestic state interests was off-limits and would be met with strong retaliation. Later on, criminal gangs were mobilized into cyberattacks against newly independent states such as Estonia and Georgia. Members of cyber gangs were also recruited into notorious state-backed hacking teams such as APT28, linked to GRU Unit 26165. Cybercriminals hide behind anonymity services, encrypted communications, middlemen, puppet accounts, and pseudonyms. This makes it challenging for law enforcement agencies, let alone social scientists, to track them or describe their practices. A few facts highlighted by From Russia with Code might, however, be relevant here. Like conventional Russian software developers, Russian cybercriminals and hackers are likely to value technical prowess and coding virtuosity above all else. For them, code is a political instrument that has the power to challenge geopolitical power relations and capitalist business interests. Code also serves to create groups and communal identities of like-minded professionals, like the software-writing teams at Yandex. Studying their coding style and particular signatures may help intelligence agencies attribute cyberattacks to known actors in Russia, thereby responding to the challenge of attribution in cyber warfare. Like the professionals described in the book, Russian cybercriminals’ relation to the motherland is likely to be transactional. They are also geographically mobile, and need to venture abroad to close some illicit transactions, which gives Western law-enforcement agencies an opportunity to locate them and put them behind bars. Most participants in the 2002 CarderPlanet conference have since been identified, tracked down, arrested, and convicted.

Drone Theory and Bearing Witness

A review of Nonhuman Witnessing: War, Data, and Ecology after the End of the World, Michael Richardson, Duke University Press, 2024.

How to witness a drone strike? Who—or what—bears witness in operations of targeted killing where the success of a mission appears as a few pixels on a screen? Can there be justice if there is no witness? How can we bring the other-than-human to testify as a subject granted agency and knowledge? What happens to human responsibility when machines have taken control? Can nonhuman witnessing register forms of violence that are otherwise rendered invisible, such as algorithmic enclosure or anthropogenic climate change? These questions lead Michael Richardson to emphasize the role of the nonhuman in witnessing, and to highlight the relevance of this expanded conception of witnessing in the struggle for more just worlds. The “end of the world” he refers to in the book’s title has several meanings. The catastrophic crises in which we find ourselves—remote wars, technological hubris, and environmental devastation—are of world-ending importance. Human witnessing is no longer up to the task of making sense, assigning responsibility, and seeking justice in the face of such challenges. As Richardson claims, “only through an embrace of nonhuman witnessing can we humans, if indeed we are still or ever were humans, reckon with the world-destroying crises of war, data, and ecology that now envelop us.” The end of the world is also a location: Michael Richardson writes from a perch at UNSW Sydney, where he co-directs the Media Futures Hub and the Autonomous Media Lab. He opens his book by paying tribute to “the unceded sovereignty of the Bidjigal and Gadigal people of the Eora Nation” over the land that is now Sydney, and he draws inspiration from First Nations cosmogonies that grant rights and agency to nonhuman actors such as animals, plants, rocks, and rivers. “World-ending crises are all too familiar to First Nation people,” who also teach us that humans and nonhumans can inhabit many different worlds and ecologies.
The world that is ending before our eyes is a world where Man, as opposed to nonhumans, was “the unexamined subject of witnessing.” In its demise, we see the emergence of “a world of many worlds” composed of humans, nonhumans, and assemblages thereof.

From Drone Theory to Drone Art

Nonhuman Witnessing begins with a piece of drone theory. The proliferation of drones on the battlefield, and the ethical questions they raise, have led to a cottage industry of “drone studies,” with conferences, seminars, workshops, and publications devoted to the field. Richardson adds his own contribution by asking how witnessing occurs under conditions of drone warfare and targeted strikes from above. Drones are witnessing machines, but also what must be witnessed: new methods and concepts have to be designed to make recognizable the encounters with nonhuman systems of violence that resist the forms of knowing and speaking available to the eyewitness. To analyze the witnessing of violence, as well as the violence that can be done by nonhuman witnessing, Richardson turns to theory and then to the arts. Drawing from the media studies literature, he complements the notion of media witnessing, or witnessing performed in, by, and through media, with his own concept of “violent mediation,” or violence enacted through the computational simulation of reality. He also borrows from Brian Massumi the notion of ontopower, the power to bring into being, and its operative mode of preemption, which seeks to define and control threat at the point of its emergence. For Richardson, drone warfare is characterized by an acceleration of the removal of human agency from military decision-making. Violence is made ubiquitous; it can take place anywhere at any time. The volume of data produced by drone sensors far outstrips human capacities for visual or computational analysis. These data streams are transformed into actionable intelligence by on-board autonomous software systems that rely on edge computing and AI algorithms.
In a logical progression, “automated data collection leads to automated data processing, which, in turn, leads to automated response”: the ultimate end of the militarization of violent mediation is thus the “elimination of the human within technological systems to anything other than the potential target for violence.” By contrast, art insists on what makes us human. The paintings, photographs, and other art forms presented by the author emphasize the awesome power of unmanned aircraft such as the Reaper, the destruction they cause on the ground, their impact on the daily lives of those who remain under their surveillance, and their incorporation into local iconographies such as traditional Afghan war rugs. Art makes sensible the “enduring, gradual, and uneven violence done to the fabric of life” by killing machines that escape traditional forms of human witnessing.

Despite the evocative power of the concepts and artworks presented in Nonhuman Witnessing’s pages, there is a disconnect between drone theory and drone reality. The use of drones by the U.S. for targeted killings is highly publicized because it is the most controversial, but quantitatively it remains very minor in comparison to surveillance missions. The subject of drone theory is less the drone as such than the drone as an illustration of the violence waged by the United States in the Middle East following the war in Afghanistan and the occupation of Iraq. New versions of the theory still have to incorporate the use of drones by new actors and in other theaters of conflict: in the Syrian civil war since 2012, during the short war between Armenia and Azerbaijan in 2020, in the Houthi insurgency against the Yemeni military supported by Saudi Arabia, and, of course, since Russia’s aggression against Ukraine in February 2022 and in Israel’s offensive against Gaza following Hamas’ surprise attack on southern Israel on 7 October 2023. The logic of preemption that characterized the United States’ war on terrorism is less manifest in these evolving situations. So is the role of AI and onboard computer systems: drones increasingly appear as a low-tech, low-cost solution, a weapon of the poor and savvy against more formidable enemies. Drone warfare and lethal autonomous weapon systems raise complex strategic, ethical, and legal questions that have been examined by a number of authors. But they are far from the “killer robots” decried in the critical literature—or hyped as a selling point by arms producers and media commentators. Richardson’s arguments against signature strikes—i.e., strikes based on behavioral patterns rather than on identity (personality strikes)—are valid and have indeed led to a reduction in targeted killings ordered by the U.S. in Pakistan, Yemen, and Somalia.
But civilian killings such as the one described in the opening of the book show not that the drone is an imprecise weapon, but that a precise weapon can be used imprecisely. Drones, like other pieces of military technology, can serve as inspiration or subject matter for artists and theoreticians. But just as drone theory rests on biased empirical ground, drone art is not a recognizable category beyond the avant-garde genre of drone music, which bears no connection to military drones whatsoever.

The power of algorithms

Whereas the chapter on “witnessing violence” relied on dated evidence and questionable theory, the second chapter, “witnessing algorithms,” addresses more recent concerns and state-of-the-art technologies: ChatGPT and other applications of machine learning, deepfakes, synthetic media, mass surveillance, and the racist or misogynist biases embedded in algorithmic systems. It rests on the same conceptual swing, understanding witnessing algorithms both as algorithms that enable witnessing and as entities that must themselves be witnessed. Theoretically, it draws from Deleuze and Guattari’s conception of machines as assemblages of bodies, desires, and meanings effecting a generalized machinic enslavement of man, and from affect theory as interpreted by Brian Massumi, with his grammar of intensities, virtual power, and futurity. Building on these references, Richardson proposes his own notion of “machinic affect,” understood as “the capacity to affect and be affected that occurs within, through, and in contact with nonhuman technics.” Machine learning and generative AI can lead to false witnessing and the fabrication of evidence: hence the weird errors and aberrations, the glitches and hallucinations that appear in computer-generated images or texts. “Like codes and magic, algorithms conceal their own operations: they remain mysterious, including to their makers.” But instead of denouncing their lack of transparency and demanding that the proverbial black box be opened, Richardson takes algorithmic opacity as a given and attends to the emerging power of algorithms to witness on their own terms. Doing so requires bracketing any ethical imperative to witnessing: witnessing is what algorithms do, regardless of their accuracy or falsity, their explainability or opaqueness. Facts do not precede testimony: registering an event and producing it take place on the same plane of immanence, which makes no difference between the natural and the artificial.
Examples mobilized by Richardson include the false testimony of deepfakes such as the porn video of Gal Gadot having sex with her stepbrother; the production of actionable forensic evidence through the automatic detection of teargas canister images by Forensic Architecture, a British NGO investigating human rights violations; the infamous Project Maven designed by the Department of Defense to process full-motion videos from drones and automatically detect potential targets; and computer art videos making visible the inner functioning of AI.

Richardson adds to the existing literature on AI by asking how algorithmic evidence can be brought into the frame of witnessing in ways that human testimony cannot. But he only hints at a crucial fact: most machine learning applications touted as capable of autonomous reasoning and intelligent decision-making are in fact “Potemkin AI” or “non-intelligent artificial intelligence.” The innovation sector lives on hype, hyperbole, and promissory futures. Likewise, media reactions to new technologies always follow the same tropes, from the “disappearance of work” to the advent of “intelligent machines” or “killer robots.” But the reality is more sobering. Deepfakes produce images that are not different in nature from the CGI-generated movies that have dominated the box office for at least two decades. Forensic Architecture, the human rights NGO surveyed in the book, makes slick graphic presentations used as exhibits in judicial trials or media reports, but does not produce new evidence or independent testimony. State surveillance is a product of twentieth-century totalitarianism, not the invention of modern data engineers. Algorithms are biased because we designed them this way. The magic we see in AI-powered services is a form of trickery: their operating mode remains hidden because service providers have an interest in keeping it so. As Richardson rightfully notes, “machine learning systems and the companies that promote them almost always seek to obscure both the ‘free labor’ of user interactions and the low-paid labor of digital pieceworkers on platforms such as Mechanical Turk.” Just as human work will not disappear with automation, it would be a mistake to believe that human witnessing will be replaced by nonhuman forms of bearing witness. There are many human witnesses involved in the production of nonhuman witnessing.
Instead of anticipating the replacement of humans by other-than-human agents, we would do well to examine the concrete changes taking place in human witnessing. The debasement of all forms of public authority, the hijacking of political institutions by private interests, and commitment fatigue in the face of too many horrors and catastrophes seem to me to lie at the root of the crisis in human witnessing, for which the nonhuman offers no solution.

Ecological catastrophe

Richardson then turns to Pacific islands and the Australian continent to investigate the role of nonhuman witnessing in times of ecological catastrophe caused by the fallout of nuclear explosions and anthropogenic climate change. These territories, and the people they harbor, can testify to the world-destroying potential of these two crises: “just as the Marshall Islands and other nations in the Pacific were crucial sites for nuclear testing throughout the Cold War, so too are they now the canaries in the mineshaft of climate change.” Witnessing is not reducible to language or to human perception: when it takes a continent or a planet as its scale of observation, it denies the human a privileged status for establishing environmental change or atmospheric control. The subject of the Anthropocene is not the anthropos or Man as traditionally conceived, but an assemblage of humans, technologies, chemical elements, and other terraforming forces. The notion of witnessing ecologies implies both that ecologies can be made to witness impending crises and that there is an ecology of witnessing in which every element mediates every other. Drawing from affect theory and trauma studies, Richardson proposes the notion of “ecological trauma” to suggest that trauma escapes the confines of the human body: “it can be climatic, atmospheric, collective, and it can be transmitted between people and across generations.” Ecological catastrophe has already been experienced by First Nations who have seen their environment shattered by settler colonialism, of which the British nuclear tests conducted on the Montebello Islands and at Maralinga in South Australia are only a late instantiation. The entire ecology—people, water, vegetation, animals, dirt, geology—was directly exposed to radioactive contaminants during the blasts and fallout, and no real effort to mitigate the effect on Aboriginal inhabitants was attempted.
Polluted soil and sand melted into glass are the media used by Australian artist Yhonnie Scarce, whose glassblowing structure adorns the cover of the book. Other aesthetic works also figure prominently in this chapter, from the aerial imaging through which the planet becomes media to poems by Indigenous writers bearing witness to the destruction of their lands. For Richardson, inspired by recent developments in media theory, “attending to the nonhuman witnessing of ecologies and ecological relations continually returns us to mediation at its most fundamental: the transfer and translation of energies from one medium to another.”

The idea that we should consider nonhumans as well as humans in our processes of witnessing and decision-making already has a significant history in the social sciences. It was first put forward by science and technology studies, or STS, and it is directly relevant for the examination of technological innovation or environmental degradation. Actor-network theory, usually abbreviated as ANT and proposed by the French STS scholar Bruno Latour, aims to describe any phenomenon—such as climate change or large technological systems—in terms of the relationships between the human and nonhuman actors that are entangled in assemblages or networks. These networks have power dynamics leading to processes such as translation (the transport with deformation of an assemblage), symmetry (representing all agents from their own perspective) or, as proposed by Richardson, witnessing. Nonhuman witnessing should not be confused with the idea that humans are incapable of witnessing events that are too large-scale or too complex to be grasped by the human mind. Indeed, history shows that local communities and scholars have long understood and monitored changes in the environment and their effects on human activities. In his late work, Latour also proposed that, since the environmental question was radically new, politics had to be completely reinvented: we should convene a “parliament of things” where both humans and nonhumans can be represented adequately and brought to the stand to give testimony. Although Richardson scarcely refers to this literature (he is more interested in art criticism than in science and technology studies), he shares the view that nonhuman witnessing is politically transformative.
His politics is anchored in the pluriverse (a world of many worlds), mindful of the myriad of relations between humans and nonhumans, inspired by the belief systems of First Nations, and predicated on the idea that “difference is not a problem to be solved but rather the ground for flourishing.” As he concludes, “there is no blueprint for such a politics, no white paper or policy guidance.” But it is already emergent at the level of speculative aesthetics and in the creative works that punctuate his book.

Thought in the Act

Nonhuman Witnessing is published in a series edited by Erin Manning and Brian Massumi at Duke University Press. Richardson shares with the editors a taste for mixing art with philosophy and for engaging in high theory and abstract concept-building based on concrete examples. He borrows several key notions from Massumi (intensities, futurity, virtuality, preemption), who himself poached many of his insights from Deleuze and Guattari’s philosophy. The new theories developed by these authors and others working in the same field go under the names of affect theory, radical empiricism, process philosophy, speculative pragmatism, ontological vitalism, and new materialism. Each chapter in the book follows an identical pattern. It introduces a new concept (“violent mediation,” “machinic affect,” “ecological trauma,” but also “radical absence” and “witnessing opacity”) that provides an angle on a series of phenomena. It develops a few cases or examples that mostly expose forms of violence occurring across a variety of scales and temporalities: military drones and remote wars (“killer robots”), algorithms (“weapons of math destruction”), and environmental devastation through nuclear tests and climate change (“the end of the world”). It covers both aspects of witnessing, as the originator of an act of testimony and as an object to be witnessed. And it uses artistic creations as illustrations of certain forms of witnessing that escape the standard model of bearing witness. The result makes for suggestive reading but sometimes lacks coherence and clarity. Richardson starts from an original idea (whether drones might become nonhuman witnesses) but stretches it a bit too far. For him, opacity is not a pitfall to be avoided but a quality to be cultivated. Rather than a contribution to theory, the book’s main impact might be on art criticism. I truly admire the author’s ability to make art part of the discussion we have on humanity’s main challenges.
I did not review the artworks curated by the author in detail, but their descriptions make the most lasting impression.

Indian Software Engineers and the Power of Algorithms  

A review of Virtual Migration: The Programming of Globalization, A. Aneesh, Duke University Press, 2006.

A. Aneesh first coined the word algocracy, or algocratic governance, in his book Virtual Migration, published by Duke University Press in 2006. He later refined the term in his book Neutral Accent, an ethnographic study of international call centers in India (which I reviewed here), and in subsequent work in which he preferred to use the term algorithmic governance. What is algocracy? Just as bureaucracy designates the power of bureaus, the administrative structures within large public or private organizations, algocracy points toward the power of algorithms, the lines of code underlying automatic expert systems, enterprise software solutions and, increasingly, artificial intelligence. Power and authority are increasingly embedded in algorithms that inform and define the world of automated teller machines, geographical positioning systems, personal digital assistants, digital video, word processing, databases, global capital flows, and the Internet. In Virtual Migration, Aneesh made the distinction between three types of organizational governance: the bureaucratic mode (rule by the office), the panoptic mode (rule by surveillance), and the algocratic mode (rule by code). Each form of governance corresponds to different technologies, organizations, and subjectivities. This classification is loosely connected to Max Weber’s classical distinction between three types of legitimate authority that characterize human societies, especially as they evolve from simple to more complex social organizations built upon shared norms, values, and beliefs. The German sociologist called these three types charismatic authority, traditional authority, and rational-legal authority. Charismatic authority comes from the personal charisma, strength, and aura of an individual leader. The legitimacy of traditional authority comes from traditions and customs.
Rational-legal authority is a form of leadership in which command and control are largely tied to legal rationality, due process of law, and bureaucracy. Proposing the new concept of algocracy raises many questions. Is the rule of code perceived as legitimate, or how is the issue of legitimacy displaced by a new form of governance that doesn’t rest on human decision? How does this lack of human agency affect the functioning of democratic institutions? Does it have an effect on social asymmetry, inequity, and inequality? What are the intersections between algocracy and surveillance (the panoptic mode) and organizational design (the bureaucratic mode)?

What is algocracy?

But first, it is important to understand that algocratic governance is a sociological concept, grounded in the standard methodologies of social science. It is not a computer science concept, although software engineers and scientists deal with algorithms on a daily basis. Nor is it a philosophical notion that an intellectual builds out of thin air in his or her study. Sociology has a long tradition of theory-building that goes through the steps of observation, categorization, and association. Participant observation is one type of data collection method typically used in qualitative research and ethnography; it constitutes the gold standard in anthropology and several branches of sociology. Other methods of data gathering include non-participant observation, survey research, structured interviews, and document analysis. From the collected data, the researcher makes generalizations from particular cases, tests the explanatory power of concepts, and builds theory through inductive reasoning. In contrast to Neutral Accent, Virtual Migration is not based on participant observation but was conducted through a qualitative methodology the author characterizes as critical, comparative, and exploratory. Aneesh conducted more than a hundred interviews with Indian programmers, system analysts, project managers, call center workers, human resource managers, and high-level executives, including CEOs, managing directors, and vice-presidents, both in India and in the United States. He also observed shop floor organization and work processes in twenty small, mid-size, and large software firms in New Delhi, Gurgaon, and Noida. The conceptualization of algocracy came to him through a simple observation: when an Indian dialer in a call center answers the phone or fills in the “fields” on a computer screen, these actions are constrained by the underlying computer system that directs the calls and formats the information to fill in.
The operator “cannot type in the wrong part of a form, or put the address in the space of the phone number for the field may be coded to accept only numbers, not text; similarly, an agent cannot choose to dial a profile (unless, of course, they eschew the dialer and dial manually). The embedded code provides existing channels that guide action in precise ways.”
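The constraint Aneesh describes is easy to picture in code. Here is a minimal sketch, not taken from the book, of how a form schema can make invalid entries simply impossible; the field names and patterns are hypothetical:

```python
# A toy illustration of "rule by code": the form schema, not a supervisor,
# decides what an operator can enter. Field names and patterns are hypothetical.
import re

SCHEMA = {
    "phone": re.compile(r"^\d{10}$"),  # digits only: an address cannot go here
    "zip":   re.compile(r"^\d{5}$"),
}

def enter(field, value, record):
    """Accept a value only if the schema's pattern allows it."""
    pattern = SCHEMA.get(field)
    if pattern is None or not pattern.match(value):
        raise ValueError(f"rejected: {field!r} does not accept {value!r}")
    record[field] = value
    return record

record = {}
enter("phone", "5551234567", record)          # allowed
try:
    enter("phone", "12 Main Street", record)  # the code, not a rule book, refuses
except ValueError as e:
    print(e)
```

The point of the sketch is that no appeal, negotiation, or rule-bending is possible at the moment of entry: the only channels of action are the ones the code provides.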

In order to come to this epiphany, Aneesh had to immerse himself in fieldwork and grapple with questions that connect the local and the particular to wider transnational trends. The context provides some understanding of the challenges the researcher was facing. The rise of the Indian IT industry was boosted by the so-called Millennium Bug, also known as Y2K: approaching the passage to the year 2000, there was widespread fear that the “00” date that would start from the last midnight of 1999 could cause computers to malfunction, since they might interpret it as the 00 of 1900. India’s fledgling IT companies sensed the opportunity and offered their services. They sent software specialists onsite to fix the computer systems of large US corporations, and operated from a distance through increased bandwidth and Internet cable links. This was also a time when the outsourcing and offshoring of service activities became an issue in the United States. The transfer of jobs from the United States to countries with lower labor standards and environmental protection became a dark symbol of globalization. The effect of international trade and global economic integration on workers’ rights, human rights, and the environment was hotly debated. In Seattle in December 1999, four days of massive street protests against the World Trade Organization turned the city into a battleground. Globalization was attacked from the right and from the left. The nativist right criticized the loss of manufacturing jobs and what it saw as a tide of immigrants flooding American cities, disrupting the social fabric and diluting national identity. The social justice left denounced the erosion of workers’ rights in the US and the prevalence of child labor and other forms of exploitation in the Global South. One type of work, the staffing of call centers responding to American customers from places in India or other locations, came under the focus of the news media.
The same forces that had destroyed manufacturing jobs and put blue-collar workers on the dole were also affecting the service sector and threatening white-collar workers. For some observers, like the journalist Thomas Friedman, the world was becoming flat. Something was definitely happening, but social scientists lacked the tools and datasets for interpreting what was going on. New concepts were needed.
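The Millennium Bug mentioned above can be reduced to a few lines. A toy reconstruction, not period code, of the two-digit year arithmetic that caused the scare:

```python
# The Y2K bug in miniature: years stored as two digits make 2000
# indistinguishable from 1900. A toy reconstruction, not period code.
def years_elapsed(start_yy, end_yy):
    """Two-digit year arithmetic, as many legacy systems performed it."""
    return end_yy - start_yy

# A loan issued in 1995, computed at the end of 1999: correct.
assert years_elapsed(95, 99) == 4
# The same loan a moment after midnight: "00" reads as 1900.
print(years_elapsed(95, 0))  # -95, a nonsense duration
```

Auditing and patching millions of lines of such code, field by field, is exactly the kind of labor-intensive, low-glamour work that Indian IT firms took on.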

Body shopping and virtual migration

In their analysis of globalization, economists have shown that commercial integration and foreign direct investment reinforce each other, thus being complements rather than substitutes. Aneesh started his research project from a similar question: “Initially I began inquiring whether online services were replacing on-site work, making the physical migration of programming labor redundant.” During further investigations, especially interviews, he realized that the situation was a bit more complex. In a typical situation, “a firm in India might send two or three systems analysts to the client’s site in the United States for a short period, so that they might gain a first-hand understanding of the project and discuss system design. These systems analysts then help to develop the projects in India while remaining constantly in touch with their client, who can monitor the progress of the project and provide input. Once the project is over, one or two programmers fly back to the United States to test the system and oversee its installation.” Aneesh then made the distinction between two types of labor: body shopping, or embodied labor migration; and virtual migration, or disembodied labor migration. Both practices are part of the growing transnational system of flexible labor supply that allows Indian firms to enter into global supply chains and achieve optimal results. Virtual migration does not require workers to move in physical space; body shopping implies migration of both bodies and skills. In body shopping, Indian consultancy firms “shop” for skilled bodies: they recruit software professionals in India to contract them out for short-term projects in the United States. At the end of the project, programmers look for other projects, usually from the same contractors. Some of them start looking for a contractor based in the United States and attempt to secure a more lucrative placement.
The ultimate goal is to switch their visa status from the H-1B work visa to the Green Card: body shopping allows Indian workers to pursue the American dream.

Contrary to standard perceptions, “the biggest advantage of hiring contract labor is not low short-term costs; it is flexibility, and the resulting reduction of the long-term costs of maintaining a large permanent workforce.” With widespread demand for programming labor in different organizations, software professionals are well-paid workers. They are both “expensive and cheap” for American corporations to hire. They allow the receiving company to trim its workforce, take these temporary workers into service only in times of need, and economize on long-term benefits—social security, retirement contributions, health insurance, and unemployment insurance—that must be provided to permanent employees. Contractual employment allows American companies to implement just-in-time labor and to decouple work performance from the maintenance of a permanent workforce. In the case of virtual migration, they can also achieve temporal integration and work in real time, round-the-clock, in a seamless way: “Since the United States and India have an average time-zone difference of twelve hours, the client may enjoy, for a number of tasks, virtually round-the-clock office hours; when America closes its offices, India gets ready to start its day.” The temporal sequencing of work across time zones allows corporations to “follow the sun” and gain a competitive advantage by dividing their work groups and assignments between India and the United States. But time integration is not as easy as it sounds: coordination is a complex business, and much valuable information gets lost during the workload transmission from one team to the other. Temporal dissonance may also occur when an Indian team is obliged to work at night to provide real-time responses to American clients, as in the case of call centers. As Aneesh illustrated in his subsequent book Neutral Accent, people who work through the night live in two worlds, straddling time zones, languages, and cultural references.
Night work alters circadian rhythms and puts workers out of phase with their own society: “there is a reason why night work has another name—the graveyard shift.”
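The “follow the sun” arithmetic can be sketched in a few lines. The example below uses fixed UTC offsets (US Eastern standard time at UTC-5, India at UTC+5:30, no daylight-saving handling), which is a simplifying assumption; the book’s “average time-zone difference of twelve hours” averages across the continental US:

```python
# A back-of-the-envelope sketch of "follow the sun" scheduling.
# Fixed UTC offsets (no daylight-saving handling) are a simplifying assumption.
from datetime import datetime, timedelta, timezone

EASTERN = timezone(timedelta(hours=-5))          # US East Coast, standard time
IST = timezone(timedelta(hours=5, minutes=30))   # India Standard Time

def handoff(us_close_hour=17):
    """When a US office closes, what time is it for the Indian team?"""
    us_close = datetime(2006, 1, 16, us_close_hour, 0, tzinfo=EASTERN)
    return us_close.astimezone(IST)

print(handoff())  # 5 p.m. in New York is 3:30 a.m. the next day in Delhi
```

The offset also shows why the seamlessness is overstated: a hand-off at the American close of business lands in the middle of the Indian night, which is precisely the temporal dissonance Aneesh describes for call center workers.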

Algocracy is not algonomics

In writing Virtual Migration, Aneesh’s ambition was to disentangle sociology from economics, showing that they can take different and sometimes opposed perspectives on the same phenomenon. An economist would ask whether migration and trade are complements or substitutes, and look at trade data and labor statistics to test hypotheses. He would try to differentiate between short-term losses and long-term gains, showing that the job displacements and layoffs caused by transnational economic integration are more than compensated for by gains in productivity and increased activity. Aneesh warns against the danger of conflating the economic and the social, where the social is often assimilated to the economic. Virtual workers or Indian programmers who engage in the body shopping trade are not only economic agents; their location of choice is not only motivated by economic interest. During interviews, “programmers continually long for the ‘other’ nation: they miss India while in the United States and miss the United States when they are back in India.” It is not only an opposition between material and more social and emotional longings: “we also find high-level executives who enjoy material luxuries in India such as chauffeur-driven cars, plush houses, and domestic help at home and yet still try to maintain their permanent residency in the United States.” Similarly, discussions of organizational networks tend to be economistic, focusing on possible efficiencies, competitive advantage, coordination, and relative transaction costs for corporations. But for Aneesh, the language of “networks” often obscures relations of power and governance in the emerging regime.
As he explains, “algocracies are imbued with social ideas of control as well as formal logic, tracing their roots to the imperatives of capital and code.” Computer programming has emerged as a form of power that structures possible forms of action in a way that is analytically different from bureaucratic and surveillance systems. Enterprise software systems developed by Indian firms are not merely the automation of existing processes. They also “produce the real” by structuring possible forms of behavior and by translating embodied skills into disembodied code.

One of the characteristics of algocratic governance is to reduce the space needed for deliberation, negotiation, and contestation of the rules and processes that frame actions and orient decisions. As Aneesh could observe on shop floors and in call centers, “work is increasingly controlled not by telling workers to perform a task, nor necessarily by punishing workers for their failure, but by shaping an environment in which there are no alternatives to performing the work as desired.” Programming technologies have gained the ability to structure behavior without needing to orient people toward accepting the rules of the game. Software templates provide existing channels that guide action in precise ways: all choices are already programmed and nonnegotiable. This suggests that algorithmic authority does not need legitimacy in the sense in which the term was used in the past. Max Weber’s three types of legitimate power presupposed human agency on the part of the bearers of authority and of those under their command. But as authority is increasingly embedded in the technology itself, or more specifically in the underlying code, governance operates without human intervention: human agency disappears, and so does the possibility of making authority legitimate. This is not to deny that programming is done by someone and that human agents are still in charge of making decisions. Yet programming also becomes fixed and congealed as a scheme, defining and channeling possible action. Automation, or the non-human operation of a process, is not a problem in itself. It becomes a matter of concern when automated algorithms enter areas where it is important for the space of negotiation to remain open.

AI alignment

Artificial intelligence brings the power of algorithms to a new level. The criticisms leveled at AI are by now familiar. AI systems are non-transparent, making it almost impossible to identify the rules that led them to recommend a decision. They can be biased and perpetuate discrimination by amplifying the racial or gender biases embedded in the data used to train them. They remain arbitrary from the individual’s perspective, substituting changing behavioral patterns and data scores for the human subject. AI lacks human qualities like creativity and empathy, limiting its ability to understand emotions or produce original ideas. Surveillance powered by AI threatens individual privacy and collective rights, tipping the balance in favor of authoritarian states and oppressive regimes. In a not-so-distant future, artificial general intelligence (AGI) systems could become “misaligned” in a way that could lead them to make plans that involve disempowering humanity. For some experts, AGI raises an existential risk that could result in human extinction or another irreversible global catastrophe. The development of AI has generated strong warnings from leaders in the sector, some of whom have recommended a “pause” in AI research and commercial development. What I find missing in discussions about AI safety and “AGI alignment” is observable facts. We need empirical observations and field research to document the changes AI-powered algorithms bring to work processes, organizational structures, and individual autonomy. We also need to explain what algorithms actually do in concrete terms by using the perspectives of people from various cultures and backgrounds. Only then will we be able to balance algorithmic governance with countervailing forces and ensure that democratic freedoms can be maintained in the age of rule by code.

Gay Dykes on Acid-Free Paper

A review of Information Activism: A Queer History of Lesbian Media Technologies, Cait McKinney, Duke University Press, 2020.

Lesbian feminists invented the Internet, and they did it without the help of a computer. This is the surprising finding that comes out of the book Information Activism: A Queer History of Lesbian Media Technologies, published by Duke University Press in 2020. As the author Cait McKinney immediately makes clear, the Internet that lesbians built was not composed of URLs, HTML, and IP servers: it was an assemblage of print newsletters, paper index cards, telephone hotlines, paper-based community archives, and early digital technologies such as electronic mailing lists and computer databases. What made these early media technologies “lesbian” is that they formed the information infrastructure of a social movement that Cait McKinney describes as “information activism” and that was oriented toward the needs and aspirations of lesbian women in North America during the 1980s and 1990s. And what makes Cait McKinney’s book a “queer history” is that she brings feminism and queer studies to bear on a media history of US lesbian-feminist information activism based on archival research, oral interviews, and participant observation through volunteering at the Lesbian Herstory Archives in New York. Information activism took many forms: sorting index cards, putting mailing labels on newsletters, answering the telephone every time it rang, converting old archives into digital format… All these activities may not sound glamorous, but they were part of the everyday politics of “being lesbian” and “doing feminism.”

The Internet that women built

Recently the role of women in the development of information technology and the Internet has attracted a great deal of attention. Thanks in part to the efforts of popular author Walter Isaacson, the names of Ada Lovelace, Grace Hopper, Jean Jennings, and Jennifer Doudna have become more familiar to modern readers, and their enduring legacy may have contributed to attracting more young women into computer science. Even so, computing remains a heavily male-dominated field, and the industry’s openness to “the crazy ones, the misfits, the rebels, the troublemakers, the round pegs in the square holes” (to quote from a famous Apple commercial) is mostly limited to the masculine part of mankind. It therefore bears remembering that the Internet revolution was brought forth by information activists of all stripes and colors, not just white cis males from California. The “misfits” lauded by Steve Jobs may also have included dykes, stone butches, high femmes, riot grrrls, and lavender women as well as trans and nonbinary subjects. Besides, as feminist critique has pointed out, the concepts of the “Internet revolution” and the “information superhighway” are masculinist notions that need to be reexamined. There is a gender bias in popular accounts of technology development and innovation that tends to exclude the contributions of certain agents, especially queer subjects and women of color. Technologies are gendered, and they also exhibit heteronormative and white biases. To fix this problem, much more is needed than writing more inclusive histories of innovation and exposing occupational sexism in the technology industry.

The lesbian volunteers whose activities are chronicled in Information Activism did not really invent the Internet. They did something much more purposeful: they set out to create a bearable world and a life worth living for lesbian women in North America. They did this work under conditions of exclusion: cut off from reliable information about lesbian life and pushed to the margins of social structures and even of mainstream feminism. Confronted with discrimination, isolation, and invisibility, they decided to build an information infrastructure of their own, one connection at a time. Creating alternative communication channels responded to conditions in which many women lacked access to other lesbians and were desperate to find connection. Sometimes, the sole purpose of maintaining this information infrastructure was to show lesbian women that they were not alone. There was another person to talk to at the other end of the help line at the New York Lesbian Switchboard; other researchers subscribing to the newsletter Matrices were doing work in a field marginalized within academic studies; documents stored at the Lesbian Herstory Archives in New York City bore testimony to queer lives whose memorialization was a source of inspiration for later generations. In some cases, just knowing the information was “out there” was enough to go on living with a renewed purpose. In other instances, women engaged in long “rap sessions” discussing feminist politics over the phone, started collaborative research projects that led to the emergence of a full-fledged discipline of queer studies, or found companionship and accomplishment in their volunteering projects. Information makes promises and fulfills aspirations that are much greater than “finding things out.”

A Chatroom of One’s Own

Networks have been critical to the construction of feminist histories. Cait McKinney examines several cases of networked communication initiatives that predate the emergence of online media: the publication of the newsletter Matrices designed for sharing information and resources with anyone doing research related to lesbian feminism; the New York Lesbian Switchboard connecting callers to a source of information and advice; the Lesbian Herstory Archives’ collection of print documents and audio tapes; the patient collection of indexes and bibliographies that made lesbian feminist essays and periodicals searchable and actionable. The technologies used in these pre-digital enterprises now seem antique: typewriters, photocopiers, landline telephones, letter mail, stacks of papers, cardboard boxes, index cards, and face-to-face interactions. But the results were far-reaching and futuristic. They laid the groundwork on which a lesbian-feminist movement could expand and self-organize. Information and communication networks allowed dispersed researchers to connect with each other, share information, and do lesbian research within unsupportive and sometimes openly hostile research environments. Women living in rural areas or isolated places were encouraged to become active nodes of the network by taking pictures, gathering newspaper clips, and audio-recording interviews to document events taking place in their geographic area. The Matrices newsletter facilitated historical research through the creation of a supportive information infrastructure; it also allowed for the nationwide expansion of a social movement originally concentrated in New York; and it convinced dispersed readers that lesbian lives mattered and were worth documenting. Key initiatives grew out of the network, such as the volume Black Lesbians: An Annotated Bibliography compiled by JR Roberts to counter the invisibility of women of color in mainstream lesbian feminism.
In the 1990s, many print newsletters lost relevance as web browsing developed and academic listservs became key networks for sharing information. Matrices stopped publishing in 1996, ostensibly replaced by commercial enterprises such as Google, Amazon, and digital publishing tools. But online communication represents not so much a turning point as a continuation of networked modes of organization for feminist social movements.

Another example of continuity between analog and digital modes of communication is the lesbian telephone hotline staffed by volunteers in New York City that answered every call with a listening ear and a range of helpful tips and advice. Like newsletters, telephone hotlines connected lesbians at a distance using information. For the historian, they are harder to document: volunteers were anonymous and cannot be traced back, and all that remains of the long nights spent answering the phone are the call logs recording every conversation with a few notes and doodles scribbled in the margin. The logs suggest that many callers expressed despair, loneliness, or confusion; but others called for help finding something fun to do that night, for precise information about support groups or community resources, or just to talk and “rap” about gender issues. Even before the appearance of mailing lists and online forums, the need to have a chatroom of one’s own was clearly felt and answered. McKinney also uses the log archives as entry points for thinking about feminist research methods, multimedia practices, care provision, and the affective labor involved in lesbian telephone hotlines. She reminds readers that feminist activism involved less acknowledged dynamics such as boredom, repetition, isolation, and burnout. What makes a telephone hotline “lesbian feminist” is the self-definition and principles under which the switchboard operated. Volunteers were recruited from within the lesbian community and bisexual women were tacitly kept out, while the policy toward trans women and gender nonconforming persons was left undefined, although their needs were also addressed on an ad hoc basis. These remarks remind us that terms such as “gay and lesbian,” as opposed to the more contemporary “LGBTQI+,” are historical constructions that cast aside or rigidify some categories as much as they include or deconstruct others.

A feminist mode of network thinking

Network thinking has been a feature of feminist activism and knowledge production since before the consumer Internet. “Improving (lesbian) lives with information” could be the motto of a behemoth social media company catering to a niche market; in fact, it was always the principle under which lesbian activists operated. The feminist movement produced original ideas about communication, access to information, capacity building, and the power of alternative structures for organizing people and ideas. Lesbian feminists also offered pre-digital feminist critiques of networks as egalitarian ideals that can conceal functional hierarchies and threaten the privacy of participants. Computer networks were dreamt and imagined before they were invented and built. The librarians and volunteers who collected the Lesbian Periodicals Index were imagining computer databases and electronic indexing while shuffling paper cards into shoeboxes; the Lesbian Herstory Archives’ project leaders were contemplating putting all their resources online before they had the equipment and manpower to convert documents into digital format. They were also early adopters of information technology, manifesting a can-do attitude and a hands-on sensibility familiar to feminist activism—and more generally to “women’s work.” McKinney characterizes as “capable amateurism” a fearless approach to learning and implementing new media technologies; a gendered belief in the capacity of amateurs to work hard and acquire new skills; and a willingness to experiment, improvise, and figure things out on the fly. Lesbian feminism is also informed by values of non-hierarchy, direct participation by members, and an investment in decentralized processes.

Today these values are reflected in many internet communities. A good-enough approach (“rough consensus”), a culture of sharing (“copyleft”), and collectively organized work (“open source”) as well as political militancy (“Anonymous”) characterize segments of the computer industry as much as they are part of the lesbian-feminist heritage. One may even see in the Slow Web movement echoes of the politics of nonadoption and digital hesitancy developed by some of the activist groups surveyed by the author. Beyond lesbian history, these activists have much to teach all of us about why, when, and for whom information comes to matter. The lesbian feminist imagination allows us to envisage a world brought together by connection, care, and “sisterhood” that earlier feminist networks originally articulated and that worldwide Internet connectivity now makes potentially real. A lesbian-feminist approach also reminds us that networks make egalitarian promises that conceal the power structures, protocols, and control mechanisms they actually exert. Computer databases and search engines are not neutral; they determine what is thinkable and sayable by filtering access to information and indexing resources into categories and keywords. These are deeply political choices, and the way decision-making processes and governance bodies are structured matters a great deal. If we want to keep a free and open Internet and uphold the principle of net neutrality, perhaps we should learn from a history of information networks written through older forms of feminist print culture.

Lesbianism is so twentieth century

But does the lesbian past still speak to our queer age? As a self-described “masculine, nonbinary person,” Cait McKinney is ambivalent about the category of lesbianism. She originally assumed that “lesbian” as a specific term of self-identification was historically dated and situated in a period of late twentieth-century militancy, and she was surprised to learn that the term was still popular among a younger generation of queer-identified activists. Young volunteers at the Lesbian Herstory Archives articulate deep attachment to lesbian history and subcultures, and the snippets of information and pictures that the center posts on Instagram are instantly popular. Some business ventures exploit this revival of lesbian-feminist heritage, selling T-shirts, collectable items, and other paraphernalia bearing slogans and pictures from the seventies and eighties. McKinney also notes that lesbianism, while providing a big tent for women with nonconforming gender identities, had exclusionary effects, as many lesbian-feminists were historically hostile to trans women and indifferent to women of color. As a matter of fact, lesbianism meant much more than women having sex with women. Likewise, the erotic exceeds what is commonly understood as sensuous, sexually appealing, and emotionally gratifying acts. Eroticism can be described as a communication practice, and information activism is definitely part of it. Reading archives against the grain (or along the archival grain, as Ann Laura Stoler invites us to do) also refers to the grain of one’s skin, and the archival touch implies an embodied experience laden with sensory perceptions and affects. Librarianship and archival work are professions that have been historically attractive to women, including persons attracted to same-sex relations, and they have often served as erotic projections of male—and sometimes female—desire.
There is something queer about manipulating acid-free paper, and Information Activism consciously addresses how librarians and archivists cope with the affective and intimate impacts of accumulated print media.

From Hot Line to Help Line

A review of Neutral Accent: How Language, Labor, and Life Become Global, A. Aneesh, Duke University Press, 2015.

At the turn of the twenty-first century, China became identified as the world’s factory and India as the world’s call center. Like China, India attracted the attention of journalists and pundits who heralded a new age of globalization and documented the rise of the world’s two emerging giants. Foremost among them, Thomas Friedman wrote several New York Times columns about call centers in Bangalore and devoted nearly half a book, The World is Flat, to reviewing personal conversations he had with Indian entrepreneurs working in the IT sector. He argued that outsourcing service jobs to Bangalore was, in the end, good for America—what goes around comes around in the form of American machine exports, service contracts, software licenses, and more US jobs. He further expanded his optimistic view to conjecture that two countries at either end of a call center line will never fight a war against each other. An intellectual tradition going back to Montesquieu posits that “sweet commerce” tends to civilize people, making them less likely to resort to violent or irrational behavior. According to this view, economic relations between states act as a powerful deterrent to military conflict. As during the Cold War, telecom lines can be used as a tool of conflict prevention, with the difference that the “hot line,” which used to connect the Kremlin to the White House, has been replaced by the “help line” which connects everyone in America to a call center in the developing world. The benefits of openness therefore extend to peace as well as prosperity. In a flat world, nations that open themselves up to the world prosper, while those that close their borders and turn inward fall behind.

Doing fieldwork in a call center

Anthropologists were also attracted to Asian factories and call centers to conduct their fieldwork and write ethnographies of these peculiar workplaces. Spending time toiling alongside fellow workers and writing about their participant observation would earn them a PhD and the launch of a career in an anthropology department in the United States. Doing fieldwork in a call center in Gurgaon near New Delhi came relatively easily to A. Aneesh. As a native Indian, he didn’t have much trouble adapting to the cultural context, fitting into his new work environment, or gaining acceptance from his colleagues and informants. His access to the field came in the easiest way possible: he applied for a position in a call center, and after several rounds of recruitment sessions and interviews he landed a job as a telemarketing operator in a medium-sized company fictitiously designated as GoCom. He had already completed his PhD at that time and was an assistant professor at Stanford who took a one-year break to do fieldwork and publish research. He even benefited from the support of two research assistants while in New Delhi. There was no special treatment for him on the office floor, however. He started as a trainee alongside newly hired college graduates, attending lectures and hands-on sessions to acquire the proper voice accent and marketing skills, then moved to the call center’s main facility to work as a telemarketer on the night shift. He engaged in casual conversations with his peers, ate with them in the cafeteria where lunch was served after midnight, conducted formal interviews with some of them, and collected written documents such as training manuals and instruction memos.

What makes Aneesh’s Neutral Accent different from Friedman’s The World is Flat? How does an ethnographic account of daily work in an Indian call center compare with a columnist’s reportage on the frontiers of globalization? What conclusions can we infer from both texts about the forces and drivers that shape our global present? Is there added value in a scholarly work based on extended field research as compared with a journalistic essay based on select interviews and short field visits? And what is at stake in talking of call centers as evidence of a globalized world? As must already be clear, the methods used by the two authors to gather information couldn’t be more different. Aneesh’s informants were ordinary people designated by their first names—“Vikas, Tarun, Narayan, Mukul, and others”—who shared their attitudes toward their job, their experience and hardships, their dreams and aspirations. The employees with whom the author spent his working nights were recent college graduates, well-educated and ambitious, reflecting the aspirations and life values of the Indian middle class. By contrast, Friedman associated with world-famous CEOs and founders of multi-million-dollar companies. They shared with him their vision of a world brought together by the powerful forces of digitalization and convergence, and emphasized that globalization must have “two-way traffic.” To be sure, Friedman also tells of his visits to a recruiting seminar where young Indians go to compete for the highly sought-after jobs, and to an “accent-neutralization” class where Indians learn how to make their accents sound more American. To distance himself from the armchair theorists of globalization, he emphasizes his contacts with “real” people from all walks of life. But he never pretends that his reportages amount to academic fieldwork or participant observation.

The view from below

The information collected through these methods of investigation is bound to be different. One can expect office workers to behave cautiously when addressed by a star reporter coming from the US, along with his camera crew, and introduced to the staff by top management for his reportage. The chit-chat, the informal tone, the casual conversations, and the mix of Hindi and English are bound to disappear from the scene, replaced by deference, neutral pronunciation, and silence. The views channeled by senior executives convey a different perspective from the ones expressed on the ground floor. As they confided to Aneesh, employees at GoCom expressed a complete lack of pride in their job and of loyalty to their company. They were in it for the money, and suspected GoCom of cheating employees out of their incentive-based income. Their suspicion was not completely unfounded, and the author notices several cases of deception, if not outright cheating, regarding the computation of monthly salaries. Operators were also encouraged to mislead and cheat the customer through inflated promises or by papering over the small print in the contract. Turnover was high, and working in a call center was often viewed as a temporary position after college and before moving to other occupations. While Friedman is interested in abstract dichotomies, such as oppositions between tradition and modernity, global and local, rich and poor, Aneesh focuses on much more mundane and concrete issues: the compensation package, the commute from home, or working the night shift.

Indeed, night work is a factor that goes almost unnoticed in Friedman’s reportage, while it is a major issue in Neutral Accent. “Why is there a total absence, in thought and in practice, of any collective struggle against the graveyard shift worldwide?” asks the author, who attributes this invisibility to corporate greed, union weakness, and the divergence between economic, social, and physiological well-being. He documents the deleterious effects of nocturnal labor on workers’ health, especially on women who suffer from irregular menstruation and elevated breast cancer risk. He notices the large number of smokers around him, as well as people who complain about an array of anxieties without directing their complaints at night work per se. The frustration and discomfort of working at night are displaced onto other issues: the impossibility of marrying and starting a family—although night work is also used by some to delay marriage or run away from family life—and the complaint about commute cabs not running on time. Indeed, what Thomas Friedman and other reporters see as a valuable perk of the job, the ability for young employees to travel safely to and from work thanks to the chauffeured car-pool services provided by the call centers, ends up as a source of frustration and anguish due to the delays and waiting times occasioned by the transport. Nocturnal labor affects men and women differently; Indian women in particular bear the brunt of social stigma as “night workers,” leading some of them to conceal their careers while looking for marriage partners, or alternatively, limiting their choice of partner to men in the same business. While the lifting of restrictions on women’s right to work at night was justified by gender neutrality, the idea of being neutral to differences carries with it disturbing elements that feminist critique has already pointed out.

Being neutral to differences

Neutrality, or indifference to difference, also characterizes the most-often noticed trait of Indian call centers: the neutralization of accent and the mimetic adoption of certain characteristics such as the Americanization of the first names of employees, who assume a different identity at work. Aneesh points out that neutral accent is not American English: during job interviews, he was asked to “stop rolling your R’s as Americans do,” and invited to speak “global English,” which is “neither American nor British.” As he notes, “such an accent does not allude to a preexisting reality; it produces it.” Accent neutralization is now an industry with its teaching methods, textbooks, and instructors. Call center employees learn to stress certain syllables in words, raise or lower their tone along the sentence, use colloquial terms with which they may not be familiar, and acquire standard pronunciation of difficult words such as “derogatory” or “disparaging,” which they ironically note in the Hindi script. Some employees are repeatedly told that they are “too polite” and that they should not use “sir” or “madam” in every sentence. For Aneesh, “neutralization allows, only to a degree, the unhinging of speech from its cultural moorings and links it with purposes of global business.” Mimesis, the second feature of transmutation, reconnects the individual to a cultural identity by selecting traits that help establish global communication, such as cheerfulness and empathy. Employees are told to keep a smiling face and use a friendly voice while talking with their overseas clients. But despite their best efforts, some cultural traits are beyond the comprehension of call center agents: “The moment they start talking about baseball, you have absolutely no idea what’s going on there” (the same could be said of Indian conversations about cricket).

Aneesh uses neutralization and mimesis as a key to comprehending globalization itself. They only work one way: as the author notes, “there is no pressure, at least currently, on American or British cultures for communicative adaptation, as they are not required to simulate Indian cultural traits.” But Western consumers are also affected by processes at work in the outsourcing and offshoring of service activities. Individual identities and behaviors are increasingly monitored at the systemic level in numerous databases covering one’s credit score, buying habits, medical history, criminal record, and demographics such as age, gender, region, and education. Indeed, most outbound global calls at GoCom were not initiated by call center agents but by a software program that used algorithms to target specific profiles—demographic, economic, and cultural—in America and Great Britain. Artificial intelligence and predictive algorithms, only nascent at the time of the author’s fieldwork in 2004-2005, now drive the call center industry and standardize the process all agents use, leaving little room for human agency. Data profiles of customers can be bought and sold at a distance, forming “system identities” governed by algorithms and embedded in software platforms that structure possible forms of interaction. Identities are no longer fixed; they keep changing with each new data point, escaping our control and our right of ownership over them.

Global conversations

We cannot judge The World is Flat and Neutral Accent by the same criteria. The standard for evaluating a journalistic reportage is accuracy of fact, balanced analysis, human interest, and impact on readers. Using this yardstick, Friedman’s book was a great success and, like Fukuyama’s End of History, came to define the times and orient global conversations. The flattened world became a standard expression with a life of its own and generated scores of essays explaining why the world was not really flat after all. Many Indians credited Friedman for writing positively about India and often echoed his views, claiming that the outsourcing business was doing wonders for the economy. Others critiqued the approach, saying the flat world was just another name for underpaying Indian workers and denying them the right to migrate and find work in the US. By contrast, Aneesh’s book was not geared to the general public and, apart from an enthusiastic endorsement by Saskia Sassen on the back cover and a few book reviews in scholarly journals, its publication did not elicit much debate in the academic world. In his own way, Aneesh paints a nuanced picture of globalization. Where most people see call centers as generating cultural integration and economic convergence, he insists on disjunctures, fault lines, and differentiation. The “help line” is not just a tool to connect and erase differences; it may also create frictions and dissonances of its own. A world economy neutral to day and night differences; a labor law that disregards gender disparity; work practices that erase cultural diversity; digital identities that exist beyond our control: neutralization is a force that affects call center agents and their distant customers well beyond the adoption of global English and neutral accent as a means of communication.

Video Game Theory

A review of Respawn: Gamers, Hackers, and Technogenic Life, Colin Milburn, Duke University Press, 2018.

Video games are now part of popular culture. Like books or movies, they can be studied as cultural productions, and university departments offer courses that critically engage with them. Scholars who specialize in this field of study take various perspectives: they can chart the history of video game production and consumption; they can focus on their design or their aesthetic value; or they can analyze their narrative content and plot. There is no limit to how video games can be engaged: some thinkers even take them as fertile ground for philosophy and theory building. Within the past few years, a handful of books have been published on video game theory. Colin Milburn’s Respawn can be added to that budding strand of literature. It is a work of applied theory: the author doesn’t engage with longstanding philosophical problems or abstract reasoning, but draws from the examples of a wide range of games, from Portal and Final Fantasy VII to Super Mario Sunshine and Shadow of the Colossus, to illustrate how they impact the lives of gamers and non-gamers alike. In particular, he considers the value of video games for shaping protest and political action. Video games, with the devotion that serious gamers bring to the task, introduce the possibility of living otherwise, of hacking the system, of gaming the game. Gamers and hackers develop alternative forms of participatory culture along with new tactics of critique and intervention. Hacktivist groups such as Anonymous use video game language and aesthetics to disrupt the operations of the security state and launch attacks on the neoliberal order. Pirate parties have won seats in European legislatures and advocate a brand of techno-progressivism, digital liberties, and participatory democracy largely inspired by video games. Exploring the culture of video games can therefore offer a glimpse into the functioning of our modern democracies in a computerized world.

Geek vocabulary

A culture is formed of various groups that may develop their own specific identity within the context of the larger social system to which they belong. Gaming culture can be treated as a subculture: a series of social codes, technological lore, and insignificant facts of history, popular culture, art, and science. Subcultures create social groups by delineating their identities, beliefs, and habits as much as they exclude those who do not belong to the group. Geek culture is a subculture of computer enthusiasts that is traditionally associated with obscure media: Japanese animation, science fiction novels, comic books, and video games. Respawn is replete with trivia, code words, and key expressions that open for the noninitiate a window into the world of gaming. “All your base are belong to us” is the poorly translated sentence from the Japanese arcade game Zero Wing that is now used as a catchphrase for violent appropriation and technical domination. Used by the leader of the cyborg invasion force known as CATS, it signifies that a posthuman future is already inevitable, and presents an allegory of the information age in which mistranslations and malfunctions abound. Made popular by the website Something Awful, it is the feline equivalent of the Doge Internet meme, which consists of a picture of a Shiba Inu dog accompanied by multicolored text deliberately written in a form of broken English. Variations on the CATS meme include the message posted by YouTube in 2006 that “All your videos are belong to us” or, following the Snowden affair and the exposure of the NSA’s vast data-surveillance operation, various Internet images that proclaim: “All your data are belong to U.S.”

Another piece of obscure lore is the question “Where were you on April 20, 2011?”, which refers to the date the PlayStation Network was shut down as a security response to an external intrusion. Colin Milburn reconstructs the story of this particular episode, which exposes the troubled relations between Sony Corporation and various groups of hackers, of which the attack on Sony Pictures by operatives allegedly sponsored by the government of North Korea is only the latest installment. It all began with Sony’s decision to make its PlayStation 3 open to homebrew programmers and technological innovators in order to encourage participatory science, peer-to-peer design, and do-it-yourself innovation. With its PlayStation Network or PSN, it even claimed to have created “the most powerful distributed computing network ever” and made it accessible to Stanford University’s researchers to simulate the mechanics of protein folding by installing the Folding@home software on all its consoles. However, in January 2010, the young hacker George Hotz—more commonly known by his alias, GeoHot—announced that he had found a way to hack the PS3, gaining access to its system memory and processor and allowing users to make pirate copies of their games. Sony backpedalled on its open-system policy and filed a lawsuit against GeoHot, who then found supporters among the hacker collective Anonymous, which launched a massive DDoS attack against Sony servers. It is in this context that the PlayStation Network outage occurred, disabling gamers’ access to their favorite occupation and exposing them to the risk of leaked personal data, including passwords and credit card numbers that the hackers were able to extract from the servers. Anonymous was quick to deny responsibility for the criminal intrusion, but it wasn’t the end of Sony’s troubles, and the company was exposed to more attacks by malicious black-hat hackers.
Meanwhile, the unsolved mystery of who hacked the PSN invited conspiracy narratives and dark-humor mashups. “PlayStation Network was down so I killed Osama bin Laden” was how one meme described President Obama’s reaction, while others noted the coincidence in timing between the PSN shutdown and the day the Skynet network took over the world in the Terminator movie.

Doing it for the lulz

Gamer culture intersects with hacking in the lulz, a form of corrupted laughter that derives pleasure from online actions taken at another’s expense. The “field of Theoretical Lulz” as depicted on Encyclopedia Dramatica includes trolling, gooning, griefing, and pranking, as well as the various forms of online harassment developed by hackers who, as they say, “do it just for the lulz.” Modding refers to the act of modifying hardware, software, or any aspect of a game to perform a function not originally conceived or intended by the designer, or to achieve a bespoke specification. Mods may range from small changes and tweaks to complete overhauls, and can extend the replay value and interest of a game. Respawn, a command that first appeared in the game Doom, means to reenter an existing game environment at a fixed point after having been defeated or otherwise removed from play. It is the opposite of permadeath, which makes players start over from the very beginning if their character dies. Yet another option is to play in “Iron Man mode” and try to reach the end of a game with only a single avatar life, eschewing the “save” or “respawn” functions. The hacker concept of “magic” refers to “anything as yet unexplained, or too complicated to explain,” but also to the command words in adventure games that included functions such as “XYZZY” or “PLUGH”. The word “pwn” is not a programming function or an instruction code, but a term of appreciation (as in “This game pwns”) that originated in the gaming community itself, probably born from a typographic error. According to the most enthusiastic critics, games raise philosophical issues. The puzzle game Portal includes the sentence “There will be cake” in its opening, but the player soon realizes that “The cake is a lie.” Of course, these two sentences have achieved cult status, and are repeated in countless Internet memes and on signs carried at street demonstrations.

For Colin Milburn, games are closely correlated to the meaning of life. Many concepts from computer science draw parallels to the realm of organic life—worms, viruses, bugs, swarms, hives, and so forth. Sony has built upon this connection by attaching its brand to an image of biological vitality, from its 2007 “This Is Living” advertising campaign to its 2011 “Long Live Play” motto. Sony executives routinely speak about the PlayStation’s DNA, refer to its microprocessor as The Cell, and insist on the nucleic compatibility between successive generations of hardware products. For Milburn, “‘respawn’ stands for a surplus of vitality, a reserve of as-yet unexpended life, a technologically mediated capacity to keep on going even while facing dire adversity.” He uses the term “technogenic life” to refer to the entanglement of organic life with digital media and the emergence of new life-forms, neither fully human nor artificial. This is of course a familiar trope in science fiction, and the author lists classic novels such as John Brunner’s Shockwave Rider, Vernor Vinge’s True Names, and William Gibson’s Neuromancer as part of any gamer’s portable library. Video games are experiments in applied science fiction: they allow players to test the limits of life, to engage with anticipation and foresight, and to make other futures imaginable. Gamers always have the option to reset, save, shut off, or reload. Games tend to encourage a playful and experimental attitude to life: working through error, overcoming failure, persevering toward the goal while staying open to the unexpected. Playing games can teach us how to live: indeed, they are part of our lives as Homo Ludens. Gamers respond to the injunction to “get a life” by arguing that they already have one, indeed many: “I am a gamer, not because I don’t have a life, but because I choose to have many.”

We Are Heroes

Gamers are also influenced by the subculture of comic books and superhero movies. Since 1978, when the first Superman cartridge appeared for the Atari 2600, the video-game industry has produced a steady stream of superhero adventures. One such game was City of Heroes, a massively multiplayer online role-playing game, or MMORPG, that attracted a large community of followers. In the game, players created super-powered characters who could team up with others to complete missions and fight criminals belonging to various gangs and organizations in the fictional Paragon City. When the South Korean company NCsoft decided to terminate its Paragon Studios development team and to shut down the game in 2012, massive protests arose. Online testimonies reflected feelings of camaraderie and shared culture, domestic and social belonging, comfort in times of sorrow, and personal accomplishment—indeed, all the qualities of “having a life.” Rallying under the motto “We are heroes. This is what we do,” participants envisaged various measures to keep the game operating past the announced date of closure. Their logic was straightforward: the company had made a game where players had spent the past eight years defending their city; it was only natural that they rose in protest against this attack on Paragon. Some decided to go rogue and keep the game running on servers based on the leaked source code. As in the world of superheroes, the online community has always had its rogue elements, its vigilantes, and its villains. The author is not sure where to categorize hackers such as the group Anonymous: “despite their roguelike appearances, hacktivists might even seem to be on the right side of history.” But the hate speech, misogynistic attacks, and racist slurs that circulate on forums such as Reddit or 4chan clearly fall into the villainy category.
They represent “the dark side of the lulz,” the politics of terror and mayhem that is already familiar to the fans of Batman’s Gotham City and other superhero worlds.

Gaming also shapes a political imaginary. Numerous players have attested to the impact of gaming on their own political or ecological sensitivities. The dispositions and practices cultivated by gaming can inform political choices, responsible policy decisions, and collective action. Under the right circumstances, video games offer ways to experiment with the technopolitics of the present, to think otherwise even from inside a computer system. Edward Snowden has confessed that his motive for challenging the security state developed partly through his lifelong interest in video games. According to Colin Milburn, video games frequently present interactive narratives about civil disobedience, social resistance, and transformation, becoming models for engagement. The quotidian act of saving or resetting gameplay data itself models an orientation to social change, affirming that duration and persistence are not givens but always active processes of construction. Final Fantasy VII has encouraged a generation of players to consider “how deeply the fights for economic democracy and environmental sustainability are intertwined.” Gaming and hacking cultures are intrinsically correlated. The “primal scene of hacking” occurred in the early 1960s, when MIT research scientists experimented with the university’s mainframe computer to create the first video game, Spacewar! The first online role-playing game, Adventure, which circulated on the ARPANET in the seventies, included a secret room where the author left his unauthorized signature. Many games include hacking as a function, and offer the possibility to tweak the code or experiment with alternative commands from the inside. But in the end, even those who resist the prevailing systems of control are likewise products of those same systems. As in Ernest Cline’s novel Ready Player One, the only possible option may be to play through to the end or to quit entirely. Completing a game inevitably triggers the formula: “Game Over”.

Cultural studies

Colin Milburn advances the scholarly study of video games in several directions. First, he shows how to engage theoretically with video games. He borrows many of his tools from the cultural study of literature and cinema. For example, he focuses on particular episodes of video games, or he summarizes the plot of select games such as Final Fantasy VII. As in book or film reviews, his descriptions entail some disclosure of plot details that may constitute a spoiler for some gamers: if you don’t want to know the final scene of System Shock 2 or the location of the secret AVALANCHE hideout in Final Fantasy VII, you may have to skip some passages in the book. He also dwells on the psychology of some characters, just as a critic would do with a novel or a movie. In this sense, video game theory is not especially new: games are amenable to the tools used to analyze artworks that belong to the narrative genre. As a second contribution, Respawn offers a description of gaming culture. The author introduces the unfamiliar reader to a community brought together by code words, favorite expressions, a common history, and modes of engagement with video games and with life in general. Video game culture consists of a rich mythology of lore, trivia, fun facts, episodes, and images that are communicated through online discussions, the diffusion of Internet memes, and participation in social events such as gaming conventions or cosplay parties. Thirdly, Colin Milburn underscores the transformative power of games, the subversive potential of role-playing and other forms of ludic recreation. The book traces the intersections of gaming with hacking and high-tech activism, focusing on several online campaigns launched by the hacktivist collective Anonymous. It underscores that lulz, fun, and games can no longer be thought of as separate from issues of political or technological governance.
Games allow other ways of being in the world: they create the possibility to act like a superhero, a vigilante, or a villain, or to escape the laws of gravity by wavedashing or airdodging along with Super Mario. Most importantly, Colin Milburn demonstrates that video games matter—even for casual users or non-gamers. Video games have become increasingly sophisticated, not only in the ever more complex issues that they present, but also in their explicit, reflective awareness of theoretical issues. Video game theory may not just be about applying existing theoretical tools to video games, but also about crafting new tools, concepts, and theories brought forth by video games that may be of broader relevance for culture and society.

A Failed Anthropology Project

Review of Two Bits: The Cultural Significance of Free Software, Christopher M. Kelty, Duke University Press, 2008.

Two Bits is a failed anthropology project. That does not make it a bad book: it is well written and informative, and I learned a lot about Free Software and Open Source by reading it. But it does not meet the academic standards one expects from a book published in an anthropology series. These standards, as I see them, pertain to the position of the anthropologist; the importance of fieldwork; the role of theory; the interpretation of facts; and the style of ethnographic writing. Let me elaborate on these five points.

Many definitions of the “participant observer” have been proposed. Anthropologists who claim this position for themselves see it as a way to gain close and intimate familiarity with a given group of individuals and their practices through intensive involvement with people in their natural environment, usually over an extended period of time. It is different from “going native”: the participant observer usually remains an outside figure, who can provide support and perform various functions in the group but who makes it clear, at least to himself, that the locus of his engagement lies in the rendition he will make of his experience, not in the services or tasks he will have completed for the group during fieldwork. A key element of this research strategy is therefore gaining access to the group; perhaps equally important is the exit strategy that will allow the ethnographer to leave the field and return to a more distant point of observation.

“I am a geek”

Christopher Kelty does not make explicit his own definition of participant observation, but he nonetheless fashions a self-image: “I am a geek.” Becoming a geek is an integral part of his research project, and most of his ethnographic notes and vignettes are devoted to that process. For him, understanding how Free Software works is not just an academic pursuit but an experience that transforms the lives and work of those involved: “something like religion.” The stories he tells about geeks, and the stories geeks tell about themselves, are meant to “evangelize and advocate,” and to convert people to the cause.

His engagement with and exploration of Free Software got him involved in another project called Connexions, an “open content repository of educational materials,” or a provider of Open Source textbooks. Connexions textbooks differ from conventional textbooks in that they consist of digital documents, or “modules,” that are strung together and made available through the Web under a Creative Commons license that allows free use, reuse, and modification. Kelty would like his role in the Connexions project to be akin to that of an academic consultant, an anthropologist-in-residence who could provide advice and guidance based on his “expertise in social theory, philosophy, history, and ethnographic research.” But that is not how it turns out: “The fiction that I had first adopted–that I was bringing scholarly knowledge to the table–became harder and harder to maintain the more I realized that it was my understanding of Free Software, gained through ongoing years of ethnographic apprenticeship, that was driving my involvement.” He cannot fit into the anthropologist’s shoes because there is no need for one at Connexions. And so he ends up providing legal advice (which, strictly speaking, he is not qualified to do) and doing intermediary work with Creative Commons, a nonprofit organization that promotes open content licenses.

Fieldwork is what anthropologists do. But what do anthropologists do when they do fieldwork? The definition has evolved over time. An anthropologist used to hang around a remote place for a while, getting acquainted with the people, pressing informants with questions, and taking ethnographic notes. In our age of globalization, there is more emphasis on multiple sites, nomadic fieldwork, and de-centered ethnography. People move constantly from one location to the next, so why should the ethnographer be the only one to stay in one place? Besides, in our interconnected world, something that happens in one place is often caused or explained by a phenomenon occurring in a distant place, and following the object under consideration is like pulling a thread from a ball of yarn. But fieldwork remains a central tenet of the anthropologist’s identity, what distinguishes him or her from scholars in other disciplines who “don’t do fieldwork.”

Hanging around with local hackers in Bangalore

Kelty insists that his account of the Free Software movement is based on ethnographic fieldwork. He gives a few vignettes of his engagement in the “field”: meeting two healthcare entrepreneurs at a Starbucks in Boston, cruising the night scene in Berlin, hanging around with local hackers in Bangalore, and, in the end, getting a position in the anthropology department at Rice University in Houston, where the Connexions project is based. But there is little purpose to these mentions of various locations, apart from demonstrating the coolness of the author and his persistence in becoming a geek akin to the ones he associates with. When it comes to substance, his real source of information is online. As he notes, nearly everything about the Internet’s history is archived. He is even able to track down newsgroup discussions dating back to the 1980s and chronicling the birth of open systems. As a result, the bulk of Kelty’s research presented in Two Bits is either archival work on the history of computer science or consulting work for the Connexions project, not ethnographic fieldwork in the strict sense of the word.

Anthropologists writing PhD dissertations are required to demonstrate skills in manipulating theory. The canon of works to be mastered is rather limited: a grounding in Marx, a heavy dose of Foucault, some exposure to Freud or Lacan, a pinch of feminist theory or media studies for those so inclined, and the PhD student is all set. Even by that light standard, Kelty must have flunked his theory exam. He introduces Foucault mainly for the record, but all he draws from the famous article “What Is Enlightenment?” is a quote stating that modernity should be seen as an attitude rather than a period of history. In other words, geeks are modern because they are cool. In another passage, he mentions that the notion of recursive public he proposes should be understood from the perspective of works by Jürgen Habermas, Michael Warner, Charles Taylor, John Dewey, and Hannah Arendt. Then he stops. Beyond the obvious point that eighteenth-century coffee shops are different from today’s Internet forums, there is no further elaboration on these authors.

GNU (“GNU is Not UNIX”)

Another aspect of theory is the elaboration of concepts. Here, Kelty fares better, but I would still give him only a passing grade. His notion of a “recursive public” is indeed a working concept, or a middle-range theory of the kind social scientists are wont to propose. Kelty defines it as “a public that is constituted by a shared concern for maintaining the means of association through which they come together as a public.” Recursivity is to be understood in the way computer programmers define procedures or name applications in terms of themselves. Popular examples include GNU (“GNU is Not UNIX”), but also EINE (“EINE Is Not EMACS”) or ZWEI (“ZWEI Was EINE Initially”). It is, to use another image, Escher’s hands drawing each other. But the author does not try to sell his concept too hard: as mentioned, he does not explore the interplay with Habermas’s notion of a public sphere, and he downplays its importance for future scholarship (“I intend neither for actors nor really for many scholars to find it generally applicable.”). One would be at a loss to find other original concepts in the book. The expression “usable pasts” he uses to introduce his geek stories is just another name for modern myths. The notion of “singularity,” a point in time when the speed of autonomous technological development outstrips the human capacity to control it, is only a piece of geek folklore. Visibly, Kelty is more interested in telling stories than in building theory.
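Kelty’s borrowed image of recursion can be made concrete with a short sketch. The following Python snippet (the function and expansion table are my own illustration, not anything from the book) expands a recursive acronym a fixed number of levels, showing how a definition that contains itself never fully bottoms out:

```python
# Recursive acronyms define themselves in terms of themselves:
# each expansion of "GNU" yields a string that again contains "GNU".
# Illustrative sketch -- the table and function are not from the book.

EXPANSIONS = {
    "GNU": "GNU is Not UNIX",
    "EINE": "EINE Is Not EMACS",
    "ZWEI": "ZWEI Was EINE Initially",
}

def expand(acronym: str, depth: int) -> str:
    """Expand a recursive acronym `depth` levels deep."""
    if depth == 0 or acronym not in EXPANSIONS:
        return acronym  # base case: stop recursing
    # The expansion contains the acronym itself, so recurse on that
    # occurrence with one less level of depth.
    inner = expand(acronym, depth - 1)
    return EXPANSIONS[acronym].replace(acronym, inner, 1)

print(expand("GNU", 3))
# GNU is Not UNIX is Not UNIX is Not UNIX
```

However deep one expands, the acronym reappears inside its own definition; a “recursive public,” by analogy, is a public whose activity keeps reproducing the very means of association that constitute it.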

Some authors define anthropology as the interpretation of cultures. In his book’s title, Kelty insists on the cultural significance of Free Software. Yet interpretation is lacking. By this, I mean that the anthropologist should be in search of meaning, not just facts or fictions. Kelty presents an orderly narrative of the origins and development of Free Software, organized around five basic functions: sharing source code, conceptualizing open systems, writing licenses, coordinating collaborative projects, and fomenting movements. He illustrates each chronological step with various stories, revolving around the development of the UNIX operating system and the standardization of Internet communications through TCP/IP. The result is informative if somewhat lengthy, but the cultural significance of the whole is not really addressed. Instead of wrapping up the lessons of this history, the last part of the book moves to a completely different topic by asking what is happening to Free Software as it spreads beyond the world of hackers and software and into online textbook publishing.

“Berlin. November 1999. I am in a very hip club in Mitte”

Anthropologists are authors, and their writing skills matter enormously in the reception and impact of their works. The style of Two Bits is more attuned to a journalistic account than to a piece of scholarship. This shows especially in the vignettes placing the author in various situations and locations, which create a “reality effect” but do not really add anything to the comprehension of the subject. Lines like “Berlin. November 1999. I am in a very hip club in Mitte” or “Bangalore, March 2000. I am at another bar, this time on one of Bangalore’s trendiest streets” may be proper for nonfiction travelogues or media coverage, but they should not find their way into anthropology books.