Like the inaudible hum of the electrical grid at 60 hertz, the cloud is silent, in the background, and almost unnoticeable. As a piece of information flows through the cloud — provisionally defined, a system of networks that pools computing power1 The first part of my provisional definition, the system of networks, is technically what is known as a “network of networks”: there are multiple kinds of disparate networks, from fiber to wireless to copper, within it. The Internet was so named because it moved data between satellite, packet radio, and telephone networks (inter-networks). To take the example of video streaming, when the video moves from a computer network operated by, say, Netflix or Amazon, to a different network, the cellular network, this is a “network of networks.” — it is designed to get to its destination with “five-nines” reliability, so that if one hard drive or piece of wire fails en route, another one takes its place, 99.999 percent of the time. Because of its reliability and ubiquity, the cloud is a particularly mute piece of infrastructure. It is just there, atmospheric and part of the environment.
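Since the passage turns on the figure "99.999 percent," it may help to make the arithmetic concrete. A minimal sketch in Python, using standard availability math (the figures below are derived, not drawn from the text):

```python
# "Five nines" = 99.999% uptime. How much downtime does that
# actually permit per year?
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

def allowed_downtime_minutes(nines: int) -> float:
    """Yearly downtime budget for a given number of nines."""
    availability = 1 - 10 ** (-nines)
    return MINUTES_PER_YEAR * (1 - availability)

for n in range(2, 6):
    print(f"{n} nines: {allowed_downtime_minutes(n):8.2f} min/year")
# Five nines allows roughly 5.26 minutes of downtime per year --
# which is why even rare failures make headlines.
```

The steep drop from two nines (about 87 hours a year) to five nines (about five minutes) is what makes the marketing claim so fragile in practice.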
Until something goes wrong, that is. Until a dictator throws the Internet “kill switch,” or, more likely, a farmer’s backhoe accidentally hits fiber-optic cable. Until state-sponsored hackers launch a wave of attacks, or, more likely, an unanticipated leap year throws off the servers, as it did on February 29, 2012. Until a small business in Virginia makes a mistake, and accidentally directs the entire Internet — yes, all of the Internet — to send its data via Virginia, and, almost unbelievably, it does. Until Pakistan Telecom inadvertently claims the data bound for YouTube. A multi-billion-dollar industry that claims 99.999 percent reliability breaks far more often than you’d think, because it sits on top of a few brittle fibers the width of a few hairs. The cloud is both an idea and a physical and material object, and the more one learns about it, the more one realizes just how fragile it is.
The gap between the physical reality of the cloud, and what we can see of it, between the idea of the cloud and the name that we give it — “cloud” — is a rich site for analysis. While consumers typically imagine “the cloud” as a new digital technology that arrived in 2010–2011, with the introduction of products such as iCloud or Amazon Cloud Player, perhaps the most surprising thing about the cloud is how old it is.2 Businesses, in contrast, typically date the term’s introduction to 2006, when it was used by Google to describe a new business model. Finally, researchers often list the date as 1996: the MIT Technology Review finds a reference to a patent for “cloud computing,” an unrealized model, in 1996, while the research studio Metahaven incorrectly cites 1996 as the first use of the word “the cloud” as it refers to network design. See Antonio Regalado, “Who Coined ‘Cloud Computing?’” and Metahaven, “Captives of the Cloud, Part I.” Seb Franklin has identified a 1922 design for predicting weather using a grid of “computers” (i.e., human mathematicians) connected by telegraphs.3 Lewis Fry Richardson’s Weather Prediction by Numerical Process (1922), as identified in Seb Franklin, “Cloud Control, or the Network as Medium,” 452–453. AT&T launched the “electronic ‘skyway’” — a series of microwave relay stations — in 1951, in conjunction with the first cross-country television network. And engineers at least as early as 1970 used the symbol of a cloud to represent any unspecifiable or unpredictable network, whether telephone network or Internet.
Figure 1.1 provides an early example. Drawn by Irwin Dorros, director of network planning for AT&T, this diagram utilizes a series of three clouds to describe the network behind AT&T’s new Picturephone service. Previously, network maps had been drawn as block diagrams — a series of boxes indicating either the exact telephone circuit or at least the possibility of finding the exact circuit. But Picturephone, a primitive videoconferencing system, was one of the first applications that worked across a mixture of analog and digital networks. Because Picturephone would operate regardless of the type of physical circuit underneath, Dorros illustrated the boundaries of the networks as an amorphous form.4 Perhaps the only other legacy of Picturephone, which has been discontinued and is now a relic, is the “pound” or “number” key (#) on the phone, pressed to differentiate Picturephone calls from voice calls.
What we learn from the diagram is simple: the cloud’s genesis was as a symbol. The cloud icon on a map allowed an administrator to situate a network he or she had direct knowledge of — the computers in his or her office, for example — within the same epistemic space as something that constantly fluctuates and is impossible to know: the amorphous admixture of the telephone network, cable network, and the Internet. While the thing that moves through the sky is in fact a formation of water vapor, water crystal, and aerosols, we call it a cloud to give a constantly shifting thing a simpler and more abstract form. Something similar happens in the digital world. While the system of computer resources comprises millions of hard drives, servers, routers, fiber-optic cables, and networks, we call it “the cloud”: a single, virtual object. Doing so not only makes things easier for users and computer programs but also allows the whole system to withstand the loss of an individual part. (Most of the time, anyway.)
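The mechanism described here — many physical parts presented as one logical object that survives the loss of any single part — can be caricatured in a few lines of Python. This is a toy model, not how any real cloud storage system works; the class and its replication scheme are invented purely for illustration:

```python
# Toy "cloud drive": every value is written to two physical drives,
# so the single logical store survives the failure of any one drive.
class CloudDrive:
    def __init__(self, num_drives: int = 3):
        self.drives = [dict() for _ in range(num_drives)]

    def put(self, key: str, value: str) -> None:
        # Replicate onto two drives chosen by hashing the key.
        i = hash(key) % len(self.drives)
        j = (i + 1) % len(self.drives)
        self.drives[i][key] = value
        self.drives[j][key] = value

    def get(self, key: str) -> str:
        # Read from whichever replica still holds the key.
        for drive in self.drives:
            if key in drive:
                return drive[key]
        raise KeyError(key)

    def fail(self, index: int) -> None:
        # Simulate a drive failure by wiping its contents.
        self.drives[index].clear()

store = CloudDrive()
store.put("photo.jpg", "binary data")
store.fail(hash("photo.jpg") % 3)  # lose one of the two replicas
print(store.get("photo.jpg"))      # still readable: "binary data"
```

The caller never learns which physical drive answered: that indifference to the underlying hardware is, in miniature, the virtualization the cloud icon performs.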
As a result, the cloud is the premier example of what computer scientists term virtualization — a technique for turning real things into logical objects, whether a physical network turned into a cloud-shaped icon, or a warehouse full of data storage servers turned into a “cloud drive.” But the gap between the real and the virtual betrays a number of less studied consequences, some of which are benign and some of which are not. One’s data trail grows with each website one visits and each packet one sends through the cloud. The results are used by both marketing companies trying to target an online ad for, say, auto insurance, and government agencies trying to target terrorists for extrajudicial killings. We tend to perceive the two kinds of targeting as separate because online privacy appears to be a born-digital problem that has little to do with geopolitics. That gap, then, is crucial. The word “cloud” speaks to the way we imagine data in the virtual economy traveling instantaneously through the air or “skyway” — here in California one moment, there in Japan the next. Yet this idea of a virtual economy also masks the slow movement of electronics that power the cloud’s data centers, and the workers who must unload this equipment at the docks.5 As Allan Sekula writes, most cargo still takes the same amount of time to travel across the Pacific Ocean — eight to twelve days. Sekula, “Dismal Science, Part I,” 50. It also covers up the Third World workers who invisibly moderate the websites and forums of Web 2.0, such as Facebook, to produce the clean, well-tended communities that Western consumers expect to find. By producing a seemingly instant, unmediated relationship between user and website, our imagination of a virtual “cloud” displaces the infrastructure of labor within digital networks.
This book is an attempt to examine what occurs in the gap between the real and the virtual. In this it offers two interrelated stories. On one hand, A Prehistory of the Cloud tells the story of how the cloud grew out of older networks, such as railroad tracks, sewer lines, and television circuits, and often continues to be layered on top of or over them. Today, of course, the cloud has become so naturalized in everyday life that we tend to look right through it, seeing it uncritically, if we see it at all.6 Scholars Lisa Gitelman and Geoffrey Pingree have suggested that a new medium engages in a period of “discursive conflict” with an older medium in order to define its uses for society. A medium is successful, they note, when we naturalize it and view it as completely transparent to our historical frame — when it seems to become “unmediated.” As a result, the cloud is simultaneously transparent — it feels completely natural — and opaque — it hides the things that are unnatural to us behind its user-friendly interface. Gitelman and Pingree, “Introduction,” xiv. To make this historical infrastructure more visible, this book turns back the clock to the clumsy moments when the cloud was more of an idea than a smoothly functioning technology. It examines a series of cultural records and events that shaped the cloud’s “prehistory,” including a group of microwave relay stations sabotaged near Wendover, Utah, in 1961 that sparked a furious debate about what a network actually is; a 1946 comic strip, Bobby Gets Hep, produced by AT&T that taught children etiquette for party lines and therefore helped construct our modern notion of privacy; and the US government’s announcement of a National Data Center, a 1966 proposal deemed so dangerous to society that one scientist likened it to the development of a nuclear weapon.
Using these examples, the book tells a second story about the politics of digital culture. When commentators describe the nebulous Occupy Wall Street movements, the workings of global capitalism, the web-mediated resistance movements comprising the Arab Spring, and Al Qaeda and Associated Movements (AQAM) all as “cloudlike,” or as “a network of networks,” it is clear that the cloud, as an idea, has exceeded its technological platform and become a potent metaphor for the way contemporary society organizes and understands itself.7 Scholars who think this way often base their argument on John Arquilla and David Ronfeldt, Networks and Netwars, producing studies such as Samuel Weber, Targets of Opportunity, and Manuel Castells, The Power of Identity, which asks: “How can states fight networks?” Responding to this shift, communications and media scholars have attempted to theorize the new forms and structures of power that result from networked societies.
One traditional model of explaining political power is known as sovereignty. Consider a hypothetical kingdom, in which its sovereign can coerce his subjects into doing what he wants. Power here is top-down: centralized in the throne room, it radiates out to the borders of the king’s territories. Further, sovereign power is framed in the negative: it can punish unruly subjects, prohibit certain practices, and, in the most extreme cases, confiscate a subject’s life. While many modern democracies no longer have monarchs, they nevertheless retain some aspects of sovereign power. The United States, for example, has clearly defined borders; it centralizes power into clearly defined federal institutions and agencies, such as the White House and the FBI; and many of those agencies exist to prosecute or punish violations of the law. Yet the Internet puts pressure on this model of power. In one case, the US government attempted to limit the export of strong encryption software, only to be confounded by a proliferation of foreign websites — easily accessible by US users — offering that software for free download. Explaining this difficulty, legal scholar James Boyle writes that digitally dispersed, transnational networks always exceed a sovereign state’s ability to “regulate outside its borders ... We are sailing into the future on a sinking ship. This vessel, the accumulated canon of copyright and patent law, was developed to convey forms and methods of expression entirely different from the vaporous cargo it is now being asked to carry.”8 James Boyle, “Foucault in Cyberspace.”
Boyle and his contemporaries use sovereign power as an example of what digital networks make obsolete. Yet, counterintuitively, what this book argues is that “the cloud” also indexes a reemergence of sovereign power within the realm of data. Sovereign power may seem worlds away from the age of the Internet, particularly given its antiquated elements, such as the monarch’s power to arbitrarily kill. Algorithms and users seem to be running the world online, rather than there being a central decision maker; further, a topography of power based on borders or territories, instead of networks, seems out of date.9 This idea is forcefully voiced in Michael Hardt and Antonio Negri, Empire: “The fundamental principle of Empire as we have described it throughout this book is that its power has no actual and localizable terrain or center. Imperial power is distributed in networks, through mobile and articulated mechanisms of control” (384). This extends a strain of thinking most typically exemplified by communications scholar Manuel Castells in The Rise of the Network Society, and by Yochai Benkler in The Wealth of Networks . Indeed, the incongruity has caused media scholars to develop two main alternate models of power, which I explain below; I then return to why the cloud may actually effect a return to sovereignty.
Initially, scholars of surveillance and media turned to Michel Foucault’s model of disciplinary power, which replaces a single source of power, the sovereign, with a series of institutions, such as a factory, prison, school, hospital, or even the family, through which power can be exercised and subjects managed. Disciplinary power is far less coercive than sovereignty; it instead operates according to a set of norms and rules. A school, for example, educates its pupils and in so doing produces certain formations of knowledge and self-understandings of what constitutes “good behavior.” The school organizes pupils by classroom, grade, and daily schedule; evaluates each pupil through testing; and, occasionally, attempts to reform a wayward student by issuing detention. Disciplinary power, then, works as a kind of surveillant gaze, under which subjects internalize behavioral standards, learn to order their bodies, or are supervised through other social mechanisms. In his perhaps most quoted study, Foucault mobilizes Jeremy Bentham’s 1791 design of a prison called the Panopticon to analyze a condition in which, as Foucault writes, “a state of conscious and permanent visibility ... assures the automatic functioning of power.”10 Michel Foucault, Discipline and Punish, 201.
While the Internet is certainly a space of radical visibility — each and every packet of data that passes through the US Internet may be inspected by the National Security Agency, one aspect of surveillance that has led to Panopticon-like comparisons — digital networks complicate many other aspects of this theory. In the cloud, for example, there is seemingly no set of behavioral norms, hierarchies, or enclosed spaces, and any institutions involved in managing users seem purely incidental; the closest, it would seem, are Internet protocols, rather than organizations. And as computer networks seem to have become decentralized and distributed, power, too, seems to have become distributed on a microscopic level. For these reasons, over the last ten years, scholars of new media have generally coalesced around a second model: the control society.11 This Deleuzian argument about control is most clearly laid out in books such as Alexander Galloway, Protocol, Wendy Chun, Control and Freedom, and Raiford Guins, Edited Clean Version, and embraced by scholars in more “traditional” disciplines, such as D. N. Rodowick, Reading the Figural, or, Philosophy after the New Media. If Foucault originally described a shift from sovereign societies to disciplinary societies, Gilles Deleuze extended this shift into a third phase, in which subjects are governed by invisible rules and systems of regulation, such as our credit scores, web history, and computer protocols. As Deleuze puts it, prisons and other disciplinary institutions are subsumed by these mechanisms of control: “Everyone knows these institutions are finished, whatever the length of their expiration periods. It’s only a matter of administering their last rites and of keeping people employed until the installation of new forces knocking at the door.”12 Gilles Deleuze, “Postscript on the Societies of Control,” 4. For more on this shift, see Franklin, “The Limits of Control.”
As an example of Deleuze’s idea of the control society, consider the credit card. There are often no preset spending limits associated with the card, and because a computer determines whether a particular transaction is fraudulent by comparing it to the charges the spender usually makes, a cardholder does not even have to worry if someone steals the card. Yet — and this is the key difference — the very freedoms credit cards offer also require users to order and self-regulate their own behavior. Even if the card is issued with no preset spending limits, a new cardholder still cannot typically buy a $100,000 car using that card. Regardless of whether the cardholder can afford it, a computer will likely decline the charge based on the lack of prior spending behavior; a cardholder, in turn, is aware of that potential for embarrassment, as well as the fact that overloading one’s credit line will likely impact one’s credit score and future credit potential. (The cardholder may wish to buy a car or a house in a few years, for instance, and hopes to get a loan.) Yet this situation may change from day to day; perhaps after the cardholder buys a series of $1,000 meals over a period of time, the computer will decide that he or she fits the profile of a “high spender” and allow the $100,000 charge in the future.
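The threshold logic in this anecdote can be caricatured in code. The sketch below is hypothetical — no real fraud system is remotely this simple, and the function, names, and multiplier are invented for illustration — but it captures the mechanism the chapter describes: the “limit” is not a fixed rule but a moving function of the cardholder’s own past behavior:

```python
# Hypothetical caricature of behavior-based authorization: a charge
# is approved only if it fits the cardholder's own spending history.
def approve(charge: float, history: list[float], multiplier: float = 100.0) -> bool:
    if not history:
        return charge < 100.0            # unknown cardholder: tiny default ceiling
    ceiling = multiplier * max(history)  # the "limit" is your own past, scaled up
    return charge <= ceiling

history = [40.0, 85.0, 60.0]        # ordinary spending
print(approve(100_000.0, history))  # False: declined, regardless of actual funds

history += [1_000.0] * 12           # a run of $1,000 meals...
print(approve(100_000.0, history))  # ...reprofiles the cardholder; now approved
```

Note that nothing in the function mentions a credit limit at all; the only variable is the subject’s own recorded behavior, which is precisely Deleuze’s point about control.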
Somewhere, a computer is calculating the impact of each spending decision and adjusting to it in real time, and, in turn, the cardholders adjust, too. “You” are a set of spending patterns, and that projected profile both enables the bank to extend credit to you as well as puts the onus on you to take responsibility for those spending patterns. Convenient, if a little creepy. The core idea behind a control society, then, is a continuous set of cybernetic systems, financial incentives, and monitoring technologies molded to each individual subject, that follow him or her even when “outside” an institution as such; and, precisely because there are fewer explicit institutions, spaces, or rules to restrict the subject’s behaviors, these systems are often experienced as freeing.
Can the cloud be explained by this model of control? Most scholars would unequivocally say yes; Deleuze’s description of data aggregation, the amorphous and open environment of computer code, and even the gaseous qualities of corporations within a control society map directly onto attributes of the cloud. If there is any problem with his theory, it is that his argument seems a little too easy to apply to the cloud. When he writes about a nightmare scenario — that an “electronic card that raises a given barrier” is governed by “the computer that tracks each person’s position” — a present-day reader wonders what the fuss is about; much of what Deleuze envisioned, such as computer checkpoints and computer tracking, has come true already.13 Deleuze, “Postscript on the Societies of Control,” 7. The widespread claim that we are in an era of biopolitics has largely been built on the strength of such evidence.
This book’s goal is to think beyond this idea of the control society, to both acknowledge its influence and use the cloud to ask what this theory cannot account for. If we look at the cloud closely, we find the presence of phenomena that hint at other explanations. The all-but-forgotten infrastructures that undergird the cloud’s physical origins, for example, often originated in a state’s military apparatus; one of the earliest real-time computer networks was an early-warning system for incoming nuclear missiles in the 1950s. Today’s cloud relies on a repurposed version of this infrastructure, among a host of other Cold War spaces. Internet providers reuse old weapons and command bunkers as data centers, while the largest data management company today, Iron Mountain, was founded as Iron Mountain Atomic Storage, Inc. Even as this militarized legacy begins to decay, its traces continue to haunt modern-day digital networks with their ghostly presence.
These traces are a clue that the supposedly anachronistic mode of sovereign power may be returning under different forms. Rather than consider sovereign power a historical exception or aberration within a wholesale shift to the systems of control, I suggest that it has mutated and been given new life inside the cloud.14 Most critiques of Deleuze’s historicity have centered on the relationship between disciplinary and control societies, perhaps because they are more obviously coterminous in time. Sovereignty receives rather less attention. In response to this common line of thought among technologists, Geert Lovink has commented that “Internet protocols are not ruling the world ... In the end, G. W. Bush is.” This rejoinder, sent by email to Alexander Galloway and Eugene Thacker, is what opens their book The Exploit (1); they propose that sovereignty and networks may be related, most notably in examples such as Al Qaeda and the “swarm.” What differentiates our approaches is one of method: they focus on the technological mediation of wars, while my study, directed at its historical precedents, suggests that this mediation is largely a construct. Despite this disagreement, this book’s line of thinking is very much a response to Lovink (via Galloway and Thacker)’s challenge. As Foucault himself emphatically warns us: “We should not see these things as the replacement of a society of sovereignty by a society of discipline, and then of a society of discipline by a society, say, of government. In fact we have a triangle”: a triangle where sovereign, disciplinary, and governmental power (or control) constitute the three sides.15 Michel Foucault, Security, Territory, Population, 143. As Judith Butler writes, political theorists Wendy Brown and Giorgio Agamben have productively “refuse[d] the chronological argument that would situate sovereignty prior to governmentality” (Butler, Precarious Life, 60). 
What this book shows is that the cloud grafts control onto an older structure of sovereign power, much as fiber-optic networks are layered or grafted onto older networks. I term this new hybrid form the “sovereignty of data.”
The cloud contains a subtle weapon, a way of wrapping sovereign power — torture, targeted killings, and the latest atrocities in the war on terror — in the image of data. By this I do not mean that sovereign power surfaces in new forms of “cyberwarfare,” or (as others have argued) in the fact that war itself has become increasingly mediated by digital technologies.16 Jean Baudrillard, Paul Virilio, and James Der Derian are perhaps the best-known scholars of this school. Instead, the sovereignty of data comes out of the way we invest the cloud’s technology with cultural fantasies about security and participation. These fantasies may be as simple as the idea that the cloud will protect our data from unsafe, “unfree” hackers; that data needs to be secured from disaster; or even that the cloud is a unique medium for user interaction. These ideologies are disseminated through routine interactions with applications in the cloud, such as tagging a photograph on Facebook. Even measures meant for our safety — marking messages as spam, for instance — construct a set of cultural norms that we internalize as “responsible” online behavior.
As users are increasingly aware, values such as participation are sometimes co-opted by market mechanisms that John Horvath originally described as “freeware capitalism.”17 John Horvath, “Freeware Capitalism.” Corporations, for example, ask us to “interact” as a form of marketing feedback. Even more subtly, however, by interfacing with the structure of sovereign power, these ideologies position the cloud’s users within the same political economy as the acts of state violence performed in their name. The wars over resources and territory and extralegal torture after 9/11 may appear to be worlds away from the political economy of data. But, I contend, they point to a resurgence of a violence that is enabled by the cloud.
Seen correctly, the cloud is a topography or architecture of our own desire. Much of the cloud’s data consists of our own data, the photographs and content uploaded from our hard drives and mobile phones; in an era of user-generated content, the cloud is, most obviously, our cloud (this is the promise of the “i” in Apple’s “iCloud,” or to use an older reference, the “my” in “MySpace”). Yet these fantasies — that the cloud gives us a new form of ownership over our data, or a new form of individualized participation — are nevertheless structured by older, preexisting discourses. As chapters 1 and 3 argue, the cloud’s relationship to security can be traced to Cold War ways of thinking about internal enemies as well as nineteenth-century notions of a “race war,” while chapters 2 and 4 show that its participatory impulse comes not just through new interactive technologies but also through economic liberalism’s mechanisms for constructing a modern subject that is left to itself — the sort of subject we call the “user.” The intersection of security and participation in the cloud can help us understand any number of individual cases: why we constantly invoke the specter of foreignness (e.g., China, Iran, Nigeria) when discussing hacker/spammer threats to the “free” Internet; why Cold War rhetoric has increasingly informed digital threats, as in the New York Times’ invention of the phrase “mutually assured cyberdestruction”;18 David E. Sanger, “Mutually Assured Cyberdestruction?” why calls for digital activism (or “hacktivism”) are often co-opted in a framework that already invites participation; why the so-called Internet kill switch reads (falsely) as a joke that involves no actual killing; and even why the NSA’s facilities for decrypting intercepted calls are structurally similar to those used by digital archivists trying to preserve digital media from decay.
Taken together, my examples suggest that the sovereignty of data is ultimately what Achille Mbembe has called a “necropolitics,” a politics of death.19 Achille Mbembe, “Necropolitics.” It is the cloud’s participatory ideology that motivates amateur data collectors to assist NATO in streamlining its F-16 bombing operations, for example: as I argue in chapter 4, war outsources its dirty work to volunteers by substituting a live, interactive representation of death for death itself. The perversity of the cloud is therefore not that it explicitly causes death. Rather, the cloud transmutes the mechanism of death and presents it to us as life.
Of course, the idea that the cloud is inherently political may not be so new. Today one need not go far to find headlines like that on the February 2014 cover of Popular Mechanics: “Privacy Is Disappearing: How New Tech Tools Can Help You Fight Back.” As ever more data moves into the cloud, the general public has increasingly become aware that the cloud is politically contested terrain; articles on dataveillance and cell phone tracking routinely run in the popular media. Yet typical responses to this debate invoke technological and legal solutions, such as do-not-track software or a new law; the Popular Mechanics article, for example, tells users to reclaim their privacy by installing encryption software and “practicing good browser hygiene.”20 Davey Alba, “It’s Time to Fight for Your Digital Privacy.” The problem with this approach, however, is that it does little to address the wider political and social context from which these problems — and even the very idea of privacy — originate; nor does it take into account the logics behind technology itself that actually reproduce and redouble the problem of privacy. As I show in this book, for example, digital hygiene does not “fight” US government monitoring, as the article claims; indeed, digital hygiene is actually the goal of a Department of Homeland Security educational campaign that aims to have US users internalize correct (i.e., legal) online behaviors.
Scholars concerned about such issues have typically embraced the idea of materiality (or, more specifically, “platform studies”) to recuperate the often invisible logics, algorithms, and apparatuses that structure digital culture. Focusing on digital culture’s media-specific properties typically involves examining the technological platforms within: Internet Protocol, lines of Java code, network cables, or conventions for the Unix operating system.21 The term “medium specificity” comes from art history and generally refers to theories of Greenbergian modernism, though the idea ultimately dates back to ancient Greek philosophers, the Renaissance paragone between painting and sculpture, and Enlightenment texts such as Gotthold Ephraim Lessing’s Laocoon, or the Limits between Painting and Poetry (1766). The phrase has been reappropriated in places such as N. Katherine Hayles, “Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis,” and Lev Manovich, The Language of New Media, which (though Manovich does not use the term explicitly) attempts to map the specific attributes of new media through several organizing principles, such as the database. Numerous ever more specific subfields — such as “critical code studies” and “software studies” — have now internalized this method of study, to the point where it has become one of the dominant (if not the default) methods of analyzing new media. Thinking about the attempts by film scholars to define the “medium specific” properties of film from the 1890s to the 1930s — is film a (photo)play? is it an art? is it science? — one quickly understands the parallel impulse to find the “newness” of new media. In doing so, such scholars claim that an awareness of a medium’s materiality will lead to a more effective understanding of its ideological content. 
Yet the cloud, I am arguing, inevitably frustrates this approach, because by design, it is not based on any single medium or technology; it is medium-agnostic, rather than medium-specific. As an inter-network, any type of communications network or technological platform can conceivably be attached to it, even analog ones; in 2001, one Norwegian enthusiast even implemented Internet Protocol with a set of carrier pigeons. (Observers reported a disappointing 56 percent packet loss rate: rephrased in English, five out of the nine pigeons appeared to have wandered off, or have been eaten.)
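The pigeon arithmetic is easy to verify against the text’s own gloss (nine packets sent, five lost to wandering or predation):

```python
# Checking the quoted packet-loss figure for the carrier-pigeon
# experiment: nine packets sent, five lost, four delivered.
sent, lost = 9, 5
loss = lost / sent
print(f"packet loss: {loss:.1%}")  # 55.6%, i.e. the roughly 56 percent reported
```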
Further, one of the curious dilemmas that the cloud represents is that not even the engineers who have built it typically know where the cloud is, and, as a consequence, what part of the apparatus to examine. A personal anecdote: in my stint as a network engineer in the late 1990s, I would routinely travel to a handsome, terra-cotta-roofed building a few blocks from downtown Palo Alto, California. Unbeknownst to its well-heeled passers-by, perhaps one-fifth of the world’s Internet traffic once flowed through this building. Back then, Palo Alto was one of five major exchange points in which the major national “backbone” networks, such as AT & T or Sprint, would connect with each other. Because of security concerns, engineers were never quite sure who or what was in the neighboring rack of network and computer equipment; my coworkers and I just knew that our cables ran somewhere into a cage in the floor below. I had a pretty good hunch, though — gleaned through a combination of the rumor mill, glimpses of router names punched on Dymo tape, and the Quebecois-accented English that some other workers spoke. Yet while I knew how to route packets to Deutsche Telekom, Nippon Telegraph and Telephone, and Teleglobe Canada over my computer screen, the network’s physical presence still felt remote to me. One morning at 4:00 a.m., I decided to become better acquainted with it.
So, impulsively, I took a fiber-optic cable and unplugged it. Then I held it up to my eye and looked in. On the other side of the fiber, I imagined, was Japan. The light was red, and it winked like a star on a smoggy night. Because fiber optics were new at the time, what I had only read about, but not yet experienced, was that there are two kinds of fiber-optic cable: single-mode, for long distances, and multi-mode, for short distances. A single-mode laser would have lased a hole into my cornea and blinded me instantly. Multi-mode is often powered by LED light sources rather than true lasers. Single-mode is indicated by a yellow cable; multi-mode, orange. The cable I had grabbed was orange.
That I can still see today is a testament to both my dumb luck, and also, metaphorically, to the paradox that the cloud represents: that you can never see it by looking directly at it. Indeed, my naive desire to look into the cloud’s fiber-optic network is a little like asking what a film is about, and looking into the most direct source of the image — the projector beam — to find out. You can get as close as you want to it, but the blinding light won’t tell you much about the film, and it may even be dangerous for your eyes; it certainly involves turning your head away and not watching the film. The cloud is not unlike the changing shapes and virtual images cast by a projector. But to mistake the apparatus for the film makes us like those mythical “country rubes” of the 1910s who were said to have assaulted the projection booth, looking for the real movie star shown by the film.
Analyzing the cloud requires standing at a middle distance from it, mindful of but not wholly immersed in either its virtuality or its materiality. For this reason, this book does not hew closely to the cloud’s current technologies — for instance, by dissecting the lines of code in a network protocol implementation. Nor will this book begin with the story of an invention. There are no scenes of apartment windows in Cupertino or cubicles in Seattle lit late at night, where an Apple worker realized something about accessing e-mail applications over the web, or an Amazon engineer became aware that excess computing capacity could be resold to other companies. Because the “cloud” is, properly understood, a cultural phenomenon, I mobilize three categories of primary sources that address this larger sense of the cloud:
1. Representations and anticipations of the cloud in US popular media, legal and political records, corporate advertisements and ephemera, and the like. To be sure, the prehistory of the cloud is not only an American story; there is a need for parallel scholarly studies on, say, the French online service Minitel and specifically non-Western network cultures.22 Such studies, particularly in English, are relatively rare. One recent exception is Anita Chan’s study of Peruvian digital culture; see Chan, Networking Peripheries. But the United States wields considerable veto power over supposedly international bodies for Internet standards as well as infrastructures such as domain name service; the Department of Justice has even used its jurisdiction over the Virginia-based registrar for the .com, .net, and .org domains to seize and prosecute non-US websites. And, more metaphorically, the book’s focus on the United States also responds to the cloud’s own rhetorical framing as “the cloud.” President George W. Bush was widely mocked for using the plural “Internets” when he referred to “rumors on the, uh, Internets” during a 2004 presidential debate. But in pluralizing the term, Bush was dead-on in a strict sense: there are multiple private Internets and clouds that parallel or shadow the public Internet, run by research universities, militaries, and even foreign countries that have built or are building “walled garden” Internets. Instead of recognizing this, however, we typically internalize the fiction that there is only one Internet, and one cloud.
The use of the cloud’s singular form — “the cloud” — not only condenses a wide multiplicity of network forms and clouds into a single vision that encompasses all networks, but also reflects a universalist world view that tracked closely with American political ideals as they developed through the 1950s on: that the cloud would stand in for a “free” Internet and liberal civil society. For that reason, texts drawn from US culture offer a unique perspective for understanding this sensibility.
2. Examples drawn from the culture of computer science itself: the terms, metaphors, and diagrams that show how scientists and hackers understood their subjects — descriptions that, to be clear, often succeeded or failed in practice rather than through technological superiority. What does it mean, for example, to name (and therefore to think of) a nonfunctional link to another website a “dead link” (rather than a “404 error”), or to mark each data packet with a “time to live” stamp? To think of multitasking as “intimacy,” and debugging as a form of “peeping”?
In revisiting subjects whose stories have been partially told by computer historians, my goal is not to recycle existing narratives of invention, but to examine discourses that are typically hidden within these narratives. Part of this aim is to avoid the presentist bias that often confirms successful technologies at the expense of the myriad of alternate ways that scientists imagined the future. Interactive computing was, for example, not developed for interactivity at all, but rather for more efficient debugging. For a while, the word “computer” did not even designate a machine, but rather a laborer — generally a low-paid female worker. Nor was computer science thought of as an independent discipline worth serious academic study until the 1950s and 1960s, when the first computer science programs were founded at the University of Cambridge and Purdue University. Lacking their own discipline, early computer scientists were by necessity in dialogue with urban planners, sociologists, businessmen, and even members of the counterculture; they were amateurs as well as professionals who were part of, but did not have the final word over, their cultural context.
3. Photographs, drawings, videos, and even games that offer insight into how visual culture functions in the cloud: what the cloud looks like on-screen; how we draw or map its shape; how the cloud grew out of TV/video networks. Crucial to this enterprise is the belief that visual culture does not merely reflect or represent beliefs; it also anticipates and shapes them. As Marshall McLuhan put it, artists serve as a “Distant Early Warning system” for societal change,23 Marshall McLuhan, Understanding Media, 3. and the artistic avant-garde offer us a window into the bleeding edge of how new media might be used. The art objects I consider — by Ant Farm, Trevor Paglen, the Raindance Corporation, and others — bring into focus key moments of the cloud’s development, and allow us to think through historical problems of power and visibility. Because visual culture tracks the minor mutations of power that shape the cloud, the question of power’s visibility may be best interrogated by artists.
Collectively, these works of art also pose an important, if unanswerable, question: what tactics can we use to challenge a diffuse, invisible structure of power? Increasingly, artists and activists have used new media techniques to critique the myriad of problems raised in and by digital culture. As a form of electronic protest, for example, “hacktivist” groups have used distributed denial-of-service software to overwhelm target websites. Perhaps the best-known example occurred in 2010, when Visa and MasterCard refused to process donations for WikiLeaks, the site that positioned itself as a secure drop box for massive floods of leaked information; in response, pro-WikiLeaks sympathizers flooded, and shut down, websites such as Visa.com and Mastercard.com. In turn, the quantity of information on the WikiLeaks site, such as the 109,032 field reports of every death in the Iraq war, spurred another set of hacktivists to develop a parallel tactic: the construction of data-mining and data visualization tools for mapping and “seeing” the cloud of data (figure 1.2).
Taken as a whole, these heterogeneous groups of “tactical media” artists have suggested that electronic problems should be opposed electronically.24 The term “tactical media,” originating in 1993, refers to a set of unconventional artistic practices organized around opposition, and often describes artist groups such as Critical Art Ensemble, Yes Men, Electronic Disturbance Theater, and eToy. Perhaps the clearest definition of the term comes from Rita Raley as “a mutable category” that revolves around “disturbance”; examples include “reverse engineering, hacktivism, denial-of-service attacks, the digital hijack,” and so forth (Raley, Tactical Media, 6). Whether through activism, art, or simply the narratives written about it, the cloud has offered a platform for unconventional modes of critique and dissent, a medium for loosely organized, decentralized modes of protest. Yet as productive as these strategies may be, many of them assume that the electronic medium they work in is a neutral one; as we see in chapters 3 and 4, their protests often reproduce the system of values (of, say, “participation”) embedded in the cloud.
This is a thorny problem, doubled by a historiographic one: the lines between newer and older forms of resisting power are typically drawn more sharply than usual, because the field of new media studies is typically built around finding and writing about the new. At the least, the term “new media” contains within it an implicit opposition to old media, just as digital media implicitly rules out or excludes the analog. (Even in the field of media archeology, a field oriented to studying dead media, the phrase “dead media” — and, as a result, media’s “deadness” — is too often taken for granted.) In discussing these artists, it may be more productive to set aside the impulse toward defining what is new about their work. For the battles over digital media are often reflected in other, older media, such as Portapak video, that were once new. The call to reconfigure the structure of the television network in the 1970s using community access television (CATV), for example, is strongly reminiscent of the early debates over broadband Internet; both technologies briefly seemed to offer an alternative to the network’s centralized structure before they were quickly commercialized and recentralized.25 This point is made most succinctly in David Joselit, Feedback. For more, see my discussion in chapter 1 on the Ant Farm collective.
Though such strategies may have failed in their utopian idealism, they offer a perspective on and a way of working through problems of contemporary media. To be clear, I am not suggesting that there is nothing new about new technology. Instead, I am offering an alternative to the kind of historiographical model reliant on a series of technologically induced epistemic shifts or ruptures that often pervades media studies.26 Technological determinism has been embedded in the very foundation of “new media” studies, at least since Marshall McLuhan. Lev Manovich offers a typical example: “Today we are in the middle of a new media revolution — the shift of all culture to computer-mediated forms of production, distribution, and communication ... Mass media and data processing are complementary technologies; they appear together and develop side by side, making modern mass society possible” (Manovich, The Language of New Media, 19, 23). Wendy Chun offers an important corrective to the scholar’s assignation of too much power to technology: “Thus, in order to understand control-freedom, we need to insist on the failures and the actual operations of technology” (Chun, Control and Freedom, 9). The analog technologies within the cloud periodically return to view, even if in spectral form. As a result, analog sources will allow us to think through digital problems, and, in turn, challenge the implicit separation between analog and digital.
To challenge this separation is to realize that the cloud is a historical, fragile, and even mortal phenomenon with its own timespan. Over the last twenty years, the Internet has been variously described as a “series of tubes,” an “information superhighway,” an “ecosystem,” a “commons,” a “rhizome,” a “simulacra,” a “cloud” (note the title of this book), and even, as the director of the MIT Media Lab once put it, a “flock of ducks.” Each term brings with it an implicit politics of space: if the Internet is imagined as a “public commons” being walled off by regulations such as the Digital Millennium Copyright Act, this serves as a potent rallying point for those who would defend it from such incursions; but if the Internet is a “rhizome,” then such incursions are already part of the network’s anarchic structure: “The Net treats censorship as noise and is designed to work around it.”27 On walling off the commons, see James Boyle, “The Second Enclosure Movement and the Construction of the Public Domain.” On rhizome, see John E. Newhagen and Sheizaf Rafaeli, “Why Communication Researchers Should Study the Internet.” These metaphors have therefore served as flashpoints for political debate. As Tiziana Terranova has shown, prominent neoconservatives such as Alvin Toffler and Newt Gingrich used the image of a network as a self-regulating “ecosystem” to repudiate the Clintonian metaphor of an “information superhighway” that, presumably, needed government construction and maintenance.28 Tiziana Terranova, Network Culture, 120.
“The cloud” is only the latest in this series of metaphors. Because it represents a cultural fantasy, it is always more than its present-day technological manifestation (which has, at any rate, already changed since the moment I set these words to paper). If we come to see the cloud as a historical object, we might realize that the story of the cloud is largely unwritten on two fronts: the past and the future. As its title indicates, this book puts forth a prehistory of the cloud. But it also attempts to open up a set of methodological tools to imagine the cloud in the future, meaning both the cloud’s impending obsolescence as well as its barely foreseen consequences. For the legacy of the cloud has already begun to write itself into the real environment. As one of the largest consumers of coal energy, for example, the cloud’s infrastructure was responsible for 2 percent of the world’s greenhouse gas emissions in 2008, and data centers have grown exponentially since then. The long-term consequences of the cloud are worlds away from the seductive “now” produced by its real-time systems. It is our job to catch up with this legacy.
To make a book about something as formless as the cloud is inherently a quixotic objective. Every book is a technology that imposes its own spatial terms on its subject: it’s generally linear in form, contains a certain number of pages, and is operated by flipping (or swiping). Within that container, my own four-chapter structure lays out the following question: if the cloud is a cultural fantasy (chapter 1) of participation (chapter 2) and security (chapter 3), what happens when users participate in their own security (chapter 4)? My argument is constructed by examining the cloud’s networks (chapter 1), virtualization (chapter 2), storage (chapter 3), and data-mining interfaces (chapter 4).
To understand what the chapter subjects in A Prehistory of the Cloud have to do with each other, it is first worth explaining a concept from computer science: the division of a technical apparatus into a series of so-called abstraction layers. These layers move progressively from the least abstract to the most abstract. In the case of networks, for example, the physical link on the bottom — fiber-optic cable, Ethernet copper wire — forms the layer of least abstraction. Various protocols in between (Internet Protocol, then Transmission Control Protocol on top of IP) form the middle layers, with the application layer on the very top (the software built on networking protocols, such as streaming video) being the most abstract. This model also describes other technologies, such as operating systems or algorithms, usually through three to seven layers of abstraction.
The idea of layered abstraction is readily visible in our day-to-day lives: you can send an e-mail without worrying about whether it travels over a wireless or wired connection, or store a file without knowing whether it is on a USB drive versus a magnetic drive; it is just “the network” or “the drive.” The idea thus offers a spatial model of understanding and even standardizing computing: each layer depends on the more material layers “below” it to work, but does not need to know the exact implementation of those layers. Cloud computing is the epitome of this abstraction, a way of turning millions of computers and networks into a single, extremely abstract idea: “the cloud.”
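For readers who want to see the layered model in miniature, the sketch below is purely illustrative: the toy headers, the port number, and the IP address are invented stand-ins rather than a real protocol implementation. What it demonstrates is the principle described above, that each layer wraps the opaque payload handed down from the layer above it and knows nothing about the layers below.

```python
# An illustrative sketch of abstraction layers, loosely modeled on the
# network stack described in the text. Each layer only wraps what the
# layer above hands it; none inspects the others' work. The headers,
# port, and address are invented stand-ins, not a real protocol.

def application_layer(message: str) -> bytes:
    # Most abstract layer: the software (streaming video, e-mail, etc.)
    return message.encode("utf-8")

def transport_layer(payload: bytes, port: int = 80) -> bytes:
    # TCP-like layer: prepend a toy header naming the conversation
    return f"TCP:{port}|".encode() + payload

def internet_layer(segment: bytes, dst: str = "203.0.113.7") -> bytes:
    # IP-like layer: prepend a toy header with a destination address
    return f"IP:{dst}|".encode() + segment

def physical_layer(packet: bytes) -> str:
    # Least abstract layer: the bits that actually cross fiber or copper
    return "".join(f"{byte:08b}" for byte in packet)

# A message descends through the stack, gaining a wrapper at each step.
frame = internet_layer(transport_layer(application_layer("hello")))
bits = physical_layer(frame)
```

Sending an e-mail “without worrying about whether it travels over a wireless or wired connection” is exactly this arrangement: the application calls downward through stable interfaces and never sees the physical layer’s bits.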
The chapter structure of A Prehistory of the Cloud evokes the spirit of its subject’s abstraction layers (table 1.1). This book begins with the earliest vision of the cloud as a “network of networks”; moves to the virtualization software that allows networked resources to be abstracted, and therefore shared; continues to data storage and security delivered through those virtual machines; and, last, examines the data-mining algorithms that “see” through, and rely on, cloud-delivered data. This layered structure may be useful for readers more familiar with a traditional platform studies approach: for instance, someone who has read Matthew Kirschenbaum’s work on hard drives and storage might refer directly to chapter 3, while someone interested in material infrastructures might find chapter 1 of interest.29 Matthew Kirschenbaum, Mechanisms.
Of course, from a different perspective, my division of the cloud into four layers is arbitrary, and omits any number of other possible technological layers that could have been examined — for instance, the relational database, or web application architecture. Further, because the technology of the cloud is relatively young, the relation between layers is likely confusing: a consumer may think of “cloud” as the cloud drive on his phone (layer 3); a software developer may use “cloud” to mean “software as a service” (layer 2), while a network engineer may continue to use “cloud” to mean a “network of networks” (layer 1). Yet this confusion may also be an opportunity. One of the reasons that a network engineer thinks of a cloud in this way is because the cloud signified a network in the 1970s, while cloud storage did not explicitly declare itself “the cloud” until the late 2000s. In other words, as each infrastructure becomes naturalized, we tend to refer to it with increasing amounts of abstraction, talking about its use (cloud storage) rather than the infrastructure itself (storage servers in data centers). Thus each level of abstraction is a sort of archeological deposit that records the idea of what we thought of “the cloud” at a certain moment in time — however problematic that “we” is. This chronology is neither linear nor exact, but instead testifies to the multiple discourses and prehistories that form the cloud today.
Begin, then, with the cloud’s base layer: the network. How did the cloud come to be shaped the way it is? Chapter 1, “The Shape of the Network,” answers this question by examining a highly charged moment in 1961, when the Bell System was targeted by a series of bomb attacks that tore through Utah and Nevada, at the same time that engineer Paul Baran began to develop his theories on distributed networks. Reviewing Senate hearings on the bombing, I conclude that the perfect network is an ideological fantasy, one that has, at its core, the principle of deviance: of having a break or a rot somewhere in the network, of having circuits — or people — that are unreliable and untrustworthy. From this, I turn to the architectural collective Ant Farm, which offered a very different vision of an information highway in 1970 and 1971 — one reliant on trucks circulating on the interstate highway to carry packets of data — to suggest that in order to approach the perceptual effects of the cloud, one must first think of the network unobscured by the effects of technology.
In chapter 2, “Time-Sharing and Virtualization,” I examine the prehistory of what we now call “cloud computing,” the idea that computer power — along with the software programs and networks associated with them — could be “piped” into a user’s home, like electricity or other utilities. This vision came out of the early 1960s, with the invention of a technology called time-sharing, which allowed the million-dollar cost of a computer to be shared and the computer multitasked. Though mostly forgotten, time-sharing not only invented the modern idea of a user — a personal subject that “owns” his or her data — but also positioned that user within a political economy that makes a user synonymous with his or her usage, and encourages users to take (even steal) computer resources for free. The freedom that results, however, is a deeply ambiguous one, for the same technologies that allow files and user accounts to be made private in the cloud — known as virtualization software — also represent a subtle form of control. This chapter uses the metaphor of an ancient technology, the sewer system, to understand how the cloud keeps each household’s private business private even as it extends the armature of the state or the corporation into individual homes.
Chapter 3, “Data Centers and Data Bunkers,” traces the cloud of data back to the data centers that store them. These massive warehouses at the heart of the cloud cost up to $1 billion each to build and contain virtually every form of data imaginable, including airline tickets, personnel records, streaming video, pornography, and financial transactions. Interestingly, a number of data centers enclose data inside repurposed Cold War military bunkers. This suggests an unexpected consequence: the sovereign’s rationale for a bunker or a keep, to defend an area of territory, has now become transported to the realm of data. Digital networks do not transcend territorial logic; even as data bunkers allow networks to be divided into logical zones of inside and outside, they raise the specter of attack from those that might be “outside” to network society, such as Chinese hackers or Iranian cyberwarfare specialists. By revisiting Paul Virilio’s classic text Bunker Archaeology, I suggest that the specter of a disaster that the cloud continually raises also carries within it a temporality of our imagined death. This temporality animates a recent series of digital preservation projects, such as the “digital genome” time capsule, intended to survive the “death of the digital.”
In chapter 4, “Seeing the Cloud of Data,” I continue the discussion of data-centric tools by examining the ways that companies, users, and states navigate the piles of data by “targeting” information. As I show, targeted marketing campaigns online come out of the same ideological apparatus as military targeting, and I take up this militarized aspect to offer a more complete picture of data mining. Two oppositional groups provide case studies for this chapter. First, I examine a group of radio-frequency hackers who started data-mining the 2011 NATO intervention in Libya, in effect turning war into a problem of “big data.” Next, I turn to artist/geographer Trevor Paglen, who has obsessively used data-gathering techniques to photograph what he calls “blank spots on the map” — reconnaissance satellites used to spy on us, and covert desert airstrips used to run the CIA’s extraordinary rendition program, which secretly transferred foreign prisoners to torture sites outside of US soil. However much these “hacktivists” may posit a mode of countersurveillance, their tactics mimic a militarized state’s own operations; for this reason, these tactics may only end up reanimating the very structures of power that they purport to expose or overturn. This duplicative structure is part of a logic I call the sovereignty of data, which co-opts the opposition’s participation and gives practices such as torture and extraordinary rendition new life within the cloud.
Writing about violence within the cloud is difficult because the violence is largely displaced elsewhere. Rob Nixon, for example, has incisively pointed to the contrast between the supposedly real-time nature of the Gulf War and what he terms the “slow violence” of its aftermath.30 Rob Nixon, Slow Violence and the Environmentalism of the Poor. The story of the Gulf War — so critics and writers tell us — is one of networks and screens, of virtually instantaneous strikes by computer, of what one journalist dubbed the “Hundred Hour War.” Yet the story of the depleted uranium munitions released during combat (and their 4.5-billion-year lifespan) is not as easy to tell. Environmental problems, Nixon concludes, are difficult to narrate through conventional forms: “Stories — tightly framed in time, space, and point of view — are convenient places for concealing bodies.”31 Ibid., 200.
Nixon’s insight is applicable beyond the realm of environmental activism (and the environmental footprint of the cloud); the cloud, too, enacts its own form of slow violence. The constantly changing platforms of digital technology seem to call out for, as Peter Lunenfeld puts it, “doing theory and criticism in real time” — namely, responding to each event of digital culture as quickly as possible.32 Peter Lunenfeld, User. Yet the cloud, this book argues, causes a double displacement: the displacement of place itself from sight, but also a temporal displacement. A Prehistory of the Cloud attempts to reframe this discussion. It places the digital cloud in dialogue with objects and spaces on the other side of the analog/digital divide. Both interacting with and stepping back from the current moment, it explores the limits and potentialities of a slower form of writing that takes seriously the temporal disjunctions and dislocations within the idea of the cloud. In doing so, I hope to shed light on the hybrid construction I have called the sovereignty of data, a construction that joins war and security, users and use value, participation and opposition. As I argue in this book, looking at the cloud’s technology is not enough to tell the story. In truth, the technology has produced the means of its own interpretation, the lens (“cloud”) through which power is read, the crude map by which we understand the world. That is the tail wagging the dog. Begin with space, power, and the combination we call history. Then the cloud will follow.
We are the voluntary prisoners of the cloud; we are being watched over by governments we did not elect.
Wael Ghonim, Google’s Egyptian executive, said: “If you want to liberate a society just give them the internet.”1 Wael Ghonim, cited in Rebecca MacKinnon, Consent of the Networked: The Worldwide Struggle for Internet Freedom (New York: Basic Books, 2012), xx. But how does one liberate a society that already has the internet? In a society permanently connected through pervasive broadband networks, the shared internet is, bit by bit and piece by piece, overshadowed by the “cloud.”
The cloud, as a planetary-scale infrastructure, was first made possible by an incremental rise in computing power, server space, and trans-continental fiber-optic connectivity. It is a by-product and parallel iteration of the global (information) economy, enabling a digital (social) marketplace on a worldwide scale. Many of the cloud’s most powerful companies no longer use the shared internet, but build their own dark fiber highways for convenience, resilience, and speed.2 Brandon Teddler, “To The Cloud!,” Ezine Mark, February 20, 2012. In the cloud’s architecture of power, the early internet is eclipsed.
A nondescript diagram in a 1996 MIT research paper titled “The Self-governing Internet: Coordination by Design” showed a “cloud” of networks situated between routers linked up by Internet Protocol (IP).3 Sharon Gillett and Mitchell Kapor, “The Self-governing Internet: Coordination by Design,” presented at Coordination and Administration of the Internet Workshop at Kennedy School of Government, Harvard University, Boston, MA, September 8–10, 1996. This was the first reported usage of the term “cloud” in relation to the internet. The paper talked about a “confederation” of networks governed by common protocol. A 2001 New York Times article reported that Microsoft’s .NET software programs did not reside on any one computer, “but instead exist in the ‘cloud’ of computers that make up the internet.”4 John Markoff, “An Internet Critic Who Is Not Shy About Ruffling the Big Names in High,” New York Times, April 9, 2001. But it wasn’t until 2006 that the notion of “cloud computing” was defined by Google CEO Eric Schmidt:
I don’t think people have really understood how big this opportunity really is. It starts with the premise that the data services and architecture should be on servers. We call it cloud computing—they should be in a “cloud” somewhere. And that if you have the right kind of browser or the right kind of access, it doesn’t matter whether you have a PC or a Mac or a mobile phone or a BlackBerry or what have you—or new devices still to be developed—you can get access to the cloud. There are a number of companies that have benefited from that. Obviously, Google, Yahoo!, eBay, Amazon come to mind. The computation and the data and so forth are in the servers.5 Eric Schmidt, “Conversation with Eric Schmidt Hosted by Danny Sullivan,” Search Engine Strategies Conference, August 9, 2006.
The internet can be compared to a patchwork of city-states, or an archipelago of islands. User data and content materials are dispersed over different servers, domains, and jurisdictions (i.e., different sovereign countries). The cloud is more like Bismarck’s unification of Germany, sweeping up formerly distinct elements, bringing them under a central government. As with most technology, there is a sense of abstraction from prior experiences; in the cloud the user no longer needs to understand how a software program works or where his or her data really is. The important thing is that it works.
In the early 1990s, a user would operate a “personal home page,” hosted by an Internet Service Provider (ISP), usually located in the country where that user lived. In the early 2000s, free online services like Blogspot and video sites like YouTube came to equal and surpass the services of local providers. Instead of using a paid-for local e-mail account, users would switch to a service like Gmail. In the late 2000s and the early 2010s this was complemented, if not replaced, by Facebook and other social media, which integrate e-mail, instant messaging, FTP (File Transfer Protocol), financial services, and other social interaction software within their clouds. Cloud-based book sales, shopping, and e-reading have brought about the global dominance of Amazon, the world’s biggest cloud storage provider and the “Walmart of the Web.”6 “Amazon: The Walmart of the Web,” The Economist, October 1, 2011. By 2015, combined spending for public and private cloud storage will be $22.6 billion worldwide.7 Nathan Eddy, “Cloud Computing to Drive Storage Growth: IDC Report,” eWeek, October 21, 2011. Given this transition, it is no exaggeration to proclaim an exodus from the internet to the cloud. The internet’s dispersed architecture gives way to the cloud’s central model of data storage and management, handled and owned by a handful of corporations.
The coming of the cloud is spelled out by Aaron Levie, founder and CEO of Box, one of Silicon Valley’s fastest growing cloud storage providers. As Levie states, the biggest driver of the cloud is the ever-expanding spectrum of mobile devices—iPhones, iPads, Androids, and such—from which users tap into the cloud and flock around its server spine:
If you think about the market that we’re in, and more broadly just the enterprise software market, the kind of transition that’s happening now from legacy systems to the cloud is literally, by definition, a once-in-a-lifetime opportunity. This is probably going to happen at a larger scale than any other technology transition we’ve seen in the enterprise. Larger than client servers. Larger than mainframes.8 Nick Bilton, “Data storage server, and founder, move quickly.” International Herald Tribune, August 28, 2012.
Google, one of the world’s seven largest cloud companies, has recently compared itself to a bank.9 Barb Darrow, “Amazon Is No. 1. Who’s Next in Cloud Computing?,” GigaOM, March 14, 2012. Cade Metz, “Google: ‘We’re Like a Bank for Your Data,’” Wired, May 29, 2012. That comparison is apt. If data in the cloud is like money in the bank, what happens to it while it resides “conveniently” in the cloud?
Where and by whom sites are registered and data is hosted matters a great deal in determining who gains access to and control over the data. For example, all data stored by US companies (or their subsidiaries) in non-US data centers falls under the jurisdiction of the USA Patriot Act, an anti-terrorism law introduced in 2001.10 Zack Whittaker, “Summary: ZDNet’s USA PATRIOT Act Series,” ZDNet, April 27, 2011. This emphatically includes the entire US cloud—Facebook, Apple, Twitter, Dropbox, Google, Amazon, Rackspace, Box, Microsoft, and many others. Jeffrey Rosen, a law professor at George Washington University, has argued that the Patriot Act, rather than being used to investigate potential terrorists, is mostly used to spy on innocent Americans.11 Jeffrey Rosen, “Too Much Power,” New York Times, September 8, 2011. But the people being watched need not even be Americans. Via the cloud, citizens across the world are subject to the same Patriot Act powers—which easily lend themselves to misuse by authorities. Matthew Waxman of the Council on Foreign Relations outlines the situation:
These kinds of surveillance powers have historically been prone to abuse. Some of the legal restrictions on surveillance that the Patriot Act was designed to roll back were actually the direct product of abuses by the FBI, the CIA, and other government agencies. During the 1960s and ‘70s, national security intelligence powers were used by government agents to spy on political opposition [and] cast abusively wide nets. That legacy of abuse has raised a lot of concerns about whether there is adequate oversight with respect to these new surveillance powers.12 Matthew C. Waxman, “Extending Patriot Act Powers,” interview by Jonathan Masters, www.cfr.org, February 22, 2012.
The sociologist Saskia Sassen adds to this perspective:
Through the Patriot Act [...] the government has authorized official monitoring of attorney-client conversations, wide-ranging secret searches and wiretaps, the collection of Internet and e-mail addressing data [...] All of this can be done without probable cause about the guilt of the people searched—that is to say, the usual threshold that must be passed before the government may invade privacy has been neutralized. This is an enormous accrual of powers in the administration, which has found itself in the position of having to reassure the public that it can be ‘trusted’ not to abuse these powers. But there have been abuses.13 Saskia Sassen, Territory, Authority, Rights: From Medieval to Global Assemblages, Princeton and Oxford: Princeton University Press, 2006 (2008), 180.
Microsoft was the first cloud company to publicly confirm Patriot Act access to its data stored outside the US.14 Paul Taylor, “Privacy Concerns Slow Cloud Adoption,” Financial Times, August 2, 2011. In August 2011, Google also confirmed that its data stored overseas is subject to “lawful access” by the US government.15 Lucian Constantin, “Google Admits Handing over European User Data to US Intelligence Agencies,” Softpedia, August 8, 2011. A 2012 white paper by the law and privacy firm Hogan Lovells examined these findings, concluding that while the Patriot Act does give the US government access to the cloud, many other governments enjoy similar forms of access under their own laws—and further, that using the “location” of a cloud server to determine legal protection was a mistaken idea altogether.16 Winston Maxwell and Christopher Wolf, “A Global Reality: Governmental Access to Data in the Cloud,” A Hogan Lovells White Paper, May 23, 2012. The paper noted the widespread use of so-called Mutual Legal Assistance Treaties (MLATs), which streamline the exchange between countries of data needed for investigative purposes. Apart from treaty-backed requests, “informal relationships between law enforcement agencies […] allow for governmental access to data in the ‘possession, custody, or control’ of cloud service providers over whom the requesting country does not otherwise have jurisdiction.” The legality of such informal relationships was not examined by the study. Nor did it catalog any recorded abuses of the Patriot Act, or discuss reports by two US Senators about a “secret interpretation” of the law, which would give the FBI far-reaching extra surveillance powers that the public is unaware of.17 Mike Masnick, “Senators Reveal That Feds Have Secretly Reinterpreted The PATRIOT Act,” Techdirt, May 26, 2011.
One of the most powerful instruments the US government uses to look into the so-called “non-content information” of ISPs and cloud providers is the National Security Letter (NSL). NSLs demand specific information about users and are issued directly by the FBI. After the Patriot Act was signed into law, the number of letters issued rose exponentially: from 8,500 in 2000 to 39,346 in 2003. An NSL automatically includes a gag order that prohibits the recipient from notifying users about the request. The FBI need only assert that the information sought is “relevant” to an investigation.18 Kim Zetter, “Unknown Tech Company Defies FBI In Mystery Surveillance Case,” Wired, March 14, 2012. The crucial question in the Hogan Lovells report—“Are government orders to disclose customer data subject to review by a judge?”—is answered with “yes” in Australia, Canada, Denmark, France, Germany, Ireland, Japan, Spain, the United Kingdom, and the US. However, in the US this condition is only met if the cloud provider, after receiving the NSL, first challenges its built-in gag order. Only when the NSL is unsealed by a judge can the cloud provider inform the user about the existence of the letter. For the Hogan Lovells report, this procedure counts as judicial review.
In Egypt, during the revolution, Facebook and Twitter played the role of subversive, uncensorable alternative media—in part because the servers of these wildly popular services were beyond the reach of local authorities. Indeed, Hosni Mubarak’s best bet to fend off the power of the internet was to switch it off entirely. To do so, “just a few phone calls probably sufficed.”19 Ryan Singel, “Egypt Shut Down Its Net With a Series of Phone Calls.” Wired, January 28, 2011. While Mubarak’s ultima ratio as a sovereign ruler over Egyptian soil proved sufficient to wall the country off from the network, the violent crudeness of this act also demonstrated the dictator’s much more substantial lack of power over the network’s larger infrastructure. Sovereign control over the cloud, in contrast to authoritarian power-mongering, is a sophisticated affair. One might draw a very different map here: the global spread of the US cloud, for example, results in a kind of “super-jurisdiction” enjoyed by its host country.
Super-jurisdiction can be seen in action in the 2012 seizure of Megaupload.com by the US Department of Justice (DOJ). Megaupload.com was a Hong Kong-based internet enterprise paying loving tribute to all kinds of Hollywood films (to say it politely). The site offered, according to its own self-description, “no-registration upload and sharing of files up to 1 gigabyte.” It was seized in January 2012 by the DOJ and the FBI, backed by film industry copyright claimants. Megaupload.com stands accused of generating “more than $175 million in criminal proceeds” and causing “more than half a billion dollars in harm to copyright owners.”20 Claire Connelly and Lee Taylor, “FBI Shuts down Megaupload.com, Anonymous Shut down FBI,” News.com.au, January 20, 2012.
The site’s founder, thirty-seven-year-old internet millionaire Kim Dotcom, and three of his associates were brought to a New Zealand court to face extradition to the US. They’d been living like self-styled oligarchs. In a gesture toward transparency, they said they had “nothing to hide.”21 Ibid. In particular, Dotcom himself embodies the absurd saga of a contemporary, deeply self-parodying internet hooligan—a legal black hole turned persona, unprepared in every way to be “famous,” yet accepting the challenge wholeheartedly. Megaupload.com was, at least in its own self-imagination, nothing more than a technical conduit between those who upload and those who download, its content-indiscriminate policy a typical example of laissez-faire anarcho-capitalism. The US government’s prosecution of the site remains highly debated, because the DOJ interpreted the site’s global user base as a willful conspiracy to break US law. As Jennifer Granick at Stanford Law notes, the DOJ referenced “unknown parties” (i.e., the users of Megaupload.com) as members of a conspiracy to conduct a crime in the US. Granick notes that such users “were located all over the world, and may or may not have acted willfully.” Indeed, with Megaupload.com, the government alleges “an agreement to violate a US civil law, including by many people who are not subject to US rules.” As Granick then asks, “Does the United States have jurisdiction over anyone who uses a hosting provider in the Eastern District of Virginia? What about over any company that uses PayPal?”22 See Jennifer Granick, “Megaupload: A Lot Less Guilty Than You Think,” Center for Internet and Society at Stanford Law School, January 26, 2012. Indeed, these are the sorts of questions prompted by super-jurisdiction.
Super-jurisdiction means that the law of one country can, through various forms of cooperation and association implied by server locations and network connections, be extended into and enacted in another. The US, as a result of its unique position in managing the internet’s core, also has jurisdiction over all so-called top-level domains, no matter where they are hosted and by whom. Dot-com and dot-net domain names, for instance, must be registered through registries operated by VeriSign, a Virginia-based company. Using its jurisdiction over the domain name registry, in 2012 the DOJ seized Bodog.com, a gambling website operated from Canada. A US Customs Enforcement spokesperson confirmed to Wired that the US had in a similar manner seized 750 different domain names of sites it believed committed intellectual property theft.23 David Kravets, “Uncle Sam: If It Ends in .Com, It’s .Seizable,” Wired, March 6, 2012. Michael Geist, an internet law professor at the University of Ottawa, observes that, indeed, “All Your internets Belong to US”:
The message from the [Bodog] case is clear: all dot-com, dot-net, and dot-org domain names are subject to US jurisdiction regardless of where they operate or where they were registered. This grants the US a form of “super-jurisdiction” over internet activities since most other countries are limited to jurisdiction with a real and substantial connection. For the US, the location of the domain name registry is good enough.24 Michael Geist, “All Your Internets Belong to US, Continued: The Bodog.com Case,” michaelgeist.ca, March 6, 2012.
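The mechanism Geist describes can be reduced to a simple lookup: which country’s courts can reach a domain depends on who operates the top-level domain’s registry, not on where the site or its servers are located. The toy sketch below illustrates this; the operator table is a hand-written, simplified snapshot of the early-2010s arrangement, not authoritative registry data.

```python
# Illustrative only: a hand-written, simplified map of TLD registry
# operators and their home jurisdictions, circa 2012 (not live data).
REGISTRY_OPERATORS = {
    "com": ("VeriSign", "United States"),
    "net": ("VeriSign", "United States"),
    "org": ("Public Interest Registry", "United States"),
    "ca":  ("CIRA", "Canada"),
}

def seizure_jurisdiction(domain: str) -> str:
    """Return the country whose courts can reach a domain via its registry."""
    tld = domain.rsplit(".", 1)[-1].lower()
    _operator, country = REGISTRY_OPERATORS[tld]
    return country

# Bodog.com was operated from Canada, but the dot-com registry sits in
# Virginia, so US courts could order its seizure:
print(seizure_jurisdiction("bodog.com"))
```

The point of the sketch is that the server’s physical location never enters the function at all: the TLD alone decides.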
The various technical components that enable global communication—server, network, and client—all lend themselves to surveillance. Access Controlled, an MIT Press handbook on internet surveillance and censorship, states that “the quest for information control is now beyond denial.”25 Ronald Deibert, John Palfrey, Rafal Rohozinski, Jonathan Zittrain (eds.), Access Controlled: The Shaping of Power, Rights, and Rule in Cyberspace (Cambridge, Massachusetts: The MIT Press, 2010), 6. It mentions the so-called “security first” norm, by which the combined threats of terrorism and child pornography create a mandate for the state to police the net without restriction. As the authors assert in their conclusion, “The security-first norm around internet governance can be seen, therefore, as but another manifestation of these wider developments. Internet censorship and surveillance—once largely confined to authoritarian regimes—is now fast becoming the global norm.”26 Ibid., 11. Indeed, if a lawsuit brought by the Electronic Frontier Foundation (EFF) against AT&T is any indication, the US government seems determined to expand its access to electronic communication. The EFF’s star witness in the case was Mark Klein, a former AT&T technician who claimed to have seen, in 2002, the creation and ongoing use of a dedicated private room where the National Security Agency (NSA) had “set up a system that vacuumed up internet and phone-call data from ordinary Americans with the cooperation of AT&T.”27 Ellen Nakashima, “A Story of Surveillance,” The Washington Post, November 7, 2007. Klein said the system allowed the government full surveillance of not just the AT&T customer base, but that of sixteen other companies as well.28 Ibid. The US government had the case against the telecommunications provider dismissed by asserting the state secrets privilege.
The government has likewise secured the dismissal of cases against itself and against other telecom companies that assisted with similar endeavors, including Sprint, Nextel, and Verizon.29 Dan Levine, “US Court Upholds Telecom Immunity for Surveillance,” Thomson Reuters, December 29, 2011. If the allegations are true, according to Access Controlled, “they show that the United States maintains the most sophisticated internet surveillance regime.”30 Ronald Deibert et al., Access Controlled, 381.
As technologies expand, the governance, legislation, and legalities of surveillance become increasingly complicated. In May 2012, CNET reported that the general counsel of the FBI had drafted a proposed law that would require social-networking sites, e-mail and voice-over-IP (VoIP) providers, as well as instant messaging platforms, to provide a backdoor for surveillance—a demand from the US government for cloud companies to “alter their code to ensure their products are wiretap-friendly.”31 Declan McCullagh, “FBI: We Need Wiretap-Ready Web Sites – Now,” CNET, May 4, 2012. In 2012, the UK government announced the installation—in collaboration with telecom companies and ISPs—of so-called “black boxes” which would retrieve and decrypt communications from Gmail and other cloud services, storing the non-content data from these communications.32 Geoff White, “‘Black Boxes’ to Monitor All Internet and Phone Data,” Channel 4, June 29, 2012. But the cloud is nothing like a national telephone network. Whenever the cloud is “wiretapped,” authorities listen in on a global telecommunications oracle; the data of everyone using that cloud, regardless of where and who they are, and regardless of whether or not they are suspected of a crime, is at least in principle at the disposal of law enforcement.
Most journalism routinely criticizes (or praises) the US government for its ability to spy on “Americans.” But something essential is not mentioned here—the practical ability of the US government to spy on everybody else. The potential impact of surveillance of the US cloud is as vast as the impact of its services—which have already profoundly transformed the world. An FBI representative told CNET about the gap the agency perceives between the phone network and advanced cloud communications for which it does not presently have sufficiently intrusive technical capacity—the risk of surveillance “going dark.” The representative cited “national security” to demonstrate how badly the agency needs such cloud wiretapping, inadvertently revealing that the state secrets privilege—once a legal anomaly, now routine—will likely be invoked to shield such extensive and increased surveillance powers from public scrutiny.
Users’ concerns about internet surveillance increased with the proposed Stop Online Piracy Act (SOPA), which was introduced into the US House of Representatives in late 2011. How the government would police SOPA became a real worry, with the suspicion that the enforcement method of choice would be standardized deep packet inspections (DPI) deployed through users’ internet service providers—a process by which the “packets” of data in the network are unpacked and inspected.33 Alex Wawro, “What Is Deep Packet Inspection?,” PC World, February 1, 2012. Through DPI, law enforcement would detect and identify illegal downloads. In 2010, before SOPA was even on the table, the Obama Administration sought to enact federal laws that would force communications providers offering encryption (including e-mail and instant messaging) to provide access by law enforcement to unencrypted data.34 Declan McCullagh, “Report: Feds to Push for Net Encryption Backdoors,” CNET, September 27, 2010. It is, however, worth noting that encryption is still protected as “free speech” by the First Amendment of the US Constitution—further complicating, but not likely deterring, attempts to break the code. One way of doing so consists of surrounding encryption with the insinuation of illegality. In 2012, the FBI distributed flyers to internet cafe business owners, asking them to be wary of “suspicious behavior” by guests, including the “use of anonymizers, portals or other means to shield IP address” and “encryption or use of software to hide encrypted data.” In small print, the FBI added that each of these “indicators,” taken by itself, constituted lawful conduct.35 https://publicintelligence.net/fbi-suspicious-activity-reporting-flyers/
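The mechanics behind the phrase “unpacked and inspected” are simpler than the acronym suggests: ordinary routing reads only a packet’s header (addresses, protocol number), while deep packet inspection also opens the payload and matches it against content signatures. A minimal sketch, using a hand-built IPv4 packet and an invented BitTorrent “signature” purely for illustration:

```python
import struct
import socket

def inspect_packet(raw: bytes) -> dict:
    """Parse an IPv4 packet; shallow routing stops at the header,
    the "deep" step also reads the payload itself."""
    # The fixed IPv4 header is 20 bytes:
    # version/IHL, TOS, total length, ID, flags/fragment,
    # TTL, protocol, checksum, source IP, destination IP.
    version_ihl, _, total_len, _, _, _ttl, proto, _, src, dst = struct.unpack(
        "!BBHHHBBH4s4s", raw[:20]
    )
    ihl = (version_ihl & 0x0F) * 4          # header length in bytes
    payload = raw[ihl:total_len]
    return {
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
        "protocol": proto,                   # 6 = TCP, 17 = UDP
        # The DPI step: match the payload against a content signature.
        "flagged": b"BitTorrent" in payload,
    }

# A hand-built demonstration packet (IPv4, TCP, tell-tale payload).
payload = b"\x13BitTorrent protocol"
header = struct.pack(
    "!BBHHHBBH4s4s",
    0x45, 0, 20 + len(payload), 0, 0, 64, 6, 0,
    socket.inet_aton("10.0.0.1"), socket.inet_aton("93.184.216.34"),
)
result = inspect_packet(header + payload)
```

The privacy stakes follow directly from the last line: the header identifies who is talking to whom, but the `flagged` field can only be computed by reading what they said.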
“Real name” requirements by the cloud-based social networking platforms Facebook and Google+ expressly attack anonymity and pseudonymity online, affecting the very foundations of political speech. Real name directives require users to register with a service using the name that is in their passport. The reasons given by cloud services for such real name requirements are vague—perhaps for fear of sounding too directly authoritarian. The preferred route, instead, is that of fatherly advice. Facebook claims that it has a real name policy “so that you always know who you’re connecting with,” while Google states that it requires real names so “that the people you want to connect with can find you.”36 “Facebook’s Name Policy – Facebook Help Center,” facebook.com. “Google+ Page and Profile Names – Google+ Help,” plus.google.com. These explanations gesture towards a conception of normative social arrangements—requiring that you use the same name that you’d use among your friends, family, or coworkers. Alexis Madrigal points out a certain irony in the Google+ real name requirement:
The kind of naming policy that Facebook and Google Plus have is actually a radical departure from the way identity and speech interact in the real world. They attach identity more strongly to every act of online speech than almost any real world situation does.37 Alexis Madrigal, “Why Facebook and Google’s Concept of ‘Real Names’ Is Revolutionary,” The Atlantic, August 5, 2011.
Cloud providers such as Amazon use real name registration as a mechanism for accountability. Though Amazon still allows users to use a “pen name,” the trademarked “real name” attribution is advertised as having the ability to “potentially increase your reputation in the community” as a retailer, seller, or reviewer.38 “Amazon.com Help: Pen Names and Real Names,” amazon.com. Some see the real name badge as a step towards “fixing their flawed [and] exploitable review system” for reviewing books—a system notoriously dominated by biased “anonymous” users, often thought to be, and sometimes proven to be, other authors, their family members, or the books’ publishers.39 Mark T. Kieczorek, “Amazon Real Name Badge,” Maktaw, July 23, 2004. Amy Harmon, “Amazon Glitch Unmasks War Of Reviewers,” New York Times, February 14, 2004. Though Amazon’s reasoning for promoting the use of real names is more explicit than that of Facebook and Google+, one can imagine the marketing benefits of a synchronized real name system between social media and retail websites—and the connection that such a synchronicity might have with the government. Such requirements can be seen as aligned with plans of the US government to introduce a universal “trusted identity” or “internet ID” system for US citizens, a commission the White House granted to the US Commerce Department in 2011. According to White House Cybersecurity Coordinator Howard Schmidt, the effort entails nothing less than creating an “identity ecosystem” for the internet.40 Declan McCullagh, “Obama to Hand Commerce Dept. Authority over Cybersecurity ID,” CNET, January 7, 2011.
Cass Sunstein, the Obama Administration’s chief internet advisor, has recently argued for government policy against the spread of “rumors” on the internet; as noted by the New Yorker, one of the most persistent of such rumors was the theory that President Obama had been born in Kenya—and thus holds his presidency illegally.41 Elizabeth Kolbert, “The Things People Say,” The New Yorker, November 2, 2009. Sunstein believes that certain properties of the internet gear public speech toward the uninformed forwarding and circulation of rumors and conspiracy theories. In “echo chambers” and through “cybercascades,” one-sided opinion would spread rapidly and widely in the network without rebuttal. Supposedly balanced reporting by professional journalists in the mainstream media now has to compete for attention with, and is often surpassed by, every other blog post, Facebook update, or tweet. The effortless ability for all Internet users to compose and live on a “Daily Me”—a news diet catered to fit and maintain an individual, already established, self-referential set of beliefs—would result in a fragmentation of the general public into factions which no longer expose themselves to views held by other factions. Sunstein claims that under such fragmentation, “diverse speech communities” are created “whose members talk and listen mostly to one another.” And,
When society is fragmented in this way, diverse groups will tend to polarize in a way that can breed extremism and even hatred and violence. New technologies, emphatically including the Internet, are dramatically increasing people’s ability to hear echoes of their own voices and to wall themselves off from others.42 Cass R. Sunstein, Republic.com 2.0, (Princeton and Oxford, Princeton University Press, 2007), 44.
Sunstein is concerned with how rumors may impair the effectiveness of government and undermine its legitimacy. In early 2008, he and a co-author published a paper on conspiracy theories around the 9/11 attacks. In the paper, Sunstein recommended that “Government agents (and their allies) might enter chat rooms, online social networks, or even real-space groups and attempt to undermine percolating conspiracy theories by raising doubts about their factual premises, causal logic or implications for political action.”43 Cass R. Sunstein, Adrian Vermeule, “Conspiracy Theories.” Harvard University Law School Public Law & Legal Theory Research Paper Series, Paper No. 199, University of Chicago Law School, 2008, 22.
Nowhere is the coercive government stance toward online rumors as clear as in China. Beijing put forth regulations requiring users to register on social media sites with their “real name identities” by March 2012—regulations comparable to policies already spontaneously embraced by Facebook and Google. Sites including Sina Weibo, one of the country’s largest microblogging sites, have begun implementing these regulations, which also forbid users from making statements against the state’s honor or statements that may disrupt public order.44 Michael Kan, “Beijing to Require Users on Twitter-like Services to Register With Real Names,” PC World, December 16, 2011. Around the same time, social media sites across the country flared up over the ouster of political leader Bo Xilai from the Communist Party. The Chinese police swiftly detained six people and shut down sixteen websites over “rumors” surrounding the incident, including claims that military vehicles were entering Beijing.45 Michael Bristow, “China Arrests Over Coup Rumours,” BBC News, March 31, 2012. David Eimer, “China Arrests Six Over Coup Rumours,” The Telegraph, March 31, 2012.
The increasing prominence that cloud-based internet services, social media, and VoIP technologies now enjoy over legacy communication tools shows in the new, virtually cost-free forms of organization they enable. For social movements relying on collective action, this factor has proven to be key. Unsurprisingly, when social media platforms are suddenly “switched off,” the ability of such movements to organize can be severely affected. Facebook, in the wake of nationwide anti-austerity protests in the UK in February 2011, deleted the profiles of dozens of political groups preparing to take part in further protests. In doing so, Facebook effectively disabled lawful political activists, who had, for obvious reasons, moved their coordination to the cloud. The reason for the purge is still not known and likely never will be. All the social networking behemoth could utter to justify its behavior was cryptic technospeak: profiles had “not been registered correctly,” as a Facebook spokeswoman explained.46 Shiv Malik, “Facebook Accused of Removing Activists’ Pages,” The Guardian, April 29, 2012. In 2010, UK Prime Minister David Cameron and other Conservative politicians met in London with Facebook founder Mark Zuckerberg. Their admiration was mutual.47 Tim Bradshaw, “Mark Zuckerberg Friends David Cameron,” Financial Times, June 21, 2012.
Rebecca MacKinnon, a former CNN reporter and cofounder of the citizen media network Global Voices, asserts in her book Consent of the Networked that “we cannot understand how the internet is used unless we first understand the ways in which the internet itself has become a highly contested political space.”48 MacKinnon, Consent of the Networked, xxii. This applies equally, and equally urgently, to the cloud.
The combined rights to a free flow of information, freedom of expression, and freedom from censorship have been described as a compound right to “internet freedom.” Indeed, Google’s Wael Ghonim at the beginning of this story suggested that unhindered access to, and use of, the internet enables the liberation of a society.
Here, the free flow of information is blocked by clearly identifiable authoritarian despots. To not have internet freedom, one must be under the oppression of a shameless tyrant, or be living in a “closed society” where the free flow of information is not sufficiently appreciated just yet. On January 21, 2010, US Secretary of State Hillary Clinton delivered a speech on US foreign policy and internet freedom, highlighting exactly this view. Clinton assured her audience in Washington, D.C. that “As I speak to you today, government censors are working furiously to erase my words from the records of history.”49 “Internet Freedom.” The prepared text of US Secretary of State Hillary Rodham Clinton’s speech, delivered at the Newseum in Washington, D.C. Foreign Policy, January 21, 2010. Evgeny Morozov, a US-based, Belarusian-born internet scholar, rightly criticized Clinton’s “anachronistic view of authoritarianism.” As Morozov explained, “I didn’t hear anything about the evolving nature of internet control (e.g. that controlling the internet now includes many other activities—propaganda, DDoS attacks, physical intimidation of selected critics/activists). If we keep framing this discussion only as a censorship issue, we are unlikely to solve it.” He went on to criticize the double standards the State Department advertised with regard to online anonymity:
On the one hand, they want to crack down on intellectual property theft and terrorists; on the other hand, they want to protect Iranian and the Chinese dissidents. Well, let me break the hard news: You can’t have it both ways and the sooner you get on with “anonymity for everyone” rhetoric, the more you’ll accomplish. I am very pessimistic on the future of online anonymity in general—I think there is a good chance it will be eliminated by 2015—and this hesitance by the State Department does not make me feel any more optimistic.50 Evgeny Morozov, “Is Hillary Clinton launching a cyber Cold War?” Foreign Policy Net.Effect, January 21, 2010.
Still, the definition of internet freedom remains relatively opaque. One example of this vagueness is provided by Internetfreedom.org, a global consortium, which aims to “inform, connect, and empower the people in closed societies with information on a free internet.”51 http://www.internetfreedom.org/ Savetheinternet.com, a project of Free Press, breaks down internet freedom into somewhat more clearly defined categories—“net neutrality (wired and wireless), strong protections for mobile phone users, public use of the public airwaves and universal access to high-speed internet.”52 http://www.savetheinternet.com/sti-home The notion of net neutrality is as relevant to internet freedom as it is to the structure of the cloud, since the network’s management is in the hands of a patchwork of government agencies and private enterprises who may (or may not) hold a bias toward certain information on the network, or a bias toward one another. Coined by the legal scholar Tim Wu in 2003, network neutrality was originally meant to benchmark and promote the open nature of the internet for the sake of innovation—an “end-to-end” infrastructure unbiased towards its content. As Wu stated, “A communications network like the internet can be seen as a platform for a competition among application developers. Email, the web, and streaming applications are in a battle for the attention and interest of end-users. It is therefore important that the platform be neutral to ensure the competition remains meritocratic.”53 Tim Wu, “Network Neutrality, Broadband Discrimination.” Journal of Telecommunications and High Technology Law, Vol. 2, p. 141, 2003. Network neutrality applies to a decentralized architecture, with clearly divided roles between ISPs, broadband service providers, content providers, and services and applications on the network. It justifies a de facto gentlemen’s agreement through a joint economic interest in innovation and fair competition. 
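Wu’s “neutral platform” can be caricatured in a few lines of code: a neutral carrier drains its outbound queue first-in-first-out, while a biased one lets a favored application’s packets jump ahead. The traffic records and the “favored” policy below are invented purely for illustration:

```python
# Toy model of an ISP's outbound queue (illustrative only).
# Under a neutral policy every packet waits its turn; under a
# discriminatory policy, a favored application's traffic jumps ahead.
def drain(queue, favored=None):
    """Return packets in the order they would leave the link."""
    if favored is None:                       # neutral: first in, first out
        return list(queue)
    fast = [p for p in queue if p["app"] == favored]
    slow = [p for p in queue if p["app"] != favored]
    return fast + slow                        # biased: favored traffic first

traffic = [
    {"app": "email", "id": 1},
    {"app": "video", "id": 2},
    {"app": "voip",  "id": 3},
    {"app": "video", "id": 4},
]

neutral = drain(traffic)                  # arrival order preserved
biased = drain(traffic, favored="video")  # video packets sent first
```

The competition Wu describes is only “meritocratic” in the first case; in the second, the carrier, not the end-user, picks the winner.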
Indeed, political speech, too, can be considered part of a competition—one of ideas about how to (not) govern ourselves. Venture capitalist Joichi Ito expressed this view in 2003, when he wrote that such a competition of ideas “requires freedom of speech and the ability to criticize those in power without fear of retribution.”54 Joichi Ito, “Weblogs and Emergent Democracy.”
Insofar as the cloud’s software services use the shared internet, they can be considered applications run on the network. In this respect, network neutrality applies to the cloud (for example, the cloud is expected to consume more and more bandwidth in the network, possibly at the cost of other applications and services). Yet the concept of network neutrality is difficult to apply within the cloud, since some of the nominal conditions to institute neutrality are absorbed by the cloud’s combination of hosting and software services within a single black box. In the cloud, there is no longer a principled separation between the hosting of data, software, and client-side tools through which the data is handled and experienced. Indeed, the enormous success of the cloud is that it provides all of these things at once.55 On a related note, cyberlaw professor Jonathan Zittrain in 2008 wrote The Future Of The Internet—And How To Stop It, a book focusing on the rise of the web's “tethered appliances,” which, like North Korean radio sets, can be attuned to exclude or disregard certain content, and are designed not to be tinkered with by their users. Zittrain argued that such closed service appliances—emphatically including design icons like iPods and iPhones, for example—would in fact contribute to stifling the generative and innovative capacity of the web. See Jonathan Zittrain, The Future Of The Internet—And How To Stop It, New Haven and London: Yale University Press, 2008.
The Terms of Service of any cloud-based provider are a far cry from a binding agreement to net neutrality; they allow plenty of space for “cloudy bias.” For example, in August 2012, Apple banned “Drones+” from its App Store. This app, developed by NYU student Josh Begley, provides aggregated news on US drone strikes in Pakistan, Yemen and Somalia, and it includes a Google map on which the strikes are marked. The app also alerts the user whenever a new drone strike has occurred, and reports how many casualties it produced. Crucially, the information aggregated by the app is already completely public and freely available through various other sources, including The Guardian’s iPhone app. Apple demonstrated its cloudy parody of network neutrality in the ever-changing reasons it gave for rejecting Drones+. At first, Apple had problems with the Google logo appearing on the Google map. In July, the company stated in an e-mail that “The features and/or content of your app were not useful or entertaining enough, or your app did not appeal to a broad enough audience.” By August, Apple had changed its mind. The app contained “content that many audiences would find objectionable, which is not in compliance with the App Store Review Guidelines.” Indeed, the company eventually concluded that Drones+, which does not show users any images of actual drone-related bloodshed, was “objectionable and crude.”56 Christina Bonnington and Spencer Ackerman, “Apple Rejects App That Tracks U.S. Drone Strikes.” Wired, August 30, 2012. The New York Times wondered how on earth it could be that
the material Apple deemed objectionable from Mr. Begley was nearly identical to the material available through The Guardian’s iPhone app. It’s unclear whether Apple is treating the two parties differently because The Guardian is a well-known media organization and Mr. Begley is not, or whether the problem is that Mr. Begley chose to focus his app only on drone strikes.57 Nick Wingfield, “Apple Rejects App Tracking Drone Strikes.” New York Times Blog, August 30, 2012.
One can ponder endlessly why Apple banned Drones+ from its cloud but admitted The Guardian, and never be finished weighing the arguments. The point is that if its cloud operated under anything even remotely resembling network neutrality, Apple could not reasonably have rejected the app. The case also brings to mind Evgeny Morozov’s earlier warning that government censorship of the network is nowadays more sophisticated than a crude Mubarak-style internet kill switch. As Rebecca MacKinnon writes,
citizens are […] vulnerable to abuse of their rights to speech and assembly not only from government but also from private actors. In democracies, it follows that citizens must guard against violations of their digital rights by governments and corporations—or both acting in concert—regardless of whether the company involved is censoring and discriminating on its own initiative or acting under pressure from authorities.58 MacKinnon, ibid., 119.
It is highly unlikely that Drones+ was banned after direct government interference. But it isn’t difficult to imagine an informal, unstated, and rather intuitive constellation of interests between Apple—universally praised by US politicians on both sides of the aisle—and the US government. Shared interests and informal ties between private enterprise and government, based on mutual forms of “Like” rather than strict separations by Law, may account for de facto forms of censorship in the cloud, without any explicit order to enact it or any explicit obligation to justify it. In December 2010, Apple removed a WikiLeaks iPhone app from its store, citing its developer guidelines: “Any app that is defamatory, offensive, mean-spirited, or likely to place the targeted individual or group in harms [sic] way will be rejected.”59 Gregg Keizer, “Apple boots WikiLeaks app from iPhone store.” Computerworld, December 21, 2010. Around the same time, other US cloud companies, including Amazon and PayPal, stopped providing services to WikiLeaks.
The political, legal, and jurisdictional consequences of the cloud are slowly becoming apparent—right at the time when we are unlikely to withdraw from it. The cloud is just too good. We won’t stop using our iPhones, iPads, Androids, and Kindles. PayPal is still our frenemy. Happily the captives of the cloud, we will tweet our critiques of it, and Facebook-broadcast our outcries over its government back doors. But the story is not over yet. Will the anarcho-libertarian roots of the internet kick back at the cloud’s centralized architecture—or are they forever overrun by it? Has the cloud assumed its final form, or is there still a time and a place for surprises?
Is the future of the world the future of the internet?
—Julian Assange1 Julian Assange, in: “The Julian Assange Show: Cypherpunks Uncut (p.1)”, RT.com, July 29, 2012.
The cloud is the informational equivalent of the container terminal. It has a higher degree of standardization and scalability than most earlier forms of networked information and communication technology. From social networking to retail, from financial transactions to e-mail and telephone, these and many other services end up in the cloud. To be sure, the internet was already a wholesale channel for all types of information and media formats. As Milton Mueller notes, these “used to be delivered through separate technologies governed by separate legal and regulatory regimes,” while now having converged on the internet and its protocols.2 Milton Mueller, Networks and States: The Global Politics of Internet Governance (Cambridge, MA: The MIT Press, 2010), 9–10. In the cloud, such “digital convergence” goes even further: data becomes more effectively and thoroughly harvested, analyzed, validated, monetized, inspected, and controlled than on the open internet; its centralization is not just one of protocol, but also of location.
Many writers in recent decades have grappled with a seemingly borderless information society rooted in physical territories, and finding words for this condition has been key to most serious writing about information networks. For example, the term “space of flows” was coined in the 1990s by the Spanish sociologist Manuel Castells. It describes the spatial conditions of the global movement of goods, information, and money. According to Castells, the space of flows is
constituted by a circuit of electronic exchanges (micro-electronics-based devices, telecommunications, computer processing, broadcasting systems, and high-speed transportation—also based on information technologies) that, together, form the material basis for the processes we have observed as being strategically crucial in the network society.3 Manuel Castells, The Information Age: Economy, Society and Culture, Vol. I: The Rise of the Network Society (Malden, MA: Blackwell, 1996), 442.
Castells adds that this material basis is “a spatial form, just as it could be ‘the city’ or ‘the region’ in the organization of the merchant society or the industrial society.”4 Ibid. As legal scholars Jack Goldsmith and Tim Wu note in their study Who Controls the Internet?, beneath “formless cyberspace” rests “an ugly physical transport infrastructure: copper wires, fiberoptic cables, and the specialized routers and switches that direct information from place to place.”5 Jack Goldsmith and Tim Wu, Who Controls the Internet?: Illusions of a Borderless World (Oxford: Oxford University Press, 2006), 73. James Gleick describes the network’s data centers, cables, and switches as “wheel-works,” and the cloud as its “avatar.”6 James Gleick, The Information: A History, a Theory, a Flood (New York: Pantheon Books, 2011), 396. The cloud presupposes a geography where data centers can be built. It presupposes an environment protected and stable enough for its server farms to be secure, for its operations to run smoothly and uninterrupted. It presupposes redundant power grids, water supplies, high-volume, high-speed fiber-optic connectivity, and other advanced infrastructure. It presupposes cheap energy, as the cloud’s vast exhaust violates even the most lax of environmental rules. While data in the cloud may seem placeless and omnipresent, it is precisely for this reason that the infrastructure safeguarding its permanent availability is monstrous in size and scope. According to 2012 research by the New York Times, the cloud uses about thirty billion watts of electricity worldwide, roughly equivalent to the output of thirty nuclear power plants. About one quarter to one third of this energy is consumed by data centers in the United States. According to one expert, “a single data center can take more power than a medium-size town.”7 James Glanz, “The Cloud Factories: Power, Pollution and the Internet,” New York Times, September 22, 2012.
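The Times figures above can be checked with simple back-of-envelope arithmetic. The following sketch assumes a typical large nuclear plant outputs roughly one gigawatt; that figure is an assumption for illustration, not taken from the source:

```python
# Back-of-envelope check of the New York Times figures quoted above.
cloud_draw_watts = 30e9           # ~30 billion watts worldwide (NYT, 2012)
plant_output_watts = 1e9          # ~1 GW per large nuclear plant (assumed value)

equivalent_plants = cloud_draw_watts / plant_output_watts  # 30.0

# The article attributes roughly a quarter to a third of the total
# to data centers in the United States.
us_share_low = cloud_draw_watts * 0.25    # 7.5 GW
us_share_high = cloud_draw_watts / 3      # 10 GW

print(equivalent_plants, us_share_low, us_share_high)
```

On these assumed numbers, US data centers alone would draw the output of seven to ten nuclear plants.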
A data center is a large, flat, windowless building. Its architecture is foreshadowed by the suburban big boxes of Walmart and the like. Unlike megamalls, however, the precise locations of data centers are secret. Companies don’t usually advertise where their data centers are: it is their actual operations, not just the public image of those operations, that depend on them.8 When, for example, the Dutch filmmaker Marije Meerman, while working on a documentary about the financial crisis and the role of high-speed trading, wanted to find the data centers servicing the New York Stock Exchange, she found no official record of where they were located. Instead, Meerman tracked them down by looking for clues on the web sites of the construction companies that built them, and by flipping through the local files of New Jersey town hall meetings. Eventually, she mapped a ring of data centers around New York City. See Marije Meerman, lecture at Mediafonds, Amsterdam, January 12, 2012. To users, the cloud seems almost formless, or transparent—always available, ever-changing, hanging in the air, on screens, in waves, appearing and disappearing, “formless cyberspace” indeed. Yet at the core of this informational ghost dance lies a rudimentary physical form—steel and concrete infrastructure. If the enormous, energy-slurping data factories are the cloud’s true form, then these instances of the “space of flows” recall the medieval castle, the treasure chest, and the military base. They recall the political and military conflicts that have dominated geography throughout recorded history. As the architect and writer Pier Vittorio Aureli states,
Any power, no matter how supreme, totalitarian, ubiquitous, high-tech, democratic, and evasive, at the end has to land on the actual ground of the city and leave traces that are difficult to efface. This is why, unlike the web, the city as the actual space of our primary perception remains a very strategic site of action and counteraction. … But in order to critically frame the network, we would need to propose a radical reification of it. This would mean its transformation into a finite “thing” among other finite things, and not always see the network and its derivatives like something immaterial and invisible, without a form we can trace and change.9 Pier Vittorio Aureli. In Pier Vittorio Aureli, Boris Groys, Metahaven, and Marina Vishmidt, “Form.” In Uncorporate Identity (Baden: Lars Müller, 2010), 262.
In discussion with Aureli, the theorist Boris Groys asserts that the network is situated on (or below) a “defined territory, controlled by the military.” On those terms, Groys claims,
the goal of future wars is already established: control over the network and the flows of information running through its architecture. It seems to me that the quest for global totalitarian power is not behind us but is a true promise of the future. If the network architecture culminates in one global building then there must be one power that controls it. The central political question of our time is the nature of this future power.10 Boris Groys. In Pier Vittorio Aureli, Boris Groys, Metahaven, and Marina Vishmidt, “Form.” In Uncorporate Identity (Baden: Lars Müller, 2010), 263.
The early internet, in the hearts and minds of its idealists, was something of an anarchic place. John Perry Barlow prefigured the “cyber-idealist” position in his manifesto, “A Declaration of the Independence of Cyberspace,” published in 1996. Barlow asserts that the network and its inhabitants are independent of the old-fashioned rules and regulations of territorial states, which have “no sovereignty where we gather”:
Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here. … Our identities have no bodies, so, unlike you, we cannot obtain order by physical coercion. We believe that from ethics, enlightened self-interest, and the commonwealth, our governance will emerge. Our identities may be distributed across many of your jurisdictions.11 John Perry Barlow, “A Declaration of the Independence of Cyberspace,” Electronic Frontier Foundation, February 8, 1996.
Barlow’s manifesto declared cyberspace a socio-political commons. A space seemingly beyond gravity, beyond the state—“a world that all may enter without privilege or prejudice”; “a world where anyone, anywhere may express his or her beliefs.” Barlow’s ideas have resonated; indeed, Saskia Sassen mentions that “a distinct issue concerning the relation between the state and digital networks is the possibility for the average citizen, firm, or organization operating in the internet to escape or override most conventional jurisdictions.” Some of this thought, according to Sassen, is “still rooted in the earlier emphasis of the internet as a decentralized space where no authority structures can be instituted.”12 Saskia Sassen, Territory, Authority, Rights: From Medieval to Global Assemblages (Princeton/Oxford: Princeton University Press, 2006, 2008), 330. Milton Mueller comments that cyber-libertarianism “… was never really born. It was more a prophetic vision than an ideology or ‘ism’ with a political and institutional program. It is now clear, however, that in considering the political alternatives and ideological dilemmas posed by the global internet we can’t really do without it …”13 Milton Mueller, Networks and States, 268.
Confusingly and hilariously, one place where the rhetoric of borderless information freedom is most pervasive is in the cloud. The world’s most powerful information companies have inserted some of the internet’s foundational optimism into their mission statements. These tech giants talk about themselves as heartwarming charities. Every billionaire CEO is his own private Dalai Lama. Pseudo-liberal jabberwocky of assumed universal validity permeates the junkspace of mission statements, annual reports, and TED talks, especially when it comes to the cloud. Microsoft wants to help everyone around the world “realize their full potential.”14 “About Microsoft” Facebook aims to give “people the power to share and make the world more open and connected.”15 See Gillian Reagan, “The Evolution of Facebook’s Mission Statement.” New York Observer, July 13, 2009. Skype makes it “simple to share experiences with the people that matter to you, wherever they are.”16 “About Skype” And Instagram, bought by Facebook, envisions “a world more connected through photos.”17 Instagram FAQ
Cyber-utopianism never quite translated into a policy outlook. But it is still associated with a set of practices and spatial forms: online anonymity, cryptography, peer-to-peer (P2P) file sharing, Tor (The Onion Router) bridges, bulletproof hosting, and offshore data havens, to name a few examples. Michael Froomkin, a professor at the University of Miami School of Law, defined the data haven in 1996 as “the information equivalent to a tax haven.”18 A. Michael Froomkin, “Flood Control on the Information Ocean: Living With Anonymity, Digital Cash, and Distributed Databases,” University of Pittsburgh Journal of Law and Commerce 395 (1996). This “place where data that cannot legally be kept can be stashed for later use; an offshore web host” appears omnipresent in the cyber-libertarian universe of thought, and is indeed an extreme form of keeping information away from antagonistically minded states, corporations, or courts.19 See “Data Haven by Bruce Sterling from Islands in the Net,” technovelgy.com. The data haven is the spatial form that, at least theoretically, enables the evasion of sovereign power, while establishing an enclosed territory on the face of the earth. The data haven once provided a business model for the Principality of Sealand, an unrecognized mini-state founded by a British family on a former war platform in the North Sea. A notorious example in internet law, Sealand was, in the early 2000s, home to the servers of HavenCo, a startup providing offshore data hosting beyond the reach of any jurisdiction.20 The Principality of Sealand is discussed at length in our book, Uncorporate Identity. In our interview with hacker, cryptographer, and internet entrepreneur Sean Hastings, a self-styled inventor of Sealand’s data haven, Hastings declared that “the world needs a frontier. Every law, for good or ill, is an imposition on freedom.
The frontier has always been a place for people who disagree with the morality of current law to be able to get away from it.” Sean Hastings, in “The Rise And Fall Of The Data Haven, Interview with Sean Hastings,” Metahaven and Marina Vishmidt, eds., Uncorporate Identity (Baden: Lars Müller, 2010), 65. Later examples include Seasteading, an enterprise founded by Patri Friedman, designed to be a set of sovereign floating sea vehicles under ultraminimal governance without welfare or taxes. In 2011, Seasteading received funding from PayPal founder Peter Thiel. This “libertarian sea colony” was directly modeled after the Principality of Sealand, mixed with the gated community, the ranch, and the cruise ship. It is uncertain whether such physical havens, if ever realized in the first place, will escape their founding vision of conservative-libertarian frontier romanticism. See Cooper Smith, “Peter Thiel, PayPal Founder, Funds ‘Seasteading,’ Libertarian Sea Colony,” Huffington Post, August 19, 2011. HavenCo joined the dotcom boom with angel investment from Joi Ito (among others), who declared himself, as late as 2002, “a great fan of the concept.”21 Joi Ito, “Havenco Doing Well According BBC.” Quoted from Slashdot, July 10, 2002. Sealand’s fragile sense of half-tested nationhood would theoretically raise the bar for any opposing jurisdiction seeking to physically invade the offshore host. It would, indeed, demonstrate that cyber-libertarian ideology could take full control of an experimental country, and reform the internet in its name. James Grimmelmann, a professor at New York Law School, is skeptical about Sealand and HavenCo’s treatment of the law:
HavenCo was selling the end of law. ‘Third-world regulation’ was a euphemism for minimal regulation—or none at all. In its search for the lowest common denominator, HavenCo was willing to divide by zero.22 James Grimmelmann, “Sealand, HavenCo, and the Rule of Law,” Illinois Law Review 405 (2012), 460.
Grimmelmann also questions HavenCo’s effectiveness: “for most purposes, cheap commodity hosting on one side of the Atlantic or the other could easily outcompete Sealand’s more expensive boutique product in the middle of the North Sea.”23 Grimmelmann, “Sealand, HavenCo, and the Rule of Law,” 462. Grimmelmann rhetorically continues, “in an age of YouTube, BitTorrent, and the darknet, who needs HavenCo?”24 Grimmelmann, “Sealand, HavenCo, and the Rule of Law,” 463. Sealand was the flagship store of the internet’s anarcho-libertarian movement. The P2P BitTorrent platform The Pirate Bay famously tried to buy the ailing principality in 2007, offering citizenship.25 Cory Doctorow, “Pirate Bay trying to buy Sealand, offering citizenship.” boingboing.net, January 12, 2007. Michael Froomkin, in a June 2012 lecture at the Oxford Internet Institute, sketched out an arresting and slightly dystopian view of the current internet. It looked like a complete dichotomy—a dialectic between two opposing visions, each serving broadly similar goals by completely antithetical means.26 Michael Froomkin, “Internet Regulation at a Crossroads,” lecture at the Oxford Internet Institute, University of Oxford, June 2012, available on YouTube. The dialectic was between the “Cypherpunk Dream” and “Data’s Empire” (see diagram), where most of the anarchic (Barlow-style) stuff would be on the first side, and most of the cloud and surveillance on the other. Oddly, two cloud-based services, YouTube and Twitter, still appeared under the Cypherpunk Dream, presumably because of the pivotal role both services play in online activism and “getting the information out.” Froomkin connects Data’s Empire to a “renaissance of the state”—a re-emergence of state power over the network and the networked, perhaps, Froomkin suggests, in an unwitting reaction to a largely unrealized spectre of internet utopianism and anarchy.
While both the Cypherpunk Dream and Data’s Empire seem to have a business model, the former’s is Ayn Rand-style anarcho-capitalism, while the latter’s looks more like a digital form of industrial capitalism. The cloud, with its data factories, “scalability,” standardization, and centralization, indeed looks a little like an industrial revolution, yet one largely without a working class. This industrial complex actually dislikes most things that are small. Indeed, many of Silicon Valley’s cloud protagonists practice “acq-hiring”: promising startups are purchased only to get hold of their talented staff, while the product or concept that staff worked on gets discarded.27 See Liz Gannes, “The Vanity of the ‘Acqhire’: Why Do a Deal That Makes No Sense?” AllThingsD, August 10, 2012. One of the most arresting aspects of Froomkin’s scheme, however, is not the dialectic as such, but the reason he suggests for why it came about in the first place.
Cyber-libertarians, in hopes of evading the state’s grasp, assumed that its coercive powers would be constrained by jurisdictional and constitutional limits. As James Grimmelmann concisely puts it, “HavenCo simultaneously thumbed its nose at national law and relied on international law to protect Sealand.”28 Grimmelmann, “Sealand, HavenCo, and the Rule of Law,” 479. The possibility of states evading their own law or international law—going rogue, sub- or supra-legal in their handling of disruptive actors—was not considered. The dream of offshore information freedom reflects this vision. But state power can be deployed in a legal void, as was recognized early on by James Boyle, a professor of law at Duke University. In his 1997 text Foucault in Cyberspace, Boyle refuted much of the legalistic optimism of cyber-utopianism:
Since a document can as easily be retrieved from a server 5,000 miles away as one five miles away, geographical proximity and content availability are independent of each other. If the king’s writ reaches only as far as the king’s sword, then much of the content on the Net might be presumed to be free from the regulation of any particular sovereign.29 James Boyle, Foucault In Cyberspace: Surveillance, Sovereignty, and Hard-Wired Censors, 1997.
Even then, Boyle argued, de facto authority can still be exercised by the state, as
the conceptual structure and jurisprudential assumptions of digital libertarianism lead its practitioners to ignore the ways in which the state can often use privatized enforcement and state-backed technologies to evade some of the supposed practical (and constitutional) restraints on the exercise of legal power over the Net.30 Ibid.
Boyle stressed that state power doesn’t need to operate in ways that confront its constitutional limits. In a similar vein, Grimmelmann concludes that “no matter what a piece of paper labeled ‘law’ says on it, if it has no correspondence with what people do, it is no law at all.”31 Grimmelmann, “Sealand, HavenCo, and the Rule of Law,” 484. And indeed, it isn’t. A mere thirteen years after Foucault in Cyberspace, the controversial whistleblowing web site WikiLeaks found itself living proof of this, as it was embargoed by US companies.
WikiLeaks began in 2007 as an “uncensorable” web platform for the release of leaked documents. Through an anonymous drop box, users could upload digital files to WikiLeaks. The material would be published only if it had not been published before, and if it was of historical, ethical, or political significance. The site first used a wiki format, in which users and members would analyze and comment on the leaks. The wiki format has since been abandoned, but the name WikiLeaks remained. The site would be practically uncensorable by any government, since its hosting was set up in multiple jurisdictions. Its materials would be stored on servers in multiple countries, and thus be protected by the laws of those countries—a bit like a distributed version of the Sealand data haven. On July 29, 2009, as WikiLeaks published the high-exposure loan book of the bankrupt Kaupthing Bank, the site ran a discouraging note for its adversaries, which demonstrated the legal firewalls it had constructed for itself against state and corporate power:
No. We will not assist the remains of Kaupthing, or its clients, to hide its dirty laundry from the global community. Attempts by Kaupthing or its agents to discover the source of the document in question may be a criminal violation of both Belgium source protection laws and the Swedish constitution.32 “Icelandic bank Kaupthing threat to WikiLeaks over confidential large exposure report.” WikiLeaks.org, July 31, 2009.
Upon receiving a complaint from Kaupthing, a Reykjavik court silenced Iceland’s national broadcaster RUV, which had been planning to break the story on television. So instead of airing the story, the TV host pointed viewers to the WikiLeaks web site, where they could see the documents for themselves—to great social and political effect in Iceland. WikiLeaks could evade the gag order by hosting its information offshore—indeed, multiply so. It was, as Boyle would say, beyond the power of any particular sovereign. WikiLeaks systematically won its jurisdictional chess games until, on November 28, 2010, it began releasing its biggest leak ever: a trove of hundreds of thousands of classified diplomatic communications from US embassies all over the world, now commonly referred to as “Cablegate.”
WikiLeaks’ source of income is crowdfunding; the site relies on public donations, processed by the Wau Holland Foundation, based in Kassel, Germany. Wau Holland is reported to have collected about one million euros in donations to WikiLeaks in 2010. This, according to CBS News, would have paid WikiLeaks founder Julian Assange a salary of about sixty-six thousand euros that year.33 See Xeni Jardin, “WSJ obtains Wikileaks financial data: spending up, donations down.” Boingboing, December 24, 2010, and Joshua Norman, “WikiLeaks’ Julian Assange Now Making $86k/year.” CBS News, December 24, 2010. The crowdfunding went through “conventional payment channels”: PayPal, an online payment system owned by eBay; Western Union; and VISA and MasterCard, two corporations that together virtually dominate the credit card market. One could say that the WikiLeaks donations relied on a private “cloud” of intermediary, US-based companies. According to WikiLeaks’ own account, funding after the release of the first cables peaked at an all-time high of 800,000 individual donations in a single month.34 “Banking Blockade.” WikiLeaks.org.
Upon the release of the cables, WikiLeaks’ Sweden-based servers were hit by a vast distributed denial of service (DDoS) attack, which compelled the organization to hire cloud space from Amazon Web Services (AWS). On December 1, 2010, a day after the site’s move to the cloud, Amazon kicked all WikiLeaks files off its servers, marking the effective beginning of a pan-industrial, state-corporate embargo.35 Ryan Paul, “Wikileaks kicked out of Amazon’s cloud.” Ars Technica, December 1, 2010. Amazon’s decision was prompted by an aggressive call to arms from Joe Lieberman, senior US Senator for Connecticut and chairman of the Senate Committee on Homeland Security. Lieberman urged American enterprises—including Amazon—to stop providing services to the whistleblowing site, even though he had no legal authority to enforce this.36 Alexia Tsotsis, “Sen. Joe Lieberman: Amazon Has Pulled Hosting Services For WikiLeaks.” Techcrunch, December 1, 2010. His words amounted to nothing more than an opinion. Lieberman took the position of both accuser and judge, stating that “it sure looks to me that Assange and WikiLeaks have violated the Espionage Act.”37 Paul Owen, Richard Adams, Ewen MacAskill, “WikiLeaks: US Senator Joe Lieberman suggests New York Times could be investigated.” The Guardian, December 7, 2010. The result was that WikiLeaks’ vital infrastructure fell away, as key companies withdrew their services from WikiLeaks without the check of a court. EveryDNS, a California-based DNS provider, stopped resolving the wikileaks.org domain name, so that the site was reachable only if a user entered its IP address directly in the browser bar. MasterCard, PayPal, VISA, and Western Union ceased to process WikiLeaks donations. Apple removed a WikiLeaks iPhone app from its store, as was noted in Part I of this essay. Together, these operations amounted to an extra-legal embargo for which the organization was unprepared.
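The mechanism behind the EveryDNS episode is simple to demonstrate: the domain name and the server are separate things, and withdrawing a domain’s DNS records breaks only the name, not the machine behind it. A minimal sketch in Python (the hostnames below are placeholders for illustration, not the historical records):

```python
import socket

def resolve(hostname):
    """Try to resolve a hostname via DNS; return its IPv4 address, or None on failure."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

# A name with a working DNS record resolves to an address.
print(resolve("localhost"))

# A name whose records have been withdrawn (simulated here with a reserved,
# never-resolving domain) fails to resolve -- even though the server behind
# it may still be running and reachable by its raw IP address.
print(resolve("no-such-host.invalid"))
```

Users who knew the site’s IP address could still reach it directly; everyone relying on the name could not, which is exactly the partial outage the essay describes.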
Yochai Benkler, a professor of law at Harvard University, examines the embargo in detail in a 2011 article, analyzing how WikiLeaks became constrained by “a large-scale technical distributed-denial-of-service (DDoS) attack with new patterns of attack aimed to deny Domain Name System (DNS) service and cloud-storage facilities, disrupt payment systems services, and disable an iPhone app designed to display the site’s content.” Benkler asserts that the attack came from multiple sources, some of which were more clearly and directly involved and identified than others. Yet indirectly and opaquely, Benkler argues, the attack came on behalf of the Obama administration,
having entailed an extra-legal public-private partnership between politicians gunning to limit access to the site, functioning in a state constrained by the First Amendment, and private firms offering critical functionalities to the site—DNS, cloud storage, and payments, in particular—that were not similarly constrained by law from denying service to the offending site. The mechanism coupled a legally insufficient but publicly salient insinuation of illegality and dangerousness with a legal void.38 Yochai Benkler, “WikiLeaks and the Protect-IP Act: A New Public-Private Threat to the Internet Commons,” Daedalus 4 (2011), 154–55.
James Boyle asserted that there can be a “formal language of politics organized around relations between sovereign and citizen, expressed through rules backed by sanctions,” versus an “actual experience of power.” The distinction is significant—it captures, spot on, the role of the state in the WikiLeaks embargo. The “actual experience of power” operates much more like a social network—Senator Lieberman occupying a powerful node, capable (or believably suggesting that he is capable) of setting off a potentially devastating cascade of effects should his friendly suggestions not be followed up—Don Corleone’s offer you can’t refuse, pure and simple. Power, then, is to personally govern the pressing and depressing of “Like” buttons, deciding on life or death, as the Romans once presided over the fate of gladiators. Facebook’s “Like” symbol—a thumbs up—has its origins in ancient Rome. Arguably, Lieberman clicked the “Dislike”—thumbs down—on WikiLeaks, causing a wave of consequences resulting from his private, social, network power, backed by his stature as a Senator. James Grimmelmann comments:
It is not just that Lieberman possesses the usual sovereign power, so that his public statements are raw threats. There is a political cost to him to pushing legislation; it will have to be checked by the judicial system, etc. Rather, he is an actor within a nexus of sovereign, economic, and social power, leveraging some of those in service of his goals.39 James Grimmelmann, e-mail to author, July 17, 2012.
The WikiLeaks financial embargo by VISA and MasterCard was fought in an Icelandic court by DataCell, the company acting as WikiLeaks’ local payment processor. A July 2012 ruling required Valitor, VISA and MasterCard’s payment handling agent in Iceland, to resume processing donations to the site as a contractual obligation to DataCell. The ruling was touted (by WikiLeaks) as “a significant victory against Washington’s attempt to silence WikiLeaks.”40 Charles Arthur, “WikiLeaks claims court victory against Visa.” The Guardian, July 12, 2012. It remains questionable, however, whether the order against Valitor will actually restore funding to the site. James Grimmelmann doubts that US payment links to WikiLeaks are answerable to the Icelandic ruling. He suggests that
global payment networks still have seams along national boundaries. Valitor, a company which can be thought of as Wikileaks’ “accepting bank,” will not necessarily have donation payments to process. The ruling does not affect the embargo still in place by VISA and Mastercard who continue to control the money flow between the issuing bank (on behalf of their customers) and Valitor.41 James Grimmelmann, e-mail to author, July 17, 2012.
Sveinn Andri Sveinsson, a lawyer for DataCell, is less pessimistic; he was quoted calling the victory a “good day for the freedom of expression.”42 Omar R. Valdimarsson, “Iceland Court Orders Valitor to Process WikiLeaks Donations.” Bloomberg, July 12, 2012. Still, the case was decided as a matter of contract law rather than constitutionality.43 Charles Arthur, The Guardian, ibid.
The situation for WikiLeaks got worse when the organization’s founder, Julian Assange, was accused of (but not charged with) sexual misconduct in Sweden. This led Interpol to issue a Red Notice—normally reserved for the likes of Muammar Gaddafi—for Assange’s arrest, and to an ensuing two-year standoff between Assange and UK prosecutors. After losing his appeal against extradition to Sweden at the Supreme Court in May 2012, the WikiLeaks founder took refuge in the Ecuadorian Embassy in London, applying for (and receiving) political asylum—apparently not to evade the Swedish accusations, but to prevent his possible onward extradition to the United States on presumed charges of espionage.44 William Neuman and Maggy Ayala, “Ecuador Grants Asylum to Assange, Defying Britain.” The New York Times, August 15, 2012.
The lines along which Assange’s legal team fought his extradition, followed by his move into the embassy, are remarkably consistent with WikiLeaks’ multi-jurisdictional hosting model. The case brought to the surface deep ambiguities in the treaties regulating extradition, prompting the Cambridge Journal of International and Comparative Law to argue that the UK Supreme Court’s decision displayed “a fundamental mistake” in its judgment.45 Tiina Pajuste, “Assange v Swedish Prosecution Authority: the (mis)application of European and international law by the UK Supreme Court - Part I.” Cambridge Journal of International and Comparative Law, June 20, 2012. At the embassy, meanwhile, Assange’s life seems to have become fully equivalent to that of WikiLeaks’ data. The Ecuadorian outpost is like an offshore internet server, beyond the grasp of Western powers—and indeed, there was widespread anger when Britain briefly threatened to annul the status of Ecuador’s London embassy premises.46 The British authorities sent a letter to Ecuador saying that “You need to be aware that there is a legal base in the UK, the Diplomatic and Consular Premises Act 1987, that would allow us to take actions in order to arrest Mr Assange in the current premises of the embassy. We sincerely hope that we do not reach that point, but if you are not capable of resolving this matter of Mr Assange’s presence in your premises, this is an open option for us.” See Mark Weisbrot, “Julian Assange asylum: Ecuador is right to stand up to the US.” The Guardian, August 16, 2012.
Assange himself frequently deploys chessboard metaphors when talking about jurisdiction in a multipolar world. His personal television show, The World Tomorrow, was produced by RT, the Western branch of Russia’s state broadcaster (which, as especially liberal commentators prefer to add, is “Kremlin-backed”). As Assange explained to the Daily Mail in September 2012, “if it proceeds to a prosecution then it is a chess game in terms of my movements. I would be well advised to be in a jurisdiction that is not in an alliance with the US …” In Assange’s view,
we must see the countries of the world as a chess board with light and dark areas in ever shifting arrangements depending on our latest publication.47 Sarah Oliver, “‘It’s like living in a space station’: Julian Assange speaks out about living in a one-room embassy refuge with a mattress on the floor and a blue lamp to mimic daylight.” The Daily Mail, September 29, 2012.
If WikiLeaks and Julian Assange are making one thing clear, it is that the jurisprudential assumptions of cyber-libertarianism can have a visceral afterlife in the nondigital, material world. Traditional liberal-constitutional niches like freedom of expression and civil disobedience are no longer that convincing; they exhibit, in a sense, the same weaknesses as Sealand and the Pirate Bay in their wide-eyed expectation of state power curbed by law. The gross inequality in resources between the state and its idealist critics becomes painfully obvious when states deliberately shred legally certified limits on their executive power to pieces, like discarded paperwork. It is becoming increasingly obvious that liberal-democratic conceptions like network neutrality, internet freedom, and freedom of expression, despite their key democratic value, offer no actual protection to those who need them most. In a global internet under a renaissance of the state, it is not just the network, but the networked, who are the ultimate subject of power.
In early 2011, Birgitta Jónsdóttir, an Icelandic Member of Parliament, found out that the US Department of Justice sought information about her Twitter account. Jónsdóttir was under investigation because of her alleged involvement in the making of a WikiLeaks video called Collateral Murder, which was edited and produced in Iceland in 2010.48 See Raffi Khatchadourian, “No Secrets. Julian Assange’s mission for total transparency.” The New Yorker, June 7, 2010. The video documents the shooting of unarmed civilians in Baghdad by a US helicopter crew; the scene was filmed from the gun turret camera of an Apache attack helicopter. The video material used in Collateral Murder was received by WikiLeaks from a source in the US military, alleged to be Private First Class Bradley Manning. Manning is currently in pretrial detention, awaiting court-martial on charges including “aiding the enemy.” A Grand Jury investigation into WikiLeaks brought about the DOJ’s interest in Jónsdóttir’s Twitter information, along with the account information of Jacob Appelbaum and Rop Gonggrijp, computer experts who are also alleged to have helped with the production of Collateral Murder. All Twitter user information is stored on servers in the US, which are accessible to US law enforcement with or without a court order. The subpoena was issued under a gag order forbidding the recipient from talking about it; Twitter’s lawyers, however, successfully lifted the gag order, so that Jónsdóttir, Gonggrijp, and Appelbaum could be informed about the subpoena. On November 13, 2011, Jónsdóttir tweeted:
A foreign government would have a hard time getting permissions for officials entering my offline home, same should apply to online home.49 Twitter.
Her message was retweeted over 100 times. The problem is that in the cloud, there is no equivalent to a “home.” Cloud computing may sometimes mimic or emulate some of the virtues of the anarcho-libertarian internet, such as anonymous PGP keys and personalized security architecture.50 See “AWS Security and Compliance Center.” Amazon Web Services—a company which extra-legally censored WikiLeaks at the request of Joe Lieberman—boasts that it errs on the side of “protecting customer privacy,” and is “vigilant in determining which law enforcement requests we must comply with.” Indeed, it heroically says, “AWS does not hesitate to challenge orders from law enforcement if we think the orders lack a solid basis.”51 See “Amazon Web Services: Risk and Compliance White Paper July 2012.” However, all cyber-anarchic playtime must happen under the gaze of the web’s digital Walmart, without any definition of what a “solid basis” is. The possibility of revolving-door interests between business and government can’t be ruled out either. Amazon’s current Washington, D.C.-based Deputy Chief Information Security Officer is reported to possess a “distinguished career in federal government security and law enforcement.”52 See Malcolm Ross, “Appian World 2012 – Developer Track.” Appian.com, March 16, 2012. A cloud service provider’s own security staff may thus already be intimately connected—socially, geographically, and through shared expertise—to the very law enforcement agencies whose requests it is supposed to scrutinize. As Rebecca Rosen explains, moving data storage to a cloud provider already removes some of the legal constraints on evidence-gathering by law enforcement, especially via subpoenas:
Grand jury subpoenas are used to collect evidence. Unlike warrants, subpoenas can be issued with less than probable cause. The reasoning for the lower bar is in part that if someone does not want to turn over the requested evidence, he or she can contest the subpoena in court. Grand juries can subpoena not only the person who created a document but any third parties who might be in possession of that document. Under the Stored Communications Act, a grand jury can subpoena certain types of data from third parties whose only role is storing that data.53 Rebecca J. Rosen, “How Your Private Emails Can Be Used Against You in Court.” The Atlantic, July 8, 2011.
This, then, reflects an outdated idea of a third party’s role in a subpoena. At the time the law developed, it could be assumed that “any third party with access to someone’s data would have a stake in that data and a relationship with the person who created it.” As Rosen concludes, “in the old days of storing information in filing cabinets, subpoena power was constrained because people didn’t save everything and investigators had to know where to look to find incriminating evidence.”54 Ibid. A cloud provider is a new kind of third party; it manages and hosts vast troves of personal data belonging to its customers. But it is not a stakeholder in that data—any more than the manufacturer of a filing cabinet was a stakeholder in the private documents stored in it. There are many such filing cabinets in the cloud, storing the online self. Together, they form the scattered “online home” we inhabit. Information in the cloud perversely echoes the utopian dream of a weightless and autonomous internet, independent of the constraints of territory. But this utopian dream is, in reality, a centrally managed corporation. As James Gleick writes:
all that information―all that information capacity―looms over us, not quite visible, not quite tangible, but awfully real; amorphous, spectral; hovering nearby, yet not situated in any one place. Heaven must once have felt this way to the faithful. People talk about shifting their lives to the cloud―their informational lives, at least. You may store photographs in the cloud; e-mail passes to and from the cloud and never really leaves the cloud. All traditional ideas of privacy, based on doors and locks, physical remoteness and invisibility, are upended in the cloud.55 James Gleick, The Information, 395–96.
Jónsdóttir, Appelbaum and Gonggrijp tried to find out if, and which, other social media companies had received similar subpoenas. They had reason to believe this would be the case, because Twitter is known (and often praised) for collecting relatively little information about its users; relying solely on Twitter information would seem, as Glenn Greenwald wrote, “one of the least fruitful avenues to pursue” for the DOJ.56 Glenn Greenwald, “DOJ subpoenas Twitter records of several WikiLeaks volunteers.” Salon.com, January 8, 2011. Jónsdóttir’s demands for transparency were flatly refused. US Attorney Neil MacBride wrote in a court filing that her request demonstrated an “overriding purpose to obtain a roadmap of the government’s investigation.” MacBride further stated that
the subscribers have no right to notice regarding any such developments in this confidential criminal investigation—any more than they have a right to notice of tax records requests, wiretap orders, or other confidential investigative steps as to which this Court’s approval might be obtained.57 Kevin Poulsen, “Feds: WikiLeaks Associates Have ‘No Right’ To Know About Demands For Their Records.” Wired, June 2, 2011.
This is a brazenly imperialist thing for MacBride to say. If the US government wants, for the purpose of a “confidential criminal investigation,” the tax records of a non-US citizen like Jónsdóttir, it can’t simply subpoena them from a US cloud service. It must file a case with a foreign government, and demonstrate probable cause. Apparently, to MacBride, obtaining information on a non-US subject from a US server is the same as obtaining such information from foreign territory; smooth compliance is simply expected, and indeed presupposed. In a piece for the Guardian, Jónsdóttir referred to her legal ordeal as an example of ongoing attempts by the US to silence the truth as a means of maintaining power. She wrote that the DOJ’s subpoena constituted a “hack by legal means.”58 Birgitta Jónsdóttir, “Evidence of a US judicial vendetta against WikiLeaks activists mounts.” The Guardian, July 3, 2012. Perhaps out of a misunderstanding of the mechanisms of social media, or out of genuine Orwellian intent, cloud subpoena procedures can take on grotesque dimensions. For example, in December 2011, the Boston District Attorney subpoenaed Twitter over the following material:
Guido Fawkes, @p0isonANon, @occupyBoston, #BostonPD, #d0xcak3.59 Bernard Keane, “The Boston fishing party and Australians’ rights online.” Crikey, January 17, 2012.
The subpoena sought not just information on a specific user, but on all users connected to certain words and hashtags associated with the Occupy movement’s activities in Boston, and the hacktivist collective Anonymous, at a given point in time. WikiLeaks, in linking to this story, tweeted that it was now time for Twitter to move its servers offshore.60 Twitter. The Australian journalist Bernard Keane concluded from the Boston DA’s bizarre “fishing expedition” that
the only real solution is social media networks outside the jurisdiction of nation-states. WikiLeaks is currently establishing its own social network, Friends of WikiLeaks, and Anonymous has established AnonPlus; there have also been anonymous microblogging sites such as Youmitter established, but their lack of critical mass is a key impediment, as is resilience in the face of surges in traffic, and they remain vulnerable, to the extent that it’s enforceable, to authorities claiming to exercise jurisdiction over whatever servers are used to host the networks.61 Bernard Keane, ibid.
Groys’s “future power” over the network is unlikely to pose direct, legal limits on free speech. Instead, as in the WikiLeaks embargo, it directly affects the material basis of those who speak. One is tempted to think of the ways in which the FBI pursued the hacker collectives Anonymous and LulzSec after their DDoS attacks on MasterCard and VISA. The FBI fully exploited the real-world frailties and vulnerabilities of the hackers, who presented themselves online as invulnerable superheroes. In reality, they weren’t. The authorities did not entertain the question of whether Anonymous and LulzSec’s cyber-conflict amounted to acts of “civil disobedience.” They were treated as cyber-terrorists, and the possibility that their practices constituted a legitimate realm of civic protest was eclipsed—even though some of the most thorough previous analyses of Anonymous had focused on precisely these possibilities.62 See Gabriella Coleman, The Many Moods Of Anonymous: Transcript. Discussion at NYU Steinhardt, March 4, 2011. One of the group’s most prominent members, Sabu, was apprehended by the FBI and turned into an informant. New York Magazine wrote about Sabu, using his real name instead of his online pseudonym:
On the day that he joined forces with the hacker collective Anonymous, Hector Xavier Monsegur walked his two little girls half a dozen blocks to their elementary school. “My girls,” he called them, although they weren’t actually his children. Monsegur, then 27, had stepped in after their mother—his aunt—returned to prison for heroin dealing.63 Steve Fishman, “Hello, I Am Sabu ... “ New York Magazine, June 3, 2012.
Ars Technica adds that “worried about the fate of two children in his charge, Monsegur has allegedly been aiding the FBI since his arrest last summer—aid which culminated in arrests today of several LulzSec members.”64 Nate Anderson, “LulzSec leader ‘Sabu’ worked with FBI since last summer.” Ars Technica, March 6, 2012. The Guardian completes the story:
Monsegur … provided an FBI-owned computer to facilitate the release of 5m emails taken from US security consultancy Stratfor and which are now being published by WikiLeaks. That suggests the FBI may have had an inside track on discussions between Julian Assange of WikiLeaks, and Anonymous, another hacking group, about the leaking of thousands of confidential emails and documents.65 Charles Arthur, Dan Sabbagh and Sandra Laville, “LulzSec leader Sabu was working for us, says FBI.” The Guardian, March 7, 2012.
The space of flows is absolutely not smooth. It looks like a data center, and the coal plant that powers it. It looks like Julian Assange’s room in the Ecuadorian Embassy in London. It looks like the Principality of Sealand. It looks like Sabu’s social housing unit on Manhattan’s Lower East Side. The landing from the digital onto the material is hard; it comes with a cruelty and intensity we haven’t even begun to properly understand. Along these lines, we might grasp an emerging political geography of information, resources, and infrastructure. In such a geography, the state and the cloud are among the most important layers, but they are far from the only ones. Saskia Sassen writes that we need to problematize “the seamlessness often attributed to digital networks. Far from being seamless, these digital assemblages are ‘lumpy,’ partly due to their imbrications with nondigital conditions.”66 Saskia Sassen, ibid., 382–83.
Once again, the world indeed is lumpy enough for us not to draw easy conclusions. This story is not over yet. Tomorrow’s clouds are forming.
A Massive, Expanding Surveillance State With Unlimited Power And No Accountability Will Secure Our Freedom by Hans Christian Andersen.
—twitter.com/pourmecoffee1 August 16, 2013.
Violence arms itself with the inventions of Art and Science in order to contend against violence.
—Carl von Clausewitz2 Carl von Clausewitz, On War, trans. J. J. Graham (London, 1873).
Infrastructure is the technology that determines whether we live or die. Your infrastructure will kill you—if it fails, you fail.
—Smári McCarthy3 Smári McCarthy, “Iceland: A Radical Periphery in Action. Smári McCarthy interviewed by Metahaven,” Volume 32 (2012): 98–101.
The internet began as a place too complicated for nation-states to understand; it ended up, in the second decade of the twenty-first century, as a place only nation-states seem to understand. This has left omnipresent cloud giants Google and Yahoo!, in their own words, “outraged.” They stand helplessly by as US spy agencies—extraterritorially, without permission, and aided by the British—break into data center cables just to find out whether the next Bin Laden is out there posting kitten videos on YouTube. In response, Germany and Switzerland cash in on “secure” clouds; Russia fortifies its digital walls, incarcerates Pussy Riot, and offers asylum to Edward Snowden. Ecuador, which hosts Julian Assange in its London embassy as a political refugee, is rebranding itself as a “haven for internet freedom.”4 https://www.buzzfeed.com/rosiegray/ecuador-bids-to-be-seen-as-the-home-of-internet-freedom These developments show the deep divide between governments’ frequent claims to restore national sovereignty over data space and the very nature of the network itself, which is by definition transnational and borderless.
General Keith Alexander is the director of the US National Security Agency. In his previous position as the head of the US Army Intelligence and Security Command, Alexander had an architectural firm decorate his so-called “Information Dominance Center” to look like the control room of the Starship Enterprise. This helped him drum up political enthusiasm for spying. As Foreign Policy notes, “Lawmakers and other important officials took turns sitting in a leather ‘captain’s chair’ in the center of the room and watched as Alexander, a lover of science-fiction movies, showed off his data tools on the big screen.”5 http://foreignpolicy.com/articles/2013/09/08/the_cowboy_of_the_nsa_keith_alexander?page=full At the time of this writing, Alexander is to step down from his position after a taxing year at the helm of the spyboat. Before Edward Snowden gave thousands of the agency’s top-secret documents to the press, Alexander used to appear publicly in full military attire. Sometimes he tried to win sympathy by taking the stage in a black t-shirt. In Las Vegas in 2012, Alexander urged digital troublemakers to join the NSA; he also insisted that his agency operated lawfully and transparently. “We are overseen by everybody,” he said.6 https://www.wired.com/2012/07/nsa-chief-denies-dossiers/ But that was 2012. There were Patriot Act abuses, National Security Letters, and overzealous US prosecutors going after The Pirate Bay, Megaupload, WikiLeaks, and Chelsea Manning. As early as 2002, Mark Klein, an AT&T technician, witnessed an NSA-controlled wiretapping room in full operation in a data center in San Francisco. Later, a handful of US Senators warned the media about a secret interpretation of the Patriot Act.7 Senator Ron Wyden: “We’re getting to a gap between what the public thinks the law says and what the American government secretly thinks the law says.” Quoted in Mike Masnick, “Senators Reveal That Feds Have Secretly Reinterpreted the PATRIOT Act,” Techdirt, May 26, 2011.
Nobody listened.
Then came Edward Snowden. As the magnitude of the NSA’s surveillance of global internet and phone communications was revealed, Keith Alexander changed his public relations tactics accordingly, appearing as an obedient, invisible bureaucrat. At Def Con 2013, Alexander presented his mission, “connecting the dots”: hoovering up everything from everyone up to three degrees of separation, or “hops,” away from a known suspect in order to avert the next 9/11.8 “All of your friends, that's one hop. Your friends' friends, whether you know them or not—two hops. Your friends’ friends’ friends, whoever they happen to be, are that third hop. That’s a massive group of people that the NSA apparently considers fair game.” Quoted in Philip Bump, “The NSA Admits It Analyzes More People's Data Than Previously Revealed,” The Atlantic Wire, July 17, 2013. Columbia University law professor Eben Moglen called it, plainly, “spying on humanity.”9 http://snowdenandthefuture.info/PartI.html Alexander was simply following an organization-wide, 9/11-centered PR memo handed out as a script to the agency’s representatives.10 http://america.aljazeera.com/articles/2013/10/30/revealed-nsa-pushed911askeysoundbitetojustifysurveillance.html Meanwhile, the NSA boasted that its surveillance had thwarted fifty-four terrorist attacks. That number, however, lacked a real basis in fact, as ProPublica concluded after investigation.11 http://www.thewire.com/politics/2013/07/nsa-admits-it-analyzes-more-peoples-data-previously-revealed/67287/
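The arithmetic behind “three hops” is worth making explicit. The sketch below is a back-of-the-envelope estimate, not an NSA method: the average of 190 contacts per person is our own illustrative assumption, and overlap between social circles is ignored, so the result is an upper bound.

```python
# Hypothetical estimate of the reach of the NSA's "three hops" rule.
# AVG_CONTACTS is an illustrative assumption, not an official figure.
AVG_CONTACTS = 190

def people_in_scope(hops: int, avg_contacts: int = AVG_CONTACTS) -> int:
    """Upper bound on the number of people within `hops` hops of one
    suspect, assuming uniform contact lists with no overlap."""
    return sum(avg_contacts ** h for h in range(1, hops + 1))

for hops in (1, 2, 3):
    print(f"{hops} hop(s): ~{people_in_scope(hops):,} people")
# 190 people at one hop, 36,290 at two, 6,895,290 at three.
```

Even allowing for heavy overlap between social circles, a single suspect places a population on the order of millions “in scope,” which is what makes the footnote’s phrase “fair game” so striking.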
Keith Alexander’s spaceship-style ops room sparks the same dark pleasure as the happy smile that sits on a hand-drawn NSA diagram about infiltration into Google and Yahoo!12 https://www.washingtonpost.com/world/national-security/nsa-infiltrates-links-to-yahoo-google-data-centers-worldwide-snowden-documents-say/2013/10/30/e51d661e-4166-11e3-8b74-d89d714ca4dd_story.html Alexander—the man who plotted to ruin the reputation of Islamic “radicalizers” by publicly revealing their porn site visits—is, after all, the pseudo-amicable human incarnation of neo-Stalinism.13 A document that reveals this NSA plot was published by the Huffington Post. The document’s origin is “DIRNSA,” the agency’s director. See Glenn Greenwald, Ryan Gallagher, Ryan Grim, “Top-Secret Document Reveals NSA Spied On Porn Habits As Part Of Plan To Discredit ‘Radicalizers,’” Huffington Post, November 26, 2013. The NSA wields corruption with martial agility. “Overseen” by opaque FISA courts, whose deliberations and decisions are secret, it has built a giant, data-slurping facility in Utah: a Wal-Mart holding everyone’s indeterminate digital past. Lost in a Berlusconian bunga bunga party, the NSA dreamed that its operations could go unseen forever. When asked by Congress if the NSA collected data on millions of Americans, the Director of National Intelligence, James Clapper, politely replied under oath: “No, sir … not wittingly.”14 Ruth Marcus, “James Clapper’s ‘least untruthful’ answer,” The Washington Post, June 13, 2013. Clapper later apologized for misleading Congress by giving the “least untruthful answer.”15 Jason Howerton, “James Clapper Apologizes For Lying To Congress About NSA Surveillance: ‘Clearly Erroneous’,” The Blaze, July 2, 2013.
In one cunning operation, the NSA wielded its power to influence the technical standards on which the internet itself relies, including the pseudo-random number generators that occupy our computers’ microchips. As Yochai Benkler asserts, the NSA “undermined the security of the SSL standard critical to online banking and shopping, VPN products central to secure corporate, research, and healthcare provider networks, and basic email utilities.”16 Yochai Benkler, “Time to tame the NSA behemoth trampling our rights,” The Guardian, September 13, 2013. Jennifer Granick calls the NSA “an exceedingly aggressive spy machine, pushing—and sometimes busting through—the technological, legal and political boundaries of lawful surveillance.”17 Jennifer Granick, “NSA SEXINT is the Abuse You’ve All Been Waiting For,” Just Security, November 29, 2013. Half-hearted attempts by the Obama Administration to curb the agency’s powers do little to reverse the situation. A newly appointed oversight committee is, as Benkler notes, stocked with insiders of the national security shadow world, even as the President claims, in awe-inspiring legalese, that it consists of “independent outside experts.” Surprise: the Obama-appointed chief curator of the committee is James Clapper himself.18 http://www.huffingtonpost.com/2013/08/13/james-clapper_n_3748431.html According to Slate, the proposed post-Snowden NSA reform bill, spearheaded by Democratic Senator Dianne Feinstein, “for the first time explicitly authorizes, and therefore entrenches in statute, the bulk collection of communications records, subject to more or less the same rules already imposed by the FISA Court. It endorses, rather than prohibits, what the NSA is already doing.”19 David Weigel, “New NSA Reform Bill Authorizes All the NSA Activity That Was Making You Angry,” Slate, November 1, 2013. 
Showing his deep understanding of the privacy concerns of ordinary people, President Obama ordered an end to the NSA’s spying on the IMF and the World Bank.20 Mark Hosenball, “Obama halted NSA spying on IMF and World Bank headquarters,” Reuters, October 31, 2013.
Initially known for its quirky minimalism and math, Google has been working hard on its emotional impact on the public. Its vice president for marketing said in 2012 that “if we don’t make you cry, we fail. It’s about emotion, which is bizarre for a tech company.”21 Claire Cain Miller, “Google Bases a Campaign on Emotions, Not Terms,” The New York Times, January 1, 2012. Free email, chat, and social networking are the Coke and McDonald’s of the internet. But they don’t promise Americanness. They promise connections. The largest cloud services are global standards. They are “natural,” thus dominant, focal points in the network, offering the largest potential social reward and likelihood of connection. “Network power” obscures less popular alternatives. The ultimate container of network power is the mobile app, which bypasses the shared internet and its protocols entirely. Users remain permanently within the corporation’s digital walls, rather than passing in and out of them through a web browser.22 See our discussion of network power in the book Uncorporate Identity.
Google’s top executives Eric Schmidt and Jared Cohen published The New Digital Age, a trailblazing book about their political ideas, and how Google interacts with American power abroad. WikiLeaks’s Julian Assange finds that in this paper-bound TED speech, a
liberal sprinkling of convenient, hypothetical dark-skinned worthies appear: Congolese fisherwomen, graphic designers in Botswana, anticorruption activists in San Salvador and illiterate Masai cattle herders in the Serengeti are all obediently summoned to demonstrate the progressive properties of Google phones jacked into the informational supply chain of the Western empire.23 Julian Assange, “The Banality of ‘Don’t Be Evil’,” The New York Times, June 1, 2013.
Indeed, every transaction on a Google server is an event under American jurisdiction.
The seizure of the internet by public-private technocrats, cloud providers, and secret services is an example of what Evgeny Morozov calls “solutionism.”24 See “Evgeny Morozov on technology—The folly of solutionism,” The Economist.com, May 2, 2013. Solutionism takes problems from social and political domains and recalibrates them as issues to be dealt with by technology alone. It brings them under the control of programmers, systems managers, Silicon Valley entrepreneurs, and their political avatars. Privacy and civil liberties are brushed aside: technological bypasses to political, social, and legal problems present themselves everywhere as progress. Who rules the internet, and on whose behalf—as ridiculously archaic as the question may sound—is a political and legal issue hijacked by solutionism. Milton Mueller phrases it slightly differently, as “who should be ‘sovereign’—the people interacting via the Internet or the territorial states constructed by earlier populations in complete ignorance of the capabilities of networked computers.”25 Milton L. Mueller, Networks and States: The Global Politics of Internet Governance (Boston, MA: MIT Press, 2010), 268.
It is uncertain whether sovereignty is attainable at all; whether it, as a concept, holds up against the network, with its winner-takes-all technologies. Security expert Bruce Schneier says we must “take back” the internet: “Government and industry have betrayed the internet, and us … We need to figure out how to re-engineer the internet to prevent this kind of wholesale spying. We need new techniques to prevent communications intermediaries from leaking private information.”26 Bruce Schneier, “The US government has betrayed the internet. We need to take it back,” The Guardian, September 5, 2013.
Infrastructure gets political when things don’t work. As long as they do, no questions are asked. Drinking water is instantly political when nothing comes out of the faucet. Scarcity is a big politicizer. The broken internet grapples with an opposite problem: it bathes in an overabundance of apps and services, which thrive on the deterritorialization, expropriation, and extortion of life and data. Benjamin Bratton calls this “microeconomic compliance.”27 http://www.e-flux.com/journal/some-trace-effects-of-the-post-anthropocene-on-accelerationist-geopolitical-aesthetics/ It is probably the most convenient model of exploitation that has ever existed.
The people on the internet live in territories. They have citizenship. But this feedback loop doesn’t activate political agency. What, after all, really is the connection between these things—“indifference, weariness and exhaustion from the lies, treachery and deceit of the political class” perhaps, as Russell Brand aptly stated?28 https://www.youtube.com/watch?v=3YR4CseY9pk Snapchat and Instagram are vehicles of social (and geopolitical) lure, endlessly more attractive than our tacit complicity with the machinery of representative politics. No one talks about political revolution, but the “Twitter Revolution” makes headlines in mainstream media. The only problem with our digital tools is their underlying standardization. We have an exhausted political machine on the one hand—“citizenship” forced into tiresome, backward rituals of participation. And on the other hand, we have the splendor and immediacy of love, friendship, connection, and technology built on microeconomic and geopolitical compliance. It seems an all too easy win for the latter. People have not considered the internet as a democratically governable structure. Decisions on the internet are delegated to a giant “don’t be evil” mix.
Carne Ross, a former British diplomat and founder of Independent Diplomat, is looking for a solution beyond technology. “The balance between the individual and state needs to be more fundamentally altered,” argues Ross. “New rules, in fact new kinds of rules, are needed. What is required is nothing less than a renegotiation of our contract with the state, and with each other.”29 Carne Ross, "Citizens of the world, unite! You have nothing to lose but your data," The Guardian, October 31, 2013. Ross’s proposal is not technical or bureaucratic. It is political in the most personal sense. Its problem is that it draws on decision-making and enforcement structures which don’t yet exist. People can look out for their common good only when they share common space and interest. They can work out their own polity better than central governments can, as Ross argues in his book The Leaderless Revolution, which promotes benign anarchism. Indeed, it is unclear how a renegotiation of the internet’s social contract might be achieved without a unifying political mechanism for those on the network who can’t bargain with the status quo. For those forced into compliance with its already dominant standards. Or for those who don’t yet know the faces of their friends.
Some version of a social contract between citizens and governments (and corporations) was demonstrated in 2012 when citizens across the world successfully prevented the Stop Online Piracy Act and the Protect IP Act from coming into effect.30 The US Congress withdrew the bills proposing SOPA and PIPA in February 2012 after widespread protests. Around the same time, the Anti-Counterfeiting Trade Agreement (ACTA) was successfully defeated by citizens in the European Union. “Social contract” here means the possibility for people to bargain with the powerful about measures that threaten the common good. Major websites like Google and Wikipedia sided with the protesters against SOPA/PIPA, which somewhat nuances the familiar picture of “evil corporations.” However, this type of legislation tends to silently return in a different guise, most recently with the highly secretive Trans-Pacific Partnership (TPP) agreement. The Intellectual Property portion of this agreement was leaked to WikiLeaks in November, 2013.31 https://wikileaks.org/tpp/pressrelease.html
A social contract for the internet requires governments and corporations to welcome its political inconveniences. It requires them to radically cut back on surveillance. It requires them to unambiguously legalize leaks, cyberprotests, and online civil disobedience as legitimate political expressions. As noted in Part II of this essay, in 2010 and 2011 UK- and US-based hacktivists used DDoS attacks to target private corporations that imposed a corporate embargo against WikiLeaks. The hacktivists responsible were hunted down and tried as criminals; the analogy between hacktivism and nonviolent civil disobedience was lost on the system and its judges. Cyberprotests express the absence of any verifiable and binding agreement between the system and its users. Digital equivalents to strikes and blockades are framed as crimes against property and profit.
The activist group NullifyNSA has taken on the task of disabling the NSA by shutting off the water supply to its data centers. The fascinating proposition is a stark reminder that the ability to spy and to store data is ultimately dependent on electricity and cooling. Thus, any “internet” operation is ultimately dependent upon the living environment and its resources. Michael Boldin, executive director of the Tenth Amendment Center and a NullifyNSA representative, explains that
In Utah, the new data center is expected to need 1.7 million gallons of water per day to keep operational. That water is being supplied by a political subdivision of the state of Utah. Passage in that state of the 4th Amendment Protection act would ban all state and local agencies from providing material support to the NSA while it continues its warrantless mass surveillance. No water = no data center.32 Michael Boldin/NullifyNSA, email to authors, December 4, 2013.
NullifyNSA is politically on the libertarian-conservative Right. Its ideas are, as Boldin says,
backed up by the advice of James Madison. The Supreme Court has repeatedly issued opinions over the years backing it up in a widely accepted legal principle known as the anti-commandeering doctrine. The cases go all the way back to the 1840s, when the court held that states couldn’t be forced to help the feds carry out slavery laws. The latest was the Sebelius case in 2012, where the court held that states couldn’t be compelled to expand Medicaid, even under threat of losing federal funding.33 Ibid.
NullifyNSA has all of the Right’s typical rigor and determination even while it, as Boldin summarizes, seeks to be “transpartisan” in its efforts:
Our goal is single-minded—stopping NSA spying. It’s a long haul, and it’s going to take significant effort and resistance from groups and people not used to working together. But the time is now to set aside differences for the liberty of all.34 Ibid.
The group explains the interdependency between the digital and the physical domains accurately and plainly. Almost no one on the Left seems to have talked about data centers quite like this. Boldin points out the ecological disaster that is the NSA, adding that “a state like Utah is in a state of near-constant drought. The fact that all these precious resources are being used to spy on the world should be disgusting to nearly everyone.”35 Ibid. He goes on to analyze the NSA’s distribution of data centers and its implications for the organization’s own perception of its vulnerabilities:
Back in 2006, the NSA maxed out the Baltimore area power grid. Insiders were very concerned that expansion of the NSA’s “mission” could result in power outages and a “virtual shutdown of the agency.” In reading their documents and press releases over the years, we know that a prime motivation in expanding their operations in Utah, Texas, Georgia, Colorado and elsewhere was to ensure that loads of resources like water, electricity, and more, were distributed. That means they know they have an Achilles heel.36 Ibid.
After all, the NSA’s weak point may be its insatiable appetite for electricity rather than its breaches of the Constitution. NullifyNSA, a group of conservatives with a practical bent, hints at the under-investigated relationship between data centers and their physical geographies.37 For NullifyNSA
“Data sovereignty” is a phrase of recent coinage describing two distinct trends in internet hosting. The first is the increasing tendency of nation-states to make networks that fit within national borders so they can completely control what goes on inside the network. Russia and China both have their own Facebook and Twitter, controlled at all times by the state. The only advantage of these networks is that they are not under the auspices of the NSA. Boutique data sovereignty is a viable economic strategy in the wake of global surveillance. Secure “email made in Germany” is now hot; user data are protected by supposedly watertight German privacy laws.38 Elizabeth Dwoskin and Frances Robinson, “NSA Internet Spying Sparks Race to Create Offshore Havens for Data Privacy,” The Wall Street Journal, September 27, 2013. Swisscom, Switzerland’s telecommunications company, which is majority-owned by the government, is developing a secure “Swiss cloud” aspiring to levels of security and privacy which US companies can’t guarantee.39 Caroline Copley, “Swisscom builds ‘Swiss Cloud’ as spying storm rages,” Reuters, November 3, 2013. Luxembourg and Switzerland’s recent wealth havens, or freeports for property in transit—mostly expensive art—also offer data storage.40 http://www.economist.com/news/briefing/21590353-ever-more-wealth-being-parked-fancy-storage-facilities-some-customers-they-are
The second definition of “data sovereignty” is personal. Every internet user should “own” all of his or her online data. Jonathan Obar critiques the idea, but for the wrong reasons. He claims that personal data sovereignty is fallible because we now have “big data.”
The saying goes that if your only tool is a hammer, all problems look like nails. Data may need to be prevented from becoming “big” in the first place. Obar inadvertently shows the conceptual similarity of “big data” to bad financial products that no one understands. Personal data have become the credit default swaps of the cloud, building a bubble economy as unsustainable as the subprime mortgages that triggered the 2008 financial collapse. The NSA participates in this corporate feeding frenzy as much as cloud providers do. There is, in this light, nothing strange about wanting more personal control over one’s personal information. A clear model for it is still missing, but a 2011 paper by US Naval Postgraduate School students notes that “data sovereignty provides an explicit tool to break a level of abstraction provided by the cloud. The idea of having the abstraction of the cloud when we want it, and removing it when we don’t, is a powerful one.”42 https://www.usenix.org/legacy/events/hotcloud11/tech/final_files/Peterson.pdf To break down the abstraction of the cloud, the internet needs to be more localized.43 See the extensive study by Anselm Franke, Eyal Weizman, and Ines Geisler, “Islands: The Geography of Extraterritoriality,” Archis 6 (Amsterdam: Artimo, 2003): 19–21.
An example of the boundaries between nation-state politics and online politics being traversed is Iceland—a sparsely populated island nation in the North Atlantic that has come to be one of the rare places in the West where political alternatives get a chance. On July 5, 2008, John Perry Barlow gave a speech at the Reykjavík Digital Freedoms Conference. The talk was titled “The Right to Know.”44 https://www.youtube.com/watch?v=snQrNSE1T7Y Barlow took his audience on a journey that began with the wordless prehistory of homo sapiens; he ended by pitching a somewhat unexpected update of the “data haven”—an offshore sanctuary for information prefigured by cyberpunk science fiction. Iceland, Barlow said, could become a “Switzerland of Bits”—a haven for digital freedom, a safe harbor for transparency, a sanctuary for the Enlightenment. Cyberspace, for Barlow, was both global and local, and “the more local it becomes, the more global it becomes.”
A mere three months after Barlow’s talk, Iceland’s banks collapsed. Relative to country size, it was the largest banking crisis ever suffered by a single state.45 http://www.economist.com/node/12762027?story_id=12762027 Iceland’s recovery from the banking crisis became an opportunity for national democratic and ethical reforms. A twenty-five-strong Constitutional Assembly rewrote the constitution, and a crowdsourcing effort introduced thousands of comments and hundreds of concrete proposals from citizens directly into the legislative process.46 “The Constitutional Council hands over the bill for a new constitution.” Stjornlagarad, July 29, 2011. On June 16, 2010, Iceland’s parliament cast a unanimous vote for IMMI, the Icelandic Modern Media Initiative. IMMI combined a “greatest hits” of freedom of speech and libel protection laws that existed in various other countries.47 “Iceland’s media law: ‘The Switzerland of bits,’” The Economist, June 17, 2010. And while the idea for the Switzerland of Bits came from Barlow, a cofounder of the Electronic Frontier Foundation, WikiLeaks also had an influence on IMMI’s legal architecture: Assange’s whistleblowing platform ran separate hosting agreements with ISPs in various countries, benefiting from their laws.
The internet activist, software developer, and writer Smári McCarthy is IMMI’s executive director. Much of the organization’s impact depends on Iceland’s ability to influence new international standards, and to attract companies and organizations to host data.48 “Birgitta Jónsdóttir—Samara/Massey Journalism Lecture.” Uploaded on July 21, 2011. At the same time, McCarthy is involved in the development of MailPile, a secure email application and collective decision-making software that is in the political lineage of “liquid democracy”—a form of delegative democracy. A founding member of the Icelandic Pirate Party, McCarthy does much of his work on the cutting blade of law and code.
McCarthy describes IMMI as an “NGO somewhere half-way between a think tank and a lobby group.” Can IMMI transform Iceland into a Switzerland of Bits? McCarthy is unambiguous in his answer: “Yes. And not just Iceland.” He explains: “Look through the legal code, the social structure, and pretty easy entry points start to become obvious. Treat society as a Wiki—a publicly editable social space—and be bold.”49 “Iceland: A Radical Periphery in Action. Smári McCarthy interviewed by Metahaven,” Volume 32 (2012): 98–101.
James Grimmelmann, who is a Professor of Law at the University of Maryland, comments:
I think Iceland’s plans are viable and well-considered. They are using Iceland’s legal sovereignty, real-world isolation, global connectedness, and stable political system to advance a series of pro-expression policy goals. They’re doing so in ways that don’t fundamentally alter Iceland’s nature as a modern democratic state, but rather play to the theoretical and practical strengths of that model. And McCarthy shows a good understanding of what the limits to this strategy are, in terms of effects beyond Iceland’s borders.50 James Grimmelmann, email to authors, July 17, 2012.
In Iceland, the classical data haven has evolved into a more advanced combination of policy, software, coding, and advocacy, removing itself from the anarcho-libertarian free-for-all. The internet, here, is an experiment with democracy. The development of online communication and coordination tools certainly falls within IMMI’s scope. The organization’s technical director, Eleanor Saitta, explains its larger democratic vision:
The Internet is an $11 trillion economy, globally. It’s a largely post-national economy (to a degree that quantizing it in the currency of a single nation feels mildly ridiculous), but the effects of that economy touch specific people, on specific pieces of ground. What Iceland is becoming is a nation deeply integrated with the internet at an economic level. There are ways in which that resonates strongly and typologically with the notion of the “island”—it’s a resonance we use at IMMI, sometimes, to explain our work. However, the fact that it’s happening in a Scandinavian country also makes a big difference. Iceland has obviously seen its economy turned upside down by the massive financial looting of the past decade, but the fundamental collectivist nature of the country remains. This stands in stark contrast with the hyper-libertarian, “damn anyone who can’t keep up” attitude common among crypto-anarcho-capitalists.
Building a data haven means something very different when you do it in a place where people live and have lived for centuries, in a place where it is a national project, not an also-ran that at best injects a little cash and at worst exists only as network colonialism. The notion of resilience is critical here, too. While some large hosting companies are tentatively approaching sustainability as a concept, they’re doing so to get punishing energy budgets down to something manageable and to comply with regulatory forces. Resilience is much more than sustainability; it meshes very closely with left-information politics, and in doing so, combines to provide a basic political platform much stronger than each alone. Hence in Europe, the limitations of the Pirates as (until their recent initial steps) a single-issue party; likewise, the Greens, mostly working from a relatively obsolete sustainability-only platform.51 Eleanor Saitta, email to authors, November 4, 2012.
Saitta sees the networked politics of the near future as strongly interconnected with locality, so that the outcome is neither a purely nation-state-based affair nor commitment-free internet clicktivism. Such politics spring from a space of exception created both within the context of Iceland as a community and within the internet as a human network:
As translated into the material context of neoliberal capitalism, this provides guidance for some specific corporation to decide where they wish to host servers, but the creation is an act of the commons … Now, as to how network culture can create its own room in which to breathe, I think that’s a much more interesting question, one where I think we will see networked post-institutional political non-state actors continuing to take a lead, to see that their politics leaks out from the internet into the real locality in which they may live. In creating room for themselves, they are in part looking at their place in the web of mutual obligation and stepping up to take their part in the deeper polis as much as they are drawing on and reinforcing the obligations of their localities to them.52 Ibid.
The design agenda for the future of the internet seems straightforward: become a networked, post-institutional, non-state actor and start right where you live with political reform. The idea of a “localized internet” anticipates increasing overlaps between digital and physical social structures. Eventually, all social structures take on physicality. Saitta:
I joke that my ten year stretch goal is to kill the nation state, but really, I don’t think that’s particularly necessary. There will always be territorial organizational structures, but they’re only one possible structure among many that can interact. I favor building up new alternatives, starting now. If we somehow magically did manage to destroy the nation state before there was anything to replace it, we’d all, quite frankly, be fucked. I’m a road fetishist. I really like roads. And power. And food. Those are all currently mostly provided by or coordinated through the state. Kill the state now, and life looks grim. That said, waiting until you’ve got a fully functional alternative before taking any kind of political action aimed at common emancipation is equally dumb, as is investing more effort in actively hostile systems when you can’t actually change them. I’m a realist, in the end. I want less suffering, for everyone, in both the short and long term, and that doesn’t come out of the barrel of any one ideology, just as surely as it isn’t going to come by sticking to the straight and narrow of our status quo handbasket.53 Ibid.
The possibility for a network—centralized, decentralized, or distributed—to override jurisdiction and state power is a foundational dream of the internet, as well as a perpetual mirage shaped and inspired by science fiction. What was once thought to be “the internet”—a deterritorialized space amongst a world of nation-states—is known today to be incredibly saturated with the spatial implications of borders, jurisdictions, and sovereignty. New approaches to guaranteeing internet freedoms are increasingly becoming premised on literally eluding these spatial implications of a (perhaps always) reterritorialized internet.
The Pirate Bay is a famous Sweden-based BitTorrent file-sharing service. Recently, access to its service was blocked in various countries and the site’s three founders were sentenced on charges of enabling the violation of intellectual property by facilitating illegal downloads. At the time of this writing, the final sentences are still pending in Sweden, where the case has been brought to the Supreme Court. Apart from being a file sharing site, the Pirate Bay is also a kind of living manifesto for the cyber-anarchic internet; it has issued various memes, it had plans to buy the Principality of Sealand, and in March 2012, it issued an unusual announcement that detailed the next possibility for evading jurisdiction. The Pirate Bay announced that it would start hosting content on airborne drones, evading law enforcement and copyright claims.54 http://arstechnica.com/tech-policy/2012/03/pirate-bay-plans-to-build-aerial-server-drones-with-35-linux-computer/ The Pirate Bay’s own tagline was: “Everyone knows WHAT TPB is. Now they’re going to have to think about WHERE TPB is.” While clearly part of the Pirate Bay’s amazing array of publicity stunts and memes, the plan is not technologically impossible. In the same month, the website TorrentFreak interviewed Tomorrow’s Thoughts Today, an organization exploring “the consequences of fantastic, perverse and underrated urbanisms,” which has built a set of wirelessly connected drones operating like a mobile darknet.55 https://torrentfreak.com/worlds-first-flying-file-sharing-drones-in-action-120320/ These machines constitute what the organization says is “part nomadic infrastructure and part robotic swarm”:
We have rebuilt and programmed the drones to broadcast their own local wifi network as a form of aerial Napster. They swarm into formation, broadcasting their pirate network, and then disperse, escaping detection, only to reform elsewhere.56 Electronic Countermeasures GLOW Festival video, Liam Young.
Though some of the Pirate Bay’s servers reportedly now operate out of a secret mountain lair,57 “The Pirate Bay Ships New Servers to Mountain Complex,” Torrent Freak, May 16, 2011. its proposed Low Orbit Server Stations (LOSS) would host servers that redirect traffic to a secret location. Though the plan is, conceptually, a call for a deterritorialized internet space, it seems somewhat oblivious to the lingering legal implications of having a localized server. Tomorrow’s Thoughts Today’s Electronic Countermeasures project, on the other hand, is based equally on deterritoriality and locality. Liam Young, cofounder of Tomorrow’s Thoughts Today, reflects:
As a culture we are having to come to some kind of collective agreement about what copyright means in a digital age. Who owns information as it becomes a digital commodity. Industries and governments are too slow to adapt and projects like Electronic Countermeasures or The Pirate Bay drone servers are imagined for the purposes of examining these issues and speculating on new possibilities. The privatization of knowledge is something we all need to be thinking about. Moves toward the storage of all our data in the cloud, a cloud managed by private companies or nation states, is potentially very dangerous. Even if this drone network isn’t implemented as a practical solution we would be just as interested if the work made us question what is happening and what alternatives there may be in data distribution.58 http://mthvn.tumblr.com/post/41818910653/opensourcesky
Young’s “nomadic speculative infrastructures” are relatively harmless in areas that are already heavily covered by regulations. But in less regulated areas, they might become something more.
An island can be created either by expressly carving out law, or by not legislating at all. State power works both ways; negatively, some jurisdictions on the world map lack control over their borders and have no centrally administered rule of law—they are “‘lawless’ zones in various states of anarchy, poverty, decay and crime.”59 Franke, Weizman, Geisler, ibid. In international relations it has become customary to apply a set of rules to define statehood; a state needs to have control over borders, a centrally administered rule of law (even if a dictatorship), and to a considerable extent, it needs to comply with customary practices in “international society” or “the international system.” As a normative categorization, this presupposes the institutional characteristics of Western statehood as the one legitimate form to which all states should aspire.
The term “failed state” was introduced in Western foreign policy to signify any state authority not substantially fulfilling one or more of these criteria. Since the introduction of the term, various failed states have emerged, many of them in Africa: Somalia, Yemen, Sudan, and Mali are but a few examples. The designation of “failure” seems legitimate when applied to raging civil wars, violent conflicts, and their fallout. But it also points back to the political process, ideology, or entity that hands out the designation. In other words: one man’s failed state is, potentially, another man’s utopia. As Pierre Englebert and Denis M. Tull assert in their study on failed states and nation building in Africa:
The goal of rebuilding collapsed states is to restore them as “constituted repositories of power and authority within borders” and as “performers and suppliers of political goods.” Almost all African states, however, have never achieved such levels of statehood. Many are “states that fail[ed] before they form[ed].” Indeed, the evidence is overwhelming that most of Africa’s collapsed states at no point in the postcolonial era remotely resembled the ideal type of the modern Western polity.60 Pierre Englebert and Denis M. Tull, “Postconflict Reconstruction in Africa. Flawed Ideas about Failed States,” International Security, Vol. 32, No. 4 (Spring 2008): 106.
Failed states can be seen as their own political model; a “failure” to produce outcomes compliant with accepted norms can be seen as a “success” in arenas where such norms are disputed. Failed states don’t govern, don’t hold a monopoly of violence, don’t control borders, and don’t enforce a rule of law. They are at the outer borders of the international system and the world political map. Insofar as they are still, partially at least, inside that system, they may present new opportunities for internet practice, new sovereignties for hosting, and new areas for nomadic infrastructure. James Grimmelmann outlines some of the complications that this model faces:
The problem that failed states face is that it’s difficult to create telecommunications infrastructure without security and a functioning economic system. They have domains that may not be effectively under their control and are backed up by an international body. Their internet infrastructure frequently relies on technological providers who operate from out-of-state; what is available is often of limited connectivity and quite expensive. De facto, these places of weak enforcement may tend to function as data havens—particularly when there are many of them—but the reliability of provisioning any specific content is low.61 James Grimmelmann, email to authors, July 17, 2012.
A country like Cameroon presents a borderline case. There is digital infrastructure in the country, but its statehood appears to descend into failure anyway. In 2008, Ozong Agborsangaya-Fiteu warned that in his country, “unless there is clear political reform that will allow citizens to finally enjoy basic civil liberties—including full freedom of expression, free elections and the rule of law—a crisis is inevitable.”62 Ozong Agborsangaya-Fiteu, “Another failed state? Cameroon's descent,” International Herald Tribune, April 10, 2008. About a year later, internet security firm McAfee revealed that Cameroonian websites were the most dangerous in the world for their users—even more so than Hong Kong websites. McAfee found that Cameroon boasts a shadow industry of “typo-squatting” domains. Typo-squatting exploits users who mistype a popular URL, leading them to a scam website. Cameroon’s domain name extension (“.cm”) differs by only one character from the ubiquitous “.com”—hence Cameroon’s success in building popular Potemkin destinations based on typos. Facebook.cm, apparently, leads to a highly offensive porn ad.63 Andy Greenberg, “Cameroon's Cybercrime Boom,” Forbes.com, December 2, 2009. Is the boom in “cybercrime” from countries with weak oversight some sort of data haven byproduct? Grimmelmann comments: “Yes, you could put it that way: I’m reminded of the Eastern European virus-writing ‘industry.’”64 James Grimmelmann, email to authors, July 20, 2012.
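The mechanism behind typo-squatting can be sketched in a few lines. The following Python fragment is an illustration only, with example domains chosen for this sketch; it simply shows how a squatted ccTLD address like facebook.cm sits a single edit away from its intended target:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming:
    the minimum number of single-character insertions, deletions,
    or substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# A squatted .cm domain is exactly one edit (a dropped "o") away
# from its .com target — close enough to harvest routine typos:
assert edit_distance("facebook.cm", "facebook.com") == 1
assert edit_distance("google.cm", "google.com") == 1
```

Defensive registrars and browser vendors use essentially this kind of string-distance check to flag look-alike domains, though real systems also weigh keyboard adjacency and homoglyphs.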
In The Truman Show—with Jim Carrey starring as Truman, the unwitting protagonist of a real life sitcom—the series director, or “Creator,” makes an emotional appeal to Truman in an attempt to convince him that reality out there is no better, and no more real, than reality inside the giant suburban Biosphere that was built for him. Truman’s world is a world without visible signs of government; there are only signposts, and warnings, and red tape, at the edges of its liveable reality.
Government, for Truman, is the drone-like perspective of the series director. Isn’t the point of view offered by NSA Director Keith Alexander similarly comforting? Keith Alexander begins almost every other sentence with the phrase “from my perspective.” He won’t really ever refer to anyone else’s perspective, but it sounds as if he could. “From my perspective” sounds almost modest. Alexander has innumerable grandchildren and their love for iPads illustrates, for the General, the countless possibilities and threats of the “cyber.” Alexander’s NSA is about “saving lives,” as if it were a virtual ambulance rushing to rescue the digitally wounded. He brags about his agency’s “tremendous capabilities” as if he were a middle-aged computer room systems manager boasting about the robustness of his Apache server. How do we best escape the custody of this virtual father figure, and others standing in line to take over once he steps down? How do you liberate a society that has the internet?
No one really knows, but to begin with, we need to get rid of the deceptive gibberish of technocracy. We have become the enslaved consumers of nonsensical abstractions. No one has ever seen the cloud, or its main tenant, “big data.” These are objects of ideology and belief, and at times, treacherous harbingers of Big Brother. Those who argue that we need new tools to fix the broken internet are right, but they shouldn’t forget that we also need the right polities to use them. The spectacle of technology needs to be unleashed to further the ends of those who wish for a way of their own, rather than rule over others. People are real. Clouds aren’t.
Reformist and legislative currents in the ongoing surveillance drama have put their stakes in institutions that are themselves the repositories of vested interests. This bureaucratic apparatus is incapable of reform, because it can’t fire itself from the job it has done so badly for so long. Shielded from the most basic democratic accountability, an opaque data orgy plays out inside the boardrooms, spy bases, and data warehouses of surveillance.
Those who propose that we should, in response, encrypt all our communications seem to have a strong point. Anonymizing technologies and other protections bring to mind the sort of privacy that was once expected from a sealed envelope or a safe. On the other hand, the very argument for total encryption is the flipside of solutionism; it asks technology to solve a political problem. Encryption can’t, by itself, heal the internet.
Separate from these two strands is a third possibility: a localized internet, one that wields the double-edged sword of political and technological reforms, and saves the network from being a looming abstraction manipulated by Silicon Valley entrepreneurs. We should be able to explain the network to each other in the simplest possible terms, in mutual agreement. We should not need to be under the gray cloud of a super-jurisdictional, abstract Totalstaat. We deserve to wake up from the dreamless lethargy that is induced by the techno-managerial matrix, and look each other in the eye.
New polities, new technologies, and new jurisdictions are needed—all three of them, in abundance. Democracy and people need to forever come before clouds. Drinking water needs to always be prioritized over spying. Life itself is the enemy of surveillance.
Planetary-scale computation takes different forms at different scales: energy grids and mineral sourcing; chthonic cloud infrastructure; urban software and public service privatization; massive universal addressing systems; interfaces drawn by the augmentation of the hand, of the eye, or dissolved into objects; users both overdetermined by self-quantification and exploded by the arrival of legions of nonhuman users (sensors, cars, robots). Instead of seeing the various species of contemporary computational technologies as so many different genres of machines, spinning out on their own, we should instead see them as forming the body of an accidental megastructure. Perhaps these parts align, layer by layer, into something not unlike a vast (if also incomplete), pervasive (if also irregular) software and hardware Stack. This model is of a Stack that both does and does not exist as such: it is a machine that serves as a schema, as much as it is a schema of machines.1 Software (and hardware) stacks are technical architectures which assign inter-dependent layers to different specific clusters of technologies, and fix specific protocols for how one layer can send information up or down to adjacent layers. OSI and TCP/IP are obvious examples. As such, perhaps the image of a totality that this conception provides would—as theories of totality have before—make the composition of new governmentalities and new sovereignties both more legible and more effective.
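The footnote’s description of layered architectures—each layer exchanging data only with its adjacent layers according to fixed protocols—can be sketched as a toy model (hypothetical layer names and string “headers”; a drastic simplification of real suites like OSI or TCP/IP, not an implementation of either):

```python
# A toy protocol stack: each layer talks only to its neighbors,
# adding its own header on the way down and stripping it on the
# way up. Layer names are illustrative, not a real protocol suite.

class Layer:
    def __init__(self, name):
        self.name = name

    def encapsulate(self, payload):
        # Wrap the payload from the layer above in this layer's header.
        return f"[{self.name}]{payload}"

    def decapsulate(self, frame):
        # Strip this layer's header and pass the rest upward.
        prefix = f"[{self.name}]"
        assert frame.startswith(prefix), f"malformed frame at {self.name}"
        return frame[len(prefix):]


def send(stack, message):
    # Descend the stack, wrapping at each layer in turn.
    for layer in stack:
        message = layer.encapsulate(message)
    return message


def receive(stack, frame):
    # Ascend the stack, unwrapping in reverse order.
    for layer in reversed(stack):
        frame = layer.decapsulate(frame)
    return frame


stack = [Layer("application"), Layer("transport"),
         Layer("network"), Layer("physical")]
wire = send(stack, "hello")
print(wire)                  # [physical][network][transport][application]hello
print(receive(stack, wire))  # hello
```

The point of the sketch is the inter-dependence the footnote names: no layer inspects the headers of any layer but its own, which is what lets a whole layer be swapped out without the others noticing.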
My interest in the geopolitics of planetary-scale computation focuses less on issues of personal privacy and state surveillance than on how it distorts and deforms traditional Westphalian modes of political geography, jurisdiction, and sovereignty, and produces new territories in its image. It draws from (and against) Carl Schmitt’s later work on The Nomos of the Earth, and from his (albeit flawed) history of the geometries of geopolitical architectures.2 See Carl Schmitt, The Nomos of the Earth in the International Law of Jus Publicum Europaeum, trans. G. L. Ulmen (Candor, NY: Telos Press, 2006). “Nomos” refers to the dominant and essential logic of the political subdivisions of the earth (of land, seas, and/or air, and now also of the domain that the US military simply calls “cyber”) and to the geopolitical order that stabilizes these subdivisions accordingly. Today, as the nomos that was defined by the horizontal loop geometry of the modern state system creaks and groans, and as “Seeing like a State” takes leave of that initial territorial nest—both with and against the demands of planetary-scale computation3 The reference is to James Scott’s Seeing Like a State, but the term seems to have expanded and migrated beyond his antigovernmental thesis. See also, for example, Bruno Latour’s lecture “How to Think Like A State” (“in the presence of the Queen of Holland”). For this text, I mean to tie one thread to Scott’s connotation (how states see everything available to their schemes) and to a more Foucauldian sense of the actual optical technologies that conjure forms of governance in their own image. Today, these privileges are also enjoyed by the hardware/software platforms that manufacture such optics and leverage them as the basis of their own exo-state governmental innovations. —we wrestle with the irregular abstractions of information, time, and territory, and the chaotic de-lamination of (practical) sovereignty from the occupation of place.
For this, a nomos of the Cloud would, for example, draw jurisdiction not only according to the horizontal subdivision of physical sites by and for states, but also according to the vertical stacking of interdependent layers on top of one another: two geometries sometimes in cahoots, sometimes completely diagonal and unrecognizable to one another.4 I mean “Cloud” in a very general sense, referring to planetary-scale software/hardware platforms, supporting data centers, physical transmission links, browser-based applications, and so forth.
The Stack, in short, is that new nomos rendered now as vertically thickened political geography. In my analysis, there are six layers to this Stack: Earth, Cloud, City, Address, Interface, and User. Rather than working through each layer of the Stack in turn, I’ll focus specifically on the Cloud and the User layers, and articulate some alternative designs for these layers and for the totality (or even better, for the next totality, the nomos to come). The Black Stack, then, is to the Stack what the shadow of the future is to the form of the present. The Black Stack is less the anarchist stack, or the death-metal stack, or the utterly opaque stack, than the computational totality-to-come, defined at this moment by what it is not, by the empty content fields of its framework, and by its dire inevitability. It is not the platform we have, but the platform that might be. That platform would be defined by the productivity of its accidents, and by the strategy for which whatever may appear at first as the worst option (even evil) may ultimately be where to look for the best way out. It is less a “possible future” than an escape from the present.
The platforms of the Cloud layer of the Stack are structured by dense, plural, and noncontiguous geographies, a hybrid of US super-jurisdiction and Charter Cities, which have carved new partially privatized polities from the whole cloth of de-sovereigned lands. But perhaps there is more there.
The immediate geographical drama of the Cloud layer is seen most directly in the ongoing Sino-Google conflicts of 2008 to the present: China hacking Google, Google pulling out of China, the NSA hacking China, the NSA hacking Google, Google ghostwriting books for the State Department, and Google wordlessly circumventing the last instances of state oversight altogether, not by transgressing them but by absorbing them into its service offering. Meanwhile, Chinese router firmware bides its time.
The geographies at work are often weird. For example, Google filed a series of patents on offshore data centers, to be built in international waters on towers using tidal currents and available water to keep the servers cool. The complexities of jurisdiction suggested by a global Cloud piped in from non-state space are fantastic, but they are now less exceptional than exemplary of a new normal. Between the “hackers” of the People’s Liberation Army and Google there exists more than a standoff between the proxies of two state apparatuses. There is rather a fundamental conflict over the geometry of political geography itself, with one side bound by the territorial integrity of the state, and the other by the gossamer threads of the world’s information demanding to be “organized and made useful.” This is a clash between two logics of governance, two geometries of territory: one a subdivision of the horizontal, the other a stacking of vertical layers; one a state, the other a para-state; one superimposed on top of the other at any point on the map, and never resolving into some consensual cosmopolitanism, but rather continuing to grind against the grain of one another’s planes. This characterizes the geopolitics of our moment (this, plus the gravity of generalized secession, but the two are interrelated).
From here we see that contemporary Cloud platforms are displacing, if not also replacing, traditional core functions of states, and demonstrating, for both good and ill, new spatial and temporal models of politics and publics. Archaic states drew their authority from the regular provision of food. Over the course of modernization, more was added to the intricate bargains of Leviathan: energy, infrastructure, legal identity and standing, objective and comprehensive maps, credible currencies, and flag-brand loyalties. Bit by bit, each of these and more are now provided by Cloud platforms, not necessarily as formal replacements for the state versions but, like Google ID, simply more useful and effective for daily life. For these platforms, the terms of participation are not mandatory, and because of this, their social contracts are more extractive than constitutional. The Cloud Polis draws revenue from the cognitive capital of its Users, who trade attention and microeconomic compliance in exchange for global infrastructural services, and in turn, it provides each of them with an active discrete online identity and the license to use this infrastructure.
That said, it is clear that we don’t have anything like a proper geopolitical theory of these transformations. Before the full ambition of the US security apparatus was so evident, it was thought by many that the Cloud was a place where states had no ultimate competence, nor maybe even a role to play: too slow, too dumb, too easily outwitted by using the right browser. States would be cored out, component by component, until nothing was left but a well-armed health insurance scheme with its own World Cup team. In the long run, that may still be the outcome, with modern liberal states taking their place next to ceremonial monarchs and stripped of all but symbolic authority, not necessarily replaced but displaced and misplaced to one side. But now we are hearing the opposite, equally brittle conclusion: that the Cloud is only the state, that it equals the state, and that its totality (figural, potential) is intrinsically totalitarian. Despite all, I wouldn’t take that bet.
Looking toward the Black Stack, we observe that new forms of governmentality arise through new capacities to tax flows (at ports, at gates, on property, on income, on attention, on clicks, on movement, on electrons, on carbon, and so forth). It is not at all clear whether, in the long run, Cloud platforms will overwhelm state control on such flows, or whether states will continue to evolve into Cloud platforms, absorbing the displaced functions back into themselves, or whether both will split or rotate diagonally to one another, or how deeply what we may now recognize as the surveillance state (US, China, and so forth) will become a universal solvent of compulsory transparency and/or a cosmically opaque megastructure of absolute paranoia, or all of the above, or none of the above.
Between the state, the market, and the platform, which is better designed to tax the interfaces of everyday life and draw sovereignty thereby? It is a false choice to be sure, but one that raises the question of where to locate the proper site of governance as such. What would we mean by “the public” if not that which is constituted by such interfaces, and where else should “governance”—meant here as the necessary, deliberate, and enforceable composition of durable political subjects and their mediations—live if not there? Not in some obtuse chain of parliamentary representation, nor in some delusional monadic individual unit, nor in some sad little community consensus powered by moral hectoring, but instead in the immanent, immediate, and exactly present interfaces that cleave and bind us. Where should sovereignty reside if not in what is in-between us—derived not from each of us individually but from what draws the world through us?
For this, it’s critical to underscore that Cloud platforms (including sometimes state apparatuses) are exactly that: platforms. It is important as well to recognize that “platforms” are not only a technical architecture; they are also an institutional form. They centralize (like states), scaffolding the terms of participation according to rigid but universal protocols, even as they decentralize (like markets), coordinating economies not through the superimposition of fixed plans but through interoperable and emergent interaction. Next to states and markets, platforms are a third form, coordinating through fixed protocols while scattering free-range Users watched over in loving, if also disconcertingly omniscient, grace. In the platform-as-totality, drawing the interfaces of everyday life into one another, the maximal state and the minimal state, Red Plenty and Google Gosplan, start to look weirdly similar.
Our own subjective enrollment in this is less as citizens of a polis or as homo economicus within a market than as Users of a platform. As I see it, the work of geopolitical theory is to develop a proper history, typology, and program for such platforms. These would not be a shorthand for Cloud Feudalism (nor for the network politics of the “multitude”) but models for the organization of durable alter-totalities which command the force of law, if not necessarily its forms and formality. Our understanding of the political economy of platforms demands its own Hobbes, Marx, Hayek, and Keynes.5 My ongoing discussion on the political economy of platforms with Benedict Singleton, Nick Srnicek, and Alex Williams informs these last remarks.
One of the useful paradoxes of the User’s position as a political subject is the contradictory impulse directed simultaneously toward his artificial over-individuation and his ultimate pluralization, with both participating differently in the geopolitics of transparency. For example, the Quantified Self movement (a true medical theology in California) is haunted by this contradiction. At first, the intensity and granularity of a new informational mirror image convinces the User of his individuated coherency and stability as a subject. He is flattered by the singular beauty of his reflection, and this is why QSelf is so popular with those inspired by an X-Men reading of Atlas Shrugged. But as more data is added to the diagram that quantifies the outside world’s impact on his person—the health of the microbial biome in his gut, immediate and long-term environmental conditions, his various epidemiological contexts, and so on—the quality of everything that is “not him” comes to overcode and overwhelm any notion of himself as a withdrawn and self-contained agent. Like Theseus’s Paradox—where after every component of a thing has been replaced, nothing original remains but a metaphysical husk—the User is confronted with the existential lesson that at any point he is only the intersection of many streams. At first, the subject position of the User overproduces individual identity, but in the continuance of the same mechanisms, it then succeeds in exploding it.
The geopolitics of the User we have now is inadequate, including its oppositional modes. The Oedipal discourse of privacy and transparency in relation to the Evil Eye of the uninvited stepfather is a necessary process toward an alterglobalism, but it has real limits worth spelling out. A geopolitics of computation predicated at its core upon the biopolitics of privacy, of self-immunization from any compulsory appearance in front of publics, of platforms, of states, of Others, can sometimes also serve a psychological internalization of a now-ascendant general economy of secession, castration anxiety—whatever. The result is the pre-paranoia of withdrawal into an atomic and anomic dream of self-mastery that elsewhere we call the “neoliberal subject.”
The space in which the discursive formation of the subject meets the technical constitution of the User enjoys a much larger horizon than the one defined by these kinds of individuation. Consider, for example, proxy users. uProxy, a project supported by Google Ideas, is a browser modification that lets users easily pair up across distances to allow someone in one location (trapped in the Bad Internets) to send information unencumbered through the virtual position of another User in another location (enjoying the Good Internets). Recalling the proxy servers set up during the Arab Spring, one can see how Google Ideas (Jared Cohen’s group) might take special interest in baking this into Chrome. For Sino-Google geopolitics, the platform could theoretically be available at a billion-user scale to those who live in China, even if Google is not technically “in China,” because those Users, acting through and as foreign proxies, are themselves, as far as internet geography is concerned, both in and not in China. Developers of uProxy believe that it would take two simultaneous and synchronized man-in-the-middle attacks to hack the link, and at a population scale that would prove difficult even for the best state actors, for now. More disconcerting perhaps is that such a framework could just as easily be used to withdraw data from a paired site—a paired “user”—which for good reasons should be left alone.
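The relaying mechanism described above can be sketched generically (a minimal TCP relay; uProxy itself is a browser extension built on different machinery, so this is an analogy under stated assumptions, not its implementation): the destination service sees only the relay’s network position, never the client’s.

```python
# Generic sketch of proxy-style relaying: a "friendly peer" accepts a
# client's connection, opens its own connection to the destination,
# and shuttles bytes both ways. Not uProxy's actual design.
import socket
import threading

def serve_echo(server):
    # Stand-in destination service: echoes one message back.
    conn, _ = server.accept()
    data = conn.recv(4096)
    conn.sendall(data)
    conn.close()

def serve_relay(server, dest_addr):
    # The proxy peer: accept one client, connect onward to the
    # destination, then copy bytes in both directions.
    client_conn, _ = server.accept()
    upstream = socket.create_connection(dest_addr)

    def pipe(src, dst):
        while True:
            chunk = src.recv(4096)
            if not chunk:
                break
            dst.sendall(chunk)

    threading.Thread(target=pipe, args=(client_conn, upstream), daemon=True).start()
    threading.Thread(target=pipe, args=(upstream, client_conn), daemon=True).start()

def listener():
    s = socket.socket()
    s.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    s.listen(1)
    return s, s.getsockname()

echo_srv, echo_addr = listener()
relay_srv, relay_addr = listener()
threading.Thread(target=serve_echo, args=(echo_srv,), daemon=True).start()
threading.Thread(target=serve_relay, args=(relay_srv, echo_addr), daemon=True).start()

# The client talks only to the relay, never to the destination directly.
client = socket.create_connection(relay_addr)
client.sendall(b"hello via proxy")
reply = client.recv(4096)
print(reply.decode())  # hello via proxy
```

From the destination’s point of view, the traffic originates at the relay’s address—the same positional substitution that lets a paired User stand in for another across a jurisdictional border.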
Some plural User subject that is conjoined by a proxy link or other means could be composed of different types of addressable subjects: two humans in different countries, or a human and a sensor, a sensor and a bot, a human and a robot and a sensor, a whatever and a whatever. In principle, any one of these subcomponents could not only be part of multiple conjoined positions, but might not even know or need to know which meta-User they contribute to, any more than the microbial biome in your gut needs to know your name. Spoofing with honeypot identities, between humans and nonhumans, is measured against the theoretical address space of IPv6 (2^128, or roughly 3.4 × 10^38 unique addresses—on the order of 10^23 for every square meter of the Earth’s surface) or some other massive universal addressing scheme. The abyssal quantity and range of “things” that could, in principle, participate in these vast pluralities include real and fictional addressable persons, objects, and locations, and even addressable mass-less relations between things, any of which could be a sub-User in this Internet of Haecceities.
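A back-of-the-envelope check on the scale of that address space (the population and surface-area figures are rough assumptions, used only to fix the orders of magnitude):

```python
# Back-of-the-envelope arithmetic on the IPv6 address space.
# Population and surface-area figures are rough assumptions.

total_addresses = 2 ** 128            # size of the IPv6 address space
world_population = 8 * 10 ** 9        # ~8 billion people (assumed)
earth_surface_m2 = 5.1 * 10 ** 14     # ~510 million km^2, in square meters

per_person = total_addresses / world_population
per_square_meter = total_addresses / earth_surface_m2

print(f"total:            {total_addresses:.2e}")   # 3.40e+38
print(f"per person:       {per_person:.2e}")        # 4.25e+28
print(f"per square meter: {per_square_meter:.2e}")  # 6.67e+23
```

Either way the arithmetic is run, the space is abyssal enough that every sensor, bot, relation, and fiction named above can be individually addressed many times over.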
So while the Stack (and the Black Stack) stage the death of the User in one sense—the eclipse of a certain resolute humanism—they do so because they also bring the multiplication and proliferation of other kinds of nonhuman Users (including sensors, financial algorithms, and robots from nanometric to landscape scale), any combination of which one might enter into a relationship with as part of a composite User. This is where the recent shift by major Cloud platforms into robotics may prove especially vital, because—like Darwin’s tortoises finding their way to different Galapagos islands—the Cambrian explosion in robotics sees speciation occur in the wild, not just in the lab, and with “us” on “their” inside, not on the outside. As robotics and Cloud hardware of all scales blend into a common category of machine, it will be unclear in general human-robotic interaction whether one is encountering a fully autonomous, partially autonomous, or completely human-piloted synthetic intelligence. Everyday interactions replay the Turing Test over and over. Is there a person behind this machine, and if so, how much? In time, the answer will matter less, and the postulation of human (or even carbon-based life) as the threshold measure of intelligence and as the qualifying gauge of a political ethics may seem like tasteless vestigial racism, replaced by less anthropocentric frames of reference.
The position of the User then maps only very incompletely onto any one individual body. From the perspective of the platform, what looks like one is really many, and what looks like many may only be one. Elaborate schizophrenias already take hold in our early negotiation of these composite User positions. The neoliberal subject position makes absurd demands on people as Users, as Quantified Selves, as SysAdmins of their own psyche, and from this, paranoia and narcissism are two symptoms of the same disposition, two functions of the same mask. For one, the mask works to pluralize identity according to the subjective demands of the User position as composite alloy; and for another, it defends against those same demands on behalf of the illusory integrity of a self-identity fracturing around its existential core. Ask yourself: Is that User “Anonymous” because he is dissolved into a vital machinic plurality, or because public identification threatens individual self-mastery, sense of autonomy, social unaccountability, and so forth? The former and the latter are two very different politics, yet they use the same masks and the same software suite. Given the schizophrenic economy of the User—first over-individuated and then multiplied and de-differentiated—this really isn’t an unexpected or neurotic reaction at all. It is, however, fragile and inadequate.
In the construction of the User as an aggregate profile that both is and is not specific to any one entity, there is no identity to deduce other than the pattern of interaction between partial actors. We may find, perhaps ironically, that the User position of the Stack actually has far less in common with the neoliberal form of the subject than some of today’s oppositionalist formats for political subjectivity that hope (quite rightly) to challenge, reform, and resist the State Stack as it is currently configuring itself. However, something like a Digital Bill of Rights for Users, despite its cosmopolitan optimism, becomes a much more complicated, fragile, and limited solution when the discrete identification of a User is both so heterogeneous and so fluid. Are all proxy composite users one User? Is anything with an IP address a User? If not, why not? If this throne is reserved for one species—humans—when is any one animal of that species being a User, and when is it not? Is it a User anytime that it is generating information? If so, that policy would in practice crisscross and trespass some of our most basic concepts of the political, and for that reason alone it may be a good place to start.
In addition to the fortification of the User as a geopolitical subject, we also require a redefinition of the political subject in relation to the real operations of the User, one that is based not on homo economicus, nor on parliamentary liberalism, nor on post-structuralist linguistic reduction, nor on the will to secede into the moral safety of individual privacy and withdraw from coercion. Instead, this definition should focus on composing and elevating sites of governance from the immediate, suturing, interfacial material between subjects, in the stitches and the traces and the folds of interaction between bodies and things at a distance, congealing into different networks demanding very different kinds of platform sovereignty.
I will conclude with some thoughts on the Stack-we-have and on the Black Stack, the generic figure for its alternative totalities: the Stack-to-come. The Stack-we-have is defined not only by its form, its layers, its platforms, and their interrelations, but also by its content. As leak after leak has made painfully clear, its content is also the content of our daily communications, now weaponized against us. If the panopticon effect is when you don’t know if you are being watched or not, and so you behave as if you are, then the inverse panopticon effect is when you know you are being watched but act as if you aren’t. This is today’s surveillance culture: exhibitionism in bad faith. The emergence of Stack platforms doesn’t promise any solution, or even any distinctions between friend and enemy within this optical geopolitics. On some dark day in the future, when set against the Google Caliphate, the NSA may even come to be seen by some as the “public option.” “At least it is accountable in principle to some parliamentary limits,” they will say, “rather than merely stockholder avarice and flimsy user agreements.”
If we take 9/11 and the rollout of the Patriot Act as Year Zero for the USA’s massive data gathering, encapsulation, and digestion campaign (one that we are only now beginning to comprehend, even as parallel projects from China, Russia, and Europe are sure to come to light in time), then we can imagine the entirety of network communication for the last decade—the Big Haul—as a single, deep-and-wide digital simulation of the world (or a significant section of it). It is an archive, a library of the real. Its existence as the purloined property of a state, just as a physical fact, is almost occult. Almost.
The geophilosophical profile of the Big Haul, from the energy necessary to preserve it to its governing instrumentality understood both as a text (a very large text) and as a machine with various utilities, overflows the traditional politics of software. Its story is much more Borges than Lawrence Lessig. As is its fate. Can it be destroyed? Is it possible to delete this simulation, and is it desirable to do so? Is there a trash can big enough for the Big Delete? Even if the plug could be pulled on all future data hauls, surely there must be a backup somewhere, the identical double of the simulation, such that if we delete one, the other will forever haunt history until it is rediscovered by future AI archaeologists interested in their own Paleolithic origins. Would we bury it, even if we could? Would we need signs around it like those designed for the Yucca Mountain nuclear waste disposal site that warn off unknowable future excavations? Those of us “lucky” enough to be alive during this fifteen-year span would enjoy a certain illegible immortality, curiosities to whatever meta-cognitive entity pieces us back together using our online activities, both public and private, proud and furtive, each of us rising again centuries from now, each of us a little Ozymandias of cat videos and Pornhub.
In light of this, the Black Stack could come to mean very different things. On the one hand, it would imply that this simulation is opaque and unmappable—not disappeared, but ultimately redacted entirely. It could imply that, from the ruined fragments of this history, another coherent totality can be carved against the grain, even from the deep recombinancy at and below the Earth layer of the Stack. Its blackness is the surface of a world that can no longer be composed by addition because it is so absolutely full, overwritten, and overdetermined that to add more is just so much ink in the ocean. Instead of tabula rasa, this tabula plena allows for creativity and figuration only by subtraction, like scratching paint from a canvas—only by carving away, by death, by replacement.
The structural logic of any Stack system allows for the replacement of whatever occupies one layer with something else, and for the rest of the architecture to continue to function without pause. For example, the content of any one layer—Earth, Cloud, City, Address, Interface, User—could be replaced (including the masochistic hysterical fiction of the individual User, both neoliberal and neo-other-things), while the rest of the layers remain a viable armature for global infrastructure. The Stack is designed to be remade. That is its technical form, but unlike replacing copper wire with fiber optics in the transmission layer of TCP/IP, replacing one kind of User with another is more difficult. Today, we are doing it by adding more and different kinds of things into the User position, as described above. We should, however, also allow for more comprehensive displacements, not just by elevating things to the status of political subjects or technical agents, but by making way for genuinely posthuman and ahuman positions.
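The substitution property invoked here—swap whatever occupies one layer and the rest of the architecture keeps functioning—can be sketched as a toy model (illustrative class names, not an actual protocol stack):

```python
# Toy model of layer replacement in a stack architecture: any
# layer can be swapped out while the whole continues to function,
# provided the replacement keeps the interface its neighbors
# expect. Names and "transports" are illustrative only.

class CopperWire:
    def transmit(self, data):
        return f"copper:{data}"

class FiberOptic:
    def transmit(self, data):
        return f"fiber:{data}"

class Stack:
    def __init__(self, transmission_layer):
        self.transmission = transmission_layer

    def send(self, message):
        # The layers above neither know nor care what carries the bits.
        return self.transmission.transmit(message)

stack = Stack(CopperWire())
print(stack.send("hello"))        # copper:hello

stack.transmission = FiberOptic() # swap the layer in place
print(stack.send("hello"))        # fiber:hello
```

As the surrounding text notes, the User layer resists this kind of clean substitution in a way the transmission layer does not: the interface contract there is a subject position, not a function signature.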
In time, perhaps at the eclipse of the Anthropocene, the historical phase of Google Gosplan will give way to stateless platforms for multiple strata of synthetic intelligence and biocommunication to settle into new continents of cyborg symbiosis. Or perhaps instead, if nothing else, the carbon and energy appetite of this ambitious embryonic ecology will starve its host.
For some dramas, but hopefully not for the fabrication of the Stack-to-come (Black or otherwise), a certain humanism and companion figure of humanity still presumes its traditional place in the center of the frame. We must let go of the demand that any Artificial Intelligence arriving at sentience or sapience must care deeply about humanity—us specifically—as the subject and object of its knowing and its desire. The real nightmare, worse than the one in which the big machine wants to kill you, is the one in which it sees you as irrelevant, or as not even a discrete thing to know. Worse than being seen as an enemy is not being seen at all. As Eliezer Yudkowsky puts it, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”6 See his “Artificial Intelligence as a Positive and Negative Factor in Global Risk” in Global Catastrophic Risks, eds. Nick Bostrom and Martin Rees (New York: Oxford University Press, 2008).
One of the integral accidents of the Stack may be an anthrocidal trauma that shifts us from a design career as the authors of the Anthropocene to the role of supporting actors in the arrival of the Post-Anthropocene. The Black Stack may also be black because we cannot see our own reflection in it. In the last instance, its accelerationist geopolitics is less eschatological than chemical, because its grounding of time is based less on the promise of historical dialectics than on the rot of isotope decay. It is drawn, I believe, by an inhuman and inhumanist molecular form-finding: Precambrian flora changed into peat oil changed into children’s toys, dinosaurs changed into birds changed into ceremonial headdresses, computation itself converted into whatever meta-machine comes next, and Stack into Black Stack.
A lecture on the cognitive equipment of the State, on the occasion of the anniversary of the Scientific Council for Government Policy, in 2007, in the presence of the Queen of Holland.