Table of Contents
Introduction

1 Home or Exile in the Digital Future

Part I The Foundations of Surveillance Capitalism

2 August 9, 2011: Setting the Stage for Surveillance Capitalism

3 The Discovery of Behavioral Surplus

4 The Moat Around the Castle

5 The Elaboration of Surveillance Capitalism: Kidnap, Corner, Compete

6 Hijacked: The Division of Learning in Society

Part II The Advance of Surveillance Capitalism

7 The Reality Business

8 Rendition: From Experience to Data

9 Rendition from the Depths

10 Make Them Dance

11 The Right to the Future Tense

Part III Instrumentarian Power for a Third Modernity

12 Two Species of Power

13 Big Other and the Rise of Instrumentarian Power

14 A Utopia of Certainty

15 The Instrumentarian Collective

16 Of Life in the Hive

17 The Right to Sanctuary

Conclusion

18 A Coup from Above

Detailed Table of Contents

Acknowledgments

Notes

Index

The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power

Introduction

CHAPTER ONE

HOME OR EXILE IN THE DIGITAL FUTURE

I saw him crying, shedding floods of tears upon
Calypso's island, in her chambers.
She traps him there; he cannot go back home.
HOMER, THE ODYSSEY

I. The Oldest Questions

“Are we all going to be working for a smart machine, or will we have smart people around the machine?” The question was posed to me in 1981 by a young paper mill manager sometime between the fried catfish and the pecan pie on my first night in the small southern town that was home to his mammoth plant and would become my home periodically for the next six years. On that rainy night his words flooded my brain, drowning out the quickening tap tap tap of raindrops on the awning above our table. I recognized the oldest political questions: Home or exile? Lord or subject? Master or slave? These are eternal themes of knowledge, authority, and power that can never be settled for all time. There is no end of history; each generation must assert its will and imagination as new threats require us to retry the case in every age.

Perhaps because there was no one else to ask, the plant manager's voice was weighted with urgency and frustration: “What's it gonna be? Which way are we supposed to go? I must know now. There is no time to spare.” I wanted the answers, too, and so I began the project that thirty years ago became my first book, In the Age of the Smart Machine: The Future of Work and Power. That work turned out to be the opening chapter in what became a lifelong quest to answer the question “Can the digital future be our home?”

It has been many years since that warm southern evening, but the oldest questions have come roaring back with a vengeance. The digital realm is overtaking and redefining everything familiar even before we have had a chance to ponder and decide. We celebrate the networked world for the many ways in which it enriches our capabilities and prospects, but it has birthed whole new territories of anxiety, danger, and violence as the sense of a predictable future slips away.

When we ask the oldest questions now, billions of people from every social stratum, generation, and society must answer. Information and communications technologies are more widespread than electricity, reaching three billion of the world's seven billion people.1) The entangled dilemmas of knowledge, authority, and power are no longer confined to workplaces as they were in the 1980s. Now their roots run deep through the necessities of daily life, mediating nearly every form of social participation.2)

Just a moment ago, it still seemed reasonable to focus our concerns on the challenges of an information workplace or an information society. Now the oldest questions must be addressed to the widest possible frame, which is best defined as “civilization” or, more specifically, information civilization. Will this emerging civilization be a place that we can call home?

All creatures orient to home. It is the point of origin from which every species sets its bearings. Without our bearings, there is no way to navigate unknown territory; without our bearings, we are lost. I am reminded of this each spring when the same pair of loons returns from their distant travels to the cove below our window. Their haunting cries of homecoming, renewal, connection, and safeguard lull us to sleep at night, knowing that we too are in our place. Green turtles hatch and go down to the sea, where they travel many thousands of miles, sometimes for ten years or twenty. When ready to lay their eggs, they retrace their journey back to the very patch of beach where they were born. Some birds annually fly for thousands of miles, losing as much as half their body weight, in order to mate in their birthplace. Birds, bees, butterflies… nests, holes, trees, lakes, hives, hills, shores, and hollows… nearly every creature shares some version of this deep attachment to a place in which life has been known to flourish, the kind of place we call home.

It is in the nature of human attachment that every journey and expulsion sets into motion the search for home. That nostos, finding home, is among our most profound needs is evident by the price we are willing to pay for it. There is a universally shared ache to return to the place we left behind or to found a new home in which our hopes for the future can nest and grow. We still recount the travails of Odysseus and recall what human beings will endure for the sake of reaching our own shores and entering our own gates.

Because our brains are larger than those of birds and sea turtles, we know that it is not always possible, or even desirable, to return to the same patch of earth. Home need not always correspond to a single dwelling or place. We can choose its form and location but not its meaning. Home is where we know and where we are known, where we love and are beloved. Home is mastery, voice, relationship, and sanctuary: part freedom, part flourishing… part refuge, part prospect.

The sense of home slipping away provokes an unbearable yearning. The Portuguese have a name for this feeling: saudade, a word said to capture the homesickness and longing of separation from the homeland among emigrants across the centuries. Now the disruptions of the twenty-first century have turned these exquisite anxieties and longings of dislocation into a universal story that engulfs each one of us.3)

II. Requiem for a Home

In 2000 a group of computer scientists and engineers at Georgia Tech collaborated on a project called the “Aware Home.”4) It was meant to be a “living laboratory” for the study of “ubiquitous computing.” They imagined a “human-home symbiosis” in which many animate and inanimate processes would be captured by an elaborate network of “context aware sensors” embedded in the house and by wearable computers worn by the home's occupants. The design called for an “automated wireless collaboration” between the platform that hosted personal information from the occupants' wearables and a second one that hosted the environmental information from the sensors.

There were three working assumptions: first, the scientists and engineers understood that the new data systems would produce an entirely new knowledge domain. Second, it was assumed that the rights to that new knowledge and the power to use it to improve one's life would belong exclusively to the people who live in the house. Third, the team assumed that for all of its digital wizardry, the Aware Home would take its place as a modern incarnation of the ancient conventions that understand “home” as the private sanctuary of those who dwell within its walls.

All of this was expressed in the engineering plan. It emphasized trust, simplicity, the sovereignty of the individual, and the inviolability of the home as a private domain. The Aware Home information system was imagined as a simple “closed loop” with only two nodes and controlled entirely by the home's occupants. Because the house would be “constantly monitoring the occupants' whereabouts and activities… even tracing its inhabitants' medical conditions,” the team concluded, “there is a clear need to give the occupants knowledge and control of the distribution of this information.” All the information was to be stored on the occupants' wearable computers “to insure the privacy of an individual's information.”
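
For readers inclined to think in code, the design just described can be made concrete. What follows is a minimal, purely illustrative sketch in Python, using hypothetical names that are not drawn from the Georgia Tech project, of the “closed loop” with only two nodes: every reading is retained on the occupant's own device, and nothing is forwarded to any outside party.

# Illustrative sketch only (not part of the original text): a toy model of the
# Aware Home's two-node "closed loop," in which sensor readings never leave
# devices controlled by the occupants. All names here are hypothetical.

from dataclasses import dataclass, field
from typing import List


@dataclass
class SensorReading:
    source: str        # e.g. "kitchen_motion" or "wearable_heart_rate"
    value: float
    timestamp: float


@dataclass
class OccupantDevice:
    # Node 1: the occupant's wearable computer, the sole store of personal data.
    readings: List[SensorReading] = field(default_factory=list)

    def store(self, reading: SensorReading) -> None:
        # Data accumulates here, under the occupant's exclusive control.
        self.readings.append(reading)


class HomeSensorHub:
    # Node 2: the house's context-aware sensors. The hub forwards each reading
    # to the occupant's device and retains nothing; there is no third node and
    # no upload to any outside server.

    def __init__(self, occupant_device: OccupantDevice) -> None:
        self.occupant_device = occupant_device

    def emit(self, reading: SensorReading) -> None:
        self.occupant_device.store(reading)


# The loop closes between exactly two nodes; knowledge accrues to the occupant alone.
device = OccupantDevice()
hub = HomeSensorHub(device)
hub.emit(SensorReading(source="kitchen_motion", value=1.0, timestamp=0.0))
assert len(device.readings) == 1

The point of the sketch is structural: the only copy of the data lives on occupant-controlled hardware. That is precisely the design choice abandoned in the arrangement described next.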

By 2018, the global “smart-home” market was valued at $36 billion and expected to reach $151 billion by 2023.5) The numbers betray an earthquake beneath their surface. Consider just one smart-home device: the Nest thermostat, which was made by a company that was owned by Alphabet, the Google holding company, and then merged with Google in 2018.6) The Nest thermostat does many things imagined in the Aware Home. It collects data about its uses and environment. It uses motion sensors and computation to “learn” the behaviors of a home's inhabitants. Nest's apps can gather data from other connected products such as cars, ovens, fitness trackers, and beds.7) Such systems can, for example, trigger lights if an anomalous motion is detected, signal video and audio recording, and even send notifications to homeowners or others. As a result of the merger with Google, the thermostat, like other Nest products, will be built with Google's artificial intelligence capabilities, including its personal digital “assistant.”8) Like the Aware Home, the thermostat and its brethren devices create immense new stores of knowledge and therefore new power, but for whom?

Wi-Fi enabled and networked, the thermostat's intricate, personalized data stores are uploaded to Google's servers. Each thermostat comes with a “privacy policy,” a “terms-of-service agreement,” and an “end-user licensing agreement.” These reveal oppressive privacy and security consequences in which sensitive household and personal information is shared with other smart devices, unnamed personnel, and third parties for the purposes of predictive analyses and sales to other unspecified parties. Nest takes little responsibility for the security of the information it collects and none for how the other companies in its ecosystem will put those data to use.9) A detailed analysis of Nest's policies by two University of London scholars concluded that were one to enter into the Nest ecosystem of connected devices and apps, each with their own equally burdensome and audacious terms, the purchase of a single home thermostat would entail the need to review nearly a thousand so-called contracts.10)

Should the customer refuse to agree to Nest's stipulations, the terms of service indicate that the functionality and security of the thermostat will be deeply compromised, no longer supported by the necessary updates meant to ensure its reliability and safety. The consequences can range from frozen pipes to failed smoke alarms to an easily hacked internal home system.11)

By 2018, the assumptions of the Aware Home were gone with the wind. Where did they go? What was that wind? The Aware Home, like many other visionary projects, imagined a digital future that empowers individuals to lead more-effective lives. What is most critical is that in the year 2000 this vision naturally assumed an unwavering commitment to the privacy of individual experience. Should an individual choose to render her experience digitally, then she would exercise exclusive rights to the knowledge garnered from such data, as well as exclusive rights to decide how such knowledge might be put to use. Today these rights to privacy, knowledge, and application have been usurped by a bold market venture powered by unilateral claims to others' experience and the knowledge that flows from it. What does this sea change mean for us, for our children, for our democracies, and for the very possibility of a human future in a digital world? This book aims to answer these questions. It is about the darkening of the digital dream and its rapid mutation into a voracious and utterly novel commercial project that I call surveillance capitalism.

III. What Is Surveillance Capitalism?

Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioral data. Although some of these data are applied to product or service improvement, the rest are declared as a proprietary behavioral surplus, fed into advanced manufacturing processes known as “machine intelligence,” and fabricated into prediction products that anticipate what you will do now, soon, and later. Finally, these prediction products are traded in a new kind of marketplace for behavioral predictions that I call behavioral futures markets. Surveillance capitalists have grown immensely wealthy from these trading operations, for many companies are eager to lay bets on our future behavior.

As we shall see in the coming chapters, the competitive dynamics of these new markets drive surveillance capitalists to acquire ever-more-predictive sources of behavioral surplus: our voices, personalities, and emotions. Eventually, surveillance capitalists discovered that the most-predictive behavioral data come from intervening in the state of play in order to nudge, coax, tune, and herd behavior toward profitable outcomes. Competitive pressures produced this shift, in which automated machine processes not only know our behavior but also shape our behavior at scale. With this reorientation from knowledge to power, it is no longer enough to automate information flows about us; the goal now is to automate us. In this phase of surveillance capitalism's evolution, the means of production are subordinated to an increasingly complex and comprehensive “means of behavioral modification.” In this way, surveillance capitalism births a new species of power that I call instrumentarianism. Instrumentarian power knows and shapes human behavior toward others' ends. Instead of armaments and armies, it works its will through the automated medium of an increasingly ubiquitous computational architecture of “smart” networked devices, things, and spaces.

In the coming chapters we will follow the growth and dissemination of these operations and the instrumentarian power that sustains them. Indeed, it has become difficult to escape this bold market project, whose tentacles reach from the gentle herding of innocent Pokemon Go players to eat, drink, and purchase in the restaurants, bars, fast-food joints, and shops that pay to play in its behavioral futures markets to the ruthless expropriation of surplus from Facebook profiles for the purposes of shaping individual behavior, whether it's buying pimple cream at 5:45 P.M. on Friday, clicking “yes” on an offer of new running shoes as the endorphins race through your brain after your long Sunday morning run, or voting next week. Just as industrial capitalism was driven to the continuous intensification of the means of production, so surveillance capitalists and their market players are now locked into the continuous intensification of the means of behavioral modification and the gathering might of instrumentarian power.

Surveillance capitalism runs contrary to the early digital dream, consigning the Aware Home to ancient history. Instead, it strips away the illusion that the networked form has some kind of indigenous moral content, that being “connected” is somehow intrinsically pro-social, innately inclusive, or naturally tending toward the democratization of knowledge. Digital connection is now a means to others' commercial ends. At its core, surveillance capitalism is parasitic and self-referential. It revives Karl Marx's old image of capitalism as a vampire that feeds on labor, but with an unexpected turn. Instead of labor, surveillance capitalism feeds on every aspect of every human's experience.

Google invented and perfected surveillance capitalism in much the same way that a century ago General Motors invented and perfected managerial capitalism. Google was the pioneer of surveillance capitalism in thought and practice, the deep pocket for research and development, and the trailblazer in experimentation and implementation, but it is no longer the only actor on this path. Surveillance capitalism quickly spread to Facebook and later to Microsoft. Evidence suggests that Amazon has veered in this direction, and it is a constant challenge to Apple, both as an external threat and as a source of internal debate and conflict.

As the pioneer of surveillance capitalism, Google launched an unprecedented market operation into the unmapped spaces of the internet, where it faced few impediments from law or competitors, like an invasive species in a landscape free of natural predators. Its leaders drove the systemic coherence of their businesses at a breakneck pace that neither public institutions nor individuals could follow. Google also benefited from historical events when a national security apparatus galvanized by the attacks of 9/11 was inclined to nurture, mimic, shelter, and appropriate surveillance capitalism's emergent capabilities for the sake of total knowledge and its promise of certainty.

Surveillance capitalists quickly realized that they could do anything they wanted, and they did. They dressed in the fashions of advocacy and emancipation, appealing to and exploiting contemporary anxieties, while the real action was hidden offstage. Theirs was an invisibility cloak woven in equal measure to the rhetoric of the empowering web, the ability to move swiftly, the confidence of vast revenue streams, and the wild, undefended nature of the territory they would conquer and claim. They were protected by the inherent illegibility of the automated processes that they rule, the ignorance that these processes breed, and the sense of inevitability that they foster.

Surveillance capitalism is no longer confined to the competitive dramas of the large internet companies, where behavioral futures markets were first aimed at online advertising. Its mechanisms and economic imperatives have become the default model for most internet-based businesses. Eventually, competitive pressure drove expansion into the offline world, where the same foundational mechanisms that expropriate your online browsing, likes, and clicks are trained on your run in the park, breakfast conversation, or hunt for a parking space. Today's prediction products are traded in behavioral futures markets that extend beyond targeted online ads to many other sectors, including insurance, retail, finance, and an ever-widening range of goods and services companies determined to participate in these new and profitable markets. Whether it's a “smart” home device, what the insurance companies call “behavioral underwriting,” or any one of thousands of other transactions, we now pay for our own domination.

Surveillance capitalism's products and services are not the objects of a value exchange. They do not establish constructive producer-consumer reciprocities. Instead, they are the “hooks” that lure users into their extractive operations in which our personal experiences are scraped and packaged as the means to others' ends. We are not surveillance capitalism's “customers.” Although the saying tells us “If it's free, then you are the product,” that is also incorrect. We are the sources of surveillance capitalism's crucial surplus: the objects of a technologically advanced and increasingly inescapable raw-material-extraction operation. Surveillance capitalism's actual customers are the enterprises that trade in its markets for future behavior.

This logic turns ordinary life into the daily renewal of a twenty-first-century Faustian compact. “Faustian” because it is nearly impossible to tear ourselves away, despite the fact that what we must give in return will destroy life as we have known it. Consider that the internet has become essential for social participation, that the internet is now saturated with commerce, and that commerce is now subordinated to surveillance capitalism. Our dependency is at the heart of the commercial surveillance project, in which our felt needs for effective life vie against the inclination to resist its bold incursions. This conflict produces a psychic numbing that inures us to the realities of being tracked, parsed, mined, and modified. It disposes us to rationalize the situation in resigned cynicism, create excuses that operate like defense mechanisms (“I have nothing to hide”), or find other ways to stick our heads in the sand, choosing ignorance out of frustration and helplessness.12) In this way, surveillance capitalism imposes a fundamentally illegitimate choice that twenty-first-century individuals should not have to make, and its normalization leaves us singing in our chains.13)

Surveillance capitalism operates through unprecedented asymmetries in knowledge and the power that accrues to knowledge. Surveillance capitalists know everything about us, whereas their operations are designed to be unknowable to us. They accumulate vast domains of new knowledge from us, but not for us. They predict our futures for the sake of others' gain, not ours. As long as surveillance capitalism and its behavioral futures markets are allowed to thrive, ownership of the new means of behavioral modification eclipses ownership of the means of production as the fountainhead of capitalist wealth and power in the twenty-first century.

These facts and their consequences for our individual lives, our societies, our democracies, and our emerging information civilization are examined in detail in the coming chapters. The evidence and reasoning employed here suggest that surveillance capitalism is a rogue force driven by novel economic imperatives that disregard social norms and nullify the elemental rights associated with individual autonomy that are essential to the very possibility of a democratic society.

Just as industrial civilization flourished at the expense of nature and now threatens to cost us the Earth, an information civilization shaped by surveillance capitalism and its new instrumentarian power will thrive at the expense of human nature and will threaten to cost us our humanity. The industrial legacy of climate chaos fills us with dismay, remorse, and fear. As surveillance capitalism becomes the dominant form of information capitalism in our time, what fresh legacy of damage and regret will be mourned by future generations? By the time you read these words, the reach of this new form will have grown as more sectors, firms, startups, app developers, and investors mobilize around this one plausible version of information capitalism. This mobilization and the resistance it engenders will define a key battleground upon which the possibility of a human future at the new frontier of power will be contested.

IV. The Unprecedented

One explanation for surveillance capitalism's many triumphs floats above them all: it is unprecedented. The unprecedented is necessarily unrecognizable. When we encounter something unprecedented, we automatically interpret it through the lenses of familiar categories, thereby rendering invisible precisely that which is unprecedented. A classic example is the notion of the “horseless carriage” to which people reverted when confronted with the unprecedented facts of the automobile. A tragic illustration is the encounter between indigenous people and the first Spanish conquerors. When the Tainos of the pre-Columbian Caribbean islands first laid eyes on the sweating, bearded Spanish soldiers trudging across the sand in their brocade and armor, how could they possibly have recognized the meaning and portent of that moment? Unable to imagine their own destruction, they reckoned that those strange creatures were gods and welcomed them with intricate rituals of hospitality. This is how the unprecedented reliably confounds understanding; existing lenses illuminate the familiar, thus obscuring the original by turning the unprecedented into an extension of the past. This contributes to the normalization of the abnormal, which makes fighting the unprecedented even more of an uphill climb.

On a stormy night some years ago, our home was struck by lightning, and I learned a powerful lesson in the comprehension-defying power of the unprecedented. Within moments of the strike, thick black smoke drifted up the staircase from the lower level of the house and toward the living room. As we mobilized and called the fire department, I believed that I had just a minute or two to do something useful before rushing out to join my family. First, I ran upstairs and closed all the bedroom doors to protect them from smoke damage. Next, I tore back downstairs to the living room, where I gathered up as many of our family photo albums as I could carry and set them outside on a covered porch for safety. The smoke was just about to reach me when the fire marshal arrived to grab me by the shoulder and yank me out the door. We stood in the driving rain, where, to our astonishment, we watched the house explode in flames.

I learned many things from the fire, but among the most important was the unrecognizability of the unprecedented. In that early phase of crisis, I could imagine our home scarred by smoke damage, but I could not imagine its disappearance. I grasped what was happening through the lens of past experience, envisioning a distressing but ultimately manageable detour that would lead back to the status quo. Unable to distinguish the unprecedented, all I could do was to close doors to rooms that would no longer exist and seek safety on a porch that was fated to vanish. I was blind to conditions that were unprecedented in my experience.

I began to study the emergence of what I would eventually call surveillance capitalism in 2006, interviewing entrepreneurs and staff in a range of tech companies in the US and the UK. For several years I thought that the unexpected and disturbing practices that I documented were detours from the main road: management oversights or failures of judgment and contextual understanding.

My field data were destroyed in the fire that night, and by the time I picked up the thread again early in 2011, it was clear to me that my old horseless-carriage lenses could not explain or excuse what was taking shape. I had lost many details hidden in the brush, but the profiles of the trees stood out more clearly than before: information capitalism had taken a decisive turn toward a new logic of accumulation, with its own original operational mechanisms, economic imperatives, and markets. I could see that this new form had broken away from the norms and practices that define the history of capitalism and in that process something startling and unprecedented had emerged.

Of course, the emergence of the unprecedented in economic history cannot be compared to a house fire. The portents of a catastrophic fire were unprecedented in my experience, but they were not original. In contrast, surveillance capitalism is a new actor in history, both original and sui generis. It is of its own kind and unlike anything else: a distinct new planet with its own physics of time and space, its sixty-seven-hour days, emerald sky, inverted mountain ranges, and dry water.

Nonetheless, the danger of closing doors to rooms that will no longer exist is very real. The unprecedented nature of surveillance capitalism has enabled it to elude systematic contest because it cannot be adequately grasped with our existing concepts. We rely on categories such as “monopoly” or “privacy” to contest surveillance capitalist practices. And although these issues are vital, and even when surveillance capitalist operations are also monopolistic and a threat to privacy, the existing categories nevertheless fall short in identifying and contesting the most crucial and unprecedented facts of this new regime.

Will surveillance capitalism continue on its current trajectory to become the dominant logic of accumulation of our age, or, in the fullness of time, will we judge it to have been a toothed bird: a fearsome but ultimately doomed dead end in capitalism's longer journey? If it is to be doomed, then what will make it so? What will an effective vaccine entail?

Every vaccine begins in careful knowledge of the enemy disease. This book is a journey to encounter what is strange, original, and even unimaginable in surveillance capitalism. It is animated by the conviction that fresh observation, analysis, and new naming are required if we are to grasp the unprecedented as a necessary prelude to effective contest. The chapters that follow will examine the specific conditions that allowed surveillance capitalism to root and flourish as well as the “laws of motion” that drive the action and expansion of this market form: its foundational mechanisms, economic imperatives, economies of supply, construction of power, and principles of social ordering. Let's close doors, but let's make sure that they are the right ones.

V. The Puppet Master, Not the Puppet

Our effort to confront the unprecedented begins with the recognition that we hunt the puppet master, not the puppet. A first challenge to comprehension is the confusion between surveillance capitalism and the technologies it employs. Surveillance capitalism is not technology; it is a logic that imbues technology and commands it into action. Surveillance capitalism is a market form that is unimaginable outside the digital milieu, but it is not the same as the “digital.” As we saw in the story of the Aware Home, and as we shall see again in Chapter 2, the digital can take many forms depending upon the social and economic logics that bring it to life. It is capitalism that assigns the price tag of subjugation and helplessness, not the technology.

That surveillance capitalism is a logic in action and not a technology is a vital point because surveillance capitalists want us to think that their practices are inevitable expressions of the technologies they employ. For example, in 2009 the public first became aware that Google maintains our search histories indefinitely: data that are available as raw-material supplies are also available to intelligence and law-enforcement agencies. When questioned about these practices, the corporation's former CEO Eric Schmidt mused, “The reality is that search engines including Google do retain this information for some time.”14

In truth, search engines do not retain, but surveillance capitalism does. Schmidt's statement is a classic of misdirection that bewilders the public by conflating commercial imperatives and technological necessity. It camouflages the concrete practices of surveillance capitalism and the specific choices that impel Google's brand of search into action. Most significantly, it makes surveillance capitalism's practices appear to be inevitable when they are actually meticulously calculated and lavishly funded means to self-dealing commercial ends. We will examine this notion of “inevitabilism” in depth in Chapter 7. For now, suffice to say that despite all the futuristic sophistication of digital innovation, the message of the surveillance capitalist companies barely differs from the themes once glorified in the motto of the 1933 Chicago World's Fair: “Science Finds, Industry Applies, Man Conforms.”

In order to challenge such claims of technological inevitability, we must establish our bearings. We cannot evaluate the current trajectory of information civilization without a clear appreciation that technology is not and never can be a thing in itself, isolated from economics and society. This means that technological inevitability does not exist. Technologies are always economic means, not ends in themselves: in modern times, technology's DNA comes already patterned by what the sociologist Max Weber called the “economic orientation.”

Economic ends, Weber observed, are always intrinsic to technology's development and deployment. “Economic action” determines objectives, whereas technology provides “appropriate means.” In Weber's framing, “The fact that what is called the technological development of modern times has been so largely oriented economically to profit-making is one of the fundamental facts of the history of technology.”15 In a modern capitalist society, technology was, is, and always will be an expression of the economic objectives that direct it into action. A worthwhile exercise would be to delete the word “technology” from our vocabularies in order to see how quickly capitalism's objectives are exposed.

Surveillance capitalism employs many technologies, but it cannot be equated with any technology. Its operations may employ platforms, but these operations are not the same as platforms. It employs machine intelligence, but it cannot be reduced to those machines. It produces and relies on algorithms, but it is not the same as algorithms. Surveillance capitalism's unique economic imperatives are the puppet masters that hide behind the curtain orienting the machines and summoning them to action. These imperatives, to indulge another metaphor, are like the body's soft tissues that cannot be seen in an X-ray but do the real work of binding muscle and bone. We are not alone in falling prey to the technology illusion. It is an enduring theme of social thought, as old as the Trojan horse. Despite this, each generation stumbles into the quicksand of forgetting that technology is an expression of other interests. In modern times this means the interests of capital, and in our time it is surveillance capital that commands the digital milieu and directs our trajectory toward the future. Our aim in this book is to discern the laws of surveillance capitalism that animate today's exotic Trojan horses, returning us to age-old questions as they bear down on our lives, our societies, and our civilization.

We have stood at this kind of precipice before. “We've stumbled along for a while, trying to run a new civilization in old ways, but we've got to start to make this world over.” It was 1912 when Thomas Edison laid out his vision for a new industrial civilization in a letter to Henry Ford. Edison worried that industrialism's potential to serve the progress of humanity would be thwarted by the stubborn power of the robber barons and the monopolist economics that ruled their kingdoms. He decried the “wastefulness” and “cruelty” of US capitalism: “Our production, our factory laws, our charities, our relations between capital and labor, our distribution: all wrong, out of gear.” Both Edison and Ford understood that the modern industrial civilization for which they harbored such hope was careening toward a darkness marked by misery for the many and prosperity for the few.

Most important for our conversation, Edison and Ford understood that the moral life of industrial civilization would be shaped by the practices of capitalism that rose to dominance in their time. They believed that America, and eventually the world, would have to fashion a new, more rational capitalism in order to avert a future of misery and conflict. Everything, as Edison suggested, would have to be reinvented: new technologies, yes, but these would have to reflect new ways of understanding and fulfilling people's needs; a new economic model that could turn those new practices into profit; and a new social contract that could sustain it all. A new century had dawned, but the evolution of capitalism, like the churning of civilizations, did not obey the calendar or the clock. It was 1912, and still the nineteenth century refused to relinquish its claim on the twentieth.

The same can be said of our time. As I write these words, we are nearing the end of the second decade of the twenty-first century, but the economic and social contests of the twentieth continue to tear us apart. These contests are the stage upon which surveillance capitalism made its debut and rose to stardom as the author of a new chapter in the long saga of capitalism's evolution. This is the dramatic context to which we will turn in the opening pages of Part I: the place upon which we must stand in order to evaluate our subject in its rightful context. Surveillance capitalism is not an accident of overzealous technologists, but rather a rogue capitalism that learned to cunningly exploit its historical conditions to ensure and defend its success.

VI. The Outline, Themes, and Sources of this Book

This book is intended as an initial mapping of a terra incognita, a first foray that I hope will pave the way for more explorers. The effort to understand surveillance capitalism and its consequences has dictated a path of exploration that crosses many disciplines and historical periods. My aim has been to develop the concepts and frameworks that enable us to see the pattern in what have appeared to be disparate concepts, phenomena, and fragments of rhetoric and practice, as each new point on the map contributes to materializing the puppet master in flesh and bone.

Many of the points on this map are necessarily drawn from fast-moving currents in turbulent times. In making sense of contemporary developments, my method has been to isolate the deeper pattern in the welter of technological detail and corporate rhetoric. The test of my efficacy will be in how well this map and its concepts illuminate the unprecedented and empower us with a more cogent and comprehensive understanding of the rapid flow of events that boil around us as surveillance capitalism pursues its long game of economic and social domination.

The Age of Surveillance Capitalism has four parts. Each presents four to five chapters as well as a final chapter intended as a coda that reflects on and conceptualizes the meaning of what has gone before. Part I addresses the foundations of surveillance capitalism: its origins and early elaboration. We begin in Chapter 2 by setting the stage upon which surveillance capitalism made its debut and achieved success. This stage setting is important because I fear that we have contented ourselves for too long with superficial explanations of the rapid rise and general acceptance of the practices associated with surveillance capitalism. For example, we have credited notions such as “convenience” or the fact that many of its services are “free.” Instead, Chapter 2 explores the social conditions that summoned the digital into our everyday lives and enabled surveillance capitalism to root and flourish. I describe the “collision” between the centuries-old historical processes of individualization that shape our experience as self-determining individuals and the harsh social habitat produced by a decades-old regime of neoliberal market economics in which our sense of self-worth and needs for self-determination are routinely thwarted. The pain and frustration of this contradiction are the condition that sent us careening toward the internet for sustenance and ultimately bent us to surveillance capitalism's draconian quid pro quo.

Part I moves on to a close examination of surveillance capitalism's invention and early elaboration at Google, beginning with the discovery and early development of what would become its foundational mechanisms, economic imperatives, and “laws of motion.” For all of Google's technological prowess and computational talent, the real credit for its success goes to the radical social relations that the company declared as facts, beginning with its disregard for the boundaries of private human experience and the moral integrity of the autonomous individual. Instead, surveillance capitalists asserted their right to invade at will, usurping individual decision rights in favor of unilateral surveillance and the self-authorized extraction of human experience for others' profit. These invasive claims were nurtured by the absence of law to impede their progress, the mutuality of interests between the fledgling surveillance capitalists and state intelligence agencies, and the tenacity with which the corporation defended its new territories. Eventually, Google codified a tactical playbook on the strength of which its surveillance capitalist operations were successfully institutionalized as the dominant form of information capitalism, drawing new competitors eager to participate in the race for surveillance revenues. On the strength of these achievements, Google and its expanding universe of competitors enjoy extraordinary new asymmetries of knowledge and power, unprecedented in the human story. I argue that the significance of these developments is best understood as the privatization of the division of learning in society, the critical axis of social order in the twenty-first century.

Part II traces the migration of surveillance capitalism from the online environment to the real world, a consequence of the competition for prediction products that approximate certainty. Here we explore this new reality business, as all aspects of human experience are claimed as raw-material supplies and targeted for rendering into behavioral data. Much of this new work is accomplished under the banner of “personalization,” a camouflage for aggressive extraction operations that mine the intimate depths of everyday life. As competition intensifies, surveillance capitalists learn that extracting human experience is not enough. The most-predictive raw-material supplies come from intervening in our experience to shape our behavior in ways that favor surveillance capitalists' commercial outcomes. New automated protocols are designed to influence and modify human behavior at scale as the means of production is subordinated to a new and more complex means of behavior modification. We see these new protocols at work in Facebook's contagion experiments and the Google-incubated augmented reality “game” Pokemon Go. The evidence of our psychic numbing is that only a few decades ago US society denounced mass behavior-modification techniques as unacceptable threats to individual autonomy and the democratic order. Today the same practices meet little resistance or even discussion as they are routinely and pervasively deployed in the march toward surveillance revenues. Finally, I consider surveillance capitalism's operations as a challenge to the elemental right to the future tense, which accounts for the individual's ability to imagine, intend, promise, and construct a future. It is an essential condition of free will and, more poignantly, of the inner resources from which we draw the will to will. I ask and answer the question How did they get away with it? Part II ends with a meditation on our once and future history. If industrial capitalism dangerously disrupted nature, what havoc might surveillance capitalism wreak on human nature?

Part III examines the rise of instrumentarian power; its expression in a ubiquitous sensate, networked, computational infrastructure that I call Big Other; and the novel and deeply antidemocratic vision of society and social relations that these produce. I argue that instrumentarianism is an unprecedented species of power that has defied comprehension in part because it has been subjected to the “horseless-carriage” syndrome. Instrumentarian power has been viewed through the old lenses of totalitarianism, obscuring what is different and dangerous. Totalitarianism was a transformation of the state into a project of total possession. Instrumentarianism and its materialization in Big Other signal the transformation of the market into a project of total certainty, an undertaking that is unimaginable outside the digital milieu and the logic of surveillance capitalism. In naming and analyzing instrumentarian power, I explore its intellectual origins in early theoretical physics and its later expression in the work of the radical behaviorist B. F. Skinner.

Part III follows surveillance capitalism into a second phase change. The first was the migration from the virtual to the real world. The second is a shift of focus from the real world to the social world, as society itself becomes the new object of extraction and control. Just as industrial society was imagined as a well-functioning machine, instrumentarian society is imagined as a human simulation of machine learning systems: a confluent hive mind in which each element learns and operates in concert with every other element. In the model of machine confluence, the “freedom” of each individual machine is subordinated to the knowledge of the system as a whole. Instrumentarian power aims to organize, herd, and tune society to achieve a similar social confluence, in which group pressure and computational certainty replace politics and democracy, extinguishing the felt reality and social function of an individualized existence. The youngest members of our societies already experience many of these destructive dynamics in their attachment to social media, the first global experiment in the human hive. I consider the implications of these developments for a second elemental right: the right to sanctuary. The human need for a space of inviolable refuge has persisted in civilized societies from ancient times but is now under attack as surveillance capital creates a world of “no exit” with profound implications for the human future at this new frontier of power.

In the final chapter I conclude that surveillance capitalism departs from the history of market capitalism in surprising ways, demanding both unimpeded freedom and total knowledge, abandoning capitalism's reciprocities with people and society, and imposing a totalizing collectivist vision of life in the hive, with surveillance capitalists and their data priesthood in charge of oversight and control. Surveillance capitalism and its rapidly accumulating instrumentarian power exceed the historical norms of capitalist ambitions, claiming dominion over human, societal, and political territories that range far beyond the conventional institutional terrain of the private firm or the market. As a result, surveillance capitalism is best described as a coup from above, not an overthrow of the state but rather an overthrow of the people's sovereignty and a prominent force in the perilous drift toward democratic deconsolidation that now threatens Western liberal democracies. Only “we the people” can reverse this course, first by naming the unprecedented, then by mobilizing new forms of collaborative action: the crucial friction that reasserts the primacy of a flourishing human future as the foundation of our information civilization. If the digital future is to be our home, then it is we who must make it so.

My methods combine those of a social scientist inclined toward theory, history, philosophy, and qualitative research with those of an essayist: an unusual but intentional approach. As an essayist, I occasionally draw upon my own experiences. I do this because the tendency toward psychic numbing is increased when we regard the critical issues examined here as just so many abstractions attached to technological and economic forces beyond our reach. We cannot fully reckon with the gravity of surveillance capitalism and its consequences unless we can trace the scars they carve into the flesh of our daily lives.

As a social scientist, I have been drawn to earlier theorists who encountered the unprecedented in their time. Reading from this perspective, I developed a fresh appreciation for the intellectual courage and pioneering insights of classic texts, in which authors such as Durkheim, Marx, and Weber boldly theorized industrial capitalism and industrial society as it rapidly constructed itself in their midst during the nineteenth and early twentieth centuries. My work here has also been inspired by mid-twentieth-century thinkers such as Hannah Arendt, Theodor Adorno, Karl Polanyi, Jean-Paul Sartre, and Stanley Milgram, who struggled to name the unprecedented in their time as they faced the comprehension-defying phenomena of totalitarianism and labored to grasp their trail of consequence for the prospects of humanity. My work has also been deeply informed by the many insights of visionary scholars, technology critics, and committed investigative journalists who have done so much to illuminate key points on the map that emerges here.

During the last seven years I have focused closely on the top surveillance capitalist firms and their growing ecosystems of customers, consultants, and competitors, all of it informed by the larger context of technology and data science that defines the Silicon Valley zeitgeist. This raises another important distinction. Just as surveillance capitalism is not the same as technology, this new logic of accumulation cannot be reduced to any single company or group of companies. The top five internet companies (Apple, Google, Amazon, Microsoft, and Facebook) are often regarded as a single entity with similar strategies and interests, but when it comes to surveillance capitalism, this is not the case.

First, it is necessary to distinguish between capitalism and surveillance capitalism. As I discuss in more detail in Chapter 3, that line is defined in part by the purposes and methods of data collection. When a firm collects behavioral data with permission and solely as a means to product or service improvement, it is committing capitalism but not surveillance capitalism. Each of the top five tech companies practices capitalism, but they are not all pure surveillance capitalists, at least not now.

For example, Apple has so far drawn a line, pledging to abstain from many of the practices that I locate in the surveillance capitalist regime. Its behavior in this regard is not perfect, the line is sometimes blurred, and Apple might well change or contradict its orientation. Amazon once prided itself on its customer alignment and the virtuous circle between data collection and service improvement. Both firms derive revenues from physical and digital products and therefore experience less financial pressure to chase surveillance revenues than the pure data companies. As we see in Chapter 9, however, Amazon appears to be migrating toward surveillance capitalism, with its new emphasis on “personalized” services and third-party revenues.

Whether or not a corporation has fully migrated to surveillance capitalism says nothing about other vital issues raised by its operations, from monopolistic and anticompetitive practices in the case of Amazon to pricing, tax strategies, and employment policies at Apple. Nor are there any guarantees for the future. Time will tell if Apple succumbs to surveillance capitalism, holds the line, or perhaps even expands its ambitions to anchor an effective alternative trajectory to a human future aligned with the ideals of individual autonomy and the deepest values of a democratic society.

One important implication of these distinctions is that even when our societies address capitalist harms produced by the tech companies, such as those related to monopoly or privacy, those actions do not ipso facto interrupt a firm's commitment to and continued elaboration of surveillance capitalism. For example, calls to break up Google or Facebook on monopoly grounds could easily result in establishing multiple surveillance capitalist firms, though at a diminished scale, and thus clear the way for more surveillance capitalist competitors. Similarly, reducing Google and Facebook's duopoly in online advertising does not reduce the reach of surveillance capitalism if online advertising market share is simply spread over five surveillance capitalist firms or fifty, instead of two. Throughout this book I focus on the unprecedented aspects of surveillance capitalist operations that must be contested and interrupted if this market form is to be contained and vanquished.

My focus in these pages tends toward Google, Facebook, and Microsoft. The aim here is not a comprehensive critique of these companies as such. Instead, I view them as the petri dishes in which the DNA of surveillance capitalism is best examined. As I suggested earlier, my goal is to map a new logic and its operations, not a company or its technologies. I move across the boundaries of these and other companies in order to compile the insights that can flesh out the map, just as earlier observers moved across many examples to grasp the once-new logics of managerial capitalism and mass production. It is also the case that surveillance capitalism was invented in the United States: in Silicon Valley and at Google. This makes it an American invention, which, like mass production, became a global reality. For this reason, much of this text focuses on developments in the US, although the consequences of these developments belong to the world.

In studying the surveillance capitalist practices of Google, Facebook, Microsoft, and other corporations, I have paid close attention to interviews, patents, earnings calls, speeches, conferences, videos, and company programs and policies. In addition, between 2012 and 2015 I interviewed 52 data scientists from 19 different companies with a combined 586 years of experience in high-technology corporations and startups, primarily in Silicon Valley. These interviews were conducted as I developed my “ground truth” understanding of surveillance capitalism and its material infrastructure. Early on I approached a small number of highly respected data scientists, senior software developers, and specialists in the “internet of things.” My interview sample grew as scientists introduced me to their colleagues. The interviews, sometimes over many hours, were conducted with the promise of confidentiality and anonymity, but my gratitude toward them is personal, and I publicly declare it here.

Finally, throughout this book you will read excerpts from W. H. Auden's Sonnets from China, along with the entirety of Sonnet XVIII. This cycle of Auden's poems is dear to me, a poignant exploration of humanity's mythic history, the perennial struggle against violence and domination, and the transcendent power of the human spirit and its relentless claim on the future.

PART I. THE FOUNDATIONS OF SURVEILLANCE CAPITALISM

CHAPTER TWO. AUGUST 9, 2011: SETTING THE STAGE FOR SURVEILLANCE CAPITALISM

The dangers and the punishments grew greater,
And the way back by angels was defended
Against the poet and the legislator.
W. H. AUDEN, SONNETS FROM CHINA, II

On August 9, 2011, three events separated by thousands of miles captured the bountiful prospects and gathering dangers of our emerging information civilization. First, Silicon Valley pioneer Apple promised a digital dream of new solutions to old economic and social problems, and finally surpassed Exxon Mobil as the world's most highly capitalized corporation. Second, a fatal police shooting in London sparked extensive rioting across the city, engulfing the country in a wave of violent protests. A decade of explosive digital growth had failed to mitigate the punishing austerity of neoliberal economics and the extreme inequality that it produced. Too many people had come to feel excluded from the future, embracing rage and violence as their only remedies. Third, Spanish citizens asserted their rights to a human future when they challenged Google by demanding “the right to be forgotten.” This milestone alerted the world to how quickly the cherished dreams of a more just and democratic digital future were shading into nightmare, and it foreshadowed a global political contest over the fusion of digital capabilities and capitalist ambitions. We relive that August day every day as in some ancient fable, doomed to retrace this looping path until the soul of our information civilization is finally shaped by democratic action, private power, ignorance, or drift.

I. The Apple Hack

Apple thundered onto the music scene in the midst of a pitched battle between demand and supply. On one side were young people whose enthusiasm for Napster and other forms of music file sharing expressed a new quality of demand: consumption my way, what I want, when I want it, where I want it. On the other side were music-industry executives who chose to instill fear and to crush that demand by hunting down and prosecuting some of Napster's most-ardent users. Apple bridged the divide with a commercially and legally viable solution that aligned the company with the changing needs of individuals while working with industry incumbents. Napster hacked the music industry, but Apple appeared to have hacked capitalism.

It is easy to forget just how dramatic Apple's hack really was. The company's profits soared largely on the strength of its iPod/iTunes/iPhone sales. Bloomberg Businessweek described Wall Street analysts as “befuddled” by this mysterious Apple “miracle.” As one gushed, “We can't even model out some of the possibilities.… It's like a religion.”1 Even today the figures are staggering: three days after the launch of the Windows-compatible iTunes platform in October 2003, listeners downloaded a million copies of the free iTunes software and paid for a million songs, prompting Steve Jobs to announce, “In less than one week we've broken every record and become the largest online music company in the world.”2 Within a month there were five million downloads, then ten million three months later, then twenty-five million three months after that. Four and a half years later, in January 2007, that number rose to two billion, and six years later, in 2013, it was 25 billion. In 2008 Apple surpassed Walmart as the world's largest music retailer. iPod sales were similarly spectacular, exploding from 1 million units per month after the music store's launch to 100 million less than four years later, when Apple subsumed the iPod's functions in its revolutionary iPhone, which drove another step-function of growth. A 2017 study of stock market returns concluded that Apple had generated more profit for investors than any other US company in the previous century.3

One hundred years before the iPod, mass production provided the gateway to a new era when it revealed a parallel universe of economic value hidden in new and still poorly understood mass consumers who wanted goods, but at a price they could afford. Henry Ford reduced the price of an automobile by 60 percent with a revolutionary industrial logic that combined high volume and low unit cost. He called it “mass production,” summarized in his famous maxim “You can have any color car you want so long as it's black.”

Later, GM's Alfred Sloan expounded on that principle: “By the time we have a product to show them [consumers], we are necessarily committed to selling that product because of the tremendous investment involved in bringing it to market.”4 The music industry's business model was built on telling its consumers what they would buy, just like Ford and Sloan. Executives invested in the production and distribution of CDs, and it was the CD that customers would have to purchase.

Henry Ford was among the first to strike gold by tapping into the new mass consumption with the Model T. As in the case of the iPod, Ford's Model T factory was pressed to meet the immediate explosion of demand. Mass production could be applied to anything, and it was. It changed the framework of production as it diffused throughout the economy and around the world, and it established the dominance of a new mass-production capitalism as the basis for wealth creation in the twentieth century.

The iPod/iTunes innovations flipped this century-old industrial logic, leveraging the new capabilities of digital technologies to invert the consumption experience. Apple rewrote the relationship between listeners and their music with a distinct commercial logic that, while familiar to us now, was also experienced as revolutionary when first introduced.

The Apple inversion depended on a few key elements. Digitalization made it possible to rescue valued assets (in this case, songs) from the institutional spaces in which they were trapped. The costly institutional procedures that Sloan had described were eliminated in favor of a direct route to listeners. In the case of the CD, for example, Apple bypassed the physical production of the product along with its packaging, inventory, storage, marketing, transportation, distribution, and physical retailing. The combination of the iTunes platform and the iPod device made it possible for listeners to continuously reconfigure their songs at will. No two iPods were the same, and an iPod one week was different from the same iPod another week, as listeners decided and re-decided the dynamic pattern. It was an excruciating development for the music industry and its satellites (retailers, marketers, etc.), but it was exactly what the new listeners wanted.

How should we understand this success? Apple's “miracle” is typically credited to its design and marketing genius. Consumers' eagerness to have “what I want, when, where, and how I want it” is taken as evidence of the demand for “convenience” and sometimes even written off as narcissism or petulance. In my view, these explanations pale against the unprecedented magnitude of Apple's accomplishments. We have contented ourselves for too long with superficial explanations of Apple's unprecedented fusion of capitalism and the digital rather than digging deeper into the historical forces that summoned this new form to life.

Just as Ford tapped into a new mass consumption, Apple was among the first to experience explosive commercial success by tapping into a new society of individuals and their demand for individualized consumption. The inversion implied a larger story of a commercial reformation in which the digital era finally offered the tools to shift the focus of consumption from the mass to the individual, liberating and reconfiguring capitalism's operations and assets. It promised something utterly new, urgently necessary, and operationally impossible outside the networked spaces of the digital. Its implicit promise of an advocacy-oriented alignment with our new needs and values was a confirmation of our inner sense of dignity and worth, ratifying the feeling that we matter. In offering consumers respite from an institutional world that was indifferent to their individual needs, it opened the door to the possibility of a new rational capitalism able to reunite supply and demand by connecting us to what we really want in exactly the ways that we choose.

As I shall argue in the coming chapters, the same historical conditions that sent the iPod on its wild ride summoned the emancipatory promise of the internet into our everyday lives as we sought remedies for inequality and exclusion. Of most significance for our story, these same conditions would provide important shelter for surveillance capitalism's ability to root and flourish. More precisely, the Apple miracle and surveillance capitalism each owes its success to the destructive collision of two opposing historical forces. One vector belongs to the longer history of modernization and the centuries-long societal shift from the mass to the individual. The opposing vector belongs to the decades-long elaboration and implementation of the neoliberal economic paradigm: its political economics, its transformation of society, and especially its aim to reverse, subdue, impede, and even destroy the individual urge toward psychological self-determination and moral agency. The next sections briefly sketch the basic contours of this collision, establishing terms of reference that we will return to throughout the coming chapters as we explore surveillance capitalism's rapid rise to dominance.

II. The Two Modernities

Capitalism evolves in response to the needs of people in a time and place. Henry Ford was clear on this point: “Mass production begins in the perception of a public need.”5 At a time when the Detroit automobile manufacturers were preoccupied with luxury vehicles, Ford stood alone in his recognition of a nation of newly modernizing individuals (farmers, wage earners, and shopkeepers) who had little and wanted much, but at a price they could afford. Their “demand” issued from the same conditions of existence that summoned Ford and his men as they discovered the transformational power of a new logic of standardized, high-volume, low-unit-cost production. Ford's famous “five-dollar day” was emblematic of a systemic logic of reciprocity. In paying assembly-line workers higher wages than anyone had yet imagined, he recognized that the whole enterprise of mass production rested upon a thriving population of mass consumers.

Although the market form and its bosses had many failings and produced many violent facts, its populations of newly modernizing individuals were valued as the necessary sources of customers and employees. It depended upon its communities in ways that would eventually lead to a range of institutionalized reciprocities. On the outside the drama of access to affordable goods and services was bound by democratic measures and methods of oversight that asserted and protected the rights and safety of workers and consumers. On the inside were durable employment systems, career ladders, and steady increases in wages and benefits.6 Indeed, considered from the vantage point of the last forty years, during which this market form was systematically deconstructed, its reciprocity with the social order, however vexed and imperfect, appears to have been one of its most-salient features.

The implication is that new market forms are most productive when they are shaped by an allegiance to the actual demands and mentalities of people. The great sociologist Emile Durkheim made this point at the dawn of the twentieth century, and his insight will be a touchstone for us throughout this book. Observing the dramatic upheavals of industrialization in his time (factories, specialization, the complex division of labor), Durkheim understood that although economists could describe these developments, they could not grasp their cause. He argued that these sweeping changes were “caused” by the changing needs of people and that economists were (and remain) systematically blind to these social facts:

The division of labor appears to us otherwise than it does to economists. For them, it essentially consists in greater production. For us, this greater productivity is only a necessary consequence, a repercussion of the phenomenon. If we specialize, it is not to produce more, but it is to enable us to live in the new conditions of existence that have been made for us.7

The sociologist identified the perennial human quest to live effectively in our “conditions of existence” as the invisible causal power that summons the division of labor, technologies, work organization, capitalism, and ultimately civilization itself. Each is forged in the same crucible of human need that is produced by what Durkheim called the always intensifying “violence of the struggle” for effective life: “If work becomes more divided,” it is because the “struggle for existence is more acute.”8 The rationality of capitalism reflects this alignment, however imperfect, with the needs that people experience as they try to live their lives effectively, struggling with the conditions of existence that they encounter in their time and place.

When we look through this lens, we can see that those eager customers for Ford's incredible Model T and the new consumers of iPods and iPhones are expressions of the conditions of existence that characterized their era. In fact, each is the fruit of distinct phases of a centuries-long process known as “individualization” that is the human signature of the modern era. Ford's mass consumers were members of what has been called the “first modernity,”9 but the new conditions of the “second modernity” produced a new kind of individual for whom the Apple inversion, and the many digital innovations that followed, would become essential. This second modernity summoned the likes of Google and Facebook into our lives, and, in an unexpected twist, helped to enable the surveillance capitalism that would follow.

What are these modernities and how do they matter to our story? The advent of the individual as the locus of moral agency and choice initially occurred in the West, where the conditions for this emergence first took hold. First let's establish that the concept of “individualization” should not be confused with the neoliberal ideology of “individualism” that shifts all responsibility for success or failure to a mythical, atomized, isolated individual, doomed to a life of perpetual competition and disconnected from relationships, community, and society. Neither does it refer to the psychological process of “individuation” that is associated with the lifelong exploration of self-development. Instead, individualization is a consequence of long-term processes of modernization.10

Until the last few minutes of human history, each life was foretold in blood and geography, sex and kin, rank and religion. I am my mother's daughter. I am my father's son. The sense of the human being as an individual emerged gradually over centuries, clawed from this ancient vise. Around two hundred years ago, we embarked upon the first modern road where life was no longer handed down one generation to the next according to the traditions of village and clan. This “first modernity” marks the time when life became “individualized” for great numbers of people as they separated from traditional norms, meanings, and rules.11 That meant each life became an open-ended reality to be discovered rather than a certainty to be enacted. Even where the traditional world remains intact for many people today, it can no longer be experienced as the only possible story.

I often think about the courage of my great-grandparents. What mixture of sadness, terror, and exhilaration did they feel when in 1908, determined to escape the torments of the Cossacks in their tiny village outside of Kiev, they packed their five children, including my four-year-old grandfather Max, and all their belongings into a wagon and pointed the horses toward a steamer bound for America? Like millions of other pioneers of this first modernity, they escaped a still-feudal world and found themselves improvising a profoundly new kind of life. Max would later marry Sophie and build a family far from the rhythms of the villages that birthed them. The Spanish poet Antonio Machado captured the exhilaration and daring of these first-modernity individuals in his famous song: “Traveler, there is no road; the road is made as you go.” This is what “search” has meant: a journey of exploration and self-creation, not an instant swipe to already composed answers.

Still, the new industrial society retained many of the hierarchical motifs of the older feudal world in its patterns of affiliation based on class, race, occupation, religion, ethnicity, sex, and the leviathans of mass society: its corporations, workplaces, unions, churches, political parties, civic groups, and school systems. This new world order of the mass and its bureaucratic logic of concentration, centralization, standardization, and administration still provided solid anchors, guidelines, and goals for each life.

Compared to their parents and all the generations before, Sophie and Max had to make things up on their own, but not everything. Sophie knew she would raise the family. Max knew he would earn their living. You adapted to what the world had on offer, and you followed the rules. Nor did anyone ask your opinion or listen if you spoke. You were expected to do what you were supposed to do, and little by little you made your way. You raised a nice family, and eventually you'd have a house, car, washing machine, and refrigerator. Mass production pioneers like Henry Ford and Alfred Sloan had found a way to get you these things at a price you could afford.

If there was anxiety, it reflected the necessity of living up to the requirements of one's roles. One was expected to suppress any sense of self that spilled over the edges of the given social role, even at considerable psychic cost. Socialization and adaptation were the materials of a psychology and sociology that regarded the nuclear family as the “factory” for the “production of personalities” ready-made for conformity to the social norms of mass society.12 Those “factories” also produced a great deal of pain: the feminine mystique, closeted homosexuals, church-going atheists, and back-alley abortions. Eventually, though, they even produced people like you and me.

When I set out on the open road, there were few answers, nothing to emulate, no compass to follow except for the values and dreams that I carried inside me. I was not alone; the road was filled with so many others on the same kind of journey. The first modernity birthed us, but we brought a new mentality to life: a “second modernity.”13 What began as a modern migration from traditional lifeways bloomed into a new society of people born to a sense of psychological individuality, with its double-edged birthright of liberation and necessity. We experience both the right and the requirement to choose our own lives. No longer content to be anonymous members of the mass, we feel our entitlement to self-determination, an obvious truth to us that would have been an impossible act of hubris for Sophie and Max. This mentality is an extraordinary achievement of the human spirit, even as it can be a life sentence to uncertainty, anxiety, and stress.

Since the second half of the twentieth century, the individualization story has taken this new turn toward a “second modernity.” Industrial modernity and the practices of mass production capitalism at its core produced more wealth than had ever been imagined possible. Where democratic politics, distributional policies, access to education and health care, and strong civil society institutions complemented that wealth, a new “society of individuals” first began to emerge. Hundreds of millions of people gained access to experiences that had once been the preserve of a tiny elite: university education, travel, improved life expectancy, disposable income, rising standards of living, broad access to consumer goods, varied communication and information flows, and specialized, intellectually demanding work.

The hierarchical social compact and mass society of the first modernity promised predictable rewards, but their very success was the knife that cut us loose and sent us tumbling onto the shores of the second modernity, propelling us toward more-intricate and richly patterned lives. Education and knowledge work increased mastery of language and thought, the tools with which we create personal meaning and form our own opinions. Communication, information, consumption, and travel stimulated individual self-consciousness and imaginative capabilities, informing perspectives, values, and attitudes in ways that could no longer be contained by predefined roles or group identity. Improved health and longer life spans provided the time for a self-life to deepen and mature, fortifying the legitimacy of personal identity over and against a priori social norms.

Even when we revert to traditional roles, these are choices now rather than absolute truths imposed at birth. As the great clinician of identity, Erik Erikson, once described it, “The patient of today suffers most under the problem of what he should believe and who he should - or… might - be or become; while the patient of early psychoanalysis suffered most under inhibitions which prevented him from being what and who he thought he knew he was.”14 This new mentality has been most pronounced in wealthier countries, but research shows significant pluralities of second-modernity individuals in nearly every region of the world.15

The first modernity suppressed the growth and expression of self in favor of collective solutions, but by the second modernity, the self is all we have. The new sense of psychological sovereignty broke upon the world long before the internet appeared to amplify its claims. We learn through trial and error how to stitch together our lives. Nothing is given. Everything must be reviewed, renegotiated, and reconstructed on the terms that make sense to us: family, religion, sex, gender, morality, marriage, community, love, nature, social connections, political participation, career, food…

Indeed, it was this new mentality and its demands that summoned the internet and the burgeoning information apparatus into our everyday lives. The burdens of life without a fixed destiny turned us toward the empowering information-rich resources of the new digital milieu as it offered new ways to amplify our voices and forge our own chosen patterns of connection. So profound is this phenomenon that one can say without exaggeration that the individual as the author of his or her own life is the protagonist of our age, whether we experience this fact as emancipation or affliction.16

Western modernity had formed around a canon of principles and laws that confer inviolable individual rights and acknowledge the sanctity of each individual life.17 However, it was not until the second modernity that felt experience began to catch up with formal law. This felt truth has been expressed in new demands to make actual in everyday life what is already established in law.18

In spite of its liberating potential, the second modernity was slated to become a hard place to live, and our conditions of existence today reflect this trouble. Some of the challenges of the second modernity arise from the inevitable costs associated with the creation and sustenance of one's own life, but second-modernity instability is also the result of institutionalized shifts in economic and social policies and practices associated with the neoliberal paradigm and its rise to dominance. This far-reaching paradigm has been aimed at containing, rechanneling, and reversing the secular wave of second-modernity claims to self-determination and the habitats in which those claims can thrive. We live in this collision between a centuries-old story of modernization and a decades-old story of economic violence that thwarts our pursuit of effective life.

There is a rich and compelling literature that documents this turning point in economic history, and my aim here is simply to call attention to some of the themes in this larger narrative that are vital to our understanding of the collision: the condition of existence that summoned both the Apple “miracle” and surveillance capitalism's subsequent gestation and growth.19

III. The Neoliberal Habitat

The mid-1970s saw the postwar economic order under siege from stagnation, inflation, and sharply reduced growth, most markedly in the US and the UK. There were also new pressures on the political order as second-modernity individuals (especially students, young workers, African Americans, women, Latinos, and other marginalized groups) mobilized around demands for equal rights, voice, and participation. In the US the Vietnam War was a focal point of social unrest, and the corruption exposed by the Watergate scandal triggered public insistence on political reform. In the UK inflation had strained industrial relations beyond the breaking point. In both countries the specter of apparently intractable economic decay combined with vocal new demands on the democratic social compact produced confusion, anxiety, and desperation among elected officials ill-equipped to judge why once-reliable Keynesian policies had failed to reverse the course.

Neoliberal economists had been waiting in the wings for this opportunity, and their ideas flowed into the “policy vacuum” that now bedeviled both governments.20 Led by the Austrian economist Friedrich Hayek, fresh from his 1974 Nobel Prize, and his American counterpart Milton Friedman, who received the Nobel two years later, they had honed their radical free-market economic theory, political ideology, and pragmatic agenda throughout the postwar period at the fringe of their profession, under the shadow of Keynesian domination, and now their time had come.21

The free-market creed originated in Europe as a sweeping defense against the threat of totalitarian and communist collectivist ideologies. It aimed to revive acceptance of a self-regulating market as a natural force of such complexity and perfection that it demanded radical freedom from all forms of state oversight. Hayek explained the necessity of absolute individual and collective submission to the exacting disciplines of the market as an unknowable “extended order” that supersedes the legitimate political authority vested in the state: “Modern economics explains how such an extended order… constitutes an information-gathering process… that no central planning agency, let alone any individual, could know as a whole, possess, or control.…”22 Hayek and his ideological brethren insisted on a capitalism stripped down to its raw core, unimpeded by any other force and impervious to any external authority. Inequality of wealth and rights was accepted and even celebrated as a necessary feature of a successful market system and as a force for progress.23 Hayek's ideology provided the intellectual superstructure and legitimation for a new theory of the firm that became another crucial antecedent to the surveillance capitalist corporation: its structure, moral content, and relationship to society.

The new conception was operationalized by economists Michael Jensen and William Meckling. Leaning heavily on Hayek's work, the two scholars took an ax to the pro-social principles of the twentieth-century corporation, an ax that became known as the “shareholder value movement.” In 1976 Jensen and Meckling published a landmark article in which they reinterpreted the manager as a sort of parasite feeding off the host of ownership: unavoidable, perhaps, but nonetheless an obstacle to shareholder wealth. They boldly argued that the structural disconnect between owners and managers “can result in the value of the firm being substantially lower than it otherwise could be.”24 If managers suboptimized the value of the firm to its owners in favor of their own preferences and comfort, it was only rational for them to do so. The solution, these economists argued, was to assert the market's signal of value, the share price, as the basis for a new incentive structure intended to finally and decisively align managerial behavior with owners' interests. Managers who failed to bend to the ineffable signals of Hayek's “extended order” would quickly become prey to the “barbarians at the gate” in a new and vicious hunt for unrealized market value.

In the “crisis of democracy” zeitgeist, the neoliberal vision and its reversion to market metrics was deeply attractive to politicians and policy makers, both as the means to evade political ownership of tough economic choices and because it promised to impose a new kind of order where disorder was feared.25 The absolute authority of market forces would be enshrined as the ultimate source of imperative control, displacing democratic contest and deliberation with an ideology of atomized individuals sentenced to perpetual competition for scarce resources. The disciplines of competitive markets promised to quiet unruly individuals and even transform them back into subjects too preoccupied with survival to complain.

As the old collectivist enemies had receded, new ones took their place: state regulation and oversight, social legislation and welfare policies, labor unions and the institutions of collective bargaining, and the principles of democratic politics. Indeed, all these were to be replaced by the market's version of truth, and competition would be the solution to growth. The new aims would be achieved through supply-side reforms, including deregulation, privatization, and lower taxes.

Thirty-five years before Hayek and Friedman's ascendance, the great historian Karl Polanyi wrote eloquently on the rise of the market economy. Polanyi's studies led him to conclude that the operations of a self-regulating market are profoundly destructive when allowed to run free of countervailing laws and policies. He described the double movement: “a network of measures and policies… integrated into powerful institutions designed to check the action of the market relative to labor, land, and money.”26

The double movement, Polanyi argued, supports the market form while tethering it to society: balancing, moderating, and mitigating its destructive excesses. Polanyi observed that such countermeasures emerged spontaneously in every European society during the second half of the nineteenth century. Each constructed legislative, regulatory, and institutional solutions to oversee contested new arenas such as workers' compensation, factory inspection, municipal trading, public utilities, food safety, child labor, and public safety.

In the US the double movement was achieved through decades of social contest that harnessed industrial production, however imperfectly, to society's needs. It appeared in the trust busting, civil society, and legislative reforms of the Progressive Era. Later it was elaborated in the legislative, juridical, social, and tax initiatives of the New Deal and the institutionalization of Keynesian economics during the post-World War II era: labor market, tax, and social welfare policies that ultimately increased economic and social equality.27 The double movement was further developed in the legislative initiatives of the Great Society, especially civil rights law and landmark environmental legislation. Many scholars credit such countermeasures with the success of market democracy in the US and Europe, a political economics that proved far more adaptive in its ability to produce reciprocities of demand and supply than either leftist theorists or even Polanyi had imagined, and by mid-century the large corporation appeared to be a deeply rooted and durable modern social institution.28

The double movement was scheduled for demolition under the neoliberal flag, and implementation began immediately. Shortly after Jensen and Meckling published their pathbreaking analysis in 1976, President Jimmy Carter initiated the first significant efforts to radically align the corporation with Wall Street's market metrics, targeting the airline, transportation, and financial sectors with a bold program of deregulation. What began as a “ripple” turned into “a tidal wave that washed away controls from large segments of the economy in the last two decades of the twentieth century.”29 The implementation that began with Carter would define the Reagan and Thatcher eras, virtually every subsequent US presidency, and much of the rest of the world, as the new fiscal and social policies spread to Europe and other regions in varying degrees.30

Thus began the disaggregation and diminishment of the US public firm.31 The public corporation as a social institution was reinterpreted as a costly error, and its long-standing reciprocities with customers and employees were recast as destructive violations of market efficiency. Financial carrots and sticks persuaded executives to dismember and shrink their companies, and the logic of capitalism shifted from the profitable production of goods and services to increasingly exotic forms of financial speculation. The disciplines imposed by the new market operations stripped capitalism down to its raw core, and by 1989 Jensen confidently proclaimed the “eclipse of the public corporation.”32

By the turn of the century, as the foundational mechanisms of surveillance capitalism were just beginning to take shape, “shareholder value maximization” was widely accepted as the “objective function” of the firm.33 These principles, culled from a once-extremist philosophy, were canonized as standard practice across commercial, financial, and legal domains.34 By 2000, US public corporations employed fewer than half as many Americans as they did in 1970.35 In 2009 there were only half as many public firms as in 1997. The public corporation had become “unnecessary for production, unsuited for stable employment and the provision of social welfare services, and incapable of providing a reliable long-term return on investment.”36 In this process the cult of the “entrepreneur” would rise to near-mythic prominence as the perfect union of ownership and management, replacing the rich existential possibilities of the second modernity with a single glorified template of audacity, competitive cunning, dominance, and wealth.

IV. The Instability of the Second Modernity

On August 9, 2011, around the same time that cheers erupted in Apple's conference room, 16,000 police officers flooded the streets of London, determined to quell “the most widespread and prolonged breakdown of order in London's history since the Gordon riots of 1780.”37 The rioting had begun four nights earlier when a peaceful vigil triggered by the police shooting of a young man suddenly turned violent. In the days that followed, the number of rioters swelled as looting and arson spread to twenty-two of London's thirty-two boroughs and other major cities across Britain.38 Over four days of street action, thousands of people caused property damage of over $50 million, and 3,000 people were arrested.

Even as Apple's ascension appeared to ratify the claims of second-modernity individuals, the streets of London told the grim legacy of a three-decade experiment in economic growth through exclusion. One week after the rioting, an article by sociologist Saskia Sassen in the Daily Beast observed that “if there's one underlying condition, it has to do with the unemployment and bitter poverty among people who desire to be part of the middle class and who are keenly aware of the sharp inequality between themselves and their country's wealthy elite. These are in many ways social revolutions with a small ‘r,' protests against social conditions that have become unbearable.”39

What were the social conditions that had become so unbearable? Many analysts agreed that the tragedy of Britain's riots was set into motion by neoliberalism's successful transformation of society: a program that was most comprehensively executed in the UK and the US. Indeed, research from the London School of Economics based on interviews with 270 people who had participated in the rioting reported on the predominant theme of inequality: “no job, no money.”40 The terms of reference in nearly every study sound the same drumbeat: lack of opportunity, lack of access to education, marginalization, deprivation, grievance, hopelessness.41 And although the London riots differed substantially from other protests that preceded and followed, most notably the Indignados movement that began with a large-scale public mobilization in Madrid in May 2011 and the Occupy movement that would emerge on September 17 in Wall Street's Zuccotti Park, they shared a point of origin in the themes of economic inequality and exclusion.42

The US, the UK, and most of Europe entered the second decade of the twenty-first century facing economic and social inequalities more extreme than anything since the Gilded Age and comparable to some of the world's poorest countries.43 Despite a decade of explosive digital growth that included the Apple miracle and the penetration of the internet into everyday life, dangerous social divisions suggested an even more stratified and antidemocratic future. “In the age of new consensus financial policy stabilization,” one US economist wrote, “the economy has witnessed the largest transfer of income to the top in history.”44 A sobering 2016 report from the International Monetary Fund warned of instability, concluding that the global trends toward neoliberalism “have not delivered as expected.” Instead, inequality had significantly diminished “the level and the durability of growth” while increasing volatility and creating permanent vulnerability to economic crisis.45

The quest for effective life had been driven to the breaking point under the aegis of market freedom. Two years after the North London riots, research in the UK showed that by 2013, poverty fueled by lack of education and unemployment already excluded nearly a third of the population from routine social participation.46 Another UK report concluded, “Workers on low and middle incomes are experiencing the biggest decline in their living standards since reliable records began in the mid-19th Century.”47 By 2015, austerity measures had eliminated 19 percent, or 18 billion pounds, from the budgets of local authorities, had forced an 8 percent cut in child protection spending, and had left 150,000 pensioners without access to vital services.48 By 2014 nearly half of the US population lived in functional poverty, with the highest wage in the bottom half of earners at about $34,000.49 A 2012 US Department of Agriculture survey showed that close to 49 million people lived in “food-insecure” households.50

In Capital in the Twenty-First Century, the French economist Thomas Piketty integrated years of income data to derive a general law of accumulation: the rate of return on capital tends to exceed the rate of economic growth. This tendency, summarized as r > g, is a dynamic that produces ever-more-extreme income divergence and with it a range of antidemocratic social consequences long predicted as harbingers of an eventual crisis of capitalism. In this context, Piketty cites the ways in which financial elites use their outsized earnings to fund a cycle of political capture that protects their interests from political challenge.51 Indeed, a 2015 New York Times report concluded that 158 US families and their corporations provided almost half ($176 million) of all the money raised by both political parties in the early phase of the 2016 presidential campaign, primarily in support of “Republican candidates who have pledged to pare regulations, cut taxes… and shrink entitlements.”52 Historians, investigative journalists, economists, and political scientists have analyzed the intricate facts of a turn toward oligarchy, shining a light on the systematic campaigns of public influence and political capture that helped drive and preserve an extreme free-market agenda at the expense of democracy.53
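To see why r > g produces divergence, here is a deliberately minimal sketch (my own illustration, not Piketty's full apparatus), under the simplifying assumption that the returns on a stock of wealth W are entirely reinvested while national income Y grows at rate g:

\[
W_t = (1+r)^t\,W_0, \qquad Y_t = (1+g)^t\,Y_0
\quad\Longrightarrow\quad
\frac{W_t}{Y_t} = \left(\frac{1+r}{1+g}\right)^{t}\frac{W_0}{Y_0}.
\]

Whenever r exceeds g, the wealth-to-income ratio grows without bound under this assumption; with only partial reinvestment the divergence is slower, but the direction is the same.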

A precis of Piketty's extensive research may be stated simply: capitalism should not be eaten raw. Capitalism, like sausage, is meant to be cooked by a democratic society and its institutions because raw capitalism is antisocial. As Piketty warns, “A market economy… if left to itself… contains powerful forces of divergence, which are potentially threatening to democratic societies and to the values of social justice on which they are based.”54 Many scholars have taken to describing these new conditions as neofeudalism, marked by the consolidation of elite wealth and power far beyond the control of ordinary people and the mechanisms of democratic consent.55 Piketty calls it a return to “patrimonial capitalism,” a reversion to a premodern society in which one's life chances depend upon inherited wealth rather than meritocratic achievement.56

We now have the tools to grasp the collision in all of its destructive complexity: what is unbearable is that economic and social inequalities have reverted to the preindustrial “feudal” pattern but that we, the people, have not. We are not illiterate peasants, serfs, or slaves. Whether “middle class” or “marginalized,” we share the collective historical condition of individualized persons with complex social experiences and opinions. We are hundreds of millions or even billions of second-modernity people whom history has freed both from the once-immutable facts of a destiny told at birth and from the conditions of mass society. We know ourselves to be worthy of dignity and the opportunity to live an effective life. This is existential toothpaste that, once liberated, cannot be squeezed back into the tube. Like a detonation's rippling sound waves of destruction, the reverberations of pain and anger that have come to define our era arise from this poisonous collision between inequality's facts and inequality's feelings.57

Back in 2011, those 270 interviews with participants in the London riots also reflected the scars of this collision. “They expressed it in different ways,” the report concludes, “but at heart what the rioters talked about was a pervasive sense of injustice. For some, this was economic: the lack of a job, money, or opportunity. For others it was more broadly social, not just the absence of material things, but how they felt they were treated compared with others.…” The “sense of being invisible” was “widespread.” As one woman explained, “The young these days need to be heard. It's got to be justice for them.” And a young man reflected, “When no one cares about you you're gonna eventually make them care, you're gonna cause a disturbance.”58 Other analyses cite “the denial of dignity” expressed in the wordless anger of the North London rampage.59

When the Occupy movement erupted on another continent far from London's beleaguered neighborhoods, it appeared to have little in common with the violent eruptions that August. The 99 percent that Occupy intended to represent is not marginalized; on the contrary, the very legitimacy of Occupy was its claim to supermajority status. Nevertheless, Occupy revealed a similar conflict between inequality's facts and inequality's feelings, expressed in a creatively individualized political culture that insisted on “direct democracy” and “horizontal leadership.”60 Some analysts concluded that it was this conflict that ultimately crippled the movement, with its “inner core” of leaders unwilling to compromise their highly individualized approach in favor of the strategies and tactics required for a durable mass movement.61 However, one thing is certain: there were no serfs in Zuccotti Park. On the contrary, as one close observer of the movement ruminated, “What is different is that from the start very large sections of we, the people, proved to be wiser than our rulers. We saw further and proved to have better judgment, thus reversing the traditional legitimacy of our elite governance that those in charge know better than the unwashed.”62

This is the existential contradiction of the second modernity that defines our conditions of existence: we want to exercise control over our own lives, but everywhere that control is thwarted. Individualization has sent each one of us on the prowl for the resources we need to ensure effective life, but at each turn we are forced to do battle with an economics and politics from whose vantage point we are but ciphers. We live in the knowledge that our lives have unique value, but we are treated as invisible. As the rewards of late-stage financial capitalism slip beyond our grasp, we are left to contemplate the future in a bewilderment that erupts into violence with increasing frequency. Our expectations of psychological self-determination are the grounds upon which our dreams unfold, so the losses we experience in the slow burn of rising inequality, exclusion, pervasive competition, and degrading stratification are not only economic. They slice us to the quick in dismay and bitterness because we know ourselves to be worthy of individual dignity and the right to a life on our own terms.

The deepest contradiction of our time, the social philosopher Zygmunt Bauman wrote, is “the yawning gap between the right of self-assertion and the capacity to control the social settings which render such self-assertion feasible. It is from that abysmal gap that the most poisonous effluvia contaminating the lives of contemporary individuals emanate.” Any new chapter in the centuries-old story of human emancipation, he insisted, must begin here. Can the instability of the second modernity give way to a new synthesis: a third modernity that transcends the collision, offering a genuine path to a flourishing and effective life for the many, not just the few? What role will information capitalism play?

V. A Third Modernity

Apple once launched itself into that “abysmal gap,” and for a time it seemed that the company's fusion of capitalism and the digital might set a new course toward a third modernity. The promise of an advocacy-oriented digital capitalism during the first decade of our century galvanized second-modernity populations around the world. New companies such as Google and Facebook appeared to bring the promise of the inversion to life in new domains of critical importance, rescuing information and people from the old institutional confines, enabling us to find what and whom we wanted, when and how we wanted to search or connect.

The Apple inversion implied trustworthy relationships of advocacy and reciprocity embedded in an alignment of commercial operations with consumers' genuine interests. It held out the promise of a new digital market form that might transcend the collision: an early intimation of a third-modernity capitalism summoned by the self-determining aspirations of individuals and indigenous to the digital milieu. The opportunity for “my life, my way, at a price I can afford” was the human promise that quickly lodged at the very heart of the commercial digital project, from iPhones to one-click ordering to massive open online courses to on-demand services to hundreds of thousands of web-based enterprises, apps, and devices.

There were missteps, shortfalls, and vulnerabilities, to be sure. The potential significance of Apple's tacit new logic was never fully grasped, even by the company itself. Instead, the corporation produced a steady stream of contradictions that signaled business as usual. Apple was criticized for extractive pricing policies, offshoring jobs, exploiting its retail staff, abrogating responsibility for factory conditions, colluding to depress wages via illicit noncompete agreements in employee recruitment, institutionalized tax evasion, and a lack of environmental stewardship, just to name a few of the violations that seemed to negate the implicit social contract of its own unique logic.

When it comes to genuine economic mutation, there is always a tension between the new features of the form and its mother ship. A combination of old and new is reconfigured in an unprecedented pattern. Occasionally, the elements of a mutation find the right environment in which to be “selected” for propagation. This is when the new form stands a chance of becoming fully institutionalized and establishes its unique migratory path toward the future. But it's even more likely that potential mutations meet their fate in “transition failure,” drawn back by the gravitational pull of established practices.63

Was the Apple inversion a powerful new economic mutation running the gauntlet of trial and error on its way to fulfilling the needs of a new age, or was it a case of transition failure? In our enthusiasm and growing dependency on technology, we tended to forget that the same forces of capital from which we had fled in the “real” world were rapidly claiming ownership of the wider digital sphere. This left us vulnerable and caught unawares when the early promise of information capitalism took a darker turn. We celebrated the promise of “help is on the way” while troubling questions broke through the haze with increasing regularity, each one followed by a predictable eruption of dismay and anger.

Why did Google's Gmail, launched in 2004, scan private correspondence to generate advertising? As soon as the first Gmail user saw the first ad targeted to the content of her private correspondence, public reaction was swift. Many were repelled and outraged; others were confused. As Google chronicler Steven Levy put it, “By serving ads related to content, Google seemed almost to be reveling in the fact that users' privacy was at the mercy of the policies and trustworthiness of the company that owned the servers. And since those ads made profits, Google was making it clear that it would exploit the situation.”64

In 2007 Facebook launched Beacon, touting it as “a new way to socially distribute information.” Beacon enabled Facebook advertisers to track users across the internet, disclosing users' purchases to their personal networks without permission. Most people were outraged by the company's audacity, both in tracking them online and in usurping their ability to control the disclosure of their own facts. Facebook founder Mark Zuckerberg shut the program down under duress, but by 2010 he declared that privacy was no longer a social norm and then congratulated himself for relaxing the company's “privacy policies” to reflect this self-interested assertion of a new social condition.65 Zuckerberg had apparently never read user Jonathan Trenn's rendering of his Beacon experience:

I purchased a diamond engagement ring set from overstock in preparation for a New Year's surprise for my girlfriend.… Within hours, I received a shocking call from one of my best friends of surprise and “congratulations” for getting engaged.(!!!) Imagine my horror when I learned that overstock had published the details of my purchase (including a link to the item and its price) on my public Facebook newsfeed, as well as notifications to all of my friends. ALL OF MY FRIENDS, including my girlfriend, and all of her friends, etc.… ALL OF THIS WAS WITHOUT MY CONSENT OR KNOWLEDGE. I am totally distressed that my surprise was ruined, and what was meant to be something special and a lifetime memory for my girlfriend and I was destroyed by a totally underhanded and infuriating privacy invasion. I want to wring the neck of the folks at overstock and facebook who thought that this was a good idea. It sets a terrible precedent on the net, and I feel that it ruined a part of my life.66

Among the many violations of advocacy expectations, ubiquitous “terms-of-service agreements” were among the most pernicious.67 Legal experts call these “contracts of adhesion” because they impose take-it-or-leave-it conditions on users that stick to them whether they like it or not. Online “contracts” such as terms-of-service or terms-of-use agreements are also referred to as “click-wrap” because, as a great deal of research shows, most people get wrapped in these oppressive contract terms by simply clicking on the box that says “I agree” without ever reading the agreement.68 In many cases, simply browsing a website obligates you to its terms-of-service agreement even if you don't know it. Scholars point out that these digital documents are excessively long and complex in part to discourage users from actually reading the terms, safe in the knowledge that most courts have upheld the legitimacy of click-wrap agreements despite the obvious lack of meaningful consent.69 US Supreme Court Chief Justice John Roberts admitted that he “doesn't read the computer fine print.”70 Adding insult to injury, terms of service can be altered unilaterally by the firm at any time, without specific user knowledge or consent, and the terms typically implicate other companies (partners, suppliers, marketers, advertising intermediaries, etc.) without stating or accepting responsibility for their terms of service. These “contracts” impose an unwinnable infinite regress upon the user that law professor Nancy Kim describes as “sadistic.”

Legal scholar Margaret Radin observes the Alice-in-Wonderland quality of such “contracts.” Indeed, the sacred notions of “agreement” and “promise” so critical to the evolution of the institution of contract since Roman times have devolved to a “talismanic” signal “merely indicating that the firm deploying the boilerplate wants the recipient to be bound.”71 Radin calls this “private eminent domain,” a unilateral seizure of rights without consent. She regards such “contracts” as a moral and democratic “degradation” of the rule of law and the institution of contract, a perversion that restructures the rights of users granted through democratic processes, “substituting for them the system that the firm wishes to impose.… Recipients must enter a legal universe of the firm's devising in order to engage in transactions with the firm.”72

The digital milieu has been essential to these degradations. Kim points out that paper documents once imposed natural restraints on contracting behavior simply by virtue of their cost to produce, distribute, and archive. Paper contracts require a physical signature, limiting the burden a firm is likely to impose on a customer by requiring her to read multiple pages of fine print. Digital terms, in contrast, are “weightless.” They can be expanded, reproduced, distributed, and archived at no additional cost. Once firms understood that the courts were disposed to validate their click-wrap and browse-wrap agreements, there was nothing to stop them from expanding the reach of these degraded contracts “to extract from consumers additional benefits unrelated to the transaction.”73 This coincided with the discovery of behavioral surplus that we examine in Chapter 3, as terms-of-service agreements were extended to include baroque and perverse “privacy policies,” establishing another infinite regress of these terms of expropriation. Even the former Federal Trade Commission Chairperson Jon Leibowitz publicly stated, “We all agree that consumers don't read privacy policies.”74 In 2008 two Carnegie Mellon professors calculated that a reasonable reading of all the privacy policies that one encounters in a year would require 76 full workdays at a national opportunity cost of $781 billion.75 The numbers are much higher today. Still, most users remain unaware of these “rapacious” terms that, as Kim puts it, allow firms “to acquire rights without bargaining and to stealthily establish and embed practices before users, and regulators, realize what has happened.”76

At first, it had seemed that the new internet companies had simply failed to grasp the moral, social, and institutional requirements of their own economic logic. But with each corporate transgression, it became more difficult to ignore the possibility that the pattern of violations signaled a feature, not a bug. Although the Apple miracle contained the seeds of economic reformation, it was poorly understood: a mystery even to itself. Long before the death of its legendary founder, Steve Jobs, its frequent abuses of user expectations raised questions about how well the corporation understood the deep structure and historic potential of its own creations. The dramatic success of Apple's iPod and iTunes instilled internet users with a sense of optimism toward the new digital capitalism, but Apple never did seize the reins on developing the consistent, comprehensive social and institutional processes that would have elevated the iPod's promise to an explicit market form, as Henry Ford and Alfred Sloan had once done.

These developments reflect the simple truth that genuine economic reformation takes time and that the internet world, its investors and shareholders, were and are in a hurry. The credo of digital innovation quickly turned to the language of disruption and an obsession with speed, its campaigns conducted under the flag of “creative destruction.” That famous, fateful phrase coined by evolutionary economist Joseph Schumpeter was seized upon as a way to legitimate what Silicon Valley euphemistically calls “permissionless innovation.”77 Destruction rhetoric promoted what I think of as a “boys and their toys” theory of history, as if the winning hand in capitalism is about blowing things up with new technologies. Schumpeter's analysis was, in fact, far more nuanced and complex than modern destruction rhetoric suggests.

Although Schumpeter regarded capitalism as an “evolutionary” process, he also considered that relatively few of its continuous innovations actually rise to the level of evolutionary significance. These rare events are what he called “mutations.” These are enduring, sustainable, qualitative shifts in the logic, understanding, and practice of capitalist accumulation, not random, temporary, or opportunistic reactions to circumstances. Schumpeter insisted that this evolutionary mechanism is triggered by new consumer needs, and alignment with those needs is the discipline that drives sustainable mutation: “The capitalist process, not by coincidence but by virtue of its mechanism, progressively raises the standard of life of the masses.”78

If a mutation is to be reliably sustained, its new aims and practices must be translated into new institutional forms: “The fundamental impulse that sets and keeps the capitalist engine in motion comes from the new consumers' goods, the new methods of production or transportation, the new markets, the new forms of industrial organization that capitalist enterprise creates.” Note that Schumpeter says “creates,” not “destroys.” As an example of mutation, Schumpeter cites “the stages of organizational development from the craft shop to the factory to a complex corporation like U.S. Steel.…”79

Schumpeter understood creative destruction as one unfortunate by-product of a long and complex process of creative sustainable change. “Capitalism,” he wrote, “creates and destroys.” Schumpeter was adamant on this point: “Creative response shapes the whole course of subsequent events and their ‘long-run' outcome.… Creative response changes social and economic situations for good.… This is why creative response is an essential element in the historical process: No deterministic credo avails against this.”80 Finally, and contrary to the rhetoric of Silicon Valley and its worship of speed, Schumpeter argued that genuine mutation demands patience: “We are dealing with a process whose every element takes considerable time in revealing its true features and ultimate effects.… We must judge its performance over time, as it unfolds through decades or centuries.”81

The significance of a “mutation” in Schumpeter's reckoning implies a high threshold, one that is crossed in time through the serious work of inventing new institutional forms embedded in the new needs of new people. Relatively little destruction is creative, especially in the absence of a robust double movement. This is illustrated in Schumpeter's example of US Steel, founded by some of the Gilded Age's most notorious “robber barons,” including Andrew Carnegie and J. P. Morgan. Under pressure from an increasingly insistent double movement, US Steel eventually institutionalized fair labor practices through unions and collective bargaining as well as internal labor markets, career ladders, professional hierarchies, employment security, training, and development, all while implementing its technological advances in mass production.

Mutation is not a fairy tale; it is rational capitalism, bound in reciprocities with its populations through democratic institutions. Mutations fundamentally change the nature of capitalism by shifting it in the direction of those it is supposed to serve. This sort of thinking is not nearly as sexy or exciting as the “boys and their toys” gambit would have us think, but this is what it will take to move the dial of economic history beyond the collision and toward a third modernity.

VI. Surveillance Capitalism Fills the Void

A new breed of economic power swiftly filled the void in which every casual search, like, and click was claimed as an asset to be tracked, parsed, and monetized by some company, all within a decade of the iPod's debut. It was as if a shark had been silently circling the depths all along, just below the surface of the action, only to occasionally leap glistening from the water in pursuit of a fresh bite of flesh. Eventually, companies began to explain these violations as the necessary quid pro quo for “free” internet services. Privacy, they said, was the price one must pay for the abundant rewards of information, connection, and other digital goods when, where, and how you want them. These explanations distracted us from the sea change that would rewrite the rules of capitalism and the digital world.

In retrospect, we can see that the many discordant challenges to users' expectations were actually tiny peepholes into a rapidly emerging institutional form that was learning to exploit second-modernity needs and the established norms of “growth through exclusion” as the means to an utterly novel market project. Over time, the shark revealed itself as a rapidly multiplying, systemic, internally consistent new variant of information capitalism that had set its sights on domination. An unprecedented formulation of capitalism was elbowing its way into history: surveillance capitalism.

This new market form is a unique logic of accumulation in which surveillance is a foundational mechanism in the transformation of investment into profit. Its rapid rise, institutional elaboration, and significant expansion challenged the tentative promise of the inversion and its advocacy-oriented values. More generally, the rise of surveillance capitalism betrayed the hopes and expectations of many “netizens” who cherished the emancipatory promise of the networked milieu.82

Surveillance capitalism commandeered the wonders of the digital world to meet our needs for effective life, promising the magic of unlimited information and a thousand ways to anticipate our needs and ease the complexities of our harried lives. We welcomed it into our hearts and homes with our own rituals of hospitality. As we shall explore in detail throughout the coming chapters, thanks to surveillance capitalism the resources for effective life that we seek in the digital realm now come encumbered with a new breed of menace. Under this new regime, the precise moment at which our needs are met is also the precise moment at which our lives are plundered for behavioral data, and all for the sake of others' gain. The result is a perverse amalgam of empowerment inextricably layered with diminishment. In the absence of a decisive societal response that constrains or outlaws this logic of accumulation, surveillance capitalism appears poised to become the dominant form of capitalism in our time.

How did this happen? It is a question that we shall return to throughout this book as we accumulate new insights and answers. For now we can recognize that over the centuries we have imagined threat in the form of state power. This left us wholly unprepared to defend ourselves from new companies with imaginative names run by young geniuses that seemed able to provide us with exactly what we yearn for at little or no cost. This new regime's most poignant harms, now and later, have been difficult to grasp or theorize, blurred by extreme velocity and camouflaged by expensive and illegible machine operations, secretive corporate practices, masterful rhetorical misdirection, and purposeful cultural misappropriation. On this road, terms whose meanings we take to be positive or at least banal (“the open internet,” “interoperability,” and “connectivity”) have been quietly harnessed to a market process in which individuals are definitively cast as the means to others' market ends.

Surveillance capitalism has taken root so quickly that, with the exception of a courageous cadre of legal scholars and technology-savvy activists, it has cunningly managed to evade our understanding and agreement. As we will discuss in more depth in Chapter 4, surveillance capitalism is inconceivable outside the digital milieu, but neoliberal ideology and policy also provided the habitat in which surveillance capitalism could flourish. This ideology and its practical implementation bends second-modernity individuals to the draconian quid pro quo at the heart of surveillance capitalism's logic of accumulation, in which information and connection are ransomed for the lucrative behavioral data that fund its immense growth and profits. Any effort to interrupt or dismantle surveillance capitalism will have to contend with this larger institutional landscape that protects and sustains its operations.

History offers no control groups, and we cannot say whether with different leadership, more time, or other altered circumstances Apple might have perceived, elaborated, and institutionalized the jewel in its crown as Henry Ford and Alfred Sloan had done in another era. Nor is that opportunity forever lost; far from it. We may yet see the founding of a new synthesis for a third modernity in which a genuine inversion and its social compact are institutionalized as principles of a new rational digital capitalism aligned with a society of individuals and supported by democratic institutions. The fact that Schumpeter reckoned the time line for such institutionalization in decades or even centuries lingers as a critical commentary on our larger story.

These developments are all the more dangerous because they cannot be reduced to known harms (monopoly, privacy) and therefore do not easily yield to known forms of combat. The new harms we face entail challenges to the sanctity of the individual, and chief among these challenges I count the elemental rights that bear on individual sovereignty, including the right to the future tense and the right to sanctuary. Each of these rights invokes claims to individual agency and personal autonomy as essential prerequisites to freedom of will and to the very concept of democratic order.

Right now, however, the extreme asymmetries of knowledge and power that have accrued to surveillance capitalism abrogate these elemental rights as our lives are unilaterally rendered as data, expropriated, and repurposed in new forms of social control, all of it in the service of others' interests and in the absence of our awareness or means of combat. We have yet to invent the politics and new forms of collaborative action (this century's equivalent of the social movements of the late nineteenth and twentieth centuries that aimed to tether raw capitalism to society) that effectively assert the people's right to a human future. And while the work of these inventions awaits us, this mobilization and the resistance it engenders will define a key battleground upon which the fight for a human future unfolds.

On August 9, 2011, events ricocheted between two wildly different visions of a third modernity. One was based on the digital promise of democratized information in the context of individualized economic and social relations. The other reflected the harsh truths of mass exclusion and elite rule. But the lessons of that day had not yet been fully tallied when fresh answers (or, more modestly, the tenuous glimmers of answers as fragile as a newborn's translucent skin) rose to the surface of the world's attention, gliding on scented ribbons of Spanish lavender and vanilla.

VII. For a Human Future

In the wee hours of August 9, 2011, eighteen-year-old Maria Elena Montes sat on the cool marble floor of her family's century-old pastry shop in the El Raval section of Barcelona, nursing her cup of sweet cafe con leche, lulled by the sunrise scuffling of the pigeons in the plaza as she waited for her trays of rum-soaked gypsy cakes to set.

Pasteleria La Dulce occupied a cramped medieval building tucked into a tiny square on one of the few streets that had escaped both the wrecking ball and the influx of yuppie chic. The Montes family took care that the passing decades had no visible effect on their cherished bakery. Each morning they lovingly filled sparkling glass cases with crispy sugar-studded churros, delicate bunuelos fat with vanilla custard, tiny paper ramekins of strawberry flan, buttery mantecados, coiled ensaimadas drenched in powdered sugar, fluffy magdalenas, crunchy pestinos, and Great-Grandmother Montes's special flao, a cake made with fresh milk cheese laced with Spanish lavender, fennel, and mint. There were almond and blood-orange tarts prepared, according to Senora Montes, exactly as they had once been served to Queen Isabella. Olive-oil ice cream flavored with anise filled the tubs in the gleaming white freezer along the wall. An old ceiling fan cycled slowly, nudging the perfume of honey and yeast into every corner of the ageless room.

Only one thing had changed. Any other August would have found Maria Elena and her family at their summer cottage nestled into a pine grove near the seaside town of Palafrugell that had been the family's refuge for generations. In 2011, however, neither the Montes nor their customers and friends would take their August holidays. The economic crisis had ripped through the country like the black plague, shrinking consumption and driving unemployment to 21 percent, the highest in the EU, and to an astonishing 46 percent among people under twenty-four years old. In Catalonia, the region that includes Barcelona, 18 percent of its 7.5 million people had fallen below the poverty line.83 In the summer of 2011, few could afford the simple pleasure of an August spent by the sea or in the mountains.

There was new pressure to sell the building and let the future finally swallow La Dulce. The family could live comfortably on the proceeds of such a sale, even at the bargain rates they would be forced to accept. Business was slow, but Senor Fito Montes refused to lay off any members of a staff that was like an extended family after years of steady employment. Just about everyone they knew said that the end was inevitable and that the Montes should leap at the opportunity for a dignified exit. But the family was determined to make every sacrifice to safeguard Pasteleria La Dulce for the future.

Just three months earlier, Juan Pablo and Maria had made the pilgrimage to Madrid to join thousands of protesters at the Puerta del Sol, where a month-long encampment established Los Indignados, the 15M, as the new voice of a people who had finally been pushed to the breaking point by the economics of contempt. All that was left to say was “Ya. No mas!” Enough already! The convergence of so many citizens in Madrid led to a wave of protests across the nation, and eventually those protests would give way to new political parties, including Podemos. Neighborhood assemblies had begun to convene in many cities, and the Montes had attended such a meeting in El Raval just the night before.

With the evening's conversations still fresh, they gathered in the apartment above the shop in the early afternoon of August 9 to share their midday meal and discuss the fate of La Dulce, not quite certain what Papa Montes was thinking.

“The bankers may not know it,” Fito Montes reflected, “but the future will need the past. It will need these marble floors and the sweet taste of my gypsy cakes. They treat us like figures in a ledger, like they are reading the number of casualties in a plane crash. They believe the future belongs only to them. But we each have our story. We each have our life. It is up to us to proclaim our right to the future. The future is our home too.”

Maria and Juan Pablo breathed a shared sigh of relief as they outlined their plan. Juan Pablo would withdraw temporarily from his university studies, and Maria Elena would postpone her matriculation. They would work on expanding La Dulce's sales with new home-delivery and catering options. Everyone would take a pay cut, but no one would have to leave. Everyone would tighten their belts, except the fat bunuelos and their perfect comrades steadfast in neat, delicious rows.

We know how to challenge the inevitable, they said. We've survived wars; we've survived the Fascists. We'll survive again. For Fito Montes, his family's right to anticipate the future as their home demanded continuity for some things that are elusive, beautiful, surprising, mysterious, inexpressible, and immaterial but without which, they all agreed, life would be mechanical and soulless. He was determined, for example, to ensure that another generation of Spanish children would recognize the bouquet of his blood-orange tarts flecked with rose petals and thus be awakened to the mystery of medieval life in the fragrant gardens of the Alhambra.

On August 9 the heat rose steadily in the shady square, and the sun emptied the avenues where Huns, Moors, Castilians, and Bourbons had each in their turn marched to triumph. Those silent streets bore little evidence of the historic deliberations in Madrid that would be featured in the New York Times that very day.84 But I imagine the two cities linked by invisible ribbons of scent rising from La Dulce high into the bleached Barcelona sky and drifting slowly south and west to settle along the austere facade of the building that housed the Agencia Espanola de Proteccion de Datos, where another struggle for the right to the future tense was underway.

The Spanish Data Protection Agency had chosen to champion the claims of ninety ordinary citizens who, like the Montes family, were determined to preserve inherited meaning for a world bent on change at the speed of light.85 In the name of “the right to be forgotten,” the Spaniards had stepped into the bullring brandishing red capes, resolved to master the fiercest bull of all: Google, the juggernaut of surveillance capitalism. When the agency ordered the internet firm to stop indexing the contested links of these ninety individuals, the bull received one of its first and most significant blows.

This official confrontation drew upon the same tenacity, determination, and sentiment that sustained the Montes family and millions of other Spaniards compelled to claw back the future from the self-proclaimed inevitability of indifferent capital. In the assertion of a right to be forgotten, the complexity of human existence, with its thousand million shades of gray, was pitted against surveillance capitalism's economic imperatives that produced the relentless drive to extract and retain information. It was there, in Spain, that the right to the future tense was on the move, insisting that the operations of surveillance capitalism and its digital architecture are not, never were, and never would be inevitable. Instead, the opposition asserted that even Google's capitalism was made by humans to be unmade and remade by democratic processes, not commercial decree. Google's was not to be the last word on the human or the digital future.

Each of the ninety citizens had a unique claim. One had been terrorized by her former husband and didn't want him to find her address online. Informational privacy was essential to her peace of mind and her physical safety. A middle-aged woman was embarrassed by an old arrest from her days as a university student. Informational privacy was essential to her identity and sense of dignity. One was an attorney, Mario Costeja Gonzalez, who years earlier had suffered the foreclosure of his home. Although the matter had long been resolved, a Google search of his name continued to deliver links to the foreclosure notice, which, he argued, damaged his reputation. While the Spanish Data Protection Agency rejected the idea of requiring newspapers and other originating sites to remove legitimate information (such information, they reasoned, would exist somewhere under any circumstances), it endorsed the notion that Google had responsibility and should be held to account. After all, Google had unilaterally undertaken to change the rules of the information life cycle when it decided to crawl, index, and make accessible personal details across the world wide web without asking anyone's permission. The agency concluded that citizens had the right to request the removal of links and ordered Google to stop indexing the information and to remove existing links to its original sources.

Google's mission to “organize the world's information and make it universally accessible and useful,” starting with the web, changed all of our lives. There have been enormous benefits, to be sure. But for individuals it has meant that information that would normally age and be forgotten now remains forever young, highlighted in the foreground of each person's digital identity. The Spanish Data Protection Agency recognized that not all information is worthy of immortality. Some information should be forgotten because that is only human. Unsurprisingly, Google challenged the agency's order before the Spanish High Court, which selected one of the ninety cases, that of attorney Mario Costeja Gonzalez, for referral to the Court of Justice of the European Union. There, after lengthy and dramatic deliberations, the Court of Justice announced its decision to assert the right to be forgotten as a fundamental principle of EU law in May of 2014.86

The Court of Justice's decision, so often reduced to the legal and technical considerations related to the deletion or de-linking of personal data, was in fact a key inflection point at which democracy began to claw back rights to the future tense from the powerful forces of a new surveillance capitalism determined to claim unilateral authority over the digital future. Instead, the court's analysis claimed the future for the human way, rejecting the inevitability of Google's search-engine technology and recognizing instead that search results are the contingent products of the specific economic interests that drive the action from within the belly of the machine: “The operator of a search engine is liable to affect significantly the fundamental rights to privacy and to the protection of personal data. In the light of the potential seriousness of the interference” with those interests, “it cannot be justified by merely the economic interest which the operator of such an engine has in that processing.”87 As legal scholars Paul M. Schwartz and Karl-Nikolaus Peifer summarized it, “The Luxembourg Court felt that free flow of information matters, but not as much, ultimately, as the safeguarding of dignity, privacy, and data protection in the European rights regime.”88 The court conferred upon EU citizens the right to combat, requiring Google to establish a process for implementing users' de-linking requests and authorizing citizens to seek recourse in democratic institutions, including “the supervisory authority or the judicial authority, so that it carries out the necessary checks and orders the controller to take specific measures accordingly.”89

In reasserting the right to be forgotten, the court declared that decisive authority over the digital future rests with the people, their laws, and their democratic institutions. It affirmed that individuals and democratic societies can fight for their rights to the future tense and can win, even in the face of a great private power. As the human rights scholar Federico Fabbrini observed, with this vital case the European Court of Justice evolved more assertively into the role of a human rights court, stepping into “the mine-field of human rights in the digital age.…”90

When the Court of Justice's decision was announced, the “smart money” said that it could never happen in the US, where the internet companies typically seek cover behind the First Amendment as justification for their “permissionless innovation.”91 Some technology observers called the ruling “nuts.”92 Google's leaders sneered at the decision. Reporters characterized Google cofounder Sergey Brin as “joking” and “dismissive.” When asked about the ruling during a Q&A at a prominent tech conference, he said, “I wish we could just forget the ruling.”93

In response to the ruling, Google CEO and cofounder Larry Page recited the catechism of the firm's mission statement, assuring the Financial Times that the company “still aims to ‘organise the world's information and make it universally accessible and useful.'” Page defended Google's unprecedented information power with an extraordinary statement suggesting that people should trust Google more than democratic institutions: “In general, having the data present in companies like Google is better than having it in the government with no due process to get that data, because we obviously care about our reputation. I'm not sure the government cares about that as much.”94 Speaking to the company's shareholders the day after the court's ruling, Eric Schmidt characterized the decision as a “balance that was struck wrong” in the “collision between a right to be forgotten and a right to know.”95

The comments of Google's leaders reflected their determination to retain privileged control over the future and their indignation at being challenged. However, there was ample evidence that the American public did not concede the corporation's unilateral power. In fact, the smart money appeared not to be all that smart. In the year following the EU decision, a national poll of US adults found that 88 percent supported a law similar to the right to be forgotten. That year, Pew Research found that 93 percent of Americans believed that it was important to have control of “who can get information about you.” A series of polls echoed these findings.96

On January 1, 2015, California's “Online Eraser” law took effect, requiring the operator of a website, online service, online application, or mobile application to permit a minor who is a registered user of the operator's service to remove, or to request and obtain removal of, content or information posted by the minor. The California law breached a critical surveillance embattlement, attenuating Google's role as the self-proclaimed champion of an unbounded right to know and suggesting that we are still at the beginning, not the end, of a long and fitful drama.

The Spanish Data Protection Agency and later the European Court of Justice demonstrated the unbearable lightness of the inevitable, as both institutions declared what is at stake for a human future, beginning with the primacy of democratic institutions in shaping a healthy and just digital future. The smart money says that US law will never abandon its allegiance to the surveillance capitalists over the people. But the next decades may once again prove that the smart money can be wrong. As for the Spanish people, their Data Protection Agency, and the European Court of Justice, the passage of time is likely to reveal their achievements as a stirring early chapter in the longer story of our fight for a third modernity that is first and foremost a human future, rooted in an inclusive democracy and committed to the individual's right to effective life. Their message is carefully inscribed for our children to ponder: technological inevitability is as light as democracy is heavy, as temporary as the scent of rose petals and the taste of honey are enduring.

VIII. Naming and Taming

Taming surveillance capitalism must begin with careful naming, a symbiosis that was vividly illustrated in the recent history of HIV research, and I offer it as an analogy. For three decades, scientists aimed to create a vaccine that followed the logic of earlier cures, training the immune system to produce neutralizing antibodies, but mounting data revealed unanticipated behaviors of the HIV virus that defy the patterns of other infectious diseases.97

The tide began to turn at the International AIDS Conference in 2012, when new strategies were presented that rely on a close understanding of the biology of rare HIV carriers whose blood produces natural antibodies. Research began to shift toward methods that reproduce this self-vaccinating response.98 As a leading researcher announced, “We know the face of the enemy now, and so we have some real clues about how to approach the problem.”99

The point for us is that every successful vaccine begins with a close understanding of the enemy disease. The mental models, vocabularies, and tools distilled from past catastrophes obstruct progress. We smell smoke and rush to close doors to rooms that are already fated to vanish. The result is like hurling snowballs at a smooth marble wall only to watch them slide down its facade, leaving nothing but a wet smear: a fine paid here, an operational detour there, a new encryption package.

What is crucial now is that we identify this new form of capitalism on its own terms and in its own words. This pursuit necessarily returns us to Silicon Valley, where things move so fast that few people know what just happened. It is the habitat for progress “at the speed of dreams,” as one Google engineer vividly describes it.100 My aim here is to slow down the action in order to enlarge the space for such debate and unmask the tendencies of these new creations as they amplify inequality, intensify social hierarchy, exacerbate exclusion, usurp rights, and strip personal life of whatever it is that makes it personal for you or for me. If the digital future is to be our home, then it is we who must make it so. We will need to know. We will need to decide. We will need to decide who decides. This is our fight for a human future.

CHAPTER THREE

THE DISCOVERY OF BEHAVIORAL SURPLUS

He watched the stars and noted birds in flight;
A river flooded or a fortress fell:
He made predictions that were sometimes right;
His lucky guesses were rewarded well.
W. H. AUDEN, SONNETS FROM CHINA, VI

I. Google: The Pioneer of Surveillance Capitalism

Google is to surveillance capitalism what the Ford Motor Company and General Motors were to mass-production-based managerial capitalism. New economic logics and their commercial models are discovered by people in a time and place and then perfected through trial and error. In our time Google became the pioneer, discoverer, elaborator, experimenter, lead practitioner, role model, and diffusion hub of surveillance capitalism. GM and Ford's iconic status as pioneers of twentieth-century capitalism made them enduring objects of scholarly research and public fascination because the lessons they had to teach resonated far beyond the individual companies. Google's practices deserve the same kind of examination, not merely as a critique of a single company but rather as the starting point for the codification of a powerful new form of capitalism.

With the triumph of mass production at Ford and for decades thereafter, hundreds of researchers, businesspeople, engineers, journalists, and scholars would excavate the circumstances of its invention, origins, and consequences.1 Decades later, scholars continued to write extensively about Ford, the man and the company.2 GM has also been an object of intense scrutiny. It was the site of Peter Drucker's field studies for his seminal Concept of the Corporation, the 1946 book that codified the practices of the twentieth-century business organization and established Drucker's reputation as a management sage. In addition to the many works of scholarship and analysis on these two firms, their own leaders enthusiastically articulated their discoveries and practices. Henry Ford and his general manager, James Couzens, and Alfred Sloan and his marketing man, Henry “Buck” Weaver, reflected on, conceptualized, and proselytized their achievements, specifically locating them in the evolutionary drama of American capitalism.3

Google is a notoriously secretive company, and one is hard-pressed to imagine a Drucker equivalent freely roaming the scene and scribbling in the hallways. Its executives carefully craft their messages of digital evangelism in books and blog posts, but its operations are not easily accessible to outside researchers or journalists.4 In 2016 a lawsuit brought against the company by a product manager alleged an internal spying program in which employees are expected to identify coworkers who violate the firm's confidentiality agreement: a broad prohibition against divulging anything about the company to anyone.5 The closest thing we have to a Buck Weaver or James Couzens codifying Google's practices and objectives is the company's longtime chief economist, Hal Varian, who aids the cause of understanding with scholarly articles that explore important themes. Varian has been described as “the Adam Smith of the discipline of Googlenomics” and the “godfather” of its advertising model.6 It is in Varian's work that we find, hidden in plain sight, important clues to the logic of surveillance capitalism and its claims to power.

In two extraordinary articles in scholarly journals, Varian explored the theme of “computer-mediated transactions” and their transformational effects on the modern economy.7 Both pieces are written in amiable, down-to-earth prose, but Varian's casual understatement stands in counterpoint to his often-startling declarations: “Nowadays there is a computer in the middle of virtually every transaction… now that they are available these computers have several other uses.”8 He then identifies four such new uses: “data extraction and analysis,” “new contractual forms due to better monitoring,” “personalization and customization,” and “continuous experiments.”

Varian's discussions of these new “uses” are an unexpected guide to the strange logic of surveillance capitalism, the division of learning that it shapes, and the character of the information civilization toward which it leads. We will return to Varian's observations from time to time in the course of our examination of the foundations of surveillance capitalism, aided by a kind of “reverse engineering” of his assertions, so that we might grasp the worldview and methods of surveillance capitalism through this lens. “Data extraction and analysis,” Varian writes, “is what everyone is talking about when they talk about big data.” “Data” are the raw material necessary for surveillance capitalism's novel manufacturing processes. “Extraction” describes the social relations and material infrastructure with which the firm asserts authority over those raw materials to achieve economies of scale in its raw-material supply operations.

“Analysis” refers to the complex of highly specialized computational systems that I will generally refer to in these chapters as “machine intelligence.” I like this umbrella phrase because it trains us on the forest rather than the trees, helping us decenter from technology to its objectives. But in choosing this phrase I also follow Google's lead. The company describes itself “at the forefront of innovation in machine intelligence,” a term in which it includes machine learning as well as “classical” algorithmic production, along with many computational operations that are often referred to with other terms such as “predictive analytics” or “artificial intelligence.” Among these operations Google cites its work on language translation, speech recognition, visual processing, ranking, statistical modeling, and prediction: “In all of those tasks and many others, we gather large volumes of direct or indirect evidence of relationships of interest, applying learning algorithms to understand and generalize.”9 These machine intelligence operations convert raw material into the firm's highly profitable algorithmic products designed to predict the behavior of its users. The inscrutability and exclusivity of these techniques and operations are the moat that surrounds the castle and secures the action within.

Google's invention of targeted advertising paved the way to financial success, but it also laid the cornerstone of a more far-reaching development: the discovery and elaboration of surveillance capitalism. Its business is characterized as an advertising model, and much has been written about Google's automated auction methods and other aspects of its inventions in the field of online advertising. With so much verbiage, these developments are both over-described and under-theorized. Our aim in this chapter and those that follow in Part I is to reveal the “laws of motion” that drive surveillance competition, and in order to do this we begin by looking freshly at the point of origin, when the foundational mechanisms of surveillance capitalism were first discovered.

Before we begin, I want to say a word about vocabulary. Any confrontation with the unprecedented requires new language, and I introduce new terms when existing language fails to capture a new phenomenon. Sometimes, however, I intentionally repurpose familiar language because I want to stress certain continuities in the function of an element or process. This is the case with “laws of motion,” borrowed from Newton's laws of inertia, force, and equal and opposite reactions.

Over the years historians have adopted this term to describe the “laws” of industrial capitalism. For example, economic historian Ellen Meiksins Wood documents the origins of capitalism in the changing relations between English property owners and tenant farmers, as the owners began to favor productivity over coercion: “The new historical dynamic allows us to speak of ‘agrarian capitalism' in early modern England, a social form with distinctive ‘laws of motion' that would eventually give rise to capitalism in its mature, industrial form.”10 Wood describes how the new “laws of motion” eventually manifested themselves in industrial production:

The critical factor in the divergence of capitalism from all other forms of “commercial society” was the development of certain social property relations that generated market imperatives and capitalist “laws of motion”… competitive production and profit-maximization, the compulsion to reinvest surpluses, and the relentless need to improve labour-productivity associated with capitalism.… Those laws of motion required vast social transformations and upheavals to set them in train. They required a transformation in the human metabolism with nature, in the provision of life's basic necessities.11

My argument here is that although surveillance capitalism does not abandon established capitalist “laws” such as competitive production, profit maximization, productivity, and growth, these earlier dynamics now operate in the context of a new logic of accumulation that also introduces its own distinctive laws of motion. Here and in following chapters, we will examine these foundational dynamics, including surveillance capitalism's idiosyncratic economic imperatives defined by extraction and prediction, its unique approach to economies of scale and scope in raw-material supply, its necessary construction and elaboration of means of behavioral modification that incorporate its machine-intelligence-based “means of production” in a more complex system of action, and the ways in which the requirements of behavioral modification orient all operations toward totalities of information and control, creating the framework for an unprecedented instrumentarian power and its societal implications. For now, my aim is to reconstruct our appreciation of familiar ground through new lenses: Google's early days of optimism, crisis, and invention.

II. A Balance of Power

Google was incorporated in 1998, founded by Stanford graduate students Larry Page and Sergey Brin just two years after the Mosaic browser threw open the doors of the world wide web to the computer-using public. From the start, the company embodied the promise of information capitalism as a liberating and democratic social force that galvanized and delighted second-modernity populations around the world.

Thanks to this wide embrace, Google successfully imposed computer mediation on broad new domains of human behavior as people searched online and engaged with the web through a growing roster of Google services. As these new activities were informated for the first time, they produced wholly new data resources. For example, in addition to key words, each Google search query produces a wake of collateral data such as the number and pattern of search terms, how a query is phrased, spelling, punctuation, dwell times, click patterns, and location.
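
To make the notion of collateral data concrete, a single search can be pictured as a record in which the typed keywords are only one field among many. The sketch below is a hypothetical illustration of such a record; the field names and values are invented for clarity and do not describe Google's actual log format.

```python
# Hypothetical illustration of a query-log record: the keywords are one field,
# while the collateral signals named above (phrasing, spelling, punctuation,
# dwell time, click pattern, location) ride along as behavioral by-products.
# Field names and values are invented; this is not Google's actual schema.

from dataclasses import dataclass, field
from typing import List

@dataclass
class QueryEvent:
    keywords: List[str]            # the search terms themselves
    raw_text: str                  # phrasing, spelling, and punctuation as typed
    dwell_time_ms: int             # how long the user lingered on the results page
    clicked_ranks: List[int] = field(default_factory=list)  # which results were clicked
    approx_location: str = ""      # coarse location signal

event = QueryEvent(
    keywords=["best", "pastry", "shop", "barcelona"],
    raw_text="best pastry shop barcelona??",
    dwell_time_ms=4200,
    clicked_ranks=[2],
    approx_location="ES/Catalonia",
)
```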

Early on, these behavioral by-products were haphazardly stored and operationally ignored. Amit Patel, a young Stanford graduate student with a special interest in “data mining,” is frequently credited with the groundbreaking insight into the significance of Google's accidental data caches. His work with these data logs persuaded him that detailed stories about each user (thoughts, feelings, interests) could be constructed from the wake of unstructured signals that trailed every online action. These data, he concluded, actually provided a “broad sensor of human behavior” and could be put to immediate use in realizing cofounder Larry Page's dream of Search as a comprehensive artificial intelligence.12

Google's engineers soon grasped that the continuous flows of collateral behavioral data could turn the search engine into a recursive learning system that constantly improved search results and spurred product innovations such as spell check, translation, and voice recognition. As Kenneth Cukier observed at that time,

Other search engines in the 1990s had the chance to do the same, but did not pursue it. Around 2000 Yahoo! saw the potential, but nothing came of the idea. It was Google that recognized the gold dust in the detritus of its interactions with its users and took the trouble to collect it up.… Google exploits information that is a by-product of user interactions, or data exhaust, which is automatically recycled to improve the service or create an entirely new product.13

What had been regarded as waste material (“data exhaust” spewed into Google's servers during the combustive action of Search) was quickly reimagined as a critical element in the transformation of Google's search engine into a reflexive process of continuous learning and improvement.

At that early stage of Google's development, the feedback loops involved in improving its Search functions produced a balance of power: Search needed people to learn from, and people needed Search to learn from. This symbiosis enabled Google's algorithms to learn and produce ever-more relevant and comprehensive search results. More queries meant more learning; more learning produced more relevance. More relevance meant more searches and more users.14 By the time the young company held its first press conference in 1999, to announce a $25 million equity investment from two of the most revered Silicon Valley venture capital firms, Sequoia Capital and Kleiner Perkins, Google Search was already fielding seven million requests each day.15 A few years later, Hal Varian, who joined Google as its chief economist in 2002, would note, “Every action a user performs is considered a signal to be analyzed and fed back into the system.”16 The PageRank algorithm, named after cofounder Larry Page, had already given Google a significant advantage in identifying the most popular results for queries. Over the course of the next few years it would be the capture, storage, analysis, and learning from the by-products of those search queries that would turn Google into the gold standard of web search.

The key point for us rests on a critical distinction. During this early period, behavioral data were put to work entirely on the user's behalf. User data provided value at no cost, and that value was reinvested in the user experience in the form of improved services: enhancements that were also offered at no cost to users. Users provided the raw material in the form of behavioral data, and those data were harvested to improve speed, accuracy, and relevance and to help build ancillary products such as translation. I call this the behavioral value reinvestment cycle, in which all behavioral data are reinvested in the improvement of the product or service (see Figure 1).

The cycle emulates the logic of the iPod; it worked beautifully at Google but with one critical difference: the absence of a sustainable market transaction. In the case of the iPod, the cycle was triggered by the purchase of a high-margin physical product. Subsequent reciprocities improved the iPod product and led to increased sales. Customers were the subjects of the commercial process, which promised alignment with their “what I want, when I want, where I want” demands. At Google, the cycle was similarly oriented toward the individual as its subject, but without a physical product to sell, it floated outside the marketplace, an interaction with “users” rather than a market transaction with customers.

This helps to explain why it is inaccurate to think of Google's users as its customers: there is no economic exchange, no price, and no profit. Nor do users function in the role of workers. When a capitalist hires workers and provides them with wages and means of production, the products that they produce belong to the capitalist to sell at a profit. Not so here. Users are not paid for their labor, nor do they operate the means of production, as we'll discuss in more depth later in this chapter. Finally, people often say that the user is the “product.” This is also misleading, and it is a point that we will revisit more than once. For now let's say that users are not products, but rather we are the sources of raw-material supply. As we shall see, surveillance capitalism's unusual products manage to be derived from our behavior while remaining indifferent to our behavior. Its products are about predicting us, without actually caring what we do or what is done to us.

To summarize, at this early stage of Google's development, whatever Search users inadvertently gave up that was of value to the company they also used up in the form of improved services. In this reinvestment cycle, serving users with amazing Search results “consumed” all the value that users created when they provided extra behavioral data. The fact that users needed Search about as much as Search needed users created a balance of power between Google and its populations. People were treated as ends in themselves, the subjects of a nonmarket, self-contained cycle that was perfectly aligned with Google's stated mission “to organize the world's information and make it universally accessible and useful.”

Figure 1. The Behavioral Value Reinvestment Cycle

III. Search for Capitalism: Impatient Money and the State of Exception

By 1999, despite the splendor of Google's new world of searchable web pages, its growing computer science capabilities, and its glamorous venture backers, there was no reliable way to turn investors' money into revenue. The behavioral value reinvestment cycle produced a very cool search function, but it was not yet capitalism. The balance of power made it financially risky and possibly counterproductive to charge users a fee for search services. Selling search results would also have set a dangerous precedent for the firm, assigning a price to indexed information that Google's web crawler had already taken from others without payment. Without a device like Apple's iPod or its digital songs, there were no margins, no surplus, nothing left over to sell and turn into revenue.

Google had relegated advertising to steerage class: its AdWords team consisted of seven people, most of whom shared the founders' general antipathy toward ads. The tone had been set in Sergey Brin and Larry Page's milestone paper that unveiled their search engine conception, “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” presented at the 1998 World Wide Web Conference: “We expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers. This type of bias is very difficult to detect but could still have a significant effect on the market… we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.”17

Google's first revenues depended instead on exclusive licensing deals to provide web services to portals such as Yahoo! and Japan's BIGLOBE.18 It also generated modest revenue from sponsored ads linked to search query keywords.19 There were other models for consideration. Rival search engines such as Overture, used exclusively by the then-giant portal AOL, or Inktomi, the search engine adopted by Microsoft, collected revenues from the sites whose pages they indexed. Overture was also successful in attracting online ads with its policy of allowing advertisers to pay for high-ranking search listings, the very format that Brin and Page scorned.20

Prominent analysts publicly doubted whether Google could compete with its more-established rivals. As the New York Times asked, “Can Google create a business model even remotely as good as its technology?”21 A well-known Forrester Research analyst proclaimed that there were only a few ways for Google to make money with Search: “build a portal [like Yahoo!]… partner with a portal… license the technology… wait for a big company to purchase them.”22

Despite these general misgivings about Google's viability, the firm's prestigious venture backing gave the founders confidence in their ability to raise money. This changed abruptly in April 2000, when the legendary dot-com economy began its steep plunge into recession, and Silicon Valley's Garden of Eden unexpectedly became the epicenter of a financial earthquake.

By mid-April, Silicon Valley's fast-money culture of privilege was under siege with the implosion of what came to be known as the “dot-com bubble.” It is easy to forget exactly how terrifying things were for the valley's ambitious young people and their slightly older investors. Startups with outsized valuations just months earlier were suddenly forced to shutter. Prominent articles such as “Doom Stalks the Dotcoms” noted that the stock prices of Wall Street's most-revered internet “high flyers” were “down for the count,” with many of them trading below their initial offering price: “With many dotcoms declining, neither venture capitalists nor Wall Street is eager to give them a dime.…”23 The news brimmed with descriptions of shell-shocked investors. The week of April 10 saw the worst decline in the history of the NASDAQ, where many internet companies had gone public, and there was a growing consensus that the “game” had irreversibly changed.24

As the business environment in Silicon Valley unraveled, investors' prospects for cashing out by selling Google to a big company seemed far less likely, and they were not immune to the rising tide of panic. Many Google investors began to express doubts about the company's prospects, and some threatened to withdraw support. Pressure for profit mounted sharply, despite the fact that Google Search was widely considered the best of all the search engines, traffic to its website was surging, and a thousand resumes flooded the firm's Mountain View office each day. Page and Brin were seen to be moving too slowly, and their top venture capitalists, John Doerr from Kleiner Perkins and Michael Moritz from Sequoia, were frustrated.25 According to Google chronicler Steven Levy, “The VCs were screaming bloody murder. Tech's salad days were over, and it wasn't certain that Google would avoid becoming another crushed radish.”26

The specific character of Silicon Valley's venture funding, especially during the years leading up to dangerous levels of startup inflation, also contributed to a growing sense of emergency at Google. As Stanford sociologist Mark Granovetter and his colleague Michel Ferrary found in their study of valley venture firms, “A connection with a high-status VC firm signals the high status of the startup and encourages other agents to link to it.”27 These themes may seem obvious now, but it is useful to mark the anxiety of those months of sudden crisis. Prestigious risk investment functioned as a form of vetting (much like acceptance to a top university sorts and legitimates students, elevating a few against the backdrop of the many), especially in the “uncertain” environment characteristic of high-tech investing. Loss of that high-status signaling power assigned a young company to a long list of also-rans in Silicon Valley's fast-moving saga.

Other research findings point to the consequences of the impatient money that flooded the valley as inflationary hype drew speculators and ratcheted up the volatility of venture funding.28 Studies of pre-bubble investment patterns showed a “big-score” mentality in which bad results tended to stimulate increased investing as funders chased the belief that some young company would suddenly discover the elusive business model destined to turn all their bets into rivers of gold.29 Startup mortality rates in Silicon Valley outstripped those for other venture capital centers such as Boston and Washington, DC, with impatient money producing a few big wins and many losses.30 Impatient money is also reflected in the size of Silicon Valley startups, which during this period were significantly smaller than in other regions, employing an average of 68 employees as compared to an average of 112 in the rest of the country.31 This reflects an interest in quick returns without spending much time on growing a business or deepening its talent base, let alone developing the institutional capabilities that Joseph Schumpeter would have advised. These propensities were exacerbated by the larger Silicon Valley culture, where net worth was celebrated as the sole measure of success for valley parents and their children.32

For all their genius and principled insights, Brin and Page could not ignore the mounting sense of emergency. By December 2000, the Wall Street Journal reported on the new “mantra” emerging from Silicon Valley's investment community: “Simply displaying the ability to make money will not be enough to remain a major player in the years ahead. What will be required will be an ability to show sustained and exponential profits.”33

IV. The Discovery of Behavioral Surplus

The declaration of a state of exception functions in politics as cover for the suspension of the rule of law and the introduction of new executive powers justified by crisis.34 At Google in late 2000, it became a rationale for annulling the reciprocal relationship that existed between Google and its users, steeling the founders to abandon their passionate and public opposition to advertising. As a specific response to investors' anxiety, the founders tasked the tiny AdWords team with the objective of looking for ways to make more money.35 Page demanded that the whole process be simplified for advertisers. In this new approach, he insisted that advertisers “shouldn't even get involved with choosing keywords; Google would choose them.”36

Operationally, this meant that Google would turn its own growing cache of behavioral data and its computational power and expertise toward the single task of matching ads with queries. New rhetoric took hold to legitimate this unusual move. If there was to be advertising, then it had to be “relevant” to users. Ads would no longer be linked to keywords in a search query, but rather a particular ad would be “targeted” to a particular individual. Securing this holy grail of advertising would ensure relevance to users and value to advertisers.

Absent from the new rhetoric was the fact that in pursuit of this new aim, Google would cross into virgin territory by exploiting sensitivities that only its exclusive and detailed collateral behavioral data about millions and later billions of users could reveal. To meet the new objective, the behavioral value reinvestment cycle was rapidly and secretly subordinated to a larger and more complex undertaking. The raw materials that had been solely used to improve the quality of search results would now also be put to use in the service of targeting advertising to individual users. Some data would continue to be applied to service improvement, but the growing stores of collateral signals would be repurposed to improve the profitability of ads for both Google and its advertisers. These behavioral data available for uses beyond service improvement constituted a surplus, and it was on the strength of this behavioral surplus that the young company would find its way to the “sustained and exponential profits” that would be necessary for survival. Thanks to a perceived emergency, a new mutation began to gather form and quietly slip its moorings in the implicit advocacy-oriented social contract of the firm's original relationship with users.

Google's declared state of exception was the backdrop for 2002, the watershed year during which surveillance capitalism took root. The firm's appreciation of behavioral surplus crossed another threshold that April, when the data logs team arrived at their offices one morning to find that a peculiar phrase had surged to the top of the search queries: “Carol Brady's maiden name.” Why the sudden interest in a 1970s television character? It was data scientist and logs team member Amit Patel who recounted the event to the New York Times, noting, “You can't interpret it unless you know what else is going on in the world.”37

The team went to work to solve the puzzle. First, they discerned that the pattern of queries had produced five separate spikes, each beginning at forty-eight minutes after the hour. Then they learned that the query pattern occurred during the airing of the popular TV show Who Wants to Be a Millionaire? The spikes reflected the successive time zones during which the show aired, ending in Hawaii. In each time zone, the show's host posed the question of Carol Brady's maiden name, and in each zone the queries immediately flooded into Google's servers.

As the New York Times reported, “The precision of the Carol Brady data was eye-opening for some.” Even Brin was stunned by the clarity of Search's predictive power, revealing events and trends before they “hit the radar” of traditional media. As he told the Times, “It was like trying an electron microscope for the first time. It was like a moment-by-moment barometer.”38 Google executives were described by the Times as reluctant to share their thoughts about how their massive stores of query data might be commercialized. “There is tremendous opportunity with this data,” one executive confided.39

Just a month before the Carol Brady moment, while the AdWords team was already working on new approaches, Brin and Page hired Eric Schmidt, an experienced executive, engineer, and computer science Ph.D., as chairman. By August, they appointed him to the CEO's role. Doerr and Moritz had been pushing the founders to hire a professional manager who would know how to pivot the firm toward profit.40 Schmidt immediately implemented a “belt-tightening” program, grabbing the budgetary reins and heightening the general sense of financial alarm as fund-raising prospects came under threat. A squeeze on workspace found him unexpectedly sharing his office with none other than Amit Patel.

Schmidt later boasted that as a result of their close quarters over the course of several months, he had instant access to better revenue figures than did his own financial planners.41 We do not know (and may never know) what other insights Schmidt might have gleaned from Patel about the predictive power of Google's behavioral data stores, but there is no doubt that a deeper grasp of the predictive power of data quickly shaped Google's specific response to financial emergency, triggering the crucial mutation that ultimately turned AdWords, Google, the internet, and the very nature of information capitalism toward an astonishingly lucrative surveillance project.

Google's earliest ads had been considered more effective than most online advertising at the time because they were linked to search queries and Google could track when users actually clicked on an ad, known as the “click-through” rate. Despite this, advertisers were billed in the conventional manner according to how many people viewed an ad. As Search expanded, Google created the self-service system called AdWords, in which a search that used the advertiser's keyword would include that advertiser's text box and a link to its landing page. Ad pricing depended upon the ad's position on the search results page.

Rival search startup Overture had developed an online auction system for web page placement that allowed it to scale online advertising targeted to keywords. Google would produce a transformational enhancement to that model, one that was destined to alter the course of information capitalism. As a Bloomberg journalist explained in 2006, “Google maximizes the revenue it gets from that precious real estate by giving its best position to the advertiser who is likely to pay Google the most in total, based on the price per click multiplied by Google's estimate of the likelihood that someone will actually click on the ad.”42 That pivotal multiplier was the result of Google's advanced computational capabilities trained on its most significant and secret discovery: behavioral surplus. From this point forward, the combination of ever-increasing machine intelligence and ever-more-vast supplies of behavioral surplus would become the foundation of an unprecedented logic of accumulation. Google's reinvestment priorities would shift from merely improving its user offerings to inventing and institutionalizing the most far-reaching and technologically advanced raw-material supply operations that the world had ever seen. Henceforth, revenues and growth would depend upon more behavioral surplus.
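
To make the multiplier concrete, here is a minimal sketch, with invented advertisers, bids, and click probabilities, of how ranking by price per click multiplied by an estimated likelihood of a click can award the best position to an advertiser whose raw bid is not the highest. This is an illustration of the ranking idea only, not Google's actual system.

```python
# Illustrative sketch: each ad is scored by bid-per-click multiplied by an
# estimated probability of a click, and the best slot goes to the highest
# expected revenue. All advertisers, bids, and probabilities are invented.

ads = [
    {"advertiser": "A", "bid_per_click": 2.00, "predicted_ctr": 0.010},
    {"advertiser": "B", "bid_per_click": 0.50, "predicted_ctr": 0.060},
    {"advertiser": "C", "bid_per_click": 1.20, "predicted_ctr": 0.020},
]

def expected_revenue(ad):
    """Price per click times the estimated likelihood of a click."""
    return ad["bid_per_click"] * ad["predicted_ctr"]

# The highest expected revenue wins the top position, even with a lower raw bid.
for rank, ad in enumerate(sorted(ads, key=expected_revenue, reverse=True), start=1):
    print(rank, ad["advertiser"], round(expected_revenue(ad), 4))
```

In this toy example, advertiser B takes the top slot with the lowest bid because its predicted click-through rate is the highest, which is the effect the multiplier was designed to produce.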

Google's many patents filed during those early years illustrate the explosion of discovery, inventiveness, and complexity detonated by the state of exception that led to these crucial innovations and the firm's determination to advance the capture of behavioral surplus.43 Among these efforts, I focus here on one patent submitted in 2003 by three of the firm's top computer scientists and titled “Generating User Information for Use in Targeted Advertising.”44 The patent is emblematic of the new mutation and the emerging logic of accumulation that would define Google's success. Of even greater interest, it also provides an unusual glimpse into the “economic orientation” baked deep into the technology cake by reflecting the mindset of Google's distinguished scientists as they harnessed their knowledge to the firm's new aims.45 In this way, the patent stands as a treatise on a new political economics of clicks and its moral universe, before the company learned to disguise this project in a fog of euphemism.

The patent reveals a pivoting of the backstage operation toward Google's new audience of genuine customers. “The present invention concerns advertising,” the inventors announce. Despite the enormous quantity of demographic data available to advertisers, the scientists note that much of an ad budget “is simply wasted… it is very difficult to identify and eliminate such waste.”46

Advertising had always been a guessing game: art, relationships, conventional wisdom, standard practice, but never “science.” The idea of being able to deliver a particular message to a particular person at just the moment when it might have a high probability of actually influencing his or her behavior was, and had always been, the holy grail of advertising. The inventors point out that online ad systems had also failed to achieve this elusive goal. The then-predominant approaches used by Google's competitors, in which ads were targeted to keywords or content, were unable to identify relevant ads “for a particular user.” Now the inventors offered a scientific solution that exceeded the most-ambitious dreams of any advertising executive:

There is a need to increase the relevancy of ads served for some user request, such as a search query or a document request… to the user that submitted the request.… The present invention may involve novel methods, apparatus, message formats and/or data structures for determining user profile information and using such determined user profile information for ad serving.47

In other words, Google would no longer mine behavioral data strictly to improve service for users but rather to read users' minds for the purposes of matching ads to their interests, as those interests are deduced from the collateral traces of online behavior. With Google's unique access to behavioral data, it would now be possible to know what a particular individual in a particular time and place was thinking, feeling, and doing. That this no longer seems astonishing to us, or perhaps even worthy of note, is evidence of the profound psychic numbing that has inured us to a bold and unprecedented shift in capitalist methods.

The techniques described in the patent meant that each time a user queries Google's search engine, the system simultaneously presents a specific configuration of a particular ad, all in the fraction of a moment that it takes to fulfill the search query. The data used to perform this instant translation from query to ad, a predictive analysis that was dubbed “matching,” went far beyond the mere denotation of search terms. New data sets were compiled that would dramatically enhance the accuracy of these predictions. These data sets were referred to as “user profile information” or “UPI.” These new data meant that there would be no more guesswork and far less waste in the advertising budget. Mathematical certainty would replace all of that.

Where would UPI come from? The scientists announce a breakthrough. They first explain that some of the new data can be culled from the firm's existing systems with its continuously accruing caches of behavioral data from Search. Then they stress that even more behavioral data can be hunted and herded from anywhere in the online world. UPI, they write, “may be inferred,” “presumed,” and “deduced.” Their new methods and computational tools could create UPI from integrating and analyzing a user's search patterns, document inquiries, and myriad other signals of online behaviors, even when users do not directly provide that personal information: “User profile information may include any information about an individual user or a group of users. Such information may be provided by the user, provided by a third-party authorized to release user information, and/or derived from user actions. Certain user information can be deduced or presumed using other user information of the same user and/or user information of other users. UPI may be associated with various entities.”48

The inventors explain that UPI can be deduced directly from a user's or group's actions, from any kind of document a user views, or from an ad landing page: “For example, an ad for prostate cancer screening might be limited to user profiles having the attribute ‘male' and ‘age 45 and over.'”49 They describe different ways to obtain UPI. One relies on “machine learning classifiers” that predict values on a range of attributes. “Association graphs” are developed to reveal the relationships among users, documents, search queries, and web pages: “user-to-user associations may also be generated.”50 The inventors also note that their methods can be understood only among the priesthood of computer scientists drawn to the analytic challenges of this new online universe: “The following description is presented to enable one skilled in the art to make and use the invention.… Various modifications to the disclosed embodiments will be apparent to those skilled in the art.…”51
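
The patent describes these mechanics only in general terms. The toy sketch below, whose signals, weights, threshold, and single attribute are all invented for illustration, shows the general shape of such inference: a profile attribute is deduced from collateral behavioral traces rather than from anything the user declared.

```python
# Toy sketch of profile inference in the spirit of the patent's "deduced or
# presumed" UPI: an attribute is guessed from collateral behavioral signals
# (queries, pages viewed), not from anything the user explicitly provided.
# The signals, weights, threshold, and attribute are invented for illustration.

from collections import Counter

OBSERVED_BEHAVIOR = {
    "queries": ["retirement savings", "golf clubs", "reading glasses"],
    "pages_viewed": ["finance.example/annuities", "health.example/knee-pain"],
}

# Hypothetical evidence weights linking signals to the attribute "age 45 and over".
EVIDENCE_FOR_OVER_45 = {
    "retirement": 2.0, "annuities": 2.0, "reading glasses": 1.5, "knee-pain": 1.0,
}

def infer_over_45(behavior, threshold=3.0):
    """Accumulate evidence from behavioral traces and presume the attribute."""
    score = Counter()
    for text in behavior["queries"] + behavior["pages_viewed"]:
        for cue, weight in EVIDENCE_FOR_OVER_45.items():
            if cue in text:
                score["age 45 and over"] += weight
    return score["age 45 and over"] >= threshold, score

likely, evidence = infer_over_45(OBSERVED_BEHAVIOR)
print(likely, dict(evidence))
```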

Of critical importance to our story is the scientists' observation that the most challenging sources of friction here are social, not technical. Friction arises when users intentionally fail to provide information for no other reason than that they choose not to. “Unfortunately, user profile information is not always available,” the scientists warn. Users do not always “voluntarily” provide information, or “the user profile may be incomplete… and hence not comprehensive, because of privacy considerations, etc.”52

A clear aim of the patent is to assure its audience that Google scientists will not be deterred by users' exercise of decision rights over their personal information, despite the fact that such rights were an inherent feature of the original social contract between the company and its users.53 Even when users do provide UPI, the inventors caution, “it may be intentionally or unintentionally inaccurate, it may become stale.… UPI for a user… can be determined (or updated or extended) even when no explicit information is given to the system.… An initial UPI may include some expressly entered UPI information, though it doesn't need to.”54

The scientists thus make clear that they are willing, and that their inventions are able, to overcome the friction entailed in users' decision rights. Google's proprietary methods enable it to surveil, capture, expand, construct, and claim behavioral surplus, including data that users intentionally choose not to share. Recalcitrant users will not be obstacles to data expropriation. No moral, legal, or social constraints will stand in the way of finding, claiming, and analyzing others' behavior for commercial purposes.

The inventors provide examples of the kinds of attributes that Google could assess as it compiles its UPI data sets while circumnavigating users' knowledge, intentions, and consent. These include websites visited, psychographics, browsing activity, and information about previous advertisements that the user has been shown, selected, and/or made purchases after viewing.55 It is a long list that is certainly much longer today.

Finally, the inventors observe another obstacle to effective targeting. Even when user information exists, they say, “Advertisers may not be able to use this information to target ads effectively.”56 On the strength of the invention presented in this patent, and others related to it, the inventors publicly declare Google's unique prowess in hunting, capturing, and transforming surplus into predictions for accurate targeting. No other firm could equal its range of access to behavioral surplus, its bench strength of scientific knowledge and technique, its computational power, or its storage infrastructure. In 2003 only Google could pull surplus from multiple sites of activity and integrate each increment of data into comprehensive “data structures.” Google was uniquely positioned with the state-of-the-art knowledge in computer science to convert those data into predictions of who will click on which configuration of what ad as the basis for a final “matching” result, all computed in micro-fractions of a second.

To state all this in plain language, Google's invention revealed new capabilities to infer and deduce the thoughts, feelings, intentions, and interests of individuals and groups with an automated architecture that operates as a one-way mirror irrespective of a person's awareness, knowledge, and consent, thus enabling privileged secret access to behavioral data.

A one-way mirror embodies the specific social relations of surveillance based on asymmetries of knowledge and power. The new mode of accumulation invented at Google would derive, above all, from the firm's willingness and ability to impose these social relations on its users. Its willingness was mobilized by what the founders came to regard as a state of exception; its ability came from its actual success in leveraging privileged access to behavioral surplus in order to predict the behavior of individuals now, soon, and later. The predictive insights thus acquired would constitute a world-historic competitive advantage in a new marketplace where low-risk bets about the behavior of individuals are valued, bought, and sold.

Google would no longer be a passive recipient of accidental data that it could recycle for the benefit of its users. The targeted advertising patent sheds light on the path of discovery that Google traveled from its advocacy-oriented founding toward the elaboration of behavioral surveillance as a full-blown logic of accumulation. The invention itself exposes the reasoning through which the behavioral value reinvestment cycle was subjugated to the service of a new commercial calculation. Behavioral data, whose value had previously been “used up” on improving the quality of Search for users, now became the pivotal raw material, exclusive to Google, for the construction of a dynamic online advertising marketplace. Google would now secure more behavioral data than it needed to serve its users. That surplus, a behavioral surplus, was the game-changing, zero-cost asset that was diverted from service improvement toward a genuine and highly lucrative market exchange.

These capabilities were and remain inscrutable to all but an exclusive data priesthood among whom Google is the ubermensch. They operate in obscurity, indifferent to social norms or individual claims to self-determining decision rights. These moves established the foundational mechanisms of surveillance capitalism.

The state of exception declared by Google's founders transformed the youthful Dr. Jekyll into a ruthless, muscular Mr. Hyde determined to hunt his prey anywhere, anytime, irrespective of others' self-determining aims. The new Google ignored claims to self-determination and acknowledged no a priori limits on what it could find and take. It dismissed the moral and legal content of individual decision rights and recast the situation as one of technological opportunism and unilateral power. This new Google assures its actual customers that it will do whatever it takes to transform the natural obscurity of human desire into scientific fact. This Google is the superpower that establishes its own values and pursues its own purposes above and beyond the social contracts to which others are bound.

V. Surplus at Scale

There were other new elements that helped to establish the centrality of behavioral surplus in Google's commercial operations, beginning with its pricing innovations. The first new pricing metric was based on “click-through rates,” or how many times a user clicks on an ad through to the advertiser's web page, rather than pricing based on the number of views that an ad receives. The click-through was interpreted as a signal of relevance and therefore a measure of successful targeting, operational results that derive from and reflect the value of behavioral surplus.

This new pricing discipline established an ever-escalating incentive to increase behavioral surplus in order to continuously upgrade the effectiveness of predictions. Better predictions lead directly to more click-throughs and thus to revenue. Google learned new ways to conduct automated auctions for ad targeting that allowed the new invention to scale quickly, accommodating hundreds of thousands of advertisers and billions (later it would be trillions) of auctions simultaneously. Google's unique auction methods and capabilities earned a great deal of attention, which distracted observers from reflecting on exactly what was being auctioned: derivatives of behavioral surplus. Click-through metrics institutionalized “customer” demand for these prediction products and thus established the central importance of economies of scale in surplus supply operations. Surplus capture would have to become automatic and ubiquitous if the new logic was to succeed, as measured by the successful trading of behavioral futures.

Another key metric called the “quality score” helped determine the price of an ad and its specific position on the page, in addition to advertisers' own auction bids. The quality score was determined in part by click-through rates and in part by the firm's analyses of behavioral surplus. “The clickthrough rate needed to be a predictive thing,” one top executive insisted, and that would require “all the information we had about the query right then.”57 It would take enormous computing power and leading-edge algorithmic programs to produce powerful predictions of user behavior that became the criteria for estimating the relevance of an ad. Ads that scored high would sell at a lower price than those that scored poorly. Google's customers, its advertisers, complained that the quality score was a black box, and Google was determined to keep it so. Nonetheless, when customers followed its disciplines and produced high-scoring ads, their click-through rates soared.
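
The text does not disclose the formula, so the following is a hedged sketch under the commonly reported assumption that position follows bid multiplied by quality score, with a second-price-style charge; every number is invented. It illustrates the claim above: the high-scoring ad wins the better position yet pays less per click than the low-scoring ad bids.

```python
# Hedged sketch of how a quality score can shape both position and price.
# The exact formula is not given in the text; this assumes the widely described
# generalized second-price scheme, where rank = bid * quality_score and the
# winner pays just enough to hold its position. All numbers are invented.

ads = [
    {"advertiser": "HighQuality", "bid": 1.00, "quality_score": 9.0},
    {"advertiser": "LowQuality",  "bid": 2.00, "quality_score": 3.0},
]

def ad_rank(ad):
    return ad["bid"] * ad["quality_score"]

ranked = sorted(ads, key=ad_rank, reverse=True)
for pos, ad in enumerate(ranked, start=1):
    nxt = ranked[pos] if pos < len(ranked) else None
    # Assumed pricing rule: pay the minimum needed to beat the next ad's rank;
    # the last ad simply pays its own bid in this simplified toy.
    price = round(ad_rank(nxt) / ad["quality_score"] + 0.01, 2) if nxt else ad["bid"]
    print(pos, ad["advertiser"], "pays about", price)
```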

AdWords quickly became so successful that it inspired significant expansion of the surveillance logic. Advertisers demanded more clicks.58 The answer was to extend the model beyond Google's search pages and convert the entire internet into a canvas for Google's targeted ads. This required turning Google's newfound skills at “data extraction and analysis,” as Hal Varian put it, toward the content of any web page or user action by employing Google's rapidly expanding semantic analysis and artificial intelligence capabilities to efficiently “squeeze” meaning from them. Only then could Google accurately assess the content of a page and how users interact with that content. This “content-targeted advertising” based on Google's patented methods was eventually named AdSense. By 2004, AdSense had achieved a run rate of a million dollars per day, and by 2010, it produced annual revenues of more than $10 billion.

So here was an unprecedented and lucrative brew: behavioral surplus, data science, material infrastructure, computational power, algorithmic systems, and automated platforms. This convergence produced unprecedented “relevance” and billions of auctions. Click-through rates skyrocketed. Work on AdWords and AdSense became just as important as work on Search.

With click-through rates as the measure of relevance accomplished, behavioral surplus was institutionalized as the cornerstone of a new kind of commerce that depended upon online surveillance at scale. Insiders referred to Google's new science of behavioral prediction as the “physics of clicks.”59 Mastery of this new domain required a specialized breed of click physicists who would secure Google's preeminence within the nascent priesthood of behavioral prediction. The firm's substantial revenue flows summoned the greatest minds of our age from fields such as artificial intelligence, statistics, machine learning, data science, and predictive analytics to converge on the prediction of human behavior as measured by click-through rates: computer-mediated fortune-telling and selling. The firm would recruit an authority on information economics, and consultant to Google since 2001, as the patriarch of this auspicious group and the still-young science: Hal Varian was the chosen shepherd of this flock.

Page and Brin had been reluctant to embrace advertising, but as the evidence mounted that ads could save the company from crisis, their attitudes shifted.60 Saving the company also meant saving themselves from being just another couple of very smart guys who couldn't figure out how to make real money, insignificant players in the intensely material and competitive culture of Silicon Valley. Page was haunted by the example of the brilliant but impoverished scientist Nikola Tesla, who died without ever benefiting financially from his inventions. “You need to do more than just invent things,” Page reflected.61 Brin had his own take: “Honestly, when we were still in the dot-com boom days, I felt like a schmuck. I had an internet startup; so did everybody else. It was unprofitable, like everybody else's.”62 Exceptional threats to their financial and social status appear to have awakened a survival instinct in Page and Brin that required exceptional adaptive measures.63 The Google founders' response to the fear that stalked their community effectively declared a “state of exception” in which it was judged necessary to suspend the values and principles that had guided Google's founding and early practices.

Later, Sequoia's Moritz recalled the crisis conditions that provoked the firm's “ingenious” self-reinvention, when crisis opened a fork in the road and drew the company in a wholly new direction. He stressed the specificity of Google's inventions, their origins in emergency, and the 180-degree turn from serving users to surveilling them. Most of all, he credited the discovery of behavioral surplus as the game-changing asset that turned Google into a fortune-telling giant, pinpointing Google's breakthrough transformation of the Overture model, when the young company first applied its analytics of behavioral surplus to predict the likelihood of a click:

The first 12 months of Google were not a cakewalk, because the company didn't start off in the business that it eventually tapped. At first it went in a different direction, which was selling its technology: selling licenses for its search engines to larger internet properties and to corporations.… Cash was going out of the window at a feral rate during the first six, seven months. And then, very ingeniously, Larry… and Sergey… and others fastened on a model that they had seen this other company, Overture, develop, which was ranked advertisements. They saw how it could be improved and enhanced and made it their own, and that transformed the business.64

Moritz's reflections suggest that without the discovery of behavioral surplus and the turn toward surveillance operations, Google's “feral” rate of spending was not sustainable and the firm's survival was imperiled. We will never know what Google might have made of itself without the state of exception fueled by the emergency of impatient money that shaped those crucial years of development. What other pathways to sustainable revenue might have been explored or invented? What alternative futures might have been summoned to keep faith with the founders' principles and with their users' rights to self-determination? Instead, Google loosed a new incarnation of capitalism upon the world, a Pandora's box whose contents we are only beginning to understand.

VI. A Human Invention

Key to our conversation is this fact: surveillance capitalism was invented by a specific group of human beings in a specific time and place. It is not an inherent result of digital technology, nor is it a necessary expression of information capitalism. It was intentionally constructed at a moment in history, in much the same way that the engineers and tinkerers at the Ford Motor Company invented mass production in the Detroit of 1913.

Henry Ford set out to prove that he could maximize profits by driving up volumes, radically decreasing costs, and widening demand. It was an unproven commercial equation for which no economic theory or body of practice existed. Fragments of the formula had surfaced before: in meatpacking plants, flour-milling operations, sewing machine and bicycle factories, armories, canneries, and breweries. There was a growing body of practical knowledge about the interchangeability of parts and absolute standardization, precision machines, and continuous flow production. But no one had achieved the grand symphony that Ford heard in his imagination.

As historian David Hounshell tells it, there was a time, April 1, 1913, and a place, Detroit, when the first moving assembly line seemed to be “just another step in the years of development at Ford yet somehow suddenly dropped out of the sky. Even before the end of the day, some of the engineers sensed that they had made a fundamental breakthrough.”65 Within a year, productivity increases across the plant ranged from 50 percent to as much as ten times the output of the old fixed-assembly methods.66 The Model T that sold for $825 in 1908 was priced at a record low for a four-cylinder automobile in 1924, just $260.67

Much as with Ford, some elements of the economic surveillance logic in the online environment had been operational for years, familiar only to a rarefied group of early computer experts. For example, the software mechanism known as the “cookie,” bits of code that allow information to be passed between a server and a client computer, was developed in 1994 at Netscape, the first commercial web browser company.68 Similarly, “web bugs,” tiny (often invisible) graphics embedded in web pages and e-mail and designed to monitor user activity and collect personal information, were well-known to experts in the late 1990s.69
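
For readers unfamiliar with the mechanism, a minimal sketch using Python's standard library shows the basic exchange: the server issues a small named value in a Set-Cookie header, and the browser stores it and returns it with later requests, allowing the server to recognize the visitor. The identifier and values here are hypothetical.

```python
# Minimal sketch of the cookie mechanism described above: the server hands the
# browser a small named value via a Set-Cookie header, and the browser returns
# it with every subsequent request, letting the server recognize the visitor.
# The identifier is invented for illustration.

from http.cookies import SimpleCookie

# Server side: issue an identifier to a first-time visitor.
outgoing = SimpleCookie()
outgoing["visitor_id"] = "abc123"            # hypothetical identifier
print(outgoing.output(header="Set-Cookie:")) # what the server sends

# Client side: the browser stores it and echoes it back on the next request.
incoming = SimpleCookie()
incoming.load("visitor_id=abc123")           # contents of the returned Cookie header
print(incoming["visitor_id"].value)          # the server reads it back
```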

These experts were deeply concerned about the privacy implications of such monitoring mechanisms, and at least in the case of cookies, there were institutional efforts to design internet policies that would prohibit their invasive capabilities to monitor and profile users.70 By 1996, the function of cookies had become a contested public policy issue. Federal Trade Commission workshops in 1996 and 1997 discussed proposals that would assign control of all personal information to users by default with a simple automated protocol. Advertisers bitterly contested this scheme, collaborating instead to avert government regulation by forming a “self-regulating” association known as the Network Advertising Initiative. Still, in June 2000 the Clinton administration banned cookies from all federal websites, and by April 2001, three bills before Congress included provisions to regulate cookies.71

Google brought new life to these practices. As had occurred at Ford a century earlier, the company's engineers and scientists were the first to conduct the entire commercial surveillance symphony, integrating a wide range of mechanisms from cookies to proprietary analytics and algorithmic software capabilities in a sweeping new logic that enshrined surveillance and the unilateral expropriation of behavioral data as the basis for a new market form. The impact of this invention was just as dramatic as Ford's. In 2001, as Google's new systems to exploit its discovery of behavioral surplus were being tested, net revenues jumped to $86 million (more than a 400 percent increase over 2000), and the company turned its first profit. By 2002, the cash began to flow and has never stopped, definitive evidence that behavioral surplus combined with Google's proprietary analytics were sending arrows to their marks. Revenues leapt to $347 million in 2002, then $1.5 billion in 2003, and $3.2 billion in 2004, the year the company went public.72 The discovery of behavioral surplus had produced a stunning 3,590 percent increase in revenue in less than four years.

VII. The Secrets of Extraction

It is important to note the vital differences for capitalism in these two moments of originality at Ford and Google. Ford's inventions revolutionized production. Google's inventions revolutionized extraction and established surveillance capitalism's first economic imperative: the extraction imperative. The extraction imperative meant that raw-material supplies must be procured at an ever-expanding scale. Industrial capitalism had demanded economies of scale in production in order to achieve high throughput combined with low unit cost. In contrast, surveillance capitalism demands economies of scale in the extraction of behavioral surplus.

Mass production was aimed at new sources of demand in the early twentieth century's first mass consumers. Ford was clear on this point: “Mass production begins in the perception of a public need.”73 Supply and demand were linked effects of the new “conditions of existence” that defined the lives of my great-grandparents Sophie and Max and other travelers in the first modernity. Ford's invention deepened the reciprocities between capitalism and these populations.

In contrast, Google's inventions destroyed the reciprocities of its original social contract with users. The role of the behavioral value reinvestment cycle that had once aligned Google with its users changed dramatically. Instead of deepening the unity of supply and demand with its populations, Google chose to reinvent its business around the burgeoning demand of advertisers eager to squeeze and scrape online behavior by any available means in the competition for market advantage. In the new operation, users were no longer ends in themselves but rather became the means to others' ends.

Reinvestment in user services became the method for attracting behavioral surplus, and users became the unwitting suppliers of raw material for a larger cycle of revenue generation. The scale of surplus expropriation that was possible at Google would soon eliminate all serious competitors to its core search business as the windfall earnings from leveraging behavioral surplus were used to continuously draw more users into its net, thus establishing its de facto monopoly in Search. On the strength of Google's inventions, discoveries, and strategies, it became the mother ship and ideal type of a new economic logic based on fortune-telling and selling: an ancient and eternally lucrative craft that has fed on humanity's confrontation with uncertainty from the beginning of the human story.

It was one thing to proselytize achievements in production, as Henry Ford had done, but quite another to boast about the continuous intensification of hidden processes aimed at the extraction of behavioral data and personal information. The last thing that Google wanted was to reveal the secrets of how it had rewritten its own rules and, in the process, enslaved itself to the extraction imperative. Behavioral surplus was necessary for revenue, and secrecy would be necessary for the sustained accumulation of behavioral surplus.

This is how secrecy came to be institutionalized in the policies and practices that govern every aspect of Google's behavior onstage and offstage. Once Google's leadership understood the commercial power of behavioral surplus, Schmidt instituted what he called the “hiding strategy.”74 Google employees were told not to speak about what the patent had referred to as its “novel methods, apparatus, message formats and/or data structures” or confirm any rumors about flowing cash. Hiding was not a post hoc strategy; it was baked into the cake that would become surveillance capitalism.

Former Google executive Douglas Edwards writes compellingly about this predicament and the culture of secrecy it shaped. According to his account, Page and Brin were “hawks,” insisting on aggressive data capture and retention: “Larry opposed any path that would reveal our technological secrets or stir the privacy pot and endanger our ability to gather data.” Page wanted to avoid arousing users' curiosity by minimizing their exposure to any clues about the reach of the firm's data operations. He questioned the prudence of the electronic scroll in the reception lobby that displays a continuous stream of search queries, and he “tried to kill” the annual Google Zeitgeist conference that summarizes the year's trends in search terms.75

Journalist John Battelle, who chronicled Google during the 2002–2004 period, described the company's “aloofness,” “limited information sharing,” and “alienating and unnecessary secrecy and isolation.”76 Another early company biographer notes, “What made this information easier to keep is that almost none of the experts tracking the business of the internet believed that Google's secret was even possible.”77 As Schmidt told the New York Times, “You need to win, but you are better off winning softly.”78 The scientific and material complexity that supported the capture and analysis of behavioral surplus also enabled the hiding strategy, an invisibility cloak over the whole operation. “Managing search at our scale is a very serious barrier to entry,” Schmidt warned would-be competitors.79

To be sure, there are always sound business reasons for hiding the location of your gold mine. In Google's case, the hiding strategy accrued to its competitive advantage, but there were other reasons for concealment and obfuscation. What might the response have been back then if the public were told that Google's magic derived from its exclusive capabilities in unilateral surveillance of online behavior and its methods specifically designed to override individual decision rights? Google policies had to enforce secrecy in order to protect operations that were designed to be undetectable because they took things from users without asking and employed those unilaterally claimed resources to work in the service of others' purposes.

That Google had the power to choose secrecy is itself testament to the success of its own claims. This power is a crucial illustration of the difference between “decision rights” and “privacy.” Decision rights confer the power to choose whether to keep something secret or to share it. One can choose the degree of privacy or transparency for each situation. US Supreme Court Justice William O. Douglas articulated this view of privacy in 1967: “Privacy involves the choice of the individual to disclose or to reveal what he believes, what he thinks, what he possesses.…”80

Surveillance capitalism lays claim to these decision rights. The typical complaint is that privacy is eroded, but that is misleading. In the larger societal pattern, privacy is not eroded but redistributed, as decision rights over privacy are claimed for surveillance capital. Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism. Google discovered this necessary element of the new logic of accumulation: it must assert the rights to take the information upon which its success depends.

The corporation's ability to hide this rights grab depends on language as much as it does on technical methods or corporate policies of secrecy. George Orwell once observed that euphemisms are used in politics, war, and business as instruments that “make lies sound truthful and murder respectable.”81 Google has been careful to camouflage the significance of its behavioral surplus operations in industry jargon. Two popular terms, “digital exhaust” and “digital breadcrumbs,” connote worthless waste: leftovers lying around for the taking.82 Why allow exhaust to drift in the atmosphere when it can be recycled into useful data? Who would think to call such recycling an act of exploitation, expropriation, or plunder? Who would dare to redefine “digital exhaust” as booty or contraband, or imagine that Google had learned how to purposefully construct that so-called “exhaust” with its methods, apparatus, and data structures?

The word “targeted” is another euphemism. It evokes notions of precision, efficiency, and competence. Who would guess that targeting conceals a new political equation in which Google's concentrations of computational power brush aside users' decision rights as easily as King Kong might shoo away an ant, all accomplished offstage where no one can see?

These euphemisms operate in exactly the same way as those found on the earliest maps of the North American continent, in which whole regions were labeled with terms such as “heathens,” “infidels,” “idolaters,” “primitives,” “vassals,” and “rebels.” On the strength of those euphemisms, native peoples, their places, and their claims were deleted from the invaders' moral and legal equations, legitimating the acts of taking and breaking that paved the way for church and monarchy.

The intentional work of hiding naked facts in rhetoric, omission, complexity, exclusivity, scale, abusive contracts, design, and euphemism is another factor that helps explain why during Google's breakthrough to profitability, few noticed the foundational mechanisms of its success and their larger significance. In this picture, commercial surveillance is not merely an unfortunate accident or occasional lapse. It is neither a necessary development of information capitalism nor a necessary product of digital technology or the internet. It is a specifically constructed human choice, an unprecedented market form, an original solution to emergency, and the underlying mechanism through which a new asset class is created on the cheap and converted to revenue. Surveillance is the path to profit that overrides “we the people,” taking our decision rights without permission and even when we say “no.” The discovery of behavioral surplus marks a critical turning point not only in Google's biography but also in the history of capitalism.

In the years following its IPO in 2004, Google's spectacular financial breakthrough first astonished and then magnetized the online world. Silicon Valley investors had doubled down on risk for years, in search of that elusive business model that would make it all worthwhile. When Google's financial results went public, the hunt for mythic treasure was officially over.83

The new logic of accumulation spread first to Facebook, which launched the same year that Google went public. CEO Mark Zuckerberg had rejected the strategy of charging users a fee for service as the telephone companies had done in an earlier century. “Our mission is to connect every person in the world. You don't do that by having a service people pay for,” he insisted.84 In May 2007 he introduced the Facebook platform, opening up the social network to everyone, not just people with a college e-mail address. Six months later, in November, he launched his big advertising product, Beacon, which would automatically share transactions from partner websites with all of a user's “friends.” These posts would appear even if the user was not currently logged into Facebook, without the user's knowledge or an opt-in function. The howls of protest, from users but also from some of Facebook's partners such as Coca-Cola, forced Zuckerberg to back down swiftly. By December, Beacon became an opt-in program. The twenty-three-year-old CEO understood the potential of surveillance capitalism, but he had not yet mastered Google's facility in obscuring its operations and intent.

The pressing question in Facebook's headquarters still required an answer: “How do we turn all those Facebook users into money?”85 In March 2008, just three months after having to kill his first attempt at emulating Google's logic of accumulation, Zuckerberg hired Google executive Sheryl Sandberg to be Facebook's chief operating officer. The onetime chief of staff to US Treasury Secretary Larry Summers, Sandberg had joined Google in 2001, ultimately rising to be its vice president of global online sales and operations. At Google she led the development of surveillance capitalism through the expansion of AdWords and other aspects of online sales operations.86 One investor who had observed the company's growth during that period concluded, “Sheryl created AdWords.”87

In signing on with Facebook, the talented Sandberg became the “Typhoid Mary” of surveillance capitalism as she led Facebook's transformation from a social networking site to an advertising behemoth. Sandberg understood that Facebook's social graph represented an awe-inspiring source of behavioral surplus: the extractor's equivalent of a nineteenth-century prospector stumbling into a valley that sheltered the largest diamond mine and the deepest gold mine ever to be discovered. “We have better information than anyone else. We know gender, age, location, and it's real data as opposed to the stuff other people infer,” Sandberg said. Facebook would learn to track, scrape, store, and analyze UPI to fabricate its own targeting algorithms, and like Google it would not restrict extraction operations to what people voluntarily shared with the company. Sandberg understood that through the artful manipulation of Facebook's culture of intimacy and sharing, it would be possible to use behavioral surplus not only to satisfy demand but also to create demand. For starters, that meant inserting advertisers into the fabric of Facebook's online culture, where they could “invite” users into a “conversation.”88

VIII. Summarizing the Logic and Operations of Surveillance Capitalism

With Google in the lead, surveillance capitalism rapidly became the default model of information capitalism on the web and, as we shall see in coming chapters, gradually drew competitors from every sector. This new market form declares that serving the genuine needs of people is less lucrative, and therefore less important, than selling predictions of their behavior. Google discovered that we are less valuable than others' bets on our future behavior. This changed everything.

Behavioral surplus defines Google's earnings success. In 2016, 89 percent of the revenues of its parent company, Alphabet, derived from Google's targeted advertising programs.89 The scale of raw-material flows is reflected in Google's domination of the internet, processing over 40,000 search queries every second on average: more than 3.5 billion searches per day and 1.2 trillion searches per year worldwide in 2017.90

On the strength of its unprecedented inventions, Google's $400 billion market value edged out ExxonMobil for the number-two spot in market capitalization in 2014, only sixteen years after its founding, making it the second-richest company in the world behind Apple.91 By 2016, Alphabet/Google occasionally wrested the number-one position from Apple and was ranked number two globally as of September 20, 2017.92

It is useful to stand back from this complexity to grasp the overall pattern and how the puzzle pieces fit together:

1. The logic: Google and other surveillance platforms are sometimes described as “two-sided” or “multi-sided” markets, but the mechanisms of surveillance capitalism suggest something different.93 Google had discovered a way to translate its nonmarket interactions with users into surplus raw material for the fabrication of products aimed at genuine market transactions with its real customers: advertisers.94 The translation of behavioral surplus from outside to inside the market finally enabled Google to convert investment into revenue. The corporation thus created out of thin air and at zero marginal cost an asset class of vital raw materials derived from users' nonmarket online behavior. At first those raw materials were simply “found,” a by-product of users' search actions. Later those assets were hunted aggressively and procured largely through surveillance. The corporation simultaneously created a new kind of marketplace in which its proprietary “prediction products” manufactured from these raw materials could be bought and sold.

The summary of these developments is that the behavioral surplus upon which Google's fortune rests can be considered as surveillance assets. These assets are critical raw materials in the pursuit of surveillance revenues and their translation into surveillance capital. The entire logic of this capital accumulation is most accurately understood as surveillance capitalism, which is the foundational framework for a surveillance-based economic order: a surveillance economy. The big pattern here is one of subordination and hierarchy, in which earlier reciprocities between the firm and its users are subordinated to the derivative project of our behavioral surplus captured for others' aims. We are no longer the subjects of value realization. Nor are we, as some have insisted, the “product” of Google's sales. Instead, we are the objects from which raw materials are extracted and expropriated for Google's prediction factories. Predictions about our behavior are Google's products, and they are sold to its actual customers but not to us. We are the means to others' ends.

Industrial capitalism transformed nature's raw materials into commodities, and surveillance capitalism lays its claims to the stuff of human nature for a new commodity invention. Now it is human nature that is scraped, torn, and taken for another century's market project. It is obscene to suppose that this harm can be reduced to the obvious fact that users receive no fee for the raw material they supply. That critique is a feat of misdirection that would use a pricing mechanism to institutionalize and therefore legitimate the extraction of human behavior for manufacturing and sale. It ignores the key point that the essence of the exploitation here is the rendering of our lives as behavioral data for the sake of others' improved control of us. The remarkable questions here concern the facts that our lives are rendered as behavioral data in the first place; that ignorance is a condition of this ubiquitous rendition; that decision rights vanish before one even knows that there is a decision to make; that there are consequences to this diminishment of rights that we can neither see nor foretell; that there is no exit, no voice, and no loyalty, only helplessness, resignation, and psychic numbing; and that encryption is the only positive action left to discuss when we sit around the dinner table and casually ponder how to hide from the forces that hide from us.

2. The means of production: Google's internet-age manufacturing process is a critical component of the unprecedented. Its specific technologies and techniques, which I summarize as “machine intelligence,” are constantly evolving, and it is easy to be intimidated by their complexity. The same term may mean one thing today and something very different in one year or in five years. For example, Google has been described as developing and deploying “artificial intelligence” since at least 2003, but the term itself is a moving target, as capabilities have evolved from primitive programs that can play tic-tac-toe to systems that can operate whole fleets of driverless cars.

Google's machine intelligence capabilities feed on behavioral surplus, and the more surplus they consume, the more accurate the prediction products that result. Wired magazine's founding editor, Kevin Kelly, once suggested that although it seems like Google is committed to developing its artificial intelligence capabilities to improve Search, it's more likely that Google develops Search as a means of continuously training its evolving AI capabilities.95 This is the essence of the machine intelligence project. As the ultimate tapeworm, the machine's intelligence depends upon how much data it eats. In this important respect the new means of production differs fundamentally from the industrial model, in which there is a tension between quantity and quality. Machine intelligence is the synthesis of this tension, for it reaches its full potential for quality only as it approximates totality.

As more companies chase Google-style surveillance profits, a significant fraction of global genius in data science and related fields is dedicated to the fabrication of prediction products that increase click-through rates for targeted advertising. For example, Chinese researchers employed by Microsoft's Bing research unit in Beijing published breakthrough findings in 2017. “Accurately estimating the click-through rate (CTR) of ads has a vital impact on the revenue of search businesses; even a 0.1% accuracy improvement in our production would yield hundreds of millions of dollars in additional earnings,” they begin. They go on to demonstrate a new application of advanced neural networks that promises 0.9 percent improvement on one measure of identification and “significant click yield gains in online traffic.”96 Similarly, a team of Google researchers introduced a new deep-neural network model, all for the sake of capturing “predictive feature interactions” and delivering “state-of-the-art performance” to improve click-through rates.97 Thousands of contributions like these, some incremental and some dramatic, equate to an expensive, sophisticated, opaque, and exclusive twenty-first-century “means of production.”
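
As a rough illustration only, and far simpler than the deep models in the papers cited above, the sketch below trains a logistic regression on synthetic, invented impression features to output a click probability; this is the basic shape of click-through-rate estimation, not the cited authors' methods.

```python
# Minimal sketch of click-through-rate estimation: a logistic regression
# trained on synthetic (invented) impression features to output a probability
# of a click. This is an illustration of the task, not the cited models.

import math, random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic impression features: [keyword_match, past_clicks_on_advertiser, position_penalty]
def make_example():
    x = [random.random(), random.random(), random.random()]
    true_logit = 3.0 * x[0] + 2.0 * x[1] - 4.0 * x[2] - 1.0   # invented ground truth
    y = 1 if random.random() < sigmoid(true_logit) else 0     # clicked or not
    return x, y

data = [make_example() for _ in range(5000)]
weights, bias, lr = [0.0, 0.0, 0.0], 0.0, 0.1

for _ in range(20):                          # a few passes of stochastic gradient descent
    for x, y in data:
        p = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
        err = p - y
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
        bias -= lr * err

print("learned weights:", [round(w, 2) for w in weights], "bias:", round(bias, 2))
print("predicted CTR for a strong match:",
      round(sigmoid(weights[0] * 0.9 + weights[1] * 0.8 + weights[2] * 0.1 + bias), 3))
```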

3. The products: Machine intelligence processes behavioral surplus into prediction products designed to forecast what we will feel, think, and do: now, soon, and later. These methodologies are among Google's most closely guarded secrets. The nature of its products explains why Google repeatedly claims that it does not sell personal data. What? Never! Google executives like to claim their privacy purity because they do not sell their raw material. Instead, the company sells the predictions that only it can fabricate from its world-historic private hoard of behavioral surplus.

Prediction products reduce risks for customers, advising them where and when to place their bets. The quality and competitiveness of the product are a function of its approximation to certainty: the more predictive the product, the lower the risks for buyers and the greater the volume of sales. Google has learned to be a data-based fortune-teller that replaces intuition with science at scale in order to tell and sell our fortunes for profit to its customers, but not to us. Early on, Google's prediction products were largely aimed at sales of targeted advertising, but as we shall see, advertising was the beginning of the surveillance project, not the end.

4. The marketplace: Prediction products are sold into a new kind of market that trades exclusively in future behavior. Surveillance capitalism's profits derive primarily from these behavioral futures markets. Although advertisers were the dominant players in the early history of this new kind of marketplace, there is no reason why such markets are limited to this group. The new prediction systems are only incidentally about ads, in the same way that Ford's new system of mass production was only incidentally about automobiles. In both cases the systems can be applied to many other domains. The already visible trend, as we shall see in the coming chapters, is that any actor with an interest in purchasing probabilistic information about our behavior and/or influencing future behavior can pay to play in markets where the behavioral fortunes of individuals, groups, bodies, and things are told and sold (see Figure 2).

[Figure 2: The Discovery of Behavioral Surplus]

1)
Martin Hilbert, “Technological Information Inequality as an Incessantly Moving Target: The Redistribution of Information and Communication Capacities Between 1986 and 2010,” Journal of the American Society for Information Science and Technology 65, no. 4 (2013): 821–35, https://doi.org/10.1002/asi.23020.
2)
By 2014, about twenty years after the invention of the world wide web, an extensive survey by Pew Research found 87 percent of Americans using the internet. Among those, 76 percent regarded it as “a good thing for society” and 90 percent as “a good thing for me.” Indeed, people routinely call 911 when Facebook is down. In less than two decades after the Mosaic browser was released to the public, enabling easy access to the world wide web, a 2010 BBC poll found that 79 percent of people in twenty-six countries considered internet access to be a fundamental human right. Six years later, the United Nations adopted specific language on internet access: “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.” See Susannah Fox and Lee Rainie, “The web at 25 in the U.S.,” PewResearchCenter, February 27, 2014, http://www.pewinternet.org/2014/02/27/the-web-at-25-in-the-u-s; “911 Calls About Facebook Outage Angers L.A. County Sheriff's Officials,” Los Angeles Times, August 1, 2014, http://www.latimes.com/local/lanow/la-me-ln-911-calls-about-facebook-outage-angers-la-sheriffs-officials-20140801-htmlstory.html; “Internet Access ‘a Human Right,'” BBC News, March 8, 2010, http://news.bbc.co.uk/2/hi/8548190.stm; “The Promotion, Protection and Enjoyment of Human Rights on the Internet,” United Nations Human Rights Council, June 27, 2016, https://www.article19.org/data/files/Internet_Statement_Adopted.pdf.
3)
Joao Leal, The Making of Saudade: National Identity and Ethnic Psychology in Portugal (Amsterdam: Het Spinhuis, 2000), https://run.unl.pt/handle/10362/4386.
4)
Cory D. Kidd et al., “The Aware Home: A Living Laboratory for Ubiquitous Computing Research,” in Proceedings of the Second International Workshop on Cooperative Buildings, Integrating Information, Organization, and Architecture, CoBuild '99 (London: Springer-Verlag, 1999), 191–98, http://dl.acm.org/citation.cfm?id=645969.674887.
5)
“Global Smart Homes Market 2018 by Evolving Technology, Projections & Estimations, Business Competitors, Cost Structure, Key Companies and Forecast to 2023,” Reuters, February 19, 2018, https://www.reuters.com/brandfeatures/venture-capital/article?id=28096.
6)
Ron Amadeo, “Nest Is Done as a Standalone Alphabet Company, Merges with Google,” Ars Technica, February 7, 2018, https://arstechnica.com/gadgets/2018/02/nest-is-done-as-a-standalone-alphabet-company-merges-with-google; Leo Kelion, “Google-Nest Merger Raises Privacy Issues,” BBC News, February 8, 2018, http://www.bbc.com/news/technology-42989073.
7)
Kelion, “Google-Nest Merger Raises Privacy Issues.”
8)
Rick Osterloh and Marwan Fawaz, “Nest to Join Forces with Google's Hardware Team,” Google, February 7, 2018, https://www.blog.google/inside-google/company-annoucements/nest-join-forces-googles-hardware-team.
9)
Grant Hernandez, Orlando Arias, Daniel Buentello, and Yier Jin, “Smart Nest Thermostat: A Smart Spy in Your Home,” Black Hat USA, 2014, https://www.blackhat.com/docs/us-14/materials/us-14-Jin-Smart-Nest-Thermostat-A-Smart-Spy-In-Your-Home-WP.pdf.
10)
Guido Noto La Diega, “Contracting for the ‘Internet of Things': Looking into the Nest” (research paper, Queen Mary University of London, School of Law, 2016); Robin Kar and Margaret Radin, “Pseudo-Contract & Shared Meaning Analysis” (legal studies research paper, University of Illinois College of Law, November 16, 2017), https://papers.ssrn.com/abstract=3083129.
11)
Hernandez, Arias, Buentello, and Jin, “Smart Nest Thermostat.”
12)
For a prescient early treatment of these issues, see Langdon Winner, “A Victory for Computer Populism,” Technology Review 94, no. 4 (1991): 66. See also Chris Jay Hoofnagle, Jennifer M. Urban, and Su Li, “Privacy and Modern Advertising: Most US Internet Users Want ‘Do Not Track' to Stop Collection of Data About Their Online Activities” (BCLT Research Paper, Rochester, NY: Social Science Research Network, October 8, 2012), https://papers.ssrn.com/abstract=2152135; Joseph Turow et al., “Americans Reject Tailored Advertising and Three Activities That Enable It,” Annenberg School for Communication, September 29, 2009, http://papers.ssrn.com/abstract=1478214; Chris Jay Hoofnagle and Jan Whittington, “Free: Accounting for the Costs of the Internet's Most Popular Price,” UCLA Law Review 61 (February 28, 2014): 606; Jan Whittington and Chris Hoofnagle, “Unpacking Privacy's Price,” North Carolina Law Review 90 (January 1, 2011): 1327; Chris Jay Hoofnagle, Jennifer King, Su Li, and Joseph Turow, “How Different Are Young Adults from Older Adults When It Comes to Information Privacy Attitudes & Policies?” April 14, 2010, http://repository.upenn.edu/asc_papers/399.
13)
The phrase is from Roberto Mangabeira Unger, “The Dictatorship of No Alternatives,” in What Should the Left Propose? (London: Verso, 2006), 1-11.