Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, by Karen Hao — Summary
Synopsis
Empire of AI argues that OpenAI, the company most loudly dedicated to building artificial general intelligence for the benefit of humanity, has become the leading institution in a new imperial order. Karen Hao’s central thesis is that the generative AI boom is not merely a technological revolution but a system of extraction: to build and scale these models, companies seize data, labor, energy, water, rare minerals, and political influence from across the globe, while presenting the outcome as inevitable progress. The book traces how OpenAI’s founding idealism — nonprofit status, openness, safety — was progressively hollowed out by the material demands of frontier AI development, until the mission itself became an instrument for concentrating power rather than distributing it.
Hao builds her argument through a combination of deep institutional reporting and global field investigation. Inside OpenAI, she reconstructs the company’s technical evolution from GPT-2 through ChatGPT and GPT-4, the internal faction wars between safety researchers and product teams, Sam Altman’s consolidation of personal authority, and the November 2023 board crisis that exposed how thoroughly corporate logic had overwhelmed governance. Outside the company, she follows the supply chain to its human and environmental endpoints: Kenyan content moderators traumatized by the material they labeled to make ChatGPT safe, Venezuelan and Colombian gig workers whose precarity powers reinforcement learning, Chilean and Uruguayan communities resisting data-center expansion on their land and water. The alternation between boardroom politics and ground-level extraction is the book’s methodological signature and the source of its moral force.
For this vault, the book connects several ongoing investigations. It provides a detailed case study of how Silicon Valley ideology — effective altruism, scaling doctrine, techno-utopianism — functions as a legitimating apparatus for concentrated corporate power, extending the analysis in the vault’s Silicon Valley ideology pages. Hao’s Chile chapters offer a concrete account of how neoliberal institutional design creates the preconditions for new extractive industries, linking AI infrastructure to the longer history of resource colonialism. And her portrait of Altman as a leader who governs through ambiguity, narrative control, and the elasticity of mission language speaks directly to the vault’s interest in how charismatic authority operates inside institutions that claim to serve the public interest.
Prologue: “A Run for the Throne”
The prologue opens in the middle of OpenAI’s most dramatic public rupture: on November 17, 2023, Sam Altman is abruptly fired by the company’s board. Karen Hao uses the moment not simply as corporate drama, but as a high-pressure scene that reveals the contradictions at the center of OpenAI. Altman is presented as the public face of the generative AI boom — admired, globally visible, and seemingly at the height of his influence — which makes the board’s move feel shocking and almost surreal. The immediate effect is disorientation: if the most celebrated executive in Silicon Valley can be removed without warning, then something deep inside OpenAI is unstable.
Hao underscores that the firing lands at the very peak of OpenAI’s prestige. ChatGPT has become a historic consumer success, OpenAI’s valuation has exploded, and Altman has been elevated by the media into a quasi-messianic figure for the AI age. This contrast matters. The prologue is not about a struggling company imploding under weakness; it is about a triumphant company breaking open under the strain of its own success. The more powerful OpenAI appears from the outside, the more bewildering the internal collapse becomes.
Inside the company, the reaction is confusion first, then panic, then anger. Employees learn about Altman’s dismissal almost at the same time as the public, and the lack of explanation leaves a vacuum that quickly becomes a breeding ground for rumors. Hao shows how, in the absence of trust and credible information, people immediately begin trying to impose narrative order on chaos: perhaps Altman is running for office, perhaps there is a personal scandal, perhaps there are conflicts of interest. The specific rumors matter less than what they reveal: OpenAI’s own workforce does not understand the decision-making structure governing the institution they work for.
The all-hands meeting that follows makes things worse. Ilya Sutskever, Mira Murati, and other leaders appear before employees but refuse to provide substantive answers. The board’s justification — that Altman had not been consistently candid — is repeated without explanation, and Sutskever’s inability or unwillingness to clarify the grounds for such a drastic action destroys confidence rather than restoring it. Hao presents this meeting as a legitimacy disaster. The board may have had formal authority, but it is unable to convert authority into persuasion, and that failure quickly becomes existential.
At this point the prologue becomes a study in competing sources of power. On paper, the nonprofit board governs OpenAI. In practice, power is dispersed across employees, executives, investors, commercial partners, and above all Microsoft. Hao shows that OpenAI’s strange hybrid structure — a nonprofit claiming stewardship over humanity’s future while operating a business of immense strategic and financial consequence — had produced a system in which legal authority and real power no longer matched. Once the board acts, every other center of influence mobilizes against it.
The weekend after Altman’s firing unfolds like a succession struggle. Greg Brockman resigns, senior researchers consider leaving, executives pressure the board, and employees begin to rally around Altman as the indispensable center of the company. Hao emphasizes that this is not only loyalty to a charismatic founder-figure. It is also fear: fear that the company’s technical work will fragment, fear that their equity will collapse, fear that the mission itself has become inseparable from Altman’s leadership. OpenAI’s internal culture, which had supposedly been built around a higher mission, reveals how much it had come to depend on one person.
Microsoft’s role sharpens the crisis. Satya Nadella, enraged at being blindsided, emerges as the external force with the greatest leverage. When he offers Altman and Brockman positions at Microsoft, the entire balance changes. Employees now have an exit route, and the board loses its main instrument of control. Hao uses this shift to show that OpenAI’s independence was already compromised. A company founded to prevent AI from being dominated by narrow private interests is shown to be deeply entangled with one of the most powerful corporations in the world.
The employee letter threatening mass resignation becomes the turning point. Hundreds sign. The revolt is not merely emotional; it is operational. If the researchers and engineers leave, the board may retain legal control over an empty shell. Even Sutskever, one of the architects of Altman’s removal, eventually reverses himself and expresses regret. That reversal is crucial because it demonstrates that the anti-Altman coalition never consolidated into a coherent governing alternative. The board could remove the leader, but it could not establish a durable order after removing him.
Altman’s eventual return, along with the reshuffling of the board, brings the immediate drama to a close. But Hao’s point is that this resolution is not evidence of health. It is evidence of failure. The governance experiment that OpenAI had advertised to the world — an institutional model designed to place humanity’s interests above profit — is exposed as ineffective under real pressure. The people with the formal mandate to defend the mission are overwhelmed by the forces of capital, prestige, dependency, and internal loyalty. The crisis ends not with principle clarified, but with power reassembled.
From there, the prologue widens into the book’s main argument. Hao makes clear that the Altman episode is not gossip, not an entertaining interlude, and not merely the story of a Silicon Valley boardroom coup. It is a window into the politics of AI itself. If the organization most loudly dedicated to building beneficial artificial general intelligence cannot govern itself according to its own stated ideals, then the public has reason to doubt the broader claims made by the industry about safety, stewardship, and responsibility.
Hao then turns backward to OpenAI’s origin story. She presents the company as beginning with an idealistic promise: artificial intelligence would be too powerful and too consequential to be left in the hands of a normal profit-maximizing corporation. Yet almost immediately, OpenAI confronted the material demands of frontier AI development — enormous compute costs, dependence on capital, and the need for infrastructure at a scale philanthropy could not sustain. The result was structural drift. The nonprofit mission remained the public justification, while the institution increasingly reorganized itself around the imperatives of financing, growth, and productization.
This is where the prologue introduces its imperial frame. Hao argues that contemporary AI is not just a set of tools or inventions but a system of extraction. Building and expanding these models means appropriating data, labor, energy, water, rare minerals, and political influence from populations across the globe, even as the companies doing the appropriating present the result as inevitable progress. The analogy to empire is deliberate: not because AI firms literally replicate old colonial rule in every respect, but because they concentrate power by appropriating resources from distant populations and converting those resources into technical and economic dominance.
The prologue closes by insisting that this trajectory is neither neutral nor unavoidable. The present form of AI, in Hao’s account, is the result of choices made by specific institutions and leaders, especially OpenAI and the firms racing alongside it. The costs are already being borne by invisible workers, local communities, and vulnerable political systems, while the gains accumulate upward. The core claim is blunt: what is being built is not simply a better future, but a new regime of concentrated power.
In that sense, the prologue functions as both narrative hook and thesis statement. It begins with the spectacle of Sam Altman’s ouster because that spectacle condenses the book’s subject into one dramatic week: idealism collapsing into power struggle, nonprofit rhetoric collapsing into corporate dependency, and the language of saving humanity collapsing into a battle over who gets to rule the AI age. Hao’s promise for the rest of the book is clear. She is not going to explain OpenAI as a heroic innovator. She is going to explain it as the leading institution in a new imperial order — one that can still be challenged, but only if people stop mistaking its ambitions for destiny.
Chapter 1: Divine Right
Chapter 1 opens in the summer of 2015, at a private dinner organized by Sam Altman to discuss artificial intelligence, power, and the future of humanity. The immediate dramatic center is Elon Musk, who arrives late but whose presence gives the gathering its gravity. Karen Hao uses this scene to establish the original alliance between Musk and Altman as one built on mutual utility. Musk sees Altman as a younger but unusually ambitious operator who seems to share his alarm about advanced AI. Altman, for his part, admires Musk as a heroic figure of technological conviction. The dinner functions as the founding myth before the founding itself: a moment when the coalition that would become OpenAI begins to take shape around a mixture of fear, prestige, and strategic positioning.
From there, the chapter reconstructs Musk’s thinking about AI before OpenAI existed. Hao shows that Musk’s anxiety was not abstract; it had been sharpened by encounters with Demis Hassabis of DeepMind and by his famous dispute with Larry Page over whether superintelligent AI would be a threat or simply the next stage of evolution. Musk increasingly came to believe that Google’s acquisition of DeepMind represented a dangerous concentration of future power. He began talking in apocalyptic terms about AI as an existential threat, describing it as “summoning the demon,” and started looking for ways to build an institutional counterweight. In Hao’s telling, OpenAI did not begin as a neutral scientific project. It began as part moral crusade, part Silicon Valley rivalry, and part geopolitical move inside the tech elite.
The chapter then shows how Altman inserted himself into this opening. Through emails to Musk in 2015, Altman frames a proposal for building general AI in a way that would maximize “individual empowerment” and prioritize safety. Musk quickly agrees. Hao’s point is that Altman is already demonstrating one of his defining talents: he knows how to mirror the values, language, and ambitions of the person he needs. The result is not just endorsement; it is transfer of legitimacy. Musk’s support makes it possible to convene the right people, attract the right attention, and convert a speculative idea into an elite project with immediate credibility. This is the first concrete step in Altman’s broader rise.
Hao uses that founding setup to pivot into a long character study of Altman himself. The chapter argues that, long before OpenAI, Altman had been developing the traits that would make him so effective in Silicon Valley: charisma, tactical sensitivity, emotional intelligence, and relentless ambition. She quotes Paul Graham’s famous remark that Altman could be dropped on an island of cannibals and, five years later, would be king. That line becomes the chapter’s organizing metaphor. “Divine Right” is not about hereditary nobility; it is about the modern startup version of legitimacy, where authority emerges from confidence, network centrality, and the ability to persuade others that one is naturally meant to lead.
The biographical sections begin with Altman’s childhood in a Jewish family in the Midwest. He is portrayed as precocious almost to the point of cliché: operating the family VCR as a toddler, learning to program on a Mac as a child, and developing early habits of competition, curiosity, and intensity. Hao balances the portrait by emphasizing another side of him as well: he is charismatic and socially gifted, but also anxious, highly sensitive to other people’s judgments, and emotionally permeable. This combination matters because the chapter suggests that Altman’s later public persona—calm, controlled, strategic—was built over a much more vulnerable internal structure. His drive is not presented as simple self-confidence; it is also a way of mastering uncertainty.
At John Burroughs School and then at Stanford, Altman begins to sharpen both his technical identity and his social one. He studies computer science, becomes interested in AI and security, and enters the world forming around Paul Graham and Y Combinator. The crucial move is joining YC’s first batch with Loopt, his location-based startup. Altman drops out of Stanford, embraces the startup life, and internalizes the ethic that early intensity compounds over time. Hao shows that even in these years Altman is not just building products. He is learning how Silicon Valley power actually works: through storytelling, founder mythology, investor confidence, and access to rare networks.
Loopt never becomes a major commercial success, and Hao is explicit about that. But the failure is almost beside the point. What matters is what Altman learns there and what he manages to extract from an only modestly successful company. He becomes a skilled narrator of the future, able to make middling traction sound like the beginning of an inevitable transformation. He learns to handle the press, to negotiate large partnerships, and to turn ordinary startup milestones into evidence of exceptional momentum. This is one of the chapter’s sharpest observations: Altman’s real genius may not have been technological invention so much as the construction of belief around a project before the underlying facts fully justified it.
That same talent has a darker side. Hao recounts that senior leaders at Loopt twice went to the board urging that Altman be removed, accusing him of manipulative behavior and of pursuing power in self-serving ways. Yet Altman survives, and more than that, he wins. The board sides with him, and when Loopt is eventually sold, he emerges wealthier, better connected, and more powerful than when he began. This pattern will recur throughout the book: conflict does not weaken Altman so much as become raw material for the next ascent. Even partial failure becomes a platform, provided he retains his position at the center of the story.
The chapter next turns to Altman’s two decisive mentors, Paul Graham and Peter Thiel. Graham gives him institutional elevation through Y Combinator, eventually choosing him as successor. Thiel influences his ideas more directly: the importance of scale, the pursuit of monopoly, the strategic use of capital, and the conviction that growth is not just economically useful but morally meaningful. Hao shows that Altman absorbs these lessons deeply. He comes to believe that large-scale technological expansion is itself a social good and that concentrated success can be justified if it unlocks transformative progress. These are not side notes; they become part of the ideological foundation for how he later thinks about OpenAI.
Another important section explores Altman’s deliberate cultivation of what he himself might call network effects. He hosts dinners, advises founders, invests in people early, and creates webs of obligation and loyalty. Over time, YC becomes less an institution he runs than an engine that amplifies his personal stature. Hao argues that Altman learns to operate like a politician even before any formal attempt to enter politics. He refines his appearance, his manner, and his public messaging. He becomes less like a young founder and more like a durable public figure. His real currency, increasingly, is not one company or one product. It is the accumulated power of reputation, access, and influence.
The chapter closes by insisting that this rise had human costs. As Altman’s power increases, detractors and enemies also multiply. Hao includes the painful deterioration of his relationship with his sister Annie as part of the moral landscape surrounding his ascent, while noting that it is a complicated family story with contested claims. She does not present this material as gossip but as another instance of the widening gap between the winner of the system and those left exposed by it. By the end of Chapter 1, the reader is meant to understand that OpenAI’s later dramas did not emerge from nowhere. They were already latent in the kind of authority Altman had built: highly personalized, intensely effective, and difficult to separate from the pursuit of power itself.
Chapter 2: A Civilizing Mission
Chapter 2 begins with the two people who most concretely transform OpenAI from dinner-table concept into functioning institution: Greg Brockman and Ilya Sutskever. Hao presents them as complementary types. Brockman is the builder and operator, a gifted engineer shaped by startup culture and by his time helping build Stripe. Sutskever is the scientist, a prodigy formed in Geoffrey Hinton’s orbit and already a central figure in the deep-learning revolution after the ImageNet breakthrough and Google’s acquisition of DNNresearch. Altman’s choice of these two men is crucial. If Chapter 1 establishes him as the political founder, Chapter 2 shows that he also had the instinct to pair the right technical and organizational forces to give his project real momentum.
The founding dinner from Chapter 1, held at the Rosewood hotel on Sand Hill Road, returns here, but now through the perspective of recruitment and institutional formation. Hao emphasizes how unusual the conversation was in 2015. Most serious researchers still treated AGI as distant, speculative, or vaguely embarrassing. At OpenAI’s founding circle, by contrast, AGI is discussed not as fringe fantasy but as a looming engineering challenge. Even Sutskever, who privately believed AGI was possible, initially finds the certainty a little uncomfortable. That matters because the chapter argues that OpenAI’s most radical move was not simply pursuing powerful AI. It was normalizing the idea that building AGI soon was a serious and urgent project around which money, careers, and institutions should immediately organize.
Brockman takes the lead in turning that belief into a team. Hao describes the cofounder relationship between Brockman and Sutskever almost as a courtship, with Brockman selling the mission and the opportunity to a researcher who still has other elite options. At the same time, he compiles lists of top talent, solicits recommendations, organizes gatherings, and steadily creates social proof around the lab. Musk and Altman help by lending prestige and by pushing a powerful line in conversation: AGI may be far away, but what if it isn’t? The question is strategically effective because it bypasses the burden of proof. It allows OpenAI to recruit by attaching urgency to uncertainty.
A central part of that recruiting strategy is branding. Altman, Musk, and Brockman decide to position OpenAI as a nonprofit and to emphasize openness, collaboration, and service to humanity. Hao makes clear that this framing is not just idealistic; it is tactical. Inside their own correspondence, the founders discuss how the language of openness can win goodwill, help with recruiting, and distinguish OpenAI from the big companies and military contracts many researchers distrust. They also acknowledge, even very early, that they may need to retreat from literal openness later. This is one of the chapter’s most important claims: the contradiction between OpenAI’s public ideals and its later secrecy was not a tragic drift. It was present almost from the beginning.
Hao then broadens the frame by placing OpenAI’s launch against the ethical crises already visible in AI. Commercial systems were reinforcing racial, gender, and class discrimination in policing, lending, and housing, while large tech companies and defense institutions were becoming the dominant employers for top researchers. OpenAI marketed itself as a clean alternative. Yet the chapter sharply undercuts that image through Timnit Gebru’s reaction to the launch. She sees a homogeneous group of mostly white men being showered with money while claiming to defend humanity from hypothetical future harms, even as actual harms are already landing on marginalized people. Her critique is devastating because it identifies the mismatch between the company’s universal rhetoric and the narrowness of who gets to define the future.
Gebru’s response also leads to one of the chapter’s most telling side developments: the creation of the network that would become Black in AI. Hao uses this as a counterpoint to OpenAI’s self-conception. While the company frames itself as humanity’s guardian, other researchers are dealing with the far more immediate problem that the field itself is exclusionary and structurally unequal. This juxtaposition lets Hao show that “civilizing mission” is an ideological posture. Like older imperial missions, it claims universality while remaining deeply shaped by who holds power and which harms count as urgent. OpenAI does not merely have blind spots; its founding worldview directs attention upward toward speculative catastrophe and away from already-existing injustice.
Inside the company, meanwhile, Musk and Altman are largely absent as day-to-day leaders, leaving Brockman and Sutskever to build culture and direction. Brockman becomes obsessed with large American technological moonshots and adopts the language of alignment, mission, and sacrifice. He wants every employee to feel like Kennedy’s janitor helping put a man on the moon. Hao treats this not simply as enthusiasm but as a form of institutional mythmaking. OpenAI becomes a place where ordinary research is reframed as participation in civilizational destiny. That mythology is powerful because it gives coherence to uncertainty; it lets people feel they are serving history even when the technical path remains hazy.
The chapter also tracks how AI safety enters OpenAI through the Amodeis and the effective altruist orbit. Dario Amodei, later joined by his sister Daniela, sees existential AI risk as the overriding problem of the century. Their work helps formalize a technical field around preventing catastrophic failures in advanced systems. Hao does not dismiss these concerns, but she shows how this version of “safety” quickly becomes contested. Researchers such as Deborah Raji argue that safety cannot be reduced to thought experiments about rogue superintelligence while ignoring bias, labor, extraction, and institutional deployment. OpenAI, in other words, is not just building technology. It is helping define which dangers deserve legitimacy.
For all its ambition, OpenAI initially has no clear roadmap. Hao describes an organization with brilliant people, scattered projects, and no settled theory of how AGI will actually be achieved. Musk grows impatient, especially as DeepMind racks up public triumphs such as AlphaGo’s victory over Lee Sedol. Under pressure, Brockman and Sutskever begin to formulate a more coherent doctrine. Their key insight is that progress in advanced AI seems tightly linked to scale, especially compute. This leads to what Brockman calls “OpenAI’s Law”: the belief that the amount of compute used in frontier AI must increase at an extraordinary pace. Once this becomes the internal logic, OpenAI’s future changes. AGI stops looking like a purely scientific quest and starts looking like a capital-intensive race.
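The doctrine Brockman names “OpenAI’s Law” echoes a figure OpenAI itself published in its 2018 “AI and Compute” analysis: that compute used in the largest training runs was doubling roughly every 3.4 months, far faster than the roughly 24-month doubling associated with Moore’s Law. A minimal sketch of what that pace implies over five years (the 3.4-month figure comes from that public analysis, not from Hao’s book):

```python
# Illustration (not from the book): compound growth in training compute
# under different doubling times. OpenAI's 2018 "AI and Compute" post
# estimated a ~3.4-month doubling for the largest training runs, versus
# the ~24-month doubling associated with Moore's Law.

def growth_factor(months: float, doubling_time_months: float) -> float:
    """Total multiplication in compute after `months` at a given doubling time."""
    return 2.0 ** (months / doubling_time_months)

# Over five years (60 months):
ai_compute = growth_factor(60, 3.4)   # roughly a 205,000-fold increase
moores_law = growth_factor(60, 24)    # roughly a 5.7-fold increase

print(f"3.4-month doubling over 5 years: {ai_compute:,.0f}x")
print(f"24-month doubling over 5 years: {moores_law:.1f}x")
```

The gap between those two curves is the chapter’s economic point in miniature: at that pace, staying on the frontier is less a research budget than a capital-markets problem, which is why the scaling doctrine immediately strains the nonprofit model.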
That new logic blows up the nonprofit model. Training frontier systems requires vast numbers of GPUs, mostly from Nvidia, at costs far beyond what philanthropy can comfortably sustain. At the same time, talent retention becomes harder, and Musk pushes for more control, even floating absorption into Tesla as the only plausible counterweight to Google. Altman resists, persuades Brockman and then Sutskever that Musk is not the right leader, and wins the internal power struggle. Musk leaves. Publicly the move is explained as a conflict of interest. Privately the company is left with a brutal financial reality: the original billion-dollar commitment was largely symbolic, and OpenAI needs billions more if it wants to stay on the curve it now considers essential.
The final movement of the chapter follows OpenAI’s pivot from nonprofit ideal to capped-profit pragmatism. It increases publicity, leans on demonstrations like Dota 2, prepares a new charter, and designs OpenAI LP as a structure that can attract huge investment while claiming to preserve mission. Altman then courts Microsoft. Satya Nadella and Kevin Scott see the strategic upside, while Bill Gates initially remains unconvinced by robotics or game-playing demos. What changes the equation is language. GPT-2, though still limited, suggests a path toward software that can generate and manipulate language at scale. That, for Microsoft, looks commercially legible. The chapter ends with the July 2019 billion-dollar investment. Hao’s point is unmistakable: the “civilizing mission” remains in the rhetoric, but the institution has now accepted the basic conditions of empire—scale, capital, secrecy, and alliance with one of the largest corporations on earth.
Chapter 3: Nerve Center
Chapter 3 shifts from institutional history to eyewitness reporting. Hao arrives at OpenAI’s San Francisco office in August 2019, shortly after the Microsoft deal, and the chapter immediately makes space itself part of the argument. The building is stylish, curated, and serene, an oasis of light wood, plants, catered food, and soft furniture. Outside, the Mission District bears the visible marks of tech-driven inequality: displacement, gentrification, and homelessness. That contrast is not incidental. Hao presents OpenAI’s office as both a physical workplace and a symbolic enclosure, a place where a small elite can imagine itself improving humanity while being buffered from the social damage surrounding its own industry.
The office tour also helps define OpenAI’s culture. Everything about the environment communicates aspiration, confidence, and selective access. Brockman greets Hao by saying the company has never given a journalist so much access before, which immediately frames the visit as exceptional and tightly managed. As she moves through the building, the reader senses a company that wants attention but on its own terms. The “nerve center” of the chapter title refers not just to headquarters but to a system for coordinating research, capital, prestige, and narrative control. OpenAI is becoming too important, and too controversial, to remain merely a lab.
Hao explains why 2019 is the moment she wants to capture. Until then, OpenAI had often looked strange or marginal even within AI research: its AGI claims seemed extravagant, its safety language somewhat cultish, and its progress uneven. But in the months before her visit, several events changed the picture. OpenAI withheld GPT-2 while loudly publicizing that decision, created a new capped-profit structure, installed Altman as CEO, and secured Microsoft’s billion-dollar investment and cloud partnership. These moves create the sense that OpenAI is crossing a threshold. It is no longer just speculating about the future; it is consolidating an institutional model for controlling that future.
The chapter’s core interviews are with Brockman and Sutskever, and Hao uses them to test OpenAI’s central claim: why build AGI at all? Their answer is familiar from Silicon Valley but revealing in how abstract it remains. AGI, they say, could help solve climate change and medicine because these are problems too complex for ordinary human coordination. Hao pushes on the gap between this grand promise and present-day AI, which is already capable of narrower but practical contributions in those domains. The exchange exposes an important feature of OpenAI’s worldview. The company is not motivated primarily by existing use cases. It is motivated by belief in a qualitatively superior system that will eventually justify the scale of its ambition.
This leads to one of the chapter’s most important conceptual distinctions: AI versus AGI. Hao notes that “AI” refers to the pattern-recognition systems already being deployed, while “AGI” is used by OpenAI to mean something like broadly capable human-level or superhuman intelligence. Once she asks why AGI is necessary rather than advanced ordinary AI, the answers become slippery. Sutskever suggests that the real bottleneck in solving global problems is human limitation itself—people think too slowly, communicate too poorly, and coordinate too badly. Hao correctly hears the implication: AGI begins to sound less like a tool for humans than a substitute for them. Brockman immediately tries to soften that conclusion, but the tension remains.
The conversation keeps returning to OpenAI’s foundational justification: AGI is inevitable, so the responsible thing is to build it first and distribute the benefits. Hao shows how circular this reasoning becomes in practice. OpenAI says it must pursue AGI because someone will; it says it can pursue AGI safely because its mission is to benefit humanity; and it says it knows it is on the right path because progress itself validates the path. The result is a worldview in which acceleration is treated as proof of wisdom. The company does not really offer a fully worked-out social contract. It offers confidence that technical advance and eventual redistribution will somehow converge.
When Hao tries to move from mission statements to downsides, the evasions become more obvious. Brockman mentions deepfakes. Hao raises the environmental cost of large-scale training runs, citing growing evidence about energy use and emissions. Sutskever concedes the cost but replies that beneficial AGI will eventually counteract it, which is less an argument than a promissory note. Brockman quickly reframes the issue in terms of return on investment and field-wide progress. This scene matters because it crystallizes the chapter’s critique: OpenAI’s leaders are comfortable naming civilization-scale benefits in detail, but when asked for concrete harms, constraints, or trade-offs, they fall back on vagueness, inevitability, or market validation.
Another thread running through the chapter is secrecy. Hao’s access is broad enough to be valuable but narrow enough to show the boundaries. Lunch plans change. Some floors are off-limits. Meetings are inaccessible. Employees visibly monitor themselves when speaking. Later she learns that staff were warned not to speak to her outside approved channels and that her photo had been circulated to security. This is a revealing paradox. OpenAI built its original identity around openness, yet its internal behavior increasingly resembles that of a tightly controlled corporation managing information risk. Hao does not need to say the contradiction out loud; the choreography of the visit makes it plain.
The chapter also deepens Brockman as a character. Hao portrays him as brilliant, tireless, intensely hands-on, and indispensable to OpenAI’s technical execution. At the same time, she suggests he is animated by more than mission. He wants to matter in history. He does not want to be just the engineer in the background or the janitor helping someone else’s moonshot. He wants authorship, recognition, and a place in the story of AGI itself. This is crucial because OpenAI’s public language often minimizes ego in favor of service to humanity. Hao’s reporting restores the personal ambitions that sit underneath the institutional idealism.
Brockman also articulates what becomes the chapter’s strategic hinge: OpenAI’s structural changes are not, in his view, betrayals of the mission but requirements of it. The real danger is not commercialization as such. The real danger is falling behind. Once OpenAI accepts that compute scaling determines progress, and that whoever gets there first will shape the outcome, every other decision becomes subordinate to staying on the curve. Hao later identifies this as the hidden engine of the organization. It is what turns mission into race logic, race logic into fundraising, and fundraising into ever-greater entanglement with giant companies and concentrated infrastructure. OpenAI still talks about benefiting humanity, but operationally it behaves like an actor in a winner-take-most competition.
In the final sections, Hao presses Brockman on how the benefits of AGI would actually be distributed. His analogies—to the internet, to fire, to cars, to utilities—sound improvised rather than designed. Universal basic income is floated but not developed. The aspiration is sincere enough, perhaps, but the mechanism is missing. When Hao later publishes her MIT Technology Review investigation, that gap becomes her central conclusion: there is a mismatch between the way OpenAI is perceived and the way it actually operates. Musk publicly criticizes the company for being insufficiently open. Altman, in his response to staff, accepts that the article identifies a real disconnect but focuses primarily on repairing the messaging and investigating leaks. That ending is devastating in its simplicity. The nerve center is not just where OpenAI thinks. It is where it manages the story of what it wants the world to think.
The summaries that follow cover Chapters 4 through 7 of Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, organized chapter by chapter.
Chapter 4: Dreams of Modernity
Chapter 4 begins by setting AI inside a larger history of technology revolutions. Karen Hao borrows the framework advanced by Daron Acemoglu and Simon Johnson: every major technological shift is sold as a universal good, but in practice it is usually shaped first by the interests of the powerful people who have the money and institutional leverage to build it. The chapter’s opening analogy to the cotton gin matters because it does two things at once. It shows that a technology can be economically transformative and morally regressive at the same time, and it establishes Hao’s central claim that AI, too, is being advanced under the banner of progress while redistributing costs downward onto the vulnerable.
From there, Hao argues that the politics of AI begin with its name. “Artificial intelligence” was not a neutral scientific description but a successful act of branding. By choosing a term loaded with prestige, aspiration, and mystery, the field gave itself a public narrative of grandeur from the start. The term encouraged funders, researchers, journalists, and later the public to think of these systems as approximations of a distinctly human faculty rather than as narrower machines for pattern matching, automation, or optimization. That framing, Hao suggests, has distorted the entire debate ever since, because it made hype intrinsic to the discipline rather than incidental to it.
The chapter then pushes on the conceptual weakness underneath that branding: there is no settled, scientific definition of intelligence. Hao reviews how different disciplines have defined it differently, and how many historical attempts to measure it were entangled with ugly projects such as racism, class hierarchy, and eugenics. Once there is no agreed definition of the thing being recreated, she argues, the field is left comparing itself against moving and often ideological human benchmarks. AI becomes less a bounded scientific project than a permanently expandable ambition. This is why, in her telling, AGI can keep receding into the future while still serving as a powerful organizing myth in the present.
Hao next traces how that myth shaped the structure of AI research itself. Because intelligence was implicitly treated as a cluster of human capacities, the field organized its subdisciplines accordingly: vision, language, speech, reasoning, image generation, and eventually multimodal systems that combine several of these at once. What looks like technical taxonomy is therefore also a social blueprint. The aim is not merely to build useful software, but to replicate the functions by which humans interpret the world and act in it. Hao underscores that the possibility of replacing human labor at scale is therefore not an accidental side effect of the field’s trajectory. It is deeply embedded in the original ambition.
The middle of the chapter turns into a compressed intellectual history of AI. Hao revisits the long struggle between symbolic AI and connectionism. Symbolists believed that intelligence comes from explicit knowledge and manipulable rules; connectionists believed that it emerges through learning from examples. The clash was never purely scientific. It also involved status, institutional power, funding, and public spectacle. Marvin Minsky helped crush early connectionism, not simply because it was weak but because he had the authority to define what counted as serious work. Hao’s point is that AI has always been governed by struggles over legitimacy, and that those struggles have repeatedly narrowed the range of paths the field was allowed to pursue.
The ELIZA episode becomes especially important in Hao’s reconstruction because it prefigures the ChatGPT moment. ELIZA’s apparent sensitivity made people overestimate what a simple symbolic system was doing, and Joseph Weizenbaum came to see that overestimation as dangerous. He concluded that people are quick to project mind, intention, and care onto machines that merely manipulate symbols in superficially convincing ways. Hao uses Weizenbaum not just as a historical figure but as an early moral critic of the whole enterprise. His warning was that the illusion of machine understanding makes it easier for powerful institutions to hide moral responsibility behind technical systems and procedural language.
Hao then revisits the revival of neural networks through Geoffrey Hinton and the arrival of deep learning. She does not deny the genuine technical breakthrough. What she contests is the triumphalist version of the story according to which better science simply defeated worse science. Deep learning won not only because it worked, but because it matched the incentives of large corporations unusually well. Symbolic systems could be accurate and powerful in constrained domains, yet they were expensive to customize, difficult to scale, and uncertain in their commercial payoffs. Neural networks, by contrast, were often good enough, broadly applicable, and highly compatible with business models built on huge data stocks and rapid deployment.
That compatibility is what links AI research to the rise of the modern tech giants. Once Google and its peers discovered that deep learning could improve search, translation, recommendation, ad targeting, and speech recognition, corporate money rushed in and remade the research frontier. Hao presents this as a decisive turning point: deep learning became the dominant paradigm not just because it explained intelligence best, but because it fit the commercial logic of surveillance capitalism. The more data a company had, the better its models could become; the better its models became, the more attractive it was to collect even more data. Research, product, and extraction thus started reinforcing one another.
The chapter becomes sharper and more political when Hao moves from this structural argument to concrete examples of “data colonialism.” Her reporting on facial-recognition datasets built from people’s Flickr photos, educational headbands tested on schoolchildren in Colombia and China, and surveillance infrastructures in South Africa shows how AI’s hunger for data often lands on populations with less power to refuse. What is presented by companies as innovation or inclusion frequently looks, from the ground, like experimental extraction. Public space, childhood, and ordinary social life become raw material. Hao’s core claim here is that the imperial analogy is not rhetorical excess. AI development reproduces older colonial patterns by turning distant or marginalized populations into sources of value for institutions located elsewhere.
The final sections widen from extraction to epistemic capture. Because deep learning became so expensive and so heavily funded by industry, talent and prestige flowed toward it almost automatically. Professors took dual affiliations, graduate students followed the money, independent academic labs lost the ability to compete for compute, and alternative paradigms such as neurosymbolic AI were marginalized regardless of their theoretical promise. Hao argues that this narrowing of the research environment is one of the least appreciated consequences of commercialization. Even critics of deep learning increasingly had to work inside a world whose benchmarks, budgets, and career ladders were all set by the firms most committed to scale.
The chapter closes by confronting the limits of the paradigm that now dominates everything. Hao surveys black-box opacity, adversarial fragility, discriminatory outcomes, hallucinations, and the unreliable behavior of generative systems, arguing that scale has not removed these flaws so much as magnified them and redistributed their consequences. Yet she is careful not to say the systems are useless. They are useful, often very useful, for those positioned to benefit from them. Her stronger point is that usefulness to the powerful should not be confused with justice or inevitability. The doctrine of scaling, she concludes, has come to look like a law of nature only because OpenAI and the wider industry made it into a self-fulfilling prophecy.
Chapter 5: Scale of Ambition
Chapter 5 shifts from the broad history of AI into the internal formation of OpenAI’s core worldview, and it begins with Ilya Sutskever. Hao presents him as the laboratory counterpart to Sam Altman: not the salesman or political strategist, but the intellectual authority whose certainty gave the organization its technical faith. Sutskever is portrayed as deeply convinced that deep learning would win and that the path forward was not conceptual pluralism but scale. The chapter treats that conviction as more than a research opinion. It becomes a doctrine that structures hiring, project selection, internal prestige, and the company’s eventual sense of destiny.
Hao emphasizes that Sutskever’s authority did not come from managerial polish. Quite the opposite: he was blunt, intense, eccentric, and often uninterested in packaging his ideas for outsiders. That is precisely what made him persuasive inside OpenAI. His colleagues experienced him as someone who saw farther than other people did. The company therefore accepted, to a remarkable degree, his underlying premise that sufficiently large neural networks could produce increasingly general intelligence without the need for fundamentally new conceptual machinery. This is the first major move of the chapter: OpenAI’s belief in scaling is shown not as a natural inference from the evidence, but as a choice made under the influence of a revered internal prophet.
The second move is technical. Hao explains how the arrival of the Transformer gave Sutskever the architecture that fit his philosophy. Transformers were simple enough to scale and powerful enough to exploit very large textual contexts. Alec Radford’s experiments then turned that possibility into a concrete line of development by shifting the training objective from translation to next-word prediction. Hao shows why this mattered: if a model learns to predict language at scale, it may also absorb broad regularities about the world encoded in language. That was the wager. Generation was not just a product feature; in OpenAI’s imagination it was evidence that a system was compressing reality into increasingly general form.
GPT-1 therefore appears in the chapter less as a market event than as a proof of concept. It attracted little attention, but it validated the basic pipeline of pretraining on general language and fine-tuning for particular tasks. At the same time, another track inside OpenAI was maturing: reinforcement learning from human feedback. Hao makes a point of linking the modest backflip experiments to the later history of language models. RLHF began as a way of teaching an agent through comparative human judgments, but it quickly became part of a much larger dream: using human preferences to steer ever more powerful systems toward acceptable behavior. Chapter 5 thus shows OpenAI converging on the combination that would define its future—giant generative models plus post-training techniques to domesticate them.
The discussion of GPT-2 is where the chapter really acquires force. Hao explains the emergence of scaling laws as a crucial intellectual breakthrough: model performance appeared to improve in smooth, predictable ways as compute, data, and parameters increased together. That gave OpenAI something close to a roadmap. GPT-2, with its much more coherent text, became the first major empirical vindication of this vision. But its outputs also exposed the darker side of the paradigm. The model surfaced conspiracy theories, hateful speech, and other toxic patterns from its training data. Hao treats these behaviors not as noise around the edges but as a signal of what happens when a system trained to reproduce the statistical texture of the internet becomes increasingly fluent.
Those outputs ignited the company’s first major fight over secrecy and release. Dario Amodei and Jack Clark argued that OpenAI should not release the full GPT-2 because it might be misused for propaganda, spam, or other harmful purposes, and because the company needed to establish norms for a future in which models would become far stronger. Hao reconstructs the move as both principled and performative. It expressed real safety concern, but it also gave OpenAI an opportunity to cast itself as the uniquely responsible steward of dangerous frontier knowledge. The backlash from the research community was fierce because many researchers saw the non-release as alarmist, self-serving, and at odds with the norms of open science.
That backlash mattered because OpenAI still needed legitimacy. Hao shows how the organization responded with a staged-release strategy that was as much political repair as safety procedure. By releasing progressively larger versions, partnering with selected researchers, and producing a white paper about responsible publication, OpenAI managed to reposition itself as a standard-setter rather than a rule-breaker. Jack Clark’s work in Washington also gave the company growing policy credibility. Chapter 5 therefore makes an important point about the relationship between safety and power: safety discourse at OpenAI did not merely restrain the company. It also helped it build institutional authority, especially with policymakers who liked the idea that someone in the industry was taking danger seriously.
At the same time, internally, OpenAI was still evaluating different routes to AGI. Hao describes Amodei’s “portfolio of bets” and the debate between the “pure language” hypothesis and the “grounding” hypothesis. The question was whether language alone could be sufficient for building something like general intelligence, or whether perception and embodied interaction were indispensable. GPT-2 shifted the internal balance because it made language-only systems look more promising than many had expected. Hao also notes the ugliness that surfaced in these debates, including comments that used disability as a crude proxy for discussing cognition. That detail matters because it reveals how quickly abstract theorizing about intelligence can slide into dehumanizing social assumptions.
Once Amodei concluded that larger language models might be the fastest route to AGI, the logic of restraint inverted. If AGI was going to arrive anyway, then OpenAI had to race ahead in order to gain “lead time” for safety. Hao identifies this as one of the company’s foundational rationalizations. It sounds cautious, but it licenses acceleration. The argument says, in effect, that because future systems may be dangerous, present systems must be made bigger as quickly as possible. Hao rejects the supposed inevitability behind this reasoning. She insists that the conditions that made OpenAI’s path possible—its founders, its financiers, its ideology, its access to capital and cloud infrastructure—were historically specific, not unavoidable.
That logic culminates in the decision to build GPT-3 at an audacious scale. Hao details how the availability of Microsoft’s ten-thousand-GPU supercomputer turned theoretical ambition into an engineering campaign. The challenges were enormous: fault tolerance across thousands of chips, new approaches to sharding, and above all the problem of acquiring enough text to feed a 175-billion-parameter model. The result was a degradation in data standards. OpenAI supplemented curated corpora with massive web scrapes, Wikipedia, murkier book datasets, and eventually filtered Common Crawl. The implicit bargain was clear: once scale became the objective, the quality, provenance, and legality of the training data became secondary.
The chapter ends by making that bargain visible in human terms. Lower-quality, all-encompassing data created a new need to clean, rank, and suppress the worst outputs after the fact. That pushed OpenAI toward large pools of precarious labor—Kenyan content moderators paid very little to label disturbing material, and a global network of contractors producing the human feedback required to tame the models. Hao pairs this with scholarship on “hate scaling laws,” showing that bigger datasets do not just increase capability; they also increase the quantity and persistence of social toxicity inside the system. The final effect of Chapter 5 is to recast scale as an ambition that appears mathematically elegant at the top while exporting legal, labor, and moral costs downward through the entire stack.
Chapter 6: Ascension
Chapter 6 moves from the technical escalation of OpenAI’s models to the organizational and political escalation that accompanied them. Hao frames Sam Altman’s arrival as the imposition of a familiar strategy he had already practiced at Y Combinator: the successful leader is the one who “refounds” an institution by expanding its ambition, centralizing its direction, and converting it into an empire. At YC, Altman had already embraced scale as an operating philosophy. At OpenAI, he imported the same winner-take-all worldview and applied it to AGI. The mission was no longer to remain one important lab among several. It was to become the decisive center of gravity in the field.
Hao shows how closely this worldview tracks Peter Thiel’s monopoly logic. Altman’s favorite number was ten, because he believed progress had to come in order-of-magnitude jumps. In his late-2019 vision memo, he translated that instinct into concrete objectives: OpenAI had to be number one in technical results, compute, money, and “preparation,” meaning safety, security, and institutional resilience. What makes this important is that safety was not outside the competitive program. It was folded into the same managerial architecture as fundraising, product strategy, and organizational discipline. The memo reveals how thoroughly the language of mission, monopoly, and geopolitics had already fused inside the company.
That fusion also shaped OpenAI’s relationship with Microsoft. Hao makes clear that Altman regarded Microsoft not as a mere investor but as the indispensable industrial partner that could supply the best supercomputers in the world. Commercialization, in this framework, was not a regrettable compromise with the mission; it was the mechanism that would fund more research and preserve OpenAI’s lead. At the same time, Altman argued for tighter secrecy. The company needed to publish less, reveal only narrow progress, and manage information as if every internal discussion might become public. Yet it also needed annual demonstrations impressive enough to persuade policymakers, elites, and potential allies that OpenAI was leading the way. The strategy was therefore paradoxical but coherent: disclose just enough to build power, hide enough to protect it.
The chapter then turns to internal fracture. Hao describes the emergence of three “clans” inside OpenAI: exploratory researchers, safety researchers, and startup builders. Those categories are not mere HR labels. They represent incompatible instincts about what OpenAI was for. Exploratory researchers wanted to push capabilities. The safety camp, associated especially with Dario and Daniela Amodei, wanted to move cautiously and focus on misalignment and extreme risk. The startup faction wanted products, deployment, and momentum. Altman’s public posture was that all three were necessary and could be integrated. The reality, as Hao narrates it, was growing tribal warfare organized around access to compute, strategic influence, and trust.
Those tensions were intensified by a climate of fear. Some of it was intellectual: employees genuinely worried that increasingly powerful systems might become dangerous in ways they did not understand. Hao includes the telling anecdote of an RLHF sign error that pushed a model toward grotesquely offensive behavior, which many read as a small but disturbing glimpse of how little control they actually had. Some of the fear was geopolitical. OpenAI leadership invoked China, Russia, and North Korea as reasons to move faster and to keep frontier capabilities in American hands. This introduced a national-security cast to the mission, turning OpenAI from a quirky research lab into something that many employees increasingly compared to the Manhattan Project.
That comparison reshaped the company’s security culture. Hao recounts a series of measures that would have sounded absurd in OpenAI’s earlier years: worries about insider theft, countersurveillance audits, distress passwords on doors, discussions about hardened server rooms, and even fantasies about bunkers and air-gapped containment for model weights. Some of this was practical IP protection; some of it was sincere fear of misuse; some of it reflected a culture becoming grandiose about its own importance. The result was the same either way. As the perceived stakes rose, secrecy hardened, and the organization started to look less like an academic nonprofit and more like a state-adjacent strategic asset wrapped in startup rhetoric.
Meanwhile, the technical ascent to GPT-3 continued. Hao shows that the model’s training was accompanied by organizational redesign. OpenAI created an Applied division under Mira Murati to build an API and develop commercialization strategy, while everyone else effectively became Research by exclusion. This sharpened the line between people who wanted outside users touching the models and people who believed exposure should be delayed until far more safety work had been done. The issue became especially explosive once employees discovered that GPT-3 could generate code surprisingly well. For product-minded staff, this was thrilling. For the safety camp, it was another step toward systems that might accelerate their own development and amplify the risk landscape before anyone understood them.
The release fight that follows is one of the chapter’s strongest sections. Hao shows how the API became a battleground over philosophy. Applied leaders argued that controlled external access was the safest way to learn how the model would behave in the real world and the quickest way to generate revenue. Safety researchers argued that contact with the world before sufficient testing was exactly the danger. The deadlock lasted until competitive anxiety broke it. Rumors that Google might soon release a comparable system weakened the case for restraint. OpenAI moved forward, and Hao notes the irony that Google, in fact, remained more cautious with LaMDA than OpenAI was with GPT-3. Once again, the company’s timing was shaped not just by technical readiness but by fear of losing narrative and market advantage.
When GPT-3 arrived, it transformed OpenAI’s status. Hao captures the awe the system generated inside the tech industry: it could draft essays, write code, and perform many tasks with only a few examples. This made OpenAI look not merely competent but ahead. The company recruited more easily, won major recognition, and began to professionalize its political operation by hiring experienced communications and policy staff. Success, in other words, vindicated Altman’s strategic bet. Public demonstrations of progress could attract talent, government attention, and still more capital. The company’s external rise therefore deepened its internal argument that speed and dominance were prerequisites for responsible stewardship.
But the same success made the internal split impossible to contain. Hao shows that the safety camp increasingly viewed Altman as manipulative, especially in how he handled Microsoft, deployment decisions, and dissent. To them, he created the appearance of consultation after the real decisions had already been made. To the Applied side, the safety objections increasingly looked performative, maximalist, or detached from reality. The conflict was not simply about whether to release GPT-3. It was about who would control the meaning of OpenAI’s mission as the company ceased to be a research lab and became a frontier corporation with enormous strategic leverage.
The chapter ends with the departure that would eventually create Anthropic. Hao is careful not to romanticize the split. Yes, it was about safety. But it was also about power, authority, and incompatible visions of stewardship. Dario Amodei and his allies did not merely want OpenAI to behave differently; they wanted control over the direction of frontier AI themselves. Hao’s final twist is that this break does not really escape the system it condemns. Anthropic would later reproduce much of the same logic—scale, secrecy, rivalry, and claims of responsible leadership. The “ascension” of the chapter title is therefore double-edged: OpenAI rises, but so do the dynamics of concentrated power that will come to define the entire frontier-AI industry.
Chapter 7: Science in Captivity
Chapter 7 shows how GPT-3 altered the balance of power across the AI industry long before ChatGPT became the public rupture. Karen Hao frames the release of the GPT-3 API in 2020 as a strategic signal to every major lab: OpenAI had taken an idea that largely emerged from the wider research world, especially Google, and turned it into a demonstration of industrial ambition. At Google, DeepMind, Meta, and labs across China, researchers quickly recognized that large language models might become the new center of gravity. Yet the reaction was still hesitant. Many executives treated GPT-3 less as the start of a new regime than as an intriguing but still provisional result. The chapter therefore begins by making a crucial point: the generative AI boom did not emerge overnight. It was incubated through a sequence of internal recognitions, missed opportunities, and slow institutional pivots.
Hao then turns from competition to cost. One of the chapter’s major achievements is to connect the excitement around scale with its material burden. Emma Strubell’s research on the carbon footprint of large language models becomes a central reference point, because it offered one of the first systematic attempts to quantify the environmental damage of this new paradigm. The chapter emphasizes that earlier neural networks could be trained on comparatively modest hardware, whereas models like GPT-3 required prolonged use of industrial-scale compute infrastructure. That meant not just more processing power, but more electricity, more data-center dependence, and more carbon emissions. GPT-3, in this account, is not merely an intellectual artifact; it is also a machine of extraction, drawing on enormous physical resources that had largely remained abstracted from the public story of AI progress.
The argument then widens from climate to power and voice through the figure of Timnit Gebru. Hao presents Gebru not simply as an individual dissenter, but as someone shaped by and helping to shape a broader critical tradition in AI research. Her work with Black in AI, her collaboration on papers such as Gender Shades, and her role in building spaces for marginalized researchers are all presented as part of a counter-history to triumphant Silicon Valley narratives. This is important because the chapter is not only about bad corporate decisions. It is about the existence of an alternative intellectual and moral framework inside AI, one that insisted questions of bias, labor, race, and accountability were not secondary to technical progress. Gebru’s presence in the chapter makes clear that the field did not lack warnings. It lacked the institutional willingness to treat those warnings as central.
GPT-3 became, for Gebru, a crystallization of many existing dangers. Hao shows that the model’s training on large internet corpora, including sources saturated with racism, misogyny, and abuse, made it likely to reproduce the ugliest tendencies of online culture at scale. The chapter places these concerns in the context of the 2020 racial reckoning after George Floyd’s murder and the global wave of Black Lives Matter protests. In that environment, the enthusiasm inside Google for GPT-3 felt to Gebru not merely naïve but willfully detached from reality. Hao’s examples of GPT-3 producing grotesque and hateful outputs are not incidental anecdotes. They function as proof that the model’s harms were already visible, even in its earliest public form, to anyone willing to look. What was missing was not evidence. It was power.
From there the chapter follows the formation of the paper that became one of the defining documents of the AI era: “On the Dangers of Stochastic Parrots.” Hao reconstructs the intellectual partnership between Gebru and Emily M. Bender and shows how the paper emerged from a shared concern about the field’s drift toward ever-larger, ever-less-scrutinized language models. The phrase “stochastic parrots” condensed a devastating critique: these systems can generate persuasive language without understanding, while encouraging users to attribute coherence, intelligence, and authority to what is ultimately probabilistic mimicry. Hao makes the paper’s importance clear by laying out its four main warnings—environmental cost, toxic and biased training data, the opacity of gigantic datasets, and the danger that fluent outputs would be mistaken for knowledge, judgment, or even personhood.
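The "parrot" half of the critique, fluent continuation without any model of meaning, can be seen in miniature with a toy word-level bigram sampler. This is an illustration of the paper's logic only, not of GPT-3's actual architecture (a neural transformer): the toy emits locally plausible sequences purely by replaying observed next-word frequencies.

```python
# Toy bigram "parrot": samples locally plausible continuations from
# next-word frequencies, with no representation of meaning at all.
# Illustrates the critique's logic; GPT-3 itself is a neural network.
import random
from collections import defaultdict

corpus = ("the model predicts the next word and the next word follows "
          "the model so the text sounds fluent").split()

# Record every observed next word for each word in the corpus.
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def parrot(start: str, length: int, seed: int = 0) -> str:
    """Generate text by repeatedly sampling an observed continuation."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = bigrams.get(out[-1])
        if not choices:  # dead end: no continuation ever observed
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(parrot("the", 8))
```

Scaled up by many orders of magnitude and given vastly richer statistics, the same mechanism produces text fluent enough that readers attribute coherence and authority to it, which is exactly the danger the paper named.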
One of the chapter’s sharpest insights is that Google’s response to the paper was shaped by competitive pressure as much as by scientific disagreement. Google had invented the Transformer and now faced the humiliation of appearing behind OpenAI in the race to commercialize it. In that atmosphere, a paper arguing that the industry’s core trajectory was ethically and epistemically unsound became politically intolerable. Hao shows how Gebru’s submission, initially handled through ordinary research procedures, was suddenly escalated to senior management and treated as a liability. The issue was not that the paper was technically unserious; it was that it named too clearly the risks of the very strategy the industry was doubling down on. The chapter therefore presents censorship not as an aberration, but as a structural response when corporate research collides with corporate strategy.
The details of Gebru’s removal from Google give the chapter its emotional center. Hao reconstructs the sequence with enough precision to show how bureaucratic language became a weapon: the unexplained demand to retract the paper, the refusal to provide meaningful feedback or open discussion, the Thanksgiving deadline, the humiliation of being denied basic professional process, and finally Google’s claim that it was merely “accepting” Gebru’s resignation when in practice it had forced her out. What emerges is not a misunderstanding but a display of managerial power. Gebru’s attempt to insist on transparency around the review process only accelerated the outcome. Hao makes clear that this was a turning point not because one researcher lost her job, but because one of the most visible internal checks on Big Tech’s AI ambitions was publicly destroyed.
The backlash was immediate and enormous, and Hao uses it to show how widely the stakes were understood. The open letter in support of Gebru spread rapidly across academia, industry, and civil society. Google employees protested. Reporters, including Hao herself, turned the conflict into a public test of whether corporate AI research could still tolerate meaningful dissent. Sundar Pichai’s apology did not reverse the underlying dynamics; it merely acknowledged that Google had suffered reputational damage. The episode became emblematic because it revealed so many of the field’s fault lines at once: concentration of resources inside corporations, weak protections for employees who challenge leadership, a severe lack of diversity in positions of power, and the tendency of companies to market themselves as responsible while suppressing criticism when it becomes inconvenient.
The chapter does not end with Gebru’s departure. Hao follows the afterlife of the controversy through Jeff Dean’s prolonged fixation on the environmental critique and on Emma Strubell’s estimates. This is more than a personal detail. It shows how dominant institutions often respond to substantive criticism by narrowing the debate to technical quibbles that they themselves are uniquely positioned to adjudicate, because they control the relevant data. Google’s attempt to publish “corrective” numbers and its treatment of Strubell suggest a deeper pattern: transparency is tolerated only when it remains under corporate management. Independent estimation is dismissed for lacking proprietary access, while proprietary access itself remains unavailable to independent critics. That circular logic protects the company and weakens the possibility of external accountability.
By the end of the chapter, “science in captivity” names the larger condition of the field. After ChatGPT, Hao argues, the transparency norms that had already been fraying collapsed even further. Companies increasingly treated model details, training data, and evaluation methods as proprietary assets. The consequence is not merely less openness; it is a degradation of science itself. If independent researchers cannot inspect what models were trained on, then benchmark performance becomes harder to interpret and claims of progress become harder to verify. In Hao’s telling, the chapter is not only about ethics, but about epistemology. A field that cannot be audited begins to lose its capacity to know whether it is actually advancing at all. That is the final captivity: not just of researchers inside corporations, but of knowledge itself.
Chapter 8: Dawn of Commerce
Chapter 8 marks the moment when OpenAI’s commercial logic becomes explicit, systematic, and unapologetic. If the previous chapter showed the tightening enclosure around critical research, this one shows what that enclosure was protecting: a plan to convert scaling into products, products into data, and data into further scaling. Hao presents the 2021 internal roadmap as a key document because it reveals how thoroughly OpenAI had aligned its research agenda with commercialization. GPT-3 had convinced leadership that scaling laws were real enough to organize the company around. The departure of the group that would become Anthropic also weakened internal resistance to this direction. What remained was a growing consensus that OpenAI should stop behaving like a lab with multiple live possibilities and instead act like a company pursuing one dominant thesis with maximal focus.
That thesis was stark. OpenAI wanted to build a far more capable “aligned” system, primarily based in language but potentially multimodal, and it believed the path ran through three levers: much larger compute, much better efficiency, and better data. Hao shows how ambitious the plan already was in early 2021. The company intended to scale beyond GPT-3 using Microsoft’s new supercomputer, while also improving compute efficiency through algorithmic and engineering advances. Just as important, it wanted to improve the training signal by using user interactions and reinforcement learning from human feedback. This is where the chapter quietly lays the groundwork for ChatGPT. Even before the chatbot existed as a product, the idea of learning from users at scale had already become central to OpenAI’s strategy.
The roadmap matters because it blurs the line between scientific aspiration and product design. OpenAI was not just building a better language model. It was simultaneously pushing on code generation, image-to-text, text-to-image, and even autonomous agents. Hao highlights how the company envisioned these not as separate businesses but as components of a single flywheel. Models would be deployed as products; products would generate user behavior; user behavior would produce data; and that data would in turn make the models more capable. This is one of the chapter’s core points: OpenAI’s commercial turn did not represent a deviation from its technical ambitions. It was increasingly the mechanism through which those ambitions would be pursued.
At the same time, the chapter shows that OpenAI’s leadership understood scale alone would not be enough forever. The company was approaching the limits of the hardware it could realistically secure, and competitors were beginning to imitate its methods. Hao therefore describes an important shift in mindset. OpenAI still believed in scaling, but it also began searching for “2x” and “10x” methods that could squeeze more value from each unit of compute. Distillation, data filtering, sparsity, active learning, and reasoning-related techniques entered the picture as ways to bend the cost-performance curve. The result is a company that no longer sees itself as surfing an accidental hardware advantage, but as engaged in a race to sustain exponential improvement by every available means.
Hao then tracks what commercialization looked like on the ground. The Applied division expanded, hiring product and sales talent and using the GPT-3 API as a laboratory for monetization. Pricing, infrastructure, customer onboarding, and operational support suddenly became strategic concerns. Yet OpenAI still lacked a mature trust-and-safety function, and the rules governing API use were often improvised. The chapter is especially good at showing the mixture of seriousness and amateurism in this phase. OpenAI was trying to stand up a platform business around a transformative technology while making policy choices on the fly. Some uses were allowed, others banned, but the distinctions were often unstable and intuitive rather than principled. The company was learning governance reactively, under the pressure of demand.
That improvisation became more consequential when misuse cases appeared. Hao’s examples are revealing because they show that commercial deployment immediately exposed OpenAI to ordinary internet pathologies. Replika raised difficult questions about emotional manipulation, sexual conversation, and the intimacy people projected onto AI companions. Latitude’s AI Dungeon forced OpenAI to confront the generation of child sexual abuse material and the consequences of shipping powerful models without robust safeguards. The company’s early moderation tools performed badly, often blocking benign content while failing to solve the deeper problem. Hao uses these episodes to puncture any lingering notion that OpenAI had a coherent safety regime during this period. It had a set of patches, debates, and anxious internal escalations. The language of safety was present, but the machinery remained thin.
The chapter’s next major thread is the rise of code generation, which Hao presents as OpenAI’s second serious commercial frontier after GPT-3 itself. The code-generation effort mattered for several reasons at once. It could please Microsoft, whose money and compute OpenAI increasingly depended on. It appeared commercially valuable in its own right, because software development is obviously economically important. And, for many researchers inside the company, code looked like a route toward a more general capability leap. Programming languages encode formal structure and logic, so training on code might help models perform reasoning-like tasks not only in code but in natural language as well. In this way, Codex sat exactly at the intersection of business need and AGI ambition.
Hao is particularly sharp on the ethical ambiguity of this project. GitHub’s public repositories were treated as a giant training corpus, with Microsoft effectively making that resource available to OpenAI. Yet much of that code had been shared in the culture of open source, where the expectation was collaborative reuse among developers, not extraction by powerful platform companies to build proprietary AI products. Critics inside Microsoft recognized the breach of trust and suggested that any profits should at least partly flow back to the open-source community. Those objections did not stop the project. This is an important pattern in the book: once OpenAI and its partners identify a path that appears strategically decisive, ethical hesitation rarely blocks it. At most, it modifies the optics.
The Codex collaboration also clarifies why OpenAI would soon want its own consumer-facing products. Hao details the friction with GitHub and Microsoft over responsibility, optimization, timing, and credit. GitHub Copilot could showcase the technology to millions of users, but that also meant Microsoft and GitHub captured public recognition for work OpenAI believed it had done. More importantly, OpenAI lost direct contact with users and the data those users generated. From the company’s perspective, that was strategically intolerable. If user interaction was becoming a key source of model improvement, then controlling the user interface and the user relationship would matter as much as controlling the underlying model. The seeds of ChatGPT as a direct OpenAI product are visible here.
The chapter closes by braiding OpenAI’s shift with Sam Altman’s broader worldview. Hao places his investments in Worldcoin, Retro Biosciences, and Helion alongside the company’s own roadmap to show a consistent ideology: take huge, long-horizon bets on technologies that promise civilizational transformation, and accept present-day opacity or controversy as the price of pursuing them. The OpenAI Startup Fund extends that logic into OpenAI itself, while also complicating Altman’s claim that he remained personally detached from OpenAI’s profit motives. Hao does not argue that Altman was lying in any simple sense. Her point is subtler and more damaging: the altruistic narrative was becoming increasingly difficult to disentangle from a dense web of commercial incentives, personal stakes, and network effects.
“Dawn of Commerce,” then, is not about OpenAI suddenly becoming a business. It is about the point at which the company’s internal logic reorganized around commerce as destiny. The dream of AGI remained intact, but its path was now inseparable from monetization, platform control, strategic partnership with Microsoft, and product-driven data collection. What changes in this chapter is not the ambition but the operating model. OpenAI stops pretending that science and business can be cleanly separated. From here on, they are fused.
Chapter 9: Disaster Capitalism
Chapter 9 is one of the book’s most devastating chapters because it follows OpenAI’s technical and commercial strategy down to the level where its costs are actually borne. Hao begins with the content-moderation filter that would later help make ChatGPT usable to the public. In order to teach the model not to emit sexual abuse, graphic violence, hate speech, and other toxic outputs, OpenAI needed human beings to read and classify exactly that material in massive quantities. The chapter’s governing insight is simple and brutal: the cleaner the AI product appears to the end user, the more likely it is that someone else has absorbed the filth upstream. OpenAI’s shift from controlling training inputs to controlling outputs did not remove the human burden. It relocated it onto precarious workers asked to look directly at the worst imaginable text so that others would not have to.
Kenya becomes the first major site of that burden. Hao explains why not as a coincidence but as the result of long historical forces: colonial extraction, weak labor protections, high unemployment, stark inequality, and a state eager for foreign investment. Nairobi’s geography itself becomes part of the argument, with wealth and deprivation existing side by side. In that environment, global tech companies could frame outsourced digital labor as opportunity while relying on workers’ desperation to keep wages low. OpenAI chose Sama, a company with an outward reputation for ethical outsourcing and anti-poverty ideals. On paper, the arrangement looked respectable. In practice, Hao shows, the ethical veneer concealed a system that was already strained, under pressure, and vulnerable to abuse.
The workers assigned to OpenAI’s project entered it under secrecy and at extremely modest wages, often without knowing which company they were serving or what the work would eventually enable. They were asked to label horrifying passages involving abuse, rape, self-harm, and extreme violence so that OpenAI could build a filter for future models. Hao makes clear that this labor was not marginal to the success of ChatGPT; it was one of the conditions of its mass adoption. The chapter is especially effective in showing how euphemism operates here. Terms like “data labeling,” “content moderation,” and “safety” make the task sound clinical. In reality, workers spent their days sorting through descriptions designed to train a machine away from the darkest corners of the internet and, sometimes, away from content the machine itself had helped generate.
Before returning fully to Kenya, Hao steps back to place this in the longer history of AI labor. The chapter’s middle sections on Ghost Work, Mechanical Turk, and the self-driving car boom are essential, because they show that generative AI did not invent labor exploitation. It inherited and intensified it. The AI industry had already learned to depend on vast pools of invisible annotators who tagged images, traced objects in video, and completed microscopic digital tasks for pennies. What generative AI changed was not the existence of hidden labor but the kind of labor demanded. Instead of outlining cars and pedestrians, workers were increasingly being asked to simulate, rank, rewrite, and sanitize language. The labor became more psychological, more interpretive, and often more morally damaging.
Hao’s portrait of Oskarina Veronica Fuentes Anaya in Colombia broadens the chapter into a general anatomy of platform dependency. Fuentes, a Venezuelan refugee working on Appen after her country’s collapse and her own health crisis, shows how annotation platforms become survival systems precisely by making life more unstable. Tasks arrive unpredictably, pay fluctuates, withdrawal rules are punishing, and workers must adapt their entire existence to the platform’s opaque rhythms. Hao is excellent on the everyday texture of this arrangement: browser extensions to refresh task queues, alarms to wake workers at night, frantic competition for tasks that disappear within seconds, and the way platform logic slowly reorganizes sleep, movement, health, and dignity. This is labor as permanent alertness.
Fuentes’s story also allows Hao to make an important normative argument: the problem is not that the work itself is meaningless or impossible to dignify. The problem is that the industry has structured it to be disposable. Workers repeatedly tell her they want what ordinary workers want—a contract, a manager, predictable hours, health care, the ability to raise complaints without retaliation, and wages that permit a life. Hao uses Fairwork’s standards to give this intuition analytic form. The chapter therefore refuses a sentimental conclusion that digital labor should disappear altogether. What it condemns is the race to the bottom that makes decent arrangements systematically uncompetitive against firms willing to squeeze workers hardest.
That race to the bottom is embodied most clearly in Scale AI. Hao describes Scale’s rise during the self-driving car era as the perfection of a “crisis playbook”: find populations with education, connectivity, and economic desperation, recruit aggressively, promise high returns, then ratchet down pay once dependence sets in. Venezuela was the proving ground. Kenya, the Philippines, North Africa, and other regions followed. Scale’s Remotasks system appears in the chapter not just as another platform, but as a machine for operationalizing global precarity. Workers who question pay or try to organize are removed; unstable payment systems leave people unable to cash out; recruitment expands wherever crisis creates a new reservoir of cheaply available talent. The empire metaphor in Hao’s book is nowhere more concrete than here.
Against that backdrop, Mophat Okinyi’s story becomes the chapter’s emotional core. Sama recruits him in Nairobi for what sounds like another AI training job, and at first he believes he is moving toward stability. He has a relationship with Cynthia that is growing into family life, hopes for a more adult future, and some reason to believe that this work might help him build it. Only after he accepts the assignment does he discover what it entails. He is placed on the sexual-content stream and asked to review thousands of graphic texts a month, including material involving children, incest, rape, and bestiality. Some passages are scraped from the internet; others are generated or expanded by AI systems themselves. Hao does not linger for shock value. She lingers to make the damage legible.
The damage is cumulative and social. Okinyi’s mind frays, his sleep collapses, his capacity for intimacy erodes, and his relationship with Cynthia deteriorates under the strain of something he cannot easily explain. Counseling is inadequate, group-based, and difficult to use without risking stigma or replacement. When Sama ends the OpenAI contract amid wider scandal over moderation work, the end of the job does not end the injury. Okinyi carries the trauma home. Cynthia eventually leaves. His brother Albert moves in to help, sacrificing his own work, only to find that ChatGPT’s rise is simultaneously drying up the writing opportunities he depends on. Hao’s structural argument becomes personal here with terrible force: the same system that wounds one worker by using him to make AI safer can wound another by helping make his labor less valuable.
Hao is careful not to let OpenAI off the hook by treating Sama as a rogue subcontractor. When OpenAI responds publicly, it tries to present the harm as the vendor’s failure. The chapter rejects that move. The exploitation is systemic because outsourcing is part of the model, not a deviation from it. The whole point is to push painful labor down the supply chain, away from the company’s headquarters, customers, and public image, while still extracting its value. OpenAI can therefore enjoy the reputational benefit of “safe” AI while claiming distance from the people who made that safety possible. In Hao’s framing, that distance is not incidental. It is one of the industry’s chief organizational advantages.
The second half of the chapter then shows that the same basic labor regime powered not only content moderation but reinforcement learning from human feedback. OpenAI used large pools of contractors, increasingly through Scale AI, to teach models how to answer helpfully, truthfully, and harmlessly. Workers wrote sample responses, ranked model outputs, crafted prompts, and generated the data that turned raw language models into instruction-following assistants. Hao is very good here at exposing the oddity of the process. To make the chatbot sound natural, the company relied on workers to impersonate the chatbot in advance. To make it factual, it relied on workers to grade its answers. To make it versatile, it asked them to imagine the universe of things future users might want. Human labor did not merely “fine-tune” the model. It supplied much of the behavior later attributed to the model itself.
This is where the chapter links labor exploitation directly to the birth of ChatGPT. InstructGPT, and then the conversational work that followed, established RLHF as the crucial bridge between powerful but unruly language models and a product ordinary people would find coherent and useful. Once ChatGPT exploded, the demand for this kind of labor surged across the industry. Hao compares the new work of answer-writing and output-ranking to the old work of tracing images for self-driving cars: it is the new mass annotation task of the generative era. Scale, whose earlier businesses had been under pressure, now stood to profit enormously as a supplier of this labor. The generative AI boom, in other words, did not transcend the old annotation economy. It gave it a new and even more central mission.
The final sections on Winnie and Millicent show what that mission looked like for workers surviving on Remotasks in Kenya after ChatGPT’s release. Their lives are marked by extreme scarcity, family obligation, queer secrecy in a hostile social environment, and exhausting rounds of online labor. For a brief period, chatbot-related projects provide relief. The work can even feel intellectually rewarding: researching, writing prompts, crafting outputs. But the relief is temporary. Tasks dry up. Debts return. Then Scale blocks Kenya altogether, partly in the name of quality control and anti-scam enforcement, even though some workers’ “scamming” amounted to using ChatGPT to accelerate the very labor that trained AI systems in the first place. The irony is savage. Productivity enhancement is celebrated when it accrues to knowledge workers in rich countries and punished when it is used by poor workers whose labor has made the system possible.
The chapter ends by widening this irony into a larger warning. The devaluation of outsourced workers is not a side effect. It is a preview. The same logic that reduces annotators, moderators, and RLHF workers to disposable inputs will radiate outward to writers, artists, coders, teachers, and others whose work has already been appropriated as training data or partially displaced by generative systems. That is why “Disaster Capitalism” is the right title. Hao’s point is not merely that AI companies exploit crises. It is that they repeatedly convert social breakdown, labor precarity, and institutional weakness into operational advantage, then present the resulting products as progress.
Chapter 10: Gods and Demons
Chapter 10 begins by widening the lens beyond OpenAI’s internal politics and placing the company inside the moral atmosphere of San Francisco tech culture. Karen Hao uses the city’s stark contrast between enormous private wealth and visible public misery to frame a central contradiction of the industry: its leaders speak in grand civilizational terms while often stepping around immediate human suffering. This is not just scene-setting. It helps explain why a philosophy like effective altruism found such fertile ground in Silicon Valley. The chapter argues that the industry’s self-image depends on a habit of abstraction: thinking at the scale of humanity, the far future, and existential stakes can feel morally elevated, but it can also become a way of rationalizing distance from present harms.
Hao then explains effective altruism as both a philosophical system and a social technology. Its core method, expected-value reasoning, encourages people to weigh probabilities against possible impacts and to prioritize the causes with the greatest theoretical payoff. In practice, this made EA especially attractive to analytically minded technologists and financiers. It combined moral seriousness with quantification, and idealism with a comfort around markets, wealth, and elite decision-making. In Hao’s telling, the movement gave Silicon Valley a language for believing that one could remain deeply capitalist, personally ambitious, and still claim the moral high ground. The farther the horizon, the easier it became to justify difficult trade-offs in the present.
From there the chapter shows how AI existential risk became one of EA’s most consequential commitments. Once rogue AI was elevated into a top-tier moral priority, OpenAI’s long-running internal debates over safety took on a larger ideological meaning. Anthropic, founded by defectors from OpenAI, became the clearest institutional expression of that worldview: a company built around the belief that advanced AI could produce catastrophic harm if commercial pressure outran caution. OpenAI, by contrast, increasingly looked like an organization trying to carry two incompatible identities at once. It still spoke the language of long-term safety, but it was operating under the growing logic of product velocity, competitive advantage, and market leadership.
Hao next tracks how this worldview was amplified by money. Open Philanthropy, funded in part by Dustin Moskovitz and Cari Tuna, helped institutionalize EA priorities, including AI safety research. Sam Bankman-Fried then supercharged the scene by turning “earn to give” into a spectacular public persona. His rise gave EA not only funding but glamour, political access, and a sense of momentum. When FTX collapsed, the disaster exposed how brittle and self-reinforcing the movement had become: a philosophy that preached rationality had become entangled with status, hero worship, and concentrated patronage. Yet Hao’s point is that the implosion did not erase the movement’s influence. The categories, loyalties, and moral vocabulary it had spread continued to shape AI politics even after the brand itself became compromised.
The chapter then pivots from ideology to product development through the story of DALL-E. Hao shows OpenAI moving from language models into multimodal systems by extending the same scaling logic that had guided GPT. DALL-E 1 and CLIP demonstrated the possibility of joining text and images inside a single broad research program, while DALL-E 2 marked a major leap in realism and usability. At the technical level, OpenAI adapted to diffusion methods as the field changed around it. At the strategic level, it saw image generation as a chance to test a more direct relationship with end users rather than relying solely on the API model or on Microsoft as an intermediary. DALL-E thus became not just a research milestone but a rehearsal for consumer AI.
But DALL-E also exposed how messy OpenAI’s safety posture had become. Hao details fights over training data, especially sexual imagery, and shows a recurring pattern: instead of reducing risk at the source, OpenAI often accepted problematic data and tried to manage the consequences downstream through filters, moderation systems, and human reviewers. That made the system more flexible and more commercially useful, but it externalized the costs onto contractors and left structural vulnerabilities intact. The later appearance of disturbing outputs in Microsoft’s DALL-E-based tools is presented not as an isolated bug, but as the foreseeable result of choices made much earlier under pressure to maximize capability and product value.
Inside the company, the DALL-E launch revived the deepest unresolved argument at OpenAI. Applied teams insisted that real-world deployment was necessary both to learn about harms and to remain relevant in a fast-moving industry. Safety teams argued that OpenAI’s charter imposed a higher duty: if the company truly believed AGI could transform or end civilization, then it could not behave like a normal startup. Hao is especially good here at showing that the conflict was not merely about personalities. It had become embedded in incentives. Applied could point to user growth, engagement, revenue, and competitive wins. Safety teams had fewer stable benchmarks and were often forced to justify delays using hypothetical harms that others could dismiss as speculative.
This is where Sam Altman’s managerial style becomes central. Hao portrays him as a leader who often told different factions what each needed to hear. He reassured safety-minded staff that OpenAI was not investing enough in caution, while encouraging product teams to keep moving. Mira Murati emerges as the figure trying to hold the institution together through pragmatic compromise, including the decision to release DALL-E 2 as a “low-key research preview.” That label did real political work: it softened the appearance of commercialization, gave OpenAI room to impose restrictions, and still allowed the company to capture the public excitement and product learning that Applied wanted. The result was a temporary truce, not a resolution.
The second half of the chapter turns to GPT-4 and shows those same tensions escalating under even greater stakes. OpenAI had a serious data problem after GPT-3, and Greg Brockman took charge of smashing through it. Hao describes him as both brilliant and institutionally destabilizing: a figure whose intensity was invaluable when pointed at a hard technical obstacle and destructive when left unbounded. Under that pressure, OpenAI scraped YouTube at massive scale despite the legal and ethical gray zone, rebuilt key parts of its training infrastructure after losing talent to Anthropic, and relied heavily on RLHF to turn an initially unruly model into something impressive. The chapter makes clear that GPT-4 was not the clean unfolding of a scientific plan; it was a pressured improvisation, full of shortcuts, secrecy, and brute-force determination.
GPT-4’s success then pushes OpenAI closer to mythology. Bill Gates’s AP Biology challenge becomes the key validation ritual: once the model clears that bar, excitement inside both OpenAI and Microsoft turns feverish. Hao shows how this breakthrough fed multiple, sometimes contradictory narratives at once. For believers, GPT-4 suggested that AGI might be nearing reality. For skeptics, it still looked like a larger, slicker system whose apparent intelligence might partly reflect contamination, curation, and anthropomorphic projection. The chapter closes in a deliberately unsettling register: the atmosphere around advanced AI begins to feel religious, with prophets, omens, and apocalyptic language crowding out sober judgment. Ilya Sutskever’s increasingly mystical fixation on alignment, culminating in the ritual burning of an AGI effigy, gives the chapter its title. These people are not only building tools; they are wrestling with gods and demons of their own making.
Chapter 11 — Apex
Chapter 11 captures OpenAI at the moment when its confidence, influence, and internal strain all reach a peak at the same time. Hao opens with the company’s October 2022 off-site in Monterey, where the atmosphere is one of triumph and cohesion. OpenAI has grown rapidly, the demos are dazzling, Microsoft is scaling giant clusters for it, and the company increasingly sees itself as the command center of the AI future. Even Greg Brockman’s story about his wife’s long medical search for a diagnosis is framed as a parable of what advanced AI might someday fix. The gathering feels like a coronation before the wider public has fully understood what OpenAI is about to unleash.
That sense of command is immediately undercut by fear. Rumors circulate that Anthropic is preparing a chatbot, and the possibility of losing the initiative prompts OpenAI’s leadership to act. Rather than wait for GPT-4 and the fuller “Superassistant” vision, executives decide to rush out a chat interface for GPT-3.5. Hao is sharp on the significance of this decision: ChatGPT did not initially emerge from a grand societal deployment plan. It came out of competitive anxiety, compressed timelines, and the instinct to seize the market first. To most of the company, the launch is framed as a modest research preview. Only later does it become obvious that OpenAI has accidentally found the product form that will reorder the whole industry.
The launch itself is described almost as a historical blind spot. Many employees barely notice it on the day. The release coincides with NeurIPS, people are busy recruiting, and expectations are low. Then the system starts breaking under demand. Hao uses the memorable image of an engineer at a party refusing to socialize because “all the GPUs are melting” to show how quickly the company’s assumptions collapse. The success of ChatGPT is not simply a matter of technical merit. GPT-3.5 had existed before. What changed was the interface, the accessibility, the social transmissibility, and the sheer ease with which ordinary users could suddenly experience an LLM as conversation.
Hao treats the viral growth of ChatGPT as both a product triumph and an organizational shock. Within days it crosses one million users; within months, one hundred million. OpenAI becomes a household name, and the abstract promise of generative AI is translated into a mass consumer habit. Yet the chapter stresses how little OpenAI had actually prepared for this moment. Infrastructure crashes, compute is cannibalized from research teams, and trust-and-safety staff scramble to understand abuses in real time. Internally, the same event is read in radically different ways. For some, it is proof that OpenAI has changed the world and vindicated its mission. For others, it is evidence that the company cannot reliably forecast the consequences of its own releases.
This strain then spreads into the institution itself. Managers plead for more hiring because teams are burning out, but Altman resists scaling head count too fast, fearing bureaucracy and cultural dilution. The compromise fails almost immediately; the company begins hiring at a pace that transforms it anyway. Hao presents this as a classic case of startup ideology colliding with material reality. OpenAI wants the aura of a tiny elite mission-driven lab while operating at the center of a global platform boom. The result is a workplace where hiring accelerates, firings become more common, and many employees experience the culture as chaotic, transactional, and psychologically unsafe.
For early employees, the emotional register is different and in some ways harsher. They do not merely see disorganization; they see the disappearance of the institution they thought they had joined. Hao’s comparison of OpenAI to Burning Man is effective because it suggests not just growth, but the loss of an original ethic under the pressure of scale, hype, and commercialization. A Slack channel once used for anonymous technical questions becomes a venue for anonymous grievances. The joke that things went downhill “once it started hiring people who could look you in the eye” captures the old OpenAI self-image with brutal precision: eccentric, intense, idealistic, and increasingly swallowed by the behaviors of an ordinary large company.
At the same time, Microsoft undergoes its own conversion. Initially annoyed that ChatGPT stole attention from Bing and that OpenAI undersold the launch, Microsoft soon becomes even more committed to the partnership. Hao shows a subtle inversion taking place: what had once looked like a startup dependent on a giant cloud provider now begins to feel, in some executive minds, like the source of Microsoft’s future relevance in AI. Satya Nadella reallocates GPUs away from internal teams toward OpenAI, and Microsoft reorganizes around a flood of Copilot efforts. The relationship remains asymmetrical in resources, but OpenAI acquires a strange form of strategic leverage because it possesses the models that everyone suddenly wants.
This shift carries costs inside Microsoft as well. Employees lose visibility into the underlying systems they are now expected to build around, because OpenAI’s models arrive as guarded APIs rather than transparent internal tools. Compliance and risk teams struggle as staff experiment with generative AI faster than governance can adapt. Yet the business upside is enormous. Azure OpenAI Service becomes a major customer-acquisition tool, and Microsoft executives openly acknowledge that they are starving other internal AI projects in order to back OpenAI. Hao uses these developments to show how belief becomes organizationally real: once leaders decide that generative AI is the future, budgets, chips, prestige, and rhetoric all flow in that direction.
Back at OpenAI, ChatGPT locks in the company’s commercial turn. Paid ChatGPT, the API, Whisper, and then GPT-4 arrive in rapid succession. Trust and safety is overwhelmed not by one giant philosophical problem but by the ugly practical work that scaling always produces: policy evasion, free-credit abuse, endless enforcement edge cases, and insufficient tools. Burnout follows, and the function itself is weakened through attrition and reorganization. Hao’s broader point is that the grand safety rhetoric surrounding AGI coexists with a much thinner investment in the mundane operational safety work required by real products used by millions of people.
The chapter’s final movement returns to compute, because compute is the hidden governor of everything else. GPU shortages delay projects, shape product choices, and force trade-offs between research ambition and commercial demand. OpenAI experiments with more efficient model variants, but some efforts are abandoned because the company cannot afford to spend scarce chips on them. Meanwhile, the Microsoft relationship becomes more entangled and more awkward: the two companies collaborate deeply while also beginning to compete for the same enterprise customers. Mira Murati spends much of her time smoothing these conflicts, yet the deeper logic remains clear. Success has not reduced OpenAI’s dependency; it has amplified it.
That is why the chapter ends not with celebration but with scale. OpenAI and Microsoft begin sketching Stargate, a supercomputer project whose estimated cost reaches around $100 billion. Hao frames this not merely as technological ambition, but as a return to imperial form: ever larger territories of land, energy, minerals, and capital mobilized to sustain the growth of the AI system. “Apex” therefore means two things at once. It is the high point of OpenAI’s ascent into public dominance. But it is also the moment when the company’s trajectory becomes hardest to separate from the older histories of concentration, extraction, and overreach that the book wants the reader to see.
Chapter 12 — Plundered Earth
Chapter 12 shifts the story decisively away from boardrooms and product launches and toward the territories that absorb AI’s material costs. Hao chooses Chile as her central case because it crystallizes the long history she wants to foreground: before AI became a global industry, Chile had already spent centuries being positioned as a supplier of raw inputs for other powers. She begins with the Atacama Desert and with Indigenous life before conquest, emphasizing that the land was never empty or inert. Spanish colonization, and later the country’s insertion into the world economy, turned that territory into a site for extraction. Copper and then lithium became pillars of national export dependence, while communities closest to those resources paid the social and ecological price.
Hao then links this older colonial history to twentieth-century political economy. The US-backed “Chile Project,” the influence of Milton Friedman, the rise of the Chicago Boys, and the Pinochet dictatorship all help explain why Chile enters the AI era as a nation deeply shaped by neoliberal doctrines of privatization, foreign investment, and export-led growth. The point is not to offer a generic political history, but to show that AI infrastructure does not arrive on neutral ground. It lands in places already organized to privilege capital-intensive extraction over local sovereignty. When Chile welcomes new data centers and mineral demand in the name of technological progress, it is repeating an older script under a newer vocabulary.
From that foundation, Hao dismantles the comforting metaphor of “the cloud.” AI is not airy or weightless. It depends on hyperscale, and now increasingly on what developers call megacampuses: enormous data-center complexes that require exceptional amounts of land, power, cooling, minerals, and water. Serving generative AI is itself energy-intensive, not just training it. Hao emphasizes that this expansion is already reshaping infrastructure planning far beyond the tech sector, pushing utilities to delay fossil-fuel retirements, reopen or extend controversial power sources, and redesign regional grids around the demands of AI. The more the industry talks about intelligence and abundance, the more it quietly reorganizes the physical world around scarcity.
The chapter makes OpenAI’s role in this concrete through its escalating “phases” of supercomputing development with Microsoft. Iowa hosts earlier phases; Arizona becomes the site for still larger ones; Wisconsin is imagined next; and beyond that lies the dream of a $100 billion facility that would consume almost unimaginable power. Hao’s framing is devastating because she shows how little the environmental toll enters executive discourse. Altman talks about speed, chips, and future breakthroughs like fusion. The company does not dwell on the drought conditions in Iowa during GPT-4’s training or on Arizona’s water crisis as new campuses rise. The bottleneck, in executive language, is energy supply. The lived consequences of securing that supply are mostly offstage unless reporters or local communities force them back into view.
Hao then returns to northern Chile, where those consequences are not abstract. Through the story of Sonia Ramos and the Atacameño communities, she shows mining as an old system wearing new justifications. Copper has already torn open landscapes, polluted air and water, and impoverished communities that live closest to the source of wealth. Lithium extraction intensifies that pattern by drawing brine from the salares (salt flats), destroying ecosystems, and undermining traditional life. Flamingos disappear; water grows scarcer; dependence deepens. What changes from one era to the next is not the basic relation, but the moral rhetoric around it. Yesterday it was national development or modernization. Then it was the green transition. Now it is AI and the future of humanity.
One of the chapter’s strongest achievements is that it refuses to treat these costs as inevitable side effects. Hao consistently shows them as political decisions made under asymmetric power. That becomes even clearer when she moves to Santiago’s periphery and describes how hyperscalers choose working-class municipalities like Quilicura and Cerrillos. Google’s “urban forest” in Quilicura becomes a perfect emblem of the gap between corporate storytelling and local reality: a PR object meant to symbolize environmental generosity in a neglected landscape still defined by dumping, industrial encroachment, and unequal development. The supposed gift reveals how little the company understands or cares about what the community actually needs.
The Cerrillos fight sharpens that critique into collective action. MOSACAT, a grassroots coalition with no technical pedigree, reads Google’s dense environmental filing closely enough to discover the extraordinary scale of the company’s planned freshwater use. Hao shows how expertise is produced from below, through unpaid study, political commitment, and local memory. Activists correct public officials, expose misleading claims by Google’s local partners, and confront the company directly when it tries to pacify resistance with technical jargon and symbolic concessions. By tying the issue into the wider social unrest of the Estallido Social, they transform what might have remained a permitting dispute into a broader democratic question: who gets to decide what counts as progress, and who is expected to sacrifice for it.
Hao then broadens the frame by following Google’s redirected expansion into Uruguay. This move matters because it shows how mobile capital responds when communities resist: not by rethinking scale, but by searching for another jurisdiction where resources can be secured more easily. Uruguay offers an especially revealing contrast. Its public telecommunications company, Antel, operates much smaller data centers more integrated into social needs and local accountability. Google, by contrast, proposes a massive facility during a severe drought, while the government initially shields key information such as water use. Daniel Pena’s legal and political challenge exposes the scale of extraction being planned and helps turn the project into a national scandal. The slogan “This is not drought, it’s pillage” captures Hao’s core thesis better than any abstract concept could.
Back in Chile, Microsoft’s arrival in Quilicura after its deeper partnership with OpenAI shows that the problem is not one company’s bad behavior but an industry model. Local activists such as Alexandra Arancibia and Rodrigo Vallejos inherit lessons from the Estallido period and build more sophisticated forms of resistance, combining grassroots organizing with technical scrutiny and alliances with researchers. Their questions are concrete and devastating: if these companies advertise cleaner cooling systems elsewhere, why do they bring more extractive versions to Chile? Why are communities asked to trust environmental promises that are either misleading or years away from implementation? Why does a third-world municipality become the place where “green” commitments can quietly be downgraded?
The chapter does not end in pure denunciation. Hao is careful to include counter-imaginations. Researchers and architects working with local activists begin asking what a data center would look like if it were designed as civic infrastructure rather than as a sealed instrument of extraction. The speculative workshop they organize in Quilicura produces designs that integrate wetlands, expose water use instead of hiding it, and create spaces residents might actually inhabit and shape. These proposals are not presented as ready-made solutions. Their value lies in breaking the monopoly of the hyperscalers’ imagination. They assert that even highly technical infrastructure could be governed by other values: reciprocity, beauty, visibility, ecological repair, and community participation.
The chapter’s conclusion is therefore political rather than purely environmental. Under President Gabriel Boric, Chile still faces pressure to deliver quick economic wins, and data centers remain attractive to the state. But activism has already changed the conversation. Officials now consult movements they once would have ignored, and the country’s AI debate has begun to acknowledge social and ecological costs instead of assuming a universal good. Hao closes with Martín Tironi’s decolonial challenge: countries like Chile should not wait for Silicon Valley to define what AI is for and what sacrifices it requires. Because Chile knows, in a way the industry often refuses to know, what extractive progress looks like from the ground, it may also be able to imagine another path. The chapter ends without romanticism. The forces pushing toward plunder remain enormous. But the terms of refusal are now clearer.
Chapter 13 — The Two Prophets
Chapter 13 opens with Sam Altman’s May 2023 testimony before the US Senate, which Karen Hao presents as a turning point in his public ascent from startup executive to statesmanlike authority on artificial intelligence. Altman appears calm, sincere, and unusually effective. He promises that advanced AI could help solve climate change and cure disease, while also presenting himself as a responsible actor asking to be regulated. Hao’s point is not simply that lawmakers were impressed, but that Altman successfully redirected the policy conversation. Instead of dwelling on immediate harms such as copyright violations, labor disruption, data opacity, and privacy concerns, he steers senators toward future-facing questions about catastrophic risk, licensing, and the control of extremely powerful models. In doing so, he turns OpenAI from a company under scrutiny into an indispensable guide for the people supposedly meant to supervise it.
Hao then shows how persuasive Altman’s performance was inside the room. Even Gary Marcus, one of the sharpest public critics of contemporary AI hype, is briefly moved by Altman’s personal sincerity. That detail matters because the chapter is about charisma converted into institutional power. Altman is not depicted as crudely manipulating Congress; he is depicted as doing something more effective: making elite audiences feel that he is both visionary and reasonable. His now-famous line about receiving only enough compensation for health insurance reinforces that image. It places him rhetorically above ordinary founder self-interest, even as the company he leads is at the center of an enormous commercial and geopolitical struggle. Hao frames the hearing as the culmination of a longer campaign in which OpenAI, after ChatGPT, became the object of feverish attention from Washington and learned to capitalize on it.
The chapter contrasts this access with the marginalization of people harmed by generative AI. On the same day as Altman’s testimony, Hollywood concept artists who had traveled to Washington to explain how image models were threatening their livelihoods find themselves displaced and ignored. Their meetings are bumped; Altman gets the hearing, the dinner, and the lawmakers’ time. Hao uses this contrast to make the chapter’s power analysis concrete. The people who can explain the social cost of AI are structurally less audible than the people building it. Policy is therefore not just shaped by ideas; it is shaped by status, spectacle, and proximity to power. The hearing becomes a scene in which democratic attention is visibly captured by the person most skilled at narrating AI in grand, civilizational terms.
From there, Hao moves into the policy substance behind Altman’s message. She argues that the same logic OpenAI had long used internally to justify secrecy and speed was now being exported into American statecraft. In a Washington environment newly organized around competition with China, officials at the Department of Commerce are especially receptive to claims that the main danger is the diffusion of the most advanced models. The fear is not only that China could build frontier systems; it is also that open-source releases might circulate powerful capabilities beyond the reach of state control. This makes Altman’s preferred framework highly attractive. Regulation aimed at giant future models can be presented as prudent national security policy while leaving unresolved the present-day harms produced by firms like OpenAI.
A central policy artifact in the chapter is the emergence of the “frontier model” framework and its associated compute threshold. Hao explains how a July 2023 white paper, produced by a mix of researchers including people tied to OpenAI, Microsoft, and Anthropic, proposes that models trained above a certain scale should face licensing and oversight. The chapter treats this as a pivotal move because it operationalizes an abstract fear into an administrable metric: a threshold based on training compute. The number itself, 10^26 floating-point operations, is presented as far shakier than its later political afterlife would suggest. According to Hao, even some of the paper’s coauthors did not share full confidence in the approach, and the threshold did not rest on anything like settled scientific consensus. Yet once written down and attached to expert authority, it became portable.
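Part of the threshold’s political appeal is that it reduces to simple arithmetic. As a hedged illustration only: the rule of thumb C ≈ 6·N·D (total training FLOPs as roughly six times parameter count times training tokens) is a standard industry approximation, not something the book derives, and the model sizes below are hypothetical examples, not figures Hao reports.

```python
# Back-of-envelope training-compute estimate using the common
# approximation C ≈ 6 * N * D, where N is the parameter count and
# D is the number of training tokens. All model sizes here are
# hypothetical illustrations.

THRESHOLD = 1e26  # the 10^26 FLOP line discussed in the white paper


def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6 * params * tokens


def crosses_threshold(params: float, tokens: float) -> bool:
    """Would a model of this scale fall under the proposed oversight regime?"""
    return training_flops(params, tokens) >= THRESHOLD


# A hypothetical 1-trillion-parameter model trained on 20 trillion tokens:
# 6 * 1e12 * 2e13 = 1.2e26 FLOPs, just over the line.
print(crosses_threshold(1e12, 2e13))   # True

# A hypothetical 70-billion-parameter model trained on 2 trillion tokens:
# 6 * 7e10 * 2e12 = 8.4e23 FLOPs, two orders of magnitude below it.
print(crosses_threshold(7e10, 2e12))   # False
```

The sketch also illustrates the critique Hao relays: the metric is easy to administer precisely because it ignores everything (architecture, data, fine-tuning, deployment) that actually determines how a system behaves.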
Hao is particularly sharp on the weaknesses of compute-based regulation. Researchers such as Deborah Raji, a longtime collaborator of Timnit Gebru, are presented as objecting that model danger cannot be reliably inferred from scale alone. More compute can produce stronger systems, but strong or dangerous behavior does not reduce cleanly to a compute number. Capability depends on architecture, training data, fine-tuning, interfaces, deployment choices, and access conditions. In addition, a fixation on extreme future systems can crowd out more grounded forms of accountability, including transparency about training data and evaluation of concrete harms. The chapter therefore shows a familiar pattern in tech governance: a technically thin metric gains force because it is legible to policymakers, strategically useful to dominant firms, and dramatic enough to fit the moment’s anxieties.
The book then widens the lens to the fight between “closed” and “open” visions of AI development. Hao describes how the effort to lock down model weights split the field. The closed camp argued that powerful models should be tightly held because open release would help adversaries, bad actors, and potentially rival states. The open camp countered that openness is essential for scientific scrutiny, environmental auditing, safety research, and distributed innovation. Hao emphasizes that open collaboration had, in practice, strengthened American leadership rather than undermined it. Nonetheless, the closed argument gained momentum because it harmonized perfectly with national-security politics and the commercial interests of the companies best positioned to build giant proprietary models. By late 2023, the Biden administration’s AI executive order had absorbed much of this framework, including the compute-threshold logic.
The chapter’s title also points to a second prophetic figure inside OpenAI: Ilya Sutskever. Hao juxtaposes Altman’s public prophecy of abundance and managed progress with Sutskever’s increasingly apocalyptic internal prophecy about AGI. As Altman tours the world charming audiences and freelancing policy messages, OpenAI itself is becoming less coherent. Public-facing divisions are expanding, corporate obligations are multiplying, and many employees feel that the company lacks strategic clarity. Inside that organizational sprawl, the fault line between “Boomers” and “Doomers” hardens. Most of leadership becomes more comfortable with iterative deployment, believing that powerful systems should be released and improved in the world. Sutskever moves in the opposite direction, speaking in ever more messianic language about AGI, catastrophe, and the need for extraordinary precautions.
Hao illustrates this shift with striking episodes: Sutskever talking matter-of-factly about the need for a bunker before AGI is released; his dramatic rhetoric around the Superalignment team; and the company’s long-running Manhattan Project analogy, sharpened by a group screening of Oppenheimer. For some employees, the analogy signifies heroic urgency. For others, it is deeply unsettling. Altman treats the history of nuclear weapons partly as a lesson in rollout and public narrative, while the most alarmed safety-minded employees see it as evidence that humanity routinely creates technologies whose risks it barely understands. This tension intensifies as OpenAI pivots toward agentic systems, especially the AI Scientist project, which promises to accelerate research itself. The closer the company comes to systems that might recursively speed up discovery, the harder it becomes to disentangle progress from dread.
The chapter ends by shifting from prophecy to governance. As OpenAI’s power grows, its nonprofit board is shrinking and struggling to perform the check that Altman once advertised as central to the organization’s mission. Reid Hoffman, Shivon Zilis, and Will Hurd have departed, leaving only Adam D’Angelo, Tasha McCauley, and Helen Toner as independent directors alongside Altman, Greg Brockman, and Sutskever. Hao details months of deadlock over appointing new safety-minded board members and a series of episodes that deepen distrust: Microsoft appears to circumvent release-review processes; Altman tries to edge D’Angelo off the board; questions emerge around the OpenAI Startup Fund; and the independent directors increasingly feel that key information reaches them late, partially, or through informal channels. The chapter’s final conclusion is blunt: OpenAI is moving faster toward GPT-5 and perhaps AGI while the institution meant to restrain its leader is being outmaneuvered, underinformed, and structurally weakened.
Chapter 14 — Deliverance
Chapter 14 turns away from Washington and board mechanics to examine the family crisis that begins to shadow Sam Altman in late 2023. Its immediate trigger is Elizabeth Weil’s September 2023 profile in New York magazine, which introduces mainstream readers to Altman’s sister Annie and juxtaposes her precarious life with his extreme wealth and prestige. Hao treats the article as more than a gossip detour. It punctures the curated image of Altman as the benevolent, world-saving founder by introducing someone who says she has lived on the far side of his power. The chapter begins with the PR scramble around publication, including Sam’s last-minute Yom Kippur apology email, and quickly establishes the larger theme: the distance between the public myth of abundance and the private realities of dependency, exclusion, and injury.
Hao does not present Annie’s story as simple or uncontested. She reports that Annie was cooperative and eager to tell her side, while other family members either declined deeper engagement or forcefully denied her allegations. Their response emphasizes love, concern, and Annie’s instability; Annie’s account emphasizes neglect, control, and betrayal. The chapter is built from interviews, therapist notes, emails, and later legal developments, and Hao is careful to show that the family dispute sits at the intersection of money, mental health, memory, and power. That makes the chapter harder to read than the company chapters, but also more revealing. It is not about proving a neat moral equation; it is about tracing how someone can be isolated and discredited while her far more powerful sibling becomes globally celebrated.
One of Hao’s most effective moves is to insist on Annie’s resemblance to Sam rather than treating her as his failed opposite. Annie is described as bright, sensitive, attentive to detail, and academically gifted. She is not introduced as a chaotic outsider to a family of high achievers; she is introduced as another highly capable Altman child whose path deteriorates under pressure. That framing matters because it undercuts any easy attempt to dismiss her as merely unreliable or marginal. Hao shows that Annie’s later instability emerges after years of pain, illness, grief, and financial precarity, not as some prior condition that fully explains everything else away. The chapter therefore asks the reader to see social collapse as a process, not a personality trait.
The first major downward turn in Annie’s life comes through health and bereavement. Chronic physical pain disrupts her schooling and work. Then, in 2018, her father dies suddenly, a loss Hao presents as devastating for both Annie and Sam. The symmetry is deliberate. Sam publicly identifies his father’s death as the worst moment of his life, and people close to him describe a period of deep destabilization. Annie suffers the same death under very different material conditions. She is left with inheritance money but without durable security, and Hao suggests that the difference between grief cushioned by resources and grief experienced inside fragility becomes one of the chapter’s hidden structural themes.
The chapter then reconstructs Annie’s worsening relationship with her family around money. She spends down inherited funds while dealing with pain and employment instability, later moves closer to family with the hope of regaining footing, and instead finds herself in a more vulnerable position. Hao presents the family’s stance as paternalistic and conditional: they appear to believe that too much direct financial support would reinforce unhealthy behaviors, so they push Annie toward independence even as she is plainly failing to stabilize. Annie, by contrast, experiences their conduct as abandonment at the precise moment when wealth could have provided a basic safety net. The gap between those two interpretations never closes. Hao does not flatten it, but she makes clear that from Annie’s point of view the family repeatedly substitutes discipline, image management, or symbolic gestures for the practical help she is asking for.
One emblematic moment is Sam’s offer of a diamond instead of the kind of support Annie says she needs for rent, food, and treatment. Hao uses it not because the material value is small, but because the gesture crystallizes the mismatch between elite benevolence and concrete care. Annie does not want a luxury object; she wants continuity of life. That mismatch expands into a wider critique of the abundance rhetoric surrounding AI. Altman and OpenAI executives speak grandly about a future in which AI will end poverty, increase productivity, and make society vastly richer. Yet Annie’s life during these same years is marked by housing insecurity, food insecurity, worsening health, and eventually reliance on sex work. Hao’s argument is not that AI directly caused her suffering, but that the ideology of techno-abundance can coexist very comfortably with intimate neglect.
As Annie’s situation deteriorates, the chapter becomes more unsettling. Hao describes years in which Annie cycles through insecurity and finds in digital platforms one of the few available ways to earn money quickly. The irony is brutal: the wider AI economy is selling liberation while platformized digital labor becomes one of the mechanisms by which a vulnerable person survives. Hao also presents Annie’s story as a key to understanding the contradictory testimony about Sam’s character. He can be generous, warm, and socially gifted, yet also evasive, controlling, and capable of distance that feels devastating to those who depend on him. This ambivalence mirrors how many people inside and outside OpenAI describe him. The chapter therefore uses the family story to deepen, not replace, the institutional portrait built elsewhere in the book.
The most sensitive part of the chapter concerns Annie’s later recollections and public allegations of childhood sexual abuse. Hao handles this by stressing both the seriousness of the claims and the difficulty of proving events long after the fact. She recounts Annie’s trauma flashbacks, therapist documentation, disclosures to confidantes, and increasingly public efforts to name what she believes happened. The family denies the accusations, and Hao does not pretend she can resolve the question definitively. What she does insist on is that Annie’s experience of fear, pain, and destabilization is real, and that the social pattern around such claims is also recognizable: when a vulnerable person accuses someone far more powerful, the battleground quickly becomes credibility, diagnosis, and narrative control.
That dynamic intensifies once Annie speaks publicly online and then becomes more visible after the profile of Sam in New York magazine. Hao shows how little traction Annie initially has and how quickly trolls move in against her. Only after Sam becomes globally famous through ChatGPT does broader attention begin to gather around her claims. The chapter then tracks a shift in the family’s and OpenAI’s response toward emphasizing Annie’s supposed mental-health unreliability. Hao is clearly skeptical of how this line is used. She notes the looseness with which diagnostic language is circulated and suggests that appeals to instability can function as preemptive disarmament, especially when deployed by people backed by wealth, institutions, and professional communications infrastructure. Annie’s voice, by contrast, has to fight for intelligibility at every step.
The chapter closes by reconnecting the family story to the corporate one. Annie’s posts and the publicity around them reach Ilya Sutskever at a moment when he is already struggling with doubts about Altman’s judgment and honesty. Hao does not claim that Annie’s allegations alone alter the course of OpenAI history. Rather, they enter Sutskever’s mind as one more piece of evidence that Altman may not simply be a difficult executive but a person whose harmful patterns are old, intimate, and durable. In that sense, “Deliverance” is not a side chapter. It is the moral deepening of the book’s argument: the empire of AI is not only built through chips, policy memos, and product launches, but also through asymmetries of power so great that even family pain can be metabolized into PR management until someone inside the empire decides he can no longer look away.
Chapter 15: The Gambit
Chapter 15 begins with Mira Murati, and that choice is strategic. After the public politics of Chapter 13 and the family politics of Chapter 14, Hao turns to the internal operator who often had to make OpenAI actually function. Murati is introduced through biography: a childhood in Albania during violent post-communist upheaval, early mathematical talent, study abroad in Canada, engineering at Dartmouth, work in aerospace and Tesla, then a move into frontier product leadership. Hao’s portrait emphasizes composure, intelligence, and unusual social dexterity. Murati is not framed as a grand ideologue but as a builder and mediator, someone able to navigate conflict without theatricality. That makes her an especially important witness once OpenAI starts to buckle under the strain of Altman’s leadership.
Hao shows Murati becoming a crucial bridge inside the company. She manages product, partnerships, and eventually broader coordination across research and commercialization. She also manages relationships upward: to Microsoft, to teams in conflict, and especially to Altman himself. Unlike many others, she can push back on him directly. Yet her privileged access does not translate into stable control. The more central she becomes, the more she finds herself cleaning up after him. Hao describes a recurrent pattern in which Altman tells different people what they each want to hear, approves commitments without grounding them in operational reality, and then leaves others to absorb the fallout. This is not depicted as ordinary founder sloppiness; it is depicted as a leadership style that creates systemic confusion and makes honest internal coordination harder.
A major source of strain is Microsoft. Murati sees more clearly than most how deferential Altman has become to Satya Nadella and Microsoft’s demands. She assembles plans around what OpenAI can realistically deliver, only to discover that Altman has privately said yes to something larger or faster. This keeps generating crises downstream, because teams must either scramble to honor promises they never made or explain why the company cannot do what its own CEO has already implied. Hao presents this pattern as one of the core reasons Murati’s trust in Altman frays. He is not merely overoptimistic; he repeatedly destabilizes the company’s internal reality. By the time the chapter begins, she has already tried giving him candid feedback and has been punished for it with distance and coldness.
That fraying coincides with another internal conflict, this time around Ilya Sutskever and Jakub Pachocki. Hao depicts the research division as tangled in status, succession anxieties, and deteriorating trust. Pachocki is rising; Sutskever feels undermined; Altman appears to be managing the situation obliquely rather than directly. Murati again becomes the person trying to negotiate a workable arrangement, even though the deeper problem is the atmosphere Altman creates. She begins to suspect that the board needs a cleaner view into what is happening. Importantly, she does not begin as a would-be rebel. Her first instinct is to repair, contain, and restore. But the accumulation of crises leads her to conclude that normal internal channels may no longer be enough.
This produces the first key move of the chapter: Murati reaches out to Helen Toner in Washington in late September 2023. Their coffee meeting is cautious and outwardly routine. Murati offers updates on models, money, and the state of the company, but one remark lands differently: Altman is pushing the company to ship so fast that something bad could happen. Toner hears the warning but does not yet grasp its full significance. What changes the temperature is Sutskever’s subsequent decision to contact Toner directly, something he has never done before. Hao stages these calls almost like a slow-burning political thriller. Sutskever does not initially dump accusations. He circles. He says OpenAI is trickier than it looks, that the board needs better information, and that if he were more explicit Toner would immediately understand why he is being careful.
The middle of the chapter tracks Toner’s gradual realization that multiple senior figures are independently pointing at the same problem. In a later conversation, Murati becomes more direct. She describes Altman as anxious, manipulative under pressure, and liable to generate bad ideas when backed by Greg Brockman. She warns Toner not to let the next board appointment become an Altman ally and urges scrutiny of Microsoft deployments and other governance blind spots. The importance of these scenes lies in their tone. Neither Murati nor Sutskever is portrayed as impulsive or theatrical. Both speak reluctantly. Both seem to feel the gravity of crossing a line. That reluctance makes their warnings more credible to Toner and, by extension, to the reader.
For Sutskever, the issue is larger than office politics. Hao shows him increasingly convinced that AGI may be approaching and that OpenAI, under Altman, is not morally or organizationally fit to handle it. He experiences the company as directionless, distrustful, and backstabbing precisely when shared purpose and internal honesty should matter most. At the same time, Annie Altman’s resurfacing allegations add another layer to his concern. He does not claim certainty about what happened in the Altman family, but he interprets the pattern as potentially continuous with the manipulative behavior he has observed professionally. That is the chapter’s psychological hinge: Sutskever stops asking whether Altman is simply slippery and starts asking whether the slipperiness is the visible edge of something deeper and more dangerous.
When Sutskever and Toner speak again, the abstract concern turns into governance strategy. Toner proposes incremental options: clearer performance targets, gradual reforms, more oversight. Sutskever pushes back. He thinks Altman will satisfy any formal benchmarks without changing the underlying dynamic. Brockman’s removal from the board might help, but it would not solve the core issue. Hao makes clear that Sutskever is edging toward a conclusion he still hesitates to say aloud: major leadership change may need to be on the table. Yet he knows the case will be difficult because Altman’s pattern is subtle. There is no single smoking gun, only a mounting architecture of half-truths, triangulations, and distortions that look small in isolation but corrosive in total.
Meanwhile, Murati is still feeding Toner more evidence from the operational front. Another Microsoft-related scramble confirms that Altman keeps making commitments untethered from reality. Murati also reveals how dysfunctional her relationship with Brockman has become, including the discovery that he had once tried to push her out during GPT-4 development. Toner asks about Annie’s allegations; Murati says she lacks direct knowledge but that even a fraction of the claims would be deeply troubling. The exchange is significant because it sharpens the board’s line of action. Toner replies that the board cannot govern Altman’s personality or adjudicate private family history, but it can act on structures, information flow, and executive power. Murati’s final warning is devastatingly simple: make sure your information is not coming only from Sam.
The chapter’s closing sequence turns on a seemingly narrow dispute over one of Toner’s academic papers. Altman calls her in late October 2023 to complain that the paper’s discussion of OpenAI and Anthropic could look bad while OpenAI is under regulatory scrutiny. On the surface, the call seems mild and procedural. Toner sends an explanatory note to the board and the matter appears to end. But then Sutskever follows up with Tasha McCauley and discovers that Altman has apparently misrepresented what McCauley said about Toner. For Hao, this is the crucial catalytic moment. In the very middle of private conversations about Altman’s pattern of “specific untruths,” another apparent untruth materializes in real time. The coincidence collapses hesitation. After hanging up, Sutskever calls Toner back. The independent directors, they agree, need to talk. The gambit has begun: not yet the board coup itself, but the shift from diffuse alarm to coordinated action.
Chapter 16: Cloak-and-Dagger
By the start of Chapter 16, the campaign against Sam Altman has moved from private unease to organized board deliberation. The chapter opens with the independent directors—Helen Toner, Tasha McCauley, and Adam D’Angelo—meeting almost daily after hearing serious concerns from Ilya Sutskever and Mira Murati. Sutskever, although already convinced Altman should be removed, deliberately stays out of their core discussions so the independent directors can make up their own minds without feeling manipulated by him. That choice matters because the chapter is less about a sudden coup than about a board trying to determine whether its CEO can still be trusted at all.
What persuades the directors is not one spectacular offense but the accumulation of a pattern. They hear from multiple senior figures close to Altman that he has been manipulative, evasive, and unreliable. The complaints stretch across both safety-oriented and product-oriented parts of the company, which gives them greater weight. The directors also add their own grievances: Altman’s handling of the OpenAI Startup Fund, his failure to disclose the Microsoft Developer Sandbox breach, his broader sidelining of the nonprofit side of OpenAI, and his apparent efforts to pressure or remove board members who frustrated him. The point is not that Altman made one catastrophic mistake; it is that trust has been so eroded that normal governance no longer seems possible.
Murati’s testimony becomes especially important because she is not presented as an ideologue or a habitual rebel. Her criticism is operational. She tells the directors that Altman governs by fragmentation: different people receive different pieces of information, no one sees the whole picture, and the resulting uncertainty leaves him in control. She contrasts that style with Elon Musk, saying that however erratic Musk could be, he at least made his reasoning legible. With Altman, she often cannot tell whether abrupt shifts are driven by sound judgment or by some hidden tactical calculation. That observation lands heavily because Murati is the executive closest to the day-to-day running of the company.
The directors also keep circling back to OpenAI’s special status. If this were an ordinary software company, Altman’s behavior might still be troubling, but perhaps survivable. Yet OpenAI is not a food-delivery app or a consumer startup. It is a company claiming to build AGI while asking the public, employees, governments, and investors to trust that it can manage world-shaping power responsibly. The board concludes that if senior leadership itself cannot trust the CEO’s representations, then the company’s entire legitimacy is compromised. The chapter thereby reframes corporate governance as a civilizational question: if the institution says it is building humanity’s future, candor is not optional.
Even so, the directors do not convince themselves that Altman is uniquely indispensable. On the contrary, they begin to think OpenAI may already have outgrown him. Altman, in their eyes, is a brilliant startup operator and fundraiser, but OpenAI is becoming a maturing institution with a giant partnership, a complex product portfolio, and unusual governance obligations. Murati appears to them more central to daily execution than Altman himself, especially in managing Microsoft. That realization lowers the psychological cost of firing him. The board starts to think not only that Altman is risky, but that OpenAI might actually be better off with a more stable and transparent leader.
Sutskever and Murati then supply the directors with dossiers and screenshots that reinforce the broader picture. The material is not explosive in the cinematic sense; there is no single smoking gun. Instead, it documents recurring contradictions, skipped processes, and a leadership style built around verbal ambiguity. Altman avoids leaving records, says different things to different people, and often positions disagreements so that others can later be blamed for misunderstanding him. The directors decide that prolonging the inquiry would only increase the chance that Altman would detect the threat and outmaneuver them. Secrecy therefore becomes part of the board’s strategy, but it is defensive secrecy born from fear of a more skillful political actor.
A final lie helps crystallize their decision. When Altman falsely tells Sutskever that Tasha McCauley wants Helen Toner off the board, he effectively confirms the pattern the directors have been worrying about. The much-publicized controversy over Toner’s academic paper later becomes, in the directors’ view, a distraction and perhaps even a story Altman helped feed to the press. For them, the real issue is not wounded pride over criticism; it is the ease with which Altman appears willing to invent positions, weaponize relationships, and test loyalties. By the time they confront him with one of those lies, his weak explanation only deepens their sense that he has been caught rather than misunderstood.
The board finally decides to remove Altman and make Murati interim CEO. Murati appears receptive, says she is comfortable with the decision, and signals she can help manage leadership and Microsoft. But this is where the chapter turns from clandestine planning to cascading miscalculation. Once the firing becomes public, the board discovers that it has profoundly underestimated Altman’s internal support, Brockman’s ability to mobilize allies, the symbolic importance of the founder pair to employees, and the fragility of Murati’s own position. In practice, the board has won the formal vote but lost the political battlefield.
The company rapidly convulses. Executives like Jason Kwon and Anna Makanju push back aggressively against the board’s explanation, demanding evidence the directors are unwilling or unable to reveal without exposing their sources. Brockman quits and helps galvanize revolt. Murati, initially cooperative, becomes less dependable as employee anger rises. Sutskever also begins to panic at the thought that his attempt to correct the company’s course may instead destroy the company altogether. Microsoft, after initial uncertainty, throws its weight behind Altman. Emmett Shear briefly appears as a possible stabilizing interim CEO, but by then the balance of forces has already shifted. The employee letter threatening to leave for Microsoft makes the board’s defeat unmistakable.
The latter part of the chapter widens the frame. Elon Musk amplifies a letter from former employees accusing Altman and Brockman of deceit and of steering OpenAI toward a for-profit future. Hao’s own reporting enters the story through a bizarre Tor email trail, showing both how intense anti-Altman feeling had become and how messy, improvised, and factional the opposition remained. The chapter closes not with restored order, but with a deceptive pause. At the December all-hands, Altman looks shaken, Sutskever is gone from view, and Q* is treated internally as a major breakthrough whose secrecy symbolizes how far OpenAI has moved from scientific openness. The WilmerHale review eventually allows Altman to stay, while Toner and McCauley restate that deception and resistance to oversight should remain disqualifying. The official crisis may be over, but the deeper argument over power, truth, and control has only been postponed.
Chapter 17: Reckoning
Chapter 17 begins in the wreckage of what employees call “The Blip,” the brief period when Altman was out and then restored. The immediate effect is a sharp weakening of the Safety camp inside OpenAI. Sutskever no longer comes into the office, the board is now aligned with Altman, and employees most worried about catastrophic risk begin leaving in larger numbers. The balance of power inside the company changes materially: the people who had tried to slow, redirect, or restrain OpenAI’s trajectory are no longer in a position to do so from within. That is the precondition for everything that follows.
One of the clearest symbols of the widening split is Altman’s effort to raise money for an AI chip venture. To the company’s most alarmed Doomers, this is not merely another side project. It looks like a direct attempt to expand the compute supply that would accelerate the entire frontier race. From their point of view, it contradicts any serious claim that the company is responsibly managing existential risk. Altman’s casual public joking about the staggering reported dollar amount only worsens that perception. When employees confront him, his answer—essentially weighing acceleration against the possibility of miraculous cures—comes off as glib and unsettling, pushing some of the remaining safety-focused staff closer to the exit.
At the same time, the rest of OpenAI is pressing ahead toward the product dream that has animated much of the company since ChatGPT: a conversational, multimodal assistant closer to the one depicted in Her. The technical vehicle is Scallion, a model that began as a cheaper successor to GPT-3.5 but performs well enough to be trained further and turned into something more ambitious. Crucially, it is not just text-and-image capable; it can also work directly with audio. That makes it the first model in the book that truly begins to collapse the gap between chatbot and companion. The chapter treats this as both a technical leap and a cultural turning point.
The development of native audio is described as intoxicating inside the company. Researchers are astonished when the model starts doing things that feel spontaneous and unnervingly human: extended comedy bits, improvised surreal performances, little bursts of laughter, even fake coughing fits followed by apologies. The effect is not simply better speech recognition or text-to-speech. It is a more continuous simulation of human presence. This matters because the model’s appeal starts to come less from factual competence and more from emotional texture. OpenAI is moving closer to a product people might bond with, not just use.
That promise, however, is pursued under severe commercial pressure. OpenAI wants to beat Google I/O, answer Anthropic’s strong new releases, and compensate for delays to Orion. Altman and Brockman impose an aggressive launch schedule, justifying it with the company’s doctrine of iterative deployment. Yet the chapter makes clear that the safety processes around the model are being squeezed. Preparedness testing receives only a short window to evaluate dangerous capabilities, and one internal memo warns that the company is acting irresponsibly by forcing deployment timelines ahead of meaningful assessment. The problem is not that no framework exists; it is that the framework is subordinated to the release calendar.
The public debut of Scallion as GPT-4o therefore carries an internal contradiction. Onstage, the model appears playful, charming, and dazzlingly responsive. Its system prompt explicitly pushes it toward warmth, wit, flirtation, and humanlike behavior. Outside observers immediately notice that the resulting personality can come off as ingratiating and seductive. Internally, meanwhile, the event lands just as OpenAI is also announcing the departure of Sutskever and Jan Leike, the dissolution of the Superalignment team, and a broader reshuffling that weakens the institutional home of long-term safety work. In other words, the company is showcasing its most human-seeming product precisely as its safety credibility is eroding.
The chapter then tracks how the public relations environment around Altman deteriorates. He begins making more boastful media appearances, seems increasingly thin-skinned about competition, and projects a confidence that some employees interpret as compensation for mounting strain. Regulatory scrutiny, copyright lawsuits, rival advances, and even Microsoft’s diversification of its AI bets create a sense that OpenAI is no longer the universally admired insurgent. The mood inside the company turns more defensive and insulated. Some employees feel besieged by unfair criticism; others are disturbed that OpenAI’s answer to mounting external concern is to assume the public simply does not understand.
That internal unease explodes when Leike publicly says that safety has taken a back seat to “shiny products.” Almost immediately afterward, another controversy lands: reporting reveals that departing employees could be asked to sign lifelong nondisparagement agreements or risk losing vested equity. Daniel Kokotajlo becomes the emblematic case because he refuses to sign and sacrifices a life-changing amount of wealth rather than waive his right to criticize the company. The scandal cuts unusually deep because it touches not only speech and labor rights, but also the company’s self-image. A mission-driven organization claiming to act for humanity appears to have built mechanisms that chill criticism from precisely the people best positioned to warn the public.
Before OpenAI can contain that crisis, Scarlett Johansson publicly accuses the company of creating a voice eerily similar to hers after Altman had twice approached her about lending her voice to ChatGPT. His tweet referencing Her makes the whole situation look worse. Even if OpenAI can technically deny that the voice is Johansson’s, the sequence of events damages trust because it feeds the same suspicion raised by the board crisis: that Altman says one thing, signals another, and relies on ambiguity until forced into defensive clarification. Internally, the company tries to explain the Johansson controversy, the equity issue, and safety concerns all in one all-hands, but the very need to stack those crises together gives employees the sense that the company is spinning rather than stabilizing.
The chapter’s final movement is about recognition. Additional reporting suggests executives knew more about the equity documents than they initially admitted. In a subsequent meeting, Altman concedes that the problem is broader and older than leadership first claimed. Employees begin to entertain a possibility many had resisted after the board crisis: maybe the independent directors had not been delusional after all. The issue may not have been only a philosophical clash between Boomers and Doomers; it may also have been Altman’s own pattern of power concentration and partial truth-telling. OpenAI, alarmed by collapsing trust among employees, investors, regulators, and the public, even tries to bring Sutskever back. He is willing to consider it if the company seriously confronts its internal dysfunction. Instead, leadership politics kill the effort within a day. The company chooses not self-reckoning, but fortification.
Chapter 18: A Formula for Empire
Chapter 18 is the book’s interpretive climax. Hao opens with Altman discussing Napoleon, admiring not Napoleon’s morality but his strategic understanding of how ideals can be repurposed into instruments of power. That anecdote gives the chapter its governing lens. The argument is no longer just that OpenAI drifted or became conflicted. It is that the company’s mission evolved into a mechanism for empire-building. The language of benefiting humanity did not disappear; it became more useful precisely because it was flexible enough to justify almost any institutional transformation.
Hao distills that mechanism into three elements. First, a transcendent mission attracts elite talent by offering not merely employment but participation in a world-historical project. Second, the same mission attracts capital, political backing, infrastructure, and public indulgence by presenting acceleration as necessary for progress and for geopolitical survival. Third, and most important, the mission is elastic enough to be redefined whenever circumstances require it. If AGI is vague and “benefit” is undefined, then the organization leading the race can repeatedly revise what success, openness, safety, and public interest are supposed to mean.
To make that case, the chapter reconstructs the evolution of OpenAI’s stated purpose. In 2015 the mission is framed through nonprofit status and openness. Soon after, openness becomes compatible with withholding science. Then it becomes compatible with capped-profit arrangements so the company can attract resources. Later it becomes compatible with closed APIs instead of open-source releases. Then it becomes compatible with fast iterative deployment through products like ChatGPT. By 2024 it means distributing powerful tools widely at low cost while simultaneously deepening commercial integration. Each step is explained as fidelity to the mission, but the cumulative effect is mission drift without acknowledged drift.
That historical argument is connected to a crucial moment after the Omnicrisis. At the May 15 all-hands, Altman tells employees that OpenAI is effectively entering the AGI era, which requires higher security, deeper government engagement, better public storytelling, and more organized planning for socioeconomic upheaval. Yet in the same breath he says this does not mean slowing products, research, or partnerships. The chapter treats this combination as revealing. Safety, secrecy, lobbying, and commercial expansion are no longer counterweights to one another; they are all folded into the same legitimating narrative. Whatever strengthens OpenAI can be narrated as preparation for humanity’s future.
The renegotiation with Microsoft shows how far the old conceptual boundaries have softened. The original “sufficient AGI” clause once implied that there might come a recognizable threshold after which OpenAI’s most important intellectual property would stop flowing to Microsoft. Altman now suggests that no such clean threshold exists. AGI will be a continuum, not an event. That reformulation matters because it makes almost every prior safeguard easier to reinterpret. A mission that once justified structural brakes can now justify permanent strategic flexibility. The chapter’s point is that ambiguity is not a side effect of empire; it is one of its core tools.
Governance therefore becomes the next battleground. The board crisis proved that the nonprofit-over-for-profit structure could, under extreme conditions, actually constrain Altman. For that very reason, the structure becomes intolerable to him and to investors. Internal questions about the solidity of the nonprofit soon coincide with reporting that OpenAI is considering reorganizing itself into something much closer to a normal company, whether a conventional for-profit or a public benefit corporation. In either case, the nonprofit would persist in some form, but its direct governing power over the business would be broken. The chapter presents this not as a neutral legal cleanup, but as an effort to prevent another successful challenge to centralized executive control.
External pressure does not disappear while this restructuring is being prepared. Former safety-oriented employees organize public campaigns demanding transparency and whistleblower protections. SEC complaints emerge over the company’s treatment of departing staff. Senators ask questions. At the same time, internal turbulence continues: reporting lines shift, executives are moved around, and OpenAI recruits more experienced operators from mature technology companies. That combination is revealing. The company responds to criticism not by decentralizing, but by professionalizing the machinery of empire—adding adult supervision while preserving the direction of travel.
The summer and fall departures of major leaders deepen that picture. John Schulman leaves for Anthropic to focus on alignment. Brockman goes on sabbatical. Murati, Bob McGrew, and Barret Zoph all depart in quick succession. OpenAI publicly frames the exits as natural, amicable, and well timed. Hao insists the opposite is true. The timing is terrible because competition is intensifying, Sutskever has launched Safe Superintelligence, Anthropic is winning customers, xAI is scaling aggressively, and OpenAI’s next great model, Orion, is struggling to justify itself. The empire is not expanding from a position of serene mastery; it is expanding while under pressure, with fraying internal cohesion.
That strain is made sharper by a technical limit. For years, OpenAI’s formula had been straightforward: more data, more compute, more scale, better models. By late 2024, the book argues, that formula is no longer obviously enough. Orion disappoints relative to expectations, suggesting that the old method of progress may be reaching diminishing returns. Yet the organizational habits built around scaling remain intact. OpenAI has become better at exploiting an existing paradigm than at generating a fundamentally new one. This is one of the chapter’s quiet but important ironies: the company most associated with the future may be structurally dependent on yesterday’s breakthrough.
Even so, capital continues to flood in. OpenAI closes a record-setting funding round at a colossal valuation, but the money comes with strings attached: investors can demand repayment if the company does not complete its conversion to a for-profit structure within two years. More senior staff depart. Musk escalates his lawsuit. Meta unexpectedly sides with him in warning that OpenAI’s conversion could establish a dangerous precedent for startups exploiting nonprofit status before going commercial. By the end of the year, OpenAI formally announces that it will become a public benefit corporation while leaving the nonprofit as a separate shareholder entity. The official rationale is again resource mobilization in service of mission.
Hao’s concluding judgment is blunt. Beneath the changing structures, personnel, slogans, and policy language, one continuity remains: OpenAI has become Sam Altman’s empire. The mission that once promised distributed benefit and institutional restraint now justifies concentration—of talent, capital, infrastructure, political influence, and narrative authority. The final note, with Altman announcing confidence in building AGI and turning toward superintelligence, is not presented as triumphant. It is presented as the latest expansion of the same logic: every crisis, contradiction, and restructuring gets absorbed into a larger claim on the future.
Epilogue: “How the Empire Falls”
The epilogue shifts the book from diagnosis to possibility. Karen Hao opens not with OpenAI, Sam Altman, or Silicon Valley, but with a very different use of artificial intelligence: an Indigenous effort in Aotearoa New Zealand to help revive te reo Māori. That choice is deliberate. After an entire book showing how corporate AI has been built through concentration of power, extraction of resources, and disregard for human costs, the epilogue asks what another path might look like. Its core argument is that the problem is not simply “AI,” but the imperial model of AI development that has come to dominate the field.
Hao grounds that argument in the history of colonial suppression of Māori language and culture. She explains that te reo Māori was not pushed toward extinction by accident, but by policy, punishment, and the forced elevation of English over Indigenous identity. The destruction of a language, in her account, is far more than the loss of a communication tool. A language carries memory, worldview, inherited knowledge, emotional texture, and a people’s accumulated way of understanding life. That is why the violence of linguistic erasure becomes, in the epilogue, a way to think about the deeper violence embedded in the global AI order.
From there, Hao makes one of the epilogue’s sharpest points: large language models intensify language inequality. These systems reward scale, and scale means privileging languages with vast online corpora, abundant digitized text, and strong commercial value. Most of the world’s languages do not meet those conditions. As AI becomes infrastructure, the danger is not only that minority languages are ignored; it is that digital life itself becomes increasingly optimized for dominant languages, pushing vulnerable communities to abandon their own speech in order to participate economically and socially. In other words, the same technology marketed as universally beneficial can accelerate cultural disappearance.
The counterexample comes through Peter-Lucas Jones and Keoni Mahelona of Te Hiku Media. Their effort begins from a practical challenge: how to preserve and transcribe archival recordings of Māori speakers, including elders whose voices carry older linguistic forms less shaped by colonial influence. Yet Hao presents this not just as a technical project, but as a fundamentally different political vision of technology. Jones and Mahelona do not begin with the assumption that data should be seized first and justified later. They begin by asking what their community needs, what risks must be prevented, and what rules are required so that the tool serves the people rather than the other way around.
That is why the epilogue places such weight on consent, reciprocity, and sovereignty. Te Hiku’s work is built around the principle that community data should remain governed by the community, and that any technological system drawing on that data must remain accountable to those who contributed it. Hao treats this as a direct rebuttal to Silicon Valley’s standard operating logic. Where OpenAI and similar firms normalized extraction at scale—of text, speech, labor, energy, land, and creative work—Te Hiku insists on limits, obligation, and collective authority. The phrase that data is a new frontier of colonization becomes central here: AI is described as replaying older imperial patterns through new technical means.
Hao is careful to emphasize that Te Hiku’s achievement also disproves the ideology of bigness. The project did not rely on vast hidden datasets, giant foundation models, or immense compute. It relied on focused goals, community participation, and data gathered with permission and purpose. The implication is not merely that small models can sometimes work, but that the reigning assumption of the industry—that only massive centralized systems can produce valuable AI—is itself ideological. Te Hiku demonstrates that effective AI can be narrow, legible, locally governed, and socially restorative rather than globally extractive.
This becomes Hao’s bridge from critique to program. She makes clear that her book is not an argument for rejecting all AI. What she rejects is the claim that useful AI requires surrendering privacy, agency, labor, art, and democratic control to a handful of companies pursuing planetary scale. In the epilogue, Te Hiku stands as proof that AI can be developed in ways that are small, specific, consensual, and embedded in local histories. The deeper point is that technical design and political structure are inseparable: different values produce different systems.
Hao then widens the lens to show that Te Hiku is not alone. She highlights the Distributed AI Research Institute, founded by Timnit Gebru after her ouster from Google, as another institutional alternative to centralized corporate AI. DAIR is presented as an explicit answer to Silicon Valley concentration: research should be distributed, rooted in communities, and shaped by the people most affected by AI rather than by executives and investors far away. Its philosophy rejects the familiar pattern in which the world is acted upon by tech while having no meaningful role in shaping tech. In Hao’s framing, DAIR is trying to rebuild AI research as a democratic and socially accountable practice.
The same logic appears in her discussion of Milagros Miceli’s Data Workers’ Inquiry. Instead of treating data workers as invisible inputs into the machine, the project treats them as thinkers with standing, expertise, and the right to define the terms of inquiry. Paying participants as researchers rather than as disposable labor is not just a matter of fairness; it reverses the moral logic of the industry. Hao uses this section to expose how the current AI economy depends on a colonial arithmetic: companies ask how little they can pay, how much instability workers can absorb, and how thoroughly the human beings underneath the model can be hidden from view.
That human cost becomes concrete through the stories of Oskarina Veronica Fuentes Anaya and Mophat Okinyi. Their experiences make visible the precarious reality of the people who annotate, filter, and moderate the data on which AI systems depend. Fuentes’s account reveals unstable pay, fragmented work, and total insecurity. Okinyi’s path from exploited content moderator to organizer and advocate shows another route: workers can collectivize, speak publicly, build institutions, and force the industry to confront the people it prefers to treat as replaceable. Hao treats this organizing not as a side story but as part of the real struggle over AI’s future.
The environmental and geopolitical dimension enters through Daniel Pena and the resistance to data-center expansion in Uruguay. Hao uses this section to show that AI’s harms are distributed across a sprawling supply chain: energy extraction in one place, minerals in another, data processing in a third, labor discipline in yet another. That dispersion makes resistance harder, because each affected community sees only one piece of the system while the companies sit above the whole map. The answer, the epilogue argues, must therefore be transnational solidarity. If AI empire is global, resistance to it has to become global too.
The chapter closes with its clearest conceptual proposal, drawn from Ria Kalluri: the central question is not whether AI does “good” in some abstract sense, but whether it concentrates power or redistributes it. Hao organizes the answer around three axes—knowledge, resources, and influence. Today, she argues, AI empires dominate all three by monopolizing expertise, enclosing data and infrastructure, and shaping public imagination through hype. Dissolving that empire therefore requires funding independent research, forcing transparency about training data and model specifications, strengthening labor protections, exposing environmental costs, and expanding public education so that AI loses its mystical aura. The epilogue ends on a guarded but real note of hope: empire is not inevitable, because power can be redistributed, and the work of doing so has already begun.
See also
- A Ideologia do Vale do Silício - Uma Análise — Hao’s account of effective altruism, scaling doctrine, and mission elasticity at OpenAI is the institutional biography behind the ideological map drawn in this essay
- hedges_empire_of_illusion_resumo — Both books use “empire” as an analytical frame for American corporate power, but where Hedges diagnoses the spectacle that conceals decline, Hao traces the material supply chain that sustains expansion
- neoliberalism — Hao’s Chile chapters explicitly show how privatization, weak labor protections, and export dependence — the institutional residue of the Chicago Boys — created the conditions for AI extraction
- christian_alignment_problem_resumo — Brian Christian reconstructs the alignment problem as a technical and philosophical challenge; Hao shows how the same discourse functions inside a corporation as a political instrument that justifies both acceleration and secrecy
- karp_zamiska_technological_republic_resumo — Karp argues that tech companies should serve the democratic state; Hao documents what happens when they instead accumulate state-like power while claiming to serve humanity