Karen Hao on the Empire of AI: AGI Evangelists, the Cult of Belief, and Its Hidden Costs
In the high-stakes world of artificial intelligence, one company’s quest for god-like machines has birthed what journalist Karen Hao calls an “empire.” Her new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, dissects how OpenAI’s fervor for artificial general intelligence (AGI) has reshaped the industry—and at what price. In a recent TechCrunch interview, Hao unpacks the ideological machinery behind this empire, where belief in AGI drives billions in spending but risks ethical blind spots and global inequities.
OpenAI as Modern Empire: From Nonprofit Idealism to Powerhouse
OpenAI started in 2015 as a nonprofit lab, promising transparent AI for humanity’s benefit. Fast-forward to 2025: It’s a secretive behemoth, wielding economic and political clout rivaling nation-states. Hao likens it to an empire terraforming the planet—rewiring geopolitics, economies, and lives in pursuit of AGI, defined as autonomous systems outpacing humans in economically vital tasks.
This shift wasn’t accidental. Under CEO Sam Altman, OpenAI pivoted to a “capped-profit” model, attracting billions from Microsoft while cloaking its ambition in messianic rhetoric: AGI would “elevate humanity” through abundance and discovery. Hao’s interviews with 260 insiders reveal a culture blending rationalism, effective altruism, and longtermism: ideologies born in Bay Area forums that cast AI as savior or destroyer, justifying speed over scrutiny.
The AGI Evangelists: Zealots in Silicon Valley Temples
At the heart of this empire are the AGI evangelists: OpenAI leaders and acolytes whose voices “shook with fervor,” as Hao recounts. Altman, a self-proclaimed “missionary,” frames AGI as “the most important project in the world,” rhetoric Hao compares to founders who build “religions” disguised as companies.
Hao captures this zeal in chapter titles like “Divine Right” and “A Civilizing Mission,” evoking employees who saw themselves bringing AGI’s light to the world. Utopians dream of machine-made abundance; doomers fear apocalypse and race to control AGI first. Either way, the “scale-at-all-costs” doctrine dominates: pour resources into ever-larger models like GPT-4 and sideline alternatives, such as algorithmic efficiency gains, that could save compute and data. Victory means monopoly; falling behind means ceding AGI’s reins.
Critics like Hao argue this isn’t inevitable progress. It’s a self-fulfilling prophecy: Define AGI as a winner-takes-all quest, and speed trumps ethics, turning labs into temples of unchecked power.
The Cost of Belief: From Global Exploitation to Ethical Compromises
Belief in AGI isn’t free—it exacts a toll. Hao’s exposé highlights exploited global labor: Data annotators in Kenya earn pennies labeling toxic content for models, enduring trauma without recourse. Massive compute demands guzzle energy, exacerbating climate woes, while secrecy breeds governance chaos—like Altman’s 2023 ousting and rehiring.
The empire’s narrative power sustains it: Mission rhetoric rationalizes compromises, blurring profit and salvation. Hao warns of “moral blind spots”—justifying extreme measures because stakes feel cosmic. Who benefits? Not the vulnerable workers or everyday users, but the empire’s architects.
Public reactions mix awe and alarm. On X, posts echo Hao’s critique: “OpenAI’s AGI cult justifies everything—from secrecy to exploitation.” Reviews praise her “prosecutorial” lens but debate whether the empire framing overreaches: the facts are solid, but is Altman really a “colonial emperor”? Experts like Charley Johnson call it a “religion of AGI,” urging skepticism amid the hype.
Impact on U.S. Readers: Power, Profit, and Everyday AI
For Americans, Hao’s empire hits close to home. OpenAI’s influence permeates daily life—ChatGPT in schools, DALL-E in design—yet its opacity raises alarms. Economically, the AGI race funnels billions into compute (Nvidia’s market cap soared by $2 trillion in 2025), boosting U.S. tech but widening inequality.
Lifestyle-wise, belief-driven AI promises abundance but delivers job churn, with writers and artists displaced by generative tools. Politically, it echoes Big Tech’s sway: OpenAI lobbies like a nation-state, shaping regulations ahead of the 2026 elections. Technologically, the “scale” dogma stifles innovation, ignoring more efficient paths that could democratize AI.
Conclusion: Reclaiming Agency from the AGI Mirage
Karen Hao’s Empire of AI isn’t just OpenAI’s story—it’s a cautionary tale of belief’s double edge. AGI evangelists’ zeal builds wonders but at the cost of ethics, equity, and oversight. As Hao urges, pause the “is AGI coming?” debate; ask who profits from the faith that it will.
Looking ahead, 2026 could see pushback—academics and communities demanding accountability. For now, her work empowers us: In this empire, we’re not subjects—we’re citizens with agency to demand better.
