Empire of AI
Dreams and Nightmares in Sam Altman's OpenAI
By Karen Hao
Category: Technology & the Future | Reading Duration: 22 min | Rating: 4.1/5 (154 ratings)
About the Book
Empire of AI (2025) chronicles the evolution of OpenAI from an idealistic nonprofit into a $157 billion empire. It details the messy power struggles, untold human tragedies, and backroom partnerships that defined the unlikely race to build ChatGPT. This explosive account goes beyond the hype and helps you understand the real power dynamics shaping the technology that could very well define your future.
Who Should Read This?
- Users of ChatGPT curious about how the technology came into being
- Anyone concerned with the hidden ethical costs of new technologies
- Tech enthusiasts who love stories of Silicon Valley power
What’s in it for me? Uncover the truth behind the AI revolution.
You’ve probably used ChatGPT, or heard someone rave about how AI will transform everything from health care to education. Like many, you’ve probably marveled at its capabilities. Maybe this new tool could save you time. Maybe it could emancipate humanity from work forever.
But perhaps you’ve also felt a nagging unease about where this is all heading – and what it means for the survival of humanity. Well, behind the hype lies a story, one of how a nonprofit founded to democratize AI for humanity’s benefit has morphed into a $157 billion empire. And it’s precisely this story that you’ll discover in this Blink – a narrative that sometimes sounds more like dystopian fiction than recent history. By the end, you’ll understand how OpenAI went from refusing to release GPT-2 for safety reasons to rushing ChatGPT to market in a mere two weeks.
These events are key to understanding the world today – one increasingly tying its fate to that of AI. Once you see past OpenAI’s sterile PR narrative about progress, you'll have a better idea whether AI is truly redistributing power to you – or simply fortifying an empire built on your data, your creativity, and your trust. With that said, let’s go back to the very beginning – to a private dinner where the seeds of our current reality were unwittingly planted. The guests were all seated except for one – Elon Musk was over an hour late.
Chapter 1: A new beginning
It’s 2015, and Sam Altman, the 30-year-old president of Y Combinator, is watching as guests arrive for an intimate dinner. The venue? The prestigious Rosewood Hotel. His guests?
Some of Silicon Valley’s top minds on artificial intelligence. The agenda? One item: to discuss the future of humanity. Between courses of Wagyu beef, of course. An hour after the dinner was meant to start, Musk finally arrives. For him, the purpose of the dinner is to put a stop to months of anxiety – an anxiety rooted in Google’s purchase of the AI lab DeepMind for a measly $500 million.
It was keeping him awake at night, and even led to him clashing with his friend and Google cofounder, Larry Page. While Page saw superintelligence as the next logical step in evolution, Musk disagreed. Page even called Musk a “speciesist” – someone who discriminated against nonhuman intelligence. After Musk took his seat at the table, Altman started to address the assembled group. Musk had studied Altman – in their email exchanges, the young man had said all the right things about AI safety. Altman’s stated goal was to create the world’s first AGI, or artificial general intelligence – and to harness it for individual empowerment.
Most importantly, they both shared the conviction that it should be safety-focused, not profit-driven. Or so Musk thought. So they began to pitch their vision for an AI lab to all those assembled – one free from Google's commercial pressures. Musk pledged up to a billion dollars, and the deal was sealed. They would call it OpenAI and, importantly, it would be a nonprofit. But by 2017, the nonprofit dream was meeting with a harsh reality.
The reality was quickly dawning upon them that building AGI would require billions of dollars annually to afford the tens of thousands of GPUs needed. The nonprofit structure was a dead end. So, the leadership began to discuss what had been previously unthinkable: OpenAI had to transform into a for-profit company. But who would be CEO of such an entity? A power struggle ensued. Musk’s plan was to fold OpenAI into Tesla, using his existing riches to battle Google.
But Altman had different plans. He appealed directly to key leaders, framing Musk as too “erratic” to lead the company. Could they really trust AGI in the hands of one all-powerful, unpredictable man? The maneuver worked and the showdown was over. In early 2018, outmaneuvered and defeated, Musk announced his departure at a tense all-hands meeting. He didn’t go quietly, though.
He told the stunned employees that OpenAI would fail as a nonprofit – and that he would now pursue the mission of safe AGI at Tesla instead. He then walked away – and took his money with him. Altman stood victorious, but he was the president of a company with a dangerously empty bank account. The safety-obsessed cofounder was gone. How could OpenAI plug the holes in its coffers?
Chapter 2: The price of scale
With Musk gone, OpenAI’s nonprofit dream seemed dead and buried. But something was starting to take shape that might make all the drama worth it. This something emanated from the mind of Ilya Sutskever – the chief scientist Musk and Altman had poached from Google. While others debated novel AI architectures, Sutskever started developing an idea – if the intelligence of different species correlated with the size of their brains, why wouldn’t AI follow the same principle?
Basically, if he was right – and artificial nodes were like neurons in the human brain – then artificial intelligence wouldn’t require a new architecture at all. It just needed more computing power – and much more data. This became OpenAI’s new religion, known as the scaling doctrine. The plan was simple. Instead of training its model on curated texts, OpenAI simply fed its AI the raw internet itself, scraping text from billions of websites without prejudice. Sutskever’s idea was that if OpenAI trained its AI on this huge amount of data, eventually intelligence would simply emerge.
There was just one tiny problem. Early experiments showed that models trained on the raw internet ended up spouting conspiracy theories about George Soros and neo-Nazi propaganda. It turned out large parts of the raw internet were catering to humanity’s darker impulses. So OpenAI came up with a solution – build a human filter with the help of outsourced workers in the Global South. One of those workers was Mophat Okinyi in Kenya. Each morning, he arrived to work as a quality analyst on the sexual content team.
His job was to read and categorize thousands of grotesque text snippets. The worst was child sexual abuse, followed by content describing incest and bestiality. Some of it was scraped from erotica sites detailing rape fantasies – other parts were generated by an AI instructed to imagine the unimaginable. For less than $2 an hour, Okinyi’s team labeled humanity’s worst impulses. As time dragged on, Okinyi felt his sanity start to fray. Some of the more horrifying posts followed him home and even into his sleep.
He withdrew from his family, unable to explain the source of his trauma due to strict NDAs. Around him, his colleagues began to break down. They didn’t know they were building ChatGPT – only that the work was slowly destroying them. When ChatGPT launched and became the fastest-growing app in history, Okinyi and the others finally understood. The magical AI that was now helping millions write emails and essays spoke so well because teams like his had filtered out the filth first. This would be the first great trauma of the AI age, one borne by tech workers in the Global South for the sake of enriching Silicon Valley.
So, with the data of the internet tamed, it was time to tackle the next big problem. To match that huge amount of data, OpenAI needed an equally huge amount of computing power. A partner with access to computation on an industrial scale was needed. In the summer of 2019, Altman found himself at the most exclusive gathering of the American capitalist class.
Chapter 3: The deal that changed everything
This was, of course, the Allen & Company conference in Sun Valley, Idaho. Little did he know that it was here that a partnership was about to be formed, one that would solve his computation problem – and set history in motion. It was here that Altman found a counterpart in Microsoft CEO Satya Nadella. Nadella was anxious – Google was winning the AI race, publishing groundbreaking research and acquiring the field’s top minds.
Altman, in turn, faced his own problem: the insatiable computational hunger of his scaling doctrine. So, over mountain views and private dinners, a partnership started to take shape. It looked like this: Microsoft would invest $1 billion into a new “capped-profit” entity within OpenAI, a corporate structure so unconventional it confounded lawyers. In return, Microsoft secured exclusive rights to commercialize OpenAI’s technology for an unspecified time, gaining a critical advantage over rivals. It was win-win – Altman’s demand for computation became a guaranteed revenue stream for Microsoft. The partnership simmered quietly for three years.
Then, in late 2022, a rumor reached Altman: Anthropic, a rival lab founded by disgruntled OpenAI alumni, was close to releasing its own chatbot. The risk of losing OpenAI’s lead – and to former colleagues, no less – was simply unthinkable. So, OpenAI abandoned its plan to wait for the more advanced GPT-4. Instead, it quickly packaged its existing GPT-3.5 model with a new chat interface. It would be launched in two weeks, framed as a “research preview” rather than a finished product.
The name they chose was ChatGPT. The team’s expectations were that it would generate a brief flurry of interest before fading. The night before launch, employees placed casual wagers on user numbers – optimists guessed a few tens of thousands. They decided in the end to provision servers for one hundred thousand users – just to be safe. On November 30, 2022, the “research preview” went live. The rest, as they say, is history.
Within five days, one million people had signed up. An OpenAI engineer at a recruiting event had to ditch the party, furiously texting colleagues that “everything is crashing.” At Google, “code red” alerts pulled staff from their holidays. The entire tech industry was forced to react. Two months later, the user count hit the one hundred million mark, making ChatGPT the fastest-growing consumer application in history. For many at OpenAI, the public triumph felt like vindication.
Many in the applied and product teams were energized to deploy new versions as fast as possible. But an old internal rift in the company was starting to grow. A small but vocal minority of safety researchers sounded increasingly loud alarms about the dangers the technology could unleash. The company, born out of a desire to safely manage AI development, was now hurtling at breakneck speed toward a high-stakes race to dominate the market. These two opposing visions for the company couldn’t coexist – one would have to yield.
Chapter 4: The war for OpenAI
In late 2020, employees watched a video all-hands where Dario Amodei, one of OpenAI’s top safety researchers, read a canned statement – he, his sister Daniela, and several others were leaving. The next year, they announced their own public benefit corporation: Anthropic. Their departure was an early warning shot against the company’s pivot to commercialization, one that went unheeded. Fast forward to 2023 – and with the company hurtling forward on the success of ChatGPT, Ilya Sutskever found himself the last remaining founder who embodied that original safety-first position.
As OpenAI’s chief scientist, he’d been the primary architect of the very scaling doctrine that made everything possible. He was, in a sense, at war with himself – thrilled by the progress, yet terrified by the implications. The arrival of GPT-4 was the breaking point for him. The model’s leap in capability, its sparks of what looked unnervingly like real understanding, convinced Sutskever that AGI was no longer a distant abstraction. It was coming. Fast.
His behavior grew prophetic. At company gatherings, he’d urge employees to “Feel the AGI.” At one retreat, senior scientists in bathrobes stood around a fire pit as Sutskever set ablaze a wooden effigy he’d commissioned. It represented, he explained, a deceitful, misaligned AGI. OpenAI’s duty, he declared, was to destroy it. It turned out that Sutskever’s paranoia was rooted in Sam Altman’s patterns of behavior.
Each of the CEO’s actions, viewed in isolation, seemed minor. But when viewed together, a clear pattern of manipulation – pitting leaders against each other, creating chaos only he could resolve – snapped into focus. The most recent example involved Jakub Pachocki, a top researcher who reported to Sutskever. Frustrated with his lack of authority, Pachocki went to Altman, who encouraged his ambitions and gave him a more senior role. Altman never mentioned any of it to Sutskever, his own chief scientist. Sutskever considered it a direct betrayal.
Meanwhile, as Sutskever wrestled with all this, old allegations from Sam’s sister, Annie Altman, recirculated on social media. This added another dimension to his doubts. One viral tweet accused Sam of various forms of abuse. Another alluded to a nonconsensual encounter in her childhood bed. Sutskever didn’t know if the allegations were true. For him, they were potential evidence of a long-standing history of problematic behavior, a pattern he now felt he was experiencing firsthand.
By October 2023, Sutskever had seen enough. He sent an email to Helen Toner, an AI safety researcher on OpenAI’s board who had her own concerns about Altman’s lack of transparency. When they spoke, Sutskever was explicit. He noted the “tremendous opportunity” the board now had and told her that he believed Altman was the wrong person to be in control when AGI finally arrived.
The scientific founder was moving against the CEO. The holy war was about to go public. Sutskever’s move set the board in motion. Independent directors Helen Toner, Tasha McCauley, and Adam D’Angelo began meeting in secret, almost daily.
Chapter 5: The final gambit
Their accusations against Altman centered on a lack of transparency as well as patterns of manipulation and lying – sometimes to the point where he seemed not even to believe himself. On Friday, November 17, at noon Pacific time, Altman joined a Google Meet with four board members. Sutskever was brief: “You're fired.” Witnesses say Altman fell silent, stunned, looking away to compose himself.
The board’s public statement was surgical, concluding that Altman “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” They thought it would be clean. But it turned out they’d made a massive miscalculation. OpenAI president and Altman ally Greg Brockman quit in protest within hours. Senior researchers followed. At an emergency all-hands that afternoon, Sutskever fumbled the explanation.
Pressed for details by anxious employees, he offered little, simply repeating the public statement. He told them to “read the press release” – and to keep their expectations for transparency “low.” The blowback intensified. At Microsoft, Satya Nadella, who found out only a minute before the announcement, was furious. His multi-billion-dollar partner had just imploded. He immediately offered to hire Altman and Brockman to lead a new AI lab.
Over that weekend, an open letter spread through Slack with a blunt message: reinstate Sam and resign, or the signatories would walk to Microsoft. Within hours, it had over 500 signatures – more than two-thirds of the company – a number that later climbed above 700. By Tuesday, November 21, the board surrendered – Altman returned as CEO, vindicated. The directors who fired him were replaced by allies like Larry Summers and Bret Taylor. The five-day coup was a spectacular failure of OpenAI's founding experiment. Sutskever and his allies had missed their shot – and now no one was left to pump the brakes.
In the months that followed, the empire's transformation was complete. Sutskever never returned to the office. The safety researchers scattered to Anthropic, Google, or their own labs. Altman pursued the final betrayal of the original vision: restructuring into a typical corporation. OpenAI showed that it had become, and would remain, Sam Altman’s personal empire of AI. Which brings us more or less to the present, one where the thoughts of Joseph Weizenbaum, one of AI’s founding fathers, ring truer than ever.
He understood something that Altman seemed to have forgotten: once you explain how a magic trick like AI works, “its magic crumbles away.” It’s as simple as that. And it’s precisely this aura of magic that drives OpenAI’s value. Once people realize that you don’t need some special group of people to build and control the magic, the magic evaporates completely.
Which raises the question not of whether Altman’s empire will fail, but when. And even more importantly: What will we build in its place? This is a question that’s even harder to answer.
Final summary
In this Blink to Empire of AI by Karen Hao, you’ve learned that the story of OpenAI started with a big vision to “benefit all of humanity” – but that quickly turned into a messy power struggle. When the nonprofit realized it had to become a for-profit to survive the dynamics of the market, Elon Musk and Sam Altman jostled for control. Altman ended up outmaneuvering Musk, who then left the company. With Altman now in charge, he secured a partnership with Microsoft to get the money for his scaling doctrine.
Then came the surprise launch of ChatGPT, which was rushed out in two weeks. This led to an escalation in the internal war between safety and product-focused employees, culminating in a failed coup. Altman’s return cemented his control – and soon there were plans to dismantle the nonprofit’s oversight for good. Okay, that’s it for this Blink.
We hope you enjoyed it. If you can, please take the time to leave us a rating – we always appreciate your feedback. See you in the next Blink.
About the Author
Karen Hao is an award-winning journalist specializing in the societal impacts of artificial intelligence. She’s served as a reporter for the Wall Street Journal and as the senior editor for AI at MIT Technology Review. Her reporting is regularly taught in universities and has been cited by governments, establishing her as a leading voice on the subject.