Everything is Obvious
Once You Know the Answer
By Duncan J. Watts
Category: Psychology | Reading Duration: 38 min
About the Book
Everything Is Obvious offers insights into the failures of the most commonly used method of explaining human behavior: common sense. By offering sound alternatives to common-sense reasoning, it gives the reader the tools to better understand human behavior.
Who Should Read This?
- Curious minds who question what seems obvious
- Decision-makers navigating complex human behavior
- Fans of behavioral science and surprising insights
What’s in it for me? Learn how to avoid the pitfalls of common sense.
Every morning, people wake up and make thousands of decisions without thinking twice. Which shoe goes on first. Whether to grab an umbrella. Which side of the escalator to stand on. These micro-choices flow effortlessly, guided by something we rarely question: common sense. It's the accumulated wisdom of ordinary life – a mental library built from years of navigating social situations, avoiding embarrassment, and learning what works. Common sense tells you not to show up to work without pants. It tells you not to touch the stove when it's glowing red. It's the reason you know to look both ways before crossing the street, even when the light is green. But here's where things get interesting – and a little unsettling. That same trusted advisor, the one who's been so reliable with escalator etiquette and wardrobe basics, sometimes gives catastrophically bad advice when the stakes get higher. When you're trying to predict which fashion trends will dominate next season, or why certain paintings become priceless while others gather dust, or how to understand why people in different countries make wildly different choices about the same question – common sense doesn't just stumble. It falls flat on its face. In this summary, we're going to walk you through a series of moments where common sense completely fails us. Not to dismiss common sense entirely, or to make you distrust it, but to simply help you recognize when you may be using the wrong tool for the job. Think of it like using a butter knife to tighten a screw – it might work if you're lucky, but there's a better approach. Because the real danger of common sense isn’t that it’s sometimes wrong. It’s that it usually feels right. And once you learn to spot that feeling, you’ll begin to see the world very differently.
Chapter 1: Common Sense Isn't Common
You're standing on a crowded subway platform in your underwear. Everyone stares. You feel the heat of embarrassment crawl up your neck because – well, obviously you're supposed to wear pants on public transit. Right? Except: who decided that? Where's the rulebook? What we call "common sense" feels universal, like gravity or sunrise. It's the invisible architecture of daily life – which side of the escalator to stand on, whether it's okay to cut in line, how to split a dinner bill. These unwritten rules seem so obvious that we rarely question them. Until they shatter completely. Let me show you what happens when researchers take a simple game about fairness and play it across different cultures. The results reveal something unsettling about what we assume everyone "just knows." The game is called the ultimatum game, and it works like this: Two strangers sit across from each other. One gets $100 and must propose how to split it – anything from keeping it all to giving it all away. The other person has exactly two choices: accept the offer and both walk away with their share, or reject it and both get nothing. When Western players sat down to play, they gravitated toward the same "fair" solution again and again: a 50-50 split. Offers below $30? Rejected outright. People would rather get nothing than accept what felt like an insult. Common sense, right? Fairness means equal. Then researchers brought the game to the Machiguenga tribe in Peru. The offers dropped to 25 percent of the total. Even more striking: virtually no one rejected these low offers. What seemed unfair in New York or London felt perfectly reasonable in the Amazon. Plot twist: In Papua New Guinea, the Au and Gnau tribes flipped the script entirely. Players offered more than half – sometimes significantly more. Generosity beyond the 50-50 split. And yet these overly generous offers got rejected just as often as stingy ones. Too much fairness was somehow... unfair?
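The game's mechanics are simple enough to sketch in a few lines of Python. The acceptance thresholds below are hypothetical stand-ins for the three cultural patterns just described, not the researchers' actual data:

```python
# Illustrative sketch of the ultimatum game. The acceptance rules below are
# invented approximations of the cultural patterns described, not real data.

def ultimatum(offer, accepts):
    """Play one round over $100.

    `offer` is the amount proposed for the responder; `accepts` is a
    predicate encoding the responder's local notion of fairness.
    Returns (proposer_payoff, responder_payoff).
    """
    if accepts(offer):
        return 100 - offer, offer
    return 0, 0  # a rejection leaves both players with nothing

# Hypothetical acceptance rules mirroring the three patterns in the text:
western     = lambda offer: offer >= 30           # "insulting" low offers get rejected
machiguenga = lambda offer: offer > 0             # almost any nonzero offer is fine
au_gnau     = lambda offer: 20 <= offer <= 50     # too generous is also rejected

print(ultimatum(25, western))      # rejected -> (0, 0)
print(ultimatum(25, machiguenga))  # accepted -> (75, 25)
print(ultimatum(70, au_gnau))      # over-generous, rejected -> (0, 0)
```

The same offer produces opposite outcomes depending only on which fairness rule the responder carries, which is the chapter's point in miniature.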
What's happening here isn't that some cultures "get it" and others don't. Each group is following their common sense perfectly – it's just that common sense itself is a local product, shaped by the specific social world each group inhabits. The Machiguenga live in small, isolated family units with little market exchange. The Au and Gnau have complex gift-giving traditions where accepting too much creates uncomfortable social debts. Same game. Same money. Completely different "obvious" answers. This matters more than you might think. When we try to solve society's big problems – designing policies, predicting behavior, building systems – we lean heavily on common sense. We assume people will respond "rationally" or "naturally" to incentives. But if common sense shifts from one culture to another, from one context to the next, then our solutions might work brilliantly in one place and fail spectacularly everywhere else. We're building on sand, thinking it's bedrock. The fairness instinct, the pants-on-subway expectation – none of it is hardwired. All of it is learned. And that means it can be unlearned, rewritten, or simply absent in the next room over.
Chapter 2: When Common Sense Becomes Dangerous
Chicago's South Side, 1962. Twenty-eight identical high-rise towers begin climbing skyward, each one sixteen stories tall, stretching for nearly two miles. The Robert Taylor Homes. Architects and urban planners gathered for the groundbreaking ceremony, confident they'd cracked the code on public housing. They'd designed what they knew would work – spacious units, modern amenities, efficient layouts. Their blueprint came from something deeper than data: it came from their gut sense of how people should live. Fast forward three decades. Those same towers had become synonymous with gang violence, poverty, and decay – worse than the slums they'd replaced. What went wrong? The planners had fallen into a trap that ensnares decision-makers everywhere, from politicians to social scientists. They'd relied on common sense. Common sense serves us brilliantly in daily life. Should you wear a coat when it's cold? Take the shorter route to work? These micro-decisions benefit from our intuitive grasp of how the world works. Common sense helps us navigate social norms, follow unwritten rules, fit into the fabric of society. But scale up the problem – shift from personal choices to societal challenges – and common sense starts to crack. The urban planners behind the Robert Taylor Homes weren't reckless. They genuinely believed their designs would elevate residents' socioeconomic status. They'd seen poverty – read about it in newspapers, witnessed people struggling on street corners. That familiarity bred confidence. Why conduct rigorous studies when the solution seemed obvious? Which brings us to a peculiar double standard in how humans approach problems. When studying the physical world, we demand evidence. Scientists don't rely on hunches about gravity or chemical reactions – they experiment, measure, test. The scientific method reigns supreme. Yet when it comes to human behavior? We trust our instincts. Why the inconsistency? Because we're swimming in society every single day. 
We're so immersed in human interaction that we assume we understand it. A physicist wouldn't presume to know particle behavior without experimentation, but a planner feels qualified to redesign entire communities based on assumptions about how families should live, how neighbors should interact, how poverty works. That presumption – that intimate familiarity with society grants us expertise – is precisely what makes common sense so treacherous at scale. The very thing that helps us function individually becomes our blind spot collectively. Urban planning isn't unique in this regard. Policymakers craft legislation based on what "makes sense." Business leaders restructure organizations according to conventional wisdom. Social programs get designed around intuitive theories of human motivation. And time and again, these common-sense solutions fail spectacularly. I want to take you deeper into this paradox in what follows – to explore the specific ways our intuition misleads us when tackling big problems. Because recognizing where common sense fails isn't just academic. It's the first step toward making decisions that actually work.
Chapter 3: The Invisible Forces Behind Behavior – and Success
Germany and Austria share a language, a border, and much of their history. Yet when it comes to organ donation, they couldn’t look more different. In Germany, only about 12 percent of people consent to donate their organs. In Austria, it’s nearly everyone. Common sense reaches for a cultural explanation. Different values, different attitudes toward death, perhaps even different moral frameworks. But the real reason is almost absurdly simple: Austria makes organ donation the default option. Germany doesn’t. One checkbox – preselected or not – and an entire nation behaves differently. This is your first clue that human behavior isn’t driven by stable preferences or clear reasoning. It’s shaped by the situation – the invisible architecture surrounding our choices. Defaults are just the beginning. Subtle cues constantly steer our actions without us noticing. Researchers have shown that simply exposing people to words associated with aging can cause them to walk more slowly. Suggested numbers can anchor our decisions, pulling donations or estimates toward arbitrary reference points. These forces operate quietly, beneath awareness. And yet, when we observe behavior – our own or others’ – we instinctively ignore them. Instead, we tell simple stories. Someone gives a large donation? They must be generous. Someone accepts a low offer? They must be selfish or rational. We explain actions by pointing to character, not context. And that mistake doesn’t stop at individual behavior. It scales up. Consider the Mona Lisa. Why is it the most famous painting in the world? The answer feels obvious: its technique, its mystery, Leonardo’s genius. The painting must be famous because it is exceptional. But look closer, and the explanation begins to unravel. There are countless works of similar technical brilliance. Many masterpieces never achieved anything close to the same recognition. So why this one? Because success, like behavior, is shaped by invisible forces. 
Researchers demonstrated this in a simple experiment. Participants could listen to and download songs. Some saw how popular each song already was; others didn’t. In the group with social information, early popularity snowballed. Songs that got a slight initial advantage quickly pulled ahead, attracting more downloads simply because others had chosen them. The same songs, in a different setting, produced entirely different winners. This is cumulative advantage: once something gets ahead, it tends to stay ahead – not necessarily because it’s better, but because it’s already ahead. Popularity becomes self-reinforcing. Visibility creates more visibility. Success feeds on itself. Which means our common-sense explanations are often backward. We look at what succeeded and assume it must have had the qualities of success all along. We confuse outcome with cause. The Mona Lisa isn’t famous simply because it’s the best painting. It’s famous, in part, because it became famous – through a series of historical accidents, moments of attention, and self-reinforcing visibility. Put these insights together, and a deeper pattern emerges. We misunderstand behavior because we ignore the situation. We misunderstand success because we ignore the process. In both cases, common sense reaches for clean, intuitive explanations – personality, quality, merit – while overlooking the subtle forces that actually drive outcomes. The result is a comforting illusion: that people act the way they do because of who they are, and that winners win because they deserve to. Reality is messier.
Chapter 4: The Myth of the Influencer
And far more interesting. What if everything we believe about how ideas spread is backwards? Marketing executives spend millions chasing influencers. Celebrity endorsements. Viral campaigns centered on a handful of well-connected individuals. The logic seems airtight: find the hubs, the connectors, the people with massive networks, and let them do the heavy lifting. One Kim Kardashian tweet reaches millions. One post from the right person, and your product explodes. Except there's a problem. The science doesn't support it. Let me take you back to 1967, to a psychology lab where Stanley Milgram was about to launch one of the most misunderstood experiments in social science. He recruited 300 people and gave them a simple task: get a message to a specific friend, but you can't contact them directly. You have to pass it through strangers, each one handing it off to someone they think might be closer to the target. Six steps. That's what it took on average for messages to reach their destination. Six degrees of separation was born, and it became a cultural touchstone. But Milgram discovered something else, something that would shape marketing strategy for decades. Nearly half of those messages funneled through just three individuals. Three people acting as critical bridges in the network. The conclusion seemed obvious: networks need hubs. Find your influencers, and you control the flow. Fast forward several decades. Researchers decided to test whether this lab finding held up in the real world. They replicated Milgram's experiment, but this time with 60,000 people across 166 countries, using email instead of physical letters. Massive scale. Real-world conditions. The results? Completely different. Instead of messages clustering through a few key individuals, the paths were astonishingly diverse. Nearly as many unique chains as there were recipients. The network wasn't hub-dependent at all – it was egalitarian. 
Information flowed through thousands of different routes, treating people more or less equally. Which brings us back to Kim Kardashian and her $10,000 tweets. Companies paid that premium believing they were buying access to her network, her influence, her power to make things spread. But what if they'd taken that same $10,000 and given one dollar each to 10,000 ordinary people? What if the power wasn't concentrated in the celebrity, but distributed across the crowd? The uncomfortable truth: we each play a role in spreading information. Not just the famous. Not just the well-connected. All of us. The network doesn't depend on finding the perfect influencer – it depends on activating the many, not the few. Common sense told us to chase the hubs. The data tells us to trust the crowd.
Chapter 5: The Stories We Tell Ourselves About Yesterday
A French sailor stands on the deck of his ship in the English Channel, 1337. Salt spray stings his face. English vessels loom on the horizon. Within hours, cannon fire and chaos. Blood on wooden planks. Men screaming, falling into cold water. What's he thinking in that moment? That he's launching the Hundred Years' War? Of course not. He's thinking about survival. About his family back in port. About whether the wind will hold. He has no idea that centuries later, historians will mark this exact skirmish as the grand opening act of an epic conflict that would reshape Europe. That gap – between what people lived and what we say they lived – is where history gets slippery. I want to show you something unsettling about how we understand the past. We think we're learning from history, extracting wisdom from what came before. But our common sense keeps playing tricks on us, turning messy reality into tidy stories that feel true but might be completely wrong. Take Iraq, 2007. Violence had been spiraling for years. Then came the surge – 30,000 additional American troops deployed. The following summer, violence dropped dramatically. Most people connect those dots instantly: surge caused peace. Simple. Satisfying. Except we're missing something crucial. What would have happened without the surge? We'll never know. That alternate timeline doesn't exist. Meanwhile, during that exact same period, the Iraqi Army started taking a more aggressive stance against militias. Sunni tribes began turning against Al-Qaeda. Sectarian cleansing in Baghdad had already separated many communities, reducing friction. A dozen factors shifted simultaneously. Which one mattered most? Impossible to say with certainty. Yet common sense demands we pick one clean explanation and stick with it. The problem runs deeper than just Iraq. Every time we look backward, we're watching a movie when the people who lived it were improvising without a script. They didn't know which moments would become "historic."
They couldn't see the narrative arc we impose on their lives. Those English and French sailors in 1337? They thought they were settling a trade dispute, maybe. A territorial squabble. Another skirmish in an endless series of skirmishes. The phrase "Hundred Years' War" wouldn't be coined for centuries. The very concept would have been meaningless to them. We create these narratives – beginnings, middles, ends – because our minds crave structure. Stories make the chaos bearable, memorable, teachable. But they're our stories, not history's truth. We're essentially writing fiction over the top of what actually happened, then mistaking our plot summary for reality. Which means when we say we're "learning from history," we might just be learning from our own storytelling habits. Absorbing lessons from narratives we invented rather than from the messy, ambiguous, unresolved complexity that people actually experienced. History isn't a textbook. It's millions of people who had no idea what chapter they were in.
Chapter 6: The Illusion of Prediction – and Its Limits
Financial advisors speak with the confidence of prophets. “Invest here,” they declare, charts and graphs spread before them like sacred texts. “The data is clear. The trajectory is obvious.” Millions follow their guidance. Then the market collapses. What’s striking isn’t that experts get it wrong – it’s that they keep believing they can get it right. Which raises an uncomfortable question: what if prediction itself rests on a flawed assumption? Our common sense tells us there is one future waiting to unfold. One path. One outcome. The job of experts, then, is simply to read the signals correctly. But reality doesn’t work that way. The systems we’re trying to predict – markets, politics, technology – are complex and interconnected. Small changes ripple outward, creating consequences no one can fully anticipate. There isn’t one future ahead of us, but many possible futures, each shaped by tiny variations in the present. And yet, common sense nudges us to focus only on what seems important. Don’t worry about unlikely scenarios, it says. Concentrate on what matters. The problem is, we only know what matters after it happens. On September 10th, 2001, airport security focused on known threats: metal weapons, familiar hijacking tactics. The idea that planes themselves could be used as weapons wasn’t seriously considered. That blind spot only became visible in hindsight. Prediction fails not because we lack intelligence, but because we’re asking the wrong question. We’re searching for the future, when we should be preparing for many – including the ones that seem implausible. So what can we do instead? One approach is to turn to the crowd. Prediction markets aggregate thousands of individual judgments into a single forecast. One person overestimates, another underestimates. Optimism balances pessimism. In many cases – especially for repeatable events – this collective estimate outperforms individual experts. But even this approach has limits.
The decisions that matter most – launching a new product, navigating a geopolitical crisis – don’t repeat often enough for reliable patterns to emerge. These are one-off events. There’s no dataset to learn from. So we try another strategy: scenario planning. Imagine several possible futures. Prepare for each. Stay flexible. It sounds sensible – until reality delivers something no one imagined. In the 1980s, an oil company modeled three scenarios for the future of supply: slow growth, moderate growth, rapid growth. Each assumed oil production would increase. Instead, it dropped. None of their scenarios had accounted for that possibility. The future they faced wasn’t one of their options – it was something they hadn’t even considered. That’s the deeper problem with prediction. Even when we expand our models, consult experts, and imagine alternatives, we’re still working within the limits of what we can conceive. And the real future has no obligation to stay within those limits. If prediction keeps failing – even with our best tools – then perhaps prediction isn’t the goal. Perhaps the real challenge is simpler, and harder: Not to foresee the future, but to be ready to respond when it arrives.
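The crowd-aggregation idea behind prediction markets can be illustrated with a toy example: individual forecasts scatter widely around the truth, but their average lands much closer than a typical individual does. All the numbers here are invented for illustration, and the result depends on the errors being independent and unbiased.

```python
# Toy illustration of crowd aggregation: individual estimates are noisy,
# but averaging cancels much of the noise. All numbers are invented.
import random

TRUE_VALUE = 100.0
rng = random.Random(42)

# 1,000 forecasters, each unbiased but noisy (standard deviation 20).
estimates = [rng.gauss(TRUE_VALUE, 20) for _ in range(1000)]

crowd_error = abs(sum(estimates) / len(estimates) - TRUE_VALUE)
individual_errors = [abs(e - TRUE_VALUE) for e in estimates]
avg_individual_error = sum(individual_errors) / len(individual_errors)

print(f"average individual error: {avg_individual_error:.1f}")
print(f"crowd (mean) error:       {crowd_error:.1f}")
```

The catch, as the chapter notes, is that this cancellation only works when errors are independent: if every forecaster shares the same blind spot, or the event is a one-off with no track record, averaging cannot rescue the forecast.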
Chapter 7: Stop Predicting, Start Responding
Every January, fashion forecasters gather in conference rooms across Milan, Paris, and New York to declare what colors, cuts, and silhouettes will dominate runways eighteen months from now. They pore over trend reports. They analyze cultural movements. They make bold proclamations about the future of style. And almost nobody checks if they were right. Think about that paradox for a moment. An entire industry built on prediction, yet no one bothers to measure accuracy. Which raises an uncomfortable question: what if the whole exercise is pointless? Zara figured this out decades ago. While competitors were hiring trend forecasters and placing massive bets on what might be cool next season, the Spanish retailer was doing something radically different. They were watching what people were already wearing. Right now. On the streets. Their approach – what they call a measure-and-react strategy – works in three elegant steps. Designers and scouts observe real consumers in real cities, noting what catches on organically. They develop small batches of new styles based on these observations and ship them to select stores. Then comes the crucial part: they measure ruthlessly. What sells out by Tuesday? What's still hanging on racks by Friday? The winners get scaled up fast. The losers get abandoned without sentimentality. No crystal balls. No guesswork. Just observation and rapid response. This same principle has migrated far beyond fashion. During the 2000s, Google and Yahoo! discovered they could track influenza outbreaks simply by counting search queries for "flu" and "flu shots." Their estimates came remarkably close to official statistics from the Centers for Disease Control and Prevention – except they got the data faster, without waiting for doctors' reports to filter through bureaucratic channels. The internet's constant activity creates a real-time feedback loop that makes measure-and-react strategies incredibly powerful.
You're not predicting human behavior; you're reading it as it happens. But – and this matters – you can't always measure and react. Some decisions are too big, too irreversible. When you're choosing a factory location or restructuring your entire organization, you can't just "test it out" and pivot next week. Does that mean falling back on executive intuition and crossing your fingers? Not quite. I want to show you a different approach: tap into local knowledge. When Toyota wants to optimize an assembly line, they don't hire external consultants to theorize about efficiency. They ask the workers standing at that line every single day. These people know where the bottlenecks form. They understand why certain processes jam up. More importantly, they've often already invented workarounds that actually function. Their proximity to the problem gives them insights no distant expert could match. This isn't about democratic feel-good gestures – it's about accessing the most accurate data source available. The people closest to the work know things that spreadsheets can't capture. Whether you're tracking flu patterns through search data or redesigning a factory floor through worker input, the principle remains constant: the present contains more useful information than any forecast about the future. You just have to know where to look.
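The measure-and-react loop described above (ship small batches, measure sell-through, scale winners, drop losers) can be sketched as a simple decision rule. The product names, sales figures, and the 70 percent reorder threshold are all made up for illustration; Zara's actual process is of course far richer.

```python
# Sketch of a measure-and-react loop: ship small test batches, measure
# sell-through, reorder winners, drop losers. All data here is made up.

def measure_and_react(styles, sales, shipped, reorder_threshold=0.7):
    """Split styles into (scale_up, abandon) based on observed sell-through.

    sell-through = units sold / units shipped in the test batch.
    """
    scale_up, abandon = [], []
    for style in styles:
        sell_through = sales[style] / shipped[style]
        (scale_up if sell_through >= reorder_threshold else abandon).append(style)
    return scale_up, abandon

# A hypothetical test batch, measured after one week on the racks:
styles = ["cropped-jacket", "wide-leg-trouser", "satin-slip"]
shipped = {"cropped-jacket": 200, "wide-leg-trouser": 200, "satin-slip": 200}
sales = {"cropped-jacket": 180, "wide-leg-trouser": 60, "satin-slip": 150}

winners, losers = measure_and_react(styles, sales, shipped)
print("scale up:", winners)  # ['cropped-jacket', 'satin-slip']
print("abandon: ", losers)   # ['wide-leg-trouser']
```

The point is that no forecast appears anywhere in the loop: the only input is what customers have already done, observed quickly enough to act on.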
Chapter 8: The Luck We Don't See
A driver nods heavily at the wheel, fighting exhaustion as streetlights blur past. He's almost home. His eyes close for just a second – maybe two – and when they open, there's a sickening thud. A child, appearing from nowhere, now lies in the road. We send that driver to prison. Absolutely, without question. He deserves it, right? But rewind the scene. Same driver, same exhaustion, same dangerous moment of closed eyes. Except this time, the street is empty. He makes it home, crawls into bed, wakes up the next morning with no idea how close he came. Do we hunt him down and lock him up for what might have happened? Of course not. That would be absurd. Yet the only difference between these two scenarios is luck – terrible, random luck about where a child happened to be standing. The driver's choices, his negligence, his level of danger to society? Identical in both cases. What changes everything is something entirely outside his control. I want to explore what this reveals about fairness itself. Our common sense understanding of justice gets hijacked by outcomes. We judge people not by their intentions or even their actions in isolation, but by the random consequences that follow – consequences that often have nothing to do with who they actually are. Which means the way society treats us can hinge on pure chance. The luck of the draw. This isn't just philosophical musing. Researchers tracking people with similar talents and abilities have found something startling: five or ten years down the line, their success levels diverge dramatically. Same starting point, wildly different destinations. The difference? A series of random events and lucky breaks. Someone bumps into the right person at a party and lands their dream job. Another person, equally qualified, never gets that chance encounter. One thrives; the other struggles. Not because of merit, but because of timing, geography, coincidence – the invisible hand of randomness dealing different cards.
Fast forward to the bigger question: if luck plays such an enormous role in where we end up, what does that mean for how we build society? The philosopher John Rawls spent years wrestling with exactly this. His conclusion? A truly just society is one that actively works to minimize how much random luck determines inequality. Not eliminate chance entirely – that's impossible – but at least stop pretending that success and failure are purely earned. We won't solve this overnight. Creating a fair society is messy, complicated work. But we can start by recognizing when our common sense is lying to us. When we're about to judge someone's worth based on outcomes they couldn't control, we can pause. Question. Look deeper. Because fairness isn't about punishing bad luck or rewarding good fortune. It's about seeing past the randomness to what actually matters.
Final summary
In this summary of Everything is Obvious by Duncan J. Watts, you’ve learned that what feels obvious is often wrong. Common sense works well in everyday life. It helps you navigate social norms, make quick decisions, and avoid small mistakes. But when the stakes rise – when you try to explain success, understand behavior, or predict the future – it starts to mislead you. We’ve seen how. We assume behavior reflects character, when it’s often shaped by context. We assume success reflects quality, when it’s often driven by timing and momentum. We assume the past tells a clear story, when it’s really a messy collection of events we simplify after the fact. And we assume the future can be predicted, when in reality it branches into countless possibilities. Each time, common sense offers a simple explanation – and each time, it leaves something important out. That’s what makes it so powerful. It doesn’t feel like a guess. It feels like understanding. But recognizing its limits gives you something better: a different way of seeing. One that asks what might be missing, what hidden forces are at play, and which assumptions you’re taking for granted. This shift – from trusting intuition to questioning it – is the beginning of uncommon sense. It’s not about having all the answers. It’s about being less certain of the easy ones. Today, we have more tools than ever to understand how the world actually works. But the real challenge isn’t better data – it’s letting go of what feels obvious. Because common sense is fast and satisfying. Uncommon sense is slower and less comfortable. But it offers something far more valuable: a clearer view of reality. Not perfectly. Not completely. But closer than before. Okay, that’s it for this summary. We hope you enjoyed it. If you can, please take the time to leave us a rating – we always appreciate your feedback. See you in the next chapter.
About the Author
Duncan J. Watts is a principal researcher at Microsoft Research and well known for his work on network science. He has also authored the bestseller Six Degrees: The Science of a Connected Age, and was formerly a professor of sociology at Columbia University.