
The Watchman's Rattle

Thinking Our Way Out of Extinction

By Rebecca D. Costa

Category: Science | Reading Duration: 23 min | Rating: 4.5/5 (40 ratings)


About the Book

The Watchman’s Rattle (2010) asks the chilling question of what happens when the world we’ve built becomes too complex for the human brain to manage. Drawing on history, neuroscience, and real-world case studies, it reveals why brilliant civilizations stall, why obvious solutions get ignored, and why insight may be humanity’s last evolutionary advantage.

Who Should Read This?

  • Big-picture thinkers who like connecting history, psychology, and science
  • Problem-solvers and innovators who care about finding smarter solutions
  • Anyone feeling skeptical of modern life and curious about our future

What’s in it for me? Discover how we can break the doomsday cycle that topples complex societies.

Every generation likes to believe it’s living at the peak of human progress, armed with better tools, better knowledge, and better answers than anyone before it. Yet history tells a less comforting story: advanced civilizations tend to stall and unravel in remarkably similar ways.

This Blink steps back from daily headlines and asks a deeper question – whether the real threat to modern society isn’t a lack of technology or effort, but a growing mismatch between the speed of change and the limits of the human mind. What follows is a guided tour through that mismatch and, more importantly, the escape routes it leaves behind. We’ll blend history, neuroscience, and real-world examples to show how gridlock forms, why obvious solutions so often get ignored, and how deeply rooted beliefs can quietly block progress. At the same time, we’ll highlight a powerful and underused human capacity – insight – and explain how strengthening the way we think may be the most practical survival strategy we have.

Chapter 1: When civilizations hit the wall

Every era has its shiny new tools, its clever systems, its sense that “this time, we’ve figured it out.” But time and again, that confidence proves misplaced. The problem is what the author calls the brain-speed gap. Put simply, the worlds we build change so fast that our brains can’t evolve quickly enough to keep up.

Instead, we remain stuck in our ways and big problems go unsolved. Right now, new laws, technologies, markets, and scientific discoveries appear faster than our brains can track. So while the way the world functions may change dramatically within one or two generations, our brains change only on a much slower evolutionary timeline. That mismatch plays a big role in the repeating historical cycle of boom, stall, and collapse. We can see it in the great empires of the past, including the Mayans, the Romans, and the Khmer Empire. At their height, the Mayans supported enormous populations in harsh territory without any modern infrastructure.

Still, they built sophisticated cities and systems that continue to astound. But then the civilization unraveled within a relatively short amount of time. Scholars agree that a sustained period of drought contributed to this downfall. But if we look at what happened just prior to the drought, we can also identify how shifts in the Mayan society made them less able to respond. To make a long story short, Mayan civilization had become too intricate for the society to fully grasp. It had grown to such a point that the available mental tools, institutions, and coordination capacity proved insufficient to save the day.

When you reach that point, problems don’t disappear. They stack. They get handed forward like unpaid bills. Collapse rarely arrives like a lightning strike. It shows up after a long period where progress slows, decision-making thickens, and solutions start dying in committee. Costa flags two early warning signs.

The first is gridlock: a society can name its threats – water stress, instability, environmental damage – yet can’t move decisively. The second is a drift toward beliefs replacing evidence. Belief itself is part of being human, and it helps people function under uncertainty. It’s bad news when evidence becomes too hard to gather and comforting narratives grab the steering wheel. The Maya illustrate this arc. As pressures grew, practical efforts gave way to religious and ritual responses, and the unresolved issues swelled across generations.

Rome shows a parallel pattern: the empire became so expensive and complex to maintain that there was no conceivable system to meet the combined needs of the economy, governance, and defense. In these and other cases, collapse needn’t be guaranteed. As we’ll see in the sections ahead, the key is noticing the threshold early, while there’s still room to adapt. That sets up the next question: when the mind hits its limits, how can we generate the needed breakthrough?

Chapter 2: Insight as evolution’s escape hatch

Here’s the big hopeful twist: the human brain has more than one way to solve problems, and one of those ways is built for moments when the usual methods fail. It’s called insight, and it’s the kind of mental leap that shows up in the origin stories of major discoveries and inventions – be it the ah-ha moment that led to Newton’s theory of gravity, James Watson and Francis Crick cracking the code of DNA, or Charles Townes solving the problems facing NASA’s Apollo lunar landing program. For centuries, we chalked these kinds of insights up to genius, luck, or mystery. But neuroscience has finally started to explain it.

Basically, there are three cognitive “modes.” First is analysis, the orderly method: you gather facts, sort them, eliminate options one by one, and choose what survives. Second is synthesis, the creative pattern-maker: you work with hints, context, and implied information, connecting dots that aren’t obviously related. Then comes the third mode: insight. Insight arrives with that unmistakable “eureka” sensation: fast, specific, and hard to trace backward. You’ve been carrying the problem, your brain kicking it around in the background, and then the final answer suddenly emerges as a complete package.

We can think of these three modes as steps we take when faced with a problem. We start with analysis, but sometimes we run into the cognitive threshold, where analysis can’t narrow the options because there are too many. That’s when you level up to synthesis and start spinning patterns. When the patterns don’t hold, insight becomes the breaker switch that can restart the system and reveal a path through the complexity. But here’s the rub: if the research holds, and insight is a built-in human capacity, why do we still hit these barriers?

For example, why do we continue to cling to nuclear power as the best alternative to fossil fuels, when we’re well aware that it still involves burying dangerous radioactive waste in the ground, where it must be managed for generations? Why aren’t we giving more consideration to much cheaper, sustainable solutions? Consider an idea that has been bouncing around for decades: if we painted our roofs and roads white, they would reflect more sunlight and absorb less heat. This would immediately reduce energy demand, cool our cities, and reduce air pollution. Another idea is releasing sulfur dioxide into the atmosphere to generate atmospheric shading. Studies suggest this would cost $250 million in the first year and $100 million in each year after – a fraction of what’s spent on current plans for carbon emission reduction.

And yet, these ideas trigger fierce resistance. Why? Because our barriers aren’t technical. They live in culture, identity, and shared beliefs. Our barriers are what are known as supermemes – ideas so deeply embedded in us that they steer us and cause obvious solutions to stall.

Chapter 3: Supermemes, the silent saboteurs

Memes are like viruses. They spread quickly from one person to the next, and before long a whole group is infected. A meme can be any bit of information – a concept, a behavior, a catchy slogan. But a supermeme is something else.

This is a potent mix of belief and behavior, a pattern that can become so dominant it dictates which solutions a society will even consider. There are five supermemes that tend to stand in the way of insight breakthroughs. The first is irrational opposition. How often have you met protesters who can articulate what they reject with passion and detail, but go quiet when asked for alternatives? The same pattern shows up in everyday policy contradictions: people want lower emissions while resisting higher fuel prices or smaller cars; they dislike taxes while expecting robust public services. Viable solutions invariably get rejected because they fail to satisfy an impossible, irrational, immovable opposition.

Okay, let’s move on to the second supermeme, which is the personalization of blame. Now, the root issue in a lot of the problems caused by these supermemes lies in complexity. The fact is that many of the big problems we face, from obesity to climate change, are complex, systemic issues. But when systems fail and the causes are tangled, societies still instinctively reach for a single villain. Leaders blame individuals, citizens blame leaders, and many people blame themselves. Systemic problems are like dropping a bag of marbles on the floor – causes scatter everywhere: laws, incentives, technologies, habits, biology, random shocks.

Blame feels satisfying because it draws a straight line through a messy landscape. Yet it rarely fixes anything; in fact, it can make things worse. On a personal level, this supermeme produces guilt and confusion, like worrying that your household recycling setup is the key to solving the world’s many resource issues. Which leads us to the third supermeme: counterfeit correlation. This is when we mistake coincidence for cause, then build policy and identity on it.

While we’ve gotten steadily better at collecting data, we’ve also grown more prone to treating correlations as an easy shortcut to real evidence. What’s worse, we have such a surplus of data that it’s easy to reverse-engineer the so-called “evidence” to suggest whatever suits the argument. Then consensus starts replacing proof. Are hybrid cars better or worse for the environment?

Was Saddam Hussein the greatest threat to America or not? We ping-pong back and forth, with experts pitching different counterfeit correlations to fit the mood of the day. The cost is misdiagnosis: when you pick the wrong cause, you pick the wrong cure, then burn years cycling through remedies that never touch the root. In the next section, we’ll finish the supermeme lineup with the last two barriers to insight.

Chapter 4: Silo thinking and extreme economics

The fourth supermeme is silo thinking. This one often starts with good intentions. You’ve got a big problem? Well, why not break it down into smaller parts and make the complexity feel manageable?

Organizations love to silo things because it makes accountability tidy. Departments form, agencies specialize, disciplines split, targets get measured. But trouble arrives when that structure becomes a way of life – that’s silo thinking – where the parts stop talking to each other and the whole problem becomes everyone’s responsibility and no one’s job. Modern gridlock is defined by “non-conversations”: agencies that don’t share intelligence, academic departments that operate like separate countries, industries and advocates who treat outside dialogue as betrayal, political parties who refuse to coordinate even when they agree on the stakes. In a siloed environment, information that’s already hard to acquire becomes more elusive, because it’s trapped behind walls, permissions, and turf instincts. Resources get wasted duplicating work that another silo has already done, and collaboration becomes a draining uphill climb.

Consider what happened in Haiti after the 2010 earthquake. Hundreds of separate volunteer and government groups descended en masse – all with good intentions – yet their uncoordinated arrival created chaos, as clogged ports and airstrips prevented doctors and rescue professionals from doing their jobs. That brings us to the fifth and final supermeme, the one that can feel so normal it escapes notice: extreme economics. These days, business principles – profit and loss, risk and reward – act as the default test for what matters, across education, policy, research, and even personal life decisions. Of course, commerce has powered innovation, efficiency, longer lives, and remarkable technologies. But when profitability becomes the main gatekeeper of legitimacy, filtering out solutions that help humanity but don’t fit neat models or quick returns, we’ve got a problem.

Now, on the other hand, consider Grameen Bank, which Muhammad Yunus started as a way to get money to the cash-strapped people who need it most. Grameen Bank specializes in microfinance, and its approach shows what can happen when you overcome the five supermemes. Approving financing for destitute people defies both irrational opposition and the personalization of blame – the thinking that such clients are lazy and won’t repay their loans. The idea that poor people don’t repay loans turns out to be a counterfeit correlation, disproved by the fact that 97 percent of the approved loans have been repaid.

Yunus’s insight also showed a willingness to embrace the complexity of the problem: he treated bank and community goals as intertwined, and refused to put profit ahead of people. The result is that millions of people in 58 countries have been helped out of poverty. This is the kind of insight that comes from defying our man-made supermemes.

Chapter 5: Parallel fixes and better brains

So it’s true that our world is reaching a dangerous level of complexity, similar to the kind that brought down the Mayans and Romans. But the key difference between us and ancient civilizations is that we know our cognitive limits. We have the benefit of evolutionary knowledge, neuroscience, and historical hindsight, which means we can act with awareness and avoid the same mistakes. Our solution lies in tactical thinking.

This is a process of stabilizing the immediate situation while also working on the deeper source of the problem. We can’t rely on single-target, short-term relief plans, which are often mistaken for the cure itself. When the immediate pain fades, everyone gets complacent and the complexity continues to deepen. Deeper change is needed, and that’s what we can achieve with parallel incrementalism: launching many incrementally useful actions at once so their combined effect gets at the root problems. A good example is the reconstruction effort after World War II, when massive coordination across multiple fronts helped humanity build safeguards for the future. This was parallel incrementalism in action – multiple, coordinated actions, taken by agencies and government sectors both big and small, providing short-term and long-term solutions.

In a complex environment, you often can’t predict which interventions will pay off, so you have to make room for uncertainty and try enough options in parallel to create a few big wins. Overfishing is another example. This complex problem involves fishermen’s livelihoods, communities that rely on fish as a main protein, international borders, and ecological collapse. No single “sustainable restaurant” trend or price tweak can hold all that. It requires political, legal, economic, educational, cultural, and international moves happening together. For any of these plans to work, we need to restore the balance between knowledge and belief by elevating truth-seeking.

This means stronger education, real fact-checking, serious journalism, protected research, and government structures that consult experts regularly and widely. Knowledge is power because it behaves like an immune system against supermemes. Finally, on a personal level, there are steps we can take to improve our capacity for insight and for generating the solutions the future demands. Brain fitness is a booming field of research, and it suggests that targeted cognitive training can measurably improve memory and processing, sometimes by dramatic margins. The conditions shown to coax insight into the open include reducing stress, getting regular rest, exercising and moving in varied environments, and working in small collaborative groups – often in the four-to-nine-person range – where diverse expertise can converge without collapsing into chaos. Education and reading matter here too, because insight draws on raw material; the richer the mental inventory, the greater the chance of novel connections.

So there are reasons to stay optimistic. Yes, complexity is rising, but the mind can be trained, supported, and organized for insight. With a tactical approach today and better cognition tomorrow, humanity has a real shot at breaking the old cycle and stepping into a stable future.

Final summary

In this Blink to The Watchman’s Rattle by Rebecca D. Costa, you’ve learned that civilizations don’t collapse because people stop trying or stop caring. They collapse when the complexity they create outpaces the brain’s inherited ability to manage it. The historical cycle is that innovation fuels growth, complexity accumulates, gridlock sets in, evidence becomes harder to access, beliefs rush in to fill the gap, and progress quietly stalls long before anything visibly falls apart. Modern society is not exempt. The warning signs are familiar in the five supermemes: endless opposition without alternatives, blame replacing understanding, shaky correlations masquerading as proof, siloed institutions that don’t talk, and an economic lens so dominant it can veto solutions that would genuinely help humanity.

But there is a way forward. When we recognize the limitations of the supermemes, human insight can cut through complexity when analysis fails. Insights can be supported through better collaboration, better learning, and conditions that allow the mind to relax and connect ideas freely. In the short term, survival depends on attacking complex problems from many angles at once through parallel incrementalism. In the long term, it depends on rebuilding respect for knowledge and deliberately strengthening the brain’s capacity to think. Okay, that’s it for this Blink.

We hope you enjoyed it. If you can, please take the time to leave us a rating – we always appreciate your feedback. See you in the next Blink.


About the Author

Rebecca D. Costa is an American sociobiologist, futurist, and author. As the founder and leader of a Silicon Valley technology marketing firm, she worked alongside major tech innovators and had a front-row seat to how quickly modern complexity accelerates. She pivoted away from that career to spend years researching and writing, and her work has positioned her as a prominent speaker and commentator on fast adaptation in high-complexity environments.