
Shared Wisdom

Cultural Evolution in the Age of AI

By Alex Pentland

Category: Science | Reading Duration: 21 min | Rating: 4.5/5 (44 ratings)


About the Book

Shared Wisdom (2025) explores the relationship between technological progress and human nature – and reveals how we can use innovations like AI in ways that benefit everyone. Drawing lessons from historic technological milestones and their impact, it shows how, used wisely, these innovations can amplify our collective intelligence and help us solve pressing global challenges.

Who Should Read This?

  • Entrepreneurs seeking to develop AI and digital platforms that enhance rather than harm human society
  • Social scientists interested in understanding how cultural evolution and technological change intersect
  • Concerned citizens wanting to understand how to shape technology's role in addressing climate change and social challenges

What’s in it for me? Learn how to harness new technology for human flourishing.

We live in troubled times. Our institutions seem paralyzed in the face of climate change, pandemics, and social breakdown.

On top of that, new technologies like Artificial Intelligence are set to disrupt society further–and not for the better, many believe. But what if there’s hidden potential for positive change in all this upheaval? History shows that transformative cultural inventions—from the first cities to scientific peer review—have repeatedly accelerated human progress during critical moments. Whether you're a technologist, policymaker, or concerned citizen, this Blink will provide you with a science-backed framework for renovating our broken institutions and creating technologies that genuinely serve humanity's needs.

Chapter 1: Humanity’s hidden superpower

What if humanity's greatest invention wasn't the wheel, but the campfire conversation? It turns out that the stories we share with each other serve a far bigger purpose than casual socializing. Think about it: you make most decisions by drawing on stories. When choosing a restaurant, starting a new habit, or solving a problem at work, you rarely consult scientific papers.

Instead, you rely on what worked for your friend, what you've heard people say, what seems like common sense in your community. Human civilization has always advanced this way. Stories are our species' superpower. They transmit hard-won knowledge across time and space. When our ancestors sat around campfires sharing tales of where they found food or which paths proved dangerous, they were building what we might call "community intelligence." The best stories, the ones that proved useful again and again, became shared wisdom that guided group decisions and enabled coordinated action.

Australian Aboriginal communities, for instance, preserved survival knowledge for over 7,000 years through songlines—rhythmic stories encoding where to find water, which plants were edible, and how to navigate the land. Different communities developed different stories, creating the cultural diversity that allowed at least some groups to survive pandemics, climate shifts, and disasters. Today, stories matter just as much. In a study of how 1,700 professional financial advisors made investment decisions, something surprising emerged. The experts who relied purely on data and mathematical models initially performed slightly better than their peers. But when Brexit hit, these isolated experts suffered devastating losses, while those who stayed connected to their professional community, informally swapping knowledge, navigated the crisis successfully.

Whether you’re a hunter-gatherer or a financial advisor, you advance in life by combining what your community has learned with your personal experience to make decisions. Researchers found this approach achieves "minimum-regret" decision-making—making the best possible choice given the available information. For millennia, humans have survived through collective wisdom built on shared experiences. And this fundamental truth should reshape how we think about AI.

Chapter 2: Communal wisdom through technology

Stories are the basis of human advancement. That’s why, throughout history, humanity's greatest leaps forward have followed technologies that improved how we share stories. Three major innovations have transformed our trajectory in this regard. First, regular gatherings created spaces for communities to exchange experiences daily.

While it may seem intuitive now, hunter-gatherer bands sitting around campfires at day's end to swap information wasn't instinctive behavior but a learned social practice. Yet it dramatically accelerated how quickly useful knowledge spread through a group. Second, the formation of cities, beginning as early as 11,000 BCE, allowed different communities to cross-pollinate ideas. Having large populations live in closer proximity—despite the costs of disease and resource strain—enabled stories to flow between different cultural groups. This cross-community exchange proved so valuable that populations grew faster than ever before. Third, scientific societies formalized story-sharing through documentation and citation, creating powerful incentives to build on each other's work.

Beginning around 1500, scholars started sending letters to trade observations and theories. Organizations like the British Royal Society recorded these exchanges and established the tradition of referencing others' contributions. The result was society-wide networks of stories that continually evolved as people with similar interests added their experiences. Most successful older AI technologies—navigation apps, flight booking systems, web search—are also built on advancing our story-sharing abilities. Rather than replacing human judgement, they connect humans to other humans' knowledge. Today’s AI innovations hold the same potential.

Generative models like ChatGPT are storytelling technologies—they aggregate human narratives and patterns. It’s true that if designed poorly, they could replace the social learning processes that build trust, shared understanding, and collective action. But if designed with a greater purpose in mind, they could dramatically improve the flow of information to benefit everyone. They could, for instance, connect us with what our specific communities are doing, help us find people in similar situations, and support collective decision-making without manipulating our choices. Essentially, AI could radically enhance the story-sharing networks that have kept humanity alive through countless existential threats.

Chapter 3: Truly representative democracy

Think about how your country is governed. If you live in a Western democracy, you probably elect representatives who make decisions on your behalf, right? Even though that seems progressive on the surface, here's an uncomfortable truth about these systems: they still function remarkably like the elite-run systems of ancient Rome. We've just gotten better at hiding it.

When Japan opened up to the West in the late 1870s, the Japanese rulers were dismayed that the Europeans considered their feudal system to be “barbarian”. They also recognized that British and American democracies were essentially run by wealthy families too. So they devised an elegant workaround by simply changing the vocabulary of their governance, turning “fiefdoms” into “companies” and “serfs” into “lifetime employees”. Suddenly, Japan could present itself as a modern democracy that fit right in with the Western world–without actually changing who was in charge. Such unfair concentration of power in the hands of the elite is still typical for modern representative systems. And it’s extremely corrosive to genuine community intelligence.

When decisions flow through a small group of elites, we lose the diverse perspectives that lead to wise choices. The World Bank estimates that concentration of power costs the global economy between five and twenty percent of GDP every year, in particular through regulations that favor decision-makers over everyone else. But there's a different model that's been quietly transforming the world since the seventeenth century: consensus networks. The first scientific communities were essentially consensus networks. When scientists started citing each other's work, success no longer depended on central authorities approving your ideas. Instead, it came from peers in your field finding your contributions valuable enough to reference.

This same pattern still drives progress in science, technology patents, and legal precedent. For instance, informal networks of doctors sharing infant care practices have reduced death rates tenfold. Open-source software communities built much of our digital infrastructure through voluntary collaboration. These networks organize around shared interests rather than geography or hierarchy.

Contributors earn recognition by producing work that generates community agreement, not by climbing a bureaucratic ladder or using their wealth to buy influence. Now imagine if your government were run this way: as a truly representative, democratic network built on sharing all the diverse perspectives in a community. This may seem like a utopian pipe dream, but the digital tools to build consensus networks for governance already exist. We just need to use them right.

Chapter 4: Considering risks over rewards

There’s plenty of room to imagine how new technologies like modern AI could solve society's problems. But if we actually want to make it happen, it’s just as important to look at how past technologies have exacerbated our problems. Since the 1950s, we've seen three major AI booms, each following a troubling pattern. The technology worked brilliantly for specific tasks—optimizing delivery routes, automating loan decisions, personalizing search results.

Yet beneath these successes, these systems slowly began to erode the social fabric. The first AI systems, which emerged in the 1960s, used logic and mathematics to solve well-defined problems like calculating optimal delivery routes. Companies saved fortunes, and the success inspired grander ambitions. The Soviet Union adopted a Nobel Prize-winning system to manage their entire economy through optimal resource allocation. But the experiment ended disastrously, contributing to the nation's eventual collapse. The models simply couldn't capture how human societies actually function and evolve.

In the 1980s, new American banking systems promised to make lending fairer and cheaper by replacing human loan officers with consistent automated decisions. The technology delivered on efficiency. But it also destroyed over half of the community financial institutions across the country within a few decades. Neighborhood credit unions vanished. Local bankers who understood your family's situation disappeared. What remained were ATMs and call centers staffed by people following rigid scripts, unable to account for the messy realities of individual lives.

Then, in the 2000s, the internet explosion generated massive amounts of user data. Companies learned to predict behavior by comparing people to others with similar patterns. This collaborative filtering approach built empires like Google and Facebook. But it did so by creating echo chambers where people only saw content similar to what had already grabbed attention from others like them.

Even worse, the algorithms amplified voices unusually skilled at capturing attention–dominant figures who accumulate outsized audiences through rich-get-richer dynamics. Today's AI differs fundamentally from earlier waves because it doesn't just optimize processes or make predictions–it generates stories and images that directly shape what people believe. Whether it strengthens or weakens human communities depends entirely on how we choose to design and deploy it.

Chapter 5: Can AI save democracy?

AI bears as many risks as it promises rewards. And as we’ve seen, our democracies and bureaucracies weren't built for the digital age. They're stuck using ancient organizational models to solve twenty-first-century problems—and it shows. Trust in government has collapsed from 80 percent in 1960 to under 20 percent today.

Communities feel powerless, managed by distant professionals who don't understand local needs. But even the most rigid institutions can transform when they embrace distributed decision-making. Take the US Army in Iraq in 2003. Facing nimble guerrilla forces, the traditional command-and-control structure simply couldn't keep up. General Stanley McChrystal made a radical move: he created "teams of teams", empowering frontline units to make their own decisions based on the commander's overall intent, not rigid orders. Using digital networks to share information instantly, the Army became genuinely agile.

The same principle could be applied to empower our democratic institutions and truly harness the rewards of new technologies. The key is shifting power back to communities—the people actually affected by decisions. Taiwan already uses a digital platform called Polis for policy debates. Citizens post ideas and vote on others' comments, but they can't reply directly, which prevents arguments from spiraling. The system visualizes where consensus exists and where divides remain. What makes it work is Taiwan's already strong neighborhood traditions, which create real incentives for finding common ground.

This approach aligns with Nobel laureate Elinor Ostrom's research on managing shared resources. Her findings are clear: successful governance requires three things. Communities must govern themselves with clear boundaries about what they control. They need transparent data to monitor outcomes and hold leaders accountable. And there must be genuine alignment between what people contribute and what they receive. Today's centralized representative governance systems violate all three principles.

But digital tools make decentralization not just possible but cheaper than central administration. Citizen Stack, a data commons developed by an Indian non-profit, proves this at scale. It allows over a billion people to control their own data through community trusts, much like credit unions manage money. They authorize specific uses while maintaining ownership, successfully competing with big tech. The path forward isn't more centralization. It's returning power to communities while using AI and digital networks to help them coordinate, learn from each other, and hold themselves accountable.

Chapter 6: Not-so-new rules and regulations

When world leaders gathered at the Club de Madrid two years ago, a fascinating split emerged. Senior EU regulators pushed for rigid, top-down control of AI–new laws and bureaucracies tailored to each technology type. But the former presidents and prime ministers saw things differently. They advocated for something simpler: audit trails and liability rules.

Let companies innovate, but hold them accountable when things go wrong. This disagreement captures our fundamental challenge with AI regulation. We're tempted to predict and prevent every possible harm before it happens. But AI evolves too rapidly and comes in too many forms for this approach to work. The instinct for control can override practical wisdom. There's a better path forward, and it already exists in unexpected places.

Think about how the internet functions globally, how international financial standards prevent fraud, or how the World Health Organization coordinates pandemic responses. These consensus organizations lack enforcement power, yet they work remarkably well. Countries cooperate because it serves their own interests. These same frameworks can guide AI governance. We need three key elements: transparent data showing what AI systems actually do, continuous auditing to catch problems early, and compatible rules across borders so companies don't simply relocate to the least regulated countries. The liability approach isn't novel.

It's how we've regulated physical products since the 1960s. If your toaster causes a fire, the manufacturer is responsible. Why should AI be different? Require companies to maintain detailed records of AI decisions, create regulators who can audit those records, and let civil law assign responsibility when harm occurs. The history of the internet actually offers a useful cautionary tale here. Because the internet's origins lie in military and academic networks where everyone was trusted, security wasn't built in from the start.

As a result, we're now struggling to retrofit protections onto a global system. With AI, we have a chance to get it right from the beginning–not through prediction and control, but through accountability and adaptation. That way, we can build a system that actually serves the greater purpose of human connection and story-sharing rather than destroying it.

Final summary

The main takeaway of this Blink to Shared Wisdom by Alex Pentland is that new technologies should be designed by communities, for communities. Human progress has always depended on sharing stories and collective wisdom–from campfire conversations to scientific networks. Modern AI threatens to disrupt this social learning by isolating individuals, much like previous technology booms eroded community institutions.

However, if designed thoughtfully, AI could dramatically enhance our ability to share knowledge and coordinate action. The solution lies in decentralizing power back to communities while using digital tools to strengthen connection rather than replace it. Instead of rigid top-down regulation, we need transparent audit trails and accountability measures that hold companies responsible for harm. By understanding that collective intelligence drives human flourishing, we can engineer a tech revolution that genuinely serves humanity's needs.

Okay, that’s it for this Blink. We hope you enjoyed it. If you can, please take the time to leave us a rating – we always appreciate your feedback. See you in the next Blink.


About the Author

Alex Pentland is a computer scientist at MIT and Stanford who pioneered the field of computational social science, using big data and AI to understand human behavior and society. Named one of the world's most powerful data scientists, he has been highly influential in shaping global technology policy, including helping develop the EU's GDPR privacy regulations and the UN's Sustainable Development Goals. He is the author of Social Physics and Honest Signals, both of which present groundbreaking research on human interaction and organizational behavior.