
The Means of Prediction

How AI Really Works (and Who Benefits)

By Maximilian Kasy

Category: Technology & the Future | Reading Duration: 17 min | Rating: 4.1/5 (63 ratings)


About the Book

The Means of Prediction (2025) reveals that artificial intelligence isn’t an inevitable force beyond our control, but rather a tool shaped by whoever holds its reins. Through accessible explanations of how AI functions, it exposes the real tension at play: not humans versus machines, but a struggle between those who own the technology and everyone else. It argues that we need democratic oversight of this technology now, before those in power can cement their advantage.

Who Should Read This?

  • Tech professionals seeking to understand AI’s social and economic implications
  • Business leaders curious about the dynamics shaping AI’s development and deployment
  • Students in economics, computer science, or social sciences exploring the intersection of technology and society

What’s in it for me? See through the hype and understand who really controls AI.

The alarm bells around artificial intelligence grow louder by the day. Killer robots, mass unemployment, systems we can’t comprehend or stop – the narrative suggests we’re hurtling toward an inevitable technological reckoning. But what if the entire framing misses the point? This Blink strips away the mystique surrounding AI to reveal something far more tangible: a technology whose direction depends entirely on who owns the resources to build it.

You’ll learn how AI actually functions beneath the jargon, and why the critical questions aren’t about machine capabilities but about human power structures. Discover the essential resources that determine AI’s trajectory, and why bringing this technology under public oversight matters more than any technical breakthrough. These insights offer both clarity and agency in our AI-saturated world.

Chapter 1: Who really controls the machines?

Think of films like The Matrix, Terminator, 2001: A Space Odyssey – for decades, Hollywood has told us the story of humanity locked in an epic battle against superintelligent machines. The recent rise of AI has fanned fears that we’re inching closer to this scenario by the second. Tech leaders amplify these fears, with figures like Elon Musk warning that AI poses existential threats comparable to nuclear war. But this dramatic framing misses the actual conflict.

The real struggle isn’t between humans and machines, but between different groups of people with competing interests. Consider how AI actually works. Every AI system pursues a specific target that someone chose. Someone has to program which outcomes the system should prioritize. The crucial question isn’t whether the algorithm works properly – it’s who gets to define its objectives. Right now, those who control the resources to build AI – like data, computing power, and expertise – set its goals.

And in our capitalist system, that typically means objectives serve profit maximization rather than the public good. For instance, social media algorithms maximize ad clicks, even when promoting outrage harms society. Hiring algorithms screen out candidates with caregiving responsibilities because it boosts short-term productivity. In both cases, the AI systems are working exactly as intended for those who benefit. So worrying about machines controlling us misses the point – what we should be worrying about is who controls the machines. Understanding this can actually be empowering.
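The point is easy to see in code. Here’s a minimal sketch – the feed items and scores are invented for illustration, not taken from the book – showing that the exact same ranking code promotes completely different content depending on which objective its owner plugs in.

```python
# Hypothetical feed items: (title, predicted click rate, predicted effect
# on user well-being). All numbers are invented for illustration.
posts = [
    ("Outrage headline",  0.31, -0.6),
    ("Local news update", 0.12,  0.4),
    ("Friend's photos",   0.08,  0.7),
]

def rank(items, objective):
    """Sort feed items by whatever objective the system's owner chose."""
    return sorted(items, key=objective, reverse=True)

# Objective chosen by an ad-funded owner: maximize clicks.
print([title for title, *_ in rank(posts, lambda p: p[1])])
# -> ['Outrage headline', 'Local news update', "Friend's photos"]

# Objective users might choose instead: maximize well-being.
print([title for title, *_ in rank(posts, lambda p: p[2])])
# -> ["Friend's photos", 'Local news update', 'Outrage headline']
```

Nothing about the algorithm changed between the two calls; only the objective did. That’s the whole argument in miniature.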

Despite what the tech industry claims, AI’s basic principles aren’t impossibly complex. Once you grasp how it works, you can participate in decisions about how it should be used. So let’s dig in.

Chapter 2: From delicious pizza to deadly predictions

Why do certain ads follow you around the internet? And how does Netflix consistently recommend shows you actually want to watch? It starts with understanding how AI actually works – and why who controls it shapes everything that follows. AI boils down to a straightforward concept: systems that make automated choices to maximize whatever goal they’ve been given.

These systems need four building blocks: possible actions to select from, a target to optimize toward, some baseline knowledge, and datasets for training. Machine learning makes this possible by detecting patterns in massive amounts of information rather than following hand-coded instructions.

At the heart of every AI system lies a crucial tension: the trade-off between exploration and exploitation. The principle becomes clear through a simple example – deciding what to eat for dinner. Say you love pizza but have never tried Ethiopian food. Sticking with pizza means using your existing knowledge to get a guaranteed tasty meal. Trying Ethiopian food means taking a risk, but potentially discovering a better option for the future. AI systems navigate this tension constantly, following the principle of “optimism in the face of uncertainty” – giving uncertain options the benefit of the doubt when they might prove superior.

Facebook and Google generate billions from this exact framework. Your online experience consists of continuous micro-experiments: their systems probe which advertisements capture your attention, testing novel approaches while also deploying proven tactics. Whether selecting dinner or driving ad revenue, the underlying mathematics is identical.
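Here’s what that looks like as code. This is a minimal sketch of the upper-confidence-bound (UCB) rule, one standard way to implement optimism in the face of uncertainty; the two dinner options and their payout rates are hypothetical.

```python
import math
import random

# Two options with true (but unknown to the agent) payout rates.
true_rates = {"pizza": 0.70, "ethiopian": 0.80}
counts = {arm: 0 for arm in true_rates}     # times each option was tried
rewards = {arm: 0.0 for arm in true_rates}  # total reward per option

def optimistic_score(arm, t):
    if counts[arm] == 0:
        return float("inf")  # never tried: maximal benefit of the doubt
    mean = rewards[arm] / counts[arm]                  # exploitation term
    bonus = math.sqrt(2 * math.log(t) / counts[arm])   # exploration term
    return mean + bonus  # optimistic estimate; shrinks as evidence grows

for t in range(1, 2001):
    choice = max(true_rates, key=lambda arm: optimistic_score(arm, t))
    counts[choice] += 1
    rewards[choice] += 1.0 if random.random() < true_rates[choice] else 0.0

print(counts)  # most choices converge on the genuinely better option
```

Early on, the uncertainty bonus pushes the agent to explore; as evidence accumulates, the bonus shrinks and the agent exploits what it has learned.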

What these systems optimize toward shapes everything, though. During the Gaza conflict, an AI system called “Lavender” predicted Hamas affiliations while accepting a 10 percent error rate. Another system, called “Where’s Daddy?”, predicted when targets would be home to increase bombing effectiveness, often with families present.

This wasn’t an algorithm run amok – it was a system working exactly as it was programmed. This tragic example demonstrates why democratic debate shouldn’t lose itself in technical minutiae about algorithm selection. We need to scrutinize what objectives these systems pursue – because given sufficient training data, various algorithmic approaches reach similar conclusions. The crucial questions concern what we’re directing AI to predict, and who holds that decision-making power.

Chapter 3: The hidden human cost of AI

When you ask ChatGPT a question, you’re not just interacting with clever software. You’re tapping into the compressed labor of millions of invisible workers, the harvested knowledge of countless creators, and computing power dominated by a handful of corporations. To understand power in AI, follow the resources. Four essential ingredients determine control: data, computing infrastructure, technical expertise, and energy.

Whoever commands these building blocks decides what AI prioritizes – and currently, that’s corporate profits rather than human flourishing. Consider the hidden workforce behind AI breakthroughs. A famous dataset that revolutionized image recognition required human workers to categorize more than 14 million pictures. These workers, scattered across the Global South and toiling for minimal pay on Amazon’s platform, provided the foundation for billion-dollar AI systems. Yet their crucial role remains largely invisible.

This extraction pattern echoes a dark historical parallel. Before the Industrial Revolution, English peasants cultivated shared agricultural lands. Powerful landowners seized these collectively used spaces and transformed them into private sheep pastures for profit. Dispossessed farmers had no choice but to seek work in the new factories. Today’s tech giants are conducting a similar appropriation – not of physical land, but of digital commons. Content from Wikipedia, open-source programming repositories, and creative works gets harvested and repackaged into commercial AI products.

The concentration is extreme. One company commands nine-tenths of the market for specialized AI processors. Meanwhile, the energy demands of AI infrastructure already account for a substantial portion of global electricity use and are projected to surge dramatically.

Who can counter this consolidation? Individual software engineers often face constraints from profit-driven employers, and many are themselves trapped in an “every person for themselves” mindset, looking only for the next profitable gig. Real potential emerges from collective action by organized workers, engaged citizens, democratic institutions, and thoughtful regulations that frame AI as a social question rather than merely a technical puzzle.

Chapter 4: Why democratic technology matters

Consider Amazon’s warehouse algorithms, which maximize delivery speed and worker output. The US Department of Labor found epidemic rates of back injuries from constant lifting, awkward movements, and relentless pacing. The algorithms worked exactly as designed, serving their owners’ interests while ignoring public welfare.

Now imagine if the Amazon workers themselves controlled the warehouse systems. They would likely optimize them for safety, injury prevention, and humane working conditions. Same technology, completely different outcomes depending on who holds power. This underscores that the systems reflect the values of whoever designs them. Workers, consumers, and citizens have different priorities than the current corporate owners of these systems. Purely technical solutions miss this fundamental point: you can’t engineer your way out of a power imbalance.

The challenge extends to privacy in surprising ways. Even perfect data protection for individuals fails, because machine learning finds patterns across people. When your neighbor shares health data with an insurance company, algorithms can predict your risks and deny you coverage without ever touching your personal information. Individual rights can’t solve collective problems.
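A tiny sketch makes the mechanism concrete. Everything here is invented; the point is only that a model fit on consenting customers can still score a person who shared nothing, using traits anyone can observe.

```python
# Hypothetical records volunteered by OTHER customers:
# (age, smoker, filed_major_claim)
shared_records = [
    (34, 0, 0), (29, 0, 0), (41, 0, 0),
    (58, 1, 1), (61, 1, 1), (55, 0, 1),
]

def predicted_risk(age, smoker, k=3):
    """Average outcome of the k most similar people who DID share data."""
    distance = lambda rec: abs(rec[0] - age) + 30 * abs(rec[1] - smoker)
    nearest = sorted(shared_records, key=distance)[:k]
    return sum(outcome for _, _, outcome in nearest) / k

# You never consented to anything, but the insurer can still score you.
print(predicted_risk(age=59, smoker=1))  # -> 1.0 (high predicted risk)
```

Your consent was never needed, because statistically similar people already supplied the training data.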

Three practical strategies can help shift the balance. First up? Regulation. For instance, governments could impose fees on harmful data collection practices, or provide financial incentives for beneficial ones, making companies account for the social costs of their AI systems when calculating profits. Second, there are communal data trusts – organizations where people collectively pool their information and vote on how it gets used. You might contribute your health data to enable medical research while explicitly prohibiting insurance companies from accessing it for coverage decisions. Third, we need better laws on transparency – requiring companies to explain what their AI systems actually optimize for.

This doesn’t mean understanding complex neural networks, but it does mean disclosing the basic objectives: Is your university admissions algorithm maximizing predicted test scores or social mobility? Is your social media feed designed to maximize engagement time or quality of democratic discourse? We can only have meaningful debates about whether these goals serve the public good when they become visible.

Chapter 5: Ancient Athens and modern AI

When it comes to the challenge of regulating AI, there might be surprising insight hidden in the same democratic principle that lets you serve on a jury. And it all goes back thousands of years. Despite AI’s dizzying pace of change, the fundamental challenges aren’t new. We’re still grappling with timeless questions: How do we learn? How should we act? What makes a fair society?

The problem is that right now, whoever controls the data, computing power, and technical expertise gets to decide AI’s goals. Asking engineers to “be ethical” won’t solve this – not when they’re working for profit-driven corporations. Real change requires redistributing power to the people affected by AI decisions. But how?

Don’t limit your imagination to voting booths and elected representatives. Consider sortition – the random selection of citizens to make decisions, just like jury duty. Ancient Athens used this system for political governance. Today, this could mean randomly selecting a demographically representative group, giving them compensated time off work, and having them deliberate on AI policy questions. No professional politicians required.
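At its core, sortition is just a stratified lottery, which makes it easy to sketch. Here’s a minimal example – the citizen pool, age groups, and quota shares are all hypothetical.

```python
import random

# A hypothetical citizen pool with one demographic attribute.
population = [
    {"id": i, "age_group": random.choice(["18-34", "35-64", "65+"])}
    for i in range(10_000)
]

panel_size = 12
target_shares = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}  # census-style quotas

panel = []
for group, share in target_shares.items():
    eligible = [p for p in population if p["age_group"] == group]
    seats = round(panel_size * share)             # seats this stratum gets
    panel.extend(random.sample(eligible, seats))  # the lottery itself

print(sorted(p["age_group"] for p in panel))
# -> four seats for 18-34, six for 35-64, two for 65+
```

Real citizens’ assemblies stratify on more dimensions – gender, region, education – but the principle is the same: random selection within quotas, so the panel mirrors the population.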

Another option is liquid democracy, invented by Lewis Carroll (yes, the Alice in Wonderland author) in the 1800s. Everyone gets a vote but can delegate it to trusted experts on specific topics – and take it back anytime. European pirate parties already use software like LiquidFeedback to make this work online.

Scandinavian countries have pioneered yet another model for governing new technologies in the workplace. In the 1970s, they introduced the concept of “participatory design,” giving workers real power over workplace technology decisions through strong unions and legal codetermination rights.

It’s clear that token participation won’t cut it. If democratic input gets overruled whenever it conflicts with powerful interests, people will stop participating.

Real democratic control means genuine power-sharing, not just consultation. Whether it’s through sortition, liquid democracy, or workplace democracy, the path forward requires building institutions where those impacted by AI actually control it. The tools exist, from ancient Athens to modern Scandinavian workplace politics. Now we need the collective will to use them.

Final summary

In this Blink to The Means of Prediction by Maximilian Kasy, you explored the real social and political question at the heart of AI. The real danger isn’t sentient machines – it’s who gets to steer the technology. Every AI system is built to chase goals chosen by its creators. And right now those goals are set by the people with the most resources, typically in ways that favor corporate priorities over public ones.

At its core, AI is simple: systems making automated choices to maximize whatever targets they’re given. The real challenge is shifting power away from a handful of tech giants and toward the communities affected by these systems. That means strategic regulation, collective data trusts, transparency rules, and democratic approaches like sortition or participatory design in the workplace. Instead of fearing machine intelligence, we should focus on democratic oversight of the systems shaping our lives. AI isn’t inherently good or bad – it’s the power behind it that determines the impact.

Okay, that’s it for this Blink. We hope you enjoyed it. If you can, please take the time to leave us a rating – we always appreciate your feedback. See you soon.


About the Author

Maximilian Kasy is a professor of economics at the University of Oxford, where he coordinates the Machine Learning and Economics Group and teaches courses on the foundations of machine learning. He has a PhD from UC Berkeley and previously held appointments at UCLA and Harvard University. His research focuses on the social foundations of statistics and machine learning, algorithmic decision-making, economic inequality, and taxation policy.