Metacognition’s Law: Thinking About Thinking in the Age of AI

By Anthea Roberts

These days, I’m spending a lot of time thinking about how generative AI is reshaping the world of cognitive work. Mike Maples, a well-respected venture capitalist, recently captured this sea change in a way that resonates with me. He said we’ve moved through three distinct eras: mass computing, mass connectivity, and now—mass cognition. Each shift has fundamentally changed the way we interact with technology, and today, it’s time to rethink how we think.

This journey started with Moore’s Law, which promised exponential growth in computing power—faster chips, smaller devices, and the tantalizing hope that processing power alone could solve humanity's hardest problems. Then we transitioned into the age of Metcalfe’s Law, which taught us that the value of technology lies in connection: a network’s worth grows roughly with the square of its users. Networks became the new frontier, their power amplified by every new user who joined—whether it was a Facebook friend request or a LinkedIn connection.

But today, with artificial intelligence as our new partner, we’re facing a different kind of question: How do we harness mass cognition—the combined thinking power of humans and machines—effectively? If Moore’s Law defined the age of computing and Metcalfe’s Law defined the age of connectivity, we need a new guiding principle for today’s AI-driven world. This era calls for something more reflective—Metacognition’s Law: the ability to think about thinking.

Introducing Metacognition’s Law

Metacognition’s Law recognizes that success in the AI era will not be determined by who has the most data or the biggest network, but by who can best understand and direct how computers think, alone and in collaboration with humans. It’s not just about accumulating information; it’s about understanding how to reflect on and wield that information deliberately.

In many ways, we’re now in a transition from what the cognitive psychologist Daniel Kahneman would describe as System 1 thinking—fast, instinctive, and automatic, like recognizing a face in a crowd—to System 2 thinking—slow, deliberate, and effortful, like solving a complex math problem.

Recent developments in generative AI reflect this shift vividly. OpenAI’s latest model, known as o1 or “Strawberry,” marks a leap toward System 2 capabilities. Traditionally, pre-trained models relied on next-token prediction—a method in which the model generates the most likely next word (or token) based on the words that came before it. This is System 1-style thinking that churns out responses quickly.

But Strawberry brings something new to the table: it stops and thinks before it responds. This “inference-time compute” allows the model to perform more deliberate, System 2-style reasoning, effectively giving it the ability to engage in a kind of computational reflection. It does this by breaking down complex problems and reasoning through them step by step—a critical example of metacognition that AI researchers call “chain-of-thought” reasoning.
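
For readers who like to see the mechanics, the contrast can be sketched in a few lines of code. The example below is a rough illustration that assumes the OpenAI Python client; the model name and prompts are placeholders, and it coaxes step-by-step reasoning through ordinary prompting rather than using o1’s built-in deliberation. The point is simply the difference between asking for an instant answer and asking the model to think first.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than "
    "the ball. How much does the ball cost?"
)

# System 1-style: ask for an immediate answer.
fast = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": QUESTION + " Answer with just the number."}],
)

# System 2-style: ask the model to reason step by step before it answers.
deliberate = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": QUESTION + " Think through the problem step by step, "
                              "then state your final answer on a new line.",
    }],
)

print("Fast answer:      ", fast.choices[0].message.content)
print("Deliberate answer:", deliberate.choices[0].message.content)
```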

This evolution in generative AI is reminiscent of AlphaGo’s historic victory against Lee Sedol in 2016. AlphaGo wasn’t just mimicking human moves from a massive dataset; it paused, simulated countless possible future scenarios, and chose the optimal path. The more time it had to think, the better it performed—much like a human making a difficult decision. Just as we often arrive at better outcomes when we take the time to deliberate, weigh options, and consider long-term consequences, AI models benefit from extended reasoning to optimize their responses.

We are also witnessing a fundamental shift in how AI scales. In the past, progress was driven by increasing the data and compute used during pre-training—a scaling law that fits comfortably under Moore’s framework. But Strawberry’s breakthrough suggests a new scaling law: the more compute allocated at inference time, the better the model can reason. Instead of focusing solely on making models faster and more efficient, we’re now seeing the benefits of making them think longer and harder.

Intelligence demands not just knowing things, but knowing how to deliberate on what’s known. In an age of abundant knowledge, metacognition—the ability to think about thinking—becomes crucial.

Metacognition and Dragonfly Thinking

I’ve always been drawn to metacognition, even before I knew the term. At school, I was that kid who loved math and physics, but then at university, I mixed things up with philosophy and law. Add a healthy dose of debating on top, and you’ve got a pretty eclectic toolkit. Math and physics taught me to break down problems into manageable pieces. But philosophy, law, and debating taught me to handle the trickier questions that didn’t have one right answer and to see controversial issues from multiple sides.

This metacognitive approach underpins everything we do at Dragonfly Thinking. We’ve always believed in the power of structured thinking tools to help navigate complex issues. For example, when applying LLMs to contested issues, we deliberately use frameworks like Risk, Reward, and Resilience to structure how the AI thinks through the problem. This kind of cognitive architecture allows us to guide AI toward more nuanced outputs, rather than just relying on surface-level, instinctive responses.
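
To give a flavour of what I mean by a cognitive architecture, here is a hypothetical sketch of how a Risk, Reward, and Resilience framework might be expressed as a prompt template. It is illustrative only, not our actual implementation, and the issue it names is just an example.

```python
# A hypothetical template showing how a Risk, Reward, and Resilience framework
# could structure an LLM's analysis of a contested issue. Illustrative only.
FRAMEWORK_PROMPT = """You are analysing the following contested issue: {issue}

Work through it in three clearly separated sections:
1. Risk: what could go wrong, for whom, and how likely is it?
2. Reward: what are the potential benefits, and who captures them?
3. Resilience: what choices hold up well across a range of plausible futures?

Flag any points where reasonable people are likely to disagree."""

prompt = FRAMEWORK_PROMPT.format(
    issue="Using generative AI to help draft government policy"  # example issue
)
print(prompt)
```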

Through the tools we are developing, we’re effectively teaching AI to adopt some of the reflective practices that characterize good human thinking. One method we use is Multi-Lens Analysis, which considers issues from multiple viewpoints, recognizing that different stakeholders bring different values, priorities, and experiences to the table. Whether we’re analyzing globalization, sustainability, or the impact of new technologies, this approach helps us see how the same issue can appear vastly different depending on the lens through which it is viewed.
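
Again purely as an illustration, the core of a Multi-Lens Analysis can be approximated by asking the same question once per perspective and comparing the answers. The sketch below assumes the OpenAI Python client, and the issue, lenses, and model name are placeholders rather than anything from our production tools.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ISSUE = "Reshoring semiconductor manufacturing"  # example issue
LENSES = [  # placeholder perspectives
    "an economic lens",
    "a national-security lens",
    "a labour and workforce lens",
    "an environmental lens",
]

# Ask the same question once per lens, then compare how the answers differ.
views = {}
for lens in LENSES:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{
            "role": "user",
            "content": (
                f"Analyse the issue '{ISSUE}' strictly through {lens}. "
                "What would someone holding this perspective prioritise, "
                "and what would worry them most?"
            ),
        }],
    )
    views[lens] = response.choices[0].message.content

for lens, view in views.items():
    print(f"--- {lens} ---\n{view}\n")
```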

What we’re doing also addresses one of the weaknesses of Strawberry’s approach. In fields like math and coding, where answers can be definitively determined, the o1 model can generate multiple possible chains of thought, verify which ones are correct, and use that feedback to improve its reasoning. But in more ambiguous domains like the humanities or social sciences, this approach doesn’t work. How do you “score” the value of a poem or determine the correctness of an ethical argument? These questions don’t have simple right or wrong answers.
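
The underlying technique, often described as best-of-n sampling or self-consistency, is easy to sketch in a domain with a checkable answer. The toy example below is my illustration of the general idea, not OpenAI’s actual procedure, and it makes the dependence plain: the verification step only works because the final answer can be checked.

```python
import re
from collections import Counter

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROBLEM = "What is 17 * 24?"
CORRECT_ANSWER = 408  # in math, the final answer can be checked mechanically


def final_number(text: str):
    """Pull the last integer out of a reasoning chain, if there is one."""
    numbers = re.findall(r"-?\d+", text.replace(",", ""))
    return int(numbers[-1]) if numbers else None


# Sample several independent chains of thought at a non-zero temperature.
answers = []
for _ in range(5):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        temperature=1.0,
        messages=[{
            "role": "user",
            "content": PROBLEM + " Reason step by step, then give the final number.",
        }],
    )
    answers.append(final_number(response.choices[0].message.content))

# A verifier can score each chain only because the domain has a checkable answer;
# majority voting (self-consistency) is a weaker fallback that needs no ground truth.
verified = [a for a in answers if a == CORRECT_ANSWER]
consensus = Counter(a for a in answers if a is not None).most_common(1)
print("Verified answers:", verified, "| Consensus:", consensus)
```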

In these fields, we need more nuanced metacognitive approaches that recognize the ambiguity of problem framing, the power of perspectives, and the subjectivity involved in evaluating outcomes. Engineers, used to deterministic systems, often struggle with the indeterminate nature of LLMs. But those comfortable with human ambiguity—managers, teachers, philosophers—are finding themselves adept at guiding these models. It turns out the AI whisperers of tomorrow may well be those most at ease with the complexities of human thought.

The Future of Cognitive Work

Where does this leave us? In an era in which the value of human cognition no longer lies in being faster or more efficient than a machine, our edge comes from engaging deeply and reflectively with these tools. So, are you ready to move from “thinking” to “thinking about thinking”? In the age of AI, that might just become a defining advantage.
