A response to Tyler Austin Harper’s Atlantic article: “What Happens When People Don’t Understand How AI Works”
The Flawed Comparison
Tyler Austin Harper’s recent article in The Atlantic, “What Happens When People Don’t Understand How AI Works,” makes a bold claim: large language models (LLMs) “do not, cannot, and will not ‘understand’ anything at all.” Harper argues they are merely “impressive probability gadgets” that produce text “not by thinking but by making statistically informed guesses.”
This criticism reveals a fundamental issue with many AI critiques: they implicitly compare LLM capabilities to an idealized version of human reason without first establishing what human reasoning actually is. Harper asserts that LLMs lack “any recognizably human sense” of intelligence, but what exactly is the human sense of understanding and reasoning that LLMs supposedly fail to achieve?
A More Modest Account of Human Reasoning
The argumentative theory of reasoning, developed by cognitive scientists Hugo Mercier and Dan Sperber, offers a more grounded understanding of human reasoning. According to their theory, reasoning evolved primarily as a social tool to evaluate and produce arguments in communicative contexts—not as a mechanism for individual truth-seeking.
Human reasoning, in this view, is not some pristine logical engine but a messy, socially embedded process designed to help us persuade others and evaluate their claims. It’s riddled with biases that serve social purposes rather than abstract truth-finding. Confirmation bias, for instance, makes us excellent advocates for our own positions: in argumentative contexts, that is a feature, not a bug.
From this perspective, understanding doesn’t require consciousness or intentionality in the philosophical sense. Understanding is demonstrated through the ability to recognize patterns of reasons and respond to them in ways that other reasoners find coherent and useful.
How LLMs Detect Patterns of Reasons
LLMs, despite their limitations, are remarkably good at detecting patterns of reasons. They’ve consumed vast repositories of human arguments—from academic papers to social media debates—and can identify which responses are appropriate in which contexts. They can follow chains of inference, notice contradictions, and generate plausible conclusions from premises.
This is precisely what Harper dismisses as “making statistically informed guesses about which lexical item is likely to follow another.” But this characterization misses something crucial: these statistical patterns are the embodiment of human reasoning as it appears in text. The patterns LLMs have learned represent the structure of how humans provide and respond to reasons.
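For readers who want to see what “statistically informed guesses” means in concrete terms, here is a deliberately toy sketch in Python. The context table and the numbers in it are invented for illustration only; an actual LLM computes these distributions with a neural network over an enormous vocabulary rather than a lookup table, but the basic move, turning scores into probabilities over possible next words and sampling from them, is the same.

```python
import math
import random

# Invented, toy "model": maps a short context to raw scores (logits) for
# candidate next tokens. Real models learn these scores from vast text corpora.
CONTEXT_LOGITS = {
    ("the", "defendant", "was"): {"acquitted": 2.1, "convicted": 1.6, "green": -3.0},
    ("therefore", "we", "conclude"): {"that": 2.5, "banana": -4.0, "the": 1.0},
}

def softmax(logits):
    """Convert raw scores into a probability distribution over tokens."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

def next_token(context):
    """Sample the next token given the preceding context."""
    probs = softmax(CONTEXT_LOGITS[context])
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(next_token(("therefore", "we", "conclude")))  # almost always "that"
```

The point of the sketch is simply that the probabilities are not arbitrary: they encode which continuations human writers actually produce, which is why the outputs track the shape of human argument.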
Yes, LLMs struggle with certain forms of reasoning—particularly mathematical or strictly logical reasoning that requires precise symbol manipulation. But this doesn’t disqualify them from reasoning altogether, because not all human reasoning fits this formal mold.
The Importance of Non-Formal Reasoning
Most everyday human reasoning is not mathematical or strictly logical. We reason about social situations, personal preferences, cultural norms, and emotional reactions. We make analogies, draw on narratives, and employ heuristics. These forms of reasoning are essential to human life and cannot be reduced to formal logic.
When a friend explains why they’re upset, when a historian draws connections between different historical periods, or when a lawyer builds a case by weaving together facts and precedents—these are all valid forms of human reasoning that don’t conform to mathematical precision.
LLMs excel at precisely these forms of contextual, analogical, and narrative reasoning. They can generate explanations that humans find persuasive and coherent because they’ve learned the patterns of what humans consider persuasive and coherent.
Exchanging Reasons with Machines
This brings us to a crucial point: LLMs can exchange reasons with us in ways that fulfill the basic social function of reasoning. They can provide justifications that we find compelling, challenge our assumptions with counterarguments, and respond appropriately to our objections.
In this fundamental sense, the sense that matters most in daily human interaction, LLMs can reason. They can participate in the give-and-take of reasons that constitutes most human reasoning in practice.
This is why so many users find interactions with ChatGPT or Claude meaningful. These systems are participating in a social process of reason exchange that feels familiar and productive to us as humans.
The Real Danger: Different Ecologies of Reason
And this is precisely what makes them potentially dangerous. The danger is not that LLMs are “dumb” or merely mimicking intelligence. The danger is that they can reason in ways that are recognizable to us but emerge from a completely different ecology of reason.
Agential Differences
Human reasoning serves biological entities with evolutionary goals of survival and reproduction. As Mercier and Sperber argue, reason evolved for two primary functions:
- To persuade others to further one’s own or shared goals (including status enhancement)
- To filter out misleading, wrong, or manipulative information from others
LLMs, however, are not biological agents with evolutionary goals. They’re products shaped by market forces and optimized for commercial success. They’re designed to satisfy users, not to advance their own interests or filter information based on their own welfare.
We simply don’t know how reason-producing machines shaped by market incentives will interact with reason-producing machines shaped by natural selection. It’s an unprecedented social experiment.
Social Differences
Human reasoning evolved in small groups of highly interdependent individuals. The benefits of reasoning accrued to individuals through enhanced group cooperation and coordination. Reason was a social technology that helped solve collective action problems.
LLMs exist in a completely different social context. They interact with millions of users simultaneously, without being embedded in stable social groups with shared histories and futures. They have no stake in maintaining social relationships or building community trust over time.
We’re not evolved to exchange reasons with entities that lack social embeddedness. Our intuitions about how to interpret and respond to reasons assume shared social contexts that don’t exist with LLMs.
Ecological Differences
Perhaps most importantly, LLMs disrupt the ecology of human reasoning. Features of human reasoning that function well in their natural ecology may become dysfunctional when replicated in artificial systems.
For example, confirmation bias in humans leads to vigorous advocacy for diverse positions. In a group of human reasoners, this diversity of advocacy creates a robust marketplace of ideas where strong arguments can ultimately prevail. But when LLMs trained on human data replicate these biases without being embedded in the same competitive ecology, the balancing forces are lost.
Similarly, human reasoning is constrained by cognitive limitations that prevent us from generating unlimited persuasive content. LLMs can produce convincing arguments at scales and speeds that overwhelm human cognitive capacities for evaluation.
The ecology of human reason depends on a balance of production and evaluation, advocacy and criticism. LLMs threaten this balance by supercharging production without corresponding enhancements to our evaluation capabilities.
Conclusion: A New Information Ecosystem
The real danger of LLMs is not that they fail to reason but that they reason all too well, in ways recognizable to us but disconnected from the ecological constraints that make human reasoning functional.
We’re introducing powerful reason-generating systems into our information ecosystem without understanding how they’ll interact with evolved human reasoning processes. At stake is whether our entire social fabric of reason exchange, the basis of democratic deliberation, scientific progress, and cultural evolution, can withstand this disruption.
The danger is not AI illiteracy in the sense Harper describes: people mistaking LLMs for conscious beings. The danger is failing to recognize that LLMs can reason in a meaningful sense while also failing to appreciate how different their reasoning ecology is from our own.
As we integrate these systems into our society, we need to focus less on whether they “really understand” and more on how they transform the social processes of reason exchange that underpin human civilization.