The Universal Human Tendency to Hallucinate

We all hallucinate. Not clinically, but cognitively. The human brain is a pattern-recognition machine that abhors uncertainty. When faced with incomplete information or pure noise, it doesn’t simply report “insufficient data” – it generates patterns.

This isn’t a bug but a feature. Our ancestors who quickly “saw” predators in shadows survived more often than those who hesitated. Our brains evolved to make rapid inferences, even at the cost of occasional errors.

What is “human hallucination” in this sense? Simply: the generation of a response when adequate information is lacking. We see faces in clouds, hear messages in songs played backward, and attribute meaning to random events. This tendency intensifies when we face pressure to provide answers regardless of available information.

The Eerie Similarities Between Human and LLM Hallucination

Large Language Models (LLMs) have been criticized for “hallucinating” – generating plausible-sounding but fabricated content. This phenomenon mirrors human cognition in remarkable ways.

These twin systems – one biological, one digital – share the same fundamental tendencies. Both match patterns drawn from vast stores of prior experience, whether evolutionary history or computational training sets. They generate responses based on statistical relationships rather than genuine understanding, producing answers even when no valid answer exists. And most tellingly, both hallucinate most frequently at the boundaries of their knowledge, where familiar patterns begin to dissolve into noise.

The key difference? We scrutinize AI hallucinations while accepting many of our own as “just thinking.”

AI Hallucination Demystified

Consider a simple image classifier trained only on cats and dogs. When shown random noise, it must classify the image as either a cat or dog – that’s all it knows. It detects illusory patterns in noise that match its learned representations.
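A toy sketch makes the point concrete. The model below is an illustrative stand-in (random weights, made-up image size), not a real trained classifier; what matters is that the softmax over {cat, dog} has nowhere else to put its probability mass:

```python
# A minimal sketch (illustrative only): a toy two-class "classifier"
# showing why pure noise still receives a confident-looking label.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical stand-in for a trained cat/dog model: a fixed random linear map.
W = rng.normal(size=(2, 32 * 32))        # 2 classes, 32x32 grayscale "images"
noise_image = rng.normal(size=32 * 32)   # pure noise, neither cat nor dog

probs = softmax(W @ noise_image)
label = ["cat", "dog"][int(probs.argmax())]
print(f"noise classified as '{label}' with p={probs.max():.2f}")
# The softmax must allocate all probability mass to {cat, dog};
# "none of the above" is simply not in this model's vocabulary.
```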

LLMs operate similarly, just in higher dimensions. When asked questions outside their training or ones with no factual answer, they generate text that statistically resembles an answer based on their training patterns. They transform noise into what appears to be knowledge.

The core issue: when any system is forced to generate answers using insufficient information, hallucination inevitably follows.

The Question-Answer Imperative

Both humans and AI hallucinate most under the “question-answer imperative” – the requirement to provide answers regardless of information adequacy.

Humans face this imperative in numerous contexts. When authority figures demand immediate explanations, we fabricate plausible narratives. In educational settings that penalize uncertainty, students learn to guess rather than admit ignorance. Social situations often make acknowledging knowledge gaps uncomfortable, while high-stakes environments create pressure to appear more confident than our information warrants.

AI systems, meanwhile, are typically designed to always respond rather than recognize knowledge boundaries. Their training rewards plausible-sounding outputs over principled silence. They learn to generate text that mimics understanding, even when that understanding is impossible given their training or the nature of the question.

Hallucination emerges precisely when a question has no answer within the available framework or when the information is insufficient for a reliable conclusion. Yet the imperative to answer remains, and both human and artificial minds comply – generating illusions of knowledge where none can exist.

The Dangerous Allure of Pseudo-Rationality: Case Studies

Case Study 1: Existential AI Risk Calculations

Current discourse around catastrophic AI risk employs risk assessment frameworks (probability × harm) that face fundamental challenges:

Probability estimates for unprecedented, complex events cannot be frequency-based. They remain subjective assessments in mathematical clothing, with shaky philosophical foundations.

When we quantify the probability of AI-driven extinction, we hallucinate – generating answers that mimic knowledge but exceed our information bounds.
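A back-of-the-envelope sketch shows how little the familiar probability × harm formula constrains such estimates. All numbers below are stipulated purely for illustration, not drawn from any actual assessment:

```python
# A minimal sketch (illustrative numbers only): expected-harm arithmetic
# in which the subjective probability input dominates the result.
harm = 8e9  # lives at stake in an extinction scenario (stipulated, not estimated)

# Three equally unfalsifiable "expert" probabilities for the same event:
subjective_p = {"optimist": 1e-6, "moderate": 1e-2, "pessimist": 0.5}

for label, p in subjective_p.items():
    expected_harm = p * harm   # the standard probability x harm formula
    print(f"{label:>9}: p={p:<8g} expected harm = {expected_harm:,.0f} lives")

# The outputs span roughly six orders of magnitude, driven entirely by a number
# that no frequency data can pin down: precision in form, not in substance.
```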

Case Study 2: Systemic Risk Assessment for the Digital Society

Similarly, recent regulatory frameworks such as the EU’s Digital Services Act demand systemic risk assessments from large online platforms and the AI systems they deploy. These assessments borrow methodologies from domains with well-understood causal mechanisms and attempt to apply them to vastly more complex sociotechnical systems.

The problem isn’t that we shouldn’t try to understand risks – it’s that forcing traditional risk frameworks onto phenomena they weren’t designed for leads to hallucinatory confidence. We generate numbers, charts, and formal assessments that provide the illusion of understanding without its substance.

As my colleagues and I argue in our forthcoming paper in Philosophy and Technology, these approaches risk producing sophisticated hallucinations rather than actionable insights, an outcome that regulatory efforts should be designed to prevent.

Philosophical Wisdom: Ancient Remedies for Modern Hallucinations

The solution to our hallucination problem lies not in novel methodologies but in ancient wisdom. Three philosophical giants, spanning nearly a millennium, offer particularly relevant insights.

Socrates (470-399 BCE) gave us the foundation: true wisdom begins with acknowledging ignorance. “I know that I know nothing” wasn’t mere humility; it was an epistemological breakthrough. By recognizing knowledge boundaries, Socrates established genuine inquiry over fabricated answers.

Aristotle (384-322 BCE) provided crucial wisdom about different domains requiring different standards of precision. In the Nicomachean Ethics, he famously stated: “It is the mark of an educated mind to expect that amount of exactness in each kind which the nature of the particular subject admits.” Aristotle understood that applying mathematical precision to ethics or politics constitutes a category error – a sophisticated form of hallucination.

Kant (1724-1804) completed this philosophical framework with his revolutionary insight about reason’s inherent limitations. His “Critique of Pure Reason” demonstrated that human knowledge has structural boundaries that cannot be transcended. Kant showed that certain questions, while seemingly answerable, actually lie beyond what rational inquiry can determine. When we force ourselves or AI systems to answer questions that exceed available evidence or applicable frameworks, we’re demanding the impossible – precisely the condition that generates hallucination.

The Way Forward: Philosophy as Cure

For AI systems, these philosophical insights translate to expressing uncertainty proportional to evidence (Socratic), recognizing domains where precision is inappropriate (Aristotelian), and understanding structural limitations of knowledge (Kantian).
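As a rough illustration of the Socratic point, consider a wrapper that abstains when its confidence distribution is too flat. The entropy measure and threshold below are arbitrary illustrative choices, not an established method:

```python
# A minimal sketch of a "Socratic" wrapper: abstain when evidence is thin.
import math

def socratic_answer(probs: dict, max_entropy_bits: float = 0.6) -> str:
    """Return the best label, or 'I don't know' when the distribution is too flat."""
    entropy = -sum(p * math.log2(p) for p in probs.values() if p > 0)
    if entropy > max_entropy_bits:
        return "I don't know"          # principled silence beats a plausible guess
    return max(probs, key=probs.get)   # answer only when the evidence is decisive

print(socratic_answer({"cat": 0.97, "dog": 0.03}))   # -> 'cat'
print(socratic_answer({"cat": 0.55, "dog": 0.45}))   # -> 'I don't know'
```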

For humans addressing complex societal questions, they mean embracing “I don’t know” as a valid response, matching methods to domains rather than forcing universal frameworks, and recognizing when our questions exceed rational determination.

The parallels between human and AI hallucination reveal a shared vulnerability and a shared solution. Both emerge from the gap between what can be known and what we demand to know.

Our most advanced AI systems shouldn’t be those that answer any question, but those that recognize when “I don’t know” is the right answer – truly Socratic machines. Similarly, our most sophisticated approach to AI governance shouldn’t attempt to extract signal from noise where fundamental uncertainty exists, but should match methods to domains as Aristotle advised.

By embracing these philosophical principles, we gain a powerful framework for addressing hallucination across human and artificial cognition. The remedy for generating knowledge from noise begins with the courage to acknowledge the limits of what can be known, the wisdom to recognize which questions deserve precision, and the humility to treat “I don’t know” as a valid and often optimal response.

Perhaps the most important insight is that hallucination isn’t primarily a technical problem but a philosophical one – and its solution lies not in more sophisticated pattern-recognition algorithms, but in rediscovering when to stop the pattern-recognition process altogether. In both human and artificial intelligence, the ability to recognize noise as noise, rather than forcing it into meaningful patterns, may be the highest form of wisdom.
