A research paper posted to arXiv on May 10, 2026, introduces and examines "LLMorphism," a newly defined cognitive bias emerging alongside widespread AI development. The paper argues that as large language models produce increasingly sophisticated linguistic output, people may mistakenly come to believe that human cognition operates the way these AI systems do.
LLMorphism Defined as Reverse Inference Problem
The core definition presented in the paper identifies LLMorphism as "the biased belief that human cognition works like a large language model." The author frames this as a reverse inference error: because LLMs produce human-like language, observers wrongly conclude that humans process information the way these systems do. The paper emphasizes that "similarity at the level of linguistic output does not imply similarity in cognitive architecture."
Two Mechanisms Enable the Bias's Spread
The research identifies two primary pathways through which LLMorphism proliferates:
- Analogical transfer: people project characteristics of LLM operation onto human thinking processes
- Metaphorical availability: LLM-related terminology becomes culturally dominant for discussing human cognition
These mechanisms work together to normalize the comparison between artificial and human intelligence in ways that may fundamentally misrepresent how people actually think.
Implications Span Multiple Domains
The 16-page paper addresses consequences across work, education, responsibility, healthcare, communication, creativity, and human dignity. The author highlights a critical imbalance in contemporary AI discourse: "the issue is not only whether we are attributing too much mind to machines, but also whether we are beginning to attribute too little mind to humans."
The paper carefully distinguishes LLMorphism from related concepts including mechanomorphism, anthropomorphism, computationalism, dehumanization, and predictive processing theories.
The paper gained traction on Hacker News, where it drew 50 points and 29 comments, suggesting growing interest in the philosophical implications of AI development.
Key Takeaways
- LLMorphism is defined as the cognitive bias of believing human cognition operates like large language models
- The bias emerges from a "reverse inference" problem where linguistic output similarity is mistaken for architectural similarity
- Two spread mechanisms enable the bias: analogical transfer and metaphorical availability of LLM terminology
- The paper warns against attributing too little mind to humans while debating whether we attribute too much to machines
- The paper distinguishes LLMorphism from related concepts such as anthropomorphism and mechanomorphism