By the HC editorial staff
02/06/2024
In tech journalism and AI discourse, the word "hallucination" has become a go-to metaphor for describing instances when large language models (LLMs) like ChatGPT produce false or misleading outputs. But what if this metaphor fundamentally misunderstands how these systems operate?
A provocative new essay titled "ChatGPT is Bullshit," published in Ethics and Information Technology (2024), offers a conceptual reframing rooted in philosophical analysis. Authors Michael Townsen Hicks, James Humphries, and Joe Slater of the University of Glasgow argue that the more accurate descriptor for LLM behavior is not "hallucination" but "bullshit," in the technical sense coined by philosopher Harry Frankfurt.
Their central claim: ChatGPT doesn't hallucinate—it doesn't care about truth at all.
The metaphor of hallucination implies an accidental misrepresentation, as though the system is attempting to mirror reality and occasionally fails. But Hicks and colleagues argue that this metaphor is misleading. Unlike humans, LLMs don't have beliefs, perceptions, or an internal model of truth—they're not trying to tell the truth in the first place.
Drawing on Frankfurt's philosophical work On Bullshit (2005), the authors contend that LLM outputs are more akin to communication that is indifferent to truth. This reframing is more than academic nitpicking—it has serious implications for how society understands, interacts with, and regulates AI systems.
At a technical level, LLMs work by statistical prediction, not by reasoning or by looking up facts. They generate each next token based on probabilistic patterns in their training data, with no mechanism for checking factuality and no communicative intent.
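To make that mechanism concrete, here is a minimal sketch in Python of next-token prediction, using a toy bigram model over a ten-word corpus. This illustrates only the statistical principle the authors point to, not the paper's analysis or any production LLM architecture; the corpus and names are invented for the example.

    import random
    from collections import Counter, defaultdict

    # Toy next-token predictor: a bigram model built from a tiny corpus.
    # Real LLMs use deep neural networks trained on billions of tokens,
    # but the principle is the same: generate what is statistically
    # plausible, with no mechanism for checking what is true.
    corpus = "the cat sat on the mat the cat saw the dog".split()

    # Count how often each word follows each other word.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_token(prev):
        """Sample the next word in proportion to how often it followed prev."""
        counts = follows[prev]
        if not counts:  # dead end: this word never had a successor in the corpus
            return random.choice(corpus)
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]

    # Generate text one token at a time. Nothing here consults facts;
    # each word is chosen only because it is probable in context.
    word = "the"
    output = [word]
    for _ in range(7):
        word = next_token(word)
        output.append(word)
    print(" ".join(output))  # e.g. "the cat sat on the mat the cat"

Scaled up by many orders of magnitude, the same logic explains why a model can emit a perfectly formatted citation to a paper that does not exist: the form is probable even when the referent is not real.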
The essay proposes two categories, adapted from Frankfurt: soft bullshit, content produced with indifference to truth but no intent to deceive, and hard bullshit, content that additionally misleads the audience about the speaker's aims by presenting itself as if it were concerned with truth.
An example of soft bullshit might be ChatGPT making up a scientific citation. It doesn't "think" the citation is real—it just knows what a citation should look like in context. Hard bullshit emerges when the system's output simulates authority—like giving confident-sounding legal advice—without any underlying comprehension.
The authors argue that using metaphors like hallucination, lie, or even confabulation obscures the real problem. These terms falsely anthropomorphize the system, making it easier for the public to either trust it too much ("It's like a person that just gets confused sometimes") or fear it unnecessarily ("It thinks it's right even when it's wrong").
"Metaphors matter," the authors write. "They shape how we design, use, and regulate technologies."
By calling ChatGPT a "bullshit machine," we better capture its core function: producing coherent text that appears meaningful without any commitment to accuracy or reality.
If LLMs produce bullshit rather than mistakes, then the approach to regulation and public communication must shift.
The authors recommend explicitly incorporating the term "bullshit" into AI governance and documentation—an idea that may sound radical but has growing support among ethicists and technologists.
Whether you're a student relying on ChatGPT for a research summary, a journalist checking facts, or a policymaker drafting AI regulation, understanding the true nature of LLMs is essential.
LLMs don't have malicious intent—but their outputs can cause harm through false authority or misplaced trust. Misdescribing their behavior leads to inadequate safeguards and misplaced expectations.
"These systems are not truth-seekers," says co-author Hicks. "They're coherence-seekers."
Understanding that difference could determine whether AI systems are used safely—or dangerously—in critical sectors like law, healthcare, and education.
The essay concludes with a call for linguistic clarity and ethical design. Just as we wouldn't expect a washing machine to write a poem, we shouldn't expect an LLM to produce factual knowledge. It can simulate intelligence and insight, but it doesn't possess them.
The authors urge stakeholders—developers, educators, journalists, and legislators—to reject seductive but inaccurate metaphors. Instead, embrace terms that accurately reflect the epistemic limitations of AI.
Bullshit may be an ugly word—but it's an honest one.
Reference: Hicks MT, Humphries J, Slater J. ChatGPT is bullshit. Ethics Inf Technol [Internet]. 2024;26(38). Available from: https://doi.org/10.1007/s10676-024-09775-5