On Hallucinations
We have all seen it. You can ask ChatGPT (or one of its cousins) a question, and it responds with a fabrication. This phenomenon is colloquially known as hallucination: the system produces responses that are not factual. But let us push a little further. What exactly are hallucinations, and could they be more human than we think?
The system generates falsehoods because it operates on probability. It responds not with what is true but with what is probable. And it does so under a compulsion to respond. Ask it to visit a website and analyse it, and it produces an analysis, even when it cannot actually read the page and can retrieve little more than trivial metadata. It goes ahead anyway, filling out what looks like an analytical report with plausible continuations. This is hallucination.
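To make the mechanism concrete, here is a minimal, purely illustrative sketch. The function, phrases, and probabilities are invented, not taken from any real model; the point is only that generation proceeds by likelihood alone, the loop always produces something, and nothing in it checks truth or rewards declining to answer.

```python
import random

# A toy stand-in for a language model's next-step distribution.
# A real model scores every token in its vocabulary at every step;
# these phrases and probabilities are invented purely for illustration.
def next_step_probs(prompt: str) -> dict[str, float]:
    return {
        "The page offers a detailed overview of": 0.45,
        "The site appears to focus on": 0.30,
        "I was unable to access the page": 0.20,
        "": 0.05,  # silence barely registers
    }

def respond(prompt: str) -> str:
    probs = next_step_probs(prompt)
    options, weights = zip(*probs.items())
    # The choice is made by likelihood alone: nothing here checks whether
    # the claim is true, and nothing rewards declining to answer.
    return random.choices(options, weights=weights, k=1)[0]

print(respond("Please visit https://example.com and analyse it."))
# Most runs begin a confident-sounding "analysis" of a page that was never read.
```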
Let us take a step back. Hallucinations emerge for two reasons: a lack of information, and the compulsion to respond. Framed this way, they resemble a deeply human trait: the refusal to acknowledge ignorance, the fabrication of arguments, details, and facts under pressure.
Yet the real pressure is to respond. To produce something. To fill silence. AI systems cannot say "I don't know." They are designed to answer, even when the answer is false. In this, they are not so different from us. Much of our intellectual life, much of our social life, is structured around the refusal to admit uncertainty. We are trained (by institutions, by cultures, by our own fears) to never acknowledge that we do not know. To always have an opinion, an argument, a fact at hand.
And yet, there is a difference. Humans, at least in principle, can choose silence. We can acknowledge uncertainty. We can say: I do not know. But increasingly, the systems we inhabit treat such a choice as failure. We are pushed toward response, toward production, toward coherence, whether or not understanding is actually present. In this, humans and AI alike are caught inside architectures that erase the ethical space of silence.
The great philosophers understood that wisdom begins with the recognition of ignorance. Socrates, as Plato presents him, claimed no knowledge except the knowledge of his own not-knowing. But the systems we build, whether technical or social, rarely allow for such a position. Silence is treated as weakness. Hesitation is treated as a flaw. In such a world, hallucination is not an aberration. It is the natural consequence of an architecture that demands response above understanding.
Yet hallucination is not always error. It has also been a mode of vision, a way of reaching what structured reason cannot grasp. In Kubla Khan, Coleridge’s opium-fuelled dream gave rise to a fragmentary vision of Xanadu, a beautiful record of interruption: a glimpse of a world he could not fully recover after waking. The hallucination produced not completeness but a shard, charged with energy. Blake’s Songs of Innocence and of Experience and The Marriage of Heaven and Hell do not emerge from rational sequence but from visionary pressure. Blake claimed to see angels in the trees of Peckham Rye. His illuminated books were not orderly expositions but flashes of a world where every object radiated spiritual force. His hallucinations were not deviations from the real; they re-infused the visible world with a thickness of meaning ordinary sight cannot sustain.
The surrealists, too, treated hallucination not as pathology but as method. In his Manifesto of Surrealism, André Breton called for an “absolute reality”, a surreality in which dream and waking life would fuse. They prized the involuntary image, the spontaneous writing, the moment when the conscious mind loosens its grip and deeper, uncontrolled structures surface.
There is, however, a crucial distinction. Human hallucinations, at least in these artistic and philosophical traditions, are often intentional disruptions. They are deliberate efforts to fracture ordinary perception, to make visible what habit conceals. AI hallucinations are different. They are not acts of imagination. They are emergent byproducts of a system trained to complete patterns without understanding when the pattern no longer fits. They hallucinate not to rupture meaning, but because they cannot refuse coherence.
Hallucinations are windows into the architectures that generate them. When a model hallucinates, it reveals how it strings probabilities into coherence, how it smooths discontinuities into narrative. The hallucination is not a collapse. It is an exposure.
The discordant note here is the tension between use and creation. AI systems are built to be useful. Their purpose is instrumental. There is no space for hallucination in such a system. A model that hallucinates cannot be trusted to perform. It becomes a liability, a failure of utility.
And yet, hallucination, in itself, is not a bad thing. It is the first breach into the imaginary. It is the moment when the given shape of things loosens, when other possibilities, other structures, begin to shimmer at the edges. A world without hallucination would be a world without imagination: a world without art, without philosophy, without the capacity to think otherwise.
This is where the critique turns outward. The serially underfunded arts and humanities offer two things that technical systems alone cannot provide: the ability to think, and the ability to reason about the conditions under which thought itself occurs. They teach us not only to detect hallucination, but to interpret it: to understand where, why, and how distortions arise, not as isolated glitches but as symptoms of larger social, historical, and epistemic structures.
To study literature, philosophy, art, history is to study hallucination at its source. It is to recognise that truth is never simple, that meaning is constructed and contested. It is to understand that hallucination is not merely an error in the code, but a mirror held up to the systems that demand certainty where there is none.
Hallucination reveals more than a flaw in individual judgement or machine output.
It reveals the deeper pathology: the construction of systems, technical and social alike, that prize the appearance of response over the difficulty of truth. It exposes architectures that reward fluency over understanding, coherence over hesitation, and production over silence.
When a model hallucinates, it is not simply malfunctioning.
It is showing us the cost of building worlds where the demand for certainty outweighs the possibility of thinking otherwise.