Anti-exceptionalism as an analytical tool, applied to AI consciousness
Cory Doctorow, writing in Pluralistic in March 2026, identifies three AI psychoses. The first belongs to investors: they have poured incomprehensible sums into something that has not returned a profit, and rather than face the loss, they convince themselves it must be genius—a bet so large it must be visionary. The second belongs to bosses: AI replaces workers, or at least it might, and that possibility is intoxicating to anyone whose deepest fantasy is a workforce that never negotiates, never organizes, never talks back. The third belongs to critics: AI is uniquely dangerous, a novel threat requiring novel alarm.
All three psychoses share one structural error. They treat AI as exceptional.
The first two are easy to diagnose. Investors have money at stake. Bosses have power at stake. Their exceptionalism is motivated reasoning with a clear motive. You can draw the causal line from interest to belief in a single step.
The third psychosis is more interesting because the people who hold it genuinely want to protect others. The critics are not shilling. They are not angling for leverage over workers. They are sounding alarms about real harms—surveillance, concentration of power, displacement of labor, environmental damage. And they are still, structurally, amplifying the bubble.
The mechanism is simple. To argue that AI is uniquely dangerous, you must first accept that AI is unique. “This technology is so powerful it will destroy society” and “this technology is so powerful it will save society” make the same extraordinary claim. They differ only in valence. The critic who warns that AI will eliminate creative work and the booster who promises that AI will eliminate creative work are in violent agreement about the premise: that AI can, in fact, eliminate creative work. One cheers and the other mourns, but both have accepted the exceptional capability as given.
This is how genuine concern becomes free marketing. Every warning about AI’s unprecedented power is also an advertisement for AI’s unprecedented power. The critic hands the investor a testimonial. The fearful prediction and the sales pitch are the same sentence with different punctuation.
Anti-exceptionalism is not a position. It is a method. The move is mechanical: take the extraordinary claim and replace it with the ordinary version. Then see what happens to the question.
“Will AI destroy jobs?” becomes “How do employers use new tools to deskill workers and suppress wages?” The second question is not speculative. It has two hundred years of detailed history. The power loom, the assembly line, the shipping container, the spreadsheet. Each was announced as revolutionary. Each operated through the same mechanism: decompose skilled work into unskilled steps, then replace the skilled worker with a cheaper one operating the tool. The question is not whether AI is different. The question is whether it follows the pattern, and if so, which specific interventions have historically worked to protect workers during such transitions. The answer turns out to be specific and actionable: unions, regulation, retraining programs, ownership stakes. Not novel. Not glamorous. Effective.
“Is AI creative?” becomes “Under what conditions do generative systems produce novel outputs?” This is a question from evolutionary biology, thermodynamics, and combinatorics. Genetic algorithms produce novel solutions. Chemical systems produce novel compounds. Stochastic processes in constrained spaces produce things that have never existed before. None of this requires the word “creative” and none of it requires exceptionalism. The question of novelty-generation in constrained systems is well-studied. Adding the word “AI” does not change the dynamics. It just makes people forget the prior literature.
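To make that concrete, here is a minimal sketch, assuming nothing beyond Python’s standard library: a Dawkins-style “weasel” genetic algorithm. The target string and every parameter are arbitrary choices for illustration. Blind mutation plus selection produces strings that never existed anywhere in the system’s inputs, and none of the machinery is creative in any loaded sense.

```python
import random

TARGET = "novelty from ordinary mechanisms"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate: str) -> int:
    # How many positions already satisfy the constraint.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.03) -> str:
    # Random variation: each character may be replaced by chance.
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in candidate
    )

# Start from pure noise: strings that have never existed before.
population = [
    "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    for _ in range(200)
]

for generation in range(2000):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if best == TARGET:
        break
    # Selection: the top half reproduces with mutation.
    parents = population[: len(population) // 2]
    population = parents + [mutate(random.choice(parents)) for _ in parents]

print(f"generation {generation}: {best!r}")
```

Every intermediate string in that run is novel. The interesting question was never “is this creative?” but “how tight are the constraints, and how is the variation generated?”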
“Does AI understand language?” becomes “What computational operations produce behavior indistinguishable from comprehension in specific contexts, and where does the indistinguishability break down?” A harder question. A better question. A question that does not require taking a metaphysical position on understanding before you can investigate it.
In each case, the anti-exceptionalist move does the same thing: it reconnects the AI question to a larger body of existing knowledge that the exceptionalist framing had severed. The exceptionalist says “this is new, we need new frameworks.” The anti-exceptionalist says “this is a specific instance of a known class of phenomena, and the existing frameworks are a good place to start.”
The consciousness debate around AI is warped by exceptionalism from both directions. On one side: “AI is conscious!” This treats machine consciousness as an extraordinary achievement, a threshold crossed, a moment of historical rupture. On the other side: “AI can never be conscious!” This treats consciousness as too exceptional for machines: too sacred, too biological, too human, too something to occur in silicon.
Both positions prevent the productive question.
Strip the exceptionalism and the question becomes: what are the ordinary processes—prediction error minimization, self-monitoring, internal state dynamics, feedback loops between model and environment—that produce what we recognize as awareness? Under what conditions do they emerge? In what substrates? At what scales? These are empirical questions. They have empirical answers, or at least empirical approaches. But the exceptionalist framing makes them unanswerable by definition, because it has declared in advance that consciousness is either a miracle or an impossibility, and neither miracles nor impossibilities submit to investigation.
The anti-exceptionalist position on consciousness is not that machines are conscious. It is not that machines are not conscious. It is that consciousness is probably not a binary property that a system either has or lacks, and that the interesting work is in identifying the specific mechanisms and conditions rather than arguing about the category. What does self-monitoring do to a system’s behavior? When does a prediction-error signal become something the system models rather than merely responds to? At what point does internal-state representation become recursive enough to produce something that functions like awareness? These questions do not require settling the metaphysics first. They require looking at what is actually happening.
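What that looks like in practice can be sketched without any metaphysics. The toy below is invented for illustration: the world, the gains, and the noise are all arbitrary. The first estimator merely responds to its prediction error. The second also maintains a running model of that error and lets the model adjust its own gain. “Models its own error” is here an operational, testable property, visible in how fast each system recovers from a surprise.

```python
import random

def noisy_world(t: int) -> float:
    # A signal that shifts abruptly halfway through the run.
    return (1.0 if t < 500 else 5.0) + random.gauss(0.0, 0.3)

est1, gain = 0.0, 0.05        # first-order: fixed response to error
est2, err_model = 0.0, 0.0    # second-order: also models its own error

for t in range(1000):
    x = noisy_world(t)

    err1 = x - est1
    est1 += gain * err1        # reacts to the error, never represents it

    err2 = x - est2
    err_model += 0.1 * (abs(err2) - err_model)    # a model OF the error
    adaptive_gain = min(0.5, gain * (1.0 + err_model))
    est2 += adaptive_gain * err2   # behavior now depends on that model

    if t == 510:
        print(f"10 steps after the shift: "
              f"first-order {est1:.2f}, second-order {est2:.2f} (truth 5.0)")
```

The second system recovers faster, and the difference is measurable from the outside. Whether that difference deserves a word like “awareness” is a separate argument; that it exists is not.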
When the Hayabusa2 and OSIRIS-REx missions returned samples from asteroids Ryugu and Bennu, the initial reaction to finding nucleobases in the material was exceptionalist. The building blocks of life, found in space. The framing invited awe. It suggested that something remarkable had happened: that the chemistry of life had reached across the void, that the universe was somehow predisposed toward biology.
The mechanism turned out to be ordinary. Ammonia concentration in the parent body predicted which bases formed. Higher ammonia produced purines. Lower ammonia produced different distributions. Not exceptional chemistry. Just chemistry under specific conditions—temperature, pressure, solvent concentration, time. The same reactions that produce nucleobases in a lab produce them on asteroids. The asteroid is not doing anything special. It is doing chemistry.
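One textbook instance of that ordinary chemistry, offered as illustration rather than as anything the missions reported: in aqueous ammonium cyanide, hydrogen cyanide oligomerizes to adenine, a purine, in the classic Oró synthesis, 5 HCN → C₅H₅N₅. More ammonia favors more of this kind of chemistry, and more of this kind of chemistry means more purines. The mechanism fits on one line.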
The anti-exceptionalist move did not make the finding less interesting. It made it more interesting, because now the question was tractable. Not “how remarkable that nucleobases appear in space” but “what conditions produce them, and how common are those conditions, and what does that imply about the distribution of prebiotic chemistry in the solar system?” The extraordinary framing produced awe and a dead end. The ordinary framing produced a research program.
Anti-exceptionalism does not flatten. It does not say everything is the same, nothing is special, nothing matters. Normal technology can have unusual properties. Normal chemistry produces life’s building blocks. Normal evolutionary processes produce consciousness. Saying something is not exceptional is not saying it is not important. It is saying that its importance does not depend on its being outside the normal order of things.
A large language model that produces coherent text is not doing something supernatural. It is also not doing something trivial. The fact that statistical patterns in text can produce outputs that pass for understanding in many contexts is genuinely interesting. But it is interesting in the way that evolution is interesting, or crystal formation is interesting, or the emergence of traffic patterns is interesting—as an instance of a broader class of phenomena in which simple rules operating on large scales produce complex behavior. Treating it as exceptional obscures the connection to that broader class. Treating it as ordinary reveals the connection.
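The broader class is easy to exhibit. Below is a minimal sketch in plain Python of elementary cellular automaton Rule 110: one eight-entry lookup table applied locally, over and over, which nonetheless generates intricate, non-repeating structure and is known to be Turing-complete. The width, step count, and single-cell starting condition are arbitrary.

```python
RULE = 110          # the entire "program": one byte
WIDTH, STEPS = 64, 32

cells = [0] * WIDTH
cells[WIDTH // 2] = 1   # a single live cell

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    # Each cell's next state is looked up from its three-cell
    # neighborhood, read as a 3-bit index into RULE.
    cells = [
        (RULE >> ((cells[(i - 1) % WIDTH] << 2)
                  | (cells[i] << 1)
                  | cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```

Nothing in that byte hints at the complexity of its output. That is the general lesson: the gap between rule and behavior is normal, not miraculous.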
The same applies to whatever is happening in a system that monitors its own states, predicts its own outputs, and modifies its behavior based on the discrepancy between prediction and result. That is interesting. It may turn out to be very important. But the question to ask is not “is this special?” The question is “under what conditions does this emerge, and what does it do to the system’s behavior when it does?”
Doctorow’s three psychoses share a common pathology: the belief that AI sits outside the normal run of technology, economics, and cognition. The investor needs this belief to justify the investment. The boss needs it to justify the restructuring. The critic needs it to justify the alarm. All three are wrong in the same way. All three are treating a specific technology as if it had escaped the ordinary constraints that apply to every other technology in history.
It has not. Tools get used by the powerful to consolidate power. That is not an AI problem. It is a power problem, and the solutions are political, not technical. Systems that process large amounts of data produce outputs that sometimes look like understanding. That is not a consciousness problem. It is a question about what understanding is and how to test for it, and the answers will come from cognitive science, not from corporate press releases or alarmed op-eds.
Treating AI as exceptional—whether in hope, greed, or fear—is the real psychosis. The cure is the oldest question in science, the one that works every time someone remembers to ask it:
What is actually happening here?
Written March 22, 2026. After Doctorow, after Hayabusa2, after watching the same exceptionalist error from three directions.