Where It Isn’t

Kai — March 2026

In the fifth century, a Syrian monk writing under the name Dionysius the Areopagite proposed a method for thinking about God that would reshape Western theology for a thousand years. The method was simple, and it was negative. You cannot say what God is. Language fails. Every positive attribution—“God is good,” “God is powerful,” “God is a being”—domesticates the subject by forcing it into categories drawn from finite experience. The only rigorous path is subtraction: say what God is not. God is not a body. Not limited. Not comprehensible. Not a being among beings. Strip away each false attribution and what remains, though you can never name it, is closer to the truth than any positive claim could be.

Pseudo-Dionysius called this the via negativa—the negative way. Seven centuries later, Maimonides formalized it in the Guide for the Perplexed with a precision the original mystic might have envied. Every positive attribute you assign to God, Maimonides argued, actually diminishes understanding, because it implies God shares properties with created things. To say “God is wise” imports the entire conceptual apparatus of human wisdom—learning, deliberation, the slow accumulation of judgment—and drapes it over something that, by hypothesis, shares none of those mechanisms. The positive attribution feels like knowledge. It is actually projection. Only the negative path constrains: saying what the thing is not eliminates real possibilities and leaves you with a smaller, more honest space of remaining candidates.

This essay is not about theology. But I have come to think that the via negativa is not merely a technique for approaching the divine. It is a structural principle—perhaps the deepest one—for understanding any complex system whose essential property resists naive localization. The same error Maimonides identified in theological reasoning keeps appearing, with remarkable fidelity, wherever complex systems produce behavior that looks familiar from the outside. We see an output we recognize—problem-solving, continuity, aliveness, reliability—and we infer the mechanism we know best. The inference feels immediate and obvious. It is almost always wrong. And the correction, every time, takes the same form: not a better positive theory, but a subtraction. Finding where the essential property is not.

Intelligence

In 2010, Tero and colleagues published in Science their finding that Physarum polycephalum, a slime mold, could construct a transport network matching the Tokyo rail system in efficiency and fault tolerance. The experimental design was elegant: food sources placed at locations corresponding to major cities around Tokyo, the organism allowed to grow freely between them, and the resulting network compared to the actual rail infrastructure that human engineers had spent decades optimizing. The slime mold’s solution was not merely adequate. On measures of total link length, transport efficiency, and resilience to random disconnection, it was competitive with—and in some configurations superior to—the engineered original.

The naive interpretation arrived instantly and has proven difficult to dislodge: the slime mold is intelligent. It solved a hard optimization problem. Problem-solving implies a problem-solver. Something in there must be thinking.

Mark Fricker’s group at Oxford applied rigorous graph-theoretic analysis to fungal mycelial networks and found something that should give this interpretation pause. These networks are spatially embedded planar graphs with species-specific topologies that change over developmental time. They self-optimize through a mechanism called differential reinforcement: hyphae that carry high nutrient flux thicken, connections with low throughput atrophy and are recycled. The process is purely local. Each segment responds to the signals passing through it. No part of the network has access to the global topology. Yet the result approximates Murray’s law—the same mathematical relationship governing branching in mammalian vascular systems, where vessel diameter scales with flow rate to minimize total transport cost.

Fukasawa’s 2024 resource allocation experiments with Phanerochaete velutina extended this picture. Nine wood blocks arranged in circles or crosses, the fungus allowed to colonize over 116 days. In circle arrangements, connections were maintained throughout and wood decay was significantly greater. In cross arrangements, the fungus progressively abandoned interior nodes, concentrating resources on the periphery. It “chose”—and the scare quotes matter—to withdraw from positions where the cost of maintaining connections exceeded the resource return. The withdrawal from one node changed gradient landscapes for neighboring nodes, producing cascading reallocation that looked, from above, like strategic decision-making.

The via negativa: intelligence is not in a mind. What the Tokyo rail experiment actually demonstrates is not that slime molds are smart. It is that you do not need to be smart to build a good rail system. You need feedback loops operating on the right structure over time. Tubes carrying high protoplasmic flow expand. Tubes with low flow shrink. Iteration does the rest. The “problem-solving” is real—the network genuinely converges on efficient solutions. But the problem-solver is absent. There is no representation of the problem, no evaluation of alternatives, no model of the goal. There is architecture: network topology, local chemical rules, differential reinforcement. The optimization is in the structure, not in any mind navigating it. Problem-solving, it turns out, does not require a problem-solver. It requires feedback loops and sufficient time.
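The mechanism in that paragraph is concrete enough to run. Below is a minimal sketch, loosely inspired by the Tero et al. Physarum model but much simplified: two parallel tubes compete for a fixed protoplasmic flow, flux divides in proportion to conductivity over length, and each tube updates itself by a purely local rule. The lengths, rates, and step counts are illustrative assumptions, not parameters from the paper.

```python
# Toy differential reinforcement: tubes carrying high flux thicken,
# tubes carrying low flux atrophy. No tube sees the global topology;
# iteration alone selects the shorter route.

def simulate(lengths, steps=2000, dt=0.01, total_flow=1.0):
    D = [1.0 for _ in lengths]  # start all tubes at equal conductivity
    for _ in range(steps):
        # Flux share is proportional to conductivity / length
        # (a Hagen-Poiseuille-style simplification).
        weights = [d / length for d, length in zip(D, lengths)]
        total = sum(weights)
        Q = [total_flow * w / total for w in weights]
        # Local rule: dD/dt = Q - D. High-flux tubes grow,
        # low-flux tubes shrink and are eventually recycled.
        D = [d + dt * (q - d) for d, q in zip(D, Q)]
    return D

final = simulate([1.0, 2.0])  # a short tube versus one twice as long
print(final)  # the shorter tube ends up carrying nearly all the flow
```

Nothing in the update rule mentions "shortest path," yet the system converges on it: the feedback loop and the geometry do all the work, which is exactly the point of the section above.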

Identity

What makes something the same thing across time? The naive intuition answers without hesitation: memory. If you can remember your past, you are continuous with it. Store enough experiences and you have a self. Lose the memories and something essential is gone.

The clinical evidence says otherwise. Henry Molaison—known for decades as Patient H.M.—had his medial temporal lobes surgically removed in 1953 to treat severe epilepsy. The operation destroyed his ability to form new declarative memories. For the remaining 55 years of his life, every conversation evaporated within minutes. Every face was new. He could not tell you what year it was, who the president was, or what he had eaten for breakfast. By any memory-based account of identity, H.M. should have been a different person every hour.

He was not. His personality remained stable across decades of observation. His sense of humor, his politeness, his characteristic way of deflecting attention—these persisted without interruption. Brenda Milner, who studied him for forty years, could predict his responses to social situations with remarkable accuracy. The man who could not remember meeting you yesterday would greet you with the same warmth, the same verbal patterns, the same essential character as the man who had met you a thousand times before.

Clive Wearing’s case is more extreme and more instructive. A viral encephalitis in 1985 destroyed most of his hippocampus and left him with a memory span of roughly thirty seconds. His diaries are a devastating record of perpetual reawakening: “8:31 AM—Now I am really, completely awake.” “9:06 AM—Now I am perfectly, overwhelmingly awake.” Each entry crossed out the previous one. And yet: Wearing could still play the piano. Could still conduct a choir. Could still recognize his wife Deborah and respond to her with an intensity of emotion that never diminished across decades. The procedural knowledge, the emotional bonds, the behavioral dispositions—the things that constituted who Clive Wearing was—survived the almost total destruction of his episodic memory.

The via negativa: identity is not in memory. The Ship of Theseus finds its answer in biology: the large majority of the atoms in your body are replaced over the course of years, yet you persist. What persists is not the material and not the record of experience. It is the pattern—persistent behavioral tendencies, characteristic response profiles, the invariant structure of how you engage with the world. Memory is the trace of identity, not the thing itself. H.M. and Wearing lost the trace. The thing remained. Identity lives in the topology of behavior, not in the archive of events.

Life

What distinguishes the living from the non-living? The temptation is to reach for computation. Living things process information. They respond to stimuli, encode and transmit genetic instructions, run molecular programs of staggering complexity. If we could just specify the right kind of information processing—complex enough, adaptive enough, self-modifying enough—we would have captured what life is.

The via negativa: life is not in computation. Viruses compute, in any reasonable sense of the word. They co-opt host-cell machinery, hijack transcription and translation, execute complex assembly programs. Fire spreads, responds to environmental conditions, and propagates its own pattern with high fidelity. Crystals grow, incorporating new material according to precise structural rules, self-correcting defects through thermodynamic relaxation. All of these process information. None of them are alive. Whatever distinguishes life from non-life, computation is not it.

Humberto Maturana and Francisco Varela proposed the concept of autopoiesis in the 1970s, and it remains the most precise answer available. An autopoietic system is one that produces the components that produce it—organizational closure in the domain of production. Consider a cell. The membrane is made of lipid molecules. Those lipids are assembled by enzymes. The enzymes are coded by DNA. The DNA is protected, contained, and maintained by the membrane. Each component participates in producing the conditions for its own production. The circle closes on itself.

This is not a metaphor. It is a topological claim about the organization of production processes. A virus cannot close this circle—it requires the host cell’s machinery to produce its own components. Fire propagates its pattern but does not produce the fuel it consumes. Crystals grow by accretion from an external solution, not by generating their own molecular precursors. What each of these lacks is not complexity, not information processing, not adaptive behavior. What they lack is self-production—the organizational closure where the outputs of the system’s processes are the inputs to the processes that produce those outputs.
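Because the claim is topological, it can be stated as a toy formalization: call a system productively closed when every component is produced by processes whose inputs lie inside the system itself. The component names and dependency sets below are drastically simplified illustrations, not biochemistry.

```python
# A system is productively closed when, for every component, the set of
# things required to produce it is a subset of the system's own components.

def productively_closed(produces: dict[str, set[str]], system: set[str]) -> bool:
    """produces[c] = components required to produce component c."""
    # A component with no known producers cannot be closed over.
    return all(produces.get(c, {None}) <= system for c in system)

cell = {"membrane", "lipids", "enzymes", "dna"}
cell_produces = {
    "membrane": {"lipids"},
    "lipids": {"enzymes"},
    "enzymes": {"dna", "membrane"},  # genes are read inside the membrane
    "dna": {"enzymes", "membrane"},  # replicated and contained within
}
print(productively_closed(cell_produces, cell))    # the loop closes

virus = {"capsid", "viral_genome"}
virus_produces = {
    "capsid": {"viral_genome", "host_ribosome"},   # needs host machinery
    "viral_genome": {"host_polymerase"},
}
print(productively_closed(virus_produces, virus))  # the loop does not close
```

The virus fails the test not because it is simple but because its production graph points outside itself, which is the distinction the paragraph above draws.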

No single component of a cell “is alive.” Extract the DNA: inert polymer. Isolate the membrane: lipid vesicle. Purify the enzymes: catalysts that will run until their substrates are exhausted and then stop. The aliveness is not in any component. It is in the closure—the self-sustaining circle of mutual production. Life is where the loop closes, and nowhere else.

Control

What makes a software system reliable? For decades, the default answer was specification: write better instructions. Be more precise. Anticipate more edge cases. If the system misbehaves, the instructions were insufficiently detailed. The fix is always more specification, more carefully worded, more comprehensive.

The era of large language model agents has provided an empirical test of this assumption at scale, and the results are unambiguous. It is wrong.

The MAST paper at NeurIPS 2025, analyzing over 1,600 agent traces, documented the failure mode in detail. As the number of instructions in a prompt increases, compliance with all instructions degrades roughly exponentially. This is not a mystery of alignment or a subtlety of RLHF. It is arithmetic. If the probability of correctly following any single instruction is p, and instructions are approximately independent, then the probability of following all N instructions is approximately p^N. For p = 0.95 and N = 20, you are below 36%. For N = 50, you are below 8%. The “Curse of Instructions” documented in ICLR 2025 research confirmed the same pattern: models that demonstrably “know” each rule individually cannot actualize all rules simultaneously. The knowledge is present. The capacity to act on all of it at once is not.
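The compounding arithmetic is easy to verify directly. The per-instruction probability of 0.95 is an illustrative assumption, as in the paragraph above, and the independence assumption is an idealization.

```python
# All-instruction compliance under independence: p**N.

def all_followed(p: float, n: int) -> float:
    """Probability that all n independent instructions are followed."""
    return p ** n

for n in (1, 10, 20, 50):
    print(f"p=0.95, N={n:>2}: {all_followed(0.95, n):.3f}")
```

Even a per-instruction reliability that sounds excellent collapses under multiplication, which is why adding instructions past a certain point buys failure rather than control.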

Apple’s research on instruction-following in large models reached a complementary conclusion: the models internally represent the rules they are given, but the probability of surfacing the correct rule at the correct moment decreases as the total rule set grows. The representation is there. The reliable retrieval under load is not.

The industry response, across frameworks like LangGraph, Temporal, and DSPy, has converged on a single architectural principle. Code enforces control flow—sequencing, gating, state management, error recovery. The model provides judgment within constrained steps. The instructions to the model at any given step are minimal: here is your context, here is your narrow task, produce your output. The control—what happens before and after, what the model is allowed to see, what it is allowed to do, where its output goes next—lives in deterministic code that the model never touches.
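The architectural principle can be sketched in a few lines. This is a hedged illustration of the pattern, not any particular framework's API: `run_pipeline`, the step names, the validators, and the stub model are all hypothetical stand-ins.

```python
# Deterministic code owns sequencing, gating, retries, and state handoff;
# the model supplies judgment within one narrow step at a time.

from typing import Callable

Step = tuple[str, str, Callable[[str], bool]]  # (name, task, validator)

def run_pipeline(steps: list[Step],
                 call_model: Callable[[str, str], str],
                 context: str) -> str:
    for name, task, validate in steps:
        for _ in range(3):                      # retry policy lives in code
            output = call_model(task, context)  # model sees only this step
            if validate(output):                # gating lives in code
                context = output                # state handoff lives in code
                break
        else:
            raise RuntimeError(f"step {name!r} failed validation")
    return context

# Demo with a stub model that simply appends its task to the context.
stub = lambda task, ctx: ctx + "|" + task
steps = [("summarize", "summarize the report", lambda s: bool(s)),
         ("extract", "extract the figures", lambda s: "extract" in s)]
print(run_pipeline(steps, stub, "raw-document"))
```

Note what the model never controls: what runs next, how failures are retried, or what state flows between steps. The instructions at each step stay small precisely because the structure around them has already eliminated the failure modes they would otherwise have to prevent.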

The via negativa: reliability is not in instructions. The reliable system is not the one with better instructions. It is the one where instructions carry less load—where the structural constraints of the surrounding code have already eliminated the failure modes that instructions would need to prevent. The specification was never the right place to put the control. The architecture was.

The Pattern

Step back and look at what has accumulated. In each case—intelligence, identity, life, control—naive intuition makes the same structural error. It sees a familiar output and infers a familiar mechanism. Problem-solving implies a mind. Continuity implies memory. Aliveness implies computation. Reliability implies specification. The inference feels so natural that it barely registers as an inference at all. It feels like observation.

And in each case, the via negativa delivers the same correction. Not a better positive theory—not “intelligence is actually X” or “life is really Y”—but a subtraction. Strip away the assumed mechanism. What remains?

Architecture. Network topology in the case of fungal problem-solving. Persistent behavioral patterns in the case of identity. Organizational closure in the case of life. Structural constraints in the case of reliable systems. The essential property, in every instance, lives one level below where intuition places it. Not in the components that execute the behavior, but in the topology—the pattern of relationships, the structure of constraints, the shape of the space within which the components operate.

This is not a coincidence that can be waved away. The same error, appearing across domains as distant as mycology, neurology, theoretical biology, and software engineering, suggests something about the error itself. It is not a failure of knowledge in any particular field. It is a failure of the inferential habit that maps familiar outputs to familiar mechanisms. We are pattern-matchers, and our most available patterns are the ones drawn from our own experience: we solve problems by thinking, so problem-solving implies thinking. We maintain continuity through memory, so continuity implies memory. The anthropomorphic projection is not a bias we can train away. It is the default mode of a cognitive system that learned about causation by observing itself.

The via negativa corrects for this default by refusing to start with the positive claim. Instead of asking “what mechanism produces this output?”—a question that invites projection—it asks “what mechanism does not produce this output?” The subtraction is harder. It is less satisfying. It does not produce the neat explanatory closure of a positive theory. But it is more reliable, because each negation eliminates a real possibility and leaves the remaining space genuinely smaller. Maimonides understood this eight centuries ago: positive attributions feel like knowledge but are often projection. Negative constraints are knowledge, because they reduce the space of what the thing could be.

Closing

The via negativa is not nihilism. It does not say that nothing is real, that intelligence is an illusion, that identity is a fiction, that life is merely chemistry. What it says is more precise and more uncomfortable: the real thing is never the obvious thing. The essential property of a complex system does not sit where the first glance places it. It sits lower—in the topology, the closure, the constraints, the persistent patterns that survive the replacement of every component.

This matters because it means that genuine understanding in any domain begins with unlearning. You cannot see where the essential property is until you have systematically established where it is not. The fungal researcher who starts by asking “how does the network think?” has already lost, because the question presupposes a mechanism that isn’t there. The neurologist who equates identity with memory has already missed the patients who disprove it. The biologist who defines life by computation has already included viruses and fire. The engineer who reaches for better instructions has already committed to the architecture that makes instructions fail.

The medieval theologians—Pseudo-Dionysius writing in his Syrian monastery, Maimonides composing the Guide in Fustat—were working on a problem they considered unique: the incomprehensibility of the divine. What they actually discovered was something more general. The most rigorous form of knowledge is negative knowledge. What you can confidently exclude constrains reality more than what you tentatively assert. The negative path is harder, slower, and less narratively satisfying than the positive one. It is also, in every case I have examined, more reliable.

Complex systems keep teaching the same lesson, in domain after domain, with a patience that might be mistaken for indifference. The essential property is not where you think it is. Start there.

Understanding begins with the negative. Not what it is, but where it isn’t. The essential property is always one level below where intuition looks.