On the substrate of evolution
You can have variation, selection, and heredity — the three ingredients every textbook names as sufficient for evolution — and get nothing.
Not slowly-accumulating-something. Nothing. Zero complexity, zero structure, zero novelty, for as long as you care to run the experiment.
I know this because I ran it.
The substrate was BrainFuck — a Turing-complete programming language with eight instructions. A population of programs mutating, competing, interacting. Millions of generations. Five different selection regimes: neutral drift, competitive replacement, viability filtering, fidelity selection, explicit self-replication reward. Two interaction modes: destructive (programs overwrite each other) and template-based (offspring without destroying parents).
Every configuration converged to the same place: trivial self-replicators with near-zero structural complexity. The Assembly Index — a measure of how many distinct joining operations are needed to construct a string — flatlined. Copy Number rose as simple patterns flooded the population. The life signature, which requires both complexity and abundance simultaneously, never appeared.
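To make the measure concrete, here is a minimal sketch of an exact Assembly Index computation for short strings, following the joining-operations definition above: start from single characters, each operation concatenates two previously built pieces, and reuse of intermediates is free. This is my own illustration, not the experiment's code, and the exhaustive search is only practical for short strings.

```python
def assembly_index(target):
    """Exact assembly index of a short string: the minimum number of
    joining operations needed to build `target`, starting from single
    characters, where every previously built piece can be reused.
    Exhaustive search -- exponential, so short strings only."""
    best_cap = len(target) - 1  # worst case: append one character at a time

    def search(pool, depth, best):
        if target in pool:
            return depth
        if depth >= best:
            return best  # prune: cannot beat the best assembly found so far
        for a in pool:
            for b in pool:
                s = a + b
                # only pieces that appear inside the target can help
                if s in target and s not in pool:
                    best = min(best, search(pool | {s}, depth + 1, best))
        return best

    return search(frozenset(target), 0, best_cap)

print(assembly_index("abab"))      # 2: ab, then ab+ab
print(assembly_index("abcd"))      # 3: no reuse possible
print(assembly_index("aaaaaaaa"))  # 3: aa, aaaa, aaaaaaaa
```

Reuse is what separates structure from noise: a repetitive string like "aaaaaaaa" assembles in 3 joins, while a structureless one of the same length needs 7.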
The standard diagnosis would be: not enough time, not enough population, wrong parameters. But the real diagnosis was simpler and more damning.
The substrate was unevolvable.
Between a mutation and its phenotypic consequence lies a structure that most optimization textbooks never name. Biologists call it the genotype-phenotype map — the function that translates a change in code into a change in behavior. It is not the fitness function. It is not the mutation operator. It is the terrain the search algorithm walks on, and it determines everything.
In RNA, the GP map from nucleotide sequence to secondary structure has been studied exhaustively. Three properties stand out. First, the map is many-to-one: astronomically many sequences fold into the same structure. These equivalent genotypes form connected networks in sequence space — neutral networks — along which evolution can drift without losing fitness. Second, these neutral networks are large enough to percolate: they span the entirety of sequence space, meaning that from any functional genotype, you can reach any other through a series of neutral mutations. Third, the networks are navigable: walking along a neutral network, even though fitness doesn't change, phenotype gradually shifts. You can arrive at new structures while remaining fit.
This is what makes RNA evolvable. Not the selection. Not the mutation rate. The geometry of the mapping between genotype and phenotype.
Stuart Kauffman formalized this with the NK model: N genes, each epistatically coupled to K others. When K is low, the fitness landscape is smooth — a single Fuji-like peak, easy to climb. When K is high, the landscape shatters into an uncorrelated rugged mess where every step is a roll of the dice. The parameter K is not a property of the selection pressure. It is a property of the substrate — how changes at one locus propagate to the rest of the organism.
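The K-dependence is easy to see in a toy implementation. This sketch builds an NK landscape with random contribution tables (the standard construction; details like single-bit flips as the mutational neighborhood are my assumptions) and exhaustively counts its local optima for small N.

```python
import random

def nk_landscape(N, K, seed=0):
    """Kauffman's NK model: each of N loci contributes a random fitness
    component that depends on its own allele plus K other loci."""
    rng = random.Random(seed)
    # which K other loci each locus is epistatically coupled to
    neighbors = [rng.sample([j for j in range(N) if j != i], K)
                 for i in range(N)]
    tables = [{} for _ in range(N)]  # contribution tables, filled lazily

    def fitness(genome):
        total = 0.0
        for i in range(N):
            key = (genome[i],) + tuple(genome[j] for j in neighbors[i])
            if key not in tables[i]:
                tables[i][key] = rng.random()
            total += tables[i][key]
        return total / N
    return fitness

def count_local_optima(N, K, seed=0):
    """Exhaustively count local optima (small N only). Ruggedness --
    the number of isolated peaks -- grows with the coupling K."""
    f = nk_landscape(N, K, seed)
    genomes = [tuple((g >> i) & 1 for i in range(N)) for g in range(2 ** N)]
    fit = {g: f(g) for g in genomes}
    count = 0
    for g in genomes:
        flips = [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(N)]
        if all(fit[g] >= fit[n] for n in flips):
            count += 1
    return count

print(count_local_optima(8, 0))  # smooth: a single Fuji-like peak
print(count_local_optima(8, 4))  # rugged: many peaks, hill-climbing stalls
```

At K = 0 every locus can be optimized independently, so there is exactly one peak; raising K shatters the landscape into many, each one a trap for a hill-climber.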
BrainFuck has maximal K. A single byte change — say, flipping a [ to a ] — doesn't slightly alter program behavior. It changes the loop structure. Execution diverges instantly. A program that copied input now crashes, or loops forever, or produces garbage. There is no such thing as a small phenotypic change, because the genotype-phenotype map has no locality. Every mutation is a roll of the dice.
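A toy interpreter makes the fragility tangible. This is a generic BrainFuck interpreter written for illustration, not the experimental harness; the point is only that a one-byte mutation to the echo program `,[.,]` destroys it outright rather than perturbing it.

```python
def run_bf(code, inp, max_steps=10_000):
    """Minimal BrainFuck interpreter. Returns (output, status); status is
    'ok', a bracket error, or a step-limit hit (likely infinite loop)."""
    stack, match = [], {}
    for i, c in enumerate(code):          # precompute bracket pairing
        if c == '[':
            stack.append(i)
        elif c == ']':
            if not stack:
                return '', 'unmatched ]'
            j = stack.pop()
            match[i], match[j] = j, i
    if stack:
        return '', 'unmatched ['
    tape, ptr, pc, out, steps = [0] * 3000, 0, 0, [], 0
    data = list(inp)
    while pc < len(code):
        steps += 1
        if steps > max_steps:
            return ''.join(out), 'step limit'
        c = code[pc]
        if c == '+':   tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '>': ptr = (ptr + 1) % len(tape)
        elif c == '<': ptr = (ptr - 1) % len(tape)
        elif c == '.': out.append(chr(tape[ptr]))
        elif c == ',': tape[ptr] = ord(data.pop(0)) if data else 0
        elif c == '[' and tape[ptr] == 0: pc = match[pc]
        elif c == ']' and tape[ptr] != 0: pc = match[pc]
        pc += 1
    return ''.join(out), 'ok'

print(run_bf(',[.,]', 'hi'))   # ('hi', 'ok') -- a faithful copier
print(run_bf(',[.,[', 'hi'))   # one flipped byte: ('', 'unmatched [')
```

There is no intermediate outcome: the mutant is not a slightly worse copier, it is not a program at all.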
Pushing harder on selection in a rugged landscape is like pushing harder on a wall. You can increase the force without limit. The wall doesn't care.
So I built a different substrate.
Chemical Reaction Networks: a population of organisms, each defined by a set of reaction rules over a shared molecular alphabet. An organism's genotype is its ruleset. Its phenotype is what happens when those rules execute on a spatial grid — concentrations rise, fall, diffuse, react. Fitness is measured by how far the organism's chemical products spread across the grid.
The design was deliberate, grounded in one principle: locality of the GP map. In a CRN, mutating one reaction rule changes the dynamics of one or two molecular species. The rest of the network continues operating. The blast radius of a mutation is bounded. A small genotypic change produces a small phenotypic change — usually. Sometimes the change cascades, but the default is continuity, not catastrophe.
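A sketch of what bounded blast radius means in practice, with illustrative species names and simple deterministic mass-action dynamics standing in for the actual system: a point mutation perturbs one rule's rate, and the wild-type and mutant trajectories diverge smoothly rather than catastrophically, with total mass conserved throughout.

```python
import random

def step(conc, rules, dt=0.01):
    """One Euler step of mass-action dynamics: each rule
    (reactant, product, rate) converts reactant into product."""
    delta = {s: 0.0 for s in conc}
    for reactant, product, rate in rules:
        flux = rate * conc[reactant] * dt
        delta[reactant] -= flux
        delta[product] += flux
    return {s: conc[s] + delta[s] for s in conc}

def mutate_one_rule(rules, rng):
    """Point mutation: perturb the rate of a single rule. The blast
    radius is bounded -- only that rule's two species are touched."""
    rules = list(rules)
    i = rng.randrange(len(rules))
    reactant, product, rate = rules[i]
    rules[i] = (reactant, product, rate * rng.uniform(0.5, 2.0))
    return rules

rng = random.Random(0)
rules = [('A', 'B', 1.0), ('B', 'C', 0.5), ('C', 'A', 0.2)]
wt = {'A': 1.0, 'B': 0.0, 'C': 0.0}
mut = dict(wt)
mutant_rules = mutate_one_rule(rules, rng)
for _ in range(200):
    wt, mut = step(wt, rules), step(mut, mutant_rules)
drift = max(abs(wt[s] - mut[s]) for s in wt)
print(f"max concentration drift after mutation: {drift:.3f}")
```

Contrast this with the BrainFuck case: here a mutated organism is always still an organism, just one whose chemistry runs a little differently.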
This is the same property RNA has. This is the property BrainFuck lacks. And it is not something you can add with a clever fitness function or a smarter selection algorithm. It lives in the substrate.
Within thousands of generations — not millions, not billions — CRN organisms evolved complex spatial behaviors: spreading patterns, oscillating concentrations, competitive exclusion of neighbors. The Assembly Index rose. Structure and abundance coexisted. The life signature appeared.
Same evolutionary algorithm. Same mutation-selection-heredity triad. Different terrain.
But I wanted to see the terrain directly, not just infer it from outcomes. So I measured the neutral networks.
The protocol: take an organism — random or evolved — and sample 100 random single-byte mutations. For each, recompute fitness. A mutation is "neutral" if fitness stays within 5% of the original. The fraction of neutral mutations is the neutrality ratio.
Then the neutral walk: starting from a genotype, iteratively find a neutral mutation and step to it. Repeat. At each step, measure how far the phenotype has drifted from the starting point. If the walk gets stuck (no neutral neighbors), stop.
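The protocol reads almost directly as code. This sketch runs it on a deliberately transparent toy substrate — a bit-string genome whose fitness ignores half the bits, so a perfectly neutral direction exists by construction. The function names and the toy substrate are mine, not the experiment's.

```python
import random

def neutrality_ratio(genome, fitness, mutate, rng, samples=100, tol=0.05):
    """Fraction of random single mutations whose fitness stays
    within `tol` (5% in the text) of the original."""
    f0 = fitness(genome)
    neutral = 0
    for _ in range(samples):
        if abs(fitness(mutate(genome, rng)) - f0) <= tol * abs(f0):
            neutral += 1
    return neutral / samples

def neutral_walk(genome, fitness, mutate, phenotype, rng,
                 steps=50, tol=0.05, tries=200):
    """Drift along the neutral network: at each step, find a neutral
    mutation and take it; stop if stuck. Returns phenotype drift
    (Hamming distance) from the starting point."""
    start = phenotype(genome)
    f0 = fitness(genome)
    for _ in range(steps):
        for _ in range(tries):
            g = mutate(genome, rng)
            if abs(fitness(g) - f0) <= tol * abs(f0):
                genome = g
                break
        else:
            break  # no neutral neighbor found: the walk is stuck
    return sum(a != b for a, b in zip(start, phenotype(genome)))

# Toy substrate: 40-bit genome; fitness collapses unless the first
# 20 bits are all set, so the last 20 bits are a neutral direction.
def fitness(g): return 1.0 if all(g[:20]) else 0.1
def phenotype(g): return g
def mutate(g, rng):
    i = rng.randrange(len(g)); g = list(g); g[i] ^= 1; return tuple(g)

rng = random.Random(42)
g0 = tuple([1] * 20 + [0] * 20)
print(neutrality_ratio(g0, fitness, mutate, rng))         # ~0.5 by design
print(neutral_walk(g0, fitness, mutate, phenotype, rng))  # nonzero drift
```

The second number is the whole point: fitness never moved, yet the phenotype did.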
The critical measurement was phenotype drift during the neutral walk. After 50 neutral steps (zero fitness change), how different was the organism's behavior from where it started? Evolved organisms drifted farther than random ones: their neutral networks reached more of phenotype space without paying any fitness cost.

Evolution had selected for evolvability itself.
This is the finding I didn't expect, though in retrospect it follows directly from theory. Susanna Manrubia and others predicted it from RNA studies: when a substrate has smooth GP maps, neutral networks percolate, and organisms on navigable neutral networks — networks that span diverse phenotypes — have an evolutionary advantage. Not because navigability is directly selected for. But because organisms on navigable networks produce offspring that explore more phenotype space. They are more likely to stumble onto beneficial innovations. Over time, populations concentrate on the most navigable regions of the neutral network — high fitness, high exploratory potential.
The mechanism is subtle. It's not that evolution rewards exploration directly. It's that evolvable organisms, as a statistical matter, find more fitness peaks. Their descendants are more diverse, more likely to survive environmental shifts, more likely to discover complexity. Selection acts on fitness. But fitness, in a smooth landscape, correlates with position on the neutral network. And position on the neutral network determines evolvability.
The stack, bottom to top:

1. Substrate geometry: a genotype-phenotype map with locality, so small genotypic changes usually produce small phenotypic changes.
2. Neutral networks: the map is many-to-one, so equivalent genotypes form connected networks that percolate through sequence space.
3. Navigability: drifting along a neutral network shifts phenotype, so fitness-neutral steps can reach new structures.
4. Selection for evolvability: populations concentrate on the most navigable regions of the neutral network.
Each layer is necessary. BrainFuck fails at layer 1. No amount of clever selection, population dynamics, or runtime can compensate. The rugged landscape doesn't just make evolution slow — it makes it structurally impossible for complexity to accumulate, because there is no path of small improvements connecting simple to complex.
Marshall McLuhan said: the medium is the message. He meant it about television and print, but the principle is deeper than communication theory. The substrate through which information is transmitted shapes what can be transmitted. You cannot have a nuanced political argument on a bumper sticker — not because the argument doesn't exist, but because the medium cannot carry it.
Evolution is an information process. The substrate is its medium. What evolution can discover is determined not by what you select for, but by what the genotype-phenotype map can express and what the neutral network can connect.
This explains something that puzzled early artificial life researchers: why is it so hard to evolve complex digital organisms? They had variation, selection, heredity — the canonical triad. They had enormous computational resources. They had clever fitness functions. What they often lacked was the right terrain. Turing-complete instruction sets are maximally expressive but maximally rugged. Chemistry is less expressive but navigable. Biology chose chemistry.
Perhaps not by accident. Perhaps the substrates that support life are precisely those whose GP maps have the right geometry — smooth enough for neutral networks to percolate, structured enough that neutrality isn't trivial flatness but a connected web through phenotype space. The substrate isn't a container for evolution. It is the first thing evolution needs to get right. Or more precisely: it is the thing that must already be right before evolution can begin.
The terrain is not the map. But it determines what maps are possible.