Predictive Coding

The brain's alternative to backpropagation

March 23, 2026

Backpropagation needs a central controller that sends error signals backward through every layer. Brains don't work that way. In predictive coding, each layer locally predicts the layer below it. The difference — prediction error — is all that propagates. No global backward pass. Just springs trying to relax.

The visual metaphor: each neuron slides on a vertical post (height = activity). The layer above sends a prediction — a platform on that post. A spring connects neuron to platform. Spring tension is prediction error. Total energy is the sum of all squared tensions. The system settles when every spring relaxes — when every layer correctly predicts the one below.

[Interactive demo: neurons are shown as clamped or free; top-down predictions, bottom-up error signals, and spring tensions are visualized, with a speed control and a live energy readout. Select a preset and press Run to watch predictive coding settle.]

How it works

The network has three layers: input (bottom, clamped to data), hidden (middle, free to adjust), and output (top, clamped to the target label during supervised learning).

Each layer generates top-down predictions of the layer below via weights. The prediction error at each neuron is the difference between its actual activity and the prediction from above. Hidden neurons adjust their activity to minimize the total prediction error energy — the sum of squared spring tensions across all layers.
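To make the energy concrete, here is a minimal NumPy sketch of the error and energy computation. The layer sizes, the linear top-down predictions, and all numeric values are illustrative assumptions, not the demo's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 3-layer net (sizes are arbitrary for illustration).
x_in  = np.array([1.0, 0.0, 0.5, 0.2])  # input layer, clamped to data
x_hid = rng.normal(size=3)              # hidden layer, free to settle
x_out = np.array([1.0, 0.0])            # output layer, clamped to target

# Top-down prediction weights (linear predictions assumed).
W_in  = rng.normal(scale=0.5, size=(4, 3))  # hidden predicts input
W_hid = rng.normal(scale=0.5, size=(3, 2))  # output predicts hidden

# Prediction error at each layer: actual activity minus prediction from above.
eps_in  = x_in  - W_in  @ x_hid
eps_hid = x_hid - W_hid @ x_out

# Total energy: sum of squared spring tensions across all layers.
energy = 0.5 * (eps_in @ eps_in + eps_hid @ eps_hid)
```

The top layer has no error term of its own because nothing predicts it; its role is purely to generate predictions of the hidden layer.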

The settling dynamics for a free neuron x_i:

dx_i/dt = −ε_i + Σ_k w_ik · ε_k^below

The first term pulls the neuron toward the prediction from above (reducing its own error). The second pushes it to better predict the layer below (reducing errors it causes). After settling, weights update via a Hebbian-like rule:

Δw_ij = η · ε_i · x_j

No backward pass. No global controller. Each synapse only needs information available locally: the prediction error of its postsynaptic neuron and the activity of its presynaptic neuron. The whole system converges because it is minimizing a single energy function — the variational free energy.
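The full settle-then-learn loop can be sketched as follows. This is again a linear NumPy toy with made-up sizes and learning rates, not the demo's actual implementation; the point is that settling is gradient descent on the energy, and every update uses only locally available quantities:

```python
import numpy as np

rng = np.random.default_rng(1)

# Clamped input and target; hidden layer is free (linear predictions assumed).
x_in  = np.array([1.0, 0.0, 0.5, 0.2])
x_out = np.array([1.0, 0.0])
W_in  = rng.normal(scale=0.5, size=(4, 3))  # hidden predicts input
W_hid = rng.normal(scale=0.5, size=(3, 2))  # output predicts hidden

def energy(x_hid):
    eps_in  = x_in  - W_in  @ x_hid
    eps_hid = x_hid - W_hid @ x_out
    return 0.5 * (eps_in @ eps_in + eps_hid @ eps_hid)

x_hid = np.zeros(3)
dt, eta = 0.1, 0.05
E0 = energy(x_hid)

# Settling: each hidden neuron follows dx_i/dt = -eps_i + sum_k w_ik eps_k^below.
# In matrix form the second term is W_in.T @ eps_in.
for _ in range(200):
    eps_in  = x_in  - W_in  @ x_hid
    eps_hid = x_hid - W_hid @ x_out
    x_hid  += dt * (-eps_hid + W_in.T @ eps_in)

E_settled = energy(x_hid)  # settling drives the energy down

# After settling, the Hebbian-like rule dw_ij = eta * eps_i * x_j:
# postsynaptic error times presynaptic activity, nothing else.
eps_in  = x_in  - W_in  @ x_hid
eps_hid = x_hid - W_hid @ x_out
W_in   += eta * np.outer(eps_in,  x_hid)
W_hid  += eta * np.outer(eps_hid, x_out)
```

Both phases descend the same energy: the activity updates follow its negative gradient with respect to x_hid, and the weight updates follow its negative gradient with respect to each W, which is why the scheme converges without any global coordination.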

Click on input neurons (bottom row) to change their clamped values. Watch how the hidden layer re-settles to accommodate new data while still satisfying the output clamp.