Why you can't shortcut computation
Your accuracy is settling near 50%. That is not because you are bad at this; it is because nobody can be good at it.
Entropy is often described as disorder. But that misses the point. Entropy is computational irreducibility experienced by a bounded observer. The system is perfectly deterministic, yet its future states require as much computation to determine as simply running the system. There is no shortcut. The second law is not about things falling apart. It is about the gap between what a rule specifies and what a finite mind can extract without executing every step.
You know Rule 30 perfectly. It is printed on the screen. Eight cases, eight outputs. A child could memorize it.
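Those eight cases are literally the bits of the number 30. A minimal sketch (the variable name RULE is my own, not from the text): reading off bit p of 30 for each 3-cell neighborhood p reproduces the whole table.

```python
# The entire rule as data: bit p of the number 30 is the output
# for the 3-cell neighborhood whose bits spell out p.
RULE = 30  # binary 00011110

for p in range(7, -1, -1):
    left, center, right = (p >> 2) & 1, (p >> 1) & 1, p & 1
    print(f"{left}{center}{right} -> {(RULE >> p) & 1}")
```

Running this prints the familiar table: 111, 110, and 101 map to 0; 100, 011, 010, and 001 map to 1; 000 maps to 0.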
Yet you cannot predict the center column without computing every row. This is the knowing-doing gap: the distance between possessing a rule and possessing its consequences. For reducible computations, you can jump ahead — predict the 1000th term of an arithmetic sequence without computing terms 1 through 999. For irreducible computations, you cannot. The rule is the shortest description of its own behavior. There is no more compressed version. The only way to know what Rule 30 does at row N is to run Rule 30 for N rows.
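The contrast can be sketched in a few lines, assuming a standard Rule 30 simulation from a single black cell (the function names here are illustrative, not from the text). The arithmetic sequence has a closed form; the center column has only the loop.

```python
# Reducible: term n of an arithmetic sequence is a direct formula.
def arithmetic_term(a0, d, n):
    """Jump straight to term n -- no iteration needed."""
    return a0 + d * n

RULE = 30  # bit p of 30 is the output for neighborhood p

# Irreducible: Rule 30's center column demands stepping every row.
def rule30_center_column(rows):
    """Run Rule 30 from a single 1; collect the center cell of each row."""
    width = 2 * rows + 1          # wide enough that edges never reach the center
    cells = [0] * width
    cells[rows] = 1               # single black cell in the middle
    column = []
    for _ in range(rows):
        column.append(cells[rows])
        cells = [(RULE >> (4 * cells[i - 1] + 2 * cells[i]
                           + cells[(i + 1) % width])) & 1
                 for i in range(width)]
    return column

print(arithmetic_term(5, 3, 1000))   # instant: 3005
print(rule30_center_column(16))      # 16 rows of stepping, no shortcut
```

The first call does one multiplication no matter how large n is; the second does work proportional to the number of rows, and no known formula removes that loop.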
Rule 30 fits in 8 bits. One byte. The simplest possible lookup table.
Its output after 1000 rows contains roughly 1000 bits of apparent randomness in the center column alone. The full grid contains far more. A program that produces incompressible output from a tiny seed: this is the signature of computational irreducibility. The seed is compressed. The output is not. And the decompression — the act of running the rule step by step — is the irreducible computation itself. You cannot skip it. You cannot approximate it. You cannot predict it. You can only do it.
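The asymmetry can be checked directly, as a sketch: generate the 1000-bit center column (restating a simple Rule 30 stepper; the helper names are mine), pack it into bytes, and hand it to a general-purpose compressor. The seed is one byte; zlib recovers almost nothing from the output.

```python
import zlib

RULE = 30  # the one-byte seed

def rule30_center_bits(rows):
    """Center column of Rule 30 from a single black cell."""
    width = 2 * rows + 1
    cells = [0] * width
    cells[rows] = 1
    bits = []
    for _ in range(rows):
        bits.append(cells[rows])
        cells = [(RULE >> (4 * cells[i - 1] + 2 * cells[i]
                           + cells[(i + 1) % width])) & 1
                 for i in range(width)]
    return bits

bits = rule30_center_bits(1000)
# Pack 1000 bits into 125 bytes, most significant bit first.
packed = bytes(
    sum(b << (7 - j) for j, b in enumerate(bits[i:i + 8]))
    for i in range(0, len(bits), 8)
)
print(len(packed), len(zlib.compress(packed, 9)))
```

On a run like this, the compressed length stays close to (or above) the raw 125 bytes: the compressor finds no structure to exploit, even though the generating program fits in a single byte plus a short loop.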
This is why the universe must compute itself in real time.