Build causal graphs. Intervene on nodes. Watch effects propagate. Learn why do(X) ≠ see(X).
The do-operator sets a variable by force, breaking all causal influence from its parents. This is surgery on the graph.
A causal graph (or DAG, directed acyclic graph) represents how variables influence each other. An arrow from A to B means "A causally affects B." The strength on each edge controls how much influence flows. When multiple parents influence a node, their effects sum (capped at 1.0).
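The propagation rule just described (each parent's value scaled by its edge strength, summed, capped at 1.0) can be sketched in a few lines. This is an illustrative model under that assumption, not the panel's actual implementation, and the node names are made up:

```python
def propagate(parents, do=None):
    """Compute node values from edge strengths.

    parents maps node -> list of (parent, strength); a node with no
    entry is a root with value 0.0 unless intervened on.
    do maps node -> forced value (incoming influence is ignored).
    """
    do = do or {}
    memo = {}

    def value(node):
        if node in do:                     # forced by intervention
            return do[node]
        if node not in memo:
            total = sum(s * value(p) for p, s in parents.get(node, []))
            memo[node] = min(total, 1.0)   # parent effects sum, capped at 1.0
        return memo[node]

    return value

# A and B both push on C with strengths 0.7 and 0.6; effects sum and cap.
graph = {"C": [("A", 0.7), ("B", 0.6)]}
v = propagate(graph, do={"A": 1.0, "B": 1.0})
print(v("C"))  # 0.7*1.0 + 0.6*1.0 = 1.3, capped to 1.0
```

With only one parent active the cap never triggers: `do={"A": 0.5}` on an edge of strength 0.7 gives C = 0.35.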
The critical distinction in causal reasoning is between intervention and observation. Observing that X is high tells you something about X's causes (information flows backward along edges). Intervening to set X high tells you nothing about X's causes—you broke those connections by forcing the value. This is Pearl's do-calculus: P(Y | X = x) ≠ P(Y | do(X = x)).
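The "surgery" framing can be made concrete: do(X) yields a mutilated graph in which X's incoming edges are deleted while its outgoing edges survive. A minimal sketch with the graph stored as a parent map (node names illustrative):

```python
def do_surgery(parents, node):
    """Return a mutilated copy of the graph for do(node):
    the node's incoming edges are removed, since its value is forced
    and its parents can no longer influence it. Outgoing edges stay."""
    mutilated = {n: list(edges) for n, edges in parents.items()}
    mutilated[node] = []                  # sever every edge into `node`
    return mutilated

# C -> A and A -> B; intervening on A cuts C -> A but keeps A -> B.
graph = {"A": [("C", 1.0)], "B": [("A", 0.8)]}
print(do_surgery(graph, "A"))  # {'A': [], 'B': [('A', 0.8)]}
```

The original graph is left untouched, which matches the semantics: observation and intervention are queries against different graphs, not different values in the same graph.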
Try the Confounding preset: C causes both A and B. Observing A high suggests C is high, which in turn suggests B is high—a spurious correlation. But intervening with do(A)=1 severs the C→A edge: C keeps its baseline value of 0, so B stays at 0. The intervention reveals the true causal effect of A on B, which is zero. This is why randomized experiments work—random assignment is an intervention.
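The same lesson can be checked by sampling instead of graph surgery: in observational data, conditioning on A = 1 is not the same as forcing A = 1. This toy model (binary variables, C determining A and B exactly, a fair-coin confounder) is a hypothetical stand-in for the preset, not its code:

```python
import random

def sample(do_a=None):
    """One draw from the confounded model: C causes both A and B."""
    c = random.random() < 0.5        # confounder, fair coin
    a = c if do_a is None else do_a  # do(A) ignores C entirely
    b = c                            # B copies C; A never affects B
    return a, b

random.seed(0)
draws = [sample() for _ in range(10_000)]
seen = [b for a, b in draws if a]                       # condition on A=1
forced = [b for _, b in (sample(do_a=True) for _ in range(10_000))]

print(sum(seen) / len(seen))      # P(B|A=1) = 1.0: spurious, via C
print(sum(forced) / len(forced))  # P(B|do(A=1)) ~ 0.5: A has no effect
```

Conditioning says B tracks A perfectly; intervening says A does nothing to B. Both are correct answers to different questions, which is exactly the do(X) ≠ see(X) gap.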