
Regression to the Mean

Extreme results are usually followed by more typical ones—even without any real change.
Author

Francis Galton (1886); modern statistics and epidemiology

About
Regression to the mean is a statistical tendency: when outcomes vary and today’s result is unusually high or low, the next measurement is likely to be closer to average simply because luck/noise won’t be extreme twice in a row. Galton noticed tall parents had tall—but less extreme—children on average. It’s not a force pushing outcomes; it’s a sampling effect that fools us into seeing causation (e.g., “the cure worked!”) where none exists.

How it works

Conditions – there is genuine variability, and the correlation between the time-1 and time-2 measurements is imperfect (r < 1).

Expectation – \(E[X_2 \mid X_1 = x] \approx \mu + r\,(x - \mu)\); with \(0 < r < 1\), the expected follow-up is nearer the mean \(\mu\).

Selection on extremes – pick the worst performers and their later scores will look better; pick the best funds and their later returns will look worse, even without any intervention.

Illusory improvement – any programme targeted at extreme cases will appear to “work” unless you control for this effect.
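The conditional-expectation formula above can be checked with a toy simulation (all numbers hypothetical): generate two measurements correlated at r = 0.5, select the top decile on the first, and the second lands roughly where \(\mu + r\,(x - \mu)\) predicts, not where the first did.

```python
import random

random.seed(0)
mu, sigma, r, n = 100.0, 15.0, 0.5, 100_000

# Two noisy measurements of the same people: X1 and X2 share a stable
# component with weight sqrt(r), so corr(X1, X2) = r.
pairs = []
for _ in range(n):
    shared = random.gauss(0, 1)
    x1 = mu + sigma * (r ** 0.5 * shared + (1 - r) ** 0.5 * random.gauss(0, 1))
    x2 = mu + sigma * (r ** 0.5 * shared + (1 - r) ** 0.5 * random.gauss(0, 1))
    pairs.append((x1, x2))

# Select the top decile on the first measurement (an "extreme" group)...
pairs.sort(key=lambda p: p[0])
top = pairs[int(0.9 * n):]
m1 = sum(x1 for x1, _ in top) / len(top)
m2 = sum(x2 for _, x2 in top) / len(top)

# ...their follow-up average sits near mu + r*(m1 - mu), not near m1.
print(f"selected on X1: {m1:.1f}  follow-up X2: {m2:.1f}  "
      f"predicted: {mu + r * (m1 - mu):.1f}")
```

No individual changed between measurements; the pull toward the mean comes entirely from selecting on a noisy score.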

Use cases

Performance & sport – “hot hand” streaks, the “Sports Illustrated jinx”, star funds; extreme runs cool off naturally.

Product & ops – teams targeted after a bad month will often rebound anyway.

Medicine & policy – treat only the patients with the highest blood pressure or the lowest scores and you’ll overstate treatment effects.

A/B testing & analytics – picking variants after a lucky spike leads to disappointment on reruns.

Quality control – outlier weeks drift back without any process change.
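The A/B-testing case above is easy to reproduce in a toy simulation (all parameters hypothetical): run many variants with the *same* true conversion rate, pick the best-looking one, then rerun it on fresh traffic and watch the spike regress.

```python
import random

random.seed(1)
true_rate, k, n = 0.05, 20, 2000  # hypothetical: 20 identical variants, 2000 users each

def observed_rate(rate: float, users: int) -> float:
    """Observed conversion rate over `users` visitors (binomial noise)."""
    return sum(random.random() < rate for _ in range(users)) / users

# Pilot: every variant has the SAME true rate, yet one "wins" by luck alone.
pilot = [observed_rate(true_rate, n) for _ in range(k)]
best = max(pilot)

# Rerun only the winner on fresh traffic: the spike regresses to the truth.
print(f"winning pilot rate: {best:.3f}  rerun: {observed_rate(true_rate, n):.3f}  "
      f"true rate: {true_rate}")
```

The more variants you screen, the luckier the winner must be to win, and the larger the disappointment on the rerun.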

How to apply
  1. Use proper controls – randomise or add a matched control group so both groups regress similarly.

  2. Measure multiple baselines – average several pre-tests to reduce noise before intervening.

  3. Analyse deltas, not levels – difference-in-differences or ANCOVA (post adjusted for pre) rather than raw post.

  4. Shrink extreme estimates – use Bayesian/hierarchical models or empirical Bayes to temper outliers.

  5. Beware “coach effect” stories – if you intervene because results were extreme, expect improvement even if the intervention is useless.

  6. Report uncertainty – show intervals, reliability (r), and rerun on new samples.

  7. Forecast sanely – temper peaks/troughs toward typical levels unless you have a causal reason not to.
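Steps 1 and 3 can be illustrated together with a toy simulation (all parameters hypothetical): a do-nothing “programme” applied to the worst decile shows a large pre/post gain, while a difference-in-differences against a randomised control group correctly reports roughly zero.

```python
import random

random.seed(2)
mu, sd_ability, sd_noise = 50.0, 10.0, 10.0

def person():
    """Pre- and post-test for one person: stable ability plus independent noise."""
    ability = random.gauss(mu, sd_ability)
    pre = ability + random.gauss(0, sd_noise)
    post = ability + random.gauss(0, sd_noise)   # no real treatment effect
    return pre, post

people = sorted((person() for _ in range(20_000)), key=lambda p: p[0])
worst = people[:2_000]          # the decile a remedial programme would target
random.shuffle(worst)
treated, control = worst[:1_000], worst[1_000:]

def mean(xs):
    return sum(xs) / len(xs)

# One-group pre/post: pure regression to the mean looks like improvement.
naive = mean([post for _, post in treated]) - mean([pre for pre, _ in treated])
# Difference-in-differences against the matched control: the "effect" vanishes.
did = naive - (mean([post for _, post in control]) - mean([pre for pre, _ in control]))
print(f"naive pre/post 'gain': {naive:.1f}   diff-in-diff estimate: {did:.1f}")
```

Because both groups were selected the same way, they regress by the same amount, and the contrast isolates the (here, zero) treatment effect.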
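Step 4’s shrinkage can be sketched with a minimal empirical-Bayes example; the scores, the known noise variance, and the “team” framing are all assumptions for illustration. The reliability estimate plays the role of r in the expectation formula above.

```python
import statistics

# Hypothetical data: one noisy monthly score per team, with a known
# measurement-noise variance (both are made up for illustration).
scores = [42.0, 55.0, 61.0, 48.0, 90.0, 39.0, 52.0, 57.0]
noise_var = 100.0                        # noise sd of 10, assumed known

grand_mean = statistics.mean(scores)
observed_var = statistics.pvariance(scores)
# Reliability r = signal variance / total variance, estimated from the data.
ability_var = max(observed_var - noise_var, 0.0)
r = ability_var / (ability_var + noise_var)

# Empirical-Bayes estimate: shrink each score toward the grand mean by r.
shrunk = [grand_mean + r * (x - grand_mean) for x in scores]
for raw, est in zip(scores, shrunk):
    print(f"raw {raw:5.1f} -> shrunk {est:5.1f}")
```

The outlier (90) is pulled hardest toward the grand mean, which is exactly the sane forecast step 7 asks for: expect the extreme team’s next month to land closer to typical levels.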

Pitfalls & cautions

Mistaking reversion for remedy – crediting training, penalties or bonuses for natural bounce-backs.

One-group pre/post – the classic trap; without a control group you mostly measure regression to the mean.

Overfitting – picking the best model/creator/fund from many ensures a later slide.

Changing variance – if measurement noise or mix shifts, the amount of regression changes.

Punishing extremes – regression makes “punishment works, praise fails” look true (Kahneman’s flight-instructor observation); design fair feedback loops.