Breakpoints
General usage; system dynamics, economics and statistics (threshold / change-point effects)

A breakpoint is a value of a variable where the relationship with outcomes changes slope or regime: capacity hits a wall, a discount tier kicks in, adoption passes critical mass, or regulation changes the rules. Thinking in breakpoints replaces straight-line extrapolation with piecewise models and if-this-then-that triggers.
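
As a minimal sketch of the difference, the snippet below (all slopes, intercepts and the breakpoint are illustrative assumptions) contrasts a straight-line forecast with a two-segment “hinge” model whose slope changes at an assumed breakpoint.

```python
# Sketch: straight-line extrapolation vs. a piecewise ("hinge") model.
# All slopes, intercepts and the breakpoint are illustrative assumptions.

def linear_forecast(x, slope=2.0, intercept=10.0):
    """Straight-line model: the same slope everywhere."""
    return intercept + slope * x

def piecewise_forecast(x, breakpoint=50.0, slope_below=2.0, slope_above=6.0, intercept=10.0):
    """Hinge model: the slope changes once x crosses the breakpoint."""
    if x <= breakpoint:
        return intercept + slope_below * x
    # Above the breakpoint, continue from the value at the kink with a steeper slope.
    return intercept + slope_below * breakpoint + slope_above * (x - breakpoint)

for x in (30, 50, 70, 90):
    print(x, round(linear_forecast(x), 1), round(piecewise_forecast(x), 1))
# The two models agree below the breakpoint and diverge sharply above it;
# that divergence is exactly the error straight-line planning makes.
```
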
Piecewise behaviour – below/above a threshold the system follows different rules (kinks, steps, plateaus).
Capacity & congestion – utilisation near 80–95% makes queues explode (queueing theory; see the sketch after this list).
Step-fixed costs – costs jump when you add a shift, machine, or team; margins bend at those volumes.
Network thresholds – value accelerates once a critical density is reached or R > 1 (network effects, epidemics).
Psychological & UX cliffs – e.g., page load > 3s tanks conversion; forms beyond 5–7 fields spike drop-off.
Incentive cliffs – targets/bonuses create bunching and gaming around cut-offs.
Regulatory/contract tiers – tax bands, capital ratios, covenants, SLAs – behaviour changes at the line.
Statistical change-points – mean/variance shifts; detect with segmented regression or change-point tests.
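
The capacity point above can be made concrete with the textbook M/M/1 result for mean time in system, W = 1/(μ − λ). A minimal sketch, assuming a single server and an illustrative service rate, shows how waits and queues explode as utilisation approaches 1:

```python
# Sketch of the capacity/congestion breakpoint using the M/M/1 queue.
# Mean time in system: W = 1 / (mu - lam), valid for utilisation rho = lam/mu < 1.
# The service rate is an illustrative assumption.

service_rate = 10.0  # jobs per hour a single server can handle (assumed)

for utilisation in (0.50, 0.70, 0.80, 0.90, 0.95, 0.99):
    arrival_rate = utilisation * service_rate
    time_in_system = 1.0 / (service_rate - arrival_rate)   # hours, W = 1/(mu - lam)
    queue_length = utilisation ** 2 / (1.0 - utilisation)  # mean number waiting (Lq)
    print(f"rho={utilisation:.2f}  time-in-system={time_in_system * 60:6.1f} min  queue={queue_length:6.1f}")

# Waits climb gently up to ~80% utilisation, then explode; this is the practical
# reason for the "keep utilisation <= 85%" buffer suggested later in this note.
```
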
Pricing & packaging – volume discounts, feature gates, geo pricing; design tiers at natural breakpoints (see the tier sketch after this list).
Capacity planning – when to add lines, servers, or headcount; protect flow before queues explode.
Growth & product – referral loops that work only after a user/activity threshold; latency/SLA breakpoints.
Risk & finance – margin call/covenant triggers; liquidity turning points.
Policy & compliance – thresholds for KYC, fraud checks, or audit intensity.
People & incentives – quota bands, promotion thresholds, pass/fail rubrics.
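
The pricing tiers mentioned above are often expressed as a graduated price table. A small sketch, with tier boundaries and unit prices that are purely illustrative assumptions, shows how total price stays piecewise linear and bends at each tier breakpoint:

```python
# Sketch of a graduated volume-discount table (boundaries and prices are assumptions).
# Each unit is charged at the rate of the tier it falls in, so total price is
# piecewise linear and bends at every tier breakpoint.

TIERS = [                 # (units up to and including, unit price) - illustrative
    (100, 10.00),
    (500, 8.50),
    (2000, 7.00),
    (float("inf"), 6.00),
]

def order_price(units: int) -> float:
    """Graduated pricing: price each unit at the rate of the tier it falls in."""
    total, lower = 0.0, 0
    for upper, unit_price in TIERS:
        in_tier = max(0, min(units, upper) - lower)
        total += in_tier * unit_price
        lower = upper
        if units <= upper:
            break
    return total

for units in (80, 100, 101, 500, 501, 2000, 2001):
    print(units, round(order_price(units), 2), round(order_price(units) / units, 2))
# Average unit price kinks at 100, 500 and 2000 units: the "natural breakpoints"
# around which tiers are designed.
```
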
Pick the variable pair – outcome vs driver (e.g., lead time vs utilisation; conversion vs fields; profit vs volume).
Visualise the range – scatter + moving average; look for kinks/steps/plateaus.
Model piecewise – fit segmented regression or simple “below/above” rules; write down the breakpoint value(s) (see the sketch after this list).
Probe near the edge – run small experiments around suspected thresholds to confirm causality.
Set triggers – pre-agree actions when a metric crosses a line (add capacity, change tier, throttle traffic).
Build buffers – stay a safe distance from bad cliffs (e.g., keep utilisation ≤ 85%, cash ≥ covenant + headroom).
Monitor & re-estimate – thresholds drift with mix, tech and behaviour; review quarterly.
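
One way to carry out the “model piecewise” step is a grid-search segmented regression: try candidate breakpoints, fit an ordinary least-squares line on each side, and keep the split with the smallest total squared error. A sketch on synthetic data; the true kink at x = 60 is an assumption made for illustration:

```python
import numpy as np

# Sketch: estimate a single breakpoint by grid search over candidate split points,
# fitting an OLS line on each side and minimising the total sum of squared errors.
# The synthetic data (true kink at x = 60) is purely illustrative.

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 100, 200))
y = np.where(x < 60, 1.0 * x, 60 + 4.0 * (x - 60)) + rng.normal(0, 5, x.size)

def segment_sse(xs, ys):
    """Sum of squared errors of a one-segment least-squares line."""
    if xs.size < 2:
        return np.inf
    coeffs = np.polyfit(xs, ys, 1)
    return float(np.sum((ys - np.polyval(coeffs, xs)) ** 2))

candidates = np.linspace(x.min() + 5, x.max() - 5, 100)
best_bp = min(candidates,
              key=lambda bp: segment_sse(x[x < bp], y[x < bp]) + segment_sse(x[x >= bp], y[x >= bp]))
print(f"estimated breakpoint ~ {best_bp:.1f}")  # should land near the true kink at 60
```
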
Linear bias – straight-line planning in non-linear domains.
Confounding – mistaking a confounded shift (seasonality, mix shift) for a genuine breakpoint; replicate before locking in policy.
Coarse data – sampling too coarsely hides the kink; capture at finer granularity.
Cliff incentives – cut-offs that encourage gaming, sandbagging, or risk-shifting.
Overreacting to noise – false “breaks” from randomness; use bands and confirmation rules (see the sketch after this list).
Hysteresis – the return path differs (e.g., churned users don’t come back at the same threshold); don’t assume symmetry.
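
Several of the steps above (set triggers, build buffers, don’t overreact to noise) can be combined into one pre-agreed rule: arm the trigger only when several consecutive readings breach the buffer line. A minimal sketch; the limit, buffer and confirmation window are assumptions:

```python
# Sketch of a pre-agreed trigger with a buffer and a confirmation rule.
# The limit, buffer and 3-reading confirmation window are assumptions.

UTILISATION_LIMIT = 0.85   # the line we never want to cross (the "cliff")
BUFFER = 0.05              # act this far before the line
CONFIRM = 3                # consecutive breaches required before acting

def trigger_fired(readings, limit=UTILISATION_LIMIT, buffer=BUFFER, confirm=CONFIRM):
    """True once the last `confirm` readings all exceed limit - buffer."""
    threshold = limit - buffer
    recent = readings[-confirm:]
    return len(recent) == confirm and all(r > threshold for r in recent)

history = []
for reading in (0.72, 0.81, 0.78, 0.82, 0.83, 0.84):
    history.append(reading)
    if trigger_fired(history):
        print(f"trigger: utilisation held above {UTILISATION_LIMIT - BUFFER:.2f} "
              f"for {CONFIRM} readings; add capacity or throttle traffic")
        break
else:
    print("no trigger yet")
```
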
