
The Mental Models Operating System — combining Kahneman's bias inventory with Munger's latticework into an operational decision-quality playbook

A working playbook for operators who want decision quality at scale, drawn from Kahneman and Munger.

Premise

Most operators try to improve decision quality by thinking harder in the moment. Kahneman and Munger both reject this. Kahneman's evidence: System 2 (deliberate thinking) is too lazy to override System 1 (intuitive thinking) by willpower alone — the only durable fix is process. Munger's evidence: late-career outperformance comes from accumulated wisdom (more models, better pattern recognition), not harder effort.

The implication is operational: build a personal and organisational decision-quality OS, ship the routines that run on it, and let the OS catch errors before they propagate.

The five routines

1. Pre-mortem before commitment (Kahneman + Munger)

Before any high-stakes commitment (launch, hire, pivot, large purchase, strategic bet), run two short exercises:

- Pre-mortem (Kahneman, via Gary Klein): assume it is twelve months from now and the decision has failed badly. Each participant independently writes the story of how it failed.
- Inversion (Munger): instead of asking "how do we make this succeed?", ask "what would guarantee this fails?" and check the plan against that list.

Why both: the pre-mortem surfaces unknown failure modes; inversion surfaces known failure modes that the team is rationalising away.

2. Independent estimates before group discussion (Kahneman)

For any judgment that gets aggregated (hiring debriefs, deal qualification, launch readiness), enforce:

- Every rater records their estimate or verdict in writing before anyone speaks.
- Estimates are collected and displayed together before discussion begins.
- Discussion starts from the spread, not from the most senior person's number.

This kills two effects at once: anchoring (the first speaker sets the range) and noise (unwanted variance between raters, which silent aggregation turns from an invisible liability into a visible quality signal). Cards: Noise is at least as damaging as bias — and most orgs have no instrument to even see it; The first number sets the range — anchoring decides the negotiation before it starts.
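The aggregation step is simple enough to sketch. A minimal illustration in Python (the function and field names are assumptions, not part of the playbook):

```python
from statistics import median, pstdev

def aggregate_silently(estimates: dict[str, float]) -> dict:
    """Summarise independently collected estimates before any discussion.

    `estimates` maps rater -> estimate, gathered in writing so that no
    rater anchors on another's number.
    """
    values = list(estimates.values())
    return {
        "median": median(values),            # the group's aggregate judgment
        "spread": pstdev(values),            # noise: rater-to-rater variance
        "range": (min(values), max(values)), # where discussion should start
    }

# Four raters score a candidate independently; discussion opens on the
# spread between 6 and 9, not on whoever would have spoken first.
summary = aggregate_silently({"ana": 6, "ben": 9, "chi": 7, "dev": 6})
print(summary["median"], summary["range"])  # → 6.5 (6, 9)
```

The point of the sketch is the ordering, not the arithmetic: the spread exists before anyone speaks, so it can be seen rather than anchored away.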

3. Reference-class forecasting on every plan (Kahneman)

For any timeline, budget, or projection, the inside view (this specific plan, this team, this spec) is systematically optimistic. Force the outside view:

- Identify the reference class: similar projects by this team or comparable teams.
- Get the distribution of actual outcomes for that class, at minimum the median.
- Anchor the forecast on the class median, then adjust for genuine differences, sparingly.

If the inside view is 8 weeks and the class median is 16 weeks, the gap is the planning fallacy. Card: The planning fallacy guarantees every launch timeline is optimistic — the fix is the outside view.
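The arithmetic of the outside view is easy to operationalise. A sketch, assuming a simple weighted anchor (the 0.25 inside-view weight is an illustrative assumption, not a prescribed constant):

```python
from statistics import median

def outside_view_forecast(inside_weeks: float,
                          reference_outcomes: list[float],
                          inside_weight: float = 0.25) -> dict:
    """Anchor on the reference-class median, adjust toward the inside view.

    Keeping `inside_weight` small reflects the playbook's advice: adjust
    away from the class median sparingly.
    """
    class_median = median(reference_outcomes)
    forecast = (1 - inside_weight) * class_median + inside_weight * inside_weeks
    return {
        "class_median": class_median,
        "planning_fallacy_gap": class_median - inside_weeks,
        "forecast": forecast,
    }

# Inside view says 8 weeks; five comparable past projects took 12-22 weeks.
result = outside_view_forecast(8, [12, 14, 16, 19, 22])
print(result)  # gap of 8 weeks is the planning fallacy made visible
```

Here the class median is 16, the inside view is 8, and the blended forecast lands at 14 weeks: the gap itself is the diagnostic, the blend is just a starting anchor for discussion.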

4. Incentive audit on every puzzle (Munger)

When behaviour puzzles you — a competitor's odd move, a customer's irrational purchase, a colleague's disengagement — before reaching for psychology or strategy, do the incentive audit:

- List every party to the behaviour and what each is actually paid, measured, promoted, or punished for.
- Compare the stated goals with the real payoff matrix; note every divergence.
- Re-read the puzzling behaviour as a rational response to the real payoffs.

Most puzzling behaviour resolves into rational choice given the actual payoff matrix. Card: When behavior puzzles you, look at incentives — the model every other model is downstream of.

5. Lollapalooza watch on launches and pivots (Munger)

When the team agrees enthusiastically and the answer feels obviously right — that is the moment to suspect 3+ biases stacking in the same direction (social proof + authority + anchoring + loss aversion). Force structured contradiction:

- Assign someone to argue the strongest case against the decision, in writing.
- Name each bias that could be pulling toward the consensus answer, and count them.
- If three or more pull the same way, treat the enthusiasm itself as a warning signal and slow the decision down.

Card: Lollapalooza: when 3+ biases pull the same way, the outcome breaks single-model reasoning. The check is not "is this decision correct" but "are multiple biases pulling toward this answer."

The boundary discipline

On top of the five routines, two boundary practices keep the OS calibrated:

The bias index card

Carry — literally or in a pinned doc — a one-page list of the biases the OS is checking for:

- Anchoring: the first number heard sets the range.
- Planning fallacy: the inside view is always optimistic.
- Social proof: agreement feels like evidence.
- Authority: the senior voice substitutes for the data.
- Loss aversion: sunk positions get defended past their value.
- Incentive-caused bias: people believe what they are paid to believe.
- Noise: the same inputs get different verdicts from different raters.

When making any high-stakes call, check each entry. If two or more apply, slow down and route the decision through one of the five routines.
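The card check is mechanical enough to sketch. A minimal illustration (the list contents and the two-bias threshold follow the text above; the function name is an assumption):

```python
# Biases named in this playbook; the card check ticks each one that
# plausibly applies and routes the decision if two or more do.
INDEX_CARD = frozenset({
    "anchoring", "planning fallacy", "social proof",
    "authority", "loss aversion", "incentive-caused bias", "noise",
})

def card_check(applies: set[str]) -> str:
    """Return the routing verdict for a high-stakes call."""
    unknown = applies - INDEX_CARD
    if unknown:
        raise ValueError(f"not on the card: {sorted(unknown)}")
    if len(applies) >= 2:
        return "slow down: route through one of the five routines"
    return "proceed"

print(card_check({"anchoring", "social proof"}))  # two apply -> slow down
print(card_check({"noise"}))                      # one apply -> proceed
```

The value of making the check explicit is that "slow down" stops being a judgment call made under the very biases the card is screening for.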

What to skip

Counter-stances

Sources

All cards listed under uses_cards above. See also Decision quality at scale comes from process design, not from individual brilliance or harder thinking for the underlying convergence and Stay inside the circle vs. ship into the unknown — Munger and Altman on opposite stances toward unknown territory for the boundary tension.
