A working playbook for operators who want decision quality at scale, drawn from Kahneman and Munger.
Premise
Most operators try to improve decision quality by thinking harder in the moment. Kahneman and Munger both reject this. Kahneman's evidence: System 2 (deliberate thinking) is too lazy to override System 1 (intuitive thinking) by willpower alone — the only durable fix is process. Munger's evidence: late-career outperformance comes from accumulated wisdom (more models, better pattern recognition), not harder effort.
The implication is operational: build a personal and organisational decision-quality OS, ship the routines that run on it, and let the OS catch errors before they propagate.
The five routines
1. Pre-mortem before commitment (Kahneman + Munger)
Before any high-stakes commitment (launch, hire, pivot, large purchase, strategic bet), run two short exercises:
- Pre-mortem (Kahneman): "It is six months from now. This bet has failed. Write the post-mortem." Forces the team to enumerate the failure modes that WYSIATI ("what you see is all there is") hides.
- Inversion (Munger): "What would guarantee failure?" Then refuse to do those things. (Card: Invert, always invert: instead of "how do I succeed?" ask "what would guarantee failure?")
Why both: the pre-mortem surfaces unknown failure modes; inversion surfaces known failure modes that the team is rationalising away.
2. Independent estimates before group discussion (Kahneman)
For any judgment that gets aggregated (hiring debriefs, deal qualification, launch readiness), enforce:
- Each rater submits a written rating before hearing anyone else's.
- Aggregate the ratings mechanically (median, average, or rubric-weighted) before deliberation.
- Use deliberation only to resolve disagreements, not to set positions.
This kills two effects at once: anchoring (the first speaker sets the range) and noise (unwanted variance between raters, which independent written ratings reveal as a quality signal). Cards: Noise is at least as damaging as bias — and most orgs have no instrument to even see it; The first number sets the range — anchoring decides the negotiation before it starts.
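A minimal sketch of the aggregation step in Python, assuming a 1-5 rating scale; the spread threshold and field names are illustrative, not from the source:

```python
from statistics import median, stdev

def aggregate_ratings(ratings: list[float], spread_threshold: float = 1.0) -> dict:
    """Mechanically combine independent ratings submitted before any discussion.

    The median is robust to a single extreme rater; the spread between
    raters is the noise reading that independent ratings make visible.
    """
    if len(ratings) < 2:
        raise ValueError("need at least two independent raters")
    spread = stdev(ratings)  # unwanted variance between raters = noise
    return {
        "score": median(ratings),
        "noise": spread,
        # deliberate only to resolve disagreement, never to set positions
        "needs_deliberation": spread > spread_threshold,
    }

# Four interviewers rate a candidate 1-5, each in writing, before the debrief.
print(aggregate_ratings([4.0, 4.5, 2.0, 4.0]))
# -> {'score': 4.0, 'noise': 1.108..., 'needs_deliberation': True}
```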
3. Reference-class forecasting on every plan (Kahneman)
For any timeline, budget, or projection, the inside view (this specific plan, this team, this spec) is systematically optimistic. Force the outside view:
- Identify the reference class (other launches like this one, other migrations like this one, other hires for this role).
- Pull the actual distribution of outcomes from that class.
- Adjust the inside-view estimate toward the class median.
If the inside view is 8 weeks and the class median is 16 weeks, the gap is the planning fallacy. Card: The planning fallacy guarantees every launch timeline is optimistic — the fix is the outside view.
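As a sketch, the adjustment step can be made mechanical. The 0.25 inside-view weight below is an illustrative assumption (Kahneman gives no fixed number), and the reference-class data is hypothetical:

```python
from statistics import median

def outside_view_estimate(inside_view: float,
                          reference_class: list[float],
                          inside_weight: float = 0.25) -> float:
    """Shrink an inside-view estimate toward the reference-class median.

    inside_weight is how much credence the specific plan gets; the rest
    goes to the base rate. Raise it only with concrete evidence that this
    project genuinely differs from its reference class.
    """
    return inside_weight * inside_view + (1 - inside_weight) * median(reference_class)

# The example from the text: inside view 8 weeks, class median 16 weeks.
past_launches = [12, 14, 16, 18, 26]  # hypothetical comparable launches, in weeks
print(outside_view_estimate(8, past_launches))  # -> 14.0
```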
4. Incentive audit on every puzzle (Munger)
When behaviour puzzles you — a competitor's odd move, a customer's irrational purchase, a colleague's disengagement — before reaching for psychology or strategy, do the incentive audit:
- What is actually rewarded here? (Not the stated KPI; the actual one.)
- Who decides what gets rewarded?
- What does this person have to do to keep their job / status / income?
Most puzzling behaviour resolves into rational choice given the actual payoff matrix. Card: When behavior puzzles you, look at incentives — every other model is downstream of them.
5. Lollapalooza watch on launches and pivots (Munger)
When the team agrees enthusiastically and the answer feels obviously right — that is the moment to suspect 3+ biases stacking in the same direction (social proof + authority + anchoring + loss aversion). Force structured contradiction:
- Devil's advocate: assigned to argue against the consensus.
- Red team: independent group reviews from a hostile prior.
- Cool-off period: 48 hours between agreement and commitment.
Card: Lollapalooza: when 3+ biases pull the same way, the outcome breaks single-model reasoning. The check is not "is this decision correct" but "are multiple biases pulling toward this answer."
The boundary discipline
On top of the five routines, two boundary practices keep the OS calibrated:
- Circle of competence (Munger): name the decisions you are inside-circle on. Refuse the others or recruit someone whose circle covers the gap. Card: Knowing what you don't know beats being brilliant — the discipline is the boundary, not the expansion.
- Latticework breadth (Munger): over years, internalise the foundational ideas from 5-10 disciplines (psychology, economics, biology, statistics, history, engineering, game theory). Cross-domain analogy is what makes the OS run automatically rather than by lookup. Cards: Reliable thinking requires 80-90 mental models from multiple disciplines, not one; Mental models compound only if they run automatically — looking up the right model in the moment is too slow.
The bias index card
Carry — literally or in a pinned doc — a one-page list of the biases the OS is checking for:
- Anchoring — first number sets the range. Control who anchors. (The first number sets the range — anchoring decides the negotiation before it starts)
- Loss aversion — losses 2× as painful as gains; the value lift has to clear this. (Losses feel about 2× as painful as equivalent gains — switching costs are paid in pain, not dollars)
- Planning fallacy — every timeline is optimistic. Run the outside view. (The planning fallacy guarantees every launch timeline is optimistic — the fix is the outside view)
- WYSIATI — confidence rises as data thins. Enumerate unknowns. (The less you know, the more confident you are — WYSIATI builds the cleanest stories from the thinnest data)
- Noise — unwanted variance between identical judgments. Audit it. (Noise is at least as damaging as bias — and most orgs have no instrument to even see it)
- Lollapalooza — when biases stack, outcomes break single-model reasoning. (Lollapalooza: when 3+ biases pull the same way, the outcome breaks single-model reasoning)
When making any high-stakes call, check each entry. If two or more apply, slow down and route the decision through one of the five routines.
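One way to make the card executable rather than decorative, as a sketch: the yes/no prompts paraphrase the entries above, and the two-hit rule is the one from this section.

```python
# Paraphrased yes/no prompts for each entry on the bias index card.
BIAS_CARD = {
    "anchoring": "Did a first number or first speaker set the range?",
    "loss_aversion": "Is the value lift smaller than ~2x the perceived loss?",
    "planning_fallacy": "Is the timeline built from the inside view only?",
    "wysiati": "Is confidence high while the data is thin?",
    "noise": "Would a second rater plausibly score this differently?",
    "lollapalooza": "Are 3+ of the above pulling toward the same answer?",
}

def check_call(flags: set[str]) -> str:
    """flags: the card entries answered 'yes' for this high-stakes call."""
    unknown = flags - BIAS_CARD.keys()
    if unknown:
        raise ValueError(f"not on the card: {unknown}")
    if len(flags) >= 2:
        return "Slow down: route through one of the five routines."
    return "Proceed."

print(check_call({"anchoring", "planning_fallacy"}))
# -> Slow down: route through one of the five routines.
```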
What to skip
- Process theatre. Rubrics that get filled in but ignored when senior people overrule them. The discipline has to be culturally enforced or it adds friction without improving quality.
- Over-rotation to System 2. Kahneman's claim is not "always deliberate." It is "deliberate when the cost of wrong is high." For high-frequency tactical decisions, System 1 speed dominates and process overhead destroys throughput.
- Universal application of circle-of-competence. In genuinely novel categories (frontier AI, new platforms), strict circle discipline means missing windows. See Stay inside the circle vs. ship into the unknown — Munger and Altman on opposite stances toward unknown territory for the productive tension.
Counter-stances
- **Gary Klein, Sources of Power** — in expert pattern-rich domains, trained intuition outperforms structured deliberation. The OS is most useful for slow, high-stakes, infrequent decisions; it is least useful for fast pattern-matching by genuine experts.
- **Tetlock, Superforecasting** — explicit Bayesian reasoning sometimes beats internalised pattern matching, particularly for forecasting outside the operator's lived domain. Calibration against a track record is the missing leg of the OS.
- **Bezos, Type 1 / Type 2 decisions** — the OS as described above is heavy. For Type 2 (reversible) decisions, ship-and-learn beats deliberation. The discipline is matching the weight of the process to the type of the decision.
Sources
All cards listed under uses_cards above. See also Decision quality at scale comes from process design, not from individual brilliance or harder thinking for the underlying convergence and Stay inside the circle vs. ship into the unknown — Munger and Altman on opposite stances toward unknown territory for the boundary tension.