
An automation that works 95% of the time is not an automation

By Cat Wu · Head of Product, Claude Code + Co-work, Anthropic · 2026-04-27 · podcast · How Anthropic's product team moves faster than anyone else — Lenny's Podcast

Tier B · TL;DR
An automation that works 95% of the time isn't an automation: the residual 5% costs more attention per run than the original task did, and the compounding payoff only shows up at 100%.

Claim

If an automation does not work 100% of the time, it isn't really an automation — the last 5–10% takes more time to manage than the original task did to do once. Most users abandon at 95%; the payoff is repeated runs at full reliability, which only materializes if you push to 100%.

Mechanism

Manual exception handling is more cognitively expensive than the original work because every run forces a context switch into "is this the 5% case?" Below 100%, that overhead grows linearly with the number of runs. At 100%, the automation drops out of mind and its value compounds with each repetition. Building the automation is slower than doing the task once; the math only works at full coverage.
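
A back-of-the-envelope sketch of that break-even, in Python. Every cost figure here (build_cost, task_cost, check_cost, exception_cost, in minutes of attention) is an illustrative assumption, not a number from the talk; only the shape of the comparison matters.

```python
# Back-of-the-envelope model of the automation break-even described above.
# Every number below is a hypothetical assumption (minutes of attention),
# not a figure from the source.

def automated_cost(runs: int, reliability: float, *,
                   build_cost: float = 120.0,    # assumed one-off cost to build the automation
                   check_cost: float = 4.0,      # assumed per-run "is this the 5% case?" overhead
                   exception_cost: float = 30.0  # assumed cost of handling a failed run by hand
                   ) -> float:
    """Total attention spent over `runs` executions at the given reliability."""
    if reliability >= 1.0:
        # At full coverage the automation drops out of mind: no checking, no exceptions.
        return build_cost
    per_run = check_cost + (1.0 - reliability) * exception_cost
    return build_cost + runs * per_run


def manual_cost(runs: int, task_cost: float = 5.0) -> float:
    """Total attention spent doing the task by hand every time (assumed 5 min/run)."""
    return runs * task_cost


if __name__ == "__main__":
    print(f"{'runs':>5}  {'manual':>7}  {'95% auto':>9}  {'100% auto':>9}")
    for runs in (10, 50, 200):
        print(f"{runs:>5}  {manual_cost(runs):>7.0f}  "
              f"{automated_cost(runs, 0.95):>9.0f}  {automated_cost(runs, 1.00):>9.0f}")
```

With these assumed numbers, the 95% automation never catches up with simply doing the task by hand, while the 100% version pays back its build cost after roughly two dozen runs.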

Conditions

Holds when:

The task is deterministic, runs repeatedly, and its failure modes can be enumerated and closed out, so full coverage is actually reachable.

Fails when:

The output is inherently probabilistic (e.g., AI-generated judgments), where 100% is unattainable and the better move is surfacing uncertainty rather than chasing full coverage.

Evidence

"If an automation doesn't work 100% of the time, it's not really an automation. That last 5–10% takes more time. Most users give up at 95%."

— Cat Wu on Lenny's Podcast, 2026-04-27

Signals

Counter-evidence

Aishwarya Naresh Reganti and Kiriti Badam's CCCD argument cuts the other way: AI products are non-deterministic, and in many domains demanding 100% is a category error. The right move is transparent uncertainty (confidence scores, multiple hypotheses) rather than a binary automation-or-not call. Cat's rule applies cleanly to deterministic automation tasks and less cleanly to probabilistic AI outputs.
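
A minimal sketch of what that transparent-uncertainty alternative can look like; the Hypothesis type, the 0.98 auto-accept threshold, and the sample values are hypothetical, not drawn from CCCD or the talk.

```python
# Sketch of surfacing uncertainty instead of forcing a binary automated/not call.
# The Hypothesis type, threshold, and sample values are hypothetical.

from dataclasses import dataclass


@dataclass
class Hypothesis:
    answer: str
    confidence: float  # model-estimated probability in [0, 1]


def present(hypotheses: list[Hypothesis], auto_accept_at: float = 0.98) -> str:
    """Auto-commit only above a high confidence bar; otherwise show ranked alternatives."""
    ranked = sorted(hypotheses, key=lambda h: h.confidence, reverse=True)
    top = ranked[0]
    if top.confidence >= auto_accept_at:
        return top.answer  # behaves like a deterministic automation on the easy cases
    # Below the bar, expose the uncertainty instead of silently guessing.
    options = "\n".join(f"  {h.answer} ({h.confidence:.0%})" for h in ranked[:3])
    return "needs review:\n" + options


print(present([Hypothesis("invoice-1042", 0.62), Hypothesis("invoice-1024", 0.31)]))
```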

Cross-references
