Claim
Technological acceleration in frontier AI is not a sudden event but a continuous process: it appears vertical when looking forward and flat when looking backward. Each step feels incremental in hindsight even when the cumulative change is dramatic. This framing reorients AI strategy from "wait for the breakthrough event" to "operate inside an ongoing curve."
Mechanism
Humans normalise rapid change. Today's capabilities (multi-step agentic execution, near-real-time multimodal understanding, durable cross-session memory) felt impossible four years ago and now feel routine. Each new capability is absorbed as the new normal within weeks: the coming five years feel enormous looking forward, while the past five years feel unremarkable looking back. This is structurally different from the "singularity event" frame, which expects a discrete moment of qualitative change. The smooth-curve framing has operating consequences: stop waiting for the AGI moment and start operating against the curve as it is. The capability you cannot build today will be available in 12-18 months; plan for it.
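The forward/backward asymmetry described above is a property of any smooth exponential, not evidence of a discrete event. A minimal sketch, using a hypothetical capability index that doubles each year (an illustrative assumption, not a measured trend):

```python
# Sketch: why one smooth exponential reads "vertical" forward and "flat" backward.
# The doubling-per-year index is a hypothetical illustration, not real data.
def capability(year: int) -> float:
    return 2.0 ** year

now = 10
ahead = capability(now + 2) - capability(now)    # change over the next 2 years
behind = capability(now) - capability(now - 2)   # change over the past 2 years
print(ahead / behind)  # 4.0 -- forward change dwarfs backward change at every point
```

The ratio is the same wherever `now` sits on the curve, which is the point: from inside the curve, the future always looks vertical and the past always looks flat, with no breakthrough moment required.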
Conditions
Holds when:
- The operator is making medium-horizon strategic bets in AI-adjacent categories.
- The capability trajectory is in fact roughly continuous (typical of the current LLM scaling regime).
- The operator can hold both stances simultaneously: today's constraints are real, tomorrow's are different.
Fails when:
- The operator expects a single discontinuous breakthrough (AGI moment, capability cliff) that breaks the smooth-curve assumption.
- The category is bounded by a non-AI constraint (regulation, hardware, energy) that doesn't scale with model capability.
- The decision is short-horizon and depends only on today's exact capabilities.
Evidence
"it always looks vertical looking forward and flat going backwards, but it is one smooth curve."
— see raw/expert-content/experts/sam-altman.md line 13.
Signals
- Strategy decks plan against capability trajectories with explicit 6-, 12-, and 24-month milestones, not against a fixed "today's capability."
- Product roadmaps include "build for the model 12 months out" alongside "ship for the model today" features.
- Hiring and team design assume capability trajectory continues; not betting against the curve.
Counter-evidence
The smooth-curve thesis is a forecast, not a fact. There are plausible scenarios where capability hits an asymptote (data scarcity, training-cost scaling, regulatory limits) and the curve bends flat. The "smooth" assumption holds within the scaling-laws regime; if that regime breaks, planning against a continuing curve produces over-investment. The Munger tension is adjacent: at the frontier, Altman's smooth-curve stance and Munger's circle-of-competence stance each make sense in different domains.
Cross-references
- The cost of intelligence is converging toward the cost of electricity — durable advantage isn't using AI, it's parlaying AI — Altman's adjacent thesis on the long-run trajectory.
- When intelligence is abundant, taste, judgment, relationships, and the ability to identify what is worth doing become the scarce resources — what becomes scarce when intelligence is abundant.
- Stay inside the circle vs. ship into the unknown — Munger and Altman on opposite stances toward unknown territory — the productive tension between Munger's stay-inside vs. Altman's ship-into-unknown stance.