Claim
Foundation-model labs are running Google's classic commoditize-the-complement playbook against application-layer software. Product teams should tag every feature as core or complement on a quarterly cadence, then concede or bundle complements gracefully rather than defend them.
Mechanism
A free, good-enough version of any complement to the foundation-model layer (chat, code-completion, summarization, retrieval, simple agents) reduces the value the application layer can charge for. Defending complements wastes engineering and marketing spend that should fund the core differentiator. The strategic move is to identify what the foundation-model layer cannot easily commoditize (proprietary data, distribution, regulated category trust, specialized workflows, network effects) and concentrate investment there. Teams that don't run this audit quarterly drift into protecting eroding margins.
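The audit described above can be encoded as a simple routing rule: tag each feature, then invest in core and concede complements. A minimal sketch, with hypothetical feature names and a made-up revenue split chosen only for illustration:

```python
from dataclasses import dataclass
from enum import Enum

class Tag(Enum):
    CORE = "core"              # defensible: data, distribution, regulated trust, workflow depth
    COMPLEMENT = "complement"  # overlaps foundation-model-layer capability

@dataclass
class Feature:
    name: str
    tag: Tag
    revenue_share: float  # fraction of current revenue attributed to this feature

def audit_actions(features):
    """Route each feature per the playbook: invest in core,
    concede (bundle or ship free) complements."""
    return {
        f.name: "invest" if f.tag is Tag.CORE else "bundle-or-free"
        for f in features
    }

# Hypothetical quarterly portfolio snapshot
portfolio = [
    Feature("proprietary-data-search", Tag.CORE, 0.60),
    Feature("chat-summarization", Tag.COMPLEMENT, 0.25),
    Feature("code-completion", Tag.COMPLEMENT, 0.15),
]
actions = audit_actions(portfolio)
```

The `revenue_share` field is what makes conceding painful in practice: a complement carrying 25% of revenue still gets routed to "bundle-or-free" under this rule, which is exactly the political-will condition below.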
Conditions
Holds when:
- The product has any feature surface that overlaps with foundation-model-layer capability.
- The team has the political will to deprecate features that don't fit the core.
- The market has alternative cores (data, distribution, regulation) the team can defend.
Fails when:
- The product's "complement" is actually its core (the team mis-tags and concedes the moat).
- Foundation-model commoditization moves slower than the team's bet assumes, so the conceded complement is still a real revenue line.
- The category has no defensible core and the team is genuinely a complement business; conceding gracefully means winding down.
Evidence
"A free, good-enough product is enough to change market dynamics."
— Tomasz Tunguz, https://tomtunguz.com/competitive-strategy-in-ai/, 2026-04-24
The historical reference is Google commoditizing operating systems (Android), browsers (Chrome), and productivity (Workspace) to defend the search/ads core. Tunguz argues Anthropic and OpenAI are running the same playbook against application-layer SaaS in 2026.
Signals
- Feature audits explicitly tag each feature core/complement and route deprecation decisions accordingly.
- Pricing and packaging shifts reflect concession of complements (bundle them or ship for free) rather than premium pricing on commoditized capabilities.
- Investment concentrates on data, distribution, regulated trust, or workflow depth — surfaces the model layer can't easily replicate.
Counter-evidence
The diagnosis depends on correctly identifying what the model layer can or can't commoditize, and that boundary moves with each model release. Teams that conceded too early have given up real revenue. The right cadence is quarterly re-tagging, not one-time strategic surgery.
Cross-references
- Software is not a moat — ecosystems, hardware, and distribution are — closely related: the moat sits beyond the software itself.
- AI makes specificity profitable; the Pareto distribution flattens at the long tail — Seufert's parallel claim from the creative side.
- The economic moat in AI is post-training on proprietary data, not pre-training a base model — the labs' own answer for what is hard to commoditize.