a builder's codex

LLM-as-OS, post-training as moat

Convergence

Four operators argue the platform layer is the LLM, the apps are agents, and the moat is post-training on proprietary data — not pre-training a base model and not the surrounding software. Implication: foundation-model labs are running a commoditize-the-complement playbook; defensibility for application companies sits in proprietary data + integration + ecosystem.

Implication

For application companies, the moat lives in data flywheels, ecosystem positioning, and control of hardware and distribution. Review quarterly which features are at risk of being absorbed by the next foundation-model release; assume commoditize-the-complement pressure is permanent. For platform companies, the post-training pipeline is the IP.
