Position A — Post-training on proprietary data is the moat
- Operator: Asha Sharma (Microsoft AI)
- Claim: The economic moat in AI is post-training on proprietary data, not pre-training a base model.
Position B — Software/data is not the moat; ecosystems and distribution are
- Operators: Evan Spiegel (Software is not a moat — ecosystems, hardware, and distribution are), Brian Balfour (Every distribution platform follows a four-step cycle; cycles are getting shorter), Elena Verna (Build earned channels — every dollar in algorithm channels makes Google richer, not you)
- Claim: Software is not a moat. Ecosystems, hardware, distribution, and earned audience are. By extension, the model layer (including post-training) is not where defensibility sits.
Conditions distinguishing them
- Layer of the stack: Sharma operates at the foundation/applied-AI layer, where proprietary data IS the differentiator between one model deployment and another. Spiegel/Balfour/Verna operate at the application/consumer layer, where models commoditise rapidly and audience capital is the durable asset.
- Time horizon: The post-training moat is real this cycle but exposed to commoditize-the-complement pressure (Tunguz: AI labs are running the commoditize-the-complement playbook; tag features as core or complement quarterly). Distribution moats compound for decades.
- Buyer: Post-training matters most to enterprise B2B AI vendors. Distribution matters most to consumer and PLG companies.
Resolution / synthesis
Genuine layered contradiction. Sharma's claim that post-training is the moat contradicts Spiegel's claim that software (which includes ML weights) is not the moat. Post-training and distribution cannot both be the dominant moat for the same business.
Resolution by layer:
- For an applied AI model company (Anthropic, OpenAI, MS-AI products), post-training is the current moat, but it sits under foundation-model commoditisation pressure that steadily erodes it.
- For an application company shipping AI features, post-training is not durable; distribution and ecosystem are.
The cards together describe a moat hierarchy: distribution > ecosystem > post-training > pre-training. Sharma is right within her layer; Spiegel is right across layers. The genuine disagreement: at the application-company layer, would you bet defensibility on a proprietary post-training pipeline (Sharma's implied answer: yes) or on distribution (Spiegel's answer: distribution, since software isn't the moat)? Most evidence in the corpus tilts toward Spiegel for application companies and toward Sharma for model providers.