Claim
In AI-mediated search, optimization means engineering individual passages (H2-bounded blocks) to survive chunking and to embed cleanly against likely query variants, rather than optimizing whole pages for keyword rank.
Mechanism
AI search systems run a retrieval pipeline: chunk the document into passages, embed each passage, score those embeddings against the query fan-out (the model expands one query into many sub-queries), then assemble the best-matching passages into a synthesized answer. A page that ranks #1 for a head term can still be invisible in AI Mode if its passages don't survive the chunker (too short, too long, poor entity density) or don't embed well against the right sub-queries. The work moves from page-level signals (links, meta tags, headings) to passage-level signals (size, entity shape, citation hooks, query-fan-out coverage).
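The pipeline can be sketched end to end. This is a toy model under stated assumptions, not any vendor's actual system: `chunk_by_h2` assumes H2-bounded passages as the claim describes, and the bag-of-words `embed`/`cosine` pair stands in for a real dense embedding model.

```python
import math
import re
from collections import Counter

def chunk_by_h2(markdown: str) -> list[str]:
    """Split a markdown doc into H2-bounded passages (toy chunker)."""
    parts = re.split(r"(?m)^## ", markdown)
    # Drop any preamble before the first H2; restore the heading marker.
    return ["## " + p.strip() for p in parts[1:]]

def embed(text: str) -> Counter:
    """Stand-in embedding: bag-of-words term counts.

    Real systems use dense vectors from a neural encoder; the cosine
    comparison below works the same way either way.
    """
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(doc: str, fan_out: list[str], top_k: int = 2) -> list[str]:
    """Score each passage against every fan-out sub-query; keep the best.

    A passage only needs to match ONE sub-query well (max, not mean) --
    which is why fan-out coverage matters at the passage level.
    """
    passages = chunk_by_h2(doc)
    scored = [(max(cosine(embed(p), embed(q)) for q in fan_out), p)
              for p in passages]
    scored.sort(reverse=True, key=lambda pair: pair[0])
    return [p for _, p in scored[:top_k]]
```

Note the `max` over sub-queries: a passage that cleanly matches one expanded sub-query can beat a page-level keyword match that spreads thin across all of them.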
Conditions
Holds when:
- AI Mode, AI Overviews, ChatGPT, Perplexity, or Claude is a meaningful share of the discovery surface.
- The page can be edited at the H2 level without breaking other constraints (legal, brand voice, taxonomy).
- The team has access to embedding tooling or a query-fan-out simulator.
Fails when:
- The product wins on direct/branded traffic and AI surfaces are <10% of acquisition.
- Editorial constraints prevent passage-level rewrites (heavily templated CMS, brand-locked structure).
- Passage scoring becomes Goodharted: operators write passages for the chunker, not the reader, and user trust erodes.
Evidence
"Answers are generated, not linked."
iPullRank reports from controlled studies: pages rewritten to the relevance-engineering spec showed 34% higher citation retention versus baseline; 67%+ of enterprise SEO budgets are allocated to GEO in 2026; "Result-as-a-Service" contracts are up 340% YoY.
— Mike King, How AI Mode Works, https://ipullrank.com/how-ai-mode-works, and AI Search Manual, 2026-04-22
Signals
- Passage-level audits replace page-level audits in the SEO/AEO workflow.
- Mean retrievability scores per passage (size, entity start, citation hooks) are tracked alongside or instead of rank.
- Editorial tooling exposes chunk boundaries to writers in real time.
- Citation retention rises against a measured baseline.
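A per-passage retrievability score like the one these signals describe might be tracked with a heuristic such as the following sketch. Every threshold and check here (`MIN_WORDS`/`MAX_WORDS`, the entity-at-start test, the number-as-citation-hook test) is an illustrative assumption, not a metric defined in the source.

```python
import re

# Illustrative bounds only; real chunkers have their own size behavior.
MIN_WORDS, MAX_WORDS = 40, 300

def retrievability_score(passage: str, known_entities: set[str]) -> float:
    """Score one passage on size, entity start, and citation hooks (0..1)."""
    words = passage.split()
    size_ok = MIN_WORDS <= len(words) <= MAX_WORDS
    # "Entity start": a known entity appears in the first sentence.
    first_sentence = re.split(r"[.!?]", passage, maxsplit=1)[0].lower()
    entity_start = any(e.lower() in first_sentence for e in known_entities)
    # "Citation hook" proxy: the passage contains a quotable figure or stat.
    citation_hook = bool(re.search(r"\b\d+(\.\d+)?%?\b", passage))
    return (size_ok + entity_start + citation_hook) / 3
```

Tracking the mean of this score across a page's passages, alongside rank, is one way to operationalize the second signal above.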
Counter-evidence
Some operators argue that chunker behavior changes faster than content can be rewritten: each model generation alters embedding spaces and chunk sizes, so passage-level investments depreciate quickly, and content engineered for one chunker can underperform when the model shifts. The right hedge is to optimize for human readability AND chunker survival; optimizing solely for the chunker is fragile.
Cross-references
- Measure AI search on three layers: Presence, Readiness, Business Impact — Mike King's framework slots into Aleyda's Readiness layer.
- Citation rate and mention rate are different metrics; comparative content closes the gap — Indig's mention-vs-citation split explains why some surviving passages still don't earn brand recall.
- Sharper POV beats exhaustive coverage when an LLM is the summarizer — Amanda Natividad's content-side conclusion: POV passages beat exhaustive ones.
- AEO is a GTM capability, not an SEO experiment — Maja Voje on who should own this work in the org.