Claim
Agents now read your site in fundamentally different patterns than humans, and the platform-level moves that serve them are concrete and stackable: (1) access control via robots.txt + an agent-permissions.json declaration, (2) discovery via llms.txt, (3) capability signaling with explicit "this product does X with Y" statements, (4) content formatting that puts the answer first inside a 200-token TL;DR, (5) token surfacing so a single page doesn't blow an agent's context window, and (6) a UX bridge like a "Copy as Markdown" button so a human can easily hand the page to their agent. Treating these as a six-layer stack — and shipping each layer deliberately — converts an agent-illegible site into one agents can use, cite, and recommend.
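The access-control declaration in layer (1) has no settled spec. The source piece includes an agent-permissions.json example; the shape below is a hedged illustration only — the field names (`agents`, `allow`, `deny`, `rate_limit`) are assumptions, not a standard, and GPTBot stands in for any named crawler.

```json
{
  "version": "0.1",
  "agents": {
    "*": { "allow": ["/docs/", "/help/"], "deny": ["/checkout/"] },
    "GPTBot": { "allow": ["/"], "rate_limit": "60/minute" }
  },
  "contact": "agents@example.com"
}
```

Pair it with robots.txt: robots.txt handles the coarse allow/deny that crawlers already honour, while a declaration file like this can carry the finer-grained intent (rate limits, per-agent scopes) once agents agree on where to look for it.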
Mechanism
A web page produced for human readers fails an agent on multiple axes at once. Agents fetch in 1-2 HTTP requests where humans browse for minutes; analytics built for humans don't see the agent traffic at all; a 193,000-token page silently gets truncated by the agent's context window. None of that is fixed by writing better headlines. Each of the six layers is a different mechanism: access controls to let the right agents in (and keep the wrong ones out), discovery so the agent knows what to read, capability declarations so the agent can match the page to a query, formatting so the answer is reachable in the first 200 tokens, token discipline so the page fits, and a UX bridge so humans-with-agents have a clean handoff. They compound — none alone produces the lift, but together they convert a site from agent-hostile to agent-friendly.
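The discovery layer (2) can be sketched concretely. A minimal example following the emerging llms.txt convention — an H1 title, a one-line blockquote summary, then sections of annotated markdown links — with a hypothetical site and paths:

```text
# Example Product

> Example syncs CRM records between tools X and Y. Links below are
> ordered by what an agent most often needs.

## Docs
- [Quickstart](https://example.com/docs/quickstart.md): install and first sync
- [API reference](https://example.com/docs/api.md): endpoints, auth, rate limits
- [Integrations](https://example.com/docs/integrations.md): supported tools

## Optional
- [Changelog](https://example.com/changelog.md): release history
```

The annotations after each link do the capability-signaling work of layer (3) in miniature: the agent can match a query to a page without fetching everything.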
Conditions
Holds when:
- The site is reasonably content-driven (docs, help center, integration pages, comparison pages).
- The team has authority to ship platform-level changes (robots.txt, schema, page templates).
- The audience has measurable agent traffic or is in a category where agents are imminent.
Fails when:
- The site is genuinely brand-only with no informational role for agents to fulfill.
- The org's CMS / publishing constraints prevent shipping per-page TL;DRs at scale.
- The work gets cargo-culted (every layer half-built, none working) — partial implementation often performs worse than not bothering.
Evidence
The source piece names each layer with concrete syntax — agent-permissions.json example, llms.txt discovery, the 200-token TL;DR pattern, the 193,000-token page case where context windows get exceeded. The 6-layer stack is the explicit organising frame.
— Addy Osmani, Agentic Engine Optimization, https://addyosmani.com/blog/agentic-engine-optimization/, 2026-05-01.
Signals
- An llms.txt exists at the site root and is kept current.
- Top-tier integration / comparison pages start with a 200-token answer-first TL;DR before any narrative.
- A "Copy as Markdown" button exists on the highest-intent pages.
- Page tokenisation is a measurable property of the publishing pipeline (CI flags any page over a threshold).
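The last signal above — tokenisation as a measurable pipeline property — can be sketched as a CI gate. This is a hypothetical check, not the source's implementation: it uses a crude chars-per-token heuristic, where a real pipeline would run the tokenizer of the models it targets (e.g. tiktoken), and the budget value is an assumption to tune per target model.

```python
# Hypothetical CI gate: flag any published page whose estimated token
# count exceeds a budget, so no single page blows an agent's context
# window. chars/4 is a crude average for English prose.

CHARS_PER_TOKEN = 4
TOKEN_BUDGET = 8_000  # assumed per-page budget; tune per target model


def estimate_tokens(text: str) -> int:
    """Rough token estimate from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)


def check_pages(pages: dict[str, str], budget: int = TOKEN_BUDGET) -> list[str]:
    """Return paths of pages whose token estimate exceeds the budget."""
    return [
        path for path, body in pages.items()
        if estimate_tokens(body) > budget
    ]


if __name__ == "__main__":
    pages = {
        "/docs/quickstart": "short page " * 100,
        "/docs/full-api-dump": "very long page " * 20_000,
    }
    for path in check_pages(pages):
        print(f"FAIL {path}: ~{estimate_tokens(pages[path])} tokens "
              f"over budget of {TOKEN_BUDGET}")
```

Wired into CI, a non-empty result fails the build, which makes the 193,000-token-page failure mode from the source a thing the pipeline catches before publish rather than something an agent silently truncates.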
Counter-evidence
- The agent ecosystem isn't standardised yet. llms.txt adoption is still under 11% of indexed domains as of mid-2026, and OpenAI/Anthropic/Google have not formally confirmed they act on it. Some of the six layers will turn out to be premature.
- For brand-only marketing pages where the goal is human emotional response, optimising for agent legibility can flatten the experience for the human reader. Treat agent-first content audits as a top-of-funnel / docs / help-center concern, not a brand-page concern.
Cross-references
- Agents are first-class product users; design for output reliability, not navigation — Verna's parallel framing from the PLG side; same converging structural shift.
- AEO is a GTM capability, not an SEO experiment — Maja Voje's framing of why this work belongs to PMM, not SEO.
- Simple agents reading rich, specific context outperform complex agents reading thin context — same author's (Voje) point about what makes agents perform: context. AEO content is one input.