Claim
Don't throw every task at one AI agent. Build one agent per role, each with its own context window, identity, and tool scope, and manage them like teammates.
Mechanism
Context is the bottleneck for agent quality, not raw model capability. A single agent asked to do nine jobs accumulates conflicting context across them and degrades on each. Splitting work across role-scoped agents keeps each agent's context window clean, lets you onboard each to one job, and lets you grant tool access per-role rather than per-account.
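The mechanism above can be sketched in code. This is a minimal illustration, not anything from the source: the agent names mirror the roster in the Evidence section, but the `RoleAgent` structure, tool names, and identity strings are all hypothetical, invented to show role-scoped context and per-role tool allowlists.

```python
from dataclasses import dataclass, field

@dataclass
class RoleAgent:
    """One agent per role: its own identity, context window, and tool scope."""
    name: str
    identity: str                 # standing instructions (the "identity file")
    tools: set[str]               # per-role tool allowlist, not per-account
    context: list[str] = field(default_factory=list)  # this role's history only

    def remember(self, note: str) -> None:
        # No cross-domain bleed: only this role's notes accumulate here.
        self.context.append(note)

    def can_use(self, tool: str) -> bool:
        # Misbehavior is corrected by editing this set, not by a stronger model.
        return tool in self.tools

# Hypothetical roster entries (names from the note, tools invented)
sam = RoleAgent("Sam", "Sales SDR: run PLG sweeps, draft soft outreach",
                tools={"crm.read", "email.draft"})
howie = RoleAgent("Howie", "Podcast research analyst",
                  tools={"web.search", "notes.write"})

sam.remember("Swept yesterday's PLG signups")
print(sam.can_use("email.draft"))   # in Sam's scope
print(sam.can_use("notes.write"))   # Howie's tool, denied for Sam
print(howie.context)                # empty: Sam's context never leaks to Howie
```

The design point is that each agent object owns its own `context` and `tools`; a single generalist would share one `context` list across all nine jobs, which is exactly the degradation the mechanism describes.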
Conditions
Holds when:
- The work decomposes into stable, nameable roles (sales SDR, exec assistant, research analyst).
- You have the operating discipline to maintain per-agent identity files and tool inventories.
Fails when:
- Roles are amorphous or shift weekly. Churn in the agent roster costs more than a single generalist would.
- The org has no per-agent provisioning (shared logins, shared inboxes). The security and audit costs swamp the productivity gain.
Evidence
"Where people stumble with OpenClaw is they think they can throw any task at a single agent and get great results."
"I have nine Slack channels. My marketing team's in one, sales in another, dev in another. My development team does not care what was posted on X today."
Claire ran nine agents on three Mac Minis: Polly (work EA), Finn (family), Sam (sales SDR), Howie (podcast research), Sage (course PM), Q (kids' tutor), and others. Sam-the-SDR replaced 10 hours/week of contractor work running PLG sweeps and drafting soft outreach.
— Claire Vo on Lenny's Podcast, 2026-04-28
Signals
- Per-agent context windows stay focused (no cross-domain bleed).
- Each agent has a single named owner role and a single tool set.
- New work either fits an existing agent or warrants a new one with its own identity, not a feature added to a generalist.
- Agents that misbehave are corrected via tool inventory or identity edits, not via a stronger model.
Counter-evidence
A single very strong general agent (e.g., long-context Claude with broad tool access) can outperform a poorly managed team of small agents. The win comes from management discipline, not from the architecture itself: operators who can't maintain identity files and tool docs may get worse outcomes from the team approach.
Cross-references
- Onboard agents the way you onboard an EA: progressive trust, named tiers — how to ramp a new agent or teammate onto tools
- The unlock for AI agent productivity is management skill, not technical skill — why the unlock is management, not engineering
- An LLM should maintain a wiki, not re-derive knowledge per query — the substrate that makes per-agent context durable