Domain: engineering
Strongest claims
- Agent-first content has six platform layers: access, discovery, capability, format, token, UX bridge · Addy Osmani
- When the agent isn't doing what you want, fix the context, not the model · Sherwin Wu
- Build for the model six months out, the current model will eat your scaffolding · Sherwin Wu
- AI capability is not evenly distributed, it spikes where labs have data, rewards, and verification loops · Andrej Karpathy
- Continuous Calibration, Continuous Development (CCCD) is the operating loop for AI products · Aishwarya Naresh Reganti
- Code and media are the only forms of leverage that don't require asking, labour and capital both come gated · Naval Ravikant
Adjacent domains
- ai-native · 31 co-occurrences
- leadership · 12 co-occurrences
- product · 12 co-occurrences
- research · 6 co-occurrences
- growth · 2 co-occurrences
- founder-craft · 2 co-occurrences
- gtm · 2 co-occurrences
- founder-operator · 1 co-occurrence
Synthesis patterns in engineering
- Build for the next model, not the current one
- Context, not capability, is the bottleneck
- Evals are data analysis: single judge, binary rubrics, error analysis first
- Principal/staff IC as force-multiplier archetype
- Verification, not execution, is the irreplaceable human job
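The eval pattern above (binary judges, validated before trusted) can be sketched in a few lines. This is a minimal illustration, not Hamel Husain's actual tooling: the judge here is a keyword stub standing in for an LLM-as-judge call, and the failure mode, trace format, and labels are all hypothetical.

```python
# One binary judge per failure mode, validated against human labels
# before it is trusted as an eval. The judge is a rule-based stub
# standing in for an LLM-as-judge call (illustrative only).

def judge_hallucinated_link(trace: str) -> bool:
    """Hypothetical binary judge for one failure mode: did the answer
    cite a URL without a supporting source in the retrieved context?"""
    return "http" in trace and "source:" not in trace

def agreement(judge, labeled_traces) -> float:
    """Fraction of traces where the judge matches the human label."""
    hits = sum(judge(t) == label for t, label in labeled_traces)
    return hits / len(labeled_traces)

labeled = [
    ("See http://example.com for details", True),   # human: hallucinated
    ("source: docs.md cites http://example.com", False),
    ("The answer is 42", False),
]

score = agreement(judge_hallucinated_link, labeled)  # 1.0 = full agreement
```

The point of the `agreement` step is the "humans first, machines second" ordering: a judge only graduates into the eval suite once it tracks human labels on a held-out sample.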
41 insights in engineering
- 10-80-10: human direction, AI execution, human polish · Arvid Kahl
- An automation that works 95% of the time is not an automation · Cat Wu
- Advisor-tool replaces ensemble-of-3 stability hacks at near-Sonnet rates · Cat Wu
- Agent-first content has six platform layers: access, discovery, capability, format, token, UX bridge · Addy Osmani
- AI has crossed the threshold to something indistinguishable from judgment and taste, winners will know what to build, not how · Matt Shumer
- Above a certain level, every problem is a people problem · Silvia Botros
- Building production ML systems at scale · Anand Karunan
- When the agent isn't doing what you want, fix the context, not the model · Sherwin Wu
- Build for the model six months out, the current model will eat your scaffolding · Sherwin Wu
- AI capability is not evenly distributed, it spikes where labs have data, rewards, and verification loops · Andrej Karpathy
- Continuous Calibration, Continuous Development (CCCD) is the operating loop for AI products · Aishwarya Naresh Reganti
- Code and media are the only forms of leverage that don't require asking, labour and capital both come gated · Naval Ravikant
- The dark factory: nobody reads the code, gated by a simulated QA swarm · Simon Willison
- Test automation that can't adapt to product changes creates a maintenance burden worse than manual testing · Dileep Krishna
- Give the model tools and a goal; do not hard-code the workflow · Boris Cherny
- Evals are systematic data analysis on your LLM application, start with error analysis, not tests · Hamel Husain
- Glue work is technical leadership, not a tax on the IC · Tanya Reilly
- n8n + MCP + Claude lets GTM teams build workflows in plain English · 🇺🇦 Ilya Azovtsev
- You can outsource thinking, but not understanding, verification is the new human job · Andrej Karpathy
- Cheap external model for grunt work; Claude only sees judgment · Kesava Mandiga
- Build LLM-as-judge as binary true/false, one judge per pesky failure mode, and validate against human labels · Hamel Husain
- An LLM should maintain a wiki, not re-derive knowledge per query · Andrej Karpathy
- The middle is hollowing out, execution gets automated, leaving spec-writing and verification as the high-value human tasks · Eugene Yan
- Close the feedback loop by mining session transcripts for patterns to promote into config · Eugene Yan
- November 2025 was the qualitative threshold, coding agents now almost always do what you tell them · Simon Willison
- Sample 100+ traces, write one free-form note per trace, let an LLM cluster the notes, humans first, machines second · Hamel Husain
- Hoard a personal repository of things that worked, coding agents will recombine them · Simon Willison
- The planning fallacy guarantees every launch timeline is optimistic, the fix is the outside view · Daniel Kahneman
- A principal IC is a force multiplier, not a more-senior senior · Silvia Botros
- Encode jargon shorthand once, save tokens forever · Simon Willison
- When a new model lands, re-read the system prompt and remove crutches · Cat Wu
- Simple agents reading rich, specific context outperform complex agents reading thin context · Maja Voje
- Don't ask "how long will this take?", ask "how much time do we want to spend on this?" · Jason Fried
- Prompts are code, Skills deserve testing, documentation, dependency mapping, performance profiling · Nate
- We are in the transition from Software 2.0 to Software 3.0, AI Engineers will build the majority of new applications · Swyx (Shawn Wang)
- Staff engineering has four archetypes, two of them rotate across teams by design · Will Larson
- Iatrogenics, when the intervention causes more harm than the disease, most "fixes" in complex systems are net-negative · Nassim Nicholas Taleb
- Agents come in three classes, tag each loop or under-resource it · Hamza Farooq
- A trace alone teaches nothing; learning requires feedback attached to the trace · Harrison Chase
- Underfund teams deliberately so AI substrate, not headcount, absorbs the work · Boris Cherny
- Use new tools as new tools, not as old tools, be ambitious and retry from scratch · Benjamin Mann
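Several insights above converge on one loop: sample traces, note failures, cluster the notes, promote recurring patterns into config. A minimal sketch, with all names and the threshold invented for illustration — in practice an LLM would cluster free-form notes into tags, whereas here the notes arrive pre-tagged so the example stays runnable:

```python
# Close the feedback loop: count recurring failure tags mined from
# session transcripts and promote the frequent ones into agent config.
# Tags and threshold are illustrative, not a real tool's schema.
from collections import Counter

def promote_patterns(notes: list[str], threshold: int = 2) -> list[str]:
    """Return tags seen at least `threshold` times, most frequent first."""
    counts = Counter(notes)
    return [tag for tag, n in counts.most_common() if n >= threshold]

session_notes = [
    "forgot-repo-conventions",
    "ran-tests-without-lockfile",
    "forgot-repo-conventions",
    "hallucinated-cli-flag",
    "forgot-repo-conventions",
    "ran-tests-without-lockfile",
]

config_candidates = promote_patterns(session_notes)
```

A one-off failure stays a note; only patterns that recur across transcripts earn a permanent line in the system prompt or config, which keeps the context lean.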