a builder's codex

By Dharmesh Shah · Founder and CTO at HubSpot. Helping millions grow better. · 2026-04-10 · thread

Tier B · TL;DR
Hallucinations are when the AI makes up things that it *thinks* are true -- but just aren't.

Claim

BREAKING NEWS: Seems like OpenAI may have come up with a way to dramatically reduce the hallucinations in AI models. Hallucinations are when the AI makes up things that it thinks are true -- but just aren't. The solution was brilliantly simple. So simple, I'm surprised we didn't come up with it sooner.

Mechanism

Most benchmarks grade on accuracy alone, so a wrong answer costs a model nothing more than an abstention -- a confident guess is never worse than admitting uncertainty, and sometimes better. That's why standardized tests (like the SAT) have a penalty for wrong answers: they want to remove the benefit of simply guessing.

Conditions

Holds when: the operating context matches the post's stated frame (team shape, stage, tooling, buyer type).

Fails when: the practice is lifted into a different stage or buyer context without reworking the underlying mechanism.

Evidence

"Hallucinations are when the AI makes up things that it thinks are true -- but just aren't."

— Dharmesh Shah, LinkedIn, 2026-04-10

Signals

Counter-evidence

No opposing view in current corpus.

Cross-references

Open the interactive view → View original source → Markdown source →