AI Hallucinations: Why AI Makes Things Up
AI models sometimes generate information that sounds perfectly plausible but is completely wrong. This is called a hallucination, and it’s one of the most important limitations to understand before relying on AI output.
Why It Happens
Recall from the first snack: LLMs generate text by predicting the most likely next word. They’re optimized to produce plausible-sounding text, not verified facts. The model doesn’t check a database, consult a source, or verify its claims — it generates whatever continuation fits the patterns it learned during training.
Making this worse, the training process rewards confident responses. During the feedback phase (Snack 2), models learn that clear, direct answers get higher ratings than hedging or saying “I’m not sure.” The result: a model that sounds authoritative even when it’s wrong.
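To make "predicting the most likely next word" concrete, here is a toy sketch. The probability table is invented purely for illustration (a real model learns billions of such patterns from training data), but the core point holds: the generator follows whatever continuation is most probable, with no step that checks whether the result is true.

```python
# Hypothetical hand-built table of next-word probabilities.
# A real LLM learns these patterns from training text; it does not
# store or consult facts.
NEXT_WORD = {
    "the": {"court": 0.6, "study": 0.4},
    "court": {"ruled": 0.9, "adjourned": 0.1},
    "ruled": {"that": 1.0},
}

def generate(word, steps=3):
    """Greedily follow the highest-probability continuation."""
    out = [word]
    for _ in range(steps):
        options = NEXT_WORD.get(out[-1])
        if not options:
            break
        # Pick the most likely next word -- plausibility, not truth.
        out.append(max(options, key=options.get))
    return " ".join(out)

print(generate("the"))  # "the court ruled that" -- fluent, but nothing was verified
```

The output reads like the start of a confident legal claim, which is exactly the failure mode: fluency and accuracy are produced by entirely different mechanisms, and only the first is what the model optimizes.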
What Hallucinations Look Like
They’re not random gibberish — that’s what makes them dangerous. They look like perfectly normal, well-structured responses:
Prompt: "What landmark Supreme Court case established the 'digital privacy doctrine'?"
Response: "In Henderson v. DataCorp (2019), the Supreme Court ruled that..."
This looks authoritative — but the case, the parties, and the ruling are all invented.
Real-World Examples
Hallucinations have caused real consequences:
- Legal filings: In 2023, a New York attorney submitted a court brief citing six cases generated by ChatGPT — none of them existed. The lawyer was sanctioned by the judge.
- Academic citations: Students and researchers have cited nonexistent papers invented by AI, complete with fabricated authors, journals, and DOIs that lead nowhere.
- Medical advice: AI chatbots have confidently given dosage recommendations and drug-interaction warnings that contradicted established medical guidelines.
Three Ways to Protect Yourself
1. Verify with sources. If the AI cites a study, look it up. Treat AI output like a first draft from a confident but sometimes unreliable colleague.
2. Ask for reasoning. When the model shows its work, you’re more likely to spot logical gaps. Vague explanations are a red flag.
3. Stay skeptical in high-stakes domains. Legal, medical, and financial information demands extra scrutiny. AI is a useful starting point, not the final authority.
The Good News
Hallucinations are getting less frequent with each generation of models — through better training, improved feedback, and techniques like retrieval-augmented generation. But they haven't been eliminated, and may never be eliminated entirely.
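The idea behind retrieval-augmented generation is simple: instead of letting the model answer purely from memory, first retrieve relevant passages from a trusted source and ground the prompt in them. The sketch below is a deliberately minimal illustration, with a three-sentence corpus and naive keyword-overlap scoring standing in for a real search index; production systems use far more sophisticated retrieval.

```python
import re

# A tiny stand-in for a trusted document store. (These are real cases,
# but the corpus itself is an illustrative assumption.)
CORPUS = [
    "Carpenter v. United States (2018) addressed cell-site location privacy.",
    "Riley v. California (2014) required a warrant to search a phone.",
    "The GDPR is a European Union data-protection regulation.",
]

def tokens(text):
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, corpus, k=2):
    """Rank passages by naive keyword overlap with the question."""
    q = tokens(question)
    ranked = sorted(corpus, key=lambda p: -len(q & tokens(p)))
    return ranked[:k]

def build_prompt(question):
    """Ground the model's answer in retrieved text instead of memory alone."""
    context = "\n".join(retrieve(question, CORPUS))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("Which case required a warrant to search a phone?"))
```

Because the model is instructed to answer from supplied text rather than from its learned patterns, there is much less room to invent a plausible-sounding case out of thin air — though retrieval reduces hallucination rather than guaranteeing its absence.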
You now understand both the power and the key risk of AI models. The final snack brings it all together: how to choose the right model for your needs.