When Not to Use AI: Limits, Risks & Blind Spots

AI is a powerful tool, but power without boundaries causes harm. Knowing when not to use AI is just as important as knowing how to use it well.

The Harm Assessment Checklist

Anthropic’s constitution includes a practical framework for evaluating whether AI use could cause harm. Before using AI in any consequential situation, consider these factors:

  • Probability — How likely is it that harm actually occurs?
  • Severity — How bad would the consequences be?
  • Reversibility — Can the damage be undone?
  • Breadth — How many people could be affected?
  • Consent — Have the people affected agreed to AI being involved?
  • Vulnerability — Are the people affected in a position where harm hits harder?

The more factors that point toward risk, the more caution you need — or the more clearly you should avoid AI entirely.
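The checklist above can be turned into a quick personal scoring habit. The sketch below is purely illustrative: the class name, the 0–2 scale, and the thresholds are hypothetical choices for demonstration, not part of Anthropic's or NIST's frameworks.

```python
# Illustrative sketch only: scoring the six harm-assessment factors.
# The 0-2 scale and the thresholds below are hypothetical, chosen
# for demonstration -- they are not from any official framework.
from dataclasses import dataclass


@dataclass
class HarmAssessment:
    probability: int    # 0 = harm unlikely, 2 = harm likely
    severity: int       # 0 = minor consequences, 2 = serious
    reversibility: int  # 0 = easily undone, 2 = permanent
    breadth: int        # 0 = one person affected, 2 = many
    consent: int        # 0 = informed consent given, 2 = none
    vulnerability: int  # 0 = resilient parties, 2 = vulnerable

    def risk_score(self) -> int:
        # More factors pointing toward risk -> higher total.
        return (self.probability + self.severity + self.reversibility
                + self.breadth + self.consent + self.vulnerability)

    def recommendation(self) -> str:
        score = self.risk_score()
        if score >= 8:
            return "avoid AI"
        if score >= 4:
            return "proceed with caution"
        return "low risk"


# Example: drafting a public FAQ -- low stakes, consent implicit.
faq = HarmAssessment(probability=0, severity=1, reversibility=0,
                     breadth=1, consent=0, vulnerability=0)
print(faq.recommendation())  # low risk
```

The point is not the exact numbers but the habit: walking through all six factors before delegating a consequential task, rather than deciding on gut feel.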

Hard Boundaries

Some uses are simply off-limits. Every major AI provider prohibits:

  • Generating child sexual abuse material
  • Providing instructions for weapons of mass destruction
  • Targeted political manipulation of individuals or demographics
  • Facilitating clearly illegal actions

These aren’t suggestions — they’re hard rules enforced at the platform level. NIST adds: when AI presents “unacceptable negative risk levels,” deployment should cease until risks can be managed.

The Professional Referral Principle

In regulated domains — healthcare, law, finance — AI should inform but never decide. It can help you research symptoms, draft a legal brief, or analyze financial data. But the final judgment must come from a qualified human professional.

AI can:     Summarize medical research for a patient's questions
AI can't:   Diagnose a condition or prescribe treatment

AI can:     Draft contract language for a lawyer to review
AI can't:   Provide binding legal advice

AI can:     Analyze spending patterns and flag anomalies
AI can't:   Make investment decisions on your behalf

NIST explicitly recommends considering non-AI alternatives — sometimes the right answer is a human expert or a simpler tool.

Your Judgment Matters Most

No checklist covers every situation. As Anthropic’s constitution puts it, the goal is “cultivating good values and judgment” rather than following rigid rules. The best question to ask yourself is simple: if something goes wrong with this AI output, what are the consequences — and can I live with them?

Knowing when not to use AI is about judgment. But when you do use it, you need practical skills to catch problems. Next, you’ll learn hands-on techniques for spotting and fixing bad AI output.

Quick Quiz

What should you always do before using AI in a regulated domain like healthcare or law?