Structured Output: Get JSON and More from LLMs
Structured output is the practice of getting an AI model to respond in a specific, predictable format: JSON, XML, markdown tables, CSV, or any schema you define. Instead of parsing free-text prose, you get data you can plug directly into code, databases, or downstream systems.
Why It Matters
A model's default response is natural language: great for conversation, terrible for automation. If you're building anything that processes AI output programmatically, you need structure. Extracting entities from emails? You want JSON. Generating a comparison? You want a table. Feeding results into another system? You need a consistent schema.
How to Get Structured Output
The simplest approach: show the format you want. Models are excellent at pattern-matching, so an example does more than a paragraph of instructions.
Extract the name, role, and company from this email signature.
Return the result as JSON:
{"name": "...", "role": "...", "company": "..."}
Signature: Jane Park, VP Engineering, Acme Corp
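On the receiving end, a reply in this format drops straight into standard JSON tooling. A minimal sketch, using a hypothetical canned reply in place of a real API call:

```python
import json

# Hypothetical model reply for the prompt above; in practice this string
# would come back from your LLM API call.
model_reply = '{"name": "Jane Park", "role": "VP Engineering", "company": "Acme Corp"}'

def parse_signature(reply: str) -> dict:
    """Parse the model's JSON reply and confirm the expected keys are present."""
    data = json.loads(reply)
    missing = {"name", "role", "company"} - data.keys()
    if missing:
        raise ValueError(f"model reply is missing fields: {missing}")
    return data

result = parse_signature(model_reply)
print(result["name"])  # Jane Park
```

The key check matters because a model can return syntactically valid JSON that still omits a field you asked for.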
For more complex schemas, add inline constraints to guide each field:
Analyze this product review. Return your analysis as JSON:
{
"sentiment": "positive" | "negative" | "mixed",
"key_topics": ["topic1", "topic2"],
"confidence": 0.0-1.0
}
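Those inline constraints are promises the model usually keeps, but verifying them is cheap. A sketch of a validator mirroring the schema above (the allowed values and range come from the prompt; the sample reply is hypothetical):

```python
import json

ALLOWED_SENTIMENTS = {"positive", "negative", "mixed"}

def validate_analysis(reply: str) -> dict:
    """Check a model reply against the inline constraints from the prompt:
    an allowed sentiment, a list of topics, and a confidence in [0, 1]."""
    data = json.loads(reply)
    if data.get("sentiment") not in ALLOWED_SENTIMENTS:
        raise ValueError(f"unexpected sentiment: {data.get('sentiment')!r}")
    if not isinstance(data.get("key_topics"), list):
        raise ValueError("key_topics must be a list")
    confidence = data.get("confidence")
    if not isinstance(confidence, (int, float)) or not 0.0 <= confidence <= 1.0:
        raise ValueError(f"confidence out of range: {confidence!r}")
    return data

# Hypothetical model reply:
reply = '{"sentiment": "mixed", "key_topics": ["battery", "price"], "confidence": 0.82}'
analysis = validate_analysis(reply)
```

Failing fast here turns a silent downstream bug into a loud, debuggable error.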
Structuring Your Prompts Too
Structure isn't just for output; it improves your inputs. Using delimiters like XML tags to organize your prompt helps the model distinguish instructions from context:
<instructions>
Summarize the document below in three bullet points.
</instructions>
<document>
{{DOCUMENT_TEXT}}
</document>
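One way to get the component-swapping benefit is to assemble the prompt from a small builder function. A sketch using the tag names from the example (the function name and sample document are illustrative):

```python
def build_summary_prompt(document_text: str) -> str:
    """Assemble the delimited prompt above. Swapping in a new document,
    or new instructions, only touches one component."""
    return (
        "<instructions>\n"
        "Summarize the document below in three bullet points.\n"
        "</instructions>\n"
        "<document>\n"
        f"{document_text}\n"
        "</document>"
    )

prompt = build_summary_prompt("Quarterly revenue rose 12% on strong subscriptions.")
```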
This is especially powerful when prompts grow complex. Clear boundaries reduce misinterpretation and let you swap components without rewriting the whole prompt.
Tips
- Provide an example of the exact output format; this is the single most effective technique
- Name fields clearly: `sentiment_score` beats `ss`
- Specify constraints inline: `"rating": 1-5` or `"status": "approved" | "rejected"`
- Use API features when available: both Claude and GPT offer structured output modes that enforce a schema at the API level, guaranteeing valid output
When your output is structured, everything downstream gets simpler: parsing, validation, logging, and debugging. Now that you can control what comes back, the next step is building prompts you can reuse across many different inputs.