Assuming this holds up, I wonder if a good workaround for this problem - the problem that turning on structured outputs makes errors more likely - would be to do this:
1. Prompt the LLM "extract numbers from this receipt, return data in this JSON format: ..." - without using the structured output mechanism.
2. If the returned JSON does indeed fit the schema then great, you're finished! But if it doesn't...
3. Round-trip the response from the previous call through the LLM again, this time with structured outputs configured. This should give you back the higher quality extracted data in the exact format you want.
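A minimal sketch of that flow, assuming the OpenAI Python SDK and Pydantic (the model name, prompt, and Receipt schema are placeholders, not from the article):

    from openai import OpenAI
    from pydantic import BaseModel, ValidationError

    class Receipt(BaseModel):
        merchant: str
        total: float
        currency: str

    client = OpenAI()
    PROMPT = ('Extract the merchant, total and currency from this receipt. '
              'Return JSON like {"merchant": ..., "total": ..., "currency": ...}.\n\n')

    def extract(receipt_text: str) -> Receipt:
        # Step 1: plain completion, no structured-output mechanism involved.
        free = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": PROMPT + receipt_text}],
        )
        raw = free.choices[0].message.content
        try:
            # Step 2: if the free-form answer already fits the schema, we're done.
            return Receipt.model_validate_json(raw)
        except ValidationError:
            # Step 3: round-trip the answer through a structured-outputs call.
            fixed = client.beta.chat.completions.parse(
                model="gpt-4o-mini",
                response_format=Receipt,
                messages=[{"role": "user",
                           "content": "Convert this answer to the requested schema:\n" + raw}],
            )
            return fixed.choices[0].message.parsed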
I usually start by adding an error type that will be overused by the LLM, and use that to gain visibility into the types of ambiguities that come up in real-world data. Then over time you can build a more correct schema and better prompts that help the LLM deal with ambiguities the way you want it to.
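For example (a hedged sketch; the receipt domain and field names are just for illustration), the deliberately overused error type can be an explicit branch in the schema:

    from typing import Literal, Optional, Union
    from pydantic import BaseModel

    class ReceiptTotal(BaseModel):
        kind: Literal["ok"] = "ok"
        total_cents: int
        currency: str

    class Ambiguity(BaseModel):
        # Deliberately broad escape hatch. The model will over-use it at first,
        # which is the point: every hit is a real-world ambiguity worth reviewing.
        kind: Literal["ambiguous"] = "ambiguous"
        reason: str
        candidate_values: Optional[list[str]] = None

    class ExtractionResult(BaseModel):
        result: Union[ReceiptTotal, Ambiguity]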
Also, a lot of the chain of thought issues are solved by using a reasoning model (which allows chain of thought that isn’t included in the output) or by using an agentic loop with a tool call to return output.
https://blog.dottxt.ai/say-what-you-mean.html
https://blog.dottxt.ai/prompt-efficiency.html
This also matches my own experiences.
Several issues were found:
1. A model may sometimes get stuck generating trailing whitespace forever (the JSON grammar allows it), which can lock up the entire vLLM instance. The solution was to use xgrammar, because it has a handy feature that disallows whitespace outside of strings.
2. In some cases I had to fiddle with schema metadata like minItems/maxItems for arrays, or the model would either hallucinate or refuse to generate anything.
3. Inference engines may reorder fields during generation, which can hurt output quality because of the autoregressive nature of LLMs (e.g., a "calculation" field must come before the "result" field). Make sure the fields are not reordered.
4. Field names must be as descriptive as possible, to guide the model to generate expected data in the expected form. For example, "durationInMilliseconds" instead of just "duration".
Basically, you can't expect a model to give you good results out of the box with structured outputs if the schema is poorly designed or underspecified.
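For instance, points 2 and 4 together, sketched as a Pydantic model (the field names and bounds are invented; the list constraints surface as minItems/maxItems in the generated JSON schema):

    from pydantic import BaseModel, Field

    class LapTiming(BaseModel):
        # Descriptive, unit-bearing names guide the model better than terse ones.
        duration_in_milliseconds: int = Field(ge=0)
        # Explicit bounds keep the model from emitting empty or runaway arrays.
        sector_times_in_milliseconds: list[int] = Field(min_length=1, max_length=3)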
Yes, the banana weighs 0.4 pounds. But the question was not to return the weight, the question was to return the quantity.
It seems like the prompt needs more instructions than the author is even aware of.
1. In the examples provided, the author compares freeform CoT + JSON output vs. non-CoT structured output. This is unfair and biases the results towards what they wanted to show. These days, you don't need to include a "reasoning" field in the schema as mentioned in the article; you can just use thinking tokens (e.g., reasoning_effort for OpenAI models). You get the best of both worlds: freeform reasoning and structured output. I tested this, and the results were very similar for both.
2. Let Me Speak Freely? had several methodological issues. I address some of them (and .txt's rebuttal) here: https://dylancastillo.co/posts/say-what-you-mean-sometimes.h...
3. There's no silver bullet. Structured outputs might improve or worsen your results depending on the use case. What you really need to do is run your evals and make a decision based on the data.
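On point 1, a minimal sketch of pairing hidden reasoning with a structured final answer, assuming the OpenAI Python SDK's parse helper and a reasoning model (model name and schema are placeholders):

    from openai import OpenAI
    from pydantic import BaseModel

    class Answer(BaseModel):
        value: int

    client = OpenAI()
    completion = client.beta.chat.completions.parse(
        model="o4-mini",              # any reasoning model
        reasoning_effort="medium",    # free-form thinking happens in hidden tokens
        response_format=Answer,       # only the final answer is schema-constrained
        messages=[{"role": "user", "content": "What's 17*4+1?"}],
    )
    print(completion.choices[0].message.parsed)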
I don't know if this is true. Libraries such as Pydantic AI (and, I would assume, the model provider SDKs) stream different event types. If CoT is needed, a <think> section is emitted first, and the structured response follows later, once the model begins its final answer.
Structured outputs can be quite reliable if used correctly. For example, I designed an AST structure that allows me to reliably generate SQL. The model has tools to inspect data points and view their value distributions (quartiles, medians, etc.). Once I get the AST structure back, I can perform semantic validation easily (just walk the tree like a compiler). Once semantic validation passes (or forces a re-prompt with the error), I walk the tree again to generate SQL. This lets me reliably generate SQL that I know won't fail during execution, gives me a lot of control over which data points are used together, and ensures valid values are used for them.
I think the trick is just generating the right schema to model your problem, and understanding the depth of an answer that might come back.
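A toy sketch of that approach (not the commenter's actual schema; tables, columns, and operators are invented): the LLM fills in the AST via structured output, then plain code validates it and renders SQL.

    from typing import Literal, Union
    from pydantic import BaseModel

    class Column(BaseModel):
        name: str

    class Comparison(BaseModel):
        column: Column
        op: Literal["=", "<", ">", "<=", ">="]
        value: Union[str, float, int]

    class Select(BaseModel):
        table: str
        columns: list[Column]
        where: list[Comparison] = []

    def validate(ast: Select, schema: dict[str, set[str]]) -> None:
        # Semantic validation: walk the tree like a compiler front-end.
        known = schema.get(ast.table)
        if known is None:
            raise ValueError(f"unknown table {ast.table!r}")
        for col in ast.columns + [c.column for c in ast.where]:
            if col.name not in known:
                raise ValueError(f"unknown column {col.name!r}")

    def to_sql(ast: Select) -> str:
        # Second walk: emit SQL only after semantic validation has passed.
        cols = ", ".join(c.name for c in ast.columns)
        sql = f"SELECT {cols} FROM {ast.table}"
        if ast.where:
            preds = " AND ".join(f"{c.column.name} {c.op} {c.value!r}" for c in ast.where)
            sql += f" WHERE {preds}"
        return sql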
Structured output was one of the lesser known topics that AI consultants and course writers got a lot of mileage out of because it felt like magic. A lot of management people would use ChatGPT but didn’t know how to bridge the text output into a familiar API format, so using a trick to turn it into JSON felt like the missing link. Now that I think about it, I don’t recall seeing any content actually evaluating the impact of constrained output on quality though.
This blog post blurs the lines between output quality reduction and incorrect error handling, though. I’d like to see some more thorough benchmarking that doesn’t try to include obvious schema issues in the quality reduction measurements.
The argument goes that because we are intentionally constraining the model - I believe OpenAI's method is roughly a softmax (I think; I'm rusty on my ML math) to get tokens sorted by probability, then taking the first one that fits the current state machine - we get less creativity.
Maybe, but a one-off vibes example is hardly proof. I still use structured output regularly.
Oh, and tool calling is almost certainly implemented atop structured output. After all, it's forcing the model to respond with JSON matching a schema that describes the tool arguments. I struggle to believe that this is adequate for tool calling but inadequate for general-purpose use.
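A toy illustration of the mechanism described above (greedy decoding; real implementations mask logits against a grammar state machine rather than a hard-coded allow set):

    def constrained_next_token(logits: dict[str, float], allowed: set[str]) -> str:
        # Sort candidate tokens by score (softmax preserves order, so sorting logits
        # is equivalent) and take the first one the current state machine allows.
        for token, _ in sorted(logits.items(), key=lambda kv: kv[1], reverse=True):
            if token in allowed:
                return token
        raise ValueError("the grammar allows none of the candidate tokens")

    # After emitting '{"quantity":' a JSON state machine only allows tokens that
    # can start a number (the token vocabulary here is purely illustrative).
    logits = {" banana": 2.1, " 2": 1.7, '"': 1.3, " 0": 0.4}
    print(constrained_next_token(logits, allowed={" 2", " 0"}))  # -> " 2"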
1. Add a validation step (using a mini model) right at the beginning - sub-second response times; the validation will either emit True/False or emit a function call
2. Use a sequence of (1) large model without structured outputs for reasoning/parsing, chained to (2) small model for constrained sampling/structured output
3. Keep your Pydantic models/schemas flat (not too nested, not too many enumerations) and "help" the model in the system prompt as much as you can
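A minimal sketch of point 1, using Instructor and a placeholder mini model (the schema and prompt are invented):

    import instructor
    from openai import OpenAI
    from pydantic import BaseModel

    class GateVerdict(BaseModel):
        is_receipt: bool   # the True/False emitted by the small, fast model

    gate = instructor.from_openai(OpenAI())

    def passes_gate(text: str) -> bool:
        # Cheap pre-check before spending tokens on the real extraction call.
        verdict = gate.chat.completions.create(
            model="gpt-4o-mini",   # placeholder: any small, fast model
            response_model=GateVerdict,
            messages=[{"role": "user", "content": f"Is the following text a receipt? {text}"}],
        )
        return verdict.is_receipt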
I'm a huge fan of structured outputs, but also recently started splitting both steps, and I think it has a bunch of upsides normally not discussed:
1. It separates concerns: schema validation errors don't invalidate the whole LLM response. If the only error is in generating schema-compliant tokens (something I've seen frequently), retries are much cheaper.
2. Having the original response as free text AND the structured output has value.
3. In line with point 1, it allows using a more expensive (reasoning) model for free-text generation, then a smaller model like gemini-2.5-flash to convert the outputs to structured text.
I used Python's Instructor[1], a package that forces the model output to match a predefined Pydantic model. It's used as in the example below, and the output is guaranteed to fit the model.
    import instructor
    from pydantic import BaseModel

    class Person(BaseModel):
        name: str
        age: int

    client = instructor.from_provider("openai/gpt-5-nano")
    person = client.create(
        response_model=Person,
        messages=[{"role": "user", "content": "Extract: John is a 30-year-old"}]
    )
    print(person)
I defined a response model for a chain-of-thought prompt, with the answer and its thinking process, then asked questions.

    class MathAnswer(BaseModel):
        value: int
        reasoning: str

    answer = client.create(
        response_model=MathAnswer,
        messages=[{"role": "user", "content": "What's the answer to 17*4+1? Think step by step"}]
    )
    print(f"answer={answer.value}, {answer.reasoning}")
This worked in most cases, but once in a while it produced very strange results: answer=67, First I calculated 17*4=68, then I added 1 so the answer is 69.
The actual implementation was much more complicated, with many complex properties, a lot of inserted context, and a long, engineered prompt, and it happened only a few times, so it took me hours to figure out whether it was caused by a programming bug or just the LLM's randomness. It turned out that because I defined MathAnswer in that order, the model output followed the same order and put the `reasoning` after the `value`, so the thinking process didn't influence the answer: `{"value": 67, "reasoning": "..."}` instead of `{"reasoning": "...", "value": 69}`. I just changed the order of the model's properties and the problem was gone.
    class MathAnswer(BaseModel):
        reasoning: str
        value: int
[1] https://python.useinstructor.com/#what-is-instructor

ETA: Codex and Claude Code only said how shit my prompt and RAG system were, then suggested how to improve them, but it only made the problem worse. They really don't know how they work.
Step one: ask the LLM to classify something from the prompt "creatively". For example, ask it to classify the color or category of a product in an e-commerce catalog or user request. Give examples of what valid instances of these entities look like and ask for output that looks like those examples (encourage the LLM to engage in creative hallucination). It often helps to have the LLM pick more than one, choosing many different, realistic, diverse labels.
Step two: with the hallucinated entities, look up the most similar "real" entities via embedding similarity, then return those.
It can save you a lot of tokens (you don’t have to enumerate every legal value). And you can get by with a tiny model.
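A sketch of the lookup step (the embedding function here is a throwaway bag-of-characters stand-in so the snippet runs; swap in a real embedding model, and the color catalog is invented):

    import numpy as np

    # Placeholder embedding: replace with a real embedding model.
    def embed(texts: list[str]) -> np.ndarray:
        vecs = np.zeros((len(texts), 256))
        for i, t in enumerate(texts):
            for ch in t.lower():
                vecs[i, ord(ch) % 256] += 1.0
        return vecs

    REAL_COLORS = ["forest green", "navy blue", "burgundy", "mustard yellow"]
    catalog_vecs = embed(REAL_COLORS)

    def snap_to_catalog(hallucinated: list[str]) -> list[str]:
        # Step two: map each "creative" label from the LLM to its nearest real label.
        vecs = embed(hallucinated)
        sims = (vecs @ catalog_vecs.T) / (
            np.linalg.norm(vecs, axis=1, keepdims=True)
            * np.linalg.norm(catalog_vecs, axis=1)
            + 1e-9
        )
        return [REAL_COLORS[int(i)] for i in sims.argmax(axis=1)]

    print(snap_to_catalog(["dark forest-y green", "deep navy"]))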
I would love some more detailed and reproducible examples, because the claims don't match all the use cases I've had.
However, I would say two things:
1. I doubt this quality drop couldn't be mitigated by first letting the model answer in its regular language and then doing a second constrained step to convert that into structured outputs.
2. For smaller models I have seen instances where the constrained sampling of structured outputs actually HELPS with output quality. If you can sufficiently encode information in the structure of the output, it can help the model; it effectively lets you encode simple branching mechanisms that execute at sample time.
Also, meta gripe: this article felt like a total bait-and-switch in that it only became clear that it was promoting a product right at the end.
Even Amazon’s cheapest and fastest model does that well - Nova Lite.
But even without using his framework, he did give me an obvious-in-hindsight method of handling image understanding.
I should have used a more advanced model to describe the image as free text and then used a cheap model to convert text to JSON.
I also had the problem that my process hallucinated that it understood the "image" contained in a Mac .DS_Store file.
And about structured outputs messing with chain-of-thought... Is CoT really used with normal models nowadays? I think that if you need CoT you might as well use a reasoning model, and that solves the problem.
It's usually more productive to write about how LLMs work rather than how they don't. In this case especially, there are improvements that can be made to the schema without giving up on the idea of schemas altogether.
Every model has built-in segmentation between reasoning/CoT and JSON output.
What if you put “float” instead of int to get the required number?
Also, the post is missing another use case: enums in structured data. I've been using it successfully for a few months now and it's doing a fantastic job.
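A hedged sketch of that pattern (the labels are illustrative): with an enum in the schema, constrained decoding can only ever emit one of the allowed values.

    from enum import Enum
    from pydantic import BaseModel

    class Sentiment(str, Enum):
        positive = "positive"
        neutral = "neutral"
        negative = "negative"

    class ReviewLabel(BaseModel):
        sentiment: Sentiment   # the sampler can only produce one of the enum values
        summary: str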
Also, XML works much better than JSON; all the model guides say this.
Thanks for sharing
https://blog.dottxt.ai/say-what-you-mean.html
The blog post is doubly bad because any "failures" involving images and image understanding can't necessarily be traced back to structured generation at all!!!
> you need a parser that can find JSON in your output and, when working with non-frontier models, can handle unquoted strings, key-value pairs without comma delimiters, unescaped quotes and newlines; and you need a parser that can coerce the JSON into your output schema, if the model, say, returns a float where you wanted an int, or a string where you wanted a string[].
Oh cool, I'm sure that will be really reliable. Facepalm.
> Allow it to respond in a free-form style: let it refuse to count the number of entries in a list, let it warn you when you've given it contradictory information, let it tell you the correct approach when you inadvertently ask it to use the wrong approach
This makes zero sense. The whole point of structured output is that a (non-AI) program is reading it. That program needs JSON input with a given schema. If it is able to handle contradictory-information warnings, or being told you're using the wrong approach, then that will be in the schema anyway!
I think the point about thinking models is interesting, but the solution to that is obviously to allow it to think without the structuring constraint, and then feed the output from that into a query with the structured output constraint.