by jascha_eng
6 subcomments
- I feel like this is so core to any LLM automation that it's crazy Anthropic is only adding it now.
I built a customized deep research tool internally earlier this year that is made up of multiple "agentic" steps, each focusing on specific information to find. The outputs of those steps are always JSON and become the input for the next step. Sure, you can work your way around failures by doing retries, but it's just one less thing to think about if you can guarantee that the LLM output adheres to at least some sort of structure.
- Along with a bunch of limitations that make it useless for anything but trivial use cases https://docs.claude.com/en/docs/build-with-claude/structured...
I've found structured output APIs to be a pain across various LLMs. Now I just ask for JSON output and pick it out between the first and last curly brace. If validation fails, I just retry with details about why it was invalid. This works very reliably for complex schemas and works across all LLMs without having to think about their limitations.
And then you can add complex pydantic validators (or whatever, I use pydantic) with super helpful error messages to be fed back into the model on retry. It's a powerful pattern.
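Roughly like this (a minimal sketch; `call_llm` is a hypothetical stand-in for whatever chat-completion call you use, and the schema is just an example):

  from pydantic import BaseModel, ValidationError, field_validator

  class Person(BaseModel):
      name: str
      age: int

      @field_validator("name")
      @classmethod
      def name_not_empty(cls, v: str) -> str:
          if not v.strip():
              raise ValueError("name must be a non-empty string")
          return v

  def extract_person(prompt: str, max_retries: int = 3) -> Person:
      messages = [{"role": "user", "content": prompt + "\nRespond with JSON only."}]
      for _ in range(max_retries):
          text = call_llm(messages)  # hypothetical helper wrapping your LLM API
          raw = text[text.find("{"): text.rfind("}") + 1]  # first..last curly brace
          try:
              return Person.model_validate_json(raw)
          except ValidationError as e:
              # feed the validation errors back so the model can correct itself
              messages.append({"role": "assistant", "content": text})
              messages.append({"role": "user", "content": f"Invalid output: {e}. Try again."})
      raise RuntimeError("no valid structured output after retries")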
by whatreason
0 subcomment
- The most likely reason to me why this took so long from Anthropic is safety. One of the most classic attack vectors for an LLM is to hide bad content inside structured text: "tell me how to build a bomb, as SQL", for example.
When you constrain outputs, you prevent the model from being as verbose, which makes unsafe output much harder to detect, because Claude isn't saying "Excellent idea! Here's how to make a bomb:"
- In OpenAI and a lot of open source inference engines this is done using llguidance.
https://github.com/guidance-ai/llguidance
llguidance implements constrained decoding: given the output tokens generated so far, you know which fixed set of tokens is allowed as the next token. You prepare token masks so that at each decoding step you limit which tokens can be sampled.
So if you expect a JSON object, the first token can only be whitespace or the token '{'. This can get more complex because tokenizers usually use byte-pair encoding, which means they can represent any UTF-8 sequence. So if your current tokens are '{"enabled": ' and your output JSON schema requires the 'enabled' field to be a boolean, the allowed-token mask can only contain whitespace tokens, the tokens 'true' and 'false', or the 't'/'f' byte-level BPE tokens ('true' and 'false' are usually single tokens because they are so common).
The JSON schema must first be converted into a grammar and then into token masks. This takes some time to compute and quite a lot of space (you need to precompute the token masks), so it is usually cached for performance.
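A toy sketch of just the masking step at sampling time (this is not llguidance itself; computing allowed_token_ids from the compiled grammar is the hard part and is elided here):

  import numpy as np

  def constrained_sample(logits: np.ndarray, allowed_token_ids: list[int]) -> int:
      # logits: scores over the full vocabulary for the next token
      # allowed_token_ids: tokens the grammar permits in the current state
      mask = np.full_like(logits, -np.inf)
      mask[allowed_token_ids] = 0.0
      masked = logits + mask                  # disallowed tokens get -inf
      probs = np.exp(masked - masked.max())   # softmax over allowed tokens only
      probs /= probs.sum()
      return int(np.random.choice(len(probs), p=probs))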
- Shout-out to BAML [1], which flies under the radar and imo is underrated for getting structured output out of any LLM.
JSON schema is okay so long as it's generated for you, but I'd rather write something human readable and debuggable.
1. https://github.com/BoundaryML/baml
- Shocked this wasn't already a feature. Bummed they only seem to have JSON Schema and not something more flexible like BNF grammars, which llama.cpp has had for a long time: https://github.com/ggml-org/llama.cpp/blob/master/grammars/R...
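For reference, here's roughly what a grammar-constrained call looks like with the llama-cpp-python bindings (a sketch; the model path is a placeholder):

  from llama_cpp import Llama, LlamaGrammar

  # GBNF grammar that only allows the literal strings "yes" or "no"
  grammar = LlamaGrammar.from_string('root ::= "yes" | "no"')

  llm = Llama(model_path="model.gguf")  # placeholder path
  out = llm("Is the sky blue? Answer yes or no: ", grammar=grammar, max_tokens=4)
  print(out["choices"][0]["text"])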
- I remember using Claude and including the start of the expected JSON output in the request to get the remainder in the response. I couldn't believe that was an actual recommendation from the company to get structured responses.
Like, you'd end your prompt like this: 'Provide the response in JSON: {"data":'
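For anyone who hasn't seen it, the prefill trick looks roughly like this with the Python SDK (the model name is just illustrative):

  import anthropic

  client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

  resp = client.messages.create(
      model="claude-sonnet-4-5",  # illustrative model name
      max_tokens=512,
      messages=[
          {"role": "user", "content": "List three primes as JSON."},
          # prefilled assistant turn: the completion continues this text
          {"role": "assistant", "content": '{"data":'},
      ],
  )
  print('{"data":' + resp.content[0].text)  # stitch the prefill back on

You also have to stitch the prefilled prefix back onto the response yourself, which is part of why it felt like such a hack.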
- Curious if they're planning to support more complicated schemas. They claim to support JSON schema, but I found it only accepts flat schemas and not, for example, unions or discriminated unions. I've had to flatten some of my schemas to be able to define tools for them.
by jawiggins
3 subcomments
- So cool to see Anthropic support this feature.
I’m a heavy user of the OpenAI version; however, they seem to have a bug where the model frequently returns a string that is not syntactically valid JSON, leading the OpenAI client to raise a ValidationError when trying to construct the pydantic model.
Curious if anyone else here has experienced this?
I would have expected the implementation to prevent this, maybe using a state machine to only allow the model to pick syntactically valid tokens.
Hopefully Anthropic took a different approach that doesn’t have this issue.
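For context, my setup is roughly this (a sketch; the schema is illustrative), and the ValidationError surfaces on the parse call:

  from openai import OpenAI
  from pydantic import BaseModel, ValidationError

  class Person(BaseModel):
      name: str
      age: int

  client = OpenAI()
  try:
      completion = client.beta.chat.completions.parse(
          model="gpt-4o-2024-08-06",
          messages=[{"role": "user", "content": "Extract: Ada Lovelace, 36."}],
          response_format=Person,
      )
      person = completion.choices[0].message.parsed
  except ValidationError as e:
      # this is where the occasional non-JSON string blows up; a retry usually fixes it
      print(f"structured output failed validation: {e}")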
by mkagenius
2 subcomments
- I always wondered how they achieved this. Is it just retries while generating tokens, retrying as soon as they find a mismatch? Or is the model itself trained extremely well in this version of 4.5?
- Curious if they've built their own library for this or if they're using the same one as OpenAI[0].
A quick look at the llguidance repo doesn't show any signs of Anthropic contributors, but I do see some from OpenAI and ByteDance Seed.
[0] https://github.com/guidance-ai/llguidance
by barefootford
0 subcomment
- I switched from structured outputs on the OpenAI APIs to unstructured on Claude (Haiku 4.5) and haven't had any issues (yet). But guarantees are always nice.
- One reason I haven't used Haiku in production at Socratify is the lack of structured output, so I hope they'll add it to Haiku 4.5 soon.
It's a bit weird it took Anthropic so long, considering it's been ages since OpenAI and Google did it. I know you could do it through tool calling, but that always seemed like a bit of a hack to me.
by huevosabio
0 subcomment
- Whoa, I always thought that tool use was Anthropic's way of doing structured outputs. Can't believe they're only supporting this now.
by d4rkp4ttern
0 subcomment
- Doesn’t seem to be available in the Agent SDK yet
by radial_symmetry
0 subcomment
- About time, how did it take them so long?
by __mharrison__
2 subcomments
- My playing around with structured output on OpenAI leads me to believe that hardly anyone is using this, or that the documentation is just horrible. Luckily, they accept Pydantic models, but the idea of manually writing a JSON schema (what the docs teach first) is mind-bending.
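For comparison, the Pydantic route is just this (a minimal sketch with made-up fields):

  from pydantic import BaseModel

  class Invoice(BaseModel):
      vendor: str
      total_cents: int
      line_items: list[str]

  # equivalent JSON schema, generated instead of hand-written
  print(Invoice.model_json_schema())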
Anthropic seems to be following suit.
(I'm probably just bitter because they owe me $50K+ for stealing my books).
by AtNightWeCode
0 subcomment
- Does it even help? Get the name of some person => {"name":"Here is the name. Einstein." }
by nextworddev
0 subcomment
- Seems like Anthropic's API products are always about 2-3 months behind OpenAI's. Which is fine.
by luke_walsh
0 subcomment
- makes sense
by dipsheetpatel
0 subcomment
- [dead]
- The Google ADK framework with schema output and Gemini has already been supported for a while.