I want to clarify what Ken meant by "entropy in the output token probability distributions." Whenever an LLM outputs a token, it chooses that token out of all possible tokens. The model assigns every possible output token a probability (APIs typically expose these as log-probabilities), and together these form a probability distribution: the output token probabilities sum to 1. Entropy measures how uncertain that distribution is. It can quantify whether the distribution is certain (one token has a 99.9% probability, and the rest share the leftover 0.1%) or uncertain (every token has roughly the same probability, so it's pretty much random which token is selected). Low entropy is the former case, and high entropy is the latter.

There is interesting research on the correlation of entropy with accuracy and hallucinations:
- https://www.nature.com/articles/s41586-024-07421-0
- https://arxiv.org/abs/2405.19648
- https://arxiv.org/abs/2509.04492 (when only a small number of probabilities are available, which is something we frequently deal with)
- https://arxiv.org/abs/2603.18940
- tons more; happy to chat about it if interested
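To make the idea concrete, here's a minimal sketch of computing the Shannon entropy of a token distribution from log-probabilities (the function name and the example distributions are my own, chosen to mirror the 99.9%-vs-uniform cases above):

```python
import math

def entropy_from_logprobs(logprobs):
    """Shannon entropy (in nats) of a distribution given log-probabilities.

    H = -sum(p * log(p)), where p = exp(logprob).
    """
    return -sum(math.exp(lp) * lp for lp in logprobs)

# Near-certain distribution over 10 tokens: one at 99.9%, rest share 0.1%.
certain = [math.log(p) for p in [0.999] + [0.001 / 9] * 9]

# Maximally uncertain distribution: 10 equally likely tokens.
uniform = [math.log(0.1)] * 10

print(entropy_from_logprobs(certain))  # low, about 0.01 nats
print(entropy_from_logprobs(uniform))  # high, ln(10) ≈ 2.30 nats
```

Note that real APIs often return only the top-k log-probabilities per token, so in practice this is computed over a truncated distribution (the limitation the third paper above addresses).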