This repository empirically demonstrates computational semiotics.
It is the only “metacognitive” (second-order) and metapragmatic (third-order) model I’m aware of.
I could imagine a harness that (1) reads LLM output, (2) uses a research sub-agent to attempt to verify any factual claims, and (3) rephrases the main agent's output so that it conveys uncertainty wherever a factual claim cannot be independently verified.
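The three steps above could be sketched roughly as follows. This is a minimal, hypothetical scaffold, not a real implementation: the claim extractor is a naive sentence splitter, and `verify` is an injected callable standing in for the research sub-agent (names like `Claim`, `hedge`, and `harness` are my own inventions for illustration).

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Claim:
    """A single factual claim pulled from the main agent's output."""
    text: str
    verified: bool = False


def extract_claims(output: str) -> List[str]:
    # Placeholder extractor: treat each sentence as one factual claim.
    # A real harness would use a claim-detection model here.
    return [s.strip() for s in output.split(".") if s.strip()]


def hedge(sentence: str) -> str:
    # Rephrase an unverified claim so it conveys uncertainty.
    return "It may be the case that " + sentence[0].lower() + sentence[1:]


def harness(output: str, verify: Callable[[str], bool]) -> str:
    # `verify` stands in for the research sub-agent's independent check.
    rewritten = []
    for text in extract_claims(output):
        claim = Claim(text=text, verified=verify(text))
        rewritten.append(claim.text if claim.verified else hedge(claim.text))
    return ". ".join(rewritten) + "."
```

For example, with a toy verifier that only accepts the first claim, `harness("Water boils at 100 C. The moon is made of cheese.", verify=lambda c: c.startswith("Water"))` keeps the first sentence verbatim and hedges the second.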
If I'm talking to a guy who says, "I have a really fast metabolism, that's why I can eat whatever I want," he's not hallucinating; he's full of shit.