Or maybe they would learn from feedback to use the system for some kinds of questions but not others? It depends on how easy it is to learn the pattern. This is a matter of user education.
Saying "I don't know" is sort of like an error message. Clear error messages make systems easier to use. If the system can give accurate advice about its own expertise, that's even better.
I use it for legal work and I find it struggles when a term is defined in a very specific way in one section and a different way in another. There's a lot of semantic weight in how definitions of words get rewritten across separate code sections and separate contracts, and the model loses track of it.
Hallucination is an issue. But so is the lack of consistency.
The same question asked on different occasions can produce different results. Not just variations in wording but in meaning as well.
There is nothing deterministic or consistent about the output, and that makes it unreliable.
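A quick sketch of one source of that variation, assuming the standard chat-completions API with nonzero sampling temperature (the model name and question here are just placeholders):

```python
# Sketch: identical prompt, repeated calls, potentially different answers.
# Assumes the OpenAI Python SDK; model name and question are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Does the definition of 'Affiliate' in section 7.2 match the one in section 12.1?"

answers = set()
for _ in range(5):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": question}],
        temperature=1.0,      # nonzero temperature => stochastic sampling
    )
    answers.add(resp.choices[0].message.content)

# With temperature > 0 the same question can yield several distinct answers;
# temperature=0 reduces the variation but, in practice, does not fully remove it.
print(f"{len(answers)} distinct answers out of 5 runs")
```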
> AI Overview: "The real meaning of humility is having an accurate, realistic view of oneself, acknowledging both one's strengths and limitations without arrogance or boastfulness, and a modest, unassuming demeanor that focuses on others. It's not about having low self-esteem but about seeing oneself truthfully, putting accomplishments in perspective, and being open to personal growth and learning from others."
Sounds like a good thing to me. Winning, even.
Calling the model ‘calibrated’ or ‘honest’ or ‘humble’ runs straight into the problem being called out: people don’t want a humble answer of ‘I don’t know’; they want a solution to their problem, confidently delivered so they can trust it.
Call the calibrated model ‘business mode’ and the guessing one ‘consumer mode’, and the problem is solved… to whatever extent it can be without regulation.
> Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty
This is a de facto false equivalence for two reasons.
First, test takers faced with hard questions have the capability of _simply not guessing at all._ UNC did a study on this [^1] by administering a light version of the AMA medical exam to 14 staff members who were NOT trained in the life sciences. While most of them consistently guessed answers, roughly 6% did not. Unfortunately, the study did not distinguish correct guesses from questions that were left blank. OpenAI's paper proves that LLMs, at the time of writing, simply do not have the self-awareness to know whether they _really_ don't know something, by design.
Second, LLMs are not test takers in the pragmatic sense. They are query answerers. Bar argument settlers. Virtual assistants. Best friends on demand. Personal doctors on standby.
That's how they are marketed and designed, at least.
OpenAI wants people to use ChatGPT like a private search engine. The sources it provides when it decides to use RAG are there more to instill confidence in the answer than to encourage users to check its work.
A "might be inaccurate" disclaimer on the bottom is about as effective as the Surgeon General's warning on alcohol and cigs.
The stakes are so much higher with LLMs. Totally different from an exam environment.
A final remark: I remember professors hammering "engineering error" margins into us when I was a freshman in 2005; 5% was the acceptable threshold. That we as a society are now okay with using a technology that has a >20% chance of giving users partially or completely wrong answers to automate as many human jobs as possible blows my mind. Maybe I just don't get it.
Once you train a model on a specific domain and add out-of-domain questions, or unresolvable questions within the domain, to the training data, things will improve; see the sketch below.
The question is whether that's desirable when most users have grown to love sycophantic, confident confabulators.
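A minimal sketch of that training-data idea, assuming a chat-style supervised fine-tuning JSONL format; the file name, example questions, and refusal string are all placeholders:

```python
# Sketch: mix in-domain Q&A with out-of-domain / unresolvable questions whose
# target answer is an explicit refusal, so the model sees worked examples of
# saying "I don't know." Questions, answers, and file name are illustrative only.
import json
import random

REFUSAL = "I don't know. This is outside what I can answer reliably."

in_domain = [
    {"question": "What does Section 2.1 define 'Affiliate' to mean in this contract?",
     "answer": "Under Section 2.1, 'Affiliate' means any entity controlling, controlled by, "
               "or under common control with a party."},  # made-up example answer
]
out_of_domain_or_unresolvable = [
    {"question": "How will the court rule on this motion next month?"},  # unresolvable
    {"question": "How do I tune a carburetor?"},                          # out of domain
]

examples = []
for ex in in_domain:
    examples.append({"messages": [
        {"role": "user", "content": ex["question"]},
        {"role": "assistant", "content": ex["answer"]},
    ]})
for ex in out_of_domain_or_unresolvable:
    examples.append({"messages": [
        {"role": "user", "content": ex["question"]},
        {"role": "assistant", "content": REFUSAL},
    ]})

random.shuffle(examples)
with open("finetune_with_refusals.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

Whether the refusal examples actually stick depends on the mix ratio and on whether evaluation rewards abstention rather than penalizing it, which is exactly the incentive problem the paper describes.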