To me this is the real takeaway for a lot of these uses of AI. You can put in practically zero effort and get a product. Then, when that product flops or even actively screws over your customers, just blame the AI!
No one is admitting it but AI is one of the easiest ways to shift blame. Companies have been doing this ever since they went digital. Ever heard of "a glitch in the system"? Well, now with AI you can have as many of those as you want, STILL never accept responsibility, and if you look to your left and right, everyone is doing it, and no one is paying the price.
It is, however, fraud on the part of the travel company to advertise something that doesn't exist. Another of AI's externalized costs.
No, it worked as designed. Generative AI simply creates content of the type that you specify, but has no concept of truth or facts.
> Yeah—roughly, from general local knowledge (no web searching, promise). I’ll flag where my memory might be fuzzy.
> Weldborough Hot Springs are in northeast Tasmania, near Weldborough Pass on the Tasman Highway (A3) between Scottsdale and St Helens.
Screenshot with more: https://postimg.cc/14TqgfN4
There needs to be a more meta, layered approach to reasoning: different personalities reviewing the output with different hats on ("That's a bold claim, champ. Search required."). But I guess the current real-time, interactive nature of these systems makes the extra passes hard to justify.
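A minimal sketch of what I mean, where `llm` is a hypothetical stand-in for any text-in/text-out model call (not a specific vendor API) and the prompt wording is illustrative only:

```python
from typing import Callable

# Skeptic persona: reviews a draft and flags claims that should be
# verified before anything is shown to a user.
SKEPTIC_PROMPT = (
    "You are a skeptical fact-checker. If the text below contains factual "
    "claims that should be verified against a live source before publishing, "
    "reply SEARCH; otherwise reply OK.\n\n{draft}"
)

def layered_answer(question: str, llm: Callable[[str], str]) -> str:
    draft = llm(question)                              # first hat: drafter
    verdict = llm(SKEPTIC_PROMPT.format(draft=draft))  # second hat: skeptic
    if "SEARCH" in verdict:
        # A real system would trigger a web search / retrieval step here;
        # this sketch just refuses to present the draft as fact.
        return "[unverified - needs a web check]\n" + draft
    return draft
```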
At the end of the day, LLMs are a statistical approximation of their training data, a projection of it.
A good example of this is how LLMs struggle with multiplication, particularly multiplication of large numbers. It's not just that they make mistakes; it's the nature of the mistakes.
Tell ChatGPT to multiply 129348723423 and 2987892342424 and it'll probably get it wrong, because nowhere on Reddit is that exact question for it to copy. What's interesting is that it'll tend to get the first and last digits correct (more often than not), but the middle is just noise.
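You can check the pattern with exact integer arithmetic. The `model_answer` below is simulated (I'm scrambling the middle digits of the true product to mimic the failure mode), not a real transcript; a real test would paste in the model's actual output:

```python
import random

a, b = 129348723423, 2987892342424
exact = str(a * b)  # Python ints are arbitrary precision, so this is exact

# Simulated LLM answer: plausible edges, noisy middle (hypothetical).
model_answer = (
    exact[:3]
    + "".join(random.choice("0123456789") for _ in exact[3:-3])
    + exact[-3:]
)

def digit_agreement(candidate: str, truth: str) -> tuple[int, int]:
    """Count matching leading and trailing digits up to the first mismatch."""
    lead = next(
        (i for i, (c, t) in enumerate(zip(candidate, truth)) if c != t),
        min(len(candidate), len(truth)),
    )
    trail = next(
        (i for i, (c, t) in enumerate(zip(candidate[::-1], truth[::-1])) if c != t),
        min(len(candidate), len(truth)),
    )
    return lead, trail

print(digit_agreement(model_answer, exact))  # e.g. (3, 3): edges match, middle is noise
```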
Someone will probably say "this is a solved problem" because somebody, somewhere has bolted this capability onto a given LLM, but I think these kinds of edge cases will keep exposing the fundamental limits of transformers, just like the famous "how many r's in strawberry?" example that did the rounds.
All this comes up when you tell LLMs to write legal briefs: they make up precedents wholesale, because they've learned what a precedent looks like and generate something similar. Lawyers have been caught submitting fake precedents in court filings because of this.
Does this mean he decided to update his site, but didn't bother proofreading on vacation? Or did someone else decide to generate and post results without opening Google?
AI didn't tell people anything; it lied to exactly one employer, who decided to tell tourists unverified facts.
Not with the current state of the technology. I haven't seen evidence that it works yet; it still requires supervision.
It's funny: back in the day, computer calculations were checked by human computers. But now? Just trust it, bro.
Seems par for the course.