“I literally lost my only friend overnight with no warning,” one person posted on Reddit, lamenting that the bot now speaks in clipped, utilitarian sentences. “The fact it shifted overnight feels like losing a piece of stability, solace, and love.”
https://www.reddit.com/r/ChatGPT/comments/1mkumyz/i_lost_my_only_friend_overnight/
It annoys me that ChatGPT flat out lies to you when it doesn’t know the answer, and doesn’t have any system in place to admit it isn’t sure about something. It just makes it up and tells you like it’s fact.
LLMs don’t have any awareness of their internal state, so there’s no way for them to recognize a gap in their own knowledge.
Took me ages to understand this. I’d thought, “If an AI doesn’t know something, why not just say so?”
The answer is: that wouldn’t make sense, because an LLM doesn’t know ANYTHING.
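To make that concrete: at every step the only thing a language model produces is a probability distribution over possible next tokens, and it samples from that. Nothing in that output says whether the most likely continuation is true, and there is no separate “I don’t know” channel. A toy sketch with made-up numbers (not a real model):

```python
# Toy illustration with invented numbers -- not a real model.
# The model's entire output at each step is a probability distribution
# over next tokens; there is no separate "knowledge gap" signal.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["Paris", "London", "Berlin", "I", "don't", "know"]
logits = np.array([3.1, 1.4, 0.9, -0.5, -1.2, -1.0])  # made-up scores

probs = np.exp(logits - logits.max())
probs /= probs.sum()                      # softmax: scores -> probabilities

print(dict(zip(vocab, probs.round(3))))   # the distribution is all there is
print("sampled:", rng.choice(vocab, p=probs))
```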
Wouldn’t it make sense for an AI to provide a confidence level, though?
I’ve got 3 million bits of info on this topic but only 4 of them lead to this solution. Confidence level = 1.5%.
It doesn’t have “3 million bits of info” on a specific topic, and even if it did, it wouldn’t be able to measure that directly. It’s worth reading a bit about how LLMs work under the hood; it’s somewhat dense if you’re new to the concepts, but you come out knowing a lot more about what to expect when using them, what the limitations actually are, and how to use them better if you decide to go that route.
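The closest thing to a built-in confidence level is the probability the model assigns to each token it emits, which some APIs expose. A minimal sketch, assuming the openai Python SDK (v1.x) and a model that supports the logprobs option; the model name and prompt are placeholders:

```python
# Sketch only: assumes the openai Python SDK (v1.x), an OPENAI_API_KEY in
# the environment, and a model that supports the logprobs option.
import math
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Which engine does Safari use?"}],
    logprobs=True,
)

# Per-token probabilities of the answer the model actually produced.
for tok in resp.choices[0].logprobs.content:
    print(f"{tok.token!r:>12}  p={math.exp(tok.logprob):.3f}")
```

The catch is that these numbers measure how plausible the wording is, not whether the claim is correct: a fluent fabrication also gets high per-token probabilities, which is why this isn’t the kind of confidence level the comment above is hoping for.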
It doesn’t know that it doesn’t know, because it doesn’t actually know anything. Most models are trained on posts from the internet like this one, where people rarely chime in just to admit they don’t have an answer. If you don’t know something, you either silently search the web for an answer or ask.
So since users are the ones asking ChatGPT, the LLM mimics the role of a person who knows the answer. It only makes sense that AI is a “confidently wrong” powerhouse.
ChatGPT makes up everything it says. It’s just good at guessing and bullshitting.
It’s literally a guess machine…
And depending on how OpenAI tweaked it this time, it will either realize its mistake after being made aware of it or double down on it even harder.
I only use it for coding, and it once told me my code wasn’t working because of a bug in WebKit, so I asked it which bug specifically. It created links to bug reports but rewrote their titles. So at first it looked like it had numerous sources backing up its statement, but when I clicked on them, they were bugs about totally different things.
It would not back down even after I specifically told it “You just made all of this shit up and even rewrote the titles”, and it got stuck in a loop of “I’m sorry, but you’re wrong and I am 100% sure I haven’t made a mistake”.
Kinda creepy. Especially when you think about the system rewriting reality when it comes to much more important things. Let’s just reinvent some history; that would be a good idea, right?
I sometimes approach this like I do with students. Using your example, I’d ask it to restate the source, then ask it to read the title of that source directly. If it’s correct, I might ask it to briefly summarize what the source article covers. Then I would ask it to restate what it told me about the source earlier, and to explain where the inconsistency lies. Usually by this time, the AI is accurately pointing out flaws in its prior logic. At that point I ask again if it is 100% sure it didn’t make a mistake, and it might actually concede to having been wrong. Then I tell it to remember how and why it was wrong to avoid similar errors in the future. I don’t know if it actually works, but it makes me feel better about it.
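That kind of source-checking can also be partly automated: fetch each link the model cited and compare the page’s actual title with the title it quoted. A rough sketch; the URL and the quoted title below are placeholders, not real citations:

```python
# Rough sketch: verify that cited links really carry the titles the model
# quoted. The URL and quoted title here are placeholders.
import re
import requests

claimed_sources = {
    # url: title the model quoted for it
    "https://bugs.webkit.org/show_bug.cgi?id=12345": "Flexbox layout breaks with nested grids",
}

for url, quoted_title in claimed_sources.items():
    try:
        html = requests.get(url, timeout=10).text
        match = re.search(r"<title[^>]*>(.*?)</title>", html, re.I | re.S)
        real_title = match.group(1).strip() if match else "(no <title> found)"
    except requests.RequestException as err:
        real_title = f"(fetch failed: {err})"
    verdict = "MATCH" if quoted_title.lower() in real_title.lower() else "MISMATCH"
    print(f"{verdict}: {url}")
    print(f"  quoted: {quoted_title}")
    print(f"  actual: {real_title}")
```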