Dude, the problem is that you have no fucking idea if it’s wrong yourself, and have nothing to back it up.
That’s not true. For starters, you can evaluate it on its own merits to see if it makes logical sense: the AI can help you solve a maths equation, and you can verify that the answer checks out without needing anything else to back it up.
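The “evaluate it on its own merits” point can be made concrete: if an AI claims a solution to an equation, you can verify it by substitution without trusting the AI at all. A minimal sketch, using a made-up equation (2x + 4 = 10) purely for illustration:

```python
# Verify an AI-claimed solution by substituting it back into the equation.
# The equation (2x + 4 == 10) and the claimed root are hypothetical examples.

def checks_out(claimed_x: float) -> bool:
    # Substitute the claimed solution into the left-hand side
    # and compare against the right-hand side.
    return 2 * claimed_x + 4 == 10

print(checks_out(3))  # the AI's claimed answer: True
print(checks_out(4))  # a wrong answer fails the check: False
```

The check is independent of where the answer came from, which is exactly the argument being made.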
Second, agentic or multi-step AIs will dig out the sources for you so you can check them. It’s just a smarter search engine with no ads and better focus on the question asked.
For now.
Eventually it becomes a search engine that replaces the ads on the source material with its own ads, thus choking off the source’s funding and taking it for itself.
No. They don’t actually use sources and can’t tell you why they said what they said. This reverses cause and effect: the source is part of the inference, and since the model didn’t (and in fact can’t) go and read the sources, its output frequently contains both imaginary sources and sources that don’t actually support the assertion.
Evaluating the source requires the same skills and information that producing the original text would have required, which means, perforce, that if you use ChatGPT to produce text you lack the skills to write, you also lack the skills to evaluate it.
You COULD go and read the primary sources and evaluate them in turn, but at that point it’s no longer quick.
I am speaking from experience.
The latest example I encountered had a blatant logical inconsistency in its summary: it cited a CVE that wasn’t relevant to the discussion, because the flaw had been fixed years before the technology in question even existed. Someone pointed it out.
The poster hadn’t made the slightest effort to check what they posted; they just regurgitated it. It’s not the reader’s job to check the crap you’ve posted without the slightest effort.
OK, I didn’t need you to act as a middleman to tell me what the LLM just hallucinated; I can do that myself.
The point is that raw AI output provides absolutely no value to a conversation, and is thus noisy and rude.
When we ask questions on a public forum, we’re looking to talk to people about their own experience and research through the lens of their own being and expertise. We’re all capable of prompting an AI agent. If we wanted AI answers, we’d prompt an AI agent.
If you have evaluated the statement for its correctness and relevance, then you can just own up to the statement yourself. There is no need to defer responsibility by prefacing it with “I asked [some AI service] and here’s what it said”. That is the point of the article that is being discussed, if you’d like to give it a read sometime.
And what happens when MechaHitler (the next version of Grok) or whatever AI hosted by a large corporation with only capital gains in mind comes out with unannounced injected prompt poisoning that doesn’t produce the quality output you’ve been conditioned to expect?

These AIs are good if you have a general grasp of whatever you are trying to find, because you can easily pick out what you know to be true and what is obviously an AI hallucination (a ridiculous mess of computer-generated text no smarter than your phone keyboard’s word suggestions).

Trying to soak up all the information an AI generates on a topic without prior knowledge may easily leave you understanding no more than you did before, and may give you unearned confidence in what is essentially misinformation. And just because an AI pulls up references doesn’t mean they’re sound: unless you do your due diligence and read those references to check their accuracy and authority on the subject, the AI may be hallucinating the source of the wrong information it’s giving you.
On the second part: that is only half true. Yes, there are LLMs out there that search the internet, then summarize and reference some of the websites they find.
However, it is not rare for them to add their own “info” that isn’t in the given source at all. If you use the LLM to find sources and then read those instead, sure. But the output of the LLM itself should still be taken with a HUGE grain of salt and not relied on at all for anything critical, even if it comes with a nice citation.
It gives you some links but in my experience what it says in the summary isn’t always the same as what’s in the link…