“hallucination refers to the generation of plausible-sounding but factually incorrect or nonsensical information”
Is an output a hallucination when the training data behind that output included factually incorrect information? Suppose my input is "is the world flat" and the LLM then, allegedly, faithfully reproduces a flat-earther's writings saying it is.