

Why would somebody intuitively expect a newer, presumably improved, model to hallucinate more? They wouldn't, because there's no fundamental reason a stronger model should hallucinate worse. In that regard, I think the news story is valuable - not everyone uses ChatGPT.
Or are you suggesting that active users should know? I guess that makes more sense.
No, but it is tone-deaf (heh) to use Claude, a non-self-hosted AI, and Suno, another non-self-hosted AI, to literally sing the praises of avoiding corporate software.
Especially when open-source and open-weight models exist that can do both of those things, albeit at lower quality.