I went long enough without using Google (probably a year-ish) that, when I accidentally made a Google search a few days ago, it was a jarring experience.
It felt wrong the same way other search engines did when I first deGoogled. It was kind of nice actually.
The irony is that Gemini is really good (like, significantly better than ChatGPT), and cheap for them to run (it runs on their own TPUs, so no GPUs needed), yet somehow they made it utterly unbearable in search.
Gemini is really good at confidently talking nonsense, but other than that I don’t really see where you get the idea that it’s good. Mind you, the other LLMs aren’t much better.
So it’s really good at the thing LLMs are good at. Don’t judge a fish by its ability to climb a tree, etc…
No, it is mediocre at best compared to other models, but LLMs in general have very minimal usefulness anyway.
I get the desire to say this, but I find them extremely helpful in my line of work. Literally everything they say needs to be validated, but so does Wikipedia, and we all know that Wikipedia is extremely useful. It’s just another tool, but a very useful one if you know how to apply it.
But Wikipedia is basically correct 99% of the time on basic facts, at least on non-controversial topics where nobody has an incentive to manipulate it. LLMs, meanwhile, are lucky if 20% of what they say has any relationship to reality. And not just complex facts, either: I wouldn’t be surprised if an LLM got the number of hands a human being has wrong.
It can be grounded in facts. It’s great at RAG. But even alone, Gemini 2.5 is kinda shockingly smart.
…But the bigger point is how Google presents it. It shouldn’t be the top result of every search, just thrown in your face; it should be an opt-in, transparent, conditional feature with clear warnings, and one that only answers when it can source from a whitelist of reliable websites.
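To make the grounding/RAG point above concrete, here’s a minimal sketch, assuming the google-genai Python SDK; the two “documents” and the toy keyword retriever are placeholders for a real corpus and vector store, not anything Google ships:

```python
# Minimal RAG sketch, assuming the google-genai SDK (pip install google-genai).
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # assumption: key passed directly

docs = [
    "Paperless-ngx is a self-hosted document management system.",
    "Gemini 2.5 Pro is Google's flagship large language model.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Toy retriever: rank documents by word overlap with the query.
    words = set(query.lower().split())
    return sorted(docs, key=lambda d: len(words & set(d.lower().split())), reverse=True)[:k]

question = "Who makes Gemini 2.5 Pro?"
context = "\n".join(retrieve(question))

# Ground the model by confining it to the retrieved context.
response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents=f"Answer using only this context:\n{context}\n\nQuestion: {question}",
)
print(response.text)
```

The point being: with retrieval in front of it, the model answers from sources you chose rather than from whatever its weights happen to encode.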
I’m sorry, but after trying it again a few times today on some practical problems, every single one of which it first completely misunderstood and then gave a completely hallucinated answer to, the only thing shocking about it is how stupid it is despite Google’s vast resources. Not that stupid/smart really apply to statistical analysis of language.
Gemini 2.5? Low temperature, like 0.2?
The one they use in search is awful, and not the same thing. Also, it’s not all-knowing; you gotta treat it like it has no internet access (because generally it doesn’t).
The one they use on gemini.google.com (which is 2.5 right now but was awful in earlier versions too).
Try it here instead, set the temperature to like 0.1 or 0.2, and be sure to set 2.5 Pro:
https://aistudio.google.com/
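If you’d rather hit the same model from code instead of the AI Studio UI, a minimal sketch with the google-genai Python SDK looks something like this; same idea (pin it to 2.5 Pro, keep the temperature low), and the API key handling is an assumption:

```python
# Minimal sketch using the google-genai SDK (pip install google-genai).
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # assumption: key passed directly

response = client.models.generate_content(
    model="gemini-2.5-pro",                    # be sure to use 2.5 Pro
    contents="Explain RAG in two sentences.",
    config=types.GenerateContentConfig(
        temperature=0.2,                       # low temperature = less creative drift
    ),
)
print(response.text)
```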
It is indeed still awful for many things. It’s a text prediction tool, not a magic box, even though everyone advertises it kinda like the latter.
I use it for document summarization and it works well. I use Paperless-ngx to manage documents, and have paperless-ai configured to instantly set the title and tags using Gemini as soon as a new document is added.
I chose Gemini over OpenAI since Google’s privacy policy is better. I’m using the paid version, and Google says data from paid users will never be used to train the model. Unfortunately I don’t have good enough hardware to run a local model.
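For a sense of what that pipeline does under the hood, here’s an illustrative sketch of the kind of call a tool like paperless-ai might make; to be clear, this is not its actual code, and the prompt wording and JSON shape are my assumptions:

```python
# Illustrative sketch only -- NOT paperless-ai's actual implementation.
# Shows the kind of Gemini call such a tool might make to derive a title
# and tags from a freshly scanned document.
import json
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # assumption: key passed directly

document_text = "..."  # placeholder for OCR text pulled from paperless-ngx

prompt = (
    "Suggest a short title and up to five tags for this document. "
    'Reply as JSON: {"title": "...", "tags": ["..."]}\n\n' + document_text
)
response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumption: a cheaper model for bulk tagging
    contents=prompt,
    config=types.GenerateContentConfig(response_mime_type="application/json"),
)
meta = json.loads(response.text)
print(meta["title"], meta["tags"])
```

Summarization/tagging like this is a decent fit for LLMs: the source document is right there in the prompt, so there’s much less room for hallucination than in open-ended Q&A.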
“Significantly better than ChatGPT” and “Good” aren’t the same. Like ipecac is significantly better to drink than sewage water.
I had that happen too. Couldn’t find something with DDG. Hopped over to Google and was shocked at how completely unusable it was.