Gemini 3.0: Surprisingly bad hallucination with product searches?

I should start by saying I tried Gemini 3 the other day, and it solved a difficult tech/coding-related question that ChatGPT 5.1 Thinking couldn't manage. So I went into this week thinking Gemini was amazing.

Today, though, I asked it for a list of gift ideas, and it returned a ton of suggestions that it said I could find by googling them. I asked it to provide links, and it couldn't. And I couldn't find most of the products it suggested – it seems they're totally made up.

Whereas when I ask ChatGPT the same question, it provides a full, properly sourced list with no hallucination.

Does Gemini 3 lack grounding? Can it not provide links?
