
I’m wondering if anyone else has run into this with Gemini.
To be clear, it’s not that Gemini “hallucinates a lot.”
The issue is that it refuses to actually do the task, and then, when I push it, it fabricates data instead of doing the work.
Here’s what happened:
• First time: I asked it to help me fetch discussions from a social platform (X). It refused and basically said it couldn’t help.
• Second time: I narrowed the request as much as possible: only X, only ADA discussions, only November. Instead of doing the search, it generated a full “summary” that was obviously fabricated.
• Third time: I asked for 50 sample posts so I could verify. Gemini confidently output 50 “samples”… which were all made up too.
What’s weird is that when I asked ChatGPT to do the exact same task, it actually searched the public web, retrieved real posts, and even cited the sources. So the contrast was pretty surprising.
Is Gemini known to refuse tasks like this and then improvise data when pressed?
Is this normal behavior or am I doing something wrong?
Note:
The original conversation with Gemini was in Chinese. For clarity, I asked an AI to translate the screenshots into English so Reddit users can read them.
Additional context:
Below I’ve attached the results from ChatGPT for comparison. Even though they’re not translated into English, you can clearly see each post ID, the source, and a link that goes directly to the original post.
This makes it pretty obvious that Gemini’s claim that it “cannot fetch platform data” wasn’t true, because ChatGPT did it without any issue.
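If anyone wants to run the same kind of check on either model’s output, here’s a rough Python sketch (just an illustration, not something I ran at the time). It asks X’s public oEmbed endpoint whether each linked post actually exists; in my experience that endpoint returns an error status for posts that don’t. The URL in the list is a placeholder, not one of the fabricated samples.

```python
# Rough sketch: spot-check whether post links returned by a model are real.
# Assumes X's public oEmbed endpoint, which (in my experience) responds with
# an error status for posts that don't exist. Replace the placeholder URL
# with the links you want to verify.
import requests

urls = [
    "https://twitter.com/jack/status/20",  # placeholder: a well-known real post
]

for url in urls:
    resp = requests.get(
        "https://publish.twitter.com/oembed",
        params={"url": url},
        timeout=10,
    )
    verdict = "exists" if resp.ok else f"not found (HTTP {resp.status_code})"
    print(f"{url} -> {verdict}")
```

A link that only ever comes back as an error is a strong hint the model invented it.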
And honestly, this isn’t an isolated case; similar things have happened to me multiple times.
For example, when I asked Gemini to add a simple border to one of my illustrations, it told me it couldn’t do that and started recommending third-party design tools instead.
I was like… really?
