The king of hallucinations and disobedience

I’ve recently switched from ChatGPT to Gemini to see if it fits me better.

Since I started using it, Gemini has shown a very noticeable tendency to mix up information, hallucinate, or simply fail to answer prompts correctly, even when they’re highly specific. These are prompts that ChatGPT handles with much less prompt engineering.

Is this a common experience with Gemini, or just how it works?

And what do you guys do to work around these pitfalls?