The things I really like about Gemini models are vision, long context & Google apps integration, but when it comes to response quality for complex queries, voice mode & other areas, it was garbage before & even now with Gemini 3, it's still garbage.
1) First, I don't understand what the UI & UX engineers are doing there; despite making the world's most powerful LLM, they can't make a proper website for users to use it. The placement of the temporary chat button, that sidebar, everything is awful to use. OpenAI & others are miles ahead here. At the very least, they should use their own model to vibecode a decent website.
2) I don't know how this model took 1st place in all the benchmarks, but in my own usage, when I gave the exact same prompt & code to both ChatGPT (extended thinking) & Gemini 3 Pro, I was shocked at how badly Gemini 3 responded. I checked the thinking process for both: ChatGPT followed the instructions exactly, line by line, even executed the code, caught its own mistake & gave a nearly perfect result. Gemini 3 responded fast but gave a completely wrong answer; it wasn't even able to understand the code properly. (Even in AI Studio, it's the same crap.)
3) A day doesn't go by without getting this error from Gemini in the middle of a response: "connection error: something went wrong pls try reloading", while other sites in the same browser, like ChatGPT & Claude, had no issues.
4) When I heard they upgraded voice mode to the 2.5 Pro model, I was very excited, but my experience with it has been the worst: that robotic voice, censorship so heavy it won't even read the daily news, garbage responses… Despite my dislike of Grok, they did an excellent job with their voice mode.
5) & so many other issues, like Deep Research quality, losing context after hitting some token limit, etc.
So apart from good vision capability & the best Flash model, what did you guys actually find great about using Gemini? Do I need to set up custom instructions or make any other changes to get the most out of Gemini 3 Pro?