Like many other paid Gemini Pro users, lately I’ve been having a much harder time getting the model to behave properly in any kind of chat. Most of what I use it for is debugging my code.
Up until now I’ve had the patience to open a fresh chat for every bug, even when it was clearly the same script. But even while working on the exact same file, I’ve been running into the issues people keep reporting: the model completely forgets context from one message to the next, ignores the most recent instruction, and then “rewinds” to an older message and follows those directions instead.
To be fair, when Gemini 3 Pro first launched these problems were less obvious. You could tell they were there, but they were tolerable. Still, the tip of the iceberg was already showing: in like 90% of cases, Gemini would ignore any “don’t write code yet, let’s discuss possible causes first” request. You’d tell it explicitly, and it would just dump code anyway. But most of the time it still worked well enough that you could push forward.
That only stays tolerable if the model’s ability to track context and reason through what you’re actually saying stays solid. Over the last few days, that ability has dropped so hard it’s honestly ridiculous. I’ll be working on a new bug, providing examples from other scripts where the same pattern works, and the model’s “analysis” basically boils down to: “those other scripts probably work by pure luck.” That happened literally today. Are you kidding me? And yes, my current project is growing in complexity, but I’m not John Carmack; if I can read my own code, I expect G3P to be able to as well.
I don’t know what the team behind Gemini changed, but they’ve somehow managed to make people conclude it’s bad at code and bad at long context, when those were literally some of Gemini’s strengths up through 2.5P. The problem isn’t “context length.” The problem is that something got seriously messed up, and the model now shows the same kinds of repeated, dumb errors I see all the time in the local 7B and 12B models I run daily. In case anybody wonders what I mean: small local models are quite deterministic. If you don’t vary the input (or the sampling) on every turn, they sometimes ignore your last input and their own last output, respond to an earlier state of the conversation, and regenerate the same output they already produced at some point, totally polluting the context. I understand why this happens with a 12B. I don’t understand it when Gemini does the same… (EDIT: I solved a similar issue with local models by forcing a different seed for each interaction. Basic, but if you miss it, it’s a bit tricky to realize what’s going on. I hope Gemini isn’t messing up in that sense…)
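For anyone who wants to try the same fix locally, here’s a minimal sketch of what I mean, using Hugging Face transformers (the checkpoint name is just an example; any local 7B/12B model works the same way): reseed the sampler before every turn so two identical-looking conversation states can’t replay the exact same generation.

```python
import random
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example checkpoint; substitute whatever local 7B/12B model you run.
MODEL = "mistralai/Mistral-7B-Instruct-v0.2"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.float16, device_map="auto"
)

def reply(history: str) -> str:
    # The fix: force a fresh RNG seed on every turn, so sampling
    # can't replay the trajectory of an earlier, identical-looking
    # prompt and "rewind" the conversation to a previous state.
    torch.manual_seed(random.randrange(2**32))
    inputs = tokenizer(history, return_tensors="pt").to(model.device)
    out = model.generate(
        **inputs, max_new_tokens=256, do_sample=True, temperature=0.7
    )
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```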
If anyone at Google reads this: no, I didn’t save the conversation and I’m not authorizing anyone to read it. I just wanted to say the issues are obvious, and at this point it’s hard not to start considering drastic options.
PS. In case anybody wonders, yes I can debug my own code, but the point is to save time and move production faster.
FINAL OUTCOME: I dealt with G3P from 8 am to 11:30 am… then I spent 40 minutes debugging manually and fixed the code myself. I tested Claude and Gemini with an almost-fixed version. Claude got it; Gemini kept being… current Gemini. I don’t like Claude’s way of doing things (in fact I ditched it for G2.5P), but if spending a hundred bucks a month will save me TIME, then perhaps it’s time to go back to it until the Google team fixes this big mess.