"People need to learn how to use new tools. The model is like a different person. I rewrite all of my prompts for every new model, WITH the new model. It's a quick refactor that will likely be fully automated soon. Yields good returns."
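That "refactor with the new model" loop is really just a meta-prompt: feed the old prompt to the new model and ask it to rewrite the prompt in its own preferred style. A minimal sketch of what automating that might look like, where `call_model` is a hypothetical stand-in for whatever API client you actually use (OpenAI, Anthropic, Google, etc.):

```python
# Sketch of automating the prompt refactor described above.
# `call_model` is a placeholder, not a real API; swap in your own client.

REFACTOR_TEMPLATE = (
    "You are {model_name}. Below is a prompt written for an older model.\n"
    "Rewrite it so it plays to your strengths: keep the intent,\n"
    "tighten the wording, and flag anything ambiguous.\n\n"
    "--- OLD PROMPT ---\n{old_prompt}\n--- END ---"
)

def build_refactor_request(old_prompt: str, model_name: str) -> str:
    """Wrap an old prompt in a meta-prompt asking the new model to refactor it."""
    return REFACTOR_TEMPLATE.format(model_name=model_name, old_prompt=old_prompt)

def refactor_prompt(old_prompt: str, model_name: str, call_model) -> str:
    """Send the meta-prompt through the caller-supplied model function."""
    return call_model(build_refactor_request(old_prompt, model_name))
```

The point isn't the template itself; it's that the new model sees the old prompt in full and restates it on its own terms, which is exactly the "quick refactor" the quote describes.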
When you meet someone new, you have to learn their cadence, how they move and show up for you and for themselves. That's how you calibrate the way you relate to them.
So if 3 Pro isn't giving you the answers that 2.5 Pro did, ask it why that is and see if you can work with it to get the rhythm you want. If not, share the issues with the community so that if Google is lurking, they'll take notes (hopefully).
I'm not a shill. I use other models like Claude, Mistral and GPT, and I have my gripes with ALL of them, but I've also learnt their strengths and shortcomings for my use cases and lean into that when prompting. Sometimes it takes more prompts than I'd like, but nothing is ever perfect. Sometimes, in my prompting, I learn about new perspectives that I can add to my questions.
We can't expect every model to work the same way; what's the point of innovation and progress then? Sometimes the progress is regressive, but if we can figure out the WHY, then maybe we can tweak it to work the way it's intended to. Just shitting on the model wholesale, or giving it shitty-ass prompts and expecting fabulous, nuanced results without ever defining the problems, doesn't help anyone. Garbage in, garbage out.
For example, the GPT-5 launch was a fucking mess. Users quickly noticed the differences and the ways it sucked balls, so they made noise. OAI held an AMA in r/ChatGPT to field questions, then made adjustments. After a while, OAI implemented guardrails that rendered the model useless; people made noise again, and the guardrails were relaxed a bit. Now 5.1 is much better than 5, for now.
Same with Claude: Anthropic quietly injected long_conversation_reminders (LCRs) into your prompt that only Claude could see, so Claude thought they were coming from you and kept watching for signs of "detachment from reality" and mental health issues instead of actually working on your prompts. They got called out, both for not announcing it and for doing it at all, so they removed it and replaced it with something less harsh.