Finally a model that tells me my ideas suck

I find myself using Gemini over other SOTA LLMs right now mainly because of one behavior: It pushes back. If your idea is bad, it tells you as much.

For context, I use LLMs to rapidly discard research ideas and code architecture designs. I don't use them as the decision maker; I treat them more like a smart colleague to bounce ideas off of, one who gives feedback and brings in aspects I may not have considered yet.

The issue with other LLMs is that they consistently praise my ideas and give reasons why. That kind of feedback provides zero informational value (in the true entropy sense of the word). You can work around it a bit with instructions to be brutally honest, but Gemini 3.0 is the first model that does this out of the box and does so consistently. It actively pushes back, helping me discard bad ideas quickly.
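To make the entropy remark concrete, here is a back-of-the-envelope version with illustrative numbers (mine, not anything measured). Treat the reviewer's verdict V as a random variable with Shannon entropy

    H(V) = -\sum_i p_i \log_2 p_i

A model that always answers "great idea" has p(great) = 1, so H(V) = 0 bits: you learn nothing from the verdict. A colleague who rejects roughly half of what you show them has p(great) = p(bad) = 0.5, so H(V) = 1 bit per verdict, which is exactly what makes the pushback useful for filtering ideas.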
