Why Your ChatGPT Prompting Tricks Aren’t Working Anymore (and what to do instead)

For the last two years, I've been using the same ChatGPT prompting tricks: "Let's think step by step," few-shot examples, piles of detailed instructions. It all worked great.

Then I started using o1 and other reasoning models. Same prompts. Worse results.

Turns out, most of what I learned about prompting in 2024 doesn't transfer to reasoning models.

Here's what changed:

Old tricks that helped regular ChatGPT now backfire on reasoning models:

  1. "Let's think step by step" — o1 already does this internally. Telling it to do it again wastes thinking time and confuses output.
  2. Few-shot examples — Examples now anchor the model to your pattern instead of helping. It imitates the examples rather than reasoning freely.
  3. Piling on instructions — Detailed rules and constraints tangle reasoning models. Less instruction means cleaner output.

What actually works now:

Simple, direct prompts. One sentence if possible. No examples. No role assignment ("you are an expert…"). Just state what you want.
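Here's a minimal sketch of what that looks like against the OpenAI API, assuming the Python SDK and an o1-family model (the model name and the example prompt are placeholders; swap in your own):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# One direct sentence: no role, no examples, no "think step by step".
response = client.chat.completions.create(
    model="o1",  # placeholder; use whichever reasoning model you have access to
    messages=[
        {"role": "user", "content": "Explain why my Postgres query with three JOINs is slow and how to fix it."}
    ],
)
print(response.choices[0].message.content)
```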

Test it yourself:

Take one of your old ChatGPT prompts (the detailed one with examples). Try it on o1. Then try a simple version: just the core ask, no scaffolding.

Compare the results. In my tests, the simple one usually wins.
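If you want to run this comparison yourself, here's a rough A/B harness, again assuming the OpenAI Python SDK. The task string and both prompt wordings are made-up placeholders; substitute one of your real prompts:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

TASK = "Draft a migration plan for moving our monolith to microservices."

# Old-style prompt: role assignment, step-by-step instruction, scaffolding.
detailed = (
    "You are an expert software architect. Let's think step by step. "
    "First list the risks, then the phases, then the rollback plan.\n\n" + TASK
)

# New-style prompt: just the core ask.
simple = TASK

for label, prompt in [("detailed", detailed), ("simple", simple)]:
    resp = client.chat.completions.create(
        model="o1",  # placeholder reasoning model
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---\n{resp.choices[0].message.content}\n")
```

Read both outputs side by side and judge which one actually answers the question.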

If you're still on regular ChatGPT: The old tricks still work fine. This only applies to reasoning models.

If you're mixing both: You'll get inconsistent results. Know which model you're using and adjust accordingly.

I made a video breaking this down with real examples if you want to see it in action: https://youtu.be/9qgfOuVIXR0
