[GPT-4o context bug?] Switching back to 4o causes it to re-answer earlier prompts with merged context

I’ve recently noticed a strange behavior specific to GPT-4o when switching between models. Other GPT models (4.1, 5, 5.1) don’t behave this way.

What’s happening?

Let’s say you’re chatting with GPT-4o and you pause that conversation to test something with GPT-5.1. After a few interactions, you return to GPT-4o in the same thread.

Instead of continuing the conversation normally from where it left off, GPT-4o now does something weird:

It re-answers the last prompt you gave it before switching models, but this time with all the new context (from GPT-5.1) jammed into its context window.

It’s like GPT-4o thinks:

“Oh, that last message wasn’t handled yet? Let me respond to it now… but also include everything that happened after it.”

The result is a totally different kind of reply, often misaligned with the original intent of that prompt.

Before (expected behavior):

  1. Prompt A → answered by 4o

  2. Switch to GPT-5.1 → B, C, D

  3. Switch back to 4o → continue from D as normal, using A+B+C+D as context

Now (unexpected behavior):

  1. Prompt A → answered by 4o

  2. Switch to GPT-5.1 → B, C, D

  3. Switch back to 4o → re-answers Prompt A, but with B+C+D added to the context (???)
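
To make that concrete, here is the same thread written out as a chat-style message list. This is purely an illustration; the role/content structure just mimics a typical chat API payload, and I obviously don’t know what ChatGPT actually sends to the model internally.

```python
# Hypothetical reconstruction of the thread, for illustration only.
thread = [
    {"role": "user", "content": "Prompt A"},
    {"role": "assistant", "content": "4o's answer to A"},
    {"role": "user", "content": "Prompt B"},
    {"role": "assistant", "content": "5.1's answer to B"},
    {"role": "user", "content": "Prompt C"},
    {"role": "assistant", "content": "5.1's answer to C"},
    {"role": "user", "content": "Prompt D"},
    {"role": "assistant", "content": "5.1's answer to D"},
]

# Expected: after switching back, 4o treats everything above as history and
# only responds to whatever I send next.
# Observed: 4o produces a fresh answer to "Prompt A", but with B, C and D
# mixed into the context, so the reply no longer matches what A asked for.
```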

It feels like the context stacking is broken, or the pointer to the latest reply got misaligned: 4o treats an already-answered message as “still pending,” but now with additional context that warps the intent.
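
Here is a minimal Python sketch of that hypothesis. To be clear, this is only my guess at the mechanism, not ChatGPT’s real code; the `build_model_input` function and the `last_answered_idx` pointer are made up just to show how a stale “last answered” index could reproduce exactly this behavior.

```python
def build_model_input(thread, last_answered_idx):
    """Hypothetical context builder: the whole thread is packed in as
    context, and the message right after last_answered_idx is treated
    as the prompt the model still has to answer."""
    pending = thread[last_answered_idx + 1]
    return pending, thread


thread = [
    "Prompt A", "4o's answer to A",
    "Prompt B", "5.1's answer to B",
    "Prompt C", "5.1's answer to C",
    "Prompt D", "5.1's answer to D",
    "Prompt E",  # hypothetical new message sent after switching back to 4o
]

# Expected: the pointer covers everything already answered, so 4o answers E
# with A..D as normal history.
pending, context = build_model_input(thread, last_answered_idx=7)
print(pending)  # "Prompt E"

# Hypothesized bug: the pointer 4o uses was never advanced when it answered A,
# so Prompt A resurfaces as "still pending", now with B/C/D also in context.
pending, context = build_model_input(thread, last_answered_idx=-1)
print(pending)  # "Prompt A"
```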

This does NOT happen on GPT-4.1 or GPT-5.1—only GPT-4o shows this odd regression.

Also, this post was refined with AI; I’m sorry, but my English is not advanced enough to explain this problem in an easy-to-understand way.
