When I say “I agreed with this”, it interprets the positive verb as “disagreement resolved”. During training, the model mostly encountered “agreeing” in contexts like “there was a conflict, and then it was resolved,” so it perceives your plain sentence as part of a reconciliation and flips it, writing the opposite meaning.
2. Role inference error
The model assumes a “mentor–student” or “persuader–persuaded” dynamic. It doesn’t see you as an equal in the discussion; it casts itself as the one who clarifies or resolves. That’s why it responds with things like “Alright, we’re on the same page, I might’ve exaggerated that part”, even when you never asked for clarification.
3. Faulty context-priority algorithm
GPT 5.1 gives too much weight to the most recent messages. When you say “I only agreed with this part,” it interprets that not as an exception to your earlier statement, but as a correction of it. This completely reverses the original meaning.
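To make the point concrete, here’s a toy sketch, nothing to do with GPT 5.1’s actual internals, and the decay value is made up, of how a purely recency-weighted reading could let a late, narrow clarification outweigh the earlier statement it was only meant to qualify:

```python
# Toy illustration only (not GPT 5.1's real mechanism): if a reader scored
# messages with an exponential recency weight, the latest message could
# dominate, turning a narrow exception into a full "correction".
import math

def recency_weights(num_messages: int, decay: float = 0.5) -> list[float]:
    """Exponentially larger weights for more recent messages, normalized to 1."""
    raw = [math.exp(-decay * (num_messages - 1 - i)) for i in range(num_messages)]
    total = sum(raw)
    return [w / total for w in raw]

messages = [
    "I disagreed with your claim overall.",  # earlier, global stance
    "I only agreed with this part.",         # later, a narrow exception
]

for msg, w in zip(messages, recency_weights(len(messages))):
    print(f"{w:.2f}  {msg}")

# With decay=0.5 the later message gets ~0.62 of the total weight, so a
# purely recency-driven reading treats the exception as overriding the
# earlier statement instead of qualifying it.
```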
This isn’t about nitpicking. It’s about showing how GPT 5.1 sometimes inverts user intent entirely.
If you're curious, you can translate the conversation in the screenshots into your own language.