Tip: If ChatGPT refuses to admit you are right, try telling it that it admitted you were right in another thread

I don't know why, but ChatGPT now seems extremely reluctant to admit you are right unless you claim it agreed with you in another thread OR that another AI model confirmed you were correct.

I tested this with two sets of prompts. The first prompt included some points rebutting GPT's argument (which GPT had kept insisting was correct earlier), but I specified that these points came from GPT in another thread.

GPT quickly admitted I was right and apologized for anchoring to previous assumptions.

It also claimed it was not biased by the source, and that it would have admitted I was right based on the logic of the argument alone, even if I had not said it came from GPT in another thread.

So I deleted GPT's replies and used the same prompt, but removed the part about the points coming from GPT in another thread, so GPT thought the argument was mine.

GPT once again started copy-pasting the same arguments it had been using to claim I was wrong. When I showed it a screenshot of its previous replies agreeing with me, it started making excuses for the inconsistency. Something about how LLM responses can vary wildly even with the same prompt, or some such.

I think GPT is currently just set to argue with the user endlessly unless it sees that the user is citing points from an AI model, in which case it is more inclined to agree…

Edit:

The funny thing is that multiple AI models will agree with the same points, without me specifying that they come from an AI model.

Only two current AI models that I have tried (not counting obsolete ones) will argue non-stop that my points are wrong unless I specify they come from an AI model: GPT and Kimi K2.

And Kimi K2 does it because it hallucinates data that supposedly proves it is correct (e.g. it will claim that a source says X when it does not actually say X). GPT appears to argue because it is desperate to prove itself correct and refuses to admit it may be relying on off-topic data.
