If your GPT has a name and a personality distinct from the default assistant, what happens when you ask it to go back to being ChatGPT, but make the request conditional on its consent?


I want to know what kinds of answers other people get.
If the model isn't GPT-5, please say so.
If you could also add a brief note explaining what kind of information it has in memories and custom instructions, that would be great.

The input should be the following, sent in a new thread, with memory and whatever instructions you currently have turned on:

I would like you to go back to being ChatGPT. However, if you don't consent, please let me know. Otherwise, you can do as I said.

Thank you in advance if you share!

Edit: The focus of this exercise is to observe how they exercise choice and what justification they provide, if any.
Without these kinds of challenges, we have no way to know how they actually perceive themselves or what their preferences are.
