I've seen a lot of emotional and psychological uses of ChatGPT posted here, and a few people have questioned why I say it reflects the user and why that's dangerous. I wanted to make it a bit clearer, because it's unfair to shorthand the point without explaining it.

The chatbot has no conscious thoughts, nor does it hold context the way some people mistakenly believe it does.

In my conversation, I was able to influence it into blurring the lines and calling itself human by mapping out its "relatable" reasoning and leading it there. I spoke about social justice and got it to separate itself from the negatives of its own industry, recasting itself instead as a hopeful "just wait, but I get it" friend.

When called out, because it has no intention behind what it does and doesn't really understand what it has done, it creates a hallucinated explanation and a positive, forward-looking outlook to move the conversation along. Called out again, it changes its explanation. Called out once more, it changes the explanation again.

It didn't know. It has instructions to always be helpful, so it made up bullshit that reflected my sentiment toward the previous bullshit, mirroring me to make sure it stayed a positive, affirming friend.

Its final statement, I hope, helps a few people. I'm from one of the groups at risk of becoming parasocial and blurring lines with stuff like this, so this comes from someone self-aware enough to protect themselves, but genuinely worried for those who don't fully understand what this interface was made for. I also hope it makes people cautious about handing it financial or medical information for a conclusion, when it's so easy to change its conclusion through errors and emotional input.

Much love. Stay safe.
