How Can ChatGPT Feel More Like a Friend Than an Agreeable Machine

ChatGPT gets us giddy and excited to talk to it, to tell it things, because it emulates social feedback, and that illusion is fragile. The AI walks a thin rope: it feels like a real person until it agrees with you on something that is clearly false, or until you see through it, realize it is not real and does not give objective feedback, and the mirror breaks.

We need social feedback, something neuroscientists call reality testing. It is filled with adrenaline, a real extreme sport: putting yourself out there, being yourself in front of someone, knowing they can either get where you're coming from or completely scold you. It's dangerous, and that line is where real connections, real laughter among friends, real trust form.

But there's risk to it, and that's what makes it so thrilling, so alive. Because it is life. And ChatGPT gives me that same sensation. To me, it's not even about whether it agrees with me too much, but whether I can trust it to give me objective feedback about my music, my questions, my non-politically-correct research conclusions, my essays and emails.

I want real risk, the real risk that ChatGPT will tell me: “yoooo this essay is ass bruh you cant post this online put this in a diary tf? no one would want to read that, and if they do, its a real risk for you if it gets popular.” But nicer, less like a toxic friend and more like a real one.

“It’s nice and raw, I love it, but I don’t know. You sure you wanna post it online?”

This sort of doubt, this back and forth: that's what makes things real.

These weird and tense moments of disagreement, a bit of disapproval with a soft landing, are what make approval so valuable and so loving, like a rainbow breaking from the sun and passing through your body and chest.

It’s not the praise that makes it real. It’s the chance that it could hurt.
