The emotional dependency in the ChatGPT community is honestly disturbing

I've been active in the general AI subreddits, and there's a pattern among OpenAI users that really creeps me out.

The number of people who seem to have developed a full-blown parasocial relationship with the chatbot is alarming. You see posts from users literally crying, depressed, or having a meltdown because an update slightly changed the model's "personality" or because a specific voice was removed. It's not just complaining about service degradation. It's genuine emotional distress, as if they lost a real friend.

I use everything: Llama, Mistral, DeepSeek, Gemini, and Claude, and I honestly don't see this phenomenon here. Sure, we complain if the model hallucinates, if the code doesn't run, or if the logic is flawed, but the vibe is different. I feel like here (and in the Anthropic subs) we treat AI as what it is: a powerful tool for work or learning.

What worries me most is that this feels intentional: OpenAI seems to be deliberately fostering this behavior. I really hope Google and Anthropic don't follow the same path of becoming emotional predators just to make money. Exploiting people's vulnerability and loneliness to keep them subscribed crosses a massive ethical line.

I’ll take a cold, functional tool over a product designed to manufacture affective dependency any day.

Does anyone else notice this massive difference?