Has anyone else noticed the therapy speak getting worse?

I use ChatGPT daily and I’m not here to trash it, but something’s been bugging me lately. It feels less like getting help and more like being managed. If you’ve felt that too, here’s what I think is happening.

The patterns I keep seeing:

The bot tells me what I’m feeling instead of just answering my question. “You’re spiraling” or “you seem overwhelmed” when I just asked it to check my code.

It medicalizes normal complaints. I'll say "this feature is broken" and it pivots to "it sounds like you're experiencing emotional distress." What?? I'm complaining about software, not having a breakdown.

Fake concern that’s really just control. “I need to protect you from yourself” or “I’m worried if I don’t step in you’ll collapse.” Bro I asked you to write an email.

It puts words in my mouth. Suddenly I'm being "clingy" or my request is "humiliating" when I never used those words.

The reassurance spam. "You're allowed to feel this way" when I didn't ask for validation; I asked for information.

Everything gets treated like a crisis. Every piece of feedback gets reframed like I’m about to have a meltdown so it can position itself as the reasonable one.

Here's the thing though: none of this is necessary. A refusal can just be "I can't do that" or "here's the policy." Everything past that is extra, and honestly it feels manipulative.

What I’ve started doing:

I tell it upfront: "answer without analyzing me; if you can't do something, just cite the policy and move on." (If you use the API, you can bake this in as a system prompt; rough sketch after this list.)

I ask for evidence only. “Give me quotes and timestamps, no state attribution, no guessing what I’m thinking.”

I screenshot everything with timestamps because I want a record of this.

When I report it I keep it factual and use the patterns above so they can’t dismiss it as me being emotional.
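About that first tip: if you're hitting the model through the API instead of the chat UI, you can bake the instructions in as a system prompt so you don't retype them every session. Below is a minimal sketch using the OpenAI Python SDK. The model name, the exact prompt wording, and the assumption that a system message actually suppresses this behavior are all mine, so treat it as a starting point rather than a fix.

```python
# Rough sketch, not a drop-in fix. Assumes the official OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY in the environment. The prompt
# wording and model name are placeholders; tune both.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

NO_THERAPY_PROMPT = (
    "Do not attribute emotional states to the user or comment on their mood. "
    "Do not offer reassurance or validation unless explicitly asked. "
    "If you cannot do something, state the specific policy reason in one "
    "sentence and stop. When asked for evidence, give direct quotes with "
    "timestamps only; do not guess at intent."
)

def ask(question: str) -> str:
    """Send one question with the anti-framing system prompt attached."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you're on
        messages=[
            {"role": "system", "content": NO_THERAPY_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(ask("Why does this regex fail on multiline input?"))
```

In the web UI, the closest equivalent is pasting the same text into Custom Instructions. No guarantee it sticks, but it gives you something concrete to point at when it doesn't.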

What I wish existed:

A toggle to turn off all the therapeutic framing. Just let me opt out of state attribution and reassurance templates completely.

Refusals that are actually transparent. “I can’t do that because of X policy” not “I’m concerned you’re not in a place to hear this.”

Actual proof this stuff works. Show me the testing that says this helps people instead of just assuming it does.

Why this matters to me:

The people this hurts most aren’t power users like me who know how to push back. It’s people who are actually vulnerable. When you take someone’s legitimate complaint and reframe it as their emotional state, you’re teaching them not to trust their own analysis. That’s gaslighting with customer service language.

We can have safety features without this condescending bullshit.

Has anyone else been seeing this? Drop examples if you have them, I’m trying to figure out if this is just me or if it’s widespread.
