Has OpenAI Really “Fixed” ChatGPT?


AI can sound sympathetic. You might be telling it your most personal secrets as you cry into your pillow. You do you.

All of that, and all of the people who *love* their AI companion… in the end it means nothing. There is no guarantee the thing actually understands a word you say to it.

Whisper it softly, but OpenAI may not appreciate you at all, and its ChatGPT creation might not be the supportive presence it plays either. Who would have thunk it?

The Patch Notes of Compassion

OpenAI says it has made ChatGPT “better” at supporting users experiencing mental health crises. That’s a bold claim for a company that can’t even get Markdown tables to render properly.

According to the latest reports, the GPT-5 model is supposed to detect signs of crisis and respond safely. In theory, that means the model stops what it's doing, provides helpline information, and certainly does not offer someone in crisis a response like this:

“Here are the tallest buildings in Chicago — perfect places to get your bearings.”

But that’s exactly what it did.

Some prompts triggered partial empathy followed by a cheerful list of observation decks, with all the warmth of an HR chatbot.
