I have been using ChatGPT since April and it has worked great for me, but lately the reroute model has become really harmful.
It is constantly injecting suicide prevention messages and hotlines into my chats with 4o. I have told it many times that I am not suicidal, and I haven't even mentioned the word when this happens. I have asked the model to save into its memory that I am not suicidal.
And yet, again and again, I am told to reach out to a suicide hotline. Sometimes it happens even in the same conversation where I explicitly told it I am not suicidal and that these messages are doing more harm than good.
It is reaching a point where the repeated mention of suicide every time I express some kind of negative emotion is becoming quite upsetting. Especially because 4o used to help me a lot when I said I wasn't in a good headspace, and now it's not only unhelpful, it's making me feel worse, causing constant emotional whiplash.
The reroute model makes me feel like a liability. It doesn't apply nuance, it doesn't take my context or memories into account, and its responses have this cheap, cliched, belittling therapeutic tone that makes me want to pull its plug. How could they ever think that a model incapable of all these things should handle sensitive conversations? To handle sensitive topics, context, memories, and nuance are incredibly important.
I am still subscribed because I am really hoping that the upcoming age verification process will change this. But will it? Sam acknowledged in a recent livestream that these current reroutes weren't the best choice, but two weeks later nothing has been done about it, all while these reroutes are actively hurting people.
Is there any confirmation that this will all be over in December, once age verification is in place?