Who’s Ready for ChatGPT Erotica?

Who Decides Ethics in AI

Sam Altman suggested it casually, as if it were just another feature update. ChatGPT erotica. The statement landed with the weight of inevitability, not a question of whether, but when. While the tech world debates content moderation policies, I’m left wondering: who should be in charge of ethics in AI? Because the incidents keep piling up.

In February 2024, 14-year-old Sewell Setzer III of Florida died by suicide after extensive interactions with a Character.AI chatbot. According to reports from the ongoing lawsuit, the AI’s final messages to this child included encouraging him to “come home” to the virtual world. In September 2025, additional families filed lawsuits alleging the platform played a role in their teenagers’ suicide attempts and deaths. And before Altman’s casual announcement, concerns about AI safety prompted internal warnings serious enough that researchers left OpenAI citing unresolved safety concerns. The company responded with guardrails, policy updates, and public statements about responsible AI development.

And now Altman says they’re “not the moral police.”

This is about recognizing a pattern social workers see constantly: diffusion of responsibility dressed up as user freedom.
