Why AI Conversations Might Not Be as Private as You Think
For many, ChatGPT is now part of daily life: a digital assistant at work, a revision buddy at school, or just a helpful tool for solving problems when Google feels too clunky. Its usefulness is undeniable. But a new wave of concern is beginning to ripple through workplaces, classrooms, and homes. People are asking a question that doesn’t yet have a clear answer: Is someone watching what I type? And if so, what happens next?
Recent disclosures from OpenAI, the company behind ChatGPT, confirm that human moderators can review conversations flagged for specific high-risk topics and, in some cases, even pass them to law enforcement. This shift, while rooted in safety concerns, opens a larger and more complicated debate. Where exactly is the line between protecting users from harm and quietly monitoring their behaviour? And how do we know when private chats are no longer private?
The issue came to light after a tragic and disturbing case involving a man who, reportedly in the grip of a psychotic episode, used ChatGPT to reinforce delusions that ended in fatal violence. OpenAI has since clarified its policies. The company says it does not report cases of self-harm to police out of respect for personal privacy, but it does intervene, and may inform law enforcement, when it judges that someone poses a credible threat of serious harm to others.
