This isn’t an accident. AI’s staggering scale reveals a massive unmet need — and forces a reckoning with tech’s role in our deepest vulnerabilities.
Let this number sink in: over 1 million people are estimated to talk to ChatGPT each week in ways that explicitly indicate potential suicidal planning or intent, according to a new safety report from OpenAI.
That’s potentially 7.5 times the weekly volume of the US 988 Crisis Lifeline. It dwarfs vital services like The Trevor Project, which reached roughly 4,440 contacts weekly in 2024. Add the estimated 9 million weekly messages (0.15% of total messages) merely touching suicidal ideation, and the reality hits hard: a general-purpose AI has become an unwilling epicenter of human despair.
But this isn’t an accident. This isn’t a crisis AI created. It’s a crisis AI revealed. This staggering volume isn’t new demand; it’s latent demand: millions seeking an outlet that our broken, inaccessible, and often stigmatized mental health infrastructure failed to provide. People aren’t turning to algorithms at random; they’re turning to them because, for many, the other doors were closed.
