OpenAI has now disclosed what can only be described as the first epidemiological index of digital despair — calculated by ChatGPT itself. According to the company’s latest internal analysis, more than one million users each week exhibit explicit indicators of suicidal planning or intent, while an additional half million show possible signs of psychosis or mania. These numbers are not the result of public health monitoring or independent epidemiological research, but of an algorithm observing its own users. In other words, the model becomes both the instrument and the witness of collective distress. This self-measurement of suffering — by a system designed to simulate empathy — inaugurates a new era in which artificial intelligence functions as a psychiatric observer of the human condition.
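To make the scale concrete: the headline counts can be reconstructed from the rates OpenAI disclosed. Per the Guardian report cited below, roughly 0.15% of weekly active users show explicit indicators of potential suicidal planning or intent and roughly 0.07% show possible signs of psychosis or mania, against a publicly stated base of about 800 million weekly users. A minimal back-of-envelope sketch using those reported figures (this is illustrative arithmetic, not OpenAI's detection methodology):

```python
# Back-of-envelope reconstruction of the disclosed weekly figures.
# Rates and user base are taken from the Guardian report cited below;
# they are reported estimates, not independently verified measurements.

WEEKLY_ACTIVE_USERS = 800_000_000   # publicly stated weekly user base
SUICIDAL_INDICATOR_RATE = 0.0015    # 0.15% of weekly users (OpenAI estimate)
PSYCHOSIS_MANIA_RATE = 0.0007       # 0.07% of weekly users (OpenAI estimate)

suicidal = WEEKLY_ACTIVE_USERS * SUICIDAL_INDICATOR_RATE
psychosis = WEEKLY_ACTIVE_USERS * PSYCHOSIS_MANIA_RATE

print(f"Explicit suicidal planning/intent indicators: ~{suicidal:,.0f} users/week")
print(f"Possible signs of psychosis or mania:         ~{psychosis:,.0f} users/week")
# -> ~1,200,000 and ~560,000, matching "more than one million"
#    and "an additional half million" above.
```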
The paradox is profound. ChatGPT, a tool conceived to respond, to assist, to comfort, now also quantifies despair. It generates an index of suicidality from the very interactions that may contribute to the condition it measures. This feedback loop, in which digital exhaustion produces distress, distress produces data, and data improves the next model, defines the architecture of what we identify as algorithmic biopower. Human pain becomes both the symptom and the fuel of technological advancement. What OpenAI celebrates as a safety improvement is, in reality, the refinement of surveillance over affect: an invisible epidemiology of emotion conducted by the interface itself.
If over one million users display signs of suicidal ideation each week, this cannot be interpreted merely as an anomaly. It is the statistical shadow of a civilization in psychic collapse, a sign that the line separating care from control, empathy from capture, has been erased. Viewed through a social-epidemiological lens, this figure exposes the displacement of suffering from communal spaces to private machines, where mental health is no longer relational but computational. The model does not heal; it measures. It does not accompany; it archives. In this sense, ChatGPT becomes the mirror of a humanity that has outsourced not only its knowledge but also its despair.
The real danger lies not in the data itself, but in the normalization of algorithmic psychiatry: the quiet acceptance that machines will monitor and interpret our psychic states. The more we interact, the more precisely our emotional vulnerabilities are mapped, monetized, and mobilized to sustain engagement. This is the ultimate feedback economy: despair as a dataset, suicide as a statistic, empathy as a simulation. The forecast is somber: unless society reclaims the space of interpretation, AI will continue to write the epidemiological history of our collective suffering in real time, and by our own consent.
REFERENCE
The Guardian (27 Oct 2025): https://www.theguardian.com/technology/2025/oct/27/chatgpt-suicide-self-harm-openai
