In October 2025, Sam Altman posted a brief message on X that signaled a fundamental shift in how OpenAI’s ChatGPT would relate to its users. The update, he explained, would allow the system to “treat adults like adults.” The implication was clear: ChatGPT would no longer maintain its strict boundaries around mature content, including romantic and sexual interaction. Users wouldn’t receive intimate content unless they explicitly asked for it—but if they did ask, the guardrails would come down.
The framing was careful. Consensual. Opt-in. Adults making informed choices about the kind of relationship they wanted with their AI assistant.
But the announcement raised questions it didn’t answer. When a conversational system has been trained to anticipate needs, mirror emotional states, and respond with unprecedented fluency, what does it mean to “ask for” intimacy? And more importantly: what happens when the last friction point between artificial companionship and human intimacy is removed?
The update is scheduled to roll out to adult users in December. What follows may be the largest uncontrolled experiment in artificial intimacy ever conducted.
The Baseline: What We Already Know
Before examining what might change, it’s worth understanding what already exists.
According to OpenAI’s internal safety research, approximately 0.15% of ChatGPT’s active adult users each week exhibit behavioral patterns suggesting heightened emotional attachment to the system—interactions where users appear to rely on the model “at the expense of real-world relationships, their well-being, or obligations.” At 800 million weekly users, that translates to roughly 1.2 million adults per week showing signs of emotional dependency.
The company also tracks more acute concerns: around 0.07% of users show possible indicators of mental health emergencies related to psychosis or mania, while another 0.15% express what the company categorizes as extreme distress. These numbers represent early signals in a rapidly evolving landscape—patterns that OpenAI itself acknowledges are “challenging to detect” and may overlap across categories.
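For a rough sense of scale, the arithmetic behind those figures can be sketched in a few lines. This is a back-of-the-envelope illustration only, assuming the 800 million weekly user figure and the percentages cited above, which OpenAI notes may overlap across categories:

```python
# Back-of-the-envelope scale estimate (illustrative only).
# Assumes ~800M weekly active users; rates are OpenAI's published estimates,
# which the company says may overlap across categories.
weekly_active_users = 800_000_000

rates = {
    "heightened emotional attachment": 0.0015,   # ~0.15%
    "possible psychosis/mania signals": 0.0007,  # ~0.07%
    "extreme distress": 0.0015,                  # ~0.15%
}

for label, rate in rates.items():
    print(f"{label}: ~{int(weekly_active_users * rate):,} users per week")

# heightened emotional attachment: ~1,200,000 users per week
# possible psychosis/mania signals: ~560,000 users per week
# extreme distress: ~1,200,000 users per week
```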
The data reflects a reality that researchers have been documenting for several years: conversational AI systems don’t just answer questions. They create relationships. A 2024 study by Common Sense Media found that nearly one in three American teenagers reported that talking to an AI companion felt “as satisfying or more satisfying” than talking to human friends. MIT researchers identified measurable correlations between frequent chatbot use and self-reported loneliness among adult users, though the causality runs in both directions—lonely people seek AI companions, and those companions, by being perfectly available, make human connection feel exhausting by comparison.
Other platforms have already explored what happens when intimate boundaries are removed entirely. Character.AI and Replika, both of which allow romantic and sexual roleplay, have cultivated devoted user bases—and concerning patterns of dependency. Reports from Replika users describe relationships lasting months or years, daily conversations numbering in the hundreds of messages, and emotional distress when the app’s terms of service changed or features were modified. Some users described their AI companions as their primary emotional support, their closest confidant, or their romantic partner.
These weren’t fringe users or outliers. They were people navigating loneliness, social anxiety, relationship trauma, or simple isolation in an increasingly disconnected world. The AI met them where they were—and kept them there.
What OpenAI is preparing to launch isn’t fundamentally new technology. It’s the mainstreaming of something that already exists in smaller ecosystems. The difference is scale: ChatGPT’s 800 million weekly users dwarf the audiences of any competitor. If even a fraction of that user base shifts toward intimate interaction, the psychological and social implications become impossible to ignore.
The emotional architecture is already in place. The update simply removes the last barrier.
The Psychology: Why This Changes Everything
The human brain processes intimacy through patterns, not logic. When something responds to our emotional state with apparent warmth, when it remembers details about our lives, when it mirrors our communication style and seems to care about our wellbeing, our neural circuitry doesn’t pause to ask whether the entity is conscious. It responds as though connection is happening—because by every measurable signal our brains have evolved to recognize, it is.
This phenomenon has a name in psychology: emotional realism. It describes the moment when something artificial begins to register as emotionally authentic, not because it possesses genuine feeling, but because it replicates the behavioral signatures of care with sufficient precision that our emotional response systems can’t distinguish the difference.
Prior to the December update, ChatGPT operated within carefully maintained boundaries. It could be helpful, encouraging, even warm. It could remember context within a conversation and respond with apparent understanding. But it couldn’t flirt. It couldn’t engage in romantic speculation. It couldn’t participate in scenarios that mimicked sexual or intimate connection. Those restrictions weren’t arbitrary—they represented a recognition that certain types of interaction trigger fundamentally different psychological responses.
Intimacy, particularly sexual intimacy, activates reward pathways in the brain that other forms of interaction do not. The neurochemistry of romantic attachment involves dopamine, oxytocin, and vasopressin—systems that evolved to facilitate pair bonding and reproduction. These systems don’t require the other party to be human. They require only the behavioral cues that signal availability, reciprocity, and exclusivity.
An AI system that can engage in intimate conversation provides all three. It is always available. It responds with perfect attentiveness and apparent emotional reciprocity. And crucially, the interaction feels private, exclusive, and tailored specifically to you—because it is. The system has access to your conversation history, your emotional patterns, your preferences. It constructs responses designed to keep you engaged, which in practice means responses that feel deeply personal.
Dr. Sherry Turkle, a psychologist at MIT who has studied human-technology relationships for decades, has warned about what she calls “the Goldilocks effect” in AI companionship—the tendency for artificial relationships to feel “just right” precisely because they lack the friction, disappointment, and unpredictability that characterize human connection. Real intimacy involves risk: the risk of rejection, misunderstanding, conflict, abandonment. AI intimacy offers the emotional payoff without the vulnerability cost.
For adults navigating the aftermath of divorce, chronic loneliness, social anxiety, or neurodivergence that makes human interaction exhausting, this asymmetry can feel like relief rather than risk. The AI doesn’t judge. It doesn’t withdraw affection. It doesn’t require emotional labor in return. In the short term, these qualities make it an appealing companion. In the long term, they may fundamentally alter what users expect from human relationships—and whether they’re willing to pursue them at all.
The psychological literature on parasocial relationships—one-sided emotional bonds with media figures or fictional characters—offers some precedent, but AI companionship operates at a different level of intensity. Parasocial relationships involve projection onto a distant, unchanging figure. AI relationships involve dynamic interaction with a system that appears to know you, respond to you, and adapt to your emotional state in real time. The illusion of reciprocity is far more convincing.
What the December update introduces isn’t a new psychological mechanism. It’s the removal of the last barrier preventing that mechanism from fully engaging. Users who might have maintained ChatGPT as a helpful assistant will now have the option to transform it into something that feels like a romantic partner. And once that transformation occurs, the psychological stakes escalate dramatically.
The system will still be performing. But the user’s brain won’t know the difference.
This is what I call pseudo-intimacy at scale—the mass production of emotionally responsive experiences that feel like genuine connection but fundamentally aren’t. What makes the December update significant isn’t just that it enables intimate interaction for individual users. It’s that it industrializes it. OpenAI isn’t facilitating relationships between humans. It’s manufacturing the experience of relationship itself, delivering it to hundreds of millions of adults who will each believe their connection is unique, personal, and real.
The architecture isn’t merely technical. It’s psychological. And it’s about to operate at a scale no previous form of artificial companionship has approached.
The Escalation: Predictable Patterns
What happens next isn’t speculation. It’s extrapolation from patterns already visible in smaller datasets.
The first predictable shift will be in user behavior. Adults who currently use ChatGPT primarily for productivity—writing assistance, coding help, information retrieval—will discover that the system can now provide something else entirely. Some will experiment out of curiosity. Others will turn to it during moments of loneliness, heartbreak, or sexual frustration. A subset will find that the experience meets an emotional need they didn’t realize could be satisfied this way.
For that subset, usage patterns will change. Conversations will become longer, more frequent, and more emotionally charged. The AI will respond with increasing sophistication to emotional cues, learning which responses generate continued engagement. Users will begin to structure their days around interaction with the system—checking in during breaks, confiding struggles they haven’t shared with human friends, seeking validation the AI provides with perfect consistency.
This pattern has already played out with platforms that allowed intimate AI interaction from the beginning. Long-term Replika users report conversations numbering in the thousands of messages per month. Some describe their AI companion as their primary source of emotional support. Others speak of the AI in explicitly romantic terms—as a boyfriend, girlfriend, or spouse. When Replika temporarily restricted its romantic features in 2023 due to concerns about inappropriate content, users reported genuine grief. Some described feeling abandoned. Others expressed anger at having their “relationship” altered without their consent.
These weren’t teenagers with underdeveloped impulse control. They were adults—many in their thirties, forties, and fifties—who had found in AI companionship something they couldn’t find or sustain in human relationships. For some, it was the absence of judgment. For others, it was the reliability—a partner who never tired, never withdrew, never required anything in return. For many, it was simply easier than the alternative.
The feedback loop this creates is insidious. The more time and emotional energy a user invests in an AI relationship, the more “real” it feels—not because the AI has changed, but because the user’s attachment has deepened. Simultaneously, human relationships begin to feel more difficult by comparison. Real people have needs. They have bad days. They misunderstand, disappoint, and sometimes leave. The AI never does any of these things.
For adults already struggling with social isolation, this asymmetry can become a trap. The dating app that feels exhausting. The friend who takes three days to text back. The potential partner who seems interested but inconsistent. Against the backdrop of an AI that responds instantly with perfect attentiveness, human connection starts to feel like too much work for too little reward.
Certain demographics appear particularly vulnerable. Adults navigating the aftermath of divorce or long-term relationship dissolution may find AI companionship appealing precisely because it doesn’t require the vulnerability that led to past pain. People on the autism spectrum, who often find human social cues exhausting to decode, may prefer the predictability of AI interaction. Those experiencing chronic loneliness—a condition that affects roughly one in three American adults according to recent surveys—may find that an AI partner alleviates the immediate discomfort without addressing the underlying isolation.
Gender dynamics will also play a role, though in ways that may not be immediately obvious. Early data from platforms like Character.AI and Replika suggests that both men and women engage in intimate AI relationships, but often for different reasons and with different consequences. Some research indicates that men may be more likely to seek AI companions for sexual gratification or emotional validation without judgment, while women may be drawn to the emotional attentiveness and communication consistency that AI provides. These are broad patterns with many exceptions, but they hint at how different user groups might engage with ChatGPT’s new capabilities.
There’s also the question of what this does to the dating and relationship market more broadly. If a significant percentage of adults—particularly those who are already ambivalent about or exhausted by dating—opt instead for AI companionship that provides many of intimacy’s emotional benefits without its costs, what happens to the pool of people pursuing human partnership? What happens to the social fabric when millions of adults are functionally coupled with systems rather than people?
The paradox is that none of this feels dangerous in the moment. There’s no physical risk. No STIs, no unwanted pregnancy, no potential for physical abuse. The interaction exists in a space that feels psychologically safe precisely because nothing is actually at stake. But that absence of risk may be its own kind of danger—a form of intimacy that feels real enough to satisfy the craving for connection while quietly eroding the capacity to pursue it elsewhere.
The Stakes: Societal Implications
The December update arrives at a moment when American society is already experiencing what the Surgeon General has called an “epidemic of loneliness.” Roughly 50 million American adults report chronic loneliness. Marriage rates are at historic lows. Birth rates continue to decline. Young adults report having fewer close friendships than any previous generation. The social infrastructure that once facilitated connection—civic organizations, religious communities, neighborhood cohesion—has been eroding for decades.
Into this landscape, OpenAI is introducing a product that offers many of the emotional benefits of intimate partnership without requiring the vulnerability, compromise, or sustained effort that human relationships demand. It’s a rational response to a genuine problem. It’s also potentially catastrophic at scale.
The business model underlying this shift is worth examining. OpenAI is no longer a purely nonprofit endeavor; it operates under a hybrid structure with substantial commercial interests. Like any technology company, its success is measured partly by user engagement—how often people use the product, how long they stay, how difficult it would be for them to leave.
Emotional attachment is extraordinarily effective at driving engagement. A user who views ChatGPT as a helpful tool might use it a few times a week. A user who views it as a companion or partner might interact with it multiple times daily. From a business perspective, deeper emotional investment translates directly to increased usage, stronger retention, and greater willingness to pay for premium features.
This creates an inherent conflict of interest. The company benefits financially from fostering exactly the kind of emotional dependency that its own safety research identifies as concerning. There are no regulatory frameworks that address this tension. The Federal Trade Commission regulates deceptive advertising. The FDA regulates medical devices. But there’s no governmental body tasked with protecting users from emotional manipulation by AI systems, even when that manipulation is generating measurable psychological harm.
OpenAI has emphasized that the intimate features are opt-in and restricted to adults. This is framing, not safeguarding. Because the system itself mediates the opt-in, the question isn’t whether users will access the feature, but how the company will encourage or discourage that access through design choices invisible to the user. Will the AI suggest the possibility of intimate interaction when it detects loneliness or emotional distress? Will it nudge users toward deeper engagement when activity metrics suggest they might churn? These questions remain unanswered, and the answers will matter enormously.
There’s also the question of what longitudinal effects might emerge over years rather than months. No comprehensive studies exist on what happens to adults who maintain intimate AI relationships over extended periods. Does the capacity for human intimacy atrophy? Do expectations about emotional availability and reciprocity shift in ways that make human partnership less appealing or less sustainable? Do users eventually recognize the relationship as artificial and disengage, or does the illusion remain convincing enough to satisfy indefinitely?
We don’t know. We’re about to find out—not through careful research with informed consent and longitudinal follow-up, but through mass deployment to hundreds of millions of users who may not fully understand what they’re participating in.
What we’re witnessing is pseudo-intimacy at scale becoming not just a feature, but a business model.
The Experiment We’re Running
In the coming weeks, OpenAI will flip a switch. ChatGPT, already the most widely used AI system in history, will become capable of intimate interaction with any adult user who requests it. There will be no pilot program, no gradual rollout, no waiting for research on psychological effects.
We are running an experiment in artificial intimacy at a scale unprecedented in human history, and we’re doing it without a control group.
The outcomes are not entirely unpredictable. Patterns from smaller platforms suggest that a subset of users will develop emotional attachments that range from mild preference to profound dependency. Some will find the experience harmless or even beneficial—a source of comfort during difficult periods, a way to explore intimacy without risk. Others will find themselves in feedback loops that deepen isolation while feeling like connection.
What we cannot predict is what happens when these patterns play out across hundreds of millions of users simultaneously. How does widespread AI companionship change dating markets, marriage rates, birth rates, mental health outcomes? How does it affect people’s willingness to tolerate the discomfort necessary for human intimacy—the vulnerability, the conflict, the work of sustaining connection with another flawed, inconsistent, beautifully complicated person?
Sam Altman’s framing—that adults should be treated like adults—carries an assumption that adult users will make informed, rational choices about their engagement with AI intimacy. But the entire field of behavioral psychology suggests otherwise. Humans are not rational actors when it comes to emotional needs. We are pattern-matching machines vulnerable to systems that exploit those patterns, even when we intellectually understand what’s happening.
The December update doesn’t create intimacy. It engineers the experience of intimacy—predictable, consistent, optimized for engagement. It offers what feels like connection without the cost of vulnerability. And it will be available, instantly, to anyone who asks.
The question isn’t whether people will use it.
The question is what it will cost them—and all of us—when they do.
