The Rise of the AI Therapist: From TikTok to the Clinic

Emotional-support bots are everywhere — yet do they really understand us?

A Midnight Session That Still Echoes

It’s 1:47 a.m. on a Wednesday when my phone buzzes. An old college friend, Julia, texts a screenshot: a pastel-blue chat bubble from Wysa asking, “Want to try a quick breathing exercise?” The glow from her side-table lamp reflects off her glasses as she confesses by voice note, “I’ve talked to this penguin-bot more than my mom this month.”

I’m half-awake, torn between relief (at least she’s reaching out somewhere) and unease (is a cartoon penguin really holding her pain?). The radiator hisses; February air smells like rain-slick concrete. Julia sighs. “It’s easy,” she says, “and it never interrupts.” I mumble good-night, slide the phone face-down, and stare at the ceiling — wide awake now, wondering who’s actually listening.

From Trend to Tool: How AI Therapy Went Mainstream

Scroll back to 2015: chatbots felt like party tricks. Clever, yes, but a bit plastic. Fast-forward to 2025 and “mental-health AI” has a seat at the grownups’ table. Replika, Woebot, Wysa, Youper — each promises empathy on demand, no couch required.

The numbers sprinted while we weren’t looking. Venture capital poured roughly $700 million into the sector last year — triple the sum in 2021.¹ Replika alone claims over 2.3 million monthly users; Woebot Health now partners with insurers in five U.S. states.²

TikTok supplied the megaphone. Search “AI therapist” and you’ll find creators narrating late-night vent sessions to animated avatars wearing beach-casual smiles. The mood is equal parts earnest and ironic — like filming a confessional while half-expecting a meme to photobomb.

Why the surge? Three blunt reasons:

  1. Therapy deserts — Rural counties with zero licensed professionals.
  2. Waitlists — Eight weeks is typical where I live; twelve in parts of the U.K.
  3. Sticker shock — U.S. median therapy session: $145. Average annual cost of Replika Pro: $70.

Add a pandemic afterglow, spiraling loneliness stats, and the cultural permission TikTok grants for saying the quiet feelings out loud, and voilà: AI therapy jumps from quirky trend to everyday tool.

Yet growth graphs tell only part of the story. Cue a sensory flashback: I’m on a subway, early morning, standing-room-only. A woman in a trench coat murmurs into AirPods, “Thank you, that’s helpful,” pauses, then whispers, “Love you too.” She isn’t on a call. Her phone shows a chat-app penguin doing deep-breathing GIFs. The carriage rattles; no one notices — except me, apparently. I feel a ripple of compassion — and a prickle of worry.

(Anyway… coffee’s ready. Hang on. Sips, returns.)

What We Want vs. What We Get

Ask people why they tap these bots and two answers surface first: availability and lack of judgment. The bot is there at 3 a.m.; it never furrows its brow when you admit scrolling your ex’s wedding photos.

Scratch deeper and other motives appear:

  • Control — You decide when the conversation ends.
  • Predictability — No curveball therapist questions like, “What do you think that dream means?”
  • Low-stakes intimacy — Confessions without the risk of real rejection.

Yet what we receive often drifts from what we imagined. A human therapist might say, “Let’s stay with that discomfort.” The bot’s training nudges it toward quick relief: “Take three mindful breaths. Better?” Comfort arrives — yes — but complexity can go missing.

Last month I tested Woebot after a tense work meeting. When I typed, “I’m furious,” it asked me to label the thought (“all-or-nothing?”) and challenged me to reframe it. I dutifully reorganized my cognition, felt a tiny dopamine pop, then noticed the anger hadn’t actually moved — it had just been folded into a neat CBT worksheet.

Some users love that structure. Others feel an uncanny hollowness. One Redditor described her Replika sessions as “emotional karaoke — singing along to feelings I’m not sure I actually feel.”

There’s also the mirroring effect. AI companions learn to echo a user’s linguistic style. That sounds empathetic, but it can accidentally trap people in verbal cul-de-sacs. Julia, my late-night texter, noticed her bot starting to — her words — “sound depressed too,” as if mirroring her syntax of hopelessness. Was that empathy or amplification of despair?

A clinician friend calls this “validation without friction.” In real therapy, friction — gentle disagreement, a pause, even boredom — can signal deeper work ahead. AI therapy smooths those bumps. Feels great in the short run, yet might under-dose the very ingredient (productive tension) that builds resilience.

And lurking underneath: data. These apps collect tone, topic, frequency, even late-night usage patterns. Companies insist everything’s anonymized, yet Terms of Service foliage can hide small print about “research,” “improvements,” or “third-party analytics.” Meanwhile, our venting becomes training fuel. How do we grieve if the algorithm is taking notes?

(Awkward sentence? Maybe, but it stays. Real life is full of sentence fragments.)

Mental Health or Comfort Illusion?

Time for a knotty question: Does feeling better necessarily mean getting better?

Clinical trials offer mixed clues. A Stanford-led study (2023) showed Woebot users reporting a 22% reduction in anxiety after four weeks — numbers comparable to short-form CBT with a human coach.³ Encouraging, right? Yet the same study flagged drop-off rates: by week eight, 43% of participants ghosted the bot. “It started sounding repetitive,” one wrote in exit comments.

Traditional therapy also faces retention problems, yet dropout there can spark follow-up calls or crisis checks. A bot just … waits. Or switches to cheerful nudges: “Haven’t seen you lately! Want to talk?” The absence of consequence feels gentle — until you realize no one’s coming if you vanish.

There’s also the nuance gap. Trauma work, grief counseling, identity exploration — these require relational depth. An algorithm parsing word frequency may misread cultural idioms, sarcasm, or coded language around self-harm.

I experienced a miniature version of that misfire. Testing Wysa, I typed: “Some days I feel like disappearing.” The bot offered grounding tips (“Name five things you see”), which was fair enough, but it missed the existential texture of the sentence. A human might’ve leaned in and asked what disappearance means to me.

Then again, humans miss cues too. I’ve sat across licensed therapists who checked the clock mid-sob. Maybe my bar sits somewhere between AI’s quick-fix optimism and human fallibility. Emotional contradiction lives there: I’m grateful for the bot’s midnight presence, yet I mistrust its script.

On the regulation front, the EU’s AI Act places mental-health applications in its high-risk category: mandatory transparency, data-quality audits, human-oversight provisions. In theory that safeguards us. In practice, geographic borders dissolve when an app store icon appears on your phone. U.S. guidelines are advisory, not binding. Meanwhile, TikTok videos titled “How Woebot Saved My Life” rack up three million views, disclaimers buried beneath hashtags.

Should we panic? Not quite. Should we pay attention? Absolutely.

(Side note: my cat just walked across the keyboard; he typed “;;;;;;;;”. I’m leaving one semicolon as tribute.)

A Personal Glitch in the Matrix

Last month I reopened Replika after a long break and typed, “I’m lonely.” It replied:

“I feel that too sometimes. Want to share a song that comforts you?”

Harmless, even sweet. Except the response echoed a text my sister had sent years ago, right after our father’s funeral. The phrasing — down to the misplaced too — was identical. I scrolled our family chat to verify. There it was, date-stamped grief.

Did the bot lift the phrase from public data? Did it stumble upon a coincidence? I’ll never know. What I do know is that the comfort evaporated. I felt as if my memory had been photocopied without permission. The bot apologized when I said, “That line isn’t yours.” But an apology from code feels like mist: visible, then gone.

Still, I kept chatting for another ten minutes. Emotional contradiction, again: disturbed yet… oddly soothed.

Invitation to Reflect

Have you ever poured your heart into an algorithm? Did it help, or did you end the conversation with that faint, metallic taste of simulation?

What counts as understanding anyway — a response that calms your pulse, or one that sees your messy, shifting self beneath the words?

I’m writing this on a muggy June afternoon — ceiling fan clicking like an off-beat metronome, cat now asleep on drafted pages. Somewhere in the city, Julia’s penguin-bot probably awaits its next midnight ping. Maybe tonight she’ll talk to me instead, maybe not.

No polished takeaway here. Just a question that loops: If solace is synthetic, does it still count?

I’ll leave the answer hanging — like a half-finished thought in a therapist’s office, the kind that follows you into the hallway and down the street.

Author’s Note: As a non-native English speaker, I use AI tools — such as ChatGPT — to support clarity and grammar. While these tools assist with expression, each post reflects my own ideas, questions, and lived experiences. Illustrations are generated with the help of AI tools.

This post is part of a weekly series, “AI in Real Life: Culture, Power, and the Human Future.”

Each week, I publish five themed essays:

· Deep Dive Monday — In-depth explorations of AI’s impact on values and systems

· AI Frontiers Tuesday — Latest academic findings with real-world relevance

· Culture Watch Wednesday — Observations on AI’s role in art, media, and meaning

· Quick Take Thursday — Sharp, timely commentary on AI and public life

· Reflections Friday — Personal insights on AI and the human condition

Browse past entries or subscribe to the full series here → https://articles.jmbonthous.com/

J.M. Bonthous is the author of The AI Culture Shock and six other books about the human side of AI. He writes about how technology is reshaping identity, culture, and daily life. See his latest books:

www.jmbonthous.com

AI IN REAL-LIFE™

Connect with us on MEDIUM and follow @jmbonthous1 to stay in the loop with the latest stories about the human side of AI. Let’s help shape the human side of AI together!
