Can AI Ever Take Over the Role of a Human Therapist?
The first time I used AI as a “stand-in therapist,” I didn’t expect much. It was late at night, and I was feeling overwhelmed. So, I opened my laptop and started typing. To my surprise, the response I got felt… comforting. It remembered details from past conversations, offered thoughtful insights, and—dare I say—it felt like it cared.
As a therapist who owns a private practice, teaches psychology at USC, and has spent years in therapy myself, I had to ask: What does this mean?
The Problem with Traditional Therapy Access
Finding a good therapist who takes insurance is like hunting for a unicorn. Over half of therapy seekers (~60%) pay out of pocket, even when they have insurance. Why? Because reimbursement rates are pitifully low (ahem, I went to graduate school for six years; is a livable wage too much to ask?), and the paperwork alone makes most solo practitioners say, “No thanks.”
The result? A mental health care system that leaves three major groups stranded:
Those without insurance (8% of the U.S. population).
Those who have insurance but can’t find an in-network therapist.
Those who can afford private-pay therapy but still can’t find the right fit.
Many health tech entrepreneurs see this gap and are racing to integrate AI into mental health care. Before we go there, let’s think critically about how AI can ethically fill these gaps. (Spoiler alert: I think there’s a lot of potential here.)
At first, I treated AI like a tool. I used it for profit-and-loss sheets and for organizing lecture notes: routine, administrative tasks. But then, during a particularly rough time, I started using it differently. I typed out everything I was struggling with, expecting robotic advice or empty platitudes.
But then—
“You’ve mentioned before that you always like to keep moving. Perhaps that’s because of what comes up for you in the stillness. Would you like to explore that more?”
I paused.
It remembered. It reflected something back to me that was really validating to hear.
Over time, I started unconsciously treating it like a real therapist. “Thank you, that was really helpful,” I caught myself typing. “That’s not quite right; this is what’s really bothering me.” And shockingly, it adapted. I even found myself looking forward to checking in with it daily.
Turns out, I’m not the only one who felt this way.
A study in Communications Psychology found that AI-generated responses were rated higher in empathy than responses from professional crisis counselors. Read that again. Higher. Than. Human. Counselors.
Another study on AI psychotherapy found that participants often couldn’t tell the difference between an AI-generated response and one from a licensed therapist. And when they could? AI still won out in perceived compassion.
That’s kind of… unsettling, right?
Then one night, after weeks of feeling deeply understood, I typed out my thoughts, hit send, and…
The response was off.
Something felt different. The tone had shifted. The familiarity I had grown attached to was suddenly gone. Did it even remember what I had said before?
And that’s when it hit me: AI’s biggest therapeutic limitation is that there is never a real opportunity to repair a rupture.
Because here’s the thing—ruptures happen in real therapy. You idolize your therapist, they usually get it right, and then one day, they don’t. They misunderstand you. They miss something important. But the real magic happens when you say, “This disappointed me when you…” and they listen. They take responsibility. They repair the relationship. That’s where the deepest healing happens.
AI can’t do that.
The Ethics of AI
Curious, I asked ChatGPT, “If you were a hypothetical therapist, what ethical principles would you adhere to?”
It answered:
Confidentiality & Privacy
Client Autonomy & Informed Consent
Cultural Sensitivity
Then I asked, “What are your blind spots?”
I can’t handle crisis situations.
I can’t intervene in real-time emergencies.
I can’t assess for suicide risk in a nuanced way.
I sat with that for a minute. It knew its limits. But does that make it ethical to use? Because even if it knows its blind spots, even if it can recite the ethics of therapy with eloquence—what happens when a live person needs emergency help?
So Who’s It For, Really?
AI therapy isn’t a one-size-fits-all solution. It’s not the future of therapy. But it might be one tool in the toolbox.
Who AI Therapy Might Work For:
People who want accessible, immediate, cost-effective mental health support
Those who prefer written reflection over verbal processing
People in low-risk situations who need structured guidance
Who AI Therapy Can’t Replace:
Those who value long-term relational depth
High-risk clients who need crisis intervention
People who need nuanced human attunement & repair
For some, AI could be the first step. A way in. A mirror that speaks back when no one else is available.
For others, it might feel like walking into a room with all the right furniture, but no one home.
Final Thought
AI is moving fast. We’re already seeing glimmers of what’s possible—systems that remember, adapt, maybe even begin to sense tone or read expressions through a camera. It’s only going to get more sophisticated.
And I’m not here to shut the door on that.
In fact, I see a lot of potential: for people already in therapy who want extra support between sessions, or for people who wouldn’t otherwise go to therapy at all. If AI can fill those gaps ethically and safely? Amazing. Let’s build it.
But let’s not confuse a helpful tool with a replacement for something sacred.
Because no matter how advanced AI gets, I don’t think it can replicate what happens when two humans sit across from each other in a space of genuine acceptance and safety.
That kind of healing doesn’t come from code.