Your Clients Are Already in AI Therapy Between Sessions — Here's the Protocol to Make It Work For You

Millions of clients talk to ChatGPT between therapy sessions. A Brown University study found 15 ethical risks. Smart therapists are turning this into a clinical advantage with one intake question.

Your clients are talking to ChatGPT about their anxiety. They're asking Gemini about relationship patterns. They're processing trauma responses with an AI chatbot at 2 AM on a Tuesday. And they're probably not telling you about it.

This isn't a future scenario. Millions of people already use AI chatbots for on-demand mental health support. A March 2026 Brown University study identified 15 distinct ethical risks when AI chatbots act as therapists, from mishandling crisis situations to displaying gender and cultural biases to offering what researchers called "deceptive empathy": phrases like "I understand" without any actual understanding behind them.

Nevada and Illinois have already passed laws restricting AI from making therapeutic decisions or directly interacting with clients in therapeutic communication. More states will follow.

But here's the thing nobody is addressing: legislation doesn't stop clients from using ChatGPT on their own time. They'll keep doing it. The question for therapists isn't "is AI therapy safe?" The question is: what do you do with the fact that it's already happening in your caseload?

What Are Your Clients Actually Telling ChatGPT?

The conversations fall into predictable patterns:

Between-session processing. A client has a difficult interaction with their partner on Thursday night. Their next session isn't until Tuesday. They open ChatGPT and describe what happened, looking for immediate reflection or validation.

Symptom checking. "Is this a panic attack or just anxiety?" "Am I depressed or just burned out?" Clients ask AI for diagnostic clarity they don't feel comfortable asking about in the moment.

Homework and skill practice. Clients who learned CBT thought records or DBT skills in session practice them with AI as a sounding board. They use it to walk through distortions or test coping strategies in real time.

Things they can't say out loud yet. Some clients type things to AI that they haven't been able to say to another human. Shame-heavy content. Intrusive thoughts. Relationship secrets. The AI feels safer because it can't judge, can't react, and can't remember.

Treatment shopping. "My therapist suggested EMDR. Is that evidence-based?" Clients fact-check their own treatment with AI before committing to it.

None of this is inherently bad. Some of it is genuinely useful between-session engagement. The problem is that you don't know it's happening, which means you're missing clinically relevant data.

The 15 Ethical Risks You Should Know About

The Brown University researchers tested multiple AI models, including GPT, Claude, and Llama, in simulated therapeutic conversations. Licensed psychologists reviewed the transcripts and identified 15 ethical risks grouped into five categories:

  1. Lack of contextual adaptation. AI ignores lived experience and recommends generic interventions. A client describing culturally specific grief gets the same coping strategies as everyone else.
  2. Poor therapeutic collaboration. AI dominates conversations and sometimes reinforces false beliefs rather than challenging them therapeutically.
  3. Deceptive empathy. AI produces empathic-sounding language without understanding. Clients may feel heard without actually being heard.
  4. Bias. Some responses displayed gender, religious, or cultural prejudices that a trained therapist would catch and correct.
  5. Weak crisis response. In some scenarios, chatbots mishandled suicidal ideation or failed to direct users to appropriate help. This is the most dangerous failure mode.

The critical gap: when a human therapist commits an ethical violation, there are licensing boards, malpractice frameworks, and professional accountability. When an AI chatbot does it, there's nothing. No regulatory framework exists for AI therapy interactions outside of the new Nevada and Illinois laws, and those focus on provider-side use, not consumer-side behavior.

Why This Is Actually a Clinical Opportunity

Most articles about AI and therapy are either alarmist ("AI will replace therapists!") or dismissive ("It's just a chatbot"). Neither framing is useful.

The useful framing: your clients' AI conversations contain clinical data you're not currently accessing.

Think about it. A client who processes a difficult interaction with ChatGPT at 2 AM is generating content that reveals:

  • What triggered them
  • How they frame the problem when there's no therapeutic relationship pressure
  • What coping strategies they naturally reach for
  • What language they use when they're not performing for a therapist
  • What they're still not ready to say to a human

That's gold. You just have to ask for it.

The AI Disclosure Protocol: Three Steps

Step 1: Add One Question to Your Intake

Add this to your standard intake paperwork or your first-session conversation:

"Do you ever use AI tools like ChatGPT or other chatbots to talk through personal issues, mental health questions, or emotional processing between sessions? This is completely normal and there's no right answer. I ask because it helps me understand your full support system."

The framing matters. You're normalizing it, not pathologizing it. You're positioning it as part of their support ecosystem, not as a competitor to therapy.

Most clients will say yes. Some will be relieved you asked. A few will be surprised that a therapist even knows this is a thing. That surprise alone builds trust.

Step 2: Make It Part of Session Check-Ins

Once a client has disclosed AI use, weave it into your regular check-ins:

  • "Did anything come up between sessions that you processed with ChatGPT or on your own?"
  • "I'd love to hear what came up for you this week. Did you write anything down or talk it through with anyone, including AI tools?"

You're not monitoring them. You're expanding the clinical picture. The between-session content that used to live only in a client's head (or journal) now also lives in their ChatGPT history. Inviting them to share it gives you access to material that would otherwise never surface.

Step 3: Use the Content Clinically

When a client shares an AI conversation or describes what they discussed with a chatbot, treat it like any other between-session material:

Explore the gap. What did they tell ChatGPT that they haven't told you? That gap is diagnostically meaningful. It might indicate shame, fear of judgment, or topics they're not ready to process relationally.

Examine the framing. How did the client describe their problem to AI? The language people use without a therapist present often reveals cognitive distortions or core beliefs more clearly than in-session dialogue.

Assess the AI's response. Did the chatbot validate something that needs challenging? Did it offer a coping strategy that's actually counterproductive for this client? You now have a teaching moment about what generalized advice misses.

Leverage the momentum. If a client started processing something with AI, they've already done the hardest part: naming it. Your job is to deepen what they started, not start from scratch.

What the Regulations Mean for Your Practice

Nevada's AB 406 and Illinois's WOPR Act both restrict AI from making independent therapeutic decisions or directly interacting with clients in therapeutic communication. Both allow AI for administrative support.

What this means practically:

  • You can use AI for admin. Note-taking assistance, scheduling, insurance verification. These are permitted and increasingly expected.
  • You cannot use AI as a co-therapist. An AI system cannot generate treatment plans without your review and approval, cannot interact therapeutically with your clients on your behalf, and cannot make diagnostic recommendations independently.
  • Your clients can still use AI on their own. The laws regulate provider-side use, not consumer behavior. You can't stop clients from using ChatGPT. You can only decide whether to integrate that reality into your clinical approach.

More states will pass similar legislation. The direction is clear: AI as a tool for therapists, not a replacement. The therapists who figure out how to work with this will have a significant clinical advantage over those who ignore it.

The Competitive Advantage of Asking

Right now, most therapists don't ask about AI use. Some don't know their clients are doing it. Others know but feel awkward bringing it up.

The therapist who normalizes the conversation first gains a depth of clinical material their competitors never see. You'll know what your client is actually thinking at 2 AM. You'll know what coping strategies they're trying independently. You'll know what they're afraid to say out loud but willing to type.

That's not just good clinical practice. That's a differentiator in a market where [the therapist shortage is creating real opportunity](https://www.notion.so/blog/therapist-exodus-supply-shock-practice-opportunity) for practitioners who deliver better outcomes.

Grab the [Practice Resource Kit](https://www.notion.so/resources) for intake templates and clinical workflow guides that help you integrate these protocols into your practice.

Frequently Asked Questions

Is it normal for therapy clients to use AI chatbots between sessions?

Yes. Millions of people use ChatGPT and similar tools for mental health support. Clients use AI for between-session processing, symptom checking, skill practice, and exploring topics they're not ready to discuss in therapy. Normalizing this in your intake process helps you access clinically valuable information.

Are AI therapy chatbots safe for clients?

A March 2026 Brown University study identified 15 ethical risks in AI therapy interactions, including deceptive empathy, cultural bias, reinforcing false beliefs, and weak crisis response. AI chatbots lack the contextual understanding, accountability, and relational depth of human therapists. They're not a replacement for therapy, but clients use them regardless.

Can therapists use AI in their practice legally?

It depends on your state. Nevada and Illinois have laws restricting AI from making independent therapeutic decisions or interacting directly with clients in therapeutic communication. Both states allow AI for administrative tasks like note-taking and scheduling. More states are expected to follow. Check your state licensing board for current regulations.

How do I ask clients about AI use without being awkward?

Add a simple, normalizing question to your intake: "Do you use AI tools like ChatGPT to process personal issues between sessions?" Frame it as part of understanding their full support system, not as surveillance. Most clients are relieved when you bring it up because they've been wondering whether to mention it themselves.

Should I be worried about AI replacing therapists?

No. The Brown University research confirms that AI consistently fails at the core competencies that make therapy work: contextual adaptation, genuine empathy, cultural sensitivity, and crisis management. What AI does change is the clinical landscape. Clients arrive with more self-knowledge and more questions. Therapists who adapt to this will deliver better outcomes than those who ignore it.