
The AI Use Case for Therapists Nobody Is Writing About Yet

Not note-taking. Not scheduling. Therapists are quietly using AI as a clinical consultation tool for case reflection and treatment planning. No professional body has written the guidelines yet.


Everyone is writing about AI saving therapists five hours a week on notes. That's fine. Note-taking is a real use case. But it's not the interesting one.

The interesting one is this: a clinical psychologist profiled by STAT News in February 2026 described using AI as a thinking partner for clinical consultation. Not to generate notes. Not to schedule clients. To reflect on clinical material, review published case studies, and pressure-test case formulations using de-identified client data.

More than half of psychologists reported using AI professionally in a 2025 survey. Yet no state licensing board has issued specific guidance on therapists' use of AI for clinical consultation, and neither have the APA, NASW, or AAMFT. The practice is happening. The ethical framework doesn't exist yet.

Here's what an ethical, clinically rigorous AI consultation practice looks like, and where the line is.

What Does AI for Clinical Consultation Actually Look Like?

This isn't about asking ChatGPT to diagnose your client. It's about using AI as a structured thinking tool for clinical reflection.

In practice, it looks like this:

Case formulation review. You've been working with a client for three months. Progress has stalled. You type a de-identified summary into an AI tool and ask: "What theoretical frameworks might explain this presentation?" The AI doesn't know your client. But it can surface frameworks you haven't considered, reference treatment literature you haven't read recently, and challenge assumptions you didn't know you were making.

Treatment planning pressure test. You're considering shifting from CBT to EMDR with a complex trauma client. You describe the clinical picture (de-identified) and ask the AI to outline contraindications, timing considerations, and what the research says about sequencing. It's not making the decision. You are. But it's giving you a more complete evidence base to decide from.

Ethical dilemma consultation. A dual relationship question comes up. Your state board's guidelines are vague on the specific scenario. You describe the situation to an AI tool and ask it to walk through the relevant ethics codes, identify the competing obligations, and surface case law or board opinions you might not know about.

Literature review. A client presents with a symptom pattern you haven't seen in a while. Instead of spending 90 minutes searching PubMed, you ask the AI to summarize recent research on the presentation and flag any treatment protocol updates since your last training.
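If you use AI this way regularly, it can help to keep a standing prompt scaffold rather than improvising each time. Here's a minimal sketch in Python; the field names and instruction wording are illustrative assumptions, not a validated clinical protocol:

    # Illustrative scaffold for de-identified consultation prompts.
    # Field names and wording are hypothetical, not a validated protocol.
    CONSULTATION_TEMPLATE = """\
    You are assisting a licensed clinician with case reflection.
    Do not diagnose or select treatment; surface frameworks, evidence, and questions.

    De-identified presentation: {presentation}
    Current approach: {approach}
    Question: {question}

    List competing formulations, literature worth verifying, and assumptions I may be making.
    """

    def build_prompt(presentation: str, approach: str, question: str) -> str:
        """Assemble a consultation prompt from de-identified case material."""
        return CONSULTATION_TEMPLATE.format(
            presentation=presentation, approach=approach, question=question
        )

    print(build_prompt(
        presentation="Adult client, mood symptoms, progress stalled at roughly three months",
        approach="Weekly CBT focused on behavioral activation",
        question="What theoretical frameworks might explain the plateau?",
    ))

The point of the scaffold is the last line of the template: asking the model what to verify and what you might be assuming keeps the exchange in consultation mode rather than answer mode.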

None of these replace clinical judgment. All of them augment it.

Why This Is Different From AI Note-Taking

The note-taking conversation is about efficiency. Save time. Reduce admin burden. Bill more accurately. That's a productivity tool.

AI for clinical consultation is about clinical quality. Better formulations. Broader evidence base. Fewer blind spots. That's a thinking tool.

The distinction matters because the risks are different.

With note-taking AI, the primary risk is documentation accuracy and HIPAA compliance. Those are solvable problems with encryption, BAAs, and human review.

With clinical consultation AI, the risk is epistemic. You're using a tool that sounds authoritative but can be wrong. It can fabricate research citations. It can present outdated treatment protocols as current. It can miss nuances that a human consultant would catch because they know the field's unwritten rules.

The upside is also different. A good note-taking tool saves you five hours a week. A good AI clinical consultation practice makes you a better clinician. Those aren't the same thing.

The Ethics Gap Nobody Is Filling

Here's what makes this unusual: the practice is widespread and the guidance is nonexistent.

That 2025 survey figure bears repeating: more than half of psychologists are using AI professionally. Therapists across disciplines are doing the same thing the STAT News psychologist described. They're just not talking about it publicly, because there's no professional framework telling them it's okay.

The major professional organizations have said almost nothing specific about this use case:

APA (American Psychological Association): Published general guidance on AI in psychology. Nothing specific to using AI as a consultation tool for active clinical cases.

NASW (National Association of Social Workers): Updated technology standards. No specific mention of AI-assisted clinical reflection or case consultation.

AAMFT (American Association for Marriage and Family Therapy): Silent on the topic as of March 2026.

State licensing boards: Nevada and Illinois passed laws restricting AI in therapeutic decision-making. But those laws target AI interacting with clients, not therapists using AI for their own clinical thinking. The consultation use case falls in a regulatory gap.

This means every therapist using AI for clinical reflection is operating without a professional framework. That's not necessarily dangerous. But it means you need to build your own guardrails.

How to Build an Ethical AI Clinical Consultation Practice

If you're going to use AI for clinical thinking, do it with the same rigor you'd bring to any other consultation relationship. Here are the guardrails.

Guardrail 1: De-identify everything

Never enter identifiable client information into an AI tool. No names. No dates of birth. No locations specific enough to identify someone. No insurance information. No session dates.

This isn't just a HIPAA issue. It's a clinical ethics issue. The moment you enter identifiable data into a system you don't control, you've created a record outside your clinical documentation that your client didn't consent to.

De-identification should be thorough: change demographics, alter identifying details, and describe the clinical picture in terms general enough that no one could work backward to identify the client.
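If you want a mechanical backstop, a simple pattern check can catch obvious slips before anything gets pasted into a tool. A minimal sketch follows, with the caveat that regex patterns like these catch formatted identifiers (dates, phone numbers, emails) but not names or contextual clues; they supplement manual review, never replace it:

    import re

    # Illustrative patterns for a pre-paste safety check. These catch
    # formatted identifiers only -- NOT names or context clues, so a
    # manual read of every summary is still required.
    PATTERNS = {
        "date": r"\b\d{1,2}/\d{1,2}/\d{2,4}\b",
        "phone": r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b",
        "email": r"\b[\w.+-]+@[\w-]+\.[\w.]+\b",
        "ssn": r"\b\d{3}-\d{2}-\d{4}\b",
    }

    def flag_identifiers(text: str) -> list[str]:
        """Return labels of identifier patterns found in a case summary."""
        return [label for label, pattern in PATTERNS.items() if re.search(pattern, text)]

    summary = "Client seen on 03/14/2026, callback at 555-867-5309."
    hits = flag_identifiers(summary)
    if hits:
        print("Do not paste - possible identifiers found:", ", ".join(hits))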

Guardrail 2: Verify everything the AI tells you

AI tools hallucinate. They fabricate citations. They present synthesized information as established fact. They confidently state things that are wrong.

Treat AI output the way you'd treat a first-year intern's case notes: potentially useful, but requiring verification at every step. If the AI cites a study, look it up. If it recommends a treatment protocol, check the source. If it suggests a diagnostic framework, confirm it against your training and the current DSM.

The value of AI consultation isn't that it gives you answers. It's that it gives you directions to investigate. That's how good consultation works with human colleagues too.
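Part of that verification can be mechanized. The sketch below uses PubMed's public E-utilities search endpoint, which does exist; the exact-title matching logic is a rough assumption, and a zero result means only that the quoted title didn't match, so you'd still follow up by hand:

    import json
    import urllib.parse
    import urllib.request

    # Rough first-pass check that an AI-cited paper title exists in PubMed.
    # Uses NCBI's public E-utilities esearch endpoint; zero matches means
    # the exact-title query found nothing, not proof the paper is fabricated.
    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_title_count(title: str) -> int:
        """Return how many PubMed records match the quoted title."""
        params = urllib.parse.urlencode({
            "db": "pubmed",
            "term": f'"{title}"[Title]',
            "retmode": "json",
        })
        with urllib.request.urlopen(f"{ESEARCH}?{params}") as response:
            result = json.load(response)
        return int(result["esearchresult"]["count"])

    count = pubmed_title_count("Cognitive behavioral therapy for adult depression")
    print("PubMed matches:", count)  # 0 matches -> verify the citation by hand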

Guardrail 3: Don't outsource clinical judgment

AI can help you think. It cannot think for you. The moment you let an AI tool make a clinical decision rather than inform one, you've crossed a line that every professional ethics code draws clearly.

This means: AI can suggest differential diagnoses for you to evaluate. It cannot diagnose. AI can outline treatment options for you to consider. It cannot select the treatment. AI can flag ethical considerations for you to weigh. It cannot resolve the dilemma.

If you couldn't defend your clinical decision without referencing what the AI told you, you've relied on it too heavily.

Guardrail 4: Document your consultation process

If you use AI as part of your clinical thinking on a case, note it the same way you'd note a peer consultation. "Reviewed de-identified case material using AI consultation tool to explore alternative treatment frameworks. Findings incorporated into updated treatment plan."

This creates transparency. If a licensing board ever asks about your clinical reasoning, you can show that AI was one input among many, not the decision-maker.

Guardrail 5: Know what AI is bad at

AI is good at breadth. It's bad at depth in areas that require lived experience, cultural nuance, and relational attunement.

Don't use AI for:

  • Cultural formulation that requires understanding a specific community's norms
  • Countertransference exploration (the AI doesn't have a relationship with you)
  • Risk assessment (it lacks access to the client, the context, and your nonverbal observations)
  • Anything where the clinical question depends on the therapeutic relationship itself

Use it for:

  • Broadening your differential diagnosis
  • Reviewing literature you haven't read recently
  • Pressure-testing treatment plans against current evidence
  • Structuring ethical reasoning when multiple codes apply
  • Exploring theoretical frameworks you don't typically use

What This Means for the Profession

The therapists using AI for clinical consultation right now are running an experiment without a protocol. That's not reckless. It's what happens when technology outpaces professional guidance.

But the lack of guidance creates risk. A therapist who enters identifiable client data into ChatGPT has committed an ethics violation even though no ethics code specifically mentions AI. A therapist who follows an AI-generated treatment recommendation without verification has made a clinical error even though no standard of care specifically addresses AI-assisted reasoning.

The professional organizations need to catch up. In the meantime, therapists need to build their own frameworks.

If you're already thinking seriously about how AI fits into clinical practice, you're ahead of most of the profession. The Brown University study on AI chatbot failures showed where AI falls short in direct client interaction. The fact that your clients are already using AI between sessions makes it even more important that you understand the technology's capabilities and limitations firsthand.

And if insurers are already using AI to audit your notes, understanding how these systems reason isn't just professional development. It's practice protection.

The AI use case nobody is writing about is the one that could make you a better clinician. Do it carefully. Do it ethically. But don't pretend it isn't happening.

Download the free Practice Resource Kit for clinical workflow guides and practice-building tools that help you stay ahead of the shifts reshaping private practice.

Frequently Asked Questions

Is it ethical for therapists to use AI for clinical consultation?

No professional organization has issued specific guidance on this use case as of March 2026. Using AI for clinical reflection is not inherently unethical, but it requires strict guardrails: de-identify all client data, verify every AI output, and never let AI make clinical decisions. Document your AI consultation the same way you'd document a peer consultation.

Can I enter client information into ChatGPT or other AI tools?

No. Never enter identifiable client information into any AI tool unless you have a signed BAA and the tool meets HIPAA requirements. For clinical consultation purposes, thoroughly de-identify all case material before using it with AI. Change names, demographics, locations, and any details that could identify the client.

What AI tools are best for therapist clinical consultation?

No AI tool is specifically designed or validated for clinical consultation in therapy as of 2026. General-purpose tools like ChatGPT, Claude, and Gemini can assist with literature review, case formulation brainstorming, and ethical reasoning. Treat any tool's output as a starting point requiring professional verification, not as clinical guidance.

Will professional organizations create AI guidelines for therapists?

It's likely. The APA has published general AI guidance and multiple states have passed AI legislation affecting healthcare. Specific guidelines for AI-assisted clinical consultation will probably emerge as usage becomes more widely acknowledged. In the meantime, apply existing ethics principles: competence, informed consent, confidentiality, and professional responsibility.

How is AI clinical consultation different from peer consultation?

A human consultant brings clinical experience, relational intuition, and professional accountability. AI brings breadth of knowledge, instant availability, and the ability to quickly surface research. AI cannot read between the lines, understand your countertransference, or share professional liability. Use AI consultation to complement peer consultation, not replace it.
