Full Course Description

Care Over Compliance

Most conversations about AI in clinical work start with rules. But what if we brought it back to the heart of our work: relationships? Join clinician and AI researcher Dr. Michael Jones as he reorients you from checkbox compliance toward attentiveness, responsibility, and trust as the foundation for ethical AI integration. The relational-ethics approach to AI decision-making he’ll share is practical, grounded, and built for the realities you’re already facing.

Program Information

Objectives

  1. Identify at least three limitations of compliance-based frameworks when applied to AI integration in clinical mental health practice.
  2. Apply Joan Tronto’s five care ethics principles to evaluate ethical AI use across common clinical scenarios including documentation, client communication, and assessment.
  3. Develop a personalized ethical AI integration framework grounded in relational care ethics for use in their professional practice.

Target Audience

  • Counselors
  • Social Workers
  • Marriage & Family Therapists
  • Psychologists
  • Registered Psychotherapists
  • Case Managers
  • Other Mental Health Professionals

Outline

  • The AI moment in mental health: what’s already in the room with your clients
  • Why compliance-based thinking is insufficient
  • Five principles applied to AI integration
  • Case applications: documentation tools, chatbots, and AI-assisted assessment
  • Building an ethical AI practice framework: practical steps clinicians can take today

Copyright: 10/29/2026

Your Client is Asking AI for Advice

Guess what? Your clients are already “consulting” ChatGPT between sessions: asking about their symptoms, searching for explanations, and sometimes even seeking advice about treatment. In many ways, they are bringing a third voice into the therapy room, one you never invited and may not even know is there.

Join Dr. Lotes Nelson for a practical, straight‑talk conversation clinicians need to be having right now. This session will explore what clients are actually asking AI, why it matters clinically, and how counselors can respond without damaging the therapeutic alliance or losing their clinical footing. Participants will leave with language, boundary-setting strategies, and therapeutic responses they can immediately use when clients bring AI-generated information into the counseling session.

Program Information

Objectives

  1. Identify at least 2 clinical implications of clients using generative AI tools to seek mental health information or emotional support.
  2. Develop and apply one practical boundary-setting strategy clinicians can introduce in session to help clients engage with AI tools responsibly while maintaining the counselor as the primary source of therapeutic guidance.
  3. Demonstrate two therapeutic responses clinicians can use when clients report trusting or relying on AI-generated mental health advice, with the goal of preserving and supporting collaborative, clinically informed decision-making.

Outline

  • What clients are actually asking AI … and why it matters clinically
  • How to respond when clients trust AI-generated advice more than your clinical guidance
  • Practical ways to set boundaries so AI does not become a hidden ‘co-therapist’ in treatment
  • Research, risks, ethical considerations, and limitations of AI in mental health

Target Audience

  • Counselors
  • Social Workers
  • Marriage & Family Therapists
  • Psychologists
  • Registered Psychotherapists
  • Case Managers
  • Other Mental Health Professionals

Copyright: 10/29/2026

When AI Gets it Wrong

AI hallucinations. Sycophantic responses. Cognitive overload. When AI gets it wrong, it can amplify clients’ distress and negatively impact the treatment process. But with responsible AI-use strategies, you can mitigate these risks and promote the well-being of those you serve. Join Dr. Eric Beeson for a critical session that will build your AI fluency, show you where the risks lie, and equip you with remedies at each phase of clients’ personal AI use and the treatment process.

Program Information

Objectives

  1. Describe how common AI tools are designed and function, including features that may increase suggestibility, over-agreement, or perceived authority.
  2. Identify clinical risks associated with AI use, including misinformation, emotional dependency, and phenomena commonly referred to as “AI psychosis.”
  3. Educate clients on AI and digital literacy concepts to support informed use.

Outline

  • Understanding the architecture of AI tools
  • Common AI-related risks (e.g., AI sycophancy and “AI psychosis”)
  • Screening for vulnerability-specific risks
  • AI and digital literacy education for clients
  • The difference between general-use AI and fine-tuned tools
  • How to evaluate tools for your use or your clients’ use
  • Research, risks and limitations

Target Audience

  • Counselors
  • Social Workers
  • Marriage & Family Therapists
  • Psychologists
  • Registered Psychotherapists
  • Case Managers
  • Other Mental Health Professionals

Copyright: 10/29/2026

AI, Intimacy and the Future of Relationships

Clients are already turning to AI for emotional safety, validation, and connection, sometimes in ways that quietly displace real-world intimacy. Join Patti Lashley, LPC-S, CSTIP, and Dr. Michael Jones, LPC-S as they offer a clinically grounded look at what this means for your practice. This session addresses compulsive AI use, fantasy bonding, conflict avoidance, displaced sexuality, and attachment distortion, with practical strategies for identifying when AI deepens relational wounds and how to guide clients back toward genuine, sustainable connection.

Program Information

Objectives

  1. Identify clinical presentations of unhealthy AI attachment, including fantasy bonding, compulsive use, and displaced intimacy, using an attachment theory framework.
  2. Distinguish between AI use that supports healing and AI use that deepens relational wounds in emotionally vulnerable clients.
  3. Apply evidence-informed strategies to redirect clients from AI-dependent relational patterns toward sustainable real-world connection.

Outline

The Landscape of AI Companionship: What Clients Are Actually Doing

  • Prevalence and patterns of AI use in emotional and relational contexts
  • Who is turning to AI and why, including loneliness, trauma history, and attachment style

Unhealthy AI Attachment: Fantasy Bonding, Compulsive Use, and Displaced Intimacy

  • Firestone's fantasy bond concept applied to human-AI relationships
  • Recognizing compulsive use, conflict avoidance, and displaced sexuality in clinical presentation

When AI Deepens Wounds Instead of Supporting Healing

  • Attachment distortion and the illusion of reciprocal connection
  • Emotional dependency, social withdrawal, and erosion of real-world relational skills

Research, Risks, and Limitations

  • Current evidence base on AI companionship and mental health outcomes
  • Ethical concerns, data privacy, algorithmic bias, and the limits of AI as a therapeutic substitute

Redirecting Clients Toward Real-World Connection

  • Evidence-informed clinical strategies
  • Working with resistance, shame, and the therapeutic relationship as a corrective model

Target Audience

  • Counselors
  • Social Workers
  • Marriage & Family Therapists
  • Psychologists
  • Registered Psychotherapists
  • Case Managers
  • Other Mental Health Professionals

Copyright: 10/29/2026

AI-Assisted Documentation and EHR Integration for Clinicians

You want to stay present with clients, but the moment your last session ends, the notes start piling up.

It's no surprise AI-assisted documentation is one of the fastest-adopted tools in clinical practice today. But what's actually allowed? What's safe? What's ethical? And how do you make things easier without inadvertently violating HIPAA or professional guidelines?

Join Dr. Carlos Castañeda, assistant professor of psychology, Licensed Professional Counselor, Founder of ThinkAITA LLC, nationally recognized speaker and developer of agentic AI clinical tools, as he cuts through the hype and delivers real answers. You'll learn what these tools do well, where they fall short, and how to use AI-assisted documentation to create notes faster and better than ever, all while staying legally and ethically compliant.

Program Information

Objectives

  1. Differentiate between ethical and unethical use of AI in clinical documentation, including risk areas related to HIPAA, confidentiality, and PHI.
  2. Evaluate AI-assisted notetaking and EHR integration tools by identifying key features, limitations, and data privacy safeguards that impact compliant clinical practice.
  3. Apply practical strategies for integrating AI documentation tools into clinical workflows while maintaining alignment with APA ethical guidance and professional standards of care.

Outline

  • The Current Landscape: How AI-Assisted Documentation Is Changing Clinical Practice
  • HIPAA, Ethics, and Compliance: What Clinicians Need to Know Before Using AI Tools
  • Agentic AI vs. Basic AI: Understanding What These Tools Actually Do With Your Data
  • Hands-on Walkthrough: Using AI Tools for Note-Taking with EHR Integration
  • Red Flags and Best Practices: Where AI Falls Short and How to Protect Your Clients
  • Research, risks, and limitations

Target Audience

  • Counselors
  • Social Workers
  • Marriage & Family Therapists
  • Psychologists
  • Registered Psychotherapists
  • Case Managers
  • Other Mental Health Professionals

Copyright: 10/30/2026

Supervising in the Age of AI

AI is reshaping clinical supervision faster than our ethics codes and laws can keep up. Join Dr. LoriAnn Stretch for a practical, research-informed guide to supervising responsibly in an AI-integrated world. Drawing on key mental health ethical codes, federal and state legal standards, and emerging research, you'll leave with tools to guide supervisees, protect clients, and establish clear AI policies in your practice.

Program Information

Objectives

  1. Differentiate between ethical and unethical applications of AI in clinical supervision by applying relevant provisions from the key mental health codes of ethics.
  2. Identify the primary legal and regulatory obligations governing AI use in clinical supervision.
  3. Construct a written supervision AI policy reflecting best practices for use of AI in mental health and supervisory contexts.

Outline

  • How AI tools are changing supervisory dynamics, power differentials, and the developmental needs of supervisees across disciplines
  • What key mental health ethical codes currently say, where gaps exist, and how supervisors can reason ethically in the absence of explicit guidance
  • Legal Considerations and Liability – HIPAA, PHI, informed consent, state licensure board rules, documentation obligations, and the malpractice implications of AI use by supervisees
  • Developing supervision AI policies, guiding supervisees in responsible use, maintaining reflective practice, and preserving the therapeutic alliance
  • Research, Risks, and Limitations – What the empirical literature tells us: efficacy, bias, hallucination, data privacy risks, and what we still don’t know

Target Audience

  • Counselors
  • Social Workers
  • Marriage & Family Therapists
  • Psychologists
  • Registered Psychotherapists
  • Case Managers
  • Other Mental Health Professionals

Copyright: 10/30/2026

AI as Ally

Join Melanie Calhoun and Dr. Michael Jones for a conversation about AI as a cultural ally in mental health work. Melanie brings the clinical lens, showing how AI tools can help practitioners reduce bias, expand understanding, and better support clients from diverse backgrounds. Dr. Jones brings the educator and ethics lens, situating these tools within a relational care framework so that AI supports rather than replaces the human connection at the heart of the work. Together we will explore practical tools, ethical tensions, and what it looks like to integrate AI into culturally responsive practice without losing the soul of counseling.

Program Information

Objectives

  1. Analyze the practical implications for effective multicultural counseling when integrating AI into psychotherapy services.
  2. Describe how generative AI can be used to create complex, culturally relevant clinical case simulations to help clinicians improve their cultural competency.
  3. Differentiate between AI applications that support culturally responsive care and those that may unintentionally reinforce dominant norms or clinical blind spots.

Outline

  • Strengths and limitations of AI with marginalized and underrepresented clients
  • Where bias commonly enters the room, and how AI can help surface it
  • AI tools to support culturally responsive training, supervision, and clinician growth
  • AI in overcoming language and access barriers in mental health
  • A relational care ethics frame for integrating AI as an ethical ally that supports human connection and clinical expertise
  • Research, risks and limitations

Target Audience

  • Counselors
  • Social Workers
  • Marriage & Family Therapists
  • Psychologists
  • Registered Psychotherapists
  • Case Managers
  • Other Mental Health Professionals

Copyright: 10/30/2026

Irreplaceable

Research shows that AI already matches or exceeds human therapists in areas like diagnosis, risk assessment, and even empathy. It’s earning high marks from clients on perceived support and connection, too. And AI’s clinical skills will only get stronger.

So where does that leave us? With an opportunity. A big one. Join Dr. Jordan Harris for an energizing closing session that reframes the rise of AI not as a threat to our profession, but as a call back to our deepest strength: the relational craft of therapy. Plus, he’ll provide specific tools to enhance your own therapeutic relational skills.

Program Information

Objectives

  1. Differentiate between therapists’ perceived effectiveness of AI counselors and clients’ reported effectiveness.
  2. Describe how large language models could impact the future of clinical care, training, and research in psychotherapy.
  3. Identify how using a deliberate practice approach can lead to improved clinical outcomes.

Outline

  • AI empathy scores, diagnostic accuracy and risk assessment
  • The therapeutic alliance, presence, attunement and co-regulation
  • Building skills through deliberate practice, routine outcome monitoring and process coding
  • The future of therapy is more human, not less
  • Research, risks and limitations

Target Audience

  • Counselors
  • Social Workers
  • Marriage & Family Therapists
  • Psychologists
  • Registered Psychotherapists
  • Case Managers
  • Other Mental Health Professionals

Copyright: 10/30/2026

Coming Soon - AI, Grief and the Ethics of Digital Afterlives

Your clients are already using AI to talk to the dead. A widow uploads years of text messages and suddenly her husband is "texting" her again. A son uses voice synthesis to hear his late mother say things she never said.

Some find profound comfort. Others find themselves unable to move forward. But if you’re like most therapists, you have no framework for what to do when a client says, "I've been talking to my mom. She died two years ago."

Join clinician, ethics expert and AI researcher Dr. Michael Jones as he equips you to navigate AI-mediated grief through the lens of professional ethics. PLUS Megan Devine will join Dr. Jones for a special conversation about the future of AI in grief.

Program Information

Objectives

  1. Identify at least three categories of AI grief technologies and describe their psychological mechanisms related to continuing bonds.
  2. Applying the ethical principle of prioritizing client welfare, evaluate the potential benefits (emotional scaffolding, symbolic continuity) and risks (emotional dependency, complicated grief) of AI grief bots.
  3. Apply ethical decision-making frameworks when working with clients who engage with AI grief technologies, including considerations of informed consent, client autonomy, and counselor competence.

Outline

  • When clients disclose AI-mediated contact with the deceased
  • Grief bots, voice clones, and digital avatars
  • Current technologies and emerging developments
  • How clients access and use these tools outside of clinical settings
  • Continuing bonds theory: Theoretical grounding for digital connection
  • Ethical frameworks for AI and grief
  • Technology-assisted services and competence requirements
  • Accountability, client welfare, and competency
  • Informed consent, confidentiality, and clinical oversight
  • Benefits and risks of AI-mediated grief
  • Emotional scaffolding, symbolic continuity, and adaptive grief support
  • Emotional dependency, avoidance behaviors, and complicated grief risk
  • Autonomy, consent, privacy, and dignity of the deceased
  • Assessment, intervention, and professional boundaries
  • Red flags and indicators for clinical concern
  • Holding space without judgment while maintaining ethical oversight
  • Emerging research and the limits of current knowledge

Target Audience

  • Counselors
  • Social Workers
  • Psychologists
  • Psychiatrists
  • Nurses
  • Marriage and Family Therapists
  • Other Mental Health Professionals

Copyright: 04/16/2026