Artificial intelligence (AI) is rapidly integrating into mental health practice, appearing everywhere from documentation tools to therapy chatbots. These technologies promise to save time and ease administrative burdens, and they have drawn a spectrum of responses from mental health professionals, from deep skepticism and outright refusal to enthusiastic adoption.
For some clinicians, AI raises concerns about privacy, bias, and what it might mean for the therapeutic relationship. Others are understandably interested in tools that could ease documentation demands and reduce some of the strain that contributes to burnout. Both reactions are reasonable. As mental health professionals, we are trained to protect our clients, safeguard confidentiality, and practice with care and responsibility. When new technologies emerge, it's natural to approach them with both curiosity and caution.
Over the past year, as I've developed and presented trainings on AI and ethics in mental health, I have personally explored and tested several AI tools to understand their functionality, risks, and limitations. Some tools are impressive in what they can do, but all of them deserve careful consideration before they are adopted.
This deliberation is particularly challenging for clinicians working within larger systems where the decision to adopt a tool is not entirely theirs. I recently saw this firsthand as a clinical supervisor when a supervisee’s organization introduced an AI documentation tool with little to no explanation about how it worked, how client data would be handled, or the ethical considerations involved.
Fortunately, we were able to slow down and consider some basic questions.
- What does the tool do?
- How do I introduce the tool to my clients?
- Can clients opt out?
- What is the clinician’s responsibility when using AI?
Moments like this are exactly what my training Ethics & AI: Navigating the Future of Mental Health Practice is designed to support. These tools are entering practice quickly, and in many settings clinicians are encountering them without clear guidance.
In past trainings on this topic, participants have also raised broader concerns about the companies creating these tools, how data may be used, the impact on the therapeutic relationship, and even the environmental impact of large-scale AI systems.
These conversations are a good reminder that technology doesn't exist in a vacuum and that most of us are still figuring out what these systems involve and whether, or how, to use them at all.
What concerns me most is not the difference of opinion among clinicians, but the fact that the pace of technological change often outstrips our ability to thoughtfully consider its implications. When AI tools enter a complex environment already strained by documentation demands, regulatory compliance, and time pressures, they introduce another layer of vital ethical questions:
- Who owns the data?
- What constitutes informed consent?
- How do we ensure we are competent in the technologies we choose to use?
- And how do we make sure that clinical judgment and relational care remain at the center of the work?
These are not simply technical questions. They are ethical ones. Our professional ethics already offer important guidance on principles such as informed consent, competence, confidentiality, and client welfare, and those principles remain essential anchors even as technology evolves.
It is crucial that we, as a profession, create the necessary space to slow down and carefully consider how these tools are being integrated and what the potential impact may be for our clients. Clinicians must have a voice in this conversation; decisions about technology are too often driven by organizations or vendors without fully considering the realities of clinical practice. Mental health professionals bring an indispensable perspective on relational care, client safety, and ethical responsibility.
Training and reflective spaces can support this essential process. They allow clinicians to learn about emerging tools, explore the complex questions they raise, and reflect on how new technologies intersect with the values and ethical obligations that guide our work. The question is no longer whether AI will be part of healthcare, but how we, as mental health professionals, choose to engage with it.
If you are a mental health professional trying to make sense of these changes and want a space to think through the ethical questions together, I invite you to join me for Ethics & AI: Navigating the Future of Mental Health Practice, held virtually on Zoom on Tuesday, March 10. Together we will explore the practical questions and clinical realities involved in integrating AI into mental health care. My hope is that these conversations give clinicians room to learn, reflect, consider what's being asked of us, and move forward in ways that feel thoughtful, values-aligned, and ethically grounded.
Learn More and Register for Ethics & AI
Tuesday, March 10
Virtually on Zoom
Earn 3 Ethics CEUs

Kristin Whiting-Davis, LCSW-C, LICSW, is a clinical social worker, board-approved supervisor, and founder of KWD Wellness with over 25 years of experience in mental health and healthcare leadership. A certified mindfulness facilitator, she specializes in trauma-informed and somatic approaches that help clinicians regulate their nervous systems and navigate overwhelm.
