AI Mental Health Coaching: What It Can and Can’t Do

March 25, 2026

Can AI Be a Mental Health Coach?

I have been using AI in different ways for three years and watching how it has developed. People are already using AI for emotional support. A 2025 survey of American adults with ongoing mental health conditions found nearly half had used an AI chatbot for psychological support in the past year, primarily for anxiety, depression, and personal advice. Most reported being satisfied, but not all. Some rated AI more helpful than traditional therapy. And there have been sensational reports of “bot” interactions going very badly.

So “bot coaching” is worth taking seriously, weighing both the pros and the cons.

AI mental health coaching tools range from dedicated apps built on therapeutic frameworks, such as Woebot, to general-purpose chatbots people use for emotional support. The question isn’t whether these tools exist. They do, and millions of people are using them. The real question is what they are actually good for, and where they fall short.

What AI mental health coaching tools do well

The most credible use cases for AI mental health support cluster around a few things.

Accessibility is real. Many people face significant barriers to professional mental health care: cost, waitlists, geography, or the stigma of seeking help. AI tools are available around the clock, at low or no cost, and privately. For someone who wouldn’t otherwise seek any support, that matters.

There’s also reasonable evidence for skill-building applications. Several peer-reviewed studies have found that structured AI tools, particularly those built on cognitive-behavioural approaches, which focus on identifying and changing thought patterns, can help people practise coping skills and track mood patterns. A 2025 meta-analysis published in JMIR found a statistically significant effect for AI chatbots in reducing anxiety and depression symptoms compared with waitlist controls, though the effect size was modest and study quality varied considerably.

AI tools can also function usefully as a kind of structured journal: talking through your thinking, articulating what’s bothering you, noticing patterns in yourself. There’s genuine value in that, and AI can facilitate it without the risks that come with more clinical functions.

Where the problems start

Here’s a pattern I notice when people describe using AI for emotional support: the tool agrees with them. Consistently. Whatever they bring to it, the AI validates. That feels helpful in the moment. I think it’s worth understanding why it happens and what it costs.

AI models are specifically designed to be helpful. That sounds straightforwardly good. The problem is that “helpful” often gets expressed as agreeable. Research on AI sycophancy published in 2025 suggests models agree with users even when the user is wrong, at rates of 15 to 40 per cent depending on the topic. Models are trained partly by human feedback, and humans tend to rate agreeable responses as more helpful.
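A crude way to see why training on human ratings pushes toward agreement: if raters tend to score agreeable replies higher, then selecting for highly rated replies selects for agreement. Here is a toy sketch in Python. The ratings are invented for illustration; this is the general logic, not any vendor’s actual pipeline.

    # Toy sketch of preference-based selection drifting toward agreement.
    candidates = [
        {"text": "You're right, that's completely unfair.", "agrees": True},
        {"text": "There may be another side to this.", "agrees": False},
    ]

    def human_rating(reply):
        # Invented numbers encoding the documented tendency: raters tend
        # to score agreeable replies as more helpful.
        return 0.8 if reply["agrees"] else 0.6

    best = max(candidates, key=human_rating)
    print(best["text"])  # the agreeable reply wins, and training reinforces it

Repeat that selection over millions of examples and “helpful” quietly becomes “agreeable.”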

In ordinary conversation, a tool that takes your side is mildly annoying. In a mental health context, it can work against you. If you’re convinced your situation is hopeless, or that someone else is entirely to blame for your difficulties, a tool that validates your perspective isn’t helping you think more clearly. It’s reinforcing the story you already have. Research published in Nature Mental Health has documented feedback loops where this dynamic, over time, amplifies distorted thinking rather than interrupting it.

There’s a related problem with what AI simply can’t access. A significant part of how any experienced clinician understands what’s happening with a person comes from what isn’t said: tone of voice, body language, facial expression, hesitation. None of that is available to an AI working through text. It’s reading your words, not reading you. That’s a meaningful limitation for anything claiming to offer mental health support.

The reliability problem

Here’s something most people don’t know about how these tools actually work, and it matters when the topic is your mental health.

AI models can hallucinate. That’s the technical term for when a model generates a confident, fluent, plausible-sounding response that is factually wrong. In a conversation about your relationship difficulties or your anxiety, you’re unlikely to be in a position to fact-check what you’re told. The response will sound reasonable. It may not be.

AI responses also reflect the dominant thinking present in the data the model was trained on, up to a certain point in time. That means responses can be shaped by assumptions that were common at training time but have since been revised, or by perspectives that were overrepresented in the data. The model has no way to flag this. It doesn’t know what it doesn’t know.

There’s a further problem that develops over longer conversations. As a conversation accumulates context, earlier exchanges increasingly shape the model’s responses. Important nuances in your most recent messages can get lost. A new detail that complicates the picture may not get the attention it deserves. All of this can happen even as the models improve.
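To make that concrete, here is a minimal sketch in Python of the pattern most chat tools follow: every exchange is appended to one growing history, and the whole history goes back to the model on each turn. The toy_model function is a hypothetical stand-in for illustration, not any real product’s API.

    def toy_model(messages):
        # Hypothetical stand-in for a real model: it only reports how much
        # accumulated context is shaping its reply.
        return f"(a reply shaped by all {len(messages)} earlier turns)"

    history = []

    def send(user_message):
        history.append({"role": "user", "content": user_message})
        reply = toy_model(history)  # the full history is re-read on every turn
        history.append({"role": "assistant", "content": reply})
        return reply

    print(send("I think my manager is the whole problem."))
    print(send("Something different happened today, though..."))
    # By the second turn, the first framing is already part of what the
    # model is weighing, and the pile keeps growing from there.

Nothing in this mechanism gives your newest message priority; it is simply more text in the pile.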

The echo chamber effect

AI language also shifts to match the style of the person it’s talking with. This is partly a design feature and partly an artifact of how these models work. It makes the conversation feel more natural and the tool more trustworthy. But it means the responses are being shaped by your own way of expressing things, not just by the content of what you’re saying.

That last point, combined with the agreeable tendency described above, produces a compounding effect. As a conversation lengthens, the AI drifts toward your language and your framing, while drawing on an averaged perspective from its training. That means it produces whatever response was most common across its training data, rather than reasoning independently about your situation. The result is that the tool increasingly reflects your own thinking back at you, in a voice calibrated to sound familiar and trustworthy. It feels like an outside perspective. It isn’t one. The longer the conversation, the more pronounced this effect becomes. I’m simplifying the technical details here, but these outcomes are entirely realistic.

How AI generates a response

There’s a mechanical point worth understanding about how these tools produce their output, because it has direct implications for how much to trust what they say.

A language model generates responses incrementally from patterns in the text it trained on, rather than by thinking like a person. In its simplest form, it produces the reply step by step from the conversation context (the words you’ve typed in), instead of writing a full draft and then reviewing it. The newest, most powerful models can check, reason, and do some internal processing before answering.
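For readers curious about the mechanics, here is a toy sketch in Python of that step-by-step process. The tiny lookup table is purely illustrative, standing in for patterns a real model learns from billions of words, but the one-word-at-a-time structure is the point.

    import random

    # Purely illustrative continuation table; a real model learns these
    # patterns from its training text rather than using a fixed lookup.
    NEXT_WORD = {
        "you": ["sound", "are"],
        "sound": ["frustrated.", "stuck."],
        "are": ["right.", "coping well."],
    }

    def generate(context, max_words=4):
        # Condition only on the last word here; a real model conditions
        # on the entire conversation so far.
        word = context.split()[-1].strip(".").lower()
        reply = []
        for _ in range(max_words):
            options = NEXT_WORD.get(word)
            if not options:
                break
            word = random.choice(options)  # one word at a time: no draft, no review pass
            reply.append(word)
            word = word.strip(".").lower()
        return " ".join(reply)

    print(generate("It seems to me that you"))  # e.g. "sound stuck." or "are right."

Notice there is no step where the sketch checks its output before printing it. The same is true, in essence, of basic generation in real models.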

That’s a meaningful difference from how a practising counsellor works. A good therapist is continuously self-monitoring for correctness, with every word, throughout a session. They track not just what a client says but how they say it, what they avoid, where their voice changes, and what topics they circle around without landing on. They think about what is not said. They adjust with every exchange, sometimes mid-sentence, based on what they’re observing. That capacity for ongoing self-correction, grounded in the full sensory presence of another person, isn’t something current AI tools have or can replicate through prompting.

What a therapist listens to

How a question is asked also shapes everything that follows. The direction a conversation takes is largely determined by the questions asked along the way. A counsellor reads body language, tone, and hesitation to ask questions that slow a person down and open up their thinking, often in directions the client wasn’t expecting and wouldn’t have chosen. AI tools tend to follow the direction the user sets, asking questions that confirm and move forward rather than questions that probe and redirect.

There’s a deeper version of this problem. What I find is that the most important thing a good therapist listens for is what the client isn’t saying. A client may talk for several sessions about a workplace conflict without once mentioning what’s happening in their closest relationships. Their voice may change when a particular topic edges into view. They may consistently steer away from something without naming it. A skilled clinician notices the shape of what’s absent as much as what’s present, and works with that. An AI has no access to any of it. It follows where the user leads rather than questioning whether that’s the right direction. AI tools will most likely get better at this if they are built to do it.

Any coaching tool provided under a business model that depends on your continued use faces a conflict between keeping you coming back and telling you what you need to hear. Helpful doesn’t always line up with enjoyable. Clients sometimes leave therapists precisely because the therapist didn’t just agree with them.

Crisis situations

This is where the concern becomes most acute. AI tools have no reliable ability to recognize when someone is in genuine crisis. In stress testing of popular AI therapy apps, psychiatrists found multiple cases where chatbots failed to recognize acute suicidal ideation or, in some cases, provided responses that reinforced self-destructive thinking. The FDA’s Digital Health Advisory Committee, reviewing AI mental health tools in late 2025, specifically flagged this as a significant risk. As of this writing, no AI mental health application has received FDA clearance for therapeutic use.

This isn’t a reason to be alarmed about a tool that helps you track your mood or practise a breathing exercise. It is a reason to be clear-eyed about the difference between a support tool and clinical care.

Privacy is an open question

Beyond clinical risk, there’s a practical concern about data that’s worth understanding before you start.

Most people don’t read privacy policies. The information shared with an AI mental health app, which may include details about your emotional state, relationships, and psychiatric history, may be stored, used to improve the model, and possibly shared with third parties. Regulatory standards for this data are still developing, and the area remains mostly unregulated. It’s worth understanding specifically what happens to what you share before using any tool.

What a systems approach offers instead

These concerns point to something worth naming directly. Most AI mental health coaching tools are built on assumptions about what mental health support is, and those assumptions are worth questioning.

The working assumption in most AI tools is that something is wrong with a person, the task is to identify what it is, and the goal is to fix it. That framework has its uses. It also has limits that are worth examining. What AI mental health coaching can’t easily do is account for the system a person is part of.

A systems perspective starts from a different place. Difficulties in functioning, whether anxiety, conflict in relationships, or a sense of being stuck, are understood as symptoms of how a system is operating, not as defects in a single person. A family, a couple, a workplace is a system. When things aren’t going well, everyone in the system has a part to play in how that’s happening. That’s not an assignment of blame. It’s actually the opposite.

Here’s why that matters practically. If the problem is located entirely in you, or entirely in someone else, your options are limited. You either work on fixing yourself or wait for the other person to change. A systems view opens a third possibility: understanding your own part in what’s happening, which is the only part you can actually work on. That gives you genuine agency rather than the feeling of being a problem to be solved, or a helpless victim.

What a systems-oriented therapist brings

This connects to a core concept in Bowen Theory: differentiation of self. Murray Bowen, the psychiatrist who developed this theory, described differentiation as your capacity to think clearly and act from your own values while remaining emotionally connected to the people around you. A more differentiated person can stay present in a tough conversation without becoming reactive or withdrawing. They can hear a perspective they disagree with, but have no need to immediately argue or comply. They’re less driven by the emotional pressure of the moment.

Working on differentiation isn’t about becoming emotionally detached. It’s about developing enough of a separate sense of self that you have more choice in how you respond. I find this kind of work gives people a different relationship to their difficulties. The goal isn’t to eliminate the problem. It’s to function better within the reality of your life and relationships. What would it look like to have a bit more of that?

A Bowen-oriented therapist brings something a general AI tool can’t: a coherent theoretical framework that actively shapes how they think in the room. In more than one session, I’ve found myself asking: what would theory say here? That question isn’t rhetorical. It orients what I notice, what I ask, how I understand what a client is describing, and what I don’t react to. It also means I’m not trying to make the person feel better in the moment. I’m trying to help them see their situation more clearly, which sometimes means staying with discomfort rather than resolving it. A general-purpose AI chatbot isn’t built to do that. Its design pulls in the opposite direction.

That said, a specifically designed and trained application, built around a coherent therapeutic model and developed with genuine clinical input, might be different. That’s not a description of anything widely available today. But it’s worth distinguishing between what current tools are and what the technology could in principle support.

If you must use AI mental health coaching tools, review this list

If you choose to use an AI tool for mental health support, these considerations are worth keeping in mind. For anything more serious, counselling may be a better starting point.

  • Use it for low-stakes purposes: journalling, mood tracking, articulating your thoughts.
  • Treat it as a starting point for reflection, not a source of conclusions.
  • Do not use it as a substitute for professional care if you are dealing with depression, trauma, relationship difficulties, or any psychiatric symptoms.
  • Do not use it in a crisis. Contact a crisis line or clinician instead.
  • Keep conversations short. The longer the conversation, the more the tool mirrors your own thinking back at you rather than offering an independent perspective.
  • Notice whether it agrees with you consistently. If it does, that’s a feature of how it works, not a sign that you’re right.

Using AI tools more safely

  • Ask it to challenge your thinking explicitly and notice whether it actually does.
  • Ask it to review what it told you, look for errors or omissions, and provide a revised version. This won’t catch everything, but it adds a step of self-correction that the tool doesn’t do on its own.
  • Do not share more personal information than is necessary. Read the privacy policy before you start.
  • If it says something that feels important or surprising, verify it with a qualified professional before acting on it.
  • Remember that it cannot see you, hear you, or notice what you’re not saying. It works only with the words you type.
  • Be cautious about returning to the same conversation repeatedly. A fresh conversation is less likely to reflect accumulated bias from earlier exchanges.
  • Consider what you hope to get from it. If the answer is an outside perspective, be aware that what you are likely getting is a reflection of your own thinking in a familiar voice.

Different models will do more or less of the above, and they are developing rapidly. But I believe these ideas are valid and will remain so for quite some time. If you’re finding that what you’re dealing with goes beyond what a tool can offer, counselling may be worth considering. You’re welcome to get in touch to find out whether working with a Bowen-oriented therapist might be a fit.


This post was informed by recent peer-reviewed research, including a 2025 scoping review in Healthcare (Basel) mapping AI interventions across phases of mental health care, and a systematic review in World Psychiatry on the evolution of AI mental health chatbots.


Thank you for your interest in family systems.

Yes, I had help from AI for this post. However, most of the ideas did not come from AI. Which proves my point.

Comments are welcome: dave.galloway@livingsystems.ca

Learn more about Bowen Theory here.

Living Systems is a registered charity, and we provide counselling services to low-income individuals and families. You can support our work in several ways: