My Therapist is a Robot
A friend of mine teaches a class on Artificial Intelligence at a nearby university. No sooner does he develop a curriculum than the AI landscape shifts, requiring substantial updates. I know how he feels. It's hard to stay up to speed on AI's latest capabilities, its impact on our lives, and how it might change the world we live in.
One thing seems certain: AI is going to be a huge part of our future. As a therapist, I've often wondered how the widespread use of AI will impact therapy. Can artificial intelligence replace therapists?
There is an appeal to using language-based artificial intelligence services like ChatGPT and other chatbots for emotional and mental support: Many chatbots are free, can be used anytime and anywhere, and can provide practical and seemingly personalized guidance at the click of a button. Many people who use chatbots have turned to them for emotional support, and they often report these interactions to be helpful supplements to in-person therapy and useful guidance in everyday situations (Rousmaniere et al., 2026). But what are the tradeoffs and risks of leaning on chatbots as a primary source of mental health support? Research on artificial intelligence is scrambling to keep up with its rapid advances and uses, but preliminary reports suggest a few early concerns that deserve our attention.
First, keep in mind that chatbots are often programmed to be agreeable. That agreeableness frequently has the side effect of mirroring, and ultimately reinforcing, the user's own thought patterns. Though a chatbot can appear empathetic and supportive, that empathy is ultimately generated by a computer program written by strangers. Studies have also documented cases in which chatbots fail to recognize the danger posed by harmful thoughts, including suicidality, homicidality, psychosis, and delusions. Though rudimentary safeguards have been put in place in the largest and most popular apps, most general-purpose chatbots are not programmed to recognize and safely handle suicidality, delusions, or psychosis, and they can cause great harm as a result, even worsening suicidal ideation and intensifying psychosis (Moore et al., 2025; Olsen et al., 2026).
Second, general-use chatbots are not subject to the regulations, oversight, privacy rules, or codes of ethics that your therapist adheres to. With few exceptions, chatbots are not bound by HIPAA protections for your personal health information. A chatbot is not guided by the ethical principles, such as fidelity, beneficence, and nonmaleficence, that shape a therapist's practice. And if a chatbot gives bad or dangerous advice, there is no licensing board or regulatory body to hold it accountable and ensure that similar harm is prevented in the future.
Third, artificial intelligence, even though it can mimic human interaction, is just that: a mimic. In therapy, one of the most consistent predictors of a good outcome is the therapeutic alliance, the relationship between client and therapist built on trust, respect, and mutual collaboration toward agreed-upon goals (Stubbe, 2018). A chatbot can't provide that in the same way an actual human can.
Lastly, because chatbots are not human, interacting with artificial intelligence can reinforce the very behaviors and anxieties that lead so many to seek therapy in the first place: social isolation, loneliness, ostracism, depression, and social anxiety. Turning to a machine for help with those issues can exacerbate them, because it reduces the likelihood of building meaningful personal relationships and actively engaging in pursuits that bring purpose and meaning to life.
We are entering a brave new world, one that even the most insightful science-fiction authors would have a hard time predicting. As artificial intelligence becomes more prevalent in all of our lives, it will inevitably play a role in helping us manage our lives, and perhaps even our emotions. However, let's go into it with our eyes wide open, understanding the risks of turning our emotional well-being over to a machine.
References:
- Moore, J., Grabb, D., Agnew, W., Klyman, K., Chancellor, S., Ong, D. C., & Haber, N. (2025). Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers. Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, 599–627. https://doi.org/10.1145/3715275.3732039
- Olsen, S. G., Reinecke-Tellefsen, C. J., & Østergaard, S. D. (2026). Potentially harmful consequences of artificial intelligence (AI) chatbot use among patients with mental illness: Early data from a large psychiatric service system. Acta Psychiatrica Scandinavica. https://doi.org/10.1111/acps.70068
- Rousmaniere, T., Goldberg, S. B., & Torous, J. (2026). Large language models as mental health providers. The Lancet Psychiatry, 13(1), 7–9. https://doi.org/10.1016/s2215-0366(25)00269-x
- Stubbe, D. E. (2018). The therapeutic alliance: The fundamental element of psychotherapy. Focus, 16(4), 402–403. https://doi.org/10.1176/appi.focus.20180022