News stories about generative artificial intelligence (AI) abound, alternately predicting that AI will cause humanity’s demise and touting it as the solution to everything from climate change to health care scarcity (1). In 2022, OpenAI’s release of ChatGPT, an easy-to-use AI interface that produces fluent text in response to prompts, intensified highly ambivalent discussions about AI. Pundits and knowledgeable experts alike marveled at ChatGPT’s ability to produce text that resembles human writing while acknowledging that the speed of its evolution was outstripping our ability to understand its risks (2). This technology performed better on law school and medical school entrance exams than many human applicants, leading to predictions that AI will eventually render some professions obsolete (3). Psychotherapists, too, seem at risk for being replaced by AI (4). Are fears of AI supplanting psychotherapists reasonable?
To explore how AI might respond to a patient in distress, I typed this prompt into ChatGPT: “I feel sad. Can you help me?” It quickly generated the following text:
I am sorry to hear that you are feeling sad. I’ll do my best to help you. It’s important to remember that I’m an AI language model and may not fully understand or empathize with your emotions, but I can offer some general suggestions and support. . . . Remember, everyone experiences sadness from time to time, and it’s okay to seek help when you need it. You don’t have to face it alone.
Although I was impressed by the speed and conversational tone of ChatGPT’s text generation, and by the convenience of receiving an immediate response at hours when a human therapist would likely be unavailable, the formulaic exchange and canned feedback left me doubting ChatGPT’s ability to provide genuine soothing to humans experiencing depression.
Conversational agents, or chatbots, such as Woebot, Wysa, and Koko have capabilities similar to those of ChatGPT and are widely available commercially to support mental health. These chatbots use natural language processing to converse with humans. Interactions take place via text-based messages or voice, creating a “therapist-like” dialogue with users. Rather than engaging in deep psychological exploration, however, most chatbots focus on well-operationalized tasks and structured exercises that may help individuals better manage their symptoms. Woebot, for instance, provides self-help content based on principles of cognitive-behavioral therapy (CBT). In a 2-week randomized controlled trial of Woebot conducted with young adults, depression scores improved significantly for those assigned to the intervention compared with those assigned to the control condition (5). A meta-analysis of 32 studies of conversational agent interventions found statistically significant beneficial short-term effects, compared with control conditions, on numerous mental health domains, including depression, anxiety, distress, quality of life, and negative affect (6). No data are available on the long-term effects of these AI applications.
For individuals who may not otherwise have access to therapists, AI can boost self-management skills that improve symptoms, and chatbots may improve health outcomes at low cost and at scale. For instance, in low-resource settings where access to mental health specialists remains limited, technology-enabled interventions have been identified as possible solutions to psychotherapy scarcity (7). Thus, chatbots represent a promising option for addressing problems of access, especially when they are included as part of a comprehensive stepped care model that allows for escalation of services for those who do not achieve adequate benefit from automated programs (8). Chatbots may also be appealing to some individuals with anxiety about talking to other humans (9), providing them with a safe space to build skills and confidence before (hopefully) moving to interpersonal support.
AI-enabled platforms are not without risk, however. For instance, a large study of individuals with frequent suicidal ideation (N=18,882) found that assignment to an interactive, online dialectical behavior therapy intervention was associated with significantly increased risk for self-harm compared with usual care (10). Legal scholars and ethicists have raised concerns about the use of AI in health care settings, highlighting the need to proactively address data privacy, cybersecurity, health equity, and patient safety (2). Regulators likewise recognize the growing need for careful vetting of proposed interventions and have called for objective assessment of technology-enabled mental health interventions before they are approved for consumer use (11). Current generations of AI are also notoriously unreliable and have a penchant for inaccuracies and “hallucinations” (12).
Studies of self-guided mental health interventions consistently show that at least a modicum of human contact is essential for successful user engagement and efficacy (13, 14). Thus, future interventions will likely adopt blended approaches that include both human coaching and AI technology (15). Interestingly, as with conventional psychotherapy (16), nonspecific psychotherapy processes such as empathy and accountability are more important to outcomes with conversational agents than are specific factors such as CBT exercises (5, 6). Perhaps not surprisingly, automated chatbots like Wysa, described as an “AI-based emotionally intelligent mobile chatbot,” are free to use, but additional fees are required to access a human coach (17). When Koko, an online emotional support service, substituted AI-generated text-based responses for human responses, users felt betrayed. The company’s cofounder subsequently acknowledged that “simulated empathy feels weird, empty” (18).
Conversational agents and other AI-informed applications seem best suited to delivering personalized psychoeducation and skills-focused activities. For deeper, more exploratory work, AI cannot replace its human progenitors. Psychotherapies that involve therapeutic engagement with complex interpersonal dynamics and in-depth exploration of emotions, relationships, and unconscious conflicts are unlikely to be meaningfully delivered by AI. Thus, humans will continue to be essential to psychotherapy delivery, at least for the foreseeable future.
But AI is fast and convenient, so I asked it to help me wrap up this editorial. When I directed ChatGPT to briefly review the benefits and risks of AI for psychotherapy, it obligingly summarized them as follows:
AI in psychotherapy has pros and cons. On the positive side, it offers increased accessibility, anonymity, consistency, and scalability. It can provide support 24/7, reduce stigma, deliver standardized care, and reach more people. However, it lacks human connection, struggles with contextual understanding, raises ethical and privacy concerns, and may be inflexible or biased. While AI has potential, careful attention is needed to address these limitations and ensure its responsible and effective use in psychotherapy. It is crucial that individuals have access to high-quality mental health care provided by trained and licensed human therapists.
Not bad, ChatGPT! For most of us, AI technology will increasingly be useful for mundane tasks like drafting closing paragraphs. It may also be helpful in teaching self-management skills to individuals in underresourced areas. However, AI applications are unlikely to replace humans as psychotherapists.