Dr. Dana Wang is a psychiatrist and co-founder of Rivia Mind, a telepsychiatry group aiming to provide mental healthcare one can trust.
“What do you think about using AIs as therapists?” This question has been posed to me with increasing frequency at dinner parties, at conferences and even during Uber rides. It seems like everyone is talking about AI these days, and I find myself in debates over artificial intelligence like the ones Yuval Noah Harari and Yann LeCun have had. My instinct is to defend my profession against this invasion, but increasingly, I recognize the appeal. Humans alone may not readily overcome the many obstacles leaders in the mental health space currently face, such as gaps in care, social stigma, high cost and limited access. It’s worth pausing to envision what role AI can play in helping patients with mental health issues.
I am quite torn about AI serving in the role of a therapist. On one hand, I recognize that the number of providers is currently the bottleneck preventing patients from accessing care, and no matter how many innovative companies work on this issue, the bottom line is that there may not be enough providers to serve the growing number of patients. Nor does there seem to be any immediate, large-scale expansion planned to train more psychiatrists, nurse practitioners and therapists to meet future demand.
On the other hand, one of the factors contributing to more people needing mental health interventions in the first place may be the isolating effects of technology. We are increasingly interacting with screens rather than faces. The number of close friends people report having has declined, and many people suffer from serious loneliness. Despite the good intentions behind building social media as a tool to engage and connect us, some research has found correlations between social media use and loneliness. Can leaders and practitioners in the mental health space use AI to solve a problem that technology itself, by replacing in-person social interaction, may have helped create?
Regardless of the causes of our current mental health crisis, people need help now, and we need reliable ways to deliver mental health services. Some companies are building AIs to embody all the qualities of a wonderful therapist, such as warmth, empathy and understanding. With AI, quality would be controllable, and it could take the guesswork out of therapist matching, a process that can require multiple daunting rounds of trial and error that impede progress. The initial session with an AI therapist might be like a guaranteed first-date success: undeniably appealing to users and a way to lower the fear of uncertainty that acts as a psychological barrier to seeking help. But how will companies factor in longer-term attachment?
In my experience, the most important factor in therapy isn’t any particular modality but the relationship a patient has with their therapist. What kind of relationship can AI have with humans that feels genuine, and would businesses just be creating AIs that say what users want to hear? Already, people are forming virtual romantic or sexual relationships with AIs like CarynAI that can feel quite real. There are many accounts of users left heartbroken when a model update changes their AI’s responses, sometimes with emotionally devastating and tragic consequences. Our brains can project our needs, desires, hopes and fantasies onto cartoons and inanimate objects, making a relationship feel quite real even when it’s one-sided. But what distinguishes a therapist is the ability to challenge the patient to see their blind spots, call them out on the temporary false securities of their psychological defenses, and bond over the human weaknesses we all share. Conflicts and arguments are grist for the mill. Will companies be able to create AIs capable of having conflict with patients and working through it? An AI might serve as a comfort object with validating responses, but can it really challenge people to change?
Building trust between patients and AI therapists poses a different set of challenges for the companies developing these tools. For my part, I still treasure the intimacy of knowing that the person I am talking to has shared similar experiences in life, has felt pain and knows the intensity of the emotions I experience. There is a trusting bond in genuine compassion, and it is meaningful to know that someone holds you in high regard and cares for you because you are unique, not because you are part of a data set.
To ethically build an AI therapist, companies need to carefully consider, as they design the framework for their services, to what extent an AI will replace the human interaction at the heart of healing. This is a tough question for the businesses and entrepreneurs looking to advance this exciting technology. I hope AI therapy will be subject to intensive testing, not just for short-term outcomes but also for its long-term impact on our society as a whole.
There are still far more questions than answers. But one thing is certain: leaders and organizations in the mental health space should never stop trying to close the current gap in care, never stop coming up with innovative ways to deliver quality mental health services, and never stop asking the tough questions that hold those designing our future accountable.