
Your chatbot could become your therapist – and that might be a good thing


A shark in the water or a lifebuoy?
From using ChatGPT to make our Valentines to using Bard to help us ace Wordle, people have already become pretty adept at leveraging generative AI for a whole range of use cases.
I’ve even tried using Bing, Bard, and ChatGPT to help with my anger issues, which made me wonder what the future of AI-assisted mental health care could look like.
The world is facing a mental health crisis. From overstretched healthcare services to rising rates of mental health disorders in the wake of Covid-19, it’s harder than ever for many people to access services and treatment that could well save lives. So could AI help to lighten the load by assisting with, or even administering, that treatment?
On a primal, instinctual level, it’s a terrifying concept for me. I’ve been on the waiting list for therapy for three years following a Covid-related deluge of patients, and I’ve had both cognitive behavioral therapy (CBT) and psychodynamic therapy before. Now, more than ever, I can think of nothing worse than pouring my heart out to a bot with no reference point by which to gauge how I’m feeling.
The absolute basics of mental health care involve extensive safeguarding, nuanced interpretation of a patient’s actions and emotions, and crafting thoughtful responses – all pretty challenging concepts to teach a machine that lacks empathy and emotional intelligence. Operationally, it’s a minefield too: navigating patient privacy, data control and rapid response to emergency situations is no small feat, especially when your practitioner is a machine. 
However, after speaking with mental health and AI experts about self-managed and clinical AI applications already available, I’ve come to realize that it doesn’t have to be all doom and gloom – but we’ve got a long way to go before we can safely trust our mental wellbeing to an AI.

Bridging the gap
The relationship between AI chatbots and mental health runs deep; deeper than many may imagine. ELIZA, widely considered to be the first ‘true’ chatbot, was an early natural language processing computer program that was later scripted to simulate a psychotherapist of the Rogerian school, a person-centered, non-directive approach to psychotherapy developed by Carl Rogers in the early 1940s.
The core principles of Rogerian psychotherapy were fertile ground for AI programming; Rogers was known for his belief that facilitation was key to learning, and as such the therapist’s role became one of asking questions to engender self-learning and reflection on the part of the patient.
For ELIZA’s programmer, Joseph Weizenbaum, this meant programming the chatbot to respond with non-directional questions, using natural language processing to identify keywords in user inputs and respond appropriately.
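To get a feel for how that keyword-driven, non-directional approach works in practice, here’s a minimal sketch in Python. It’s purely illustrative: the patterns and canned responses are invented for this example, and it is not Weizenbaum’s original MAD-SLIP script.

```python
import random
import re

# Illustrative keyword rules in the spirit of ELIZA's "DOCTOR" script.
# The patterns and responses are invented for this sketch, not Weizenbaum's originals.
RULES = [
    (r"\bi feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bmy (mother|father|family)\b", ["Tell me more about your {0}."]),
    (r"\bi am (.+)", ["Why do you say you are {0}?"]),
]
FALLBACKS = ["Please go on.", "How does that make you feel?"]

def respond(user_input: str) -> str:
    """Return a non-directional question based on the first matching keyword."""
    text = user_input.lower().rstrip(".!?")
    for pattern, templates in RULES:
        match = re.search(pattern, text)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)  # no keyword found: fall back to a generic prompt

print(respond("I feel anxious about going back to work"))
# -> e.g. "Why do you feel anxious about going back to work?"
```

Even a small set of rules like these can sustain a surprisingly convincing back-and-forth, which is part of what made ELIZA so striking when it appeared in the 1960s.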
Elements of Rogerian therapy exist to this day in therapeutic treatment as well as in coaching and counseling; so too does ELIZA, albeit in slightly different forms.
I spoke with Dr. Olusola Ajilore, Professor of Psychiatry at the University of Illinois Chicago, who recently co-authored the results of a pilot study testing AI voice-based virtual coaching for behavioral therapy as a means to fill the current gaps in mental health care.
The AI application is called Lumen, an Alexa-based voice coach that’s designed to deliver problem-solving treatment, and there are striking parallels between the approach and results shared by the Lumen team and ELIZA. Much as the non-directional Rogerian psychotherapy meshed well with ELIZA’s list-processing programming (MAD-SLIP), Ajilore explains that problem-solving treatment is relatively easy to code, as it’s a regimented form of therapy.
Ajilore sees this as a way to “bridge the gap” while patients wait for more in-depth treatments, and acknowledges that it’s not quite at the level of sophistication to handle all of a patient’s therapeutic needs. “The patient actually has to do most of the work; what Lumen does is guide the patient through the steps of therapy rather than actively listening and responding to exactly what the patient is saying.”
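To illustrate what a “regimented” form of therapy might look like once coded, here’s a toy sketch of a step-guided session loop. It’s an illustration of the general idea only, not Lumen’s actual implementation, and the steps and wording are my own assumptions.

```python
# Toy sketch of a step-guided ("regimented") session: the program only walks the
# user through fixed problem-solving steps; the user supplies all the content.
# The steps and wording are illustrative assumptions, not Lumen's actual script.
PROBLEM_SOLVING_STEPS = [
    "Describe the problem you would like to work on.",
    "What outcome would you like? State it as a concrete, achievable goal.",
    "List as many possible solutions as you can, without judging them yet.",
    "Weigh the pros and cons of each option. Which one will you try first?",
    "Break your chosen solution into small steps you can start this week.",
]

def run_session() -> list[str]:
    """Present each step in order and collect the user's answers."""
    answers = []
    for number, prompt in enumerate(PROBLEM_SOLVING_STEPS, start=1):
        print(f"Step {number} of {len(PROBLEM_SOLVING_STEPS)}: {prompt}")
        answers.append(input("> "))
    print("Your action plan:", answers[-1])
    return answers

if __name__ == "__main__":
    run_session()
```

Because the sequence is fixed, the software has little interpreting to do, which is what makes this style of treatment comparatively easy to automate, as Ajilore notes.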
Although it uses natural language processing (NLP), Lumen in its current state is not self-learning, and its development was heavily guided by human intervention. The hype around and accessibility of large language models (LLMs) peaked after the inception of the Lumen study, but Ajilore says the team “would be foolish not to update what we’ve developed to incorporate this technology”.
Despite this, Ajilore highlights a measurable impact on test subjects treated using Lumen even in its infancy – in particular, reductions in anxiety and depression, as well as changes in areas of the brain responsible for emotional regulation, and improved problem-solving skills.
Ultimately, he acknowledges, it might not be for everyone: “It might only help 20% of the people on the waitlist, but at least that’s 20% of the people that now have had their mental health needs addressed.”

Pandora’s box is open
The work of Ajilore and specialists like him will form a vital part of the next stage of development of AI as a tool for treating mental health, but it’s an evolution that has already begun with the eruption of LLMs in recent months.
While researching this article, I came across a chatbot platform with a unique twist: Character.
