On January 2nd I wrote about an electronic health records (EHR) suite designed for mental health counselors. I even joked that perhaps a future version would include a mental health counselor chatbot. The following is what I had to say on the matter:
If you’re old enough, you may recall the group therapy sessions on the 1970s television program The Bob Newhart Show. Dr. Hartley (Bob Newhart) was a psychologist whose patients would go around the room and discuss things that had happened to them the previous week. They all seemed more eccentric than depressed. Dr. Hartley would often ask in his monotone voice, “And how does that make you feel?” Maybe the next generation of remote mental health EHRs will include an artificial intelligence chatbot that mimics Newhart’s monotone voice and engages with patients without human involvement.
Imagine my surprise when I ran across an article about a firm that has developed such a chatbot. According to the article “A Mental Health Service Used an AI Chatbot Without Telling Anyone,” nearly 4,000 people spoke with the chatbot, apparently without realizing it wasn’t human.
Per New Scientist, the free mental health service Koko used a chatbot powered by GPT-3, a publicly available AI built by OpenAI, to provide words of support and encouragement. Founder Rob Morris then explained what happened in a Twitter thread late last week.
“We used a ‘co-pilot’ approach, with humans supervising the AI as needed,” he wrote. “We did this on about 30,000 messages.”
Here is the kicker. Users rated the AI chatbot’s responses higher than comparable counseling from humans. However, their opinions changed once they realized they weren’t speaking with real people.
The good news is that messages composed by AI (and supervised by humans) were “rated significantly higher than those written by humans on their own” and response times improved by 50%.
The bad news? Once people realized the responses were crafted by machines, they no longer found them useful. “[The advice] sounds inauthentic,” Morris said, noting the problem might have been that the AI wasn’t “expending” any effort.
According to InsideHook:
Morris said this might be an area where humans will always have an advantage.
Depending on the type of problem, there may be a place for a mental health AI chatbot, but it would depend on the patient. Those who are lonely and mainly want a sympathetic ear would likely not benefit. Those who want straightforward advice or encouragement may benefit.

In all seriousness, I do not believe this idea is going to go away. A couple of weeks ago I called Home Depot to see if they still had any discounted Christmas lights in stock. A chatbot took my call and asked if I wanted it to text me a link to the remaining inventory of Christmas decorations. It was faster, easier, and more accurate than speaking with a human from that department.