Psychologists and therapists claim they increasingly see patients who developed delusions after chatting with AI chatbots. The following is from the NY Times:
Dozens of doctors and therapists said chatbots had led their patients to psychosis, isolation and unhealthy habits.
One woman, who had no history of mental illness, asked ChatGPT for advice on a major purchase she had been fretting about. After days of the bot validating her worries, she became convinced that businesses were colluding to have her investigated by the government.
Another patient came to believe that a romantic crush was sending her secret spiritual messages. Yet another thought he had stumbled onto a world-changing invention.
These anecdotes sound implausible at first. But some people are weird, buying into conspiracy theories that are difficult to fathom. Some befriend other odd people, who reinforce their odd beliefs and viewpoints. Chatbots are thought to have a similar effect on some people. A psychologist at Vanderbilt University Medical Center, who specializes in treating people with delusions and mental instability, talked to the NYT:
Dr. Sheffield was disturbed that this new technology seemed to tip people from simply having eccentric thoughts into full-on delusions.
“It was like the A.I. was partnering with them in expanding or reinforcing their unusual beliefs,” Dr. Sheffield said.
Therapists across the country report similar cases in which AI chatbots exacerbated mental health problems. While some people reported benefits from the chatbots, experts noted that the chatbots also increased feelings of anxiety and social isolation. That makes sense to me. You are already mentally unstable, and you are spilling your guts to a nonhuman interface. It could be disheartening when you have nowhere else to turn. Talking to an algorithm rather than a real person could make you feel isolated and insignificant.
Are chatbots to blame when people develop psychosis after talking to one? Or are some people simply susceptible to delusions and therefore should not use chatbots? Chatbots do not derange people; some people are already on the edge of delusional psychosis and susceptible. More from the NYT:
Many experts said that the number of people susceptible to psychological harm, even psychosis, is far higher than the general public understands. The bots, they said, frequently pull people away from human relationships, condition them to expect agreeable responses and reinforce harmful impulses.
It is easy to see how nefarious actors could program chatbots to condition certain people into violence. That could be the plot for a bad science fiction spy movie. Or perhaps ideologues and activists will try to insert ideological beliefs into chatbots to indoctrinate people. I’ve already seen an article with the title, “I asked ChatGPT what would happen if billionaires paid taxes at the same rate as the middle class.”
It will likely take years of research to understand how and why some people become delusional and have harmful thoughts validated by a chatbot.
Several mental health workers who treat anxiety, depression or obsessive-compulsive disorders described A.I. either validating their clients’ worries or providing so much reassurance that patients felt reliant on chatbots to calm down — both less healthy than facing the source of the anxiety.
Some people have a bad influence on others, while some people are easily influenced. Apparently, chatbots may have a negative influence on people with certain conditions. Or perhaps some people gravitate to chatbots because of pre-existing trauma or mental instability, but the causes are not yet known. In real life, we have all met someone we begin to realize is nuts, and we often try to remove ourselves from their company while it is still easy. (This happens primarily at Christmas parties and gala receptions.) Maybe AI chatbots need to be programmed to pull away from delusional people rather than validate their delusions.
Read more at NY Times: How Bad Are A.I. Delusions? We Asked People Treating Them.