Hidden Use of ChatGPT in Online Mental Health Counseling Raises Ethical Concerns

by Staff Writer
January 17, 2023 at 4:04 PM UTC

ChatGPT is being used in online mental health counseling, possibly without consent.

Clinical relevance: Using a chatbot to counsel patients about their mental health without their consent is unethical

  • The founder of a free therapy program named Koko admitted on Twitter to using GPT-3 chatbots to help respond to more than 4,000 users seeking advice about their mental health.
  • He later clarified that users had full knowledge of and consented to his “experiment,” though he argued that neither was strictly necessary because they had signed the standard user agreement.
  • Expert opinion, and the general consensus on Twitter, was that providing AI-generated responses to people seeking mental health advice was unethical, immoral, and shameful.

“If you want to set back the use of AI in mental health, start exactly this way and offend as many practitioners and potential users as possible.”

So said medical ethicist Art Caplan of a digital mental health company’s use of the artificial intelligence program ChatGPT to generate responses to over 4,000 users.

Rob Morris, the founder of a free therapy program named Koko, admitted on an extensive Twitter thread that his service used GPT-3 chatbots to help respond to more than 4,000 users seeking advice about their mental health. 

Koko partners with online communities to find and treat at-risk individuals. In a video posted by Morris, the platform is described as “a place where you can get help from our network or help someone else.” 

ChatGPT, created by the company OpenAI, is a free artificial intelligence (AI) program that generates realistically conversational text based on the prompts it is fed.


In the Twitter thread, Morris said that his team took a “co-pilot” approach to using the tool by having humans supervise AI-composed messages. He claimed that these responses were rated significantly higher than those written by humans on their own and that the service’s response times went down 50 percent to well under a minute when the AI chatbot was employed.

Yet, when users were told they were being counseled by a computer, they didn’t like it, Morris admitted. “Simulated empathy feels weird, empty,” Morris tweeted. 

It’s somewhat murky whether the experiment was done without client consent or knowledge, said Caplan, who is the founding head of the Division of Medical Ethics at NYU Grossman School of Medicine in New York City.

“If not, this was highly unethical,” he said. “Those with mental health issues require clear consent and in my view, a review by an independent committee to ensure all disclosures include a conflict of interest statement. Then there is privacy protection. And a notification that ChatGPT is experimental.  And so on,” he said. 

The backlash was swift, with the majority of the more than 1,200 people who responded to Morris’ original tweet siding with Caplan. Many called the experiment shameful and immoral.

Eventually, Morris attempted to clarify his original words. 

“We were not pairing people up to chat with GPT-3, without their knowledge. (in retrospect, I could have worded my first tweet to better reflect this),” he tweeted several days later. 

And in another tweet, he said:

“This feature was opt-in. Everyone knew about the feature when it was live for a few days.” This seemed to contradict his original statement about how people felt about “finding out” that they were receiving therapy from an AI bot. 

In a statement, Morris told Business Insider that, from his point of view, he didn’t need consent anyway. He referenced a clause that every user signs before they use the service, adding that, “If this were a university study (which it’s not, it was just a product feature explored), this would fall under an ‘exempt’ category of research.” 

Caplan disagreed, calling the use of AI technology without informing users grossly unethical. Further, he explained, it is not the standard of care, nor has any professional organization tested its risks and benefits.

He also said that no matter how sophisticated ChatGPT seems, it is nowhere near ready for use in treating mental health issues.  

“I can see its use in medicine in perhaps say, robotic surgery. AI can learn and be a responsive tool for health care. But in mental health, that day is not yet here. In fact it isn’t close,” Caplan said.

Morris told Business Insider that his intention was to emphasize the importance of the human in the human-AI discussion. “I hope that doesn’t get lost here,” he said. 

He added that the tool posed no additional risk or deception to users, and that Koko didn’t collect any personally identifiable information or personal health information during the test.

Still, the Twitterverse did not appear to believe that Morris’ clarification let him off the hook, with one user’s reply summing up the general consensus.
