Evaluating the Psychiatric Effects of Social Media

by Staff Writer
June 10, 2024 at 12:31 PM UTC

Research – and reader feedback – examines how social media is hurting (and sometimes helping) our mental health.

Clinical relevance: Research – and reader feedback – examines how social media is hurting (and sometimes helping) our mental health.

  • One study shows a potential link between problematic social media use and ADHD.
  • Another research paper examines how interpretable AI combs through social media images to better forecast suicide risk.
  • Finally, an encouraging letter to PCC highlights how abuse survivors have found support through ChatGPT and Reddit, suggesting AI can be a valuable tool for emotional support and healing.

Despite the ubiquity of social media – from its use as a cultural barometer to its nascent role as an artificial intelligence (AI) educator – there’s still so much we don’t understand about its impact on our mental health.

And Americans are increasingly concerned about it. A CVS Health/Morning Consult survey published last month showed that nearly two-thirds of adults “have experienced concerns about their own mental health or the mental health of their friends and family.” Roughly 70 percent of parents worry about their kids’ exposure to online content.

“Mental health became a top concern in 2020 and it has only risen since,” CVS Health Chief Psychiatric Officer Taft Parsons III, MD, elaborated in a press release. “Uncertainty around the future, current events, and social media continue to drive anxiety among adults.”

The Journal of Clinical Psychiatry and The Primary Care Companion for CNS Disorders have published emerging research in this area. We’ve compiled relevant summaries and links to these studies for further review.

Study Links Problematic Social Media Use to ADHD in Lebanese Adults

How social media affects those with attention-deficit/hyperactivity disorder (ADHD) has emerged as a focal point of new research.

Studies suggest that ADHD is prevalent among younger social media users whose online activity borders on problematic. This hints that the interactive and reactive nature of mobile media could exacerbate ADHD symptoms.

Additionally, depression and anxiety – both common mental disorders – often coexist with ADHD. Research indicates that individuals with ADHD are at a higher risk of developing depression and anxiety, and vice versa. These disorders also increase susceptibility to addictions such as problematic social media use.

Lebanese researchers explored whether depression or anxiety mediates the link between problematic social media use and ADHD. The cross-sectional study involved 466 randomly selected Lebanese citizens. Using various validated scales, the research team assessed ADHD symptoms, problematic social media use, and anxiety and depression levels.

The researchers identified a strong link between higher problematic social media use and higher odds of ADHD. Anxiety, but not depression, partially mediated this association. This suggests that social media might impair attention, contributing to ADHD symptoms, while heightening anxiety levels.

The study emphasizes the need for increased awareness of the potential impacts of problematic social media use. Mental health professionals should consider ADHD symptoms and associated mood disorders during consultations.

The authors added that further research could explore the mechanisms behind these relationships. Additionally, it could help with the development of effective interventions, particularly for the younger generation.

Interpretable AI Models Use Social Media Images to Predict Suicide Risk

On the other end of the spectrum, the rise of social media – and AI – has revolutionized suicide prediction.

AI deep learning models have shown promise in improving prediction accuracy, while social media provides valuable insight into potential risk factors for suicide that formal medical risk assessments might not reveal.

This study – appearing in JCP in November – aimed to address conceptual gaps in AI-based suicide prediction models by developing an interpretable AI model that predicts clinically validated suicide risk using social media images. The study’s authors hypothesized that social media images would reveal information about emotions and interpersonal relationships, contributing to suicide prediction.

The researchers collected images shared on Facebook by participants who had completed the Columbia Suicide Severity Rating Scale (C-SSRS). The study used the CLIP model to extract interpretable visual features from these images. The features were predefined to reflect emotions and relationships, facilitating interpretation in the context of suicide theories and therapies.
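To make the feature-extraction step concrete, here is a minimal sketch of CLIP-based zero-shot scoring using the Hugging Face transformers library. The checkpoint and the descriptor prompts below are illustrative assumptions, not the study’s actual choices.

```python
# Minimal sketch: score an image against predefined, interpretable
# descriptors with CLIP. Checkpoint and prompts are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Predefined descriptors reflecting emotions and relationships.
feature_prompts = [
    "a photo conveying sadness",
    "a photo conveying happiness",
    "a photo of a person who is alone",
    "a photo of people together with friends or family",
]

image = Image.open("shared_image.jpg")  # one image shared by a participant
inputs = processor(
    text=feature_prompts, images=image, return_tensors="pt", padding=True
)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax converts
# them to relative weights across the feature set.
scores = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
for prompt, score in zip(feature_prompts, scores.tolist()):
    print(f"{score:.3f}  {prompt}")
```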

The study found that its hybrid AI model achieved strong predictive performance, significantly outperforming common deep learning models such as ResNet. The images of high-risk participants showed features indicative of negative emotions, loneliness, and weaker interpersonal relationships. These findings highlight the model’s ability to predict suicide risk from images alone and suggest that subtle visual cues on social media can be indicative of that risk.
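A rough way to picture the hybrid design is that the interpretable feature scores feed a simple, transparent classifier rather than an opaque end-to-end network. The sketch below uses toy data in place of real participants and assumes per-user mean feature scores and a logistic regression; neither detail comes from the study itself.

```python
# Sketch of the hybrid idea: interpretable per-user feature scores feed a
# transparent classifier. Aggregation and model choice are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy stand-in: rows = participants, columns = mean CLIP scores for each
# predefined visual feature across that participant's shared images.
X = rng.random((200, 4))
y = rng.integers(0, 2, size=200)  # 1 = high risk on the C-SSRS, else 0

clf = LogisticRegression(max_iter=1000)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.3f}")  # ~0.5 on random toy data

# Unlike end-to-end models such as ResNet, each weight ties risk to a
# named feature, keeping the prediction interpretable.
clf.fit(X, y)
print("feature weights:", clf.coef_.round(3))
```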

The study highlights the need to combine top-down theory with bottom-up data to improve AI accuracy in suicide prediction. This integrative approach offers a promising path for developing practical, real-life monitoring tools for suicide prevention.

Can ChatGPT and Reddit Offer Support for Abuse Survivors?

Finally, PCC received an encouraging letter to the editor about an unexpected avenue of support for abuse survivors.

“Since the rise of technology in the 21st century, modalities such as social media and artificial intelligence (AI) have continued to grow. Particularly during the 2020 COVID-19 pandemic, online social media platforms, such as Reddit, became supportive communities. In late 2022, ChatGPT, a large language model, was released. From performing on academic board examinations to answering patient questions empathetically, ChatGPT’s full potential has yet to be discovered.

“In 2023, the intersection of Reddit and ChatGPT was observed in communities such as the subreddit r/NarcissisticAbuse, a group whose About statement reads: ‘This is a safe place for people who suffered or are currently suffering from narcissistic abuse to seek support, learn, vent, discuss, document their abuse, and come together in their path towards healing.’

“In a November 2023 post entitled ‘ChatGPT can help’ from the NarcissisticAbuse subreddit, a user shared that they copied and pasted communication from their narcissistic ex into ChatGPT with the following message: ‘Can you draft me a response to this email from my narcissistic ex?’

“The user also shared that they asked ChatGPT to analyze the communication from their ex for emotionally abusive tactics, and the responses they received from ChatGPT were enormously validating. This post received significant traction, with many users commenting that they had saved ongoing chats with ChatGPT for affirmation and perspective. One comment on the post even described possible responses from ChatGPT after asking it to respond as a narcissist.

“Studies have shown that explanations of abusive partners as narcissists help women process their trauma and heal faster. Identification of abusive partners as narcissists allows women to work with their counselor to find practical strategies for specific situations. Many victims of narcissistic abuse report victim blaming and a lack of accountability in their partner. Thus, it is refreshing to see an AI modality such as ChatGPT used to validate victims of narcissistic abuse and provide methods of recognizing emotional abuse.

“Research has demonstrated that integrative conversational AI chatbots can serve as cost-effective and accessible therapeutic methods. While not a replacement for a trained therapist, AI may provide support by listening to and reassuring distressed individuals. Prior research has shown that ChatGPT may alleviate stress and supply encouragement to worried parents of children with atopic dermatitis.

“Analysis of the NarcissisticAbuse subreddit demonstrates that utilizing AI chatbots such as ChatGPT may be beneficial for victims of abuse. Based on the popularity of the post in the NarcissisticAbuse subreddit, counselors may want to suggest ChatGPT as an immediate, short-term method of validation and emotional support for victims of narcissistic abuse. Nevertheless, persons in acute distress should preferably reach out to a crisis hotline or seek inpatient mental health services. Potential pitfalls of AI include racial bias, particularly ChatGPT’s strikingly different pain management responses (opioids vs aspirin) based on race.

“Other shortcomings of chatbots include a lack of information verification, which has the potential to induce stress in users by providing them with an incorrect diagnosis or even dangerous information. Clinicians may mitigate some of these challenges by having [an] awareness of AI-based racial biases and informing their patients. Clinicians may also try to create open conversation[s] with their patients about AI, particularly where their patients acquire information and whether they are involved in any online social support groups.

“The interplay of online social media support groups and interactive chatbots may represent a new realm of mental health support, but providers must approach this interplay with caution to ensure optimal patient education, patient safety, and ethical considerations. Future studies are needed to elucidate the most beneficial adaptation of social media and AI for mental health aid.”
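For readers curious how the prompting pattern described in the letter might look programmatically, below is a minimal sketch using the OpenAI Python client. The model name, system prompt, and placeholder message are illustrative assumptions; the Reddit users quoted above worked through the ChatGPT web interface, not the API.

```python
# Sketch of the letter's prompting pattern via the OpenAI Python client.
# Model name and prompts are illustrative assumptions, not from the letter.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

received_message = "..."  # the communication to analyze (redacted here)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "You help users recognize emotionally abusive language "
                "and draft calm, boundary-setting replies."
            ),
        },
        {
            "role": "user",
            "content": (
                "Analyze this message for emotionally abusive tactics, "
                "then draft a brief, neutral response:\n\n"
                + received_message
            ),
        },
    ],
)
print(response.choices[0].message.content)
```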
