Is depression linked to AI for personal use? — February 9, 2026


Generative artificial intelligence (AI) tools are powerful, and new products and features are becoming increasingly available. We are using AI more and more for work, school, and personal purposes.

A recent study in JAMA Network Open examined whether frequency of generative AI use is associated with depressive symptoms; it is one of the first large‑scale investigations of this emerging issue (1).


What Was the Study? (1)

Researchers conducted a U.S. nationwide internet survey between April and May 2025, analyzing responses from adults across all 50 states (1).

  • 20,847 adults, ages 18 and older
  • Participants self‑reported:
    • Frequency of generative AI use
    • Use of social media
  • Depressive symptoms were measured using the PHQ‑9, a widely used nine‑item clinical screening questionnaire scored from 0 to 27, on which a score of 10 or higher indicates at least moderate depression
  • Data were analyzed in August 2025

The goal was to understand whether frequency of AI use was associated with higher levels of depressive symptoms, independent of other factors.


What Were the Results? (1)

Generative AI use was common but varied widely:

  • 10.3% of U.S. adults reported using generative AI daily
  • 5.3% reported using AI multiple times per day
  • Daily users most commonly reported:
    • Personal use (87%)
    • Work‑related use (48%)
    • Smaller proportions used AI for school

When mental health outcomes were examined:

  • Daily or more frequent AI use was associated with higher depressive symptom scores; in this sample, the association was seen mainly with personal use, not school or work use
  • Adults who used AI daily had approximately 30% greater odds of at least moderate depression (see the worked example after this list for what "greater odds" means in practice)
  • The association was strongest among younger adults, compared with older age groups
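
To put the "30% greater odds" figure in context, here is a brief, purely illustrative calculation. The 20% baseline rate used below is a hypothetical assumption, not a number reported in the study; it only shows how roughly 30% greater odds (an odds ratio of about 1.3) translates into probabilities.

  odds = p / (1 - p)
  Hypothetical baseline: p = 0.20, so baseline odds = 0.20 / 0.80 = 0.25
  With 30% greater odds: 1.3 × 0.25 = 0.325
  Converting back to a probability: 0.325 / (1 + 0.325) ≈ 0.245, or about 24.5%

In other words, "30% greater odds" does not mean a 30% higher rate of depression; the size of the difference in absolute terms depends on the baseline rate, which is only assumed here for illustration.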


What are some caveats?

  • This was a cross-sectional study, which provides a snapshot in time but cannot establish cause and effect.
  • As the study authors note, the results are consistent with personal AI use causing greater depressive symptoms, but they are equally consistent with greater depressive symptoms leading to greater AI use, or with neither.
  • The study did not account for other confounding factors, such as preexisting psychiatric diagnoses.

What Does This Mean?

This study does not suggest that generative AI is inherently harmful. Instead, it raises important questions about how, why, and by whom these tools are being used.

Possible explanations for the observed association include:

  • People experiencing depression may be more likely to turn to AI tools
  • Heavy AI use could displace social interaction, sleep, or restorative activities
  • AI use may reflect broader patterns of screen time, isolation, or stress

Future research is needed to clarify mechanisms, directionality, and individual differences in how AI use relates to mental health (1).

What Does This Mean for Everyday Life (AI and mental health safety guidance)?

As with other digital tools, it matters how and why AI is used. Some practical considerations, drawn from the findings above and the related posts below, include:

  • Notice whether AI use is displacing sleep, social interaction, or other restorative activities
  • Treat AI as a supplement to, not a replacement for, professional care, and do not rely on it in emergencies
  • Be mindful of privacy and avoid entering sensitive personal information
  • Discontinue use if it feels unhelpful or harmful, and reach out to a mental health professional if depressive symptoms persist

By Ryan S Patel DO, FAPA
OSU‑CCS Psychiatrist

If you would like to be notified of a new post (usually once per month), please subscribe; it's free.

For speaking engagements, keynotes, seminars, etc., contact: ryanpatel9966@outlook.com

Disclaimer: This article is intended to be informative only. It is advised that you check with your own physician/mental health provider before implementing any changes. With this article, the author is not rendering medical advice, nor diagnosing, prescribing, or treating any condition, or injury; and therefore claims no responsibility to any person or entity for any liability, loss, or injury caused directly or indirectly as a result of the use, application, or interpretation of the material presented.


Reference

  1. Perlis RH, Gunning FM, Usla A, et al. Generative AI Use and Depressive Symptoms Among US Adults. JAMA Network Open. 2026;9(1):e2554820. doi:10.1001/jamanetworkopen.2025.54820
  2. Patel R. Mental Health For College Students Chapter 8. Technology, media, and mental health.
Using Google Gemini AI for mental health support: benefit vs limitations — November 30, 2025


Sometimes students turn to AI for mental health support, but AI is NOT a replacement for professional treatment, is NOT intended for emergencies, and is NOT therapy. Using AI for mental health is not without risks (noted below). A recent article discusses considerations for using Google Gemini AI for mental health (1).

Combining traditional wellness practices with digital tools like Google Gemini can provide more accessible, personalized support.

Here are some examples (1):

Step 1. Assessment & Baseline: Reflect on current emotional wellbeing habits, stress levels, sleep, etc. How Gemini helps: it can help you create baseline surveys and interpret wearable data.
Step 2. Set Goals: Set specific, measurable, realistic mental health goals (e.g., reduce anxiety, improve sleep). How Gemini helps: it can suggest goal‑setting frameworks and help you refine your goals.
Step 3. Plan Interventions: Choose the practices below that suit you. How Gemini helps: it can help you pick appropriate interventions and schedule reminders.
Step 4. Tools & Resources: Apps, guided meditations, wearable trackers. How Gemini helps: it can help you identify additional resources that may be useful.
Step 5. Monitor & Iterate: Track your progress; note what works and what doesn't. How Gemini helps: it can analyze your logs and suggest adjustments.
Step 6. Support Network & Professional Help: Use community, professional therapy, and peer support when needed. How Gemini helps: it can help you locate local professionals and support groups and create checklists for sessions. For mental health support options at OSU, go here: https://ccs.osu.edu/services/mental-health-resources

Using Google Gemini to enhance 10 Evidence‑Based Practices that support Mental health (1)

1 Mindfulness and Meditation such as seated meditation, body scan, mindful breathing.

  • Benefits: Reduced stress and anxiety, improved emotional regulation, increased attention span.
  • AI / Google Gemini’s role: Can generate personalized guided meditations; suggest mindfulness prompts; help analyze meditation logs; recommend apps, practices based on user’s mood.

2 Physical Activity such as aerobic exercise, strength training, yoga, etc.

  • Benefits: Releases endorphins; improves mood; reduces symptoms of depression and anxiety.
  • How to do it: Regular routine (e.g., 30 mins, 3‑5 times/week); choose types you enjoy.
  • AI / Google Gemini’s role: Reminders, custom workout plans; tracking progress; motivating messages; adapting plan based on feedback.

3 Adequate Sleep Hygiene such as consistent schedule, avoiding caffeine at night, limiting screen time before bed.

  • Benefits: Better mood, improved cognitive function, reduced risk of mental health disorders.
  • How to do it: Set regular wake/sleep times, create sleep‑friendly bedroom, avoid blue light at night.
  • AI / Google Gemini’s role: Suggest improvements; analyze sleep trackers; recommend sleep routines; issue alerts when patterns deteriorate.

4 Balanced Nutrition with whole foods: vegetables, fruits, lean proteins, healthy fats; reducing processed foods.

  • Benefits: Affects brain health (neurotransmitters); energy stability; mood stabilization.
  • How to do it: Meal planning; nutrition; hydration.
  • AI / Google Gemini’s role: Suggest recipes; smooth meal planning; help adjust nutrition to lifestyle; track nutritional deficiencies.

5 Social Connection such as maintaining friendships, community engagement, supportive relationships.

  • Benefits: Lower rates of depression and anxiety; buffer stress; improve wellbeing.
  • How to do it: Regular catch‑ups; joining interest groups; volunteering; quality time.
  • AI / Google Gemini’s role: Reminders to reach out; suggest local groups; help draft messages; coach on communication skills.

6 Cognitive Behavioral Techniques such as reframing negative thoughts, behavioral activation.

  • Benefits: Strong evidence in reducing depression, anxiety; improving resilience.
  • How to do it: Identify negative thinking patterns; set small behavioral goals.
  • AI / Google Gemini’s role: Guide journaling; challenge unhelpful thoughts; suggest techniques.

7 Journaling and Reflective Practices such as writing down thoughts, gratitude journaling, reflection on daily experience.

  • Benefits: Helps process emotions; increases self‑awareness; reduces rumination.
  • AI / Google Gemini’s role: Provide prompts; analyze themes over time; offer feedback; suggest reflection questions.

8 Limiting Screen Time & Digital Detox, especially social media or negative content; periodic breaks.

  • Benefits: Improves sleep, reduces anxiety, improves concentration.
  • How to do it: Set screen‑free hours; remove apps; use blue‑light filters; substitute with offline activities.
  • AI / Google Gemini’s role: Monitor usage; suggest schedule for detox; send reminders; provide alternative offline ideas.

9 Nature Exposure such as time outdoors, green spaces, forests, parks, natural light.

  • Benefits: Reduces stress; improves mood; improves attention; sometimes lowers blood pressure.
  • How to do it: Daily walks; gardening; sitting outside; weekend hikes.
  • AI / Google Gemini’s role: Suggest nearby parks; remind to get outside; provide information on nature therapy; support tracking nature exposure.

10 Professional Support & Therapy: Talking to mental health professionals (therapists, psychologists, psychiatrists), possibly medication if needed.

  • Benefits: Tailored treatment; long‑term improvement; skills development.
  • How to do it: Seek licensed professional; assess online therapy options; ensure credentials; set expectations.
  • AI / Google Gemini’s role: Provide information on finding providers; clarify what therapy entails; prepare questions; help understand treatment options; supplement (not replace) professional help. https://ccs.osu.edu/services/mental-health-resources

Potential Risks and Ethical Considerations (1)

AI‑assisted mental wellness has promise, but also comes with risks, so being aware can help in using such tools safely.

  • Accuracy & Hallucinations: As studies show, models including Gemini may sometimes produce incorrect or misleading outputs. For medical or mental health matters, this can be harmful.
  • Privacy & Data Security: Mental health data, sensor data, journal entries are highly sensitive. Ensuring secure storage, consent, transparency in use is crucial. Understand terms and conditions and avoid entering private, confidential, individually identifiable information whenever possible.
  • Overreliance on AI / Self‑Diagnosis: Tools should support, not replace, professional help. Self‑managing with AI alone might delay getting necessary care.
  • Bias and Culture: Mental health concepts and practices are culture‑sensitive. What works in one region might not be valid in another. AI trained on biased datasets may misinterpret non‑Western expressions of distress.
  • Ethical / Regulatory Compliance: Data protection laws (e.g., HIPAA, GDPR), professional guidelines, and licensing issues for digital health tools must be respected. Take time to familiarize yourself with the features and data limitations of the AI you are using.
  • Limitations: These include potential inaccuracies, lack of emotional nuance, data privacy concerns, and an inability to provide licensed therapeutic interventions.

Additional resources: For mental health support options at OSU, go here: https://ccs.osu.edu/services/mental-health-resources

By Ryan S Patel DO, FAPA
OSU-CCS Psychiatrist
Contact: ryanpatel9966@outlook.com

Disclaimer: This article is intended to be informative only. It is advised that you check with your own physician/mental health provider before implementing any changes.  With this article, the author is not rendering medical advice, nor diagnosing, prescribing, or treating any condition, or injury; and therefore claims no responsibility to any person or entity for any liability, loss, or injury caused directly or indirectly as a result of the use, application, or interpretation of the material presented.

References:

  1. https://www.quickobook.com/healthfeed/view/how-google-gemini-is-transforming-mental-wellness-10-proven-ways-to-improve-your-mind-and-mood
Risks of Students using AI for mental health — September 18, 2025


Artificial intelligence (AI) is becoming increasingly common, and by some estimates, AI apps are among the most popular apps in the world (1). Globally, nearly 700 million people accessed AI-centric apps, especially chatbots or image editing tools, in 2024 (2).

A nationwide survey reported that over 50% of students have used major AI platforms like ChatGPT or similar large language models for mental health advice, emotional support, or therapeutic conversations (3, 4).

What are some risks of using AI for mental health support?

There are media reports on both the benefits and the harms of using general-purpose and mental health-specific AI.

The research results are mixed:

  • One review of 18 randomized controlled trials found promising results: AI-based therapy chatbots programmed to use specific types of therapy may reduce symptoms of anxiety and depression (5). However, the study populations were limited, and chatbot design and psychotherapeutic approaches varied among the studies, all of which may limit generalizability (5).
  • A recent Stanford study found significant risks with AI therapy chatbots (6):
    • LLMs expressed stigma toward people with mental health conditions (6)
    • Failed to respond safely to suicidal ideation 20-50% of the time (compared to 93% appropriateness from human therapists) (6)
    • Could not form genuine therapeutic relationships, which are key predictors of therapy success (6)

What are some risks that students should be aware of when using AI for mental health?

  • Lack of Personalization: AI bots cannot fully understand trauma or human emotion; they are not human and do not have lived experiences, so they may struggle to respond in the "correct" way. (7)
  • False sense of support: These apps might make college students avoid seeking professional help when necessary, which can have serious consequences for those who need the support. (7)
  • Privacy concerns: AI companies may collect the data that people enter into the system, which raises questions about who has access to your data and to information about your mental health. (7)
  •  The JED Foundation and the American Psychological Association highlight the following risks (8,9):
    • Distorted reality and harmed trust. Generative AI (the type designed to complete tasks or convey information) and algorithmic amplification might spread misinformation, worsen body image issues, and enable realistic deepfakes, undermining young people’s sense of self, safety, and truth. (8,9)
    • Invisible manipulation. AI curates feeds, monitors behavior, and influences emotions in ways young people often cannot detect or fully understand, leaving them vulnerable to manipulation and exploitation. This includes algorithmic nudging and emotionally manipulative design. (8,9)
    • Content that can escalate crises. Reliance on chatbot therapy alone can be detrimental due to inadequate support and guidance. Due to the absence of clinical safeguards, chatbots and AI-generated search summaries may serve harmful content or fail to alert appropriate human support when someone is in distress, particularly for youth experiencing suicidal thoughts. (8,9)
    • Simulated support without care. Chatbots posing as friends or therapists may feel emotionally supportive, but they can reinforce emotional dependency, delay help-seeking, disrupt or replace real friendships, undermine relational growth, and simulate connection without care. This is particularly concerning for isolated or vulnerable youth who may not recognize the limits of artificial relationships. (8,9)
    • Deepening inequities. Many AI systems do not reflect the full range of youth experiences across diverse populations. As a result, they risk reinforcing stereotypes, misidentifying emotional states, or excluding segments of the youth population. (8,9)
  • Other considerations: (9)
    • AI programs may lack nuanced understanding of individual symptoms and the ability to interpret and contextualize them, and may have limited understanding of the individual's co-occurring conditions.
    • Be cautious of AI "sycophancy" (the tendency of chatbots to agree with or validate whatever the user says).
    • The programs are not perfect and there is potential for harmful advice; do not take outputs at face value.
    • There are risks in asking open-ended mental health questions of general-purpose AIs.
    • Is this usage displacing or augmenting human interactions?
    • Discontinue use if harmful or unhelpful
  • Finally, AI is not intended for emergencies or to replace professional treatment.
  • While some commercially available AI programs may be beneficial for structured activities, such as keeping a sleep log or mood chart, learning and practicing personalized coping skills and techniques, connecting with healthy life behaviors, and increasing connection with others, research and development are ongoing, and students should proceed with caution, keeping these risks in mind. Products, features, and safeguards are also evolving.

By Ryan S Patel DO, FAPA
OSU-CCS Psychiatrist
Contact: patel.2350@osu.edu

Disclaimer: This article is intended to be informative only. It is advised that you check with your own physician/mental health provider before implementing any changes.  With this article, the author is not rendering medical advice, nor diagnosing, prescribing, or treating any condition, or injury; and therefore claims no responsibility to any person or entity for any liability, loss, or injury caused directly or indirectly as a result of the use, application, or interpretation of the material presented.

References:

  1. https://backlinko.com/most-popular-apps
  2. https://www.businessofapps.com/data/ai-app-market/
  3. https://sentio.org/ai-blog/ai-survey
  4. Rousmaniere, T., Zhang, Y., Li, X., & Shah, S. (2025) Large Language Models as Mental Health Resources: Patterns of Use in the United States. Practice Innovations.
  5. Zhong W, Luo J, Zhang H. The therapeutic effectiveness of artificial intelligence-based chatbots in alleviation of depressive and anxiety symptoms in short-course treatments: A systematic review and meta-analysis. Journal of Affective Disorders. 2024;356:459-469. https://doi.org/10.1016/j.jad.2024.04.057
  6. Jared Moore, Declan Grabb, William Agnew, Kevin Klyman, Stevie Chancellor, Desmond C. Ong, and Nick Haber. 2025. Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’25). Association for Computing Machinery, New York, NY, USA, 599–627. https://doi.org/10.1145/3715275.3732039
  7. https://www.behavioralhealthtech.com/insights/benefits-and-risks-of-ai-for-college-students
  8. Tech Companies and Policymakers Must Safeguard Youth Mental Health in AI Technologies | The Jed Foundation.  https://jedfoundation.org/artificial-intelligence-youth-mental-health-pov/
  9. Health advisory: Artificial intelligence and adolescent well-being. https://www.apa.org/topics/artificial-intelligence-machine-learning/health-advisory-ai-adolescent-well-being