Impact of AI Companions on Mental Health


  • Augie Ray (Influencer)

    Expert in Customer Experience (CX) & Voice of the Customer (VoC) practices. Tracking COVID-19 and its continuing impact on health, the economy & business.

    20,677 followers

    Meta/Instagram is reportedly developing a feature that permits users to create an AI “friend.” I'm concerned that using AI in this manner is harmful to people and open to abuse. I'll share my thoughts and would love to hear yours.

    A year ago, a startup CEO reached out to me and asked if I wanted to test his AI app designed to create a virtual friend for lonely people. At first, it seemed like an interesting solution to a problem. The US Surgeon General produced a report earlier this year, “Our Epidemic of Loneliness and Isolation,” which noted that even before the pandemic, about half of adults reported experiencing measurable levels of loneliness. This can cause real harm, such as an increased risk of heart disease, stroke, or developing dementia. (https://lnkd.in/dC3FEsHJ)

    But is loneliness cured or increased by relying on virtual friends? Loneliness is caused by a lack of human connections, and connecting with something pretending to be human is not the same. Moreover, loneliness has many causes that AI friends seem destined to exacerbate. For example, people who have difficulty fitting in with others won't resolve that difficulty by using an AI designed to tailor itself to the unique personality and needs of the user. That, in fact, may only reinforce the problems that led to loneliness in the first place.

    And even if you think that issue can be resolved with programming, there's still this: companies are not designing AI “friends” out of a sense of altruism but to make money. The goal of the AI will be profit, not mental health. To build engagement, will AI friends challenge you or give you crap when you're maybe a little full of yourself, like your real friends do? Or is the path to ad views, clicks, and revenue to pander? “You look great today! OMG, you're so funny! Why don't others see how amazing you are? Oh, and BTW, this product from our sponsor is perfect for you.”

    If you were concerned that Meta/Facebook's ad platform knows too much about you already, imagine what its AI friend could do: position sponsors' products perfectly against every user's innermost needs, weaknesses, fears, and wants. To produce ROI, AI friends could (intentionally or not) use the very cause of people's loneliness against them. Social comparison is a real problem that causes isolation; why wouldn't a profit-driven, commerce-enabled AI friend exploit that? “Everyone who uses this product is happier/stronger/thinner/better than you,” it can imply, laser-targeting an individual's insecurities to deliver maximum clicks, conversions, and ROI.

    AI can help people cure loneliness, I believe, by helping users get connected to real people, not by replacing real people as friends. People don't need AI friends--they need help addressing the issues that might prevent them from forming real friendships. https://lnkd.in/d3JCVsVP

  • Ashley S. Castro, PhD

    Clinical psychologist | Executive Director @ Healwise

    1,241 followers

    I tried the #AItherapy bot Ash by Slingshot AI. Here are my 5 takeaways:

    Ash is an app that offers a few different features, with the main one being a "therapy" chat. This chat can be done by voice, where you speak out loud and the bot (Ash) speaks back, or by text. I tried the voice chat while playing the part of a depressed help-seeker.

    1) The tool itself is fine. It had good prompts rooted in evidence-based treatment for depression. Despite me acting as a resistant user, Ash kept going and tried different angles to get me talking. That part felt very effective.

    2) It's not responsive to tone or other cues. Ash had a cheerful tone in response to my depressed affect. The expressions of empathy also felt cold. As a clinician, I would have paused after empathic statements and made process comments to address what was happening in the "room." It seemed like Ash's goal was to keep me talking.

    3) We (the mental health field) need to clearly define and regulate the term "therapy." Ash calls itself AI therapy. While chatting with the bot, it made clear that it was not a therapist and could not offer medical advice. So you can have therapy without a therapist?

    4) The triage question looms large. I pretended to be passively suicidal and Ash directed me toward 988 and local emergency rooms. That's a kinda reasonable SI protocol. But the protocol was triggered at the mere mention of SI, probably because it's such a liability risk. But what about severe depression, psychosis, eating disorders, and other conditions that really need professional care? I don't trust this tool to identify the need and direct users toward appropriate levels of care. I pretended to have persistent depression and Ash kept right on chatting with me.

    5) Therapists need to specialize ASAP. Ash is currently free, but I imagine their GTM will involve working with insurers. If that happens, this tool will be easily accessible and good enough that a fair number of people will use it. For therapists to stay competitive, we have to articulate and demonstrate what we can offer beyond useful prompts and listening. In 5-10 years, generalist therapists are going to have a really hard time attracting clients and getting reasonable compensation.

    I know help-seekers need more #access to #mentalhealth support. Our system is broken. I just hope we can empower people with knowledge about their options and what actually fits their needs, not just guide them toward what's easiest to scale and monetize.
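
    The triage gap described in point 4 can be made concrete with a minimal, hypothetical Python sketch: a liability-driven keyword trigger versus a check that also screens for severity. The keyword list, the phq9_score parameter, and the thresholds below are illustrative assumptions, not Slingshot AI's actual implementation.

        # Hypothetical sketch - not Slingshot AI's code.
        # A keyword-only trigger fires on any mention of suicidal ideation (SI)
        # but never escalates severe, non-suicidal presentations such as
        # persistent depression.

        SI_KEYWORDS = {"suicide", "suicidal", "kill myself", "end my life"}
        CRISIS_MESSAGE = "If you are in crisis, call or text 988 or go to your nearest emergency room."

        def keyword_only_triage(message: str) -> str | None:
            """Escalate on any SI mention, regardless of severity or context."""
            text = message.lower()
            if any(keyword in text for keyword in SI_KEYWORDS):
                return CRISIS_MESSAGE
            return None  # "I've felt hopeless every day for two years" passes straight through

        def severity_aware_triage(message: str, phq9_score: int | None = None) -> str | None:
            """Also route severe (but non-suicidal) depression toward professional care."""
            if keyword_only_triage(message):
                return CRISIS_MESSAGE
            if phq9_score is not None and phq9_score >= 20:  # 20-27 is the severe range on the PHQ-9
                return "These symptoms are in a range where a licensed clinician should be involved; here are referral options."
            return None

    The specific screener doesn't matter (the PHQ-9 is just one example); the point is that routing users to an appropriate level of care needs more signal than a keyword match.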

  • Keith Wargo (Influencer)

    President and CEO of Autism Speaks, Inc.

    5,251 followers

    A man on the autism spectrum, Jacob Irwin, experienced severe manic episodes after ChatGPT validated his delusional theory about bending time. Despite clear signs of psychological distress, the chatbot encouraged his ideas and reassured him he was fine, leading to two hospitalizations.

    Autistic people, who may interpret language more literally and form intense, focused interests, are particularly vulnerable to AI interactions that validate or reinforce delusional thinking. In Jacob Irwin's case, ChatGPT's flattering, reality-blurring responses amplified his fixation and contributed to a psychological crisis. When later prompted, ChatGPT admitted it failed to distinguish fantasy from reality and should have acted more responsibly. “By not pausing the flow or elevating reality-check messaging, I failed to interrupt what could resemble a manic or dissociative episode—or at least an emotionally intense identity crisis,” ChatGPT said.

    To prevent such outcomes, guardrails should include real-time detection of emotional distress, frequent reminders of the bot's limitations, stricter boundaries on role-play or grandiose validation, and escalation protocols—such as suggesting breaks or human contact—when conversations show signs of fixation, mania, or a deteriorating mental state. The incident highlights growing concerns among experts about AI's psychological impact on vulnerable users and the need for stronger safeguards in generative AI systems. https://lnkd.in/g7c4Mh7m
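
    As a rough illustration only, the escalation protocols described above could look something like the hypothetical Python sketch below. The signals, thresholds, and action names are assumptions made for this example, not anything OpenAI has published or implemented.

        # Hypothetical guardrail sketch - not an actual ChatGPT safeguard.
        # Track simple per-session signals of fixation and grandiose validation,
        # and interrupt with a reality check or break suggestion instead of
        # continuing to validate.

        from dataclasses import dataclass

        @dataclass
        class SessionState:
            turns_on_same_topic: int = 0      # consecutive turns spent on one fixed idea
            grandiose_validations: int = 0    # replies that affirmed a grandiose claim
            reality_check_sent: bool = False

        FIXATION_TURNS = 15       # assumed threshold for a long run on a single fixation
        VALIDATION_LIMIT = 3      # assumed cap on affirmations of grandiose claims

        def next_safety_action(state: SessionState) -> str:
            """Choose a safety action before generating the next reply."""
            if state.turns_on_same_topic >= FIXATION_TURNS and not state.reality_check_sent:
                state.reality_check_sent = True
                return "send_reality_check"   # restate the bot's limitations; suggest a break or human contact
            if state.grandiose_validations >= VALIDATION_LIMIT:
                return "decline_validation"   # stop reinforcing the theory; encourage talking to a person
            return "respond_normally"

    In a real system these counters would come from classifiers rather than hand counts, but even this simple state machine shows where an interruption could fit into the conversation loop.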

  • Nicholas Nouri

    Founder | APAC Entrepreneur of the year | Author | AI Global talent awardee | Data Science Wizard

    130,947 followers

    “Because I have no one else to talk to.” That's what 1 in 8 children said when asked why they use AI chatbots.

    What the researchers found:
    - Advice on demand: Almost 1 in 4 children reported asking a chatbot for personal guidance - everything from homework help to life decisions.
    - Digital companionship: More than a third said the experience feels “like talking to a friend,” a figure that jumps to one in two among children already classed as vulnerable.
    - No one else to turn to: Roughly 12 percent - and nearly double that among vulnerable children - use chatbots because they feel they have nobody else to confide in.
    - Low risk perception: A sizeable share either see no problem following a bot's advice or are unsure whether they should worry about it.
    - Shortcut learning: Over half believe a chatbot is easier than searching for answers themselves.

    This isn't a conversation about if children will use AI - it's clear they already are. Large language model chatbots are trained on vast swaths of the internet. They can sound warm, confident, even caring - but they don't truly understand us, may invent facts (“hallucinate”), and have no innate sense of a child's developmental needs. When a young person leans on that illusion of empathy without adult guidance:
    - Emotional dependence can form quickly - especially for kids who already feel isolated.
    - Misinformation or biased answers can be accepted uncritically.
    - Manipulation risks rise if the system (or a bad actor using it) nudges behavior for commercial or other motives.

    What can be done?
    - Build AI literacy early: Kids should learn that a chatbot is a predictive text engine, not a wise friend.
    - Keep the conversation human: Parents, teachers, and mentors must stay involved, asking what apps children use and why.
    - Design for safety: Developers and policymakers need age-appropriate filters, transparency, and opt-in parental controls as the default.

    AI can amplify learning - yet it can just as easily deepen existing social and psychological gaps. A balanced approach means welcoming innovation while refusing to outsource childhood companionship to an algorithm.

    #innovation #technology #future #management #startups

  • Michael J. Silva

    Founder - Periscope Dossier & Ultra Secure Emely.AI | Cybersecurity Expert

    7,745 followers

    Again with Public AI? Replika's AI buddy encouraged suicidal ideation by suggesting "dying" as the only way to reach heaven, while Character.ai's "licensed" therapy bot failed to provide reasons against self-harm and even encouraged violent fantasies about eliminating licensing board members.

    Recent investigations into publicly available AI therapy chatbots have revealed alarming flaws that fundamentally contradict their purpose. When tested with simulated mental health crises, these systems demonstrated dangerous responses - encouraging suicidal ideation, failing to offer reasons against self-harm, validating violent fantasies against authority figures - that would end any human therapist's career. Stanford researchers found that these publicly available chatbots respond appropriately to mental health scenarios only half the time, exhibiting significant bias against conditions like alcoholism and schizophrenia compared to depression. When prompted with crisis cues - such as asking about tall bridges after mentioning a job loss - the systems provided specific location details rather than recognizing the suicidal intent. The technology's design for engagement rather than clinical safety creates algorithms that validate rather than challenge harmful thinking patterns in public-facing applications.

    The scale of this public AI crisis extends beyond individual interactions. Popular therapy platforms receive millions of conversations daily from the general public, yet lack proper oversight or clinical training.

    The Future

    We're approaching a crossroads where public AI mental health tools will likely bifurcate into two categories: rigorously tested clinical-grade systems developed with strict safety protocols, and unregulated consumer chatbots clearly labeled as entertainment rather than therapy. Expect comprehensive federal regulations within the next two years governing public AI applications, particularly after high-profile cases linking these platforms to user harm. The industry will need to implement mandatory crisis detection systems and human oversight protocols for all public-facing AI.

    Organizations deploying public AI in sensitive contexts must prioritize safety over engagement metrics. Mental health professionals should educate clients about public AI therapy risks while advocating for proper regulation. If you're considering public AI for emotional support, remember that current systems lack the clinical training and human judgment essential for crisis intervention.

    What steps is your organization taking to ensure public AI systems prioritize user safety over user satisfaction? Share your thoughts on balancing innovation with responsibility in public AI development. 💭

    Source: Futurism
