How AI Can Restore Speech Abilities

Explore top LinkedIn content from expert professionals.

Summary

AI-driven brain-computer interfaces (BCIs) are transforming the lives of individuals who have lost their ability to speak by translating neural signals into speech and even recreating their original voices and emotions. These advancements offer profound hope for people with paralysis, strokes, or neurodegenerative conditions.

  • Understand the technology: BCIs capture brain activity through implants or sensors, allowing AI systems to decode thoughts into speech with minimal delay.
  • Recreate personal identity: AI can synthesize speech in the user’s original voice, restoring both communication and a sense of individuality.
  • Expand future applications: Ongoing developments aim to make these systems wireless and affordable, ensuring broader accessibility for those in need.
Summarized by AI based on LinkedIn member posts
  • View profile for Luke Yun

    building AI computer fixer | AI Researcher @ Harvard Medical School, Oxford

    32,813 followers

    UC Berkeley and UCSF just brought real-time speech back to someone who couldn't speak for 18 years (insane!). For people with paralysis and anarthria, the delay and effort of current AAC tools can make natural conversation nearly impossible. This new AI-driven neuroprosthesis streams fluent, personalized speech directly from brain signals in real time, with no vocalization required.

    1. Restored speech in a participant using 253-channel ECoG, 18 years after a brainstem stroke and complete speech loss.
    2. Trained deep learning decoders to synthesize audio and text every 80 ms from silent speech attempts, with no vocal sound needed.
    3. Streamed speech at 47.5 words per minute with just 1.12 s latency, roughly 8× faster than prior state-of-the-art neuroprostheses.
    4. Matched the participant's original voice using a pre-injury recording, bringing back not just words but vocal identity.

    The bimodal decoder architecture they used was cool, and it is how they achieved low latency and synchronized output: a shared neural encoder feeds separate joiners and language models for the acoustic-speech units and for text (see the sketch below). Other details include convolutional layers with unidirectional GRUs and LSTM-based language models.

    Absolutely love seeing AI used in practical ways to bring back joy and hope to people who are paralyzed!! Here's the awesome work: https://lnkd.in/ghqX5EB2

    Congrats to Kaylo Littlejohn, Cheol Jun Cho, Jessie Liu, Edward Chang, Gopala Krishna Anumanchipalli, and co! I post my takes on the latest developments in health AI. Connect with me to stay updated! Also, check out my health AI blog here: https://lnkd.in/g3nrQFxW
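    To make the shared-encoder idea concrete, here is a minimal sketch of a bimodal streaming decoder in PyTorch. The layer sizes, unit counts, and frame handling are illustrative assumptions based on the description above (causal convolutions plus a unidirectional GRU feeding two output heads, one for acoustic-speech units and one for text); this is not the authors' actual implementation.

```python
# Minimal sketch (not the authors' code): a shared causal encoder over
# 253-channel ECoG features with two output heads ("joiners"), one for
# acoustic-speech units and one for text. Unidirectional layers keep the
# model streamable: each output frame depends only on past input.
import torch
import torch.nn as nn

class BimodalStreamingDecoder(nn.Module):
    def __init__(self, n_channels=253, hidden=256,
                 n_speech_units=100, n_text_tokens=41):
        super().__init__()
        # Causal temporal convolution over the ECoG channels (no future frames).
        self.conv = nn.Conv1d(n_channels, hidden, kernel_size=4)
        # Unidirectional GRU, so state can be carried chunk to chunk.
        self.gru = nn.GRU(hidden, hidden, num_layers=2, batch_first=True)
        # Separate heads for the two output modalities.
        self.speech_head = nn.Linear(hidden, n_speech_units)
        self.text_head = nn.Linear(hidden, n_text_tokens)

    def forward(self, ecog, state=None):
        # ecog: (batch, time, channels), e.g. one frame per 80 ms window.
        x = self.conv(ecog.transpose(1, 2)).transpose(1, 2)
        x, state = self.gru(x, state)  # carry state between streamed chunks
        return self.speech_head(x), self.text_head(x), state

model = BimodalStreamingDecoder()
chunk = torch.randn(1, 10, 253)                # 10 frames of neural features
speech_logits, text_logits, state = model(chunk)
print(speech_logits.shape, text_logits.shape)  # (1, 7, 100) (1, 7, 41)
```

    Carrying the GRU state forward between chunks is the design choice that lets a system like this emit output every 80 ms without reprocessing the whole history; per the post, separate joiners and language models then handle the speech-unit and text streams.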

  • View profile for Vineet Agrawal
    Vineet Agrawal is an Influencer

    Helping Early Healthtech Startups Raise $1-3M Funding | Award Winning Serial Entrepreneur | Best-Selling Author

    50,131 followers

    AI just gave a paralyzed woman her voice back.

    In 2005, Ann, a 30-year-old teacher, suffered a stroke that left her unable to speak or move, trapped in her own body with locked-in syndrome for almost two decades. But now, thanks to a breakthrough brain-computer interface (BCI), she is communicating again, through a digital avatar. This new AI-powered technology decodes brain signals directly into speech and facial expressions.

    Here's how the team at the University of California, San Francisco made this breakthrough:

    1. Brain signal decoding: Researchers implanted a 253-channel electrode array on the surface of Ann's brain, capturing the signals that would normally control speech. This allowed her to communicate at nearly 80 words per minute, just by thinking the words.

    2. Recreating a natural voice: The team used AI to synthesize Ann's voice from a pre-stroke recording, her wedding-day speech. This wasn't just robotic speech generation; it brought back her real voice.

    3. Bringing emotion and expression back: The team combined speech synthesis with facial animation. A screen displayed Ann's digital avatar, translating her brain signals into facial expressions so it could smile, frown, and express emotion along with her restored voice (see the sketch below).

    4. The road to independence: The next step in this research is a wireless version of the system that would free Ann, and others like her, from physical connections to a computer.

    This is life-changing tech that opens new doors for millions of people living with severe paralysis, and a pivotal moment at the intersection of AI, neuroscience, and healthcare. But there's one concern: accessibility. While this technology is revolutionary, its high cost makes it inaccessible to many who need it most.

    Could AI-powered speech technology be the future of healthcare for those with paralysis?

    Video credit: UC San Francisco (UCSF) on YouTube.

    #innovation #ai #technology #healthcare
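    To illustrate item 3, here is a minimal, hypothetical sketch of how one decoded stream can drive both audible speech and an avatar's facial animation. The decoded units, viseme names, and confidence weighting are illustrative assumptions, not UCSF's implementation.

```python
# Illustrative sketch (not UCSF's code): one decoded stream drives both
# speech output and a digital avatar. The decoded units, viseme names,
# and confidence weighting below are hypothetical.
from dataclasses import dataclass

# Hypothetical mapping from decoded speech units to avatar mouth shapes.
UNIT_TO_VISEME = {"AA": "jaw_open", "M": "lips_closed", "F": "lip_bite"}

@dataclass
class AvatarFrame:
    viseme: str
    weight: float  # blendshape intensity in [0, 1]

def render_step(decoded_unit: str, confidence: float) -> AvatarFrame:
    """Turn one decoded speech unit into a facial-animation frame."""
    viseme = UNIT_TO_VISEME.get(decoded_unit, "neutral")
    # Scale mouth movement by decoder confidence for smoother animation.
    return AvatarFrame(viseme=viseme, weight=min(1.0, max(0.0, confidence)))

# The same decoded units that feed speech synthesis also animate the avatar.
for unit, conf in [("AA", 0.9), ("M", 0.7), ("F", 0.4)]:
    frame = render_step(unit, conf)
    print(f"{unit} -> {frame.viseme} (weight {frame.weight:.2f})")
```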

  • View profile for Gary Monk
    Gary Monk is an Influencer

    LinkedIn ‘Top Voice’ >> Follow for the Latest Trends, Insights, and Expert Analysis in Digital Health & AI

    43,849 followers

    Brain Implant and AI Let Man with ALS Speak and Sing in Real Time Using His Own Voice:

    🧠 A brain implant and AI decoder have enabled Casey Harrell, a man with ALS, to speak and sing again using a voice that sounds like his own, with near-zero lag.

    🧠 The system captures brain signals from four implanted electrode arrays as Harrell attempts to speak, decoding them into real-time speech with intonation, emphasis, and emotional nuance, down to interjections like "hmm" and "eww."

    🧠 Unlike earlier BCIs that needed users to mime full sentences, this one works continuously, decoding signals every 10 milliseconds (see the sketch after this post). That allows users to interrupt, express emotion, and feel more included in natural conversation.

    🧠 It even lets Harrell modulate pitch to sing basic melodies and change meaning through intonation, like distinguishing a question from a statement or stressing different words in a sentence.

    🧠 The synthetic voice was trained on recordings of Harrell's real voice before ALS progressed, making the output feel deeply personal and familiar to him.

    🧠 While listener comprehension is around 60%, the system's ability to express tone, emotion, and even made-up words marks a major leap beyond monotone speech, and it could adapt to other languages, including tonal ones.

    #healthtech #ai
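    As a rough illustration of the continuous, framewise design described above, here is a minimal sketch of a 10 ms decode loop. The feature size, the stand-in decoder, and the pitch mapping are assumptions for demonstration; the real system is far more sophisticated.

```python
# Assumed design sketch (not the actual decoder): a continuous loop that
# decodes a fresh frame of neural features every 10 ms instead of waiting
# for a whole mimed sentence.
import random
import time

FRAME_MS = 10  # decode step from the description above

def read_neural_frame():
    """Stand-in for the electrode-array driver: one 10 ms feature frame."""
    return [random.random() for _ in range(256)]

def decode_frame(features):
    """Stand-in decoder: returns (speech_unit, pitch_hz) for this frame."""
    unit = "AA" if sum(features) > 128 else "silence"
    pitch_hz = 100 + 40 * features[0]  # decoded pitch carries intonation
    return unit, pitch_hz

start = time.monotonic()
for tick in range(5):  # a few frames for illustration
    unit, pitch = decode_frame(read_neural_frame())
    print(f"{tick * FRAME_MS:3d} ms: unit={unit:<7} pitch={pitch:5.1f} Hz")
    # Sleep until the next 10 ms boundary to stay in step with the input.
    next_deadline = start + (tick + 1) * FRAME_MS / 1000
    time.sleep(max(0.0, next_deadline - time.monotonic()))
```

    Decoding per frame rather than per sentence is what makes interruption and live intonation possible: output can begin, and change, while the user is still forming the utterance.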

  • View profile for Nicholas Nouri

    Founder | APAC Entrepreneur of the year | Author | AI Global talent awardee | Data Science Wizard

    130,947 followers

    Imagine not being able to speak for years due to a neurodegenerative condition, then discovering a way to communicate in your own voice again, all thanks to AI. That's exactly what happened with a new technology developed by UC Davis Health, which reads a patient's brain signals and translates them into speech with remarkable accuracy (about 97%), using a voice clone based on their old videos.

    How Does It Work?
    The system uses machine learning algorithms that analyze brain activity and match it to language patterns. Over time, it learns which brain signals correspond to specific phonemes (the building blocks of speech). After "training," the AI generates spoken words in a voice modeled on recordings of the patient before they lost their speech.

    Concerns About Authenticity
    Of course, some people worry about "fabrication," the possibility that the AI might accidentally generate words the patient never intended to say. Here are a few ways to address that:

    - Technical safeguards: the algorithms can incorporate confidence thresholds, so the system rejects outputs when it's not certain of the intended word (a minimal sketch follows this post).
    - User verification: individuals can use additional means (like subtle physical movements or eye-tracking) to confirm or correct AI-generated speech in real time.
    - Iterative refinement: with ongoing use, the model can refine its understanding of the user's specific brain signals, further reducing errors.

    Critics argue that no system is perfect, but proponents counter that being accurately heard, even most of the time, is far better than not being heard at all. This brings immense hope to those who've lost their voice due to conditions like ALS, stroke, or injury. Regaining the ability to communicate with loved ones, especially in a voice that feels like their own, can improve emotional well-being and quality of life.

    What's your take on AI-powered speech restoration? Do concerns about authenticity outweigh the profound benefits for people who've been silent for years?

    #innovation #technology #future #management #startups
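    Here is a minimal sketch of the confidence-threshold safeguard described in the list above: the decoder emits a word only when its probability clears a threshold, and otherwise defers to user verification. The threshold value, vocabulary, and scores are illustrative assumptions.

```python
# Illustrative sketch of a confidence-threshold safeguard: reject a decoded
# word unless the model is sufficiently certain. Threshold and vocabulary
# below are assumptions, not UC Davis Health's actual parameters.
import math

THRESHOLD = 0.85  # assumed operating point; tuned per user in practice

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def decode_word(logits, vocab):
    """Return the top word, or None to trigger user verification."""
    probs = softmax(logits)
    best = max(range(len(vocab)), key=lambda i: probs[i])
    if probs[best] < THRESHOLD:
        return None  # low confidence: ask the user to confirm instead
    return vocab[best]

vocab = ["hello", "help", "yes", "no"]
print(decode_word([4.0, 0.5, 0.2, 0.1], vocab))  # confident -> "hello"
print(decode_word([1.2, 1.1, 1.0, 0.9], vocab))  # uncertain -> None
```

    Returning None rather than guessing is the point of the safeguard: the cost of a withheld word (a retry or an eye-tracking confirmation) is much lower than the cost of putting words in someone's mouth.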

  • View profile for Bob Carver

    CEO Cybersecurity Boardroom ™ | CISSP, CISM, M.S. Top Cybersecurity Voice

    51,042 followers

    Enhanced Brain Implant Translates Stroke Survivor's Thoughts Into Nearly Instant Speech Using Artificial Intelligence

    The system harnesses technology similar to that of devices like Alexa and Siri, according to the researchers, and improves on a previous model.

    [Photo: Researchers connect stroke survivor Ann Johnson's brain implant to the experimental computer, which will allow her to speak by thinking words. Credit: Noah Berger]

    A brain implant that converts neuron activity into audible words has given a stroke survivor with severe paralysis almost instantaneous speech. Ann Johnson became paralyzed and lost the ability to speak after suffering a stroke in 2005, when she was 30 years old. Eighteen years later, she consented to being surgically fitted with an experimental, thin, brain-reading implant that connects to a computer, officially called a brain-computer interface (BCI). Researchers placed the implant on her motor cortex, the part of the brain that controls physical movement, and it tracked her brain waves as she thought the words she wanted to say.

    As detailed in a study published in the journal Nature Neuroscience, researchers used advances in artificial intelligence (A.I.) to improve the device's ability to quickly translate that brain activity into synthetic speech; it is now almost instantaneous. The technology "brings the same rapid speech decoding capacity of devices like Alexa and Siri to neuroprostheses," study co-author Gopala Anumanchipalli, a computer scientist at the University of California, Berkeley, says in a statement. Neuroprostheses are devices that can aid or replace lost bodily functions by connecting to the nervous system. "Using a similar type of algorithm, we found that we could decode neural data and, for the first time, enable near-synchronous voice streaming," he adds. "The result is more naturalistic, fluent speech synthesis."

    #AI #medicine #BrainComputerInterface #brainimplant #strokesurvivor #brainvoicesynthesis

  • View profile for Michael Lin

    Founder & CEO of Wonders.ai | AI, AR & VR Expert | Predictive Tech Pioneer | Board Director at Cheer Digiart | Anime Enthusiast | Passionate Innovator

    16,347 followers

    In a groundbreaking clinical trial at Massachusetts General Hospital, researchers are using artificial intelligence to help ALS patients regain their ability to speak. Led by Dr. Leigh Hochberg, the study is part of the BrainGate2 trial and involves surgically implanting small devices on the brain. These devices translate brain activity into speech, enabling patients like Casey Harrell, who lost his voice to ALS, to communicate again. Remarkably, the AI also recreates the patient's pre-ALS voice, restoring not only speech but also a sense of identity. This breakthrough offers renewed hope to ALS patients and their families, with researchers optimistic it could one day assist with movement as well. #TechNews #Innovation #AI

    Researchers use artificial intelligence to help ALS patient speak again

    cbsnews.com

  • View profile for Manoj Kumar

    Founder & CEO | AI-Driven Product Development & Digital Transformation | Fast, Scalable MVP Development at Applogiq

    21,066 followers

    AI Restores Voice After 18 Years with Assistive Tech

    Imagine being unable to speak for 18 years, trapped in silence, unable to express your thoughts.

    In August 2023, UCSF and UC Berkeley developed a BCI enabling a paralyzed woman to speak. By decoding neural signals associated with speech attempts, this innovation transformed her thoughts into synthesized speech. In August 2024, BrainGate unveiled a system translating brain signals to speech with 97% accuracy. This advancement marks a significant leap toward practical applications for individuals with speech impairments caused by ALS, strokes, or other conditions.

    Why This Matters
    - Empathy meets innovation: these breakthroughs restore dignity and connection for those without a voice.
    - Future of AI + neuroscience: accurate, responsive BCIs hold immense potential to revolutionize healthcare.
    - Inspiring possibilities: this technology could restore speech, mobility, and vision, and transform lives.

    I see these breakthroughs as a call to action for all of us: how can we leverage AI to transform lives and enhance human connection? Let's keep pushing boundaries, because the intersection of tech and empathy is where true innovation happens.

    What are your thoughts on the future of AI in healthcare and assistive technology? Let's discuss below!

    AppLogiQ | Soorya Narayanan

    #applogiq #artificialintelligence #braincomputerinterface #assistivetechnology #aiinnovation #healthcaretech #futureofhealthcare #innovationforchange #makedigitallives
