How Brain-Computer Interfaces Assist Individuals With Disabilities


Summary

Brain-computer interfaces (BCIs) are groundbreaking technologies that enable individuals with severe disabilities, such as paralysis or locked-in syndrome, to communicate by converting their brain signals into speech or other outputs. Recent advancements in AI-powered BCIs are restoring voices and self-expression for people who have lost the ability to speak due to conditions like strokes or ALS.

  • Understand the technology: BCIs use implants or sensors to capture brain signals, which are then decoded by AI algorithms to generate speech, text, or even virtual avatars for communication.
  • Recognize emotional impact: These systems don’t just restore communication but also bring back personal identity by recreating voices and expressions specific to the individual.
  • Consider accessibility: While the technology is revolutionary, its high cost highlights the need for future efforts to make it more affordable and widely available.
  • Luke Yun

    building AI computer fixer | AI Researcher @ Harvard Medical School, Oxford

    32,813 followers

    UC Berkeley and UCSF just brought real-time speech back to someone who couldn’t speak for 18 years (insane!). For people with paralysis and anarthria, the delay and effort of current AAC tools can make natural conversation nearly impossible. This new AI-driven neuroprosthesis streams fluent, personalized speech directly from brain signals in real time, with no vocalization required.

    1. Restored speech in a participant using 253-channel ECoG, 18 years after a brainstem stroke and complete speech loss.
    2. Trained deep learning decoders to synthesize audio and text every 80 ms from silent speech attempts, with no vocal sound needed.
    3. Streamed speech at 47.5 words per minute with just 1.12 s latency, about 8× faster than prior state-of-the-art neuroprostheses.
    4. Matched the participant’s original voice using a pre-injury recording, bringing back not just words but vocal identity.

    The bimodal decoder architecture they used was cool, and it’s interesting how they achieved low-latency, synchronized output: a shared neural encoder feeds separate joiners and language models for the acoustic-speech units and the text (see the sketch after this post). Other tidbits: convolutional layers with unidirectional GRUs, and LSTM-based language models.

    Absolutely love seeing AI used in practical ways to bring back joy and hope to people who are paralyzed!!

    Here's the awesome work: https://lnkd.in/ghqX5EB2

    Congrats to Kaylo Littlejohn, Cheol Jun Cho, Jessie Liu, Edward Chang, Gopala Krishna Anumanchipalli, and co!

    I post my takes on the latest developments in health AI – connect with me to stay updated! Also, check out my health AI blog here: https://lnkd.in/g3nrQFxW
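
    Below is a minimal, illustrative PyTorch sketch of that layout: a shared causal encoder (convolution plus a unidirectional GRU) over ECoG features feeding two separate heads, one for acoustic-speech units and one for text. All sizes, names, and the toy usage are assumptions for illustration, not the authors' actual code.

    ```python
    import torch
    import torch.nn as nn

    class BimodalStreamingDecoder(nn.Module):
        """Shared neural encoder feeding two separate output heads ("joiners")."""

        def __init__(self, n_channels=253, hidden=256, n_units=100, n_chars=50):
            super().__init__()
            self.conv = nn.Conv1d(n_channels, hidden, kernel_size=5, padding=4)
            self.gru = nn.GRU(hidden, hidden, batch_first=True)  # unidirectional
            self.unit_head = nn.Linear(hidden, n_units)  # acoustic-speech units
            self.text_head = nn.Linear(hidden, n_chars)  # characters / subwords

        def forward(self, ecog):                 # ecog: (batch, time, channels)
            x = self.conv(ecog.transpose(1, 2))  # convolve over time
            # Trim the padded tail so frame t only sees frames <= t (causal),
            # which is what permits streaming output at every step.
            x = x[..., : ecog.size(1)].transpose(1, 2)
            h, _ = self.gru(x)                   # one hidden state per step
            return self.unit_head(h), self.text_head(h)

    # Ten simulated frames of 253-channel neural features:
    model = BimodalStreamingDecoder()
    unit_logits, text_logits = model(torch.randn(1, 10, 253))
    print(unit_logits.shape, text_logits.shape)  # (1, 10, 100) (1, 10, 50)
    ```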

  • Vineet Agrawal

    Helping Early Healthtech Startups Raise $1-3M Funding | Award Winning Serial Entrepreneur | Best-Selling Author

    50,131 followers

    AI just gave a paralyzed woman her voice back.

    In 2005, Ann, a 30-year-old teacher, suffered a stroke that left her unable to speak or move, trapped in her own body with locked-in syndrome for almost two decades. But now, thanks to a breakthrough brain-computer interface (BCI), she is communicating again, through a digital avatar. This new AI-powered technology decodes brain signals directly into speech and facial expressions.

    Here’s how the team at the University of California, San Francisco made this breakthrough (a simplified sketch of the pipeline follows after this post):

    1. Brain signal decoding. Researchers implanted a 253-channel electrocorticography (ECoG) array on the surface of Ann’s brain, capturing the signals that would normally control speech. This allowed her to communicate at 80 words per minute, just by thinking the words.

    2. Recreating a natural voice. Researchers used a sophisticated AI that synthesized Ann’s voice from a pre-stroke recording, her wedding-day speech. This wasn’t just robotic speech generation; it brought back her real voice.

    3. Bringing emotion and expression back. The team went further by combining speech synthesis with facial animation. A screen displayed Ann’s digital avatar, translating her brain signals into facial expressions, allowing it to smile, frown, and express emotion along with her restored voice.

    4. The road to independence. The next step in this research is to develop a wireless version of the system that would free Ann (and others like her) from the need for physical connections to a computer.

    This is life-changing tech that opens new doors for millions of people living with severe paralysis. It is a pivotal moment at the intersection of AI, neuroscience, and healthcare.

    But there's one concern: accessibility. While this technology is revolutionary, its high cost makes it inaccessible for many who need it most.

    Could AI-powered speech technology be the future of healthcare for those with paralysis?

    Video credit: UC San Francisco (UCSF) on YouTube.

    #innovation #ai #technology #healthcare
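
    The post’s stages map onto a simple pipeline. Below is a deliberately simplified Python sketch of that flow: decode brain signals to text, synthesize speech conditioned on the patient’s own pre-stroke recording, and drive a digital avatar. Every function here is a stub standing in for a trained model; none of these names come from UCSF’s actual system.

    ```python
    from dataclasses import dataclass

    @dataclass
    class AvatarOutput:
        text: str        # decoded sentence
        audio: bytes     # personalized synthetic speech
        expression: str  # e.g. "smile", "neutral", "frown"

    def decode_to_text(ecog_window) -> str:
        """Stage 1: map a window of cortical activity to decoded words."""
        return "it is wonderful to speak again"   # stub for the trained decoder

    def synthesize_voice(text: str, pre_stroke_recording: bytes) -> bytes:
        """Stage 2: text-to-speech conditioned on the speaker's own voice."""
        return b"\x00" * 16000                    # stub: 1 s of silent audio

    def pick_expression(ecog_window) -> str:
        """Stage 3: derive a facial expression from the same neural window."""
        return "smile"                            # stub for the avatar model

    def step(ecog_window, pre_stroke_recording: bytes) -> AvatarOutput:
        text = decode_to_text(ecog_window)
        return AvatarOutput(text,
                            synthesize_voice(text, pre_stroke_recording),
                            pick_expression(ecog_window))

    print(step([0.0] * 253, b"").text)
    ```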

  • Katie Dreke

    Founder & CSO | former: Nike, Droga5, adidas, W+K, IDEO

    14,382 followers

    WHY AI -- Four years ago, Casey Harrell sang his last bedtime nursery rhyme to his daughter. Now, in an experiment that surpassed expectations, implants in his brain were able to recognize words he tried to speak, and A.I. helped produce sounds that came close to matching his true voice.

    “The key innovation was putting more arrays, with very precise targeting, into the speechiest parts of the brain we can find,” said Sergey Stavisky, a neuroscientist at the University of California, Davis, who helped lead the study.

    By day two of their initial working sessions together, the machine was ranging across an available vocabulary of 125,000 words with 90% accuracy and, for the first time, producing sentences of Mr. Harrell’s own making. The device spoke them in a voice remarkably like his own, too: using podcast interviews and other old recordings, the researchers had created a deepfake of Mr. Harrell’s pre-A.L.S. voice.

    But beyond the tech itself -- what has changed for Casey?

    • The new AI persona awakened parts of him that had long lain dormant. He started small talk and banter again. Just as speaking a foreign language can let people express otherwise buried parts of their personalities, his decoder gave him back old elements of himself, even if they had become slightly changed in transit.

    • He could now tell Aya, his 5-year-old daughter, that he loved her. She, in turn, shared more with him, knowing that she would understand her father’s responses.

    • Visiting health workers who once seemed to take his impaired speech to mean he was stupid or hard of hearing (he is neither) now speak at normal volumes and touch him more carefully.

    • He could reach back out to old friends who had drifted away, and who he worried were too ashamed to get back in touch. He could “connect with them in a way that meets them where they are at” rather than on wordless terrain.

    • Casey describes working more productively and independently since the surgery, and says it is a source of pride.

    Give AI the right job, and the right things can happen. 🚀

  • Nicholas Nouri

    Founder | APAC Entrepreneur of the year | Author | AI Global talent awardee | Data Science Wizard

    130,947 followers

    For decades, Ann lived with “locked-in syndrome,” the aftermath of a stroke she experienced at age 30 while on a volleyball court. Although her mind remained alert, she couldn’t speak or express herself. A team of researchers stepped in and helped bridge that gap between her inner thoughts and the outside world.

    How It Works (a sketch of the signal-measurement step follows after this post):

    - Electrocorticography grid: Surgeons implanted a grid on the surface of Ann’s brain to measure electrical activity directly from the cortex.
    - BCI + AI translation: The signals recorded by the implant travel to a pedestal on her skull, and specialized algorithms convert these brain signals into words or sentences.
    - Digital avatar: To personalize the experience, her “voice” is expressed through an avatar that can mirror her identity, giving not just speech but also a sense of presence.

    While still experimental, successes like Ann’s point toward a world where even severe forms of paralysis needn’t mean a lifetime without self-expression. Every human story, however silent, can still be told through the right blend of empathy, innovation, and technology.

    Where else might we see BCI and AI break barriers next?

    #innovation #technology #future #management #startups
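
    As a concrete illustration of the “measure electrical activity directly from the cortex” step: speech BCIs commonly reduce raw ECoG to band-power features (often the high-gamma band, roughly 70-150 Hz) before any decoding. Here is a minimal NumPy/SciPy sketch of that feature extraction; the sampling rate, band edges, and 80 ms framing are illustrative assumptions, not details of Ann’s system.

    ```python
    import numpy as np
    from scipy.signal import butter, hilbert, sosfiltfilt

    FS = 1000       # assumed sampling rate in Hz
    FRAME_MS = 80   # assumed feature frame length

    def high_gamma_power(ecog: np.ndarray) -> np.ndarray:
        """(channels, samples) raw ECoG -> (channels, frames) band power."""
        sos = butter(4, [70, 150], btype="bandpass", fs=FS, output="sos")
        filtered = sosfiltfilt(sos, ecog, axis=-1)     # isolate high gamma
        envelope = np.abs(hilbert(filtered, axis=-1))  # analytic amplitude
        hop = FS * FRAME_MS // 1000                    # samples per frame
        n_frames = ecog.shape[-1] // hop
        usable = envelope[:, : n_frames * hop]
        return usable.reshape(ecog.shape[0], n_frames, hop).mean(axis=-1)

    ecog = np.random.randn(253, 2 * FS)    # 2 s of simulated 253-channel ECoG
    print(high_gamma_power(ecog).shape)    # (253, 25)
    ```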

  • Yelena Bogdanova, PhD, FACRM

    Professor, Boston University | Clinical Neuropsychologist & Neuroscientist | Health Care Innovation & Neurorehabilitation Expert | Speaker | Author | Board Member | ACRM Fellow

    9,607 followers

    NEW Brain Implant Helps Voiceless ALS Patient Communicate

    A milestone in restoring the ability to communicate to people who have lost it:
    - more than three times as fast as the previous record,
    - beginning to approach the natural conversation speed of ~160 words/min.

    Study participant: a 68-year-old woman with amyotrophic lateral sclerosis (ALS), a degenerative disease that can eventually cause #paralysis.

    Study published in Nature:
    - Two #brain #implants with ~120 electrodes to monitor #neural activity.
    - Trained an algorithm over four months to recognize her intended words.
    - Combined that with a #language model that predicts words based on the context.
    - Using a 125,000-word vocabulary, the system decoded attempted speech at a rate of 62 words per minute, with a 24 percent word-error rate (see the sketch after this post for how that metric is computed).
    - Accurate enough to generally get the gist of a sentence.

    These results show a feasible path forward for restoring rapid communication to people with paralysis who can no longer speak.

    Nature | August 23, 2023
    -------------------------
    Francis R. Willett, Erin Kunz, Chaofei Fan, Donald Avansino, Guy Wilson, Eun Young Choi, Foram Kamdar, Matthew Glasser, Leigh Hochberg, Shaul Druckmann, Krishna V. Shenoy, Jaimie Henderson
    Howard Hughes Medical Institute; Wu Tsai Neurosciences Institute, Stanford University; Brown University School of Engineering; Carney Institute for Brain Science, Brown University; Mass General Hospital, Harvard Medical School; Washington University in St. Louis

    #innovation #technology #future #healthcare #medicine #health #management #startups #clinicalresearch #medtech #healthtech #scienceandtechnology #biotechnology #biotech #engineering #ai #research #science #rehabilitation #stroke #tbi #collaboration #electricalengineering #electrical #neuralnetwork #neuromodulation #personalizedmedicine #neurorehabilitation #braincomputerinterface #artificialintelligence #fda #disability #linkedin #news #precisionmedicine #communication #stanford #harvard #mgh #slp #neuroscience #als #brainstimulation
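
    For context on those numbers: word-error rate is the word-level edit distance (substitutions, insertions, and deletions) between the decoded sentence and the intended one, divided by the intended sentence’s length, so 24 percent means roughly one word in four is wrong. A minimal, self-contained implementation:

    ```python
    def wer(reference: str, hypothesis: str) -> float:
        """Word error rate via dynamic-programming edit distance."""
        ref, hyp = reference.split(), hypothesis.split()
        d = list(range(len(hyp) + 1))          # distances for empty reference
        for i, r in enumerate(ref, 1):
            prev, d[0] = d[0], i               # prev holds the diagonal cell
            for j, h in enumerate(hyp, 1):
                prev, d[j] = d[j], min(d[j] + 1,         # deletion
                                       d[j - 1] + 1,     # insertion
                                       prev + (r != h))  # substitution/match
        return d[-1] / len(ref)

    # Two word swaps in five words -> 40 percent word-error rate:
    print(wer("the gist of a sentence", "a gist of the sentence"))  # 0.4
    ```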

  • Michael Lin

    Founder & CEO of Wonders.ai | AI, AR & VR Expert | Predictive Tech Pioneer | Board Director at Cheer Digiart | Anime Enthusiast | Passionate Innovator

    16,347 followers

    In a groundbreaking clinical trial at Massachusetts General Hospital, researchers are using artificial intelligence to help ALS patients regain their ability to speak. Led by Dr. Leigh Hochberg, the study is part of the BrainGate2 trial, involving the surgical implantation of small devices on the brain. These devices translate brain activity into speech, enabling patients like Casey Harrell, who lost his voice to ALS, to communicate again. Remarkably, the AI also recreates the patient's pre-ALS voice, restoring not only speech but also a sense of identity. This breakthrough offers renewed hope to ALS patients and their families, with researchers optimistic it could one day assist with movement as well. #TechNews #Innovation #AI

    Researchers use artificial intelligence to help ALS patient speak again

    cbsnews.com

  • Bob Carver

    CEO Cybersecurity Boardroom ™ | CISSP, CISM, M.S. Top Cybersecurity Voice

    51,042 followers

    Enhanced Brain Implant Translates Stroke Survivor’s Thoughts Into Nearly Instant Speech Using Artificial Intelligence

    The system harnesses technology similar to that of devices like Alexa and Siri, according to the researchers, and improves on a previous model.

    [Image: Researchers connect stroke survivor Ann Johnson's brain implant to the experimental computer, which will allow her to speak by thinking words. Photo: Noah Berger]

    A brain implant that converts neuron activity into audible words has given a stroke survivor with severe paralysis almost instantaneous speech.

    Ann Johnson became paralyzed and lost the ability to speak after suffering a stroke in 2005, when she was 30 years old. Eighteen years later, she consented to being surgically fitted with an experimental, thin, brain-reading implant that connects to a computer, officially called a brain-computer interface (BCI). Researchers placed the implant on her motor cortex, the part of the brain that controls physical movement, and it tracked her brain waves as she thought the words she wanted to say.

    As detailed in a study published Monday in the journal Nature Neuroscience, researchers used advances in artificial intelligence (A.I.) to improve the device’s ability to quickly translate that brain activity into synthetic speech; now, it’s almost instantaneous.

    The technology “brings the same rapid speech decoding capacity of devices like Alexa and Siri to neuroprostheses,” study co-author Gopala Anumanchipalli, a computer scientist at the University of California, Berkeley, says in a statement. Neuroprostheses are devices that can aid or replace lost bodily functions by connecting to the nervous system. “Using a similar type of algorithm, we found that we could decode neural data and, for the first time, enable near-synchronous voice streaming,” he adds. “The result is more naturalistic, fluent speech synthesis.” (A toy sketch of that streaming control flow follows after this post.)

    #AI #medicine #BrainComputerInterface #brainimplant #strokesurvivor #brainvoicesynthesis
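
    “Near-synchronous voice streaming” means audio is emitted while neural data is still arriving, rather than after a whole sentence has been decoded. Below is a toy Python sketch of that control flow only; decode_chunk and vocode are hypothetical stubs standing in for the trained decoder and vocoder, and the 80 ms step mirrors the decoding interval mentioned in the first post above.

    ```python
    CHUNK_MS = 80  # assumed decoding step; playback can start after ~1 chunk

    def decode_chunk(neural_chunk) -> str:
        return "unit"            # stub: acoustic units for one chunk

    def vocode(units: str) -> bytes:
        return b"\x00" * 1280    # stub: one chunk of synthesized audio

    def stream_speech(neural_source):
        """Yield audio chunk by chunk so speech starts almost immediately."""
        for chunk in neural_source:            # a new chunk every CHUNK_MS
            yield vocode(decode_chunk(chunk))  # emitted ~one chunk later,
                                               # not at the end of the sentence

    for audio in stream_speech(range(5)):      # five simulated neural chunks
        print(f"play {len(audio)} bytes")
    ```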
