AI just gave a paralyzed woman her voice back.

In 2005, Ann, a 30-year-old teacher, suffered a brainstem stroke that left her unable to speak or move, trapped in her own body with locked-in syndrome for almost two decades. But now, thanks to a breakthrough brain-computer interface (BCI), she is communicating again through a digital avatar. This AI-powered technology decodes brain signals directly into speech and facial expressions.

Here's how the team at the University of California, San Francisco made this breakthrough:

1. Brain signal decoding
Researchers implanted a 253-electrode array on the surface of Ann's brain, capturing the signals that would normally control speech. This allowed her to communicate at nearly 80 words per minute, just by attempting to speak.

2. Recreating a natural voice
Researchers used a sophisticated AI to synthesize Ann's voice from a pre-stroke recording: her wedding day speech. This wasn't generic robotic speech generation; it brought back her real voice.

3. Bringing emotion and expression back
The team went further by combining speech synthesis with facial animation. A screen displayed Ann's digital avatar, translating her brain signals into facial expressions, allowing it to smile, frown, and express emotions along with her restored voice.

4. The road to independence
The next step in this research is a wireless version of the system that would free Ann (and others like her) from the need for a physical connection to a computer.

This is life-changing technology that opens new doors for millions of people living with severe paralysis, and a pivotal moment at the intersection of AI, neuroscience, and healthcare. But there is one concern: accessibility. While this technology is revolutionary, its high cost makes it inaccessible for many who need it most.

Could AI-powered speech technology be the future of healthcare for people with paralysis?

Video credit: UC San Francisco (UCSF) on YouTube.

#innovation #ai #technology #healthcare
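To make step 1 concrete, here is a minimal, hypothetical sketch of frame-wise decoding with greedy CTC-style collapsing. The random weights stand in for UCSF's trained deep network; the unit count, feature shapes, and all names below are assumptions for illustration, not details from the study.

```python
# Toy sketch of brain-signal-to-text decoding: a frame-wise classifier over
# phoneme-like units, collapsed CTC-style. Weights are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
N_ELECTRODES = 253        # matches the array size described in the post
N_UNITS = 41              # hypothetical: 40 phoneme-like units + 1 CTC blank
BLANK = 0

# Stand-in for neural features from the implant: (time_frames, channels).
features = rng.standard_normal((100, N_ELECTRODES))

# Stand-in for a trained decoder: a single linear layer.
W = rng.standard_normal((N_ELECTRODES, N_UNITS)) * 0.1

def greedy_ctc_decode(feats: np.ndarray) -> list[int]:
    """Pick the most likely unit per frame, collapse repeats, drop blanks."""
    best = (feats @ W).argmax(axis=1)
    out, prev = [], BLANK
    for u in best:
        if u != prev and u != BLANK:
            out.append(int(u))
        prev = u
    return out

phoneme_ids = greedy_ctc_decode(features)
print(f"decoded {len(phoneme_ids)} phoneme-like units from 100 frames")
```

In a real system a language model would then map these unit sequences to words; this sketch only shows the frame-to-unit stage.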
How Brain-Computer Interfaces Improve Communication
Summary
Brain-computer interfaces (BCIs) are innovative technologies that enable individuals to communicate directly through brain signals, offering a life-changing solution for those who are unable to speak or move due to conditions like paralysis or neurological disorders. By decoding neural activity into speech or other forms of communication, BCIs provide a pathway to restore connection and independence for individuals facing such challenges.
- Understand neural decoding: BCIs use implanted electrodes or non-invasive methods to capture and translate brain signals into words, allowing people to communicate their thoughts without speaking or typing.
- Recreate personal voice: Advanced AI technology can synthesize voices using pre-recorded samples, ensuring that the communication feels personal and authentic.
- Explore emotional expression: BCIs are now integrating facial animation with speech synthesis, enabling users to convey nuanced emotions alongside their words.
-
Researchers have successfully used a brain implant coupled with AI to enable a bilingual individual, unable to articulate words due to a stroke, to communicate in both English and Spanish. This development not only enhances our understanding of how the brain processes language but also opens up new possibilities for restoring speech to those unable to communicate verbally. Known as Pancho, the participant demonstrated the ability to form coherent sentences in both languages with impressive accuracy, thanks to the neural patterns recognized and translated by the AI system. The findings suggest that different languages may not occupy distinct areas of the brain as previously thought, hinting at a more integrated neural basis for multilingualism. This technology represents a significant leap forward in neuroprosthetics, offering hope for personalized communication restoration in multilingual individuals.

Key Insights:
- Dual-language decoding 🗣️ - The AI system can interpret and translate neural patterns into both Spanish and English, adjusting in real time.
- High accuracy 🎯 - Achieved 88% accuracy in distinguishing between languages and 75% in decoding full sentences.
- Unified brain activity 🧠 - Challenges prior assumptions with findings that both languages activate similar brain areas.
- Future applications 🔍 - Potential expansion to other languages with varying linguistic structures, enhancing universal applicability.
- Enhanced connection 💬 - Focuses not just on word replacement but on restoring deep personal connections through communication.

https://buff.ly/3V8SiXe?
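A toy sketch of the dual-language routing idea described above: first classify which language the neural activity corresponds to, then decode with that language's model. The channel count, tiny vocabularies, and linear models are invented stand-ins; the study's actual architecture is not reproduced here.

```python
# Hypothetical language-routing decoder: language classifier -> per-language
# word decoder. Both models are random stand-ins, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
CHANNELS = 128                        # hypothetical, not the implant's size
VOCABS = {"english": ["hello", "water", "yes"],
          "spanish": ["hola", "agua", "sí"]}

lang_W = rng.standard_normal((CHANNELS, 2)) * 0.1     # english vs spanish
word_W = {lang: rng.standard_normal((CHANNELS, len(v))) * 0.1
          for lang, v in VOCABS.items()}

def decode(features: np.ndarray) -> tuple[str, str]:
    """Route one feature vector to a language, then to a word in it."""
    lang = ("english", "spanish")[int((features @ lang_W).argmax())]
    word = VOCABS[lang][int((features @ word_W[lang]).argmax())]
    return lang, word

print(decode(rng.standard_normal(CHANNELS)))
```

The 88% language-discrimination figure in the post corresponds to the first stage of this routing; the 75% sentence figure to the second.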
-
Imagine regaining your voice, not through vocal cords, but through the power of thought. This is not a scene from a movie but a reality for Casey Harrell, a 45-year-old man who lost his ability to speak due to ALS. Thanks to the work of researchers at the University of California, Davis, Harrell communicates again, his thoughts converted to speech through a brain-computer interface (BCI).

Communication using neurochip technology:
- High precision: The implanted neurochip translates Harrell's attempted speech into words with an impressive 97.5% accuracy. This level of precision offers a new voice to those silenced by various neurological conditions.
- Personalized voice synthesis: By training the neural network with recordings of Harrell's voice from before his condition advanced, the resulting speech retains the unique tonal qualities of his voice. This personal touch makes the communication not only clear but also deeply personal.
- Focused thought recognition: The system is designed to interpret brain signals linked specifically to speech attempts, not the broader stream of consciousness. This ensures that only intentional speech is vocalized, mirroring the natural process of speaking.

This breakthrough highlights a step forward in neurotechnology, offering hope and a new means of connection for individuals facing similar challenges. It underscores the potential of integrating advanced computing with human biology to overcome physical limitations and enhance lives. It is a gateway to restoring independence and emotional connection for many.

How do you see this technology evolving?

#innovation #technology #future #management #startups
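A minimal sketch of the "focused thought recognition" idea above: a gate that only passes neural frames to the speech decoder when a speech-attempt detector fires, so stray inner monologue is never vocalized. The naive energy threshold here is purely illustrative; real systems train a classifier for this.

```python
# Hypothetical speech-attempt gate in front of a decoder. The threshold and
# channel count are invented; only deliberate-attempt frames pass through.
import numpy as np

rng = np.random.default_rng(2)
ATTEMPT_THRESHOLD = 1.2     # hypothetical tuned value

def is_speech_attempt(frame: np.ndarray) -> bool:
    """Stand-in detector: mean feature energy above a threshold."""
    return float(np.mean(frame ** 2)) > ATTEMPT_THRESHOLD

stream = rng.standard_normal((50, 64))      # 50 frames, 64 channels (toy)
gated = [f for f in stream if is_speech_attempt(f)]
print(f"{len(gated)}/{len(stream)} frames forwarded to the speech decoder")
```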
-
Turning thoughts into speech. In real time. No typing. No voice. Just intent. 👇

🧠 A new study in Nature Neuroscience introduces a significant advance in brain-computer interface research. Researchers at the University of California, San Francisco and the University of California, Berkeley developed a real-time speech neuroprosthesis that enables a person with severe paralysis and anarthria to produce streamed, intelligible speech directly from brain signals, without vocalizing. Using high-density electrocorticography (ECoG) recordings from the speech sensorimotor cortex, the system decodes intended speech in 80-ms increments, allowing for low-latency, continuous communication. A personalized synthesizer also recreated the participant's pre-injury voice, preserving identity in speech.

🔹 Reached up to 90 words per minute
🔹 Latency of 1–2 seconds, significantly faster than existing assistive tech
🔹 Generalized across other silent-speech interfaces, including intracortical recordings and EMG

This work highlights the potential for restoring more natural conversation in individuals who have lost the ability to speak.

Full paper: "A streaming brain-to-voice neuroprosthesis to restore naturalistic communication"
🔗 https://lnkd.in/d6tNwQE3

#innovation #health #medicine #brain
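To show why 80-ms increments matter, here is a toy streaming loop that emits output every chunk instead of waiting for a full sentence, which is what keeps latency low. The chunking arithmetic is the only real logic; the decoder is a stub, and the 200 Hz feature rate is an assumption, not a number from the paper.

```python
# Toy streaming decode loop: buffer incoming neural frames and emit a
# synthesized speech increment every 80 ms worth of frames.
import numpy as np

rng = np.random.default_rng(3)
FEATURE_HZ = 200                      # assumed feature rate (illustrative)
CHUNK = int(0.080 * FEATURE_HZ)       # 80 ms -> 16 frames per increment

def decode_chunk(chunk: np.ndarray) -> str:
    """Stand-in for the trained brain-to-voice model."""
    return "<speech-unit>"

buffer, emitted = [], 0
for frame in rng.standard_normal((200, 253)):   # ~1 s of incoming frames
    buffer.append(frame)
    if len(buffer) == CHUNK:                    # emit every 80 ms
        decode_chunk(np.stack(buffer))
        buffer.clear()
        emitted += 1
print(f"emitted {emitted} speech increments for ~1 s of neural data")
```

A sentence-level decoder would emit nothing until the utterance ends; this incremental design is what brings perceived latency down to the 1–2 s range reported above.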
-
NEW: Brain implant helps voiceless ALS patient communicate.

A milestone in restoring the ability to communicate to people who have lost it:
- more than three times as fast as the previous record,
- beginning to approach natural conversation speed of ~160 words/min.

Study participant: a 68-year-old woman with amyotrophic lateral sclerosis (ALS), a degenerative disease that can eventually cause paralysis.

Study published in Nature:
- Two brain implants with ~120 electrodes monitored neural activity.
- An algorithm was trained over four months to recognize her intended words, then
- combined with a language model that predicts words based on context.
- Using a 125,000-word vocabulary,
- the system decoded attempted speech at 62 words per minute,
- with a 24 percent word-error rate (a sketch of how that metric is computed follows this post),
- accurate enough to generally get the gist of a sentence.

These results show a feasible path forward for restoring rapid communication to people with paralysis who can no longer speak.

Nature | August 23, 2023

Francis R. Willett, Erin Kunz, Chaofei Fan, Donald Avansino, Guy Wilson, Eun Young Choi, Foram Kamdar, Matthew Glasser, Leigh Hochberg, Shaul Druckmann, Krishna V. Shenoy, Jaimie Henderson

Howard Hughes Medical Institute; Wu Tsai Neurosciences Institute, Stanford University; Brown University School of Engineering; Carney Institute for Brain Science, Brown University; Mass General Hospital, Harvard Medical School; Washington University in St. Louis

#innovation #technology #future #healthcare #medicine #health #management #startups #clinicalresearch #medtech #healthtech #scienceandtechnology #biotechnology #biotech #engineering #ai #research #science #rehabilitation #stroke #tbi #collaboration #electricalengineering #electrical #neuralnetwork #neuromodulation #personalizedmedicine #neurorehabilitation #braincomputerinterface #artificialintelligence #fda #disability #linkedin #news #precisionmedicine #communication #stanford #harvard #mgh #slp #neuroscience #als #brainstimulation
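The 24% figure above is a word error rate (WER): the word-level edit distance between the decoded sentence and the reference, divided by the reference length. Here is a standard implementation of the metric, independent of the study's own scoring code:

```python
# Word error rate via word-level Levenshtein distance.
def wer(reference: str, hypothesis: str) -> float:
    r, h = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(r)][len(h)] / max(len(r), 1)

# One substitution + one deletion over a 5-word reference -> WER 0.4
print(wer("i want some water please", "i want more water"))
```

At 24% WER, roughly one word in four is wrong, which is why the post describes the output as conveying the gist of a sentence rather than a perfect transcript.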
-
Last week, we explored how robots might move, feel, and understand like humans. Now, we flip the lens and tap into one of the most exciting frontiers in human augmentation: Brain-Computer Interfaces (BCIs).

BCIs connect the brain directly to machines, translating neural activity into signals that control computers, devices, or even AI agents. With the rise of agentic AI, a new possibility is emerging: what if your intentions could become instructions, from brainwaves to prompts, directing AI with intent alone? The most intuitive interface isn't voice; it's thought. A Thought-to-Agent Interface (T2A) links your brain activity to an AI agent in real time, translating mental focus, intention, or emotional state into prompts, actions, or decisions. (A toy sketch of such a loop follows this post.)

These are some use-case examples...
🧠 In work: You're in deep focus. You imagine a slide; your AI agent starts drafting it. You think of a person; it pulls up your last conversation.
🧠 In accessibility: For someone unable to speak or type, the interface interprets intent from brain signals and helps control devices, compose messages, or navigate systems.
🧠 In creativity: A designer imagines a shape, a scene, or a melody, and the AI agent renders variations in real time, refining the output through guided intent.

These are some current research projects...
📚 Meta AI's brain-to-text decoding: Decodes full sentences from non-invasive brain activity with up to 80% character accuracy, bridging neural intent to digital language. https://lnkd.in/gTEJpa4e
📚 UC Berkeley's brain-to-voice neuroprosthesis: Translates brain signals into audible speech, restoring naturalistic communication for people with speech loss. https://lnkd.in/g_D3Xeup
📚 Caltech's mind-to-text interface: Achieves 79% accuracy in translating imagined internal speech into real-time text, enabling seamless brain-to-device communication. https://lnkd.in/gEuVKreq

These are some startups to watch...
🚀 Neurable: EEG-based wearables decoding cognitive load and focus in real time. https://www.neurable.com/
🚀 OpenBCI: Makers of Galea, a headset combining EEG, EMG, eye tracking, and skin conductance for immersive neural interfacing. https://lnkd.in/girt4PAW
🚀 Cognixion: Brain-powered communication integrated with AR and speech synthesis for non-verbal users. https://www.cognixion.com/
🚀 Paradromics: High-bandwidth BCI for translating neural activity into speech or system commands for those with severe impairments. https://lnkd.in/giepGKH4

What is a likely time horizon...
1–2 years: Wearable EEG interfaces paired with AI for narrow tasks: adaptive UI, hands-free control, attention-based interaction.
3–5 years: Thought-to-agent pipelines for work, accessibility, and creative tools, personalized to individual brain patterns and cognitive signatures.

The future isn't just AI that understands your prompts. It's AI that understands you as soon as you think.

Next up: Multimodal AI Sensory Fusion ("Glass Whisperer")
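To make the T2A idea concrete, here is a toy loop under heavy assumptions: a trained intent classifier already exists, and the intent labels, prompt templates, and 32-channel feature size are all invented for illustration. No vendor BCI or agent API is used.

```python
# Hypothetical thought-to-agent dispatch: classify a neural feature vector
# into a discrete intent, then render that intent as a prompt for an agent.
import numpy as np

rng = np.random.default_rng(4)
INTENTS = ["draft_slide", "recall_conversation", "render_sketch"]
PROMPTS = {
    "draft_slide": "Draft a slide about {topic}.",
    "recall_conversation": "Show my last conversation with {topic}.",
    "render_sketch": "Render variations of {topic}.",
}
W = rng.standard_normal((32, len(INTENTS))) * 0.1   # stand-in classifier

def thought_to_prompt(eeg_features: np.ndarray, topic: str) -> str:
    """Map one window of EEG-like features to an agent-ready prompt."""
    intent = INTENTS[int((eeg_features @ W).argmax())]
    return PROMPTS[intent].format(topic=topic)

print(thought_to_prompt(rng.standard_normal(32), "Q3 roadmap"))
```

The hard part, glossed over here, is the classifier itself: current non-invasive decoders resolve coarse states like focus or imagined commands, not arbitrary free-form intent.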
-
With AI news often focused on the potential dangers of the technology, here's a reminder that AI will transform many people's lives for the better.

Ann, a stroke survivor, is now able to "speak" again, not with her voice, but through a digital avatar that shows her facial expressions and speaks her words. This breakthrough was made possible by scientists at UC San Francisco and UC Berkeley, led by Dr. Edward Chang. They created a brain-computer interface (BCI) that reads signals from Ann's brain and turns them into real-time speech, text, and expressions.

👉 Here's how it works:
- Tiny electrodes on the brain pick up signals
- AI translates those signals into words and emotions
- A digital avatar shows her expressions and speaks for her
(A toy sketch of the avatar stage follows this post.)

This is the first time brain signals have been used to control both voice and facial expressions, and it could one day help thousands of people who've lost the ability to speak.

#AIforGood #Neurotech #StrokeRecovery #UCSF #BrainComputerInterface #DigitalHealth #Innovation #HealthcareAI
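A minimal sketch of the avatar stage referenced above: decoded neural features drive blendshape weights on a digital face rig. The blendshape names, channel count, and linear mapping are hypothetical; the team's actual animation pipeline is not reproduced here.

```python
# Hypothetical mapping from one frame of neural features to blendshape
# weights (0..1) that an avatar renderer could consume each frame.
import numpy as np

rng = np.random.default_rng(5)
BLENDSHAPES = ["jaw_open", "smile", "frown", "brow_raise"]
W = rng.standard_normal((64, len(BLENDSHAPES))) * 0.1   # stand-in decoder

def expression_weights(features: np.ndarray) -> dict[str, float]:
    """Map one 64-channel feature frame to per-blendshape weights."""
    raw = features @ W
    weights = 1 / (1 + np.exp(-raw))     # sigmoid squash to [0, 1]
    return dict(zip(BLENDSHAPES, weights.round(2)))

print(expression_weights(rng.standard_normal(64)))
```

Running this mapping on every decoded frame, in sync with the synthesized audio, is what lets the avatar smile and frown along with the restored voice.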