Researchers have used a brain implant coupled with AI to enable a bilingual man, unable to articulate words after a stroke, to communicate in both English and Spanish. The work deepens our understanding of how the brain processes language and opens new possibilities for restoring speech to people who cannot communicate verbally. The participant, known as Pancho, formed coherent sentences in both languages with impressive accuracy, thanks to the neural patterns recognized and translated by the AI system. The findings suggest that different languages may not occupy distinct areas of the brain, as previously thought, hinting at a more integrated neural basis for multilingualism. This technology represents a significant leap forward in neuroprosthetics, offering hope for personalized communication restoration in multilingual individuals.

Key Insights:
- Dual Language Decoding 🗣️ - The AI system interprets and translates neural patterns into both Spanish and English, adjusting in real time.
- High Accuracy 🎯 - Achieved 88% accuracy in distinguishing between languages and 75% in decoding full sentences.
- Unified Brain Activity 🧠 - Both languages activate similar brain areas, challenging prior assumptions.
- Future Applications 🔍 - Potential expansion to other languages with varying linguistic structures, enhancing universal applicability.
- Enhanced Connection 💬 - Focuses not just on word replacement but on restoring deep personal connections through communication.

https://buff.ly/3V8SiXe?
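The dual-language decoding step can be pictured as a classifier over neural feature vectors: the system first decides which language is being attempted, then decodes within that language. A minimal sketch in plain Python, where the feature vectors, centroids, and nearest-centroid rule are all invented for illustration and are not the study's actual model:

```python
# Toy sketch: classify which language a neural feature vector belongs to,
# using a nearest-centroid rule. All numbers are made up for illustration.

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def sq_dist(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify_language(features, centroids):
    """Return the language whose centroid is closest to the features."""
    return min(centroids, key=lambda lang: sq_dist(features, centroids[lang]))

# Hypothetical per-language training features (e.g. averaged band power).
training = {
    "english": [[0.9, 0.1, 0.4], [1.0, 0.2, 0.5]],
    "spanish": [[0.2, 0.8, 0.6], [0.1, 0.9, 0.7]],
}
centroids = {lang: centroid(vs) for lang, vs in training.items()}

print(classify_language([0.95, 0.15, 0.45], centroids))  # prints "english"
```

In the real system the classifier and the within-language decoder are far richer neural networks, but the two-stage shape (identify language, then decode words) is the idea the 88%/75% figures refer to.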
How to Transform Communication Using Neurotechnology
Explore top LinkedIn content from expert professionals.
Summary
Neurotechnology is revolutionizing communication by enabling individuals with speech impairments to convey thoughts through brain-computer interfaces (BCIs). By decoding neural signals, these innovative systems translate brain activity into speech or text, restoring the ability to connect for those affected by conditions like paralysis, ALS, or strokes.
- Utilize brain-computer interfaces: These devices decode brain signals into words, enabling real-time communication for individuals who are unable to speak.
- Personalize voice restoration: Advanced AI systems can recreate a person’s natural voice by analyzing pre-injury recordings, making communication more personal and relatable.
- Expand language capabilities: Cutting-edge technologies allow for multilingual communication and the decoding of new language structures, offering broader accessibility.
-
Imagine regaining your voice, not through vocal cords, but through the power of thought. This is not a scene from a movie but a reality for Casey Harrell, a 45-year-old man who lost his ability to speak due to ALS. Thanks to the work by researchers at the University of California, Harrell communicates again, his thoughts converted to speech through a brain-computer interface (BCI).

𝐂𝐨𝐦𝐦𝐮𝐧𝐢𝐜𝐚𝐭𝐢𝐨𝐧 𝐮𝐬𝐢𝐧𝐠 𝐍𝐞𝐮𝐫𝐨𝐜𝐡𝐢𝐩 𝐓𝐞𝐜𝐡𝐧𝐨𝐥𝐨𝐠𝐲
- High Precision: The implanted neurochip translates Harrell's thoughts into speech with an impressive 97.5% accuracy. This level of precision offers a new voice to those silenced by various neurological conditions.
- Personalized Voice Synthesis: By training the neural network with recordings of Harrell's voice from before his condition advanced, the resulting speech retains the unique tonal qualities of his voice. This personal touch makes the communication not only clear but also deeply personal.
- Focused Thought Recognition: The system is designed to interpret brain signals linked specifically to speech attempts, not the broader stream of consciousness. This ensures that only intentional speech is vocalized, mirroring the natural process of speaking.

This breakthrough highlights a step forward in neurotechnology, offering hope and a new means of connection for individuals facing similar challenges. It underscores the potential of integrating advanced computing with human biology to overcome physical limitations and enhance lives. It is a gateway to restoring independence and emotional connection for many.

How do you see this technology evolving?

#innovation #technology #future #management #startups
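The "focused thought recognition" point amounts to gating: only windows of neural activity that a speech-attempt detector flags as intentional get passed to the decoder. A toy sketch, where the detector, threshold, and feature windows are all illustrative assumptions rather than the system's real components:

```python
# Sketch of intent gating: pass neural windows to the speech decoder only
# when a speech-attempt detector crosses a confidence threshold.
# The detector, threshold, and windows below are invented for illustration.

SPEECH_THRESHOLD = 0.8

def speech_attempt_prob(window):
    """Stand-in detector: here, simply the mean of the window's features."""
    return sum(window) / len(window)

def gate_and_decode(windows, decode):
    """Decode only the windows that look like intentional speech attempts."""
    decoded = []
    for w in windows:
        if speech_attempt_prob(w) >= SPEECH_THRESHOLD:
            decoded.append(decode(w))
    return decoded

# Middle window falls below the threshold, so it is never vocalized.
windows = [[0.9, 0.95], [0.1, 0.2], [0.85, 0.9]]
result = gate_and_decode(windows, lambda w: "word")
print(len(result))  # prints 2
```

The design choice matters: gating on detected speech attempts, rather than decoding everything, is what keeps inner monologue from being spoken aloud.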
-
Turning thoughts into speech. In real time. No typing. No voice. Just intent. 👇

🧠 A new study in Nature Portfolio (Neuroscience) introduces a significant advancement in brain-computer interface research. Researchers at University of California, San Francisco and University of California, Berkeley developed a real-time speech neuroprosthesis that enables a person with severe paralysis and anarthria to produce streamed, intelligible speech directly from brain signals, without vocalizing. Using high-density electrocorticography (ECoG) recordings from the speech sensorimotor cortex, the system decodes intended speech in 80-ms increments, allowing for low-latency, continuous communication. A personalized synthesizer also recreated the participant's pre-injury voice, preserving identity in speech.

🔹 Reached up to 90 words per minute
🔹 Latency between 1–2 seconds, significantly faster than existing assistive tech
🔹 Generalized across other silent-speech interfaces, including intracortical recordings and EMG

This work highlights the potential for restoring more natural conversation in individuals who have lost the ability to speak.

Full paper: "A streaming brain-to-voice neuroprosthesis to restore naturalistic communication"
🔗: https://lnkd.in/d6tNwQE3
_______________________________________________________
#innovation #health #medicine #brain
-
🧠 “She’s speaking again — using only her thoughts.” What sounds like science fiction just became science. A 47-year-old woman, paralyzed and unable to speak for 18 years, can now communicate in real time, using her own voice, powered entirely by her brain signals. Not text. Not robotic voices. Her actual voice, brought back by AI.

Researchers implanted a 253-channel ECoG (electrocorticography) device on her brain's speech motor cortex. This device captures neural signals when she imagines speaking. A deep learning model trained on 23,000 silent speech attempts then decodes these brain signals into words, and even synthesizes her original voice using pre-paralysis recordings.

Here's what's groundbreaking:
- ⏱ Real-time speech generation every 80 milliseconds.
- 🗣 Speeds up to 90 words per minute (most systems cap at 14 WPM).
- 📉 Only a 12% error rate on a 50-phrase test set.
- 🧩 Can even decode never-before-seen words (e.g., “Zulu”, “Romeo”) with 46% accuracy.

Even more impressive? The system doesn't just “speak”: it also produces text output in parallel. That's dual-modality communication from brain activity alone.

Perfect? Not yet. Larger vocabularies still pose challenges, and accuracy varies. But for millions who've lost their voice due to stroke, ALS, or injury, this is a revolution in assistive communication.

Could this redefine human-computer interaction in the next decade? 👇 Paper, data and code access links in the comments.

#NeuroAI #AIResearch #BrainComputerInterface #HealthcareInnovation #MachineLearning
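Error rates like the 12% above are normally reported as word error rate (WER): the word-level edit distance between the decoded sentence and the reference, divided by the number of reference words. A minimal, self-contained sketch of the standard computation (the example sentences are invented):

```python
# Word error rate (WER): Levenshtein edit distance over words,
# divided by the number of words in the reference sentence.

def wer(reference, hypothesis):
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

print(wer("i want some water please", "i want some water"))  # prints 0.2
```

One dropped word out of a five-word reference gives 20% WER; a 12% rate on a 50-phrase test set means roughly one word in eight came out wrong.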
-
NEW Brain Implant Helps Voiceless ALS Patient Communicate

A milestone in restoring the ability to communicate to people who have lost it:
- more than three times as fast as the previous record,
- beginning to approach natural conversation speed of ~160 words/min.

Study participant: 68-year-old woman with amyotrophic lateral sclerosis (ALS), a degenerative disease that can eventually cause #paralysis.

Study published in Nature:
- Two #brain #implants with ~120 electrodes to monitor #neural activity.
- Trained an algorithm over four months to recognize her intended words, then
- combined that with a #language model that predicts words based on context.
- Using a 125,000-word vocabulary, the system decoded attempted speech at 62 words per minute,
- with a 24 percent word-error rate, accurate enough to generally get the gist of a sentence.

These results show a feasible path forward for restoring rapid communication to people with paralysis who can no longer speak.

Nature | August 23, 2023
-------------------------
Francis R. Willett, Erin Kunz, Chaofei Fan, Donald Avansino, Guy Wilson, Eun Young Choi, Foram Kamdar, Matthew Glasser, Leigh H., Shaul Druckmann, Krishna V. Shenoy, Jaimie Henderson
Howard Hughes Medical Institute, Wu Tsai Neurosciences Institute, Stanford University, Brown University School of Engineering, Carney Institute for Brain Science, Brown University, Mass General Hospital Harvard Medical School, Washington University in St. Louis

#innovation #technology #future #healthcare #medicine #health #management #startups #clinicalresearch #medtech #healthtech #scienceandtechnology #biotechnology #biotech #engineering #ai #research #science #rehabilitation #stroke #tbi #collaboration #electricalengineering #electrical #neuralnetwork #neuromodulation #personalizedmedicine #neurorehabilitation #braincomputerinterface #artificialintelligence #fda #disability #linkedin #news #precisionmedicine #communication #stanford #harvard #mgh #slp #neuroscience #als #brainstimulation
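Combining a neural decoder with a language model, as the Stanford study does, usually means rescoring: the decoder proposes candidate word sequences with its own scores, and an n-gram or neural language model weighs which sequence is more plausible English. A toy sketch of bigram rescoring, where the probabilities, candidates, and scores are all invented for illustration:

```python
import math

# Toy rescoring: pick the candidate sentence that maximizes
# decoder log-score + language-model log-score.
# All probabilities and decoder scores below are invented.

BIGRAM_LOGP = {
    ("i", "want"): math.log(0.5),
    ("want", "water"): math.log(0.4),
    ("want", "wadder"): math.log(0.001),  # implausible English bigram
}
DEFAULT_LOGP = math.log(1e-4)             # backoff for unseen bigrams

def lm_score(words):
    """Sum of bigram log-probabilities over the sentence."""
    return sum(BIGRAM_LOGP.get(pair, DEFAULT_LOGP)
               for pair in zip(words, words[1:]))

def best_sentence(candidates):
    """candidates: list of (words, decoder_log_score) pairs."""
    return max(candidates, key=lambda c: c[1] + lm_score(c[0]))[0]

# The decoder slightly prefers the mis-decoded word,
# but the language model overrules it from context.
candidates = [
    (["i", "want", "wadder"], -1.0),
    (["i", "want", "water"], -1.2),
]
print(" ".join(best_sentence(candidates)))  # prints "i want water"
```

This context-based correction is how a noisy 24%-word-error decoder can still yield sentences whose gist comes through: the language model repairs individually misdecoded words that don't fit their neighbors.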