Researchers have successfully used a brain implant coupled with AI to enable a bilingual individual, unable to articulate words due to a stroke, to communicate in both English and Spanish. This development not only enhances our understanding of how the brain processes language but also opens up new possibilities for restoring speech to those unable to communicate verbally. Known as Pancho, the participant demonstrated the ability to form coherent sentences in both languages with impressive accuracy, thanks to the neural patterns recognized and translated by the AI system. The findings suggest that different languages may not occupy distinct areas of the brain as previously thought, hinting at a more integrated neural basis for multilingualism. This technology represents a significant leap forward in neuroprosthetics, offering hope for personalized communication restoration in multilingual individuals.

Key Insights:
- Dual Language Decoding 🗣️ - The AI system can interpret and translate neural patterns into both Spanish and English, adjusting in real time.
- High Accuracy 🎯 - Achieved 88% accuracy in distinguishing between languages and 75% in decoding full sentences.
- Unified Brain Activity 🧠 - Challenges prior assumptions with findings that both languages activate similar brain areas.
- Future Applications 🔍 - Potential expansion to other languages with varying linguistic structures, enhancing universal applicability.
- Enhanced Connection 💬 - Focuses not just on word replacement but on restoring deep personal connections through communication.

https://buff.ly/3V8SiXe
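The dual-language decoding described above can be pictured as a two-stage pipeline: first classify which language a window of neural features belongs to, then hand those features to that language's word decoder. This is only an illustrative sketch; the classifier, decoders, vocabulary, and feature vectors below are all invented stand-ins, not the study's actual models.

```python
# Hypothetical two-stage sketch: language ID first, then per-language decoding.
# All names, centroids, and vocabularies here are illustrative assumptions.

def classify_language(features, centroids):
    """Nearest-centroid language ID over a neural feature vector."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lang: dist(features, centroids[lang]))

def decode_word(features, decoders, centroids):
    """Route the feature window to the decoder for its detected language."""
    lang = classify_language(features, centroids)
    return lang, decoders[lang](features)

# Toy decoders: pick a word by the index of the strongest feature.
VOCAB = {"en": ["hello", "water"], "es": ["hola", "agua"]}
decoders = {lang: (lambda f, l=lang: VOCAB[l][f.index(max(f)) % len(VOCAB[l])])
            for lang in VOCAB}
centroids = {"en": [1.0, 0.0], "es": [0.0, 1.0]}

lang, word = decode_word([0.9, 0.1], decoders, centroids)
print(lang, word)  # feature vector sits nearest the "en" centroid → "en hello"
```

The real system reportedly switches between languages in real time; in this toy version that simply means the language classifier runs on every window rather than once per session.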
How to Restore Abilities With Medical Devices
Explore top LinkedIn content from expert professionals.
-
From robotic to real? We are entering a new era of mind-controlled limbs. Neuroprosthetic limbs are no longer science fiction; they are rapidly transforming into natural extensions of the human body. A remarkable new review by Tian, Kemp and colleagues in Annals of Neurology lays out how brain and nerve interfaces are now converging to restore true limb function, especially following a devastating injury, disease, or amputation.

Key Points:
- The authors teach us that by combining brain-computer interfaces (BCIs) with peripheral nerve signals, a future generation of prosthetics may be able to interpret intention and deliver real-time sensory feedback.
- These advances are bringing us closer to natural limb function.
- Advanced surgical techniques like targeted muscle reinnervation (TMR) and regenerative peripheral nerve interfaces (RPNIs) let nerves communicate with prosthetics through reinnervated muscle.
- The overarching idea is to create a natural signal amplifier.
- Users are beginning to feel texture, pressure, and temperature.
- Are we reawakening a sense of ownership over the limb?

My take: Five points resonated with me about where prosthetic limbs are headed.
1. Today's prosthetic hands are smarter than ever: they can grab, hold, and even feel objects with increasing precision.
2. We are learning to plug into the brain, and it seems to be working. Thought-controlled limbs are now entering real-world trials.
3. Rewiring nerves into muscles offers clearer signals, improved speed, better accuracy, and hopefully more natural movement.
4. One day a prosthetic limb may not just replace a lost one - could it do the unthinkable and surpass it in function?
5. We are moving from clunky robots to more intuitive, lifelike limbs.

I think this story is not just about mobility; it's about restoring human dignity. The future of neuroprosthetics will be more personal, intelligent, and 'deeply human.'
https://lnkd.in/d9XvqTBU Parkinson's Foundation Norman Fixel Institute for Neurological Diseases International Parkinson and Movement Disorder Society
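The "natural signal amplifier" idea in the review above, combining cortical intent with peripheral nerve signals routed through reinnervated muscle, can be sketched as a simple fusion rule. Everything below (the weighting, the threshold, the function name) is an assumption for illustration, not the authors' method.

```python
# Illustrative sketch, not from the review: blend a BCI-decoded grasp
# probability with normalized EMG amplitude from reinnervated muscle.
# Weights and thresholds are made-up assumptions.

def fuse_intent(bci_prob, emg_rms, emg_threshold=0.2, bci_weight=0.6):
    """Blend BCI intent with peripheral EMG into one grip command.

    bci_prob: decoded probability (0..1) that the user intends to grasp.
    emg_rms:  normalized RMS amplitude (0..1) from the reinnervated muscle.
    Returns a grip command in [0, 1]; EMG below threshold counts as rest.
    """
    emg = emg_rms if emg_rms >= emg_threshold else 0.0
    command = bci_weight * bci_prob + (1 - bci_weight) * emg
    return max(0.0, min(1.0, command))

print(fuse_intent(0.9, 0.8))   # 0.86 - both channels agree → firm grip
print(fuse_intent(0.9, 0.05))  # 0.54 - EMG at rest → BCI alone, softer grip
```

The design point this toy captures: when the two channels agree, the command is stronger and more confident than either alone, which is roughly what "amplifier" suggests.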
-
Imagine not being able to speak for years due to a neurodegenerative condition - then discovering a way to communicate in your own voice again, all thanks to AI. That's exactly what happened with a new technology developed by UC Davis Health, which reads a patient's brain signals and translates them into speech with remarkable accuracy (about 97%), using a voice clone based on their old videos.

How Does It Work?
The system uses machine learning algorithms that analyze brain activity and match it to language patterns. Over time, it learns which brain signals correspond to specific phonemes (the building blocks of speech). After "training," the AI generates spoken words in a voice modeled on recordings of the patient before they lost their speech.

Concerns About Authenticity
Of course, some people worry about "fabrication" - the possibility that the AI might accidentally generate words the patient never intended to say. Here are a few ways to address that:
- Technical Safeguards: The algorithms can incorporate confidence thresholds, so the system rejects outputs when it's not certain of the intended word.
- User Verification: Individuals can use additional means (like subtle physical movements or eye-tracking) to confirm or correct AI-generated speech in real time.
- Iterative Refinement: With ongoing use, the model can refine its understanding of the user's specific brain signals, further reducing errors.

Critics argue that no system is perfect - but proponents counter that being accurately heard, even most of the time, is far better than not being heard at all. This brings immense hope to those who've lost their voice due to conditions like ALS, stroke, or injury. Regaining the ability to communicate with loved ones - especially in a voice that feels like their own - can improve emotional well-being and quality of life.

What's your take on AI-powered speech restoration?
Do concerns about authenticity outweigh the profound benefits for people who’ve been silent for years? #innovation #technology #future #management #startups
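The "confidence threshold" safeguard mentioned above has a simple shape: the decoder only emits a word when its confidence clears a bar, and flags everything else for user verification. The sketch below is a minimal illustration of that gating idea; the threshold value and function names are assumptions, not UC Davis's implementation.

```python
# Minimal sketch of confidence-gated decoding, assuming the decoder outputs
# (word, confidence) pairs. The 0.85 threshold is an arbitrary example.

def gate_output(candidates, threshold=0.85):
    """Split decoder candidates into words spoken immediately and words
    held back for user confirmation (e.g., via eye-tracking)."""
    spoken, to_confirm = [], []
    for word, conf in candidates:
        (spoken if conf >= threshold else to_confirm).append(word)
    return spoken, to_confirm

spoken, to_confirm = gate_output(
    [("hello", 0.97), ("world", 0.60), ("friend", 0.91)])
print(spoken)      # ['hello', 'friend']
print(to_confirm)  # ['world']
```

Raising the threshold trades speed for authenticity: fewer fabricated words, but more pauses for confirmation, which is exactly the tension the post describes.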
-
A significant breakthrough to help patients with spinal cord injuries, integrating neuroscience, biomedical engineering, and artificial intelligence. The ARC nerve stimulation therapy system from startup Onward Medical passed another developmental milestone as the company announced the first successful installation of its brainwave-driven implantable electrode array to restore function and feeling to a patient's hands and arms. The news comes just five months after researchers implanted a similar system in a different patient to help them regain a more natural walking gait.

Which ARC system is used depends on the issue being addressed. The ARC-EX is an external, non-invasive stimulator array that sits on the patient's neck and helps regulate bladder control and blood pressure, as well as improving limb function and control. Onward's lower-limb study from May employed the ARC-IM along with a BCI controller from CEA-Clinatec to create a "digital bridge" spanning the gap in the patient's spinal column. The study published Wednesday likewise utilized the ARC-IM, an implantable version of the company's stimulator array that is installed near the spinal cord and controlled through wearable components and a smartwatch. Onward had previously used the ARC-IM system to enable paralyzed patients to stand and walk short distances without assistance, for which it was awarded an FDA Breakthrough Device Designation in 2020.

In mid-August, medical professionals led by neurosurgeon Dr. Jocelyne Bloch implanted the ARC-IM and the Clinatec BCI into a 46-year-old patient with a C4 spinal injury. The BCI's hair-thin leads pick up electrical signals in the patient's brain, convert those analog signals into digital ones that machines can understand, and then transmit them to a nearby computing device, where a machine-learning system interprets the signals and issues commands to the implanted stimulator array.
The patient thinks about what they want to do and these two devices work to translate that intent into computer-controlled movement.
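The pipeline described above (analog brain signals digitized, a decoder inferring intent, intent mapped to stimulator commands) can be sketched end to end. This is a toy illustration only: the quantizer, threshold decoder, and stimulation parameters below are invented stand-ins, not Onward's or Clinatec's actual processing.

```python
# Toy sketch of a "digital bridge": ADC-style quantization, a stand-in intent
# decoder, and a mapping to hypothetical stimulator settings.

def digitize(analog_samples, levels=256, vmin=-1.0, vmax=1.0):
    """Quantize analog samples into integer codes, like an ADC would."""
    step = (vmax - vmin) / (levels - 1)
    return [round((max(vmin, min(vmax, s)) - vmin) / step)
            for s in analog_samples]

def decode_intent(codes, threshold=128):
    """Stand-in decoder: mean activity above threshold means 'close hand'."""
    return "close_hand" if sum(codes) / len(codes) > threshold else "rest"

def stimulation_command(intent):
    """Map decoded intent to an (assumed) stimulator setting."""
    return {"close_hand": {"electrode": 3, "amplitude_ma": 2.5},
            "rest":       {"electrode": 3, "amplitude_ma": 0.0}}[intent]

codes = digitize([0.8, 0.9, 0.7])          # high activity window
print(stimulation_command(decode_intent(codes)))
```

The real system replaces the threshold decoder with a trained machine-learning model, but the data flow (sense, digitize, decode, stimulate) is the same chain the post walks through.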
-
Turning thoughts into speech. In real time. No typing. No voice. Just intent. 👇

🧠 A new study in Nature Neuroscience introduces a significant advancement in brain-computer interface research. Researchers at the University of California, San Francisco and the University of California, Berkeley developed a real-time speech neuroprosthesis that enables a person with severe paralysis and anarthria to produce streamed, intelligible speech directly from brain signals without vocalizing. Using high-density electrocorticography (ECoG) recordings from the speech sensorimotor cortex, the system decodes intended speech in 80-ms increments, allowing for low-latency, continuous communication. A personalized synthesizer also recreated the participant's pre-injury voice, preserving identity in speech.

🔹 Reached up to 90 words per minute
🔹 Latency between 1–2 seconds, significantly faster than existing assistive tech
🔹 Generalized across other silent-speech interfaces, including intracortical recordings and EMG

This work highlights the potential for restoring more natural conversation in individuals who have lost the ability to speak.

Full paper: "A streaming brain-to-voice neuroprosthesis to restore naturalistic communication"
🔗 https://lnkd.in/d6tNwQE3
_______________________________________________________
#innovation #health #medicine #brain
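The 80-ms streaming idea above is what makes the latency low: instead of waiting for a whole sentence, the decoder runs on each small window as it arrives. A minimal sketch of that chunking, with a placeholder in place of the real neural decoder (the sample rate and decoder are assumptions):

```python
# Sketch of streaming decoding in 80 ms increments. The decoder is a
# placeholder; a real system would run a trained model on each window.

CHUNK_MS = 80

def stream_decode(samples, sample_rate_hz=1000, decoder=None):
    """Yield one decoded output per 80 ms window of neural samples."""
    decoder = decoder or (lambda window: f"{len(window)} samples decoded")
    chunk = int(sample_rate_hz * CHUNK_MS / 1000)  # samples per 80 ms window
    for start in range(0, len(samples) - chunk + 1, chunk):
        yield decoder(samples[start:start + chunk])

# 400 ms of dummy data at an assumed 1 kHz → five 80-sample windows.
outputs = list(stream_decode([0.0] * 400))
print(len(outputs))  # 5
```

Because each window produces output independently, speech can begin while the user is still mid-sentence, which is what closes the gap between thought and audible voice.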
-
Enhanced Brain Implant Translates Stroke Survivor's Thoughts Into Nearly Instant Speech Using Artificial Intelligence

The system harnesses technology similar to that of devices like Alexa and Siri, according to the researchers, and improves on a previous model.

(Photo: Researchers connect stroke survivor Ann Johnson's brain implant to the experimental computer, which will allow her to speak by thinking words. Credit: Noah Berger)

A brain implant that converts neuron activity into audible words has given a stroke survivor with severe paralysis almost instantaneous speech. Ann Johnson became paralyzed and lost the ability to speak after suffering a stroke in 2005, when she was 30 years old. Eighteen years later, she consented to being surgically fitted with an experimental, thin, brain-reading implant that connects to a computer, officially called a brain-computer interface (BCI). Researchers placed the implant on her motor cortex, the part of the brain that controls physical movement, and it tracked her brain waves as she thought the words she wanted to say.

As detailed in a study published Monday in the journal Nature Neuroscience, researchers used advances in artificial intelligence (A.I.) to improve the device's ability to quickly translate that brain activity into synthetic speech - now, it's almost instantaneous. The technology "brings the same rapid speech decoding capacity of devices like Alexa and Siri to neuroprostheses," study co-author Gopala Anumanchipalli, a computer scientist at the University of California, Berkeley, says in a statement. Neuroprostheses are devices that can aid or replace lost bodily functions by connecting to the nervous system. "Using a similar type of algorithm, we found that we could decode neural data and, for the first time, enable near-synchronous voice streaming," he adds. "The result is more naturalistic, fluent speech synthesis."

#AI #medicine #BrainComputerInterface #brainimplant #strokesurvivor #brainvoicesynthesis
-
AI just gave a paralyzed woman her voice back.

In 2005, Ann, a 30-year-old teacher, suffered a stroke that left her unable to speak or move - trapped in her own body with locked-in syndrome for almost two decades. But now, thanks to a breakthrough brain-computer interface (BCI), she is communicating again - through a digital avatar. This new AI-powered technology decodes brain signals directly into speech and facial expressions.

Here's how the team at the University of California, San Francisco made this breakthrough:

1. Brain signal decoding
Researchers implanted a 253-electrode array on the surface of Ann's brain, capturing the signals that would normally control speech. This allowed her to communicate at nearly 80 words per minute, just by thinking the words.

2. Recreating a natural voice
Researchers used a sophisticated AI that synthesized Ann's voice from a pre-stroke recording - her wedding day speech. This wasn't just robotic speech generation; it brought back her real voice.

3. Bringing emotion and expression back
The team went further by combining speech synthesis with facial animation. A screen displayed Ann's digital avatar, translating her brain signals into facial expressions, allowing it to smile, frown, and express emotions along with her restored voice.

4. The road to independence
The next step in this research is to develop a wireless version of the system that would free Ann (and others like her) from the need for physical connections to a computer.

This is life-changing tech that opens new doors for millions of people living with severe paralysis - a pivotal moment at the intersection of AI, neuroscience, and healthcare. But there's one concern: accessibility. While this technology is revolutionary, its high cost makes it inaccessible for many who need it most.

Could AI-powered speech technology be the future of healthcare for those with paralysis?

Video credit: UC San Francisco (UCSF) on YouTube.
#innovation #ai #technology #healthcare
-
Retinal implant ‘Prima’ shows promise in restoring vision for legally blind patients:

🦮 Participants in a clinical trial improved their visual acuity significantly, enabling them to read and recognize faces
🦮 The 2-mm chip is surgically implanted under the retina and uses infrared light to stimulate the remaining retinal cells
🦮 Unlike previous devices, Prima offers the perception of shapes and patterns rather than just flashes of light
🦮 Initial trial results from 32 participants showed an average improvement in visual acuity from 20/450 to 20/160, with some users achieving up to 20/63 acuity using a built-in zoom feature. The legally blind threshold is typically 20/200 or worse
🦮 The study targeted patients with geographic atrophy, a severe form of age-related macular degeneration
🦮 The Prima system received FDA breakthrough device designation, indicating its potential impact on vision restoration therapies

💬 This technology is promising, and while we're not yet close to normal vision, features like optical zoom could lead to enhanced ‘superhuman’ vision capabilities in the future

👇 Link to related articles in comments

#digitalhealth #healthtech
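To get a feel for the size of the reported 20/450 → 20/160 improvement, it helps to convert the Snellen fractions to logMAR, the scale acuity trials usually report. This back-of-envelope conversion is my own illustration, not from the trial itself.

```python
import math

# Snellen-to-logMAR conversion: logMAR = log10(denominator / numerator).
# 20/20 vision is logMAR 0.0; higher logMAR means worse acuity.

def snellen_to_logmar(numerator, denominator):
    """Convert a Snellen fraction (e.g., 20/200) to logMAR."""
    return math.log10(denominator / numerator)

before = snellen_to_logmar(20, 450)  # ≈ 1.35
after = snellen_to_logmar(20, 160)   # ≈ 0.90
print(round(before - after, 2))      # ≈ 0.45 logMAR improvement
```

At roughly 0.1 logMAR per line on a standard ETDRS chart, that average gain corresponds to about four to five lines of letters, a substantial change for patients starting well below the 20/200 legal-blindness threshold.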
-
A new experimental bionic leg was found to restore natural walking speeds and gait in people with leg amputations. The neuroprosthesis uses sensors placed between the reconstructed amputation site and the bionic leg to transmit electrical signals from the brain. This allows the prosthetic device to sense its position and movement and to send this information back to the patient, enabling proprioception: the brain's ability to sense self-movement and location in space. A study published in the journal Nature Medicine indicated that participants who had received the specialized amputation and neuroprosthesis increased their walking speed by 41%, matching the ranges and abilities of people without leg amputations.

Here's the latest: https://lnkd.in/eUF9KbW9
-
In a groundbreaking clinical trial at Massachusetts General Hospital, researchers are using artificial intelligence to help ALS patients regain their ability to speak. Led by Dr. Leigh Hochberg, the study is part of the BrainGate2 trial, which involves the surgical implantation of small devices on the brain. These devices translate brain activity into speech, enabling patients like Casey Harrell, who lost his voice to ALS, to communicate again. Remarkably, the AI also recreates the patient's pre-ALS voice, restoring not only speech but also a sense of identity. This breakthrough offers renewed hope to ALS patients and their families, with researchers optimistic it could one day assist with movement as well. #TechNews #Innovation #AI
Researchers use artificial intelligence to help ALS patient speak again
cbsnews.com