UC Berkeley and UCSF just brought real-time speech back to someone who couldn't speak for 18 years (insane!). For people with paralysis and anarthria, the delay and effort of current AAC tools can make natural conversation nearly impossible. 𝗧𝗵𝗶𝘀 𝗻𝗲𝘄 𝗔𝗜-𝗱𝗿𝗶𝘃𝗲𝗻 𝗻𝗲𝘂𝗿𝗼𝗽𝗿𝗼𝘀𝘁𝗵𝗲𝘀𝗶𝘀 𝘀𝘁𝗿𝗲𝗮𝗺𝘀 𝗳𝗹𝘂𝗲𝗻𝘁, 𝗽𝗲𝗿𝘀𝗼𝗻𝗮𝗹𝗶𝘇𝗲𝗱 𝘀𝗽𝗲𝗲𝗰𝗵 𝗱𝗶𝗿𝗲𝗰𝘁𝗹𝘆 𝗳𝗿𝗼𝗺 𝗯𝗿𝗮𝗶𝗻 𝘀𝗶𝗴𝗻𝗮𝗹𝘀 𝗶𝗻 𝗿𝗲𝗮𝗹 𝘁𝗶𝗺𝗲 𝘄𝗶𝘁𝗵 𝗻𝗼 𝘃𝗼𝗰𝗮𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻 𝗿𝗲𝗾𝘂𝗶𝗿𝗲𝗱.

1. Restored speech in a participant using a 253-channel ECoG array, 18 years after a brainstem stroke and complete loss of speech.
2. Trained deep learning decoders to synthesize audio and text every 80 ms from silent speech attempts, with no vocal sound needed.
3. Streamed speech at 47.5 words per minute with just 1.12 s latency, roughly 8× faster than prior state-of-the-art neuroprostheses.
4. Matched the participant's original voice using a pre-injury recording, restoring not just words but vocal identity.

The bimodal decoder architecture they used was the coolest part. The low latency and synchronized output come from sharing a single neural encoder while running separate joiners and language models for the two modalities: acoustic-speech units and text. Other implementation details include convolutional layers feeding unidirectional GRUs, plus LSTM-based language models.

Absolutely love seeing AI used in practical ways to bring back joy and hope to people who are paralyzed!! Here's the awesome work: https://lnkd.in/ghqX5EB2

Congrats to Kaylo Littlejohn, Cheol Jun Cho, Jessie Liu, Edward Chang, Gopala Krishna Anumanchipalli, and co!

I post my takes on the latest developments in health AI – 𝗰𝗼𝗻𝗻𝗲𝗰𝘁 𝘄𝗶𝘁𝗵 𝗺𝗲 𝘁𝗼 𝘀𝘁𝗮𝘆 𝘂𝗽𝗱𝗮𝘁𝗲𝗱! Also, check out my health AI blog here: https://lnkd.in/g3nrQFxW
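For the ML-curious, here's a minimal, hypothetical sketch of what that shared-encoder, two-headed streaming design could look like. The layer sizes, vocabulary sizes, and 253-channel / 80 ms framing below are my own illustrative assumptions, not the paper's actual hyperparameters or code.

```python
# Sketch of a bimodal streaming decoder: one shared causal encoder
# (conv + unidirectional GRU) feeding two heads, one for acoustic-speech
# units and one for text tokens. All dimensions are assumptions.
import torch
import torch.nn as nn

class BimodalStreamingDecoder(nn.Module):
    def __init__(self, n_channels=253, hidden=256, n_speech_units=100, n_text_tokens=41):
        super().__init__()
        # Shared encoder: temporal convolution then a unidirectional GRU,
        # so each output frame depends only on past neural activity.
        self.conv = nn.Conv1d(n_channels, hidden, kernel_size=5, padding=4)
        self.gru = nn.GRU(hidden, hidden, num_layers=2, batch_first=True)
        # Separate output heads per modality (the "joiners" would attach here).
        self.speech_head = nn.Linear(hidden, n_speech_units)
        self.text_head = nn.Linear(hidden, n_text_tokens)

    def forward(self, neural, state=None):
        # neural: (batch, time, channels), one step per 80 ms window
        x = self.conv(neural.transpose(1, 2))[:, :, : neural.size(1)]  # trim to causal
        h, state = self.gru(x.transpose(1, 2), state)
        return self.speech_head(h), self.text_head(h), state

model = BimodalStreamingDecoder()
state = None
for _ in range(10):  # simulate ten 80 ms frames arriving in real time
    frame = torch.randn(1, 1, 253)  # fake high-gamma features for one window
    speech_logits, text_logits, state = model(frame, state)
```

The design point, as I read it: one causal encoder keeps the two output streams time-aligned, which is what makes synchronized audio and text possible at low latency.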
How AI Can Improve Communication for Disabled Individuals
Summary
Artificial intelligence (AI) is revolutionizing communication for disabled individuals, offering innovative tools that decode brain signals, synthesize personalized speech, and enhance accessibility through real-time translation and advanced interfaces. These technologies are transforming the way people with disabilities connect with the world around them.
- Enable thought-to-speech communication: Brain-computer interfaces (BCIs) combined with AI can convert neural signals into speech or text, providing new communication pathways for individuals with paralysis or speech impairments.
- Personalize communication tools: AI can synthesize voices based on pre-recorded audio, restoring not only speech but also emotional tone and identity for those who have lost their voice.
- Expand accessibility through tech: AI-powered tools, like sign language avatars and live-transcription glasses, break down barriers by enabling real-time communication for people with hearing or speech disabilities.
-
AI just gave a paralyzed woman her voice back.

In 2005, Ann, a 30-year-old teacher, suffered a stroke that left her unable to speak or move, trapped in her own body with locked-in syndrome for almost two decades. But now, thanks to a breakthrough brain-computer interface (BCI), she is communicating again, through a digital avatar. This new AI-powered technology decodes brain signals directly into speech and facial expressions.

Here's how the team at University of California, San Francisco made this breakthrough:

1. Brain signal decoding: Researchers implanted a 253-electrode array on the surface of Ann's brain, capturing the signals that would normally control speech. This allowed her to communicate at about 80 words per minute, just by thinking about the words.

2. Recreating a natural voice: Researchers used AI to synthesize Ann's voice from a pre-stroke recording, her wedding-day speech. This wasn't just robotic speech generation; it brought back her real voice.

3. Bringing emotion and expression back: The team went further by combining speech synthesis with facial animation. A screen displayed Ann's digital avatar, translating her brain signals into facial expressions so it could smile, frown, and express emotion along with her restored voice.

4. The road to independence: The next step in this research is a wireless version of the system that would free Ann (and others like her) from the need for physical connections to a computer.

This is life-changing tech that opens new doors for millions of people living with severe paralysis, and a pivotal moment at the intersection of AI, neuroscience, and healthcare. But there's one concern: accessibility. While this technology is revolutionary, its high cost makes it inaccessible to many who need it most.

Could AI-powered speech technology be the future of healthcare for those with paralysis?

Video credit: UC San Francisco (UCSF) on YouTube.

#innovation #ai #technology #healthcare
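To give a rough feel for the avatar step, here's a toy sketch: map each frame of decoded speech-related features to facial blendshape weights that drive a digital face. The ridge regression, feature sizes, and blendshape count are invented for illustration; the UCSF team's actual models are far more sophisticated.

```python
# Toy sketch (not the UCSF code): decoded articulatory features per frame
# are mapped to avatar blendshape weights by a simple regression.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Pretend calibration data: decoded features (e.g., jaw/lip/tongue
# kinematics) paired with reference blendshape weights per frame.
X_train = rng.normal(size=(500, 32))   # 32 decoded features per frame
Y_train = rng.uniform(size=(500, 52))  # 52 blendshape weights per frame

face_mapper = Ridge(alpha=1.0).fit(X_train, Y_train)

# At runtime, each newly decoded frame yields one set of blendshape
# weights, clipped to [0, 1] before animating the avatar.
frame_features = rng.normal(size=(1, 32))
blendshapes = np.clip(face_mapper.predict(frame_features), 0.0, 1.0)
```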
-
#GenAI has been a transformative experience for my team at The Accidental Ally, a team of individuals with intellectual and developmental disabilities (#IDD). Here's an example of how GenAI can make communication easier and more empowering for individuals with IDD.

Laith is a young man on my team who has lots of great ideas and always has something to say, but written communication is challenging for him. He used voice typing to input the following into ChatGPT:

Laith's Input: "Make a professional but cool bio for my mentor Gayatri. She is kind, fond of you around, and most importantly the most patient person I've ever met, and that's saying something because I push her buttons all the time. She can sometimes be strict, but that's ok because she cares. She's the founder of the Accidental Ally and teaches us how to prepare for the workforce but doesn't treat it like school. She can be brutally honest about how we act if we don't act professional. She's like an aunt to the team and just an awesome person all around."

ChatGPT's Response: "Gayatri is the founder of The Accidental Ally, a visionary leader who balances kindness with honesty. Known for her patience, she guides her team through the challenges of workforce preparation without turning it into a typical classroom. Her ability to be both a mentor and a caring figure has earned her respect, even when she's being blunt about the importance of professionalism. As someone who truly cares about her team's success, Gayatri is the supportive yet no-nonsense mentor everyone needs—and to her team, she's like an aunt who always has their back."

What's amazing is that Laith was able to create a professional bio quickly, using his own words. Here's how GenAI helps:

- Better Self-Expression: Laith can now clearly express his thoughts using voice typing and GenAI, overcoming challenges with written communication.
- Helps Refine Ideas: GenAI takes Laith's raw input and turns it into a polished message, keeping the essence of his thoughts.
- Keeps His Voice: The final message still feels like Laith's own, with GenAI simply helping organize his ideas without changing his meaning.
- Real-Time Learning: By seeing how GenAI refines his thoughts, Laith learns how to structure communication for different situations, like work or formal settings.
- Boosts Confidence: Seeing his words transformed into professional language boosts Laith's confidence in his ability to communicate.
- Prepares for Work: GenAI also helps Laith practice professional communication, a valuable skill for job preparation and workforce readiness.

#GenAI is building bridges for people like Laith, empowering them to actively participate and thrive in the workforce!
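For anyone who wants to build a similar assistive flow, here's a minimal sketch using the OpenAI Python SDK. The model name and prompt wording are my assumptions for illustration, not what The Accidental Ally actually uses.

```python
# Sketch: send voice-typed text to an LLM with instructions to polish it
# while preserving the speaker's own voice and meaning.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

voice_typed = (
    "Make a professional but cool bio for my mentor Gayatri. She is kind, "
    "and most importantly the most patient person I've ever met..."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice
    messages=[
        {
            "role": "system",
            "content": "Polish the user's voice-typed draft into clear, "
                       "professional prose, but preserve their ideas, tone, "
                       "and personality rather than rewriting their meaning.",
        },
        {"role": "user", "content": voice_typed},
    ],
)
print(response.choices[0].message.content)
```

The key design choice mirrors the post: the system prompt asks for organization and polish, not reinvention, so the output still sounds like the person who spoke it.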
-
Sign language avatars may seem like a great idea, but like sign language gloves, the details are in the implementation. Until now, most have been overly simplified and have not involved people with disabilities in their design or development.

NVIDIA's new SIGNS platform (https://signs-ai.com/) takes a different approach. NVIDIA incorporated artificial intelligence to create more naturalistic American Sign Language (ASL) avatars. Unlike previous attempts, this model was trained with real Deaf signers to improve accuracy. It focuses on the full range of ASL expression, including nuanced facial movements, precise hand placement, and the flow of movement between signs. This represents progress in digital inclusion. Facial expressions are critical in ASL, and past avatars failed to capture them effectively. SIGNS recognizes that, along with the importance of correctly positioning the hands in three-dimensional space.

There is still work to do. What happens if someone has Bell's palsy and cannot move one side of their face? What if someone has ulnar neuropathy and cannot keep their pinky in with the rest of their fingers? What about other forms of sign language used in the US, like SEE, LSM, CSL, cued speech, and Black ASL, just to name a few? d/Deaf people are diverse, and tools that promote inclusion must reflect that. More research, testing, and collaboration with the Deaf and disabled communities are needed to ensure AI-driven signing works for everyone.

Technology alone doesn't make something accessible. People do. NVIDIA's crowdsourcing of signs for training should make this a more robust system. NVIDIA's SIGNS platform is definitely a step forward in applying AI to sign language.

#AccessibilityTriumphThursday #Accessibility #Inclusion #Disability #AI4Accessibility #SignLanguageTech #DigitalInclusion #ASLInnovation #AccessibleAI #ASL #SignLanguage https://lnkd.in/gcwjS5ab
-
Last week, we explored how robots might move, feel, and understand like humans. Now, we flip the lens and tap into one of the most exciting frontiers in human augmentation: Brain-Computer Interfaces (BCIs).

BCIs connect the brain directly to machines, translating neural activity into signals that control computers, devices, or even AI agents. With the rise of Agentic AI, a new possibility is emerging: what if your intentions could become instructions, from brainwaves to prompts, directing AI with intent alone? The most intuitive interface isn't voice; it's thought. A Thought-to-Agent Interface (T2A) links your brain activity to an AI Agent in real time, translating mental focus, intention, or emotional state into prompts, actions, or decisions.

These are some use-case examples (see the sketch after this post for a concrete toy version)...
🧠 In Work: You're in deep focus. You imagine a slide, your AI Agent starts drafting it. You think of a person; it pulls up your last conversation.
🧠 In Accessibility: For someone unable to speak or type, the interface interprets intent from brain signals and helps control devices, compose messages, or navigate systems.
🧠 In Creativity: A designer imagines a shape, a scene, or a melody, and the AI Agent renders variations in real time, refining the output through guided intent.

These are some current research projects...
📚 Meta AI's Brain-to-Text Decoding: Decodes full sentences from non-invasive brain activity with up to 80% character accuracy, bridging neural intent to digital language. https://lnkd.in/gTEJpa4e
📚 UC Berkeley's Brain-to-Voice Neuroprosthesis: Translates brain signals into audible speech, restoring naturalistic communication for people with speech loss. https://lnkd.in/g_D3Xeup
📚 Caltech's Mind-to-Text Interface: Achieves 79% accuracy in translating imagined internal speech into real-time text, enabling seamless brain-to-device communication. https://lnkd.in/gEuVKreq

These are some startups to watch...
🚀 Neurable: EEG-based wearables decoding cognitive load & focus in real time. https://www.neurable.com/
🚀 OpenBCI: Makers of Galea, a headset combining EEG, EMG, eye tracking, and skin conductance for immersive neural interfacing. https://lnkd.in/girt4PAW
🚀 Cognixion: Brain-powered communication integrated with AR and speech synthesis for non-verbal users. https://www.cognixion.com/
🚀 Paradromics: High-bandwidth BCI for translating neural activity into speech or system commands for those with severe impairments. https://lnkd.in/giepGKH4

What is a likely time horizon...
1–2 years: Wearable EEG interfaces paired with AI for narrow tasks: adaptive UI, hands-free control, attention-based interaction.
3–5 years: Thought-to-agent pipelines for work, accessibility, and creative tools, personalized to individual brain patterns and cognitive signatures.

The future isn't just AI that understands your prompts. It's AI that understands you as soon as you think.

Next up: Multimodal AI Sensory Fusion ("Glass Whisperer")
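To make the T2A idea concrete, here's a toy sketch: a classifier over EEG band-power features decodes a coarse intent, which is then mapped to a prompt for an agent. Every label, feature layout, and helper function below is invented for illustration; real systems need careful calibration and far richer decoding.

```python
# Toy "thought-to-agent" loop: decode a coarse intent from EEG features,
# then turn it into an agent prompt. All specifics are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

INTENTS = ["draft_slide", "open_last_conversation", "do_nothing"]

# Pretend calibration data: alpha/beta/gamma band power per electrode.
rng = np.random.default_rng(1)
X_calib = rng.normal(size=(300, 24))          # 8 electrodes x 3 bands
y_calib = rng.integers(0, len(INTENTS), 300)  # labeled intent per trial

intent_decoder = LogisticRegression(max_iter=1000).fit(X_calib, y_calib)

def dispatch_to_agent(intent: str) -> str:
    """Hypothetical mapping from decoded intent to an agent prompt."""
    prompts = {
        "draft_slide": "Draft a slide summarizing today's project status.",
        "open_last_conversation": "Pull up my last conversation thread.",
        "do_nothing": "",
    }
    return prompts[intent]

new_window = rng.normal(size=(1, 24))  # one new EEG feature window
intent = INTENTS[int(intent_decoder.predict(new_window)[0])]
prompt = dispatch_to_agent(intent)
```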
-
AI just gave a 10-year-old superpowers!

Meet Kendyl. She was born deaf and has used sign language since she was six months old. But now, thanks to AI, she can see conversations in real time.

She recently tried on a pair of AI-powered glasses by Hear View that transcribe speech into live subtitles, displayed directly on the lenses. Not only does this help her understand others, it also allows her to practice speaking by checking whether her own words appear correctly on the screen.

These glasses cost AUD $2,400, but can you really put a price on breaking down communication barriers?

↳ AI is turning accessibility challenges into solvable problems
↳ Technology isn't replacing human connection; it's making it stronger
↳ The future of communication is already here

Kendyl just unlocked a new way to experience the world. And this is just the beginning. #AI
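Live subtitling like this is buildable today from off-the-shelf parts. Here's a generic sketch using a local Whisper model; to be clear, this is not Hear View's actual pipeline, and the chunk length and model size are arbitrary choices.

```python
# Generic live-captioning sketch: record short microphone chunks and
# transcribe each with a local Whisper model.
import sounddevice as sd
import whisper  # pip install openai-whisper

model = whisper.load_model("base")
SAMPLE_RATE = 16000   # Whisper expects 16 kHz mono audio
CHUNK_SECONDS = 3     # latency/accuracy trade-off, chosen arbitrarily

while True:
    # Record one chunk of mono float32 audio.
    audio = sd.rec(int(CHUNK_SECONDS * SAMPLE_RATE),
                   samplerate=SAMPLE_RATE, channels=1, dtype="float32")
    sd.wait()
    result = model.transcribe(audio.flatten(), fp16=False, language="en")
    print(result["text"])  # a real device would render this on the lens
```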
-
I recently came across an incredible story that I feel needs more attention.

Ann lost her ability to speak 18 years ago due to paralysis. Imagine not being able to say a word for nearly two decades. Now, thanks to a combination of a brain implant and artificial intelligence, she's finding her voice again.

𝐇𝐨𝐰 𝐝𝐨𝐞𝐬 𝐢𝐭 𝐰𝐨𝐫𝐤?

Ann is collaborating with researchers to develop a new technology called a brain-computer interface (BCI). Here's a simple breakdown:

- Brain Implant: A tiny device is placed on the surface of her brain. This implant picks up the electrical signals her brain generates when she thinks about speaking.
- Artificial Intelligence: AI algorithms interpret these signals and translate them into words or sentences in real time.

So when Ann thinks about saying something, the system reads her brain activity and converts it into spoken words. It's like her thoughts are given a voice.

𝐖𝐡𝐲 𝐢𝐬 𝐭𝐡𝐢𝐬 𝐢𝐦𝐩𝐨𝐫𝐭𝐚𝐧𝐭?

This technology isn't just about helping one person; it's a major step forward for anyone who has lost the ability to speak due to paralysis or other conditions. Regaining the ability to communicate verbally can drastically improve someone's quality of life, offering independence and a stronger connection with others.

With so many stories about AI's potential downsides, like deepfakes and privacy concerns, it's uplifting to see how artificial intelligence can make a real, positive difference in people's lives. Ann's experience highlights the amazing possibilities when technology and human determination come together. It's stories like these that remind us of the good that can come from innovation. While it's important to be mindful of the challenges, let's also celebrate the breakthroughs that have the power to change lives.

What are your thoughts on this incredible use of AI in healthcare?

#innovation #technology #future #management #startups
-
Brain Implant and AI Let Man with ALS Speak and Sing in Real Time Using His Own Voice:

🧠 A brain implant and AI decoder have enabled Casey Harrell, a man with ALS, to speak and sing again using a voice that sounds like his own, with near-zero lag.
🧠 The system captures brain signals from four implanted electrode arrays as Harrell attempts to speak, decoding them into real-time speech with intonation, emphasis, and emotional nuance, down to interjections like "hmm" and "eww."
🧠 Unlike earlier BCIs that needed users to mime full sentences, this one works continuously, decoding signals every 10 milliseconds. That allows users to interrupt, express emotion, and feel more included in natural conversation.
🧠 It even lets Harrell modulate pitch to sing basic melodies and change meaning through intonation, like distinguishing a question from a statement or stressing different words in a sentence.
🧠 The synthetic voice was trained on recordings of Harrell's real voice made before his ALS progressed, making the output feel deeply personal and familiar to him.
🧠 While listener comprehension is around 60%, the system's ability to express tone, emotion, and even made-up words marks a major leap beyond monotone speech, and it could adapt to other languages, including tonal ones.

#healthtech #ai
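Here's a hypothetical sketch of what frame-by-frame voice decoding could look like: every 10 ms, neural features map to vocoder parameters including a pitch track, which is what lets intonation (question vs. statement, stress, even melody) survive decoding. All dimensions and layers are illustrative assumptions, not the study's actual architecture.

```python
# Sketch of an "instantaneous" voice decoder: each 10 ms frame of neural
# features becomes spectral + pitch parameters for a vocoder.
import torch
import torch.nn as nn

class InstantVoiceDecoder(nn.Module):
    def __init__(self, n_channels=256, hidden=512, n_mels=80):
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, batch_first=True)
        self.mel_head = nn.Linear(hidden, n_mels)  # spectral envelope
        self.f0_head = nn.Linear(hidden, 1)        # pitch per 10 ms frame
        self.voiced_head = nn.Linear(hidden, 1)    # voiced/unvoiced flag

    def forward(self, neural, state=None):
        h, state = self.rnn(neural, state)
        return (self.mel_head(h), self.f0_head(h),
                torch.sigmoid(self.voiced_head(h)), state)

decoder = InstantVoiceDecoder()
state = None
for _ in range(100):  # one second of decoding at 10 ms per frame
    frame = torch.randn(1, 1, 256)  # fake neural features for one frame
    mels, f0, voiced, state = decoder(frame, state)
    # A neural vocoder (trained on pre-ALS recordings of the user's own
    # voice) would turn (mels, f0, voiced) into audio here.
```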
-
Researchers have successfully used a brain implant coupled with AI to enable a bilingual individual, unable to articulate words due to a stroke, to communicate in both English and Spanish. This development not only enhances our understanding of how the brain processes language but also opens up new possibilities for restoring speech to those unable to communicate verbally.

Known as Pancho, the participant demonstrated the ability to form coherent sentences in both languages with impressive accuracy, thanks to the neural patterns recognized and translated by the AI system. The findings suggest that different languages may not occupy distinct areas of the brain as previously thought, hinting at a more integrated neural basis for multilingualism. This technology represents a significant leap forward in neuroprosthetics, offering hope for personalized communication restoration in multilingual individuals.

Key Insights:
Dual Language Decoding 🗣️ - The AI system can interpret and translate neural patterns into both Spanish and English, adjusting in real time.
High Accuracy 🎯 - Achieved 88% accuracy in distinguishing between languages and 75% in decoding full sentences.
Unified Brain Activity 🧠 - Challenges prior assumptions with findings that both languages activate similar brain areas.
Future Applications 🔍 - Potential expansion to other languages with varying linguistic structures, enhancing universal applicability.
Enhanced Connection 💬 - Focuses not just on word replacement but on restoring deep personal connections through communication.

https://buff.ly/3V8SiXe
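As a toy illustration of the two-stage idea (first detect which language is being attempted, then decode in it), here's a sketch with invented numbers and stand-in models; the study's actual decoder is not described in this post and is certainly far more capable.

```python
# Toy two-stage bilingual decoder: classify the language from neural
# features, then route to a language-specific decoder. All invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Pretend calibration trials of neural features labeled by language.
X = rng.normal(size=(400, 128))
y = rng.integers(0, 2, 400)  # 0 = English, 1 = Spanish

language_classifier = LogisticRegression(max_iter=1000).fit(X, y)

def decode_sentence(features: np.ndarray, language: int) -> str:
    """Stand-in for a real per-language phrase decoder."""
    vocab = (["hello", "thank you"], ["hola", "gracias"])[language]
    return vocab[int(abs(features.sum())) % len(vocab)]

trial = rng.normal(size=(1, 128))       # one new attempted utterance
lang = int(language_classifier.predict(trial)[0])
print(decode_sentence(trial[0], lang))  # decoded in the detected language
```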
-
Trevor Noah dives into a fascinating topic: "How can AI empower people with disabilities?"

Saqib Shaikh, creator of Seeing AI, joins Trevor to show how AI is revolutionizing life for people who are blind or have low vision. He demonstrates how Seeing AI provides detailed descriptions of surroundings, from buildings and film crews to even birdhouses!

Why is this important?
● AI is not just tech; it's a tool for independence.
● It transforms how people with disabilities interact with the world.
● Seeing AI offers real-time information, enhancing daily experiences.

Key Takeaways from the Video:
1.) Empowerment through technology: AI like Seeing AI opens new possibilities.
2.) Creating independence: no more waiting for assistance; access information instantly.
3.) Breaking barriers: AI helps people navigate spaces, understand environments, and make decisions.

Kudos to Saqib Shaikh and his team! They are leading the way in making AI accessible, practical, and genuinely transformative.

What's next? Imagine a world where every tech innovation includes accessibility from the start. AI is a powerful step forward, but there's so much more to explore.

P.S. Check out the video; it's an eye-opener!
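The core trick behind apps like Seeing AI, turning a camera frame into a natural-language description, can be approximated with open models. Here's a generic sketch using a Hugging Face captioning model; Seeing AI's own pipeline is Microsoft's and surely differs, and the image filename is a stand-in.

```python
# Generic scene-description sketch: caption one image with an open
# image-captioning model (BLIP), as a rough analogue of what Seeing AI
# does continuously with a live camera feed.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("street_scene.jpg")  # stand-in for a live camera frame
inputs = processor(images=image, return_tensors="pt")
caption_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(caption_ids[0], skip_special_tokens=True))
# A real assistive app would pass this text to a screen reader or TTS.
```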