UC Berkeley and UCSF just brought real-time speech back to someone who couldn't speak for 18 years (insane!). For people with paralysis and anarthria, the delay and effort of current AAC tools can make natural conversation nearly impossible.

𝗧𝗵𝗶𝘀 𝗻𝗲𝘄 𝗔𝗜-𝗱𝗿𝗶𝘃𝗲𝗻 𝗻𝗲𝘂𝗿𝗼𝗽𝗿𝗼𝘀𝘁𝗵𝗲𝘀𝗶𝘀 𝘀𝘁𝗿𝗲𝗮𝗺𝘀 𝗳𝗹𝘂𝗲𝗻𝘁, 𝗽𝗲𝗿𝘀𝗼𝗻𝗮𝗹𝗶𝘇𝗲𝗱 𝘀𝗽𝗲𝗲𝗰𝗵 𝗱𝗶𝗿𝗲𝗰𝘁𝗹𝘆 𝗳𝗿𝗼𝗺 𝗯𝗿𝗮𝗶𝗻 𝘀𝗶𝗴𝗻𝗮𝗹𝘀 𝗶𝗻 𝗿𝗲𝗮𝗹 𝘁𝗶𝗺𝗲 𝘄𝗶𝘁𝗵 𝗻𝗼 𝘃𝗼𝗰𝗮𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻 𝗿𝗲𝗾𝘂𝗶𝗿𝗲𝗱.

1. Restored speech in a participant using 253-channel ECoG, 18 years after a brainstem stroke and complete speech loss.
2. Trained deep learning decoders to synthesize audio and text every 80 ms from silent speech attempts, with no vocal sound needed.
3. Streamed speech at 47.5 words per minute with just 1.12 s latency, about 8× faster than prior state-of-the-art neuroprostheses.
4. Matched the participant's original voice using a pre-injury recording, bringing back not just words but vocal identity.

The bimodal decoder architecture they used was especially cool. They achieved low latency and synchronized output by sharing a single neural encoder while employing separate joiners and language models for the acoustic-speech units and the text. Other details: convolutional layers feeding unidirectional GRUs, plus LSTM-based language models.

Absolutely love seeing AI used in practical ways to bring back joy and hope to people who are paralyzed!!

Here's the awesome work: https://lnkd.in/ghqX5EB2

Congrats to Kaylo Littlejohn, Cheol Jun Cho, Jessie Liu, Edward Chang, Gopala Krishna Anumanchipalli, and co!

I post my takes on the latest developments in health AI – 𝗰𝗼𝗻𝗻𝗲𝗰𝘁 𝘄𝗶𝘁𝗵 𝗺𝗲 𝘁𝗼 𝘀𝘁𝗮𝘆 𝘂𝗽𝗱𝗮𝘁𝗲𝗱! Also, check out my health AI blog here: https://lnkd.in/g3nrQFxW
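To make the architecture concrete, here is a minimal PyTorch sketch of the bimodal idea described above: a shared causal encoder (convolution + unidirectional GRU, so it can stream frame by frame) feeding two separate output heads, one for acoustic-speech units and one for text. All layer names, sizes, and hyperparameters here are illustrative assumptions, not the authors' actual implementation.

```python
# Hedged sketch of a bimodal streaming decoder: shared encoder, separate
# heads for acoustic-speech units and text. Sizes are assumptions.
import torch
import torch.nn as nn

class BimodalStreamingDecoder(nn.Module):
    def __init__(self, n_channels=253, hidden=256, n_units=100, n_tokens=41):
        super().__init__()
        # Shared neural encoder: temporal convolution + unidirectional GRU,
        # so each incoming window can be decoded causally (no future context).
        self.conv = nn.Conv1d(n_channels, hidden, kernel_size=4, stride=4)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        # Separate heads: one emits acoustic-speech units (for a vocoder),
        # one emits text tokens. In the full system each would feed its
        # own joiner and language model.
        self.unit_head = nn.Linear(hidden, n_units)
        self.text_head = nn.Linear(hidden, n_tokens)

    def forward(self, ecog, state=None):
        # ecog: (batch, channels, time) neural features
        x = self.conv(ecog).transpose(1, 2)  # (batch, frames, hidden)
        x, state = self.gru(x, state)        # causal, streamable
        return self.unit_head(x), self.text_head(x), state

# Streaming use: feed one chunk at a time, carrying the GRU state forward
# so the two modalities stay synchronized across chunks.
model = BimodalStreamingDecoder()
state = None
chunk = torch.randn(1, 253, 16)  # one incoming window of neural features
units, text, state = model(chunk, state)
print(units.shape, text.shape)
```

The key design point is that because the GRU is unidirectional and its hidden state is carried between chunks, the decoder never waits for the end of a sentence, which is what makes the low-latency streaming output possible.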
Latest Trends in Assistive Technology
Explore top LinkedIn content from expert professionals.
Summary
Assistive technology is rapidly evolving, using advanced AI and brain-computer interfaces (BCIs) to empower individuals with speech or mobility impairments. These innovations are transforming lives by restoring communication abilities and enhancing accessibility for people with disabilities.
- Explore BCI advancements: Cutting-edge brain-computer interfaces now translate neural signals into speech and movements, enabling real-time communication for individuals with paralysis or speech impairments.
- Focus on personalization: New AI-powered systems can recreate a person’s original voice or even integrate facial expressions for more natural and emotional communication.
- Consider accessibility challenges: While these technologies are groundbreaking, ensuring affordability and accessibility will be vital for their widespread adoption and impact.
-
AI just gave a paralyzed woman her voice back.

In 2005, Ann, a 30-year-old teacher, suffered a stroke that left her unable to speak or move, trapped in her own body with locked-in syndrome for almost two decades. But now, thanks to a breakthrough brain-computer interface (BCI), she is communicating again through a digital avatar. This new AI-powered technology decodes brain signals directly into speech and facial expressions.

Here's how the team at University of California, San Francisco made this breakthrough:

1. Brain signal decoding
Researchers implanted a 253-electrode array on the surface of Ann's brain, capturing the signals that would normally control speech. This allowed her to communicate at nearly 80 words per minute, simply by attempting to say the words.

2. Recreating a natural voice
Researchers used a sophisticated AI that synthesized Ann's voice from a pre-stroke recording: her wedding day speech. This wasn't just robotic speech generation; it brought back her real voice.

3. Bringing emotion and expression back
The team went further by combining speech synthesis with facial animation. A screen displayed Ann's digital avatar, translating her brain signals into facial expressions so it could smile, frown, and express emotion along with her restored voice.

4. The road to independence
The next step in this research is a wireless version of the system that would free Ann (and others like her) from physical connections to a computer.

This is life-changing tech that opens new doors for millions of people living with severe paralysis, and a pivotal moment at the intersection of AI, neuroscience, and healthcare. But there's one concern: accessibility. While this technology is revolutionary, its high cost makes it inaccessible for many who need it most.

Could AI-powered speech technology be the future of healthcare for those with paralysis?

Video credit: UC San Francisco (UCSF) on YouTube.
#innovation #ai #technology #healthcare
-
AI Restores Voice After 18 Years with Assistive Tech

Imagine being unable to speak for 18 years, trapped in silence, unable to express your thoughts.

In August 2023, UCSF and UC Berkeley developed a BCI enabling a paralyzed woman to speak. By decoding neural signals associated with speech attempts, this innovation transformed her attempted speech into synthesized speech. In August 2024, BrainGate unveiled a system translating brain signals to speech with 97.5% accuracy. This advancement marks a significant leap toward practical applications for individuals with speech impairments caused by ALS, strokes, or other conditions.

Why This Matters
Empathy Meets Innovation: These breakthroughs restore dignity and connection for those without a voice.
Future of AI + Neuroscience: Accurate, responsive BCIs hold immense potential to revolutionize healthcare.
Inspiring Possibilities: This technology could restore speech, mobility, and vision, and transform lives.

I see these breakthroughs as a call to action for all of us: How can we leverage AI to transform lives and enhance human connection? Let's keep pushing boundaries, because the intersection of tech and empathy is where true innovation happens.

What are your thoughts on the future of AI in healthcare and assistive technology? Let's discuss below!

AppLogiQ | Soorya Narayanan

#applogiq #artificialintelligence #braincomputerinterface #assistivetechnology #aiinnovation #healthcaretech #futureofhealthcare #innovationforchange #makedigitallives
-
#Meta #AI’s Latest Breakthrough: Decoding Thoughts into Text: What’s Next?

Imagine a future where your brainwaves translate directly into words on a screen. No typing, no speaking, just thinking. Meta AI’s latest research is turning this into reality. Their new model can decode brain activity into text with surprising accuracy, unlocking groundbreaking possibilities:

🔹 Assistive communication for individuals with speech impairments or paralysis (e.g., stroke patients)
🔹 Enhanced human-AI interaction through direct brain-computer interfaces
🔹 Improved understanding of language processing disorders
🔹 Development of more intuitive and responsive AI language models
🔹 Personalized education, where learning adapts in real time to cognitive engagement
🔹 Cognitive assessment tools that measure understanding beyond traditional tests
🔹 Greater accessibility in education, enabling students with disabilities to learn without barriers
🔹 Direct knowledge transfer, where brain-computer interfaces could one day allow near-instant acquisition of complex information, reshaping how we learn and teach by 2050

This could redefine not only how we interact with technology but also how we teach, learn, and communicate. But every breakthrough brings ethical concerns. #Privacy, consent, and potential misuse are critical questions we must address.

So, what’s your take? Would you embrace brain-to-text technology, or does it raise too many ethical red flags? Let’s discuss, and please share this post.

#DrGPT #AI #Neuroscience #Technology #MetaAI #FutureOfCommunication #HealthcareInnovation #EdTech #NeuroEducation #FutureOfLearning
-
Innovations like hands-free computer operations and screen readers for the visually impaired could reshape how individuals with disabilities contribute to their work and drive business success in the coming year. Excluding people with disabilities from the workforce can cost up to 7% of a country’s GDP, according to a 2023 World Economic Forum report. Implementing assistive AI in business strategies could lead to a 28% increase in revenue and a 30% increase in profit margins for companies. "As someone who navigates life in a wheelchair, I’ve seen firsthand how tools like voice recognition and adaptive interfaces open doors that were previously closed," says diversity and inclusion leader Alister Ong. "These advancements aren’t just about convenience — they’re about giving us the autonomy to perform, contribute, and thrive." What other ways do you think AI can help make the workplace more accessible in 2025 and beyond? Weigh in below or post a video with #BigIdeas2025. And check out the rest of this year’s Big Ideas here: https://lnkd.in/gQphjPrt. ✍️ Neha Jain Kale
-
Exciting news from the New England Journal of Medicine! A new study reveals a speech neuroprosthesis that converts the attempted speech of a man with ALS into text with 97.5% accuracy. This technology is allowing him to connect with his loved ones and colleagues directly from his home. How do you think advancements like this could impact the lives of those with speech challenges? #ALS #Neuroprosthesis #MedicalInnovation #SpeechTechnology #Accessibility #HealthcareAdvances #DigitalHealth #PatientCare #InclusiveTech #MedicalResearch #AIinHealthcare #Neuroscience #SpeechDecoding #AssistiveTechnology #LifeChangingTech #FutureOfMedicine #HealthTech #PatientVoice #MedTech #NEJM #CommunicationMatters