Brain-Computer Interface Innovations

Explore top LinkedIn content from expert professionals.

  • View profile for Luke Yun

    building AI computer fixer | AI Researcher @ Harvard Medical School, Oxford

    32,813 followers

    UC Berkeley and UCSF just brought real-time speech back to someone who couldn’t speak for 18 years (insane!). For people with paralysis and anarthria, the delay and effort of current AAC tools can make natural conversation nearly impossible. 𝗧𝗵𝗶𝘀 𝗻𝗲𝘄 𝗔𝗜-𝗱𝗿𝗶𝘃𝗲𝗻 𝗻𝗲𝘂𝗿𝗼𝗽𝗿𝗼𝘀𝘁𝗵𝗲𝘀𝗶𝘀 𝘀𝘁𝗿𝗲𝗮𝗺𝘀 𝗳𝗹𝘂𝗲𝗻𝘁, 𝗽𝗲𝗿𝘀𝗼𝗻𝗮𝗹𝗶𝘇𝗲𝗱 𝘀𝗽𝗲𝗲𝗰𝗵 𝗱𝗶𝗿𝗲𝗰𝘁𝗹𝘆 𝗳𝗿𝗼𝗺 𝗯𝗿𝗮𝗶𝗻 𝘀𝗶𝗴𝗻𝗮𝗹𝘀 𝗶𝗻 𝗿𝗲𝗮𝗹 𝘁𝗶𝗺𝗲 𝘄𝗶𝘁𝗵 𝗻𝗼 𝘃𝗼𝗰𝗮𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻 𝗿𝗲𝗾𝘂𝗶𝗿𝗲𝗱.

    1. Restored speech in a participant using 253-channel ECoG, 18 years after a brainstem stroke and complete speech loss.
    2. Trained deep learning decoders to synthesize audio and text every 80 ms from silent speech attempts, with no vocal sound needed.
    3. Streamed speech at 47.5 words per minute with just 1.12 s latency, roughly 8× faster than prior state-of-the-art neuroprostheses.
    4. Matched the participant’s original voice using a pre-injury recording, bringing back not just words but vocal identity.

    The bimodal decoder architecture they used was cool. It’s interesting how they achieved low latency and synchronized output: the system shares a single neural encoder and employs separate joiners and language models for the acoustic-speech units and the text. Other tidbits: convolutional layers with unidirectional GRUs, and LSTM-based language models.

    Absolutely love seeing AI used in practical ways to bring back joy and hope to people who are paralyzed!!

    Here's the awesome work: https://lnkd.in/ghqX5EB2

    Congrats to Kaylo Littlejohn, Cheol Jun Cho, Jessie Liu, Edward Chang, Gopala Krishna Anumanchipalli, and co!

    I post my takes on the latest developments in health AI – 𝗰𝗼𝗻𝗻𝗲𝗰𝘁 𝘄𝗶𝘁𝗵 𝗺𝗲 𝘁𝗼 𝘀𝘁𝗮𝘆 𝘂𝗽𝗱𝗮𝘁𝗲𝗱! Also, check out my health AI blog here: https://lnkd.in/g3nrQFxW
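
    A minimal PyTorch sketch of that bimodal streaming idea: one shared causal encoder (convolution plus a unidirectional GRU) feeding two separate heads, one for acoustic-speech units and one for text, so both output streams stay synchronized. The layer sizes, vocabulary sizes, and head design here are illustrative assumptions, not the authors' published architecture.

        import torch
        import torch.nn as nn

        class BimodalStreamingDecoder(nn.Module):
            def __init__(self, n_channels=253, hidden=256, n_units=100, n_tokens=41):
                super().__init__()
                # Shared encoder: causal conv + unidirectional GRU, so each
                # output frame depends only on past neural activity.
                self.conv = nn.Conv1d(n_channels, hidden, kernel_size=5, padding=4)
                self.gru = nn.GRU(hidden, hidden, batch_first=True)
                # Separate heads for the two synchronized output streams.
                self.acoustic_head = nn.Linear(hidden, n_units)  # speech-unit logits
                self.text_head = nn.Linear(hidden, n_tokens)     # character logits

            def forward(self, ecog, state=None):
                # ecog: (batch, time, channels) neural features per 80 ms step;
                # truncating the conv output keeps it causal.
                x = self.conv(ecog.transpose(1, 2))[..., : ecog.size(1)]
                x, state = self.gru(x.transpose(1, 2), state)  # state carries across chunks
                return self.acoustic_head(x), self.text_head(x), state

        model = BimodalStreamingDecoder()
        chunk = torch.randn(1, 4, 253)      # one small streamed chunk of features
        units, chars, state = model(chunk)  # decode both modalities together
        print(units.shape, chars.shape)     # (1, 4, 100), (1, 4, 41)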

  • View profile for Vineet Agrawal

    Helping Early Healthtech Startups Raise $1-3M Funding | Award-Winning Serial Entrepreneur | Best-Selling Author

    50,124 followers

    AI just gave a paralyzed woman her voice back.

    In 2005, Ann, a 30-year-old teacher, suffered a stroke that left her unable to speak or move - trapped in her own body with Locked-In Syndrome for almost two decades. But now, thanks to a breakthrough brain-computer interface (BCI), she is communicating again - through a digital avatar. This new AI-powered technology decodes brain signals directly into speech and facial expressions. Here’s how the team at University of California, San Francisco made this breakthrough:

    1. Brain signal decoding
    Researchers implanted a 253-electrode array on the surface of Ann’s brain, capturing the brain signals that would normally control speech. This allowed her to communicate at nearly 80 words per minute, just by thinking the words.

    2. Recreating a natural voice
    Researchers used a sophisticated AI that synthesized Ann’s voice from a pre-stroke recording - her wedding day speech. This wasn’t just robotic speech generation; it brought back her real voice.

    3. Bringing emotion and expression back
    The team went further by combining speech synthesis with facial animation. A screen displayed Ann’s digital avatar, translating her brain signals into facial expressions, allowing it to smile, frown, and express emotions along with her restored voice.

    4. The road to independence
    The next step in this research is to develop a wireless version of the system that would free Ann (and others like her) from the need for physical connections to a computer.

    This is life-changing tech that opens new doors for millions of people living with severe paralysis. It is a pivotal moment at the intersection of AI, neuroscience, and healthcare. But there's one concern: accessibility. While this technology is revolutionary, its high cost makes it inaccessible for many who need it most.

    Could AI-powered speech technology be the future of healthcare for those with paralysis?

    Video credit: UC San Francisco (UCSF) on YouTube.

    #innovation #ai #technology #healthcare
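
    A toy end-to-end illustration of the decoding stages described above: turn neural features into a word, render it in a "personal voice," and drive an avatar expression. Every component here is a stand-in (nearest-centroid lookup, string tags); the real UCSF system uses deep learning models trained on Ann's attempted speech.

        import numpy as np

        rng = np.random.default_rng(0)
        VOCAB = ["hello", "yes", "no", "thank", "you"]

        # Stage 1: stand-in decoder from 253-channel features to a word.
        centroids = rng.normal(size=(len(VOCAB), 253))  # pretend "trained" model

        def decode_word(features):
            return VOCAB[int(np.argmin(np.linalg.norm(centroids - features, axis=1)))]

        # Stage 2: stand-in personal voice; the real system conditions synthesis
        # on a pre-stroke recording, here it is just a speaker tag.
        def synthesize(word, speaker="pre_stroke_voice"):
            return f"[{speaker}] {word}"

        # Stage 3: stand-in avatar layer mapping words to facial expressions.
        EXPRESSIONS = {"hello": "smile", "no": "frown"}

        def animate(word):
            return EXPRESSIONS.get(word, "neutral")

        features = rng.normal(size=253)  # one attempted word's neural features
        word = decode_word(features)
        print(synthesize(word), "| avatar:", animate(word))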

  • View profile for Harvey Castro, MD, MBA.

    ER Physician | Chief AI Officer, Phantom Space | AI & Space-Tech Futurist | 5× TEDx | Advisor: Singapore MoH | Author ‘ChatGPT & Healthcare’ | #DrGPT™

    49,504 followers

    #Meta #AI’s Latest Breakthrough: Decoding Thoughts into Text: What’s Next?

    Imagine a future where your brainwaves translate directly into words on a screen. No typing, no speaking—just thinking. Meta AI’s latest research is turning this into reality. Their new model can decode brain activity into text with surprising accuracy, unlocking groundbreaking possibilities:

    🔹 Assistive communication for individuals with speech impairments or paralysis (stroke patients)
    🔹 Enhanced human-AI interaction through direct brain-computer interfaces
    🔹 Improved understanding of language processing disorders
    🔹 Development of more intuitive and responsive AI language models
    🔹 Personalized education, where learning adapts in real time to cognitive engagement
    🔹 Cognitive assessment tools that measure understanding beyond traditional tests
    🔹 Greater accessibility in education, enabling students with disabilities to learn without barriers
    🔹 Direct knowledge transfer, where brain-computer interfaces could one day allow for near-instant acquisition of complex information: reshaping how we learn and teach by 2050

    This could redefine not only how we interact with technology but also how we teach, learn, and communicate. But every breakthrough brings ethical concerns. #Privacy, consent, and potential misuse are critical questions we must address.

    So, what’s your take? Would you embrace brain-to-text technology, or does it raise too many ethical red flags? Let’s discuss. Please share this post.

    #DrGPT #AI #Neuroscience #Technology #MetaAI #FutureOfCommunication #HealthcareInnovation #EdTech #NeuroEducation #FutureOfLearning
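
    For a concrete sense of the general brain-to-text recipe, here is a hedged PyTorch sketch: slice the non-invasive recording into short windows and classify each window into a character. Meta's actual models are far larger and trained with more sophisticated objectives; the sensor count, window length, and character-set size below are assumptions for illustration.

        import torch
        import torch.nn as nn

        N_SENSORS, WINDOW, N_CHARS = 270, 120, 29  # assumed MEG-like dimensions

        # A small window-to-character classifier over brain-signal windows.
        model = nn.Sequential(
            nn.Conv1d(N_SENSORS, 128, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool features over time
            nn.Flatten(),
            nn.Linear(128, N_CHARS),   # logits over the character set
        )

        windows = torch.randn(8, N_SENSORS, WINDOW)  # batch of signal windows
        logits = model(windows)
        print(logits.shape)  # (8, 29): one character prediction per window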

  • View profile for Gary Monk

    LinkedIn ‘Top Voice’ >> Follow for the Latest Trends, Insights, and Expert Analysis in Digital Health & AI

    43,847 followers

    Brain Implant and AI Let Man with ALS Speak and Sing in Real Time Using His Own Voice:

    🧠 A brain implant and AI decoder have enabled Casey Harrell, a man with ALS, to speak and sing again using a voice that sounds like his own, with near-zero lag
    🧠 The system captures brain signals from four implanted electrode arrays as Harrell attempts to speak, decoding them into real-time speech with intonation, emphasis, and emotional nuance, down to interjections like “hmm” and “eww”
    🧠 Unlike earlier BCIs that needed users to mime full sentences, this one works continuously, decoding signals every 10 milliseconds. That allows users to interrupt, express emotion, and feel more included in natural conversation
    🧠 It even lets Harrell modulate pitch to sing basic melodies and change meaning through intonation, like distinguishing a question from a statement or stressing different words in a sentence
    🧠 The synthetic voice was trained on recordings of Harrell’s real voice before ALS progressed, making the output feel deeply personal and familiar to him
    🧠 While listener comprehension is around 60%, the system’s ability to express tone, emotion, and even made-up words marks a major leap beyond monotone speech—and could adapt to other languages, including tonal ones

    #healthtech #ai
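
    A minimal sketch of the "content plus prosody" idea behind that system: alongside phoneme-like units, the decoder also regresses pitch and loudness for every 10 ms frame, which is what lets questions, emphasis, and melody survive into the synthesized voice. Feature dimensions, layer sizes, and the three-head split are illustrative assumptions, not the published architecture.

        import torch
        import torch.nn as nn

        class ProsodyAwareDecoder(nn.Module):
            def __init__(self, n_features=256, hidden=192, n_phones=40):
                super().__init__()
                # Unidirectional GRU so decoding can run frame by frame.
                self.rnn = nn.GRU(n_features, hidden, batch_first=True)
                self.phone_head = nn.Linear(hidden, n_phones)  # what is said
                self.pitch_head = nn.Linear(hidden, 1)         # f0: melody, questions
                self.energy_head = nn.Linear(hidden, 1)        # loudness: emphasis

            def forward(self, x, state=None):
                h, state = self.rnn(x, state)
                return (self.phone_head(h), self.pitch_head(h),
                        self.energy_head(h), state)

        dec = ProsodyAwareDecoder()
        frames = torch.randn(1, 100, 256)   # ~1 s of 10 ms neural feature frames
        phones, f0, energy, _ = dec(frames)
        print(phones.shape, f0.shape, energy.shape)  # (1,100,40) (1,100,1) (1,100,1)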

  • View profile for Bob Carver

    CEO Cybersecurity Boardroom™ | CISSP, CISM, M.S. | Top Cybersecurity Voice

    51,040 followers

    Enhanced Brain Implant Translates Stroke Survivor’s Thoughts Into Nearly Instant Speech Using Artificial Intelligence

    The system harnesses technology similar to that of devices like Alexa and Siri, according to the researchers, and improves on a previous model.

    [Image: Researchers connect stroke survivor Ann Johnson's brain implant to the experimental computer, which will allow her to speak by thinking words. Photo: Noah Berger]

    A brain implant that converts neuron activity into audible words has given a stroke survivor with severe paralysis almost instantaneous speech. Ann Johnson became paralyzed and lost the ability to speak after suffering a stroke in 2005, when she was 30 years old. Eighteen years later, she consented to being surgically fitted with an experimental, thin, brain-reading implant that connects to a computer, officially called a brain-computer interface (BCI). Researchers placed the implant on her motor cortex, the part of the brain that controls physical movement, and it tracked her brain waves as she thought the words she wanted to say.

    As detailed in a study published Monday in the journal Nature Neuroscience, researchers used advances in artificial intelligence (A.I.) to improve the device’s ability to quickly translate that brain activity into synthetic speech—now, it’s almost instantaneous. The technology “brings the same rapid speech decoding capacity of devices like Alexa and Siri to neuroprostheses,” study co-author Gopala Anumanchipalli, a computer scientist at the University of California, Berkeley, says in a statement. Neuroprostheses are devices that can aid or replace lost bodily functions by connecting to the nervous system. “Using a similar type of algorithm, we found that we could decode neural data and, for the first time, enable near-synchronous voice streaming,” he adds. “The result is more naturalistic, fluent speech synthesis.”

    #AI #medicine #BrainComputerInterface #brainimplant #strokesurvivor #brainvoicesynthesis
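
    A toy calculation to make the "nearly instant" claim concrete: a streaming decoder emits audio as soon as the first chunk is decoded, instead of waiting for the whole attempted sentence. The 80 ms step is borrowed from the related UC Berkeley/UCSF streaming work described earlier; the utterance length is invented for illustration.

        CHUNK_MS = 80    # assumed decoding step, per the streaming study above
        N_CHUNKS = 40    # ~3.2 s attempted utterance, invented for illustration

        sentence_level = CHUNK_MS * N_CHUNKS  # wait for the full attempt to finish
        streaming = CHUNK_MS * 1              # first audio after a single chunk

        print(f"sentence-level decoding: first audio after {sentence_level} ms")
        print(f"streaming decoding:      first audio after {streaming} ms")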

  • View profile for Andreas Sjostrom

    LinkedIn Top Voice | AI Agents | Robotics | Vice President at Capgemini's Applied Innovation Exchange | Author | Speaker | San Francisco | Palo Alto

    13,551 followers

    Last week, we explored how robots might move, feel, and understand like humans. Now, we flip the lens and tap into one of the most exciting frontiers in human augmentation: Brain-Computer Interfaces (BCIs). BCIs connect the brain directly to machines, translating neural activity into signals that control computers, devices, or even AI agents. With the rise of Agentic AI, a new possibility is emerging: What if your intentions could become instructions, from brainwaves to prompts, directing AI with intent alone? The most intuitive interface isn’t voice; it’s thought. A Thought-to-Agent Interface (T2A) links your brain activity to an AI Agent in real time, translating mental focus, intention, or emotional state into prompts, actions, or decisions.

    These are some use-case examples...
    🧠 In Work: You're in deep focus. You imagine a slide, your AI Agent starts drafting it. You think of a person; it pulls up your last conversation.
    🧠 In Accessibility: For someone unable to speak or type, the interface interprets intent from brain signals and helps control devices, compose messages, or navigate systems.
    🧠 In Creativity: A designer imagines a shape, a scene, or a melody, and the AI Agent renders variations in real time, refining the output through guided intent.

    These are some current research projects...
    📚 Meta AI’s Brain-to-Text Decoding: Decodes full sentences from non-invasive brain activity with up to 80% character accuracy, bridging neural intent to digital language. https://lnkd.in/gTEJpa4e
    📚 UC Berkeley’s Brain-to-Voice Neuroprosthesis: Translates brain signals into audible speech, restoring naturalistic communication for people with speech loss. https://lnkd.in/g_D3Xeup
    📚 Caltech’s Mind-to-Text Interface: Achieves 79% accuracy in translating imagined internal speech into real-time text, enabling seamless brain-to-device communication. https://lnkd.in/gEuVKreq

    These are some startups to watch...
    🚀 Neurable: EEG-based wearables decoding cognitive load & focus in real time. https://www.neurable.com/
    🚀 OpenBCI: Makers of Galea, a headset combining EEG, EMG, eye tracking, and skin conductance for immersive neural interfacing. https://lnkd.in/girt4PAW
    🚀 Cognixion: Brain-powered communication integrated with AR and speech synthesis for non-verbal users. https://www.cognixion.com/
    🚀 Paradromics: High-bandwidth BCI for translating neural activity into speech or system commands for those with severe impairments. https://lnkd.in/giepGKH4

    What is a likely time horizon...
    1–2 years: Wearable EEG interfaces paired with AI for narrow tasks: adaptive UI, hands-free control, attention-based interaction.
    3–5 years: Thought-to-agent pipelines for work, accessibility, and creative tools, personalized to individual brain patterns and cognitive signatures.

    The future isn’t just AI that understands your prompts. It’s AI that understands you as soon as you think.

    Next up: Multimodal AI Sensory Fusion (“Glass Whisperer”)
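
    A speculative sketch of what a minimal T2A loop could look like: classify an EEG feature window into a coarse intent, then template that intent into a prompt for an agent. The intent set, the linear "classifier," and the prompt templates are all hypothetical placeholders, not any vendor's API.

        import numpy as np

        rng = np.random.default_rng(1)
        INTENTS = ["draft_slide", "open_last_conversation", "render_sketch"]
        weights = rng.normal(size=(len(INTENTS), 64))  # stand-in for a trained model

        def classify_intent(eeg_features):
            # Score each intent against the window and keep the best match.
            return INTENTS[int(np.argmax(weights @ eeg_features))]

        def intent_to_prompt(intent):
            templates = {  # hypothetical prompt templates per intent
                "draft_slide": "Draft a slide on the topic I am focusing on.",
                "open_last_conversation": "Open my last conversation with this person.",
                "render_sketch": "Render variations of the shape I am imagining.",
            }
            return templates[intent]

        eeg_window = rng.normal(size=64)  # one window of EEG features
        print(intent_to_prompt(classify_intent(eeg_window)))  # sent to the agent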

  • View profile for Nichol Bradford

    AI+HI Executive | Investor & Trustee | Keynote Speaker | Human Potential in the Age of AI

    20,752 followers

    As a guest of The Neurorights Foundation and Jamie Daves, on Friday I watched 𝗪𝗲𝗿𝗻𝗲𝗿 𝗛𝗲𝗿𝘇𝗼𝗴'𝘀 𝗧𝗵𝗲𝗮𝘁𝗲𝗿 𝗼𝗳 𝗧𝗵𝗼𝘂𝗴𝗵𝘁, on the future of Neurotech. It deeply resonated with my perspective on the future of human rights and tech.

    As an African American whose father worked as a plumber, I'm acutely thoughtful about ‘access for all’ to powerful new technologies that can either bridge or widen societal divides. The film raises crucial questions about accessibility and equity in the coming neurotech revolution through conversations with visionaries like Professor Yuste and human rights lawyer Jared Genser of The Neurorights Foundation.

    What struck me most was the urgency of ensuring these technologies don't create an unbridgeable gap between those who can afford cognitive enhancement and those who cannot. Neurotech is moving faster than you think. Without ethical frameworks and equal access, we risk creating a world where cognitive advantages become permanently concentrated among the privileged few. The difference will no longer be in perception but hardwired into human capability.

    Have you considered what equal access to neurotechnology might mean for the future of human potential and social equity?

    #Neuroscience #SocialJustice #TechEquity #HumanRights #WernerHerzog #Innovation #Accessibility #FutureOfTechnology

  • View profile for Dipu Patel, DMSc, MPAS, ABAIM, PA-C

    📚🤖🌐 Educating the next generation of digital health clinicians and consumers | Digital Health + AI Thought Leader | Speaker | Strategist | Author | Innovator | Board Executive Leader | Mentor | Consultant | Advisor | TheAIPA

    5,154 followers

    Researchers have successfully used a brain implant coupled with AI to enable a bilingual individual, unable to articulate words due to a stroke, to communicate in both English and Spanish. This development not only enhances our understanding of how the brain processes language but also opens up new possibilities for restoring speech to those unable to communicate verbally.

    Known as Pancho, the participant demonstrated the ability to form coherent sentences in both languages with impressive accuracy, thanks to the neural patterns recognized and translated by the AI system. The findings suggest that different languages may not occupy distinct areas of the brain as previously thought, hinting at a more integrated neural basis for multilingualism. This technology represents a significant leap forward in neuroprosthetics, offering hope for personalized communication restoration in multilingual individuals.

    Key Insights:
    Dual Language Decoding 🗣️ - The AI system can interpret and translate neural patterns into both Spanish and English, adjusting in real time.
    High Accuracy 🎯 - Achieved 88% accuracy in distinguishing between languages and 75% in decoding full sentences.
    Unified Brain Activity 🧠 - Challenges prior assumptions with findings that both languages activate similar brain areas.
    Future Applications 🔍 - Potential expansion to other languages with varying linguistic structures, enhancing universal applicability.
    Enhanced Connection 💬 - Focuses not just on word replacement but on restoring deep personal connections through communication.

    https://buff.ly/3V8SiXe?
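
    A minimal sketch of the bilingual decoding idea those accuracy numbers describe: a shared encoder over the neural features, a language-ID head that first picks English or Spanish, and a per-language word head. The layer sizes, vocabularies, and exact head arrangement are assumptions for illustration, not the study's architecture.

        import torch
        import torch.nn as nn

        class BilingualDecoder(nn.Module):
            def __init__(self, n_features=128, hidden=128, n_words=50):
                super().__init__()
                self.encoder = nn.GRU(n_features, hidden, batch_first=True)
                self.lang_head = nn.Linear(hidden, 2)      # English vs Spanish
                self.en_head = nn.Linear(hidden, n_words)  # English word logits
                self.es_head = nn.Linear(hidden, n_words)  # Spanish word logits

            def forward(self, x):
                h, _ = self.encoder(x)
                last = h[:, -1]                        # summary of one attempted word
                lang = self.lang_head(last).argmax(-1)  # decide the language first
                words = torch.where((lang == 0).unsqueeze(-1),
                                    self.en_head(last), self.es_head(last))
                return lang, words

        dec = BilingualDecoder()
        lang, word_logits = dec(torch.randn(2, 30, 128))  # two attempted words
        print(lang.shape, word_logits.shape)              # (2,), (2, 50)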

  • View profile for Jon Krohn

    Co-Founder of Y Carrot 🥕 Fellow at Lightning A.I. ⚡️ SuperDataScience Host 🎙️

    42,970 followers

    Dr. David Moses and his colleagues have pulled off a miracle with A.I.: allowing paralyzed patients to "speak" through a video avatar in real time — using brain waves alone. In today's episode, David details how ML makes this possible 🤯

    David:
    • Is an adjunct professor at the University of California, San Francisco.
    • Is the project lead on the BRAVO (Brain-Computer Interface Restoration of Arm and Voice) clinical trial.
    • The success of this extraordinary BRAVO project led to an article in the prestigious journal Nature and a YouTube video that already has over 3 million views.

    Today’s episode does touch on specific machine learning (ML) terminology at points, but otherwise should be fascinating to anyone who’d like to hear how A.I. is facilitating real-life miracles. In this episode, David details:
    • The genesis of the BRAVO project.
    • The data and the ML models they’re using on the BRAVO project in order to predict text, speech sounds, and facial expressions from the brain activity of paralyzed patients.
    • What’s next for this exceptional project, including how long it might be before these brain-to-speech capabilities are available to anyone who needs them.

    The SuperDataScience Podcast is available on all major podcasting platforms and a video version is on YouTube. I've left a comment for quick access to today's episode below ⬇️

    #superdatascience #machinelearning #ai #neuroscience #medicine

  • View profile for Sanjeev Valentine

    Helping MedTech Executives Grow Their Teams & Careers

    21,222 followers

    Brain-Computer Interfaces: Ushering in a New Era in MedTech

    Where human thoughts meet technological innovation. What if your brain controlled devices directly? BCIs are turning this vision into reality by creating seamless pathways between the brain and external systems, enabling:
    ▪️ Restored Mobility: Robotic limbs empowering paralysis patients.
    ▪️ Neurological Breakthroughs: Advanced treatments for epilepsy, depression, and beyond.
    ▪️ Communication Transformation: Giving a voice to those with severe motor impairments.

    Innovations like Neuralink’s FDA-approved trials, Synchron’s minimally invasive Stentrode system, and Precision Neuroscience’s high-resolution Layer 7 implant are propelling BCIs into practical use. As investments soar, are we prepared for BCIs to redefine MedTech and lives worldwide?

    Credit: MEDWIRE.AI

    #MedTech #BrainComputerInterfaces #Innovation #Neuralink #Synchron #BCI #MedicalDevices
