Connecting ALS

This week, Jeremy is joined by the co-directors of the UC Davis Neuroprosthetics Lab, Dr. Sergey Stavisky and Dr. David Brandman, to learn more about their research testing the feasibility of using intracortical brain-computer interface technology to restore speech.
 
Learn more about The ALS Association’s Seed Grant Program at https://www.als.org/stories-news/new-seed-grant-program-seeks-support-exploratory-als-research
 
For more information on research into using iBCIs to restore speech and trial participant criteria, go to https://www.ucdavis.edu/news/clinical-trial-aims-develop-new-methods-restore-speech-brain-computer-interface

This episode is brought to you by The ALS Association in partnership with CitizenRacecar.

What is Connecting ALS?

Connecting ALS is a weekly podcast produced by The ALS Association in partnership with CitizenRacecar. We aim to discuss research and technology developments, highlight advocacy efforts, and share the personal stories woven through the community.

Sergey Stavisky:
Our role as academics is to keep pushing the envelope of what's the next harder thing. And in our case, that's restoring speech. It's going to be a longer road, but we're going to show that it's safe and that it can work in small clinical trials like ours. And then that's going to drum up the excitement and confidence for industry investment to then take it to that next step of development and eventually make a device that reaches a wider range of people.

Jeremy Holden:
Hello, everyone, and welcome to Connecting ALS. I'm your host, Jeremy Holden. The ability to communicate with our friends and loved ones is critical to our quality of life. Preserving and restoring this ability is a top priority for people living with ALS. Our guests this week are looking into the potential to restore actual speech by bypassing the damaged parts of the nervous system using an intracortical brain-computer interface, or iBCI, a device that links the brain to external devices.
Brain-computer interface technology is a promising area of ongoing research that employs advances in neuroscience and engineering to provide people with paralysis the ability to effortlessly translate their thoughts into electrical signals that can be used to control computers and other assistive devices to restore communication and functional independence.
With support from the ALS Association's Seed Grant Program, which supports preliminary research with the potential to make ALS livable while advancing the search for a cure, Dr. Sergey Stavisky and Dr. David Brandman, co-directors of the UC Davis Neuroprosthetics Lab, are studying the feasibility of synthesizing intelligible speech sounds from neural signals via iBCI technology. I recently caught up with Dr. Stavisky and Dr. Brandman to explore their research. Dr. Stavisky, Dr. Brandman, thanks so much for being with us today on Connecting ALS.

David Brandman:
Thanks so much for having us.

Sergey Stavisky:
Yeah. Thank you.

Jeremy Holden:
So, I want to start with a basic one just so we're all singing from the same hymnal, as the expression goes. But let's start off with what is an intracortical brain-computer interface?

David Brandman:
Yeah. Sure. Let's start with what a brain-computer interface is. So, a brain-computer interface is a device that records brain signals and then translates or decodes those brain signals to allow people to control objects in their environments using their minds.
So, fundamentally, a BCI, or a brain-computer interface, has three parts. There's a sensor, a decoder, and an effector. A sensor is a source of neural signals. The decoder is the math that translates those neural signals, and the effector is whatever you're trying to control.
So, in the case of sensors for BCIs, a lot of things have been described out there. People have used fMRI machines. They've used intracortical recordings. They've used electrocorticography, myriad sources of brain [inaudible 00:02:56]. And then the decoder is just the math that translates someone's brain signals as they try to control something in their environment, and the effector is just what they're trying to control.
Now, an intracortical BCI, or an iBCI, is a device that records directly from the nerve cells in the brain, or the neurons. The idea is that by recording from nerve cells, you have the highest quality signals that we can possibly get to allow people to ultimately control objects [inaudible 00:03:24]. And we'll talk about iBCIs in our conversation today in the context of the ongoing BrainGate clinical trial.
So, BrainGate is actually studying the safety profile of chronically implanted electrode arrays in people with paralysis, including ALS. The idea is we want to record brain signals from people with paralysis, so people who want to interact with their environment but can't, and then translate their intention to interact with their environment to allow them to control things like wheelchairs, robotic limbs, and computer cursors on a screen.
And ultimately what Dr. Stavisky and I are interested in is actually developing a speech neuroprosthesis. The idea being that someone who is paralyzed wants to talk but can't, or else, when they do speak, is really difficult to understand. By recording directly from their brain and translating, or decoding, their intention to speak, we hope to be able to synthesize what they're trying to say in real time, thereby developing a speech neuroprosthesis.
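For listeners who like to see ideas in code, here is a minimal Python sketch of the sensor, decoder, effector loop Dr. Brandman describes. The class names, the 96-channel feature size, and the simple linear decoder are illustrative assumptions for this sketch, not the BrainGate system or its actual algorithms.

```python
import numpy as np

class Sensor:
    """Stands in for a neural signal source, e.g., a microelectrode array."""
    def __init__(self, n_channels: int = 96):
        self.n_channels = n_channels

    def read(self) -> np.ndarray:
        # One time bin of firing-rate-like features; random numbers as a placeholder.
        return np.random.rand(self.n_channels)

class Decoder:
    """The 'math' that maps neural features to a control signal."""
    def __init__(self, n_channels: int, n_outputs: int):
        # A simple linear readout; a real decoder would be trained on calibration data.
        self.weights = np.random.randn(n_outputs, n_channels) * 0.01

    def decode(self, features: np.ndarray) -> np.ndarray:
        return self.weights @ features

class Effector:
    """Whatever is being controlled: a cursor, a robotic arm, or synthesized speech."""
    def apply(self, command: np.ndarray) -> None:
        print("cursor velocity:", np.round(command, 3))

def run_loop(sensor: Sensor, decoder: Decoder, effector: Effector, steps: int = 5) -> None:
    # The closed loop: read brain signals, decode an intention, drive the effector.
    for _ in range(steps):
        features = sensor.read()
        command = decoder.decode(features)
        effector.apply(command)

if __name__ == "__main__":
    sensor = Sensor()
    run_loop(sensor, Decoder(n_channels=sensor.n_channels, n_outputs=2), Effector())
```

In the real system described in this episode, the sensor is the implanted electrode array, the decoder is trained software, and the effector ranges from a computer cursor to synthesized speech.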

Jeremy Holden:
Dr. Stavisky, you started researching iBCI technology as a Safenowitz Fellow on a project that used it to control a robotic arm and hand to help people with ALS perform daily tasks. How did you transition from there, or what was the jump from there, to seeing if this could work as a technology to restore speech?

Sergey Stavisky:
Yeah. Thanks. First of all, I'll say that fellowship was actually a big deal for my own career. It was my first postdoc fellowship. It was right when I transitioned from my PhD work in preclinical animal models to working as part of the BrainGate clinical trial. So, that was a lot of encouragement and validation, and it gave me a lot of independence to actually pursue this research and dive deeper into the field.
That project worked really well. We were able to have our clinical trial participant move a robot arm, pick up a water bottle, move things around on a table and all that. And as I was doing that, a couple of observations came out of it. One, it worked better than I expected, in that we were able to decode the complex movements of the arm and hand with a lot higher accuracy than I thought we would going in. So, that was really good news. I was like, "Wow, this is fantastic." That was point one.
Point two, at the same time, there was work coming out of a variety of labs, and especially Eddie Chang's lab at UCSF, showing that using this technology called ECoG, these grids of electrodes that go on the surface of the brain, they were able to reconstruct what people were saying. So, these were people with epilepsy who were having this brain monitoring for clinical purposes, but as an add-on study, researchers were able to reconstruct the voice from just that neural activity, also better than I would've expected.
And I knew that with these ECoG arrays, the signal quality is not as good as the intracortical arrays that we're using, due to that difference that Dr. Brandman was describing, where we're recording from hundreds of individual neurons versus sort of the average activity.
And so, I was thinking, okay, we know we can decode arm and hand movements with very good accuracy. We know that you can decode speech with a sort of coarser measurement. And there are a couple of other advantages to trying to decode speech. One, if you don't have the ability to speak, communication is really your top priority. So, arm and hand function is super important, but if you can't communicate, that's arguably even worse.
So, there's a huge need. It seemed to be perhaps feasible, and more feasible than we would've initially thought, based on these things I just described. And then also, pragmatically, one of the challenges that I and others in the field had run into with restoring arm movements, at least with a robot arm, is that you can have people control this robot arm with their brain activity, but except for some very early research, they can't feel what the arm feels. And without that sense of feedback and touch, it's actually really hard to interact with objects.
So, think about when your arm's asleep: you can barely pick up your phone, let alone make a meal. And even when your arm's asleep, you have more sensation than a brain-controlled robot arm. Plus, the arms themselves are very expensive, and they're not very reliable yet. So the brain decoding problem was arguably easier for arm decoding, but all of the downstream issues with that effector and the lack of feedback were problematic.
Whereas with this new goal of ours to build a speech brain-computer interface, decoding speech from brain signals is harder, because it's a much faster, more complex behavior than reach and grasp, but that's something we can solve. So, the part that's hard is under our control.
And then the other question is, what do you use as the effector? Okay, it's just a laptop or a tablet. Those are cheap. They're affordable. Everyone has one. And most people, for example, can still hear just fine. So, the sensory feedback part is already provided for us. So, if we can provide this output channel where someone tries to talk and the computer speaks for them based on their brain activity, I think we'd have a very compelling system that would really be ready to help a lot of people. So, it seemed like a very important and timely goal to go after, which is why I shifted my research to focus on this topic.

Jeremy Holden:
I want to get to that more complex problem that you have before you in a moment. But there are some augmentative and alternative communication devices that currently exist that help people communicate who've maybe lost the ability to do so. How is this technology different? How does it build on what currently exists in the world?

David Brandman:
Yeah. That's a fantastic question. The canonical example of this for someone living with ALS is something like an eye-tracking communication device. The idea being you're presented with a computer screen, and there's a gizmo that watches how your eyes move and does its best to guess where you're looking on the screen, and it translates where you're looking into some kind of communication strategy, whether that's letters appearing on a screen, words appearing on a screen, or simple sentences to that effect.
Gosh, when you and I are talking right now, we're talking using our voices. I am trying to speak and words are coming out. Speaking using Eyegaze is not intuitive speaking. Further, if anyone has ever tried to use an Eyegaze technology, they're wonderful when they do work. But the key point here is when they do work, and the follow-up to that is they work when you're staring at a screen.
Now, as you and I are having this conversation over Zoom, I'm looking at you. I'm not looking at a screen in front of me and not maintaining eye contact with you. And so, to that end, with these devices there's an element of dissociation in terms of who I'm actually speaking with. I can't look away when I'm using the device. And perhaps really importantly, I have no sense of prosody, no sense of tone, no sense of inflection or pausing. And that aspect of human communication, not only what I'm saying but how I'm saying it, is so critical, and it's something that Dr. Stavisky and I are really trying to tackle.
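For readers curious how the existing eye-gaze approach works mechanically, here is a toy Python sketch of dwell-based key selection: the device estimates where you're looking, and a key is "typed" only after your gaze rests on it long enough. The grid layout, dwell threshold, and gaze samples are made-up assumptions, not any particular product's design.

```python
# Toy sketch of dwell-based selection on an eye-gaze keyboard (illustrative only).
from collections import Counter

KEYS = [
    ["A", "B", "C", "D", "E", "F"],
    ["G", "H", "I", "J", "K", "L"],
    ["M", "N", "O", "P", "Q", "R"],
    ["S", "T", "U", "V", "W", "X"],
]
CELL = 100          # each key occupies a 100 x 100 pixel cell
DWELL_SAMPLES = 30  # roughly half a second of gaze at 60 Hz must land on one key

def key_at(x: float, y: float):
    """Map a gaze coordinate to the key under it, or None if off the keyboard."""
    row, col = int(y // CELL), int(x // CELL)
    if 0 <= row < len(KEYS) and 0 <= col < len(KEYS[0]):
        return KEYS[row][col]
    return None

def select_key(gaze_samples):
    """Select a key only if the gaze dwells on it for enough samples."""
    counts = Counter(key_at(x, y) for x, y in gaze_samples)
    counts.pop(None, None)
    if counts:
        key, n = counts.most_common(1)[0]
        if n >= DWELL_SAMPLES:
            return key
    return None

# Example: 40 gaze samples hovering over the cell containing "H".
samples = [(150.0, 150.0)] * 40
print(select_key(samples))  # prints "H"
```

The sketch also hints at the limitation Dr. Brandman describes: selection only happens while the eyes stay on the screen, which is exactly what forces users to break eye contact with the person they are talking to.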

Jeremy Holden:
That's interesting. I hadn't even thought about the challenges of understanding tone or translating that. And I think that gets me to my next question. So, what do we know about how the brain turns what I'm thinking into speech?

Sergey Stavisky:
We know quite a bit, but at a pretty imprecise resolution; that's how I would summarize it. One of the challenges is that speech is a uniquely human behavior, which means we really can only study it in people, and that makes it a lot harder. Other things, like how we hear, how we see, how we control our limbs, neuroscientists know a great deal about.
Obviously, there are still myriad open questions, and there are tens of thousands of scientists studying them, but for those questions we have really powerful tools, because we can look at animal models, we can record brain activity in great detail, and we can run careful experiments. And of course, we're not going to do that with humans for ethics reasons. So we have to mostly rely on non-invasive measures like fMRI, which looks at blood flow related to brain activity.
So, we can tell you roughly what parts of the brain might be involved during a behavior, but not really exactly how they're doing this computation. We can look at clinical evidence, like if someone has a stroke or an injury to part of their brain, what deficit do they exhibit? But that's also, again, really imprecise. You get these patients and there's usually damage across multiple areas, and it's very hard to untangle which part's doing what.
And then there are things like microstimulation during brain surgery. So, people, often for tumor resections or epilepsy, are undergoing a brain surgery to figure out what part of the brain a surgeon like Dr. Brandman is going to remove. And the surgeon will actually apply a little current to different parts of the brain to map out, well, what are the most essential areas that we want to avoid damaging? So, that also provides a source of evidence. And then there are things like electrocorticography, measuring from the surface of the brain or inside the brain during epilepsy monitoring.
Now you take all that together, and we have a pretty decent idea that, okay, it turns out a lot of your brain is involved in speaking, unsurprisingly. Because it involves everything from hearing what someone else is saying, which uses your temporal lobes a lot. It involves planning and memory, which uses the hippocampus and frontal cortex areas. It involves going from an idea, a concept in a kind of language sense, to a sequence of phonemes, a sequence of little sound units.
And we think that's happening maybe in the parietal cortex, near the back of your head, and near your temporal lobes closer to the ears. And then that kind of moves forward to the frontal cortex, especially the motor cortex, which is the part of the brain that really is the final output producing muscle commands, which cause movements.
And so, you have this path going from language, to planning of speech, to actually planning the specific movements that generate speech. And by movements I mean that as I'm speaking right now, I'm moving my lips, my tongue, my jaw, my larynx, so my voice box is vibrating. I'm also coordinating my breathing; I have to exhale to produce speech, which is important. And a lot of that really comes together in some of these motor areas of the brain, and that's where we're putting electrode arrays as part of our clinical trial.
So, the idea is, look, language is very complicated. We don't know that much about it, but we know quite a bit about how the brain does that last computation of which muscles do I move and when. And so, by tapping into that, we're basically getting at the motor output aspect of speech. Normally, these muscle commands would go to the muscles and you would talk, but if you have an injury like ALS or a stroke, those commands aren't reaching the muscles, and we're capturing them at that last stage before they leave the brain on their way to the muscles.
And I think that makes the problem a lot easier. It also has advantages like privacy. So, a concern people might have is, are you reading my mind? The answer is no. We don't know how to do that. But because we're listening in on the part of the brain that's trying to move the muscles, the signal is only there once you've already decided you want to speak.
You're not imagining a conversation, and this isn't the voice in your head as you think and rehearse what you might say. This is: I am trying to speak. And that's the neural activity that we are tapping into, basically bypassing that injury. So, now you're trying to speak, in this case through a computer instead of through your own muscles, which are paralyzed.
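To make the decoding step concrete, here is a small, purely illustrative Python sketch of the idea: neural features from the motor areas are mapped, time bin by time bin, to probabilities over phonemes (the little sound units mentioned above), and the most likely phonemes are strung together. The phoneme list, the untrained linear readout, and the greedy decoding are assumptions for illustration; real speech decoders use trained neural networks and language models.

```python
import numpy as np

# A tiny, made-up phoneme inventory; "SIL" stands for silence.
PHONEMES = ["AA", "B", "D", "IY", "K", "S", "T", "SIL"]

def phoneme_probabilities(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Softmax over a linear readout of one time bin of neural features."""
    logits = weights @ features
    shifted = np.exp(logits - logits.max())
    return shifted / shifted.sum()

def decode_attempted_speech(feature_stream, weights: np.ndarray):
    """Greedy decoding: keep the most likely phoneme per bin, dropping repeats and silence."""
    decoded = []
    for features in feature_stream:
        probs = phoneme_probabilities(features, weights)
        phoneme = PHONEMES[int(np.argmax(probs))]
        if phoneme != "SIL" and (not decoded or decoded[-1] != phoneme):
            decoded.append(phoneme)
    return decoded

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_channels = 96                                             # placeholder channel count
    weights = rng.normal(size=(len(PHONEMES), n_channels))      # untrained, for illustration
    stream = [rng.normal(size=n_channels) for _ in range(20)]   # fake neural feature bins
    print(decode_attempted_speech(stream, weights))
```

In practice, a decoder like this runs in real time as the person attempts to speak, and its output feeds a text display or a speech synthesizer, as described elsewhere in this conversation.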

Jeremy Holden:
That's fascinating. I hadn't even thought about the off-switch component of that, where I'm thinking about something and I think, I shouldn't say that. This is not the right time and place. This is not the right context. I fail at that off-switch more often than I'd care to admit. But it's fascinating to think about, to borrow your earlier expression, the upstream and downstream, and where we can start interventions that facilitate speech. This is amazing. All right, so let's look at the current clinical trial. How will it test the feasibility of using the iBCI tech we talked about earlier in order to restore speech?

David Brandman:
Yeah. Absolutely. Let's talk about BrainGate. So, BrainGate2 is an ongoing clinical trial looking at the safety profile of chronically implanted microelectrode arrays in people with paralysis. So, the inclusion criteria involve people with spinal cord injury or brainstem stroke or ALS, among other kinds of neurodegenerative diseases. And after recruitment, people undergo implantation of these microelectrode arrays, devices that record from individual nerve cells, or neurons, in the brain.
And they work with the scientific groups over the course of at least a year to then develop the next generation of communication neuroprosthetics. BrainGate is the longest-running intracortical brain-computer interface trial in the world. It's been going on for 17 years now. And it's actually organized and run by Dr. Leigh Hochberg out of Mass General Brigham and Brown University. So, it's a sponsor-investigator led clinical trial. The trial also has sites at UC Davis, which is where we're speaking from, at Stanford, Emory and Mass General.
And as part of the ongoing clinical trial, there have been roughly 15 people implanted. And through that, we've been able to demonstrate some groundbreaking results. For example, folks who are paralyzed through spinal cord injury have been able to control computer cursors for communication, and folks with spinal cord injuries have been able to communicate roughly 18 words per minute simply by imagining handwriting. That is, someone who is otherwise paralyzed and can't feel their hand or can't move it.
He imagines handwriting, writing out the letter A or the letter B or the letter C and so forth, and those letters appear on a screen much as if he were writing them. We've also demonstrated control of robotic limbs, which is of course what Dr. Stavisky was talking about before.
And we were also the first to demonstrate the development of a speech neuroprosthesis in someone who is otherwise able to speak, in this case someone with a spinal cord injury.
And very recently, something that's currently under peer review and has been posted as preliminary data, now openly available on bioRxiv, is a demonstration of a brain-to-text neuroprosthesis.
The idea being that someone with corticobulbar ALS, who is very severely dysarthric, meaning it's very difficult to understand what she's saying, tries to speak and words appear on a screen, drawn from a vocabulary of a hundred thousand words, allowing her to communicate at a rate of about 50 to 70 words per minute. The clinical trial is ongoing and we're actively recruiting. Each of the clinical trial sites is looking for people with ALS to join the clinical trial.
Very broadly, what does it take to join? You have to live within three hours of one of the clinical trial sites. So, three hours of Sacramento, three hours of the Bay Area, three hours of Boston, or three hours of Atlanta, Georgia. And you need a diagnosis of ALS with some weakness in the upper extremities, with or without difficulty speaking.
And each of the clinical trial sites has a slightly different focus in terms of the neuroprosthesis they'd like to build. To that end, there's a list of other inclusion and exclusion criteria. But the big picture is someone within three hours of those clinical trial sites who is living with paralysis, especially with ALS, and is interested in volunteering their time and energy to develop the next generation of communication devices for other people living with ALS.

Jeremy Holden:
And we can share information in the show notes on the trial criteria for folks who may want to consider participating and learn more about it. If someone is interested in participating or learning more, how can they go about getting in touch with you for that information?

David Brandman:
Yeah, absolutely. So, you can find us on the website braingate.org. That's the word brain, the word gate, then dot org. You can reach us over email at clinicaltrials@braingate.org. That's clinicaltrials@braingate.org. And if you happen to live within three hours of Sacramento, you can reach out to me at braingate@ucdavis.edu. That's braingate@ucdavis.edu.

Jeremy Holden:
I'm curious, what's the timeline for the clinical trial? Or, to let people know, where are we in the development of this next-gen technology?

David Brandman:
Yeah. That's a fantastic question. I would love to get to the point where I can walk up to a patient and prescribe this, where I could walk up to Mr. and Ms. Smith and say, "Gosh, I'm really sorry, you've been diagnosed with ALS. Your brain computer interface surgery is scheduled for next week. We'll get you communicating in no time." We're not there yet.
And certainly, my goal as a clinician-scientist, my goal as a surgeon who co-directs a laboratory with Dr. Stavisky, is to develop such a neuroprosthesis. And BrainGate really is trying to study the safety profile of these implanted arrays, trying to learn fundamental insights about the brain, and applying those insights towards building a first-generation communication neuroprosthesis.
So, we're not there yet. But those who want to work with us as part of BrainGate are contributing towards a scientific literature that has moved things forward and, it's probably safe to say, revolutionized the field of intracortical BCI technology over the last 17 years.

Jeremy Holden:
So, we've talked about some of the ways that this is currently being tested, where it's been tested in the past, and obviously the restoration of speech. I'm curious what other applications of this tech might be on the horizon. This sounds like something that doesn't stop with this current clinical trial. Just speculate forward.

Sergey Stavisky:
Yeah. So, neurotechnology broadly, and neuroprosthetics in particular, have really taken off in the last decade and even more so in the last five years. When I started in this field, there were a handful of labs, including BrainGate, doing this, maybe two or three groups doing this in human clinical trials and a dozen doing this in animal models.
Now, there are maybe a dozen human clinical trials across different applications, not just communication. There are startups in this space. There are large device companies looking at it. So, it's really a maturing field. So, depending on your question, that could mean what's the future for communication BCIs or, more broadly, for neuroprostheses.
And there are trials to restore vision by electrically stimulating either the retina or even the visual cortex, the part of the brain that first receives information about sight. There are clinical trials to restore walking by stimulating the spinal cord; there's a lot of great work out of Switzerland on that. There are clinical trials, very early days, looking at closed-loop systems to help with psychiatric diseases like major depression, detecting depressive episodes and actually stimulating parts of the brain to treat them.
And there's even some early research on whether we can deal with conditions like memory loss due to aging or dementia by stimulating the right parts of the brain to help sustain that memory function and bypass some of those injuries.
So, a lot of these various applications all use the three components that Dr. Brandman described. There's an electrical sensor that reads brain activity. There's some computation, so there's a computer involved that's processing that neural data. And then there's an effector. For things like robot arms, that effector is a physical thing in the world, or for a communication [inaudible 00:22:33], it could be a cursor on the screen or words appearing on a screen.
And for other applications, that effector is actually electrically writing information back into the brain, whether that's restoring your hearing. So, cochlear implants are a very mature technology. Millions of people now have them. They walk around hearing just fine; you wouldn't even know that they have a device. Or earlier-stage things like restoring vision by stimulating, stopping seizures in epilepsy by stimulating other parts of the brain, maybe augmenting memory in the future, things like that.
So, I think, at a wide view, that's what's starting to happen. The communication neuroprostheses were some of the first to get going, and BrainGate's been going for 17 years, partly because it's a pretty well-defined problem. We want to transform someone's attempted arm movements, and the brain signals that accompany them, into moving a computer cursor so they can type on a screen. And we already knew a lot about how the brain controls the arm from basic neuroscience.
So, that's moved along pretty far, such that there have been many participants in our clinical trial and others who have moved computer cursors or moved robot arms, and now there are even startups that are moving towards human clinical trials or pivotal trials of devices that could eventually go to market.
So, I think in the next few years, there'll be commercial devices for simple things like moving a computer cursor. And our role as academics is to keep pushing the envelope of what's the next harder thing. And in our case, that's restoring speech. It's going to be a longer road, but I think we're going to see the same process where we're going to show that it's safe and that it can work in small clinical trials like ours.
And then that's going to drum up the excitement and confidence for industry investments to then take it to that next step of development and eventually make a device that reaches a wider range of people and is distributed at scale.

Jeremy Holden:
Truly fascinating technology, and I really enjoyed chatting about it this afternoon. Any closing thoughts before I let you get back to the important work?

David Brandman:
There's an unmet need in people living with ALS. When you ask people who can't move their legs what's the most important thing for them, the answer is, "I'd like some bladder, bowel, and sexual control." When you ask people who can't move their arms and their legs what's the most important thing, the answer is, "I want to be able to move my arms again." But when you ask someone who can't talk and can't move their arms and can't move their legs, the number one thing that's most important to people living with that degree of paralysis is restoration of communication.
It's being able to tell your loved one you love them. It's being able to say, "Gosh, can you scratch my nose?" It's being able to communicate with a healthcare provider that there's a problem. And I think what Dr. Stavisky and I are trying to accomplish is to really address this: to take someone who is paralyzed, who wants to speak but can't, and to translate that intention to speak in such a way that we can get them speaking intuitively, speaking robustly, and, hopefully one day, speaking as quickly as you and I are speaking right now.

Jeremy Holden:
Always love to end these conversations on a hopeful note. Dr. Stavisky, Dr. Brandman, thanks so much for being with us today.

Sergey Stavisky:
Thanks for having us. It was a pleasure.

David Brandman:
Thanks so much.

Jeremy Holden:
I want to thank my guests this week, Dr. Sergey Stavisky and Dr. David Brandman. If you liked this episode, maybe go back and check out our earlier conversation on expanding access to speech-language pathology. You can also share this episode with a friend. And while you're at it, please rate and review Connecting ALS wherever you listen to podcasts. It's a great way for us to connect with more listeners.
Our production partner for this series is CitizenRacecar. Post-production by Alex Brouwer, production management by Gabriela Montequin, supervised by David Hoffman. That's going to do it for this week. Thanks for tuning in. We'll connect with you again soon.