Samuel Hosovsky

Reading A Cyborg’s Mind

Mini-series on decoding movement intentions using brain chips (1/5)


For a Motor BCI to be clinically viable, it must acquire and decode complex movement intentions at high spatiotemporal resolution in a safe, stable, and portable manner. This allows the User to reclaim full-body expression at naturalistic rates in day-to-day activities.


🔮 Motor BCIs aim to restore a person’s lost motor function by decoding their neural activity into digitized movements and speech which, in turn, command external devices.


Previously, I introduced you to the first-generation Users of Motor BCIs and presented a method for clinically valid measures of their Motor BCI-facilitated outcomes. I argued that more effort, especially by those commercializing the technology, should be directed toward patient-reported outcomes (PROs).


Now, however, I will pay homage to the scientists and engineers who brought Motor BCIs within earshot of the clinic. This article examines the behavioral insights Motor BCIs can already expose, which form the basis for proposing technically feasible clinical applications. After all, only after establishing feasibility can the field justify investigating the clinical outcomes.


While intended as a gentle introduction, communicating the state of the art requires acknowledging several fundamental principles behind the brain’s computation and modern data science. Where needed, consult the referenced works for further explanation.


 

Neural Data for Clinical Motor BCIs


The cortical areas demonstrating reliable encoding of full-body and fine movement are the Primary Motor Cortex ("BA4"), the Premotor Cortex ("BA6"), and the Posterior Parietal Cortex ("PPC").


While many have characterized the general function of these areas, this section highlights the state-of-the-art Motor BCIs that tap into them. The focus is on developments in the current decade, extending the earlier seminal work by the BrainGate Consortium, the Cortical Bionics Research Group, Nick Ramsey’s Lab, the Andersen Lab, the Chang Lab, the Sabes Lab, the Chestek Lab, and the Shanechi Lab, among many others.


 

Only a handful of labs leverage the unprecedented simultaneous access to thousands of neurons to study long-term behavior, and even fewer freely share their data.


The Brunton Lab has consistently produced some of the largest publicly available human neurobehavioral datasets of upper extremity movements. The Annotated Joints in Long-term ECoG ("AJILE") dataset by Wang et al. (2018) includes automatically annotated poses of 7 upper-body joints for four human subjects over 670 total hours, along with the corresponding simultaneously acquired intracranial neural recordings.


Video 1 (Brunton Lab, 2018): An illustrative video of the annotated motion capture data associated with AJILE (see the footnote link to play).


Years later, Singh et al. (2021), of the same lab, improved upon the 2018 AJILE dataset with finer temporal resolution and upper-extremity trajectory characterization. Their pipeline discovered and annotated over 40,000 instances of naturalistic arm movements in long-term (7–9 day) behavioral videos, across 12 subjects from the dataset of Parvizi and Kastner (2018).


GIF 5 (Singh et al., 2021): Annotated motion capture data of both wrists moving (bimanual movement).
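The heart of such an annotation pipeline can be sketched in a few lines: treat a wrist keypoint trajectory as a time series, compute its speed, and keep contiguous supra-threshold stretches as candidate movement events. The sketch below is a minimal NumPy illustration of that idea, not the published pipeline; the speed threshold and minimum duration are made-up values one would tune per recording.

```python
import numpy as np

def detect_movement_events(wrist_xy, fps=30.0, speed_thresh=50.0, min_duration_s=0.25):
    """Segment candidate movement events from a wrist keypoint trajectory.

    wrist_xy: array of shape (T, 2), pixel coordinates over time.
    speed_thresh: cutoff in pixels/second (illustrative, not a published value).
    Returns a list of (start_frame, end_frame) index pairs.
    """
    # Frame-to-frame speed in pixels per second.
    speed = np.linalg.norm(np.diff(wrist_xy, axis=0), axis=1) * fps
    moving = speed > speed_thresh

    # Rising/falling edges of the boolean "moving" mask.
    edges = np.diff(moving.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if moving[0]:
        starts = np.r_[0, starts]
    if moving[-1]:
        ends = np.r_[ends, moving.size]

    # Keep only events longer than the minimum duration.
    min_len = int(min_duration_s * fps)
    return [(s, e) for s, e in zip(starts, ends) if e - s >= min_len]
```

Real pipelines layer confidence filtering, de-duplication, and behavioral-state labels on top of this core segmentation step.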


To date, the Brunton Lab has produced the largest publicly available ECoG dataset synchronized with motion capture: AJILE12. From Peterson et al. (2022):


“AJILE12 includes synchronized intracranial neural recordings and upper body pose trajectories across 55 semi-continuous days of naturalistic movements, along with relevant metadata, including thousands of wrist movement events and annotated behavioral states. Neural recordings are available at 500 Hz from at least 64 electrodes per participant, for a total of 1280 hours. Pose trajectories at 9 upper-body keypoints, including wrist, elbow, and shoulder joints, were sampled at 30 frames per second and estimated from 118 million video frames.”
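A note on access: AJILE12 is distributed as Neurodata Without Borders ("NWB") files through the DANDI archive (dandiset 000055), so a first look takes only a few lines of pynwb. The filename below is a placeholder, and rather than hard-coding the names of internal data series, this sketch prints what a given session file actually contains.

```python
from pynwb import NWBHDF5IO

# One AJILE12 session file downloaded from DANDI (the filename is a placeholder).
with NWBHDF5IO("sub-01_ses-3_behavior+ecog.nwb", mode="r") as io:
    nwb = io.read()

    # Inspect the containers rather than assuming their names.
    print("acquisition:", list(nwb.acquisition.keys()))
    print("processing:", list(nwb.processing.keys()))

    # Neural data: an ElectricalSeries sampled at 500 Hz from >= 64 electrodes.
    ecog = next(iter(nwb.acquisition.values()))
    print("ECoG shape (samples, channels):", ecog.data.shape)

    # Pose trajectories (9 upper-body keypoints at 30 fps) live in a behavior
    # processing module; drill into nwb.processing using the printout above.
```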


Beyond these well-annotated spontaneous intracranial datasets, Miller (2016) (last updated Dec 5th, 2022) curated a collection of ECoG data and analyses from 16 benchmark behavioral experiments, comprising 204 individual datasets from 34 patients captured with the same sampling rate and filter settings. To better understand perception, Wang et al. (2023) measured neural dynamics as 10 participants watched feature-length movies for a total of 43.7 hours, recorded with stereo-electroencephalographic ("sEEG") probes. In 2023, Stanford’s Neural Prosthetics Translational Lab ("NPTL") released the dataset of Willett et al. (2023), with weeks of iBCI recordings as Pat, the participant with ALS, attempted to speak over 10,000 sentences. And back in 2008, Miller and Schalk presented 'BCI Competition IV, dataset 4': subdural ECoG recordings of 3 participants continuously moving individual fingers inside a motion-capture glove, which provided ground truth.
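That glove ground truth makes dataset 4 a natural playground for regression-style decoding. As a hedged illustration of the conventional baseline on such data, the sketch below maps high-gamma band power to finger flexion with ridge regression; the random arrays merely stand in for the arrays loaded from the competition files, and the 1 kHz sampling rate and 70–150 Hz band are typical choices rather than prescriptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.ndimage import uniform_filter1d
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

FS = 1000  # dataset 4's ECoG is sampled at 1 kHz

def high_gamma_power(ecog, fs=FS, band=(70.0, 150.0), win_s=0.1):
    """Smoothed per-channel high-gamma envelope; ecog is (samples, channels)."""
    b, a = butter(4, band, btype="band", fs=fs)
    envelope = np.abs(hilbert(filtfilt(b, a, ecog, axis=0), axis=0))
    return uniform_filter1d(envelope, size=int(win_s * fs), axis=0)

# Stand-ins for the competition arrays: ECoG voltages and 5 glove flexion traces.
rng = np.random.default_rng(0)
ecog = rng.standard_normal((60_000, 48))
fingers = rng.standard_normal((60_000, 5))

X = high_gamma_power(ecog)
X_tr, X_te, y_tr, y_te = train_test_split(X, fingers, test_size=0.25, shuffle=False)

decoder = Ridge(alpha=10.0).fit(X_tr, y_tr)
pred = decoder.predict(X_te)
for i in range(5):
    r = np.corrcoef(pred[:, i], y_te[:, i])[0, 1]  # correlation per finger
    print(f"finger {i}: r = {r:.2f}")
```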


Taking a different approach, at the 35th NeurIPS conference, Pei et al. (2021) unveiled four curated datasets of spiking neural activity in cognitive, sensory, and motor areas. Rather than serving the community as material for modeling associations between population dynamics and behavior, these datasets were released alongside a benchmark assessing how accurately models can characterize the overall internal population dynamics based only on the activity of a limited set of observed neurons. As outlined in the Neural Manifold Hypothesis section, the motor cortex does much more than generate movement, yet its activity can be reduced into, and expanded from, a set of representative latent variables. To judge the accuracy of this encoding and decoding of neural activity, the benchmark withheld information about the behavior and instead assessed models in an unsupervised manner: predicting the firing rates of held-out channels based on the activity of held-in channels. A supervised evaluation, judging the accuracy of predicting observed behavior, would assess only a model’s ability to describe the partial, movement-related portion of neuronal activity.


Following the unveiling, the lab challenged other computational researchers to test their latent variable models on these standardized datasets with the unsupervised benchmark: to outperform the baseline models but, above all, to improve the techniques that interrogate the neural population dynamics of behavior. Within a few months, dozens of labs rose to the challenge.
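Concretely, the co-smoothing evaluation those entries competed on reduces to three steps: infer latent structure from the held-in channels, regress each held-out channel’s spike counts onto that structure, and score the predicted rates in bits per spike against a flat mean-rate baseline. The toy sketch below runs on synthetic counts; Gaussian smoothing stands in for a real latent variable model, and the benchmark proper scores held-out trials rather than the data a model was fit on.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
spikes = rng.poisson(0.3, size=(5_000, 100))  # stand-in: (time bins, neurons)
held_in, held_out = spikes[:, :75], spikes[:, 75:]

# Stand-in "latent variable model": smoothed held-in spike counts.
latents = gaussian_filter1d(held_in.astype(float), sigma=5, axis=0)

# Predict each held-out channel's firing rate from the inferred latents.
rates = np.column_stack([
    PoissonRegressor(alpha=1e-3, max_iter=300).fit(latents, held_out[:, j]).predict(latents)
    for j in range(held_out.shape[1])
])

def poisson_ll(rate, count):
    """Poisson log-likelihood up to a constant that cancels in the difference."""
    return np.sum(count * np.log(np.clip(rate, 1e-9, None)) - rate)

# Bits per spike: log-likelihood gain over a mean-rate baseline, per spike.
baseline = np.broadcast_to(held_out.mean(axis=0), rates.shape)
bps = (poisson_ll(rates, held_out) - poisson_ll(baseline, held_out)) / (held_out.sum() * np.log(2))
print(f"co-smoothing score: {bps:.3f} bits per spike")
```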


Spontaneous movements capture real-world neural and behavioral variability that is missing from traditional cued tasks. Measuring neural activity during exposure to audiovisual stimuli captures other perceptual aspects of everyday life. Together, these and other long-form datasets offer a window into the simultaneous, unpredictable, and environmentally dependent behavior whose dynamics currently hinder the deployment of advanced clinical Motor BCIs. Gazing through such windows, many of the works highlighted below attempt to paint a realistic picture of the population dynamics and movements they witness.


 


 

Part 6 of a series of unedited excerpts from uCat: Transcend the Limits of Body, Time, and Space by Sam Hosovsky*, Oliver Shetler, Luke Turner, and Cai Kinnaird. First published on Feb 29th, 2024, and licensed under CC BY-NC-SA 4.0.



uCat is a community of entrepreneurs, transhumanists, techno-optimists, and many others who recognize the alignment of the technological frontiers described in this work. Join us!


*Sam was the primary author of this excerpt.
