By Samuel Hosovsky

How Many Fingers Am I Holding Up?

Cyborgs who can handwrite with just their mind (3/5)



As with the hand and arm, the PPC has been found to encode individual finger movements with remarkable precision, comparable to recordings from BA6 near the ‘hand knob’ area (Guan et al., 2023). Finger movements were decoded at 86% accuracy from the right hand (Fig. 22) and at 70% accuracy from both hands (Fig. 23).


Figure 22; Guan et al., 2023: Online BMI classification of individual finger movements — “A) Confusion matrix for participant N (PPC), right-hand finger presses. 86% accuracy, 4016 trials over 9 sessions.” Reprinted from Figure 6A.



Figure 23; Guan et al., 2023: Classifying finger presses of both hands — “C) Cross-validated confusion matrix for classifying right- and left-hand finger movements from N-PPC neural activity. 70% accuracy, 1000 trials over 10 sessions.” Reprinted from Figure 7.




Although significantly above chance, the reported accuracies still fall short of the requirements for a clinical system. When combined, however, simultaneous BA6 and PPC recordings yield significantly higher accuracy (Fig. 24).


Figure 24; Guan et al., 2023: Confusion matrix for participant JJ (PPC + MC), right-hand finger presses — “92% accuracy ± S.D. 3% over eight sessions, 1440 total trials.” Reprinted from Figure 6B.
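To make this kind of classification concrete, the sketch below maps binned firing rates to finger presses and summarizes the result in a confusion matrix. The data are synthetic and the classifier choice (scikit-learn's linear discriminant analysis) is an illustrative assumption, not the pipeline of Guan et al. (2023).

```python
# Minimal sketch: classifying finger presses from binned firing rates.
# Shapes, classifier choice, and data are illustrative assumptions,
# not the pipeline used by Guan et al. (2023).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(0)

n_trials, n_channels = 500, 96          # hypothetical Utah-array channel count
fingers = ["thumb", "index", "middle", "ring", "little"]

# X: trial-averaged firing rates (trials x channels); y: which finger was pressed.
X = rng.poisson(lam=5.0, size=(n_trials, n_channels)).astype(float)
y = rng.integers(0, len(fingers), size=n_trials)

clf = LinearDiscriminantAnalysis()
y_pred = cross_val_predict(clf, X, y, cv=5)    # cross-validated predictions

print("accuracy:", accuracy_score(y, y_pred))
print(confusion_matrix(y, y_pred))             # rows = true finger, cols = decoded finger
```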


Guan et al. (2023) classified attempted movement per finger; they did not estimate each finger’s continuous motion. Nason et al. (2021), on the other hand, focused on decoding the velocities (and resulting positions) of individual fingertips from BA4. Their two rhesus monkeys performed extraordinarily well, flexing and extending either the index finger or the middle-ring-pinky finger group to reach set targets (Vid. 6).


(To play, see footnote link) Video 6; Nason et al., 2021: Two-finger Simultaneous Brain Control using an iBCI.
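Continuous decoders of this kind typically map each bin of neural activity to a fingertip velocity and integrate the result into a position. The sketch below illustrates that loop with a ridge-regression velocity decoder on synthetic data; the bin width, features, and model are assumptions, not the decoder used by Nason et al. (2021).

```python
# Minimal sketch of continuous fingertip decoding: predict velocity from
# binned neural features, then integrate to obtain position. The linear
# regression model and synthetic data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

dt = 0.05                                    # 50 ms bins (assumed)
n_bins, n_channels, n_fingers = 2000, 96, 2  # index vs. middle-ring-pinky group

neural = rng.normal(size=(n_bins, n_channels))           # binned firing rates (z-scored)
true_vel = rng.normal(size=(n_bins, n_fingers)) * 0.1    # stand-in "true" velocities

# Fit a linear map from neural features to finger-group velocities.
decoder = Ridge(alpha=1.0).fit(neural, true_vel)

# Online use: decode one bin at a time and integrate velocity into position.
pos = np.zeros(n_fingers)
trajectory = []
for t in range(n_bins):
    vel = decoder.predict(neural[t : t + 1])[0]
    pos = np.clip(pos + vel * dt, 0.0, 1.0)   # keep within flexion range [0, 1]
    trajectory.append(pos.copy())

print(np.asarray(trajectory).shape)           # (n_bins, n_fingers)
```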



In a similar study, Shah et al. (2023a) instead considered the BA6 activity of a human participant with paralysis and decoded the attempted fingertip positions of their three finger groups (Little+Ring; Middle+Index; Thumb). Though few details about the decoder are disclosed, a linear classifier matched the participant’s neural activity to the virtual key they were attempting to reach (Fig. 25) and nudged the corresponding animated finger group in that direction at a fixed speed.



Figure 25; Shah et al., 2023a: Designed keyboard — “Colors indicate finger groupings, with index-middle (green) and ring-little (yellow) tied together. Keys (circles) lie along the flexion-extension movement axis of the corresponding finger, with key color indicating finger assignment. Staggered locations of keys on the same finger group allows a unique selection even though two fingers are constrained to move together.” Reprinted from Figure 2B.
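In principle, such a decoder only needs to pick the intended key and then advance the corresponding finger group toward it at a constant speed. The sketch below illustrates that classify-then-nudge loop; the key layout, speed, and classifier weights are hypothetical, not those of Shah et al. (2023a).

```python
# Minimal sketch of classify-then-nudge control: a classifier picks the
# intended key and the matching finger group moves toward it at a fixed speed.
# The key layout, speed, and classifier weights are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
N_FEATURES = 96   # assumed number of neural features per decoding step

# Hypothetical keys: key -> (finger group, target position along flexion axis)
KEYS = {
    "a": ("index_middle", 0.2),
    "b": ("index_middle", 0.8),
    "c": ("ring_little", 0.3),
    "d": ("thumb", 0.6),
}
KEY_WEIGHTS = {k: rng.normal(size=N_FEATURES) for k in KEYS}  # stand-in classifier
SPEED = 0.05  # position units per decoding step (assumed)

positions = {"index_middle": 0.5, "ring_little": 0.5, "thumb": 0.5}

def classify_key(features):
    """Stand-in linear classifier: the highest-scoring key wins."""
    scores = {k: float(w @ features) for k, w in KEY_WEIGHTS.items()}
    return max(scores, key=scores.get)

def step(features):
    """One decoding cycle: pick a key, nudge its finger group toward it."""
    key = classify_key(features)
    group, target = KEYS[key]
    positions[group] += np.clip(target - positions[group], -SPEED, SPEED)
    return key, positions[group]

print(step(rng.normal(size=N_FEATURES)))
```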



Although each of the aforementioned iBCI studies had access to SUA, they opted for the MUA signal instead. This choice is shared by most high-performance motor BCI efforts, which forgo SUA analysis because of its inherent instability and computational overhead, and because it disregards population dynamics (Kelly et al., 2007; Pandarinath et al., 2017).
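In practice, the MUA signal is usually obtained by counting threshold crossings on each electrode and binning them, with no spike sorting at all. A minimal sketch of that feature extraction follows; the −4.5 × RMS threshold and 50 ms bins are common conventions assumed here, not parameters reported by the studies above.

```python
# Minimal sketch of multi-unit activity (MUA) extraction: count threshold
# crossings per channel and bin them, skipping spike sorting entirely.
# The -4.5 x RMS threshold and 50 ms bins are assumed conventions.
import numpy as np

def threshold_crossing_counts(broadband, fs, bin_ms=50, thresh_rms=-4.5):
    """broadband: (n_samples, n_channels) filtered voltage traces."""
    thresholds = thresh_rms * np.sqrt(np.mean(broadband**2, axis=0))  # per channel
    below = broadband < thresholds                        # samples under threshold
    # A crossing is the first sample of each below-threshold excursion.
    crossings = below & ~np.vstack([np.zeros_like(below[:1]), below[:-1]])

    samples_per_bin = int(fs * bin_ms / 1000)
    n_bins = broadband.shape[0] // samples_per_bin
    trimmed = crossings[: n_bins * samples_per_bin]
    return trimmed.reshape(n_bins, samples_per_bin, -1).sum(axis=1)  # (bins, channels)

rng = np.random.default_rng(3)
fake_broadband = rng.normal(size=(30_000, 96))            # 1 s at 30 kHz, 96 channels
print(threshold_crossing_counts(fake_broadband, fs=30_000).shape)
```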


ECoG approaches to single-finger decoding, on the other hand, typically rely on the noisier field-potential (LFP-like) signal collected over a much larger anatomical region. Even though this signal is only a surrogate for local population spiking activity, it has been shown to encode motor control as fine-grained as individual fingers (Marjaninejad et al., 2017).
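Field-potential decoders typically work with band-power features rather than spike counts; high-gamma power (roughly 70–170 Hz) over an ECoG grid is a common surrogate for local population activity. The sketch below extracts such a feature with SciPy; the band edges, filter order, and window length are illustrative assumptions.

```python
# Minimal sketch of a field-potential feature: high-gamma band power per
# ECoG channel, a common surrogate for local population spiking activity.
# The 70-170 Hz band, filter order, and window length are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def high_gamma_power(ecog, fs, band=(70.0, 170.0), win_ms=100):
    """ecog: (n_samples, n_channels). Returns (n_windows, n_channels) power."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecog, axis=0)                # band-pass each channel

    samples_per_win = int(fs * win_ms / 1000)
    n_win = ecog.shape[0] // samples_per_win
    trimmed = filtered[: n_win * samples_per_win] ** 2     # instantaneous power
    return trimmed.reshape(n_win, samples_per_win, -1).mean(axis=1)

rng = np.random.default_rng(4)
fake_ecog = rng.normal(size=(10_000, 62))                  # 10 s at 1 kHz, 62 channels
print(high_gamma_power(fake_ecog, fs=1000).shape)          # (100, 62)
```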


The release of ‘BCI Competition IV, dataset 4’ sparked global academic participation in inferring the flexion of individual fingers from ECoG, and a lasting wave of innovative finger-decoding architectures followed. More than ten years after the competition winners were announced (1st place: Liang and Bougrain, 2012; 2nd place: Flamary and Rakotomamonjy, 2012), the dataset continues to serve as a benchmark for validating novel decoding approaches.


The submissions illustrate the field’s shift toward Deep Learning: both the 1st- and 2nd-place winners relied on linear approaches, which Deep Learning models have since outperformed despite training on only several minutes of recordings. Notably, Lomtev et al. (2023) achieved state-of-the-art performance in predicting individual finger trajectories (Fig. 26) through their use of convolutional neural networks (“CNNs”). This is particularly significant because their model, despite its moderate correlation coefficients (~0.6), surpassed previous models, including those based on Long Short-Term Memory (LSTM) networks.
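Performance on this benchmark is typically reported as the Pearson correlation between the decoded and recorded flexion trace of each finger, which is where figures like the ~0.6 above come from. A minimal sketch of that evaluation on synthetic stand-in data:

```python
# Minimal sketch of the usual evaluation for continuous finger decoding:
# per-finger Pearson correlation between decoded and recorded flexion traces.
# The arrays here are synthetic stand-ins for benchmark data.
import numpy as np

def per_finger_correlation(decoded, recorded):
    """decoded, recorded: (n_samples, n_fingers). Returns r for each finger."""
    d = decoded - decoded.mean(axis=0)
    r = recorded - recorded.mean(axis=0)
    return (d * r).sum(axis=0) / (np.linalg.norm(d, axis=0) * np.linalg.norm(r, axis=0))

rng = np.random.default_rng(5)
recorded = rng.normal(size=(6000, 5))                              # 5 fingers
decoded = 0.7 * recorded + 0.7 * rng.normal(size=recorded.shape)   # imperfect decode
print(per_finger_correlation(decoded, recorded))                   # roughly 0.7 per finger
```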


The key features behind their model’s superior performance include meticulous feature scaling and a carefully optimized number of layers. This suggests that the convolutional approach is adept at handling the transient, event-specific fluctuations that are characteristic of motor behavior signals. LSTMs, in contrast, are more traditionally suited to forecasting time series with regular autocovariance structure and may be less effective when recognizing transient patterns is what matters.


This distinction between CNNs and LSTMs may explain why the convolutional approach of Lomtev et al. performed so well. CNNs, by design, excel at ‘recognition’ tasks, whether in images or, as demonstrated here, in identifying event-related potentials (ERPs) in time-series data. This insight opens up intriguing possibilities for applying similar convolutional models to other areas of motor behavior prediction and beyond, suggesting the broad applicability of these techniques to Brain-Computer Interfaces (BCIs) and other neurotechnologies.


The convolutional modeling approach taken by Lomtev et al. (2023) seems to hold great potential for real-world applications. Relatively lightweight (~600k parameters), it generalizes to similar datasets without additional hyperparameter tuning, performs inference quickly from only the current temporal window (of arbitrary size), and uses the same decoder for resting and moving states.
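To give a sense of what such a model looks like, the sketch below is a compact 1D-convolutional decoder that maps a window of ECoG features to the current finger positions. The layer sizes and pooling scheme are illustrative assumptions, not the architecture of Lomtev et al. (2023).

```python
# Minimal sketch of a 1D-CNN finger decoder: a window of ECoG band-power
# features in, current finger positions out. Layer sizes are illustrative
# assumptions, not the architecture of Lomtev et al. (2023).
import torch
import torch.nn as nn

class FingerCNN(nn.Module):
    def __init__(self, n_channels=62, n_fingers=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),       # pool over time -> any window length works
        )
        self.head = nn.Linear(64, n_fingers)

    def forward(self, x):                  # x: (batch, channels, time)
        return self.head(self.features(x).squeeze(-1))

model = FingerCNN()
print(sum(p.numel() for p in model.parameters()))     # parameter count
window = torch.randn(8, 62, 200)                      # batch of 200-sample windows
print(model(window).shape)                            # torch.Size([8, 5])
```

The adaptive pooling is what lets a model like this accept a temporal window of arbitrary length, echoing the inference-from-the-current-window property described above.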


Figure 26; Lomtev et al., 2023: Reprinted from Figure 3.


Classifying individual fingers, or even mapping their continuous movement, still only decodes behavior within a specific task context, far removed from the daily tasks to be performed by Users.



✌️ In the current decade, Deep Learning approaches to continuous single-finger decoding have started outperforming conventional approaches.



As with hand and arm movements, relatively simple linear decoders struggle to generalize to changes in the neural dynamics that generate different movements. For example, when the fingers encounter resistance (or when the hand is oriented differently), the neural dynamics (in BA4) underlying finger flexion and extension are altered so severely that a trained linear decoder predicts vastly wrong fingertip positions (Mender et al., 2023).

However, because the neural manifolds underlying these slightly different movements are presumed to be aligned, the online decoder could quickly adapt to the changes.
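One common way to exploit such alignment is to fit a low-dimensional subspace to the neural activity in each context and rotate the new subspace onto the old one before reusing the decoder. The sketch below illustrates that idea with PCA and an orthogonal Procrustes transform; it is an illustrative recalibration scheme, not the adaptation method of Mender et al. (2023).

```python
# Minimal sketch of manifold alignment for decoder reuse: fit a low-dimensional
# subspace in each context and align the new one to the old with an orthogonal
# (Procrustes) transform. PCA + Procrustes is an illustrative choice, not the
# adaptation scheme used by Mender et al. (2023).
import numpy as np
from sklearn.decomposition import PCA
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(6)
n_bins, n_channels, n_dims = 3000, 96, 10

old_ctx = rng.normal(size=(n_bins, n_channels))                       # e.g. unresisted movements
new_ctx = old_ctx @ rng.normal(size=(n_channels, n_channels)) * 0.1   # shifted dynamics

pca_old = PCA(n_dims).fit(old_ctx)
pca_new = PCA(n_dims).fit(new_ctx)

# Orthogonal map R that best rotates new-context latents onto old-context latents
# (assumes matched trials or condition-averaged trajectories in both contexts).
latents_old = pca_old.transform(old_ctx)
latents_new = pca_new.transform(new_ctx)
R, _ = orthogonal_procrustes(latents_new, latents_old)

aligned = pca_new.transform(new_ctx) @ R   # feed these to the decoder trained on old_ctx
print(aligned.shape)                        # (3000, 10)
```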


Similarly, Shah et al. (2023b) highlight two Deep Learning methods that outperformed their linear decoder during tasks involving simultaneous finger movement. Another particularly well-performing Deep Learning framework proposed by Frey et al. (2021) has demonstrated favorable generalization across behaviors, including single finger tracking (evaluated on BCI Competition IV, dataset 4).


Despite promising advances in finger decoding, emulating typing from neural data is not an efficient modality for communication. Although finger decoding has been leveraged for communication, reaching record speeds of 90 characters per minute during attempted handwriting (Willett et al., 2021b), it is neither the fastest nor the most intuitive way to communicate.


Ironically, a much more complex approach — the direct decoding of speech from vocal motor regions — has recently exploded onto the scene of assistive products for communication. It is soon to become the first truly life-changing non-robotic application of BCIs.



 

Part 8 of a series of unedited excerpts from uCat: Transcend the Limits of Body, Time, and Space by Sam Hosovsky*, Oliver Shetler, Luke Turner, and Cai Kinnaird. First published on Feb 29th, 2024, and licensed under CC BY-NC-SA 4.0.



uCat is a community of entrepreneurs, transhumanists, techno-optimists, and many others who recognize the alignment of the technological frontiers described in this work. Join us!


*Sam was the primary author of this excerpt.


