
Transferring Thoughts Onto Avatars

From brain chip voltages to virtual reality expressions



While VR has been extensively used in psychiatric and rehabilitative care, integrating Motor BCI with a VR environment is, counterintuitively, rare.


The barriers to entry appear to stem from the complexity of representing the biomechanics of motor intents in a format that can be easily interpolated onto Avatars.


Instead of detailing the frontier of modeling humanoid biomechanics, which has been discussed at length elsewhere (Sugihara and Morisawa, 2020; Hashemi et al., 2023), this section outlines the design considerations a VR application should account for when mapping outputs of Motor BCIs onto an Avatar with multibody dynamics.


Simulating Neuromuscular Control


Beyond faithfully representing the musculoskeletal geometry, a VR application is deeply concerned with the kind of movement variables it can consume from a Motor BCI. Depending on the type of kinetic and/or kinematic variables it receives, the application must perform the appropriate forward or inverse kinematics calculations to naturalistically model the multibody dynamics of human motion.


Figure 35; Seth et al., 2018: Elements of a typical musculoskeletal simulation in OpenSim. Movement arises from a complex orchestration of the neural, muscular, skeletal, and sensory systems.


The biomechanical outputs of a Motor BCI can include various continuous movement variables such as direction, velocity, acceleration, position, distance, trajectory, joint angles, forces, torques, or even muscle-related variables. The Motor BCI may also output discrete movements, such as target poses, gestures, grips, and facial expressions, which are simpler to interpolate onto an Avatar.
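To make this distinction concrete, below is a minimal sketch (the names and Python types are hypothetical, not a standard interface) of how a VR application might declare the movement variables it is prepared to consume from a Motor BCI:

```python
from __future__ import annotations
from dataclasses import dataclass
from enum import Enum, auto
import numpy as np

class ContinuousVariable(Enum):
    """Continuous movement variables a Motor BCI might stream (hypothetical taxonomy)."""
    VELOCITY = auto()
    POSITION = auto()
    JOINT_ANGLES = auto()
    JOINT_TORQUES = auto()
    MUSCLE_ACTIVATIONS = auto()

class DiscreteMovement(Enum):
    """Discrete outputs that map directly onto canned Avatar animations."""
    TARGET_POSE = auto()
    GESTURE = auto()
    GRIP = auto()
    FACIAL_EXPRESSION = auto()

@dataclass
class BciSample:
    """One decoded sample: either a continuous vector or a discrete label."""
    timestamp: float
    kind: ContinuousVariable | DiscreteMovement
    value: np.ndarray | str
```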


Generally speaking, the more upstream the movement variable, the finer and more versatile the VR emulation. Obtaining these desired variables remains a challenge, however. Specific muscle activations arise not from the spiking activity of single neurons (Churchland and Shenoy, 2007) but from sustained changes in the activation structure of the neural populations commanding that muscle. Formally, this change in activation structure is the output-potent trajectory of the activated neural modes on a low-dimensional manifold (Versteeg and Miller, 2022).
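As a toy illustration of the manifold view (synthetic data, not the cited analyses), projecting simulated population activity onto its leading principal components recovers the low-dimensional trajectory that such neural modes trace out:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate firing rates of 100 neurons over 500 time bins that are in fact
# driven by only 3 shared latent signals ("neural modes") plus private noise.
T, n_neurons, n_modes = 500, 100, 3
latents = np.cumsum(rng.standard_normal((T, n_modes)), axis=0)   # smooth latent trajectories
mixing = rng.standard_normal((n_modes, n_neurons))               # how each mode drives each neuron
rates = latents @ mixing + 0.5 * rng.standard_normal((T, n_neurons))

# PCA: the leading components approximate the neural modes, and projecting
# onto them recovers the population's trajectory on the manifold.
centered = rates - rates.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
variance_explained = (s**2 / (s**2).sum())[:n_modes].sum()
trajectory = centered @ vt[:n_modes].T      # (T, 3) latent trajectory

print(f"Top {n_modes} modes explain {variance_explained:.0%} of the variance")
```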


Indeed, as shown in Hand and Arm Movements, the best-performing BCIs shift their control strategies away from relying on the straightforward covariance of individual neurons with movement variables in favor of treating the motor cortex as a dynamical system maintained by the collective activity of populations of neurons (Gallego et al., 2017; Gallego et al., 2018).


A leading perspective in line with the neural manifold hypothesis is that instead of coordinating muscles individually, the motor cortex operates in a lower-dimensional musculoskeletal space by recruiting muscles functionally co-activated during a task (so-called "synergies") (Santello et al., 1998; Mulla and Keir, 2023), although some implicate the cortex even in the control of the spinal cord's motor units (Marshall et al., 2022).


How the motor cortex controls movement is still highly debated. Therefore, the VR application should not impose constraints on the movement variables it can receive.

Should muscle synergies be the primary continuous output of Motor BCIs, the VR application is tasked with computing muscle-driven forward kinematics, which output target positions and orientations of bones (e.g., using optimal control methods). Because these bones are virtual and not supported by a system of physical joints, links, and actuators, the task is more straightforward than in robotics applications.
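A heavily simplified, single-joint sketch of the idea (the synergy matrix, moment arms, and forces below are made up, not a validated musculoskeletal model): synergy activations are expanded into muscle activations, converted to a net joint torque, and integrated forward to obtain the orientation of the virtual bone.

```python
import numpy as np

rng = np.random.default_rng(0)
decoded_synergies = np.clip(rng.standard_normal((200, 2)) * 0.2 + 0.3, 0.0, 1.0)  # stand-in for the BCI stream

# Two synergies recruit three muscles; all parameters are illustrative.
synergy_matrix = np.array([[0.9, 0.6, 0.0],
                           [0.0, 0.2, 1.0]])          # (n_synergies, n_muscles)
moment_arms = np.array([0.03, 0.02, -0.04])           # m; sign distinguishes flexors from extensors
max_forces = np.array([300.0, 200.0, 350.0])          # N
inertia, damping, dt = 0.05, 0.3, 0.01                # kg*m^2, N*m*s/rad, s

angle, velocity = 0.0, 0.0                            # virtual elbow angle (rad) and angular velocity
for synergy_activation in decoded_synergies:
    muscle_activation = np.clip(synergy_activation @ synergy_matrix, 0.0, 1.0)
    torque = np.sum(moment_arms * max_forces * muscle_activation)
    acceleration = (torque - damping * velocity) / inertia
    velocity += acceleration * dt
    angle += velocity * dt                            # new orientation of the Avatar's forearm bone
```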


Conversely, most existing decoders currently output downstream joint/effector angles and velocities (Goodman et al., 2019; see Continuous Movements for more). In such cases, the VR application must calculate inverse kinematics to estimate upstream bone transformations, solving for the effects of dynamic coupling on the kinematic chain (e.g., using static optimization methods).
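For this more common case, here is a minimal sketch of iterative inverse kinematics on a two-link planar arm (a stand-in for the Avatar's upper limb; the link lengths, damping, and target are arbitrary): given a decoded fingertip target, damped least-squares iterations recover the shoulder and elbow angles the virtual bones should assume.

```python
import numpy as np

l1, l2 = 0.30, 0.25                       # upper-arm and forearm lengths (m), arbitrary

def forward(q):
    """Planar fingertip position for joint angles q = [shoulder, elbow]."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def solve_ik(target, q=np.zeros(2), damping=1e-2, iters=100):
    """Damped least-squares IK: nudge joint angles until the fingertip reaches the target."""
    for _ in range(iters):
        error = target - forward(q)
        if np.linalg.norm(error) < 1e-4:
            break
        J = jacobian(q)
        dq = J.T @ np.linalg.solve(J @ J.T + damping * np.eye(2), error)
        q = q + dq
    return q

angles = solve_ik(np.array([0.35, 0.20]))   # decoded effector target from the Motor BCI
```

In practice, such a solver would run once per rendered frame, warm-started from the previous frame's joint angles so the Avatar's motion stays smooth.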


Before receiving particular movement variables, the VR application should recognize which part(s) of the body the User intends to move. As proposed in the Context Identification section, one strategy may involve a hierarchical set of decoders first identifying target effectors across the body (akin to the demonstrations of Willett et al., 2020) before engaging the other specialized movement decoders. During the simultaneous intended movement of multiple effectors or other perturbations of neural dynamics, such as changes in movement targets or inherent population non-stationarities, yet another model maintains the stability of neural manifolds (see the box in The Breakthrough for examples).
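A schematic of such a hierarchy, with entirely hypothetical class names and stand-in models: a first-stage classifier picks the intended effector, and the matching specialized decoder then produces the movement variables handed to the VR application.

```python
import numpy as np

class EffectorClassifier:
    """Stage 1: coarse classifier mapping a neural feature window to a body part."""
    EFFECTORS = ("right_hand", "left_hand", "face", "legs")
    def predict(self, features: np.ndarray) -> str:
        return self.EFFECTORS[int(np.argmax(features[:4]))]   # stand-in for a trained model

class HandDecoder:
    """Stage 2: specialized continuous decoder for one effector."""
    def decode(self, features: np.ndarray) -> np.ndarray:
        return features[:3]                                   # stand-in for a decoded velocity vector

class FaceDecoder:
    def decode(self, features: np.ndarray) -> str:
        return "smile"                                        # stand-in for a discrete expression

stage_one = EffectorClassifier()
stage_two = {"right_hand": HandDecoder(), "left_hand": HandDecoder(), "face": FaceDecoder()}

def route(features: np.ndarray):
    """Identify the target effector, then delegate to its specialized decoder."""
    effector = stage_one.predict(features)
    decoder = stage_two.get(effector)
    return effector, (decoder.decode(features) if decoder else None)

effector, command = route(np.random.default_rng(0).standard_normal(16))
```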


Efforts to Combine Motor BCI with VR


Some Motor BCI research groups recognize the benefit of pre-emptive VR simulations for the effective and efficient use of a robot. Looping the User into an interactive, real-time VR simulation can uniquely test the system's usability and performance during environmental interaction (i.e., handling of contacts, passive forces, and other operating conditions). Conversely, prolonged immersion can train the User to improve their future robotic control at a lower risk of harm, equipment damage, or jeopardizing the study.


👁️‍🗨️ Specifically for Motor BCIs sourcing control parameters from sensitive cortical dynamics, VR offers unprecedented observability of the User's visual perception of their movement and the environment.


Visuospatial information obtained from the orientation of the HMD, coupled with standard gaze tracking, can quantify the User's recognition of body positions, which modulates the premotor circuits involved in proprioception (Graziano, 1999; Guerraz et al., 2012; Zakharov et al., 2020). Similarly, these methods can obtain target movement locations, which the premotor cortex encodes into motor plans (Grafton, 2010; Karl and Whishaw, 2013).


Courtesy of the physics engines powering the simulation, the properties of the environment and all its objects, including the Avatars themselves, are already well-described at arbitrary timesteps. This digitization of the environment is another valuable input for Motor BCIs, as audiovisual stimuli affect the planning dynamics of the premotor cortex (Mazurek and Schieber, 2017). For example, when reaching to grasp an object, its size, shape, and even color influence the neural dynamics that preshape the hand and fingers (Haffenden and Goodale, 2000; Cole, 2008; Chouinard et al., 2009).
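As an illustration only (the field names and schema below are hypothetical), the state the physics engine already maintains can be flattened into context features for the decoder, for example the size of and distance to the currently fixated object:

```python
import numpy as np

def object_context_features(scene: dict, gaze_target: str, hand_position: np.ndarray) -> np.ndarray:
    """Summarize the fixated object's geometry for a grasp decoder (hypothetical schema)."""
    obj = scene[gaze_target]
    size = np.asarray(obj["half_extents"])            # known exactly to the physics engine
    offset = np.asarray(obj["position"]) - hand_position
    return np.concatenate([size, offset, [np.linalg.norm(offset)]])

scene = {"mug": {"position": [0.4, 0.1, 0.9], "half_extents": [0.04, 0.04, 0.06]}}
features = object_context_features(scene, "mug", np.array([0.1, 0.0, 0.8]))
```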


 

Some Motor BCI labs seek these VR benefits without the burden of developing and maintaining their own simulation and visualization tools. In 2011, Davoodi and Loeb released MSMS, a VR software platform for simulating and test-driving Motor BCI-controlled prostheses.


Figure 36; Davoodi and Loeb, 2011: MSMS models of multi-DOF prostheses and the task environments simulating rehabilitation tasks and games. Reprinted from Figure 3.


Baking in decades of prior research on neuromusculoskeletal dynamics (the tool's original purpose; Davoodi et al., 2004; Loeb and Davoodi, 2015), the platform came equipped with tools to:


  • Assemble 3D models of human and robotic body parts (or import them from SIMM or SolidWorks)

  • Build scenes with interactable objects (incl. cameras, lights, textures, and audio sources)

  • Design standard animations used in ADLs (e.g., sequence of elbow flexion, hand opening/closing)

  • Synthesize the kinematic chains of simple point-to-point movements (and store them as motion files).

  • Convert its 3D and kinematic models into a Simulink model that may be further extended to compute complex physics-based movement dynamics and integrate control inputs from the Motor BCI.

  • Animate the 3D models according to a stream of live motion commands (outputted by the physics model governing the environment or simply the movement variables decoded by the Motor BCI). These can trigger or override the standard animations.


The 3D models and animations are described in XML and rendered using Java and Java3D. The animations can be visually designed and customized using PowerPoint.


A real-time PC and a visualization PC make up the bare minimum configuration for real-time VR applications. The real-time PC must dedicate all of its resources to simulating the physics, running application logic, exchanging data with external devices, and transmitting animation data to the visualization PC over UDP. The visualization PC must use a high-end graphics card to render the stereoscopic 3D frames necessary for VR.


While MSMS can accept animation data from any runtime implementing its "Feature Commands" protocol over UDP, it is clearly geared toward the academic user with a MATLAB license who can tolerate its computational limitations and demands.
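The split-PC architecture boils down to streaming small animation packets over UDP at the simulation rate. A minimal sketch of the pattern follows; the payload layout, addresses, and helper names are illustrative and are not MSMS's actual Feature Commands format.

```python
import socket
import struct
import time

VIZ_PC = ("192.168.0.20", 50000)        # address of the visualization PC (placeholder)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def step_physics_and_decode() -> list[float]:
    return [0.0] * 7                     # stand-in for one physics step plus decoded joint angles

def send_pose(joint_angles: list[float], t: float) -> None:
    """Pack a timestamp plus joint angles into one datagram (illustrative layout)."""
    payload = struct.pack(f"<d{len(joint_angles)}f", t, *joint_angles)
    sock.sendto(payload, VIZ_PC)

for _ in range(1000):                    # real-time loop on the simulation PC
    joint_angles = step_physics_and_decode()
    send_pose(joint_angles, time.time())
    time.sleep(0.01)                     # ~100 Hz update rate
```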


For instance, to render the movement of just 48 objects using an NVIDIA Quadro FX4800 at a meager resolution, MSMS introduces nearly 18 ms of rendering latency, which does not even include the latencies associated with simulating the environment on the other PC and with the networked data transfer (Davoodi and Loeb, 2012). For comparison, the same card both simulates and renders significantly more complex programs at a lower framerate, such as the popular driving game Dirt 3 (as benchmarked by TechLab).


Indeed, over the past decade, dozens of Motor BCI studies have been conducted with MSMS (for example, Cunningham et al., 2010; Fan et al., 2015), and MSMS was incorporated into the Virtual Integration Environment ("VIE") framework at the Johns Hopkins Applied Physics Lab (Armiger et al., 2011).


While many other tools have been developed to model the musculoskeletal mechanics of the human body, namely AnyBody, OpenSim, and SIMM, they do not offer closed-loop control via VR, as their focus is mainly on biomechanical analyses of movement.


Despite its shortcomings, MSMS paved the way for Motor BCI research leveraging real-time VR control. Several similar programs have emerged since its inception, for example:


  • Topalovic et al. (2021) developed a platform integrating VR with intracranial recording and stimulation.

  • Paschall (2022) developed a bidirectional VR interface for Motor BCI in which a User: 1. used eye tracking to focus on a virtual object; 2. triggered a grabbing animation of said object with threshold-crossing gamma activity in their primary somatosensory cortex, evoked by lifting the right elbow and captured by an sEEG probe; and 3. felt as though they were touching the object with their thumb as a result of the probe's subsequent stimulation of their primary somatosensory cortex (at a different depth than in 2).

  • In as-yet-unpublished studies, James Johnson, a User with severe paralysis in Caltech's Andersen Lab, used his Utah-array-based Motor BCI to (Drew, 2022; Johnson, 2023):
    - Drive a car using his decoded finger movements. Steering felt akin to moving an imaginary joystick, while braking was done with an attempted thumb press. At first, the car was virtual, but later he drove a physical car adapted to the commands of his Motor BCI, streaming back a camera feed from thousands of kilometers away.
    - Control his Avatar in a variety of VR reaching, transporting, and placing tasks.

  • Metzger et al. (2023) decoded the speech and facial movements of Ann (another User with severe paralysis) and mapped them onto a digital, though not virtual, Avatar as she expressed herself. The Avatar's voice and appearance were modeled after Ann's likeness before her brainstem stroke.

  • Rosenthal (2023) studied how a User’s visual perception of touching an Avatar impacts the neural activity in their primary somatosensory cortex.

  • With too many studies to list individually, the Cortical Bionics research group has extensively used VR in their Motor BCI studies since 2015 (Wodlinger et al., 2015).


Tangentially, mainly in the context of prosthetic training, EMG muscle activity has also been used to control Avatars, unsurprisingly more often than Motor BCI control (CTRL-Labs' Melcer et al., 2018; Perry et al., 2018; Bustamante et al., 2021; Shim et al., 2022; Alvi Labs, 2023; Segas et al., 2023).


The Cortical Bionics group, in particular, has developed the most advanced Motor BCI-VR pipeline (Vid. 19), although much of their implementation has yet to be released publicly.



Video 19; UChicago Medicine, 2022: Scott, a participant in a recent Cortical Bionics group study, uses his Motor BCI to control his virtual hand simulated and rendered by MuJoCo.


A key characteristic setting them apart from those who use anthropomorphic neuromusculoskeletal simulators is their use of the Multi-Joint dynamics with Contact ("MuJoCo") engine, a popular physics simulator originally developed for robotics (Todorov et al., 2012).


MuJoCo, favored by many for its computational efficiency (Erez et al., 2015), accurately simulates complex humanoid musculoskeletal dynamics, making it a suitable backbone for real-time Motor BCI control.


With the HAPTIX addition (Kumar and Todorov, 2015), created to model contacts and provide VR feedback, MuJoCo is increasingly adopted by the biomechanics community over traditional tools (Joyner et al., 2021).


Although it can be argued that robotics-focused simulators lack validated musculoskeletal models, efforts to bridge this gap are underway (Ikkala and Hamalainen, 2020; Fischer et al., 2021; Saputra et al., 2023). For instance, the MyoSim pipeline generates MuJoCo models with accuracy similar to OpenSim's while computing them over two orders of magnitude faster (Wang et al., 2022).


MuJoCo also includes an OpenGL-based rendering engine, efficiently converting mathematically defined 3D models into visually perceivable images displayed in the HMD.
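A minimal sketch using MuJoCo's Python bindings (the toy single-joint model and control mapping below are placeholders, not the Cortical Bionics group's actual pipeline): decoded commands are written into the model's actuators, the multibody dynamics are stepped, and a frame is rendered offscreen.

```python
import mujoco
import numpy as np

XML = """
<mujoco>
  <worldbody>
    <light pos="0 0 2"/>
    <body name="forearm" pos="0 0 1">
      <joint name="elbow" type="hinge" axis="0 1 0"/>
      <geom type="capsule" size="0.03" fromto="0 0 0 0.3 0 0"/>
    </body>
  </worldbody>
  <actuator>
    <motor joint="elbow" gear="10"/>
  </actuator>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)
renderer = mujoco.Renderer(model, height=240, width=320)

for decoded_torque in np.sin(np.linspace(0, 2 * np.pi, 200)):  # stand-in for the BCI stream
    data.ctrl[:] = decoded_torque        # write the decoded command into the actuator
    mujoco.mj_step(model, data)          # advance the multibody dynamics
    renderer.update_scene(data)
    frame = renderer.render()            # (240, 320, 3) image for the HMD / desktop view
```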


Figure 37; MuJoCo, 2024: Rendering of Robotis OP3, a sample model in their ‘Menagerie’ collection.


Unlike with MSMS, no second "visualization PC" is required, although MuJoCo has also never prioritized visualization beyond the needs of R&D departments. Instead, to support stunning visual experiences, MuJoCo released a Unity plug-in sometime in the mid-2010s (v1.55). This integration uses MuJoCo exclusively for physics simulation, while Unity exclusively renders the MuJoCo objects, ignoring their physics (see example implementations here). It should be noted that Unity's built-in 3D physics is already impressive, as it integrates Nvidia's PhysX engine (Unity, 2019).


From controlling the arm in 10 DoF (Wodlinger et al., 2015) to evoking finely graded perception of force in each paralyzed finger (Greenspon et al., 2023) and using those percepts during virtual object manipulation tasks (Shelchkova et al., 2023), the Cortical Bionics group continues to pioneer the use of VR at the forefront of Motor BCI research.


 

Part 6 of a series of unedited excerpts from uCat: Transcend the Limits of Body, Time, and Space by Sam Hosovsky*, Oliver Shetler, Luke Turner, and Cai Kinnaird. First published on Feb 29th, 2024, and licensed under CC BY-NC-SA 4.0.

uCat is a community of entrepreneurs, transhumanists, techno-optimists, and many others who recognize the alignment of the technological frontiers described in this work. Join us!


*Sam was the primary author of this excerpt.


