Samuel Hosovsky

Cyborgs — The First Citizens Of Metaverse

Why no one stands to gain as much as them



Designed to mirror the user’s movements and speech, virtual Avatars extend the user’s physical expressions into the Metaverse by applying them to a sufficiently anthropomorphic 3D model in real time. Solidifying the embodiment, spatially tracked head-mounted displays (“HMDs”) render 3D audiovisual content from a first-person perspective, including each Avatar that comes into view.


An embedded microphone captures speech, infrared cameras track eye and face movement, monochrome cameras track hand and finger movement, and motion sensors capture limb movement, together giving the Avatar access to the user’s full range of expressions.
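To make this pipeline concrete, the sketch below (in Python, with purely illustrative field names and values) shows how one frame of tracked expression might be retargeted onto an avatar rig at each rendering tick; a real runtime would add calibration, smoothing, and per-bone retargeting that are omitted here.

```python
from dataclasses import dataclass

@dataclass
class ExpressionFrame:
    """One frame of tracked user expression (all field names are illustrative)."""
    audio_level: float      # from the embedded microphone
    eye_gaze: tuple         # from the infrared cameras (yaw, pitch in radians)
    blendshapes: dict       # face-tracking weights, e.g. {"jawOpen": 0.4}
    hand_joints: list       # monochrome-camera hand tracking, per-joint rotations
    head_pose: tuple        # position + orientation from the motion sensors

def apply_to_avatar(avatar: dict, frame: ExpressionFrame) -> dict:
    """Retarget one frame of user expression onto an avatar rig.

    A real runtime would calibrate, smooth, and retarget per bone;
    this sketch only copies the values across.
    """
    avatar["head"] = frame.head_pose
    avatar["eyes"] = frame.eye_gaze
    avatar["face"] = dict(frame.blendshapes)
    avatar["hands"] = list(frame.hand_joints)
    avatar["mouth_open"] = min(1.0, frame.audio_level * 2.0)  # crude lip sync
    return avatar

# One update tick (headsets typically run 60-90 of these per second)
avatar_rig = {}
tick = ExpressionFrame(
    audio_level=0.3,
    eye_gaze=(0.05, -0.02),
    blendshapes={"jawOpen": 0.4, "browInnerUp": 0.1},
    hand_joints=[(0.0, 0.0, 0.1)] * 26,
    head_pose=(0.0, 1.6, 0.0, 0.0, 0.0, 0.0, 1.0),
)
print(apply_to_avatar(avatar_rig, tick)["mouth_open"])
```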


The resulting sense of immersion (sense of presence, body ownership, and agency) already motivates many to perform activities of daily living (“ADLs”) immersed in virtual reality, with applications designed for creative expression, collaboration, socialization, traveling, entertainment, sport, and more.


However, our Users cannot use the established control modalities to connect with their Avatars. While they may process visual and audio information just fine, their severe paralysis prevents them from using hand-held controllers, head-tracked HMDs, or even voice to explore and interact with the virtual environment.


The emerging Motor BCIs hold the power to make these limitations a thing of the past. As we saw in the previous chapter, present-day solutions can already decode much of the body’s kinematics and speech at near-natural rates.


Should one map these decoded movements onto the User’s embodied Avatar, the User could bypass the usual VR input modalities and safely perform the activities they yearn for. Not only does VR facilitate this self-fulfillment, but a fortunate byproduct of this digital veil is that the User will be perceived just like users without paralysis.
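The substitution this paragraph describes can be sketched as follows: the avatar update loop is agnostic to where a pose comes from, so a Motor BCI decoder can be packaged as just another input source in place of camera-based tracking. Every class and method name below is a hypothetical stand-in, not any vendor's API.

```python
import random

class HandTrackingSource:
    """Standard VR input: optical hand tracking from the headset cameras."""
    def next_pose(self) -> dict:
        return {"hand_joints": [0.0] * 26, "origin": "cameras"}

class StubDecoder:
    """Stand-in for a Motor BCI decoder: a real one maps neural features
    (spike counts, band power, ...) to intended kinematics."""
    def read_window(self) -> list:
        return [random.random() for _ in range(96)]      # e.g. 96 channels
    def predict(self, features: list) -> list:
        return [f * 0.01 for f in features[:26]]         # fake joint angles

class MotorBciSource:
    """Decoded attempted movement, packaged like any other input source."""
    def __init__(self, decoder):
        self.decoder = decoder
    def next_pose(self) -> dict:
        window = self.decoder.read_window()
        return {"hand_joints": self.decoder.predict(window), "origin": "motor_bci"}

def update_avatar(avatar: dict, source) -> dict:
    """The avatar update loop does not care where the pose came from."""
    avatar.update(source.next_pose())
    return avatar

avatar = {}
update_avatar(avatar, HandTrackingSource())            # a user without paralysis
update_avatar(avatar, MotorBciSource(StubDecoder()))   # a Motor BCI User
print(avatar["origin"])
```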


At last, I tie the value of VR to the User needs (defined in User Outcomes), enabled by the behavioral insights offered by the State-of-the-Art Motor BCIs.


Meeting the User’s Needs with VR


Immersive technologies aim to fuse the physical and virtual worlds imperceptibly. The high degree of control over virtual environments offers patients a wide range of benefits across conditions.



Beyond academia, many such VR applications have found commercial success. XRHealth has transitioned many medical assessments and treatments into VR. Having recently widened its mental health offering with the acquisition of Amelia, XRHealth offers a wide array of VR experiences (so-called “Virtual Treatment Rooms”) for cognitive, mental, and physical indications — each registered with the FDA, some covered by CMS (under “Remote Health Monitoring”). As patients visit the Virtual Treatment Rooms, physicians can log into a digital portal to monitor patients’ detailed measurements and wider progress, and to create or adjust their VR treatment plans.


(To play, see footnote link) Video 11; XRHealth, 2021: Overview of XRHealth applications that provide real-time data to patients and clinicians about progress.


RelieVRx by AppliedVR, which reduces chronic lower-back pain, was the first to gain FDA approval for a take-home VR therapy (Park, 2021) and has since been granted reimbursement coverage with a unique HCPCS Level II code (E1905) (Diamond, 2023). Penumbra’s REAL System provides a range of upper extremity rehabilitation exercises in VR and it too has gained FDA approval (Kirsh, 2019).


In fact, the FDA has approved nearly 40 immersive systems to date (FDA, 2023). The large number of FDA-approved solutions should not obscure the difficulty the FDA faces in evaluating their safety and effectiveness across many complex technical (Fig. 29) and clinical (Fig. 30) factors.


Figure 29; Beams et al. 2022: Summary of technical evaluation challenges for Extended Reality applications. Reprinted from Table 1.


Figure 30; Beams et al. 2022: Clinical Evaluation Challenges for the Application of Extended Reality Devices in Medicine. Reprinted from Table 2.



Aside from the VR device coverage of E1905 (as durable medical equipment), CMS has also granted several CPT codes that cover VR-related procedures (O’Connor and Chaudhary, 2023):


  • 98975–98977, 98980–98981: Remote Therapeutic Monitoring

  • 0770T: VR Technology to Assist Therapy

  • 0791T: Motor-cognitive, semi-immersive VR–facilitated gait training

  • 0771T–0774T: VR procedural dissociation, which uses a computer-generated audio, visual, and proprioceptive immersive environment to create a temporary state of altered consciousness that improves patient comfort by increasing pain tolerance and/or decreasing pain sensation during a given procedure.


The uptake of VR in healthcare (for a review, see Bansal et al., 2022; Won Lee, 2022) is supported by a broader rise of VR and its continued growth. McKinsey & Co. (2022) estimate that the Metaverse may generate up to $5 trillion in economic impact by 2030, and Omdia (2023) forecasts $7.5 billion in consumer spending on VR content by 2026. As with the broader neurotechnology market, the Extended Reality (“XR”) market analysis is not the focus of this work, as many have written about it elsewhere.


Numerous basic, instrumental, and advanced/recreational ADLs can already be simulated in VR, which has led to its use in evaluating and improving cognitive and physical health:


One of the most advanced ADL simulations, which quantifies the user’s ability to shop for groceries, was created by Alberts et al. (2022) at the Cleveland Clinic:


(To play, see footnote link) Video 12; Alberts et al., 2022: Participant walks on an omnidirectional treadmill in VR, shopping for specific items on their list in a simulated grocery store.



Quantitatively, Buele and Navarro (2023) highlight the positive impact of simulating ADLs in VR on cognitive-motor functioning in older adults with cognitive impairments:


Figure 31; Buele and Navarro, 2023: Characteristics of the meta-analyzed studies, which reveal that motor movements coupled with VR-based cognitive-motor interventions activate specific brain areas and foster improvements in general cognition, executive function, attention, and memory.



In addition to functional benefits, simulating ADLs in VR can automate otherwise burdensome care (Bauer and Andringa, 2020), foster daily routine independence unattainable in clinical gym setups (Matsangidou et al., 2023), and increase motivation relative to conventional physical training programs (Pichierri et al., 2011).


Success Criteria Satisfied with VR


Motor BCIs can already decode much of the User’s biomechanics, including attempted Speech and Facial Movements, Finger Movements, Hand and Arm Movements, and even some full body movements (Integrated Decoding). Given the rate of improvement driven by commercialization, miniaturization, biocompatibility, and machine learning efforts, one can estimate the extent to which the needs of Users may be met in virtual worlds should they be able to embody their Avatars with a best-of-breed Motor BCI.


Development

To approach such a task, I start with the comprehensive breakdown of Users’ desires in Success Criteria for Clinical Motor BCI — User Outcomes, treating them as a target list of outcomes that Motor BCI-powered VR applications should satisfy. Then, I match them with appropriate VR experiences available today.


The following table (Table 3) associates the 31 previously identified Success Criteria with 106 generally available “VR Experience(s)” — the first link in each entry leads to a demo of a representative example.


To the extent that VR blurs the line between the physical and the virtual, the following VR experiences allow Users to become who they have most longed to be.


(For the full 106 experiences see the footnote link) Table 3; uCat, 2024: 106 VR Experiences Satisfying The 31 Clinical Motor BCI Success Criteria



Interpretation

The “Motor BCI Decoding Requirements” column indicates the minimal decoding capabilities needed to satisfy the motor outcomes associated with each Success Criterion, as detailed by the associated QOL instruments. Attempted neck movements are omitted because they are necessary for every VR experience.


The “Rationale” column further discusses the mapping of VR experiences to the specific QOL instrument items underpinning each Success Criterion and provides other helpful context for the VR experiences as needed.
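For readers who prefer to see the table’s schema as a data structure, a minimal sketch of one row of Table 3 follows; the field names and the example values are illustrative only and do not reproduce the table’s actual entries.

```python
from dataclasses import dataclass

@dataclass
class SuccessCriterionRow:
    """One row of Table 3 (field names are descriptive, not the table's own)."""
    criterion: str               # one of the 31 Success Criteria
    decoding_requirements: list  # minimal Motor BCI outputs needed (neck excluded)
    vr_experiences: list         # generally available experiences that satisfy it
    rationale: str               # mapping to the underlying QOL instrument items

# A made-up example, not an actual entry from the table
row = SuccessCriterionRow(
    criterion="Socialize with family and friends",
    decoding_requirements=["attempted speech", "facial movements"],
    vr_experiences=["a social VR platform", "a multiplayer co-op game"],
    rationale="Covers QOL items on social participation and communication.",
)
print(row.criterion, "->", ", ".join(row.decoding_requirements))
```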


Practical Implications

Table 3 reveals that generally accessible VR experiences can fulfill many of the valuable outcomes of Motor BCIs that Users, their caretakers, and the research community consider important.


The particular strengths of the available VR content are social, vocational, and pleasurable activities, which offer the User a near-unlimited range of expression without their appearing disabled. Indeed, VR levels the playing field for all its users with universally cartoonish avatars and simplifies dexterous physical movements into approximations, serving as the perfect sandbox in which Users can train to master their Motor BCI skills.


While cartoonish at present, VR characters are poised to receive significant upgrades soon, reconstructing each user’s likeness virtually and in real time. The 3D Gaussian splatting head avatars developed by Xu et al. (2023) and Saito et al. (2023) can accurately depict extremely complex and exaggerated facial expressions and intricate sub-millimeter details (such as individual hairs or pores), finally allowing photorealistic avatars to escape the uncanny valley. Similarly, full-body avatars (Kocabas et al., 2023) and environments (Xie et al., 2023) will receive the ‘Gaussian upgrade.’
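To give a rough intuition for how this representation captures sub-millimeter detail, each element of a splatted avatar is an anisotropic 3D Gaussian with its own position, scale, orientation, opacity, and view-dependent color. The sketch below shows a single such primitive and the standard covariance construction; the numbers are placeholders, and the millions of optimized primitives, the rasterizer, and the animation machinery of the cited methods are not shown.

```python
import numpy as np

# One Gaussian primitive of a splatted avatar: position, anisotropic extent
# (scale + rotation), opacity, and view-dependent color stored as
# spherical-harmonic coefficients. All numbers are placeholders.
splat = {
    "position": np.array([0.01, 1.62, 0.05]),        # meters
    "scale":    np.array([4e-4, 4e-4, 1e-4]),        # sub-millimeter extent
    "rotation": np.array([1.0, 0.0, 0.0, 0.0]),      # unit quaternion (w, x, y, z)
    "opacity":  0.85,
    "sh_coeffs": np.zeros((16, 3)),                  # degree-3 SH color
}

def covariance(scale: np.ndarray, quat: np.ndarray) -> np.ndarray:
    """3D covariance of one Gaussian: Sigma = R S S^T R^T."""
    w, x, y, z = quat
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    S = np.diag(scale)
    return R @ S @ S.T @ R.T

print(covariance(splat["scale"], splat["rotation"]).shape)  # (3, 3)
```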


Realistically rendered Avatars, although meaningful, are not enough to secure immersion; realistic motion control is essential for their success. Some methods rely only on audio or text to animate the avatar (Tevet et al., 2022; NVIDIA, 2023; Zhao et al., 2024). Most use diffusion models, which may one day accept neural data as input.
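The architectural point that matters here is that the conditioning signal is simply another input handed to the denoiser at every step, so an audio or text embedding could in principle be replaced by decoded neural features. The toy sketch below illustrates only that interface; it is not a faithful diffusion sampler, and all shapes and values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_denoiser(motion, t, conditioning):
    """Stand-in for a trained network: nudges motion toward the conditioning."""
    return 0.1 * (motion - conditioning)

def denoise_step(motion, t, conditioning):
    """One reverse step: the model predicts noise given the current motion,
    the timestep, and the conditioning vector (audio features, a text
    embedding, or, speculatively, decoded neural features)."""
    return motion - toy_denoiser(motion, t, conditioning)

# 60 frames x 24 joints x 3 rotation parameters, starting from pure noise
motion = rng.standard_normal((60, 24, 3))
conditioning = np.zeros_like(motion)   # pretend this came from audio or neural data
for t in reversed(range(50)):
    motion = denoise_step(motion, t, conditioning)
print(float(np.abs(motion).mean()))    # the motion has been pulled toward the prior
```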



Video 13; Fridman, 2023: An interview with Mark Zuckerberg in VR using the Saito et al. (2023) model on the Lex Fridman Podcast #398



Video 14; Tong, 2024: An interview recorded entirely using ‘Personas’, Apple Vision Pro avatars.


The main limitation of meeting User needs with VR is that many actions taken in the virtual world lack an effect on the physical one. While this may be addressed in the future by Converging VR and Robotics, currently, drinking in VR does not quench the User’s thirst and running does not expand their lungs.


A strictly economic perspective may consider the cost of maintaining the User’s day-to-day physical health to be unaffected by VR, although it may be offset by the remuneration the User receives from engaging in the extensive occupations unique to VR.


However, whether VR meets the User’s needs, defined in terms of patient-reported outcomes (PROs), largely depends on the User’s subjective attitudes towards the “reality” of VR. In today’s world, digital technology permeates every aspect of our lives, eroding the once-clear boundaries between the human experience and technological influence.


The technology of the 19th century (like dams, canals, or railways) overwhelmed the senses, eliciting sublime experiences of the individual’s insignificance and powerlessness before its might (“Sublime” in the classical Kantian sense, as applied to technology by Nye, 1996). With time, new branches of technology spawned increasingly personalized and intimate offshoots: less visible, less inorganic, less unnatural, and less distant (for a review, see Wolfe, 2018). This breed of technology has become an inseparable and intuitive extension of oneself (a sense of being one with, say, one’s vehicle or tools) and of one’s environment (as one might say: “I am on this social media” or “See you in that video game”). The world of digital role-playing games has been most successful in blurring the human-technology boundary, combining these two extensions. It reduces the incomprehensibly complex production, assembly, and computation underlying these games to an increasingly seamless user experience. It is events like a catastrophic system failure in the digital realm (like corrupted hardware) that bring about the sublime sensation in which one’s digitally extended self is destroyed and one becomes aware of the incomprehensibility of the technology that lies beneath (Shinkle, 2012).


Conversely, a User with paralysis might feel trapped by their body, profoundly disconnected from it. Only a sense of a digitally extended self may restore the connection, as attempted movements result in perceived movement. Here, the notion of the digitally extended self is flipped into a narrative of a physically restricted but virtually native self, making the prospect of VR an appealing way to restore their otherwise denied natural behavior.


Whether reality is physical, entirely constructed in the mind, a combination of both, or something else entirely, we may never know. Depending on their beliefs, each User may prefer the physical world or the virtual world, treat the two as distinct yet prefer neither (“digital dualists” as defined by Loewen, 2022), or draw no such distinction (e.g., the “mind-body-ists” of Hayles, 2013). Such beliefs may then strongly influence the User’s subjective appreciation of VR, a key factor in its impact on their QOL (as a type of Motor BCI application).



 

Part 13 of a series of unedited excerpts from uCat: Transcend the Limits of Body, Time, and Space by Sam Hosovsky*, Oliver Shetler, Luke Turner, and Cai Kinnaird. First published on Feb 29th, 2024, and licensed under CC BY-NC-SA 4.0.



uCat is a community of entrepreneurs, transhumanists, techno-optimists, and many others who recognize the alignment of the technological frontiers described in this work. Join us!


*Sam was the primary author of this excerpt.
