Ebook: Medicine Meets Virtual Reality 22
In the early 1990s, a small group of individuals recognized how virtual reality (VR) could transform medicine by immersing physicians, students, and patients in data more completely. Technical obstacles delayed progress, but VR is now enjoying a renaissance, with breakthrough applications available for healthcare.
This book presents papers from the Medicine Meets Virtual Reality 22 conference, held in Los Angeles, California, USA, in April 2016. Engineers, physicians, scientists, educators, students, industry and military representatives, and futurists participated in its creative mix of unorthodox thinking and validated investigation. The topics covered include medical simulation and modeling, imaging and visualization, robotics, haptics, sensors, physical and mental rehabilitation tools, and more.
Providing an overview of the state of the art, this book will interest all those involved in medical VR and in innovative healthcare generally.
This conference was conceived in 1991 when a small group of individuals envisioned how virtual reality, then in its first era of widespread enthusiasm, might transform medicine by immersing physicians, students, and patients in data more completely.
They predicted that interactive learning tools might better engage medical students by assessing real-time performance and customizing lessons in sync. Simulation could enhance the “see one, do one, teach one” model with the repetition that athletes and musicians use to perfect their skills. After training on simulators, novice caregivers could tend to their first patients with expertise they'd gained from making many previous errors that did no harm.
In addition, they imagined that visualizing patient data in 3D and 4D would give physicians the power to diagnose more accurately and strategize more precise therapies. Tissues, organs, and systems would be color coded, highlighted, and viewed in motion from multiple angles, revealing previously hidden features and relationships. Computers would join the clinical team.
Psychotherapy presented yet another promising application for VR. Within controlled virtual environments, patients might revisit traumatic experiences or confront phobias. Images would arouse emotions more intensely than words, possibly resulting in more complete healing. And, from pain management to Parkinson's, VR also gave researchers hope as a new tool to aid physical rehabilitation.
Although the VR boom of the early '90s faded when technical obstacles repeatedly delayed progress, researchers who understood the technology's potential kept working. Medical applications improved slowly and steadily; obstacles were overcome with much creativity and little fanfare. This volume, like its predecessors, is the product of these researchers' lasting commitment to better patient care and medical education.
In the past couple of years, we've witnessed a remarkable VR renaissance, which must feel gratifying to those pioneers who stayed the course while VR was out of fashion. Heavily funded by the entertainment industry, sleek and relatively inexpensive gear is entering the market and being adopted in healthcare. The clunky headsets of the first VR boom, often better in theory than in practice, are being replaced by devices that patients, clinicians, and students can use gracefully and intuitively. It took a generation, but we are now seeing more and more applications that fulfill that initial vision of medicine transformed by the ability to immerse oneself in data.
This conference has endured with the support and encouragement of its Organizing Committee. To it and to the researchers who have shared their passion and hard work at this conference: thank you for all you've contributed in the last 25 years.
James D. Westwood
Aligned Management Associates, Inc.
Natural orifice translumenal endoscopic surgery (NOTES) procedures are rapidly being developed in diverse surgical fields. We are developing a Virtual Translumenal Endoscopic Surgery Trainer (VTEST™) built on a modularized platform that facilitates rapid development of virtual reality (VR) NOTES simulators. Both the hardware interface and the software components consist of independent, reusable, and customizable modules. The developed modules are integrated to build a VR-NOTES simulator for training in the hybrid transvaginal NOTES cholecystectomy. The simulator was demonstrated and evaluated by expert NOTES surgeons at the 2015 Natural Orifice Surgery Consortium for Assessment and Research (NOSCAR) summit.
Neuroanatomy is a challenging subject, with novice medical students often experiencing difficulty grasping the intricate 3D spatial relationships. Most of the anatomical teaching in undergraduate medicine utilizes conventional 2D resources. E-learning technologies facilitate the development of learner-centered educational tools that can be tailored to meet each student's educational needs and may foster improved learning in neuroanatomy; however, this has yet to be fully examined in the literature. An interactive 3D e-learning module was developed to complement gross anatomy laboratory instruction. Incorporating such 3D modules may provide additional support for students in areas of anatomy that are spatially challenging, such as neuroanatomy. Specific anatomical structures and their spatial positions relative to other structures can be clearly defined in the 3D virtual environment from viewpoints that may not readily be available using cadaveric or 2D image modalities. Providing an interactive user interface for the 3D module, in which the student controls many factors, may enable the student to develop an improved understanding of the spatial relationships. This work outlines the process for the development of a 3D interactive module of the cerebral structures included in the anatomy curriculum for undergraduate medical students in their second year of study.
Conventional surgical telementoring systems require the trainee to shift focus away from the operating field to a nearby monitor to receive mentor guidance. This paper presents a next-generation telementoring system. Our system, STAR (System for Telementoring with Augmented Reality), avoids focus shifts by placing mentor annotations directly into the trainee's field of view using augmented reality transparent display technology. This prototype was tested with pre-medical and medical students. Experiments were conducted in which participants were asked to identify precise operating field locations communicated to them using either STAR or a conventional telementoring system. STAR was shown to improve accuracy and to reduce focus shifts. The initial STAR prototype only provided an approximate transparent display effect, without visual continuity between the display and the surrounding area. The current version of our transparent display provides visual continuity by showing the geometry and color of the operating field from the trainee's viewpoint.
The purpose of this study was to assess the effect of a computer-based anti-smoking game on the intent and motivation to quit tobacco, and to compare versions of the game with and without an avatar resembling the smoker's self. Smokers with nicotine dependence were briefly exposed to one of the two versions. The anti-smoking game improved participants' immediate intent and motivation to quit smoking. Embedding an avatar resembling the self into the game did not result in added benefits.
In this paper we introduce a Modified Iterative Constraint Anticipation (MICA) method that provides a unified framework for the direct and response-based indirect haptic interactions common in many interactive virtual environments. Collision constraints arising from response-based interactions are modeled using the linear complementarity problem (LCP) framework, which resolves them while allowing accurate computation of reaction forces. Direct user manipulation is enabled by linear projection constraints (LPC). A smoothing filter is used to post-process the reaction forces arising from both the LCP and the LPC to achieve stable interactions in real time. The effectiveness of MICA is demonstrated using example problems involving deformable bodies.
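As a rough sketch of the kind of force post-processing described above (not the authors' implementation; the class name, smoothing scheme, and constant are assumptions), an exponential smoothing filter over successive reaction-force samples might look like this in Python:

```python
import numpy as np

class ForceSmoother:
    """Exponentially smooths reaction forces from successive solver steps
    to reduce jitter in haptic rendering (illustrative sketch only)."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha      # weight given to the newest force sample (assumed value)
        self.smoothed = None    # last smoothed force vector

    def update(self, raw_force):
        raw_force = np.asarray(raw_force, dtype=float)
        if self.smoothed is None:
            self.smoothed = raw_force
        else:
            # Blend the newest reaction force with the running estimate.
            self.smoothed = self.alpha * raw_force + (1 - self.alpha) * self.smoothed
        return self.smoothed

# Example: smooth a noisy sequence of 3D reaction forces.
smoother = ForceSmoother(alpha=0.3)
for raw in [(0.0, 1.2, 0.0), (0.0, 0.8, 0.1), (0.0, 1.1, 0.0)]:
    stable = smoother.update(raw)
```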
In this paper we present an algorithm that minimizes end-user input by internally automating the creation and management of interactions among the objects in the scene in a real-time medical simulation framework. A bi-directed graph (with nodes representing the scene objects and connections representing the interactions) is formed based on the inputs from the user. This graph is then processed using a two-stage algorithm that finds subgraphs that can be treated as independent sub-systems. Collision detection, collision response, assembly, and solver objects are then automatically created and managed. This allows users with limited knowledge of the underlying physics models, collision detection, and contact algorithms to easily create a surgical scenario with minimal input.
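The two-stage algorithm itself is not detailed here, but finding connected components of the interaction graph is one plausible way to identify subgraphs that form independent sub-systems. A minimal Python sketch, with all names assumed for illustration:

```python
from collections import defaultdict

def independent_subsystems(interactions):
    """Group scene objects into independent sub-systems: objects that share
    no chain of interactions can be solved separately.
    `interactions` is a list of (object_a, object_b) pairs."""
    graph = defaultdict(set)
    for a, b in interactions:
        graph[a].add(b)
        graph[b].add(a)

    seen, subsystems = set(), []
    for node in graph:
        if node in seen:
            continue
        # Depth-first search collects one connected component.
        stack, component = [node], set()
        while stack:
            current = stack.pop()
            if current in component:
                continue
            component.add(current)
            stack.extend(graph[current] - component)
        seen |= component
        subsystems.append(component)
    return subsystems

# Example: the liver/tool pair is independent of the gallbladder/clip pair,
# so each could get its own collision and solver objects.
print(independent_subsystems([("liver", "tool"), ("gallbladder", "clip")]))
```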
Ventriculostomy is one of the most commonly performed neurosurgical procedures, and training simulators for it are becoming increasingly familiar features in research institutes and teaching facilities. Despite their widespread implementation and adoption, simulators to date have not fully explored the landscape of performance metrics that reflect surgical proficiency, opting instead for measures that are qualitative or simple to compute and conceptualize. In this paper, we examine and compare the use of various metrics to characterize the performance of users on simulated part-task ventriculostomy scenarios derived from patient data. As an initial study, we examine how our metrics relate to expert classification of scenario difficulty as well as to measures of anatomical variation.
In this paper a prototype system is presented for home-based physical tele-therapy using a wearable device for haptic feedback. The haptic feedback is generated as a sequence of vibratory cues from 8 vibrator motors equally spaced along an elastic wearable band. The motors guide the patients' movement as they perform a prescribed exercise routine, replacing the physical therapist's haptic guidance in an unsupervised or remotely supervised home-based therapy session. A pilot study of 25 human subjects was performed that focused on: a) testing the capability of the system to guide users along arbitrary motion paths in space and b) comparing the motion of users during typical physical therapy exercises with and without haptic guidance. The results demonstrate the efficacy of the proposed system.
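As an illustration of how such a band might map a desired movement correction onto one of the 8 equally spaced motors (a sketch only; the indexing and orientation conventions are assumptions, not the authors' design):

```python
import math

NUM_MOTORS = 8  # motors equally spaced along the band

def motor_for_direction(dx, dy):
    """Pick which of the 8 vibrators to fire so the cue points along the
    desired correction direction (dx, dy) in the band's plane.
    Motor 0 is assumed to sit at angle 0, counting counter-clockwise."""
    angle = math.atan2(dy, dx) % (2 * math.pi)
    sector = 2 * math.pi / NUM_MOTORS
    # Round to the nearest motor by offsetting half a sector before dividing.
    return int((angle + sector / 2) // sector) % NUM_MOTORS

# Example: a correction straight "up" in the band plane maps to motor 2.
print(motor_for_direction(0.0, 1.0))
```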
The first complete simulation based on OpenSurgSim (OSS) is used as a case study for analyzing how the toolkit can accelerate the development of surgical simulations. The Burr Hole Trainer (BHT) is designed to train non-neurosurgeons to drill holes in the skull to relieve intracranial pressure, and the majority of its simulation functionality is provided by OSS. Based on code size, using OSS cut the development time in half, reduced the necessary size of the development team by two-thirds, and saved millions of US dollars.
3D perception technologies have been explored in various fields. This paper explores the application of such technologies in surgical operating theatres. Clinical applications can be found in workflow detection, tracking, and analysis; collision avoidance with medical robots; perception of interaction between participants in the operation; training of the operating room crew; patient calibration; and many more. In this paper a complete perception solution for the operating room is shown. The system is based on the time-of-flight (ToF) technology integrated into the Microsoft Kinect One and implements a multi-camera approach. Special emphasis is placed on the tracking of the personnel and on the evaluation of the system's performance and accuracy.
This paper introduces the SafeHome Simulator system, a set of immersive virtual reality training tools and display systems to train patients in safe discharge procedures within captured environments of their actual houses. The aim is to lower patient readmission by significantly improving discharge planning and training. The SafeHome Simulator is a project currently under review.
This paper introduces a computer-based system that is designed to record a surgical procedure with multiple depth cameras and reconstruct in three dimensions the dynamic geometry of the actions and events that occur during the procedure. The resulting 3D-plus-time data takes the form of dynamic, textured geometry and can be immersively examined at a later time; equipped with a virtual reality headset such as the Oculus Rift DK2, a user can walk around the reconstruction of the procedure room while controlling playback of the recorded surgical procedure with simple VCR-like controls (play, pause, rewind, fast forward). The reconstruction can be annotated in space and time to provide users with more information about the scene. We expect such a system to be useful in applications such as the training of medical students and nurses.
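A minimal sketch of VCR-like playback over a sequence of reconstructed frames (illustrative only; the frame representation, rates, and names are assumptions, not the authors' implementation):

```python
class PlaybackController:
    """VCR-style playback over a list of reconstructed frames, mirroring
    the controls described above: play, pause, rewind, fast forward."""

    def __init__(self, frames, fps=30.0):
        self.frames = frames    # one reconstructed geometry per frame
        self.fps = fps          # assumed capture rate
        self.time = 0.0         # current playback position in seconds
        self.rate = 0.0         # 0 = paused, 1 = play, -4 / +4 = rewind / FF

    def play(self):
        self.rate = 1.0

    def pause(self):
        self.rate = 0.0

    def rewind(self):
        self.rate = -4.0

    def fast_forward(self):
        self.rate = 4.0

    def tick(self, dt):
        """Advance playback by wall-clock dt seconds and return the frame
        to render in the headset, clamped to the recording's duration."""
        duration = len(self.frames) / self.fps
        self.time = min(max(self.time + self.rate * dt, 0.0), duration)
        index = min(int(self.time * self.fps), len(self.frames) - 1)
        return self.frames[index]
```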
Virtual reality for surgical training has mainly focused on technical surgical skills. We provide a novel approach to the use of virtual reality that focuses on the procedural aspects. Our system relies on a specific workflow that generates a model of the procedure from observation of real surgical cases in the operating room. This article presents the different technologies created in the context of our project and how they relate as components of our workflow.
This work promotes the use of computer-generated imagery, as visual illusions, to speed up motor learning in rehabilitation. In support of this, we adhere to the principles of experience-dependent neuroplasticity and the positive impact of virtual reality (VR) on it. Specifically, post-stroke patients will undergo motor therapy with a surrogate virtual limb that substitutes for the paralyzed limb. Along these lines, their motor intentions will match the visual evidence, which fosters the physiological, functional, and structural changes over time needed to recover lost function in an injured brain. How we create such an illusion using computer graphics is central to this paper.
High fidelity surgical simulations must rely upon accurate soft tissue models to ensure realism. Simulating multi-layer tissue becomes increasingly complex due to the specific mechanical properties of each individual layer. We have developed a Soft Tissue Elastography Robotic Arm (STiERA) system capable of identifying layer-specific properties of multi-layer constructs while maintaining the integrity of each layer. The system was validated using tissue-mimicking agar gel phantoms and showed great promise, identifying the layer-specific properties with an accuracy greater than 80% when compared to known ground-truth values from a commercial material testing system.
A videolaryngoscope is a more advanced tool than a traditional laryngoscope that eases endotracheal intubation by visualizing the vocal cords with a camera on the tip of the blade. However, a videolaryngoscope can present difficulty in passing the tube into the glottic opening in some patients. This study developed a training protocol for intubation with a videolaryngoscope and trained 22 anesthesia residents. A Parametrically Adjustable Airway Mannequin (PAAM) was set to provide easy and difficult configurations. Motion data of the videolaryngoscope, stylet, mannequin head, and hyoid bone were captured with six-axis magnetic position sensors, along with the video image. The times to complete the various components of the task were recorded and used as an indication of competence, along with observation by experts. The validity of the mannequin was supported by data showing that the difficult configuration of PAAM took longer to intubate than the easy configuration (66 vs. 39 seconds during the pre-test). The effectiveness of the training protocol was supported by improvement in trainee performance: at the beginning of the training, intubation with the difficult configuration took an average of 66 seconds; immediately after training it averaged 23 seconds; and in retention tests over a month after the training the average duration was 33 seconds.
This paper presents a simulation of Virtual Airway Skill Trainer (VAST) tasks. The simulated tasks are part of two main airway management techniques: endotracheal intubation (ETI) and cricothyroidotomy (CCT). ETI is a simple nonsurgical airway management technique, while CCT is the extreme surgical alternative to secure the airway of a patient. For ETI we developed tasks for identifying the Mallampati class and finding the optimal angle for aligning the pharyngeal/mouth axes; for CCT we developed tasks for identifying anatomical landmarks and making the incision. Both the ETI and CCT simulators were used to gather physicians' feedback at the Society for Education in Anesthesiology and Association for Surgical Education spring meetings. In this preliminary validation study, a total of 38 participants for ETI and 48 for CCT performed each simulation task and completed pre- and post-questionnaires. In this work, we present the details of the simulation tasks and the analysis of the data collected in the validation study.
Personalized guides are increasingly used in orthopedic procedures but do not provide for intraoperative re-planning. This work presents a tracked guide that uses physical registration to provide an anatomy-to-tracking coordinate frame transformation for surgical navigation. In a study using seven femoral models derived from clinical CT scans used for hip resurfacing, a guide characterization fiducial registration error (FRE) of 0.4°±0.2°, a drill-path angular target registration error (TRE) of 0.9°±0.4°, and a positional TRE of 1.2 mm±0.4 mm were found; these values are comparable to conventional optical tracking accuracy. This novel use of a tracked guide may be particularly applicable to procedures that require a small surgical exposure, or when operating on anatomical regions with small bones that are difficult to track or reliably register.
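For readers unfamiliar with angular TRE, the angle between a planned drill axis and the axis reported by the tracked guide can be computed as follows (a sketch of the standard computation, not the authors' exact protocol; the example axes are invented):

```python
import numpy as np

def angular_error_deg(planned_axis, measured_axis):
    """Angle in degrees between a planned drill path and the path reported
    by the tracked guide, via the dot-product formula."""
    a = np.asarray(planned_axis, dtype=float)
    b = np.asarray(measured_axis, dtype=float)
    cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Clip guards against round-off pushing the cosine outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0)))

# Example: two nearly parallel drill axes differ by about 0.9 degrees.
print(round(angular_error_deg([0, 0, 1], [0, 0.0157, 1.0]), 1))
```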
Enabling surgeon-educators to themselves create virtual reality (VR) training units promises greater variety, specialization, and relevance of the units. This paper describes a software bridge that semi-automates the scene-generation cycle, a key bottleneck in authoring, modeling, and developing VR units. Augmenting an open source modeling environment with physical behavior attachment and collision specifications yields single-click testing of the full force-feedback enabled anatomical scene.
This study investigated the haptic ‘dissection’ of a digital model of the hand and wrist in anatomy education at both undergraduate (UG) and postgraduate (PG) levels. The study ran over five successive years and was split into three discrete phases. Phase one compared the results of PG students across control, non-haptic, and haptic groups. Phase two compared the results of UG students between control and haptic groups. Phase three compared the results of UG students across control, non-haptic, and haptic groups. Results for all phases indicate that use of the model, through both haptic and non-haptic interfaces, produced some significantly improved test results. The non-haptic group performed the strongest overall, indicating that the addition of haptic feedback may not be beneficial to student learning.
Camera positioning is critical for all telerobotic surgical systems. Inadequate visualization of the remote site can lead to serious errors that jeopardize the patient. An autonomous camera algorithm has been developed on a medical robot (da Vinci) simulator and found to be robust in key scenarios of operation. The system produces predictable and expected camera-arm actions with respect to the tool positions. The implementation of this system is described herein. The simulation closely models the methodology needed to implement autonomous camera control in a real hardware system. The camera control algorithm follows three rules: (1) keep the view centered on the tools, (2) keep the zoom level optimized so that the tools never leave the field of view, and (3) avoid unnecessary camera movement that may distract or disorient the surgeon. Our future work will apply this algorithm to the real da Vinci hardware.
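A minimal sketch of the three rules (illustrative only; the thresholds, names, and zoom model are assumptions rather than the authors' implementation):

```python
import numpy as np

def camera_update(tool_a, tool_b, cam_target, cam_zoom,
                  deadband=0.01, fov_margin=1.2):
    """Sketch of the three camera rules. Positions are 3D points in the
    camera workspace; deadband and margin are assumed values.

    Rule 1: aim the camera at the midpoint of the two tools.
    Rule 2: zoom out enough that both tools stay within the field of view.
    Rule 3: ignore changes smaller than a deadband to avoid jittery motion."""
    tool_a = np.asarray(tool_a, dtype=float)
    tool_b = np.asarray(tool_b, dtype=float)

    midpoint = (tool_a + tool_b) / 2.0                            # rule 1
    required_zoom = fov_margin * np.linalg.norm(tool_a - tool_b)  # rule 2

    if np.linalg.norm(midpoint - cam_target) < deadband:          # rule 3
        midpoint = cam_target
    if abs(required_zoom - cam_zoom) < deadband:
        required_zoom = cam_zoom
    return midpoint, required_zoom

# Example: tools 4 cm apart, camera currently aimed at the origin.
target, zoom = camera_update([0.02, 0, 0.1], [-0.02, 0, 0.1], [0, 0, 0], 0.05)
```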
Keratoconus is a progressive non-inflammatory disease of the cornea. Rigid gas-permeable (RGP) contact lenses are prescribed when the disease progresses. Contact lens fitting and assessment are very difficult in these patients and are a concern of ophthalmologists and optometrists. In this study, a hierarchical fuzzy system is used to capture the expertise of experienced ophthalmologists during the lens evaluation phase of prescription. The system is fine-tuned using genetic algorithms. The sensitivity, specificity, and accuracy of the final system are 88.9%, 94.4%, and 92.6%, respectively.
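For reference, the three reported figures follow the standard confusion-matrix definitions; the counts below are illustrative values chosen to reproduce the reported percentages, not the study's actual data:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard definitions of the three figures reported above."""
    sensitivity = tp / (tp + fn)                 # true positive rate
    specificity = tn / (tn + fp)                 # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)   # overall correct fraction
    return sensitivity, specificity, accuracy

# Illustrative counts: 16 of 18 positive and 34 of 36 negative cases
# classified correctly give 88.9% / 94.4% / 92.6%.
print(diagnostic_metrics(tp=16, fn=2, tn=34, fp=2))
```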
Surgeons are increasingly relying on 3D medical image data for planning interventions. Virtual 3D models of intricate anatomy, such as that found within the temporal bone, have proven useful for surgical education, planning, and rehearsal, but such applications require segmentation of surgically relevant structures in the image data. Four publicly available software packages, ITK-SNAP, MITK, 3D Slicer, and Seg3D, were evaluated for their efficacy in segmenting temporal bone anatomy from CT and MR images to support patient-specific surgery simulation. No single application provided efficient means to segment every structure, but a combination of the tools evaluated enables creation of a complete virtual temporal bone model from raw image data with reasonably minimal effort.
Control of a powered wheelchair is often not intuitive, making training of new users a challenging and sometimes hazardous task. Collisions due to a lack of experience can result in injury for the user and for other individuals. By conducting training activities in virtual reality (VR), we can potentially improve driving skills whilst avoiding the risks inherent in the real world. However, until recently VR technology has been expensive, limiting the commercial feasibility of a general training solution. We describe Wheelchair-Rift, a cost-effective prototype simulator that makes use of the Oculus Rift head-mounted display and the Leap Motion hand-tracking device. It has been assessed for face validity by a panel of experts from a local Posture and Mobility Service. Initial results augur well for our cost-effective training solution.