Ebook: Medicine Meets Virtual Reality 18
Since its debut in 1992, the Medicine Meets Virtual Reality (MMVR) conference has served as a forum for researchers harnessing IT advances for the benefit of patient diagnosis and care, medical education, and procedural training. At MMVR, virtual reality becomes a theater for medicine, where multiple senses are engaged—sight, sound, and touch—and language and image fuse. Precisely because this theater is unreal, it is a valuable tool: the risks of experimentation and failure are gone, while the opportunity to understand remains. Improvement of this tool, through steady technological progress, is the purpose of MMVR. This book presents papers delivered at the MMVR18 / NextMed conference, held in Newport Beach, California, in February 2011, with contributions from international researchers whose work creates new devices and methods at the juncture of informatics and medicine. Subjects covered include simulation and learning, visualization and information-guided therapy, robotics and haptics, virtual reality and advanced ICT in Europe, validation of new surgical techniques, and many other applications of virtual-reality technology. As its name suggests, the NextMed conference looks forward to the expanding role that virtual reality can play in global healthcare. This overview of current technology will interest those who dedicate themselves to improving medicine through technology.
James D. Westwood
Aligned Management Associates, Inc.
ENIAC, the first general-purpose electronic digital computer, was born on Valentine's Day 1946—a lifetime ago. It and its emerging peers were elephantine contraptions, but they evolved rapidly, increasing in speed and shrinking in size, adopting efficiencies of scale in reproduction and mutating continuously. Who are their offspring today? Five billion mobile phones and similarly ubiquitous personal and business computers in countless variations. What was once a costly academic and military project is now an everyday tool.
When Medicine Meets Virtual Reality launched in 1992, computers were already popular in most of the industrialized world, although relatively expensive and clunky. (Remember the dot-matrix printer?) The Internet was about to make its commercial debut, providing a means to link all these solitary devices into a communicating, sharing, interactive meta-forum. More so than print, the computer was image-friendly. Unlike television and cinema, the computer-plus-Internet was multi-directional—users could create and share a moving image. Cinema and TV were meeting their eventual heir as “virtual reality” arrived on the scene.
At MMVR, virtual reality becomes a theater for medicine, where multiple senses are engaged—sight, sound, and touch—and language and image fuse. (Taste and smell are still under-utilized, alas.) Simulation lets actors rehearse in any number of ways, interrupting and reconfiguring the plot to create the most compelling finale. Visualization alters costumes to clarify relationships, and shifts sets and lighting to sharpen focus or obscure a background. Impromptu lines are recorded for possible adoption into the standard repertoire. Audience members, who need not be physically present, may chat with the actors mid-performance or take on a role themselves. Critics can instantly share their opinions.
Whether the actors and audience are physicians, patients, teachers, students, industry, military, or others with a role in contemporary healthcare, the theater of virtual reality provides a singular tool for understanding relationships. Medical information can be presented in ways not possible in books, journals, or video. That information can be manipulated, refined, recontextualized, and reconsidered. Experience finds a wider audience than would fit in a surgical suite or classroom. Therapeutic outcomes can be reverse engineered. Precisely because the theater is unreal, the risks of experimentation and failure vanish, while the opportunity to understand remains. The availability and veracity of this educational virtual theater are improving due to steady technological improvement: this is the purpose of MMVR.
Most of the industrialized world is currently undergoing an economic correction whose end result is far from clear. The happier news is that many emerging economies continue to flourish during the downturn. Furthermore, knowledge resources that were once the privilege of wealthier countries are now more easily shared, via computers and the Internet, with those who are catching up. Children (and adults) are being trained on inexpensive and interconnected devices, acquiring literacy and a better chance at higher education. Healthcare is an important part of this worldwide dissemination of expertise enabled by the virtual theater of learning. As developing regions progress, their most creative minds can take part in the quest for what's next in medicine. The vision of a better educated, more productive, and healthier global population is clarified.
Someone born in 1992, as was MMVR, could be attending a university now. She or he might be working on research that is shared at this conference. We who organize MMVR would like to thank the many researchers who, for a generation, have come from around the world to meet here with the aim of making very real improvements in medicine.
Endoscopic third ventriculostomy is a minimally invasive technique to treat hydrocephalus, a condition in which an excessive amount of cerebrospinal fluid accumulates in the head. While this surgical procedure is fairly routine, it carries some risks, mainly associated with the lack of depth perception, since monocular endoscopes provide only 2D views. We studied the advantages of a 3D stereoendoscope over a 2D monocular endoscope, first by assessing the variability of stereoacuity in each subject, then by analyzing their overall correct response rate in differentiating between the heights of two different images under 2D and 3D vision.
In the early-to-middle stages of Parkinson's disease (PD), polysomnographic studies show early alterations of sleep structure, which may explain symptoms frequently reported by patients, such as daytime drowsiness, loss of attention and concentration, and a feeling of tiredness. The aim of this study was to verify whether there is a correlation between sleep dysfunction and decision-making ability. We used a virtual reality version of the Multiple Errands Test (VMET), developed using the free NeuroVR software (http://www.neurovr2.org), to evaluate decision-making ability in 12 non-demented PD patients and 14 controls. Five of the 12 non-demented PD patients showed abnormalities in the polysomnographic recordings associated with significant differences in VMET performance.
We propose a method for accurately tracking the spatial motion of standard laparoscopic instruments from video. By exploiting the geometric and photometric invariants common to standard FLS (Fundamentals of Laparoscopic Surgery) training boxes, the method provides robust and accurate tracking of the instruments. It requires no modifications to the standard FLS training box, camera, or instruments.
There is a growing body of evidence suggesting that the arthritic hip is an irregularly shaped, aspherical joint, especially in severely pathological cases. Current methods used to study the shape and motion of the hip in vivo are invasive and impractical. This study aimed to assess whether a plastic model of the hip joint can be accurately made from a pelvic CT scan. A cadaver hemi-pelvis was CT-imaged and segmented, from which 3D plastic models of the proximal femur and hemi-pelvis were fabricated using rapid prototyping. Both the plastic model and the cadaver were then imaged using a high-resolution laser scanner. A three-way shape analysis was performed to compare the goodness-of-fit between the cadaver, the image segmentation, and the plastic model. Overall, we obtained sub-millimeter fit accuracy between all three hip representations. Shape fit was least favorable in areas where the boundary between cartilage and bone is difficult to distinguish. We submit that rapid prototyping is an accurate and efficient means of obtaining 3D specimens with which to further study the irregular geometry of the hip.
Spirometry is the most common pulmonary function test. It provides useful information for early detection of respiratory system abnormalities. While decision support systems normally use calculated parameters such as FEV1, FVC, and FEV1% to diagnose the pattern of respiratory system disease, expert physicians pay close attention to the shape of the flow-volume curve as well. Fisher discriminant analysis shows that the coefficients of a simple polynomial function fitted to the curve capture information about the disease patterns much better than the familiar single-point parameters. A neural network can then classify the abnormality pattern as restrictive, obstructive, mixed, or normal. Using data from 205 adult volunteers, the total accuracy, sensitivity, and specificity for the four categories are 97.6%, 97.5%, and 98.8%, respectively.
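The feature-extraction step described above can be sketched in a few lines: fit a low-degree polynomial to flow-volume samples and use its coefficients as the classifier's input features. This is an illustrative sketch only, not the authors' implementation; the degree, the sample data, and the function names are assumptions.

```python
# Minimal sketch (illustrative, not the paper's code): extract polynomial
# coefficients from a flow-volume curve as classification features.

def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations."""
    n = degree + 1
    # Build the normal-equation system A c = b.
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        b[col], b[pivot] = b[pivot], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, n))) / A[i][i]
    return coeffs  # c0 + c1*x + c2*x^2 + ...

# Illustrative flow-volume samples (volume, flow) lying on y = 4x - x^2.
volumes = [0.0, 1.0, 2.0, 3.0, 4.0]
flows = [0.0, 3.0, 4.0, 3.0, 0.0]
features = polyfit(volumes, flows, 2)  # ≈ [0, 4, -1]
```

The resulting coefficient vector would then be fed to a classifier (the paper uses a neural network) in place of single-point parameters such as FEV1.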
Suturing is currently one of the most common procedures in minimally invasive surgery (MIS). We present a suturing simulation paradigm with pre-computed finite element models that include detailed needle-tissue and thread-tissue interaction. The interaction forces are derived through a reanalysis technique for haptic feedback. Besides providing deformation updates and high-fidelity forces, our simulation is computationally inexpensive.
There has been a recent shift from traditional nerve stimulation (NS) to ultrasound-guided (UG) techniques in regional anesthesia (RA). This shift has prompted educators to readdress the best way to teach the two modalities. Development of a more structured curriculum requires an understanding of student preferences and perceptions. To help structure the RA teaching curriculum, we examined residents' preferences regarding the two methods of instruction (NS vs. UG techniques). Novice residents (n=12) were enrolled in this parallel crossover trial. Two groups of six residents received a didactic lecture on NS or UG techniques; the groups then crossed over to view the other lecture. Afterwards, they observed a demonstration of interscalene brachial plexus block (ISBPB) on two patients using the NS and UG techniques. The residents completed a questionnaire regarding their impression of each technique and the learning experience. The UG technique was perceived to be safer and to have more educational value than NS. However, residents felt both techniques should be mandatory in the teaching curriculum.
Robotic rehabilitation devices can undertake difficult physical therapy tasks and provide improved treatment procedures for post-stroke patients. During the passive working mode, the speed of the exercise needs to be controlled continuously by the robot to avoid excessive, injurious torques. We designed a fuzzy controller for a hand rehabilitation robot that adjusts the exercise speed by considering the wrist angle and joint resistive torque, measured continuously, and the patient's general condition, determined by the therapist. With a set of rules based on an expert therapist's experience, the fuzzy system could adapt effectively to the neuromuscular condition of the patient's paretic hand. Preliminary clinical tests revealed that the fuzzy controller produced a smooth motion with no sudden changes of speed that could cause pain and activate the muscle reflex mechanism. This improves the recovery procedure and supports the robot's suitability for wide clinical use.
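The control idea above can be sketched with a toy Mamdani-style rule base: fuzzify a measured input, apply rules, and defuzzify to a smooth speed command. The abstract's controller also uses wrist angle and a therapist-set patient condition; this sketch uses only resistive torque, and the membership ranges and rule weights are illustrative assumptions, not the authors' values.

```python
# Minimal sketch (assumed rules and membership shapes): fuzzy mapping from
# resistive torque to a speed-scaling factor, in the spirit of the
# controller described above.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_speed(torque_nm):
    """Map resistive torque (0-10 N·m, illustrative range) to a speed factor."""
    # Fuzzify the input into three linguistic terms.
    low = tri(torque_nm, -5.0, 0.0, 5.0)
    med = tri(torque_nm, 0.0, 5.0, 10.0)
    high = tri(torque_nm, 5.0, 10.0, 15.0)
    # Rules: low torque -> fast (1.0), medium -> moderate (0.5), high -> slow (0.1).
    # Defuzzify with a weighted average of the rule outputs.
    num = low * 1.0 + med * 0.5 + high * 0.1
    den = low + med + high
    return num / den if den else 0.0

print(fuzzy_speed(0.0))   # 1.0 : no resistance, full speed
print(fuzzy_speed(10.0))  # 0.1 : high resistance, slow down
```

Because the membership functions overlap, the commanded speed changes gradually as torque rises, which is the property the abstract credits with avoiding pain and reflexive muscle activation.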
The EMMA project has focused on how the sense of presence in virtual environments mediates or generates emotional responses, and on how to use presence and emotional responses in virtual environments effectively in clinical and non-clinical settings. The project has developed two different virtual environments. The first acts as a ‘mood device’ and is aimed at inducing and enhancing several moods in clinical and non-clinical subjects. The second is a virtual environment that acts as an adaptive display to treat emotional disorders (post-traumatic stress disorder, adjustment disorder, and pathological grief); it varies the content presented depending on the patient's emotions at each moment. The goal of this paper is to outline the main goals achieved by this project.
We present NeuroSim, the prototype of a training simulator for open surgical interventions on the human brain. The simulator is based on virtual reality and uses real-time simulation algorithms to interact with models generated from MRI or CT datasets. NeuroSim provides a native interface by using a real surgical microscope and original instruments tracked by a combination of inertial measurement units and optical tracking. Together, these components generate an immersive environment. As a first step, trainees can practice navigation in an open-surgery setup as well as hand-eye coordination through a microscope. Due to its modular design, further training modules and extensions can be integrated. NeuroSim has been developed in cooperation with the neurosurgical clinic of the University of Heidelberg and VRmagic GmbH in Mannheim.
Intended for medical students studying the evaluation and diagnosis of heart arrhythmias, the beating-heart arrhythmia simulator combines visual, auditory, and tactile stimuli to enhance the student's retention of the subtle differences between various heart conditions that are necessary for diagnosis. Unlike existing heart arrhythmia simulators, ours is low-cost and easily deployable in the classroom setting. A design consisting of solenoid actuators, a silicone heart model, and a graphical user interface has been developed and prototyped. Further design development and conceptual validation are necessary prior to deployment.
Conventional intelligent or adaptive tutoring online systems rely on domain-specific models of learner behavior based on rules, deep domain knowledge, and other resource-intensive methods. We have developed and studied a domain-independent methodology of adaptive tutoring based on domain-independent signal-processing approaches that obviate the need for the construction of explicit expert and student models. A key advantage of our method over conventional approaches is a lower barrier to entry for educators who want to develop adaptive online learning materials.
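One way to read "signal-processing approaches that obviate explicit expert and student models" is to treat the learner's response stream as a signal, smooth it, and adapt difficulty from the smoothed value alone. The sketch below is a guess at the flavor of such a method, not the authors' algorithm; the smoothing factor, thresholds, and class names are all illustrative.

```python
# Minimal sketch (assumed mechanism): domain-independent adaptation via an
# exponential moving average of correctness, with no expert/student model.

class AdaptiveTutor:
    def __init__(self, alpha=0.3):
        self.alpha = alpha        # smoothing factor for the performance signal
        self.score = 0.5          # smoothed estimate of recent performance
        self.difficulty = 1       # current item difficulty level

    def record(self, correct):
        """Fold one response into the signal and step difficulty up or down."""
        self.score = self.alpha * (1.0 if correct else 0.0) \
            + (1 - self.alpha) * self.score
        if self.score > 0.8:      # sustained success: harder items
            self.difficulty += 1
            self.score = 0.5      # reset the signal after a level change
        elif self.score < 0.2:    # sustained failure: easier items
            self.difficulty = max(1, self.difficulty - 1)
            self.score = 0.5

tutor = AdaptiveTutor()
for answer in [True] * 6:         # a run of correct answers raises difficulty
    tutor.record(answer)
print(tutor.difficulty)           # → 3
```

Nothing here depends on the subject matter being taught, which is the property the abstract highlights as lowering the barrier to entry for educators.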
The endotracheal bougie is used for difficult intubations when only a minimal glottic view can be obtained. Standard bougies are designed for use during direct, line-of-sight viewing of the glottic opening. With videolaryngoscopy, intubators “see around the corner”, thus requiring a bougie that can be shaped to follow a significant curve. A malleable bougie with an embedded internal wire was created to enable intubators to shape the curve to best fit a difficult airway. This pilot study compared the malleable bougie with the SunMed™ bougie in a simulated difficult-airway intubation using videolaryngoscopy.
This study examined the utility of a novel tongue retractor created with a wider working blade and a more ergonomic curve to provide jaw lift and tongue management with one hand during intubation. Anesthesia providers participated in simulated intubation of a difficult manikin using the novel tongue retractor with the Bonfils video fiberscope. Results show that the tongue retractor improved placement success and was well received by the study participants.
The video laryngoscope is a useful tool in intubation training, as it allows both the trainer and the student to share the same view of the airway during the intubation process. In this study, the Center for Advanced Technology and Telemedicine's airway training program employed videolaryngoscopy (VL) in teaching both simulated (manikin) and human intubation. The videolaryngoscope significantly improved the glottic view in both the standard and difficult manikin airways compared with standard (direct) laryngoscopy. The success rate in simulated difficult-airway intubation was significantly improved using VL. In human intubation training, there was a statistically significant improvement in airway views using VL and a 97.5% intubation success rate. The enhanced view provided by the videolaryngoscope facilitates the learning process in both simulated and human intubation, making it a powerful tool in intubation training.
Airway management is an essential skill in providing care in trauma situations. The video laryngoscope is a tool that offers improvement in teaching airway management skills and in managing the airways of trauma patients on the far-forward battlefield. An Operational Assessment (OA) of videolaryngoscope technology for medical training and airway management was conducted by the Center for Advanced Technology and Telemedicine (at the University of Nebraska Medical Center, Omaha, NE) for the US Air Force Modernization Command to validate this technology for out-of-OR airway management and for airway management training in military simulation centers. The value of the device for both training and the performance of intubations was highly rated, and the majority of respondents indicated interest in having a video laryngoscope in their facility.
Studies show the video laryngoscope enhances intubation training by facilitating visualization of airway anatomy. We examined the performance and training of military healthcare providers in a brief intubation training course which included both direct and indirect (video) laryngoscopy. This training format with the video laryngoscope improved airway visualization and intubation performance, promoting increased trainee confidence levels for successful intubation. Web-based training paired with hands-on instruction with the video laryngoscope should be considered as a model for military basic airway management training.
Previous studies have shown that the videolaryngoscope is an excellent intubation training tool, as it allows the student and trainer to share the same anatomical view of the airway. Use of this training tool is limited, however, because intubation training must often take place outside the hospital environment (as in the training of military health care providers), where the device can prove large and cumbersome. This study examined the use of the Storz C-MAC™, a compact videolaryngoscope system, for intubation training in a simulated field hospital setting with the Nebraska National Air Guard. The study showed that the C-MAC™ was well received by the trainees and would be useful in a deployment or hospital setting.
This study examined the feasibility of using Skype™ technology in basic manikin intubation instruction of Nebraska National Air Guard personnel at a casualty training exercise. Results show that the Skype™ monitor provided clear sound and a clear visualization of the airway view to the trainees, and the combination of VoIP technology and videolaryngoscopy for intubation training was highly valued by study participants.
Mental health care represents over a third of the cost of health care in all EU nations, and in the US it is estimated at around 2.5% of the gross national product. It additionally imposes further costs on the economy in lost productivity. Depression and stress-related disorders are the most common mental illnesses, and the prevention of depression and suicide is one of the five central focus points of the European Pact for Mental Health and Well-Being. While other mental illnesses may benefit in the long term, depression and stress are the focal mental illnesses addressed in OPTIMI. Currently, the main treatments for mental illness are pharmacological therapy and evidence-based cognitive behavioral therapy (CBT). CBT comprises a set of therapist and patient processes whose format allows the whole treatment process to be computerized and personalized as computerized CBT (CCBT). OPTIMI will try to improve the state of the art by monitoring stress and poor coping behavior in high-risk populations, and by developing tools to predict the onset of depression through early identification. The main goal of OPTIMI is to improve CCBT programs in order to enhance both efficacy and therapeutic effectiveness. The presentation will outline the project's main goals and its clinical rationale.
An integrated communication network, SurgON, has been developed to enable a surgeon to control multiple operating room systems and devices and monitor data streams from diverse sources via a single console. The system also enables the surgeon to grant access and control to remote observers and participants. A test configuration has been evaluated.
Conveying to a patient the exact physical nature of a disease or procedure can be difficult. By establishing an access website and using existing 3D viewer software along with our expanding set of anatomical models, we can provide an interface for manipulating realistic 3D models of common anatomical ailments, chosen from a database that is frequently updated at the request of the medical community. Physicians will be able to show patients exactly what their condition looks like internally, and explain in better detail how a procedure will be performed.
Image-guided catheter ablation therapy is becoming an increasingly popular treatment option for atrial fibrillation. Successful treatment relies on accurate guidance of the treatment catheter. Integration of high-resolution, pre-operative data with electrophysiology data and positional data from tracked catheters improves targeting, but lacks the means to monitor changes in the atrial wall. Intra-operative ultrasound provides a method for imaging the atrial wall, but the real-time, dynamic nature of the data makes it difficult to seamlessly integrate with the static pre-operative patient-specific model. In this work, we propose a technique that uses a self-organizing map (SOM) for dynamically adapting a pre-operative model to surface patch data. The surface patch would be derived from a segmentation of the anatomy in a real-time, intra-operative ultrasound data stream. The method is demonstrated on two regular geometric shapes as well as on data simulated from a real patient computed-tomography dataset.
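The core SOM update behind this kind of model adaptation is compact: for each observed surface point, find the best-matching model node and pull it, and its neighbors along the model, toward the observation. The sketch below is an illustrative 2D toy, not the authors' implementation; the learning rate, neighborhood width, and contour data are assumptions.

```python
# Minimal sketch (illustrative): adapt a pre-operative contour's node
# positions to intra-operative surface-patch points with a 1-D SOM update.
import math

def som_adapt(nodes, observations, epochs=50, lr=0.3, sigma=1.0):
    """Adapt 2D node positions toward observations (lists of (x, y) tuples)."""
    nodes = [list(p) for p in nodes]
    for _ in range(epochs):
        for ox, oy in observations:
            # Best-matching unit: the node closest to the observation.
            bmu = min(range(len(nodes)),
                      key=lambda i: (nodes[i][0] - ox) ** 2
                                    + (nodes[i][1] - oy) ** 2)
            for i, n in enumerate(nodes):
                # Neighborhood weight decays with index distance along the contour,
                # so nearby nodes move together and the surface deforms smoothly.
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                n[0] += lr * h * (ox - n[0])
                n[1] += lr * h * (oy - n[1])
    return nodes

# A straight pre-operative contour adapting toward a shifted surface patch.
contour = [(float(i), 0.0) for i in range(5)]
patch = [(float(i), 1.0) for i in range(5)]
adapted = som_adapt(contour, patch)
print(all(abs(y - 1.0) < 0.1 for _, y in adapted))  # nodes pulled onto the patch
```

The neighborhood term is what distinguishes this from per-point snapping: the model stays coherent while conforming to the sparse intra-operative patch, which is the behavior the abstract relies on for merging static and real-time data.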
This paper presents an advanced method of visualizing the surface appearance of living brain tissue. We have been granted access to the operating theatre during neurosurgical procedures to obtain colour data via calibrated photography of exposed brain tissue. The specular reflectivity of the brain's surface is approximated by analyzing a gelatine layer applied to animal flesh. This provides data for a bidirectional reflectance distribution function (BRDF) that is then used in the rendering process. Rendering is achieved in real time by utilizing the GPU, and includes support for ambient occlusion, advanced texturing, subsurface scattering, and specularity. Our goal is to investigate whether realistic visualizations of living anatomy can be produced, and so provide added value to anatomy education.