Ebook: Medicine Meets Virtual Reality 02/10
The book offers papers on many aspects of electronic technology in healthcare. Core areas are imaging, simulation, visualization, data networks, sensors, robotics, and displays. Medical applications include information-guided surgery, education and procedural training, telemedicine, immersive environments, stereoscopic projection, diagnostic tools, rehabilitation, and augmented reality. The papers describe both completed projects and recent developments in ongoing research. The book collects the papers of the 10th annual "Medicine Meets Virtual Reality" conference (January 2002). This volume is a resource for computer scientists working in a medical context, and for creators of data-focused products for clinical care, medical education, and procedural training.
Digital Upgrades: Applying Moore's Law to Health. We chose this year’s theme to acknowledge that virtual reality, as an aid to medical diagnosis and therapy, is now being validated by clinical experience. No longer merely conjectural or a start-up novelty, it is well on its way to becoming routine.
In a half-humorous way, “Digital Upgrades” refers to upgrading our bodies the way we now upgrade software. Still the stuff of science fiction, personal health augmentation devices and programs will become, we predict, commonplace tools in the future. Already, networks connecting physicians, patients, and data are upgrading our methods of care. Sensors and microdevices, constantly improving, will become eyes, ears, fingers, noses, and tongues for these networks.
The doubling of efficiency and capability that Moore's Law describes does not directly apply to healthcare, as Richard Robb explains in his paper. However, we can expect accelerating progress as the utility of imaging, robotics, and informatics is demonstrated in the doctor's office and hospital. Kirby Vosburgh and Ronald Newbower examine how information technology already assists clinical care, and they address the barriers that discourage physicians and the healthcare industry from adopting novel tools and methods. What’s noteworthy to us is that ordinary patients now benefit from the research shared at MMVR over the past ten years. Inevitably, continuing technological leaps and refinements will merge electronic tools with our bodies in ways we can now only imagine.
MMVR02/10 takes place in the wake of September 11, 2001, and we believe there is a relationship between that day's events and what this conference is for. On that morning, it became clear why healthcare should be transformed along the lines of Moore's Law. September 11 taught us that data has become the most critical political and economic resource, that which determines the power of nations. To address some particular fears, the United States and other wealthy nations are confronting the urgent need for improved data-intensive biomedical tools. For all its agonies, war does stimulate medical progress. We're sure to see increased government and private investment in electronic aids to medical training, surgery, telemedicine, data networks, and sensors — what MMVR is all about. Our ability to defend ourselves depends upon this investment.
On the preventive side of conflict, if medical excellence were to proliferate the way cellular phones, personal computers, and the internet have, then the inequality — generally, as well as in healthcare — between rich and poor nations would diminish. (And inequality will increasingly deter peace because global communications are making disparities between nations more obvious.) Although we can’t replicate healthcare workers like we can computer chips, the ever cheaper production of health-supporting technology — per Moore’s Law — would make medical care better and more available in the developing world. Healthier, more valuable individual lives will add up to a more peaceful world.
This volume is the product of the tenth annual Medicine Meets Virtual Reality conference. Noting this special anniversary, we wish to thank the hundreds of researchers who, during the past decade, have shared their knowledge and vision and made MMVR a tool for giving better health to all.
This paper is a personal perspective on VR in medicine over the last decade.
The advancement of technical power described by Moore’s Law offers great potential for enabling more cost-effective medical devices and systems. However, progress has been slow. Many factors for this failure have been cited, including the anti-rational economic structure of healthcare and the complexity and long time scale of medical development. Christensen et al. suggest that “disruptive technologies” may circumvent some of these difficulties. “Disruptive Technologies” are defined as those that are established in one market, but then penetrate and overwhelm another market. These incursions are accelerated by economic factors, and capitalize on functionality, reliability, and advancements supported by the original market. Christensen has cited many examples from industrial and service businesses, but few examples can be found yet in healthcare.
We argue that positive technology impacts in medicine occur most readily when innovators augment the skills of and collaborate with caregivers, rather than seeking to displace them. In the short term, a new approach may improve efficiency or quality. In the longer term, such approaches may obviate human tasks at lower-skill levels, and even permit task automation. One successful example has been the introduction of flexible monitoring for physiologic information. Systems for computer-aided diagnosis, which have failed to impact complex decision making, have succeeded in simpler specialty areas such as the interpretation of EKGs and mammograms, and may do the same with analysis of some pathology images. The next frontier may be the operating room, and the adoption of such systemic technologies by caregivers in emergency medicine and general care may then have an even wider “disruptive” effect. Responding to time and cost pressures, and the desire to move care to the patient, other workers, such as radiologists, will drive the trend away from isolated, complex, large-scale devices, and toward integrated, modular, and simpler networked technologies.
In summary, technological “push” will continue in the demanding cutting-edge application areas as always, but the “disruption” will occur through wider application of lower-cost technologies, pulled by the users. The capabilities described by Moore's Law will allow the advancements necessary to facilitate this dissemination of capability and its ultimate benefit, so long sought.
Generation of credible force feedback renderings adds the sense of touch crucial for the development of a realistic virtual surgical environment. However, a number of difficulties must be overcome before this can be achieved. One of the problems is the paucity of data on the in-vivo tissue compliance properties needed to generate acceptable output forces. Without this “haptic texture,” the sense of touch component remains relatively primitive and unrealistic. Current research in the quantitative analysis of biomechanics of living tissue, including collection of in-vivo tissue compliance data using specialized sensors, has made tremendous progress. However, integration of all facets of biomechanical data in order to transfer them into haptic texture remains a very difficult problem. For this reason, we are attempting to create a library of heuristic haptic textures of anatomical structures. The library of heuristic haptic textures will capture the expert's sense of feel for selected anatomical structures and will be used to convey the sense of touch for surgical training simulations. Once the techniques for converting biomechanical data into haptic texture become more robust, this library can be used as a benchmark to verify theoretical computational models used for generating output forces in haptic devices.
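A library of heuristic haptic textures could be consulted at render time roughly as follows. This is a minimal sketch, not the authors' implementation: the tissue names, parameter values, and the spring-damper force model are illustrative assumptions standing in for the expert-tuned textures the paper describes.

```python
# Hypothetical sketch of a heuristic haptic-texture library lookup.
# Tissue names and parameter values are illustrative, not measured data.

def render_force(tissue: str, depth: float, velocity: float,
                 library: dict) -> float:
    """Return a 1-D output force (N) from a spring-damper heuristic.

    depth    -- tool indentation into the tissue surface (m)
    velocity -- indentation rate (m/s)
    """
    if depth <= 0.0:
        return 0.0  # tool not in contact with the tissue
    params = library[tissue]
    # Simple Kelvin-Voigt style response: stiffness term plus damping term.
    return params["stiffness"] * depth + params["damping"] * velocity

# Illustrative entries an expert might tune by feel.
HAPTIC_LIBRARY = {
    "liver":  {"stiffness": 400.0, "damping": 5.0},
    "muscle": {"stiffness": 900.0, "damping": 8.0},
}
```

Once validated biomechanical models exist, outputs of such a table could be compared term by term against computed forces, which is the benchmarking role the abstract envisions.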
Mastoidectomy is one of the most common surgical procedures relating to the petrous bone. In this paper we describe our preliminary results in the realization of a virtual reality mastoidectomy simulator. Our system is designed to work on patient-specific volumetric object models directly derived from 3D CT and MRI images. The paper summarizes the detailed task analysis performed in order to define the system requirements, introduces the architecture of the prototype simulator, and discusses the initial feedback received from selected end users.
By combining teleconferencing, tele-presence, and Virtual Reality, the Tele-Immersive environment enables master surgeons to teach residents in remote locations. The design and implementation of a Tele-Immersive medical educational environment, Teledu, is presented in this paper. Teledu defines a set of Tele-Immersive user interfaces for medical education. In addition, an Application Programming Interface (API) is provided so that developers can easily develop different applications with different requirements in this environment. With the help of this API, programmers only need to design a plug-in to load their application specific data set. The plug-in is an object-oriented data set loader. Methods for rendering, handling, and interacting with the data set for each application can be programmed in the plug-in. The environment has a teacher mode and a student mode. The teacher and the students can interact with the same medical models, point, gesture, converse, and see each other.
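The plug-in described above is an object-oriented data-set loader with per-application rendering and interaction methods. The interface might look roughly like the following sketch; the class and method names are hypothetical, since the actual Teledu API is not reproduced in the abstract.

```python
# Hypothetical sketch of a Teledu-style data-set loader plug-in interface.
# Class and method names are illustrative assumptions, not the real API.
from abc import ABC, abstractmethod


class DataSetPlugin(ABC):
    """A plug-in supplies loading, rendering, and interaction for one
    application-specific data set in the shared Tele-Immersive scene."""

    @abstractmethod
    def load(self, path: str) -> object:
        """Load the application-specific data set from storage."""

    @abstractmethod
    def render(self, scene: object) -> None:
        """Draw the data set into the shared immersive scene."""

    @abstractmethod
    def interact(self, event: object) -> None:
        """Handle pointing/gesture events from the teacher or students."""
```

With such an interface, a developer writes only one concrete subclass per application; the environment itself supplies teacher/student modes, avatars, and voice.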
Cataplexy, a sudden loss of voluntary muscle control, is one of the hallmark symptoms of narcolepsy, a sleep disorder characterized by excessive daytime sleepiness. Cataplexy is usually triggered by strong, spontaneous emotions, such as laughter, surprise, fear or anger, and is more common in times of stress. The Sleep Disorders Unit and the Biomedical Imaging Resource at Mayo Clinic are developing interactive display technology for reliably inducing cataplexy during clinical monitoring. The use of immersive displays may help bypass patient defenses, and game-like “unreality” allows introduction of surprising, threatening, or humorous elements, with little risk of offending patients. The project is referred to as the “Cataplexy/Narcolepsy Activation Program”, or CatNAP. We have developed an automobile driving simulation to allow the introduction of humorous, surprising, or stress-inducing events and objects as the patient attempts to navigate a simulated vehicle through a virtual town. The patient wears a stereoscopic head-mounted display, by which he views the virtual town through the windows of his simulated vehicle. The vehicle is controlled via a driving simulator steering wheel and pedal cluster. The patient is instructed to drive his vehicle to another location in town, given initial directions and street signs. As he attempts to accomplish the task, various objects, sounds or conditions occur which may distract, startle, frustrate or cause laughter; responses which may trigger a cataplectic episode. The patient can be monitored by reflex tests and EMG recordings during the driving experience. An evaluation phase with volunteer patients previously diagnosed with cataplexy has been completed. The goal of these trials was to gain insight from the volunteers as to improvements that could be made to the simulation. 
All patients that participated in the evaluation phase have been under a physician's care for a number of years and control their cataplexy with medication. We believe this is a novel and innovative approach to a difficult problem. CatNAP is a compelling example of the potentially effective application of virtual reality technology to an important clinical problem that has resisted previous approaches. Preliminary results suggest that an immersive simulation system like CatNAP will be able to reliably induce cataplexy in a controlled environment. The project is continuing through a final stage of refinement prior to conducting a full clinical study.
This paper discusses the use of the Long Elements Method – LEM in soft tissue modeling and surgery simulation. The LEM is a new method for real time, physically based, dynamic simulation of deformable objects, based on a new meshing strategy, using long elements. The method uses a combination of static (state-less) and dynamic approaches to simulate deformations and dynamics, obtaining a higher degree of compliance per time step. Global deformations that conserve volume and are convincingly compliant are obtained. Models are defined using bulk material properties. Elastic and plastic deformations can be simulated. The real time performance of the method and its intrinsic properties of volume conservation, modeling based in material properties and simpler meshing make it particularly attractive for soft tissue modeling and surgery simulation.
BACKGROUND: This study was designed to evaluate the safety of a self-administered triage tool. MATERIALS: Ninety-five patients older than 14 years who presented to Memorial Hermann Hospital emergency room (ER) with chief complaint of abdominal pain were included in the study. Their ER disposition and final diagnoses were logged into a database. The assigned disposition and top three diagnoses by the triage tool for each patient were also logged into the database. An emergency physician blinded to the actual disposition reviewed all cases and provided a disposition for each patient. RESULTS: The system disposed 51.1% of cases appropriately and under-disposed 4.4% of cases. Comparison between the system and the emergency physician shows that all cases under-disposed by the system are also under-disposed by the physician.
New surgical navigation techniques may combine the use of live video from a surgical endoscope with 3D volumetrically-reconstructed images of a patient's anatomy. This image-enhanced endoscopy requires calibration of the endoscope to ensure that the mapping of the real endoscope image to its virtual counterpart is properly performed. The application of a technique to calibrate an endoscope prior to use in a diagnostic or therapeutic procedure is described, as well as a simple yet effective linear method for lens-distortion compensation. The results of accuracy testing of the calibration technique using a dedicated testing apparatus are reported.
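The paper's exact linear lens-distortion formulation is not reproduced in the abstract, so the following is only a generic first-order radial undistortion sketch, shown to illustrate the kind of per-pixel correction such a calibration produces. The function name and the single-coefficient model are assumptions.

```python
# Generic first-order radial lens-undistortion sketch (illustrative only;
# not the paper's specific linear method).

def undistort_point(x: float, y: float, k1: float,
                    cx: float = 0.0, cy: float = 0.0) -> tuple:
    """Map a distorted image point to its corrected position.

    (cx, cy) is the distortion centre; k1 is the first-order radial
    coefficient estimated during calibration.
    """
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy          # squared radius from the centre
    scale = 1.0 + k1 * r2           # first-order radial correction factor
    return (cx + dx * scale, cy + dy * scale)
```

After calibration, applying such a mapping to the live endoscope image keeps the real view in registration with its volumetrically rendered virtual counterpart.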
The design of simulators for surgical training and planning poses a great number of technical challenges. Therefore the focus of systems and algorithms was mostly on the more restricted minimal invasive surgery. This paper tackles the more general problem of open surgery and presents efficient solutions to several of the main difficulties. In addition to an improved collision detection scheme for computing interactions with even heavily moving tissue, a hierarchical system for the haptic rendering has been realized in order to reach the best performance of haptic feedback. A flexible way of modeling complex surgical tools out of simple basic components is proposed. In order to achieve a realistic and at the same time fast relaxation of the tissue, the approach of explicit finite elements has been substantially improved. We are able to demonstrate realistic simulations of interactive open surgery scenarios.
A VR-based system using a CyberGlove and a Rutgers Master II-ND haptic glove was used to rehabilitate four post-stroke patients in the chronic phase. Each patient had to perform a variety of VR exercises to reduce impairments in their finger range of motion, speed, fractionation and strength. Patients exercised for about two hours per day, five days a week for three weeks. Results showed that three of the patients had gains in thumb range (50-140%) and finger speed (10-15%) over the three-week trial. All four patients had significant improvement in finger fractionation (40-118%). Gains in finger strength were modest, due in part to an unexpected hardware malfunction. Two of the patients were measured again one month post-intervention and showed good retention. Evaluation using the Jebsen Test of Hand Function showed a reduction of 23-28% in time completion for two of the patients (the ones with the higher degrees of impairment). A prehension task was performed 9-40% faster for three of the patients after the intervention, illustrating transfer of their improvement to a functional task.
Accurate biomechanical characteristics of tissues are essential for developing realistic virtual reality surgical simulators utilizing haptic devices. Surgical simulation technology has progressed rapidly but without a large database of soft tissue mechanical properties to incorporate. The device described here is a computer-controlled, motorized endoscopic grasper capable of applying surgically relevant levels of force to tissue in vivo and measuring the tissue's force-deformation properties.
We present schemes for real-time generalized interactions such as probing, piercing, cauterizing and ablating virtual tissues. These methods have been implemented in a robust, real-time (haptic rate) surgical simulation environment allowing us to model procedures including animal dissection, microsurgery, hysteroscopy, and cleft lip repair.
We present schemes for real-time generalized mesh cutting. Starting with a basic example, we describe the details of implementing cutting on single and multiple surface objects as well as hybrid and volumetric meshes using virtual tools with single and multiple cutting surfaces. These methods have been implemented in a robust surgical simulation environment allowing us to model procedures ranging from animal dissection to cleft lip correction.
Symmetry considerations can be used not only to plan the desired shape of reconstructed bone structures, but also to generate prototypes for soft tissue implants. The poster describes a system which automatically calculates a symmetry plane in the facial area and computes proposals for implants or transplants. The system presented has been used to calculate soft tissue implants and a replacement for parts of the lower jaw.
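Once a facial symmetry plane is known, an implant proposal can be generated by mirroring the intact side across it. The sketch below shows only this reflection step; the function name is hypothetical, and how the plane itself is computed (the system does it automatically) is not reproduced here.

```python
# Sketch of the mirroring step used to propose an implant shape:
# reflect a point cloud across a plane given by a point and a normal.
import numpy as np


def mirror_across_plane(points, plane_point, plane_normal):
    """Reflect each 3-D point across the plane (plane_point, plane_normal).

    The mirrored healthy side can serve as a first proposal for an
    implant or transplant on the defective side.
    """
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)               # ensure a unit normal
    p = np.asarray(points, dtype=float)
    d = (p - plane_point) @ n               # signed distance to the plane
    return p - 2.0 * d[:, None] * n         # reflect across the plane
```

For a roughly midsagittal plane, points on the patient's left map onto their symmetric counterparts on the right, giving the target geometry the implant should fill.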
Attention Deficit Hyperactivity Disorder (ADHD) is a childhood syndrome characterized by short attention span, impulsiveness, and hyperactivity, which often leads to learning disabilities and various behavioral problems. The prevalence rates for ADHD varied from a low of 2.0% to a high of 6.3% in 1992 statistics, and may be higher now. Using virtual environments and neurofeedback, we have developed an Attention Enhancement System for treating ADHD, and we have conducted a clinical test. Classroom-based virtual environments are constructed for intimacy and intensive attention enhancement. In this basic virtual environment, subjects performed training sessions of two kinds: Virtual Reality Cognitive Training (VRCT) and Virtual Reality Neurofeedback Training (VRNT). In VRNT, neurofeedback drives changes in the virtual environment: if the beta ratio is greater than the specified threshold level, a change is created in the virtual environment as positive reinforcement. 50 subjects, aged 14 to 18, who had committed crimes and had been isolated in a reformatory took part in this study. They were randomly assigned to one of five 10-subject groups: a control group, two placebo groups, and two experimental groups. The experimental groups and the placebo groups underwent 10 sessions over two weeks; the control group underwent no training sessions during the same period of time. While the experimental groups used an HMD and head tracker in each session, the placebo groups used only a computer monitor. Consequently, only the experimental groups could look around the virtual classroom. Placebo Group 1 and Experimental Group 1 performed the same task (Neurofeedback Training), and Placebo Group 2 and Experimental Group 2 also performed the same task (Cognitive Training). All subjects performed the Continuous Performance Task (CPT) before and after all training sessions.
In the number of correct answers, omission errors and signal detection index (d’), the subjects’ CPT scores showed significant improvement (p<0.01) after all of the training sessions, while the control group showed no significant change. The experimental groups also differed significantly (p<0.01) from the placebo groups. Lastly, the Virtual Reality Neurofeedback training group and the Virtual Reality Cognitive training group showed no significant difference from each other. Our system appears to enhance subjects’ attention and lead to behavioral improvement. We can also conclude that virtual reality training (both Neurofeedback training and Cognitive training) has an advantage for attention enhancement compared with desktop training.
A PC based system for simulating image-guided interventional neuroradiological procedures for physician training and patient specific pretreatment planning is described. The system allows physicians to manipulate and interface interventional devices such as catheters, guidewires, stents and coils within 2-D and hybrid surface and volume rendered 3-D patient vascular images in real time. A finite element method is employed to model the interaction of the catheters and guidewires with the vascular system. Fluoroscopic, roadmapping and volume rendered 3-D presentations of the vasculature are provided. System software libraries allow for the use of commonly employed catheters, guidewires, stents and occluding coils of various shapes and sizes. The results of an initial clinical validation suggest that the experience gained from our simulator is comparable with that of using a vascular phantom. We are conducting further validation with the aim of providing patient specific pretreatment planning.
This paper describes a new virtual reality (VR) locomotion input device, the Pressure Mat, which allows a simulator user to navigate/traverse a virtual environment (VE) using similar motions as in the real world; i.e., walk, run, crawl. The device consists of an array of pressure sensitive resistors covered by a thin, flexible mat. The resistor array is connected to a personal computer (PC) that uses a real-time pattern recognition algorithm to determine if the user is standing still, or walking forward, backward, left or right. The information from the Pressure Mat was used to allow users to navigate in a VE. The Pressure Mat may also be useful in the diagnosis of a variety of conditions and/or in rehabilitative therapy.
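The abstract does not detail the pattern recognition algorithm, so the following is only a deliberately simplified sketch of one way such a classifier could work: it reduces the sensor grid to a centre of pressure and thresholds its offset from the mat centre. The function name, grid orientation, and threshold are illustrative assumptions.

```python
# Simplified stand/walk classifier for a pressure-sensor grid (a sketch of
# the idea only, not the paper's real-time pattern recognition algorithm).
import numpy as np


def classify_step(pressure: np.ndarray, threshold: float = 1.0) -> str:
    """Classify user state from a 2-D grid of pressure readings.

    Rows run front-to-back, columns left-to-right; the centre of
    pressure (CoP) is compared against the mat centre.
    """
    total = pressure.sum()
    if total <= 0:
        return "off-mat"
    rows, cols = np.indices(pressure.shape)
    cop_r = (rows * pressure).sum() / total      # front/back CoP coordinate
    cop_c = (cols * pressure).sum() / total      # left/right CoP coordinate
    centre_r = (pressure.shape[0] - 1) / 2.0
    centre_c = (pressure.shape[1] - 1) / 2.0
    if abs(cop_r - centre_r) < threshold and abs(cop_c - centre_c) < threshold:
        return "standing"
    if abs(cop_r - centre_r) >= abs(cop_c - centre_c):
        return "forward" if cop_r < centre_r else "backward"
    return "left" if cop_c < centre_c else "right"
```

A real implementation would track the CoP trajectory over time to separate walking from leaning, and would add gait-specific features to recognize running and crawling.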
Computer- and robot-based systems to support interventions are becoming more and more important in modern surgery. In general these systems provide methods to plan an intervention pre-operatively [1,2,13] and to execute it with support from an autonomous robot system [3,4]. Because a robot is in principle restricted to comparatively simple work steps, there are some complex work steps which the surgeon may plan but has to execute manually. In craniofacial surgery, osteotomised bone segments are deformed by hand to a shape given by the planning system. We support the execution of pre-planned deformations by comparing the actual shape of an object with the target shape. The actual shape is obtained intra-operatively with a surface scanning device, and deviations from the target shape are visualised by projecting colour-coded error values directly onto the object to be deformed. The surgeon uses these projections to adjust further deformation steps. The system is therefore able to validate the correct execution of planned deformations, especially of bony structures.
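The colour-coding step can be sketched as follows: compute a per-point deviation between scanned and target shapes, then map it to a colour for projection. This assumes the two shapes are already in point-to-point correspondence, and the green-to-red map is an illustrative choice, not necessarily the paper's.

```python
# Sketch of colour-coding shape deviations for projection onto the object.
# Assumes scanned and target points are in correspondence (an assumption).
import numpy as np


def deviation_colors(actual, target, max_err):
    """Return one RGB triple per point: green within tolerance, fading
    to red as the deviation approaches max_err (in the same units as
    the point coordinates)."""
    a = np.asarray(actual, dtype=float)
    t = np.asarray(target, dtype=float)
    err = np.linalg.norm(a - t, axis=1)       # per-point deviation
    u = np.clip(err / max_err, 0.0, 1.0)      # normalise into [0, 1]
    # Green (in tolerance) fades linearly to red (large deviation).
    return np.stack([u, 1.0 - u, np.zeros_like(u)], axis=1)
```

Projected onto the bone segment, such a map lets the surgeon see directly where further bending is needed and by roughly how much.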
Normative data is very important for simulation procedures in craniofacial surgery [1]. While treating e.g. a malformed skull the surgeon seeks to reconstruct its natural and harmonic shape. Atlas or normative data of the skull could support the surgeon in this effort [2], as it would provide a standard model of the skull which gives an idea of the natural shape. We create a standard skull by averaging regularly formed skulls in a shape space spanned by spherical harmonics. While state-of-the-art methods use landmarks to define the shape and mean shapes [4,5,6,14], this method is deterministic, i.e. it manages averaging without landmarks and it provides a complete description of the shape. In addition the shape space can be used to classify shapes to identify different types of an anatomy.
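In a spherical-harmonics shape space, averaging and classification reduce to operations on coefficient vectors, which can be sketched as below. The flat coefficient-vector layout and the nearest-mean classifier are assumptions for illustration; the paper's actual expansion degree and classification scheme are not reproduced.

```python
# Sketch of averaging and classifying shapes in a spherical-harmonics
# coefficient space. Each row is one skull's SH coefficient vector,
# with all skulls expanded to the same degree (an assumption).
import numpy as np


def mean_shape(coefficient_sets):
    """Component-wise mean of SH coefficient vectors: the 'standard' shape."""
    c = np.asarray(coefficient_sets, dtype=float)
    return c.mean(axis=0)


def classify(coefficients, class_means):
    """Assign a shape to the nearest class mean in shape space
    (illustrative nearest-mean rule)."""
    c = np.asarray(coefficients, dtype=float)
    dists = {name: np.linalg.norm(c - m) for name, m in class_means.items()}
    return min(dists, key=dists.get)
```

Because the whole surface is encoded in the coefficients, no landmarks are needed, which matches the deterministic, landmark-free averaging the abstract claims.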
Augmented reality is often used for interactive, three-dimensional visualization within the medical community. To this end, we present the integration of an augmented reality system that will be used to train military medics in airway management. The system demonstrates how a head-mounted projective display can be integrated with a desktop PC to create an augmented reality visualization. Furthermore, the system, which uses a lightweight optical tracker, demonstrates the low cost and the portability of the application.
In this work we focus our attention on developing a surgical simulator for performing laparoscopic Heller myotomy using force feedback. A meshless numerical technique, the method of finite spheres, is used for the purpose of physically based, real time haptic and graphical rendering of soft tissues. Localized discretization allows display of deformations in the vicinity of the tool tip as well as interaction forces at high update rates (kHz). Novel cutting algorithms are implemented using point-based representation of anatomical models. Graphical rendering is accomplished by using a recently developed volumetric rendering technique known as splatting.
Pathology beneath a highly reflective surface, such as the human retina, is key in the detection and management of disease. Advanced imaging techniques can help reveal this pathology. Visualizing it, and guiding the imaging to the appropriate area, remain problems, in part due to the small scale of the pathology with respect to the potential area to be covered.
Collaborative virtual environments for diagnosis and treatment planning are increasingly gaining importance in our global society. Virtual and Augmented Reality approaches promise to provide valuable means for the involved interactive data analysis, but the underlying technologies still create a cumbersome work environment that is inadequate for clinical employment. This paper addresses two of the shortcomings of such technology: intuitive interaction with multi-dimensional data in immersive and semi-immersive environments, as well as stereoscopic multiuser displays combining the advantages of Virtual and Augmented Reality technology.