Ebook: Medicine Meets Virtual Reality 17
The 17th annual Medicine Meets Virtual Reality (MMVR17) was held January 19-22, 2009, in Long Beach, CA, USA. The conference is well established as a forum for emerging data-centered technologies for medical care and education. Each year, it brings together an international community of computer scientists and engineers, physicians and surgeons, medical educators and students, military medicine specialists and biomedical futurists. MMVR emphasizes inter-disciplinary collaboration in the development of more efficient and effective physician training and patient care. The MMVR17 proceedings collect 108 papers by conference lecture and poster presenters. These papers cover recent developments in biomedical simulation and modeling, visualization and data fusion, haptics, robotics, sensors and other related information-based technologies. Key applications include medical education and surgical training, clinical diagnosis and therapy, physical rehabilitation, psychological assessment, telemedicine and more. From initial vision and prototypes, through assessment and validation, to clinical and academic utilization and commercialization - MMVR explores the state-of-the-art and looks toward healthcare’s future. The proceedings volume will interest physicians, surgeons and other medical professionals interested in emerging and future tools for diagnosis and therapy; educators responsible for training the next generation of doctors and scientists; IT and medical device engineers creating state-of-the-art and next-generation simulation, imaging, robotics and communication systems; data technologists creating systems for gathering, processing and distributing medical intelligence; military medicine specialists addressing the challenges of warfare and defense health needs; and biomedical futurists and investors who want to understand where the field is headed.
MMVR researchers utilize the tools and methods of information technology to design improved human well-being. Visualization, simulation, modeling, robotics, sensors, networking: data becomes the key to better diagnosis and therapy. IT efficiencies have the potential to eliminate the customary tradeoffs between quality and cost. However, imbalances in healthcare threaten this potential.
Although the United States spends more on healthcare than any other nation, returns on this investment remain questionable. The ubiquitous fee-for-service model rewards physicians for the quantity of procedures performed, thus encouraging more procedures. Accordingly, time spent weighing the value of each procedure in light of medical evidence and patient history decreases; the odds of waste and malpractice increase. The greater quantity of procedures in turn motivates insurance payers to reduce payment per procedure and increase the premiums obtained from their covered population. Lower per-procedure reimbursement means more procedures done in order to maintain financial equilibrium. Higher premiums result in more individuals for whom insurance becomes unaffordable and healthier patients opting out of the insurance system, increasing premiums for those left behind. Those without coverage are unable to negotiate the discounted rates enjoyed by insurance payers, so they pay more for identical care.
The complexity of the United States healthcare delivery system is expensively maintained by legions of administrators, accountants, technicians, and clerical staff. Despite their limited medical expertise, they indirectly determine who receives what kind of treatment. Physicians must masterfully navigate a shifting web of insurance protocols as they weigh therapies, while in the background, malpractice lawyers console patients failed by this out-of-balance system.
This system would not exist if it offered no benefits. Private insurance companies generate profit and guard their turf with political influence. This system offers ample employment for those who do not provide care but regulate its delivery. Patients with good insurance enjoy treatment that equals that of other rich nations. Drug and device makers invest heavily in R&D since blockbuster products reap long-term rewards. The downward spiral of per-procedure reimbursement creates incentives to design and adopt technology that is cheaper, more accurate, and (for better or worse) reduces the time a physician must spend with each patient.
Is this system sustainable? As we watch the financial industry implode, we must wonder whether healthcare is heading toward a similar crisis. Convinced by years of advertising, Baby Boomers expect a healthy old age, enabled by breakthrough technologies, just as the resources to pay for care are evaporating. A recent survey by the Physicians' Foundation reports that nearly half of primary care doctors would retire today if they had sufficient financial means; a similar proportion plans to reduce patient load or retire altogether within the next three years. How will these trends affect care for an aging population?
Broader imbalances—and perhaps solutions—exist outside US healthcare. In October 2008, the World Future Society offered ten forecasts for 2009 and beyond. The first predicts, “everything you say and do will be recorded by 2030.” The second projects, “access to electricity will reach 83% of the world by 2030.” The disparity implied by the two scenarios is startling. The first requires—in addition to electricity—countless sensors, processors, bandwidth, search engines, technicians, and tremendous wealth. The second defines a population whose poverty denies it the most basic modern tool, along with its countless tangential benefits.
Reflecting on the first forecast: if an omnipresent sensor and data network were built, how would healthcare benefit? Utilizing such a network, we can imagine a medical research utopia, where every individual becomes a subject in a universal clinical trial. Daily exercise, diet, psychological stresses, and environmental factors are measured and analyzed in conjunction with genetic profile and medical history. Patient non-compliance? It's just another variable providing new insights. Evidence-based, decision-support algorithms replace therapeutic trial and error. When N = everybody, the quantity and quality of data will blur the distinctions between diagnosis and illness, prediction and therapy, experimentation and personalized care. Well-being can be designed more intelligently and efficiently.
The second forecast warns of challenges outside the developed world. To start, modern healthcare demands electricity, yet the unstoppable proliferation of cell phones in the developing world, bypassing landlines that were never built, bodes well for medical data technology. Can the lack of a power grid be overcome with cheap, solar-powered Internet terminals? Might cloud computing be done through the clouds instead of wires? Technological efficiencies from the developed world have the potential to benefit those who have less to pay, allowing their participation in a global wellness intelligence network.
It's an idealistic vision, for sure. Privacy advocates (and anyone who has applied for individual health insurance) fear the widespread dissemination of sensitive data. Will today's credit-identity hacking pale in comparison to future crimes of medical piracy? How will the insurance industry utilize predictive medicine? How will the creativity of academia and industry be supported in a precarious financial environment? Must government's role in the healthcare marketplace change dramatically to sustain well-being for its citizens? Many questions remain unanswered.
At MMVR, where we are privileged to explore and act at the forefront of medical technology, answers emerge. Creating and utilizing data networks to design human well-being is not an abstract vision of the future; it is the challenging task of every day. In these imbalanced times, when harsh realities dominate personal and public dialogue, we congratulate MMVR researchers on their successes and continuing determination.
Combining deterministic (e.g. differential equation) and probabilistic (Bayesian network) approaches to model physiological processes in a real-time software environment leads to a novel model for simulating human patient physiology, especially relevant for intensive care units (ICUs). Using dedicated HW/SW interfaces, simulated patient signals are measurable with standard monitoring systems. This system, based on realistic simulations, is therefore very well suited for teaching and education. Additionally, the environment is usable for inferring patient-specific model structures and parameters. We introduce a hierarchical modeling approach that allows complex models to be built by aggregating simple sub-models. The simulation is controlled to run in real time with typical sampling times of 1–10 ms (depending on model complexity) on a standard PC (Pentium 2.66 GHz CPU).
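As a rough illustration of the hierarchical, real-time approach described above, the sketch below (Python; all class names and parameters are hypothetical, not the paper's API) aggregates a single deterministic sub-model, a first-order ODE integrated with the Euler method at a 10 ms sampling time, into a composite model. The probabilistic (Bayesian-network) layer and the HW/SW monitoring interfaces are omitted.

```python
DT = 0.01  # 10 ms sampling time, within the stated 1-10 ms range

class SubModel:
    """Base interface for one sub-model in the hierarchy (hypothetical)."""
    def step(self, dt):
        raise NotImplementedError

class HeartRateODE(SubModel):
    """Deterministic sub-model: first-order relaxation toward a set point,
    standing in for a physiological differential equation."""
    def __init__(self, hr0=70.0, setpoint=70.0, tau=5.0):
        self.hr, self.setpoint, self.tau = hr0, setpoint, tau
    def step(self, dt):
        # dHR/dt = (setpoint - HR) / tau, explicit Euler integration
        self.hr += dt * (self.setpoint - self.hr) / self.tau

class AggregateModel(SubModel):
    """Complex model built by aggregating simple sub-models."""
    def __init__(self, parts):
        self.parts = parts
    def step(self, dt):
        for p in self.parts:
            p.step(dt)

model = AggregateModel([HeartRateODE(hr0=60.0)])
for _ in range(500):   # 500 steps of 10 ms = 5 s of model time
    model.step(DT)
```

In a real-time setting each `step` call would be paced by a scheduler so that simulated time tracks wall-clock time.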
A new model for describing electrocardiography (ECG) is presented, based on multiple dipoles rather than the standard single-dipole approach of vector electrocardiography. The multiple-dipole parameters are derived from real data (e.g. four dipoles from 12-channel ECG) by numerically solving the inverse (backward) problem of ECG. The results are transformed into a waveform description based on a Gaussian mixture for every dimension of each dipole. These compact parameterized descriptors enable a very realistic real-time simulation applying the forward solution of the proposed model.
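The Gaussian-mixture waveform description can be sketched as follows. The triples below are invented, P/QRS/T-shaped stand-ins for parameters that would actually be fitted from 12-channel ECG data; they are not values from the paper.

```python
import numpy as np

def gaussian_mixture_waveform(t, components):
    """One dipole dimension reconstructed as a sum of Gaussians.
    components: (amplitude, center, width) triples."""
    y = np.zeros_like(t)
    for a, mu, sigma in components:
        y += a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)
    return y

# Illustrative triples (NOT fitted parameters from the paper)
components = [(0.15, 0.20, 0.025),   # P-wave-like bump
              (1.00, 0.40, 0.010),   # R-peak-like spike
              (0.30, 0.60, 0.040)]   # T-wave-like bump
t = np.linspace(0.0, 1.0, 1001)      # one normalized cardiac cycle
lead = gaussian_mixture_waveform(t, components)
```

The compactness of the descriptor is the point: three numbers per component replace a full sampled waveform, and evaluation is cheap enough for real-time forward simulation.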
Physically-based virtual environments (VEs) provide realistic interactions and behaviors for computer-based medical simulations. Limited CPU resources have traditionally forced VEs to be simplified for real-time performance. Multi-core processors greatly increase the computational capacity of computers and are quickly becoming standard. However, developing non-application-specific methods to fully utilize all available CPU cores for processing VEs is difficult. This paper describes a pipeline VE architecture designed for multi-core CPU systems. The architecture enables the development of VEs that leverage the computational resources of all CPU cores for VE simulation. A VE's workload is dynamically distributed across the available CPU cores, so a VE can be developed once and scale efficiently with the number of cores. The described pipeline architecture makes it possible to develop complex physically-based VEs for medical simulations. Initial results for a craniotomy simulator under development have shown super-linear and near-linear speedups in tests with up to four cores.
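A minimal pipeline of the general kind described, with each stage on its own thread and frames flowing through queues, might look like the sketch below. The stage functions and frame payloads are placeholders, not the simulator's actual workload; a production VE would map stages to native threads pinned to cores.

```python
import queue
import threading

def stage(fn, inbox, outbox):
    """One pipeline stage: pull a work item, process it, pass it on."""
    while True:
        item = inbox.get()
        if item is None:          # poison pill shuts the stage down
            outbox.put(None)
            break
        outbox.put(fn(item))

# Hypothetical three-stage VE pipeline: collision -> dynamics -> render prep
q0, q1, q2, q3 = (queue.Queue() for _ in range(4))
stages = [
    threading.Thread(target=stage, args=(lambda x: x + 1, q0, q1)),
    threading.Thread(target=stage, args=(lambda x: x * 2, q1, q2)),
    threading.Thread(target=stage, args=(lambda x: x - 3, q2, q3)),
]
for th in stages:
    th.start()
for frame in range(5):            # feed five "frames" into the pipeline
    q0.put(frame)
q0.put(None)
results = []
while (r := q3.get()) is not None:
    results.append(r)
for th in stages:
    th.join()
```

With independent stages, frame *n* can be in rendering while frame *n+1* is in dynamics, which is how the architecture keeps all cores busy.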
In this paper, we propose a novel approach for simulating soft tissue tearing, using a model that takes into account the existence of fibers within the tissue. These fibers influence the deformation by introducing anisotropy, and impact the direction of propagation for the fracture during tearing. We describe our approach for simulating, in real-time, the deformation and fracture of anisotropic membranes, and we illustrate our method with the simulation of capsulorhexis, one of the critical steps of cataract surgery.
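One simple way to introduce fiber-induced anisotropy in a discrete model is to make element stiffness depend on the angle between the element and the fiber direction. The sketch below is a generic illustration, not the paper's constitutive model; the stiffness values and the stiffer-along-fibers assumption are hypothetical.

```python
def anisotropic_stiffness(spring_dir, fiber_dir, k_along=50.0, k_across=10.0):
    """Stiffness of a spring given its unit direction and the local unit
    fiber direction. Assumption (hypothetical): stiffer along fibers, so
    tears preferentially open between them."""
    dot = sum(a * b for a, b in zip(spring_dir, fiber_dir))
    c2 = dot * dot                      # cos^2 of the angle between them
    return k_along * c2 + k_across * (1.0 - c2)

k_par = anisotropic_stiffness((1.0, 0.0), (1.0, 0.0))   # along fibers
k_perp = anisotropic_stiffness((0.0, 1.0), (1.0, 0.0))  # across fibers
```

In a tearing simulation such a direction-dependent stiffness biases both the deformation field and the fracture propagation direction, which is the qualitative effect the paper exploits for capsulorhexis.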
Until the introduction of non-invasive imaging techniques, the representation of anatomy and pathology relied solely on gross dissection and histological staining. Computerized Tomography (CT) and Magnetic Resonance Imaging (MRI) protocols allow for the clinical evaluation of anatomical images derived from complementary modalities, thereby increasing the reliability of the diagnosis and the prognosis of disease. Despite the significant improvements in image contrast and resolution of MRI, autopsy and classical histopathological analysis are still indispensable for the correct diagnosis of specific diseases. It is therefore important to be able to correlate multiple images from different modalities, in vivo and postmortem, in order to validate non-invasive imaging markers of disease. To that effect, we have developed a methodological pipeline and a visualization environment that allow for the concurrent observation of both macroscopic and microscopic image data relative to the same patient. We describe these applications and sample data relative to the study of the anatomy and disease of the Central Nervous System (CNS). The brain is approached as an organ with a complex 3-dimensional (3-D) architecture that can only be effectively studied by combining observation and analysis at the system level as well as at the cellular level. Our computational and visualization environment allows seamless navigation through multiple layers of neurological data that are accessible quickly and simultaneously.
Spirometry provides useful information for diagnosing disorders of the pulmonary system. A decision support system could help make this useful noninvasive test more reliable. Bayesian reasoning can aggregate the results of a group of neural network classifiers to improve the diagnosis of pulmonary diseases and discriminate between their restrictive and obstructive patterns.
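Bayesian aggregation of several classifiers' posteriors can be sketched as a naive-Bayes combination, assuming conditional independence between the networks. The class posteriors below are invented for illustration; the abstract does not specify the fusion rule, so this is one plausible reading.

```python
import math

def aggregate(posteriors, prior=(1/3, 1/3, 1/3)):
    """Fuse classifier posteriors assuming conditional independence:
    p(c | all nets) proportional to p(c) * prod_i [ p(c | net_i) / p(c) ]."""
    log_p = [math.log(p) for p in prior]
    for post in posteriors:
        for c, pc in enumerate(post):
            log_p[c] += math.log(pc) - math.log(prior[c])
    m = max(log_p)                       # stabilize before exponentiating
    w = [math.exp(v - m) for v in log_p]
    s = sum(w)
    return [v / s for v in w]

# Invented posteriors from three networks over
# (normal, restrictive, obstructive)
nets = [(0.20, 0.30, 0.50), (0.25, 0.25, 0.50), (0.30, 0.20, 0.50)]
fused = aggregate(nets)
```

Note how three individually weak votes for the third class compound into a much more confident fused posterior, which is the point of aggregating an ensemble.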
With the medical community's ever-growing attention to surgical simulators as an effective training means, the development of robust and cost-effective haptic tool interfaces is very much necessary. We have developed such tool interfaces, which can be easily plugged into the PHANTOM® Omni™. Besides simulating the actual tools used in the operating room, they are cost-effective and easy to build.
Videolaryngoscopy (VL) is a novel technology that can facilitate rapid acquisition of intubation skills with simultaneous teacher and learner visualization of laryngeal structures. Videolaryngoscopy improves laryngeal visualization and intubation success in difficult airway management compared to standard direct laryngoscopy. First responders need enhanced airway management tools to improve intubation success rates in civilian pre-hospital and military battlefield settings. We evaluated the feasibility and efficacy of a remote first-responder videolaryngoscopy skills training paradigm using distance learning by VTC (256 kb/s ISDN) with synchronous transmission of laryngoscopy images to a remotely located trainer. Airway visualization, intubation success rates, and intubation times documented the feasibility and comparability of remote and face-to-face introductory familiarization and intubation training with the Storz-Berci videolaryngoscopy system. User acceptance was good. Remote training paradigms for advanced technology solutions such as videolaryngoscopy can accelerate the diffusion of life-saving new technologies, especially when access to specialized training is limited. Videolaryngoscopy visualization and difficult-airway intubation success rates were better than with direct laryngoscopy.
An estimated 10% of preventable battlefield deaths are due to airway obstruction. Improved airway rescue strategies are needed, with new tools for airway management by less experienced providers. Airway management and training are improved using video laryngoscopy (VL) compared to direct laryngoscopy (DL). We evaluated whether novices could rapidly acquire fundamental skills, comparing intubation time and laryngeal visualization using VL versus DL in a manikin model of normal laryngeal anatomy. For 43 subjects, mean intubation time did not differ for DL (25.9 ± 24.5 seconds) vs. VL (26.4 ± 31.5 seconds) {p = 0.94, paired t-test}. Self-reported novice intubation time was 6.82 ± 31.0 seconds greater with VL (31.6 ± 34.6 seconds) vs. DL (24.8 ± 18.5 seconds) {p = 0.255, paired t-test}. The VL vs. DL time difference did not differ between self-reported novice and non-novice groups. Mean Cormack-Lehane airway visualization grades (range 1–4) were higher with VL (1.95 ± 0.97) vs. DL (1.02 ± 0.15) {Student's t-test, p < 0.0001}. VL (69.7%) was preferred to DL (18.6%); no preference was indicated by 11.6%.
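For reference, the paired t-statistic used in within-subject comparisons like those above can be computed as follows. The timing values here are made up for illustration; they are not the study data.

```python
import math

def paired_t(x, y):
    """Paired t statistic and degrees of freedom for matched samples x, y."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)   # sample variance of differences
    return mean / math.sqrt(var / n), n - 1

# Made-up per-subject intubation times in seconds (NOT the study data)
dl = [22.0, 30.0, 18.0, 26.0, 35.0]
vl = [25.0, 28.0, 21.0, 27.0, 33.0]
t_stat, df = paired_t(vl, dl)
```

The paired form tests the mean of the per-subject differences, which is why it is the right choice when each subject intubates with both devices.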
In this pilot study, experienced medical helicopter personnel evaluated and compared the prototype Storz CMAC and GlideScope (GS) videolaryngoscopes in intubating a Laerdal Difficult Airway Manikin in a helicopter. No significant differences were found between the devices in the standard airway mode, with 100% success rates for the intubations. In the difficult airway mode, there was a significant difference (p = 0.03) between the Cormack Lehane scores observed with Direct View (DV) (3.75 ± 0.46, mean ± standard deviation) and the view with the prototype CMAC (2.25 ± 0.71); the view was 3.00 ± 0.76 with the GS. In the difficult airway, significantly more participants obtained a Grade 1 or 2 view when using the CMAC than when using the Mac 3 blade (DV) (p = 0.025; Fisher Exact Probability Test). The success rate for intubating the difficult airway was 0% with DV, compared to 63% with the CMAC and 50% with the GS (p = 0.03). The participants answered a post-study questionnaire regarding the characteristics of the devices and indicated a preference for the CMAC over the GS for intubation of the difficult airway.
This pilot study examined backward intubation of the Laerdal Difficult Airway Manikin in a medical transport helicopter using the prototype Storz CMAC videolaryngoscope (a new, more compact model). The standard manikin airway Cormack Lehane (CL) view scores were 2.00 ± 1.00 for the direct view and 1.375 ± 0.517 for the indirect view (CMAC). Success rates for backward intubation in the standard airway were 100% (CMAC) and 87.5% (DV). Average CL grades in the difficult airway were 3.63 ± 0.74 (DV) and 2.00 ± 0.926 (CMAC) (p = 0.002). The success rates for backward intubation of the difficult airway were 12.5% (DV) and 63% (CMAC). Our results show that in backward intubation of the difficult airway in a helicopter setting, the prototype CMAC videolaryngoscope significantly improved the airway score by 1-2 grades and improved intubation success 5-fold. Studies using the portable CMAC videolaryngoscope under challenging rescue conditions and positions should be considered.
Laser material processing has become a widely used method, especially in industrial automation. Such systems mostly rely on a precise model of the laser process and its parameterization. Beyond industrial use, the laser has also become an integral instrument in medicine for treating human tissue. Human tissue, as an inhomogeneous material to process, poses the question of how to determine a model that reflects its interaction with a specific laser.
It has recently been shown that the pulsed CO2 laser is suitable for ablating bony and cartilage tissue. Until now, this thermo-mechanical bone ablation has not been characterized as a discrete process. To plan and simulate the ablation process at the correct level of detail, parameterization is indispensable. We developed a planning and simulation environment, determined parameters through confocal measurements of bone specimens, and used these results to transfer planned cutting trajectories into a pulse sequence and corresponding robot positions.
This paper reports on a low cost system for training ultrasound imaging techniques. The need for such training is particularly acute in developing countries where typically ultrasound scanners remain idle due to the lack of experienced sonographers. The system described below is aimed at a PC platform but uses interface components from the Nintendo Wii games console. The training software is being designed to support a variety of patient case studies, and also supports remote tutoring over the internet.
Medical simulators continue to evolve, developing capabilities that make their use in medical and health professions education more likely. Among these capabilities are: wireless communication, portability, compactness, and user-friendliness. Demands for effective team training and decision support will drive the next generation of medical simulators.
We investigated the retention of knowledge and skills after repeated Virtual World MOS (VWMOS) team training of CPR in high school students. An experimental group of 9 students was compared to a control group of 7 students. Both groups initially received traditional CPR training, and the experimental group also received 2 VWMOS sessions six months apart. Although we found no significant differences in general basic life support knowledge, knowledge of the changes made to the CPR guidelines was retained 18 months after the last Virtual World training session in the experimental group. Moreover, fewer deviations from the CPR guidelines occurred.
Robotic surgery has gradually gained acceptance due to its numerous advantages such as tremor filtration, increased dexterity and motion scaling. There remains, however, a significant scope for improvement, especially in the areas of surgeon-robot interface and autonomous procedures. Previous studies have attempted to identify factors affecting a surgeon's performance in a master-slave robotic system by tracking hand movements. These studies relied on conventional optical or magnetic tracking systems, making their use impracticable in the operating room. This study concentrated on building an intrinsic movement capture platform using microcontroller based hardware wired to a surgical robot. Software was developed to enable tracking and analysis of hand movements while surgical tasks were performed. Movement capture was applied towards automated movements of the robotic instruments. By emulating control signals, recorded surgical movements were replayed by the robot's end-effectors. Though this work uses a surgical robot as the platform, the ideas and concepts put forward are applicable to telerobotic systems in general.
The developed system is the first prototype of a virtual interface designed to avoid contact with the computer, so that the surgeon can visualize 3D models of the patient's organs more effectively during a surgical procedure or use the system in pre-operative planning. The doctor can rotate, translate, and zoom in on 3D models of the patient's organs simply by moving a finger in free space; in addition, it is possible to visualize all of the organs or only some of them. All interactions with the models happen in real time using the virtual interface, which appears as a touch-screen suspended in free space at a position chosen by the user when the application starts up. Finger movements are detected by an optical tracking system and are used to simulate touch with the interface and to interact by pressing the buttons on the virtual screen.
In this paper we present a novel approach for the simulation of linear and nonlinear tissue response during real-time surgical simulation. In this technique, physics-based computations using finite elements generate a massive database used to train neural networks in an offline pre-computation step. These neural networks are then used during real-time computation, yielding substantial computational efficiency. The significance of the method is that, for the first time, linear and nonlinear simulations may be performed with almost the same operational complexity. Additionally, the quality of the real-time computations may be easily controlled by scaling the number of neurons used. This system provides a unique platform to leverage the computational speed and scalability of soft-computing methods for real-time interactive simulations.
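The offline-training / online-evaluation split described above can be illustrated with a toy one-hidden-layer network. Here a synthetic nonlinear force-displacement curve stands in for the finite-element database; the network size, learning rate, and iteration count are arbitrary choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Offline step: a toy stand-in for the precomputed finite-element
# database -- force magnitude mapped to a nonlinear displacement.
F = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
U = np.tanh(2.0 * F)                       # synthetic "tissue response"

H = 16                                     # hidden-layer width
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 1.0, (H, 1)); b2 = np.zeros(1)

def predict(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

mse0 = float(((predict(F) - U) ** 2).mean())   # error before training

lr = 0.05
for _ in range(5000):                      # plain full-batch gradient descent
    h = np.tanh(F @ W1 + b1)
    err = (h @ W2 + b2) - U
    gW2 = h.T @ err / len(F); gb2 = err.mean(0)
    gh = (err @ W2.T) * (1.0 - h ** 2)     # backprop through tanh
    gW1 = F.T @ gh / len(F); gb1 = gh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((predict(F) - U) ** 2).mean())    # error after training

def realtime_response(f):
    """Online step: a few matrix products -- cheap enough for real time."""
    return float(predict(np.array([[f]])))
```

The expensive physics lives entirely in generating the training set; at runtime only `realtime_response` is called, whose cost is independent of whether the underlying model was linear or nonlinear.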
A portable instrumentation rig is presented for characterizing the nonlinear viscoelastic anisotropic response of intra-abdominal organ tissues. Two linearly independent in-situ experiments are performed at each indentation site on the intra-abdominal organ by subjecting the organ to 1) normal and 2) tangential displacement stimuli using this robotic device. For normal indentation experiments, the indenter is ramped into the tissue and held for 10 seconds before sinusoidal indentation stimuli are applied. For tangential (shear) loading, the indenter tip is rigidly glued to the soft tissue surface. Sinusoidal displacement stimuli are then applied laterally in the tangential plane and the force response is recorded. Tangential loading is repeated along orthogonal directions to measure in-plane mechanical properties. Combined analysis of both experiments leads to an assessment of anisotropy. In-situ experiments on fresh human cadavers are currently under way at the Albany Medical College.
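The ramp-hold-sinusoid displacement command for the normal-indentation experiment might be generated as below. Only the 10 s hold comes from the text; the ramp time, depth, amplitude, and frequency are illustrative assumptions.

```python
import math

def indentation_profile(t, ramp_t=1.0, hold_t=10.0, depth=0.004,
                        amp=0.0005, freq=1.0):
    """Indenter displacement (m) at time t (s): ramp in over ramp_t,
    hold for hold_t (10 s per the text), then superimpose a sinusoid.
    Depth, amplitude, and frequency values are illustrative assumptions."""
    if t < ramp_t:
        return depth * t / ramp_t              # linear ramp into the tissue
    if t < ramp_t + hold_t:
        return depth                           # hold phase (stress relaxation)
    return depth + amp * math.sin(2.0 * math.pi * freq * (t - ramp_t - hold_t))
```

Recording force against this known displacement during hold and sinusoid phases is what lets relaxation and frequency-dependent (viscoelastic) behavior be separated.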
Simulations can transcend the literal representation of practice worlds. This paper considers the use of gaming and narrative to identify key underlying features and the ways in which they can be used in creating simulation activities.
Simulators are typically standalone devices. The HSVO project is developing network-enabled platform-control middleware and a number of integrated ‘edge device’ services to, among other outcomes, enable multi-device and multi-platform simulation support.
The use of virtual reality techniques opens up new perspectives to support and improve the puncture training in medical education. In this work a 3D VR-Simulator for the training of lumbar and ascites punctures has been extended to support the bending of the puncture needle. For this purpose the needle is designed as an angular spring model. The forces that restrict the user from bending the needle are calculated using a multiproxy technique and given to the user via a 6DOF haptic device (Sensable Phantom Premium 1.5). Proxy based haptic volume rendering is used to calculate the proxy movement. This way it is possible to integrate original CT-patient data into the rendering process and generate forces from structures that have not been segmented. The bending technique has been integrated in a VR-training system for puncture interventions and shows good results concerning update rate and user acceptance.
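An angular-spring needle can be discretized as segments joined by torsional springs: bending a joint away from its rest angle yields a restoring torque for the haptic device, and forward kinematics gives the rendered shape. The sketch below is a generic 2-D illustration with an assumed stiffness, not the simulator's actual model or proxy technique.

```python
import math

K_BEND = 0.8  # N*m/rad, assumed torsional stiffness

def bending_torques(angles, rest=0.0, k=K_BEND):
    """Restoring torque at each joint of the discretized needle."""
    return [-k * (a - rest) for a in angles]

def tip_position(segment_len, angles):
    """2-D forward kinematics of the bent needle (for rendering)."""
    x = y = heading = 0.0
    for a in angles:
        heading += a                      # joint angles accumulate
        x += segment_len * math.cos(heading)
        y += segment_len * math.sin(heading)
    return x, y

angles = [0.0, 0.05, 0.10, 0.05]          # radians, an example bend
torques = bending_torques(angles)
tip = tip_position(0.02, angles)          # 2 cm segments
```

In the full system these joint torques would be combined with the proxy-derived contact forces from the CT volume and sent to the 6-DOF haptic device.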
Many existing refreshable Braille display technologies are costly or lack robust performance. A process has been developed to fabricate consistent and reliable pneumatic balloon actuators at low material cost, using a novel manufacturing process. This technique has been adapted for use in refreshable Braille displays that feature low power consumption, ease of manufacture, and a small form factor. A prototype refreshable cell, conforming to American Braille standards, was developed and tested. The cell was fabricated from molded PDMS to form balloon actuators with a spin-coated silicone film, and fast pneumatic driving elements and an electronic control system were developed to drive the Braille dots. Perceptual testing with one blind human subject was performed to determine the feasibility of the approach. The subject was able to detect randomized Braille letters rapidly generated by the actuator with 100% character-detection accuracy.
The proliferation of virtual reality visualization and interaction technologies has changed the way medical image data is analyzed and processed. This paper presents a multi-modal environment that combines a virtual reality application with a desktop application for collaborative surgical planning. Both visualization applications can function independently but can also be synced over a network connection for collaborative work. Any change to either application is immediately synced and propagated to the other. This is an efficient collaboration tool that allows multiple teams of doctors, with only an internet connection, to visualize and interact with the same patient data simultaneously. With this multi-modal environment framework, one team working in the VR environment and another team working remotely on a desktop machine can collaborate in examination and discussion for tasks such as diagnosis, surgical planning, teaching, and tele-mentoring.
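The synced-application idea can be reduced to a shared-state model in which each app applies the other's deltas. The sketch below keeps both copies identical in-process; all names and state fields are hypothetical, and a real system would ship the JSON deltas over a socket rather than call the peer directly.

```python
import json

class SyncedScene:
    """Minimal shared-scene model: each app holds a copy of the state and
    exchanges JSON deltas; applying a delta keeps the copies identical."""
    def __init__(self):
        self.state = {"camera": [0, 0, 5], "slice_depth": 0.5}
        self.peers = []

    def connect(self, other):
        self.peers.append(other)
        other.peers.append(self)

    def update(self, key, value):
        self.state[key] = value
        delta = json.dumps({key: value})     # what would go on the wire
        for p in self.peers:
            p.apply_delta(delta)             # stand-in for a network send

    def apply_delta(self, delta):
        self.state.update(json.loads(delta))

vr = SyncedScene()
desktop = SyncedScene()
vr.connect(desktop)
vr.update("slice_depth", 0.8)        # change made in the VR app
desktop.update("camera", [1, 2, 5])  # change made on the desktop
```

Shipping small deltas rather than whole scenes is what makes this practical over an ordinary internet connection for the remote-collaboration scenario the paper describes.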