Ebook: Medicine Meets Virtual Reality 11
The Proceedings of the 11th annual Medicine Meets Virtual Reality conference present current research on many data-centered applications for clinical care and medical education. The volume is a reference source for IT researchers working in a biomedical context, medical educators, physicians interested in emerging clinical tools, and medical device developers. A major focus is on surgical simulation and supporting technologies: haptics, tissue modeling, and virtual environments. Assessment and validation studies are represented, as well as papers on simulator design and didactics. Another core area is information-guided therapy, from diagnosis through surgery. Discussion of imaging techniques is complemented by papers on visualization, data fusion, and augmented reality. Advances in robotic surgery are also presented.
Health Horizon. The horizon serves as a metaphor for the distant goal of optimal health and implies our continuing progress toward that goal.
Researchers who presented at the first MMVR understood that the everyday application of their ideas would take place in the future. Eleven years later, MMVR participants can see their work improving health care in immediate and measurable ways. These researchers enhance medical education and procedural training with increasingly life-like simulation and haptics. They refine accuracy in clinical diagnosis and therapy with better imaging tools and more selective interpretation of imaging data. They promote economic efficiency and better sharing of medical resources by linking medicine to information technology advances. As the second decade of MMVR commences, we have in many ways reached the horizon visible in 1992.
Next Med. Beyond the initial reference to “what's next in medicine,” the term “NextMed” is intended to describe the vigorous exchange of ideas and experience between physicians in all specialties, scientists in widely varying disciplines, educators, commercial entities, and others. Although MMVR focuses on data-related applications, “NextMed” invites conference participants to speculate more broadly: What modeling and simulation techniques would benefit biotechnology and genomics? What nanotech discoveries would transform robotics or networked health care? What novel materials would revolutionize medical imaging and surgical simulation? What unforeseen interactions would shift us closer to optimal health? These ideas represent the new (and perhaps elusive) horizons that now appear before us.
When discussing optimal health, we should remember that it is not a universally defined and concrete ideal. Most people agree that freedom from disease, greater longevity, and minimized senescence are desirable. However, genetic therapy, bionics, and germ line engineering—the likely means to these ends—are controversial. In his book, Our Posthuman Future: Consequences of the Biotechnology Revolution, Francis Fukuyama cautions us against blindly wielding the new tools that science creates. He counsels us to prudently consider each step taken in the name of health “progress” and to guide ourselves with intelligent debate on the consequences of human intervention.
And this is where the true value of MMVR lies: as a forum for creating, assessing, and validating certain tools for better health, MMVR supports the necessary universal dialogue on what the future of health can and ought to be. In this role, MMVR will continue to prove useful as it moves forward into its second decade.
We thank the Organizing Committee for its vision, the many researchers who share their enthusiasm and hard work, the exhibitors who demonstrate their achievements, and all who come to learn with us.
Simulation of cauterization and irrigation is an important part of a virtual laparoscopic trainer. Typically, these procedures are carried out to stop intragastric bleeding caused by an accidental cut by the surgeon. In this paper, we present a method to simulate these special visual effects in an integrated fashion in real time. Cauterization and irrigation are simulated using a particle-based system, and a physics-based model simulates the accumulation and removal of fluids. The integrated special effects were implemented and tested in a prototype environment.
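As a rough illustration of how a particle-based effect of this kind can be structured, the sketch below (our own construction in Python, not the authors' implementation; all constants and names are hypothetical) emits blood particles, lets them settle into an accumulated pool, and removes pooled fluid during irrigation:

```python
import random

# Minimal particle sketch of fluid accumulation and removal
# (hypothetical parameters; not the paper's implementation).
GRAVITY = -9.8
DT = 1.0 / 30.0          # simulation step matching a 30 fps render loop
FLOOR_Y = 0.0            # height at which particles settle into the pool

class Particle:
    def __init__(self, x, y, vx, vy):
        self.x, self.y, self.vx, self.vy = x, y, vx, vy

def emit_blood(particles, n, source=(0.5, 1.0)):
    """Spawn n particles at the bleeding site with small random velocities."""
    for _ in range(n):
        particles.append(Particle(source[0], source[1],
                                  random.uniform(-0.1, 0.1),
                                  random.uniform(-0.2, 0.0)))

def step(particles, pool):
    """Integrate one time step; settled particles accumulate in the pool."""
    alive = []
    for p in particles:
        p.vy += GRAVITY * DT
        p.x += p.vx * DT
        p.y += p.vy * DT
        if p.y <= FLOOR_Y:
            pool.append(p)      # particle joins the accumulated fluid
        else:
            alive.append(p)
    particles[:] = alive

def irrigate(pool, fraction=0.2):
    """Remove a fraction of the accumulated fluid per irrigation step."""
    del pool[:max(1, int(len(pool) * fraction))]

particles, pool = [], []
for frame in range(90):         # roughly three seconds of bleeding
    emit_blood(particles, 5)
    step(particles, pool)
irrigate(pool)
print(len(particles), "airborne,", len(pool), "pooled")
```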
We describe a strategy for collecting experimental data and validating a bone-burr haptic contact model developed in a virtual surgical training system for middle ear surgery. The validation strategy is based on the analysis of data acquired during virtual and real burring sessions. Our approach involves intensive testing of the surgical simulator by expert surgeons and trainees as well as experimental data acquisition in a controlled environment.
Microscopes and neuronavigators are often used in neurosurgical procedures. Our mixed reality prototype shows neuronavigator imaging overlaid onto the surgical field of view, with the aim of replacing the traditional optical microscope with a digital one. The system is presented in this article. First trials have been performed in the laboratory with satisfactory results.
Minimally invasive surgery is a technique that permits interventions through very small incisions. This minimises the patient's trauma and permits a faster recovery than classical surgery. The disadvantage of this technique, though, is its complexity, which demands extensive training from the surgeon. In this paper, we present a general surgery simulator for training surgeons in minimally invasive surgery. The application allows the creation of environments and interaction modes very similar to those encountered in real surgical interventions. The virtual environments can be composed either of an actual patient's organs, on which an intervention is to be rehearsed in advance, or of synthetically generated organs with arbitrary pathologies. The intervention is carried out by means of haptic interfaces with force feedback, providing the surgeon with a sense of touch, a fundamental element of all types of surgery.
We are developing a simulation of needle insertion and radioactive seed implantation to facilitate surgeon training and planning for prostate cancer brachytherapy. Inserting a needle into soft tissues causes the tissues to displace and deform: ignoring these effects during seed implantation leads to imprecise seed placements. Surgeons should learn to compensate for these effects so that seeds are implanted close to their pre-planned locations. We describe a new 2-D dynamic FEM model based on a 7-phase insertion sequence, in which the mesh is updated to maintain element boundaries along the needle shaft. The locations of seed implants are predicted as the tissue deforms. The simulation, which achieves 24 frames per second using a 1250-element triangular mesh on a 750 MHz Pentium III PC, is available for surgeon testing by contacting ron@ieor.berkeley.edu.
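To make the FEM machinery concrete, here is a deliberately tiny, static, plane-stress linear FEM sketch in Python (the paper's model is dynamic, remeshed, and far finer; the mesh, material constants, and the prescribed needle displacement below are all invented). It assembles a two-element stiffness matrix, imposes a displacement where the needle pushes a node, solves for the free nodes, and predicts where a pre-planned seed location lands in the deformed tissue:

```python
import numpy as np

# Plane-stress linear FEM on a tiny two-triangle patch (illustrative
# values; not the paper's 1250-element dynamic model).
E, nu, t = 10e3, 0.45, 1.0           # soft-tissue-like modulus, thickness
D = (E / (1 - nu**2)) * np.array([[1, nu, 0],
                                  [nu, 1, 0],
                                  [0, 0, (1 - nu) / 2]])

nodes = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
tris = [(0, 1, 2), (0, 2, 3)]

def element_stiffness(xy):
    """Standard constant-strain-triangle stiffness matrix."""
    (x1, y1), (x2, y2), (x3, y3) = xy
    A = 0.5 * ((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    b = np.array([y2 - y3, y3 - y1, y1 - y2])
    c = np.array([x3 - x2, x1 - x3, x2 - x1])
    B = np.zeros((3, 6))
    B[0, 0::2], B[1, 1::2] = b, c
    B[2, 0::2], B[2, 1::2] = c, b
    B /= 2 * A
    return t * A * B.T @ D @ B

K = np.zeros((8, 8))
for tri in tris:
    dof = np.ravel([[2 * n, 2 * n + 1] for n in tri])
    K[np.ix_(dof, dof)] += element_stiffness(nodes[list(tri)])

# Boundary conditions: left edge (nodes 0, 3) fixed; node 1 pushed by
# the advancing needle tip (hypothetical prescribed displacement).
fixed = {0: 0.0, 1: 0.0, 6: 0.0, 7: 0.0}
prescribed = {2: 0.05, 3: 0.02}
u = np.zeros(8)
for d, v in {**fixed, **prescribed}.items():
    u[d] = v
free = [d for d in range(8) if d not in fixed and d not in prescribed]
known = [d for d in range(8) if d not in free]
u[free] = np.linalg.solve(K[np.ix_(free, free)],
                          -K[np.ix_(free, known)] @ u[known])

# Predict where a pre-planned seed location moves with the deformed
# mesh, via barycentric interpolation inside triangle 0.
seed = np.array([0.6, 0.3])
p0, p1, p2 = nodes[[0, 1, 2]]
l1, l2 = np.linalg.solve(np.column_stack([p1 - p0, p2 - p0]), seed - p0)
w = np.array([1 - l1 - l2, l1, l2])
disp = u.reshape(4, 2)
print("seed moves to", seed + w @ disp[[0, 1, 2]])
```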
Accurate biomechanical characteristics of tissues are essential for developing realistic virtual reality surgical simulators utilizing haptic feedback. Surgical simulation technology has progressed rapidly, but it lacks a comprehensive database of soft tissue mechanical properties to incorporate. Simulators are often designed purely based on what "feels right"; quantitative empirical data are lacking. A motorized endoscopic grasper was used to test abdominal porcine tissues in-vivo and in-situ with cyclic and static compressive loadings. An exponential constitutive equation was fit to the resulting stress-strain curves, and the coefficients were compared across conditions. Stress relaxation for liver and small bowel was also examined. Differences between successive squeezes and between in-vivo and in-situ conditions were found.
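The abstract does not state the exact constitutive equation, so the sketch below assumes the common one-term exponential form sigma = a(exp(b*eps) - 1) and fits it to a synthetic stress-strain curve with SciPy; coefficients fitted this way could then be compared across loading conditions, as the study does:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit sigma = a * (exp(b * eps) - 1) to a stress-strain curve. The
# functional form and the synthetic data are assumptions for
# illustration; the paper's exact equation may differ.
def exp_law(eps, a, b):
    return a * (np.exp(b * eps) - 1.0)

strain = np.linspace(0.0, 0.3, 30)
true_a, true_b = 2.0, 12.0                       # hypothetical tissue params
stress = exp_law(strain, true_a, true_b)
stress += np.random.default_rng(0).normal(0, 0.5, strain.size)  # noise

(a_fit, b_fit), cov = curve_fit(exp_law, strain, stress, p0=(1.0, 5.0))
print(f"a = {a_fit:.2f}, b = {b_fit:.2f}")       # compare across conditions
```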
The reconstruction of patients with large deformations in the facial area should consider functional as well as aesthetic aspects. In this paper, an integrated virtual reality system is presented that allows bone manipulation, osteotomy planning, and the calculation of implants for soft tissue and bone, combined with the calculation of the post-operative appearance. It is a valuable tool for a wide range of cranio-maxillofacial surgeries. Due to the generalized approach of the underlying algorithms, it is a basis for further clinical applications in other surgical fields.
The MMVR Conference is one of a handful of national forums where leading researchers regularly convene to discuss medical modeling and simulation. As such, the presentations made during the conference represent a reasonable overview of both the state-of-the-art in virtual reality in medicine and the basic and applied research trends in the field. This article describes those trends and some of the implications based on a meta-analysis of almost three hundred articles published as a result of the MMVR Conferences in 2000, 2001, and 2002.
A new approach to evaluating training in virtual reality based simulators is proposed. The approach uses Gaussian Mixture Models (GMM) to model simulation sessions and classify them into pre-defined training classes.
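A minimal sketch of what such a classifier could look like, assuming one mixture model per pre-defined class and synthetic two-dimensional session features (completion time and error rate are our invented stand-ins, not the authors' feature set):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# GMM-based skill classification sketch: fit one mixture per training
# class on feature vectors extracted from simulator sessions, then
# label a new session by maximum likelihood. Data are synthetic.
rng = np.random.default_rng(0)
novice = rng.normal([5.0, 2.0], [1.5, 0.8], size=(100, 2))  # time, error
expert = rng.normal([2.0, 0.5], [0.5, 0.3], size=(100, 2))

models = {
    "novice": GaussianMixture(n_components=2, random_state=0).fit(novice),
    "expert": GaussianMixture(n_components=2, random_state=0).fit(expert),
}

session = np.array([[2.2, 0.6]])                 # new trainee's features
label = max(models, key=lambda k: models[k].score(session))
print("classified as:", label)
```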
Many medical procedures require fine motor skills that are developed over years of practice and through performing hundreds to thousands of procedures. However, medical training that relies on gaining this expertise by performing procedures on patients exposes patients to unnecessary risk. In this project, expert medical skill is quantified so that advanced medical simulators can be created to provide a realistic training environment. This approach is applied to airway intubation with a rigid laryngoscope, a procedure that is performed prior to general anesthesia and during emergency situations. A laryngoscope has been instrumented with a three-dimensional force/torque sensor, and magnetic position sensors have been placed on the laryngoscope and the patient. Measurements are made in the operating room of both experts and novices as they perform laryngoscopy on consenting patients undergoing general anesthesia. The skill of the laryngoscopist is represented by the motion and force trajectories applied to the laryngoscope during the procedure. Preliminary results show that novices often err in the placement of the tip of the laryngoscope blade. However, when two experts perform laryngoscopy on the same patient, both perform key elements of the task consistently. The measured consistency among experts indicates that it will be possible to apply algorithms developed for human skill acquisition and thereby define regions of expert motion relative to patient anatomy. This is the first step in developing advanced training simulators that will simulate the procedure accurately, provide guidance to the trainee, and support assessment of medical skill.
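One plausible way to turn expert consistency into "regions of expert motion", sketched below with synthetic force profiles (our illustration, not the project's algorithm), is to resample expert trajectories to a common length, build a mean-plus-tolerance envelope, and flag where a trainee's trajectory leaves it:

```python
import numpy as np

# Expert-envelope sketch: all trajectories, parameters, and the
# tolerance band are invented for illustration.
def resample(traj, n=100):
    """Linearly resample a 1-D trajectory to n samples."""
    return np.interp(np.linspace(0, 1, n),
                     np.linspace(0, 1, len(traj)), traj)

rng = np.random.default_rng(1)
experts = [resample(np.sin(np.linspace(0, np.pi, m)) * 10
                    + rng.normal(0, 0.3, m))
           for m in (90, 110, 105)]             # lift-force profiles (N)
E = np.vstack(experts)
mean, std = E.mean(axis=0), E.std(axis=0)

trainee = resample(np.sin(np.linspace(0, np.pi, 95)) * 14)  # over-forceful
outside = np.abs(trainee - mean) > 3 * std + 1.0            # tolerance band
print(f"{outside.mean():.0%} of the trajectory leaves the expert region")
```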
This paper gives a work-in-progress report on our research project BurnCase, a virtual environment for modelling human burn injuries. The goal of the project is to simplify and improve the diagnosis and medical treatment of burns. Due to the lack of electronic and computational support for current diagnosis methods, enormous variation exists in the estimated size of burned skin regions. Although simplifications like the Rule-of-Nines method ([Weidringer, 2002]), Lund and Browder ([LundBrowder, 1944]), and others try to compensate for these errors, the fact remains that physicians overestimate the burned BSA (Body Surface Area) by 20% to 50%, depending on their experience and the subjectivity of the estimation process. Various supporting mechanisms have therefore been developed to assist the transfer of burn regions, so that after all burned regions have been transferred onto the virtual human body, calculations can be applied to evaluate standard indices like the ABSI (Abbreviated Burn Severity Index) and Baux ([Baux, 1989]), as well as ICD-10 (International Classification of Diseases) diagnosis encoding. The virtual body simulation is based on state-of-the-art 3D computer graphics (OpenGL).
Thus a simulation system providing a graphical user interface allows surgeons to transfer a patient's burn injury regions onto an appropriate three-dimensional model. The BurnCase system thereby improves surface determination by calculating region surfaces to a precision of one cm², as sketched after the next paragraph. This reduces the average variation to less than 5%, limited by the precision of the surface transfer onto the virtual model.
The system already allows the transfer of burned regions using standard input devices. For this purpose, different reference models of human bodies have been created in order to obtain appropriate results based on each patient's measured physical data. Moreover, an underlying database stores all entered case studies, making it possible to compare burn cases and to animate the healing process of single wounds or whole bodies. When used as a centralized burn accident registration service, the system will accumulate a large knowledge base of burn diagnoses and subsequent medical treatment. This knowledge base will enable medical advice and diagnostic support for all kinds of burn accidents, and it will consequently improve and support the primary diagnosis of burns, yielding a substantial reduction in the time and cost of medical burn treatment.
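The surface determination described above ultimately reduces to summing the areas of mesh triangles tagged as burned; a toy Python sketch (the mesh and tags are placeholders, not BurnCase's reference models):

```python
import numpy as np

# Burn-area measurement on a triangle mesh: sum the areas of the
# triangles marked as burned. Coordinates are in cm.
vertices = np.array([[0, 0, 0], [10, 0, 0],
                     [10, 10, 0], [0, 10, 0]], float)
triangles = np.array([[0, 1, 2], [0, 2, 3]])
burned = np.array([True, False])                 # per-triangle burn tag

def triangle_areas(v, t):
    a, b, c = v[t[:, 0]], v[t[:, 1]], v[t[:, 2]]
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)

areas = triangle_areas(vertices, triangles)
total = areas.sum()
burned_area = areas[burned].sum()
print(f"burned: {burned_area:.1f} cm² "
      f"= {100 * burned_area / total:.1f}% of this patch")
```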
Novel algorithms applied to polarization imaging data produce views not visible in the images delivered by the instruments themselves. These computed images are rapidly produced and reveal clinically relevant features. A virtual reality implementation that allows a selection of such views during diagnostic procedures or treatment might improve patient management.
Current methods to produce 3-dimensional tooth root models involve conversion from radiographic means (computed tomography) or creation using computer-assisted design (CAD) software. The former lacks detail, while the latter is manually fabricated and can bear little resemblance to the original. Thin-plate splines have been used in morphometrics to define changes of shape between subjects of the same species [1]. Herein, we use thin-plate splines to deform a 3D geometric prior model of a tooth to match 2D patient radiographs, producing a "best-fit" patient-specific 3D geometric polygonal mesh of the tooth.
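As a compact illustration of the warping step, the sketch below uses SciPy's thin-plate-spline radial basis interpolator on invented 2-D landmark pairs; the paper's prior-model-to-radiograph matching is of course more involved:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Thin-plate-spline warp sketch: landmark pairs (e.g. points picked on
# the prior tooth model and on the patient radiograph) define a smooth
# 2-D deformation applied to every model point. Landmarks are invented.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], float)
dst = np.array([[0, 0], [1.1, 0], [1.2, 1.0], [0, 0.9], [0.55, 0.45]])

warp = RBFInterpolator(src, dst, kernel="thin_plate_spline")

model_pts = np.array([[0.25, 0.25], [0.75, 0.75]])  # prior-model points
print(warp(model_pts))                              # warped positions
```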
In this paper, we present new developments in the area of 3D human jaw modeling and animation. CT (Computed Tomography) scans have traditionally been used to evaluate patients with dental implants and to assess tumors, cysts, fractures, and surgical procedures [1]. More recently, these data have been utilized to generate models.
Researchers have reported semi-automatic techniques to segment and model the human jaw from CT images [6] and manual segmentation of the jaw from MRI images [9]. Recently, opto-electronic [2; 15] and ultrasonic-based systems (JMA from Zebris) have been developed to record mandibular position and movement. In this research project we introduce: (1) automatic patient-specific three-dimensional jaw modeling from CT data and (2) three-dimensional jaw motion simulation using jaw tracking data from the JMA system (Zebris).
In view of the increasing use of breast MRI to supplement X-ray mammography, the purpose of this study was the development of a method for fast and efficient analysis of dynamic MR image series of the female breast. The image data sets were acquired with a saturation-recovery turbo-FLASH sequence, facilitating the detection of the kinetics of the contrast agent concentration in the whole breast. In addition, a morphological 3D-FLASH data set was acquired. The dynamic image data sets were analyzed by tracer kinetic modeling in order to describe the physiological processes underlying the contrast enhancement in mathematical terms and thus to enable the estimation of functional, tissue-specific parameters reflecting the status of microcirculation. To display morphological and functional tissue information simultaneously, a multidimensional real-time visualization system (using 3D texture mapping) was developed, which enables a practical and intuitive human-computer interface in virtual reality. The spatially differentiated representation of the computed functional tissue parameters superimposed on the anatomical information offers several possibilities: improved discernibility of contrast enhancement; inspection of the data volume in 3D space using rotation and transparency variation; and localization of lesions in space, allowing fast and more natural recognition of topological coherencies. A feasibility study demonstrated that multidimensional visualization of contrast enhancement in virtual reality is practicable. In particular, detection and localization of multiple breast lesions may be an important application.
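For readers unfamiliar with tracer kinetic modeling, the sketch below fits a standard Tofts-type model to a synthetic enhancement curve; the arterial input function and all parameter values are assumptions, and the study's sequence-specific model may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

# Tofts-type tracer-kinetic fit:
#   C_t(t) = Ktrans * integral( Cp(tau) * exp(-kep * (t - tau)) dtau )
# fitted to one voxel's enhancement curve. AIF and numbers are toys.
t = np.linspace(0, 300, 60)                      # seconds
aif = 5.0 * (t / 60.0) * np.exp(-t / 60.0)       # assumed input function

def tofts(t, ktrans, kep):
    dt = t[1] - t[0]
    conv = np.convolve(aif, np.exp(-kep * t))[: t.size] * dt
    return ktrans * conv

rng = np.random.default_rng(2)
voxel = tofts(t, 0.25, 0.01) + rng.normal(0, 0.02, t.size)  # synthetic
(ktrans, kep), _ = curve_fit(tofts, t, voxel, p0=(0.1, 0.005))
print(f"Ktrans = {ktrans:.3f}, kep = {kep:.4f}")  # microcirculation params
```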
In advancing our capabilities in the realm of virtual reality, the development of haptic technology has been a rate-limiting factor in producing tactile sensations directly onto the human hands. The Living Anatomy Program seeks to obviate the need for such technology by designing physical objects based on anatomic components that feel realistic to the touch. Furthermore, synchronizing motion between physical and related virtual objects infinitely expands visual design options and provides a profound level of immersion into content.
In this paper we present 2D and 3D visualization techniques that are part of our ongoing effort to improve the accuracy of neurosurgical procedures such as pallidotomy and Deep Brain Stimulation (DBS), which are performed to alleviate the symptoms of Parkinson's disease. The precise targeting and mapping of structures in the basal ganglia, particularly the internal Globus Pallidus (GPi), using a combination of stereotactic frame-registered Magnetic Resonance Imaging (MRI) and intraoperative microelectrode recording (IMR), is key to the success of these procedures. We have designed a set of software components, including a knowledge-based system (KBS), a digital signal processing module, and a 2D/3D imaging system with an automated mapping paradigm, which will work in combination to improve upon the standards currently in use. The imaging system will be the focus of this publication.
New methods and software tools for automatic extraction of the ventricle system from magnetic resonance imagery (MRI) data, ventricle part classification, and realistic texturing are proposed to support Virtual Endoscopy (VE). Volume- and surface-based medical atlases are intensively used as templates in the methods. The processed ventricle-related surfaces are then utilized in a haptic-based system, which provides a surgeon with several basic functions simulating “virtual treatment” of hydrocephalus.
A distributed simulation environment for training and evaluation of medical trauma teams is presented. Connected through the Internet, the geographically remote team members can communicate and interact using the clinically realistic environment provided by the MATADOR simulator. The scenario demonstrates an injured person's arrival at the hospital, and the diagnostic and therapeutic challenges that must be met in order to stabilize the virtual patient. Experiences from a field trial indicate that the simulator is useful both for professionals and medical students.
Virtual reality based surgical simulators offer an elegant approach to enhancing traditional training in endoscopic surgery. In this context a realistic soft tissue model is of central importance. The most accurate procedures for modeling elastic deformations of tissue use the Finite Element Method (FEM) to solve the governing mechanical equations. An alternative is mass-spring models, which are a crude approximation of the real physical behavior. The main reason given for using the mass-spring approach is the computational complexity of FEM. In this study we show that an optimized linear FEM model requires computation time similar to the mass-spring approach, while giving better results.
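One common way to make linear FEM interactive, which may or may not match the authors' optimization, is to exploit the fact that the stiffness matrix of a linear model is constant: factorize it once offline, and each frame then costs only a back-substitution. A toy sketch on a 1-D chain standing in for a tissue mesh:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import factorized

# Precomputed linear FEM sketch: K is constant for a linear model, so
# the expensive factorization happens once, offline. The 1-D chain and
# all values are stand-ins for a real tissue mesh.
n = 1000                                          # degrees of freedom
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
K = csc_matrix(np.diag(main) + np.diag(off, 1) + np.diag(off, -1))

solve = factorized(K)        # one-time sparse LU factorization (offline)

f = np.zeros(n)
f[n // 2] = 1.0              # per-frame contact force from the instrument
u = solve(f)                 # per-frame cost: back-substitution only
print("max displacement:", u.max())
```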
We have developed a data fusion system for the robotic surgery system "da Vinci". The data fusion system is composed of an optical 3D location sensor and a digital video processing system. The 3D location sensor is attached to the da Vinci laparoscope and measures its location and orientation. The digital video processing system captures the laparoscope's view and superimposes 3D models of the patient's organs onto the captured view in real time. We applied the system to "da Vinci" and evaluated it during a cholecystectomy. In this experiment, the surgeon was able to observe the internal structures of the organs in a stereo view.
In laparoscopic surgery, surgeons encounter particular difficulties during the course of the operation. Due to the restricted view from the endoscope and the limited degrees of freedom of the forceps, surgeons find their movements impeded. A support system providing surgeons with improved laparoscopic vision is therefore needed: if real-time visualization of the abdominal anatomy is possible, it will be useful for accurate procedures and quantitative evaluation. In this paper, we present a laser-scan endoscope system that acquires and visualizes the shape and texture of the area of interest almost instantaneously. Results of in-vivo experiments on the liver of a pig verify the effectiveness of the proposed system.
Pressure has grown over the past decade to provide more rigorous and standardized testing of surgical trainees at both higher and lower levels [1]. VR technologies appear to offer a solution, but the cost of equipment and the realism of the interface present major research challenges. The central requirement of simulation remains assessment, however, and the present study examines this issue within the context of surgical suturing skills. By tracking users during suturing tasks, we show that errors in technique can be analysed by examining standard pattern spaces.
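One plausible reading of a pattern-space analysis, with synthetic data and not necessarily the authors' exact method, is to project resampled motion traces into a low-dimensional PCA space learned from reference trials and score a new trial by its distance from the reference cloud:

```python
import numpy as np
from sklearn.decomposition import PCA

# Pattern-space sketch: reference suturing trials define a PCA space;
# a trainee's trial is scored by its deviation from the reference
# cloud. All trajectories here are synthetic placeholders.
rng = np.random.default_rng(3)
reference = rng.normal(0, 1, size=(50, 300))     # 50 trials x 300 samples
pca = PCA(n_components=3).fit(reference)

ref_proj = pca.transform(reference)
trial = rng.normal(0.5, 1.2, size=(1, 300))      # a trainee's trial
trial_proj = pca.transform(trial)

# Distance from the reference centroid, in units of per-axis spread.
z = (trial_proj - ref_proj.mean(axis=0)) / ref_proj.std(axis=0)
print("deviation score:", float(np.linalg.norm(z)))
```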