Ebook: Medicine Meets Virtual Reality
MMVR offers solutions for problems in clinical care through the phenomenally expanding potential of computer technology. Computer-based tools promise to improve healthcare while reducing cost – a vital requirement in today’s economic environment. This seventh annual MMVR focuses on the healthcare needs of women. Women everywhere demand more attention to breast cancer, cervical cancer, and ageing-related conditions. Electronic tools provide the means to revolutionise diagnosis, treatment and education. The book demonstrates how new tools can improve the care of female patients. As minimally invasive procedures are mainstreamed, advanced imaging and robotics tools become indispensable. The internet and other networks establish new venues for communication and research. Medical education, as well as clinical care, is enhanced by systems allowing instruction and professional interaction in ways never before possible and with efficiency never before achieved. Telemedicine networks now permit providers to meet patients’ needs where previously impossible. MMVR strengthens the link between healthcare providers and their patients. The volume contains selected papers authored by presenters at the conference. Areas of focus include Computer-Assisted Surgery, Data Fusion & Informatics, Diagnostic Tools, Education & Training, Mental Health, Modelling, Net Architecture, Robotics, Simulation, Telemedicine, Telepresence and Visualisation.
Until the twentieth century, medicine for women was plagued by ignorance and distrust. Midwives, some very capable and others marginally so, attended to childbirth—the most uniquely female biological event—with haphazard success. Physicians were perceived, perhaps justifiably, as insensitive and to be consulted only in dire circumstances, and medical care for the typical woman was considered a dubious economic investment. Women worked hard, bore and raised children, and received less than men economically, socially, and medically.
In the past hundred years, the role of women in industrial and post-industrial nations has changed amazingly. This didn’t happen by accident. Women fought for the right to vote, to receive more education, and to be treated equally in regard to property. It is no longer unusual for a woman to acquire her own house, university diploma, or seat in government.
Despite a great deal of progress, though, the battle isn’t over. Women now hold much private wealth in this country but still earn, on average, less than men. Fundamentalism, as much a political tool as a creed, threatens the freedom and vitality of women in many parts of the world. Reproductive responsibilities and freedoms still provoke vehement, even violent, controversy in this country.
In medicine, the fact that women’s needs differ from men’s is still sometimes forgotten. Drug development is skewed in favor of males, while reproductive education and methods of control are frequently obscured or denied. Physicians, even in obstetrics and gynecology, are still mostly men. And not surprisingly, to reduce their expenditures, insurers would like to limit women’s access to gynecologists.
This is where Medicine Meets Virtual Reality fits in. Each year the conference focuses on a special concern, to effect positive change and further strengthen the bond between healthcare providers and their patients. In MMVR's past six years, professionals have shared their ideas and accomplishments and, as a result, have refined their ability to generate and manipulate images of bodies, their ailments, and their treatments. Education, surgery, and clinical care have all benefited from the tools MMVR participants have created.
The special focus of this year's meeting is women’s health, for clearly virtual reality technology can be used to improve care for women as well as men. Although specialization in women's health is largely an urban resource, telemedicine can transport expertise to rural areas. Data networks can educate providers and patients without obliging them to visit distant libraries. Imaging techniques can provide a new understanding of the female body, and breast cancer can be diagnosed earlier and more accurately. Computers serve as assistants in delivery rooms, not to replace human hands but to make them more helpful and effective.
The medical profession has a unique ability to penetrate political and social barriers and fulfill its obligation to improve health for all. Physicians can speak—if they care to and dare to—and the public will listen. Women, as individuals and as mothers of the next generation, deserve every effort physicians and the medical establishment can put forth for them. You, participants in Medicine Meets Virtual Reality, come from around the world. Your mission should be to use your knowledge and talents to enhance the well-being of women, and people, everywhere.
Susan Wheeler Westwood
Aligned Management Associates, Inc
In most tumor cases in neurosurgery, we need a rapid frozen-section diagnosis of the tumor during the course of surgery. Once we have received the result establishing the exact nature of the tumor (benign or malignant), we devise the operative strategy for the remaining course of surgery. The tissue sample is sent by taxi to the pathological institute in Heidelberg, and it takes approximately 45-60 min until we receive the result by telephone.
Many centers are far away from a pathological institute, and the patient may need to be re-operated on there. In stereotaxy this is even more important: we need approximately 15-25 specimens during a stereotaxy to be sure we have a sample containing tumor tissue. With direct contact to a pathological institute we could reduce this number dramatically. We prepared a methylene blue slide and used a histological microscope with a video camera attached to it and a digitizer interface for digitized pictures. Using a modem, we send the pictures over a telephone line to the pathological institute, where they are assessed. We currently have direct contact with a pathologist, who can tell us from which part of the tumor we should send further histological pictures.
The procedure takes about twenty minutes, and with it the distance to the pathological institute becomes irrelevant. The system costs approximately $10,000 – mostly for the microscope and the camera (we already had the microscope) – and can be assembled by anybody with minimal requirements.
The detection and correction of malocclusions and other dental abnormalities is a significant area of work in orthodontic diagnosis. To assess the quality of occlusion between the teeth, the orthodontist has to estimate distances between specific points located on the teeth of both arches. Distance measuring is based on the orthodontist's observation of a plaster model of the mouth. Gathering the information required to make the diagnosis is a time-consuming and costly operation. Moreover, obtaining and manipulating plaster casts constitutes a considerable problem in clinics, owing both to the large storage space needed and to the high cost of manufacturing them. To address this problem, we present a new system for three-dimensional orthodontic treatment planning and tooth movement. We describe a computer vision technique for the acquisition and processing of three-dimensional images of the profile of hydrocolloid dental imprints, taken by means of a custom-developed 3D laser scanner. Profile measurement is based on the triangulation method, which detects deformation of the projection of a laser line on the dental imprints. The system is computer-controlled and designed to achieve depth and lateral resolutions of 0.1 mm and 0.2 mm, respectively, within a depth range of 40 mm. The diagnosis software system (named MAGALLANES) and the 3D laser scanner (named 3DENT) are both commercially available and have been designed to replace manual measurement methods, which use costly plaster models, with computer-based measurement methods and tooth-movement simulation using cheap hydrocolloid dental wafers. This procedure will reduce the cost and acquisition time of orthodontic data and facilitate the conduct of epidemiological studies.
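The laser-triangulation principle such a scanner relies on can be sketched in a few lines. This is a minimal illustration under an assumed geometry – the camera looks along the z-axis with focal length f, and the laser emitter sits at a baseline offset b, projecting at angle theta toward the axis; none of these symbols or values are taken from the paper:

```python
import math

def project_laser_spot(z, f, b, theta):
    """Image x-coordinate of the laser spot reflected from depth z.

    Camera at the origin looks along +z with focal length f; the laser
    emitter at (b, 0, 0) projects at angle theta toward the axis, so the
    illuminated point at depth z lies at x = b - z*tan(theta).
    """
    return f * (b - z * math.tan(theta)) / z

def triangulate_depth(x_img, f, b, theta):
    """Recover depth z from the observed image coordinate x_img."""
    return f * b / (x_img + f * math.tan(theta))
```

Deformation of the projected laser line in the image thus maps directly back to surface depth, which is how the imprint profile is measured point by point.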
A Disease Management System (DMS) refers to an integrated healthcare delivery system that provides patient-centered care throughout the course of the disease, independent of delivery site. A fundamental barrier for the development, implementation and monitoring of a DMS is lack of an appreciation by care providers of the complexity of these systems, and what is required for their maintenance. Foremost in the development of these systems is the presence of information systems that attempt to deal with the temporal, spatial and information needs of the DMS. Purpose: The Zachman Framework for Information Systems Architecture is used in many industries in the development of information systems. Its choice is based on the recognition of a need for a methodology in the conceptualization and modeling of complex information systems. This paper provides a brief overview of the Zachman Framework and its potential application in DMS development. In particular, it focuses on the need for “perspective” clarification as the first step in the development of such complex systems. Results: This paper reviews DMS and their potential information needs. The clarification of “perspectives” provides a method toward team building and unification of purpose by decreasing conflict and recognizing the unique contributions that each perspective holder makes.
In this paper, a new deformable model based on the Boundary Element Method (BEM) is presented. This model is characterised by its robustness and high deformation-calculation speed. Experiments show that the model can calculate deformations of elastic objects composed of 150 nodes at a 15 Hz refresh rate (the minimum refresh rate acceptable in real-time interactive systems).
A considerable amount of effort has been aimed towards developing real-time deformable objects for surgical simulation, but very little work has been aimed towards including physiology within the soft tissue models. A simulator that links the structural and functional aspects of the human body would allow the user to develop a better understanding of the intrinsic link between anatomy and physiology.
This position paper discusses the challenges facing the creation and development of an integrated physiological and anatomical soft tissue model for use in surgical simulators. It explores the artificial dichotomy between anatomy and physiology and the issues it raises, by considering a suturing simulator capable of modelling ischaemia.
A system for automatic modeling of anatomical joint motion for use in the Virtual Reality Dynamic Anatomic (VRDA) tool is described. The modeling method described in this article relies on collision detection. An original incremental algorithm uses this information to find stable positions and orientations of the tibia on the femur for each angle considered between these two components over the range of motion. The stable states then become the basis for a look-up table employed in the animation of the motion of the joint. The strength of the method lies in its robustness to animate any “normal” anatomical joint, given a set of kinematic constraints for the joint type as well as an accurate 3D geometric model of the joint. The demonstration could be patient-specific (based on a person’s real anatomical data from an imaging procedure such as CT scanning) or scaled from a generic joint based on external patient measurements. The modeling method has been implemented on a generic knee model for use in the VRDA tool.
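The precompute-then-interpolate pattern behind such a look-up table can be sketched as follows. This is a toy illustration, not the paper's implementation: the fake one-dimensional "pose" and the solver callable are hypothetical stand-ins for the full 3-D positions and orientations the real system stores per flexion angle:

```python
def build_pose_table(stable_pose, angles):
    """Precompute a stable pose for each sampled joint angle.

    stable_pose is the (expensive) collision-driven solver; it is passed
    in as a callable so the table-building step can be shown in isolation.
    """
    return {a: stable_pose(a) for a in angles}

def lookup_pose(table, angle):
    """Animate by linear interpolation between the two nearest table entries."""
    keys = sorted(table)
    if angle <= keys[0]:
        return table[keys[0]]
    if angle >= keys[-1]:
        return table[keys[-1]]
    for lo, hi in zip(keys, keys[1:]):
        if lo <= angle <= hi:
            t = (angle - lo) / (hi - lo)
            return tuple((1 - t) * p + t * q
                         for p, q in zip(table[lo], table[hi]))
```

At animation time the costly collision solver never runs; only the table lookup and interpolation execute per frame, which is what makes real-time playback feasible.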
Realistic simulations of tissue cutting and bleeding are important components of a surgical simulator and are addressed in this study. Surgeons use a number of instruments to perform incision and dissection of tissues during minimally invasive surgery. For example, a coagulating hook is used to tear and spread the tissue that surrounds organs and scissors are used to dissect the cystic duct during laparoscopic cholecystectomy. During the execution of these procedures, bleeding may occur and blood flows over the tissue surfaces. We have developed computationally fast algorithms to display (1) tissue cutting and (2) bleeding in virtual environments with applications to laparoscopic surgery. Cutting through soft tissue generates an infinitesimally thin slit until the sides of the surface are separated from each other. Simulation of an incision through tissue surface is modeled in three steps: first, the collisions between the instrument and the tissue surface are detected as the simulated cutting tool passes through. Then, the vertices along the cutting path are duplicated. Finally, a simple elastic tissue model is used to separate the vertices from each other to reveal the cut. Accurate simulation of bleeding is a challenging problem because of the complexities of the circulatory system and the physics of viscous fluid flow. There are several fluid flow models described in the literature, but most of them are computationally slow and do not specifically address the problem of blood flowing over soft tissues. We have reviewed the existing models, and have adapted them to our specific task. The key characteristics of our blood flow model are a visually realistic display and real-time computational performance. To display bleeding in virtual environments, we developed a surface flow algorithm. This method is based on a simplified form of the Navier-Stokes equations governing viscous fluid flow.
The simplification of these partial differential equations results in a wave equation that can be solved efficiently, in real-time, with finite difference techniques. The solution describes the flow of blood over the polyhedral surfaces representing the anatomical structures and is displayed as a continuous polyhedral surface drawn over the anatomy.
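An explicit finite-difference update of this kind can be sketched in one dimension. This is a minimal, hedged illustration of the general scheme – a damped wave equation stepped over a height field – and the grid size, wave speed c, time step, and damping factor are illustrative choices, not values from the paper:

```python
def step_wave(h, h_prev, c=0.5, dt=0.1, dx=1.0, damping=0.995):
    """One explicit finite-difference step of a damped 1-D wave equation.

    h, h_prev: current and previous height fields (lists of floats).
    Returns the next height field; boundary cells are held fixed.
    """
    r = (c * dt / dx) ** 2
    h_next = list(h)
    for i in range(1, len(h) - 1):
        laplacian = h[i - 1] - 2.0 * h[i] + h[i + 1]
        h_next[i] = damping * (2.0 * h[i] - h_prev[i] + r * laplacian)
    return h_next
```

Seeding the field with a spike at the bleeding site and stepping it each frame spreads the fluid outward over the surface; the explicit scheme is stable as long as c*dt/dx stays at or below 1 (the CFL condition), which is what makes a real-time solution feasible.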
Stereotactic techniques for cannulation of cystic structures within the brain are well known. Superimposed structures (vessels, ventricles, etc.) may make this problematic, as does the need to approach the cystic structure perpendicular to its tangent plane (rather than at a “glancing” angle), as with a craniopharyngioma cyst.
To facilitate a three-dimensional visualization of the trajectory, we have employed digital holography. Transparent holographic images of cystic structures, ventricles, and sulci are rendered from T2-weighted MR data. Holographic images of vascular structures are rendered from CT or MR angiographic data. Vascular holograms are superimposed over the brain holograms, demonstrating the spatial relationships of these structures with regard to each other. Holographic images of the skull are rendered from CT slices.
A Laitinen stereotactic frame (Sandstrom) is placed on the patient prior to obtaining the CT. The skull, pre-existing shunt catheters, and the stereotactic frame are all readily visible. The brain and vascular holograms are superimposed on these. The resulting image clearly demonstrates cystic structures, ventricles, vessels, pre-existing catheters, all within the skull and stereotactic frame.
Using this holographic image as a “phantom”, the actual Laitinen stereotactic frame is placed within its holographic image. The optimal trajectory is then chosen, and the articulated arm of the stereotactic device is adjusted accordingly. Subsequently, the frame is used to effect stereotactic placement of the cannula, in the usual manner.
The major advantages of this technique are twofold. The first advantage lies with the fact that the surgeon can readily visualize the entire trajectory of the needle, and easily appreciate all structures which may be encountered by the needle on its passage from the skull to the target. Presumably, the surgeon’s knowledge of anatomy would make such obstacles apparent, but in complex cases the “safe” corridor may be rather small, and its limits may not be intuitively obvious. This is all the more the case when obstacles along the pathway are pathologically distorted, or when they are not of tissue origin (shunt catheters, etc.).
Employing this technique, we have successfully cannulated cystic structures in six patients, three of whom presented with complex trajectory problems.
Videoendoscopic (VES) instruments have poor force transmission properties and often require surgeons to employ awkward hand and arm positions. In order to compare the physical workload of laparoscopic surgery to open surgery, we collected long-duration EMG records from the thumb (thenar compartment) of six surgeons performing suturing and knot tying in a training box using both open and VES techniques. EMG signals were acquired using a LabVIEW® Virtual Instrument and analyzed using a Modified Exposure Variation Analysis (MEVA) algorithm. Standard EMG indices and the MEVA analysis demonstrated significantly greater amplitude and duration of EMG signals using the VES technique compared to the open technique. Our results suggest that the use of VES techniques requires a greater intensity of physical effort than open surgery techniques.
Given the geometric complexity of anatomical structures, realistic real-time deformation of graphical reconstructions is prohibitively computationally intensive. Instead, real-time deformation of virtual anatomy is roughly approximated through simpler methodologies. Since the graphical interpolations and simple spring models commonly used in these simulations are not based on the biomechanical properties of tissue structures, these “quick and dirty” methods typically do not accurately represent the complex deformations and force-feedback interactions that can take place during surgery. Finite element (FE) analysis is widely regarded as the most appropriate alternative to these methods. Extensive research has been directed toward applying the method to modeling a wide range of biological structures, and a few simple FE models have been incorporated into surgical simulations. However, because of the highly computational nature of the FE method, its direct application to real-time force-feedback and visualization of tissue deformation has not been practical for most simulations. This limitation is primarily due to the overabundance of information provided by the standard FE approaches. If the mathematics is optimized to yield only the information essential for the surgical task, computation time can be drastically reduced. Parallel computation and preprocessing of the model before the simulation begins can also reduce the size of the problem and greatly increase computation speed. Such methodologies are being developed in a combined effort between the Human Interface Technology Laboratory (HIT Lab) and the Mechanical Engineering Department of the University of Washington. We have created computer demonstrations which support real-time interaction with simple finite element soft tissue models. In collaboration with the Division of Dermatology, a real-time skin surgery simulator is being developed using these fast FE methods.
Introduction: The applications of Minimally Invasive Surgery (MIS) and Laparoscopy are rapidly expanding. Despite this expansion, our understanding of the importance of haptic feedback in laparoscopic surgery remains in its infancy. While many surgeons feel that the use of minimally invasive techniques eliminates force feedback and tactile sensation, the importance of haptics in MIS has not been fully evaluated. Moreover, there is considerable interest in the development of haptic simulators for MIS even though the importance of force feedback remains poorly understood. This study was designed to determine the ability of novice surgeons to interpret haptic feedback with respect to texture, shape and consistency of an object.
Method: Subjects were presented objects in a random order and participants were blinded as to their identity. Inspection by direct palpation, palpation with conventional instruments (CI), and palpation with laparoscopic instruments (LI) was performed on all objects. Statistical analysis of the data was performed using a Fisher exact probability test.
Results: Direct palpation provided the greatest degree of haptic feedback and was associated with the highest accuracy for texture discrimination, shape discrimination, and consistency discrimination. A significant decrease in the ability to identify shapes was noted with both CI and LI. A significant decrease in the ability to differentiate consistency was noted for LI only. When comparing palpation with conventional instruments to palpation with laparoscopic instruments, there was no significant difference in shape or texture discrimination. There was, however, a significant decrease in consistency discrimination.
Conclusion: These data indicate that laparoscopic instruments do in fact provide the surgeon with haptic feedback. While the instruments change the information available to the surgeon, interpretation of the texture, shape and consistency of objects can be performed. Our ongoing work is directed at further defining force interactions. Through the use of force feedback impulse devices in VR simulators, one should be able to create a more realistic theatre in which the novice surgeon can learn operative skills that will readily translate into the operating room.
Virtual environments for surgical training, planning and rehearsal have the potential to significantly enhance patient treatment and diagnosis. Haptic feedback devices provide forces to the physician through a manipulator, simulating palpation, scalpel cuts, or retraction of tissue. While haptics have been studied in other fields, medical applications of haptics remain in their infancy. We propose a method of haptically rendering isosurfaces (representing hard structures) directly from anatomical datasets rather than through traditional intermediate graphical representations such as polygons. Our algorithm determines an implicit surface representation of a volumetric isosurface on the fly, and renders the structure using standard haptic algorithms to calculate the forces felt by the user. This approach has the advantage of providing easy access to the rich volume dataset containing the actual anatomy. By relating Hounsfield units to density, haptic rendering has the ability to provide different resistances based on the tissue type being rendered. We developed and tested our algorithm using a quadric implicit surface, a common primitive in computer graphics, and have applied it to a variety of anatomical image datasets. Our paper describes the algorithm in sufficient detail to facilitate reproduction by others.
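The penalty-based force computation such haptic rendering typically uses can be sketched for a sphere, the simplest quadric. This is a hedged illustration only: the stiffness k and sphere parameters are arbitrary, and the actual system evaluates the implicit function from the volume data on the fly rather than analytically:

```python
import math

def haptic_force(p, center=(0.0, 0.0, 0.0), radius=1.0, k=200.0):
    """Restoring force for a probe at point p against a spherical isosurface.

    The implicit function f(p) = |p - c| - r is negative inside the surface;
    the force pushes the probe back along the gradient (the outward normal),
    scaled by penetration depth, as in standard penalty-based haptics.
    """
    d = [pi - ci for pi, ci in zip(p, center)]
    dist = math.sqrt(sum(di * di for di in d))
    penetration = radius - dist
    if penetration <= 0.0 or dist == 0.0:
        return (0.0, 0.0, 0.0)           # probe outside the surface (or degenerate)
    normal = [di / dist for di in d]      # gradient direction of f
    return tuple(k * penetration * ni for ni in normal)
```

Making k a function of the local voxel value is one plausible way the Hounsfield-to-density mapping could yield different resistances per tissue type.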
The high cost of simulators that offer adequate realism for training has been a major challenge for the simulation community. The cost of the computers alone has been too high for most training institutions to afford. We have met this challenge by developing the PreOp™ Endoscopic Simulator, our second generation of low-cost medical simulators. The PreOp™ system integrates multimedia, 3D graphics simulation, and force feedback technology on a PC. This paper discusses the challenges of this project and the trade-offs and solutions that we developed to overcome them. We discuss our process of analyzing and prioritizing the medical tasks necessary to correctly perform flexible bronchoscopy. In addition, we illustrate how we blended together simulation and multimedia technology to ensure adequate immersion and training efficacy, while keeping the system cost to a minimum.
The purpose of this report is to outline the hierarchical decomposition of surgical procedures, from surgical steps through tasks and subtasks to tool motions, and highlight implications for surgical training systems. Three common laparoscopic procedures were analysed: cholecystectomy, inguinal hernia repair, and Nissen fundoplication. In laparoscopic training workshops and operating rooms, our observational research included split screen videotaping of both the endoscopic view and our video camera’s view of the primary surgeon. Videotapes were extensively annotated and analysed to yield timelines of each procedure, with component surgical steps, substeps, tasks, and subtasks duration as a function of procedure. The hierarchical decomposition of surgical procedures provides a framework for structuring a systematic approach to training, in the real and simulated environment. An example comparing variations in the fundoplication procedure is presented. Our results also have important implications for the design and assessment of new technology and intelligent tools in endoscopic surgery.
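One way to represent such a hierarchical decomposition in software is a nested structure whose leaf durations roll up into procedure timelines. The step names and durations below are purely illustrative, not data from the study:

```python
def total_duration(node):
    """Sum leaf durations (seconds) in a nested step/task/subtask tree."""
    if isinstance(node, (int, float)):
        return node
    return sum(total_duration(child) for child in node.values())

# Hypothetical fragment of a cholecystectomy timeline.
procedure = {
    "expose cystic duct": {
        "retract gallbladder": 90,
        "dissect peritoneum": {"grasp": 20, "spread": 40, "cut": 30},
    },
    "clip and divide duct": {"apply clips": 60, "divide": 25},
}
```

Aggregating annotated video timelines into a tree like this makes it straightforward to compare, say, two variants of a fundoplication step by step.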
Virtual Reality (VR) is proving to be an effective treatment tool in rehabilitation. Currently we are providing VR therapy to patients in an out-patient clinic and an adult day center. The main focus of this paper is to show how VR is used in occupational therapy to improve balance and dynamic standing tolerance with geriatric patients. We will discuss our research findings of specific treatment approaches that we are using in therapy. The benefits of VR in rehab, as well as other rehab applications for this modality, will also be discussed. We will illustrate the importance of addressing the visual and motor systems simultaneously for maximum efficiency in achieving our rehabilitation goals. The ultimate goal in occupational therapy is to increase a patient's level of independence in activities of daily living and thereby improve their quality of life.
In this paper, we describe the basic components of a surgery simulator prototype developed at INRIA. We present two physical models which are well suited for surgery simulation. These models are based on linear elasticity theory and finite element modeling. The former model can deform large tetrahedral meshes in real time but does not allow any topological changes. In contrast, the latter biomechanical model can simulate the cutting and tearing of soft tissue but must have a limited number of vertices to run in real time. We propose a method for combining these two approaches into a hybrid model which may allow real-time deformation and cutting of sufficiently large anatomical structures.
We present an augmented reality system that allows surgeons to view features from preoperative radiological images accurately overlaid in stereo in the optical path of a surgical microscope. The purpose of the system is to show the surgeon structures beneath the viewed surface in the correct 3-D position. The technical challenges are registration, tracking, calibration and visualisation. For patient registration, or alignment to preoperative images, we use bone-implanted markers and a dental splint is used for patient tracking. Both microscope and patient are tracked by an optical localiser. Calibration uses an accurately manufactured object with high contrast circular markers which are identified automatically. All ten camera parameters are modelled as a bivariate polynomial function of zoom and focus. The overall system has a theoretical overlay accuracy of better than 1mm. Implementations of the system have been tested on seven patients. Recent measurements in the operating room conformed to our accuracy predictions. For visualisation the system has been implemented on a graphics workstation to enable high frame rates with a variety of rendering schemes. Several issues of 3-D depth perception remain unsolved, but early results suggest that perception of structures in the correct 3-D position beneath the viewed surface is possible.
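Modelling a camera parameter as a bivariate polynomial of zoom and focus amounts to evaluating an expression like the one below. This is a degree-2 sketch with made-up coefficients; the actual system models all ten camera parameters this way, with coefficients presumably fitted from calibration images across the zoom/focus range:

```python
def eval_bivariate_poly(coeffs, zoom, focus):
    """Evaluate sum over (i, j) of coeffs[(i, j)] * zoom**i * focus**j."""
    return sum(c * zoom**i * focus**j for (i, j), c in coeffs.items())

# Hypothetical coefficients for focal length (mm) as a function of the
# normalised zoom and focus motor positions.
focal_length = {
    (0, 0): 25.0,   # base focal length
    (1, 0): 40.0,   # linear zoom term
    (0, 1): 1.5,    # linear focus term
    (1, 1): 0.8,    # zoom-focus coupling
    (2, 0): -3.0,   # quadratic zoom term
}
```

Once fitted, the polynomials let the overlay renderer recover a full camera model for any zoom/focus setting without recalibrating.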
Purpose: To improve the diagnosis of pathologically modified airways, a new hybrid visualization system has been developed and tested based on digital image analysis and synthesis of spiral CT as well as the visual simulation of bronchoscopy.
Method/Materials: 20 patients with pathologic modifications of the airways (tumors, lung transplantation) were examined with Spiral-CT. The shape of the airways and the lung tissue is delineated by an automatic volume-growing method, followed by geometric reconstruction through the computation of geometric primitives. This is the basis of a multidimensional display system which visualizes volumes, surfaces and computation results simultaneously. To enable the intuitive and immersive inspection of the airways, a virtual reality system consisting of two graphic engines, a head-mounted display, data gloves and specialized software was integrated.
Results: In 20 cases the extension of the pathologic modification of the airways could be visualized with the virtual bronchoscopy. The user interacts with and manipulates the 3D model of the airways in an intuitive and immersive way. In contrast to previously proposed virtual bronchoscopy systems, the described method permits truly interactive navigation, detailed quantitation of anatomic structures and a “see-through” view of the bronchial wall. The system enables a user-oriented and fast inspection of the volumetric image data.
Conclusions: To support radiological diagnosis with additional information, a virtual bronchoscopy system was developed. It enables immersive and intuitive interaction with 3D Spiral CTs through truly 3D navigation in the airways. The system was tested with 20 Spiral-CTs of bronchial tumors and obstructions and is well suited for the inspection of structures beyond the bronchial tree.
In the recent past, we used two 2-D videoscopes to obtain both a close, detailed view and a panoramic view simultaneously; the panoramic view improves efficient and safe access for instruments into the microscopic working field. This bi-modal set of visual cues allows for (1) insertion of suture, (2) cutting of suture with scissors, (3) retraction of tissue, and (4) removal of suture and needle. During these experiences, we observed the benefits accrued to the surgeon by allowing the focusing of his/her attention on the work (technical skills) without diffusing energy to other activities. Similarly, when training surgeons to perform micro-anastomoses, and while working to improve performance in micro-anastomoses, we hypothesize that two or more videoscopic views of the 3-dimensional working space would provide added visual information to the surgeon during microscopic work. To examine this hypothesis, we have used a non-animate model in the performance of complex skills in videoscopic surgery.
Methods: Inanimate videoscopic models for suturing and tying (24 studies) were used in this study. The technical skill studied was the sophisticated skill of suturing. The speed and accuracy of free-handed suturing and tying were determined in these studies and compared using a single 2-D system versus three videoscopic views reconstructing a 3-D effect.
Results: In each of these models, the delineation of multiple views provided greater detailed 3-dimensional information for the surgeon. The sutures were placed faster, more accurately, and with fewer false motions. These data allow us to conclude that the use of multiple high-resolution 2-D views will improve accuracy and efficiency in the performance of delicate and precise skills in videoscopic surgery.
The development of technologies permitting processing, compression, and transmission of digital images and image sequences enables powerful methodologies for local and remote medical teleconsultation. We are developing a slit-lamp-based ophthalmic augmented reality (image overlay) environment incorporating features to permit real-time, interactive teaching, telemedicine, and telecollaboration. A binocular slit-lamp biomicroscope interfaced to a CCD camera, framegrabber board, and PC permits acquisition and rendering of anterior segment and retinal images. Computer-vision algorithms facilitate robust tracking, registration, and near-video-rate image overlay of previously stored retinal photographic and angiographic images onto the real-time fundus image. Our algorithms facilitate shared control of pointing, drawing, and measuring functions registered with the retinal image video stream and direct audio communication between an examiner (student, generalist) and remote observer (instructor, specialist). Bandwidth and video compression considerations limit the frame rate and latency for video stream transmission. Excellent and acceptable performance are demonstrated in model eyes over a local area network and through a modem connection, respectively. These studies represent the first investigations towards the design and implementation of an intelligent platform for ophthalmic telemedicine and telecollaboration.