Ebook: Medicine Meets Virtual Reality 2000
This book provides an innovative international forum for the researchers, developers, and practitioners who are actively expanding the role of electronic technologies in healthcare. The contributions are by pioneers in all aspects of the field: telemedicine, simulation, computer-assisted surgery, haptics, robotics, education, diagnostics, etc. Leading edge developments and current clinical experience are brought together for the purpose of exploring ways to improve medical care. Mental health implications of new electronic technologies are also discussed. This book has a special focus on virtual reality as a means of bringing practitioner and patient closer in the pursuit of healing. Rather than superseding the talents of healthcare professionals, interactive computer-based tools have the ability to enhance the traditional dialogue of care. In addition, these tools can be used to integrate useful qualities of complementary therapies into allopathic medicine. Sight, touch, sound and other senses can be linked and augmented in ways previously unimagined, ultimately to benefit the patient.
Some Thoughts on Technology, Health, and the Public
At the time of writing this introduction, millennial fever is rising. The media feature persons expecting religious fulfillment, terrorist attacks, and computer-generated mayhem on January 1. Even sensible people wonder if extra cash should be withdrawn from the bank, whether investment funds should be moved to safer accounts, or if bottled water should be stashed away.
Our systems are fragile—or so we fear. Of course, you who are reading this are in a much better position to know the truth of things.
Examining the fear of Y2K computer bugs, one can see an underlying public distrust of technology. This distrust is nothing new; history offers many examples of resistance to the tools of change and progress. One need only recall the term “Luddite” to start a list—a long list, too. And these days, one could add to this list the distrust of traditional Western medicine.
In consumer-driven medicine, nothing is as “hot” right now as herbal remedies. Public fixation on herbal cures defies the countless dollars invested in pharmaceutical and medical device research. It ignores the work of talented and critical biomedical researchers. Why does the consumer feel better when ingesting a plant, even though its production and purity are minimally regulated and its effects uncertain? Real-life physicians should shake their heads when they see an actor who plays a doctor on a TV show extol medicinal herbs to an audience of millions. Is an actor's health advice more valuable than a doctor's? Are herbal remedies more useful than drugs that cost billions to develop? Why is the public so intrigued?
Perhaps discoveries at the forefront of genetics, neurology, biochemistry and such—unfathomable to the average person—make us fear we'll end up like Frankenstein's monster, unhappy in an artificial health that's out of control. Or perhaps in the efficiency crunch of contemporary healthcare, patients believe no one cares how they really feel. Out of distrust, the public looks for alternatives.
There is value in alternative therapies, too, especially those that integrate the power of the human brain into the pursuit of better health. Why not creatively explore how to use the mind's power to our advantage? Just about any issue of Science shows how much we need to learn about the brain's machinations. Let's critically assess all means to understanding.
Medical science, because it's comprised of healers who are human, can consider the feelings of patients. And technology can assist by stretching the capacity to feel (and understand): e.g., in telemedicine consultations, during computer-aided surgery, through immersive mental health therapy, with Internet forums for patients, and in the examination of biological minutiae projected onto screens much larger than life, color-coded and with freeze-frame.
From its start, Medicine Meets Virtual Reality has presented these kinds of technologies. It has also helped direct them to where they're most needed: in the hospital, clinic, office, and home. As organizers of this conference, we're pleased to see how ideas presented in the early years are evolving into tools for real patient care. We chose this year's theme—how technology is (or can be) involved in the relationship between patient and medical practitioner—to emphasize that the work presented at MMVR is reaching the point where it really can affect health. “Envisioning healing” isn't a future activity anymore.
For those who work in the field, VR technology probably appears to be leveling off, its refinements growing more subtle and radical changes less frequent. For the layperson, however, the connection of computer technology to human health is still suspicious and the fear of losing control very real.
We know this technology is marvelous. You who are part of MMVR should share that marvel with the public. Protect your ideas while you apply for patents and construct your IPOs, but then be sure to let the public know this technology is all about them—or really, about all of us.
James D. Westwood
Aligned Management Associates, Inc.
For surgical training and preparation, existing surgical virtual environments have shown great improvement. However, these improvements are mostly in the visual aspect. The incorporation of haptics into virtual-reality-based surgical simulations would greatly enhance the sense of realism. To aid in the development of haptic surgical virtual environments, we have created a graphics-to-haptics (G2H) virtual environment developer tool. G2H transforms graphical virtual environments (created or imported) into haptic virtual environments without programming.
The G2H capability has been demonstrated using the complex 3D pelvic model of Lucy 2.0, the Stanford Visible Female. The pelvis was made haptic using G2H without any further programming effort.
Since the acquisition of high-resolution three-dimensional patient images has become widespread, medical volumetric datasets (CT or MR) larger than 100MB and encompassing more than 250 slices are common. It is important to make this patient-specific data quickly available and usable to many specialists at different geographical sites. Web-based systems have been developed to provide volume or surface rendering of medical data over networks with low fidelity, but these cannot adequately handle stereoscopic visualization or huge datasets. State-of-the-art virtual reality techniques and high-speed networks have made it possible to create an environment for geographically distributed clinicians to immersively share these massive datasets in real time. An object-oriented method for instantaneously importing medical volumetric data into Tele-Immersive environments has been developed at the Virtual Reality in Medicine Laboratory (VRMedLab) at the University of Illinois at Chicago (UIC).
This networked-VR setup is based on LIMBO, an application framework or template that provides the basic capabilities of Tele-Immersion. We have developed a modular general purpose Tele-Immersion program that automatically combines 3D medical data with the methods for handling the data. For this purpose a DICOM loader for IRIS Performer has been developed. The loader was designed for SGI machines as a shared object, which is executed at LIMBO's runtime. The loader loads not only the selected DICOM dataset, but also methods for rendering, handling, and interacting with the data, bringing networked, real-time, stereoscopic interaction with radiological data to reality.
Collaborative, interactive methods currently implemented in the loader include cutting planes and windowing. The Tele-Immersive environment has been tested on the UIC campus over an ATM network. We tested the environment with three nodes: one ImmersaDesk at the VRMedLab, one CAVE at the Electronic Visualization Laboratory (EVL) on the east campus, and a CT scanner in the UIC Hospital. CT data was pulled directly from the scanner to the Tele-Immersion server in our laboratory, and the data was then synchronously distributed by our Onyx2 Rack server to all the VR setups.
Rather than confining medical volume visualization to a single VR device, the Tele-Immersive environment combines teleconferencing, tele-presence, and virtual reality to let geographically distributed clinicians intuitively interact with the same medical volumetric models: point, gesture, converse, and see each other. This environment will bring together clinicians at different geographic locations to participate in Tele-Immersive consultation and collaboration.
This paper presents a navigation system for a surgical microscope and an endoscope which can be used for neurosurgery. In this system, a wireframe model of a target tumor and other significant anatomical landmarks is superimposed in real time onto live video images taken from the microscope and the endoscope. The wireframe model is generated from CT/MRI slice images. Overlaid images are displayed simultaneously on the same monitor using a picture-in-picture function, so that the surgeon can concentrate on a single monitor during surgery. The system measures the position and orientation of the patient using specially designed non-contact sensing devices mounted on the microscope and the endoscope. Based on this real-time measurement, the system displays other useful navigation information as well as the rendered wireframe. The registration error between the wireframe model and the actual live view is less than 2 mm. We tested this system in actual surgery several times and verified its performance and effectiveness.
Objectives. Urologists routinely use the systematic sextant needle biopsy technique to detect prostate cancer. However, recent evidence suggests that this technique has a significant sampling error. Recent data based upon whole-mounted step-sectioned radical prostatectomy specimens using a 3-D computer-assisted prostate biopsy simulator suggest that an increased detection rate is possible using laterally placed biopsies. A new 10-core biopsy pattern was shown to be superior to the traditional sextant biopsy. This pattern includes the traditional sextant biopsy cores and four laterally placed biopsies in the right and left apex and mid portion of the prostate gland. The objective of this study is to confirm the higher prostate cancer detection rate obtained using the 10-core biopsy pattern in a small cohort of patients. Methods. We retrospectively reviewed 35 consecutive patients with a pathologic diagnosis of prostate cancer biopsied by a single urologist using the 10-core biopsy pattern. The frequency of positive biopsy was determined for each core. Additionally, the sextant and 10-core prostate biopsy patterns were compared with respect to prostate cancer detection rate. Results. Of the 35 patients diagnosed with prostate cancer, 54.3% (19/35) were diagnosed when reviewing the sextant biopsy data only. Review of the 10-core pattern revealed that an additional 45.7% (16/35) of patients were diagnosed solely with the laterally placed biopsies. The laterally placed biopsies had the highest frequency of positive biopsies when compared to the sextant cores. Conclusions. Our results suggest that biopsy protocols that use laterally placed biopsies based upon a five-region anatomical model are superior to the routinely used sextant prostate biopsy pattern. Lateral biopsies in the apex and mid portion of the gland are the most important.
The procedure for creating a patient-specific virtual tissue model with finite element (FE) based haptic (force) feedback varies substantially from that required for generating a typical volumetric model. In addition to extracting geometrical and texture map data to provide visual realism, it is necessary to obtain information for supporting a FE model. Among many differences, FE-based VR environments require a FE model with appropriate material properties assigned. The FE equation must also be processed in a manner specific to the surgical task in order to maximize deformation and haptic computation speed. We are currently developing methodologies and support software for creating patient-specific models from medical images. The steps for creating such a model are as follows: 1) obtain medical images and texture maps of tissue structures; 2) extract tissue structure contours; 3) generate a 3D mesh from the tissue structure contours; 4) alter the mesh based on simulation objectives; 5) assign material properties, boundary nodes and texture maps; 6) generate a fast (or real-time) FE model; and 7) support the tissue models with task-specific tools and training aids. This paper will elaborate on the above steps with particular reference to the creation of suturing simulation software, which will also be described.
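The abstract gives no implementation details, but the speed-up implied by step 6 (a "fast" FE model) is commonly obtained by condensing out the fixed boundary nodes and pre-inverting the reduced stiffness matrix, so that each interactive update is just a matrix-vector product. A minimal 1D sketch of that idea (hypothetical, not the authors' code):

```python
import numpy as np

def assemble_chain_stiffness(n_springs, k):
    """Assemble the global stiffness matrix for a 1D chain of linear springs."""
    n = n_springs + 1
    K = np.zeros((n, n))
    for e in range(n_springs):
        # scatter the local 2x2 spring stiffness into the global matrix
        K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K

def precompute_solver(K, fixed):
    """Condense out fixed (boundary) nodes and pre-invert the reduced
    stiffness, so each run-time deformation is one matrix-vector product."""
    free = [i for i in range(K.shape[0]) if i not in fixed]
    K_red = K[np.ix_(free, free)]
    return free, np.linalg.inv(K_red)

# Three unit springs, node 0 clamped, unit force pulling the free end.
K = assemble_chain_stiffness(3, k=1.0)
free, K_inv = precompute_solver(K, fixed={0})
u = K_inv @ np.array([0.0, 0.0, 1.0])  # displacements of the free nodes
```

For a real mesh one would use a sparse factorization rather than a dense inverse, but the precompute-once, apply-per-frame structure is the same.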
Clinics currently have to deal with hundreds of 3D images a day. 3D medical images contain a huge amount of data, and thus very expensive and powerful systems are required to process them. The present work shows the features of a parallel computing software package developed at the Universidad Politécnica de Valencia under the European project HIPERCIR (http://hiperttn.upv.es/hipercir). HIPERCIR is aimed at reducing the time and requirements for processing and visualising 3D images with low-cost solutions, such as networks of PCs running standard operating systems (Windows 95/98/NT). The project is being developed by a consortium of medical image processing and parallel computing experts from the Universidad Politécnica de Valencia (UPV), experts on biomedical software, and radiology clinic experts.
The next evolutionary stage of the electronic medical record (EMR), the clinical information system, must be able to support the mental processes involved in clinical reasoning. We have developed a framework for building such systems.
In a two-step procedure, objects are chosen on the basis of a process model and described using the Unified Modelling Language (UML). The UML description installs the objects in a middleware layer between a database and the browser used for presentation. The database and a suggested presentation layout are generated by the middleware.
Data entered into the system are given a context according to a built-in process model. This can be utilized for quality assurance and passive decision support.
Most image-guided neurosurgery systems employ adhesively mounted external fiducials to register medical images to the surgical workspace. Because of the high logistical costs associated with these artificial landmarks, we strive to eliminate the need for them. At our institution, we developed a handheld laser stripe triangulation device to capture the surface contours of the patient's head while it is oriented for surgery. Anatomical surface registration algorithms rely on the assumption that the patient's anatomy bears the same geometry as the 3D model of the patient constructed from the imaging modality employed. In the interval between when the patient is imaged and when the patient is placed in the Mayfield head clamp in the operating room, the skin of the head bulges at the pin sites, and the skull fixation equipment itself optically interferes with the image capture laser. We have developed software to reject points belonging to objects of known geometry while calculating the registration. During the development of the laser scanning unit, we have acquired surface contours of 13 patients and 2 cadavers. Initial analysis revealed that this automated rejection of points improved the registrations in all cases, but the accuracy of the fiducial method was not surpassed. Only points belonging to the offending instrument are removed; skin bulges caused by the clamps and instruments remain in the data. We anticipate that careful removal of the points in these skin bulges will yield registrations that at least match the accuracy of the fiducial method.
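The rejection software itself is not described in detail. One plausible sketch, assuming the offending instrument (for example a clamp pin) can be modeled as a cylinder with known axis and radius, is to drop every scanned point within a tolerance of that cylinder before running the surface registration:

```python
import numpy as np

def distance_to_segment(points, a, b):
    """Distance from each point to the segment a-b (the axis of a
    cylindrical instrument such as a head-clamp pin)."""
    ab = b - a
    t = np.clip((points - a) @ ab / (ab @ ab), 0.0, 1.0)
    closest = a + t[:, None] * ab
    return np.linalg.norm(points - closest, axis=1)

def reject_instrument_points(points, axis_a, axis_b, radius, margin=2.0):
    """Keep only points farther than radius + margin (mm) from the
    known-geometry cylinder, i.e. the true anatomical surface points."""
    d = distance_to_segment(points, axis_a, axis_b)
    return points[d > radius + margin]

# Tiny demo: one point sits on the (hypothetical) pin, two are anatomy.
scan = np.array([[0.5, 0.0, 5.0], [10.0, 0.0, 5.0], [0.0, 4.0, 5.0]])
anatomy = reject_instrument_points(scan, np.array([0.0, 0.0, 0.0]),
                                   np.array([0.0, 0.0, 10.0]), radius=1.0)
```

All coordinates and the cylinder model are invented for illustration; the paper's actual rejection criterion may differ.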
In craniofacial surgery, bone fractures and repositioned bone segments often have to be fixed with titanium miniplates. In clinical routine the surgeon has to fit each miniplate to the individual bone structure of the patient: bending and fitting of a miniplate must frequently be repeated several times, and up to twenty minutes are often required to achieve the best fit of a single osteosynthesis plate. As a patient usually receives several miniplates for bone fixation, he or she will be exposed to long anaesthesia.
In co-operation with the surgeons of the Clinic of Maxillofacial surgery at the University of Heidelberg we have conceived a planning system for the preoperative positioning of miniplates on a model of the patient's skull. The appropriate bending is computed and the bending data are stored for later use by a bending device and an intraoperative positioning aid. The principles of our computer-aided tool are presented in this paper.
Traditionally, finite element analysis or mass-spring systems are used to calculate deformations of geometric surfaces. Patient-specific geometric models can comprise tens of thousands, even hundreds of thousands, of polygons, making finite element analysis and mass-spring systems computationally demanding. Under such a computational burden, simulating deformable patient-specific models at real-time rates is prohibitive. This paper presents a method for simulating deformable surfaces by deforming a skeletal representation of the surface, rather than the surface itself, yielding an efficient method for interactive simulation with models.
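The abstract gives no formulas. A common way to drive a dense surface from a deformed skeleton is linear blend skinning, where each surface vertex moves by a weighted average of its skeleton nodes' displacements, so only the small skeleton needs to be simulated. A toy sketch under that assumption (the weighting scheme is not necessarily the authors' method):

```python
import numpy as np

def skin_vertices(rest_vertices, weights, node_offsets):
    """Linear blend skinning with translations only: each vertex moves by
    the weighted average of its skeleton nodes' displacements.

    rest_vertices: (n_verts, 3) surface positions at rest
    weights:       (n_verts, n_nodes) binding weights, rows summing to 1
    node_offsets:  (n_nodes, 3) displacement of each skeleton node
    """
    return rest_vertices + weights @ node_offsets

# Two vertices bound to two skeleton nodes; node 0 moves by +1 in x.
rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
W = np.array([[1.0, 0.0], [0.5, 0.5]])
offsets = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
deformed = skin_vertices(rest, W, offsets)
```

The fully bound vertex follows node 0 exactly, while the half-weighted vertex moves half as far, which is the behavior that makes the surface follow the cheap skeletal simulation smoothly.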
In this article, we present an Interventional Cardiology Training System developed by the Medical Application Group at Mitsubishi Electric in collaboration with the Center for Innovative Minimally Invasive Therapy. The core of the ICTS is a computer simulation of interventional cardiology catheterization. This simulation integrates clinical expertise, research in learning, and technical innovations to create a realistic simulated environment. The goal of this training system is to augment the training of new cardiology fellows as well as to introduce cardiologists to new devices and procedures.
To achieve this goal, both the technical components and the educational content of the ICTS bring new and unique features: a simulated fluoroscope, a physics model of a catheter, a haptic interface, a fluid flow simulation combined with a hemodynamic model, and a learning system integrated in a user interface. The simulator is currently able to generate, in real time, high-quality x-ray images from a 3D anatomical model of the thorax, including a beating heart and animated lungs. The heart and lung motion is controlled by the hemodynamic model, which also computes blood pressure and EKG. The blood flow is then calculated according to the blood pressure and blood vessel characteristics. Any vascular tool, such as a catheter, guide wire or angioplasty balloon, can be represented and accurately deformed by the flexible-tool physics model. The haptic device controls the tool and provides appropriate feedback when contact with a vessel wall is detected. When the catheter is in place, a contrast agent can be injected into the coronary arteries; blood and contrast mixing is computed and a visual representation of the angiogram is displayed by the x-ray renderer.
By bringing key advances to the area of medical simulation (the real-time x-ray renderer, for instance) and by integrating both high-quality simulation and learning tools in a single system, the ICTS opens new perspectives for computer-based training systems.
One of the less effective processes within current Computer Assisted Surgery systems utilizing pre-operative planning is the registration of the plan with the intra-operative position of the patient. The technique described in this paper requires no digitisation of anatomical features or fiducial markers; instead it relies on image matching between pseudo and real x-ray images, generated by a virtual and a real image intensifier respectively. The technique is an extension of the work undertaken by Weese [1].
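The matching criterion is not specified in this abstract. A standard choice for comparing a pseudo x-ray (a digitally reconstructed radiograph) with a real intensifier image is normalized cross-correlation, which a pose search then maximizes over the patient's rigid-body parameters. A minimal sketch under that assumption:

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """Similarity between a simulated (pseudo) x-ray and a real image:
    +1 for identical images, -1 for inverted ones. A registration loop
    would search pose parameters that maximize this score."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

# Identical images score exactly 1.0 regardless of brightness/contrast.
img = np.array([[1.0, 2.0], [3.0, 4.0]])
perfect = normalized_cross_correlation(img, img)
```

Because the measure is invariant to linear intensity changes, it tolerates the brightness and contrast differences between a rendered pseudo x-ray and a real intensifier frame.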
Training of medical staff in minimally invasive surgery (MIS) is an area where new, effective training methods are needed. We have studied stent grafting, a type of MIS used to treat abdominal aortic aneurysm. Our analysis revealed that this procedure requires a range of motor, perceptual and cognitive skills. In this paper, we present a training environment that could be used to acquire these skills. Our proposed solution differs from the usual VR solutions by using the World Wide Web as the environment for our system. This paper discusses how our solution covers the training skills and presents the results of an appraisal process which we conducted to evaluate our solution.
Introduction/Purpose: The complexity of cardiac surgery requires continuous training, education and information addressing different individuals: physicians (cardiac surgeons, residents, anaesthesiologists, cardiologists), medical students, perfusionists and patients. The efficacy and efficiency of education and training will likely be improved by the use of multimedia information systems. Nevertheless, computer-based education faces some serious disadvantages: 1) multimedia productions require tremendous financial and time resources; 2) the resulting multimedia data are usable only for one specific target user group in one specific instructional context; 3) computer-based learning programs often show deficiencies in supporting individual learning styles and in providing information adjusted to the learner's individual needs. In this paper we describe a computer system providing multiple re-use of multimedia data in different instructional scenarios and flexible composition of content for different target user groups.
Tools and Methods: The ZYX document model has been developed, allowing the modelling and flexible on-the-fly composition of multimedia fragments. It has been implemented as a DataBlade module in the object-relational database system Informix Dynamic Server and allows for presentation-neutral storage of multimedia content from the application domain, delivery and presentation of multimedia material, content-based retrieval, and re-use and composition of multimedia material for different instructional settings. Multimedia data stored in the repository, which can be processed and authored according to our identified needs, are created using a next-generation authoring environment called the CardioOP-Wizard. High-quality intra-operative video is recorded using a video robot. Difficult surgical procedures are visualized with generic and CT-based 3D animations.
Results: An on-line architecture for multiple re-use and flexible composition of media data has been established. The system contains the following prototypically implemented instructional applications: a multimedia textbook on operative techniques, an interactive module for problem-based training, a module for the creation and presentation of lectures, and a module for patient information. Principles of cognitive psychology and knowledge management have been employed in the program. These instructional applications provide information ranging from basic knowledge at the beginner's level, through procedural knowledge at the advanced level, to implicit knowledge at the professional level. For media annotation with meta-data, a metainformation system, the CardioOP-Clas, has been developed. The prototype focuses on aortocoronary bypass grafting and heart transplantation.
Conclusion: The demonstrated system reflects an integrated approach, in terms of both information technology and teaching, through the multiple re-use and composition of stored media items tailored to the individual user and the chosen educational setting at different instructional levels.
The explosion of Internet-based socio-cultural subcontexts, and the horizontally distributed possibility of handling information on the Net, are transforming information retrieval into a key topic for psychosocial research. However, describing and analyzing how a huge and complex topic can be discussed on the Net, and how people's behaviors and attitudes can be influenced by this discussion, is a very complicated task.
The aim of this chapter is both to point out the limits of traditional approaches to the analysis of information retrieval in complex Internet-based domains and to offer some advice for a cultural approach to the Net. In particular, starting from the analysis of a specific health care domain, drug abuse, the chapter will try to identify some guidelines for the study of these issues.
This paper presents the design and implementation of a distributed image processing server based on CORBA. Existing image processing tools were encapsulated in a common way within this server. Data exchange and conversion are done automatically inside the server, hiding these tasks from the user. The different image processing tools are visible as one large collection of algorithms and, through the use of CORBA, are accessible via intranet/Internet.
A novel ankle rehabilitation device is being developed for home use, allowing remote monitoring by therapists. The system will allow patients to perform a variety of exercises while interacting with a virtual environment (VE). These game-like VEs created with WorldToolKit run on a host PC that controls the movement and output forces of the device via an RS232 connection. Patients will develop strength, flexibility, coordination, and balance as they interact with the VEs. The device will also perform diagnostic functions, measuring the ankle's range of motion, force exertion capabilities and coordination. The host PC transparently records patient progress for remote evaluation by therapists via our existing telerehabilitation system. The “Rutgers Ankle” Orthopedic Rehabilitation Interface uses double-acting pneumatic cylinders, linear potentiometers, and a 6 degree-of-freedom (DOF) force sensor. The controller contains a Pentium single-board computer and pneumatic control valves. Based on the Stewart platform, the device can move and supply forces and torques in 6 DOFs. A proof-of-concept trial conducted at the University of Medicine and Dentistry of New Jersey (UMDNJ) provided therapist and patient feedback. The system measured the range of motion and maximum force output of a group of four patients (male and female). Future medical trials are required to establish clinical efficacy in rehabilitation.
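The abstract notes that the device is based on the Stewart platform, whose inverse kinematics are standard: for a commanded pose of the moving plate, each actuator length is simply the distance between its base anchor and the transformed plate anchor. A small illustrative sketch (the anchor coordinates are invented, not the Rutgers Ankle's):

```python
import numpy as np

def stewart_leg_lengths(base_pts, plat_pts, translation, R=np.eye(3)):
    """Inverse kinematics of a Stewart platform: apply rotation R and then
    the translation to the plate anchors, and return each cylinder length
    as the distance to its matching base anchor."""
    moved = plat_pts @ R.T + translation
    return np.linalg.norm(moved - base_pts, axis=1)

# Neutral pose: plate anchors directly above the base anchors at 0.2 m,
# so every pneumatic cylinder must extend to exactly 0.2 m.
anchors = np.array([[0.3, 0.0, 0.0], [0.0, 0.3, 0.0], [-0.3, 0.0, 0.0]])
lengths = stewart_leg_lengths(anchors, anchors, np.array([0.0, 0.0, 0.2]))
```

In a controller this mapping runs in the pose-to-actuator direction each cycle; the forward problem (lengths to pose) requires an iterative solve.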
We have developed an experimental catheter insertion simulation system supporting head-tracked stereoscopic viewing of volumetric anatomic reconstructions registered with direct haptic 3D interaction. The system takes as input data acquired with standard medical imaging modalities and regards it as a visual and haptic environment whose parameters are interactively defined using look-up tables. The system's display, positioned like a surgical table, provides a realistic impression of looking down at the patient. By measuring head motion with a six-degree-of-freedom head tracker, good positions from which to observe the anatomy and identify the catheter insertion point are quickly established. By generating appropriate stereoscopic images and co-registering physical and virtual spaces beforehand, volumes appear at fixed physical positions, and catheter insertion can be controlled via direct interaction with a PHANToM haptic device. During the insertion procedure, the system provides perception of the effort of penetration and deviation inside the traversed tissues. Semi-transparent volumetric rendering augments the sensory feedback with a visual indication of the inserted catheter's position inside the body.
An instrument for intraoperative sensing of surgeons' hand tremor during vitreoretinal microsurgery has been developed. The instrument uses inertial sensing to detect tremor in six degrees of freedom. Instrument tip velocity is computed using the sensor data. The displacement amplitude of the tremor is then approximated analytically by modeling the velocity as sinusoidal. The instrument presently estimates oscillations at physiological tremor frequencies with error of less than 7%.
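The analytical approximation follows directly from the sinusoidal model: if tip velocity is v(t) = V sin(2πft), integrating gives a displacement amplitude of X = V/(2πf). A one-line helper illustrating the conversion (the numeric values are illustrative, not from the paper):

```python
import math

def tremor_displacement_amplitude(v_peak, freq_hz):
    """For sinusoidal motion x(t) = X*sin(2*pi*f*t), the velocity peak is
    V = 2*pi*f*X, so the displacement amplitude is X = V / (2*pi*f)."""
    return v_peak / (2.0 * math.pi * freq_hz)

# e.g. a 10 Hz physiological tremor with a 0.63 mm/s peak tip velocity
amp_mm = tremor_displacement_amplitude(0.63, 10.0)
```

This is why measuring velocity (via integrated inertial data) suffices: at a known tremor frequency, amplitude follows without a second integration that would accumulate drift.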
Lumbar punctures (LPs) are complex, precise procedures performed to obtain cerebrospinal fluid from a patient for diagnostic purposes. Incorrect techniques resulting from inadequate training or supervision can result in sub-optimal outcomes. As tactile feedback is crucial for a successful lumbar puncture, this procedure is an ideal candidate for the development of a haptic training simulator. The intent of this project is to engineer a force-feedback LP simulator that provides a safe method of training students (medical students, residents, or trained physicians) for an actual LP procedure on a patient.
A semi-automatic method for three-dimensional segmentation of medical images is proposed. A multiresolution representation is achieved through the application of morphological filters, which assures causality for image extrema. This allows a compact scale-space representation in which each extremum is assigned a scale value. Interactive selection of the interesting extrema of the image is carried out, aided by this scale information and other relevant features. The selected extrema are then used as markers in a three-dimensional watershed calculation. The system has been developed and tested on low-cost platforms, and can be the basis for totally automatic, knowledge-based segmentation systems.
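As a rough sketch of this kind of pipeline: grey-scale closings at increasing structuring-element sizes build a morphological scale space in which small dark extrema vanish first (the causality property), and the extrema kept after selection seed a marker-based watershed. A toy 2D version using SciPy's `watershed_ift` (the image, marker positions, and filter size are invented; the paper's filters and 3D implementation may differ):

```python
import numpy as np
from scipy import ndimage

# Toy 2D "slice": two dark basins (tissue regions) on a bright background.
img = np.full((9, 9), 200, dtype=np.uint8)
img[1:4, 1:4] = 50
img[5:8, 5:8] = 50

# Morphological filtering: closings at increasing sizes form a scale space
# where small dark extrema disappear first; here a single size-3 closing
# stands in for that stage and suppresses sub-3x3 dark noise.
filtered = ndimage.grey_closing(img, size=3)

# The (interactively) selected surviving minima become watershed markers.
markers = np.zeros_like(img, dtype=np.int16)
markers[2, 2] = 1
markers[6, 6] = 2

# Flood from the markers: every pixel ends up in one labeled region.
labels = ndimage.watershed_ift(filtered, markers)
```

Scaling this to 3D volumes only changes the array rank and the connectivity structure passed to the watershed.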
A World Wide Web-based telerehabilitation platform has been demonstrated in a laboratory environment. This platform allows a rehabilitation provider to thoroughly evaluate the progress of a patient remotely, with the same care and measurement precision that would be possible if the provider and the patient were in the same room. The platform was designed to be Web-based so that the service could be offered at the same price regardless of long-distance telecommunication facility charges. The Web-based implementation allows enough bandwidth for a simultaneous video teleconference and a precision data acquisition mode even when the Web connection is a low-cost analog modem interface at both ends of the connection.