Ebook: Medicine Meets Virtual Reality
The main topic of this work is Health Care in the Information Age: Future Tools for Transforming Medicine. Medicine Meets Virtual Reality is a publication in which medicine and interactive technology interface to create the future of health care, focusing on virtual reality and its enabling technologies. It is the annual publication of a multi-specialty, interdisciplinary symposium presenting advances in computing and communications technologies that pave the way for medicine to navigate the information superhighway. Readers will learn how leading-edge technology will affect the future of medical and surgical practice by improving access, quality, and continuity of care while reducing cost. All topics are addressed from the perspective of the end user as well as the interface designer. This work is designed for physicians, surgeons, health care providers, researchers, developers, educators, and investors interested in the advancement of minimally invasive, cost-effective health care practices.
Plowing through this year’s MMVR contributions I am struck by the following:
• the depth of their creativity and willingness to explore uncharted waters
• the breadth of our diversity, both geopolitically and conceptually
• the extent to which we are both producers and consumers of these innovations
Each year we leapfrog ourselves toward the horizon. In four years our review process has moved from titles to abstracts to printed manuscripts to HTML documents. Our presentations, posters, and exhibits have increased in both quantity and quality. And with each year our visions have moved closer to everyday practice.
As is its tradition, the meeting continues to bring together many of the best from medicine, science, and engineering. You show me yours and I’ll show you mine: multidisciplinary projects, heroic solo flights—together we bootstrap ourselves into the future.
Suzanne J. Weghorst
Human Interface Technology Lab
Seattle, WA
If you bought and read the MMVR3 proceedings, the following will make sense to you as a review of the fourth year in which physicians were exposed to the tools of high technology with which to practice medicine.
1995: This year was dominated by the bankruptcy of American science, new health care profit centers called HMOs, bank mergers, the Internet, and my being accused by the New York Times of running a research group for treating erection problems. No matter that I denied any such thing or insisted that I had done great research in other areas, my telephone rang off the hook in the rush to speak to the new guru of microsofts who had found a virtual cure. So, has reporting science overtaken the art of doing science? Fortunately the art of doing science is stronger still. Strong enough even to rebuild itself the American way: When all else fails, roll up your sleeves and start diggin’. The best place to dig right now is the mother of all virtual environments, the Internet. Suddenly, there are new ways of communicating illness, receiving diagnosis, and even receiving treatment. As Medicine Meets Virtual Reality for the fourth time in San Diego, we must face the new challenge to leave the gadget tower and interact.
¡Buena Vista! Schöne Aussicht! Belle Vue!
Hans B. Sieburg
World Information Networks Corporation
San Diego, CA
Health care in the information age presents exciting opportunities for new technologies and perplexing questions for those who develop them. Will they work? Do they make a difference?
The conference organizers thank all those who make the quantum leap—from the way it has been to create the way it will be—and take us with them. There would be no conference without the individuals who share their brilliance and their purpose through their work, providing us with the tools.
This is the possibility and the challenge: the transformation of medicine through communication.
Karen S. Morgan
Aligned Management Associates, Inc.
San Diego, CA
Medicine Meets Virtual Reality: 5 will be held in San Diego, California, from January 22-25, 1997. For information, contact Aligned Management Associates, Inc., P.O. Box 23220, San Diego, CA 92193 (telephone: 619-751-8841; fax: 619-751-8842; email: MMVR@amainc.com).
Our experience with a very low-end interactive image-directed system (IIDS) for neurosurgery is presented. The system was developed by the author and consists of a personal desktop computer and a magnetic field digitizer. This low-cost solution was pursued as an alternative to available commercial devices, which were expensive and not readily modifiable for novel ideas and new applications targeted at epilepsy surgery. The rationale and a description of the system were presented last year at Medicine Meets Virtual Reality III. Included in that detailed report were the fundamental mathematics forming the basis of the transformation between the surgical and digital data spaces. Since then the system has been used in an additional 20 cases, now totaling 40 in all. Its advantages and shortcomings are described. The theoretical advantages of magnetic field technology over other localization methods are reviewed. Also, our experience with alternative low-cost off-the-shelf interfacing devices and other related modifications is described. We have accumulated clinical data suggesting that craniotomy sizes have been reduced, that electrode placement has been improved, and that interactive image-directed techniques offer advantages over other common intra-operative localization modalities such as ultrasound. Our conclusion is that interactive image-directed techniques improve neurosurgery and that inexpensive enabling technology is already available, providing the technological substrate for low-cost devices that use virtual reality notions in surgery and medicine. This particular technology offers advantages over traditional surgical techniques, demonstrating the attractiveness of rudimentary virtual reality medical applications.
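The full mathematics appeared in the MMVR III report and are not reproduced in this abstract. As a hedged illustration only, one standard way to compute the rigid map between a surgical (digitizer) space and an image space from paired fiducial points is the SVD-based least-squares fit (the Arun/Horn method); the function name and NumPy usage below are assumptions, not the author's implementation.

```python
import numpy as np

def rigid_registration(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ R @ src + t,
    computed from paired fiducial points (Arun/Horn SVD method).
    src, dst: (N, 3) arrays of corresponding point coordinates."""
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# A digitizer reading p_surgical then maps into image space as:
#   p_image = R @ p_surgical + t
```

With four or more non-coplanar fiducials visible in both spaces, this single fit is enough to transform every subsequent probe reading into the image data set.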
Image-guided therapies, such as new endovascular procedures for treating brain aneurysms, are now in clinical use. To plan these procedures, physicians currently use angiography to view projectional images of anatomy and blood flow. There are currently no available tools for visualizing the details of complex blood flow or predicting the effects of specific treatments. To address this problem, we have created a virtual environment for the visualization of blood flow and the simulated effects of therapy in brain aneurysms. The “Virtual Aneurysm” is composed using a combination of image processing, flow simulation, scientific visualization, and virtual reality techniques.
Our approach is to begin with angiographic sequences of clinical aneurysms. Image processing, reconstruction, and flow quantitation techniques are used to extract three-dimensional geometry and bulk blood flow data. After building the geometric model (including any proposed embolic material insertions), these data are used as input to a computational fluid dynamics program. Using the geometry, arterial inflow, blood properties, and a physical flow model (Navier-Stokes), transient velocity and pressure fields can be computed throughout the aneurysm over one heartbeat. Dynamic flow results are then organized in an object-oriented database for use in the interactive visualization system. A virtual environment is used to allow a physician to move around and into the aneurysm. Virtual tools are directly manipulated to explore the flow and gain insight into the transient pressure, velocity, and stress fields. The effects of a proposed interventional procedure on aneurysm wall stress can be viewed. The end result should be a better understanding of aneurysm anatomy, hemodynamics, and response to proposed treatment. Furthermore, we hope that greater knowledge of aneurysm hemodynamics will lead to a better understanding of aneurysm etiology and pathology.
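The "Virtual Aneurysm" solver itself is a full 3-D transient Navier-Stokes computation, far beyond an abstract. As a toy illustration only of the idea of marching a velocity field forward in time from a physical flow model, the sketch below integrates just the viscous term for startup flow between two parallel plates; every number (viscosity, driving gradient, channel height) is an illustrative assumption.

```python
import numpy as np

# Toy 1-D stand-in for the transient flow computation described above:
# explicit time-stepping of  du/dt = nu * d2u/dy2 + G  between no-slip
# walls. (The real system solves full 3-D Navier-Stokes over a heartbeat.)
nu = 3.3e-6            # kinematic viscosity of blood, m^2/s (approximate)
G  = 1.0               # pressure gradient / density, m/s^2 (illustrative)
H  = 4e-3              # channel height, m (illustrative)
n  = 41
y  = np.linspace(0.0, H, n)
dy = y[1] - y[0]
dt = 0.4 * dy**2 / nu  # below the explicit stability limit dy^2 / (2 nu)
u  = np.zeros(n)       # fluid starts at rest; u[0] = u[-1] = 0 (no slip)

for _ in range(5000):  # march forward in time
    lap = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dy**2
    u[1:-1] += dt * (nu * lap + G)

# the profile relaxes toward the steady parabolic (Poiseuille) solution
u_steady = G / (2.0 * nu) * y * (H - y)
```

The stability restriction on `dt` is exactly why production CFD codes use implicit schemes and unstructured 3-D meshes; the structure of the loop, however, mirrors the transient computation the abstract describes.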
This paper discusses recent progress by the Three-dimensional Ultrasound Imaging Group at The University of California, San Diego in developing a cost-effective 3D ultrasound system integrating clinical scanners and graphics workstations. We discuss the design features of an interactive system for acquiring, analyzing and displaying volume sonographic patient data incorporating stereoscopic viewing and initial clinical experience working with volume data in an interactive environment. We have developed an intuitive, easy-to-use interface with a rapid learning curve that facilitates physician operation of a system incorporating an interactive volume renderer permitting optimization of viewing orientation and data presentation. Target organ visualization is further enhanced using a 3D electronic scalpel we have developed to interactively extract tissues or organs of interest from the rest of the volume scanned. Incorporation of interactive stereo viewing with all studies has proven essential to improve user comprehension of complex anatomic structures.
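The abstract does not detail how the 3D electronic scalpel is implemented; the real tool is interactive. As a loose sketch under that caveat (the function name and the spherical cut shape are assumptions), extracting an organ of interest from a scanned volume can be reduced to masking voxels:

```python
import numpy as np

def scalpel_extract(volume, center, radius):
    """Hypothetical 'electronic scalpel': return a copy of `volume`
    with every voxel outside a sphere of the given center and radius
    set to zero, so only the region of interest reaches the renderer."""
    z, y, x = np.indices(volume.shape)
    cz, cy, cx = center
    inside = (z - cz) ** 2 + (y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2
    out = np.zeros_like(volume)
    out[inside] = volume[inside]
    return out
```

In an interactive system the mask would be carved by the user with arbitrary shapes rather than a fixed sphere, but the principle of passing only the masked subvolume to the volume renderer is the same.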
Researchers from the Georgia Institute of Technology and the Medical College of Georgia (GIT/MCG) have developed an interactive computer simulation of Endoscopic Retrograde Cholangio-Pancreatography (ERCP). ERCP is a minimally invasive technique for evaluating and treating pathologic conditions of the biliary and pancreatic ducts. While ERCP provides the patient with substantial advantages over traditional methods, ERCP requires advanced skills and extensive experience to minimize the risk of complications. Computer simulation offers many advantages for efficiently and safely training physicians in ERCP. The GIT/MCG proof of concept simulation provides realistic training with both visual and force feedback while an endoscopist practices the ERCP procedure.
Tumors of the skull base are generally considered among the more difficult head and neck pathological entities to treat surgically; some surgeons, in fact, consider lesions in this area inoperable. The most appropriate and safest surgical approach to lesions of the anterior and lateral skull base can be devised only with accurate and precise pre-operative assessment. The literature demonstrates the constant evolution of, and search for, more efficient, less invasive, and safer surgical approaches to this region. With the development of a more exact three-dimensional, interactive anatomical “road map” for each patient’s disease and anatomy, the skull base surgeon can not only achieve a more accurate preoperative assessment, leading to a less invasive and less morbid approach, but can also continue to develop and refine new approaches without fear of actual morbidity and mortality. An interdisciplinary team approach, the advent and continued development of faster high-performance computers, and the development of new and innovative rendering algorithms can lead to surgical simulation. A prototype of an interactive system has been developed. The system will be iteratively modified through a stepwise evaluation of its clinical usefulness by continually reassessing the system in clinical trials. The current state of the system and its potential benefits are presented.
The ease of use and effectiveness of ultrasound image-guided surgical procedures might be improved by observing single or multiple echographic views properly positioned “within” the patient. Several computer hardware and visualization technologies have to be extended and combined in order to build a system in which such visualizations are available in real time, e.g., during the ultrasound imaging of the patient.
A system we have been developing for several years is now beginning to be used for experiments in needle biopsies on breast models (“breast phantoms”) of a commercial dress mannequin. Although the system is still under development, it has already shown some promise. The operator of the ultrasound study can observe in stereo the ultrasound images overlaid within the live image of the patient/mannequin. By wearing head-gear that presents a miniature video display to each eye, the physician can now see both the outside of the patient/mannequin and the inside, with appropriate continuation of a needle image from the outside live video to the inside image from the ultrasound scanner.
Major problems that need much improvement include: 1) combining in real time the incoming ultrasound data with older, higher-quality imaging data, such as previously acquired CT or MRI data of the same region of the patient, to yield more useful combined images; 2) improving tracking calibration, especially our method of dealing with tracking errors due to system latency; 3) improving the composition operation between the live camera images and the synthetic image volumes; 4) improving the volume rendering of the region of interest from multiple ultrasound images acquired by free-hand scanning.
Recent improvements to the UNC system include the development of a more compact, lighter-weight head-mounted display that minimizes parallax distortions, and a new method to track the movement of the needle in the synthetic environment.
Traditionally, users of HMDs with video cameras mounted on top of the helmet report that the parallax distortion, resulting from the difference between the location of the video cameras and the user’s eyes, significantly impairs hand-eye coordination in tasks performed while wearing the HMD. Our new prototype HMD uses folded optics to match the locations of the video cameras to the viewer’s right and left eye locations. We have also recently implemented a method for tracking the progress of the needle through the synthetic environment by tracking the base of the needle with the same type of high-precision, hand-held mechanical tracker that tracks the ultrasound probe. We display the projected path of the needle tip in the HMD in the hope that this will allow the physician to guide the needle more accurately to the target tissue area.
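Since only the needle base is tracked, the displayed tip and its projected path follow from geometry alone, under a rigid straight-needle assumption. A minimal sketch (function and parameter names are illustrative, not the UNC implementation):

```python
import numpy as np

def needle_tip_and_path(base_pos, direction, length, n_samples=20):
    """Given the tracked base position of a rigid straight needle and
    its (not necessarily unit) direction vector, return the estimated
    tip position and sample points along the projected path beyond
    the tip, for display in the head-mounted display."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)                      # unit direction
    tip = np.asarray(base_pos, dtype=float) + length * d
    s = np.linspace(0.0, length, n_samples)[:, None]
    path = tip + s * d                             # points ahead of the tip
    return tip, path
```

The straight-needle assumption is the weak point in practice: thin biopsy needles flex in tissue, which is one reason the projected path is displayed as guidance rather than treated as ground truth.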
In conclusion, image-guided surgical procedures may become more popular with the increasing sophistication and availability of the necessary technologies. Likely future targets are deeper nodules within the abdomen, smaller targets such as foreign-body fragments in accident and trauma emergencies, and more effective guidance for complex catheterization procedures. With some luck, a future generation of physicians may consider the current method of observing patient imagery out of context on a cart-mounted display just as quaint and amusing as we now consider individuals communicating in the last century by telegraph instead of by telephone and video teleconferencing.
Minimally invasive techniques using endoscopes for image-guided therapy are common in surgery and internal medicine. Interventional procedures in the past were performed with fluoroscopic, sonographic, or CT guidance, but now MRI-guided interventional procedures are being developed. Combining these technologies will improve surgical access and reduce complications. Today, tomographic 2D and 3D imaging (CT, EBT, MRI) can be used for precise and transparent guidance of endoscopes and surgical instruments inside the body in the field of minimally invasive therapy. 3D imaging is helpful for anatomical, but not for morphological, understanding. It has to be used interactively with actual cross-sectional imaging for instrument guidance. This will offer safe and effective access into the body, especially in high-risk areas, and lead to the new field of “Surgical Tomography”.
Micro-invasive MRI- and CT-scopic treatments for common diseases such as back pain, disk prolapse, tumors, or arterial occlusive disease, using endoscopes, lasers, or micro-mechanical instruments, as well as drug instillations such as ethanol-ablation techniques, are routinely performed in our interventional center. Lasers are used for tumor ablation or hyperthermia and, in combination with endoscopes, for diskectomies. Micro percutaneous ethanol instillation (micro-PEI) is performed for palliative treatment of tumor pain and cancer therapy, and also for therapy of low back pain (facet-joint denervation) under local anesthesia. Sympathectomies for the pain of herpes zoster or osteoporosis and for arterial occlusive disease are also realized with ethanol. Cortisone is instilled with high precision for chronic back pain or neuralgias.
Recent trends in healthcare informatics and telemedicine indicate that systems are being developed with a primary focus on technology and business, not on the process of medicine itself. Distributed Medical Intelligence promotes the development of an integrative medical communication system which addresses the process of providing expert medical knowledge to the point of need.
SRI International is currently developing a prototype remote telepresence surgery system, for the Advanced Research Projects Agency (ARPA), that will bring life-saving surgical care to wounded soldiers in the zone of combat. Remote surgery also has potentially important applications in civilian medicine. In addition, telepresence will find wide medical use in local surgery, in endoscopic, laparoscopic, and microsurgery applications. Key elements of the telepresence technology now being developed for ARPA, including the telepresence surgeon’s workstation (TSW) and associated servo control systems, will have direct application to these areas of minimally invasive surgery. The TSW technology will also find use in surgical training, where it will provide an immersive visual and haptic interface for interaction with computer-based anatomical models. In this paper, we discuss our ongoing development of the MEDFAST telesurgery system, focusing on the TSW man-machine interface and its associated servo control electronics.
Endoscopy, as we know it today, is far from perfect. In the attempt to make interventions as minimally invasive as possible, the ergonomics of the procedure for the surgeon are often not dealt with at all. In an attempt to improve the surgeon-patient interface during endoscopic procedures, a new concept is presented that allows the surgeon a much more natural visual interaction with the inside of the patient. The surgeon is able to look around inside the cavity with a stereo endoscope mounted on a robot arm. This robot arm is directed by the motion of the surgeon’s head.
The VIRIM system, which stands for Virtual Reality In Medicine, aims at realising computer-aided surgery (CAS) using methods of virtual reality (VR). In contrast to other approaches, the visualisation/segmentation system VIRIM implements both real-time volume ray-casting as the visualisation method and real-time grey-value segmentation. VIRIM is the first real-time volume ray-casting system. In contrast to most other systems, no special data preparation such as surface extraction is necessary. Different volume ray-casting algorithms can be implemented in VIRIM, and arbitrary opacity levels are possible for the structures that are visualised.
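VIRIM performs this in dedicated hardware; the compositing arithmetic itself, however, can be sketched in a few lines. The following is a simplified stand-in, not VIRIM's algorithm: one ray is marched through a scalar volume with nearest-neighbour sampling, and a crude grey-value window plays the role of the segmentation-driven opacity assignment (all names, the fixed opacity of 0.2, and the window form are assumptions).

```python
import numpy as np

def cast_ray(volume, origin, direction, n_steps, step, window):
    """Front-to-back composition of grey values along a single ray.
    Voxels whose grey value lies inside window = (lo, hi) receive a
    fixed opacity; everything else is fully transparent."""
    lo, hi = window
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    p = np.asarray(origin, dtype=float)
    color, alpha = 0.0, 0.0
    for _ in range(n_steps):
        i, j, k = np.round(p).astype(int)        # nearest-neighbour sample
        if (0 <= i < volume.shape[0] and 0 <= j < volume.shape[1]
                and 0 <= k < volume.shape[2]):
            g = volume[i, j, k]
            a = 0.2 if lo <= g <= hi else 0.0    # grey-value "segmentation"
            color += (1.0 - alpha) * a * g       # front-to-back accumulation
            alpha += (1.0 - alpha) * a
            if alpha > 0.99:                     # early ray termination
                break
        p = p + step * d
    return color
```

Repeating this per pixel is exactly the workload that makes software ray-casting slow and motivates VIRIM's hardware approach; note also how the opacity assignment, being a per-sample function of the grey value, needs no precomputed surfaces.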
The system will be used in operation planning and control. Preoperatively acquired CT (or MRI) data of the patient’s head are used for three-dimensional navigation through the data set. First, the path of the endoscope to the lesion to be operated on is planned; second, during surgery, the positions of the endoscopic tools are compared with the preoperatively planned path. The latter method of navigation is necessary since orientation during endoscopic operations is both difficult and critical for successful interventions.
For both operation planning and control, the surgeon uses a tracking unit that determines the position and orientation of the endoscope and the patient. This information is used to visualise the local operation environment relative to the endoscope tip.
The VR environment will be built up in 1995 and will first serve for evaluation of the new approach to head surgery in the Clinic for Maxillofacial and Craniofacial Surgery at the University of Heidelberg. After this introductory phase, regular usage is planned.
Despite the large interest in simulators of minimally invasive surgery, it is still unclear to what extent simulators can achieve the task of training medical students in surgical procedures. The answer to that question is certainly linked to the realism of displays and force-feedback systems and to the level of interaction provided by the computer system.
In this paper, we describe the virtual environment for anatomical and surgical training on the liver, currently under construction at INRIA. We specifically address the problems of geometric representation and physical modeling and their impact on the two aforementioned problems: realism and real-time interaction.
BACKGROUND: Although flexible endoscopy is only 25 years old, a new technology may soon be used to provide the same view of internal organs without inserting an instrument. This is virtual endoscopy.
METHODS: By acquiring patient-specific high-resolution digital images with a helical CT scan or MRI, individual organs can be graphically isolated or “segmented” into fully interactive 3-D reconstructions on a computer monitor. Applying sophisticated flight-tracking programs developed for military pilots, the organs can be “flown through”, giving a view identical to endoscopy.
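The abstract leaves the segmentation step unspecified. One elementary way to isolate a single organ from a CT volume, offered here purely as an assumed illustration, is region growing: starting from a seed voxel, collect all connected voxels whose intensity falls in an organ-specific window.

```python
import numpy as np
from collections import deque

def segment_organ(volume, seed, lo, hi):
    """Grey-value region growing: starting from `seed` (z, y, x),
    collect every 6-connected voxel whose value lies in [lo, hi].
    Returns a boolean mask of the isolated region."""
    mask = np.zeros(volume.shape, dtype=bool)
    if not (lo <= volume[seed] <= hi):
        return mask                          # seed outside the window
    q = deque([seed])
    mask[seed] = True
    nbrs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
            (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while q:                                 # breadth-first flood fill
        z, y, x = q.popleft()
        for dz, dy, dx in nbrs:
            n = (z + dz, y + dy, x + dx)
            if (all(0 <= n[i] < volume.shape[i] for i in range(3))
                    and not mask[n] and lo <= volume[n] <= hi):
                mask[n] = True
                q.append(n)
    return mask
```

The resulting mask defines the lumen or organ surface through which a virtual camera can then be flown.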
CONCLUSIONS: In the future we will fly through data instead of inserting endoscopic instruments for gastrointestinal endoscopic diagnosis.
We present a virtual echographic system. Because this examination is particularly difficult, developing a simulator is very useful for giving students a common database of pathological samples on which they can experiment with image acquisition and evaluate their understanding of clinical cases. We have applied our method to the simulation of thigh ultrasonographic examination for thrombosis diagnosis. A preliminary system, focusing on image generation, has been developed. Virtual echographic slices are generated using a particular interpolation technique and a deformation model of significant structures. The resulting images have a visual quality similar to that of real ones.
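The abstract does not disclose its particular interpolation technique. As an assumed baseline, a virtual slice can be produced by trilinearly resampling a reconstructed 3-D volume along the simulated probe plane; all names and the clamping behaviour below are illustrative choices, not the authors' method.

```python
import numpy as np

def trilinear(volume, pts):
    """Trilinear interpolation of `volume` at fractional (z, y, x)
    coordinates pts of shape (N, 3); out-of-grid points are clamped."""
    maxi = np.array(volume.shape) - 1
    p = np.clip(pts, 0.0, maxi - 1e-9)
    i0 = np.floor(p).astype(int)
    f = p - i0                                   # fractional offsets
    out = np.zeros(len(p))
    for dz, dy, dx in np.ndindex(2, 2, 2):       # weight the 8 corners
        w = (np.abs(1 - dz - f[:, 0]) * np.abs(1 - dy - f[:, 1])
             * np.abs(1 - dx - f[:, 2]))
        out += w * volume[i0[:, 0] + dz, i0[:, 1] + dy, i0[:, 2] + dx]
    return out

def virtual_slice(volume, origin, u, v, rows, cols):
    """Resample a planar virtual echographic slice: `origin` is one
    corner of the plane, `u` and `v` are the per-pixel step vectors."""
    r, c = np.mgrid[0:rows, 0:cols]
    pts = (np.asarray(origin, dtype=float)
           + r[..., None] * np.asarray(u, dtype=float)
           + c[..., None] * np.asarray(v, dtype=float)).reshape(-1, 3)
    return trilinear(volume, pts).reshape(rows, cols)
```

Because `u` and `v` can point in any direction, the same routine serves arbitrary probe positions and orientations; the deformation model mentioned in the abstract would then warp the sampled coordinates before interpolation.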
The University of Wisconsin—La Crosse is developing a software environment which will allow undergraduates in anatomy and physiology to directly manipulate the Visible Human Data Set. The software environment provides students with a “personal digital cadaver” for study. The system incorporates a volume rendering daemon for imaging the digital cadaver. Central to the system is the concept of an anatomical notebook in which students record and annotate the studies.
The University of California, San Diego, School of Medicine’s Learning Resources Center is developing a prototype next-generation application for anatomy education which combines virtual reality and multimedia curricular resources. The anatomy lesson utilizes polygon-based 3-D models of the hepatobiliary system created by BioGraphics Inc. of Fort Collins, Colorado which were derived from the National Library of Medicine’s Visible Human Project™ Dataset. This article describes the needs assessment, learning objectives, and preliminary design of the current prototype. The multivariate design, the development strategy for implementing functionalities, and the engineering of critical software interface components are also outlined.