Ebook: Medicine Meets Virtual Reality 2001
Since 1992, the Medicine Meets Virtual Reality conference series has gathered physicians, computer scientists, and IT innovators to promote informatics technologies for use in healthcare. Its unique, multidisciplinary assemblage of expertise encourages novel interactions and the development of innovative tools for the medical environment. The January 2001 conference presents forefront research on tools for telemedicine, computer-assisted diagnosis and surgery, psychotherapy, and education. The proceedings describe applications used in clinical care, along with the technologies underlying those applications: simulation, visualization, imaging, haptics, and robotics.
The theme of the 2001 Medicine Meets Virtual Reality conference is “Outer Space, Inner Space, Virtual Space.” (How could we resist the cinematic reference?) These three different ideas, or domains, are linked by their use of interactive, computer-based, image-producing technology: Virtual Reality. And while they aren’t automatically associated with medicine, they significantly affect health care practice — now and certainly in the future.
Outer Space. While space programs seem to be experiencing a reality check after their initial glory, our species' fascination with the moon, the planets, and the stars is unflagging. We won't soon travel to Mars with today's jetliner nonchalance (including frequent flier miles and seating upgrades), but every step in that direction develops valuable tools for use now. Aerospace ambition nurtures simulation, visualization, robotics, telemedicine, and other technologies integral to MMVR.
Inner Space. The human body is also a “final frontier,” perhaps the most important one. We no longer want to see a surgeon with an obsidian blade, nor a barber's pole advertising trimmed hair and opened veins on the same premises. The efforts of MMVR researchers will someday result in tools that make today's operating rooms look primitive. Satisfied by the integration of microscopic, information-fed devices into their bodies, our descendants may think of us as well-intentioned but naive. A future child may be astonished to learn that doctors once operated on problems from the outside of the body instead of from within.
Virtual Space. Cultural commentators have noted that the image is usurping the supreme position of the word. Fortunately for MMVR participants, this image-based world isn't about marketing and brand recognition, but about creating new drawing boards on which to solve health problems and plan new extensions to human capability. At its best, this new image-world is an interactive domain where solutions can be envisioned and tested faster and more safely than in real life. It is a catalytic environment that uses data from wide-ranging disciplines to create something entirely new.
2001. Many scenes in Stanley Kubrick's 2001: A Space Odyssey seem silly in comparison to how electronic technology has evolved since 1968. (Rows of blinking lights are impressive, but what's on the monitor?) And Kubrick's monolith heralding revolutions in human behavior is dramatic, but not emblematic of how progress typically occurs. Progress comes not as a towering and inescapable revelation upon a boulder-strewn landscape, but as gradual understanding that, more likely than not, takes place at a paper-strewn desk.
We have borrowed Kubrick's infant as an icon of the future. While we may not know that future child who is baffled by the ancient healing methods of the year 2001, we hope this child lives in a world that has been improved by the ideas and knowledge shared here.
We present a system involving a computer-instrumented fluoroscope for 3D navigation and guidance using pre-operative diagnostic scans as a reference. The goal of the project is to devise a computer-assisted tool that will improve accuracy, reduce risk, minimize invasiveness, and shorten the time required to perform a variety of neurosurgical and orthopedic procedures of the spine. For this purpose we propose an apparatus that will track surgical tools and localize them with respect to the patient's 3D anatomy and pre-operative 3D diagnostic scans, using intraoperative fluoroscopy and embedded fiducials for in situ registration. Preliminary studies have found a fiducial registration error (FRE) of 1.41 mm and a target localization error (TLE) of 0.48 mm. The resulting system leverages equipment already commonly available in the operating room (OR), providing an important new functionality that is free of many current limitations, while keeping costs contained.
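For readers unfamiliar with the metric, FRE is the root-mean-square residual over the fiducials once the rigid registration has been solved. The minimal Python sketch below assumes the standard least-squares (SVD-based) solution; the abstract does not specify the authors' actual algorithm, so the function and variable names are illustrative only.

```python
# Hypothetical sketch: point-based rigid registration of embedded fiducials
# and the fiducial registration error (FRE), using the standard SVD (Kabsch)
# solution as an illustrative stand-in for the system's own method.
import numpy as np

def rigid_register(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                            # proper rotation (det=+1)
    t = dst_c - R @ src_c
    return R, t

def fre(src, dst, R, t):
    """Root-mean-square residual distance over the fiducials."""
    resid = dst - (src @ R.T + t)
    return np.sqrt((resid ** 2).sum(axis=1).mean())

# Usage: fiducial coordinates in the pre-operative CT vs. the fluoroscope frame.
ct = np.array([[0, 0, 0], [40, 0, 0], [0, 40, 0], [0, 0, 40.0]])
fluoro = ct @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]]).T + [5, 2, -3]
R, t = rigid_register(ct, fluoro)
print(f"FRE = {fre(ct, fluoro, R, t):.3f} mm")    # ~0 for noise-free data
```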
The localization of a seizure focus for resective surgery often requires invasive monitoring for precise localization of the target as well as of structures to avoid. We report on the use of intra-operative surgical navigation to precisely localize and co-register subdural electrodes to regions of known radiographic pathology. Additionally, the navigation system was used to develop intra-operative electrode maps. These maps were subsequently used in the sub-acute recording phase to assign electrographic pathology and function (e.g., speech) to specific cortical surface anatomy. This permitted more precise planning of surgery and better assessment of potential risk, based on functional as well as anatomical criteria.
The use of neuronavigation (NN) in neurosurgery has become ubiquitous. A growing number of neurosurgeons are utilizing NN for a wide variety of purposes, including optimizing the surgical approach (macrosurgery) and locating small areas of interest (microsurgery). The goal of our team is to apply rapid advances in hardware and software technology to the field of NN, challenging and ultimately updating current NN assumptions. To identify possible areas in which new technology may improve the surgical applications of NN, we have assessed the accuracy of neuronavigational measurements in the Radionics™ and BrainLab™ systems. Using a phantom skull, we measured how accurately these systems visualize a navigational probe's tip, taking a total of 2180 measurements. We found that, despite current NN tenets, error is maximal with the six-marker count and minimal in the spread-out marker setting; that is, placing fewer markers around the area of interest maximizes accuracy, and active tracking does not necessarily increase accuracy. Comparing the two systems, we also found that the accuracy of NN machines differs both overall and along different axes. As researchers continue to apply technological advances to the NN field, an increasing number of currently held tenets will be revised, making NN an even more useful tool in neurosurgery.
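A small sketch of the kind of per-axis analysis implied above, assuming ground-truth coordinates are available from the phantom; the study's actual data and software are not reproduced here, and the array names are illustrative.

```python
# Illustrative accuracy analysis: per-axis and overall RMS error of the
# displayed probe-tip position against known phantom-skull positions.
import numpy as np

def rms_error(displayed, true_positions):
    """Per-axis and overall RMS of (N, 3) position pairs, in mm."""
    err = displayed - true_positions
    per_axis = np.sqrt((err ** 2).mean(axis=0))      # x, y, z separately
    overall = np.sqrt((err ** 2).sum(axis=1).mean()) # 3D distance RMS
    return per_axis, overall
```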
Progress in the application of augmented reality to laparoscopic surgery has been limited by the difficulty of generating geometric information about the current patient in real time. Structured light techniques are well-known methods for generating range images using a camera and projector, but they typically fail when faced with biological specimens. We describe techniques and equipment that have shown promise for the acquisition of range images for use in a real-time augmented reality system for laparoscopic surgery.
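By way of illustration, and assuming a rectified camera-projector pair (the paper's actual equipment and decoding scheme are not described here), depth recovery from decoded correspondences reduces to simple triangulation; pixels that fail to decode, as often happens on dark or specular biological surfaces, are left unmeasured.

```python
# Hypothetical sketch of structured-light depth recovery under a rectified
# camera/projector geometry: a decoded projector column per camera pixel
# gives a disparity, and depth follows by triangulation, z = f * b / d.
import numpy as np

def depth_from_correspondence(cam_cols, proj_cols, focal_px, baseline_mm):
    """Range image in mm; NaN where pattern decoding failed."""
    disparity = cam_cols - proj_cols
    z = np.full(disparity.shape, np.nan)
    valid = disparity > 0                  # reject failed/implausible decodings
    z[valid] = focal_px * baseline_mm / disparity[valid]
    return z
```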
We present an integrated environment for the stereoscopic acquisition, off-line 3D elaboration, and visual presentation of biological hand actions. The system is used in neurophysiological experiments aimed at investigating which parameters of the external stimuli mirror neurons visually extract and match onto their movement-related activity.
The long-term objective of our project is to use motion capture technology to identify and characterize alterations in body motion associated with depression that have not previously been recognized or characterized. These motion phenomena will be studied to determine their utility in the nosology and subtyping of depression. Quantitatively, they may have a significant impact on research, education, and the clinical management of depression, and they may allow the creation of “virtual humans” that manifest depressive digital motion phenomena and can be used to train researchers, trainees, and clinicians.
PAMI's website (www.pami.org.ar) was created to publicize the institution's activities and its standards of medical care and administrative management, while at the same time offering an honest channel of interaction with the community.
In June 2000, the Telemedicine Center at the Brody School of Medicine, East Carolina University in Greenville, NC participated in a simulated disaster response in Pu'u Paa, Hawaii, a lava plain without running water, electricity, or human habitation. During the five-day exercise we evaluated the ability to establish telecommunications and the effectiveness of the infrastructure, services, and applications implemented for an operational global emergency response. Scalable technologies were configured and systematically tested to determine the ability to provide medical and health care in an austere environment. A medical communications matrix was constructed and used throughout the evaluation. Results show that telemedicine can make an important contribution to humanitarian relief efforts and medical support following disasters. Additional research is needed to build upon the lessons learned from participation in this exercise.
Real-time simulation of deformable objects using finite element models is a challenge in medical simulation. We present two efficient methods for simulating the real-time behavior of a dynamically deformable 3D object modeled by finite element equations. The first method is based on modal analysis, which utilizes the most significant vibration modes of the object to compute the deformation field in real time for applied forces. The second method uses the spectral Lanczos decomposition to obtain explicit solutions of the finite element equations that govern the dynamics of deformations. Both methods rely on modeling approximations, but generate solutions that are computationally faster than those obtained through direct numerical integration techniques. In both methods, the errors introduced through approximation were insignificant compared to the computational advantage gained in achieving real-time update rates.
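To make the first method concrete, here is a minimal Python sketch of the modal-analysis idea: precompute the lowest vibration modes of M x'' + K x = f, then integrate a handful of decoupled modal oscillators at interactive rates. The matrices, mode count, and damping model are assumptions for illustration, not the authors' exact formulation.

```python
# Illustrative modal analysis: reduce a finite element system to its most
# significant vibration modes and step them cheaply per frame.
import numpy as np
from scipy.linalg import eigh

def modal_basis(K, M, n_modes):
    """Lowest n_modes generalized eigenpairs of K phi = w^2 M phi."""
    w2, Phi = eigh(K, M)                     # ascending; Phi is M-orthonormal
    return np.sqrt(np.abs(w2[:n_modes])), Phi[:, :n_modes]

def modal_step(q, qdot, omega, Phi, f, dt, zeta=0.05):
    """One semi-implicit Euler step of the decoupled modal oscillators."""
    fq = Phi.T @ f                           # project applied nodal forces
    qddot = fq - (omega ** 2) * q - 2.0 * zeta * omega * qdot
    qdot = qdot + dt * qddot
    q = q + dt * qdot
    return q, qdot                           # nodal displacements: u = Phi @ q
```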
A three-step stereoscopic image-processing algorithm is proposed in order to improve image quality and depth perception of stereoscopic radiographs taken with C-arm equipment. The steps include illumination correction, geometry conversion and screen parallax adjustment. Flipping of stereoscopic radiographs is also discussed.
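A hedged sketch of the parallax-adjustment step, under the usual geometric model of stereoscopic display (the paper's exact correction is not reproduced here): shifting the half-images horizontally changes screen parallax, which in turn sets where a point appears relative to the screen plane.

```python
# Illustrative screen-parallax adjustment: horizontal shifting of the
# left/right images moves the zero-parallax plane, keeping perceived depth
# within the comfortable range of the display.
def adjust_parallax(p_pixels, shift_pixels):
    """New horizontal parallax after shifting the half-images."""
    return p_pixels + shift_pixels

def perceived_depth(p_pixels, pixel_mm, eye_sep_mm=65.0, view_dist_mm=600.0):
    """Depth behind (+) or in front of (-) the screen, standard geometry."""
    p_mm = p_pixels * pixel_mm
    return view_dist_mm * p_mm / (eye_sep_mm - p_mm)
```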
At the University of Washington, we have been developing a suturing simulator using novel finite element modeling techniques that allow real-time haptic feedback. The issues involved in measuring the validity of a suturing model have not been examined in a systematic way, and very few studies exist on the surgical factors that lead to good sutures. We have examined published data on these factors as well as previously studied metrics in suture training. This information has been combined with a review of the types of validity (e.g., face, construct, predictive, and concurrent) and reliability that must be considered in assessing any surgical simulator.
Active optical tracking in surgical guidance systems presents line-of-sight problems for the surgeon. We plan to use a new magnetic system, ‘Aurora’ from Northern Digital Inc. (Canada) and Mednetix AG (Switzerland), with intra-operative fluoroscopy to develop an integrated system for surgical guidance. Here we outline the modules developed for use with this system, including a novel registration method.
To provide data for the design of virtual environments and teleoperated systems for surgery, it is necessary to measure tissue properties under both in vivo and ex vivo conditions. The former provides information about tissue behavior in its physiological state, while the latter can provide better control over experimental conditions. We have developed devices to measure tissue properties under extension and indentation, as well as to record instrument-tissue interaction forces. We are creating a web database of data recorded from porcine abdominal tissues.
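As an illustration of how such recordings might be used downstream, the sketch below fits a commonly assumed exponential force-displacement law to a synthetic indentation curve; the functional form, parameters, and data are illustrative assumptions, not results from the porcine measurements.

```python
# Hypothetical analysis step: fit F = a * (exp(b * d) - 1), a form often
# used for soft tissue, to an indentation force-displacement recording.
import numpy as np
from scipy.optimize import curve_fit

def exp_law(d, a, b):
    return a * (np.exp(b * d) - 1.0)

depth = np.linspace(0.0, 8.0, 25)                  # indentation depth, mm
force = exp_law(depth, 0.05, 0.45)                 # synthetic "tissue" curve
force += np.random.default_rng(0).normal(0, 0.01, depth.size)  # sensor noise

(a, b), _ = curve_fit(exp_law, depth, force, p0=(0.1, 0.1))
print(f"fit: F = {a:.3f} * (exp({b:.3f} * d) - 1)")
```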
Animal dissection for the scientific examination of organ subsystems is a delicate procedure. Performing this procedure under the complex environment of microgravity presents additional challenges because of the limited training opportunities available that can recreate the altered gravity environment. Traditional crew training often occurs several months in advance of experimentation, provides limited realism, and involves complicated logistics. We have developed an interactive virtual environment that can simulate several common tasks performed during animal dissection. In this paper, we describe the imaging modality used to reconstruct the rat in virtual space, provide an overview of the simulation environment and briefly discuss some of the techniques used to manipulate the virtual rat.
Today, surgeons accept computer-assisted technologies as important tools to enhance the treatment of a patient. The positive impact and acceptance of computer-assisted technologies could be increased to a great extent if all methods and devices used for the diagnosis and treatment of a patient were better coordinated and more finely tuned. Often, computer-assisted treatments cannot be performed due to a lack of communication between hospital departments, unusable patient data, deficient interfaces, etc. Risks for the patient and potential errors within the treatment often go unrecognized, as up to now the safety of computer-integrated surgery has been addressed only at the level of individual products, devices, and security. We have developed a new approach to a safety architecture that includes safety aspects concerning patients, users, and the interdependencies and interactions of computer-assisted methods and apparatuses.
This paper introduces the Virtual Anatomy Lab software platform for coordinating on-line gross anatomy learning sessions over time.
It requires skill, effort, and time to visualize desired anatomic structures in three dimensions from radiological data. There have been many attempts at automating this process and making it less labor intensive. The technique we have developed is based on mutual information for automatic multi-modality image fusion (MIAMI Fuse, University of Michigan). The initial development of our technique has focused on the autocolorization of the liver, portal vein, and hepatic vein. A standard dataset in which these structures had been segmented and assigned colors was created from the full-color Visible Human Female (VHF) and then optimally fused to the fresh CT of the Visible Human Female. This semi-automatic segmentation and coloring of the CT dataset was subjectively evaluated to be reasonably accurate. The transformation could be viewed interactively on the ImmersaDesk, in an immersive Virtual Reality (VR) environment. This 3D segmentation and visualization method marks the first step toward a broader, standardized automatic structure-visualization method for radiological data. Such a method would permit segmentation of radiological data using canonical structure information and not just the data's intrinsic dynamic range.
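For orientation, the mutual-information objective that drives intensity-based fusion can be computed from a joint intensity histogram, as in the generic sketch below; this is the textbook formula, not MIAMI Fuse's implementation.

```python
# Mutual information between two spatially aligned images, estimated from
# their joint intensity histogram (in nats).
import numpy as np

def mutual_information(a, b, bins=64):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint intensity distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0) # marginals
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())
```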
Performing epidural injections is a complex task that demands a high level of skill and precision from the physician, since an improperly performed procedure can result in serious complications for the patient. The objective of our project is to create an epidural injection simulator for medical training and education that provides the user with the realistic feel encountered during an actual procedure. We have used a Phantom haptic interface by SensAble Technologies, which is capable of three-dimensional force feedback, to simulate interactions between the needle and bones or tissues. An additional degree of freedom through an actual syringe was incorporated to simulate the “loss of resistance” effect, commonly considered the most reliable method for identifying the epidural space during an injection procedure. The simulator also includes a new training feature called “Haptic Guidance” that allows the user to follow a previously recorded expert procedure and feel the encountered forces. Evaluations of the simulator by experienced professionals indicate that it has considerable potential to become a useful aid in medical training.
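A minimal sketch of how such a guidance force might be rendered, assuming a spring-damper pull toward the nearest sample of the recorded expert trajectory plus replay of the expert's recorded force; the gains and the blending are illustrative assumptions, not the simulator's actual control law.

```python
# Illustrative "haptic guidance" force: attract the needle tip toward the
# recorded expert path while replaying the forces the expert encountered.
import numpy as np

def guidance_force(tip, tip_vel, expert_path, expert_forces,
                   kp=200.0, kd=5.0):
    """expert_path: (N,3) recorded positions; expert_forces: (N,3) forces."""
    d = np.linalg.norm(expert_path - tip, axis=1)
    i = int(d.argmin())                           # closest recorded sample
    pull = kp * (expert_path[i] - tip) - kd * tip_vel
    return pull + expert_forces[i]                # guidance + replayed feel
```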
To support diagnosis and therapy, it is a fundamental aim of medical image processing to describe the morphological characteristics of pathological structures, or of image objects in general. Various authors have proposed quantitative methods of description such as bounding boxes [1], Fourier descriptors [2], or contour moments [3]. Unfortunately, these methods either fail to supply a complete and precise description of the object or operate only on two-dimensional images.
Applications include systems to classify lung nodules [4] or to support the diagnosis of brain tumors [5]. In this paper we present a method to analyze the morphology, or shape, of any three-dimensional object and describe it in a mathematically well-defined way. We show how the description can be used to perform statistical operations on morphologies. The method presented in this paper was developed to assist in the planning of craniofacial surgery. We analyze the shapes of a given set of skull CT datasets and use the mathematical description to statistically calculate the average shape of the skulls.
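Assuming point-wise corresponding landmark sets extracted from each skull (the paper's full 3D descriptor is richer than this), the averaging step can be sketched as follows; the normalization shown handles only translation and scale, omitting rotational alignment for brevity.

```python
# Minimal "average shape" sketch over corresponding landmark sets.
import numpy as np

def normalize(shape):
    """Center an (N,3) landmark set and scale it to unit RMS size."""
    c = shape - shape.mean(axis=0)
    return c / np.sqrt((c ** 2).sum(axis=1).mean())

def mean_shape(shapes):
    """Point-wise average of normalized, corresponding landmark sets."""
    return np.mean([normalize(s) for s in shapes], axis=0)
```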
The NASA Space Medicine program is now developing plans for more extensive use of high-fidelity medical simulation systems. Simulation is seen as a means to use the limited time available for astronaut medical training more effectively. Training systems should be adaptable for use in a variety of training environments, including classrooms or laboratories, space vehicle mockups, analog environments, and microgravity.
Modeling and simulation can also provide the space medicine development program with a mechanism for evaluating other medical technologies under operationally realistic conditions. Systems and procedures need preflight verification through ground-based testing. Traditionally, component testing has been performed, but practical means for “human in the loop” verification of patient care systems have been lacking. Medical modeling and simulation technology offers a potential means to accomplish such validation work.
Initial considerations in the development of functional requirements and design standards for simulation systems for space medicine are discussed.
This work introduces, for the first time, a meshless modeling technique, the method of finite spheres, for physically based, real-time rendering of soft tissues in medical simulations. The technique is conceptually similar to traditional finite element techniques. However, while finite element techniques require a slow mesh generation process, this new technique does not use a mesh, giving it significant potential for the multimodal medical simulations of the future. Several examples are presented showing the effectiveness of the scheme.
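As a conceptual sketch only: the method builds its approximation from compactly supported functions centered on scattered nodes instead of mesh elements. The snippet below shows just the mesh-free interpolation ingredient, Shepard partition-of-unity weights on spherical supports; the full method also requires numerical integration and a Galerkin solve, which are omitted here.

```python
# Shepard partition-of-unity weights on spherical supports, the mesh-free
# building block; kernel choice (Wendland C2) is an illustrative assumption.
import numpy as np

def sphere_weights(x, nodes, radii):
    """Weights at point x from nodes with spherical supports of given radii."""
    d = np.linalg.norm(nodes - x, axis=1) / radii
    w = np.where(d < 1.0, (1.0 - d) ** 4 * (4.0 * d + 1.0), 0.0)
    s = w.sum()
    return w / s if s > 0 else w      # sums to 1 wherever spheres cover x
```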
Due to increases in network speed and bandwidth, distributed exploration of medical data in immersive Virtual Reality (VR) environments is becoming increasingly feasible. The volumetric display of radiological data in such environments presents a unique set of challenges. The sheer size and complexity of the datasets involved not only make them difficult to transmit to remote sites; these datasets also require extensive user interaction in order to make them understandable to the investigator and manageable to the rendering hardware. A sophisticated VR user interface is required in order for the clinician to focus on the aspects of the data that will provide educational and/or diagnostic insight.
We will describe a software system of data acquisition, data display, Tele-Immersion, and data manipulation that supports interactive, collaborative investigation of large radiological datasets. The hardware required in this strategy is still at the high-end of the graphics workstation market. Future software ports to Linux and NT, along with the rapid development of PC graphics cards, open the possibility for later work with Linux or NT PCs and PC clusters.
Realistic laparoscopic surgical simulators will require real-time graphic imaging and tactile feedback. Our research objective is to develop a cost-effective haptic workstation for the simulation of laparoscopic procedures for training and treatment planning. The physical station consists of a custom-built frame into which laparoscopic trocars and surgical tools may be attached or inserted; these are continuously adjustable to various positions and orientations to simulate multiple laparoscopic surgical approaches. Instruments inserted through the trocars are attached to the end effectors of two haptic devices and interfaced to a high-speed PC with fast graphics capability. The haptic device transduces the 3D motion of the two manually operated surgical instruments into slave maneuvers in virtual space, and the slave instrument tips probe the simulated organ. Simulations currently in progress include: 1) surface-only renderings, deformation, and haptic interactions with elements in the gallbladder surgical field; 2) voxel-based simulations of the bulk manipulation of tissue; and 3) laparoscopic herniorrhaphy. This system provides force feed-forward from the grasped tools to the contacted tissue in virtual space, with deformation of the tissue by the virtual probe, and force feedback from the deformed tissue to the operator's hands.
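A simplified sketch of the force loop described above, assuming a penalty (spring-damper) contact model evaluated at haptic rates; the stiffness and damping values are placeholders rather than the workstation's tuned parameters.

```python
# Illustrative penalty contact force from the deformed tissue surface back
# to the instrument tip: zero outside contact, spring-damper along the
# outward surface normal once the tip penetrates.
import numpy as np

def contact_force(tool_tip, surface_point, surface_normal, tool_vel,
                  k=500.0, b=2.0):
    """All vectors are length-3 arrays; surface_normal points outward."""
    depth = np.dot(surface_point - tool_tip, surface_normal)
    if depth <= 0.0:                      # tip above the tissue surface
        return np.zeros(3)
    f = k * depth * surface_normal        # push the tool out of the tissue
    f -= b * np.dot(tool_vel, surface_normal) * surface_normal
    return f
```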