
Ebook: Medicine Meets Virtual Reality 13

"Magical" describes conditions that lie outside our understanding of cause and effect. What cannot be attributed to human or natural forces is explained as magic: super-human, super-natural. Even in modern societies, magic-based explanations remain powerful because, given the complexity of the universe, there are so many opportunities to use them.

The history of medicine is defined by progress in understanding the human body, from magical explanations to measurable results. To continue medical progress, physicians and scientists must openly question traditional models. Valid inquiry demands a willingness to consider all possible solutions without prejudice. Medical politics should not perpetuate unproven assumptions nor curtail reasoned experimentation, unbiased measurement and well-informed analysis.

For thirteen years, Medicine Meets Virtual Reality has been an incubator for technologies that create new medical understanding via the simulation, visualization and extension of reality. Researchers create imaginary patients because they offer a more reliable and controllable experience to the novice surgeon. With imaging tools, reality is purposefully distorted to reveal to the clinician what the eye alone cannot see. Robotics and intelligence networks allow the healer's sight, hearing, touch and judgment to be extended across distance, as if by magic. The moments when scientific truth is suddenly revealed after lengthy observation, experimentation and measurement are the real magic. These moments are not miraculous, however: they are human ingenuity in progress, and they are documented here in this book.
Most current surgical simulators rely on preset anatomical virtual environments (VE). The functionality of a simulator is typically fixed to specific anatomy-based tasks. This rigid design principle makes it difficult to reuse an existing simulator for different surgeries. It also makes it difficult to simulate procedures for specific patients, since their anatomical features or anomalies cannot easily be reproduced in the VE.
In this paper, we demonstrate the reusability of a modular skill-based simulator, LapSkills, which allows dynamic generation of surgery-specific simulations. Task and instrument modules are easily reused from LapSkills, and the three-dimensional VE can be replaced with other anatomical models. We build a nephrectomy simulation by reusing the simulated vessels and the clipping and cutting task modules from LapSkills. The VE of the kidney is generated with our anatomical model generation tools and then inserted into the simulation, while preserving the established tasks and evaluation metrics. An important benefit of the resulting surgery- and patient-specific simulations is that reused components remain validated. We plan to use this faster development process to generate a simulation library containing a wide variety of laparoscopic surgical simulations. Incorporating the simulations into surgical training programs will help collect the data needed to validate them.
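LapSkills' internal interfaces are not given in this abstract; the following sketch, using hypothetical class names rather than the actual LapSkills API, merely illustrates the modular composition described, in which validated task modules are reused while the anatomical VE is swapped out:

```python
# Hypothetical sketch of a modular, skill-based simulator design.
# Names (Simulation, TaskModule, AnatomyModel) are illustrative only.

class TaskModule:
    """A reusable surgical task (e.g., clipping, cutting) with its own metrics."""
    def __init__(self, name, metrics):
        self.name = name
        self.metrics = metrics  # validated evaluation metrics travel with the task

class AnatomyModel:
    """A swappable 3D virtual environment (e.g., a patient-specific kidney)."""
    def __init__(self, mesh_file):
        self.mesh_file = mesh_file

class Simulation:
    """Composes reusable task modules with a replaceable anatomical VE."""
    def __init__(self, anatomy):
        self.anatomy = anatomy
        self.tasks = []

    def add_task(self, task):
        self.tasks.append(task)

# Reuse validated clipping/cutting tasks, swap in a patient-specific kidney:
nephrectomy = Simulation(AnatomyModel("patient_kidney.obj"))
nephrectomy.add_task(TaskModule("clip_vessel", metrics=["time", "clip_placement"]))
nephrectomy.add_task(TaskModule("cut_vessel", metrics=["time", "cut_accuracy"]))
```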
Limited sense of touch and vision are among the difficulties encountered in performing laparoscopic procedures. Haptic simulators can help minimize these difficulties; however, the simulators must be validated prior to actual use, and their effectiveness as training tools needs to be measured in terms of improvement in surgical skills. We have developed LapSkills, a haptic skill-based laparoscopic simulator that aims to provide a quantitative measure of the surgeon's skill level and to help improve efficiency and precision. Explicitly defined performance metrics for several surgical skills are presented in this paper. These metrics allow performance data to be collected to quantify improvement within the same skill over time. After statistically significant performance data is collected for expert and novice surgeons, these metrics can be used not only to validate LapSkills but also to generate a performance scale for measuring laparoscopic skills.
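The abstract does not enumerate the metrics themselves. As a hedged example only, two measures commonly used to quantify laparoscopic skill, instrument path length and completion time, can be computed from a sampled instrument-tip trajectory:

```python
import numpy as np

def path_length(tip_positions):
    """Total distance travelled by the instrument tip.

    tip_positions: (N, 3) array of sampled 3D tip coordinates.
    Shorter paths for the same task generally indicate greater economy of motion.
    """
    deltas = np.diff(tip_positions, axis=0)
    return float(np.linalg.norm(deltas, axis=1).sum())

def completion_time(timestamps):
    """Elapsed time from first to last sample (seconds)."""
    return float(timestamps[-1] - timestamps[0])

# Example: a straight 10 cm move sampled at three points.
pts = np.array([[0.0, 0.0, 0.0], [0.05, 0.0, 0.0], [0.10, 0.0, 0.0]])
print(path_length(pts))  # 0.10 (metres)
```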
Virtual environments such as the CAVE™ and the ImmersaDesk™, which are based on graphics supercomputers or workstations, are large and expensive. Most physicians have no access to such systems. The recent development of small Linux personal computers and high‐performance graphics cards has afforded opportunities to implement applications formerly run on graphics supercomputers. Using PC hardware and other affordable devices, a VR system has been developed which can sit on a physician's desktop or be installed in a conference room.
Affordable PC‐based VR systems are comparable in performance with expensive VR systems formerly based on graphics supercomputers. Such VR systems can now be accessible to most physicians. The lower cost and smaller size of this system greatly expands the range of uses of VR technology in medicine.
Our approach to tissue modeling incorporates biologically derived primitives into a computational engine (CellSim®) coupled with a genetic search algorithm. By expanding an evolved synthetic genome, CellSim® is capable of developing a virtual tissue with higher-order properties. Using primitives based on cell signaling, gene networks, cell division, growth, and death, we have encoded a 64-cell cube-shaped tissue with an emergent capacity to repair itself when up to 60% of its cells are destroyed. Other tissue shapes, such as sheets of cells, also repair themselves. The capacity for self-repair is an emergent property derived from, but not specified by, the rule sets used to generate these virtual tissues.
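CellSim®'s actual rule sets are not described in implementable detail here. The following toy cellular model, with assumed rules, illustrates only the general idea that local division rules can yield emergent repair of a damaged cell sheet:

```python
import numpy as np

# Toy illustration of emergent self-repair (not CellSim®'s actual rules):
# a live cell divides into an adjacent empty site, so a damaged sheet regrows.

def step(grid):
    new = grid.copy()
    for r, c in np.argwhere(grid == 1):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1] and grid[rr, cc] == 0:
                new[rr, cc] = 1  # division fills the wound
    return new

sheet = np.ones((8, 8), dtype=int)
sheet[2:6, 2:6] = 0            # destroy ~25% of the cells
while sheet.sum() < sheet.size:
    sheet = step(sheet)        # local rules restore the full sheet
print("repaired:", sheet.sum() == sheet.size)
```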
We present an architecture for remote visualization of datasets over the Grid. This permits an implementation‐agnostic approach, where different systems can be discovered, reserved and orchestrated without being concerned about specific hardware configurations. We illustrate the utility of our approach to deliver high‐quality interactive visualizations of medical datasets (circa 1 million triangles) to physically remote users, whose local physical resources would be otherwise overwhelmed. Our architecture extends to a full collaborative, resource‐aware environment, whilst our presentation details our first proof‐of‐concept implementation.
We investigate and report a method of 3D virtual organ creation based on RGB colour laser surface scanning of preserved biological specimens. The surface attributes of these specimens result in signal degradation and increased scanning time. Despite these problems, we are able to reproduce 3D virtual organs with both accurate topology and colour consistency, capable of reproducing pathological lesions.
Bovine rectal palpation is a necessary skill for a veterinary student to learn. However, lack of resources and welfare issues currently restrict the amount of training available to students in this procedure. Here we present a virtual reality based teaching tool ‐ the Bovine Rectal Palpation Simulator ‐ that has been developed as a supplement to existing training methods. When using the simulator, the student palpates virtual objects representing the bovine reproductive tract, receiving feedback from a PHANToM haptic device (inside a fibreglass model of a cow), while the teacher follows the student's actions on the monitor and gives instruction. We present a validation experiment comparing the performance of a group of traditionally trained students with that of a group whose training was supplemented with a simulator session. Subsequent performance in the real task, when examining cows for the first time, was assessed; the results showed significantly better performance for the simulator group.
In the automotive, telecommunication, and aerospace industries, modeling and simulation are used to understand the behavior and outcomes of a new design well before production begins, thereby avoiding costly failures. In the pharmaceutical industry, failures are not typically identified until a compound reaches the clinic. This fact has created a productivity crisis due to the high failure rate of compounds late in the development process. Modeling and simulation are now being adopted by the pharmaceutical industry to understand the complexity of human physiology and predict human response to therapies. Additionally, virtual patients are being used to understand the impact of patient variability on these predictions. Several case studies are provided to illustrate the technology's application to pharmaceutical R&D and healthcare.
Modeling cuts, bleeding and the insertion of surgical instruments are essential in surgical simulation. Both visual and haptic cues are important. Current methods to simulate cuts change the topology of the model, invalidating pre‐processing schemes or increasing the model's complexity. Bleeding is frequently modeled by particle systems or computational fluid dynamics. Both can be computationally expensive. Surgical instrument insertion, such as intubation, can require complex haptic models. In this paper, we describe methods for simulating surgical incisions that do not require such computational complexity, yet preserve the visual and tactile appearance necessary for realistic simulation.
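The abstract does not detail the authors' specific method. Purely as an illustration of one low-cost alternative to remeshing, an incision can be painted into a texture and rendered as a dark groove, leaving mesh topology, and hence any pre-processing, untouched; all names and values below are assumptions:

```python
import numpy as np

# Illustrative texture-space incision (not the authors' specific method):
# the cut is marked in a texture rather than by changing mesh topology.

def paint_cut(texture, p0, p1, width=2.0):
    """Darken texels within `width` of the segment p0-p1 (texel coordinates)."""
    h, w = texture.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    px, py = p1[0] - p0[0], p1[1] - p0[1]
    t = ((xs - p0[0]) * px + (ys - p0[1]) * py) / (px * px + py * py)
    t = np.clip(t, 0.0, 1.0)                      # nearest point on the segment
    dist = np.hypot(xs - (p0[0] + t * px), ys - (p0[1] + t * py))
    texture[dist <= width] = 0.2                  # dark groove along the cut
    return texture

skin = np.ones((64, 64))                          # uniform light tissue texture
skin = paint_cut(skin, (10, 10), (50, 40))
```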
Minimally invasive surgical techniques using catheters are now used in many procedures. Developing surgical training for such procedures requires real-time simulation of the tool-organ interaction. In these simulations, each step of the interaction depends on the current configuration of the surgical tool (here, a guidewire), which calls for techniques that solve for and visualize the tool's configuration at every time step. This paper presents a finite element method (FEM) based approach to simulating the tool-organ interaction.
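The paper's exact formulation is not reproduced here. As a minimal sketch of the FEM machinery involved, a guidewire segment can be modelled with Euler-Bernoulli beam elements and solved for its deflected configuration under a tip contact force; all parameter values below are assumptions:

```python
import numpy as np

# Minimal FEM sketch: a cantilevered elastic guidewire modelled as
# Euler-Bernoulli beam elements, deflected by a tip contact force.

E, I, L_total, n = 200e9, 1e-14, 0.10, 10   # illustrative wire properties, 10 elements
Le = L_total / n

# Standard 4x4 beam element stiffness (dofs: deflection and slope at each node).
ke = (E * I / Le**3) * np.array([
    [ 12,     6*Le,   -12,     6*Le  ],
    [ 6*Le,   4*Le**2, -6*Le,  2*Le**2],
    [-12,    -6*Le,    12,    -6*Le  ],
    [ 6*Le,   2*Le**2, -6*Le,  4*Le**2]])

ndof = 2 * (n + 1)
K = np.zeros((ndof, ndof))
for e in range(n):                       # assemble the global stiffness matrix
    i = 2 * e
    K[i:i+4, i:i+4] += ke

f = np.zeros(ndof)
f[-2] = -0.01                            # 10 mN transverse force at the tip

free = np.arange(2, ndof)                # clamp deflection and slope at node 0
u = np.zeros(ndof)
u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])
print("tip deflection (m):", u[-2])
```

In a real-time simulator this solve would be repeated each time step, with contact forces from the organ model updating the right-hand side.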
The present study examined the effectiveness of an immersive arthroscopic simulator for training naïve participants to identify major anatomical structures and manipulate the arthroscope and probe. Ten psychology graduate students engaged in five consecutive days of practice sessions with the arthroscopic trainer. Following each session, participants were tested to see how quickly and accurately they could identify 10 anatomical landmarks and manipulate the arthroscope and probe. The results demonstrated steady learning on both tasks. For the anatomy task, participants correctly identified an average of 7.7 out of 10 structures in the first session and 9.5 in the last. During the manipulation task, participants collided 53.5 times with simulated tissues in the first session and 13.2 times during the final session. Participants (n=9) also demonstrated minimal performance degradation when tested 4 weeks later. These data suggest that the immersive arthroscopic trainer might be useful as an initial screening or training tool for beginning medical students.
This study examines the effectiveness of two virtual reality simulators compared with traditional methods of teaching intravenous (IV) cannulation to third-year medical students. Thirty-four third-year medical students were divided into four groups and trained to perform IV cannulation using either CathSim™, Virtual I.V.™, a plastic simulated arm, or practice of IV placement on each other. All subjects watched a five-minute training video and completed a cannulation pretest and posttest on the simulated arm. The results showed significant improvement from pretest to posttest in each of the four groups. Students trained on the Virtual I.V.™ showed significantly greater improvement over baseline than the simulated-arm group (p<.026). Both simulators provided training at least equal to traditional methods of teaching, a finding with implications for the future training of novices in this procedure.
This study describes a comparison between an animal model and a haptic-enabled, needle-based, graphical user interface simulator (SimPL) for teaching Diagnostic Peritoneal Lavage (DPL). Forty novice medical students were divided into two groups and trained to perform a DPL on either a pig or the SimPL. All subjects completed a pretest and posttest of basic knowledge and were then assessed performing a DPL on a TraumaMan™ mannequin, evaluated by two trauma surgeons blinded to group assignment. The results showed significant improvement over baseline knowledge in both groups, but more so in the SimPL group. The simulator group performed better on site selection (p<0.001) and technique (p<0.002) than those who trained on a pig. The finding that a simulator is superior to an animal model for teaching an important skill to medical students has profound implications for future training and deserves further study.
One of the goals of the DARPA Virtual Soldier Project is to aid the field medic in the triage of a casualty. In Phase I, we are collecting 12 baseline experimental physiological variables and cardiac-gated computed tomography (CT) imagery for use in prototyping a futuristic electronic medical record, the "Holomer". We are using physiological models and Kalman filtering to aid in diagnosis and to predict outcomes in relation to cardiac injury. The physiological modeling introduces another few hundred variables. The challenge, for which multiple display solutions exist, is reducing this complexity into an easy-to-read form that aids the field medic in triage. A description of the possible techniques follows.
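The project's filter design is not specified in this abstract. A minimal one-dimensional Kalman filter of the general kind used to smooth a noisy physiological measurement, with assumed model and noise parameters, looks like this:

```python
# Minimal 1D Kalman filter sketch for smoothing a noisy physiological signal
# (e.g., heart rate). Model and noise values are illustrative assumptions.

def kalman_1d(measurements, q=0.5, r=4.0, x0=70.0, p0=1.0):
    """q: process noise, r: measurement noise, x0/p0: initial state/variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                    # predict (random-walk state model)
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with measurement z
        p = (1 - k) * p
        estimates.append(x)
    return estimates

noisy_hr = [72, 75, 69, 80, 78, 74]
print(kalman_1d(noisy_hr))
```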
We propose a web-based collaborative CAD system allowing remote communication and data exchange between radiologists and researchers in computer vision-based software engineering. The proposed web-based interface is implemented with the Java Advanced Imaging Application Programming Interface. The different modules of the interface allow for 3D and 2D data visualization, as well as for parametric adjustment of the 3D reconstruction process. The proposed web-based CAD system was tested in a pilot study involving a limited number of liver cancer cases. Successful system validation in this feasibility stage will lead to an extended clinical study on CT and MR image databases.
Instruments and procedures continue to become more complex and challenging, but the display environments to which these technologies are connected have not kept pace. Display real‐estate (the size and resolution of the display), the configurability of the display, and the ability for display systems to incorporate, fuse and present diverse informational sources are limiting factors. The capabilities of display technologies are far exceeded by the procedures and instruments that rely on them.
In this paper we show how to break free from display constraints by moving forward with a hybrid, heterogeneous display framework that preserves key characteristics of current systems (low latency, specialized devices). We have engineered a hybrid display and are currently using it to build a surgical simulation and training environment within which we can evaluate both the technology and the performance of subjects using the technology.
Observing the evolution of a course of treatment can be a powerful tool for understanding its efficacy. To this end, we produce animations that visualize, as a function of time, lesions in an organ. Such animations can be used in teaching or for patient education, influencing a patient's decision to follow a course of treatment. The animation produced is a metamorphosis, or morph, describing how a source shape (pre-treatment) gradually deforms into a target shape (post-treatment). We implemented our method using the programming capabilities of current graphics cards (also known as graphics processing units, or GPUs), so both visualization of the volumes and morph generation are performed in real time. We demonstrate our method on data from the liver of a patient with lymphoma who was treated with chemotherapy and is currently in remission.
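The GPU implementation is not reproduced here. Conceptually, the simplest form of such a morph is a per-voxel blend between the registered pre- and post-treatment volumes, exactly the kind of operation a fragment program evaluates in parallel; this CPU sketch with stand-in data illustrates the idea:

```python
import numpy as np

# Conceptual sketch of the morph: per-voxel interpolation between a source
# (pre-treatment) and target (post-treatment) volume. On the GPU this blend
# runs per fragment; here numpy stands in for that parallelism. Shape morphs
# often interpolate distance fields rather than raw intensities.

def morph(src, dst, t):
    """Blend volumes at time t in [0, 1]: 0 = pre-treatment, 1 = post-treatment."""
    return (1.0 - t) * src + t * dst

pre  = np.random.rand(32, 32, 32)   # stand-ins for registered intensity volumes
post = np.random.rand(32, 32, 32)
frames = [morph(pre, post, t) for t in np.linspace(0.0, 1.0, 30)]  # 30-frame animation
```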
This paper presents a method for rendering radially distorted virtual scenes in real time using the programmable fragment shader commonly found in mainstream graphics hardware. We show that by using the pixel buffer and the fragment shader, it is possible to augment the endoscopic display with distorted virtual images.
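As a hedged sketch of the distortion itself (the coefficients are assumptions, not values from the paper), the standard radial model a fragment shader would apply per pixel is r' = r(1 + k1·r² + k2·r⁴); here it is emulated in Python over a pixel grid:

```python
import numpy as np

# Radial distortion applied per pixel, emulating the fragment-shader lookup
# that matches a virtual image to the endoscope lens. k1, k2 are illustrative.

def distort(image, k1=0.2, k2=0.05):
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    x = (xs - w / 2) / (w / 2)                # normalized, centred coordinates
    y = (ys - h / 2) / (h / 2)
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2        # r' = r(1 + k1 r^2 + k2 r^4)
    xd = np.clip((x * scale + 1) * w / 2, 0, w - 1).astype(int)
    yd = np.clip((y * scale + 1) * h / 2, 0, h - 1).astype(int)
    return image[yd, xd]                      # sample the undistorted scene

virtual_frame = np.random.rand(480, 640)      # stand-in rendered virtual image
overlay = distort(virtual_frame)
```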
To foster awareness of the magnitude and breadth of activity in medical modeling and to encourage collaboration among participants, the National Center for Collaboration in Medical Modeling and Simulation (NCCMMS) has created the Medical Modeling and Simulation Database (MMSD). The MMSD consists of two web-based, searchable compilations: the Research Database, which contains bibliographic information on published articles and abstracts (where available), and the Companies and Projects Database, which maintains contact information for research centers, development and application programs, journals and conferences. By bringing like-minded organizations and researchers into more frequent contact with each other, the MMSD aims to speed the advancement of medical modeling and simulation.
The ViCCU (Virtual Critical Care Unit) Project sought to address shortages of critical care staff by developing a system that uses the capabilities of ultrabroadband networks to make a critical care specialist virtually present at a distant location, something not possible in a clinically useful way with current systems. The new system (ViCCU) was developed and deployed, and critically ill or injured patients are now routinely assessed and managed remotely using it. It has led to more appropriate patient transfers and the delivery of a quality of clinical service not previously available. This paper describes the history of the project, its novelty, the clinically significant technical aspects of the system and its deployment. The initial results to the end of September 2004 are described.
The use of simulation for high-stakes assessment has been embedded in the New South Wales Medical Practice Act and applied on a number of occasions; simulation has rarely been used in this manner elsewhere in the world. We outline this use of simulation, focusing in particular on its relationship to a performance assessment programme featuring a performance focus, peer assessment of standards, an educative, remedial and protective framework, strong legislative support and system awareness.
This paper presents formative (i.e., not final) project evaluation data from the use of a responsive virtual human training application by medical students rotating through Pediatrics and by Pediatric medical educators. We are encouraged by the evaluation results and believe the iterative development strategies employed and the subsequent refinements in the scenarios will lead to important instructional and assessment tools for medical educators.
A collaborative initiative is starting within the Internet2 Health Science community to explore the development of a framework for providing access to digital anatomical teaching resources over Internet2. This is a cross‐cutting initiative with broad applicability and will require the involvement of a diverse collection of communities. It will seize an opportunity created by a convergence of needs and technical capabilities to identify the technologies and standards needed to support a sophisticated collection of tools for teaching anatomy.
Surgical skills assessment has received increased attention over the last few years. Stochastic models such as Hidden Markov Models (HMMs) have recently been adapted to surgery to discriminate levels of expertise. Based on our previous work combining synchronized video and motion analysis, we present preliminary results for an HMM-based laparoscopic task recognizer that aims to model hand manipulations and to identify and recognize simple surgical tasks.
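As an illustration of the approach (the states, observations and probabilities below are assumptions, not the authors' trained model), Viterbi decoding over an HMM recovers the most likely sequence of surgical sub-tasks from discretized motion features:

```python
import numpy as np

# Toy HMM task recognizer: hidden states are surgical sub-tasks, observations
# are discretized hand-motion features. All probabilities are illustrative.

states = ["reach", "grasp", "pull"]
obs_symbols = ["slow", "fast"]
start = np.array([0.6, 0.3, 0.1])
trans = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.3, 0.6]])
emit  = np.array([[0.2, 0.8],    # "reach" is mostly fast motion
                  [0.7, 0.3],    # "grasp" is mostly slow
                  [0.5, 0.5]])

def viterbi(observations):
    """Most likely hidden task sequence for a list of observation symbols."""
    o = [obs_symbols.index(s) for s in observations]
    v = start * emit[:, o[0]]
    back = []
    for t in range(1, len(o)):
        scores = v[:, None] * trans           # scores[i, j]: from state i into j
        back.append(scores.argmax(axis=0))
        v = scores.max(axis=0) * emit[:, o[t]]
    path = [int(v.argmax())]
    for ptr in reversed(back):                # follow back-pointers to the start
        path.append(int(ptr[path[-1]]))
    return [states[i] for i in reversed(path)]

print(viterbi(["fast", "fast", "slow", "slow", "fast"]))
```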