Ebook: Medicine Meets Virtual Reality 15
Our culture is obsessed with design. Magazines, television, and websites publicize current trends in clothing, architecture, home furnishings, automobiles, and more. We design objects to convey ideas about wealth, status, age, gender, education, politics, religion, accomplishment, and aspiration. Design seems mysteriously vital to our well-being, like sleep and dreaming.
Sometimes designers can fuse utility and fantasy to make the mundane appear fresh—a cosmetic repackaging of the same old thing. Because of this, medicine—grounded in the unforgiving realities of the scientific method and peer review, and of flesh, blood, and pain—can sometimes confuse “design” with mere “prettifying.”
Design solves real problems, however. It reshapes material, image, and data into something more useful than was previously available. It addresses challenges of increasing complexity and data overload. It simplifies tasks to reduce confusion and error. It accelerates adoption and training by making new tools more intuitive to use. It comforts clinicians as well as patients by giving engineering a friendly interface.
This year's theme acknowledges the importance of design—currently and as an opportunity—within the MMVR community.
in vivo. We design machines to explore our living bodies. Imaging devices, robots, and sensors move constantly inward, operating within smaller dimensions: system, organ, cell, DNA. Resolution and sensitivity are increasing. Our collaboration with these machines is burdened by vast quantities of input and output data. Physician to machine to patient to machine to physician and back again: it's a crowded information highway prone to bottlenecks, misinterpreted signals, and collisions. Out of necessity, we design ways to visualize, simplify, communicate, and understand complex biomedical data. These can be as basic as color-coding or as advanced as Internet2. In our measurement and manipulation of health, the design of information is critical.
in vitro. Using test tubes and Petri dishes, we isolate in vivo to better manipulate and measure biological conditions and reactions. The bold new field of tissue engineering, for example, relies on creating an imitation metabolic system for growing artificial body parts. Scientists carefully design the scaffolding on which cells will organize themselves. The artificial guides nature's path inside a glass container as we strive to improve what nature gives us.
in silico. We step out of the controlled in vitro environment and into a virtual reality. The silica mini-worlds of test tubes and Petri dishes are translated into mini-worlds contained within silicon chips. In the in silico lab, algorithms replace chemicals and proteins in the quest for new drugs. On a different scale, we design simulations of biological systems to serve as educational tools. A simulated human body improves learning by utilizing intuition, repetition, and objective assessment. In surgical training, we are replacing patients with computers, in part because the latter are less susceptible to pain and less likely to hire a lawyer.
The future of medicine remains within all three environments: in vivo, in vitro, and in silico. Design is what makes these pieces fit together—the biological, the informational, the physical/material—into something new and more useful.
And what is the next in medicine? We cannot say, but we hope it offers solutions to the very real challenges that are now upon us: an aging global population; disparities between rich and developing nations; epidemic, disaster, and warfare; and limited economic and natural resources. We are eager to see what new tools are designed to confront these old problems, each involving medicine in some way.
We are thankful to all who have made MMVR15 possible and that, after fifteen years, MMVR remains a place where so many talented, visionary, and hardworking individuals share their research to design the next in medicine.
Traumatic head injuries can cause internal bleeding within the brain. The resulting hematoma can elevate intracranial pressure, leading to complications and death if left untreated. A craniotomy may be required when conservative measures are ineffective. To augment conventional surgical training, a Virtual Reality-based intracranial hematoma simulator is being developed. A critical step in performing a craniotomy involves cutting burrholes in the skull. This paper describes volumetric-based haptic and visual algorithms developed to simulate burrhole creation for the simulator. The described algorithms make it possible to simulate several surgical tools typically used for a craniotomy.
Software tools that use haptics to sculpt precisely fitting cranial implants are employed in an augmented reality immersive system that creates a virtual working environment for modelers. The virtual environment is designed to mimic the traditional working environment as closely as possible while providing additional functionality for users. The implant design process uses patient CT data of the defective area. This volumetric data is displayed in an implant-modeling tele-immersive augmented reality system in which the modeler can build a patient-specific implant that precisely fits the defect. To mimic the traditional sculpting workspace, the system provides stereo vision, viewer-centered perspective, a sense of touch, and collaboration. For optimized performance, it combines a dual-processor PC, fast volume rendering with three-dimensional texture mapping, a fast haptic rendering algorithm, and a multi-threaded architecture. The system replaces expensive and time-consuming traditional sculpting steps such as physical sculpting, mold making, and defect stereolithography. This augmented reality system is part of a comprehensive tele-immersive system that includes a conference-room-sized system for tele-immersive small-group consultation and an inexpensive, easily deployable networked desktop virtual reality system for surgical consultation, evaluation, and collaboration. The system has been used to design patient-specific cranial implants with a precise fit.
SOFA is a new open source framework primarily targeted at medical simulation research. Based on an advanced software architecture, it allows users to (1) create complex and evolving simulations by combining new algorithms with algorithms already included in SOFA; (2) modify most parameters of the simulation – deformable behavior, surface representation, solver, constraints, collision algorithm, etc. – by simply editing an XML file; (3) build complex models from simpler ones using a scene-graph description; (4) efficiently simulate the dynamics of interacting objects using abstract equation solvers; and (5) reuse and easily compare a variety of available methods. In this paper we highlight the key concepts of the SOFA architecture and illustrate its potential through a series of examples.
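To give a flavor of point (2), a SOFA scene is an XML description of a scene graph whose attributes can be edited without recompiling. The sketch below is indicative of that style only; the component names and parameter values are illustrative assumptions, not taken from the paper:

```xml
<!-- Illustrative SOFA-style scene sketch: a deformable object with an
     implicit solver. Solver, mass, and stiffness are plain attributes,
     so they can be changed by editing this file. -->
<Node name="root" dt="0.02" gravity="0 -9.81 0">
  <Node name="Liver">
    <EulerImplicitSolver rayleighStiffness="0.1"/>
    <CGLinearSolver iterations="25" tolerance="1e-5"/>
    <MechanicalObject name="dofs"/>
    <UniformMass totalMass="1.0"/>
    <TetrahedronFEMForceField youngModulus="3000" poissonRatio="0.3"/>
    <FixedConstraint indices="0 1 2"/>
  </Node>
</Node>
```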
Severe limb trauma has been prevalent among deployed U.S. military forces since the advent of body armor. To improve outcomes, better pre-deployment training is urgently needed. To meet this need, Simuluition Inc. and Melerit Medical AB are expanding the capabilities of the TraumaVision™ simulator, originally designed for training surgeons in internal fixation procedures, to include training in battlefield-relevant trauma care for fractured femurs and compartment syndrome. Simulations are being implemented for fractured femur reduction, external fixation, measuring intracompartmental pressure (ICP), and performing fasciotomies. Preliminary validation work has begun to demonstrate the content and construct validity of the TraumaVision™ simulator. Future work will include developing a SCORM-compliant curriculum and completing the validation studies.
A realistic trocar insertion simulator requires reliable and reproducible tissue data. This paper investigates the use of synthetic surrogate tissue to facilitate creation of data covering a wide range of pathological cases. Furthermore, we propose mapping the synthetic puncture force data to puncture force data obtained from animal or human tissue to create a simulation model of the procedure. We have developed an experimental setup to collect data from surrogate synthetic tissue using a bladeless trocar.
Virtual reality simulators can automatically record user performance data in an unbiased, cost-effective manner that is also less error-prone than manual methods. Centralized data recording simplifies proficiency evaluation even further; however, it is not yet commonly available for surgical skills trainers. We detail our approach to implementing a framework for distributed score recording over the Internet, using a database for persistent storage.
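A minimal sketch of the persistent-storage side of such a framework, assuming nothing about the paper's actual schema or transport (table and function names here are invented for illustration):

```python
import sqlite3

# Open (or create) a score store; a central server would hold this database
# and expose record/query operations over the Internet.
def open_store(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS scores (
                    trainee TEXT, task TEXT, score REAL,
                    recorded_at TEXT DEFAULT CURRENT_TIMESTAMP)""")
    return db

def record_score(db, trainee, task, score):
    # Parameterized insert: one row per simulator session.
    db.execute("INSERT INTO scores (trainee, task, score) VALUES (?, ?, ?)",
               (trainee, task, score))
    db.commit()

def best_score(db, trainee, task):
    row = db.execute("SELECT MAX(score) FROM scores WHERE trainee=? AND task=?",
                     (trainee, task)).fetchone()
    return row[0]

db = open_store()
record_score(db, "trainee01", "suturing", 71.5)
record_score(db, "trainee01", "suturing", 83.0)
print(best_score(db, "trainee01", "suturing"))   # → 83.0
```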
Percutaneous radiofrequency ablation is a minimally invasive therapy for liver tumors that destroys tumors with heat. Correct insertion and placement of the needle inside the tumor is critical and determines the success of the operation. We are developing software that uses patient data to help the physician plan the operation. In this context, we propose a method that automatically, quickly, and accurately computes the areas on the skin that provide safe access to the tumor. The borders of the 3D mesh representing insertion areas are refined for higher precision. The resulting zones are then used to restrict the search domain of the optimization process and are visualized on the reconstructed patient as guidance for the physician.
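A toy sketch of the underlying feasibility test, under assumptions of our own (geometry, obstacle model, and thresholds are invented; the paper's method operates on meshes, not point samples): a skin point offers safe access if the straight needle path to the tumor stays clear of critical structures and within the needle's reachable depth.

```python
import numpy as np

def segment_clears_sphere(p, q, center, radius):
    # Distance from a sphere center to the segment p-q must exceed radius.
    d = q - p
    t = np.clip(np.dot(center - p, d) / np.dot(d, d), 0.0, 1.0)
    closest = p + t * d
    return np.linalg.norm(center - closest) > radius

def safe_entry_points(skin_pts, tumor, obstacles, max_depth):
    safe = []
    for p in skin_pts:
        if np.linalg.norm(tumor - p) > max_depth:
            continue                      # needle cannot reach the tumor
        if all(segment_clears_sphere(p, tumor, c, r) for c, r in obstacles):
            safe.append(p)                # path avoids all critical structures
    return safe

tumor = np.array([0.0, 0.0, 0.0])
obstacles = [(np.array([0.0, 0.0, 5.0]), 2.0)]   # e.g. a vessel to avoid
skin = [np.array([0.0, 0.0, 10.0]),              # blocked by the obstacle
        np.array([10.0, 0.0, 0.0]),              # too deep for the needle
        np.array([0.0, 8.0, 0.0])]               # clear and within reach
print(len(safe_entry_points(skin, tumor, obstacles, max_depth=9.0)))  # → 1
```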
This paper presents the application of virtual reality and haptics to the simulation of cellular micromanipulation for research, training and automation purposes. A collocated graphic/haptic working volume provides a realistic visual and force feedback to guide the user in performing a cell injection procedure. A preliminary experiment shows promising results.
A non-invasive wrist sensor, BPGuardian (Empirical Technologies C., Charlottesville, VA), has been developed that provides continuous blood pressure readings by de-convolving the radial arterial pulse waveform into its constituent component pulses (Pulse Decomposition Analysis). Results agree with the model's predictions regarding the temporal and amplitudinal behavior of the component pulses as a function of changing diastolic and systolic blood pressure.
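A toy sketch of the decomposition step only, on synthetic data (the component timings, widths, and shapes below are invented, and the paper's de-convolution is surely more sophisticated): model the radial pulse as a sum of component pulses and recover their amplitudes by linear least squares.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 500)          # one cardiac cycle, arbitrary units

def gauss(mu, sigma):
    # Gaussian stand-in for one component pulse.
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2)

# Assumed primary pulse plus two reflection pulses at invented times/widths.
basis = np.column_stack([gauss(0.15, 0.03), gauss(0.35, 0.05), gauss(0.60, 0.06)])
true_amps = np.array([1.0, 0.45, 0.25])
waveform = basis @ true_amps            # synthetic composite radial pulse

# With known component shapes, the amplitudes follow by least squares.
amps, *_ = np.linalg.lstsq(basis, waveform, rcond=None)
print(np.round(amps, 3))
```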
Current uses of haptic hardware such as the Phantom Premium 6DOF in surgical simulators lack the desired interface transparency and can introduce artefacts into a student's training on a simulator. To address this problem, two neural networks are used to find a mapping from handle coordinates and orientation to the force output required to counteract gravitational forces. A close fit to the data is achieved for both networks (errors of 0.00149 and 0.0157 between training and predicted forces), and 3DOF gravity compensation is achieved. A 6DOF simulator has been created but requires further work to improve its accuracy.
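A minimal sketch of the learned-compensation idea, with everything invented (synthetic load model, network size, training setup; the paper's networks and data are not described here): a small network learns the force needed to cancel gravity as a function of handle orientation.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(-np.pi / 2, np.pi / 2, size=(256, 1))   # handle angle
F = 0.35 * 9.81 * np.cos(theta)          # assumed gravity load: m*g*cos(theta)

# One-hidden-layer network, trained by plain full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

loss0 = float(((forward(theta)[1] - F) ** 2).mean())
lr = 0.05
for _ in range(2000):
    h, pred = forward(theta)
    err = pred - F
    loss = float((err ** 2).mean())
    dpred = 2 * err / len(theta)         # gradient of the mean squared error
    dW2 = h.T @ dpred; db2 = dpred.sum(0)
    dh = dpred @ W2.T * (1 - h ** 2)     # backprop through tanh
    dW1 = theta.T @ dh; db1 = dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
print(loss0, loss)                       # loss drops as the fit improves
```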
Airway management and intubation skills are essential for in-hospital as well as out-of-hospital health care. However, these skills are difficult to learn and maintain. We tested the hypothesis that novice endoscopists (medical students) could rapidly learn intubation skills and achieve success in routine as well as difficult intubations using an indirect video laryngoscope. Given the students' success, we believe that indirect laryngoscopy could become a valuable technique in disaster medicine and for personnel hampered by chem-bio suits.
The ever improving price-performance ratio of personal computers is making a significant contribution to widening the accessibility of training simulators for a wide variety of medical procedures. High fidelity solutions are becoming more and more sought after. However, the problem of providing realistic soft tissue deformation in real time remains, particularly if haptic interaction is also required. This paper presents a new approach for efficient soft tissue deformation using particle systems to model both structure and haptic properties of anatomy. We are applying this technique to a simulator for interventional radiology procedures, but it can easily be adapted for other medical domains.
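A minimal one-particle sketch of the particle-system idea described above (all parameters invented): a tissue "particle" of mass m hangs from a spring and is integrated with semi-implicit Euler, the cheap, stable scheme commonly used for real-time particle systems.

```python
# Single particle on a damped spring under gravity (1-D, downward positive).
m, k, c, g = 0.1, 100.0, 0.5, 9.81     # mass, stiffness, damping, gravity
rest, dt = 0.0, 0.001                  # spring rest position, time step
x, v = rest, 0.0                       # particle position and velocity

for _ in range(20000):
    f = -k * (x - rest) - c * v + m * g
    v += dt * f / m                    # semi-implicit (symplectic) Euler:
    x += dt * v                        # update velocity first, then position

print(round(x, 5))                     # settles near m*g/k = 0.00981
```

A full simulator would run this update over thousands of interconnected particles, with the same spring forces doubling as the haptic response.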
Surgical simulations are normally developed in a cycle of continuous refinement. This leads to high costs in simulator design and, as a result, to a very limited number of simulators used in clinical training scenarios. We propose using Surgical Workflow Analysis for a goal-oriented specification of surgical simulators. Based on Surgical Workflows, the needed interaction scenarios and properties of a simulator can be derived easily. It is also possible to compare an existing simulator with the real workflow to determine whether it behaves realistically. We are currently using this method, with good success, in the design of a new simulator for transnasal neurosurgery.
Virtual fluoroscopy systems typically display the tracked instruments as projected shadows on a number of 2D x-ray images, completely missing the depth information of the third dimension. This paper describes an additional tool for 3D reconstruction in virtual fluoroscopy that helps clarify the position of instruments or anatomy and can be used in planning and assessing surgical procedures without further x-ray images. Two examples are presented: a displaced subtrochanteric fracture and a slipped upper femoral epiphysis.
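A sketch of the basic geometry such a reconstruction relies on, with an invented setup (the paper's calibration and workflow are not reproduced): in each calibrated view, the instrument tip lies on the ray from the x-ray source through its 2D shadow, and the 3D position is the midpoint of the closest points between the two rays.

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    # Rays p1 + t1*d1 and p2 + t2*d2; solve t1*d1 - t2*d2 = p2 - p1 in the
    # least-squares sense, then take the midpoint of the closest points.
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    A = np.column_stack([d1, -d2])
    t, *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    return 0.5 * ((p1 + t[0] * d1) + (p2 + t[1] * d2))

tip = np.array([1.0, 2.0, 3.0])                               # true 3D position
s1, s2 = np.array([0.0, 0.0, 10.0]), np.array([10.0, 0.0, 0.0])  # x-ray sources
est = triangulate(s1, tip - s1, s2, tip - s2)
print(np.round(est, 3))
```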
The purpose of this paper is to present an intensity-based algorithm for aligning 2D endoscopic images with virtual images generated from pre-operative 3D data. The proposed algorithm uses photo-consistency as the measure of similarity between images, provided the illumination is independent of the viewing direction.
The effective visualization of aneurysms is a very important issue in neurosurgery. However, it is difficult to display both the aneurysm in sufficient detail and the vessel network of the brain at the same time. This work offers a solution to this problem, applying the concept of focus and context to texture-based volume rendering. A flexible application has been developed that allows different focus-and-context techniques to be used. This paper concentrates on the evaluation of the system by a group of neurosurgeons.
Virtual reality surgical simulators have proven value in the acquisition and assessment of laparoscopic skills. In this study, we investigated skill transfer from a virtual reality laparoscopic simulator into the operating room, using a blinded, randomised, controlled trial design. Surgical trainees using the LapSim System performed significantly better at their first real-world attempt at a laparoscopic task than their colleagues who had not received similar training, as measured independently by a number of expert surgical observers using four criteria.
We report on a study investigating the relationship between repeated Virtual World training of teams managing medical emergencies and affective learning outcomes in a group of 12 medical students. The training focused on individual actions as well as on interaction and behaviour within the team. Current CPR training seems to lack important team-training aspects that this type of training addresses. We found an increase in flow experience and in self-efficacy. This type of training could probably be extended to other groups for similar purposes because of its ease of use, adaptability, and interactivity.
The objective of this paper is to present the initial results of a study aimed at showing the feasibility of using kinematic measures to distinguish skill levels in manipulating surgical tools. Through a simulated surgical task (dissection of a mandarin orange), we acquired motor performance data from three sets of subjects representing different stages of surgical training. We computed the average lateral, axial and vertical tooltip velocities for each of the two main subtasks ('Peel Skin' and 'Detach Segment'). For each subject, we defined a 6-element vector to describe the kinematic measures extracted from the two tasks and used Principal Components Analysis (PCA) to extract the two dominant contributors to overall variability to simplify the presentation of the data to the trainer. We found that the first two principal components accounted for approximately 90% of the variance across all subjects and tasks. Moreover, the PCA plot showed good intrasubject repeatability, consistency within subjects with similar levels of training, and good separation between the subject groups. The results of this pilot study will allow us to design a future intraoperative study.
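The PCA step described above can be sketched as follows, on synthetic data (the real study's kinematic vectors are not reproduced here): each row is one subject's 6-element vector, and the two dominant principal components give the 2-D coordinates used for plotting.

```python
import numpy as np

# Synthetic stand-in: 9 subjects x 6 kinematic measures
# (lateral/axial/vertical tooltip velocity for each of two subtasks).
rng = np.random.default_rng(0)
X = rng.normal(size=(9, 6))

Xc = X - X.mean(axis=0)                     # mean-center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
var = S ** 2 / (len(X) - 1)                 # variance captured per component
explained = var / var.sum()                 # fraction of total variance

scores = Xc @ Vt[:2].T                      # 2-D coordinates per subject
print(scores.shape, float(explained[:2].sum()))
```

On the study's data, the first two components accounted for roughly 90% of the variance; on arbitrary synthetic data the fraction will of course differ.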
The shape of anatomic objects often depends in complex ways on the shapes and locations of neighboring objects. Shape parameter networks provide an approach for representing shape dependencies and producing multi-object models that share consistent boundary definitions. This paper provides an overview of the modeling framework provided by shape parameter networks, and demonstrates their use through the development of a detailed multi-object eye model. The eye model presented contains analytically defined shape equations that produce models matching user-specified physical measurements such as cornea width, cornea thickness, anterior chamber angle, and eye axial length.
We present a particle-based smoke simulation and a particle-based fluid simulation in an interactive environment with rigid and deformable objects. Many smoke and fluid simulations offer high physical and visual accuracy, but the underlying models are too complex to run in real time while simultaneously performing soft-tissue simulation, collision detection, and haptic device support. Our algorithms are based on simple models that allow the surgery simulation to run in real time.
Haptic modeling of organs with existing approaches is still neither realistic nor real-time. We propose and develop the mathematical foundation of a new approach to modeling organs using beams. Beams are well-known entities in civil and structural engineering; we develop their mathematical properties in the context of organ simulation. The real-time advantage arises from the fact that a single beam implementation eliminates hundreds, if not thousands, of mass-spring elements from traditional mass-spring models, and thousands of polygons from the finite element method. Even more importantly, our derivation is valid for large deformation. Most previous work has developed equations only for small deflections. Large deformation is important because we set out to simulate blunt cutting, which requires models for large deflections. When simulated and compared with an FEM model, our new model provides comparable accuracy.
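For context on the small- versus large-deflection distinction, these are the standard beam relations (general beam theory, not the authors' specific derivation). The small-deflection Euler–Bernoulli equation linearizes the curvature, while a large-deflection formulation must keep the exact curvature:

```latex
% Small deflections: curvature approximated by w''(x)
EI\,\frac{d^{4} w}{d x^{4}} = q(x)

% Large deflections: the exact curvature is retained
\kappa(x) = \frac{w''(x)}{\left(1 + w'(x)^{2}\right)^{3/2}},
\qquad EI\,\kappa(x) = M(x)
```

Here \(E\) is Young's modulus, \(I\) the second moment of area of the cross-section, \(w\) the transverse deflection, \(q\) the distributed load, and \(M\) the bending moment.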
Simulating catheters, guide wires, rigid tissues, muscles, and blood vessels with conventional methods such as the mass-spring model and FEM is computationally expensive. The former is comparatively faster than the latter but less accurate. Earlier, we proposed a new method of simulating tubular organs using deformable beam models [3]. This method is not only accurate but also promises to be faster than the mass-spring model for tissue simulation. This paper focuses on an important aspect of this approach: the determination of the key and driving points of a beam model.
Surgical simulators are excellent training tools for minimally invasive procedures but are currently lacking in realistic tissue rendering and tissue responses to manipulation. Accurate color representation of tissues may add realism to simulators and provide medically relevant information. The goal of this study was to determine feasible methods for measuring color of in vivo tissue, specifically liver, in a standardized color space. Several compressions were applied to in vivo porcine liver. Three methods were then used to determine the CIELab and/or sRGB colors of normal and damaged liver. Results suggest that there are significant differences between normal and damaged liver color.
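The standardized color space mentioned above can be computed with the standard sRGB → CIEXYZ → CIELab path (D65 white). The sketch below uses that standard conversion and the CIE76 color difference; the liver-like color values are purely illustrative, not the study's measurements.

```python
import numpy as np

# sRGB (linear) -> CIEXYZ matrix and D65 reference white.
M = np.array([[0.4124564, 0.3575761, 0.1804375],
              [0.2126729, 0.7151522, 0.0721750],
              [0.0193339, 0.1191920, 0.9503041]])
WHITE = np.array([0.95047, 1.00000, 1.08883])

def srgb_to_lab(rgb):
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma, then convert to XYZ and to CIELab.
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = M @ lin
    r = xyz / WHITE
    f = np.where(r > (6 / 29) ** 3, np.cbrt(r), r / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

def delta_e(lab1, lab2):
    # CIE76 color difference between two Lab colors.
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

normal  = srgb_to_lab([0.55, 0.25, 0.20])   # illustrative liver-like color
damaged = srgb_to_lab([0.35, 0.18, 0.16])   # illustrative darker tissue
print(round(delta_e(normal, damaged), 2))
```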