Ebook: Medicine Meets Virtual Reality 14
Machine intelligence will eclipse human intelligence within the next few decades – extrapolating from Moore's Law – and our world will enjoy limitless computational power and ubiquitous data networks. Today's iPod® devices portend an era when biology and information technology will fuse to create a human experience radically different from our own.
Between that future and the present, we will live with accelerating technological change. Whether predictable or disruptive, guided or uncontrollable, scientific innovation is carrying us forward at unprecedented speed. What does accelerating change entail for medicine?
Our healthcare system already appears on the verge of crisis; accelerating change is part of the problem. Each technological upgrade demands an investment of education and money, and a costly infrastructure becomes obsolete more quickly. Practitioners can be overloaded with complexity: therapeutic options, outcomes data, procedural coding, drug names…
Furthermore, an aging global population with a growing sense of entitlement demands that each medical breakthrough be immediately available for its benefit: what appears in the morning paper is expected simultaneously in the doctor's office. Meanwhile, a third-party payer system generates conflicting priorities for patient care and stockholder returns. The result is a healthcare system stressed by scientific promise, public expectation, economic and regulatory constraints, and human limitations.
Change is also proving beneficial, of course. Practitioners are empowered by better imaging methods, more precise robotic tools, greater realism in training simulators, and more powerful intelligence networks. The remarkable accomplishments of the IT industry and the Internet are trickling steadily into healthcare. MMVR participants can readily see the progress of the past fourteen years: more effective healthcare at a lower overall cost, driven by cheaper and better computers.
We are pleased that this year's conference has an increased emphasis on medical education. In many ways, education is the next medical toolkit: a means to cope with, and take advantage of, accelerating change. Through interaction with novice students, medical educators are uniquely equipped to critique existing methods, encourage fresh thinking, and support emerging tools. Each new class of aspiring physicians stimulates flexible problem solving and adaptation to technological evolution. As an earlier generation of physicians trains its successors, experience can guide innovation so that change accelerates for the better.
As always, we wish to thank all the participants who make MMVR possible each year. It is our privilege to work with you.
This paper presents a centerline-based parametric model of the colon for collision detection and visualization of the colon lumen in a colonoscopy simulator. The prevailing marching cubes algorithm for 3D surface construction can provide a high-resolution mesh of triangular elements of the colon lumen from CT data. However, a well-organized mesh structure reflecting the geometric information of the colon is essential for fast and accurate computation of contact between the colonoscope and the colon, and of the corresponding reflective force, in the colonoscopy simulator. The colon is modeled as a parametric arrangement of triangular elements, with its surface following the centerline of the colon. All vertices are parameterized by radial angle along the centerline, so that the triangles around the viewpoint can be found quickly. The centerline-based parametric colon model has 75,744 triangular elements, compared with 373,364 in the model constructed by the marching cubes algorithm.
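The benefit of the centerline parameterization is that contact queries only need to examine triangles near the viewpoint's position along the centerline. The sketch below illustrates that lookup; the ring-indexed triangle record and window size are illustrative assumptions, not the paper's actual data structures.

```python
import math
from collections import namedtuple

# Hypothetical triangle record: 'ring' is the index of the centerline
# sample the triangle is attached to (an assumed layout, for illustration).
Triangle = namedtuple("Triangle", ["ring", "vertices"])

def nearest_ring(centerline, viewpoint):
    """Index of the centerline sample closest to the viewpoint."""
    return min(range(len(centerline)),
               key=lambda i: math.dist(centerline[i], viewpoint))

def candidate_triangles(triangles, centerline, viewpoint, window=2):
    """Triangles within `window` rings of the viewpoint's nearest sample.

    Because vertices are parameterized along the centerline, a contact
    query only tests this small subset instead of the full mesh.
    """
    i0 = nearest_ring(centerline, viewpoint)
    return [t for t in triangles if abs(t.ring - i0) <= window]
```

With a ring index precomputed per triangle, this narrows a query from the full mesh to a handful of rings in one pass.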
New volumetric tools were developed for the design and fabrication of high-quality cranial implants from patient CT data. These virtual tools replace time-consuming physical sculpting, mold-making, and casting steps. The implant is designed by medical professionals in tele-immersive collaboration. Virtual clay is added to the virtual defect area on the CT data using the adding tool. With force feedback, the modeler can feel the edge of the defect and fill only the space where no bone is present. A carving tool and a smoothing tool are then used to sculpt and refine the implant. For physical evaluation, the skull with the simulated defect and the implant are fabricated via stereolithography, allowing neurosurgeons to evaluate the quality of the implant. Initial tests demonstrate a very high-quality fit. These new haptic volumetric sculpting tools are a critical component of a comprehensive tele-immersive system.
Several abstract concepts in medical education are difficult to teach and comprehend. To address this challenge, we have been applying the approach of reification of abstract concepts using interactive virtual environments and a knowledge-based design. Reification is the process of making abstract concepts and events, beyond the realm of direct human experience, concrete and accessible to teachers and learners. Entering virtual worlds and simulations not otherwise easily accessible provides an opportunity to create, study, and evaluate the emergence of knowledge and comprehension from the direct interaction of learners with otherwise complex abstract ideas and principles by bringing them to life. Using a knowledge-based design process and appropriate subject matter experts, knowledge structure methods are applied to prioritize concepts, characterize important relationships, and create a concept map that can be integrated into the reified models that are subsequently developed. Applying these principles, our interdisciplinary team has been developing a reified model of the nephron, into which important physiologic functions can be integrated, rendered in a three-dimensional virtual environment using Flatland, a virtual-environment development software tool, within which learners can interact using off-the-shelf hardware. The nephron model can be driven dynamically by a rules-based artificial intelligence engine, applying the rules and concepts developed in conjunction with the subject matter experts. In the future, the nephron model can be used to interactively demonstrate a number of physiologic principles or a variety of pathological processes that may be difficult to teach and understand. In addition, this approach to reification can be applied to a host of other physiologic and pathological concepts in other systems. These methods will require further evaluation to determine their impact and role in learning.
Access to the laboratory component of a class is limited by resources, and lab training is not currently possible in distance learning. To overcome this problem, a solution is proposed to enable hands-on, interactive, objectively scored, and appropriately mentored learning in a widely accessible environment. The proposed solution is a virtual-reality motor-skills trainer that teaches basic fine-motor skills, using haptics for touch-and-feel interaction together with a 3D virtual reality environment for visualization.
This paper presents a method for tessellating tissue boundaries and their interiors, given as input a tissue map consisting of relevant classes of the head, in order to produce anatomical models for finite element-based simulation of endoscopic pituitary surgery. Our surface meshing method is based on the simplex model, which is initialized by duality from the topologically accurate results of the Marching Cubes algorithm, and which features explicit control over mesh scale, while using tissue information to adhere to relevant boundaries. Our mesh scale strategy is spatially varying, based on the distance to a central point or linearized surgical path. The tetrahedralization stage also features a spatially varying mesh scale, consistent with that of the surface mesh.
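One way such a spatially varying mesh scale could be defined is a linear blend from a fine scale near the surgical region of interest to a coarse scale at a cutoff distance. The function and parameter values below are illustrative assumptions for the distance-to-a-central-point case, not the paper's actual formulation.

```python
import math

def target_edge_length(point, center, fine=1.0, coarse=8.0, radius=50.0):
    """Desired mesh edge length at `point`: `fine` near `center`,
    growing linearly to `coarse` at distance `radius` and beyond.
    All lengths are in the same units as the image (e.g. mm)."""
    t = min(math.dist(point, center) / radius, 1.0)  # clamp blend factor to [0, 1]
    return fine + t * (coarse - fine)
```

A meshing pass can query this function per vertex, concentrating elements (and simulation accuracy) along the surgical path while keeping the total element count low.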
This study determines the expert and referent face validity of LAP Mentor, the first procedural virtual-reality (VR) trainer. After a hands-on introduction to the simulator, a questionnaire was administered to 49 participants (21 expert laparoscopists and 28 novices). There was a consensus that LAP Mentor is a valid training model for basic skills training and for the procedural training of laparoscopic cholecystectomies. As 88% of respondents saw training on this simulator as effective and 96% experienced the training as fun, it will likely be accepted in the surgical curriculum by both experts and trainees. Further validation of the system is required to determine whether its performance concurs with these favourable expectations.
Visualization is a very important part of a high-fidelity surgical simulator. Modern computer graphics hardware offers ever more features and processing power, making it possible to extend the standard OpenGL rendering methods with advanced visualization techniques and achieve highly realistic rendering in real time. For easy and efficient use of these new capabilities, a stand-alone graphics engine has been implemented that exploits these advanced rendering techniques and provides an interface to ensure interoperability with a software framework for surgical simulators.
Under contract with the Telemedicine & Advanced Technology Research Center (TATRC), Energid Technologies is developing a new XML-based language for describing surgical training exercises, the Surgical Simulation and Training Markup Language (SSTML). SSTML must represent everything from organ models (including tissue properties) to surgical procedures. SSTML is an open language (i.e., freely downloadable) that defines surgical training data through an XML schema. This article focuses on the data representation of the surgical procedures and organ modeling, as they highlight the need for a standard language and illustrate the features of SSTML. Integration of SSTML with software is also discussed.
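To illustrate the kind of data an XML-based surgical training language carries, the fragment below is parsed with Python's standard library. The element and attribute names here are invented for illustration only; they are NOT the actual SSTML schema, which is defined by its downloadable XSD.

```python
import xml.etree.ElementTree as ET

# A hypothetical fragment in the spirit of SSTML: a procedure referencing
# an organ model with a tissue property. Names are illustrative, not SSTML.
snippet = """\
<procedure name="exampleProcedure">
  <organ name="exampleOrgan">
    <tissueProperty name="stiffness" value="0.8"/>
  </organ>
</procedure>"""

root = ET.fromstring(snippet)
organ = root.find("organ")
stiffness = float(organ.find("tissueProperty").get("value"))
```

Because the format is plain XML with a published schema, any conforming simulator can validate and load the same training content, which is the interoperability argument the abstract makes.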
Soft tissue modeling is of key importance in medical robotics and simulation. In the case of percutaneous operations, a fine model of layer transitions and target tissues is required. However, the nature and variety of these tissues are such that this problem is extremely complex. In this article, we propose a method to estimate the interaction between in vivo tissues and a surgical needle. The online, robust estimation of a varying-parameters model is achieved during an insertion under standard operating conditions.
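Online estimation of a model with varying parameters is commonly done with recursive least squares (RLS) and a forgetting factor, which exponentially down-weights old samples so the estimate can track tissue changes during insertion. The sketch below fits a simple affine force-depth model f = k·d + b; it illustrates the general technique, not the authors' estimator.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.98):
    """One recursive least-squares update with forgetting factor `lam`.

    theta: current parameter estimate, shape (n,)
    P:     covariance-like matrix, shape (n, n)
    phi:   regressor for this sample, shape (n,)
    y:     measured output (e.g. needle force) for this sample
    """
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + float(phi.T @ P @ phi))        # gain vector (n, 1)
    err = y - float(phi.T @ theta.reshape(-1, 1))       # prediction error
    theta = theta + K.ravel() * err                     # parameter update
    P = (P - K @ phi.T @ P) / lam                       # covariance update
    return theta, P
```

Fed force/depth samples one at a time, the estimate converges on a constant model and can follow slowly varying parameters, because samples older than roughly 1/(1-lam) steps carry little weight.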
Surgical simulators are an integration of many models, capabilities, and functions. Development of a working simulator requires the flexibility to integrate various software models, support interoperability, and facilitate performance optimizations. An object-oriented framework is devised to support multithreaded integration of simulation, deformation, and interaction. A demonstration application has been implemented in Java, leveraging features built into the language, including multithreading, synchronization, and serialization. Future work includes expanding the framework with a broader range of models and interactive capabilities.
Rigorous scientific assessment of educational technologies typically lags years behind the technologies' availability because of the lack of validated instruments and benchmarks. Even when appropriate assessment instruments are available, they may not be applied because of time and monetary constraints. Work in augmented reality, instrumented mannequins, serious gaming, and similar promising educational technologies that have not undergone timely, rigorous evaluation highlights the need for assessment methodologies that address the limitations of traditional approaches. The most promising augmented assessment solutions incorporate elements of the rapid prototyping used in the software industry, simulation-based assessment techniques modeled after methods used in bioinformatics, and analysis methods borrowed from object-oriented programming.
We report on the development of a novel holographic display technology capable of targeting multiple freely moving, naked-eye viewers, and of a demonstrator that exploits this technology to provide medical specialists with a truly interactive, collaborative 3D environment for diagnostic discussions and/or pre-operative planning.
Mass-spring systems are often used to model anatomical structures in medical simulation. They can produce plausible deformations in soft tissue, and are computationally efficient. Determining damping values for a stable mass-spring system can be difficult. Previously stable models can become unstable with topology changes, such as during cutting. In this paper, we derive bounds for the damping coefficient in a mass-spring system. Our formulation can be used to evaluate the stability for user specified damping values, or to compute values that are unconditionally stable.
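The paper's damping bounds are derived there; as a point of reference, the sketch below computes two textbook quantities for a single mass-spring element: the critical damping coefficient c = 2√(mk) and the classic symplectic-Euler step limit Δt < 2/ω for the undamped case. These are standard results used to sanity-check user-specified values, not the paper's formulation.

```python
import math

def critical_damping(m, k):
    """Critical damping for one mass-spring element: c = 2*sqrt(m*k).
    Below this the element oscillates; above it, motion is overdamped."""
    return 2.0 * math.sqrt(m * k)

def max_stable_dt(m, k):
    """Classic stability limit for undamped symplectic Euler:
    dt < 2/omega, where omega = sqrt(k/m) is the natural frequency."""
    return 2.0 / math.sqrt(k / m)

def damping_is_subcritical(c, m, k):
    """Illustrative check that a user-specified damping value does not
    exceed critical damping for the element's mass and stiffness."""
    return c <= critical_damping(m, k)
```

In a simulator, such per-element checks could be re-run after a topology change (e.g. cutting redistributes mass and stiffness), which is exactly when previously stable damping values can fail.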
This paper introduces an enhanced (bootstrapped) method for tracked ultrasound (US) probe calibration. Prior to calibration, a position sensor is used to track an ultrasound probe in 3D space, while the US image is used to determine calibration target locations within the image. From this information, an estimate of the transformation matrix of the scan plane with respect to the position sensor is computed. While all prior calibration methods terminate at this phase, we use this initial calibration estimate to bootstrap an additional optimization of the transformation matrix on independent data to yield the minimum reconstruction error on calibration targets. The bootstrapped workflow makes use of a closed-form calibration solver and an associated sensitivity analysis, allowing rapid and robust convergence to an optimal calibration matrix. Bootstrapping demonstrates superior reconstruction accuracy.
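The bootstrapping stage selects the calibration that minimizes reconstruction error on held-out data. The metric can be sketched as below, assuming homogeneous 4×4 transforms and known 3D target positions; the data layout is an illustrative assumption, not the paper's implementation.

```python
import math
import numpy as np

def reconstruction_rmse(calib, sensor_poses, image_pts, target_pts):
    """RMS distance between reconstructed and known target positions.

    calib:        4x4 image-plane -> sensor transform under evaluation
    sensor_poses: 4x4 sensor -> world transforms reported by the tracker
    image_pts:    (u, v) metric in-plane coordinates of segmented targets
    target_pts:   known 3D world positions of the calibration targets
    """
    sq = 0.0
    for pose, (u, v), p_true in zip(sensor_poses, image_pts, target_pts):
        # Lift the in-plane point to homogeneous 3D (w = 0 out of plane).
        p = pose @ calib @ np.array([u, v, 0.0, 1.0])
        sq += float(np.sum((p[:3] - np.asarray(p_true)) ** 2))
    return math.sqrt(sq / len(target_pts))
```

Evaluating candidate calibration matrices on data held out from the closed-form solve is what guards the bootstrap against simply refitting the original error.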
Our novel approach to teaching Breaking Bad News (BBN) involves having students actively participate in an unsuccessful resuscitation (mannequin), followed immediately by BBN to a standardized patient wife (SPW) portrayed by an actress. Thirty-nine 3rd-year medical students completed a questionnaire and were then divided as follows: Group 1 (n=21) received little to no training prior to speaking with the SPW; Group 2 (n=18) received a lecture and practiced for 1 hour in small groups prior to the resuscitation and BBN. In both groups, self-assessed ability to break bad news (p<.0002 and p<.00001) and ability to have a plan (p<.0004 and p<.0003) improved significantly over baseline, with greater improvement in Group 2. Group 2 (pre-trained) students were rated superior by SPWs in several key areas. This novel approach to teaching BBN to 3rd-year medical students was well received by the students and resulted in marked improvement of self-assessed skills over baseline.
A virtual environment-based endoscopic third ventriculostomy simulator is being developed for training neurosurgeons and as a standardized method for evaluating competency. Magnetic resonance (MR) images of a patient's brain are used to construct the geometry model, realistic behavior in the surgical area is simulated by physical modeling, and surgical instrument handling is replicated by a haptic interface. The completed virtual training simulator will help surgeons practice the techniques repeatedly and effectively, serving as a powerful educational tool.
Distributed surgical virtual environments are desirable because they substantially extend the accessibility of computational resources through network communication. However, network conditions critically affect the quality of a networked surgical simulation in terms of bandwidth limits, delays, and packet losses. A solution to this problem is to introduce a middleware layer between the simulation application and the network, which can take actions to enhance the user-perceived simulation performance. To comprehensively assess the effectiveness of such a middleware, we propose several evaluation methods in this paper: semi-automatic evaluation, middleware overhead measurement, and usability testing.
Minimally invasive surgery (MIS) has become very common in recent years thanks to the many advantages it offers patients. However, because of the difficulties surgeons encounter in learning and mastering this technique, several training methods and metrics have been proposed in order, respectively, to improve surgeons' abilities and to assess their surgical skills. In this context, this paper presents a biomechanical analysis method for the surgeon's movements during exercises involving instrument tip positioning and depth perception in a laparoscopic virtual environment. Estimation of several biomechanical parameters enables us to assess the abilities of surgeons and to distinguish an expert surgeon from a novice. A segmentation algorithm has been defined to investigate the surgeon's movements in depth and to divide them into component sub-movements.
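The abstract does not detail its segmentation algorithm; a common baseline for splitting a movement into sub-movements is to cut the instrument-tip speed profile at dwells below a velocity threshold, which the following sketch illustrates (threshold and sampling are illustrative assumptions).

```python
def segment_submovements(speeds, dt, threshold):
    """Split a sampled speed profile into (start_time, end_time) segments
    wherever speed stays at or above `threshold` (a simple baseline, not
    the paper's algorithm). `dt` is the sampling interval in seconds."""
    segments, start = [], None
    for i, v in enumerate(speeds):
        if v >= threshold and start is None:
            start = i                      # movement onset
        elif v < threshold and start is not None:
            segments.append((start * dt, i * dt))  # movement offset
            start = None
    if start is not None:                  # profile ends mid-movement
        segments.append((start * dt, len(speeds) * dt))
    return segments
```

Counting such sub-movements is one classic discriminator between experts and novices: novice trajectories tend to fragment into more corrective sub-movements.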
This paper describes the magnitude and patterns of forces obtained using a probe, equipped with a six-axis force/torque sensor, in knee arthroscopy. The probe was used by orthopaedic surgeons and trainees, who performed 11 different tasks in 10 standard knee arthroscopies. The force magnitudes and patterns generated are presented, which can support the development of virtual arthroscopy systems with realistic haptic feedback. The results were compared across both groups of surgeons. A difference in the force patterns generated by senior versus junior surgeons was noted, which can aid the development of an objective assessment system for arthroscopy skills. The results could also prove useful for assessing future performance in real arthroscopy.
An advantage of computer-assisted orthopaedic surgery (CAOS) over traditional surgery is improved precision of implant positions and trajectories in 3D space. However, implementing these trajectories often adds an extra step to the operation that increases operative time and requires extra training. This paper reports a study of variation in time-to-task and learning curve in performing a standard task of targeting in 3D space using Hull's CAOSS. It shows that time-to-task can be reduced by replacing a 3D targeting task with multiple independent 2D targeting tasks, while potentially reducing targeting error. Based on this better understanding of targeting, a novel jig was developed for performing dynamic hip screw (DHS) insertion using CAOSS that would provide improved targeting performance by the surgeon.
The ability to manually specify contours in volumetric data is often required when automatic algorithms are unable to accurately extract the desired volume. Conventionally, the contours are drawn over a slice, working on a plane-by-plane basis usually constrained to orthogonal planes. In defining the contour on a 2D image, the user faces the problem of losing 3D context (does the tissue belong inside or outside the contour?). This requires constant back-and-forth movement between the plane view where the interaction takes place (usually with the mouse) and the 3D volumetric view that provides the contextual information. We present an interactive environment that allows efficient contouring while providing contextual 3D information to the user.
In vertebroplasty, the physician relies on both sight and feel to properly place the bone needle through various tissue types and densities, and to help monitor the injection of PMMA bone cement into the vertebra. Incorrect injection and reflux of the PMMA into areas where it should not go can result in detrimental clinical complications. This paper focuses on the human-computer interaction for simulating PMMA injection in our virtual spine workstation. Fluoroscopic images are generated from the CT patient volume data and the simulated volumetric flow using a time-varying 4D volume rendering algorithm. The user's finger movement is captured by a data glove, and an Immersion CyberGrasp is used to provide the variable resistance felt during injection by constraining the user's thumb. In preliminary experiments with our interfacing system, comprising simulated fluoroscopic imaging and haptic interaction, we found that the former has a larger impact on the user's control during injection.
Chemoembolization is an important therapeutic procedure. A catheter is navigated to the artery that feeds the tumor, and chemotherapy drugs and emboli are injected directly into the tumor. There is a risk that an embolus may lodge incorrectly and deprive normal tissue of its blood supply. This paper focuses on visualization of flow particles in a simulation of chemotherapy drug injection for training hand-eye coordination skills. We assume that the flow follows a defined path through the hepatic vascular system from the catheter tip. The vascular model is constructed using sweeping and blending operations. Quadrilaterals aligned to face the viewer are drawn for the trail of each particle, with each quadrilateral in a trail determined using bilinear interpolation. On the simulated fluoroscopic image, the flow is rendered as overlaid, semitransparent quadrilaterals representing the particles' trails. This visualization model achieves a good visual approximation of the flow of particles inside the vessels under fluoroscopic imaging.
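Aligning each trail quadrilateral to face the viewer is standard billboarding: the quad's axes are rebuilt each frame from the view direction. A minimal sketch (the corner ordering, up hint, and sizes are illustrative):

```python
import numpy as np

def billboard_quad(center, view_dir, up, half_size):
    """Corners of a quad centered at `center`, facing along `view_dir`.

    The quad's right/up axes are built from the view direction and a
    world-up hint, so the quad always presents its face to the viewer.
    """
    view = np.asarray(view_dir, float)
    view = view / np.linalg.norm(view)
    right = np.cross(np.asarray(up, float), view)   # quad's horizontal axis
    right = right / np.linalg.norm(right)
    true_up = np.cross(view, right)                 # quad's vertical axis
    c = np.asarray(center, float)
    return [c + half_size * (sx * right + sy * true_up)
            for sx, sy in ((-1, -1), (1, -1), (1, 1), (-1, 1))]
```

Rendering the trail as a strip of such semitransparent quads keeps the particle stream visible from any viewpoint without modeling its true 3D extent.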
This paper presents an evaluation and comparison of two different types of software for generating a 3D model from medical imaging data: first, a dedicated 3D reconstruction interface (Mimics, by Materialise) and, second, engineering CAD interfaces (SolidWorks and AutoCAD). Advantages and limitations of both software types are outlined, and observations on the 3D reconstruction of anatomic surfaces are presented.