Ebook: Medicine Meets Virtual Reality 12
The Medicine Meets Virtual Reality conference is a leading forum for surgical simulation and its supporting technologies, haptics and modeling. Assessment and validation studies of newly developed simulation tools are presented with the aim of enhancing the traditional medical education curriculum. Robotics, data visualization and fusion, networking, displays, and augmented reality are additionally explored at MMVR as tools to improve clinical diagnosis and therapy. The proceedings offer medical educators, device developers, and clinicians a wide spectrum of research on data-focused technology utilized in a medical context.
Imaging technology has always been integral to the content of MMVR. At every conference, we have heard from researchers who assess new methods for visualizing and comparing the complex and transient relationships between anatomical structures, biological processes, and surgical therapies.
Ten years ago, the concept of full-body scanning was first presented at MMVR. This technology is now mainstream, and the recent proliferation of scanning centers reflects the expectation that medical science will enable us to take full control of our health destiny. Body scanning, an elective procedure, has become controversial: data can be misleading and interpretation subjective, and self-referring patients may be unaware of the technology's limitations. However, the possibility of looking deeply into one's own body and preemptively correcting health problems is undeniably attractive. Only cosmetic surgery seems to inspire the same willingness to pay out-of-pocket.
And it doesn't matter whether scanning centers prove to be a cure-all or a fad, because imaging has already transformed medical diagnosis and care. This reality was acknowledged when the 2003 Nobel Prize in Physiology or Medicine was awarded to Dr. Paul Lauterbur and Sir Peter Mansfield for their research on MRI. Today, physicians routinely order patient scans to assist diagnosis and to monitor post-therapy progress. Manufacturers improve imaging devices constantly, providing clinicians with ever greater speed and resolution. Imaging is among the most vital tools for creating a future of better health.
Surgical simulation has become another key aspect of the MMVR curriculum. Enabled by increasingly refined haptic and tissue-modeling techniques, emerging simulator technologies allow surgeons to attempt and repeat unfamiliar procedures in a manner not practical on living patients. “Practice makes perfect” is as valid for surgeons as for athletes and musicians. In time, the safety and efficiency of surgical simulators mean they will be accepted into, and will no doubt enhance, traditional surgical education. The end result will be better-trained surgeons and improved surgical outcomes.
In the preface to the 4th MMVR Proceedings (1996), we wrote, “This is the possibility and the challenge: the transformation of medicine through communication.” MMVR is about communication. People talk and listen, agree and disagree. The conference is a forum of ideas and experience, vibrant with the exchange of information. At MMVR, people who are passionate about their work initiate and strengthen connections with the like-minded. They learn what their colleagues are doing, assessing failure as well as success. All of this is done with the ultimate goal of transforming physicians' capability to improve patient health.
As conference organizers, we find it exciting that MMVR presenters are often young students. This enhances the exploratory nature of the conference, the vitality of the forum. It is critical for science that junior researchers are allowed to present fresh but as yet untested ideas. It is also imperative that they receive criticism and guidance from those who are more experienced.
We want MMVR to stimulate regular breakthroughs in imagination, to inspire the next tools for medical education, diagnosis, and care. We see technology as the door to better health—a door wide open to those who can approach problems creatively and with vision that is guided by purposeful communication with peers.
We thank all MMVR participants for being part of this conference's success. And we give special thanks to those researchers whose work is published here, to share with all.
This paper presents physics-based modeling for a colonoscopy training simulator. The colon is modeled as a chain of beam elements along its medial axis to capture global bending motions. Timoshenko beam theory is applied to the colon centerline extracted from a medical image, and the stiffness matrix of the colon is formulated using the finite element method. The colonoscope model consists of rigid elements connected by torsional springs and dampers. This modeling allows global bending motions to be simulated in real time.
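The abstract leaves the element formulation at a high level; as a minimal sketch, the bending stiffness of a single two-node Timoshenko beam element (a textbook result) could be computed and assembled along the extracted centerline roughly as follows. The function name and section parameters are illustrative, not taken from the paper.

```python
import numpy as np

def timoshenko_beam_stiffness(E, G, I, A, L, kappa=5.0/6.0):
    """4x4 bending stiffness of a 2-node Timoshenko beam element.

    DOFs: [w1, theta1, w2, theta2] (transverse deflection and rotation
    at each node). Phi is the shear-deformation parameter; Phi -> 0
    recovers the Euler-Bernoulli element.
    """
    Phi = 12.0 * E * I / (kappa * G * A * L**2)
    c = E * I / ((1.0 + Phi) * L**3)
    return c * np.array([
        [ 12.0,           6.0*L,  -12.0,           6.0*L],
        [ 6.0*L, (4.0+Phi)*L**2,  -6.0*L, (2.0-Phi)*L**2],
        [-12.0,          -6.0*L,   12.0,          -6.0*L],
        [ 6.0*L, (2.0-Phi)*L**2,  -6.0*L, (4.0+Phi)*L**2],
    ])

# The global stiffness matrix of the colon model is then assembled by
# summing each element matrix into the DOFs of its two end nodes along
# the centerline chain.
```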
Minimally invasive surgical procedures are becoming common in surgical practice; however, these interventional procedures require different skills than conventional surgical techniques. A sound training process is therefore essential for executing a surgical procedure successfully and safely.
Computer-based simulators, with appropriate tactile feedback devices, can be an efficient means of facilitating the education and training process. In addition, virtual reality surgical simulators can reduce the costs of education while providing realism with regard to tissue behaviour and real-time interaction.
This work draws on the results of the HERMES project (HEmatology Research virtual MEdical System), conceived and managed by Consorzio CETMA - Research Centre; the aim of this project is to build an integrated system to simulate a coronary angioplasty intervention.
Medical knowledge and skills essential for tomorrow's healthcare professionals continue to change faster than ever before, creating new demands in medical education. Project TOUCH (Telehealth Outreach for Unified Community Health) has been developing methods to enhance learning by coupling innovations in medical education with advanced technology in high-performance computing and next-generation Internet2, embedded in virtual reality environments (VRE), artificial intelligence and experiential active learning. Simulations have been used in education and training to allow learners to make mistakes safely in lieu of real-life situations, learn from those mistakes and ultimately improve performance by subsequently avoiding them. Distributed virtual interactive environments are used over distance to enable learning and participation in dynamic, problem-based, clinical, artificial-intelligence rules-based virtual simulations. The virtual reality patient is programmed to change dynamically over time and to respond to the learner's manipulations. Participants are fully immersed within the VRE platform using a head-mounted display and tracker system. Navigation, locomotion and handling of objects are accomplished using a joy-wand. Distribution is managed via the Internet2 Access Grid using point-to-point or multicasting connectivity through which the participants can interact. Medical students in Hawaii and New Mexico (NM) participated collaboratively in problem solving and management of a simulated patient with a closed head injury in the VRE, dividing tasks, handing off objects, and functioning as a team. Students stated that opportunities to make mistakes and repeat actions in the VRE were extremely helpful in learning specific principles. The VRE created higher performance expectations and some anxiety among users. VRE orientation was adequate, but students needed time to adapt and practice in order to improve efficiency. This was also demonstrated successfully between Western Australia and UNM. We successfully demonstrated the ability to fully immerse participants in a distributed virtual environment, independent of distance, for collaborative team interaction in medical simulation designed for education and training. The ability to make mistakes in a safe environment is well received by students and has a positive impact on their understanding, as well as their memory of the principles involved in correcting those mistakes. Bringing people together as virtual teams for interactive experiential learning and collaborative training, independent of distance, provides a platform for distributed “just-in-time” training, performance assessment and credentialing. Further validation is necessary to determine the potential value of the distributed VRE in knowledge transfer and improved future performance, and should entail training participants to competence in using these tools.
The need for ever-increasing achievements in athletic performance is pushing the acquisition costs and salaries of top performers to unprecedented heights. Individual talent, good physical condition and relentless training, however, do not by themselves guarantee success of the team effort. Team performance is determined by game strategy and team coordination. Ideally, during a game, the team coach should be able to gauge the instantaneous physical condition and performance reserves of each player on the team and direct game strategy accordingly. Our experimental setup explores the feasibility of objectively estimating, in real time, the intensity of physical activity and the physiological performance reserves of each and every player on the team during a game of soccer.
In 1999 the Greek Ministry of Justice decided to utilise telemedicine to improve health services in the largest prison in Greece (Korydallos Prison). The Nikea Hospital in Piraeus undertook to support the effort. For 12 months following installation, intensive “hands-on” training on the use of the system was offered to the staff of both the Korydallos Prison and the Nikea Hospital. However, serious operational problems, related either to prison bureaucracy or to the inflexibility of the Greek National Health Service, have annulled the effectiveness of the Korydallos Prison telemedicine system. Still, analysis of the system's development history reveals that: (1) if freed from bureaucratic and labour-related obstacles, prison telemedicine is a viable option; (2) telemedicine can avert transfers to out-of-prison medical facilities; (3) if properly implemented, telemedicine can generate substantial savings; and (4) telemedicine can greatly improve the quality of care available to prisoners.
During its transition to a free economy, Bulgaria benefited from foreign aid provided by Greece. One of the projects was the clinical and educational telemedicine link, begun in 1997, between the Medical University of Varna in Bulgaria and the Faculty of Medicine of Aristotle University of Thessaloniki in Greece. In terms of educational activities, the Bulgarian side of the network supports (a) electronic design and publishing activities, (b) web hosting and mail server activities and (c) satellite communications. In addition, it supports an electronic classroom equipped with personal workstations, multimedia projectors and videoconference facilities. Communications are via the ISDN network. In terms of its telemedicine activities, the network provides remote medical assistance to “language handicapped” travellers and to migrant workers in both countries. The main clinical experience is remote consultations in immunology. This admittedly limited experience demonstrates that telemedicine can be used to provide assistance to remote colleagues. In cases where the patient cannot communicate with the attending physician, the use of telemedicine can greatly improve the quality of care available to travellers and migrant workers.
Doctors and radiologists are trained to infer and interpret three-dimensional information from two-dimensional images. Traditionally, they analyze sets of two-dimensional images obtained from imaging systems based on x-rays, computed tomography, and magnetic resonance; these images correspond to slices or projections onto a single plane. As scanner resolution increases in all directions, so does the complexity of the data available for diagnostic purposes. Using volume rendering techniques, massive stacks of image slices can be combined into a single image with important features emphasized, increasing the doctor's ability to extract useful information. A hybrid visualization approach combining 2D slices and 3D visuals is presented, drawing on the best features of both: 2D slices emulate conventional medical images, while 3D images provide additional information, such as the spatial location of features within the surrounding structures and their 3D shape.
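As a rough illustration of how a stack of slices is combined into a single image, here is a minimal front-to-back alpha-compositing sketch; `color_map` and `alpha_map` stand in for transfer functions and are hypothetical, not anything specified in the abstract.

```python
import numpy as np

def composite_slices(slices, color_map, alpha_map):
    """Front-to-back alpha compositing of a stack of 2D slices.

    slices: (N, H, W) scalar volume, front slice first.
    color_map/alpha_map turn a slice of scalar values into per-pixel
    color (H, W, 3) and opacity (H, W) in [0, 1].
    """
    n, h, w = slices.shape
    color = np.zeros((h, w, 3))
    alpha = np.zeros((h, w, 1))
    for i in range(n):
        c = color_map(slices[i])                 # (H, W, 3)
        a = alpha_map(slices[i])[..., None]      # (H, W, 1)
        color += (1.0 - alpha) * a * c           # accumulate color
        alpha += (1.0 - alpha) * a               # accumulate opacity
        if np.all(alpha > 0.99):                 # early termination
            break
    return color
```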
There are three reasons to create physical replicas of human anatomy: (1) to better visualize the shape of a single organ or a section of anatomy; (2) to visualize spatial relationships in three dimensions; and (3) to use accurate replicas to practice or rehearse otherwise high-risk clinical procedures in the laboratory. This paper describes a project to fabricate a replica of a carotid artery. It discusses the gathering of data, the conversion to a volume, and the subsequent conversion to a manufacturable form.
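The paper's fabrication pipeline is only summarized above; a common way to take the "volume to manufacturable form" step is to extract an iso-surface and export it in a format a rapid-prototyping machine accepts. A sketch under that assumption, using scikit-image's marching cubes and a hand-rolled ASCII STL writer (the iso-value and voxel spacing are placeholders):

```python
import numpy as np
from skimage import measure

def volume_to_stl(volume, iso, path, spacing=(1.0, 1.0, 1.0)):
    """Extract an iso-surface from a CT/MR volume and write ASCII STL."""
    verts, faces, _, _ = measure.marching_cubes(volume, level=iso,
                                                spacing=spacing)
    with open(path, "w") as f:
        f.write("solid anatomy\n")
        for tri in faces:
            v0, v1, v2 = verts[tri]
            n = np.cross(v1 - v0, v2 - v0)        # facet normal
            n = n / (np.linalg.norm(n) + 1e-12)
            f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n")
            f.write("    outer loop\n")
            for v in (v0, v1, v2):
                f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
            f.write("    endloop\n  endfacet\n")
        f.write("endsolid anatomy\n")
```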
An on-line virtual three-dimensional immersive environment for navigating the colorimetric characterization of the Visible Human Dataset (VHD) cryosection cross-sectional color images is introduced. Real-time analysis of the color component characteristics of a user-defined set of VHD images is now possible. This is a potentially useful resource for the many developers working on the VHD raw data, and it could also be used in medical education.
Master/slave telemanipulator systems can be applied in minimally invasive heart surgery. However, due to the beating heart and the difficulty of locating points inside it, a surgical task such as cutting can be very difficult. In order to avoid surgical error, the "active constraint" concept can be applied. This paper shows an example of an "active constraint" environment used for minimally invasive heart surgery. Experiments have been carried out with a 2-DOF master, and the preliminary results validate the present approach.
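The abstract does not define the constraint formulation; one common reading of "active constraint" (also called a virtual fixture) is a forbidden-region boundary that cancels inward tool motion and pushes the master back. A minimal sketch, assuming a planar boundary and illustrative gains:

```python
import numpy as np

def constrain_motion(p, v, plane_point, plane_normal, k_wall=500.0):
    """Enforce a planar 'forbidden region' boundary on a slave tip.

    p, v: tip position and commanded velocity (3-vectors).
    plane_normal points into the allowed half-space. Returns the
    constrained velocity and a restoring force for the master device.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    d = np.dot(p - plane_point, n)   # signed distance; < 0 = penetration
    v_out = np.asarray(v, dtype=float).copy()
    force = np.zeros(3)
    if d < 0.0:
        v_n = np.dot(v_out, n)
        if v_n < 0.0:                # velocity component driving deeper
            v_out -= v_n * n         # cancel it
        force = -k_wall * d * n      # stiff virtual wall pushes back
    return v_out, force
```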
Mechanical testing of abdominal organs has a profound impact on surgical simulation and surgical robotics development. Due to the nonlinear and viscoelastic nature of soft tissue, it is crucial to test it in surgically relevant ranges of applied force, deformation, and duration, both for incorporating haptic realism into surgical simulators and for safe operation of surgical robots. In order to determine these ranges, a system known as the Blue DRAGON was used to track the motions of and the forces applied to surgical tools during live procedures, quantifying how surgeons typically perform a minimally invasive surgical procedure. Thirty-one surgeons of varying skill were recorded performing three different surgical tasks. Grasping force (as applied to the tool handles) and handle angle for each tool were the signals of interest among the 26 channels acquired by the system in real time. These data were analyzed for their magnitudes and frequency content. Using the tool contact state, an algorithm selected tissue grasps so that measures were analyzed during grasps only, and grasp durations were obtained. The mean force applied to the tool handles during tissue grasps was 8.52 N ± 2.77 N; the maximum force was 68.17 N. Ninety-five percent of the handle angle frequency content was below 1.98 Hz ± 0.98 Hz. Average grasp time was 2.29 s ± 1.65 s, and 95% of all grasps were held for 8.86 s ± 7.06 s or less. The average maximum grasp time during these tasks was 13.37 s ± 11.42 s. These results form the basis for determining how abdominal tissues are to be mechanically tested in ranges and durations of force and deformation that are surgically realistic. Additionally, this information may serve as design specifications for new surgical robots or haptic simulators.
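As a hedged sketch of the kind of spectral analysis described (finding the frequency below which 95% of the signal power lies), assuming a uniformly sampled handle-angle channel:

```python
import numpy as np

def freq_containing_power(signal, fs, fraction=0.95):
    """Frequency below which `fraction` of the signal power lies,
    e.g. applied to a handle-angle channel sampled at fs Hz."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                         # remove DC offset
    psd = np.abs(np.fft.rfft(x))**2          # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0/fs)
    cum = np.cumsum(psd)
    cum /= cum[-1]                           # normalized cumulative power
    idx = min(np.searchsorted(cum, fraction), len(freqs) - 1)
    return freqs[idx]
```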
Constrained minimally-invasive surgical environments create a number of challenges for the surgeon and for automated tools designed to aid in the performance and analysis of complex procedures. The 3D reconstruction of the operative field opens up a number of possibilities for immersive presentation, automated analysis, and post-operative evaluation of surgical procedures.
This paper presents a method for estimating complete 3D information about scope and instrument positioning from monocular imagery. These measurements can be used as the basis for deriving and presenting additional cues during procedures, and can also be used for post-procedure analysis such as objective estimates of high-level performance measures like economy of motion and ergonomic metrics.
Generating patient specific dynamic models is complicated by the complexity of the motion intrinsic and extrinsic to the anatomic structures being modeled. Using a physics-based sequentially deforming algorithm, an anatomically accurate dynamic four-dimensional model can be created from a sequence of 3-D volumetric time series data sets [1]. While such algorithms may accurately track the cyclic non-linear motion of the heart, they generally fail to accurately track extrinsic structural and non-cyclic motion. To accurately model these motions, we have modified a physics-based deformation algorithm to use a meta-surface defining the temporal and spatial maxima of the anatomic structure as the base reference surface. A mass-spring physics-based deformable model, which can expand or shrink with the local intrinsic motion, is applied to the meta-surface, deforming this base reference surface to the volumetric data at each time point. As the meta-surface encompasses the temporal maxima of the structure, any extrinsic motion is inherently encoded into the base reference surface and allows the computation of the time-point surfaces to be performed in parallel. The resultant 4-D model can be interactively transformed and viewed from different angles, showing the spatial and temporal motion of the anatomic structure. Using texture maps and per-vertex coloring, additional data such as physiological and/or biomechanical variables (e.g., mapping electrical activation sequences onto contracting myocardial surfaces) can be associated with the dynamic model, producing a 5-D model. For acquisition systems that may capture only limited time series data (e.g., only images at end-diastole/end-systole or inhalation/exhalation), this algorithm can provide useful interpolated surfaces between the time points. Such models help minimize the number of time points required to usefully depict the motion of anatomic structures for quantitative assessment of regional dynamics.
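The deformation algorithm itself is described in [1]; purely as an illustrative sketch, one explicit integration step of a mass-spring surface pulled toward per-frame volumetric targets might look as follows. The data-attraction term, gains, and unit masses are all assumptions.

```python
import numpy as np

def deform_step(x, v, edges, rest_len, target, k=50.0, c=2.0,
                k_fit=10.0, dt=1e-3):
    """One explicit Euler step of a mass-spring surface.

    x, v     : (N, 3) vertex positions and velocities
    edges    : (E, 2) index pairs of connected vertices
    rest_len : (E,)   spring rest lengths (from the meta-surface)
    target   : (N, 3) per-vertex attraction points sampled from the
               volumetric data at the current time frame
    """
    f = k_fit * (target - x) - c * v                  # data pull + damping
    d = x[edges[:, 1]] - x[edges[:, 0]]
    length = np.linalg.norm(d, axis=1, keepdims=True)
    fs = k * (length - rest_len[:, None]) * d / np.maximum(length, 1e-9)
    np.add.at(f, edges[:, 0], fs)                     # spring on node 0
    np.add.at(f, edges[:, 1], -fs)                    # reaction on node 1
    v = v + dt * f                                    # unit masses assumed
    x = x + dt * v
    return x, v
```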
In this paper we propose an open-source/open-architecture framework for developing organ-level surgical simulations. Our goal is to facilitate shared development of reusable models, to accommodate heterogeneous models of computation, and to provide a framework for interfacing multiple heterogeneous models. The framework provides an intuitive API for interfacing models with spatial relationships. It is specifically designed to be independent of the specifics of the modeling methods used, and therefore facilitates seamless integration of heterogeneous models and processes. Furthermore, each model has separate geometries for visualization, simulation, and interfacing, allowing the modeler to choose the most natural geometric representation for each case.
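The actual API is not reproduced in the abstract; the sketch below merely illustrates the stated design idea, separate geometries per model and method-agnostic coupling, with hypothetical names.

```python
from abc import ABC, abstractmethod

class SimObject(ABC):
    """One organ-level model in the framework (hypothetical interface).

    The modeling method behind `step` stays opaque, so heterogeneous
    models of computation (FEM, mass-spring, ...) can coexist.
    """

    @abstractmethod
    def step(self, dt: float) -> None:
        """Advance the internal model of computation by dt seconds."""

    @abstractmethod
    def visualization_geometry(self):
        """Geometry used only for rendering (e.g. a detailed mesh)."""

    @abstractmethod
    def interface_geometry(self):
        """Boundary through which other models exchange forces and
        displacements; may differ from the simulation geometry."""

    def couple(self, other: "SimObject") -> None:
        """Register a spatial interface with another, possibly
        heterogeneous, model (default: no coupling)."""
```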
The aim of computer haptics is to enable the user to touch, feel and maneuver virtual objects through a haptic interface. As the user “feels” the virtual object by applying force through the interface, complex calculations must be performed in real time to generate a feedback force appropriate to the material properties of the object being “touched”. We propose a method for modeling soft bodies that incorporates non-linear, viscoelastic, anisotropic behavior, enables real-time user interaction, and still satisfies the high force-feedback frequency requirements. Here, we restrict the user's interaction with virtual objects to palpation.
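The paper's constitutive model is not given here; as a minimal stand-in, a Kelvin-Voigt-style law with a cubic elastic term captures the flavor of nonlinear viscoelastic palpation feedback (anisotropy omitted; all gains illustrative). Whatever the model, it must be cheap enough to evaluate inside the roughly 1 kHz haptic loop.

```python
import numpy as np

def palpation_force(depth, rate, n_hat, k1=200.0, k3=4.0e5, b=5.0):
    """Tool-tip reaction force for palpating a soft body.

    depth: penetration depth (m), rate: penetration velocity (m/s),
    n_hat: unit surface normal. Cubic elastic term models the
    strain-stiffening typical of soft tissue; b adds viscosity.
    """
    if depth <= 0.0:
        return np.zeros(3)               # no contact, no force
    magnitude = k1 * depth + k3 * depth**3 + b * rate
    return max(magnitude, 0.0) * n_hat   # never pull the tool inward
```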
A versatile apparatus for studying the cutting of soft tissue with a surgical scalpel was designed and constructed. Experiments were performed with pig liver (ex vivo) to measure the blade-tissue interaction forces at cutting speeds ranging from 0.1 cm/s to 2.54 cm/s. The experimentally measured force-displacement curves reveal that the liver cutting process is made up of a sequence of repeating local units with similar features, each comprising a linear deformation phase followed by a crack growth phase. A method was developed to quantify the deformation resistance of the tissue during each local deformation phase in cutting. This deformation resistance is presented in the form of a self-consistent local effective Young's modulus (LEYM), determined by post-processing the force-displacement data with finite element models. Values for the LEYM were determined from both plane-stress and plane-strain finite element models; the plane-stress values were within a close bound of the plane-strain values. Results for the self-consistent LEYM at different cutting speeds show that the tissue's resistance to deformation decreased as the cutting speed increased.
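A hedged sketch of one way to split such a force-displacement record into repeating local units, using sudden force drops as crack-growth markers; the paper's actual LEYM comes from finite element post-processing, which the simple slope estimate below only gestures at.

```python
import numpy as np

def split_cutting_units(force, disp, drop_frac=0.2):
    """Split a force-displacement record into repeating local units.

    A unit is a rising (deformation) phase ended by a sudden force
    drop (crack growth). Returns the slope of each rising phase as a
    crude proxy for the local deformation resistance.
    """
    force = np.asarray(force, dtype=float)
    disp = np.asarray(disp, dtype=float)
    peaks = []
    for i in range(1, len(force) - 1):
        if (force[i] >= force[i - 1] and force[i] > force[i + 1]
                and force[i + 1] < (1.0 - drop_frac) * force[i]):
            peaks.append(i)              # peak followed by a big drop
    slopes, start = [], 0
    for p in peaks:
        if p - start >= 2:               # need a few points to fit
            slopes.append(np.polyfit(disp[start:p + 1],
                                     force[start:p + 1], 1)[0])
        start = p + 1
    return slopes
```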
To facilitate automatic segmentation, we adopt a Support Vector Machine (SVM) to localize the left ventricle, and the segmentation is then carried out with a narrow-band level set. The method of generating the narrow band is improved to reduce computation time. Based on the imaging characteristics of tagged left-ventricle MR images, block-pixel variation (BPV) and intensity comparability are introduced to improve the speed term of the level set and to increase the precision of the segmentation. Our method can segment tagged left-ventricle MR images accurately and automatically.
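The abstract does not specify the SVM features or windowing scheme; the toy localization sketch below uses flattened pixel patches with scikit-learn, and the mean of the positive windows could seed the narrow-band level set. Patch size, stride, and kernel choice are all assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def train_lv_locator(patches, labels):
    """Train an SVM that flags patches containing the left ventricle."""
    X = np.array([np.asarray(p).ravel() for p in patches])
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X, labels)                   # labels: 1 = LV, 0 = background
    return clf

def localize_lv(clf, image, size=32, stride=8):
    """Slide a window over the image; return the mean center of the
    positive windows (or None), usable as a level-set seed point."""
    hits = []
    h, w = image.shape
    for y in range(0, h - size, stride):
        for x in range(0, w - size, stride):
            patch = image[y:y + size, x:x + size].ravel()[None, :]
            if clf.predict(patch)[0] == 1:
                hits.append((y + size // 2, x + size // 2))
    return np.mean(hits, axis=0) if hits else None
```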
A major requirement for surgical simulation is to allow virtual tissue cutting. This paper presents a scalable and adaptive cutting technique based on a mass-spring mesh. By analogy with digital logic design, an arbitrary incision is modeled systematically by translating the cutting process into a state diagram; subdivision of mesh elements is driven by the state transitions. Node redistribution, local re-meshing and deformation are applied to refine the subdivided mesh.
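The paper's state diagram is not reproduced here; the toy transition table below only illustrates the idea of driving subdivision from state transitions on mesh edges (the states and events are invented for illustration).

```python
from enum import Enum, auto

class EdgeState(Enum):
    INTACT = auto()
    INTERSECTED = auto()   # blade currently crossing the edge
    SPLIT = auto()         # edge duplicated, element subdivided

# Transitions fire as the blade path crosses edges; reaching SPLIT
# triggers local subdivision and re-meshing of the incident elements.
TRANSITIONS = {
    (EdgeState.INTACT, "blade_enters"): EdgeState.INTERSECTED,
    (EdgeState.INTERSECTED, "blade_exits"): EdgeState.SPLIT,
    (EdgeState.INTERSECTED, "blade_retracts"): EdgeState.INTACT,
}

def advance(state, event):
    """Table-driven transition; unknown events leave the state alone."""
    return TRANSITIONS.get((state, event), state)
```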
This paper presents the results of several early studies of human haptic perception sensitivity when probing a virtual object. A 1-degree-of-freedom (DoF) rotary haptic system, designed and built for this purpose, is also presented. The experiments assessed the maximum forces applied in a minimally invasive surgery (MIS) procedure, quantified the compliance sensitivity threshold when probing virtual tissue, and identified the haptic loop rate necessary for haptic feedback to feel realistic.
We demonstrate that classical Business Process Reengineering (BPR) methods can be successfully applied to computer-aided surgery, increasing the safety and efficiency of the overall procedure through an integrated workflow management system. Computer-guided prostate brachytherapy, a sophisticated treatment delivered by an interdisciplinary team, is ideally suited to our method. Detailed suggestions for improving the whole procedure could be derived with our modified BPR method.
We aim to provide next-generation Magnetic Resonance Imaging (MRI) technology with an integrated solution for reducing motion artifacts in brain imaging applications. New developments in the field of MRI are revolutionizing the diagnostic capabilities of the technique, e.g. functional MRI (fMRI). Unfortunately, motion artifacts are a prominent problem in cerebral MRI, especially in difficult patient populations (e.g. chronic pain, children, neonates). Patient motion artifacts are present in 2D sequences, but are extremely detrimental in the multi-slice 3D sequences often employed in fMRI. The problem of motion compensation in MRI technology deals with:
• Identification of the source as well as the pattern of motion.
• Obtaining a mathematical model of motion that can be used to identify and then compensate for the motion effects.
• Optimizing the image acquisition sequence in order to minimize, or even eliminate, the effect of motion.
We propose a method to obtain a quantitative measure of the movement of the head between different data acquisition points in both MRI and functional MRI examinations.
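One standard way to quantify rigid head movement between two acquisition points, given corresponding landmarks, is a least-squares (Kabsch) fit; this is an assumption about method offered for illustration, not a description of the authors' approach.

```python
import numpy as np

def rigid_motion(src, dst):
    """Least-squares rigid transform (Kabsch) between two (N, 3) sets
    of corresponding points, e.g. head landmarks extracted at two
    acquisition time points. Returns R, t with dst ~= src @ R.T + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0,                  # guard against reflections
                 np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```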
Surgical dexterity in operating theatres has traditionally been assessed subjectively. Electromagnetic (EM) motion tracking systems such as the Imperial College Surgical Assessment Device (ICSAD) have been shown to produce valid and accurate objective measures of surgical skill. To allow for video integration, we modified the data acquisition module and built it into the ROVIMAS analysis software. We then used DirectX 9.0 DirectShow video capture, with the system clock as a time stamp, for synchronized concurrent acquisition of kinematic data and video frames. Interactive video/motion data browsing was implemented to let the user concentrate on frames exhibiting kinematic properties that could indicate operative errors. We exploited video-data synchronization to calculate the camera visual hull, identifying its 3D vertices using the ICSAD electromagnetic sensors. We also concentrated on high-velocity peaks as a means of identifying potentially erroneous movements, to be confirmed by studying the corresponding video frames. The outcome of the study clearly shows that the kinematic data are precisely synchronized with the video frames and that velocity peaks correspond to large, sudden excursions of the instrument tip. We validated the camera visual hull by both video and geometric kinematic analysis, and we observed that procedures whose velocity graphs contain fewer sudden peaks are less likely to include erroneous movements. This work presents further developments of the well-established ICSAD dexterity analysis system. Synchronized real-time motion and video acquisition provides a comprehensive assessment solution by combining quantitative motion analysis tools with qualitative targeted video scoring.
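As an illustrative sketch of velocity-peak flagging on tracked tip positions (the threshold and sampling setup are assumptions, not the ICSAD/ROVIMAS implementation), the flagged frame indices would index directly into the synchronized video.

```python
import numpy as np
from scipy.signal import find_peaks

def velocity_peaks(positions, fs, threshold=0.15):
    """Flag frames with high instrument-tip speed for video review.

    positions: (N, 3) tracked tip coordinates in metres, sampled at
    fs Hz; threshold in m/s is illustrative. Returns the per-frame
    speed trace and the indices of peak frames.
    """
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) * fs
    peaks, _ = find_peaks(speed, height=threshold)
    return speed, peaks
```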
Visualization of medical image information can be achieved by using color scales to enhance aspects of the data. We have used the cardinal directions of color to create a continuous representation of phase data that wraps around every 360 degrees, and added another dimension using luminance to illustrate amplitude.
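A minimal sketch of such a mapping: phase to hue (cyclic, so the scale wraps seamlessly at 360 degrees) and amplitude to luminance via the HSV value channel. The normalization choice is an assumption.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def phase_amplitude_image(phase, amplitude):
    """Render phase (radians) as hue and amplitude as luminance.

    phase, amplitude: 2D arrays of equal shape; returns (H, W, 3) RGB.
    Hue is periodic, so the color scale wraps continuously at 2*pi.
    """
    hue = (phase % (2 * np.pi)) / (2 * np.pi)            # 0..1, cyclic
    amp = amplitude / (np.abs(amplitude).max() + 1e-12)  # normalize
    hsv = np.stack([hue, np.ones_like(hue),
                    np.clip(amp, 0.0, 1.0)], axis=-1)
    return hsv_to_rgb(hsv)
```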
Simulating soft tissue deformation in real time has become increasingly important in providing a realistic virtual environment for training surgical skills. Several methods have been proposed with the aim of rendering the mechanical and physiological behaviour of human organs in real time, one of the most popular being the Finite Element Method (FEM). In this paper we present a new approach to the solution of the FEM problem, introducing the concept of parent and child meshes within the development of a hierarchical FEM. Online selection of the child mesh is presented, with the purpose of adapting the mesh hierarchy in real time. This permits further refinement of the child mesh, increasing the detail of the deformation without slowing down the simulation, and makes it possible to integrate force feedback. The results presented demonstrate the application of our proposed framework using a desktop virtual reality (VR) system that incorporates stereo vision with co-located haptics via a desktop Phantom force feedback device.
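The paper's online child-mesh selection criterion is not given here; as a toy stand-in, one might refine the parent cells nearest the haptic tool, where deformation detail is most perceptible. The radius and data layout are assumptions.

```python
import numpy as np

def select_child_mesh(parent_cells, tool_pos, radius=0.02):
    """Pick parent cells to refine into a child mesh.

    parent_cells: (M, 3) cell centroids; tool_pos: (3,) haptic tool
    tip; radius in metres. Returns indices of cells to refine this
    frame; the rest stay at the coarse parent resolution.
    """
    dist = np.linalg.norm(parent_cells - tool_pos, axis=1)
    return np.nonzero(dist < radius)[0]
```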