Ebook: Medicine Meets Virtual Reality
This publication deals with how leading-edge technology will affect the future of medical and surgical practice by improving access, quality, and continuity of care while reducing cost. Contributors to the book are the world's leading researchers and developers in the field. It will be of interest to physicians, surgeons, information scientists, biomedical professionals, corporate futurists, biomechanical engineers, educators, roboticists, medical technologists, rehabilitation specialists, systems integrators/engineers, and psychotherapists/behaviourists.
The Medicine Meets Virtual Reality conference is a snapshot of the healthcare (r)evolution: four days of progress reports, thirty hours of ten-minute glimpses at the creative genius that is shaping the future of healthcare.
In the six years since we first undertook to plan the conference, virtual reality, as both technology and concept, has become a catch-phrase. What was once literally unheard-of seems now nearly ubiquitous. Furthermore, in the six years since MMVR was first held, healthcare, as both social concern and industry, has changed from being an isolated domain of trained insiders to an omnipresent subject of public scrutiny and concern.
In the eons since humans first practiced the art of healing, medicine has evolved from alchemy to art. From magic potions to chemotherapy, from chanting to irradiation, from poultices to robotics: we've come a long way. Think of the transfer of technology devised to image earth from space now being used to detect breast disease. Miraculous means of diagnosis and treatment appear, as if by magic, every day. And all of the science and technology we have devised serves no higher purpose than ministering to our fellow beings.
However, all the art, science, and technology we can devise are meaningless without the artist. The miraculous means are merely tools for healthcare professionals who serve us. The shaman placed her life on edge to reach the healing state which enabled her to effect a cure. The physician of today sets her life apart, for education and training which will enable her to devote her life to being able to heal.
Virtual reality in medicine: telemedicine, robotics, simulation and the rest are nothing more than the ability to create the “laying on of hands” where it was not possible before. These new technologies are not devised to create a separation between doctor and patient, but the possibility of joining them across space and even time.
As in other arts, creativity in medicine requires financial support. It takes money to enable doctors and researchers to focus on endeavors which may not produce any short-term reward. However, in the current system of healthcare, the role of managers—insurance and system administrators—seems to be changing from encouraging patron to repressive master. Instead of promoting healing whenever possible, managers now seem to deny care whenever legally feasible. Paradoxically, it is they who profit most from the creative application of science and technology. Doctors are working harder than ever before and are miraculously breaking old barriers to human well-being. Yet we hear people asking: Do we all have access to the resulting benefits? Why does it seem the profits are going to the managers and not to those who are creating? Why doesn't it seem doctors are saying (and doing) something about it?
While spiritual healers treat the maladies of the soul, doctors (and their assistants) are entrusted with our physical selves. Our nation's political foundation guarantees the freedom to get help for our soul, so why is it so different for our body? The laying on of hands (in either context) should never be denied to a fellow citizen, or really, to a fellow human.
Science and technology will not be the challenge of the next century. Discovery is the natural outgrowth of communication and commitment. The challenge lies in enabling the artists to perform their magic, and in making sure that we, as patients, are the beneficiaries of the (r)evolution.
Karen S. Morgan and James D. Westwood
Aligned Management Associates, Inc.
San Diego, California
December 1997
The budget deficit, reductions in Defense spending and the lack of return on the "peace dividend" have resulted in reduced federal funding for research. A number of programs have attempted to remedy the problem, with collaborative funding as one of the major solutions. However, within the medical research community, there continues to be a very long technology transfer cycle. Mimicking the processes of non-medical high-technology research and employing a number of these innovative solutions in medical research could afford a pathway to success. A template for how this could be accomplished through the cooperative efforts of academia, industry and government is presented, using examples of past successes and failures.
This paper describes a software environment for visualizing and segmenting volumetric data sets such as CT, MRI and the Visible Human data set. The goal is to produce an intuitive environment where the expert knowledge of the end user can be employed to directly guide visualization and segmentation of the data. The environment is built around the Fakespace Immersive Workbench (TM), which provides the user with the illusion that the data set volume resides in the space directly above the workbench surface. Using a position/orientation-tracked probe, the user is able to interact with the visualization algorithm and segment the data set to expose features of interest. Segmentation can be performed in either the ray space of the volume rendering algorithm or the coordinate space of the data volume itself. The segmentation results can be saved and used for other purposes, including the construction of polygonal models.
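As an illustration of segmentation in the coordinate space of the data volume, the following minimal sketch (our own, in Python/NumPy; the function name carve_sphere and the spherical probe footprint are assumptions, not the system's actual implementation) removes voxels around a tracked probe tip:

```python
# Minimal sketch of coordinate-space segmentation: a tracked probe tip
# carves a spherical region out of a labeled voxel volume. Illustrative
# only; the actual Immersive Workbench pipeline is more involved.
import numpy as np

def carve_sphere(labels, center, radius, voxel_size=1.0):
    """Mark voxels within `radius` of the probe tip `center` as removed (0).
    `center` is given in (x, y, z) voxel indices."""
    zs, ys, xs = np.indices(labels.shape)
    dist2 = (xs - center[0])**2 + (ys - center[1])**2 + (zs - center[2])**2
    labels[dist2 * voxel_size**2 <= radius**2] = 0
    return labels

# Example: a 64^3 volume, everything initially labeled "1" (visible);
# each tracked probe sample removes material around the tip position.
volume = np.ones((64, 64, 64), dtype=np.uint8)
for tip in [(32, 32, 32), (36, 32, 32)]:
    carve_sphere(volume, tip, radius=5)
print(volume.sum(), "voxels still visible")
```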
Arthroscopy has already become an irreplaceable diagnostic method. The arthroscope, with optics and light source, and the exploratory probe are inserted into the knee joint through two small incisions underneath the patella. Currently, the skills required for arthroscopy are taught through hands-on clinical experience. As arthroscopy became a more common procedure even in smaller hospitals, it became obvious that special training was necessary to guarantee the qualification of surgeons. On-the-job training proved to be insufficient.
Therefore, research groups from the Berufsgenossenschaftliche Unfallklinik Frankfurt am Main approached the Fraunhofer Institute for Computer Graphics to develop a training system for arthroscopy based on virtual reality (VR) techniques. Two main issues are addressed: the three-dimensional (3-D) reconstruction process and the 3-D interaction. To provide the virtual environment, a realistic representation of the region of interest with all relevant anatomical structures is required. Based on a magnetic resonance image sequence, a realistic representation of the knee joint suitable for computer simulation was obtained. Two main components of the VR interface can be distinguished: the 3-D interaction to guide the surgical instruments and the 2-D graphical user interface for visual feedback and control of the session. The 3-D interaction is realized by means of VR techniques, providing a simulation of an arthroscope and intuitive handling of the other surgical instruments.
Currently, the main drawback of the developed simulator is the lack of haptic perception, especially force feedback. In cooperation with the Department of Electro-Mechanical Construction at the Technical University of Darmstadt, a haptic display is being designed and built for the VR arthroscopy training simulator. In parallel, we have developed a concept for integrating the haptic display in a configurable way, as sketched below.
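The configurable integration mentioned above might be organized around a small abstraction layer. The sketch below is our own illustration (class and method names are assumptions, not the simulator's interface) of how the simulation could run with or without force-feedback hardware:

```python
# Hypothetical abstraction layer: the simulator talks to a HapticDisplay
# interface, so it can run with or without force-feedback hardware.
from abc import ABC, abstractmethod

class HapticDisplay(ABC):
    @abstractmethod
    def read_pose(self):
        """Return instrument position/orientation as (x, y, z, rx, ry, rz)."""

    @abstractmethod
    def apply_force(self, fx, fy, fz):
        """Command a force (in newtons) on the instrument handle."""

class NullHaptics(HapticDisplay):
    """Fallback used when no haptic hardware is configured."""
    def read_pose(self):
        return (0.0, 0.0, 0.0, 0.0, 0.0, 0.0)

    def apply_force(self, fx, fy, fz):
        pass  # no hardware: forces are silently dropped

def make_haptics(config):
    """Pick an implementation from a configuration dictionary."""
    if config.get("haptics") == "darmstadt":
        # A driver for the Darmstadt display would be instantiated here.
        raise NotImplementedError("hardware driver not shown in this sketch")
    return NullHaptics()

device = make_haptics({"haptics": None})
device.apply_force(0.0, 0.0, 1.5)  # safe even without hardware attached
```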
As further advances in visual display technologies and force feedback devices are integrated into virtual systems, questions remain: What level of reality does the system provide to the user? Is the environment convincing enough to engage the user and to maximize transfer? Are the visual and haptic displays fully integrated to provide seamless operation in the simulated environment? Does the system provide not only the ability to navigate through a simulated environment, but also realistic interaction with instrumentation and structures? We report on our advances in developing a virtual simulation system for training in functional endoscopic sinus surgery (FESS). Specifically, we will present work on subject trials exploring the realism provided by integrated visual and haptic displays, and compare and contrast surface vs. volume representations for presenting realistic models of the anatomy for surgical interaction.
The 3D visual presentation of biodynamic events in human joints is a challenging task. Although the 3D reconstruction of high-contrast structures from CT data has been widely explored, there is much less experience in reconstructing small, low-contrast soft tissue structures from inhomogeneous and sometimes noisy MR data. Further, there are no algorithms for tracking the motion of moving anatomic structures through MR data. We present a comprehensive approach to 3D musculoskeletal imagery that addresses these challenges. Specific imaging protocols, segmentation algorithms and rendering techniques are developed and applied to render complex 3D musculoskeletal systems for 4D visual presentation. Applications of our approach include analysis of rotational motion of the shoulder, knee flexion, and other complex musculoskeletal motions, and the development of interactive virtual human joints.
The rapid growth of the World Wide Web (WWW) enables access to huge amounts of data and applications. The diversity of data structures and applications has led to the concept of network computing, where the data is encapsulated within the application. The end user does not have to worry about tools for data manipulation, as they are bundled together with the data itself. However, the user usually has to pay a price in the form of degraded performance. While JAVA is gradually taking its place as the network cross-platform programming language, it is clear that it currently does not support high-performance visualization. The purpose of this paper is to demonstrate that high-performance volume rendering, traditionally reserved for high-end visual computing, can now be made widely available in a cross-platform fashion using VRML and JAVA.
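One way to make volume data available in this cross-platform fashion is to emit VRML that any compliant browser plug-in can display. The generator below is our own illustration (not the authors' renderer), writing a stack of semi-transparent textured slices, a common object-order approximation of volume rendering:

```python
# Illustrative generator (not the authors' system): writes a VRML97 file
# approximating a volume as a stack of semi-transparent textured slices.
import numpy as np

def slice_node(z, pixels):
    """One translucent quad textured with a grayscale slice (PixelTexture)."""
    h, w = pixels.shape
    hexvals = " ".join(f"0x{v:02X}" for v in pixels.flatten())
    return (
        f"Transform {{ translation 0 0 {z:.3f}\n"
        f"  children Shape {{\n"
        f"    appearance Appearance {{\n"
        f"      material Material {{ transparency 0.6 }}\n"
        f"      texture PixelTexture {{ image {w} {h} 1 {hexvals} }} }}\n"
        f"    geometry IndexedFaceSet {{ solid FALSE\n"
        f"      coord Coordinate {{ point [ -1 -1 0, 1 -1 0, 1 1 0, -1 1 0 ] }}\n"
        f"      coordIndex [ 0 1 2 3 -1 ] }} }} }}\n")

volume = (np.random.rand(8, 8, 8) * 255).astype(np.uint8)  # toy data set
with open("volume.wrl", "w") as f:
    f.write("#VRML V2.0 utf8\n")
    for k in range(volume.shape[0]):
        f.write(slice_node(z=0.1 * k, pixels=volume[k]))
```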
Introduction: Telesurgical laparoscopic telementoring has been successfully implemented between the Johns Hopkins Bayview Medical Center and the Johns Hopkins Hospital in 27 prior operations. In this previously reported series, telerobotic mentoring was achieved between two institutions 3.5 miles apart. We report our experience in performing two international surgical telementoring operations.
Purpose: To determine the clinical utility of international surgical telementoring during laparoscopic surgical procedures.
Method: A laparoscopic adrenalectomy was telementored between Innsbruck, Austria and Baltimore, MD (5,083 miles), and a laparoscopic varicocelectomy was telementored between Bangkok, Thailand and Baltimore, MD (10,880 miles). Both procedures were performed over three ISDN lines (384 kbps) with an approximate 1-second delay.
Results: Both procedures were successfully accomplished with an uneventful postoperative course.
Conclusion: International telementoring is a viable method of instructing less experienced laparoscopic surgeons through potentially complex laparoscopic procedures, as well as potentially improving patient access to specialty care.
In many clinical circumstances, viewing sequential radiological image data as three-dimensional models is proving beneficial. However, designing customized computer-generated radiological models is beyond the scope of most physicians, due to specialized hardware and software requirements. We have created a simple method for Internet users to remotely construct and locally display three-dimensional radiological models using only a standard web browser. Rapid model construction is achieved by distributing the hardware-intensive steps to a remote server. Once created, the model is automatically displayed on the requesting browser and is accessible to multiple geographically distributed users. Implementation of our server software on large-scale systems could be of great service to the worldwide medical community.
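The client side of such a service might look like the following sketch, in which the endpoint URL, payload format and response type are all invented for illustration; the hardware-intensive reconstruction happens on the remote server:

```python
# Hypothetical client for a remote model-construction service. The URL
# and data formats are assumptions made for illustration only.
import urllib.request

def request_model(slices_path, server="http://example.org/reconstruct"):
    """Upload an image series; the server returns a displayable 3D model."""
    with open(slices_path, "rb") as f:
        data = f.read()
    req = urllib.request.Request(
        server, data=data,
        headers={"Content-Type": "application/octet-stream"})
    with urllib.request.urlopen(req) as resp:  # server renders remotely
        return resp.read()                     # e.g. a model for the browser

# model_bytes = request_model("ct_series.raw")  # then shown in the browser
```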
ATTRACT is a project that intends to provide telemedicine services over cable television networks. ATTRACT is a European Commission funded project (Healthcare Telematics). The main objective of ATTRACT is to take advantage of emerging European cable television network infrastructures and offer cost-effective care to patients at home. This will be achieved through a set of broadband network applications that competitively provide low-cost interactive healthcare services at home. The applications will be based on existing or developing European cable television network infrastructures in order to provide all kinds of users with affordable homecare services. It is ATTRACT's intention that citizens and users benefit from high-quality access to home telemedical services, which also implies cost savings for patients, their families and the already overburdened health institutions. In addition, European industries will have extensive opportunities to develop, evaluate and validate broadband network infrastructures providing multimedia and interactive telemedical services at home. ATTRACT contributes to the EU telecommunications and telematics policy objectives that promote the development and validation of “applications and services” which “provide an intelligent telematic environment for the patient in institutions and other points of care that helps the patient to continue, as far as possible, normal activities and external communication”.
Computer and telecommunications technologies have unleashed a wide range of powerful tools for gathering, storing, and distributing patient information. Computerized records enable healthcare providers to rapidly access patient data and to closely monitor patients from a distance. These significant advantages can be further extended by using the technology to more fully involve patients in their own healthcare management. A patient-centric approach to telemedicine means that the patient takes on additional responsibility and control; the benefits of increased patient involvement should translate into improved compliance, reduced litigation, lower costs, and better outcomes. Furthermore, there are often important ethical questions that are best decided by the informed patient. Patients have a right to know what information is being gathered and who will be authorized to access that information. Current health information systems do not adequately address these issues, and telemedicine applications, particularly home-based telemedicine, are forcing everyone to take a closer look at patients' roles in their own healthcare. In this presentation, a patient-centric home telemedicine database is described, its limitations are discussed, and future directions are proposed.
The Internet has established itself as an affordable, extremely viable and ubiquitous communications network that can be easily accessed from virtually any point in the world. This makes it ideally suited for medical image communications. Issues regarding security and confidentiality of information on the Internet, however, need to be addressed for both occasional individual users and consistent enterprise-wide users. In addition, the limited bandwidth of most Internet connections must be factored into the development of a realistic user model and resulting protocol. Open-architecture issues must also be considered so that images can be communicated to recipients who do not have similar programs. Further, application-specific software is required to integrate image acquisition, encryption and transmission into a single, streamlined process.
Using Photomailer™ software provided by PhysiTel Inc., the authors investigated sending secured still images over the Internet. The scope of their investigation covered the use of the Internet for communicating images for consultation, referral, mentoring and education. Photomailer™ software was used at several local and remote sites. The program was used for both sending and receiving images. It was also used for sending images to recipients who did not have Photomailer™, but instead relied on conventional email programs.
The results of the investigation demonstrated that, using products such as Photomailer™, images could be quickly and easily communicated from one location to another via the Internet. In addition, the investigators were able to retrieve images from their existing email accounts, thereby providing greater flexibility and convenience than systems which require scheduled transmission of information on dedicated hardware. We conclude that Photomailer™ and similar products may provide a significant benefit and improve communications among colleagues, providing an inexpensive means of sending secured images over the Internet.
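A minimal sketch of an acquire-encrypt-send pipeline of this kind follows, assuming the third-party cryptography package for symmetric encryption and the Python standard library for email. It is for illustration only, is not Photomailer's actual mechanism, and omits key distribution entirely:

```python
# Illustrative pipeline: encrypt an image file and send it as an email
# attachment. Not Photomailer's mechanism; key exchange is not shown.
import smtplib
from email.message import EmailMessage
from cryptography.fernet import Fernet  # third-party package

def send_encrypted_image(path, sender, recipient, smtp_host="localhost"):
    key = Fernet.generate_key()          # must reach recipient out of band
    with open(path, "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    msg = EmailMessage()
    msg["From"], msg["To"] = sender, recipient
    msg["Subject"] = "Encrypted image"
    msg.add_attachment(ciphertext, maintype="application",
                       subtype="octet-stream", filename=path + ".enc")
    with smtplib.SMTP(smtp_host) as s:   # any conventional mail relay
        s.send_message(msg)
    return key
```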
This paper describes a connection held on 8 July 1997, in collaboration with NASA's JPL in Pasadena, California, between the Eighth International Conference on Advanced Robotics (ICAR ‘97), then under way in Monterey, California, and the Telerobotics Laboratory of the Politecnico di Milano, linked in a multipoint teleconference through the MCU in Rome with the Aula Magna of the Politecnico and the Palazzo Affari ai Giureconsulti of the Milan Chamber of Commerce. The demonstration allowed a Sankyo SCARA robot and an ABB robot to be telecontrolled from Monterey to Milan over a combined INTERNET+ISDN connection. The robots performed simulated biopsy operations on the prostate, liver and breast, and manipulated a mechanical hand and a model car arranged in a space designed to reproduce the Martian terrain. The event took place four days after the successful landing on Mars of the Pathfinder space probe, from which the “Sojourner” robot, telecontrolled from NASA's JPL, had emerged and begun photographing the Martian terrain; some of these images were transmitted during the connection.
We discuss an implementation of an audio user interface for assisting surgical placement tasks. We assembled and tested an apparatus for evaluating the potential benefit of audio guidance for assisting blind biopsy needle placement tasks. This system improves upon an earlier system we demonstrated (see [1]) by employing three-dimensional audio processing as well as a facility for algorithmically-motivated arbitrary waveform synthesis.
Using this apparatus, an operator attempted to manually follow a predetermined biopsy needle insertion path (trajectory) with an instrumented biopsy needle. This trajectory intercepted a target object embedded within a custom biopsy phantom. The target was invisible to the operator; audio feedback provided the only means of trajectory and target localization.
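For illustration, one plausible audio mapping (our own sketch, not necessarily the authors' scheme) converts the needle tip's perpendicular deviation from the planned trajectory into tone pitch, so a low, steady tone means the operator is on course; deviation_from_path and guidance_tone are invented names:

```python
# Illustrative mapping from trajectory error to an audio cue; not the
# authors' actual synthesis scheme.
import numpy as np

def deviation_from_path(tip, entry, target):
    """Perpendicular distance of the needle tip from the planned line."""
    d = np.asarray(target, float) - np.asarray(entry, float)
    d /= np.linalg.norm(d)
    v = np.asarray(tip, float) - np.asarray(entry, float)
    return np.linalg.norm(v - np.dot(v, d) * d)

def guidance_tone(deviation_mm, sr=8000, dur=0.1):
    """440 Hz on the path, rising ~100 Hz per millimetre of error."""
    freq = 440.0 + 100.0 * deviation_mm
    t = np.arange(int(sr * dur)) / sr
    return np.sin(2 * np.pi * freq * t)   # samples for any audio output

# Tip 1 mm off a straight-down insertion path:
print(deviation_from_path(tip=(1, 0, 5), entry=(0, 0, 0), target=(0, 0, 10)))
```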
To address the needs for performing microsurgical procedures, the SRI telepresence surgery workstation has been combined with a pair of micromanipulator arms. The prototype microsurgery system has been tested with ex-vivo tasks similar to those required for surgical procedures, such as cutting, grasping, suturing, and knot tying. Initial animal testing has been done on a rat model in which end-to-end anastomosis of the femoral artery (approximately 1 millimeter in diameter) was completed with ten rats, and 100% patency was obtained.
To address the needs of surgical training, SRI has begun to develop a system that uses a 6-DOF telepresence workstation. A computer-generated stereo image is reflected in a mirror and appears to be superimposed on the surgeon's hands, creating an immersive and realistic environment. Tools held in the surgeon's hands are connected to left- and right-hand manipulators that both continuously measure tool position/orientation and apply force/torque to the tools. Furthermore, the visual image and tool locations are registered, so that the user perceives that he or she is looking at and moving the simulated tools in the visual image.
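The registration described above can be pictured as a fixed calibration transform applied to every tracked tool pose. The following minimal sketch is our own; the matrix values are placeholders, and a real system would measure them during setup:

```python
# Illustrative only: a fixed 4x4 calibration transform maps tracked tool
# coordinates into the frame of the computer-generated stereo image, so
# the rendered tool overlays the physical one.
import numpy as np

def make_calibration(R, t):
    """Build a homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Placeholder values: display frame = tracker frame rotated 180 degrees
# about z and shifted; in practice these come from a calibration step.
Rz180 = np.diag([-1.0, -1.0, 1.0])
CAL = make_calibration(Rz180, t=[0.0, -50.0, 120.0])   # millimetres

def tool_in_display(tool_xyz):
    """Position at which to render the virtual tool."""
    p = np.append(np.asarray(tool_xyz, float), 1.0)
    return (CAL @ p)[:3]

print(tool_in_display((10.0, 20.0, 30.0)))   # -> [-10., -70., 150.]
```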
The term “frameless image-guided surgery” has become as well-known to surgeons as computerized tomography or the operating room microscope over the past several years. The technologies behind this new surgical option include robotic arms, infra-red camera arrays (1D and 2D), ultrasound, robotic microscopes and magnetic field digitizers. The authors have shown the magnetic field technology incorporated in the Regulus Navigator to be a viable, accurate surgeon's tool, first by integrating a conventional framed device with the magnetic field frameless device, then by advancing to the frameless device alone.
During surgery, the patient's anatomy is first registered to preoperatively acquired radiological data. Surgical instruments are tracked on interactive CT/MRI displays as the surgeon locates his point or volume-in-space within the surgical field and uses his own procedure/technique of choice for surgical treatment. A clinical trial of 221 patients showed an overall mean accuracy of 2.56 mm, with a standard deviation of 1.15 mm, for intraoperative registration. Major concerns about utilizing magnetic field technology in the operating room, such as interference from surrounding metallic objects and equipment, proved manageable while maintaining acceptable accuracy.
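Intraoperative registration of this kind is commonly computed as a paired-point rigid fit. The sketch below shows a generic SVD-based (Kabsch) solution and reports an RMS error of the kind quoted above; it is illustrative only and not necessarily the Regulus Navigator's algorithm:

```python
# Generic paired-point rigid registration (Kabsch/SVD); illustrative, not
# the product's algorithm. P: fiducials in tracker space, Q: the same
# fiducials located in the CT/MRI data (both in millimetres).
import numpy as np

def register_points(P, Q):
    """Least-squares rotation R and translation t with Q ~ P @ R.T + t."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    pc, qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - pc).T @ (Q - qc)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # proper rotation (det = +1)
    return R, qc - R @ pc

# Toy data: four fiducials, rotated 10 degrees about z and shifted.
theta = np.radians(10)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
P = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100]], float)
Q = P @ Rz.T + [5.0, -3.0, 12.0]

R, t = register_points(P, Q)
rms = np.sqrt(np.mean(np.sum((Q - (P @ R.T + t)) ** 2, axis=1)))
print(f"registration RMS error: {rms:.4f} mm")  # ~0 for noise-free data
```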
This paper describes the VIRGY project at the VRAI Group (Virtual Reality and Active Interface), Swiss Federal Institute of Technology (Lausanne, Switzerland). Since 1994, we have been investigating a variety of virtual-reality-based methods for simulating laparoscopic surgery procedures. Our goal is to develop an endoscopic surgical training tool which realistically simulates the interactions between one or more surgical instruments and gastrointestinal organs. To support real-time interaction and manipulation between instruments and organs, we have developed several novel graphic simulation techniques. In particular, we are using live video texturing to achieve dynamic effects such as bleeding or vaporization of fatty tissues. Special texture manipulations allow us to generate pulsing objects while minimizing processor load. Additionally, we have created a new surface deformation algorithm which enables real-time deformations under external constraints. Lastly, we have developed a new 3D object definition which allows us to perform operations such as total or partial object cutting, as well as to selectively render objects with different levels of detail. To provide realistic physical simulation of the forces and torques on surgical instruments encountered during an operation, we have also designed a new haptic device dedicated to the constraints of endoscopic surgery. We are using special interpolation and extrapolation techniques to integrate our 25 Hz visual simulation with the 300 Hz feedback required for realistic tactile interaction.
The complete VIRGY simulator has been tested by surgeons, and the quality of both our visual and haptic simulation has been judged sufficient for training basic surgical gestures.
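The 25 Hz/300 Hz rate bridging mentioned above can be illustrated in miniature. In the sketch below (our own; the abstract does not specify VIRGY's actual scheme), the haptic loop linearly extrapolates the last two simulation forces forward in time:

```python
# Illustrative rate bridging between a 25 Hz simulation and a ~300 Hz
# haptic loop: linear extrapolation of the last two simulation forces.
def extrapolate_force(f_prev, f_curr, t_prev, t_curr, t_now):
    """Linearly extrapolate a 3-component force to haptic time t_now."""
    if t_curr == t_prev:
        return f_curr
    a = (t_now - t_curr) / (t_curr - t_prev)
    return tuple(fc + a * (fc - fp) for fp, fc in zip(f_prev, f_curr))

# Simulation updates every 40 ms (25 Hz); haptics samples every ~3.3 ms.
f = extrapolate_force((0, 0, 1.0), (0, 0, 1.2),
                      t_prev=0.00, t_curr=0.04, t_now=0.0433)
print(f)  # force sent to the device between two simulation frames
```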
In this paper we describe a test-bed we have developed for simulation of abdominal trauma surgery.
The abdominal surgery scene is highly complex and contains many layers of deformable organs. Representing this layered and deformable anatomy with models that can interact, be probed and cut presents a unique challenge. We have met this challenge by applying a variety of technology advances in deformable models, computer graphics, and force-feedback (haptic) interfaces.
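As a miniature of the deformable-model idea, the sketch below uses a mass-spring surface that relaxes toward its rest shape while a spherical probe pushes vertices aside. This is one common technique, shown with our own invented names; the test-bed's actual models may differ:

```python
# Our own minimal mass-spring sketch of a deformable, probe-able surface.
import numpy as np

def relax(verts, springs, rest, probe, radius=0.5, k=0.1, iters=20):
    """Relax spring lengths toward rest while a spherical probe pushes
    vertices out of its volume. springs: (i, j) vertex-index pairs."""
    v = np.array(verts, float)
    probe = np.asarray(probe, float)
    for _ in range(iters):
        for (i, j), r0 in zip(springs, rest):
            d = v[j] - v[i]
            length = np.linalg.norm(d)
            if length > 1e-9:
                corr = 0.5 * k * (length - r0) * d / length
                v[i] += corr        # move both endpoints toward the
                v[j] -= corr        # spring's rest length
        for i in range(len(v)):     # collision with the probe tip
            d = v[i] - probe
            dist = np.linalg.norm(d)
            if 1e-9 < dist < radius:
                v[i] = probe + d / dist * radius
    return v

# A single triangle deformed by a probe near one vertex:
tri = relax([(0, 0, 0), (1, 0, 0), (0, 1, 0)],
            springs=[(0, 1), (1, 2), (0, 2)],
            rest=[1.0, 1.0, 1.0],
            probe=(0.2, 0.2, 0.0))
```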
Objective assessment of surgical technique is currently impossible. A virtual reality simulator for laparoscopic surgery (MIST VR) models the movements needed to perform minimally invasive surgery and can generate a score for various aspects of psychomotor skill. Two studies were performed using the simulator: the first assessed surgeons of different surgical experience to validate the scoring system; the second assessed, in a randomised controlled way, the effect of a standard laparoscopic surgery training course. Experienced surgeons (> 100 laparoscopic cholecystectomies) were significantly more efficient, made fewer correctional submovements and completed the virtual reality tasks faster than trainee surgeons or non-surgeons. The training course produced an improvement in efficiency and a reduction in errors, without a significant increase in speed, when compared with the control group. The MIST VR simulator can objectively assess a number of desirable qualities in laparoscopic surgery and can distinguish between experienced and novice surgeons. We have also quantified the beneficial effect of a structured training course on psychomotor skill acquisition.
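Scores of this kind can be illustrated with two simple metrics (our own formulas, not the product's actual algorithm): path efficiency, and a crude count of correctional submovements taken as minima in the speed profile:

```python
# Illustrative psychomotor metrics of the kind a simulator might score;
# not MIST VR's actual scoring algorithm.
import numpy as np

def path_efficiency(positions):
    """Straight-line distance divided by distance actually travelled."""
    p = np.asarray(positions, float)
    travelled = np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1))
    direct = np.linalg.norm(p[-1] - p[0])
    return direct / travelled if travelled > 0 else 1.0

def submovement_count(positions, dt=0.02):
    """Count local minima in the speed profile as correctional moves."""
    p = np.asarray(positions, float)
    speed = np.linalg.norm(np.diff(p, axis=0), axis=1) / dt
    accel = np.diff(speed)
    return int(np.sum((accel[:-1] < 0) & (accel[1:] > 0)))

path = [(0, 0, 0), (1, 0.2, 0), (2, -0.1, 0), (3, 0, 0)]  # sampled tip path
print(path_efficiency(path), submovement_count(path))
```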
A reference system for accessing anatomical information from a complete 3D structure of the whole-body “living human”, including 4D cardiac dynamics, was reconstructed from 3D and 4D data sets obtained from normal volunteers. With this system, we were able to produce a human atlas in which sectional images of any part of the human body can be accessed interactively through real-time image generation.
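The core operation behind interactively accessing sectional images from any part of the body is resampling an arbitrary plane out of the volume. The sketch below is our own nearest-neighbour illustration; a real-time atlas would interpolate and exploit graphics hardware:

```python
# Illustrative arbitrary-plane resampling from a 3D volume
# (nearest-neighbour for brevity).
import numpy as np

def oblique_slice(vol, origin, u, v, size=64, step=1.0):
    """Sample a size x size section spanned by directions u and v."""
    u = np.asarray(u, float); u /= np.linalg.norm(u)
    v = np.asarray(v, float); v /= np.linalg.norm(v)
    img = np.zeros((size, size), dtype=vol.dtype)
    for i in range(size):
        for j in range(size):
            p = np.asarray(origin, float) + step * (i * u + j * v)
            idx = np.round(p).astype(int)
            if all(0 <= idx[k] < vol.shape[k] for k in range(3)):
                img[i, j] = vol[tuple(idx)]   # nearest voxel
    return img

vol = np.random.rand(64, 64, 64)              # stand-in for body data
section = oblique_slice(vol, origin=(32, 0, 0), u=(0, 1, 0), v=(0, 0.7, 0.7))
```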
A system for 3D planning in dental implantology is described. Since exact knowledge of the position of the nervus alveolaris inferior is critical, we present an algorithm for automated detection of this nerve which requires only very little initial user interaction. To allow interactive implant placement on comparatively low-cost PC hardware, we developed hybrid visualization techniques which refrain from using large texture memory and raster engines.
The use of magnetic resonance imaging (MRI) for the real-time guidance of surgical procedures is now undergoing clinical trials. Of the many procedures explored, open craniotomy neurosurgery appears to be among the most promising. Over 50 such cases have been performed at the Brigham and Women's Hospital (BWH) in Boston. We review the technical approach used in these and related procedures. We consider the way in which imaging is used to augment and improve the procedures. As well, the implications of these protocols for remote diagnosis and telesurgery are explored. Finally, the implications of this experience for the insertion of new technology into medicine are discussed.
A computer-aided brain surgery system using virtual reality techniques has been developed. The system is intended to support surgeons' decisions on operative strategy and to train medical students in how to perform brain surgery. The system is built around a high-speed graphics computer. 3D data from MRI or CT are used as input, and the resulting 3D images are displayed on a head-mounted display (HMD) or on a 3D CRT display viewed through glasses. To improve the 3D images, the colors and optical properties of the voxels are refined.