
Ebook: Medicine Meets Virtual Reality

Eighteen years ago we stated our business purpose: the transformation of medicine through communication. We hoped that could happen through conferences, presented papers, and the intellectual camaraderie of such symposia. The transformation is far from complete and has much more to do with what transpires outside of conferences than at the meetings themselves, but we have been privileged to witness significant stages of the metamorphosis.
The first ambulatory surgery facilities were developed in the late '70s, and within ten years nearly half of all surgical procedures were performed outside the hospital. While payers saved billions of dollars, many hospitals were forced to close their doors. The healthcare (r)Evolution accelerated in the early '80s with the advent of outpatient care, and further advances in radiologic technology took diagnosis and treatment to another level, often in ambulatory care facilities, striking another blow to hospitals. Later, combining advances in treatment with advances in information technology, the healthcare (r)Evolution raced forward. Now, with the advent of telemedicine in the '90s, the healthcare (r)Evolution is irrevocable. From electronic patient records to electronic representations of the patient, computers now play an integral role in every aspect of health care. Very soon, health care services will be home-delivered, creating even more economic upheaval in the healthcare industry. And this is good, because the graying global population will require more healthcare services, and behemoth institutions won't deliver them effectively. Ideally, we'll have virtual health care providers, and hospitals will become conference facilities.
Buckminster Fuller's conceptualization of a global energy grid, a quarter century ago, nearly coincided with the World Health Organization's declared goal of Health for All by the Year 2000. That may not happen in the next three years, but it is on the horizon, and access to electrical power is the critical partner. Advances in diagnosis and treatment and the advent of information technology, intersecting with telecommunications, are creating the global transformation of medicine through communication. We humbly suggest that conferences may have had some small thing to do with this (r)Evolution. One such conference is Medicine Meets Virtual Reality, where the cutting edge is honed. Many ideas, outrageous when first presented at the 1992 conference, are now realized daily in clinical practice; tools exhibited at this year's meeting were then only graphics on a screen. Techniques and technologies being developed by a small group of committed individuals are (r)Evolutionizing healthcare. And this year, at the 5th MMVR conference, we saw real movement and change: even in the age of telecommunication and virtual presence, conferences are a great opportunity to document progress.
As the MMVR conference organizers, we sincerely thank all of you who share our purpose by sharing your work and yourselves.
There is theoretical and empirical research supporting the hypothesis that virtual reality technology (VRT) can be efficaciously applied to attenuate the symptoms of mental disorders (Baer, 1996; Rothbaum et al., 1995a, 1995b, 1996). Yet there is also research suggesting that psychiatric therapeutic applications of VRT may induce noxious or unexpected psychological consequences (Kolasinski, 1996; Muscott & Gifford, 1994; Regan & Price, 1994; Regan & Ramsey, 1996; Strickland, 1995). A prudent conclusion would be to advocate ever more sophisticated studies of psychiatric therapeutic applications of VRT concerning (1) increases in the overall socioadaptiveness of patients, (2) the robustness of moderating, modifying, or other intermediary variables affecting VRT therapeutic efficacy, and (3) variables, processes, and hypotheses generated by VRT applications in non-psychiatric fields.
Virtual Reality Environments for Psychoneurophysiological Assessment and Rehabilitation is a European Community-funded project (Telematics for Health, HC 1053; http://www.etho.be/ht_projects/vrepar/) whose aims are:
- to develop a PC-based virtual reality system (PC-VRS) for the medical market that can be marketed at a price accessible to its likely end-users (hospitals, universities, and research centres) and that has the modularity, connectability, and interoperability that existing systems lack;
- to develop three hardware/software modules for the application of the PC-VRS to psychoneurophysiological assessment and rehabilitation. The chosen development areas are eating disorders (bulimia, anorexia, and obesity), movement disorders (Parkinson's disease and torsion dystonia), and stroke disorders (unilateral neglect and hemiparesis).
This paper presents the rationale of the different approaches and the methodology used.
This study is adapting Virtual Reality (VR) technologies to teach children with autism new coping skills that may then be generalized to their everyday lives. Children with autism are challenged by sensory overload and by aversions to a variety of auditory, visual, and tactile stimuli; in addition, their ability to attenuate and/or ignore these stimuli differs from that of the "typical" child. Therefore, our goals were to: (1) assess the potential of each child (both with and without verbal skills) for sustained interaction with task environments; (2) identify which visual, auditory, and kinetic VR components would be attractive to our test group; (3) identify each child's ability to attenuate and/or ignore a variety of distractors; and (4) build new "pivotal behaviors" [2] for learning. These pivotal behaviors include improved time-on-task, meaningful and consistent interaction with the real world, and the screening out of extraneous environmental stimuli. Our findings to date have been very encouraging, and we will continue to investigate the role of VR as a tool for generalized learning and the modification of pivotal behaviors. This ongoing study also provides technology transfer of communications applications from government, universities, and businesses into telemedicine and elementary schools.
Can Virtual Reality (VR) developments in audio navigation for blind persons support therapies for all? Working with Crystal River Engineering, we are developing navigable VR worlds for blind users using spatialized audio [1], [2]. All persons, however, learn through specialized channels: the visual, aural, and kinetic senses. Predominantly visual VR worlds and health informatics models from the World Wide Web may be downloaded, tailored, augmented, and delivered to each of these learning senses using VR. We are also testing, with Boston Dynamics, a proof-of-concept system that downloads three-dimensional, satellite-derived map models from the World Wide Web and makes them navigable by "feeling" the terrain through haptic (tactile or force-feedback) robotic interfaces. Ultimately, these multi-sensory VR access methods (sight, localization by audio, and the "feeling" of data sets) could open up the World Wide Web to individuals with sight impairments. They could also benefit government, business, universities, and (elementary) education, contributing more powerful communications, education, and medical simulation applications on the World Wide Web. This work is part of government technology transfer to telemedicine, (elementary) education, disabilities access to the Web, and new Internet access and productivity efforts under Vice President Gore's National Performance Review.
We have been examining the potential value of a VR system for the palliative care of cancer patients. We recently developed a palliative care system consisting of a 100-inch-wide screen, a head-mounted display (HMD), and an 8-mm video player or a PC. Our goal is to use VR techniques to help alleviate patients' stress and concern regarding their cancer during hospitalization. We can use this system to present (1) personal video movies, (2) video letters from friends and family, (3) personal video instruction about medical examinations, and (4) interactive information about the patient's cancer using a PC-based VR system. Our preliminary results indicate that engaging VR presentations are useful for reducing stress.
For patients with multiple sclerosis and spinal cord injury, virtual reality systems provide new methods of assistance with dysmetria, tremor, spasticity, and weakness. Robust mechanisms exist within the central nervous system to produce the neuroplastic adaptive responses operative in retraining motor activities. Haptic systems cued by the patient's visual environment can produce force corridors that guide the patient's wrist and hand in the performance of specific tasks; such haptic application can substantially reduce motor instability and improve performance. Preliminary clinical approaches, using video tremor tracking and manual force application, indicate the extent of the improvement attainable with this approach. Refinement of these techniques is proceeding toward VR systems that will allow more extensive application to the problems of dysmetria, more general forms of tremor, spasticity, and weakness.
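The abstract does not detail the corridor controller itself; the following is a minimal sketch, assuming a spring-damper wall model with illustrative gains (all names and constants are ours, not the authors'), of how a lateral restoring force could confine the hand to a task path while leaving motion along the path unimpeded:

```python
import numpy as np

# Hedged sketch of a haptic "force corridor": a restoring force pushes
# the hand back toward a target path whenever it drifts outside a
# tolerance band, damping lateral tremor along the way.

K_WALL = 300.0      # corridor wall stiffness, N/m (assumed value)
B_DAMP = 5.0        # lateral damping, N*s/m (assumed value)
HALF_WIDTH = 0.01   # corridor half-width, m (assumed value)

def corridor_force(hand_pos, hand_vel, path_point, path_tangent):
    """Force to apply at the hand for one control cycle.

    path_tangent must be a unit vector along the desired path.
    """
    # Deviation from the path, with the along-path component removed
    # so free motion along the task direction is unimpeded.
    offset = hand_pos - path_point
    lateral = offset - np.dot(offset, path_tangent) * path_tangent
    dist = np.linalg.norm(lateral)
    if dist <= HALF_WIDTH:
        return np.zeros(3)          # inside the corridor: no force
    direction = lateral / dist
    penetration = dist - HALF_WIDTH
    # Spring-damper wall: push back in proportion to penetration and
    # damp the lateral velocity to suppress tremor.
    lateral_vel = hand_vel - np.dot(hand_vel, path_tangent) * path_tangent
    return -K_WALL * penetration * direction - B_DAMP * lateral_vel
```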
Virtual Reality and other technological innovations in medicine provide new challenges to the regulatory framework of the premarket review process for medical devices. By reinventing the government-academia-industry partnership, clinical trial data necessary for a medical device to enter the market can be more efficiently obtained.
The use of higher technology in medicine promises improved outcomes and enhanced productivity; that is, successful techniques will lead to lower-cost, higher-quality care for a larger population. In surgery, these changes range from the more efficient use of skilled medical practitioners, through improvements to conventional practice (a recent example is the shift to endoscopic surgery in the abdomen), to the creation of new procedures which capitalize on the availability of information in new forms. Image Guided Surgery may be defined as the use of advanced technology to help the surgeon see with (1) better resolution, (2) orientation and context setting, (3) higher contrast, and (4) vision inside "solid objects," including the elimination of occlusion by the surgeon's tools or other external items. We describe here the current imaging processes and their limitations with regard to direct guidance of therapy. The physical properties of real-time image acquisition systems are described, along with the mechanisms for inherent and enhanced contrast. Examples of use with surgical instruments or other interventional devices for image-monitored and image-guided procedures are then discussed, and future prospects elucidated.
We have demonstrated high definition and improved resolution using a novel scanning system integrated with a commercial ultrasound machine. The result is a volumetric 3D ultrasound data set that can be visualized using standard techniques. Unlike other 3D ultrasound approaches, image quality is improved over that of standard 2D data, and image definition and bandwidth are improved using patent-pending techniques. The system can be used to image patients or wounded soldiers for general imaging of anatomy such as the abdominal organs, extremities, and neck. Although the risks associated with x-ray carcinogenesis are relatively low at diagnostic dose levels, concerns remain for individuals in high-risk categories. In addition, the cost and limited portability of CT and MRI machines can be prohibitive. In comparison, ultrasound can provide portable, low-cost, non-ionizing imaging.
Previous clinical trials comparing ultrasound to CT demonstrated qualitative and quantitative improvements in ultrasound imaging using the Sandia technologies. Transverse leg images showed much higher clarity and lower noise than traditional ultrasound images; an x-ray CT scan of the same cross-section was provided for comparison. The results of our most recent trials demonstrate the advantages of 3D ultrasound and motion compensation over 2D ultrasound. Metal objects can also be observed within the anatomy.
This paper describes an interdisciplinary effort to simulate and visualize the mechanisms involved in compression neuropathies, specifically the tissue deformation occurring during vaginal delivery. These neuropathies often evolve into chronic pelvic pain. We present our methodology of using high-resolution magnetic resonance acquisitions from submillimeter pulse sequences to build physically plausible volume models that drive interactive 3D simulations of childbirth. This effort will elucidate the tissue movements and mechanics involved in pain disorders and better explain their etiology.
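The abstract leaves the deformation model unspecified; as a hedged illustration only, here is one explicit time step of a mass-spring system, a common choice for physically plausible volumetric soft-tissue models (all constants and names are assumed, not taken from the paper):

```python
import numpy as np

# Hedged sketch: one explicit-Euler step of a mass-spring tissue model.
# Nodes are volume sample points; springs connect neighboring nodes.

STIFFNESS = 50.0   # spring constant, N/m (illustrative)
DAMPING = 0.5      # velocity damping per second (illustrative)
MASS = 0.01        # node mass, kg (illustrative)
DT = 1e-3          # time step, s

def step(pos, vel, edges, rest_len, ext_force):
    """Advance node positions by one time step.

    pos, vel: (n, 3) node positions and velocities
    edges: (m, 2) index pairs of connected nodes
    rest_len: (m,) rest lengths of those springs
    ext_force: (n, 3) external forces, e.g. contact loads
    """
    force = ext_force.copy()
    for (i, j), l0 in zip(edges, rest_len):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        if length == 0.0:
            continue
        # Hooke's law along the edge; equal and opposite on the pair.
        f = STIFFNESS * (length - l0) * (d / length)
        force[i] += f
        force[j] -= f
    vel = (vel + DT * force / MASS) * (1.0 - DAMPING * DT)
    return pos + DT * vel, vel
```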
Modeling the musculoskeletal joint system using biomechanical analysis and computer graphics techniques allows us to visualize normal, diseased, and reconstructed joint function. This model can be used to study the loading of bones and joints under theoretical and simulated activities. In this study, intact cadavers were imaged using MRI, CT scanning, and cryo-sectioning techniques. Using sequential pixel information of bone and soft-tissue boundaries collected from digital camera images, MRI, and CT scans, volumetric models of the musculoskeletal joint system are reconstructed. "Descriptive geometry" techniques, which treat bones as rigid bodies and cartilage, ligaments, and muscles as deformable bodies, are used to construct the model. Joint resultant forces and moments are determined using an inverse dynamics formulation, while ligament tension, joint contact pressure, and bone stresses are solved through a simplified Rigid Body Spring Modeling technique and the Finite Element Method. The results under static and dynamic loading activities can be visualized using interactive computer graphics. The advantages of such a model are the elimination of the need for large numbers of intact cadaveric specimens and the unprecedented capability to study joint loading responses under normal, abnormal, and surgically reconstructed states. Such a model and its analytical capability are ideal for pre-operative planning and computer-assisted orthopaedic surgery. This Visual, Interactive, Computational, and Anatomic Model (VICAM) and its associated analysis capability represent the next generation of technology, which will have an enormous impact on orthopaedic research, education, and patient care.
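For readers unfamiliar with the inverse dynamics formulation the abstract names, a minimal Newton-Euler sketch for a single rigid segment follows; the variable names and the omission of the gyroscopic term are our simplifications, not details of VICAM:

```python
import numpy as np

# Hedged sketch: given a segment's measured kinematics and the load at
# its distal joint, Newton-Euler balance yields the proximal joint
# resultant force and moment.

GRAVITY = np.array([0.0, -9.81, 0.0])  # m/s^2

def joint_resultants(mass, inertia, com_acc, ang_acc,
                     r_prox, r_dist, f_dist, m_dist):
    """Proximal joint force and moment for one rigid body segment.

    mass: segment mass (kg); inertia: 3x3 tensor about the COM
    com_acc, ang_acc: COM linear and angular accelerations
    r_prox, r_dist: vectors from the COM to the proximal/distal joints
    f_dist, m_dist: force and moment applied at the distal joint
    """
    # Newton: sum of forces = m * a  ->  solve for the proximal force.
    f_prox = mass * com_acc - mass * GRAVITY - f_dist
    # Euler: sum of moments about the COM = I * alpha (gyroscopic term
    # omitted for brevity)  ->  solve for the proximal moment.
    m_prox = (inertia @ ang_acc - m_dist
              - np.cross(r_dist, f_dist) - np.cross(r_prox, f_prox))
    return f_prox, m_prox
```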
Converging technologies in the areas of networks, volume visualization algorithms, and computer performance have made possible the development of a new tool for collaboration, one which extends the reach of health professionals and other consumers of volumetric data around the world. TeleInViVo(tm) is a three-dimensional (3D) collaborative volume visualization tool for medical applications. It extends the capabilities of InViVo(tm), a fast volume visualization tool developed at the Fraunhofer IGD, Darmstadt, Germany [1-3], with efficient and intuitive network collaboration features for remote consultation and new modes of interaction. The software runs on both UNIX and Windows NT platforms. TeleInViVo provides a high degree of interactivity for the medical professional when interacting with the patient data, facilitates explanation and communication between field personnel and medical experts located far from the field, and permits viewing of the data in a multitude of ways designed to support rapid and accurate diagnosis. Current efforts involve architectural enhancements to support multiuser, distributed telemedical scenarios. The application includes the following features:
* Volume and subvolume data transmission at user-specified resolution (sketched after this list),
* Synchronization cues,
* Integration of Immersion Probe(tm), a 6 degree-of-freedom input device, for ergonomic 3D data exploration,
* Tools for measuring distances,
* Tools for planning instrument path,
* Arbitrary cutting planes in real time,
* Interactive segmentation tools,
* Virtual video recorder and playback (cine loops),
* 3D stereo mode.
TeleInViVo is an essential part of the MUSTPAC-1 portable 3D ultrasound system developed by Battelle Pacific Northwest Labs, Richland, WA.
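TeleInViVo's wire protocol is not published in this abstract; the sketch below illustrates the general idea behind the subvolume-at-reduced-resolution feature, with a made-up header format and the assumption of uint8 voxel data:

```python
import struct
import numpy as np

# Hedged sketch: crop a region of interest, downsample it by an
# integer step, and ship it with a tiny header so the peer can rebuild
# the array. Header layout and voxel type are our assumptions.

def send_subvolume(sock, volume, lo, hi, step):
    """Send volume[lo:hi] at 1/step resolution over a TCP socket."""
    sub = volume[lo[0]:hi[0]:step, lo[1]:hi[1]:step, lo[2]:hi[2]:step]
    sub = np.ascontiguousarray(sub, dtype=np.uint8)
    sock.sendall(struct.pack("!3I", *sub.shape))  # dims as 3 uint32
    sock.sendall(sub.tobytes())

def _recv_exact(sock, n):
    """Read exactly n bytes, looping over partial receives."""
    chunks = []
    while n > 0:
        chunk = sock.recv(n)
        if not chunk:
            raise ConnectionError("peer closed connection")
        chunks.append(chunk)
        n -= len(chunk)
    return b"".join(chunks)

def recv_subvolume(sock):
    """Receive and rebuild a subvolume sent by send_subvolume."""
    shape = struct.unpack("!3I", _recv_exact(sock, 12))
    data = _recv_exact(sock, shape[0] * shape[1] * shape[2])
    return np.frombuffer(data, dtype=np.uint8).reshape(shape)
```

Sending coarse data first and refining the region of interest on demand keeps a remote consultation interactive even over the low-bandwidth links typical of field deployments such as MUSTPAC-1.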
Total 3D reconstruction of tumor size, shape, and relations with surrounding structures using CT, MRI, sonography, and angiography images can make simulated radical resection of soft-tissue sarcomas possible, thus sparing normal tissues. With our approach, starting from three MR images for a given patient, a new single-image representation of all three parameters is generated using two different techniques on a workstation in a standard UNIX and X11 environment. The first is a transformation linking together the MR parameters and the RGB (red, green, blue) color components. The second is an unsupervised segmentation method based on a number of neural and fuzzy models. We can dynamically render and update a stereo display using field-sequential presentation of left- and right-eye views on the monitor, viewed through CrystalEyes LCD shutter eyewear (StereoGraphics Inc., San Rafael, CA). As the 3D locating tool, a control system based on low-frequency magnetic fields (Polhemus Fastrak) was chosen. Simulations of soft-tissue excisions may be performed in this interactive environment with augmented-reality modalities. All this, in our experience, has greatly facilitated the simulation of soft-tissue sarcoma excisions.
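As a hedged illustration of the first technique (the exact mapping is not given in the abstract), the following sketch normalizes three co-registered MR parameter volumes and assigns them to the red, green, and blue channels:

```python
import numpy as np

# Hedged sketch: fuse three co-registered MR parameter volumes (e.g.,
# T1-, T2-, and proton-density-weighted, an assumed choice) into one
# RGB volume, so each voxel carries all three parameters as a color.

def mr_to_rgb(vol_a, vol_b, vol_c):
    """Map three MR volumes onto the R, G, and B channels (uint8)."""
    channels = []
    for vol in (vol_a, vol_b, vol_c):
        v = vol.astype(np.float32)
        lo, hi = v.min(), v.max()
        # Normalize each parameter to [0, 255] so no channel dominates.
        channels.append(
            (255.0 * (v - lo) / (hi - lo + 1e-9)).astype(np.uint8))
    # Stack along a new last axis: one voxel -> one (R, G, B) triple.
    return np.stack(channels, axis=-1)
```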
Wearable augmented reality medical (WARM) interfaces could provide ubiquitous point-of-care decision support and enhance the quality and efficiency of clinicians' efforts. Creation of such systems involves the design and evaluation of new information displays that leverage the representational and presentational capabilities of three-dimensional AR environments. We describe our first efforts in this process: the implementation of interface objects for display of real-time electrocardiographic monitoring information and an evaluation methodology using a simulated clinical environment. Our pilot data confirm the utility of presentation modes that place simultaneous information tasks in close proximity, and highlight issues encountered in designing new representations of medical information.
This paper describes recent results from a unified computerized system for hand diagnosis and rehabilitation. Automatic diagnosis data collection and Virtual Reality rehabilitation exercises are the main characteristics of the system. The diagnosis subsystem includes a tactile sensing glove in addition to standard devices such as an electronic dynamometer, pinchmeter, and goniometer. Three standard rehabilitation exercises were simulated in a Virtual Reality environment using the WorldToolKit graphics library. The first two exercises (ball squeezing and DigiKey) allow measurement of the finger forces exerted during the rehabilitation routine. The third exercise (peg board) involves the patient's visual-motor coordination. The rehabilitation subsystem uses a VPL DataGlove retrofitted with the Rutgers Master (RM-I) and its interface. The exercises involve manipulation of objects with different stiffnesses and geometries. Grasping forces are modeled and fed back through the Rutgers Master worn on the patient's hand. Data are gathered in real time from both the diagnosis and rehabilitation subsystems; the finger-specific forces recorded during rehabilitation exercises allow better diagnosis of the patient's impairment. An ORACLE database is used to store and manipulate patients' records. Proof-of-concept trials were performed in a clinical environment, and some results of patient record analysis are presented in this paper. A new version of the system using the RM-II haptic interface is presently under consideration.
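The abstract does not give the grasping-force model; a minimal sketch, assuming a linear-spring virtual ball for the squeezing exercise (stiffness and names are illustrative, not the authors' model), is:

```python
import numpy as np

# Hedged sketch: model the virtual ball as a spring, so fingertip
# penetration maps to a resisting force that a haptic glove such as
# the Rutgers Master could display.

BALL_STIFFNESS = 200.0  # N/m, assumed virtual ball stiffness

def fingertip_force(fingertip_pos, ball_center, ball_radius):
    """Resisting force on one fingertip pressing into a virtual ball."""
    offset = fingertip_pos - ball_center
    dist = np.linalg.norm(offset)
    if dist >= ball_radius or dist == 0.0:
        return np.zeros(3)            # no contact, no force
    penetration = ball_radius - dist
    # Linear spring: push the fingertip radially outward, in
    # proportion to how far it has compressed the ball.
    return BALL_STIFFNESS * penetration * (offset / dist)
```

Logging the per-finger force magnitudes computed this way during each exercise is what makes the same loop serve both rehabilitation and diagnosis.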
We previously developed a system with which we have created more than 100 virtual cancer images from the CT or MR data of individual patients with cancer (Cancer Edutainment Virtual Reality Theater: CEVRT). These images can be used to help explain procedures, findings, etc. to the patient, to obtain informed consent, to simulate surgery, and to estimate cancer invasion into surrounding organs. We recently developed a web-based object-oriented database both to access these cancer images and to register medical images at international research sites via the Internet. In this report, we introduce an international medical VR data warehouse created using an object-oriented database.