
Ebook: Medical Imaging

Michael Kuhn
In 1991 we reviewed the user needs of the clinical disciplines of Neuroradiology, Neurosurgery, and Radiotherapy Planning for computer assistance, in particular for medical image analysis, prior to writing our proposal for the COVIRA project [1]. When, based on these requirements, we derived specifications for the algorithms we would have to develop and related them to the state of the art at that time, it became obvious that image segmentation and multimodality image registration would represent major bottlenecks. Since then, around 400 man-months of effort have been spent on these topics in COVIRA alone, and multiples of this may have been spent in Europe as a whole. In view of this, the medical imaging project line (the meeting of representatives of all imaging-related European projects receiving funds under the AIM programme of the EC) decided to conduct a workshop on these topics in April 1994, a time when most of the algorithms had reached reasonable maturity and were being integrated into pilot application systems to undergo clinical validation until the end of 1994.
The time of our initial analysis coincided with the publication of a critical article about the lack of theoretical understanding, and consequently of robustness, of state-of-the-art image segmentation approaches. In their article “Ignorance, Myopia and Naivete in Computer Vision Systems” (CVGIP: Image Understanding, Vol. 53 (1), 112-117, 1991), R.C. Jain and T.O. Binford describe their view of the underlying reasons for the poor status of the field: although a number of competing and promising approaches to image segmentation existed, a thorough comparative evaluation and critical assessment with respect to issues such as generality, theoretical foundation, and stability under intrinsic parameter variations and input image quality had not been achieved on an international scale.
In view of this, a cautious approach was taken in the COVIRA project: while some groups investigated simple but fast, partly interactive segmentation methods, other groups worked on more automatic methods based on theoretical models and on knowledge of the image generation physics. Since we had learned from earlier projects that the clinical user must always have the final say, tools for interactive editing of contours and surfaces were also pursued. All the results of these activities, together with those of other European workers in this field, were reported during the workshop and are laid down in this book.
In the field of multimodality image registration, a similar state-of-the-art analysis had been carried out, based largely on experience from the MMOMS project funded by the European Community during the AIM exploratory action programme, in which the clinical applications for which the acquisition of multimodality images would be justified had also been defined. This topic is currently gaining attention in view of the booming field of image-guided therapy, where frameless stereotaxy is the first application to be pursued on a commercial basis.
As long as we refer to the determination of the geometrical relationship between two independently acquired image data sets from different modalities, and assuming that the image acquisition processes have not introduced significant geometrical distortions, only rigid or affine transformations (translation, rotation, scaling, skewing) need to be considered. When registering patient images with those in an anatomy atlas, the use of so-called elastic registration (or matching) has to be discussed. In the COVIRA project, only rigid registration has been investigated; it is derived from common surfaces or points of reference (fiducial markers or anatomical landmarks) in the different data sets.
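To make the point-based case concrete, the rigid transform aligning corresponding fiducial markers can be computed in closed form as a least-squares (Procrustes/Kabsch) fit. The following Python sketch is a generic illustration under noiseless correspondences, not the COVIRA implementation:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t.
    src, dst: (N, 3) arrays of corresponding fiducial points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # force a proper rotation (det(R) = +1, no reflection)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# demo: recover a known rotation and translation from four markers
rng = np.random.default_rng(0)
pts = rng.random((4, 3))
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
R, t = rigid_register(pts, pts @ R_true.T + t_true)
```

With noisy marker positions the same formula gives the best rigid fit in the least-squares sense; adding an isotropic scale factor would turn it into a similarity transform.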
While surfaces and, even more easily, fiducial markers are most efficiently obtained by suitable image segmentation operations on the data sets involved, anatomical landmarks still have to be specified interactively by the operator because they cannot be adequately modelled. That multimodality image registration thus generally relies on the results of image segmentation processes applied to the input images shows that, as far as clinical use is concerned, the two areas of research cannot be separated. They were rightly combined in the workshop whose proceedings this book represents. We hope that this book will give an overview of recent advances in the field and stimulate further progress.
In this paper we summarize the work on multi-modality image registration performed within the COVIRA (COmputer VIsion in RAdiology) project (AIM, contract A2003). Algorithms for registration using stereotactic frames, anatomical landmark points and anatomical surfaces are presented. Quantification and correction algorithms for geometric distortion in MR and MRA are discussed. These methods are currently being evaluated clinically in six pilot application systems for neuroradiological diagnosis, neurosurgery planning and radiotherapy planning.
In this paper, we show the feasibility of using ridgeness for rigid automatic matching of CT and MR brain images. Image ridgeness can be computed by convolving the image with derivatives of Gaussians. The specific derivatives involved are based on the local gradient and second-order structure. The width of the Gaussian used determines the locality of the computed ridgeness.
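As a concrete illustration of scale-dependent ridgeness, the sketch below builds Gaussian derivative kernels, assembles the Hessian at scale sigma, and takes the negated smaller eigenvalue as a bright-ridge measure. This is a generic second-order ridge detector, not necessarily the authors' exact operator:

```python
import numpy as np

def gauss1d(sigma, order):
    """Sampled Gaussian (order 0) or its 1st/2nd derivative."""
    r = int(4 * sigma)
    x = np.arange(-r, r + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    if order == 1:
        return -x / sigma**2 * g
    if order == 2:
        return (x**2 - sigma**2) / sigma**4 * g
    return g

def sep_conv(img, kx, ky):
    """Separable convolution: rows with kx, then columns with ky."""
    out = np.apply_along_axis(np.convolve, 1, img, kx, mode='same')
    return np.apply_along_axis(np.convolve, 0, out, ky, mode='same')

def ridgeness(img, sigma):
    """Negated smaller Hessian eigenvalue at scale sigma (bright ridges)."""
    Lxx = sep_conv(img, gauss1d(sigma, 2), gauss1d(sigma, 0))
    Lyy = sep_conv(img, gauss1d(sigma, 0), gauss1d(sigma, 2))
    Lxy = sep_conv(img, gauss1d(sigma, 1), gauss1d(sigma, 1))
    small = (Lxx + Lyy) / 2 - np.sqrt(((Lxx - Lyy) / 2)**2 + Lxy**2)
    return np.maximum(-small, 0.0)

# demo: a one-pixel-wide bright line is a ridge at its centre row
img = np.zeros((21, 21))
img[10, :] = 1.0
r = ridgeness(img, sigma=2.0)
```

Increasing sigma makes the response favour wider structures, which is the locality/scale trade-off mentioned above.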
This paper briefly reviews techniques for registration between 3D objects. In the first part, a new method is introduced that deals with the case where non-rigid registration is required. In the second part, some typical medical applications of rigid registration are described.
Data fusion in medical imaging encompasses both i) multisensor fusion of anatomical and functional information and ii) interpatient data fusion by means of warping models. These two aspects establish the methodological framework necessary for anatomical modelling, especially of structures of the brain. The principal aim of the work presented here is the investigation of multimodal 3D neuroanatomical data.
Three aspects of data fusion are considered in this paper. The first concerns the integration of data from multiple modalities (multisensor fusion applied to CT, MRI, DSA, PET, SPECT, or MEG). In particular, the problem of warping patient data to match an anatomical atlas is reviewed and a solution is proposed. The second aspect addressed in this paper, the identification of anatomical structures or features, is related to data fusion because it is a prerequisite step for many data fusion techniques. Two techniques have been developed: the first analyses geometrical features of the image to produce a fuzzy mask for labelling the structure of interest; the second segments the major cerebral structures by means of statistical image features and relaxation techniques. Finally, the paper presents a review of up-to-date 3D display techniques, with special emphasis on 3D display of combined data.
In this paper, we describe two novel approaches to interactive image segmentation, developed within the COVIRA [1] framework. Both methods have been designed to extract the boundary of a connected object. Accuracy and real-time processing are the main objectives to be accomplished. The speed requirement is met by a) drastically reducing the user interaction during processing (semi-automation) and b) performing pseudo-3D processing (propagation of information between slices). Accuracy is improved through multivalued segmentation. Results are reported, using the COVIRA reference data set.
This paper describes a technique for building compact models of the shape and appearance of flexible objects (such as organs) seen in 2D and 3D medical images. The models are derived from the statistics of sets of labelled images of examples of the objects. Each model consists of a flexible shape template, describing how important points of the objects can vary, and statistical models of the expected grey levels in regions around each model point. The shape models are parameterised in such a way as to allow only legal configurations. The models have proved useful in a wide variety of applications. We describe how they can be used in local image search and give examples of their application in medical image segmentation. We describe how 2D models can be used to segment 3D objects in volume images and to track structures in image sequences. We also describe how to generate full 3D models and illustrate their use to segment 3D Magnetic Resonance images of the brain.
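The core of such a flexible shape template is a point distribution model: principal component analysis of aligned landmark sets, with the shape parameters clamped to a few standard deviations so that only plausible (“legal”) shapes can be generated. A minimal sketch on synthetic, pre-aligned landmark data (the data here are illustrative, not from the paper):

```python
import numpy as np

def build_shape_model(shapes, n_modes):
    """PCA shape model from pre-aligned landmark sets.
    shapes: (n_examples, 2*n_points) flattened as (x1, y1, x2, y2, ...)."""
    mean = shapes.mean(axis=0)
    cov = np.cov(shapes, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1][:n_modes]   # strongest modes first
    return mean, evecs[:, order], evals[order]

def generate(mean, modes, evals, b):
    """Shape for parameters b, clamped to +/- 3 std dev per mode."""
    b = np.clip(b, -3 * np.sqrt(evals), 3 * np.sqrt(evals))
    return mean + modes @ b

# demo: training shapes that vary along a single direction
rng = np.random.default_rng(1)
base = rng.random(8)                             # 4 landmarks in 2D
direction = rng.random(8)
direction /= np.linalg.norm(direction)
shapes = np.array([base + t * direction for t in (-2, -1, 0, 1, 2)])
mean, modes, evals = build_shape_model(shapes, n_modes=1)
b = modes.T @ (shapes[3] - mean)                 # project a training shape
recon = generate(mean, modes, evals, b)
```

Projecting an image-search candidate onto the model and clamping the parameters is what restricts the search to legal configurations.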
A multiscale method (the hyperstack) is proposed to segment multi-dimensional MR brain data. In particular, attention will be paid to the detection and classification of partial volume voxels. The result—a list of probabilities for each partial volume voxel—makes measurements of image geometry more reliable and improves volumetric visualization of image objects.
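The idea of a per-voxel partial volume estimate can be illustrated with the simplest possible model: a voxel on a two-tissue boundary is a linear mixture of the two pure-tissue intensities, so its intensity directly yields a mixing fraction. This sketch is a generic illustration, not the hyperstack method itself, and the tissue names and intensity values are hypothetical:

```python
import numpy as np

def pv_fraction(intensity, mu_a, mu_b):
    """Fraction of tissue A in a voxel under a two-class linear
    mixture model: intensity = f * mu_a + (1 - f) * mu_b."""
    f = (mu_b - intensity) / (mu_b - mu_a)
    return np.clip(f, 0.0, 1.0)

# hypothetical pure-tissue means (CSF, grey matter) and boundary voxels
mu_csf, mu_gm = 20.0, 100.0
voxels = np.array([20.0, 60.0, 100.0, 130.0])
f_csf = pv_fraction(voxels, mu_csf, mu_gm)

# volumetry: summing fractions instead of counting whole voxels
voxel_volume_mm3 = 1.0
csf_volume = f_csf.sum() * voxel_volume_mm3
```

Summing the fractions gives sub-voxel volume estimates, which is how per-voxel partial volume probabilities make geometric measurements more reliable.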
An approach to automatic segmentation of two-dimensional brain images from X-ray computed tomography (CT) and magnetic resonance tomography (MR) is presented. Based on an image model of smooth regions separated by scaled discontinuities, we derive computational processes for the problems of edge detection, contour grouping and smooth image reconstruction. To this end, we employ advanced concepts of computer vision research, such as scale-space analysis and deterministic and stochastic frameworks for regularizing ill-posed inverse problems.
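As a concrete instance of regularizing an ill-posed reconstruction, the one-dimensional sketch below minimizes a data-fidelity term plus a Tikhonov smoothness penalty, argmin_u ||u - f||^2 + lam ||D u||^2, with D the forward-difference operator. It is a generic illustration of the principle, not the paper's actual method:

```python
import numpy as np

def smooth_reconstruction(f, lam):
    """Solve (I + lam * D^T D) u = f, the normal equations of
    argmin_u ||u - f||^2 + lam * ||D u||^2."""
    n = len(f)
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)   # (Du)_i = u_{i+1} - u_i
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, f)

# demo: a noisy step edge is smoothed while its mean level is preserved
rng = np.random.default_rng(2)
f = np.concatenate([np.zeros(20), np.ones(20)]) + 0.1 * rng.standard_normal(40)
u = smooth_reconstruction(f, lam=5.0)
```

Larger lam trades data fidelity for smoothness; edge-preserving variants replace the quadratic penalty with one that tolerates discontinuities, which is the role of the stochastic frameworks mentioned above.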
The described work is part of the COVIRA project, whose goal is to provide prototypes of clinical workstations for neuroradiological diagnostics and therapy, neurosurgery and 3D radiation therapy for evaluation in several hospitals. An important aspect is the 3D reconstruction and visualisation of the cerebral vessel tree based on several DSA projections combined with an MRA volume data set. This combination promises a higher resolution than that yielded by MRA data alone. This paper presents tools for supporting the 3D reconstruction: the segmentation of blood vessels and the extraction of vessel intensities, diameters and centre lines. During the 3D reconstruction these features reduce the ambiguity that is present when only segmented binary images are used. A further application of these tools is the enhanced representation of blood vessels in 2D images. Blood vessels down to a diameter of 2 pixels can be intensified and visualised even if they are poorly filled with contrast agent. In a first evaluation, radiologists confirmed the improved quality and the detailed resolution.
Medical image analysis has to support the clinician's ability to identify, manipulate and quantify anatomical structures. On scalar 2-D image data, a human observer is often superior to computer-assisted analysis, but the interpretation of vector-valued data or data combined from different modalities, especially in 3-D, can benefit from computer assistance. The problem of how to convey the complex information to the clinician is often tackled by providing colored multimodality renderings. We propose to go a step further by supplying a suitable modelling of anatomical and functional structures that encodes important shape features and physical properties. The multiple attributes regarding geometry, topology and function are carried by the symbolic description and can be interactively queried and edited. Integrated 3-D rendering of object surfaces and symbolic representation acts as a visual interface that allows interactive communication between the observer and the complex data, providing new possibilities for quantification and therapy planning.
The discussion is guided by the prototypical example of investigating the cerebral vasculature in MRA volume data. Geometric, topological and flow-related information can be assessed by interactive analysis on a computer workstation, providing otherwise hidden qualitative and quantitative information. Several case studies demonstrate the potential use for structure identification, definition of landmarks, assessment of topology for catheterization, and local simulation of blood flow.