

Data fusion in medical imaging encompasses both i) multisensor fusion of anatomical and functional information and ii) interpatient data fusion by means of warping models. These two aspects establish the methodological framework necessary for anatomical modeling, especially of brain structures. The principal aim of the work presented here is the investigation of multimodal 3D neuroanatomical data.
Three aspects of data fusion are considered in this paper. The first concerns the integration of data from multiple modalities (multisensor fusion applied to CT, MRI, DSA, PET, SPECT, or MEG). In particular, the problem of warping patient data to match an anatomical atlas is reviewed and a solution is proposed. The second aspect addressed in this paper, the identification of anatomical structures or features, is related to data fusion because it is a prerequisite for many data fusion techniques. Two techniques have been developed: the first analyses geometrical features of the image to produce a fuzzy mask for labeling the structure of interest; the second segments the major cerebral structures by means of statistical image features and relaxation techniques (see the sketch below). Finally, the paper reviews current 3D display techniques, with special emphasis on the 3D display of combined data.
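To make the second segmentation technique more concrete, the following Python sketch illustrates one plausible reading of "statistical image features combined with relaxation": per-class fuzzy memberships are initialised from Gaussian intensity statistics and then iteratively reinforced by neighbourhood support. The class means and standard deviations, the update rule, and all parameter values are assumptions for illustration only, not the algorithm actually developed in the paper.

```python
import numpy as np

def fuzzy_membership(volume, mean, std):
    """Gaussian-shaped fuzzy membership of each voxel to one tissue class,
    derived from first-order intensity statistics (assumed mean, std)."""
    return np.exp(-0.5 * ((volume - mean) / std) ** 2)

def neighbour_average(m):
    """Mean membership over the 6-connected neighbourhood of each voxel."""
    acc = np.zeros_like(m)
    for axis in range(m.ndim):
        for shift in (-1, 1):
            acc += np.roll(m, shift, axis=axis)
    return acc / (2 * m.ndim)

def relaxation_labeling(memberships, n_iter=10, alpha=0.5):
    """Simple probabilistic relaxation: each voxel's class memberships are
    reinforced by the average membership of its neighbours, then
    renormalised so that memberships sum to one at every voxel."""
    m = np.asarray(memberships, dtype=float)
    for _ in range(n_iter):
        support = np.stack([neighbour_average(c) for c in m])
        m = (1 - alpha) * m + alpha * m * support
        m /= m.sum(axis=0, keepdims=True) + 1e-12
    return m

# Usage on a synthetic 3D volume with two intensity classes (hypothetical data).
rng = np.random.default_rng(0)
vol = rng.normal(100, 10, size=(32, 32, 32))
vol[8:24, 8:24, 8:24] = rng.normal(160, 10, size=(16, 16, 16))

classes = [(100, 10), (160, 10)]                 # assumed (mean, std) per class
m0 = np.stack([fuzzy_membership(vol, mu, sd) for mu, sd in classes])
m0 /= m0.sum(axis=0, keepdims=True)
labels = relaxation_labeling(m0).argmax(axis=0)  # hard labels after relaxation
```

The same membership volumes, thresholded rather than hardened by argmax, could serve as the kind of fuzzy mask mentioned for the first technique; the geometric feature analysis itself is not reproduced here.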