Ebook: Advances and Challenges in Multisensor Data and Information Processing
From the 16th to the 27th of May 2005, a NATO Advanced Study Institute entitled Multisensor Data and Information Processing for Rapid and Robust Situation and Threat Assessment was held in Albena, Bulgaria. This ASI brought together 72 people from 13 European and North American countries to discuss, through a series of 48 lectures, the use of information fusion in the context of defence against terrorism, which is a NATO priority research topic.
Information fusion resulting from multi-source processing, often called multisensor data fusion when sensors are the main sources of information, is a relatively young (less than 20 years) technology domain. It provides techniques and methods for:
1) integrating data from multiple sources and using the complementarity of this data to derive maximum information about the phenomenon being observed;
2) analyzing and deriving the meaning of these observations;
3) selecting the best course of action; and
4) controlling the actions.
Various sensors have been designed to detect some specific phenomena, but not others. Data fusion applications can synergistically combine information from many sensors, including data provided by satellites as well as contextual and encyclopedic knowledge, to provide an enhanced ability to detect and recognize anomalies in the environment compared with conventional means. Data fusion is an integral part of multisensor processing, but it can also be applied to fuse non-sensor information (geopolitical, intelligence, etc.) to provide decision support for a timely and effective situation and threat assessment.
One special field of application for data fusion is satellite imagery, which can provide extensive information over a wide area of the electromagnetic spectrum using several types of sensors (Visible, Infra-Red (IR), Thermal IR, Radar, Synthetic Aperture Radar (SAR), Polarimetric SAR (PolSAR), Hyperspectral...). Satellite imagery provides the coverage rate needed to identify and monitor human activities, from agricultural practices (land use, crop type identification...) to defence-related surveillance (land/sea target detection and classification). Remotely sensed imagery acquired over Earth regions that ground-based sensors cannot access can provide valuable information for the defence against terrorism.
Developed around these themes, the ASI's program was subdivided into ten half-day sessions devoted to the following research areas:
• Target recognition/classification and tracking
• Sensor systems
• Image processing
• Remote sensing and remote control
• Belief functions theory
• Situation assessment
The lectures presented at the ASI made an important contribution to the research and development of multisensor data fusion based surveillance systems for rapid and robust situation and threat assessment. The ASI gave all the participants the opportunity to interact and exchange valuable knowledge and work experience to overcome challenging issues in various research areas. This book summarizes the lectures that were given at this ASI.
An Advanced Research Workshop (ARW) related to this ASI was held in Tallinn, Estonia from June 27th to July 1st 2005. This ARW addressed the data fusion technologies for harbour protection. More information on this event can be found at http://www.canadiannatomeetings.com.
I would like to thank all the lecturers who accepted the invitation to participate in the ASI. The time they spent preparing their lectures and their active participation were key factors to the ASI's success. I would also like to thank them for the summary papers they provided to make this book happen. I extend my thanks to all the attendees of the ASI for their interest and participation.
A special acknowledgement goes to Kiril Alexiev, the co-director of this ASI who initiated this project and was always very supportive. His tremendous help in the coordination of all events and logistics was much appreciated. My warm thanks go to Gayane Malkhasyan and Masha Ryskin, my administrative assistants and interpreters who ensured that everything ran smoothly during the course of the ASI. I would also like to thank the officers from the Albena Congress centre office, in particular, Mrs. Galina Toteva for her extra assistance.
I would like to thank Pierre Valin and Erik Blasch who did the technical reviews of this book. Their judicious comments were very helpful. Very special thanks go to Kimberly Nash who reviewed the papers and formatted the book. Thank you for your patience and all the time you spent increasing the quality of the book.
Finally, I wish to express my gratitude to NATO, which supported this ASI along with Lockheed Martin Canada, the Institute of Parallel Processing of the Bulgarian Academy of Sciences, Defence Research and Development Canada, the European Office of Aerospace Research and Development of the USAF and the National Science Foundation, without whom it would have been impossible to organize this event.
Eric Lefebvre, Montreal, Canada
This tutorial paper provides an introduction to selected aspects of sensor data fusion by discussing characteristic examples. We consider fusion of data produced at different instants of time, fusion of data from different sensors, and fusion of data with background information on the sensor performance as well as with non-sensor context information. The feedback from data processing to data acquisition is illustrated by a sensor management example.
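As a minimal illustration of the elementary operation behind both temporal and multi-sensor fusion (a sketch under our own assumptions, not code from the tutorial), the snippet below fuses two independent Gaussian estimates of the same quantity by inverse-variance weighting; the sensor values are invented.

```python
import numpy as np

def fuse_gaussian(m1, var1, m2, var2):
    """Fuse two independent Gaussian estimates of the same scalar quantity.

    Inverse-variance weighting: the fused estimate is more confident
    (smaller variance) than either input.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused_var = 1.0 / (w1 + w2)
    fused_mean = fused_var * (w1 * m1 + w2 * m2)
    return fused_mean, fused_var

# Two sensors observe the same range (in metres) with different accuracies.
m, v = fuse_gaussian(102.0, 4.0, 98.0, 1.0)
print(f"fused mean = {m:.2f} m, fused variance = {v:.2f} m^2")
```

The fused variance is smaller than either input variance, which is the quantitative sense in which complementary sensors improve the estimate.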
Sensor networks have emerged as fundamentally new tools for monitoring spatially distributed phenomena. They incorporate the most progressive ideas from several areas of research: computer networks, wireless communications, grid technologies, multiagent systems and network operating systems. Although great interest is centered on sensor data processing and information fusion, simulation of the entire multisensor network remains very important for the optimal solution of many tasks involving joint data processing and data transmission between sensor nodes. Current development automation tools do not meet the needs of modern design. The main purpose of this paper is to outline the structure of a simulation tool for modeling dynamic, self-organizing, heterogeneous sensor networks. The focus is on the modeling of the different components and on data-flow simulation in the sensor network.
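The paper outlines a dedicated simulation tool; as a rough, hypothetical sketch of the data-flow side of such a simulation (the node names, report periods and latency model are invented, and this is not the tool described above), an event-driven loop over a priority queue might look as follows.

```python
import heapq
import random

# Minimal event-driven sketch of sensor nodes reporting to a fusion node
# over links with random latency. Node and field names are illustrative only.

class Node:
    def __init__(self, node_id, report_period):
        self.node_id = node_id
        self.report_period = report_period

def simulate(nodes, link_latency, horizon):
    events = []  # (time, node_id, kind)
    for n in nodes:
        heapq.heappush(events, (n.report_period, n.node_id, "sense"))
    received = []
    while events:
        t, node_id, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "sense":
            # Schedule delivery to the fusion node and the next sensing cycle.
            heapq.heappush(events, (t + link_latency(), node_id, "deliver"))
            node = next(n for n in nodes if n.node_id == node_id)
            heapq.heappush(events, (t + node.report_period, node_id, "sense"))
        else:  # "deliver"
            received.append((t, node_id))
    return received

nodes = [Node("radar-1", 1.0), Node("acoustic-2", 0.5)]
log = simulate(nodes, link_latency=lambda: random.uniform(0.05, 0.2), horizon=3.0)
for t, node_id in log:
    print(f"t={t:.2f}s  report from {node_id}")
```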
This paper addresses the problem of joint tracking and classification (JTC) of maneuvering targets via sequential Monte Carlo (SMC) techniques. A general framework for the problem is presented within the SMC setting. An SMC algorithm, namely a Mixture Kalman filter (MKF), is developed that accounts for speed and acceleration constraints. The MKF is applied to airborne targets: commercial and military aircraft. The target class is modeled as an independent random variable, which can take values over the discrete class space with equal probability. A multiple-model structure in the class space is implemented, which provides reliable classification. The performance of the proposed MKF is evaluated by simulation over typical target scenarios.
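To make the joint tracking and classification idea concrete, here is a generic sequential Monte Carlo sketch, not the paper's Mixture Kalman filter: each particle carries a kinematic state and a class label, the two classes differ only in the acceleration level they allow, and the class names, noise levels and measurements are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy joint tracking and classification sketch (not the paper's MKF):
# each particle carries a kinematic state [position, velocity] and a class
# label; classes differ only in how strongly the target can accelerate.
CLASS_ACCEL_STD = {0: 2.0, 1: 15.0}   # "commercial" vs "military" (illustrative)
DT, MEAS_STD, N = 1.0, 20.0, 2000

def predict(states, classes):
    accel = np.array([rng.normal(0.0, CLASS_ACCEL_STD[c]) for c in classes])
    states[:, 0] += states[:, 1] * DT + 0.5 * accel * DT**2
    states[:, 1] += accel * DT
    return states

def update(states, weights, z):
    # Position-only measurement with Gaussian noise.
    lik = np.exp(-0.5 * ((z - states[:, 0]) / MEAS_STD) ** 2)
    weights *= lik + 1e-300
    return weights / weights.sum()

# Initialise particles: classes equally likely a priori, as in the abstract.
classes = rng.integers(0, 2, N)
states = np.column_stack([rng.normal(0, 50, N), rng.normal(100, 30, N)])
weights = np.full(N, 1.0 / N)

for z in [120.0, 260.0, 430.0, 640.0]:        # synthetic position measurements
    states = predict(states, classes)
    weights = update(states, weights, z)
    # Posterior class probabilities follow from the particle weights.
    p_class1 = weights[classes == 1].sum()
    print(f"P(class=1) = {p_class1:.2f}")
    # Resample (multinomial resampling for brevity).
    idx = rng.choice(N, N, p=weights)
    states, classes, weights = states[idx], classes[idx], np.full(N, 1.0 / N)
```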
We address the assignment problem by considering the relationship between single-sensor measurements and real targets. Here, the class of multi-dimensional assignment problems is considered for multi-scan as well as unresolved-measurement problems in multi-target tracking.
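The basic building block of such formulations is the two-dimensional (single-scan) assignment between tracks and measurements, which the multi-dimensional problem chains across scans. Below is a minimal sketch using SciPy's linear_sum_assignment solver (assumed available) on an invented cost matrix.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Single-scan (2-D) assignment sketch: rows are tracks, columns are
# measurements, entries are assignment costs (here, squared distances).
# The multi-dimensional problem chains such assignments across scans;
# this only shows the basic building block.
tracks = np.array([[0.0, 0.0], [10.0, 10.0], [20.0, 5.0]])
measurements = np.array([[9.5, 10.5], [0.4, -0.3], [19.0, 5.5]])

cost = np.linalg.norm(tracks[:, None, :] - measurements[None, :, :], axis=2) ** 2
row, col = linear_sum_assignment(cost)        # optimal track-to-measurement pairing
for r, c in zip(row, col):
    print(f"track {r} <- measurement {c}  (cost {cost[r, c]:.2f})")
```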
New classes of tracking algorithms combining Variable-Structure Interacting Multiple Model (VS-IMM) techniques, augmentation or dual estimation, and Unscented Kalman filtering are presented in this paper. These filter methods provide significant self-adjusting and inherent manoeuvre detection capabilities. The algorithms are distinguished by their highly accurate course and speed estimates, even for manoeuvring targets. The performance of these techniques is demonstrated for targets performing turns with varying cross accelerations.
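As a small illustration of the Unscented Kalman filtering ingredient (not of the VS-IMM algorithms themselves), the sketch below implements the standard unscented transform that propagates a Gaussian estimate through a nonlinear function via sigma points; the polar-to-Cartesian example and parameter values are illustrative.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f
    using the standard sigma-point construction of the UKF."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])  # 2n+1 points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    y = np.array([f(s) for s in sigma])
    y_mean = wm @ y
    diff = y - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov

# Example: polar-to-Cartesian conversion of a range/bearing estimate.
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
m, P = unscented_transform(np.array([100.0, np.pi / 4]), np.diag([4.0, 0.01]), f)
print(m)
print(P)
```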
Revolutionary computing technologies are defined in terms of technological breakthroughs, which leapfrog over near-term projected advances in conventional hardware and software to produce paradigm shifts in computational science. For underwater threat source localization using information provided by a dynamical sensor network, one of the most promising computational advances builds upon the emergence of digital optical-core devices. In this article, we present initial results of sensor network calculations that focus on the concept of signal wavefront Time-Difference-of-Arrival (TDOA). The corresponding algorithms are implemented on the EnLight™ processing platform recently introduced by Lenslet Laboratories. This tera-scale digital optical-core processor is optimized for array operations, which it performs in a fixed-point-arithmetic architecture. Our results (i) illustrate the ability to reach the required accuracy in the TDOA computation, and (ii) demonstrate that a considerable speed-up can be achieved when using the EnLight™ 64α prototype processor as compared to a dual Intel Xeon™ processor.
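The core TDOA operation can be illustrated with ordinary NumPy (this is plain CPU code, not the EnLight™ implementation discussed in the article): the delay between two noisy copies of a signal is estimated as the lag that maximizes their cross-correlation; the synthetic waveform, sample rate and delay are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# TDOA estimation sketch: the delay between two noisy copies of the same
# signal is taken as the lag that maximises their cross-correlation.
fs = 10_000.0                     # sample rate in Hz
t = np.arange(0, 0.1, 1.0 / fs)
signal = np.sin(2 * np.pi * 300 * t) * np.exp(-30 * t)   # synthetic wavefront

true_delay = 23                   # samples
x1 = signal + 0.1 * rng.standard_normal(signal.size)
x2 = np.roll(signal, true_delay) + 0.1 * rng.standard_normal(signal.size)

corr = np.correlate(x2, x1, mode="full")
lag = np.argmax(corr) - (signal.size - 1)
print(f"estimated TDOA = {lag / fs * 1e3:.2f} ms (true {true_delay / fs * 1e3:.2f} ms)")
```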
The effectiveness of a multi-source fusion process for decision making depends highly on the quality of the information that is received and processed by the fusion system. This paper summarizes existing quantitative analyses of different aspects of information quality in multi-source fusion environments. The summary includes definitions of four main aspects of information quality, namely uncertainty, reliability, completeness and relevance. Quantitative assessment of information quality can facilitate evaluating how well the product of the fusion process represents reality and hence contribute to improved decision making.
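As a toy illustration of how two of these aspects can be quantified (these are common generic measures, not necessarily the definitions adopted in the paper), the sketch below scores the uncertainty of a source's probability assignment with normalized Shannon entropy and folds a reliability factor in by discounting toward a uniform assignment.

```python
import numpy as np

def normalised_entropy(p):
    """Uncertainty of a discrete probability assignment over at least two
    hypotheses, scaled to [0, 1] by the maximum-entropy (uniform) case."""
    p = np.asarray(p, dtype=float)
    nonzero = p[p > 0]
    return float(-(nonzero * np.log(nonzero)).sum() / np.log(p.size))

def discount(p, reliability):
    """Blend a source's assignment with a uniform (ignorant) one according
    to its reliability in [0, 1] -- a common way to fold reliability into fusion."""
    p = np.asarray(p, dtype=float)
    return reliability * p + (1.0 - reliability) / p.size

report = [0.7, 0.2, 0.1]          # a source's belief over three hypotheses
print("uncertainty:", round(normalised_entropy(report), 3))
print("discounted (reliability 0.6):", np.round(discount(report, 0.6), 3))
```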
Several studies have already shown that remote sensing imagery would provide valuable information for area surveillance missions and activity monitoring and that its combination with contextual information could significantly improve the performance of target detection/target recognition (TD/TR) algorithms. In the context of surveillance missions, spaceborne synthetic aperture radars (SARs) are particularly useful due to their ability to operate day and night under any sky condition. Conventional SARs operate with a single polarization channel, while recent and future spaceborne SARs (Envisat ASAR, Radarsat-2) will offer the possibility to use multiple polarization channels. Standard target detection approaches on SAR images consist of the application of a constant false alarm rate (CFAR) detector and usually produce a large number of false alarms. This large number of false alarms prohibits their manual rejection. However, over the past ten years a number of algorithms have been proposed to extract information from a polarimetric SAR scattering matrix to enhance and/or characterize man-made objects. The evidential fusion of such information can lead to the automatic rejection of the false alarms generated by the CFAR detector. In addition, the aforementioned information can lead to a better characterization of the detected targets. In the case of more challenging backgrounds, such as ground-based target detection, the use of higher level information such as context can help in the removal of false alarms. This paper will discuss the use of polarimetric information for target detection using polarimetric SAR imagery as well as the benefit of contextual information fusion for ground-based target detection.
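To illustrate the CFAR principle that underlies the detector mentioned above (a one-dimensional toy on synthetic exponential clutter, not the paper's polarimetric SAR processing chain), a basic cell-averaging CFAR can be sketched as follows.

```python
import numpy as np

rng = np.random.default_rng(2)

def ca_cfar(power, guard=2, train=8, pfa=1e-3):
    """One-dimensional cell-averaging CFAR on a power profile.

    For each cell, the noise level is estimated from training cells on both
    sides (excluding guard cells) and the threshold is scaled to keep an
    approximately constant false-alarm rate for exponential clutter.
    """
    n_train = 2 * train
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)   # CA-CFAR scaling factor
    detections = []
    for i in range(train + guard, power.size - train - guard):
        left = power[i - guard - train: i - guard]
        right = power[i + guard + 1: i + guard + 1 + train]
        noise = (left.sum() + right.sum()) / n_train
        if power[i] > alpha * noise:
            detections.append(i)
    return detections

clutter = rng.exponential(1.0, 300)       # synthetic single-channel clutter profile
clutter[150] += 40.0                      # one bright point target
print("detections at cells:", ca_cfar(clutter))
```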
We study the class of problems where Solution Spaces are specified by combinatorial Game Trees. A version of Botvinnik's Intermediate Goals At First (IGAF) algorithm is developed for strategy formation based on common-knowledge planning and dynamic plan testing in the corresponding game tree. The algorithm (1) includes a range of knowledge types in the form of goals and rules, (2) demonstrates a strong tendency to increase strategy formation efficiency, and (3) increases the amount of knowledge available to the system.
The CP-140 (Aurora) Canadian maritime surveillance aircraft is presently undergoing an Aurora Incremental Modernization Program that will allow multi-sensor data fusion to be performed. Dempster-Shafer (DS) evidence theory is chosen for the identity information fusion due to its natural handling of conflicting, uncertain and incomplete information. Two realistic scenarios were constructed in order to test DS under countermeasures, mis-associations, and incorrect classification. Results show that DS theory is robust in all but the worst cases when using the existing suite of sensors.
Lockheed Martin Canada has recently developed a Situation and Threat Assessment and Resource Management application using some recent technologies and concepts that have emerged from level 2 and 3 data fusion research. The current paper describes some exploration work on Improved Time of Earliest Weapon Release for threat evaluation and the utilization of target weaponry system information for threat evaluation refinement.
Many problems of image processing, remote sensing and remote control can be formulated in terms of the detection of structural changes in observed multivariate temporal or spatial data. This lecture considers modern methods for detecting structural changes in multivariate data, together with some important applications. Various methods for the effective solution of these problems are described. The most popular methods of change-point detection in the one-dimensional setting are presented: parametric and non-parametric statistical methods and a wavelet-analysis method. A method for detecting structural changes in multivariate regression analysis is also considered.
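One of the classical parametric tools in this family is the CUSUM statistic for detecting a shift in the mean of a sequence; the sketch below is a generic one-sided CUSUM on synthetic data (the threshold and assumed shift are illustrative), not an algorithm taken from the lecture.

```python
import numpy as np

rng = np.random.default_rng(3)

def cusum(x, target_mean, shift, threshold):
    """One-sided CUSUM for an upward shift in the mean of a sequence.

    Returns the index at which the cumulative statistic first exceeds the
    threshold, or None if no change is declared.
    """
    s = 0.0
    for i, xi in enumerate(x):
        s = max(0.0, s + (xi - target_mean - shift / 2.0))
        if s > threshold:
            return i
    return None

# Synthetic data: mean 0 for 200 samples, then a jump to mean 1.
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 200)])
print("change declared at index:", cusum(x, target_mean=0.0, shift=1.0, threshold=8.0))
```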
We propose a Unification of Fusion Theories and Fusion Rules for solving problems and applications. For each particular application, one should check the reliability of the sources and select the most appropriate model, combination rule(s), fusion theory and implementation algorithm.
The unification scenario presented herein, which is in an incipient form, should periodically be updated to incorporate new discoveries from fusion and engineering research.
Sensors are mainly combined in order to benefit from their complementarity. Different kinds of advantages may be expected, such as the ability to handle a larger set of situations, improved discrimination capacity, or simply time savings. When analyzing a situation, the available sensors are most often used under conditions that involve uncertainties at different levels. In this paper, belief functions theory, a mathematical framework that can represent both imprecision and uncertainty, is used to represent, manage and reason with such uncertainties (imprecise measurements, ambiguous observations in space or in time, incomplete or poorly defined prior knowledge). Practical examples of how to use this theoretical framework in detection-recognition problems are provided. Belief functions have attractive properties, such as the possibility of quantifying that none of the original hypotheses is supported, that the values of some 'likelihoods' are unknown, or that an a priori belief genuinely representing total ignorance can be adopted. Several applications where belief functions have been successfully applied to multisensor data fusion are finally presented.
Evidence theory has been primarily used in the past to model imperfect information, and it is a powerful tool for reasoning under uncertainty. It appeared as an alternative to probability theory and is now considered a generalization of it. In this paper we first introduce an object identification problem and then present two approaches to solve it: a probabilistic approach and the Dempster-Shafer approach. We also present the limitations of Dempster's rule of combination when conflicting pieces of information are combined, and we present alternative rules proposed in the literature to overcome this problem. We propose a class of adaptive combination rules obtained by mixing the basic conjunctive and disjunctive combination rules. The symmetric adaptive combination rule is finally considered and compared with the other existing rules.
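For concreteness, here is a minimal implementation of the conjunctive combination and Dempster's normalized rule over a small frame of discernment, with focal elements represented as frozensets; the example masses reproduce the classic high-conflict situation that motivates the alternative rules discussed above (the code is a generic sketch, not the paper's).

```python
from itertools import product

def conjunctive(m1, m2):
    """Unnormalised conjunctive combination of two mass functions.

    Focal elements are frozensets over the frame; mass assigned to the
    empty set measures the conflict between the two sources.
    """
    out = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        out[inter] = out.get(inter, 0.0) + wa * wb
    return out

def dempster(m1, m2):
    """Dempster's rule: conjunctive combination followed by renormalisation."""
    out = conjunctive(m1, m2)
    k = out.pop(frozenset(), 0.0)          # conflict mass (assumed < 1 here)
    return {a: w / (1.0 - k) for a, w in out.items()}, k

A, B, C = frozenset("A"), frozenset("B"), frozenset("C")
m1 = {A: 0.9, C: 0.1}                      # source 1 strongly supports A
m2 = {B: 0.9, C: 0.1}                      # source 2 strongly supports B
fused, conflict = dempster(m1, m2)
print("conflict:", conflict)               # 0.99: almost total conflict
print("fused masses:", fused)              # all remaining mass ends up on C
```

With these inputs almost all of the mass is conflicting, and after normalization the entire belief is assigned to the weakly supported hypothesis C; this is the kind of counter-intuitive behaviour that the adaptive rules aim to avoid.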
Command and control can be characterized as a dynamic human decision making process. A technological perspective of command and control has led system designers to propose solutions, such as decision support and information fusion, to overcome many domain problems. Solving the command and control problem requires balancing the human factor perspective with that of the system designer and coordinating the efforts in designing a cognitively fitted system to support the decision-makers. This paper discusses critical issues in the design of computer aids, such as a data/information fusion system, by which the decision-maker can better understand the situation in his area of operations, select a course of action, issue intent and orders, monitor the execution of operations, and evaluate the results. These aids will support the decision-makers in coping with uncertainty and disorder in warfare and in exploiting people or technology at critical times and places to ensure success in operations.
This paper presents three fusion strategies applied within the European SMART project on space and airborne mined-area reduction tools. Two strategies are based on belief function theory, and the third is a fuzzy method. The main aim of the three methods is to take advantage of several available data sources with different properties, improve land-cover classification and anomaly detection results, exploit existing knowledge, and allow user interaction.
This paper introduces the recent theory of plausible and paradoxical reasoning, known in the literature as DSmT (Dezert-Smarandache Theory), which deals with imprecise, uncertain and potentially highly conflicting sources of information. Recent publications have shown the interest and the potential ability of DSmT to solve fusion problems where Dempster-Shafer Theory (DST) provides counter-intuitive results, especially when conflict between sources becomes high and information becomes vague and imprecise. This short paper presents the foundations of DSmT and its main rules of combination, including the most recent ones, and briefly introduces some open and challenging problems in fusion.
The objective of this study is to present two multitarget tracking applications based on Dezert-Smarandache Theory (DSmT) for plausible and paradoxical reasoning: (1) Target Tracking in a Cluttered Environment with Generalized Data Association, incorporating the advanced concept of generalized (kinematics and attribute) data association to improve track maintenance performance in complicated situations (closely spaced and/or crossing targets) when kinematics data are insufficient for correct decision making; (2) Estimation of Target Behavior Tendencies, developed on the principles of DSmT applied to conventional passive radar amplitude measurements, which serve as evidence for the corresponding decision-making procedures. The aim is to present and demonstrate the ability of DSmT to improve the decision-making process and to provide awareness of the tendencies of target behavior in the case of discrepancies in measurement interpretation.
Automated registration of image frames is often required for the construction of High-Resolution (HR) data to perform surveillance and threat assessment. While some efficient approaches to image registration have been developed lately, the registration algorithms resulting from these approaches generally remain application dependent and may require operator-assisted tuning for different images to achieve the same efficiency levels. In this article, we describe an algorithm for automatic image registration that supports improved surveillance and threat assessment in scenarios where multiple diverse sensors are used for these applications. This algorithm offers scene-independent registration performance and is efficient for different scenes, ranging from complex, highly varying gray-scale images to simpler, less varying ones. While feature-based methods have emerged as more versatile for automatic registration in surveillance applications (compared to other methods based on correlation, mutual information maximization, etc.), the algorithm described here employs the local frequency representation of the image frames to be registered in order to generate a set of control points, solve the matching problem, and determine the registration parameters. The algorithm exploits certain inherent strengths of the local frequency representation, such as robustness to illumination variation, the capability of simultaneously detecting the structure of the scene in the image (ridges and edges), and good localization in the spatial domain. Experimental results reported here indicate that this registration technique is efficient and yields promising results for the alignment and fusion of complex images.
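Once matched control points are available (the paper derives them from the local frequency representation; here they are synthetic), the registration parameters of an affine model can be estimated by linear least squares, as in this sketch.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares estimate of an affine transform mapping src -> dst.

    src, dst: (N, 2) arrays of matched control points, N >= 3.
    Returns a 2x3 matrix [A | t] such that dst ~ src @ A.T + t.
    """
    n = src.shape[0]
    # Build the design matrix for the 6 affine parameters.
    X = np.hstack([src, np.ones((n, 1))])
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)    # solves X @ params ~ dst
    return params.T                                      # shape (2, 3)

# Synthetic check: rotate by 10 degrees, scale 1.05, translate (12, -7).
rng = np.random.default_rng(4)
theta = np.deg2rad(10.0)
A_true = 1.05 * np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])
src = rng.uniform(0, 512, (20, 2))
dst = src @ A_true.T + np.array([12.0, -7.0]) + rng.normal(0, 0.3, (20, 2))
print(np.round(estimate_affine(src, dst), 3))
```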
Image data provided by the different available and future observation satellites can improve our detection and reconnaissance capabilities over an area of interest. To be effective, this information must be processed and used in a coherent manner. We introduce several examples that take advantage of SAR, optical and infrared images for mapping and for threat activity detection and assessment. Image fusion can be performed at three different processing levels (pixel, feature and decision). These examples of fusion applications are related to defence purposes.
This paper concerns the detection and the parameter and height estimation of Pseudo-random Noise (PN) signals using a passive radar receiver network, applied in wireless communication systems with multipath interference. The investigation is carried out by Monte Carlo simulation. The results can be applied to target detection in multistatic radars that exploit existing communication networks.
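The detection step can be illustrated with a simple matched filter (a generic sketch on synthetic data: the code length, delays, echo strength and noise level are invented, and the passive receiver network and height estimation are not reproduced): the received signal is correlated against the known PN code and the strongest lags give the path delays.

```python
import numpy as np

rng = np.random.default_rng(5)

# Matched-filter sketch: correlate the received signal against a known
# binary PN code to recover the delays of the direct path and a multipath echo.
pn = rng.choice([-1.0, 1.0], 255)            # illustrative +/-1 PN code

delay_direct, delay_echo = 60, 83            # samples
received = np.zeros(600)
received[delay_direct:delay_direct + pn.size] += pn          # direct path
received[delay_echo:delay_echo + pn.size] += 0.5 * pn        # multipath echo
received += 0.5 * rng.standard_normal(received.size)         # receiver noise

corr = np.correlate(received, pn, mode="valid")              # matched filter output
peaks = sorted(np.argsort(corr)[-2:].tolist())               # two strongest lags
print("estimated path delays (samples):", peaks)             # expect [60, 83]
```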
This paper describes the main results of a recent investigation of wavelet-based algorithms for the fusion of visible and infrared surveillance image sequences used in various scenarios. The concept of adaptive image fusion, in which the fusion engine adapts to multiple streams of input images and tries to achieve optimal fusion performance under real-time constraints, is discussed. A wavelet-based intra-fusion adaptive framework is presented and the design of a system that uses this framework is outlined. The effects of several key wavelet-transform parameters (the type of wavelet transform, the mother wavelet, the levels of decomposition, the wavelet coefficients used in the fusion process, the fusion rule, etc.) on the performance of an adaptive wavelet-based fusion system and on the quality of the fused result are also studied in this work.
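A single-level sketch of coefficient-domain image fusion using the PyWavelets package (assumed available; this is a generic max-magnitude rule on synthetic arrays, not the adaptive framework described in the paper) is given below.

```python
import numpy as np
import pywt   # PyWavelets, assumed available

def fuse_single_level(img_a, img_b, wavelet="db2"):
    """Single-level wavelet fusion of two co-registered images.

    Approximation coefficients are averaged; each detail coefficient is
    taken from whichever image has the larger magnitude (a common rule
    for preserving edges from both modalities).
    """
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a, wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b, wavelet)
    fuse_detail = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    fused = (
        (cA_a + cA_b) / 2.0,
        (fuse_detail(cH_a, cH_b), fuse_detail(cV_a, cV_b), fuse_detail(cD_a, cD_b)),
    )
    return pywt.idwt2(fused, wavelet)

# Synthetic stand-ins for a visible and an infrared frame of the same scene.
rng = np.random.default_rng(6)
visible = rng.uniform(0, 1, (128, 128))
infrared = rng.uniform(0, 1, (128, 128))
print(fuse_single_level(visible, infrared).shape)
```

Averaging the approximation band preserves the overall intensity, while the max-magnitude rule on the detail bands tends to keep the sharper structures from either modality; the adaptive framework described in the paper varies exactly these kinds of choices (wavelet, decomposition level, fusion rule) at run time.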