1
Multisensor Data Fusion

 

David L. Hall and James Llinas

CONTENTS

1.1   Introduction

1.2   Multisensor Advantages

1.3   Military Applications

1.4   Nonmilitary Applications

1.5   Three Processing Architectures

1.6   Data Fusion Process Model

1.7   Assessment of the State-of-the-Art

1.8   Dirty Secrets in Data Fusion

1.9   Additional Information

References

 

 

1.1   Introduction

Over the past two decades, significant attention has been focused on multisensor data fusion for both military and nonmilitary applications. Data fusion techniques combine data from multiple sensors and related information to achieve more specific inferences than could be achieved by using a single, independent sensor. Data fusion refers to the combination of data from multiple sensors (either of the same or different types), whereas information fusion refers to the combination of data and information from sensors, human reports, databases, etc.

The concept of multisensor data fusion is hardly new. As humans and animals evolved, they developed the ability to use multiple senses to help them survive. For example, assessing the quality of an edible substance may not be possible using only the sense of vision; the combination of sight, touch, smell, and taste is far more effective. Similarly, when vision is limited by structures and vegetation, the sense of hearing can provide advanced warning of impending dangers. Thus, multisensory data fusion is naturally performed by animals and humans to assess more accurately the surrounding environment and to identify threats, thereby improving their chances of survival. Interestingly, recent applications of data fusion [1] have combined data from an artificial nose and an artificial tongue using neural networks and fuzzy logic.

Although the concept of data fusion is not new, the emergence of new sensors, advanced processing techniques, improved processing hardware, and wideband communications has made real-time fusion of data increasingly viable. Just as the advent of symbolic processing computers (e.g., the Symbolics computer and the Lambda machine) in the early 1980s provided an impetus to artificial intelligence, recent advances in computing and sensing have provided the capability to emulate, in hardware and software, the natural data fusion capabilities of humans and animals. Currently, data fusion systems are used extensively for target tracking, automated identification of targets, and limited automated reasoning applications. Data fusion technology has rapidly advanced from a loose collection of related techniques to an emerging engineering discipline with standardized terminology, a collection of robust mathematical techniques, and established system design principles. Indeed, the remaining chapters of this handbook provide an overview of these techniques, design principles, and example applications.

Applications for multisensor data fusion are widespread. Military applications include automated target recognition (e.g., for smart weapons), guidance for autonomous vehicles, remote sensing, battlefield surveillance, and automated threat recognition (e.g., identification-friend-foe-neutral [IFFN] systems). Military applications have also extended to condition monitoring of weapons and machinery, to monitoring of the health status of individual soldiers, and to assistance in logistics. Nonmilitary applications include monitoring of manufacturing processes, condition-based maintenance of complex machinery, environmental monitoring, robotics, and medical applications.

Techniques to combine or fuse data are drawn from a diverse set of more traditional disciplines, including digital signal processing, statistical estimation, control theory, artificial intelligence, and classic numerical methods. Historically, data fusion methods were developed primarily for military applications. However, in recent years, these methods have been applied to civilian applications and a bidirectional transfer of technology has begun.

 

 

1.2   Multisensor Advantages

Fused data from multiple sensors provide several advantages over data from a single sensor. First, if several identical sensors are used (e.g., identical radars tracking a moving object), combining the observations results in an improved estimate of the target's position and velocity. A statistical advantage is gained by combining the N independent observations (e.g., the error of the estimated target location or velocity is reduced by a factor proportional to N^(1/2)), assuming the data are combined in an optimal manner. The same result could also be obtained by combining N observations from an individual sensor over time.
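To make the statistical advantage concrete, here is a minimal Python sketch (the true position, noise level, and trial count are illustrative assumptions) showing the standard error of an optimally fused estimate shrinking in proportion to N^(1/2):

```python
import numpy as np

rng = np.random.default_rng(0)
true_position = 100.0      # true target range (arbitrary units, assumed)
sigma = 5.0                # per-sensor measurement noise std dev (assumed)

for n_sensors in (1, 4, 16):
    # Each row is one trial; each column is one sensor's noisy observation.
    obs = true_position + sigma * rng.standard_normal((100_000, n_sensors))
    fused = obs.mean(axis=1)   # optimal fusion for i.i.d. Gaussian noise
    print(f"N={n_sensors:2d}: std of fused estimate = {fused.std():.3f} "
          f"(theory: {sigma / np.sqrt(n_sensors):.3f})")
```

Quadrupling the number of sensors halves the standard error, as the N^(1/2) rule predicts.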

Second, the relative placement or motion of multiple sensors can be used to improve the observation process. For example, two sensors that measure the angular direction to an object can be coordinated to determine the position of the object by triangulation. This technique is used in surveying and in commercial navigation (e.g., VHF omnidirectional range [VOR]). Similarly, two sensors, one moving in a known way with respect to the other, can be used to measure an object's position and velocity instantaneously with respect to the observing sensors.
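As a rough illustration of angle-only triangulation, the following sketch (hypothetical sensor positions and bearings) intersects two lines of bearing to recover a target position:

```python
import numpy as np

def triangulate(s1, theta1, s2, theta2):
    """Intersect two bearing lines (angles measured from the +x axis)."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve s1 + t1*d1 = s2 + t2*d2 for the range parameters t1, t2.
    A = np.column_stack((d1, -d2))
    t = np.linalg.solve(A, np.asarray(s2, float) - np.asarray(s1, float))
    return np.asarray(s1, float) + t[0] * d1

# Two angle-only sensors 10 km apart observing the same object.
pos = triangulate(s1=(0.0, 0.0), theta1=np.radians(45.0),
                  s2=(10.0, 0.0), theta2=np.radians(135.0))
print(pos)   # -> [5. 5.]
```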

The third advantage gained using multiple sensors is improved observability. Broadening the baseline of physical observables can result in significant improvements. Figure 1.1 provides a simple example of a moving object, such as an aircraft, that is observed by both a pulsed radar and a forward-looking infrared (FLIR) imaging sensor. The radar can accurately determine the aircraft’s range but has a limited ability to determine the angular direction of the aircraft. By contrast, the infrared imaging sensor can accurately determine the aircraft’s angular direction but cannot measure the range. If these two observations are correctly associated (as shown in Figure 1.1), the combination of the two sensors provides a better determination of location than could be obtained by either of the two independent sensors. This results in a reduced error region, as shown in the fused or combined location estimate. A similar effect may be obtained in determining the identity of an object on the basis of the observations of an object’s attributes. For example, there is evidence that bats identify their prey by a combination of factors, including size, texture (based on acoustic signature), and kinematic behavior. Interestingly, just as humans may use spoofing techniques to confuse sensor systems, some moths confuse bats by emitting sounds similar to those emitted by the bat closing in on prey (see http://www.desertmuseum.org/books/nhsd_moths.html—downloaded on October 4, 2007).


FIGURE 1.1
A moving object observed by both a pulsed radar and an infrared imaging sensor.
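The error-region reduction sketched in Figure 1.1 can be illustrated by fusing two independent Gaussian position estimates in information (inverse-covariance) form. The covariances below are illustrative assumptions: the radar is accurate in range (x) but not cross-range (y), and the infrared sensor is the opposite.

```python
import numpy as np

# Hypothetical 2-D position estimates (range along x, cross-range along y).
x_radar = np.array([100.0, 2.0]);  P_radar = np.diag([1.0, 100.0])  # good range, poor angle
x_ir    = np.array([102.0, 0.5]);  P_ir    = np.diag([100.0, 1.0])  # poor range, good angle

# Information-form fusion of two independent Gaussian estimates.
info = np.linalg.inv(P_radar) + np.linalg.inv(P_ir)
P_fused = np.linalg.inv(info)
x_fused = P_fused @ (np.linalg.inv(P_radar) @ x_radar + np.linalg.inv(P_ir) @ x_ir)

print("fused estimate:", x_fused)
print("fused std devs:", np.sqrt(np.diag(P_fused)))  # small along both axes
```

The fused covariance is tighter than either sensor's alone, mirroring the reduced error region in the figure.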

 

 

1.3   Military Applications

The Department of Defense (DoD) community focuses on problems involving the location, characterization, and identification of dynamic entities such as emitters, platforms, weapons, and military units. These dynamic data are often termed the order-of-battle database or order-of-battle display (when superimposed on a map display). Beyond achieving an order-of-battle database, DoD users seek higher-level inferences about the enemy situation (e.g., the relationships among entities and their relationships with the environment and higher-level enemy organizations). Examples of DoD-related applications include ocean surveillance, air-to-air defense, battlefield intelligence, surveillance and target acquisition, and strategic warning and defense. Each of these military applications involves a particular focus, a sensor suite, a desired set of inferences, and a unique set of challenges, as shown in Table 1.1.

Ocean surveillance systems are designed to detect, track, and identify ocean-based targets and events. Examples include antisubmarine warfare systems to support navy tactical fleet operations and automated systems to guide autonomous vehicles. Sensor suites can include radar, sonar, electronic intelligence (ELINT), observation of communications traffic, infrared, and synthetic aperture radar (SAR) observations. The surveillance volume for ocean surveillance may encompass hundreds of nautical miles and focus on air, surface, and subsurface targets. Multiple surveillance platforms can be involved and numerous targets can be tracked. Challenges to ocean surveillance involve the large surveillance volume, the combination of targets and sensors, and the complex signal propagation environment—especially for underwater sonar sensing. An example of an ocean surveillance system is shown in Figure 1.2.

Air-to-air and surface-to-air defense systems have been developed by the military to detect, track, and identify aircraft and antiaircraft weapons and sensors. These defense systems use sensors such as radar, passive electronic support measures (ESM), infrared sensors, identification-friend-foe (IFF) systems, electrooptic image sensors, and visual (human) sightings. These systems support counterair operations, order-of-battle aggregation, assignment of aircraft to raids, target prioritization, route planning, and other activities. Challenges to these data fusion systems include enemy countermeasures, the need for rapid decision making, and potentially large combinations of target-sensor pairings. A special challenge for IFF systems is the need to identify enemy aircraft confidently and noncooperatively. The proliferation of weapon systems throughout the world has resulted in little correlation between the national origin of a weapon and the nationality of the combatants who use it.

TABLE 1.1
Representative Data Fusion Applications for Defense Systems

| Specific Application | Inferences Sought by Data Fusion Process | Primary Observable Data | Surveillance Volume | Sensor Platforms |
| --- | --- | --- | --- | --- |
| Ocean surveillance | Detection, tracking, and identification of targets and events | Electromagnetic (EM) signals, acoustic signals, nuclear-related and derived observations | Hundreds of nautical miles; air, surface, and subsurface | Ships, aircraft, submarines, ground-based, ocean-based |
| Air-to-air and surface-to-air defense | Detection, tracking, and identification of aircraft | EM radiation | Hundreds of miles (strategic); miles (tactical) | Ground-based, aircraft |
| Battlefield intelligence, surveillance, and target acquisition | Detection and identification of potential ground targets | EM radiation | Tens to hundreds of miles about a battlefield | Ground-based, aircraft |
| Strategic warning and defense | Detection of indications of impending strategic actions; detection and tracking of ballistic missiles and warheads | EM radiation, nuclear-related | Global | Satellites, aircraft |


FIGURE 1.2
An example of an ocean surveillance system.

Finally, battlefield intelligence, surveillance, and target acquisition systems attempt to detect and identify potential ground targets. Examples include the location of land mines and automatic target recognition. Sensors include airborne surveillance via SAR, passive ESM, photo-reconnaissance, ground-based acoustic sensors, remotely piloted vehicles, electrooptic sensors, and infrared sensors. The key inferences sought are those that support battlefield situation assessment and threat assessment.

 

 

1.4   Nonmilitary Applications

The academic, commercial, and industrial communities also address data fusion problems. Their nonmilitary applications include robotics, automated control of industrial manufacturing systems, smart buildings, and medical applications. As with military applications, each of these applications has a particular set of challenges and sensor suites, and a specific implementation environment (see Table 1.2).

TABLE 1.2
Representative Nondefense Data Fusion Applications

| Specific Application | Inferences Sought by Data Fusion Process | Primary Observable Data | Surveillance Volume | Sensor Platforms |
| --- | --- | --- | --- | --- |
| Condition-based maintenance | Detection and characterization of system faults; recommendations for maintenance/corrections | EM signals, acoustic signals, magnetic, temperatures, x-rays, lubricant debris, vibration | Microscopic to hundreds of feet | Ships, aircraft, ground-based (e.g., factories) |
| Robotics | Object location/recognition; guiding the locomotion of a robot (e.g., its "hands" and "feet") | Television, acoustic signals, EM signals, x-rays | Microscopic to tens of feet about the robot | Robot body |
| Medical diagnoses | Location/identification of tumors, abnormalities, and disease | X-rays, nuclear magnetic resonance (NMR), temperature, infrared, visual inspection, chemical and biological data, self-reports of symptoms | Human body volume | Laboratory |
| Environmental monitoring | Identification/location of natural phenomena (e.g., earthquakes, weather) | Synthetic aperture radar (SAR), seismic, EM radiation, core samples, chemical and biological data | Hundreds of miles; miles (site monitoring) | Satellites, aircraft, ground-based, underground samples |

Remote sensing systems have been developed to identify and locate entities and objects. Examples include systems that monitor agricultural resources (e.g., the productivity and health of crops), locate natural resources, and monitor weather and natural disasters. These systems rely primarily on imaging systems using multispectral sensors, and their processing is dominated by automatic image processing. Multispectral imagery, such as that from the Landsat satellite system (http://www.bsrsi.msu.edu/) and the SPOT system (see http://www.spotimage.fr/web/en/167-satellite-image-spot-formosat-2-kompsat-2-radar.php), is used. A technique frequently used for multisensor image fusion involves adaptive neural networks: multiimage data are processed on a pixel-by-pixel basis and input to a neural network to classify the contents of the image automatically. False colors are usually associated with types of crops, vegetation, or classes of objects, and human analysts can readily interpret the resulting false-color synthetic image.

A key challenge in multiimage data fusion is coregistration. This problem requires the alignment of two or more images so that corresponding picture elements (pixels) in each image represent the same location on earth (i.e., each pixel represents the same direction from an observer's point of view). The coregistration problem is exacerbated by the fact that image sensors are inherently nonlinear and perform a complex transformation between the observed three-dimensional scene and a two-dimensional image.
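A minimal sketch of the geometric core of coregistration follows, assuming a known 3×3 homography relating the two image planes; real systems must first estimate this mapping from tie points or sensor models, and the transform values below are invented for illustration.

```python
import numpy as np

def warp_points(H, pts):
    """Map pixel coordinates from one image into another via a 3x3 homography."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coordinates
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # back to pixel coordinates

# Hypothetical mapping: slight rotation, scale, and shift between two images.
theta, s, tx, ty = np.radians(2.0), 1.01, 3.5, -1.2
H = np.array([[s*np.cos(theta), -s*np.sin(theta), tx],
              [s*np.sin(theta),  s*np.cos(theta), ty],
              [0.0,              0.0,             1.0]])

print(warp_points(H, [[100, 200], [250, 80]]))
```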

A second application area, which spans both military and nonmilitary users, is the monitoring of complex mechanical equipment such as turbo machinery, helicopter gear trains, or industrial manufacturing equipment. For a drivetrain application, for example, sensor data can be obtained from accelerometers, temperature gauges, oil debris monitors, acoustic sensors, and infrared measurements. An online condition-monitoring system would seek to combine these observations to identify precursors to failure such as abnormal gear wear, shaft misalignment, or bearing failure. The use of such condition-based monitoring is expected to reduce maintenance costs and improve safety and reliability. Such systems are beginning to be developed for helicopters and other platforms (see Figure 1.3).


FIGURE 1.3
Mechanical diagnostic test-bed used by The Pennsylvania State University to perform condition-based maintenance research.

 

 

1.5   Three Processing Architectures

Three basic alternatives can be used to fuse multisensor data: (1) direct fusion of the sensor data; (2) representation of the sensor data via feature vectors, with subsequent fusion of the feature vectors; or (3) processing of each sensor to achieve high-level inferences or decisions, which are subsequently combined. Each of these approaches uses different fusion techniques, as described below and shown in Figures 1.4a through 1.4c.


FIGURE 1.4
(a) Direct fusion of sensor data. (b) Representation of sensor data via feature vectors, with subsequent fusion of the feature vectors. (c) Processing of each sensor to achieve high-level inferences or decisions, which are subsequently combined.

If the multisensor data are commensurate (i.e., if the sensors are measuring the same physical phenomena, such as two visual image sensors or two acoustic sensors), then the raw sensor data can be combined directly. Techniques for raw data fusion typically involve classic estimation methods such as Kalman filtering [2]. Conversely, if the sensor data are noncommensurate, then the data must be fused at the feature/state-vector level or decision level.
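As a hedged illustration of raw-data fusion for commensurate sensors, the sketch below runs a minimal one-dimensional constant-velocity Kalman filter and applies sequential measurement updates from two position sensors; the noise values and measurements are invented for illustration.

```python
import numpy as np

# Minimal 1-D constant-velocity Kalman filter fusing two commensurate sensors.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])         # state transition (position, velocity)
H = np.array([[1.0, 0.0]])                    # both sensors observe position only
Q = 0.01 * np.eye(2)                          # process noise (assumed)
R1, R2 = np.array([[4.0]]), np.array([[9.0]]) # per-sensor measurement noise (assumed)

x = np.array([0.0, 0.0])                      # initial state estimate
P = 10.0 * np.eye(2)                          # initial covariance

def update(x, P, z, R):
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z1, z2 in [(1.1, 0.8), (2.3, 1.9), (2.8, 3.2)]:   # synthetic measurements
    x, P = F @ x, F @ P @ F.T + Q             # predict one time step
    x, P = update(x, P, np.array([z1]), R1)   # update with sensor 1
    x, P = update(x, P, np.array([z2]), R2)   # then sensor 2 (same time step)
    print(x)
```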

Feature-level fusion involves the extraction of representative features from the sensor data. An example of feature extraction is a cartoonist's use of key facial characteristics to represent the human face. This technique, popular among political satirists, uses key features to evoke recognition of famous figures. There is evidence that humans use feature-based cognitive processes to recognize objects. In multisensor feature-level fusion, features are extracted from multiple sensor observations and combined into a single concatenated feature vector that is input to pattern recognition techniques such as neural networks, clustering algorithms, or template methods.
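Below is a minimal sketch of feature-level fusion, with toy per-sensor feature extractors and a nearest-class-mean stand-in for a trained pattern recognizer; the feature definitions, synthetic signals, and class means are all illustrative assumptions.

```python
import numpy as np

def extract_features(signal):
    """Toy per-sensor feature extractor (assumed): energy and dominant frequency bin."""
    spectrum = np.abs(np.fft.rfft(signal))
    return np.array([np.sum(signal**2), float(np.argmax(spectrum))])

rng = np.random.default_rng(1)
acoustic = rng.standard_normal(256)          # synthetic sensor observations
infrared = rng.standard_normal(256)

# Feature-level fusion: concatenate per-sensor features into one vector.
fused_vector = np.concatenate([extract_features(acoustic),
                               extract_features(infrared)])

# Nearest-class-mean stand-in for a trained classifier (hypothetical means).
class_means = {"fighter":   np.array([250.0, 30.0, 260.0, 25.0]),
               "transport": np.array([180.0,  8.0, 190.0,  6.0])}
label = min(class_means, key=lambda c: np.linalg.norm(fused_vector - class_means[c]))
print(label)
```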

Decision-level fusion combines sensor information after each sensor has made a preliminary determination of an entity's location, attributes, and identity. Examples of decision-level fusion methods include weighted decision methods (voting techniques), classical inference, Bayesian inference, and the Dempster–Shafer method.
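For example, a naive-Bayes style decision-level combination of per-sensor identity declarations might look like the following sketch (the class set and probabilities are hypothetical, and the sensors are assumed conditionally independent):

```python
import numpy as np

# Each sensor independently reports a posterior over the same identity classes.
classes = ["friend", "foe", "neutral"]
p_radar = np.array([0.5, 0.3, 0.2])
p_esm   = np.array([0.2, 0.7, 0.1])
prior   = np.array([1/3, 1/3, 1/3])

# Bayesian decision-level fusion: multiply posteriors, divide out the shared
# prior once, and renormalize; the declaration is the argmax class.
fused = (p_radar * p_esm) / prior
fused /= fused.sum()
print(dict(zip(classes, np.round(fused, 3))))
```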

 

 

1.6   Data Fusion Process Model

One of the historical barriers to technology transfer in data fusion has been the lack of a unifying terminology that crosses application-specific boundaries. Even within military applications, related but distinct applications, such as IFF, battlefield surveillance, and automatic target recognition, used different definitions for fundamental terms such as correlation and data fusion. To improve communications among military researchers and system developers, the Joint Directors of Laboratories (JDL) Data Fusion Working Group (established in 1986) began an effort to codify the terminology related to data fusion. The result of that effort was the creation of a process model for data fusion (shown in Figure 1.5) and a data fusion lexicon.

The JDL process model, which is intended to be very general and useful across multiple application areas, identifies the processes, functions, categories of techniques, and specific techniques applicable to data fusion. The model is a two-layer hierarchy. At the top level, shown in Figure 1.5, the data fusion process is conceptualized by sensor inputs, human–computer interaction, database management, source preprocessing, and six key subprocesses:

Level 0 processing (subobject data association and estimation) is aimed at combining pixel or signal level data to obtain initial information about an observed target’s characteristics.

Level 1 processing (object refinement) is aimed at combining sensor data to obtain the most reliable and accurate estimate of an entity’s position, velocity, attributes, and identity (to support prediction estimates of future position, velocity, and attributes).


FIGURE 1.5
Joint Directors of Laboratories process model for data fusion.

Level 2 processing (situation refinement) dynamically attempts to develop a description of current relationships among entities and events in the context of their environment. This entails object clustering and relational analysis such as force structure and cross-force relations, communications, physical context, etc.

Level 3 processing (significance estimation) projects the current situation into the future to draw inferences about enemy threats, friend and foe vulnerabilities, and opportunities for operations (and also consequence prediction, susceptibility, and vulnerability assessments).

Level 4 processing (process refinement) is a meta-process that monitors the overall data fusion process to assess and improve real-time system performance. This is an element of resource management.

Level 5 processing (cognitive refinement) seeks to improve the interaction between a fusion system and one or more user/analysts. Functions performed include aids for visualization, cognitive assistance, bias remediation, collaboration, team-based decision making, course of action analysis, etc.

For each of these subprocesses, the hierarchical JDL model identifies specific functions and categories of techniques (in the model’s second layer) and specific techniques (in the model’s lowest layer). Implementation of data fusion systems integrates and interleaves these functions into an overall processing flow.

The data fusion process model is augmented by a hierarchical taxonomy that identifies categories of techniques and algorithms for performing the identified functions. An associated lexicon has been developed to provide a consistent definition of data fusion terminology. The JDL model is described in more detail in Chapters 2 and 3, and by Hall and McMullen [3].

 

 

1.7   Assessment of the State-of-the-Art

The technology of multisensor data fusion is rapidly evolving. Considerable concurrent research is under way to develop new algorithms, to improve existing algorithms, and to assemble these techniques into an overall architecture capable of addressing diverse data fusion applications.

The most mature area of data fusion is level 1 processing: using multisensor data to determine the position, velocity, attributes, and identity of individual objects or entities. Determining the position and velocity of an object on the basis of multiple sensor observations is a relatively old problem; Gauss and Legendre developed the method of least squares for determining the orbits of asteroids [2]. Numerous mathematical techniques exist for performing coordinate transformations, associating observations to observations or to tracks, and estimating the position and velocity of a target. Multisensor target tracking is dominated by sequential estimation techniques such as the Kalman filter. Challenges in this area involve circumstances in which there is a dense target environment, rapidly maneuvering targets, or complex signal propagation environments (e.g., involving multipath propagation, cochannel interference, or clutter). By contrast, single-target tracking in an excellent signal-to-noise environment for a dynamically well-behaved (i.e., dynamically predictable) target is a straightforward, easily resolved problem.

Current research focuses on solving the assignment and maneuvering target problem. Techniques such as multiple-hypothesis tracking (MHT) and its extensions, probabilistic data association methods, random set theory, and multiple criteria optimization theory are being used to resolve these issues. Recent studies have also focused on relaxing the assumptions of the Kalman filter using techniques such as particle filters and other methods. Some researchers are utilizing multiple techniques simultaneously, guided by a knowledge-based system capable of selecting the appropriate solution on the basis of algorithm performance.

A special problem in level 1 processing involves the automatic identification of targets on the basis of observed characteristics or attributes. To date, object recognition has been dominated by feature-based methods in which a feature vector (i.e., a representation of the sensor data) is mapped into feature space in the hope of identifying the target on the basis of the location of the feature vector relative to a priori determined decision boundaries. Popular pattern recognition techniques include neural networks, statistical classifiers, and support vector machine approaches. Although numerous techniques are available, the ultimate success of these methods relies on the selection of good features. (Good features provide excellent class separability in feature space, whereas bad features result in greatly overlapping feature-space regions for several classes of target.) More research is needed in this area to guide the selection of features and to incorporate explicit knowledge about target classes. For example, syntactic methods provide additional information about the makeup of a target. In addition, some limited research is proceeding to incorporate contextual information, such as target mobility with respect to terrain, to assist in target identification.
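One common way to quantify "good" versus "bad" features is a Fisher-style separability ratio: between-class separation divided by within-class spread. The sketch below (synthetic one-dimensional features with assumed class distributions) shows well-separated classes scoring high and overlapping classes scoring near zero.

```python
import numpy as np

def fisher_ratio(a, b):
    """Between-class separation over within-class spread for one feature."""
    return (a.mean() - b.mean())**2 / (a.var() + b.var())

rng = np.random.default_rng(2)
# Hypothetical single-feature samples for two target classes.
good_a, good_b = rng.normal(0, 1, 500), rng.normal(5, 1, 500)    # well separated
bad_a,  bad_b  = rng.normal(0, 1, 500), rng.normal(0.5, 1, 500)  # heavily overlapping

print(f"good feature: {fisher_ratio(good_a, good_b):.2f}")   # large ratio
print(f"bad feature:  {fisher_ratio(bad_a, bad_b):.2f}")     # near zero
```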

Level 2 and level 3 fusion (situation refinement and threat refinement) is currently dominated by knowledge-based methods such as rule-based blackboard systems, intelligent agents, Bayesian belief network formulations, etc. These areas are relatively immature: numerous prototypes exist, but few robust, operational systems. The main challenge in this area is to establish a viable knowledge base of rules, frames, scripts, or other methods to represent knowledge about situation assessment or threat assessment. Unfortunately, only primitive cognitive models exist to replicate the human performance of these functions. Much research is needed before reliable, large-scale knowledge-based systems can be developed for automated situation assessment and threat assessment. Promising new approaches include fuzzy logic and hybrid architectures, which extend the concept of blackboard systems to hierarchical and multiple time-scale orientations. Also, recent work by Yen and his associates [4] on team-based intelligent agents appears promising. These agents emulate the way human teams collaborate, proactively exchanging information and anticipating information needs.

Level 4 processing, which assesses and improves the performance and operation of an ongoing data fusion process, has mixed maturity. For single-sensor operations, techniques from operations research and control theory have been applied to develop effective systems, even for complex single sensors such as phased array radars. By contrast, situations that involve multiple sensors, external mission constraints, dynamic observing environments, and multiple targets are more challenging. To date, considerable difficulty has been encountered in attempting to model and incorporate mission objectives and constraints, and to balance optimized performance against limited resources such as computing power and communication bandwidth (e.g., between sensors and processors). Methods from utility theory are being applied to develop measures of system performance and effectiveness. Knowledge-based systems are being developed for context-based approximate reasoning. Significant improvements would result from the advent of smart, self-calibrating sensors that can accurately and dynamically assess their own performance. The advent of distributed network-centric environments, in which sensing resources, communications capabilities, and information requests are highly dynamic, creates serious challenges for level 4 fusion; it is difficult (or possibly impossible) to optimize resource utilization in such an environment. In a recent study, Mullen et al. [5] applied concepts of market-based auctions to allocate resources dynamically, treating sensors and communication systems as suppliers of services, and users and algorithms as consumers, to rapidly determine how to allocate system resources to satisfy the consumers of information.
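A toy sketch in the spirit of that market-based idea follows: information consumers bid for sensor services, and each sensor is allocated to the highest bidder. All names and bid values are hypothetical, and the actual approach of Mullen et al. [5] uses richer auction mechanisms than this one-shot greedy assignment.

```python
# Toy market-based allocation: consumer -> {sensor: bid} (hypothetical values).
bids = {
    "tracker":    {"radar": 8.0, "eo_camera": 3.0},
    "classifier": {"radar": 5.0, "eo_camera": 7.0},
    "analyst":    {"radar": 6.0, "eo_camera": 6.5},
}

allocation = {}
for sensor in ("radar", "eo_camera"):
    # Award each sensor's service to the consumer bidding the most for it.
    winner = max(bids, key=lambda consumer: bids[consumer].get(sensor, 0.0))
    allocation[sensor] = winner
print(allocation)   # -> {'radar': 'tracker', 'eo_camera': 'classifier'}
```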

Data fusion has suffered from a lack of rigor in the test and evaluation of algorithms and in the means of transitioning research findings from theory to application. The data fusion community must insist on high standards for algorithm development, test, and evaluation; creation of standard test cases; and systematic evolution of the technology to meet realistic applications. On a positive note, the introduction of the JDL process model and the emerging nonmilitary applications are expected to result in increased cross-discipline communication and research. Nonmilitary research in robotics, condition-based maintenance, industrial process control, transportation, and intelligent buildings should produce innovations that cross-fertilize the entire field of data fusion technology. These challenges and opportunities establish data fusion as an exciting research field with numerous applications.

 

 

1.8   Dirty Secrets in Data Fusion

The first edition of this handbook included a chapter entitled "Dirty Secrets in Data Fusion," based on an article by Hall and Steinberg [6]. That article identified the following seven challenges or issues in data fusion:

  1. There is no substitute for a good sensor.

  2. Downstream processing cannot absolve the sins of upstream processing.

  3. The fused answer may be worse than the best sensor.

  4. There are no magic algorithms.

  5. There will never be enough training data.

  6. It is difficult to quantify the value of data fusion.

  7. Fusion is not a static process.

Subsequently, these “dirty secrets” were revised as follows:

  • There is still no substitute for a good sensor (and a good human to interpret the results)—This means that if something cannot be actually observed or inferred from effects, then no amount of data fusion from multiple sensors would overcome this problem. This problem becomes even more challenging as threats change. The transition from the search for well-known physical targets (e.g., weapon systems, emitters, etc.) to targets based on human networks causes obvious issues with determining what can and should be observed. In particular, trying to determine intent is tantamount to mind reading, and is an elusive problem.

  • Downstream processing still cannot absolve upstream sins (or lack of attention to the data)—It is clear that we must do the best processing possible at every step of the fusion/inference process. For example, it is necessary to perform appropriate image and signal processing at the data stage, followed by appropriate transformations to extract feature vectors, etc., for feature-based identity processing. Failure to perform the appropriate data processing or failure to select and refine effective feature vectors cannot be overcome by choosing complex pattern recognition techniques. We simply must pay attention at every stage of the information chain, from energy detection to knowledge creation.

  • Not only may the fused result be worse than the best sensor, but failure to address pedigree, information overload, and uncertainty may really foul things up—The rapid introduction of new sensors and the use of humans as "soft sensors" (reporters) in network operations place special challenges on determining how to weight the incoming data. Failure to accurately assess the accuracy of the sensor/input data will lead to biases and errors in the fused results. The advent of networked operations and service-oriented architectures (SOA) can exacerbate this problem by rapidly disseminating data and information without an understanding of the sources or pedigree (who did what to the data).

  • There are still no magic algorithms—This book provides an overview of numerous algorithms and techniques for all levels of fusion. Although there are increasingly sophisticated algorithms, it is always a challenge to match the algorithm with the actual state of knowledge of the data, system, and inferences to be made. No single algorithm is ideal under all circumstances.

  • There will never be enough training data—However, hybrid methods that combine implicit and explicit information can help. It is well known that pattern recognition methods, such as neural networks, require training data to establish the key weights. When seeking to map an n-dimensional feature vector to one of m classes or categories, we need, in general, on the order of n × m × (10–30) training examples gathered under a variety of observing conditions (a quick illustration follows this list). Such data can be very challenging to obtain, especially with dynamically changing threats. Hence, in general, there will never be enough training data available to satisfy the mathematical conditions for pattern recognition techniques. However, new hybrid methods that use a combination of sample data, model-based data, and explicit information from human subjects can assist in this area.

  • We have started at “the wrong end” (viz., at the sensor side vs. at the human side of fusion)—Finally, we note that extensive research has been conducted to develop methods for level 0 and level 1 fusions. In essence, we have “started at the data side or sensor inputs” to progress toward the human side. More research needs to be conducted in which we begin at the human side (viz., at the formation of hypotheses or semantic interpretation of events) and proceed toward the sensing side of fusion. Indeed, the introduction of the level 5 process was recognition of this need.
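As promised above, a quick back-of-the-envelope illustration of the training-data rule of thumb from the fifth point (a rough heuristic from the text, not a guarantee):

```python
def training_samples_needed(n_features, n_classes, factor=(10, 30)):
    """Rule-of-thumb range from the text: n x m x (10-30) labeled examples."""
    return tuple(n_features * n_classes * k for k in factor)

# e.g., a 25-dimensional feature vector mapped to 10 target classes
print(training_samples_needed(25, 10))   # -> (2500, 7500)
```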

The original issues identified (viz., that fusion is not a static process, and that the benefits of fusion processing are difficult to quantify) still hold true.

Overall, this is an exciting time for the field of data fusion. The rapid advances and proliferation of sensors, the global spread of wireless communications, and the rapid improvements in computer processing and data storage enable new applications and methods to be developed.

 

 

1.9   Additional Information

Additional information about multisensor data fusion may be found in the following references:

  • D.L. Hall, Mathematical Techniques in Multisensor Data Fusion, Artech House, Inc. (1992)—Provides details on the mathematical and heuristic techniques for data fusion.

  • E. Waltz and J. Llinas, Multisensor Data Fusion, Artech House, Inc. (1990)—Presents an excellent overview of data fusion especially for military applications.

  • L.A. Klein, Sensor and Data Fusion Concepts and Applications, SPIE Optical Engineering Press, Vol. TT 14 (1993)—Presents an abbreviated introduction to data fusion.

  • R. Antony, Principles of Data Fusion Automation, Artech House, Inc. (1995)—Provides a discussion of data fusion processes with special focus on database issues to achieve computational efficiency.

  • Introduction to Data Fusion, a multimedia computer-based training package, Artech House, Inc., Boston, MA (1995).

 

 

References

1. T. Sundic, S. Marco, J. Samitier, and P. Wide, Electronic tongue and electronic nose data fusion in classification with neural networks and fuzzy logic based models, IEEE, 3, 1474–1480, 2000.

2. H.W. Sorenson, Least-squares estimation: From Gauss to Kalman, IEEE Spectrum, 7, 63–68, July 1970.

3. D. Hall and S.A.H. McMullen, Mathematical Techniques in Multisensor Data Fusion, Artech House Inc., Boston, MA, 2004.

4. G. Airy, P.-C. Chen, X. Fan, J. Yen, D. Hall, M. Brogan, and T. Huynh, Collaborative RPD agents assisting decision making in active decision spaces, in Proceedings of the IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT'06), December 2006.

5. T. Mullen, V. Avasarala, and D.L. Hall, Customer-driven sensor management, IEEE Intelligent Systems, Special Issue on Self-Management through Self-Organization in Information Systems, March/April 2006, 41–49.

6. D.L. Hall and A. Steinberg, Dirty secrets in multisensor data fusion, Proceedings of the National Symposium on Sensor Data Fusion (NSSDF), San Antonio, TX, June 2000.
