23

 

 

Studies and Analyses within Project Correlation: An In-Depth Assessment of Correlation Problems and Solution Techniques*

 

James Llinas, Capt. Lori McConnell, Christopher L. Bowman, David L. Hall, and Paul Applegate

CONTENTS

23.1   Introduction

23.1.1   Background and Perspectives on This Study Effort

23.2   A Description of the Data Correlation Problem

23.3   Hypothesis Generation

23.3.1   Characteristics of Hypothesis Generation Problem Space

23.3.2   Solution Techniques for Hypothesis Generation

23.3.2.1   Hypothesis Enumeration

23.3.2.2   Identification of Feasible Hypotheses

23.3.3   HG Problem Space to Solution Space Map

23.4   Hypothesis Evaluation

23.4.1   Characterization of the HE Problem Space

23.4.1.1   Input Data Characteristics

23.4.1.2   Output Data Characteristics

23.4.2   Mapping of the HE Problem Space to HE Solution Techniques

23.5   Hypothesis Selection

23.5.1   The Assignment Problem

23.5.2   Comparisons of Hypothesis Selection Techniques

23.5.2.1   2D Versus ND Performance

23.5.3   Engineering an HS Solution

23.5.3.1   Deterministic Approaches

23.6   Summary

References

 

 

23.1   Introduction

The correlation problem is one in which both measurements from multiple sensors and additional inputs from multiple nonsensor sources must be optimally allocated to estimation processes that produce (through data/information fusion techniques) fused parameter estimates associated with hypothetical targets and events of interest. In the most general sense, this problem is one of combinatorial optimization, and the solution strategies involve application and extension of existent methods of this type.

This chapter describes a study effort, Project Correlation, which involved stepping back from the many application-specific and system-specific solutions and the extensively described theoretical approaches to conduct an assessment and develop guidelines for moving from problem space to solution space. In other words, the project’s purpose was to gain some understanding of the engineering design approaches for solution development and assess the scalability and reusability of solution methods according to the nature of the problem.

Project Correlation was a project within the U.S. Air Force Tactical Exploitation of National Capabilities (AFTENCAP) program. The charter of AFTENCAP was to exploit all space and national system capabilities for warfighter support. It was not surprising, therefore, that the issue of how to cost-effectively correlate such multiple sources of data/information was of considerable importance. Another AFTENCAP charter tenet was to influence new national system design and operations; it was in the context of this tenet that Project Correlation sought to develop generic, reusable engineering guidelines for effective correlation problem solution.

23.1.1   Background and Perspectives on This Study Effort

The functions and processes of correlation are part of the functions and processes of data fusion. (See Refs. 1 and 2 for reviews of data fusion concepts and mathematics.) As a component of data fusion processing, correlation suffers from some of the same problems as other parts of the overall data fusion process (which has been maturing for approximately 20 years): a lack of an adequate, scientifically based foundation of knowledge to serve as the basis for engineering guidelines with which to approach and effectively solve problems. In part, this lack of knowledge results from the relatively few comparative studies that assess and contrast multiple solution methodologies on an equitable basis. A search of the modern literature on correlation and related subjects, for example, revealed a small number of such comparative studies and many singular efforts for specialized algorithms. In part, the goal of the effort described in this chapter was to overlay or map onto these prior works an equitable basis for comparing and assessing the problem spaces in which these (individually described) algorithms work reasonably well. The lack of an adequate literature base of quantitative comparative studies forced such judgments to become subjective, at least to some degree. As a result, an experienced team was assembled to cooperatively form these judgments in the most objective way possible; none of the evaluators has a stake in, or has been in the business of, correlation algorithm development. Moreover, as an augmentation of this overall study, peer reviews of the findings were conducted via a conference and open session in January 1996 and a workshop and presentation at the National Symposium on Sensor Fusion in April 1996.

Others have attempted such characterizations, at least to some degree. For example, Blackman describes the Tracker-Correlator problem space with two parameters: sampling interval and intertarget spacing.3 This example is, as Blackman remarks, simplified but instructive. Figure 23.1 shows three regions in this space:3


FIGURE 23.1
Interpretation of MTT correlation in a closely spaced target environment.

  1. The upper region of unambiguous correlation—characterized by widely spaced targets and sufficiently short sampling intervals.

  2. An unstable region—in which targets are relatively close (in relation to sensor resolution) and miscorrelation occurs regardless of sampling rate.

  3. A region where miscorrelation occurs without track loss—consisting of very closely spaced targets for which miscorrelation occurs, but measurements are still assigned to some track, resulting in no track loss but degraded accuracy.

As noted in Figure 23.1, the boundaries separating these regions are a function of the two parameters and are also affected by other aspects of the processing. For example, detection probability (Pd) is known to strongly affect correlation performance, so that alterations in Pd can alter the shape of these regions. For the unstable region, Blackman cites some related studies that show that this region may occur for target angular separations of about two to five times the angular measurement standard deviation. Other studies quoted by Blackman show that unambiguous tracking occurs for target separations of about five times the measurement error standard deviation. These boundaries are also affected by the specifics of correlation algorithms, all of which have several components.
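As a rough, illustrative reading of these separation heuristics, the toy sketch below (Python) classifies a two-target geometry into the three regions of Figure 23.1 using the two-sigma and five-sigma rules of thumb cited by Blackman; the function and thresholds are purely illustrative assumptions and ignore the other factors (Pd, sampling interval, tracker design) noted above.

```python
def correlation_region(angular_separation, meas_sigma):
    """Roughly classify a two-target geometry into the regions of Figure 23.1.

    angular_separation: angular distance between the two targets
    meas_sigma:         angular measurement error standard deviation
                        (same units as angular_separation)

    Thresholds follow the two-to-five-sigma rules of thumb cited by Blackman;
    real region boundaries also depend on Pd, sampling interval, and tracker.
    """
    ratio = angular_separation / meas_sigma
    if ratio >= 5.0:
        return "unambiguous correlation"
    if ratio >= 2.0:
        return "unstable (miscorrelation and possible track loss)"
    return "miscorrelation without track loss (degraded accuracy)"

print(correlation_region(angular_separation=0.012, meas_sigma=0.002))  # 6 sigma
print(correlation_region(angular_separation=0.006, meas_sigma=0.002))  # 3 sigma
```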

 

 

23.2   A Description of the Data Correlation Problem

One way to effectively architect a data fusion process is to visualize it as a tree-type structure, with each node of the fusion tree having a configuration such as that shown in Figure 23.2. The partitioning strategy for such a data fusion tree is beyond the scope of this chapter; it is discussed by Bowman4 and in detail in Chapter 22 of this handbook.


FIGURE 23.2
Data fusion tree node.

The processing in each data fusion node is partitioned into three distinguishable tasks:

  1. Data preparation (common referencing)—time and coordinate conversion of source data, and the correction for known misalignments and data conflicts

  2. Data correlation or DC (association)—associates data to objects

  3. State estimation and prediction—updates and predicts the object state (e.g., kinematics, attributes, and identity [ID])

This study focused on the processes in the shaded region in Figure 23.2 labeled Data correlation. Note that DC is segmented into three parts, each involved with the association hypotheses that cluster reports together to relate them to the objects:

  • In hypothesis generation (HG), the current data and prior selected hypotheses are used to generate the current correlation hypotheses via feasibility gating, pruning, combining, clustering, and object aggregation. That is, alternate hypotheses are defined that represent feasible associations of the input data, including, for example, existing information (e.g., tracks, previous reports, or a priori data). Feasibility is defined in a manner that effectively reduces the number of candidates for evaluation and selection (e.g., by a region centered on a time-normalized track hypothesis where measurements that fall within that region are accepted as being possibly associated with that track).

  • In hypothesis evaluation (HE), each of these feasible correlation hypotheses is evaluated using kinematic, attribute, ID, and a priori data as needed to rank the hypotheses (with a score reflecting closeness to a candidate object or hypothesis) for more efficient selection searching.5 Evaluation techniques include numerical (Bayesian, possibilistic), logical (symbolic, nonmonotonic), and neural (nonlinear pattern recognition) methods, as well as hybrids of these methods.6

  • Hypothesis selection (HS) involves a (hopefully efficient) search algorithm that selects one or more association hypotheses to support an improved state estimation process (e.g., to resolve overlaps in the measurement/hypothesis matrix). This algorithm may also provide feedback to aid in the generation and evaluation of new hypotheses to initiate the next search. The selection functions include elimination, splitting, combining, retaining, and confirming correlation hypotheses to maintain tracks and aggregated objects.

Most simply put, HG nominates a set of hypotheses to which observations (based on domain/problem insight) can be associated. The HE step develops and computes a metric, which reflects the degree to which any observation is associable (accounting for various errors and other domain effects) to that hypothesis. In spite of the use of such metrics, ambiguities can remain in deciding on how best to allocate the observations. As a result, an HS function (typically an assignment problem solution algorithm) is used to achieve an optimal or near-optimal allocation of the observations (e.g., maximum likelihood (ML) based).

Note that the objects discussed here are, first of all, hypothetical objects—on the basis of some signal threshold being exceeded, an object, or perhaps more correctly, some causal factor is believed to have produced the signal. Typically, the notion of an object’s existence is instantiated by starting, in software, an estimation algorithm that attempts to compute (and predict) parameters of interest regarding the hypothetical object. In the end, the goal is to correlate the best (according to some optimization criteria) ensemble of measurements from multiple input sources to the estimation processes, so that, by using this larger quantity of information, improved estimates result.

 

 

23.3   Hypothesis Generation

23.3.1   Characteristics of Hypothesis Generation Problem Space

The characterization of the HG problem involves many of the issues related to HE that are discussed in Section 23.4.1. These issues include the nature of the input data available, knowledge of target characteristics and behavior, target density, characteristics of the sensors and our knowledge about their performance, the processing time frame, and characteristics of the mission. These are summarized in Table 23.1.

23.3.2   Solution Techniques for Hypothesis Generation

The approach to developing a solution for HG can be separated into two aspects: (1) hypothesis enumeration and (2) identification of feasible hypotheses. A summary of HG solution techniques is provided in Table 23.2. Hypothesis enumeration involves developing a global set of possible hypotheses based on physical, statistical, or explicit knowledge about the observed environment. Hypothesis feasibility assessment provides an initial screening of the total possible hypotheses to define the feasible hypotheses for subsequent processing (i.e., for HE and HS processing). These functions are described in Sections 23.3.2.1 and 23.3.2.2.

23.3.2.1   Hypothesis Enumeration
  1. Physical models—Physical models can be used to assist in the definition of potential hypotheses. Examples include intervisibility models to determine the possibility of a given sensor observing a specified target (with specified target-sensor interrelationships, environmental conditions, etc.), models of target motion (i.e., to propagate a target from its location at one time to the time of a received observation via dynamic equations of motion, terrain models, etc.), and models of predicted target signature for specific types of sensors.

  2. Syntactic models—Models can be developed to describe how targets or complex entities are constructed. This is analogous to the manner in which syntactic rules are used to describe how sentences can be correctly constructed in English. Syntactical models may be developed to identify the necessary components (e.g., emitters, platforms, sensors, etc.) that comprise more complex entities, such as a surface-to-air missile (SAM) battalion.

    TABLE 23.1
    Aspects of the Hypothesis Generation Problem Space


    TABLE 23.2
    Solution Techniques for Hypothesis Generation


  3. Doctrine-based scenarios—Models or scenarios can also be developed to describe anticipated conditions and actions for a tactical battlefield or battle space. Thus, anticipated targets, emitters, relationships among entities, entity behavior (e.g., target motion, emitter operating modes, sequences of actions, etc.), engagement scenarios, and other information can be represented.

  4. Probabilistic models—Probabilistic models can be developed to describe possible hypotheses. These models can be developed on the basis of a number of factors, such as a priori probability of the existence of a target or entity, expected number of false correlations or false alarms, and the probability of a track having a specified length.

  5. Ad hoc models—In the absence of other knowledge, ad hoc methods may be used to enumerate potential hypotheses, including (if all else fails) an exhaustive enumeration of all possible report-to-report and report-to-track pairs.

The result of hypothesis enumeration is the definition or identification of possible hypotheses for subsequent processing. This step is critical to all subsequent processing. A failure to identify realistic possible causes (or interpretations) of received data (e.g., countermeasures or signal propagation phenomena) cannot be recovered from in subsequent processing (i.e., the subsequent processing is aimed at reducing the number of hypotheses and ultimately selecting the most likely or feasible hypotheses from the superset produced at this step), at least in a deductive, or model-based, approach. It may be possible, in association processes involving learning-based methods, to adaptively create association hypotheses in real time.

23.3.2.2   Identification of Feasible Hypotheses

The second function required for HG involves reducing the set of possible hypotheses to a set of feasible hypotheses. This involves eliminating unlikely report-to-report or report-to-track pairs (hypotheses) based on physical, statistical, explicit-knowledge, or ad hoc factors. The challenge is to reduce the number of possible hypotheses to a limited set of feasible hypotheses without eliminating any viable alternatives that may be useful in subsequent HE and HS processing. A number of automated techniques are used for performing this initial pruning. These are listed in Table 23.2; space limitations prevent further elaboration.
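As a concrete illustration of this initial pruning, the minimal sketch below (Python with NumPy) applies an elliptical (chi-square) gate: a measurement is kept as a feasible candidate for a track only if its Mahalanobis distance from the predicted track position falls inside the gate. The common innovation covariance, the 99% gate threshold, and all numbers are assumptions made for illustration, not prescriptions from this chapter.

```python
import numpy as np

def gate_feasible_pairs(predicted_tracks, measurements, innov_cov, gate=9.21):
    """Return (track, measurement) index pairs that pass an elliptical gate.

    predicted_tracks: (T, 2) predicted 2D track positions
    measurements:     (M, 2) measured 2D positions
    innov_cov:        (2, 2) innovation covariance, assumed common to all pairs
    gate:             chi-square gate; 9.21 is the 99% point for 2 degrees of
                      freedom (an illustrative choice)
    """
    cov_inv = np.linalg.inv(innov_cov)
    feasible = []
    for t, x in enumerate(predicted_tracks):
        for m, z in enumerate(measurements):
            v = z - x                       # innovation
            d2 = float(v @ cov_inv @ v)     # squared Mahalanobis distance
            if d2 <= gate:
                feasible.append((t, m))
    return feasible

tracks = np.array([[0.0, 0.0], [10.0, 10.0]])
meas = np.array([[0.5, -0.2], [9.0, 10.5], [50.0, 50.0]])
print(gate_feasible_pairs(tracks, meas, innov_cov=np.eye(2)))  # [(0, 0), (1, 1)]
```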

23.3.3   HG Problem Space to Solution Space Map

A mapping between the HG problem space and solution space is summarized in Table 23.3.* The matrix shows a relationship between characteristics of the HG problem (e.g., input data and output data) and the classes of solutions. Note that this matrix is not especially prescriptive in the sense of allowing a clear selection of solution techniques based on the character of the HG problem. Instead, an engineering design process16 must be used to select the specific HG approach applicable to a given problem.

 

 

23.4   Hypothesis Evaluation

23.4.1   Characterization of the HE Problem Space

The HE problem space is described for each batch of data (i.e., fusion node) by the characteristics of the data inputs, the type of score outputs, and the measures of desired performance. The selection of HE techniques is based on these characteristics. This section gives a further description of each element of the HE problem space.

23.4.1.1   Input Data Characteristics

The inputs to HE are the feasible associations with pointers to the corresponding input data parameters. The input data are categorized according to the available data type, the level of its certainty, and its commonality with the other data being associated, as shown in Table 23.4. Input data include both recently sensed data and a priori source data. Each data type has a corresponding measure of certainty, albeit possibly a highly ambiguous one.

TABLE 23.3
Mapping Between HG Problem Space and Solution Space


TABLE 23.4
Input Data Characteristics


TABLE 23.5
Output Data Characteristics


23.4.1.2   Output Data Characteristics

The characteristics of the HE outputs needed by HS also drive the selection of the HE techniques. Table 23.5 describes these HE output categories, which are partitioned according to the output variable type: logical, integer, real, N-dimensional (ND), functional, or none. Most HE outputs are real-valued scores reflecting the confidence in the hypothesized association. However, some output a discrete confidence level (e.g., low, medium, or high), whereas others output multiple scores (e.g., one per data category) or scores with higher-order statistics (e.g., fuzzy, evidential, or random set). For some batches of data, no HE scoring is performed, and only a yes/no decision on the feasibilities is output for HS. No explicit association refers to those rare cases where the data association function is not performed (i.e., the data is only implicitly associated in performing state estimation directly on all of the data). An example in image processing is the estimation of object centroid or other features, based on intensity patterns without first clustering the pixel intensity data.

23.4.2   Mapping of the HE Problem Space to HE Solution Techniques

This section describes the types of problems for which the solution techniques are most applicable (i.e., mapping problem space to solution space). A preliminary mapping of this type is shown in Table 23.6; final guidelines were developed by Llinas et al.16 The ad hoc techniques are used when the problem is easy (i.e., performance requirements are easy to meet) or the input data errors are ill defined. Probabilistic techniques are selected according to the error statistics of the input data. Namely, ML techniques are applied when there is no useful a priori data; otherwise, maximum a posteriori (MAP) techniques are considered. Chi-square (CHI) techniques are applied for data with Gaussian statistics (e.g., without useful ID data), especially when there is data of differing dimensions where ML and MAP would have to use expected values to maintain constant dimensionality. Neyman-Pearson (NP) techniques are statistically powerful and are used as the basis for nonparametric techniques (e.g., sign test and Wilcoxon test). Conditional algebra event (CAE) techniques are useful when the input data is given, conditioned on different events (e.g., linguistic data). Rough sets (Rgh) are used to combine/score data of differing resolutions. Information/entropy (Inf) techniques are used to select the density functions and score data whose error statistics are not known. Further discussion of the various implications of problem-to-solution mappings is provided by Llinas et al.16
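To make the ML/MAP/chi-square distinction concrete, the sketch below (Python with NumPy) scores a feasible measurement-to-track hypothesis under an assumed Gaussian error model: the negative log-likelihood is a chi-square distance plus a normalization term, and supplying an a priori association probability turns the ML-style score into a MAP-style score. The function name, interface, and numbers are illustrative assumptions, not the chapter's notation.

```python
import numpy as np

def association_cost(z, z_pred, S, prior_prob=None):
    """Negative log-likelihood style cost for one feasible association.

    z:          measurement vector
    z_pred:     predicted measurement for the hypothesized track
    S:          innovation covariance (Gaussian error model assumed)
    prior_prob: optional a priori probability of this association; when given,
                the score is MAP-like, otherwise it is an ML score.
    Lower cost means a better (closer) hypothesis.
    """
    v = z - z_pred
    d2 = float(v @ np.linalg.solve(S, v))                  # chi-square term
    norm = float(np.log(np.linalg.det(2.0 * np.pi * S)))   # Gaussian normalization
    cost = 0.5 * (d2 + norm)
    if prior_prob is not None:
        cost -= np.log(prior_prob)                         # MAP: subtract log prior
    return cost

S = np.diag([1.0, 1.0])
print(association_cost(np.array([1.0, 0.5]), np.zeros(2), S))                  # ML
print(association_cost(np.array([1.0, 0.5]), np.zeros(2), S, prior_prob=0.8))  # MAP
```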

TABLE 23.6
Mapping HE Problem Space to HE Solution Techniques


 

 

23.5   Hypothesis Selection

When the initial clustering, gating, distance/closeness metric selection, and fundamental approach to HE are completed, the overall correlation process reaches a point where the most feasible set of both multisensor measurements and multisource inputs exists. The inputs are filtered, in essence, by the preprocessing operations, and the remaining inputs are allocated or assigned to the appropriate estimation processes that can exploit them for improved computation and prediction of the states of interest. This process is HS, in which the hypothesis set comprises all of the possible/feasible assignment patterns (set permutations) of the inputs to the estimation processes; thus, any single hypothesis is one of the set of feasible assignment patterns. This chapter focuses on position and ID estimation from such assigned inputs as the states of interest. However, the hypothesis generation-evaluation-selection process is also relevant to the estimation processes at higher levels of abstraction (e.g., wherein the states are situational states or threat states), where the state estimation processes, unlike the highly numeric methods used for level 1 estimates, are reasoning processes embodied in symbolic computer-based operations.

The input to the hypothesis selection process is thus, in effect, a matrix (or matrices) whose typical dimensions are, on one hand, the indexed input data/information/measurement set and, on the other, the indexed state estimation systems or processes along with the allowed ambiguity states (i.e., the other states or conditions, beyond the state estimates being maintained, to which the inputs may be assigned). Simply put, for the problems of interest described here, the two dimensions are the indexed measurements and the indexed position or ID state estimation processes. (Note, however, as discussed later in this chapter, that assignment problems can involve more than two dimensions.)

In any case, the matrix/matrices are populated with the closeness measures (resulting from the HE solution), which can be considered the costs of assigning any single measurement to any single estimator. Despite the effort devoted to optimizing the HG and HE solutions, considerable ambiguity (many feasible hypotheses) can still result. The costs in these matrices may be the values of the distance or scoring metrics selected for a particular approach to correlation, or a newly developed cost function specifically defined for the HS step. The usual strategy for defining the optimal assignment (i.e., selecting the optimal hypothesis) is to find the hypothesis with the lowest total cost of assignment. Recall, however, that there are generally two conditions under which such matrices develop: (1) when the input systems (e.g., sensors) are initiated (turned on) and (2) when the dynamic state estimation processes of interest are being maintained in a recursive or iterative mode.

As noted earlier, these assignment or association matrices, despite the careful preprocessing of the HG and HE steps, may still involve ambiguities in how to best assign the inputs to the state estimators. That is, the cost of assigning any given input to any of a few or several estimators may be reasonable or allowable within the definition of the cost function and its associated thresholds of acceptable costs. If this condition exists across many of the inputs, identifying the total-lowest-cost assignments of the inputs becomes a complex problem. The central problem to be solved in HS is that of defining a way to select the hypothesis with minimum total cost from all feasible/permissible hypotheses for any given case; often, this involves large combinations of possibilities and leads to a problem in combinatorial optimization. In particular, this problem—called the assignment problem in the domain of combinatorial optimization—is applicable to many cases other than the measurement assignment problem presented in this chapter and has been well studied by the mathematical and operations research community, as well as by the data fusion community.

23.5.1   The Assignment Problem

The goal of the assignment problem is to find an optimal way of assigning N available resources (in this case, typically, measurements) to N or M processes that require them (in this case, typically, estimation processes), where M can be less than, greater than, or equal to N. Each feasible assignment of the N × N problem (a permutation of the set N) has a cost associated with it, and the usual notion of optimality equates to minimum total cost. Although special cases allow for multiassignment (in which resources are shared), for many problems a typical constraint allows only one-to-one assignments; these problems are sometimes called bipartite matching problems.
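For the common linear-cost, 2D, set-partitioned case, standard library routines solve exactly this bipartite matching. The minimal sketch below (Python with SciPy's linear_sum_assignment; the cost matrix is invented and would in practice hold the HE scores) finds the one-to-one assignment of measurements to tracks with minimum total cost.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows index tracks, columns index measurements; entries are association costs
# (e.g., the HE scores).  The numbers are invented for illustration.
cost = np.array([
    [0.4, 7.2, 9.1],
    [6.8, 0.9, 5.5],
    [8.3, 6.1, 1.2],
])

rows, cols = linear_sum_assignment(cost)  # minimum total cost, one-to-one
for t, m in zip(rows, cols):
    print(f"track {t} <- measurement {m} (cost {cost[t, m]:.1f})")
print("total cost:", cost[rows, cols].sum())
```

Rectangular (N-by-M) cost matrices are handled the same way, with the surplus rows or columns simply left unassigned.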

Solutions for the typical and special variations of these problems are provided by mathematical programming and optimization techniques. (Historically, some of the earliest applications were to multiworker/multitask problems, and many operations research and mathematical programming texts motivate assignment problem discussions in this context.) The problem is also characterized in the related literature according to the nature of the mathematical programming or optimization techniques used as solutions for each special case. Not surprisingly, because the underlying problem model has broad applicability to many specific and real problems, the literature describes certain variations of the assignment problem and its solution in different (and sometimes confusing) ways. For example, assignment-type problems also arise in analyzing flows in networks. Ahuja et al., in describing network flows, divide their discussion into six topics:17

  1. Applications

  2. Basic properties of network flows

  3. Shortest path problems

  4. Maximum flow problems

  5. Minimum cost flow problems

  6. Assignment problems

In their presentation, they characterize the assignment problem as a minimum cost flow problem on a network.17 This characterization is exactly the same as that asserted in other applications. In network parlance, however, the assignment problem becomes a variant of the shortest path problem (which involves determining directed paths of smallest cost from any node X to all other nodes). Thus, successive shortest path algorithms solve the assignment problem as a sequence of N shortest path problems (where N = number of resources = number of processes in the (square) assignment problem).17 In essence, this is the bipartite matching problem restated in a different way.

23.5.2   Comparisons of Hypothesis Selection Techniques

The assignment problem has many technical, mathematical aspects that, given space limitations, are not described in this chapter. For example, there is the crucial issue of solution complexity in the formal sense (i.e., in the sense of big O analyses), and there is the dilemma of choosing between multidimensional solutions and two-dimensional (2D) solutions and all that is involved in such choices; many other topics also remain for the process designer. The solution space of assignment problems at level 1 can be thought of as comprising (a) linear and nonlinear mathematical programming (with some emphasis on integer programming), (b) dynamic programming and branch-and-bound methods as part of the family of methods employing implicit enumeration strategies, and (c) approximations and heuristics. Although this generalization is reasonable, note that assignment problems of the type experienced for level 1 data fusion arise in many different application areas; as a result, many specific solution types have been developed over the years, making broad generalizations difficult.

TABLE 23.7
Frequently Cited Level 1 Assignment Algorithms (Generally: Deterministic, 2D, Set Partitioned)


Table 23.7 summarizes the conventional methods used for the most frequently structured versions of assignment problems for data fusion level 1 (i.e., deterministic, 2D, set-partitioned problem formulations). Furthermore, these are solutions for the linear case (i.e., linear objective or cost functions) and linear constraints. Without doubt, this is the most frequently discussed case in the literature and the solutions and characteristics cited in Table 23.7 represent a reasonable benchmark in the sense of applied solutions (but not in the sense of improved optimality).

Note that certain formulations of the assignment problem can lead to nonlinear formulations, such as the quadratic assignment problem. Additionally, the problem can be formulated as a multiassignment problem, as in a set-covering approach, although these structures usually result in linear multiassignment formulations if the cost/objective function can be treated as separable. Otherwise, it, too, will typically be a nonlinear, quadratic-type formulation.29 No explicitly related nonlinear formulation for level 1 type applications was observed in the literature; however, Ref. 30 is a useful source for solutions to the quadratic problem. The key work in multiassignment structures for relevant fusion-type problems is by Tsaknakis et al., who form a solution analogous to the auction-type approach.29

This chapter introduces additional material focused on comparative assessments of assignment algorithms. These materials were drawn, in part, from works completed during the Strategic Defense Initiative, or SDI (Star Wars), era and reported in the SDI Tracking Panels proceedings assembled by the Institute for Defense Analyses, and, in part, from a variety of other sources. Generally, the material dates to about 1990 and could benefit from updating.

One of the better sources, in the sense of its focus on the same issues/problems of interest to this study, is the study by Washburn.31 In one segment of a larger report, Washburn surveys and summarizes various aspects of what he calls data partitioning, gating, and pairwise, multiple hypothesis, and specialized object correlation algorithms. Data partitioning and gating equate to the terms hypothesis generation and evaluation used here, and the term correlation equates to the term hypothesis selection (input data set assignment) used here. Washburn's assessments of the multiple hypothesis class of correlation algorithms he reviewed are presented in Table 23.8. Washburn repeats a critical remark related to comparisons of these algorithms: for those cases where a multiscan or multiple data set approach is required to achieve necessary performance levels (perhaps typical on SDI, especially for midcourse and terminal engagements), the exact optimal solution cannot be computed, so comparisons to the true solution are infeasible and a relaxed approach to comparison and evaluation must be taken. This is, of course, the issue of developing an optimal solution to the ND case, which is an NP-hard problem. This raises again the key question of what to do if an exact solution is not feasible.

23.5.2.1   2D Versus ND Performance

One of the only comparisons of 2D versus ND performance on a common problem captured in this study was a reference in Washburn's 1992 report to Allen et al. (1988), which examined comparative performance for a 10-target/3-passive-sensor case.31,41 These results are shown in Figure 23.3.31 Essentially, pairwise (2D) solutions performed considerably worse than the three-dimensional (3D) approaches; branch-and-bound, RELAX, and backtrack all achieved average tracking of eight of ten targets, still well short of perfect performance.

This effort involved examining several other comparison and survey studies. To summarize, the research found that, among other factors, performance is affected by the ratio of angular accuracy to angular target separation, the number of targets, the average target density in the HG process gates, the coding language, the degree of parallelization, and the available reaction time.

23.5.3   Engineering an HS Solution

As mentioned at the outset, this project sought to develop a set of engineering guidelines for data fusion process engineers engaged in correlation process design. Such guidelines have, in fact, been developed,16 and an example for the HS process is provided in this section. In the general case, a fairly complex feasible hypothesis matrix exists. The decision tree guideline for engineering these cases is shown in Figure 23.4. Although the order of the questions or criteria shown in the figure is believed to be correct in the sense of maximal ordered partitioning of the problem/solution spaces, the decisions could possibly be determined by some other sequence. Because the approaches are so basically different and involve such different philosophies, the first question is whether the selected methodology is deterministic or probabilistic. If it is deterministic, the engineering choices follow the top path, and vice versa. Some of the trade-off factors affecting this choice are shown in Table 23.9; the remainder of this discussion concentrates on the deterministic family of methods.

TABLE 23.8
Multiple Hypothesis Object Correlation Algorithms Summary


23.5.3.1   Deterministic Approaches

If the approach is deterministic, the next crucial decision relates to the quantity of input data that will be dealt with at any one time. In the tracking world, this usually reflects the number of scans or batches of data that will be considered within a processing interval. These issues reflect the more complex case of correlating and assigning ensembles of data at a time rather than a single contact at a time. In the typical cases where data ensembles are processed, they are frequently delineated by source type—often implying the type of sensor—or they are grouped by scan time. In a multisensor or multisource (or multitime) architecture, the (near-globally optimal) natural formulation for assignment processing would be n-dimensional, with n designating either the number of batch segments from a single source, for example, or the specific-scan segments from each of n sources. In the strictest sense of attempting to achieve near-global optimality, these data should be processed in an ND approach. The ND solutions usually applied in practice, because they are computationally demanding, attempt to process as much data as possible but fall well short of an all-data (i.e., true batch), truly optimal approach. The window width that such techniques can handle is yet another specific design choice to be made in the ND case. Partly because of these computational aspects, there are applications where the data are segmented into source-specific sets and a sequence of 2D assignment solutions is applied to obtain a satisficing (good enough, knee-of-the-curve performance) but strictly suboptimal result. This second choice is depicted in Table 23.10. Obviously, this design decision separates the assignment solutions into quite different categories; the basic trade-off is the nature of the optimality sought versus the solution complexity and runtime behavior.
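The sequence-of-2D-assignments strategy described above can be sketched as follows (Python with SciPy). Each scan is assigned independently against the current track list using a 2D solver, and assignments whose cost exceeds a rejection threshold are discarded; track update, track initiation, and the cost function itself are abstracted away, and all names and numbers are illustrative assumptions rather than the chapter's design.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_scan(track_states, scan, cost_fn, max_cost=10.0):
    """One 2D assignment step of a sequential (scan-at-a-time) correlator.

    track_states: list of current track state estimates
    scan:         list of measurements from a single sensor scan
    cost_fn:      cost_fn(track_state, measurement) -> association cost
    max_cost:     pairs costlier than this are left unassigned

    Repeating this scan after scan is the suboptimal-but-tractable alternative
    to a full N-dimensional assignment over all scans at once.
    """
    cost = np.array([[cost_fn(t, z) for z in scan] for t in track_states])
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]

tracks = [np.array([0.0, 0.0]), np.array([10.0, 10.0])]
scan = [np.array([0.3, 0.1]), np.array([9.7, 10.4]), np.array([55.0, 60.0])]
euclidean = lambda t, z: float(np.linalg.norm(t - z))
print(assign_scan(tracks, scan, euclidean))  # [(0, 0), (1, 1)]
```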


FIGURE 23.3
Functional comparison of correlation algorithms.


FIGURE 23.4
Representative HS decision flow diagram.

TABLE 23.9
Trade-off 1

Design Decision: Deterministic
Positives: Wide array of formal mathematical methods
Negatives: Potentially incorrect specific assignments

Design Decision: Probabilistic
Positives: Proven useful for few tracks plus high clutter; finesses specific assignments by probabilistically using all measurements in a gate (an all-neighbors approach)
Negatives: Not widely applied to other than special cases as noted; assignment processing not based on proven formalized methods (removes assignment from conventional correlation processing and embeds it in the tracker); for many problems, not conceptually correct, since measurements truly come from only one source in most (but not all) cases

TABLE 23.10
Trade-off 2


For either the 2D or ND case, the next decision has to do with whether the problem formulation is of the set-partitioning or set-covering type. This decision, as will be seen, also separates the potential solution types into considerably different categories, because these formulations are entirely different views of the problem. Recall that the set-partitioning scheme divides the notional allocations of inputs into mutually exclusive sets, with single inputs eventually being assigned, exclusively, to a particular state. The set-covering approach also divides the input data ensemble, but into nonexclusive sets, such that single inputs can be assigned to more than one state. Only in the sense of permitting multiple assignments does the set-covering construct have some similarity to the notion of probabilistic assignment. However, the general set-covering approach makes assignments deterministically, as opposed to probabilistically, and also takes a different approach to defining allocable sets. The probabilistic approach permits multiassignment as determined largely by the gating process defined by HG; all measurements in a gate are probabilistically assigned to the track, and to the degree that gates overlap, multiassignment of specific measurements exists. Generalized set covering would ideally consider a number of factors on which to define its overlapping sets and is conceptually more robust than the probabilistic approach. What is at issue here is one's view of the world, in the sense of how the data should be grouped, which, in turn, affects the eventual structure of the assignment matrix or other construct that provides the input to the assignment problem and its solution.
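The distinction can be made concrete with a trivial illustration (Python; the track and measurement labels are invented): under set partitioning each measurement belongs to exactly one group, whereas under set covering the groups may overlap.

```python
# Set partitioning: every measurement appears in exactly one group.
partition = {"track_A": {"m1", "m2"}, "track_B": {"m3"}, "false_alarm": {"m4"}}

# Set covering: groups may overlap, so a measurement can support more than
# one hypothesized object (here m2 is shared between the two tracks).
cover = {"track_A": {"m1", "m2"}, "track_B": {"m2", "m3"}, "false_alarm": {"m4"}}

all_meas = {"m1", "m2", "m3", "m4"}
is_partition = lambda groups: (
    set().union(*groups.values()) == all_meas
    and sum(len(g) for g in groups.values()) == len(all_meas)
)
print(is_partition(partition), is_partition(cover))  # True False
```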

As noted in Table 23.11 and shown in Figure 23.4, the set-partitioning view of the problem leads to a situation where there are many more known and researched solutions, so the fusion node designer will have wider solution choices if the problem is set up this way.

The next design choice is whether to employ mathematical programming-type solutions or some type of heuristic approach. The mathematical programming solution set includes linear programming (LP) methods, integer and dynamic programming, and a wide variety of variations of these types. Although mathematically elegant and formal, these methods can become computationally demanding for large-scale problems, whether 2D or ND. So, in many practical situations, an approximation-based or heuristic approach—which can develop a reasonably accurate solution while requiring considerably fewer computing resources—is preferable. Often, heuristics can be embedded as subroutines within otherwise formal methods.
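As an example of the kind of heuristic that is often substituted for, or embedded within, a formal solver, the sketch below (Python with NumPy) is a greedy assignment: repeatedly commit to the cheapest remaining (track, measurement) pair. It is fast and simple but carries no guarantee of minimum total cost; the cost matrix is invented for illustration.

```python
import numpy as np

def greedy_assignment(cost):
    """Greedy heuristic for the 2D assignment problem.

    Repeatedly commits to the cheapest remaining (row, column) pair.  Runs in
    roughly O(n^2 log n) time for an n x n matrix but, unlike the formal
    mathematical programming methods, may miss the minimum-total-cost answer.
    """
    used_rows, used_cols, pairs = set(), set(), []
    for flat in np.argsort(cost, axis=None):      # costs in ascending order
        r, c = np.unravel_index(flat, cost.shape)
        if r not in used_rows and c not in used_cols:
            pairs.append((int(r), int(c)))
            used_rows.add(r)
            used_cols.add(c)
    return pairs

cost = np.array([[0.4, 7.2, 9.1],
                 [6.8, 0.9, 5.5],
                 [8.3, 6.1, 1.2]])
print(greedy_assignment(cost))  # [(0, 0), (1, 1), (2, 2)] for this matrix
```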

TABLE 23.11
Trade-off 3

Design Decision: Set partitioning
Positives: For many, if not most, cases, this is the true view of the world, in that singular data truly associate to only one causal factor or object (see Trade-off 1, Deterministic); this assertion is conditioned on several factors, including sensor resolution and target spacing, but represents a frequent case. Much larger set of researched and evaluated solution types
Negatives: Lacks flexibility in defining feasible data sets. Does not solve multispectral or differing resolution or …

Design Decision: Set covering
Positives: More flexible in formulating feasible data groupings; often the true representation of real-world data sets
Negatives: More complex formulation and solution approach

 

 

23.6   Summary

DC can be viewed as a three-step process of HG, HE, and HS. The first step limits the number of possible associations by permitting or gating through only those that are feasible. These potential associations are evaluated and quantitatively scored in the second step and assigned in the third.

Each of these stages can be performed in a variety of ways, depending on the specific nature of the DC problem. This nature has less to do with the specific military application than with its required characteristics, such as processing efficiency, the types of input and output data, and knowledge of how these characteristics relate to the statistics of the input data and the contents of the supporting database. The data fusion systems engineer needs to consider the system's user-supplied performance-level requirements, implementation restrictions (if any), and characteristics of the system data. In this way, the application can be considered to be encoded into an engineering-level problem space. This problem space is then mapped into a solution space via guidelines of the type discussed in this chapter.

Beginning with HG, the designer needs to consider how the input data are batched (e.g., single input, single-sensor scan, multiple-sensor scan, or seconds between update frames) and what metric(s) to use. The goal should be a process that is less computationally intensive than the HE step and that coincides with the available input data (position, attribute, ID) and the list of possible hypotheses (e.g., old track, new track, false alarm, ECM/deception, or anomalous-propagation ghosting). Section 23.3 discusses these design factors and provides a preliminary mapping of gating techniques to problem space.

The next function, HE, considers much of the same material as HG; however, where the previous function evaluated candidate hypotheses on a pass/fail basis, this function must grade the goodness of unrejected hypotheses. Again, the type of input data must be considered together with its uncertainty characteristics. Probabilistic, possibilistic, and script-based techniques can be applied to generate scores that are, for example, real, integer, or discrete. These outputs are then used by the selection stage to perform the final assignment.

The selection stage accepts the scores to determine an optimum allocation of the gated-through subset of a batch of data to targets, other measurements, or other possible explanations for the data (e.g., false alarms). Optimum, in this case, means that the cost (value) of the set of associations (measurement-track, measurement-measurement), when taken as a whole, is minimized (maximized). The tools that handle ambiguous measurements are solutions to what is called the assignment problem. Solutions to this problem can be computationally intensive; therefore, efficient solutions that cover both the 2D and ND matrices of varying geometries are required. Section 23.5 provides a brief tutorial on the assignment problem and lists the numerical characteristics of popular techniques.

As mentioned previously in this chapter, the evolving guidelines and mappings presented in Sections 23.3, 23.4 and 23.5 were presented to an open audience at the government-sponsored Combat Information/Intelligence Symposium in January 1996 and at the (U.S.) National Symposium on Sensor Fusion in April 1996; however, further peer review is encouraged. The basis of such reviews should be the version of the engineering guidelines as presented in Ref. 16. Further, this chapter has treated these three individual processes as isolated parts, rather than part of a larger, integrated notion of correlation processing. Thus, additional work remains to be performed to assess the interprocess sensitivities in design choices, so that effective, integrated solutions result from the application of these guidelines.

 

 

References

1. Waltz, E. and Llinas, J., Multi-sensor Data Fusion, Artech House, Norwood, MA, 1990.

2. Hall, D.L., Mathematical Techniques in Multisensor Data Fusion, Artech House, Norwood, MA, 1992.

3. Blackman, S., Multiple Target Tracking with Radar Applications, Artech House, Norwood, MA, 1986.

4. Bowman, C.L., The data fusion tree paradigm and its dual, in Proc. 7th Nat’l Fusion Symp., invited paper, Sandia Labs, NM, March 1994.

5. Bowman, C., Max likelihood track correlation for multisensor integration, in 18th IEEE Conference on Decision and Control, December 1979.

6. Bowman, C.L., Possibilistic versus probabilistic trade-off for data association, in Proc. Signal and Data Processing of Small Targets, Vol. 1954, pp. 341–351, SPIE, April 1993.

7. Bar-Shalom, Y. and Tse, E., Tracking in a cluttered environment with probabilistic data association, Automatica, 11, 451–460, 1975.

8. Aldenderfer, M.S. and Blashfield, R.K., Cluster analysis, Sage University Paper Series on Quantitative Applications in the Social Sciences, Paper No. 07-044, London, UK, 1984.

9. Fukunaga, K., Introduction to Statistical Pattern Recognition, 2nd edn., Academic Press, New York, 1990.

10. Blackman, S., Multiple Target Tracking with Radar Applications, Artech House, Norwood, MA, 1985.

11. Uhlmann, J.K. and Zuniga, M.R., Results of an efficient gating algorithm for large-scale tracking scenarios, Naval Research Reviews, 43(1), 24–29, 1991.

12. Hall, D.L. and Linn, R.J., Comments on the use of templating for multisensor data fusion, in Proc. 1989 Tri-Service Data Fusion Symp., Vol. 1, p. 345, May 1989.

13. Noble, D.F., Template-based data fusion for situation assessment, in Proc. 1987 Tri-Service Data Fusion Symp., Vol. 1, pp. 152–162, June 1987.

14. Jackson, P., Introduction to Expert Systems, Addison-Wesley, Reading, MA, 1999.

15. Pearl, J., Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann Series on Representation and Reasoning, San Mateo, CA, 1988.

16. Llinas, J. et al., Engineering Guidelines for Data Correlation Algorithm Characterization, Vol. 3 of Final Reports on Project Correlation, June 1996.

17. Ahuja, R.K. et al., Network flows, in Handbook in OR and MS, Vol. I, GL Nemhauser et al., Eds., Elsevier Science Publishers, North-Holland, 1989, Chapter IV.

18. Kuhn, H.W., The Hungarian method for the assignment problem, Naval Research Logistics Quarterly, 2, 83–97, 1955.

19. Munkres, J., Algorithms for the assignment and transportation problems, J. SIAM, 5(1), 32–38, 1957.

20. Bourgeois, F. and Lassalle, J.C., An extension of the Munkres algorithm for the assignment problem to rectangular matrices, Comm. ACM, 14(12), 802–804, 1971.

21. Silver, R., An algorithm for the assignment problem, Comm ACM, 3, 605–606, 1960.

22. Kaufmann, A., Introduction a la Combinatorique en Veu de ses Applications, Dunod Publishers, Paris, 1968.

23. Salazar, D.L., Application of Optimization Techniques to the Multi-Target Tracking Problem, Master’s Thesis, University of Alabama at Huntsville, AL, 1980.

24. Drummond, D.E., Castanon, D.A. and Bellovin, M.S. Comparison of 2D assignment algorithms for sparse, rectangular, floating point cost matrices, J. SDI Panels Tracking, 4, 81–87, 1990.

25. Jonker, R. and Volgenant, A., A shortest augmenting path algorithm for dense and sparse linear assignment problems, Computing, 38(4), 325–340, 1987.

26. Bertsekas, D.P. and Eckstein, J., Dual coordinate step methods for linear network flow problems, Math. Prog., Series B, 42, 203–243, 1988.

27. Barr, R.S. et al., The alternating basis algorithms for assignment problems, Math Prog., 13, 1–13, 1977.

28. Balinski, M.L., Signature methods for the assignment problem, Oper. Res., 33(3), 427–536, 1985.

29. Tsaknakis, H., Doyle, B.M. and Washburn, R.B., Tracking closely spaced objects using multiassignment algorithms, in Proc. DFS 91, 1991.

30. Finke, G. et al., Quadratic assignment problems, in Annals of Discrete Mathematics, 31, Elsevier Science Pub., Amsterdam, 1987.

31. Washburn, R.B., Tracking algorithms, Section 2, in a report on the SDI Algorithm Architecture program, provided to TRW as prime, September 1992.

32. Liggins, M. and Kurien, T., Report-to-target assignment in multitarget tracking, Proceedings of the 1988 IEEE Conference on Decision and Control, Austin, TX, December 1988.

33. Wolf, J.K., Viterbi, A.M. and Dixon, G.S., Finding the best set of K paths through a trellis with application to multitarget tracking, IEEE Trans. Aerospace Electr. Systems, AES-25, 287–296, 1988.

34. Allen, T.G., Feinberg, L.B., LaMaire, R.O., Pattipati, K.R., Tsaknakis, H., Washburn, R.B., Wren, W., Dobbins, T. and Patterson, P., Multiple information set tracking correlator (MISTC), TR-406, Final Report, Alphatech, Inc., Burlington, MA, and Honeywell, Inc., Space and Strategic Avionics Division, September 1988, USASDC Contract No. DASG60-87-C-0015.

35. Parallel processing of battle management algorithms, Final Report, CDRL Item A003, AT&T Bell Labs, Whippany, NJ, December 1989.

36. Castanon, D.A., Efficient algorithms for finding the K best paths through a trellis, IEEE Trans. Aerospace Electr. Systems, 26(2), 405–410, 1989.

37. Erbacher, J.A., Todd, J. and Hopkins, C.H., SDI tracker/correlator algorithm implementation document and algorithm design document, R-041-87, Contract No. N00014-86-C-2129, VERAC, San Diego, CA, April 1987.

38. Allen, T.G., Cybenko, G., Angelli, C. and Polito, J., Parallel processing for multitarget surveillance and tracking, Technical Report TR-360, Alphatech, Burlington, MA, January 1988.

39. Nagarajan, V., Chidambara, M.R. and Sharma, R.N., Combinatorial problems in multitarget tracking—a comprehensive solution, IEEE Proc., 134, 113–118, 1987.

40. Pattipati, K.R., Deb, S., Bar-Shalom, Y. and Washburn, R.B., Passive multisensor data association using a new relaxation algorithm, IEEE Trans. Automatic Control, February 24, 1989.

41. Allen, T.G. et al., Multiple Information Set Tracking Correlator (MISTC), Technical Report TR-406, Alphatech Inc., Burlington, MA, September 1988.

* This chapter is based on a paper by James Llinas et al., Studies and analyses within project correlation: an in-depth assessment of correlation problems and solution techniques, in Proceedings of the 9th National Symposium on Sensor Fusion, March 12–14, 1996, pp. 171–188.

* The solution space abbreviations PM, SM, DM, etc., refer to the solution techniques in Table 23.2.
