3   Revisions to the JDL Data Fusion Model

 

Alan N. Steinberg and Christopher L. Bowman

CONTENTS

3.1   Objective

3.1.1   Background

3.1.2   Role of Data Fusion

3.1.3   1998 Revision

3.1.4   Definition of Data Fusion

3.1.5   Motivation for Present Revision

3.2   Recommended Refined Definitions of Data Fusion Levels

3.3   Discussion of Data Fusion Levels

3.3.1   Level 0: Signal/Feature Assessment

3.3.2   Level 1: Entity Assessment

3.3.3   Level 2: Situation Assessment

3.3.4   Level 3: Impact Assessment

3.3.5   Level 4: Process Assessment

3.4   Information Flow Within and Across the “Levels”

3.5   Model Extensions and Variants

3.5.1   Level 5: User Refinement

3.5.2   Dasarathy’s Input/Output Model

3.5.3   Other Data Fusion Models

3.6   Data Fusion and Resource Management Levels

3.7   Data Fusion and Resource Management Processing Level Issues

References

 

 

3.1   Objective

This chapter presents refinements to the well-known Joint Directors of Laboratories (JDL) data fusion model. Specifically, the goals of the chapter are to

  1. Reiterate the motivation for this model and its earlier versions, viz., to facilitate an understanding of the types of problems for which data fusion is applicable. Such understanding should aid in recognizing commonality among problems and determining the applicability of candidate solutions.

  2. Refine the definition of the data fusion levels to establish a clearer basis for partitioning among such problems and concepts.

3.1.1   Background

The data fusion model, developed in 1985 by the U.S. Joint Directors of Laboratories (JDL) Data Fusion Group, with subsequent revisions, is the most widely used system for categorizing data fusion-related functions. The goal of the JDL data fusion model is to facilitate understanding and communication among acquisition managers, theoreticians, designers, evaluators, and users of data fusion technology to permit cost-effective system design, development, and operation.1,2

The stated purposes of that model and its subsequent revisions have been to

  • Categorize different types of fusion processes

  • Provide a technical architecture to facilitate reuse and affordability of data fusion and resource management (DF&RM) system development

  • Provide a common frame of reference for fusion discussions

  • Facilitate understanding of the types of problems for which data fusion is applicable

  • Codify the commonality among problems

  • Aid in the extension of previous solutions

  • Provide a framework for investment in automation.

It should be emphasized that the JDL model was conceived as a functional model and not as a process model or as an architectural paradigm.3 An example of the latter is the dual node network (DNN) technical architecture developed by Bowman4 and described in Chapter 22.

3.1.2   Role of Data Fusion

The problem of deriving inferences from multiple items of data pervades all biological cognitive activity and virtually every automated approach to the use of information. Unfortunately, the universality of data fusion has engendered a profusion of overlapping research and development in many applications. A jumble of confusing terminology (illustrated in Figure 3.1) and ad hoc methods in a variety of scientific, engineering, management, and educational disciplines obscures the fact that the same ground has been covered repeatedly.


FIGURE 3.1
(Con)fusion of terminology. (From Steinberg, A.N. and Bowman, C.L., Handbook of Multisensor Data Fusion, CRC Press, London, 2001.)

The role of data fusion has often been unduly restricted and its relevance limited to particular state estimation problems. For example, in military applications such as targeting or tactical intelligence, the focus is on estimating and predicting the state of specific types of entities in the external environment (e.g., targets, threats, or military formations). In this context, the applicable sensors or sources that the system designer considers are often restricted to sensors that directly collect observations of targets of interest.

Ultimately, however, such problems are inseparable from other aspects of the system’s assessment of the world. In a tactical military system, these will involve estimation of the state of one’s own assets in relation to the relevant external entities: friends, foes, neutrals, and background. Estimation of the state of targets and threats cannot be separated from the problems of estimating one’s own navigation state, of calibrating one’s sensor performance and alignment, and of validating one’s library of target models. The data fusion problem, then, becomes that of achieving a consistent, comprehensive estimate and prediction of some relevant portion of the world state. From such a view, data fusion involves exploiting all available sources of data to solve all relevant state estimation or prediction problems; where relevance is determined by utility in forming plans of action.

The data fusion problem therefore encompasses a number of interrelated problems: estimation and prediction of states of entities, both external and internal (organic) to the acting system, and the interrelations among such entities. Evaluating the system’s models of the characteristics and behavior of all of these external and organic entities is, likewise, a component of the overall problem of estimating the actual world state.

The complexity of the data fusion system engineering process is characterized by difficulties in

  • Representing the uncertainty in observations and in models of the phenomena that generate observations

  • Combining noncommensurate information (e.g., the distinctive attributes in imagery, text, and signals)

  • Maintaining and manipulating the enormous number of alternative ways of associating and interpreting large numbers of observations of multiple entities.

Deriving general principles for developing and evaluating data fusion processes—whether automatic or manual—can benefit from a recognition of the similarity in the underlying problems of associating and combining data that pervade system engineering and analysis, as well as human cognition. Furthermore, recognizing the common elements of diverse data fusion problems can provide extensive opportunities for synergistic development. Such synergy—enabling the development of information systems that are cost-effective and trustworthy—requires (a) common performance evaluation measures, (b) common system engineering methodologies, (c) common architecture paradigms, and (d) multispectral models of targets and of information sources.

3.1.3   1998 Revision

In 1998, Steinberg et al.5 published an article formally addressing various extensions to the model. That article began by revisiting the basic definition(s) of data fusion both conceptually and in terms of the levels that are characterized in the original JDL model. This article was republished in 2000 with minor revisions.


FIGURE 3.2
The 1998 revised Joint Directors of Laboratories data fusion model (with process refinement and database management system partially outside the data fusion domain). (From Steinberg, A.N., Bowman, C.L., and White, F.E., Proceedings of the SPIE, 3719, 1999.)

Figure 3.2 depicts the 1998 version of the JDL data fusion model. The goals of the 1998 revision—as in the present proposed revision—were to

  • Clarify some of the concepts that guided the original model

  • Refine the definition of the JDL data fusion levels to better reflect different classes of data fusion problems—that is, systematic differences in types of inputs, outputs and techniques

  • Broaden the definitions of fusion concepts and functions to apply across as wide a range of problems as possible, beyond the initial focus on military and intelligence problems

  • Maintain a degree of continuity by deviating as little as possible from the usage of concepts and terminology prevailing in the data fusion community

The 1998 revision included the following actions:

  1. The definitions of data fusion levels 1–3 were broadened to accommodate fusion problems beyond the military and intelligence problems that had been the focus of earlier versions of the JDL model.

  2. A level 0 was introduced to address problems of detecting and characterizing signals, whether in one-dimensional time series or transform spaces or in multiple spatial dimensions (as in imagery or video feature extraction).

  3. The 1998 revision article emphasized that process refinement, as originally conceived, involves not only data fusion functions but planning and control functions as well. Given the formal duality between estimation and control and the similar duality between data association and planning, as discussed in Section 3.6, the present recommended revision clarifies these distinctions: (a) it retains within level 4 fusion the process assessment functions such as performance and consistency assessment and (b) it treats the process refinement management functions as a part of the resource management functional partitioning.

  4. The database management system (DBMS) is treated as a support application accomplished within a layer of the common operating environment below the applications program interface.

  5. The notion of estimating informational and psychological states in addition to the familiar physical states was introduced, citing the work of Waltz.6

  6. An approach to standardization of an engineering design methodology for data fusion processes was introduced, citing the earlier works of Bowman,7 Steinberg and Bowman,8 and Llinas et al.9 in which engineering guidelines for data fusion processes were elaborated.

3.1.4   Definition of Data Fusion

The initial JDL data fusion lexicon (1985) defined data fusion as

A process dealing with the association, correlation, and combination of data and information from single and multiple sources to achieve refined position and identity estimates, and complete and timely assessments of situations and threats, and their significance. The process is characterized by continuous refinements of its estimates and assessments, and the evaluation of the need for additional sources, or modification of the process itself, to achieve improved results.1

As theory and applications have evolved over the years, it has become clear that this initial definition is rather too restrictive. A definition was needed that could capture the fact that similar underlying problems of data association and combination occur in a very wide range of engineering, analysis, and cognitive situations. Accordingly, the initial definition requires a number of modifications:

  1. Although the concept combination of data encompasses a broad range of problems of interest, correlation does not. Statistical correlation is merely one method for generating and evaluating hypothesized associations among data.

  2. Association is not an essential ingredient in combining multiple pieces of data. Work in random set models of data fusion provides generalizations that allow state estimation of multiple targets without explicit report-to-target association.6–8

  3. ‘Single or multiple sources’ is comprehensive; therefore, it is superfluous in a definition.

  4. The reference to position and identity estimates should be broadened to cover all varieties of state estimation.

  5. Complete assessments are not required in all applications; ‘timely’, being application-relative, is superfluous.

  6. ‘Threat assessment’ limits the application to situations where threat is a factor. This description must also be broadened to include any assessment of the impact (i.e., the cost or utility implications) of estimated situations. In general, data fusion involves refining and predicting the states of entities and aggregates of entities and their relation to one’s own mission plans and goals. Cost assessments can include variables such as the probability of surviving an estimated threat situation.

  7. Not every process of combining information involves collection management or process refinement. Thus, the definition’s second sentence is best construed as illustrative, not definitional.

Pruning these extraneous qualifications, the model revision proposes the following concise definition for data fusion:5

The process of combining data or information to estimate or predict entity states.

Data fusion involves combining data—in the broadest sense of ‘data’—to estimate or predict the state of some aspect of the universe. Often the objective is to estimate or predict the physical state of entities: their identity, attributes, activity, location, and motion over some past, current, or future time period.

However, estimation problems can also concern nonphysical aspects. If the job at hand is to estimate the state of people (or any other sentient beings), it may be important to estimate or predict the informational and perceptual states of individuals or groups and the interrelation of these with physical states.

Arguments about whether ‘data fusion’ or another label—say, some other label shown in Figure 3.1—best describes this very broad concept are pointless. Some people have adopted terms like ‘information integration’ in an attempt to generalize earlier, narrower definitions of ‘data fusion’ (and, perhaps, to dissociate themselves from old data fusion approaches and programs).

Nonetheless, relevant research should not be neglected simply because of shifting terminological fashion. Although no body of common and accepted usage currently exists, this broad concept is an important topic for a unified theoretical approach and, therefore, deserves its own label.

3.1.5   Motivation for Present Revision

Like ‘data fusion’ itself, the JDL data fusion levels are in need of definitions that are at once broadly applicable and precise. The suggested revised partitioning of data fusion functions is designed to capture the significant differences in the types of input data, models, outputs, and inferencing appropriate to broad classes of data fusion problems. In general, the recommended partitioning is based on different aspects of a situation for which the characterization is of interest to a system user. In particular, a given entity—whether a signal, physical object, aggregate, or structure—can often be viewed either (a) as an individual whose attributes, characteristics, and behaviors are of interest or (b) as an assemblage of components whose interrelations are of interest.* For effective fusion system implementation, the fusion system designer needs to determine the types of basic entities of interest from which the relationships of interest can be defined.

Sections 3.2 and 3.3 respectively present and discuss the recommended data fusion level definitions. Section 3.4 discusses the system integration of data fusion processes across the levels. Section 3.5 examines some prominent alternatives and extensions to the JDL model. Sections 3.6 and 3.7 extend the revised JDL data fusion levels to corresponding dual levels in resource management, which together are used to decompose DF&RM problems and processes for effective system design. The revised level 4 of data fusion retains the “process assessment” fusion functions in the original JDL fusion level 4 and includes process and sensor management as part of the corresponding dual resource management levels. It is proposed that these extended DF&RM levels will serve much the same purpose for resource management problems as the original JDL levels do for data fusion problems: facilitating understanding, design and integration.10

 

 

3.2   Recommended Refined Definitions of Data Fusion Levels

The original JDL model suffered from an unclear partitioning scheme. Researchers have sometimes taken the model as distinguishing the fusion levels on the basis of processes (identification, tracking, aggregation, prediction, etc.); sometimes on the basis of topics (objects, situations, threats); and sometimes on the basis of products (object, situation, and cost estimates). In some ways, this ambiguity is a virtue: the levels loosely correlate with significant differences in engineering problems and their uses.

Nonetheless, the conflation of processes, topics, and products has tended to blur important factors in data fusion problems and solutions. Numerous examples can be cited in which problems of one level find solution in techniques of a different level. For example, entity recognition—a paradigmatically level 1 problem—is often addressed by techniques that assess relationships between an entity and its surroundings, these being paradigmatically level 2 processes. Also, relations among components of an object can be used in level 0 fusion to improve object recognition products.

It is the goal of the recommended revised definitions of the data fusion functional levels to provide a clear and useful partitioning while adhering as much as possible to current usage across the data fusion community. With this goal in mind, the recommended revised definitions partition data fusion functions on the basis of the entities of interest to information users. The products of processes at each level are estimates of some existing or predicted aspects of reality.

The recommended revised model diagram is shown in Figure 3.3, with fusion levels defined as follows:

  • Level 0: Signal/feature assessment. Estimation of signal or feature states. Signals and features may be defined as patterns that are inferred from observations or measurements. These may be static or dynamic and may have locatable or causal origins (e.g., an emitter, a weather front, etc.).

  • Level 1: Entity assessment. Estimation of entity parametric and attributive states (i.e., of entities considered as individuals).

  • Level 2: Situation assessment. Estimation of the structures of parts of reality (i.e., of sets of relationships among entities and their implications for the states of the related entities).

  • Level 3: Impact assessment. Estimation of the utility/cost of signal, entity, or situation states, including predicted utility/cost given a system’s alternative courses of action.

  • Level 4: Process assessment. A system’s self-estimation of its performance as compared to desired states and measures of effectiveness (MOEs).

These concepts are compared and contrasted in Table 3.1.

In general, the benefit of this partitioning scheme is that these levels distinguish problems that characteristically involve significant differences in types of input data, models, outputs, and inferencing. It should be noted that the levels are not necessarily processed in order and any one can be processed on its own given the corresponding inputs. In addition, a system design may involve addressing more than one fusion level within a single processing node.


FIGURE 3.3
Recommended revised data fusion model, Bowman ’04. (From Bowman, C.L., AIAA Intelligent Systems Conference, Chicago, September 20–22, 2004.)

TABLE 3.1
Characteristics of the Recommended Data Fusion Levels


 

 

3.3   Discussion of Data Fusion Levels

3.3.1   Level 0: Signal/Feature Assessment

The 1998 revision5 recommended defining a level 0 fusion to encompass various uses of multiple measurements in signal and feature processing. These include processes of feature extraction from imagery and analogous signal detection and parameter estimation in electromagnetic, acoustic, or other data. In the most general sense, these problems concern the discovery of information (in Shannon’s sense) in some region of space and time.

Level 0 processes include inferences that do not require assumptions about the presence or characteristics of entities possessing such observable features. They concern the structure of measurement sets (their syntax), not their cause (i.e., their semantics). In cases where signal characteristics are conditioned on the presence and characteristics of one or more presumed entities, the latter are treated as a context, or situation, in which the signals or features are inferable, for example, through a likelihood function λ(Z|x).

These functions include DAI/FEO (data in/features out) and FEI/FEO (features in/features out) functions as defined in Dasarathy’s model,11 as well as DEI/FEO (decisions in/features out) in our expanded version of that model.*12

Level 0 processes are often performed by individual sensors, or with the product of individual sensors. That certainly is the case with most tactical systems. However, when communications bandwidth and processing time allow, much can be gained by feature extraction or parameter measurement at the multisensor level. Image fusion characteristically involves extracting features across multiple images, often from multiple sources. Multisensor feature extraction is also practiced in diverse systems that locate targets using multistatic or other multisensor techniques.
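As a concrete illustration, the sketch below extracts a level 0 feature from raw measurements: the dominant tone frequency in a noisy time series and its relative spectral power. The function name, signal, and parameters are our own illustrative choices; the point is that no assumption is made about what entity, if any, produced the signal.

```python
import numpy as np

def extract_tone_feature(z, fs):
    """Level 0 sketch: infer a signal feature (dominant tone frequency and
    its relative power) directly from measurements, with no assumption about
    the entity that generated the signal."""
    spectrum = np.abs(np.fft.rfft(z)) ** 2          # periodogram
    freqs = np.fft.rfftfreq(len(z), d=1.0 / fs)     # bin center frequencies
    k = int(np.argmax(spectrum[1:])) + 1            # peak bin, skipping DC
    return freqs[k], spectrum[k] / spectrum.sum()   # (frequency, relative power)

# Illustrative data: a 50 Hz tone in noise, sampled at 1 kHz
fs = 1000.0
t = np.arange(1024) / fs
rng = np.random.default_rng(0)
z = np.sin(2 * np.pi * 50.0 * t) + 0.5 * rng.standard_normal(t.size)
f_hat, strength = extract_tone_feature(z, fs)       # f_hat is near 50 Hz
```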

3.3.2   Level 1: Entity Assessment

The notion of level 1 data fusion—called variously object assessment or entity assessment—was originally conceived as encompassing the most prominent and most highly developed applications of data fusion: detection, identification, location, and tracking of individual physical objects (aircraft, ships, land vehicles, etc.). Most techniques involve combining observations of a target of interest to estimate the states of interest. Thus, it was convenient to conflate process, target, and product in distinguishing level 1 fusion. However, states of physical objects can be estimated using indirect, non-attributive information. Also, non-physical objects are amenable to attributive state estimation. Accordingly, we prefer to construe level 1 data fusion as the estimation of states of entities considered as individuals. JDL level 1, as in earlier versions, is concerned with the estimation of the identity, classification (in some given taxonomic scheme), other discrete attributes, kinematics, other continuous parameters, activities, and potential states of the basic entities of interest for the fusion problem being solved.

Individuation of entities implies an inclusion, or boundary, function such as is found in such paradigmatic entities as

  • Biological organisms

  • Purpose-made entities; for example, a hammer, an automobile, or a bird’s nest

  • Other discriminated spatially contiguous uniformities that persist over some practical time interval, for example, a mountain, coral reef, cloud, raindrop, or swarm of gnats

  • Societally discriminated entities, for example, a country, tract of land, family, or other social, legal or political entity.

A level 1 estimation process can be thought of as the application of one-place relations R^(1)(a) = x, where x may be a discrete or a continuous random variable, or a vector of such variables, so that R^(1)(a) may be the temperature, location, functional class, an attribute, or activity state of an entity a.
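A minimal level 1 sketch, assuming a constant-velocity motion model and position-only measurements (all names, noise values, and data are illustrative, not from the chapter): a Kalman filter fuses a sequence of observations of a single entity into an estimate of its kinematic state.

```python
import numpy as np

def kalman_track(zs, dt=1.0, q=0.01, r=1.0):
    """Level 1 sketch: fuse scalar position measurements of one entity into
    an estimate of its kinematic state [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.zeros(2)
    P = 10.0 * np.eye(2)                    # diffuse initial uncertainty
    for z in zs:
        x = F @ x                           # predict state forward
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (np.array([z]) - H @ x) # update with the measurement
        P = (np.eye(2) - K @ H) @ P
    return x

rng = np.random.default_rng(1)
truth = 2.0 * np.arange(30)                 # entity moving at 2 units/step
zs = truth + rng.standard_normal(30)        # noisy position observations
x_hat = kalman_track(zs)                    # x_hat[1] is near 2.0
```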

3.3.3   Level 2: Situation Assessment

Level 2 data fusion concerns inferences about situations. Devlin13 provides a useful definition of a situation as any structured part of reality. We may parse this in terms of a standard dictionary definition of a structure as a set of entities, their attributes, and relationships (Merriam-Webster Dictionary, 3rd edition). Once again, data fusion is the estimation of the state of some aspects of the world on the basis of multiple data.

Abstract situations can be represented as sets of relations; actual situations can be represented as sets of instantiated relations (or, relationships).* Thus, situation assessment involves inferencing from the estimated state(s) of one entity in a situation to another and from the estimated attributes and relationships of entities to situations. Attributes of individuals can be treated as one-place relationships.

Methods for representing relationships and for inferring entity states on the basis of relationships include graphical methods (e.g., Bayesian and other belief networks).
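The simplest such inference is a Bayes update of an entity's attribute estimate conditioned on an instantiated relationship. A toy sketch, with purely illustrative priors and likelihoods (none of these numbers come from the chapter):

```python
# Level 2 sketch: refine an aircraft's class estimate using an inferred
# "escorting" relationship to a known tanker. All numbers are hypothetical.

prior = {"fighter": 0.3, "transport": 0.7}           # level 1 class estimate
p_escort_given = {"fighter": 0.8, "transport": 0.1}  # P(relationship | class)

def refine_with_relationship(prior, likelihood):
    """Bayes update of an entity state using an instantiated relation."""
    unnorm = {c: prior[c] * likelihood[c] for c in prior}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

posterior = refine_with_relationship(prior, p_escort_given)
# The relationship evidence now favors "fighter" despite the prior.
```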

Situation assessment, then, involves the following functions:

  1. Inferring relationships among (and within) entities

  2. Recognizing/classifying situations on the basis of estimates of their constituents (entities, attributes, and relationships)

  3. Using inferred relationships to infer entity attributes, to include attributes (a) of entities that are elements of a situation (e.g., refining the estimate of the existence and state variables of an entity on the basis of its inferred relationships) and (b) of the entities that are themselves situations (e.g., in determining a situation to be of a certain type)

These functions implicitly include inferring and exploiting relationships among situations, for example, predicting future situations on the basis of current and historical situations.

As noted above, an entity—whether a signal, physical object, aggregate, or structure—can often be viewed either (a) as an individual whose attributes, characteristics, and behaviors are of interest or (b) as an assemblage of components whose interrelations are of interest. From the former point of view, discrete physical objects (the paradigm targets of level 1 data fusion) are components of a situation. From the latter point of view, the targets are themselves situations, that is, contexts for the analysis of components and their relationships.

A more detailed discussion on situation assessment and its relationship to entity and impact assessment is given in Chapter 18.

3.3.4   Level 3: Impact Assessment

It must be admitted that level 3 data fusion has been the most troublesome to define clearly and consistently with common usage in the field. The initial name of “threat assessment” is useful and relates to a specific, cohesive set of operational functions, but only within the domain of tactical military or security applications. The 1998 version attempted to broaden the concept to nonthreat domains, with a change of name to impact assessment. This was defined as the estimation and prediction of effects on situations of planned or estimated/predicted actions by the participants (e.g., assessing susceptibilities and vulnerabilities to estimated/predicted threat actions, given one’s own planned actions).

We may simplify this definition by taking impact assessment to be conditional situation estimation. Questions to be answered include those of the following form:

“If entities x_i ∈ X follow courses of action α_i, what will be the outcome?”

Many problems of interest involve a given agent x operating in a reactive environment (i.e., one that responds differentially to x’s actions). Furthermore, such reaction is often the result of one or more responsive agents, often assumed to be capable of intentional activity. As such, impact assessment problems can be amenable to game theoretic treatment, when the objective functions of the interacting agents can be specified.

In military threat assessment, we are usually concerned with the state of a set of such agents that constitute our forces as affected by intentional actions and reactions of hostile agents. In other words, we are concerned with inferring the concept:

“If we follow course of action α, what will be the outcome (and cost)?”

Impact assessment so construed can also include counterfactual event or situation prediction:

“If x had followed this course of action, what would have been the outcome?”

In general, then, level 3 fusion involves combining multiple sources of information to estimate conditional or counterfactual outcomes and costs. It is amenable to cost analysis, whether Bayesian or otherwise.

Because the utility of a fused state in supporting a user generally needs to be predicted based on the estimated current situation, known plans, and predicted reactions, level 3 fusion processing typically has different inputs and different outputs (viz. utility predictions) than the other fusion levels.

An example of a level 3 type of association problem is that of determining the expected consequence of system plans or courses of action, given the current estimated signal, entity, and situational state. An example of a level 3 type of estimation problem is the prediction of the impact of the estimated current situational state on the mission utility.
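The conditional-outcome question above can be sketched as a small expected-cost computation over candidate courses of action. The situation probabilities and cost figures below are purely illustrative stand-ins for a level 2 situation estimate and a mission cost model:

```python
# Level 3 sketch: predict the expected cost of each candidate course of
# action, conditioned on the estimated situation. All numbers hypothetical.

situations = {"ambush": 0.2, "clear": 0.8}   # level 2 situation probabilities
cost = {                                     # cost[action][situation]
    "advance": {"ambush": 100.0, "clear": 5.0},
    "detour":  {"ambush": 20.0,  "clear": 15.0},
}

def expected_cost(action):
    """Answer 'if we follow course of action a, what will be the cost?'"""
    return sum(situations[s] * cost[action][s] for s in situations)

best = min(cost, key=expected_cost)
# "advance" costs 24.0 in expectation, "detour" 16.0, so "detour" wins.
```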

Level 3 issues and methods are discussed further in Chapter 18.

3.3.5   Level 4: Process Assessment

In early versions of the JDL model, level 4 was designated as process refinement. This was meant to encompass both assessment and adaptive control of the fusion process and—in many interpretations—in the data collection process as well. We have argued (a) that a more fundamental partitioning of functionality involves a distinction between data fusion functions (involving data association and state estimation) and resource management functions (involving planning and control), and (b) that functions that determine how the system resources should respond are clearly resource management functions.5,8,12

There is, however, a need for a system-level category of level 4 fusion processes, one that is involved in associating system states and actions to desired system states and estimating or predicting the performance of the system. We therefore propose a data fusion level 4 concerned with this process performance. Level 4 data fusion functions combine information to estimate a system’s measures of performance (MOPs) and MOEs based on a desired set of system states and responses. In the case of a data fusion system, MOEs can involve correlation of system state estimates to truth.

In level 4 data fusion, the primary association problem to be solved is that of determining which system outputs correspond to which of the desired goal states.

In the case of a data fusion system, the level 4 association problem may be that of determining a correspondence between system tracks and real-world entities.

In level 4 fusion, estimation may include such MOPs as sensor calibration and alignment errors, track purity and fragmentation, etc. MOEs could include utility metrics on target classification and location accuracies. Such estimates may be made on the basis of the system’s fusion multisensor product, or by an external test-bed system on the basis of additional fiducial information.
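A minimal sketch of the level 4 association-and-MOP idea, assuming scalar track and truth states and a greedy gated assignment (the gate value and the MOP names are our own illustrative choices; a real test bed would use an optimal assignment over full state vectors):

```python
import numpy as np

def associate_tracks_to_truth(tracks, truths, gate=3.0):
    """Level 4 sketch: associate system tracks to fiducial truth objects,
    then score simple MOPs from the resulting associations."""
    pairs, used = [], set()
    for i, t in enumerate(tracks):
        # Distance to each unassigned truth object
        d = [abs(t - g) if j not in used else np.inf
             for j, g in enumerate(truths)]
        j = int(np.argmin(d))
        if d[j] <= gate:                    # gated nearest-neighbor assignment
            pairs.append((i, j))
            used.add(j)
    rms = (np.sqrt(np.mean([(tracks[i] - truths[j]) ** 2 for i, j in pairs]))
           if pairs else np.inf)
    mops = {"completeness": len(pairs) / len(truths),  # fraction of truths tracked
            "rms_error": rms}
    return pairs, mops

tracks = [1.1, 5.2, 40.0]   # 40.0 is a stray track with no nearby truth
truths = [1.0, 5.0, 10.0]
pairs, mops = associate_tracks_to_truth(tracks, truths)
```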

For an integrated DF&RM system, there can be several levels of planning and control (discussed in Sections 3.6 and 3.7).

RM association problems may involve determining a correspondence between resource actions and desired responses. A hierarchy of MOPs and MOEs can be estimated based on these associations.

 

 

3.4   Information Flow Within and Across the “Levels”

Processing at each of these data fusion levels involves the batching of available data for fusion within a network of fusion nodes where, paradigmatically, each fusion node accomplishes

  • Data preparation (data mediation, common formatting, spatiotemporal alignment, and confidence normalization)

  • Data association (generation, evaluation, and selection of association hypotheses, i.e., of hypotheses as to the applicability of specific data to particular aspects of the state estimation problem)

  • State estimation and prediction (estimating the presence, attributes, inter-relationships, utility, and performance or effectiveness of entities of interest, as appropriate to the data fusion node)*
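The three node functions above can be sketched end to end for scalar reports from two sensors. The bias, gate, and averaging rule are illustrative stand-ins for real data mediation, association-hypothesis scoring, and state estimation:

```python
import numpy as np

def fusion_node(reports_a, reports_b, bias_b=0.0, gate=2.0):
    """Sketch of one fusion node: (1) data preparation (remove sensor B's
    known bias to align frames), (2) data association (gated nearest
    neighbor), (3) state estimation (average the associated reports)."""
    prepared_b = [z - bias_b for z in reports_b]        # data preparation
    fused, used = [], set()
    for za in reports_a:
        d = [abs(za - zb) if j not in used else np.inf
             for j, zb in enumerate(prepared_b)]
        j = int(np.argmin(d)) if d else None
        if j is not None and d[j] <= gate:              # data association
            used.add(j)
            fused.append(0.5 * (za + prepared_b[j]))    # state estimation
        else:
            fused.append(za)        # unassociated: keep sensor A's report
    return fused

# Sensor B reads high by 0.5; its second report (25.0) has no counterpart.
fused = fusion_node([10.0, 20.0], [10.6, 25.0], bias_b=0.5)
```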

A comparison of the data association problems that need to be solved at each fusion level is given in Figure 3.4. In all the fusion levels, the accuracy of the fused state estimates tends to increase as larger batches of data (e.g., over more data types, sources, or times) are fused; however, the cost and complexity of the fusion process also increase. Thus, a fusion node network at the knee of the performance-versus-cost curve is sought in system design and operation.


FIGURE 3.4
Data association problems occur at each fusion level.

As noted earlier, the data fusion levels are not necessarily processed in order and any one can be processed on its own or in combination given the corresponding inputs. Therefore, the term level can be misleading in this context. The early data fusion model descriptions rightfully avoided the sense of hierarchy by the use of functional diagrams such as Figures 3.2 and 3.3, in which various data fusion functions are represented in a common bus architecture.

Nonetheless, there is often a natural progression from raw measurements to entity states, to situation relationships, to utility prediction, and to system performance assessment, in which data are successively combined as suggested by the level ordering. Composition of estimated signals (or features), entities, and aggregates per levels 0–2 is quite natural, with a corresponding reverse flow of contexts as illustrated in Figure 3.5.* Utility is generally assessed as a function of an estimated or predicted situational state, rather than of the state of a lone entity within a situation. That is the reason that level 3 fits naturally above level 2, but level 2 outputs are not always required for level 3 processing. Similarly, system performance can also be assessed on the basis of estimated or predicted outcomes of situations and executed or planned system actions, consistent with our ordering of levels.


FIGURE 3.5
Characteristic functional flow across the data fusion levels.

 

 

3.5   Model Extensions and Variants

3.5.1   Level 5: User Refinement

A process identified as “user refinement” has been proposed variously by Hall et al.14 and by Blasch and Plano.15 The latter give the following definition:

Level 5: Visualize. This process connects the user to the rest of the fusion process so that the user can visualize the fusion products and generate feedback or control to enhance/improve these products.15

Hall and McMullen16 define level 5 data fusion as cognitive refinement and human–computer interaction. Cognitive refinement includes the transformation from sensor data to a graphics display, but encompasses all methods of presenting information to human users. User refinement processes can involve adaptively determining which users need information and which have access to it. User refinement can also include knowledge management to determine adaptively which data most need to be retrieved and displayed to support cognitive decision making and action.

While there is no denying the essential character of such functions in many systems involving both human and automated processes, it can be argued that these are not specifically data fusion functions. In fact, they usually need to be managed by higher resource management processes such as objective management and resource relationship management. As a functional model, the JDL data fusion model is partitioned according to the functions performed, not according to the mechanisms that perform them, whether automatic, human, or other.

TABLE 3.2
Interpretation of Dasarathy’s Data Fusion Input/Output Model


The level 0–4 data fusion processes as we have defined them—involving the combination of information to estimate the state of various aspects of the world—can and have been performed by people, by automated processes, and by combinations thereof.*

We therefore argue against the inclusion of such a level 5 in the data fusion model. Like data archiving, communications, and resource management, such functions as data access, data retrieval, and data presentation are ancillary functions that often occur in systems that also do data fusion. As these descriptions indicate, some of these functions are resource management functions that are formal duals of certain data fusion functions.

3.5.2   Dasarathy’s Input/Output Model

Partitioning on the basis of process products (as distinct, e.g., from process types) allows a direct mapping from Dasarathy’s input/output (I/O) model,11 as extended in Chapter 2 of the previous edition of this handbook.12 Accordingly, the latter can be seen as a refinement of the JDL model.

Dasarathy11 categorized data fusion functions in terms of the types of data/information that are processed and the types that result from the process. Table 3.2 illustrates the types of I/O considered. Processes corresponding to the cells in the highlighted matrix region are described by Dasarathy, using the abbreviations DAI/DAO, DAI/FEO, FEI/FEO, FEI/DEO, and DEI/DEO. A striking benefit of this categorization is the natural manner in which technique types can be mapped into it.

In the previous edition of this handbook,12 this categorization was extended by

  • Adding labels to these cells, relating I/O types to process types

  • Filling in the unoccupied cells in the original matrix

Note that Dasarathy’s original categories represent constructive, or data-driven, processes, in which organized information is extracted from less organized data. Additional processes—FEI/DAO, DEI/DAO, and DEI/FEO—can be defined that are analytic, or model-driven, such that organized information (a model) is analyzed to estimate lower-level data (features or measurements) as they relate to the model. Examples include predetection tracking (an FEI/DAO process), model-based feature extraction (DEI/FEO), and model-based classification (DEI/DAO). The remaining cell in Table 3.2—DAI/DEO—has not been addressed in a significant way (to the authors’ knowledge), but could involve the direct estimation of entity states without the intermediate step of feature extraction.
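The constructive/analytic distinction lends itself to a simple encoding. The following sketch (the dictionary, the ordering rule, and the function name are ours, not the chapter's) tags each I/O pair with Dasarathy's abbreviation and classifies it as data-driven or model-driven.

```python
# Sketch of the extended Dasarathy I/O categorization: a process is
# constructive (data-driven) when its output is at least as organized as
# its input, and analytic (model-driven) when a model is analyzed to
# estimate lower-level data.
LEVELS = {"DA": 0, "FE": 1, "DE": 2}  # data, features, decisions

def categorize(inp, out):
    """Return Dasarathy's abbreviation and the process kind."""
    tag = f"{inp}I/{out}O"            # e.g., "FEI/DEO"
    kind = "constructive" if LEVELS[out] >= LEVELS[inp] else "analytic"
    return tag, kind
```

For example, feature extraction maps to ("DAI/FEO", "constructive"), while predetection tracking maps to ("FEI/DAO", "analytic").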

In Ref. 12, we extended Dasarathy’s categorization to encompass level 2, 3, and 4 processes, as shown in Table 3.3. Here rows and columns have been added to correspond to the object types listed in Figure 3.5.

With this augmentation, Dasarathy’s categorization becomes something of a refinement of the JDL levels. Not only can each of the levels (0–4) be subdivided on the basis of input data types, but level 0 can also be subdivided into detection processes and feature-extraction processes.

Of course, much of Table 3.3 remains virgin territory; researchers have only seriously explored its northwest quadrant, with tentative forays southeast. Most likely, little utility will be found in either the northeast or the southwest. However, there may be gold buried somewhere in those remote stretches.

TABLE 3.3
Expansion of Dasarathy’s Model to Data Fusion Levels 0–4


FIGURE 3.6
The data fusion and resource management duality: key concepts.

3.5.3   Other Data Fusion Models

Bedworth and O’Brien21 provide a synthesis of data fusion models. Their resultant Omnibus model maps the early JDL and Dasarathy’s models into an OODA (observe, orient, decide, act) model.

It should be noted that the Omnibus model—like the OODA loop—is a process model, whereas the JDL model and Dasarathy’s I/O model are functional models.* Among the most influential variants and alternatives to the JDL model are those presented by Endsley,22 Salerno,23 and Lambert.24 These are discussed in some detail in Chapter 18.

 

 

3.6   Data Fusion and Resource Management Levels

Just as a formal duality exists between estimation and control, there is a more encompassing duality between data fusion and resource management (DF&RM).10 The DF&RM duality incorporates the association/planning duality as well as the estimation/control duality, as summarized in Figure 3.6. The planning and control aspects of process refinement—level 4 in early versions of the JDL model—are inherently resource management functions.

A functional model for resource management was proposed in Ref. 17 in terms of functional levels that are, accordingly, the duals of the corresponding data fusion levels. This model has been refined in Ref. 4.

A dual scheme for modeling DF&RM functions has the following beneficial implications for system design and development:

  • Integrated DF&RM systems can be implemented using a network of interacting fusion and management nodes.

  • The duality of DF and RM concepts provides insights useful for developing and evaluating techniques for implementing both DF and RM.

  • The technology maturity of data fusion can be used to bootstrap resource management technology, which is lagging fusion development by more than 10 years, much as estimation did for control theory in the 1960s.

  • The partitioning into levels reflects significant differences in the types of data, resources, models, and inferencing necessary for each level.

  • All fusion levels can be implemented using a fan-in network of fusion nodes where each node performs: data preparation, data association, and state estimation.

  • Analogously, all management levels can be implemented using a fan-out network of management nodes where each node performs: task preparation, task planning, and resource state control.
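The fan-in/fan-out symmetry of the last two points can be sketched with toy tree-structured networks; the averaging rule for fusion and the equal-split rule for management below are purely illustrative assumptions.

```python
def fuse_fan_in(node):
    """Fusion node in a fan-in tree: combine child estimates bottom-up
    (simple averaging stands in for a real estimation step)."""
    if "children" not in node:
        return node["estimate"]
    child_estimates = [fuse_fan_in(c) for c in node["children"]]
    return sum(child_estimates) / len(child_estimates)

def manage_fan_out(node, task):
    """Management node in a fan-out tree: decompose a task top-down
    (an equal split stands in for a real planning step)."""
    if "children" not in node:
        return {node["name"]: task}
    share = task / len(node["children"])
    plan = {}
    for child in node["children"]:
        plan.update(manage_fan_out(child, share))
    return plan
```

Information thus flows toward the root in the fusion network and away from the root in its management dual, which is the structural content of the duality claim.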

Given the dual nature of DF&RM, we would expect a corresponding duality in functional levels. As with data fusion levels, the dual resource management processing levels are based on the user’s resources of interest. The proposed dual DF&RM levels are presented in Table 3.4, and the duality concepts that were used in defining these resource management levels are summarized in Figure 3.7.

As with the corresponding partitioning of data fusion functions into levels, the utility of these management levels results from the significant differences in the types of resources, models, and inferencing used in each level. Resource management includes applications-layer functions such as sensor management, target management, weapons management, countermeasure management, flight management, process management, communications management, and so on. As with the data fusion levels, the resource management levels are not necessarily processed in order and any one can be processed on its own or in combination given the corresponding inputs.

All the management levels can be implemented and integrated using a fan-out network of management nodes where each node performs the functions of task preparation, task planning, and resource state control. DF&RM systems can be implemented using a network of interacting fusion and management nodes. These node interactions can occur across any of the levels. However, for illustrative purposes, Figure 3.8 shows a sequential processing flow across the DF&RM levels 0–3.*

TABLE 3.4
Dual Data Fusion and Resource Management Processing Levels


FIGURE 3.7
Duality between DF and RM elements.


FIGURE 3.8
Illustrative multilevel data fusion and resource management system network with sequential level interactions.


FIGURE 3.9
Response planning problems occur at each management level as duals of the corresponding data fusion levels.

The extended partitioning by type of inputs, response planning, and output response states strives to enhance clarity, ease solution development, and respect the duality. Planning—the dual of data association—involves analogous assignment problems that differ by resource management level. This is illustrated in Figure 3.9, the resource management analog of Figure 3.4.
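The analogy between association and planning can be made concrete with a toy assignment step: candidate resource-task pairings are generated, evaluated by a utility score, and selected, just as association hypotheses are. The greedy utility-ranked selection below is our own illustrative choice, not a method prescribed by the chapter.

```python
from itertools import product

def plan(resources, tasks, utility):
    """Planning as assignment: generate all (resource, task) hypotheses,
    evaluate them with the supplied utility function, and greedily select
    a one-to-one assignment in order of decreasing utility."""
    hypotheses = sorted(product(resources, tasks),
                        key=lambda rt: utility(*rt), reverse=True)
    assignment, used_r, used_t = {}, set(), set()
    for r, t in hypotheses:
        if r not in used_r and t not in used_t:
            assignment[r] = t
            used_r.add(r)
            used_t.add(t)
    return assignment
```

An optimal assignment solver (e.g., the Hungarian algorithm) could replace the greedy selection, exactly as optimal assignment replaces nearest-neighbor selection in data association.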

The five resource management processing levels are described below, each paired with its dual data fusion level to highlight the duality concepts:

  • Resource signal management level 0. Management to task/control specific resource response actions (e.g., signals, pulses, waveforms, etc.)

  • Signal/feature assessment level 0. Fusion to detect/estimate/perceive specific source entity signals and features

  • Resource response management level 1. Management to task/control continuous and discrete resource responses (e.g., radar modes, countermeasures, maneuvering, communications)

  • Entity assessment level 1. Fusion to detect/estimate/perceive continuous parametric (e.g., kinematics, signature) and discrete attributes (e.g., entity type, identity, activity, allegiance) of entity states

  • Resource relationship management level 2. Management to task/control relationships (e.g., aggregation, coordination, conflict) among resource responses

  • Situation assessment level 2. Fusion to detect/estimate/comprehend relationships (e.g., aggregation, causal, command/control, coordination, adversarial relationships) among entity states

  • Mission objective management level 3. Management to establish/modify the objective of level 0, 1, 2 action, response, or relationship states

  • Impact assessment level 3. Fusion to predict/estimate the impact of level 0, 1, or 2 signal, entity, or relationship states

  • Design management level 4. Management to task/control the system engineering (e.g., problem-to-solution space algorithm/model design mapping, model discovery, and generalization)

  • Process assessment level 4. Fusion to estimate the system’s MOPs and MOEs.

 

 

3.7   Data Fusion and Resource Management Processing Level Issues

The user’s entities of interest can be the basis of all five levels of fusion processing. The features of an entity can be estimated on the basis of attributes inferred from one or more entity signal observations (e.g., through a level 0 data preparation/association/estimation process). For example, signal-level association and estimation problems occur in electronic intelligence (ELINT) pulse train deinterleaving or in feature extraction of an entity in imagery. This involves inferring the existence and characteristics of the features of an entity by attributive or relational state estimation from observations and measurements specific to each sensor/source modality (e.g., radar pulses, hyperspectral pixel intensities).
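As an illustration of such a signal-level association problem, a toy pulse-train deinterleaver can be sketched; the greedy predicted-spacing rule, the tolerance, and all parameter values are our own simplifying assumptions, not an algorithm from the chapter.

```python
def deinterleave(toas, pris, tol=0.01):
    """Greedy assignment of pulse times of arrival (TOAs) to candidate
    pulse repetition intervals (PRIs). A pulse joins the first emitter
    hypothesis whose predicted spacing it matches; unmatched pulses are
    simply dropped in this sketch."""
    trains = {pri: [] for pri in pris}
    for t in toas:
        for pri, train in trains.items():
            if not train or abs((t - train[-1]) - pri) <= tol:
                train.append(t)
                break
    return trains
```

Given interleaved TOAs from two emitters with PRIs of 0.2 and 0.3, the sketch separates the merged stream back into the two constituent pulse trains.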

The identity, location, track, and activity state of an entity of interest (whether it be a man, a molecule, or a military formation) can be estimated on the basis of attributes inferred from one or more signal or entity observations (e.g., through one or a network of data preparation/association/estimation fusion nodes). The same entity’s compositional or relational state (e.g., its role within a larger structure and its relations with other elements of that structure) can be inferred through level 2 processes. The behavior of the same entity can also be projected to assess the impact, that is, the utility of an estimated or predicted situation relative to the user’s objective.

The fused states can then be compared to desired states at all fusion levels to determine the performance of the fusion system; similarly, response states can be compared to desired responses to assess the performance of the resource management system.

The declaration of features, entity states, their relationships, their conditional interactions and impacts, or their correspondence to truth is a data association (i.e., hypothesis generation, evaluation, and selection) function within the particular fusion level.

Fused state projection at the impact assessment level (data fusion level 3) needs to determine the impact of alternative projected states. This requires additional information concerning the conditional actions and reactions of the entities in the relevant situation (e.g., in the battlespace). Furthermore, utility assessment requires additional information on the mission success criteria, which is likewise unnecessary for level 0–2 fusion.*

Utility assessment is the estimation portion of level 3 processing. Predicting the fused state beyond the current time is central to impact assessment, whose results in turn inform mission resource management.

The objective of the lower levels of resource management in using impact assessments is to plan responses to improve the confidence in mission success, whereas the objective of level 4 design management in using the performance evaluation (PE) outputs is to improve confidence in system performance. PE nodes tend to have significant interaction with their duals in design management, in that they provide the performance estimates for a DF&RM solution that are used to propose a better DF&RM solution. This resource management function for optimizing the mapping from problem to solution space is usually referred to as system engineering (i.e., equated to design management here). The design management architecture provides a representation of the system engineering problem that partitions its solution into the resource management node processes; that is, of

  • Problem alignment (resolving design needs, conflicts, mediation)

  • Design planning (design generation, evaluation, and selection)

  • Design implementation and test (output-specific resource control commands)

The dual management levels are intended to improve understanding of the management alternatives and facilitate exploitation of the significant differences in the resource modes, capabilities, and types as well as mission objectives.

Process refinement—the old data fusion level 4—has been subsumed as an element of each level of resource management that includes adaptive data acquisition and processing to support mission objectives (e.g., sensor management and information systems dissemination).

User refinement (which, as noted above, has been proposed as a data fusion level 5)15 has been subsumed as an element of knowledge management within resource management.

 

 

References

1. Llinas, J., C.L. Bowman, G. Rogova, A.N. Steinberg, E. Waltz, and F. White, Revisiting the JDL data fusion model II, Proceedings, Seventh International Conference on Information Fusion, Stockholm, 2004.

2. White, F.E., Jr., A model for data fusion, Proceedings of the First National Symposium on Sensor Fusion, 1988.

3. Buede, D.M., The Engineering Design of Systems, Wiley, New York, 2000.

4. Bowman, C.L., The dual node network (DNN) data fusion & resource management (DF&RM) architecture, AIAA Intelligent Systems Conference, Chicago, September 20–22, 2004.

5. Steinberg, A.N., C.L. Bowman, and F.E. White, Revisions to the JDL Model, Joint NATO/IRIS Conference Proceedings, Quebec, October, 1998 and in Sensor Fusion: Architectures, Algorithms, and Applications, Proceedings of the SPIE, Vol. 3719, 1999.

6. Waltz, E., Information Warfare: Principles and Operations, Artech House, Boston, MA, 1998.

7. Bowman, C.L., The data fusion tree paradigm and its dual, Proceedings of the Seventh National Symposium on Sensor Fusion, 1994.

8. Steinberg, A.N. and C.L. Bowman, Development and application of data fusion engineering guidelines, Proceedings of the Tenth National Symposium on Sensor and Data Fusion, 1997.

9. Llinas, J. et al. Data fusion system engineering guidelines, Technical Report 96-11/4, USAF Space Warfare Center (SWC) Talon-Command project report, Vol. 2, 1997.

10. Bowman, C.L., Affordable information fusion via an open, layered, paradigm-based architecture, Proceedings of Ninth National Symposium on Sensor Fusion, Monterey, CA, March 1996.

11. Dasarathy, B.V., Sensor fusion potential exploitation-innovative architectures and illustrative applications, IEEE Proceedings, Vol. 85, No. 1, 1997.

12. Steinberg, A.N. and C.L. Bowman, Revisions to the JDL data fusion model, Chapter 2 of Handbook of Multisensor Data Fusion, D.L. Hall and J. Llinas (Eds.), CRC Press, London, 2001.

13. Devlin, K., Logic and Information, Press Syndicate of the University of Cambridge, Cambridge, 1991.

14. Hall, M.J., S.A. Hall, and T. Tate, Removing the HCI bottleneck: How the human computer interface (HCI) affects the performance of data fusion systems, Proceedings of the MSS National Symposium on Sensor Fusion, 2000.

15. Blasch, E.P. and S. Plano, Level 5: User refinement to aid the fusion process, in Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications 2003, B.V. Dasarathy (Ed.), Proceedings of the SPIE, Vol. 5099, 2003.

16. Hall, D.L. and McMullen S.A.H., Mathematical Techniques in Multisensor Data Fusion, Second Edition, Artech House, Boston, 2004.

17. Steinberg, A.N. and C.L. Bowman, Rethinking the JDL data fusion levels, Proceedings of MSS National Symposium on Sensor and Data Fusion, Laurel, Maryland, 2004.

18. Quine, W.V.O., Word and Object, MIT Press, Cambridge, MA, 1960.

19. Strawson, P.F., Individuals: An Essay in Descriptive Metaphysics, Methuen Press, London, 1959.

20. Steinberg, A.N. and G. Rogova, Situation and context in data fusion and natural language understanding, Proceedings of Eleventh International Conference on Information Fusion, Cologne, 2008.

21. Bedworth, M. and J. O’Brien, The Omnibus model: a new model of data fusion?, Proceedings of the Second International Conference on Information Fusion, Sunnyvale, CA, 1999.

22. Endsley, M.R., Toward a theory of situation awareness in dynamic systems, Human Factors, Vol. 37, No. 1, 1995.

23. Salerno, J.J., Where’s level 2/3 fusion—a look back over the past 10 years, Proceedings of the Tenth International Conference on Information Fusion, Quebec, Canada, 2007.

24. Lambert, D.A., Tradeoffs in the design of higher-level fusion systems, Proceedings of the Tenth International Conference on Information Fusion, Quebec, Canada, 2007.

* This distinction, in which the ontology derives from users’ interests, originates in the work of Quine18 (whose answer to What exists? was Everything) and Strawson.19

* Dasarathy’s original categories represent constructive, that is, data-driven, processes, in which organized information is extracted from relatively unorganized data. In the earlier edition of this handbook12 we defined additional processes that are analytic, or model-driven, such that organized information (a model) is analyzed to estimate lower-level data (features or measurements) as they relate to the model. Examples include predetection tracking (an FEI/DAO process), model-based feature extraction (DEI/FEO), and model-based classification (DEI/DAO). Dasarathy’s model is discussed further in Section 3.5.

* It is useful to distinguish between relations in the abstract (e.g., trust) and instantiated relations (e.g., Othello’s trust in Iago), which we will call relationships. There is no obvious term in English to mark the analogous distinction between abstract and instantiated attributes. In any case, it is convenient in some applications to consider these, respectively, as single-place relations or relationships.

* We use the word “paradigmatically” because some systems may not require data preparation in every fusion node, and data association may not be separable from estimation in some implementations (e.g., segmentation leads to less complexity but usually gives up some performance). Furthermore, not every multitarget state estimation process requires explicit data association at the target level; rather, data can be determined to be relevant at the situational (e.g., multitarget) level.

* In many or most data fusion implementations to date, the predominant flow is upward, with any contextual conditioning in a data fusion node provided by corresponding resource management nodes (e.g., during process refinement). Steinberg and Rogova20 provide a discussion of contextual exploitation in data fusion.

* As stated by Frank White, the first JDL panel chairman, “much of [the JDL model’s] value derives from the fact that identified fusion functions have been recognizable to human beings as a ‘model’ of functions they were performing in their own minds when organizing and fusing data and information. It is important to keep this ‘human centric’ sense of fusion functionality since it allows the model to bridge between the operational fusion community, the theoreticians and the system developers” (personal communication, quoted in Ref. 4).

See Section 3.3.1 for the definition of these acronyms.

* See Ref. 3, pp. 331ff for definitions of various types of engineering models.

See Chapter 22 for an in-depth discussion.

* Level 4 fusion and management processes have typically been performed off-line, during system design and evaluation. Their use in on-line self-assessment and reconfiguration has potential for improved efficiency in information exploitation and response.

* Generally speaking, Level 0–2 processes need only to project the fused state forward in time sufficient for the next data to be fused.
