18

 

 

Foundations of Situation and Threat Assessment

 

Alan N. Steinberg

CONTENTS

18.1   Scope and Definitions

18.1.1   Definition of Situation Assessment

18.1.2   Definition of Threat Assessment

18.1.3   Inference in Situation and Threat Assessment

18.1.3.1   Inferences of Relationships and Entity States

18.1.3.2   Inferring Situations

18.1.4   Issues in Situation and Threat Assessment

18.2   Models of Situation Assessment

18.2.1   Situation Assessment in the JDL Data Fusion Model

18.2.2   Endsley’s Model for Situation Awareness

18.2.3   Salerno’s Model for Higher-Level Fusion

18.2.4   Situation Theory and Logic

18.2.4.1   Classical (Deterministic) Situation Logic

18.2.4.2   Dealing with Uncertainty

18.2.5   State Transition Data Fusion Model

18.3   Ontology for Situation and Threat Assessment

18.3.1   Ontology Specification Languages

18.3.2   Ontologies for Situation Threat Assessment

18.3.2.1   Core Situation Awareness Ontology

18.3.2.2   Ontology of Threat and Vulnerability

18.4   A Model for Threat Assessment

18.4.1   Threat Model

18.4.2   Models of Human Response

18.5   System Engineering for Situation and Threat Assessment

18.5.1   Data Fusion for Situation and Threat Assessment

18.5.1.1   Data Fusion Node for Situation and Threat Assessment

18.5.1.2   Architecture Implications for Adaptive Situation Threat Assessment

18.5.2   Data Alignment in Situation and Threat Assessment

18.5.2.1   Semantic Registration: Semantics and Ontologies

18.5.2.2   Confidence Normalization

18.5.3   Data Association in Situation and Threat Assessment

18.5.4   State Estimation in Situation and Threat Assessment

18.5.4.1   Link Analysis Methods

18.5.4.2   Graph Matching Methods

18.5.4.3   Template Methods

18.5.4.4   Belief Networks

18.5.4.5   Compositional Methods

18.5.4.6   Algorithmic Techniques for Situation and Threat Assessment

18.5.5   Data Management

18.5.5.1   Hypothesis Structure Issues

18.5.5.2   Data Repository Structure Issues

18.6   Summary

References

 

 

18.1   Scope and Definitions

This chapter aims to explore the underlying principles of situation and threat assessment (STA) and discuss current approaches to automating these processes. As Lambert1 notes:

Machine based higher-level fusion is acknowledged in the dominant JDL model for data fusion through the inclusion of amorphous “level 2” and “level 3” modules …, but these higher levels are not yet supported by a standardised theoretical framework.

We hope to take some steps toward developing such a framework.

The past several years have seen much work and, we believe, some progress in these fields. There have been significant developments relating to situation assessment in techniques for characterizing, recognizing, analyzing, and projecting situations. These include developments in mathematical logic, ontology, cognitive science, knowledge discovery and knowledge management.

In contrast, level 3 data fusion—impact or threat assessment—is as yet a formally ill-defined and underdeveloped discipline, but one that is vitally important in today’s military and intelligence applications. In this chapter, we examine possible rigorous definitions for level 3 fusion and discuss ways to extend the concepts and techniques of situation and threat assessment.

Issues in developing practical systems for STA derive ultimately from uncertainty in available evidence and in available models of such relationships and situations. Techniques for mitigating these problems are discussed.

18.1.1   Definition of Situation Assessment

Data fusion in general involves the use of multiple data—often from multiple sources—to estimate or predict the state of some aspect of reality. Among data fusion problems are those concerned with estimation/prediction of the state of one or more individuals; i.e., of entities whose states are treated as if they were independent of the states of any other entity. This assumption underlies many formulations of target recognition and tracking problems. These are the province of “level 1” in the well-known Joint Directors of Laboratories (JDL) Data Fusion Model.2, 3 and 4

Other data fusion problems involve the use of context to infer entity states. Such contexts can include relationships among entities of interest and the structure of such relationships. The ability to exploit relational and situational contexts, of course, presupposes an ability to characterize and recognize relationships and situations.


FIGURE 18.1
A taxonomy for situation science.

A situation is a partial world state, in some sense of “partial.” Devlin (Ref. 5, p. 31, paraphrased) provides an attractive informal definition for “situation” as “a structured part of reality that is discriminated by some agent.”*

Such structure can be characterized in terms of the states of constituent entities and relationships among them.

As a working definition for Situation Assessment, we will use that of the JDL model revision (Chapter 3):

Situation assessment is the estimation and prediction of structures of parts of reality (i.e., of the aggregation of relationships among entities and their implications for the states of the related entities).4

We propose a rough taxonomy of functions related to situation assessment (Figure 18.1). For want of a comprehensive term, we coin situation science to span situation assessment and semantics. We define the subordinate functions as follows:

  1. Situation semantics. Defining situations in the abstract, for example, specifying the characteristics that define a situation as a political situation or a scary situation or, more specifically, as a tennis match or an epidemic. Two interrelated disciplines are situation ontology (the determination of the classes of entities involved in situations and their logical dependencies) and situation logic (the method for reasoning about such entities). In the terminology of Peircean semiotics, developing situation semantics involves abductive processes that determine general characteristics about types of situations and inductive processes that generalize these characteristics to establish an ontology of situations.6

  2. Situation assessment. Estimating and predicting the structures of parts of reality (i.e., of the aggregation of relationships among entities and their implications for the states of the related entities). Situation assessment functions include situation recognition, characterization, analysis, and projection.

  3. Situation recognition. Classifying situations (whether actual, real-world situations or hypothetical ones) as to situation type. Situation recognition—like signal and target recognition—involves primarily deductive processes.

  4. Situation characterization. Estimating salient features of situations (i.e., of relations among entities and their implications for the states of the related entities) on the basis of received data. This involves primarily abductive processes.

  5. Situation analysis. Estimating and predicting relations among entities and their implications for the states of the related entities.7,8

  6. Situation projection. Determining the type and salient features of situations on the basis of contingent data; for example, inferring future situations based on projection of present data or hypothetical data (“what-if?”) or inferring past or present situations based on hypothetical or counterfactual data. This involves primarily inductive processes.*

  7. Object assessment. Estimating the presence and characteristics of individual entities in a situation. The decomposition of object assessment is analogous to that of situation assessment, involving the recognition (e.g., target recognition and combat identification), characterization (e.g., location and activity characterization), analysis (further decomposition), and projection (e.g., tracking) of individual entities.

  8. Relationship assessment. Estimating the presence and characteristics of relationships among entities in a situation. The decomposition is analogous to that of situation assessment, involving the recognition, characterization, analysis, and projection of relationships.

Situation assessment as a formal discipline has only recently received serious attention in the data fusion community, which has hitherto found it easier to both build and market target identification and tracking technologies.

The newfound interest in situation assessment reflects changes in both the marketplace and the technology base. The change in the former is largely motivated by the transition from a focus on traditional military problems to asymmetrical threat problems. The change in the latter involves developments in diverse aspects of “situation science,” to include the fields of situation awareness (SAW),9,10 belief propagation,11,12 situation logic,1,5,13 and machine learning.14

18.1.2   Definition of Threat Assessment

In plain terms, threat assessment involves assessing situations to determine whether detrimental events are likely to occur. According to the JDL Data Fusion Model, threat assessment is a level 3 data fusion process. Indeed, the original model2 used “threat assessment” as the general name for level 3 fusion, indicative of the importance of that topic. In subsequent revisions,3,4,15,16 the concept of level 3 has been broadened to that of impact assessment.*

Level 3 fusion has been the subject of diverse interpretations. Its distinction from level 2 fusion has variously been cast in temporal, ontological, and epistemic terms. Salerno17 distinguishes situation assessment as being concerned with estimating current states of the world, whereas impact or threat assessment involves predicting future states.

In contrast, Lambert18,19 sees the distinction not so much as a temporal one but as an ontological one, concerning the types of entity being estimated. Situation assessment estimates world states, whereas impact or threat assessment estimates the utility of such states to some assumed agent (e.g., to “us”). In this interpretation, the products of levels 1–3 processing are

  • Level 1. Representations of objects

  • Level 2. Representations of relations among objects

  • Level 3. Representations of effects of relations among objects

Yet another interpretation, which we will incorporate into our definition, is neither temporal nor ontological, but epistemic: situation assessment estimates the states of situations directly, by inference from observations of those situations, whereas impact or threat assessment estimates situational states indirectly, from other observations.

That is to say, impact or threat assessment projects situational states from information external to these situations. The paradigm case is that of projection: predicting future situations from present and past information.

Projection from one estimated state to another—whether forward, backward, or lateral in time—is clearly an important topic for system development. So is the estimation of the utility of past, present, or future states.

Projection is, of course, an integral function in level 1 fusion in tracking the kinematic or other dynamic states of individual targets assumed to behave independently from one another. In levels 2 and 3, we are concerned with tracking the dynamic state of multiple targets when such independence cannot be assumed.§

The estimation and exploitation of relationships among multiple targets is the primary topic of situation assessment. It certainly plays a role in situational tracking and prediction as applicable to impact or threat assessment. In particular, impact assessment is generally conceived as a first-person process, that is, as supporting “our” planning by projecting situations to determine consequences as “our” course of action interacts with that of others. This is a common context for threat assessment:

  • How will the adversary respond if I continue my present course of action?

  • What will be the consequences of the resultant interaction?

Estimation of cost or utility is integral to concerns about consequences. The estimation problem is not appreciably different if we are considering the consequences to us or to somebody else, but most threat assessment discussion is naturally focused on self-protection.

Rather than insisting on a specific definition, we may list the following functions that are relevant to impact assessment:

  1. The estimation of contingent situations, that is, situations that are contingent upon events that are construed as undetermined. These can include

    1. Projection of future situations conditioned on the evolution of a present situation (as in target or situation tracking)

    2. Projection of past, present, or future situations conditioned on undetermined or hypothetical situations, to include counterfactual assessment, as performed in simulation, “what-if” exercises, historical fiction, or science fiction

  2. The estimation of the value of some objective cost function for actual or contingent situations

For breadth of applicability in the present discussion, we apply the term impact assessment to all aspects of (1) and (2), while reserving “threat assessment” for some aspects of (1), namely, projection of future situations, as well as factor (2) as it concerns evaluation of the cost of future situations. This rather tedious definition seems to accord generally with the way threat assessment is used in practice.

We need to know what is meant by a threat ontologically. Let us distinguish the following threat aspects:

  1. Threat acts (acts detrimental to “our” interests)

  2. Threat agents (agents who may perform threat acts at some time)

  3. Threatening agents (agents who may perform threat acts in the future)

  4. Threatening situations (situations containing threatening agents and actions)

  5. Threatened acts (threat acts that may be performed in the future)

  6. Threatened situations (consequences of such acts: “impacts”)

Threat assessment involves the prediction of (5) and (6). We perform threat assessment by assessing the capability and intent of potentially threatening agents (3), together with the opportunities for threat acts inherent in the present situation (4).*
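To make the interplay of these aspects concrete, the following sketch (in Python; all names, numbers, and the multiplicative independence assumption are ours, purely illustrative and not prescribed by this chapter) scores threatened acts (5) by combining the assessed capability and intent of potentially threatening agents (3) with the opportunity afforded by the present situation (4), and weights each act by the cost of the threatened situation (6):

```python
from dataclasses import dataclass

@dataclass
class ThreatAgentAssessment:
    """Assessed attributes of a potentially threatening agent (illustrative only)."""
    capability: float    # P(agent can carry out the act)
    intent: float        # P(agent intends to carry out the act)
    opportunity: float   # P(the present situation affords the act)

def threatened_act_probability(a: ThreatAgentAssessment) -> float:
    # Naive assumption: the three factors are treated as independent;
    # a fielded system would model their dependencies explicitly.
    return a.capability * a.intent * a.opportunity

def expected_impact(agents_and_costs) -> float:
    """Sum over agents of act probability times the cost of the threatened situation."""
    return sum(threatened_act_probability(a) * cost for a, cost in agents_and_costs)

# Hypothetical agents and costs of their threatened acts:
sam_crew = ThreatAgentAssessment(capability=0.9, intent=0.4, opportunity=0.7)
insurgent_cell = ThreatAgentAssessment(capability=0.5, intent=0.8, opportunity=0.3)
print(expected_impact([(sam_crew, 100.0), (insurgent_cell, 250.0)]))
```

A fielded threat assessment system would, of course, model the dependencies among capability, intent, and opportunity and would derive the cost term from an explicit impact assessment rather than from a fixed number.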

18.1.3   Inference in Situation and Threat Assessment

STA—whether implemented by people, automatic processes, or some combination thereof—requires the capability to make inferences of the following types:

  1. Inferring the presence and the states of entities on the basis of relationships in which they participate

  2. Using entity attributes and relationships to infer additional relationships

  3. Recognizing and characterizing extant situations

  4. Predicting undetermined (e.g., future) situations


FIGURE 18.2
An example of situation assessment inferences. (From Steinberg, A.N., Proceedings of the Seventh International Conference on Information Fusion, Stockholm, 2004. With permission.)

For example, consider the problem of characterizing an air-defense threat (Figure 18.2). Assume that a single radar signal has been detected. On the basis of a stored model of radar signatures, it is recognized that this is a fire control radar of a certain type and it is in a target acquisition mode (level 1 inference). Furthermore, per the recognition model, it is found that radars of this type are strongly associated with a particular type of mobile surface-to-air missile (SAM) battery. Therefore, the presence and activity state of a SAM battery of the given type is inferred (level 2 inference type [1]). Furthermore, the recognition model indicates the expected composition of such a SAM battery to include four missile launchers of a certain type, plus ancillary entities (power generators, command/control vehicle, personnel, and so on). On this basis, these entities in turn are inferred to be present in the vicinity of the observed radar set. Command, control, and communications relationships among these elements can also be inferred (level 2 inference type [2]). Thus, the presence of a SAM battery in target acquisition mode can be construed to be a situation in itself or part of a larger, partly known situation, for example, an enemy air-defense network (level 2 inference type [3]).

All of this can be inferred, with various degrees of certainty, from a single piece of data: the intercept of the radar signal. This can be done, as the example illustrates, on the basis of a recognition model of entities, their relationships, the indicators of entity states and relationships, and of expectations for related entities and of additional relationships.

As in level 1 inferencing (i.e., with one-place relations), we can write production rules based on logical, semantic, causal, societal, or material (etc.) relationships among entities and aggregates. Patterns of such inferences are provided in the following subsections.

18.1.3.1   Inferences of Relationships and Entity States
18.1.3.1.1   Patterns of Inference

Estimates of relationship and entity states are used to infer other relationship and entity states. Characteristic inference patterns include the following:

L1 → L1 Deduction:

$$p(R(x) \mid Q(x), s) = \frac{p(Q(x) \mid R(x), s)\, p(R(x) \mid s)}{p(Q(x) \mid s)} \qquad (18.1)$$

for example, estimation of the probability of a single target state R(x) from associated measurements Q(x) or prediction of state R(x) from prior state Q(x) in situation s.* We label this an “L1 → L1 Deduction,” as inference is from one level 1 (i.e., attributive) state estimate to another. Such inferences are deductive in that the inference is based on the likelihoods and priors shown.

Relationships between pairs of entities, or among n-tuples of entities, can similarly be inferred from other relationships among them:

L2 → L2 Deduction:*

$$p(R(x_1,\ldots,x_m) \mid Q(y_1,\ldots,y_n), s) = \frac{p(Q(y_1,\ldots,y_n) \mid R(x_1,\ldots,x_m), s)\, p(R(x_1,\ldots,x_m) \mid s)}{p(Q(y_1,\ldots,y_n) \mid s)} \qquad (18.2)$$

where ∀i(xi ∈ {y1, …, yn}).

Cross-level inference patterns can include the following:

L1 → L2 Deduction:

$$p(R(x_1,\ldots,x_m) \mid Q_1(x_1),\ldots,Q_m(x_m), s) = \frac{p(Q_1(x_1),\ldots,Q_m(x_m) \mid R(x_1,\ldots,x_m), s)\, p(R(x_1,\ldots,x_m) \mid s)}{p(Q_1(x_1),\ldots,Q_m(x_m) \mid s)} \qquad (18.3)$$

L1 → L2 Induction:

$$p(\exists x_2[R(x_1,x_2)] \mid Q(x_1), s) = \frac{p(Q(x_1) \mid \exists x_2[R(x_1,x_2)], s)\, p(\exists x_2[R(x_1,x_2)] \mid s)}{p(Q(x_1) \mid s)} \qquad (18.4)$$

L2 → L2 Induction:

$$p(\exists x_{m+1}[R(x_1,\ldots,x_m,x_{m+1})] \mid Q(x_1),\ldots,Q(x_m), s) = \frac{p(Q(x_1),\ldots,Q(x_m) \mid \exists x_{m+1}[R(x_1,\ldots,x_m,x_{m+1})], s)\, p(\exists x_{m+1}[R(x_1,\ldots,x_m,x_{m+1})] \mid s)}{p(Q(x_1),\ldots,Q(x_m) \mid s)} \qquad (18.5)$$

L2 → L1 Deduction:

$$p(R(x_i) \mid Q(x_1,\ldots,x_m), s) = \frac{p(Q(x_1,\ldots,x_m) \mid R(x_i), s)\, p(R(x_i) \mid s)}{p(Q(x_1,\ldots,x_m) \mid s)} \qquad (18.6)$$

where 1 ≤ i ≤ m.
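The computational pattern behind Equations 18.1 through 18.6 is the same discrete Bayes update; only the hypothesis space changes (attributive states, relationships, or existence claims). The following sketch (with hypothetical priors and likelihoods of our own devising) applies it to the radar-intercept example of Figure 18.2:

```python
def bayes_posterior(prior: dict, likelihood: dict, observation) -> dict:
    """Discrete Bayes update: p(H|obs,s) = p(obs|H,s) p(H|s) / p(obs|s).

    `prior` maps hypotheses to p(H|s); `likelihood` maps (hypothesis, observation)
    pairs to p(obs|H,s).  The same pattern underlies Equations 18.1 through 18.6;
    only the hypotheses change.
    """
    unnorm = {h: likelihood[(h, observation)] * p for h, p in prior.items()}
    evidence = sum(unnorm.values())  # p(obs|s)
    return {h: v / evidence for h, v in unnorm.items()}

# Hypothetical numbers for the Figure 18.2 example: does the intercepted emitter
# indicate a SAM battery of type A in target-acquisition mode?
prior = {"SAM_A_acquiring": 0.05, "SAM_A_other_mode": 0.10, "no_SAM_A": 0.85}
likelihood = {
    ("SAM_A_acquiring", "fire_control_radar_acq"): 0.90,
    ("SAM_A_other_mode", "fire_control_radar_acq"): 0.20,
    ("no_SAM_A", "fire_control_radar_acq"): 0.01,
}
print(bayes_posterior(prior, likelihood, "fire_control_radar_acq"))
```

The same function serves for the induction patterns of Equations 18.4 and 18.5 if the hypotheses are taken to be existence claims about related entities.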

18.1.3.1.2   Relational States

We find that reasoning about attributes, relations, and situations is facilitated if these concepts are “reified,” that is to say, attributes, relations, and situations are admitted as entities in the ontology. Attributes of entities are conveniently treated as one-place relationships.

By explicitly including attributes and relationships in the ontology, one is able to reason about such abstractions without the definitional baggage or extensional issues of reductionist formulations (e.g., in which uncertainties in the truth of a proposition involving multiple entities are represented as distributions of multitarget states).

We also find it useful to distinguish between an entity X and its state x (say, at a particular time), writing, for example, X = x. Here we use capital letters for entity variables, and small letters for corresponding state variables. Thus, we take “p(x)” as an abbreviation for “p(X = x)” and “p(xk)” for “p(X = x|k)”.*

A relation is a mapping from n-tuples of entities to a relational state r:

$$R^{(m)}: X_1 \times \cdots \times X_m \to \mathcal{R} \qquad (18.7)$$

Dependencies can be given by such expressions as

$$p_{xy}(r) = p(R(x,y) = r) \qquad (18.8)$$

$$p_{XY}(r) = \sum_x \sum_y p_{xy}(r)\, p(x,y) = \sum_x \sum_y p_{xy}(r)\, p(x \mid y)\, p(y) \qquad (18.9)$$

Equations 18.1, 18.2, 18.3, 18.4, 18.5 and 18.6 can be rewritten with this notation in the obvious manner.
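As a small illustration of Equation 18.9, the sketch below marginalizes a relational state over the uncertain states of the related entities; the relation names and all probabilities are invented for the example:

```python
def relation_state_distribution(p_r_given_xy, p_x_given_y, p_y, r_values, x_values, y_values):
    """Marginalize a relational state over the states of the related entities,
    as in Equation 18.9: p_XY(r) = sum_x sum_y p_xy(r) p(x|y) p(y).

    p_r_given_xy[(x, y)][r], p_x_given_y[(x, y)], and p_y[y] are assumed given.
    """
    return {
        r: sum(
            p_r_given_xy[(x, y)][r] * p_x_given_y[(x, y)] * p_y[y]
            for x in x_values for y in y_values
        )
        for r in r_values
    }

# Toy example: is entity X "subordinate_to" entity Y, given uncertainty in both
# entities' type states? (All numbers are illustrative.)
x_states, y_states = ["launcher", "decoy"], ["battery_C2", "truck"]
p_y = {"battery_C2": 0.6, "truck": 0.4}
p_x_given_y = {("launcher", "battery_C2"): 0.8, ("decoy", "battery_C2"): 0.2,
               ("launcher", "truck"): 0.3, ("decoy", "truck"): 0.7}
p_r_given_xy = {("launcher", "battery_C2"): {"subordinate_to": 0.9, "unrelated": 0.1},
                ("decoy", "battery_C2"): {"subordinate_to": 0.4, "unrelated": 0.6},
                ("launcher", "truck"): {"subordinate_to": 0.1, "unrelated": 0.9},
                ("decoy", "truck"): {"subordinate_to": 0.05, "unrelated": 0.95}}
print(relation_state_distribution(p_r_given_xy, p_x_given_y, p_y,
                                  ["subordinate_to", "unrelated"], x_states, y_states))
```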

18.1.3.2   Inferring Situations

Level 2 inferences have direct analogy to those at level 1. Situation recognition is a problem akin to target recognition. Situation tracking is akin to target tracking.1

18.1.3.2.1   Situation Recognition

A situation can imply and can be implied by the states and relationships of constituent entities. The disposition of players and the configuration of the playing field are indicators that the situation is a baseball game, a bullfight, a chess match, an infantry skirmish, an algebra class, or a ballet recital:

$$\exists x_1,\ldots,x_n\,[R^{(n)}(x_1,\ldots,x_n)] \rightarrow s \qquad (18.10)$$

Often situational inferences can be given in the form of Boolean combinations of quantified expressions:

$$\exists x_1,\ldots,x_n\,[R_1^{(m_1)}(y_1^1,\ldots,y_{m_1}^1)\ \&\ \cdots\ \&\ R_k^{(m_k)}(y_1^k,\ldots,y_{m_k}^k)], \quad y_i^j \in \{x_1,\ldots,x_n\},\ 1 \le j \le k,\ 1 \le i \le m_j \qquad (18.11)$$

To be sure, we are generally in short supply of situational ontologies that would enable us to write such rules. Issues of ontology science and ontology engineering will be discussed in Section 18.3.
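Even without a rich situational ontology, the form of Equation 18.11 can be operationalized as a match between a rule (a conjunction of relationship templates over variables) and a set of estimated relationships. The brute-force sketch below is illustrative only; practical systems use graph matching methods of the kind discussed in Section 18.5.4.2:

```python
from itertools import permutations

def situation_matches(entities, relationships, rule):
    """Check a rule of the Equation 18.11 form: does there exist an assignment of
    entities to the rule's variables under which every required relationship holds?

    `relationships` is a set of (relation_name, entity_1, ..., entity_n) tuples;
    `rule` is a list of (relation_name, var_1, ..., var_n) templates over variables.
    """
    variables = sorted({v for clause in rule for v in clause[1:]})
    for assignment in permutations(entities, len(variables)):
        binding = dict(zip(variables, assignment))
        if all((rel, *[binding[v] for v in vars_]) in relationships
               for rel, *vars_ in rule):
            return True
    return False

# Toy "SAM battery" situation type: a radar that cues a launcher commanded by a C2 node.
entities = {"radar1", "launcher1", "c2node1", "truck7"}
relationships = {("cues", "radar1", "launcher1"), ("commands", "c2node1", "launcher1")}
sam_battery_rule = [("cues", "X", "Y"), ("commands", "Z", "Y")]
print(situation_matches(entities, relationships, sam_battery_rule))  # True
```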

18.1.3.2.2   Situation Tracking

We may distinguish the following types of dynamic models:

  1. Independent target dynamics. Each target’s motion is assumed not to be affected by that of any other entity; so, multitarget prior probability density functions (pdfs) are simple products of the single target pdfs.

  2. Context-sensitive individual target dynamics. At each time-step, a particular target x responds to the current situation, that is, to the states of other entities. These may be dynamic but are assumed not to be affected by the state of x. An example is an aircraft flying to avoid a thunderstorm or a mountain.

  3. Interacting multiple targets. At each time-step, multiple entities respond to the current situation, which may be affected by the possibly dynamic state of other entities. An example is an aerial dogfight.*

    Analogous dynamic models are possible for tracking the evolution of situations.

  4. Independent component dynamics. The situation contains substructures (call them component situations) that are assumed to evolve independently of one another, such that the situational prior pdfs are simple products of the component situation pdfs. An example might be the independent evolution of wings in insects and birds.

  5. Context-sensitive individual component motion. At each time-step, a component situation x responds to the current situation, that is, to the states of other entities and component situations. These may be dynamic but are assumed not to be affected by the state of x. An example is an aircraft formation flying to avoid a thunderstorm or a mountain.

  6. Interacting multiple components. At each time-step, multiple component situations respond to the current situation, which may be affected by the possibly dynamic state of other entities. An example is the interplay of various components of the world economy.

Situation tracking, like target tracking, is far simpler where independent dynamics can be assumed: case (4) or (5). We expect that recent advances in general multitarget tracking will facilitate corresponding advances in general situation tracking.
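The practical difference between the independent and interacting cases shows up in the transition prior: in cases (1) and (4) the joint transition pdf factors into a product of per-component models, whereas in cases (3) and (6) each component's transition must be conditioned on the whole previous situation. A minimal sketch, with toy states and numbers of our own choosing:

```python
import math

def independent_transition_prob(component_models, prev_states, next_states):
    """Cases (1)/(4): the joint transition pdf is a product of per-component pdfs."""
    return math.prod(
        model(prev, nxt)
        for model, prev, nxt in zip(component_models, prev_states, next_states)
    )

def interacting_transition_prob(joint_model, prev_states, next_states):
    """Cases (3)/(6): each component's transition depends on the whole previous
    situation, so no per-component factorization of the dynamics applies."""
    return math.prod(
        joint_model(i, prev_states, nxt) for i, nxt in enumerate(next_states)
    )

# Usage with toy two-state components (illustrative numbers only):
stay = lambda prev, nxt: 0.8 if prev == nxt else 0.2
print(independent_transition_prob([stay, stay], ["calm", "calm"], ["calm", "alert"]))

# An interacting model: a component goes "alert" more readily if any other already is.
def joint_model(i, prev_all, nxt):
    base = 0.6 if "alert" in prev_all else 0.2
    return base if nxt == "alert" else 1.0 - base
print(interacting_transition_prob(joint_model, ["calm", "alert"], ["alert", "alert"]))
```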

18.1.4   Issues in Situation and Threat Assessment

The relative difficulty of the higher-level situation and impact/threat assessment problems can largely be attributed to the following three factors:

  1. Weak ontological constraints on relevant evidence. The types of evidence relevant to threat assessment problems can be diverse and can contribute to inferences in unexpected ways. This is why much of intelligence analysis—like detective work—is opportunistic, ad hoc, and difficult to codify in a systematic manner.

  2. Weak spatio-temporal constraints on relevant evidence. Evidence relevant to a level 1 estimation problem (e.g., target recognition or tracking) can be assumed to be contained within a small spatio-temporal volume, generally limited by kinematic or thermodynamic constraints. In contrast, many STA problems can involve evidence that is widespread in space and time, with no easily defined constraints.

  3. Weakly-modeled causality. Threat assessment often involves inference of human intent and behavior, both as individuals and as social groups. Such inference is basic not only to predicting future events (e.g., attack indications and warning) but also to understanding current or past activity. Needless to say, our models of human intent, planning, and execution are largely incomplete and fragile as compared with the physical models used in target recognition or tracking.21

Table 18.1 contrasts these situation/threat assessment factors, which are characteristic of most of today’s national security concerns, with the classical “level 1” fusion problems that dominated military concerns of the twentieth century.22

Clearly, the problem of recognizing and predicting terrorist attacks is much different from that of recognizing or predicting the disposition and activities of battlefield equipment. Often the key indicators of potential, imminent, or current threat situations are in the relationships among people and equipment that are not in themselves distinguishable from common, nonthreatening entities.

We may summarize the difficulties for STA in such an application as presented in Table 18.2.

TABLE 18.1
Yesterday and Today's Data Fusion Problems

Problem characteristics | Twentieth Century (Level 1 fusion: target recognition and tracking) | Twenty-First Century (Level 2/3 fusion: situation/threat recognition and tracking)

(a) Problem dimensionality | Low: few relevant data types and relations | High: many relevant data types and relations

(b) Problem span | Spatially localized: hypothesis generation via validation gate | Spatially distributed: hypothesis generation via situation/behavior model

(c) Required “target” models | Physical: signature, kinematics models | Human/group behavior: for example, coordination/collaboration, perception, value, influence models

Source:   Steinberg, A.N., Threat Assessment Issues and Methods, Tutorial presented at Sensor Fusion Conference, Marcus-Evans, Washington, DC, December 2006. With permission.

TABLE 18.2
Data Fusion Issues for Situation Threat Assessment


 

 

18.2   Models of Situation Assessment

As noted in the preceding section, there are several approaches to represent and reason about situations. These include

  • The JDL Data Fusion Model, as discussed in Refs 2–4, 15, and 16 and in Chapter 3

  • Endsley’s model of the cognitive aspects of SAW in human beings9,10

  • The situation theory and logic of Barwise and Perry,13 and Devlin5

  • The fusion for SAW initiative of the Australian Defence Science and Technology Organisation (DSTO), led by Lambert1,18,19,23

These are discussed in turn in the following subsections.

18.2.1   Situation Assessment in the JDL Data Fusion Model

The definition of the higher levels of data fusion in the JDL model and related excursions has been subject to diverse interpretations.

In early versions of the model,2 the distinction among levels 1, 2, and 3 is based on functionality, that is, types of processes performed:

  • Level 1. Target tracking and identification

  • Level 2. Understanding situations

  • Level 3. Predicting and warning about threats

In Ref. 3, a revision was suggested in which the levels were distinguished on the basis of ontology, that is, the class of targets of estimation:

  • Level 1. Estimating the states of individual objects

  • Level 2. Estimating the states of ensembles of objects

  • Level 3. Estimating impacts (i.e., costs) of situations

More recently,4,15,16 we have proposed the split on the basis of epistemology, that is, the relationship between an agent (person or system) and the world state that the agent is estimating:

  • Level 1 involves estimation of entities considered as individuals

  • Levels 2 and 3 involve estimation of entities considered as structures (termed situations)

  • Level 3 is distinguished from levels 1 and 2 on the basis of relationship between the estimated states and the information used to estimate that state. Specifically, the distinction is between states that are observed and states that are projected. By projection, we mean inferences from one set of observed objects and activities to another set that occurs elsewhere in space–time. The paradigm case is in projecting current information to estimate future states

There is no reason to think that there is a single best model for data fusion. As in all scientific disciplines, taxonomies and other abstract representations are sought in data fusion to characterize the problem space and to facilitate understanding of common local problems and of common appropriate solutions within that space.

The fact that functional, ontological, and epistemological criteria all tend to partition the practical problem space in roughly the same way suggests that there may be some ideal categorization of the data fusion problem space. We doubt it.

In the present recommended version of the JDL model (Chapter 3), situation assessment—level 2 in the model—is defined as:

the estimation and prediction of structures of parts of reality (i.e., of the aggregation of relationships among entities and their implications for the states of the related entities).

This definition follows directly from

  1. The definition of situation as a structured part of reality, due to Devlin5

  2. The dictionary definition of structure as the aggregate of elements of an entity in their relationships to each other24

  3. The JDL model definition of data fusion as combining multiple data to estimate or predict some aspect of the world3

In particular, a given entity—whether a signal, physical object, aggregate or structure—can often be viewed either (1) as an individual whose attributes, characteristics, and behaviors are of interest or (2) as an assemblage of components whose interrelations are of interest. From the former point of view, discrete physical objects (the “targets” of level 1 data fusion) are components of a situation. From the latter point of view, the targets are themselves situations, that is, contexts for the analysis of component elements and their relationships.

According to this definition, then, situation assessment involves the following functions:

  • Inferring relationships among (and within) entities

  • Using inferred relationships to infer entity attributes

This includes inferences concerning attributes and relationships (a) of entities that are elements of a situation (e.g., refining the estimate of the existence and state variables of an entity on the basis of its inferred relations) and (b) of the entities that are themselves situations (e.g., in recognizing a situation to be of a certain type). According to this definition, situation assessment also includes inferring and exploiting relationships among situations, for example, predicting a future situation on the basis of estimates of current and historical situations.

18.2.2   Endsley’s Model for Situation Awareness

Endsley10 defines SAW as the perception of the elements of the environment, the comprehension of their meaning (understanding), and the projection (prediction) of their status in order to enable decision superiority.

Endsley’s SAW model is depicted in Figure 18.3, showing the three levels of perception, comprehension, and projection. These are construed as aspects or “levels” of mental representation:

  • Level 1. Perception provides information about the presence, characteristics, and activities of elements in the environment.

  • Level 2. Comprehension encompasses the combination, interpretation, storage, and retention of information, yielding an organized representation of the current situation by determining the significance of objects and events.

  • Level 3. Projection involves forecasting future events.


FIGURE 18.3
Endsley’s situation awareness model. (From Endsley, M.R., Situation Awareness Analysis and Measurement, Lawrence Erlbaum Associates Inc., Mahwah, NJ, 2000. With permission.)

McGuinness and Foy25 add a fourth level, which they call resolution. Resolution provides awareness of the best course of action to follow to achieve a desired outcome to the situation.

The JDL model—including the “situation assessment” level—can be compared and contrasted with this SAW model. First of all, it must be noted that, in these models, assessment is a process; awareness is a product of a process. Specifically, awareness is a mental state that can be attained by a process of assessment or analysis.

In an automated situation assessment system for which we are reluctant to ascribe mental states, the product is an informational state. It is not an awareness of an actual situation, but a representation of one. The performance of a situation assessment system, then, is measured by the fidelity of the representation. The effectiveness of a situation assessment system is measured by the marginal utility of responses made on the basis of such representations.

In Table 18.3, we compare in a very notional sense the levels of the JDL Data Fusion Model with those of Endsley’s SAW model as augmented by McGuinness and Foy. Both models are subject to various interpretations. We chose interpretations that emphasize the similarities between the models. This is done not only to increase harmony in the world but also to suggest that there is a commonality in concepts and issues that both the fusion and SAW communities are struggling to understand.

We have added a column showing the elements of our taxonomy of situation science. We show JDL levels 0 and 1 as possible beneficiaries of situation analysis processing. Situational analysis can generate or refine estimates of the presence and states of signals/features and objects on the basis of context.

Lambert23 similarly argues that situation awareness as defined by Endsley is equivalent to levels 1–3 of the JDL model as performed by people. This is illustrated in Figure 18.4, which suggests the need for an integrated approach to the assessment of objects, situations, and impacts as performed by machines and people. Responsibility for performing these functions will no doubt evolve as technology improves. In many applications, functional allocation is best performed dynamically on the basis of available resources.

TABLE 18.3
Comparison Between Joint Directors of Laboratories and Situation Awareness “Levels”



FIGURE 18.4
Relationship between machine and human fusion processes in achieving situation awareness. (Adapted from Lambert, D.A., Proceedings of the Tenth International Conference on Information Fusion, Quebec, 2007. Copyright Commonwealth of Australia. With permission.)

18.2.3   Salerno’s Model for Higher-Level Fusion

Salerno et al. at the U.S. Air Force Research Laboratory Information Fusion Branch have developed and refined models for data fusion, with an emphasis on higher-level fusion.17,26,27 These developments incorporate ideas from the JDL model, from Endsley’s SAW model, and from other activities.

We understand Salerno as distinguishing data fusion levels on the basis of functionality (the types of assessment performed4) rather than on the basis of ontology (types of objects estimated). Thus, level 1 can be the province not merely of physical objects, but of concepts, events (e.g., a telephone call), groups, and so on.

Situation assessment (level 2) in this system is the assessment of a situation at a given point in time.

In this scheme, level 1 fusion involves such functions as

  • Existence and size analysis (How many?)

  • Identity analysis (What/who?)

  • Kinematics analysis (Where/when?)

Level 2 involves such functions as

  • Behavior analysis (What is the object doing?)

  • Activity level analysis (Build up, draw down?)

  • Intent analysis (Why?)

  • Salience analysis (What makes it important?)

  • Capability/capacity analysis (What could they/it do?)*

Following Endsley, the next step to be performed is projection (which Salerno specifically distinguishes from the JDL level 2/3 processes). Thus, level 3 functions forecast or project the current situation and threat into the future.

In other words, the difference between level 2 and 3 in this scheme is a matter of situation time relative to the assessment time.

18.2.4   Situation Theory and Logic

Situation theory and logic were developed by Barwise and Perry13 and refined by Devlin.5 We summarize some of the key concepts.

Situation theory in effect provides a formalism for Lambert’s concept that “situation assessment presents an understanding of the world in terms of situations expressed as sets of statements about the world.”1 The prime construct of situation logic, the infon, is a piece of information, which can be expressed as a statement.*

Note that an infon is not the same as a statement, proposition, or assertion. A statement is a product of human activity, whereas an infon is information that may apply to the world or to some specific situation (including factual and counterfactual situations), independent of people’s assertions. Infons that so apply in a given situation are facts about that situation. For example, in Hamlet it is a fact that Hamlet killed Polonius.

A situation is defined as a set of such pieces of information, expressed by a corresponding set of statements. A real situation is a subset of reality: a real situation is one in which all pieces of information are facts in the real world and the corresponding statements are true in the real world.

As in Section 18.1.3.1.2, situation theory “reifies” attributes, relations, and situations. In other words, they are admitted as entities in the ontology. Attributes of entities are treated as one-place relationships.

18.2.4.1   Classical (Deterministic) Situation Logic

Situation logic, as developed by Devlin,5 employs a second-order predicate calculus, related to the combinatory logic of Curry and Feys.29

The latter represent first-order propositions R(x1, …, xn), involving an m-place predicate “R,” m ≥ n, as second-order propositions Applies(r, x1, …, xn), employing a single second-order predicate “Applies.” This abstraction amounts to the reification of relations r corresponding intensionally to predicates R. There being but one second-order predicate, we can often abbreviate “Applies(r, x1, …, xn)” as “(r, x1, …, xn).”

This is the basis of Devlin’s notion of an infon. Under this formulation,5 an infon has the form

σ = (r, x1, …, xn, h, k, p)   (18.12)

for an m-place relation r, entities xi, location h, and time k, 1 ≤ i ≤ n ≤ m.

The term p is a polarity, or applicability value, p ∈ {0, 1}. We may read “(r, x1, …, xn, h, k, 1)” as “relation r applies to the n-tuple of entities 〈x1, …, xn〉 at location h and time k.” Similarly, “(r, x1, …, xn, h, k, 0)” can be read “relation r doesn’t apply ….”

It will be convenient to make a distinction between relations, construed to be abstract (e.g., marriage), and relationships, which are anchored to sets of referents within a situational context (e.g., Antony’s marriage with Cleopatra or Hamlet’s marriage with Ophelia). As the latter indicates, such contexts are not necessarily factual, even within an assumed fictional context.

A relationship is an instantiation of a relation. We formally define a relationship as an n-tuple 〈r(m), x1, …, xn〉, n ≤ m, comprising a relation and one or more entities so related. An infon is in effect a relationship concatenated with a polarity and spatio-temporal constraints.

An important feature of this formulation is the fact that infons can be arguments of other infons. This allows the expression of propositional attitudes, fictions, hypothetical and other embedded contexts. For example, consider an infon of the form σ1 = (believes, x, σ2, h, k, 1) to be interpreted as “At place h and time k, x believes that σ2,” where σ2 is another infon. Similar nested infons can be created with such propositional attitudinal predicates as R = perceives, hypothesizes, suspects, doubts that, wonders whether, and so on, or such compounds as “At place h and time k, x1 asked x2 whether x3 reported to x4 that x5 believes that σ2.” In this way, the representational scheme of situation logic can be used to characterize and rationalize counterfactual and hypothetical situations, their relations to one another, and their relations to reality.

The second-order formulation also permits inferencing concerning propositions other than assertions. For example, a first-order interrogative “Is it true that σ?” can be restated as the assertion “I ask whether σ” or, in infon notation, “(ask whether, I, σ, h2, k2, p).” Similarly, for other modalities, replacing the relation “ask whether” with “demand that,” “believe that,” “fear that,” “pretend that,” and so on.*

Barwise and Perry13 and Devlin et al.4 broaden the expressive power of the second-order predicate calculus by a further admission into the working ontology. The new entities are our friends, situations. An operator “⊨” expresses the notion of contextual applicability, so that “s ⊨ σ” can be read as “situation s supports σ” or “σ is true in situation s.” This extension allows consistent representation of factual, conditional, hypothetical, and estimated information.

Devlin defines situations as sets of infons. As with infons, a situation may be anchored and factual in various larger contexts. For example, the situation of Hamlet’s involvement with Ophelia’s family is anchored and factual within the context of the situation that is Shakespeare’s Hamlet, but not necessarily outside that context in the world at large.

Like infons, situations may be nested recursively to characterize, for example, my beliefs about your beliefs or my data fusion system’s estimate of the product of your data fusion system. One can use such constructs to reason about such perceptual states as one’s adversary’s state of knowledge and his belief about present and future world state, for example, our estimate of the enemy’s estimate of the outcome of his available courses of action. In this way, the scheme supports an operational net assessment framework allowing reasoning simultaneously from multiple perspectives.31

This formulation supports our epistemological construal of the JDL Data Fusion Model (Section 18.2.1). Depending on the way information is used, virtually any entity may be treated either as an individual or as a situation. For example, an automobile may be discussed and reasoned about as a single individual or as an assembly of constituent parts. The differentiation of parts is also subject to choices: we may disassemble the automobile into a handful of major assemblies (engine, frame, body, etc.) or into a large number of widgets or into a huge assembly of atoms.

We distinguish, in a natural way, between real situations (e.g., the Battle of Midway) and abstract situations or situation types (e.g., naval battle). A real situation is a set of facts. An abstract situation is equivalent to a set of statements, some or all of which may be false.*

Real-world situations can be classified in the same way as we classify real-world individual entities. Indeed, any entity is—by definition—a kind of situation.
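A minimal data-structure sketch of the constructs just described (the encoding is our own; Devlin's notation prescribes the content of an infon, not this representation): infons carry a relation, arguments that may themselves be infons, spatio-temporal anchors, and a polarity, and a situation supports (⊨) the infons it contains.

```python
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Infon:
    """sigma = (r, x1, ..., xn, h, k, p) per Equation 18.12; polarity p in {0, 1}.
    Arguments may themselves be Infons, giving nested propositional attitudes."""
    relation: str
    args: Tuple[Union[str, "Infon"], ...]
    location: str
    time: str
    polarity: int  # 1 = applies, 0 = does not apply

class Situation:
    """A situation is a set of infons; `supports` plays the role of s |= sigma."""
    def __init__(self, infons):
        self.infons = set(infons)

    def supports(self, sigma: Infon) -> bool:
        return sigma in self.infons

# A nested infon (illustrative): "At Elsinore, in act III, Hamlet believes that
# Claudius killed his father."
killing = Infon("kills", ("Claudius", "King Hamlet"), "Elsinore", "before the play", 1)
belief = Infon("believes", ("Hamlet", killing), "Elsinore", "act III", 1)
hamlet_play = Situation({killing, belief})
print(hamlet_play.supports(belief))  # True
```

Nesting the killing infon inside the belief infon is what lets the same machinery represent propositional attitudes and fictional or hypothetical contexts.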

18.2.4.2   Dealing with Uncertainty

Classical situation logic provides no specific mechanism to represent and process uncertainty. STA requires means to deal with uncertainties both in received data (epistemic uncertainty) and in the models we use to interpret such data in relationship and situation recognition (ontological uncertainty).

It is crucial to distinguish among the types of uncertainty that may be involved in ontological uncertainty.

In particular, a distinction needs to be drawn between

  • Model fidelity. A predictive model may be uncertain in the sense that the underlying processes to be modeled are not well understood and, therefore, the model might not faithfully represent or predict the intended phenomena. Models of human behavior suffer from this sort of inadequacy. Probabilistic and evidential methods apply to represent such types of uncertainty.37

  • Model precision. The semantics of a model might not be clearly defined, though the subject matter may be well understood. Wittgenstein’s well-known problem concerning the difficulty of defining the concept of game is a case in point. Fuzzy logic was developed to handle such types of uncertainty.38

As suggested, probabilistic, evidential, and fuzzy logic, as well as various ad hoc methods, can be and have been employed in representing epistemic and ontological uncertainty. These methods inherit all the virtues and drawbacks of their respective formalisms: the theoretical grounding and verifiability of probability theory, with the attendant difficulty of obtaining the required statistics, and the easily obtained and robust but ad hoc uncertainty assignments of evidential or fuzzy systems.

18.2.4.2.1   Fuzzy Approaches

Some situations can be crisply defined. An example is a chess match, of which the constituent entities and their relevant attributes and relationships are explicitly bounded. Other situations may have fuzzy boundaries. Fuzziness is present both in abstract situation types (e.g., the concepts economic recession or naval battle) and in concrete situations such as we encounter in the real world (e.g., the 1930s, the Battle of Midway). In such cases, it is impossible to define specific boundary conditions. Both abstract and concrete situations can be characterized via fuzzy membership functions on infon applicability: μ(σ) ∈ [0, 1] for σ ∈ Σ, where Σ = {σ | s ⊨ σ}.

Fuzzy methods have been applied to the problem of semantic mapping, providing a common representational framework for combining information across information sources and user communities.39

Research in the use of fuzzy logic techniques for STA has been conducted under the sponsorship of the Fusion Technology Branch of the U.S. Air Force Research Laboratory. The goal was to recognize enemy courses of action and to infer intent and objectives. Both input measurements and production rules are “fuzzified” to capture the attendant uncertainty or semantic variability. The resulting fuzzy relationships are “defuzzified” to support discrete decisions.40
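The following sketch illustrates the fuzzify-combine-defuzzify pattern described above; the membership functions, thresholds, and the single rule are invented for illustration and are not taken from Ref. 40:

```python
def ramp_up(x, a, b):
    """0 below a, 1 above b, linear in between."""
    return max(0.0, min(1.0, (x - a) / (b - a)))

# "Fuzzified" indicators of a hostile course of action (thresholds are illustrative).
def mu_massing(vehicle_count):
    return ramp_up(vehicle_count, 10, 30)       # degree to which forces are "massing"

def mu_near_border(distance_km):
    return 1.0 - ramp_up(distance_km, 20, 50)   # degree to which forces are "near the border"

# One fuzzy rule: hostile_intent = min(massing, near_border); "defuzzify" by thresholding.
def hostile_intent(vehicle_count, distance_km, decision_threshold=0.6):
    degree = min(mu_massing(vehicle_count), mu_near_border(distance_km))
    return degree, degree >= decision_threshold

print(hostile_intent(vehicle_count=25, distance_km=15))   # (0.75, True)
```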

18.2.4.2.2   Probabilistic Approaches

Classical situation logic, as described in the preceding section, is a monotonic logic: one in which assertions are not subject to revision as evidence is accrued. It will therefore need to be adapted to nonmonotonic reasoning as required to deal with partial and uncertain evidence. Such “evidence” can take the form of noisy or biased empirical data, whether from electronic sensors or from human observations and inferences. Evidence in the form of a priori models—for example, physical or behavior models of targets, or organizational or behavior models of social networks—can also be incomplete, noisy, and biased.

Belief networks (including, but not limited to, Bayesian belief networks) provide mechanisms to reason about relationships and the structure of situations as means for inferring random states of situations.

In Ref. 15 the author proposes modifying the infon of “classical” (i.e., monotonic) situation logic, by replacing the deterministic polarity term p with a probability term. Such a probabilistic infon can be used to express uncertainty in sensor measurement reports, track reports, prior models, and fused situation estimates alike. Each such probabilistic infon σ = (r, x1, …, xn, h, k, p) is a second-order expression of a relationship, stated in terms of the probability p that a relation r applies to an n-tuple of entities 〈x1, …, xn〉 at some place and time.

The expressive power of the infon (i.e., that of a second-order predicate calculus) can be employed to enhance the expressive capacity of a probabilistic ontology.
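Continuing the data-structure sketch of Section 18.2.4.1, the modification needed for a probabilistic infon is small: the polarity field of Equation 18.12 becomes a probability in [0, 1]. Again, the encoding is ours; the chapter prescribes the content, not this representation.

```python
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class ProbabilisticInfon:
    """As proposed in Ref. 15: the deterministic polarity p in {0, 1} of Equation 18.12
    is replaced by a probability p in [0, 1] that relation r applies to the arguments."""
    relation: str
    args: Tuple[Union[str, "ProbabilisticInfon"], ...]
    location: str
    time: str
    probability: float

# A track report expressed as a probabilistic infon (all particulars hypothetical):
report = ProbabilisticInfon("cues", ("radar1", "launcher1"), "grid 31U", "t=1200Z", 0.82)
```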

There has been active development of probabilistic ontologies to address the issues of semantic consistency and mapping.41, 42 and 43 Traditional ontological formalisms make no provision for uncertainty in representations.* This limits the ability to ensure both robustness and semantic consistency of developed ontologies. In addition, as noted in Ref. 43, besides the issues of incomplete and uncertain modeling of various aspects of the world, there is a problem of semantic mapping between heterogeneous ontologies that have been developed for specific subdomains of the problem.

A formalism for probabilistic ontology, named PR-OWL, has been developed by Costa et al.41,42 They define probabilistic ontology as follows.


FIGURE 18.5
The STDF model. (From Lambert, D., Inform. Fusion, 2007. Copyright Commonwealth of Australia. With permission.)

A probabilistic ontology is an explicit, formal knowledge representation that expresses knowledge about a domain of application. This includes

  • Types of entities that exist in the domain

  • Properties of those entities

  • Relationships among entities

  • Processes and events that happen with those entities

  • Statistical regularities that characterize the domain

  • Inconclusive, ambiguous, incomplete, unreliable, and dissonant knowledge related to entities of the domain

  • Uncertainty about all these forms of knowledge

where the term entity refers to any concept (real or fictitious, concrete or abstract) that can be described and reasoned about within the domain of application (Ref. 41 quoted in Ref. 43).

18.2.5   State Transition Data Fusion Model

Lambert has developed a model—called the state transition data fusion (STDF) model—intended to comprehend object, situation, and impact assessment.1,18 At each of these levels, different aspects of the world are predicted, observed (i.e., estimated), and explained as shown in Figure 18.5. The specific representations of state and state transition models for objects, situations, and impacts are given in Table 18.4. As shown,

  • In object assessment, the target for estimation is a state vector u(k), which the fusion system “explains” by û(k|k).

  • In situation assessment, the target for estimation is a state of affairs Σ(k), which the fusion system “explains” by a set of statements Σ̂(k|k).*

  • In impact assessment, the target for estimation is a scenario S(k) = {Σ(n) | n ∈ Time & n ≤ ∂(k)} (i.e., a set of situations within some look-ahead time ∂(k)), which the fusion system “explains” by Ŝ(k).

TABLE 18.4
Lambert’s STDF Representation of Object, Situation, and Impact Assessment

Object assessment û̄(k)
  State s(k): Each s(k) is a state vector u(k), explained by û(k|k).
  Transition {s(t) | t ∈ Time & t ≤ k}: Each such set is an object ū(k) = {u(t) | t ∈ Time & t ≤ k}, explained by û̄(k) = {û(t|t) | t ∈ Time & t ≤ k}.

Situation assessment Σ̂̄(k)
  State s(k): Each s(k) is a state of affairs Σ(k), explained by Σ̂(k|k).
  Transition {s(t) | t ∈ Time & t ≤ k}: Each such set is a situation Σ̄(k) = {Σ(t) | t ∈ Time & t ≤ k}, explained by Σ̂̄(k) = {Σ̂(t|t) | t ∈ Time & t ≤ k}.

Impact assessment Ŝ̄(k)
  State s(k): Each s(k) is a scenario state S(k) = {Σ(n) | n ∈ Time & n ≤ ∂(k)}, explained by Ŝ(k) = {Σ̂(n|k) | n ∈ Time & n ≤ ∂(k)}.
  Transition {s(t) | t ∈ Time & t ≤ k}: Each such set is a scenario S̄(k) = {S(t) | t ∈ Time & t ≤ k} = {{Σ(n) | n ∈ Time & n ≤ ∂(t)} | t ∈ Time & t ≤ k}, explained by Ŝ̄(k) = {{Σ̂(n|t) | n ∈ Time & n ≤ ∂(t)} | t ∈ Time & t ≤ k}.

Source:   Lambert, D., Inform. Fusion, 2007. Copyright Commonwealth of Australia. With permission.

This model has the significant advantages of being formally well grounded and providing unified representation and processing models across JDL levels 1–3.*

Incidental concerns include (a) the apparent lack of representation of relevance in the models for situation and impact assessment; (b) the somewhat artificial distinction between situations and scenarios; and (c) the characterization of “intent” in terms of intended effects rather than intended actions.

  1. This representation of situations (and, thereby, of scenarios) places temporal constraints on constituent facts, but not on any other factors affecting relevance. Therefore, it would seem that situations are configurations of the entire universe.

  2. Defining scenarios in terms of sets of situations seems to draw a distinction that weakens the representational power of situation logic. As discussed in Section 18.1.3.2.2, situation tracking is a large, variegated problem that has barely begun to be addressed. We wish to be able to track and predict the evolution of dynamic situations (i.e., of scenarios) of various magnitudes and complexities. An important feature of situation logic is the generality of the definition of “situation,” allowing common representation at multiple levels of granularity. Thus, one situation s1 can be an element of another s2, such that for all infons σ, s2 ⊨ σ whenever s1 ⊨ σ. Representational power is enhanced by embedding one situation in another, for example, by propositional attitude, as in “s2 ⊨ (x believes that σ)” or by subset. A scenario, as commonly understood, is one form of dynamic situation. It may be decomposed in terms of several types of embedded situations (e.g., temporal stages, facets, and local events). As an example, the enormous, dynamic, multifaceted situation that was World War II included such interrelated component situations as the conquest and liberation of France, the German rail system, the Soviet economy, the disposition of the U.S. fleet before the Pearl Harbor attack, the battle of Alamein, Roosevelt’s death, and so on.*

  3. Lambert indicates that impact prediction (i.e., the prediction of the evolution of a scenario) involves intent, capability, and awareness, at least where purposeful agents are involved. He characterizes intent in terms of an agent’s intended effects.1 We suggest a more precise exegesis in terms of intended actions and their hoped-for effects. Intended effects are represented by statements about the world at large; intended actions are represented by statements specifically about the agent. Consider a statement

Brutus intended to restore the Republic by killing Caesar.   (18.13)

We parse this, ignoring for a moment the times and locations of various events, as follows:

Brutus intends

(Brutus kills Caesar)

because

(Brutus hopes

((Brutus kills Caesar)

will cause

(Republic is restored))).

We can use infon notation to capture the implicit propositional attitude, rendering Equation 18.13 as

(Because, (Intends, Brutus, α, h1, k1, 1), (Hopes, Brutus, (Cause, α, β, h2, k2, p2), p1))   (18.14)

where α = (Kill, Brutus, Caesar, In the Senate House, Ides of March, 1); β = (Restored, Republic, h2, k2, 1); h1, k1 = place and time the intention was held (k1 ≤ Ides of March); h2, k2 = place and time the effect was hoped to occur (k2 ≥ Ides of March); p1 = probability that Brutus maintains this hope; and p2 = probability that Brutus assigns to the given effect.§

 

 

18.3   Ontology for Situation and Threat Assessment

The type of recognition model that has been generally applied is that of an ontology. In its current use in artificial intelligence (AI) and information fusion, an ontology is a formal explicit description of concepts in a domain of discourse.48 In other words, an ontology is a representation of semantic structure. In AI, ontologies have been used for knowledge representation (“knowledge engineering”) by expressing concepts and their relationships formally, for example, by means of mathematical logic.49

Genesereth and Nilsson50 define an ontology as an explicit specification of a conceptualization: the objects, concepts, and other entities that are assumed to exist in some area of interest and the relationships that hold among them. Similarly, to Kokar,51 an ontology is composed of sets of concepts of object classes, their features, attributes, and interrelations.

It will be noted that this definition of an ontology is essentially the same as that given in Section 18.1.1 for an abstract situation. Indeed, an ontology is a sort of abstract situation. The problem is in clearly defining what sort of abstract situation it is. In classical metaphysics, Ontology (with a capital “O” and sans article) represented the essential order of things.52 “Accidental” facts of the world—those not inferable from the concepts involved—are not captured in the classical ontology. As we shall see, many such accidental facts are essential to most practical inferencing and, therefore, need to be captured in the knowledge structures that we now call ontologies.

Many recent examples of ontologies rely heavily on such relations as “is_a” (as in “a tree is a plant”) and “part_of” (“a leaf is part of a tree”).

However, such a concept of ontology is too narrow to meet the needs of situation or threat assessment in the real world. We wish to be able to recognize and predict relationships well beyond those that are implicit in the semantics of a language, that is, those that are true by definition. Relationships to be inferred and exploited in situation assessment can include

  • Logical/semantic relationships, for example, definitional, analytic, taxonomic, and mereologic (“part_of”) relationships

  • Physical relationships, to include causal relationships (“electrolysis of water yields hydrogen and oxygen” or “the short circuit caused the fire”) and spatio-temporal relationships (e.g., “Lake Titicaca is in Bolivia” or “the moon is in Aquarius”)

  • Functional relationships, to include structural or organizational roles (“the treasurer is responsible for receiving and disbursing funds for the organization” or “Max is responsible for the company’s failure”)

  • Conventional relationships, such as ownerships, legal and other societal conventions (“in the United Kingdom, one drives on the left side of the road”)

  • Cognitive relationships, such as sensing, perceiving, believing, fearing (“w speculates that x believes that y fears an attack by z”)

Therefore, we will want an expanded concept of ontology to encompass all manner of expectations concerning a universe of discourse. Of course, the certainty that we can ascribe to expectations varies enormously. We expect no uncertainty in mathematical relations, but considerable uncertainty in inferring conventional relationships. Depending on the source of the uncertainty—for example, lack of model fidelity or precision—the class and relation definitions in our ontology will need to be tagged with the appropriate confidence metrics, such as probability or fuzzy membership functions.*
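As a concrete illustration of such tagging, the sketch below (Python; the relation names, category labels, and numeric confidence values are purely illustrative) attaches to each relation definition in a small ontology fragment both the kind of relationship it expresses and an uncertainty annotation, so that definitional relations carry certainty while physical, conventional, and cognitive relations carry weaker confidence.

```python
from dataclasses import dataclass

@dataclass
class RelationType:
    name: str
    category: str      # logical, physical, functional, conventional, or cognitive
    certainty: float   # 1.0 for definitional relations; < 1.0 for defeasible ones

# Illustrative ontology fragment with confidence-tagged relation definitions
ONTOLOGY = [
    RelationType("is_a",            "logical",      1.0),   # true by definition
    RelationType("part_of",         "logical",      1.0),
    RelationType("causes",          "physical",     0.9),   # model fidelity limits confidence
    RelationType("responsible_for", "functional",   0.8),
    RelationType("drives_on_left",  "conventional", 0.95),  # societal convention; exceptions exist
    RelationType("believes",        "cognitive",    0.6),   # weakly observable
]
```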

Images

FIGURE 18.6
Representative relations of interest in a tactical military situation. (From Steinberg, A.N., Proceedings of the Eighth International Conference on Information Fusion, Philadelphia, 2005. With permission.)

It is evident that relationships of many of the types discussed are generally not directly observable, but rather must be inferred from the observed attributes of entities and their situational context. Indirect inference of this sort is by no means unique to situation assessment. Inference of relationships and entity states in given relationships is also essential to model-based target recognition, in which the spatial and spectral relationships among components are inferred and these, in turn, are used to infer the state of the constituted entity.

The realm of relations and relationships is vast. For example (illustrated in Figure 18.6), relationships of interest in tactical military applications can include

  • Relationships among objects in an adversary’s force structure (deployment, kinetic interaction, organization role/subordination, communication/coordination, type similarity, etc.)

  • Relationships among friendly sensor and response elements (spatio-temporal alignment, measurement calibration, confidence, communication/coordination, etc.)

  • Relationships between sensors and sensed entities (intervisibility, assignment/cueing, sensing, data association, countermeasures, etc.)

  • Relationships between components of opposing force structures (targeting, jamming, engaging, capturing, etc.)

  • Relationships between entities of interest and contextual entities (terrain features, cultural features, solar and atmospheric effects, weapon launch, impact points, etc.)

Some additional points to be made about relations:

  1. A simple relation can have complex hidden structure: “The monster is eating” implies “The monster is eating something.”

  2. One relation can imply additional relations between situation components: “The monster ate the bus” implies things about the relative size of the monster and the bus, and about their relative positions before, during, and after the meal.

  3. The order of a relation is not necessarily fixed: “x bought a car”—ostensibly the expression of a 2-place relationship—implies a 5-place relationship.

    ∃w∃y∃z∃t[Car(w) & x buys w from y for amount of money z at time t & t < Now]*

Various ontology specification languages are discussed in Section 18.3.1. Some ontologies that have been defined specifically for STA are introduced in Section 18.3.2.

18.3.1   Ontology Specification Languages

Kokar51 compares ontology specification languages. He finds that UML is excellent for visualization and human processing, but not for machine processing or for information exchange among machines or agents. Just the converse is true of XML, which includes no semantics. Some semantics are provided by RDF (resource description framework) and more so by RDFS (RDF schema). Significantly greater expressiveness is provided by OWL.

OWL (web ontology language) allows representation of data, objects, and their relationships as annotations in terms of an ontology. Its description logic (OWL DL) was initially defined to be decidable for the semantic web.
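To make the difference in expressiveness concrete, the fragment below is a sketch using the Python rdflib library (the example namespace and classes are hypothetical); it shows how RDFS and OWL vocabulary attach machine-interpretable semantics, such as subclass relations and typed properties, that plain XML tagging does not carry, and how the asserted structure can then be queried.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

EX = Namespace("http://example.org/sta#")   # hypothetical namespace
g = Graph()

# Class hierarchy: a semantic assertion ("a tree is a plant"), not mere markup
g.add((EX.Plant, RDF.type, OWL.Class))
g.add((EX.Tree,  RDF.type, OWL.Class))
g.add((EX.Leaf,  RDF.type, OWL.Class))
g.add((EX.Tree,  RDFS.subClassOf, EX.Plant))

# A typed (binary) object property with domain and range constraints
g.add((EX.partOf, RDF.type, OWL.ObjectProperty))
g.add((EX.partOf, RDFS.domain, EX.Leaf))
g.add((EX.partOf, RDFS.range,  EX.Tree))

# Query the asserted semantics with SPARQL
for row in g.query("SELECT ?c ?p WHERE { ?c rdfs:subClassOf ?p }",
                   initNs={"rdfs": RDFS}):
    print(row.c, "is a subclass of", row.p)
```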

Object query language (OQL) is a specification defined by the object data management group (ODMG) to support an object-oriented, SQL-like query language.52 Kokar51 defines a general-purpose OWL reasoner to answer queries formulated in OQL. The trace of the reasoner provides explanations of findings to users. Multiple ontologies and annotations can be combined using the colimit of category theory.

As discussed in Section 18.2.4.2.2, an OWL variant, PR-OWL, provides an upper ontology as a framework for probabilistic ontologies.41,42,53 PR-OWL employs a logic, multi-entity Bayesian networks (MEBN), that integrates classical first-order logic with probability theory.53 It is claimed43 that PR-OWL is expressive enough to represent even the most complex probabilistic models and flexible enough to be used by diverse Bayesian probabilistic tools (e.g., Netica, Hugin, Quiddity*Suite, JavaBayes, etc.) based on different probabilistic methods (e.g., probabilistic relational models, Bayesian networks, etc.).

A significant limitation of OWL and its variants is in allowing only first-order binary relations within the formal models. This has been found to be a significant constraint in representing the complex relational structures encountered in STA.44,51,54
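A common workaround for this restriction is to reify an n-ary relation as an individual linked to each participant by binary role relations. The sketch below (Python, plain triples; the event and role names are hypothetical) recasts the five-place purchase relation used as an example earlier in this section in that form.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Triple:
    subject: str
    predicate: str
    obj: str

def reify_purchase(buyer, item, seller, price, time) -> List[Triple]:
    """Recast the 5-place relation buys(x, w, y, z, t) as binary role links
    hanging off a reified purchase-event individual."""
    event = f"purchase_{buyer}_{time}"          # the reified individual
    return [
        Triple(event, "rdf:type",  "PurchaseEvent"),
        Triple(event, "hasBuyer",  buyer),
        Triple(event, "hasItem",   item),
        Triple(event, "hasSeller", seller),
        Triple(event, "hasPrice",  str(price)),
        Triple(event, "atTime",    time),
    ]

triples = reify_purchase("x", "car_1", "y", 20000, "t0")
```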

Protégé 2000 is an open-source tool developed at Stanford University, which provides “domain-specific knowledge acquisition systems that application experts can use to enter and browse the content knowledge of electronic knowledge bases.”55 Protégé 2000 provides basic ontology construction and ontology editing tools.44

Protégé 2000, however, lacks the expressive capacity needed to represent complex relationships and higher-level relationships such as the ones occurring in STA. Protégé 2000 is based on a rudimentary description logic so that it is capable of modeling only hierarchical relations that are unary or binary (“is-a,” “part_of,” and the like).

CASL is the Common Algebraic Specification Language approved by IFIP WG1.3.56,57 CASL allows the expression of full first-order logic, plus predicates, partial and total functions, and subsorting. There are additional features for structuring, or “specification in the large.” A sublanguage, CASL-DL, has been defined to correspond to OWL DL, allowing mappings and embeddings between the two representations.58

SNePS (Semantic Networks Processing System) is a propositional semantic network, developed by the Computer Science and Engineering Department of the State University of New York, Buffalo, NY, under the leadership of Stuart Shapiro. The original goal in building SNePS was to produce a network that would be able to represent virtually any concept that is expressible in natural language. SNePS involves both a logic-based and a network- (graph-) based representation. Any associative-network representation can be duplicated or emulated in SNePS networks. This dual nature of SNePS allows logical inference, graph matching, and path-matching to be combined into a powerful and flexible representation formalism and reasoning system. A great diversity of path-based inference rules are defined.59,60

IDEF5 (Integrated Definition 5) is a sophisticated standard for building and maintaining ontologies. It was developed to meet the increasing needs for greater expressive power and to serve the use of model-driven methods and of more sophisticated AI methods in business and engineering.

The schematic language of IDEF5 is similar to that of IDEF1 and IDEF1X, which were designed for modeling business activities. IDEF5 adds to this schematic language the expressive power found in IDEF3 to include representation of temporal dynamics, full first-order logic, and sufficient higher-order capabilities to enable it to represent ontologies.

IDEF5’s expressive power allows representation of n-place first-order relations and 2-place second-order relations. As such, it is able to capture the fundamental constructs which situation logic expresses via infons: higher-order and nested relations, relational similarity, type/token distinctions, propositional attitudes and modalities, and embedded contexts.61*

18.3.2   Ontologies for Situation Threat Assessment

There have been several efforts to define taxonomies or ontologies of relations, situations, and threats.

18.3.2.1   Core Situation Awareness Ontology (Matheus et al.)

Matheus et al.63 have developed a formal ontology of SAW, drawing on concepts from Endsley, the JDL model, and the situation theory of Barwise et al. Their core SAW ontology is depicted in Figure 18.7 as a UML diagram. As they describe it,

Images

FIGURE 18.7
Core SA ontology. (From Matheus, C.J., Kokar, M., and Baclawski, K., Proceedings of the Ninth International Conference on Information Fusion, Florence, Italy, 2003. With permission.)

The Situation class … defines a situation to be a collection of Goals, SituationObjects, and Relations. SituationObjects are entities in a situation—both physical and abstract—that can have characteristics (i.e., Attributes) and can participate in relationships. Attributes define values of specific object characteristics, such as weight or color. A PhysicalObject is a special type of SituationObject that necessarily has the attributes of Volume, Position, and Velocity. Relations define the relationships between ordered sets of SituationObjects. For example, inRangeOf(X,Y) might be a Relation representing the circumstance when one PhysicalObject, X, is within firing range of a second PhysicalObject, Y.63

The Core SA Ontology is intended to define the theory of the world that describes the classes of objects and their relationships that are relevant to SAW.
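The quoted class structure maps directly onto a simple object model. The following sketch (Python; any attribute or value not named in the quotation is an illustrative assumption) mirrors the core classes: SituationObjects carrying Attributes, PhysicalObjects with the mandatory Volume, Position, and Velocity, Relations over ordered sets of objects, and a Situation aggregating goals, objects, and relations.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Attribute:
    name: str        # e.g., "weight" or "color"
    value: object

@dataclass
class SituationObject:
    name: str
    attributes: List[Attribute] = field(default_factory=list)

@dataclass
class PhysicalObject(SituationObject):
    # A PhysicalObject necessarily has Volume, Position, and Velocity attributes
    volume: float = 0.0
    position: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    velocity: Tuple[float, float, float] = (0.0, 0.0, 0.0)

@dataclass
class Relation:
    name: str                               # e.g., "inRangeOf"
    arguments: Tuple[SituationObject, ...]  # ordered participants

@dataclass
class Situation:
    goals: List[str]
    objects: List[SituationObject]
    relations: List[Relation]

# Illustrative use of the inRangeOf example from the quotation
x = PhysicalObject("shooter")
y = PhysicalObject("target")
in_range = Relation("inRangeOf", (x, y))
situation = Situation(goals=["assess threat"], objects=[x, y], relations=[in_range])
```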

18.3.2.2   Ontology of Threat and Vulnerability (Little and Rogova)

Little and Rogova44 have developed an ontology for threat assessment. This has been applied to threat assessment problems in such domains as natural disaster response64,65 and urban/asymmetric warfare.66

This ontology builds on the basic formal ontology developed by Smith and Grenon for SNePS.59,67–69

A key SNePS distinction that influences Little and Rogova’s threat ontology is that between continuants and determinants.

Continuants are described as “spatial” items: entities that occupy space and endure through time, though subject to change. Examples are Napoleon’s army, Heraclitus’ river, Aristotle’s boat, and (possibly) Schrödinger’s cat.

Determinants are described as “temporal” entities. Examples are a military exercise, the firing of a weapon or a thought process. The SNePS ontology distinguishes relation-types that can occur between spatial (SNAP) and temporal (SPAN) items.67–69

This threat ontology considers capability, opportunity, and intent to be the principal factors in predicting (intentional) actions.

  • Capability involves an agent’s physical means to perform an act.

  • Opportunity involves spatio-temporal relationships between the agent and the situation elements to be acted upon.*

  • Intent involves the will to perform an act.

Consider the following uses:

  1. x is able to jump 4 m (Capability)

  2. x is standing near the ditch and the ditch is 4 m wide (Opportunity)

  3. = (1) + (2): x is prepared to jump over the ditch (Capability + Opportunity)

  4. x intends to jump over the ditch (Intent)

    Note that intent is defined in terms of the will to act, not the desire for a particular outcome. Therefore, if

  5. x wants to get to the other side of the ditch (Desire, not Intent) then x may have the intent:

  6. x intends to jump over the ditch to get to the other side of the ditch

    Also, as mentioned, awareness in such an example is a condition of intent, rather than of capability or opportunity: x must be aware of the ditch if he intends to jump over it.

    More precisely, it is hope for one’s capability and opportunity, rather than one’s awareness, which is a condition for intent. So

  7. x’s hoping that (3) holds is a necessary condition for (4)

    Regarding (7), it is, of course, not necessary that x believes or hopes that the ditch is 4 m wide or that he is able to jump 4 m; only that he hopes that he is able to jump the width of the ditch.

    Furthermore, one may intend to perform a series of acts with dependencies among them. Thus, (6) might be amenable to parsing as (8) or (9):

  8. x intends to jump over the ditch once he improves his jumping skills (acquires the capability) and walks to the ditch (acquires the opportunity)

  9. x intends to get to the other side of the ditch (by some means, whether identified or not, for example, by jumping)

Little and Rogova make a further key distinction between potential and viable threats. A potential threat is one for which at least one of capability, opportunity, and intent is lacking. One could argue that a viable threat—defined as one for which all three conditions are met—is not only viable but also inevitable.
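Under these definitions the distinction reduces to a simple predicate over the three factors. A minimal sketch follows (Python; reducing graded evidence of capability, opportunity, and intent to boolean flags is an assumption made here for brevity, not part of Little and Rogova's ontology).

```python
from dataclasses import dataclass

@dataclass
class ThreatHypothesis:
    capability: bool
    opportunity: bool
    intent: bool

def classify(h: ThreatHypothesis) -> str:
    """Viable: capability, opportunity, and intent are all present.
    Potential: at least one of the three is lacking."""
    return "viable" if (h.capability and h.opportunity and h.intent) else "potential"

print(classify(ThreatHypothesis(capability=True, opportunity=True, intent=False)))  # potential
```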

 

 

18.4   A Model for Threat Assessment

Impact assessment—and, therefore, threat assessment—involves all the functions of situation assessment, but applied to projected situations. That is to say, impact assessment concerns situations inferred on the basis of observations other than of the situations themselves, such as

  • Inferences concerning potential future situations

  • Inferences concerning counterfactual present or past situations (e.g., what might have happened if Napoleon had won at Waterloo?)

  • Inferences concerning historical situations on the basis of evidence extrinsic to the situation (e.g., surmising about characteristics of the original Indo-European language on the basis of evidence from derivative languages)

Threat assessment—a class of impact assessment—involves assessing projected future situations to determine the likelihood and cost of detrimental events.

Recall our threat taxonomy: threatening agents and situations; threatened agents, events and situations. Given our focus on intentional detrimental events, we will use the term “attack” broadly to refer to such events.*

Threat assessment includes the following functions:

  • Threat event prediction. Determining likely threat events (“threatened events” or “attacks”): who, what, where, when, why, how

  • Indications and warning. Recognition that an attack is imminent or under way

  • Threat entity detection and characterization. Determining the identities, attributes, composition, location/track, activity, capability, and intent of agents and other entities involved in a threatened or current attack

  • Attack (or threat event) assessment.

    • Responsible parties (country, organization, individuals, etc.) and attack roles

    • Intended target(s) of attack

    • Intended effect (e.g., physical, political, economic, psychological effects)

    • Threat capability (e.g., weapon and delivery system characteristics)

    • Force composition, coordination, and tactics (goal and plan decomposition)

  • Consequence assessment. Estimation and prediction of event outcome states (threatened situations) and their cost/utility to the responsible parties, to affected parties, or to the system user, which can include both intended and unintended consequences

To support all of these processes, it will be necessary to address the following issues, as noted in Ref. 70:

  1. Representation of domain knowledge in a way that facilitates retrieval and processing of that knowledge.

  2. Means of obtaining the required knowledge.

  3. Representation and propagation of uncertainty.

    Issues (1)–(3) are, of course, pervasive across STA problems. Threat assessment involves two aspects that raise additional issues. These are the aspects of human intention and event prediction.

  4. For the many threats of concern that result from human intent, threat event prediction involves the recognition and projection of adversarial plans. This can involve methods for spatial and temporal reasoning; it certainly involves methods for recognizing human goals, priorities, and perceptions.

  5. Whether the concern is for human or other threats, threat assessment involves prediction of events and of situations, requiring models of causal factors and of constraints in projecting from known to unknown situational states.70

18.4.1   Threat Model

Threats are modeled in terms of potential and actualized relationships between threatening entities (which may or may not be people or human agencies) and threatened entities, or “targets” (which often include people or their possessions).

We define a threatening situation to be a situation in which threat events are likely. A threat event is defined as an event that has adverse consequences for an agent of concern; paradigmatically, for “us.” Threat events may be intentional or unintentional (e.g., potential natural disasters or human error). We call an intentional threat event an attack.

In the present discussion, we are not directly concerned with unintentional threats such as from natural disasters. Rather, we are interested in characterizing, predicting, and recognizing situations in which a willful agent intends to do harm to something or somebody. This, of course, does not assume that intended threat actions necessarily occur as intended or at all, or that they yield the intended consequences. Furthermore, we need to recognize threat assessment as an aspect of the more general problems of assessing intentionality and of characterizing, recognizing, and predicting intentional acts in general.

Looney and Liang71 formulate threat assessment as a process that applies the situation assessment result to

  1. Predict intentions of the enemy

  2. Determine all possible actions of the enemy

  3. Recognize all opportunities for hostile action by the enemy

  4. Identify all major threats to friendly force resources

  5. Predict the most likely actions of the enemy

  6. Determine the vulnerabilities of friendly forces to all likely enemy actions using match-up of forces, weapon types, and preparedness

  7. Determine favorable offensive actions of friendly forces with favorable match-ups to thwart all likely actions of the enemy with least cost

  8. Determine optimal targeting for all offensive actions*

Images

FIGURE 18.8
Elements of a threat situation hypothesis. (From Steinberg, A.N., Llinas, J., Bisantz, A., Stoneking, C., and Morizio, N., Proceedings of the MSS National Symposium on Sensor and Data Fusion, McLean, VA, 2007. With permission.)

We base our threat model on the ontology of Little and Rogova,44 with intent redefined as per footnote * in Section 18.3.2.2. Indicators of threat situations relate to the capability, opportunity, and—where purposeful agents are involved—intent of agents to carry out such actions against various targets.

Figure 18.8 shows the elements of a representative threat(ening) situation hypothesis and the principal inference paths.

The three principal elements of the threat situation—an entity’s capability, opportunity, and intent to affect one or more entities—are each decomposed into subelements. The threat assessment process (a) generates, evaluates, and selects hypotheses concerning entities’ capability, intent, and opportunity to carry out various kinds of attack and (b) provides indications, warnings, and characterizations of attacks that occur.

An entity’s capability to carry out a specific type of threat activity depends on its ability to design, acquire, and deploy or deliver the resources (e.g., weapon system) used in that activity. The hypothesis generation process searches for indicators of such capability and generates a list of feasible threat types.

Intent is inferred by means of goal decomposition, based on decision models for individual agents and for groups of agents.

The postulated threat type constrains a threat entity’s opportunity to carry out an attack against particular targets (e.g., to deploy or deliver a type of weapon as needed to be effective against the target). Other constraints are determined by the target’s accessibility and vulnerability to various attack types, and by the threat entity’s assessment of opportunities and outcomes.

Capability, opportunity, and intent all figure in inferring attack–target pairing. Some authors have pursued threat assessment on the basis of capability and opportunity alone.72 These factors, of course, are generally more observable and verifiable than is intent. However, capability and opportunity alone are not sufficient conditions for an intentional action.

The threat assessment process evaluates threat situation hypotheses on the basis of likelihood, in terms of the threat entity’s expected perceived net pay-off. It should be noted that this is an estimate, not of the actual outcome of a postulated attack, but of the threat entity’s estimate of that outcome.
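Because the evaluation turns on the threat entity's own view of the outcome, the score is an expectation taken over the probabilities and utilities that we attribute to that entity, not over our own outcome model. A minimal sketch of such a computation follows (Python; the outcome set, probabilities, and utility values are hypothetical placeholders).

```python
# Each outcome carries the probability and net utility that we estimate the
# threat entity itself assigns to it (its perceived likelihood and pay-off).
perceived_outcomes = [
    {"outcome": "attack succeeds",           "p": 0.5, "utility": 10.0},
    {"outcome": "attack partially succeeds", "p": 0.3, "utility":  3.0},
    {"outcome": "attack fails, assets lost", "p": 0.2, "utility": -8.0},
]

def expected_perceived_payoff(outcomes) -> float:
    """Expected net pay-off as the threat entity is estimated to perceive it."""
    return sum(o["p"] * o["utility"] for o in outcomes)

score = expected_perceived_payoff(perceived_outcomes)   # 0.5*10 + 0.3*3 - 0.2*8 = 4.3
```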

The system’s ontology provides a basis for inferencing—capturing useful relationships among entities that can be recognized and refined by the threat assessment process. An ontology can capture a diversity of inference bases, to include logical, semantic, physical, and conventional contingencies (e.g., societal, legal, or customary contingencies). The represented relationships can be conditional or probabilistic.

This ontology will permit an inferencing engine to generate, evaluate, and select hypotheses at various fusion levels. At data fusion levels 0, 1, and 2, these are hypotheses concerning signals (or features); individuals; and relationships and situations, respectively. A level 3 (threat or impact assessment) hypothesis concerns potential situations, projecting interactions among entities to assess their outcomes. In Lambert’s1 terms, these are scenario hypotheses. In intentional threat assessment we are, of course, concerned with interactions in which detrimental outcomes are intended.

18.4.2   Models of Human Response

As noted in Ref. 18, tools are needed to assess multiple behavior models for hostile forces during threat assessment. For example, game theoretic analysis has been suggested for application to level 3 fusion. Combinatorial game theory can offer a technique to select combinations in polynomial time out of a large number of choices.

Problems in anticipating and influencing adversarial behaviors result from, at a minimum, the following three factors:

  1. Observability. Human psychological states are not directly observed but must be inferred from physical indicators, often on the basis of inferred physical and informational states.

  2. Complexity. The causal factors that determine psychological states are numerous, diverse, and interrelated in ways that are themselves numerous and diverse.

  3. Model fidelity. These causal factors are not well understood, certainly in comparison with the mature physical models that allow us to recognize and predict target types and kinematics.

Nonetheless, however complex and hidden human intentions and behaviors may be, they are not random.

In Refs 46 and 73, we propose methods for representing, recognizing, and predicting such intentions and behaviors. We describe an agent response model, developed as a predictive model of human actions. This model is based on a well-known methodology in cognitive psychology for characterizing and predicting human behavior: Rasmussen’s skill-rule-knowledge (SRK) model.74 We have formulated the SRK model in terms of a formal probabilistic model for characterizing such behavior.73

Images

FIGURE 18.9
Components of an action process and constituent error factor.

Actions of a responsive agent—to include actions of information reporting—are decomposed by means of four process models:

  1. Measurement model. p(Z_Mw^k | X, w); probabilities that agent w will generate measurement sets Z in world states X

  2. Inference model. p(X̂_Iw^k | Z_Mw^k); probabilities that agent w will generate world state estimates X̂, given measurement sets Z

  3. Planning model. p(Â_Pw^k | X̂_Iw^k); probabilities that agent w will generate action plans Â, given world state estimate X̂

  4. Control model. p(A_Cw^k | Â_Pw^k); probabilities that agent w will generate actions A, given action plans Â

These process models can generally be assumed to be conditionally independent, arising from mutually independent system components.* Thus, they can be modeled serially as in Figure 18.9, which shows representative SRK performance factors for each component. As shown, this Measurement-Inference-Planning-Control (MIPC) model can be viewed as a formal interpretation of Boyd’s well-known “OODA-Loop.”

Among the four constituent models, measurement and control are familiar territory in estimation and control theory. Inference and planning are, respectively, the provinces of information fusion and automatic planning. Inference corresponds to Endsley’s perception, comprehension, and projection. Planning involves the generation, evaluation, and selection of action plans (as discussed in Chapter 3 and Refs 4 and 16). Such actions can include external activity (e.g., control of movement, of sensors and effectors) and control of internal processes (e.g., of the processing flow) (see Ref. 17).

Inference and planning are “higher-level” functions in the old dualistic sense that human perception, comprehension, projection, and planning are considered “mental” processes, whereas “measurement” (sensation) and motor control are “physical” processes. The former are certainly less directly observable, less well understood, and, therefore, less amenable to predictive modeling.

By decomposing actions into constituent elements, and these into random and systematic components, we can isolate those components that are difficult to model. Having done that, we are able—at the very least—to assess the sensitivity of our behavior model to such factors.

We often have reliable predictive models for measurement and control in automated systems. Our ability to predict inference and planning performance in automated systems is relatively primitive, to say nothing of our ability to predictively model human cognition or behavior. As shown, each of the four models can involve both random and bias error components. However, note that only planning can involve intentional error components.

On the assumption of conditional independence, we can write expressions for biases in inference, planning and control analogous to the familiar one for measurement:

Z_M^k = h_M^k(β_M, X_t^k) + V_M^k    (18.15)

X̂_I^k = h_I^k(β_I, Z_M^k) + V_I^k    (18.16)

Â_P^k = h_P^k(β_P, X̂_I^k) + V_P^k    (18.17)

A_C^k = h_C^k(β_C, Â_P^k) + V_C^k    (18.18)

Terms are defined as follows (the agent index has been suppressed): Z_M^k is the measurement set as of time k; X̂_I^k is the estimated world state; β_M, β_I, β_P, β_C are systematic bias terms for measurements, inference, planning, and control, respectively; V_M^k, V_I^k, V_P^k, V_C^k are noise components of measurement, inference, planning, and control, respectively (generally non-Gaussian).

Intentionality enters specifically as a factor in the bias term β_P of Equation 18.17. This term, then, is the focus of the assessment of an agent’s intentionally directed patterns of behavior.
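Equations 18.15 through 18.18 can be exercised directly as a serial simulation. The sketch below (Python/NumPy; the linear transforms, bias values, and noise scales are illustrative assumptions, since the actual transforms h are expected to be highly nonlinear) propagates a world state through the measurement, inference, planning, and control stages, each with its own systematic bias and random noise; intentional distortion would enter only through the planning bias β_P.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage(h, beta, x, noise_scale):
    """One MIPC stage: output = h(beta, input) + noise (Equations 18.15-18.18)."""
    return h(beta, x) + rng.normal(0.0, noise_scale, size=np.shape(x))

# Illustrative linear transforms; the real h's would be highly nonlinear.
h_M = lambda b, x: x + b          # measurement: observe world state with bias
h_I = lambda b, z: 0.9 * z + b    # inference: estimate world state from measurements
h_P = lambda b, x: 1.1 * x + b    # planning: form action plan from estimated state
h_C = lambda b, a: a + b          # control: execute plan, with execution bias

X_true = np.array([10.0, 5.0])                              # world state X_t^k
Z_M    = stage(h_M, beta=0.2,  x=X_true, noise_scale=0.5)   # Eq. 18.15
X_hat  = stage(h_I, beta=-0.1, x=Z_M,    noise_scale=0.3)   # Eq. 18.16
A_plan = stage(h_P, beta=1.5,  x=X_hat,  noise_scale=0.2)   # Eq. 18.17: beta_P carries intent
A_ctrl = stage(h_C, beta=0.0,  x=A_plan, noise_scale=0.4)   # Eq. 18.18
```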

The transforms h, particularly those for inference and planning, can be expected to be highly nonlinear. Equation 18.18 is the control model dual of Equation 18.15, with explicit decomposition into random and systematic error components. Equation 18.17 is a planning model, which—as argued by Bowman75—has its dual in data association:

Ŷ_A^k = h_A^k(β_A, Z_M^k) + V_A^k    (18.19)

for association hypotheses Ŷ_A^k ∈ 2^(Z_M^k) (i.e., hypotheses within the power set of measurements).

 

 

18.5   System Engineering for Situation and Threat Assessment

We now consider issues in the design of techniques for STA. In Section 18.5.1, we discuss the implications for data fusion to perform STA—first in terms of functions within a processing node, then in terms of an architecture that includes such nodes. The very important topics of semantic registration and confidence normalization are treated in Section 18.5.2, whereas Sections 18.5.3 and 18.5.4 discuss issues and techniques for data association and state estimation, respectively. Data management issues, with their implications for processing efficiency, are addressed in Section 18.5.5.

18.5.1   Data Fusion for Situation and Threat Assessment

18.5.1.1   Data Fusion Node for Situation and Threat Assessment

Figure 18.10 depicts the data fusion node model that we have been using to represent all data fusion processes, regardless of fusion level (e.g., in Chapters 3 and 22). The figure is annotated to indicate the specific issues involved in data fusion for situation and threat assessment.

As seen in the figure, the basic data fusion functions are data alignment (or common referencing), data association, and state estimation.

  • Data alignment for situation assessment involves preparing available data for data fusion, that is, for associating and combining multiple data.

  • Data association for situation assessment involves deciding how data relate to one another: generating, evaluating, and selecting hypotheses concerning relationships and relational structures.

  • State estimation for situation assessment involves the functions defined in Section 18.1.3; viz.,

    • Inferring relationships among (and within) entities, to include (a) entities that are elements of a situation and (b) entities that are themselves treated as situations (i.e., to static and dynamic “structures,” to include everything we commonly refer to as situations, scenarios, aggregates, events, and complex objects)

    • Using inferred relationships to infer attributes of entities in situations and of situations themselves (situation recognition or characterization)

Images

FIGURE 18.10
Generic data fusion node for situation and threat assessment.

Data fusion for impact assessment (and, therefore, for threat assessment) involves all of these functions; state estimation concerns projected, rather than observed, situations. Threat assessment involves projecting and estimating the cost of future detrimental situations.

18.5.1.2   Architecture Implications for Adaptive Situation Threat Assessment

As noted, the STA process will need to generate, evaluate, and select hypotheses concerning present and projected situations.

We anticipate that processes for situation assessment—and a fortiori, for threat assessment—will need to be adaptive and opportunistic. Thus, we expect that a processing architecture constructed on the basis of modules per Figure 18.5 will require tight coupling between inferencing and data acquisition (collection management and data mining) to build, evaluate, and refine situation and threat hypotheses. The simplest form of information exploitation involves open-loop data collection, processing, and inferencing, illustrated in Figure 18.11a. Here, “Blue” collects and exploits whatever information his sensors/sources provide. Naturally, entities in the sensed environment may react to their environment, as illustrated by “Red’s” OODA-loop.

Of course, the only aspects of Red’s behavior that are potentially observable by Blue are the consequences of Red’s physical acts. Red’s observation, orientation, and decision processes* are hidden, except insofar as they involve physical activity (e.g., electromagnetic emissions of active sensors). Estimation and prediction activity within these hidden processes must be inferred indirectly, as discussed in Section 18.4.2.

Images

FIGURE 18.11
Passive, adaptive, and stimulative information exploitation. (From Steinberg, A.N., Proceedings of the MSS National Symposium on Sensor and Data Fusion, McLean, VA, 2006. With permission.)

In such an open-loop approach—typical of most data fusion systems today—Blue observes and interprets such activities, but response decisions and actions are explicitly outside the purview of the data fusion process.

Information exploitation is often conceived as a closed-loop process (TPED, TPPU, and so on), as illustrated in Figure 18.11b. Here data collectors and the fusion process are adaptively managed to improve Blue’s fusion product. This can include prediction of future and contingent states of entities and situations (e.g., Red’s actions and their impact). It can also include predicting the information that would be needed to resolve among possible future states and the likelihood that candidate sensing or analytic actions will provide such information. That is, this closed-loop information exploitation process projects the effects of the environment on one’s own processes, but does not consider the effects of one’s own actions on the environment.

In contrast, closed-loop stimulative intelligence, Figure 18.11c, involves predicting, exploiting, and guiding the response of the system environment (and, in particular, of purposeful agents in that environment) to one’s own actions. The use of such stimulative techniques for information acquisition is discussed in Ref. 76.

18.5.2   Data Alignment in Situation and Threat Assessment

Data alignment (often referred to as common referencing) involves normalizing incoming data as necessary for association and inferencing. Data alignment functions operate on both reported data and associated metadata to establish data pedigree and render data useful for the receiving data fusion node.16

In the general case, data alignment includes any necessary data formatting, registration/calibration, and confidence normalization.

  • Data formatting can include decoding/decryption as necessary for symbol extraction and conversion into useful data formats. We can refer to these processes as lexical and syntactic registration.

  • Registration/calibration can include measurement calibration and spatio-temporal registration/alignment. It can also include mapping data within a semantic network, to include coordinate and scale conversion, as well as more sophisticated semantic mapping. We can refer to such processes as semantic registration.

  • Confidence normalization. Consistent representation of uncertainty.

In situation assessment, data formatting and measurement-level calibration/registration usually have been performed by upstream object assessment fusion nodes. If so, data alignment for situation assessment involves only semantic registration and confidence normalization. These are discussed in the following subsections.

18.5.2.1   Semantic Registration: Semantics and Ontologies

“Semantic registration” was coined as the higher-level fusion counterpart to spatio-temporal registration.18 In lower-level fusion, spatio-temporal registration provides a common spatio-temporal framework for data association. In STA, semantic registration involves the translation of sensor fusion object assessments and higher-level assessments into a common semantic framework for concept association.

Images

FIGURE 18.12
An example of multiple intelligence deduction. (From Steinberg, A.N. and Waltz, E.L., Perceptions on Imagery Fusion, Presented at NASA Data Fusion/Data Mining Workshop, Sunnyvale, CA, 1999. With permission.)

Semantic registration permits semantic interoperability among information processing nodes and information users. As discussed in Section 18.3, an ontology provides the means for establishing a semantic structure. Processing may be required to map between separate ontologies as used by upstream processes.77

Because information relevant to STA is often obtained from a diversity of source types, it will be necessary to extract information into a format that is common both syntactically and semantically. In the field of content-based image retrieval (CBIR), this is called the problem of the semantic gap.

Figure 18.12 depicts a concept for combining information extracted from human and communications intelligence (HUMINT and COMINT) natural language sources, as well as from various sensor modalities, such as imagery intelligence (IMINT) and measurement and signals intelligence (MASINT).78 The tree at the bottom of the figure represents the accumulation of confidence indicators (e.g., probabilities) from diverse sources supporting a particular situation hypothesis. Such fusion across heterogeneous sources requires lexical, syntactic, and semantic registration, as well as spatio-temporal registration and confidence normalization.

Figure 18.13 illustrates a process for bridging this gap, so that information from such diverse sources as imagery and natural language text can be extracted as necessary for fusion.

Techniques and developments for performing these functions are discussed in Ref. 79.

18.5.2.2   Confidence Normalization

Confidence normalization involves assigning consistent and (at least in principle) verifiable confidence measures to incoming data.

The concept of confidence normalization is tied to that of data pedigree. Data pedigree can include any information that a data fusion or resource management node requires to maintain its formal and mathematical processing integrity.16

Images

FIGURE 18.13
Fusion of imagery and text information.

Pedigree metadata involves two components:

  1. Content description: data structure, elements, ontologies, constraints, etc.

  2. Origin of the data and processing history77

The latter requires processes to estimate the error characteristics of data sources and of the processes that transform, combine, and derive inferences from such data before arrival at a given user node (i.e., to a node that performs further processing or to a human user of the received data). One way of maintaining consistency across fusion nodes and fusion levels is by exchanging pedigree metadata along with the data that they describe.16,77,80

The problem of characterizing data sources and data processing is particularly acute in STA, because of the heavy human involvement in source data generation, transformation, combination, and inferencing.

We may conceive the flow of information to be exploited by an STA process as an open communications network, that is, as an unconstrained network of agents that may interact on the basis of diverse capabilities, motives, and allegiances.81 Agent interactions within such a network may be

  • Intentional (e.g., by point-to-point or broadcast communications or publication)

  • Unintentional (e.g., by presenting active or passive signatures, reflective cross sections, electromagnetic or thermal emissions, or other detectable physical interactions with the environment)

Images

FIGURE 18.14
An example of open communications network. (From Steinberg, A.N., Proceedings of the Ninth International Conference on Information Fusion, Florence, Italy, 2006. With permission.)

Agents can interact with one another and with nonpurposeful elements of their environment in complex ways. As illustrated in Figure 18.14, if we consider ourselves a node in a network of interacting agents, our understanding of received data requires the characterization of proximate sources and, recursively, of their respective sources. That is, we need to characterize the pedigree of our received data.*

Such data, coming from a variety of sources with different levels of reliability, is often incomplete, noisy, and statistically biased. The concern for bias is especially acute when sources of information are people, who have the potential for intentional as well as inadvertent distortion of reported data.

HUMINT and COMINT, as well as open-source document intelligence (OSINT), are key sources of information in threat assessment. These sources involve information generated by human sources, and typically interpreted and evaluated by human analysts. Although intelligence analysts may be presumed to report honestly, there is always the possibility of inadvertent bias. In the case of external subjects—authors of print or electronic documents, interrogation subjects, and so on—it is not often clear how much credence to ascribe.

Consider the situation of a HUMINT reporting agent. As shown in Figure 18.15, there are three modes by which such an agent may receive information:

  1. Passive observation (e.g., as a forward observer)

  2. Direct interaction (e.g., by interrogation of civilians or enemy combatants)

  3. Receiving information from third-party sources (e.g., from civilian informants, from other friendly force members, from print or electronic media such as radio, TV, bloggers, e-mail, telephone, text messaging, and so on)

Images

FIGURE 18.15
Modes of collecting human-generated information. (From Steinberg, A.N., Llinas, J., Bisantz, A., Stoneking, C., and Morizio, N., Proceedings of the MSS National Symposium on Sensor and Data Fusion, McLean, VA, 2007. With permission.)

Human reporting agents and human subjects can have various degrees—and possibly time-varying degrees—of allegiance, common purpose, cooperativeness, information fidelity, and controllability.

In mode (1) there is a concern for errors caused by faults in the reporting agent’s observation, inference, decision, and reporting processes. In modes (2) and (3), the concern is for such faults in his sources, compounded with those of the reporting agent.

Estimating the credibility and possible biases in information from human sources is critical to exploiting such information and combining it with other information products (e.g., sensor data). Consider an analyst’s dilemma when one source with historically good performance provides a report with low certainty, whereas a second, historically less reliable source provides a conflicting report with greater certainty.77

Systematic methods are needed to recognize and correct distorted information from external subjects of the aforementioned types as reported in HUMINT, COMINT, and OSINT. These methods will also need to recognize and correct distortions—whether intentional or unintentional—contributed by the observers, interrogators, interpreters, analysts, and others in the information exploitation chain. This is, in effect, a problem of data alignment similar to those of spatio-temporal and semantic registration discussed in Section 18.5.2.1. The result may be expressed as likelihood values appended to individual report elements, as needed for inferring entity states, behaviors, and relationships as part of STA.46

Data reliability should be distinguished from data certainty in that reliability is a measure of the confidence in a measure of certainty (e.g., confidence in an assigned probability). Reliability has been formalized in the statistical community as second-order uncertainty.82,83
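One simple way to operationalize this second-order view is to let reliability discount a source's declared certainty toward an uninformative prior, so that an unreliable but confident report carries less weight than a reliable but tentative one. The sketch below (Python) illustrates such a rule; the linear discounting form and the numeric values are assumptions for illustration only, not a prescribed method.

```python
def discount(p_declared: float, reliability: float, prior: float = 0.5) -> float:
    """Reliability-weighted certainty: with reliability 1 the declared probability
    is used as-is; with reliability 0 the report is ignored in favor of the prior."""
    return reliability * p_declared + (1.0 - reliability) * prior

# A historically good source reporting with low certainty ...
p_a = discount(p_declared=0.6, reliability=0.95)   # -> 0.595
# ... versus a historically less reliable source reporting with high certainty.
p_b = discount(p_declared=0.9, reliability=0.5)    # -> 0.70
```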

Rogova and Nimier84 have surveyed available approaches to assessing source reliability. They differentiate cases where

  • It is possible to assign a numerical degree of reliability (e.g., second-order uncertainty) to each source (in this case values of reliability may be “relative” or “absolute” and they may or may not be linked by an equation such as ∑Ri = 1).

  • Only a relative ordering of the reliabilities of the sources is known but no specific values can be assigned.

  • A subset of sources is reliable but the specific identity of these is not known.

Rogova and Nimier argue that there are basically two strategies for downstream fusion processes to deal with these cases:

  1. Strategies explicitly utilizing known reliabilities of the sources.

  2. Strategies for identifying the quality of data input to fusion processes and elimination of data of poor reliability (this still requires an ability to process the “reasonably reliable” data).84

The agent response model described in Section 18.4.2 can be used for characterizing the performance of human information sources.

Actions of a responsive agent—to include actions of information reporting—are decomposed by means of the following four process models, as discussed in Section 18.4.2.

  • Measurement model. p(Z_Mw^k | X, w); probabilities that agent w will generate measurement sets Z in world states X. In the examples of Figure 18.15, these would include visual observations or received verbal or textual information.

  • Inference model. p(X̂_Iw^k | Z_Mw^k); probabilities that agent w will generate world state estimates X̂, given measurement sets Z.

  • Planning model. p(Â_Pw^k | X̂_Iw^k); probabilities that agent w will generate action plans Â, given world state estimate X̂. In the source characterization application, this can be planned reporting.

  • Control model. p(A_Cw^k | Â_Pw^k); probabilities that agent w will generate actions A, given action plans Â; for example, actual reporting (subject to manual or verbal errors).

It is evident that an agent’s reporting bias is distinct from, but includes, measurement bias. Characterizing reporting biases and errors involves the fusion of four types of information:

  1. Information within the reports themselves (e.g., explicit confidence declarations within a source document or an analyst’s assessment of the competence discernable in the report)

  2. Information concerning the reporting agent (prior performance or known training and skills)

  3. Information about the observing and reporting conditions (e.g., adverse observing conditions or physically or emotionally stressful conditions that could cause deviations from historical reporting statistics)

  4. Information from other sources about the reported situation that could corroborate or refute the reported data (e.g., other human or sensor-derived information or static knowledge bases)

Information of these types may be fused to develop reporting error models in the form of probability density functions (pdfs). The techniques used derive from standard techniques developed for sensor calibration and registration:81,85

  • Direct filtering (absolute alignment) methods are used to estimate and remove biases in confidence-tagged commensurate data in multiple reports, using standard data association methods to generate, evaluate, and select hypotheses that the same entities are being reported on.

  • Correspondence (relative alignment) methods such as cross-correlation and polynomial warping methods are used to align commensurate data when confidence tags are missing or are themselves suspect.

  • Model-based methods—using ontologies or causal models relating to the reported subject matter—are used to characterize biases and errors across noncommensurate data types (e.g., when one source reports a force structure in terms of unit type and designation, another reports in terms of numbers of personnel and types of equipment).

The source characterization function can serve as a front-end to the STA processing architecture. Alternatively, it can be integrated into the entity state estimation process, so that states of source entities—including states of their measurement, inference, planning, and control processes—are estimated in the same way as are states of “target” entities. In this way, “our” node in the open network of Figure 18.14 performs level 1 assessment of other nodes, across allegiance and threat categories. Pedigree data are traced and accumulated via level 2 processes.

18.5.3   Data Association in Situation and Threat Assessment

Data association involves generating, evaluating, and selecting hypotheses regarding the association of incoming data:

  • A signal/feature (level 0) hypothesis contains a set of measurements from one or more sensors, hypothesized to be the maximal set of measurements of a perceived signal or feature.

  • An object (level 1) hypothesis contains a set of observation reports (possibly products of level 0 fusion), hypothesized to be the maximal set of observation reports of a perceived entity.

  • A situation (level 2) hypothesis is a set of entity state estimates (possibly products of level 1 fusion), hypothesized to constitute a relationship or situation.

  • A threat (level 3) hypothesis is a situation hypothesis, projected on the basis of level 1, 2, and 3 hypotheses.

These hypothesis classes are defined in terms of an upward flow of information between data fusion levels, as shown in Figure 18.16.

However, inference can flow downward as well, as noted in the inductive examples of Section 18.1.3. Therefore, a signal, individual or situation can be inferred partially or completely on the basis of a contextual individual, situation or impact. For example, a level 2 (situation) hypothesis may contain level 1 hypotheses that contain no level 0 products. Such would be the case if an entity was not directly observed, but inferred on the basis of its situational context (as exemplified in Figure 18.2).

Images

FIGURE 18.16
Characteristic process flow across the fusion “levels.” (From Steinberg, A.N. and Bowman, C.L., Proceedings of the MSS National Symposium on Sensor and Data Fusion, June 2004. With permission.)

As noted in Ref. 77, a system is needed that can generate, evaluate, and present multiple hypotheses regarding intent and hostile COAs. Hostile value functions need to be inferred, using, for example, techniques for inferring Markov representations of the deterministic parts of enemy behavior. The probability distribution over enemy intent is not usually well known, although it may be possible to select a most likely case when pressed to do so. Probable as well as merely possible actions need to be considered, and dependences can be revealed by generating alternatives. A related technique is reinforcement learning.

Hypothesis generation involves

  • Determining relevant data (data batching)

  • Generating/selecting candidate scenario hypotheses (abduction)

The open world nature of many situation assessment problems poses serious challenges for hypothesis generation. This is particularly so in the case of threat assessment applications because of the factors listed in Section 18.1.4; viz.,

  • Weak spatio-temporal constraints on relevant evidence

  • Weak ontological constraints on relevant evidence

  • Dominance of relatively poorly modeled processes (specifically, human, group, and societal behavior, vice simple physical processes)

This is in contrast to level 1 data fusion, in which hypothesis generation is relatively straightforward. Hypothesis generation for threat assessment is difficult to automate and, therefore, will benefit from human involvement. An analyst can select batches of data that he finds suspicious and may suggest possible situation explanations for the automatic system to evaluate.

Hypothesis evaluation assigns plausibility scores to generated hypotheses. In Bayesian net implementations, for example, scoring is in terms of a joint pdf. In threat assessment, generated hypotheses are evaluated in terms of entities’ capability, intent, and opportunity to carry out various actions, to include attacks of various kinds. Hypothesis evaluation involves issues of uncertainty management.

Hypothesis selection involves comparing competing hypotheses to select those for subsequent action. Depending on metrics used in hypothesis evaluation, this can involve selecting the highest-scoring hypothesis (e.g., with the greatest likelihood score). Hypothesis selection in STA involves issues of efficient search of large, complex graphs. A practical threat assessment system may employ efficient search methods for hypothesis selection, with final adjudication by a human analyst.86
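The three functions can be seen in miniature in the following sketch (Python; the batching, scoring function, and number of hypotheses retained are placeholders rather than a prescribed algorithm): candidate hypotheses are generated abductively over a batch of reports, assigned plausibility scores, and the highest-scoring candidates are retained for final adjudication by an analyst.

```python
from itertools import combinations

def generate(batch, max_size=3):
    """Hypothesis generation (abduction): candidate groupings of batched reports."""
    for k in range(1, max_size + 1):
        for subset in combinations(batch, k):
            yield subset

def evaluate(hypothesis, score_fn):
    """Hypothesis evaluation: assign a plausibility score (e.g., a joint likelihood)."""
    return score_fn(hypothesis)

def select(scored, top_n=3):
    """Hypothesis selection: retain the highest-scoring candidates for adjudication."""
    return sorted(scored, key=lambda h: h[1], reverse=True)[:top_n]

# Illustrative use with a placeholder scoring function
batch = ["report_1", "report_2", "report_3"]
score_fn = lambda h: 1.0 / (1 + abs(len(h) - 2))        # hypothetical score
scored = [(h, evaluate(h, score_fn)) for h in generate(batch)]
best = select(scored)
```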

18.5.4   State Estimation in Situation and Threat Assessment

State estimation for STA involves the patterns of inference listed in Section 18.1.3:

  1. Inferring the presence and the states of entities on the basis of relations in which they participate

  2. Inferring relationships on the basis of entity states and other relationships

  3. Recognizing and characterizing extant situations

  4. Projecting undetermined (e.g., future) situations

Considerable effort has been devoted in the last few years to applying to STA the rich body of science and technology developed in such fields as pattern recognition, information discovery and extraction, model-based vision, and machine learning.

Techniques that are employed in STA generally fall into three categories: data-driven, model-driven, and compositional methods.

  • Data-driven methods are those that discover patterns in the data with minimal reliance on a priori models. Prominent among such techniques that have been used for STA are link-analysis methods. Data-driven methods can employ abductive inference techniques to explain discovered patterns.

  • Model-driven methods recognize salient patterns in the data by matching with a priori models. Such techniques for STA include graph matching and case-based reasoning. Model-driven methods typically involve deductive inference processes.

  • Compositional methods are hybrids of data- and model-driven methods. They adaptively build and refine explanatory models of the observed situation by discovering salient characteristics and composing hypotheses to explain the specific characteristics and interrelations of situation components. In general, these involve inductive (generalization) and abductive (explanatory) methods integrated with deductive (recognition) inference methods to build and refine partial hypotheses concerning situation components.

Figure 18.17 shows the ways that such techniques could operate in a hybrid, compositional STA system. Data-driven techniques search available data, seeking patterns. Model-driven techniques search the model set, seeking matching data. Discovered structures—hypothesized situation fragments—are maintained on a blackboard. Inductive techniques project situations, for example, to predict threat situations and threat events. They also predict the effectiveness of candidate system actions in the projected future situations, including the effectiveness of candidate data acquisition actions to resolve, refine, and complete the situation hypotheses.

Images

FIGURE 18.17
Notional hybrid situation and threat assessment architecture.

Together with this major operational flow (thick arrows in the figure) is a maintenance flow, which evaluates and refines the model base.

We can touch upon only a representative selection of these techniques.

18.5.4.1   Link Analysis Methods

Link analysis is an important technique in situation analysis (as defined in Section 18.1.1). Link analysis methods discover associations of one sort or another among reported entities. They have been used extensively in law enforcement and counter-terrorism to link people, organizations, and activities. Although this is a sophisticated technology for searching out linkages in data, it tends to generate unacceptably many false alarms. While classic link analysis methods are effective in detecting linkages between people and organizations, their means for recognizing operationally significant relationships or behavior patterns are generally weak. In addition, finding anomalous, possibly suspicious patterns of activity requires assumptions concerning joint probabilities of individual activities, for which statistics are generally lacking.

There are serious legal and policy constraints that prohibit widespread use of link discovery methods. As a prominent example, DARPA’s TIA program was chartered to develop data mining or knowledge discovery programs to uncover relationships to detect terrorist organizations and their activities. In response to widespread privacy concerns, the program’s name was changed from “Total Information Awareness” to “Terrorist Information Awareness.”87

A currently operational link analysis capability is provided in the Threat HUMINT Reporting, Evaluation, Analysis, and Display System (THREADS) developed by Northrop Grumman. THREADS was designed to deal with heavy volumes of HUMINT data that are primarily in the form of unformatted text messages and reports. THREADS employs the Objectivity/DB object-oriented database management system to track information from incoming messages, looking for people, facilities, locations, weapons, organizations, vehicles, and similar identified entities extracted from the messages.

Incoming data reports are monitored, analyzed, and matched with the stored network of detected links, possibly generating alerts for analyst response. An analyst can select one of the messages and review a tree of extracted attributes and links, correcting any errors or adding more information. Inherent mapping, imagery, and geographic capabilities are used in interpreting the HUMINT data; e.g., to determine the locations and physical attributes of threat-related entities. When an updated case is returned to the database, the stored link network is updated: links are generated, revised, or pruned.88

18.5.4.2   Graph Matching Methods

STA frequently employs graph matching techniques for situation recognition and analysis. According to our definitions in Section 18.1.1, situation recognition fundamentally involves the matching of relational structures found in received data to stored structures defining situation-classes. A survey of the many powerful graph matching methods that have been developed is found in Ref. 89.

Graph matching involves (a) processing the received data to extract a relational structure in the form of a semantic net and (b) comparing that structure with a set of stored graphs to find the best match. Matching is performed using the topologies of the data-derived and model graph (generally called the data graph and the template graph, respectively). To be of use in threat assessment applications, the graph matching process must incorporate techniques to (a) resolve uncertain and incomplete data; (b) efficiently search and match large graphs; and (c) extract, generate and validate template graphs.

In the simplest case, the search is for 1:1 isomorphism between data and template graphs. In most practical applications, however, the data graph will be considerably larger than the template graph, incorporating instantiation specifics and contextual information that will not have been modeled. In such cases, the template graph will at best be isomorphic with a subgraph of the data graph. This is typically the case in automatic target recognition, in which a target of interest is viewed within a scene, features of which may be captured in the data graph but will not have been anticipated in the target template graph.

In addition, parts of the template graph may not be present in the data graph, so that matching is between a subgraph of the former with that of the latter (i.e., partial isomorphism). In the target recognition example, the target may be partially occluded or obscured. A three-dimensional target viewed from one perspective will occlude part of its own structure.

The search time for isomorphic subgraphs is generally considerably longer than that for isomorphic graphs.90
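The following minimal Python sketch shows the brute-force backtracking search that underlies subgraph matching; the template and data graphs are hypothetical, and practical matchers replace this exhaustive search with the pruning and inexact-matching techniques surveyed in Ref. 89.

def subgraph_isomorphisms(template_edges, data_edges):
    """Yield mappings from template nodes to data nodes that preserve template adjacency.

    A brute-force backtracking search: exponential in the worst case, which is why
    practical graph matchers rely on pruning and inexact-matching heuristics.
    """
    t_nodes = sorted({n for e in template_edges for n in e})
    d_nodes = sorted({n for e in data_edges for n in e})
    t_adj = {(a, b) for a, b in template_edges} | {(b, a) for a, b in template_edges}
    d_adj = {(a, b) for a, b in data_edges} | {(b, a) for a, b in data_edges}

    def extend(mapping):
        if len(mapping) == len(t_nodes):
            yield dict(mapping)
            return
        t = t_nodes[len(mapping)]
        for d in d_nodes:
            if d in mapping.values():
                continue
            # Every template edge incident to already-mapped nodes must exist in the data graph.
            if all(((t, u) not in t_adj) or ((d, mapping[u]) in d_adj) for u in mapping):
                mapping[t] = d
                yield from extend(mapping)
                del mapping[t]

    yield from extend({})

# Hypothetical example: a three-node template matched against a larger data graph.
template = [("launcher", "radar"), ("radar", "command_post")]
data = [("unit1", "unit2"), ("unit2", "unit3"), ("unit3", "unit4"), ("unit2", "unit4")]
print(next(subgraph_isomorphisms(template, data), None))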


FIGURE 18.18
INFERD high-level fusion: situation awareness, impact assessment, and process refinement. (From Sudit, M., CMIF Information Fusion Technologies, CMIF Internal Presentation, 2005. With permission.)

Similar problems occur in STA domains. Often, the template does not capture the real-world variability in the modeled target or situation class. This, of course, is the expectation in threat assessment, given the variability inherent in intelligent, opportunistic adversaries as well as in complex attack contexts. To operate robustly in such cases, isomorphism matching criteria must be relaxed. A formulation of inexact graph matching is presented in Ref. 91.

Researchers at the State University of New York at Buffalo have developed a graph matching system, information fusion engine for real-time decision-making (INFERD), which they have applied to threat assessment problems in such domains as natural disaster response,64,65 cyber attacks,92 and urban/asymmetric warfare.93

INFERD is an information fusion engine that adaptively builds knowledge hierarchically from data fusion levels 1 through 4 (Figure 18.18).94 The most recent version of INFERD incorporates advanced features to minimize the dependence on perfect or complete a priori knowledge, while allowing dynamic generation of hypotheses of interest. The most important among these features is abstraction of a priori knowledge into meta-hypotheses called Guidance Templates. In level 2 fusion, template nodes (generated from feature trees) are composed into acyclic directed graphs called “Attack Tracks”. The latter are used by INFERD to represent the current situation. Attack Tracks are instantiated over a Guidance Template, a meta-graph containing every “known” possible node and edge.

18.5.4.3   Template Methods

Graph matching is a class of model-driven methods. Other methods using a variety of templating schemes have been developed.

As an example, the U.S. Army imagery exploitation system/balanced technology initiative (IES/BTI) uses templates in the form of Bayesian belief networks to develop estimates of the presence and location of military force membership, organization, and expected ground formations.72,95

Hinman96 discusses another project—enhanced all-source fusion (EASF), sponsored by the Fusion Technology Branch of the U.S. Air Force Research Laboratory—that employs probabilistic force structure templates for recognizing military units by Bayesian methods.97

18.5.4.4   Belief Networks

Inferences concerning relationships are amenable to the machinery of belief propagation, whereby entity states are inferred conditioned on other entity states (i.e., on other states of the same entities or on the states of other entities).

18.5.4.4.1 Bayesian Belief Networks

A Bayesian belief network (BBN) is a graphical, probabilistic representation of the state of knowledge of a set of variables describing some domain. Nodes in the network denote the variables and links denote causal relationships among variables. The strengths of these causal relationships are encoded in conditional probability tables.98

Formally, a BBN is a pair (G, P_G) consisting of a directed acyclic graph G and a joint probability distribution P_G over its variables.

Each node is a variable in a multivariate distribution. The network topology encodes the assumed causal dependencies within the domain: the absence of a link between two variables represents a lack of direct causal influence. BBNs are, by definition, causal and acyclic: influence flows from cause to effect (and evidence from effect to cause) without loops, so there is no directed path from any node back to itself. For example, graphs such as those in Figures 18.19a and 18.19b are feasible BBNs, but ones like that in Figure 18.19c are not.

The joint distribution for a BBN is uniquely defined as the product, over all nodes, of each variable’s distribution conditioned on its parents.
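A minimal sketch of this factorization, using a hypothetical three-node network and illustrative probabilities, follows; the joint probability of any complete assignment is simply the product of each node’s conditional probability table (CPT) entry given its parents.

# Hypothetical three-node network: Rain -> WetGround <- Sprinkler.
# Conditional probability tables (CPTs); all values are illustrative only.
p_rain = {True: 0.2, False: 0.8}
p_sprinkler = {True: 0.3, False: 0.7}
p_wet = {  # p(WetGround | Rain, Sprinkler)
    (True, True): {True: 0.99, False: 0.01},
    (True, False): {True: 0.90, False: 0.10},
    (False, True): {True: 0.80, False: 0.20},
    (False, False): {True: 0.05, False: 0.95},
}

def joint(rain, sprinkler, wet):
    """Joint probability as the product of each node's CPT entry given its parents."""
    return p_rain[rain] * p_sprinkler[sprinkler] * p_wet[(rain, sprinkler)][wet]

# The full joint distribution sums to 1 over all variable assignments.
total = sum(joint(r, s, w) for r in (True, False)
            for s in (True, False) for w in (True, False))
print(total)  # 1.0 (up to floating-point rounding)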


FIGURE 18.19
Sample network topologies.

Belief in the node X of a belief network is the probability distribution p(X|e) given all the evidence received. The posterior probability of a node state X = x after receiving evidence eX is

p(x \mid e_X) = \frac{p(e_X \mid x)\, p(x)}{p(e_X)} = \alpha\, \lambda(x)\, p(x)    (18.20)

Belief revision is accomplished by message passing between the nodes. Figures 18.19a and 18.19b show examples of causal networks, such as BBNs. Causal networks strictly distinguish evidence as causes or effects:

  • Causal evidence e_X^+. Evidence that passes to a variable node X from nodes representing variables that causally determine the state of the variable represented by X; its effect on X is summarized as \pi(X) = p(X \mid e_X^+)

  • Diagnostic evidence e_X^-. Evidence concerning the effects of X, summarized as \lambda(X) = p(e_X^- \mid X)

The plus and minus signs reflect the conventional graphical depiction, as seen in Figure 18.20, in which causal evidence is propagated as downward-flowing and diagnostic evidence as upward-flowing messages.

The a posteriori belief in the value of a state of X is

\mathrm{Bel}(X) = p(X \mid e_X^+, e_X^-)
              = \frac{p(e_X^+, e_X^-, X)}{p(e_X^+, e_X^-)}
              = \frac{p(e_X^- \mid e_X^+, X)\, p(X \mid e_X^+)\, p(e_X^+)}{p(e_X^+, e_X^-)}
              = \alpha\, p(e_X^- \mid e_X^+, X)\, p(X \mid e_X^+)
              = \alpha\, \pi(X)\, \lambda(X)    (18.21)


FIGURE 18.20
Belief revision in causal Bayesian networks.

\lambda(X) = p(e_X^- \mid X) is the joint likelihood function on X:

\lambda(x) = \prod_i p(y_i \mid x)    (18.22)

for the respective observed states y_i of all causal children Y_i of X. This is illustrated in Figure 18.20.

Evidence from a parent W to a child node X is passed as conditionals on the probability distribution of that evidence:

\pi_W(x) = p(x \mid W) = \sum_j p(x \mid w_j)\, p(w_j)    (18.23)

\pi(x) = \sum_{w_1, \ldots, w_k} p(x \mid w_1, \ldots, w_k) \prod_{i=1}^{k} \pi_{W_i}(w_i)    (18.24)

As is evident from the summation notation, Bayesian belief networks deal in dependencies among discrete-valued variables. In dealing with a continuous variable, it is necessary to “discretize” the data (i.e., subdivide the range of the variables into a mutually exclusive set of intervals). Das98 discusses optimal maximum entropy solutions to discretization.
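The following minimal worked example applies Equations 18.21 through 18.24 to a hypothetical three-node chain W → X → Y with binary discrete states: the causal message π(X) is computed from the parent’s prior, the diagnostic message λ(X) from the observed child, and the two are fused into the posterior belief. All numbers are illustrative only.

# Hypothetical chain W -> X -> Y with binary states; CPT values are illustrative only.
states = (0, 1)
p_w = [0.7, 0.3]                      # prior on W
p_x_given_w = [[0.9, 0.1],            # p(x | w): rows indexed by w, columns by x
               [0.2, 0.8]]
p_y_given_x = [[0.6, 0.4],            # p(y | x)
               [0.1, 0.9]]
observed_y = 1                        # diagnostic evidence: Y observed in state 1

# Causal (prior) message into X, Equations 18.23/18.24: pi(x) = sum_w p(x|w) p(w)
pi_x = [sum(p_x_given_w[w][x] * p_w[w] for w in states) for x in states]

# Diagnostic message from the child Y, Equation 18.22: lambda(x) = p(y_obs | x)
lambda_x = [p_y_given_x[x][observed_y] for x in states]

# Posterior belief, Equation 18.21: Bel(x) = alpha * pi(x) * lambda(x)
unnorm = [pi_x[x] * lambda_x[x] for x in states]
alpha = 1.0 / sum(unnorm)
bel_x = [alpha * u for u in unnorm]
print(bel_x)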

Evaluating large graphs can be daunting, and various pruning strategies can be employed to eliminate weak dependencies between nodes: trading optimality for speed.98

There is a concern for maintaining information integrity in multiply-connected graphs. For example, in Figure 18.19b, the state of SoilNitrates affects the states of both CornYield and BeanYield, which in turn affect the state of FoodSupply. The problem arises in double-counting the effects of the common variable, SoilNitrates, on FoodSupply.

Pearl11 discusses three techniques that solve this problem with various degrees of success:

  • Clustering. Forming compound variables to eliminate multiple connectivity (e.g., combining CornYield and BeanYield in Figure 18.19b to a common node)

  • Conditioning. Decomposing networks to correspond to multiple instantiations of common nodes (e.g., breaking the node SoilCondition in Figure 18.19c into a prior and a posterior node to reflect the causal dynamics)

  • Stochastic simulation. Using Monte Carlo methods to estimate joint probabilities at a node (a minimal sketch follows this list)
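As a minimal sketch of the stochastic simulation option, the following forward (ancestral) sampling fragment estimates a marginal in the multiply connected network of Figure 18.19b; the probabilities are purely illustrative. Because each sample draws the common cause SoilNitrates exactly once, its influence on FoodSupply is not double-counted.

import random

random.seed(0)

# Hypothetical CPTs for the multiply connected example of Figure 18.19b:
# SoilNitrates -> {CornYield, BeanYield} -> FoodSupply.  Values are illustrative only.
p_nitrates_high = 0.4
p_corn_good = {True: 0.8, False: 0.3}          # p(CornYield good | nitrates)
p_bean_good = {True: 0.7, False: 0.4}          # p(BeanYield good | nitrates)
p_food_ok = {                                  # p(FoodSupply adequate | corn, bean)
    (True, True): 0.95, (True, False): 0.7,
    (False, True): 0.6, (False, False): 0.1,
}

def sample():
    """Draw one joint sample by forward (ancestral) sampling through the network."""
    nitrates = random.random() < p_nitrates_high
    corn = random.random() < p_corn_good[nitrates]
    bean = random.random() < p_bean_good[nitrates]
    food = random.random() < p_food_ok[(corn, bean)]
    return nitrates, corn, bean, food

# Monte Carlo estimate of p(FoodSupply adequate); the common cause SoilNitrates is
# handled correctly because each sample draws it only once.
n = 100_000
estimate = sum(sample()[3] for _ in range(n)) / n
print(round(estimate, 3))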

One of the most widely used techniques for dealing with loops in causal graphs is the Junction Tree algorithm, developed by Jensen et al.99,100 Like Pearl’s clustering algorithm, the Junction Tree algorithm passes messages that are functions of clusters of nodes, thereby eliminating duplicative paths.101

18.5.4.4.2 Generalized Belief Networks

As noted, the classical Bayesian belief network is concerned with evaluating causal influence networks. This is rather too constraining for our purposes in that it precludes the use of complex random graph topologies of the sorts encountered in the data graphs and template graphs needed for many STA problems, for example, graphs with loops as shown in Figure 18.19c.

Yedidia et al.101 provide a generalization that mitigates this restriction. Their Generalized Belief Propagation formulation—which they show to be equivalent to Markov random field and Bethe free-energy approximation formulations—models belief in a state x_j of a node X

b_X(x_j) = k\, \phi_X(x_j) \prod_{W \in N(X)} m_{W,X}(x_j)    (18.25)

in terms of “local” evidence φX(xj) and evidence passed as messages from other nodes W in the immediate neighborhood N(X) in the graph that represents the set of relationships in the relevant situation:

m_{W,X}(x_j) = \sum_{w_i} \phi_W(w_i)\, \psi_{W,X}(w_i, x_j) \prod_{Y \in N(W) \setminus X} m_{Y,W}(w_i)    (18.26)

Joint belief, then, is given as

b_{W,X}(w_i, x_j) = k\, \psi_{W,X}(w_i, x_j)\, \phi_W(w_i)\, \phi_X(x_j) \prod_{Y \in N(W) \setminus X} m_{Y,W}(w_i) \prod_{Z \in N(X) \setminus W} m_{Z,X}(x_j)    (18.27)

As before, k is a normalizing constant ensuring that beliefs sum to 1.
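A minimal sketch of this message-passing scheme is given below for a hypothetical three-node loop with binary states; the local evidence and pairwise compatibilities are illustrative only. Messages are iterated per Equation 18.26 and then combined into beliefs per Equation 18.25; on loopy graphs the result is an approximation rather than an exact posterior.

from math import prod

# Hypothetical pairwise model on a three-node loop (A-B, B-C, C-A) with binary states.
nodes = ["A", "B", "C"]
states = (0, 1)
edges = [("A", "B"), ("B", "C"), ("C", "A")]
neighbors = {n: [m for e in edges for m in e if n in e and m != n] for n in nodes}

phi = {n: [0.5, 0.5] for n in nodes}   # local evidence phi_X(x_j), Eq. 18.25
phi["A"] = [0.8, 0.2]                  # node A has informative local evidence

def psi(wi, xj):
    """Pairwise compatibility psi_{W,X}(w_i, x_j); identical for every edge here."""
    return 0.9 if wi == xj else 0.1

# Messages m_{W,X}(x_j), initialized uniformly and iterated per Eq. 18.26.
msg = {w: {x: [1.0, 1.0] for x in neighbors[w]} for w in nodes}
for _ in range(20):
    new = {w: {x: [0.0, 0.0] for x in neighbors[w]} for w in nodes}
    for w in nodes:
        for x in neighbors[w]:
            for xj in states:
                new[w][x][xj] = sum(
                    phi[w][wi] * psi(wi, xj)
                    * prod(msg[y][w][wi] for y in neighbors[w] if y != x)
                    for wi in states)
            z = sum(new[w][x])
            new[w][x] = [v / z for v in new[w][x]]  # normalize for numerical stability
    msg = new

# Beliefs per Eq. 18.25: b_X(x_j) = k * phi_X(x_j) * product of incoming messages.
for x in nodes:
    b = [phi[x][xj] * prod(msg[w][x][xj] for w in neighbors[x]) for xj in states]
    k = 1.0 / sum(b)
    print(x, [round(k * v, 3) for v in b])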

18.5.4.4.3   Learning of Belief Networks

Learning belief networks entails learning both the dependency structure and conditional probabilities. Das98 evaluates applicability of various techniques as a function of prior knowledge of the network structure and data observability (Table 18.5).
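For the simplest cell of that table—known structure and fully observable data—maximum likelihood estimation of the conditional probabilities reduces to normalized counting, as the following minimal sketch with hypothetical data illustrates.

from collections import Counter, defaultdict

# Fully observed samples of (parent_state, child_state) for one node with one parent;
# the data are hypothetical.
samples = [("high", "good"), ("high", "good"), ("high", "poor"),
           ("low", "poor"), ("low", "poor"), ("low", "good")]

counts = defaultdict(Counter)
for parent, child in samples:
    counts[parent][child] += 1

# Maximum likelihood CPT: relative frequency of each child state given each parent state.
cpt = {parent: {child: n / sum(c.values()) for child, n in c.items()}
       for parent, c in counts.items()}
print(cpt)   # e.g., {'high': {'good': 0.67, 'poor': 0.33}, 'low': {...}}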

18.5.4.4.4   Belief Decision Trees

Additional induction methods have been developed based, not on joint probability methods, but on the Dempster–Shafer theory of evidence.102–104 These methods are designed to reduce the sensitivity of the decision process to small perturbations in the learning set, which has been a significant problem with existing decision tree methods. Because of its capability to express various degrees of knowledge—from total ignorance to full knowledge—Dempster–Shafer theory is an attractive basis for building recognition systems that remain robust when the domain is not well modeled.

TABLE 18.5
Techniques for Learning of Belief Networks

Observability                     Known Network Structure                Unknown Network Structure

Fully observable variables        Maximum likelihood (ML)                Entropy, Bayesian, or MDL score
                                  estimation; Bayesian                   based (e.g., K2); dependency
                                  Dirichlet (BD)                         constraint based

Partially observable variables    Expectation maximization (EM);         Expectation maximization (EM)
                                  Gibbs sampling; gradient descent       (e.g., structured EM)
                                  based (e.g., APN)

Source:   Das, S., Proceedings of the Eighth International Conference on Information Fusion, Philadelphia, 2005. With permission.

Vannoorenberghe102 augments the Belief Decision Tree formulation with several machine-learning techniques—including bagging and randomization—and claims strikingly improved performance.
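The Dempster–Shafer machinery underlying these methods can be illustrated with a minimal sketch of Dempster’s rule of combination; the frame of discernment and mass assignments below are hypothetical. Note how mass assigned to the whole frame expresses partial ignorance, which is the property that makes the theory forgiving of weakly modeled domains.

def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions over frozenset focal elements."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical frame of discernment {attack, feint, withdraw} and two evidence sources.
frame = frozenset({"attack", "feint", "withdraw"})
m_sensor = {frozenset({"attack"}): 0.6, frame: 0.4}               # partial ignorance retained
m_humint = {frozenset({"attack", "feint"}): 0.7, frame: 0.3}

print(dempster_combine(m_sensor, m_humint))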

18.5.4.5   Compositional Methods

In assessing well-modeled situations (e.g., a chess match or a baseball game), it should be possible to define a static architecture for estimating and projecting situations.

This, however, is not the case in the many situations where human behavior is relatively unconstrained (as in most threat assessment problems of interest).

In particular, adapting to unanticipated behavior of interacting agents requires an ability to create predictive models of behaviors that a purposeful agent might exhibit and determine the distinctive observable indicators of those behaviors.

Often, however, the specific behaviors that must be predicted are unprecedented and unexpected, making them unsuitable for template-driven recognition methods. This drives a need to shift from the model recognition paradigms familiar in automatic target recognition to a model discovery paradigm.

Under such a paradigm, the system will be required to compose models that explain the available data, rather than simply to recognize preexisting models in the data. In such a system, the assessment process will need to be adaptive and opportunistic. That is to say, it cannot assume that it has a predefined complete set of threat scenarios available for simple pattern-matching recognition (e.g., using graph matching). Rather, situation and event hypotheses must be generated, evaluated, and refined as the understanding of the situation evolves.

A process of this sort has been developed under the DARPA evidence extraction and link detection (EELD) program. The system, continuous analysis and discovery from relational evidence (CADRE), was developed by BAE Systems/Alphatech to discover and interpret subtle relations in data.

CADRE is a link detection system that takes in a threat pattern and partial evidence about threat cases and outputs threat hypotheses with associated inferred agents and events. CADRE uses a Prolog-based frame system to represent threat patterns and enforce temporal and equality constraints among pattern slots. Based on rules involving uniquely identifying slots in the pattern, CADRE triggers an initial set of threat hypotheses, and then refines these hypotheses by generating queries for unknown slots from constraints involving known slots. To evaluate hypotheses, CADRE scores each local hypothesis using a probabilistic model, then assembles a consistent, high-value global hypothesis by pruning conflicting lower-scoring hypotheses.105

Another approach is to represent the deployed situation by means of a constructive Belief Network as illustrated in Figure 18.21.

An individual object, such as the mobile missile launcher at the bottom of the figure, can be represented either as a single node in the graph or as a subgraph of components and their interrelationships. In this way the problems of Automatic Target Recognition (ATR), scene understanding, and situation assessment can be addressed together via a common belief network at various levels of granularity.

Such an approach was implemented under the DARPA multi-source intelligence correlator (MICOR) program.106 The system composed graphs representing adversary force structures, including the class and activity state of military units, represented as nodes in the graph.

To capture the uncertainty in relational states as well as in entity attributive states, the Bayesian graph method was extended by the use of labeled directed graphs.


FIGURE 18.21
A representative graph of a tactical military situation. (From Steinberg, A.N., Threat Assessment Issues and Methods, Tutorial presented at Sensor Fusion Conference, Marcus-Evans, Washington, DC, December 2006. With permission.)

Applying this to the generalized belief propagation formulation of Section 18.5.4.4.2, we expand Equation 18.26 by marginalizing over relations:

m_{W,X}(x_j) = \sum_{w_i} \phi_W(w_i)\, \psi_{W,X}(w_i, x_j) \prod_{Y \in N(W) \setminus X} m_{Y,W}(w_i)
             = \sum_{w_i} \phi_W(w_i) \left[ \sum_{r} p(w_i, x_j \mid r(W,X))\, f(r(W,X)) \right] \prod_{Y \in N(W) \setminus X} m_{Y,W}(w_i)    (18.28)

The effect is to build belief networks in which entity state variables and relational variables are nodes.
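A minimal sketch of the relational expansion in Equation 18.28 is given below; the relation hypotheses, priors, and joint probabilities are purely illustrative. The pairwise potential is replaced by a sum over hypothesized relationships between the two entities, weighted by a prior f(r) over those relationships.

# Minimal sketch of the relational expansion of Equation 18.28, with hypothetical numbers.
# f[r] is a prior over hypothesized relationships r between entities W and X, and
# p_joint[r][(w, x)] plays the role of p(w_i, x_j | r(W, X)).
states = (0, 1)
relations = ("subordinate_to", "unrelated")
f = {"subordinate_to": 0.3, "unrelated": 0.7}
p_joint = {
    "subordinate_to": {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4},
    "unrelated":      {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25},
}
phi_w = [0.6, 0.4]                 # local evidence on W
incoming = [1.0, 1.0]              # product of messages from W's other neighbors (none here)

def message_to_x(xj):
    """m_{W,X}(x_j) with the pairwise potential replaced by a sum over relations."""
    return sum(
        phi_w[wi] * sum(p_joint[r][(wi, xj)] * f[r] for r in relations) * incoming[wi]
        for wi in states)

m = [message_to_x(xj) for xj in states]
z = sum(m)
print([round(v / z, 3) for v in m])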

A reference processing architecture for adaptively building and refining situation estimate models is shown in Figure 18.22. The architecture elaborates the notional blackboard concept of Figure 18.17. The specific design extends one developed for model-based scene understanding and target recognition.

This adaptive process for model discovery iteratively builds and validates interpretation hypotheses, which attempt to explain the available evidence. A feature extraction process searches available data for indicators and differentiators (i.e., supporting and discriminating evidence) of situation types of interest (e.g., movements of weapons, forces, or other resources related to threat organizations).

Hypothesis generation develops one or more interpretation hypotheses, which have the form of labeled directed graphs (illustrated at the bottom of the figure). In a threat assessment application, interpretation hypotheses concern the capability, opportunity, and intent of agents to carry out various actions. These conditions for action are decomposed into mutually consistent sets and evaluated against the available evidence.


FIGURE 18.22
Reference architecture for SA/TA. (From Steinberg, A.N., Threat Assessment Issues and Methods, Tutorial presented at Sensor Fusion Conference, Marcus-Evans, Washington, DC, December 2006. With permission.)

Threat hypotheses include

  • Inference of threatening situations: the capability, opportunities, and intent of one entity x to (adversely) affect an entity y

  • Prediction of threat situations and events (e.g., attacks): interactions whereby entities (adversely) affect entities of interest

Threat situations and threat events are inferred on the basis of the attributes and relationships of the entities involved.

Estimates of physical, informational, and psychological states of such entities are used to infer both actual and potential relationships among entities. Specifically, threat situations are inferred in terms of one entity’s capability, opportunity, and intent to affect (other) entities. A predicted or occurring threat event is characterized in terms of its direction (i.e., planning, command, and control), execution, and outcome.

The process closes the loop by means of a resource management function that nominates information acquisition actions. A utility/probability/cost model is used to predict the cost-effectiveness of such actions in terms of (a) the predicted utility of particular information, (b) the probability of attaining such information given various actions, and (c) the cost of such actions.107 Utility is calculated in terms of the expected effectiveness of specific information to refine hypotheses or to resolve among competing hypotheses as needed to support the current decision needs (i.e., to map from possible world space to decision space).21,45
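A minimal sketch of such a screening calculation follows; the candidate actions and numbers are hypothetical, and the simple “expected net value” form (utility times probability of success, minus cost) is only one possible instantiation of the utility/probability/cost model described above.

# Minimal sketch of utility/probability/cost screening of candidate information
# acquisition actions; action names and numbers are hypothetical.
candidates = [
    # (action, utility of the information if obtained, probability of obtaining it, cost)
    ("retask imaging sensor",   8.0, 0.6, 2.0),
    ("query HUMINT source",     5.0, 0.9, 1.0),
    ("stimulate comms network", 9.0, 0.3, 4.0),
]

def expected_net_value(utility, probability, cost):
    """Predicted cost-effectiveness: expected utility of the information minus its cost."""
    return utility * probability - cost

ranked = sorted(candidates, key=lambda c: expected_net_value(*c[1:]), reverse=True)
for action, u, p, c in ranked:
    print(f"{action}: expected net value = {expected_net_value(u, p, c):.2f}")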

Information acquisition actions can include the intentional stimulation of the information environment to elicit information, as shown in Figure 18.11c. Stimulation to induce information sources to reveal their biases requires an inventory of models for various classes of behavior.76 A key research goal is the development and validation of methods for the systematic understanding and generalization of such models—the abductive and inductive elements of Figure 18.22.

18.5.4.6   Algorithmic Techniques for Situation and Threat Assessment

Various techniques applicable to situation recognition and discussed in Section 18.5 have been applied to problems of impact and threat assessment.

Hinman96 discusses a range of techniques that have been assessed for these purposes under the sponsorship of the Fusion Technology Branch of the U.S. Air Force Research Laboratory. These include

  • Bayesian techniques, using force structure templates for recognizing military units.97

  • Knowledge-based approaches for behavior analysis, specifically for vehicle recognition but with distinct applicability to the recognition of organizations and their activities. Hidden Markov Models are used to capture doctrinal information and uncertainty.108

  • Neural networks for situation assessment, using back propagation training in a multilayer neural network to aggregate analysts’ pairwise preferences in situation recognition. Dempster–Shafer methods are used to resolve potential conflicts in these outputs.109

  • Neural networks for predicting one of a small number of threat actions (attack, retreat, feint, hold). A troop deployment analysis module performs a hierarchical constrained clustering on the battle map. The resulting battlefield cluster maps are processed by a tactical situation analysis module, using both rule-based and neural network methods to predict enemy intent.110

  • Fuzzy logic techniques were evaluated to recognize enemy courses of action and to infer intent and objectives (discussed in Section 18.2.4).111

  • Genetic algorithms are used in a planning-support tool called FOX, which rapidly generates and assesses battlefield courses of action (COAs). Wargaming techniques enable a rapid search for desirable solutions.112 As such, the tool can be applied to impact assessment by generating potential friendly and adversary COAs and predicting their interactions and consequences.

18.5.5   Data Management

Data management is an important issue in developing and operating practical systems for STA. In particular, there is the need to manage and manipulate very large graphical structures. It is not uncommon for data graphs used in link analysis to have thousands of nodes. Network representations of situation hypotheses can have several hundred nodes. Furthermore, systems must have means to represent the sorts of uncertainty typical of military or national security threat situations. Often, this is done by maintaining multiple hypotheses.

This leads to issues of data structure, search, and retrieval. Solutions have been developed at both the application and infrastructure layers.

18.5.5.1   Hypothesis Structure Issues

McMichael and coworkers at CSIRO have developed what they call grammatical methods for representing and reasoning about situations.113–115

Their approach has numerous attractive features. Much like a generative grammar for natural language, its compositional grammar obtains expressive power by allowing the recursive generation of a very large number of very complex structures from a small number of components. The method extends Steedman’s Combinatory Categorial Grammar116 to include grammatical methods for both set and sequence combinations. The resulting Sequence Set Combinatory Categorial Grammar generates parse trees from track data. These trees are parsed to generate situation trees.115

Dynamic situations are readily handled. Such a situation is partitioned into set and sequence components, rather analogous to the SPAN and SNAP constructs of SNePS.59,60 Set components are entities that persist over time despite changes: physical objects, organizational structures, and the like. Sequence components are dynamic states of the set components. These can be stages of individual dynamic entities. They can also be episodes of dynamic situations. Both sets and sequences are amenable to hierarchical decomposition: a military force may have a hierarchy of subordinate units; a complex dynamic situation can involve a hierarchy of subordinate situations. Therefore, a situation can be represented as a tree structure—called a sequence set tree—as illustrated in Figure 18.23.

This hierarchical graphical formulation has value not only in its representational richness, but in allowing efficient processing and manipulation. The aim of the representational scheme is to maximize the state complexity (expressive power) while minimizing the complexity of what is expressed.117

Dependency links among tree nodes—including continuity links between stages of a dynamically changing component and interaction links between components—are represented within the tree structure. Doing so transforms situation representations from highly connected graphs such as those in Figures 18.14 and 18.21, which cannot be searched in polynomial time, into trees that can be.

A pivot table algorithm is used for fast situation extraction.114


FIGURE 18.23
An example of McMichael’s situation tree. (From Steinberg, A.N., Threat Assessment Issues and Methods, Tutorial presented at Sensor Fusion Conference, Marcus-Evans, Washington, DC, December 2006. With permission.)
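A minimal sketch of such a hierarchical representation in Python follows; the class and field names are hypothetical rather than McMichael’s. Each set component (a persistent entity or grouping) carries its sequence of dynamic stages and decomposes into subordinate set components, and the resulting tree can be traversed efficiently.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Stage:
    label: str            # e.g., "assembling", "moving", "attacking"
    start: float
    end: float

@dataclass
class SetComponent:
    name: str                                                   # persistent entity or grouping
    stages: List[Stage] = field(default_factory=list)           # its sequence component
    parts: List["SetComponent"] = field(default_factory=list)   # hierarchical decomposition

# Hypothetical tactical example.
brigade = SetComponent(
    name="Brigade",
    stages=[Stage("assembling", 0.0, 2.0), Stage("advancing", 2.0, 5.0)],
    parts=[
        SetComponent("Battalion 1", [Stage("advancing", 2.0, 5.0)]),
        SetComponent("Battalion 2", [Stage("screening", 2.0, 5.0)]),
    ],
)

def walk(node, depth=0):
    """Depth-first traversal: trees of this form can be searched in polynomial time."""
    print("  " * depth + node.name, [s.label for s in node.stages])
    for part in node.parts:
        walk(part, depth + 1)

walk(brigade)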

18.5.5.2   Data Repository Structure Issues

Let us examine the implications for data management of the STA issues listed in Table 18.2. Table 18.6 shows features of a database management system that help to address these issues.

Many current systems maintain large databases using a relational database management system (RDBMS), such as Oracle. An RDBMS performs well in the following cases:

  • Ad hoc queries, for which relationships do not need to be remembered

  • Static change policy for the database and schema

  • Centralized database server

  • Storage of data that is logically amenable to representation in two-dimensional tables

However, as data become more dynamic and more complex—with greater interconnectivity and a greater number of data types—the overhead of an RDBMS becomes excessive.

Furthermore, the move toward larger, distributed enterprises in intelligence analysis adds to this burden.

Therefore, in many of the applications of interest—those driven by issues such as those listed in Table 18.6—an object-oriented database management system (OODBMS) is often preferred.

In contrast to an RDBMS, an OODBMS such as Objectivity/DB offers the following desirable features:

  • Data of any complexity can be directly represented in the database.

  • Relationships are persisted, so that there is no need to rediscover the relationship upon each use (e.g., by matching a primary key to a foreign key).

TABLE 18.6
Data Management Issues for Situation Threat Assessment


  • Direct storage of objects and relationships provides efficient storage and searching of complex trees and networks of objects.

  • Applications with data that have many relationships or many objects with many interrelationships benefit greatly.

  • Support for complex data and inheritance enables creation of a comprehensive schema that models the union of large numbers of disparate data sources.

  • A comprehensive global view of data can open up many data mining opportunities.

  • Once objects are stored in the database, any number of arbitrary structures can be created on top of the data to rapidly find and access those objects again, such as maps, lists, sets, trees, and specialized indices. Multidimensional indices can be created over the data objects.118

 

 

18.6   Summary

Every technological discipline undergoes a formative phase in which the underlying concepts and principles are only vaguely perceived. During that phase, much unfocused energy is inefficiently expended, as researchers experiment with a multiplicity of fragmentary methods, with little sense of concerted progress.

The disciplines of situation and, especially, threat assessment have been in that nascent phase since people began thinking systematically about them, perhaps in the 1970s. Eventually, a rigorous conceptual foundation will be established, enabling profound progress to be made rapidly and efficiently.

We are seeing very encouraging signs that we are nearing that point. Recent developments in inferencing methods, knowledge representation, and knowledge management are beginning to bear fruit in practical STA systems.

 

 

References

1. D. Lambert, A blueprint for higher-level information fusion systems, Information Fusion, 2008 (forthcoming).

2. F.E. White, A model for data fusion, Proceedings of the First National Symposium on Sensor Fusion, GACIAC, IIT Research Institute, Chicago, IL, vol. 2, 1988.

3. A.N. Steinberg, C.L. Bowman and F.E. White, Revisions to the JDL Model, Joint NATO/IRIS Conference Proceedings, Quebec, October 1998; reprinted in Sensor Fusion: Architectures, Algorithms and Applications, Proceedings of the SPIE, vol. 3719, 1999.

4. A.N. Steinberg and C.L. Bowman, Rethinking the JDL data fusion model, Proceedings of the MSS National Symposium on Sensor and Data Fusion, June 2004.

5. K. Devlin, Logic and Information, Press Syndicate of the University of Cambridge, Cambridge, MA, 1991.

6. E.T. Nozawa, Peircean semeiotic: A new engineering paradigm for automatic and adaptive intelligent systems, Proceedings of the Third International Conference on Information Fusion, vol. 2, pp. WEC4/3–WEC410, July 10–13, 2000.

7. J. Roy, From data fusion to situation analysis, Proceedings of the Fourth International Conference on Information Fusion, vol. II, pp. ThC2-3–ThC2-10, Montreal, 2001.

8. A.-L. Jousselme, P. Maupin and É. Bossé, Uncertainty in a situation analysis perspective, Proceedings of the Sixth International Conference on Information Fusion, vol. 2, pp. 1207–1214, Cairns, Australia, 2003.

9. M.R. Endsley, Toward a theory of situation awareness in dynamic systems, Human Factors, 37(1), 1995.

10. M.R. Endsley, Theoretical underpinnings of situation awareness: A critical review, in M.R. Endsley and D.J. Garland (Eds.), Situation Awareness Analysis and Measurement, Lawrence Erlbaum Associates Inc., Mahwah, NJ, 2000.

11. J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann Publishers, San Mateo, CA, 1988.

12. J.S. Yedidia, W.T. Freeman and Y. Weiss, Understanding belief propagation and its generalizations, in G. Lakemeyer and B. Nebel (Eds.), Exploring Artificial Intelligence in the New Millennium, Morgan Kaufmann Publishers, New York, NY, 2002.

13. J. Barwise and J. Perry, Situations and Attitudes, Bradford Books, MIT Press, Cambridge, MA, 1983.

14. A. Khalil, Computational Learning and Data-Driven Modeling for Water Resource Management and Hydrology, PhD dissertation, Utah State University, 2005.

15. A.N. Steinberg, Unifying data fusion levels 1 and 2, Proceedings of the Seventh International Conference on Information Fusion, Stockholm, 2004.

16. J. Llinas, C. Bowman, G. Rogova, A. Steinberg, E. Waltz and F. White, Revisiting the JDL data fusion model II, in P. Svensson and J. Schubert (Eds.), Proceedings of the Seventh International Conference on Information Fusion, Stockholm, 2004.

17. J.J. Salerno, Where’s level 2/3 fusion: A look back over the past 10 years, Proceedings of the Tenth International Conference on Information Fusion, Quebec, 2007.

18. D.A. Lambert, A unification of sensor and higher-level fusion, Proceedings of the Ninth International Conference on Information Fusion, pp. 1–8, Florence, Italy, 2006.

19. D.A. Lambert, Tradeoffs in the design of higher-level fusion systems, Proceedings of the Tenth International Conference on Information Fusion, Quebec, 2007.

20. R. Mahler, Random set theory for target tracking and identification, in D.L. Hall and J. Llinas (Eds.), Handbook of Multisensor Data Fusion, CRC Press, London, 2001.

21. A.N. Steinberg, Threat assessment technology development, Proceedings of the Fifth International and Interdisciplinary Conference on Modeling and Using Context (CONTEXT’05), pp. 490–500, Paris, July 2005.

22. Steinberg, A.N., Threat Assessment Issues and Methods, Tutorial presented at Sensor Fusion Conference, Marcus-Evans, Washington, DC, December 2006

23. D.A. Lambert, Situations for situation awareness, Proceedings of the Fourth International Conference on International Fusion, Montreal, 2001.

24. Merriam-Webster’s Collegiate Dictionary, Tenth edition, Merriam-Webster, Inc., Springfield, MA, 1993.

25. B. McGuinness and L. Foy, A subjective measure of SA: The Crew Awareness Rating Scale (CARS), Proceedings of the First Human Performance: Situation Awareness and Automation Conference, Savannah, GA, October 2000.

26. J. Salerno, M. Hinman and D. Boulware, Building a framework for situation awareness, Proceedings of the Seventh International Conference on Information Fusion, Stockholm, July 2004.

27. J. Salerno, M. Hinman and D. Boulware, The many faces of situation awareness, Proceedings of the MSS National Symposium on Sensor and Data Fusion, Monterey, CA, June 2004.

28. É. Bossé, J. Roy and S. Wark, Concepts, Models and Tools for Information Fusion, Artech House, Boston, MA, 2007.

29. H. Curry and R. Feys, Combinatory Logic, Volume 1, North-Holland Publishing Company, Amsterdam, 1974.

30. A. Tarski, The semantic conception of truth and the foundations of semantics, Philosophy and Phenomenological Research IV(3), 341–376, 1944 (Symposium on Meaning and Truth).

31. Doctrinal Implications of Operational Net Assessment (ONA), United States Joint Forces Command, Joint Warfare Center Joint Doctrine Series, Pamphlet 4, February 24, 2004.

32. G. Bronner, L’incertitude, Volume 3187 of Que sais-je? Presses Universitaires de France, Paris, 1997.

33. M. Smithson, Ignorance and Uncertainty: Emerging Paradigms, Springer-Verlag, Berlin, 1989.

34. P. Krause and D. Clark, Representing Uncertain Knowledge: An Artificial Intelligence Approach, Kluwer Academic Publishers, New York, NY, 1993.

35. B. Bouchon-Meunier and H.T. Nguyen, Les incertitudes dans les systèmes intelligents, Volume 3110 of Que sais-je? Presses Universitaires de France, Paris, 1996.

36. G.J. Klir and M.J. Wierman, Uncertainty-Based Information: Elements of Generalized Information Theory, Volume 15 of Studies in Fuzziness and Soft Computing, 2nd edition. Physica-Verlag, Heidelberg, New York, NY, 1999.

37. P. Smets, Imperfect information: Imprecision—uncertainty, in A. Motro and P. Smets (Eds.), Uncertainty Management in Information Systems: From Needs to Solutions, Kluwer Academic Publishers, Boston, MA, 1977.

38. L. Wittgenstein, Philosophical Investigations, Blackwell Publishers, Oxford, 2001.

39. K. Perusich and M.D. McNeese, Using fuzzy cognitive maps for knowledge management in a conflict environment, IEEE Transactions on Systems, Man and Cybernetics—Part C, 26(6), 810–821, 2006.

40. P. Gonsalves, G. Rinkus, S. Das and N. Ton, A hybrid artificial intelligence architecture for battlefield information fusion, Proceedings of the Second International Conference on Information Fusion, Sunnyvale, CA, 1999.

41. P.C.G. Costa, Bayesian Semantics for the Semantic Web, Doctoral dissertation, Department of Systems Engineering and Operations Research, George Mason University, Fairfax, VA, 2005.

42. P.C.G. Costa, K.B. Laskey and K.J. Laskey, Probabilistic ontologies for efficient resource sharing in semantic web services, Proceedings of the Second Workshop on Uncertainty Reasoning for the Semantic Web (URSW 2006), held at the Fifth International Semantic Web Conference (ISWC 2006), Athens, GA, 2006.

43. K.B. Laskey, P.C.G. da Costa, E.J. Wright and K.J. Laskey, Probabilistic ontology for net-centric fusion, Proceedings of the Tenth International Conference on Information Fusion, Quebec, 2007.

44. E.G. Little and G.L. Rogova, An ontological analysis of threat and vulnerability, Proceedings of the Ninth International Conference on Information Fusion, pp. 1–8, Florence, Italy, 2006.

45. A.N. Steinberg, An approach to threat assessment, Proceedings of the Eighth International Conference on Information Fusion, vol. 2, p. 8, Philadelphia, 2005.

46. A.N. Steinberg, Predictive modeling of interacting agents, Proceedings of the Tenth International Conference on Information Fusion, Quebec, 2007.

47. D.A. Lambert, Personal communication, August 2007.

48. M. Uschold and M. Grüninger, Ontologies: Principles, methods and applications, Knowledge Engineering Review, 11(2), 93–155, 1996.

49. B. Krieg-Brückner, U. Frese, K. Lüttich, C. Mandel, T. Mossakowski and R.J. Ross, Specification of an ontology for route graphs, in Freksa et al. (Eds.), Spatial Cognition IV, LNAI 3343, Springer-Verlag, Berlin, Heidelberg, 2005.

50. M.R. Genesereth and N.J. Nilsson, Logical Foundations of Artificial Intelligence, Morgan Kaufmann, Los Altos, CA, 1987.

51. M.M. Kokar, Ontologies and Level 2 Fusion: Theory and Application, Tutorial presented at International Conference on Information Fusion, 2004.

52. Plato, Parmenides, First Edition (fragment), Collection of the Great Library of Alexandria, Egypt, 370 BC.

53. K.B. Laskey, MEBN: A Logic for Open-World Probabilistic Reasoning (Research Paper), C41-06-01, http://ite.gmu.edu/klaskey/index.html, 2005.

54. M. Kokar, Choices in ontological languages and implications for inferencing, Presentation at Center for Multisource Information Fusion Workshop III, Critical Issues in Information Fusion, Buffalo, NY, September, 2004.

55. Protégé Ontology Editor and Knowledge Acquisition System, Guideline Modeling Methods and Technologies, Open Clinical Knowledge Management for Medical Care, http://www.openclinical.org/gmm_protege.html.

56. E. Astesiano, M. Bidoit, B. Krieg-Brückner, H. Kirchner, P.D. Mosses, D. Sannella and A. Tarlecki, CASL: The common algebraic specification language, Theoretical Computer Science, 286, 2002.

57. P.D. Mosses (Ed.), CASL Reference Manual, Volume 2960 of Lecture Notes in Computer Science, Springer-Verlag, Berlin, 2004.

58. K. Lüttich, B. Krieg-Brückner and T. Mossakowski, Ontologies for the semantic web in CASL, W3C Recommendation, http://www.w3.org/TR/owl-ref/, 2004.

59. S.C. Shapiro and W.J. Rapaport, The SNePS Family, Computers & Mathematics with Applications, 23(2–5), 243–275, 1992. Reprinted in F. Lehmann (Ed.), Semantic Networks in Artificial Intelligence, pp. 243–275, Pergamon Press, Oxford, 1992.

60. S.C. Shapiro and the SNePS Implementation Group, SNePS 2.6.1 User’s Manual, Department of Computer Science and Engineering, University at Buffalo, The State University of New York, Buffalo, NY, October 6, 2004.

61. IDEF5 Ontology Description Capture Overview, Knowledge Based Systems, Inc., http://www.idef.com/IDEF5.html.

62. IDEF5 Method Report, Information Integration for Concurrent Engineering (IICE), Knowledge Based Systems, Inc., http://www.idef.com/pdf/Idef5.pdf.

63. C.J. Matheus, M. Kokar and K. Baclawski, A core ontology for situation awareness, Proceedings of the Ninth International Conference on Information Fusion, vol. 1, pp. 545–552, Florence, Italy, 2003.

64. E. Little and G. Rogova, Ontology meta-modeling for building a situational picture of catastrophic events, Proceedings of the Eighth International Conference on Information Fusion, Philadelphia, 2005.

65. P.D. Scott and G.L. Rogova, Crisis management in a data fusion synthetic task environment, Proceedings of the Seventh International Conference on Information Fusion, Stockholm, July 2004.

66. R. Nagi, M. Sudit and J. Llinas, An approach for level 2/3 fusion technology development in urban/asymmetric scenarios, Proceedings of the Ninth International Conference on Information Fusion, pp. 1–5, Florence, Italy, 2006.

67. P. Grenon and B. Smith, SNAP and SPAN: Towards dynamic spatial ontology, Spatial Cognition and Computation, 4(1), 69–104, 2004.

68. P. Grenon, Spatiotemporality in basic formal ontology: SNAP and SPAN, upper-level ontology and framework for formalization, IFOMIS Technical Report Series, (http://www.ifomis.unisaarland.de/Research/IFOMISReports/IFOMIS%20Report%2005_2003.pdf), 2003.

69. B. Smith and P. Grenon, The cornucopia of formal-ontological relations, Dialectica 58(3), 279–296, 2004.

70. K.B. Laskey, S. Stanford and B. Stibo, Probabilistic Reasoning for Assessment of Enemy Intentions, Publications #94–25, George Mason University, 1994.

71. C.G. Looney and L.R. Liang, Cognitive situation and threat assessments of ground battlespaces, Information Fusion, 4(4), 297–308, 2003.

72. T.S. Levitt, C.L. Winter, C.J. Turner, R.A. Chestek, G.J. Ettinger and A.M. Sayre, Bayesian inference-based fusion of radar imagery, military forces and tactical terrain models in the image exploitation system/balanced technology initiative, IEEE International Journal on Human-Computer Studies, 42, 667–686, 1995.

73. A.N. Steinberg, J. Llinas, A. Bisantz, C. Stoneking and N. Morizio, Error characterization in human-generated reporting, Proceedings of the MSS National Symposium on Sensor and Data Fusion, McLean, VA, 2007.

74. J. Rasmussen, Skills, rules, and knowledge: Signals, signs, and symbols, and other distractions in human performance models, IEEE Transactions on Systems, Man, and Cybernetics SMC-13(3), 257–266, 1983.

75. C.L. Bowman, The data fusion tree paradigm and its dual, Proceedings of the National Symposium on Sensor Fusion, 1994.

76. A.N. Steinberg, Stimulative intelligence, Proceedings of the MSS National Symposium on Sensor and Data Fusion, McLean, VA, 2006.

77. M.G. Ceruti, A. Ashenfelter, R. Brooks, G. Chen, S. Das, G. Raven, M. Sudit and E. Wright, Pedigree information for enhanced situation and threat assessment, Proceedings of the Ninth International Conference on Information Fusion, pp. 1–8, Florence, Italy, 2006.

78. Steinberg, A.N. and Waltz, E.L., Perceptions on Imagery Fusion, Presented at NASA Data Fusion/Data Mining Workshop, Sunnyvale, CA, 1999

79. J.D. Erdley, Bridging the semantic gap, Proceedings of the MSS National Symposium on Sensor and Data Fusion, McLean, VA, 2007.

80. C.J. Matheus, D. Tribble, M.M. Kokar, M.G. Ceruti and S.C. McGirr, Towards a formal pedigree ontology for level-one sensor fusion, The 10th International Command and Control Research and Technology Symposium (ICCRTS 2005), June 2005.

81. A.N. Steinberg, Open networks: Generalized multi-sensor characterization, Proceedings of the Ninth International Conference on Information Fusion, pp. 1–7, Florence, Italy, 2006.

82. P. Walley, Statistical Reasoning with Imprecise Probabilities, Chapman & Hall, London, 1991.

83. J. Llinas, New challenges for defining information fusion requirements, Plenary Address, Tenth International Conference on Information Fusion, International Society for Information Fusion, Quebec, 2007.

84. G. Rogova and V. Nimier, Reliability in information fusion: literature survey, Proceedings of the Seventh International Conference on Information Fusion, pp. 1158–1165, Stockholm, 2004.

85. E.R. Keydel, Multi-INT registration issues, IDGA Image Fusion Conference, Washington, DC, 1999.

86. B.V. Dasarathy, Information fusion in the context of human-machine interface, Information Fusion, 6(2), 2005.

87. P. Rosenzweig and M. Scardaville, The need to protect civil liberties while combating terrorism: legal principles and the Total Information Awareness program, Legal Memorandum #6, http://www.heritage.org/Research/HomelandSecurity/lm6.cfm, February 6, 2003.

88. Objectivity Platform in THREADS—Northrop Grumman Mission Systems Application Assists U.S. Counter Terrorist and Force Protection Analysts in the Global War on Terrorism, Objectivity, Inc. Application Note, 2007.

89. D. Conte, P. Foggia, C. Sansone and M. Vento, Thirty years of graph matching in pattern recognition, International Journal of Pattern Recognition and Artificial Intelligence, 18(3) 265–298, 2004.

90. D.F. Gillies, Computer Vision Lecture Course, http://www.homes.doc.ic.ac.uk/~dfg/vision/v18.html.

91. A. Hlaoui and W. Shengrui, A new algorithm for inexact graph matching, Proceedings of the Sixteenth International Conference on Pattern Recognition, vol. 4, pp. 180–183, 2002.

92. M. Sudit, A. Stotz and M. Holender, Situational awareness of a coordinated cyber attack, Proceedings of the SPIE Defense & Security Symposium, vol. 5812, pp. 114–129, Orlando, FL, March 2005.

93. M. Sudit, R. Nagi, A. Stoltz and J. Delvecchio, Dynamic hypothesis generation and tracking for facilitated decision making and forensic analysis, Proceedings of the Ninth International Conference on Information Fusion, Florence, Italy, 2006.

94. Sudit, M., CMIF Information Fusion Technologies, CMIF Internal Presentation, 2005.

95. M.C. Stein and C.L. Winter, Recursive Bayesian fusion for force estimation, Proceedings of the Eighth National Symposium on Sensor Fusion, 1995.

96. M.L. Hinman, Some computational approaches for situation assessment and impact assessment, Proceedings of the Fifth International Conference on Information Fusion, vol. 1, pp. 687–693, Annapolis, MD, 2002.

97. M. Hinman and J. Marcinkowski, Final results on enhanced all source fusion, Proceedings of the SPIE, Sensor Fusion: Architectures, Algorithms and Applications IV, vol. 4051, pp. 389–396, 2000.

98. S. Das, Tutorial AM3: An integrated approach to data fusion and decision support, part I: Situation assessment, Proceedings of the Eighth International Conference on Information Fusion, Philadelphia, 2005.

99. F.V. Jensen and F. Jensen, Optimal Junction Trees. Uncertainty in Artificial Intelligence, 1994.

100. A.L. Madsen and F.V. Jensen, LAZY propagation: A junction tree inference algorithm based on lazy evaluation, Journal of Artificial Intelligence, 113(1), 1999.

101. J.S. Yedidia, W.T. Freeman and Y. Weiss, Understanding belief propagation and its generalizations, in G. Lakemeyer and B. Nebel (Eds.), Exploring Artificial Intelligence in the New Millennium, pp. 239–269, Morgan Kaufmann Publishers, 2002.

102. P. Vannoorenberghe, On aggregating belief decision trees, Information Fusion, 5(3), 179–188, 2004.

103. T. Denoeux and M.S. Bjanger, Induction of decision trees from partially classified data using belief functions, Proceedings of SMC2000, Nashville, TN, IEEE, 2000.

104. Z. Elouedi, K. Mellouli and P. Smets, Belief decision trees: Theoretical foundations, International Journal of Approximate Reasoning, 18(2–3), 91–124, 2001.

105. N. Pioch, D. Hunter, C. Fournelle, B. Washburn, K. Moore, K.E. Jones, D. Bostwick, A. Kao, S. Graham, T. Allen and M. Dunn, CADRE: Continuous analysis and discovery from relational evidence, Integration of International Conference on Knowledge Intensive Multi-Agent Systems, pp. 555–561, 2003.

106. A.N. Steinberg and R.B. Washburn, Multi-level fusion for Warbreaker intelligence correlation, Proceedings of the National Symposium on Sensor Fusion, 1995.

107. R.C. Whitehair, A Framework for the Analysis of Sophisticated Control, PhD dissertation, University of Massachusetts CMPSCI Technical Report 95, February 1996.

108. C. Burns, A knowledge based approach to information fusion, Presentation to the Air Force Scientific Advisory Board, 2000.

109. G. Rogova, P. Losiewicz and J. Choi, Connectionist approach to multi-attribute decision making under uncertainty, AFRL-IF-RS-TR-1999-12, 2000.

110. W. Wright, Artificial neural systems (ANS) fusion prototype, AFRL-IF-RS-TR-1998-126, 1998.

111. P. Gonsalves, G. Rinkus, S. Das and N. Ton, A Hybrid Artificial Intelligence Architecture for Battlefield Information Fusion, Proceedings of the Second International Conference on Information Fusion, Sunnyvale, CA, 1999.

112. J.S. Schlabach, C. Hayes and D.E. Goldberg, SHAKA-GA: A genetic algorithm for generating and analyzing battlefield courses of action (white paper), 1997, Cited in Evolutionary Computation, 7(1), 45–68, 1999.

113. G. Jarrad, S. Williams and D. McMichael. A framework for total parsing, Technical Report 03/10, CSIRO Mathematical and Information Sciences, Adelaide, Australia, January 2003.

114. D. McMichael, G. Jarrad, S. Williams and M. Kennett, Modelling, simulation and estimation of situation histories, Proceedings of the Seventh International Conference on Information Fusion, pp. 928–935, Stockholm, Sweden, June 2004.

115. D. McMichael and G. Jarrad, Grammatical methods for situation and threat analysis, Proceedings of the Eighth International Conference on Information Fusion, vol. 1, p. 8, Philadelphia, June 2005.

116. M. Steedman, The Syntactic Process, MIT Press, Cambridge, MA, 2000.

117. D. McMichael and G. Jarrad, Grammatical Methods for Situation and Threat Analysis, Tutorial Presented at the Eighth International Conference on Information Fusion, vol. 1, p. 8, Philadelphia, July 2005.

118. OODBMS vs. ORDMS vs. RDBMS: the pros and cons of different database technologies (white paper), Objectivity, Inc., 2006, http://www.objectivity.com/pages/downloads/whitepaper.asp#.

* The provision for a discriminating agent appears to be otiose. To include it in the definition is similar to defining sound so as to force an answer to the question of whether a falling tree makes a sound when no one is around to hear. We shall allow that there are situations that no one has noticed.

We formally define the terms relation and relationship in Section 18.2.4.1.

* Situation projection is often considered to be within the province of level 3 fusion. We will not agonize inordinately about such boundary disputes in this chapter, which are treated in Chapter 3.

Object assessment is, of course, the label for level 1 fusion in the JDL model. Such assessment can involve inference both from measurement data, p(x|Z), and from situation context p(x|s).

“Situation awareness” does not appear in this diagram. The correspondence with the three SAW levels in Endsley’s model9,10 is roughly as follows: Endsley’s Perception ~ Situation Analysis; Comprehension ~ Situation Recognition and Discovery; Projection ~ Situation Projection.

* Threat (impact) assessment is defined in Ref. 2 as the “process of estimation and prediction of effects on situations of planned or estimated/predicted actions by the participants; to include interactions between action plans of multiple players (e.g., assessing susceptibilities and vulnerabilities to estimated/predicted threat actions given one’s own planned actions).” Definition has been refined in Chapter 3.

We would add “... and of structures comprised of such objects and relationships.”

This seems to be close to Lambert’s position in Section 6 of Ref. 1.

§ The topic of multitarget tracking is, of course, actively being pursued using random set methods (Chapter 16 and Ref. 20). The key distinction is that in situation assessment the dependencies among entity states are the explicit basis for inferring about situational states and, conversely, situational states are used to infer states of component entities and their relationships.

Ceruti et al.21 note that, “[as] technology improves, the interface between Situation and Threat Assessment may become less distinct due to the larger number of possible interactions between these levels.”

* According to the definitions of Section 18.2.4, an event is a type of situation. Further discussion of threat ontology is found in Sections 18.3 and 18.4.

* The predicate order superscripts, R(1), etc., have been suppressed for legibility.

* This is also the pattern for L2 → L2, L2 → L3, L3 → L2, or L3 → L3 Deduction.

Also, L2 → L2, L2 → L3, L3 → L2, or L3 → L3 Induction.

A multitarget state of the sort of interest in situation or threat assessment cannot in general be inferred from a set of single-target states X = {x1, ..., xn}. For example, from P = “Menelaus is healthy, wealthy and wise,” we cannot be expected to infer anything like Q = “Menelaus is married to Helen” (however, in such cases we can sometimes infer Q → ~P). It does appear feasible, however, to restrict our ontology of relationships to those of finite and determinate order; that is, any given relation maps from a space of specific finite dimensionality r:X(n) → Y.

* It can be argued that only some entities—called continuants in the SNePS ontology discussed in Section 18.3.2.2—persist through state changes.

* Tracking targets with type (1) dynamics is clearly a level 1 (i.e., independent-target estimation) fusion problem. Type (2) and (3) dynamics are often encountered, but are generally treated using independent-target trackers; perhaps with a context-dependent selection of motion models, but assuming independence among target tracks. Type (3) cases, at least, suggest the need for trackers that explicitly model multitarget interactions. We may similarly distinguish the following types of measurement models:

  1. Independent measurements. Measurements of each target are not affected by those of any other entity; so multitarget posterior pdfs are simple products of the single target posterior pdfs.

  2. Context-sensitive multitarget measurements. Measurements of one target may be affected by the state of other entities as in cases of additive signatures (e.g., multiple targets in a pixel), or occluded or shadowed targets. Other cases involve induced effects (e.g., bistatic illumination, electromagnetic interference, or disruption of the observing medium).

    The absent category, interacting multiple measurements, is actually a hybrid of the listed categories, in which entities affect one another’s state and thereby affect measurements.

* This list derives from Bossé, Roy and Wark,28 who however describe them as Situation Analysis products.

* The term infon was coined on analogy with “electron” and “photon,” suggesting a discrete “particle”—though not necessarily an elementary, indivisible particle—of information. Infons need not be primitives of situation logic; Devlin (Ref. 5, p. 47) defines an infon to be an equivalence class of a pair 〈R, C〉 of a configuration R and a constraint C.

The provision “1 ≤ inm” in the definition allows for infons with free variables as well as for one-place relations (i.e., attributes).

* In our formulation, such propositional attitudes (as expressed in “x believes that σ” or “x wonders whether σ”) are cognitive relationships r and therefore can appear either as the initial argument of an infon or as one of the succeeding n-tuple of arguments.

The operator “⊨” of situation logic operates on units of information, not units of syntax and is therefore not the same as the Tarski’s truth operator.30 An infonic statement has the form “s ⊨ ∑,” where s is a situation and ∑ is a set of infons (Ref. 5., pp. 62f).

* We distinguish between abstraction and generality. Abstract situations can be more or less general, as can concrete (or fully anchored) situations. Generality is a function of their information content (roughly, the number of independent infons). Consider such abstract situations as conflict, warfare, space warfare, interstellar warfare, the war in Star Wars. These are, once again, to be contrasted with situations that are completely anchored in fact—for example, the Battle of Actium (as it actually occurred)—and hybrids, which are partially anchored in fact, such as the Battle of Actium as depicted by Shakespeare.

A discussion of the diverse concepts relating to uncertainty is given in Ref. 8, including the models of Bronner,32 Smithson,33 Krause and Clark,34 Bouchon-Meunier and Nguyen,35 Klir,36 and Smets.37

* We discuss some major ontology formalisms for STA in Section 18.3.

* In this formulation, like ours of Section 18.1.2, situation assessment involves estimating a current state of affairs (i.e., that most recently observed); impact assessment involves projecting future or past states of affairs.1

* Lambert “deconstructs” the JDL model levels to these three. The extension of the STDF model to JDL levels 0 (signal/feature assessment) and level 4 (system process or performance assessment) is a straightforward exercise; one that we leave for the student.

Lambert does address issues of relevance elsewhere (Section 3 of Ref. 1), so that we are able to reason about subordinate situations.

* The inclusion of the last of these situations emphasizes (a) the admission of brief, localized events as situations and (b) the fuzziness of the boundary conditions for such multifaceted situations as World War II.

Compare this with Little and Rogova’s44 threat ontology which we have adapted in Section 18.3.2.2 and in previous publications.21,45,46 That ontology considers capability, opportunity, and intent to be the major factors in predicting intentional actions. We find it useful to distinguish one’s opportunity to act as a necessary condition for action that can be distinguished from one’s capability to act. Also, we agree with Lambert that awareness is a necessary condition for intentional action, but only as a necessary condition for intent to act (as discussed in Section 18.3.3.2). We have previously confused this point.21,45,46

Lambert does provide for this distinction in his Mephisto framework1 in Sections 3.2 and 4.2 and argues that the formulation of intents in terms of intended effects permits a more economical representation.47

§ Three points regarding this parsing

  1. More precisely, h2, k2 should be bound by existential quantifiers. These appear as relational arguments in infon notation; for example, (∃, x, σ, S, T, p), where σ is an infon and “S, T” indicate all places and times

  2. It seems preferable to parse (Equation 18.14) in terms of hope rather than belief, as in “(Because (Intends, Brutus α, h1, k1, 1), (Believes, Brutus, (Cause, α, β, h2′, k2, p2), p1))”

  3. An alternative ontology might parse “intends” in terms of a predicate “IntendsBecause”; so that Equation 18.14 is rendered “(IntendsBecause, Brutus, α, (Hopes, Brutus, (Cause, α, β, h2′, k2, p2), p1), h1, k1, 1)”. The fact that we have such choices illustrates the slipperiness of ontology engineering.

* The preceding is a list of representative expected relationships; expected because of assumed models of semantics, nature, society, or psychology. There are, of course, “accidental” or contingent physical or societal relationships: this cat has no tail, the moon is in Aquarius, George is older than Bill, etc.

* This is the point of the situation logic construct discussed in footnote * in Section 18.2.4.1.

* IDEF5 does lack some of the specialized representations of IDEF1/1X, so that it would be a cumbersome tool for such purposes as designing a relational database.62

* Note that opportunity is defined only in spatio-temporal terms. In previous writings, we distinguished capability from opportunity as factors that are, respectively, internal to and external to the agent.21,45,46 So doing, however, allows for such confusing applications as “a nearby ladder provides an opportunity to scale a wall whereas the possession of a ladder provides a capability to scale a wall.” We do have some residual uneasiness about defining opportunity solely in spatio-temporal terms. It would seem that a target’s vulnerability is a matter of opportunity; as in “x can enter the fort because the gate was left unlocked (or because its walls are paper-thin).” Therefore, we prefer to characterize opportunity as involving the relationships between an agent and the situation elements to be acted upon.

* It would be naïve to assume that the concern in threat assessment is only with attacks per se. Rather, as discussed below, we are concerned with any event or change in the world situation that changes the Bayesian cost to us. An adversary’s withdrawal, resupply, or leadership change is a concern in threat assessment to the extent that it affects the adversary’s capability or opportunity or intent to conduct detrimental activity.

* We would consider functions (7) and (8)—like McGuinness and Foy’s “resolution”25—to be process assessment/refinement functions that fall outside of threat assessment per se.

* It is well recognized that human perception, comprehension, and projection (inference) are often driven by expectations and desires; that is, by prior plans and inferences concerning their expected outcomes. These factors are modeled as feedback in the measurement/inference/planning/control loop (an action component A in Figure 18.9).

* Or, equivalently, his measurement, inference and planning processes, as per the MIPC model of Section 18.4.2.

* In Ref. 46, we broaden the notion of open communications network by expanding the notion of communications to include any interaction among agents: one may communicate with another either by an e-mail message, a dozen roses, or a punch in the nose. This expanded notion allows us to characterize the interactions of concern to threat assessment in Section 18.4.1.
