21

 

 

Requirements Derivation for Data Fusion Systems

 

Ed Waltz and David L. Hall

CONTENTS

21.1   Introduction

21.2   Requirements Analysis Process

21.3   Engineering Flow-Down Approach

21.4   Enterprise Architecture Approach

21.4.1   The Three Views of the Enterprise Architecture

21.5   Comparison of Approaches

21.6   Requirements for Data Fusion Services

References

 

 

21.1   Introduction

The design of practical systems requires the translation of data fusion theoretic principles, practical constraints, and operational requirements into a physical, functional, and operational architecture that can be implemented, operated, and maintained. This translation of principles to practice demands a discipline that enables the system engineer or architect to perform the following basic functions:

  • Define user requirements in terms of functionality (qualitative description) and performance (quantitative description).

  • Synthesize alternative design models and analyze/compare the alternatives in terms of requirements and risk.

  • Select the optimum design against defined optimization criteria.

  • Allocate requirements to functional system subelements for selected design candidates.

  • Monitor the as-designed system to measure projected technical performance, risk, and other factors (e.g., projected life cycle cost) throughout the design and test cycle.

  • Verify performance of the implemented system against top- and intermediate-level requirements to ensure that requirements are met, and to validate the system performance model.
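These functions can be made concrete with a small bookkeeping structure. The following sketch is a minimal illustration in Python; the requirement identifiers, field names, and subsystem names are assumptions made for the example, not prescribed by any standard or by the text. In practice, such tracking is done with dedicated requirements-management tools; the point here is only the define/allocate/monitor/verify cycle.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """One derived requirement: functionality (qualitative) plus performance (quantitative)."""
    req_id: str
    function: str            # qualitative description of what must be done
    performance: str         # quantitative, testable statement
    verification: str        # e.g., "test", "inspection", "simulation", "analysis"
    allocated_to: list = field(default_factory=list)  # subsystems/subelements
    verified: bool = False

class RequirementsRegistry:
    """Tracks definition, allocation, and verification status of requirements."""
    def __init__(self):
        self.reqs = {}

    def define(self, req: Requirement):
        self.reqs[req.req_id] = req

    def allocate(self, req_id: str, subelement: str):
        self.reqs[req_id].allocated_to.append(subelement)

    def unallocated(self):
        # Monitoring: requirements not yet flowed down to any subelement
        return [r.req_id for r in self.reqs.values() if not r.allocated_to]

    def unverified(self):
        # Verification: requirements still lacking a passed test or inspection
        return [r.req_id for r in self.reqs.values() if not r.verified]

# Hypothetical usage
reg = RequirementsRegistry()
reg.define(Requirement("TRK-010", "Maintain target tracks",
                       "95% track continuity over 60 s", "simulation"))
reg.allocate("TRK-010", "processing subsystem")
print(reg.unallocated(), reg.unverified())   # [] ['TRK-010']
```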

The discipline of system engineering, pioneered by the aerospace community to implement complex systems over the past four decades, has been successfully used to implement both research and development systems and large-scale data fusion systems. This approach is characterized by formal methods of requirement definition at a high level of abstraction, followed by decomposition to custom components that can then be implemented. More recently, as information technology has matured, the discipline of enterprise architecture design has also developed formal methods for designing large-scale enterprises using commercially available as well as custom software and hardware components. Both of these disciplines contribute sound methodologies for implementing data fusion systems.

This chapter introduces each approach before comparing the two to illustrate their complementary nature and the utility of each. The approaches are not mutually exclusive, and methods from both may be applied to translate data fusion principles to practice.

 

 

21.2   Requirements Analysis Process

Derivation of requirements for a multisensor data fusion system must begin with the recognition of a fundamental principle: there is no such thing as a data fusion system. Instead, there are applications to which data fusion techniques can be applied. This implies that generating requirements for a generic data fusion system is not particularly useful (although one can identify some basic component functions). Rather, the requirements are driven by the particular application or mission to which data fusion is applied. This concept is illustrated in Figures 21.1a and 21.1b.1

Figure 21.1a indicates that the requirements analysis process begins with an understanding of the overall mission requirements. What decisions or inferences are sought by the overall system? What decisions or inferences do the human users want to make? The analysis and documentation of these needs is illustrated at the top of the figure. This analysis is supported by an understanding of the anticipated targets, the types of threats anticipated, the environment in which the observations and decisions are to be made, and the operational doctrine. For Department of Defense (DoD) applications, such as automated target recognition, this would entail specifying the types of targets to be identified (e.g., army tanks and launch vehicles) and other types of entities that could be confused with targets (e.g., automobiles and school buses). The analysis must specify the environment in which the observations are made, the conditions of the observation process, and sample missions or engagement scenarios. This initial analysis should clearly specify the military or mission needs and how these would benefit from a data fusion system.

From this initial analysis, system functions can be identified and performance requirements associated with each function. The Joint Directors of Laboratories (JDL) data fusion process model can assist with this step. For example, the functions related to communications/message processing could be specified. What are the external interfaces to the system? What are the data rates from each communications link or sensor? What are the system transactions to be performed?2 These types of questions assist in the formulation of the functional performance requirements. For each requirement, one must also specify how the requirement can be verified or tested (e.g., via simulations, inspection, and effectiveness analysis). A requirement is vague (and not really a requirement) unless it can be verified via a test or inspection.

Ideally, the system designer has the luxury of analyzing and selecting a sensor suite. This is shown in the middle of the diagram that appears in Figure 21.1a. The designer performs a survey of current sensor technology and analyzes the observational phenomenology (i.e., how the inferences to be made by the fusion system could be mapped to observable phenomena such as infrared spectra and radio frequency measurements). The result of this process is a set of sensor performance measures that link sensors to functional requirements, and an understanding of how the sensors could perform under anticipated conditions. In many cases, of course, the sensors have already been selected (e.g., when designing a fusion system for an existing platform such as a tactical aircraft). Even in such cases, the designer should perform the sensor analysis to understand the operation and contributions of each sensor in the sensor suite.
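As an illustration of the phenomenology analysis, the sketch below maps desired inferences to observable phenomena and then to candidate sensor types, yielding a rough link between sensors and functional requirements. The specific phenomena and sensor names are illustrative assumptions; a real survey would be driven by the mission, target set, and observation conditions identified above.

```python
# Hypothetical mapping of desired inferences -> observable phenomena -> candidate sensors.
INFERENCE_TO_PHENOMENA = {
    "vehicle detection": ["infrared spectra", "radar cross-section"],
    "emitter identification": ["radio frequency measurements"],
}

PHENOMENON_TO_SENSORS = {
    "infrared spectra": ["FLIR"],
    "radar cross-section": ["SAR", "MTI radar"],
    "radio frequency measurements": ["ESM receiver"],
}

def candidate_sensors(inference: str) -> set:
    """Return the sensor types whose phenomenology supports a given inference."""
    sensors = set()
    for phenomenon in INFERENCE_TO_PHENOMENA.get(inference, []):
        sensors.update(PHENOMENON_TO_SENSORS.get(phenomenon, []))
    return sensors

print(candidate_sensors("vehicle detection"))  # e.g., {'FLIR', 'SAR', 'MTI radar'} (set order varies)
```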


FIGURE 21.1
Requirements flow-down process for data fusion.

The flow-down process continues as shown in Figure 21.1b. The subsystem design/analysis process is shown within the dashed frame. At this step, the designer explicitly begins to allocate requirements and functions to subsystems such as the sensor subsystem, the processing subsystem, and the communications subsystem. These must be considered together because the design of each subsystem affects the design of the others. The processing subsystem design entails the further selection of algorithms, the specific elements of the database required, and the overall fusion architecture (i.e., the specification of where in the process flow the fusion actually occurs).
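Because the subsystem designs constrain one another, even a coarse consistency check is useful during this allocation step. The sketch below uses assumed, illustrative data rates and capacities (placeholders, not values from the text) to flag a communications link that cannot carry the allocated sensor traffic.

```python
# Illustrative budgets only; real values come from sensor and link specifications.
sensor_data_rates_mbps = {"EO camera": 40.0, "SAR": 25.0, "ESM": 2.0}
comm_link_capacity_mbps = 60.0
processing_throughput_mbps = 80.0

total_sensor_rate = sum(sensor_data_rates_mbps.values())

if total_sensor_rate > comm_link_capacity_mbps:
    print(f"Link overloaded: {total_sensor_rate:.1f} Mb/s offered vs "
          f"{comm_link_capacity_mbps:.1f} Mb/s capacity; reallocate "
          "(e.g., on-board preprocessing or a higher-capacity link).")
if total_sensor_rate > processing_throughput_mbps:
    print("Processing subsystem cannot keep up with the aggregate sensor rate.")
```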

This requirements analysis process results in well-defined and documented requirements for the sensors, communications, processing, algorithms, and displays, along with the associated test and evaluation requirements. If performed in a systematic and careful manner, this analysis provides a basis for an implemented fusion system that supports the application and mission.

 

 

21.3   Engineering Flow-Down Approach

Formal systems engineering methods are articulated by the U.S. DoD in numerous classical military standards3 and defense systems engineering guides. A standard approach for development of complex hardware and software systems is the waterfall approach shown in Figure 21.2.

This approach uses a sequence of design, implementation, and test and evaluation phases or steps controlled by formal reviews and delivery of documentation. The waterfall approach begins at the left side of Figure 21.2 with system definition, subsystem design, preliminary design, and detailed design. In this approach, the high-level system requirements are defined and partitioned into a hierarchy of increasingly smaller subsystems and components. For software development, the goal is to partition the requirements to a level of detail so that they map to individual software modules comprising no more than about 100 executable lines of code. Formal reviews, such as a requirement review, preliminary design review (PDR), and critical design review (CDR), are held with the designers, users, and sponsors to obtain agreement at each step in the process. A baseline control process is used, so that requirements and design details developed at one phase cannot be changed in a subsequent phase without a formal change/modification process.
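A minimal sketch of the flow-down bookkeeping appears below. The requirement tree, module names, and size estimates are invented for illustration; the 100-line figure simply mirrors the partitioning target mentioned above, and a real program would manage this in a requirements or configuration-control tool under baseline control.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """A node in the requirements/design decomposition tree."""
    name: str
    est_sloc: int = 0                     # estimated executable lines of code (leaves only)
    children: List["Node"] = field(default_factory=list)

def oversized_leaves(node: Node, limit: int = 100) -> List[str]:
    """Return leaf modules whose estimated size exceeds the partitioning target."""
    if not node.children:
        return [node.name] if node.est_sloc > limit else []
    flagged = []
    for child in node.children:
        flagged.extend(oversized_leaves(child, limit))
    return flagged

# Hypothetical decomposition of a track-processing requirement
system = Node("Track processing", children=[
    Node("Data association", children=[Node("Gating", est_sloc=80),
                                       Node("Assignment", est_sloc=150)]),
    Node("State estimation", est_sloc=90),
])
print(oversized_leaves(system))  # ['Assignment'] -> decompose further before CDR
```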


FIGURE 21.2
System engineering methodology.

After the low-level software and hardware components are defined, the implementation begins. (This is shown in the middle of Figure 21.2.) Small hardware and software units are built and aggregated into larger components and subsystems. The system development proceeds to build small units, integrate these into larger entities, and test and evaluate the evolving subsystems. Testing and integration continue until a complete system is built and tested (as shown on the right side of Figure 21.2). Often a series of builds and tests is planned and executed.

Over the past 40 years, numerous successful systems have been built in this manner. Advantages of this approach include

  • The ability to systematically build large systems by decomposing them into small, manageable, testable units

  • The ability to work with multiple designers, builders, vendors, users, and sponsoring organizations

  • The capability to perform the development over an extended period of time with resilience to changes in development personnel

  • The ability to define and manage risks by identifying the source of potential problems

  • The ability to formally control and monitor the system development process with well-documented standards and procedures

This systems engineering approach is certainly not suitable for all system developments. The approach is most applicable to large-scale hardware and software systems. Basic assumptions include the following:

  • The system to be developed is of sufficient size and complexity that it is not feasible to develop it using less formal methods.

  • The requirements are relatively stable.

  • The requirements can be articulated via formal documentation.

  • The underlying technology for the system development changes relatively slowly compared to the length of the system development effort.

  • Large teams of people are required for the development effort.

  • Much of the system must be built from scratch rather than purchased commercially.

Over the past 40 years, the formalism of systems engineering has been very useful for developing large-scale DoD systems. However, recent advances in information technology have motivated the use of another general approach.

 

 

21.4   Enterprise Architecture Approach

The rapid growth in information technology has enabled the construction of complex computing networks that integrate large teams of humans and computers to accept, process, and analyze volumes of data in an environment referred to as the enterprise. The development of enterprise architectures requires the consideration of functional operations and the allocation of these functions in a network of human (cognitive), hardware (physical), or software components.

The enterprise includes the collection of people, knowledge (tacit and explicit), and information processes that deliver critical knowledge (often called intelligence) to analysts and decision-makers, enabling them to make accurate, timely, and wise decisions. This definition describes the enterprise as a process that is devoted to achieving an objective for its stakeholders and users. The enterprise process includes the production, buying, selling, exchange, and promotion of an item, substance, service, or system. The definition is similar to that adopted by DaimlerChrysler’s extended virtual enterprise, which encompasses its suppliers:

A DaimlerChrysler coordinated, goal-driven process that unifies and extends the business relationships of suppliers and supplier tiers to reduce cycle time, minimize systems cost, and achieve perfect quality.4

This all-encompassing definition brings the challenge of describing the full enterprise, its operations, and its component parts. Zachman has articulated many perspective views of an enterprise information architecture and has developed a comprehensive framework of descriptions to thoroughly describe an entire enterprise.5,6 The following section describes a subset of architecture views that can represent the functions in most data fusion enterprises.

21.4.1   The Three Views of the Enterprise Architecture

The enterprise architecture is described in three views (as shown in Figure 21.3), each described by a different set of products. These three interrelated perspectives or architecture views are outlined by the DoD in their description of the Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) framework.7 They include the following:


FIGURE 21.3
Three architecture views are described in a variety of products.

  1. Operational architecture (OA) is a description (often graphical) of the operational elements, business processes, assigned tasks, workflows, and information flows required to accomplish or support the C4ISR function. It defines the type of information, the frequency of exchange, and tasks supported by these information exchanges. This view uniquely describes the human role in the enterprise and the interface of human activities to automated (machine) processes.

  2. Systems architecture (SA) is a description, including graphics, of the systems and interconnections providing for or supporting war-fighting functions. The SA defines the physical connection, location, and identification of the key nodes, circuits, networks, and war-fighting platforms, and it specifies system and component performance parameters. It is constructed to satisfy OA requirements per standards defined in the technical architecture. The SA shows how multiple systems within a subject area link and interoperate and may describe the internal construction or operations of particular systems within the architecture.

  3. Technical architecture (TA) is a minimal set of rules governing the arrangement, interaction, and interdependence of the parts or elements whose purpose is to ensure that a conformant system satisfies a specified set of requirements. TA identifies the services, interfaces, standards, and their relationships. It provides the technical guidelines for implementation of systems upon which engineering specifications are based, common building blocks are built, and product lines are developed.
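To make the relationships among the three views concrete, the following sketch represents one OA information exchange, the SA link that realizes it, and the TA standard to which the link conforms. The data model and names are simplifying assumptions for illustration, not the formal C4ISR architecture products themselves.

```python
from dataclasses import dataclass

@dataclass
class InformationExchange:   # Operational architecture (OA) element
    producer: str
    consumer: str
    info_type: str
    frequency_hz: float

@dataclass
class TechnicalStandard:     # Technical architecture (TA) element
    name: str

@dataclass
class SystemLink:            # Systems architecture (SA) element
    node_a: str
    node_b: str
    satisfies: InformationExchange       # SA is constructed to satisfy OA requirements...
    conforms_to: TechnicalStandard       # ...per standards defined in the TA

exchange = InformationExchange("Sensor analyst", "Fusion cell", "track reports", 1.0)
link = SystemLink("Sensor gateway", "Fusion server", exchange,
                  TechnicalStandard("XML track schema"))
print(link.satisfies.info_type, "via", link.conforms_to.name)
```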

The primary products that describe the three architecture views (Figure 21.3) include the following:

  1. Context diagram. The intelligence community context that relates stakeholders (owners, users, and producers)

  2. Scenarios. Selected descriptions of problems representing the wide range of situations expected to be confronted and solved by the enterprise

  3. Process hierarchy. Tree diagrams that relate the intelligence community business processes and describe the functional processes that are implemented as basic services

  4. Activity diagrams. Sequential relationships between business processes that are described in activity sequence diagrams

  5. Domain operation. The structure of collaborative domains of human virtual teams (user community)

  6. Service descriptions. Define core and special software services required for the enterprise—most commercial and some custom

  7. n-Tier structure diagram. The n-tier structure of the information system architecture is provided at a top level (e.g., two-tier systems are well-known client-server tiers; three-tier systems are partitioned into data warehouse, business logic, and presentation layers)

  8. Technical standards. Critical technical standards that have particular importance to data fusion business processes, such as data format standards and data service standards (e.g., SQL and XML)

  9. Information technology roadmaps. Projected technology needs and drivers that influence system growth and adoption of emerging technologies are critical components of an enterprise; they recognize the highly dynamic nature of both the enterprise and information technology as a whole.
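As a small example of products 6 and 7 above, the sketch below records a few service descriptions for a three-tier fusion enterprise and checks that each is assigned to a valid tier. The service names, their tier assignments, and the commercial/custom split are purely illustrative assumptions.

```python
# Hypothetical service catalog entries for a three-tier fusion enterprise.
TIERS = ("data warehouse", "business logic", "presentation")

services = [
    {"name": "track_correlation", "kind": "core", "source": "custom", "tier": "business logic"},
    {"name": "geospatial_query", "kind": "core", "source": "commercial", "tier": "data warehouse"},
    {"name": "common_operating_picture", "kind": "special", "source": "commercial", "tier": "presentation"},
]

for svc in services:
    assert svc["tier"] in TIERS, f"unknown tier for {svc['name']}"

print({tier: [s["name"] for s in services if s["tier"] == tier] for tier in TIERS})
```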

 

 

21.5   Comparison of Approaches

The two design methods are complementary in nature and both provide helpful approaches for decomposing problems into component parts and developing data fusion solutions. Comparison of the major distinguishing characteristics of the approaches (Table 21.1) illustrates the strengths of each approach for data fusion system implementation:

  • Perspective. System engineering seeks specific (often custom) solutions to meet all specific functional requirements; system architecting begins with components and seeks to perform the set of use cases (accepting flexibility in the requirements) with the optimum use of components (minimizing custom-designed components).

TABLE 21.1
Comparison of System-Level Design Approaches


  • Starting assumptions. System engineering assumes that top-level system requirements exist and are quantifiable. The requirements are specific and can be documented along with performance specifications. System engineering emphasizes functional models. By contrast, system architecting assumes that functional components exist and that the requirements are general and user oriented. The emphasis is on use-case models.

  • Methodology. The methodology of system engineering involves structured problem decomposition and requirements flow down (as described in Section 21.2). The design and implementation proceed in accordance with a waterfall approach. System architecting involves use-case modeling and data and functional modeling. Multiple functional perspectives may be adopted, and the focus is on integration of standard components.

  • Risk analysis. Systems engineering addresses risks in system implementation by partitioning the risks to subsystems or components. Risks are identified and addressed by breaking the risks into manageable smaller units. Alternative approaches are identified to address the risks. In the system architecting approach, risk is viewed in terms of operational utility over a life cycle. Alternative components and architectures address risks.

  • Design variables. System engineering assumes that the system requirements are fixed but that development cost and schedule may be varied to meet the requirements. By contrast, system architecting assumes that the cost is fixed and the requirements may be varied or traded off to meet cost constraints.

  • Application. The system engineering approach provides a formal means of deriving, tracking, and allocating requirements to permit detailed performance analysis and legal contract administration. This is often applied on one-of-a-kind systems, critical systems, or unique applications. The architectural approach is appropriate for the broader class of systems, where many general approaches can meet the requirements (e.g., many software products may provide candidate solutions).

The two basic approaches described here are complementary. Both hardware and software developments will tend to evolve toward a hybrid of systems engineering and system architecting. The rapid evolution of information technology and the appearance of numerous commercial-off-the-shelf tools provide the basis for the use of methods such as system architecting. New data fusion systems will likely involve combinations of traditional systems engineering and system architecting approaches, which will benefit both the implementation and user communities.

 

 

21.6   Requirements for Data Fusion Services

Defense systems have adopted more distributed, network-centric architectures that loosely couple sensors, sources, data, processes, and presentation (displays); these architectures impose new requirements on data fusion processes so that they can be used as a general application or service by many network users. Network-centric approaches employ a service-oriented architecture (SOA) framework, in which services are published, discovered, and dynamically applied as part of an adaptive information ecosystem. These network-centric enterprise services (NCES) are aware of other services, are adaptive to the threat environment, and are reusable.

In such a framework, a data fusion service is one of many published resources for all participants in the network (humans and other services) to use, and the fusion service is required to meet the following general capabilities so that all participants can access it in a standardized way:

  • The data fusion functionality must be designed to be as widely accessible as possible; to be a generally available fusion service, it should minimize restrictions on data types, conditions, and pedigrees of source data.

  • The SOA requires the tagging of all data (e.g., intelligence, nonintelligence, raw, and processed) with metadata to enable discovery of data. The data fusion service can discover needed data (e.g., sensor data, geospatial or map data, calibration data, etc.), and display services can readily locate the fused products posted by the fusion service.

  • The fusion service must advertise its capabilities within standard service registries for access by other services, expressing the service interfaces using standard metadata descriptions.

  • The fusion service must communicate with other services using standard protocols.

  • The fusion service must post or publish data to shared spaces to provide access to all users, except when limited by security, policy, or regulations.

These general provisions allow a fusion service to perform the many-to-many exchanges typical of a network environment, rather than the more limited interoperability provided by traditional point-to-point interfaces.
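A minimal publish/discover sketch of these provisions is shown below. The registry, metadata fields, and function names are assumptions for illustration only; an operational SOA would rely on standard service registries, metadata schemas, and protocols rather than these in-memory stand-ins.

```python
# Minimal in-memory stand-ins for a service registry and a shared data space.
registry = {}       # service name -> advertised interface description (metadata)
shared_space = []   # posted products, discoverable by other services via metadata tags

def advertise(name: str, interface: dict):
    """A fusion service advertises its capabilities in the service registry."""
    registry[name] = interface

def publish(product: dict):
    """Post a fused product to the shared space (security/policy filtering omitted here)."""
    shared_space.append(product)

def discover(tag: str):
    """Other services discover posted data/products by metadata tag."""
    return [item for item in shared_space if tag in item["tags"]]

advertise("fusion_service", {"inputs": ["sensor tracks", "HUMINT reports"],
                             "outputs": ["fused tracks"], "protocol": "SOAP/XML"})
publish({"tags": ["fused track", "surveillance area 1"], "payload": "..."})
print(discover("fused track"))
```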

The U.S. DoD has published a detailed checklist of requirements that guide program managers and developers to meet these service requirements.8 The sharing of data among sensor, fusion, and other services is also guided by policies and instructions that assure reliable exchange and use of data.9 The SOA approach imposes stringent simultaneous quality of service (QoS) demands that also require data fusion services to be real time, scalable (to service many users simultaneously, with assured response and accuracy), and secure. Legg has enumerated the key factors that must be considered to specify a distributed multisensor fusion surveillance system, including sensor data processing and distribution, tracking, sensor control, computing resources, and performance to operate in an SOA network.10
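These QoS demands can be stated as an explicit, testable specification and checked against measured behavior, in the spirit of the verifiable requirements discussed in Section 21.2. The thresholds and measurements below are placeholders, not values drawn from the text or from Legg.

```python
# Placeholder QoS specification for a fusion service; thresholds are illustrative only.
qos_spec = {"max_latency_s": 1.0, "min_concurrent_users": 100, "encryption_required": True}
measured = {"latency_s": 0.7, "concurrent_users": 250, "encrypted": True}

violations = []
if measured["latency_s"] > qos_spec["max_latency_s"]:
    violations.append("latency (real-time response)")
if measured["concurrent_users"] < qos_spec["min_concurrent_users"]:
    violations.append("scalability")
if qos_spec["encryption_required"] and not measured["encrypted"]:
    violations.append("security")
print("QoS violations:", violations or "none")
```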

Network-centric distributed computing environments leverage open standards and open architectures to dynamically interconnect services in response to operational needs. Consider, for example, the following operation of a hypothetical tactical network that employs fusion services to illustrate a network-centric concept of operations.

A tactical military unit is ambushed and comes under intense fire in a desolate tribal area; the commander requests immediate targeting and available firepower support information via a CRITICAL_SUPPORT service.

  1. The CRITICAL_SUPPORT service issues a network search for all available sensor services and information source services on the ambush force:

    a. The area search discovers (calls) a Tactical Fusion (TACFUSION) service that correlates area sensor data over the past 12 h, filtering for unusual movements.

    b. The TACFUSION service also filters recent Situational Reports (SITREPS) and Human Intelligence (HUMINT) reports (tagged and available in repositories) about neighboring tribal areas for similar ambush patterns and reports of recent movements.

    c. The TACFUSION service discovers (calls) a SENSOR_SEARCH service to identify immediately available sensors that can provide surveillance of the ambush forces, tracking, and targeting. The SENSOR_SEARCH service reports back that a Predator unmanned air vehicle returning to base is within 5 min time over target (if redirected) and has sufficient fuel to provide 25 min of Electro-Optical/Synthetic Aperture Radar (EO/SAR) coverage of the area and targeting. Unfortunately, the aircraft has expended its weapons and cannot support direct attack. The SENSOR_SEARCH service also identifies another national asset that may provide targeting-quality information, but it will not be available for almost an hour.

    d. The TACFUSION service also searches for all Blue Force tracking data published by the BLUE_TRACK service to identify potential supporting sensors within range of the ambush force; none are found.

    e. The TACFUSION service also checks for unattended ground sensors in the area and identifies a small emplaced sensor net within range of the ambush force capable of detecting and tracking vehicles used by the force.

    f. The TACFUSION service reports to the CRITICAL_SUPPORT service its need for the Predator sensors and follow-on national asset sensor data; it also begins to publish a stream of emerging data on the ambush force using prior data and the ground sensors, and awaits the Predator data to become available.

  2. The CRITICAL_SUPPORT service also requests a search for available supporting fire control and identifies two distant tactical aircraft (25 min out) with direct attack munitions, as well as an artillery unit that is marginally in range but capable of supporting area fires.

  3. The CRITICAL_SUPPORT service requests use of the Predator sensors (as another available net service) from the TACTICAL_C2 service, and also requests available fire control support from the TACTICAL_C2 service.

  4. The TACTICAL_C2 service then issues a request to the FIRE_FUSION service to develop targeting solutions for immediate suppression by area fires and, when the Predator data come online, to develop coarse target tracking for handoff to the tactical aircraft when they arrive on station with direct attack munitions.
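The call pattern in this scenario can be summarized as a simple orchestration sketch. The service names follow the narrative above, but the function signatures, parameters, and return values are invented for illustration; a real implementation would use the standard service registries, interfaces, and protocols described earlier in this section.

```python
# Illustrative orchestration of the services named in the scenario; all interfaces are invented,
# only the call pattern follows the narrative.

def tacfusion_search(area, hours=12):
    """Correlate area sensor data, SITREPS, and HUMINT over the past `hours`; return a picture."""
    return {"area": area, "tracks": [], "reports": []}

def sensor_search(area):
    """Identify immediately available sensors for surveillance and targeting of the area."""
    return [{"asset": "Predator EO/SAR", "time_on_target_min": 5, "coverage_min": 25,
             "weapons_available": False}]

def blue_track_query(area):
    """Query Blue Force tracking data for supporting sensors in range."""
    return []   # none found, per the scenario

def critical_support_request(area):
    picture = tacfusion_search(area)
    picture["supporting_sensors"] = sensor_search(area) + blue_track_query(area)
    # Next steps (not modeled): TACTICAL_C2 tasks the Predator sensors and available fires,
    # and FIRE_FUSION develops targeting solutions from the published fusion stream.
    return picture

print(critical_support_request("ambush site"))
```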

The DoD has developed a body of architectural and engineering knowledge to provide guidance in the design, implementation, maintenance, and use of such net-centric solutions for military applications, called the Net-Centric Enterprise Solutions for Interoperability (NESI).11 NESI provides specific technical guidance to ensure that service developers comply with the Net-Centric Checklist and other data sharing directives.

 

 

References

1. Waltz, E. and Llinas, J., Multisensor Data Fusion, Artech House, Norwood, MA, 1990.

2. Finkel, D., Hall, D.L., and Beneke, J., Computer performance evaluation: the use of a time-line queuing method throughout a project life cycle, Model. Simul., 13, 729–734, 1982.

3. Classical DoD military standards for systems engineering include MIL-STD-499B, Draft Military Standard: Systems Engineering, HQ/AFSC/EN, Department of Defense, “For Coordination Review” draft, May 6, 1992, and NSA/CSS Software Product Standards Manual, NSAM 81-3, National Security Agency. More recent documents that organize the principles of systems engineering include: EIA/IS 632, Interim Standard: Systems Engineering, Electronic Industries Alliance, December 1994; Systems Engineering Capability Assessment Model SECAM (version 1.50), INCOSE, June 1996; and the ISO standard for system life cycle processes, ISO 15288.

4. DaimlerChrysler Extended Enterprise, see http://supplier.chrysler.com/purchasing/extent/index.html

5. Zachman, J.A., A framework for information systems architecture, IBM Syst. J., 26(3), 276–292, 1987.

6. Sowa, J.F. and Zachman, J.A., Extending and formalizing the framework for information systems architecture, IBM Syst. J., 31(3), 590–616, 1992.

7. Joint Technical Architecture, Version 2.0, Department of Defense, October 31, 1997 (see paragraph 1.1.5 for definitions of architecture and the three architecture views).

8. Net-Centric Checklist, Office of the Assistant Secretary of Defense for Networks and Information Integration/Department of Defense Chief Information Officer, Version 2.1.3, May 12, 2004.

9. See DoD Net-Centric Data Strategy, May 9, 2003 and Instruction DoD Directive 8320.2 Data Sharing in a Net-Centric DoD, December 2, 2004.

10. Legg, J. A., Distributed Multisensor Fusion System Specification and Evaluation Issues, Australian Defence Science and Technology Organisation (DSTO), DSTO–TN–0663, October 2005.

11. Net-Centric Enterprise Solutions for Interoperability (NESI), Version 1.3, June 16, 2006; this six-part document is a collaborative activity of the USN PEO for C4I and Space, the USAF Electronic Systems Center, and the Defense Information Systems Agency.
