7
Cyber Computer‐Assisted Exercise (CAX) and Situational Awareness (SA) via Cyber M&S

While automated COA responses are a goal for “simple” cyber operations, strategic planning and coordinated response still rely on human decision making. Detecting attacks, reacting to them, and reconstituting cyber systems are therefore a function of human skill, and these skills can be improved through training. Instructional system designers and subject matter experts translate such training requirements into readiness competencies using Bloom’s Taxonomy (Figure 7.1).

Figure 7.1 Bloom Taxonomy for learning domains: an inverted pyramid with Analyze, Evaluate, and Create sharing the top layer, and Apply, Understand, and Remember on the layers below.

Developed by Benjamin Bloom (1994), Figure 7.1’s Taxonomy divides educational objectives into three “domains”: cognitive, affective, and psychomotor (sometimes loosely described as “knowing/head,” “feeling/heart,” and “doing/hands,” respectively). Within the domains, learning at the higher levels is dependent on having attained the prerequisite knowledge and skills at lower levels. This parallels our training pipeline approach where “doing/hands” objectives are developed in foundational training; “feeling/heart” objectives are developed in sub‐element validation and certification activities; and readiness training is achieved through “knowing/head” objectives.

One approach to “livening up” training is to add game‐theoretic processes, with moves and effects inspired by cyber conflict but without modeling the underlying mechanics of cyberattack and defense (Manshaei et al. 2013; Cho and Gao 2016). In addition, accurate predictions require good models not just of the physical and control systems but also of human decision making; one approach is to explicitly model the decisions of a cyber–physical intruder attacking the system and of the operator defending it, an approach that has been shown to be useful for design (Backhaus et al. 2013).
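
As a rough illustration of the game‐theoretic flavor described above, the sketch below sets up a one‐shot attacker–defender game and computes the defender’s best response to an assumed attacker mix. It is a minimal sketch only: the action names, payoff values, and the best_response helper are assumptions for illustration, not drawn from Manshaei et al. (2013) or Backhaus et al. (2013).

```python
# Illustrative one-shot attacker/defender game (all action names and payoffs hypothetical).
DEFENDER_ACTIONS = ["patch", "monitor", "isolate"]

# LOSS[d][a] = defender loss when the defender plays d and the attacker plays a.
LOSS = {
    "patch":   {"exploit_known": 1.0, "phish": 6.0, "dos": 4.0},
    "monitor": {"exploit_known": 3.0, "phish": 2.0, "dos": 5.0},
    "isolate": {"exploit_known": 2.0, "phish": 4.0, "dos": 1.0},
}

def expected_loss(defender_action, attacker_mix):
    """Expected defender loss against a probability distribution over attacker moves."""
    return sum(p * LOSS[defender_action][a] for a, p in attacker_mix.items())

def best_response(attacker_mix):
    """Defender action minimizing expected loss against the assumed attacker mix."""
    return min(DEFENDER_ACTIONS, key=lambda d: expected_loss(d, attacker_mix))

# Assumed attacker behavior, e.g. seeded from exercise intelligence injects.
attacker_mix = {"exploit_known": 0.5, "phish": 0.3, "dos": 0.2}
choice = best_response(attacker_mix)
print(f"Defender best response: {choice} (expected loss {expected_loss(choice, attacker_mix):.2f})")
```

In a CAX setting, the attacker mix could be seeded from exercise intelligence injects and the computed best response compared against the COA a trainee actually selects.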

A goal of computer‐assisted exercises (CAX) is to ensure that both individuals and teams are ready for stressing defense situations. One example, Crossed Shields, an exercise run by the NATO Cooperative Cyber Defense Center of Excellence (CCDCOE), uses cyber CAX for cyber defense training; its goal is to teach personnel how to choose the most appropriate Course of Action (COA) in the event of an attack. In this chapter we make a distinction between training types for:

  • Cyber for Cyber (C4C) – experts on cyber defense. Sometimes C4C can include former Information Assurance (IA) personnel evaluating IT system security.
  • Cyber for Others (C4O) – operators tasked with dealing with the denied/degraded environments resulting from cyberattacks.
    • To date, the representation of cyber in such training has been rare – experiments (e.g. COATS) introducing cyber into traditional command‐level training simulations have been performed, but automated approaches are still developing, and exercises remain the principal means of evaluating system security.

Using the C4C and C4O definitions to guide our look at cyber M&S for training, we make a distinction between the two different types of CAX:

  • (C4C) “Cyber CAX” that focuses on cyber defense at a systems level (e.g. the Scalable Network Defense Trainer (NDTrainer), Chapter 12).
  • (C4O) Traditional CAX with cyber injections
    • Cyber Operational Architecture Training System (COATS) (Wells and Bryan 2015).
    • Cyber Operations Battlefield Web Services (CobWEBS) (Marshall et al. 2015).

In addition, we will discuss how these two types of CAX leverage the physical, informational, and cognitive elements of Information Operations (IO) (Joint Chiefs of Staff 2014) to provide a basis for measuring cyber situational awareness (SA) via cyber M&S, and how both kinds of CAX are commonly facilitated by M&S. We will also look at the different training Tiers (e.g. Global through individual) and the available tools and metrics used to judge performance.

As will be discussed in Section 7.2, SA is key to both being aware that a system is under cyberattack and taking defensive measures to protect the system. While the three layers of IO provide an initial reference for describing cyber SA (Robinson and Cybenko 2012), we can also leverage decades of Observe/Orient/Decide/Act (OODA) development, targeted for training pilots across the spectrum of air operations.

7.1 Training Type and Current Cyber Capabilities

In this chapter, we review traditional CAX, look at cyber injects into traditional CAX, evaluate cyber CAX, and consider combined traditional and cyber CAX. Table 7.1 provides a few examples of cyber training systems across the training Tiers; each training system is referenced in Chapter 12.

Table 7.1 Group training capabilities.a

CAX type Training system Individual Team Regional Global
Cyber Injections into Traditional CAX COATS, CobWEBS X X X
Cyber CAX NDTrainer – (Exata/Antycip) X X
TOFINO SCADA Sim X X X
CYNRTS (Camber) X X
HYNESIM (Diatteam) X X
CyberShield (Elbit) X X
Training Games NETWARS/Cyber City (SANS) X X
CyberCIEGE X X
MAST X

a References for each of the tools are provided in the Appendix.

Table 7.1’s traditional CAX and cyber injections into traditional CAX have the goal of reusing existing simulation platforms for operator (C4O) training. Cyber CAX and training games, in contrast, are relatively new; their goal is to train pure cyber (C4C) personnel on systems that either are, or represent, actual devices of interest. Chapter 8 will discuss device emulation and its current evolution into simulation; cyber CAX and training games are the current realization of this idea, providing users with scenarios in which they can experiment with possible courses of action (COAs). Chapter 6 covered COA evaluation, which is performed with Table 7.1’s training system examples.

Each of Table 7.1’s training systems, in providing a tool for COA evaluation, maintains the singular goal of increasing operator SA: the ability to discern that a system is under cyberattack and, once the attack is identified, to determine an appropriate COA. This remains an unsolved problem in the commercial world. For example, in 2013 the median time attackers were present on commercial victim networks before being discovered was 229 days, down from 243 days in 2012 (Mandiant 2014), indicating persistently low SA.

7.2 Situational Awareness (SA) Background and Measures

With over 200 days to discover the median advanced persistent threat (APT), an initial learning objective for cyber CAX is tactical situational assessment, the process of understanding one’s environment that leads to SA and understanding. In addition, while there are two distinct audiences for cyber training (C4C, C4O), each group requires the development of SA to do its job. SA training, a goal of pilot training over the last several decades, provides an exemplar for cyber SA training. For example, in the course of air combat training, the US Air Force developed the Observe Orient Decide Act (OODA) loop (Boyd), with the observe–orient phases commonly ascribed to SA development (Table 7.2).

Table 7.2 Situational awareness learning – tactical and strategic processes and outcomes.

Learning objective (phase) Process Outcome
Strategic Sense making Understanding
Tactical Situational assessment Situational awareness

As shown in Table 7.2, SA and understanding occur over different time horizons, strategic and tactical, with different learning processes and outcomes. With SA formally defined as “The perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future” (Endsley 1995), a few example SA frameworks for further metric development are listed in Chapter 12. In measuring operator aptitude both before and after cyber CAX training, SA measures provide an evaluation framework for comparing the different training alternatives.
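
To make the measurement idea concrete, the sketch below scores Endsley’s three SA levels from SAGAT‐style probe results and compares a trainee before and after a cyber CAX. The probe values, the 0–1 scale, and the level weights are assumptions for illustration; they are not a standard published metric.

```python
from dataclasses import dataclass

@dataclass
class SAProbe:
    """One SAGAT-style probe result, scored 0-1 per Endsley level (values hypothetical)."""
    perception: float     # Level 1: did the operator notice the relevant elements?
    comprehension: float  # Level 2: did they understand what the elements mean?
    projection: float     # Level 3: could they project the near-term consequences?

def sa_score(probes, weights=(0.25, 0.35, 0.40)):
    """Weighted mean SA score across probes; the weights are an assumption, not a standard."""
    w1, w2, w3 = weights
    per_probe = [w1 * p.perception + w2 * p.comprehension + w3 * p.projection
                 for p in probes]
    return sum(per_probe) / len(per_probe)

# Hypothetical before/after-training probe results for one trainee.
before = [SAProbe(0.6, 0.4, 0.2), SAProbe(0.5, 0.3, 0.1)]
after  = [SAProbe(0.9, 0.7, 0.6), SAProbe(0.8, 0.8, 0.5)]

print(f"SA before training: {sa_score(before):.2f}")
print(f"SA after training:  {sa_score(after):.2f}")
```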

Due to adversarial success, there is clearly a need for training simulators that can be used to ensure cyber security. While cyber modifications are currently being developed to leverage existing simulators, it can be a challenge to retrofit CAX designed for command and control (C2) of conventional operations with cyber requirements. For example, a common approach to cyber training today is to use a legacy trainer and turn off the communications to simulate a denial of service (DOS) attack. Simply leveraging a higher‐level simulation and turning off the communications misses the point of cyber, where attackers are more likely to minimize detection of their presence on a network and modify the integrity of data in order to shape operations (i.e. similar to IO). This also raises the challenge of clearly defining metrics for the cyber domain, an area where policy (e.g. implementation of cyber security controls – Chapter 12) is often viewed as a solution from the management perspective. Training to ensure policy implementation will likely need to be unpacked to ensure clear communication across a cyber defense organization. In addition, this more nuanced use of cyber will likely require a tailored training simulator to meet this need and explore the possibilities.
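
The difference between the “turn the communications off” shortcut and the integrity‐style attacks described above can be made concrete with a small sketch. The message format and the two inject functions below are hypothetical, not taken from any cited trainer.

```python
import random

def dos_inject(messages, drop_probability=1.0):
    """Availability attack: the legacy-trainer shortcut simply drops the traffic."""
    return [m for m in messages if random.random() > drop_probability]

def integrity_inject(messages, shift_km=5.0):
    """Integrity attack: traffic still flows, but reported positions are quietly shifted,
    so the operator sees a 'healthy' feed and detection is much harder."""
    tampered = []
    for m in messages:
        altered = dict(m)
        x, y = altered["position_km"]
        altered["position_km"] = (x + shift_km, y)
        tampered.append(altered)
    return tampered

reports = [{"unit": "A1", "position_km": (10.0, 42.0)},
           {"unit": "B2", "position_km": (11.5, 40.0)}]

print("DOS inject delivers:      ", dos_inject(reports))        # nothing arrives
print("Integrity inject delivers:", integrity_inject(reports))  # plausible but false data
```

The availability inject is easy for an operator to notice; the integrity inject delivers a plausible but false picture, which is precisely the case a tailored cyber trainer needs to exercise.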

While the challenges of cyber training are still being defined, we can now take a look at how operational exercises have been used in the air domain to confront similar SA and subsequent OODA development capabilities. Fortunately, the air domain has already tackled many of the structural training issues that cyber currently faces.

7.3 Operational Cyber Domain and Training Considerations

The cyber domain is still at an early stage, and the models that govern its dynamics for CAX training have yet to be clearly specified. Figure 7.2 (Stine 2012) provides a notional interaction between network, cyberspace, and mission operations that helps inform both cyber injections into current CAX and the development of standalone cyber CAX.

Figure 7.2 Network, cyberspace, and mission operations – information flows and events (Stine 2012): information flows from network operations through cyberspace operations to mission operations, with event reporting flowing back from operational centers to NOSCs/CERTs.

Figure 7.2 provides just one view of an “as is” architecture, parts of which are emulated by the CAX in training future cyber defenders. The objective here is to emulate a real‐world system, like that shown in Figure 7.2, with the right mix of Live–Virtual–Constructive (LVC) assets.

As shown in Figure 7.3, each of the LVC modes has different associated skill acquisition goals. We will see how virtual and constructive injections are implemented in current CAX environments by looking at other domains (e.g. air training). Achieving the realism common to LVC in the air domain will require either producing simulations of realistic fidelity (constructive modeling) or consistently providing live injects (live/virtual modeling) into the CAX. A few considerations for achieving realistic fidelity and timeliness in building cyber M&S for defense CAX training are shown in Table 7.3.

Figure 7.3 Live–Virtual–Constructive (LVC) and skills development: a pyramid with layers labeled Live, Constructive, Virtual, and traditional classroom lecture, bracketed alongside (top to bottom) by validating, practicing, and acquiring skills.

Table 7.3 LVC contributions to cyber CAX realism.

LVC element Description
Live Injecting effects from operators into the simulation
Virtual Injecting effects from ranges into the training simulation (e.g. COATS)
Constructive Use of simulators to inject cyber effects into tactical exercise

Ideally, Table 7.3’s techniques will help cyber defense training simulators keep pace with the fast‐moving nature of the cyber domain. M&S can therefore be used to quickly operationalize rules that leverage recent field understanding of current cyber threats (Chapter 12) and present these threats, in training, to our cyber defenders. The goal is then to provide realistic measures of SA improvement that can inform both training updates and the future acquisition of material solutions to help with cyber defense.
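
One way to see how Table 7.3’s three sources could feed a single training pipeline is to tag every cyber effect with its LVC origin and route it through a common inject record, as sketched below. The field names, enum values, and route function are assumptions for illustration rather than an existing interface.

```python
from dataclasses import dataclass
from enum import Enum

class Source(Enum):
    LIVE = "live"                  # operator-generated effect (e.g. red team on a range)
    VIRTUAL = "virtual"            # range-generated effect routed into the trainer
    CONSTRUCTIVE = "constructive"  # simulator-generated effect

@dataclass
class CyberInject:
    source: Source
    target: str          # e.g. "blue_workstation_07"
    effect: str          # e.g. "bsod", "latency", "data_falsification"
    start_minute: int    # exercise time at which the effect appears

def route(inject: CyberInject) -> str:
    """Hypothetical dispatch: every source becomes the same kind of exercise event,
    which is what lets the trainer mix live, virtual, and constructive freely."""
    return f"[T+{inject.start_minute:03d}] {inject.effect} on {inject.target} ({inject.source.value})"

scenario = [
    CyberInject(Source.VIRTUAL, "blue_workstation_07", "bsod", 15),
    CyberInject(Source.CONSTRUCTIVE, "brigade_net", "latency", 30),
]
for event in scenario:
    print(route(event))
```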

7.4 Cyber Computer‐Assisted Exercise (CAX) Environment Architecture

We are fortunate to have a baseline, both in LVC for bringing operator/range effects into our cyber simulations and in a functioning simulation architecture, via air training, that leverages the SA evaluation of interest to our cyber team. AMSP‐03’s (NATO 2014) Distributed CAX Environment Architecture (Figure 7.4) provides a CAX environment architecture as a generic construct, where teams may represent nations, corporations, or any other members of the training audience.

Figure 7.4 Generic CAX environment architecture: double‐headed arrows connect a vertical EXCON bar with the HICON, LOCON, and other cells inside boxes for Team 1, Team 2, and Team 3.

Figure 7.4 shows the overall CAX environment architecture and the interactions between the training audience and the respective command levels (Table 7.4).

Table 7.4 CAX environment components.

CAX environment component Description
Training Audience The Training Audience plays staff in operation and performs C2
LOCON (Low Control Cell) The LOCON plays the subordinate command or units of the training audience. The LOCON provides reports based on the orders created by the training audience. The response cell staff uses M&S to facilitate report generation, leveraging the simulated system state data
Opposition Forces (OPFOR) The OPFOR plays the opposing forces
HICON (High Control Cell) HICON (High Control Cell) plays the training Tiers above the training audience level. The HICON provides directives and could request situational reports from the training audience
EXCON (Exercise Control Cell) The EXCON performs scenario execution and injects the events/incidents planned in the MEL/MIL (Main Event List/Main Incident List) in coordination with the LOCON, the OPFOR, and the HICON (a minimal MEL/MIL scheduling sketch follows the arrow conventions below).

In addition to Table 7.4’s description of the CAX components, the arrow color convention in Figure 7.4 provides the following:

  • The red arrows represent the information exchange between the M&S tools. The different types of information should be considered for standardization (Chapter 5).
  • The brown arrows represent the information exchange between the EXCON staff and the role players. These information exchanges concern document exchange and control activities by any collaborative mode (Email, chat, phone, etc.).
  • The blue arrows represent the information exchange between C2 systems. These information exchanges are defined by the C2 community.
  • The green arrow represents the information exchange between the Trainees and the LOCON and HICON. These information exchanges should be identical to the C2 information exchanges. Nevertheless, some simplifications could occur for practical reasons.
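
Building on the EXCON’s MEL/MIL role from Table 7.4 and the exchange paths above, the sketch below treats the MEL/MIL as a simple time‐ordered queue of injects released to the cell responsible for playing them. The class name, event fields, and example events are assumptions for illustration; they do not reproduce the AMSP‐03 data model.

```python
import heapq

class MELMIL:
    """Minimal Main Event/Incident List: EXCON pushes planned events, then releases
    them in exercise-time order to the cell responsible for playing them."""

    def __init__(self):
        self._queue = []

    def plan(self, minute, cell, description):
        heapq.heappush(self._queue, (minute, cell, description))

    def release_until(self, minute):
        """Return all events whose scheduled exercise time has been reached."""
        released = []
        while self._queue and self._queue[0][0] <= minute:
            released.append(heapq.heappop(self._queue))
        return released

mel = MELMIL()
mel.plan(10, "LOCON", "Subordinate unit reports degraded radio link")
mel.plan(25, "OPFOR", "Begin phishing campaign against Team 2 staff")
mel.plan(40, "HICON", "Request situation report from training audience")

for t, cell, event in mel.release_until(30):
    print(f"T+{t:02d} -> {cell}: {event}")
```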

7.4.1 CAX Environment Architecture with Cyber Layer

Adding a cyber layer to a CAX provides the trainer with control over adding cyber effects to the training solution (Figure 7.5).

Figure 7.5 Generic CAX environment architecture including a cyber layer: the same EXCON/HICON/LOCON arrangement as Figure 7.4, with a cyber layer added across the teams’ exchanges.

By incorporating Cyber M&S into the generic CAX architecture, all communication concerning the training audience will be passed through the cyber layer, enabling the introduction of cyber effects in training. As shown in Figure 7.5, a cyber injection would target the green or the blue arrows through one of the following:

  • Confidentiality – Intended or unintended disclosure or leakage of information.
  • Integrity – Creating false information.
  • Availability – Degrading the flow of information, i.e. slowing down or preventing it.

The first type of cyber threat, leaking system information, might be injected by the HICON or LOCON, or by a communication simulation system. The latter two might likewise be achieved with communication simulation systems, which can model different types of communication disturbances that affect data integrity or reduce system availability (Table 7.5).
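
A minimal sketch of what the cyber layer in Figure 7.5 might do to traffic on the green or blue arrows is shown below, with one function per category listed above. The message shape and effect parameters are assumptions for illustration.

```python
import copy

def confidentiality_effect(message, leak_log):
    """Leak: a copy of the traffic is disclosed to OPFOR; the original flows unchanged."""
    leak_log.append(copy.deepcopy(message))
    return message

def integrity_effect(message, false_strength=3):
    """Falsify: the report still arrives, but its content has been altered."""
    tampered = copy.deepcopy(message)
    tampered["enemy_strength_companies"] = false_strength
    return tampered

def availability_effect(message, delay_minutes=20):
    """Degrade: the report is delayed (or, with a large enough delay, effectively blocked)."""
    delayed = copy.deepcopy(message)
    delayed["delivery_delay_min"] = delay_minutes
    return delayed

report = {"from": "LOCON", "to": "Training Audience", "enemy_strength_companies": 1}
opfor_intercepts = []

print(confidentiality_effect(report, opfor_intercepts))
print(integrity_effect(report))
print(availability_effect(report))
print("Leaked to OPFOR:", opfor_intercepts)
```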

Table 7.5 Description of Red Computer Network Attack (CNA) on Blue systems to demonstrate degradation effects on operator workstations.

Red cyber engagements (in a cyber range) against a Blue entity cause user‐detectable effects (e.g. Blue Screen of Death (BSoD), CPU memory utilization, etc.)

  • Desired effect: Blue operator workstation degradation
  • Generated effect: BSoD and CPU memory utilization
  • Challenge: propagating the effect through the exercise infrastructure
  • Attack direction: Red on Blue
  • Demonstration audience: Chief Information Security Officer (CISO) and staff
  • LVC category: Cyber (Live/Virtual) to Cyber (Virtual)
  • Target network: live asset in the cyber environment, generic end‐user workstations
  • Effect network: private LAN – generic Blue end‐user workstations

As shown in Table 7.5, COATS and the Network Effects Emulation System (NE2S) (Morse et al. 2014a, b) provide cyber effects to a training audience. In addition, COATS provides a distributed simulation architecture for injecting cyber effects from cyber ranges, as shown in Figure 7.6.

Figure 7.6 COATS cyber injection architecture: cyber effects flow through a cross‐domain solution between the cyber range and the training environment, reaching C4I systems and degraded operator workstations.

While Figure 7.6 shows how COATS provides range‐based cyber injects into training simulations, it is also possible to use constructive modeling as a proxy for range‐provided effects. This opens up the opportunity to control the level of cyber effects in a command‐level simulation. For example, the NE2S (Bucher 2012) is used to add cyber effects to a command‐level simulation; this approach is currently targeted at higher‐level (e.g. Global and Regional) exercise simulations.

7.4.2 Cyber Injections into Traditional CAX – Leveraging Constructive Simulation

The traditional CAX is targeted at strategic, operational, and/or tactical‐level training audiences. In this type of exercise, cyber injections are just one among several threads that the audience has to deal with. Because of cyber’s novelty and lack of real‐world controllability, it is often handled via white cards at present, where a cyberattack can be scheduled to produce a particular effect in the course of an exercise focused on a broader attack model. The aims of including a cyber injection in a traditional CAX are:

  • Training of the audience’s strategic, operational and/or tactical skills.
  • Raising the audience’s awareness of cyber threats to enhance its ability to recognize and mitigate them, which might be done through virtual/constructive injects of cyber effects (i.e. range effects).

StealthNet (Torres 2015), developed by Exata for the Test Resource Management Center (TRMC), has the goal of using five layers of the OSI hierarchy as constructive simulation layers for exercising architectures on a cyber range (Figure 7.7).

Figure 7.7 Network emulation (StealthNet) injection into the Network System Under Test (NSUT) (Bucher 2012): the NSUT operational tester connects through hardware‐in‐the‐loop, interoperability, and system‐in‐the‐loop interfaces to a five‐layer NSUT threat test team stack.

Figure 7.7’s example (Bucher 2012), from the test and evaluation (T&E) community, is also viable as a constructive simulation input for CAX. Some examples of the effects for different simulation injects include:

  • Loss of communication nodes and lines of communication (e.g. DOS).
  • Loss of fidelity of sources of communication.
  • Partial loss of information.
  • False or compromised information (i.e. key differentiator for cyber CAX).
  • Restrictions in bandwidth.

The aspects of NATO and Multinational CAX are listed and explained in AMSP‐03, but these are also valid for CAX on a smaller (e.g. national or organizational) level.

7.4.3 Cyber CAX – Individual and Group Training

We use the term Cyber CAX for a CAX intended to train the audience in applying methods of cyber defense. Training focuses on exercising technical personnel, both military and non‐military cyber defense subject matter experts.

Individual training approaches include CyberCIEGE (Thompson and Irvine 2011), where video gaming technology is used to train candidate cyber defenders. SANS NETWARS (SANS) is a more common gaming platform used in the cyber training community (Table 7.6).

Table 7.6 Individual training – games and administrator training.

CAX type Training system Individual Team Regional Global
Training Games NETWARS Cyber City (SANS) X X X X
CyberCIEGE X X X X
MAST (Singh) X

More traditional CAX span from individual to large group training, providing the opportunity to evaluate teams at multiple levels (Table 7.7).

Table 7.7 Example Cyber CAX and training levels.

CAX type Training system Individual Team Regional Global
Cyber CAX NDTrainer – (Exata/Antycip) X X X
TOFINO SCADA Sim X X X X
CYNRTS (Camber) X X
HYNESIM (Diatteam) X X
CyberShield (Elbit) X X

Table 7.7’s Cyber CAX and training levels show the different tools on the market today for training individuals through large organizations. In addition, Table 7.7’s focus on Cyber CAX provides an overview of how current operators and cyber professionals train to improve their SA of the cyber terrain.

7.5 Conclusions

Cyber SA, one of the differentiating elements of the cyber domain, currently has its best definition in the Information Operations (IO) layers (physical, informational, and cognitive). Quantitative approaches to cyber SA still rely heavily on computer science (e.g. graph theory) for network description; cyber SA, as a human process, is still a work in progress. Similarly, CAX for cyber remains a developing domain at the time of this writing.

7.6 Future Work

With the goal of Cyber CAX being cyber operator SA, overall scenario and exercise design might consider using fusion levels (Waltz 2000) to evaluate the operator’s ability to assess his/her situation, and how this assessment ability overlaps with operator performance. Example measures include:

  • Operator (C4C or C4O) estimated attack surface(s).
  • Actual attack surfaces, as measured by network evaluation tools.

Finding common ground between objectively/technically measurable phenomena (e.g. fusion levels, which do not involve human judgment) and the more qualitative metrics from human training (e.g. SA metrics) will give trainers a few different tools for evaluating C4C/C4O operator performance. Assessing human vs. machine performance will also provide an objective evaluation of what can be automated. For example, Guo and Sprague (2016) show a Bayesian replication of a human operator’s situation assessment and decision making performing very well in a simulated area reconnaissance war game.
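
As one simple way to put the operator‐estimated and tool‐measured attack surfaces from the list above on common, objectively quantifiable ground, the sketch below compares the two as sets and reports their overlap. The service names and the use of Jaccard similarity are illustrative assumptions, not a standard cyber SA metric.

```python
def jaccard(estimated: set, measured: set) -> float:
    """Overlap between what the operator believes is exposed and what a scanner finds."""
    if not estimated and not measured:
        return 1.0
    return len(estimated & measured) / len(estimated | measured)

# Hypothetical results: the operator's post-exercise estimate vs. a network scan.
operator_estimate = {"ssh:22", "http:80", "smb:445"}
scanner_result    = {"ssh:22", "http:80", "rdp:3389", "smb:445", "snmp:161"}

score = jaccard(operator_estimate, scanner_result)
missed = scanner_result - operator_estimate
print(f"Attack-surface awareness score: {score:.2f}")
print(f"Exposed services the operator missed: {sorted(missed)}")
```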

Along with using objectively quantifiable data to evaluate cyber defenders, our future Cyber CAX should leverage standard threats when building both exercise scenarios and the COAs to be evaluated. In addition, a future Cyber CAX, leveraging standard threats and scenarios, might look something like a combination of Morse et al. (2014a, b) and Bucher’s (2012) cyber training architectures, distributed to capture best‐of‐breed capabilities wherever they reside on the net, and expanded to all of the LVC dimensions, so that each of the training levels can be provided through one training architecture.

7.7 Questions

  1. Which of the Bloom Taxonomy’s educational objectives are most important for cyber training and why?
  2. Why are the IO domains a useful reference measure for cyber computer‐aided exercises?
  3. How should an organization match its CAX training and SA development goals?
  4. Looking at the LVC skills development pyramid (Figure 7.3), where should the strongest emphasis be placed for building SA skills?
  5. Why is it a good idea to combine C4C and C4O training, via a COATS‐like approach?
    1. What are the challenges for this kind of training?
  6. In developing a cyber CAX, what are the additional LVC considerations to make the training more realistic?
