7: DETERMINING INFORMATION STATES AND MAPPING INFORMATION FLOW

INTRODUCTION

One of the key attributes of the McCumber Cube approach is the requirement to extrapolate and analyze information flow characteristics. The concept of information flow is a unique aspect of this methodology and is superior to a technology-based approach. In technology-based methodologies, security safeguards are applied to specific IT products, media, or subsystems. In other words, safeguards are defined by the current capabilities and vulnerabilities of the particular technology products and protocols employed in the network infrastructure. This means that every time a system component is replaced, upgraded, or modified, the security attributes of the system need to be completely reassessed and in many cases, adapted to the new technology. Although the McCumber Cube methodology will not preclude the necessity for adapting technical controls to changes in technology, it ensures consistency of security policy and safeguards enforcement requirements.

Information flow analysis within the McCumber Cube methodology requires the security practitioner to decompose the elements of the IT system into one of the three primary information states—transmission, storage, or processing. These information states can be assigned to information regardless of its location in the IT system. In most cases, information will exist simply as data—ones and zeroes—within the system. However, the distinction between data and information is not important for information flow analysis. To determine information flow, one needs to determine the information state and location within the system.

At any point in time, information exists in one of three states: transmission, storage, or processing. There are no other options. Examples of the three states abound and are often intuitively obvious. Several researchers and security pundits have proposed additional information states, such as display. Just as proposed additional security attributes such as nonrepudiation fold into the existing three, display fits easily into the state of storage. This is most likely simply an issue of semantics.

Although the data in display mode would not be considered in storage in the traditional sense of being stored away, it exists as a postprocessing product that is now held within the display function. As these displays change with new information, they simply transition between storage functions of different data sets. The model is consistent, and parsing either states or security attributes into ever more refined subcategories does not always suit the needs of a security practitioner.

One of the major problems with attempting to expand on these categories arises when you cannot determine whether you have accounted for every possibility. Creating these subcategories can produce an unwieldy model that fails to account for certain information states. By applying the three primary states to each information resource as it moves through the system, you can develop a complete security model for any environment.


INFORMATION STATES: A BRIEF HISTORICAL PERSPECTIVE

The three information states have existed since the dawn of human information exchange. Early humans developed language to facilitate the transmission of concepts and ideas between themselves. Information or ideas deemed important were often committed to paintings or drawings rendered on cave walls. Processing was a state almost solely applied to the human brain. As early man perceived the information transmitted to him, he adapted that information to suit his survival needs.

As the history of information rapidly evolved, man developed more effective ways to transmit, store, and process information. There is little wonder that more advanced ways of managing information were vital. I am certain early hunters eagerly exchanged information with other hunters about their experiences. By knowing the historical location of prey and the anecdotal stories of others, a hunter could position himself better to increase his chances of success. He could capitalize on the purely providential discoveries of others and avoid problems experienced by the unfortunate. By processing this information properly, he could enhance his chances for survival.

As the ages passed, the ability to effectively gather, store, and process information became a strategic and tactical advantage for individual people and larger societal systems like nation-states. Those individuals and groups who best gathered and managed information would prosper over those who were less effective. In this way, superior information management capabilities established a Darwinian process that evolves in favor of those who possess information superiority.

Nowhere is information superiority so dramatically displayed as in warfare. For millennia, quality information has been recognized for its vital role in winning—and losing—in these high stakes games for survival. Often, military historians and planners refer to this information as intelligence. Though specific definitions are still being debated, superior knowledge of one’s enemies and the battlefield are central to success in life and death situations.

It is not surprising, then, that innumerable advances in information management and technology were brought about by those engaged in war. Some of the earliest forms of cryptography still in use today are known as Caesar ciphers. They were developed under the Roman Caesars to securely obtain battlefield information and to safely transmit command and control information to military commanders in the field.

For most of human history, information management was confined to finding more effective ways to capture, store, and transmit information in physical media such as animal hides and, eventually, the more efficient and compact paper medium. The development and widespread adoption of electronic communication capabilities began in the 19th century with the telegraph. Until this watershed advance, even Gutenberg’s printing press was simply a more efficient way to manage the physical storage and transmission of the paper medium.

Electronic transmission of data created great opportunities as well as new vulnerabilities. In paper-based media, it was necessary to gain physical access to process the stored information contained therein. Basically, you had to obtain access to the paper-based information in some manner to exploit it. In an electronic system, it became possible to intercept the transmission at some point and translate the electronic data pulses into readable information. In many cases, the sender and intended recipient would have no indication the information had been compromised.

Ultimately, it became vital to develop and employ new security safeguards that could protect information during its electronic transmission state. Cryptographers had developed and employed ever more complicated encoding schemes as information exploiters became adept at decoding intercepted information. The science of cryptography needed to evolve its paper-based capabilities into the electronic encoding of sensitive information.

At the beginning of World War II, cryptographers assigned a level of trust to their encoding schemes that was most often tied to the complexity of the algorithm used to encipher the data. It was generally assumed that the more complex the encoding algorithm, the more secure the data during the transmission state. However, Polish, English, and ultimately American researchers and mathematicians ushered in a significant advance in the science of cryptography after the Allies captured a German Enigma encoding machine.

Through the use of an early computational device, researchers were able to crack both the algorithm and the changeable key used by the Germans to encode their most sensitive military data. This landmark scientific breakthrough became one of the most closely held Allied secrets of the war. Intercepted and decoded Nazi and other Axis communications played a decisive role in winning the war in Europe.

Since World War II, the science of cryptography has focused on the generation of more computationally complex keys. Although complex encoding algorithms are a key security attribute of any cryptographic system, the actual trust in the system is now primarily vested in unique keys that would require an enemy to invest years to crack. After World War II, entire governmental organizations were chartered and funded by many countries to maintain the superiority of these electronic cryptographic systems for the protection of information primarily in its transmission state.

As advances in information management rapidly progressed with the introduction of electronic transmission capabilities, information security was focused almost solely on the states of storage and transmission. To protect stored information resources, security practitioners relied almost exclusively on physical security safeguards. These included safes, protective covers, and even the physical location of these paper-based information assets. In some cases, information was stored in an encoded state to be decoded by the intended recipient when it was needed for a decision-making process.

Little historical precedent was available with the development and rapid adoption of computers. Security practitioners were caught off guard as machines were developed to automatically process information. All the research and development of information security systems had been focused on the adaptation of old paper-based media storage and more complex encoding schemes. The widespread adoption of computing technology introduced the ability to remotely process information and tie this new function inextricably with the electronic transmission of data.

It is historically significant that computers have ushered in daunting new information security requirements. Until the advent of modern memory-based computer systems, the processing functions of information were never accomplished in an automated fashion. For most of human evolution, it was tacitly assumed that the processing function was the sole realm of the human brain. The computer’s ability to take in information or data and create new information has changed the landscape of information management in dramatic ways we are just now coming to realize.


AUTOMATED PROCESSING: WHY CRYPTOGRAPHY IS NOT SUFFICIENT

One of the key attributes of the computer system is that it automatically processes information. Through the use of programs, information is brought into the computing environment, manipulated by these programs (which are simply complex algorithms), and new information is created. It can be theoretically argued that within the basic processor, the processing function is simply the rapid alternation of transmission and storage states. However, for purposes of our methodology, the broader and more generally accepted interpretation and application of the three primary information states is the most effective.

When computers were first introduced, security researchers and cryptographers erroneously assumed security requirements could be enforced in this new environment by the adaptation of cryptographic solutions. However, in practical application, they soon learned that the science of cryptography was insufficient. One of the key principles of the information processing function is that you cannot process encrypted information directly and produce the requisite new information; the data must first be decrypted. For computer programs to function properly, they have to work with information in plaintext. If a program were coded to work with encoded text, this new enciphered text would simply become a new language (and a new form of plaintext), because it would be nearly impossible to apply a variable key while maintaining the security attributes of the encryption algorithm.

This may all sound a bit too theoretical, so suffice it to say that information in its processing state needs to be in a form of plaintext. For many software programs, the programming language dictates the processing algorithm and the representational data may not become information again until it is transmitted out of the processing or storage function as either electronic, digital, or hard-copy output. This makes it necessary to closely evaluate the information state changes within your IT environment.
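To make this less abstract, consider a small sketch in Python. It uses a toy XOR mask purely for illustration (not a real cryptosystem, and setting aside specialized homomorphic techniques): arithmetic performed directly on the ciphertext produces a meaningless result, so the data must be returned to plaintext before the processing function can create new information.

```python
# Toy illustration only (a simple XOR mask, not a real cipher): the processing
# step has to operate on plaintext to produce meaningful new information.
import os

def xor_mask(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, key))

key = os.urandom(4)
a_ct = xor_mask((1200).to_bytes(4, "big"), key)   # "encrypted" operand
b_ct = xor_mask((34).to_bytes(4, "big"), key)     # "encrypted" operand

# Processing the ciphertexts directly yields a meaningless value...
bogus = int.from_bytes(a_ct, "big") + int.from_bytes(b_ct, "big")

# ...so the operands must be decrypted back to plaintext before processing.
a = int.from_bytes(xor_mask(a_ct, key), "big")
b = int.from_bytes(xor_mask(b_ct, key), "big")
print(bogus)   # garbage that varies with the key
print(a + b)   # 1234
```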

There is a saying currently in vogue with IT researchers: If you think cryptography is the solution to the problem [of information systems security], you understand neither cryptography nor the problem. This saying has evolved from the understanding that cryptography has an important role in safeguarding information in its transmission and storage states, but it is not adequate in and of itself. That is another vote for a structured, information-based methodology like the McCumber Cube.

One of the keys to the methodology is the identification and mapping of information states within an IT system. It is also the least understood. The reason information state analysis is so little used is that, until now, almost every security analysis methodology or security enforcement process was based on a technology-centric model. In fact, many recommended methods are nothing more than a vulnerability checklist that needs constant updating as technology changes. As we have discovered, however, those processes based on the point-counterpoint of vulnerability and safeguard challenges ultimately leave the organization unable to determine the overall state of its security plan and lacking the information necessary to make cost-benefit tradeoffs for an effective risk management program.

The key to the McCumber Cube methodology is the application of state analysis. In the following sections, we will give examples and provide guidelines for determining system boundaries and identifying and mapping state evolution in modern IT systems. At this point, it will not be necessary to include a comprehensive outline of all vulnerabilities or identify all potential threats.


SIMPLE STATE ANALYSIS

State analysis is a critical aspect of our structured methodology. The attributes of information states will indicate where security safeguard requirements need to be defined. We will discuss these processes in Chapter 11, but we need to first understand how to identify the information states and information flows in our technology environment. The first step is to define the boundaries of our system.

Someone leaving a telephone message represents an example of a complete albeit simple information system. Information flowing through a virtual telephone connection is obviously in transmission. A voice mail recording of the call can contain the same information in a state of storage. When the intended recipient retrieves the voice mail and listens to the stored message, their brain processes this same information. The transmission and processing functions are thus ephemeral states that exist for a specific element of information (the message) at certain points in the information system. The storage function is static for the life of that element of information. If the recipient erases the message, once the technology components physically remove the representative digital data from a memory or storage medium, the information will cease to exist.

A key element to mapping information flow is defining the boundaries of the system in question. Establishing boundaries for an IT system can be more difficult than it first appears. Some systems are defined solely by a specific processing function or software application. This could be the case for a dedicated device to gather limited weather data and transmit it to a base station. You may have such a system in your home today. The entire system could be defined as the sensors, the transmission medium, and the main processor, storage, and display unit. This system is depicted in Figure 7.1.

We can quickly and easily identify the various information states by parsing the system into its three primary components—a remote unit with sensors, a wire to connect the sensor to the base station, and the base station itself. In each of these components, we must now determine which information states are present.

In this simple example of an information system, no data is processed or retained by the sensor—it is simply a device to obtain data and transmit it back to the main processing and display unit as it is acquired. The sensors on the remote unit include a barometer, an anemometer, and a temperature gauge. These three sensors send real-time data back to the base station through the wire that connects the two. The sensors in this case neither process nor retain any data. They simply send the raw feed to the base station.

The wire connecting the remote sensor unit to the base station will represent the transmission of the raw data. Simplistically, the transmission function begins where the wire is connected to the sensor and ends when it is connected to the base station. It is at these points that state changes exist.


Figure 7.1 Weather Reporting System

The base station in our example has both a digital memory and a visual display unit. The operator can maintain a record of daily high and low temperatures, maximum wind velocity, and a daily barometric historical trend. These rather simple functions are reset every 24 hours. The base station takes the raw data feeds and translates them into information that we, in turn, can reprocess in our brains to help us control our comfort by predicting the weather and adapting either our behaviors or environment in relation to the information we are provided by the base station.

Within this base station are both a processing state and limited storage states for the barometric trends and daily temperature extremes. With this simple IT system, we have now identified the transmission, storage, and processing functions. The information flow is one way: from the remote sensor, through the wire, and into the base station for processing and storage. The state analysis aspect of a security review is now accomplished.
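For readers who prefer to see the result of this analysis written down, the component-to-state assignments can be captured in a small table or data structure. The sketch below uses illustrative Python and hypothetical component names mirroring Figure 7.1; the methodology itself does not prescribe any particular notation.

```python
# A minimal state map for the weather reporting system in Figure 7.1
# (illustrative names; this is one possible way to record the analysis).
STATES = {"transmission", "storage", "processing"}

weather_state_map = {
    "remote sensor unit": set(),                      # acquires data only; no state retained here
    "connecting wire":    {"transmission"},           # raw feed moves from the sensors to the base
    "base station":       {"processing", "storage"},  # translates feeds, keeps daily extremes
}

# Every primary state should be accounted for somewhere inside the boundary.
covered = set().union(*weather_state_map.values())
assert covered == STATES, f"unaccounted states: {STATES - covered}"
print(sorted(covered))
```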

A simple state analysis like this works effectively for all types of dedicated information systems as well as those that are application specific. In these cases, the information is normally of one type for security purposes even if it is a robust collection of data. To accurately identify the states, the analyst must track the data from its acquisition through its use, maintenance, and ultimate disposition.

It becomes easier to see the implications of state analysis changes in technology by using the telephone system as an example. The original telephone system was a relatively simple environment where temporary communications sessions were established. For each session, an operator would create a dedicated circuit between callers to establish a dedicated wire-based connection session. After the link was created, information could be exchanged between the telephone units at each end. The processing and storage functions were left to the capabilities of the minds of the humans on each end.

The security attributes for such a system were routinely confined to the characteristic of system availability. The telephone system operators worked to provide consistent service for their customers, who needed to know they could make a call in the event of an emergency or simply for their own convenience. Central operators, who could listen in on the connections they created, easily violated confidentiality; however, those outside the connection environment had to obtain and employ highly specialized tools to conduct wiretaps or exploit other confidentiality vulnerabilities. The integrity attributes of the information were left to the veracity of the participants. Once the participants in one of these calls hung up the connection, the virtual transmission capabilities ceased; no information was retained, nor was it possible to manipulate any of the information exchanged after the fact.

The introduction of modern processing (computer) technology into the relatively simple telephone system has changed the entire security environment for telephony. Tiny processors in current telephone systems and after-market devices now allow us to use our telephones to record conversations, capture images, store large quantities of data, and manipulate the information before, during, and after we send it. Within the telephone infrastructure are computer systems and technology to manage all three states. All aspects of transmission, storage, and processing are now incorporated into these interconnected global systems. Someone responsible for evaluating or implementing security technology for this environment will now need to critically define all possible information states in each affected component.

The basic change has come from the recognition that all information, whether it be images, music, or books, can be rendered as data in computer systems. Color, perspective, and sound can be captured and replayed throughout a collection of processors, wires, radio waves, light waves, magnetic pulses, and optical media. Regardless of the medium, any security analysis must consider the three primary states of the information in order to begin a structured methodology to make intelligent choices for its protection and safeguarding.


INFORMATION STATES IN HETEROGENEOUS SYSTEMS

Obviously, many large-scale IT systems can be used for a multitude of functions. Multipurpose computer systems can host a nearly endless variety of applications that include voice, data, and imagery. A LAN in even a small office environment will contain numerous instantiations of transmission, storage, and processing functions. These states will exist in a variety of applications and functions. Identifying the primary information states across a variety of applications and system components is necessary.

In heterogeneous environments such as a LAN, it becomes necessary to examine information state changes for each application within the environment. To determine what information states exist, you can approach the state analysis either by inspecting each component or by examining the various applications supported by those components. In either case, you should end up with the same results.

If you approach the problem by system component, you need to answer the following questions:

  • What function does this component perform?
  • What applications are supported by this component?
  • Does this component perform multiple functions for different applications?
  • Which states of transmission, storage, and processing exist within this component?
  • Did I perform a complete inventory of information assets that are affected or managed by this component?

If you approach the state analysis by application (the employment of information and data), you simply apply the same questions, but you begin by looking at the various uses of information throughout the system. You need to determine:

  • What information does this application use?
  • Where does this information come from?
  • Where does this information travel within the system?
  • What state changes does this information make?
  • Did I perform a complete inventory of all state changes and every possible information flow?

In each case you should be able to diagram and define the various information state changes for each application and for each component within the system. Boundary definition in either case is critical. Although information may flow between various systems, infrastructures, and components, it is vital that you can account for the accuracy and timeliness of the information when it enters your defined boundaries and that you can ensure the appropriate confidentiality, integrity, and availability attributes of the information as it passes out of the control of your system.
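One informal way to keep such an analysis consistent is to record the answers to the questions above in a common structure that later doubles as raw material for the state map. The Python sketch below is only one possible encoding; the record types, field names, and sample entries are hypothetical.

```python
from dataclasses import dataclass

# Illustrative records for a component-based and an application-based pass;
# both views should converge on the same set of information states.

@dataclass
class ComponentRecord:
    name: str
    functions: list          # what the component does
    applications: list       # applications it supports
    states: set              # transmission / storage / processing present here

@dataclass
class ApplicationRecord:
    name: str
    information_used: list   # what information the application uses
    sources: list            # where that information comes from
    flow: list               # ordered (component, state) hops the information makes

file_server = ComponentRecord(
    name="file server",
    functions=["shared document storage"],
    applications=["payroll", "correspondence"],
    states={"storage", "processing", "transmission"},
)

payroll = ApplicationRecord(
    name="payroll",
    information_used=["employee pay records"],
    sources=["HR database"],
    flow=[("HR database", "storage"), ("LAN segment", "transmission"),
          ("file server", "processing"), ("file server", "storage")],
)

print(file_server.states, [hop[1] for hop in payroll.flow])
```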

Heterogeneous system boundaries also can apply to regional, national, or even multinational networks. As the breadth and functionality of the system expands, the challenge in applying the model is to determine the expectations of the system users and the role of security safeguards in the environment under study. A defense-in-depth view I often apply when analyzing ever-broadening system boundaries is to look at what type of control and protection the system operator can realistically enforce.

At the individual component level, an application of the McCumber Cube methodology normally defines the security functionality of the component itself. In other words, the methodology is used to determine a set of security-relevant functions the specific component could provide for any type of information, regardless of the type of information or the external threat environment. This type of analysis was originally envisioned by governments for ascribing security criteria to computer components. Examples include the Trusted Computer System Evaluation Criteria (also known as the Orange Book), the Common Criteria, and the CTCPEC. Each of these standards represented an attempt to define security for products without consideration for the value of the information transmitted, stored, or processed by them or for the threat environment the system operated within.

At the organizational systems boundary level, the McCumber Cube methodology is most effective because it is employed to define security safeguards for systems under someone’s span of control. Additionally, the process also identifies and includes in the assessment the value of the information resources transmitted, stored, and processed by the system. The methodology then enables the analyst to refine and optimize the security requirements based on that criterion. Finally, a system perspective also considers the threat environment the system operates within. With these parameters considered, it becomes possible to make informed tradeoffs and employ cost-effective security controls.

On an even larger scale, the McCumber Cube methodology can help assess a security environment (Figure 7.2). For information, components, and media outside the security analyst’s control, the methodology still provides a way to assess the effectiveness of the various security functions employed and the likelihood of information exploitation. By identifying information states and data flows, it becomes significantly easier to identify potential vulnerabilities and external risks.

In the following chapters, we will explain the decomposition process for each of these three levels of IT systems to apply the McCumber Cube methodology. The IT systems environment and the goals of the security analysis will dictate the level of detail and specific analysis techniques employed. Before those areas are developed, we must first discuss how decomposition of information states is practiced.


Figure 7.2 Layered Security Analysis I

BOUNDARY DEFINITION

One of the key advantages of a structured security methodology that is information based (or asset based) is its applicability across the entire spectrum of current technologies as well as future products and systems. To adequately apply this methodology, it is critical to accurately define and map the system boundaries. Another aspect of this boundary analysis is determining at what level of decomposition the analysis is most valuable.


DECOMPOSITION OF INFORMATION STATES

Many IT system components are entire information systems in themselves. A desktop system, for example, contains numerous information state changes within the computer itself. The system’s memory is a storage device, as is the hard drive. There are processing functions most obviously employed within the processor, but also contained within the video accelerator card and even the modem. Each of these system subcomponents can then be decomposed into more minute state changes that take place on the chip itself.

To understand the boundaries of the security analysis required, it is necessary to determine the outcome desired. In Figure 7.3, the tiered IT environment is mapped against the security analysis best suited for the selected environment. The first step is to determine the type of security analysis you wish to perform. From there, the boundary analysis is performed to determine the limits of your analysis.

For purposes of this chapter, we will define our level of abstraction as one of security enforcement analysis—the LAN and related topologies. We will also define how this methodology can be applied to security functionality within components as well as broader IT systems where the goal is one of developing and implementing a comprehensive security program. However, for the sake of continuity, we will present the rest of this chapter as if we were endeavoring to define the security environment at the LAN level.

The required steps at this stage of the methodology are to define the boundary, make an inventory of the information systems resources, and then decompose and identify the information states at the appropriate level of abstraction. Each of these steps can be accomplished and documented rather quickly and provide the basis for the security analysis to come.
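As a rough organizing device, the three steps can be sketched as a small pipeline. The function names, arguments, and sample data below are hypothetical and serve only to show how the output of each step feeds the next.

```python
# Illustrative skeleton of the three steps (hypothetical names and structures).

def define_boundary(candidate_components, in_scope):
    """Step 1: keep only the resources that fall inside the analysis boundary."""
    return [c for c in candidate_components if in_scope(c)]

def inventory_resources(boundary_components):
    """Step 2: record every component that transmits, stores, or processes data."""
    return [{"asset": c, "states": set()} for c in boundary_components]

def identify_states(inventory, state_lookup):
    """Step 3: assign the information states present in each inventoried component."""
    for entry in inventory:
        entry["states"] = state_lookup.get(entry["asset"], set())
    return inventory

components = ["file server", "office LAN", "visitor laptop"]
inventory = identify_states(
    inventory_resources(define_boundary(components, in_scope=lambda c: c != "visitor laptop")),
    state_lookup={"file server": {"storage", "processing", "transmission"},
                  "office LAN": {"transmission"}},
)
print(inventory)
```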


Figure 7.3 Layered Security Analysis II

Step 1: Defining the Boundary

The McCumber Cube methodology is not founded on an educated guess of attacker profiles followed by a test to simulate their possible attack scenarios. As we have discussed, such processes are flawed. All sound security analysis is based not on attacker profiles, but on understanding and protecting the assets that require protection to the appropriate degree. Attacker profiles may change, and new attackers with novel approaches are a consistent reality. The only way to ensure you are prepared to deal with any new threat is to ensure you have accommodated the requirements of confidentiality, integrity, and availability of the resource in each of its states of transmission, storage, and processing.

It is important to point out that determining the security boundary does not in any way mean that a security practitioner should rely on boundary protection techniques alone. The boundary security technique is an approach whereby a systems boundary is developed and all (or most) security functions are enforced at the boundary. In such a scenario, there is an assumption that the boundary will remain relatively static and that the only threat comes from unauthorized outsiders—those outside the defined boundary. The point of defining the boundary for use of the McCumber Cube methodology is to simply define the location of information resources and the systems components that are used to transmit, store, and process that information.

There are several guidelines available to help determine which systems make up the IT environment within your boundary (Table 7.1). You can employ an organizational approach whereby all information used by an organization must be accounted for and mapped to specific system components. In this case, it is also useful to have an inventory (Table 7.2) of the various system components owned by the organization. However, it is important to ensure all components that handle data within the identified boundary are accounted for. A thorough physical inventory is highly recommended. It is not unusual to identify several components or systems that were not included in the inventory for a variety of reasons. Systems may have been purchased with funds not tracked by the inventory system, leased components may be installed, and vendor demonstration units and components belonging to outside organizations also may be present; all of these should be included within the scope of a security analysis.

Again, the challenge is one of actually defining the boundaries. This boundary definition problem is identical to the challenges faced by any type of security analysis or certification process. This is summed up nicely by the National Institute of Standards and Technology’s approach for security accreditation in their Guide for the Security Certification and Accreditation of Federal Information Systems (second public draft):1

Table 7.1 Boundary Checklist

Table 7.2 Inventory Checklist

One of the most difficult and challenging problems for agencies has been identifying appropriate security accreditation boundaries for their information systems. Security accreditation boundaries for agency information systems need to be established during the initial assessment of risk and development of security plans. Boundaries that are too expansive make the security certification and accreditation process extremely unwieldy and complex. Boundaries that are too limited increase the number of security certifications and accreditations that must be conducted and thus drive up the total security costs for the agency. Although there are no specific rules for determining security accreditation boundaries for information systems, there are, however, some guidelines and considerations…that may be helpful…in making boundary decisions more manageable.

…In general, if a set of information resources is identified as an information system, the resources should meet the following criteria: (i) be under the same direct management control; (ii) have the same function or mission objective; (iii) have essentially the same operating characteristics and security needs; and (iv) reside in the same general operating environment (or, in the case of a distributed information system, reside in various locations with similar operating environments). The application of the criteria results in the assignment of a security accreditation boundary to a single information system. There are certain situations when management span of control and information system boundaries can be used to streamline the security certification and accreditation process, and thus increase its overall cost effectiveness.
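The four criteria quoted above lend themselves to a simple mechanical check: two sets of information resources arguably belong inside one accreditation boundary only when all four criteria hold. The sketch below is a hypothetical encoding of that test; the field names and sample values are illustrative, not drawn from the NIST guidance.

```python
# Hypothetical encoding of the four grouping criteria quoted above.
from dataclasses import dataclass

@dataclass
class ResourceSet:
    management_control: str     # (i) direct management control
    mission_objective: str      # (ii) function or mission objective
    security_needs: str         # (iii) operating characteristics and security needs
    operating_environment: str  # (iv) general operating environment

def same_accreditation_boundary(a: ResourceSet, b: ResourceSet) -> bool:
    return (a.management_control == b.management_control
            and a.mission_objective == b.mission_objective
            and a.security_needs == b.security_needs
            and a.operating_environment == b.operating_environment)

hr_lan = ResourceSet("HR director", "personnel records", "moderate", "headquarters LAN")
hr_backup = ResourceSet("HR director", "personnel records", "moderate", "headquarters LAN")
print(same_accreditation_boundary(hr_lan, hr_backup))  # True: one boundary is reasonable
```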

Once you have defined the boundary for your security analysis based on these criteria, you are ready to proceed to the next step of the process, making an inventory of IT resources.


Step 2: Make an Inventory of All IT Resources

Once you have identified the extent of your network perimeter, you need to work within its confines to identify the various technology resources and components that transmit, store, and process the data. Even though we are employing an information-based security model, it is necessary to account for not only the information resources, but also the equipment, systems, and components used. This inventory will be used as a basis for determining the nature and location of information states within the security enforcement environment.
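One hedged way to structure such an inventory is to record, for each asset, how it was acquired and whether it handles data inside the boundary, so that leased components and vendor demonstration units are not overlooked. The entry format and sample assets below are illustrative only.

```python
# One possible inventory entry format for Step 2 (illustrative field names).
inventory = [
    {"asset": "file server",     "owner": "IT dept",       "acquisition": "purchased",
     "handles_data": True,  "states": {"storage", "processing", "transmission"}},
    {"asset": "loaner firewall", "owner": "vendor (demo)", "acquisition": "on loan",
     "handles_data": True,  "states": {"processing", "transmission"}},
    {"asset": "spare monitor",   "owner": "facilities",    "acquisition": "purchased",
     "handles_data": False, "states": set()},
]

# Anything that handles data inside the boundary belongs in the state analysis,
# regardless of how it was acquired (purchased, leased, or vendor demonstration unit).
in_scope = [entry["asset"] for entry in inventory if entry["handles_data"]]
print(in_scope)  # ['file server', 'loaner firewall']
```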


Step 3: Decompose and Identify Information States

The problem becomes one of determining to what level you should decompose information states in order to define and implement the appropriate safeguards. One of the guidelines I recommend for making this assessment is to review a comprehensive list of known security vulnerabilities for the types of components in the information system under review. A sound overview of these known security problems will give you a good idea of just how minutely your information states need to be defined. For most system components, assigning the appropriate information states will be straightforward, as most of the technology building blocks of these heterogeneous systems have clearly defined roles.

In the example of an organizational information system, information states can be readily identified. A component or system, such as a workstation, PC, router processor, or mainframe, is involved in the processing function. Recall that these automated processing functions require information to be in plaintext (unencrypted). The processing function is identified at any stage where information is updated, modified, appended, or otherwise manipulated. Basically, it is anything that is not storage or transmission.

The transmission function is similarly easy to identify. Transmission can encompass movement of information from one location to another. The medium will be important for understanding vulnerabilities and safeguards, but is irrelevant for this stage of the methodology. It can be a wire-based transmission or wireless. It can be any of a number of protocols. At this point in the process, it is important simply to note that information is being moved.

The storage function can be any mode for storing information resources. It can be a database, a compact disk, or any other type of media. The distinction here is that the information is static. It is important to recognize the wide variety of storage states. It does not need to be a centralized database or even a popular database application. Anywhere data rests is a storage function.

There have been claims from some researchers that a separate function should be identified for display of information. There is no need. Information being displayed is simply a form of storage—usually with associated access from those within eyesight of the display function. Even real-time data displays should be considered storage functions as they refresh their images with information that is retrieved from the processing function through a transmission medium.
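The rules of thumb in this section—any movement of information is transmission, anywhere data rests (including a display) is storage, and everything else is processing—can be captured in a small, informal classifier. The activity labels below are hypothetical; this is an aid to analysis, not a formal definition.

```python
# Informal classifier following the rules of thumb in this section.
def classify_state(activity: str) -> str:
    moving  = {"send", "route", "broadcast", "copy over the wire"}
    at_rest = {"write to disk", "burn to CD", "hold in database", "show on display"}
    if activity in moving:
        return "transmission"   # information is being moved; the medium is irrelevant here
    if activity in at_rest:
        return "storage"        # anywhere data rests, including a display, is storage
    return "processing"         # everything else: the data is being manipulated

for activity in ("route", "show on display", "recalculate totals"):
    print(activity, "->", classify_state(activity))
```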


Figure 7.4 Personal Computer Information States

It should therefore be obvious that most workstations and PCs comprise all three basic information states and should be identified as possessing such (Figure 7.4). Security assessment and implementation activities must account for the fact that almost all of these systems have the ability to transmit, store, and process information resources. If a PC on a network has the capability to modify corporate information, it must be considered as having as much of a processing function as a mainframe computer or network server.

All information states and state changes within the predefined boundaries need to be completely identified.


DEVELOPING AN INFORMATION STATE MAP

Once this state identification process has been completed for all aspects of the information systems environment, you should have all the pieces necessary to develop an information state map. An information state map is not the same as a network topology map; it represents the flow of information through the system under review. Some people prefer to overlay the information state map on a network topology diagram, but it should be possible to divorce the state map from the topology in order to perform the upcoming security analysis of the structured methodology.

The information state map will identify the various state changes of the information through storage, transmission, and processing. The aspects of the map that require greater detail are those where several functions and states are identified within the context of a set of components or subsystems. For example, workstations or PCs attached to the network need to be identified as supporting all three information states. Vulnerabilities exist for all three states in this type of environment, so it is vital to identify and deal with the security ramifications for each of these systems. The same icon can represent systems that are identically configured, but the security elements need to be identified for all three state functions in each unit.


Figure 7.5 Simple IT System State Map

I have included a simple technology map (Figure 7.5) for a small LAN. It represents an environment for a simple application in which the boundaries have already been defined. This diagram is technology-based in that the model represents the various technology components that make up the information systems infrastructure. Once we have mapped this system, we need to decompose the components and overlay the information states and state changes to the map.

In Figure 7.6, we have overlaid the information state changes on top of the technology infrastructure. It is important to accurately reflect information state evolution by actually tracking the information resources as they flow through and between the various technology components. Although it is possible to create an accurate information flow map based solely on technology components using a simple system, a more complex infrastructure would require the analyst to actually define and follow the flow of information resources as opposed to relying simply on identifying physical system components.
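One way to represent such an overlay, while keeping it separable from the topology diagram, is to record the flow as an ordered list of component-and-state hops. The component names and the sample flow below are hypothetical and correspond only loosely to Figures 7.5 and 7.6.

```python
# Illustrative information flow for a small LAN; each hop pairs a technology
# component with the information state at that point, so the flow can be
# reviewed independently of the underlying topology diagram.
order_record_flow = [
    ("workstation-07",  "processing"),    # order entered and validated
    ("office LAN",      "transmission"),  # sent across the wire
    ("database server", "processing"),    # record updated
    ("database server", "storage"),       # record at rest
    ("office LAN",      "transmission"),  # retrieved later by another user
    ("workstation-12",  "processing"),    # rendered for display
    ("workstation-12",  "storage"),       # shown on the display (a storage state)
]

# List each state change along the flow; these are the points the security
# analysis in the following chapters will examine most closely.
for (comp_a, state_a), (comp_b, state_b) in zip(order_record_flow, order_record_flow[1:]):
    if state_a != state_b:
        print(f"{comp_a} -> {comp_b}: {state_a} -> {state_b}")
```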

Figure 7.6 represents the first phase of the structured methodology—mapping information flow. It is a critical step because this is an information-centric model and not a technology-based approach. The actual operating system, protocols, and various media that are employed can and will change, yet information flow through the system should be mapped and managed at a level of abstraction above the physical medium. In other words, security assessments need to be implemented and managed based on information states, not based on the whims of changing technology.

The information flow maps you create will certainly change and evolve, just as the technology infrastructure that supports the organizational information resources does. No security implementation can remain static in the face of evolving resource requirements and changes in technology. However, it is vital to develop information flow maps and maintain them in order to continually track the effectiveness of an information security environment. Now that you have mapped the information flow for your environment, it is time to apply the McCumber Cube in assessing and implementing your security requirements.


Figure 7.6 Simple IT System State Map with Information State Overlay

REFERENCE

1. National Institute of Standards and Technology, NIST Special Publication 800–37—Guide for the Security Certification and Accreditation of Federal Information Systems, 2nd public draft, June 2003 [available at www.csrc.nist.gov/sec-cert].
