2: DEFINING INFORMATION SECURITY

CONFIDENTIALITY, INTEGRITY, AND AVAILABILITY

One of the most critical aspects of any model development is a detailed understanding of the nature of your subject. Just as a good map contains a detailed representation of routes and landscape features, any security model must use an appropriate definition of the security environment. For IT and information systems security, a structured decomposition of both security and information is required. This chapter will define the elements of security and how they are applied.

Earlier in the book, we defined the relevance of security in context. We also used that discussion to outline the principal definitions of security. The primary definition of security according to the dictionary is freedom from risk or danger. A secondary definition is how you feel about your safety. So when we speak of security, we are speaking of an ideal state. It is no wonder people quickly become confused when discussing security, even in its IT definition. Our perceptions of what constitutes security may be different from those with whom we communicate.

There is an old story in military circles that highlights the problem of understanding and applying the right definition of security. As the tale goes, an admiral is assigned to a joint command in charge of members of all the military branches. He asks his aide-de-camp, a young Air Force officer, what it would mean to each member of the staff if he were to command, “Secure that building.” The young staff officer replies, “Sir, if you tell someone from the Navy to secure that building, he would close all the doors and windows, extinguish the lights, lock the doors, then retire for the evening. If you asked someone from the Army to secure that building, he would station machine gun nests and guards around the building and maintain his vigil until given further orders. If you asked a Marine to secure that building, he would attack the building, destroy it, and guard the rubble with his life.”

“I think I am beginning to understand,” muses the admiral. “But you are in the U.S. Air Force. What would you do if I asked you to secure that building?” he asks.

“That’s an easy one, sir,” the young officer quips, “I would get you a three-year convertible lease with an option to buy.”

This story never fails to get a laugh among military personnel, but it succinctly points out the problem with defining security. The words may be the same, but our mental image of what security means to us may not be shared with everyone else.

Similar problems exist for any concept that encompasses such a broad spectrum of meaning. Take, for example, love. You may love your spouse, your dog, or your car. You may love all three. If you are like me, you also love egg custard. Although the word love is accurately used in each instance, the specific elements that comprise your love are different for each subject in question. You evince love for your spouse through your caring and attention, as well as practical support for his or her needs. You express love for your dog by your daily care and also your attention. Love for inanimate objects like your car may take on more pragmatic demands such as mechanical maintenance and activities to improve its physical appearance. In these cases, the way you show love may be completely different, although the expansive concept of love is accurately applied to each object.

If security is freedom from risk or danger, the perception of security exists only when we experience freedom from risk or danger. This inexact definition is easy to misinterpret and misapply, especially in the context of security for information assets. We can best see how the two definitions work in tandem by looking at the case of former football coach John Madden.

I do not know anyone who claims John Madden is a dummy. He had a stellar career as coach of the Oakland Raiders and he is an entertaining announcer for my favorite fall sport. However, for years he has refused to fly, citing the risks of air travel, specifically an airplane crash. He opts instead to travel to his nationwide appointments in a luxuriously appointed motor coach.

Objectively, the mathematical odds of Madden being killed in an accident with his motor home are far greater than the risk of dying in an airplane crash. It is also obvious that he is far more comfortable traveling by coach than by air. In fact, he runs his professional life around it. For personal reasons, Madden applies the subjective definition of security to his travel requirements. He feels better about his safety when he is on the road than in the faster, safer, and more convenient aircraft.

In this example, the perception of safety trumps the empirical evidence to the contrary. Madden’s security model is his own gut instinct of which mode of transportation makes him feel secure. Although the figures argue otherwise, his perception of his security is that an airplane crash is so unfathomable a consequence that he prefers the statistically more dangerous cross-country drive. He certainly mitigates his risks by hiring a well-trained driver, but the time and convenience factors still make the case for air travel.

Our perception of security plays a critical role in our ability to define and apply security requirements. Like John Madden, many people may make this assessment on a gut level or based on perception as opposed to empirical analysis. This distinction is easy to identify when seeing television advertisements for security products and services like home alarm systems. These systems usually comprise both a product (the alarm system itself) and a service (monitoring of the alarm by a security monitoring center).

In a recent advertising campaign, a security alarm service attempted to define an instinctive (or perception) model of security using three different scenarios. First, they set the stage from the perspective of a homeowner, a traveling family man with a wife and children. Then they presented the perspective of the daughter of an elderly parent living alone, and finally that of a single person away on vacation, sitting in a canoe.

Each of the three segments shows the person explaining the security value that a home alarm means to them personally. The traveling family man wants to know his family is safe. The daughter with the elderly parent wants the assurance that responsive help is available for her father. The man in the canoe wants to just relax knowing his house is protected from flooding and burglary. Each person expresses what security means to him or her on a strictly personal level. The company paying for the advertisement, in partnership with the advertising firm, did not want to use the limited time available to explain how the system works or even mention the price.

Those aspects would certainly be considered important, but the company paying for the advertising was not trying to compete on cost. Comparatively, if you think of automobile advertising, it is almost always centered on cost. Manufacturers and dealers tout their financing rates, discounts, and low monthly payments, especially for moderately priced vehicles aimed at the average American. However, home security monitoring services are not about cost or technical superiority; they are all about peace of mind. The value case for security, in this instance, is not aimed at an empirical analysis of the likelihood of a house break-in, flood, or family emergency. The value is the peace of mind they provide their customers.

Even when people demand security, they themselves may not understand exactly what elements they require. That is the role of the security analyst. Nothing is more problematic than someone demanding that a building, a document, or an information system be secure. To accurately employ a security model, the nature and degree of the security requirements must be spelled out in unambiguous terms.

These definitional distinctions are far more important to security practitioners than mere academic exercises. The heart of developing and implementing effective security is understanding and codifying precise security policies that can be translated into a complete suite of safeguards encompassing technology, policy, and human factors. A comprehensive methodology that accounts for each of these factors goes a long way in providing a framework to both design and communicate security concepts to everyone in an organization.

In the specific case of IT, analytical accuracy requires that the security definitions be applied to the information, not the technology. The section on security models depicts the problems that result when your security model is focused on the technology and not the information assets themselves. Technology can be considered secure only as far as the physical instantiation of the technology itself is concerned. In other words, if I claim a router is secure, the concept of security can only be applied to the physical piece of hardware itself. If you refer solely to the software within the unit, it is even more difficult to make an assertion about its security, because the software itself is an information asset and subject to threats faced by all digital resources. It is perfectly acceptable to discuss the security features or capabilities of the router as it applies to the data that transits the router, but the concept of a secure router is only meaningful when physical protection of the equipment is the desired security outcome.


INFORMATION ATTRIBUTES

Applying an adequate definition for security of information resources is more complex than the McCumber Cube would suggest. Although the three key elements—confidentiality, integrity, and availability—provide a complete framework for describing or modeling information security, there are certain attributes of information, data, and digital media that require closer inspection. The most significant is the nature of how we ascribe value to information.

When defining the amount of security necessary for a given IT environment, the key is determining the value the information, data, or digital resources have for the organization or people who require security. Information has value, and the measurement of that value is the subject of Chapter 3. However, it is crucial to this analysis to identify some unique characteristics of information as an asset.

Any security model must take into account the value of the assets that require protection. In our household, we only have a two-car garage, but three cars. An outdoor parking pad accommodates the car that must be left out of the garage. Although time of arrival and parental status influence which car gets left outside, the primary factor is the objective value of the vehicle itself. The least valuable vehicle is left outside where it is more vulnerable to theft and weather damage.

Because our model is based on the attributes of the information itself and not the specific technology used to transmit, store, and process that information, models to define the value of the asset must be applied specifically to the information resources. Logically, information moves throughout these IT systems. To make an accurate assessment of the value of information, it will be necessary to perform an analysis of the information in a specific state of transmission, storage, or processing.

When discussing the value attributes of information, it is important to make another distinction, and to do so we must again use definitions to describe exactly what we mean. The first word is data. For our use, we must apply the computer science definition: data is numerical or other information represented in a form suitable for processing by computer. The key concept in this definition is the part about data’s form—a form suitable for processing by a computer. By applying this definition, we see that data in many cases actually has little value to us. It can be perceived as a stream of bits transiting a network connection or a large collection of ones and zeroes in a database.

The distinction between raw data and information is critical to applying security attributes. Information is data in context. A close synonym used in some definitions would be knowledge. Consider this example of data: 13, 35, 48. These numbers certainly represent data, but do they represent information?

Suppose you found those same three numbers written on a small note taped to the underside of a telephone near a safe containing $50,000 in cash. You have now converted data into information and have even been able to help quantify the value of that information. If, in fact, you have discovered the safe’s combination, the protection accorded the information while it was attached to the underside of a nearby object was insufficient, especially if you were inclined to apply this knowledge to acquire the assets.
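The data-versus-information distinction can be sketched in a few lines of code. This is purely an illustrative example: the labels below are hypothetical context, invented to show that it is the context, not the numbers themselves, that creates value.

```python
# The numbers alone are raw data: values with no surrounding meaning.
data = [13, 35, 48]

# Information is data in context. The labels here are hypothetical,
# supplied only to illustrate how context creates value.
information = {
    "meaning": "combination to the office safe",
    "contents_protected": "$50,000 in cash",
    "values": data,
}

# Stripped of context, the same values are indistinguishable from noise.
print(data)                    # [13, 35, 48]
print(information["meaning"])  # the context that makes the data valuable
```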

Security policies and procedures should be applied only to information and not to data per se. As presented earlier, security expressed outside the context of valuation is not security, but simply an outline of security features or functions. To accurately and cost-effectively apply security to IT systems, security must be defined according to the value of the information as it passes through its transmission, storage, and processing phases.


INTRINSIC VERSUS IMPUTED VALUE

Evaluating asset value for information can be a difficult task. Just as the concept of security encompasses both objective empirical risk avoidance as well as perception, information assets have two significant defining traits. Information has both intrinsic and imputed value.

Intrinsic value refers specifically to the inherent nature or worth of the information itself. The best way to describe the difference between intrinsic and imputed value is to apply the concept to other assets. Take cash as an example: the bills in your wallet have both intrinsic and imputed value. The intrinsic value of your money is its worth as green slips of paper. You can use them to write notes on or perhaps consider using them as fuel for a small fire. However, the real value of the currency, as you know, is the ability to trade them for goods and services equal to the worth placed on them as a convenient and portable form of trade. This is the imputed value.

The imputed value of currency is the value placed in it by our government as a vehicle for trade and commerce. The different denominations represent varying amounts of value all tied to the base unit of one dollar. The value certainly changes over time, but the imputed value of the currency remains tied to what has become an international standard.

Let us refer back to our example of the three two-digit numbers of our safe combination. The intrinsic value of the three numbers is quite small, but their imputed value as information could be worth up to $50,000 depending on the risks in obtaining and keeping the cash in the safe. To fully express the value of information, it is critical to assess both its intrinsic and imputed values.

All assets have intrinsic as well as imputed values and often these values are wildly disparate. I have an old guitar that was owned and played by my deceased father. Its intrinsic value as a musical instrument is quite small, because it needs repairs to be even marginally playable. However, its value to me as a memento is significantly greater.

You may drive an old car with high mileage. Perhaps it is over ten years old. Its trade-in or retail sale value according to industry sources may be minuscule. However, you may have owned the vehicle since it was new and have taken good care of it. It has likely proven to be reliable transportation, and the fact you own it free and clear is also an important aspect of ownership. You know it would be nearly impossible to locate another vehicle as reliable for even twice the book value of this car. In this case, the imputed value and intrinsic value are not the same.

With most assets, however, it is difficult, if not impossible, to separate the intrinsic and imputed values. Again, using currency as an example, to acquire the imputed value of a five-dollar bill, I would have to possess the bill. If I exchange the five-dollar bill for two dollars of goods or services, I would surrender the five dollars and receive currency in the amount of three dollars in return. It is not possible for someone to deduct two dollars of value from the five-dollar note and establish a new imputed value of three dollars. The five-dollar bill is a form of token that contains both intrinsic and imputed values that are closely intertwined.

Most assets have their intrinsic and imputed values tightly conjoined. A jewelry company that wishes to lease retail space in an upscale suburban shopping mall will most likely pay more for that space than the same amount of space in a rural strip mall. The difference is not the intrinsic value of the retail space, but the imputed value of the retail location. The same holds true for residential real estate. A home located in a convenient, highly desirable residential area will cost more than an identical home away from major roads or near a dirty industrial plant. This is an example of intrinsic and imputed values being different. The fact that you would have to pay tens of thousands of dollars to physically move the house from its less desirable location to a better one limits your ability to manipulate the intrinsic and imputed values of homes based on their location.

If we return to our example of currency, we can feature a relatively recent technology that allows us to manipulate the imputed value of a currency token. A debit card makes such transactions possible. If the debit card has five dollars of value and I purchase two dollars of goods or services, a vendor with the appropriate technology can automatically reestablish a new value of three dollars for the debit card. This feature—the ability to change the imputed value—makes these payment options desirable for many commercial transactions. The debit card is an example of how modern technology can provide more flexibility in ascribing and defining imputed value in financial transactions.
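The difference between a paper bill and a stored-value token can be sketched in code. This is a minimal illustration under stated assumptions, not a model of any real payment system: the vendor's terminal simply writes a new balance to the token, something no one can do to a banknote.

```python
class DebitCard:
    """A minimal sketch of a stored-value token whose imputed value
    the technology can rewrite, unlike a paper bill."""

    def __init__(self, balance: float):
        self.balance = balance  # the imputed value carried by the token

    def purchase(self, amount: float) -> None:
        if amount > self.balance:
            raise ValueError("insufficient value on card")
        # The vendor's terminal establishes a new imputed value.
        self.balance -= amount

card = DebitCard(5.00)
card.purchase(2.00)
print(card.balance)  # 3.0, a new imputed value for the same physical token
```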

The importance of intrinsic versus imputed value in assessing information assets cannot be overemphasized. Too often, organizations make security decisions without an explicit understanding of the imputed value of the information assets. This subject is so critical to making cost-effective decisions that it will be treated to its own section in the upcoming Chapter 3. However, the concept must be introduced here to provide background for understanding the security-relevant attributes of information.

In the case of currency, counterfeiters try to create imputed value out of tokens that have only intrinsic value. The currency tokens they produce are only worth the paper and artwork they contain. However, by trying to pass their work off as legal tender, they are attempting to create imputed value they can exchange for valuable goods and services.

For legitimate currency, it is technically impossible to separate intrinsic and imputed value. If I want to realize the value of a five-dollar bill, I must obtain physical possession of a legal five-dollar bill. In the case of information assets, simply gaining access to sensitive or valuable information may be all that is necessary to shift part or all of its imputed value from its legitimate owner to someone else.

An important attribute of information value is the understanding that it is relatively easy to manipulate its imputed value. An example here will be instructive. Let us again return to the poorly concealed combination to the safe containing $50,000. Perhaps the untrustworthy person who located this combination waited to open the safe at a less suspicious time. If I am the owner of the safe and realize the implications of leaving the combination so easily exposed, I could change the combination while leaving the note under the telephone intact. When the potential thief comes to open the safe, they will find the combination changed and their insider knowledge worthless.

Information assets often are easily exploited in the same manner. If your company produces breaded chicken using a secret recipe containing eleven herbs and spices, part of the economic value of your company is contained in the recipe. If that exact recipe is surreptitiously copied, either through electronic or physical means, you could be empowering a potential competitor who will possibly take away a significant portion of your business. Although you still have your secret recipe, your business growth is in jeopardy because someone has extracted some of the imputed value of the information. Information assets demand that special attention be paid to both intrinsic and imputed values.


INFORMATION AS AN ASSET

In the earlier section on security models, the importance of identifying and valuing assets was emphasized. Assessing and implementing security—any security—is a process of protecting assets. The key word here is process. Security is a journey, not a destination. Because the concept of security is necessarily a transitory and idealized state, security practitioners must deploy technology, policies, and procedures in a manner that easily accommodates the requisite dynamism for protection mechanisms to remain current.

Information, as an asset, cannot be fully understood as a static entity. Information has the characteristics of a living organism. Information often begins small and grows. It marries up with other information. It changes, flows, and evolves. The main differentiator between information and life forms is that information does not die; it must be killed off. However, this concept is covered in more detail in Chapter 3, so we will just introduce the idea here.

Any structured security management methodology or model must take into account the dynamic and fluid nature of information and also must be able to ascribe reasonable value estimates to it at each stage of its transmission, storage, and processing. Because security is not an absolute, but a matter of degree, these aspects of any model are crucial to the element of cost-effective security implementation.

One of the major problems with assigning value to information is that it currently does not appear on corporate balance sheets. I use the term currently to specifically point out my belief that information will sometime in the near future appear as a tangible corporate asset. To manage security effectively, it is necessary to measure its efficacy. If you cannot measure the assets you are endeavoring to protect, you cannot have an effective model to define security requirements.

Information may often show up on a balance sheet in the form of intellectual property. Types of intellectual property include copyrights, patents, trademarks, or trade secrets. These assets have certain rights in law depending on the country where the business is located. These rights can be bestowed, rented, sold, and even mortgaged in some countries. This gives a somewhat more tangible valuation to these important intangible resources.

It is critical to understand that the actual rights to the information are considered the property, not the intellectual work itself. For example, a patent is something that can be bought, sold, or traded. The actual invention itself technically does not belong to anyone. In this case, intellectual property represents a government-granted monopoly on certain types of business activities. Intellectual property rights are generally divided into two main categories:

  1. Those that protect a product or process by granting exclusive rights only over its copying or reproduction (copyright)
  2. Those that grant more stringent protection, such as a patent, which also protects against competitors who had no knowledge of the original design

In either case, there is a legal burden on the individual or corporation to identify these assets and file the appropriate legal documents to provide a level of government-enforced protection.

Information is important to all organizations and it is the raison d’etre for innumerable businesses, such as clearinghouses, credit scoring companies, and content providers. In these instances, it is ironic that items such as computer equipment and brick-and-mortar facilities are part of the balance sheet, but information is not. Most information-dependent organizations could recover quickly from a loss of any physical asset, yet would quickly go out of business without the ability to access and sell their information. Even if current accounting practice does not allow for a convenient way to portray the value of information, it is incumbent on people who depend on information to adequately provide for its management and protection.


THE ELEMENTS OF SECURITY

The word security provides us few clues for understanding and modeling requirements for the protection and management of information resources. To model security for information assets, we must define what we mean when we use the term security. The elements of security, as we define them, are confidentiality, integrity, and availability. This triad concept has been around for many years, yet has never been incorporated into a security model.

Over the years, security analysts, policymakers, and researchers have considered expanding this group to include elements such as accountability and nonrepudiation. Although there is a valid argument to be made for expanding the list of security elements, it is not necessary to do so in order to accommodate these attributes. Nonrepudiation, for example, is a facet of integrity, and accountability could also be included in that category. It makes no sense to make your model overly inclusive at its fundamental level. Such attributes are best considered within one of the three widely adopted elements.

Confidentiality

Confidentiality is perhaps the most widely recognized and most deeply studied security requirement. Confidentiality is the basis for the science of cryptography, whose documented history stretches back at least to the Roman Empire. The primary consideration of confidentiality is not simply keeping information secret from everyone else; it is making it available only to those people who need it, when they need it, and under the appropriate circumstances.

Perhaps the most significant imperative for confidentiality is not the element of secrecy, but the capability to ensure the appropriate subjects (both people and other processes or systems) have the requisite access when needed. Since the reign of the Caesars, confidentiality has been seen as a contest between those who would protect the content of information transmissions and those who would gain from the disclosure of the same. Ever more complex methodologies have evolved in this ongoing cycle of protect and exploit.

Centuries ago, information transmission speeds were measured in days and even months. Couriers were provided encoded information and the sender lost control over the transmission as soon as the courier disappeared from sight. The duty of courier was most likely assigned to a trusted soldier or confidante as an additional safeguard.

In the Roman Empire, senior civilian leaders and military commanders had a staff of office that also functioned as an encoding and decoding device. The staff contained the entire Roman alphabet and a complete set of numerals. A strip of parchment was wound around the staff starting at a particular letter or numeral that functioned as a key. A message could then be encoded as the actual letter or numeral was compared against its enciphered pair on the parchment. The receiver had only to take an identical staff and parchment strip and start at the same key letter or numeral to decipher the message from its encoded format. This simple encoding algorithm is known as a monoalphabetic substitution cipher and is still practiced by children fortunate enough to find a secret decoder ring in a box of popcorn snacks.
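The staff-and-parchment scheme amounts to shifting the alphabet by the position of the key letter. A rough sketch in Python follows (letters only; the Roman staff also carried numerals, which are omitted here):

```python
import string

ALPHABET = string.ascii_uppercase

def encode(plaintext: str, key_letter: str) -> str:
    """Monoalphabetic substitution: shift each letter by the position of
    the key letter, as if the parchment strip were wound starting there."""
    shift = ALPHABET.index(key_letter.upper())
    table = str.maketrans(ALPHABET, ALPHABET[shift:] + ALPHABET[:shift])
    return plaintext.upper().translate(table)

def decode(ciphertext: str, key_letter: str) -> str:
    """Reverse the substitution using the same key letter."""
    shift = ALPHABET.index(key_letter.upper())
    table = str.maketrans(ALPHABET[shift:] + ALPHABET[:shift], ALPHABET)
    return ciphertext.translate(table)

message = "ATTACK AT DAWN"
encoded = encode(message, "D")  # key letter D gives a shift of 3
print(encoded)                  # DWWDFN DW GDZQ
assert decode(encoded, "D") == message
```

Because every plaintext letter always maps to the same ciphertext letter, such ciphers fall quickly to frequency analysis, which is one reason they survive today mainly as children's toys.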

There were many security challenges with these early systems for ensuring the confidentiality of critical information. The geographically dispersed nature of the physical transportation of protected information made it difficult to change both the encoding algorithms and the keys required to decipher the information. Those charged with exploiting these communiques had to initially intercept the physical communication and then determine both the algorithm and the key used to encode the plaintext data. Analysis, guile, and even brute force were necessary tools to discover these elements.

Modern cryptographic systems have the same properties as their ancient counterparts. There is an algorithm (or process) to encode the data and a key variable to establish the enciphered session.

Keys that are generated randomly and with great frequency prove difficult to break. In the 19th century, Dutch cryptographer Auguste Kerckhoffs enunciated the elemental key principle used by all modern cryptosystems: the security of the system depends solely on the security of the key, not the algorithm. This is known as Kerckhoffs’ law. As early as his published works in 1883, Kerckhoffs understood that ultimately, the cryptosystem’s algorithm could be captured and analyzed by those seeking to exploit the information the cryptography tried to protect. In modern computer systems, cryptographic algorithms can be deconstructed and analyzed by code crackers. Thus, a complex and frequently regenerated key is the most critical defensive capability of the cryptosystem.
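Kerckhoffs' law can be demonstrated with a toy cipher whose algorithm is entirely public. The XOR keystream below is deliberately weak and is only a sketch of the principle: anyone may read the code, yet recovering the plaintext still requires the key.

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """A fully public algorithm: XOR each byte with a repeating key.
    Per Kerckhoffs' law, all security must rest in the key alone."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"MEET AT THE BRIDGE"
key = b"s3cr3t-key"  # hypothetical key, for illustration only

ciphertext = xor_cipher(plaintext, key)
assert xor_cipher(ciphertext, key) == plaintext        # right key recovers it
assert xor_cipher(ciphertext, b"wrong") != plaintext   # wrong key yields noise
```

(A repeating-key XOR is trivially breakable in practice; real cryptosystems pair the same principle with far stronger algorithms and frequently regenerated keys.)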

Although it was aware of Kerckhoffs’ law, the Nazi military in World War II realized too late just how devastating the implications of this important principle were. British forces were able to smuggle a German Enigma encryption machine out of Poland with the help of patriotic Polish engineers who had worked on the mass-produced encryption device. British and American researchers were ultimately able to replicate its complex algorithm at the secret Bletchley Park compound in England. In 1940, the scientists were finally able to break the Enigma code by converting some test messages into plaintext. This uncovered the key systems that allowed the Allies to read the secret German military communications.

Cryptography represents the most frequently employed technology safeguard for the protection of information. The subject is treated in great detail in numerous other books and studies, so this text will not consider this science in any great detail. We also will not delve into the various strengths and weaknesses of specific cryptosystems. However, it is critical to understand both the strengths and weaknesses of cryptography as a safeguard in information systems.

In referring to the McCumber Cube, it is quickly obvious that cryptography can be applied as a technical safeguard in numerous instances. Cryptography is a highly effective safeguard for information that is being transmitted or stored. In the early days of computer security research, many people felt that cryptography alone could provide a sufficient technical safeguard for protection of information assets in computer systems.

In the era of monolithic computer systems and limited intercomputer communications, cryptography was envisioned as all the protection necessary for information that flowed between protected islands of data processing. In this case, boundary protection of the computer system itself was deemed adequate.

One of the defining principles of modern IT systems, however, is that confidentiality of information can only be provided with cryptography during the information states of transmission and storage. It is axiomatic that you cannot take data in its encrypted form, modify it (or, more accurately, process it), and then have data that can be decoded into plaintext. Though it is possible to keep information encoded until processing, it is not possible to apply the processing function until the data has been converted back to plaintext.

Perhaps the best way to illustrate this important principle is to use the analogy of a protected communications system that used cryptography long before the advent of the computer. The same Roman military commander we discussed earlier received his encoded dispatches and then had to decipher the data into plaintext before it became actual information for him to analyze and use in his decision-making process. There would be no way for him to skip the deciphering step and process the data in its encrypted form. The enciphered data was not in a form that his mind could process.

The same is true of automated processing systems such as modern computer and telecommunications systems. The processing stage of information is roughly analogous to the processing function of the human brain. Any processing function must work with data in its plaintext state.
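A minimal sketch can make this principle concrete. The toy XOR "cipher" below is purely illustrative and offers no real security; it simply shows that operating on ciphertext does not yield the encryption of the processed plaintext, so the data must be decrypted before any meaningful processing can occur.

```python
# Toy XOR "cipher" (illustration only, NOT a real cryptosystem).
KEY = 0b10110101  # hypothetical single-byte key for the example

def encrypt(value: int) -> int:
    """XOR the value with the key."""
    return value ^ KEY

def decrypt(value: int) -> int:
    """XOR is its own inverse, so decryption reuses the same key."""
    return value ^ KEY

plaintext = 42
ciphertext = encrypt(plaintext)

# Processing the ciphertext directly produces a meaningless result...
processed_ciphertext = ciphertext + 1
assert decrypt(processed_ciphertext) != plaintext + 1

# ...so the data must be decrypted, processed, then re-encrypted.
result = encrypt(decrypt(ciphertext) + 1)
assert decrypt(result) == plaintext + 1
```

The middle step, where the data exists as plaintext, is exactly where cryptography ceases to protect it; other safeguards must cover the processing state.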

It can be argued that an application (or other type of software) could be programmed to process enciphered information. Whatever the process for developing such a program, this would be no different from writing a program in another representational language, whether a human linguistic system or a computer programming language. However, such a program would not work outside the confines of that one representational language and would therefore be unsuitable for widespread use and adoption. For that and other reasons, we will not consider such a case as meeting the needs of our IT users; it is therefore outside the scope of this work.

There is a saying among computer security researchers that states, “If you think cryptography is the only technical answer, you understand neither cryptography nor the question.” Cryptography is a critical technical safeguard that is ideally applied to information in its transmission and storage states, if it is applied at all. The analysis of the McCumber Cube approach will help you determine if it is required and, then, to what degree.

There are numerous cryptosystems in current use and many more under development. The applicability and effectiveness of each is determined by the quality of the algorithm and the strength and adaptability of the key. Such systems are usually paired with a variety of authentication mechanisms. Technical confidentiality controls include not only cryptography, but authentication and intrusion detection technologies as well.

Maintaining the confidentiality of information resources requires not only technical safeguards, but policy and procedures as well. Procedural controls begin by ensuring only authorized persons can put data into the system and only authorized users can view the resultant information. Perhaps the most vital confidentiality policy, however, is the initial determination of who can view what information and under what conditions.

To provide adequate security enforcement, it is important to develop and publish a comprehensive matrix that defines the nature of confidentiality for your environment. This can be done by identifying individuals in the organization, but the most common way is to define people by their job titles or responsibilities. In this way, access requirements can be succinctly codified for the organization independent of the individuals in it.

Depending on the size of the information systems environment, this policy process can be either relatively simple or amazingly complex. It begins with an inventory of the individual roles and responsibilities throughout the organization that the IT system supports. Then a comprehensive list of all the available information resources needs to be developed. The two lists are then mapped against each other, pairing each role with the resources it requires.
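As a sketch, the role-to-resource mapping just described might look like the following. The roles, resources, and access rules here are hypothetical examples invented for illustration, not drawn from any particular organization.

```python
# Hypothetical inventories: organizational roles and information resources.
roles = ["payroll_clerk", "hr_manager", "help_desk"]
resources = ["salary_records", "personnel_files", "ticket_queue"]

# The confidentiality matrix: each role maps to the resources it may view.
# Anything not listed is denied by default.
access_matrix = {
    "payroll_clerk": {"salary_records"},
    "hr_manager": {"salary_records", "personnel_files"},
    "help_desk": {"ticket_queue"},
}

def may_view(role: str, resource: str) -> bool:
    """Grant access only if the matrix explicitly allows it."""
    return resource in access_matrix.get(role, set())

assert may_view("hr_manager", "personnel_files")
assert not may_view("help_desk", "salary_records")
```

Because the matrix is keyed by role rather than by named individuals, personnel changes do not require rewriting the policy, only reassigning people to roles.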

This process is often lacking in even the most security-conscious environments. Alone, this exercise can be extremely beneficial for objectively determining who should have access to specific informational resources. When combined with the structured process defined in this text, it becomes a powerful method to strategically assess organization confidentiality policies and a key tool for tactically applying technology to support security requirements.

Confidentiality is a relatively simple concept that, in practice, requires a broad spectrum of technology and procedural enforcement in IT systems. Once you have developed your confidentiality policies and have charted them in the methodology, you will have a basis for determining the requirements for cryptography and other confidentiality safeguards.


Integrity

The integrity element of security is foundational. Inaccurate information can be worse than worthless. It can provide a false understanding of the business environment or even a military battlefield and lead decision makers into taking self-destructive actions. Integrity consists of ensuring the information is accurate, complete, and robust. As with the concept of security itself, integrity represents an ideal. Obviously, there are limits to the ability of security safeguards to provide for complete and robust information resources, but the integrity attribute is a central aspect to security enforcement. Integrity controls also include cryptographic solutions as well as authentication, nonrepudiation, and comparative analysis.

Many current definitions of integrity are woefully inadequate and many are notoriously incorrect. One widely cited definition describes data integrity as the assurance that data can be accessed or modified only by authorized users. Obviously, this definition is wildly deficient. It assumes that authorized users will always acquire, maintain, and update information with 100 percent accuracy. It also presumes the user will not make any type of mistake nor undertake any malicious or nonmalicious activity that could jeopardize the integrity of information resources to which they have authorized access. Because both of these assumptions are patently preposterous, we can safely avoid the mistake of employing this definition as our understanding of information integrity.

For those familiar with database and storage systems use, the concept of data integrity has other meanings. Data integrity means, in the database sense, that you can correctly and consistently navigate and manipulate the tables in the database. According to database usage, there are two basic rules to ensure data integrity—entity integrity and referential integrity.

The entity integrity rule states that the value of the primary key can never be a null value (a null value is one that has no value and is not the same as a blank). Because a primary key is used to identify a unique row in a relational table in a database, its value must always be specified and should never be unknown. The integrity rule requires that insert, update, and delete operations maintain the uniqueness and existence of all primary keys.

The referential integrity rule states that if a relational table has a foreign key, then every value of the foreign key must either be null or match the values in the relational table in which that foreign key is a primary key. In this way, data is tracked effectively and that data is said to have integrity.
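Both database rules can be demonstrated with a small sketch using SQLite; the table and column names are illustrative only. Note that SQLite enforces foreign keys only when the corresponding pragma is switched on.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK checks off by default

conn.execute("CREATE TABLE department (dept_id TEXT PRIMARY KEY NOT NULL)")
conn.execute("""CREATE TABLE employee (
                  emp_id  TEXT PRIMARY KEY NOT NULL,
                  dept_id TEXT REFERENCES department(dept_id))""")

conn.execute("INSERT INTO department VALUES ('SEC')")
conn.execute("INSERT INTO employee VALUES ('E10', 'SEC')")  # valid foreign key
conn.execute("INSERT INTO employee VALUES ('E11', NULL)")   # a NULL foreign key is permitted

# Entity integrity: a primary key may never be null.
try:
    conn.execute("INSERT INTO department VALUES (NULL)")
except sqlite3.IntegrityError as err:
    entity_error = str(err)

# Referential integrity: a non-null foreign key must match an existing primary key.
try:
    conn.execute("INSERT INTO employee VALUES ('E12', 'XYZ')")  # no such department
except sqlite3.IntegrityError as err:
    referential_error = str(err)
```

The database rejects both violations at statement time, which is precisely the scope of these rules: they guarantee navigability of the tables, not the real-world accuracy of the values stored in them.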

Neither database concept is exactly what is necessary to define information integrity in the sense that a security practitioner needs to understand and apply it. These definitions were initially developed for early database development at a time when the concept of a global information infrastructure replete with both good guys and bad guys was almost inconceivable. The requirements for data integrity were made with several assumptions, including complete accuracy of data input into the database, and the ability of the retrieving application to find and display (or process) the data flawlessly.

Other mechanisms identified for obtaining and maintaining data and information integrity include physical protection of networked workstations, servers, and PCs. The protection of transmission media is another security practice often cited for the assurance of integrity. Although each of these specific mechanisms aids in the protection of information resources, such incomplete lists make for a poor way to define and assess information integrity.

Many security texts and systems vendors accurately point out that information and data integrity can be threatened by environmental hazards such as heat, dust particles, and electrical surges. Each of these has been known to garble data in transit or in storage and can result in lost information. However, for purposes of this methodology, these would be more accurately considered threats to the availability of the information in question rather than a loss of integrity.

For our purposes, the definition of data integrity will be a determination of how accurately and robustly the information reflects reality for a given application. In this case, we are not using the term application in the sense of a computer application. We are using the term broadly to define the intended use of the information to meet the needs of its owners.

Integrity is often defined as the need to ensure that the information is robust and accurate, but describing and quantifying those attributes is difficult. Rarely, if ever, can a decision maker claim to have perfect information. However, having accurate and robust information is critical. The integrity attribute of information is simply defined by the accurate and robust state of the information in relation to its intended use.

You will notice that many of the previously discredited definitions included safeguards as a way to define integrity. Although they are a key element for maintaining information integrity, the safeguards have to be accurately categorized as such. Many security safeguards function to help provide the requisite amount and degree of information integrity. We already presented definitions that included several physical and environmental safeguards necessary to help maintain information integrity, although many elements of an IT system are employed for this purpose.

Applications themselves are an important component in protecting information integrity. Safeguards incorporated into applications tend to be effective because applications are usually the vehicle by which data is converted to information. The application understands (and usually accomplishes) the transition from data into human-readable information. Applications also can enforce data entry and acquisition rules to screen out inaccurate or improperly formatted data elements. They also provide more immediate feedback to authorized users and administrators when policies or rules are violated. If one relies solely on database integrity constraints, a constraint can notify the user of a bad value only after the user has entered all the form values and the application has issued the INSERT statement (to use a SQL [Structured Query Language] example) to the database. However, you can design your application to verify the integrity of each value as it is entered and notify the user immediately in the event of a bad value.
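That contrast can be sketched in a short example of per-field validation at entry time. This is a hypothetical illustration rather than code from any particular system; the field names and validation rules are invented for the example.

```python
# Hypothetical per-field entry rules, checked as each value is typed,
# rather than waiting for the database INSERT to fail.
FIELD_RULES = {
    "age":   lambda v: v.isdigit() and 0 < int(v) < 120,
    "email": lambda v: "@" in v and "." in v.split("@")[-1],
}

def validate_field(name: str, value: str) -> bool:
    """Give immediate feedback on a single field, before any INSERT is issued."""
    rule = FIELD_RULES.get(name)
    return rule(value) if rule else True  # fields without rules pass by default

assert validate_field("age", "34")
assert not validate_field("age", "-5")  # rejected at entry time, not at INSERT
```

The database constraint remains the backstop; the application-level check simply moves the feedback to the moment of entry, where the user can correct the value immediately.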

Information integrity is one of the most demanding, yet subtlest and least defined, of the information attributes to maintain. The great majority of investment in safeguards and protective techniques is targeted at maintaining information integrity. Yet it is vital to ensure that integrity is assessed and enforced from the moment data is acquired or information is introduced into your systems.


Availability

If information is needed for a decision or for any other purpose and it is not there, it is simply not available. If integrity represents the accuracy and robustness of data, then availability is the timeliness factor. The availability element of security is often relegated to an afterthought or, at best, reduced to a simple demand for redundancy and uptime requirements. In practice, availability is often the single most critical assurance for critical IT systems.

For example, a profit-making enterprise's database of all its current customers has significant intrinsic value. If that information is deleted, the company may lose business from those customers because it can no longer access its own business information. Availability is thus a cornerstone security requirement and one that demands protection.

Historically, information availability has been relegated to the study and application of disparate disciplines and safeguards such as redundancy, backup systems, storage management, disaster recovery, and business continuity. Each of these represents an important aspect of the availability attribute, but none on its own or even in concert with others constitutes a robust and accurate definition. Availability is really quite simple to define and understand: It is the ability of stored, transmitted, or processed information to be used for its intended purposes when required.

Availability is a security constraint that must be considered on an equal footing with the more commonly cited attributes of confidentiality and integrity. The concepts of timeliness and accessibility are aspects just as important as accuracy and protection from unauthorized use. Security parameters must answer the question: Will I have my information when I need it?

There are nominally two main components to ensuring the availability of data and managing the risk of entrusting it to valuable technology resources: ensuring that systems operate to deliver data as needed, and backing up data to guard against system failure or data loss. As with any security-relevant requirement, it helps to quantify what constitutes acceptable availability and how much you are willing to pay to achieve that goal.

Redundancy is an important aspect of providing appropriate information availability. In many cases, having redundant databases, networks, and even workstations is necessary to ensure uninterrupted access to vital information resources. The costs of hardware, configuration, and maintenance can be high, but these must be weighed against the consequences of delay. The decision is made by the measurement of risk that is discussed in Appendix B.
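One common way to quantify the benefit of redundancy is the standard reliability-engineering formula for parallel components: if each component is independently available with probability a, then n redundant copies yield an overall availability of 1 - (1 - a)^n. The sketch below assumes independent failures, which real systems only approximate; the specific figures are illustrative.

```python
def parallel_availability(a: float, n: int) -> float:
    """Availability of n redundant components, each independently available
    with probability a: the system is down only if all n fail at once."""
    return 1 - (1 - a) ** n

single = parallel_availability(0.99, 1)  # 0.99   -> roughly 3.7 days of downtime per year
dual = parallel_availability(0.99, 2)    # 0.9999 -> roughly 53 minutes per year

assert abs(dual - 0.9999) < 1e-9
```

Numbers like these let you weigh the hardware, configuration, and maintenance costs of a second unit against the measured cost of the downtime it eliminates.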

Backup and recovery are other safeguards that must be considered to ensure availability. A backup is simply a copy of the data. This copy can, in addition to the data, include important parts of the database such as the control file and data files. A backup is a safeguard against unexpected data loss and application errors. If you lose the original data resource, a backup allows you to reconstruct it.

Backups can be subdivided into physical backups and logical backups. Physical backups are the primary concern in a backup and recovery strategy. They are represented as copies of physical database files. In contrast, logical backups contain logical data (for example, data tables and the associated stored procedures) extracted with a utility and stored in a binary file. Logical backups are most often used to supplement physical backups.
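A physical backup can be sketched as a verified byte-for-byte copy of the underlying file. The paths and file contents below are invented for illustration; a real backup strategy would also address scheduling, retention, and off-site storage.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 digest of a file's bytes, used to verify the copy."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

workdir = Path(tempfile.mkdtemp())
original = workdir / "datafile.db"           # hypothetical database file
original.write_bytes(b"pretend these are database pages")

backup = workdir / "datafile.db.bak"
shutil.copy2(original, backup)               # copy the physical file, preserving metadata

# Verify the copy before trusting it for recovery.
assert checksum(original) == checksum(backup)
```

The verification step matters: an unverified backup provides only the illusion of availability, since its value is discovered exactly when the original is already lost.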

Recovery systems work hand in hand with backup technology to ensure information can be restored to its primary function when required. A key element of this process is once again a risk management decision: solutions that provide near-immediate failover and recovery are, as a rule, significantly more expensive than solutions that depend on extensive manual recovery techniques.

Availability also employs a variety of other safeguards and countermeasures to ensure that information resources are available to decision makers when they are needed.


SECURITY IS SECURITY ONLY IN CONTEXT

Understanding the nature of security requires the practitioner to ensure that security is applied to information resources in the context of their environment. Security is a moot concept unless it is fully analyzed with the elements of information valuation, threats, and safeguards. This methodology is designed to help you assess these critical elements.
