Chapter 4. Security Policies

 

PORTIA: Of a strange nature is the suit you follow;
Yet in such rule that the Venetian law
Cannot impugn you as you do proceed.
[To Antonio.] You stand within his danger, do you not?

 
 --The Merchant of Venice, IV, i, 177–180.

A security policy defines “secure” for a system or a set of systems. Security policies can be informal or highly mathematical in nature. After defining a security policy precisely, we expand on the nature of “trust” and its relationship to security policies. We also discuss different types of policy models.

Security Policies

Consider a computer system to be a finite-state automaton with a set of transition functions that change state. Then:

  • Definition 4–1. A security policy is a statement that partitions the states of the system into a set of authorized, or secure, states and a set of unauthorized, or nonsecure, states.

A security policy sets the context in which we can define a secure system. What is secure under one policy may not be secure under a different policy. More precisely:

  • Definition 4–2. A secure system is a system that starts in an authorized state and cannot enter an unauthorized state.

Consider the finite-state machine in Figure 4-1. It consists of four states and five transitions. The security policy partitions the states into a set of authorized states A = { s1, s2 } and a set of unauthorized states UA = { s3, s4 }. This system is not secure, because regardless of which authorized state it starts in, it can enter an unauthorized state. However, if the edge from s1 to s3 were not present, the system would be secure, because it could not enter an unauthorized state from an authorized state.

  • Definition 4–3. A breach of security occurs when a system enters an unauthorized state.


Figure 4-1. A simple finite-state machine. In this example, the authorized states are s1 and s2.
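
Definition 4–2 can be checked mechanically for a small system by asking whether any unauthorized state is reachable from an authorized one. The following is a minimal Python sketch; the particular edge set is an assumption chosen to be consistent with the description of Figure 4-1 (five transitions, with s1 to s3 the only edge from an authorized state into UA), not a transcription of the figure itself.

from collections import deque

# Assumed transition set, consistent with the discussion of Figure 4-1.
transitions = {
    "s1": {"s2", "s3"},
    "s2": {"s1"},
    "s3": {"s4"},
    "s4": {"s3"},
}
authorized = {"s1", "s2"}       # A  = { s1, s2 }
unauthorized = {"s3", "s4"}     # UA = { s3, s4 }

def is_secure(start_states, edges, bad_states):
    """True iff no state in bad_states is reachable from any start state."""
    seen = set(start_states)
    queue = deque(start_states)
    while queue:
        state = queue.popleft()
        if state in bad_states:
            return False
        for nxt in edges.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

print(is_secure(authorized, transitions, unauthorized))   # False: s1 -> s3
transitions["s1"].discard("s3")                           # drop the offending edge
print(is_secure(authorized, transitions, unauthorized))   # True: UA now unreachable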

We informally discussed the three basic properties relevant to security in Section 1.1. We now define them precisely.

  • Definition 4–4. Let X be a set of entities and let I be some information. Then I has the property of confidentiality with respect to X if no member of X can obtain information about I.

Confidentiality implies that information must not be disclosed to some set of entities. It may be disclosed to others. The membership of set X is often implicit—for example, when we speak of a document that is confidential. Some entity has access to the document. All entities not authorized to have such access make up the set X.

  • Definition 4–5. Let X be a set of entities and let I be some information or a resource. Then I has the property of integrity with respect to X if all members of X trust I.

This definition is deceptively simple. In addition to trusting the information itself, the members of X also trust that the conveyance and storage of I do not change the information or its trustworthiness (this aspect is sometimes called data integrity). If I is information about the origin of something, or about an identity, the members of X trust that the information is correct and unchanged (this aspect is sometimes called origin integrity or, more commonly, authentication). Also, I may be a resource rather than information. In that case, integrity means that the resource functions correctly (meeting its specifications). This aspect is called assurance and will be discussed in Part 6, “Assurance.” As with confidentiality, the membership of X is often implicit.

  • Definition 4–6. Let X be a set of entities and let I be a resource. Then I has the property of availability with respect to X if all members of X can access I.

The exact definition of “access” in Definition 4–6 varies depending on the needs of the members of X, the nature of the resource, and the use to which the resource is put. If a book-selling server takes up to 1 hour to service a request to purchase a book, that may meet the client's requirements for “availability.” If a server of medical information takes up to 1 hour to service a request for information regarding an allergy to an anesthetic, that will not meet an emergency room's requirements for “availability.”

A security policy considers all relevant aspects of confidentiality, integrity, and availability. With respect to confidentiality, it identifies those states in which information leaks to those not authorized to receive it. This includes not only the leakage of rights but also the illicit transmission of information without leakage of rights, called information flow. Also, the policy must handle dynamic changes of authorization, so it includes a temporal element. For example, a contractor working for a company may be authorized to access proprietary information during the lifetime of a nondisclosure agreement, but when that nondisclosure agreement expires, the contractor can no longer access that information. This aspect of the security policy is often called a confidentiality policy.

With respect to integrity, a security policy identifies authorized ways in which information may be altered and entities authorized to alter it. Authorization may derive from a variety of relationships, and external influences may constrain it; for example, in many transactions, a principle called separation of duties forbids an entity from completing the transaction on its own. Those parts of the security policy that describe the conditions and manner in which data can be altered are called the integrity policy.

With respect to availability, a security policy describes what services must be provided. It may present parameters within which the services will be accessible—for example, that a browser may download Web pages but not Java applets. It may require a level of service—for example, that a server will provide authentication data within 1 minute of the request being made. This relates directly to issues of quality of service.
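
A level-of-service requirement of this kind can be phrased as a deadline on each request. The fragment below is a hedged Python sketch: the one-minute bound comes from the example above, but fetch_authentication_data and the way a "request" is represented are assumptions for illustration.

import time

DEADLINE_SECONDS = 60.0            # policy: respond within 1 minute

def fetch_authentication_data():
    """Stand-in for a request to an authentication server (assumed)."""
    time.sleep(0.1)                # simulate a prompt response
    return {"user": "alice", "token": "..."}

def meets_availability(request, deadline=DEADLINE_SECONDS):
    """True iff the request produces a result within the policy's deadline."""
    start = time.monotonic()
    result = request()
    elapsed = time.monotonic() - start
    return result is not None and elapsed <= deadline

print(meets_availability(fetch_authentication_data))   # True if served in time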

The statement of a security policy may formally state the desired properties of the system. If the system is to be provably secure, the formal statement will allow the designers and implementers to prove that those desired properties hold. If a formal proof is unnecessary or infeasible, analysts can test that the desired properties hold for some set of inputs. Later chapters will discuss both these topics in detail.

In practice, a less formal type of security policy defines the set of authorized states. Typically, the security policy assumes that the reader understands the context in which the policy is issued—in particular, the laws, organizational policies, and other environmental factors. The security policy then describes conduct, actions, and authorizations defining “authorized users” and “authorized use.”

Consider, for example, one student copying another student's homework assignment from a file that the file access control mechanisms permit him to read, even though the institution's policy forbids such copying. The retort that the first student could copy the file, and therefore the action is allowed, confuses mechanism with policy. The distinction is sharp:

  • Definition 4–7. A security mechanism is an entity or procedure that enforces some part of the security policy.

Security policies are often implicit rather than explicit. This causes confusion, especially when the policy is defined in terms of the mechanisms. Such a definition may be ambiguous—for example, if some mechanisms prevent a specific action and others allow it. Policies of this kind lead to confusion, and sites should avoid them.

The difference between a policy and an abstract description of that policy is crucial to the analysis that follows.

  • Definition 4–8. A security model is a model that represents a particular policy or set of policies.

A model abstracts details relevant for analysis. Analyses rarely discuss particular policies; they usually focus on specific characteristics of policies, because many policies exhibit these characteristics; and the more policies with those characteristics, the more useful the analysis. By the HRU result (see Theorem 3–2), no single nontrivial analysis can cover all policies, but restricting the class of security policies sufficiently allows meaningful analysis of that class of policies.

Types of Security Policies

Each site has its own requirements for the levels of confidentiality, integrity, and availability, and the site policy states these needs for that particular site.

  • Definition 4–9. A military security policy (also called a governmental security policy) is a security policy developed primarily to provide confidentiality.

The name comes from the military's need to keep information, such as the date that a troop ship will sail, secret. Although integrity and availability are important, organizations using this class of policies can overcome the loss of either—for example, by using orders not sent through a computer network. But the compromise of confidentiality would be catastrophic, because an opponent would be able to plan countermeasures (and the organization may not know of the compromise).

Confidentiality is one of the factors of privacy, an issue recognized in the laws of many government entities (such as the Privacy Act of the United States and similar legislation in Sweden). Aside from constraining what information a government entity can legally obtain from individuals, such acts place constraints on the disclosure and use of that information. Unauthorized disclosure can result in penalties that include jail or fines; also, such disclosure undermines the authority and respect that individuals have for the government and inhibits them from disclosing that type of information to the agencies so compromised.

  • Definition 4–10. A commercial security policy is a security policy developed primarily to provide integrity.

The name comes from the need of commercial firms to prevent tampering with their data, because they could not survive such compromises. For example, if the confidentiality of a bank's computer is compromised, a customer's account balance may be revealed. This would certainly embarrass the bank and possibly cause the customer to take her business elsewhere. But the loss to the bank's “bottom line” would be minor. However, if the integrity of the computer holding the accounts were compromised, the balances in the customers' accounts could be altered, with financially ruinous effects.

Some integrity policies use the notion of a transaction; like database specifications, they require that actions occur in such a way as to leave the database in a consistent state. These policies, called transaction-oriented integrity security policies, are critical to organizations that require consistency of databases.
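
A minimal sketch of the transaction idea, using Python's sqlite3 module, follows; the account table and the consistency rule (no balance may go negative) are assumptions for illustration. The point is the one the policy makes: either every step of the transfer takes effect or none does, so the database remains in a consistent state.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

def transfer(src, dst, amount):
    """Debit src and credit dst as a single all-or-nothing transaction."""
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        (balance,) = conn.execute("SELECT balance FROM accounts WHERE name = ?",
                                  (src,)).fetchone()
        if balance < 0:            # assumed consistency rule: no negative balances
            raise ValueError("insufficient funds")
        conn.commit()              # both updates take effect
    except Exception:
        conn.rollback()            # neither update takes effect
        raise

transfer("alice", "bob", 30)       # succeeds: balances become 70 and 80
try:
    transfer("bob", "alice", 500)  # violates the rule: rolled back entirely
except ValueError:
    pass
print(dict(conn.execute("SELECT name, balance FROM accounts")))  # {'alice': 70, 'bob': 80}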

The role of trust in these policies highlights their difference. Confidentiality policies place no trust in objects; so far as the policy is concerned, the object could be a factually correct report or a tale taken from Aesop's Fables. The policy statement dictates whether that object can be disclosed. It says nothing about whether the object should be believed.

Integrity policies, to the contrary, indicate how much the object can be trusted. Given that this level of trust is correct, the policy dictates what a subject can do with that object. But the crucial question is how the level of trust is assigned. For example, if a site obtains a new version of a program, should that program have high integrity (that is, the site trusts the new version of that program) or low integrity (that is, the site does not yet trust the new program), or should the level of trust be somewhere in between (because the vendor supplied the program, but it has not been tested at the local site as thoroughly as the old version)? This makes integrity policies considerably more nebulous than confidentiality policies. The assignment of a level of confidentiality is based on what the classifier wants others to know, but the assignment of a level of integrity is based on what the classifier subjectively believes to be true about the trustworthiness of the information.

Two other terms describe policies related to security needs; because they appear elsewhere, we define them now.

  • Definition 4–11. A confidentiality policy is a security policy dealing only with confidentiality.

  • Definition 4–12. An integrity policy is a security policy dealing only with integrity.

Both confidentiality policies and military policies deal with confidentiality; however, a confidentiality policy does not deal with integrity at all, whereas a military policy may. A similar distinction holds for integrity policies and commercial policies.

The Role of Trust

The role of trust is crucial to understanding the nature of computer security. This book presents theories and mechanisms for analyzing and enhancing computer security, but any theories or mechanisms rest on certain assumptions. When someone understands the assumptions her security policies, mechanisms, and procedures rest on, she will have a very good understanding of how effective those policies, mechanisms, and procedures are. Let us examine the consequences of this maxim.

A system administrator receives a security patch for her computer's operating system. She installs it. Has she improved the security of her system? She has indeed, given the correctness of certain assumptions:

  1. She is assuming that the patch came from the vendor and was not tampered with in transit, rather than from an attacker trying to trick her into installing a bogus patch that would actually open security holes. Winkler [1052] describes a penetration test in which this technique enabled attackers to gain direct access to the computer systems of the target.

  2. She is assuming that the vendor tested the patch thoroughly. Vendors are often under considerable pressure to issue patches quickly and sometimes test them only against a particular attack. The vulnerability may be deeper, however, and other attacks may succeed. When someone released an exploit of one vendor's operating system code, the vendor released a correcting patch in 24 hours. Unfortunately, the patch opened a second hole, one that was far easier to exploit. The next patch (released 48 hours later) fixed both problems correctly.

  3. She is assuming that the vendor's test environment corresponds to her environment. Otherwise, the patch may not work as expected. As an example, a vendor's patch once reset ownerships of executables to the user root. At some installations, maintenance procedures required that these executables be owned by the user bin. The vendor's patch had to be undone and fixed for the local configuration. This assumption also covers patches that conflict with one another (such as patches from different vendors of software that the system is using).

  4. She is assuming that the patch is installed correctly. Some patches are simple to install, because they are simply executable files. Others are complex, requiring the system administrator to reconfigure network-oriented properties, add a user, modify the contents of a registry, give rights to some set of users, and then reboot the system. An error in any of these steps could prevent the patch from correcting the problems, as could an inconsistency between the environments in which the patch was developed and in which the patch is applied. Furthermore, the patch may claim to require specific privileges, when in reality the privileges are unnecessary and in fact dangerous.

These assumptions are fairly high-level, but invalidating any of them makes the patch a potential security problem.

Assumptions arise also at a much lower level. Consider formal verification (see Chapter 20), an oft-touted panacea for security problems. The important aspect is that formal verification provides a formal mathematical proof that a given program P is correct—that is, given any set of inputs i, j, k, the program P will produce the output x that its specification requires. This level of assurance is greater than most existing programs provide, and hence makes P a desirable program. Suppose a security-related program S has been formally verified for the operating system O. What assumptions would be made when it was installed?

  1. The formal verification of S is correct—that is, the proof has no errors. Because formal verification relies on automated theorem provers as well as human analysis, the theorem provers must be programmed correctly.

  2. The assumptions made in the formal verification of S are correct; specifically, the preconditions hold in the environment in which the program is to be executed. These preconditions are typically fed to the theorem provers as well as the program S. An implicit aspect of this assumption is that the version of O in the environment in which the program is to be executed is the same as the version of O used to verify S.

  3. The program will be transformed into an executable whose actions correspond to those indicated by the source code; in other words, the compiler, linker, loader, and any libraries are correct. An experiment with one version of the UNIX operating system demonstrated how devastating a rigged compiler could be, and attackers have replaced libraries with others that performed additional functions, thereby increasing security risks.

  4. The hardware will execute the program as intended. A program that relies on floating point calculations would yield incorrect results on some computer CPU chips, regardless of any formal verification of the program, owing to a flaw in these chips [202]. Similarly, a program that relies on inputs from hardware assumes that specific conditions cause those inputs.

The point is that any security policy, mechanism, or procedure is based on assumptions that, if incorrect, destroy the superstructure on which it is built. Analysts and designers (and users) must bear this in mind, because unless they understand what the security policy, mechanism, or procedure is based on, they jump from an unwarranted assumption to an erroneous conclusion.

Types of Access Control

A security policy may use two types of access controls, alone or in combination. In one, access control is left to the discretion of the owner. In the other, the operating system controls access, and the owner cannot override the controls.

The first type is based on user identity and is the most widely known:

  • Definition 4–13. If an individual user can set an access control mechanism to allow or deny access to an object, that mechanism is a discretionary access control (DAC), also called an identity-based access control (IBAC).

Discretionary access controls base access rights on the identity of the subject and the identity of the object involved. Identity is the key; the owner of the object constrains who can access it by allowing only particular subjects to have access. The owner states the constraint in terms of the identity of the subject, or the owner of the subject.

The second type of access control is based on fiat, and identity is irrelevant:

  • Definition 4–14. When a system mechanism controls access to an object and an individual user cannot alter that access, the control is a mandatory access control (MAC), occasionally called a rule-based access control.

The operating system enforces mandatory access controls. Neither the subject nor the owner of the object can determine whether access is granted. Typically, the system mechanism will check information associated with both the subject and the object to determine whether the subject should access the object. Rules describe the conditions under which access is allowed.

  • Definition 4–15. An originator controlled access control (ORCON or ORGCON) bases access on the creator of an object (or the information it contains).

The goal of this control is to allow the originator of the file (or of the information it contains) to control the dissemination of the information. The owner of the file has no control over who may access the file. Section 7.3 discusses this type of control in detail.
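
The differences among these three controls can be made concrete in code. The following Python fragment is an illustrative sketch only; the subjects, labels, dominance rule, and helper names are all assumptions rather than anything prescribed by the definitions.

# DAC: the owner of the object maintains the access list at her discretion.
dac_acl = {"design.doc": {"alice", "bob"}}

def dac_allows(subject, obj):
    return subject in dac_acl.get(obj, set())

# MAC: the system assigns labels; neither subject nor owner can change them.
LEVELS = {"unclassified": 0, "secret": 1, "top secret": 2}
subject_label = {"alice": "secret", "bob": "unclassified"}
object_label = {"design.doc": "secret"}

def mac_allows(subject, obj):
    # assumed rule: the subject's label must dominate the object's label
    return LEVELS[subject_label[subject]] >= LEVELS[object_label[obj]]

# ORCON: the originator of the information, not the file's owner, decides
# who may receive it.
originator = {"design.doc": "carol"}
originator_release = {"carol": {"alice"}}    # carol permits only alice

def orcon_allows(subject, obj):
    return subject in originator_release.get(originator[obj], set())

for user in ("alice", "bob"):
    print(user, dac_allows(user, "design.doc"),
          mac_allows(user, "design.doc"), orcon_allows(user, "design.doc"))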

Policy Languages

A policy language is a language for representing a security policy. High-level policy languages express policy constraints on entities using abstractions. Low-level policy languages express constraints in terms of input or invocation options to programs existing on the systems.

High-Level Policy Languages

A policy is independent of the mechanisms. It describes constraints placed on entities and actions in a system. A high-level policy language is an unambiguous expression of policy. Such precision requires a mathematical or programmatic formulation of policy; common English is not precise enough.

Assume that a system is connected to the Internet. A user runs a World Wide Web browser. Web browsers download programs from remote sites and execute them locally. The local system's policy may constrain what these downloaded programs can do.
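
As an illustration of expressing such a policy at a high level, the sketch below states one plausible constraint set in Python; the Constraint class, the entity names, and the specific rules are assumptions for illustration, and nothing is said about how the constraints would be enforced.

from dataclasses import dataclass

@dataclass(frozen=True)
class Constraint:
    subject: str       # class of entities the constraint governs
    action: str        # abstract action, not a system call
    target: str        # abstract resource
    allowed: bool

policy = [
    Constraint("downloaded program", "read", "local file system", False),
    Constraint("downloaded program", "write", "local file system", False),
    Constraint("downloaded program", "connect", "originating host", True),
]

def permitted(subject, action, target):
    """Default deny: an action is allowed only if some constraint allows it."""
    for c in policy:
        if (c.subject, c.action, c.target) == (subject, action, target):
            return c.allowed
    return False

print(permitted("downloaded program", "read", "local file system"))    # False
print(permitted("downloaded program", "connect", "originating host"))  # True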

Constraints expressed at this level of abstraction ignore implementation issues, which is the hallmark of a high-level policy language. The domain-type enforcement language (DTEL) [54] grew from an observation of Boebert and Kain [126] that access could be based on types; they confine their work to the types “data” and “instructions.” This observation served as the basis for a firewall [996] and for other secure system components. DTEL uses implementation-level constructs to express constraints in terms of types, but not as arguments or input to specific system commands. Hence, it combines elements of low-level and high-level languages. Because it describes configurations in the abstract, it is a high-level policy language.

Low-Level Policy Languages

A low-level policy language is simply a set of inputs or arguments to commands that set, or check, constraints on a system.

Example: Academic Computer Security Policy

Security policies can have few details, or many. The explicitness of a security policy depends on the environment in which it exists. A research lab or office environment may have an unwritten policy. A bank needs a very explicit policy. In practice, policies begin as generic statements of constraints on the members of the organization. These statements are derived from an analysis of threats, as described in Chapter 1, “An Overview of Computer Security.” As questions (or incidents) arise, the policy is refined to cover specifics. As an example, we present an academic security policy. The full policy is presented in Chapter 35, “Example Academic Security Policy.”

General University Policy

This policy is an “Acceptable Use Policy” (AUP) for the Davis campus of the University of California. Because computing services vary from campus unit to campus unit, the policy does not dictate how the specific resources can be used. Instead, it presents generic constraints that the individual units can tighten.

The policy first presents the goals of campus computing: to provide access to resources and to allow the users to communicate with others throughout the world. It then states the responsibilities associated with the privilege of using campus computers. All users must “respect the rights of other users, respect the integrity of the systems and related physical resources, and observe all relevant laws, regulations, and contractual obligations.”[1]

The policy states the intent underlying the rules, and notes that the system managers and users must abide by the law (for example, “Since electronic information is volatile and easily reproduced, users must exercise care in acknowledging and respecting the work of others through strict adherence to software licensing agreements and copyright laws”).[2]

The enforcement mechanisms in this policy are procedural. For minor violations, either the unit itself resolves the problem (for example, by asking the offender not to do it again) or formal warnings are given. For more serious infractions, the administration may take stronger action such as denying access to campus computer systems. In very serious cases, the university may invoke disciplinary action. The Office of Student Judicial Affairs hears such cases and determines appropriate consequences.

The policy then enumerates specific examples of actions that are considered to be irresponsible use. Among these are illicitly monitoring others, spamming, and locating and exploiting security vulnerabilities. These are examples; they are not exhaustive. The policy concludes with references to other documents of interest.

This is a typical AUP. It is written informally and is aimed at the user community that is to abide by it. The electronic mail policy presents an interesting contrast to the AUP, probably because the AUP is for UC Davis only, and the electronic mail policy applies to all nine University of California campuses.

Electronic Mail Policy

The university has several auxiliary policies, which are subordinate to the general university policy. The electronic mail policy describes the constraints imposed on access to, and use of, electronic mail. It conforms to the general university policy but details additional constraints on both users and system administrators.

The electronic mail policy consists of three parts. The first is a short summary intended for the general user community, much as the AUP for UC Davis is intended for the general user community. The second part is the full policy for all university campuses and is written as precisely as possible. The last document describes how the Davis campus implements the general university electronic mail policy.

The Electronic Mail Policy Summary

The summary first warns users that their electronic mail is not private. It may be read accidentally, in the course of normal system maintenance, or in other ways stated in the full policy. It also warns users that electronic mail can be forged or altered as well as forwarded (and that forwarded messages may be altered). This section is interesting because policies rarely alert users to the threats they face; policies usually focus on the remedial techniques.

The next two sections are lists of what users should, and should not, do. They may be summarized as “think before you send; be courteous and respectful of others; and don't interfere with others' use of electronic mail.” They emphasize that supervisors have the right to examine employees' electronic mail that relates to the job. Surprisingly, the university does not ban personal use of electronic mail, probably in the recognition that enforcement would demoralize people and that the overhead of carrying personal mail is minimal in a university environment. The policy does require that users not use personal mail to such an extent that it interferes with their work or causes the university to incur extra expense.

Finally, the policy concludes with a statement about its application. In a private company, this would be unnecessary, but the University of California is a quasi-governmental institution and as such is bound to respect parts of the United States Constitution and the California Constitution that private companies are not bound to respect. Also, as an educational institution, the university takes the issues surrounding freedom of expression and inquiry very seriously. Would a visitor to campus be bound by these policies? The final section says yes. Would an employee of Lawrence Livermore National Laboratories, run for the Department of Energy by the University of California, also be bound by these policies? Here, the summary suggests that they would be, but whether the employees of the lab are Department of Energy employees or University of California employees could affect this. So we turn to the full policy.

The Full Policy

The full policy also begins with a description of the context of the policy, as well as its purpose and scope. The scope here is far more explicit than that in the summary. For example, the full policy does not apply to e-mail services of the Department of Energy laboratories run by the university, such as Lawrence Livermore National Laboratories. Moreover, this policy does not apply to printed copies of e-mail, because other university policies apply to such copies.

The general provisions follow. They state that e-mail services and infrastructure are university property, and that all who use them are expected to abide by the law and by university policies. Failure to do so may result in access to e-mail being revoked. The policy reiterates that the university will apply principles of academic freedom and freedom of speech in its handling of e-mail, and so will seek access to e-mail without the holder's permission only under extreme circumstances, which are enumerated, and only with the approval of a campus vice chancellor or a university vice president (essentially, the second ranking officer of a campus or of the university system). If this is infeasible, the e-mail may be read only as is needed to resolve the emergency, and then authorization must be secured after the fact.

The next section discusses legitimate and illegitimate use of the university's email. The policy allows anonymity to senders provided that it does not violate laws or other policies. It disallows using mail to interfere with others, such as by sending spam or letter bombs. It also expressly permits the use of university facilities for sending personal e-mail, provided that doing so does not interfere with university business; and it cautions that such personal e-mail may be treated as a “University record” subject to disclosure.

The discussion of security and confidentiality emphasizes that, although the university will not go out of its way to read e-mail, it can do so for legitimate business purposes and to keep e-mail service robust and reliable. The section on archiving and retention says that people may be able to recover e-mail from end systems where it may be archived as part of a regular backup.

The last three sections discuss the consequences of violations and direct the chancellor of each campus to develop procedures to implement the policy.

An interesting sidelight occurs in Appendix A, “Definitions.” The definition of “E-mail” includes any computer records viewed with e-mail systems or services, and the “transactional information associated with such records [E-mail], such as headers, summaries, addresses, and addressees.” This appears to encompass the network packets used to carry the e-mail from one host to another. This ambiguity illustrates the problem with policies. The language is imprecise. This motivates the use of more mathematical languages, such as DTEL, for specifying policies.

Implementation at UC Davis

This interpretation of the policy simply specifies those points delegated to the campus. Specifically, “incidental personal use” is not allowed if that personal use benefits a non-university organization, with a few specific exceptions enumerated in the policy. Then procedures for inspecting, monitoring, and disclosing the contents of email are given, as are appeal procedures. The section on backups states that the campus does not archive all e-mail, and even if e-mail is backed up incidental to usual backup practices, it need not be made available to the employee.

This interpretation adds campus-specific requirements and procedures to the university's policy. The local augmentation amplifies the system policy; it does not contradict it or limit it. Indeed, what would happen if the campus policy conflicted with the system's policy? In general, the higher (system-wide) policy would prevail. The advantage of leaving implementation to the campuses is that they can take into account local variations and customs, as well as any peculiarities in the way the administration and the Academic Senate govern that campus.

Security and Precision

Chapter 1 presented definitions of security and precision in terms of states of systems. Can one devise a generic procedure for developing a mechanism that is both secure and precise? Jones and Lipton [526] explored this question for confidentiality policies; similar results hold for integrity policies. For this analysis, they view programs as abstract functions.

  • Definition 4–16. Let p be a function p: I1 × ... × In → R. Then p is a program with n inputs ik ∊ Ik, 1 ≤ k ≤ n, and one output r ∊ R.

  • The observability postulate makes one assumption of what follows explicit.

  • Axiom 4–1. (The observability postulate.) The output of a function p(i1, ..., in) encodes all available information about i1, ..., in.

Consider a program that does not alter information on the system, but merely provides a “view” of its inputs. Confidentiality policies seek to control what views are available; hence the relevant question is whether the value of p(i1, ..., in) contains any information that it should not contain.

This postulate is needed because information can be transmitted by modulating shared resources such as runtime, file space used, and other channels (these are called covert channels and are discussed in Chapter 17). Even though these channels are not intended to be used for sending information, that they are shared enables violation of confidentiality policies. From an abstract point of view, covert channels are part of the output (result) of the program's execution, and hence the postulate is appropriate. But as a matter of implementation, these channels may be observable even when the program's output is not.

Let E be the set of outputs from a program p that indicate errors.

  • Definition 4–17. Let p be a function p: I1 × ... × In → R. A protection mechanism m is a function m: I1 × ... × In → R ∪ E for which, when ik ∊ Ik, 1 ≤ k ≤ n, either

    1. m(i1, ..., in) = p(i1, ..., in) or

    2. m(i1, ..., in) ∊ E.

Informally, this definition says that every legal input to m produces either the same value as for p or an error message. The set of output values from p that are excluded as outputs from m is the set of outputs that would impart confidential information.
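
A minimal Python sketch of Definition 4–17 follows; the program p, the error set E, and the condition under which the mechanism withholds p's output are assumptions for illustration.

E = {"ERROR: access denied"}            # assumed set of error outputs

def p(balance, requester):
    """The program being protected: it reveals the balance to any requester."""
    return balance

def m(balance, requester):
    """A protection mechanism: returns p's value or an element of E, nothing else."""
    if requester != "account holder":
        return "ERROR: access denied"   # an element of E
    return p(balance, requester)        # identical to p on permitted inputs

print(m(120, "account holder"))   # 120, the same value p would return
print(m(120, "stranger"))         # an error output drawn from E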

Now we define a confidentiality policy.

  • Definition 4–18. A confidentiality policy for the program p: I1 × ... × In → R is a function c: I1 × ... × In → A, where A ⊆ I1 × ... × In.

In this definition, A corresponds to the set of inputs that may be revealed. The complement of A with respect to I1 × ... × In corresponds to the confidential inputs. In some sense, the function c filters out inputs that are to be kept confidential.

The next definition captures how well a security mechanism conforms to a stated confidentiality policy.

  • Definition 4–19. Let c: I1 × ... × In → A be a confidentiality policy for a program p. Let m: I1 × ... × In → R ∪ E be a security mechanism for the same program p. Then the mechanism m is secure if and only if there is a function m': A → R ∪ E such that, for all ik ∊ Ik, 1 ≤ k ≤ n, m(i1, ..., in) = m'(c(i1, ..., in)).

In other words, given any set of inputs, the protection mechanism m returns values consistent with the stated confidentiality policy c. Here, the term “secure” is a synonym for “confidential.” We can derive analogous results for integrity policies.
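
Over a small, finite input space, Definition 4–19 can be checked by brute force: m is secure exactly when its output is determined by what the policy c reveals. The sketch below is illustrative; the input sets, the program, the policy, and the mechanisms are all assumptions.

from itertools import product

I1 = [0, 1, 2]                      # input that may be revealed
I2 = [0, 1]                         # confidential input

def c(i1, i2):                      # allow-style policy: reveal only i1
    return (i1,)

def p(i1, i2):                      # the program: its output leaks i2
    return i1 * 10 + i2

def masked(i1, i2):                 # candidate mechanism: withholds p's output
    return "ERROR: withheld"        # an assumed error output in E

def is_secure(m, c, inputs):
    """m is secure iff m(i) is determined by c(i) alone, i.e., some m' with m = m'(c(.)) exists."""
    seen = {}
    for i in inputs:
        key, value = c(*i), m(*i)
        if key in seen and seen[key] != value:
            return False            # same policy view, different outputs
        seen[key] = value
    return True

print(is_secure(p, c, product(I1, I2)))       # False: p reveals the confidential i2
print(is_secure(masked, c, product(I1, I2)))  # True, though maximally imprecise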

The distinguished policy allow: I1 × ... × In → A generates a selective permutation of its inputs. By “selective,” we mean that it may omit inputs. Hence, the function c(i1, ..., in) = i1 is an example of allow, because its output is a permutation of some of its inputs. More generally, for k ≤ n,

  • allow(i1, ..., in) = (i1', ..., ik')

where i1', ..., ik' is a permutation of any k of i1, ..., in.

A secure mechanism ensures that the policy is obeyed. However, it may also disallow actions that do not violate the policy. In that sense, a secure mechanism may be overly restrictive. The notion of precision measures the degree of overrestrictiveness.

  • Definition 4–20. Let m1 and m2 be two distinct protection mechanisms for the program p under the policy c. Then m1 is as precise as m2 (m1 ≈ m2) provided that, for all inputs (i1, …, in), if m2(i1, …, in) = p(i1, …, in), then m1(i1, …, in) = p(i1, …, in). We say that m1 is more precise than m2 (m1 ~ m2) if there is an input (i1′, …, in′) such that m1(i1′, …, in′) = p(i1′, …, in′) and m2(i1′, …, in′) ≠ p(i1′, …, in′).
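
Over a finite input space, the “as precise as” relation can also be checked exhaustively. In the hedged sketch below, the program, the two mechanisms, and the input set are assumptions chosen so that one mechanism is strictly more precise than the other.

from itertools import product

INPUTS = list(product([0, 1, 2], [0, 1]))     # assumed (i1, i2) input space

def p(i1, i2):
    return i1 * 10                            # output depends only on i1

def m1(i1, i2):
    return p(i1, i2) if i1 != 2 else "ERROR"  # withholds output only when i1 == 2

def m2(i1, i2):
    return "ERROR"                            # withholds output on every input

def as_precise_as(ma, mb, inputs):
    """ma is as precise as mb: wherever mb matches p, ma matches p too."""
    return all(ma(*i) == p(*i) for i in inputs if mb(*i) == p(*i))

print(as_precise_as(m1, m2, INPUTS))   # True: m1 agrees with p whenever m2 does
print(as_precise_as(m2, m1, INPUTS))   # False: m2 denies inputs m1 permits, so m1 is more precise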

An obvious question is whether or not two protection mechanisms can be combined to form a new mechanism that is as precise as the two original ones. To answer this, we need to define “combines,” which we formalize by the notion of “union.”

  • Definition 4–21. Let m1 and m2 be protection mechanisms for the program p. Then their union m3 = m1 ∪ m2 is defined as

    m3(i1, ..., in) = p(i1, ..., in), if m1(i1, ..., in) = p(i1, ..., in) or m2(i1, ..., in) = p(i1, ..., in); otherwise, m3(i1, ..., in) = m1(i1, ..., in).

This definition says that for inputs on which m1 and m2 return the same value as p, their union does also. Otherwise, that mechanism returns the same value as m1. From this definition and the definitions of secure and precise, we have:
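
The union construction translates directly into code. The sketch below is illustrative (the program and mechanisms are assumptions); it builds m1 ∪ m2 exactly as the definition describes and spot-checks, on a finite input set, the precision claims of the theorem that follows.

from itertools import product

INPUTS = list(product([0, 1, 2], [0, 1]))     # assumed input space

def p(i1, i2):
    return i1 * 10

def m1(i1, i2):                               # agrees with p only when i1 == 0
    return p(i1, i2) if i1 == 0 else "ERROR"

def m2(i1, i2):                               # agrees with p only when i1 == 1
    return p(i1, i2) if i1 == 1 else "ERROR"

def union(ma, mb):
    """Build ma ∪ mb per the definition: follow p where either does, else follow ma."""
    def m3(*i):
        if ma(*i) == p(*i) or mb(*i) == p(*i):
            return p(*i)
        return ma(*i)
    return m3

m3 = union(m1, m2)
# Spot-check: the union is at least as precise as each of its components.
print(all(m3(*i) == p(*i) for i in INPUTS if m1(*i) == p(*i)))   # True
print(all(m3(*i) == p(*i) for i in INPUTS if m2(*i) == p(*i)))   # True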

  • Theorem 4–1. Let m1 and m2 be secure protection mechanisms for a program p and policy c. Then m1 ∪ m2 is also a secure protection mechanism for p and c. Furthermore, m1 ∪ m2 ≈ m1 and m1 ∪ m2 ≈ m2.

  • Generalizing, we have:

  • Theorem 4–2. For any program p and security policy c, there exists a precise, secure mechanism m* such that, for all secure mechanisms m associated with p and c, m* ≈ m.

  • Proof: Immediate by induction on the number of secure mechanisms associated with p and c.

This “maximally precise” mechanism m* is the mechanism that ensures security while minimizing the number of denials of legitimate actions. If there is an effective procedure for determining this mechanism, we can develop mechanisms that are both secure and precise. Unfortunately:

  • Theorem 4–3. There is no effective procedure that determines a maximally precise, secure mechanism for any policy and program.

  • Proof: Let the policy c be the constant function—that is, no information about any of the inputs is allowed in the output. Let p be a program that computes the value of some total function T(x) and assigns it to the variable z. We may without loss of generality take T(0) = 0.

  • Let q be a program of the following form:

    p;
    if z = 0 then y := 1 else y := 2;
    halt;
    
  • Now consider the value of the protection mechanism m at 0. Because c is constant, m must also be constant. Using the program above, either m(0) = 1 (if p, and hence q, completes) or it is undefined (if p does not halt before the “if” statement).

  • If, for all inputs x, T(x) = 0, then m(x) = 1 (because m is secure). If there is an input x' for which T(x') ≠ 0, then m(x') = 2 (again, because m is secure) or is undefined (if p does not halt before the assignment). In either case, m is not a constant; hence, no such mechanism m can exist. Thus, m(0) = 1 if and only if T(x) = 0 for all x.

  • If we can effectively determine m, we can effectively determine whether T(x) = 0 for all x. But that question is undecidable, so no such effective procedure can exist.

There is no general procedure for devising a mechanism that conforms exactly to a specific security policy and yet allows all actions that the policy allows. It may be possible to do so in specific cases, especially when a mechanism defines a policy, but there is no general way to devise a precise and secure mechanism.

Summary

Security policies define “security” for a system or site. They may be implied policies defined by the common consensus of the community, or they may be informal policies whose interpretations are defined by the community. Both of these types of policies are usually ambiguous and do not precisely define “security.” A policy may be formal, in which case ambiguities arise either from the use of natural languages such as English or from the failure to cover specific areas.

Formal mathematical models of policies enable analysts to deduce a rigorous definition of “security” but do little to improve the average user's understanding of what “security” means for a site. The average user is not mathematically sophisticated enough to read and interpret the mathematics.

Trust underlies all policies and enforcement mechanisms. Policies themselves make assumptions about the way systems, software, hardware, and people behave. At a lower level, security mechanisms and procedures also make such assumptions. Even when rigorous methodologies (such as formal mathematical models or formal verification) are applied, the methodologies themselves simply push the assumptions, and therefore the trust, to a lower level. Understanding the assumptions and the trust involved in any policies and mechanisms deepens one's understanding of the security of a system.

This brief overview of policy, and of policy expression, lays the foundation for understanding the more detailed policy models used in practice.

Research Issues

The critical issue in security policy research is the expression of policy in an easily understood yet precise form. The development of policy languages focuses on supplying mathematical rigor that is intelligible to humans. A good policy language allows not only the expression of policy but also the analysis of a system to determine if it conforms to that policy. The latter may require that the policy language be compiled into an enforcement program (to enforce the stated policy, as DTEL does) or into a verification program (to verify that the stated policy is enforced, as tripwire does). Balancing enforcement with requirements is also an important area of research, particularly in real-time environments.

The underlying role of trust is another crucial issue in policy research. Development of methodologies for exposing underlying assumptions and for analyzing the effects of trust and the results of belief is an interesting area of formal mathematics as well as a guide to understanding the safety and security of systems. Design and implementation of tools to aid in this work are difficult problems on which research will continue for a long time to come.

Further Reading

Much of security analysis involves definition and refinement of security policies. Wood [1059] has published a book of templates for specific parts of policies. That book justifies each part and allows readers to develop policies by selecting the appropriate parts from a large set of possibilities. Essays by Bailey [55] and Abrams and Bailey [4] discuss management of security issues and explain why different members of an organization interpret the same policy differently. Sterne's wonderful paper [970] discusses the nature of policy in general.

Jajodia and his colleagues [520] present a “little language” for expressing authorization policies. They show that their language can express many aspects of existing policies and argue that it allows elements of these policies to be combined into authorization schemes.

Fraser and Badger [371] have used DTEL to enforce many policies. Cholvy and Cuppens [194] describe a method of checking policies for consistency and determining how they apply to given situations.

Son, Chaney, and Thomlinson [951] discuss enforcement of partial security policies in real-time databases to balance real-time requirements with security. Their idea of “partial security policies” has applications in other environments. Zurko and Simon [1074] present an alternative focus for policies.

Exercises

1:

In Figure 4-1, suppose that edge t3 went from s1 to s4. Would the resulting system be secure?

2:

Revisit the example of one student copying another student's homework assignment. Describe three other ways the first student could copy the second student's homework assignment, even assuming that the file access control mechanisms are set to deny him permission to read the file.

3:

A noted computer security expert has said that without integrity, no system can provide confidentiality.

  1. Do you agree? Justify your answer.

  2. Can a system provide integrity without confidentiality? Again, justify your answer.

4:

A cryptographer once claimed that security mechanisms other than cryptography were unnecessary because cryptography could provide any desired level of confidentiality and integrity. Ignoring availability, either justify or refute the cryptographer's claim.

5:

Classify each of the following as an example of a mandatory, discretionary, or originator controlled policy, or a combination thereof. Justify your answers.

  1. The file access control mechanisms of the UNIX operating system

  2. A system in which no memorandum can be distributed without the author's consent

  3. A military facility in which only generals can enter a particular room

  4. A university registrar's office, in which a faculty member can see the grades of a particular student provided that the student has given written permission for the faculty member to see them.

6:

A process may send a message to another process provided that the recipient is willing to accept messages. The following class and methods are relevant:

class Messages {
   public void deposit(int processid, String message);
   public int willaccept(int processid);
   ...
}

The method willaccept returns 1 if the named process will accept messages, and 0 otherwise. Write a constraint for this policy using Pandey and Hashii's policy constraint language as described in the first example in Section 4.5.1.

7:

Use DTEL to create a domain d_guest composed of processes executing the restricted shell /usr/bin/restsh. These processes cannot create any files. They can read and execute any object of type t_sysbin. They can read and search any object of type t_guest.

8:

Suppose one wishes to confirm that none of the files in the directory /usr/spool/lpd are world-readable.

  1. What would the fourth field of the tripwire database contain?

  2. What would the second field of the RIACS database contain?

  3. Tripwire does not provide a wildcard mechanism suitable for saying, “all files in the directory /usr/spool/lpd beginning with cf or df.” Suggest a modification of the tripwire configuration file that would allow this.

9:

Consider the UC Davis policy on reading electronic mail. A research group wants to obtain raw data from a network that carries all network traffic to the Department of Political Science.

  1. Discuss the impact of the electronic mail policy on the collection of such data.

  2. How would you change the policy to allow the collection of this data without abandoning the principle that electronic mail should be protected?

10:

Prove Theorem 4–1. Show all elements of your proof.

11:

Expand the proof of Theorem 4–2 to show the statement, and the proof, of the induction.
