Chapter 5. Confidentiality Policies

 

SHEPHERD: Sir, there lies such secrets in this fardel and box which none must know but the king; and which he shall know within this hour, if I may come to the speech of him.

 
 --The Winter's Tale, IV, iv, 785–788.

Confidentiality policies emphasize the protection of confidentiality. The importance of these policies lies in part in what they provide, and in part in their role in the development of the concept of security. This chapter explores one such policy—the Bell-LaPadula Model—and the controversy it engendered.

Goals of Confidentiality Policies

A confidentiality policy, also called an information flow policy, prevents the unauthorized disclosure of information. Unauthorized alteration of information is secondary. For example, the navy must keep confidential the date on which a troop ship will sail. If the date is changed, the redundancy in the systems and paperwork should catch that change. But if the enemy knows the date of sailing, the ship could be sunk. Because of extensive redundancy in military communications channels, availability is also less of a problem.

The term “governmental” covers several requirements that protect citizens' privacy. In the United States, the Privacy Act requires that certain personal data be kept confidential. Income tax returns are legally confidential and are available only to the Internal Revenue Service or to legal authorities with a court order. The principle of “executive privilege” and the system of nonmilitary classifications suggest that the people working in the government need to limit the distribution of certain documents and information. Governmental models represent the policies that satisfy these requirements.

The Bell-LaPadula Model

The Bell-LaPadula Model [67, 68] corresponds to military-style classifications. It has influenced the development of many other models and indeed much of the development of computer security technologies.[1]

Informal Description

The simplest type of confidentiality classification is a set of security clearances arranged in a linear (total) ordering (see Figure 5-1). These clearances represent sensitivity levels. The higher the security clearance, the more sensitive the information (and the greater the need to keep it confidential). A subject has a security clearance. In the figure, Claire's security clearance is C (for CONFIDENTIAL), and Thomas' is TS (for TOP SECRET). An object has a security classification; the security classification of the electronic mail files is S (for SECRET), and that of the telephone list files is UC (for UNCLASSIFIED). (When we refer to both subject clearances and object classifications, we use the term “classification.”) The goal of the Bell-LaPadula security model is to prevent read access to objects at a security classification higher than the subject's clearance.


Figure 5-1. At the left is the basic confidentiality classification system. The four security levels are arranged with the most sensitive at the top and the least sensitive at the bottom. In the middle are individuals grouped by their security clearances, and at the right is a set of documents grouped by their security levels.

The Bell-LaPadula security model combines mandatory and discretionary access controls. In what follows, “S has discretionary read (write) access to O” means that the access control matrix entry for S and O corresponding to the discretionary access control component contains a read (write) right. In other words, were the mandatory controls not present, S would be able to read (write) O.

Let L(S) = ls be the security clearance of subject S, and let L(O) = lo be the security classification of object O. For all security classifications li, i = 0, ..., k – 1, li < li+1.

  • Simple Security Condition, Preliminary Version. S can read O if and only if lo ≤ ls and S has discretionary read access to O.

In Figure 5-1, for example, Claire and Clarence cannot read personnel files, but Tamara and Sally can read the activity log files (and, in fact, Tamara can read any of the files, given her clearance), assuming that the discretionary access controls allow it.
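As an illustrative sketch (not part of the model's formalism), the preliminary simple security condition can be coded directly, with the four linear levels of Figure 5-1 encoded as integers:

```python
# Linear security levels from Figure 5-1, encoded as integers.
LEVELS = {"UC": 0, "C": 1, "S": 2, "TS": 3}

def can_read(subject_clearance, object_classification, discretionary_read=True):
    # Preliminary simple security condition ("no reads up"):
    # S can read O iff lo <= ls and S has discretionary read access to O.
    return (discretionary_read
            and LEVELS[object_classification] <= LEVELS[subject_clearance])
```

For example, can_read("C", "S") is false (Claire cannot read the SECRET electronic mail files), while can_read("TS", "UC") is true.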

Should Tamara decide to copy the contents of the personnel files into the activity log files and set the discretionary access permissions appropriately, Claire could then read the personnel files. Thus, for all practical purposes, Claire could read the files at a higher level of security. A second property prevents this:

  • *-Property (Star Property), Preliminary Version. S can write O if and only if ls ≤ lo and S has discretionary write access to O.

Because the activity log files are classified C and Tamara has a clearance of TS, she cannot write to the activity log files.
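The preliminary *-property can be sketched the same way (again an illustration, with the same integer encoding of levels):

```python
LEVELS = {"UC": 0, "C": 1, "S": 2, "TS": 3}

def can_write(subject_clearance, object_classification, discretionary_write=True):
    # Preliminary *-property ("no writes down"):
    # S can write O iff ls <= lo and S has discretionary write access to O.
    return (discretionary_write
            and LEVELS[subject_clearance] <= LEVELS[object_classification])
```

Here can_write("TS", "C") is false, which is exactly why Tamara cannot copy the personnel files into the activity log files.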

Define a secure system as one in which both the simple security condition, preliminary version, and the *-property, preliminary version, hold. A straightforward induction establishes the following theorem.

  • Theorem 5–1. Basic Security Theorem, Preliminary Version. Let Σ be a system with a secure initial state σ0, and let T be a set of state transformations. If every element of T preserves the simple security condition, preliminary version, and the *-property, preliminary version, then every state σi, i ≥ 0, is secure.

Expand the model by adding a set of categories to each security classification. Each category describes a kind of information. Objects placed in multiple categories have the kinds of information in all of those categories. These categories arise from the “need to know” principle, which states that no subject should be able to read objects unless reading them is necessary for that subject to perform its functions. The sets of categories to which a person may have access is simply the power set of the set of categories. For example, if the categories are NUC, EUR, and US, someone can have access to any of the following sets of categories: Ø (none), { NUC }, { EUR }, { US }, { NUC, EUR }, {NUC, US }, { EUR, US }, and { NUC, EUR, US }. These sets of categories form a lattice under the operation ⊆ (subset of); see Figure 5-2. (Chapter 30, “Lattices,” discusses the mathematical nature of lattices.)
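The power set construction can be generated mechanically; the following sketch (illustrative only) enumerates the eight category sets for NUC, EUR, and US:

```python
from itertools import chain, combinations

def category_sets(categories):
    # All subsets of the category set, i.e., its power set.
    items = list(categories)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

sets = category_sets(["NUC", "EUR", "US"])  # 2^3 = 8 sets
```

The subset relation orders these sets: frozenset({"NUC"}) <= frozenset({"NUC", "US"}) holds, matching an edge in Figure 5-2.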


Figure 5-2. Lattice generated by the categories NUC, EUR, and US. The lines represent the ordering relation induced by ⊆.

Each security classification and category set form a security level.[2] As before, we say that subjects have clearance at (or are cleared into, or are in) a security level and that objects are at the level of (or are in) a security level. For example, William may be cleared into the level (SECRET, { EUR }) and George into the level (TOP SECRET, { NUC, US }). A document may be classified as (CONFIDENTIAL, { EUR }).

Security levels change access. Because categories are based on a “need to know,” someone with access to the category set { NUC, US } presumably has no need to access items in the category EUR. Hence, read access should be denied, even if the security clearance of the subject is higher than the security classification of the object. But if the desired object is in any of the category sets Ø, { NUC }, { US }, or { NUC, US } and the subject's security clearance is no less than the object's security classification, access should be granted, because the subject's category set contains that of the object.

This suggests a new relation for capturing the combination of security classification and category set. Define the relation dom (dominates) as follows.

  • Definition 5–1. The security level (L, C) dominates the security level (L', C') if and only if L' ≤ L and C' ⊆ C.

We write (L, C) ¬dom (L', C') when (L, C) dom (L', C') is false. This relation also induces a lattice on the set of security levels [267].
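Definition 5–1 translates directly into code. In this sketch (an illustration; classifications are encoded as integers), a security level is a pair (classification, category set):

```python
LEVELS = {"UC": 0, "C": 1, "S": 2, "TS": 3}

def dom(level, other):
    # (L, C) dom (L', C') iff L' <= L and C' is a subset of C.
    (l, c), (l2, c2) = level, other
    return LEVELS[l2] <= LEVELS[l] and c2 <= c
```

George's level (TS, { NUC, US }) dominates (S, { NUC }) but not (S, { EUR }); dom is a partial order, so some pairs of levels are incomparable.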

Let C(S) be the category set of subject S, and let C(O) be the category set of object O. The simple security condition, preliminary version, is modified in the obvious way:

  • Simple Security Condition. S can read O if and only if S dom O and S has discretionary read access to O.

In the example above, George can read DocA and DocC but not DocB (again, assuming that the discretionary access controls allow such access).

Suppose Paul is cleared into security level (SECRET, { EUR, US, NUC }) and has discretionary read access to DocB. Paul can read DocB; were he to copy its contents to DocA and set its access permissions accordingly, George could then read DocB. The modified *-property prevents this:

  • *-Property. S can write to O if and only if O dom S and S has discretionary write access to O.

Because DocA dom Paul is false (because C(Paul) ⊈ C(DocA)), Paul cannot write to DocA.

The simple security condition is often described as “no reads up” and the *-property as “no writes down.”
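Both rules can be stated in terms of dom. The sketch below (illustrative; same integer encoding of classifications) makes “no reads up” and “no writes down” explicit:

```python
LEVELS = {"UC": 0, "C": 1, "S": 2, "TS": 3}

def dom(level, other):
    (l, c), (l2, c2) = level, other
    return LEVELS[l2] <= LEVELS[l] and c2 <= c

def can_read(subject_level, object_level, dac_read=True):
    # Simple security condition: S dom O ("no reads up").
    return dac_read and dom(subject_level, object_level)

def can_write(subject_level, object_level, dac_write=True):
    # *-property: O dom S ("no writes down").
    return dac_write and dom(object_level, subject_level)
```

With Paul at (SECRET, { EUR, US, NUC }), can_write to a (CONFIDENTIAL, { EUR }) object fails, because that object's level does not dominate Paul's.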

Redefine a secure system as one in which both the simple security condition and the *-property hold. The analogue to the Basic Security Theorem, preliminary version, can also be established by induction.

  • Theorem 5–2. Basic Security Theorem. Let Σ be a system with a secure initial state σ0, and let T be a set of state transformations. If every element of T preserves the simple security condition and the *-property, then every σi, i ≥ 0, is secure.

At times, a subject must communicate with another subject at a lower level. This requires the higher-level subject to write into a lower-level object that the lower-level subject can read.

The model provides a mechanism for allowing this type of communication. A subject has a maximum security level and a current security level. The maximum security level must dominate the current security level. A subject may (effectively) decrease its security level from the maximum in order to communicate with entities at lower security levels.

How this policy is instantiated in different environments depends on the requirements of each environment. The conventional use is to define “read” as “allowing information to flow from the object being read to the subject reading,” and “write” as “allowing information to flow from the subject writing to the object being written.” Thus, “read” usually includes “execute” (because by monitoring the instructions executed, one can determine the contents of portions of the file), and “write” includes “append” (because appended information enters the file even though it does not overwrite what is already there). Other actions may be included as appropriate; however, those who instantiate the model must understand exactly what those actions are. Chapter 8, “Noninterference and Policy Composition,” and Chapter 17, “Confinement Problem,” discuss this subject in considerably more detail.

Example: The Data General B2 UNIX System

The Data General B2 UNIX (DG/UX) system provides mandatory access controls (MACs). The MAC label is a label identifying a particular compartment. This section describes only the default labels; the system enables other labels to be created.

Assigning MAC Labels

When a process (subject) begins, it is assigned the MAC label of its parent. The initial label (assigned at login time) is the label assigned to the user in a database called the Authorization and Authentication (A&A) Database. Objects are assigned labels at creation, but the labels may be either explicit or implicit. The system stores explicit labels as parts of the object's attributes. It determines implicit labels from the parent directory of the object.

The least upper bound of all compartments in the DG/UX lattice has the label IMPL_HI (for “implementation high”); the greatest lower bound has the label IMPL_LO (for “implementation low”). The lattice is divided into three regions, which are summarized in Figure 5-3.[3]


Figure 5-3. The three MAC regions in the MAC lattice (modified from the DG/UX Security Manual [257], p. 4–7, Figure 4-4). TCB stands for “trusted computing base.”

The highest region (administrative region) is reserved for data that users cannot access, such as logs, MAC label definitions, and so forth. Because reading up and writing up are disallowed (the latter is a DG/UX extension to the multilevel security model; see Section 5.2.2.2), users can neither read nor alter data in this region. Administrative processes such as servers execute with MAC labels in this region; however, they sanitize data sent to user processes with MAC labels in the user region.

System programs are in the lowest region (virus prevention region). No user process can write to them, so no user process can alter them. Because execution requires read access, users can execute the programs. The name of this region comes from the fact that viruses and other forms of malicious logic involve alterations of trusted executables.[4]

Problems arise when programs of different levels access the same directory. If a program with MAC label MAC_A tries to create a file, and a file of that name but with MAC label MAC_B (where MAC_B dom MAC_A) already exists, the create will fail, and the failure would reveal to the MAC_A process that a file of that name exists at a higher level. To prevent this leakage of information, only programs with the same MAC label as the directory can create files in that directory. For directories such as /tmp and the mail spool directory /var/mail, this restriction would prevent standard operations such as compiling programs and delivering mail. DG/UX introduces a “multilevel directory” to solve this problem.

A multilevel directory is a directory with a set of subdirectories, one for each label. These “hidden directories” normally are not visible to the user, but if a process with MAC label MAC_A tries to create a file in /tmp, it actually creates the file in the hidden directory under /tmp with MAC label MAC_A. The file can have the same name as a file in a hidden directory corresponding to a different label. The parent directory of a file in /tmp is the hidden directory, and a reference to the parent directory goes to the hidden directory.
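The effect of a multilevel directory can be pictured as a simple path rewrite. In the sketch below, the hidden-directory naming scheme (.mld_&lt;label&gt; is invented for illustration; DG/UX's actual on-disk layout may differ):

```python
def hidden_path(directory, mac_label, filename):
    # A create by a process labeled mac_label is silently redirected into
    # the hidden subdirectory for that label, so identically named files
    # at different labels never collide.
    return f"{directory}/.mld_{mac_label}/{filename}"
```

Thus a MAC_A process and a MAC_B process can each “create /tmp/scratch” without conflict, because the files land in different hidden directories.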

Mounting unlabeled file systems requires the files to be labeled. Symbolic links aggravate this problem. Does the MAC label of the link's target control access, or does the MAC label of the link itself? DG/UX uses a notion of inherited labels (called implicit labels) to solve this problem. The following rules control the way objects are labeled.

  1. Roots of file systems have explicit MAC labels. If a file system without labels is mounted on a labeled file system, the root directory of the mounted file system receives an explicit label equal to that of the mount point. However, the label of the mount point, and of the underlying tree, is no longer visible, and so its label is unchanged (and will become visible again when the file system is unmounted).

  2. An object with an implicit MAC label inherits the label of its parent.

  3. When a hard link to an object is created, that object must have an explicit label; if it does not, the object's implicit label is converted to an explicit label. A corollary is that moving a file to a different directory makes its label explicit.

  4. If the label of a directory changes, any immediate children with implicit labels have those labels converted to explicit labels before the parent directory's label is changed.

  5. When the system resolves a symbolic link, the label of the object is the label of the target of the symbolic link. However, to resolve the link, the process needs access to the symbolic link itself.

Rules 1 and 2 ensure that every file system object has a MAC label, either implicit or explicit. But when a file object has an implicit label, and two hard links from different directories, it may have two labels. Let /x/y/z and /x/a/b be hard links to the same object. Suppose y has an explicit label IMPL_HI and a an explicit label IMPL_B. Then the file object can be accessed by a process at IMPL_HI as /x/y/z and by a process at IMPL_B as /x/a/b. Which label is correct? Two cases arise.

Suppose the hard link is created while the file system is on a DG/UX B2 system. Then the DG/UX system converts the target's implicit label to an explicit one (rule 3). Thus, regardless of the path used to refer to the object, the label of the object will be the same.

Suppose the hard link exists when the file system is mounted on the DG/UX B2 system. In this case, the target had no file label when it was created, and one must be added. If no objects on the paths to the target have explicit labels, the target will have the same (implicit) label regardless of the path being used. But if any object on any path to the target of the link acquires an explicit label, the target's label may depend on which path is taken. To avoid this, the implicit labels of a directory's children must be preserved when the directory's label is made explicit. Rule 4 does this.

Because symbolic links interpolate path names of files, rather than store inode numbers, computing the label of symbolic links is straightforward. If /x/y/z is a symbolic link to /a/b/c, then the MAC label of c is computed in the usual way. However, the symbolic link itself is a file, and so the process must also have access to the link file z.
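Rules 1 and 2 amount to a walk up the directory tree: an object's implicit label is the label of the nearest ancestor carrying an explicit one. A sketch (illustrative; the explicit dictionary stands in for on-disk label attributes):

```python
def mac_label(path, explicit):
    # Follow rule 2: inherit the label of the parent until an explicit
    # label is found. Rule 1 guarantees that the root carries one.
    while path not in explicit:
        parent = path.rsplit("/", 1)[0] or "/"
        if parent == path:
            raise ValueError("file system root must carry an explicit label")
        path = parent
    return explicit[path]
```

With explicit = {"/": "IMPL_LO", "/x/y": "IMPL_HI"}, the object /x/y/z resolves to IMPL_HI while /x/a/b resolves to IMPL_LO; this is exactly the two-label situation for hard-linked files that rule 4 exists to prevent.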

Using MAC Labels

The DG/UX B2 system uses the Bell-LaPadula notion of dominance, with one change. The system obeys the simple security condition (reading down is permitted), but the implementation of the *-property requires the process MAC label and the object MAC label to be equal: writing up is not permitted, but writing within the same compartment is.

Because of this restriction on writing, the DG/UX system provides processes and objects with a range of labels called a MAC tuple. A range is a set of labels expressed by a lower bound and an upper bound. A MAC tuple consists of up to three ranges (one for each of the regions in Figure 5-3).

An object can have a MAC tuple as well as the required MAC label. If both are present, the tuple overrides the label. A process has read access when its MAC label grants read access to the upper bound of the range. A process has write access when its MAC label grants write access to any label in the MAC tuple range.

A process has both a MAC label and a MAC tuple. The label always lies within the range for the region in which the process is executing. Initially, the subject's accesses are restricted by its MAC label. However, the process may extend its read and write capabilities to within the bounds of the MAC tuple.
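The tuple checks can be sketched as range tests. For simplicity this illustration assumes a totally ordered slice of the lattice; the label names and their ordering are invented:

```python
ORDER = {"IMPL_LO": 0, "MAC_A": 1, "MAC_B": 2, "IMPL_HI": 3}

def tuple_read_ok(process_label, rng):
    low, high = rng
    # Read requires the process MAC label to grant read to the range's
    # upper bound, i.e., to dominate it.
    return ORDER[process_label] >= ORDER[high]

def tuple_write_ok(process_label, rng):
    low, high = rng
    # DG/UX writes require label equality, so the process label must
    # equal some label in the range: low <= label <= high.
    return ORDER[low] <= ORDER[process_label] <= ORDER[high]
```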

Formal Model

Let S be the set of subjects of a system and let O be the set of objects. Let P be the set of rights: r for read, a for write, w for read/write, and e for empty.[5] Let M be a set of possible access control matrices for the system. Let C be the set of classifications (or clearances), let K be the set of categories, and let L = C × K be the set of security levels. Finally, let F be the set of 3-tuples (fs, fo, fc), where fs and fc associate with each subject maximum and current security levels, respectively, and fo associates with each object a security level. The relation dom from Definition 5–1 is defined here in the obvious way.

The system objects may be organized as a set of hierarchies (trees and single nodes). Let H represent the set of hierarchy functions h: O → P(O).[6] These functions have two properties. Let oi, oj, ok ∊ O. Then:

  1. If oi ≠ oj, then h(oi) ∩ h(oj) = Ø.

  2. There is no set { o1, o2, …, ok } ⊆ O such that for each i = 1, …, k, oi+1 ∊ h(oi), and ok+1 = o1.

(See Exercise 6.)

A state v ∊ V (the set of states) of a system is a 4-tuple (b, m, f, h), where b ∊ P(S × O × P) indicates which subjects have access to which objects, and what those access rights are; m ∊ M is the access control matrix for the current state; f ∊ F is the 3-tuple indicating the current subject and object clearances and categories; and h ∊ H is the hierarchy of objects for the current state. The difference between b and m is that the rights in m may be unusable because of differences in security levels; b contains the set of rights that may be exercised, and m contains the set of discretionary rights.

R denotes the set of requests for access. The form of the requests affects the instantiation, not the formal model, and is not discussed further here. Four outcomes of each request are possible: y for yes (allowed), n for no (not allowed), i for illegal request, and o for error (multiple outcomes are possible). D denotes the set of outcomes. The set W ⊆ R × D × V × V is the set of actions of the system. This notation means that an entity issues a request in R, and a decision in D occurs, moving the system from one state in V to another (possibly different) state in V. Given these definitions, we can now define the history of a system as it executes.

Let N be the set of positive integers. These integers represent times. Let X = R^N be a set whose elements x are sequences of requests, let Y = D^N be a set whose elements y are sequences of decisions, and let Z = V^N be a set whose elements z are sequences of states. The ith components of x, y, and z are represented as xi, yi, and zi, respectively. The interpretation is that for some t ∊ N, the system is in state zt–1 ∊ V; a subject makes request xt ∊ R, the system makes a decision yt ∊ D, and as a result the system transitions into a (possibly new) state zt ∊ V.

A system is represented as an initial state and a sequence of requests, decisions, and states. In formal terms, Σ(R, D, W, z0) ⊆ X × Y × Z represents the system, and z0 is the initial state of the system. (x, y, z) ∊ Σ(R, D, W, z0) if and only if (xt, yt, zt, zt–1) ∊ W for all t ∊ N. (x, y, z) is called an appearance of Σ(R, D, W, z0).

The next request r2 is for s to write to o; however, this is disallowed (d2 = n, or no). The resulting state is the same as the preceding one. Now x = (r1, r2), y = (y, n), and z = (ν0, ν1, ν2), where ν2 = ν1.

Basic Security Theorem

The Basic Security Theorem combines the simple security condition, the *-property, and a discretionary security property. We now formalize these three properties.

Formally, the simple security condition is:

  • Definition 5–2. (s, o, p) ∊ S × O × P satisfies the simple security condition relative to f (written ssc rel f) if and only if one of the following holds:

    1. p = e or p = a

    2. p = r or p = w, and fc(s) dom fo(o)

In other words, if s can read o (or read and write to it), s must dominate o. A state (b, m, f, h) satisfies the simple security condition if all elements of b satisfy ssc rel f. A system satisfies the simple security condition if all its states satisfy the simple security condition.

Define b(s: p1, ..., pn) to be the set of all objects to which s has one or more of the rights p1, ..., pn:

  • b(s: p1, ..., pn) = { o | o ∊ O ∧ [ (s, o, p1) ∊ b ∨ ... ∨ (s, o, pn) ∊ b ] }
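In code, b can be represented as a set of (subject, object, right) triples, and b(s: p1, ..., pn) becomes a comprehension (an illustrative sketch):

```python
def b_of(b, s, *rights):
    # The objects to which s holds at least one of the listed rights in b.
    return {o for (s2, o, p) in b if s2 == s and p in rights}
```

For example, with b = {("s1", "o1", "r"), ("s1", "o2", "w")}, b_of(b, "s1", "r", "w") yields {"o1", "o2"}.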

  • Definition 5–3. A state (b, m, f, h) satisfies the *-property if and only if, for each s ∊ S, the following hold:

    1. b(s: a) ≠ Ø ⇒ [∀ o ∊ b(s: a) [ fo(o) dom fc(s) ] ]

    2. b(s: w) ≠ Ø ⇒ [∀ o ∊ b(s: w) [ fo(o) = fc(s) ] ]

    3. b(s: r) ≠ Ø ⇒ [∀ o ∊ b(s: r) [ fc(s) dom fo(o) ] ]

This definition says that if a subject can write to an object, the object's classification must dominate the subject's clearance (“write up”); if the subject can also read the object, the subject's clearance must be the same as the object's classification (“equality for read”). A system satisfies the *-property if all its states satisfy the *-property. In many systems, only the subjects in a subset S' of S are required to satisfy the *-property; in this case, we say that the *-property is satisfied relative to S' ⊆ S.

  • Definition 5–4. A state (b, m, f, h) satisfies the discretionary security property (ds-property) if and only if, for each triple (s, o, p) ∊ b, p ∊ m[s, o].

The access control matrix allows the controller of an object to condition access based on identity. The model therefore supports both mandatory and discretionary controls, and defines “secure” in terms of both. A system satisfies the discretionary security property if all its states satisfy the discretionary security property.

  • Definition 5–5. A system is secure if it satisfies the simple security condition, the *-property, and the discretionary security property.
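Definition 5–5 can be checked mechanically on a toy state. In this sketch (an illustration, not the formal model), levels are integers with no categories, so dom reduces to >=:

```python
def secure_state(b, m, f_c, f_o):
    # Simple security condition: r or w access requires f_c(s) dom f_o(o).
    ssc = all(f_c[s] >= f_o[o] for (s, o, p) in b if p in ("r", "w"))
    # *-property: a (write) requires f_o(o) dom f_c(s); w requires equality.
    star = all((p != "a" or f_o[o] >= f_c[s]) and
               (p != "w" or f_o[o] == f_c[s]) for (s, o, p) in b)
    # ds-property: every exercised right appears in the matrix entry m[s, o].
    ds = all(p in m.get((s, o), set()) for (s, o, p) in b)
    return ssc and star and ds
```

A state granting only (s, o, r) with f_c(s) = 2 and f_o(o) = 1 is secure; changing the right to a (a write down) violates the *-property.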

The notion of an action, or a request and decision that moves the system from one state to another, must also be formalized, as follows.

  • Definition 5–6. (r, d, v, v') ∊ R × D × V × V is an action of Σ(R, D, W, z0) if and only if there is an (x, y, z) ∊ Σ(R, D, W, z0) and a t ∊ N such that (r, d, v, v') = (xt, yt, zt, zt–1).

Thus, an action is a request/decision pair that occurs during the execution of the system.

We now can establish conditions under which the three properties hold.

  • Theorem 5–3. Σ(R, D, W, z0) satisfies the simple security condition for any secure state z0 if and only if, for every action (r, d, (b, m, f, h), (b', m', f', h')), W satisfies the following:

    1. Every (s, o, p) ∊ b – b' satisfies ssc rel f.

    2. Every (s, o, p) ∊ b' that does not satisfy ssc rel f is not in b.

  • Proof. Let (x, y, z) ∊ Σ(R, D, W, z0) and write zt = (bt, mt, ft, ht) for t ∊ N.

  • (⇒) By contradiction. Without loss of generality, take b = bt and b' = bt–1. Assume that Σ(R, D, W, z0) satisfies the simple security condition for some secure state z0, and that either some (s, o, p) ∊ b – b' = bt – bt–1 does not satisfy ssc rel ft or some (s, o, p) ∊ b' = bt–1 that does not satisfy ssc rel ft is in b = bt. If the former, there is some (s, o, p) ∊ bt that does not satisfy ssc rel ft, because bt – bt–1 ⊆ bt. If the latter, there is some (s, o, p) ∊ bt–1 that does not satisfy ssc rel ft but that is in bt. In either case, there is some (s, o, p) ∊ bt that does not satisfy the simple security condition relative to ft, which means that Σ(R, D, W, z0) does not satisfy the simple security condition for some secure state z0, contradicting the hypothesis.

  • (⇐) By induction on t.

  • Induction basis. z0 = (b0, m0, f0, h0) is secure, by the hypothesis of the claim.

  • Induction hypothesis. zt–1 = (bt–1, mt–1, ft–1, ht–1) is secure.

  • Induction step. Let (xt, yt, zt, zt–1) ∊ W. By (a), every (s, o, p) ∊ bt – bt–1 satisfies ssc rel ft. Let G = { (s, o, p) | (s, o, p) ∊ bt–1 ∧ (s, o, p) does not satisfy ssc rel ft }. By (b), bt ∩ G = Ø; so G ∩ (bt ∩ bt–1) = (G ∩ bt) ∩ bt–1 = Ø. This means that if (s, o, p) ∊ bt ∩ bt–1, then (s, o, p) ∉ G, and so (s, o, p) satisfies ssc rel ft. Hence, if (s, o, p) ∊ bt, then either (s, o, p) ∊ bt ∩ bt–1 or (s, o, p) ∊ bt – bt–1. In the first case, the argument just given ensures that (s, o, p) satisfies ssc rel ft. In the second case, (a) ensures that (s, o, p) satisfies ssc rel ft. Hence, zt = (bt, mt, ft, ht) is secure. This completes the proof.

  • Theorem 5–4. Σ(R, D, W, z0) satisfies the *-property relative to S' ⊆ S for any secure state z0 if and only if, for every action (r, d, (b, m, f, h), (b', m', f', h')), W satisfies the following for every sS':

    1. Every (s, o, p) ∊ b – b' satisfies the *-property with respect to S'.

    2. Every (s, o, p) ∊ b' that does not satisfy the *-property with respect to S' is not in b.

  • Proof. See Exercise 8.

  • Theorem 5–5. Σ(R, D, W, z0) satisfies the ds-property for any secure state z0 if and only if, for every action (r, d, (b, m, f, h), (b', m', f', h')), W satisfies the following:

    1. Every (s, o, p) ∊ b – b' satisfies the ds-property.

    2. Every (s, o, p) ∊ b' that does not satisfy the ds-property is not in b.

  • Proof. See Exercise 9.

  • Theorems 5–3, 5–4, and 5–5 combine to give us the Basic Security Theorem:

  • Theorem 5–6. Basic Security Theorem. Σ(R, D, W, z0) is a secure system if z0 is a secure state and W satisfies the conditions of Theorems 5–3, 5–4, and 5–5.

  • Proof. Immediate from Theorems 5–3, 5–4, and 5–5.

Rules of Transformation

A rule is a function ρ: R × V → D × V. Intuitively, a rule takes a request and a state, and determines whether the request meets the conditions of the rule (the decision). If so, it moves the system to a (possibly different) state. The idea is that a rule captures the means by which a system may transition from one state to another.

Of course, the rules affect the security of a system. For example, a rule that changes all read rights so that a subject has the ability to read objects with classifications higher than the subject's clearance may move the system from a secure state to a nonsecure state. In this section we develop constraints that rules must meet to preserve security, and we give an example rule.

  • Definition 5–7. A rule ρ is ssc-preserving if, for all (r, v) ∊ R × V with v satisfying ssc rel f, ρ(r, v) = (d, v') implies that v' satisfies ssc rel f'.

Similar definitions hold for the *-property and the ds-property. If a rule is ssc-preserving, *-property-preserving, and ds-property-preserving, the rule is said to be security-preserving.
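As a concrete illustration, the following hypothetical get-read rule (linear integer levels, no categories; the name and state layout are invented) grants a requested read right only when doing so keeps the simple security condition intact, and so is ssc-preserving:

```python
def get_read(request, state):
    # request = (subject, object); state = (b, m, f_c, f_o).
    s, o = request
    b, m, f_c, f_o = state
    # Grant only if the discretionary matrix allows read and the subject's
    # current level dominates the object's level.
    if "r" in m.get((s, o), set()) and f_c[s] >= f_o[o]:
        return "y", (b | {(s, o, "r")}, m, f_c, f_o)
    return "n", state  # deny; the state is unchanged
```

Starting from a state satisfying ssc rel f, any state this rule produces also satisfies it, since the only triple it ever adds has passed the dominance test.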

We define a relation with respect to a set of rules ω = { ρ1, ..., ρm } in such a way that each type of request is handled by at most one rule; this eliminates ambiguity and ensures that the mapping from R × V to D × V is one-to-one.

  • Definition 5–8. Let ω = { ρ1, ..., ρm } be a set of rules. For request r ∊ R, decision d ∊ D, and states v, v' ∊ V, (r, d, v, v') ∊ W(ω) if and only if d ≠ i and there is a unique integer j, 1 ≤ j ≤ m, such that ρj(r, v) = (d, v').

This definition says that if the request is legal and there is only one rule that will change the state of the system from ν to ν', the corresponding action is in W(ω).

The next theorem presents conditions under which a set of rules preserves the simple security condition.

  • Theorem 5–7. Let ω be a set of ssc-preserving rules, and let z0 be a state satisfying the simple security condition. Then Σ(R, D, W(ω), z0) satisfies the simple security condition.

  • Proof. By contradiction. Let (x, y, z) ∊ Σ(R, D, W(ω), z0) be an appearance that does not satisfy the simple security condition. Without loss of generality, choose t ∊ N such that zt is the first state of (x, y, z) that does not satisfy the simple security condition. Because (xt, yt, zt, zt–1) ∊ W(ω), there is a unique rule ρ ∊ ω such that ρ(xt, zt–1) = (yt, zt), and yt ≠ i. Because ρ is ssc-preserving and zt–1 satisfies the simple security condition, by Definition 5–7, zt must satisfy the simple security condition. This contradicts our choice of t and the assumption that (x, y, z) does not satisfy the simple security condition. Hence, the theorem is proved.

  • When does adding an access right to a state preserve the simple security condition?

  • Theorem 5–8. Let ν = (b, m, f, h) satisfy the simple security condition. Let (s, o, p) ∉ b, b' = b ∪ { (s, o, p) }, and ν' = (b', m, f, h). Then ν' satisfies the simple security condition if and only if either of the following conditions is true.

    1. Either p = e or p = a.

    2. Either p = r or p = w, and fc(s) dom fo(o).

  • Proof. For (a), the claim follows immediately from Definition 5–2. For (b), if ν' satisfies the simple security condition, then, by definition, fc(s) dom fo(o). Moreover, if fc(s) dom fo(o), then every (s, o, p) ∊ b' satisfies ssc rel f; hence, v' satisfies the simple security condition.

  • Similar theorems hold for the *-property:

  • Theorem 5–9. Let ω be a set of *-property-preserving rules, and let z0 be a state satisfying the *-property. Then Σ(R, D, W(ω), z0) satisfies the *-property.

  • Proof. See Exercise 11.

  • Theorem 5–10. Let ν = (b, m, f, h) satisfy the *-property. Let (s, o, p) ∉ b, b' = b ∪ { (s, o, p) }, and ν' = (b', m, f, h). Then v' satisfies the *-property if and only if one of the following conditions holds.

    1. p = a and fo(o) dom fc(s)

    2. p = w and fo(o) = fc(s)

    3. p = r and fc(s) dom fo(o)

  • Proof If ν' satisfies the *-property, then the claim follows immediately from Definition 5–3. Conversely, assume that one of conditions (1), (2), and (3) holds. Let (s', o', p') ∊ b'. If (s', o', p') ∊ b, the assumption that ν satisfies the *-property means that the *-property holds for (s', o', p'). Otherwise, (s', o', p') = (s, o, p) and, by conditions (1), (2), and (3), the *-property holds. Thus, ν' satisfies the *-property.

  • Theorem 5–11. Let ω be a set of ds-property-preserving rules, and let z0 be a state satisfying the ds-property. Then Σ(R, D, W(ω), z0) satisfies the ds-property.

  • Proof See Exercise 11.

  • Theorem 5–12. Let ν = (b, m, f, h) satisfy the ds-property. Let (s, o, p) ∉ b, b' = b ∪ { (s, o, p) }, and ν' = (b', m, f, h). Then ν' satisfies the ds-property if and only if p ∊ m[s, o].

  • Proof If ν' satisfies the ds-property, then the claim follows immediately from Definition 5–4. Conversely, assume that p ∊ m[s, o]. Then any (s', o', p') ∊ b' either lies in b, for which the ds-property holds because ν satisfies it, or equals (s, o, p), for which p ∊ m[s, o]. Thus, ν' satisfies the ds-property.

  • Finally, we present the following theorem.

  • Theorem 5–13. Let ρ be a rule and ρ(r, ν) = (d, ν'), where ν = (b, m, f, h) and ν' = (b', m', f', h'). Then:

    1. If b' ⊆ b, f' = f, and ν satisfies the simple security condition, then ν' satisfies the simple security condition.

    2. If b' ⊆ b, f' = f, and ν satisfies the *-property, then ν' satisfies the *-property.

    3. If b' ⊆ b, m[s, o] ⊆ m'[s, o] for all s ∊ S and o ∊ O, and ν satisfies the ds-property, then ν' satisfies the ds-property.

  • Proof Suppose that ν satisfies the simple security condition. Because b' ⊆ b, (s, o, r) ∊ b' implies (s, o, r) ∊ b, and (s, o, w) ∊ b' implies (s, o, w) ∊ b. So fs(s) dom fo(o). But f' = f. Thus, fs'(s) dom fo'(o). So ν' satisfies the simple security condition.

    The proofs of the other two parts are analogous.
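Theorems 5–8, 5–10, and 5–12 together give a local test for whether adding a single triple (s, o, p) to b preserves each property. The following is a minimal sketch, assuming security levels are modeled as (rank, categories) pairs; all names and encodings here are illustrative, not part of the formal model.

```python
# Hedged sketch of Theorems 5-8, 5-10, and 5-12: local tests for whether
# adding (s, o, p) to the current access set preserves each property.
# Levels are (rank, categories) pairs; the encoding is an assumption.

def dom(l1, l2):
    """l1 dom l2: rank at least as high and categories a superset."""
    return l1[0] >= l2[0] and l1[1] >= l2[1]

def preserves_ssc(p, f_s, f_o):
    """Theorem 5-8: p is e or a, or p is r or w and fs(s) dom fo(o)."""
    return p in ('e', 'a') or (p in ('r', 'w') and dom(f_s, f_o))

def preserves_star(p, f_c, f_o):
    """Theorem 5-10: a needs fo(o) dom fc(s); w needs equality;
    r needs fc(s) dom fo(o)."""
    return ((p == 'a' and dom(f_o, f_c)) or
            (p == 'w' and f_o == f_c) or
            (p == 'r' and dom(f_c, f_o)))

def preserves_ds(p, m_entry):
    """Theorem 5-12: the right must already be in m[s, o]."""
    return p in m_entry

LOW, HIGH = (0, frozenset()), (1, frozenset())
# A LOW subject may append up to a HIGH object, but not read up.
assert preserves_star('a', LOW, HIGH) and not preserves_ssc('r', LOW, HIGH)
```

Note that each test examines only the new triple; by the theorems, the rest of b is untouched and so remains secure.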

Example Model Instantiation: Multics

We now examine the modeling of specific actions. The Multics system [68, 788] has 11 rules affecting the rights on the system. These rules are divided into five groups. Let Q be the set of request operations (such as get, give, and so forth). Then:

  1. R(1) = Q × S × O × M. This is the set of requests to obtain and release access. The rules are get-read, get-append, get-execute, get-write, and release-read/execute/write/append. These rules differ in the conditions necessary for the subject to be able to request the desired right. The rule get-read is discussed in more detail in Section 5.2.4.1.

  2. R(2) = S × Q × S × O × M. This is the set of requests to give access to and remove access from a different subject. The rules are give-read/execute/write/append and rescind-read/execute/write/append. Again, the rules differ in the conditions needed to acquire and delete the rights, but within each rule, the right being added or removed does not affect the conditions. Whether the right is being added or deleted does affect them. The rule give-read/execute/write/append is discussed in more detail in Section 5.2.4.2.

  3. R(3) = Q × S × O × L. This is the set of requests to create and reclassify objects. It contains the create-object and change-object-security-level rules. The object's security level is either assigned (create-object) or changed (change-object-security-level).

  4. R(4) = S × O. This is the set of requests to remove objects. It contains only the rule delete-object-group, which deletes an object and all objects beneath it in the hierarchy.

  5. R(5) = S × L. This is the set of requests to change a subject's security level. It contains only the rule change-subject-current-security-level, which changes a subject's current security level (not the maximum security level).

Then, the set of requests is R = R(1) ∪ R(2) ∪ R(3) ∪ R(4) ∪ R(5).

The Multics system includes the notion of trusted users. The system does not enforce the *-property for this set of subjects ST ⊆ S; however, members of ST are trusted not to violate that property.

For each rule ρ, define Δ(ρ) as the domain of the request (that is, whether or not the components of the request form a valid operand for the rule).

We next consider two rules in order to demonstrate how to prove that the rules preserve the simple security property, the *-property, and the discretionary security property.

The get-read Rule

The get-read rule enables a subject s to request the right to read an object o. Represent this request as r = (get, s, o, r) ∊ R(1), and let the current state of the system be ν = (b, m, f, h). Then get-read is the rule ρ1(r, ν):

if (r ∉ Δ(ρ1)) then ρ1(r, ν) = (i, ν);
else if (fs(s) dom fo(o) and [s ∊ ST or fc(s) dom fo(o)] and r ∊ m[s, o])
     then ρ1(r, ν) = (y, (b ∪ { (s, o, r) }, m, f, h));
else ρ1(r, ν) = (n, ν);

The first if tests the parameters of the request; if any of them are incorrect, the decision is “illegal” and the system state remains unchanged. The second if checks three conditions. The simple security property for the maximum security level of the subject and the classification of the object must hold. Either the subject making the request must be trusted, or the simple security property must hold for the current security level of the subject (this allows trusted subjects to read information from objects above their current security levels but at or below their maximum security levels; they are trusted not to reveal the information inappropriately). Finally, the discretionary security property must hold. If all three conditions hold, the resulting state satisfies the conditions of the Basic Security Theorem; the decision is “yes” and the system state is updated to reflect the new access. Otherwise, the decision is “no” and the system state remains unchanged.
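The three checks can be sketched in Python. This is a modeling sketch under assumed data structures (b as a set of triples, m as a dictionary of right-sets, and level maps fs, fc, fo), not the Multics implementation:

```python
# Hedged sketch of the get-read rule: grant (s, o, r) only if the simple
# security condition, the *-property (or trust in s), and the ds-property
# hold. All data structures and names here are modeling assumptions.

def dom(l1, l2):
    """l1 dominates l2 under the (rank, categories) encoding."""
    return l1[0] >= l2[0] and l1[1] >= l2[1]

def get_read(s, o, state, fs, fc, fo, trusted):
    """Return (decision, new_b) for the request (get, s, o, r)."""
    b, m = state
    if (dom(fs[s], fo[o])                            # simple security condition
            and (s in trusted or dom(fc[s], fo[o]))  # *-property, or s trusted
            and 'r' in m.get((s, o), set())):        # ds-property
        return 'y', b | {(s, o, 'r')}
    return 'n', b                                    # state unchanged

LOW, HIGH = (0, frozenset()), (1, frozenset())
state = (set(), {('s', 'o'): {'r'}})
d, b2 = get_read('s', 'o', state, {'s': HIGH}, {'s': LOW}, {'o': LOW}, set())
# d == 'y': the maximum level dominates, the current level dominates,
# and r ∊ m[s, o], so the triple is added to b.
```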

We now show that if the current state of the system satisfies the simple security condition, the *-property, and the ds-property, then after the get-read rule is applied, the state of the system also satisfies those three conditions.

  • Theorem 5–14. The get-read rule ρ1 preserves the simple security condition, the *-property, and the ds-property.

  • Proof Let ν satisfy the simple security condition, the *-property, and the ds-property. Let ρ1(r, ν) = (d, ν'). Either ν' = ν or ν' = (b ∪ { (s, o, r) }, m, f, h), by the get-read rule. In the former case, because ν satisfies the simple security condition, the *-property, and the ds-property, so does ν'. So let ν' = (b ∪ { (s, o, r) }, m, f, h).

    Consider the simple security condition. From the choice of ν', either b' – b = Ø or b' – b = { (s, o, r) }. If b' – b = Ø, then (s, o, r) ∊ b, so ν = ν', proving that ν' satisfies the simple security condition. Otherwise, because the get-read rule requires that fs(s) dom fo(o), Theorem 5–8 says that ν' satisfies the simple security condition.

    Consider the *-property. From the definition of the get-read rule, either sST or fc(s) dom fo(o). If sST, then s is trusted and the *-property holds by the definition of ST. Otherwise, by Theorem 5–10, because fc(s) dom fo(o), ν' satisfies the *-property.

    Finally, consider the ds-property. The condition in the get-read rule requires that r ∊ m[s, o], and either b' – b = Ø or b' – b = { (s, o, r) }. If b' – b = Ø, then (s, o, r) ∊ b, so ν = ν', proving that ν' satisfies the ds-property. Otherwise, (s, o, r) ∉ b, which meets the conditions of Theorem 5–12. From that theorem, ν' satisfies the ds-property.

Hence, the get-read rule preserves the security of the system.

The give-read Rule

The give-read rule[7] enables a subject s to give subject s2 the (discretionary) right to read an object o. Conceptually, a subject can give another subject read access to an object if the giver can alter (write to) the parent of the object. If the parent is the root of the hierarchy containing the object, or if the object itself is the root of the hierarchy, the subject must be specially authorized to grant access.

Some terms simplify the definitions and proofs. Define root(o) as the root object of the hierarchy h containing o, and define parent(o) as the parent of o in h. If the subject is specially authorized to grant access to the object in the situation just mentioned, the predicate canallow(s, o, ν) is true. Finally, define m ∧ m[s, o]←r as the access control matrix m with the right r added to entry m[s, o].

Represent the give-read request as r = (s1, give, s2, o, r) ∊ R(2), and let the current state of the system be ν = (b, m, f, h). Then, give-read is the rule ρ6(r, ν):

if (r ∉ Δ(ρ6)) then ρ6(r, ν) = (i, ν);
else if ( [ o ≠ root(o) and parent(o) ≠ root(o) and parent(o) ∊ b(s1: w) ] or
          [ parent(o) = root(o) and canallow(s1, o, ν) ] or
          [ o = root(o) and canallow(s1, root(o), ν) ] )
     then ρ6(r, ν) = (y, (b, m ∧ m[s2, o]←r, f, h));
else ρ6(r, ν) = (n, ν);

The first if tests the parameters of the request; if any of them are incorrect, the decision is “illegal” and the system state remains unchanged. The second if checks several conditions. If neither the object nor its parent is the root of the hierarchy containing the object, then s1 must have write rights to the parent. If the object or its parent is the root of the hierarchy, then s1 must have special permission to give s2 the read right to o. If the appropriate condition holds, the decision is “yes” and the access control matrix is updated to reflect the new access. Otherwise, the decision is “no” and the system state remains unchanged.
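The hierarchy test can be sketched as follows; the parent and root functions and the canallow predicate are assumptions of this sketch, standing in for the definitions above rather than the Multics interface:

```python
# Hedged sketch of the give-read rule: s1 grants s2 read access on o when
# s1 can write o's parent, or is specially authorized near the root.

def give_read(s1, s2, o, b, m, parent, root, canallow):
    """Return (decision, new_m) for the request (s1, give, s2, o, r)."""
    p = parent(o)
    if ((o != root(o) and p != root(o) and (s1, p, 'w') in b)
            or (p == root(o) and canallow(s1, o))
            or (o == root(o) and canallow(s1, root(o)))):
        m2 = {k: set(v) for k, v in m.items()}   # copy the matrix
        m2.setdefault((s2, o), set()).add('r')   # m ∧ m[s2, o] ← r
        return 'y', m2
    return 'n', m

# Tiny three-object hierarchy: root -> dir -> file.
parent = {'file': 'dir', 'dir': 'root', 'root': 'root'}.get
root = lambda o: 'root'
d, m2 = give_read('s1', 's2', 'file', {('s1', 'dir', 'w')}, {},
                  parent, root, lambda s, o: False)
# d == 'y' because s1 holds write access to parent('file') = 'dir'.
```

Note that only the access control matrix changes; b, f, and h are untouched, which is what makes the proof of Theorem 5–15 reduce to Theorem 5–13.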

We now show that if the current state of the system satisfies the simple security condition, the *-property, and the ds-property, then after the give-read rule is applied, the state of the system also satisfies those three conditions.

  • Theorem 5–15. The give-read rule ρ6 preserves the simple security condition, the *-property, and the ds-property.

  • Proof Let ν satisfy the simple security condition, the *-property, and the ds-property. Let ρ6(r, ν) = (d, ν'). Either ν' = ν or ν' = (b, m ∧ m[s2, o]←r, f, h), by the give-read rule. In the former case, because ν satisfies the simple security condition, the *-property, and the ds-property, so does ν'. So, let ν' = (b, m ∧ m[s2, o]←r, f, h).

  • Here, b' = b, f' = f, and m[x, y] = m'[x, y] for all x ∊ S and y ∊ O such that (x, y) ≠ (s2, o). Moreover, m[s2, o] ⊆ m'[s2, o]. Hence, by Theorem 5–13, ν' satisfies the simple security condition, the *-property, and the ds-property.

Hence, the give-read rule preserves the security of the system.

Tranquility

The principle of tranquility states that subjects and objects may not change their security levels once they have been instantiated. Suppose that security levels of objects can be changed, and consider the effects on a system with one category and two security clearances, HIGH and LOW. If an object's security classification is raised from LOW to HIGH, then any subjects cleared to only LOW can no longer read that object. Similarly, if an object's classification is dropped from HIGH to LOW, any subject can now read that object.

Both situations violate fundamental restrictions. Raising the classification of an object means that information that was available is no longer available; lowering the classification means that information previously considered restricted is now available to all. In essence, the LOW subjects either have, or have had, access to HIGH information, in violation of the simple security condition. Furthermore, by lowering the classification level, the HIGH subjects lose the ability to append to the object, but anything they have written into the object becomes available at the LOW level, and so this has the effect of writing to an object with a lower classification and thereby violates the *-property.

Raising the classification of an object is not considered a problem. The model does not define how to determine the appropriate classification of information. It merely describes how to manipulate an object containing the information once that object has been assigned a classification. Information in an object with a particular classification is assumed to be known to all who can access that object, and so raising its classification will not achieve the desired goal (preventing access to the information). The information has already been accessed.

Lowering the classification level is another matter entirely and is known as the declassification problem. Because this makes information available to subjects who did not have access to it before, it is in effect a “write down” that violates the *-property. The typical solution is to define a set of trusted entities or subjects that will remove all sensitive information from the HIGH object before its classification is changed to LOW.

The tranquility principle actually has two forms:

  • Definition 5–9. The principle of strong tranquility states that security levels do not change during the lifetime of the system.

Strong tranquility eliminates the need for trusted declassifiers, because no declassification can occur. Moreover, no raising of security levels can occur. This eliminates the problems discussed above. However, strong tranquility is also inflexible and in practice is usually too strong a requirement.

  • Definition 5–10. The principle of weak tranquility states that security levels do not change in a way that violates the rules of a given security policy.

Weak tranquility moderates the restriction to allow harmless changes of security levels. It is more flexible, because it allows changes, but it disallows any violations of the security policy (in the context of the Bell-LaPadula Model, the simple security condition and *-property).
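One way to read weak tranquility operationally is to permit a reclassification only if every current access would remain legal afterward. The following sketch assumes the same illustrative (rank, categories) level encoding as before; it is one plausible guard, not the model's formal statement:

```python
# Hedged sketch of a weak-tranquility guard: allow a new object level
# assignment only if every access in b still satisfies the simple
# security condition and the *-property under the new levels.

def dom(l1, l2):
    """l1 dominates l2 under the (rank, categories) encoding."""
    return l1[0] >= l2[0] and l1[1] >= l2[1]

def reclassify_ok(b, fs, fc, fo_new):
    for s, o, p in b:
        if p in ('r', 'w') and not dom(fs[s], fo_new[o]):
            return False                      # simple security condition
        if p == 'a' and not dom(fo_new[o], fc[s]):
            return False                      # *-property, append
        if p == 'w' and fo_new[o] != fc[s]:
            return False                      # *-property, write
        if p == 'r' and not dom(fc[s], fo_new[o]):
            return False                      # *-property, read
    return True

LOW, HIGH = (0, frozenset()), (1, frozenset())
b = {('s', 'o', 'r')}
# Raising o to HIGH while a LOW subject holds read access is rejected.
assert not reclassify_ok(b, {'s': LOW}, {'s': LOW}, {'o': HIGH})
assert reclassify_ok(b, {'s': LOW}, {'s': LOW}, {'o': LOW})
```

A declassification that first revokes or scrubs offending accesses would pass this guard, which mirrors the trusted-declassifier approach described above.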

Tranquility plays an important role in the Bell-LaPadula Model, because it highlights the trust assumptions in the model. It raises other problems in the context of integrity that we will revisit in the next chapter.

The Controversy over the Bell-LaPadula Model

The Bell-LaPadula Model became the target of inquiries into the foundations of computer security. The controversy led to a reexamination of security models and a deeper appreciation of the complexity of modeling real systems.

McLean's †-Property and the Basic Security Theorem

In a 1985 paper [682], McLean argued that the “value of the [Basic Security Theorem] is much overrated since there is a great deal more to security than it captures. Further, what is captured by the [Basic Security Theorem] is so trivial that it is hard to imagine a realistic security model for which it does not hold” ([682], p. 47). The basis for McLean's argument was that given assumptions known to be nonsecure, the Basic Security Theorem could prove a nonsecure system to be secure. He defined a complement to the *-property:

  • Definition 5–11. A state (b, m, f, h) satisfies the †-property if and only if, for each subject sS, the following conditions hold:

    1. b(s: a) ≠ Ø ⇒ [∀ o ∊ b(s: a) [ fc(s) dom fo(o) ] ]

    2. b(s: w) ≠ Ø ⇒ [∀ o ∊ b(s: w) [ fc(s) = fo(o) ] ]

    3. b(s: r) ≠ Ø ⇒ [∀ o ∊ b(s: r) [ fc(s) dom fo(o) ] ]

In other words, the †-property holds for a subject s and an object o if and only if the clearance of s dominates the classification of o. This is exactly the reverse of the *-property, which holds if and only if the classification of o dominates the clearance of s. A state satisfies the †-property if and only if, for every triplet (s, o, p), where the right p involves writing (that is, p = a or p = w), the †-property holds for s and o.
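Definition 5–11 can be sketched directly over a set b of (s, o, p) triples; the level encoding is the same illustrative (rank, categories) pair used earlier, and is an assumption of this sketch:

```python
# Hedged sketch of McLean's †-property: a requires fc(s) dom fo(o),
# w requires equality, and r requires fc(s) dom fo(o) (Definition 5-11).

def dom(l1, l2):
    """l1 dominates l2 under the (rank, categories) encoding."""
    return l1[0] >= l2[0] and l1[1] >= l2[1]

def satisfies_dagger(b, fc, fo):
    for s, o, p in b:
        if p == 'a' and not dom(fc[s], fo[o]):
            return False
        if p == 'w' and fc[s] != fo[o]:
            return False
        if p == 'r' and not dom(fc[s], fo[o]):
            return False
    return True

LOW, HIGH = (0, frozenset()), (1, frozenset())
# A HIGH subject appending to a LOW object satisfies the †-property --
# exactly the "write down" that the *-property forbids.
assert satisfies_dagger({('s', 'o', 'a')}, {'s': HIGH}, {'o': LOW})
```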

McLean then proved the analogue to Theorem 5–4:

  • Theorem 5–16. Σ(R, D, W, z0) satisfies the †-property relative to S' ⊆ S for any secure state z0 if and only if, for every action (r, d, (b, m, f, h), (b', m', f', h')), W satisfies the following conditions for every sS':

    1. Every (s, o, p) ∊ b' – b satisfies the †-property with respect to S'.

    2. Every (s, o, p) ∊ b' that does not satisfy the †-property with respect to S' is not in b.

  • ProofSee Exercise 8, with “*-property” replaced by “†-property.”

From this theorem, and from Theorems 5–3 and 5–5, the analogue to the Basic Security Theorem follows.

  • Theorem 5–17. Basic Security Theorem: Σ(R, D, W, z0) is a secure system if and only if z0 is a secure state and W satisfies the conditions of Theorems 5–3, 5–16, and 5–5.

However, the system Σ(R, D, W, z0) is clearly nonsecure, because a subject with HIGH clearance can write information to an object with LOW classification. Information can flow down, from HIGH to LOW. This violates the basic notion of security in the confidentiality policy.

Consider the role of the Basic Security Theorem in the Bell-LaPadula Model. The goal of the model is to demonstrate that specific rules, such as the get-read rule, preserve security. But what is security? The model defines that term using the Basic Security Theorem: an instantiation of the model is secure if and only if the initial state satisfies the simple security condition, the *-property, and the ds-property, and the transition rules preserve those properties. In essence, the theorems are assertions about the three properties.

The rules describe the changes in a particular system instantiating the model. Showing that the system is secure, as defined by the analogue of Definition 5–5, requires proving that the rules preserve the three properties. Given that they do, the Basic Security Theorem asserts that reachable states of the system will also satisfy the three properties. The system will remain secure, given that it starts in a secure state.

LaPadula pointed out that McLean's statement does not reflect the assumptions of the Basic Security Theorem [617]. Specifically, the Bell-LaPadula Model assumes that a transition rule introduces no changes that violate security, but does not assume that any existing accesses that violate security are eliminated. The rules instantiating the model do no elimination (see the get-read rule, Section 5.2.4.1, as an example).

Furthermore, the nature of the rules is irrelevant to the model. The model accepts a definition of “secure” as axiomatic. The specific policy defines “security” and is an instantiation of the model. The Bell-LaPadula Model uses a military definition of security: information may not flow from a dominating entity to a dominated entity. The *-property captures this requirement. But McLean's variant uses a different definition: rather than meet the *-property, his policy requires that information not flow from a dominated entity to a dominating entity. This is not a confidentiality policy. Hence, a system satisfying McLean's policy will not satisfy a confidentiality policy. McLean's argument eloquently makes this point.

However, the sets of properties in both policies (the confidentiality policy and McLean's variant) are inductive, and the Basic Security Theorem holds. The properties may not make sense in a real system, but this is irrelevant to the model. It is very relevant to the interpretation of the model, however. The confidentiality policy requires that information not flow from a dominating subject to a dominated object. McLean substitutes a policy that allows this. These are alternative instantiations of the model.

McLean makes these points by stating problems that are central to the use of any security model. The model must abstract the notion of security that the system is to support. For example, McLean's variant of the confidentiality policy does not provide a correct definition of security for military purposes. An analyst examining a system could not use this variant to show that the system implemented a confidentiality classification scheme. The Basic Security Theorem, and indeed all theorems, fail to capture this, because the definition of “security” is axiomatic. The analyst must establish an appropriate definition. All the Basic Security Theorem requires is that the definition of security be inductive.

McLean's second observation asks whether an analyst can prove that the system being modeled meets the definition of “security.” Again, this is beyond the province of the model. The model makes claims based on hypotheses. The issue is whether the hypotheses hold for a real system.

McLean's System Z and More Questions

In a second paper [683], McLean sharpened his critique. System transitions can alter any system component, including b, f, m, and h, as long as the new state does not violate security. McLean used this property to demonstrate a system, called System Z, that satisfies the model but does not implement a confidentiality security policy. From this, he concluded that the Bell-LaPadula Model is inadequate for modeling systems with confidentiality security policies.

System Z has the weak tranquility property and supports exactly one action. When a subject requests any type of access to any object, the system downgrades all subjects and objects to the lowest security level, adds access permission to the access control matrix, and allows the access.

Let System Z's initial state satisfy the simple security condition, the *-property, and the ds-property. It can be shown that successive states of System Z also satisfy those properties and hence System Z meets the requirements of the Basic Security Theorem. However, with respect to the confidentiality security policy requirements, the system clearly is not secure, because all entities are downgraded.

McLean reformulated the notion of a secure action. He defined an alternative version of the simple security condition, the *-property, and the discretionary security property. Intuitively, an action satisfies these properties if, given a state that satisfies the properties, the action transforms the system into a (possibly different) state that satisfies these properties, and eliminates any accesses present in the transformed state that would violate the property in the initial state. From this, he shows:

  • Theorem 5–18. Σ(R, D, W, z0) is a secure system if z0 is a secure state and each action in W satisfies the alternative versions of the simple security condition, the *-property, and the discretionary security property.

  • ProofSee [683].

Under this reformulation, System Z is not secure because its rule is not secure. Specifically, consider an instantiation of System Z with two security clearances, (HIGH, { ALL }) and (LOW, { ALL }) (HIGH > LOW). The initial state has a subject s and an object o. Take fc(s) = (LOW, { ALL }), fo(o) = (HIGH, { ALL }), m[s, o] = { w }, and b = { (s, o, w) }. When s requests read access to o, the rule transforms the system into a state wherein fo'(o) = (LOW, { ALL }), (s, o, r) ∊ b', and m'[s, o] = { r, w }. However, because (s, o, r) ∊ b' – b and fo(o) dom fc(s), an illegal access has been added. Yet, under the traditional Bell-LaPadula formulation, in the final state fc'(s) = fo'(o), so the read access is legal and the state is secure; hence, the system is secure.
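The instantiation above can be run as a toy transition. The encoding is an illustrative assumption, not McLean's formal machinery; the point it shows is that the final state looks legal state by state even though HIGH information has been exposed:

```python
# A toy run of the System Z instantiation described above: on any request,
# downgrade everything to LOW, record the right, and allow the access.

LOW, HIGH = (0, frozenset({'ALL'})), (1, frozenset({'ALL'}))

def system_z_step(req, b, m, fc, fo):
    """System Z's single rule: downgrade all entities, then grant."""
    s, o, p = req
    fc2 = {k: LOW for k in fc}                 # downgrade all subjects
    fo2 = {k: LOW for k in fo}                 # downgrade all objects
    m2 = {k: set(v) for k, v in m.items()}
    m2.setdefault((s, o), set()).add(p)        # add the requested right
    return b | {(s, o, p)}, m2, fc2, fo2

b, m = {('s', 'o', 'w')}, {('s', 'o'): {'w'}}
b2, m2, fc2, fo2 = system_z_step(('s', 'o', 'r'), b, m,
                                 {'s': LOW}, {'o': HIGH})
# In the final state fc'(s) = fo'(o) = LOW, so every access in b' looks
# legal, even though o's formerly HIGH contents are now readable.
```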

McLean's conclusion is that proving that states are secure is insufficient to prove the security of a system. One must consider both states and transitions.

Bell [64] responded by exploring the fundamental nature of modeling. Modeling in the physical sciences abstracts a physical phenomenon to its fundamental properties. For example, Newtonian mathematics coupled with Kepler's laws of planetary motion provide an abstract description of how planets move. When observers noted that Uranus did not follow those laws, they calculated the existence of another, trans-Uranean planet. Adams and Le Verrier, working independently, calculated its position, and observation confirmed its existence. Refinements arise when the theories cannot adequately account for observed phenomena. For example, the precession of Mercury's orbit suggested another planet between Mercury and the sun. But none was found.[8] Einstein's theory of general relativity, which modified the theory of how planets move, explained the precession, and observations confirmed his theory.

Modeling in the foundations of mathematics begins with a set of axioms. The model demonstrates the consistency of the axioms. A model consisting of points, lines, planes, and the axioms of Euclidean geometry can demonstrate the consistency of those axioms. Attempts to prove the inconsistency of a geometry created without the Fifth Postulate[9] failed; eventually, Riemann replaced the plane with a sphere, replaced lines with great circles, and using that model demonstrated the consistency of the axioms (which became known as “Riemannian geometry”). Gödel demonstrated that consistency cannot be proved using only axioms within a system (hence Riemannian geometry assumes the consistency of Euclidean geometry, which in turn assumes the consistency of another axiomatizable system, and so forth). So this type of modeling has natural limits.

The Bell-LaPadula Model was developed as a model in the first sense. Bell pointed out that McLean's work presumed the second sense.

In the first sense of modeling, the Bell-LaPadula Model is a tool for demonstrating certain properties of rules. Whether the properties of System Z are desirable is an issue the model cannot answer. If no rules should change security compartments of entities, the system should enforce the principle of strong tranquility. System Z clearly violates this principle, and hence would be considered not secure. (The principle of tranquility adds requirements to state transitions, so given that principle, the Bell-LaPadula Model actually constrains both states and state transitions.)

In the second sense, Bell pointed out that the two models (the original Bell-LaPadula Model, and McLean's variant) define security differently. Hence, that System Z is not secure under one model, but secure under the other, is not surprising. As an example, consider the following definitions of prime number.

  • Definition 5–12. A prime number is an integer n > 1 that has only 1 and itself as divisors.

  • Definition 5–13. A prime number is an integer n > 0 that has only 1 and itself as divisors.

Both definitions, from a mathematical point of view, are acceptable and consistent with the laws of mathematics. So, is the integer 1 prime? By Definition 5–12, no; by Definition 5–13, yes. Neither answer is “right” or “wrong” in an absolute sense.[10]

Summary

McLean's questions and observations about the Bell-LaPadula Model raised issues about the foundations of computer security, and Bell and LaPadula's responses fueled interest in those issues. The annual Computer Security Foundations Workshop began shortly afterward to examine foundational questions.

Summary

The influence of the Bell-LaPadula Model permeates all policy modeling in computer security. It was the first mathematical model to capture attributes of a real system in its rules. It formed the basis for several standards, including the Department of Defense's Trusted Computer System Evaluation Criteria (the TCSEC or the “Orange Book” discussed in Chapter 21) [285]. Even in controversy, the model spurred further studies in the foundations of computer security.

Other models of confidentiality arise in practical contexts. They may not form lattices. In this case, they can be embedded into a lattice model. Still other confidentiality models are not multilevel in the sense of Bell-LaPadula. These models include integrity issues, and Chapter 7, “Hybrid Policies,” discusses several.

Confidentiality models may be viewed as models constraining the way information moves about a system. The notions of noninterference and nondeducibility provide an alternative view that in some ways matches reality better than the Bell-LaPadula Model; Chapter 8, “Noninterference and Policy Composition,” discusses these models.

Research Issues

Research issues in confidentiality arise in the application of multilevel security models. One critical issue is the inclusion of declassification within the model (as opposed to being an exception, allowed by a trusted user such as the system security officer). A second such issue is how to abstract the details of the system being modeled to a form about which results can be proved; databases and multilevel networks are often the targets of this. A third issue is the relationship of different formulations of the model. What is their expressive power? Which allows the most accurate description of the system being modeled?

Another issue is that of models of information flow. The confidentiality models usually speak in terms of channels designed to move information (such as reading and writing). But information can flow along other channels. How to integrate these channels into models, and how to show that models correctly capture them, are critical research issues.

Yet another issue is how to apply confidentiality policies to a collection of systems implementing slightly different variations of the policy and with different security interfaces. How can the systems be merged to meet the policy? How does one derive the wrapper specifications needed to allow the systems to connect securely, and how does one validate that the resulting policy is “close enough” to the desired policy in practice?

Further Reading

The developers of the ADEPT-50 system presented a formal model of the security controls that predated the Bell-LaPadula Model [633, 1037]. Landwehr and colleagues [612] explored aspects of formal models for computer security. Denning used the Bell-LaPadula Model in SeaView [272, 275], a database designed with security features. The model forms the basis for several other models, including the database model of Jajodia and Sandhu [521] and the military message system model of Landwehr [613]. The latter is an excellent example of how models are applied in practice.

Dion [300] extended the Bell-LaPadula Model to allow system designers and implementers to use that model more easily. Sidhu and Gasser [921] designed a local area network to handle multiple security levels.

Feiertag, Levitt, and Robinson [341] developed a multilevel model that has several differences from the Bell-LaPadula Model. Taylor [992] elegantly compares them. Smith and Winslett [936] use a mandatory model, which differs from the Bell-LaPadula Model, to model databases.

Gambel [379] discusses efforts to apply a confidentiality policy similar to Bell-LaPadula to a system developed from off-the-shelf components, none of which implemented the policy precisely.

Irvine and Volpano [513] cast multilevel security in terms of a type subsystem for a polymorphic programming language.

Exercises

1:

Why is it meaningless to have compartments at the UNCLASSIFIED level (such as (UNCLASSIFIED, { NUC }) and (UNCLASSIFIED, { EUR }))?

2:

Given the security levels TOP SECRET, SECRET, CONFIDENTIAL, and UNCLASSIFIED (ordered from highest to lowest), and the categories A, B, and C, specify what type of access (read, write, or both) is allowed in each of the following situations. Assume that discretionary access controls allow anyone access unless otherwise specified.

  1. Paul, cleared for (TOP SECRET, { A, C }), wants to access a document classified (SECRET, { B, C }).

  2. Anna, cleared for (CONFIDENTIAL, { C }), wants to access a document classified (CONFIDENTIAL, { B }).

  3. Jesse, cleared for (SECRET, { C }), wants to access a document classified (CONFIDENTIAL, { C }).

  4. Sammi, cleared for (TOP SECRET, { A, C }), wants to access a document classified (CONFIDENTIAL, { A }).

  5. Robin, who has no clearances (and so works at the UNCLASSIFIED level), wants to access a document classified (CONFIDENTIAL, { B }).

3:

Prove that any file in the DG/UX system with a link count greater than 1 must have an explicit MAC label.

4:

In the DG/UX system, why is the virus prevention region below the user region?

5:

In the DG/UX system, why is the administrative region above the user region?

6:

Prove that the two properties of the hierarchy function (see Section 5.2.3) allow only trees and single nodes as organizations of objects.

7:

Declassification effectively violates the *-property of the Bell-LaPadula Model. Would raising the classification of an object violate any properties of the model? Why or why not?

8:

Prove Theorem 5–4. (Hint: Proceed along lines similar to the proof of Theorem 5–3.)

9:

Prove Theorem 5–5.

10:

Consider Theorem 5–6. Would the theorem hold if the requirement that z0 be a secure state were eliminated? Justify your answer.

11:

Prove Theorems 5–9 and 5–11.

12:

Consider McLean's reformulation of the simple security condition, the *-property, and the ds-property (see page 146).

  1. Does this eliminate the need to place constraints on the initial state of the system in order to prove that the system is secure?

  2. Why do you believe Bell and LaPadula did not use this formulation?



[1] The terminology in this section follows that of the unified exposition of the Bell-LaPadula Model [68].

[2] There is less than full agreement on this terminology. Some call security levels “compartments.” However, others use this term as a synonym for “categories.” We follow the terminology of the unified exposition [68].

[3] The terminology used here corresponds to that of the DG/UX system. Note that “hierarchy level” corresponds to “clearance” or “classification” in the preceding section.

[4] The TCB, or trusted computing base, is that part of the system that enforces security (see Section 19.1.2.2).

[5] The right called “empty” here is called “execute” in Bell and LaPadula [68]. However, they define “execute” as “neither observation nor alteration” (and note that it differs from the notion of “execute” that most systems implement). For clarity, we changed the e right's name to the more descriptive “empty.”

[6] P(O) is the power set of O—that is, the set of all possible subsets of O.
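For a finite set, the power set can be enumerated directly; this small sketch (illustrative only, not tied to any notation in the model) shows that a set of n elements has 2^n subsets, including the empty set and the set itself.

```python
# Sketch: P(O), the power set of a finite set O.
from itertools import combinations

def power_set(O):
    elems = list(O)
    # All subsets of size 0, 1, ..., len(O).
    return [set(c) for r in range(len(elems) + 1)
                   for c in combinations(elems, r)]

print(len(power_set({"a", "b", "c"})))  # 8, since 2**3 == 8
```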

[7] Actually, the rule is give-read/execute/write/append. The generalization is left as an exercise for the reader.

[8] Observers reported seeing this planet, called Vulcan, in the mid-1800s. The sighting was never officially confirmed, and the refinements discussed above explained the precession adequately. Willy Ley's book [625] relates the charming history of this episode.

[9] The Fifth Postulate of Euclid states that given a line and a point, there is exactly one line that can be drawn through that point parallel to the existing line. Attempts to prove this postulate failed; in the 1800s, Riemann and Lobachevsky demonstrated the axiomatic nature of the postulate by developing geometries in which the postulate does not hold [774].

[10] By convention, mathematicians use Definition 5–12. The integer 1 is neither prime nor composite.
