Chapter 7. Hybrid Policies

 

JULIET: Come, vial.
What if this mixture do not work at all?
Shall I be marry'd then tomorrow morning?
No, no! this shall forbid it, lie thou there.

 
 --The Tragedy of Romeo and Juliet, IV, iii, 20–22.

Few organizations limit their security objectives to confidentiality or integrity only; most desire both, in some mixture. This chapter presents two such models. The Chinese Wall model is derived from the British laws concerning conflict of interest. The Clinical Information Systems security model is derived from medical ethics and laws about dissemination of patient data. Two other models present alternative views of information management. Originator controlled access control lets the creator determine (or assign) who should access the data and how. Role-based access control formalizes the more common notion of “groups” of users.

Chinese Wall Model

The Chinese Wall model [146] is a model of a security policy that refers equally to confidentiality and integrity. It describes policies that involve a conflict of interest in business, and is as important to those situations as the Bell-LaPadula Model is to the military. For example, British law requires the use of a policy similar to this, and correct implementation of portions of the model provides a defense in cases involving certain criminal charges [653, 654]. The environment of a stock exchange or investment house is the most natural environment for this model. In this context, the goal of the model is to prevent a conflict of interest in which a trader represents two clients, and the best interests of the clients conflict, so the trader could help one gain at the expense of the other.

Informal Description

Consider the database of an investment house. It consists of companies' records about investment and other data that investors are likely to request. Analysts use these records to guide the companies' investments, as well as those of individuals. Suppose Anthony counsels Bank of America in its investments. If he also counsels Citibank, he has a potential conflict of interest, because the two banks' investments may come into conflict. Hence, Anthony cannot counsel both banks.

The following definitions capture this:

  • Definition 7–1. The objects of the database are items of information related to a company.

  • Definition 7–2. A company dataset (CD) contains objects related to a single company.

  • Definition 7–3. A conflict of interest (COI) class contains the datasets of companies in competition.

Let COI(O) represent the COI class that contains object O, and let CD(O) be the company dataset that contains object O. The model assumes that each object belongs to exactly one COI class.

Anthony has access to the objects in the CD of Bank of America. Because the CD of Citibank is in the same COI class as that of Bank of America, Anthony cannot gain access to the objects in Citibank's CD. Thus, this structure of the database provides the required ability. (See Figure 7-1.)

Figure 7-1. The Chinese Wall model database. It has two COI classes. The one for banks contains three CDs. The other one, for gasoline companies, contains four CDs. Each (COI, CD) pair is represented by a lowercase letter (for example, (Bank COI, Citibank) is c). Susan may have access to no more than one CD in each COI, so she could access Citibank's CD and ARCO's CD, but not Citibank's CD and Bank of America's CD.

This implies a temporal element. Suppose Anthony first worked on Bank of America's portfolio and was then transferred to Citibank's portfolio. Even though he is working only on one CD in the bank COI class at a time, much of the information he learned from Bank of America's portfolio will be current. Hence, he can guide Citibank's investments using information about Bank of America—a conflict of interest. This leads to the following rule, where PR(S) is the set of objects that S has read.

  • CW-Simple Security Condition, Preliminary Version. S can read O if and only if either of the following is true.

    1. There is an object O' such that S has accessed O' and CD(O') = CD(O).

    2. For all objects O', O' ∊ PR(S) ⇒ COI(O') ≠ COI(O).

Initially, PR(S) = Ø, and the initial read request is assumed to be granted. Given these assumptions, in the situation above, Bank of America's COI class and Citibank's COI class are the same, so the second part of the CW-simple security condition applies: having already accessed an object in Bank of America's CD, Anthony cannot access an object in Citibank's CD.
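As a concrete sketch, the preliminary CW-simple security condition can be implemented as a check against the history PR(S). The class design, attribute names, and company names below are illustrative assumptions, not part of the model itself.

```python
# A minimal sketch of the CW-simple security condition (preliminary version).
# The Obj/Subject classes and the company names are illustrative assumptions.

class Obj:
    def __init__(self, coi, cd):
        self.coi = coi    # COI(O): the object's conflict of interest class
        self.cd = cd      # CD(O): the object's company dataset

class Subject:
    def __init__(self):
        self.pr = []      # PR(S): the objects this subject has read

    def can_read(self, obj):
        # Condition 1: some O' already accessed with CD(O') = CD(O).
        if any(o.cd == obj.cd for o in self.pr):
            return True
        # Condition 2: for all O' in PR(S), COI(O') != COI(O).
        # (Vacuously true when PR(S) is empty, so the initial read succeeds.)
        return all(o.coi != obj.coi for o in self.pr)

    def read(self, obj):
        if not self.can_read(obj):
            return False
        self.pr.append(obj)
        return True

anthony = Subject()
boa  = Obj(coi="banks", cd="Bank of America")
citi = Obj(coi="banks", cd="Citibank")
arco = Obj(coi="gasoline", cd="ARCO")

assert anthony.read(boa)        # PR(S) empty: granted
assert anthony.read(arco)       # different COI class: granted
assert not anthony.read(citi)   # same COI class, different CD: denied
```

Note how the decision depends on the history of past reads rather than on any static label attached to the subject.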

Two immediate consequences of this rule affect subject rights. First, once a subject reads any object in a COI class, the only other objects in that COI class that the subject can read are in the same CD as the read object. So, if Susan accesses some information in Citibank's CD, she cannot later access information in Bank of America's CD.

Second, the minimum number of subjects needed to access every object in a COI class is the same as the number of CDs in that COI class. If the gasoline company COI class has four CDs, then at least four analysts are needed to access all information in the COI class. Thus, any trading house must have at least four analysts to access all information in that COI class without creating a conflict of interest.

In practice, companies have information they can release publicly, such as annual stockholders' reports and filings before government commissions. The Chinese Wall model should not consider this information restricted, because it is available to all. Hence, the model distinguishes between sanitized data and unsanitized data; the latter falls under the CW-simple security condition, preliminary version, whereas the former does not. The CW-simple security condition can be reformulated to include this notion.

  • CW-Simple Security Condition. S can read O if and only if any of the following holds.

    1. There is an object O' such that S has accessed O' and CD(O') = CD(O).

    2. For all objects O', O' ∊ PR(S) ⇒ COI(O') ≠ COI(O).

    3. O is a sanitized object.

Suppose Anthony and Susan work in the same trading house. Anthony can read objects in Bank of America's CD, and Susan can read objects in Citibank's CD. Both can read objects in ARCO's CD. If Anthony can also write to objects in ARCO's CD, then he can read information from objects in Bank of America's CD and write to objects in ARCO's CD, and then Susan can read that information; so, Susan can indirectly obtain information from Bank of America's CD, causing a conflict of interest. The CW-simple security condition must be augmented to prevent this.

  • CW-*-Property. A subject S may write to an object O if and only if both of the following conditions hold.

    1. The CW-simple security condition permits S to read O.

    2. For all unsanitized objects O', S can read O' ⇒ CD(O') = CD(O).

In the example above, Anthony can read objects in both Bank of America's CD and ARCO's CD. Thus, condition 1 is met. However, assuming that Bank of America's CD contains unsanitized objects (a reasonable assumption), then because Anthony can read those objects, condition 2 is false. Hence, Anthony cannot write to objects in ARCO's CD.
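The full read condition and the CW-*-property can be sketched together. For simplicity, the write check below ranges over the objects the subject has already read (PR(S)) rather than all objects it could read; the names and the sanitized flag are illustrative assumptions.

```python
# A sketch of the full CW-simple security condition and the CW-*-property.
# The write check ranges over PR(S) for simplicity; all names are
# illustrative assumptions.

class Obj:
    def __init__(self, coi, cd, sanitized=False):
        self.coi, self.cd, self.sanitized = coi, cd, sanitized

class Subject:
    def __init__(self):
        self.pr = []  # PR(S)

    def can_read(self, o):
        if o.sanitized:                              # condition 3
            return True
        if any(p.cd == o.cd for p in self.pr):       # condition 1
            return True
        return all(p.coi != o.coi for p in self.pr)  # condition 2

    def can_write(self, o):
        # CW-*-property, condition 1: S may read O.
        if not self.can_read(o):
            return False
        # Condition 2: every unsanitized object read lies in CD(O).
        return all(p.cd == o.cd for p in self.pr if not p.sanitized)

anthony = Subject()
boa  = Obj("banks", "Bank of America")
arco = Obj("gasoline", "ARCO")
anthony.pr = [boa, arco]            # Anthony has read both CDs

assert anthony.can_read(arco)       # condition 1 of the CW-*-property holds
assert not anthony.can_write(arco)  # but BoA data is unsanitized: no write
```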

Formal Model

Let S be a set of subjects, let O be a set of objects, and let L = C × D be a set of labels. Define projection functions l1: O → C and l2: O → D. C corresponds to the set of COI classes, and D to the set of CDs, in the informal exposition above. The access matrix entry for s ∊ S and o ∊ O is H(s, o); that element is true if s has, or has had, read access to o, and is false otherwise. (Note that H is not an access control matrix, because it does not reflect the allowed accesses, but merely the granted accesses.) This matrix incorporates a history element into the standard access control matrix. Finally, R(s, o) represents s's request to read o.

The model's first assumption is that a CD does not span two COI classes. Hence, if two objects are in the same CD, they are in the same COI class.

  • Axiom 7–1. For all o, o' ∊ O, if l2(o) = l2(o'), then l1(o) = l1(o').

  • The contrapositive is as follows:

  • Lemma 7–1. For all o, o' ∊ O, if l1(o) ≠ l1(o'), then l2(o) ≠ l2(o').

  • So two objects in different COI classes are also in different CDs.

  • Axiom 7–2. A subject s can read an object o if and only if, for all o' ∊ O such that H(s, o') = true, either l1(o') ≠ l1(o) or l2(o') = l2(o).

This axiom is the CW-simple security condition: a subject can read an object if and only if it has not read objects in other datasets in the object's COI class, or if it has read objects in the object's CD. However, this rule must also hold initially for the state to be secure. So, the simplest state for which the CW-simple security condition holds is that state in which no accesses have occurred; and in that state, any requests for access should be granted. The next two axioms state this formally.

  • Axiom 7–3. H(s, o) = false for all s ∊ S and o ∊ O is an initially secure state.

  • Axiom 7–4. If for some s ∊ S and for all o ∊ O, H(s, o) = false, then any request R(s, o) is granted.

The following theorem shows that a subject can only read the objects in a single dataset in a COI class.

  • Theorem 7–1. Suppose a subject s ∊ S has read an object o ∊ O. If s can read o' ∊ O, o' ≠ o, then l1(o') ≠ l1(o) or l2(o') = l2(o).

  • Proof. By contradiction. Because s has read o, H(s, o) = true. Suppose s reads o'; then H(s, o') = true. By hypothesis, l1(o') = l1(o) and l2(o') ≠ l2(o). Summarizing this:

  • H(s, o) = true ∧ H(s, o') = true ∧ l1(o') = l1(o) ∧ l2(o') ≠ l2(o)

  • Without loss of generality, assume that s read o first. Then H(s, o) = true when s read o'; by Axiom 7–2, either l1(o') ≠ l1(o) or l2(o') = l2(o). This leads to:

  • (l1(o') ≠ l1(o) ∨ l2(o') = l2(o)) ∧ (l1(o') = l1(o) ∧ l2(o') ≠ l2(o))

  • which is equivalent to

  • (l1(o') ≠ l1(o) ∧ l1(o') = l1(o) ∧ l2(o') ≠ l2(o)) ∨

  • (l2(o') = l2(o) ∧ l1(o') = l1(o) ∧ l2(o') ≠ l2(o))

  • However, because l1(o') ≠ l1(o) ∧ l1(o') = l1(o) is false, and l2(o') = l2(o) ∧ l2(o') ≠ l2(o) is also false, this expression is false, contradicting the hypothesis.

From this, it follows that a subject can access at most one CD in each COI class.

  • Lemma 7–2. Suppose a subject s ∊ S can read an object o ∊ O. Then s can read no o' for which l1(o') = l1(o) and l2(o') ≠ l2(o).

  • Proof. Initially, s has read no object, so by Axioms 7–3 and 7–4, access will be granted for any object o. This proves the lemma for the trivial case. Now consider another object o'. By Theorem 7–1, if s can read o' ∊ O, o' ≠ o, then l1(o') ≠ l1(o) or l2(o') = l2(o). Taking the contrapositive, if l1(o') = l1(o) and l2(o') ≠ l2(o), then s cannot read o', proving the lemma in the general case.

Suppose a single COI class has n CDs. Then at least n subjects are needed to access every object. The following theorem establishes this requirement.

  • Theorem 7–2. Let c ∊ C and d ∊ D. Suppose there are n objects oi ∊ O, 1 ≤ i ≤ n, for which l1(oi) = c for 1 ≤ i ≤ n, and l2(oi) ≠ l2(oj), 1 ≤ i, j ≤ n, i ≠ j. Then for all such o, there is an s ∊ S that can read o if and only if n ≤ | S |.

  • Proof. By Axiom 7–2, a subject that can read one of the objects oi cannot read any other oj, j ≠ i, because the oj lie in different CDs of the same COI class. Because there are n such objects, at least n subjects are needed to meet the conditions of the theorem.

We next add the notion of sanitizing data. Let v(o) be the sanitized version of object o; so, for example, if v(o) = o, the object contains only public information. All sanitized objects are in a special CD in a COI containing no other CD.

  • Axiom 7–5. l1(o) = l1(v(o)) if and only if l2(o) = l2(v(o)).

Writing is allowed only if information cannot leak indirectly between two subjects; for example, the object cannot be used as a kind of mailbox. The next axiom captures this constraint.

  • Axiom 7–6. A subject s ∊ S can write to an object o ∊ O if and only if the following conditions hold simultaneously.

    1. H(s, o) = true.

    2. There is no o' ∊ O with H(s, o') = true, l2(o) ≠ l2(o'), l2(o) ≠ l2(v(o)), l2(o') ≠ l2(v(o')), and s can read o'.

The next definition captures the notion of “information flow” by stating that information can flow from one object to another if a subject can access both objects.

  • Definition 7–4. Information may flow from o ∊ O to o' ∊ O if there exists a subject s ∊ S such that H(s, o) = true and H(s, o') = true. This is written (o, o').

Information flows even if the access is read-only, because then s can act on information contained in both objects, so in some sense information has flowed between them into a third entity (the subject).

The next theorem shows that unsanitized information is confined to its CD, but sanitized information may flow freely about the system.

  • Theorem 7–3. For any given system, the set of all information flows is the set

  • { (o, o') | o ∊ O ∧ o' ∊ O ∧ ( l2(o) = l2(o') ∨ l2(o) = l2(v(o)) ) }

  • Proof. The set

  • F = { (o, o') | o ∊ O ∧ o' ∊ O ∧ ∃ s ∊ S such that (H(s, o) = true ∧ H(s, o') = true) }

    is the set of all information flows in the system, by Definition 7–4. Let F* be its transitive closure, which is the set of all information flows that may occur as the system changes state.

  • The rules banning write access constrain which of these flows will be allowed. The set of flows that Axiom 7–6 excludes are those in the set

  • X = { (o, o') | o ∊ O ∧ o' ∊ O ∧ l2(o) ≠ l2(o') ∧ l2(o) ≠ l2(v(o)) }

  • The remaining information flows are

  • F* – X = { (o, o') | o ∊ O ∧ o' ∊ O ∧ ¬( l2(o) ≠ l2(o') ∧ l2(o) ≠ l2(v(o)) ) }

  • which, by propositional logic, is equivalent to

  • F* – X = { (o, o') | o ∊ O ∧ o' ∊ O ∧ ( l2(o) = l2(o') ∨ l2(o) = l2(v(o)) ) }

  • establishing the result.
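The flow sets used in this proof can be computed directly. The sketch below builds the direct-flow set F of Definition 7–4 from the history matrix H, represented here as a set of (subject, object) pairs for which H(s, o) = true, and then takes its transitive closure F*; the sample subjects and objects are illustrative assumptions.

```python
# Sketch: computing the direct flow set F of Definition 7-4 and the
# transitive closure F* used in the proof of Theorem 7-3. H is modeled as
# a set of (subject, object) pairs; all sample names are illustrative.

from itertools import product

def direct_flows(subjects, objects, H):
    """F: pairs (o, o') such that some subject has accessed both."""
    return {(o, o2) for o, o2 in product(objects, repeat=2)
            if any((s, o) in H and (s, o2) in H for s in subjects)}

def transitive_closure(flows):
    """F*: iterate until no new composed flows appear."""
    closure = set(flows)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in list(product(closure, repeat=2)):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

subjects = {"s1", "s2"}
objects = {"o1", "o2", "o3"}
H = {("s1", "o1"), ("s1", "o2"), ("s2", "o2"), ("s2", "o3")}

F = direct_flows(subjects, objects, H)
assert ("o1", "o2") in F                       # s1 accessed both
assert ("o1", "o3") not in F                   # no single subject did
assert ("o1", "o3") in transitive_closure(F)   # but o2 relays the flow
```

The example shows why the proof works with F* rather than F: information can move between objects no subject holds in common, provided an intermediate object relays it.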

Bell-LaPadula and Chinese Wall Models

The Bell-LaPadula Model and the Chinese Wall model are fundamentally different. Subjects in the Chinese Wall model have no associated security labels, whereas subjects in the Bell-LaPadula Model do have such labels. Furthermore, the Bell-LaPadula Model has no notion of “past accesses,” but this notion is central to the Chinese Wall model's controls.

To emulate the Chinese Wall model using Bell-LaPadula, we assign a security category to each (COI, CD) pair. We define two security levels, S (for sanitized) and U (for unsanitized). By assumption, S dom U. Figure 7-2 illustrates this mapping for the system in Figure 7-1. Each object is transformed into two objects, one sanitized and one unsanitized.

Figure 7-2. The relevant parts of the Bell-LaPadula lattice induced by the transformation applied to the system in Figure 7-1. For example, a subject with security clearance in class (U, {a,s}) can read objects with labels (U, {a}) and (U, {s}). The Bell-LaPadula Model defines other compartments (such as U, {a, b}), but because these would allow access to different CDs in the same COI class, the Chinese Wall model requires that compartment to be empty.

Each subject in the Chinese Wall model is then assigned clearance for the compartments that do not contain multiple categories corresponding to CDs in the same COI class. For example, if Susan can read the Bank of America and ARCO CDs, her processes would have clearance for compartment (U, {a, n}). There are three possible clearances from the bank COI class, and four possible clearances from the gasoline company COI class, combining to give 12 possible clearances for subjects. Of course, all subjects can read all sanitized data.

The CW-simple security condition clearly holds. The CW-*-property also holds, because the Bell-LaPadula *-property ensures that the category of input objects is a subset of the category of output objects. Hence, input objects are either sanitized or in the same category (that is, the same CD) as that of the subject.
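The construction can be sketched concretely: each (COI, CD) pair becomes a category, a label is a (level, category set) pair with S dom U, and reading requires that the subject's label dominate the object's. The category letters a (Bank of America), n (ARCO), and s (sanitized data) follow the figure; the code itself is an illustrative assumption, not the book's notation.

```python
# Sketch of the Bell-LaPadula emulation of the Chinese Wall model. Labels
# are (level, category set) pairs; categories a, n, s are illustrative.

LEVELS = {"U": 0, "S": 1}   # S dom U

def dom(sub_label, obj_label):
    """sub_label dominates obj_label: level >= and category superset."""
    (lvl1, cats1), (lvl2, cats2) = sub_label, obj_label
    return LEVELS[lvl1] >= LEVELS[lvl2] and cats1 >= cats2

# Susan is cleared for Bank of America (a), ARCO (n), and sanitized data (s).
susan = ("U", {"a", "n", "s"})

assert dom(susan, ("U", {"a"}))      # may read Bank of America's CD
assert dom(susan, ("U", {"s"}))      # may read sanitized objects
assert not dom(susan, ("U", {"c"}))  # Citibank: another CD in the bank COI
```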

This construction shows that at any time the Bell-LaPadula Model can capture the state of a system using the Chinese Wall model. But the Bell-LaPadula Model cannot capture changes over time. For example, suppose Susan falls ill, and Anna needs to access one of the datasets to which Susan has access. How can the system know if Anna is allowed to access that dataset? The Chinese Wall model tracks the history of accesses, from which Anna's ability to access the CD can be determined. But if the corresponding category is not in Anna's clearances, the Bell-LaPadula Model does not retain the history needed to determine whether her accessing the category would violate the Chinese Wall constraints.

A second, more serious problem arises when one considers that subjects in the Chinese Wall model may choose which CDs to access; in other words, initially a subject is free to access all objects. The Chinese Wall model's constraints grow as the subject accesses more objects. However, from the initial state, the Bell-LaPadula Model constrains the set of objects that a subject can access. This set cannot change unless a trusted authority (such as a system security officer) changes subject clearances or object classifications. The obvious solution is to clear all subjects for all categories, but this means that any subject can read any object, which violates the CW-simple security condition.

Hence, the Bell-LaPadula Model cannot emulate the Chinese Wall model faithfully. This demonstrates that the two policies are distinct.

However, the Chinese Wall model can emulate the Bell-LaPadula Model; the construction is left as an exercise for the reader. (See Exercise 2.)

Clark-Wilson and Chinese Wall Models

The Clark-Wilson model deals with many aspects of integrity, such as validation and verification, as well as access control. Because the Chinese Wall model deals exclusively with access control, it cannot emulate the Clark-Wilson model fully. So, consider only the access control aspects of the Clark-Wilson model.

The representation of access control in the Clark-Wilson model is the second enforcement rule, ER2. That rule associates users with transformation procedures and CDIs on which they can operate. If one takes the usual view that “subject” and “process” are interchangeable, then a single person could use multiple processes to access objects in multiple CDs in the same COI class. Because the Chinese Wall model would view processes independently of who was executing them, no constraints would be violated. However, by requiring that a “subject” be a specific individual and including all processes executing on that subject's behalf, the Chinese Wall model is consistent with the Clark-Wilson model.

Clinical Information Systems Security Policy

Medical records require policies that combine confidentiality and integrity, but in a very different way than for brokerage firms. Conflict of interest is not a critical problem. Patient confidentiality, authentication of both records and the personnel making entries in those records, and assurance that the records have not been changed erroneously are critical. Anderson [30] presents a model for such policies that illuminates the combination of confidentiality and integrity to protect patient privacy and record integrity.

Anderson defines three types of entities in the policy.

  • Definition 7–5. A patient is the subject of medical records, or an agent for that person who can give consent for the person to be treated.

  • Definition 7–6. Personal health information is information about a patient's health or treatment enabling that patient to be identified.

In more common parlance, the “personal health information” is contained in a medical record. We will refer to “medical records” throughout, under the assumption that all personal health information is kept in the medical records.

  • Definition 7–7. A clinician is a health-care professional who has access to personal health information while performing his or her job.

The policy also assumes that personal health information concerns one individual at a time. Strictly speaking, this is not true. For example, obstetrics/gynecology records contain information about both the father and the mother. In these cases, special rules come into play, and the policy does not cover them.

The policy is guided by principles similar to the certification and enforcement rules of the Clark-Wilson model. These principles are derived from the medical ethics of several medical societies, and from the experience and advice of practicing clinicians.[1]

The first set of principles deals with access to the medical records themselves. It requires a list of those who can read the records, and a list of those who can append to the records. Auditors are given access to copies of the records, so the auditors cannot alter the original records in any way. Clinicians by whom the patient has consented to be treated can also read and append to the medical records. Because clinicians often work in medical groups, consent may apply to a set of clinicians. The notion of groups abstracts this set well. Thus:

Access Principle 1. Each medical record has an access control list naming the individuals or groups who may read and append information to the record. The system must restrict access to those identified on the access control list.

Medical ethics require that only clinicians and the patient have access to the patient's medical record. Hence:

Access Principle 2. One of the clinicians on the access control list (called the responsible clinician) must have the right to add other clinicians to the access control list.

Because the patient must consent to treatment, the patient has the right to know when his or her medical record is accessed or altered. Furthermore, if a clinician who is unfamiliar to the patient accesses the record, the patient should be notified of the leakage of information. This leads to another access principle:

Access Principle 3. The responsible clinician must notify the patient of the names on the access control list whenever the patient's medical record is opened. Except for situations given in statutes, or in cases of emergency, the responsible clinician must obtain the patient's consent.

Erroneous information should be corrected, not deleted, to facilitate auditing of the records. Auditing also requires that all accesses be recorded, along with the date and time of each access and the name of each person accessing the record.

Access Principle 4. The name of the clinician, the date, and the time of the access of a medical record must be recorded. Similar information must be kept for deletions.

The next set of principles concerns record creation and information deletion. When a new medical record is created, the clinician creating the record should have access, as should the patient. Typically, the record is created as a result of a referral. The referring clinician needs access to obtain the results of the referral, and so is included on the new record's access control list.

Creation Principle. A clinician may open a record, with the clinician and the patient on the access control list. If the record is opened as a result of a referral, the referring clinician may also be on the access control list.

How long the medical records are kept varies with the circumstances. Normally, medical records can be discarded after 8 years, but in some cases—notably cancer cases—the records are kept longer.

Deletion Principle. Clinical information cannot be deleted from a medical record until the appropriate time has passed.

Confinement protects information: a control must ensure that data copied from one record to another does not become available to a new, wider audience. Thus, information from a record can be given only to those on the record's access control list.

Confinement Principle. Information from one medical record may be appended to a different medical record if and only if the access control list of the second record is a subset of the access control list of the first.
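This principle reduces to a subset test on access control lists. The sketch below shows such a check; the record representation and the names on the lists are illustrative assumptions.

```python
# A minimal sketch of the Confinement Principle: appending information taken
# from record r1 into record r2 is permitted only when ACL(r2) is a subset
# of ACL(r1), so the information gains no new readers.

def may_append_from(source_acl, dest_acl):
    """True if everyone on the destination ACL is already on the source ACL."""
    return set(dest_acl) <= set(source_acl)

alice_acl = {"patient_alice", "dr_bob", "dr_carol"}
dave_acl  = {"patient_dave", "dr_bob"}

# dr_bob may not copy a note from Alice's record into Dave's record,
# because patient_dave would then see information from Alice's record.
assert not may_append_from(alice_acl, dave_acl)
assert may_append_from(alice_acl, {"dr_bob", "dr_carol"})
```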

A clinician may have access to many records, possibly in the role of an advisor to a medical insurance company or department. If this clinician were corrupt, or could be corrupted or blackmailed, the secrecy of a large number of medical records would be compromised. Patient notification of the addition limits this threat.

Aggregation Principle. Measures for preventing the aggregation of patient data must be effective. In particular, a patient must be notified if anyone is to be added to the access control list for the patient's record and if that person has access to a large number of medical records.

Finally, systems must implement mechanisms for enforcing these principles.

Enforcement Principle. Any computer system that handles medical records must have a subsystem that enforces the preceding principles. The effectiveness of this enforcement must be subject to evaluation by independent auditors.

Bell-LaPadula and Clark-Wilson Models

Anderson notes that the Confinement Principle imposes a lattice structure on the entities in this model, much as the Bell-LaPadula Model imposes a lattice structure on its entities. Hence, the Bell-LaPadula protection model is a subset of the Clinical Information Systems security model. But the Bell-LaPadula Model focuses on the subjects accessing the objects (because there are more subjects than security labels), whereas the Clinical Information Systems model focuses on the objects being accessed by the subjects (because there are more patients, and medical records, than clinicians). This difference does not matter in traditional military applications, but it might aid detection of “insiders” in specific fields such as intelligence.

The Clark-Wilson model provides a framework for the Clinical Information Systems model. Take the CDIs to be the medical records and their associated access control lists. The TPs are the functions that update the medical records and their access control lists. The IVPs certify several items:

  • A person identified as a clinician is a clinician (to the level of assurance required by the system).

  • A clinician validates, or has validated, information in the medical record.

  • When someone (the patient and/or a clinician) is to be notified of an event, such notification occurs.

  • When someone (the patient and/or a clinician) must give consent, the operation cannot proceed until the consent is obtained.

Finally, the requirement of auditing (certification rule CR4) is met by making all records append-only, and notifying the patient whenever the access control list changes.

Originator Controlled Access Control

Mandatory and discretionary access controls (MACs and DACs) do not handle environments in which the originators of documents retain control over them even after those documents are disseminated. Graubart [419] developed a policy called ORGCON or ORCON (for “ORiginator CONtrolled”) in which a subject can give another subject rights to an object only with the approval of the creator of that object.

In practice, a single author does not control dissemination; instead, the organization on whose behalf the document was created does. Hence, objects will be marked as ORCON on behalf of the relevant organization.

Suppose a subject s ∊ S marks an object o ∊ O as ORCON on behalf of organization X. Organization X allows o to be disclosed to subjects acting on behalf of a second organization, Y, subject to the following restrictions.

  1. The object o cannot be released to subjects acting on behalf of other organizations without X's permission.

  2. Any copies of o must have the same restrictions placed on them.

Discretionary access controls are insufficient for this purpose, because the owner of an object can set any permissions desired. Thus, X cannot enforce condition 2.

Mandatory access controls are theoretically sufficient for this purpose, but in practice have a serious drawback. Associate a separate category C containing o, X, and Y and nothing else. If a subject y ∊ Y wishes to read o, x ∊ X makes a copy o' of o. The copy o' is in C, so unless z ∊ Z is also in category C, y cannot give z access to o'. This demonstrates adequacy.

Suppose a member w of an organization W wants to provide access to a document d to members of organization Y, but the document is not to be shared with members of organization X or Z. So, d cannot be in category C because if it were, members x ∊ X and z ∊ Z could access d. Another category containing d, W, and Y must be created. Multiplying this by several thousand possible relationships and documents creates an unacceptably large number of categories.

A second problem with mandatory access controls arises from the abstraction. Organizations that use categories grant access to individuals on a “need to know” basis. There is a formal, written policy determining who needs the access based on common characteristics and restrictions. These restrictions are applied at a very high level (national, corporate, organizational, and so forth). This requires a central clearinghouse for categories. The creation of categories to enforce ORCON implies local control of categories rather than central control, and a set of rules dictating who has access to each compartment.

ORCON abstracts none of this. ORCON is a decentralized system of access control in which each originator determines who needs access to the data. No centralized set of rules controls access to data; access is at the complete discretion of the originator. Hence, the MAC representation of ORCON is not suitable.

A solution is to combine features of the MAC and DAC models. The rules are

  1. The owner of an object cannot change the access controls of the object.

  2. When an object is copied, the access control restrictions of that source are copied and bound to the target of the copy.

  3. The creator (originator) can alter the access control restrictions on a per-subject and per-object basis.

The first two rules are from mandatory access controls. They say that the system controls all accesses, and no one may alter the rules governing access to those objects. The third rule is discretionary and gives the originator power to determine who can access the object. Hence, this hybrid scheme is neither MAC nor DAC.

The critical observation here is that the access controls associated with the object are under the control of the originator, not the owner of the object. Possession confers only partial control: the owner may determine to whom he or she gives access, but only if the originator allows the access. The owner may not override the originator.
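The three rules can be sketched as a small access-control class in which only the originator may alter restrictions and copies inherit them. The class and method names are illustrative assumptions, not a standard API.

```python
# A sketch of the three ORCON rules: restrictions bind to copies, the owner
# cannot change them, and only the originator can. All names illustrative.

class OrconObject:
    def __init__(self, originator, acl):
        self.originator = originator
        self.acl = set(acl)              # subjects permitted access

    def set_acl(self, requester, new_acl):
        # Rules 1 and 3: only the originator may alter the restrictions;
        # the owner of a copy has no such power.
        if requester != self.originator:
            raise PermissionError("only the originator may change the ACL")
        self.acl = set(new_acl)

    def copy(self):
        # Rule 2: a copy carries the source's restrictions, bound to the
        # same originator.
        return OrconObject(self.originator, self.acl)

doc = OrconObject(originator="X", acl={"X", "Y"})
dup = doc.copy()                         # Y's copy keeps X's restrictions
assert dup.acl == {"X", "Y"}

try:
    dup.set_acl("Y", {"Y", "Z"})         # Y cannot release the copy to Z
    assert False, "owner must not change the ACL"
except PermissionError:
    pass

doc.set_acl("X", {"X", "Y", "Z"})        # the originator may extend access
assert doc.acl == {"X", "Y", "Z"}
```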

Role-Based Access Control

The ability, or need, to access information may depend on one's job functions. This suggests associating access with the particular job of the user.

  • Definition 7–8. A role is a collection of job functions. Each role r is authorized to perform one or more transactions (actions in support of a job function). The set of authorized transactions for r is written trans(r).

  • Definition 7–9. The active role of a subject s, written actr(s), is the role that s is currently performing.

  • Definition 7–10. The authorized roles of a subject s, written authr(s), is the set of roles that s is authorized to assume.

  • Definition 7–11. The predicate canexec(s, t) is true if and only if the subject s can execute the transaction t at the current time.

Three rules reflect the ability of a subject to execute a transaction.

  • Axiom 7–7. Let S be the set of subjects and T the set of transactions. The rule of role assignment is (∀s ∈ S)(∀t ∈ T)[ canexec(s, t) → actr(s) ≠ Ø ].

This axiom simply says that if a subject can execute any transaction, then that subject has an active role. This binds the notion of execution of a transaction to the role rather than to the user.

  • Axiom 7–8. Let S be the set of subjects. Then the rule of role authorization is (∀s ∈ S)[ actr(s) ⊆ authr(s) ].

This rule means that the subject must be authorized to assume its active role. It cannot assume an unauthorized role. Without this axiom, any subject could assume any role, and hence execute any transaction.

  • Axiom 7–9. Let S be the set of subjects and T the set of transactions. The rule of transaction authorization is (∀s ∈ S)(∀t ∈ T)[ canexec(s, t) → t ∈ trans(actr(s)) ].

This rule says that a subject cannot execute a transaction for which its current role is not authorized.

The forms of these axioms restrict the transactions that can be performed. They do not ensure that the allowed transactions can be executed. This suggests that role-based access control (RBAC) is a form of mandatory access control. The axioms state rules that must be satisfied before a transaction can be executed. Discretionary access control mechanisms may further restrict transactions.
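The three axioms compose naturally into a single check. The following sketch assumes dictionaries standing in for trans, actr, and authr (the role and transaction names are invented for illustration):

```python
# Assumed example data: trans(r), authr(s), and actr(s) as dictionaries.
trans = {"doctor": {"prescribe", "diagnose"}, "clerk": {"bill"}}
authr = {"amy": {"doctor", "clerk"}}
actr = {"amy": "doctor"}

def canexec(s, t):
    r = actr.get(s)
    if r is None:                       # Axiom 7-7: execution requires an active role
        return False
    if r not in authr.get(s, set()):    # Axiom 7-8: the active role must be authorized
        return False
    return t in trans.get(r, set())     # Axiom 7-9: t must be in trans(actr(s))
```

Note that amy, although authorized for the clerk role, cannot execute the billing transaction while her active role is doctor; access follows the role, not the user.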

Capturing the notion of mutual exclusion requires a new predicate.

  • Definition 7–12. Let r be a role, and let s be a subject such that r ∈ authr(s). Then the predicate meauth(r) (for mutually exclusive authorizations) is the set of roles that s cannot assume because of the separation of duty requirement.

Putting this definition together with the above example, the principle of separation of duty can be summarized as

  • (∀r1, r2 ∈ R) [ r2 ∈ meauth(r1) → [ (∀s ∈ S) [ r1 ∈ authr(s) → r2 ∉ authr(s) ] ] ]
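One natural place to enforce this formula is at role-assignment time: a role is added to authr(s) only if no role already held excludes it. The sketch below assumes a hypothetical meauth table with a mutually exclusive trader/auditor pair:

```python
# Assumed example: trader and auditor are mutually exclusive roles.
meauth = {"trader": {"auditor"}, "auditor": {"trader"}}

def authorize(authr, s, r):
    """Add role r to authr(s) unless a role already held excludes it (Definition 7-12)."""
    held = authr.setdefault(s, set())
    for r1 in held:
        if r in meauth.get(r1, set()):
            raise ValueError(f"{r} conflicts with {r1} for subject {s}")
    held.add(r)
```

A subject authorized for the trader role is then refused authorization for the auditor role, and vice versa, which is exactly the separation of duty requirement.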

Summary

The goal of this chapter was to show that policies typically combine features of both integrity and confidentiality policies. The Chinese Wall model accurately captures requirements of a particular business (brokering) under particular conditions (the British law). The Clinical Information Systems model does the same thing for medical records. Both models are grounded in current business and clinical practice.

ORCON and RBAC take a different approach, focusing on which entities will access the data rather than on which entities should access the data. ORCON allows the author (individual or corporate) to control access to the document; RBAC restricts access to individuals performing specific functions. The latter approach can be fruitfully applied to many of the models discussed earlier.

Research Issues

Policies for survivable systems, which continue functioning in the face of massive failures, are critical to the secure and correct functioning of many types of banking, medical, and governmental systems. Of particular interest is how to enable such systems to reconfigure themselves to continue to work with a limited or changed set of components.

ORCON provides controls that are different from DAC and MAC. Are other controls distinct enough to be useful in situations where DAC, MAC, and ORCON don't work? How can integrity and consistency be integrated into the model?

Integrating roles into models appears straightforward: just use roles instead of users. But the issues are more subtle, because if an individual can change roles, information may flow in ways that should be disallowed. The issue of integrating roles into existing models, as well as defining new models using roles, is an area that requires much research.

Further Reading

Meadows [689] discusses moving the Chinese Wall into a multilevel security context. Lin [631] challenges an assumption of the model, leading to a different formulation.

Very little has been written about policy models that are useful for policies in specific fields other than government. Anderson's clinical model is an excellent example of such a policy model, as is the Chinese Wall. Foley and Jacob discuss computer-supported collaborative working confidentiality policies in the guise of specification [364]. Wiemer and Murray discuss policy models in the context of sharing information with foreign governments [1044].

McCollum, Messing, and Notargiacomo [670] have suggested an interesting variation of ORCON, called “Owner-Retained Access Control.” Unlike ORCON, this model keeps a list of the originators and owners. Like ORCON, the intersection of all sets controls access. Chandramouli [178] provides a framework for implementing many access control policies in CORBA and discusses an RBAC policy as an example. He also presents a little language for describing policies of interest.

Exercises

1:

Devise an algorithm that generates an access control matrix A for any given history matrix H of the Chinese Wall model.

2:

Develop a construction to show that a system implementing the Chinese Wall model can support the Bell-LaPadula Model.

3:

Show that the Clinical Information System model's principles implement the Clark-Wilson enforcement and certification rules.

4:

Consider using mandatory access controls and compartments to implement an ORCON control. Assume that there are k different organizations. Organization i will produce n(i, j) documents to be shared with organization j.

  1. How many compartments are needed to allow any organization to share a document with any other organization?

  2. Now assume that organization i will need to share nm(i, i1, …, im) documents with organizations i1, …, im. How many compartments will be needed?

5:

Someone once observed that “the difference between roles and groups is that a user can shift into and out of roles, whereas that user has a group identity (or identities) that are fixed throughout the session.”

  1. Consider a system such as a Berkeley-based UNIX system, in which users have secondary group identities that remain fixed during their login sessions. What are the advantages of roles with the same administrative functions as the groups?

  2. Consider a system such as a System V-based UNIX system, in which a process can have exactly one group identity. To change groups, users must execute the newgrp command. Do these groups differ from roles? Why or why not?

6:

The models in this chapter do not discuss availability. What unstated assumptions about that service are they making?

7:

A physician who is addicted to a pain-killing medicine can prescribe the medication for herself. Show how RBAC in general, and Definition 7–12 specifically, can be used to govern the dispensing of prescription drugs to prevent a physician from prescribing medicine for herself.



[1] The principles are numbered differently in Anderson's paper.
