Chapter 6. Integrity Policies

 

ISABELLA: Some one with child by him? My cousin Juliet?
LUCIO: Is she your cousin?
ISABELLA: Adoptedly; as school-maids change their names
By vain, though apt affection.

 
 --Measure for Measure, I, iv, 45–48.

An inventory control system may function correctly even if the data it manages is disclosed; it cannot function correctly if that data can be changed arbitrarily. For such a system, integrity, rather than confidentiality, is key. Integrity policies focus on integrity rather than confidentiality because most commercial and industrial firms are more concerned with accuracy than with disclosure. In this chapter we discuss the major integrity security policies and explore their design.

Goals

Commercial requirements differ from military requirements in their emphasis on preserving data integrity. Lipner [636] identifies five requirements:

  1. Users will not write their own programs, but will use existing production programs and databases.

  2. Programmers will develop and test programs on a nonproduction system; if they need access to actual data, they will be given production data via a special process, but will use it on their development system.

  3. A special process must be followed to install a program from the development system onto the production system.

  4. The special process in requirement 3 must be controlled and audited.

  5. The managers and auditors must have access to both the system state and the system logs that are generated.

These requirements suggest several principles of operation.

First comes separation of duty. The principle of separation of duty states that if two or more steps are required to perform a critical function, at least two different people should perform the steps. Moving a program from the development system to the production system is an example of a critical function. Suppose one of the application programmers made an invalid assumption while developing the program. Part of the installation procedure is for the installer to certify that the program works “correctly,” that is, as required. The error is more likely to be caught if the installer is a different person (or set of people) than the developer. Similarly, if the developer wishes to subvert the production data with a corrupt program, the certifier either must not detect the code to do the corruption, or must be in league with the developer.

Next comes separation of function. Developers do not develop new programs on production systems because of the potential threat to production data. Similarly, the developers do not process production data on the development systems. Depending on the sensitivity of the data, the developers and testers may receive sanitized production data. Further, the development environment must be as similar as possible to the actual production environment.

Last comes auditing. Commercial systems emphasize recovery and accountability. Auditing is the process of analyzing systems to determine what actions took place and who performed them. Hence, commercial systems must allow extensive auditing and thus have extensive logging (the basis for most auditing). Logging and auditing are especially important when programs move from the development system to the production system, since the integrity mechanisms typically do not constrain the certifier. Auditing is, in many senses, external to the model.

Even when disclosure is at issue, the needs of a commercial environment differ from those of a military environment. In a military environment, clearance to access specific categories and security levels brings the ability to access information in those compartments. Commercial firms rarely grant access on the basis of “clearance”; if a particular individual needs to know specific information, he or she will be given it. While this can be modeled using the Bell-LaPadula Model, it requires a large number of categories and security levels, increasing the complexity of the modeling. More difficult is the issue of controlling this proliferation of categories and security levels. In a military environment, creation of security levels and categories is centralized. In commercial firms, this creation would usually be decentralized. The former allows tight control on the number of compartments, whereas the latter allows no such control.

More insidious is the problem of information aggregation. Commercial firms usually allow a limited amount of (innocuous) information to become public, but keep a large amount of (sensitive) information confidential. By aggregating the innocuous information, one can often deduce much sensitive information. Preventing this requires the model to track what questions have been asked, and this complicates the model enormously. Certainly the Bell-LaPadula Model lacks this ability.

Biba Integrity Model

In 1977, Biba [94] studied the nature of the integrity of systems. He proposed three policies, one of which was the mathematical dual of the Bell-LaPadula Model.

A system consists of a set S of subjects, a set O of objects, and a set I of integrity levels.[1] The levels are ordered. The relation < ⊆ I × I holds when the second integrity level dominates the first. The relation ≤ ⊆ I × I holds when the second integrity level either dominates or is the same as the first. The function min: I × I → I gives the lesser of the two integrity levels (with respect to ≤). The function i: S ∪ O → I returns the integrity level of an object or a subject. The relation r ⊆ S × O defines the ability of a subject to read an object; the relation w ⊆ S × O defines the ability of a subject to write to an object; and the relation x ⊆ S × S defines the ability of a subject to invoke (execute) another subject.

Some comments on the meaning of “integrity level” will provide intuition behind the constructions to follow. The higher the level, the more confidence one has that a program will execute correctly (or detect problems with its inputs and stop executing). Data at a higher level is more accurate and/or reliable (with respect to some metric) than data at a lower level. Again, this model implicitly incorporates the notion of “trust”; in fact, the term “trustworthiness” is used as a measure of integrity level. For example, a process at a level higher than that of an object is considered more “trustworthy” than that object.

Integrity labels, in general, are not also security labels. They are assigned and maintained separately, because the reasons behind the labels are different. Security labels primarily limit the flow of information; integrity labels primarily inhibit the modification of information. They may overlap, however, with surprising results (see Exercise 3).

Biba tests his policies against the notion of an information transfer path:

  • Definition 6–1. An information transfer path is a sequence of objects o1, ..., on+1 and a corresponding sequence of subjects s1, ..., sn such that si r oi and si w oi+1 for all i, 1 ≤ i ≤ n.

Intuitively, data in the object o1 can be transferred into the object on+1 along an information flow path by a succession of reads and writes.

Low-Water-Mark Policy

Whenever a subject accesses an object, the policy changes the integrity level of the subject to the lower of the subject and the object. Specifically:

  1. s ∈ S can write to o ∈ O if and only if i(o) ≤ i(s).

  2. If s ∈ S reads o ∈ O, then i′(s) = min(i(s), i(o)), where i′(s) is the subject's integrity level after the read.

  3. s1 ∈ S can execute s2 ∈ S if and only if i(s2) ≤ i(s1).

The first rule prevents writing from one level to a higher level. This prevents a subject from writing to a more highly trusted object. Intuitively, if a subject were to alter a more trusted object, it could implant incorrect or false data (because the subject is less trusted than the object). In some sense, the trustworthiness of the object would drop to that of the subject. Hence, such writing is disallowed.

The second rule causes a subject's integrity level to drop whenever it reads an object at a lower integrity level. The idea is that the subject is relying on data less trustworthy than itself. Hence, its trustworthiness drops to the lesser trustworthy level. This prevents the data from “contaminating” the subject or its actions.

The third rule allows a subject to execute another subject provided the second is not at a higher integrity level. Otherwise, the less trusted invoker could control the execution of the invoked subject, corrupting it even though it is more trustworthy.
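
As an illustration, the following Python sketch encodes the three rules directly; the integer encoding of integrity levels (higher means more trusted) and all names are illustrative assumptions, not part of Biba's formulation.

    # A minimal sketch of Biba's low-water-mark policy. Integrity levels are
    # modeled as integers (higher = more trusted); the class and function
    # names are illustrative, not part of the original model.

    class Entity:
        def __init__(self, name, level):
            self.name = name
            self.level = level              # integrity level i(entity)

    def can_write(subject, obj):
        # Rule 1: s can write to o if and only if i(o) <= i(s).
        return obj.level <= subject.level

    def do_read(subject, obj):
        # Rule 2: the read succeeds, but the subject's integrity level drops
        # to min(i(s), i(o)).
        subject.level = min(subject.level, obj.level)

    def can_execute(invoker, invoked):
        # Rule 3: s1 can execute s2 if and only if i(s2) <= i(s1).
        return invoked.level <= invoker.level

    # Reading low-integrity data lowers the subject's level, after which it
    # can no longer write to high-integrity objects.
    s = Entity("editor", 3)
    high = Entity("production database", 3)
    low = Entity("web form input", 1)

    print(can_write(s, high))    # True: i(high) <= i(s)
    do_read(s, low)              # s drops to integrity level 1
    print(can_write(s, high))    # False: the low-water mark blocks the write

The last two lines exhibit the practical problem discussed below: the subject's integrity level only decreases.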

This policy constrains any information transfer path:

  • Theorem 6–1. If there is an information transfer path from object o1 ∈ O to object on+1 ∈ O, then enforcement of the low-water-mark policy requires that i(on+1) ≤ i(o1) for all n > 1.

  • Proof. If an information transfer path exists between o1 and on+1, then Definition 6–1 gives a sequence of subjects and objects identifying the entities on the path. Without loss of generality, assume that each read and write was performed in the order of the indices of the vertices. By induction, for any 1 ≤ k ≤ n, i(sk) = min { i(oj) | 1 ≤ j ≤ k } after k reads. As the nth write succeeds, by rule 1, i(on+1) ≤ i(sn). Thus, by transitivity, i(on+1) ≤ i(o1).

This policy prevents direct modifications that would lower integrity labels. It also prevents indirect modification by lowering the integrity label of a subject that reads from an object with a lower integrity level.

The problem with this policy is that, in practice, the subjects change integrity levels. In particular, the level of a subject is nonincreasing, which means that it will soon be unable to access objects at a high integrity level. An alternative policy is to decrease object integrity levels rather than subject integrity levels, but this policy has the property of downgrading object integrity levels to the lowest level.

Ring Policy

The ring policy ignores the issue of indirect modification and focuses on direct modification only. This solves the problems described above. The rules are as follows.

  1. Any subject may read any object, regardless of integrity levels.

  2. s ∈ S can write to o ∈ O if and only if i(o) ≤ i(s).

  3. s1 ∈ S can execute s2 ∈ S if and only if i(s2) ≤ i(s1).

The difference between this policy and the low-water-mark policy is simply that any subject can read any object. Hence, Theorem 6–1 holds for this model, too.

Biba's Model (Strict Integrity Policy)

This model is the dual of the Bell-LaPadula Model, and is most commonly called “Biba's model.” Its rules are as follows.

  1. s ∈ S can read o ∈ O if and only if i(s) ≤ i(o).

  2. s ∈ S can write to o ∈ O if and only if i(o) ≤ i(s).

  3. s1 ∈ S can execute s2 ∈ S if and only if i(s2) ≤ i(s1).

Given these rules, Theorem 6–1 still holds, but its proof changes (see Exercise 1). Note that rules 1 and 2 imply that if both read and write are allowed, i(s) = i(o).

Like the low-water-mark policy, this policy prevents indirect as well as direct modification of entities without authorization. By replacing the notion of “integrity level” with “integrity compartments,” and adding the notion of discretionary controls, one obtains the full dual of Bell-LaPadula.
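
To make the "dual of Bell-LaPadula" remark concrete, here is a sketch of the strict integrity rules in which an integrity label is assumed to be a (level, category set) pair and "dominates" means a higher-or-equal level together with a superset of categories; this encoding is for illustration only and is not prescribed by the model.

    # A sketch of the strict integrity policy with integrity categories added,
    # as suggested in the text. A label is assumed to be a (level, categories)
    # pair; "dominates" means a higher-or-equal level and a superset of
    # categories. The encoding is illustrative only.

    def dominates(a, b):
        """True if integrity label a dominates integrity label b."""
        (level_a, cats_a), (level_b, cats_b) = a, b
        return level_a >= level_b and cats_a >= cats_b

    def can_read(subj_label, obj_label):
        # Rule 1 (no read down): s may read o only if i(o) dominates i(s).
        return dominates(obj_label, subj_label)

    def can_write(subj_label, obj_label):
        # Rule 2 (no write up): s may write o only if i(s) dominates i(o).
        return dominates(subj_label, obj_label)

    def can_execute(s1_label, s2_label):
        # Rule 3: s1 may execute s2 only if i(s1) dominates i(s2).
        return dominates(s1_label, s2_label)

    subject   = (2, frozenset({"prod"}))
    trusted   = (3, frozenset({"prod"}))     # higher-integrity object
    untrusted = (1, frozenset({"prod"}))     # lower-integrity object

    print(can_read(subject, trusted))    # True: reading up is allowed
    print(can_write(subject, trusted))   # False: writing up is forbidden
    print(can_read(subject, untrusted))  # False: reading down is forbidden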

Lipner's Integrity Matrix Model

Lipner returned to the Bell-LaPadula Model and combined it with the Biba model to create a model [636] that conformed more accurately to the requirements of a commercial policy. For clarity, we consider the Bell-LaPadula aspects of Lipner's model first, and then combine those aspects with Biba's model.

Lipner's Use of the Bell-LaPadula Model

Lipner provides two security levels, in the following order (higher to lower):

  • Audit Manager (AM): system audit and management functions are at this level.

  • System Low (SL): any process can read information at this level.

He similarly defined five categories:

  • Development (D): production programs under development and testing, but not yet in production use

  • Production Code (PC): production processes and programs

  • Production Data (PD): data covered by the integrity policy

  • System Development (SD): system programs under development, but not yet in production use

  • Software Tools (T): programs provided on the production system not related to the sensitive or protected data

Lipner then assigned users to security levels based on their jobs. Ordinary users will use production code to modify production data; hence, their clearance is (SL, { PC, PD }). Application developers need access to tools for developing their programs, and to a category for the programs that are being developed (the categories should be separate). Hence, application programmers have (SL, { D, T }) clearance. System programmers develop system programs and, like application programmers, use tools to do so; hence, system programmers should have clearance (SL, { SD, T }). System managers and auditors need system high clearance, because they must be able to access all logs; their clearance is (AM, { D, PC, PD, SD, T }). Finally, the system controllers must have the ability to downgrade code once it is certified for production, so other entities cannot write to it; thus, the clearance for this type of user is (SL, { D, PC, PD, SD, T }) with the ability to downgrade programs. These security levels are summarized as follows.

Users                           Clearance
Ordinary users                  (SL, { PC, PD })
Application developers          (SL, { D, T })
System programmers              (SL, { SD, T })
System managers and auditors    (AM, { D, PC, PD, SD, T })
System controllers              (SL, { D, PC, PD, SD, T }) and downgrade privilege

The system objects are assigned to security levels based on who should access them. Objects that might be altered have two categories: that of the data itself and that of the program that may alter it. For example, an ordinary user needs to execute production code; hence, that user must be able to read production code. Placing production code in the level (SL, { PC }) allows such access by the simple security property of the Bell-LaPadula Model. Because an ordinary user needs to alter production data, the *-property dictates that production data be in (SL, { PC, PD }). Similar reasoning supplies the following:

Objects                            Class
Development code/test data         (SL, { D, T })
Production code                    (SL, { PC })
Production data                    (SL, { PC, PD })
Software tools                     (SL, { T })
System programs                    (SL, Ø)
System programs in modification    (SL, { SD, T })
System and application logs        (AM, { appropriate categories })

All logs are append-only. By the *-property, their classes must dominate those of the subjects that write to them. Hence, each log will have its own categories, but the simplest way to prevent their being compromised is to put them at a higher security level.
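
As a rough check of these assignments, the sketch below applies the Bell-LaPadula simple security property (for reads) and *-property (for writes) to a few of the labels above; the level ordering AM > SL and the set encoding are assumptions made only for illustration.

    # A sketch checking a few of Lipner's Bell-LaPadula assignments. The level
    # order AM > SL and the set encoding are assumptions for illustration.

    LEVELS = {"SL": 0, "AM": 1}

    def dom(a, b):
        """True if security label a dominates security label b."""
        (level_a, cats_a), (level_b, cats_b) = a, b
        return LEVELS[level_a] >= LEVELS[level_b] and cats_a >= cats_b

    def can_read(subj, obj):
        return dom(subj, obj)        # simple security property

    def can_write(subj, obj):
        return dom(obj, subj)        # *-property: writes flow up or stay level

    ordinary_user   = ("SL", frozenset({"PC", "PD"}))
    production_code = ("SL", frozenset({"PC"}))
    production_data = ("SL", frozenset({"PC", "PD"}))
    audit_log       = ("AM", frozenset({"PC", "PD"}))

    print(can_read(ordinary_user, production_code))   # True: may run production code
    print(can_write(ordinary_user, production_code))  # False: may not alter it
    print(can_write(ordinary_user, production_data))  # True: may update production data
    print(can_write(ordinary_user, audit_log))        # True: log entries flow upward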

We now examine this model in light of the requirements in Section 6.1.

  1. Because users do not have execute access to category T, they cannot write their own programs, so requirement 1 is met.

  2. Application programmers and system programmers do not have read or write access to category PD, and hence cannot access production data. If they do require production data to test their programs, the data must be downgraded from PD to D, and cannot be upgraded (because the model has no upgrade privilege). The downgrading requires intervention of system control users, which is a special process within the meaning of requirement 2. Thus, requirement 2 is satisfied.

  3. The process of installing a program requires the downgrade privilege (specifically, changing the category of the program from D to PC), which belongs only to the system control users; hence, only those users can install applications or system programs. The use of the downgrade privilege satisfies requirement 3's need for a special process.

  4. The control part of requirement 4 is met by allowing only system control users to have the downgrade privilege; the auditing part is met by requiring all downgrading to be logged.

  5. Finally, the placement of system management and audit users in AM ensures that they have access both to the system state and to system logs, so the model meets requirement 5.

Thus, the model meets all requirements. However, it allows little flexibility in special-purpose software. For example, a program for repairing an inconsistent or erroneous production database cannot be application-level software. To remedy these problems, Lipner integrates his model with Biba's model.

Lipner's Full Model

Augment the security classifications with three integrity classifications (highest to lowest):

  • System Program (ISP): the classifications for system programs

  • Operational (IO): the classifications for production programs and development software

  • System Low (ISL): the classifications at which users log in

Two integrity categories distinguish between production and development software and data:

  • Development (ID): development entities

  • Production (IP): production entities

The security category T (tools) allowed application developers and system programmers to use the same programs without being able to alter those programs. The new integrity categories now distinguish between development and production, so they serve the purpose of the security tools category, which is eliminated from the model. We can also collapse production code and production data into a single category. This gives us the following security categories:

  • Production (SP): production code and data

  • Development (SD): same as (previous) security category Development (D)

  • System Development (SSD): same as (previous) security category System Development (SD)

The security clearances of all classes of users remain equivalent to those of the model without integrity levels and categories. The integrity classes are chosen to allow modification of data and programs as appropriate. For example, ordinary users should be able to modify production data, so users of that class must have write access to integrity category IP. The following listing shows the integrity classes and categories of the classes of users:

Users                           Security clearance                          Integrity clearance
Ordinary users                  (SL, { SP })                                (ISL, { IP })
Application developers          (SL, { SD })                                (ISL, { ID })
System programmers              (SL, { SSD })                               (ISL, { ID })
System controllers              (SL, { SP, SD }) and downgrade privilege    (ISP, { IP, ID })
System managers and auditors    (AM, { SP, SD, SSD })                       (ISL, { IP, ID })
Repair                          (SL, { SP })                                (ISL, { IP })

The final step is to select integrity classes for objects. Consider the objects Production Code and Production Data. Ordinary users must be able to write the latter but not the former. By placing Production Data in integrity class (ISL, { IP }) and Production Code in class (IO, { IP }), an ordinary user cannot alter production code but can alter production data. Similar analysis leads to the following:

Objects                            Security level                      Integrity level
Development code/test data         (SL, { SD })                        (ISL, { IP })
Production code                    (SL, { SP })                        (IO, { IP })
Production data                    (SL, { SP })                        (ISL, { IP })
Software tools                     (SL, Ø)                             (IO, { ID })
System programs                    (SL, Ø)                             (ISP, { IP, ID })
System programs in modification    (SL, { SSD })                       (ISL, { ID })
System and application logs        (AM, { appropriate categories })    (ISL, Ø)
Repair                             (SL, { SP })                        (ISL, { IP })

The repair class of users has the same integrity and security clearance as that of production data, and so can read and write that data. It can also read production code (same security classification and (IO, { IP }) dom (ISL, { IP })), system programs ((SL, { SP }) dom (SL, Ø) and (ISP, { IP, ID }) dom (ISL, { IP })), and repair objects (same security classes and same integrity classes); it can write, but not read, the system and application logs (as (AM, { SP }) dom (SL, { SP }) and (ISL, { IP }) dom (ISL, Ø)). It cannot access development code/test data (since the security categories are disjoint), system programs in modification (since the integrity categories are disjoint), or software tools (again, since the integrity categories are disjoint). Thus, the repair function works as needed.
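
The dominance claims in the preceding paragraph can be spot-checked mechanically. The sketch below combines the Bell-LaPadula conditions with the strict integrity conditions, assuming the orderings AM > SL and ISP > IO > ISL; it is an illustration of the checks, not part of Lipner's presentation.

    # A sketch of the combined checks in Lipner's full model. Orders assumed:
    # AM > SL for security, ISP > IO > ISL for integrity. Reads must satisfy
    # both the simple security property and the strict-integrity read rule;
    # writes must satisfy the *-property and the integrity write rule.

    SEC = {"SL": 0, "AM": 1}
    INT = {"ISL": 0, "IO": 1, "ISP": 2}

    def dom(order, a, b):
        (level_a, cats_a), (level_b, cats_b) = a, b
        return order[level_a] >= order[level_b] and cats_a >= cats_b

    def can_read(subj_sec, subj_int, obj_sec, obj_int):
        return dom(SEC, subj_sec, obj_sec) and dom(INT, obj_int, subj_int)

    def can_write(subj_sec, subj_int, obj_sec, obj_int):
        return dom(SEC, obj_sec, subj_sec) and dom(INT, subj_int, obj_int)

    repair_sec, repair_int       = ("SL", {"SP"}), ("ISL", {"IP"})
    prod_code_sec, prod_code_int = ("SL", {"SP"}), ("IO", {"IP"})
    tools_sec, tools_int         = ("SL", set()), ("IO", {"ID"})

    print(can_read(repair_sec, repair_int, prod_code_sec, prod_code_int))   # True
    print(can_write(repair_sec, repair_int, prod_code_sec, prod_code_int))  # False
    print(can_read(repair_sec, repair_int, tools_sec, tools_int))           # False: ID disjoint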

The reader should verify that this model meets Lipner's requirements for commercial models.

Comparison with Biba

Lipner's model demonstrates that the Bell-LaPadula Model can meet many commercial requirements, even though it was designed for a very different purpose. The resiliency of that model is part of its attractiveness; however, fundamentally, the Bell-LaPadula Model restricts the flow of information. Lipner notes this, suggesting that combining his model with Biba's may be the most effective.

Clark-Wilson Integrity Model

In 1987, David Clark and David Wilson developed an integrity model [198] radically different from previous models. This model uses transactions as the basic operation, which models many commercial systems more realistically than previous models.

One main concern of a commercial environment, as discussed above, is the integrity of the data in the system and of the actions performed on that data. The data is said to be in a consistent state (or consistent) if it satisfies given properties. For example, let D be the amount of money deposited so far today, W the amount of money withdrawn so far today, YB the amount of money in all accounts at the end of yesterday, and TB the amount of money in all accounts so far today. Then the consistency property is

  • D + YB − W = TB

Before and after each action, the consistency conditions must hold. A well-formed transaction is a series of operations that transition the system from one consistent state to another consistent state. For example, if a depositor transfers money from one account to another, the transaction is the transfer; two operations, the deduction from the first account and the addition to the second account, make up this transaction. Each operation may leave the data in an inconsistent state, but the well-formed transaction must preserve consistency.
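
A minimal sketch of a well-formed transaction, assuming a toy account store and taking conservation of the total balance as the consistency condition, might look as follows; the representation is illustrative.

    # A sketch of a well-formed transaction: a transfer between two accounts
    # that must leave the total balance unchanged. The representation of the
    # accounts and the consistency check are illustrative.

    accounts = {"alice": 500, "bob": 200}

    def consistent(accts, expected_total):
        # Integrity constraint: money is neither created nor destroyed.
        return sum(accts.values()) == expected_total

    def transfer(accts, src, dst, amount):
        """Move the system from one consistent state to another."""
        total_before = sum(accts.values())
        if amount <= 0 or accts[src] < amount:
            raise ValueError("transfer rejected")
        accts[src] -= amount    # between these two operations the data may be
        accts[dst] += amount    # seen as inconsistent; the whole transaction
                                # restores consistency
        assert consistent(accts, total_before)

    transfer(accounts, "alice", "bob", 100)
    print(accounts)             # {'alice': 400, 'bob': 300}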

The second feature of a commercial environment relevant to an integrity policy is the integrity of the transactions themselves. Who examines and certifies that the transactions are performed correctly? For example, when a company receives an invoice, the purchasing office requires several steps to pay for it. First, someone must have requested a service, and determined the account that would pay for the service. Next, someone must validate the invoice (was the service being billed for actually performed?). The account authorized to pay for the service must be debited, and the check must be written and signed. If one person performs all these steps, that person could easily pay phony invoices; however, if at least two different people perform these steps, both must conspire to defraud the company. Requiring more than one person to handle this process is an example of the principle of separation of duty.

Computer-based transactions are no different. Someone must certify that the transactions are implemented correctly. The principle of separation of duty requires that the certifier and the implementors be different people. In order for the transaction to corrupt the data (either by illicitly changing the data or by leaving the data in an inconsistent state), two different people must either make similar mistakes or collude to certify the well-formed transaction as correct.

The Model

The Clark-Wilson model defines data subject to its integrity controls as constrained data items, or CDIs. Data not subject to the integrity controls are called unconstrained data items, or UDIs. For example, in a bank, the balances of accounts are CDIs since their integrity is crucial to the operation of the bank, whereas the gifts selected by the account holders when their accounts were opened would be UDIs, because their integrity is not crucial to the operation of the bank. The set of CDIs and the set of UDIs partition the set of all data in the system being modeled.

A set of integrity constraints (similar in spirit to the consistency constraints discussed above) constrain the values of the CDIs. In the bank example, the consistency constraint presented earlier would also be an integrity constraint.

The model also defines two sets of procedures. Integrity verification procedures, or IVPs, test that the CDIs conform to the integrity constraints at the time the IVPs are run. In this case, the system is said to be in a valid state. Transformation procedures, or TPs, change the state of the data in the system from one valid state to another; TPs implement well-formed transactions.

Return to the example of bank accounts. The balances in the accounts are CDIs; checking that the accounts are balanced, as described above, is an IVP. Depositing money, withdrawing money, and transferring money between accounts are TPs. To ensure that the accounts are managed correctly, a bank examiner must certify that the bank is using proper procedures to check that the accounts are balanced, to deposit money, to withdraw money, and to transfer money. Furthermore, those procedures may apply only to deposit and checking accounts; they might not apply to other types of accounts—for example, to petty cash. The Clark-Wilson model captures these requirements in two certification rules:

Certification rule 1 (CR1): When any IVP is run, it must ensure that all CDIs are in a valid state.

Certification rule 2 (CR2): For some associated set of CDIs, a TP must transform those CDIs in a valid state into a (possibly different) valid state.

CR2 defines as certified a relation that associates a set of CDIs with a particular TP. Let C be the certified relation. Then, in the bank example,

  • (balance, account1), (balance, account2), …, (balance, accountn) ∊ C

CR2 implies that a TP may corrupt a CDI if it is not certified to work on that CDI. For example, the TP that invests money in the bank's stock portfolio would corrupt account balances even if the TP were certified to work on the portfolio, because the actions of the TP make no sense on the bank accounts. Hence, the system must prevent TPs from operating on CDIs for which they have not been certified. This leads to the following enforcement rule:

Enforcement rule 1 (ER1): The system must maintain the certified relations, and must ensure that only TPs certified to run on a CDI manipulate that CDI.

Specifically, ER1 says that if a TP f operates on a CDI o, then (f, o) ∊ C. However, in a bank, a janitor is not allowed to balance customer accounts. This restriction implies that the model must account for the person performing the TP, or user. The Clark-Wilson model uses an enforcement rule for this:

Enforcement rule 2 (ER2): The system must associate a user with each TP and set of CDIs. The TP may access those CDIs on behalf of the associated user. If the user is not associated with a particular TP and CDI, then the TP cannot access that CDI on behalf of that user.

This defines a set of triples (user, TP, { CDI set }) to capture the association of users, TPs, and CDIs. Call this relation allowed and denote it A. Of course, these relations must be certified:

Certification rule 3 (CR3): The allowed relations must meet the requirements imposed by the principle of separation of duty.

Because the model represents users, it must ensure that the identification of a user with the system's corresponding user identification code is correct. This suggests:

Enforcement rule 3 (ER3): The system must authenticate each user attempting to execute a TP.

An interesting observation is that the model does not require authentication when a user logs into the system, because the user may manipulate only UDIs. But if the user tries to manipulate a CDI, the user can do so only through a TP; this requires the user to be certified as allowed (per ER2), which requires authentication of the user (per ER3).

Most transaction-based systems log each transaction so that an auditor can review the transactions. The Clark-Wilson model considers the log simply as a CDI, and every TP appends to the log; no TP can overwrite the log. This leads to:

Certification rule 4 (CR4): All TPs must append enough information to reconstruct the operation to an append-only CDI.

When information enters a system, it need not be trusted or constrained. For example, when one deposits money into an Automated Teller Machine (ATM), one need not enter the correct amount. However, when the ATM is opened and the cash or checks counted, the bank personnel will detect the discrepancy and fix it before they enter the deposit amount into one's account. This is an example of a UDI (the stated deposit amount) being checked, fixed if necessary, and certified as correct before being transformed into a CDI (the deposit amount added to one's account). The Clark-Wilson model covers this situation with certification rule 5:

Certification rule 5 (CR5): Any TP that takes as input a UDI may perform only valid transformations, or no transformations, for all possible values of the UDI. The transformation either rejects the UDI or transforms it into a CDI.
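
The ATM example suggests the shape of a TP that CR5 describes: for every possible value of the UDI, it either rejects the input or transforms it into a CDI. The sketch below is hypothetical; the function and field names are not part of the model.

    # A sketch of a TP in the spirit of CR5: it accepts a stated deposit
    # amount (a UDI that may hold any value) and either rejects it or returns
    # a constrained data item. Names and fields are hypothetical.

    def certify_deposit(udi_amount_text):
        """Validate a stated deposit amount (UDI); return a CDI or reject."""
        try:
            amount = int(udi_amount_text)   # the UDI can be any string at all
        except ValueError:
            raise ValueError("rejected: not a number")
        if amount <= 0:
            raise ValueError("rejected: non-positive deposit")
        # Every possible input has now been handled: it was either rejected
        # above or validated, so the result may be treated as a CDI.
        return {"type": "deposit", "amount": amount}

    print(certify_deposit("250"))           # accepted; now a CDI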

The final rule enforces the separation of duty needed to maintain the integrity of the relations in rules ER2 and ER3. If a user could create a TP and associate some set of entities and herself with that TP (as in ER3), she could have the TP perform unauthorized acts that violated integrity constraints. The final enforcement rule prevents this:

Enforcement rule 4 (ER4): Only the certifier of a TP may change the list of entities associated with that TP. No certifier of a TP, or of an entity associated with that TP, may ever have execute permission with respect to that entity.

Note that CR5 requires that all possible values of the UDI be known, and that the TP be implemented so as to be able to handle them. This issue arises again in both vulnerabilities analysis and secure programming.

This model contributed two new ideas to integrity models. First, it captured the way most commercial firms work with data. The firms do not classify data using a multilevel scheme, and they enforce separation of duty. Second, the notion of certification is distinct from the notion of enforcement, and each has its own set of rules.

Assuming correct design and implementation, a system with a policy following the Clark-Wilson model will ensure that the enforcement rules are obeyed. But the certification rules require outside intervention, and the process of certification is typically complex and prone to error or to incompleteness (because the certifiers make assumptions about what can be trusted). This is a weakness in some sense, but it makes explicit assumptions that other models do not.
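
To make the enforcement rules concrete, the following sketch maintains the certified relation of ER1 and the allowed triples of ER2, and refuses to run a TP for an unauthenticated user (ER3). The data structures and names are illustrative assumptions, not Clark and Wilson's implementation.

    # A sketch of enforcement rules ER1-ER3. The certified relation, the
    # allowed triples, and the authentication check are modeled directly;
    # the data structures and names are illustrative.

    certified = {("transfer", "acct_1"), ("transfer", "acct_2")}              # ER1: C
    allowed = {("teller_kim", "transfer", frozenset({"acct_1", "acct_2"}))}   # ER2: A
    authenticated_users = {"teller_kim"}                                      # ER3

    def run_tp(user, tp, cdis):
        if user not in authenticated_users:
            raise PermissionError("ER3: user not authenticated")
        if not all((tp, cdi) in certified for cdi in cdis):
            raise PermissionError("ER1: TP not certified for these CDIs")
        if not any(u == user and t == tp and set(cdis) <= cdi_set
                   for (u, t, cdi_set) in allowed):
            raise PermissionError("ER2: user may not run this TP on these CDIs")
        print(f"{user} runs {tp} on {sorted(cdis)}")   # the TP body would go here

    run_tp("teller_kim", "transfer", {"acct_1", "acct_2"})   # permitted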

A UNIX Approximation to Clark-Wilson

Polk describes an implementation of Clark-Wilson under the UNIX operating system [809]. He first defines “phantom” users that correspond to locked accounts. No real user may assume the identity of a phantom user.

Now consider the triple (user, TP, { CDI set }). For each TP, define a phantom user to be the owner. Place that phantom user into the group that owns each of the CDIs in the CDI set. Place all real users authorized to execute the TP on the CDIs in the CDI set into the group owner of the TP. The TPs are setuid to the TP owner,[2] and are executable by the group owner. The CDIs are owned either by root or by a phantom user.

Polk points out three objections. First, two different users cannot use the same TP to access two different CDIs; this requires two separate copies of the TP, one for each user and associated CDI. Second, this approach greatly increases the number of setuid programs, which increases the threat of improperly granted privileges; proper design and assignment to groups minimize this problem. Finally, the superuser can assume the identity of any phantom user; without radically changing the nature of the root account, this problem cannot be overcome.

Comparison with the Requirements

We now consider whether the Clark-Wilson model meets the five requirements in Section 6.1. We assume that production programs correspond to TPs and that production data (and databases) are CDIs.

Requirement 1.

If users are not allowed to perform certifications of TPs, but instead only “trusted personnel” are, then CR5 and ER4 enforce this requirement. Because ordinary users cannot create certified TPs, they cannot write programs to access production databases. They must use existing TPs and CDIs—that is, production programs and production databases.

Requirement 2.

This requirement is largely procedural, because no set of technical controls can prevent a programmer from developing and testing programs on production systems. (The standard procedural control is to omit interpreters and compilers from production systems.) However, the notion of providing production data via a special process corresponds to using a TP to sanitize, or simply provide, production data to a test system.

Requirement 3.

Installing a program from a development system onto a production system requires a TP to do the installation and “trusted personnel” to do the certification.

Requirement 4.

CR4 provides the auditing (logging) of program installation. ER3 authenticates the “trusted personnel” doing the installation. CR5 and ER4 control the installation procedure (the new program being a UDI before certification and a CDI, as well as a TP in the context of other rules, after certification).

Requirement 5.

Finally, because the log is simply a CDI, management and auditors can have access to the system logs through appropriate TPs. Similarly, they also have access to the system state.

Thus, the Clark-Wilson model meets Lipner's requirements.

Comparison with Other Models

The contributions of the Clark-Wilson model are many. First, we compare it with the Biba model, and then with the Lipner model, to highlight these new features.

Recall that the Biba model attaches integrity levels to objects and subjects. In the broadest sense, so does the Clark-Wilson model, but unlike the Biba model, each object has two levels: constrained or high (the CDIs) and unconstrained or low (the UDIs). Similarly, subjects have two levels: certified (the TPs) and uncertified (all other procedures). Given this similarity, can the Clark-Wilson model be expressed fully using the Biba model?

The critical distinction between the two models lies in the certification rules. The Biba model has none; it asserts that “trusted” subjects exist to ensure that the actions of a system obey the rules of the model. No mechanism or procedure is provided to verify the trusted entities or their actions. But the Clark-Wilson model provides explicit requirements that entities and actions must meet; in other words, the method of upgrading an entity is itself a TP that a security officer has certified. This underlies the assumptions being made and allows for the upgrading of entities within the constructs of the model (see ER4 and CR5). As with the Bell-LaPadula Model, if the Biba model does not have tranquility, trusted entities must change the objects' integrity levels, and the method of upgrading need not be certified.

Handling changes in integrity levels is critical in systems that receive input from uncontrolled sources. For example, the Biba model requires that a trusted entity, such as a security officer, pass on every input sent to a process running at an integrity level higher than that of the input. This is not practical. However, the Clark-Wilson model requires that a trusted entity (again, perhaps a security officer) certify the method of upgrading data to a higher integrity level. Thus, the trusted entity would not certify each data item being upgraded; it would only need to certify the method for upgrading data, and the data items could be upgraded. This is quite practical.

Can the Clark-Wilson model emulate the Biba model? The relations described in ER2 capture the ability of subjects to act on objects. By choosing TPs appropriately, the emulation succeeds (although the certification rules constrain trusted subjects in the emulation, whereas the Biba model imposes no such constraints). The details of the construction are left as an exercise for the reader (see Exercise 11).

Summary

Integrity models are gaining in variety and popularity. The problems they address arise from industries in which environments vary wildly. They take into account concepts (such as separation of privilege) from beyond the scope of confidentiality security policies. This area will continue to increase in importance as more and more commercial firms develop models or policies to help them protect their data.

Research Issues

Central to the maintenance of integrity is an understanding of how trust affects integrity. A logic for analyzing trust in a model or in a system would help analysts understand the role of trust. The problem of constructing such a logic that captures realistic environments is an open question.

The development of realistic integrity models is also an open research question, as are the analysis of a system to derive models and the generation of mechanisms to enforce them. Although these issues arise in all modeling, integrity models are particularly susceptible to failures to capture the underlying processes and entities on which systems are built.

Models for analyzing software and systems to determine whether they conform to desired integrity properties form another critical area, and much of the research on “secure programming” is relevant here. In particular, has the integrity of a piece of software, or of data on which that software relies, been compromised? In the most general form, this question is undecidable; in particular cases, with software that exhibits specific properties, this question is decidable.

Further Reading

Nash and Poland discuss realistic situations in which mechanisms are unable to enforce the principle of separation of duty [742]. Other studies of this principle include its use in role-based access control [600, 928], databases [782], and multilevel security [363]. Notargiacomo, Blaustein, and McCollum [781] present a generalization of Clark-Wilson suitable for trusted database management systems that includes dynamic separation of duty.

Integrity requirements arise in many contexts. Saltman [863] provides an informative survey of the requirements for secure electronic voting. Chaum's classic paper on electronic payment [185] raises issues of confidentiality and shows that integrity and anonymity can coexist. Integrity in databases is crucial to their correctness [46, 334, 418]. The analysis of trust in software is also an issue of integrity [23, 730].

Chalmers compares commercial policies with governmental ones [177]. Lee [619] discusses an alternative to Lipner's use of mandatory access controls for implementing commercial policies.

Exercises

1:

Prove Theorem 6–1 for the strict integrity policy of Biba's model.

2:

Give an example that demonstrates that the integrity level of subjects decreases in Biba's low-water-mark policy. Under what conditions will the integrity level remain unchanged?

3:

Suppose a system used the same labels for integrity levels and categories as for subject levels and categories. Under what conditions could one subject read an object? Write to an object?

4:

In Pozzo and Gray's modification of LOCUS, what would be the effect of omitting the run-untrusted command? Do you think this enhances or degrades security?

5:

Explain why the system controllers in Lipner's model need a clearance of (SL, { D, PC, PD, SD, T }).

6:

Construct an access control matrix for the subjects and objects of Lipner's commercial model. The matrix will have entries for r (read) and w (write) rights. Show that this matrix is consistent with the requirements listed in Section 6.1.

7:

Show how separation of duty is incorporated into Lipner's model.

8:

In the Clark-Wilson model, must the TPs be executed serially, or can they be executed in parallel? If the former, why; if the latter, what constraints must be placed on their execution?

9:

Prove that applying a sequence of transformation procedures to a system in a valid state results in the system being in a (possibly different) valid state.

10:

The relations certified (see ER1) and allowed (see ER2) can be collapsed into a single relation. Please do so and state the new relation. Why doesn't the Clark-Wilson model do this?

11:

Show that the enforcement rules of the Clark-Wilson model can emulate the Biba model.

12:

One version of Polk's implementation of Clark-Wilson on UNIX systems requires transaction procedures to distinguish users in order to determine which CDIs the user may manipulate. This exercise asks you to explore the implementation issues in some detail.

  1. Polk suggests using multiple copies of a single TP. Show, with examples, exactly how to set this up.

  2. Polk suggests that wrappers (programs that perform checks and then invoke the appropriate TPs) could be used. Discuss, with examples, exactly how to set this up. In particular, what checks would the wrapper need to perform?

  3. An alternative implementation would be to combine the TPs and wrappers into a single program. This new program would be a version of the TP that would perform the checks and then transform the CDIs. How difficult would such a combination be to implement? What would be its advantages and disadvantages compared with multiple copies of a single TP? Compared with the use of wrappers?

13:

The text states that whether or not the integrity of a generic piece of software, or of generic data on which that generic software relies, has been compromised is undecidable. Prove that this is indeed the case.



[1] The original model did not include categories and compartments. The changes required to add them are straightforward.

[2] That is, the TPs execute with the rights of the TP owner, and not of the user executing the TP.
