Chapter 27. System Security

 

IMOGEN: To your protection I commend me, gods.
From fairies and the tempters of the night
Guard me, beseech ye.

 
 --Cymbeline, II, ii, 8–10.

System configuration and administration relies on many principles of security and assurance. This chapter begins with a policy for the DMZ Web server system and for a development system in the internal network. It explores the configuration and maintenance of several system components in light of the policy and in light of principles of computer security. This illuminates how the practice of computer security is guided by the fundamental principles discussed throughout this book.

Introduction

Among the many functions of system administration is the security of the system and the data it contains. This chapter considers how the administration of security affects the system.

For our purposes, we consider the security policy of the Web server within the DMZ and a user system in the development subnet. This will contrast the manner in which an administrator secures a system that many users use for development of software with the methods used to secure a system that is likely to be attacked and that is not intended for the use of nonadministrative users.

Section 26.3.3.2 discusses the Web server's function in relation to the rest of the Drib's network infrastructure. Briefly, the Web server system provides access to untrusted users through a Web server, and access to trusted users through SSH. Untrusted users can come from any system on the Internet. Trusted users are those users who have access to the trusted administrative host on the internal network. For the purposes of our policy, we assume that any user in that system has been correctly authenticated to that system and is “trusted” as we use the term.

The development system is a standard UNIX or UNIX-like system. A set of developers is allowed to use the system.

Policy

Policy is at the heart of every decision involving security. The DMZ Web server has a policy very different from that of the development system. This section discusses portions of the policies in order to provide a foundation for the remainder of this chapter. We then compare and contrast the policy elements.

The Web Server System in the DMZ

Section 26.3.3.2, “DMZ WWW Server,” discusses the basic security policy of the Web server. Some of the consequences of the policy are as follows.

  1. All incoming Web connections come through the outer firewall, and all replies are sent through the outer firewall.

  2. All users log in from an internal trusted server running SSH. Web pages are never updated locally. New Web pages are downloaded through the SSH tunnel.

  3. Log messages are transmitted to the DMZ log server only.

  4. The Web server may query the DMZ DNS system for IP addresses.

  5. Other than those expressly mentioned here, no network services are provided.

  6. The Web server runs CGI scripts. One of these scripts will write enciphered information (transaction data) to a spooling area. The enciphered file will be retrieved from the trusted internal administrative host using the SSH tunnel.

  7. The Web server must implement its services correctly, and must restrict access to those services as much as possible.

  8. The public key of the principal who will decipher and process the transaction data must reside on the DMZ Web server.

From these implications, several constraints emerge. The Web server consequences (WCs) of interest are as follows.

WC1.

Policy consequence 1 requires that no unrequested network connections except those from the outer firewall over the HTTP and HTTPS ports, and those from the internal trusted administrative server over SSH, should be accepted. Replies to DNS queries should be accepted provided that they come from the DMZ DNS server. If other network clients are to be run, only replies to messages originating from the DMZ Web server should be accepted.
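The restrictions in WC1 can be sketched as a default-deny packet filter on the Web server system itself. This is illustrative only: the addresses are hypothetical, and the iptables-style rule syntax is just one of several mechanisms that could realize the policy.

```shell
# Hypothetical addresses: outer firewall 192.0.2.1, trusted administrative
# host 10.0.0.5, DMZ DNS server 192.0.2.53.  Default policy: drop.
iptables -P INPUT DROP
# Web requests forwarded by the outer firewall's proxy only:
iptables -A INPUT -p tcp -s 192.0.2.1 --dport 80  -j ACCEPT
iptables -A INPUT -p tcp -s 192.0.2.1 --dport 443 -j ACCEPT
# SSH from the trusted administrative server only:
iptables -A INPUT -p tcp -s 10.0.0.5 --dport 22 -j ACCEPT
# Replies from the DMZ DNS server to queries this host originated:
iptables -A INPUT -p udp -s 192.0.2.53 --sport 53 -j ACCEPT
# Replies to any other connection the Web server itself initiated:
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
```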

WC2.

Policy consequence 2 states that user access to the system is to be limited to those users on the internal trusted administrative server. Furthermore, the number of users who need access to the Web server should be as small as possible, with only those privileges needed to perform their tasks. All actions must be attributable to an individual, as opposed to a role, user.

WC3.

Policy consequences 4 and 5 suggest that the Web server should be configured to provide minimal access to the system. This prevents an attacker who compromises the Web server from accessing other parts of the system. This requirement leads to one unexpected, interesting consideration. If an attacker gains access to the system through the Web server, she can delete all uncollected transaction files. This denial of service attack would blemish the Drib's reputation. Some other mechanism should capture the transaction files and copy them to an area that the Web server cannot reach. Then, if an attacker compromises the Web server, that attacker cannot reach the transaction files.

WC4.

Policy consequences 5, 6, and 8 imply that all software must have a very high assurance of functioning correctly (as specified by its documentation). In practice, this means that the software must be either developed or checked very carefully. It also requires that extensive logging occur, to verify that the software is functioning correctly even when under attack. In essence, we view attacks as situations in which software functions correctly (and the attack fails) or incorrectly (and the attack succeeds).

WC5.

Policy consequence 7 states that the Web server must contain as few programs, and as little software, configuration information, and other data, as possible. If the system is compromised, this will minimize the effects of the attack.

The Development System

The development system lies in the internal network, on the development subnet (called the “devnet”). It must provide an environment in which developers can produce code for dribbles. Because users will be active on the system, its policy is considerably different from that of the Web server system.

The devnet has both infrastructure and user systems. The infrastructure systems are the devnet firewall (which separates it from other internal subnets), a DNS server, a logging host (which provides a central repository for logs), one or more file servers, and one or more systems containing user information common to the workstations (the UINFO servers). There is also an isolated system used to build a “base system configuration” (system files, configuration files, company-approved software, and so on) and to burn CD-ROMs. The policy that follows does not apply to these systems. They are under much tighter controls. The components of the security policy relevant to our discussion are as follows.

  1. Only authorized users are allowed to use the devnet systems. They may work on any devnet workstation. All actions and system accesses must be tied to an individual user, rather than to a role account.

  2. Workstation system administrators must be able to access the workstations at all times, unless the particular workstation has crashed. The set of devnet workstation administrators differs from the set of devnet central server administrators.

  3. Within the devnet itself, users are trusted not to attack devnet systems. Users not on the devnet are not trusted. They are not allowed to access devnet resources except as permitted by the network security policy (for internal Drib users). Furthermore, devnet users are not allowed to access systems not on the devnet except as permitted by the network policy.

  4. All network communications, except electronic mail, are to be confidential and are to be checked to ensure that the messages are not altered in transit.

  5. The base standard configuration for each devnet system cannot be changed on that system. There is to be a local area in each system in which developers may install programs that are nonstandard. Before doing this, they must obtain approval from the security officers and system administrators. Should the software prove useful, it may be integrated into the standard configuration.

  6. Backups shall enable system administrators to restore any devnet system with the loss of at most one day's changes in user and local files.

  7. Security officers shall perform both periodic and ongoing audits of devnet systems. Compromised systems shall be removed from the devnet until they have been restored to an uncompromised state.

These components have several consequences, two of which affect the infrastructure and configuration of workstations. Policy component 3 leads to the use of a firewall at the boundary of the devnet and the other subnets to enforce the network security policy. This allows the network security administrators to enforce changes in the network policy without having to alter each system on the devnet. Any changes need only be made at the firewall. Also, the systems on the devnet need not be configured as tightly as the firewalls. The firewalls enforce the policy that hosts outside the devnet see; the hosts inside the devnet enforce the policy specific to the developers and their hosts (the policy outlined above).

Policy component 3 also bars direct access between the Internet and devnet systems. This decision was based on a risk analysis. The security officers and management of the Drib realized that the Drib would benefit from allowing telecommuting and access to remote Web sites. However, the dangers of opening up an avenue of attack from Internet hosts to internal hosts, and allowing unsuspecting Drib employees to download untrusted, and possibly malicious, code, outweighed the perceived benefits. This portion of the policy is under review, and the Drib is considering changes to allow telecommuting (see Exercise 8 in Chapter 26).

Some developers need access to the Internet to determine what equipment to obtain as they plan new mechanisms and devices to enhance the value of the Drib's products. These developers are given separate workstations connected to a commercial Internet Service Provider (ISP) outside the Drib's perimeter. These “ISP workstations” are physically separated from the internal network, and an ISP workstation cannot easily be connected to a devnet workstation. These physical and procedural mechanisms enforce the desired separation.

Other consequences of the policy apply to the devnet workstations. The development system consequences (DCs) of interest are as follows.

DC1.

Policy components 1 and 4 imply the need for authenticated, enciphered, integrity-checked communications. These policy components also imply a consistent naming scheme across systems, so that a user name refers to the same user on all devnet systems.

DC2.

Policy component 2 requires that each workstation have one or more local privileged accounts to administer the system locally. Policy components 1 and 2 imply that multiple local administrative accounts may be used to limit access to particular administrative functions. This division of power into roles allows the administrators to designate special system accounts, such as mail, as being limited in their power. Policy requirement 2 also requires that the workstation be able to run without any network connections.

DC3.

Policy component 1 also requires that there be a notion of a “login” or “audit” user (see Section 27.4). This identity must be recorded in logs, to tie individuals to actions. Furthermore, users should not be able to log directly into role accounts such as root, because this would eliminate the ability to tie an individual to an action. Instead, they must log into an individual account and then change to the role account, or add the role to their individual account.

DC4.

If a developer wants to install a program from the outside onto his devnet workstation, he must first obtain approval from the security officers. Once approved, he installs it in an area separate from the base system configuration (see policy component 5). Adding a program to the base system configuration requires that it be added to the isolated system first. This requires testing and analysis of the program to ensure (to an appropriate level of integrity) that the software is not malicious and will not accidentally damage the system on which it runs.

DC5.

Policy component 5 requires that each workstation protect the base system configuration, as installed, from being altered. One approach is to mount the disks containing that configuration as read-only disks. A far simpler and more effective approach is to use read-only media. This meets policy requirements and ensures that all devnet workstations are up to date. A writable hard drive provides space for local files such as spool and temporary files.

DC6.

Policy component 1 requires that an employee's files be available to her continuously. This requires that the files be stored on systems other than the workstations, in case a workstation goes down. As a corollary, the file controls should enforce the same sets of permissions regardless of the workstations from which they are accessed.

DC7.

Policy component 6 requires regular backups. As explained in Section 27.7.2, the development workstations store only transient files on writable media. Hence, they need not be backed up. Restoration involves rebooting and remounting of file systems from the file servers, which are regularly backed up.

DC8.

Policy component 7 requires several security precautions. The primary one is a logging system to which all systems send log messages. Furthermore, security officers need access to both devnet systems and the devnet network. They conduct periodic (and irregular) sweeps of the network, looking for unauthorized servers. They also conduct periodic (and irregular) sweeps of each system looking for dangerous settings in user accounts and the local areas.

Two points about this policy, and its implications, are apparent. First, the system security policy relies on the outer and inner firewalls to prevent Internet users from reaching the system. If one firewall fails, the other will still block such accesses.[1] Also, the firewall at the perimeter of the developer's subnet enforces the access restrictions among the users of the other two subnets and the systems on the developer's subnet.[2] So the system policy assumes that those who can connect to the system are authorized to access developer systems.

The security policy also requires procedural enforcement mechanisms. For example, nothing technical prevents a developer from connecting an ISP workstation to a devnet workstation; here, the Drib must rely on procedural mechanisms to enforce the policy. In this case, the procedures should specify both the prohibition and the consequences of violating it. This puts all employees on notice that the prohibition will be enforced, and encourages them to use the allowed methods to obtain approval.

Comparison

The differences between the policies of the DMZ WWW system and the devnet developer system arise from their different roles. The DMZ WWW server is not a general-use system. It exists only to serve Web pages and accept Web orders. The devnet developer system is a general-use computer. It must allow compilation, editing, and other functions that programmers and software engineers need to design, implement, and test software.

The DMZ Web server system's security policy focuses on the single purpose of the server: to run the Web server. Two sets of users can access the server: the system administrators, who maintain the security and the Web pages; and the users from the Internet, who must go through the outer firewall and can access only the Web server. The developer system's security policy focuses on more complex purposes. These purposes include software creation, testing, and maintenance. The developer system requires more supporting software than does the DMZ Web server system. The user population is different and provides an environment more amenable to attackers than does the DMZ Web server system, because its users may not be as security-conscious as the security officers who make up the user population of the DMZ Web server system.

That the system administrators of the DMZ Web server system are trained in security (hence, the term “security officers”) should be expected. The developer systems are more numerous and require more administrative effort to maintain, so more system administrators are needed. These administrators will have different skills and abilities; some may be very senior and experienced, whereas others will be junior and inexperienced. Hence, the system administrators for the developer systems may not be trained in security, and the security officers may not be system administrators. This leads to situations in which system administrators and security officers disagree on what actions are appropriate. The policy must have some mechanism for resolving these disputes. The mechanism typically involves a person, or a group of people, performing a cost-benefit analysis of each option and selecting the option that provides the greatest benefit at the least cost. This type of analysis was briefly discussed in Section 1.6.1.

Conclusion

We now examine several areas of system administration in light of these security requirements. Our goal is to install and manage as secure a system as possible. Our approach is to compare and contrast these two systems. What follows is organized into areas, and each system is examined with respect to the mechanisms used to enforce the policy. We then compare the two systems.

Networks

Both the DMZ Web server system and the devnet user system are connected to the network. Although the firewalls provide some measure of protection, the principle of separation of privilege says that access should be limited even when the firewalls fail.[3] So we consider how the administrators should set network configurations and services to protect the systems in the case that the firewalls fail.

The Web Server System in the DMZ

Item WC1 limits network access to the Web server.[4] External users can reach the system only by using Web services and connecting through the outer firewall. Internal users can reach the system by using SSH from the trusted administrative system, through the inner firewall. A security mechanism must block any other types of connections, or any connections from sources other than the outer firewall or the trusted administrative server.[5] Moreover, item WC4 requires that all attempts to connect be monitored[6] to validate that the security mechanism functions according to this policy (or to detect failures).[7]

Consider the Web server first. Although requests can come from any IP address on the Internet, all such requests go to the outer firewall's Web proxy. That firewall forwards well-formed requests to the DMZ Web server. Hence, the Web server's access control mechanism can discard any requests from sites other than the outer firewall. Whether to accept requests from the inner firewall depends on several policy factors. The current policy for the Drib is not to allow the Web server to accept these requests.[8] However, the policymakers have realized that some situations may require internal users to access the Web server directly (these situations typically will involve debugging or checking for errors). Should this be necessary, the security officers will reconfigure the inner firewall to run a Web proxy identical to the one on the outer firewall. Thus, the DMZ Web server is configured to accept requests from the inner firewall as well as the outer firewall. The server will not accept requests from other DMZ systems, because they are not to be used for accessing the Web server.

Item WC1 requires the DMZ Web server to allow administrative access from the trusted administrative server. This allows system administrators to update Web pages, reconfigure and modify software, and perform other administrative tasks. The Web server runs an SSH server. This server provides enciphered, authenticated access to the Web server system using cryptographic mechanisms to provide those security services. Of interest here is that the server requires both the host and the user to be authenticated.[9] This allows the system administrators to restrict access to users connecting from the trusted administrative server only.

Section 27.4.1 discusses users, and authentication of both hosts and users, on the Web server system.

To maximize availability, the Web server system wraps each server with a small script. If the server terminates, the script starts a new instance of the server.
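The availability wrapper can be sketched as a small shell loop. In this sketch the server command is a placeholder argument and the restart count is bounded only so the example terminates; the actual wrapper would loop forever around the real server binary.

```shell
#!/bin/sh
# Sketch: run a server in the foreground and, if it terminates,
# start a new instance.  The bound MAX_TRIES is for illustration only.
run_with_restart() {
    cmd="$1"; max="$2"; restarts=0
    while [ "$restarts" -lt "$max" ]
    do
        "$cmd"                             # run the server in the foreground
        status=$?
        restarts=$((restarts + 1))         # server exited: start a new instance
        echo "server exited (status $status); restart #$restarts" >&2
    done
}

# Demonstration with a command that exits immediately:
run_with_restart /bin/false 3
```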

By virtue of item WC3, the Web server system should run a minimum of network servers. Because access is to be given only to Web requests and administrative logins, no network servers other than the Web server and the SSH server are needed.[10]

The Web server runs several network clients, however. Because the Web server system must request IP addresses and host names, it must make requests of, and receive replies from, a DMZ DNS server. At any time, multiple requests may be outstanding. By virtue of item WC1, this satisfies the policy. However, several types of attacks on DNS clients [892] involve “piggybacking” of multiple host name and address associations onto a reply to a request for a single such association.[11] The Web server system's DNS client will use only the requested data. It will discard any additional data, and log that such data has been received.[12] Furthermore, if the client receives a response that provides information that was not requested, or if two responses provide different answers to the same query, both are logged and discarded, and the client acts as though the DNS request has timed out.
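The client's rule of keeping only the requested association can be sketched as a filter. The two-column “name address” format is a simplification of real DNS responses, and the host names are hypothetical.

```shell
#!/bin/sh
# Sketch: keep only answer records whose name matches the actual query;
# discard (and note on stderr) anything piggybacked onto the reply.
filter_answers() {
    # $1: the name that was actually queried
    awk -v q="$1" '$1 == q { print }
                   $1 != q { print "discarded: " $0 > "/dev/stderr" }'
}

# A reply carrying one legitimate record and one piggybacked record:
kept=$(printf '%s\n' \
    "www.example.com 192.0.2.10" \
    "intruder.example.net 203.0.113.5" \
    | filter_answers www.example.com 2>/dev/null)
```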

The Web server system also runs a logging client to send log messages to the log server. Programs use an internal message delivery system to send messages to the logging client, which then delivers them to the appropriate hosts and files. The delivery addresses lie in a configuration file. Each log message is timestamped and has the name of the process and (Web server) system attached.
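The delivery-address configuration file might look like the following syslog.conf-style sketch; the log server's name is hypothetical, and the file is written to a temporary path here purely for illustration.

```shell
#!/bin/sh
# Sketch: the logging client's delivery addresses lie in a configuration
# file, in the style of syslog.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
# Forward all log messages to the DMZ log server; keep nothing locally.
*.info          @loghost.dmz.example.com
authpriv.*      @loghost.dmz.example.com
EOF
lines=$(grep -c '@loghost' "$conf")   # how many forwarding rules were written
```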

The system is configured to log any attempts to connect to network ports on which no servers are listening. The three reasons for doing this follow from item WC4. First, it serves as a check that the outer firewall is intercepting all probes from the Internet to the Drib's Web server. Second, it detects probes from the internal network to the DMZ Web server. Because the inner firewall has one port that is filtered rather than proxied (the SSH port), such probing is possible if the filter does not check the destination port number. This should never happen, of course, unless the inner firewall is misconfigured or compromised. Thus, in order for an attack on the firewall to be undetectable, two failures must occur (the firewall fails to block, and the DMZ Web server fails to log).[13] Third, probes to other ports from within the DMZ indicate unauthorized activities on the DMZ systems, meaning that one of them has been compromised. This requires immediate investigation.

The Development System

Item DC1 requires that the development system accept user connections only when they are authenticated and encrypted. Like the DMZ Web server, the development systems run SSH servers to provide such access. Both hosts and users use public key authentication.[14]

Unlike the DMZ Web server system, the development system runs several other servers. It runs a line printer spooler to send print requests to a print server. It runs a logging server to accept log messages and dispose of them properly. It also runs servers to support access to both the file server and the user information database system. These servers are necessary in order for the developers to be productive on that system.

The development system does not have the ftp or Web services. Instead, special ftp and Web server systems mount directories from the central file servers. The workstations run an SMTP server as a convenience to users,[15] but all mail is forwarded to a central mail server and is never delivered locally. (This allows workstation SMTP servers to be very simple programs.[16]) Users can access mail on any workstation, because the mail spooling directory resides on the central file server. Similarly, users can make files available for ftp and Web access by placing them into user-specific directories on the central file server. The corresponding servers mount these directories for remote access. They cannot access other parts of the file systems on the file servers.
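The “forward everything, deliver nothing locally” arrangement for workstation mail can be sketched with Postfix-style directives. The directives below are illustrative; the central mail server's name is hypothetical, and the Drib's actual mail software may differ.

```shell
# Workstation mail configuration sketch (Postfix-style; names hypothetical):
#
#   relayhost = [mail.devnet.example.com]    # forward all mail to the central server
#   mydestination =                          # empty: this host accepts no local domains
#   local_transport = error:local delivery disabled
```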

Placing the mail, ftp, and Web services on systems other than the development workstations has two advantages that satisfy item DC2. First, it minimizes the set of network servers that each workstation has to run. Second, it minimizes the number of systems that provide the services.[17] This enables the firewall to be configured to allow traffic for these services through to a small set of systems, and the security administrators can configure those systems to handle access control appropriately.

The development system uses access control wrappers to support access controls. The firewall provides this control for systems not on the devnet, but the workstation's access control wrappers provide this control for other devnet workstations, as well as duplicating the firewall's control rules. If the firewall's access controls fail (for example, as a result of a configuration error), the workstation will still honor the network security policy.[18] Furthermore, the development system logs all attempts to access servers. These logs provide both evidence of intrusions and verification of the correct functioning of the security mechanisms, as required by item DC8.
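One common realization of such access control wrappers is the TCP wrappers hosts.allow/hosts.deny pair; the devnet address block below is hypothetical.

```shell
# /etc/hosts.deny -- default deny for every wrapped service:
#   ALL : ALL
#
# /etc/hosts.allow -- devnet hosts only (address block hypothetical),
# duplicating the firewall's rules on the workstation itself:
#   sshd   : 10.10.0.0/255.255.0.0
#   in.lpd : 10.10.0.0/255.255.0.0
```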

Item DC8 requires checking of the security of the development workstations. To ensure that they remain at the desired level of security, the system security officers occasionally scan each system. Their scanner probes each port and records those that are open. The results are compared with the list of ports that are expected to be open. Any discrepancies are reported to the security officers. Moreover, the scanners record the address of each system on the network. Any unauthorized system is reported immediately, as are any unexpected changes in addresses. The security officers make these scans periodically. To prevent an attacker from determining the schedule, the security officers launch additional scans at irregular intervals as well.[19]
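The comparison step can be sketched with standard tools. The port lists here are illustrative; the observed list would come from the scanner's probe of a workstation.

```shell
#!/bin/sh
# Sketch: compare the ports a scan found open against the expected list
# and report discrepancies in either direction.
exp=$(mktemp); obs=$(mktemp)
printf '%s\n' 22 514 2049 | sort > "$exp"   # ports that should be open
printf '%s\n' 22 514 6000 | sort > "$obs"   # ports the scanner observed
unexpected=$(comm -13 "$exp" "$obs")   # open but not expected: investigate
missing=$(comm -23 "$exp" "$obs")      # expected but closed: service down?
rm -f "$exp" "$obs"
```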

Finally, the security officers occasionally attack devnet systems to determine how well they withstand attacks.[20] These operations are sustained and take some time, but the information gleaned from them has proven invaluable. When flaws are discovered, the security officers determine whether they are attributable to the initial configuration or to user changes in the system. In the former case, the security officers develop a patch or modification of the standard configuration. In the latter case, they assess the situation in more detail, and act on the basis of that analysis.

Comparison

The difference between approaches to network services and accesses springs from the use of, and the locations of, the systems.

The DMZ Web server system is dedicated to two specific tasks—serving Web pages and accepting commercial transactions. Only those functions and processes required to support these specific tasks are allowed. Any other programs, such as those required for general use, are simply not present in the system. It need not provide access to a line printer, or handle remote file systems from central servers. Everything is present in the system itself. No extraneous services are provided or used.[21]

The development system performs many tasks, all designed to achieve the goal of providing an environment in which the developers can be productive.[22] It has general-purpose tools ranging from compilers and text editors to electronic mail reading programs. It shares user files with other workstations using a central file server, and user information with a central user information system. Users can run processes freely.

The environment plays a role in configuration. Both systems use a “defense in depth” strategy of providing access controls that duplicate some of the firewall controls.[23] The DMZ Web server system does not depend on the firewall to filter or block Web client requests. Even if the inner firewall allowed messages to flow through it with no control, the DMZ Web server system would function as required by policy.

However, access to the development systems depends on the devnet firewall's filtering abilities. If a user from another internal subnet tries to access a development system, the devnet firewall will determine whether or not access to the devnet is allowed. If it is, then the developer system determines whether or not to accept the connection. This allows the Drib network administrators to control access among the three subnets and the DMZ independently of the system administrators within the subnets (who do not control the firewalls). It also allows the developer workstations to support developers on other subnets—if the Drib policy allows it.

Users

Our first step is to determine the accounts needed to run the systems. The user accounts, as distinguished from the system administration accounts, require enough privileges for their holders to perform their jobs, but as few other privileges as possible.[24] Creating, configuring, and maintaining these accounts are crucial to the successful use of the computer. For brevity, we refer to a user account as a “user” and a system administration account as a “sysadmin” in this section.

The Web Server System in the DMZ

Items WC2 and WC3 suggest that the number of user accounts on the system be minimal. The Web server requires at most two users and a sysadmin. The first user is a user with enough privileges to read (and serve) Web pages and to write to the Web server transaction area. The second user is a user who can move files from the Web transaction area to the commerce transaction spooling area. The reason the Web server has minimal privileges lies in the assumption that the Web server, which interacts with other systems on the Internet, may be compromised. A compromised Web server running with sysadmin privileges could allow the attacker to control the system, but if the Web server had only enough privileges to read Web pages, then compromising it would be less likely to compromise the system. The commerce server and the Web server should be different users in order to prevent an attacker from compromising the Web server and then deleting files from the commerce server's area. Access control mechanisms[25] can inhibit this, but defense should not depend on one control only.[26] If the Web server and commerce server are different users, and the Web server is compromised, the attacker must then compromise either the sysadmin or the commerce server user.

Some systems (such as many UNIX systems) use a simplified mechanism that does not allow individual users to be placed in an access control list.[27] However, group mechanisms achieve the same end.
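The separation described above can be sketched with the group mechanism and directory permissions. The user names (“www”, “trans”) and paths are hypothetical, and the chown lines are shown commented out because they require root; the sketch only sets the modes on temporary directories.

```shell
#!/bin/sh
# Sketch: separate spool areas so a compromised Web server (user "www")
# cannot reach collected transactions (user "trans").
base=$(mktemp -d)
mkdir "$base/webtrans" "$base/commerce"
chmod 0770 "$base/webtrans"   # owner www, group trans: both may enter
chmod 0700 "$base/commerce"   # owner trans only: www is locked out
# chown www:trans   "$base/webtrans"    # requires root
# chown trans:trans "$base/commerce"    # requires root
perm=$(ls -ld "$base/commerce" | cut -c1-10)
```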

There is a tension between the desire to minimize the number of accounts (item WC2) and the desire to minimize the privileges of these accounts (item WC3). Most computer systems allow the assignment of privilege to accounts independently of name. This means that there can be multiple sysadmin accounts. Each person designated as a system administrator could have a separate sysadmin account or could use a single role account.[29] The reason for having separate sysadmin accounts is to tie each action to a particular user. Whether or not this can be done depends to some extent on the implementation of the Web server system.

Some UNIX systems support an audit, or a login, UID.[30] This UID is assigned at login and is not changed throughout the lifetime of the process. Furthermore, all children of the process inherit that audit UID. Assigning each system administrator a unique user account (each with a unique UID) associates that UID with every action that account takes. This includes acquiring administrator privileges.

Because item WC4 requires strict user accountability, the Web server system is set up to disallow direct logins from system administrators. Each user must log into the system from the trusted administrative server. As stated in Section 27.3.1, this requires the use of SSH, so the user must be an authorized user of the Web server system.[31] The set of allowed users is enumerated in the SSH configuration file in the Web server system. Once logged in, the user may switch to a role account. To do so, the user supplies a password. The program checks that the user has self-authenticated correctly, and then that the user is authorized to access the requested role account. If so, the user is switched into this role.
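The restrictions described above map onto a handful of SSH server configuration directives. The user and host names below are placeholders, not the Drib's actual values; this is a sketch of the relevant sshd_config entries, not a complete configuration:

```
# Only enumerated users, and only from the trusted administrative host
AllowUsers alice@admin-host bob@admin-host
# Role accounts are reached by switching after login, never directly
PermitRootLogin no
# Identity rests on cryptographic user keys, not host names or addresses
HostbasedAuthentication no
```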

Direct login to a sysadmin account is allowed in one situation only. The Web server system allows logins to role accounts (such as root) from the system console. Although the system cannot identify the individual logging into the role, the console itself is in a locked room to which only a few highly trusted individuals have access. At least three people are in that room at all times, including one security officer. The officer can identify by sight the set of people authorized to enter the room.[32] So, if someone walks up to the console and logs into a role account, the security officer will log that individual's use of the console.[33] Thus, should the SSH server become unexpectedly unavailable, a system administrator could fix it.

The Development System

Unlike the DMZ WWW server system, the development system requires at least one user account per developer (items DC1, DC3, and DC6). It also requires administrative accounts, as well as groups corresponding to projects (items DC2 and DC3). Furthermore, an account on different development systems must refer to the same individual, role, or project (item DC1). Otherwise, inconsistent use of identifiers may allow access rights that exceed the level authorized by the security policy.

To meet the requirement for consistency of naming, the Drib developers have decided to use a central repository to define users and accounts, the UINFO system. They use the NIS protocol [969] to allow distribution of user information. All systems on the developer subnet, except the firewall, use the NIS server to obtain information about users and accounts. Any new account must be instantiated on the databases of this server. No user accounts are created on the developer workstations themselves, and all system accounts have entries in the server databases.
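On each workstation, consulting the central repository before (or instead of) local files is a matter of name-service configuration. The domain and server names here are assumptions for illustration:

```
# /etc/nsswitch.conf: local files first, then the NIS maps
passwd:  files nis
group:   files nis
hosts:   files dns

# /etc/yp.conf: bind only to the designated UINFO server
domain devnet server uinfo
```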

The developers benefit from this arrangement. Because their files are kept on NFS file servers, a developer can access them at any devnet workstation, as required by item DC6. If one workstation cannot function, the developer can walk to another workstation and continue development. The system and network administrators can then repair the malfunctioning workstation with minimal loss of developer time.

To satisfy item DC2, each developer workstation has a local root account and one local account for each system administrator.[35] This account gives administrators access should the workstation be unable to contact the NIS server. Because there are both primary and secondary NIS servers, and backups for each, the only reason that this situation might arise would be either a network problem or a workstation problem. Using the local root account, the administrator could access the workstation, diagnose the problem, and (if possible) correct the problem at the client.

As allowed by item DC2, the Drib administrators have set up several accounts to perform system functions. Examples are the mail account, which allows the user to manipulate mail queues and configuration files, and the daemon user, under which most network daemons run. These accounts do not have root privileges. This is an application of the principle of least privilege,[36] because few functions require the powers of the root account.

The NIS mechanism uses cleartext messages to transmit user information. This violates requirement DC1, because the messages are not integrity-checked. They are susceptible to network-based attack, because an attacker can inject responses to queries. However, a quick analysis demonstrates why this is not a problem in the particular environment of the Drib.

The development system is not accessible to users from the Internet. The outer firewall, inner firewall, and devnet firewall prevent any direct connections from the Internet. The threat comes from insiders, people with access to the Drib's internal network. The security analysts classified these threats into two distinct sets.

The first set involves nonadministrative information. This data is sent enciphered and integrity-checked, using mechanisms that the analysts trust. Compromising this data could lead to corruption of user-specific data, and the analysts felt that the other mechanisms provided sufficient protection to deter this.

The second set involves administrative information—specifically, the NIS user records. These records are not encrypted. However, none of these records include administrative accounts, so only ordinary users can be compromised.[38] The security analysts configured each workstation so that only root could inject false information that either the clients or the server would accept as legitimate.[39] They then physically secured the network to prevent unauthorized personnel from connecting workstations to the Drib's network. Fake NIS replies can be put on the network only from the outside (such replies would have to go through the devnet firewall) or from a host on the network (such replies would require root access). In the first case, the devnet firewall would reject the packets before they entered the devnet network. In the second case, root could access that user's account by running the su command on the system under attack, making unnecessary the injection of false NIS packets to obtain access to a user's account.

Given this analysis, the Drib's policy managers agreed with the system developers, administrators, and security officers that the violation of item DC1 was acceptable. However, if there is evidence of a problem, the policy managers reserved the right to require that some other scheme be developed with security the foremost consideration.

Comparison

The difference between selecting users for the DMZ Web server system and selecting users for the development system reflects the differences between the security policies of the two systems. The root lies in the intended use of each system.

The DMZ Web server system is in an area that is accessible to untrusted users (specifically, from the Internet). Although access is controlled, the controls may have vulnerabilities. Limiting the number of users on the system, and ensuring that untrusted users access servers running with minimal privileges, increase the difficulty of an attacker obtaining unauthorized access to the system.[40] Except for the superuser, users can perform only restricted actions. Finally, the user information is kept on the system, so attackers cannot inject false information (such as information on other users) into the system's accesses to a user information database.[41]

The development system allows general user access, so it has many more accounts. Furthermore, the development system shares its user population with other systems on the same subnet, so it accesses a centralized database containing the information. This keeps the user and file information consistent across platforms. The features of the NIS system (notably, the “+” and “–”) [969] allow each devnet system administrator to control authorization to use that particular system. System accounts other than that of the superuser allow the system administrators to control administrative actions to a fairly high degree of granularity. The trade-off is that these administrative accounts can access files on the file server, whereas the superuser can access only public files (NFS servers typically map remote superuser accesses to an unprivileged identity).

Finally, the difference in means of access reflects the differences in the environments and uses of the two systems. The DMZ Web server system allows access only through a small set of tightly controlled access points: the Web server (from the outer firewall), the SSH server (from the inner firewall), and a login server bound to the physical console of the system. This reflects the classes of users who are authorized to use the system, as well as the ways in which they are authorized to use it.[42] External users can access only the Web server; internal users, only the SSH server. However, the devnet system is in the internal network. Hence, users can come from a wide variety of systems and can access any server. The only controls on access are that the accesses must come from within the devnet, unless explicitly stated otherwise, and that the users must have accounts on the devnet centralized database system.

Authentication

Authentication binds the identity of the user to processes. Incorrect or compromised authentication leads to security problems. In this section, we consider the authentication techniques used in the two systems.

The Web Server System in the DMZ

As required by items WC1 and WC2, the SSH server uses cryptographic authentication to ensure that the source of the connection is the trusted administrative host. If the connection is from any other host, the SSH server is configured to reject the connection. Furthermore, SSH uses a cryptographic method of authentication rather than relying on IP addresses.[43]

When a user connects to the SSH server, that server attempts to perform cryptographic authentication. If that attempt fails, that server requests a password from the user. Were this arrangement likely to remain unchanged, the administrator would configure the authentication routines directly in the SSH daemon. However, the Drib is experimenting with one smart card system and plans to try two more. Because such a system would require changes in the authentication methods, the system administrator has elected to use PAM to avoid having to modify the source to the SSH server, recompile, and reinstall.[44]

The UNIX system used for the Web server system allows the use of an MD-5-based password hashing mechanism. The advantage of this scheme over the standard UNIX scheme is that the passwords may be of arbitrary length. The password changing program on the Web server system is set to require passwords to have a mixture of letters, numbers, and punctuation (including white space) characters. When a password is changed, the password changing program runs the proposed password through a series of checks to determine if it is too easy to guess.[45] If not, the change is allowed.
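The guessability checks can be sketched as a simple filter. This is a minimal illustration of the kind of test described above, assuming a 10-character minimum; the real checker also runs dictionary and transformation tests, which are omitted here:

```shell
# Return success only if the proposed password has a mixture of
# letters, digits, and punctuation (or white space), and is long
# enough; reject everything else.
strong_pw() {
  pw=$1
  [ "${#pw}" -ge 10 ] || return 1                      # minimum length
  case "$pw" in *[A-Za-z]*) ;; *) return 1 ;; esac     # must contain letters
  case "$pw" in *[0-9]*)    ;; *) return 1 ;; esac     # must contain digits
  case "$pw" in *[!A-Za-z0-9]*) ;; *) return 1 ;; esac # punctuation or space
  return 0
}
```

A phrase such as 'Guard me 42!' passes; 'password' fails on length and mixture.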

The system administrator has disabled password aging. Password aging is suitable when reusable passwords may be tried repeatedly until guessed, or if the hashed passwords can be obtained and cracked.[46] Here, all user connections come from the trusted administrative host, so only users who are authorized to use that system can get to the Web server system's SSH server. These users are trusted. The purpose of password aging is to limit the danger of passwords being guessed. Because the only users who could guess passwords are trusted not to do so, password aging is unnecessary.

Development Network System

The development system supports several users. It is in a physically secure area, accessible only to Drib employees. However, employees other than developers (such as custodians and managers) have access to the restricted area, so authentication controls are required.[47]

Item DC1 means that each user must self-authenticate at login. Although the Drib is moving toward a smart card system, each user currently has a reusable password. Because the users are not administrators (and therefore have no superuser privileges), cracking another user's password would gain an attacker that user's privileges. Hence, password aging is in effect. The mechanism uses the time-based approach. Once changed, a password may not be changed again for 3 days. Because the Drib has administrators present at all times, if a user suspects a compromise, the system administrator can reset the password. The Drib computed that guessing passwords within 180 days would require more computing power than was conveniently available on site, so the system administrators require users to change their passwords every 90 days. One week before a user's password expires, the user is warned at login that the password is about to expire. Once the password expires, the user may begin logging in but will be asked for a new password before the login can be completed. The user is also given the option of terminating the login at that point rather than supplying a new password.[48]
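The day arithmetic behind this time-based policy is straightforward. On many UNIX systems the parameters map onto the standard aging fields (for example, `chage -m 3 -M 90 -W 7 <user>`, where the user name is hypothetical); the check itself reduces to:

```shell
# Classify a password's aging state given the day it was last changed
# and the current day (both as day numbers). Thresholds come from the
# policy in the text: 90-day lifetime, 7-day warning window.
password_status() {
  age=$(( $2 - $1 ))                          # days since last change
  if   [ "$age" -ge 90 ]; then echo expired   # new password required at login
  elif [ "$age" -ge 83 ]; then echo warn      # warned at each login for 7 days
  else echo ok
  fi
}
```

The 3-day minimum between changes is enforced separately, when the user attempts a change rather than at login.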

Each proposed password is checked to ensure that it is not easy to guess.[49] The criteria include a mixture of case, character type, length, and testing against various word lists and transformations of those lists. Unlike the Web server system, the development system uses the traditional UNIX password hashing scheme based on DES, because that is the scheme its NIS implementation supports.

Although the Drib does not expect to upgrade the methods of authentication on the development system, that system uses PAM to provide a uniform, consistent interface for authentication. The system maintainers found that providing consistency and simplicity, as the interface to PAM does, eases the burden of administration.

To allow developers to access the system from anywhere within the Drib's offices, the development system runs an SSH server. This is configured to accept connections from any system within the internal network. It validates host identities using public key encryption and validates users using public key authentication, smart card authentication, and (if needed) password authentication.[50] However, to meet item DC3, root access is blocked. A system administrator must log in as an ordinary user and then change to root. To enforce this, the server's configuration file disallows root logins, and the system is set to disallow root logins on all terminals (network terminals and console). Other role accounts simply have a password hash that cannot be produced when any password is entered. Thus, users cannot log into them. To gain access, administrators must use a special program on the workstation that validates their identities, and then checks their authorization to access the desired role account.[51]
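The unproducible-hash technique corresponds to placing a string such as "*" or "!" in the password field of the account's shadow entry: no entered password hashes to that value, so direct login is impossible. The role-account names and field values below are illustrative:

```
# /etc/shadow (excerpt): role accounts locked against direct login.
# Fields: name:hash:lastchange:min:max:warn:inactive:expire:
operator:*:19000:0:99999:7:::
backup:!:19000:0:99999:7:::
```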

Comparison

Both the DMZ Web server system and the devnet system use strong authentication measures to ensure that users and hosts are correctly authenticated. The SSH server requires cryptographic authentication of not only the user but also the host from which the user is connecting, and the server responds only to known hosts. Host and user identities are established using the RSA public key cryptosystem. The keys are installed by trusted system administrators, so systems that are set up by unauthorized personnel will not be able to connect over SSH to any Drib system.

Both systems also allow reusable passwords. However, the DMZ Web server system uses an MD-5-based hash, whereas the development system uses the traditional UNIX DES-based hash, because it is the version supported by NIS. An undesirable side effect is that reusable passwords on the development system are restricted to a maximum of eight characters, whereas those on the DMZ Web server system may be of arbitrary length. This also explains why the development system uses password aging but the DMZ Web server system does not. Because the users of the Web server system have chosen very long passwords, attackers are expected to take much longer to guess them than if they were only eight characters long, as they are on the development system—assuming that attackers can even get to the SSH server on the DMZ Web server system.

Processes

A system runs a collection of processes to perform specific tasks. Each process is a potential vulnerability. This section examines the processes run on both systems.

The Web Server System in the DMZ

As required by WC5, the Web server runs a minimum set of processes[52] because its function is only to serve Web pages and batch transactions for off-line processing. The required services are as follows.

  • Web server

  • Commerce server

  • SSH server

  • Login server, if there is a physical terminal or console

  • Any essential operating system services (such as pagers)

Items WC2 and WC3 require each server to run with a minimum of privileges. The SSH and login servers need enough privileges to change to the user logging in. The Web and commerce servers run with minimal privileges, because they only need to access public data. Neither the login nor the commerce server accepts network connections.[53] The former is tied to specific, hard-wired terminals (such as a console); the latter simply responds to interprocess communication from the Web server.

Consider the level of privilege that the servers need.[54] The SSH server must run with sysadmin privileges to support the remote access and tunnelling facilities. The login server (if present) must run with this level of privilege also. The Web server requires enough privileges to read Web pages and invoke subordinate CGI scripts. The Web pages can be world-readable, so the Web server simply needs minimal privileges. The CGI scripts manipulate Web pages or generate transaction data, and with appropriate settings of file permissions can write into the Web server's area. The commerce server needs enough privileges to copy transaction files from the Web server area to the transaction spooling area. However, it should not have enough privileges to alter Web pages. Other required servers run with appropriate privileges.

File access is an important issue. File system access control lists[55] provide one defense. We can adapt another defense from capabilities.[56] Recall that in a pure capability system, the capability names the object; if the subject does not possess the capability, it cannot even identify an object. An access control-based system does not work this way. However, if we can change the meaning of a file system name, then we can confine all references to a particular part of the file system. The Web server, for example, needs to reference only programs and files within the hierarchy of Web pages (and CGI scripts). The commerce server needs access only to the transaction spooling area and the area where the Web server's CGI script places transactions.
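Changing the meaning of file system names is exactly what the chroot mechanism provides: the named directory becomes the process's "/", so no path can refer to anything outside it. The paths and server name here are placeholders, and the command requires superuser privileges:

```shell
# The Web server, started this way, can name only files beneath
# /usr/local/webroot; to it, that directory is the root of the
# file system. All libraries and files it needs must be copied
# inside beforehand.
#   chroot /usr/local/webroot /bin/httpd
```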

Finally comes interprocess communication. Processes should be able to communicate only through known, well-defined communication channels.[57] The issue here is how the Web server communicates with the commerce server to tell it that transaction files are present, and the names of those files.

The simplest method of communication is to use the directory that both the Web server and commerce server share. The commerce server periodically checks for files with names consisting of trns followed by a set of digits. When a transaction begins, the CGI script creates a temporary transaction file. It builds the transaction data and enciphers it using the appropriate public key. It then renames the temporary file with a name consisting of trns followed by the integer representation of the date and time, followed by one or more digits. (See Exercise 5.) When the commerce server checks the directory, it moves any files with that type of name to the spooling area.
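Because renaming within a file system is atomic, the commerce server can never see a half-written transaction. A runnable sketch of the handoff, using temporary directories as stand-ins for the Web area and spooling area (the "enciphered" data is just a placeholder string):

```shell
webarea=$(mktemp -d); spool=$(mktemp -d)
# CGI side: build the transaction under a temporary name...
tmp="$webarea/tmp.$$"
printf 'enciphered transaction\n' > "$tmp"
# ...then rename it so it matches trns<digits> only once it is complete
mv "$tmp" "$webarea/trns$(date +%s)01"
# Commerce-server side: sweep completed transactions into the spool area
for f in "$webarea"/trns*; do [ -f "$f" ] && mv "$f" "$spool/"; done
```

In practice the commerce server runs this sweep periodically rather than once.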

If the Web server and commerce server run with the same real or effective UID, or either runs with superuser privileges,[58] then they can communicate using the UNIX signaling (asynchronous interrupt) mechanism. If an attacker acquires access through the Web server, and can signal the commerce server, then the attacker can damage the Drib with a denial of service attack. Hence, the Web server and the commerce server should run as distinct users, with different privileges.

The Development System

Unlike the DMZ Web server system, the development workstation serves developers who will compile, test, debug, and manage software. They will also write reports and analyses, communicate with other developers on different systems in the devnet, and send and receive electronic mail over the Internet. The system must support all these functions.

Consider servers and clients first. The devnet workstations may run servers to provide administrative information (such as who is currently logged into the system). These servers require administrative users. As discussed in Section 27.4.2, item DC2 requires these users to be local. Item DC1 requires that users be named (and numbered) consistently. The NIS protocol provides user information to clients, ensuring this consistency. Hence, the devnet workstation runs NIS clients. Similarly, the workstation runs NFS clients to satisfy item DC6. Servers run with the fewest privileges necessary to perform their tasks. In many cases, servers begin with root privileges to open privileged ports. They then drop privileges to a more restricted user.[59]

Server processes on the development machine run with as few privileges as necessary, as required by item DC2. Whenever possible, they run with the nobody UID and the nogroup GID to ensure that the clients can obtain only information that the developers deem public (that is, available to others within the confines of the Drib's internal network).[60] When access to privileged ports is required, one of two methods is used. In the first, the inetd daemon (which runs with root privileges) listens for messages at the port. When a message is received, inetd spawns the server with the limited privileges. In the second method, the server starts with root privileges, opens the ports and other files accessible only to root, and drops to a lesser privilege level. This minimizes the actions that the process takes when it has unlimited privilege.[61] It also allows the operating system to enforce normal file system access checks.[62] As with the WWW server system, the servers run in a subtree of the file system whenever possible.
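The first method is visible in the inetd configuration file, where the user field names the identity under which the spawned server runs. The line below is illustrative, using the finger service as an example:

```
# /etc/inetd.conf (excerpt): inetd, running as root, binds the
# privileged port; the server itself runs as the unprivileged
# "nobody" user.
# service sock  proto flags  user    program               args
finger    stream tcp  nowait nobody  /usr/sbin/in.fingerd  in.fingerd
```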

To satisfy item DC3, the development system has a logging mechanism that can record any operating system call, its parameters, and the result.[63] Logged information is recorded locally and sent to a central logging server. The security officers monitor the logs from that server using an intrusion detection system.[64] If an attack is suspected, the central logging server can instruct the kernel to begin (or cease) recording data to augment the current set of data. Initially, the system logs process initiation and termination, along with the audit UID and effective UID of the user executing the command.

In addition to requiring the use of file servers, item DC6 requires that the workstations have sufficient disk space available for local users' work. To meet this goal, every night, or when disk space reaches 95% of capacity, a program scans the file system and deletes auxiliary files such as editor backup files and files in temporary directories that are not in current use (defined as not having been accessed within the last 3 days).
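The nightly scan can be sketched with the standard find command. The 3-day access threshold comes from the text; the editor-backup pattern and the directory are assumptions for illustration (the demonstration below sets an old access time explicitly rather than waiting):

```shell
scratch=$(mktemp -d)                       # stand-in for a temporary directory
touch "$scratch/report.txt~"               # an editor backup file
touch -a -t 202001010000 "$scratch/stale"  # not accessed in years
touch "$scratch/active"                    # accessed just now
# Delete editor backups and anything not accessed in the last 3 days
find "$scratch" -type f \( -name '*~' -o -atime +3 \) -delete
ls "$scratch"                              # only "active" survives
```

The real scan would also check disk capacity (the 95% trigger) before running, for example by parsing the output of df.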

As required by item DC1, the devnet workstations allow remote access using SSH. This allows devnet users to test software using multiple workstations, which is useful when the software involves network connections or concurrency. It also allows system administrators to log in remotely to perform maintenance activities.

Comparison

The DMZ Web server system uses a minimalist approach: only those processes necessary for the Web server, remote administration, and the operating system are present. All other processes are eliminated. This requires that any new software be compiled on other systems and that all development be done elsewhere. Only those programs essential to the serving of Web pages, to remote administration, and to the operating system are available. The number of processes active at any time on this system is small.[65] By way of contrast, the devnet system must provide an environment in which developers can be productive. This requires that more programs be available, and that more processes be active, than on the DMZ Web server system. Compilers, scripting languages, Web servers, and other tools help the developers carry out their tasks.

Both systems run servers with the minimum level of privilege needed. This includes not only minimizing user privileges but also restricting the environment in which the process runs.[66] The difference between the systems is that the “minimum environment” for the DMZ Web server system differs from the minimum environment for the Web servers on the devnet systems. In the latter, users wish to share data, so users must be able to place data into areas from which the devnet system's Web server can make it available to other users on the development network. The DMZ Web server system has no such requirement.[67] Because the root user installs all new Web pages, the Web server needs to serve data only from a part of the file system to which only the root user can write. No other user needs access, except for the commerce user, and that user has tightly restricted access.

Both systems have processes that log information, but the types of the logging processes differ. The devnet system has a log server that accepts messages from other programs, timestamps and formats them, and writes them to locations specified in a control file. This conforms to the way most UNIX-like systems handle logging and allows devnet systems to use off-the-shelf software. The DMZ Web server system has no such daemon. Each program writes log entries to a local log and to a remote daemon on the log server.[68] This minimizes the number of servers on the DMZ Web server system.

Files

The setting of protection modes, and the contents of files, affect the protection domains of users and so are critical to a system satisfying a security policy. Again, consider each system separately.

The Web Server System in the DMZ

The Web server system's goal is to serve the Web pages. The system programs and configuration files will not change; only the Web pages, log files, and spooling area for the electronic commerce transactions will change. To preserve their integrity, as required by item WC4, all system programs and files are on a CD-ROM. When the system boots, it boots from the CD-ROM. The CD-ROM is mounted as a file system, so even if attackers can break into the Web server, they cannot alter system files or configuration files.[69] A hard drive provides space for temporary and spooled files, for the home directories of authorized users, and for portions of the Web pages.

Because the Web pages change often, it is not feasible to burn them onto a CD-ROM. However, the CGI programs change very infrequently, and are to be protected from any attacker who might gain access to the system, as required by item WC4. Hence, the Web page root directory, and the subdirectory containing the CGI programs, are on the CD-ROM. In the Web page root directory is a subdirectory called pages that serves as a mount point for a file system on the hard drive. That file system contains the Web pages. In other words, an attacker can alter Web pages, but cannot alter the CGI programs or the internal public key, which is also kept in a directory under the Web page root directory on the CD-ROM. (See Exercise 10.)

When the system boots, one of its start-up actions is to mount two directories from the hard drive onto mount points on the CD-ROM. The hard drive file system containing the Web pages is mounted onto the mount point in the Web page root directory. A separate area, containing user home directories for the system administrators, a temporary file area, and spooling directories for transactions, is also mounted on the root file system.
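The resulting layout can be expressed as file system table entries. The device names, mount points, and file system types below are assumptions chosen for illustration; only the read-only/writable split reflects the design in the text:

```
# Root file system: system programs, configuration, CGI area (read-only CD-ROM)
/dev/cd0a   /            cd9660   ro   0 0
# Web pages, mounted over the pages subdirectory of the Web page root
/dev/wd0e   /web/pages   ffs      rw   0 0
# Administrator home directories, temporary files, transaction spool
/dev/wd0f   /local       ffs      rw   0 0
```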

As dictated by item WC3, the Web server runs confined to the Web page root directory and its subdirectories.[70] An attacker who compromises the Web server cannot alter the CGI programs, nor add new ones, but can only damage the Web pages on the server.

The commerce server has access to the Web page directory and the spooling area. When a CGI program has processed a request for an electronic transaction, it names the transaction file appropriately (see Section 27.6.1). The commerce server copies the data to the spooling area and deletes the original data. Because the Web server is confined to the Web page partition, an attacker who seizes control of the Web server will be unable to control the commerce server. Moreover, because the CGI programs (and the containing directory) cannot be altered, an attacker could not alter the programs to send raw data to the attacker. Because the CGI programs encipher all data using a public key system before writing the data to disk, the attacker cannot read the raw data there.[71] The corresponding private key is on the internal network, not the DMZ system, so the attacker cannot decipher the data without breaking the public key system.[72]

The system administrator partition provides a home directory area when an administrator logs in. It is small and intended for emergency use only.

Finally, item WC5 also specifies that the number of programs on the system be minimal.[73] Fortunately, the system itself requires few programs. No compilers or software development tools are available. Because all executables are statically linked, the dynamic loader is not present (see Exercise 3). The only programs that are available allow the users to log in and out; run commands (command interpreters); monitor the system; copy, create, edit, or delete files; and stop and start servers. Programs such as mail readers, news readers, batching systems (the at and cron commands), and Web browsers are not present. This minimizes what an attacker can compromise.

Item WC4 suggests that the integrity of the system should be checked. Periodically, or whenever there is a question about the integrity of the system, the Web server is stopped, transaction files are transferred, the system is rebooted from the CD-ROM, the hard drive is reformatted, and the contents of the user and Web page areas are reloaded from the internal Web server system clone mirroring the DMZ system (see Section 26.3.3.2). This restores the Web pages and user directories to a known, safe state. If an attacker has left any back doors or other processes to gather information, the reformatting of the hard drive eliminates them.

The Development System

The development system's goal is to provide the resources that developers need to develop new software for the Drib's products and (if necessary) infrastructure and systems. This requires a variety of software. A site can take two approaches.

The first approach is to allow each developer to configure his or her own workstation. The Drib rejected this approach because it would create too many different systems for the system administrators to manage. Furthermore, tools available on one workstation might not be available on another, violating the interchangeability required by item DC6. Meeting item DC5 would also be infeasible because read-only media would have to be created for each workstation separately—an effort that was deemed unacceptable.[74]

The second approach is to develop a standard configuration that provides developers and system administrators with needed software tools and configuration settings. To create such a configuration, the Drib policy managers gathered developers, system administrators, security officers, and all other users of the development workstations. The group developed a configuration that met the Drib's policies and that was acceptable to as many people as possible, and ensured that all members of the group were willing and able to use systems with that configuration.[75] The system administrators then installed and configured a base system on an isolated workstation system and created a bootable CD-ROM. This CD-ROM was copied and given to all developers. The developers use this CD-ROM to boot their workstations, ensuring that the resulting configuration is the standard one. All updates and upgrades are made to that isolated workstation system and tested, and a new CD-ROM is created. The CD-ROM is copied and distributed to the developers. This eliminates the problem of inconsistent patching or upgrading of workstations.[76] It also ensures that files are available on all workstations (through mounting of the central file server's file systems) and that the naming scheme is consistent (through use of the same user database system), satisfying items DC1 and DC5. Finally, the local system configurations of all workstations are identical, so all have the same administrative accounts.

Some members of the group pointed out the need for local writable storage. In the event that no file servers are available, the local administrators may need to create files (for example, to save output from a program for analysis). Furthermore, spool files require space, and many programs use temporary storage. Hence, each workstation has a hard drive with several file systems. When the computer boots from the CD-ROM, the root file system is located on the CD-ROM itself. All system programs and configuration files lie on the CD-ROM, as indicated above. During the boot, the workstation mounts the file systems on the hard drive at mount points in the CD-ROM file system. This provides the workstation with appropriate writable storage, satisfying item DC5.
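
The boot-time layout described above can be sketched as follows. The device names, mount points, and mount options here are hypothetical, chosen only to illustrate a read-only root file system from the CD-ROM with writable file systems from the hard drive mounted beneath it.

```python
# Sketch of the boot-time mount sequence: the root file system comes from
# the CD-ROM (read-only), and writable file systems on the hard drive are
# mounted at fixed points beneath it.  All names below are illustrative.
WRITABLE_MOUNTS = [
    ("/dev/hda1", "/var/spool"),  # spool files (mail, print, transactions)
    ("/dev/hda2", "/var/log"),    # local log files
    ("/dev/hda3", "/tmp"),        # temporary storage for programs
    ("/dev/hda4", "/local"),      # files saved when file servers are down
]

def boot_mount_commands(mounts=WRITABLE_MOUNTS):
    """Return the mount commands the boot procedure would issue, in order.

    The root file system is mounted read-only from the CD-ROM first;
    only the writable areas on the hard drive remain to be mounted.
    """
    cmds = ["mount -o ro /dev/cdrom /"]
    for device, mount_point in mounts:
        cmds.append(f"mount -o rw,nosuid,nodev {device} {mount_point}")
    return cmds
```

Mounting the writable areas with options such as `nosuid` and `nodev` (an assumption here, not a stated Drib requirement) would further limit what a program added to writable storage could do.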

This approach also prevents developers from adding new system programs to the workstations. Programs can of course be added to the writable file systems, but adding a program to the configuration requires that it be added to the isolated system and that new CD-ROMs be burned.[77] This satisfies part of item DC4. Procedural mechanisms (ranging from warnings to firings) enforce the requirement that programs be inspected before they are added to the writable file system. The organization of the various file systems allows the writable media to be wiped during the boot procedure, eliminating any and all programs added to the workstation. This is part of the recommended boot procedure, but it can be skipped if spool files are queued.
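
The wipe-with-an-exception rule in the boot procedure can be sketched as follows; the function and directory names are illustrative, not the Drib's actual procedure.

```python
import os

def should_wipe(spool_dir):
    """Decide whether the boot procedure may wipe the writable media.

    The recommended procedure is to wipe, eliminating any programs that
    were added to the writable file systems.  The one exception is when
    spool files are still queued, since wiping would lose them.
    """
    try:
        queued = os.listdir(spool_dir)
    except FileNotFoundError:
        queued = []          # no spool area yet, so nothing to preserve
    return len(queued) == 0  # wipe only when the spool queue is empty
```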

Wiping the writable disks deletes some local log files. However, the logging server also forwards log messages to an infrastructure system that records messages from all workstations. Security analysts examine these logs using various analysis tools, including host-based and network-based intrusion detection tools, to detect misuse and attacks. To validate that the analysis tools are working as expected and are configured correctly, every day the analysts select 30 minutes' worth of log entries and examine them to determine if the analysis tools correctly analyzed those entries. The analysis either validates the security mechanisms and procedures as effective, or uncovers and reports problems. This serves two purposes: validation of the current configuration and software (item DC4)[78] and detection of security incidents (item DC8).[79]
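
The daily sampling step could be sketched as follows. The record format (a timestamp paired with a message) is an assumption made for illustration; the analysts would compare the analysis tools' output against the sampled window.

```python
from datetime import datetime, timedelta

def sample_window(records, start, minutes=30):
    """Select the log records in a 30-minute window beginning at `start`.

    `records` is a sequence of (timestamp, message) pairs from the
    central log host.  The returned sample is examined by hand and
    compared with what the analysis tools reported for the same window.
    """
    end = start + timedelta(minutes=minutes)
    return [(t, msg) for t, msg in records if start <= t < end]
```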

The use of read-only media eliminates the need for integrity checking of the development system binaries and configuration files. Scans of the writable media locate files that match patterns of intrusions. When such files are found, the security officer merely reboots the system, wiping the writable hard drive. This cleans up the workstation. An extensive check of the file servers follows.

Comparison

Both the Web server system and the development system rely on physical protection of media to prevent unauthorized alteration of system programs and configuration files. Both boot from the CD-ROM and use the CD-ROM's file system as the main file system. Because some files on both systems must change (for example, transaction files on the Web server system and spooled files on the development system), both have file systems on writable media that are mounted on the main file system.[80]

When the Web server system must be reloaded (because the integrity of the system may have been violated), the spooled transaction files are removed from the system, the system is booted, and the writable medium is reformatted. Then the Web pages and user directories are reloaded from a clone kept in a state known to be safe. The development system does not require this, because any nontransient files are kept on a centralized file server that is itself regularly checked. The only local files are temporary files that the users can reinstantiate when they log back in, so the system is simply rebooted and the media reformatted. Because the main file system is on a CD-ROM, its integrity is ensured.

The differences between the approaches used in developing the two CD-ROMs spring from the question of attack from within the company. The developers are all trusted not to attack the workstation, because at any time a developer may have to use any workstation. However, the developers may be used as “vectors of attack” if they should (accidentally or deliberately) make errors in programming or bring in software from untrusted sources.[81] This led to the consensus-based development of the workstation CD-ROM. The developers had great influence, because they would be using the workstations. Security was a consideration, but it was weighted against productivity and morale. The outer, inner, and devnet firewalls were to provide the bulwark of the security for the development network systems.[82]

The set of users trusted to work on the DMZ Web server system was much smaller. Thus, the DMZ Web server system was designed to withstand attack from both the Internet and the internal network. For example, the Web server originally was intended to handle transactions; the security people vetoed this as allowing too many potential attacks, and instead suggested the staging approach, in which the DMZ Web server acts as a proxy for the transaction processing systems on the customer data subnet (see Figure 26-1). The construction of the CD-ROM began with the security officers devising the most secure, minimal Web server system they could construct and then adding those features necessary for the Drib's special needs.[83] They monitor activity on the Web server, as well as several vulnerability tracking lists and news services, to stay up to date on potential problems.

The DMZ Web server system is self-contained in that all files are local. None are served remotely.[84] If an attacker alters files, a reboot and a reload restore the files to their original state. No other system depends on those files. However, the development workstation relies on file servers. This removes user file integrity from the purview of the development workstation's security. Integrity of the configuration becomes critical, to ensure that the right servers are used, but the CD-ROM ensures that the configuration file data is correct. However, the security of the development systems depends more on the security of the infrastructure of the development network than the security of the DMZ Web server system depends on the security of the infrastructure of the DMZ network.

Retrospective

This section briefly reviews the basics of the security of the systems.

The Web Server System in the DMZ

The Web server on the DMZ Web server system runs a minimal set of services. It keeps everything possible on unalterable media.[85] Except for the Web server process, the system accepts only enciphered, authenticated connections from a known, trusted host by known, trusted users.[86]

The Web server process must accept connections from any host on the Internet. However, all such connections go through an outer firewall that can (if desired) be configured to reject requests.[87] This means that denial of service attacks could be handled at the outer firewall and not by the DMZ Web server.

The Web server and commerce server run with minimal privileges. Neither may communicate with the other except through a shared directory used to transfer transaction requests from the public Web server area to a private spooling area from which they can be retrieved through the enciphered link.[88] The transaction files themselves are enciphered using a public key algorithm, so an attacker who compromises the Web server cannot alter the transaction files, but can only delete them. To minimize this risk, the commerce server moves the transaction files as quickly as possible to an area that is inaccessible to the Web server.
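
The commerce server's sweep of the shared directory might be sketched as follows. The directory layout and file-name prefix are taken from the description above; the function itself is illustrative. Note that `os.replace` is atomic only when source and destination lie on the same file system, which holds here because both areas are on the server's local media.

```python
import os

def sweep_spool(shared_spool, private_area):
    """Move enciphered transaction files out of the shared spool area.

    The Web server writes files named trns... into `shared_spool`; the
    commerce server moves them to `private_area`, which the Web server
    cannot access, as quickly as possible to narrow the window in which
    an attacker who has compromised the Web server could delete them.
    Returns the names of the files moved.
    """
    moved = []
    for name in sorted(os.listdir(shared_spool)):
        if not name.startswith("trns"):
            continue  # ignore anything that is not a transaction file
        os.replace(os.path.join(shared_spool, name),
                   os.path.join(private_area, name))
        moved.append(name)
    return moved
```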

Access to the administrative account requires that the user access a trusted host (the internal trusted administrative host) and then authenticate to the DMZ Web server using a public key protocol. Automated processes will authenticate on the basis of the host from which they are run, which is the internal trusted administrative host. The SSH server ignores connections from other hosts, and host identity is determined using public key authentication, not IP addresses.
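
The decision to identify the peer by its public key rather than its IP address can be sketched as a lookup of the presented key fingerprint against a list of trusted hosts. The host name and fingerprint strings below are hypothetical.

```python
# Hypothetical table of hosts trusted to reach the SSH server.  The
# host is identified by its public key fingerprint, not its IP address.
TRUSTED_HOST_KEYS = {
    "trusted-admin-host": "SHA256:examplefingerprintA",
}

def accept_connection(claimed_host, key_fingerprint):
    """Accept only connections whose public key matches a trusted host.

    Connections from unknown hosts, or from known hosts presenting the
    wrong key, are ignored -- the fail-safe default.
    """
    expected = TRUSTED_HOST_KEYS.get(claimed_host)
    return expected is not None and expected == key_fingerprint
```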

Other servers and programs are simply deleted from the system, so they cannot be run even by accident.[89] This simplifies system maintenance. It also deprives any attackers of available tools should they penetrate the Web server system.

The Development System

The development system also runs a minimal set of programs and services.[90] The notion of “minimal” is different for the development system than for the DMZ Web server system, because the systems must serve many functions. Users compile and debug programs. They test programs, and they integrate different programs into a single software system. They may use ancillary hardware (such as embedded systems) to support the development. The development systems must support this functionality.

Given this, security plays a prominent but not dominant role. Hidden behind three firewalls, each development workstation has sufficient security mechanisms to hinder attackers, and to allow quick recovery if an attack does occur,[91] but these systems rely more on the infrastructure than does the DMZ Web server system.

The development system allows a large number of users access from any development network system and (possibly) from systems in other subnets of the internal network. User information resides in a centralized repository to maintain consistency across all development systems. Reusable passwords are supported, and password aging is not enforced. However, passwords are tested for strength before they are accepted, and the security officers periodically try to guess passwords. Other password schemes are also supported.
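
A proactive strength test of the kind described might look like the following sketch. The specific rules (minimum length, a small dictionary of common choices, a character-class requirement) are illustrative assumptions, not the Drib's actual criteria.

```python
import string

# A tiny stand-in for the dictionary a real proactive checker would use.
COMMON_PASSWORDS = {"password", "letmein", "qwerty", "123456"}

def strong_enough(password, min_length=8):
    """Apply simple proactive checks before accepting a new password.

    Rejects passwords that are too short, that appear in a dictionary
    of common choices, or that draw on too few character classes.
    """
    if len(password) < min_length:
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False
    classes = sum([
        any(c in string.ascii_lowercase for c in password),
        any(c in string.ascii_uppercase for c in password),
        any(c in string.digits for c in password),
        any(c in string.punctuation for c in password),
    ])
    return classes >= 3  # require at least three character classes
```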

Backups occur daily. Because each workstation has a local writable area, users may keep files in that area rather than place them on the file servers. These areas are backed up. The dumps are typically small, because most users work on directories mounted from the file servers. The main reason for these backups is to preserve the log files should an investigation require them.

Summary

This chapter refined parts of a security policy to derive requirements for mechanisms on systems to implement the policy. The mechanisms rely in part on infrastructure systems and the environment in which those systems function. The server in the DMZ is based on assumptions under which a small set of users is trusted, and everyone else is distrusted. This leads to a system that provides minimal services. System files are kept on protected media so that they cannot be physically altered. Other files, such as those containing transactions, are protected using cryptographic mechanisms so that alterations will be detected, and sanity checks are performed on their contents both before encryption and after delivery and decryption. By way of contrast, the development workstations are general-purpose workstations designed to support a development environment. They support many more functions, and more open access, than the DMZ server. Furthermore, their user population is trusted to a greater degree than that of the DMZ Web server. This leads to differences in infrastructure support and workstation configuration.

Research Issues

The role of a security policy in system development raises several research issues. The first is realism and consistency of policy. A security policy must be consistent with the requirements of the organization. The second issue is the difficulty of ensuring that a policy is internally consistent. Aggravating this issue is the manner of expression of the policy. If the policy can be expressed mathematically, one can apply mathematical techniques to determine its internal consistency. In practice, few policies can be so expressed, and those that can are usually abstractions of the policy in use (that is, they are policy models). Analyzing the actual policies in use at a site requires techniques and methodologies that have not yet been developed.

Part of the problem is how to map policy components to security mechanisms. Although such mapping appears clear in many cases, the influences of the environment, the users, and the organization affect the selection of mechanisms. For example, the nature of the authentication mechanism determines whether an access control list is sufficient to restrict access as required by the policy, because if users are improperly identified through a weak authentication mechanism, unauthorized users may gain access to data. One research issue is the development of automated (or semiautomated) techniques for guiding the selection of mechanisms for enforcing policies.

The inverse of this problem is also an interesting research topic. Given a system, one would like to derive a high-level exposition of the policy it enforces. Given a set of systems, one would like to demonstrate that the policy they collectively enforce is consistent. Security mechanisms work at a low level. Translating those implementation-specific mechanisms to higher-level expositions, whether formal or informal, is an area that is ripe for study.

Developing methodologies for maintaining large collections of systems consistently seems straightforward. It is not. The difficulty arises from the administration of the distribution system and from the need to ensure that any failures in the process will be detected and reported. This problem is related to the problem of updating distributed databases, and some protocols from that field ameliorate the problem. But human factors, such as keeping the distribution versions up to date, often create problems, as does the desire for control over one's own system. Much work remains to be done in the area of distributing upgrades and patches to systems in a trusted manner, and in the area of determining whether installing a patch or an upgrade will interfere with the system's meeting the security policy of the local site.

Further Reading

Many books discuss system administration and security for UNIX and UNIX-like systems [31, 376, 382, 425, 767], Windows systems [500, 533, 681, 1049], and Macintosh systems [832].

As sites grow in complexity and number of systems, automated system administration tools are becoming more important. Several authors [159, 160, 239, 351, 433] discuss systems for administering sites.

The role of policy is increasingly driving work in systems administration. The balance between centralized system administration and distributed system administration is delicate [479, 884], as is the balance between security and convenience [528, 835]. Others [438, 494, 1076] focus on case studies of system administration and policy. Burgess [161] discusses some theory to evaluate system administration policies. Sloman [932] and Lupu and Sloman [650] discuss policy and framework in the context of distributed systems. Kubicki [599] adapts the Capabilities Maturity Model to the examination of quality control in system administration.

Exercises

1:

A system administrator on a development network workstation wants to execute a program stored on a floppy disk. What steps could the Drib take to configure the workstation to prevent the system administrator from mounting the floppy and executing the program?

2:

Suppose a user has physical access to computer hardware (specifically, the box containing the CPU and a hard drive). The user does not have an account on the computer. How can the user force the computer to shut down? To reboot?

3:

Some systems support dynamic loading, in which system library routines are not loaded until they have been referenced. A library can be updated independently of any programs that use the library. If the program loads the library routines dynamically, the updated routines will be used. If the program does not load the library routines dynamically, the program will use the versions of the routines that were in the library at link time. This exercise examines this property from the viewpoint of security.

  1. From the point of view of assurance, what problems might dynamic loading introduce? (Hint: Think about the assumptions the programmer made when writing the code that calls the library functions.)

  2. Does dynamic loading violate any of Saltzer and Schroeder's principles of secure design [865]? (See Chapter 13.) Justify your answer.

  3. If an attacker wanted to implant a Trojan horse into as many processes as possible, how would dynamic loading lower the amount of work that the attacker would need to do?

4:

Suppose there is no system dedicated to the bootable CD-ROM discussed in Section 27.7.2. How would you go about constructing such a CD-ROM? Discuss procedures, and justify them. What is the problem with updating a running system and burning a CD-ROM of the changes only?

5:

The Web server on the DMZ Web server system renames temporary files used to record transactions. The name has the form trns followed by the integer representation of the date and time, followed by one or more digits. Why are the extra digits necessary?

6:

Consider a developer who has both an ISP workstation and a devnet workstation on his desk, and who wants to move a program from the ISP workstation to the devnet workstation.

  1. Assume that the user is not allowed to mount media such as the floppy disk. Thus, he would not be able to access the data on the disk as though it were a file system. Would he be able to access the data in some other way? (Hint: Must data on all media be accessed as though it were a file system, or can it be read in some other way?)

  2. Assume that the root user is asked to mount the floppy for the user, so he can access data on it. What precautions should root take before making the data available to the user?

  3. Suppose the ISP workstation were removed. How could the Drib prevent the developer from bringing a floppy into his office?

  4. Suppose the floppy reader were removed from the development network workstation. Would this solve the problem? Why or why not? Discuss the advantages and disadvantages of this approach.

7:

The second line of the Web server starting script puts the process ID number of the Web server wrapper into a file. Why? (Hint: Think of how to terminate the process automatically.)

8:

This exercise reconsiders the use of NIS to distribute user information such as password hashes.

  1. In general, why might an administrator want to use encryption techniques to protect the transmission of NIS records over a network?

  2. Why is secrecy of the NIS records not important to the system administrators?

  3. Assume the devnet firewall (and the inner and outer firewalls) did not prevent outside users from monitoring the development network. How important would secrecy of the NIS records be then? Why?

  4. The NIS client accepts the first response to its query that it receives from any NIS server. Why is physical control of the development network critical to the decision not to use cryptography to protect the NIS network traffic?

9:

The system administrators on the development network believe that any password can be guessed in 180 days of continuous trial and error. They set the lifetime of each password at a maximum of 90 days. After 90 days, a password must be changed. Why did they use 90 days rather than 180 days?

10:

Section 27.7.1 discusses CGI scripts on the DMZ Web server system. It points out that Web pages change too frequently to be placed on a CD-ROM, but that the CGI scripts are changed infrequently enough to allow them to be placed on the CD-ROM.

  1. In light of the fact that the CGI scripts do not contain data, why is their alteration a concern?

  2. CGI scripts can generate Web pages from data stored on the server. Discuss the integrity issues arising from storing of the data that those scripts use on writable media but storing of the scripts themselves on read-only media. In particular, how trustworthy are the pages resulting from the script's use of stored data? (Hint: See Section 6.2.)

  3. Assume that the CGI scripts are to be changed frequently. Devise a method that allows such changes and also keeps the interface to those scripts on read-only media. Where would you store the actual scripts, and what are the benefits and drawbacks of such a scheme?

11:

Brian Reid has noted that “[p]rogrammer convenience is the antithesis of security” [835]. Discuss how the Drib's trade-off between security and convenience exemplifies the conflict between users (programmers) and security. In particular, when should the principle of psychological acceptability (see Section 13.2.8) override other principles of secure design?

12:

Computer viruses and worms are often transmitted as attachments to electronic mail. The Drib's development network infrastructure directs all electronic mail to a mail server. Consider an alteration of the development network infrastructure whereby workstations download user mail rather than mounting the file system containing the mailboxes.

  1. The Drib has purchased a tool that scans mail as it is being received. The tool looks for known computer worms and viruses in the contents of attachments, and deletes them. Should this antivirus software be installed on the mail server, on the desktop, or on both? Justify your answer.

  2. What other actions should the Drib take to limit incoming computer worms and viruses in attachments? Specifically, what attributes should cause the Drib to flag attachments as suspicious, even when the antivirus software reports that the attachment does not contain any known virus?

  3. What procedural mechanisms (such as warnings) should be in place to hinder the execution of computer worms and viruses that are not caught by the antivirus filters? Specifically, what should users be advised to do when asked to execute a set of instructions to (for example) print a pretty picture?



[1] See Section 13.2.6, “Principle of Separation of Privilege.”

[2] See Section 13.2.4, “Principle of Complete Mediation.”

[3] See Section 13.2.6, “Principle of Separation of Privilege.”

[4] See Section 13.2.1, “Principle of Least Privilege.”

[5] See Section 16.5, “Example Information Flow Controls.”

[6] See Chapter 24, “Auditing.”

[7] See Chapter 25, “Intrusion Detection.”

[8] See Section 13.2.2, “Principle of Fail-Safe Defaults.”

[9] See Section 13.2.6, “Principle of Separation of Privilege.”

[10] See Section 13.2.2, “Principle of Fail-Safe Defaults.”

[11] See Section 14.6.1.2, “Security Issues with the Domain Name Service.”

[12] See Section 13.2.1, “Principle of Least Privilege.”

[13] See Section 13.2.6, “Principle of Separation of Privilege.”

[14] See Section 9.3, “Public Key Cryptography.”

[15] See Section 13.2.8, “Principle of Psychological Acceptability.”

[16] See Section 13.2.3, “Principle of Economy of Mechanism.”

[17] See Section 13.2.3, “Principle of Economy of Mechanism.”

[18] See Section 13.2.6, “Principle of Separation of Privilege.”

[19] See Chapter 24, “Auditing.”

[20] See Section 23.2, “Penetration Studies.”

[21] See Section 13.2.7, “Principle of Least Common Mechanism.”

[22] See Section 13.2.8, “Principle of Psychological Acceptability.”

[23] See Section 13.2.6, “Principle of Separation of Privilege.”

[24] See Section 13.2.1, “Principle of Least Privilege.”

[25] See Chapter 15, “Access Control Mechanisms.”

[26] See Section 13.2.6, “Principle of Separation of Privilege.”

[27] See Section 15.1.1, “Abbreviations of Access Control Lists.”

[28] Some UNIX variants allow the group owner of a file to delete it only if the directory and the file itself are group-writable. In this case, the transaction file must be group-writable as well.

[29] See Section 14.3, “Users,” and Section 14.4, “Groups and Roles.”

[31] See Section 13.2.6, “Principle of Separation of Privilege.”

[32] See Section 13.2.6, “Principle of Separation of Privilege.”

[33] See Section 13.2.4, “Principle of Complete Mediation.”

[34] See Section 13.2.7, “Principle of Least Common Mechanism.”

[35] See Section 13.2.7, “Principle of Least Common Mechanism.”

[36] See Section 13.2.1, “Principle of Least Privilege.”

[37] See Section 14.4, “Groups and Roles.”

[38] See Section 13.2.7, “Principle of Least Common Mechanism.”

[39] See Section 13.2.1, “Principle of Least Privilege.”

[40] See Section 13.2.1, “Principle of Least Privilege.”

[41] See Section 13.2.7, “Principle of Least Common Mechanism.”

[42] See Section 13.2.1, “Principle of Least Privilege.”

[43] See Section 9.3, “Public Key Cryptography.”

[44] See Section 12.6, “Multiple Methods.”

[45] See Section 12.2.2.3, “User Selection of Passwords.”

[46] See Section 12.2.3, “Password Aging.”

[47] See Section 13.2.6, “Principle of Separation of Privilege.”

[48] See Section 12.2.3, “Password Aging.”

[49] See Section 12.2.2, “Countering Password Guessing.”

[50] See Section 14.6.1, “Host Identity.”

[51] See Section 14.4, “Groups and Roles.”

[52] See Section 13.2.1, “Principle of Least Privilege.”

[53] See Section 13.2.2, “Principle of Fail-Safe Defaults.”

[54] See Section 13.2.1, “Principle of Least Privilege.”

[55] See Section 15.1, “Access Control Lists.”

[56] See Section 15.2, “Capabilities.”

[57] See Chapter 17, “Confinement Problem.”

[59] See Section 13.2.1, “Principle of Least Privilege.”

[60] See Section 13.2.1, “Principle of Least Privilege.”

[61] See Section 13.2.1, “Principle of Least Privilege.”

[62] See Section 13.2.4, “Principle of Complete Mediation.”

[63] See Section 24.3, “Designing an Auditing System,” and Section 24.4, “A Posteriori Design.”

[64] See Chapter 25, “Intrusion Detection.”

[65] See Section 13.2.1, “Principle of Least Privilege.”

[66] See Section 13.2.1, “Principle of Least Privilege.”

[67] See Section 13.2.7, “Principle of Least Common Mechanism.”

[68] See Section 13.2.6, “Principle of Separation of Privilege.”

[69] See Section 13.2.1, “Principle of Least Privilege.”

[70] See Section 17.2, “Isolation.”

[71] See Section 13.2.7, “Principle of Least Common Mechanism.”

[72] See Section 9.3, “Public Key Cryptography.”

[73] See Section 13.2.1, “Principle of Least Privilege.”

[74] See Section 13.2.8, “Principle of Psychological Acceptability.”

[75] See Section 13.2.5, “Principle of Open Design.”

[76] See Section 13.2.7, “Principle of Least Common Mechanism.”

[77] See Section 13.2.6, “Principle of Separation of Privilege.”

[78] See Section 19.3.3, “Justifying That the Implementation Meets the Design.”

[79] See Chapter 25, “Intrusion Detection.”

[80] See Section 13.2.1, “Principle of Least Privilege.”

[81] See Section 22.7, “Defenses.”

[82] This approach violates the principle of fail-safe defaults, but it was deemed necessary to allow the developers to be as productive and effective as possible. This illustrates a tension between the principle of fail-safe defaults and the principle of psychological acceptability (see Exercise 11).

[83] See Section 13.2.2, “Principle of Fail-Safe Defaults.”

[84] See Section 13.2.7, “Principle of Least Common Mechanism.”

[85] See Section 13.2.1, “Principle of Least Privilege.”

[86] See Section 13.2.4, “Principle of Complete Mediation.”

[87] See Section 13.2.6, “Principle of Separation of Privilege.”

[88] See Section 17.2, “Isolation.”

[89] See Section 13.2.2, “Principle of Fail-Safe Defaults.”

[90] See Section 13.2.1, “Principle of Least Privilege.”

[91] See Section 25.6.2, “Intrusion Handling.”
