Chapter 28. User Security

 

COMINIUS: Away! the tribunes do attend you: arm yourself
To answer mildly; for they are prepar'd
With accusations, as I hear, more strong
Than are upon you yet.

 
 --Coriolanus, III, ii, 138–141.

Although computer systems provide security mechanisms and policies that can protect users to a great degree, users must also take security precautions for a variety of reasons. First, although system controls limit the access of unauthorized users to the system, such controls often are flawed and may not prevent all such access. Second, someone with access to the system may want to attack an authorized user—for example, by reading confidential or private data or by altering files. The success of such attacks may depend on the victim's failure to take certain precautions. Finally, users may notice problems with their accounts, causing them to suspect compromises. The system administrator can then investigate thoroughly.

This chapter considers a user of a workstation on the development network at the Drib. The user's primary job is to develop products or support for the Drib. It is not to secure her system. We explore the precautions, settings, and procedures that such a user can use to limit the effect of attacks on her account.

Policy

Most users have informal policies in mind when they decide on security measures to protect their accounts, data, and programs. Few analyze the policies or even write them down. However, as with the development of a network infrastructure, and of the configuration and operation of a system, users' security policies are central to the actions and settings that protect them.

The components of users' policies that we focus on are as follows.

  • U1. Only users have access to their accounts.

  • U2. No other user can read or change a file without the owner's permission.

  • U3. Users shall protect the integrity, confidentiality, and availability of their files.

  • U4. Users shall be aware of all commands that they enter, or that are entered on their behalf.

Access

Component U1 requires that users protect access to their accounts. Consider the ways in which users gain access to their accounts. These points of entry are ideal places for attackers to attempt to masquerade as users. Hence, they form the first locus of users' defenses.

Passwords

Section 12.2.2, “Countering Password Guessing,” discussed the theory behind good password selection. Ideally, passwords should be chosen randomly.[1] In practice, such passwords are difficult to remember. So, either passwords are not assigned randomly, or they require that some information be written down.

Writing down passwords is popularly considered to be dangerous. In reality, the degree of danger depends on the environment in which the system is accessed and on the manner in which the password is recorded.

Users with accounts on many systems will choose the same password for each system, choose passwords that follow a pattern, or write passwords down.[2] On the development network, the first of these is a result of centralizing the user database. Even there, users (especially system administrators) may have multiple accounts, including some on infrastructure systems that do not use the centralized user database. These users must take precautions to protect their passwords.

The users of development network workstations can choose their own passwords, but a proactive password checking program checks the proposed password before accepting it.[4] The proactive password checker rejects proposed passwords that are deemed too easy to guess.[5] Most users choose verses of poetry or sayings, and use them to generate their passwords.

If a user chooses a password that is easy to guess, it may cause a violation of policy component U1.
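A proactive checker of this sort might be sketched as follows. The specific rules and the forbidden-word list here are illustrative assumptions, not the Drib's actual criteria:

```python
import re

# Illustrative word list only; a real checker would consult large
# dictionaries and site-specific criteria.
FORBIDDEN_WORDS = {"password", "secret", "drib", "letmein"}

def check_password(candidate: str, login_name: str) -> list[str]:
    """Return a list of reasons the proposed password is too easy to guess.

    An empty list means the proposal passes these (illustrative) checks.
    """
    problems = []
    if len(candidate) < 8:
        problems.append("shorter than 8 characters")
    lowered = candidate.lower()
    if login_name.lower() in lowered:
        problems.append("contains the login name")
    if lowered in FORBIDDEN_WORDS:
        problems.append("appears in the forbidden-word list")
    if candidate.isdigit() or candidate.isalpha():
        problems.append("uses only one character class")
    if re.fullmatch(r"(.)\1*", candidate):
        problems.append("repeats a single character")
    return problems
```

A password derived from a saying (for example, "Mary had a little lamb" yielding "Mhall!73") passes these checks, whereas the login name with digits appended does not.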

The Login Procedure

To log in, the user must supply her login name and authentication information. First, the user obtains a prompt at which she can enter the information. She then logs in.

The first potential attack arises from the lack of mutual authentication on most systems. An attacker may place a program at the access point that emulates the login prompt sequence. Then, if the user has a reusable password, the name and password are captured. Crude versions of this Trojan horse[6] save the name and password to a file and then terminate by spawning a legitimate login session. The user will be reprompted for the information. Most users simply assume that they have mistyped some part of the password (which, after all, is usually not printed) and proceed to repeat the login procedure. A more sophisticated version saves the name and password to a file and then spawns the login process and feeds it the name and password. The program terminates, giving control of the access point to the login process.

The second potential attack arises from an attacker reading the password as it is entered. At a later date, the attacker can reuse the password. This differs from the first attack in that it succeeds even when the user and system mutually authenticate each other. The problem is that the password is no longer confidential.

As part of the login procedure, many systems print useful information. If the date, time, and location of the last successful login are shown, the user can verify that no one has used her account since she last did. If the access point is shown, the user can determine if some program is intercepting and rerouting her communications.

Policy component U1 suggests that the user should be alert when logging in. If something suspicious occurs, or the link to the system is not physically or cryptographically protected, an unauthorized user may acquire access to the system.

Trusted Hosts

The notion of “trusted hosts” comes from the belief that if two hosts are under the same administrative control, each can rely on the other to authenticate a user. It allows certain mechanisms, such as backups, to be automated without placing passwords or cryptographic keys on the system.

The trusted host mechanism requires accurate identification of the connecting host. The primary identification token of a host is its IP address,[9] but the authentication mechanism can be either the IP address itself [549] or a challenge-response exchange[10] based on cryptography [1065]. The Drib uses the latter. This prevents IP spoofing.
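In outline, a cryptographic challenge-response exchange might look like the following sketch, which proves knowledge of an assumed pre-shared key by computing an HMAC over a fresh random nonce. Real trusted-host mechanisms typically use public-key protocols instead; the shared key here is purely illustrative:

```python
import hashlib
import hmac
import os

# Both hosts share a secret key in this simplified sketch; real systems
# typically use public-key cryptography (e.g., per-host key pairs) instead.
SHARED_KEY = b"example-shared-secret"   # illustrative only

def make_challenge() -> bytes:
    """The verifying host picks a fresh random nonce for each connection."""
    return os.urandom(16)

def respond(challenge: bytes, key: bytes = SHARED_KEY) -> bytes:
    """The claimed host proves knowledge of the key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes = SHARED_KEY) -> bool:
    """The verifying host recomputes the response and compares in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

A host that merely spoofs an IP address cannot compute the correct response, which is why this defeats IP spoofing.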

The development network workstations use the cryptographically based trusted host mechanism. The implementation provides enciphered and integrity-checked connections. Because all development network workstations use the same user information database, a developer need only log into one using a password. She can then access any workstation on that subnet.

Hence, the development network provides an infrastructure that supports this aspect of policy component U1.

Leaving the System

The Drib has many physical and procedural controls that limit access to its facility, but some people not authorized to use the systems have access to the rooms in which those systems reside. For example, custodians clean the rooms. If lights or air conditioning units need to be repaired, maintenance workers need entry. Hence, physical security is not sufficient to control access to the systems.

Users must authenticate themselves to begin a session. However, once authenticated, the user must also control access to the session. A common problem is that users will leave their sessions unattended—for example, by walking away from their monitors to go to the bathroom. If a custodian came into the room, she would see that the monitor was logged in and could enter commands, thereby obtaining access to the system even though she was not authorized to do so.

When a user of a system leaves a session unattended, he must restrict physical access to the endpoint of the session.[11] When that endpoint is a monitor or terminal, a screen locking program provides an appropriate defense against this threat.

Screen locking programs may have security holes. The most common is a “master password” that unlocks the terminal if the user forgets the password used to lock it.[12]

A modem bank provides similar opportunities for open sessions. When a modem detects carrier drop (that is, when the remote user hangs up), it terminates the session. However, two problems arise. The first and simpler one is that the detection of carrier drop is configurable. Some modems have a physical switch that must be set properly to detect the termination of a telephone call.[13]

The second problem is similar but more subtle. Some older telephone systems mishandle the propagation of call terminations. The result is a race condition,[15] in which a new connection arrives at the switch and is forwarded before the termination signal arrives at the modem. The effect is exactly the same as in the example above: the modem never sees the carrier drop. If the session is terminated, the modem initiates a new session and the race condition does not affect the system's accessibility, but if the session is unterminated, the new connection will have access to the session.

The Drib's solution to these problems is a mixture of physical and technical means. All workstations have display locking programs that do not accept a master password. They use the user's login password as the key to unlocking the display. If the user is unable to supply that password (for example, if the user forgets it or becomes ill and cannot communicate it), the system administrators can remotely log into the workstation and terminate the process. The procedural mechanisms involve disciplinary action against developers who fail to lock displays, or fail to lock the doors of their offices when they leave. As far as modems go, the Drib does not allow modems to be connected to the development network.
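A display locking program of the kind described might verify the login password as sketched below. The PBKDF2 verifier is an illustrative choice; a real implementation would consult the system's authentication service (for example, PAM) rather than store its own hash:

```python
import getpass
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    """Derive a verifier from the login password (PBKDF2 as an example KDF)."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def unlock_attempt(attempt: str, salt: bytes, stored: bytes) -> bool:
    """Check one unlock attempt against the stored verifier, in constant time."""
    return hmac.compare_digest(hash_password(attempt, salt), stored)

def lock_screen(salt: bytes, stored: bytes) -> None:
    """Block until the user re-enters her login password.

    There is deliberately no master password; if the user cannot supply
    hers, an administrator must log in remotely and terminate this process.
    """
    while not unlock_attempt(getpass.getpass("Password to unlock: "),
                             salt, stored):
        pass
```

Note that the only escape hatch is external (an administrator killing the process), matching the Drib's policy of rejecting master passwords.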

Files and Devices

Users keep information and programs in files. This makes file protection a security mechanism that users can manipulate to refine the protection afforded their data. Similarly, users manipulate the system through devices of various types. Their protection is to some degree under the user's control. This section explores both.

Files

Users must protect the confidentiality and integrity of their files to satisfy policy component U2. To this end, they use the protection capabilities of the system to constrain access. Complicating the situation is the interpretation of permissions on the containing directories.

This example illustrates the cumbersome nature of abbreviated ACLs (see Exercise 3; Exercise 4 explores an approach to the situation in which Peter and Deborah are the only members common to two groups). Ordinary ACLs make the task considerably simpler.

Users can control several aspects of file protection. The remainder of this section explores some of these aspects.

File Permissions on Creation

Many systems allow users to specify a template of permissions to be given to a file when it is created. The owner can then modify this set as required.

Group Access

Group access provides a selected set of users with the same access rights.[18] The problem is that the membership of the group is not under the control of the owner of the file. This has an advantage and a disadvantage.

The advantage arises when the group is used as a role.[19] Then, as users are allowed to assume the role, their access to the file is altered. Because the owner of the file is concerned only with controlling access of those role users, reconfiguration of the access to the role reconfigures user access to the file, which is what the user wants.

The disadvantage arises when a group is used as a shorthand for a set of specific users. If the membership of the group changes, unauthorized users may obtain access to the file, or authorized users may be denied access to the file.

In general, users should limit access as much as possible when creating new files. So ACLs and C-Lists should include as few entries as possible, and permissions for each entry should be as restrictive as possible. Constructs such as the umask should be set to deny permissions to as many users as possible (in the specific case of UNIX systems, umask should deny all permissions to all but the owner, unless there are specific reasons to set it differently).
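On a UNIX-like system, the effect of the recommended umask can be demonstrated directly: a file created with a permissive requested mode still ends up owner-only.

```python
import os
import stat
import tempfile

old_umask = os.umask(0o077)        # deny all access except to the owner
try:
    path = os.path.join(tempfile.mkdtemp(), "newfile")
    # Request mode 666 (read/write for everyone); the umask strips the
    # group and other bits at creation time.
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)
    os.close(fd)
    mode = stat.S_IMODE(os.stat(path).st_mode)
    print(oct(mode))               # 0o600: only the owner may read and write
finally:
    os.umask(old_umask)            # restore the previous umask
```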

File Deletion

When a user deletes a file, either the file data or merely the file name is discarded. The effects of these two interpretations differ widely.

Computer systems store files on disk. The file attribute table contains information about the file. The file mapping table contains information that allows the operating system to locate the disk blocks that compose the file. Systems represent a file being in a directory in a variety of ways. All involve an entry in the directory for that file, but the entry may contain attribute information (such as permissions and file type) or may merely point to an entry in the file attribute table.

  • Definition 28–1. A direct alias is a directory entry that points to (names) the file. An indirect alias is a directory entry that points to a special file containing the name of the target file. The operating system interprets the indirect alias by substituting the contents of the special file for the name of the indirect alias file.

All direct aliases that name the same file are equal. Each direct alias is an alternative name for the same file.[20]

The representation of containment in a directory affects security. If each direct alias can have different permissions, the owner of a file must change the access modes of each alias in order to control access. To avoid this, most systems associate the file attribute information with the actual data, and directory entries consist of a pointer to the file attribute table.

When a user deletes a file, the directory entry is removed. The system tracks the number of directory entries for each file, and when that number becomes 0, the data blocks and table entries for that file are released. This means that deleting a file does not ensure that the file is unavailable; it merely deletes the directory entry.
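The link-count semantics can be demonstrated on a UNIX-like system, where direct aliases are hard links:

```python
import os
import tempfile

d = tempfile.mkdtemp()
original = os.path.join(d, "data")
alias = os.path.join(d, "alias")

with open(original, "w") as f:
    f.write("sensitive contents")

os.link(original, alias)             # create a second direct alias
print(os.stat(original).st_nlink)    # 2: two directory entries name the file

os.remove(original)                  # "delete" the file under one name
# The data remains reachable through the other alias: only when the
# link count reaches 0 are the data blocks released.
with open(alias) as f:
    print(f.read())                  # sensitive contents
```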

The second issue affecting file deletion is persistence. When a file is deleted, its disk blocks are returned to the pool of unused disk blocks, and they may be reused. However, the data on them remains, and if an attacker can read those blocks, he may read information that was intended to be confidential. When sensitive files are deleted, the contents should be erased before deletion.[22]

The third issue lies in the difference between direct and indirect aliases. When a command that affects a file is executed, it may have different effects depending on whether the file is a direct alias or an indirect alias. This may mislead a user into believing that certain information has been protected or deleted when in fact the protection or deletion applied only to the indirect alias and not to the file itself.
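The difference can be seen on a UNIX-like system, where an indirect alias is a symbolic link. Deleting the alias leaves the file intact; deleting the file leaves a dangling alias:

```python
import os
import tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, "design")
alias = os.path.join(d, "alias")

with open(target, "w") as f:
    f.write("contents")
os.symlink(target, alias)            # create an indirect alias

os.remove(alias)                     # delete the indirect alias only
print(os.path.exists(target))        # True: the file itself survives

os.symlink(target, alias)            # recreate the alias, then delete the file
os.remove(target)
print(os.path.islink(alias))         # True: the alias entry still exists...
print(os.path.exists(alias))         # False: ...but it now dangles
```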

Devices

Users communicate with the system through devices. The devices may be virtual, such as network ports, or physical, such as terminals. Policy components U1 and U4 require that these devices be protected so that the user can control what commands are sent to the system in her name and so that others are prevented from seeing her interactions.

Writable Devices

Devices that allow any user to write to them can pose serious security problems. Unless necessary for the correct functioning of the system, devices should restrict write access as much as possible.[23] Two examples will demonstrate why.

The development network users have a default configuration that denies write privileges to everyone except the user of a terminal.

Smart Terminals

A smart terminal provides built-in mechanisms for performing special functions. Most importantly, a smart terminal can perform a block send. Using this mode, a process can instruct a terminal to send a set of characters that are printed on the screen. The instructions are simply a sequence of characters that the process sends to the terminal. This can be used to implant a Trojan horse.[25]

The difference between a smart terminal and a writable terminal is subtle. Only the user of the terminal need have write access to the smart terminal, whereas the earlier attacks required the attacker as well as the user of the terminal to be able to write to the terminal. An attacker must therefore trick the user into reading data in order to spring the smart terminal attack. This requires malicious logic (or, in this context, malicious data).[26]

Monitors and Window Systems

Window systems provide a graphical user interface to a system. Typically, a process called the window manager controls what is displayed on the monitor and accepts input from input devices. Other processes, called clients, register with the window manager. They can then receive input from the window manager and send output to the window manager. The window manager draws the output on the monitor screen if appropriate. The window manager is also responsible for routing input to the correct client.

The obvious question is how the window manager determines which clients it may talk to. If an attacker is able to register a client with the window manager, the attacker can intercept input and send bogus output to the monitor.

Window systems can use any of the access control mechanisms described in Chapter 15 to control access to the window manager. The granularity of the access control mechanism varies among different window systems.

Processes

Processes manipulate objects, including files. Policy component U3 requires the user to be aware of how processes manipulate files. This section examines several aspects of this requirement.

Copying and Moving Files

Copying a file duplicates its contents. The semantics of the copy command determine if the file attributes are also copied. If the attributes are not copied, the user may need to take steps to preserve the integrity and confidentiality of the file.

Similarly, the semantics of moving a file sometimes involve copying the file and deleting the original. In this case, the move command's handling of file attributes follows that of the copy command. Otherwise, the move command may preserve the attributes of the original file.
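Python's shutil module happens to illustrate both kinds of copy semantics: copyfile duplicates only the contents, so the copy's permissions come from the umask, while copy2 also duplicates the permission bits. A user relying on the wrong one can silently lose confidentiality:

```python
import os
import shutil
import stat
import tempfile

os.umask(0o022)                       # a typical default umask
d = tempfile.mkdtemp()
src = os.path.join(d, "private")
with open(src, "w") as f:
    f.write("confidential data")
os.chmod(src, 0o600)                  # owner-only access

data_only = os.path.join(d, "data_only")
with_attrs = os.path.join(d, "with_attrs")
shutil.copyfile(src, data_only)       # contents only; mode comes from umask
shutil.copy2(src, with_attrs)         # contents plus permission bits

show = lambda p: oct(stat.S_IMODE(os.stat(p).st_mode))
print(show(data_only))                # 0o644: readable by group and others!
print(show(with_attrs))               # 0o600: protection preserved
```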

The semantics of these commands, and how well the user knows those semantics and can take steps to handle potential security problems, affect the user's ability to satisfy policy component U3.

Accidentally Overwriting Files

Part of policy component U3 is to protect users from themselves.[29] Sometimes people make mistakes when they enter commands. These mistakes can have unpleasant consequences.

Many programs that delete or overwrite files have an interactive mode. Before any file is deleted or overwritten, the program requests confirmation that the user intends for this to happen.[30] Policy component U3 strongly suggests that these modes be used. In fact, the development workstations have these modes set in user start-up files. The users can disable the modes, but generally do not.

Encryption, Cryptographic Keys, and Passwords

The basis for encryption is trust. Cryptographic considerations aside, if the encryption and decryption are done on a multiuser system, the cryptographic keys are potentially visible to anyone who can read memory and, possibly, swap space. Anyone who can alter the programs used to encipher and decipher the files, or any of the supporting tools (such as the operating system), can also obtain the cryptographic keys or the cleartext data itself. For this reason, unless users trust the privileged users,[31] and trust that other users cannot acquire the privileges needed to read memory, swap space, or alter the relevant programs, the sensitive data should never be on the system in cleartext.[32]

The saving of passwords on a multiuser system suffers from the same problem. In addition, some programs that allow users to put passwords into a file do not rely on enciphering the passwords; they simply require the user to set file permissions so that only the owner can read the file.

The circumstances under which a password should reside in a system are few.[33] Unless unavoidable, no password should reside unenciphered in a system, either on disk or in memory. The Drib has modified its ftp programs to ignore .netrc files. This discourages their use. Furthermore, system administrators have embedded a check for such files in their audit tools that check the systems.

Start-up Settings

Many programs, such as text editors and command interpreters, use start-up information. These variables and files contain commands that are executed when the program begins but before any input is accepted from the user. The set of start-up files, and the order in which they are accessed, affect the execution of the program.

Limiting Privileges

Users should know which of their programs grant additional privileges to others. They should also understand the implications of granting such privileges.

Malicious Logic

Section 27.2.2 discusses mechanisms for preventing users from bringing malicious software from outside the development network. However, insiders can write malicious programs in order to gain additional privileges or to sabotage others' work. Also, if an attacker breaks in, he may not acquire the desired privileges and may leave traps for authorized users to spring. Hence, users need to take precautions.

  • Definition 28–2. A search path is a sequence of directories that a system uses to locate an object (program, library, or file).

Because programs rely on search paths, users must take care to set theirs appropriately.

Some systems have many types of search paths. In addition to searching for executables, a common search path contains directories that are used to search for libraries when the system supports dynamic loading. In this case, an attacker can create a new library that the unsuspecting victim will load, much as Johannes executed the wrong program in the example above.[36]

Part of policy component U4 requires that the users have only trusted directories in their search paths. Here, “trusted” means that only trusted users can alter the contents of the directory. The default start-up files for all the development workstation users have search paths set in this way.[37]

Electronic Communications

Electronic communications deserve discussion to emphasize the importance of users understanding basic security precautions. Electronic mail may pass through firewalls (as the Drib policy allows; see Section 26.3.3.1). Although it can be checked for malicious content, such checking cannot detect all forms of that content.[38] Finally, users may unintentionally send out more material than they realize. Hence, users must understand the threats and follow the procedures that are appropriate to the site policy.

Automated Electronic Mail Processing

Some users automate the processing of electronic mail. When mail arrives, a program determines how to handle it. The mail may be stored for the user, or it may be interpreted as a sequence of commands causing execution of either programs already on the system or part of the content of the message, or both. The danger is that the execution may have unintended side effects.

Electronic mail comes from untrusted sources. Hence, in general, the contents of e-mail messages are not trustworthy. Mail programs should be configured not to execute attachments, or indeed any component of the letter.[39] The trust in the result of such execution is the same as the trust the reader puts in the data contained in the mail message.

Failure to Check Certificates

Electronic signatures can be misleading. In particular, a certificate may validate a signature, but the certificate itself may be compromised, invalid, or expired. Mail reading programs must notify the user of these problems, as well as provide a mechanism for allowing the user to validate certificates.

The Drib has enhanced all mail reading programs that use certificates to validate the certificates as far as possible. The programs then display the certificates that could not be validated, to allow users to determine how to proceed.

Sending Unexpected Content

Attachments to electronic mail may contain data of which the sender is not aware. When these files are sent, the recipient may see more than the sender intended.

Some programs perform “rapid saves,” in which data is appended to the file and pointers are updated. When the program rereads the file, the document appears as it was last saved, and the extraneous data is ignored. However, if the file is sent to a different system, or if other programs are used to access the file, the “deleted” contents will be accessible.

The users of the development workstations are periodically warned about this risk. Furthermore, all programs with “rapid saves” have them disabled by default.[42]

Summary

This chapter covered only a few aspects of how users can protect the data and programs with which they work. The security policy of the site and the desires of the user combine to provide a personalized, if unwritten, security policy.

Well-chosen reusable passwords, or (even better) one-time passwords, inhibit unauthorized access. Other authentication mechanisms allow users to control access to some degree on the basis of the host of origin and cryptographic keys (although in some cases the system administrator can override these access controls). Users can prevent interference with their sessions by using enciphered, integrity-checked sessions and by physically securing the monitors or terminals they use to interact with the system (as well as the system of origin, if they are working remotely).

Basic file permission mechanisms help protect the confidentiality and integrity of data and programs. The user can check programs for an “interactive” mode that will require verification of any request to delete or overwrite files. Other aspects of file handling, such as erasing files before deleting them, and verifying that deletion of a file does not delete only an alias and leave the file accessible, also affect file security.

Equally important are the controls on devices. The sophistication of most modern equipment allows devices to be programmed from the computer to which they are connected. Hence, devices should be configured to refuse unexpected or untrusted connections. Ideally, access control mechanisms will provide sufficient granularity to allow access based on users or processes.

Processes act on the user's behalf, and can perform any action that the user requests. Malicious logic, or corrupt input, can cause the process to act in ways that the user does not want. Users can minimize this risk by setting up their environments carefully and by not executing untrusted programs or giving untrusted data to trusted programs.

Research Issues

There is a tension between allowing security features to be highly configurable and expecting users to configure them correctly (as defined by adherence to a security policy). Users view security as an infrastructure measure, designed to support work, and not as an end goal in itself. Because their primary goal is not security, many users find security mechanisms cumbersome and difficult to use. Designing mechanisms that can be readily understood, and that can be configured with a minimum of effort by untrained users, is a critical area of research that has received little attention. Striking the right balance between configurability and usability is a topic that combines security, psychology, and user interfaces.

Further Reading

Discussions of user level mechanisms in various systems abound. Books on the security of various systems (such as Braun [145], Garfinkel and Spafford [382], and McLean [681]) focus on the system administration aspects of security but also describe user level mechanisms. Books on how to use the systems (such as Crawford [244] and Glass [396]) cover the material more effectively for ordinary users.

Zurko and Simon discuss the notion of user-centered security as fundamental to secure systems [1074]. Whitten and Tygar examine PGP from a usability point of view [1042].

Exercises

1:

Consider the isolated system described in the first example in Section 28.2.1. If custodians and other people not authorized to use the isolated system were allowed into the room without observation, would that violate policy component U1? Justify your answer.

2:

Reconsider the lock program discussed in Section 28.2.3.

  1. The program requires a user to choose a password (rather than using her login password) to lock the screen. Does this violate the principle of psychological acceptability (see Section 13.2.8)? Justify your answer.

  2. If a user forgets her password, how might she terminate the program without using the master password? (Hint: Although she cannot use that terminal, she can use another terminal to access the system.)

  3. How might a user determine the master password? Discuss steps that the implementer could take to prevent such a discovery. In particular, could a per-system master password be implemented (rather than a single master password for the program)? How?

3:

The example of Peter and Deborah on the UNIX system in Section 28.3.1 assumes that Deborah is the only member, or that Deborah and Peter are the only members, of a group. If this is not so, can Peter give only himself and Deborah access to the file by using the abbreviated ACL? Explain either how he can or why he cannot.

4:

Suppose that Deborah, Peter, and Kathy are the only members of the group proj and that Deborah, Peter, and Elizabeth are the only members of the group exeter. Show how Peter can restrict access to the file design to himself and Deborah using only abbreviated ACLs. (Hint: Consider both design and its containing directory.)

5:

The UNIX umask disables access by default. The Windows scheme enables it. Discuss the implications of enabling access by default and of disabling access by default with respect to security. In particular, which of Saltzer and Schroeder's design principles [865] (see Chapter 13, “Design Principles”) is violated by either enabling or disabling access by default?

6:

Many UNIX security experts say that the umask should be set to 077 (that is, to allow access only to the owner). Why? What problems might this cause?



[1] See Section 12.2.2.1, “Random Selection of Passwords.”

[2] See Section 12.2.2.3, “User Selection of Passwords.”

[3] See Section 12.2.2.1, “Random Selection of Passwords.”

[4] See Section 12.2.2.3, “User Selection of Passwords.”

[5] An example set of criteria begins on p. 316.

[6] See Section 22.2, “Trojan Horses.”

[8] See the examples in Section 11.4, “Example Protocols.”

[9] See Section 14.6.1, “Host Identity.”

[10] See Section 12.3, “Challenge-Response.”

[11] See Section 13.2.1, “Principle of Least Privilege.”

[12] Section 1.4, “Assumptions and Trust,” discusses the role of beliefs underlying security mechanisms such as a screen locking program. Section 18.1.3, “Assurance Throughout the Life Cycle,” discusses the role of assurance in developing software.

[13] See Section 13.2.2, “Principle of Fail-Safe Defaults.”

[14] See Section 13.2.6, “Principle of Separation of Privilege.”

[15] See Section 23.4.5.1, “The xterm Log File Flaw,” and Section 29.5.3.3, “Race Conditions in File Accesses,” for other examples of race conditions.

[16] See Section 15.1.1, “Abbreviations of Access Control Lists.”

[17] See Section 15.1.4, “Example: Windows NT Access Control Lists.”

[18] See Section 14.4, “Groups and Roles.”

[19] See Section 14.4, “Groups and Roles.”

[20] See Section 14.2, “Files and Objects.”

[21] See Section 14.3, “Users.”

[22] See, for example, Section 21.2.1.1, “TCSEC Functional Requirements,” and Section 21.8.3, “CC Security Functional Requirements.”

[23] See Section 13.2.1, “Principle of Least Privilege.”

[24] See Section 13.2.2, “Principle of Fail-Safe Defaults.”

[25] See Section 22.2, “Trojan Horses.”

[27] See Section 14.6.1, “Host Identity.”

[28] See Section 14.6.2, “State and Cookies.”

[29] See Section 13.2.2, “Principle of Fail-Safe Defaults.”

[30] See Section 13.2.8, “Principle of Psychological Acceptability.”

[31] Here, “privileged users” means those who can read memory, swap space, or alter system programs.

[32] See Section 13.2.1, “Principle of Least Privilege.”

[33] See Section 13.2.2, “Principle of Fail-Safe Defaults.”

[34] See Section 14.3, “Users.”

[35] See Section 13.2.1, “Principle of Least Privilege.”

[36] See Section 23.2.8, “Example: Penetrating a UNIX System.”

[37] See Section 13.2.2, “Principle of Fail-Safe Defaults.”

[38] See Section 22.6, “Theory of Malicious Logic.”

[39] See Section 22.7.1, “Malicious Logic Acting as Both Data and Instructions.”

[40] See Section 14.5, “Naming and Certificates.”

[41] See Section 10.5.2, “Key Revocation.”

[42] See Section 13.2.8, “Principle of Psychological Acceptability.”
