Chapter 2. Access Control

This chapter covers the following topics:

Access control concepts: Concepts discussed include the confidentiality, integrity, and availability (CIA) triad, default stance, defense in depth, and the access control process.

Identification and authentication concepts: Concepts discussed include the identification concepts and the three factors for authentication.

Authorization concepts: Concepts discussed include access control policies, separation of duties, least privilege, need to know, default to no access, Kerberos and Directory Services, single sign-on, and security domains.

Accountability: Concepts discussed include auditing and reporting, vulnerability assessment, penetration testing, and threat modeling.

Access control categories: Categories include compensative, corrective, detective, deterrent, directive, preventive, and recovery.

Access control types: Types include administrative (management) controls, logical (technical) controls, and physical controls.

Access control models: Models include discretionary access control, mandatory access control, role-based access control, rule-based access control, content-dependent versus context-dependent access control, and access control matrix.

Access control administration: Administration topics include centralized administration, decentralized administration, and provisioning life cycle.

Access control monitoring: Monitoring topics include intrusion detection system (IDS) and intrusion prevention system (IPS).

Access control threats: Threats include password threats, social engineering threats, DoS/DDoS, buffer overflow, mobile code, malicious software, spoofing, sniffing and eavesdropping, emanating, and backdoor/trapdoor.

Access occurs when information flows between a subject and an object. A subject is an entity that requests access to an object, and an object is an entity that contains information. A user is an example of a subject, and a file on a computer is an example of an object.


Access control is the means by which a subject’s ability to communicate with an object is allowed or denied based on an organization’s security requirements. It is the mechanism by which managers and administrators can control access to the objects. Access control mechanisms include a broad range of controls that protect information, computers, networks, and even buildings.

The most important factor when implementing access control is to determine the value of the information being protected. If the cost of the access control mechanism is higher than the value of the information being protected, the access control mechanism is not a good value.

Foundation Topics

Access Control Concepts

When implementing access control, you should always keep several security principles in mind. Confidentiality, integrity, and availability are the three components of the CIA triad and the three main security principles that security professionals should understand when designing access controls. Other security principles that organizations and security professionals should understand are the organization’s default security stance and defense in depth.

CIA

The three main security principles of the CIA triad must be considered throughout any security design. Although the CIA triad is being introduced here, each principle of the triad should be considered in every aspect of security design. The CIA triad could easily be discussed in any domain of the CISSP exam.

To ensure confidentiality, you must prevent the disclosure of data or information to unauthorized entities. As part of confidentiality, the sensitivity level of data must be determined before putting any access controls in place. Data with a higher sensitivity level will have more access controls in place than data at a lower sensitivity level. The opposite of confidentiality is disclosure.

Integrity, the second part of the CIA triad, ensures that data is protected from unauthorized modification or data corruption. The goal of integrity is to preserve the consistency of data. The opposite of integrity is corruption.

Finally, availability means ensuring that data is accessible when and where it is needed. Only individuals who need access to data should be allowed access to that data. Availability is the opposite of destruction or isolation.

Every security control that is put into place by an organization fulfills at least one of the security principles of the CIA triad. Understanding how to circumvent these security principles is just as important as understanding how to provide them.

Default Stance

An organization’s approach to information security directly affects its access control strategy. For a default stance, organizations must choose between an allow-by-default or deny-by-default stance. As implied by its name, an allow-by-default stance permits access to any data unless a need exists to restrict access. The deny-by-default stance is much stricter because it denies any access that is not explicitly permitted. Government and military institutions and many commercial organizations use a deny-by-default stance.

Today few organizations implement either of these stances to the fullest. In most organizations, you see some mixture of the two. Although the core stance should guide the organization, organizations often find that this mixture is necessary to ensure that data is still protected while providing access to a variety of users. For example, a public Web site might adopt an allow-by-default stance, whereas a SQL database might have a deny-by-default stance.
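The difference between the two stances comes down to what happens when no explicit rule matches a request. The following sketch illustrates this; the rule table and names are hypothetical, not taken from any specific product:

```python
# Hypothetical sketch of the two default stances as an access-rule check.
def is_allowed(user, resource, explicit_rules, default_stance):
    """Return True if access is granted under the given default stance.

    explicit_rules maps (user, resource) -> "allow" or "deny";
    default_stance is "allow" (allow-by-default) or "deny" (deny-by-default).
    """
    decision = explicit_rules.get((user, resource))
    if decision is not None:
        return decision == "allow"
    # No explicit rule: the default stance decides.
    return default_stance == "allow"

rules = {("alice", "payroll.db"): "allow"}
# Deny-by-default: anything not explicitly allowed is refused.
assert is_allowed("alice", "payroll.db", rules, "deny") is True
assert is_allowed("bob", "payroll.db", rules, "deny") is False
# Allow-by-default: the same unlisted request is permitted.
assert is_allowed("bob", "payroll.db", rules, "allow") is True
```

Note that the only line that changes between the two stances is the final fallback, which is why a deny-by-default stance is considered the stricter of the two.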

Defense In Depth

A defense-in-depth strategy refers to the practice of using multiple layers of security between data and the resources on which it resides and possible attackers. The first layer of a good defense-in-depth strategy is appropriate access control strategies. Access controls exist in all areas of an Information Systems (IS) infrastructure (more commonly referred to as an IT infrastructure), but a defense-in-depth strategy goes beyond access control. It also considers software development security, cryptography, and all other domains of the CISSP realm.

Figure 2-1 shows an example of the defense-in-depth concept.

Figure 2-1. Defense-In-Depth Example

Access Control Process

Although many approaches to implementing access controls have been designed, all the approaches generally involve the following steps:

1. Identify resources.

2. Identify users.

3. Identify the relationships between the resources and users.
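These three steps can be sketched as simple data structures. The resource names, user classes, and access levels below are illustrative only:

```python
# Step 1: identify resources (objects).
resources = {"web_server", "sql_database", "payroll_files"}

# Step 2: identify users (subjects).
users = {"public", "employee", "web_developer"}

# Step 3: identify the relationships - which access level each user
# class holds on each resource (levels are hypothetical labels).
relationships = {
    ("public", "web_server"): "read",
    ("employee", "web_server"): "read",
    ("web_developer", "web_server"): "read-write",
    ("employee", "sql_database"): "read",
}

def access_level(user, resource):
    # Deny-by-default: no defined relationship means no access.
    return relationships.get((user, resource), "none")

assert access_level("web_developer", "web_server") == "read-write"
assert access_level("public", "sql_database") == "none"
```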

Identify Resources

This first step in the access control process involves defining all resources in the IT infrastructure by deciding which entities need to be protected. When defining these resources, you must also consider how the resources will be accessed. The following questions can be used as a starting point during resource identification:

• Will this information be accessed by members of the general public?

• Should access to this information be restricted to employees only?

• Should access to this information be restricted to a smaller subset of employees?

Keep in mind that data, applications, services, servers, and network devices are all considered resources. Resources are any organizational asset that users can access. In access control, resources are often referred to as objects.

Identify Users

After identifying the resources, an organization should identify the users who need access to the resources. A typical security professional must manage multiple levels of users who require access to organizational resources. During this step, it is important only to identify the users; the level of access each user will be given is analyzed in the next step.

As part of this step, you must analyze and understand the users’ needs and then measure the validity of those needs against organizational needs, policies, legal issues, data sensitivity, and risk.

Remember that any access control strategy and the system deployed to enforce it should avoid complexity. The more complex an access control system is, the harder that system is to manage. In addition, anticipating security issues that could occur in more complex systems is much harder. As security professionals, we must balance the organization’s security needs and policies with the needs of the users. If a security mechanism that we implement causes too much difficulty for the user, the user might engage in practices that subvert the mechanisms that we implement. For example, if you implement a password policy that requires a very long, complex password, users might find remembering their passwords to be difficult. Users might then write their passwords on sticky notes that are attached to their monitor or keyboard.

Identify Relationships between Resources and Users

The final step in the access control process is to define the access control levels that need to be in place for each resource and the relationships between the resources and users. For example, if an organization has defined a Web server as a resource, general employees might need a less restrictive level of access to the resource than the public and a more restrictive level of access to the resource than the Web development staff. Access controls should be designed to support the business functionality of the resources that are being protected. Controlling the actions that can be performed for a specific resource based on a user’s role is vital.

Identification and Authentication Concepts

To be able to access a resource, a user must profess his identity, provide the necessary credentials, and have the appropriate rights to perform the tasks he is completing. The first step in this process is called identification, which is the act of a user professing an identity to an access control system.

Authentication, the second part of the process, is the act of validating a user with a unique identifier by providing the appropriate credentials. When trying to differentiate between the two, security professionals should know that identification identifies the user and authentication verifies that the identity provided by the user is valid. Authentication is usually implemented through a user password provided at logon. When a user logs into a system, the login process should validate the login after the user supplies all the input data.

The most popular forms of user identification include user IDs or user accounts, account numbers, and personal identification numbers (PINs).

Three Factors for Authentication

After establishing the user identification method, an organization must decide which authentication method to use.

Authentication methods are divided into three broad categories:

Knowledge factor authentication: Something a person knows

Ownership factor authentication: Something a person has or possesses

Characteristic factor authentication: Something a person is

Authentication usually ensures that a user provides at least one factor from these categories, which is referred to as single-factor authentication. An example of this would be providing a user name and password at login. Two-factor authentication ensures that the user provides two of the three factors. An example of two-factor authentication would be providing a user name, password, and smart card at login. Three-factor authentication ensures that a user provides three factors. An example of three-factor authentication would be providing a user name, password, smart card, and fingerprint at login. For authentication to be considered strong authentication, a user must provide factors from at least two different categories. (Note that the user name is the identification factor, not an authentication factor.)

You should understand that providing multiple authentication factors from the same category is still considered single-factor authentication. For example, if a user provides a user name, password, and the user’s mother’s maiden name, single-factor authentication is being used. In this example, the user is still only providing factors that are something a person knows.
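The key point is that the factor count is the number of distinct *categories*, not the number of credentials. A minimal sketch, with an illustrative category table:

```python
# Map each credential to its factor category (Type I, II, or III).
# The credential names here are examples, not an exhaustive list.
FACTOR_CATEGORY = {
    "password": "knowledge",
    "pin": "knowledge",
    "mothers_maiden_name": "knowledge",
    "smart_card": "ownership",
    "token_device": "ownership",
    "fingerprint": "characteristic",
}

def factor_count(provided):
    """Number of distinct categories among the provided credentials."""
    return len({FACTOR_CATEGORY[f] for f in provided})

# Password + mother's maiden name: two knowledge items, still one factor.
assert factor_count(["password", "mothers_maiden_name"]) == 1
# Password + smart card: two categories, so two-factor (strong) authentication.
assert factor_count(["password", "smart_card"]) == 2
assert factor_count(["password", "smart_card", "fingerprint"]) == 3
```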

Knowledge Factors

As briefly described in the preceding section, knowledge factor authentication is authentication that is provided based on something that a person knows. This type of authentication is referred to as a Type I authentication factor. Although the most popular form of authentication used by this category is password authentication, other knowledge factors can be used, including date of birth, mother’s maiden name, key combination, or PIN.

Identity and Account Management

Identity and account management is vital to any authentication process. As a security professional, you must ensure that your organization has a formal procedure to control the creation and allocation of access credentials or identities. If invalid accounts are allowed to be created and are not disabled, security breaches will occur. Most organizations implement a method to review the identification and authentication process to ensure that user accounts are current. Questions that are likely to help in the process include:

• Is a current list of authorized users and their access maintained and approved?

• Are passwords changed at least every 90 days or earlier if needed?

• Are inactive user accounts disabled after a specified period of time?

Any identity management procedure must include processes for creating (provisioning), changing and monitoring (reviewing), and removing users from the access control system (revoking). This is referred to as the access control provisioning life cycle. When initially establishing a user account, new users should be required to provide valid photo identification and should sign a statement regarding password confidentiality. User accounts must be unique. Policies should be in place that standardize the structure of user accounts. For example, all user accounts should be firstname.lastname or some other structure. This ensures that users within an organization will be able to determine a new user’s identification, mainly for communication purposes.

After creation, user accounts should be monitored to ensure that they remain active. Inactive accounts should be automatically disabled after a certain period of inactivity based on business requirements. In addition, any termination policy should include formal procedures to ensure that all user accounts are disabled or deleted. Elements of proper account management include the following:

• Establish a formal process for establishing, issuing, and closing user accounts.

• Periodically review user accounts.

• Implement a process for tracking access authorization.

• Periodically rescreen personnel in sensitive positions.

• Periodically verify the legitimacy of user accounts.

User account reviews are a vital part of account management. User accounts should be reviewed for conformity with the principle of least privilege. (The principle of least privilege is explained later in this chapter.) User account reviews can be performed on an enterprise-wide, system-wide, or application-by-application basis. The size of the organization will greatly affect which of these methods to use. As part of user account reviews, organizations should determine whether all user accounts are active.

Password Types and Management

As mentioned earlier, password authentication is the most popular authentication method implemented today. However, password types can vary from system to system. Understanding all the types of passwords that can be used is vital.


The types of passwords that you should be familiar with include:

Standard word or simple passwords: As the name implies, these passwords consist of single words that often include a mixture of upper- and lowercase letters. The advantage of this password type is that it is easy to remember. A disadvantage of this password type is that it is easy for attackers to crack or break, resulting in a compromised account.

Combination passwords: This password type uses a mix of dictionary words, usually two unrelated words. These are also referred to as composition passwords. Like standard word passwords, they can include upper- and lowercase letters and numbers. An advantage of this password is that it is harder to break than simple passwords. A disadvantage is that it can be hard to remember.

Static passwords: This password type is the same for each login. It provides a minimum level of security because the password never changes. It is most often seen in peer-to-peer networks.

Complex passwords: This password type forces a user to include a mixture of upper- and lowercase letters, numbers, and special characters. For many organizations today, this type of password is enforced as part of the organization’s password policy. An advantage of this password type is that it is very hard to crack. A disadvantage is that it is harder to remember and can often be much harder to enter correctly than standard or combination passwords.

Passphrase passwords: This password type requires that a long phrase be used. Because of the password’s length, it is easier to remember but much harder to attack, both of which are definite advantages. Incorporating upper- and lowercase letters, numbers, and special characters in this type of password can significantly increase authentication security.

Cognitive passwords: This password type is a piece of information that can be used to verify an individual’s identity. This information is provided to the system by answering a series of questions based on the user’s life, such as favorite color, pet’s name, mother’s maiden name, and so on. An advantage to this type is that users can usually easily remember this information. The disadvantage is that someone who has intimate knowledge of the person’s life (spouse, child, sibling, and so on) might be able to provide this information as well.

One-time passwords: Also called a dynamic password, this type of password is only used once to log in to the access control system. This password type provides the highest level of security because passwords are discarded when they are used.

Graphical passwords: Also called CAPTCHA passwords, this type of password uses graphics as part of the authentication mechanism. One popular implementation requires a user to enter a series of characters in the graphic displayed. This implementation ensures that a human is entering the password, not a robot. Another popular implementation requires the user to select the appropriate graphic for his account from a list of graphics given.

Numeric passwords: This type of password includes only numbers. Keep in mind that the choices of a password are limited by the number of digits allowed. For example, if all passwords are 4 digits, then the maximum number of password possibilities is 10,000, from 0000 through 9999. Once an attacker realizes that only numbers are used, cracking user passwords is much easier because the possibilities are known.
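The keyspace arithmetic behind that example is simply alphabet size raised to the password length, which shows why both length and character variety matter:

```python
# Keyspace: possibilities = alphabet_size ** length.
assert 10 ** 4 == 10_000          # 4-digit numeric password: 0000-9999
assert 26 ** 4 == 456_976         # 4 lowercase letters
assert 62 ** 4 == 14_776_336      # 4 mixed-case letters and digits

# Length grows the keyspace even faster than alphabet size:
assert 10 ** 8 > 62 ** 4          # an 8-digit PIN beats a 4-character complex password
```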

Passwords are considered weaker than passphrases, one-time passwords, token devices, and login phrases. After an organization has decided which type of password to use, the organization must establish its password management policies.

Password management considerations include, but might not be limited to

Password life: How long the password will be valid. For most organizations, passwords are valid for 60 to 90 days.

Password history: How long before a password can be reused. Password policies usually remember a certain number of previously used passwords.

Authentication period: How long a user can remain logged in. If a user remains logged in for the period without activity, the user will be automatically logged out.

Password complexity: How the password will be structured. Most organizations require upper- and lowercase letters, numbers, and special characters.

Password length: How long the password must be. Most organizations require 8–12 characters.
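Several of these considerations can be enforced programmatically. The following is a minimal sketch of a policy checker; the thresholds (8-character minimum, all four character classes, a reuse-history check) are illustrative choices, not a mandated standard:

```python
import string

def meets_policy(password, history, min_length=8):
    """Hypothetical check against length, history, and complexity rules."""
    if len(password) < min_length:       # password-length rule
        return False
    if password in history:              # password-history rule
        return False
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return all(classes)                  # complexity rule

assert meets_policy("Tr0ub&dor", history=[]) is True
assert meets_policy("troubador", history=[]) is False           # no upper/digit/special
assert meets_policy("Tr0ub&dor", history=["Tr0ub&dor"]) is False  # reused password
```

In practice these rules are configured in the directory service or operating system rather than hand-coded, but the logic the enforcement engine applies is essentially this.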

As part of password management, organizations should establish a procedure for changing passwords. Most organizations implement a service that allows users to automatically reset their password before the password expires. In addition, most organizations should consider establishing a password reset policy in cases where users have forgotten their password or passwords have been compromised. A self-service password reset approach allows users to reset their own passwords without the assistance of help desk employees. An assisted password reset approach requires that users contact help desk personnel for help in changing their passwords.

Password reset policies can also be affected by other organizational policies, such as account lockout policies. Account lockout policies are security policies that organizations implement to protect against attacks that are carried out against passwords. Organizations often configure account lockout policies so that user accounts are locked after a certain number of unsuccessful login attempts. If an account is locked out, the system administrator might need to unlock or re-enable the user account. Security professionals should also consider encouraging organizations to require users to reset their password if their account has been locked or after a password has been used for a certain amount of time (90 days for most organizations). For most organizations, all the password policies, including account lockout policies, are implemented at the enterprise level on the servers that manage the network.


Note

An older term that you might need to be familiar with is clipping level. A clipping level is a configured baseline threshold above which violations will be recorded. For example, an organization might want to start recording any unsuccessful login attempts after the first one, with account lockout occurring after five failed attempts.
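The clipping level and the lockout threshold work together, as this sketch shows. The thresholds mirror the example in the note (record failures after the first, lock after five); they are illustrative values:

```python
CLIPPING_LEVEL = 1       # failures at or below this are not recorded
LOCKOUT_THRESHOLD = 5    # failures at or above this lock the account

def process_failures(failure_count):
    """Return (events_logged, locked_out) for a run of failed logins."""
    logged = max(0, failure_count - CLIPPING_LEVEL)
    return logged, failure_count >= LOCKOUT_THRESHOLD

assert process_failures(1) == (0, False)   # first failure: below the clipping level
assert process_failures(3) == (2, False)   # failures 2-3 are recorded as violations
assert process_failures(5) == (4, True)    # fifth failure locks the account
```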


Depending on which servers are used to manage the enterprise, security professionals must be aware of the security issues that affect user account and password management. Two popular server operating systems are Linux and Windows.

For Linux, passwords are stored in the /etc/passwd and /etc/shadow file. Because the /etc/passwd file is a text file that can be easily accessed, you should ensure that any Linux servers use the /etc/shadow file where the passwords in the file can be protected using a hash. The root user in Linux is a default account that is given administrative-level access to the entire server. If the root account is compromised, all passwords should be changed. Access to the root account should be limited only to systems administrators, and root login should only be allowed via a local system console, not remotely.
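The relationship between the two files is visible in their colon-separated layouts. The entry below is a made-up example; a real /etc/shadow entry carries a salted crypt hash in the second field (a "$6$" prefix denotes SHA-512 crypt):

```python
# Hypothetical /etc/shadow entry (the hash shown is fabricated).
shadow_line = "alice:$6$saltsalt$h4sh3dp4ssw0rd:19500:0:99999:7:::"

fields = shadow_line.split(":")
username, password_hash = fields[0], fields[1]

assert username == "alice"
assert password_hash.startswith("$6$")   # SHA-512 crypt scheme marker
assert len(fields) == 9                  # shadow entries have nine fields

# /etc/passwd, by contrast, carries only an "x" in its password field,
# directing the system to the protected shadow file.
passwd_line = "alice:x:1000:1000:Alice:/home/alice:/bin/bash"
assert passwd_line.split(":")[1] == "x"
```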

For Windows computers that are in workgroups, the Security Accounts Manager (SAM) stores user passwords in a hashed format. However, known security issues exist with the SAM, including the ability to dump the password hashes directly from the registry. You should take all Microsoft-recommended security measures to protect this file. If you manage a Windows network, you should change the name of the default Administrator account or disable it. If this account is retained, make sure that you assign it a password. The default Administrator account might have full access to a Windows server.

Ownership Factors

Ownership factor authentication is authentication that is provided based on something that a person has. This type of authentication is referred to as a Type II authentication factor. Ownership factors can include token devices, memory cards, and smart cards.

Synchronous and Asynchronous Token

The token device (often referred to as a password generator) is a handheld device that presents the authentication server with the one-time password. If the authentication method requires a token device, the user must be in physical possession of the device to authenticate. So although the token device provides a password to the authentication server, the token device is considered a Type II authentication factor because its use requires ownership of the device.

Two basic token device authentication methods are used: synchronous and asynchronous. A synchronous token generates a unique password at fixed time intervals, in synchronization with the authentication server. An asynchronous token generates the password based on a challenge/response technique, with the token device providing the correct answer to the authentication server’s challenge.

A token device is usually only implemented in very secure environments because of the cost of deploying the token device. In addition, token-based solutions can experience problems because of the battery lifespan of the token device.
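A synchronous token can be sketched as follows, assuming the device and the authentication server share a secret key and a reasonably accurate clock. The 30-second step and 6-digit output follow common time-based one-time-password practice (RFC 6238), but this is an illustration, not a vetted implementation:

```python
import hashlib
import hmac
import struct

def token_code(secret: bytes, timestamp: int, step: int = 30) -> str:
    """Derive a 6-digit one-time password from a shared secret and the time."""
    counter = timestamp // step                       # current time interval
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

secret = b"shared-secret"
# Device and server agree while their clocks fall in the same interval:
assert 1_000_000_010 // 30 == 1_000_000_015 // 30
assert token_code(secret, 1_000_000_010) == token_code(secret, 1_000_000_015)
# At the next interval the counter advances, so a fresh password is generated:
assert 1_000_000_010 // 30 != 1_000_000_040 // 30
```

An asynchronous token works differently: instead of the clock, the server sends a random challenge, and the device computes the response from the challenge and the shared secret.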

Memory Cards

A memory card is a swipe card that is issued to valid users. The card contains user authentication information. When the card is swiped through a card reader, the information stored on the card is compared to the information that the user enters. If the information matches, the authentication server approves the login. If it does not match, authentication is denied.

Because the card must be read by a card reader, each computer or access device must have its own card reader. In addition, the cards must be created and programmed. Both of these steps add complexity and cost to the authentication process. However, it is often worth the extra complexity and cost for the added security it provides, which is a definite benefit of this system. However, the data on the memory cards is not protected, a weakness that organizations should consider before implementing this type of system. Memory-only cards are very easy to counterfeit.

Smart Cards

Similar to a memory card, a smart card accepts, stores, and sends data but can hold more data than a memory card. Smart cards, often known as integrated circuit cards (ICCs), contain memory like a memory card but also contain an embedded chip like bank or credit cards. Smart cards use card readers. However, the data on the smart card is used by the authentication server without user input. To protect against lost or stolen smart cards, most implementations require the user to input a secret PIN, meaning the user is actually providing both a Type I (PIN) and Type II (smart card) authentication factor.

Two basic types of smart cards are used: contact cards and contactless cards. Contact cards require physical contact with the card reader, usually by swiping. Contactless cards, also referred to as proximity cards, simply need to be in close proximity to the reader. Hybrid cards are available that allow a card to be used in both contact and contactless systems.

For comparative purposes, security professionals should remember that smart cards have processing power due to the embedded chips. Memory cards do not have processing power. Smart card systems are much more reliable than memory card systems or callback systems (which are discussed in Chapter 3, “Telecommunications and Network Security”).

Smart cards are even more expensive to implement than memory cards. Many organizations prefer smart cards over memory cards because they are harder to counterfeit and the data on them can be protected using encryption.

Characteristic Factors

Characteristic factor authentication is authentication that is provided based on something that a person is. This type of authentication is referred to as a Type III authentication factor. Biometric technology is the technology that allows users to be authenticated based on physiological or behavioral characteristics. Physiological characteristics include any unique physical attribute of the user, including iris, retina, and fingerprints. Behavioral characteristics measure a person’s actions in a situation, including voice patterns and data entry characteristics.

Physiological Characteristics

Physiological systems use a biometric scanning device to measure certain information about a physiological characteristic. You should understand the following physiological biometric systems:

• Fingerprint

• Finger scan

• Hand geometry

• Hand topography

• Palm or hand scans

• Facial scans

• Retina scans

• Iris scans

• Vascular scans

A fingerprint scan usually scans the ridges of a finger for matching. A special type of fingerprint scan called minutiae matching is more microscopic in that it records the bifurcations and other detailed characteristics. Minutiae matching requires more authentication server space and more processing time than ridge fingerprint scans. Fingerprint scanning systems have a lower user acceptance rate than many systems because users are concerned with how the fingerprint information will be used and shared.

A finger scan extracts only certain features from a fingerprint. Because a limited amount of the fingerprint information is needed, finger scans require less server space and processing time than any type of fingerprint scan.

A hand geometry scan usually obtains size, shape, or other layout attributes of a user’s hand but can also measure bone length or finger length. Two categories of hand geometry systems are mechanical and image-edge detective systems. Regardless of which category is used, hand geometry scanners require less server space and processing time than fingerprint or finger scans.

A hand topography scan records the peaks and valleys of the hand and its shape. This system is usually implemented in conjunction with hand geometry scans because hand topography scans are not unique enough if used alone.

A palm or hand scan combines fingerprint and hand geometry technologies. It records fingerprint information from every finger as well as hand geometry information.

A facial scan records facial characteristics, including bone structure, eye width, and forehead size. This biometric method uses eigenfeatures or eigenfaces. Neither of these methods actually captures a picture of a face. With eigenfeatures, the distances between facial features are measured and recorded. With eigenfaces, measurements of facial components are gathered and compared to a set of standard eigenfaces. For example, a person’s face might be composed of the average face plus 21% from eigenface 1, 83% from eigenface 2, and -18% from eigenface 3. Many facial scan biometric devices will use a combination of eigenfeatures and eigenfaces.

A retina scan scans the retina’s blood vessel pattern. A retina scan is considered more intrusive than an iris scan.

An iris scan scans the colored portion of the eye, including all rifts, coronas, and furrows. Iris scans have a higher accuracy than any other biometric scan.

A vascular scan scans the pattern of veins in the user’s hand or face. Although this method can be a good choice because it is not very intrusive, physical injuries to the hand or face, depending on which the system uses, could cause false rejections.

Behavioral Characteristics

Behavioral systems use a biometric scanning device to measure a person’s actions. You should understand the following behavioral biometric systems:

• Signature dynamics

• Keystroke dynamics

• Voice pattern or print

Signature dynamics measures stroke speed, pen pressure, and acceleration and deceleration while the user writes his signature. Dynamic Signature Verification (DSV) analyzes signature features and specific features of the signing process.

Keystroke dynamics measures the typing pattern that a user uses when inputting a password or other pre-determined phrase. In this case, even if the correct password or phrase is entered but the entry pattern on the keyboard is different, the user will be denied access. Flight time, a term associated with keystroke dynamics, is the amount of time it takes to switch between keys. Dwell time is the amount of time you hold down a key.
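Dwell and flight times are simple to compute once key press and release events are captured. In this sketch, each event is a hypothetical (key, press_time, release_time) tuple with times in milliseconds:

```python
# Illustrative key events for typing "pas" (times in ms, fabricated).
events = [
    ("p", 0, 95),
    ("a", 180, 260),
    ("s", 330, 415),
]

# Dwell time: how long each key is held down.
dwell = [release - press for _, press, release in events]

# Flight time: gap between releasing one key and pressing the next.
flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]

assert dwell == [95, 80, 85]
assert flight == [85, 70]
```

A keystroke dynamics system compares vectors like these against the pattern captured at enrollment; the same password typed with different timing is rejected.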

Voice pattern or print measures the sound patterns of a user stating certain words. When the user attempts to authenticate, he will be asked to repeat those words in different orders. If the pattern matches, authentication is allowed.

Biometric Considerations
Image

When considering biometric technologies, security professionals should understand the following terms:

Enrollment time: The time required to obtain the sample that is used by the biometric system. Enrollment requires actions that must be repeated several times.

Feature extraction: The approach to obtaining biometric information from a collected sample of a user’s physiological or behavioral characteristics.

Accuracy: The most important characteristic of biometric systems. It is how correct the overall readings will be.

Throughput rate: The rate at which the biometric system will be able to scan characteristics and complete the analysis to permit or deny access. The acceptable rate is 6–10 subjects per minute. A single user should be able to complete the process in 5–10 seconds.

Acceptability: Describes the likelihood that users will accept and follow the system.

False rejection rate (FRR): A measurement of the percentage of valid users that will be falsely rejected by the system. This is called a Type I error.

False acceptance rate (FAR): A measurement of the percentage of invalid users that will be falsely accepted by the system. This is called a Type II error. Type II errors are more dangerous than Type I errors.

Crossover error rate (CER): The point at which FRR equals FAR. Expressed as a percentage, the CER is the most important metric when comparing biometric systems.
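The relationship between these three metrics can be illustrated with a small sketch. The FRR and FAR values below are hypothetical: as a device's sensitivity threshold rises, false rejections climb while false acceptances fall, and the CER is the point where the two curves cross.

```python
# Hypothetical error rates measured at increasing sensitivity thresholds.
thresholds = [1, 2, 3, 4, 5]
frr = [0.02, 0.04, 0.07, 0.11, 0.16]  # Type I error rate per threshold
far = [0.18, 0.11, 0.07, 0.04, 0.02]  # Type II error rate per threshold

# Pick the threshold where |FRR - FAR| is smallest; at the true crossover
# the two rates are equal, and that shared value is the CER.
cer_threshold = min(thresholds, key=lambda t: abs(frr[t - 1] - far[t - 1]))
cer = frr[cer_threshold - 1]
print(cer_threshold, cer)  # threshold 3, CER of 7%
```

A system tuned below the crossover favors convenience (more Type II errors); tuned above it, security (more Type I errors). The lower a system's CER, the better its overall accuracy.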

Figure 2-2 shows the biometric enrollment and authentication process.

Figure 2-2. Biometric Enrollment and Authentication Process

Image

When analyzing biometric systems, security professionals often refer to a Zephyr chart that illustrates the comparative strengths and weaknesses of biometric systems. However, you should also consider how effective each biometric system is and its level of user acceptance. The following is a list of the more popular biometric methods ranked by effectiveness, with the most effective being first:

1. Iris scan

2. Retina scan

3. Fingerprint

4. Hand print

5. Hand geometry

6. Voice pattern

7. Keystroke pattern

8. Signature dynamics

The following is a list of the more popular biometric methods ranked by user acceptance, with the methods that are ranked more popular by users being first:

1. Voice pattern

2. Keystroke pattern

3. Signature dynamics

4. Hand geometry

5. Hand print

6. Fingerprint

7. Iris scan

8. Retina scan

When considering FAR, FRR, and CER, smaller values are better. FAR errors are more dangerous than FRR errors. Security professionals can use the CER for comparative analysis when helping their organization decide which system to implement. For example, voice print systems usually have higher CERs than iris scans, hand geometry, or fingerprints.

Authorization Concepts

After a user is authenticated, the user must be granted rights and permissions to resources. This process is referred to as authorization. Identification and authentication are necessary steps to providing authorization. The next sections cover important components in authorization: access control policies, separation of duties, least privilege/need to know, default to no access, Kerberos and directory services, single sign-on, and security domains.

Access Control Policies

An access control policy defines the method for identifying and authenticating users and the level of access that is granted to users. Organizations should put access control policies in place to ensure that access control decisions for users are based on formal guidelines. If an access control policy is not adopted, organizations will have trouble assigning, managing, and administering access management.

Separation of Duties

Separation of duties is an important concept to keep in mind when designing an organization’s authentication and authorization policies. Separation of duties prevents fraud by distributing tasks and their associated rights and privileges among multiple users. Because no single user can complete a sensitive task alone, collusion is required for any fraudulent act to occur. A good example of separation of duties is authorizing one person to manage backup procedures and another to manage restore procedures.

Separation of duties is associated with dual controls and split knowledge. With dual controls, two or more users are authorized and required to perform certain functions. For example, a retail establishment might require two managers to open the safe. Split knowledge ensures that no single user has all the information to perform a particular task. An example of split knowledge is the military’s requiring two individuals to each enter a unique combination to authorize missile firing.
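A dual control can be sketched in a few lines of code. The names below (the authorized managers, the `open_safe` function) are illustrative only, modeling the retail-safe example: the operation proceeds only when at least two distinct authorized users have approved it.

```python
# Users authorized to participate in opening the safe (illustrative names).
AUTHORIZED = {"manager_a", "manager_b", "manager_c"}

def open_safe(approvals: set) -> bool:
    # Keep only approvals that come from authorized users.
    valid = approvals & AUTHORIZED
    # Dual control: two distinct authorized approvers are required;
    # a single approver, or an unauthorized one, is never enough.
    return len(valid) >= 2

assert not open_safe({"manager_a"})                # one approver: denied
assert open_safe({"manager_a", "manager_b"})       # two approvers: allowed
```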

Least Privilege/Need to Know

The principle of least privilege requires that a user or process is given only the minimum access privilege needed to perform a particular task. Its main purpose is to ensure that users only have access to the resources they need and are authorized to perform only the tasks they need to perform. To properly implement the least privilege principle, organizations must identify all users’ jobs and restrict users only to the identified privileges.

The need to know principle is closely associated with the concept of least privilege. Although least privilege seeks to reduce access to a minimum, the need to know principle actually defines what the minimums for each job or business function are. Excessive privileges become a problem when a user has more rights, privileges, and permissions than he needs to do his job. Excessive privileges are hard to control in large environments.

A common implementation of the least privilege and need to know principles is when a systems administrator is issued both an administrative-level account and a normal user account. In most day-to-day functions, the administrator should use his normal user account. When the systems administrator needs to perform administrative-level tasks, he should use the administrative-level account. If the administrator uses his administrative-level account while performing routine tasks, he risks compromising the security of the system and user accountability.

Organizational rules that support the principle of least privilege include the following:

• Keep the number of administrative accounts to a minimum.

• Administrators should use normal user accounts when performing routine operations.

• Permissions on tools that are likely to be used by attackers should be as restrictive as possible.

To more easily support the least privilege and need to know principles, users should be divided into groups to facilitate the confinement of information to a single group or area. This process is referred to as compartmentalization.

Default to No Access

During the authorization process, you should configure an organization’s access control mechanisms to default to no access. This means that if access has not been specifically allowed for a user or group, the user or group will not be able to access the resource. The best security approach is to start with no access and add rights based on a user’s need to know and the least privilege needed to accomplish daily tasks.
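The default-to-no-access stance can be shown as a minimal sketch. The grant table and names here are hypothetical; the key point is that any (subject, resource) pair not explicitly granted falls through to denial.

```python
# Explicit grants only; everything else is implicitly denied.
GRANTS = {
    ("alice", "payroll.xlsx"): {"read"},
    ("bob", "payroll.xlsx"): {"read", "write"},
}

def is_allowed(subject: str, resource: str, action: str) -> bool:
    # .get() with an empty default set is the "no access" stance:
    # anything not expressly allowed is refused.
    return action in GRANTS.get((subject, resource), set())

assert is_allowed("bob", "payroll.xlsx", "write")
assert not is_allowed("carol", "payroll.xlsx", "read")  # never granted: denied
```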

Directory Services

A directory service is a database designed to centralize data management regarding network subjects and objects. A typical directory contains a hierarchy that includes users, groups, systems, servers, client workstations, and so on. Because the directory service contains data about users and other network entities, it can be used by many applications that require access to that information.

Image

The three most common directory service standards are

• X.500

• Lightweight Directory Access Protocol (LDAP)

• X.400

X.500 uses the Directory Access Protocol (DAP). In X.500, the distinguished name (DN) provides the full path in the X.500 database where the entry is found. The relative distinguished name (RDN) in X.500 is an entry’s name without the full path.

Based on X.500’s DAP, LDAP is simpler than X.500. LDAP supports DN and RDN, but includes more attributes such as the common name (CN), domain component (DC), and organizational unit (OU) attributes. Using a client/server architecture, LDAP uses TCP port 389 to communicate. If advanced security is needed, LDAP over SSL communicates via TCP port 636.
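The DN, RDN, and attribute concepts fit together as shown in this illustrative sketch, which splits an example distinguished name into its parts. The DN itself is made up; a production parser would also need to handle escaped commas and other special characters.

```python
# An example LDAP distinguished name: the full path to an entry.
dn = "cn=Jane Doe,ou=Engineering,dc=example,dc=com"

# Split the DN into its relative distinguished names (attribute=value pairs).
rdns = [part.split("=", 1) for part in dn.split(",")]

# Group values by attribute type (cn, ou, dc, and so on).
attributes = {}
for attr, value in rdns:
    attributes.setdefault(attr, []).append(value)

print(rdns[0])           # ['cn', 'Jane Doe'] -- the entry's own RDN
print(attributes["dc"])  # ['example', 'com'] -- the domain components
```

Reading the DN right to left walks down the directory hierarchy (domain, then organizational unit, then the entry's common name), which is why the leftmost RDN serves as the entry's name without the full path.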

Image

Microsoft’s implementation of LDAP is Active Directory, which organizes directories into forests and trees.

X.400 is mainly for message transfer and storage. It uses elements to create a series of name/value pairs separated by semicolons. X.400 has gradually been replaced by Simple Mail Transfer Protocol (SMTP) implementations.

Single Sign-on

In a single sign-on (SSO) environment, a user enters his login credentials once and can access all resources in the network. The Open Group Security Forum has defined many objectives for a single sign-on system. Some of the objectives for the user sign-on interface and user account management include the following:

• The interface should be independent of the type of authentication information handled.

• The creation, deletion, and modification of user accounts should be supported.

• Support should be provided for a user to establish a default user profile.

• The interface and account management functions should be independent of any platform or operating system.


Note

To obtain more information about the Open Group’s Single Sign-On Standard, you should access the Web site at http://www.opengroup.org/security/sso_scope.htm.


SSO provides many advantages and disadvantages when it is implemented.

Image

Advantages of an SSO system include:

• Users are able to use stronger passwords.

• User and password administration is simplified.

• Resource access is much faster.

• User login is more efficient.

• Users only need to remember the login credentials for a single system.

Image

Disadvantages of an SSO system include:

• After a user obtains system access through the initial SSO login, the user can access all resources to which he is granted access. Although this is an advantage for the user (only one login is needed), it is also a disadvantage because a single compromised sign-on can expose all the systems that participate in the SSO network.

• If a user’s credentials are compromised, attackers will have access to all resources to which the user has access.

Although the discussion on SSO so far has been mainly on how it is used for networks and domains, SSO can also be implemented in Web-based systems. Enterprise Access Management (EAM) provides access control management for Web-based enterprise systems. Its functions include accommodation of a variety of authentication methods and role-based access control.

SSO can be implemented in Kerberos and SESAME environments.

Kerberos

Kerberos is an authentication protocol, developed by MIT’s Project Athena, that uses a client/server model. It is the default authentication protocol in recent editions of Windows Server and is also used in Apple, Sun, and Linux operating systems. Kerberos is a single sign-on system that uses symmetric key cryptography and provides confidentiality and integrity.

Kerberos assumes that messaging, cabling, and client computers are not secure and are easily accessible. In a Kerberos exchange involving a message with an authenticator, the authenticator contains the client ID and a timestamp. Because a Kerberos ticket is valid for a certain time, the timestamp ensures the validity of the request.

In a Kerberos environment, the Key Distribution Center (KDC) is the repository for all user and service secret keys. The client sends a request to the authentication server (AS), which might or might not be the KDC. The AS forwards the client credentials to the KDC. The KDC authenticates clients to other entities on a network and facilitates communication using session keys. The KDC provides security to clients or principals, which are users, network services, and software. Each principal must have an account on the KDC. The KDC issues a ticket-granting ticket (TGT) to the principal. The principal will send the TGT to the ticket-granting service (TGS) when the principal needs to connect to another entity. The TGS then transmits a ticket and session keys to the principal. The set of principals for which a single KDC is responsible is referred to as a realm.
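The core ticket idea can be modeled in a heavily simplified toy sketch. This is not the real Kerberos protocol or message format: it omits encryption entirely and uses an HMAC only to show how a KDC can sign a timestamped ticket with a long-term key it shares with a service, and how the service can verify integrity and freshness. All names and values are illustrative.

```python
import hashlib
import hmac
import json
import time

# Long-term secret shared by the KDC and the file server (illustrative).
SERVICE_KEY = b"file-server-long-term-key"
TICKET_LIFETIME = 300  # seconds of validity, enforced via the timestamp

def kdc_issue_ticket(client_id: str) -> dict:
    # The ticket names the client, carries a timestamp, and delivers a
    # session key for the client and service to use afterward.
    body = {"client": client_id, "issued": time.time(), "session_key": "s3ss10n"}
    payload = json.dumps(body, sort_keys=True).encode()
    mac = hmac.new(SERVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "mac": mac}

def service_accept(ticket: dict) -> bool:
    expected = hmac.new(SERVICE_KEY, ticket["payload"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, ticket["mac"]):
        return False  # forged or altered ticket
    body = json.loads(ticket["payload"])
    # Freshness check: stale tickets are rejected, which limits replay
    # attacks and is why synchronized clocks matter in Kerberos.
    return time.time() - body["issued"] < TICKET_LIFETIME

ticket = kdc_issue_ticket("alice")
assert service_accept(ticket)
```

Note that the client's password never crosses the network in this model; only the signed ticket does, which mirrors one of the Kerberos advantages listed below.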

Figure 2-3 shows the ticket-issuing process for Kerberos.

Figure 2-3. Kerberos Ticket-Issuing Process

Image

Some advantages of implementing Kerberos include the following:

• User passwords do NOT need to be sent over the network.

• Both the client and server authenticate each other.

• The tickets passed between the server and client are timestamped and include lifetime information.

• The Kerberos protocol uses open Internet standards and is not limited to proprietary codes or authentication mechanisms.

Some disadvantages of implementing Kerberos include:

• KDC redundancy is required if providing fault tolerance is a requirement. The KDC is a single point of failure.

• The KDC must be scalable to ensure that performance of the system does not degrade.

• Session keys on the client machines can be compromised.

• Kerberos traffic needs to be encrypted to protect the information over the network.

• All systems participating in the Kerberos process must have synchronized clocks.

• Kerberos systems are susceptible to password-guessing attacks.

SESAME

The Secure European System for Applications in a Multi-vendor Environment (SESAME) project extended Kerberos’ functionality to fix Kerberos’ weaknesses. SESAME uses both symmetric and asymmetric cryptography to protect interchanged data. SESAME uses a trusted authentication server at each host.

SESAME uses Privileged Attribute Certificates (PACs) instead of tickets. It incorporates two certificates: one for authentication and one for defining access privileges. The trusted authentication server is referred to as the Privileged Attribute Server (PAS), which performs roles similar to the KDC in Kerberos. SESAME can be integrated into a Kerberos system.

Federated Identity Management

A federated identity is a portable identity that can be used across businesses and domains. In federated identity management, each organization that joins the federation agrees to enforce a common set of policies and standards. These policies and standards define how to provision and manage user identification, authentication, and authorization. Federated identity management uses two basic models for linking organizations within the federation: cross certification and trusted third-party or bridge model.

In the cross-certification model, each organization certifies that every other organization is trusted. This trust is established when the organizations review each other’s standards. Each organization must verify and certify through due diligence that the other organizations meet or exceed standards. One disadvantage of cross certification is that the number of trust relationships that must be managed can become a problem.

In the trusted third-party or bridge model, each organization subscribes to the standards of a third party. The third party manages verification, certification, and due diligence for all organizations. This is usually the best model if an organization needs to establish federated identity management relationships with a large number of organizations.

Security Domains

A domain is a set of resources that are available to a subject over a network. Subjects that access a domain include users, processes, and applications. A security domain is a set of resources that follow the same security policies and are available to a subject. The domains are usually arranged in a hierarchical structure of parent and child domains.


Note

Do not confuse the term security domain with protection domain. Although a security domain usually encompasses a network, a protection domain resides within a single resource. A protection domain is a group of processes that share access to the same resource.


Accountability

Accountability is an organization’s ability to hold users responsible for the actions they perform. To ensure that users are accountable for their actions, organizations must implement an auditing mechanism. In addition, the organization should periodically carry out vulnerability assessments, penetration testing, and threat modeling.

Although organizations should internally complete these accountability mechanisms, they should also periodically have a third party perform these audits and tests. This is important because the outside third party can provide objectivity that internal personnel often cannot provide.

Auditing and Reporting

Auditing and reporting ensure that users are held accountable for their actions, but an auditing mechanism can only report on events that it is configured to monitor. You should monitor network events, system events, application events, user events, and keystroke activity. Keep in mind that any auditing activity will impact the performance of the system being monitored. Organizations must find a balance between auditing important events and activities and ensuring that device performance is maintained at an acceptable level. Also, organizations must ensure that any monitoring that occurs is in compliance with all applicable laws.

Image

When designing an auditing mechanism, security professionals should remember the following guidelines:

• Develop an audit log management plan that includes mechanisms to control the log size, backup processes, and periodic review plans.

• Ensure that the ability to delete an audit log is a two-man control that requires the cooperation of at least two administrators. This ensures that a single administrator is not able to delete logs that might hold incriminating evidence.

• Monitor all high-privilege accounts (including all root users and administrative-level accounts).

• Ensure that the audit trail includes who processed the transaction, when the transaction occurred (date and time), where the transaction occurred (which system), and whether the transaction was successful or not.

• Ensure that deleting the log and deleting data within the logs cannot occur unless the user has the appropriate administrative-level permissions.


Note

Scrubbing is the act of deleting incriminating data within an audit log.


Audit trails detect computer penetrations and reveal actions that identify misuse. As a security professional, you should use the audit trails to review patterns of access to individual objects. To identify abnormal patterns of behavior, you should first identify normal patterns of behavior. Also, you should establish the clipping level, which is a baseline of user errors above which violations will be recorded. For example, your organization might choose to ignore the first invalid login attempt, knowing that initial failed login attempts are often due to user error. Any invalid login after the first would be recorded because it could be a sign of an attack. A common clipping level that is used is three failed login attempts. Any failed login attempt above the limit of three would be considered malicious. In most cases, a lockout policy would lock out a user’s account after this clipping level is reached.
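The clipping-level behavior described above can be sketched as a simple counter. The threshold of three and the lockout response mirror the example in the text; the function and variable names are illustrative.

```python
CLIPPING_LEVEL = 3   # failed logins tolerated as likely user error

failures = {}        # username -> consecutive failed attempts
violations = []      # recorded security events above the clipping level

def record_failed_login(user: str) -> str:
    failures[user] = failures.get(user, 0) + 1
    if failures[user] <= CLIPPING_LEVEL:
        return "ignored"        # below the baseline: not recorded as a violation
    violations.append(user)     # above the clipping level: treat as malicious
    return "locked out"         # lockout policy engages the account

for _ in range(4):
    status = record_failed_login("alice")
print(status, violations)  # locked out ['alice']
```

In practice the counter would also reset on a successful login, and the violation record would feed the automatic notifications discussed next.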

Audit trails deter attacker attempts to bypass the protection mechanisms that are configured on a system or device. As a security professional, you should specifically configure the audit trails to track system/device rights or privileges being granted to a user and data additions, deletions, or modifications.

Finally, audit trails must be monitored, and automatic notifications should be configured. If no one monitors the audit trail, then the data recorded in the audit trail is useless. Certain actions should be configured to trigger automatic notifications. For example, you might want to configure an e-mail alert to occur after a certain number of invalid login attempts because invalid login attempts might be a sign that a brute-force password attack is occurring.

Vulnerability Assessment

A vulnerability assessment helps to identify the areas of weakness in a network. It can also help to determine asset prioritization within an organization. A comprehensive vulnerability assessment is part of the risk management process. For access control, however, security professionals should use vulnerability assessments that specifically target the access control mechanisms.

Image

Vulnerability assessments usually fall into one of three categories:

• Personnel testing: Reviews standard practices and procedures that users follow.

• Physical testing: Reviews facility and perimeter protections.

• System and network testing: Reviews systems, devices, and network topology.

The security analyst who will be performing the vulnerability assessment must understand the systems and devices that are on the network and the job they perform. Having this information ensures that the analyst can assess the vulnerabilities of the systems and devices based on the known and potential threats to the systems and devices.

After gaining knowledge regarding the systems and devices, the security analyst should examine existing controls in place and identify any threats against these controls. The security analyst can then use all the information gathered to determine which automated tools to use to analyze for vulnerabilities. After the vulnerability analysis is complete, the security analyst should verify the results to ensure that they are accurate and then report the findings to management with suggestions for remedial action. With this information in hand, the analyst should carry out threat modeling to identify the threats that could negatively affect systems and devices and the attack methods that could be used.

Penetration Testing

The goal of penetration testing, also known as ethical hacking, is to simulate an attack to identify any threats that can stem from internal or external resources that plan to exploit the vulnerabilities of a system or device.

Image

The steps in performing a penetration test are as follows:

1. Document information about the target system or device.

2. Gather information about attack methods against the target system or device. This includes performing port scans.

3. Identify the known vulnerabilities of the target system or device.

4. Execute attacks against the target system or device to gain user and privileged access.

5. Document the results of the penetration test, and report the findings to management with suggestions for remedial action.

Both internal and external tests should be performed. Internal tests occur from within the network, whereas external tests originate outside the network by targeting the servers and devices that are publicly visible.

Image

Strategies for penetration testing are based on the testing objectives as defined by the organization. The strategies that you should be familiar with include the following:

• Blind test: The testing team is provided with limited knowledge of the network systems and devices using publicly available information. The organization’s security team knows that an attack is coming. This test requires more effort by the testing team, and the testing team must simulate an actual attack.

• Double-blind test: This test is like a blind test except the organization’s security team does NOT know that an attack is coming. Only a few individuals at the organization know about the attack, and they do not share this information with the security team. This test usually requires equal effort for both the testing team and the organization’s security team.

• Target test: Both the testing team and the organization’s security team are given maximum information about the network and the type of test that will occur. This is the easiest test to complete but will not provide a full picture of the organization’s security.

Image

Penetration testing is also divided into categories based on the amount of information to be provided. The main categories that you should be familiar with include the following:

• Zero-knowledge test: The testing team is provided with no knowledge regarding the organization’s network. The testing team can use any means at its disposal to obtain information about the organization’s network. This is also referred to as closed or black box testing.

• Partial-knowledge test: The testing team is provided with public knowledge regarding the organization’s network. Boundaries may be set for this type of test.

• Full-knowledge test: The testing team is provided with all available knowledge regarding the organization’s network. This test is focused more on what attacks can be carried out.

Access Control Categories

Image

You implement access controls as a countermeasure to identified vulnerabilities. Access control mechanisms that you can use are divided into seven main categories:

Compensative

Corrective

Detective

Deterrent

Directive

Preventive

Recovery

Any access control that you implement will fit into one or more of these categories.


Note

Access controls are also defined by the type of protection they provide. Access control types are discussed in the next section.


Compensative

Compensative controls are in place to substitute for a primary access control and mainly act as a mitigation to risks. Using compensative controls, you can reduce the risk to a more manageable level. Examples of compensative controls include requiring two authorized signatures to release sensitive or confidential information and requiring two keys owned by different personnel to open a safety deposit box.

Corrective

Corrective controls are in place to reduce the effect of an attack or other undesirable event. Corrective controls fix or restore the entity that was attacked. Examples of corrective controls include installing fire extinguishers, isolating or terminating a connection, implementing new firewall rules, and using server images to restore to a previous state.

Detective

Detective controls are in place to detect an attack while it is occurring to alert appropriate personnel. Examples of detective controls include motion detectors, intrusion detection systems (IDSs), logs, guards, investigations, and job rotation.

Deterrent

Deterrent controls deter or discourage an attacker. With deterrent controls in place, attacks can be discovered early in the process. Deterrent controls often trigger preventive and corrective controls. Examples of deterrent controls include user identification and authentication, fences, lighting, and organizational security policies, such as a non-disclosure agreement (NDA).

Directive

Directive controls specify acceptable practice within an organization. They are in place to formalize an organization’s security directive, mainly to its employees. The most popular directive control is an acceptable use policy (AUP) that lists proper (and often examples of improper) procedures and behaviors that personnel must follow. Any organizational security policies or procedures usually fall into this access control category. You should keep in mind that directive controls are only effective if there is a stated consequence for not following the organization’s directions.

Preventive

Preventive controls prevent an attack from occurring. Examples of preventive controls include locks, badges, biometric systems, encryption, intrusion prevention systems (IPSs), antivirus software, personnel security, security guards, passwords, and security awareness training.

Recovery

Recovery controls recover a system or device after an attack has occurred. The primary goal of recovery controls is restoring resources. Examples of recovery controls include disaster recovery plans, data backups, and offsite facilities.

Access Control Types

Image

Whereas the access control categories classify access controls based on where they fit in time, access control types divide access controls based on their method of implementation. The three types of access controls are

Administrative (management) controls

Logical (technical) controls

Physical controls

In any organization where defense in depth is a priority, access control requires the use of all three types of access controls. Even if you implement the strictest physical and administrative controls, you cannot fully protect the environment without logical controls.

Administrative (Management) Controls

Administrative or management controls are implemented to administer the organization’s assets and personnel and include security policies, procedures, standards, baselines, and guidelines that are established by management. These controls are commonly referred to as soft controls. Specific examples are personnel controls, data classification, data labeling, security awareness training, and supervision.

Security awareness training is a very important administrative control. Its purpose is to improve the organization’s attitude about safeguarding data. The benefits of security awareness training include reduction in the number and severity of errors and omissions, better understanding of information value, and better administrator recognition of unauthorized intrusion attempts. A cost-effective way to ensure that employees take security awareness seriously is to create an award or recognition program.

Image

Table 2-1 lists many administrative controls and includes in which access control categories the controls fit.

Table 2-1. Administrative (Management) Controls

Image
Logical (Technical) Controls

Logical or technical controls are software or hardware components used to restrict access. Specific examples of logical controls include firewalls, IDSs, IPSs, encryption, authentication systems, protocols, auditing and monitoring, biometrics, smart cards, and passwords.

Although auditing and monitoring are logical controls and are often listed together, they are actually two different controls. Auditing is a one-time or periodic event to evaluate security. Monitoring is an ongoing activity that examines either the system or users.

Image

Table 2-2 lists many logical controls and includes in which access control categories the controls fit.

Table 2-2. Logical (Technical) Controls

Image
Physical Controls

Physical controls are implemented to protect an organization’s facilities and personnel. Personnel concerns should take priority over all other concerns. Specific examples of physical controls include perimeter security, badges, swipe cards, guards, dogs, man traps, biometrics, and cabling.

Image

Table 2-3 lists many physical controls and includes in which access control categories the controls fit.

Table 2-3. Physical Controls

Image

Access Control Models

An access control model is a formal description of an organization’s security policy. Access control models are implemented to simplify access control administration by grouping objects and subjects. Subjects are entities that request access to an object or data within an object. Users, programs, and processes are subjects. Objects are entities that contain information or functionality. Computers, databases, files, programs, directories, and fields are objects. A secure access control model must ensure that information from a more secure object cannot flow to a less secure subject.

Image

The access control models and concepts that you need to understand include the following:

Discretionary access control

Mandatory access control

Role-based access control

Rule-based access control

Content-dependent versus context-dependent access control

Access control matrix

Capabilities table

Access control list

Discretionary Access Control

In discretionary access control (DAC), the owner of the object specifies which subjects can access the resource. DAC is typically used in local, dynamic situations. The access is based on the subject’s identity, profile, or role. DAC is considered to be a need-to-know control.

DAC can be an administrative burden because the data custodian or owner grants access privileges to the users. Under DAC, a subject’s rights must be terminated when the subject leaves the organization. Identity-based access control is a subset of DAC and is based on user identity or group membership.
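The owner-driven grant model described above can be sketched in a few lines of Python. This is a minimal illustration, not a real implementation; the class, subject names, and rights are all hypothetical.

```python
# Minimal DAC sketch: each object records its owner, and only the owner
# may grant or revoke access for other subjects (all names hypothetical).

class DacObject:
    def __init__(self, name, owner):
        self.name = name
        self.owner = owner
        self.grants = {owner: {"read", "write"}}  # owner keeps full rights

    def grant(self, requester, subject, rights):
        # Discretionary: only the owner decides who else gets access.
        if requester != self.owner:
            raise PermissionError("only the owner may grant access")
        self.grants.setdefault(subject, set()).update(rights)

    def revoke_all(self, subject):
        # Invoked when a subject leaves the organization.
        self.grants.pop(subject, None)

    def can(self, subject, right):
        return right in self.grants.get(subject, set())

doc = DacObject("payroll.xlsx", owner="alice")
doc.grant("alice", "bob", {"read"})
```

Note how `revoke_all` models the requirement that a departing subject's rights must be terminated: because grants are scattered across individual owners, finding and removing them all is exactly the administrative burden DAC creates.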

Non-discretionary access control is the opposite of DAC. In non-discretionary access control, access controls are configured by a security administrator or other authority. The central authority decides which subjects have access to objects based on the organization’s policy. In non-discretionary access control, the system compares the subject’s identity with each object’s access control list.

Mandatory Access Control

In mandatory access control (MAC), subject authorization is based on security labels. MAC is often described as prohibitive because it is based on a security label system. Under MAC, all that is not expressly permitted is forbidden. Only administrators can change the category of a resource.

MAC is more secure than DAC, but DAC is more flexible and scalable. Because of the importance of security in MAC, labeling is required. Data classification reflects the data’s sensitivity. In a MAC system, a clearance is a subject’s privilege. Each subject and object is given a security or sensitivity label. The security labels are hierarchical. For commercial organizations, the levels of security labels could be confidential, proprietary, corporate, sensitive, and public. For government or military institutions, the levels of security labels could be top secret, secret, confidential, and unclassified.

In MAC, the system makes access decisions when it compares the subject’s clearance level with the object’s security label.
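The clearance-versus-label comparison can be sketched as a simple dominance check over the hierarchical labels. This is an illustrative sketch only, using the government label hierarchy listed above.

```python
# Minimal MAC sketch: access is decided by comparing the subject's
# clearance with the object's label in a fixed hierarchy.

LEVELS = ["unclassified", "confidential", "secret", "top secret"]

def rank(label):
    return LEVELS.index(label)

def mac_read_allowed(clearance, object_label):
    # A subject may read an object only if its clearance dominates
    # (is at least as high as) the object's sensitivity label.
    return rank(clearance) >= rank(object_label)
```

A subject cleared to secret can read a confidential object, but a subject cleared only to confidential is denied a top secret object: the system, not the object owner, makes the decision.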

Role-based Access Control

In role-based access control (RBAC), each subject is assigned to one or more roles. Roles are hierarchical. Access control is defined based on the roles. RBAC can be used to easily enforce minimum privileges for subjects. An example of RBAC is implementing one access control policy for bank tellers and another policy for loan officers.

RBAC is not as secure as the previously mentioned access control models because security is based on roles. RBAC usually has a much lower cost to implement than the other models and is popular in commercial applications. It is an excellent choice for organizations with high employee turnover. RBAC can effectively replace DAC and MAC because it allows you to specify and enforce enterprise security policies in a way that maps to the organization’s structure.

RBAC is managed in four ways. In non-RBAC, no roles are used. In limited RBAC, users are mapped to single application roles, but some applications do not use RBAC and require identity-based access. In hybrid RBAC, each user is mapped to a single role, which gives them access to multiple systems, but each user may be mapped to other roles that have access to single systems. In full RBAC, users are mapped to a single role as defined by the organization’s security policy, and access to the systems is managed through the organizational roles.
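The bank teller/loan officer example above can be sketched as follows; permissions attach to roles, never directly to users. Role and permission names are hypothetical.

```python
# Minimal RBAC sketch: subjects acquire permissions only through
# role membership (roles and rights are illustrative).

ROLE_PERMISSIONS = {
    "teller": {"view_balance", "process_deposit"},
    "loan_officer": {"view_balance", "approve_loan"},
}

USER_ROLES = {
    "dana": ["teller"],
    "evan": ["teller", "loan_officer"],  # a subject may hold several roles
}

def rbac_allowed(user, permission):
    # Check every role the user holds; access is granted if any role
    # carries the requested permission.
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, []))
```

Changing what all tellers may do requires editing one role entry rather than every teller's individual rights, which is why RBAC suits organizations with high turnover.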

Rule-based Access Control

Rule-based access control facilitates frequent changes to data permissions and is defined in RFC 2828. Using this method, a security policy is based on global rules imposed for all users. Profiles are used to control access. Many routers and firewalls use this type of access control and define which packet types are allowed on a network. Rules can be written allowing or denying access based on packet type, port number used, MAC address, and other parameters.

Content-dependent versus Context-dependent

Content-dependent access control makes access decisions based on the data contained within the object. With this access control, the data that a user sees might change based on the policy and access rules that are applied.

Context-dependent access control is based on subject or object attributes or environmental characteristics, such as location or time of day. An example is a security policy that permits a user to log in only from a particular workstation during certain hours of the day.

Security experts consider a constrained user interface as another method of access control. An example of a constrained user interface is a shell, which is a software interface to an operating system that implements access control by limiting the system commands that are available. Another example is database views that are filtered based on user or system criteria. Constrained user interfaces can be content or context dependent based on how the administrator constrains the interface.

Access Control Matrix

An access control matrix is a table that consists of a list of subjects, a list of objects, and a list of the actions that a subject can take upon each object. The rows in the matrix are the subjects, and the columns in the matrix are the objects. Common implementations of an access control matrix include a capabilities table and an access control list (ACL).

Capabilities Table

A capability corresponds to a subject’s row from an access control matrix. A capability table lists the access rights that a particular subject has to objects. A capability table is about the subject.

Access Control List (ACL)

An ACL corresponds to an object’s column from an access control matrix. An ACL lists all the access rights that subjects have to a particular object. An ACL is about the object.

Figure 2-4 shows an access control matrix and how a capability and ACL are part of it.

Figure 2-4. Access Control Matrix

Image
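The relationship among the matrix, the capability table, and the ACL can be sketched directly: a capability is one row of the matrix, and an ACL is one column. Subject, object, and right names here are hypothetical.

```python
# Sketch of an access control matrix: rows are subjects, columns are
# objects. A capability is a subject's row; an ACL is an object's column.

MATRIX = {
    ("alice", "file1"): {"read", "write"},
    ("alice", "file2"): {"read"},
    ("bob",   "file1"): {"read"},
}

def capability(subject):
    # The subject's row: everything this subject may do, per object.
    return {obj: rights for (s, obj), rights in MATRIX.items() if s == subject}

def acl(obj):
    # The object's column: every subject's rights on this object.
    return {s: rights for (s, o), rights in MATRIX.items() if o == obj}
```

The same underlying matrix answers both questions: "What can Alice do?" (capability) and "Who can touch file1?" (ACL).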

Access Control Administration

After deciding on which access control model to use, an organization must decide how the system will be administered. Access control administration occurs in two basic manners: centralized and decentralized.

Centralized

In centralized access control, a central department or group of personnel oversees access for all organizational resources. This administration method ensures that user access is controlled in a consistent manner across the entire enterprise. However, this method can be slow because all access requests are processed by the central entity.

Decentralized

In decentralized access control, personnel closest to the resources, such as department managers and data owners, oversee the access control for individual resources. This administration method ensures that those who know the data control the access rights to it. However, this method can be hard to manage because not just one entity is responsible for configuring access rights, thereby losing the uniformity and fairness of security.

Provisioning Life Cycle

Organizations should create a formal process for creating, changing, and removing users, which is the provisioning life cycle. This process includes user approval, user creation, user creation standards, and authorization. Users should sign a written statement that explains the access conditions, including user responsibilities. Finally, access modification and removal procedures should be documented.

User provision policies should be integrated as part of human resource management. Human resource policies should include procedures whereby the human resource department formally requests the creation or deletion of a user account when new personnel are hired or terminated.

Access Control Monitoring

Access control monitoring is the process of tracking resource access attempts. The primary goals of access control monitoring are accountability and response. The “Accountability” section earlier in this chapter covers auditing and reporting, vulnerability assessments, penetration testing, and threat modeling. This section covers two additional methods of monitoring: intrusion detection systems (IDSs) and intrusion prevention systems (IPSs).

IDS

An IDS is a system responsible for detecting unauthorized access or attacks against systems and networks. It can verify, itemize, and characterize threats from outside and inside the network. Most IDSs are programmed to react certain ways in specific situations. Event notification and alerts are crucial to an IDS. They inform administrators and security professionals when and where attacks are detected.

The most common way to classify an IDS is based on its information source: network based or host based.

A network-based IDS is the most common IDS and monitors network traffic on a local network segment. To monitor traffic on the network segment, the network interface card (NIC) must be operating in promiscuous mode. A network-based IDS (NIDS) can monitor only network traffic. It cannot monitor any internal activity that occurs within a system, such as an attack against a system that is carried out by logging on at the system’s local terminal. An NIDS is affected by a switched network because generally an NIDS monitors only a single network segment.


Note

Network hardware, such as hubs and switches, is covered in detail in Chapter 3, “Telecommunications and Network Security.”


A host-based IDS monitors traffic on a single system. Its primary responsibility is to protect the system on which it is installed. A host-based IDS (HIDS) uses information from the operating system audit trails and system logs. The detection capabilities of an HIDS are limited by how complete the audit logs and system logs are.

IDS implementations are further divided into the following categories:

Signature-based: This type of IDS analyzes traffic and compares it to attack or state patterns, called signatures, that reside within the IDS database. It is also referred to as a misuse-detection system. Although this type of IDS is very popular, it can recognize only attacks that match its database and is only as effective as the signatures provided. Frequent updates are necessary. The two main types of signature-based IDSs are

Pattern-matching: The IDS compares traffic to a database of attack patterns. The IDS carries out specific steps when it detects traffic that matches an attack pattern.

Stateful-matching: The IDS records the initial operating system state. Any changes to the system state that specifically violate the defined rules result in an alert or notification being sent.

Anomaly-based: This type of IDS analyzes traffic and compares it to normal traffic patterns to determine whether the traffic is a threat. It is also referred to as a behavior-based or profile-based system. The problem with this type of system is that any traffic outside of expected norms is reported, resulting in more false positives than signature-based systems produce. The three main types of anomaly-based IDSs are

Statistical anomaly-based: The IDS samples the live environment to record activities. The longer the IDS is in operation, the more accurate the profile it builds. However, developing a profile that will not produce a large number of false positives can be difficult and time consuming. Thresholds for activity deviations are important in this IDS. Too low a threshold results in false positives, whereas too high a threshold results in false negatives.

Protocol anomaly-based: The IDS has knowledge of the protocols that it will monitor. A profile of normal usage is built and compared to activity.

Traffic anomaly-based: The IDS tracks traffic pattern changes. All future traffic patterns are compared to the sample. Changing the threshold will reduce the number of false positives or negatives. This type of filter is excellent for detecting unknown attacks, but user activity might not be static enough to effectively implement this system.

Rule- or heuristic-based: This type of IDS is an expert system that uses a knowledge base, an inference engine, and rule-based programming. The knowledge is configured as rules. The data and traffic are analyzed, and the rules are applied to the analyzed traffic. The inference engine uses its intelligent software to “learn.” If the characteristics of an attack are met, alerts or notifications trigger. This is often referred to as an IF/THEN or expert system.
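The threshold trade-off that statistical anomaly detection depends on can be sketched as follows. This is a toy illustration of the idea, not an IDS implementation; the sample values are invented.

```python
# Minimal statistical anomaly-detection sketch: a profile of "normal"
# activity is built from observed samples, and deviations beyond a
# threshold raise an alert. Lowering the threshold increases false
# positives; raising it increases false negatives.

import statistics

def build_profile(samples):
    # The longer the observation period, the more accurate this profile.
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, profile, threshold=3.0):
    mean, stdev = profile
    # Flag activity more than `threshold` standard deviations from normal.
    return abs(value - mean) > threshold * stdev

profile = build_profile([100, 102, 98, 101, 99, 100, 103, 97])
```

With this profile (mean 100, sample standard deviation 2), a reading of 101 is normal while a reading of 500 is flagged.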

An application-based IDS is a specialized IDS that analyzes transaction log files for a single application. This type of IDS is usually provided as part of the application or can be purchased as an add-on.

Tools that can complement an IDS include vulnerability analysis systems, honeypots, and padded cells. Honeypots are systems that are configured with reduced security to entice attackers so that administrators can learn about attack techniques. Padded cells are special hosts to which an attacker is transferred during an attack.

IPS

An IPS is a system responsible for preventing attacks. When an attack begins, an IPS takes actions to prevent and contain the attack. An IPS can be network or host based, like an IDS. Although an IPS can be signature or anomaly based, it can also use a rate-based metric that analyzes the volume of traffic as well as the type of traffic.

In most cases, implementing an IPS is more costly than an IDS because of the added security of preventing attacks versus simply detecting attacks. In addition, running an IPS is more of an overall performance load than running an IDS.

Access Control Threats

Access control threats directly impact the confidentiality, integrity, and availability of organizational assets. The purpose of most access control threats is to cause harm to an organization. Because harm is easier to inflict from inside the network, outsiders usually begin by attacking any access controls that are in place.

Image

Access control threats that you should understand include:

Password threats

Social engineering threats

DoS/DDoS

Buffer overflow

Mobile code

Malicious software

Spoofing

Sniffing and eavesdropping

Emanating

Backdoor/trapdoor

Password Threats

A password threat is any attack that attempts to discover user passwords. The two most popular password threats are dictionary attacks and brute-force attacks.

The best countermeasures against password threats are to implement complex password policies, require users to change passwords on a regular basis, employ account lockout policies, encrypt password files, and use password-cracking tools to discover weak passwords.

Dictionary Attack

A dictionary attack occurs when attackers use a dictionary of common words to discover passwords. An automated program uses the hash of the dictionary word and compares this hash value to entries in the system password file. Although the program comes with a dictionary, attackers also use extra dictionaries that are found on the Internet.

You should implement a security rule that says that a password must NOT be a word found in the dictionary to protect against these attacks.
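The hash-and-compare process described above can be sketched as follows. This is a simplified illustration, assuming unsalted SHA-256 hashes; real password files add salts and slower hash functions, and the wordlist here is invented.

```python
# Minimal dictionary-attack sketch: hash each candidate word and compare
# the result to a stolen password hash (unsalted SHA-256 for simplicity).

import hashlib

def hash_password(password):
    return hashlib.sha256(password.encode()).hexdigest()

def dictionary_attack(target_hash, wordlist):
    for word in wordlist:
        if hash_password(word) == target_hash:
            return word  # password recovered
    return None  # password was not in the dictionary

# A weak password that happens to be a dictionary word:
stolen = hash_password("sunshine")
```

The attack succeeds only when the password appears in the wordlist, which is exactly why the "no dictionary words" rule is an effective countermeasure.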

Brute-Force Attack

Brute-force attacks are more difficult to carry out because they work through all possible combinations of numbers and characters. A brute-force attack is also referred to as an exhaustive attack. It carries out password searches until a correct password is found. These attacks are also very time consuming.
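The exhaustive search can be sketched with `itertools.product`. This toy version limits itself to lowercase letters and short lengths so it runs quickly; a real search space of mixed characters and longer passwords grows exponentially, which is the source of the time cost noted above.

```python
# Minimal brute-force (exhaustive) sketch: try every combination of the
# allowed characters up to a maximum length until the check succeeds.

import itertools
import string

def brute_force(check, alphabet=string.ascii_lowercase, max_len=4):
    for length in range(1, max_len + 1):
        # 26**length candidates at each length: exponential growth.
        for combo in itertools.product(alphabet, repeat=length):
            candidate = "".join(combo)
            if check(candidate):
                return candidate
    return None
```

Every added character multiplies the work by the alphabet size, so long, complex passwords push the search time beyond practicality.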

Social Engineering Threats

Social engineering attacks occur when attackers use believable language and user gullibility to obtain user credentials or some other confidential information. Social engineering threats that you should understand include phishing/pharming, shoulder surfing, identity theft, and dumpster diving.

The best countermeasure against social engineering threats is to provide user security awareness training. This training should be required and must occur on a regular basis because social engineering techniques evolve constantly.

Phishing/Pharming

Phishing is a social engineering attack in which attackers try to learn personal information, including credit card information and financial data. This type of attack is usually carried out by implementing a fake Web site that very closely resembles a legitimate Web site. Users enter data, including credentials, on the fake Web site, allowing the attackers to capture any information entered. Spear phishing is a phishing attack carried out against a specific target by learning about the target’s habits and likes. Spear phishing attacks take longer to carry out than phishing attacks because of the information that must be gathered.

Pharming is similar to phishing, but pharming actually pollutes the contents of a computer’s DNS cache so that requests to a legitimate site are actually routed to an alternate site.

Caution users against using any links embedded in e-mail messages, even if the message appears to have come from a legitimate entity. Users should also review the address bar any time they access a site where their personal information is required to ensure that the site is correct and that SSL is being used, which is indicated by an HTTPS designation at the beginning of the URL address.

Shoulder Surfing

Shoulder surfing occurs when an attacker watches when a user enters login or other confidential data. Encourage users to always be aware of who is observing their actions. Implementing privacy screens helps to ensure that data entry cannot be recorded.

Identity Theft

Identity theft occurs when someone obtains personal information, including driver’s license number, bank account number, and Social Security number, and uses that information to assume the identity of the individual whose information was stolen. After the identity is assumed, the attack can go in any direction. In most cases, attackers open financial accounts in the victim’s name. Attackers also can gain access to the victim’s existing accounts.

Dumpster Diving

Dumpster diving occurs when attackers examine garbage contents to obtain confidential information. This includes personnel information, account login information, network diagrams, and organizational financial data.

Organizations should implement policies for shredding documents that contain this information.

DoS/DDoS

A denial-of-service (DoS) attack occurs when attackers flood a device with enough requests to degrade the performance of the targeted device. Some popular DoS attacks include SYN floods and teardrop attacks.

A distributed DoS (DDoS) attack is a DoS attack that is carried out from multiple attack locations. Vulnerable devices are infected with software agents, called zombies, turning the compromised devices into a botnet, which then carries out the attack. Because of the distributed nature of the attack, identifying all the attacking devices is virtually impossible. The botnet also helps to hide the original source of the attack.


Note

More information about this type of attack is in Chapter 3, “Telecommunications and Network Security”.


Buffer Overflow

Buffers are portions of system memory that are used to store information. A buffer overflow occurs when the amount of data that is submitted to the application is larger than the buffer can handle. Typically, this type of attack is possible because of poorly written application or operating system code. This can result in an injection of malicious code.

To protect against this issue, organizations should ensure that all operating systems and applications are updated with the latest service packs and patches. In addition, programmers should properly test all applications to check for overflow conditions. Finally, programmers should use input validation to ensure that the data submitted is not too large for the buffer.
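The input-validation countermeasure can be sketched as a length check performed before any copy into a fixed-size buffer. Python itself is not vulnerable to classic overflows; this sketch just illustrates the guard that, in C, would protect calls such as `strcpy` or `memcpy`. The buffer size is arbitrary.

```python
# Sketch of the input-validation countermeasure: reject input that
# exceeds the buffer's capacity before copying it. An unchecked copy
# is what enables a buffer overflow.

BUFFER_SIZE = 64

def safe_copy(buffer, data):
    # Validate length first, then copy.
    if len(data) > len(buffer):
        raise ValueError("input larger than buffer; rejecting")
    buffer[:len(data)] = data
    return buffer

buf = bytearray(BUFFER_SIZE)
```

Rejecting oversized input at the boundary means malicious payloads never reach memory beyond the buffer, regardless of how the rest of the application behaves.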

Mobile Code

Mobile code is any software that is transmitted across a network to be executed on a local system. Examples of mobile code include Java applets, JavaScript code, and ActiveX controls. Mobile code platforms include security controls: Java implements sandboxes, and ActiveX uses digital code signatures. Malicious mobile code can be used to bypass access controls.

Organizations should ensure that users understand the security concerns of malicious mobile code. Users should only download mobile code from legitimate sites and vendors.

Malicious Software
Image

Malicious software, also called malware, is any software that is designed to perform malicious acts.

The following are the four classes of malware you should understand:

Virus: Any malware that attaches itself to another application to replicate or distribute itself.

Worm: Any malware that replicates itself, meaning that it does not need another application or human interaction to propagate.

Trojan horse: Any malware that disguises itself as a needed application while carrying out malicious actions.

Spyware: Any malware that collects private user data, including browsing history or keyboard input.

The best defense against malicious software is to implement anti-virus and anti-malware software. Today most vendors package these two types of software in the same package. Keeping anti-virus and anti-malware software up to date is vital. This includes ensuring that the latest virus and malware definitions are installed.

Spoofing

Spoofing, also referred to as masquerading, occurs when communication from an attacker appears to come from trusted sources. Spoofing examples include IP spoofing and hyperlink spoofing. The goal of this type of attack is to obtain access to credentials or other personal information.

A man-in-the-middle attack uses spoofing as part of the attack. Some security professionals consider phishing attacks as a type of spoofing attack.

Sniffing and Eavesdropping

Sniffing, also referred to as eavesdropping, occurs when an attacker inserts a device or software into the communication medium that collects all the information transmitted over the medium. Network sniffers are used by both legitimate security professionals and attackers.

Organizations should monitor and limit the use of sniffers. To protect against their use, you should encrypt all traffic on the network.

Emanating

Emanations are electromagnetic signals that are emitted by an electronic device. Attackers can target certain devices or transmission mediums to eavesdrop on communication without having physical access to the device or medium.

The TEMPEST program, initiated by the U.S. and UK, researches ways to limit emanations and standardizes the technologies used. Any equipment that meets TEMPEST standards suppresses signal emanations using shielding material. Devices that meet TEMPEST standards usually implement an outer barrier or coating, called a Faraday cage or Faraday shield. TEMPEST devices are most often used in government, military, or law enforcement settings.

Backdoor/Trapdoor

A backdoor or trapdoor is a mechanism implemented in many devices or applications that gives whoever uses it unlimited access to the device or application. Privileged backdoor accounts are the most common type of backdoor that you will see today.

Most established vendors no longer release devices or applications with this security issue. You should be aware of any known backdoors in the devices or applications you manage.

Exam Preparation Tasks

Review All Key Topics

Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 2-4 lists a reference of these key topics and the page numbers on which each is found.

Image

Table 2-4. Key Topics for Chapter 2

Image
Complete the Tables and Lists from Memory

Print a copy of the CD Appendix A, “Memory Tables,” or at least the section for this chapter, and complete the tables and lists from memory. The CD Appendix B, “Memory Tables Answer Key,” includes completed tables and lists to check your work.

Define Key Terms

Define the following key terms from this chapter and check your answers in the glossary:

access control

confidentiality

integrity

availability

default stance

identification

authentication

knowledge factors

ownership factors

synchronous token

asynchronous token

memory card

smart card

characteristic factors

physiological characteristics

fingerprint scan

finger scan

hand geometry

hand topography

palm or hand scan

facial scan

retinal scan

iris scan

vascular scan

behavioral characteristics

signature dynamics

keystroke dynamics

voice pattern or print

biometric enrollment time

biometric feature extraction

biometric accuracy

biometric throughput rate

biometric acceptability

false rejection rate

false acceptance rate

crossover error rate

authorization

access control policy

separation of duties

least privilege

need to know

single sign-on

Kerberos

SESAME

federated identity

cross-certification federated identity model

trusted third-party federated identity model

bridge federated identity model

security domain

accountability

vulnerability assessment

penetration testing

compensative controls

corrective controls

detective controls

deterrent controls

directive controls

preventive controls

recovery controls

administrative controls

logical controls

physical controls

discretionary access control

mandatory access control

role-based access control

rule-based access control

content-dependent

context-dependent

access control matrix

capabilities table

access control list

centralized access control

decentralized access control

IDS

network-based IDS

host-based IDS

phishing

spear phishing

pharming

shoulder surfing

dumpster diving

backdoor/trapdoor

Answer Review Questions

1. Which security principle is the opposite of disclosure?

a. integrity

b. availability

c. confidentiality

d. authorization

2. Which of the following is NOT an example of a knowledge authentication factor?

a. password

b. mother’s maiden name

c. city of birth

d. smart card

3. Which of the following statements about memory cards and smart cards is false?

a. A memory card is a swipe card that contains user authentication information.

b. Memory cards are also known as integrated circuit cards (ICCs).

c. Smart cards contain memory and an embedded chip.

d. Smart card systems are more reliable than memory card systems.

4. Which biometric method is MOST effective?

a. Iris scan

b. Retina scan

c. Fingerprint

d. Hand print

5. What is a Type I error in a biometric system?

a. crossover error rate (CER)

b. false rejection rate (FRR)

c. false acceptance rate (FAR)

d. throughput rate

6. Which penetration test provides the testing team with limited knowledge of the network systems and devices using publicly available information and the organization’s security team knows an attack is coming?

a. target test

b. physical test

c. blind test

d. double-blind test

7. Which access control type reduces the effect of an attack or other undesirable event?

a. compensative control

b. preventive control

c. detective control

d. corrective control

8. Which of the following controls is an administrative control?

a. security policy

b. CCTV

c. data backups

d. locks

9. Which access control model is most often used by routers and firewalls to control access to networks?

a. discretionary access control

b. mandatory access control

c. role-based access control

d. rule-based access control

10. Which threat is NOT considered a social engineering threat?

a. phishing

b. pharming

c. DoS attack

d. dumpster diving

Answers and Explanations

1. c. The opposite of disclosure is confidentiality. The opposite of corruption is integrity. The opposite of destruction is availability. The opposite of disapproval is authorization.

2. d. Knowledge factors are something a person knows, including passwords, mother’s maiden name, city of birth, and date of birth. Ownership factors are something a person has, including a smart card.

3. b. Memory cards are NOT also known as integrated circuit cards (ICCs). Smart cards are also known as ICCs.

4. a. Iris scans are considered more effective than retina scans, fingerprints, and handprints.

5. b. A Type I error in a biometric system is false rejection rate (FRR). A Type II error in a biometric system is false acceptance rate (FAR). Crossover error rate (CER) is the point at which FRR equals FAR. Throughput rate is the rate at which users are authenticated.

6. c. A blind test provides the testing team with limited knowledge of the network systems and devices using publicly available information and the organization’s security team knows an attack is coming.

A target test occurs when the testing team and the organization’s security team are given maximum information about the network and the type of test that will occur.

A physical test is not a type of penetration test. It is a type of vulnerability assessment.

A double-blind test is like a blind test except the organization’s security team does not know that an attack is coming.

7. d. A corrective control reduces the effect of an attack or other undesirable event.

A compensative control substitutes for a primary access control and mainly acts as mitigation to risks. A preventive control prevents an attack from occurring. A detective control detects an attack while it is occurring to alert appropriate personnel.

8. a. A security policy is an administrative control. CCTV and locks are physical controls. Data backups are a technical control.

9. d. Rule-based access control is most often used by routers and firewalls to control access to networks. The other three types of access control models are not usually implemented by routers and firewalls.

10. c. A denial-of-service (DoS) attack is not considered a social engineering threat. The other three options are considered to be social engineering threats.
