Chapter 8. Identity and Access Management


Terms you’ll need to understand:

Image Identification and authentication of people and devices

Image Mandatory access control (MAC)

Image Discretionary access control (DAC)

Image Role-based access control (RBAC)

Image Single sign-on (SSO)

Image Crossover error rate (CER)

Image Zephyr chart

Topics you’ll need to master:

Image Identity and Access Management

Image Understand the methods of authentication for people and devices

Image Describe the differences between discretionary, mandatory, and role-based access control

Image Know the advantages of single sign-on technologies

Image Be able to differentiate authorization types


Introduction

Identity and access management is a key component of security because it helps to keep unauthorized users out and keeps authorized users honest; it is critical for accountability and auditing. It is part of what is known as the triple-A process of authentication, authorization, and accountability. Take note: on the exam you may see "accountability" and "auditing" used synonymously; in this context they refer to the same process.

Authentication systems based on passwords have been used for many years because they are cheaper and easier to integrate. Today, many more organizations are using tokens and biometrics. Some organizations even enforce two-factor authentication, whereas other entities are moving to federated authentication.

Security administrators have more to worry about than just authentication. Many employees now have multiple accounts to keep up with. Luckily, there is a way to consolidate these accounts: single sign-on solutions. A single sign-on solution allows users the ability to authenticate only once to access all needed resources and systems. Authentication systems can be centralized, decentralized, or hybrid. This chapter introduces each concept.


Tip

Single sign-on (SSO) is NOT the same as password synchronization. Password synchronization typically uses a static password that is shared across multiple systems or programs, whereas in an SSO solution the user must authenticate to an authentication server, and the authentication server then provides the appropriate access control privileges for the user.


Although knowing who to authenticate serves as a basis of access control, there also exists the issue of authorization. Authorization defines what access the user has and what abilities are present. Authorization is a core component of access control. Once a user has been authenticated to a domain, server, application, or system, what is that user authorized to do? As an example, administrators are typically authorized to perform many more functions than an average user. Controlling access is the first line of defense in allowing authorized users access while keeping unauthorized users out.

This chapter examines authorization in the context of discretionary, nondiscretionary, and role-based access control. Authorization should be implemented to allow the minimum access required for a user to accomplish his or her task. This approach helps control access, minimizes the damage that a single employee can inflict on the organization, and mitigates the risks associated with access control. The principle that employees should be provided only the amount of control and access that they need to accomplish their job duties, and nothing more, is referred to as the principle of least privilege.

If something does go wrong, a method will be required to determine who has done what. That is the process of audit and accountability. In an audit, those individuals tasked with enforcement of network security review records to determine what was done, and by whom. Accountability means that malicious and repetitive mistakes can be tracked and tied to a specific individual, or at least traced to that individual’s credentials.

Identification, Authentication, and Authorization of People and Devices

Identification, authentication, and authorization are three of the core concepts of access control. Together, these items determine who gets into the network and what they have access to. When someone thinks of authentication, what might come to mind is who gains access; however, identification comes first. At the point of identification you are a claimant. This simply means that you may say you are Michael. But how does the system actually know this? That is where authentication comes into play by proving the veracity of a claim. Let’s look at some basic concepts and terms before reviewing more in-depth topics:

Image Identification is the process of identifying yourself to an authentication service.

Image Authentication is the process of proving the veracity of an identity claim; phrased differently, it is used to determine whether a user is who he or she claims to be.

Image Authorization is the process of determining whether a user has the right to access a requested resource.

Image Accountability is the ability to relate specific actions and operations to a unique individual, system, or process.

Image Access is the transfer of information between two entities. When access control is discussed, it is usually in terms of access, subjects, and objects.

Image A subject is an active entity that can be a person, application, or process.

Image An object is a passive entity that contains or holds information. An object can be a server, database, information system, etc.


Tip

It is important to note that a person can be a subject or an object. In this domain the person is typically the active entity, or subject. In other domains the application, for example, can be the subject.


Authentication Techniques

In network security, authentication is the process of determining the legitimacy of a user or process. Various authentication schemes have been developed over the years. Some common categories that have been established are as follows:

Image Something you know (Type 1)—Typically an alphanumeric password or PIN.

Image Something you have (Type 2)—Can include smart cards, tokens, memory cards, or key fobs.

Image Something you are (Type 3)—Items like fingerprints, facial scans, retina scans, or voice recognition.


Tip

Some sources list a fourth type of authentication, which is somewhere you are. As an example, consider a callback system that requires you to be in a specific location to receive the call to authenticate. Another example is the use of GPS in a smartphone or tablet to identify where you are.


The authentication process is something that most individuals have performed thousands of times. Consider the log-in prompt at the website of your local bank. You are prompted to enter your username and password, which, if entered correctly, provides you with access. As an example, you might now be able to access your own bank records, but you should not be able to see someone else’s bank balance or access their funds. Your level of authorization as a bank user will be much different from that of a bank manager or loan officer. What is important to understand is that authorization can offer a wide range of access levels from all to nothing.

Organizations require this level of control to enforce effective controls and maintain a secure environment. Enforcement also requires audit and accountability, which means that someone must review employee and user activities. Just because the bank manager has a greater level of access than the average bank user doesn't mean that his or her access is unchecked. Controls are needed to limit what the bank manager can access; accountability is also needed so that fraud can be detected if the manager were to decide to take a small amount of the customers' money each month and stash it away in a Swiss account.

The way in which authentication is handled is changing. As an example, federated authentication allows you to log in once and access multiple resources without having to log in to each unique site or service. The overarching framework is for organizations to share authentication information over the World Wide Web using Security Assertion Markup Language (SAML) with HTTPS; once a user has proven his or her identity, and if the two organizations trust each other, the user's security token travels with him or her and the online experience gets easier. For example, if you were to book an airline ticket, you might be presented with a pop-up that asks if you also need to book a hotel room. Clicking Yes might take you to a major hotel chain website to which your identity and travel information have already been passed. This saves you the process of logging in a second time to the hotel website.

Such systems are already in use, and one early example was Microsoft Passport. These technologies allow for third-party identity management. As another example, you might go to a shopping site and be asked to log in with your Facebook credentials. These systems function by establishing a trust relationship between the identity provider and the service or application. Also, more organizations are starting to adopt authentication as a service (AaaS). AaaS enables organizations to easily apply strong authentication delivered from the cloud and use it as needed, from anywhere.

Something You Know (Type 1): Passwords and PINs

We begin our discussion of authentication systems by discussing passwords. Of the three types of authentication, passwords are the most widely used. The problem with this method is that passwords are typically weak. Consider the following:

Image People use passwords that are easy to remember.

Image Difficult passwords might be written down and left where others can find them.

Image Most of us are guilty of reusing passwords.

Image Repudiation is a real issue with passwords because it is hard to prove who made a specific transaction or gained access.

Image Passwords can be cracked, sniffed, observed, replayed, or broken. Common password cracking can use dictionary, hybrid, or exhaustive search (brute force) attacks.

Image Dictionary attacks use common dictionary words, and hybrid password cracking uses a combination of words and random characters, such as 1password or p@ssw0rd. Brute force attempts all possible variations, which is typically time consuming. Rainbow table attacks use precomputed hash tables to reduce password cracking time and recover the plaintext password.

Image Many people are predictable and, as such, might use passwords that are easily guessed. Many times passwords are based on birthdays, anniversaries, a child’s name, or even a favorite pet. With the massive growth of the Internet and “Big Data” it is easy to use social engineering to find this information.
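
To see why weak composition matters, the dictionary and hybrid techniques described above can be sketched in a few lines of Python. The stored hash, wordlist, and mutation rules here are all hypothetical; real crackers use far larger wordlists and rule sets.

```python
import hashlib

# Hypothetical stored hash of the weak password "p@ssw0rd" (unsalted SHA-256).
stored_hash = hashlib.sha256(b"p@ssw0rd").hexdigest()

# A tiny wordlist; real dictionaries hold millions of entries.
wordlist = ["password", "letmein", "dragon"]

def mutations(word):
    """Hybrid rules: the word itself, leet substitutions, appended digits."""
    yield word
    yield word.replace("a", "@").replace("o", "0")
    for digit in "0123456789":
        yield word + digit

def crack(target_hash, words):
    """Hash every mutation of every word; return the match or None."""
    for word in words:
        for candidate in mutations(word):
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None

print(crack(stored_hash, wordlist))  # recovers "p@ssw0rd"
```

Note that the "complex-looking" p@ssw0rd falls immediately, because attackers apply the same substitution rules users do.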


Tip

In May 2014, news sources reported that eBay had suffered a massive security breach and was advising all users to change their passwords. eBay suggested that about 145 million users change their passwords. See mashable.com/2014/05/21/ebay-breach-ramifications/#9rCZ8zr.euqY for more details.


This makes password security an important topic for anyone studying access control. Many times a password is all that stands between an unauthorized user and account access. If you can’t make the change to a more robust form of authentication, you can implement controls to make passwords more robust. A few of these options are as follows:

Image Password length—Short passwords can be broken quickly via brute force attacks.

Image Password composition—Passwords should not be based on personal information or consist of common words or names. If you use cognitive information, you should make this information up during enrollment. Remember, your “real” information can be found on the Internet.

Image Password complexity—A combination of numbers, symbols, upper/lowercase letters, and so on should be used. As an example, a company might use a standard that requires passwords must be at least eight characters, two of which must be numbers and two of which must be uppercase letters. The company might also suggest using a combination of symbols and lowercase letters for the remaining characters.

Image Password aging—Unlike fine wine, passwords do not get better with age. Two items of concern are maximum age and minimum age. Maximum age is the longest amount of time a user can use a password. Minimum age defines the minimum amount of time the user must keep the password.

Image Password history—Authentication systems should track previous passwords so that users cannot reuse previous passwords.

Image Password attempts—Log-on attempts should be limited to a small number of times, such as three successive attempts. Applying this control is also called setting a clipping or threshold level. The result of a threshold or clipping event can be anything from a locking of the account to a delayed re-enabling of the account.

Image Password storage—Use the strongest form of one-way encryption available for storage of passwords, and never store in cleartext.
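
The storage guidance above can be sketched as follows. The iteration count and the use of PBKDF2 are illustrative choices, not a mandated standard, but the pattern of a per-user random salt plus a deliberately slow one-way function is the key idea.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Return (salt, digest) using a salted, deliberately slow one-way function."""
    if salt is None:
        salt = os.urandom(16)  # a unique random salt per user defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected):
    """Recompute the digest and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)

salt, digest = hash_password("Uaremy#1lady4l!fe")
assert verify_password("Uaremy#1lady4l!fe", salt, digest)
assert not verify_password("wrong-guess", salt, digest)
```

The password itself is never stored; only the salt and digest are, so even a stolen password database yields no cleartext.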


ExamAlert

You will be expected to understand CISSP terminology before attempting the exam. One such term is clipping level. Used in this context, it is simply another term for a log-on limit. Remember that a clipping level is the threshold or limit that must be reached before action is taken. A big part of the exam is understanding the terms that might be used and applying them in the context of the test question.


If all this talk of passwords has left you feeling somewhat vulnerable, you may want to consider a passphrase. A passphrase is often a modified sentence or phrase like "Uaremy#1lady4l!fe." After the user enters the phrase, software converts, or hashes, it into a stronger virtual password that is harder for an attacker to crack. Using a passphrase adds a second layer of protection and requires the passphrase to be used to access the secret key.

Static and Dynamic Passwords

Another issue to consider when evaluating password-based authentication is what type of password-based system is being used. Is it a static password, dynamic password, or a cognitive password? Static passwords are those that are fixed and do not normally change. As an example, I once set up a Gmail account for email and assigned a password. This password remained in effect until I no longer used the account. Dynamic passwords are also known as single-use passwords and can be thought of as the facial tissue of the security world: You use them once or for a short period, and then they are discarded. One-time passwords might be provided through a token device that displays the time-limited password on an LCD screen. Finally, there are cognitive passwords, which are discussed next.


Tip

Cracking passwords is just one technique that hackers can attempt. Attacks against access control systems can also include directly targeting the password hashes. There are tools to attempt this remotely, or the attacker can attempt this via physical access.


Cognitive Passwords

Cognitive passwords are another interesting password mechanism that has gained popularity. For example, three to five questions like the following might be asked:

Image What country were you born in?

Image What department do you work for?

Image What is your pet’s name?

Image What is the model of your first car?

Image What is your mother’s maiden name?

If you answer all the questions correctly, you are authenticated. Cognitive passwords are widely used during enrollment processes and when individuals call help desks or request other services that require authentication. Cognitive passwords are not without their problems. For example, if your name is Sarah Palin and the cognitive password you're prompted for by Yahoo! Mail is "What's the name of your high school," anyone who knows that fact, or who knows that you grew up in Wasilla, Alaska, could probably figure out where you went to high school and easily access your account. The most common area in which cognitive systems are used is self-service password reset systems. Should you forget your password, you are prompted with several questions that were answered during registration to verify your authenticity. If you answer correctly, the password is emailed or sent to you to restore access.
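
A self-service reset system along these lines might verify answers as sketched below. The normalization rule and question are hypothetical; the point is that cognitive answers, like passwords, should be salted and hashed rather than stored in cleartext.

```python
import hashlib
import os

def enroll_answer(answer, salt=None):
    """Store a cognitive answer salted and hashed, never in cleartext."""
    salt = salt or os.urandom(8)
    normalized = answer.strip().lower()  # tolerate case and spacing differences
    return salt, hashlib.sha256(salt + normalized.encode()).hexdigest()

def check_answer(answer, salt, digest):
    """Recompute the hash of the supplied answer and compare."""
    return hashlib.sha256(salt + answer.strip().lower().encode()).hexdigest() == digest

salt, digest = enroll_answer("Wasilla High")
assert check_answer("  wasilla high ", salt, digest)   # normalization forgives formatting
assert not check_answer("Anchorage High", salt, digest)
```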


ExamAlert

Exam candidates must understand the strengths and weaknesses of passwords and how password-based authentication can be enhanced. Passwords should always be created by means of a one-way process (hashing), should be randomized (salted), and should never be stored in cleartext.


Something You Have (Type 2): Tokens, Cards, and Certificates

Something you have is the second type of authentication we will discuss. Examples of something you have include tokens, smart cards, magnetic stripe cards, and certificates.

One of the most common examples of type 2 authentication is a token. As an example, if you have been to a sports event lately, you most likely had to possess a token to enter the game. In this instance, the token was in the form of a ticket. In the world of network security, a token can be a synchronous token or an asynchronous token device. Tokens are widely used with one-time passwords (OTPs) or single-use passwords. These passwords change every time they are used. Thus, OTPs are often implemented with tokens.

Another great feature of token-based devices is that they can be used for two-factor authentication. Although physical tokens and key fobs can suffer from problems like battery failure and device failure, using tokens offers a much more secure form of authentication than using passwords.

Synchronous Tokens

Synchronous tokens are synchronized to the authentication server. This type of system works by means of a clock or time-based counter. Each individual passcode is valid for only a short period. Even if an attacker were able to intercept a token-based password, it would be valid for only a limited time. After that small window of opportunity, it would have no value to an attacker. As an example, RSA’s SecurID changes user passwords every 60 seconds. Figure 8.1 shows an example.

Image

FIGURE 8.1 RSA token authentication.
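
The time-based scheme can be sketched as follows. This is a simplified illustration in the spirit of the TOTP standard (RFC 6238), not RSA's proprietary algorithm; the shared secret and 60-second step are assumptions.

```python
import hashlib
import hmac
import struct

def token_code(secret: bytes, at: float, step: int = 60) -> str:
    """Derive a 6-digit code from a shared secret and the current 60-second window."""
    counter = int(at // step)  # token and server derive the same window from their clocks
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, as in RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 1_000_000
    return f"{code:06d}"

secret = b"seed-provisioned-at-enrollment"  # hypothetical shared seed
t = 1_700_000_005.0  # an arbitrary fixed moment, 25 seconds into its window

# Token and server agree within the same 60-second window...
assert token_code(secret, t) == token_code(secret, t + 30)
# ...but the code expires once the window rolls over.
assert token_code(secret, t) != token_code(secret, t + 60)
```

Because each code is derived from the clock, an intercepted code is worthless after the window closes.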

Asynchronous Token Devices

Asynchronous token devices are not synchronized to the authentication server. These devices use a challenge-response mechanism: the server sends the user a random value (the challenge), and the user enters that value into the token device along with a username and password. This authentication method is considered strong authentication because it is actually multifactor authentication (something you know and something you have). Figure 8.2 shows an example.

Image

FIGURE 8.2 Asynchronous token authentication.

These devices work as follows:

1. The computer generates a value and displays it to the user.

2. The value is entered into the token.

3. The user is prompted to enter a secret passphrase.

4. The token performs a computation on the entered value.

5. The new value is displayed on the LCD screen of the token device.

6. The user enters the displayed value into the computer for authentication.

7. The value is forwarded to an authentication server and compared to the value the authentication server is expecting.
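
The steps above can be sketched as follows; the shared secret, challenge length, and truncation to eight hex characters are illustrative assumptions rather than a specific vendor's scheme.

```python
import hashlib
import hmac
import os

token_secret = b"per-device-secret"  # hypothetical secret shared by token and server

def token_response(secret: bytes, challenge: bytes, passphrase: str) -> str:
    """Step 4: compute a short response from the challenge and the user's passphrase."""
    mac = hmac.new(secret, challenge + passphrase.encode(), hashlib.sha256)
    return mac.hexdigest()[:8]  # step 5: the value shown on the token's LCD

# Step 1: the server generates a random value and displays it to the user.
challenge = os.urandom(8)

# Steps 2-6: the user keys the challenge and passphrase into the token,
# then types the displayed response back into the computer.
user_entry = token_response(token_secret, challenge, "correct horse")

# Step 7: the server, holding the same secret, computes the expected value.
expected = token_response(token_secret, challenge, "correct horse")
assert hmac.compare_digest(user_entry, expected)  # access granted
```

Because the challenge is random each time, a captured response cannot be replayed against a later log-on.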

Cards

Card-based authentication can be accomplished by means of a smart card, memory card, or magnetic stripe card. A smart card is an intelligent token with an embedded integrated circuit chip. It provides not only memory capacity, but computational capability because of its built-in microprocessor. The types of smart cards include:

Image Contact smart cards—When inserted into the reader, electrical contacts touch the card in the area of the integrated circuit (IC). These contacts provide power and a data path to the smart card.

Image Contactless smart cards—When brought into the proximity of a reader, an embedded antenna provides power to the IC. When the correct PIN is entered into the smart card, processing can begin. Figure 8.3 shows an example of a generic smart card.

Image

FIGURE 8.3 Generic smart card.

Memory cards are like smart cards but cannot process information. They must be used in conjunction with readers and systems that can process the data held on the card. Memory cards typically hold a PIN that, when read by a computer system, is used to pull authentication information from a database. One of the primary advantages of a memory card is that, unlike passwords, memory cards require the user to possess the card to perform authentication. An older form of card token is the magnetic stripe card, established as a widely used standard in the 1970s. The magnetic stripe contains information used to authenticate the user. Care must be exercised in the storage of information on the magnetic card; although cleartext should not be used, some credit cards still hold information in cleartext. Magnetic stripe readers are cheap and easy to use, so anyone possessing such a device and a PC can steal card information anywhere cards are used, such as at a restaurant or store.

Certificates

Some authentication methods, such as Protected Extensible Authentication Protocol (PEAP) and Extensible Authentication Protocol (EAP), can use certificates for authentication of computers and users. Certificates can reside on a smart card or can be used by Internet Protocol Security (IPSec) and Secure Sockets Layer (SSL) for web authentication. These digital certificates provide some basic information to prove the identity of the holder.

Digital certificates typically contain the following critical pieces of information:

Image Identification information that includes username, serial number, and validity dates of the certificates.

Image The public key of the certificate holder.

Image The digital signature of the signature authority. This piece is critical because it validates the entire package.

X.509 is the standard for digital certificates; it specifies the information and attributes required for the identification of a person or a computer system. Version 3 is the most current. Storing digital certificates in tokens is considered a secure practice.

Something You Are (Type 3): Biometrics

Biometrics is a means of authentication based on personal attributes or behavioral or physiological characteristics that are unique to each individual. Personal attributes are more closely related to identity features such as fingerprints and retina scans, whereas an example of a behavioral trait is the way an individual signs his or her name, referred to as signature dynamics. This is not the same as a digital signature. Biometrics is a very accurate means of authentication but is typically more expensive than the password systems discussed earlier.

Biometric authentication systems have been slow to mature because many individuals are opposed to the technology. Issues like privacy are typically raised; some individuals see the technology as too much of a Big Brother technology. Individuals who are accustomed to quickly entering usernames and passwords are forced to be much more patient and allow the biometric system to gather multiple sets of biometric data. Issues like sanitization can also be a barrier because users often have to touch authentication devices or place an eye or face close to a reader, as with a retina scan.

During the authentication process, the biometric system might not be able to collect enough data on the first reading to compare to the reference file in the authentication store. This could mean that the user must allow the biometric system to make two or more passes before authentication takes place. These technical barriers further reduce acceptance of the biometric system.

However, the need for greater security has led more companies to look at biometric authentication systems as a way to meet the need for stronger security. Biometric authentication offers the capability of unique authentication of every single person on the planet. Biometric systems work by recording information that is very minute and unique to every person.

When the biometric system is first used, the system must develop a database of information about the user. This is considered the enrollment period. When enrollment is complete, the system is ready for use. If an employee then places his or her hand on the company’s new biometric palm scanner, the scanner compares the ridges and creases found on the employee’s palm to the one identified as belonging to that individual in the device’s database. This process is considered a one-to-one match of the individual’s biometric data.

In reality, the user’s unique attribute value is converted into a binary value and then hashed before being stored in an authentication server. In organizations that implement strong security, a user may be authenticated with both biometrics and a username/password. This is to ensure if one layer of security fails the system or facility is still protected, following the defense-in-depth approach. Different biometric systems have varying levels of accuracy and sensitivity.

A biometric system's accuracy is measured by the percentage of Type I and Type II errors it produces.

Type I errors, known as the false rejection rate (FRR), are a measurement of the percentage of individuals who should have been but were not allowed access. Think of the FRR as the insult rate. It’s called that because valid users are insulted that they were denied access even though they are legitimate users.

Type II errors, known as the false acceptance rate (FAR), are a measurement of the percentage of individuals who were allowed access but should not have been. Consider a situation where I, the author of this book and not an employee of your organization, show up at your work site and attempt to authenticate to one of the company's systems. If I were allowed in, that would be an example of a Type II error.

Together these two values can be used to determine the overall accuracy of the system. This is one of the primary ways to evaluate the accuracy of a biometric device. Suppose you have been asked to assess similar biometric devices. In this situation, the crossover error rate (CER) can be used to help guide you into selecting the best system for your organization. This is determined by mapping the point at which Type I errors equal Type II errors. Figure 8.4 depicts the CER. The lower the CER, the more accurate is the biometric system. For example, if system A has a CER of 4 and system B has a CER of 2, system B has the greater accuracy.

Image

FIGURE 8.4 Crossover error rate.
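
Given measured FRR and FAR values at several sensitivity settings, the CER can be estimated as sketched below; the sample error rates are hypothetical.

```python
# Hypothetical error rates (percent) measured at increasing sensitivity settings.
frr = [0.5, 1.0, 2.0, 4.0, 7.0, 11.0]   # Type I: rejections of valid users rise
far = [12.0, 8.0, 5.0, 4.0, 2.5, 1.5]   # Type II: false acceptances fall

def crossover_error_rate(frr, far):
    """Return the error rate at the setting where FRR and FAR are closest."""
    diffs = [abs(a - b) for a, b in zip(frr, far)]
    i = diffs.index(min(diffs))
    return (frr[i] + far[i]) / 2

print(crossover_error_rate(frr, far))  # → 4.0; a system with CER 2.0 would be more accurate
```

Tuning a real device moves it along this curve: a stricter setting trades insulted legitimate users for fewer impostors, and vice versa.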


ExamAlert

Before attempting the CISSP exam, make sure you understand the difference between Type I and Type II errors and the CER. Type II values are considered the most critical error rate to examine, whereas the CER is considered to be the best measurement of biometric system accuracy.


The following are some of the more common biometric authentication systems. These systems are listed in order of best response times and lowest error rates.

1. Palm scan—Analyzes characteristics associated with a user’s palm, such as the creases and ridges. If a match is found, the individual is allowed access.

2. Hand geometry—Another biometric system that uses the unique geometry of a user’s shape, length, and width of his or her fingers and hand to determine the user’s identity. It is one of the oldest biometric techniques.

3. Iris recognition—An eye-recognition system that is considered the most accurate biometric system because it has more than 400 points of reference when matching the irises of an individual’s eyes. These systems typically work by taking a picture of the iris and comparing it to one stored in a database.

4. Retina pattern—Another ocular-based technology that scans the blood vessels in the back of the eye. It requires users to place their eye close to the reader. Older systems used a cup-and-air technique that some users found invasive; it also raises the possibility of an exchange of bodily fluids. Although retina-based biometric systems are considered very accurate, drawbacks include the fact that the retina can change due to medical conditions like diabetes. Pregnancy can also cause subtle changes in the blood vessels of the retina. Because of the privacy concerns related to revealed medical conditions, retina scans are not readily accepted by users.

5. Fingerprint—Widely used for access control to facilities and items, such as laptops. It works by distinguishing up to 30 to 40 details about the peaks, valleys, ridges, and minutiae of the user’s fingerprint. However, many commercial systems limit the number that is matched to around eight to ten.

6. Facial recognition—Requires users to place their face in front of a camera. The facial scan device performs a mathematical comparison with the face prints (eigenfeatures) it holds in a database to allow or block access.

7. Voice recognition—Uses voice analysis for identification and authentication. Its main advantage is that it can be used for telephone applications, but it is vulnerable to replay attacks. Anyone who has seen the movie Sneakers might remember the line "Hi, my name is Werner Brandes. My voice is my passport. Verify me."

Regardless of which of the previous methods is used, all biometric systems basically follow a similar usage pattern:

1. Users must first enroll into the system—Enrollment is not much more than allowing the system to take multiple samples for analysis and feature extraction. These features will be used later for comparison.

2. A user requests to be authenticated—A sample is obtained, analyzed, and features are extracted.

3. A decision is reached—A match of multiple features allows the user access, whereas a discrepancy between the sample and the stored features causes the user to be denied access.

Different biometric systems have varying levels of accuracy. For example, fingerprint-scanning systems base their accuracy on fingerprint patterns and minutiae. Fingerprint patterns include arches, loops, and whorls, whereas minutiae include ridge endings, bifurcations, and short ridges. These are found on the fingertips, as seen in Figure 8.5.

Image

FIGURE 8.5 Fingerprint analysis.

Although the number of minutiae varies from finger to finger, the information can be stored electronically in file sizes that are usually from 250 to 1,000 bytes. When a user logs in, the stored file containing the minutiae is compared to the individual’s finger being scanned.
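
A greatly simplified minutiae comparison might look like the following sketch. Real matchers align rotated and translated prints rather than comparing exact coordinates, and the templates and threshold here are hypothetical.

```python
# Hypothetical enrolled template: each minutia is (x, y, type).
enrolled = {(10, 22, "ridge_end"), (31, 40, "bifurcation"), (55, 12, "ridge_end"),
            (62, 70, "short_ridge"), (80, 35, "bifurcation")}

# A live scan rarely captures every point; commercial systems match around 8 to 10.
live = {(10, 22, "ridge_end"), (31, 40, "bifurcation"), (55, 12, "ridge_end"),
        (80, 35, "bifurcation")}

def match_score(template, sample):
    """Fraction of enrolled minutiae found in the live sample."""
    return len(template & sample) / len(template)

THRESHOLD = 0.6  # raising it lowers the FAR but raises the FRR
print(match_score(enrolled, live) >= THRESHOLD)  # → True: access granted
```

The threshold is the knob that positions the system on the FRR/FAR curve discussed earlier.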

Other considerations must be made before deploying a biometric system:

Image Employee buy-in—Users might not like or want to interact with the system. If so, the performance of the system will suffer. For example, a retina scan requires individuals to look into a cuplike device, whereas an iris scanner requires only a quick look into a camera.

Image Age, gender, or occupation of the user—Older users might find biometric devices too Big Brotherish; women might not like the idea that the company’s new retina scanner can be used to detect whether they are pregnant. Users who perform physical labor or work in an unclean environment might find fingerprint scanners frustrating.

Image The physical status of the user—Users who are physically challenged or disabled might find the eye scanners difficult to reach. Those without use of their hands or fingers will be unable to use fingerprint readers, palm scanners, or hand geometry systems.

Image If the user can use the biometric—Some users may not be able to use the biometric at all. For example, some people cannot have their fingerprints read. This can be genetic or related to the job the person does; bricklayers and bank tellers, for instance, typically cannot use fingerprint readers because their fingerprints may be worn down.

A final consideration of biometrics is selection. With so many technologies, it takes a significant amount of effort to get the right system that meets user criteria and is technologically feasible. One tool that can aid in this task is the Zephyr chart. The International Biometric Group’s Zephyr Analysis provides a means of evaluating different biometric technologies. Two categories are defined as follows:

Image User Criteria—Effort and intrusiveness

Image Technology Criteria—Cost and accuracy


ExamAlert

Exam candidates must understand the different ways in which biometric systems can be evaluated. When comparing like devices, the CER can be used; for unlike devices, a Zephyr chart is the preferred method.


Strong Authentication

To make authentication stronger, you can combine several of the methods discussed previously. This combination is referred to as multifactor or strong authentication. The most common form of strong authentication is two-factor authentication. A token combined with a password forms effective strong authentication. If you have a bank card, you are familiar with two-factor authentication. Bank ATMs require two items to successfully access an account: something you have (the bank card) and something you know (your PIN).

The decision to use strong authentication depends on your analysis of the value of the assets being protected. What are the dollar values of the assets being protected? What might it cost the organization in dollars, lost profit, potential public embarrassment, or liability if unauthorized access is successful?


ExamAlert

CISSP exam questions are known for their unique style of wording. As such, make sure you can identify two-factor authentication. True two-factor authentication would require items from two of the three categories. As such, a password and a token would be two-factor authentication, whereas a password and a PIN would not.


Identity Management Implementation

Identity management has moved far beyond control of simple usernames and passwords. It involves a lifecycle of access control from account creation to decommissioning, and the management of each process in between. Provisioning is one important aspect. User provisioning is the creation, management, and deactivation of services and accounts of user objects. Access control is not an easy task because employees are hired, change roles, are promoted, gain additional duties, and are fired or resign. This constant state of flux requires organizations to develop effective user-provisioning systems.

Managing users is just the start of the process. Another area of concern is password management. Password management has forced organizations to develop different methods to address the access control needs of a complex world. Several techniques include the following:

Image Self-Service Password Reset—This approach allows users to reset their own passwords. For example, if you cannot access your LinkedIn account, the site allows you to reset your own password.

Image Assisted Password Reset—This method provides helpdesk and other authorized personnel a standardized mechanism to reset passwords. For example, Hitachi Systems makes a web portal product for just this application.

Image Password Synchronization—These systems are used to replicate a user’s password so that all systems are synchronized.

Any identity management system must also consider account management. Account management should include the following:

Image How to establish, manage, and close accounts

Image Periodic account review

Image Periodic rescreening for individuals in sensitive positions

Typically, when an account is established a profile is created. Profile management is the control of information associated with an individual or group. Profiles can contain information like name, age, date of birth, address, phone number, and so on. Modern corporations have so much data to manage that systems like directory management are used. The idea is to simplify the management of data. One of the primary disadvantages of such systems is the integration of legacy systems. Mainframes, non-networked applications, and applications written in archaic languages like FORTRAN and COBOL make it more difficult to centrally manage users.

Another approach to the management of user access to multiple sites is federation. Federation is used in identity management systems to manage identity across multiple platforms and entities. Some of the directory standards used to ease user management are the X.500 standard, Lightweight Directory Access Protocol, and Active Directory.

Today’s systems are much more distributed than in the past and rely far more on the Internet. At the same time, there has been a move toward service-enabled delivery and toward web services with a more abstract architectural style. This style, known as service-oriented architecture (SOA), attempts to bind together disjointed pieces of software. SOA allows a company with distributed departments, using different systems and services in different business domains, to access services with security designed into the process. For example, suppose the legal department and the IT department provide different services on different systems, and you want the legal department’s programs available on IT department systems. With the use of a web portal, the other department can access the service if required by employing Security Assertion Markup Language (SAML) over HTTP.

A CISSP should have some knowledge of components of identity management, such as the following:

Image Web Services Security—WS Security is an extension to Simple Object Access Protocol (SOAP) and is designed to add security to web services.

Image XML—Years ago, Hypertext Markup Language (HTML) dominated the web. Today, Extensible Markup Language (XML) is the standard framework. XML is a standard that allows for a common expression of metadata; SOAP messages themselves are formatted in XML.

Image SPML—Service Provisioning Markup Language is an XML-based framework that can be used to exchange access control information between organizations so that a user logged into one entity can have the access rights passed to the other.

Single Sign-On

Single sign-on is an attempt to address a problem that is common for all users and administrators. Various systems within the organization likely require the user to log on multiple times to multiple systems. Each of these systems requires the user to remember a potentially different username and password combination. Most of us become tired of trying to remember all this information and begin to look for shortcuts. The most common shortcut is just to write down the information. Walk around your office, and you might see that many of your co-workers have regrettably implemented this practice. Single sign-on is designed to address this problem by permitting users to authenticate once to a single authentication authority and then access all other protected resources without being required to authenticate again.

Before you run out and decide to implement single sign-on at your organization, you should be aware that it is expensive and that if an attacker can authenticate, that attacker then has access to everything. Kerberos, SESAME, KryptoKnight (by IBM), NetSP (a KryptoKnight derivative), thin clients, directory services, and scripted access are all examples of authentication systems and techniques that can implement single sign-on.


Caution

Thin clients can be considered a type of single sign-on system because the thin client holds no data. All information is stored in a centralized server. Thus, after a user is logged in, there is no reason for that user to authenticate again.


Kerberos

Kerberos, created by the Massachusetts Institute of Technology (MIT), is a network authentication protocol that uses secret-key cryptography. Kerberos has three parts: a client, a server, and a trusted third party, called the Key Distribution Center (KDC), that mediates between them. Clients obtain tickets from the KDC and present these tickets to servers when connections are established. Kerberos tickets represent the client’s credentials. Kerberos relies on symmetric key cryptography (shared or secret key cryptography). Version 5 of Kerberos was originally implemented with the Data Encryption Standard (DES); however, the Advanced Encryption Standard (AES), which supersedes DES, is supported in later Kerberos implementations and in operating systems such as Microsoft Windows 7, 8, and 10 and Windows Server 2012. Kerberos communicates through an application programming interface known as the Generic Security Services API (GSS-API). Common Kerberos terms include:

Image Ticket—Generated by the KDC and given to a principal for use in authenticating to another principal.

Image Realm—A domain consisting of all the principals for which the KDC provides security services; used to logically group resources and users.

Image Credentials—A ticket and a service key.

Image Principal—Can be a user, a process, or an application. Kerberos systems authenticate one principal to another.

The KDC is a service that runs on a physically secure server. The KDC consists of two components:

Image Authentication service—The authentication service issues ticket-granting tickets (TGTs) that are good for admission to the ticket-granting service (TGS). Before network clients can get tickets for services, they must obtain a TGT from the authentication service.

Image Ticket-granting service—Clients receive tickets to specific target services.


Note

Keep in mind that the TGT is an encrypted identification file with a limited validity window. The TGT is temporarily stored on the requesting principal’s system and is used so that the principal does not have to type in credentials multiple times to access a resource.


The basic operation of Kerberos, as depicted in Figure 8.6, is as follows:

1. The client asks the KDC for a ticket, making use of the authentication service.

2. The client receives the encrypted ticket and the session key.

3. The client sends the encrypted TGT to the TGS and requests a ticket for access to the application server. This ticket has two copies of the session key: One copy is encrypted with the client key, and the other copy is encrypted with the application server key.

4. The TGS decrypts the TGT using its own secret key and returns a service ticket to the client, granting it access to the application server.

5. The client sends this ticket, along with an authenticator, to the application server.

6. The application server sends confirmation of its identity to the client.

Image

FIGURE 8.6 Kerberos operation.
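The ticket exchange above can be summarized with a toy model. This is an illustration only, not real cryptography: "encryption" is simulated by tagging data with the name of the key needed to open it, and all key names and principal names are hypothetical.

```python
# Toy model of the Kerberos ticket exchange (illustration only, not real crypto).
def encrypt(key_name, data):
    return {"locked_with": key_name, "data": data}

def decrypt(key_name, blob):
    if blob["locked_with"] != key_name:
        raise PermissionError("wrong key")
    return blob["data"]

def as_issue_tgt(client):
    """Steps 1-2: the authentication service returns a TGT (sealed with the
    TGS key) plus a session key sealed with the client's own key."""
    session_key = "session-" + client
    tgt = encrypt("tgs_key", {"client": client, "session_key": session_key})
    return tgt, encrypt(client + "_key", session_key)

def tgs_issue_ticket(tgt, service):
    """Steps 3-4: the TGS opens the TGT with its own secret key and issues a
    service ticket sealed with the application server's key."""
    info = decrypt("tgs_key", tgt)
    return encrypt(service + "_key", {"client": info["client"],
                                      "session_key": info["session_key"]})

# Steps 5-6: the application server opens the ticket and learns who is calling.
tgt, _ = as_issue_tgt("alice")
ticket = tgs_issue_ticket(tgt, "fileserver")
print(decrypt("fileserver_key", ticket)["client"])  # -> alice
```

Note how the client never sees the inside of the TGT or the service ticket; it can only relay them, which is the core idea behind the real protocol.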


Note

Kerberos protects only the authentication exchange; subsequent communication is not protected. If the supplicant then uses an insecure protocol like File Transfer Protocol (FTP), that network traffic travels in the clear.


Although Kerberos can provide authentication, integrity, and confidentiality, it’s not without its weaknesses. One weakness is that Kerberos cannot guarantee availability. Some other weaknesses are as follows:

Image Kerberos is time-sensitive; therefore, it requires all system clocks to be closely synchronized.

Image The tickets used by Kerberos, which are authentication tokens, can be sniffed and potentially cracked.

Image If an attacker targets the Kerberos server, it can prevent anyone in the realm from logging in. It is important to note that the Kerberos server can be a single point of failure.

Image Secret keys are temporarily stored and decrypted on user workstations, making them vulnerable to an intruder who gets access to the workstation.

Image Kerberos is vulnerable to brute force attacks.

Image Kerberos may not be well-suited for large environments that have many systems, applications, users, and simultaneous requests.

Sesame

Kerberos is the most widely used SSO solution, but there are other options. One such option is the Secure European System for Applications in a Multi-vendor Environment (SESAME), a project developed to address one of the biggest weaknesses in Kerberos: the plaintext storage of symmetric keys. SESAME uses both symmetric and asymmetric encryption, whereas Kerberos uses only symmetric encryption. Another difference is that SESAME incorporates MD5 and CRC32 hashing and uses two certificates. One certificate provides authentication, as in Kerberos, and the second controls the access privileges assigned to a client.

SESAME uses Privilege Attribute Certificates (PACs). PACs contain the requesting subject’s identity, access capabilities of the subject, and life span of the subject requiring access. KryptoKnight by IBM and NetSP, a KryptoKnight derivative, are also SSO technologies, but are not widely deployed. Although you are unlikely to see these systems, you should know their names and that they are used for SSO if they happen to show up on the exam.

Authorization and Access Control Techniques

With a user identified and authenticated, the next step to consider is authorization. After the user is logged in, what can they access and what types of rights and privileges do they have? At the core of this discussion is how subjects access objects and what they can do with these resources after access is established. The three primary types of access control are as follows:

Image Discretionary access control (DAC)

Image Mandatory access control (MAC)

Image Role-based access control (RBAC)

These might not be concepts that you are used to thinking about; however, these decisions are made early in the design of an operating system. Consider early Microsoft Windows products, which were built around a peer-to-peer model; this is much different from SUSE Linux 12.0. Let’s look at each model to further describe their differences.

Discretionary Access Control

The DAC model is so titled because access control is left to the owner’s discretion. It can be thought of as similar to a peer-to-peer computer network. Each user is left to control their own system and resources. The owner is authorized to determine whether other users have access to files and resources. One significant problem of DAC is that its effectiveness is limited by the user’s skill and ability. A user who is inexperienced or simply doesn’t care can easily grant full access to files or objects under his or her control.

These are the two primary components of a DAC:

Image File and data ownership—All objects within a system must have an owner. Objects without an owner will be left unprotected.

Image Access rights and permissions—These control the access rights of an individual. Variations exist, but a basic access control list (ACL) checks read, write, or execute privileges.

The ACL identifies users who have authorization to specific information. This is a dynamic model that allows data to be easily shared. As an example, I might inform my son that he may not download any more music from the Internet onto his computer upstairs. From my computer in the den, my son might simply deny me access to the folder he has been downloading music into to prevent me from accessing it to monitor his activities. This case demonstrates that DAC is much like a peer-to-peer network in that users are in charge of their own resources and data.

Table 8.1 shows a sample ACL, with columns defining access to objects. A subject’s capabilities refer to a row within the matrix and describe what actions that subject can take on which objects.

Image

TABLE 8.1 Sample Access Control List

You can think of capabilities as the actions that a specific user can perform within the access matrix. DAC controls are based upon this matrix, and you can think of it as a means to establish access permission of a subject to an object.
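The column/row distinction can be sketched directly. In this hypothetical matrix (the subjects, objects, and permissions are made up, not taken from Table 8.1), a column is an object's ACL and a row is a subject's capability list.

```python
# Sketch of an access control matrix (all entries are hypothetical).
matrix = {
    "alice": {"payroll.xls": {"read", "write"}, "memo.txt": {"read"}},
    "bob":   {"memo.txt": {"read", "write", "execute"}},
}

def acl(obj):
    """Column view: which subjects may act on this object, and how."""
    return {subj: perms[obj] for subj, perms in matrix.items() if obj in perms}

def capabilities(subj):
    """Row view: everything this subject may do across all objects."""
    return matrix.get(subj, {})

print(acl("memo.txt"))        # alice can read; bob can read/write/execute
print(capabilities("alice"))  # alice's row of the matrix
```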

Although the data owner can create an ACL to determine who has access to a specific object, mistakes can lead to a loss of confidentiality, and no central oversight exists as in other more restrictive models.

Mandatory Access Control

A MAC model is static and based on a predetermined list of access privileges; therefore, in a MAC-based system, access is determined by the system rather than the user. To do this, a MAC system uses labels and clearances. Figure 8.7 shows the differences between DAC and MAC.

Image

FIGURE 8.7 Differences between DAC and MAC.

The MAC model is typically used by organizations that handle highly sensitive data, such as the Department of Defense, NSA, CIA, and FBI. Examples of MAC systems include SELinux, among others. Systems based on the MAC model use clearance on subjects, and mark objects by sensitivity label. For example, the military uses the clearances of top secret, secret, confidential, sensitive but unclassified (SBU), and unclassified. (See Chapter 2, “Logical Asset Security,” for a more in-depth discussion of government data classification.) Terms that you will need to know to understand this model include:

Image Objects—Passive entities that provide data or information to subjects.

Image Subjects—Active entities that can be a user, system, process, or program.

Image Clearance—Determines the type of information a user can access.

Image Category—Applied to objects and used to silo information.

Image Sensitivity Labels—Used to classify information. As previously mentioned, the U.S. military uses the labels of Top Secret, Secret, Confidential, Sensitive but Unclassified (SBU), and Unclassified.

When a subject attempts to access an object, the object’s label is examined for a match to the subject’s level of clearance. If no match is found, access is denied. This model excels at supporting the need-to-know concept. Here is an example: Jeff wants to access a top secret file. The file is labeled “top secret, Joint Chiefs of Staff (JCS).” Although Jeff has top secret clearance, he is not JCS; therefore, access is denied. In reality, it is a little more complicated than this example details, but remember that the CISSP exam is considered a mile wide and an inch deep.


Caution

Any time you see the term sensitivity label, you should start thinking MAC because this system is most closely associated with this term.


For the exam, you should know that MAC systems can be hierarchical, compartmentalized, or hybrid. Hierarchical designs work by means of classification levels. Each level of the hierarchy includes the lower levels as well. As an example, in a hierarchical system, Dwayne might be cleared for “top secret” and as such can also view “secret” and “confidential” information because those are less sensitive. If Dwayne was authorized to access only “confidential” data, however, he would not be able to read up to higher levels like “secret.” Compartmentalized objects require clearance from a specific domain or group like the Department of Homeland Security.

In a compartmentalized system, data can be separated into distinct categories. As an example, Dwayne, who works for the Department of Defense and holds a Top Secret - Sensitive Compartmented Information (TS-SCI) clearance, would still not be able to read documents compartmented for the State Department. The military would be exercising a MAC system with least privilege to ensure that even though Dwayne has TS clearance, he has only the access required to complete his job.

A hybrid design combines elements of both hierarchical and compartmentalized.
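A hybrid MAC read decision can be sketched in a few lines: the subject's clearance level must dominate the object's label (the hierarchical part), and the subject must hold every category on the object (the compartmented part). The level names follow the military labels above; the categories are illustrative.

```python
# Hedged sketch of a hybrid MAC read check (values are illustrative).
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def mac_allows_read(clearance, categories_held, object_level, object_categories):
    # Hierarchical check: clearance level dominates the object's level.
    # Compartmented check: subject holds every category on the object.
    return (LEVELS[clearance] >= LEVELS[object_level]
            and object_categories <= categories_held)

# Dwayne: top secret clearance, DoD compartment only.
print(mac_allows_read("top secret", {"DoD"}, "secret", {"DoD"}))    # -> True
print(mac_allows_read("top secret", {"DoD"}, "secret", {"State"}))  # -> False
```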

Important items to know about the MAC model include:

Image It’s considered a need-to-know system.

Image It has more overhead than DAC.

Image All users and resources are assigned a security label.


Caution

Although MAC uses a security label system, DAC systems allow users to set and control access to files and resources.


Role-Based Access Control

RBAC, also known as nondiscretionary access control, enables a user to have certain pre-established rights to objects. These rights are assigned to users based on their roles in the organization. The roles almost always map to the organization’s structure. Many organizations are moving to this type of model because it eases access management. As an example, if you are the IT administrator of a bank, there are some clearly defined roles, such as bank manager, loan officer, and teller. RBAC allows you to define specific rights and privileges to each group. The users are then placed into the groups and inherit the privileges assigned to the group. If Joan is a teller and gets promoted to loan officer, all the administrator must do is move her from the teller group to the loan officer group. Many modern OS designs use the RBAC model.

RBAC is well-suited to organizations that have a high turnover rate. Assigning access rights and privileges to a group rather than an individual reduces the burden on administration. How RBAC is implemented is up to the individuals designing the operating system. For RBAC to work, roles within the organization must be clearly defined by policy.

Your organization might decide to use static separation of duty (SSD). SSD dictates that a member of one group cannot also be a member of a conflicting group. Let’s say Mike is a member of the network administrators group. If so, Mike cannot also be a member of the security administrators group or the audit group. For SSD to work, roles must be clearly defined so that conflicts within the organization can be identified.

Another design is dynamic separation of duties (DSD). DSD dictates that a user cannot combine duties during any active session. Let’s say Mike is a member of the audit group and audit management group. If Mike logs on to perform an audit test, he does not have management rights. If Mike logs on under the audit management group, he cannot perform an audit test.
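A minimal sketch of RBAC with an SSD constraint follows. All role names, permissions, and the conflict pair are hypothetical, chosen to mirror the bank and Mike examples above.

```python
# Minimal sketch of RBAC with a static separation of duty (SSD) check.
ROLE_PERMS = {
    "teller":       {"view_account", "post_deposit"},
    "loan_officer": {"view_account", "approve_loan"},
}
SSD_CONFLICTS = [{"network_admins", "security_admins"}]  # mutually exclusive

def assign_role(user_roles, new_role):
    """Add a role unless it conflicts with one the user already holds."""
    for pair in SSD_CONFLICTS:
        if new_role in pair and (pair - {new_role}) & user_roles:
            raise ValueError("SSD violation: conflicting role")
    user_roles.add(new_role)

joan = set()
assign_role(joan, "teller")
joan.discard("teller")             # promotion: simply move between groups
assign_role(joan, "loan_officer")  # Joan now inherits loan-officer rights

mike = {"network_admins"}
try:
    assign_role(mike, "security_admins")
except ValueError:
    print("blocked by SSD")  # -> blocked by SSD
```

Notice that Joan's promotion is a group move, not a re-grant of individual permissions, which is the administrative saving the text describes.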

An example of separation of duties is shown in Table 8.2.

Image

TABLE 8.2 Separation of Duties

Finally, there is task-based access control (TBAC). TBAC is similar to RBAC, but instead of being based on roles, it is based on tasks, so that allowed duties are built around the types of tasks a specific individual performs. A good example can be seen when you log on to a Windows Server 2012 system. In the user administration tools, you will see predefined groups, such as Print Operators, Backup Operators, and Power Users. The access assigned to each is based on the types of tasks that individual would perform.

Other Types of Access Controls

Other types of access control techniques include rule-based access control. Rule-based access control is based on a specific set of rules, much like a router ACL. Rule-based access control is considered a variation of the DAC model. Rule-based access control is accomplished by

1. Intercepting each request

2. Comparing it to the level of authorization

3. Making a decision

For instance, a router may use a rule-based ACL that might have permissions set to allow web traffic on port 80 and deny Telnet traffic on port 23. These two basic rules define the ACL. ACLs are tied to objects. Permissions can be assigned in three different ways: explicitly, implicitly, or inherited. ACLs have an implicit deny all statement that is the last item processed. A sample Cisco-formatted ACL is shown here with both allow (permit) and deny statements:

no access-list 111
! Permit web traffic (TCP port 80) from the internal subnet
access-list 111 permit tcp 192.168.13.0 0.0.0.255 any eq www
! Explicitly deny Telnet (TCP port 23) and ICMP
access-list 111 deny tcp any any eq telnet
access-list 111 deny icmp any any
! Apply the list inbound on the interface
interface ethernet0
ip access-group 111 in
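The intercept/compare/decide loop behind such an ACL can be sketched as a toy first-match filter (fields are simplified; real ACL matching also covers addresses, masks, and protocol options):

```python
# Toy first-match packet filter: rules are checked in order, and anything
# unmatched falls through to the implicit deny at the end of the list.
RULES = [
    ("permit", "tcp", 80),  # www
    ("deny",   "tcp", 23),  # telnet
]

def evaluate(proto, port):
    for action, rule_proto, rule_port in RULES:
        if proto == rule_proto and port == rule_port:
            return action
    return "deny"  # implicit deny all, processed last

print(evaluate("tcp", 80))  # -> permit
print(evaluate("udp", 53))  # -> deny (no rule matched)
```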


ExamAlert

Although some people interchange the terms access control list and capability table, they are not the same. Capability tables are bound to subjects; ACLs are bound to objects.


Content-dependent access control (CDAC) is based on the content of the resource. CDAC is primarily used to protect databases that contain potentially sensitive data. CDAC is also used to filter out unauthorized traffic and is typically used by proxies or the firewall. As an example, you may be able to log in to the company SharePoint page and see the number of days you will be expected to travel next month, yet you are unable to see when or where the CEO will be traveling during the same period.

Finally, there is lattice-based access control (LBAC). This MAC-derived model functions by defining a least upper bound and a greatest lower bound of access rights. The least upper bound is called a join, and the greatest lower bound is called a meet. This information flow model deals with access in complex situations. The lattice model allows access only if the subject’s capability is greater than or equal to that of the object being accessed. For example, Figure 8.7 demonstrates these boundaries: if you were cleared for secret access, you could read the level below, which is confidential.
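The join and meet operations can be sketched on labels built from a level plus a set of categories. The level names follow the military labels used earlier; the categories are illustrative.

```python
# Hedged sketch of join and meet on a lattice of (level, categories) labels.
LEVELS = ["unclassified", "confidential", "secret", "top secret"]

def join(a, b):
    """Least upper bound: the smallest label that dominates both inputs."""
    (level_a, cats_a), (level_b, cats_b) = a, b
    level = LEVELS[max(LEVELS.index(level_a), LEVELS.index(level_b))]
    return (level, cats_a | cats_b)

def meet(a, b):
    """Greatest lower bound: the largest label dominated by both inputs."""
    (level_a, cats_a), (level_b, cats_b) = a, b
    level = LEVELS[min(LEVELS.index(level_a), LEVELS.index(level_b))]
    return (level, cats_a & cats_b)

level, cats = join(("secret", {"DoD"}), ("confidential", {"State"}))
print(level, sorted(cats))  # -> secret ['DoD', 'State']
```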


ExamAlert

Don’t sweat the terms. As you might have noticed, both role-based access control (RBAC) and rule-based access control (RBAC) have the same abbreviation. The CISSP exam will spell out these terms and most others. You are not expected to memorize these; you are expected to know the concepts.


Access Control Models

Access control models can be divided into two distinct types: centralized and decentralized. Depending on the organization’s environment and requirements, one methodology typically works better than the other.

Centralized Access Control

Centralized access control systems maintain user IDs, rights, and permissions in one central location. Remote Authentication Dial-In User Service (RADIUS), Terminal Access Controller Access Control System (TACACS), and Diameter are all examples of centralized access control systems. Characteristics of centralized systems include:

Image One entity makes all access decisions.

Image Owners decide what users can access, and the administration supports these directives.

Consider old-school dialup. CNET reports that as of 2015 up to 2 million people still pay for it. There is most likely a certain amount of churn each month. The ISP might have many users sign up or leave each billing cycle. Centralized access control gives the ISP an easy way to manage this task. Users who do not pay can be denied access from one centralized point and those users who are paid up can be authenticated and allowed access to the Internet. Users are typically authenticated with one of the following authentication protocols:

Image Password Authentication Protocol (PAP)—Uses a two-way handshake to authenticate a peer to a server when a link is initially established, but is considered weak because it transmits passwords in cleartext. PAP also offers no protection from replay or brute force attacks.

Image Challenge Handshake Authentication Protocol (CHAP)—Uses a one-way hash function and a handshake to authenticate the client and the server. This process is performed when a link is initially established and may be repeated at defined intervals throughout the session. Although better than PAP, it is susceptible to replay attacks.

Image MS-CHAPv2—An authentication method that has been extended to authenticate both the client and the server. MS-CHAPv2 also uses stronger encryption keys than CHAP and MS-CHAP.

Image Extensible Authentication Protocol (EAP)—A framework that allows for more than just your standard username and password authentication by implementing various authentication mechanisms, such as MD5-challenge, token cards, and digital certificates.
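The CHAP handshake above can be sketched concretely. Per RFC 1994, the response is the MD5 hash of the message identifier, the shared secret, and the challenge, so the secret itself never crosses the wire. The secret value here is illustrative.

```python
# Hedged sketch of a CHAP challenge/response exchange (RFC 1994).
import hashlib
import os

SECRET = b"shared-secret"  # known to both peers, never transmitted

def chap_response(identifier: int, challenge: bytes) -> bytes:
    # Response = MD5(identifier || secret || challenge)
    return hashlib.md5(bytes([identifier]) + SECRET + challenge).digest()

# Server issues a random challenge; client answers; server recomputes and compares.
challenge = os.urandom(16)
ident = 1
client_answer = chap_response(ident, challenge)  # computed by the client
server_expect = chap_response(ident, challenge)  # recomputed by the server
print(client_answer == server_expect)  # -> True
```

Because the server can repeat the exchange with a fresh challenge at any time during the session, a captured response is useless later, which is why CHAP resists simple password sniffing even though it remains vulnerable to replay within a single challenge window.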

RADIUS

RADIUS is an open UDP-based client/server protocol defined in RFCs 2058 and 2059. RADIUS provides three services: authentication, authorization, and accountability. RADIUS facilitates centralized user administration and keeps all user profiles in one location that all remote services share. When a RADIUS client begins the communication process with a RADIUS server, it uses attribute-value pairs (AVPs), which are simply a set of defined fields that accept certain values. RADIUS was originally designed to provide protection from attack over dialup connections. It has been used by ISPs for years and has become a standard in many other settings. RADIUS can be used by mobile employees and integrates with the Lightweight Directory Access Protocol (LDAP). RADIUS is considered an AAA protocol (authentication, authorization, and accountability), and all these services are performed together. It is important to note that RADIUS encrypts only the user’s password as it travels from the client to the server; all other information is sent in cleartext.


Note

The LDAP protocol can be used by a cluster of hosts to allow centralized security authentication as well as access to user and group information.


RADIUS is also used for wireless LAN authentication. The IEEE designed EAP to integrate easily with RADIUS to authenticate wireless users. The wireless user takes on the role of the supplicant, and the access point serves as the client. RADIUS uses the following UDP ports:

Image UDP port 1812 for authentication and authorization services

Image UDP port 1813 for accounting of RADIUS services

If the organization has an existing RADIUS server that’s being used for remote users, it can be put to use authenticating wireless users, too.

RADIUS functions are as follows (see Figure 8.8):

1. The user connects to the RADIUS client.

2. The RADIUS client requests credentials from the user.

3. The user enters credentials.

4. The RADIUS client encrypts the credentials and passes them to the RADIUS server.

5. The RADIUS server then accepts, rejects, or challenges the credentials.

6. If the authentication was successful, the user is authenticated to the network.

Image

FIGURE 8.8 RADIUS authentication.

TACACS

Terminal Access Controller Access Control System is available in three variations: TACACS, XTACACS (Extended TACACS), and TACACS+. TACACS allows the authentication, authorization, and auditing functions to be handled separately, which gives the administrator more control over its deployment; RADIUS, by contrast, does not split these up. TACACS is highly Cisco-centric and is considered proprietary. XTACACS separates the authentication, authorization, and accountability processes, and TACACS+ adds two-factor authentication and security tokens. TACACS+ is a completely new and revised protocol that is incompatible with earlier versions of TACACS. TACACS has failed to gain the popularity of RADIUS and is now considered a somewhat dated protocol.

There are some major differences between RADIUS and TACACS+. Where RADIUS encrypts only the password sent between the client and the server, TACACS+ encrypts all the information. TACACS+ also allows for more administrative control and supports more AVPs because it can split up the AAA functions. Where RADIUS uses UDP, TACACS+ uses TCP.

Diameter

You can never say the creators of Diameter didn’t have a sense of humor. Diameter’s name is a pun: the diameter is twice the radius. Diameter is essentially an enhanced RADIUS in that it was designed to do much more than provide services to dialup users. A single Diameter peer can support over a million concurrent Diameter sessions, and Diameter can even perform peer-to-peer authentication. Diameter is detailed in RFC 3588 and can use TCP, UDP, or the Stream Control Transmission Protocol (SCTP). Diameter supports protocols and devices not even envisioned when RADIUS and TACACS were created, such as Voice over IP (VoIP), Ethernet over PPP, and Mobile IP. VoIP is the routing of voice communication over data networks, and Mobile IP is the ability of a user to keep the same IP address while moving between networks. Consider the example of Mike taking his IP-based phone from his provider’s network to an overseas location. In such a situation, he needs a home IP address and also a “care of” address. Although Mike is normally a T-Mobile customer, his data needs to be routed to him while he is in Jamaica using the AT&T network. Diameter provides this capability and is considered a very secure solution because cryptographic support (IPsec or TLS) is mandatory.

Diameter is designed to use two protocols. The first is the base protocol that is used to provide secure communication between Diameter devices, and enables various types of information to be transmitted, such as headers, security options, commands, and AVPs.
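To make the AVP idea concrete, here is a small Python sketch that parses a single Diameter AVP as laid out in RFC 3588: a 32-bit Code, 8 bits of Flags, a 24-bit Length, an optional Vendor-ID when the V flag is set, and data padded to a 32-bit boundary. The helper name and the sample AVP in the usage note are illustrative, not taken from any particular Diameter implementation:

```python
import struct

def parse_avp(buf: bytes):
    """Parse one Diameter AVP from buf; return (avp_dict, remaining_bytes)."""
    code, = struct.unpack_from("!I", buf, 0)          # AVP Code: 32 bits
    flags = buf[4]                                     # V, M, P flag bits
    length = int.from_bytes(buf[5:8], "big")           # Length covers header + data
    offset, vendor_id = 8, None
    if flags & 0x80:                                   # V bit set: Vendor-Specific AVP
        vendor_id, = struct.unpack_from("!I", buf, 8)
        offset = 12
    data = buf[offset:length]
    padded = length + (-length % 4)                    # AVPs are padded to 32-bit words
    return {"code": code, "flags": flags,
            "vendor_id": vendor_id, "data": data}, buf[padded:]
```

For example, a User-Name AVP (code 1) carrying `b"mike"` with the Mandatory flag set parses back to its code and data, with any following AVPs returned as the remainder.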

The second protocol is really a set of extensions. Extensions are built on top of the base protocol to allow various technologies to use Diameter for authentication. This component is what interacts with other services, such as VoIP, wireless, and cell phone authentication. In a world of the Internet of Things, the Internet of Everywhere, and systems of systems, where organizations are embracing BYOD and intelligence keeps moving toward the edge of the network, Diameter provides a way forward for authenticating these devices onto the organization’s network. It provides granular access and authorization beyond what an Active Directory domain controller can do.

Finally, Diameter is not fully backward-compatible with RADIUS, but there are several options for upgrading RADIUS component communication paths.

Decentralized Access Control

Decentralized access control systems store user IDs, rights, and permissions in different locations throughout the network. As an example, domains can be thought of as a form of decentralized access control. Large organizations typically establish multiple domains along organizational boundaries, such as manufacturing, engineering, marketing, sales, or R&D; or by geographical boundaries, like New York, Atlanta, San Jose, and Houston. When more than one domain exists, there has to be some type of trust between them. A trust is a link between domains that allows their separate security policies and security databases to be reconciled. Trusts can be one-way or two-way. The important concept here is that although all of a domain’s authentication is centralized on domain controllers, a domain’s access control is distributed throughout the domain’s members. Access to resources is assigned and defined on the resource wherever it might reside in the domain.

Characteristics of a decentralized system include the following:

Image Gives control to individuals closer to the resource, such as department managers and occasionally users

Image Maintains multiple domains and trusts

Image Does not use one centralized entity to process access requests

Image Used in database-management systems (DBMS)

Image Peer-to-peer in design

Image Lacks standardization, can result in overlapping rights, and might include security holes

Audit and Monitoring

Regardless of what method of authorization is used and what types of controls are enforced, individuals must be held accountable. For auditing to be effective, administrative controls are needed in the form of policies to ensure that audit data is reviewed on a periodic basis and not just when something goes wrong. Technical controls are needed so that user activity can be tracked within a network. Physical and technical controls are needed to protect audit data from being tampered with.

Although auditing is used only after the fact, it can help detect suspicious activity or identify whether a security breach has taken place. For example, security administrators often review logs only for failed log-on attempts, yet successful log-ons can hurt the most because they show you who is in the network that should not be. One example: Mike, who works Monday through Friday, 9 to 5, in Houston, has been logging in Sundays from 12 to 9 p.m. from San Jose. Maybe Mike is on vacation, but it is also possible that someone is using his account. Because his account is valid, the activity does not raise an alarm by itself.

Monitoring Access and Usage

Computer resources are a limited commodity provided by a company to help meet its overall goals. Although many employees would never dream of placing all their long-distance phone calls on a company phone, some of those same individuals have no problem using computer resources for their own personal use. Consider these statistics from Personal Computer World: according to information on its site, one-third of time spent online at work is not work-related, and more than 75% of streaming radio downloads occur between 5 a.m. and 5 p.m.

Accountability must be maintained for network access, software usage, and data access. In a high-security environment, the level of accountability should be substantial, and users should be held responsible by logging and auditing their activities.

Good practice dictates that audit logs are transmitted to a remote centralized site. Centralized logging makes it easier for the person assigned the task to review the data. Exporting the logs to a remote site also makes it harder for hackers to erase or cover their activity. If there is a downside to all this logging, it is that all the information must be recorded and reviewed. A balance must be found between collecting audit data and maintaining a manageable log size. Reviewing it can be expedited by using audit reduction tools. These tools parse the data and eliminate unneeded information. Another useful tool is a variance detection tool. These tools look for trends that fall outside the realm of normal activity. As an example, if an employee normally enters the building around 7 a.m. and leaves about 4 p.m. but is seen entering at 3 a.m., a variance detection tool would detect this abnormality.
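The variance-detection idea above can be reduced to a small program: build a baseline of the hours each user is normally active, then flag any event that falls outside that baseline. This is only a toy sketch of the concept (the function names and sample data are my own; commercial tools consider far more dimensions than hour-of-day):

```python
from datetime import datetime

def build_baseline(events):
    """Record which hours of the day each user is normally active.

    events: iterable of (user, datetime) pairs from historical logs.
    """
    baseline = {}
    for user, ts in events:
        baseline.setdefault(user, set()).add(ts.hour)
    return baseline

def variance_alerts(baseline, events):
    """Return events that fall outside a user's established activity hours."""
    return [(user, ts) for user, ts in events
            if ts.hour not in baseline.get(user, set())]
```

Run against the earlier example, a 3 a.m. building entry by an employee whose baseline covers only business hours would be returned as an alert.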

Intrusion Detection Systems

Intrusion detection systems play a critical role in the protection of the IT infrastructure. Intrusion detection involves monitoring network traffic, detecting attempts to gain unauthorized access to a system or resource, and notifying the appropriate individuals so that counteractions can be taken. An IDS is designed to function as an access control monitor. Intrusion detection is a relatively new technology. It was really born in the 1980s when James Anderson put forth the concept in a paper titled Computer Security Threat Monitoring and Surveillance (csrc.nist.gov/publications/history/ande80.pdf).

An IDS can be configured to scan for attacks, track a hacker’s movements, alert an administrator to ongoing attacks, and highlight possible vulnerabilities that need to be addressed. The type of activity the IDS will detect depends on where the intrusion sensors are placed. This requires some consideration because, after all, a sensor in the DMZ will work well at detecting misuse there but will prove useless against attackers inside the network. Even when you have determined where to place sensors, they still require specific tuning. Without specific tuning, the sensor will generate alerts for all traffic that matches given criteria, regardless of whether the traffic is indeed something that should generate an alert. An IDS must be trained to look for suspicious activity. That is why I typically tell people that an IDS is like a 3-year-old: it requires constant care and nurturing, and it doesn’t do well if left alone.


Note

Although the exam will examine these systems in a very basic way, modern systems are a mix of intrusion detection and intrusion prevention. These systems are referred to as intrusion detection and prevention (IDP) systems and are designed to identify potential incidents, log information, attempt to stop the event, and report the event. Many organizations even use IDP systems for activities like identifying problems with security policies, documenting threats, and deterring malicious activity that violates security policies. NIST 800-94 is a good resource to learn more (csrc.nist.gov/publications/nistpubs/800-94/SP800-94.pdf).


A huge problem with intrusion detection systems is that they are after-the-fact devices—the attack has already taken place. Other problems with IDS are false positives and false negatives. A false positive occurs when the IDS triggers an alarm on normal traffic. For example, if you go to your local mall parking lot, you’re likely to hear some car alarms going off for reasons other than car theft. Those car alarms are experiencing false positives. False positives are a big problem because they desensitize the administrator. False negatives are even worse: a false negative occurs when a real attack takes place but the IDS does not pick it up.
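These two failure modes are commonly summarized as rates. Given a set of events labeled with whether an attack really occurred and whether the IDS alerted, the false positive rate is the fraction of benign events that triggered an alarm, and the false negative rate is the fraction of real attacks that were missed. A minimal sketch (the function name and data shape are illustrative):

```python
def ids_error_rates(results):
    """results: iterable of (attack_occurred, ids_alerted) boolean pairs."""
    attacks = [r for r in results if r[0]]        # events that were real attacks
    benign = [r for r in results if not r[0]]     # events that were normal traffic
    false_neg = sum(1 for attack, alert in attacks if not alert)
    false_pos = sum(1 for attack, alert in benign if alert)
    return {
        "false_positive_rate": false_pos / len(benign) if benign else 0.0,
        "false_negative_rate": false_neg / len(attacks) if attacks else 0.0,
    }
```

A tuning change that lowers one rate often raises the other, which is why sensor tuning is an ongoing balancing act rather than a one-time task.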


ExamAlert

A false negative is the worst type of event because it means an attack occurred, but the IDS failed to detect it.


IDS systems can be divided into two basic types: network-based intrusion detection systems (NIDS) and host-based intrusion detection systems (HIDS).

Network-Based Intrusion Detection Systems

Much like a protocol analyzer operating in promiscuous mode, NIDS capture and analyze network traffic. These devices diligently inspect each packet as it passes by. When they detect suspect traffic, the action taken depends on the particular NIDS. Alarms could be triggered, sessions could be reset, or traffic could be blocked. Among their advantages are that they are unobtrusive, they have the capability to monitor the entire network, and they provide an extra layer of defense between the firewall and the host. Their disadvantages include the fact that attackers can send high volumes of traffic to attempt to overload them, they cannot decrypt or analyze encrypted traffic, and they can themselves be vulnerable to attack. Attackers can also send traffic at very low rates to avoid tripping the IDS threshold alarms; tools like Nmap have the ability to vary timing to avoid detection. Things to remember about NIDS include the following:

Image They monitor network traffic in real time.

Image They analyze protocols and other relevant packet information.

Image They integrate with a firewall and define new rules as needed.

Image When used in a switched environment, they require the user to perform port spanning and/or port mirroring.

Image They send alerts or terminate an offending connection.

Image When encryption is used, a NIDS will not be able to analyze the traffic.

Host-Based Intrusion-Detection Systems

HIDS are more closely related to a virus scanner in their function and design because they are application-based programs that reside on the host computer. Running quietly in the background, they monitor traffic and attempt to detect suspect activity. Suspect activity can range from attempted system file modification to unsafe activation of ActiveX commands. Although they are effective in a fully switched environment and can analyze traffic that was encrypted on the network, they can require a lot of maintenance, cannot monitor network-wide traffic, and depend on the underlying operating system because they do not control core services. HIDS are best deployed on high-value targets that require protection. Things to remember about HIDS include the following:

Image HIDS consume some of the host’s resources.

Image HIDS analyze encrypted traffic.

Image HIDS send alerts when unusual events are discovered.

Image HIDS are in some ways just another application running on the local host that is subject to attack.
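One classic HIDS technique for catching the system file modification mentioned above is file integrity monitoring: hash each protected file at a known-good baseline and periodically compare current hashes against it. The following is a simplified sketch of that idea only (real host agents also watch permissions, registry keys, process activity, and more):

```python
import hashlib

def snapshot(paths):
    """Build a baseline of SHA-256 hashes for the given files."""
    baseline = {}
    for p in paths:
        with open(p, "rb") as f:
            baseline[p] = hashlib.sha256(f.read()).hexdigest()
    return baseline

def detect_modifications(baseline, paths):
    """Return the files whose current hash no longer matches the baseline."""
    return [p for p in paths if snapshot([p])[p] != baseline.get(p)]
```

In practice the baseline itself must be stored where an attacker on the host cannot rewrite it, or the check proves nothing.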

Signature-Based, Anomaly-Based, and Rule-Based IDS Engines

Signature-based, anomaly-based, and rule-based analysis are the three primary detection methods used by IDS systems. Each takes a different approach to detecting intrusions.

Signature-based engines rely on a database of known attacks and attack patterns. This system examines data to check for malicious content, which could include fragmented IP packets, streams of SYN packets (DoS), or malformed Internet Control Message Protocol (ICMP) packets. Any time data is found that matches one of these known signatures, it can be flagged to initiate further action. This might include an alarm, an alert, or a change to the firewall configuration. Although signature-based systems work well, their shortcoming is that they are only as effective as their most current update. Any time there is a new or varied attack, the IDS will be unaware of it and will ignore the traffic. The two subcategories of signature-based systems include the following:

Image Pattern-based—Looks at specific signatures that packets are compared to. The open-source IDS Snort started as a pattern-matching IDS.

Image State-based—A more advanced design that has the capability to track the state of the traffic and data as it moves between host and target.
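At its simplest, pattern-based matching is a search of packet payloads against a database of known byte sequences. The sketch below uses a few invented toy signatures to show the mechanic; real rule sets such as Snort's add protocol fields, offsets, and thresholds on top of raw content matching:

```python
# Hypothetical signature database: byte pattern -> alert name.
SIGNATURES = {
    b"\x90" * 16: "possible NOP sled (shellcode)",
    b"/etc/passwd": "sensitive-file access attempt",
    b"' OR '1'='1": "SQL injection probe",
}

def match_signatures(payload: bytes):
    """Return the names of all known signatures found in the payload."""
    return [name for pattern, name in SIGNATURES.items() if pattern in payload]
```

The shortcoming discussed above is visible here: an attack whose bytes are not in `SIGNATURES`, even a trivially altered variant of a known one, produces no match at all.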

A behavioral-based IDS observes traffic and develops a baseline of normal operations. Intrusions are detected by identifying activity outside the normal range. As an example, if Mike typically tries to log on only between the hours of 8 a.m. and 5 p.m., and now he’s trying to log on 5,000 times at 2 a.m., the IDS can trigger an alert that something is wrong. The big disadvantage of a behavioral-based IDS is that activity taught to it gradually over time is not seen as an attack, but merely as normal behavior; a patient attacker can train the system to accept malicious activity. These systems also tend to have a high number of false positives.
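The baseline idea can be reduced to simple statistics: keep a history of a metric such as log-on attempts per hour, and flag any new observation that falls several standard deviations above the historical mean. A minimal sketch, with an illustrative threshold and invented data:

```python
from statistics import mean, stdev

def is_anomalous(history, observed, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations above baseline.

    history: past per-hour counts (needs at least two samples for stdev).
    """
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu          # a flat baseline: any change is anomalous
    return (observed - mu) / sigma > threshold
```

Against a baseline of a handful of log-ons per hour, Mike's 5,000 attempts at 2 a.m. score far beyond any reasonable threshold, while a normal hour does not.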

The three subcategories of anomaly-based systems include the following:

Image Statistical-based—Compares normal to abnormal activity.

Image Traffic-based—Triggers on abnormal packets and data traffic.

Image Protocol-based—Possesses the capability to reassemble packets and look at higher-layer activity. If the IDS knows the normal activity of the protocol, it can pick out abnormal activity. Protocol-decoding intrusion detection requires the IDS to maintain state information. As an example, DNS is a two-step process; therefore, if a protocol-matching IDS sees a number of DNS responses that occur without a DNS request having ever taken place, the system can flag that activity as cache poisoning.
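The DNS cache-poisoning example can be expressed as a small state machine: track outstanding query transaction IDs and flag any response that arrives without a matching request. This sketch keeps only that single piece of state (real protocol-decoding IDSs track source/destination pairs, timers, and much more):

```python
def detect_unsolicited_responses(events):
    """events: list of ("query" | "response", transaction_id) pairs in arrival order.

    Returns the transaction IDs of responses that had no prior query,
    the pattern a protocol-based IDS might flag as cache poisoning.
    """
    outstanding, alerts = set(), []
    for kind, txid in events:
        if kind == "query":
            outstanding.add(txid)
        elif txid in outstanding:
            outstanding.discard(txid)    # matched response: normal two-step exchange
        else:
            alerts.append(txid)          # response with no request: suspicious
    return alerts
```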

An anomaly-based IDS often compares the behavior of a protocol against what the RFC states. For example, it will look at how the flags in a TCP packet are set during session setup—the SYN flag should be set to 1. In contrast, a behavioral-based IDS looks at system or environment activity that would be considered normal, such as “Mike typically tries to log on only between the hours of 8 a.m. and 5 p.m., and now he’s trying to log on 5,000 times at 2 a.m.” This requires the IDS to go through a learning phase before it can catch such anomalies. The military has used this approach for years to monitor its employees.

A rule-based IDS involves rules and pattern descriptors that observe traffic and develop a baseline of normal operations. Intrusions are detected by identifying activity outside the normal range. This expert system follows a four-phase analysis process:

1. Preprocessing

2. Analysis

3. Response

4. Refinement

All IDS systems share some basic components:

Image Sensors or Agents—Detect and send data to the system. Place the sensor where you want to monitor traffic. On a HIDS, there can be many agents that report back to a server in a large environment.

Image Central monitoring system—Processes and analyzes data sent from sensors.

Image Report analysis—Offers information about how to counteract a specific event.

Image Database and storage components—Perform trend analysis and store the IP address and information about the attacker.

Image Response box—Inputs information from the previously listed components and forms an appropriate response.


ExamAlert

Carefully read any questions that discuss IDS. Remember that several variables can change the outcome or potential answer. Take the time to underline such words as network, host, signature, and behavior to help clarify the question.


Sensor Placement

Your organization’s security policy should detail the placement of your IDS system and sensors. The placement of IDS sensors requires some consideration. IDS sensors can be placed externally, in the DMZ, or inside the network. Your decision to place a sensor in any one or more of these locations will require specific tuning. Without it, the sensor will generate alerts for all traffic that matches a given criteria, regardless of whether the traffic is indeed something that should generate an alert. The placement of your sensors is dynamic and must change as your environment changes. Sensors should not have an IP address associated with them, and they can be deployed via a one-way (receive-only) network cable, so that it is harder for an attacker to scan for and find them.


ExamAlert

Although an anomaly-based IDS can detect zero day attacks, signature-based and rule-based IDS cannot.


Intrusion Prevention Systems

Intrusion prevention systems (IPSs) build on the foundation of IDS and attempt to take the technology a step further. IPSs can react automatically and actually prevent a security occurrence from happening, preferably without user intervention. IPS is considered the next generation of IDS and can block attacks in real time. The National Institute of Standards and Technology (NIST) now uses the term IDP (Intrusion Detection and Prevention) to describe modern devices that combine the functionality of both IDS and IPS devices. These devices typically perform deep packet inspection and can inspect traffic from OSI Layer 3 through OSI Layer 7.

Network Access Control

IDS and IDP can be seen as just the start of access control and security. The next step in this area is Network Access Control (NAC), often implemented with standards such as IEEE 802.1X. NAC has grown out of the trusted computing movement and has the goal of unified security. NAC offers administrators a way to verify that devices meet certain health standards before allowing them to connect to the network. Laptops, desktop computers, or any device that doesn’t comply with predefined requirements can be prevented from joining the network or can even be relegated to a controlled network where access is restricted until the device is brought up to the required security standards. Currently, there are several different incarnations of NAC available, which include the following:

Image Infrastructure-based NAC—Requires an organization to upgrade its hardware and/or operating systems.

Image Endpoint-based NAC—Requires the installation of software agents on each network client. These devices are then managed by a centralized management console.

Image Hardware-based NAC—Requires the installation of a network appliance. The appliance monitors for specific behavior and can limit device connectivity should noncompliant activity be detected.

Keystroke Monitoring

Keystroke monitoring can be accomplished with hardware or software devices and is used to monitor activity. These devices can be used for both legal and illegal activity. As a compliance tool, keystroke monitoring allows management to monitor a user’s activity and verify compliance. The primary issue of concern is the user’s expectation of privacy. Policies and procedures should be in place to inform the user that such technologies can be used to monitor compliance. In 1993, the Department of Justice requested that NIST publish guidance on keystroke monitoring. This guidance can be found in NIST bulletin 93-03 (csrc.nist.gov/publications/nistbul/csl93-03.txt). A sample acceptable use policy is shown here:

This acceptable use policy defines the boundaries of the acceptable use of this organization’s systems and resources. Access to any company system or resources is a privilege that may be wholly or partially restricted without prior notice and without consent of the user. In cases of suspected violations or during the process of periodic review, employees can have activities monitored. Monitoring may involve a complete keystroke log of an entire session or sessions as needed to verify compliance with company policies and usage agreements.

Unfortunately, key logging is not just for the good guys. Hackers can use the same tools to monitor and record an individual’s activities. Although an outsider to a company might have some trouble getting one of these devices installed, an insider is in a prime position to plant a keystroke logger. Keystroke loggers come in two basic types:

Image Hardware keystroke loggers are usually installed while users are away from their desks and are completely undetectable, except for their physical presence. Just take a moment to consider when you last looked at the back of your computer. Even if you see it, a hardware keystroke logger can be overlooked because it resembles a balun or dongle. These devices are even available in wireless versions that can communicate via 802.11b/g/n/ac and Bluetooth. You can see one example at www.wirelesskeylogger.com/products.php.

Image Software keystroke loggers sit between the operating system and the keyboard. Most of these software programs are simple, but some are more complex and can even email the logged keystrokes back to a preconfigured address. What they all have in common is that they operate in stealth mode and can grab all the text, mouse clicks, and even all the URLs that a user enters.

Exam Prep Questions

1. Christine works for a government agency that is very concerned about the confidentiality of information. The government agency has strong controls for the process of identification, authentication, and authorization. Before Christine, the subject, can access her information, the security labels on objects and the clearances of subjects are verified. What is this an example of?

Image A. DAC

Image B. LBAC

Image C. RBAC

Image D. MAC

2. Which of the following biometric systems would be considered the most accurate?

Image A. Retina scan CER 3

Image B. Fingerprint CER 4

Image C. Keyboard dynamics CER 5

Image D. Voice recognition CER 6

3. What are the two primary components of a DAC?

Image A. Access rights and permissions, and security labels

Image B. File and data ownership, and access rights and permissions

Image C. Security labels and discretionary access lists

Image D. File and data ownership, and security labels

4. You have been hired as a contractor for a government agency. You have been cleared for secret access based on your need to know. Authentication, authorization, and accountability are also enforced. At the end of each week, the government security officer for whom you work is tasked with the review of security logs to ensure only authorized users have logged into the network and have not attempted to access unauthorized data. The process of ensuring accountability for access to an information system includes four phases. What is this an example of?

Image A. Identification

Image B. Accountability

Image C. Authorization

Image D. Authentication

5. When registering for a new service, you were asked the following questions: “What country were you born in? What’s your pet’s name? What is your mother’s maiden name?” What type of password system is being used?

Image A. Cognitive

Image B. One-time

Image C. Virtual

Image D. Complex

6. Mark has just completed his new peer-to-peer network for the small insurance office he owns. Although he will allow Internet access, he does not want users to log in remotely. Which of the following models most closely match his design?

Image A. TACACS+

Image B. MAC

Image C. RADIUS

Image D. DAC

7. Which of the following is the best answer: TACACS+ features what?

Image A. One-factor authentication

Image B. Decentralized access control

Image C. Two-factor authentication

Image D. Accountability

8. A newly hired junior security administrator will assume your position temporarily while you are on vacation. You’re trying to explain the basics of access control and the functionality of rule-based access control mechanisms like ACL. Which of the following best describes the order in which an ACL operates?

Image A. ACLs apply all deny statements before allow statements.

Image B. Rule-based access control and role-based access control is basically the same thing.

Image C. ACLs end with an implicit deny all statement.

Image D. ACLs are processed from the bottom up.

9. RADIUS provides which of the following?

Image A. Authorization and accountability

Image B. Authentication

Image C. Authentication, authorization, and accountability

Image D. Authentication and authorization

10. Which of the following is the best description of a situation where a user can sign up for a social media account such as Facebook, and then use their credentials to log in and access another organization’s sites, such as Yahoo?

Image A. Transitive trust

Image B. Federated ID

Image C. Non-transitive trust

Image D. Single sign-on

11. What type of attack targets pronounceable passwords?

Image A. Brute-force attacks

Image B. Dictionary attacks

Image C. Hybrid attacks

Image D. Rainbow tables

12. Which of the following represents the best method of password storage?

Image A. A cleartext file

Image B. Symmetric encryption

Image C. A one-way encryption process

Image D. An XOR process

13. Which access control model makes use of a join and a meet?

Image A. Rule-based access control

Image B. MAC

Image C. DAC

Image D. Lattice

14. Which of the following access control models is commonly used with firewall and edge devices?

Image A. Rule-based access control

Image B. MAC

Image C. DAC

Image D. Lattice

15. Because of recent highly publicized hacking news reports, senior management has become more concerned about security. As the senior security administrator, you are asked to suggest changes that should be implemented. Which of the following access methods should you recommend if the method is to be one that is primarily based on pre-established access, can’t be changed by users, and works well in situations where there is high turnover?

Image A. Discretionary access control

Image B. Mandatory access control

Image C. Rule-based access control

Image D. Role-based access control

Answers to Exam Prep Questions

1. D. MAC is correct because it uses security labels and clearances. A is not correct because DAC uses ACLs; B is not correct because LBAC is lattice-based access control, which uses upper and lower limits; C is incorrect because RBAC uses roles or tasks in an organization based on the organization’s security policy.

2. A. The lower the CER, the better; retina scan CER 3 (answer A) is correct. Fingerprint CER 4 (answer B), keyboard dynamics CER 5 (answer C), and voice recognition CER 6 (answer D) are incorrect because they have higher CERs. The CER is determined by combining Type I and Type II errors.

3. B. The two primary components of a DAC are file and data ownership, and access rights and permissions. With file and data ownership, all objects within a system must have an owner. Objects without an owner will be left unprotected. Access rights and permissions control the access rights of an individual. Variation exists, but a basic access control list checks read, write, and execute privileges. Answers A, C, and D are incorrect.

4. B. The four key areas of identity and access management are identification, authentication, authorization, and accountability. The fact that the security officer is reviewing the logs for accuracy is a form of accountability. Therefore, answers A, C, and D are incorrect.

5. A. Cognitive passwords are widely used during enrollment processes, when individuals call help desks, or when individuals request other services that require authentication. All other answers are incorrect: One-time passwords (answer B) are associated with tokens, virtual passwords (answer C) are a form of passphrase, and the question does not describe a complex password (answer D).

6. D. The discretionary access control (DAC) model is so named because access control is left to the owner’s discretion. This can be thought of as being similar to a peer-to-peer computer network. All other answers are incorrect: A MAC model (answer B) is static and based on a predetermined list of access privileges, and both TACACS+ (answer A) and RADIUS (answer C) are used for remote access and do not properly address the question.

7. C. TACACS+ features two-factor authentication. All other answers are incorrect: TACACS+ offers more than one-factor authentication (answer A); it is a centralized, not decentralized, access control system (answer B); and although it offers accountability (answer D), it also offers authorization.

8. C. ACLs have an implicit deny all statement. As an example, if the ACL only had the one statement “Deny ICMP any, any,” ICMP would be denied; however, the implicit “deny all” would block all other traffic. Answers A and D are incorrect as ACLs are processed from top to bottom. Answer B is incorrect because rule-based access control and role-based access control are not the same thing.

9. C. RADIUS provides three services: authentication, authorization, and accountability. RADIUS facilitates centralized user administration and keeps all user profiles in one location that all remote services share. Answers A, B, and D are incorrect because they do not fully answer the question.

10. B. Federation is an arrangement that can be made among multiple enterprises (such as Facebook and Yahoo) that lets subscribers of one service use the same identification/authentication credentials to gain access to the second organization’s resources. It differs from single sign-on (SSO) in that SSO is used within a single organization. Examples of SSO include Kerberos and SESAME. Answers A and C are incorrect as a transitive trust is a two-way relationship automatically created between parent and child domains, and a non-transitive trust is a trust that will not extend past the domains it was created with. Both of these terms are directly associated with Microsoft operating systems.

11. B. Dictionary attacks target pronounceable passwords. Brute-force attacks (answer A), hybrid attacks (answer C), and rainbow tables (answer D) are all used to target any password combination for A–Z, 0–9, special characters, or any combination.

12. C. The best way to store passwords is by means of a one-way process. This one-way process is also known as hashing, and is used by operating systems like Microsoft Windows and Linux. A cleartext file (answer A) can be easily exposed. Symmetric encryption (answer B) would allow the process to be easily reversed by anyone with a key. An XOR process (answer D) would only obscure the password and not provide any real protection.

13. D. The lattice model makes use of a join and a meet. The lattice-based access control model (LBAC) is considered a complex model used to manage the interaction between subjects and objects. Answers A, B, and C are incorrect as they do not use these terms.

14. A. Rule-based access control is used with firewalls and routers. RBAC is based on a specific set of rules, much like a router ACL. Answer B, MAC, makes use of labels and is well suited for high-security environments. Answer C, DAC, describes discretionary control, and answer D, lattice, is a complex model that makes use of upper and lower bounds.

15. D. Role-based access control (RBAC) allows specific people to be assigned to specific roles with specific privileges. It allows access to be assigned to groups and works well where there are high levels of turnover. Answers A, B, and C do not meet that description.

Suggested Reading and Resources

Zephyr Charts: www.cse.unr.edu/~bebis/CS790Q/Lect/Chapter_8.ppt

Honeypot resources: www.honeypots.net

Getting a grip on access control: www.owasp.org/index.php/Access_Control_Cheat_Sheet

Performance metrics for biometrics: www.biometric-solutions.com/index.php?story=performance_biometrics

Federated Identity: msdn.microsoft.com/en-us/library/aa479079.aspx

Differences between Kerberos and SESAME: cadse.cs.fiu.edu/corba/corbasec/faq/multi-page/node148.html

Comparison of Biometric Methods: mms.ecs.soton.ac.uk/2011/papers/4.pdf

RADIUS best practices: msdn.microsoft.com/en-us/library/bb742489.aspx
