Images Authentication and Remote Access


We should set a national goal of making computers and Internet access available for every American.


In this chapter, you will learn how to

Images   Identify the differences among user, group, guest, service accounts, and role management

Images   Implement account policies

Images   Describe methods of account management (SSO, time of day, logical token, SSH keys, smart cards, account expiration, lockout, disablement)

Images   Describe methods of access management (MAC, DAC, RBAC, and ABAC)

Images   Explain authentication methods and the security implications in their use

Images   Examine the use of biometrics technology for authentication

Images   Discuss the methods and protocols for remote access to networks

Images   Identify authentication, authorization, and accounting (AAA) protocols

Images   Implement virtual private networks (VPNs) and their security aspects

On single-user systems such as PCs, the individual user typically has access to most of the system’s resources, processing capability, and stored data. On multiuser systems, such as servers and mainframes, an individual user typically has very limited access to the system and the data stored on that system. An administrator responsible for managing and maintaining the multiuser system has much greater access. So how does the computer system know which users should have access to what data? How does the operating system know what applications a user is allowed to use? There are three steps in the establishment of proper privileges: authentication, authorization, and accounting. These terms are commonly combined and simply referred to as AAA.

Authentication is the process of verifying an identity previously established in a computer system. Authentication is commonly performed by matching a set of user-supplied credentials to previously stored credentials on a host machine (for example, an account username and password). Once the user is authenticated, the authorization step takes place. There are a variety of methods of performing this function, each with its advantages and disadvantages. Authentication methods and their advantages and disadvantages are described throughout the chapter.

Remote access is another key issue for multiuser systems in today’s world of connected computers. Isolated computers, not connected to networks or the Internet, are rare items these days. Except for some special-purpose machines, most computers need interconnectivity to fulfill their purpose. Remote access enables users outside a network to have network access and privileges as if they were inside the network. Being outside a network means that the user is working on a machine that is not physically connected to the network and must therefore establish a connection through a remote means, such as by dialing in, connecting via the Internet, or connecting through a wireless connection.

Images User, Group, and Role Management

To manage the privileges of many different people effectively on the same system, a mechanism for separating people into distinct entities (that is, users) is required, so you can control access on an individual level. At the same time, it’s convenient and efficient to be able to lump users together when granting many different people (that is, groups) access to a resource at the same time. At other times, it’s useful to be able to grant or restrict access based on a person’s job or function within the organization (that is, roles). While you can manage privileges on the basis of users alone, managing user, group, and role assignments together is far more convenient and efficient.


The term user generally applies to any person accessing a computer system. In privilege management, a user is a single individual, such as “John Forthright” or “Sally Jenkins.” This is generally the lowest level addressed by privilege management and the most common area for addressing access, rights, and capabilities. When accessing a computer system, each user is generally given a username—a unique alphanumeric identifier they will use to identify themselves when logging in to or accessing the system. When developing a scheme for selecting usernames, you should keep in mind that usernames must be unique to each user, but they must also be fairly easy for the user to remember and use.


Tech Tip

Usernames, Permissions, and Rights

A username is a unique alphanumeric identifier used to identify a user to a computer system. Permissions control what a user is allowed to do with objects on a computer system—what files they can open, what printers they can use, and so on. In Windows security models, permissions define the actions a user can perform on an object (open a file, delete a folder, and so on). Rights define the actions a user can perform on the system itself, such as change the time, adjust auditing levels, and so on. Rights are typically applied to operating system–level tasks.

With some notable exceptions, in general a user who wants to access a computer system must first have a username created for them on the system they want to use. This is usually done by a system administrator, security administrator, or other privileged user, and this is the first step in privilege management—a user should not be allowed to create their own account.

Once the account is created and a username is selected, the administrator can assign specific permissions to that user. Permissions control what the user is allowed to do with objects on the system—which files they may access, which programs they may execute, and so on. Whereas PCs typically have only one or two user accounts, larger systems such as servers and mainframes can have hundreds of accounts on the same system. Figure 11.1 shows the Users tab of the Computer Management utility on a Windows Server system. Note that several user accounts have been created on this system, each identified by a unique username.


Figure 11.1 Users tab on a Windows Server system


Auditing user accounts, group membership, and password strength on a regular basis is an extremely important security control. Many compliance audits focus on the presence or lack of industry-accepted security controls.


Tech Tip

Generic Accounts

Generic accounts are accounts without a named user behind them. These can be employed for special purposes, such as running services and batch processes, but because they cannot be attributed to an individual, they should not have login capability. If they have elevated privileges, it is also important to continually monitor their activities, comparing the functions they actually perform against those they are expected to perform. General use of generic accounts should be avoided because of the increased risk associated with the lack of attribution capability.

A few “special” user accounts don’t typically match up one-to-one with a real person. These accounts are reserved for special functions and typically have much more access and control over the computer system than the average user account. Two such accounts are the administrator account under Windows and the root account under Linux. These are called privileged accounts because their privileges are elevated. These accounts are not typically assigned to a specific individual and are restricted, accessed only when the full capabilities of the account are required.

Due to the power possessed by these accounts, and the few, if any, restrictions placed on them, they must be protected with strong passwords that are not easily guessed or obtained. These accounts are also the most common targets of attackers—if the attacker can gain root access or assume the privilege level associated with the root account, they can bypass most access controls and accomplish anything they want on that system.

Another account that falls into the “special” category is the system account used by Windows operating systems. The system account has the same file privileges as the administrator account and is used by the operating system and by services that run under Windows. By default, the system account is granted full control to all files on an NTFS volume. Services and processes that need the capability to log on internally within Windows will use the system account—for example, the DNS Server and DHCP Server services in Windows Server use the Local System account.

Shared and Generic Accounts/Credentials

Shared accounts go against the basic tenet that accounts exist so that user activity can be tracked. That said, there are times when guest accounts are used, especially in situations where guest access is limited to a defined set of functions and specific tracking is not particularly useful. Sometimes shared accounts are called generic accounts and exist only to provide a specific set of functionality, such as a PC running in kiosk mode, with a browser limited to specific sites as an information display. Under these circumstances, being able to trace the activity to a user is not particularly useful.

Guest Accounts

Guest accounts are frequently used on corporate networks to provide visitors access to the Internet and to some common corporate resources, such as projectors, printers in conference rooms, and so on. Again, these types of accounts are restricted in their network capability to a defined set of machines, with a defined set of access, much like a user from the Internet visiting the organization’s publicly facing website. As such, logging and tracing activity have little to no use, so the overhead of establishing an individual account does not make sense.


Tech Tip

Guest Accounts

Guest accounts are granted limited permissions and access. They are used primarily for visitors. It is common practice to disable guest accounts, as well as other default accounts, when not in use. If guest accounts are used for wireless access, it is important to change their passwords to prevent the user of the account from returning after the period of authorization ends.

Service Accounts

Service accounts are accounts used to run processes that do not require human intervention to start, stop, or administer. From batch jobs that run in a data center to simple tasks run across the enterprise for compliance objectives, the reasons for these processes are many, but there is no real need for a human accountholder. One thing you can do with these accounts on Windows systems is deny them the right to log in interactively, which limits some of the attack vectors that can be applied to them. Another security provision is to apply time restrictions to accounts that run batch jobs at night and then monitor when they run. Any service account that has to run with elevated privileges should receive extra monitoring and scrutiny.


Service accounts run without human intervention and are granted only enough permission to run the services they support.

Privileged Accounts

Privileged accounts are any accounts with greater-than-normal user access. Privileged accounts are typically root or admin-level accounts and represent risk in that they are nearly unlimited in their powers. These accounts require regular real-time monitoring, if at all possible, and should always be monitored when operating remotely. There may be legitimate occasions when system administrators act via a remote session, but when they do, the purpose should be known and approved.


Tech Tip

Onboarding and Offboarding
Onboarding and offboarding involve the bringing of personnel on and off a project or team. During onboarding, proper account relationships need to be managed. New members can be put into the correct groups; then, when offboarded, they can be removed from the groups. This is one way in which groups can be used to manage permissions, which can be very efficient when users move between units and tasks.


Under privilege management, a group is a collection of users with some common criteria, such as a need for access to a particular data set or group of applications. A group can consist of one user or hundreds of users, and each user can belong to one or more groups. Figure 11.2 shows a common approach to grouping users—building groups based on job function.


Figure 11.2 Logical representation of groups

By assigning membership in a specific group to a user, you make it much easier to control that user’s access and privileges. For example, if every member of the engineering department needs access to product development documents, administrators can place all the users in the engineering department in a single group and allow that group to access the necessary documents. Once a group is assigned permissions to access a particular resource, adding a new user to that group will automatically allow that user to access that resource. In effect, the user “inherits” the permissions of the group as soon as they are placed in that group. As Figure 11.3 shows, a computer system can have many different groups, each with its own rights and permissions.


Figure 11.3 Groups tab on a Windows Server system

As you can see from the description for the Administrators group in Figure 11.3, this group has complete and unrestricted access to the system. This includes access to all files, applications, and data sets. Anyone who belongs to the Administrators group or is placed in this group will have a great deal of access and control over the system.

Some operating systems, such as Windows, have built-in groups—groups that are already defined within the operating system, such as Administrators, Power Users, and Everyone. The whole concept of groups revolves around making the tasks of assigning and managing permissions easier, and built-in groups certainly help to make these tasks easier. Individual user accounts can be added to built-in groups, allowing administrators to grant permission sets to users quickly and easily without having to specify permissions manually. For example, adding the user account named “bjones” to the Power Users group gives bjones all the permissions assigned to the built-in Power Users group, such as installing drivers, modifying settings, and installing software.
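The way a user “inherits” permissions from group membership can be sketched as the union of all of their groups’ permission sets. The group names and permission strings below are hypothetical illustrations, not Windows’ actual built-in definitions:

```python
# Hypothetical sketch of group-based permission inheritance: a user's
# effective permissions are the union of the permissions of every group
# they belong to.
GROUP_PERMISSIONS = {
    "Engineering": {"read:product_docs", "write:product_docs"},
    "Power Users": {"install:drivers", "install:software"},
}

USER_GROUPS = {
    "bjones": ["Power Users"],
    "sjenkins": ["Engineering", "Power Users"],
}

def effective_permissions(username: str) -> set:
    """Collect the union of permissions across all of a user's groups."""
    perms = set()
    for group in USER_GROUPS.get(username, []):
        perms |= GROUP_PERMISSIONS.get(group, set())
    return perms

print(sorted(effective_permissions("bjones")))
# ['install:drivers', 'install:software']
```

Adding “bjones” to a second group would change nothing in the lookup logic; the new group’s permissions would simply join the union, which mirrors how group assignment grants access without per-user permission edits.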


Another common method of managing access and privileges is by roles. A role is usually synonymous with a job or set of functions. For example, the role of security admin in Microsoft SQL Server may be applied to someone who is responsible for creating and managing logins, reading error logs, and auditing the application. Security admins need to accomplish specific functions and need access to certain resources that other users do not—for example, they need to be able to create and delete logins, open and read error logs, and so on. In general, anyone serving in the role of security admin needs the same rights and privileges as every other security admin. For simplicity and efficiency, rights and privileges can be assigned to the role security admin, and anyone assigned to fulfill that role automatically has the correct rights and privileges to perform the required tasks.

Images Account Policies

One of the key elements to guide security professionals in daily tasks is a good set of policies. Many issues are associated with these daily tasks, and leaving many of the decisions up to individual workers quickly leads to inconsistent results. Policies are needed for a wide range of elements, from naming conventions to operating rules, such as audit frequency and other specifics. Having these issues resolved as a matter of policy enables security professionals to go about the task of verifying and monitoring systems, rather than trying to adjudicate policy-type questions with each user case that comes along.

Account Policy Enforcement

The primary method of account policy enforcement used in most access systems is still one based on passwords. The concepts of each user ID being traceable to a single person’s activity and no sharing of passwords and credentials form the foundation of a solid account policy. Passwords need to be managed to provide appropriate levels of protection. They need to be strong enough to resist attack, and yet not too difficult for users to remember. A password policy can act to ensure that the necessary steps are taken to enact a secure password solution, both by users and by the password infrastructure system.


Cross Check

Password Policies

Password policies, along with many other important security policies, are covered in detail in Chapter 3.

Domain Passwords

A domain password policy is a password policy for a specific domain. Because these policies are usually associated with the Windows operating system, a domain password policy is implemented and enforced on the domain controller, which is a computer that responds to security authentication requests, such as logging in to a computer, for a Windows domain. The domain password policy usually falls under a group policy object (GPO) and has the following elements (see Figure 11.4):


Figure 11.4 Password policy options in Windows Local Security Policy

Images   Enforce password history  Tells the system how many passwords to remember and does not allow a user to reuse an old password.

Images   Maximum password age  Specifies the maximum number of days a password may be used before it must be changed.

Images   Minimum password age  Specifies the minimum number of days a password must be used before it can be changed again.

Images   Minimum password length  Specifies the minimum number of characters that must be used in a password.

Images   Password must meet complexity requirements  Specifies that the password must meet the minimum length requirement and have characters from at least three of the following four groups: English uppercase characters (A through Z), English lowercase characters (a through z), numerals (0 through 9), and non-alphanumeric characters (such as !, $, #, and %).


Research from NIST indicates that password complexity rules designed to force entropy into passwords do so at the risk of other, less desirable password behaviors by users, such as writing passwords down or versioning them with an incrementing number. The latest NIST guidance (Special Publication 800-63B, June 2017) is that long passphrases offer the best protection. Proper practice today is to not rely on passwords alone but rather to use multifactor authentication. Users should also employ password managers to manage their passwords.

Images   Store passwords using reversible encryption  Reversible encryption is a form of encryption that can easily be decrypted and is essentially the same as storing a plaintext version of the password (because it’s so easy to reverse the encryption and get the password). This should be used only when applications use protocols that require the user’s password for authentication, such as the Challenge-Handshake Authentication Protocol (CHAP).
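The complexity requirement described above can be illustrated with a short check: minimum length plus characters from at least three of the four groups. This is a hypothetical sketch of the rule, not Microsoft’s actual implementation:

```python
import re

def meets_complexity(password: str, min_length: int = 8) -> bool:
    """Check a password against Windows-style complexity rules:
    minimum length plus characters from at least three of four groups."""
    if len(password) < min_length:
        return False
    groups = [
        r"[A-Z]",        # English uppercase characters
        r"[a-z]",        # English lowercase characters
        r"[0-9]",        # numerals
        r"[^A-Za-z0-9]",  # non-alphanumeric characters (!, $, #, %, ...)
    ]
    matched = sum(1 for g in groups if re.search(g, password))
    return matched >= 3

print(meets_complexity("Tr0ub4dor!"))  # True: long enough, all four groups
print(meets_complexity("password"))    # False: only lowercase letters
```

Note that a password can pass this check while still being weak (for example, a dictionary word with predictable substitutions), which is part of the motivation behind the NIST guidance above.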


Not only is it essential to ensure every account has a strong password, but it is also essential to disable or delete unnecessary accounts. If your system does not need to support guest or anonymous accounts, then disable them. When user or administrator accounts are no longer needed, remove or disable them. As a best practice, all user accounts should be audited periodically to ensure there are no unnecessary, outdated, or unneeded accounts on your systems.

Domains are logical groups of computers that share a central directory database, known as the Active Directory database for the more recent Windows operating systems. The database contains information about the user accounts and security information for all resources identified within the domain. Each user within the domain is assigned their own unique account (that is, a domain is not a single account shared by multiple users), which is then assigned access to specific resources within the domain. In operating systems that provide domain capabilities, the password policy is set in the root container for the domain and applies to all users within that domain. Setting a password policy for a domain is similar to setting other password policies in that the same critical elements need to be considered (password length, complexity, life, and so on). If a change to one of these elements is desired for a group of users, a new domain needs to be created because the domain is considered a security boundary. In a Windows operating system that employs Active Directory, the domain password policy can be set in the Active Directory Users and Computers menu in the Administrative Tools section of the Control Panel.


Tech Tip

Calculating Unique Password Combinations

One of the primary reasons administrators require users to have longer passwords that use upper- and lowercase letters, numbers, and at least one “special” character is to help deter password-guessing attacks. One popular password-guessing technique, called a brute force attack, uses software to guess every possible password until one matches a user’s password. Essentially, a brute force attack tries a, then aa, then aaa, and so on, until it runs out of combinations or gets a password match. Increasing both the pool of possible characters that can be used in the password and the number of characters required in the password can exponentially increase the number of “guesses” a brute force program needs to perform before it runs out of possibilities. For example, if our password policy requires a three-character password that uses only lowercase letters, there are only 17,576 possible passwords (with 26 possible characters, a password that’s three characters long equates to 26^3 combinations). Requiring a six-character password increases that number to 308,915,776 possible passwords (26^6). An eight-character password with upper- and lowercase letters, a special symbol, and a number increases the possible passwords to 70^8, or over 576 trillion combinations.

Precomputed hashes in rainbow tables can also be used to quickly crack shorter passwords. As password length increases, so does the size of the required rainbow table.
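The keyspace arithmetic in the tip above can be verified directly:

```python
def keyspace(alphabet_size: int, length: int) -> int:
    """Number of possible passwords of a fixed length drawn from a
    character pool of the given size."""
    return alphabet_size ** length

# Three lowercase letters: 26^3
print(keyspace(26, 3))             # 17576
# Six lowercase letters: 26^6
print(keyspace(26, 6))             # 308915776
# Eight characters from a ~70-character pool of upper, lower,
# digits, and symbols: 70^8
print(f"{keyspace(70, 8):,}")      # 576,480,100,000,000
```

The jump from 26^3 to 70^8 illustrates why both length and character-pool size matter: each additional character multiplies the work a brute force attack must do by the size of the pool.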

Single Sign-On

To use a system, users must be able to access it, which they usually do by supplying their user IDs (or usernames) and corresponding passwords. As any security administrator knows, the more systems a particular user has access to, the more passwords that user must have and remember. The natural tendency for users is to select passwords that are easy to remember, or even the same password for use on the multiple systems they access. Wouldn’t it be easier for the user simply to log in once and have to remember only a single, good password? This is made possible with a technology called single sign-on.

Single sign-on (SSO) is a form of authentication that involves the transferring of credentials between systems. As more and more systems are combined in daily use, users are forced to have multiple sets of credentials. A user may have to log in to three, four, five, or even more systems every day just to do their job. Single sign-on allows a user to transfer their credentials so that logging in to one system acts to log them in to all of the systems. Once the user has entered a user ID and password, the single sign-on system passes these credentials transparently to other systems so that repeated logons are not required. Put simply, you supply the right username and password once and you have access to all the applications and data you need, without having to log in multiple times and remember many different passwords. From a user standpoint, SSO means you need to remember only one username and one password. From an administration standpoint, SSO can be easier to manage and maintain. From a security standpoint, SSO can be even more secure, as users who need to remember only one password are less likely to choose something too simple or something so complex they need to write it down. The following is a logical depiction of the SSO process (see Figure 11.5):


Figure 11.5 Single sign-on process

1.   The user signs in once, providing a username and password to the SSO server.

2.   The SSO server provides authentication information to any resource the user accesses during that session. The server interfaces with the other applications and systems—the user does not need to log in to each system individually.

In reality, SSO is usually a little more difficult to implement than vendors would lead you to believe. To be effective and useful, all your applications need to be able to access and use the authentication provided by the SSO process. The more diverse your network, the less likely this is to be the case. If your network, like most, contains different operating systems, custom applications, and a diverse user base, SSO may not even be a viable option.


Tech Tip

Heartbleed and Password Reuse
In 2014, a vulnerability that could cause user credentials to be exposed was discovered in millions of systems. Called the Heartbleed incident, this flaw in the OpenSSL library resulted in numerous users being told to change their passwords because of potential compromise. Users were also warned of the dangers of reusing passwords across different accounts. Although reuse makes passwords easier to remember, it also improves an attacker’s chances of compromising multiple accounts. What made the effort of protecting passwords particularly challenging is that the vulnerability was widespread—virtually any system running an affected version of OpenSSL—and the patching rate was uneven, so people could suffer multiple exposures over time. After one year, an estimated 40 percent of all compromised systems remained unpatched. This highlights the importance of not reusing passwords across multiple accounts.

Credential Management

Credential management refers to the processes, services, and software used to store, manage, and log the use of user credentials. Credential management solutions are typically aimed at assisting end users in managing their growing set of passwords. There are credential management products that provide a secure means of storing user credentials and making them available across a wide range of platforms, from local stores to cloud storage locations.

Group Policy

Microsoft Windows systems in an enterprise environment can be managed via group policy objects (GPOs). GPOs act through a set of registry settings that can be managed via the enterprise. A wide range of settings can be managed via GPOs, many of which are related to security, including user credential settings such as password rules.

Standard Naming Convention

Agreeing on a standard naming convention is one topic that can spark controversy even among professionals who agree on most things. Having a standard naming convention has advantages in that it enables users to extract meaning from a name. Having servers with “dev,” “test,” and “prod” as part of their names can prevent inadvertent changes caused by a user misidentifying an asset. By the same token, calling out privileges (say, appending “SA” to the end of usernames with system administrator privileges) creates two potential problems. First, it alerts adversaries to which accounts are the most valuable. Second, it creates a problem when the person is no longer a member of the system administrators group, as the account must then be renamed.

One aspect everyone does agree on is the concept of leaving room for the future. The simplest example is in the numbering of accounts. For instance, suppose e-mail usernames are built from first initial plus last name, plus a digit for repeats. Will we ever have more than 10 John Smiths? You might be surprised, as Joan Smiths and Jack Smiths also draw from the same pool, and the pool is further depleted by the fact that old accounts are inactivated, not reused. So plan plenty of room for growth into any naming scheme.
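A first-initial-plus-last-name scheme with a numeric suffix for collisions might be sketched as follows (the helper function and its names are hypothetical):

```python
def make_username(first: str, last: str, taken: set) -> str:
    """Build a username from first initial plus last name, appending a
    numeric suffix on collision. The suffix is unbounded rather than a
    single digit, leaving room for future growth."""
    base = (first[0] + last).lower()
    if base not in taken:
        return base
    n = 1
    while f"{base}{n}" in taken:
        n += 1
    return f"{base}{n}"

# John Smith and Joan Smith already hold jsmith and jsmith1
existing = {"jsmith", "jsmith1"}
print(make_username("Jack", "Smith", existing))  # jsmith2
```

Because inactive accounts stay in the taken set rather than being reused, the counter only ever grows, which is exactly why the scheme must not assume a single trailing digit will suffice.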

Account Maintenance

Account maintenance is not the sexiest job in the security field. But then again, traffic cops have boring lives as well—until you realize that roughly half of all felons are arrested on simple traffic stops. The same is true with account maintenance—no, we aren’t catching felons, but we do find errors that otherwise only increase risk and, because of their nature, are hard to defend against any other way. Account maintenance is the routine screening of all attributes for an account. Is the business purpose for the account still valid—that is, is the user still employed? Is the business process for a system account still occurring? Are the actual permissions associated with the account appropriate for the account holder? Best practice indicates that this be performed in accordance with the risk associated with the profile. System administrators, and other privileged accounts, need greater scrutiny than normal users. Shared accounts, such as guest accounts, also require scrutiny to ensure they are not abused.

For some high-risk situations, such as unauthenticated guest accounts being granted administrator privilege, an automated check can be programmed and run on a regular basis. In Active Directory, it is also possible for the security group to be notified any time a user is granted domain admin privilege. It is important to note that determining who should have what access is a job that belongs to the business, not the security group. The business side of the house is where the policy decision on who should have access is made; the security group merely takes the steps to enforce this decision. Account maintenance is a joint responsibility.

Usage Auditing and Review

As with all security controls, monitoring is an important component used to mitigate risk. Logs are the most frequently used monitoring component, and with respect to privileged accounts, logging can be especially important. Usage auditing and review is just that: an examination of logs to determine user activity. Reviewing access control logs for root-level accounts is an important element of securing access control methods. Because of their power and potential for misuse, administrative or root-level accounts should be closely monitored. One important target for continuous monitoring is the use of administrative-level accounts on production systems.


Logging and monitoring of failed login attempts provides valuable information during investigations of compromises.

A strong configuration management environment will include the control of access to production systems by users who can change the environment. Root-level changes in a system tend to be significant changes, and in production systems these changes would be approved in advance. A comparison of all root-level activity against approved changes will assist in the detection of activity that is unauthorized.


Tech Tip

Account Recertification

User accounts should periodically be recertified as necessary. The process of account recertification can be as simple as a check against current payroll records to ensure all users are still employed, or as intrusive as having users identify themselves again. The latter is highly intrusive but may be warranted for high-risk accounts. The process of recertification ensures that only users needing accounts have accounts in the system.

Account Audits

Account audits are audits like all other audits—an independent verification that the policies associated with the accounts are being followed. An independent auditor can check all of the elements of policies. Passwords can be checked using a password cracker—if it breaks the password, odds are the user wasn’t following the rules. The various restrictions, such as account lockout and reuse, can be checked. An auditor can verify that all the authorized users are still with the firm or in an authorized capacity. Audits work to ensure the implementation of policies is actually working to specification.

Time-of-Day Restrictions

Some organizations need to tightly control certain users, groups, or even roles and limit access to certain resources to specific days and times. Most server-class operating systems enable administrators to implement time-of-day restrictions that limit when a user can log in, when certain resources can be accessed, and so on. Time-of-day restrictions are usually specified for individual accounts, as shown in Figure 11.6.


Figure 11.6 Logon hours for Guest account

From a security perspective, time-of-day restrictions can be very useful. If a user normally accesses certain resources during normal business hours, an attempt to access those resources outside this time period (at night or on the weekend) might indicate an attacker has gained, or is trying to gain, access to that account. Specifying time-of-day restrictions can also serve as a mechanism to enforce internal controls on critical or sensitive resources. An obvious drawback to enforcing time-of-day restrictions is that a user can't come in outside of normal hours to "catch up" on work tasks. As with all security policies, usability and security must be balanced in this policy decision.


Be careful implementing time-of-day restrictions. Some operating systems give you the option of disconnecting users as soon as their “allowed login time” expires, regardless of what the user is doing at the time. The more commonly used approach is to allow currently logged-in users to stay connected but reject any login attempts that occur outside of allowed hours.
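The approach described in the note above can be made concrete with a minimal Python sketch. The account name and allowed window below are illustrative assumptions, not values from the text; real systems store these settings per account in directory or policy objects. Only fresh login attempts are checked, matching the more common approach of leaving existing sessions connected.

```python
from datetime import time

# Hypothetical allowed-hours table; the account name and window are
# illustrative only. Accounts not listed have no restriction.
ALLOWED_HOURS = {
    "contractor": (time(8, 0), time(18, 0)),  # 8 A.M. to 6 P.M.
}

def login_permitted(account: str, now: time) -> bool:
    """Reject new login attempts outside the account's allowed window.

    Existing sessions are left alone (the more commonly used approach);
    only fresh login attempts are checked against the window.
    """
    window = ALLOWED_HOURS.get(account)
    if window is None:
        return True  # no restriction configured for this account
    start, end = window
    return start <= now <= end

print(login_permitted("contractor", time(9, 30)))  # within hours -> True
print(login_permitted("contractor", time(22, 0)))  # after hours -> False
```

Note that the disconnect-on-expiry behavior some operating systems offer would require an additional check against currently active sessions, which this sketch deliberately omits.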

Impossible Travel Time/Risky Login

Successful logins to an account can record many elements of information, including where the login came from. This "where" can be a machine in a network or even a geographic location. Using this metadata, some interesting items can be calculated. If a login occurs from one location while the user is already logged in from another, is it possible for the user to be in two places at the same time? Likewise, if the second login occurs from a geographically distant location, is there enough time between the logins to actually travel that far? These are cases of risky logins, or examples of impossible travel time. Applications exist that can detect these anomalies and present the information so you can decide whether the second login should be allowed. What should govern these decisions is a policy that specifically addresses these conditions.

Elements of the policy are not simple because, while a remote login from a continent away might be easy to deny, what about two overlapping logins in the same building? Is it against policy for a user to have one system logged in with the screen locked and then move to a different system? In some high-security environments, this second case might be blocked by policy, whereas in less restrictive environments, the usability of multiple logins might be allowed. This is why a policy is needed: to coordinate management across all of these differing conditions rather than leaving it to the security technician's discretion as they configure appliances and access control systems.
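As a rough sketch of how an impossible-travel detector might work, the following Python computes the great-circle distance between two login locations and flags the pair when the implied travel speed exceeds a threshold. The coordinates, timestamps, and 900 km/h threshold (roughly airliner speed) are illustrative assumptions, not values from the text.

```python
import math
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(login_a, login_b, max_speed_kmh=900.0):
    """Flag a pair of logins whose implied travel speed exceeds the limit.

    Each login is a (datetime, latitude, longitude) tuple.
    """
    (t1, lat1, lon1), (t2, lat2, lon2) = sorted([login_a, login_b])
    hours = (t2 - t1).total_seconds() / 3600.0
    distance = haversine_km(lat1, lon1, lat2, lon2)
    if hours == 0:
        return distance > 0  # simultaneous logins from different places
    return distance / hours > max_speed_kmh

# New York at 9:00, then London at 10:30 the same day: far too fast.
a = (datetime(2024, 1, 1, 9, 0), 40.71, -74.01)
b = (datetime(2024, 1, 1, 10, 30), 51.51, -0.13)
print(impossible_travel(a, b))  # True
```

A real detector would also have to account for VPN exit points and geolocation error, which is part of why the policy, not the tool, should drive the allow/deny decision.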

Account Expiration

In addition to all the other methods of controlling and restricting access, most modern operating systems allow administrators to specify the length of time an account is valid and when it “expires” or is disabled. Account expiration is the setting of an ending time for an account’s validity. This is a great method for controlling temporary accounts, or accounts for contractors or contract employees. For these accounts, the administrator can specify an expiration date; when the date is reached, the account automatically becomes locked out and cannot be logged in to without administrator intervention. A related action can be taken with accounts that never expire: they can automatically be marked “inactive” and locked out if they have been unused for a specified number of days. Account expiration is similar to password expiration, in that it limits the time window of potential compromise. When an account has expired, it cannot be used unless the expiration deadline is extended.
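The expiration and inactivity rules described above can be sketched in a few lines of Python. The 90-day inactivity limit is an illustrative value; real systems make both the expiration date and the inactivity window configurable per account.

```python
from datetime import date, timedelta

# Illustrative policy value; real systems make this configurable.
INACTIVITY_LIMIT_DAYS = 90

def account_locked(today, expires=None, last_login=None):
    """An account is locked out once its expiration date passes, or once
    a non-expiring account has been unused past the inactivity limit."""
    if expires is not None and today > expires:
        return True
    if expires is None and last_login is not None:
        return (today - last_login) > timedelta(days=INACTIVITY_LIMIT_DAYS)
    return False

# A contractor account that expired yesterday:
print(account_locked(date(2024, 6, 2), expires=date(2024, 6, 1)))  # True
```

Unlocking either kind of account would require administrator intervention, matching the behavior described in the text.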


Tech Tip

Disabling Accounts

An administrator has several options for ending a user's access (for instance, upon termination or offboarding). The best option is to disable the account but leave it in the system. This preserves account permission chains and prevents reuse of the user ID, which could otherwise cause confusion later when examining logs.

Similarly, organizations must define whether accounts are deleted or disabled when no longer needed. Deleting an account removes the account from the system permanently, whereas disabling an account leaves it in place but marks it as unusable. Many organizations disable an account for a period of time after an employee departs (30 or more days) prior to deleting the account. This prevents anyone from using the account and allows administrators to reassign files, forward mail, and “clean up” before taking any permanent actions on the account.

Privileged Access Management

Privilege management is the process of restricting a user’s ability to interact with the computer system. Essentially, everything a user can do to or with a computer system falls into the realm of privilege management. Privilege management occurs at many different points within an operating system or even within applications running on a particular operating system.

Privileged accounts are any accounts with greater-than-normal user access. Privileged accounts are typically root- or administrative-level accounts and represent risk in that their powers are largely unlimited. These accounts require real-time monitoring where possible and should always be monitored when operating remotely. Administrators may need to perform tasks via a remote session in certain scenarios, but when they do, they first need to identify the purpose and get approval.

Privileged access management is a combination of the policies, procedures, and technologies for controlling access to and use of elevated or privileged accounts. This enables the organization to log and control privileged access across the entire environment. The primary purpose is to limit the attack surface that these accounts have, and to minimize exposure based on current operational needs and conditions.

Images Authorization

Authorization is the process of permitting or denying access to a specific resource. Once identity is confirmed via authentication, specific actions can be authorized or denied. Many types of authorization schemes are used, but the purpose is the same: determine whether a given user who has been identified has permissions for a particular object or resource being requested. This functionality is frequently part of the operating system and is transparent to users.

The separation of tasks, from identification to authentication to authorization, has several advantages. Many methods can be used to perform each task, and on many systems several methods are concurrently present for each task. Separation of these tasks into individual elements allows combinations of implementations to work together. Any system or resource, be it hardware (router or workstation) or a software component (database system), that requires authorization can use its own authorization method once authentication has occurred. This makes for efficient and consistent application of these principles.

Accounting is the process of ascribing resource usage to an account for the purpose of tracking utilization, a basic function still used by many enterprises. Accounting can include the collection of billing and other detail records. Network access is often a billable function, and a log of how much time, bandwidth, file transfer space, or other resources were used needs to be maintained. Other accounting functions include keeping detailed security logs to maintain an audit trail of tasks being performed.


Tech Tip

AAA Framework

Authentication is the process of validating an identity. Authorization is the process of permitting or denying access to resources. Accounting is the process of keeping track of the resources a user accesses. Together, they make up the AAA framework for identity access security.

Access Control

The term access control has been used to describe a variety of protection schemes. It sometimes refers to all security features used to prevent unauthorized access to a computer system or network—or even a network resource such as a printer. In this sense, it may be confused with authentication. More properly, access is the ability of a subject (such as an individual or a process running on a computer system) to interact with an object (such as a file or hardware device). Once the individual has verified their identity, access controls regulate what the individual can actually do on the system. Just because a person is granted entry to the system does not mean they should have access to all the data the system contains.


Tech Tip

Access Control vs. Authentication

It may seem that access control and authentication are two ways to describe the same protection mechanism. This, however, is not the case. Authentication provides a way to verify to the computer who the user is. Once the user has been authenticated, the access controls decide what operations the user can perform. The two go hand-in-hand but are not the same thing.

Security Controls and Permissions

If multiple users share a computer system, the system administrator likely needs to control who is allowed to do what when it comes to viewing, using, or changing system resources. Although operating systems vary in how they implement these types of controls, most operating systems use the concepts of permissions and rights to control and safeguard access to resources. As we discussed earlier, permissions control what a user is allowed to do with objects on a system, and rights define the actions a user can perform on the system itself. Let’s examine how the Windows operating systems implement this concept.

The Windows operating systems use the concepts of permissions and rights to control access to files, folders, and information resources. When using the NTFS filesystem, administrators can grant users and groups permission to perform certain tasks as they relate to files, folders, and Registry keys. The basic categories of NTFS permissions are as follows:


Permissions can be applied to a specific user or group to control that user or group’s ability to view, modify, access, use, or delete resources such as folders and files.

Images   Full Control  A user/group can change permissions on the folder/file, take ownership if someone else owns the folder/file, delete subfolders and files, and perform actions permitted by all other NTFS folder permissions.

Images   Modify  A user/group can view and modify files/folders and their properties, can delete and add files/folders, and can delete properties from or add properties to a file/folder.

Images   Read & Execute  A user/group can view the file/folder and can execute scripts and executables, but they cannot make any changes (files/folders are read-only).

Images   List Folder Contents  A user/group can list only what is inside the folder (applies to folders only).

Images   Read  A user/group can view the contents of the file/folder and the file/folder properties.

Images   Write  A user/group can write to the file or folder.

Figure 11.7 shows the permissions on a folder called Data from a Windows Server system. In the top half of the Permissions window are the users and groups that have permissions for this folder. In the bottom half of the window are the permissions assigned to the highlighted user or group.


Figure 11.7 Permissions for the Data folder

The Windows operating system also uses user rights or privileges to determine what actions a user or group is allowed to perform or access. These user rights are typically assigned to groups, as it is easier to deal with a few groups than to assign rights to individual users, and they are usually defined in either a group or a local security policy. The list of user rights is quite extensive, but here are a few examples of user rights:

Images   Log on locally  Users/groups can attempt to log on to the local system itself.

Images   Access this computer from the network  Users/groups can attempt to access this system through the network connection.

Images   Manage auditing and security log  Users/groups can view, modify, and delete auditing and security log information.

Rights tend to be actions that deal with accessing the system itself, process control, logging, and so on. Figure 11.8 shows the user rights contained in the Local Security Policy on a Windows system.


Figure 11.8 User Rights Assignment options from Windows Local Security Policy

Folders and files are not the only things that can be safeguarded or controlled using permissions. Even access and use of peripherals such as printers can be controlled using permissions. Figure 11.9 shows the Security tab from a printer attached to a Windows system. Permissions can be assigned to control who can print to the printer, who can manage documents and print jobs sent to the printer, and who can manage the printer itself. With this type of granular control, administrators have a great deal of control over how system resources are used and who uses them.


Figure 11.9 Security tab showing printer permissions in Windows


Although it is very important to get security settings “right the first time,” it is just as important to perform routine audits of security settings such as user accounts, group memberships, file permissions, and so on.

Under Linux operating systems, file permissions consist of three distinct parts:

Images   Owner permissions (read, write, and execute)  The owner of the file

Images   Group permissions (read, write, and execute)  The group to which the owner of the file belongs

Images   World permissions (read, write, and execute)  Anyone else who is not the owner and does not belong to the group to which the owner of the file belongs


Discretionary access control (DAC) restricts access based on the user’s identity or group membership.

For example, suppose a file called secretdata has been created by the owner of the file, Luke, who is part of the Engineering group. The owner permissions on the file would reflect Luke’s access to the file (as the owner). The group permissions would reflect the access granted to anyone who is part of the Engineering group. The world permissions would represent the access granted to anyone who is not Luke and is not part of the Engineering group.

In Linux, a file’s permissions are usually displayed as a series of nine characters, with the first three characters representing the owner’s permissions, the second three characters representing the group permissions, and the last three characters representing the permissions for everyone else (that is, for the world). This concept is illustrated in Figure 11.10.


Figure 11.10 Discretionary file permissions in the Linux environment

Suppose the file secretdata is owned by Luke with group permissions for Engineering (because Luke is part of the Engineering group), and the permissions on that file are rwx, rw-, and ---, as shown in Figure 11.10. This would mean the following:

Images   Luke can read, write, and execute the file (rwx).

Images   Members of the Engineering group can read and write the file but not execute it (rw-).

Images   The world has no access to the file and can’t read, write, or execute it (---).

Remember that under the DAC model, the file’s owner, Luke, can change the file’s permissions any time he wants.
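The secretdata example can be captured in a short Python sketch that checks a nine-character permission string. Luke and the Engineering group come from the text; the other user and group names are hypothetical.

```python
def allowed(perms, action, user, file_owner, file_group, user_groups):
    """Check a nine-character Linux permission string (e.g. 'rwxrw----').

    perms       : owner/group/world triplets, each in r/w/x order
    action      : 'r', 'w', or 'x'
    user_groups : the set of groups the requesting user belongs to
    """
    offsets = {"r": 0, "w": 1, "x": 2}
    if user == file_owner:
        triplet = perms[0:3]          # owner permissions
    elif file_group in user_groups:
        triplet = perms[3:6]          # group permissions
    else:
        triplet = perms[6:9]          # world permissions
    return triplet[offsets[action]] != "-"

# secretdata: owned by luke, group engineering, permissions rwxrw----
print(allowed("rwxrw----", "x", "luke", "luke", "engineering", {"engineering"}))  # True
print(allowed("rwxrw----", "x", "leia", "luke", "engineering", {"engineering"}))  # False (group gets rw-)
print(allowed("rwxrw----", "r", "han", "luke", "engineering", {"smugglers"}))     # False (world gets ---)
```

Note that only the first matching triplet is consulted, which mirrors how Linux actually evaluates permissions: an owner match is decided by the owner bits even if the group or world bits are more permissive.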

A very important concept to consider when assigning rights and privileges to users is the concept of least privilege. Least privilege requires that users be given the absolute minimum number of rights and privileges required to perform their authorized duties. For example, if a user does not need the ability to install software on their own desktop to perform their job, then don’t give them that ability. This reduces the likelihood the user will load malware, insecure software, or unauthorized applications onto their system.

Access Control Lists (ACLs)

The term access control list (ACL) is used in more than one manner in the field of computer security. When we discuss routers and firewalls, an ACL is a set of rules used to control traffic flow into or out of an interface or network. When we discuss system resources, such as files and folders, an ACL lists the permissions attached to an object—who is allowed to view, modify, move, or delete that object.

To illustrate this concept, consider an example. Figure 11.11 shows the access control list (permissions) for the Data folder. The user identified as Billy Williams has Read & Execute, List Folder Contents, and Read permissions, meaning this user can open the folder, see what’s in the folder, and so on. Figure 11.12 shows the permissions for a user identified as Leah Jones, who has only Read permissions on the same folder.


Figure 11.11 Permissions for Billy Williams on the Data folder


Figure 11.12 Permissions for Leah Jones on the Data folder

In computer systems and networks, access controls can be implemented in several ways. An access control matrix provides the simplest framework for illustrating the process. An example of an access control matrix is provided in Table 11.1. In this matrix, the system is keeping track of two processes, two files, and one hardware device. Process 1 can read both File 1 and File 2 but can write only to File 1. Process 1 cannot access Process 2, but Process 2 can execute Process 1. Both processes have the ability to write to the printer.

Table 11.1  An Access Control Matrix


Although simple to understand, the access control matrix is seldom used in computer systems because it is extremely costly in terms of storage space and processing. Imagine the size of an access control matrix for a large network with hundreds of users and thousands of files.
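A minimal Python sketch of an access control matrix like Table 11.1, stored as a nested dictionary of subject -> object -> rights. The cell contents follow the description in the text; any right not listed is denied.

```python
# Subject -> object -> set of rights, following the Table 11.1 description.
matrix = {
    "process1": {"file1": {"read", "write"}, "file2": {"read"}, "printer": {"write"}},
    "process2": {"process1": {"execute"}, "printer": {"write"}},
}

def can(subject, right, obj):
    """Look up a single cell of the matrix; absent cells mean no access."""
    return right in matrix.get(subject, {}).get(obj, set())

print(can("process1", "write", "file1"))      # True
print(can("process1", "write", "file2"))      # False: read-only cell
print(can("process2", "execute", "process1")) # True
```

The storage problem described above is visible even here: the dictionary only scales because empty cells are left out, which is essentially what ACLs (column slices) and capability lists (row slices) do in real systems.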

Mandatory Access Control (MAC)

Mandatory access control (MAC) is the process of controlling access to information based on the sensitivity of that information and whether or not the user is operating at the appropriate sensitivity level and has the authority to access that information. Under a MAC system, each piece of information and every system resource (files, devices, networks, and so on) is labeled with its sensitivity level (such as Public, Engineering Private, Jones Secret, and so on). Users are assigned a clearance level that sets the upper boundary of the information and devices that they are allowed to access.


Mandatory access control restricts access based on the sensitivity of the information and whether or not the user has the authority to access that information.

The access control and sensitivity labels are required in a MAC system. Labels are defined and then assigned to users and resources. Users must then operate within their assigned sensitivity and clearance levels—they don’t have the option to modify their own sensitivity levels or the levels of the information resources they create. Due to the complexity involved, MAC is typically run only on systems where security is a top priority, such as Trusted Solaris, OpenBSD, and SELinux.


Tech Tip

MAC Objective

Mandatory access controls are often mentioned in discussions of multilevel security. For multilevel security to be implemented, a mechanism must be present to identify the classification of all users and files. A file identified as Top Secret (that is, it has a label indicating that it is “Top Secret”) may be viewed only by individuals with a Top Secret clearance. For this control mechanism to work reliably, all files must be marked with appropriate controls and all user access must be checked. This is the primary goal of MAC.

Figure 11.13 illustrates MAC in operation. The information resource on the left has been labeled “Engineering Secret,” meaning only users in the Engineering group operating at the Secret sensitivity level or above can access that resource. The top user is operating at the Secret level but is not a member of Engineering and is denied access to the resource. The middle user is a member of Engineering but is operating at a Public sensitivity level and is therefore denied access to the resource. The bottom user is a member of Engineering, is operating at a Secret sensitivity level, and is allowed to access the information resource.


Figure 11.13 Logical representation of mandatory access control
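The decision illustrated in Figure 11.13 can be sketched in Python: access requires both membership in the label's group and operation at or above the label's sensitivity level. The specific level ordering is an illustrative assumption.

```python
# Sensitivity levels ordered low to high; the ordering is illustrative.
LEVELS = {"Public": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def mac_allows(user_group, user_level, label_group, label_level):
    """Mirror Figure 11.13: the subject must belong to the label's group
    AND be operating at or above the label's sensitivity level."""
    return (user_group == label_group
            and LEVELS[user_level] >= LEVELS[label_level])

# Resource labeled "Engineering Secret":
print(mac_allows("Marketing", "Secret", "Engineering", "Secret"))    # False: wrong group
print(mac_allows("Engineering", "Public", "Engineering", "Secret"))  # False: level too low
print(mac_allows("Engineering", "Secret", "Engineering", "Secret"))  # True
```

Unlike the DAC examples earlier, neither the user nor the resource owner can change these labels; in a MAC system they are set and enforced by the system itself.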

Discretionary Access Control (DAC)

Discretionary access control (DAC) is the process of using file permissions and optional ACLs to restrict access to information based on a user’s identity or group membership. DAC is the most common access control system and is commonly used in both Linux and Windows operating systems. The “discretionary” part of DAC means that a file or resource owner has the ability to change the permissions on that file or resource.


Tech Tip

Multilevel Security

In the U.S. government, the following security labels are used to classify information and information resources for MAC systems:

Images   Top Secret  The highest security level and is defined as information that would cause “exceptionally grave damage” to national security if disclosed.

Images   Secret  The second highest level and is defined as information that would cause “serious damage” to national security if disclosed.

Images   Confidential  The lowest level of classified information and is defined as information that would “damage” national security if disclosed.

Images   For Official Use Only  Information that is unclassified but not releasable to public or unauthorized parties. Sometimes called Sensitive But Unclassified (SBU).

Images   Unclassified  Not an official classification level.

The labels work in a top-down fashion so that an individual holding a Secret clearance would have access to information at the Secret, Confidential, and Unclassified levels. An individual with a Secret clearance would not have access to Top Secret resources because that label is above the highest level of the individual’s clearance.

Role-Based Access Control (RBAC)

Access control lists can be cumbersome and can take time to administer properly. Role-based access control (RBAC) is the process of managing access and privileges based on the user’s assigned roles. RBAC is the access control model that most closely resembles an organization’s structure. In this scheme, instead of each user being assigned specific access permissions for the objects associated with the computer system or network, that user is assigned a set of roles that the user may perform. The roles are in turn assigned the access permissions necessary to perform the tasks associated with the role. Users will thus be granted permissions to objects in terms of the specific duties they must perform—not just because of a security classification associated with individual objects.


As defined by the “Orange Book,” a Department of Defense document (in the “rainbow series”) that at one time was the standard for describing what constituted a trusted computing system, a discretionary access control (DAC) is “a means of restricting access to objects based on the identity of subjects and/or groups to which they belong. The controls are discretionary in the sense that a subject with a certain access permission is capable of passing that permission (perhaps indirectly) on to any other subject (unless restrained by mandatory access control).” This definition is still valid today.

Under RBAC, you must first determine the activities that must be performed and the resources that must be accessed by specific roles. For example, the role of “securityadmin” in Microsoft SQL Server must be able to create and manage logins, read error logs, and audit the application. Once all the roles are created and the rights and privileges associated with those roles are determined, users can then be assigned one or more roles based on their job functions. When a role is assigned to a specific user, the user gets all the rights and privileges assigned to that role.
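A minimal Python sketch of RBAC: roles map to permissions, users map to roles, and a user's effective permissions are the union over all of their roles. The securityadmin rights follow the SQL Server example above; the other role and user names are illustrative.

```python
# Roles map to permissions; users map to roles.
role_perms = {
    "securityadmin": {"manage_logins", "read_error_logs", "audit_application"},
    "backupoperator": {"run_backups"},
}
user_roles = {"alice": {"securityadmin", "backupoperator"}}

def permissions(user):
    """A user's effective permissions: the union over all assigned roles."""
    perms = set()
    for role in user_roles.get(user, set()):
        perms |= role_perms.get(role, set())
    return perms

print("manage_logins" in permissions("alice"))  # True, via securityadmin
print(permissions("bob"))                       # set(): no roles, no access
```

Granting or revoking a role changes every affected permission at once, which is exactly why RBAC is easier to administer than per-user ACL entries.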


Role-based and rule-based access control can both be abbreviated as RBAC. Standard convention is for RBAC to be used to denote role-based access control. Role-based focuses on the user’s role (administrator, backup operator, and so on). Rule-based focuses on predefined criteria such as time of day (users can only log in between 8 A.M. and 6 P.M.) or type of network traffic (web traffic is allowed to leave the organization).

Unfortunately, administrators often find themselves working in an organization where users have multiple roles or even multiple accounts (a situation quite common in smaller organizations). Users with multiple accounts tend to select the same or similar passwords for those accounts, increasing the chance that one compromised account can lead to the compromise of other accounts accessed by that user. Where possible, administrators should first eliminate shared or additional accounts for users and then examine the possibility of combining roles or privileges to reduce the "account footprint" of individual users.

Rule-Based Access Control

Rule-based access control is yet another method of managing access and privileges (and unfortunately shares the same acronym as role-based access control). In this method, access is either allowed or denied based on a set of predefined rules. Each object has an associated ACL (much like DAC), and when a particular user or group attempts to access the object, the appropriate rule is applied.

A good example for rule-based access control is permitted logon hours. Many operating systems give administrators the ability to control the hours during which users can log in. For example, a bank might allow its employees to log in only between the hours of 8 A.M. and 6 P.M., Monday through Saturday. If a user attempts to log in outside of these hours (3 A.M. on Sunday, for example), then the rule will reject the login attempt regardless of whether the user supplies valid login credentials.
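The bank example can be written as a simple rule applied before credentials are even considered. This Python sketch rejects any attempt on Sunday or outside 8 A.M. to 6 P.M.; the rule is evaluated first, so even valid credentials cannot override it.

```python
from datetime import datetime

def login_allowed(attempt: datetime) -> bool:
    """Bank example from the text: logins permitted only Monday through
    Saturday, 8 A.M. to 6 P.M. The rule fires before credential checks."""
    if attempt.weekday() == 6:       # Sunday (Monday is 0)
        return False
    return 8 <= attempt.hour < 18

print(login_allowed(datetime(2024, 6, 3, 9, 0)))  # Monday 9 A.M. -> True
print(login_allowed(datetime(2024, 6, 2, 3, 0)))  # Sunday 3 A.M. -> False
```

This is what distinguishes rule-based from role-based control: the decision here depends on the predefined criterion (day and hour), not on who the user is.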

Attribute-Based Access Control (ABAC)

Attribute-based access control (ABAC) is a newer access control scheme based on the use of attributes associated with an identity. These attributes can be of any type (user attributes, resource attributes, environment attributes, and so on), such as location, time, the activity being requested, and user credentials. An example would be a doctor getting access to one specific patient's records but not another's. ABAC can be represented via the eXtensible Access Control Markup Language (XACML), a standard that implements attribute- and policy-based access control schemes.


The ABAC process of authorization evaluates specific rules and policies against attributes associated with a subject or object. ABAC is often used in large enterprises that use a federated structure. It is somewhat more complicated and costly to implement than other access control models.
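A toy Python version of the doctor/patient example: the decision consults attributes of the subject, the resource, and the environment. All attribute names and values here are illustrative assumptions; a real deployment might express the same policy in XACML.

```python
# A tiny ABAC-style policy check. Attribute names are illustrative.
def abac_allows(subject, resource, environment):
    """Grant access only when subject, resource, and environment
    attributes all satisfy the policy."""
    return (subject["role"] == "doctor"
            and resource["type"] == "patient_record"
            and resource["patient_id"] in subject["assigned_patients"]
            and environment["location"] == "hospital")

doctor = {"role": "doctor", "assigned_patients": {"p-100", "p-101"}}
record = {"type": "patient_record", "patient_id": "p-100"}
env = {"location": "hospital"}
print(abac_allows(doctor, record, env))  # True: assigned patient, on site

other = {"type": "patient_record", "patient_id": "p-999"}
print(abac_allows(doctor, other, env))   # False: not the doctor's patient
```

Note how the same subject is allowed for one resource and denied for another, something a pure role check cannot express.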

Conditional Access

Conditional access is an access control scheme in which specific conditions are examined before access is granted. Conditions can include the location from which a resource is accessed: if local, then grant access; if remote, then deny access. The list of possible conditions is broad and follows this general form:

If { condition } then { action }

Examples of this would be If { client is using legacy authentication } then { block access }, If { device is not compliant } then { block access }, and If { User is an Admin } then { Enable Multifactor Authentication }. Conditional access can be very useful when an entity has a wide array of different systems with differing access needs.
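The If { condition } then { action } examples above can be modeled as an ordered rule table in Python, where the first matching condition determines the action. The request fields are illustrative assumptions about what a real conditional access engine would evaluate.

```python
# Ordered "If { condition } then { action }" rules; first match wins.
RULES = [
    (lambda r: r["auth"] == "legacy",     "block"),
    (lambda r: not r["device_compliant"], "block"),
    (lambda r: r["is_admin"],             "require_mfa"),
]

def evaluate(request, default="allow"):
    """Walk the rule table in order; fall through to the default action."""
    for condition, action in RULES:
        if condition(request):
            return action
    return default

req = {"auth": "modern", "device_compliant": True, "is_admin": True}
print(evaluate(req))  # 'require_mfa'
```

Because the rules are ordered, a non-compliant device is blocked even for an admin; rule ordering is itself a policy decision in real conditional access systems.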

Images Identity

Identification is the process of ascribing a computer ID to a specific user, computer, network device, or computer process. The identification process is typically performed only once, when a user ID is issued to a particular user. User identification enables authentication and authorization to form the basis for accountability. For accountability purposes, user IDs should not be shared, and for security purposes, they should not be descriptive of job function. This practice enables you to trace activities to individual users or computer processes so that they can be held responsible for their actions. Identification links the logon ID or user ID to credentials that have been submitted previously to either HR or the IT staff. A required characteristic of user IDs is that they must be unique so that they map back to the credentials presented when the account was established.

Identity Provider (IdP)

The term identity provider (IdP) is used to denote a system or service that creates, maintains, and manages identity information. IdPs can range in scale and scope—from operating for a single system to operating across an enterprise. Additionally, they can be operated locally, distributed, or federated, depending on the specific solution. Multiple standards have been employed to achieve these services, including those built on the Security Assertion Markup Language (SAML), OpenID, and OAuth.


The identity provider (IdP) creates, manages, and is responsible for authenticating identity.

Identity Attributes

How would you describe the elements of an identity? Identity attributes are the specific characteristics of an identity (name, department, location, login ID, identification number, e-mail address, and so on) that are used to accurately describe a specific entity. These elements are needed if one is to store identity information in some form of directory, such as an LDAP directory. The particulars of a schema need to be considered so as to include attributes for people, equipment (servers and devices), and services (apps and programs), as any of these can have an identity in a system. The details of the schemas themselves have already been taken care of by Active Directory, various IdPs, and so on, so a schema is not something that needs to be created; it does, however, need to be understood.


Certificate-based authentication is a means of proving identity via the presentation of a certificate. Certificates offer a method of establishing authenticity of specific objects such as an individual’s public key or downloaded software. A digital certificate is a digital file that is sent as an attachment to a message and is used to verify that the message did indeed come from the entity it claims to have come from. Using a digital certificate is a verifiable means of establishing possession of an item (specifically, the certificate). When the certificate is held within a store that prevents tampering or extraction, this becomes a reliable means of identification, especially when combined with an additional factor such as something you know or a biometric. The technical details behind digital certificates are covered in Chapter 7.

Identity Tokens

An access token is a physical object that identifies specific access rights and, in authentication, falls into the “something you have” factor. Your house key, for example, is a basic physical access token that allows you access into your home. Although keys have been used to unlock devices for centuries, they do have several limitations. Keys are paired exclusively with a lock or a set of locks, and they are not easily changed. It is easy to add an authorized user by giving the user a copy of the key, but it is far more difficult to give that user selective access unless that specified area is already set up as a separate key. It is also difficult to take access away from a single key or key holder, which usually requires a rekey of the whole system.

In many businesses, physical access authentication has moved to contactless radio frequency cards and proximity readers. When passed near a card reader, the card sends out a code using radio waves. The reader picks up this code and transmits it to the control panel. The control panel checks the code against the reader from which it is being read and the type of access the card has in its database. The advantages of this kind of token-based system include the fact that any card can be deleted from the system without affecting any other card or the rest of the system. In addition, all doors connected to the system can be segmented in any form or fashion to create multiple access areas, with different permissions for each one. The tokens themselves can also be grouped in multiple ways to provide different access levels to different groups of people. All of the access levels or segmentation of doors can be modified quickly and easily if building space is repurposed. Newer technologies are adding capabilities to the standard token-based systems. Smart cards can also be used to carry identification tokens.

The primary drawback of token-based authentication is that only the token is being authenticated. Therefore, the theft of the token could grant anyone who possesses the token access to what the system protects.

The risk of theft of the token can be offset by the use of multifactor authentication described later in this chapter. One of the ways that people have tried to achieve multifactor authentication is to add a biometric factor to the system. A less expensive alternative is to use hardware tokens in a challenge/response authentication process. In this way, the token functions as both a “something you have” and “something you know” authentication mechanism. Several variations on this type of device exist, but they all work on the same basic principles. The device has an LCD screen and may or may not have a numeric keypad. Devices without a keypad will display a password (often just a sequence of numbers) that changes at a constant interval, usually about every 60 seconds. When an individual attempts to log in to a system, they enter their own user ID number and then the number that is displayed on the LCD. These two numbers are either entered separately or concatenated. The user’s own ID number is secret, and this prevents someone from using a lost device. The system knows which device the user has and is synchronized with it so that it will know the number that should have been displayed. Since this number is constantly changing, a potential attacker who is able to see the sequence will not be able to use it later, since the code will have changed. Devices with a keypad work in a similar fashion (and may also be designed to function as a simple calculator). The individual who wants to log in to the system will first type their personal identification number into the calculator. They will then attempt to log in. The system will then provide a challenge; the user must enter that challenge into the calculator and press a special function key. The calculator will then determine the correct response and display it. The user provides the response to the system they are attempting to log in to, and the system verifies that this is the correct response. 
Since each user has a different PIN, two individuals receiving the same challenge will have different responses. The device can also use the date or time as a variable for the response calculation so that the same challenge at different times will yield different responses, even for the same individual.
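The challenge/response computation described above can be sketched in a few lines. The HMAC construction, the per-device secret, and the way the PIN is mixed in are illustrative assumptions here, not any specific vendor's algorithm:

```python
import hmac
import hashlib

def token_response(device_secret: bytes, pin: str, challenge: str) -> str:
    """Compute a response by keying an HMAC with the device's secret
    and mixing in the user's PIN and the server's challenge."""
    msg = (pin + challenge).encode()
    return hmac.new(device_secret, msg, hashlib.sha256).hexdigest()[:8]

# The server knows which device the user holds, so it can compute the
# same value; a different PIN or device yields a different response.
secret = b"per-device-secret"          # illustrative shared secret
server_expected = token_response(secret, "1234", "challenge-42")
user_entered = token_response(secret, "1234", "challenge-42")
print(user_entered == server_expected)
```

Because the challenge changes on every login, an observed response is useless for replay, which is the property the hardware tokens rely on.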

SSH Keys

SSH keys are access credentials used by the Secure Shell (SSH) protocol. They function like usernames and passwords, but SSH keys are primarily used for automated processes and services. SSH keys are also used in implementing single sign-on (SSO) systems used by system administrators. SSH keys are exchanged using public key cryptography, and the keys themselves are digital keys. The concepts of public key cryptography are covered in Chapter 5.

Smart Cards

Smart cards are devices that store cryptographic tokens associated with an identity. The form factor is commonly a physical card, credit card sized, that contains an embedded chip that has various electronic components to act as a physical carrier of information.

The U.S. federal government has several smart card solutions for identification of personnel. The Personal Identity Verification (PIV) card is a U.S. government smart card that contains the cardholder’s credential data used to determine access to federal facilities and information systems. The Common Access Card (CAC) is a smart card used by the U.S. Department of Defense (DoD) for active-duty military, Selected Reserve members, DoD civilians, and eligible contractors. Like the PIV card, it is used for carrying the cardholder’s credential data, in the form of a certificate, and to determine access to federal facilities and information systems.

Images Authentication Methods

Authentication is the process of verifying an identity previously established in a computer system. There are a variety of methods of performing this function, each with its advantages and disadvantages, as detailed in the following sections.


Authentication is the process of binding a specific ID to a specific computer connection. Two items need to be presented to cause this binding to occur—the user ID and some “secret” to prove that the user is the valid possessor of the credentials. Historically, three categories of secrets are used to authenticate the identity of a user: what users know, what users have, and what users are. Today, an additional category is used: what users do.

These methods can be used individually or in combination. These controls assume that the identification process has been completed and the identity of the user has been verified. It is the job of authentication mechanisms to ensure that only valid users are admitted. Described another way, authentication is using some mechanism to prove that you are who you claimed to be when the identification process was completed.

The most common method of authentication is the use of a password. For greater security, you can add an element from a separate group, such as a smart card token—something a user has in their possession. Passwords are common because they are one of the simplest forms of authentication, and they use user memory as a prime component. Because of their simplicity, passwords have become ubiquitous across a wide range of authentication systems.

Another method to provide authentication involves the use of something that only valid users should have in their possession. A physical-world example of this would be a simple lock and key. Only those individuals with the correct key will be able to open the lock and thus gain admittance to a house, car, office, or whatever the lock was protecting. A similar method can be used to authenticate users for a computer system or network (though the key may be electronic and could reside on a smart card or similar device). The problem with this technology, however, is that people do lose their keys (or cards), which means not only that the user can’t log in to the system but that somebody else who finds the key may then be able to access the system, even though they are not authorized. To address this problem, a combination of the something-you-know and something-you-have methods is often used so that the individual with the key is also required to provide a password or passcode. The key is useless unless the user knows this code.

The third general method to provide authentication involves something that is unique about you. We are accustomed to this concept in our physical world, where our fingerprints or a sample of our DNA can be used to identify us. This same concept can be used to provide authentication in the computer world. The field of authentication that uses something about you or something that you are is known as biometrics. A number of different mechanisms can be used to accomplish this type of authentication, such as a fingerprint, iris, retinal, or hand geometry scan. All of these methods obviously require some additional hardware in order to operate. The inclusion of fingerprint readers on mobile computers has become common as the additional hardware has become cost effective.

A new method, based on how users perform an action, such as their walking gait or their typing patterns, has emerged as a source of a personal “signature.” While not directly embedded into systems as yet, this is an option that will be coming in the future.

Although the three main approaches to authentication appear to be easy to understand and in most cases easy to implement, authentication is not to be taken lightly because it is such an important component of security. Potential attackers are constantly searching for ways to get past the system’s authentication mechanism, and they have employed some fairly ingenious methods to do so. Consequently, security professionals are constantly devising new methods, building on these three basic approaches, to provide authentication mechanisms for computer systems and networks.

Basic Authentication

Basic authentication is the simplest technique used to manage access control across HTTP. Basic authentication operates by passing information encoded in Base64 form using standard HTTP headers. This is a plaintext method without any pretense of security. Figure 11.14 illustrates the operation of basic authentication.


Figure 11.14 How basic authentication operates
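The encoding behind basic authentication is trivial to reproduce, which underscores why it offers no real protection. A minimal sketch (the credentials are hypothetical):

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build the HTTP Basic Authorization header value (RFC 7617)."""
    creds = f"{username}:{password}".encode()
    return "Basic " + base64.b64encode(creds).decode()

header = basic_auth_header("alice", "s3cret")
print(header)  # Basic YWxpY2U6czNjcmV0

# Anyone who captures the header can trivially decode it:
print(base64.b64decode(header.split()[1]).decode())  # alice:s3cret
```

Base64 is an encoding, not encryption; without TLS underneath, the credentials are effectively sent in the clear.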

Digest Authentication

Digest authentication is a method used to negotiate credentials across the Web. Digest authentication uses hash functions and a nonce to improve security over basic authentication. Digest authentication works as follows, as illustrated in Figure 11.15:


Figure 11.15 How digest authentication operates

1.   The client requests login.

2.   The server responds with a challenge and provides a nonce.

3.   The client hashes the password and nonce.

4.   The client returns the hashed password to the server.

5.   The server requests the password from a password store.

6.   The server hashes the password and nonce.

7.   If both hashes match, login is granted.
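Steps 3 through 7 correspond to the digest computation defined in RFC 2617. A simplified sketch, omitting the qop and cnonce extensions, might look like this (all values are illustrative):

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, realm, password, method, uri, nonce):
    """RFC 2617 digest computation (without the qop/cnonce extensions)."""
    ha1 = md5_hex(f"{user}:{realm}:{password}")   # credentials hash
    ha2 = md5_hex(f"{method}:{uri}")              # request hash
    return md5_hex(f"{ha1}:{nonce}:{ha2}")        # final response

# The client computes the response from the server's nonce...
client = digest_response("alice", "app", "s3cret", "GET", "/", "abc123")
# ...and the server, which has the stored password, recomputes it.
server = digest_response("alice", "app", "s3cret", "GET", "/", "abc123")
print(client == server)  # True: login granted
```

Note that the password itself never crosses the wire, only the hash, which is the improvement over basic authentication.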

Although digest authentication improves security over basic authentication because passwords are not sent in the clear, it still does not provide any significant level of security. Digest authentication is subject to on-path (formerly man-in-the-middle) attacks and potentially replay attacks.


The bottom line for both basic and digest authentication is that these are insecure methods and should not be relied upon for any level of security.


Developed as part of MIT’s Project Athena, Kerberos is a network authentication protocol designed for a client/server environment. The current version is Kerberos 5 release 1.16, and it is supported by all major operating systems. Kerberos securely passes a symmetric key over an insecure network using the Needham-Schroeder symmetric key protocol. Kerberos is built around the idea of a trusted third party, termed a key distribution center (KDC), which consists of two logically separate parts: an authentication server (AS) and a ticket-granting server (TGS). Kerberos communicates via “tickets” that serve to prove the identity of users.


Two tickets are used in Kerberos. The first is a ticket-granting ticket (TGT) obtained from the authentication server (AS). The TGT is presented to a ticket-granting server (TGS) when access to a server is requested and then a client-to-server ticket is issued, granting access to the server. Typically both the AS and the TGS are logically separate parts of the key distribution center (KDC).

Taking its name from the three-headed dog of Greek mythology, Kerberos is designed to work across the Internet, an inherently insecure environment. Kerberos uses strong encryption so that a client can prove its identity to a server and the server can in turn authenticate itself to the client. A complete Kerberos environment is referred to as a Kerberos realm. The Kerberos server contains user IDs and hashed passwords for all users who will have authorizations to realm services. The Kerberos server also has shared secret keys with every server to which it will grant access tickets.

The basis for authentication in a Kerberos environment is the ticket. Tickets are used in a two-step process with the client. The first ticket is a ticket-granting ticket (TGT) issued by the AS to a requesting client. The client can then present this ticket to the Kerberos server with a request for a ticket to access a specific server. This client-to-server ticket (also called a service ticket) is used to gain access to a server’s service in the realm. Because the entire session can be encrypted, this eliminates the inherently insecure transmission of items such as a password that can be intercepted on the network. Tickets are timestamped and have a lifetime, so attempting to reuse a ticket will not be successful. Figure 11.16 details Kerberos operations.


Figure 11.16 Kerberos operations


Tech Tip

Kerberos Authentication

Kerberos is a third-party authentication service that uses a series of tickets as tokens for authenticating users. The six steps involved are protected using strong cryptography:

Images   The user presents their credentials and requests a ticket from the key distribution center (KDC).

Images   The KDC verifies credentials and issues a ticket-granting ticket (TGT).

Images   The user presents a TGT and request for service to the KDC.

Images   The KDC verifies authorization and issues a client-to-server ticket (or service ticket).

Images   The user presents a request and a client-to-server ticket to the desired service.

Images   If the client-to-server ticket is valid, service is granted to the client.

To illustrate how the Kerberos authentication service works, think about the common driver’s license. You have received a license that you can present to other entities to prove you are who you claim to be. Because other entities trust the state in which the license was issued, they will accept your license as proof of your identity. The state in which the license was issued is analogous to the Kerberos authentication service realm, and the license acts as a client-to-server ticket. It is the trusted entity both sides rely on to provide valid identifications. This analogy is not perfect, because we all probably have heard of individuals who obtained a phony driver’s license, but it serves to illustrate the basic idea behind Kerberos.
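The ticket exchange above can be modeled in miniature. This sketch seals tickets with an HMAC as a stand-in for Kerberos's encrypted tickets; the key names, principals, and lifetime are illustrative assumptions, not real Kerberos message formats:

```python
import hmac
import hashlib
import time

def make_ticket(key: bytes, principal: str, lifetime: int = 300) -> dict:
    """Issue a ticket: an identity plus an expiry, sealed with a shared
    secret. (Real Kerberos encrypts tickets; the HMAC seal is a stand-in.)"""
    expires = int(time.time()) + lifetime
    body = f"{principal}|{expires}"
    seal = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "seal": seal}

def verify_ticket(key: bytes, ticket: dict) -> bool:
    """Check the seal and the timestamp, mirroring ticket validation."""
    expected = hmac.new(key, ticket["body"].encode(), hashlib.sha256).hexdigest()
    ok = hmac.compare_digest(ticket["seal"], expected)
    _principal, expires = ticket["body"].rsplit("|", 1)
    return ok and int(expires) > time.time()

kdc_tgs_key = b"shared: AS <-> TGS"           # illustrative shared secrets
tgs_svc_key = b"shared: TGS <-> file server"

tgt = make_ticket(kdc_tgs_key, "alice")                 # AS issues a TGT
assert verify_ticket(kdc_tgs_key, tgt)                  # TGS checks the TGT...
service_ticket = make_ticket(tgs_svc_key, "alice")      # ...and issues a service ticket
print(verify_ticket(tgs_svc_key, service_ticket))       # server grants access
```

The timestamp check illustrates why tickets cannot simply be captured and replayed later: an expired ticket fails validation regardless of the seal.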


Kerberos is a third-party authentication service that uses a series of tickets as tokens for authenticating users. The steps involved are protected using strong cryptography.

Mutual Authentication

Mutual authentication describes a process in which each side of an electronic communication verifies the authenticity of the other. We are accustomed to the idea of having to authenticate ourselves to our ISP before we access the Internet, generally through the use of a user ID/password pair, but how do we actually know that we are really communicating with our ISP and not some other system that has somehow inserted itself into our communication (a man-in-the-middle attack)? Mutual authentication provides a mechanism for each side of a client/server relationship to verify the authenticity of the other to address this issue. A common method of performing mutual authentication involves using a secure connection, such as Transport Layer Security (TLS), to the server and a one-time password generator that then authenticates the client.


Mutual TLS–based authentication provides the same functions as normal TLS, with the addition of authentication and nonrepudiation of the client. This second authentication, of the client, is done in the same manner as the normal server authentication, using digital signatures. Each client represents one of the many sides of a many-to-one relationship with the server. Mutual TLS authentication is not commonly used because of the complexity, cost, and logistics associated with managing the multitude of client certificates, which reduces its practical effectiveness; most web applications are not designed to require client-side certificates.


Certificates are a method of establishing authenticity of specific objects such as an individual’s public key or downloaded software. A digital certificate is a digital file that is sent as an attachment to a message and is used to verify that the message did indeed come from the entity it claims to have come from.


Cross Check

Digital Certificates and Digital Signatures

Kerberos uses tickets to convey messages. Part of the ticket is a certificate that contains the requisite keys. Understanding how certificates convey this vital information is an important part of understanding how Kerberos-based authentication works. Certificates, how they are used, and the protocols associated with PKI were covered in Chapter 7. Refer back to this chapter as needed for more information.


Tech Tip

PIV/CAC/Smart Cards

The U.S. federal government has several smart card solutions for identification of personnel. The personal identity verification (PIV) card is a U.S. government smart card that contains the credential data for the cardholder used to determine access to federal facilities and information systems. The Common Access Card (CAC) is a smart card identification used by the U.S. Department of Defense (DoD) for active-duty military, selected reserve personnel, DoD civilians, and eligible contractors. Like the PIV card, it is used for carrying the credential data, in the form of a certificate, for the cardholder and is used to determine access to federal facilities and information systems.


While the username/password combination has been and continues to be the cheapest and most popular method of controlling access to resources, many organizations look for a more secure and tamper-resistant form of authentication. Usernames and passwords are “something you know” (which can be used by anyone else who knows or discovers the information). A more secure method of authentication is to combine the “something you know” with “something you have.” A token is an authentication factor that typically takes the form of a physical or logical entity that the user must be in possession of to access their account or certain resources.

A token is a hardware device that can be used in a challenge/response authentication process. In this way, it functions as both a something-you-have and something-you-know authentication mechanism. Several variations on this type of device exist, but they all work on the same basic principles. Tokens are commonly employed in remote authentication schemes because they provide additional surety of the identity of the user, even users who are somewhere else and cannot be observed.

Most tokens are physical tokens that display a series of numbers that changes every 30 to 90 seconds, such as the token pictured in Figure 11.17 from Blizzard Entertainment. This sequence of numbers must be entered when the user is attempting to log in or access certain resources. The ever-changing sequence of numbers is synchronized to a remote server such that when the user enters the correct username, password, and matching sequence of numbers, they are allowed to log in. Even if an attacker obtains the username and password, the attacker cannot log in without the matching sequence of numbers. Other physical tokens include Common Access Cards (CACs), USB tokens, smart cards, and PC cards.


Figure 11.17 Token authenticator from Blizzard Entertainment


The use of a token is a common method of using “something you have” for authentication. A token can hold a cryptographic key or act as a one-time password (OTP) generator. It can also be a smart card that holds a cryptographic key (examples include the U.S. military Common Access Card and the Federal Personal Identity Verification [PIV] card). These devices can be safeguarded using a PIN and lockout mechanism to prevent use if stolen.

Software Tokens

Access tokens may also be implemented in software. Software tokens still provide two-factor authentication but don’t require the user to have a separate physical device on hand. Some tokens require software clients that store a symmetric key (sometimes called a seed record) in a secured location on the user’s device (laptop, desktop, tablet, and so on). Other software tokens use public key (asymmetric) cryptography, which often associates a PIN with a specific user’s token. To log in or access critical resources, the user must supply the correct PIN. The PIN is stored on a remote server and is used during the authentication process so that if the user presents the right token, but not the right PIN, the user’s access can be denied. This helps prevent an attacker from gaining access if they get a copy of or gain access to the software token. The most common form of software token is for identifying a specific device in addition to a user, in that the software token is on the device and the user supplies the rest of the details needed to demonstrate authenticity.


Understand that tokens represent (1) something you have with respect to authentication and (2) a device that can store more information than you can memorize. This makes them very valuable for access control. The details in the question on the exam will provide the necessary criteria to pick the best token method for the question.


HMAC-based One-Time Password (HOTP) is an algorithm that can be used to authenticate a user in a system by using an authentication server. (HMAC stands for Hash-based Message Authentication Code.) It is defined in RFC 4226, dated December 2005. The Time-based One-Time Password (TOTP) algorithm is a specific implementation of an HOTP that uses a secret key with a current timestamp to generate a one-time password. It is described in RFC 6238, dated May 2011.
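The HOTP algorithm in RFC 4226 is compact enough to sketch directly, and the TOTP variant simply derives the counter from the clock. This follows the RFC's dynamic truncation, and the final line reproduces the first test vector from RFC 4226, Appendix D:

```python
import hmac
import hashlib
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based One-Time Password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30) -> str:
    """Time-based OTP (RFC 6238): HOTP with counter = current time / step."""
    return hotp(secret, int(time.time()) // step)

# First test vector from RFC 4226, Appendix D:
print(hotp(b"12345678901234567890", 0))  # 755224
```

The 30-second step in `totp` is the RFC 6238 default; it is why a TOTP code expires quickly while an HOTP code stays valid until its counter value is consumed.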


HOTP passwords can remain valid and active for an unknown time period. TOTP passwords are considered more secure because they are valid for short amounts of time and change often.

Smart Cards

Smart cards can increase physical security because they can carry cryptographic tokens that are too long to remember and are drawn from too large a space to guess. Because of the manner in which they are employed and used, copying the number is not a practical option either. Smart cards can find use in a variety of situations where you want to combine something you know (a PIN or password) with something you have (and can’t be duplicated, such as a smart card). Many standard corporate-type laptops come with smart card readers installed, and their use is integrated into the Windows user access system.

Knowledge-Based Authentication

Knowledge-based authentication is a method where the identity of a user is verified via a common set of knowledge. This is a very useful method for verifying the identity of a user without having a stored secret in advance. The standard methodology associated with authentication is an identity and a shared secret that are previously recorded in a system and then later verified by recall on the user’s part and lookup by the system. But what if the user has never accessed the site to establish their identity? How can identity be established on the fly, so to speak? Knowledge-based authentication relies on a body of knowledge that, while it may be available to many, is drawn from such a vast set of information that only the actual user is likely to recall the correct details.

A good example is when accessing a site such as a credit bureau to obtain information on yourself. The site has a vast array of knowledge associated with you, and it can see if you can identify an address you have lived at (out of a list of four addresses), a car you owned (out of a list of four cars), a car or mortgage payment amount, or a credit card account. In a timed quiz, to eliminate extensive lookups, the user is presented with a series of multiple-choice options. If they get them all correct, then odds are that they are the person they represent themselves to be. The last time the author went through one of these tests, the range of time for the knowledge covered was greater than 20 years, making the breadth of knowledge to choose from large indeed.

Directory Services

A directory is a data storage mechanism similar to a database, but it has several distinct differences designed to provide efficient data-retrieval services compared to standard database mechanisms. A directory is designed and optimized for reading data, offering very fast search and retrieval operations. The types of information stored in a directory tend to be descriptive attribute data. A directory offers a static view of data that can be changed without a complex update transaction. The data is hierarchically described in a treelike structure, and a network interface for reading is typical. Common uses of directories include e-mail address lists, domain server data, and resource maps of network resources. The Lightweight Directory Access Protocol (LDAP) is commonly used to handle user authentication and authorization and to control access to Active Directory (AD) objects.

When integrating with cloud-based systems, you might find managing credentials across the two different domains challenging. Different vendors have created directory-based technologies to address this, such as AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD. This service enables your directory-aware workloads and AWS resources to use a managed Active Directory in the AWS Cloud. Because AWS Managed Microsoft AD is built on the actual Microsoft Active Directory, you can use standard Active Directory administration tools and take advantage of built-in Active Directory features, such as Group Policy and single sign-on (SSO) features.


Federation, or identity federation, defines policies, protocols, and practices to manage identities across systems and organizations. Federation’s ultimate goal is to allow users to seamlessly access data or systems across domains. Federation is enabled through the use of industry standards such as Security Assertion Markup Language (SAML). Federated identity access management systems allow users to authenticate and access resources across multiple enterprises using a single credential. But don’t confuse this with single sign-on (SSO), which allows users access to multiple resources within a single organization or enterprise.


Attestation is the supplying of proof or evidence of some fact. In the case of authentication, attestation can be done by a service that checks the credentials supplied, and if they are correct and match the required values, the service can attest that the entry is valid or correct. Attestation is used throughout cybersecurity whenever a third party or entity verifies an object as valid or an item as correct in value.

Transitive Trust

Security across multiple domains is provided through trust relationships. When trust relationships between domains exist, authentication for each domain trusts the authentication for all other trusted domains. Thus, when an application is authenticated by a domain, its authentication is accepted by all other domains that trust the authenticating domain.

It is important to note that trust relationships apply only to authentication. They do not apply to resource usage, which is an access control issue. Trust relationships allow users to have their identity verified (authentication). The ability to use resources is defined by access control rules. Thus, even though a user is authenticated via the trust relationship, it does not provide access to actually use resources.

A transitive trust relationship means that the trust relationship extended to one domain will be extended to any other domain trusted by that domain. A two-way trust relationship means that two domains trust each other.


Transitive trust involves three parties: if A trusts B, and B trusts C, then in a transitive trust relationship, A will trust C.
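This rule can be sketched as a closure computation over direct trust relationships (the domain names are hypothetical):

```python
def transitive_trusts(direct: dict[str, set[str]]) -> dict[str, set[str]]:
    """Expand direct trust relationships into their transitive closure:
    if A trusts B and B trusts C, then A trusts C."""
    closure = {domain: set(trusted) for domain, trusted in direct.items()}
    changed = True
    while changed:
        changed = False
        for domain, trusted in closure.items():
            for t in list(trusted):
                # Everything t trusts, domain now trusts too (except itself).
                extra = closure.get(t, set()) - trusted - {domain}
                if extra:
                    trusted |= extra
                    changed = True
    return closure

# A trusts B, and B trusts C: transitively, A also trusts C.
trusts = transitive_trusts({"A": {"B"}, "B": {"C"}, "C": set()})
print(sorted(trusts["A"]))  # ['B', 'C']
```

Remember that the computed trust applies only to authentication; whether "A" may actually use resources in "C" is still governed by access control rules.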


There are multiple ways to perform authentication, and multiple technologies can be employed to assist in the effort.

Short Message Service (SMS)

Short Message Service (SMS), or text messaging, can provide a second authentication factor by sending a code to a preidentified cell phone number. The user then enters the code into the system. This code typically has an expiration time, as shown in Figure 11.18. This is a way of verifying that the first credential, usually a password, was entered by the person expected, assuming that person has control over the cell phone. This is a practical example of multifactor authentication, which is discussed later in this chapter.


Figure 11.18 Sample SMS verification codes
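The issue-and-expire behavior can be sketched as follows; the six-digit format and five-minute window are assumptions for illustration, not a standard:

```python
import secrets
import time

CODE_TTL = 300  # assumed 5-minute expiration window

def issue_sms_code() -> tuple[str, float]:
    """Generate a random 6-digit code and its expiration time.
    (The code would then be texted to the user's preregistered number.)"""
    code = f"{secrets.randbelow(10 ** 6):06d}"
    return code, time.time() + CODE_TTL

def verify_sms_code(entered: str, code: str, expires: float) -> bool:
    """Accept only a matching code entered before the expiration time."""
    return time.time() < expires and secrets.compare_digest(entered, code)

code, expires = issue_sms_code()
print(verify_sms_code(code, code, expires))         # True: correct and in time
print(verify_sms_code(code, code, time.time() - 1)) # False: window has closed
```

Using `secrets` rather than `random` matters here: the code must be unpredictable to anyone who does not hold the phone.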

Trusted Platform Module (TPM)

The Trusted Platform Module (TPM) is a hardware solution on the motherboard that assists with key generation and storage as well as random number generation. When encryption keys are stored in the TPM, they are not accessible via normal software channels and are physically separated from the hard drive or other encrypted data locations. This makes the TPM a more secure solution than keeping the keys in the machine’s normal storage. In effect, the TPM acts as a secure cryptoprocessor.

Hardware Security Module (HSM)

A hardware security module (HSM) is a device used to manage or store encryption keys. It can also assist in cryptographic operations such as encryption, hashing, or the application of digital signatures. HSMs typically are peripheral devices connected via USB or a network connection. HSMs have tamper-protection mechanisms to prevent physical access to the secrets they protect. Because of their dedicated design, they can offer significant performance advantages over general-purpose computers when it comes to cryptographic operations. When an enterprise has significant levels of cryptographic operations, HSMs can provide throughput efficiencies. Storing private keys anywhere on a networked system is a recipe for loss. HSMs are designed to allow the use of keys without exposing them to the wide range of host-based threats.

Static Codes

Static codes are just that: codes that do not change. There are many use cases where these are essential, such as devices that operate without user intervention, which are widely deployed in many systems. An example is a smart electric meter, a device that needs to communicate with other systems and authenticate its identity. The weakness of static codes is that once a code is compromised, it remains usable by an attacker until it is changed. The standard practice is to use cryptographic protection for all transmission of static codes, making the code unreadable even if the communication channel data is copied.

Authentication Applications

Need a second factor for authentication? We have an app for that. And this is not just a joke, but an increasingly common method of authentication that works by verifying that a user has a given mobile device in their possession. An authentication application functions by accepting user input, and if the user input is correct, it can pass the appropriate credentials to the system requesting authentication. This can be in the form of either a stored digital value or a one-time code in response to a challenge. Authentication applications exist for a variety of platforms—from Android to iOS, Linux, and Windows—and there are multiple vendors for each platform. The use of the application on the device is a second factor of authentication and is part of a multifactor authentication scheme.

Push Notifications

Push notification authentication supports user authentication by pushing a notification directly to an application on the user’s device. The user receives the alert that an authentication attempt is taking place, and they can approve or deny the access via the user interface on the application. The push notification itself is not a secret; it is merely a means by which the user can authenticate and approve access. This is an out-of-band communication and demonstrates a second communication channel, thus making account hacking significantly more challenging.

Phone Call

Another form of authentication has the user interact with the system via a phone call. The authentication call is delivered from the authentication system to a specified phone number, which verifies that the user is in possession of the actual mobile device.


Smart Card Authentication

A smart card (also known as an integrated circuit card [ICC] or chip card) is a credit card–sized card with embedded integrated circuits that is used to provide identification security authentication. Smart cards can increase physical security because they can carry cryptographic tokens that are too long to remember and are drawn from too large a space to guess. Also, because of the manner in which smart cards are employed and used, copying the number is not a practical option. Smart cards can find use in a variety of situations where you want to combine something you know (a PIN or password) with something you have (and can’t be duplicated, such as a smart card). Many standard corporate-type laptops come with smart card readers installed, and their use is integrated into the Windows user access system.

Password Vaults

Password vaults are software mechanisms designed to manage the problem of users having multiple passwords for the myriad of different systems they use. Vaults provide a means of storing the passwords until they are needed, and many password managers include additional functionality such as password generation and password entry via a browser. Vaults do represent a single point of failure: if an attacker obtains the vault key, or master password, they have access to all of the user’s passwords. Cryptographic protection of the stored passwords mitigates this, but it also introduces another issue with vaults: what to do when the user loses their master password. Any recovery mechanism would represent a major risk for the system, so in most systems it is incumbent on the user to maintain this information somewhere else as a backup.
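The single-point-of-failure property is easy to see in code: one key, derived from the master password, protects every entry. The toy vault below is a sketch for illustration only; it uses PBKDF2 for key derivation and an HMAC-based keystream as a stand-in cipher, whereas real vaults use vetted authenticated encryption such as AES-GCM. All names here are invented.

```python
import hashlib
import hmac
import os

def derive_key(master_password: str, salt: bytes) -> bytes:
    # A slow KDF makes brute-forcing the master password expensive.
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 200_000)

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # HMAC in counter mode as a PRF: a teaching stand-in for a real cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

class ToyVault:
    def __init__(self, master_password: str):
        self.salt = os.urandom(16)
        # One derived key protects everything: the single point of failure.
        self._key = derive_key(master_password, self.salt)
        self._entries = {}

    def store(self, site: str, password: str):
        nonce = os.urandom(16)
        data = password.encode()
        ks = _keystream(self._key, nonce, len(data))
        self._entries[site] = (nonce, bytes(a ^ b for a, b in zip(data, ks)))

    def retrieve(self, site: str) -> str:
        nonce, blob = self._entries[site]
        ks = _keystream(self._key, nonce, len(blob))
        return bytes(a ^ b for a, b in zip(blob, ks)).decode()
```

Note that nothing in the vault can be recovered without the master password; this is exactly why losing it is unrecoverable by design.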

Another form of password vault is the mechanism built into software and operating systems (OSs) to securely hold credentials, such as the Keychain in macOS and iOS and the Credential Manager in Microsoft Windows. Browser-based password storage is much less secure, as numerous utilities exist that can extract the passwords from most browsers, making it an obvious target for attackers. The OS-based Keychain and Credential Manager solutions are much more robust and can limit overall risk.

Images Biometric Factors

Biometric factors use the measurements of certain biological features to identify one specific person from other people. These factors are based on parts of the human body that are unique. The most well-known of these unique biological factors is the fingerprint. Fingerprint readers have been available for several years in laptops and other mobile devices, on keyboards, and as standalone USB devices.

Many other biological factors can be used, too, such as the retina or iris of the eye, the geometry of the hand, and the geometry of the face. When these are used for authentication, there is a two-part process: enrollment and then authentication. During enrollment, a computer takes the image of the biological factor and reduces it to a numeric value, called a template. When the user attempts to authenticate, the biometric feature is scanned by the reader, and the computer computes a value in the same fashion as the template and then compares the numeric value being read to the one stored in the database. If they match, access is allowed. Because these physical factors are unique, theoretically only the actual authorized person would be allowed access.
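The enrollment-then-authentication process described above can be sketched in a few lines of Python. This is a toy illustration, not a real biometric engine: `enroll()` averages several scans into a stored template, and `verify()` treats matching as a closeness test against a threshold rather than an exact comparison. The feature vectors and threshold value are invented for illustration.

```python
import math

def enroll(samples):
    # Average several scans into one stored template (a numeric summary
    # of the biometric feature).
    n = len(samples)
    return [sum(vals) / n for vals in zip(*samples)]

def verify(template, reading, threshold=0.25):
    # Biometric matching is a closeness test, never an exact comparison:
    # accept if the new reading is near enough to the stored template.
    distance = math.dist(template, reading)
    return distance <= threshold
```

The `threshold` parameter is the knob discussed later in the chapter: loosen it and imposters start to pass; tighten it and legitimate users start to fail.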

In the real world, however, the theory behind biometrics breaks down. Tokens that hold a digital code work very well because everything remains in the digital realm. A computer checks your code, such as 123, against the database; if the computer finds 123 and that number has access, the computer opens the door. Biometrics, however, take an analog signal, such as a fingerprint or a face, digitize it, and match the result against the digits in the database. The problem with an analog signal is that it might not encode exactly the same way twice. For example, if you came to work with a bandage on your chin, would face-based biometrics grant you access or deny it? Because of this, templates are matched using a closeness measurement, producing a probability of a match rather than an exact comparison.

Fingerprint Scanner

Fingerprint scanners measure the unique shape of a fingerprint and convert it into a series of numerical values, or a template. Fingerprint readers can be enhanced to ensure that the pattern is a live pattern, one with blood moving or other detectable biological activity, to prevent simple spoofing with a mold of the print made of Jell-O. Fingerprint scanners are cheap to produce and are in widespread use in mobile devices. One of the challenges of fingerprint scanners is that they fail if the user is wearing gloves (such as medical gloves) or has worn-down fingerprints, as is the case for those in the sheetrock trade.

Retinal Scanner

Retinal scanners examine blood vessel patterns in the back of the eye. Believed to be unique and unchanging, this is a readily detectable biometric. It does suffer from poor user acceptance, as it involves a laser scanning the inside of the user’s eyeball, which many users find uncomfortable. The scan must be performed close up, with the user right at the device, and the equipment is more expensive because of the precision required of the detector and the involvement of lasers near users’ eyes.

Iris Scanner

Iris scanners work in a way similar to retinal scanners in that they use an image of a unique biological measurement (in this case, the pigmentation associated with the iris of the eye). The iris can be photographed and measured from a distance, removing the psychological impediment of placing one’s eye up against a scanner. But there are downsides: because the measurement can be taken at a distance, it is also easy to capture other people’s values, and contact lenses can be constructed to mimic a given pattern. The iris can also reveal medical conditions, which, if disclosed, would be a violation of privacy.

Voice Recognition

Voice recognition is the use of unique tonal qualities and speech patterns to identify a person. Long the subject of sci-fi movies, this biometric has been one of the hardest to develop into a reliable mechanism, primarily because of problems with false acceptance and rejection rates, which will be discussed later in the chapter.

Facial Recognition

Facial recognition was mostly the stuff of sci-fi until it was integrated into various mobile phones. A sensor detects when you move the phone into a position to see your face; if the device is not logged in, the forward-facing camera turns on and the system looks for its enrolled owner. This system has proven to have fairly high discrimination and works well, with one notable drawback: another person can move the phone in front of the registered user when the user is unaware, triggering the unlocking mechanism. A minor inconvenience is that for certain transactions, such as payments requiring positive identification, the phone must rest on an NFC reader while the user’s face is in a certain orientation with respect to the phone, which can force awkward positions at a counter-height credit card reader.


Vein Analysis

A different biometric is the use of blood vein patterns to identify a user. Humans share a common vascular layout, but the individual elements vary in size and microstructure, and these fine-grained patterns are believed to be unique. Sensors can measure these patterns and use them to identify a user. Three common vascular pattern locations are used: the palms, the fingers, and the veins in the retina. The measurement is done via spectral analysis of the tissue, using frequencies that detect the hemoglobin in the blood. These are noninvasive measurements, but they do require close proximity to the body part being measured.

Gait Analysis

Gait analysis is the measurement of the pattern a person expresses as they walk. An analysis of the gait, including its length, speed, and the rate of movement of specific points, provides a unique signature that can be recorded and compared to previous samples. Although authentication requires a previously recorded sample, gait analysis can also be used to pick out and track an individual within a crowd. From an access control perspective, in high-security situations, a camera can record the gait of incoming personnel and compare it to known values, providing a remote and early additional factor in determining identity.

Images Biometric Efficacy Rates

Biometric measurements have a level of uncertainty, and thus the efficacy of biometric solutions has been an issue since they were first developed. As each generation of sensor has improved the accuracy of the measurements, the errors have been reduced to what is now a manageable level. For biometrics to be effective, they must have both low false positive rates and low false negative rates. The terms false acceptance rate (FAR) and false rejection rate (FRR) describe the chance that an incorrect user will be falsely accepted or a correct user will be falsely rejected, respectively, as covered in detail in the next sections. These two measures are different: a low false rejection rate is important for usability, but a low false acceptance rate is more important from a security perspective. Users having to repeat authentication attempts is an inconvenience; granting access to unauthorized users is far worse.

The FIDO Alliance, a leading authentication standards and certification body, publishes specifications for error rates: the FRR should be below 3 percent (no more than 3 errors in 100 attempts), and the FAR should be below 0.01 percent (no more than 1 error in 10,000 attempts). As in all defense-in-depth scenarios, the backstop is a lockout function, where devices lock after a certain number of failed attempts. This makes the effective FAR better than the raw percentage alone suggests.

False Positives and False Negatives

Engineers who design systems understand that if a system was set to exact checking, an encoded biometric might never grant access because it might never scan the biometric exactly the same way twice. Therefore, most systems have tried to allow a certain amount of error in the scan, while not allowing too much. This leads to the concepts of false positives and false negatives. A false positive is where you receive a positive result for a test, when you should have received a negative result. Thus, a false positive result occurs when a biometric is scanned and allows access to someone who is not authorized—for example, two people who have very similar fingerprints might be recognized as the same person by the computer, which in turn might grant access to the wrong person. A false negative occurs when the system denies access to someone who is actually authorized—for example, a user at the hand geometry scanner may have forgotten to wear a ring they usually wear and the computer doesn’t recognize their hand and denies them access.

In statistical terms, a false positive is called a type I error, and a false negative is a type II error. In classical hypothesis testing, the type I error is the one explicitly controlled, but in practical systems the more serious error depends on the circumstances. If you are willing to trade off legitimate access (make authorized users try several times) to keep unauthorized parties out, then type I errors (false acceptances) are being avoided at the expense of type II errors (false rejections). But if legitimate access must not be denied, even at the risk of error (for example, signing in to prevent the meltdown of the core at a power plant), then type II errors are avoided at the expense of type I errors. Context and circumstances matter: consider what the biometrics are protecting and the cost of each type of failure.

What is desired is for the system to be able to differentiate the two signals—one being the stored value and the other being the observed value—in such a way that the two curves do not overlap. Figure 11.19 illustrates two probability distributions that do not overlap.


Figure 11.19 Ideal probabilities

For biometric authentication to work properly, and also be trusted, it must minimize the existence of both false positives and false negatives. But biometric systems are seldom that discriminating, and the curves tend to overlap, as shown in Figure 11.20. For detection to work, a balance between exacting and error must be created so that the machines allow a little physical variance—but not too much.


Figure 11.20 Overlapping probabilities

This leads us to acceptance and rejection rates.

False Acceptance Rate

The false acceptance rate (FAR) is just that: what level of false positives are going to be allowed in the system. If an unauthorized user is accepted by the system, this is a false acceptance. A false positive is demonstrated by the grayed-out area in Figure 11.21. In this section, the curves overlap, and the decision has been set that at the threshold or better an accept signal will be given. Thus, if you are on the upper end of the nonmatch curve, in the gray area, you will be a false positive. Expressed as probabilities, the false acceptance rate is the probability that the system incorrectly identifies a match between the biometric input and the stored template value. The FAR is calculated by counting the number of unauthorized accesses granted, divided by the total number of access attempts.


Figure 11.21 False acceptance rate

When selecting the threshold value, the designer must be cognizant of two factors: one is the rejection of a legitimate biometric, the area on the match curve below the threshold value. The other is the acceptance of a false positive. As you set the threshold higher, you will decrease false positives but increase false negatives (or rejections).

False Rejection Rate

The false rejection rate (FRR) is just that: what level of false negatives, or rejections, are going to be allowed in the system. If an authorized user is rejected by the system, this is a false rejection. A false rejection is demonstrated by the grayed-out area in Figure 11.22. In this section, the curves overlap, and the decision has been set that at the threshold or lower a reject signal will be given. Thus, if you are on the lower end of the match curve, in the gray area, you will be rejected, even if you should be a match. Expressed as probabilities, the false rejection rate is the probability that the system incorrectly rejects a legitimate match between the biometric input and the stored template value. The FRR is calculated by counting the number of authorized access attempts that were not granted, divided by the total number of access attempts.


Figure 11.22 False rejection rate
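Both rates can be computed directly from a log of labeled access attempts. A minimal sketch in Python follows; the `error_rates()` helper and its input format are assumptions for illustration, not a standard API.

```python
def error_rates(attempts):
    # attempts: list of (is_genuine, was_accepted) pairs, one per access attempt.
    genuine = [accepted for is_genuine, accepted in attempts if is_genuine]
    imposter = [accepted for is_genuine, accepted in attempts if not is_genuine]
    frr = genuine.count(False) / len(genuine)   # legitimate users rejected
    far = imposter.count(True) / len(imposter)  # imposters accepted
    return far, frr

# 10 genuine attempts (1 wrongly rejected), 20 imposter attempts (1 wrongly accepted)
log = [(True, True)] * 9 + [(True, False)] + [(False, False)] * 19 + [(False, True)]
far, frr = error_rates(log)
print(far, frr)  # 0.05 0.1
```

This mirrors the definitions above: the FAR divides false acceptances by imposter attempts, and the FRR divides false rejections by legitimate attempts.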

When comparing the FAR and the FRR, one realizes that in most cases, whenever the curves overlap, they are related. This brings up the issue of the crossover error rate (see Table 11.2).

Table 11.2  Comparison of Outcomes and Error Terms


Crossover Error Rate

The crossover error rate (CER), also known as the equal error rate (EER), is the rate where both accept and reject error rates are equal. This is the desired state for most efficient operation, and it can be managed by manipulating the threshold value used for matching. In practice, the values might not be exactly the same, but they will typically be close to each other. Figure 11.23 demonstrates the relationship between the FAR, FRR, and CER.


Figure 11.23 FRR, FAR, and CER compared
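The crossover point can be found numerically by sweeping the accept threshold over a set of genuine and imposter match scores and keeping the threshold where FAR and FRR are closest. A sketch with invented score values:

```python
def crossover_threshold(genuine_scores, imposter_scores, steps=1000):
    # Sweep the accept threshold from 0 to 1 and return the point where
    # FAR and FRR are closest: the crossover (equal) error rate.
    best = None
    for i in range(steps + 1):
        t = i / steps
        far = sum(s >= t for s in imposter_scores) / len(imposter_scores)
        frr = sum(s < t for s in genuine_scores) / len(genuine_scores)
        if best is None or abs(far - frr) < best[0]:
            best = (abs(far - frr), t, far, frr)
    _, threshold, far, frr = best
    return threshold, far, frr
```

Raising the threshold above the crossover point drives the FAR down and the FRR up; lowering it does the opposite, exactly the tradeoff shown in Figure 11.23.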

Biometrics Calculation Example

Assume we are using a fingerprint biometric system with 1000 users. During the enrollment stage, five users were unable to enroll (the system could not establish a fingerprint signature/template for them). This means the system has a failure to enroll rate (FER) of 0.5 percent. In other words, only 995 users can use the system, and an alternative means needs to be in place for the users who cannot.

During the testing of the 995 users, 50 users were rejected when the system matched their fingerprint against their enrollment fingerprint template.

FRR = (NFR / NEA) * 100%

(NFR = number of false rejections, and NEA = number of legitimate access attempts)

FRR = (50/995) * 100%

This makes the FRR approximately 5.03 percent.

Also, 25 of the 995 users were accepted by the system when the system matched their fingerprints against other users’ fingerprint templates.

FAR = (NFA / NIA) * 100%

(NFA = number of false acceptances, and NIA = number of imposter attempts)

FAR = (25/995) * 100%

This means the FAR is approximately 2.51 percent.


Understand how to calculate FAR and FRR, given data. This is an easy calculation, and remember to include those who fail enrollment.

The lower the FAR and FRR, the better the system, and the ideal situation is setting the thresholds where the FAR and FRR are equal (the crossover error rate).
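The worked example above can be reproduced in a few lines of Python; the function name and rounding choices here are illustrative.

```python
def biometric_rates(total_users, failed_enrollments, false_rejects, false_accepts):
    # Users who fail enrollment cannot attempt authentication at all.
    usable = total_users - failed_enrollments
    fer = failed_enrollments / total_users * 100   # failure to enroll rate
    frr = false_rejects / usable * 100             # legitimate users rejected
    far = false_accepts / usable * 100             # imposters accepted
    return fer, frr, far

fer, frr, far = biometric_rates(1000, 5, 50, 25)
print(round(fer, 2), round(frr, 2), round(far, 2))  # 0.5 5.03 2.51
```

Note the division by the 995 usable users, not the original 1000: the five who failed enrollment never generate authentication attempts.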

Images Multifactor Authentication

Multifactor authentication (or multiple-factor authentication) is simply the combination of two or more types of authentication. Five broad categories of authentication can be used: what you are (for example, biometrics), what you have (for instance, tokens), what you know (passwords and other information), somewhere you are (location), and something you do (physical performance). Two-factor authentication combines any two of these before granting access. An example would be a card reader that then turns on a fingerprint scanner; if your fingerprint matches the one on file for the card, you are granted access. Three-factor authentication combines any three, such as a smart card reader that asks for a PIN before enabling a retina scanner. If all three correspond to a valid user in the computer database, access is granted.


Two-factor authentication combines any two methods, matching items such as a token with a biometric. Three-factor authentication combines any three, such as a passcode, a biometric, and a token.

Multifactor authentication methods greatly enhance security by making it very difficult for an attacker to obtain all the correct materials for authentication. They also protect against the risk of stolen tokens, as the attacker must have the correct biometric, password, or both. More important, multifactor authentication enhances the security of biometric systems by protecting against a stolen biometric. Changing the token makes the biometric useless unless the attacker can steal the new token. It also reduces false positives by trying to match the supplied biometric with the one that is associated with the supplied token. This prevents the computer from seeking a match using the entire database of biometrics. Using multiple factors is one of the best ways to ensure proper authentication and access control.


Factors are the specific elements that comprise an item of proof. These items can be grouped into three classes: something you know (passwords), something you have (token), and something you are (biometrics). Each of these has advantages and disadvantages, as discussed in the following sections.

Something You Know

The most common example of something you know is a password. One of the challenges with using something you know as an authentication factor is that it can be “shared” (or duplicated) without you knowing it. Another concern with using something you know is that because of the vast number of different elements a typical user has to remember, they do things to assist with memory, such as repeating passwords, making slight changes to a password, such as incrementing the number from password1 to password2, and writing them down. These are all common methods used to deal with password sprawl, yet they each introduce new vulnerabilities.
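Because something you know can be silently duplicated, systems should never store the password itself; they store a salted, slow hash of it and compare hashes at login. A minimal sketch using Python's standard library (the iteration count and digest choice are illustrative parameters, not a mandated configuration):

```python
import hashlib
import hmac
import os

def hash_password(password: str):
    # A random per-user salt defeats precomputed (rainbow table) attacks.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(candidate, digest)
```

The deliberately slow key-derivation function limits the rate at which an attacker who steals the stored hashes can guess passwords.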

Another form of authentication via something you know is identity-driven authentication. In identity-driven authentication, you contact someone to get access, and they respond with a series of challenge questions. Sometimes the questions are based on previously submitted information, and sometimes the questions are based on publicly known information, such as previous addresses, phone numbers, cars purchased/licensed, and so on. Again, the proper respondent will know these answers, whereas an imposter will not. These tests are timed, and if the respondent takes too long (for example, taking the time to perform a lookup), they will fail.

Something You Have

Something you have specifically refers to security tokens and other items that a user can possess physically. One of the challenges with using something you have as an authentication factor is that you have to have it with you whenever you wish to be authenticated, and this can cause issues. It also relies on interfaces that may not be available for some systems, such as mobile devices, although interfaces, such as one-time password (OTP) generators, are device independent. OTP generators create new passwords on demand, against a sequence that is known only to the OTP generator and the OTP element on the system accepting the password.
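OTP generators of the kind just described commonly implement the HOTP and TOTP algorithms (RFC 4226 and RFC 6238): both sides share a secret, and each password is derived from a counter or the current time window, so the sequence is reproducible only by the two parties holding the secret. A compact sketch:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the 8-byte counter, then dynamic truncation.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def totp(secret: bytes, period: int = 30) -> str:
    # RFC 6238 time-based variant: the counter is the current 30-second window.
    return hotp(secret, int(time.time()) // period)
```

The hardware token and the server each run the same computation; a stolen password is useless after its counter or time window passes.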

One of the challenges of something you have is the concept of “something you lost,” such as something you left in a briefcase, at home, and so on. Just as leaving behind your key ring with your office key can force a return trip back home to get it, so can leaving a dongle or other security element that is “something you have” in nature. And if something you have becomes something you had stolen, the implications are fairly clear—you don’t have access and you have to re-identify yourself to get access again.

Something You Are

Something you are specifically refers to biometrics. One of the challenges with using “something you are” artifacts as authentication factors is that they are typically hard to change; once a biometric is compromised, it is effectively immutable, as you can switch fingers only a limited number of times before you run out of alternatives. Another challenge is that cultural or other issues may be associated with measuring things on a person. For example, people in some cultures object to having their pictures taken. Physical laborers in some industries tend to lack scannable fingerprints because they are worn down. Some biometrics are not usable in certain environments; for instance, the personal protective gear of medical or clean-room workers inhibits the use of fingerprint readers and potentially other biometric devices.


Attributes are collections of artifacts, like the factors previously presented, but rather than focus on the authentication item, they focus on elements associated with the user. Common attributes include the user’s location, their ability to perform a task, or something about the user themselves. These attributes are discussed in the following sections.

Somewhere You Are

One of the more discriminant authentication factors is your location, or somewhere you are. When a mobile device is used, GPS can identify where the device is currently located. When you are logged on to a local, wired desktop connection, it shows you are in the building. Both of these can be compared to records to see if you are really there or should be there. If you are badged into your building, and at your desk on a wired PC, then a second connection with a different location would be suspect, as you can only be in one place at a time.

With geofencing, location has become important for marketing services that push content to devices in specific places. Location services on mobile devices, coupled with geofencing, can alert others when you are in a specific area. This is not authentication per se, but it is a step in that direction.

Something You Can Do

Something you can do specifically refers to a physical action that you perform uniquely. An example of this is a signature; the movement of the pen and the two-dimensional output are difficult for others to reproduce. This makes it useful for authentication, but challenges exist in capturing the data, as signature pads are not common peripherals on machines. Gait analysis, presented earlier, is another example of this attribute. Something you can do is one of the harder artifacts to capture without specialized hardware, making it less ubiquitous as a method of authentication.

Something You Exhibit

Something you exhibit is a special case of a biometric. An example would be a brainwave response to seeing a picture. Another example would be the results of a lie detector test. The concept is to present a trigger and measure a response that cannot be faked. As sensors improve, tracking eye movement and sensing other aspects of responses will become forms that can be used to assist in authentication.

Someone You Know

Just as passwords relate to possession of knowledge, someone you know relates to a specific memory, in this case of an individual. This is the classic “having someone vouch for you” attribute. Electronically, this can be done via a chain-of-trust model, and it was commonly implemented in the past by people signing each other’s keys to indicate trust.

Images Remote Access

The process of connecting by remote access involves two elements: a temporary network connection and a series of protocols to negotiate privileges and commands. The temporary network connection can occur via a dial-up service, the Internet, wireless access, or any other method of connecting to a network. Once the connection is made, the primary issue is authenticating the identity of the user and establishing proper privileges for that user. This is accomplished using a combination of protocols and the operating system on the host machine.


Tech Tip

Securing Remote Connections

By using encryption, remote access protocols can securely authenticate and authorize a user according to previously established privilege levels. The authorization phase can keep unauthorized users out, but after that, encryption of the communications channel becomes very important in preventing unauthorized users from breaking in on an authorized session and hijacking an authorized user’s credentials. As more and more networks rely on the Internet for connecting remote users, the need for and importance of secure remote access protocols and secure communication channels will continue to grow.

When a user connects to the Internet through an ISP, this is similarly a case of remote access—the user is establishing a connection to their ISP’s network, and the same security issues apply. The issue of authentication, the matching of user-supplied credentials to previously stored credentials on a host machine, is usually done via a user account name and password. Once the user is authenticated, the authorization step takes place. Remote authentication usually takes the common form of an end user submitting their credentials via an established protocol to a remote access server (RAS), which acts upon those credentials, either granting or denying access.

Access controls define what actions a user can perform or what objects a user is allowed to access. Access controls are built on the foundation of elements designed to facilitate the matching of a user to a process. These elements are identification, authentication, and authorization. A myriad of details and choices are associated with setting up remote access to a network, and to provide for the management of these options, it is important for an organization to have a series of remote access policies and procedures spelling out the details of what is permitted and what is not for a given network.


Tech Tip


Federated identity management is an agreement between multiple enterprises that lets parties use the same identification data to obtain access to the networks of all enterprises in the group. This federation enables access to be managed across multiple systems in common trust levels.

IEEE 802.1X

IEEE 802.1X is an authentication standard that supports port-based authentication services between a user and an authorization device, such as an edge router. IEEE 802.1X is used by all types of networks, including Ethernet, Token Ring, and wireless. This standard describes methods used to authenticate a user prior to granting access to a network and the authentication server, such as a RADIUS server. 802.1X acts through an intermediate device, such as an edge switch, enabling ports to carry normal traffic if the connection is properly authenticated. This prevents unauthorized clients from accessing the publicly available ports on a switch, keeping unauthorized users out of a LAN. Until a client has successfully authenticated itself to the device, only Extensible Authentication Protocol over LAN (EAPOL) traffic is passed by the switch.


One security issue associated with 802.1X is that the authentication occurs only upon initial connection, and another user can insert themselves into the connection by changing packets or using a hub. The secure solution is to pair 802.1X, which authenticates the initial connection, with a VPN or IPSec, which provides persistent security.

EAPOL is an encapsulated method of passing EAP messages over 802.1X frames. EAP is a general protocol that can support multiple methods of authentication, including one-time passwords, Kerberos, public keys, and security device methods such as smart cards. Once a client successfully authenticates itself to the 802.1X device, the switch opens ports for normal traffic. At this point, the client can communicate with the system’s AAA method, such as a RADIUS server, and authenticate itself to the network.

802.1X is commonly used on wireless access points as a port-based authentication service prior to admission to the wireless network. 802.1X over wireless uses either 802.11i or EAP-based protocols, such as EAP-TLS and PEAP-TLS.


Cross Check

Wireless Remote Access

Wireless is a common method of allowing remote access to a network, as it does not require physical cabling and allows mobile connections. Wireless security, including protocols such as 802.11i and EAP-based solutions, is covered in Chapter 12.


The Lightweight Directory Access Protocol (LDAP) is commonly used to handle user authentication/authorization as well as control access to Active Directory objects. A client starts an LDAP session by connecting to an LDAP server, called a Directory System Agent (DSA), which by default is on TCP and UDP port 389, or on port 636 for LDAPS (LDAP over SSL).

To enable interoperability, X.500 was created as a standard for directory services. The primary method for accessing an X.500 directory is through the Directory Access Protocol (DAP), a heavyweight protocol that is difficult to implement completely, especially on PCs and more constrained platforms. This led to LDAP, which contains the most commonly used functionality. LDAP can interface with X.500 services and, more importantly, can be used over TCP with significantly less computing resources than a full X.500 implementation. LDAP offers all of the functionality most directories need and is easier and more economical to implement; hence, LDAP has become the Internet standard for directory services. LDAP standards are governed by two separate entities, depending on use: the International Telecommunication Union (ITU) governs the X.500 standard, and LDAP is governed for Internet use by the IETF. Many RFCs apply to LDAP functionality, but some of the most important are RFCs 2251 through 2256 and RFCs 2829 and 2830.


Remote Authentication Dial-In User Service (RADIUS) is an AAA protocol. It was submitted to the Internet Engineering Task Force (IETF) as a series of RFCs: RFC 2058 (RADIUS specification), RFC 2059 (RADIUS accounting standard), and updated RFCs 2865–2869, which are now standard protocols.

RADIUS is designed as a connectionless protocol that uses the User Datagram Protocol (UDP) as its transport layer protocol. Connection type issues, such as timeouts, are handled by the RADIUS application instead of the transport layer. RADIUS utilizes UDP port 1812 for authentication and authorization and UDP port 1813 for accounting functions.

RADIUS is a client/server protocol. The RADIUS client is typically a network access server (NAS). Network access servers act as intermediaries, authenticating clients before allowing them access to a network. RADIUS, RRAS (Microsoft), RAS, and VPN servers can all act as network access servers. The RADIUS server is a process or daemon running on a UNIX or Windows Server machine. Communications between a RADIUS client and RADIUS server are encrypted using a shared secret that is manually configured into each entity and not shared over a connection. Hence, communications between a RADIUS client (typically a NAS) and a RADIUS server are secure, but the communications between a user (typically a PC) and the RADIUS client are subject to compromise. This is important to note, because if the user’s machine (the PC) is not the RADIUS client (the NAS), then communications between the PC and the NAS are typically not encrypted and are passed in the clear.
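The shared secret described above is folded into an MD5 hash that authenticates server replies. Per RFC 2865, the Response Authenticator is MD5 over the packet header (Code, Identifier, Length), the Request Authenticator from the matching request, the reply attributes, and the shared secret. The sketch below shows only the verification a RADIUS client performs over a raw packet buffer; there is no network I/O, and the packet in the usage test is constructed locally for illustration.

```python
import hashlib
import hmac
import struct

def response_authenticator(code, identifier, attributes, request_authenticator, secret):
    # RFC 2865 section 3: MD5(Code + ID + Length + RequestAuth + Attributes + Secret).
    length = 20 + len(attributes)  # fixed 20-byte header plus attributes
    header = struct.pack(">BBH", code, identifier, length)
    return hashlib.md5(header + request_authenticator + attributes + secret).digest()

def verify_response(packet: bytes, request_authenticator: bytes, secret: bytes) -> bool:
    # Parse the header, pull out the received authenticator, and recompute it.
    code, identifier, length = struct.unpack(">BBH", packet[:4])
    received = packet[4:20]
    attributes = packet[20:length]
    expected = response_authenticator(code, identifier, attributes,
                                      request_authenticator, secret)
    return hmac.compare_digest(received, expected)
```

Because only the client and server know the secret, a forged or tampered reply fails this check; note, though, that this protects the NAS-to-server leg only, not the user-to-NAS leg.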

RADIUS Authentication

The RADIUS protocol is designed to allow a RADIUS server to support a wide variety of methods to authenticate a user. When the server is given a username and password, it can support Point-to-Point Protocol (PPP), Password Authentication Protocol (PAP), Challenge-Handshake Authentication Protocol (CHAP), Linux login, and other mechanisms, depending on what was established when the server was set up. A user login authentication consists of a query (Access-Request) from the RADIUS client and a corresponding response (Access-Accept, Access-Challenge, or Access-Reject) from the RADIUS server, as you can see in Figure 11.24. The Access-Challenge response is the initiation of a challenge/response handshake. If the client cannot support challenge/response, it treats the Challenge message as an Access-Reject.


Figure 11.24 RADIUS communication sequence

The Access-Request message contains the username, encrypted password, NAS IP address, and port. The message also contains information concerning the type of session the user wants to initiate. Once the RADIUS server receives this information, it searches its database for a match on the username. If a match is not found, either a default profile is loaded or an Access-Reject reply is sent to the user. If the entry is found or the default profile is used, the next phase involves authorization, because in RADIUS these steps are performed in sequence. Figure 11.24 shows the interaction between a user and the RADIUS client and RADIUS server as well as the steps taken to make a connection.
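The “encrypted password” carried in the Access-Request is produced by an MD5-based transform defined in RFC 2865, section 5.2. The Python sketch below shows the idea; the function names and sample values are illustrative, not part of any RADIUS library.

```python
import hashlib

def hide_password(password: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """Obfuscate a User-Password attribute per RFC 2865 section 5.2.

    The password is padded with NULs to a 16-octet boundary, then each
    16-octet chunk is XORed with MD5(secret + previous ciphertext block),
    where the first "previous block" is the Request Authenticator.
    """
    padded = password + b"\x00" * (-len(password) % 16)
    result = b""
    prev = authenticator
    for i in range(0, len(padded), 16):
        pad = hashlib.md5(secret + prev).digest()
        block = bytes(a ^ b for a, b in zip(padded[i:i + 16], pad))
        result += block
        prev = block
    return result

def recover_password(hidden: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """Reverse the transform (what the RADIUS server does).

    Note: trailing NUL stripping assumes the password itself contains none.
    """
    result = b""
    prev = authenticator
    for i in range(0, len(hidden), 16):
        pad = hashlib.md5(secret + prev).digest()
        result += bytes(a ^ b for a, b in zip(hidden[i:i + 16], pad))
        prev = hidden[i:i + 16]
    return result.rstrip(b"\x00")
```

Because the transform is a reversible XOR against an MD5 keystream, the server recovers the password using the same shared secret and Request Authenticator, which is one reason protecting the shared secret is critical.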

RADIUS Authorization

In the RADIUS protocol, the authentication and authorization steps are performed together in response to a single Access-Request message, although they are sequential steps (see Figure 11.24). Once an identity has been established, either known or default, the authorization process determines what parameters are returned to the client. Typical authorization parameters include the service type allowed (shell or framed), the protocols allowed, the IP address to assign to the user (static or dynamic), and the access list to apply or static route to place in the NAS routing table.


Tech Tip

Shell Accounts

Shell account requests are those that desire command-line access to a server. Once authentication is successfully performed, the client is connected directly to the server so command-line access can occur. Rather than being given a direct IP address on the network, the NAS acts as a pass-through device conveying access.

These parameters are all defined in the configuration information on the RADIUS client and server during setup. Using this information, the RADIUS server returns an Access-Accept message with these parameters to the RADIUS client.

RADIUS Accounting

The RADIUS accounting function is performed independently of RADIUS authentication and authorization. The accounting function uses a separate UDP port, 1813 (see Table 11.3 in the “Connection Summary” section at the end of the chapter). The primary functionality of RADIUS accounting was established to support ISPs in their user accounting, and it supports typical accounting functions for time billing and security logging. The RADIUS accounting functions are designed to allow data to be transmitted at the beginning and end of a session, and they can indicate resource utilization, such as time, bandwidth, and so on.


Using UDP transport to a centralized network access server, RADIUS provides authentication and access control for client systems within an enterprise network.

Diameter

Diameter is the name of an AAA protocol suite, designated by the IETF to replace the aging RADIUS protocol. Diameter operates in much the same way as RADIUS in a client/server configuration, but it improves upon RADIUS, resolving discovered weaknesses. Diameter is a TCP-based service and has more extensive AAA capabilities. Diameter is also designed for all types of remote access, not just modem pools. As more and more users adopt broadband and other connection methods, these newer services require more options to determine permissible usage properly and to account for and log the usage. Diameter is designed with these needs in mind.

Diameter also has an improved method of encrypting message exchanges to prohibit replay and on-path (formerly known as man-in-the-middle) attacks. Taken all together, Diameter, with its enhanced functionality and security, is an improvement on the proven design of the old RADIUS standard.

TACACS+

The Terminal Access Controller Access Control System Plus (TACACS+) protocol is the current generation of the TACACS family. Originally TACACS was developed by BBN Planet Corporation for MILNET, an early military network, but it has been enhanced by Cisco, which has expanded its functionality twice. The original BBN TACACS system provided a combination process of authentication and authorization. Cisco extended this to Extended Terminal Access Controller Access Control System (XTACACS), which provided for separate authentication, authorization, and accounting processes. The current generation, TACACS+, has extended attribute control and accounting processes.

One of the fundamental design aspects of TACACS+ is the separation of authentication, authorization, and accounting. Although there is a straightforward lineage of these protocols from the original TACACS, TACACS+ is a major revision and is not backward-compatible with previous versions of the protocol series.

TACACS+ uses TCP as its transport protocol, typically operating over TCP port 49. This port is reserved for the TACACS+ login host protocol in the port-number database maintained by the Internet Assigned Numbers Authority (IANA), the online registry that RFC 3232, “Assigned Numbers,” established as the successor to the old Assigned Numbers RFCs. In the IANA registry, both UDP port 49 and TCP port 49 are reserved for the TACACS+ login host protocol (see Table 11.3 in the “Connection Summary” section at the end of the chapter).

TACACS+ is a client/server protocol, with the client typically being a NAS and the server being a daemon process on a UNIX, Linux, or Windows Server. This is important to note, because if the user’s machine (usually a PC) is not the client (usually a NAS), then communications between the PC and NAS are typically not encrypted and are passed in the clear. Communications between a TACACS+ client and TACACS+ server are encrypted using a shared secret that is manually configured into each entity and is not shared over a connection. Hence, communications between a TACACS+ client (typically a NAS) and a TACACS+ server are secure, but the communications between a user (typically a PC) and the TACACS+ client are subject to compromise.
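The shared-secret “encryption” between a TACACS+ client and server is an XOR of the packet body against an MD5-derived keystream seeded with the session ID, the secret key, and the packet’s version and sequence number (the scheme described in RFC 8907). A minimal Python sketch, with made-up field values:

```python
import hashlib
import struct

def tacacs_pad(session_id: int, key: bytes, version: int, seq_no: int, length: int) -> bytes:
    """Generate the MD5-based keystream TACACS+ uses to obfuscate
    packet bodies (RFC 8907, section 4.5): each MD5 chunk is fed the
    seed plus the previous chunk, until enough keystream exists."""
    seed = struct.pack("!I", session_id) + key + bytes([version, seq_no])
    pad = b""
    chunk = b""
    while len(pad) < length:
        chunk = hashlib.md5(seed + chunk).digest()
        pad += chunk
    return pad[:length]

def obfuscate(body: bytes, session_id: int, key: bytes, version: int, seq_no: int) -> bytes:
    """XOR the body with the keystream; the same call deobfuscates."""
    pad = tacacs_pad(session_id, key, version, seq_no, len(body))
    return bytes(a ^ b for a, b in zip(body, pad))
```

Because the transform is a plain XOR, applying `obfuscate()` twice with the same parameters returns the original body, which is exactly how the receiving end recovers the plaintext.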

TACACS+ Authentication

TACACS+ allows for arbitrary length and content in the authentication exchange sequence, enabling many different authentication mechanisms to be used with TACACS+ clients. Authentication is optional and is determined as a site-configurable option. When authentication is used, common forms include PPP PAP, PPP CHAP, PPP EAP, token cards, and Kerberos. The authentication process is performed using three different packet types: START, CONTINUE, and REPLY. START and CONTINUE packets originate from the client and are directed to the TACACS+ server. The REPLY packet is used to communicate from the TACACS+ server to the client.

The authentication process is illustrated in Figure 11.25, and it begins with a START message from the client to the server. This message may be in response to an initiation from a PC connected to the TACACS+ client. The START message describes the type of authentication being requested (simple plaintext password, PAP, CHAP, and so on). This START message may also contain additional authentication data, such as a username and password. A START message is also sent as a response to a restart request from the server in a REPLY message. A START message always has its sequence number set to 1.


Figure 11.25 TACACS+ communication sequence

When a TACACS+ server receives a START message, it sends a REPLY message. This REPLY message indicates whether the authentication is complete or needs to be continued. If the process needs to be continued, the REPLY message also specifies what additional information is needed. The response from a client to a REPLY message requesting additional data is a CONTINUE message. This process continues until the server has all the information needed, and the authentication process concludes with a success or failure.

TACACS+ Authorization

Authorization is defined as the granting of specific permissions based on the privileges held by the account. This generally occurs after authentication, as shown in Figure 11.25, but this is not a firm requirement. A default state of “unknown user” exists before a user is authenticated, and permissions can be determined for an unknown user. As with authentication, authorization is an optional process and may or may not be part of a site-specific operation. When it is used in conjunction with authentication, the authorization process follows the authentication process and uses the confirmed user identity as input in the decision process.

The authorization process is performed using two message types: REQUEST and RESPONSE. The authorization process is performed using an authorization session consisting of a single pair of REQUEST and RESPONSE messages. The client issues an authorization REQUEST message containing a fixed set of fields enumerating the authenticity of the user or process requesting permission and a variable set of fields enumerating the services or options for which authorization is being requested.

The RESPONSE message in TACACS+ is not a simple yes or no; it can also include qualifying information, such as a user time limit or IP restrictions. These limitations have important uses, such as enforcing time limits on shell access or enforcing IP access list restrictions for specific user accounts.

TACACS+ Accounting

As with the two previous services, accounting is also an optional function of TACACS+. When utilized, it typically follows the other services. Accounting in TACACS+ is defined as the process of recording what a user or process has done. Accounting can serve two important purposes:

Images   It can be used to account for services being utilized, possibly for billing purposes.

Images   It can be used for generating security audit trails.

TACACS+ accounting records contain several pieces of information to support these tasks. The accounting process has the information revealed in the authorization and authentication processes, so it can record specific requests by user or process. To support this functionality, TACACS+ has three types of accounting records: START, STOP, and UPDATE. Note that these are record types, not message types as earlier discussed.


TACACS+ is a protocol that takes a client/server model approach and handles authentication, authorization, and accounting (AAA) services. It is similar to RADIUS but uses TCP (port 49) as a transport method.

Authentication Protocols

Numerous authentication protocols have been developed, used, and discarded in the brief history of computing. Some have come and gone because they did not enjoy market share, others have had security issues, and yet others have been revised and improved in newer versions. It’s impractical to cover them all, so only some of the common ones follow.

L2TP and PPTP

Layer 2 Tunneling Protocol (L2TP) and Point-to-Point Tunneling Protocol (PPTP) are both OSI Layer 2 tunneling protocols. Tunneling is the encapsulation of one packet within another, which allows you to hide the original packet from view or change the nature of the network transport. This can be done for both security and practical reasons.

From a practical perspective, assume that you are using TCP/IP to communicate between two machines. Your message may pass over various networks, such as an Asynchronous Transfer Mode (ATM) network, as it moves from source to destination. Because the ATM protocol can neither read nor understand TCP/IP packets, something must be done to make them passable across the network. By encapsulating a packet as the payload in a separate protocol, so it can be carried across a section of a network, a mechanism called a tunnel is created. At each end of the tunnel, called the tunnel endpoints, the payload packet is read and understood. As it goes into the tunnel, you can envision your packet being placed in an envelope with the address of the appropriate tunnel endpoint on it. When the envelope arrives at the tunnel endpoint, the original message (the tunnel packet’s payload) is re-created, read, and sent to its appropriate next stop. The information being tunneled is understood only at the tunnel endpoints; it is not relevant to intermediate tunnel points because it is only a payload.
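The envelope analogy reduces to a few lines of code. The sketch below uses a made-up 6-byte outer header (endpoint ID plus payload length) to show encapsulation and decapsulation; real tunneling protocols such as GRE or L2TP define their own header layouts.

```python
import struct
from typing import Tuple

def encapsulate(inner_packet: bytes, endpoint_id: int) -> bytes:
    """Wrap an opaque inner packet in a minimal outer header (the
    'envelope'): a 4-byte tunnel endpoint ID and a 2-byte payload length.
    This header format is invented for illustration."""
    return struct.pack("!IH", endpoint_id, len(inner_packet)) + inner_packet

def decapsulate(outer_packet: bytes) -> Tuple[int, bytes]:
    """At the tunnel endpoint, open the envelope and recover the payload."""
    endpoint_id, length = struct.unpack("!IH", outer_packet[:6])
    return endpoint_id, outer_packet[6:6 + length]
```

Intermediate nodes forward on the outer header alone; only the tunnel endpoint calls `decapsulate()` and looks at the payload.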

L2TP

Layer 2 Tunneling Protocol (L2TP) is an Internet standard that came from the Layer 2 Forwarding (L2F) protocol, a Cisco initiative designed to address issues with PPTP. Whereas PPTP is designed around PPP and IP networks, L2F (and hence L2TP) is designed for use across all kinds of networks, including ATM and Frame Relay. Additionally, whereas PPTP is designed to be implemented in software at the client device, L2TP was conceived as a hardware implementation using a router or a special-purpose appliance. L2TP can also be implemented in software; Microsoft’s Routing and Remote Access Service (RRAS), for example, uses L2TP to create a VPN.

L2TP works in much the same way as PPTP, but it opens up several items for expansion. For instance, in L2TP, routers can be enabled to concentrate VPN traffic over higher-bandwidth lines, creating hierarchical networks of VPN traffic that can be more efficiently managed across an enterprise. L2TP also has the ability to use IPSec, providing a higher level of data security. L2TP is also designed to work with established AAA services such as RADIUS and TACACS+ to aid in user authentication, authorization, and accounting.

L2TP is established via UDP port 1701, so this is an essential port to leave open across firewalls supporting L2TP traffic. Microsoft supports L2TP in Windows operating systems, but because of the computing power required, most implementations will use specialized hardware (such as a Cisco router).

PPTP

Microsoft led a consortium of networking companies to extend PPP to enable the creation of virtual private networks (VPNs). The result was the Point-to-Point Tunneling Protocol (PPTP), a network protocol that enables the secure transfer of data from a remote PC to a server by creating a VPN across a TCP/IP network. This remote network connection can also span a public switched telephone network (PSTN) and is thus an economical way of connecting remote dial-in users to a corporate data network. The incorporation of PPTP into the Microsoft Windows product line provides a built-in secure method of remote connection using the operating system, and this has given PPTP a large marketplace footprint.

For most PPTP implementations, three computers are involved: the PPTP client, the NAS, and a PPTP server, as shown in Figure 11.26. The connection between the remote client and the network is established in stages, as illustrated in Figure 11.27. First, the client makes a PPP connection to a NAS, typically an ISP. (In today’s world of widely available broadband, if an Internet connection already exists, this first PPP connection is unnecessary.) Once the PPP connection is established, a second connection is made over it to the PPTP server. This second connection creates the VPN between the remote client and the PPTP server and acts as a tunnel for future data transfers. A typical example is a user in a hotel with a wireless Internet connection connecting to a corporate network. Although the diagrams illustrate a telephone connection, this first link can be made over virtually any medium. Wired connections to the Internet are common in hotels today; they typically are provided by a local ISP and offer the same services as a phone connection, albeit at a much higher data transfer rate.


Figure 11.26 PPTP communication diagram


Figure 11.27 PPTP message encapsulation during transmission

PPTP establishes a tunnel from the remote PPTP client to the PPTP server and enables encryption within this tunnel. This provides a secure method of transport. To do this and still enable routing, an intermediate addressing scheme, Generic Routing Encapsulation (GRE), is used.
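A GRE header is small: the base layout (RFC 2784) is just a flags/version word followed by a protocol type identifying the payload. The sketch below packs that base header in Python; note that PPTP actually uses an extended GRE variant (RFC 2637) that adds key and sequence-number fields, which are omitted here.

```python
import struct

# Protocol type that PPTP's enhanced GRE uses for PPP payloads.
GRE_PROTO_PPP = 0x880B

def gre_header(protocol: int, checksum_present: bool = False) -> bytes:
    """Pack a minimal 4-byte base GRE header (RFC 2784):
    a 16-bit flags/version field, then a 16-bit protocol type.
    The C bit (0x8000) signals that a checksum field follows."""
    flags = 0x8000 if checksum_present else 0
    return struct.pack("!HH", flags, protocol)
```

The protocol type tells the receiving endpoint how to interpret the tunneled payload, for example 0x0800 for an encapsulated IPv4 packet.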

To establish the connection, PPTP uses communications across TCP port 1723 (see Table 11.3 in the “Connection Summary” section at the end of the chapter), so this port must remain open across the network firewalls for PPTP to be initiated. Although PPTP allows the use of any PPP authentication scheme, CHAP is used when encryption is specified, to provide an appropriate level of security. For the encryption methodology, Microsoft chose the RSA RC4 cipher with either a 40- or 128-bit session key, the length being determined by the operating system. Microsoft Point-to-Point Encryption (MPPE) is an extension to PPP that enables VPNs to use PPTP as the tunneling protocol.

PPP

Point-to-Point Protocol (PPP) is an older, still widely used protocol for establishing dial-in connections over serial lines or Integrated Services Digital Network (ISDN) services. PPP has several authentication mechanisms, including PAP, CHAP, and the Extensible Authentication Protocol (EAP). These protocols are used to authenticate the peer device, not a user of the system. PPP is a standardized Internet encapsulation of IP traffic over point-to-point links, such as serial lines. The authentication process is performed only when the link is established.


Tech Tip

PPP Functions and Authentication

PPP supports three functions:

Images   Encapsulate datagrams across serial links

Images   Establish, configure, and test links using LCP

Images   Establish and configure different network protocols using NCP

PPP supports three authentication protocols:

Images   Password Authentication Protocol (PAP)

Images   Challenge-Handshake Authentication Protocol (CHAP)

Images   Extensible Authentication Protocol (EAP)

EAP

Extensible Authentication Protocol (EAP) is a universal authentication framework defined by RFC 3748 that is frequently used in wireless networks and point-to-point connections. Although EAP is not limited to wireless and can be used for wired authentication, it is most often used in wireless LANs. EAP is discussed in detail in Chapter 12.

CHAP

Challenge-Handshake Authentication Protocol (CHAP) is used to provide authentication across a point-to-point link using PPP. In this protocol, authentication after the link has been established is not mandatory. CHAP is designed to provide authentication periodically through the use of a challenge/response system that is sometimes described as a three-way handshake, as illustrated in Figure 11.28. The initial challenge (a randomly generated number) is sent to the client. The client uses a one-way hashing function to calculate what the response should be and then sends this back. The server compares the response to what it calculated the response should be. If they match, communication continues. If the two values don’t match, then the connection is terminated. This mechanism relies on a shared secret between the two entities so that the correct values can be calculated.


Figure 11.28 The CHAP challenge/response sequence
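The hash calculation in the handshake is specified in RFC 1994: the response is MD5 over the message Identifier octet, the shared secret, and the server’s random challenge. A minimal Python sketch (the function names are illustrative):

```python
import hashlib

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP response per RFC 1994: MD5 over the Identifier octet,
    the shared secret, and the random challenge, in that order."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

def server_verify(identifier: int, secret: bytes, challenge: bytes,
                  response: bytes) -> bool:
    """The server recomputes the hash from its own copy of the shared
    secret and compares it to what the client sent."""
    return chap_response(identifier, secret, challenge) == response
```

Because only the hash crosses the wire, a sniffer never sees the shared secret, and because the challenge is random each time, a captured response cannot simply be replayed.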

Microsoft has created two versions of CHAP, modified to increase the usability of CHAP across Microsoft’s product line. MS-CHAP v1, defined in RFC 2433, has been deprecated and was dropped in Windows Vista. The current standard, MS-CHAP v2, defined in RFC 2759, was introduced with Windows 2000.

NTLM

NT LAN Manager (NTLM) is an authentication protocol designed by Microsoft for use with the Server Message Block (SMB) protocol. SMB is an application-level network protocol primarily used for sharing files and printers in Windows-based networks. NTLM is the successor to the authentication protocol in Microsoft LAN Manager (LANMAN), an older Microsoft product. Both of these suites have been widely replaced by Microsoft’s Kerberos implementation, although NTLM is still used for logon authentication on standalone Windows machines. The current version is NTLM v2, which was introduced with Windows NT 4.0 SP4. NTLM uses an encrypted challenge/response protocol to authenticate a user without sending the user’s password over the wire, but its underlying cryptography, which relies on MD4, is weak by today’s standards. Although Microsoft has adopted the Kerberos protocol for authentication, NTLM v2 is still used in the following situations:

Images   When authenticating to a server using an IP address

Images   When authenticating to a server that belongs to a different Active Directory forest

Images   When authenticating to a server that doesn’t belong to a domain

Images   When no Active Directory domain exists (“workgroup” or “peer-to-peer” connection)

PAP

Password Authentication Protocol (PAP) involves a two-way handshake in which the username and password are sent across the link in cleartext. PAP authentication does not provide any protection against playback and line sniffing. PAP is now a deprecated standard.


PAP is a cleartext authentication protocol and hence is subject to interception.

Telnet

One of the methods to grant remote access to a system is through Telnet. Telnet is the standard terminal-emulation protocol within the TCP/IP protocol series, and it is defined in RFC 854. Telnet allows users to log in remotely and access resources as if the user had a local terminal connection. Telnet is an old protocol and offers little security. Information, including account names and passwords, is passed in cleartext over the TCP/IP connection.

Telnet makes its connection using TCP port 23. As Telnet is implemented on most products using TCP/IP, it is important to control access to Telnet on machines and routers when setting them up. Failure to control access by using firewalls, access lists, and other security methods, or even by disabling the Telnet daemon, is equivalent to leaving an open door for unauthorized users on a system.

SSH

Secure Shell (SSH) is a protocol series designed to facilitate secure network functions across an insecure network. SSH provides direct support for secure remote login, secure file transfer, and secure forwarding of TCP/IP and X Window System traffic. An SSH connection is an encrypted channel, providing for confidentiality and integrity protection. SSH uses TCP port 22. SCP (secure copy) and SFTP (secure FTP) use SSH, so each also uses TCP port 22.

SSH has its origins as a replacement for the insecure Telnet application from the UNIX operating system. An original component of UNIX, Telnet allowed users to connect between systems. Although Telnet is still used today, it has some drawbacks, as discussed in the preceding section. Some enterprising University of California, Berkeley, students subsequently developed the r- commands, such as rlogin, to permit access based on the user and source system, as opposed to passing passwords. This was not perfect either, however, because when a login was required, it was still passed in the clear. This led to the development of the SSH protocol series, designed to eliminate all of the insecurities associated with Telnet, r- commands, and other means of remote access.

SSH opens a secure transport channel between machines by using an SSH daemon on each end. These daemons initiate contact over TCP port 22 and then communicate over higher ports in a secure mode. One of the strengths of SSH is its support for many different encryption protocols. SSH 1.0 started with RSA algorithms, but at the time they were still under patent, and this led to SSH 2.0 with extended support for Triple DES (3DES) and other encryption methods. Today, SSH can be used with a wide range of encryption protocols, including RSA, Blowfish, International Data Encryption Algorithm (IDEA), CAST128, AES256, and others.

The SSH protocol has facilities to encrypt data automatically, provide authentication, and compress data in transit. It can support strong encryption, cryptographic host authentication, and integrity protection. At the transport layer, the authentication services are host based rather than user based; authenticating individual users is handled separately by a higher-level protocol. The protocol is designed to be flexible and simple, and it is designed specifically to minimize the number of round trips between systems. The key exchange, public key, symmetric key, message authentication, and hash algorithms are all negotiated at connection time. The integrity of each data packet is ensured through a message authentication code that is computed from a shared secret, the contents of the packet, and the packet sequence number.
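That per-packet integrity check can be sketched directly. In SSH 2.0 (RFC 4253, section 6.4), the MAC is computed over the 32-bit packet sequence number concatenated with the unencrypted packet; the example below assumes the negotiated algorithm is hmac-sha2-256 and uses a made-up key.

```python
import hashlib
import hmac
import struct

def ssh_packet_mac(key: bytes, seq_no: int, packet: bytes) -> bytes:
    """Per-packet MAC as in RFC 4253 section 6.4: HMAC over the 32-bit
    sequence number followed by the unencrypted packet contents."""
    return hmac.new(key, struct.pack("!I", seq_no) + packet,
                    hashlib.sha256).digest()
```

The receiver keeps its own sequence counter and recomputes the MAC, so a packet that is modified, replayed, or delivered out of order fails verification.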

The SSH protocol consists of three major components:

Images   Transport layer protocol  Provides server authentication, confidentiality, integrity, and compression

Images   User authentication protocol  Authenticates the client to the server

Images   Connection protocol  Provides multiplexing of the encrypted tunnel into several logical channels

SSH is very popular in Linux environments, and it is actively used as a method of establishing VPNs across public networks. Because all communications between the two machines are encrypted at the OSI application layer by the two SSH daemons, it is possible to build very secure solutions, even solutions that defeat monitoring by outside services. Because SSH is a standard protocol series with connection parameters established via TCP port 22, different vendors can build differing solutions that still interoperate.


Tech Tip

Remote Desktop Protocol

Remote Desktop Protocol (RDP) is a proprietary Microsoft protocol designed to provide a graphical connection to another computer. The computer requesting the connection runs RDP client software (built into Windows), and the target runs an RDP server. This software has been available for many versions of Windows and was formerly called Terminal Services. Client and server versions also exist for Linux platforms. RDP uses TCP and UDP port 3389, so if RDP is desired, this port needs to be open on the firewall.

Although Windows Server implementations of SSH exist, this has not been a popular protocol in the Windows environment from a server perspective. The development of a wide array of commercial SSH clients for the Windows platform indicates the marketplace strength of interconnection from desktop PCs to Linux-based servers utilizing this protocol. Windows 10 uses OpenSSH as both its default SSH client and server.

SAML

Security Assertion Markup Language (SAML) is a single sign-on (SSO) capability used for web applications to ensure user identities can be shared and are protected. It defines standards for exchanging authentication and authorization data between security domains. It is becoming increasingly important with cloud-based solutions and with Software as a Service (SaaS) applications because it ensures interoperability across identity providers.

SAML is an XML-based protocol that uses security tokens and assertions to pass information about a “principal” (typically an end user) between a SAML authority (an “identity provider,” or IdP) and a service provider (SP). The principal requests a service from the SP, which then requests and obtains an identity assertion from the IdP. The SP can then grant access or perform the requested service for the principal.
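Stripped of signatures, conditions, and most of its namespace machinery, the assertion an IdP hands to an SP is structured XML naming the principal. The Python sketch below builds and parses a drastically simplified assertion-like document; real SAML assertions are digitally signed and carry validity windows and audience restrictions.

```python
import xml.etree.ElementTree as ET

NS = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_assertion(principal: str, issuer: str) -> str:
    """Build a toy SAML-style assertion: who is asserting (Issuer)
    and who is being asserted about (Subject/NameID)."""
    assertion = ET.Element(f"{{{NS}}}Assertion")
    ET.SubElement(assertion, f"{{{NS}}}Issuer").text = issuer
    subject = ET.SubElement(assertion, f"{{{NS}}}Subject")
    ET.SubElement(subject, f"{{{NS}}}NameID").text = principal
    return ET.tostring(assertion, encoding="unicode")

def extract_principal(xml_text: str) -> str:
    """What a service provider does: pull the asserted identity out."""
    root = ET.fromstring(xml_text)
    return root.find(f"{{{NS}}}Subject/{{{NS}}}NameID").text
```

In practice the SP trusts the asserted NameID only because the real document is signed by an IdP it already trusts; without that signature, the XML proves nothing.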


By allowing identity providers to pass on credentials to service providers, SAML allows you to log in to many different websites using one set of credentials.

OAuth

OAuth (Open Authorization) is an open protocol that allows secure token-based authorization on the Internet in a simple, standard method from web, mobile, and desktop applications. OAuth is used by companies such as Google, Facebook, Microsoft, and Twitter to permit users to share information about their accounts with third-party applications or websites. OAuth 1.0 was developed by a Twitter engineer as part of the Twitter OpenID implementation. OAuth 2.0 (not backward compatible with 1.0) has taken off with support from most major web platforms. OAuth’s main strength is that an external partner site can use it to access protected data without having to reauthenticate the user.

OAuth was created to remove the need for users to share their passwords with third-party applications, instead substituting a token. OAuth 2.0 expanded this into also providing authentication services, so it can eliminate the need for OpenID.
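The first step of the common OAuth 2.0 authorization-code flow is just a redirect URL carrying a handful of query parameters (RFC 6749, section 4.1.1). A sketch using Python’s standard urllib; the endpoint and parameter values are invented for illustration.

```python
from urllib.parse import urlencode

def authorization_url(auth_endpoint: str, client_id: str, redirect_uri: str,
                      scope: str, state: str) -> str:
    """Build the redirect that starts an OAuth 2.0 authorization-code
    flow (RFC 6749 section 4.1.1)."""
    params = {
        "response_type": "code",   # ask for an authorization code
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,            # CSRF protection, checked on return
    }
    return auth_endpoint + "?" + urlencode(params)
```

The user authenticates at the authorization server, which redirects back to `redirect_uri` with a short-lived code; the application then exchanges that code (plus its client credentials) for the access token that substitutes for the user’s password.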

OpenID Connect

OpenID Connect is a simple identity layer on top of the OAuth 2.0 protocol. OpenID Connect allows clients of all types (mobile, JavaScript, and web-based clients) to request and receive information about authenticated sessions and end users. OpenID is about proving who you are, which is the first step in the authentication-authorization sequence. To perform authorization, a second process is needed, so OpenID is commonly paired with OAuth 2.0. OpenID was created for federated authentication: it lets a third party authenticate your users for you, using accounts the users already have.


OpenID and OAuth are typically used together, yet have different purposes. OpenID is used for authentication, whereas OAuth is used for authorization.

Shibboleth

Shibboleth is a service designed to enable single sign-on and federated identity-based authentication and authorization across networks. It began in 2000 and has been through several revisions and versions, but it has yet to gain widespread acceptance. Shibboleth is a web-based technology built on SAML; it uses the HTTP/POST, artifact, and attribute push profiles of SAML, including both Identity Provider (IdP) and Service Provider (SP) components, to achieve its goals. As such, it is included by many services that use SAML for identity management.

Secure Token

Within a claims-based identity framework, such as OASIS WS-Trust, security tokens are used. A secure token service is responsible for issuing, validating, renewing, and cancelling these security tokens. The tokens issued can then be used to identify the holder of the token to any services that adhere to the WS-Trust standard. Secure tokens solve the problem of authentication across stateless platforms, because user identity must be established with each request. The following outlines the basic five-step process for using tokens:

1.   The user requests access with a username and password.

2.   The secure token service validates the user’s credentials.

3.   The secure token service provides a signed token to the client.

4.   The client stores that token and sends it along with every request.

5.   The server verifies the token and responds with data.

These steps are highly scalable and can be widely distributed and even shared. A user application can use a token for access via another app (for example, allowing someone to validate a login to Twitter via Facebook) because the token is transportable.
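The five-step flow above can be sketched with an HMAC-signed token. This is a simplified, illustrative design in Python (real deployments typically use a standardized format such as JWT, plus expiry claims); the key and claim values are made up.

```python
import base64
import hashlib
import hmac
import json
from typing import Optional

def issue_token(claims: dict, key: bytes) -> str:
    """Step 3: the token service signs the claims so any cooperating
    server can later verify them without a shared session store."""
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = base64.urlsafe_b64encode(hmac.new(key, body, hashlib.sha256).digest())
    return (body + b"." + sig).decode()

def verify_token(token: str, key: bytes) -> Optional[dict]:
    """Step 5: recompute the signature over the body and reject the
    token if it does not match; only then trust the claims."""
    body, _, sig = token.encode().partition(b".")
    expected = base64.urlsafe_b64encode(hmac.new(key, body, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(body))
```

Any server holding the key can verify the token in step 5 without consulting a session store, which is what makes the scheme stateless and scalable; tampering with the claims invalidates the signature.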

FTP/FTPS/SFTP

One of the methods of transferring files between machines is through the use of the File Transfer Protocol (FTP). FTP is a plaintext protocol that operates by communicating over TCP between a client and a server. The client initiates a transfer with an FTP request to the server’s TCP port 21. This is the control connection, and this connection remains open over the duration of the file transfer. The actual data transfer occurs on a negotiated data transfer port, typically a high-order port number. FTP was not designed to be a secure method of transferring files. If a secure method is desired, then using FTPS or SFTP is best.

FTPS is the use of FTP over an SSL/TLS secured channel. This can be done either in explicit mode, where an AUTH TLS command is issued, or in implicit mode, where the transfer occurs over TCP port 990 for the control channel and TCP port 989 for the data channel. SFTP is not FTP per se, but rather a completely separate Secure File Transfer Protocol as defined by an IETF draft, the latest of which, version 6, expired in July of 2007 but has been incorporated into products in the marketplace.


FTP uses TCP port 21 as a control channel and TCP port 20 as the data port in active mode. Passive mode negotiates a high-order data port instead, which can be a problem because some firewalls are set to block ports above 1024.

It is also possible to run FTP over SSH, as later versions of SSH allow securing channels such as the FTP control channel; this arrangement has also been referred to as Secure FTP, or SFTP. Tunneling only the control channel leaves the data channel unencrypted, a problem that was solved in version 3.0 of SSH, which supports FTP commands. The challenge of encrypting the FTP data communications is that the mutually negotiated data port must be opened on the firewall, and for security reasons, high-order ports that are not explicitly defined are typically blocked. Because of this challenge, Secure Copy (SCP) is often a more desirable alternative to FTP over SSH.


A virtual private network (VPN) is a secure virtual network built on top of a physical network. The security of a VPN lies in the encryption of packet contents between the endpoints that define the VPN. The physical network upon which a VPN is built is typically a public network, such as the Internet. Because the packet contents between VPN endpoints are encrypted, to an outside observer on the public network, the communication is secure, and depending on how the VPN is set up, security can even extend to the two communicating parties’ machines.

Virtual private networking is not a protocol but rather a method of using protocols to achieve a specific objective—secure communications—as shown in Figure 11.29. A user who wants to have a secure communication channel with a server across a public network can set up two intermediary devices, called VPN endpoints, to accomplish this task. The user can communicate with their endpoint, and the server can communicate with its endpoint. The two endpoints then communicate across the public network. VPN endpoints can be software solutions, routers, or specific servers set up for specific functionality. This implies that VPN services are set up in advance and are not something negotiated on the fly.


Figure 11.29 VPN service over an Internet connection


VPNs are commonly used for remote access to enterprise networks, providing protection from outside traffic. VPNs can also be used from site to site between network nodes in an overall system with geographic separation.

A typical use of VPN services is a user accessing a corporate data network from a home PC across the Internet. The employee installs VPN software from work on a home PC. This software is already configured to communicate with the corporate network’s VPN endpoint; it knows the location, the protocols that will be used, and so on. When the home user wants to connect to the corporate network, they connect to the Internet and then start the VPN software. The user can then log in to the corporate network by using an appropriate authentication and authorization methodology. The sole purpose of the VPN connection is to provide a private connection between the machines, which encrypts any data sent between the home user’s PC and the corporate network. Identification, authorization, and all other standard functions are accomplished with the standard mechanisms for the established system.

VPNs can use many different protocols to offer a secure method of communicating between endpoints. Common tunneling and encryption methods on VPNs include PPTP, IPsec, SSH, and L2TP (which relies on IPsec for encryption), all of which are discussed in this chapter. The key is that both endpoints know the protocol and share a secret. All of this necessary information is established when the VPN is set up. At the time of use, the VPN only acts as a private tunnel between the two points and does not constitute a complete security solution.
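What the two endpoints do can be illustrated with a toy encapsulation routine: encrypt the inner packet under the pre-shared secret, add an integrity tag, and wrap it for transit across the public network. The keystream construction here is an insecure teaching stand-in; real VPNs use IPsec or TLS cipher suites negotiated when the VPN is set up.

```python
# Toy illustration of VPN endpoint behavior: encapsulate on one side,
# decapsulate on the other. NOT real cryptography -- for teaching only.
import hashlib
import hmac
import os

SHARED_SECRET = b"established-when-the-vpn-was-set-up"  # placeholder value


def _keystream(secret: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream (stand-in for a real cipher)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(secret + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]


def encapsulate(inner_packet: bytes) -> bytes:
    """Endpoint A: encrypt the inner packet and add an integrity tag."""
    nonce = os.urandom(16)
    ks = _keystream(SHARED_SECRET, nonce, len(inner_packet))
    ct = bytes(a ^ b for a, b in zip(inner_packet, ks))
    tag = hmac.new(SHARED_SECRET, nonce + ct, hashlib.sha256).digest()
    return nonce + tag + ct      # the outer packet seen on the public network


def decapsulate(outer_packet: bytes) -> bytes:
    """Endpoint B: verify integrity, then recover the inner packet."""
    nonce, tag, ct = outer_packet[:16], outer_packet[16:48], outer_packet[48:]
    expected = hmac.new(SHARED_SECRET, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("packet modified in transit")
    ks = _keystream(SHARED_SECRET, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))
```

An outside observer sees only the nonce, tag, and ciphertext of the outer packet; the inner packet's contents and any modification attempts are both protected.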


Tech Tip

Split Tunnels

Split tunneling is a VPN configuration in which not all traffic is routed via the VPN. It allows multiple connection paths: traffic to protected resources travels over the VPN, while other traffic, such as traffic to local network resources like printers, is routed via non-VPN paths. A full-tunnel solution routes all traffic over the VPN.
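The split-tunnel routing decision can be sketched in a few lines; the corporate prefixes below are illustrative assumptions, not values any particular product uses.

```python
# Split-tunnel routing decision: corporate prefixes go over the VPN,
# everything else uses the normal local route.
import ipaddress

# Hypothetical prefixes reachable only through the corporate VPN.
VPN_ROUTED_PREFIXES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
]


def route_for(destination: str) -> str:
    """Return 'vpn' for corporate destinations, 'local' for everything else."""
    addr = ipaddress.ip_address(destination)
    if any(addr in net for net in VPN_ROUTED_PREFIXES):
        return "vpn"
    return "local"   # a full-tunnel configuration would return "vpn" unconditionally
```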

Vulnerabilities of Remote Access Methods

The primary vulnerability associated with many of these methods of remote access is the passing of critical data in plaintext. Plaintext passing of passwords provides no security if the traffic is sniffed, and sniffers are easy to use on a network. Even plaintext passing of user IDs gives away information that can be correlated and possibly used by an attacker. Plaintext credential passing is one of the fundamental flaws with Telnet and is why SSH was developed. It is also a flaw in RADIUS and TACACS+, each of which leaves a segment of its traffic unprotected (RADIUS encrypts only the password attribute, while TACACS+ encrypts the packet body but not the header). There are methods for overcoming these limitations, although they require discipline and understanding in setting up a system.

The strength of the encryption algorithm is also a concern. Should a specific algorithm or method prove to be vulnerable, services that rely solely on it are also vulnerable. To get around this dependency, many of the protocols allow numerous encryption methods so that, should one prove vulnerable, a shift to another restores security.


Tech Tip

Access Violations

The importance of authentication and authorization to a security program cannot be overstated. These systems are the foundation of access to system objects, actions, and resources. Should failures occur, it is important to invoke logging and notification so that incident response can be activated if necessary. Access violations can be minor, or they can be significant with respect to risk, but they must be detected and acted upon. In this regard, the authorization system should be linked to logging for all critical items in a system so that actions can be initiated when violations occur.

As with any software implementation, there always exists the possibility that a bug could open the system to attack. Bugs have been corrected in most software packages to close holes that made systems vulnerable, and remote access functionality is no exception. This is not a Microsoft-only phenomenon, as one might believe from the popular press. Critical flaws have been found in almost every product, from open system implementations such as OpenSSH to proprietary systems such as Cisco IOS. The important issue is not the presence of software bugs, because as software continues to become more complex, this is an unavoidable issue. The true key is vendor responsiveness to fixing the bugs once they are discovered, and the major players, such as Cisco and Microsoft, have been very responsive in this area.

Images Preventing Data Loss or Theft

Identity theft and commercial espionage have become very large and lucrative criminal enterprises over the past decade. Hackers are no longer merely content to compromise systems and deface websites. In many attacks performed today, hackers are after intellectual property, business plans, competitive intelligence, personal information, credit card numbers, client records, or any other information that can be sold, traded, or manipulated for profit. This has created a whole industry of technical solutions labeled data loss prevention (DLP) solutions.

DLP solutions are built on the assumption that an attacker may already have assumed the identity of an authorized user, so they work to prevent the exfiltration of data regardless of access control restrictions. DLP solutions come in many forms, and each has strengths and weaknesses. The best solution is a combination of security elements: some to secure data in storage (encryption), some in the form of monitoring (proxy devices that inspect data egress for sensitive content), and even NetFlow analytics to identify new bulk data transfer routes.
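One monitoring element of this kind can be sketched as an egress filter that looks for candidate credit card numbers and confirms them with the Luhn checksum to reduce false positives. Real DLP products add many more detectors (Social Security numbers, keyword lists, fingerprinted documents), so this is only a sketch of the pattern-matching idea.

```python
# Sketch of a DLP egress filter: find candidate card numbers in
# outbound text, then keep only those that pass the Luhn checksum.
import re

CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")


def luhn_valid(digits: str) -> bool:
    """Standard Luhn checksum used to validate card numbers."""
    total = 0
    for i, d in enumerate(reversed(digits)):
        n = int(d)
        if i % 2 == 1:          # double every second digit from the right
            n *= 2
            if n > 9:
                n -= 9
        total += n
    return total % 10 == 0


def find_card_numbers(outbound_text: str) -> list:
    """Return Luhn-valid card numbers found in outbound traffic."""
    hits = []
    for match in CARD_CANDIDATE.finditer(outbound_text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits
```

A proxy running a filter like this would flag or block the transfer even though the sender authenticated as a legitimate user.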

Images Database Security

Database security is a concern for many enterprises, as the data in databases represents valuable information assets. Major database engines have built-in encryption capabilities that can provide the desired levels of confidentiality and integrity for the contents of the database. The advantage of these encryption schemes is that they can be tailored to the data structure, protecting the essential columns while not impacting columns that are not sensitive. Properly employing database encryption requires that the data schema and its security requirements be designed into the database implementation. The payoff is better protection against any database compromise, and the performance hit is typically negligible compared with other alternatives.
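The column-tailoring idea can be sketched with sqlite3: encrypt the sensitive column on write and decrypt on read, while non-sensitive columns stay in plaintext and remain searchable. The XOR helper is only a placeholder where a commercial engine would use a real cipher such as AES (for example, column-level transparent data encryption features).

```python
# Column-level encryption sketch: the salary column is encrypted at
# rest, the name column is not. XOR here stands in for a real cipher.
import hashlib
import sqlite3

COLUMN_KEY = hashlib.sha256(b"demo column key").digest()  # placeholder key


def _xor_cipher(data: bytes) -> bytes:
    """Placeholder for a real block cipher; XOR is symmetric, so the
    same call both encrypts and decrypts."""
    return bytes(b ^ COLUMN_KEY[i % len(COLUMN_KEY)] for i, b in enumerate(data))


def demo() -> str:
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE employees (name TEXT, salary BLOB)")
    db.execute("INSERT INTO employees VALUES (?, ?)",
               ("alice", _xor_cipher(b"105000")))      # encrypt on write
    stored = db.execute(
        "SELECT salary FROM employees WHERE name='alice'").fetchone()[0]
    assert stored != b"105000"                         # ciphertext at rest
    return _xor_cipher(stored).decode()                # decrypt on read
```

Someone who copies the database file sees only ciphertext in the salary column, while queries on the non-sensitive name column are unaffected.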

Images Cloud vs. On-premises Requirements

Authentication to the cloud versus on premises is basically the identity and authentication problem revisited. When establishing either a cloud or an on-premises system, you use identity and authentication as the foundation of your security effort. You may use Active Directory or another system to manage identities on premises, but when you're establishing a cloud-based system, the options need to be reviewed in full and appropriate choices made based on how the cloud is used in the enterprise. The simplest method is a completely new, independent system, although this increases costs and reduces usability as the number of users grows. Solutions such as federated authentication and single sign-on also exist, and the proper choice of authentication process should rest on data criticality and who needs access.

Images Connection Summary

Many protocols are used for remote access and authentication and related purposes. These methods have their own assigned ports, and these assignments are summarized in Table 11.3.

Table 11.3  Common TCP/UDP Remote Access Networking Port Assignments



Images For More Information

SANS Consensus Policy Resource Community – Password Policy

Chapter 11 Review

Images Chapter Summary

After reading this chapter and completing the exercises, you should understand the following about privilege management, authentication, and remote access protocols.

Identify the differences among user, group, and role management

Images   Privilege management is the process of restricting a user’s ability to interact with the computer system.

Images   Privilege management can be based on an individual user basis, on membership in a specific group or groups, or on a function/role.

Images   Key concepts in privilege management are the ability to restrict and control access to information and information systems.

Images   One of the methods used to simplify privilege management is single sign-on, which requires a user to authenticate successfully once. The validated credentials and associated rights and privileges are then automatically carried forward when the user accesses other systems or applications.

Implement account policies

Images   Password policies are sets of rules that help users select, employ, and store strong passwords. Tokens combine “something you have” with “something you know,” such as a password or PIN, and can be hardware or software based.

Images   Passwords should have a limited span and should expire on a scheduled basis.

Describe methods of account management

Images   Administrators have many different tools at their disposal to control access to computer resources, including password- and account-expiration methods.

Images   User authentication methods can incorporate several factors, including tokens.

Images   Users can be limited as to the hours during which they can access resources.

Images   Resources such as files, folders, and printers can be controlled through permissions or access control lists.

Images   Permissions can be assigned based on a user’s identity or their membership in one or more groups.

Describe methods of access management

Images   Mandatory access control is based on the sensitivity of the information or process itself.

Images   Discretionary access control uses file permissions and ACLs to restrict access based on a user’s identity or group membership.

Images   Role-based access control restricts access based on the user’s assigned role or roles.

Images   Rule-based access control restricts access based on a defined set of rules established by the administrator.

Images   Attribute-based access control evaluates specific rules and policies against attributes associated with a subject or object.

Explain authentication methods and the security implications in their use

Images   Password-based authentication is still the most widely used because of cost and ubiquity.

Images   Ticket-based systems, such as Kerberos, form the basis for most modern authentication and credentialing systems.

Examine the use of biometrics technology for authentication

Images   Numerous biometric factors can be utilized for authentication, including fingerprints, retina patterns, iris patterns, voice, face, vein patterns, and more.

Images   Biometric efficacy rates, including false acceptance and false rejection rates, are critical in making biometrics work.

Images   Multifactor authentication is commonly employed with biometrics.

Discuss the methods and protocols for remote access to networks

Images   Remote access protocols provide a mechanism to remotely connect clients to networks.

Images   A wide range of remote access protocols has evolved to support various security and authentication mechanisms.

Images   Remote access is granted via remote access servers, such as RRAS and RADIUS.

Identify authentication, authorization, and accounting (AAA) protocols

Images   Authentication is a cornerstone element of security, connecting access to a previously approved user ID.

Images   Authorization is the process of determining whether an authenticated user has permission to perform an action or access a resource.

Images   Accounting protocols manage connection time and cost records.

Images   RADIUS and TACACS+ are examples of implementations of AAA.

Implement virtual private networks (VPNs) and their security aspects

Images   VPNs use protocols to establish a private network over a public network, shielding user communications from outside observation.

Images   VPNs can be invoked via many different protocol mechanisms and involve either a hardware or software client on each end of the communication channel.

Images Key Terms

AAA (358)

access control (371)

access control list (ACL) (374)

access control matrix (374)

accounting (370)

account expiration (369)

account maintenance (367)

account recertification (368)

administrator (360)

attestation (388)

attribute-based access control (ABAC) (377)

authentication (358)

authentication server (AS) (383)

authorization (370)

basic authentication (382)

biometric factors (391)

certificate (385)

Challenge-Handshake Authentication Protocol (CHAP) (409)

client-to-server ticket (383)

Common Access Card (CAC) (385)

conditional access (377)

credential management (366)

crossover error rate (CER) (395)

digest authentication (382)

digital certificate (385)

directory (387)

discretionary access control (DAC) (376)

domain controller (363)

domain password policy (363)

eXtensible Access Control Markup Language (XACML) (377)

Extensible Authentication Protocol (EAP) (408)

false acceptance rate (FAR) (394)

false negative (394)

false positive (393)

false rejection rate (FRR) (395)

federated identity management (399)

FTPS (413)

gait analysis (393)

generic accounts (360)

group (361)

group policy object (GPO) (366)

guest accounts (361)

hardware security module (HSM) (389)

HMAC-based One-Time Password (HOTP) (386)

identification (378)

identity provider (IdP) (378)

IEEE 802.1X (399)

Kerberos (383)

key distribution center (KDC) (383)

knowledge-based authentication (386)

Layer 2 Tunneling Protocol (L2TP) (406)

Lightweight Directory Access Protocol (LDAP) (400)

mandatory access control (MAC) (375)

multifactor authentication (396)

mutual authentication (384)

OAuth (Open Authorization) (411)

offboarding (361)

onboarding (361)

OpenID (412)

OpenID Connect (412)

Password Authentication Protocol (PAP) (410)

password vaults (390)

permissions (359)

personal identity verification (PIV) (385)

Point-to-Point Protocol (PPP) (408)

Point-to-Point Tunneling Protocol (PPTP) (407)

privilege management (370)

privileged accounts (361)

remote access server (RAS) (399)

Remote Authentication Dial-In User Service (RADIUS) (401)

Remote Desktop Protocol (RDP) (411)

rights (359)

role (362)

role-based access control (RBAC) (376)

root (360)

rule-based access control (377)

Security Assertion Markup Language (SAML) (411)

secure token (412)

service accounts (361)

SFTP (413)

SSH keys (380)

single sign-on (SSO) (365)

shared accounts (360)

Shibboleth (412)

smart card (386)

software tokens (386)

someone you know (399)

something you are (398)

something you can do (398)

something you exhibit (399)

something you have (397)

something you know (397)

somewhere you are (398)

static codes (389)

Terminal Access Controller Access Control System Plus (TACACS+) (403)

ticket-granting server (TGS) (383)

ticket-granting ticket (TGT) (383)

time-based One-Time Password (TOTP) (386)

Time-of-day restrictions (368)

token (385)

transitive trust (388)

trusted platform module (TPM) (389)

tunneling (406)

usage auditing and review (368)

user (359)

username (359)

virtual private network (VPN) (413)

Images Key Terms Quiz

Use terms from the Key Terms list to complete the sentences that follow. Don’t use the same term more than once. Not all terms will be used.

1.   _______________ is an authentication model designed around the concept of using tickets for accessing objects.

2.   _______________ is designed around the type of tasks people perform.

3.   _______________ refers to the condition where trust is extended to another domain that is already trusted.

4.   _______________ describes a system where every resource has access rules set for it all of the time.

5.   _______________ is an authentication process where the user can enter their user ID (or username) and password and then be able to move from application to application or resource to resource without having to supply further authentication information.

6.   _______________ is an algorithm that can be used to authenticate a user in a system by using an authentication server.

7.   If your fingerprints fail to let you into a system when they should, this is called a(n) _______________.

8.   When both the client and the server authenticate each other, this is called _______________.

9.   _______________ is an access control method that would allow you to control access to records only when someone is scheduled to work.

10.   Authentication that is sent in plaintext with only Base64 encoding is an example of ______________.

Images Multiple-Choice Quiz

1.   Authentication can be based on what?

A.   Something a user possesses

B.   Something a user knows

C.   Something measured on a user, such as a fingerprint

D.   All of the above

2.   You’ve spent the last week tweaking a fingerprint-scanning solution for your organization. Despite your best efforts, roughly 1 in 50 attempts will fail even if the user is using the correct finger and their fingerprint is in the system. Your supervisor says 1 in 50 is “good enough” and tells you to move onto the next project. Your supervisor just defined which of the following for your fingerprint-scanning system?

A.   False rejection rate

B.   False acceptance rate

C.   Critical threshold

D.   Failure acceptance criteria

3.   A ticket-granting server is an important element in which of the following authentication models?

A.   L2TP

B.   PPTP

C.   PPP

D.   Kerberos

4.   What protocol is used for RADIUS?

A.   UDP

B.   NetBIOS

C.   TCP

D.   Proprietary

5.   Under which access control system is each piece of information and every system resource (files, devices, networks, and so on) labeled with its sensitivity level?

A.   Discretionary access control

B.   Resource access control

C.   Mandatory access control

D.   Media access control

6.   Which of the following algorithms uses a secret key with a current timestamp to generate a one-time password?

A.   Hash-based Message Authentication Code

B.   Date-hashed Message Authorization Password

C.   Time-based One-Time Password

D.   Single sign-on

7.   You have to implement an OpenID solution. What is the typical relationship with existing systems?

A.   OpenID is used for authentication, OAuth is used for authorization.

B.   OpenID is used for authorization, OAuth is used for authentication.

C.   OpenID is not compatible with OAuth.

D.   OpenID only works with Kerberos.

8.   Elements of Kerberos include which of the following?

A.   Tickets, ticket-granting server, ticket-authorizing agent

B.   Ticket-granting ticket, authentication server, ticket

C.   Services server, Kerberos realm, ticket authenticators

D.   Client-to-server ticket, authentication server ticket, ticket

9.   To establish a PPTP connection across a firewall, you must do which of the following?

A.   Do nothing. PPTP does not need to cross firewalls by design.

B.   Do nothing. PPTP traffic is invisible and tunnels past firewalls.

C.   Open a UDP port of choice and assign it to PPTP.

D.   Open TCP port 1723.

10.   To establish an L2TP connection across a firewall, you must do which of the following?

A.   Do nothing. L2TP does not cross firewalls by design.

B.   Do nothing. L2TP tunnels past firewalls.

C.   Open a UDP port of choice and assign it to L2TP.

D.   Open UDP port 1701.

Images Essay Quiz

1.   A co-worker with a strong Windows background is having difficulty understanding Linux file permissions. Describe Linux file permissions to him. Compare Linux file permissions to Windows file permissions.

2.   How are authentication and authorization alike, and how are they different? What is the relationship, if any, between the two?

Lab Projects

Lab Project 11.1

Using two workstations and some routers, set up a simple VPN. Using Wireshark (a free and open source network protocol analyzer, available at www.wireshark.org), observe traffic inside and outside the tunnel to demonstrate protection. Examine the traffic to see what information is available between the machines when tunneling is employed.

Lab Project 11.2

Using freeSSHd and freeFTPd (both freeware programs) and Wireshark, demonstrate the security features of SSH compared to Telnet and FTP. Examine the traffic to see what information is available between the different protocols.
