Chapter 24

Implementing Authentication and Authorization Solutions

This chapter covers the following topics related to Objective 3.8 (Given a scenario, implement authentication and authorization solutions) of the CompTIA Security+ SY0-601 certification exam:

  • Authentication management

    • Password keys

    • Password vaults

    • TPM

    • HSM

    • Knowledge-based authentication

  • Authentication/authorization

    • EAP

    • Challenge-Handshake Authentication Protocol (CHAP)

    • Password Authentication Protocol (PAP)

    • 802.1X

    • RADIUS

    • Single sign-on (SSO)

    • Security Assertions Markup Language (SAML)

    • Terminal Access Controller Access Control System Plus (TACACS+)

    • OAuth

    • OpenID

    • Kerberos

  • Access control schemes

    • Attribute-based access control (ABAC)

    • Role-based access control

    • Rule-based access control

    • MAC

    • Discretionary access control (DAC)

    • Conditional access

    • Privileged access management

    • Filesystem permissions

Authentication and authorization solutions are among the fundamental building blocks of an IT organization, and they are a key control when it comes to security. This chapter starts with an overview of authentication management, including a discussion of password keys, password vaults, TPM, HSM, and knowledge-based authentication concepts. From there we dig deeper into the specifics of authentication and authorization with topics such as Extensible Authentication Protocol (EAP) and Challenge-Handshake Authentication Protocol (CHAP), along with Password Authentication Protocol (PAP), 802.1X, RADIUS, single sign-on (SSO), Security Assertion Markup Language (SAML), and Terminal Access Controller Access Control System Plus (TACACS+). Other topics that are touched on include OAuth, OpenID, and Kerberos. We finish the chapter by learning about access control schemes.

“Do I Know This Already?” Quiz

The “Do I Know This Already?” quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the “Chapter Review Activities” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 24-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes and Review Questions.”

Table 24-1 “Do I Know This Already?” Section-to-Question Mapping

Foundation Topics Section

Questions

Authentication Management

1–3

Authentication/Authorization

4–6

Access Control Schemes

7–10

Caution

The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which of the following technologies authenticates a user when a hardware USB device is inserted into a computer?

  1. Password key

  2. Password vault

  3. Trusted Platform Module

  4. None of these answers are correct.

2. Which of the following can be used to store a set of credentials for later use?

  1. Password key

  2. Password vault

  3. Password authentication module

  4. All of these answers are correct.

3. Which of the following is utilized to verify a user based on something only that person knows?

  1. Knowledge-based authentication

  2. Password vault

  3. Password key

  4. Hardware security module

4. Which of the following is a system that is utilized to allow a user to log in once but gain access to multiple systems without being asked to log in again?

  1. Password key

  2. Password vault

  3. Two-factor authentication

  4. Single sign-on

5. Which of the following is a system where a user’s identity and attributes are shared across multiple identity management systems?

  1. Trusted Platform Module

  2. Knowledge-based authentication

  3. Password authentication

  4. Federated identity management

6. Which of the following is an open standard for exchanging authentication and authorization data between identity providers and service providers?

  1. KBA

  2. SAML

  3. SSO

  4. None of these answers are correct.

7. Which of the following is an access control scheme in which the access control policy is generally determined by the owner?

  1. MAC

  2. DAC

  3. RBAC

  4. SAML

8. Which of the following is an access control scheme in which the access control policy is determined by a computer system, not by a user or owner?

  1. 802.1X

  2. MAC

  3. RBAC

  4. SAML

9. Which of the following is a security principle where users are given only the privileges needed to do their job?

  1. Single sign-on

  2. Least privilege

  3. Knowledge-based authentication

  4. All of these answers are correct.

10. Which of the following is an access control model that works with sets of permissions assigned to roles, instead of individual, label-based permissions?

  1. MAC

  2. RBAC

  3. DAC

  4. ABAC

Foundation Topics

Authentication Management

Authentication is one of the primary elements to address when securing your computing environment. It can also be one of the most challenging aspects for end users. For this reason, a number of technologies have emerged to address these challenges. In the following sections, we discuss some of them, including password keys, password vaults, Trusted Platform Modules, and hardware security modules.

Password Keys

Password keys are a technology typically deployed by corporations when implementing two-factor authentication. The primary use case is remote access to the organization’s environment. However, many organizations also use them internally. These keys are especially important to use for access to highly sensitive data or applications that serve that data—for instance, financial applications or anything involving intellectual property, such as source code.

Many types of password keys are on the market these days. They come in various form factors and are utilized in different ways. For instance, some are used by inserting into a computer USB port. Some are simply one-time password tokens used as a second factor when authenticating, whereas others are a combination of different functions such as those just mentioned.

Password Vaults

One of the challenges with traditional user/password authentication is remembering the different sets of credentials. Most people have more than 25 different sets of credentials that they need to remember. If these users are following best practices, they should have a different username and password for each account. Every new email or website that a user creates an account with adds another set of credentials to the list. A password vault helps solve the issue of credential storage by providing a central system that stores the various sets of credentials in a secure management system. The vault then has its own set of credentials and possibly another authentication factor that is used to access the vault when the credentials are needed for accessing a system. A password vault is also often referred to as a password manager. It is simply a piece of software that is utilized to store and manage credentials. Typically, the credentials are stored in an encrypted database. Having an encrypted database protects the credentials from being compromised if the database is obtained by a threat actor through the compromise of a system holding the database.
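The idea of an encrypted credential database can be sketched in a few lines. The following toy vault is an illustration only: the key is derived from the master password with PBKDF2, and each entry is sealed with a homemade SHA-256 keystream plus an HMAC integrity tag. The `ToyVault` class and its cipher are invented for this sketch; a real password manager would use a vetted authenticated cipher such as AES-GCM.

```python
import hashlib
import hmac
import json
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy counter-mode keystream built from SHA-256 (illustration only).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

class ToyVault:
    """Toy password vault: entries live encrypted in memory."""

    def __init__(self, master_password):
        self._salt = os.urandom(16)
        # Derive the vault key from the master password (PBKDF2, 200k rounds).
        self._key = hashlib.pbkdf2_hmac(
            "sha256", master_password.encode(), self._salt, 200_000)
        self._entries = {}

    def store(self, site, username, password):
        plaintext = json.dumps({"user": username, "pass": password}).encode()
        nonce = os.urandom(16)
        stream = _keystream(self._key, nonce, len(plaintext))
        cipher = bytes(a ^ b for a, b in zip(plaintext, stream))
        # HMAC tag detects tampering with the stored entry.
        tag = hmac.new(self._key, nonce + cipher, "sha256").digest()
        self._entries[site] = (nonce, cipher, tag)

    def retrieve(self, site):
        nonce, cipher, tag = self._entries[site]
        expected = hmac.new(self._key, nonce + cipher, "sha256").digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("vault entry tampered with")
        stream = _keystream(self._key, nonce, len(cipher))
        return json.loads(bytes(a ^ b for a, b in zip(cipher, stream)))
```

Because the key exists only after the master password is supplied, obtaining the stored blobs alone does not reveal any credentials.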

Trusted Platform Module

The Trusted Platform Module (TPM) is a chip residing on the motherboard that stores cryptographic keys. NIST defines a TPM as a tamper-resistant integrated circuit built into some computer motherboards that can perform cryptographic operations (including key generation) and protect small amounts of sensitive information, such as passwords and cryptographic keys. The security mechanisms that make the TPM tamper resistant are designed to prevent a threat actor or malware from tampering with the functions of the Trusted Platform Module. A TPM is a form of hardware security module.

Hardware Security Modules

Hardware security modules (HSMs) are physical devices that act as secure cryptoprocessors. This means that they are used for encryption during secure login/authentication processes, during digital signings of data, and for payment security systems. The beauty of a hardware-based encryption device such as an HSM (or a TPM) is that it is faster than software encryption.

HSMs can be found in adapter card form, as devices that plug into a computer via USB, and as network-attached devices. They are generally tamper-proof, providing a high level of physical security. They can also be used in high-availability clustered environments because they work independently of other computer systems and are used solely to calculate the data required for encryption keys. However, many of these devices require some kind of management software to be installed on the computer they are connected to. Some manufacturers offer this software as part of the purchase, but others do not, forcing purchasers to build the management software themselves. Due to this lack of management software and the cost involved in general, HSMs have seen slower deployment in some organizations. This concept also holds true for hardware-based drive encryption solutions.

Often, HSMs are involved in the generation, storage, and archiving of encrypted key pairs such as the ones used in Transport Layer Security (TLS) sessions online, public key cryptography, and public key infrastructures (PKIs).

Knowledge-Based Authentication

NIST defines knowledge-based authentication (KBA) as authentication of an individual based on knowledge of information associated with his or her claimed identity in public databases. Knowledge of such information is considered to be private rather than secret, because it may be used in contexts other than authentication to a verifier, thereby reducing the overall assurance associated with the authentication process.

A popular use case for this type of authentication is to recover a username or reset a password. Typically, a set of predetermined questions is asked of the user. These questions must have already been provided by the end user at the time of account setup or provided as an authenticated user at a later time. The idea is that the information that was provided is something only the user would know. That is why it is important to utilize a set of questions that cannot be easily guessed or are not public knowledge.
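The enrollment and verification steps described above can be sketched briefly. This follows the common practice of storing a salted hash of a normalized answer rather than the answer itself; the function names and parameters here are hypothetical, not from any particular product.

```python
import hashlib
import hmac
import os

def normalize(answer):
    # Normalize so "Main St." and "  main st" compare equal.
    return " ".join(answer.lower().split()).rstrip(".")

def enroll(answer):
    """At account setup: store only a salted hash of the answer."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac(
        "sha256", normalize(answer).encode(), salt, 100_000)
    return salt, digest

def verify(candidate, salt, digest):
    """At recovery time: hash the candidate the same way and compare."""
    check = hashlib.pbkdf2_hmac(
        "sha256", normalize(candidate).encode(), salt, 100_000)
    return hmac.compare_digest(check, digest)
```

Hashing the answers means that a database breach does not directly expose what the user knows, though easily researched answers remain weak regardless of storage.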

Authentication/Authorization

Now that we’ve covered some physical authentication methods, let’s move into authentication models, components, and technologies used to grant or deny access to operating systems and computer networks.

The first thing you, as a security administrator, should do is plan what type of authentication model to use. Then you should consider what type of authentication technology and how many factors of authentication will be implemented. Also, you should consider how the authentication system will be monitored and logged. Getting more into the specifics, will only local authentication be necessary? Or will remote authentication also be needed? And which type of technology should be utilized? Will it be Windows-based or a third-party solution? Let’s look at these concepts now and examine some different examples of the possible solutions you can implement.

Many small businesses and even some midsized businesses often have only one type of authentication to gain access to a computer network: the username and password. In today’s security-conscious world, this single type of authentication is not enough for the average organization. In some companies, users share passwords or password complexity is not enforced. In addition, password-cracking programs are becoming more and more powerful and work much more quickly than they did just a few years ago, making username and password authentication on its own a limited defense. We’re not saying that this approach shouldn’t be used, but perhaps it should be enforced, enhanced, and integrated with other technologies.

Because of the limitations of a single type of authentication such as username and password, organizations sometimes use multiple factors of authentication. In multifactor authentication (MFA), two or more types of authentication are used for user access control. One example of multifactor authentication would be when a user needs to sign in with a username and password and swipe some type of smartcard or use some other type of physical token at the same time. Adding factors of authentication makes it more difficult for a malicious person to gain access to a computer network or an individual computer system. Sometimes an organization uses three factors of authentication—perhaps a smartcard, biometrics, and a username/password. The disadvantages of a multifactor authentication scheme are that users need to remember more information and remember to bring more identification with them, and more IT costs and more administration are involved. Another disadvantage of some MFA environments is that they are static: rules and whitelists/blacklists are usually configured manually.

A more dynamic way of authenticating individuals is to utilize context-aware authentication (also known as context-sensitive access). It is an adaptive way of authenticating users based on their usage of resources and the confidence that the system has in these users. This approach can automatically increase the level of identification required and/or increase or decrease the level of access to resources based on constant analysis of the users.

In some organizations an individual user might need access to several computer systems. By default, each of these systems has a separate login. It can be difficult for users to remember the various logins. Using single sign-on (SSO), a user can log in once but gain access to multiple systems without being asked to log in again. This system is complemented by single sign-off, which is basically the reverse; logging off signs off a person from multiple systems. Single sign-on is meant to reduce password fatigue or password chaos, which occurs when a person becomes confused, and possibly even disoriented, by having to log in with several different usernames and passwords. It is also meant to reduce IT help desk calls and password resets. By implementing a more centralized authentication scheme such as single sign-on, many companies have reduced IT costs significantly. If implemented properly, single sign-on can also reduce phishing. In large networks and enterprise scenarios, it might not be possible for users to have a single sign-on, and in these cases, the security principle might be referred to as reduced sign-on. Single sign-on can be Kerberos-based, integrated with Windows authentication, or token- or smartcard-based.

SSO is a derivative of federated identity management (also called FIM or FIdM). In this system, a user’s identity and attributes are shared across multiple identity management systems. These various systems can be owned by one organization; for example, Microsoft offers the Forefront Identity Manager software, which can control user accounts across local and cloud environments. Also, Google, Yahoo!, and Amazon utilize this federation approach. But some providers join forces so that information can be shared across multiple services and environments between the companies, yet still allow the user a single login. Shibboleth is an example of an SSO system that allows people to sign in with a single digital identity and connect to various systems run by federations of different organizations. SSO systems—and federated systems in general—often incorporate the concept of transitive trust where two networks (or more) have a relationship such that users logging in to one network get access to data on the other.

While SSO leaves users with fewer credentials to remember, it also acts as a single point of failure. In addition, sometimes a company might not be watching out for the users’ best interests—either unwittingly or otherwise—and might fail to realize that multiple systems have been configured as a transitive trust. Let’s say that a user has an account with Company A and has a separate account with Company B. Imagine that Companies A and B have a two-way trust. Now, let’s say there is a third organization, Company C, that has a two-way trust with Company B. At this point, the user’s account information from Companies A and B could be shared with Company C even though the user never signed up with that company. This kind of activity is frowned upon, but the user might not even know when it happens: two companies might merge, or a company might be bought out or otherwise absorbed by another. So, when it comes to authentication, it is sometimes wise to avoid trust relationships and strongly consider whether single sign-on will ultimately be more beneficial or costly to your organization.

Another concern is web-based SSO, which can be problematic due to disparate proprietary technologies. To help alleviate this problem, the XML-based Security Assertion Markup Language (SAML) and the OpenID Connect protocol were developed. OpenID Connect is an interoperable authentication protocol based on the OAuth 2.0 family of specifications. It uses straightforward REST/JSON message flows with a design goal of “making simple things simple and complicated things possible.” Both OpenID Connect and SAML specify separate roles for the user, the service provider, and the identity provider. Shibboleth is also based on SAML.

Security Assertion Markup Language

Security Assertion Markup Language (SAML) is an open standard for exchanging authentication and authorization data between an identity provider and a service provider. SAML is used in many SSO implementations.

The OASIS Security Assertion Markup Language standard is currently the most used standard for implementing federated identity processes. SAML is an XML-based framework that describes the use and exchange of SAML assertions in a secure way between business entities. The standard describes the syntax and rules to request, create, use, and exchange these assertions.

The SAML process involves a minimum of two entities: the SAML assertion party (or SAML authority), which is the entity that produces the assertion, and the SAML relying party, which is the entity that uses the assertion to make access decisions.

An assertion is the communication of security information about a subject (also called a principal) in the form of a statement. The basic building blocks of SAML are the SAML assertion, SAML protocol, SAML binding, and SAML profile. SAML assertions can contain the following information:

  • Authentication statement: Includes the result of the authentication and additional information such as the authentication method, timestamps, and so on

  • Attribute statement: Includes attributes about the principal

  • Authorization statement: Includes information on what the principal is allowed to do

An example of an assertion would be: User A, who has a given email address, authenticated via username and password, is a platinum member, and is authorized for a 10 percent discount.
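An assertion like the one in this example can be sketched as XML. The following Python snippet assembles a heavily simplified SAML 2.0 assertion; real assertions also carry IDs, issuer and timestamp attributes, audience restrictions, and an XML digital signature, and the principal’s email address here is hypothetical.

```python
import xml.etree.ElementTree as ET

NS = "urn:oasis:names:tc:SAML:2.0:assertion"

# Build the assertion skeleton (simplified; namespace handling is minimal).
assertion = ET.Element("Assertion", xmlns=NS)

# Subject: who the assertion is about (hypothetical principal).
subject = ET.SubElement(assertion, "Subject")
ET.SubElement(subject, "NameID").text = "userA@example.com"

# Authentication statement: how and when the principal authenticated.
authn = ET.SubElement(assertion, "AuthnStatement",
                      AuthnInstant="2021-06-01T12:00:00Z")
ctx = ET.SubElement(authn, "AuthnContext")
ET.SubElement(ctx, "AuthnContextClassRef").text = (
    "urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport")

# Attribute statement: attributes about the principal.
attrs = ET.SubElement(assertion, "AttributeStatement")
tier = ET.SubElement(attrs, "Attribute", Name="membershipTier")
ET.SubElement(tier, "AttributeValue").text = "platinum"

print(ET.tostring(assertion, encoding="unicode"))
```

The relying party would read the authentication statement to confirm how the user logged in and the attribute statement to decide, for example, whether the platinum discount applies.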

SAML protocols define the protocols used to transfer assertion messages. SAML bindings include information on how lower-level protocols (such as HTTP or SOAP) transport SAML protocol messages. SAML profiles are specific combinations of assertions, protocols, and bindings for specific use cases. Examples of profiles include Web Browser Single Sign-On, Identity Provider Discovery, and Enhanced Client and Proxy (ECP).

Figure 24-1 shows the SAML building blocks.


FIGURE 24-1 SAML Building Blocks

SAML also defines the concepts of identity provider (IdP) and service provider (SP). SAML can work in two different ways:

  • In IdP-initiated mode, a user is already authenticated on the IdP and requests a service from the SP (for example, by clicking a link on the IdP website). The IdP builds an assertion that is sent to the SP within the user request to the SP itself.

    For example, a user who is authenticated on an airline website decides to book a rental car by clicking a link on the airline website. The airline IAM system, which assumes the role of an IdP, sends assertion information about the user to the rental car IAM, which in turn authenticates the user and provides access rights based on the information in the assertion.

  • In SP-initiated mode, a user initiates an access request to some resource on the SP. Because the federated identity is managed by a different IdP, the SP redirects the user to log in at the IdP. After the login, the IdP sends a SAML assertion back to the SP.

Figure 24-2 shows an example of IdP-initiated mode (on the left) and SP-initiated mode (on the right).


FIGURE 24-2 SAML IdP-Initiated Mode and SP-Initiated Mode

OAuth

OAuth is a framework that provides authorization to a third-party entity (for example, a smartphone application) to access resources hosted on a resource server. In a classic client/server authorization framework, the third-party entity would receive the credentials from the resource owner (user) and then access the resource on the resource server.

The main issue OAuth resolves is providing the third-party entity authorization to access restricted resources without passing the client credentials to this third party. Instead of getting the user credentials, the entity requesting access receives an authorization token that includes authorization information, such as scope and duration, and that is used to request access to a resource hosted by the resource server. This OAuth scheme is usually referred to as delegation of access.

OAuth 2.0, defined in RFC 6749, includes four main roles:

  • Resource owner: The party that owns the resource (for example, a user) and grants authorization to access some of its resources

  • Client: The party that requires access to a specific resource

  • Resource server: The party that hosts or stores the resource

  • Authorization server: The party that provides an authorization token

In the basic scenario, the authorization is done with six messages:

  1. The client sends an authorization request to the resource owner or indirectly to the authorization server.

  2. The resource owner (or the authorization server on behalf of the resource owner) sends an authorization grant to the client.

  3. The client sends the authorization grant to the authorization server as proof that authorization was granted.

  4. The authorization server authenticates the client and sends an access token.

  5. The client sends the access token to the resource server as proof of authentication and authorization to access the resources.

  6. The resource server validates the access token and grants access.

For example, a user (the resource owner) may grant access to personal photos hosted at some online storage provider (the resource server) to an application on a mobile phone (the client) without directly providing credentials to the application but instead by directly authenticating with the authorization server (in this case, also the online storage provider) and authorizing the access.
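The six-message exchange can be simulated in a few lines. This is a toy, in-memory model (the class and method names are invented for illustration), not a real OAuth 2.0 implementation; it only shows how the grant, the access token, and the scope check relate to the four roles above.

```python
import secrets

class AuthorizationServer:
    """Toy authorization server: issues grants and exchanges them for tokens."""

    def __init__(self):
        self._grants = {}   # authorization code -> (client_id, scope)
        self._tokens = {}   # access token -> scope

    def issue_grant(self, client_id, scope):
        # Messages 1-2: resource owner approves; an authorization grant
        # (here, an opaque code) is handed to the client.
        code = secrets.token_urlsafe(16)
        self._grants[code] = (client_id, scope)
        return code

    def exchange(self, client_id, code):
        # Messages 3-4: client presents the grant and receives an access token.
        granted_client, scope = self._grants.pop(code)
        if granted_client != client_id:
            raise PermissionError("grant was issued to a different client")
        token = secrets.token_urlsafe(16)
        self._tokens[token] = scope
        return token

    def introspect(self, token):
        # Used by the resource server in message 6 to validate the token.
        return self._tokens.get(token)

class ResourceServer:
    """Toy resource server hosting the resource owner's photos."""

    def __init__(self, auth_server, photos):
        self._auth = auth_server
        self._photos = photos

    def get_photos(self, token):
        # Messages 5-6: validate the token and its scope, then serve the resource.
        if self._auth.introspect(token) != "photos:read":
            raise PermissionError("invalid token or insufficient scope")
        return self._photos
```

Note that the client never sees the resource owner’s credentials: it holds only a short-lived code and then a token scoped to `photos:read`.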

Figure 24-3 shows an example of an OAuth exchange.


FIGURE 24-3 OAuth Exchange

OpenID and OpenID Connect

OpenID has been a popular SSO protocol for federated systems for quite some time. In the 2.0 version, the authentication and authorization process is similar to the one in SAML. OpenID also defines an IdP, called the OpenID provider (OP), and a relying party (RP), which is the entity that holds the resource the user wants to access. In OpenID, a user is free to select an OP of his or her choice, and the initial identity is provided in the form of a URL.

Version 2.0 has been superseded by OpenID Connect. This version drops the authorization functionality present in version 2.0 and is designed to work with OAuth 2.0 for deployments. In practice, OpenID Connect operates as an authentication profile for OAuth. In OpenID Connect, when a user tries to access resources on an RP, the RP sends an authentication request to the OP for that user. In practice, this is an OAuth 2.0 authorization request to access the user’s identity at the OP. The authentication request can be of three types:

  • Authorization code flow (the most commonly used)

  • Implicit flow

  • Hybrid flow

In an authorization code flow scenario, after the user authenticates with the OP, the OP asks the user for consent and issues an authorization code that the user then sends to the RP. The RP uses this code to request an ID token and access token from the OP, which is the way the OP provides assertion to the RP.
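In the authorization code flow, the ID token the RP receives is a JWT: base64url-encoded segments (header, payload, signature). The sketch below builds and decodes a hypothetical payload segment; real tokens are signed, and the RP must validate the signature and the `iss`, `aud`, and `exp` claims before trusting any of them.

```python
import base64
import json

def b64url(data):
    # JWT segments are base64url-encoded with the trailing padding stripped.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def decode_segment(segment):
    # Restore the stripped padding before decoding.
    padded = segment + "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Hypothetical ID token claims: issuer (the OP), subject (the user),
# audience (the RP's client ID), and expiration time.
claims = {
    "iss": "https://op.example.com",
    "sub": "user-123",
    "aud": "rp-client-id",
    "exp": 1735689600,
}
payload = b64url(json.dumps(claims).encode())
assert decode_segment(payload)["sub"] == "user-123"
```

The RP treats a validated `sub` claim as the OP’s assertion of the user’s identity, which is what makes OpenID Connect an authentication layer rather than pure authorization.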

Whatever the type of authentication scheme used, it needs to be monitored periodically to make sure that it’s working properly. The authentication system should block people who cannot furnish proper identification and should allow access to people who do have proper identification.

Note

Keep in mind that OpenID is not the same as OpenID Connect. OpenID is an open standard and decentralized authentication protocol controlled by the OpenID Foundation. OAuth is an open standard for access delegation. OpenID Connect (OIDC) combines the features of OpenID and OAuth and provides both authentication and authorization.

802.1X and EAP

802.1X is an IEEE standard that defines port-based network access control (PNAC). Not to be confused with 802.11x WLAN standards, 802.1X is a data link layer authentication technology used to connect hosts to a LAN or WLAN. 802.1X enables you to apply a security control that ties physical ports to end-device MAC addresses and prevents additional devices from being connected to the network. It is a good way of implementing port security, much better than simply setting up MAC filtering.

It all starts with the central connecting device such as a switch or wireless access point. These devices must first enable 802.1X connections; they must have the 802.1X protocol (and supporting protocols) installed. Vendors that offer 802.1X-compliant devices (for example, switches and wireless access points) include Cisco, Symbol Technologies, and Intel. Next, the client computer needs to have an operating system, or additional software, that supports 802.1X. The client computer is known as the supplicant. All recent Windows versions support 802.1X. macOS offers support as well, and Linux computers can use Open1X to enable client access to networks that require 802.1X authentication.

802.1X encapsulates the Extensible Authentication Protocol (EAP) over wired or wireless connections. EAP is not an authentication mechanism in itself but instead defines message formats. 802.1X is the authentication mechanism and defines how EAP is encapsulated within messages. Figure 24-4 shows an example of an 802.1X-enabled network adapter. In the figure, you can see that the box for enabling 802.1X has been checked and that the type of network authentication method for 802.1X is EAP—specifically, Protected EAP (PEAP).


FIGURE 24-4 An 802.1X-Enabled Network Adapter in Windows

Note

You can enable 802.1X in Windows by accessing the Local Area Connection Properties page.

Following are the three components of an 802.1X connection:

  • Supplicant: A software client running on a workstation. This is also known as an authentication agent.

  • Authenticator: A wireless access point or switch.

  • Authentication server: An authentication database, most likely a RADIUS server.

The typical 802.1X authentication procedure has four steps:

Step 1. Initialization: When a switch or wireless access point detects a new supplicant, the port is enabled but permits only 802.1X (EAPOL) traffic; other types of traffic are dropped.

Step 2. Initiation: The authenticator (switch or wireless access point) periodically sends EAP-Request frames to a special Layer 2 multicast address on the network segment. The supplicant listens for these frames and sends an EAP response that might include a user ID or other identifying information. The authenticator encapsulates this response and forwards it to the authentication server.

Step 3. Negotiation: The authentication server then sends a reply to the authenticator. The authentication server specifies which EAP method to use. (These are listed next.) Then the authenticator transmits that request to the supplicant.

Step 4. Authentication: If the supplicant and authentication server agree on an EAP method, the two transmit until there is either success or failure to authenticate the supplicant computer.
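The four steps above can be condensed into a toy model of an 802.1X-controlled port. The `Authenticator` class and its `radius_check` callback are invented for illustration; the point is simply that the port forwards only EAPOL traffic until the EAP exchange succeeds.

```python
from enum import Enum

class PortState(Enum):
    UNAUTHORIZED = "unauthorized"   # only EAPOL (802.1X) frames pass
    AUTHORIZED = "authorized"       # normal traffic is forwarded

class Authenticator:
    """Toy model of an 802.1X-controlled switch/access point port."""

    def __init__(self, radius_check):
        # radius_check stands in for the RADIUS authentication server.
        self._radius_check = radius_check
        self.state = PortState.UNAUTHORIZED

    def forward(self, frame):
        # Step 1 (initialization): while unauthorized, drop everything
        # except EAPOL frames.
        if self.state is PortState.UNAUTHORIZED and frame["type"] != "EAPOL":
            return False
        return True

    def eap_exchange(self, identity, credential):
        # Steps 2-4 condensed: relay the supplicant's response to the
        # authentication server; open the port only on success.
        if self._radius_check(identity, credential):
            self.state = PortState.AUTHORIZED
            return "EAP-Success"
        return "EAP-Failure"
```

A supplicant that never completes the EAP exchange can send all the frames it likes; none of its ordinary traffic leaves the port.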

Figure 24-5 illustrates the components used in these steps.


FIGURE 24-5 Components of a Typical 802.1X Authentication Procedure

Following are several types of EAP authentication:

  • EAP-MD5: This challenge-based authentication provides basic EAP support. It enables only one-way authentication and not mutual authentication.

  • EAP-TLS: This version uses Transport Layer Security, which is a certificate-based system that does enable mutual authentication. It does not work well in enterprise scenarios because certificates must be configured or managed on the client side and server side.

  • EAP-TTLS: This version is Tunneled Transport Layer Security and is basically the same as TLS except that it is done through an encrypted channel, and it requires only server-side certificates.

  • EAP-FAST: This version uses a protected access credential instead of a certificate to achieve mutual authentication. FAST stands for Flexible Authentication via Secure Tunneling.

  • PEAP: This is the Protected Extensible Authentication Protocol (also known as Protected EAP). It uses MS-CHAPv2, which supports authentication via Microsoft Active Directory databases. It competes with EAP-TTLS and includes legacy password-based protocols. It creates a TLS tunnel by acquiring a public key infrastructure (PKI) certificate from a server known as a certificate authority (CA). The TLS tunnel protects user authentication much like EAP-TTLS.

Cisco also created a proprietary protocol called Lightweight EAP (LEAP), and it is just that—proprietary. To use LEAP, you must have a Cisco device such as an Aironet WAP or Catalyst switch, or another vendor’s device that complies with the Cisco Compatible Extensions program. Then you must download a third-party client on Windows computers to connect to the Cisco device. Most WLAN vendors offer an 802.1X LEAP download for their wireless network adapters.

Although 802.1X is often used for port-based network access control on the LAN, especially VLANs, it can also be used with virtual private networks (VPNs) as a way of remote authentication. Central connecting devices such as switches and wireless access points remain the same, but on the client side 802.1X would need to be configured on a VPN adapter instead of a network adapter.

Many vendors, such as Intel and Cisco, refer to 802.1X with a lowercase x; however, the IEEE displays this on its website with an uppercase X, as does the IETF.

LDAP

The Lightweight Directory Access Protocol (LDAP) is an application layer protocol used for accessing and modifying directory services data. It is part of the TCP/IP suite. A lightweight descendant of the X.500 Directory Access Protocol, it has developed over time into a protocol commonly used by services such as Microsoft Active Directory on Windows server domain controllers. LDAP acts as the protocol that controls the directory service. This service organizes the users, computers, and other objects within the Active Directory. Figure 24-6 shows an example of the Active Directory. Take note of the highlighted list of users (known as objects of the Active Directory) from the Users folder. Also, observe other folders such as Computers that house other objects (such as Windows client computers).

FIGURE 24-6 Active Directory Showing User Objects

NOTE

Windows servers running Active Directory use parameters and variables when querying the names of objects—for example, CN=dprowse, where CN stands for common name and dprowse is the username. Taking it to the next level, consider DC=ServerName. DC stands for domain component, and ServerName is the variable and the name of the server. Microsoft is famous for using the name fabrikam as its test name, but, of course, you would use whatever your server name is. In the case of fabrikam, an entire LDAP query might look something like this:

LDAP://DC=Fabrikam,DC=COM

A Microsoft server that has Active Directory and LDAP running has inbound port 389 open by default. To protect Active Directory from being tampered with, you can use Secure LDAP, also known as LDAPS. It was covered previously in Chapter 17, “Implementing Secure Protocols.”
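To make the query syntax in the note above concrete, here is a minimal Python sketch that splits a distinguished name into its attribute=value components. The `parse_dn` helper is hypothetical; real directory work would use a dedicated LDAP library rather than string handling.

```python
# Hypothetical helper: break an LDAP distinguished name (DN) into its
# attribute=value components. Repeated attribute types such as DC are
# collected into lists, preserving their order.

def parse_dn(dn: str) -> dict:
    parts: dict[str, list[str]] = {}
    for component in dn.split(","):
        attr, _, value = component.strip().partition("=")
        parts.setdefault(attr.upper(), []).append(value)
    return parts

dn = "CN=dprowse,DC=Fabrikam,DC=COM"
print(parse_dn(dn))  # {'CN': ['dprowse'], 'DC': ['Fabrikam', 'COM']}
```

The common name identifies the user object, while the ordered list of domain components reconstructs the domain name (Fabrikam.COM).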

Kerberos and Mutual Authentication

Kerberos is an authentication protocol designed at MIT; it enables computers to prove their identity to each other in a secure manner. It is used most often in a client/server environment; the client and server both verify each other’s identity. This is known as two-way authentication or mutual authentication. Often, Kerberos protects a network server from illegitimate login attempts, just as the mythological three-headed guard dog of the same name (also known as Cerberus) guards Hades.

A common implementation of Kerberos occurs when a user logs on to a Microsoft domain. (Of course, we are not saying that Microsoft domains are analogous to Hades!) The domain controller in the Microsoft domain is known as the key distribution center (KDC). This server works with tickets that prove the identity of users. The KDC is composed of two logical parts: the authentication server and the ticket-granting server. Basically, a client computer attempts to authenticate itself to the authentication server portion of the KDC. When it does so successfully, the client receives a ticket. This is actually a ticket to get other tickets—known as a ticket-granting ticket (TGT). The client uses this preliminary ticket to demonstrate its identity to a ticket-granting server in the hopes of ultimately getting access to a service—for example, making a connection to the Active Directory of a domain controller.
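The two-step exchange just described (the authentication server issues a TGT, and the ticket-granting server exchanges it for a service ticket) can be sketched as a toy simulation. Real Kerberos uses symmetric encryption with timestamps and session keys; here, HMAC seals merely stand in for encryption to show the flow, and all key material and names are hypothetical.

```python
import hashlib
import hmac
import json

TGS_KEY = b"tgs-long-term-secret"    # known only to the KDC
SERVICE_KEY = b"fileserver-secret"   # known to the KDC and the file server

def seal(key: bytes, claims: dict) -> dict:
    """Stand-in for encrypting a ticket: the claims plus an HMAC over them."""
    blob = json.dumps(claims, sort_keys=True)
    return {"claims": blob,
            "mac": hmac.new(key, blob.encode(), hashlib.sha256).hexdigest()}

def verify(key: bytes, ticket: dict) -> dict:
    """Reject any ticket whose seal does not match, then return its claims."""
    expected = hmac.new(key, ticket["claims"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, ticket["mac"]):
        raise ValueError("ticket tampered with")
    return json.loads(ticket["claims"])

# 1. The authentication server issues a ticket-granting ticket (TGT).
tgt = seal(TGS_KEY, {"user": "dprowse"})

# 2. The ticket-granting server validates the TGT and issues a service ticket.
user = verify(TGS_KEY, tgt)["user"]
service_ticket = seal(SERVICE_KEY, {"user": user, "service": "fileserver"})

# 3. The target service validates the ticket with its own key.
print(verify(SERVICE_KEY, service_ticket)["user"])  # dprowse
```

Note how the client never needs the long-term keys: it only relays sealed tickets, and each server checks a ticket with the one key it shares with the KDC.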

The domain controller running Kerberos has inbound port 88 open to service logon requests from clients. Figure 24-7 shows a netstat -an command run on a Windows server that has been promoted to a domain controller. It points out port 88 (used by Kerberos) and port 389 (used by LDAP) on the same domain controller.

FIGURE 24-7 Results of the netstat -an Command on a Windows Server

Kerberos is designed to protect against replay attacks and eavesdropping. One of the drawbacks of Kerberos is that it relies on a centralized server such as a domain controller. This can be a single point of failure. To alleviate this problem, you can install secondary and tertiary domain controllers that keep a copy of the Active Directory and are available with no downtime in case the first domain controller fails.

Another possible issue is one of synchronicity. Time between the clients and domain controller must be synchronized for Kerberos to work properly. If, for some reason, a client attempting to connect to a domain controller becomes desynchronized, it cannot complete the Kerberos authentication, and as a result, the user cannot log on to the domain. This situation can be fixed by logging on to the affected client locally and synchronizing the client’s time to the domain controller by using the net time command. For example, to synchronize to the domain controller in Figure 24-7, the command would be

net time \\10.254.254.252 /set

Afterward, the client should be able to connect to the domain.

Kerberos—like any authentication system—is vulnerable to attack. Older Windows operating systems that run, or connect to, Kerberos are vulnerable to privilege escalation attacks; and newer Windows operating systems are vulnerable to spoofing. Of course, Microsoft quickly releases updates for these kinds of vulnerabilities (as they are found), but if you do not allow Windows Update to automatically update, it’s important to review the CVEs for the Microsoft systems often.

Note

In a Red Hat Enterprise environment that uses an SSO such as Kerberos, pluggable authentication modules (PAMs) can be instrumental in providing both the system administrator and security administrator with flexibility and control over authentication, as well as fully documented libraries for the developer.

Remote Authentication Technologies

Even more important than authenticating local users is authenticating remote users. The chances of illegitimate connections increase when you allow remote users to connect to your network. Examples of remote authentication technologies include RAS, VPN, RADIUS, TACACS+, and CHAP.

Remote Access Service

Remote Access Service (RAS) began as a service that enabled dial-up connections from remote clients. Nowadays, more and more remote connections are made with high-speed Internet technologies such as cable Internet and fiber-optic connections. But you can’t discount the dial-up connection. It is used in certain areas where other Internet connections are not available and is still used as a fail-safe in many network operation centers and server rooms to take control of networking equipment.

One of the best things you can do to secure an RAS server is to deny access to individuals who don’t require it. Even if a user or user group is set to “not configured,” it is wise to explicitly deny them access. Allow access only to those users who need it, and monitor the logs daily to see who connected. If there are any unknowns, investigate immediately. Also, be sure to update the permissions list promptly whenever a remote user is terminated or otherwise leaves the organization.

The next most important security precaution is to set up RAS authentication. One secure way is to use the Challenge-Handshake Authentication Protocol (CHAP), an authentication scheme used by the Point-to-Point Protocol (PPP), which in turn is the standard for dial-up connections. It uses a challenge-response mechanism with one-way encryption. Therefore, it is not capable of mutual authentication in the way that Kerberos is. Microsoft developed its own version of CHAP, known as MS-CHAP; an example is shown in Figure 24-8. The figure shows the Advanced Security Settings dialog box of a dial-up connection. In this particular configuration, notice that encryption is required and that the only protocol allowed is MS-CHAPv2. It’s important to use version 2 of MS-CHAP because it provides for mutual authentication between the client and authenticator. Of course, the RAS server has to be configured to accept MS-CHAP connections as well. You also have the option to enable EAP for the dial-up connection. Other RAS authentication protocols include SPAP, which is less secure, and Password Authentication Protocol (PAP), which sends usernames and passwords in clear text. PAP is obviously insecure and should not be used in networks today.

FIGURE 24-8 MS-CHAP Enabled on a Dial-Up Connection

Note

CHAP utilizes the MD5 hashing algorithm, which is known to be insecure; MS-CHAP instead relies on the similarly dated MD4-based NT hash.

Note

You should use CHAP, MS-CHAP, or EAP for dial-up connections. Also, you should verify that it is configured properly on the RAS server and dial-up client to ensure a proper handshake.

The CHAP authentication scheme consists of several steps. It authenticates a user or network host to entities such as Internet access providers. CHAP periodically verifies the identity of the client by using a three-way handshake. The verification is based on a shared secret. After the link has been established, the authenticator sends a challenge message to the peer. The peer responds with a value calculated from the challenge and the shared secret using a one-way hash; the authenticator compares this against its own calculation, and finally the client is either authorized or denied access.
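Those handshake steps map directly to RFC 1994, where the response is an MD5 hash over the identifier byte, the shared secret, and the challenge, so the secret itself never crosses the link. A minimal sketch; the secret value is illustrative.

```python
import hashlib
import os

SHARED_SECRET = b"s3cret"  # provisioned on both the peer and the authenticator

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP response per RFC 1994: MD5(identifier + secret + challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# 1. The authenticator sends a random challenge (with an identifier byte).
identifier, challenge = 1, os.urandom(16)

# 2. The peer hashes the challenge with the shared secret and replies.
response = chap_response(identifier, SHARED_SECRET, challenge)

# 3. The authenticator computes the same hash and compares the results.
authenticated = response == chap_response(identifier, SHARED_SECRET, challenge)
print(authenticated)  # True
```

Because a fresh random challenge is used for each handshake, a captured response cannot simply be replayed later, which is the attack PAP's cleartext credentials invite.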

The actual data transmitted in these RAS connections is encrypted as well. By default, Microsoft RAS connections are encrypted by the RSA RC4 algorithm.

You might wonder why dial-up connections are still relevant. Well, they are important for two reasons. First, the supporting protocols, authentication types, and encryption types are used in other technologies; this is the basis for those systems. Second, as previously mentioned, some organizations still use dial-up connections—for remote users or for administrative purposes. And hey, don’t downplay the dial-up connection. Old-school dial-up guys used to tweak the connection to the point where it was as fast as some DSL versions and as reliable. So, there are going to be die-hards out there as well. Plus, some areas of the United States and the rest of the world have no other option than dial-up.

However, RAS now has morphed into something that goes beyond just dial-up. VPN connections that use dial-up, cable Internet, fiber, and so on are all considered remote access.

RADIUS versus TACACS+

The Remote Authentication Dial-In User Service (RADIUS) can be used in combination with a small office/home office (SOHO) router to provide strong authentication. RADIUS provides centralized administration of dial-up, VPN, and wireless authentication and can be used with EAP and 802.1X. To set this up on a Windows server, you must load the Internet Authentication Service (IAS), known as the Network Policy Server (NPS) role in newer versions of Windows Server; it is usually set up on a separate physical server. RADIUS is a client/server protocol that runs on the application layer of the OSI model.

RADIUS works within the AAA concept: it is used to authenticate users, authorize them to services, and account for the usage of those services. RADIUS checks whether the correct authentication scheme such as CHAP or EAP is used when clients attempt to connect. It commonly uses port 1812 for authentication messages and port 1813 for accounting messages (both of which use UDP as the transport mechanism). In some proprietary cases, it uses ports 1645 and 1646 for these messages, respectively. You should memorize these four ports for the exam!

Another concept you will encounter is that of RADIUS federation. Here, an organization has multiple RADIUS servers—possibly on different networks—that need to communicate with each other in a safe way. This communication is accomplished by creating trust relationships and developing a core to manage those relationships as well as the routing of authentication requests. It is often implemented in conjunction with 802.1X. This federated network authentication could also span between multiple organizations.

Note

Another protocol similar to RADIUS—though not as commonly used—is the Diameter protocol. Once again, it’s an AAA protocol. It evolved from RADIUS and supports TCP or SCTP as the transport, but not UDP. It uses port 3868.

The Terminal Access Controller Access Control System (TACACS) is another remote authentication protocol that was used more often in UNIX networks. In UNIX, the TACACS service is known as the TACACS daemon. The newer and more commonly used implementation of TACACS is called Terminal Access Controller Access Control System Plus (TACACS+). It is not backward compatible with TACACS, however. TACACS+ and its predecessor, XTACACS, were developed by Cisco. TACACS+ uses inbound port 49 like its forerunners; however, it uses TCP as the transport mechanism instead of UDP. Let’s clarify: the older TACACS and XTACACS technologies are not commonly seen anymore. The two common protocols for remote authentication used today are RADIUS and TACACS+.

There are a few differences between RADIUS and TACACS+. Whereas RADIUS uses UDP as its transport layer protocol, TACACS+ uses TCP as its transport layer protocol, which is usually seen as a more reliable transport protocol (though each has its own unique set of advantages). Also, RADIUS combines the authentication and authorization functions when dealing with users; however, TACACS+ separates these two functions into two separate operations that introduce another layer of security. It also separates the accounting portion of AAA into its own operation.

RADIUS encrypts only the password in the access-request packet, from the client to the server. The remainder of the packet is unencrypted. Other information such as the username can be easily captured, without need of decryption, by a third party. However, TACACS+ encrypts the entire body of the access-request packet. So, effectively TACACS+ encrypts entire client/server dialogues, whereas RADIUS does not. Finally, TACACS+ provides for more types of authentication requests than RADIUS.
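The password hiding that RADIUS does apply is defined in RFC 2865: the null-padded password is XORed, 16 bytes at a time, with MD5(shared secret + request authenticator), chaining each subsequent block to the previous ciphertext block. A sketch, with an illustrative shared secret:

```python
import hashlib
import os

def hide_password(password: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """RFC 2865 User-Password hiding (XOR with an MD5-derived keystream)."""
    # Pad the password to a multiple of 16 bytes, as the RFC requires.
    padded = password + b"\x00" * (-len(password) % 16)
    out, prev = b"", authenticator
    for i in range(0, len(padded), 16):
        keystream = hashlib.md5(secret + prev).digest()
        block = bytes(p ^ k for p, k in zip(padded[i:i + 16], keystream))
        out += block
        prev = block  # chain the next block to the previous ciphertext
    return out

secret, authenticator = b"radius-shared-secret", os.urandom(16)
hidden = hide_password(b"MyPassw0rd", secret, authenticator)
print(len(hidden))  # 16
```

Note that this construction protects only the password attribute; the rest of the packet, including the username, travels exactly as written, which is why TACACS+'s full-body encryption is considered stronger.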

Table 24-2 summarizes the local and remote authentication technologies covered thus far.

Table 24-2 Authentication Technologies

Authentication Type

Description

802.1X

An IEEE standard that defines port-based network access control (PNAC). 802.1X is a data link layer authentication technology used to connect devices to a LAN or WLAN. It encapsulates EAP over LANs (EAPOL).

Kerberos

An authentication protocol designed at MIT that enables computers to prove their identity to each other in a secure manner. It is used most often in a client/server environment; the client and server both verify each other’s identity.

CHAP

An authentication scheme used by the Point-to-Point Protocol (PPP) that is the standard for dial-up connections. It utilizes a challenge-response mechanism with one-way encryption. Derivatives include MS-CHAP and MS-CHAPv2.

RADIUS

A protocol used to provide centralized administration of dial-up, VPN, and wireless authentication. It can be used with EAP and 802.1X. It uses ports 1812 and 1813, or 1645 and 1646, over a UDP transport.

TACACS+

Remote authentication developed by Cisco, similar to RADIUS but separates authentication and authorization into two separate processes. It uses port 49 over a TCP transport.

Access Control Schemes

Access control models are methodologies in which admission to physical areas and, more important, computer systems is managed and organized. Access control, also known as an access policy, is extremely important when it comes to users accessing secure or confidential data. Some organizations also practice concepts such as separation of duties, job rotation, and least privilege. By combining these best practices along with an access control model, you can develop a robust plan concerning how users access confidential data and secure areas of a building.

There are several models for access control, each with its own special characteristics that you should know for the exam.

Discretionary Access Control

Discretionary access control (DAC) is an access control policy generally determined by the owner. Objects such as files and printers can be created and accessed by the owner. Also, the owner decides which users are allowed to have access to the objects, and what level of access they may have. The levels of access, or permissions, are stored in access control lists (ACLs).

Originally, DAC was described in The Orange Book as the Discretionary Security Policy and was meant to enforce a consistent set of rules governing limited access to identified individuals. The Orange Book’s proper name is the Trusted Computer System Evaluation Criteria (TCSEC), and was developed by the U.S. Department of Defense (DoD); however, The Orange Book is old (it’s referred to in the movie Hackers in the 1990s!), and the standard was superseded in 2005 by an international standard called the Common Criteria for Information Technology Security Evaluation (or simply Common Criteria). But the DAC methodology lives on in many of today’s personal computers and client/server networks.

Note

An entire set of security standards known as the “Rainbow Series” was published by the DoD in the 1980s and 1990s. Although The Orange Book is the centerpiece of the series (maybe not in the color spectrum, but as far as security content), there are other ones you might come into contact with, such as The Red Book, which is the Trusted Network Interpretation standard. Some of the standards have been superseded, but they contain the basis for many of today’s security procedures.

An example of DAC would be a typical Windows computer with two users. User A can log on to the computer, create a folder, stock it with data, and then finally configure permissions so that only she can access the folder. User B can log on to the computer but cannot access User A’s folder by default, unless User A says so and configures it as so! However, User B can create his own folder and lock down permissions in the same way. Let’s say that there is a third user, User C, who wants both User A and User B to have limited access to a folder that he created. That is also possible by setting specific permission levels, as shown in Figure 24-9. The first Properties window shows that User C (the owner) has Full Control permissions. This permission is normal because User C created the folder. But in the second Properties window, you see that User A has limited permissions, which were set by User C.

FIGURE 24-9 Discretionary Access in Windows

Note

Take notice of standard naming conventions used in your organization. In Figure 24-9 the naming convention is user@domainname—for example, [email protected].

Note

The owner of a resource controls the permissions to that resource! This is the core of the DAC model.

Windows networks/domains work in the same fashion. Access to objects is based on which user created them and what permissions that user assigned to those objects. However, in Windows networks, you can group users together and assign permissions by way of roles as well. For more details, see the “Role-Based Access Control” section.

In a way, DAC, when implemented in client/server networks, is sort of a decentralized administration model. Even though you, as administrator, still have control over most, or all, resources (depending on company policy), the owners retain a certain amount of power over their own resources. But many companies take away the ability for users to configure permissions. They may create folders and save data to them, but the permissions list is often generated on a parent folder by someone else and is inherited by the subfolder.

There are two important points to remember about the DAC model. First, every object in the system has an owner, and the owner has control over its access policy. And second, access rights, or permissions, can be assigned by the owner to users to specifically control object access.
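Those two points can be condensed into a few lines of Python: the object records its owner, and only the owner may edit the ACL. The class and user names here are hypothetical.

```python
class DacObject:
    """A DAC-style object: owner-controlled access control list."""

    def __init__(self, owner: str):
        self.owner = owner
        self.acl: dict[str, set[str]] = {owner: {"full_control"}}

    def grant(self, actor: str, user: str, perms: set[str]) -> None:
        # Only the owner decides who gets access and at what level.
        if actor != self.owner:
            raise PermissionError("only the owner may change permissions")
        self.acl.setdefault(user, set()).update(perms)

    def allowed(self, user: str, perm: str) -> bool:
        perms = self.acl.get(user, set())
        return perm in perms or "full_control" in perms

folder = DacObject(owner="UserC")
folder.grant("UserC", "UserA", {"read"})
print(folder.allowed("UserA", "read"), folder.allowed("UserB", "read"))  # True False
```

This mirrors the Figure 24-9 scenario: User C holds Full Control as the creator, User A has the limited permissions User C granted, and User B has nothing until an owner says otherwise.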

Mandatory Access Control

Mandatory access control (MAC) is an access control policy determined by a computer system, not by a user or owner, as it is in DAC. Permissions are predefined in the MAC model. Historically, it has been used in highly classified government and military multilevel systems, but you will find lesser implementations of it in today’s more common operating systems as well. The MAC model defines sensitivity labels that are assigned to subjects (users) and objects (files, folders, hardware devices, network connections, and so on). A subject’s label dictates its security level, or level of trust. An object’s label dictates what level of clearance, or trust, is needed to access it (assigning these labels is also known as data labeling). The access controls in a MAC system are based on the security classification of the data and “need-to-know” information—where a user can access only what the system considers absolutely necessary. Also, in the MAC model, data import and export are controlled. MAC is the strictest of the access control models.

An example of MAC can be seen in FreeBSD version 5.0 and higher. In this operating system, access control modules can be installed that allow for security policies that label subjects and objects. Policies are enforced by administrators or by the OS; this is what makes it mandatory and sets it apart from DAC. Another example is Security-Enhanced Linux (SELinux), a set of kernel modifications to Linux that supports DoD-style mandatory access controls such as the requirement for a trusted computing base (TCB). Though often interpreted differently, a TCB can be described as the set of all hardware and software components critical to a system’s security and all associated protection mechanisms. The mechanisms must meet a certain standard, and SELinux helps accomplish this by modifying the kernel of the Linux OS in a secure manner. Like DAC, MAC was also originally defined in The Orange Book, but as the Mandatory Security Policy—a policy that enforces access control based on a user’s clearance and by the confidentiality levels of the data.
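The label comparison at the core of MAC can be sketched with a fixed lattice of sensitivity levels: the system grants a read only when the subject's clearance dominates the object's label (no "read up"). The level names are illustrative.

```python
# Ordered sensitivity labels; higher numbers dominate lower ones.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_label: str, object_label: str) -> bool:
    """System-enforced check: clearance must dominate the object's label."""
    return LEVELS[subject_label] >= LEVELS[object_label]

print(can_read("secret", "confidential"))  # True  (reading down is allowed)
print(can_read("confidential", "secret"))  # False (no read up)
```

Note that neither the subject nor the object's owner appears in the decision at all; the labels and the comparison rule belong to the system, which is precisely what makes the model mandatory.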

Note

Rule-based access control that uses labels is part of mandatory access control and should not be confused with role-based access control.

Note

Other related access control models include Bell-LaPadula, Biba, and Clark-Wilson. Bell-LaPadula is a state machine model used for enforcing access control in government applications. It is a less common multilevel security derivative of mandatory access control. This model focuses on data confidentiality and controlled access to classified information. The Biba integrity model describes rules for the protection of data integrity. Clark-Wilson is another integrity model that provides a foundation for specifying and analyzing an integrity policy for a computing system.

Role-Based Access Control

Role-based access control (RBAC) is an access model that, like MAC, is controlled by the system, and, unlike DAC, not by the owner of a resource. However, RBAC is different from MAC in the way that permissions are configured. RBAC works with sets of permissions instead of individual permissions that are label-based. A set of permissions constitutes a role. When users are assigned to roles, they can then gain access to resources. A role might be the ability to complete a specific operation in an organization as opposed to accessing a single data file. For example, a person in a bank who wants to check a prospective client’s credit score would be attempting to perform a transaction that is allowed only if that person holds the proper role. So roles are created for various job functions in an organization. Roles might have overlapping privileges and responsibilities. Also, some general operations can be completed by all the employees of an organization. Because there is overlap, an administrator can develop role hierarchies; these define roles that can contain other roles or have exclusive attributes.

Think about it. Did you ever notice that an administrator or root user is extremely powerful? Perhaps too powerful? And standard users are often not powerful enough to respond to their own needs or fix their own problems? Some operating systems counter this problem by creating mid-level accounts such as auditors (Microsoft) or operators (Solaris), but for large organizations, this approach is not flexible enough. Currently, more levels of roles and special groups of users are implemented in newer operating systems. RBAC is used in database access as well and is becoming more common in the health-care industry and government.
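Returning to the bank example, an RBAC check in code attaches permissions to roles and never directly to users; a user gains a permission only through role membership. All role, user, and permission names here are hypothetical.

```python
# Roles are named sets of permissions for job functions in the organization.
ROLES = {
    "teller": {"open_account", "process_deposit"},
    "loan_officer": {"check_credit_score", "approve_loan"},
}

# Users are assigned to roles, possibly more than one (overlapping duties).
USER_ROLES = {"alice": {"teller"}, "bob": {"teller", "loan_officer"}}

def has_permission(user: str, permission: str) -> bool:
    return any(permission in ROLES[role] for role in USER_ROLES.get(user, set()))

print(has_permission("bob", "check_credit_score"))    # True
print(has_permission("alice", "check_credit_score"))  # False
```

Administration then reduces to editing role memberships: when alice becomes a loan officer, she is added to that role and inherits its whole permission set at once.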

Attribute-Based Access Control

Attribute-based access control (ABAC) is an access model that is dynamic and context-aware. Access rights are granted to users through the use of multiple policies that can combine various user, group, and resource attributes together. It makes use of IF-THEN statements based on the user and requested resource. For example, if David is a system administrator, then allow full control access to the \\dataserver\adminfolder share. If implemented properly, this solution can be more flexible than the static models described previously.
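The IF-THEN style of policy described above can be written as a predicate over subject and resource attributes; the attribute names and values in this sketch are hypothetical.

```python
def abac_allow(subject: dict, resource: dict) -> bool:
    # IF the subject is a system administrator and the resource is an
    # administrative share, THEN allow full control access.
    return (subject.get("role") == "system_administrator"
            and resource.get("type") == "admin_share")

david = {"name": "David", "role": "system_administrator"}
share = {"type": "admin_share"}
print(abac_allow(david, share))  # True
```

Because the decision is computed from attributes at request time, changing David's role attribute immediately changes the outcome, with no per-object permission edits required.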

Rule-Based Access Control

Rule-based access control, also known as label-based access control, defines whether access should be granted or denied to objects by comparing the object label and subject label. Rule-based access control is another model that can be considered a special case of attribute-based access control (ABAC). In reality, this is not a well-defined model and includes any access control model that implements some sort of rule that governs the access to a resource. Usually, rule-based access controls are used in the context of access list implementation to access network resources—for example, where the rule is to provide access only to certain IP addresses or only at certain hours of the day. In this case, the IP addresses are attributes of the subject and object, and the time of day is part of the environment attribute evaluation.
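The IP-address-and-time-of-day example above can be expressed as a single rule check. The network range and business hours here are illustrative.

```python
import ipaddress

ALLOWED_NET = ipaddress.ip_network("10.0.0.0/24")  # permitted source addresses
ALLOWED_HOURS = range(8, 18)                       # 08:00 through 17:59

def rule_allows(src_ip: str, hour: int) -> bool:
    """Grant access only from the allowed network during allowed hours."""
    return ipaddress.ip_address(src_ip) in ALLOWED_NET and hour in ALLOWED_HOURS

print(rule_allows("10.0.0.42", 9))    # True
print(rule_allows("192.168.1.5", 9))  # False
```

In ABAC terms, the source address is an attribute of the subject and the hour is an environment attribute; the rule simply evaluates both.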

Conditional Access

Conditional access is an access control model in which access is granted based on specific criteria being met. It is primarily used in software as a service (SaaS) environments such as Microsoft 365 and other services provided by Microsoft Azure. The main function of conditional access is to limit access based on whether specific conditions are met. Examples of those conditions include IP address, type of browser, type of operating system, and geographic location.

Privileged Access Management

Privileged access management (PAM) is a system used to centrally manage access to privileged accounts. It is primarily based on the concept of least privilege. Typically, a privileged access management system is used to securely store the elevated credentials used by an organization and broker the use of those credentials based on criteria set by a privileged access management administrator. Many different privileged access management solutions are available today with varying features and functions.

Summary of Access Control Models

Table 24-3 summarizes the access control models just discussed: DAC, MAC, RBAC, ABAC, conditional access, PAM, and rule-based access control.

Table 24-3 Access Control Models

Access Control Model

Key Points

DAC

Every object in the system has an owner.

Permissions are determined by the owner.

MAC

Permissions are determined by the system.

It can be rule-based or lattice-based.

Labels are used to identify security levels of subjects and objects.

RBAC

It is based on roles or sets of permissions involved in an operation.

It is controlled by the system.

ABAC

It is context-aware and provides dynamic authentication.

It uses IF-THEN statements to allow access.

Conditional Access

Access is granted based on specific conditions being met.

Office 365 is a use case.

Privileged access management (PAM)

It provides centralized control and management of privileged credentials.

Rule-based access control

Access is granted or denied to objects by comparing the object label and subject label.

Usually, it is used in the context of access list implementation to access network resources.

Note

Another type of access control method is known as anonymous access control—for example, access to an FTP server. This method uses attributes before access is granted to an object. Authentication is usually not required.

Note

In general, access control can be centralized or decentralized. Centralized access control means that one entity is responsible for administering access to resources. Decentralized access control means that more than one entity is responsible, and those entities are closer to the actual resources than the entity would be in a centralized access control scenario.

Access Control Wise Practices

After you decide on an access control model that fits your needs, you should consider employing some other concepts. Some of these are used in operating systems automatically to some extent:

  • Implicit deny: This concept denies all traffic to a resource unless the users generating that traffic are specifically granted access to the resource. Even if permissions haven’t been configured for the user in question, that person is still denied access. This is a default setting for access control lists on a Cisco router. It is also used by default on Microsoft computers to a certain extent. Figure 24-10 shows an example. In the folder’s permissions, you can see that the Users group has the Read & Execute, List Folder Contents, and Read permissions set to Allow. But other permissions such as Modify are not configured at all—not set to Allow or Deny. Therefore, the users in the Users group cannot modify data inside the folder because that permission is implicitly denied. Likewise, they can’t take full control of the folder.

    FIGURE 24-10 Implicit Deny on a Windows Folder

    Note

    The Implicit deny setting denies users access to a resource unless they are specifically allowed access.

  • Least privilege: With this setting, users are given only the number of privileges needed to do their job and not one iota more. A basic example would be the Guest account in a Windows computer. This account (when enabled) can surf the web and use other basic applications but cannot make any modifications to the computer system. However, least privilege as a principle goes much further. One of the ideas behind this principle is to run the user session with only the processes necessary, thus reducing the amount of CPU power needed. This hopefully leads to better system stability and system security. Have you ever noticed that many crashed systems are due to users trying to do more than they really should be allowed? Or more than the computer can handle? The concept of least privilege tends to be absolute, whereas an absolute solution isn’t quite possible in the real world. It is difficult to gauge exactly what the “least” number of privileges and processes would be. Instead, as security administrator, you should practice the implementation of minimal privilege, reducing what a user has access to as much as possible. Programmers also practice this principle when developing applications and operating systems, making sure that apps have only the least privilege necessary to accomplish what they need to do. This concept is also known as “the principle of least privilege.”
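The implicit deny concept in the first bullet can be sketched as a tiny ACL evaluator: an explicit Deny wins, an explicit Allow grants, and anything not configured at all falls through to deny. The entries here are hypothetical.

```python
def evaluate(acl: dict[tuple[str, str], str], user: str, perm: str) -> str:
    """Return 'allow' only for an explicit Allow entry; deny everything else."""
    entry = acl.get((user, perm))
    if entry == "deny":
        return "deny"   # an explicit deny always wins
    if entry == "allow":
        return "allow"
    return "deny"       # not configured at all: implicitly denied

# Mirrors Figure 24-10: some permissions allowed, others never configured.
acl = {("Users", "read"): "allow", ("Users", "execute"): "allow"}
print(evaluate(acl, "Users", "read"))    # allow
print(evaluate(acl, "Users", "modify"))  # deny (never configured)
```

The final fall-through line is the whole point: absence of a rule is treated exactly like an explicit denial, which is the same behavior seen at the end of a Cisco access control list.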

Chapter Review Activities

Use the features in this section to study and review the topics in this chapter.

Review Key Topics

Review the most important topics in the chapter, noted with the Key Topic icon in the outer margin of the page. Table 24-4 lists a reference of these key topics and the page number on which each is found.

Table 24-4 Key Topics for Chapter 24

Key Topic Element

Description

Page Number

Section

Authentication Management

655

Section

Authentication/Authorization

657

Section

802.1X and EAP

664

Figure 24-4

An 802.1X-enabled network adapter in Windows

665

Figure 24-5

Components of a typical 802.1X authentication procedure

666

Figure 24-6

Active Directory showing user objects

667

Figure 24-7

Results of the netstat -an command on a Windows server

669

Figure 24-8

MS-CHAP enabled on a dial-up connection

671

Table 24-2

Authentication Technologies

673

Paragraph

Access control schemes

674

Figure 24-9

Discretionary access in Windows

675

Table 24-3

Access Control Models

679

Define Key Terms

Define the following key terms from this chapter, and check your answers in the glossary:

password keys

password vault

Trusted Platform Module (TPM)

hardware security modules (HSMs)

knowledge-based authentication (KBA)

single sign-on (SSO)

Security Assertion Markup Language (SAML)

OAuth

OpenID

802.1X

Extensible Authentication Protocol (EAP)

Kerberos

Challenge-Handshake Authentication Protocol (CHAP)

Password Authentication Protocol (PAP)

Remote Authentication Dial-In User Service (RADIUS)

Terminal Access Controller Access-Control System Plus (TACACS+)

access control models

discretionary access control (DAC)

mandatory access control (MAC)

role-based access control (RBAC)

attribute-based access control (ABAC)

rule-based access control

conditional access

privileged access management (PAM)

Review Questions

Answer the following review questions. Check your answers with the answer key in Appendix A.

1. What is a security model where users are given only the number of privileges needed to do their job?

2. What concept denies all traffic to a resource unless the users who generate the traffic are specifically granted access to the resource?

3. What kind of file system permissions are broken down into read, write, and execute?

4. What is an access model based on roles or sets of permissions involved in an operation?

5. What is an access model where access is controlled by the owner?

6. What is a system used to centrally manage access to privileged accounts?

7. What is an access model where permissions are determined by the system?

8. What is an authentication protocol designed by MIT that enables computers to prove their identity to each other in a secure manner?

9. What is an access control model that is dynamic and context-aware?

10. What is a physical device that can act as a secure cryptoprocessor?

11. What is an authentication method based on knowledge of information associated with an individual?
