Chapter 4
Identity and Access Management

COMPTIA SECURITY+ EXAM OBJECTIVES COVERED IN THIS CHAPTER INCLUDE THE FOLLOWING:

  • 4.1 Compare and contrast identity and access management concepts.
    • Identification, authentication, authorization and accounting (AAA)
    • Multifactor authentication
      • Something you are
      • Something you have
      • Something you know
      • Somewhere you are
      • Something you do
    • Federation
    • Single sign-on
    • Transitive trust
  • 4.2 Given a scenario, install and configure identity and access services.
    • LDAP
    • Kerberos
    • TACACS+
    • CHAP
    • PAP
    • MSCHAP
    • RADIUS
    • SAML
    • OpenID Connect
    • OAuth
    • Shibboleth
    • Secure token
    • NTLM
  • 4.3 Given a scenario, implement identity and access management controls.
    • Access control models
      • MAC
      • DAC
      • ABAC
      • Role-based access control
      • Rule-based access control
    • Physical access control
      • Proximity cards
      • Smart cards
    • Biometric factors
      • Fingerprint scanner
      • Retinal scanner
      • Iris scanner
      • Voice recognition
      • Facial recognition
      • False acceptance rate
      • False rejection rate
      • Crossover error rate
    • Tokens
      • Hardware
      • Software
      • HOTP/TOTP
    • Certificate-based authentication
      • PIV/CAC/smart card
      • IEEE 802.1x
    • File system security
    • Database security
  • 4.4 Given a scenario, differentiate common account management practices.
    • Account types
      • User account
      • Shared and generic accounts/credentials
      • Guest accounts
      • Service accounts
      • Privileged accounts
    • General Concepts
      • Least privilege
      • Onboarding/offboarding
      • Permission auditing and review
      • Usage auditing and review
      • Time-of-day restrictions
      • Recertification
      • Standard naming convention
      • Account maintenance
      • Group-based access control
      • Location-based policies
    • Account policy enforcement
      • Credential management
      • Group policy
      • Password complexity
      • Expiration
      • Recovery
      • Disablement
      • Lockout
      • Password history
      • Password reuse
      • Password length

The Security+ exam will test your knowledge of identity and access management concepts as they relate to secure networked systems, both for the home office and in corporate environments. To pass the test and be effective in implementing security, you need to understand the basic concepts and terminology related to network security as detailed in this chapter. You will also need to be familiar with when and why to use various tools and technologies, given a scenario.

4.1 Compare and contrast identity and access management concepts.

Identity is the concept of uniquely naming and referencing each individual user, program, and system component in order to authenticate, authorize, and audit for the purposes of holding users accountable for their actions. This is also known as “identification followed by authentication.” Access management is the concept of defining and enforcing what can and cannot be done by each identified subject. This is also known as authorization.

Identification, authentication, authorization and accounting (AAA)

It’s important to understand the differences between identification, authentication, and authorization. Although these concepts are similar and are essential to all security mechanisms, they’re distinct and must not be confused.

Identification and authentication are commonly used as a two-step process, but they’re distinct activities. Identification is the assertion of an identity. This needs to occur only once per authentication or access process. Any one of the common authentication factors can be employed for identification. Once identification has been performed, the authentication process must take place. Authentication is the act of verifying or proving the claimed identity. The issue is both checking that such an identity exists in the known accounts of the secured environment and ensuring that the human claiming the identity is the correct, valid, and authorized human to use that specific identity.
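The two-step process just described can be sketched in a few lines of Python. This is a minimal illustration, not a production design: the account name, password, salt, and hashing scheme are all hypothetical stand-ins.

```python
import hashlib
import hmac

# Hypothetical account store: identity -> salted password hash.
# The username, password, and salt here are illustrative only.
def _hash(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

SALT = b"demo-salt"
ACCOUNTS = {"mwilson": _hash("P@ssw0rd!", SALT)}

def identify(username: str) -> bool:
    """Step 1 - identification: is the asserted identity a known account?"""
    return username in ACCOUNTS

def authenticate(username: str, password: str) -> bool:
    """Step 2 - authentication: prove the claimed identity with a factor."""
    if not identify(username):
        return False
    candidate = _hash(password, SALT)
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(candidate, ACCOUNTS[username])
```

Note that identification alone proves nothing; only the second step verifies that the claimant is the valid holder of the identity.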

A username is the most common form of identification. It’s any name used by a subject in order to be recognized as a valid user of a system. Some usernames are derived from a person’s actual name, some are assigned, and some are chosen by the subject. Using a consistent username across multiple systems can help establish a consistent reputation across those platforms. However, it’s extremely important to keep all authentication factors unique between locations, even when duplicating a username.

Authentication can take many forms, most commonly of one-, two-, or multifactor configurations. The more unique factors used in an authentication process, the more resilient and reliable the authentication itself becomes. If all the proffered authentication factors are valid and correct for the claimed identity, it’s then assumed that the accessing person is who they claim to be. Then the permission- and action-restriction mechanisms of authorization take over to control the activities of the user from that point forward.

Identity proofing—that is, authentication—typically takes the form of one or more of the following authentication factors:

  • Something you know (such as a password, code, PIN, combination, or secret phrase)
  • Something you have (such as a smartcard, token device, or key)
  • Something you are (such as a fingerprint, a retina scan, or voice recognition; often referred to as biometrics, discussed later in this chapter)
  • Somewhere you are (such as a physical or logical location); this can be seen as a subset of something you know.
  • Something you do (such as your typing rhythm, a secret handshake, or a private knock). This can be seen as a subset of something you know.

The authentication factor of something you know is also known as a Type 1 factor, something you have is also known as a Type 2 factor, and something you are is also known as a Type 3 factor. The factors of somewhere you are and something you do are not given Type labels.

When only one authentication factor is used, this is known as single-factor authentication (or, rarely, one-factor authentication).

Authorization is the mechanism that controls what a subject can and can’t do, access, use, or view. Authorization is commonly called access control or access restriction. Most systems operate from a default authorization stance of deny by default or implicit deny. Then all needed access is granted by exception to individual subjects or to groups of subjects.

Once a subject is authenticated, its access must be authorized. The process of authorization ensures that the requested activity or object access is possible, given the rights and privileges assigned to the authenticated identity (which we refer to as the subject from this point forward). Authorization indicates who is trusted to perform specific operations. In most cases, the system evaluates an access-control matrix that compares the subject, the object, and the intended activity. If the specific action is allowed, the subject is authorized; if it’s disallowed, the subject isn’t authorized.

Keep in mind that just because a subject has been identified and authenticated, that doesn’t automatically mean it has been authorized. It’s possible for a subject to log on to a network (in other words, be identified and authenticated) and yet be blocked from accessing a file or printing to a printer (by not being authorized to perform such activities). Most network users are authorized to perform only a limited number of activities on a specific collection of resources. Identification and authentication are “all-or-nothing” aspects of access control. Authorization occupies a wide range of variations between all and nothing for each individual subject or object in the environment. Examples would include a user who can read a file but not delete it, or print a document but not alter the print queue, or log on to a system but not be allowed to access any resources.
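An access-control matrix with deny by default can be sketched as follows. The subjects, objects, and permissions are illustrative examples matching the scenarios above, not any particular product's model.

```python
# A minimal access-control matrix keyed by (subject, object). Anything not
# explicitly listed is denied (implicit deny). All names are illustrative.
MATRIX = {
    ("alice", "report.txt"): {"read"},       # can read but not delete
    ("alice", "laser-printer"): {"print"},   # can print but not alter the queue
}

def authorized(subject: str, obj: str, action: str) -> bool:
    # Deny by default: access exists only where it was granted by exception.
    return action in MATRIX.get((subject, obj), set())
```

Even an authenticated subject such as `alice` is blocked from any action the matrix does not grant, which is the "wide range of variations between all and nothing" described above.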

Multifactor authentication

Multifactor authentication is the requirement that a user must provide two or more authentication factors in order to prove their identity. There are three generally recognized categories of authentication factors.

When two different authentication factors are used, the strategy is known as two-factor authentication (see Figure 4.1). If two or more authentication factors are used but some of them are of the same type, it is known as strong authentication. Using different factors (whether two or three) is always a more secure solution than any number of factors of the same authentication type, because with two or more different factors, two or more different types of attacks must take place to capture the authentication factor. With strong authentication, even if 10 passwords are required, only a single type of password-stealing attack needs to be waged to break through the authentication security.
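The distinction between single-factor, strong, and multifactor authentication comes down to counting distinct factor types, which can be sketched as follows (the type labels are shorthand for the categories above):

```python
# Classify an authentication attempt by its factor types. Labels such as
# "know", "have", and "are" are shorthand for the factor categories above.
def classify(factors: list[str]) -> str:
    """factors is a list of type labels, e.g. ["know", "have"]."""
    distinct = len(set(factors))
    if len(factors) == 1:
        return "single-factor"
    if distinct == len(factors) and distinct >= 2:
        return "multifactor"   # every factor is of a different type
    return "strong"            # multiple factors, but some types repeat
```

Ten passwords still classify as "strong" rather than "multifactor," which is why they can all fall to a single password-stealing attack.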


FIGURE 4.1 Two-factor authentication

Authentication factors are the concepts used to verify the identity of a subject.

Something you are

Something you are is often known as biometrics. Examples include fingerprints, a retina scan, or voice recognition. See “Biometric factors” later in the chapter for more information.

Something you have

Something you have requires the use of a physical object. Examples include a smartcard, token device, or key.

Something you know

Something you know involves information you can recall from memory. Examples include a password, code, PIN, combination, or secret phrase.

Somewhere you are

Somewhere you are is a location-based verification. Examples include a physical location or a logical address, such as a domain name, an IP address, or a MAC address.

Something you do

Something you do involves some skill or action you can perform. Examples include solving a puzzle, a secret handshake, or a private knock. This concept can also include activities that are biometrically measured and semi-voluntary, such as your typing rhythm, patterns of system use, or mouse behaviors.

Federation

Federation or federated identity is a means of linking a subject’s accounts from several sites, services, or entities in a single account. It’s a means to accomplish single sign-on. Federated solutions often implement trans-site authentication using SAML (see the later section “SAML”).

Federation creates authentication trusts between systems in order to facilitate single sign-on benefits. Federation trusts can be one-way or two-way and can be transitive or nontransitive. In a one-way trust, as when system A is trusted by system B, users from A can access resources in both A and B systems, but users from B can only access resources in B. In a two-way trust, such as between system A and system B, users from either side can access resources on both sides. If three systems are trust-linked using two-way nontransitive trusts, such as A links to B which links to C, then A resources are accessible by users from A and B, B resources are accessible by users from A, B, and C, and C resources are accessible by users from B and C. If three systems are trust-linked using two-way transitive trusts, then all users from all three systems can access resources from all three systems.
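The A-B-C trust scenarios above can be modeled as reachability over directed trust edges. This is a conceptual sketch: an edge `("B", "A")` means B trusts A, so users from A can access resources in B.

```python
# Directed trust edges: (truster, trusted). A one-way trust where system B
# trusts system A is the edge ("B", "A"). With transitive trusts, access
# follows chains of edges; nontransitive trusts honor only direct edges.
def reachable(trusts, start, transitive=True):
    """Return the set of systems whose resources users from `start` can access."""
    access = {start}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for truster, trusted in trusts:
            if trusted == node and truster not in access:
                access.add(truster)
                if transitive:          # only keep following the chain if
                    frontier.append(truster)  # the trusts are transitive
    return access
```

Linking A, B, and C with two-way transitive trusts lets users from A reach all three systems; with nontransitive trusts they stop at B, matching the description above.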

Single sign-on

Single sign-on (SSO) means that once a user (or other subject) is authenticated into the realm, they don’t need to reauthenticate to access resources on any realm entity. (Realm is another term for domain or network.) This allows users to access all the resources, data, applications, and servers they need to perform their work tasks with a single authentication procedure. SSO eliminates the need for users to manage multiple usernames and passwords, because only a single set of logon credentials is required. Some examples of single sign-on include Kerberos, SESAME, NetSP, KryptoKnight, directory services, thin clients, and scripted access. Kerberos is one of the SSO solution options you should know about for the Security+ exam; it is discussed in the later section “Kerberos.”

Transitive trust

Transitive trust or transitive authentication is a security concern when a block can be bypassed using a third party. A transitive trust is a linked relationship between entities (such as systems, networks, or organizations) where trust from one endpoint crosses over or through middle entities to reach the farthest linked endpoint. For example, if four systems are transitive trust linked, such as A-B-C-D, then entities in A can access resources in B, C, and D thanks to the nature of a transitive trust. It can be thought of as a shared trust.

A real-world example of transitive trust occurs when you order a pizza. The cook makes the pizza and passes it on to the assistant, who packages the pizza in a box. The assistant then hands the pizza to the delivery person, the delivery person brings it to your location to hand it to your roommate, and then your roommate brings the pizza into the kitchen, where you grab a slice to eat. Since you trust each link in the chain, you are experiencing transitive trust.

Keep in mind that transitive trust can be both a beneficial feature of linked systems as well as a source of risk or compromise. If the cook placed pineapple, mushrooms, or anchovies on the pizza that you did not want or order, then the trust is broken.

Attackers often seek out transitive trust situations in order to bypass defenses and blockades against a direct approach. For example, the company firewall prevents the attacker from launching a direct attack against the internal database server. The attacker instead targets a worker who uses social networks. After friending the target, the attacker sends the worker a link that leads to a malware infector. If the worker clicks on the link, their system may become infected by remote-control malware. Then, when the worker takes the compromised system back into the office, it provides the attacker with an access pathway to attack the internal database. Thus, the transitive trust of attacker through social network to worker to company network allowed a security breach to take place.

Exam Essentials

Understand identification. Identification is the act of claiming an identity using just one authentication factor.

Define authentication. Authentication is the act of proving a claimed identity using one or more authentication factors.

Understand multifactor authentication. Multifactor authentication is the requirement that users must provide two or more authentication factors in order to prove their identity.

Know about strong authentication. Strong authentication occurs when two or more authentication factors are used but some of them are of the same type.

Understand two-factor authentication. Two-factor authentication occurs when two different authentication factors are used.

Comprehend federation. Federation or federated identity is a means of linking a subject’s accounts from several sites, services, or entities in a single account.

Understand single sign-on. Single sign-on means that once a user (or other subject) is authenticated into a realm, they need not reauthenticate to access resources on any realm entity.

Know about transitive trust. Transitive trust or transitive authentication is a security concern when a block can be bypassed using a third party. A transitive trust is a linked relationship between entities where trust from one endpoint crosses over or through middle entities to reach the farthest linked endpoint.

4.2 Given a scenario, install and configure identity and access services.

Authentication is the mechanism by which users prove their identity to a system. It’s the process of proving that a subject is the valid user of an account. Often, the authentication process involves a simple username and password. But other more complex authentication factors or credential-protection mechanisms are involved in order to provide strong protection for the logon and account-verification processes. The authentication process requires that the subject provide an identity and then proof of that identity.

Many systems and technologies are involved with identification, authentication, and access control. Several of these are discussed in this section.

LDAP

Please see the “LDAPS” section in Chapter 2, “Technologies and Tools,” for an introduction to this technology.

LDAP is usually present by default in every private network because it is the primary foundation of network directory services, such as Active Directory. LDAP is used to grant access to information about available resources in the network. The ability to view or search network resources can be limited through the use of authorization restrictions.

Kerberos

Early authentication transmission mechanisms sent logon credentials from the client to the authentication server in clear text. Unfortunately, this solution is vulnerable to eavesdropping and interception, thus making the security of the system suspect. What was needed was a solution that didn’t transmit the logon credentials in a form that could be easily captured, extracted, and reused.

One such method for providing protection for logon credentials is Kerberos, a trusted third-party authentication protocol that was originally developed at MIT under Project Athena. The current version of Kerberos in widespread use is version 5. Kerberos is used to authenticate network principals (subjects) to other entities on the network (objects, resources, and servers). Kerberos is platform independent; however, some OSs require special configuration adjustments to support true interoperability (for example, Windows Server with Unix).

Kerberos is a centralized authentication solution. The core element of a Kerberos solution is the key distribution center (KDC), which is responsible for verifying the identity of principals and granting and controlling access within a network environment through the use of secure cryptographic keys and tickets.

Kerberos is a trusted third-party authentication solution because the KDC acts as a third party in the communications between a client and a server. Thus, if the client trusts the KDC and the server trusts the KDC, then the client and server can trust each other.

Kerberos is also a single sign-on solution. Single sign-on means that once a user (or other subject) is authenticated into the realm, they need not reauthenticate to access resources on any realm entity. (A realm is the network protected under a single Kerberos implementation.)

The basic process of Kerberos authentication is as follows:

  1. The subject provides logon credentials.
  2. The Kerberos client system encrypts the password and transmits the protected credentials to the KDC.
  3. The KDC verifies the credentials and then creates a ticket-granting ticket (TGT)—a hashed form of the subject’s password with the addition of a time stamp that indicates a valid lifetime. The TGT is encrypted and sent to the client.
  4. The client receives the TGT. At this point, the subject is an authenticated principal in the Kerberos realm.
  5. The subject requests access to resources on a network server. This causes the client to request a service ticket (ST) from the KDC.
  6. The KDC verifies that the client has a valid TGT and then issues an ST to the client. The ST includes a time stamp that indicates its valid lifetime.
  7. The client receives the ST.
  8. The client sends the ST to the network server that hosts the desired resource.
  9. The network server verifies the ST. If it’s verified, it initiates a communication session with the client. From this point forward, Kerberos is no longer involved.

Figure 4.2 shows the Kerberos authentication process.


FIGURE 4.2 The Kerberos authentication process

The Kerberos authentication method helps ensure that logon credentials aren’t compromised while in transit from the client to the server. The inclusion of a time stamp in the tickets ensures that expired tickets can’t be reused. This prevents replay and spoofing attacks against Kerberos.

Kerberos supports mutual authentication (client and server identities are proven to each other). It’s scalable and thus able to manage authentication for large networks. Being centralized, Kerberos helps reduce the overall time involved in accessing resources within a network.
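The nine-step ticket flow above can be sketched as a toy model. This illustration only tracks ticket lifetimes and validity; real Kerberos encrypts tickets with shared secret keys, and all names here are hypothetical.

```python
import time

# A toy sketch of the Kerberos ticket flow: the KDC issues a time-limited
# TGT, then service tickets (STs) against it. Credentials and lifetimes are
# illustrative; real tickets are encrypted, not plain dictionaries.
class ToyKDC:
    def __init__(self, accounts):
        self.accounts = accounts          # username -> password (demo only)
        self.tgts = {}                    # TGT id -> (user, expiry)

    def grant_tgt(self, user, password, lifetime=8 * 3600):
        if self.accounts.get(user) != password:
            return None                   # step 3: credentials rejected
        tgt = f"TGT-{user}"
        self.tgts[tgt] = (user, time.time() + lifetime)
        return tgt                        # step 4: client now holds a TGT

    def grant_st(self, tgt, service, lifetime=300):
        entry = self.tgts.get(tgt)        # step 6: the TGT must exist...
        if entry is None or entry[1] < time.time():
            return None                   # ...and must not be expired
        return {"service": service, "expires": time.time() + lifetime}

def server_accepts(st, service):
    # Step 9: the resource server verifies the ST before starting a session.
    return st is not None and st["service"] == service and st["expires"] > time.time()
```

Note how the KDC sits between client and server throughout, and how the time stamps are what prevent an expired or replayed ticket from being honored.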


TACACS+

Terminal Access Controller Access Control System (TACACS) is another example of an AAA server. TACACS is an Internet standard (RFC 1492) that uses port 49 over both TCP and UDP (unlike RADIUS, which runs over UDP). XTACACS was the first proprietary Cisco revision of the standard RFC form. TACACS+ was the second major revision by Cisco of this service into yet another proprietary version; it communicates over TCP port 49 only. None of these three versions of TACACS are compatible with each other. TACACS and XTACACS are utilized on many older systems but have been all but replaced by TACACS+ on current systems.

TACACS+ differs from RADIUS in many ways. One major difference is that RADIUS combines authentication and authorization (the first two As in AAA), whereas TACACS+ separates the two, allowing for more flexibility in protocol selection. For instance, with TACACS+, an administrator may use Kerberos as an authentication mechanism while choosing something entirely different for authorization. With RADIUS these options are more limited.

Scenarios where TACACS+ would be used include any remote access situation where Cisco equipment is present. Cisco hardware is required in order to operate a TACACS+ AAA service for authenticating local or remote systems and users.

CHAP

Challenge Handshake Authentication Protocol (CHAP) is an authentication protocol used over a wide range of Point-to-Point Protocol (PPP) connections (including dial-up, ISDN, DSL, and cable) as a means to provide a secure transport mechanism for logon credentials. It was developed as a secure alternative and replacement for PAP, which transmitted authentication credentials in clear text.

CHAP uses an initial authentication-protection process to support logon and a periodic midstream reverification process to ensure that the subject/client is still who they claim to be. The process is as follows:

  1. The user is prompted for their name and password. Only the username is transmitted to the server.
  2. The authentication process performs a one-way hash function on the subject’s password.
  3. The authentication server compares the username to its accounts database to verify that it is a valid existing account.
  4. If there is a match, the server transmits a random challenge number to the client.
  5. The client uses the password hash and the challenge number as inputs to the CHAP algorithm to produce a response, which is then transmitted back to the server.
  6. The server retrieves the password hash from the user account stored in the account database and then, using it along with the challenge number, computes the expected response.
  7. The server compares the response it calculated to that received from the client.

If everything matches, the subject is authenticated and allowed to communicate over the connection link. Figure 4.3 shows the CHAP authentication process.

Once the client is authenticated, CHAP periodically sends a challenge to the client at random intervals. The client must compute the correct response to the issued challenge; otherwise, the connection is automatically severed. This post-authentication verification process ensures that the authenticated session hasn’t been hijacked.
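The challenge-response exchange described in the steps above can be sketched as follows. This is a simplified illustration: real CHAP (RFC 1994) computes MD5 over a message identifier, the shared secret, and the challenge, whereas this sketch uses the stored password hash as the secret.

```python
import hashlib
import hmac
import os

# A sketch of CHAP-style challenge-response. The password never crosses the
# wire: both sides combine the shared secret with the random challenge and
# compare only the resulting digests.
def chap_response(password_hash: bytes, challenge: bytes) -> bytes:
    # Step 5: the client derives the response from its secret and the challenge.
    return hashlib.md5(password_hash + challenge).digest()

def server_verify(stored_hash: bytes, challenge: bytes, response: bytes) -> bool:
    # Steps 6-7: the server recomputes the expected response and compares.
    expected = chap_response(stored_hash, challenge)
    return hmac.compare_digest(expected, response)
```

Because each challenge is random, a captured response is useless for replay, and the server can reissue challenges midstream to detect session hijacking.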


FIGURE 4.3 CHAP authentication

Whenever a CHAP or CHAP-like authentication system is supported, use it. The only other authentication option that is more secure than CHAP is mutual certificate–based authentication.

PAP

Password Authentication Protocol (PAP) is an insecure plain-text password-logon mechanism. PAP was an early plain old telephone service (POTS) authentication mechanism. PAP is mostly unused today, because it was superseded by CHAP and numerous EAP add-ons. Don’t use PAP—it transmits all credentials in plain text.

MSCHAP

MSCHAP is Microsoft’s customized or proprietary version of CHAP. The original MSCHAPv1 was integrated into the earliest versions of Windows but was dropped with the release of Windows Vista. MSCHAPv2 was originally added to Windows NT 4.0 through Service Pack 4, as well as to Windows 95 and Windows 98 with network update packages. MSCHAPv1 and MSCHAPv2 were both available on Windows NT 4.0 through Windows XP and Windows Server 2003. MSCHAP is often associated with the Point-to-Point Tunneling Protocol (PPTP), a VPN protocol, and Protected Extensible Authentication Protocol (PEAP). One of the key differences between MSCHAP and CHAP is support for mutual authentication rather than client-only authentication. MSCHAPv2 uses DES encryption to encrypt the transmitted NTLM password hash, which is weak and easily cracked. Thus, MSCHAP should generally be avoided and not used in any scenario where other, stronger authentication options are available.

RADIUS

Remote Authentication Dial-In User Service (RADIUS) is a centralized authentication system. It’s often deployed to provide an additional layer of security for a network. By offloading authentication of remote access clients from domain controllers or even the remote access server itself to a dedicated authentication server such as RADIUS, you can provide greater protection against intrusion for the network as a whole. RADIUS can be used with any type of remote access, including dial-up, virtual private network (VPN), and terminal services.

RADIUS is known as an AAA server. AAA stands for authentication, authorization (or access control), and accounting (sometimes referred to as auditing). RADIUS provides for distinct AAA functions for remote-access clients separate from those of normal local domain clients. RADIUS isn’t the only AAA server, but it’s the most widely deployed.

When RADIUS is deployed, it’s important to understand the terms RADIUS client and RADIUS server, both of which are depicted in Figure 4.4. The RADIUS server is obviously the system hosting the RADIUS service. However, the RADIUS client is the remote-access server (RAS), not the remote system connecting to RAS. As far as the remote-access client is concerned, it sees only the RAS, not the RADIUS server. Thus, the RAS is the RADIUS client. RADIUS is a tried-and-true AAA solution, but alternatives include the Cisco proprietary TACACS+ as well as the direct RADIUS competitor Diameter.


FIGURE 4.4 The RADIUS client manages the local connection and authenticates against a central server.

RADIUS can be used in any remote-access authentication scenario. RADIUS is platform independent and thus does not require any specific vendor’s hardware. Most RADIUS products from any vendor are interoperable with all others. RADIUS is a widely supported AAA service and can be used as the authentication system for most implementations of IEEE 802.1x (see the section “IEEE 802.1x,” later in this chapter), including the ENT authentication option on wireless access points.

SAML

Security Assertion Markup Language (SAML) is an open-standard data format based on XML for the purpose of supporting the exchange of authentication and authorization details between systems, services, and devices. SAML was designed to address the difficulties related to the implementation of single sign-on (SSO) over the web. SAML’s solution is based on a trusted third-party mechanism in which the subject or user (the principal) is verified through a trusted authentication service (the identity provider) in order for the target server or resource host (the service provider) to accept the identity of the visitor. SAML doesn’t dictate the authentication credentials that must be used, so it’s flexible and potentially compatible with future authentication technologies.

SAML is used to create and support federation of authentication. The success of SAML can be seen online wherever you are offered the ability to use an alternate site’s authentication to access an account. For example, if you visit feedly.com and click the “Get started for free” button, a dialog box appears (Figure 4.5) where you can select to link a new Feedly account to an existing account at Google, Facebook, Twitter, Windows, Evernote, or your own company’s enterprise authentication if you don’t want to use a unique email and password for Feedly.


FIGURE 4.5 An example of a SAML/OAuth single sign-on interface

SAML should be used in any scenario where linking of systems, services, or sites is desired but the authentication solutions are not already compatible. SAML allows for the creation of interfaces between authentication solutions in order to allow federation.

OpenID Connect

OpenID Connect is an Internet-based single sign-on solution. It operates over the OAuth protocol (see next section) and can be used in relation to Web services as well as smart device apps. The purpose or goal of OpenID Connect is to simplify the process by which applications are able to identify and verify users. For more detailed information and programming guidance, please see https://openid.net/connect/.

OpenID Connect should be considered for use as an authentication solution for any online application or web service. It is a simple solution that may be robust enough for your soon-to-be virally popular digital app.

OAuth

OAuth is an open standard for authentication and access delegation (federation). OAuth is widely used by websites, web services, and mobile device applications. OAuth is an easy means of supporting federation of authentication between primary and secondary systems. A primary system could be Google, Facebook, or Twitter, and secondary systems are anyone else. OAuth is often implemented using SAML. OAuth can be recognized as being in use when you are offered the ability to use an existing authentication from a primary service as the authentication at a secondary one (see Figure 4.5 earlier).

OAuth can be used in any scenario where a new, smaller secondary entity wants to employ the access tokens from primary entities as a means of authentication. In other words, OAuth is used to implement authentication federation.
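The first leg of an OAuth 2.0 authorization-code flow is the secondary site redirecting the user to the primary provider with a handful of standard query parameters. A sketch of building that redirect URL follows; the endpoint, client ID, and callback address are placeholder values, not real registered credentials.

```python
from urllib.parse import urlencode

# Construct an OAuth 2.0 authorization request URL. Parameter names follow
# RFC 6749; all concrete values here are hypothetical examples.
def authorization_url(endpoint, client_id, redirect_uri, scope, state):
    params = {
        "response_type": "code",   # request an authorization code
        "client_id": client_id,    # the secondary site's registered ID
        "redirect_uri": redirect_uri,
        "scope": scope,            # the access being delegated
        "state": state,            # anti-CSRF value, echoed back on return
    }
    return f"{endpoint}?{urlencode(params)}"
```

After the user approves at the primary provider, the provider redirects back to `redirect_uri` with a code that the secondary site exchanges for an access token, completing the federation.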

Shibboleth

Shibboleth is another example of an authentication federation and single sign-on solution. Shibboleth is a standards-based open-source solution that can be used for website authentication across the Internet or within private networks. Shibboleth was developed for use by Internet2 and is now available for use in any networking environment, public or private. Shibboleth is based on SAML.

For more information on Shibboleth, please visit https://shibboleth.net/.

Secure token

A secure token is a protected, possibly encrypted authentication data set that proves a particular user or system has been verified through a formal logon procedure. Access tokens include web cookies, Kerberos tickets, and digital certificates. A secure token is an access token that does not leak any information about the subject’s credentials or allow for easy impersonation.

A secure token can also refer to a physical authentication device known as a TOTP or HOTP device (see the “HOTP/TOTP” section later in this chapter).

Secure tokens should be considered for use in any private or public authentication scenario. Minimizing the risk of information leakage or impersonation should be a goal of anyone designing, establishing, or managing authentication solutions.

NTLM

New Technology LAN Manager (NTLM) is a password hash storage and challenge-response authentication system used on Microsoft Windows. NTLM exists in two versions. NTLMv1 is a challenge-response protocol in which a server-issued random challenge is combined with hashes of the user's password (both the LM hash and the MD4-based NT hash) to produce two responses that are sent back to the server (this assumes a password with 14 or fewer characters; otherwise only the NT hash-based response is generated). NTLMv2 is also a challenge-response protocol, but it uses a much more complex process based on HMAC-MD5. Both versions of NTLM use a challenge response-based hashing mechanism whose result is nonreversible and thus much more secure than LM hashing. However, password-cracking tools can ultimately reveal NTLMv1 or v2 stored passwords if the passwords are relatively short (under 15 characters) and the attacker has enough processing power and time.

LANMAN, typically referred to as LM or LAN Manager, is a legacy password storage mechanism developed by Microsoft. LM was superseded by NTLM starting with Windows NT, and it should be left disabled (it usually is by default) and avoided on all current versions of Windows.

One of the most significant issues with LM is that it limited passwords to a maximum of 14 characters. Shorter passwords were padded out to 14 characters using null characters. The 14 characters of the password were converted to uppercase and then divided into two seven-character sections. Each seven-character section was then used as a DES encryption key to encrypt the static ASCII string “KGS!@#$%”. The two results were recombined to form the LM hash. Obviously, this system is fraught with problems. Specifically, the process is reversible and not truly a one-way hash, and all passwords are ultimately no stronger than seven characters.
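The character handling described above can be sketched in Python (the DES step is omitted, since DES is not in the standard library); the sketch illustrates why an LM-protected password is never effectively stronger than seven uppercase characters:

```python
def lm_preprocess(password: str) -> tuple[bytes, bytes]:
    """Reproduce the LM hash's password preparation (not the DES step)."""
    # Case is discarded, and the password is cut or padded to exactly 14 bytes
    pw = password.upper().encode("ascii", errors="replace")[:14]
    pw = pw.ljust(14, b"\x00")
    # Each 7-byte half is then used independently as a DES key
    # to encrypt the static string "KGS!@#$%"
    return pw[:7], pw[7:]

# Passwords differing only in case collapse to the same input material
assert lm_preprocess("password") == lm_preprocess("PaSsWoRd")
# A password of 7 or fewer characters leaves the second half all nulls,
# which produces a well-known constant in the final LM hash
assert lm_preprocess("short")[1] == b"\x00" * 7
```

Because each half is attacked independently, an attacker never has to brute-force more than seven case-insensitive characters at a time.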

As a user, you can completely avoid LM by using passwords of at least 15 characters. LM has been disabled by default on all versions of Windows since Windows 2000. However, this disabling only addresses the initial request for and the default transmission of LM for the authentication process. The Security Accounts Manager (SAM) still contains an LM equivalent of all passwords with 14 or fewer characters through Windows Vista, at least by default. Windows 7 and later versions of Windows do not even create the LM version of user passwords to store in the user account database by default. Settings are available in the Registry and Group Policy Objects to turn on this backward-compatibility feature.

You should leave LM disabled, and disable it anywhere it is still enabled. If you need LM to support a legacy system, find a way to upgrade the legacy system rather than continue to use LM. Using LM is practically equivalent to using plain text.

NTLM is used in nearly every scenario of Windows-to-Windows authentication. Although it is not the most robust or secure form of authentication, it is secure enough in most circumstances. When NTLM is deemed insufficient or incompatible (such as when connecting to non-Windows systems), then digital certificate-based authentication should be used.

Exam Essentials

Understand Kerberos. Kerberos is a trusted third-party authentication protocol. It uses encryption keys as tickets with time stamps to prove identity and grant access to resources. Kerberos is a single sign-on solution employing a key distribution center (KDC) to manage its centralized authentication mechanism.

Know about TACACS+. TACACS is a centralized remote access authentication solution. It’s an Internet standard (RFC 1492); however, Cisco’s proprietary implementations of XTACACS and now TACACS+ have quickly gained popularity as RADIUS alternatives.

Understand CHAP. The Challenge Handshake Authentication Protocol (CHAP) is an authentication protocol used primarily over dial-up connections (usually PPP) as a means to provide a secure transport mechanism for logon credentials. CHAP uses a one-way hash to protect passwords and periodically reauthenticate clients. A good example of CHAP usage is a point-to-point link between two corporate routers.

Define PAP. Password Authentication Protocol (PAP) is an insecure plain-text password-logon mechanism. PAP was an early plain old telephone service (POTS) authentication mechanism.

Understand MSCHAP. MSCHAP is Microsoft’s customized or proprietary version of CHAP. One of the key differences between MSCHAP and CHAP is that MSCHAP supports mutual authentication, rather than client-only authentication. MSCHAPv2 uses DES encryption to encrypt the transmitted NTLM password hash, which is weak and easily cracked.

Comprehend RADIUS. RADIUS is a centralized authentication system. It’s often deployed to provide an additional layer of security for a network.

Understand SAML. Security Assertion Markup Language is an open-standard data format based on XML for the purpose of supporting the exchange of authentication and authorization details between systems, services, and devices.

Know about OpenID Connect. OpenID Connect is an Internet-based single sign-on solution. It operates over the OAuth protocol and can be used in relation to web services as well as smart device apps.

Understand OAuth. OAuth is an open standard for access delegation (federation). OAuth is widely used by websites/services and mobile device applications to support federated sign-on.

Define Shibboleth. Shibboleth is another example of an authentication federation and single sign-on solution. Shibboleth is a standards-based, open source-solution that can be used for website authentication across the Internet or within private networks.

Understand secure tokens. A secure token is a protected, possibly encrypted authentication data set that proves a particular user or system has been verified through a formal logon procedure. Access tokens include web cookies, Kerberos tickets, and digital certificates. A secure token is an access token that does not leak any information about the subject’s credentials or allow for easy impersonation.

Know about NTLM. New Technology LAN Manager (NTLM) is a password hash storage system used on Microsoft Windows. It’s a challenge-response protocol system that is nonreversible and thus much more secure than LM hashing. NTLM is still frequently encountered in Windows environments, for example as a fallback for user logon authentication when Kerberos is unavailable.

4.3 Given a scenario, implement identity and access management controls.

Authorization is the second element of AAA services. Thus, authorization, or access control, is an essential part of security throughout an organization. Understanding the variations and options for identity verification and access control management is important for security management.

Access control models

The mechanism by which users are granted or denied the ability to interact with and use resources is known as access control. Access control is often referred to using the term authorization. Authorization defines the type of access to resources that users are granted—in other words, what users are authorized to do. Authorization is often considered the next logical step immediately after authentication. Authentication is proving your identity to a system or the act of logging on. With proper authorization or access control, a system can properly control access to resources in order to prevent unauthorized access.

There are three common access control methods:

  • Mandatory access control (MAC)
  • Discretionary access control (DAC)
  • Role-based access control (RBAC)

These three models are widely used in today’s IT environments. Familiarity with these models is essential for the Security+ exam.

In most environments, DAC is a sufficient authorization mechanism to use to control and manage a subject’s access to and use of resources. Most operating systems are DAC by default. In government or military environments, where classifications are deemed an essential control mechanism, MAC should be used to directly enforce and restrict access based on a subject’s clearance. RBAC is a potential alternative in many environments, but it is most appropriate in those situations where there is a high rate of employee turnover.

MAC

Mandatory access control (MAC) is a form of access control commonly employed by government and military environments. MAC specifies that access is granted based on a set of rules rather than at the discretion of a user. The rules that govern MAC are hierarchical in nature and are often called sensitivity labels, security domains, or classifications. MAC environments define a few specific security domains or sensitivity levels and then use the associated labels from those domains to impose access control on objects and subjects.

A government or military implementation of MAC typically includes the following five levels (in order from least sensitive to most sensitive):

  • Unclassified
  • Sensitive but unclassified
  • Confidential
  • Secret
  • Top secret

Objects or resources are assigned sensitivity labels corresponding to one of these security domains. Each specific security domain or level defines the security mechanisms and restrictions that must be imposed in order to provide protection for objects in that domain.

MAC can also be deployed in private sector or corporate business environments. Such cases typically involve the following four security domain levels (in order from least to most sensitive):

  • Public
  • Sensitive
  • Private
  • Confidential/Proprietary

The primary purpose of a MAC environment is to prevent disclosure: the violation of the security principle of confidentiality. When an unauthorized user gains access to a secured resource, it is a security violation. Objects are assigned a specific sensitivity label based on the damage that would be caused if disclosure occurred. For example, if a top-secret resource was disclosed, it could cause grave damage to national security.

A MAC environment works by assigning subjects a clearance level and assigning objects a sensitivity label—in other words, everything is assigned a classification marker. The name of the clearance level is the same as the name of the sensitivity label assigned to objects or resources. A person (or other subject, such as a program or a computer system) must have the same or greater assigned clearance level as the resources they wish to access. In this manner, access is granted or restricted based on the rules of classification (that is, sensitivity labels and clearance levels).
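The clearance-versus-label comparison can be sketched in a few lines of Python; the level names follow the government list above, and the function simply checks that the subject's clearance dominates (equals or exceeds) the object's label:

```python
# Labels ordered from least to most sensitive, per the list above
LEVELS = ["Unclassified", "Sensitive but unclassified",
          "Confidential", "Secret", "Top secret"]
RANK = {name: i for i, name in enumerate(LEVELS)}

def mac_access_allowed(clearance: str, sensitivity_label: str) -> bool:
    # The subject's clearance must equal or exceed the object's label
    return RANK[clearance] >= RANK[sensitivity_label]
```

For example, a subject cleared for Secret can access Confidential material, but a subject cleared for Confidential cannot access Top secret material.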

MAC is so named because the access control it imposes on an environment is mandatory. Its assigned classifications and the resulting granting and restriction of access can’t be altered by users. Instead, the rules that define the environment and judge the assignment of sensitivity labels and clearance levels control authorization.

MAC isn’t a security environment with very granular control. An improvement to MAC includes the use of need to know: a security restriction in which some objects (resources or data) are restricted unless the subject has a need to know them. The objects that require a specific need to know are assigned a sensitivity label, but they’re compartmentalized from the rest of the objects with the same sensitivity label (in the same security domain). The need to know is a rule in itself, which states that access is granted only to users who have been assigned work tasks that require access to the cordoned-off object. Even if users have the proper level of clearance, without need to know, they’re denied access. “Need to know” is the MAC equivalent of the principle of least privilege from DAC (described in the following section).

DAC

Discretionary access control (DAC) is the form of access control or authorization that is used in most commercial and home environments. DAC is user-directed or, more specifically, controlled by the owner and creators of the objects (resources) in the environment. DAC is identity-based: access is granted or restricted by an object’s owner based on user identity and on the discretion of the object owner. Thus, the owner or creator of an object can decide which users are granted or denied access to their object. To do this, DAC uses ACLs.

An access control list (ACL) is a logical security mechanism attached to every object and resource in the environment. It defines which users are granted or denied the various types of access available based on the object type. Individual user accounts or user groups can be added to an object’s ACL and granted or denied access.

If your user account isn’t granted access through an object’s ACL, then often your access is denied by default (note: not all OSs use a deny-by-default approach). If your user account is specifically granted access through an object’s ACL, then you’re granted the specific level or type of access defined. If your user account is specifically denied access through an object’s ACL, then you’re denied the specific level or type of access defined. In some cases (such as with Microsoft Windows), a Denied setting in an ACL overrides all other settings. Table 4.1 shows an access matrix for a user who is a member of three groups, and the resulting access to specific files within a folder on a network server. As you can see, the presence of the Denied setting overrides any other access granted from another group. Thus, if your membership in one user group grants you write access over an object, but another group specifically denies you write access to the same object, then you’re denied write access to the object.

TABLE 4.1 Cumulative access based on group memberships

Sales group      User group   Research group   Resulting access   Filename
Change           Read         None specified   Change             SalesReport.xls
Read             Read         Change           Change             ProductDevelopment.doc
None specified   Read         Denied           Denied             EmailPolicy.pdf
Full control     Denied       None specified   Denied             CustomerContacts.doc

User-assigned privileges are permissions granted or denied on a specific individual user basis. This is a standard feature of DAC-based OSs, including Linux and Windows. All objects in Linux have an owner assigned. The owner (an individual) is granted specific privileges. In Windows, an access control entry (ACE) in an ACL can focus on an individual user to grant or deny permissions on the object.

In a DAC environment, it is common to use groups to assign access to resources in aggregate rather than only on an individual basis. This often results in users being members of numerous groups. In these situations, it is often important to determine the effective permissions for a user. This is accomplished by accumulating all allows or grants of access to a resource, and then subtracting or removing any denials for that resource.
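As an illustration (not any operating system's actual algorithm), the effective-permission calculation shown in Table 4.1 can be sketched in Python, with None standing in for "None specified":

```python
def effective_access(group_permissions):
    """group_permissions: one entry per group the user belongs to;
    use None when a group specifies nothing for the object."""
    strength = {"Read": 1, "Change": 2, "Full control": 3}
    granted = [p for p in group_permissions if p is not None]
    if "Denied" in granted:        # an explicit Denied overrides all grants
        return "Denied"
    if not granted:                # nothing specified anywhere
        return "None"
    return max(granted, key=strength.get)   # highest accumulated grant wins
```

Applied to the rows of Table 4.1, this reproduces the "Resulting access" column, for example Denied for CustomerContacts.doc despite the Full control grant.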

ABAC

Attribute-based access control (ABAC) is a mechanism for assigning access and privileges to resources through a scheme of attributes or characteristics. The attributes can be related to the user, the object, the system, the application, the network, the service, time of day, or even other subjective environmental concerns. ABAC access is then determined through a set of Boolean logic rules, similar to if-then programming statements, that relate who is making a request, what the object is, what type of access is being sought, and results the action would cause. ABAC is a dynamic, context-aware authorization scheme that can modify access based on risk profiles and changing environmental conditions (such as system load, latency, whether or not encryption is in use, and whether the requesting system has the latest security patches). ABAC is also known by the terms policy-based access control (PBAC) and claims-based access control (CBAC).
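A minimal sketch of ABAC-style rule evaluation in Python follows; the rule, attribute names, and policy shown here are hypothetical examples, not any product's API:

```python
def abac_allow(subject: dict, obj: dict, action: str, env: dict, rules) -> bool:
    # Access is granted only if at least one policy rule evaluates to True
    return any(rule(subject, obj, action, env) for rule in rules)

# Hypothetical rule: managers may read payroll records during business hours
def payroll_rule(subject, obj, action, env):
    return (subject.get("role") == "manager"
            and obj.get("type") == "payroll"
            and action == "read"
            and 9 <= env.get("hour", -1) < 17)
```

Because the environment dictionary is evaluated on every request, the same subject can be allowed at 10 a.m. and denied at 10 p.m., which is the context-aware behavior described above.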

Role-based access control

Role-based access control (RBAC) is another strict form of access control. It may be grouped with the nondiscretionary access control methods along with MAC. The rules used for RBAC are basically job descriptions: users are assigned a specific role in an environment, and access to objects is granted based on the necessary work tasks of that role. For example, the role of backup operator may be granted the ability to back up every file on a system to a tape drive. The user given the backup operator role can then perform that function.

RBAC is most suitable for environments with a high rate of employee turnover. It allows a job description or role to remain static even when the user performing that role changes often. It’s also useful in industries prone to privilege creep, such as banking.

Rule-based access control

Rule-based access control (RBAC or rule-BAC) is typically used by network devices that filter traffic based on filtering rules, such as firewalls and routers. Rule-based systems enforce their rules independent of the user or the resource: if a firewall rule sets a port as closed, then it is closed regardless of who is attempting to access the system. These filtering rules are often called rules, rule sets, filter lists, tuples, or ACLs. Because RBAC can mean either role-based or rule-based access control, be sure you understand the context of a Security+ exam question before assuming which one is meant.
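A first-match-wins filter of this kind can be sketched in a few lines of Python; the rule format and field names are illustrative only, not any vendor's syntax:

```python
def firewall_decision(rules, packet):
    """Evaluate rules top-down; first match wins, with an implicit default deny."""
    for rule in rules:
        if ((rule["port"] == "any" or rule["port"] == packet["port"])
                and (rule["proto"] == "any" or rule["proto"] == packet["proto"])):
            return rule["action"]
    return "deny"   # no rule matched: deny by default

RULES = [
    {"proto": "tcp", "port": 443, "action": "allow"},   # HTTPS open
    {"proto": "tcp", "port": 23,  "action": "deny"},    # telnet closed for everyone
]
```

Note that the decision never consults a user identity; the port-23 deny applies to an administrator exactly as it applies to a guest, which is the defining trait of rule-based control.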


Physical access control

Often overlooked when considering IT security is the need to manage physical access. Physical access controls are needed to prevent physical access violations, just as logical access controls are needed to prevent logical access violations.

Physical access controls should be implemented in any scenario in which there is a difference in value, risk, or use between one area of a facility and another. Anywhere it would make sense to have a locked door, technology-managed physical access controls should be implemented.

Proximity cards

In addition to smart and dumb cards, proximity devices can be used to control physical access. A proximity device or proximity card can be a passive device, a field-powered device, or a transponder. The proximity device is worn or held by the authorized bearer. When it passes a proximity reader, the reader is able to determine who the bearer is and whether they have authorized access. A passive device reflects or otherwise alters the electromagnetic field generated by the reader. This alteration is detected by the reader.

The passive device has no active electronics; it is just a small magnet with specific properties (like antitheft devices commonly found on DVDs). A field-powered device has electronics that activate when the device enters the electromagnetic (EM) field that the reader generates. Such devices generate electricity from an EM field to power themselves (such as card readers that only require the access card be waved within inches of the reader to unlock doors). A transponder device is self-powered and transmits a signal received by the reader. This can occur continuously or only at the press of a button (like a garage door opener or car alarm key fob).

In addition to smart/dumb cards and proximity readers, physical access can be managed with radio frequency identification (RFID) or biometric access-control devices.

Smart cards

See the later section “PIV/CAC/smart card” in the discussion of implementing certificate-based authentication.

Biometric factors

Biometrics is the term used to describe the collection of physical attributes of the human body that can be used as an identification or authentication factor. Biometrics fall into the authentication factor category of something you are: you, as a human, have the element of identification as part of your physical body.

Numerous biometric factors can be considered for identification and authentication purposes. Several of these options are discussed in the following sections.

However, when an organization decides to implement a biometric factor, it is important to evaluate the available options in order to select a biometric solution that is most in line with the organization’s security priorities. One method to accomplish this is to consult a Zephyr analysis chart (Figure 4.6). This type of chart presents the relative strengths and weaknesses of various characteristics of biometric factor options. The specific example shown in Figure 4.6 evaluates eight biometric types on four characteristics (intrusiveness, accuracy, cost, and effort). The security administrator should select a form of biometric based on their organization’s priorities for the evaluated characteristics.


FIGURE 4.6 A Zephyr analysis chart

Once the type of biometric is selected, then a specific make and model needs to be purchased. Finding the most accurate device to implement is accomplished using a crossover error rate analysis (see the section “Crossover error rate” later in this chapter).

Biometric factor devices or biometric scanners should be used as an element in multifactor authentication. Any scenario in which there is sensitive data carries a corresponding need for greater security. One element of stronger security is more robust authentication. Any form of multifactor authentication is stronger than a single-factor authentication solution.

Fingerprint scanner

A fingerprint scanner is used to analyze the visible patterns of skin ridges on the fingers and thumbs of people. Fingerprints are thought to be unique to an individual and have been used for decades in physical security for identification, and are now often used as an electronic authentication factor as well. Fingerprint readers are now commonly used on laptop computers, smartphones, and USB flash drives as a method of identification and authentication. Although fingerprint scanners are common and seemingly easy to use, they can sometimes be fooled by photos of fingerprints, black-powder and tape-lifted fingerprints, or gummy re-creations of fingerprints.

Retinal scanner

Retinal scanners focus on the pattern of blood vessels at the back of the eye. Retinal scans are the most accurate form of biometric authentication and are able to differentiate between identical twins. However, they are the least acceptable biometric scanning method for employees because they can reveal medical conditions, such as high blood pressure and pregnancy. Older retinal scans blew a puff of air into the user’s eye (which is uncomfortable), but newer ones typically use an infrared light instead. Retinal patterns can also change as people age and retinas deteriorate.

Iris scanner

Iris scanners focus on the colored area around the pupil. They are the second most accurate form of biometric authentication. Iris scans are often recognized as having a longer useful authentication life span than other biometric factors because the iris remains relatively unchanged throughout a person’s life (barring eye damage or illness). Iris scans are considered more acceptable by general users than retina scans because they don’t reveal personal medical information. However, some scanners can be fooled with a high-quality image in place of an actual person’s eye; sometimes a contact lens can be placed on the photo to improve the subterfuge. Additionally, accuracy can be affected by changes in lighting.

Voice recognition

Voice recognition is a type of biometric authentication that relies on the characteristics of a person’s speaking voice, known as a voiceprint. The user speaks a specific phrase, which is recorded by the authentication system. To authenticate, the user repeats the same phrase and it is compared to the original. Voice pattern recognition is sometimes used as an additional authentication mechanism but is rarely used by itself.

Facial recognition

Facial recognition is based on the geometric patterns of faces for detecting authorized individuals. Face scans are used to identify and authenticate people before accessing secure spaces, such as a secure vault. Many photo sites now include facial recognition, which can automatically recognize and tag individuals once they have been identified in other photos.

False acceptance rate

As with all forms of hardware, there are potential errors associated with biometric readers. Two specific error types are a concern: false rejection rate (FRR) or Type I errors and false acceptance rate (FAR) or Type II errors. The FRR is the number of failed authentications for valid subjects based on device sensitivity, whereas the FAR is the number of accepted invalid subjects based on device sensitivity.

False rejection rate

Discussed in the previous section.

Crossover error rate

The two error measurements of biometric devices (FRR and FAR) can be mapped on a graph comparing sensitivity level to rate of errors. The point on this graph where the two rates intersect is known as the crossover error rate (CER); see Figure 4.7. Notice how the number of FRR errors increases with sensitivity, whereas FAR errors decrease as sensitivity increases. The CER (as measured against the error scale) is used to compare the accuracy of biometric devices of the same type across various vendors and models: the device with the comparatively lowest CER is the more accurate device for the relevant body part.
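Given sampled FAR and FRR values for a device at several sensitivity settings, the CER can be approximated as the setting where the two rates are closest; the curve data below is hypothetical:

```python
def crossover_error_rate(sensitivities, far, frr):
    """Approximate the CER from sampled FAR/FRR curves by finding the
    sensitivity setting where the two error rates are closest."""
    i = min(range(len(sensitivities)), key=lambda k: abs(far[k] - frr[k]))
    return sensitivities[i], (far[i] + frr[i]) / 2

# Hypothetical sampled curves: FAR falls and FRR rises as sensitivity increases
sens = [1, 2, 3, 4, 5]
far  = [0.20, 0.10, 0.05, 0.02, 0.01]
frr  = [0.01, 0.02, 0.05, 0.10, 0.20]
```

For this sample data the curves cross at sensitivity 3 with an error rate of 5 percent; a competing device whose curves crossed at 2 percent would be the more accurate choice.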


FIGURE 4.7 A graphing of FRR and FAR, which reveals the CER

Tokens

A token is a form of authentication factor that is something you have. It’s usually a hardware device, but it can be implemented in software as a logical token. A token is used to generate temporary single-use passwords for the purpose of creating stronger authentication. In this way, a user account isn’t tied to a single static password. Instead, the user must be in physical possession of the password-generating device. Users enter the currently valid password from the token as their password during the logon process.

There are several forms of tokens. Some tokens generate passwords based on time (see Figure 4.8 later in the chapter), whereas others generate passwords based on challenges from the authentication server. In either case, users can use (or attempt to use) the generated password just once before they must either wait for the next time window or request another challenge. Passwords that can be used only once are known as one-time passwords (OTP). This is the most secure form of password, because regardless of whether its use results in a successful logon, that one-use password is never valid again. One-time passwords can be employed only when a token is used, due to the complexity and ever-changing nature of the passwords. However, a token need not be a device; there are paper-based options as well as smartphone app–based solutions.


FIGURE 4.8 The RSA SecurID token device

A token may be a device (Figure 4.8), like a small calculator with or without a keypad. It may also be a high-end smartcard. When properly deployed, a token-based authentication system is more secure than a password-only system.

A token should be used in any scenario in which multifactor authentication is needed or warranted. Almost every authentication event would be improved by implementing a multifactor solution as opposed to remaining with single-factor authentication.

Hardware

An authentication token can be a hardware device that must be present each time the user attempts to log on. Often hardware tokens are designed to be small and attach to a keychain or lanyard. They are often referred to as keychain tokens or key fobs.

Software

An authentication token can be a software solution, such as an app on a smart device. Since many of us carry a smartphone with us almost everywhere we go, having an app that provides OTP when necessary can eliminate the need for carrying around another hardware device or physical token. Software token apps are widely available and implemented on many Internet services; thus, they are easy to adopt for use as an authentication factor for a private network.

HOTP/TOTP

HMAC-based one-time password (HOTP) tokens, or asynchronous dynamic password tokens, are devices or applications that generate passwords based not on fixed time intervals but on a nonrepeating one-way function, such as a hash or hash message authentication code (HMAC—a type of hash that uses a symmetric key in the hashing process) operation. These tokens often generate a password after the user enters a PIN into the token device. The authentication process commonly includes a challenge and a response: the server sends the user a challenge value, which the user enters into the token (often along with a PIN) to generate the one-time password. These tokens have a unique seed (or random number) embedded along with a unique identifier for the device. See the earlier section “CHAP” for a description of this style of operation.

There is a potential downside to using HOTPs, known as the off-by-one problem. If the non–time-based seed or key synchronization gets desynchronized, the client may be calculating a value that the server has already tossed or has not yet generated. This requires the device to be resynced with the authentication server.

Time-based one-time password (TOTP) tokens, or synchronous dynamic password tokens, are devices or applications that generate passwords at fixed time intervals, such as every 60 seconds. Time-interval tokens must have their clocks synchronized to an authentication server. To authenticate, the user enters the password shown along with a PIN or passphrase as a second factor of authentication. The generated one-time password provides identification, and the PIN/passphrase provides authentication.
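Both token algorithms are published standards (RFC 4226 for HOTP, RFC 6238 for TOTP) and can be sketched with the Python standard library; this version uses the defaults of HMAC-SHA1, six digits, and a 30-second time step:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the 8-byte big-endian counter value
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, at_time=None) -> str:
    # RFC 6238: TOTP is simply HOTP driven by a time-step counter
    now = time.time() if at_time is None else at_time
    return hotp(secret, int(now // step))

# RFC 4226 Appendix D test vector for the ASCII secret "12345678901234567890"
assert hotp(b"12345678901234567890", 0) == "755224"
```

The shared secret seed is the value embedded in the token at provisioning time; because the server runs the same calculation, the two sides agree on the password without it ever being transmitted in advance.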

Certificate-based authentication

Certificates or digital certificates are a trusted third-party authentication technology derived from asymmetric public key cryptography. Please see Chapter 6 for detailed coverage of digital certificates.

Certificate-based authentication is often a reliable mechanism for verifying the identity of devices, systems, services, applications, networks, and organizations. However, certificates alone are insufficient to identify or authenticate individuals, since a certificate is a digital file that can be lost, stolen, or otherwise abused for impersonation attacks. When implemented as part of a multifactor authentication process, though, certificates can significantly improve logon security in a wide range of scenarios.


PIV/CAC/smart card

Smartcards are credit card–sized IDs, badges, or security passes with embedded integrated circuit chips. They can contain information about the authorized bearer that can be used for identification and/or authentication purposes. Some smartcards can even process information or store reasonable amounts of data in a memory chip. Many smartcards are used as the means of hardware-based removable media storage for digital certificates. This enables users to carry a credit card–sized device on their person, which is then used as an element in multifactor authentication, specifically supporting certificate authentication as one of those factors.

A smartcard may be known by several terms:

  • An identity token containing integrated circuits (ICs)
  • A processor IC card
  • An IC card with an ISO 7816 interface

Smartcards are often viewed as a complete security solution, but they should not be considered complete by themselves. Like any single security mechanism, smartcards are subject to weaknesses and vulnerabilities. They can fall prey to physical attacks, logical attacks, Trojan horse attacks, or social engineering attacks.

Memory cards are machine-readable ID cards with a magnetic strip or a read-only chip, like a credit card, a debit card, or an ATM card. Memory cards can retain a small amount of data but are unable to process data like a smartcard. Memory cards often function as a type of two-factor control: the card is something you have, and its PIN is something you know. However, memory cards are easy to copy or duplicate and are insufficient for authentication purposes in a secure environment.

The Common Access Card (CAC) is the name given to the smartcard used by the U.S. government and military for authentication purposes. Although the CAC name was assigned by the Department of Defense (DoD), the same technology is widely used in commercial environments. This smartcard is used to host credentials, specifically digital certificates, that can be used to grant access to a facility or to a computer terminal.

Personal Identity Verification (PIV) cards, such as badges, identification cards, and security IDs, are forms of physical identification and/or electronic access control devices. A badge can be as simple as a name tag indicating whether you’re a valid employee or a visitor. Or it can be as complex as a smartcard or token device that employs multifactor authentication to verify and prove your identity and provide authentication and authorization to access a facility, specific rooms, or secured workstations. Badges often include pictures, magnetic strips with encoded data, and personal details to help a security guard verify identity.

Badges can be used in environments in which physical access is primarily controlled by security guards. In such conditions, the badge serves as a visual identification tool for the guards. They can verify your identity by comparing your picture to your person and consult a printed or electronic roster of authorized personnel to determine whether you have valid access.

Badges can also serve in environments guarded by scanning devices rather than security guards. In such conditions, a badge can be used either for identification or for authentication. When a badge is used for identification, it’s swiped in a device, and then the badge owner must provide one or more authentication factors, such as a password, passphrase, or biological trait (if a biometric device is used). When a badge is used for authentication, the badge owner provides an ID, username, and so on, and then swipes the badge to authenticate.

IEEE 802.1x

IEEE 802.1x is a port-based authentication mechanism. It’s based on Extensible Authentication Protocol (EAP) and is commonly used in closed-environment wireless networks. However, 802.1x isn’t exclusively used on wireless access points (WAPs); it can also be used on firewalls, proxies, VPN gateways, and other locations where an authentication handoff service is desired. Think of 802.1x as an authentication proxy. When you wish to use an existing authentication system rather than configure another, 802.1x lets you do that.

When 802.1x is in use, it makes a port-based decision about whether to allow or deny a connection based on the authentication of a user or service. 802.1x was initially used to compensate for the weaknesses of Wired Equivalent Privacy (WEP), but today it’s often used as a component in more complex authentication and connection-management systems, including Remote Authentication Dial-In User Service (RADIUS), Diameter, Cisco System’s Terminal Access Controller Access-Control System Plus (TACACS+), and Network Access Control (NAC).

Like many technologies, 802.1x is vulnerable to man-in-the-middle and hijacking attacks because the authentication mechanism occurs only when the connection is established.

802.1x is a standard port-based network-access control that ensures that clients can’t communicate with a resource until proper authentication has taken place. Effectively, 802.1x is a handoff system that allows any device to use the existing network infrastructure’s authentication services. Through the use of 802.1x, other techniques and solutions such as RADIUS, TACACS, certificates, smartcards, token devices, and biometrics can be integrated into any communications system. 802.1x is most often associated with wireless access points, but its use isn’t limited to wireless.

File system security

Filesystem security is usually focused on authorization instead of authentication. To protect a filesystem, either lock down access to the computer through which the storage device is reached, so that anyone not specifically authorized is denied access (for example, by requiring multifactor authentication), or encrypt the storage device to block access by all but the intentionally authorized. See the section “Access control models” earlier in this chapter for details on authorization control via DAC, MAC, RBAC, and others.

Filesystem security should be used in all scenarios to define what access users have. Such access should be granted based on their work responsibilities in order to enable users to complete work tasks without placing the organization at any significant level of additional and unwarranted risk. This concept is known as the principle of least privilege, and it should be adopted and enforced across all means of resource access management.
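
The least privilege idea extends down to individual file permissions. As a minimal sketch (POSIX systems only; the temporary file merely stands in for a real sensitive resource), a file can be restricted so that only its owner may read and write it:

```python
import os
import stat
import tempfile

# Create a throwaway file to stand in for a sensitive resource.
fd, path = tempfile.mkstemp()
os.close(fd)

# Least privilege: owner may read and write; group and others get nothing.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # mode 0o600

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600
os.remove(path)
```

Granting only the owner bits, rather than a permissive default such as 0o644, is the filesystem-level expression of the principle described above.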

Database security

Database security is an important part of any organization that uses large sets of data as an essential asset. Without database security efforts, business tasks can be interrupted and confidential information disclosed. The wide array of topics that are part of database security includes aggregation, inference, data mining, data warehousing, and data analytics.

Structured Query Language (SQL), the language used to interact with most databases, provides a number of functions that combine records from one or more tables to produce potentially useful information. This process, known as aggregation, is not without its security vulnerabilities. Aggregation attacks are used to collect numerous low-level security items or low-value items and combine them to create something of a higher security level or value.

For example, suppose a low-level military records clerk is responsible for updating records of personnel and equipment as they are transferred from base to base. As part of his duties, this clerk may be granted the database permissions necessary to query and update personnel tables.

The military might not consider an individual transfer request (in other words, Sergeant Jones is being moved from Base X to Base Y) to be classified information. The records clerk has access to that information because he needs it to process Sergeant Jones’s transfer. However, with access to aggregate functions, the records clerk might be able to count the number of troops assigned to each military base around the world. These force levels are often closely guarded military secrets, but the low-ranking records clerk could deduce them by using aggregate functions across a large number of unclassified records.
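
A minimal sqlite3 sketch of this aggregation problem (the table name and records are invented for illustration): each row alone is unclassified, yet an aggregate query across the rows reveals the guarded force levels.

```python
import sqlite3

# Hypothetical personnel table: individual transfer records are unclassified.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE personnel (name TEXT, base TEXT)")
conn.executemany(
    "INSERT INTO personnel VALUES (?, ?)",
    [("Jones", "Base X"), ("Smith", "Base X"), ("Lee", "Base Y")],
)

# A clerk permitted to query single records can also run this aggregate
# query unless access to aggregate functions is restricted:
rows = conn.execute(
    "SELECT base, COUNT(*) FROM personnel GROUP BY base ORDER BY base"
).fetchall()
print(rows)  # [('Base X', 2), ('Base Y', 1)] -- troop counts per base
```

Limiting which roles may run such queries, or exposing data only through vetted views, is the kind of control over aggregate functions that the text recommends.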

For this reason, it’s especially important for database security administrators to strictly control access to aggregate functions and adequately assess the potential information they may reveal to unauthorized individuals.

The database security issues posed by inference attacks are very similar to those posed by the threat of data aggregation. Inference attacks involve combining several pieces of nonsensitive information to gain access to information that should be classified at a higher level. However, inference makes use of the human mind’s deductive capacity rather than the raw mathematical ability of modern database platforms.

A commonly cited example of an inference attack is that of the accounting clerk at a large corporation who is allowed to retrieve the total amount the company spends on salaries for use in a top-level report but is not allowed to access the salaries of individual employees. The accounting clerk often has to prepare those reports with effective dates in the past and so is allowed to access the total salary amounts for any day in the past year. Say, for example, that this clerk must also know the hiring and termination dates of various employees and has access to this information. This opens the door for an inference attack. If an employee was the only person hired on a specific date, the accounting clerk can now retrieve the total salary amount on that date and the day before and deduce the salary of that particular employee—sensitive information that the user would not be permitted to access directly.
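
The arithmetic behind this inference is trivial, which is exactly why it is dangerous. A sketch with invented figures:

```python
# Totals the clerk is allowed to retrieve (invented numbers).
total_may_31 = 1_250_000  # company-wide salaries the day before the hire
total_jun_01 = 1_337_000  # company-wide salaries on the hire date

# If exactly one employee was hired on June 1, the difference leaks
# that employee's salary -- data the clerk cannot query directly.
inferred_salary = total_jun_01 - total_may_31
print(inferred_salary)  # 87000

# The blurring defense: round totals to the nearest million before release.
blur = lambda x: round(x / 1_000_000) * 1_000_000
print(blur(total_jun_01) - blur(total_may_31))  # 0 -- nothing leaks
```

With rounded totals, the two queries return the same value and the individual salary can no longer be deduced.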

As with aggregation, the best defense against inference attacks is to maintain constant vigilance over the permissions granted to individual users. Furthermore, intentional blurring of data may be used to prevent the inference of sensitive information. For example, if the accounting clerk were able to retrieve only salary information rounded to the nearest million, he would probably not be able to gain any useful information about individual employees. Finally, you can use database partitioning, dividing up a single database into multiple distinct databases according to content value, risk, and importance, to help subvert these attacks.

Many organizations use large databases, known as data warehouses (a predecessor to the idea of big data), to store large amounts of information from a variety of databases for use with specialized analysis techniques. These data warehouses often contain detailed historical information not normally stored in production databases because of storage limitations or data security concerns.

A data dictionary is commonly used for storing critical information about data, including usage, type, sources, relationships, and formats. Database management system (DBMS) software reads the data dictionary to determine access rights for users attempting to access data.

Data mining techniques allow analysts to comb through data warehouses and look for potential correlated information. For example, an analyst might discover that the demand for lightbulbs always increases in the winter months and then use this information when planning pricing and promotion strategies. Data mining techniques result in the development of data models that can be used to predict future activity.

The activity of data mining produces metadata—information about data. Metadata is not exclusively the result of data mining operations; other functions or services can produce metadata as well. Think of metadata from a data mining operation as a concentration of data. It can also be a superset, a subset, or a representation of a larger data set. Metadata can be the important, significant, relevant, abnormal, or aberrant elements from a data set.

One common security example of metadata is that of a security incident report. An incident report is the metadata extracted from a data warehouse of audit logs through the use of a security auditing data mining tool. In most cases, metadata is of a greater value or sensitivity (due to disclosure) than the bulk of data in the warehouse. Thus, metadata is stored in a more secure container known as the data mart.

Data warehouses and data mining are significant to security professionals for two reasons. First, as previously mentioned, data warehouses contain large amounts of potentially sensitive information vulnerable to aggregation and inference attacks, and security practitioners must ensure that adequate access controls and other security measures are in place to safeguard this data. Second, data mining can actually be used as a security tool when it’s used to develop baselines for statistical anomaly–based intrusion detection systems.

Data analytics is the science of examining raw data with the goal of extracting useful information from the bulk data set. The results of data analytics could focus on important outliers or exceptions to normal or standard items, a summary of all data items, or some focused extraction and organization of interesting information. Data analytics is a growing field as more organizations are gathering an astounding volume of data from their customers and products. The sheer volume of information to be processed has demanded a whole new category of database structures and analysis tools. It has even picked up the nickname of “big data.”

Big data refers to collections of data that have become so large that traditional means of analysis or processing are ineffective, inefficient, and insufficient. Big data involves numerous difficult challenges, including collection, storage, analysis, mining, transfer, distribution, and results presentation. Such large volumes of data have the potential to reveal nuances and idiosyncrasies that more mundane sets of data fail to address. The potential to learn from big data is tremendous, but the burdens of dealing with big data are equally great. As the volume of data increases, the complexity of data analysis increases as well. Big data analysis requires high-performance analytics running on massively parallel or distributed processing systems. With regard to security, organizations are endeavoring to collect an ever more detailed and exhaustive range of event data and access data. This data is collected with the goal of assessing compliance, improving efficiencies, improving productivity, and detecting violations.

A relational database is a means to organize and structure data in a flat two-dimensional table. The row and column–based organizational scheme is widely used, but it isn’t always the best solution. Relational databases can become difficult to manage and use when they grow extremely large, especially if they’re poorly designed and managed. Their performance can be slowed when significant numbers of simultaneous users perform queries. And they might not support data mapping needed by modern complex programming techniques and data structures. In the past, most applications of RDBMSs did not experience any of these potential downsides. However, in today’s era of big data and services the size of Google, Amazon, Twitter, and Facebook, RDBMSs aren’t sufficient solutions to some data-management needs.

NoSQL is a database approach that employs nonrelational data structures, such as hierarchies or multilevel nesting and referencing. A hierarchical data structure is one in which every data object can have a single data-parent relation and none, one, or many data-child relations. A data parent is an item upward or closer to the root of the hierarchy, whereas a data child is an item downward or further away from the root. DNS and XML data are excellent examples of hierarchical data structures.

A multilevel nesting and cross-referencing data structure is a system in which a data object can have multiple data-parent and data-child links and may even have links across multiple levels or among “peer” data items. Effectively, any data item can be linked to any other data item, with no structural limitations. The organization of Facebook, Twitter, and Google+ relationships is of this nature. This DBMS structure is also known as a distributed database model.

NoSQL databases or SQL databases? That is a common argument among DBMS managers and database programmers alike. However, using the term SQL here isn’t entirely accurate, because SQL is a means to interact with a database rather than a form or type of database. More specifically, the comparison is between relational database management systems (RDBMSs) and nonrelational databases. Databases that are labeled as NoSQL may actually support SQL commands, and thus instead should be labeled as NotRDBMS or NotRelational. The nickname NoSQL is more of a slight against Microsoft SQL Server than an indication that a DBMS used for big data does not support SQL queries.

In recent years, services, applications, and websites that have employed SQL databases (again, for clarity, a DBMS that supports SQL expressions) have been found vulnerable to a range of attacks, most notably SQL injection. However, this attack has less to do with the DBMS and SQL expressions than it does with the tendency for sites to be configured with minimal security and to use nondefensive scripts. Scripts that receive input from users but aren’t written to specifically defend against SQL injection are by default vulnerable. This vulnerability, tied in with loose security controls on the DBMS, has enabled the proliferation of SQL injection attacks across the Internet.
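
A minimal sqlite3 sketch of the difference between a nondefensive script and a defensive one (the table and payload are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "' OR '1'='1"  # a classic injection payload

# Nondefensive: user input concatenated directly into the query text.
vulnerable_query = f"SELECT * FROM users WHERE name = '{user_input}'"
all_rows = conn.execute(vulnerable_query).fetchall()
print(all_rows)  # [('alice', 0)] -- the injected OR clause matched every row

# Defensive: a parameterized query; the driver treats the input purely as data.
safe_rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe_rows)  # [] -- no user is literally named "' OR '1'='1"
```

The concatenated version executes `WHERE name = '' OR '1'='1'`, a condition that is always true; the parameterized version never interprets the payload as SQL.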

Although this is a serious issue, it isn’t the reason to switch to a NoSQL solution. There are many RDBMS options that can allow SQL to be disabled or don’t support SQL as an expression language. NoSQL DBMS options can often support SQL as an expression language. Thus, switching to NoSQL doesn’t resolve the SQL injection attack vulnerability on its own. The reason to switch to NoSQL solutions is to obtain a data structure and have access to data-management features that are better suited for a particular data set or programming need.

NoSQL is also known for not supporting ACID, which is a standard benefit or feature of most RDBMSs. ACID stands for the following:

  • Atomicity—Each transaction occurs in an all-or-nothing state.
  • Consistency—Each transaction maintains valid data and a valid state of the database.
  • Isolation—Each transaction occurs individually without interference.
  • Durability—Each applied transaction is resilient.
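
Atomicity, the first of these properties, can be sketched with Python’s sqlite3 module (the account data is invented): when any part of a transaction fails, the whole transaction rolls back.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (owner TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

# Transfer $70 from alice to a nonexistent account; the debit and the
# credit must succeed or fail together (all-or-nothing).
try:
    with conn:  # commits on success, rolls back if an exception escapes
        conn.execute("UPDATE accounts SET balance = balance - 70 "
                     "WHERE owner = 'alice'")
        cur = conn.execute("UPDATE accounts SET balance = balance + 70 "
                           "WHERE owner = 'nobody'")
        if cur.rowcount == 0:
            raise ValueError("credit target does not exist")
except ValueError:
    pass

# The debit was rolled back along with the failed credit.
balance = conn.execute(
    "SELECT balance FROM accounts WHERE owner = 'alice'").fetchone()[0]
print(balance)  # 100
```

Without the transaction wrapper, the debit would have been applied even though the matching credit failed, leaving the database in an invalid state.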

A discussion of NoSQL often brings up the topic of JSON. JavaScript Object Notation (JSON) is a common organizational and referencing format used by some NoSQL database options. The use of JSON as the basis for a NoSQL solution is a popular option for Internet services. However, it’s only one of the many NoSQL options available.
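
A small sketch of what such a JSON document looks like in practice (the record is invented): nested objects and arrays replace the flat rows of a relational table, and the document round-trips through serialization intact.

```python
import json

# A hypothetical document-store record: nesting instead of joined tables.
user_doc = {
    "username": "jdoe",
    "roles": ["author", "reviewer"],
    "contact": {"email": "jdoe@example.com", "phone": None},
}

serialized = json.dumps(user_doc)
restored = json.loads(serialized)
print(restored["contact"]["email"])  # jdoe@example.com
```

In an RDBMS, the same record would typically be split across a users table, a roles join table, and a contacts table; the document model keeps the related data together.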

Exam Essentials

Understand authorization. Authorization is the mechanism that controls what a subject can and can’t do, access, use, or view. Authorization is commonly called access control or access restriction.

Know about access control. Access control or privilege management can be addressed using one of three primary schemes: user, group, or role. These schemes correspond directly to the access-control methodologies DAC, MAC, and RBAC.

Understand MAC. Mandatory access control (MAC) is based on classification rules. Objects are assigned sensitivity labels. Subjects are assigned clearance labels. Users obtain access by having the proper clearance for the specific resource. Classifications are hierarchical.

Know common MAC hierarchies. Government or military MAC uses the following levels: unclassified, sensitive but unclassified, confidential, secret, and top secret. Private sector or corporate business environment MAC uses these: public, sensitive, private, and confidential.

Understand DAC. Discretionary access control (DAC) is based on user identity. Users are granted access through ACLs on objects, at the discretion of the object’s owner or creator.

Comprehend ACLs. An ACL is a security logical device attached to every object and resource in the environment. It defines which users are granted or denied the various types of access available based on the object type.

Understand ABAC. Attribute-based access control (ABAC) is a mechanism of assigning access and privileges to resources through a scheme of attributes or characteristics.

Know about role-based access control (RBAC). Role-based access control (RBAC) is based on job description. Users are granted access based on their assigned work tasks. RBAC is most suitable for environments with a high rate of employee turnover.

Know about rule-based access control (RBAC). Rule-based access control (RBAC) is typically used in relation to network devices that filter traffic based on filtering rules, such as those found on firewalls and routers. Rule-based systems enforce their rules globally, without regard to the identity of the user or the resource involved.

Understand proximity systems. A proximity device or proximity card can be a passive device, a field-powered device, or a transponder.

Comprehend biometric device selection. It is important to evaluate the available options in order to select a biometric solution that is most in line with the organization’s security priorities; this can be accomplished by consulting a Zephyr analysis chart.

Understand FRR and FAR. False rejection rate (FRR, or Type I) errors are the number of failed authentications for valid subjects based on device sensitivity. False acceptance rate (FAR, or Type II) errors are the number of accepted invalid subjects based on device sensitivity.

Define CER. The crossover error rate (CER) is the point where the FRR and FAR lines cross on a graph. The device with the comparatively lowest CER is the more accurate biometric device for the relevant body part.

Understand tokens. A token is a form of authentication factor that is something you have. It’s usually a hardware device, but it can be implemented in software as a logical token.

Comprehend TOTP. Time-based one-time password (TOTP) tokens or synchronous dynamic password tokens are devices or applications that generate passwords at fixed time intervals.

Comprehend HOTP. HMAC-based one-time password (HOTP) tokens or asynchronous dynamic password tokens are devices or applications that generate passwords based not on fixed time intervals but on a nonrepeating one-way function, such as a hash or HMAC operation.
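
Both token types can be sketched directly from their defining standards. The following follows RFC 4226 (HOTP) and RFC 6238, which builds TOTP as HOTP applied to a time-derived counter:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password per RFC 4226."""
    msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238: HOTP over a time counter."""
    return hotp(secret, int(time.time()) // step, digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 -> "755224"
print(hotp(b"12345678901234567890", 0))  # 755224
```

The only difference between the two schemes is where the counter comes from: HOTP increments it per use, while TOTP derives it from the clock, which is why TOTP codes expire at fixed time intervals.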

Understand Personal Identity Verification cards. Personal Identity Verification (PIV) cards, such as badges, identification cards, and security IDs, are forms of physical identification and/or electronic access-control devices.

Know about smartcards. Smartcards are credit card–sized IDs, badges, or security passes with embedded integrated circuit chips. They can contain information about the authorized bearer that can be used for identification and/or authentication purposes.

Understand 802.1x. 802.1x is a port-based authentication mechanism. It’s based on EAP and is commonly used in closed-environment wireless networks. However, 802.1x isn’t exclusively used on WAPs; it can also be used on firewalls, proxies, VPN gateways, and other locations where an authentication handoff service is desired. Think of 802.1x as an authentication proxy.

4.4 Given a scenario, differentiate common account management practices.

Account management is an element of authentication and authorization management. Secure account management includes an understanding of the various account types allowed in the IT environment, comprehension of a wide range of concepts, and understanding of account policy restrictions enforcement. These issues are discussed in this section.

Account types

User account types are the starting point for the type, level, and restriction settings related to a subject’s access to resources. Organizations should consider which types of accounts to use in their network and which types should be prohibited for use.

User account

A user account is also known as a standard account, limited account, regular account, or even a normal account. A user account is the most common type of account in a typical network, since everyone is assigned a user account if they have computer and network privileges. A user account is limited because this type of account is to be used for regular, normal daily operation tasks. A user account is prohibited, in most environments, from installing software or making significant system/OS changes (such as installing device drivers or updates).

Even system administrators should be assigned a standard user account to use for most of their work activities. The powerful administrator account should be reserved for use only when absolutely necessary.

Shared and generic accounts/credentials

Under no circumstances should a standard work environment implement shared accounts. It isn’t possible to distinguish between the actions of one person and another if several people use a shared account. If shared accounts exist at all, they should be limited to public systems (such as kiosks) or anonymous connections (which should themselves be avoided).

Generic account prohibition is the rule that no generic, shared, or anonymous accounts should be allowed in private networks or on any system where security is important. Only when each subject has a unique account is it possible to track the activities of individuals and hold them accountable for their actions and any violations of company policy or the law.

Generic credentials can refer either to the shared knowledge of credentials for a shared account or to the default credentials of a built-in account. Neither form of generic credentials is secure, and both should be avoided. All native and/or default accounts should be assigned a complex password.

Guest accounts

Guest accounts can be of two forms. One option is to use a shared group guest account that all visitors use. A second option is to create a unique account for each guest, with limited privileges. The former concept of a guest account is to be avoided because it does not support holding individuals accountable for their actions. The latter concept for a guest account is more desirable since it does allow for holding individuals accountable. A per-user unique guest account can also be used to customize and target access and permission for the needs and job requirements of the temporary visitor or guest.

Guest accounts are not for every person who visits a facility; instead, they should be issued only to those who have a valid and legitimate work need to be on the company network. This might include consultants, temporary workers, visiting workers from other locations, interns, investigators, and auditors.

Service accounts

A service account is a user account that is used to control the access and capabilities of an application. Through the use of a service account, an application can be granted specific authorization related to its function and data access needs. This is a more secure solution than configuring applications to operate as an administrator, root, or the system. Most applications and services do not need full and complete systemwide power; a service account allows for fine-tuned customization of permissions, privileges, and user rights for the exact needs of the software.

Privileged accounts

Administrative personnel need two user accounts: a standard account and an administrative or privileged account. Their standard account should have the normal privileges that every other typical worker has. This account should be used for the mundane tasks that most workdays consist of. Their administrative account should be configured to have only the special privileges needed to accomplish the assigned administrative functions. This account should not be able to perform the mundane tasks of everyday work. This restriction forces the user to employ the correct account for the task at hand. It also limits the amount of time the administrative account is in use and prevents it from being used when administrative access is a risk rather than a benefit, such as when an administrator account is used to access the Internet, open email, or perform general file transfers or executions.

For users with multiple roles within the organization, especially multiple administrative roles, each role should have its own administrative user account. This could mean a worker has a single standard user account and two or more administrative accounts. This places an extra burden on the worker to keep authentication distinct, but it prevents a single account from being too powerful. The use of multifactor authentication should be required on all privileged accounts in order to improve security and prevent a single basic password from being defined for the account.

General Concepts

This section includes descriptions of numerous account management concepts that are essential to the secure management of an IT environment.

Least privilege

The principle of least privilege is the security stance that users are granted only the minimum access, permissions, and privileges that are required for them to accomplish their work tasks. This ensures that users are unable to perform any task beyond the scope of their assigned responsibilities.

The assignment of privileges, permissions, rights, access, and so on should be periodically reviewed to check for privilege creep or misalignment with job responsibilities. Privilege creep occurs when workers accumulate privileges over time as their job responsibilities change. The end result is that a worker has more privileges than the principle of least privilege would dictate based on that individual’s current job responsibilities.

Least privilege is a staple of the information security realm. Simply put, where users are concerned, the principle of least privilege states that a user should be granted only the minimal privileges necessary to perform their work or to accomplish a specific task. This principle should be applied to all facets of a LAN, MAN, WAN, or any secure environment. For instance, a typical end user should not normally be granted administrative privileges. A trouble-call technician might require local administrative privileges but doesn’t normally require domain administrative privileges. Basically, as a security administrator, you should limit the damage that can be done by user error, a disgruntled employee, or a hijacked account. Least privilege is one of the easiest ways to protect against these and myriad other potential security risks.

Onboarding/offboarding

Onboarding is the process of adding new employees to the identity and access management (IAM) system of an organization. The onboarding process is also used when an employee’s role or position changes or when that person is awarded additional levels of privilege or access.

Offboarding is the reverse of this process. It is the removal of an employee’s identity from the IAM system once that person has left the organization.

The procedures for onboarding and offboarding should be clearly documented in order to ensure consistency of application as well as compliance with regulations or contractual obligations.

Onboarding can also refer to organizational socialization. This is the process by which new employees are trained in order to be properly prepared for performing their job responsibilities. It can include training, job skill acquisition, and behavioral adaptation in an effort to integrate employees efficiently into existing organizational processes and procedures. Well-designed onboarding can result in higher levels of job satisfaction, higher levels of productivity, faster integration with existing workers, a rise in organizational loyalty, stress reduction, and a decreased occurrence of resignation.

Permission auditing and review

Permissions are the access activities granted or denied users, often through the use of per-object access control lists (ACLs). An ACL is a collection of individual access control entries (ACEs). Each object in a discretionary access control (DAC) environment has an ACL. Each ACE focuses on either one user account or a group and then grants or denies an object-specific permission, such as read, write, or execute.
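
A simplified sketch of ACL evaluation under a common DAC convention (all names are invented, and real systems such as NTFS add inheritance and ACE ordering): explicit denies win over allows, and access is denied by default when no ACE matches.

```python
from dataclasses import dataclass

@dataclass
class ACE:
    """One access control entry: grant or deny a permission to a trustee."""
    trustee: str      # a user account or group name
    permission: str   # e.g. "read", "write", "execute"
    allow: bool

def is_permitted(acl: list[ACE], user: str, groups: set[str],
                 permission: str) -> bool:
    """Evaluate an object's ACL: an explicit deny overrides any allow,
    and no matching ACE means access is denied (implicit deny)."""
    principals = {user} | groups
    matches = [ace for ace in acl
               if ace.trustee in principals and ace.permission == permission]
    if any(not ace.allow for ace in matches):
        return False                      # explicit deny wins
    return any(ace.allow for ace in matches)

acl = [
    ACE("staff", "read", True),           # group allow
    ACE("bob", "write", False),           # per-user explicit deny
]
print(is_permitted(acl, "bob", {"staff"}, "read"))   # True  (via group)
print(is_permitted(acl, "bob", {"staff"}, "write"))  # False (explicit deny)
```

Auditing, in these terms, means walking every object’s ACL and comparing the resulting effective permissions against each user’s current job responsibilities.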

Permissions should be assigned on a job responsibility basis. Users should only have sufficient permissions to accomplish their work tasks. This is one aspect of the principle of least privilege.

User access, user rights, and permission auditing and review are often based on a comparative assessment of assigned resource privileges. A privilege or permission is an ability or activity that a user account is granted permission to perform. User accounts are often assigned privileges to access resources based on their work tasks and their normal activities. The principle of least privilege is a security rule of thumb that states that users should be granted only the level of access needed for them to accomplish their assigned work tasks, and no more. Furthermore, those privileges should be assigned for the shortest time period possible.

A user right is an ability to alter the operating environment as a whole. User rights include changing the system time, being able to shut down and reboot a system, and installing device drivers. Standard user accounts are granted few user rights, whereas administrators often require user rights in order to accomplish their privilege system management tasks.

Exploitation of privileges is known as privilege abuse or privilege escalation. Privilege escalation occurs when a user account is able to obtain unauthorized access to higher levels of privileges, such as a normal user account that can perform administrative functions. Privilege escalation can occur through the use of a hacker tool or when an environment is incorrectly configured. It can also occur when lazy administrators fail to remove older privileges as a user is granted new privileges based on new job descriptions. An accumulation of privileges can be considered a form of privilege escalation.

Auditing and review of access and privilege should be used to monitor and track not just the assignment of privilege and the unauthorized escalation of privilege, but also privilege usage. Knowing what users are doing and how often they do it may assist administrators in assigning and managing privileges.
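
A permission review of this kind can be approximated by comparing each account's assigned privileges against a baseline for its current job role. The role names and privilege strings below are invented for illustration; any surplus flags privilege accumulation of the sort described above.

```python
# Hypothetical role baselines: the privileges each job role should hold.
role_baseline = {
    "accounts_payable": {"read_invoices", "pay_invoices"},
    "helpdesk":         {"reset_passwords", "read_tickets"},
}

# Hypothetical account records: current role plus currently assigned privileges.
users = {
    "dana": {"role": "helpdesk",
             "privileges": {"reset_passwords", "read_tickets", "pay_invoices"}},
}

def excess_privileges(user):
    """Privileges held beyond the role's baseline: candidates for
    removal under the principle of least privilege."""
    allowed = role_baseline[users[user]["role"]]
    return users[user]["privileges"] - allowed

print(excess_privileges("dana"))  # {'pay_invoices'} left over from a prior role
```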

Usage auditing and review

Part of security is holding users accountable for their actions. This can be accomplished only if every user has a unique user account. Thus, shared or group accounts aren’t sufficient to provide accountability. Each user should be required to provide strong authentication credentials to prevent account takeover. Each account needs to have clearly defined access-control and authorization restrictions. Finally, all activities of users should be recorded in an auditing or logging mechanism. By having these elements in place, you can carry out user auditing and review in order to determine whether users have been performing their work tasks appropriately or whether there have been failed and/or successful attempts at violating company policies or the law.

Time-of-day restrictions

Time-of-day restrictions are an access control mechanism that limits when a user account can log into a system or network, restricting logons to specific hours and days of the week. For example, a day-shift worker may be able to log into the work network only from 7 a.m. to 6 p.m., Monday through Friday. Although this might have some effect on preventing employees from working late or accumulating overtime, the main purpose is to prevent abuse of the account during evenings and weekends, when the account should generally not be in use. This is a tool and technique for limiting access to sensitive environments to normal business hours, when oversight and monitoring can be performed to prevent fraud, abuse, or intrusion.
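
A minimal sketch of such a restriction, using the 7 a.m. to 6 p.m., Monday-through-Friday example from the text. Real enforcement happens in the OS or directory service, not in application code.

```python
from datetime import datetime

def logon_allowed(when: datetime) -> bool:
    """Allow logons only Monday-Friday, 7:00 a.m. through 5:59 p.m."""
    weekday = when.weekday() < 5        # Monday=0 .. Friday=4
    in_hours = 7 <= when.hour < 18      # 7 a.m. up to (not including) 6 p.m.
    return weekday and in_hours

print(logon_allowed(datetime(2017, 3, 6, 9, 30)))   # Monday 9:30 a.m. -> True
print(logon_allowed(datetime(2017, 3, 4, 9, 30)))   # Saturday -> False
```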

Recertification

Recertification can be used to refer to a variety of important security management concepts.

Recertification can mean performing a periodic assessment of workers’ job responsibilities in relation to their user account’s permissions and rights. Recertification is a means to ensure that the principle of least privilege is being adhered to.

Recertification is used in relation to formal certification procedures, such as establishing proof of knowledge and/or skill of a subject, and may relate to the assignment, repeal, or extension of a license or an approval to operate.

The term recertification can also refer to the act of assessing an organization’s compliance with regulations, standards, and their own written security policy.

Finally, recertification can reference the concept of evaluating the IT infrastructure’s mechanisms of account management and privilege assignment to ensure that they continue to provide sufficient authentication and authorization security.

Standard naming convention

Some organizations have adopted a standard naming convention to control the names of systems, shares, user account names, and email addresses. Such systems can make creating new names easier and more straightforward, which in turn makes recovery of a forgotten name simple as well. However, this can also make it easy for outsiders to predict names if they discover the naming convention in use. Still, since names of objects and users are not as sensitive as passwords, PII, and company intellectual property, adopting a standard naming convention can be seen as a streamlining effort to prevent or curtail questionable or offensive names that some users might select on their own.

Account maintenance

Account maintenance is the regular or periodic activity of reviewing and assessing the user accounts of an IT environment. Any accounts that are no longer needed should be disabled, such as those used by previous employees or related to services that have been uninstalled. Once an account has been disabled for a reasonable length of time for any security auditing concerns (for some companies this might be 2 weeks, whereas others may need 6 months), the account should be deleted. Keep in mind that once an account is deleted, all audit records related to that account now have no user object to point to and thus might be grouped in a catch-all category in any system or security audit.

Account maintenance can also include ongoing password auditing or cracking to discover poor passwords before attackers do, in order to have users change them to something more robust.

Account management should also review group memberships, user rights, time restrictions, and resource access in relation to each worker’s individual job description and work task responsibilities.

Group-based access control

Group-based privileges assign a privilege or access to a resource to all members of a group as a collective. Group-based access control grants every member of the group the same level of access to a specific object. Group-based privileges are common in many OSs, including Linux and Windows. Linux (as well as Unix) uses group-based privileges on each object. In fact, each object has three types of permissions: those for the owner, those for the group of the owner, and those for other users (known as World or Everyone). The second permission set, which defines permissions for all members of the group, is associated with the object because the owner is a member of that group.
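
The three Unix permission triplets described above can be decoded from a numeric mode using the standard library's stat constants. For example, mode 0o754 grants rwx to the owner, r-x to the group, and r-- to everyone else.

```python
import stat

def describe_mode(mode: int) -> dict:
    """Break a numeric Unix mode into the owner/group/other triplets."""
    return {
        "owner": {"read": bool(mode & stat.S_IRUSR),
                  "write": bool(mode & stat.S_IWUSR),
                  "execute": bool(mode & stat.S_IXUSR)},
        "group": {"read": bool(mode & stat.S_IRGRP),
                  "write": bool(mode & stat.S_IWGRP),
                  "execute": bool(mode & stat.S_IXGRP)},
        "other": {"read": bool(mode & stat.S_IROTH),
                  "write": bool(mode & stat.S_IWOTH),
                  "execute": bool(mode & stat.S_IXOTH)},
    }

perms = describe_mode(0o754)   # rwxr-xr--
print(perms["group"])          # group members may read and execute, not write
```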

Windows uses group management differently. Each object has an ACL. The ACL can contain one or more access control entries (ACEs). Each ACE focuses on either a single user or a group. If an ACE focuses on a group, then all members of the group are granted (or denied) the related permissions on the object.

When using group-assigned privileges, it’s important to consider whether doing so violates the principle of least privilege as well as whether you actually want to grant all members of a specific group the same access to a specific object. If not, you need to alter the permissions assignment.

Location-based policies

Location-based policies for controlling authorization grant or deny resource access based on where the subject is located. This might be based on whether the network connection is local wired, local wireless, or remote. Location-based policies can also grant or deny access based on MAC address, IP address, OS version, patch level, and/or subnet in addition to logical or geographical location. Location-based policies should be used only in addition to standard authentication processes, not as a replacement for them.
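
A simple subnet-based check illustrates the idea. The approved networks below are placeholders, and as the text notes, such a check supplements authentication rather than replacing it.

```python
import ipaddress

# Hypothetical approved networks for this sketch.
APPROVED_SUBNETS = [
    ipaddress.ip_network("10.0.5.0/24"),      # local wired LAN
    ipaddress.ip_network("192.168.50.0/24"),  # corporate wireless
]

def location_allowed(client_ip: str) -> bool:
    """True if the client address falls inside an approved subnet."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in APPROVED_SUBNETS)

print(location_allowed("10.0.5.42"))    # True: on the wired LAN
print(location_allowed("203.0.113.9"))  # False: remote, needs extra controls
```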

Account policy enforcement

The combination of a username and a password is the most common form of authentication (see Figure 4.9). If the provided password matches the password stored in a system’s accounts database for the specified user, then that user is authenticated to the system. However, just because using a username and password is the most common form of authentication, that doesn’t mean it’s the most secure. On the contrary, it’s generally considered to be the least secure form of authentication.

FIGURE 4.9 A basic logon process employing a username and password

Numerous means to improve the basic username/password combination have been developed. First is the storage of passwords in an accounts database in an encrypted form. Typically that form is the hash value from a one-way hash function. Second is the use of an authentication protocol (or mechanism) that prevents the transmission of passwords in an easily readable form over a network or especially the Internet. Third, strong (complex) passwords are often enforced at a programmatic level. This is done to ensure that only passwords that are difficult for a password-cracking tool to discover are allowed by the system.
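
The first improvement, storing only a salted one-way hash of each password, can be sketched with the standard library's PBKDF2 function. The iteration count here is illustrative rather than a recommendation.

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor

def hash_password(password, salt=None):
    """Return (salt, digest); the password itself is never stored."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("Tr0ub4dor&3", salt, stored))                   # False
```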

Whatever means of authentication is adopted by an organization, it is important to consider best secure business practices and to establish a standard operating procedure to follow. Once an account management policy is established, it should be enforced. Only with consistent application of security can consistent and reliable results be expected.

Strong passwords have the following characteristics:

  • Consist of numerous characters (as of 2017, 12 or more, with at least 16 preferred)
  • Include at least three types of characters (uppercase and lowercase letters, numerals, and keyboard symbols)
  • Are changed on a regular basis (every 90 days)
  • Don’t include any dictionary or common words or acronyms
  • Don’t include any part of the subject’s real name, username, or email address

These features can be implemented as a requirement through account policy enforcement. This is the collection of password requirement features in the OS, often called a password policy.
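
A password policy of this kind might be enforced with a check like the following. The thresholds mirror the characteristics listed above: 12 or more characters, at least three of the four character types, and no username or real name embedded in the password.

```python
import re

# The four standard character classes as regex patterns.
CLASSES = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]

def password_acceptable(password, username="", real_name=""):
    """True only if the password meets the policy sketched above."""
    if len(password) < 12:
        return False
    if sum(bool(re.search(c, password)) for c in CLASSES) < 3:
        return False
    lowered = password.lower()
    for banned in (username, real_name):
        if banned and banned.lower() in lowered:
            return False
    return True

print(password_acceptable("Purple!Monkey7Dish", username="jsmith"))  # True
print(password_acceptable("jsmithPass1!", username="jsmith"))        # False
```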

Passwords should be strong enough to resist discovery through attack but easy enough for the person to remember. This can sometimes be a difficult line to walk. Training users on picking strong passphrases and memorizing them is an important element of modifying risky behavior.

Continuous monitoring stems from the need to have user accountability through the use of user access reviews. It’s becoming a standard element in government regulations and security contracts that the monitoring of an environment be continuous in order to provide a more comprehensive overview of the security stance and user compliance with security policies. Effectively, continuous monitoring requires that all users be monitored equally, that users be monitored from the moment they enter the physical or logical premises of an organization until they depart or disconnect, and that all activities of all types on any and all services and resources be tracked. This comprehensive approach to auditing, logging, and monitoring increases the likelihood of capturing evidence related to abuse or violations.

Credential management

Credential management is a service or software product designed to store, manage, and even track user credentials. Many credential management options are available for enterprise networks, where hundreds or thousands of users must be managed. However, most credential management solutions are designed for end-user deployment. Credential management products allow users to store all their online (and even local) credentials in a local or cloud-based secured digital container. Examples of products of this type include LastPass, 1Password, KeePass, and Dashlane. By using a credential manager, users can define longer and more random credentials for their various accounts without the burden of having to remember them or the problem of writing them down.

The storage of credentials in a central location is referred to as credential management. Given the wide range of Internet sites and services, each with its own particular logon requirements, it can be a burden to use unique names and passwords. Credential management solutions offer a means to securely store a plethora of credential sets. Often these tools employ a master credential set (multifactor being preferred) to unlock the data set when needed. Some credential management options can even provide auto-login options for apps and websites.

Group policy

Group Policy is the mechanism by which Windows systems can be managed in a Windows network domain environment. A Group Policy Object (GPO) is a collection of Registry settings that can be applied to a system at the time of bootup or at the moment of user login. Group Policy enables a Windows administrator to maintain consistent configurations and security settings across all members of a large network. In the vast array of setting options available in a GPO, there are numerous settings related to credentials, such as password complexity requirements, password history, password length, and account lockout settings.

Password complexity

A password policy is both a set of rules written out as part of the organizational security policy that dictates the requirements of user and device passwords, and a technical enforcement tool (typically a native part of an OS) that enforces the password rules. The password policy typically spells out the requirements for minimum password length, maximum password age, minimum password age, password history retention, and some sort of password complexity requirement. This latter setting, password complexity, often enforces a minimum of three out of four standard character types (uppercase and lowercase letters, numbers, and symbols) to be represented in the password and does not allow the username, real name, and email address to appear in the password.

Generally, passwords over 12 characters are considered fairly secure, and those over 15 characters are considered very secure. Usually, the more characters in a password, along with some character type complexity, the more resistant it is to password-cracking techniques, specifically brute-force attacks. Requiring regular password changes, such as every 90 days, and forbidding the reuse of previous passwords (password history) improves the security of a system that uses passwords as the primary means of authentication.
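
The arithmetic behind this length guidance is straightforward: the brute-force search space grows as the character-set size raised to the password length, so every added character multiplies the attacker's work by the size of the character set.

```python
def keyspace(charset_size, length):
    """Number of candidate passwords a brute-force attack must consider."""
    return charset_size ** length

# Roughly 95 printable ASCII characters: 26 lower + 26 upper + 10 digits
# + ~33 symbols.
print(f"{keyspace(95, 8):.2e}")   # about 6.6e15 candidates
print(f"{keyspace(95, 12):.2e}")  # about 5.4e23: ~81 million times more work
```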

Passwords are notoriously weak forms of authentication. Any environment that still relies on passwords alone is at greater risk for account compromise than organizations that have adopted stronger forms of authentication. Multifactor authentication should be seriously considered by every organization as a means to improve authentication security.

Good passwords can be crafted. However, most users revert to default or easier behaviors if left to their own devices. It is not uncommon for users—even when they are trained to pick passwords that are strong, long, and easier to remember—to write them down, be fooled by a social engineer, or reuse the password in other environments.

Bad password behaviors also include the following:

  • Reusing old or previous passwords
  • Sharing passwords with co-workers, friends, or family
  • Using a nonencrypted password storage tool
  • Allowing passwords to be used over nonencrypted protocols
  • Failing to check for hardware keystroke loggers, video cameras, or shoulder-surfing onlookers

Most of these poor password behaviors can be addressed with security policy, technology limitations, and user training.

Good password behaviors include selecting a passphrase of at least 15 characters, ensuring that at least three character types are represented (uppercase, lowercase, numbers, symbols, higher-order ASCII characters, and foreign language characters), memorizing passwords, using an encrypted password-storage tool only with authorized permission, following password-change rules, and not reusing passwords on the same or even on different systems.

Expiration

It has been common practice for years for passwords to automatically expire after a specific length of time in order to force users to change them. The length of time for a password to remain static can vary based on risk and threat levels. However, a common traditional rule of thumb for password expiration is for passwords to be changed every 90 days. This may still be considered the “right” answer for Security+, but the idea that a password needs to be changed due to its age is now considered to be invalid (see the accompanying sidebar). A password needs to be changed only if it

  • Isn’t in compliance with company password policy
  • Is obviously insecure
  • Has been reused
  • Is likely compromised due to a system intrusion

Otherwise, a strong (long and complex) password can remain static.


Recovery

Password recovery is usually a poor security solution. When a password is forgotten, it should be changed. The ability to recover and/or reveal a password requires that the password storage mechanism be reversible or that passwords be stored in multiple ways. A more secure option is to require passwords to be changed rather than recovered.

Most systems have moved to password replacement and away from actual password recovery. However, password replacement is not necessarily a secure process. Organizations often have a secure option that requires workers to visit an account manager’s office in person to show a photo ID in order to have an account password changed or reset. However, websites often use one of two poor password reset options. If you have forgotten your password to a website, it often offers an “I forgot my password” link. Then you are prompted to provide the email address related to your account or answer several security or identity proofing questions.

If a password is sent to your email address and you then happen to recognize it as the password you had forgotten, you know that the web service is storing passwords in plain-text form. If they were storing passwords in a secure manner, even just a simple hash, they would not have the ability to instantly send you your actual existing password. What you hope to see in the recovery email is a new, hopefully random, password. But since email is itself a plain-text communication system, you need to use the new password quickly to log into your account and immediately change the password to something new.

If you are asked several security questions, then you need to know the facts that are being requested about you from some database source, your purchase history or credit history, or the answers to the questions you were asked or you selected at the time you set up the account originally. Unfortunately, it is all too common for these questions to be rather mundane and standardized across various websites. For example, many ask about your favorite vacation, favorite food, favorite music, favorite movie, third-grade teacher’s name, first pet name, first vehicle make and model, first job, high school mascot, or best man/maid of honor at your wedding. These common questions are often information about you that many others know, especially friends, family, and some enemies.

When setting the answers to security questions, consider recommending to users that they implement an obfuscation technique so the typed-in answers are not as obvious. Some options include spelling the correct answer backward; answering the opposite of the question; or adding a padding statement, such as ABC123, to each answer. Although no obfuscation technique is foolproof and any will be disclosed if the answer database is breached, it will at least provide some additional protection against those who would attempt to impersonate you via password recovery vulnerabilities.

Disablement

Disablement, or account expiration, is a little-used feature of some OS user accounts that automatically disables a user account or causes the account to expire at a specific time and on a specific day. Account expiration is a secure feature to employ on user accounts for temporary workers, interns, or consultants. Workers who need valid user accounts but whose employment or access will expire at a specific known date and time can be set up with accounts that are preconfigured to become disabled. In most cases, such accounts can be re-enabled after they expire, and new or updated expiration dates can be established at any time.

Lockout

Account lockout automatically disables an account when someone attempts to log on but fails repeatedly because they type in an incorrect password. Account lockout is often configured to lock out an account after three to five failed logon attempts within a short time (such as 15 minutes). Accounts that are locked out may remain permanently disabled until an administrator intervenes or may return to functional status after a specified period of time.
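
A sketch of the lockout logic, using five failures within a 15-minute window from the ranges given above. State is kept in memory here purely for illustration; real systems persist it in the accounts database.

```python
from datetime import datetime, timedelta

THRESHOLD = 5                     # failed attempts before lockout
WINDOW = timedelta(minutes=15)    # sliding window for counting failures
failures = {}                     # username -> timestamps of recent failures

def record_failure(user, now=None):
    """Record a failed logon; return True if the account should now lock."""
    now = now or datetime.now()
    recent = [t for t in failures.get(user, []) if now - t <= WINDOW]
    recent.append(now)
    failures[user] = recent
    return len(recent) >= THRESHOLD

start = datetime(2017, 1, 1, 9, 0)
locked = False
for i in range(5):  # five failures, one per minute
    locked = record_failure("alice", start + timedelta(minutes=i))
print(locked)  # True: fifth failure inside the window triggers lockout
```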

Password history

Password history is an authentication protection feature that tracks previous passwords (by archiving hashes) in order to prevent password reuse. For password history to be effective, it must typically be combined with a minimum password age requirement. For example, if five password histories are being retained, a worker could change their password six times to return to their preferred password. However, if there was also the requirement to keep a password a minimum of three days, it would take the person eighteen days to be able to get back to their preferred password. This lengthy delay is often a sufficient deterrent against password reuse.
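
Password history can be sketched as a short list of stored hashes against which each candidate is checked. Plain SHA-256 keeps the example brief; as discussed earlier, real systems store salted, slow hashes.

```python
import hashlib

HISTORY_DEPTH = 5  # number of previous password hashes retained

def digest(pw):
    return hashlib.sha256(pw.encode()).hexdigest()

def change_password(history, new_password):
    """Return the updated history, or raise if the password was reused."""
    if digest(new_password) in history:
        raise ValueError("password reuse not permitted")
    return (history + [digest(new_password)])[-HISTORY_DEPTH:]

history = []
for pw in ["Spring2017!", "Summer2017!", "Autumn2017!"]:
    history = change_password(history, pw)

try:
    change_password(history, "Summer2017!")   # still in history -> rejected
except ValueError as err:
    print(err)
```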

Password reuse

Password reuse occurs when a user attempts to use a password they had used previously on the same system. The management of password history prevents password reuse.

Password length

Password length, in combination with complexity, is an important factor in determining a password’s strength. Generally, longer passwords are better. Passwords of seven characters or fewer are likely to be cracked within hours. Passwords of eight or nine characters are likely to be cracked within days to weeks. Passwords of ten or more characters are unlikely to be cracked.

These relative strengths are based on the range of character types, the use of a strong hashing mechanism for storage, and never transmitting the password in plain text. The mathematical predictions of strength aren’t a guarantee. Additionally, lazy actions on the part of the user or poor security management in the environment can provide other means to learn or bypass strong passwords.

Exam Essentials

Know about shared accounts. Under no circumstances should a standard work environment implement shared accounts. It isn’t possible to distinguish between the actions of one person and another if they both use a shared account.

Understand the principle of least privilege. The principle of least privilege is a security rule of thumb that states that users should be granted only the level of access needed for them to accomplish their assigned work tasks, and no more.

Define onboarding/offboarding. Onboarding is the process of adding new employees to the organization’s identity and access management (IAM) system. It can also mean organizational socialization, which is the process by which new employees are trained in order to be properly prepared for performing their job responsibilities. Offboarding is the removal of an employee’s identity from the IAM system once they have left the organization.

Understand privileges. Group-based privileges assign a privilege or access to a resource to all members of a group as a collective. User-assigned privileges are permissions that are granted or denied on a specific individual user basis.

Know about time-of-day restrictions. Time-of-day restrictions are an access control mechanism that limits when a user account can log into a system or network, restricting logons to specific hours.

Understand account maintenance. Account maintenance is the regular or periodic activity of reviewing and assessing the user accounts of an IT environment.

Comprehend group-based access control. Group-based access control grants every member of the group the same level of access to a specific object.

Understand location-based policies. Location-based policies for controlling authorization grant or deny resource access based on where the subject is located.

Know about credential management. Credential management is a service or software product designed to store, manage, and even track user credentials.

Understand Group Policy. Group Policy is the mechanism by which Windows systems can be managed in a Windows network domain environment. A Group Policy Object (GPO) is a collection of Registry settings that can be applied to a system at the time of bootup or at the moment of user login.

Comprehend password management. Password management is the system used to manage passwords across a large network environment. It typically includes a requirement for users to create complex passwords. It also addresses the issues of complexity, expiration, recovery, account disablement, lockout, history, reuse, and length.

Review Questions

You can find the answers in the Appendix.

  1. What method of access control is best suited for environments with a high rate of employee turnover?

    1. MAC
    2. DAC
    3. RBAC
    4. ACL
  2. What mechanism is used to support the exchange of authentication and authorization details between systems, services, and devices?

    1. Biometric
    2. Two-factor authentication
    3. SAML
    4. LDAP
  3. Which is the strongest form of password?

    1. More than eight characters
    2. One-time use
    3. Static
    4. Different types of keyboard characters
  4. Which of the following technologies can be used to add an additional layer of protection between a directory services–based network and remote clients?

    1. SMTP
    2. RADIUS
    3. PGP
    4. VLAN
  5. Which of the following is not a benefit of single sign-on?

    1. The ability to browse multiple systems
    2. Fewer usernames and passwords to memorize
    3. More granular access control
    4. Stronger passwords
  6. Federation is a means to accomplish _______.

    1. Accountability logging
    2. ACL verification
    3. Single sign-on
    4. Trusted OS hardening
  7. You have been tasked with installing new kiosk systems for use in the retail area of your company’s store. The company elected to use standard equipment and an open-source Linux operating system. You are concerned that everyone will know the default password for the root account. What aspect of the kiosk should be adjusted to prevent unauthorized entities from being able to make system changes?

    1. Authorization
    2. Accounting
    3. Authentication
    4. Auditing
  8. Your company has several shifts of workers. Overtime and changing shifts is prohibited due to the nature of the data and the requirements of the contract. To ensure that workers are able to log into the IT system only during their assigned shifts, you should implement what type of control?

    1. Multifactor authentication
    2. Time-of-day restrictions
    3. Location restrictions
    4. Single sign-on
  9. Place the following steps (represented by the letters A through I) in the correct order:

    1. The KDC verifies that the client has a valid TGT and then issues an ST to the client. The ST includes a time stamp that indicates its valid lifetime.
    2. The client receives the TGT. At this point, the subject is an authenticated principle in the Kerberos realm.
    3. The client sends the ST to the network server that hosts the desired resource.
    4. The Kerberos client system encrypts the password and transmits the protected credentials to the KDC.
    5. The subject requests access to resources on a network server. This causes the client to request a service ticket (ST) from the KDC.
    6. The subject provides logon credentials.
    7. The network server verifies the ST. If it’s verified, it initiates a communication session with the client. From this point forward, Kerberos is no longer involved.
    8. The KDC verifies the credentials and then creates a ticket-granting ticket (TGT)—a hashed form of the subject’s password with the addition of a time stamp that indicates a valid lifetime. The TGT is encrypted and sent to the client.
    9. The client receives the ST.
      1. F, D, H, B, E, A, I, C, G
      2. H, I, C, D, G, F, E, A, B
      3. A, B, C, D, E, F, G, H, I
      4. I, A, E, B, C, G, F, H, D
  10. Your company has recently purchased Cisco networking equipment. When you are setting up to allow remote access, what means of AAA service is now available to your organization?

    1. RADIUS
    2. X.500
    3. TACACS+
    4. X.509 v3
  11. Your organization has recently decided to allow some employees to work from home two days a week. While configuring the network to allow for remote access, you realize the risk this poses to the organization’s infrastructure. What mechanism can be implemented to provide an additional barrier against remote access abuse?

    1. Kerberos
    2. Single sign-on
    3. Stronger authorization
    4. RADIUS
  12. You are developing a smart app that will control a new IoT device that automates blinking light fixtures in time with the beat of music. You want to make using the device as simple as possible, so you want to adopt an authentication technique that is seamless for the user. Which technology should you integrate into your app and device?

    1. OpenID Connect
    2. Shibboleth
    3. A secure token
    4. Role-based access control
  13. How are effective permissions calculated?

    1. Count the number of allows, subtract the number of denials
    2. Accumulate allows, remove denials
    3. Look at the user’s clearance level
    4. Count the number of groups the user is a member of
  14. What form of authorization is based on a scheme of characteristics related to the user, the object, the system, the application, the network, the service, time of day, or even other subjective environmental concerns?

    1. RBAC
    2. MAC
    3. DAC
    4. ABAC
  15. Your organization wants to integrate a biometric factor into the existing multifactor authentication system. To ensure alignment with company priorities, what tool should be used in selecting which type or form of biometric to use?

    1. CER comparison
    2. OAuth verifier
    3. Zephyr analysis chart
    4. Federation assessment
  16. What type of biometric error increases as the sensitivity of the device increases?

    1. FAR
    2. FRR
    3. CER
    4. False positive
  17. You are installing a new network service application. The application requires a variety of permissions on several resources and even a few advanced user rights in order to operate properly. Which type of account should be created for this application to operate under?

    1. Service
    2. User
    3. Privileged
    4. Generic
  18. Failing to perform regular permissions auditing can result in a violation of what security concept?

    1. Implicit deny
    2. Security by obscurity
    3. Least privilege
    4. Diversity of defense
  19. What type of access management can involve restrictions based on MAC address, IP address, OS version, patch level, and/or subnet in addition to logical or geographical position?

    1. Geography-based access control
    2. Physical access control
    3. Logical access control
    4. Location-based access control
  20. Which of the following is a recommended basis for reliable password complexity?

    1. Minimum of eight characters; include representations of at least three of the four character types
    2. Allow for a maximum of three failed logon attempts before locking the account for 15 minutes
    3. Require that a password remain static for at least three days and prevent the reuse of the five most recently used passwords
    4. Require that each administrator have a normal user account in addition to a privileged account