Images

Authentication and Remote Access

We should set a national goal of making computers and Internet access available for every American.

—WILLIAM JEFFERSON CLINTON

Images

In this chapter, you will learn how to

Images   Identify the differences among user, group, and role management

Images   Implement password and domain password policies

Images   Describe methods of account management (SSO, time of day, logical token, account expiration)

Images   Describe methods of access management (MAC, DAC, and RBAC)

Images   Discuss the methods and protocols for remote access to networks

Images   Identify authentication, authorization, and accounting (AAA) protocols

Images   Explain authentication methods and the security implications in their use

Images   Implement virtual private networks (VPNs) and their security aspects

On single-user systems such as PCs, the individual user typically has access to most of the system’s resources, processing capability, and stored data. On multiuser systems, such as servers and mainframes, an individual user typically has very limited access to the system and the data stored on that system. An administrator responsible for managing and maintaining the multiuser system has much greater access. So how does the computer system know which users should have access to what data? How does the operating system know what applications a user is allowed to use?

On early computer systems, anyone with physical access had fairly significant rights to the system and could typically access any file or execute any application. As computers became more popular and it became obvious that some way of separating and restricting users was needed, the concepts of users, groups, and privileges came into being (privileges mean you have the ability to “do something” on a computer system, such as create a directory, delete a file, or run a program). These concepts continue to be developed and refined and are now part of what we call privilege management.

Privilege management is the process of restricting a user’s ability to interact with the computer system. Essentially, everything a user can do to or with a computer system falls into the realm of privilege management. Privilege management occurs at many different points within an operating system or even within applications running on a particular operating system.

Remote access is another key issue for multiuser systems in today’s world of connected computers. Isolated computers, not connected to networks or the Internet, are rare items these days. Except for some special-purpose machines, most computers need interconnectivity to fulfill their purpose. Remote access enables users outside a network to have network access and privileges as if they were inside the network. Being outside a network means that the user is working on a machine that is not physically connected to the network and must therefore establish a connection through a remote means, such as by dialing in, connecting via the Internet, or connecting through a wireless connection.

Authentication is the process of establishing a user’s identity to enable the granting of permissions. To establish network connections, a variety of methods are used, the choice of which depends on network type, the hardware and software employed, and any security requirements.

Images User, Group, and Role Management

To manage the privileges of many different people effectively on the same system, a mechanism for separating people into distinct entities (users) is required, so you can control access on an individual level. At the same time, it’s convenient and efficient to be able to lump users together when granting many different people (groups) access to a resource at the same time. At other times, it’s useful to be able to grant or restrict access based on a person’s job or function within the organization (role). While you can manage privileges on the basis of users alone, managing user, group, and role assignments together is far more convenient and efficient.

User

The term user generally applies to any person accessing a computer system. In privilege management, a user is a single individual, such as “John Forthright” or “Sally Jenkins.” This is generally the lowest level addressed by privilege management and the most common area for addressing access, rights, and capabilities. When accessing a computer system, each user is generally given a username—a unique alphanumeric identifier they will use to identify themselves when logging into or accessing the system. When developing a scheme for selecting usernames, you should keep in mind that usernames must be unique to each user, but they must also be fairly easy for the user to remember and use.

Images

A username is a unique alphanumeric identifier used to identify a user to a computer system. Permissions control what a user is allowed to do with objects on a computer system—what files they can open, what printers they can use, and so on. In Windows security models, permissions define the actions a user can perform on an object (open a file, delete a folder, and so on). Rights define the actions a user can perform on the system itself, such as change the time, adjust auditing levels, and so on. Rights are typically applied to operating system–level tasks.

With some notable exceptions, in general a user who wants to access a computer system must first have a username created for them on the system they want to use. This is usually done by a system administrator, security administrator, or other privileged user, and this is the first step in privilege management—a user should not be allowed to create their own account.

Images

Auditing user accounts, group membership, and password strength on a regular basis is an extremely important security control. Many compliance audits focus on the presence or lack of industry-accepted security controls.

Once the account is created and a username is selected, the administrator can assign specific permissions to that user. Permissions control what the user is allowed to do with objects on the system—which files they may access, which programs they may execute, and so on. Whereas PCs typically have only one or two user accounts, larger systems such as servers and mainframes can have hundreds of accounts on the same system. Figure 11.1 shows the Users management tab of the Computer Management utility on a Windows Server 2008 system. Note that several user accounts have been created on this system, each identified by a unique username.

Images

Figure 11.1   Users tab on a Windows Server 2008 system

Generic Accounts

Generic accounts are accounts without a named user behind them. They can be employed for special purposes, such as running services and batch processes, but because they cannot be attributed to an individual, they should not have login capability. If they do have elevated privileges, it is also important to continually monitor their activities, comparing what they are actually doing against what they are expected to do. General use of generic accounts should be avoided because the lack of attribution increases risk.

A few “special” user accounts don’t typically match up one-to-one with a real person. These accounts are reserved for special functions and typically have much more access and control over the computer system than the average user account. Two such accounts are the administrator account under Windows and the root account under UNIX. Each of these accounts is also known as the superuser—if something can be done on the system, the superuser has the power to do it. These accounts are not typically assigned to a specific individual, and their use is restricted to occasions when the full capabilities of the account are required.

Due to the power possessed by these accounts, and the few, if any, restrictions placed on them, they must be protected with strong passwords that are not easily guessed or obtained. These accounts are also the most common targets of attackers—if the attacker can gain root access or assume the privilege level associated with the root account, they can bypass most access controls and accomplish anything they want on that system.

Another account that falls into the “special” category is the system account used by Windows operating systems. The system account has the same file privileges as the administrator account and is used by the operating system and by services that run under Windows. By default, the system account is granted full control to all files on an NTFS volume. Services and processes that need the capability to log on internally within Windows will use the system account—for example, the DNS Server and DHCP Server services in Windows Server 2008 use the Local System account.

Shared and Generic Accounts/Credentials

Shared accounts go against the basic principle that accounts exist so that user activity can be tracked. That said, there are times when shared or guest accounts are used, especially in situations where access is limited to a defined set of functions and specific tracking is not particularly useful. Sometimes shared accounts are called generic accounts and exist only to provide a specific set of functionality, such as a PC running in kiosk mode with a browser limited to specific sites as an information display. Under these circumstances, being able to trace the activity to a particular user is not particularly useful.

Guest Accounts

Guest accounts are frequently used on corporate networks to provide visitors access to the Internet and to some common corporate resources, such as projectors, printers in conference rooms, and so on. Again, these types of accounts are restricted to a defined set of machines with a defined set of access, much like a user from the Internet visiting the company’s publicly facing web site. As such, logging and tracing activity have little to no value, so the overhead of establishing individual accounts does not make sense.

Service Accounts

Service accounts are accounts used to run processes that do not require human intervention to start, stop, or administer. From batch jobs that run in a data center to simple tasks that run across the enterprise for compliance objectives, the reasons for such processes are many, but there is no real need for a human account holder. On Windows systems, one protective step is to deny these accounts the ability to log into the system interactively, which limits some of the attack vectors that can be applied to them. Another security provision is to apply time restrictions to accounts that run batch jobs at night and then monitor when they run. Any service account that has to run with elevated privileges should receive extra monitoring and scrutiny.

Onboarding/Offboarding

Onboarding and offboarding refer to bringing personnel onto, and removing them from, a project or team. During onboarding, proper account relationships need to be managed: new members can be placed into the correct groups, and when they are offboarded, they can be removed from those groups. This is one way in which groups can be used to manage permissions, and it can be very efficient when users move between units and tasks.

Privileged Accounts

Privileged accounts are any accounts with greater than normal user access. Privileged accounts are typically root or admin-level accounts and represent risk in that they are unlimited in their powers. These accounts require regular real-time monitoring, if at all possible, and should always be monitored when operating remotely. There may be reasons why and occasions when system administrators are acting via a remote session, but when they are, the purposes should be known and approved.

Group

Under privilege management, a group is a collection of users with some common criteria, such as a need for access to a particular dataset or group of applications. A group can consist of one user or hundreds of users, and each user can belong to one or more groups. Figure 11.2 shows a common approach to grouping users—building groups based on job function.

Images

Figure 11.2   Logical representation of groups

By assigning membership in a specific group to a user, you make it much easier to control that user’s access and privileges. For example, if every member of the engineering department needs access to product development documents, administrators can place all the users in the engineering department in a single group and allow that group to access the necessary documents. Once a group is assigned permissions to access a particular resource, adding a new user to that group will automatically allow that user to access that resource. In effect, the user “inherits” the permissions of the group as soon as they are placed in that group. As Figure 11.3 shows, a computer system can have many different groups, each with its own rights and permissions.

Images

Figure 11.3   Groups tab on a Windows Server 2008 system

As you can see from the description for the Administrators group in Figure 11.3, this group has complete and unrestricted access to the system. This includes access to all files, applications, and datasets. Anyone who belongs to the Administrators group or is placed in this group will have a great deal of access and control over the system.

Some operating systems, such as Windows, have built-in groups—groups that are already defined within the operating system, such as Administrators, Power Users, and Everyone. The whole concept of groups revolves around making the tasks of assigning and managing permissions easier, and built-in groups certainly help to make these tasks easier. Individual user accounts can be added to built-in groups, allowing administrators to grant permission sets to users quickly and easily without having to specify permissions manually. For example, adding a user account named “bjones” to the Power Users group gives bjones all the permissions assigned to the built-in Power Users group, such as installing drivers, modifying settings, and installing software.
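
To make the inheritance idea concrete, the following is a minimal Python sketch (not tied to any real operating system) in which a user’s effective permissions are simply the union of the permissions granted to the groups they belong to. The group names, permission names, and user list are hypothetical:

# Minimal sketch of group-based permission inheritance (hypothetical names).
GROUP_PERMISSIONS = {
    "Engineering": {"read_product_docs", "write_product_docs"},
    "Power Users": {"install_software", "modify_settings"},
}

USER_GROUPS = {
    "bjones": ["Power Users"],
    "sjenkins": ["Engineering", "Power Users"],
}

def effective_permissions(username):
    """A user 'inherits' the permissions of every group they belong to."""
    perms = set()
    for group in USER_GROUPS.get(username, []):
        perms |= GROUP_PERMISSIONS.get(group, set())
    return perms

print(effective_permissions("bjones"))   # permissions gained via Power Users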

Role

Another common method of managing access and privileges is by roles. A role is usually synonymous with a job or set of functions. For example, the role of security admin in Microsoft SQL Server may be applied to someone who is responsible for creating and managing logins, reading error logs, and auditing the application. Security admins need to accomplish specific functions and need access to certain resources that other users do not—for example, they need to be able to create and delete logins, open and read error logs, and so on. In general, anyone serving in the role of security admin needs the same rights and privileges as every other security admin. For simplicity and efficiency, rights and privileges can be assigned to the role security admin, and anyone assigned to fulfill that role automatically has the correct rights and privileges to perform the required tasks.

Images Domain Passwords

A domain password policy is a password policy for a specific domain. Because these policies are usually associated with the Windows operating system, a domain password policy is implemented and enforced on the domain controller, which is a computer that responds to security authentication requests, such as logging into a computer, for a Windows domain. The domain password policy usually falls under a group policy object (GPO) and has the following elements (see Figure 11.4):

Images

Figure 11.4   Password policy options in Windows Local Security Policy

Images

Not only is it essential to ensure every account has a strong password, but it is also essential to disable or delete unnecessary accounts. If your system does not need to support guest or anonymous accounts, then disable them. When user or administrator accounts are no longer needed, remove or disable them. As a best practice, all user accounts should be audited periodically to ensure there are no unnecessary, outdated, or unneeded accounts on your systems.

Images   Enforce password history Tells the system how many passwords to remember and does not allow a user to reuse an old password.

Images   Maximum password age Specifies the maximum number of days a password may be used before it must be changed.

Images   Minimum password age Specifies the minimum number of days a password must be used before it can be changed again.

Images   Minimum password length Specifies the minimum number of characters that must be used in a password.

Images   Password must meet complexity requirements Specifies that the password must meet the minimum length requirement and have characters from at least three of the following four groups: English uppercase characters (A through Z), English lowercase characters (a through z), numerals (0 through 9), and non-alphanumeric characters (such as !, $, #, and %).

Images   Store passwords using reversible encryption Reversible encryption is a form of encryption that can easily be decrypted and is essentially the same as storing a plaintext version of the password (because it’s so easy to reverse the encryption and get the password). This should be used only when applications use protocols that require the user’s password for authentication, such as the Challenge-Handshake Authentication Protocol (CHAP).
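
As an illustration only, the following Python sketch checks a candidate password against two of the policy elements listed above: minimum length and the “three of four character groups” complexity rule. The eight-character threshold is an assumed default, not a value mandated by any particular system:

import string

def meets_complexity(password, min_length=8):
    """Sketch of a password check: minimum length plus characters from
    at least three of the four groups (upper, lower, digit, special)."""
    if len(password) < min_length:
        return False
    groups = [
        any(c in string.ascii_uppercase for c in password),
        any(c in string.ascii_lowercase for c in password),
        any(c in string.digits for c in password),
        any(not c.isalnum() for c in password),
    ]
    return sum(groups) >= 3

print(meets_complexity("Summer2024!"))  # True
print(meets_complexity("password"))     # False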

Calculating Unique Password Combinations

One of the primary reasons administrators require users to have longer passwords that use upper- and lowercase letters, numbers, and at least one “special” character is to help deter password-guessing attacks. One popular password-guessing technique, called a brute-force attack, uses software to guess every possible password until one matches a user’s password. Essentially, a brute-force attack tries a, then aa, then aaa, and so on, until it runs out of combinations or gets a password match. Increasing both the pool of possible characters that can be used in the password and the number of characters required in the password can exponentially increase the number of “guesses” a brute-force program needs to perform before it runs out of possibilities. For example, if our password policy requires a three-character password that uses only lowercase letters, there are only 17,576 possible passwords (with 26 possible characters, a password that’s three characters long equates to 26^3 combinations). Requiring a six-character password increases that number to 308,915,776 possible passwords (26^6). An eight-character password with upper- and lowercase letters, a special symbol, and a number increases the possible passwords to 70^8, or over 576 trillion combinations.
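
The arithmetic above is easy to reproduce. This short Python snippet computes the keyspace as the character pool size raised to the power of the password length:

# Keyspace = (number of possible characters) ** (password length)
print(26 ** 3)   # 17,576 three-character, lowercase-only passwords
print(26 ** 6)   # 308,915,776 six-character, lowercase-only passwords
print(70 ** 8)   # ~576 trillion eight-character passwords from a 70-character pool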

Precomputed hashes in rainbow tables can also be used to brute-force past shorter passwords. As the length increases, so does the size of the rainbow table.

Domains are logical groups of computers that share a central directory database, known as the Active Directory database for the more recent Windows operating systems. The database contains information about the user accounts and security information for all resources identified within the domain. Each user within the domain is assigned their own unique account (that is, a domain is not a single account shared by multiple users), which is then assigned access to specific resources within the domain. In operating systems that provide domain capabilities, the password policy is set in the root container for the domain and applies to all users within that domain. Setting a password policy for a domain is similar to setting other password policies in that the same critical elements need to be considered (password length, complexity, life, and so on). If a change to one of these elements is desired for a group of users, a new domain needs to be created because the domain is considered a security boundary. In a Windows operating system that employs Active Directory, the domain password policy can be set in the Active Directory Users and Computers menu in the Administrative Tools section of the Control Panel.

Images Single Sign-On

To use a system, users must be able to access it, which they usually do by supplying their user IDs (or usernames) and corresponding passwords. As any security administrator knows, the more systems a particular user has access to, the more passwords that user must have and remember. The natural tendency for users is to select passwords that are easy to remember, or even the same password for use on the multiple systems they access. Wouldn’t it be easier for the user simply to log in once and have to remember only a single, good password? This is made possible with a technology called single sign-on.

Single sign-on (SSO) is a form of authentication that involves the transferring of credentials between systems. As more and more systems are combined in daily use, users are forced to have multiple sets of credentials. A user may have to log into three, four, five, or even more systems every day just to do their job. Single sign-on allows a user to transfer their credentials so that logging into one system acts to log them into all of the systems. Once the user has entered a user ID and password, the single sign-on system passes these credentials transparently to other systems so that repeated logons are not required. Put simply, you supply the right username and password once and you have access to all the applications and data you need, without having to log in multiple times and remember many different passwords. From a user standpoint, SSO means you need to remember only one username and one password. From an administration standpoint, SSO can be easier to manage and maintain. From a security standpoint, SSO can be even more secure, as users who need to remember only one password are less likely to choose something too simple or something so complex they need to write it down. The following is a logical depiction of the SSO process (see Figure 11.5):

Images

Figure 11.5   Single sign-on process

Images

The CompTIA Security+ exam will very likely contain questions regarding single sign-on because it is such a prevalent topic and a very common approach to multisystem authentication.

1.   The user signs in once, providing a username and password to the SSO server.

2.   The SSO server provides authentication information to any resource the user accesses during that session. The server interfaces with the other applications and systems—the user does not need to log into each system individually.
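
The following Python sketch illustrates the flow in Figure 11.5 at a very high level: the user authenticates once to an SSO service, receives a session token, and each application accepts that token instead of prompting for credentials again. All names and the token scheme are invented for illustration; real SSO systems rely on protocols such as Kerberos or SAML rather than this simplistic approach:

import secrets

# Hypothetical SSO service state.
CREDENTIAL_STORE = {"jforthright": "correct horse battery staple"}
ACTIVE_TOKENS = set()

def sso_login(username, password):
    """Step 1: the user signs in once to the SSO server."""
    if CREDENTIAL_STORE.get(username) == password:
        token = secrets.token_hex(16)
        ACTIVE_TOKENS.add(token)
        return token
    return None

def application_access(app_name, token):
    """Step 2: each application trusts the SSO server's token,
    so the user is not prompted to log in again."""
    if token in ACTIVE_TOKENS:
        return app_name + ": access granted"
    return app_name + ": access denied"

token = sso_login("jforthright", "correct horse battery staple")
print(application_access("mail", token))
print(application_access("timesheets", token))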

Heartbleed

In 2014, a vulnerability that could cause user credentials to be exposed was discovered in millions of systems. Called the Heartbleed incident, this resulted in numerous users being told to change their passwords because of potential compromise. Users were also warned of the dangers of reusing passwords across different accounts. Although reuse makes passwords easier to remember, it means a single compromised password can expose multiple accounts. What made protecting user passwords particularly challenging is that the vulnerability was widespread—it lay in the OpenSSL library used by a vast number of systems—and the patching rate was uneven, so people could suffer multiple exposures over time. After one year, an estimated 40 percent of all compromised systems remained unpatched. This highlights the importance of not reusing passwords across multiple accounts.

In reality, SSO is usually a little more difficult to implement than vendors would lead you to believe. To be effective and useful, all your applications need to be able to access and use the authentication provided by the SSO process. The more diverse your network, the less likely this is to be the case. If your network, like most, contains different operating systems, custom applications, and a diverse user base, SSO may not even be a viable option.

Images Security Controls and Permissions

If multiple users share a computer system, the system administrator likely needs to control who is allowed to do what when it comes to viewing, using, or changing system resources. Although operating systems vary in how they implement these types of controls, most operating systems use the concepts of permissions and rights to control and safeguard access to resources. As we discussed earlier, permissions control what a user is allowed to do with objects on a system, and rights define the actions a user can perform on the system itself. Let’s examine how the Windows operating systems implement this concept.

The Windows operating systems use the concepts of permissions and rights to control access to files, folders, and information resources. When using the NTFS file system, administrators can grant users and groups permission to perform certain tasks as they relate to files, folders, and Registry keys. The basic categories of NTFS permissions are as follows:

Images

Permissions can be applied to a specific user or group to control that user or group’s ability to view, modify, access, use, or delete resources such as folders and files.

Images   Full Control A user/group can change permissions on the folder/file, take ownership if someone else owns the folder/file, delete subfolders and files, and perform actions permitted by all other NTFS folder permissions.

Images   Modify A user/group can view and modify files/folders and their properties, can delete and add files/folders, and can delete properties from or add properties to a file/folder.

Images   Read & Execute A user/group can view the file/folder and can execute scripts and executables, but they cannot make any changes (files/folders are read-only).

Images   List Folder Contents A user/group can list only what is inside the folder (applies to folders only).

Images   Read A user/group can view the contents of the file/folder and the file/folder properties.

Images   Write A user/group can write to the file or folder.

Figure 11.6 shows the permissions on a folder called Data from a Windows Server system. In the top half of the Permissions window are the users and groups that have permissions for this folder. In the bottom half of the window are the permissions assigned to the highlighted user or group.

Images

Figure 11.6   Permissions for the Data folder

The Windows operating system also uses user rights or privileges to determine what actions a user or group is allowed to perform or access. These user rights are typically assigned to groups, as it is easier to deal with a few groups than to assign rights to individual users, and they are usually defined in either a group or a local security policy. The list of user rights is quite extensive, but here are a few examples of user rights:

Images   Log on locally Users/groups can attempt to log onto the local system itself.

Images   Access this computer from the network Users/groups can attempt to access this system through the network connection.

Images   Manage auditing and security log Users/groups can view, modify, and delete auditing and security log information.

Rights tend to be actions that deal with accessing the system itself, process control, logging, and so on. Figure 11.7 shows the user rights contained in the local security policy on a Windows system.

Images

Figure 11.7   User Rights Assignment options from Windows Local Security Policy

Images

Although it is very important to get security settings “right the first time,” it is just as important to perform routine audits of security settings such as user accounts, group memberships, file permissions, and so on.

Folders and files are not the only things that can be safeguarded or controlled using permissions. Even access and use of peripherals such as printers can be controlled using permissions. Figure 11.8 shows the Security tab from a printer attached to a Windows system. Permissions can be assigned to control who can print to the printer, who can manage documents and print jobs sent to the printer, and who can manage the printer itself. With this type of granular control, administrators have a great deal of control over how system resources are used and who uses them.

Images

Figure 11.8   Security tab showing printer permissions in Windows

A very important concept to consider when assigning rights and privileges to users is the concept of least privilege. Least privilege requires that users be given the absolute minimum number of rights and privileges required to perform their authorized duties. For example, if a user does not need the ability to install software on their own desktop to perform their job, then don’t give them that ability. This reduces the likelihood the user will load malware, insecure software, or unauthorized applications onto their system.

Access Control Lists

The term access control list (ACL) is used in more than one manner in the field of computer security. When discussing routers and firewalls, an ACL is a set of rules used to control traffic flow into or out of an interface or network. When discussing system resources, such as files and folders, an ACL lists permissions attached to an object—who is allowed to view, modify, move, or delete that object.

To illustrate this concept, consider an example. Figure 11.9 shows the access control list (permissions) for the Data folder. The user identified as Billy Williams has Read & Execute, List Folder Contents, and Read permissions, meaning this user can open the folder, see what’s in the folder, and so on. Figure 11.10 shows the permissions for a user identified as Leah Jones, who has only Read permissions on the same folder.

Images

Figure 11.9   Permissions for Billy Williams on the Data folder

Images

Figure 11.10   Permissions for Leah Jones on the Data folder

In computer systems and networks, access controls can be implemented in several ways. An access control matrix provides the simplest framework for illustrating the process. An example of an access control matrix is provided in Table 11.1. In this matrix, the system is keeping track of two processes, two files, and one hardware device. Process 1 can read both File 1 and File 2 but can write only to File 1. Process 1 cannot access Process 2, but Process 2 can execute Process 1. Both processes have the ability to write to the printer.

Table 11.1 An Access Control Matrix

Images

Although simple to understand, the access control matrix is seldom used in computer systems because it is extremely costly in terms of storage space and processing. Imagine the size of an access control matrix for a large network with hundreds of users and thousands of files.
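
A matrix like Table 11.1 can be modeled directly as a nested dictionary. The short Python sketch below, with subjects and objects patterned after the table, shows both how simple the lookup is and why the structure grows quickly: every subject/object pair potentially needs an entry:

# A subject/object matrix in the style of Table 11.1, as nested dictionaries.
ACCESS_MATRIX = {
    "process1": {"file1": {"read", "write"}, "file2": {"read"}, "printer": {"write"}},
    "process2": {"process1": {"execute"}, "printer": {"write"}},
}

def is_allowed(subject, obj, action):
    """Look up the cell for (subject, object) and test the requested action."""
    return action in ACCESS_MATRIX.get(subject, {}).get(obj, set())

print(is_allowed("process1", "file2", "write"))      # False
print(is_allowed("process2", "process1", "execute")) # True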

Mandatory Access Control (MAC)

Mandatory access control (MAC) is the process of controlling access to information based on the sensitivity of that information and whether or not the user is operating at the appropriate sensitivity level and has the authority to access that information. Under a MAC system, each piece of information and every system resource (files, devices, networks, and so on) is labeled with its sensitivity level (such as Public, Engineering Private, Jones Secret, and so on). Users are assigned a clearance level that sets the upper boundary of the information and devices that they are allowed to access.

Images

Mandatory access control restricts access based on the sensitivity of the information and whether or not the user has the authority to access that information.

The access control and sensitivity labels are required in a MAC system. Labels are defined and then assigned to users and resources. Users must then operate within their assigned sensitivity and clearance levels—they don’t have the option to modify their own sensitivity levels or the levels of the information resources they create. Due to the complexity involved, MAC is typically run only on systems where security is a top priority, such as Trusted Solaris, OpenBSD, and SELinux.

MAC Objective

Mandatory access controls are often mentioned in discussions of multilevel security. For multilevel security to be implemented, a mechanism must be present to identify the classification of all users and files. A file identified as Top Secret (that is, it has a label indicating that it is “Top Secret”) may be viewed only by individuals with a Top Secret clearance. For this control mechanism to work reliably, all files must be marked with appropriate controls and all user access must be checked. This is the primary goal of MAC.

Figure 11.11 illustrates MAC in operation. The information resource on the left has been labeled “Engineering Secret,” meaning only users in the Engineering group operating at the Secret sensitivity level or above can access that resource. The top user is operating at the Secret level but is not a member of Engineering and is denied access to the resource. The middle user is a member of Engineering but is operating at a Public sensitivity level and is therefore denied access to the resource. The bottom user is a member of Engineering, is operating at a Secret sensitivity level, and is allowed to access the information resource.

Images

Figure 11.11   Logical representation of mandatory access control
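
A minimal sketch of the decision illustrated in Figure 11.11: access is granted only when the user’s group matches the resource’s group and the user’s operating sensitivity level is at or above the resource’s label. The numeric ordering of levels and the group name for the denied user are assumptions made purely for illustration:

LEVELS = {"Public": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def mac_allows(user_group, user_level, resource_group, resource_level):
    """Grant access only if the group matches and the clearance dominates the label."""
    return (user_group == resource_group and
            LEVELS[user_level] >= LEVELS[resource_level])

# The three users from Figure 11.11 against the "Engineering Secret" resource:
print(mac_allows("Accounting",  "Secret", "Engineering", "Secret"))  # False: wrong group
print(mac_allows("Engineering", "Public", "Engineering", "Secret"))  # False: level too low
print(mac_allows("Engineering", "Secret", "Engineering", "Secret"))  # True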

Discretionary Access Control (DAC)

Discretionary access control (DAC) is the process of using file permissions and optional ACLs to restrict access to information based on a user’s identity or group membership. DAC is the most common access control system and is commonly used in both UNIX and Windows operating systems. The “discretionary” part of DAC means that a file or resource owner has the ability to change the permissions on that file or resource.

Multilevel Security

In the U.S. government, the following security labels are used to classify information and information resources for MAC systems:

Images   Top Secret The highest security level and is defined as information that would cause “exceptionally grave damage” to national security if disclosed.

Images   Secret The second highest level and is defined as information that would cause “serious damage” to national security if disclosed.

Images   Confidential The lowest level of classified information and is defined as information that would “damage” national security if disclosed.

Images   For Official Use Only Information that is unclassified but not releasable to public or unauthorized parties. Sometimes called Sensitive But Unclassified (SBU).

Images   Unclassified Not an official classification level.

The labels work in a top-down fashion so that an individual holding a Secret clearance would have access to information at the Secret, Confidential, and Unclassified levels. An individual with a Secret clearance would not have access to Top Secret resources because that label is above the highest level of the individual’s clearance.

Under UNIX operating systems, file permissions consist of three distinct parts:

Images   Owner permissions (read, write, and execute) The owner of the file

Images   Group permissions (read, write, and execute) The group to which the owner of the file belongs

Images   World permissions (read, write, and execute) Anyone else who is not the owner and does not belong to the group to which the owner of the file belongs

For example, suppose a file called secretdata has been created by the owner of the file, Luke, who is part of the Engineering group. The owner permissions on the file would reflect Luke’s access to the file (as the owner). The group permissions would reflect the access granted to anyone who is part of the Engineering group. The world permissions would represent the access granted to anyone who is not Luke and is not part of the Engineering group.

In UNIX, a file’s permissions are usually displayed as a series of nine characters, with the first three characters representing the owner’s permissions, the second three characters representing the group permissions, and the last three characters representing the permissions for everyone else (or for the world). This concept is illustrated in Figure 11.12.

Images

Figure 11.12   Discretionary file permissions in the UNIX environment

Suppose the file secretdata is owned by Luke with group permissions for Engineering (because Luke is part of the Engineering group), and the permissions on that file are rwx, rw-, and ---, as shown in Figure 11.12. This would mean the following:

Images   Luke can read, write, and execute the file (rwx).

Images   Members of the Engineering group can read and write the file but not execute it (rw-).

Images   The world has no access to the file and can’t read, write, or execute it (---).

Images

Discretionary access control restricts access based on the user’s identity or group membership.

Remember that under the DAC model, the file’s owner, Luke, can change the file’s permissions any time he wants.
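
The nine-character string in Figure 11.12 can be decoded mechanically. This Python sketch parses a permission string such as rwxrw---- and answers the access question for the secretdata example; the file owner and group come from the example above, and the other usernames are hypothetical:

def unix_access(perm_string, owner, group, user, user_groups, want):
    """Decode a 9-character UNIX permission string (owner/group/world)
    and decide whether 'user' may perform 'want' (r, w, or x)."""
    owner_bits, group_bits, world_bits = perm_string[0:3], perm_string[3:6], perm_string[6:9]
    if user == owner:
        bits = owner_bits
    elif group in user_groups:
        bits = group_bits
    else:
        bits = world_bits
    return want in bits

# secretdata is owned by Luke, group Engineering, permissions rwxrw----
print(unix_access("rwxrw----", "luke", "engineering", "luke",  ["engineering"], "x"))  # True
print(unix_access("rwxrw----", "luke", "engineering", "mary",  ["engineering"], "w"))  # True
print(unix_access("rwxrw----", "luke", "engineering", "guest", [], "r"))               # False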

Role-Based Access Control (RBAC)

Access control lists can be cumbersome and can take time to administer properly. Role-based access control (RBAC) is the process of managing access and privileges based on the user’s assigned roles. RBAC is the access control model that most closely resembles an organization’s structure. In this scheme, instead of each user being assigned specific access permissions for the objects associated with the computer system or network, that user is assigned a set of roles that the user may perform. The roles are in turn assigned the access permissions necessary to perform the tasks associated with the role. Users will thus be granted permissions to objects in terms of the specific duties they must perform—not just because of a security classification associated with individual objects.

Images

As defined by the “Orange Book,” a Department of Defense document (in the “rainbow series”) that at one time was the standard for describing what constituted a trusted computing system, a discretionary access control (DAC) is “a means of restricting access to objects based on the identity of subjects and/or groups to which they belong. The controls are discretionary in the sense that a subject with a certain access permission is capable of passing that permission (perhaps indirectly) on to any other subject (unless restrained by mandatory access control).”

Under RBAC, you must first determine the activities that must be performed and the resources that must be accessed by specific roles. For example, the role of “securityadmin” in Microsoft SQL Server must be able to create and manage logins, read error logs, and audit the application. Once all the roles are created and the rights and privileges associated with those roles are determined, users can then be assigned one or more roles based on their job functions. When a role is assigned to a specific user, the user gets all the rights and privileges assigned to that role.
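
A small Python sketch of the role-based idea follows: permissions are attached to roles, and users acquire permissions only through the roles assigned to them. The securityadmin permissions mirror the SQL Server example in the text; the other role and the usernames are hypothetical:

ROLE_PERMISSIONS = {
    "securityadmin": {"create_login", "delete_login", "read_error_log", "audit_application"},
    "backupoperator": {"backup_database", "restore_database"},
}

USER_ROLES = {
    "sjenkins": ["securityadmin"],
    "jforthright": ["backupoperator"],
}

def rbac_allows(user, permission):
    """Check a permission by walking the user's assigned roles."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, []))

print(rbac_allows("sjenkins", "read_error_log"))    # True
print(rbac_allows("jforthright", "create_login"))   # False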

Images

Role-based and rule-based access control can both be abbreviated as RBAC. Standard convention is for RBAC to be used to denote role-based access control. A seldom-seen acronym for rule-based access control is RB-RBAC. Role-based focuses on the user’s role (administrator, backup operator, and so on). Rule-based focuses on predefined criteria such as time of day (users can only log in between 8 A.M. and 6 P.M.) or type of network traffic (web traffic is allowed to leave the organization).

Unfortunately, in reality, administrators often find themselves in a position of working in an organization where more than one user has multiple roles or even access to multiple accounts (a situation quite common in smaller organizations). Users with multiple accounts tend to select the same or similar passwords for those accounts, thereby increasing the chance one compromised account can lead to the compromise of other accounts accessed by that user. Where possible, administrators should first eliminate shared or additional accounts for users and then examine the possibility of combining roles or privileges to reduce the “account footprint” of individual users.

Rule-Based Access Control

Rule-based access control is yet another method of managing access and privileges (and unfortunately shares the same acronym as role-based access control). In this method, access is either allowed or denied based on a set of predefined rules. Each object has an associated ACL (much like DAC), and when a particular user or group attempts to access the object, the appropriate rule is applied.

Images

The CompTIA Security+ exam will very likely expect you to be able to differentiate between the four major forms of access control discussed here: mandatory access control, discretionary access control, role-based access control, and rule-based access control.

A good example for rule-based access control is permitted logon hours. Many operating systems give administrators the ability to control the hours during which users can log in. For example, a bank might allow its employees to log in only between the hours of 8 A.M. and 6 P.M., Monday through Saturday. If a user attempts to log in outside of these hours (3 A.M. on Sunday, for example), then the rule will reject the login attempt regardless of whether the user supplies valid login credentials.
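
A sketch of that logon-hours rule in Python is shown here. The bank’s 8 A.M. to 6 P.M., Monday-through-Saturday window is taken from the example above, and the function simply rejects any attempt outside the window regardless of the credentials supplied:

from datetime import datetime

def login_allowed(now=None):
    """Rule-based check: allow logins only 8 A.M.-6 P.M., Monday through Saturday."""
    now = now or datetime.now()
    is_workday = now.weekday() <= 5          # Monday=0 ... Saturday=5
    in_hours = 8 <= now.hour < 18
    return is_workday and in_hours

print(login_allowed(datetime(2023, 6, 5, 10, 30)))  # Monday 10:30 A.M. -> True
print(login_allowed(datetime(2023, 6, 4, 3, 0)))    # Sunday 3 A.M. -> False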

Attribute-Based Access Control (ABAC)

Attribute-based access control (ABAC) is a newer access control scheme based on the use of attributes associated with an identity. It can use any type of attribute (user attributes, resource attributes, environment attributes, and so on), such as location, time of day, the activity being requested, and user credentials. An example would be a doctor being granted access to the records of her own patients but not to the records of other patients. ABAC can be represented via the eXtensible Access Control Markup Language (XACML), a standard that implements attribute- and policy-based access control schemes.
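
A compact illustration of attribute-based evaluation appears below: the decision combines attributes of the user, the resource, and the environment rather than a fixed role or label. The doctor/patient rule follows the example just given, while the specific attribute names and the network-location condition are invented for the sketch:

def abac_allows(user_attrs, resource_attrs, env_attrs):
    """Grant access when the requesting doctor is the patient's attending
    physician and the request originates from the hospital network."""
    return (user_attrs.get("role") == "doctor" and
            user_attrs.get("id") == resource_attrs.get("attending_physician") and
            env_attrs.get("location") == "hospital_network")

print(abac_allows({"role": "doctor", "id": "d42"},
                  {"attending_physician": "d42"},
                  {"location": "hospital_network"}))   # True
print(abac_allows({"role": "doctor", "id": "d42"},
                  {"attending_physician": "d99"},
                  {"location": "hospital_network"}))   # False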

Images Account Policies

One of the key elements guiding security professionals in their daily tasks is a good set of policies. Many issues are associated with these tasks, and leaving a lot of the decisions up to individual workers will rapidly produce inconsistent results. Policies are needed for a wide range of elements, from naming conventions to operating rules, such as audit frequency and other specifics. Having these issues resolved as a matter of policy enables security professionals to go about the task of verifying and monitoring systems rather than trying to adjudicate policy-type issues with each user case that comes along.

Account Policy Enforcement

The primary method of account policy enforcement used in most access systems is still one based on passwords. The concepts of each user ID being traceable to a single person’s activity and no sharing of passwords and credentials form the foundation of a solid account policy. Passwords need to be managed to provide appropriate levels of protection. They need to be strong enough to resist attack, and yet not too difficult for users to remember. A password policy can act to ensure that the necessary steps are taken to enact a secure password solution, both by users and by the password infrastructure system.

Password Policies

Password policies, along with many other important security policies, are covered in detail in Chapter 3.

Credential Management

Credential management refers to the processes, services, and software used to store, manage, and log the use of user credentials. Credential management solutions are typically aimed at helping end users manage their growing set of passwords. Credential management products provide a secure means of storing user credentials and making them available across a wide range of platforms, from local stores to cloud storage locations.

Group Policy

Microsoft Windows systems in an enterprise environment can be managed via group policy objects (GPOs). GPOs act through a set of registry settings that can be managed across the enterprise. A wide range of settings can be controlled via GPOs, many of them related to security, including user credential settings such as password rules.

Standard Naming Convention

Agreeing on a standard naming convention is one of those topics that can spark controversy among professionals who seem to agree on most other things. A standard naming convention has advantages in that it enables users to extract meaning from a name. Having servers with “dev,” “test,” and “prod” as part of their names can prevent inadvertent changes caused by the misidentification of an asset. By the same token, calling out privileges (say, appending “SA” to the end of usernames with system administrator privileges) creates two potential problems. First, it alerts adversaries to which accounts are the most valuable. Second, it creates a problem when the person is no longer a member of the system administrators group, because the account must then be renamed.

One aspect that everyone does agree on is the concept of leaving room for the future. The simplest example is in the numbering of accounts. For instance, for e-mail, a convention might be first initial plus last name, plus a digit if the name is already taken. Will we ever have more than 10 John Smiths? You might be surprised, because Joan Smiths and Jack Smiths also draw from the same pool, and the pool shrinks further because old accounts are inactivated rather than reused. So plan on leaving plenty of room for growth in any naming scheme.
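
The “first initial plus last name plus a digit” scheme can be sketched in a few lines of Python. The set of existing names is hypothetical; the point is simply to show how quickly a small pool of collisions consumes the trailing digits:

def next_username(first, last, existing):
    """First initial + last name, appending 1, 2, 3, ... on collisions."""
    base = (first[0] + last).lower()
    if base not in existing:
        return base
    n = 1
    while base + str(n) in existing:
        n += 1
    return base + str(n)

existing = {"jsmith", "jsmith1", "jsmith2"}   # John, Joan, and Jack Smith already have accounts
print(next_username("Jane", "Smith", existing))   # jsmith3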

Account Maintenance

Account maintenance is not the sexiest job in the security field. But then again, traffic cops have boring lives as well—until you realize that roughly half of all felons are arrested on simple traffic stops. The same is true with account maintenance—no, we aren’t catching felons, but we do find errors that otherwise only increase risk and, because of their nature, are hard to defend against any other way. Account maintenance is the routine screening of all attributes for an account. Is the business purpose for the account still valid, that is, is the user still employed? Is the business process for a system account still occurring? Are the actual permissions associated with the account appropriate for the account holder? Best practice indicates that this be performed in accordance with the risk associated with the profile. System administrator accounts and other privileged accounts need greater scrutiny than normal user accounts. Shared accounts, such as guest accounts, also require scrutiny to ensure they are not abused.

For some high-risk situations, such as unauthenticated guest accounts being granted administrator privilege, an automated check can be programmed and run on a regular basis. In Active Directory, it is also possible for the security group to be notified any time a user is granted domain admin privilege. And it is also important to note that the job of determining who has what access is actually one that belongs to the business, not the security group. The business side of the house is where the policy decision on who should have access is determined. The security group merely takes the steps to enforce this decision. Account maintenance is a joint responsibility.

Usage Auditing and Review

As with all security controls, monitoring is an important component in mitigating risk. Logs are the most frequently used monitoring tool, and with respect to privileged accounts, logging can be especially important. Usage auditing and review is just that: an examination of logs to determine user activity. Reviewing access control logs for root-level accounts is an important element of securing access control methods. Because of their power and potential for misuse, administrative or root-level accounts should be closely monitored. One important element to monitor continuously in production is any use of an administrative-level account on a production system.

Images

Logging and monitoring of failed login attempts provides valuable information during investigations of compromises.

A strong configuration management environment will include the control of access to production systems by users who can change the environment. Root-level changes in a system tend to be significant changes, and in production systems these changes would be approved in advance. A comparison of all root-level activity against approved changes will assist in the detection of activity that is unauthorized.
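
Conceptually, the comparison described above amounts to filtering privileged activity against a list of approved change records. The Python sketch below assumes a simplified log format and ticket list purely for illustration:

APPROVED_CHANGES = {"CHG-1001", "CHG-1002"}   # change tickets approved in advance

activity_log = [
    {"account": "root", "action": "edit /etc/passwd", "change_ticket": "CHG-1001"},
    {"account": "root", "action": "install package",  "change_ticket": None},
    {"account": "bjones", "action": "read report",    "change_ticket": None},
]

def unauthorized_root_activity(log):
    """Flag root-level actions that are not tied to an approved change."""
    return [entry for entry in log
            if entry["account"] == "root"
            and entry["change_ticket"] not in APPROVED_CHANGES]

for entry in unauthorized_root_activity(activity_log):
    print("REVIEW:", entry["action"])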

Account Recertification

User accounts should be recertified periodically as necessary. The process of account recertification can be as simple as a check against current payroll records to ensure all users are still employed, or as intrusive as having users identify themselves again. The latter approach is highly intrusive but may be warranted for high-risk accounts. The process of recertification ensures that only users needing accounts have accounts in the system.

Time-of-Day Restrictions

Some organizations need to tightly control certain users, groups, or even roles and limit access to certain resources to specific days and times. Most server-class operating systems enable administrators to implement time-of-day restrictions that limit when a user can log in, when certain resources can be accessed, and so on. Time-of-day restrictions are usually specified for individual accounts, as shown in Figure 11.13.

Images

Figure 11.13   Logon hours for Guest account

Images

Be careful implementing time-of-day restrictions. Some operating systems give you the option of disconnecting users as soon as their “allowed login time” expires, regardless of what the user is doing at the time. The more commonly used approach is to allow currently logged-in users to stay connected but reject any login attempts that occur outside of allowed hours.

From a security perspective, time-of-day restrictions can be very useful. If a user normally accesses certain resources during normal business hours, an attempt to access these resources outside this time period (either at night or on the weekend) might indicate an attacker has gained access to or is trying to gain access to that account. Specifying time-of-day restrictions can also serve as a mechanism to enforce internal controls of critical or sensitive resources. Obviously, a drawback to enforcing time-of-day restrictions is that it means a user can’t go to work outside of normal hours to “catch up” with work tasks. As with all security policies, usability and security must be balanced in this policy decision.

Account Expiration

In addition to all the other methods of controlling and restricting access, most modern operating systems allow administrators to specify the length of time an account is valid and when it “expires” or is disabled. Account expiration is the setting of an ending time for an account’s validity. This is a great method for controlling temporary accounts, or accounts for contractors or contract employees. For these accounts, the administrator can specify an expiration date; when the date is reached, the account automatically becomes locked out and cannot be logged into without administrator intervention. A related action can be taken with accounts that never expire: they can automatically be marked “inactive” and locked out if they have been unused for a specified number of days. Account expiration is similar to password expiration, in that it limits the time window of potential compromise. When an account has expired, it cannot be used unless the expiration deadline is extended.
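
A sketch of the two checks described above follows: an explicit expiration date, and automatic inactivation after a period of disuse. The 90-day inactivity threshold is an assumed value, not a standard:

from datetime import date, timedelta

def account_is_locked(expires_on, last_login, today=None, inactive_days=90):
    """Lock the account if it has expired or has been unused too long."""
    today = today or date.today()
    expired = expires_on is not None and today > expires_on
    inactive = (today - last_login) > timedelta(days=inactive_days)
    return expired or inactive

# A contractor account that expired at the end of June:
print(account_is_locked(date(2023, 6, 30), date(2023, 6, 20), today=date(2023, 7, 5)))  # True
# A permanent account that simply has not been used in four months:
print(account_is_locked(None, date(2023, 3, 1), today=date(2023, 7, 5)))                # True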

Disabling Accounts

An administrator has several options for ending a user’s access (for instance, upon termination). The best option is to disable the account but leave it in the system. This preserves account permission chains and prevents reuse of a user ID, which could otherwise lead to confusion later when examining logs.

Similarly, organizations must define whether accounts are deleted or disabled when no longer needed. Deleting an account removes the account from the system permanently, whereas disabling an account leaves it in place but marks it as unusable. Many organizations disable an account for a period of time after an employee departs (30 or more days) prior to deleting the account. This prevents anyone from using the account and allows administrators to reassign files, forward mail, and “clean up” before taking any permanent actions on the account.

Images Preventing Data Loss or Theft

Identity theft and commercial espionage have become very large and lucrative criminal enterprises over the past decade. Hackers are no longer merely content to compromise systems and deface web sites. In many attacks performed today, hackers are after intellectual property, business plans, competitive intelligence, personal information, credit card numbers, client records, or any other information that can be sold, traded, or manipulated for profit. This has created a whole industry of technical solutions labeled data loss prevention (DLP) solutions.

Because an attacker may well have assumed the identity of an authorized user, DLP solutions exist to prevent the exfiltration of data regardless of access control restrictions. DLP solutions come in many forms, and each has strengths and weaknesses. The best solution is a combination of security elements: encryption to secure data in storage, monitoring (such as proxy devices that inspect outbound traffic for sensitive data), and even NetFlow analytics to identify new bulk data transfer routes.

Images The Remote Access Process

The process of connecting by remote access involves two elements: a temporary network connection and a series of protocols to negotiate privileges and commands. The temporary network connection can occur via a dial-up service, the Internet, wireless access, or any other method of connecting to a network. Once the connection is made, the primary issue is authenticating the identity of the user and establishing proper privileges for that user. This is accomplished using a combination of protocols and the operating system on the host machine.

Securing Remote Connections

By using encryption, remote access protocols can securely authenticate and authorize a user according to previously established privilege levels. The authorization phase can keep unauthorized users out, but after that, encryption of the communications channel becomes very important in preventing unauthorized users from breaking in on an authorized session and hijacking an authorized user’s credentials. As more and more networks rely on the Internet for connecting remote users, the need for and importance of secure remote access protocols and secure communication channels will continue to grow.

The three steps in the establishment of proper privileges are authentication, authorization, and accounting, commonly referred to simply as AAA. Authentication is the matching of user-supplied credentials to previously stored credentials on a host machine, and it usually involves an account username and password. Once the user is authenticated, the authorization step takes place. Authorization is the granting of specific permissions based on the privileges held by the account. Does the user have permission to use the network at this time, or is their use restricted? Does the user have access to specific applications, such as mail and FTP, or are some of these restricted? These checks are carried out as part of authorization, and in many cases this is a function of the operating system in conjunction with its established security policies. Accounting is the collection of billing and other detail records. Network access is often a billable function, and a log of how much time, bandwidth, file transfer space, or other resources were used needs to be maintained. Other accounting functions include keeping detailed security logs to maintain an audit trail of tasks being performed.

When a user connects to the Internet through an ISP, this is similarly a case of remote access—the user is establishing a connection to their ISP’s network, and the same security issues apply. The issue of authentication, the matching of user-supplied credentials to previously stored credentials on a host machine, is usually done via a user account name and password. Once the user is authenticated, the authorization step takes place. Remote authentication usually takes the common form of an end user submitting their credentials via an established protocol to a remote access server (RAS), which acts upon those credentials, either granting or denying access.

Access controls define what actions a user can perform or what objects a user is allowed to access. Access controls are built on the foundation of elements designed to facilitate the matching of a user to a process. These elements are identification, authentication, and authorization. A myriad of details and choices are associated with setting up remote access to a network, and to provide for the management of these options, it is important for an organization to have a series of remote access policies and procedures spelling out the details of what is permitted and what is not for a given network.

Federation

Federated identity management is an agreement between multiple enterprises that lets parties use the same identification data to obtain access to the networks of all enterprises in the group. This federation enables access to be managed across multiple systems in common trust levels.

Identification

Identification is the process of ascribing a computer ID to a specific user, computer, network device, or computer process. The identification process is typically performed only once, when a user ID is issued to a particular user. User identification enables authentication and authorization to form the basis for accountability. For accountability purposes, user IDs should not be shared, and for security purposes, they should not be descriptive of job function. This practice enables you to trace activities to individual users or computer processes so that they can be held responsible for their actions. Identification links the logon ID or user ID to credentials that have been submitted previously to either HR or the IT staff. A required characteristic of user IDs is that they must be unique so that they map back to the credentials presented when the account was established.

Authentication

Authentication is the process of binding a specific ID to a specific computer connection. Two items need to be presented to cause this binding to occur—the user ID and some “secret” to prove that the user is the valid possessor of the credentials. Historically, three categories of secrets are used to authenticate the identity of a user: what users know, what users have, and what users are. Today, an additional category is used: what users do.

These methods can be used individually or in combination. These controls assume that the identification process has been completed and the identity of the user has been verified. It is the job of authentication mechanisms to ensure that only valid users are admitted. Described another way, authentication is using some mechanism to prove that you are who you claimed to be when the identification process was completed.

Categories of Shared Secrets for Authentication

Originally published by the U.S. government in one of the “rainbow series” of manuals on computer security, the categories of shared “secrets” are as follows:

Images   What users know (such as a password)

Images   What users have (such as tokens)

Images   What users are (static biometrics such as fingerprints or iris pattern)

Today, because of technological advances, new categories have emerged, patterned after subconscious behaviors and measurable attributes:

Images   What users do (dynamic biometrics such as typing patterns or gait)

Images   Where a user is (actual physical location)

The most common method of authentication is the use of a password. For greater security, you can add an element from a separate group, such as a smart card token—something a user has in their possession. Passwords are common because they are one of the simplest forms of authentication, and they use user memory as a prime component. Because of their simplicity, passwords have become ubiquitous across a wide range of authentication systems.
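
Because passwords remain the dominant what-you-know factor, it is worth seeing how a system can verify one without storing it in usable form. The following is a minimal Python sketch, not a production design; the username, password, and in-memory storage are hypothetical, and it uses the standard library's PBKDF2 function to salt and stretch the password before comparison.

import hashlib, hmac, os

def enroll(password: str):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest          # stored server-side instead of the password itself

def verify(password: str, salt: bytes, stored_digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_digest)   # constant-time comparison

salt, stored = enroll("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))   # True
print(verify("wrong guess", salt, stored))                    # False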

Another method to provide authentication involves the use of something that only valid users should have in their possession. A physical-world example of this would be a simple lock and key. Only those individuals with the correct key will be able to open the lock and thus gain admittance to a house, car, office, or whatever the lock was protecting. A similar method can be used to authenticate users for a computer system or network (though the key may be electronic and could reside on a smart card or similar device). The problem with this technology, however, is that people do lose their keys (or cards), which means not only that the user can’t log into the system but that somebody else who finds the key may then be able to access the system, even though they are not authorized. To address this problem, a combination of the something-you-know and something-you-have methods is often used so that the individual with the key is also required to provide a password or passcode. The key is useless unless the user knows this code.

The third general method to provide authentication involves something that is unique about you. We are accustomed to this concept in our physical world, where our fingerprints or a sample of our DNA can be used to identify us. This same concept can be used to provide authentication in the computer world. The field of authentication that uses something about you or something that you are is known as biometrics. A number of different mechanisms can be used to accomplish this type of authentication, such as a fingerprint, iris, retinal, or hand geometry scan. All of these methods obviously require some additional hardware in order to operate. The inclusion of fingerprint readers on laptop computers is becoming common as the additional hardware is becoming cost effective.

A new method, based on how users perform an action, such as their walking gait or their typing patterns, has emerged as a source of a personal “signature.” While not directly embedded into systems as yet, this is an option that will be coming in the future.

Although the three main approaches to authentication appear to be easy to understand and in most cases easy to implement, authentication is not to be taken lightly because it is such an important component of security. Potential attackers are constantly searching for ways to get past the system’s authentication mechanism, and they have employed some fairly ingenious methods to do so. Consequently, security professionals are constantly devising new methods, building on these three basic approaches, to provide authentication mechanisms for computer systems and networks.

Basic Authentication

Basic authentication is the simplest technique used to manage access control across HTTP. Basic authentication operates by passing information encoded in Base64 form using standard HTTP headers. This is a plaintext method without any pretense of security. Figure 11.14 illustrates the operation of basic authentication.

Images

Figure 11.14   How basic authentication operates
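
To make the weakness concrete, the following Python sketch builds a basic authentication header the way a client would. The credentials and URL are hypothetical; the point is that Base64 is an encoding, not encryption, so anyone who captures the header can recover the password.

import base64
import urllib.request

credentials = base64.b64encode(b"alice:s3cret").decode()         # Base64 encoding only
request = urllib.request.Request("https://www.example.com/protected")
request.add_header("Authorization", "Basic " + credentials)      # standard HTTP header
# Anyone who captures this header can trivially recover the credentials:
print(base64.b64decode(credentials))                             # b'alice:s3cret'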

Digest Authentication

Digest authentication is a method used to negotiate credentials across the Web. Digest authentication uses hash functions and a nonce to improve security over basic authentication. Digest authentication works as follows, as illustrated in Figure 11.15:

Images

Figure 11.15   How digest authentication operates

1.   The client requests login.

2.   The server responds with a challenge and provides a nonce.

3.   The client hashes the password and nonce.

4.   The client returns the hashed password to the server.

5.   The server requests the password from a password store.

6.   The server hashes the password and nonce.

7.   If both hashes match, login is granted.
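
The following Python sketch illustrates the nonce-plus-hash idea behind the steps above. It is a simplified illustration, not the exact algorithm defined in the HTTP digest RFCs (which hash additional fields such as the realm, method, and URI); the password and nonce values are hypothetical.

import hashlib, os

def h(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

# Server side: issue a one-time nonce with the challenge (steps 1 and 2).
nonce = os.urandom(8).hex()

# Client side: hash the password with the nonce and return the result (steps 3 and 4).
client_response = h("s3cret" + nonce)

# Server side: fetch the stored password and repeat the computation (steps 5 and 6).
expected = h("s3cret" + nonce)

# Step 7: grant login only if the two hashes match.
print("login granted" if expected == client_response else "login denied")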

Although digest authentication improves on basic authentication by never sending the password in the clear, it does not provide a significant level of security: it remains subject to man-in-the-middle attacks and potentially to replay attacks.

Images

The bottom line for both basic and digest authentication is that these are insecure methods and should not be relied upon for any level of security.

Kerberos

Developed as part of MIT’s project Athena, Kerberos is a network authentication protocol designed for a client/server environment. The current version is Kerberos 5 release 1.16 and is supported by all major operating systems. Kerberos securely passes a symmetric key over an insecure network using the Needham-Schroeder symmetric key protocol. Kerberos is built around the idea of a trusted third party, termed a key distribution center (KDC), which consists of two logically separate parts: an authentication server (AS) and a ticket-granting server (TGS). Kerberos communicates via “tickets” that serve to prove the identity of users.

Images

Two tickets are used in Kerberos. The first is a ticket-granting ticket (TGT) obtained from the authentication server (AS). The TGT is presented to a ticket-granting server (TGS) when access to a server is requested and then a client-to-server ticket is issued, granting access to the server. Typically both the AS and the TGS are logically separate parts of the key distribution center (KDC).

Taking its name from the three-headed dog of Greek mythology, Kerberos is designed to work across the Internet, an inherently insecure environment. Kerberos uses strong encryption so that a client can prove its identity to a server and the server can in turn authenticate itself to the client. A complete Kerberos environment is referred to as a Kerberos realm. The Kerberos server contains user IDs and hashed passwords for all users who will have authorizations to realm services. The Kerberos server also has shared secret keys with every server to which it will grant access tickets.

The basis for authentication in a Kerberos environment is the ticket. Tickets are used in a two-step process with the client. The first ticket is a ticket-granting ticket (TGT) issued by the AS to a requesting client. The client can then present this ticket to the Kerberos server with a request for a ticket to access a specific server. This client-to-server ticket (also called a service ticket) is used to gain access to a server’s service in the realm. Because the entire session can be encrypted, this eliminates the inherently insecure transmission of items such as a password that can be intercepted on the network. Tickets are time-stamped and have a lifetime, so attempting to reuse a ticket will not be successful. Figure 11.16 details Kerberos operations.

Images

Figure 11.16   Kerberos operations
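
The following Python sketch is a highly simplified illustration of the ticket idea: the KDC seals a service ticket using a key it shares with the target server, so the server can validate the ticket without contacting the KDC for every request. Real Kerberos encrypts tickets and also carries session keys, client addresses, and other fields; the names, keys, and lifetime used here are hypothetical.

import hashlib, hmac, json, time

KDC_SERVER_KEYS = {"fileserver": b"secret-shared-with-fileserver"}   # per-server shared secrets

def issue_service_ticket(user: str, server: str) -> dict:
    body = {"user": user, "server": server, "expires": time.time() + 300}
    seal = hmac.new(KDC_SERVER_KEYS[server], json.dumps(body).encode(),
                    hashlib.sha256).hexdigest()
    return {"body": body, "seal": seal}          # the client simply relays this ticket

def server_accepts(ticket: dict, server_key: bytes) -> bool:
    expected = hmac.new(server_key, json.dumps(ticket["body"]).encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, ticket["seal"])
            and ticket["body"]["expires"] > time.time())

ticket = issue_service_ticket("alice", "fileserver")
print(server_accepts(ticket, KDC_SERVER_KEYS["fileserver"]))        # True: valid ticket honored
forged = {"body": {"user": "mallory", "server": "fileserver",
                   "expires": time.time() + 300}, "seal": "forged"}
print(server_accepts(forged, KDC_SERVER_KEYS["fileserver"]))        # False: forgery rejected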

Kerberos Authentication

Kerberos is a third-party authentication service that uses a series of tickets as tokens for authenticating users. The six steps involved are protected using strong cryptography:

Images   The user presents their credentials and requests a ticket from the key distribution center (KDC).

Images   The KDC verifies credentials and issues a ticket-granting ticket (TGT).

Images   The user presents a TGT and request for service to the KDC.

Images   The KDC verifies authorization and issues a client-to-server ticket (or service ticket).

Images   The user presents a request and a client-to-server ticket to the desired service.

Images   If the client-to-server ticket is valid, service is granted to the client.

To illustrate how the Kerberos authentication service works, think about the common driver’s license. You have received a license that you can present to other entities to prove you are who you claim to be. Because other entities trust the state in which the license was issued, they will accept your license as proof of your identity. The state in which the license was issued is analogous to the Kerberos authentication service realm, and the license acts as a client-to-server ticket. It is the trusted entity both sides rely on to provide valid identifications. This analogy is not perfect, because we all probably have heard of individuals who obtained a phony driver’s license, but it serves to illustrate the basic idea behind Kerberos.

Images

Mutual TLS–based authentication provides the same functions as normal TLS, with the addition of authentication and nonrepudiation of the client. This second authentication, the authentication of the client, is performed in the same manner as normal server authentication, using digital signatures. Here the client side represents the "many" in a many-to-one relationship with the server. Mutual TLS authentication is not commonly used because of the complexity, cost, and logistics associated with managing the multitude of client certificates; this limits its effectiveness, and most web applications are not designed to require client-side certificates.
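
As a rough illustration, the following Python sketch configures the server side of a mutual TLS connection with the standard ssl module: the server presents its own certificate and also refuses clients that cannot present a certificate signed by a trusted CA. The file names are hypothetical placeholders for certificates you would have to provision.

import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")   # server's own identity
context.load_verify_locations(cafile="trusted_client_ca.pem")          # CA that signs client certs
context.verify_mode = ssl.CERT_REQUIRED    # reject any client that lacks a valid certificate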

Mutual Authentication

Mutual authentication describes a process in which each side of an electronic communication verifies the authenticity of the other. We are accustomed to the idea of having to authenticate ourselves to our ISP before we access the Internet, generally through the use of a user ID/password pair, but how do we actually know that we are really communicating with our ISP and not some other system that has somehow inserted itself into our communication (a man-in-the-middle attack)? Mutual authentication provides a mechanism for each side of a client/server relationship to verify the authenticity of the other to address this issue. A common method of performing mutual authentication involves using a secure connection, such as Transport Layer Security (TLS), to the server and a one-time password generator that then authenticates the client.

Certificates

Certificates are a method of establishing authenticity of specific objects such as an individual’s public key or downloaded software. A digital certificate is a digital file that is sent as an attachment to a message and is used to verify that the message did indeed come from the entity it claims to have come from.

PIV/CAC/Smart Cards

The U.S. federal government has several smart card solutions for identification of personnel. The personal identity verification (PIV) card is a U.S. government smart card that contains the cardholder's credential data, which is used to determine access to federal facilities and information systems. The Common Access Card (CAC) is a smart card identification used by the U.S. Department of Defense (DoD) for active-duty military, selected reserve personnel, DoD civilians, and eligible contractors. Like the PIV card, it carries the cardholder's credential data, in the form of a certificate, which is used to determine access to federal facilities and information systems.

Digital Certificates and Digital Signatures

Kerberos uses tickets to convey messages. Part of the ticket is a certificate that contains the requisite keys. Understanding how certificates convey this vital information is an important part of understanding how Kerberos-based authentication works. Certificates, how they are used, and the protocols associated with PKI were covered in Chapter 7. Refer back to this chapter as needed for more information.

Tokens

While the username/password combination has been and continues to be the cheapest and most popular method of controlling access to resources, many organizations look for a more secure and tamper-resistant form of authentication. Usernames and passwords are “something you know” (which can be used by anyone else who knows or discovers the information). A more secure method of authentication is to combine the “something you know” with “something you have.” A token is an authentication factor that typically takes the form of a physical or logical entity that the user must be in possession of to access their account or certain resources.

A token is a hardware device that can be used in a challenge/response authentication process. In this way, it functions as both a something-you-have and something-you-know authentication mechanism. Several variations on this type of device exist, but they all work on the same basic principles. Tokens are commonly employed in remote authentication schemes because they provide additional surety of the identity of the user, even users who are somewhere else and cannot be observed.

Most tokens are physical tokens that display a series of numbers that changes every 30 to 90 seconds, such as the token pictured in Figure 11.17 from Blizzard Entertainment. This sequence of numbers must be entered when the user is attempting to log in or access certain resources. The ever-changing sequence of numbers is synchronized to a remote server such that when the user enters the correct username, password, and matching sequence of numbers, they are allowed to log in. Even if an attacker obtains the username and password, the attacker cannot log in without the matching sequence of numbers. Other physical tokens include Common Access Cards (CACs), USB tokens, smart cards, and PC cards.

Images

Figure 11.17   Token authenticator from Blizzard Entertainment

Images

The use of a token is a common method of using “something you have” for authentication. A token can hold a cryptographic key or act as a one-time password (OTP) generator. It can also be a smart card that holds a cryptographic key (examples include the U.S. military Common Access Card and the Federal Personal Identity Verification [PIV] card). These devices can be safeguarded using a PIN and lockout mechanism to prevent use if stolen.

Software Tokens

Access tokens may also be implemented in software. Software tokens still provide two-factor authentication but don't require the user to have a separate physical device on hand. Some tokens require software clients that store a symmetric key (sometimes called a seed record) in a secured location on the user's device (laptop, desktop, tablet, and so on). Other software tokens use public key (asymmetric) cryptography; these solutions often associate a PIN with a specific user's token. To log in or access critical resources, the user must supply the correct PIN. The PIN is stored on a remote server and is used during the authentication process, so that if the user presents the right token but not the right PIN, access can be denied. This helps prevent an attacker from gaining access if they get a copy of, or gain access to, the software token. The most common use of a software token is to identify a specific device in addition to a user: the software token resides on the device, and the user supplies the rest of the details needed to demonstrate authenticity.

Images

Understand that tokens represent (1) something you have with respect to authentication and (2) a device that can store more information than you can memorize. This makes them very valuable for access control. The details in the exam question will provide the criteria you need to pick the best token method.

HOTP/TOTP

HMAC-based One-Time Password (HOTP) is an algorithm that can be used to authenticate a user in a system by using an authentication server. (HMAC stands for Hash-based Message Authentication Code.) It is defined in RFC 4226, dated December 2005. The Time-based One-Time Password (TOTP) algorithm is a specific implementation of an HOTP that uses a secret key with a current time stamp to generate a one-time password. It is described in RFC 6238, dated May 2011.
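
The algorithms are compact enough to sketch with the Python standard library. The implementation below follows the RFC 4226 truncation procedure, and the secret shown is the published RFC test key, so the first HOTP value can be checked against the RFC's test vectors; treat it as an illustration rather than a hardened implementation.

import hmac, hashlib, struct, time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    return hotp(secret, int(time.time()) // period, digits)   # time-based counter (RFC 6238)

secret = b"12345678901234567890"     # example key from the RFC 4226 test vectors
print(hotp(secret, 0))               # 755224, matching the RFC 4226 test vectors
print(totp(secret))                  # changes every 30 seconds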

Smart Cards

Smart cards can increase physical security because they can carry cryptographic tokens that are too long to remember and drawn from too large a space to guess. Because of the manner in which they are employed and used, copying the number is not a practical option either. Smart cards are useful in a variety of situations where you want to combine something you know (a PIN or password) with something you have that cannot easily be duplicated (a smart card). Many standard corporate-type laptops come with smart card readers installed, and their use is integrated into the Windows user access system.

Multifactor Authentication

Images

Two-factor authentication combines any two methods, matching items such as a token with a biometric. Three-factor authentication combines any three, such as a passcode, biometric, and a token.

Multifactor authentication (or multiple-factor authentication) is simply the combination of two or more types of authentication. Five broad categories of authentication can be used: what you are (for example, biometrics), what you have (for instance, tokens), what you know (passwords and other information), somewhere you are (location), and something you do (physical performance). Two-factor authentication combines any two of these before granting access. An example would be a card reader that then turns on a fingerprint scanner—if your fingerprint matches the one on file for the card, you are granted access. Three-factor authentication combines three types, such as a smart card reader that asks for a PIN before enabling a retina scanner. If all three correspond to a valid user in the computer database, access is granted.

Multifactor authentication methods greatly enhance security by making it very difficult for an attacker to obtain all the correct materials for authentication. They also protect against the risk of stolen tokens, as the attacker must have the correct biometric, password, or both. More important, multifactor authentication enhances the security of biometric systems by protecting against a stolen biometric. Changing the token makes the biometric useless unless the attacker can steal the new token. It also reduces false positives by trying to match the supplied biometric with the one that is associated with the supplied token. This prevents the computer from seeking a match using the entire database of biometrics. Using multiple factors is one of the best ways to ensure proper authentication and access control.

Something You Are

Something you are is one of the categories of authentication factors. It specifically refers to biometrics, as the “you are” indicates. One of the challenges with something-you-are artifacts is they are typically hard to change, so once assigned they become immutable. Another challenge with biometrics involves the issues associated with measuring things on a person. For example, taking pictures of people might be a cultural issue for some groups, and there might be a lack of fingerprints for some types of physical laborers. Some biometrics suffer from not being usable in certain environments; for instance, in the case of medical workers, or workers in clean-room environments, their personal protective gear will inhibit the use of fingerprint readers and potentially other biometrics.

Something You Have

Something you have is another one of the categories of authentication factors. It specifically refers to tokens and other items that a user can possess physically, as the “you have” indicates. One of the challenges with something you have is that you have to have it with you whenever you wish to be authenticated, and this can cause issues. It also relies on interfaces that might not be available for some systems, such as mobile devices, although one-time password generators are device independent.

Something You Know

Something you know is another one of the categories of authentication factors. It specifically refers to passwords, as the "you know" indicates. The most common example of something you know is a password. One of the challenges with something you know is that it can be "shared" without the user knowing about it (for example, a password can be duplicated without the owner's knowledge).

Something You Do

Something you do is another one of the categories of authentication factors. It specifically refers to activities, as the "you do" indicates. An example of this is a signature, because the movement of the pen and the two-dimensional output are difficult for others to reproduce. This makes it useful for authentication, although challenges exist in capturing the data.

Somewhere You Are

One of the more stringent elements is your location, or somewhere you are. When you use a mobile device, GPS can tell where the device is currently located. Also, when you use a local, wired desktop connection, this can indicate that you are in the building. Both of these can be compared to records to determine if you are really there, or even should be there. Suppose you are badged into your building and are at your desk on a wired PC. In this case, a second connection from a different location would be suspect, because you can only be at one place at a time.

Images

Be able to differentiate between the different criteria for authentication: something you are, have, know, or do, as well as somewhere you are. These are easily tested on the exam. Be sure you have a solid foundation on how they differ and know examples of each to match to a scenario-type question.

Transitive Trust

Security across multiple domains is provided through trust relationships. When trust relationships between domains exist, authentication for each domain trusts the authentication for all other trusted domains. Thus, when an application is authenticated by a domain, its authentication is accepted by all other domains that trust the authenticating domain.

It is important to note that trust relationships apply only to authentication. They do not apply to resource usage, which is an access control issue. Trust relationships allow users to have their identity verified (authentication). The ability to use resources is defined by access control rules. Thus, even though a user is authenticated via the trust relationship, it does not provide access to actually use resources.

Images

Transitive trust involves three parties: if A trusts B, and B trusts C, then in a transitive trust relationship, A will trust C.

A transitive trust relationship means that the trust relationship extended to one domain will be extended to any other domain trusted by that domain. A two-way trust relationship means that two domains trust each other.

Biometric Factors

Biometric factors use the measurements of certain biological features to identify one specific person from other people. These factors are based on parts of the human body that are unique. The most well-known of these unique biological factors is the fingerprint. Fingerprint readers have been available for several years in laptops and other mobile devices, on keyboards, and as standalone USB devices.

However, many other biological factors can be used, such as the retina or iris of the eye, the geometry of the hand, and the geometry of the face. When these are used for authentication, there is a two-part process: enrollment and then authentication. During enrollment, a computer takes the image of the biological factor and reduces it to a numeric value, called a template. When the user attempts to authenticate, the biometric feature is scanned by the reader, and the computer computes a value in the same fashion as the template, and then compares the numeric value being read to the one stored in the database. If they match, access is allowed. Because these physical factors are unique, theoretically only the actual authorized person would be allowed access.

In the real world, however, the theory behind biometrics breaks down. Tokens that have a digital code work very well because everything remains in the digital realm. A computer checks your code, such as 123, against the database; if the computer finds 123 and that number has access, the computer opens the door. Biometrics, however, take an analog signal, such as a fingerprint or a face, and attempt to digitize it, and it is then matched against the digits in the database. The problem with an analog signal is that it might not encode the exact same way twice. For example, if you came to work with a bandage on your chin, would the face-based biometric grant you access or deny it? Because of this, templates are matched probabilistically rather than exactly; the system computes a closeness (similarity) measurement and decides whether the new reading is close enough to count as a match.
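
The following Python sketch shows the closeness idea in miniature: an enrolled template and a fresh reading are reduced to numeric feature values, a similarity score is computed, and a threshold decides whether the difference is small enough to accept. The feature values and threshold are hypothetical and greatly simplified compared to real biometric templates.

def similarity(template, reading):
    # Inverse of mean absolute difference: 1.0 means an exact match.
    diff = sum(abs(t - r) for t, r in zip(template, reading)) / len(template)
    return 1.0 / (1.0 + diff)

enrolled_template = [0.61, 0.34, 0.88, 0.12]      # captured once, at enrollment
todays_scan       = [0.59, 0.36, 0.86, 0.14]      # never digitizes exactly the same way

THRESHOLD = 0.90   # raising this cuts false acceptances but adds false rejections
score = similarity(enrolled_template, todays_scan)
print("access granted" if score >= THRESHOLD else "access denied", round(score, 3))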

Fingerprint Scanner

Fingerprint scanners are used to measure the unique shape of fingerprints and then change them to a series of numerical values, or a template. Fingerprint readers can be enhanced to ensure that the pattern is a live pattern—one with blood moving or other detectable biological activity. This is to prevent simple spoofing with a mold of the print made of Jell-O. Fingerprint scanners are cheap to produce and have widespread use in mobile devices. One of the challenges of fingerprint scanners is they fail if the user is wearing gloves (such as medical gloves) or has worn-down fingerprints, as is the case for those involved in the sheetrock trade.

Retinal Scanner

Retinal scanners examine blood vessel patterns in the back of the eye. Believed to be unique and unchanging, this is a readily detectable biometric. It does suffer from user acceptance, as it involves a laser scanning the inside of the user’s eyeball, which has some psychological issues. This detection is close up, and the user has to be right at the device for it to work. It is also more expensive because of the precision of the detector and the involvement of lasers and users’ vision.

Iris Scanner

Iris scanners work in a way similar to retinal scanners in that they use an image of a unique biological measurement (in this case, the pigmentation associated with the iris of the eye). This can be photographed and measured from a distance, removing the psychological impediment of placing one's eye on a scanner. But there are downsides: because the measurement can be taken at a distance, it is easy to measure other people's values, and contact lenses can be constructed that mimic a certain pattern. There are also privacy concerns, because some medical conditions can be revealed by the scan, and disclosing them would be a violation of the user's privacy.

Voice Recognition

Voice recognition is the use of unique tonal qualities and speech patterns to identify a person. Long the subject of sci-fi movies, this biometric has been one of the hardest to develop into a reliable mechanism, primarily because of problems with false acceptance and rejection rates, which will be discussed later in the chapter.

Facial Recognition

Facial recognition was also mostly the stuff of stories until it was integrated into various mobile phones. A sensor that detects when the phone is moved into a position to see your face, coupled with the device being in a locked state, turns on the forward-facing camera and causes the system to look for its enrolled owner. This system has proven to have fairly high discrimination and works fairly well, with one major drawback: another person can move the phone in front of the registered user and unlock it, activating the unlocking mechanism while the user is unaware. A minor drawback is that for certain transactions, such as positive identification during financial transactions, holding the phone in position for a near field communication (NFC) exchange while keeping the user's face in the required orientation to the phone can lead to awkward positions.

False Positives and False Negatives

Engineers who design systems understand that if a system was set to exact checking, an encoded biometric might never grant access because it might never scan the biometric exactly the same way twice. Therefore, most systems have tried to allow a certain amount of error in the scan, while not allowing too much. This leads to the concepts of false positives and false negatives. A false positive is where you receive a positive result for a test, when you should have received a negative result. Thus, a false positive result occurs when a biometric is scanned and allows access to someone who is not authorized—for example, two people who have very similar fingerprints might be recognized as the same person by the computer, which in turn might grant access to the wrong person. A false negative occurs when the system denies access to someone who is actually authorized—for example, a user at the hand geometry scanner may have forgotten to wear a ring they usually wear and the computer doesn’t recognize their hand and denies them access.

In statistical terms, a false positive is called a type I error, and a false negative is a type II error. In some fields one type of error is conventionally treated as the more serious; in practical access control systems, the more serious error depends on the circumstances. If you are willing to trade off legitimate access (making authorized users try several times) in order to keep out unauthorized parties, then type I errors (false acceptances) are being avoided at the expense of type II errors (false rejections). But if legitimate access must not be denied, even in error (for example, signing in to prevent the meltdown of the core at a power plant), then avoiding type II errors takes priority, even at the cost of additional type I errors. Context and circumstances matter: you need to consider what the biometrics are protecting and what the cost is of each type of failure.

What is desired is for the system to be able to differentiate the two signals—one being the stored value and the other being the observed value—in such a way that the two curves do not overlap. Figure 11.18 illustrates two probability distributions that do not overlap.

Images

Figure 11.18   Ideal probabilities

For biometric authentication to work properly, and also be trusted, it must minimize the existence of both false positives and false negatives. But biometric systems are seldom that discriminating, and the curves tend to overlap, as shown in Figure 11.19. For detection to work, a balance between exacting and error must be created so that the machines allow a little physical variance—but not too much.

Images

Figure 11.19   Overlapping probabilities

This leads us to acceptance and rejection rates.

False Acceptance Rate

The false acceptance rate (FAR) is just that: the level of false positives that is going to be allowed in the system. If an unauthorized user is accepted by the system, this is a false acceptance. A false positive is demonstrated by the grayed-out area in Figure 11.20. In this section, the curves overlap, and the decision has been set that at the threshold or better an accept signal will be given. Thus, if you are on the upper end of the nonmatch curve, in the gray area, you will be a false positive. Expressed as probabilities, the false acceptance rate is the probability that the system incorrectly identifies a match between the biometric input and the stored template value. The FAR is calculated by counting the number of unauthorized accesses granted, divided by the total number of imposter (unauthorized) access attempts.

Images

Figure 11.20   False acceptance rate

When selecting the threshold value, the designer must be cognizant of two factors: one is the rejection of a legitimate biometric, the area on the match curve below the threshold value. The other is the acceptance of a false positive. As you set the threshold higher, you will decrease false positives, but increase false negatives (or rejections).

False Rejection Rate

The false rejection rate (FRR) is just that: the level of false negatives, or rejections, that is going to be allowed in the system. If an authorized user is rejected by the system, this is a false rejection. A false rejection is demonstrated by the grayed-out area in Figure 11.21. In this section, the curves overlap, and the decision has been set that at the threshold or lower a reject signal will be given. Thus, if you are on the lower end of the match curve, in the gray area, you will be rejected, even if you should be a match. Expressed as probabilities, the false rejection rate is the probability that the system incorrectly rejects a legitimate match between the biometric input and the stored template value. The FRR is calculated by counting the number of authorized access attempts that were not granted, divided by the total number of legitimate access attempts.

Images

Figure 11.21   False rejection rate

When comparing the FAR and the FRR, one realizes that in most cases, whenever the curves overlap, they are related. This brings up the issue of the crossover error rate (see Table 11.2).

Table 11.2 An Access Control Matrix

Images

Crossover Error Rate

The crossover error rate (CER), also known as the equal error rate (EER), is the rate where both accept and reject error rates are equal. This is the desired state for most efficient operation, and it can be managed by manipulating the threshold value used for matching. In practice, the values might not be exactly the same, but they will typically be close to each other. Figure 11.22 demonstrates the relationship between the FAR, FRR, and CER.

Images

Figure 11.22   FRR, FAR, and CER compared

Biometrics Calculation Example

Assume we are using a fingerprint biometric system, and we have 1000 users. During the enrollment stage, five users were unable to enroll (the system could not establish a fingerprint signature/template for them). This means the system has a failure to enroll rate (FER) of 0.5 percent. In other words, only 995 users can use the system, and an alternative means needs to be in place for the users who cannot use the system.

During the testing of the 995 enrolled users, 50 users were rejected when the system compared their fingerprints against their own enrollment fingerprint templates.

FRR = (NFR / NEA) * 100%

(NFR = number of false rejections, and NEA = number of legitimate access attempts)

FRR = (50 / 995) * 100%

This makes the FRR 5.03 percent.

Also, 25 of the 995 users were incorrectly accepted by the system when the system matched their fingerprints against other users' fingerprint templates.

FAR = (NFA / NIA) * 100%

(NFA = number of false acceptances, and NIA = number of imposter attempts)

FAR = (25 / 995) * 100%

This means the FAR is 2.51 percent.

The lower the FAR and FRR, the better the system, and the ideal situation is setting the thresholds where the FAR and FRR are equal (the crossover error rate).
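
The arithmetic in this example can be checked with a few lines of Python; the figures below are the ones from the worked example above.

total_users       = 1000
failed_enrollment = 5
enrolled          = total_users - failed_enrollment   # 995 usable accounts
false_rejections  = 50    # legitimate users turned away
false_acceptances = 25    # imposter attempts let through

fer = failed_enrollment / total_users * 100    # failure to enroll rate
frr = false_rejections / enrolled * 100        # false rejection rate
far = false_acceptances / enrolled * 100       # false acceptance rate
print(f"FER = {fer:.2f}%  FRR = {frr:.2f}%  FAR = {far:.2f}%")
# FER = 0.50%  FRR = 5.03%  FAR = 2.51%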

Images

Understand how to calculate FAR and FRR, given data. This is an easy calculation, and remember to include those who fail enrollment.

Authorization

Authorization is the process of permitting or denying access to a specific resource. Once identity is confirmed via authentication, specific actions can be authorized or denied. Many types of authorization schemes are used, but the purpose is the same: determine whether a given user who has been identified has permissions for a particular object or resource being requested. This functionality is frequently part of the operating system and is transparent to users.

The separation of tasks, from identification to authentication to authorization, has several advantages. Many methods can be used to perform each task, and on many systems several methods are concurrently present for each task. Separation of these tasks into individual elements allows combinations of implementations to work together. Any system or resource, be it hardware (router or workstation) or a software component (database system), that requires authorization can use its own authorization method once authentication has occurred. This makes for efficient and consistent application of these principles.

Access Control

The term access control has been used to describe a variety of protection schemes. It sometimes refers to all security features used to prevent unauthorized access to a computer system or network—or even a network resource such as a printer. In this sense, it may be confused with authentication. More properly, access is the ability of a subject (such as an individual or a process running on a computer system) to interact with an object (such as a file or hardware device). Once the individual has verified their identity, access controls regulate what the individual can actually do on the system. Just because a person is granted entry to the system, that does not mean that they should have access to all data the system contains.

Access Control vs. Authentication

It may seem that access control and authentication are two ways to describe the same protection mechanism. This, however, is not the case. Authentication provides a way to verify to the computer who the user is. Once the user has been authenticated, the access controls decide what operations the user can perform. The two go hand-in-hand but are not the same thing.

ACLs

Access control lists (ACLs) are lists of users and their permitted actions. Users can be identified in a variety of ways, including by a user ID, a network address, or a token. The simple objective is to create a lookup system that allows a device to determine which actions are permitted and which are denied. A router can contain an ACL that lists permitted addresses or blocked addresses, or a combination of both. The most common implementation is for file systems, where named user IDs are used to determine which file system attributes are permitted to the user. This same general concept is reused across all types of devices and situations in networking.

Just as the implicit deny rule applies to firewall rulesets, the explicit deny principle can be applied to ACLs. When this approach is used for ACL building, allowed traffic must be explicitly allowed by a permit statement. All of the specific permit commands are followed by a deny all statement in the ruleset. ACL entries are typically evaluated in a top-to-bottom fashion, so any traffic that does not match a “permit” entry will be dropped by a “deny all” statement placed as the last line in the ACL.
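
A minimal Python sketch of this top-to-bottom evaluation follows; the networks and addresses are hypothetical. The first matching entry wins, and the trailing deny entry (or the implicit deny fallback) drops anything not explicitly permitted.

import ipaddress

ACL = [
    ("permit", "10.0.0.0/8"),
    ("permit", "192.168.1.0/24"),
    ("deny",   "0.0.0.0/0"),      # explicit deny all as the final entry
]

def evaluate(source_ip: str) -> str:
    addr = ipaddress.ip_address(source_ip)
    for action, network in ACL:                      # evaluated top to bottom
        if addr in ipaddress.ip_network(network):
            return action                            # first match wins
    return "deny"                                    # implicit deny if nothing matched

print(evaluate("10.1.2.3"))      # permit
print(evaluate("203.0.113.9"))   # deny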

Images Remote Access Methods

When a user requires access to a remote system, the process of remote access is used to determine the appropriate controls. This is done through a series of protocols and processes described in the remainder of this chapter.

Images

One security issue associated with 802.1X is that the authentication occurs only upon initial connection, and another user can insert themselves into the connection by changing packets or using a hub. The secure solution is to pair 802.1X, which authenticates the initial connection, with a VPN or IPsec, which provides persistent security.

IEEE 802.1X

IEEE 802.1X is an authentication standard that supports port-based authentication services between a user and an authorization device, such as an edge router. IEEE 802.1X is used by all types of networks, including Ethernet, Token Ring, and wireless. This standard describes methods used to authenticate a user prior to granting access to the network, using an authentication server such as a RADIUS server. 802.1X acts through an intermediate device, such as an edge switch, enabling ports to carry normal traffic if the connection is properly authenticated. This prevents unauthorized clients from accessing the publicly available ports on a switch, keeping unauthorized users out of a LAN. Until a client has successfully authenticated itself to the device, only Extensible Authentication Protocol over LAN (EAPOL) traffic is passed by the switch.

EAPOL is an encapsulation method for passing EAP messages over IEEE 802 frames. EAP is a general protocol that can support multiple methods of authentication, including one-time passwords, Kerberos, public keys, and security device methods such as smart cards. Once a client successfully authenticates itself to the 802.1X device, the switch opens ports for normal traffic. At this point, the client can communicate with the system's AAA method, such as a RADIUS server, and authenticate itself to the network.
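
Conceptually, the controlled port behaves like a small state machine: it forwards only EAPOL frames until the authenticator signals success, and then it opens for normal traffic. The Python sketch below illustrates that behavior only; the frame types and method names are hypothetical and do not correspond to any real switch API.

class ControlledPort:
    def __init__(self):
        self.authorized = False

    def handle_frame(self, frame_type: str) -> str:
        if frame_type == "EAPOL":
            return "forward to authenticator (e.g., a RADIUS server via the switch)"
        if self.authorized:
            return "forward normally"
        return "drop (port not yet authorized)"

    def eap_success(self):
        self.authorized = True       # authenticator signaled EAP-Success

port = ControlledPort()
print(port.handle_frame("HTTP"))     # drop (port not yet authorized)
port.eap_success()
print(port.handle_frame("HTTP"))     # forward normally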

802.1X is commonly used on wireless access points as a port-based authentication service prior to admission to the wireless network. 802.1X over wireless uses either 802.11i or EAP-based protocols, such as EAP-TLS and PEAP-TLS.

Wireless Remote Access

Wireless is a common method of allowing remote access to a network, as it does not require physical cabling and allows mobile connections. Wireless security, including protocols such as 802.11i and EAP-based solutions, is covered in Chapter 12.

LDAP

A directory is a data storage mechanism similar to a database, but it has several distinct differences designed to provide efficient data-retrieval services compared to standard database mechanisms. A directory is designed and optimized for reading data, offering very fast search and retrieval operations. The types of information stored in a directory tend to be descriptive attribute data. A directory offers a static view of data that can be changed without a complex update transaction. The data is hierarchically described in a treelike structure, and a network interface for reading is typical. Common uses of directories include e-mail address lists, domain server data, and resource maps of network resources. LDAP is a protocol that is commonly used to handle user authentication/authorization as well as control access to Active Directory objects.

Images

A client starts an LDAP session by connecting to an LDAP server, called a Directory System Agent (DSA), which by default is on TCP and UDP port 389, or on port 636 for LDAPS (LDAP over SSL).

To enable interoperability, the X.500 standard was created as a standard for directory services. The primary method for accessing an X.500 directory is through the Directory Access Protocol (DAP), a heavyweight protocol that is difficult to implement completely, especially on PCs and more constrained platforms. This led to the Lightweight Directory Access Protocol (LDAP), which contains the most commonly used functionality. LDAP can interface with X.500 services, and, most importantly, LDAP can be used over TCP with significantly less computing resources than a full X.500 implementation. LDAP offers all of the functionality most directories need and is easier and more economical to implement; hence, LDAP has become the Internet standard for directory services. LDAP standards are governed by two separate entities, depending on use: The International Telecommunication Union (ITU) governs the X.500 standard, and LDAP is governed for Internet use by the IETF. Many RFCs apply to LDAP functionality, but some of the most important are RFCs 2251 through 2256 and RFCs 2829 and 2830.
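
As an illustration of how an application might consult a directory, the following sketch uses the third-party Python ldap3 package (installed separately with pip). The host name, bind DN, credentials, and search base are hypothetical; binding to the directory is itself an authentication step, and the search retrieves attributes that can feed authorization decisions.

from ldap3 import Server, Connection, ALL

server = Server("ldap.example.com", port=389, get_info=ALL)   # use port 636 for LDAPS
conn = Connection(server,
                  user="cn=reader,dc=example,dc=com",
                  password="readerpassword",
                  auto_bind=True)                             # bind = authenticate to the DSA

# Look up a user entry for authentication/authorization decisions.
conn.search("dc=example,dc=com", "(uid=alice)", attributes=["cn", "mail"])
for entry in conn.entries:
    print(entry.entry_dn, entry.cn, entry.mail)
conn.unbind()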

RADIUS

Remote Authentication Dial-In User Service (RADIUS) is an AAA protocol. It was submitted to the Internet Engineering Task Force (IETF) as a series of RFCs: RFC 2058 (RADIUS specification), RFC 2059 (RADIUS accounting standard), and updated RFCs 2865–2869, which are now standard protocols.

RADIUS is designed as a connectionless protocol that uses the User Datagram Protocol (UDP) as its transport layer protocol. Connection type issues, such as timeouts, are handled by the RADIUS application instead of the transport layer. RADIUS utilizes UDP port 1812 for authentication and authorization and UDP port 1813 for accounting functions.

RADIUS is a client/server protocol. The RADIUS client is typically a network access server (NAS). Network access servers act as intermediaries, authenticating clients before allowing them access to a network. RADIUS, RRAS (Microsoft), RAS, and VPN servers can all act as network access servers. The RADIUS server is a process or daemon running on a UNIX or Windows Server machine. Communications between a RADIUS client and RADIUS server are encrypted using a shared secret that is manually configured into each entity and not shared over a connection. Hence, communications between a RADIUS client (typically a NAS) and a RADIUS server are secure, but the communications between a user (typically a PC) and the RADIUS client are subject to compromise. This is important to note, because if the user’s machine (the PC) is not the RADIUS client (the NAS), then communications between the PC and the NAS are typically not encrypted and are passed in the clear.

RADIUS Authentication

The RADIUS protocol is designed to allow a RADIUS server to support a wide variety of methods to authenticate a user. When the server is given a username and password, it can support Point-to-Point Protocol (PPP), Password Authentication Protocol (PAP), Challenge-Handshake Authentication Protocol (CHAP), UNIX login, and other mechanisms, depending on what was established when the server was set up. A user login authentication consists of a query (Access-Request) from the RADIUS client and a corresponding response (Access-Accept, Access-Challenge, or Access-Reject) from the RADIUS server, as you can see in Figure 11.23. The Access-Challenge response is the initiation of a challenge/response handshake. If the client cannot support challenge/response, then it treats the Challenge message as an Access-Reject.

Images

Figure 11.23   RADIUS communication sequence

The Access-Request message contains the username, encrypted password, NAS IP address, and port. The message also contains information concerning the type of session the user wants to initiate. Once the RADIUS server receives this information, it searches its database for a match on the username. If a match is not found, either a default profile is loaded or an Access-Reject reply is sent to the user. If the entry is found or the default profile is used, the next phase involves authorization, because in RADIUS these steps are performed in sequence. Figure 11.23 shows the interaction between a user and the RADIUS client and RADIUS server as well as the steps taken to make a connection.
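
The following Python sketch mimics that decision flow at a very high level: a NAS submits an Access-Request, and the server answers with Access-Accept or Access-Reject after checking its user store. The dictionary fields and user database are hypothetical stand-ins, not the real RADIUS attribute format or wire protocol.

USER_DB = {"alice": "s3cret"}            # the RADIUS server's credential store

def radius_server(access_request: dict) -> str:
    password = USER_DB.get(access_request["username"])
    if password is None:
        return "Access-Reject"           # no match and no default profile configured
    if password != access_request["password"]:
        return "Access-Reject"
    return "Access-Accept"               # authorization attributes would ride along here

request = {"username": "alice", "password": "s3cret",
           "nas_ip": "192.0.2.10", "nas_port": 0}
print(radius_server(request))            # Access-Accept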

RADIUS Authorization

In the RADIUS protocol, the authentication and authorization steps are performed together in response to a single Access-Request message, although they are sequential steps (see Figure 11.23). Once an identity has been established, either known or default, the authorization process determines what parameters are returned to the client. Typical authorization parameters include the service type allowed (shell or framed), the protocols allowed, the IP address to assign to the user (static or dynamic), and the access list to apply or static route to place in the NAS routing table.

Shell Accounts

Shell account requests are those that desire command-line access to a server. Once authentication is successfully performed, the client is connected directly to the server so command-line access can occur. Rather than the user being given a direct IP address on the network, the NAS acts as a pass-through device conveying access.

These parameters are all defined in the configuration information on the RADIUS client and server during setup. Using this information, the RADIUS server returns an Access-Accept message with these parameters to the RADIUS client.

RADIUS Accounting

The RADIUS accounting function is performed independently of RADIUS authentication and authorization. The accounting function uses a separate UDP port, 1813 (see Table 11.3 in the “Connection Summary” section at the end of the chapter). The primary functionality of RADIUS accounting was established to support ISPs in their user accounting, and it supports typical accounting functions for time billing and security logging. The RADIUS accounting functions are designed to allow data to be transmitted at the beginning and end of a session, and they can indicate resource utilization, such as time, bandwidth, and so on.

Diameter

Diameter is the name of an AAA protocol suite, designated by the IETF to replace the aging RADIUS protocol. Diameter operates in much the same way as RADIUS in a client/server configuration, but it improves upon RADIUS, resolving discovered weaknesses. Diameter is a TCP-based service and has more extensive AAA capabilities. Diameter is also designed for all types of remote access, not just modem pools. As more and more users adopt broadband and other connection methods, these newer services require more options to determine permissible usage properly and to account for and log the usage. Diameter is designed with these needs in mind.

Diameter also has an improved method of encrypting message exchanges to prohibit replay and man-in-the-middle attacks. Taken all together, Diameter, with its enhanced functionality and security, is an improvement on the proven design of the old RADIUS standard.

TACACS+

The Terminal Access Controller Access Control System+ (TACACS+) protocol is the current generation of the TACACS family. Originally TACACS was developed by BBN Planet Corporation for MILNET, an early military network, but it has been enhanced by Cisco, which has expanded its functionality twice. The original BBN TACACS system provided a combination process of authentication and authorization. Cisco extended this to Extended Terminal Access Controller Access Control System (XTACACS), which provided for separate authentication, authorization, and accounting processes. The current generation, TACACS+, has extended attribute control and accounting processes.

One of the fundamental design aspects is the separation of authentication, authorization, and accounting in this protocol. Although there is a straightforward lineage of these protocols from the original TACACS, TACACS+ is a major revision and is not backward-compatible with previous versions of the protocol series.

TACACS+ uses TCP as its transport protocol, typically operating over TCP port 49. This port is used for the login process and is reserved in RFC 3232, "Assigned Numbers," which is maintained as an online database by the Internet Assigned Numbers Authority (IANA). In the IANA registry, both UDP port 49 and TCP port 49 are reserved for the TACACS+ login host protocol (see Table 11.3 in the "Connection Summary" section at the end of the chapter).

TACACS+ is a client/server protocol, with the client typically being a NAS and the server being a daemon process on a UNIX, Linux, or Windows server. This is important to note, because if the user’s machine (usually a PC) is not the client (usually a NAS), then communications between the PC and NAS are typically not encrypted and are passed in the clear. Communications between a TACACS+ client and TACACS+ server are encrypted using a shared secret that is manually configured into each entity and is not shared over a connection. Hence, communications between a TACACS+ client (typically a NAS) and a TACACS+ server are secure, but the communications between a user (typically a PC) and the TACACS+ client are subject to compromise.

TACACS+ Authentication

TACACS+ allows for arbitrary length and content in the authentication exchange sequence, enabling many different authentication mechanisms to be used with TACACS+ clients. Authentication is optional and is determined as a site-configurable option. When authentication is used, common forms include PPP PAP, PPP CHAP, PPP EAP, token cards, and Kerberos. The authentication process is performed using three different packet types: START, CONTINUE, and REPLY. START and CONTINUE packets originate from the client and are directed to the TACACS+ server. The REPLY packet is used to communicate from the TACACS+ server to the client.

The authentication process is illustrated in Figure 11.24, and it begins with a START message from the client to the server. This message may be in response to an initiation from a PC connected to the TACACS+ client. The START message describes the type of authentication being requested (simple plaintext password, PAP, CHAP, and so on). This START message may also contain additional authentication data, such as a username and password. A START message is also sent as a response to a restart request from the server in a REPLY message. A START message always has its sequence number set to 1.

When a TACACS+ server receives a START message, it sends a REPLY message. This REPLY message indicates whether the authentication is complete or needs to be continued. If the process needs to be continued, the REPLY message also specifies what additional information is needed. The response from a client to a REPLY message requesting additional data is a CONTINUE message. This process continues until the server has all the information needed, and the authentication process concludes with a success or failure.
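
The exchange can be sketched in Python as a simple message loop: the client sends START, the server replies asking for more data, and a CONTINUE supplies it. The field names and status values below are illustrative only and are not the actual TACACS+ packet format.

def tacacs_server(packet: dict) -> dict:
    if packet["type"] == "START" and "password" not in packet:
        return {"type": "REPLY", "status": "GETPASS"}      # more information is needed
    if packet.get("password") == "s3cret":
        return {"type": "REPLY", "status": "PASS"}         # authentication succeeded
    return {"type": "REPLY", "status": "FAIL"}             # authentication failed

reply = tacacs_server({"type": "START", "seq": 1, "user": "alice"})
print(reply)                                               # server asks for the password
reply = tacacs_server({"type": "CONTINUE", "seq": 3,
                       "user": "alice", "password": "s3cret"})
print(reply)                                               # authentication concludes with success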

TACACS+ Authorization

Authorization is defined as the granting of specific permissions based on the privileges held by the account. This generally occurs after authentication, as shown in Figure 11.24, but this is not a firm requirement. A default state of “unknown user” exists before a user is authenticated, and permissions can be determined for an unknown user. As with authentication, authorization is an optional process and may or may not be part of a site-specific operation. When it is used in conjunction with authentication, the authorization process follows the authentication process and uses the confirmed user identity as input in the decision process.

Images

Figure 11.24   TACACS+ communication sequence

The authorization process is performed using two message types: REQUEST and RESPONSE. The authorization process is performed using an authorization session consisting of a single pair of REQUEST and RESPONSE messages. The client issues an authorization REQUEST message containing a fixed set of fields enumerating the authenticity of the user or process requesting permission and a variable set of fields enumerating the services or options for which authorization is being requested.

The RESPONSE message in TACACS+ is not a simple yes or no; it can also include qualifying information, such as a user time limit or IP restrictions. These limitations have important uses, such as enforcing time limits on shell access or enforcing IP access list restrictions for specific user accounts.

TACACS+ Accounting

As with the two previous services, accounting is also an optional function of TACACS+. When utilized, it typically follows the other services. Accounting in TACACS+ is defined as the process of recording what a user or process has done. Accounting can serve two important purposes:

Images   It can be used to account for services being utilized, possibly for billing purposes.

Images   It can be used for generating security audit trails.

TACACS+ accounting records contain several pieces of information to support these tasks. The accounting process has the information revealed in the authorization and authentication processes, so it can record specific requests by user or process. To support this functionality, TACACS+ has three types of accounting records: START, STOP, and UPDATE. Note that these are record types, not message types as earlier discussed.

Authentication Protocols

Numerous authentication protocols have been developed, used, and discarded in the brief history of computing. Some have come and gone because they did not enjoy market share, others have had security issues, and yet others have been revised and improved in newer versions. It’s impractical to cover them all, so only some of the common ones follow.

Tunneling

Layer 2 Tunneling Protocol (L2TP) and Point-to-Point Tunneling Protocol (PPTP) are both OSI Layer 2 tunneling protocols. Tunneling is the encapsulation of one packet within another, which allows you to hide the original packet from view or change the nature of the network transport. This can be done for both security and practical reasons.

From a practical perspective, assume that you are using TCP/IP to communicate between two machines. Your message may pass over various networks, such as an Asynchronous Transfer Mode (ATM) network, as it moves from source to destination. Because the ATM protocol can neither read nor understand TCP/IP packets, something must be done to make them passable across the network. By encapsulating a packet as the payload in a separate protocol, so it can be carried across a section of a network, a mechanism called a tunnel is created. At each end of the tunnel, called the tunnel endpoints, the payload packet is read and understood. As it goes into the tunnel, you can envision your packet being placed in an envelope with the address of the appropriate tunnel endpoint on it. When the envelope arrives at the tunnel endpoint, the original message (the tunnel packet’s payload) is re-created, read, and sent to its appropriate next stop. The information being tunneled is understood only at the tunnel endpoints; it is not relevant to intermediate tunnel points because it is only a payload.
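The envelope idea can be shown with a brief sketch. This is a conceptual illustration only; the outer "header" format below is invented and does not correspond to any real tunneling protocol.

```python
# Illustration of tunneling as encapsulation: the original packet becomes
# the opaque payload of an outer packet addressed to the tunnel endpoint.
# The header format here is hypothetical, not a real protocol.

def encapsulate(inner_packet: bytes, tunnel_endpoint: str) -> bytes:
    outer_header = f"TO={tunnel_endpoint};LEN={len(inner_packet)};".encode()
    return outer_header + inner_packet            # envelope + original packet

def decapsulate(outer_packet: bytes) -> bytes:
    _, _, rest = outer_packet.partition(b";LEN=")
    length = int(rest.split(b";", 1)[0])
    return rest.split(b";", 1)[1][:length]        # recover the original packet

original = b"IP|10.0.0.5->10.0.0.9|application data"
tunneled = encapsulate(original, "vpn-gw.example.com")
assert decapsulate(tunneled) == original          # endpoints see the same packet
```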

L2TP

Layer 2 Tunneling Protocol (L2TP) is also an Internet standard and came from the Layer 2 Forwarding (L2F) protocol, a Cisco initiative designed to address issues with PPTP. Whereas PPTP is designed around PPP and IP networks, L2F (and hence L2TP) is designed for use across all kinds of networks, including ATM and Frame Relay. Additionally, whereas PPTP is designed to be implemented in software at the client device, L2TP was conceived as a hardware implementation using a router or a special-purpose appliance. L2TP can also be implemented in software and is included in Microsoft's Routing and Remote Access Service (RRAS), which uses L2TP to create VPNs.

L2TP works in much the same way as PPTP, but it opens up several items for expansion. For instance, in L2TP, routers can be enabled to concentrate VPN traffic over higher-bandwidth lines, creating hierarchical networks of VPN traffic that can be more efficiently managed across an enterprise. L2TP also has the ability to use IPsec and Data Encryption Standard (DES) as encryption protocols, providing a higher level of data security. L2TP is also designed to work with established AAA services such as RADIUS and TACACS+ to aid in user authentication, authorization, and accounting.

L2TP is established via UDP port 1701, so this is an essential port to leave open across firewalls supporting L2TP traffic. Microsoft supports L2TP in Windows 2000 and above, but because of the computing power required, most implementations will use specialized hardware (such as a Cisco router).

PPTP

Microsoft led a consortium of networking companies to extend PPP to enable the creation of virtual private networks (VPNs). The result was the Point-to-Point Tunneling Protocol (PPTP), a network protocol that enables the secure transfer of data from a remote PC to a server by creating a VPN across a TCP/IP network. This remote network connection can also span a public switched telephone network (PSTN) and is thus an economical way of connecting remote dial-in users to a corporate data network. The incorporation of PPTP into the Microsoft Windows product line provides a built-in secure method of remote connection using the operating system, and this has given PPTP a large marketplace footprint.

For most PPTP implementations, three computers are involved: the PPTP client, the NAS, and a PPTP server, as shown in Figure 11.25. The connection between the remote client and the network is established in stages, as illustrated in Figure 11.26. First, the client makes a PPP connection to a NAS, typically an ISP. (In today's world of widely available broadband, if there is already an Internet connection, there is no need to perform the PPP connection to the ISP.) Once the PPP connection is established, a second connection is made over the PPP connection to the PPTP server. This second connection creates the VPN between the remote client and the PPTP server and acts as a tunnel for future data transfers. A typical example is a user in a hotel with a wireless Internet connection connecting to a corporate network. Although these diagrams illustrate a telephone connection, this first link can be made by virtually any method. Wired connections to the Internet are common in hotels today; these are typically provided by a local ISP and offer the same services as a phone connection, albeit at a much higher data transfer rate.

Images

Figure 11.25   PPTP communication diagram

Images

Figure 11.26   PPTP message encapsulation during transmission

PPTP establishes a tunnel from the remote PPTP client to the PPTP server and enables encryption within this tunnel. This provides a secure method of transport. To do this and still enable routing, an intermediate addressing scheme, Generic Routing Encapsulation (GRE), is used.

To establish the connection, PPTP uses communications across TCP port 1723 (see Table 11.3 in the “Connection Summary” section at the end of the chapter), so this port must remain open across the network firewalls for PPTP to be initiated. Although PPTP allows the use of any PPP authentication scheme, CHAP is used when encryption is specified, to provide an appropriate level of security. For the encryption methodology, Microsoft chose the RSA RC4 cipher with either a 40-bit or 128-bit session key, the length being determined by the operating system. Microsoft Point-to-Point Encryption (MPPE) is the extension to PPP that provides this encryption for VPNs that use PPTP as the tunneling protocol.
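As a quick operational check, the following sketch tests whether the PPTP control port is reachable from a client. The hostname is a placeholder, and a completed TCP handshake only shows that the port is open through intervening firewalls, not that PPTP will fully negotiate.

```python
# Quick reachability check for the PPTP control connection (TCP port 1723).
# "vpn.example.com" is a placeholder; substitute the actual PPTP server.
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True                      # TCP handshake completed
    except OSError:
        return False                         # filtered, closed, or unreachable

if __name__ == "__main__":
    print("TCP 1723 reachable:", port_open("vpn.example.com", 1723))
```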

PPP

Point-to-Point Protocol (PPP) is an older, still widely used protocol for establishing dial-in connections over serial lines or Integrated Services Digital Network (ISDN) services. PPP has several authentication mechanisms, including PAP, CHAP, and the Extensible Authentication Protocol (EAP). These protocols are used to authenticate the peer device, not a user of the system. PPP is a standardized Internet encapsulation of IP traffic over point-to-point links, such as serial lines. The authentication process is performed only when the link is established.

PPP Functions and Authentication

PPP supports three functions:

Images   Encapsulate datagrams across serial links

Images   Establish, configure, and test links using the Link Control Protocol (LCP)

Images   Establish and configure different network protocols using Network Control Protocols (NCPs)

PPP supports two authentication protocols:

Images   Password Authentication Protocol (PAP)

Images   Challenge-Handshake Authentication Protocol (CHAP)

EAP

Extensible Authentication Protocol (EAP) is a universal authentication framework defined by RFC 3748 that is frequently used in wireless networks and point-to-point connections. Although EAP is not limited to wireless and can be used for wired authentication, it is most often used in wireless LANs. EAP is discussed in detail in Chapter 12.

CHAP

Challenge-Handshake Authentication Protocol (CHAP) is used to provide authentication across a point-to-point link using PPP. CHAP authenticates the peer when the link is first established and may periodically re-verify its identity afterward; re-verification after link establishment is optional. CHAP uses a challenge/response system that is sometimes described as a three-way handshake, as illustrated in Figure 11.27. The initial challenge (a randomly generated number) is sent to the client. The client uses a one-way hashing function to calculate what the response should be and then sends this back. The server compares the response to what it calculated the response should be. If they match, communication continues; if the two values don't match, the connection is terminated. This mechanism relies on a shared secret between the two entities so that the correct values can be calculated.

Images

Figure 11.27   The CHAP challenge/response sequence
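The calculation itself is small. The following sketch assumes the MD5-based response defined in RFC 1994 (the response is a hash over the packet identifier, the shared secret, and the challenge); the secret and values here are illustrative only.

```python
# Minimal CHAP-style challenge/response (see RFC 1994): both sides share a
# secret; the response is MD5(identifier || secret || challenge).
import hashlib
import os

shared_secret = b"correct horse battery staple"   # configured on both ends

# Server side: issue a random challenge with a packet identifier.
identifier = bytes([1])
challenge = os.urandom(16)

# Client side: compute the response from the shared secret.
response = hashlib.md5(identifier + shared_secret + challenge).digest()

# Server side: compute the expected value and compare.
expected = hashlib.md5(identifier + shared_secret + challenge).digest()
print("authenticated" if response == expected else "connection terminated")
```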

Microsoft has created two versions of CHAP, modified to increase the usability of CHAP across Microsoft’s product line. MS-CHAP v1, defined in RFC 2433, has been deprecated and was dropped in Windows Vista. The current standard, version 2, defined in RFC 2759, was introduced with Windows 2000.

NTLM

NT LAN Manager (NTLM) is an authentication protocol designed by Microsoft for use with the Server Message Block (SMB) protocol. SMB is an application-level network protocol primarily used for sharing files and printers in Windows-based networks. NTLM is the successor to the authentication protocol in Microsoft LAN Manager (LANMAN), an older Microsoft product. Both of these suites have been widely replaced by Microsoft’s Kerberos implementation, although NTLM is still used for logon authentication on standalone Windows machines. The current version is NTLM v2, which was introduced with Windows NT 4.0 SP4. NTLM uses an encrypted challenge/response protocol to authenticate a user without sending the user’s password over the wire, but its cryptography, which includes the MD4 hash, is weak by today’s standards. Although Microsoft has adopted the Kerberos protocol for authentication, NTLM v2 is still used in the following situations:

Images   When authenticating to a server using an IP address

Images   When authenticating to a server that belongs to a different Active Directory forest

Images   When authenticating to a server that doesn’t belong to a domain

Images   When no Active Directory domain exists (“workgroup” or “peer-to-peer” connection)

PAP

Password Authentication Protocol (PAP) involves a two-way handshake in which the username and password are sent across the link in cleartext. PAP authentication does not provide any protection against playback and line sniffing. PAP is now a deprecated standard.

Telnet

One of the methods to grant remote access to a system is through Telnet. Telnet is the standard terminal-emulation protocol within the TCP/IP protocol series, and it is defined in RFC 854. Telnet allows users to log in remotely and access resources as if the user had a local terminal connection. Telnet is an old protocol and offers little security. Information, including account names and passwords, is passed in cleartext over the TCP/IP connection.

Images

Telnet uses TCP port 23. Be sure to memorize the common ports used by common services for the exam.

Telnet makes its connection using TCP port 23. As Telnet is implemented on most products using TCP/IP, it is important to control access to Telnet on machines and routers when setting them up. Failure to control access by using firewalls, access lists, and other security methods, or even by disabling the Telnet daemon, is equivalent to leaving an open door for unauthorized users on a system.

SSH

Secure Shell (SSH) is a protocol series designed to facilitate secure network functions across an insecure network. SSH provides direct support for secure remote login, secure file transfer, and secure forwarding of TCP/IP and X Window System traffic. An SSH connection is an encrypted channel, providing for confidentiality and integrity protection.

Images

SSH uses TCP port 22. SCP (secure copy) and SFTP (secure FTP) use SSH, so each also uses TCP port 22.

SSH has its origins as a replacement for the insecure Telnet application from the UNIX operating system. Long bundled with UNIX systems, Telnet allowed users to connect between systems. Although Telnet is still used today, it has some drawbacks, as discussed in the preceding section. Some enterprising University of California, Berkeley, students subsequently developed the r- commands, such as rlogin, to permit access based on the user and source system, as opposed to passing passwords. This was not perfect either, however, because when a password was required, it was still passed in the clear. This led to the development of the SSH protocol series, designed to eliminate the insecurities associated with Telnet, the r- commands, and other means of remote access.

SSH opens a secure transport channel between machines by using an SSH daemon on each end. These daemons initiate contact over TCP port 22 and then communicate over higher ports in a secure mode. One of the strengths of SSH is its support for many different encryption protocols. SSH 1.0 started with RSA algorithms, but at the time they were still under patent, and this led to SSH 2.0 with extended support for Triple DES (3DES) and other encryption methods. Today, SSH can be used with a wide range of encryption protocols, including RSA, 3DES, Blowfish, International Data Encryption Algorithm (IDEA), CAST128, AES256, and others.

The SSH protocol has facilities to encrypt data automatically, provide authentication, and compress data in transit. It can support strong encryption, cryptographic host authentication, and integrity protection. The transport layer's authentication services are host-based, not user-based; if user authentication is desired, it is handled separately, at a higher layer, by the user authentication protocol described below. The protocol is designed to be flexible and simple, and it is designed specifically to minimize the number of round trips between systems. The key exchange, public key, symmetric key, message authentication, and hash algorithms are all negotiated at connection time. The integrity of each data packet is ensured through a message authentication code that is computed from a shared secret, the contents of the packet, and the packet sequence number.

The SSH protocol consists of three major components:

Images   Transport layer protocol Provides server authentication, confidentiality, integrity, and compression

Images   User authentication protocol Authenticates the client to the server

Images   Connection protocol Provides multiplexing of the encrypted tunnel into several logical channels

SSH is very popular in the UNIX environment, and it is actively used as a method of establishing VPNs across public networks. Because all communications between the two machines are encrypted at the application layer by the two SSH daemons, very secure solutions can be built, including solutions that resist monitoring by outside services. Because SSH is a standard protocol series with connection parameters established via TCP port 22, different vendors can build differing solutions that still interoperate.
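As an example of establishing such an encrypted session programmatically, the following sketch assumes the third-party Paramiko library; the host, account, and key path are placeholders.

```python
# Opening an encrypted SSH session and running one command, using the
# third-party Paramiko library (pip install paramiko). Host, user, and
# key file below are placeholders for illustration.
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()                     # verify the server host key
client.connect("server.example.com", port=22,
               username="admin",
               key_filename="/home/admin/.ssh/id_ed25519")

stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode())
client.close()
```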

RDP

Remote Desktop Protocol (RDP) is a proprietary Microsoft protocol designed to provide a graphical connection to another computer. The computer requesting the connection runs RDP client software (built into Windows), and the target runs an RDP server. This software has been available for many versions of Windows and was formerly called Terminal Services. Client and server versions also exist for Linux platforms. RDP uses TCP and UDP port 3389, so if RDP is desired, this port needs to be open on the firewall.

Although Windows Server implementations of SSH exist, SSH has not been a popular server-side protocol in the Windows environment. The wide array of commercial SSH clients developed for the Windows platform, however, indicates the marketplace strength of connecting desktop PCs to UNIX-based servers using this protocol.

SAML

Security Assertion Markup Language (SAML) is a single sign-on capability used for web applications to ensure user identities can be shared and are protected. It defines standards for exchanging authentication and authorization data between security domains. It is becoming increasingly important with cloud-based solutions and with Software as a Service (SaaS) applications because it ensures interoperability across identity providers.

SAML is an XML-based protocol that uses security tokens and assertions to pass information about a “principal” (typically an end user) between a SAML authority (an “identity provider,” or IdP) and the service provider (SP). The principal requests a service from the SP, which then requests and obtains an identity assertion from the IdP. The SP can then grant access or perform the requested service for the principal.

OAuth

OAuth (Open Authorization) is an open protocol that allows secure token-based authentication and authorization from web, mobile, and desktop applications in a simple, standard way over the Internet. OAuth is used by companies such as Google, Facebook, Microsoft, and Twitter to permit users to share information about their accounts with third-party applications or web sites. OAuth 1.0 was developed by a Twitter engineer as part of the Twitter OpenID implementation. OAuth 2.0 (which is not backward compatible) has taken off with support from most major web platforms. OAuth’s main strength is that an external partner site can use it to access protected data without having to re-authenticate the user.

OAuth was created to remove the need for users to share their passwords with third-party applications, instead substituting a token. OAuth 2.0 expanded this into also providing authentication services, so it can eliminate the need for OpenID.
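A common concrete form of this token substitution is the OAuth 2.0 authorization code grant (RFC 6749). The following sketch assumes the third-party requests library; the endpoint URLs, client credentials, and authorization code are placeholders, not real provider values.

```python
# Exchanging an OAuth 2.0 authorization code for an access token
# (authorization code grant, RFC 6749). Uses the third-party requests
# library; the token endpoint and client credentials are placeholders.
import requests

def exchange_code_for_token(code: str) -> dict:
    resp = requests.post(
        "https://auth.example.com/oauth2/token",       # provider's token endpoint
        data={
            "grant_type": "authorization_code",
            "code": code,                               # code returned to redirect_uri
            "redirect_uri": "https://app.example.com/callback",
            "client_id": "my-client-id",
            "client_secret": "my-client-secret",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()           # typically contains access_token, expires_in, ...

token = exchange_code_for_token("AUTH_CODE_FROM_REDIRECT")
headers = {"Authorization": f"Bearer {token['access_token']}"}
# The third-party app now calls the API with the token instead of a password.
profile = requests.get("https://api.example.com/me", headers=headers, timeout=10)
```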

OpenID Connect

OpenID Connect is a simple identity layer on top of the OAuth 2.0 protocol. OpenID Connect allows clients of all types (mobile, JavaScript, and web-based clients) to request and receive information about authenticated sessions and end users. OpenID is about proving who you are, which is the first step on the authentication-authorization ladder. To perform authorization, a second process is needed, so OpenID is commonly paired with OAuth 2.0. OpenID was created for federated authentication; it lets a third party authenticate your users for you, using accounts the users already have.

Images

OpenID and OAuth are typically used together, yet have different purposes. OpenID is used for authentication, whereas OAuth is used for authorization.

Shibboleth

Shibboleth is a service designed to enable single sign-on and federated identity-based authentication and authorization across networks. It began in 2000 and has been through several revisions and versions, but it has yet to gain widespread acceptance. Shibboleth is a web-based technology built on SAML. It uses the HTTP/POST, artifact, and attribute push profiles of SAML, including both Identity Provider (IdP) and Service Provider (SP) components, to achieve its goals. As such, it is included in many services that use SAML for identity management.

Secure Token

Within a claims-based identity framework, such as OASIS WS-Trust, security tokens are used. A secure token service is responsible for issuing, validating, renewing, and cancelling these security tokens. The tokens issued can then be used to identify the holder of the token to any services that adhere to the WS-Trust standard. Secure tokens solve the problem of authentication across stateless platforms, because user identity must be established with each request. The following outlines the basic five-step process for using tokens:

1.   The user requests access with a username and password.

2.   The secure token service validates the user’s credentials.

3.   The secure token service provides a signed token to the client.

4.   The client stores that token and sends it along with every request.

5.   The server verifies the token and responds with data.

This process is highly scalable and can be widely distributed and even shared. A user application can use a token for access via another app (for example, allowing someone to validate a login to Twitter via Facebook) because the token is transportable.
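The following sketch illustrates steps 3 through 5 with a simplified HMAC-signed token. It is JWT-like but deliberately not a real JWT; the signing key, claims, and lifetime are illustrative only.

```python
# Simplified illustration of a signed security token (steps 3-5 above):
# the token service signs claims with a key, the client presents the token
# on every request, and any server holding the key can verify it without
# keeping session state. JWT-like for illustration, not a real JWT.
import base64, hashlib, hmac, json, time

SIGNING_KEY = b"server-side-secret"      # held by the token service and servers

def issue_token(username, ttl_seconds=3600):
    claims = json.dumps({"sub": username, "exp": int(time.time()) + ttl_seconds})
    body = base64.urlsafe_b64encode(claims.encode()).decode()
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_token(token):
    body, _, sig = token.rpartition(".")
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                                         # forged or altered
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims if claims["exp"] > time.time() else None  # expired

token = issue_token("alice")              # step 3: service signs and returns token
print(verify_token(token))                # step 5: server verifies on each request
```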

FTP/FTPS/SFTP

One of the methods of transferring files between machines is through the use of the File Transfer Protocol (FTP). FTP is a plaintext protocol that operates by communicating over TCP between a client and a server. The client initiates a transfer with an FTP request to the server’s TCP port 21. This is the control connection, and this connection remains open over the duration of the file transfer. The actual data transfer occurs on a negotiated data transfer port, typically a high-order port number. FTP was not designed to be a secure method of transferring files. If a secure method is desired, then using FTPS or SFTP is best.

Images

FTP uses TCP port 21 as a control channel and TCP port 20 as a typical active mode data port, as some firewalls are set to block ports above 1024.

FTPS is the use of FTP over an SSL/TLS-secured channel. This can be done either in explicit mode, where an AUTH TLS command is issued, or in implicit mode, where the transfer occurs over TCP port 990 for the control channel and TCP port 989 for the data channel. SFTP is not FTP per se, but rather a completely separate secure file transfer protocol (the SSH File Transfer Protocol) defined by an IETF draft; the latest version of the draft, version 6, expired in July 2007, but the protocol has been incorporated into products in the marketplace.
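As an example of explicit-mode FTPS, the following sketch uses Python's standard ftplib; the host and credentials are placeholders.

```python
# Explicit-mode FTPS with Python's standard library: connect on port 21,
# upgrade the control connection to TLS, then protect the data channel too.
# Host and credentials are placeholders.
from ftplib import FTP_TLS

ftps = FTP_TLS("ftp.example.com")    # control connection on TCP port 21
ftps.login("user", "password")       # upgrades to TLS before sending credentials
ftps.prot_p()                        # switch the data channel to TLS ("PROT P")
ftps.retrlines("LIST")               # directory listing over the protected channel
ftps.quit()
```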

It is also possible to run FTP over SSH, as later versions of SSH allow securing of channels such as the FTP control channel; this has also been referred to as Secure FTP, or SFTP. This leaves the data channel unencrypted, a problem that has been solved in version 3.0 of SSH, which supports FTP commands. The challenge of encrypting the FTP data communications is that the mutual port agreement must be opened on the firewall, and for security reasons, high-order ports that are not explicitly defined are typically secured. Because of this challenge, Secure Copy (SCP) is often a more desirable alternative to SFTP when using SSH.

VPNs

A virtual private network (VPN) is a secure virtual network built on top of a physical network. The security of a VPN lies in the encryption of packet contents between the endpoints that define the VPN. The physical network upon which a VPN is built is typically a public network, such as the Internet. Because the packet contents between VPN endpoints are encrypted, to an outside observer on the public network, the communication is secure, and depending on how the VPN is set up, security can even extend to the two communicating parties’ machines.

Images

VPNs are commonly used for remote access to enterprise networks, providing protection from outside traffic. VPNs can also be used from site to site between network nodes in an overall system with geographic separation.

Virtual private networking is not a protocol but rather a method of using protocols to achieve a specific objective—secure communications—as shown in Figure 11.28. A user who wants to have a secure communication channel with a server across a public network can set up two intermediary devices, called VPN endpoints, to accomplish this task. The user can communicate with their endpoint, and the server can communicate with its endpoint. The two endpoints then communicate across the public network. VPN endpoints can be software solutions, routers, or specific servers set up for specific functionality. This implies that VPN services are set up in advance and are not something negotiated on the fly.

Images

Figure 11.28   VPN service over an Internet connection

A typical use of VPN services is a user accessing a corporate data network from a home PC across the Internet. The employee installs VPN software from work on a home PC. This software is already configured to communicate with the corporate network’s VPN endpoint; it knows the location, the protocols that will be used, and so on. When the home user wants to connect to the corporate network, they connect to the Internet and then start the VPN software. The user can then log into the corporate network by using an appropriate authentication and authorization methodology. The sole purpose of the VPN connection is to provide a private connection between the machines, which encrypts any data sent between the home user’s PC and the corporate network. Identification, authorization, and all other standard functions are accomplished with the standard mechanisms for the established system.

Split Tunnels

Split tunneling is a form of VPN in which not all traffic is routed via the VPN. Split tunneling allows multiple connection paths: some traffic travels the protected route through the VPN, whereas other traffic, such as traffic to local network resources like printers, is routed via non-VPN paths. A full-tunnel solution routes all traffic over the VPN.

VPNs can use many different protocols to offer a secure method of communicating between endpoints. Common methods of encryption on VPNs include PPTP, IPsec, SSH, and L2TP, all of which are discussed in this chapter. The key is that both endpoints know the protocol and share a secret. All of this necessary information is established when the VPN is set up. At the time of use, the VPN only acts as a private tunnel between the two points and does not constitute a complete security solution.

Vulnerabilities of Remote Access Methods

The primary vulnerability associated with many of these methods of remote access is the passing of critical data in cleartext. Plaintext passing of passwords provides no security if the password is sniffed, and sniffers are easy to use on a network. Even plaintext passing of user IDs gives away information that can be correlated and possibly used by an attacker. Plaintext credential passing is one of the fundamental flaws with Telnet and is why SSH was developed. This is also one of the flaws with RADIUS and TACACS+, as each leaves a segment of the communication path unprotected. There are methods for overcoming these limitations, although they require discipline and understanding when setting up a system.

Access Violations

The importance of authentication and authorization to a security program cannot be overstated. These systems are the foundation of access to system objects, actions, and resources. Should failures occur, it is important to invoke logging and notification so that incident response can be activated if necessary. Access violations can be minor, or they can be significant with respect to risk, but they must be detected and acted upon. In this regard, the authorization system should be linked to logging for all critical items in a system so that actions can be initiated when violations occur.
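As a minimal illustration of that linkage, the following sketch emits a higher-severity log event on every denial so that monitoring can pick it up and trigger a response; the user IDs, resource names, and thresholds are invented for illustration.

```python
# Minimal sketch of tying authorization failures to logging so that
# access violations are detected and can drive incident response.
import logging

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("authz")

def authorize(user, resource, allowed):
    if user in allowed:
        log.info("ACCESS GRANTED user=%s resource=%s", user, resource)
        return True
    # Denials are logged at a higher severity so alerting can pick them up.
    log.warning("ACCESS VIOLATION user=%s resource=%s", user, resource)
    return False

authorize("alice", "payroll.db", allowed={"alice", "bob"})
authorize("mallory", "payroll.db", allowed={"alice", "bob"})
```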

The strength of the encryption algorithm is also a concern. Should a specific algorithm or method prove to be vulnerable, services that rely solely on it are also vulnerable. To get around this dependency, many of the protocols allow numerous encryption methods, so that should one prove vulnerable, a shift to another restores security.

As with any software implementation, there always exists the possibility that a bug could open the system to attack. Bugs have been corrected in most software packages to close holes that made systems vulnerable, and remote access functionality is no exception. This is not a Microsoft-only phenomenon, as one might believe from the popular press. Critical flaws have been found in almost every product, from open system implementations such as OpenSSH to proprietary systems such as Cisco IOS. The important issue is not the presence of software bugs, because as software continues to become more complex, this is an unavoidable issue. The true key is vendor responsiveness to fixing the bugs once they are discovered, and the major players, such as Cisco and Microsoft, have been very responsive in this area.

Images File System Security

Files need security on systems to prevent unauthorized access and unauthorized alterations. File system security is the set of mechanisms and processes employed to ensure this critical function. A combination of file storage mechanisms, access control lists, and access control models provides a means by which this can be done. You need a file system capable of supporting user-level access differentiation, something NTFS does but FAT32 does not. Next you need a functioning access control model, such as MAC, DAC, or ABAC, as previously described in this chapter. Then you need a system to apply the users’ permissions to the files, which can be handled by the OS, although administering and maintaining this can be a challenge.
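On a POSIX-style file system, applying discretionary permissions to a single file can be as simple as the following sketch; the path is a placeholder, and NTFS permissions would instead be managed through Windows ACL tools.

```python
# Applying discretionary (DAC) permissions to a file on a POSIX-style
# file system: restrict a sensitive file to read/write by its owner only.
# The path is a placeholder for illustration.
import os
import stat

path = "/srv/app/secrets.conf"
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)      # owner rw, no group/other access

mode = stat.filemode(os.stat(path).st_mode)      # e.g. '-rw-------'
print(f"{path}: {mode}")
```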

Images Database Security

Database security is a concern for many enterprises, as the data in the databases represents valuable information assets. Major database engines have built-in encryption capabilities. This can provide the desired levels of confidentiality and integrity to the contents of the database. The advantage to these encryption schemes is that they can be tailored to the data structure, protecting the essential columns while not impacting columns that are not sensitive. Properly employing database encryption requires that the data schema and its security requirements be designed into the database implementation. The advantages are better protection against any database compromise, and the performance hit is typically negligible with respect to other alternatives.
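As an illustration of the column-level idea, the following sketch encrypts only the sensitive column at the application layer, using SQLite and the third-party cryptography package; a production deployment would more likely rely on the database engine's built-in encryption features.

```python
# Sketch of column-level encryption: only the sensitive column is encrypted
# before storage, so non-sensitive columns remain queryable in plaintext.
# Uses SQLite and the third-party "cryptography" package
# (pip install cryptography).
import sqlite3
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, store this in a key vault
cipher = Fernet(key)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE employees (name TEXT, ssn BLOB)")

# Encrypt only the sensitive column before inserting.
db.execute("INSERT INTO employees VALUES (?, ?)",
           ("Alice Adams", cipher.encrypt(b"123-45-6789")))

# The name column is plaintext and searchable; the SSN must be decrypted to read.
name, enc_ssn = db.execute("SELECT name, ssn FROM employees").fetchone()
print(name, cipher.decrypt(enc_ssn).decode())
```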

Images Connection Summary

Many protocols are used for remote access, authentication, and related purposes. These methods have their own assigned ports, and these assignments are summarized in Table 11.3.

Table 11.3 Common TCP/UDP Remote Access Networking Port Assignments

Images

Images For More Information

Microsoft’s TechNet Group Policy page http://technet.microsoft.com/en-us/windowsserver/grouppolicy/default.aspx

SANS Consensus Policy Resource Community – Password Policy https://www.sans.org/security-resources/policies/general/pdf/password-protection-policy

Chapter 11 Review

images   Chapter Summary


After reading this chapter and completing the exercises, you should understand the following about privilege management, authentication, and remote access protocols.

Identify the differences among user, group, and role management

Images   Privilege management is the process of restricting a user’s ability to interact with the computer system.

Images   Privilege management can be based on an individual user basis, on membership in a specific group or groups, or on a function/role.

Images   Key concepts in privilege management are the ability to restrict and control access to information and information systems.

Images   One of the methods used to simplify privilege management is single sign-on, which requires a user to authenticate successfully once. The validated credentials and associated rights and privileges are then automatically carried forward when the user accesses other systems or applications.

Implement password and domain password policies

Images   Password policies are sets of rules that help users select, employ, and store strong passwords. Tokens combine “something you have” with “something you know,” such as a password or PIN, and can be hardware or software based.

Images   Passwords should have a limited span and should expire on a scheduled basis.

Describe methods of account management (SSO, time of day, logical token, account expiration)

Images   Administrators have many different tools at their disposal to control access to computer resources, including password- and account-expiration methods.

Images   User authentication methods can incorporate several factors, including tokens.

Images   Users can be limited as to the hours during which they can access resources.

Images   Resources such as files, folders, and printers can be controlled through permissions or access control lists.

Images   Permissions can be assigned based on a user’s identity or their membership in one or more groups.

Describe methods of access management (MAC, DAC, and RBAC)

Images   Mandatory access control is based on the sensitivity of the information or process itself.

Images   Discretionary access control uses file permissions and ACLs to restrict access based on a user’s identity or group membership.

Images   Role-based access control restricts access based on the user’s assigned role or roles.

Images   Rule-based access control restricts access based on a defined set of rules established by the administrator.

Discuss the methods and protocols for remote access to networks

Images   Remote access protocols provide a mechanism to remotely connect clients to networks.

Images   A wide range of remote access protocols has evolved to support various security and authentication mechanisms.

Images   Remote access is granted via remote access servers, such as RRAS and RADIUS.

Identify authentication, authorization, and accounting (AAA) protocols

Images   Authentication is a cornerstone element of security, connecting access to a previously approved user ID.

Images   Authorization is the process of determining whether an authenticated user has permission.

Images   Accounting protocols manage connection time and cost records.

Explain authentication methods and the security implications in their use

Images   Password-based authentication is still the most widely used because of cost and ubiquity.

Images   Ticket-based systems, such as Kerberos, form the basis for most modern authentication and credentialing systems.

Implement virtual private networks (VPNs) and their security aspects

Images   VPNs use protocols to establish a private network over a public network, shielding user communications from outside observation.

Images   VPNs can be invoked via many different protocol mechanisms and involve either a hardware or software client on each end of the communication channel.

images   Key Terms


AAA (336)

access control (349)

access control list (ACL) (327)

access control matrix (328)

accounting (336)

account expiration (335)

account maintenance (333)

account recertification (334)

administrator (320)

attribute-based access control (ABAC) (332)

authentication (336)

authentication server (AS) (339)

authorization (336)

basic authentication (338)

biometric factors (344)

certificate (340)

Challenge-Handshake Authentication Protocol (CHAP) (359)

client-to-server ticket (339)

Common Access Card (CAC) (340)

credential management (332)

crossover error rate (347)

digest authentication (338)

digital certificate (340)

directory (350)

discretionary access control (DAC) (329)

domain controller (323)

domain password policy (323)

eXtensible Access Control Markup Language (XACML) (332)

Extensible Authentication Protocol (EAP) (359)

false acceptance rate (346)

false negative (345)

false positive (345)

false rejection rate (347)

federated identity management (336)

FTPS (363)

generic accounts (320)

group (321)

group policy object (GPO) (332)

guest accounts (321)

HMAC-based One-Time Password (HOTP) (341)

identification (336)

IEEE 802.1X (349)

Kerberos (338)

key distribution center (KDC) (339)

Layer 2 Tunneling Protocol (L2TP) (357)

Lightweight Directory Access Protocol (LDAP) (350)

mandatory access control (MAC) (329)

multifactor authentication (342)

mutual authentication (340)

OAuth (Open Authorization) (362)

offboarding (321)

onboarding (321)

OpenID (362)

OpenID Connect (362)

Password Authentication Protocol (PAP) (360)

permissions (320)

personal identity verification (PIV) (340)

Point-to-Point Protocol (PPP) (358)

Point-to-Point Tunneling Protocol (PPTP) (357)

privilege management (319)

privileged accounts (321)

privileges (318)

remote access server (RAS) (336)

Remote Authentication Dial-In User Service (RADIUS) (351)

Remote Desktop Protocol (RDP) (361)

rights (319)

role (322)

role-based access control (RBAC) (331)

root (320)

rule-based access control (331)

Security Assertion Markup Language (SAML) (361)

secure token (362)

service accounts (321)

SFTP (363)

single sign-on (SSO) (324)

shared accounts (321)

Shibboleth (362)

smart card (342)

software tokens (341)

something you are (342)

something you do (343)

something you have (343)

something you know (343)

somewhere you are (343)

superuser (320)

Terminal Access Controller Access Control System+ (TACACS+) (353)

ticket-granting server (TGS) (339)

ticket-granting ticket (TGT) (339)

Time-based One-Time Password (TOTP) (341)

Time-of-day restrictions (334)

token (340)

transitive trust (343)

tunneling (356)

usage auditing and review (334)

user (319)

username (319)

virtual private network (VPN) (363)

images   Key Terms Quiz


Use terms from the Key Terms list to complete the sentences that follow. Don’t use the same term more than once. Not all terms will be used.

1.   _______________ is an authentication model designed around the concept of using tickets for accessing objects.

2.   _______________ is designed around the type of tasks people perform.

3.   _______________ refers to the condition where trust is extended to another domain that is already trusted.

4.   _______________ describes a system where every resource has access rules set for it all of the time.

5.   _______________ is an authentication process where the user can enter their user ID (or username) and password and then be able to move from application to application or resource to resource without having to supply further authentication information.

6.   _______________ is an algorithm that can be used to authenticate a user in a system by using an authentication server.

7.   If your fingerprints fail to let you into a system when they should, this is called a _______________.

8.   When both the client and the server authenticate each other, this is called _______________.

9.   _______________ is an access control method that would allow you to control access to records only when someone is scheduled to work.

10.   Authentication that is sent in plaintext with only Base64 encoding is an example of ______________ .

Images   Multiple-Choice Quiz


1.   Authentication can be based on what?

A.   Something a user possesses

B.   Something a user knows

C.   Something measured on a user, such as a fingerprint

D.   All of the above

2.   You’ve spent the last week tweaking a fingerprint-scanning solution for your organization. Despite your best efforts, roughly 1 in 50 attempts will fail even if the user is using the correct finger and their fingerprint is in the system. Your supervisor says 1 in 50 is “good enough” and tells you to move onto the next project. Your supervisor just defined which of the following for your fingerprint-scanning system?

A.   False rejection rate

B.   False acceptance rate

C.   Critical threshold

D.   Failure acceptance criteria

3.   A ticket-granting server is an important element in which of the following authentication models?

A.   L2TP

B.   RADIUS

C.   PPP

D.   Kerberos

4.   What protocol is used for RADIUS?

A.   UDP

B.   NetBIOS

C.   TCP

D.   Proprietary

5.   Under which access control system is each piece of information and every system resource (files, devices, networks, and so on) labeled with its sensitivity level?

A.   Discretionary access control

B.   Resource access control

C.   Mandatory access control

D.   Media access control

6.   Which of the following algorithms uses a secret key with a current time stamp to generate a one-time password?

A.   Hash-based Message Authentication Code

B.   Date-hashed Message Authorization Password

C.   Time-based One-Time Password

D.   Single sign-on

7.   Secure Shell uses which port to communicate?

A.   TCP port 80

B.   UDP port 22

C.   TCP port 22

D.   TCP port 110

8.   Elements of Kerberos include which of the following?

A.   Tickets, ticket-granting server, ticket-authorizing agent

B.   Ticket-granting ticket, authentication server, ticket

C.   Services server, Kerberos realm, ticket authenticators

D.   Client-to-server ticket, authentication server ticket, ticket

9.   To establish a PPTP connection across a firewall, you must do which of the following?

A.   Do nothing; PPTP does not need to cross firewalls by design.

B.   Do nothing; PPTP traffic is invisible and tunnels past firewalls.

C.   Open a UDP port of choice and assign it to PPTP.

D.   Open TCP port 1723.

10.   To establish an L2TP connection across a firewall, you must do which of the following?

A.   Do nothing; L2TP does not cross firewalls by design.

B.   Do nothing; L2TP tunnels past firewalls.

C.   Open a UDP port of choice and assign it to L2TP.

D.   Open UDP port 1701.

Images   Essay Quiz


1.   A co-worker with a strong Windows background is having difficulty understanding UNIX file permissions. Describe UNIX file permissions to him. Compare UNIX file permissions to Windows file permissions.

2.   How are authentication and authorization alike, and how are they different? What is the relationship, if any, between the two?

Lab Projects

   Lab Project 11.1

Using two workstations and some routers, set up a simple VPN. Using Wireshark (a free, open source network protocol analyzer, available at https://www.wireshark.org), observe traffic inside and outside the tunnel to demonstrate protection.

   Lab Project 11.2

Using freeSSHd and freeFTPd (both shareware programs, available at www.freesshd.com) and Wireshark, demonstrate the security features of SSH compared to Telnet and FTP.
