Chapter 8. Securing Web Applications

IN TODAY'S NETWORK AND COMPUTING ENVIRONMENTS, security is the name of the game. Securing Web applications has become an integral part of an organization's overall security strategy. As our personal and business lives are increasingly integrated with the Web and Web applications, Web application security moves front and center. Web application security is the battleground for IT security and will be for the foreseeable future.

Web application security encompasses many elements from end-user education to stronger programming and development. One of the first considerations when designing strong Web application security strategies is to know the threats, where they come from, and how to mitigate them. This chapter examines one of the more commonly exploited areas of Web application security: end-user input. It looks at the dangers of clear-text communication and explains how to encrypt data as it travels throughout the network.

Does Your Application Require User Input into Your Web Site?

In the beginning, Web sites were non-interactive and mainly information portals with information flowing only one way. The early Web was a communication stream from the server to the client browser. Because each visitor to a site was given the same information and the same rights, there was no need to authenticate or validate users.

Today's interactive Web sites and applications provide two-way communication with menus, lists, forms, radio buttons, and uniform resource locator (URL) links all allowing interaction between the client browser and the Web application. This interactivity and potential user input allow a variety of attacks, including:

  • Cross-site scripting (XSS)

  • SQL injection

  • Directory traversal

  • URL redirector abuse

  • Extensible Markup Language (XML) injection

  • XQuery injection

As discussed in Chapter 7, improperly validated and untrusted user input remains one of the larger security risks with Web applications. To combat input attacks, input validation methods are deployed both on the client side and on the server side.

Input validation is challenging, but it remains a key defense against input attacks. It is difficult because determining what constitutes valid input across numerous Web applications and processes is no easy task. Although there's no single, uniform answer to managing user input, general guidelines and practices include:

Note

With most Web browsers, client-side validation can be easily bypassed by turning off JavaScript in the browser.

  • Do not rely solely on client-side validation—On the client side, validation is performed by the client browser. Client-side validation is adequate for keeping mistakes such as typos or input errors from reaching the server, but it cannot be relied on as a strong security measure because malicious users can easily bypass it. The clear benefit of client-side validation is increased performance: it reduces unnecessary validation trips to the server.

  • Ensure server-side validation—Unlike client-side validation, server-side validation cannot be easily circumvented by malicious users. The server verifies the integrity of all user input to ensure that it is valid and trusted. Often, server-side validation checks string lengths, characters, and any other syntax it can flag as potentially dangerous.

  • Use whitelisting and blacklisting—One method to help validate user input is to employ a blacklist. The blacklist identifies strings and characters that are known to be potentially dangerous. All data matching the blacklist is rejected; however, harmful data not on the blacklist is allowed through. For this reason, the blacklist must be updated and monitored regularly to ensure that new threats are identified; blacklists do not adapt quickly to new threats. Often, whitelisting is the preferred method of input validation. With whitelists, only known, good input strings and syntax are allowed; everything else is rejected. Whitelists are better suited to managing new threats because new threats will not match the known, good data identified by the whitelist.

    Note

    One primary disadvantage of blacklists is that they become obsolete when a new attack is discovered.

  • Assume all input is malicious—A key concept of input validation security is the underlying assumption that all input is potentially harmful or malicious. In essence, input is guilty until proven innocent. Client-side and server-side validation mechanisms should adopt this approach. All input, regardless of source, is a potential risk.

  • Sanitize your input—When user input is sanitized, it is inspected for potentially harmful code and modified according to predetermined, acceptable guidelines. Sanitization often involves identifying and disallowing specific characters and syntax sequences.
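The whitelisting and sanitization guidelines above can be sketched in JavaScript. This is a minimal illustration, not production code; the allowed name pattern and the escaping rules are assumptions chosen for the example:

```javascript
// Whitelist validation: accept only input matching a known-good pattern.
// The assumed rule here is 1-30 letters, spaces, hyphens, or apostrophes.
function isValidName(input) {
  return /^[A-Za-z][A-Za-z '\-]{0,29}$/.test(input);
}

// Sanitization: escape characters that are meaningful to HTML so that
// user input is displayed as text rather than interpreted as markup.
function sanitizeForHtml(input) {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

console.log(isValidName("O'Brien"));                   // true
console.log(isValidName("<script>alert(1)</script>")); // false
console.log(sanitizeForHtml("<b>hi</b>"));             // &lt;b&gt;hi&lt;/b&gt;
```

Anything that fails the whitelist is rejected outright; anything that passes is still escaped before it is echoed back to a browser.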

Get to Know Your Syntax with Request for Comments (RFC)

A Request for Comments (RFC) is a formal document from the Internet Engineering Task Force (IETF), which is the result of committee drafting and revisions to a technical document. Many RFCs are intended to become Internet standards and, as such, hold important information. When reviewing acceptable syntax for e-mail addresses, URL input, XML input, and more, it is advisable to review the RFC to verify the syntax used. Consider the following examples:

Note

Transmission of non-text objects in messages raises additional security issues. These are outlined in RFCs 2047, 2049, 4288, and 4289.

  • Validating e-mail address syntax and usage—To help prevent malicious code being used in e-mail, verify correct e-mail usage procedures and syntax in RFC 5322.

  • Validating URL input—RFCs define the rules and syntax for URL usage and access. Defend against canonicalization attacks (../) by ensuring that the resource paths are resolved before applying business rules for validating them.

  • Validating HTTP input—To find out all about HTTP formats and technical details, review RFC 2616.

Reviewing RFCs is a great way to become more familiar with the correct usage and characteristics of a trusted and used technology. Visit the Internet RFC/STD/FYI/BCP Archives Web site at http://www.faqs.org/rfcs/ to search more than 5,000 RFC entries.

The following are some RFCs that are important to the topics in this book:

  • RFC 821, "Simple Mail Transfer Protocol"

  • RFC 2396, "Uniform Resource Identifiers (URI): Generic Syntax"

  • RFC 793, "Transmission Control Protocol"

  • RFC 2045, "Multipurpose Internet Mail Extensions (MIME) Part One: Format of Internet Message Bodies"

  • RFC 1738, "Uniform Resource Locators (URL)"

  • RFC 822, "Standard for the Format of ARPA Internet Text Messages"

  • RFC 1122, "Requirements for Internet Hosts—Communication Layers"

  • RFC 2046, "Multipurpose Internet Mail Extensions (MIME) Part Two: Media Types"

  • RFC 1157, "Simple Network Management Protocol (SNMP)"

  • RFC 1866, "Hypertext Markup Language—2.0"

Technologies and Systems Used to Make a Complete Functional Web Site

Web sites use an array of technologies and systems to make a complete functional site. These technologies cover everything from the protocols, programming languages used, database systems, and more. Each of these elements must be secured individually to ensure the security of the entire Web site.

This section explores some common Web elements—HTML, CGI, JavaScripting, and SQL database back-ends—and discusses potential security risks of each.

Hypertext Markup Language (HTML)

HTML is the predominant language for creating Web pages. HTML provides an easy method of programming structural semantics for text such as headings, paragraphs, lists, links, and quotes. HTML allows programmers to embed pictures and interactive forms within a Web document. HTML was not created with security in mind; it was created as an online development tool.

HTML is a markup language, meaning it uses codes within a text file to format a Web page. The codes used to specify the formatting are called tags. HTML tags are keywords surrounded by angle brackets, like <body>, and normally come in pairs, such as <body> and </body>, with the first tag at the start of the formatting and the second tag at the end. For example:

<html>
<body>
<h1>This is my Web site heading</h1>
<p>This is an introduction paragraph.</p>
</body>
</html>

In this example, the <html> and </html> tag pair identifies the HTML document itself, the <body> and </body> tag pair identifies the visible text on the Web site, <h1> and </h1> identify the heading, and <p> and </p> identify the paragraph.

In normal operation, HTML is a powerful markup language. Unfortunately, malicious HTML scripts can be used to attack a Web site. As an example, HTML pages can use forms that require user input:

<form>
First name:
<input type="text" name="firstname" />
Last name:
<input type="text" name="lastname" />
</form>

These HTML tags would create two simple boxes in which visitors to the Web site can enter their first and last names. What happens when a malicious user doesn't enter the proper, expected input but enters malicious scripts instead? If the user input is not properly sanitized and validated, the attacker can alter the page, insert unwanted or offensive images or sounds, and change content to damage the site's reputation.

If malicious users can insert their own <form> tags, they can create interactive content designed to steal information from the user, such as credit card information and bank account information. Malicious HTML tags can be placed in a message posted to an online forum or message board using the <script>, <embed>, <object>, and <applet> tags. When these tags are snuck into messages, they can run automatically on the victim's browser.

If a Web site is hacked, the maliciously entered tags may do damage. This may include setting up false forms, redirecting to other Web sites, or running a malicious script whenever the browser goes to the home page.

To help prevent such attacks, interactive Web elements such as discussion groups must be monitored to identify untrustworthy data input before it is presented to other users. Well-configured Web servers either will not accept non-validated input or will encode and filter it before sending anything to other browsers. All HTML forms and interactive user elements must validate input to prevent malicious HTML from being presented to the user. It is also critical to check the HTML code periodically to see whether malicious code has been added. One quick check is to verify the size of the file: if the file size has changed unexpectedly, there may be a problem. Finally, keep the Web site, Web server, and HTML code secure; if the source can be viewed and changed by anyone, there is a problem.
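As a sketch of the encode-or-filter approach described above, the following function removes the risky tags named earlier (<script>, <embed>, <object>, and <applet>) from a forum post before it is shown to other users. Real filters are far more thorough; this only illustrates the idea:

```javascript
// Strip the risky tag pairs (and their contents), plus any stray
// opening or closing occurrences of those tags.
function filterPost(html) {
  const risky = ["script", "embed", "object", "applet"];
  let out = html;
  for (const tag of risky) {
    // Remove <tag ...>...</tag> blocks, then any leftover lone tags.
    out = out.replace(
      new RegExp("<" + tag + "[^>]*>[\\s\\S]*?</" + tag + ">", "gi"), "");
    out = out.replace(new RegExp("</?" + tag + "[^>]*>", "gi"), "");
  }
  return out;
}

console.log(filterPost("Hello <script>alert(1)</script>world"));
// "Hello world"
```

In practice, whitelisting allowed tags (or escaping everything) is safer than blacklisting risky ones, for the reasons discussed earlier in this chapter.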

Common Gateway Interface (CGI) Script

CGI is a standard that defines a method by which a Web server can obtain data from or send data to databases, documents, and other programs, and present that data to viewers via the Web. CGI programs are commonly written in a language such as Perl, C++, or ASP. A CGI program is accessed by the Web server in response to some action by a Web visitor. This might be something simple, like a Web page counter or site form, or complex, such as a shopping cart and e-commerce elements.

A CGI program is often used to manage user queries sent from an interactive HTML page (Web page) with the CGI script functioning as the intermediary between the query and the database. CGI programs often accept user input from the browser to the Web server.

To help secure CGI, it is important to create and program CGI with security in mind, research known vulnerabilities with CGI programming, and incorporate security best practices in all programming efforts. Secondly, review the program periodically to verify and incorporate any updated vulnerability information, and apply security patches when necessary. Finally, as with other interactive elements, user validation and sanitization must be used.

Note

One of the rules for all interactive Web elements is to never blindly trust user input. User input is a primary source of attack, but it can be mitigated through sanitization and validation.

JavaScripting

JavaScript is considered to be the scripting language of the Web. Programmers use JavaScript to add functionality and dynamic and interactive content to their Web sites. These interactive elements are almost limitless and may include e-commerce forms, setting and reading cookies, advertising, and various Web forms. Using JavaScript is quite easy, as the Web programmer can embed the JavaScript code easily into HTML pages, as shown:

<html>
<body>
<script type="text/javascript">
document.write("Text goes here.");
</script>
</body>
</html>

Any programming language that can be used to execute content on Web pages can be used maliciously. Because loading a Web page can cause arbitrary code to be executed on your computer, stringent security precautions are required to prevent malicious code from damaging your data or your privacy. JavaScript provides some security in that it does not allow writing or deleting files or directories on the client computer. With no file object and no file access functions, a JavaScript program cannot delete a user's data or plant viruses on the user's system.

A discussion of securing JavaScript and coding best practices is covered in Chapter 9.

SQL Database Back-End

A back-end database is a database that is accessed by users indirectly through another application. The back-end database is fronted by a Web server, which in turn is accessed by client browsers. This is a common configuration for online e-commerce. For example, a shoe store houses all product details in the back end, which users browse through the Web server.

Note

Although not directly related to security, database backups must also be considered. Backups are not a security measure, but a good backup may be necessary to restore a database that has been tampered with or compromised. Keeping up-to-date backups is a mandatory part of database administration.

An SQL back-end database may be vulnerable to both physical and logical attacks. These may include physical threats, such as theft of the database server, and logical attacks such as injection attacks and brute-force password attacks. Securing any database requires a multilayered approach, including:

  • Access controls

  • Role-based authentication

  • Encryption methods

  • Integrity verification

One common attack against a SQL back-end database is SQL injection. Additional information on SQL injection attacks and defenses is covered in Chapters 7 and 9.
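The injection risk comes from concatenating user input directly into query strings. The sketch below contrasts that with parameter binding; the tiny binder is purely illustrative (real database drivers send parameters separately from the query), and the table and column names are assumptions:

```javascript
// UNSAFE: concatenation lets input like "' OR '1'='1" rewrite the query.
function unsafeQuery(username) {
  return "SELECT * FROM users WHERE name = '" + username + "'";
}

// Illustration of parameter binding: the driver, not the programmer,
// quotes the value. This simplified binder just shows the separation
// of query template and data.
function bindQuery(template, params) {
  let i = 0;
  return template.replace(/\?/g, () => {
    const value = String(params[i++]);
    return "'" + value.replace(/'/g, "''") + "'"; // double embedded quotes
  });
}

console.log(unsafeQuery("' OR '1'='1"));
// SELECT * FROM users WHERE name = '' OR '1'='1'   (injected!)
console.log(bindQuery("SELECT * FROM users WHERE name = ?", ["' OR '1'='1"]));
// The attack string becomes a harmless quoted literal.
```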

As shown in this section, many elements must be considered to secure an entire site. Because of various security risks, a Web site requires vigilance and maintenance to ensure that the latest security mitigation strategies are incorporated into its design and that security patches are applied as necessary.

Does Your Development Process Follow the Software Development Life Cycle (SDLC)?

Every day new software is developed and old software enhanced to meet a personal, production, or business need. In its development, all software passes through specific stages, referred to as the software development life cycle (SDLC). Certain basic steps are followed in all software development projects. Although development models vary, all share some similarities. The oldest and the best-known model is the waterfall: a sequence of stages in which the output of each stage becomes the input for the next. Figure 8-1 shows the waterfall model for software development.

As shown in Figure 8-1, there are several stages in this SDLC model, as follows:

  • Systems analysis—The analyzing stage seeks a clear definition of what the software is designed to do and what problem or issue it is intended to address. The analysis provides the direction for further development and refines project goals into clear functions and operation of the intended application.

  • Designing—In the designing stage, the application's features and operational functions are clearly established. This includes documentation of application processes and screen shots. The design phase should give a clear idea of what the application will look like and what user needs it will address.

  • Implementation—With a clear idea of the purpose of the software and a development plan, the code is written.

    The waterfall model encompasses the software development life cycle.

    Figure 8-1. The waterfall model encompasses the software development life cycle.

  • Testing—This phase can be overlooked in the rush to get a software application to market. The software is tested and examined for bugs, errors, interoperability failures, and more. A strong and comprehensive approach to testing will help ensure that the product works as expected.

  • Acceptance and deployment—Once tested, the software product either will be accepted and shipped, or sent back to the design or implementation phase. Given the costs and competitive nature of software development, a testing phase failure can be a significant problem.

  • Maintenance—Today's software packages are not stagnant; they are constantly improved, enhanced, and updated. This is done to address shortcomings in the application, to add new features, or to address end-user concerns. As long as the application is in use, maintenance continues.

The stages an application or piece of software passes through vary by developer and environment. However, each developer will follow a developmental process to ensure that the software is created as efficiently and error-free as possible.

Designing a Layered Security Strategy for Web Sites and Web Applications

It used to be that security administrators felt comfortable focusing their efforts on perimeter security. Firewalls were the name of the game and, for a time, networks were relatively safe behind them. Today's attacks—such as injection attacks, social engineering, and others—bypass many of a network's perimeter defenses, leaving it vulnerable.

Today, a network cannot be secure with a single security approach. A complete and layered approach to network security is required. There are any number of layers that a security administrator could use to protect a network, including perimeter security, host-based security mechanisms, authentication and access management, network and application access controls, and vulnerability management. The layers are defined as follows:

  • Perimeter security—Perimeter defense strategies refer to those security measures that are on the edge of the network and protect the internal network from external attack. There are several types of perimeter defense mechanisms used, including antivirus/antispyware/antispam/anti-phishing software, intrusion detection systems (IDSs), intrusion prevention systems (IPSs), and firewalls.

  • Host-based security mechanisms—Network-wide and centralized security is an important part of an overall security strategy but, increasingly, security is shifting toward host-based mechanisms. In network terms, this includes using host-based firewalls, IDSs, and antivirus solutions. For Web applications and Web security, it means placing increasing importance on host security, such as by increasing the security potential of browsers. This may include XSS filters for browsers, stronger error-handling mechanisms, and localized script sanitization filters.

  • End-user education—A layered security strategy has to include end-user education. This includes information on the latest Web-based attacks, what they are designed to do, and how end users can help prevent them.

  • Authentication and access management—Key to securing Web applications and Web sites is strong authentication. Access management controls dictate who can and cannot access a Web application or Web site.

  • Input validation—Any layered security approach includes mechanisms to protect from malicious input, which can lead to a variety of injection attacks.

  • Vulnerability management—Vulnerability management refers to the ongoing maintenance and management of existing Web sites and applications. It often involves updating, using patches, applying service packs, and using other techniques to ensure Web site and application security is up to date.

The exact layers of the security strategy often depend on the organization and developer involved.

Incorporating Security Requirements Within the SDLC

As discussed in this chapter and Chapter 7, one of the biggest vulnerabilities in today's networks and online applications is poorly developed applications that have received little consideration as to confidentiality, integrity, and overall security. Many experts recommend that security be implemented during the design phase and throughout the maintenance of Web applications.

Listed previously in this chapter were several stages of software development. It is possible to incorporate security throughout the entire SDLC. The following sections highlight how security considerations can be incorporated into each stage of the SDLC.

Systems Analysis Stage

In the systems analysis stage, what the software is designed to do and what problem or issue it is intended to address must be clearly defined. In this stage, a qualified security person is brought into the development process. The security person's task is to understand the intent of the application and identify potential security threats. These may be injection attacks, buffer overflows, directory traversal, and more.

The security professional will ensure that the software's functions and operations are not exposed and cannot be exploited. The importance of incorporating security from the start cannot be overstated; it guides the security process and ensures that security remains a concern throughout software development. Without it, security can become an afterthought.

Designing Stage

In the designing stage, the application's features and operational functions are clearly established. In this stage, incorporating security is a critical consideration because the security foundation for the software is established. This may include the following activities:

  • Information gathering—The security professional may meet with potential clients and software users to determine their security needs. In this way, the security requirements are tailored to the software users' environment. Further information may include reviewing industry security standards and ensuring that the software meets these standards. These standards may include the ISO 27002 and the ISO 15408 standards, which provide an accepted code of practice for information security management.

  • Threat assessment—While the design of the software is unfolding, the security professional can get a picture of threats and their potential impact on the software. With an understanding of the potential threats and exploitable areas, the security professional can begin to incorporate mitigation strategies into the design.

    Note

    During the SDLC, the security professional stresses the importance of security throughout the development phases. Security professionals should know the policies, standards, and guidelines of Web security to ensure they are met. Developers can then incorporate these security elements.

  • Threat mitigation—With a clear understanding of the risks and threats, the security professional can develop strategies to mitigate those threats. Incorporating security in the initial steps of the SDLC allows developers to anticipate threats and build mitigation strategies into the software. This helps ensure that software out of the box is secure.

Implementation Stage

If the security professional has communicated the security concerns clearly, the developers can incorporate security into the coding. Several security guidelines and best practices that developers can follow when coding software include:

  • Input validation mechanisms—Injection attacks are common. Creating input validation mechanisms in software helps mitigate the threat.

  • Strong encryption—Developers need to use industry-accepted encryption and cryptography standards.

  • Securing data that's stored and in transit—Developers should safeguard the security of data whether it's stored or is traveling through the network.

  • Authentication mechanisms—Access controls should require proper authentication.

  • Error handling—Software error messages should not reveal too much to malicious users.
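The error-handling guideline above can be sketched as follows: log the detailed error server-side, but return only a generic message to the user, so attackers learn nothing about the internals. The reference-ID scheme and in-memory log are assumptions for the example:

```javascript
// The log store here is an in-memory array purely for illustration;
// a real application would write to a protected server-side log.
const serverLog = [];

function handleError(err) {
  const refId = "ERR-" + (serverLog.length + 1); // assumed ID scheme
  serverLog.push({ refId, detail: err.message, stack: err.stack });
  // The user sees nothing about tables, paths, or stack traces.
  return "An error occurred. Please contact support with reference " + refId + ".";
}

const userMessage = handleError(new Error("SQLSTATE 42S02: table 'users' not found"));
console.log(userMessage);         // generic text only
console.log(serverLog[0].detail); // full detail stays server-side
```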

There are several other coding practices that developers must keep in mind. Creating software without these considerations may lead to security holes.

Testing Stage

In the testing stage, the software is tested and retested to ensure that it is interoperable and functions as it should. Part of the testing involves a review of the software security structure. A security professional or an entire security team tests the software for least privilege, buffer overflows, injection attacks, error-handling risks, directory traversal risks, and more.

The final stage tests the software in a live production environment. Sometimes called penetration testing, this involves ethical hacking: professional hackers attempt to access and crash the software, both as outsiders trying to reach the software and as malicious users within the network. The value of ethical hacking is limited by the hacker's training and ability.

Acceptance and Deployment Stage

Once security tests are complete and security requirements have been met, the software is ready for release. Although much security testing has been done, testing must continue throughout the lifespan of the software. In the deployment stage, the security professional monitors the deployment, searching for potential security threats and exploitable areas. As the software is deployed, the security professional may review the application's user manual, looking for security breaches or unforeseen threats. The deployment stage security plan may include:

  • Creating document sources to review the status of the deployment

  • Ensuring that user administration and access privileges are established

  • Defining a response process for handling security bugs

Maintenance

Live production software can be under constant attack from malicious users. Over time, the software may become exploited and need to be maintained with security measures that meet the new attacks and threats. Maintenance is a critical component for overall software security. Maintenance security procedures may include:

  • Develop and deploy service packs and patches to manage security threats.

  • Review logs, audits, and other material to search for new threats and attacks that may have been found.

  • Review error report messages to see if they are accurate and not revealing information that an attacker could use.

  • Monitor feedback from software users.

Many developers may not fully appreciate the importance of incorporating security measures in each step of the development cycle. However, weaving security into the fabric of software development as a proactive measure will result in secure and robust software packages.

HTTP and Clear Text Versus HTTPS and Encryption

Transmission Control Protocol/Internet Protocol (TCP/IP) is the routable protocol suite on which Internet communication is based. TCP/IP is a set of protocols, each providing a different function; collectively, they provide TCP/IP functionality. The suite gets its name from two of its main protocols, TCP and IP. Within the TCP/IP protocol suite are secure and unsecure protocols. The unsecure protocols are typically designed for speed, for use in a trusted environment, or for cases in which no sensitive data is involved. For example, HTTP is used for day-to-day Web access, but HTTP sends information in clear text. As a result, it is susceptible to man-in-the-middle attacks and can be read if intercepted. For sensitive Web communications, the Hypertext Transfer Protocol Secure (HTTPS) protocol provides encryption.

Table 8-1. Secure and unsecure protocols.

  • FTP (File Transfer Protocol)—Protocol for uploading and downloading files to and from a remote host. Also accommodates basic file management tasks.

  • SFTP (Secure File Transfer Protocol)—Protocol for securely uploading and downloading files to and from a remote host. Based on Secure Shell (SSH) security.

  • HTTP (Hypertext Transfer Protocol)—Protocol for retrieving files from a Web server. Data is sent in clear text.

  • HTTPS (Hypertext Transfer Protocol Secure)—Secure protocol for retrieving files from a Web server. HTTPS uses SSL to encrypt data between the client and the host.

  • Telnet—Allows sessions to be opened on a remote host.

  • RSH (Remote Shell)—UNIX utility used to run a command on a remote machine. Replaced by SSH because RSH sends all data in clear text.

  • SSH (Secure Shell)—Secure alternative to Telnet that allows secure sessions to be opened on a remote host.

  • RCP (Remote Copy Protocol)—Copies files between systems, but transport is not secured.

  • SCP (Secure Copy Protocol)—Allows files to be copied securely between two systems. Uses SSH technology to provide encryption services.

  • SNMPv1/2 (Simple Network Management Protocol v1/2)—Network monitoring system used to monitor a network's condition. Neither SNMPv1 nor v2 is secure.

  • SNMPv3 (Simple Network Management Protocol v3)—Enhanced SNMP service offering both encryption and authentication services.

HTTP and HTTPS are not the only protocols within TCP/IP that have a secure and unsecure option. If enabling traffic to and from a Web site, it may be necessary to know which ones are secure and which ones are not. Several protocols move data throughout the network and Internet. Table 8-1 shows some of the secure and insecure protocols.

SSL—Encryption for Data Transfer Between Client and Web Site

As mentioned, plain HTTP sends data in clear text, which is too risky for bank sites or other data-sensitive transactions. HTTPS is used to ensure safe and secure communication between a client and a Web server. Secure Sockets Layer (SSL) is widely used to authenticate a service to a client and then to provide confidentiality (encryption) to the data being transmitted.

SSL works in a negotiation process—known as a handshake—between the client and the Web server. The handshake process is highlighted in Figure 8-2.

SSL handshake negotiation.

Figure 8-2. SSL handshake negotiation.

As shown in Figure 8-2, several steps are required in the SSL negotiation process:

  1. The session begins by the client browser sending a basic "Hello" message to the server. This initial message includes a request for a secure communication channel, including various cryptographic algorithms supported by the client.

  2. The server responds with its own "Hello" message, including its choice of algorithm to create the cryptography. If no mutual cryptography method can be agreed upon, the handshake fails. The server also sends its digital certificate and public key to the client.

  3. If the browser verifies the certificate, it sends a one-time session key encrypted with the server's public key.

  4. Both the client and the server now have symmetric keys, and the communication between them is encrypted and decrypted at each end.

SSL Encryption and Hash Protocols

Note

For integrity verification, hash algorithms are used to confirm that the information received is exactly the same as the information sent. A hash algorithm is essentially a cryptographic checksum used by both the sender and receiver to verify that the message has not been changed. If the message has changed in transit, the hash values are different and the packet is rejected.

SSL uses various protocols for both encryption and hashing services. Hashing algorithms are used to verify the integrity of a data stream. They are not used for encryption. Hashing ensures that data has not been tampered with during transmission.

There are two hashing algorithm protocols to be aware of: Secure Hash Algorithm 1 (SHA1) and Message Digest 5 (MD5). MD5 produces a 128-bit hash value; SHA1 produces a 160-bit hash value. Although it provides more security than MD5, SHA1 can affect overall performance because it demands more system resources. Further, known collision vulnerabilities have been discovered in MD5, and SHA1 has since been shown to be vulnerable as well.
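The digest sizes, and the integrity check described in the note above, are easy to see with Python's standard hashlib module:

```python
import hashlib

message = b"Transfer $100 to account 12345"

# MD5 produces a 128-bit (16-byte) digest; SHA1 produces 160 bits (20 bytes).
md5_digest = hashlib.md5(message).digest()
sha1_digest = hashlib.sha1(message).digest()
print(len(md5_digest) * 8)   # 128
print(len(sha1_digest) * 8)  # 160

# Integrity check: the receiver recomputes the hash and compares.
# If even one byte changed in transit, the values differ and the
# packet is rejected.
tampered = b"Transfer $900 to account 12345"
print(hashlib.sha1(tampered).digest() == sha1_digest)  # False
```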

As mentioned, SSL communication uses a symmetric key exchange to secure the communication channel. There are several key encryption protocols associated with symmetric key exchanges. These include:

  • Data Encryption Standard (DES) (40-bit)—This encryption method provides the best performance but at a cost: the encryption security is lower. It can be used in environments where the need for data security is lower.

  • Data Encryption Standard (56-bit)—Through your Internet Protocol Security (IPSec) policies, you can implement DES as the encryption method. The DES algorithm uses a 56-bit encryption key. The algorithm was published in 1977 by the U.S. National Bureau of Standards and allows keys to be regenerated frequently during a communication, which prevents the entire data set from being compromised if one DES key is broken. However, DES is considered outdated for business use and should be used only for legacy application support, because specialized hardware has been able to crack the standard 56-bit key.

  • Triple DES (3DES)—IPSec policies also allow the choice of a strong encryption algorithm, 3DES, which provides stronger encryption than DES for higher security. 3DES also uses 56-bit encryption keys but, as the name implies, it uses three of them. There are three keying options for 3DES, differing in how many of the three keys are unique. If 3DES uses three unique keys, the result is considered 168-bit encryption. However, because of a discovered "meet in the middle" attack, the effective security is equivalent to 112-bit encryption. The "meet in the middle" attack works from both the plaintext and ciphertext ends toward the middle of the cipher, hoping to recover the keys. It's similar to a brute-force attack but with much better odds.

  • Advanced Encryption Standard (AES)—Also known as Rijndael, AES is a block cipher encryption standard. AES supports key lengths of 128, 192, or 256 bits.

  • Rivest Cipher—This is a family of secret key cryptographic algorithms from RSA Security, Inc. The family includes RC2, RC4, RC5, and RC6. Although RSA is widely known for its public key methods, its secret key algorithms are also widely used. The RCs were designed as replacements for DES. RC2 uses a variable-length key and the block cipher method. RC4 uses a variable-length key and the stream cipher method. Both RC5 and RC6 are block ciphers with variable keys up to 2,040 bits. RC6 uses integer multiplication for improved performance over RC5.

    Note

    More information on public and private keys, including their proper storage, can be found in Chapter 6.
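All of the ciphers above share the symmetric principle: one shared key both encrypts and decrypts at each end of the channel. The following deliberately simplified toy stream cipher, built from SHA-256, illustrates only that principle; it is not a substitute for DES, 3DES, AES, or the RC family:

```python
import hashlib
from itertools import count

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from the shared key (toy CTR mode)."""
    out = b""
    for block in count():
        out += hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        if len(out) >= length:
            return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Symmetric: the exact same call both encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

shared_key = b"negotiated-session-key"  # agreed during the SSL handshake
secret = b"account balance: 1024"

ciphertext = xor_cipher(shared_key, secret)   # sender encrypts
plaintext = xor_cipher(shared_key, ciphertext)  # receiver decrypts
print(plaintext == secret)  # True
```

An eavesdropper who captures the ciphertext but lacks the shared key sees only scrambled bytes, which is why the handshake protects the session key with the server's public key before switching to symmetric encryption.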

Selecting an Appropriate Access Control Solution

Note

The primary objective of access control is to preserve and protect the confidentiality, integrity, and availability of information, systems, and resources.

Access control is a cornerstone concept when designing a secure network and Web site environment. "Access control" refers to the mechanisms that identify and control who can and cannot access a network, a resource, an application, specific data, and more. To secure a network or computer and its resources, you must consider what access will be granted to other users and then design strategies to ensure that only required users actually have access. It is a fundamental concept and forms the basis of safe and secure Web applications.

It's possible to secure a Web application by granting users specific rights and privileges. These privileges dictate who can and who cannot access the application. Consider a Web server that shares applications. You can configure the server to allow only certain users or groups access to those shared resources. The Administrator account, for example, can access all aspects of the Web application while other users have limited access.

In the strictest terms, access control is a much more general way of talking about controlling access to a resource. Access can be granted or denied based on a wide variety of criteria, such as the network address of the client, time of day, the Web site visitor's browser, and other general restrictions. Access control involves controlling network access by an arbitrary condition that may or may not have anything to do with the attributes of a particular visitor.

You will most often see access control used to refer to any mechanism that restricts unwanted access to network resources and applications. This general definition would include the process of authentication and authorization. In practice, authentication, authorization, and access control are so closely related it is difficult to discuss them separately. In particular, authentication and authorization are, in most implementations, tightly linked.

Table 8-2. System audit policies.

  • Audit account logon events—Audits a user logging on or off from another computer in which this computer is used to validate the account.

  • Audit account management—Audits account management events, such as the creation or deletion of groups or users.

  • Audit directory service access—Tracks a user accessing an Active Directory object that has its own system access control list (SACL) specified.

  • Audit logon events—Tracks each instance of a user logging on, logging off, or making a network connection to this computer.

  • Audit object access—Tracks a user accessing an object, such as a file, folder, registry key, or printer, that has its own SACL specified.

  • Audit policy change—Enables tracking of changes to the Audit policy.

  • Audit privilege use—Records each instance of a user exercising a user right.

  • Audit process tracking—Audits such things as program activation, accessing an object, or exiting a process.

  • Audit system events—Tracks in the Event Viewer such system events as shutting down or restarting the system, and monitoring changes to system security and the security log.

A basic understanding of access control makes it possible to see why it plays such an important role in a security strategy. Several types of access control strategies are used, including mandatory access control, discretionary access control, rule-based access control, and role-based access control.

Discretionary Access Control

With discretionary access control (DAC), information access is not forced from the administrator or the operating system. Rather, access is controlled by the information's owner. The level of access a user receives is based on the permissions associated with authentication credentials, such as a smart card or username and password combination.

DAC uses an access control list (ACL) to determine access. The ACL is a table that informs the operating system of the rights each user has to a particular system object, such as a file, directory, or application. Each object has a security attribute that identifies its ACL. The list has an entry for each system user with access privileges. The most common privileges include the ability to read a file (or all the files in a directory), to write to the file or files, and to execute the file (if it is an executable file or program).

Some of the characteristics of DAC include:

  • Data owners control the level of access to information.

  • Data owners can determine the type of access (read, write, copy, etc.) and can modify access privileges at any time.

  • Access is determined by comparing the user against permissions held in an ACL.

  • Authentication credentials are associated with the level of access.
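A DAC check can be sketched as a lookup against an ACL table, with the owner free to change entries at any time. The file names and users below are hypothetical:

```python
# Hypothetical ACL: each object maps users to their granted privileges.
acl = {
    "payroll.xlsx": {"alice": {"read", "write"}, "bob": {"read"}},
    "report.docx":  {"bob": {"read", "write", "execute"}},
}

def check_access(user: str, obj: str, privilege: str) -> bool:
    """DAC check: compare the requesting user against the object's ACL."""
    return privilege in acl.get(obj, {}).get(user, set())

print(check_access("alice", "payroll.xlsx", "write"))  # True
print(check_access("bob", "payroll.xlsx", "write"))    # False

# The data owner can modify access privileges at any time:
acl["payroll.xlsx"]["bob"].add("write")
print(check_access("bob", "payroll.xlsx", "write"))    # True
```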

Mandatory Access Control

With mandatory access control (MAC), the creator of information does not govern who can access or modify data. Application administrators dictate who can access and modify data, systems, and resources.

MAC secures information and resources by assigning sensitivity labels to objects and comparing them to the user's assigned level of sensitivity. In the security world, a "label" is a feature applied to files, directories, and other resources in a system. A label can be thought of as a confidentiality stamp. When a label is placed on a file, it describes the level of security for that specific file and permits access by files, users, and other resources with a similar or lesser security setting.

In practice, MAC mechanisms assign a security level to all information and assign a security clearance to each user to ensure that all users have access only to that data for which they have clearance. For example, users are assigned a Top Secret or Confidential security label, and data and resources are classified accordingly. MAC restricts access to objects based on a comparable sensitivity between the user-assigned levels and the object-assigned levels.

In general terms, mandatory security policy represents any security policy that is defined strictly by a system security administrator along with associated policy attributes. The need for a MAC mechanism arises when the security policy of a system or network dictates that:

  • Security decisions must not be decided or managed by the object owner.

  • The system administrator and operating system must enforce the protection decisions.
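The label comparison at the heart of MAC can be sketched in a few lines, assuming a simple ordered set of sensitivity levels:

```python
# Sensitivity labels ordered from least to most restricted.
LEVELS = {"Public": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def mac_allows(user_clearance: str, object_label: str) -> bool:
    """MAC check: the user's clearance must meet or exceed the label."""
    return LEVELS[user_clearance] >= LEVELS[object_label]

print(mac_allows("Secret", "Confidential"))  # True: clearance dominates
print(mac_allows("Confidential", "Secret"))  # False: access denied
```

Note that neither the user nor the data owner can change LEVELS or the labels; in a MAC system, only the security administrator controls them.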

Rule-Based Access Control

Rule-based access control governs access to objects according to established rules. A good example is the configuration and security settings on a router or a firewall.

As you probably know, if you tried to gain access through a firewall, your request would be reviewed against the firewall's rules to see whether you meet the criteria for access. For example, suppose a firewall was configured to reject all addresses in the 192.166.x.x range of Internet Protocol (IP) addresses. If you had such an address and tried to get through the firewall, you would be denied.

In practice, rule-based access control is a type of MAC. An administrator typically configures the firewalls or other devices to allow or deny access. The owner or another user does not specify the conditions of acceptance, and safeguards are in place to ensure that an average user cannot change settings on such devices.

Rule-based access control uses ACLs to determine the level of access a user will have to a resource or application.
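The firewall example above can be sketched as a rule check that ignores the identity of the user entirely; the denied network mirrors the hypothetical 192.166.x.x range in the text:

```python
import ipaddress

# Rule set configured by the administrator, not by any resource owner.
DENY_NETWORKS = [ipaddress.ip_network("192.166.0.0/16")]

def firewall_allows(source_ip: str) -> bool:
    """Rule-based check: the decision depends only on the rule itself."""
    addr = ipaddress.ip_address(source_ip)
    return not any(addr in net for net in DENY_NETWORKS)

print(firewall_allows("192.166.4.20"))  # False: denied by rule
print(firewall_allows("10.0.0.7"))      # True
```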

Role-Based Access Control

In a role-based access control configuration, access decisions are determined by the roles that individual users have as part of an organization. Organizations may have roles as marketers, sales people, managers, administrative assistants, and so on. Access to network objects is then determined by the role that has been assigned to a particular user. Role-based access requires a thorough understanding of how an organization operates, the number of users, and their functions.

Access rights are grouped by role name. The use of resources is restricted to individuals authorized to assume the associated role. For example, within a school system, the role of teacher can include access to data including test banks, research material, and memos. A school administrator role may have access to employee records, financial data, planning projects, and more.

The use of roles to control access can be an effective means for developing and enforcing enterprise-specific security policies and for streamlining the security management process.

When a user is associated with a role, the user should be assigned just the privilege level necessary to do the job. This is a general security principle known as "least privilege." In practice, when someone is hired by an organization, the role is clearly defined. A network administrator creates a user account for the new employee and places that account in a group with those who have the same role in the organization. If a new teacher were hired, for example, the new user account would be placed in the Teachers group. Once in the group, the new employee gains the same level of access as all those who perform the same role.

In the real world, this often is too restrictive to be practical. For instance, some teachers who have more experience or more responsibility may require more access to a particular network object than another teacher. It can be a time-consuming process to customize access to suit everyone's needs.

When roles overlap, responsibilities and privileges do as well. Access needs to reflect this. In such a case, you can establish role hierarchies to compensate for overlapping roles. A "role hierarchy" defines roles that have unique attributes and that may contain other roles; that is, one role may implicitly include the rights that are associated with another role. For example, if a user performs the roles of both a school administrator and a teacher, that user would be given access to areas permitted for each role.
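A role hierarchy can be sketched as roles that include other roles, with permissions resolved by taking the union. The school roles and permission names below are hypothetical, and the assumption that the administrator role includes the teacher role is for illustration:

```python
# Hypothetical permissions for the school example in the text.
ROLE_PERMS = {
    "teacher":       {"test_banks", "research_material", "memos"},
    "administrator": {"employee_records", "financial_data", "planning"},
}

# Role hierarchy: a role may implicitly include other roles.
ROLE_INCLUDES = {"administrator": {"teacher"}}

def effective_permissions(role: str) -> set:
    """Union of a role's own permissions and those of included roles."""
    perms = set(ROLE_PERMS.get(role, set()))
    for included in ROLE_INCLUDES.get(role, set()):
        perms |= effective_permissions(included)
    return perms

print("test_banks" in effective_permissions("administrator"))  # True
print("employee_records" in effective_permissions("teacher"))  # False
```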

Note

Role-based access is considered to be a type of MAC because access is dictated by an administrator, and the criteria for object access are not in the hands of the owner.

Create Access Controls That Are Commensurate with the Level of Sensitivity of Data Access or Input

In many ways, establishing the need for access controls is the easy part; determining the right level and type of access control is not so easy. There often is a struggle between those in charge of securing Web applications and Web pages and end users who often want greater access than is necessary. Somewhere in the middle, the two must meet. For the security administrator the following are important:

  • Security—Access control methods ensure that only those who should have access actually do. Allow only known and authorized users and devices onto your organization's network.

  • Control—Access controls enable administrators to restrict access to specific network resources based on the identity of users and/or devices.

  • Organize access—Access controls grant end users access to appropriate resources based on their job role or function.

  • Operational efficiency—Controlled access to Web applications and Web sites often leads to a greater operational efficiency.

For their part, end users want sufficient access to perform their tasks unencumbered by restrictions. Security personnel need to apply access controls without preventing end users from performing necessary tasks. Finding the right level of access control is often accomplished by using multiple access methods simultaneously.

Best Practices for Securing Web Applications

Protecting Web applications from attack has become an important consideration for all organizations and developers. There are a number of general hints, tips, and tricks you can employ to help prevent Web attacks and the exploitation of vulnerabilities. This chapter has touched on many different methods that help mitigate the risks. Table 8-3 highlights some common vulnerabilities and threats, and mitigation strategies and best practices for securing Web applications.

Table 8-3. Common threats and vulnerabilities, and mitigation strategies and best practices.

  • Attack—Attacks performed by embedding malicious strings in query strings, form fields, cookies, and HTTP headers, including command execution, XSS, SQL injection, and buffer overflow attacks. Mitigation—Assume all input is harmful. Constrain, reject, and sanitize all input.

  • Attack—Data can be captured in transit, read, and used. Mitigation—Use HTTPS to ensure data is encrypted during transit.

  • Attack—Log files and audit files may contain sensitive data that may be interpreted and used by a malicious user. Mitigation—Apply the principle of least privilege and access controls to ensure that only those needing access to log and audit files have it.

  • Attack—Malicious users may be able to authenticate using password cracking, elevation of privileges, and social engineering strategies. Mitigation—Educate users on password security, use protocols that ensure passwords are not sent in clear text, and develop password policies.

  • Attack—Malicious users may gain access to restricted and sensitive data or resources. Mitigation—Ensure files, directories, and objects are explicitly unobtainable, not just out of sight. Validate authorization per object. Audit object access.

  • Attack—Malicious users are able to hijack a session and use valid credentials. Mitigation—Encourage users to log out of sessions manually, and automatically log users out of sessions after a period of inactivity.

  • Attack—Hiding or ignoring file, folder, or resource locations. Mitigation—Obscurity does not provide security by itself. Use access control mechanisms and security privileges to protect even hidden resources.
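The "constrain, reject, and sanitize all input" mitigation in the first row of Table 8-3 can be sketched in a few lines of Python; the username rule is a hypothetical whitelist:

```python
import html
import re

# Constrain: a whitelist pattern describing the only acceptable input.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,20}")

def validate_username(value: str) -> str:
    """Reject: anything outside the known-good character set is an error."""
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value

def sanitize_for_html(value: str) -> str:
    """Sanitize: escape markup so user input cannot inject script tags."""
    return html.escape(value)

print(validate_username("alice_01"))
print(sanitize_for_html("<script>alert(1)</script>"))
# &lt;script&gt;alert(1)&lt;/script&gt;
```

Validating against a whitelist of allowed characters, rather than trying to blacklist dangerous ones, is the safer default because attackers routinely find encodings that slip past blacklists.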

CHAPTER SUMMARY

Any Web application with security holes may provide unwanted access to the network. Vulnerable Web applications allow malicious users to get around perimeter security measures and move straight to the network. Web application security encompasses many elements, from end-user education to stronger programming and development. End-user education involves informing end users of the threats and the strategies to mitigate those threats.

On the developer's side, security measures need to be considered and implemented in each phase of the application software development life cycle. This creates a layered and well-designed approach to application security. In addition to implementing security in the development of applications, secure protocols and procedures must be adhered to when using any application. This includes the use of the HTTPS protocol and using access control methods.

KEY CONCEPTS AND TERMS

  • Advanced Encryption Standard (AES)

  • Client-side validation

  • Canonicalization attacks

  • Data Encryption Standard (DES)

  • Digital certificate

  • Discretionary access control (DAC)

  • Host-based security

  • Hypertext Transfer Protocol Secure (HTTPS)

  • Mandatory access control (MAC)

  • Rivest Cipher

  • Request for Comments (RFC)

  • Role-based access control

  • Rule-based access control

  • Software development life cycle (SDLC)

  • System access control list (SACL)

  • Triple Data Encryption Standard (3DES)

  • Vulnerability management

CHAPTER 8 ASSESSMENT

  1. SFTP is a secure version of FTP.

    1. True

    2. False

  2. You are the administrator of a large network. The network has several groups of users—including students, administrators, developers, and front-end staff. Each user on the network is assigned network access depending on his or her job in the organization. Which access control method is being used?

    1. Discretionary access control

    2. Role-based access control

    3. Rule-based access control

    4. Mandatory access control

  3. Discretionary access control uses an access control list to determine access.

    1. True

    2. False

  4. As a network administrator, you are concerned with the clear-text transmission of sensitive data on the network. Which of the following protocols are used to help secure communications? (Select two.)

    1. FTPv2

    2. SCP

    3. SSL

    4. SNMP

  5. As part of the network's overall security strategy, you want to establish an access control method in which the owner decides who can and who cannot access the information. Which type of access control method is being described?

    1. Mandatory access control

    2. Role-based access control

    3. Discretionary access control

    4. Rule-based access control

  6. _____ and HTTP are combined to secure online transactions.

  7. Mandatory access control secures information and resources by assigning sensitivity labels on objects and comparing this to the level of sensitivity a user is assigned.

    1. True

    2. False

  8. _____, also known as Rijndael, is a block cipher encryption standard. It can create keys from 128 bits to 256 bits in length.

  9. As a network administrator, you have configured your company's firewall to allow remote users access to the network only between the hours of 1:00 p.m. and 4:00 p.m. Which type of access control method is being used?

    1. Discretionary access control

    2. Role-based access control

    3. Mandatory access control

    4. Rule-based access control

  10. You are concerned about the integrity of messages sent over your HTTP connection. You use HTTPS to secure the communication. Which of the following are hashing protocols used with SSL to provide security? (Select two.)

    1. IPSec

    2. SHA1

    3. MD5

    4. SFTP

  11. A malicious user can insert <form> tags into your Web pages, creating interactive content designed to steal information from your users.

    1. True

    2. False

  12. Authorization is any process by which you verify that someone is who they claim they are.

    1. True

    2. False
