CHAPTER 30

E-COMMERCE AND WEB SERVER SAFEGUARDS

Robert Gezelter

30.1 INTRODUCTION

30.2 BUSINESS POLICIES AND STRATEGIES

30.2.1 Step 1: Define Information Security Concerns Specific to the Application

30.2.2 Step 2: Develop Security Service Options

30.2.3 Step 3: Select Security Service Options Based on Requirements

30.2.4 Step 4: Ensure Ongoing Attention to Changes in Technologies and Requirements

30.2.5 Using the Security Services Framework

30.2.6 Framework Conclusion

30.3 RULES OF ENGAGEMENT

30.3.1 Web Site–Specific Measures

30.3.2 Defining Attacks

30.3.3 Defining Protection

30.3.4 Maintaining Privacy

30.3.5 Working with Law Enforcement

30.3.6 Accepting Losses

30.3.7 Avoiding Overreaction

30.3.8 Appropriate Responses to Attacks

30.3.9 Counter-Battery

30.3.10 Hold Harmless

30.4 RISK ANALYSIS

30.4.1 Business Loss

30.4.2 PR Image

30.4.3 Loss of Customers/Business

30.4.4 Interruptions

30.4.5 Proactive versus Reactive Threats

30.4.6 Threat and Hazard Assessment

30.5 OPERATIONAL REQUIREMENTS

30.5.1 Ubiquitous Internet Protocol Networking

30.5.2 Internal Partitions

30.5.3 Critical Availability

30.5.4 Accessibility

30.5.5 Applications Design

30.5.6 Provisioning

30.5.7 Restrictions

30.5.8 Multiple Security Domains

30.5.9 What Needs to Be Exposed?

30.5.10 Access Controls

30.5.11 Site Maintenance

30.5.12 Maintaining Site Integrity

30.6 TECHNICAL ISSUES

30.6.1 Inside/Outside

30.6.2 Hidden Subnets

30.6.3 What Need Be Exposed?

30.6.4 Multiple Security Domains

30.6.5 Compartmentalization

30.6.6 Need to Access

30.6.7 Accountability

30.6.8 Read-Only File Security

30.6.9 Going Off-Line

30.6.10 Auditing

30.6.11 Emerging Technologies

30.7 ETHICAL AND LEGAL ISSUES

30.7.1 Liabilities

30.7.2 Customer Monitoring, Privacy, and Disclosure

30.7.3 Litigation

30.7.4 Application Service Providers

30.8 SUMMARY

30.9 FURTHER READING

30.10 NOTES

30.1 INTRODUCTION.

Today, electronic commerce involves the entire enterprise. While the most obvious e-commerce applications involve business transactions with outside customers on the World Wide Web (WWW or Web), they are merely the proverbial tip of the iceberg. The presence of e-commerce has become far more pervasive, often involving the entire logistical and financial supply chains that are the foundations of modern commerce. Even the smallest organizations now rely on the Web for access to services and information.

The pervasive desire to improve efficiency often causes a convergence between the systems supporting conventional operations and those supporting the organization's online business. It is thus common for internal systems at bricks-and-mortar stores to utilize the same back-office systems as are used by Web customers. It is also common for kiosks and cash registers to use wireless networks to establish connections back to internal systems. These interconnections have the potential to provide intruders with access directly into the heart of the enterprise.

The TJX case, which came to public attention in the beginning of 2007, was one of a series of large-scale compromises of electronically stored information on back-office and e-commerce systems. Most notably, the TJX case appears to have started with an insufficiently secured corporate network and the associated back-office systems, not a Web site penetration; the initial intrusion then escalated into a breach of corporate data systems. It has been reported that at least 94 million credit cards were compromised.1 On November 30, 2007, it was reported that TJX, the parent organization of stores including TJ Maxx and Marshall's, agreed to settle bank claims related to VISA cards for US$ 40.9M.2

E-commerce has now come of age, giving rise to fiduciary risks that are important to senior management and to the board of directors. The security of data networks, both those used by customers and those used internally, now has reached the level where it significantly affects the bottom line. TJX has suffered both monetarily and in public relations, with stories concerning the details of this case appearing in the Wall Street Journal, the New York Times, Business Week, and many industry trade publications. Data security is no longer an abstract issue of concern only to technology personnel. The legal settlements are far in excess of the costs directly associated with curing the technical problem.

Protecting e-commerce information requires a multifaceted approach, involving business policies and strategies as well as the technical issues more familiar to information security professionals.

Throughout the enterprise, people and information are physically safeguarded. Even the smallest organizations have a locked door and a receptionist to keep outsiders from entering the premises. The larger the organization, the more elaborate the precautions needed. Small businesses have simple locked doors; larger enterprises often have many levels of security, including electronic locks, security guards, and additional levels of receptionists. Companies also jealously guard the privacy of their executive conversations and research projects. Despite these norms, it is not unusual to find that information security practices are weaker than physical security measures. Connection to the Internet (and within the company, to the intranet) worsens the problem by greatly increasing the risk, and decreasing the difficulty, of attacks.

30.2 BUSINESS POLICIES AND STRATEGIES.

In the complex world of e-commerce security, best practices are constantly evolving. New protocols and products are announced regularly. Before the Internet explosion, most companies rarely shared their data and their proprietary applications with any external entities, and information security was not a high priority. Now companies taking advantage of e-commerce need sound security architectures for virtually all applications. Effective information security has become a major business issue. This chapter provides a flexible framework for building secure e-commerce applications and assistance in identifying the appropriate and required security services. The theoretical examples shown are included to facilitate the reader's understanding of the framework in a business-to-customer (B2C) and business-to-business (B2B) environment.

A reasonable framework for e-commerce security is one that:

  1. Defines information security concerns specific to the application.
  2. Defines the security services needed to address the security concerns.
  3. Selects security services based on a cost-benefit analysis and risk versus reward issues.
  4. Ensures ongoing attention as technologies, threats, and application requirements change.

This four-step approach is recommended to define the security services selection and decision-making processes.

30.2.1 Step 1: Define Information Security Concerns Specific to the Application.

The first step is to define or develop the application architecture and the data classification involved in each transaction. This step considers how the application will function. As a general rule, if security issues are defined in terms of the impact on the business, it will be easier to discuss with management and easier to define security requirements.

The recommended approach is to develop a transactional follow-the-flow diagram that tracks transactions and data types through the various servers and networks. This should be a functional and logical view of how the application is going to work—that is, how transactions will occur, what systems will participate in the transaction management, and where these systems will support the business objectives and the organization's product value chain. Data sources and data interfaces need to be identified, and the information processed needs to be classified. In this way a complete transactional flow can be represented. (See Exhibit 30.1.)


EXHIBIT 30.1 Trust Levels for B2C Security Services
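To make the follow-the-flow idea concrete, the sketch below models one hypothetical order transaction as a sequence of hops, each tagged with its component, network space, and data classification. The component names, network labels, and classification levels are illustrative assumptions, not prescribed values; the point is simply that every hop in the flow carries an explicit classification that later steps can protect.

    from dataclasses import dataclass
    from enum import Enum

    class Classification(Enum):
        PUBLIC = 1
        INTERNAL = 2
        CONFIDENTIAL = 3
        HIGHLY_SENSITIVE = 4

    @dataclass
    class Hop:
        component: str        # e.g., "customer browser", "Web server"
        network: str          # e.g., "Internet", "DMZ", "intranet"
        data: Classification  # classification of the data at this hop

    # Follow-the-flow record for one hypothetical order transaction:
    order_flow = [
        Hop("customer browser", "Internet", Classification.HIGHLY_SENSITIVE),
        Hop("Web server", "DMZ", Classification.HIGHLY_SENSITIVE),
        Hop("application server", "intranet", Classification.HIGHLY_SENSITIVE),
        Hop("order database", "intranet", Classification.HIGHLY_SENSITIVE),
    ]

    for hop in order_flow:
        print(f"{hop.component:22s} {hop.network:10s} {hop.data.name}")

A table produced this way makes it immediately visible where highly sensitive data crosses an untrusted network and therefore where security services must be applied.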

Common tiered architecture points include:

  • Clients. These may be PCs, thin clients (devices that use shared applications from a server and have small amounts of memory), personal digital assistants (PDAs), and wireless application protocol (WAP) telephones.
  • Servers. These may include World Wide Web, application, database, and middleware processors, as well as back-end servers and legacy systems.
  • Network devices. Switches, routers, firewalls, NICs, codecs, modems, and internal and external hosting sites.
  • Network spaces. Network demilitarized zones (DMZs), intranets, extranets, and the Internet.

It is important at this step of the process to identify the criticality of the application to the business and the overriding security concerns: transactional confidentiality, transactional integrity, or transactional availability. Defining these security issues will help justify the security services selected to protect the system. The more completely the architecture can be described, the more thoroughly the information can be protected via security services.

30.2.2 Step 2: Develop Security Service Options.

The second step considers the security services alternatives for each architecture component and the data involved in each transaction. Each architectural component and data point should be analyzed and possible security services defined for each. Cost and feasibility should not be considered to any great degree at this stage. The objective is to form a complete list of security service options with all alternatives considered. The process should be comparable with, or use the same techniques as, brainstorming. All ideas, even if impractical or far-fetched, should be included.

Decisions should not be made during this step; that process is reserved for Step 3.

The information security organization provides services to an enterprise. The services provided by information security organizations vary from company to company. Several factors will determine the required services, but the most significant considerations include:

  • Industry factors
  • The company's risk appetite
  • Maturity of the security function
  • Organizational approach (centralized or decentralized)
  • Impact of past security incidents
  • Internal organizational factors
  • Political factors
  • Regulatory factors
  • Perceived strategic value of information security

“Security services” are defined as safeguards and control measures to protect the confidentiality, integrity, and accountability of information and computing resources. Security services that are required to secure e-commerce transactions need to be based on the business requirements and on the willingness to assume or reduce the risk of the information being compromised. Information security professionals can be subject-matter experts, but they are rarely equipped to make the business decisions required to select the necessary services. Twelve security services that are critical for successful e-commerce security have been identified:

  1. Policy and procedures are a security service that defines the amount of information security that the organization requires and how it will be implemented. Effective policy and procedures will dovetail with system strategy, development, implementation, and operation. Each organization will have different policies and procedures; best practice dictates that organizations have policies and procedures based on the risk the organization is willing to take with its information. At a minimum, organizations should have a high-level policy that dictates the proper use of information assets and the ramifications of misuse.
  2. Confidentiality and encryption are a security service that secures data while they are stored or in transit from one machine to another. A number of encryption schemes and products exist; each organization needs to identify those products that best integrate with the application being deployed. For a discussion of cryptography, see Chapter 7 in this Handbook.
  3. Authentication and identification are a security service that differentiates users and verifies that they are who they claim to be. Typically, passwords are used, but stronger methods include tokens, smart cards, and biometrics. These stronger methods verify what you have (e.g., token) or who you are (e.g., biometrics), not just what you know (password). Two-factor authentication combines two of these three methods and is referred to as strong authentication. For more on this subject, see Chapter 28 in this Handbook.
  4. Authorization determines what access privileges a user requires within the system. Access includes data, operating system, transactional functions, and processes. Access should be approved by management who own or understand the system before access is granted. Authorized users should be able to access only the information they require for their jobs.
  5. Authenticity is a security service that validates a transaction and binds the transaction to a single accountable person or entity. Also called nonrepudiation, authenticity ensures that a person cannot dispute the details of a transaction. This is especially useful for contract and legal purposes.
  6. Monitoring and audit provide an electronic trail for a historical record of the transaction. Audit logs consist of operating system logs, application transaction logs, database logs, and network traffic logs. Monitoring these logs for unauthorized events is considered a best practice.
  7. Access controls and intrusion detection are technical, physical, and administrative services that prevent unauthorized access to hardware, software, or information. Data are protected from alteration, theft, or destruction. Access controls are preventive—stopping unauthorized access from occurring. Intrusion detection catches unauthorized access after it has occurred, so that damage can be minimized and access cut off. These controls are especially necessary when confidential or critical information is being processed.
  8. Trusted communication is a security service that assures that communication is secure. In most instances involving the Internet, this means that the communication will be encrypted. In the past, communication was trusted because it was contained within an organization's perimeter. Communication is currently ubiquitous and can come from almost anywhere, including extranets and the Internet.
  9. Antivirus is a security service that prevents, detects, and cleans viruses, Trojan horse programs, and other malware.
  10. System integrity controls are security services that help to assure that the system has not been altered or tampered with by unauthorized access.
  11. Data retention and disposal are a security service that keeps required information archived, or deletes data when they are no longer required. Availability of retained data is critical when an emergency exists. This is true whether the problem is a systems outage or a legal process, whether caused by a natural disaster or by a terrorist attack (e.g., September 11, 2001).
  12. Data classification is a security service that identifies the sensitivity and confidentiality of information. The service provides guides for information labeling, and for protection during the information's life.

Once an e-commerce application has been identified, the team must identify the security issues with that specific application and the necessary security services. Not all of the services will be relevant, but using a complete list and excluding those that are not required will assure a comprehensive assessment of requirements, with appropriate security built into the system's development. In fact, management can reconcile the services accepted with their level of risk acceptance.
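One lightweight way to make excluded services visible, and thereby force explicit risk acceptance, is a checklist derived from the twelve services. The sketch below assumes a hypothetical application for which only four services are selected; every service left unselected is surfaced as a residual risk that management must formally accept. The selections shown are illustrative, not recommendations.

    # The twelve services enumerated above, as a checklist for one application.
    SECURITY_SERVICES = [
        "policy and procedures", "confidentiality and encryption",
        "authentication and identification", "authorization", "authenticity",
        "monitoring and audit", "access controls and intrusion detection",
        "trusted communication", "antivirus", "system integrity controls",
        "data retention and disposal", "data classification",
    ]

    # Hypothetical selections; anything not selected becomes a visible
    # residual risk that management must formally accept.
    selected = {
        "confidentiality and encryption",
        "authentication and identification",
        "monitoring and audit",
        "access controls and intrusion detection",
    }

    for service in SECURITY_SERVICES:
        status = "selected" if service in selected else "RISK ACCEPTED BY MGMT"
        print(f"{service:42s} {status}")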

30.2.3 Step 3: Select Security Service Options Based on Requirements.

The third step uses classical cost-benefit and risk management analysis techniques to make a final selection of security service options. However, we recommend that all options identified in Step 2 be distributed along a continuum, such as that shown in Exhibit 30.2, so that they can be viewed together and compared.

Gauging and comparing the level of security for each security service and the data within the transaction will facilitate the decision process. Feasible alternatives can then be identified and the best solution selected based on the requirements. The most significant element to consider is the relative reduction in risk of each option, compared with the other alternatives. The cost-benefit analysis is based on the risk versus reward issues. The effectiveness information is very useful in a cost-benefit model.

Four additional concepts drive the security service option selection:

  1. Implementation risk, or feasibility
  2. Cost to implement and support
  3. Effectiveness in increasing control, thereby reducing risk
  4. Data classification

Implementation risk considers the feasibility of implementing the security service option. Some security systems are difficult to implement due to factors such as product maturity, scalability, complexity, and supportability. Other factors to consider include skills available, legal issues, integration required, capabilities, prior experience, and limitations of the technology.

Cost to implement and support measures the costs of hardware and software implementation, support, and administration. Consideration of administration issues is especially critical because high-level support of the security service is vital to an organization's success.

Effectiveness measures the reduction of risk proposed by a security service option once it is in production. Risk can be defined as the impact and likelihood of a negative event occurring after mitigating strategies have been implemented. An example of a negative event is the theft of credit card numbers from a business's database. Such an event causes not only possible losses to consumers but also negative public relations that may impact future business. Effective security service options reduce the risk of a negative event occurring.


EXHIBIT 30.2 Continuum of Options

Data classification measures the sensitivity and confidentiality of the information being processed. Data must be classified and protected from misuse, disclosure, theft, or destruction, regardless of storage format, throughout their life (from creation through destruction). Usually the originator of the information is considered to be the owner of the information and is responsible for classification, identification, and labeling. The more sensitive and confidential the information, the more information security measures will be required to safeguard and protect it.
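The four selection factors can be combined into a rough comparative score. The sketch below is one illustrative weighting, not a standard formula: effectiveness is scaled by data classification to approximate benefit, then discounted by implementation risk and cost. The option names and the 1-to-5 numeric scales are assumptions made for demonstration only.

    from dataclasses import dataclass

    @dataclass
    class ServiceOption:
        name: str
        implementation_risk: int  # 1 (easy to deploy) .. 5 (very difficult)
        cost: int                 # 1 (cheap) .. 5 (expensive), incl. support
        effectiveness: int        # 1 (weak) .. 5 (strong risk reduction)

    def score(opt: ServiceOption, data_classification: int) -> float:
        # Benefit: risk reduction scaled by the sensitivity of the data;
        # burden: feasibility risk plus implementation and support cost.
        return (opt.effectiveness * data_classification) / (
            opt.implementation_risk + opt.cost)

    options = [
        ServiceOption("default event logging", 1, 1, 1),
        ServiceOption("centralized log server", 3, 3, 4),
        ServiceOption("host-based IDS", 4, 4, 5),
    ]
    for opt in sorted(options, key=lambda o: score(o, 4), reverse=True):
        print(f"{opt.name:24s} score={score(opt, 4):.2f}")

Ranked output of this kind is a discussion aid for management, not a substitute for the cost-benefit and risk acceptance decisions themselves.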

30.2.4 Step 4: Ensure Ongoing Attention to Changes in Technologies and Requirements.

The only constant in this analysis is the need to evolve and to address ever-increasing threats and technologies. Whatever security approaches the preceding steps identify, they must always be considered in the context of the continuing need to update the selected approaches. Changes will be inevitable, whether they arrive from compliance, regulation, technological advances, or new threats.

30.2.5 Using the Security Services Framework.

The next two subsections present examples that demonstrate the power of the security services methodology. The first example is a B2C model; the business could be any direct-selling application. The second example is a B2B model. Both businesses take advantage of the Internet to improve their product value chains. The examples are a short demonstration of the security services methodology, neither complete nor representative of any particular application or business.

30.2.5.1 Business-to-Customer Security Services.

The B2C company desires to contact customers directly through the Internet, and allow them to enter their information into the application. These assumptions are made to prepare this B2C system example:

  • Internet-facing business
  • Major transactions supported
  • External customer-based system, specifically excluding support, administration, and operations
  • Business-critical application
  • Highly sensitive data classification
  • Three-tiered architecture
  • Untrusted clients, because anyone on the Internet can be a customer

Five layers must be secured:

  1. The presentation layer is the customer interface, what the client sees or hears using the Web device. The client is the customer's untrusted PC or other device. The security requirements at this level are minimal because the company will normally not dictate the security of the customer. The identification and authentication done at the presentation level are those controls associated with access to the device. The proliferation of appliances in the client space (e.g., traditional PCs, thin desktops, and PDAs) makes it difficult to establish a uniform access control procedure at this level. Although this layer is the terminal endpoint of the secure connection, it is not inherently trustworthy, as has been illustrated by all too many incidents involving public computers in cafes, hotels, and other establishments.
  2. The network layer is the communication connection between the business and the customer. The client, or customer, uses an Internet connection to access the B2C Web applications. The security requirements are minimal, but sensitive and confidential traffic will need to be encrypted.
  3. The middle layer is the Web server that connects to the client's browser and can forward and receive information. The Web server supports the application by being an intermediary between the business and the customer. The Web server needs to be very secure. Theft, tampering, and fraudulent use of information needs to be prevented. Denial of service and Web site defacement are also common risks that need to be prevented in the middle layer.
  4. The application layer is where the information is processed. The application serves as an intermediary between the customer requests and the fulfillment systems internal to the business. In some examples, the application server and database server are the same because both the application and database reside on the same server. However, they could reside within the Web server in other cases.
  5. The internal layer is comprised of the business's legacy systems and databases that support customer servicing. Back-end servers house the supporting application, including order processing, accounts receivable, inventory, distribution, and other systems.

For each of these five levels, we need four security services:

  1. Trusted communications
  2. Authentication/identification
  3. Audit
  4. Access controls

Step 1: Define Information Security Concerns Specific to the Application. Defining security issues will be particular to the system being implemented. To understand the risk of the system, it is best to start with the business risk, then define risks at each element of the architecture.

Business Risk

  • The application needs high availability, because customers will demand contact at off-hours and on weekends.
  • The Web pages need to be secure from tampering and cybervandalism, because the business is concerned about the loss of customer confidence as a result of negative publicity.
  • Customers must be satisfied with the level of service.
  • The system will process customer credit card information and will be subject to the privacy regulations of the Gramm-Leach-Bliley Act, 15 USC §§ 6801–09.

Technology Concerns

Of the five architectural layers in this example, four will need to be secured:

  1. The presentation layer will not be secured or trusted. Communications between the client and the Web server will be encrypted at the network layer.
  2. The network layer will need to filter unwanted traffic, to prevent denial of service (DoS) attacks and to monitor for possible intrusions.
  3. The middle layer will need to prevent unauthorized access, be tamper-proof, contain effective monitoring, and support efficient and effective processing.
  4. The application layer will need to prevent unauthorized access, support timely processing of transactions, provide effective audit trails, and process confidential information.
  5. The internal layer will need to prevent unauthorized access, especially through Internet connections, and to protect confidential information during transmission, processing, and storage.

Step 2: Develop Security Services Options. The four security services reviewed in this example are the most critical in an e-commerce environment. Other services, such as nonrepudiation and data classification, are important but are not included, in order to simplify the example. The services selected are:

  • Trusted communication
  • Authentication and identification
  • Monitoring and auditing
  • Access control

Many security services options are available for the B2C case, with more products and protocols on the horizon. Options are considered for each of the five architectural layers defined in Step 1.

  1. Presentation layer. Several different options can be selected for trusted communication. Hypertext Transfer Protocol (HTTP) is the most common; Secure Sockets Layer (SSL) certificates in a PKI, or digital signatures, are less common; and WAP for wireless communications and Transport Layer Security (TLS) for encrypted Internet communications are the rarest. Because the client is untrusted, the client's authentication, audit, or access control methods cannot be relied on.
  2. Network layer. Virtual private networks (VPNs) are considered best practice for secure network layer communication. Firewalls are effective devices for securing network communication. If the client is in a large corporation, there is a significant likelihood that a corporate firewall will intervene in communications; if the client is using a home or laptop computer, a personal firewall may protect traffic. There will also be a firewall on the B2C company side of the Internet.
  3. Middle layer. The Web server and the application server security requirements are significant. Unauthorized access to the Web server can result in Web site defacement by hackers who change the Web data. More important, access to the Web or application server can lead to theft, manipulation, or deletion of customer or proprietary data. Communication between the Web server and the client needs to be secure in e-commerce transactions. HTTP is the most common form of browser communication. In 2000, it was reported that over 33 percent of credit card transactions were using unsecured HTTP.3 In 2007, VISA reported that compliance with the Payment Card Industry Data Security Standard (PCI DSS) had improved to 77 percent of the largest merchants in the United States,4 still far from universal. This lack of encryption is a violation of both the merchant's agreement and the PCI DSS, but episodes continue to occur. In the same vein, it is not unusual for organizations to misuse encryption, whether it involves self-signed, expired, or not generally recognized certificates. (See Chapter 37 in this Handbook; a minimal certificate-verification sketch follows this list.) SSL and HTTPS are the most common secure protocols, but the encryption key length and contents are critical: The larger the key, the more secure the transaction is from brute-force decryption. (See Chapter 7.) Digital certificates in a PKI and digital signatures are not as common, but they are more effective forms of security.
  4. Application layer. The application server provides the main processing for the Web server. This layer may include transaction processing and database applications; it needs to be secure to prevent erroneous or fraudulent processing. Depending on the sensitivity of the information processed, the data may need to be encrypted. Interception and the unauthorized manipulation of data are the greatest risks in the application layer.
  5. Internal layer. In a typical example, the internal layer is secured by a firewall protecting the external-facing system. This firewall helps the B2C company protect itself from Internet intrusions. Database and operating system passwords and access control measures are required.
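As an illustration of the trusted-communication service, the sketch below uses Python's standard ssl module to open a verified HTTPS connection. The default context verifies the certificate chain and host name, rejecting the self-signed, expired, and unrecognized certificates discussed above. The host name is a placeholder.

    import socket
    import ssl

    def check_tls(host: str, port: int = 443) -> None:
        # The default context verifies the certificate chain and host name,
        # rejecting self-signed, expired, and unrecognized certificates.
        context = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
                print("Protocol:", tls.version())
                print("Cipher:  ", tls.cipher()[0])
                print("Expires: ", cert.get("notAfter"))

    check_tls("www.example.com")  # placeholder host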

Management can decide which security capability is required and at what level. This format can be repeated to discuss security services at all levels and all systems, not just e-commerce-related systems.

Step 3: Select Security Service Options Based on Requirements. In Step 3 the B2C company can analyze and select the security services that best meet its legal, business, and information security requirements. There are four stages required for this analysis:

  1. Implementation risk, or feasibility
  2. Cost to implement and support
  3. Effectiveness in increasing control, thereby reducing risk
  4. Data classification

Implementation risk is a function of the organization's ability to effectively roll out the technology. In this example, we assume that implementation risk is low and resources are readily available to implement the technology.

Costs to implement and support are paramount to the decision-making process. Both costs need to be considered together. Unfortunately, the cost to support is difficult to quantify and easily overlooked. In this example, resources are available both to implement and to support the technology.

Effectiveness in increasing control is an integral part of the benefit and risk management decisions. Each component needs to be considered in order to determine cross-benefits where controls overlap and supplement other controls. In this case, the control increase is understood and supported by management.

Data classification is the foundation for requirements. It will help drive the cost-benefit discussions because it captures the value of the information to the underlying business. In this example, the data are considered significant enough to warrant additional security measures to safeguard the data against misuse, theft, and loss.

There are many technology decisions required to secure the example environment. Management can use this approach to plot the security levels required by the system. For example, for system audit services, in order of minimal service to maximum:

  • The minimal level of security is to have systems with a limited number of system events logged. For example, the default level of logging from the manufacturer is used but does not contain all of the required information. The logs are not reviewed but are available for forensics in the event of a security incident.
  • A higher level of security is afforded with a log that records more activities based on requirements and not, for example, the manufacturer's default level. As in the minimal level of security, the activities are logged and available to support forensics but are not reviewed. In this case, more types of information are recorded in the system log, but it still may not contain all that is required.
  • A sufficient log is kept on each server and is manually monitored for anomalies and potential security events.
  • The log is automatically reviewed by software on each server.
  • System logs are consolidated onto a centralized security server. Data from the system logs are transmitted to the centralized security server, and software is then used to scan the logs for specific events that require attention. Events such as attempts to gain escalated privileges to root or administrative access can be flagged for manual review (a minimal scanning sketch follows this list).
  • The maximum service level is a host-based intrusion detection system (IDS) used to scan the system logs for anomalies and possible security events. Once detected, action needs to be taken to resolve the intrusion. The procedure should include processes such as notification, escalation, and automated defensive response.
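As a sketch of the centralized-scan level described above, the following code reads a consolidated log file and flags lines matching escalation-related patterns. The log path and patterns are hypothetical; a production deployment would use the organization's actual log format, a vetted pattern set, and the notification and escalation procedures named above.

    import re

    # Illustrative patterns for privilege-escalation events; a real
    # deployment would use a vetted, site-specific pattern set.
    ESCALATION_PATTERNS = [
        re.compile(r"sudo: .*COMMAND="),       # command run with sudo
        re.compile(r"FAILED SU"),              # failed switch to root
        re.compile(r"authentication failure"), # repeated logon failures
    ]

    def scan(logfile):
        # Yield consolidated-log lines that match any escalation pattern.
        with open(logfile, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                if any(p.search(line) for p in ESCALATION_PATTERNS):
                    yield line.rstrip()

    # Hypothetical path to the consolidated log on the security server.
    for event in scan("/var/log/consolidated/security.log"):
        print("REVIEW:", event)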

30.2.5.2 Business-to-Business Security Services.

The second case study uses the security services framework in a B2B example. Following is a theoretical discussion of how the framework can be applied to B2B e-commerce security. These assumptions may be made in this B2B system example:

  • Internet-facing.
  • Supports major transactions.
  • Descriptions will be external and customer based (excluding support, administration, and operations security services).
  • Trusted communication is required.
  • Three-tier architecture.
  • Untrusted client.
  • Business-critical application.
  • Data are classified as highly sensitive.

There are five layers in this example that need to be secured:

  1. The presentation layer is the customer interface and is what the client sees or hears using the Web device. The client is the customer's untrusted PC, but more security constraints can be applied because the business can dictate enhanced security. As noted previously, the security of the presentation layer is complicated by the wide range of potential client devices that may be employed.
  2. The application layer is where the information is processed. The application serves as an intermediary between the business customer's requests and the fulfillment systems internal to the business (the back-end server). The application server is the supporting server and database.
  3. The customer internal layer is the interface between the application server supporting the system at the customer's business location, and the customer's own internal legacy applications and systems.
  4. The network layer is the communication connection between the business and another business. The Internet is used to connect the two businesses. Sensitive and confidential traffic will need to be encrypted. Best practice is to have the traffic further secured using a firewall.
  5. The internal layer is the business's legacy systems that support customer servicing. The back-end server houses the supporting systems, including order processing, accounts receivable, inventory, distribution, and other systems.

The four security services are:

  1. Trusted communications
  2. Authentication/identification
  3. Audit
  4. Access controls

Step 1: Define Information Security Concerns Specific to the Application. Defining security issues will be particular to the system being implemented. To understand the risk of the system, it is best to start with the business risk, then define risk at each element of the architecture.

Business Risk

  • Communication between application servers needs to be very secure. Data must not be tampered with, stolen, or misrouted.
  • Availability is critical during normal business hours.
  • Cost savings realized by switching from electronic data interchange (EDI) to the Internet are substantial and will more than cover the costs of the system.

Technology Concerns. There are six architectural layers in this example, five of which need to be secured:

  1. The presentation layer will not be secured or trusted. The communication between the client and the customer application is trusted because it uses the customer's private network.
  2. The application server will need to be secure. Traffic between the two application servers will need to be encrypted. The application server is inside the customer's network and demonstrates a potentially high, and perhaps unnecessary, degree of trust between the two companies (see § 30.6.5).
  3. The customer's internal layer will be secured by the customer.
  4. The network layer needs to filter out traffic that is not required, prevent DoS attacks, and monitor for possible intrusions. Two firewalls are shown: one to protect the client and the other to protect the B2B company.
  5. The application layer will need to prevent unauthorized access, support timely processing of transactions, provide effective audit trails, and process confidential information.
  6. The internal layer will need to prevent unauthorized access (especially through Internet connections) and protect confidential information during transmission, processing, and storage.

Step 2: Develop Security Services Options. There are four security services reviewed in this example. Others could have been included, such as authenticity, nonrepudiation, and confidentiality, but they have been excluded to simplify this example. Elected security services include:

  1. Trusted communication
  2. Authentication/identification
  3. Audit
  4. Access control

Many security services options are available for B2B environments.

  • Presentation layer. Several different options can be selected for communication. HTTP is the most common. The communications between the presentation layer residing on the client device and the application server, in this example, are internal to the customer's trusted internal network and will be secured by the customer.
  • Application layer. Communication between the two application servers needs to be secure. The easiest and most secure method of peer-to-peer communication is via a VPN.
  • Customer internal. Communications between the customer's application server and the customer's back-end server are internal to the customer's trusted internal network and will be secured by the customer.
  • Network layer. It is common in a B2B environment that a trusted network is created via a VPN. The firewalls will probably participate in these communications, but hardware solutions are also possible.
  • Application layer. The application server is at both the customer and B2B company sites. VPN is the most secure communication method. The application server also needs to communicate with the internal layer, and this traffic should be encrypted as well.
  • Internal layer. The internal layer may be secured with another firewall from the external-facing system. This firewall helps the B2B company to protect itself from intrusions and unauthorized access. In this example, an internal firewall is not assumed, so the external firewall and DMZ need to be very secure.

Intrusion detection, log reading, and other devices can easily be added and discussed with management. This format can be repeated to discuss security services at all levels and all systems, not just e-commerce-related systems.

Step 3: Select Security Service Options Based on Requirements. In Step 3, the B2B company can analyze and select the security services that best meet its legal, business, and information security requirements. The biggest difference between B2C and B2B systems is that the B2C system assumes no level of trust. The B2B system assumes trust, but additional coordination and interface with the B2B customer or partner is required. This coordination and interoperability must not be underestimated, because they may prove difficult and expensive to resolve. There are four stages required for this analysis:

  1. Implementation risk, or feasibility
  2. Cost to implement and support
  3. Effectiveness in increasing control, thereby reducing risk
  4. Data classification

Implementation risk is a function of the organization's ability to effectively roll out the technology. In this example, we assume that implementation risk is low and resources are readily available to implement the technology.

Cost to implement and support are paramount to the decision-making process. Both businesses' costs need to be considered. Unfortunately, the cost to support is difficult to quantify and easily overlooked. In this example, resources are available both to implement and to support the technology.

Effectiveness in increasing control is an integral part of the benefit and risk management analysis. Each security component needs to be considered in order to determine cross-benefits where controls overlap and supplement others. In this example, increased levels of control are understood and supported by management.

Data classification is the foundation for requirements and will help drive the cost-benefit discussions because it captures the value of the information to the underlying business. In this example, the data are considered significant enough to warrant additional security measures to safeguard the data against misuse, theft, and loss.

Each security service can be defined along a continuum, with implementation risk, cost, and data classification all considered. Management can use this chart to plot the security levels required by the system. This example outlines the effectiveness of security services options relative to other protocols or products. Each organization should develop its own continuums and provide guidance to Web developers and application programmers as to the correct uses and standard settings of the security services. For example, for authentication/identification services, in order of minimal service to maximum:

  • The minimal level of security is to have no passwords.
  • Weak passwords (e.g., easy to guess, shared, poor construction) are better than no passwords but still provide only a minimal level of security.
  • Operating system or database level passwords usually allow too much access to the system but can be effectively managed.
  • Application passwords are difficult to manage but can be used to restrict data access to a greater degree.
  • Role-based access distinguishes users by their need to know to support their job function. Roles are established and users are grouped by their required function.
  • Tokens are given to users and provide for two-factor authentication. Passwords and tokens are combined for strong authentication (a minimal time-based token sketch follows this list).
  • Biometrics are means to validate the person claiming to be the user via fingerprints, retina scans, or other unique physical characteristics.
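To illustrate the token level of this continuum, the sketch below implements the widely used time-based one-time password algorithm (TOTP, RFC 6238) with only the Python standard library. Combined with a conventional password, the six-digit code supplies the second factor. The Base32 secret shown is a placeholder, and real deployments would rely on a vetted authentication product rather than hand-rolled code.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        # HOTP (RFC 4226) applied to a time-derived counter (RFC 6238).
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // period)
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Placeholder secret; real secrets are provisioned per user or token.
    print(totp("JBSWY3DPEHPK3PXP"))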

For more information on identification and authentication, see Chapters 28 and 29 in this Handbook.

30.2.6 Framework Conclusion.

Internet e-commerce has changed the way corporations conduct business with their customers, vendors, suppliers, and business units. The B2B and B2C sectors will likely continue to grow. Despite security concerns, the acceleration toward increased use of the Internet as a sales, logistics, and marketing channel continues. The challenge for information security professionals is to keep pace with this change from a security perspective, but not to impede progress. An equal challenge is that the products that secure the Internet are new and not fully functional or mature. The products will improve, but meanwhile, existing products must be implemented, and later retrofitted with improved and more secure services. This changing environment, including the introduction of ever more serious and sophisticated threats, will remain difficult to secure.

The processes described in this section will allow the security practitioner to provide business units with a powerful tool to communicate, select, and implement information security services. Four steps were described, and the first three were demonstrated with two examples. The process supports decision making. Decisions can be made and readily documented to demonstrate the cost effectiveness of the security selections. The risk of specific decisions can be discussed and accepted by management. The trade-offs between cost and benefit can be calculated and discussed. It is therefore critical that alternatives be reviewed and good decisions made. The processes supporting these decisions need to be efficient and quickly applied. The information security services approach will allow companies to implement security at a practical pace. Services not selected are easily seen. The risk of not selecting specific security services needs to be accepted by management.

30.3 RULES OF ENGAGEMENT.

The Web is a rapidly evolving, complex environment. Dealing with customers electronically is a challenge. Web-related security matters raise many sensitive security issues. Attacks against a Web site always need to be taken seriously. Correctly differentiating “false alarms” from real attacks continues to present a challenge. As an example, the Victoria's Secret online lingerie show, in February 1999, exceeded even the most optimistic expectations of its creators, and the volume of visitors caused severe problems. Obviously, the thousands of people were not attacking the site; they were merely a virtual mob attempting to access the same site at the same time. Similar episodes have occurred when sites were described as interesting on Usenet newsgroups. This effect also occurs with social networking sites such as YouTube, Facebook, and others, where a virtual tidal wave of requests can occur without warning. Physical mobs are limited by transportation, timing, and costs; virtual mobs are solely limited by how many can attempt to access a resource simultaneously, from locations throughout the world.

30.3.1 Web Site–Specific Measures.

Protecting a Web site means ensuring that the site and its functions are available 24 hours a day, seven days a week, and 365 days a year. It also means ensuring that the information exchanged with the site is accurate and secure.

The preceding section focused on protecting Internet-visible systems, predominantly those systems used within the company to interact with the outside world. This section focuses on issues specific to Web interactions with customers as well as to supply and distribution chains. Practically speaking, the Web site is an important, if not the most important, component of an organization's interface with the outside world.

Web site protection lies at the intersection of technology, strategy, operations, customer relations, and business management. Web site availability and integrity directly affect the main streams of cash flow and commerce: an organization's customers, production chains, and supply chains. This is in contrast to the general Internet-related security issues examined in the preceding section, which primarily affect those inside the organization.

Availability is the cornerstone of all Web-related strategies. Idle times have become progressively rarer. Depending on the business and its markets, there may be some periods of lower activity. In the financial trading community, there remain only a few small windows during a 24-hour period when updates and maintenance can be performed. As global business becomes the norm, customers, suppliers, and distributors increasingly expect information, and the ability to effect transactions, at any time of the day or night, even from modest-size enterprises. On the Internet, “nobody knows that you are a dog” also means “nobody knows that you are not a large company.” The playing field has indeed been leveled, though not uniformly raised or lowered: expectations have increased while capital and operating expenses have dramatically dropped.

Causation is unrelated to impact. The overwhelming majority of Web outages are caused by unglamorous problems. High-profile, deliberate attacks are much less frequent than equipment and personnel failures. The effect on the business organization is indistinguishable. Having a low profile is no defense against random scanning attacks.

External events and their repercussions can also wreak havoc, both directly and indirectly. The September 11, 2001, terrorist attacks that destroyed New York City's World Trade Center complex had worldwide impact, not only on systems and firms located in the destroyed complex. Telecommunications infrastructure was damaged or destroyed, severing Internet links for many organizations. Parts supplies and all travel were disrupted when North American airspace was shut down. Manhattan was sealed to entry and exit, while within the city itself, and throughout much of the world, normal operations were suspended. The September 11 attacks were extraordinarily disruptive, but security precautions similar to those described throughout this Handbook served to ameliorate damage to Web operations and other infrastructure elements of those concerns that had implemented them. Indeed, the existence of the Web and the resulting ability to organize groups without physical presence proved a means to mitigate the damage from the attacks, even for firms that had a major presence in the World Trade Center. In the period following the attacks on the World Trade Center, Morgan Stanley and other firms that had offices in the affected area implemented extensive telecommuting and Web-based interactions, first to account for their staffs5 and then to enable work to continue.6

Best practices and scale are important. Some practices, issues, and concerns at first glance appear relevant only to very large organizations, such as Fortune 500 companies. In fact, this is not so. Considering issues in the context of a large organization permits them to appear magnified and in full detail. Smaller organizations are subject to the same issues and concerns but may be able to implement less formal solutions. “Formal” does not necessarily imply written procedures. It may mean that certain computer-related practices, such as modifying production facilities in place, are inherently poor ideas and should be avoided. Very large enterprises might address the problems by having a separate group, with separate equipment, responsible for operating the development environment.

30.3.2 Defining Attacks.

Repeated attempts to connect to a server could be ominous, or they could be nothing more than a customer with a technical problem. Depending on the source, large numbers of failed connects or aborted operations coming from gateway nodes belonging to an organization could represent a problem somewhere in the network, an attack against the server, or anything in between. It could also represent something no more ominous than a group of users within a locality accessing a Web resource through a firewall.
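One hedged way to triage such traffic is to compute per-source rates of failed connections and flag outliers for human review rather than automatic blocking since, as noted, a single busy gateway can represent many legitimate users. The sketch below is illustrative only; the window, threshold, and addresses are assumptions.

    from collections import Counter

    def flag_sources(failed_connects, window_seconds=60, threshold=5.0):
        # failed_connects: (timestamp, source_ip) pairs seen in one window.
        # A flagged source may be an attacker -- or many legitimate users
        # behind one gateway -- so it warrants review, not automatic blocking.
        counts = Counter(ip for _, ip in failed_connects)
        return {ip: n / window_seconds for ip, n in counts.items()
                if n / window_seconds > threshold}

    # Illustrative data using documentation address ranges:
    sample = [(t, "203.0.113.7") for t in range(400)] + [(5, "198.51.100.2")]
    print(flag_sources(sample))  # only the high-rate source is flagged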

30.3.3 Defining Protection.

There is a difference between protecting Internet-visible assets and protecting Web sites. For the most part, Internet-visible assets are not intended for public use. Thus, it is often far easier to anticipate usage volumes and to account for traffic patterns. With Web sites, activity is subject to the vagaries of the worldwide public. A dramatic surge in traffic could be an attack, or it could be an unexpected display of the site's URL in a television program or in a relatively unrelated news story. Differentiating between belligerence and popularity is difficult.

Self-protective measures that do not impact customers are always permissible. However, care must be exercised to ensure that the measures are truly impact free. As an example, some sites, particularly public FTP servers, often require that the Internet Protocol (IP) address of the requesting computer have an entry in the inverse domain name system, which maps IP addresses to host names (e.g., node 192.168.0.1 has a PTR [pointer record] 1.0.168.192.in-addr.arpa) (RFC1034, RFC1035; Mockapetris 1987a, 1987b), as opposed to the more widely known domain name system database, which maps host names into IP addresses. It is true that many machines do have such entries, but it is also true that many sites, including company networks, do not provide inverse DNS information. Whether this entire population should be excluded from the site is a policy and management decision, not a purely technical decision. Even a minuscule incident rate on a popular WWW site can be catastrophic, both for the provider and for the naive end user who has no power to resolve the situation.
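A minimal sketch of the inverse-DNS check described above, using Python's standard socket module: gethostbyaddr performs the PTR lookup, and a lookup failure simply means that no inverse mapping is published. Whether to reject such clients remains the policy decision discussed above; the address shown is a placeholder from a documentation range.

    import socket

    def has_ptr_record(ip: str) -> bool:
        # gethostbyaddr performs the inverse (PTR) lookup; a herror simply
        # means that no inverse mapping is published for this address.
        try:
            hostname, _, _ = socket.gethostbyaddr(ip)
            return bool(hostname)
        except socket.herror:
            return False

    # Placeholder address; rejecting clients that lack PTR records excludes
    # many legitimate networks -- a policy decision, not a technical one.
    print(has_ptr_record("192.0.2.1"))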

30.3.4 Maintaining Privacy.

Logging interactions between customers and the Web site is also a serious issue. A Web site's privacy policy is again a managerial, legal, and customer relations issue with serious overtones. Technical staff needs to be conscious that policies, laws, and other issues may dictate what information may be logged, where it can be stored, and how it may be used. For example, the 1998 Children's Online Privacy Protection Act (COPPA) (15 U.S.C. § 6501 et seq.) makes it illegal to obtain name and address information from children under the age of 13 in the United States. Many firms are party to agreements with third-party organizations such as TRUSTe,7 governing the use and disclosure of personal information. For more information on legal aspects of protecting privacy, see Chapter 69 in this Handbook.

30.3.5 Working with Law Enforcement.

Dealing with legal authorities is similarly complicated. Attempts at fraudulent purchases and other similar issues can be addressed using virtually the same procedures that are used with conventional attempts at mail or phone order fraud. Dealing with attacks and similar misuses is more complicated and depends on the organization's policies and procedures, and the legal environment. The status of the Web site is also a significant issue. If the server is located at a hosting facility, or is owned and operated by a third party, the situation becomes even more legally complicated. Involving law enforcement in a situation will likely require that investigators have access to the Web servers and supporting network, which may be difficult. Last, there is a question of what information is logged, and under what circumstances. For more information on working with law enforcement, see Chapter 61 in this Handbook.

30.3.6 Accepting Losses.

No security scheme is foolproof. Incidents will happen. Some reassurance can be taken from the fact that the most common reasons for system compromises in 2001 appear to remain the same as when Clifford Stoll wrote The Cuckoo's Egg in 1989. Then and now, poorly secured systems have:

  • Obvious passwords into management accounts
  • Unprotected system files
  • Unpatched known security holes

However, eliminating the simplest and most common ways in which outsiders can compromise Web sites does not resolve all problems. The increasing complexity of site content, and of the applications code supporting dynamic sites, means that there is an ongoing design, implementation, testing, and quality assurance challenge. Server-based and server-distributed software (e.g., dynamic www sites) is subject to the same development hazards as other forms of software. Security hazards will slip into a Web site, despite the best efforts of developers and testers. The acceptance of this reality is an important part of the planning necessary to deal with the inevitable incidents. When it is suspected that a Web site, or an individual component, has been compromised, the reaction plans should be activated. The plans required are much the same as those discussed in Chapter 21 in this Handbook. The difference is that the reaction plan for a Web site has to take into consideration that the group primarily impacted by the plan will be the firm's customers. The primary goal of the reaction plan is to contain the damage. For more information on computer security incident response, see Chapter 56.

30.3.7 Avoiding Overreaction.

Severe reactions may create as much, if not more, damage than the actual attack. The reaction plan must identify the decision-making authority and the guidelines to allow effective decisions to be made. This is particularly true of major sites, where attacks are likely to occur on a regular basis. Methods to determine the point at which the Web site must be taken off-line to prevent further damage need to be determined in advance.

In summary, when protecting Web sites and customers, defensive actions are almost always permissible and offensive actions of any kind are almost always impermissible. Defensive actions that are transparent to the customer are best of all.

30.3.8 Appropriate Responses to Attacks.

Long before the advent of the computer, before the development of instant communications, international law recognized that firing upon a naval vessel was an act of war. Captains of naval vessels were given standing orders summarized as fire if fired upon. In areas without readily accessible police protection, the right of citizens to defend themselves is generally recognized by most legal authorities. Within the body of international law, such formal standards of conduct for military forces are known as rules of engagement, a concept with global utility.

In cyberspace, it is tempting to jettison the standards of the real world. It is easy to imagine oneself master of one's own piece of cyberspace, without connection to real-world laws and limitations on behavior. However, information technology (IT) personnel do not have the standing of ships' captains, with no communications to the outside world. Some argue that fire if fired upon is an acceptable standard for online behavior. Such an approach does not take into account the legal and ethical issues surrounding response strategies and tactics.

Any particular security incident has a range of potential responses. Which response is appropriate depends on the enterprise and its political, legal, and business environment. Acceptability of response is also a management issue, and potentially a political one. Determining what responses are acceptable in different situations requires input from management on policy, from legal counsel on legality, from public relations on public perceptions, and from technical staff on technical feasibility. Depending on the organization, it also may be necessary to involve unions and other parties in the negotiation of what constitutes appropriate responses.

What is acceptable or appropriate in one area is not necessarily acceptable or appropriate in another. Often the national security arena has lower standards of proof than would be acceptable in normal business litigation. In U.S. civil courts, cases are decided upon a preponderance of evidence. Standards of proof acceptable in civil litigation are not conclusive when a criminal case is being tried, where guilt must be established beyond a reasonable doubt.

Gambits or responses that are perfectly legal in a national security environment may be completely illegal and recklessly irresponsible in the private sector, exposing the organization to significant legal liability.

Rules of etiquette and behavior are similarly complex. The rights of prison inmates in the United States remain significant, even though they are subject to rules and regulations substantially more restrictive than for the general population. Security measures, as well, must be appropriate for the persons and situations to which they are applied.

30.3.9 Counter-Battery.

Some suggest that the correct response to a perceived attack is to implement the cyberspace equivalent of counter-battery, that is, targeting the artillery that has just fired upon you. However, counter-battery tactics, when used as a defensive measure against Internet attacks, will be perceived, technically and legally, as an attack like any other.

Counter-battery tactics may be emotionally satisfying but are prone to both error and collateral damage. Counter-battery can be effective only when the malefactor is correctly identified and the effects of the reciprocal attack are limited to the malefactor. If third parties are harmed in any way, then the retaliatory action becomes an attack in and of itself. One of the more celebrated counter-battery attacks gone awry was the 1994 case when two lawyers from Phoenix, Arizona, spammed over 5,000 Usenet newsgroups to give unsolicited information on a U.S. Immigration and Naturalization Service lottery for 55,000 green cards (immigration permits). The resulting retaliation—waves of e-mail protests—against the malefactors flooded their Internet service provider (ISP) and caused at least one server to crash, resulting in a denial of service to all the other, innocent, customers of the ISP.8

30.3.10 Hold Harmless.

It is critical that Internet policies adopt a hold harmless position, because dealing with an Internet crisis often requires fast reactions. If employees act in good faith, in accordance with their responsibilities, and within documented procedures, management should never punish them for such action. If the procedures are wrong, managers should improve the rules and procedures, not blame the employees for following established policies. Disciplinary actions are manifestly inappropriate in such circumstances.

30.4 RISK ANALYSIS.

As noted earlier in this chapter, protecting an organization's Web sites depends on an accurate, rational assessment of the risks. Developing effective strategies and tactics to ensure site availability and integrity requires that all potential risks be examined in turn.

Unrestricted commercial activity has been permitted on the Internet since 1991. Since then, enterprises large and small have increasingly integrated Web access into their second-to-second operations. The risks inherent in a particular configuration and strategy are dependent on many factors, including the scale of the enterprise and the relative importance of the Web-based entity within the enterprise. Virtually all high-visibility Web sites (e.g., Yahoo, America Online, cnn.com, Amazon.com, and eBay) have experienced significant outages at various times.

The more significant the organization's Web component, the more critical is availability and integrity. Large, traditional firms with relatively small Web components can tolerate major interruptions with little damage. Firms large or small that rely on the Web for much of their business must pay greater attention to their Web presences, because a serious outage can quickly escalate into financial or public relations catastrophe.

For more details of risk analysis and management, see Chapters 62 and 63 in this Handbook.

30.4.1 Business Loss.

Business losses fall into several categories, any of which can occur in conjunction with an organization's Web presence. In the context of this chapter, customers are both outsiders accessing the Internet presence and insiders accessing intranet applications. In practice, insiders using intranet-hosted applications pose the same challenges as the outside users.

30.4.2 PR Image.

The Web site is the organization's public face 24/7/365. This ongoing presence is a benefit, making the firm visible at all times, but the site's high public profile also makes it a prime target.

Government sites in the United States and abroad have often been the targets of attacks. In January 2000, “Thomas,” the Web site of the U.S. Congress, was defaced. Earlier, in 1996, the Web site of the U.S. Department of Justice was vandalized. Sites belonging to the Japanese, U.K., and Mexican governments also have been vandalized.

These incidents have continued, with hackers on both sides of various issues attacking the other side's Web presence. While no conclusive evidence of the involvement of nation states in such activities has become generally known, such involvement is all but inevitable. Surges of such activity have coincided with major public events, including the 2001 U.S.-China incident involving an aerial collision between military aircraft, the Afghan and Iraqi operations, and the Israeli-Lebanese war of 2006. Such activity has not been limited to the national security arena, however. Many sites were defaced during the incidents following publication of cartoons in a Danish newspaper that some viewed as defaming the prophet Muhammad.

In some cases,9 companies have suffered collateral damage from hacking contests, where hackers prove their mettle by defacing as many sites as they can. The fact that there is no direct motive or animus toward the company is not relevant; the damage has still been done.

In the corporate world, company Web sites have been the target of attacks intended to defame the corporation for real or imagined slights. Some such episodes have been reported in the news media, whereas others have not been the subjects of extensive reporting. The scale and newsworthiness of the episode is unimportant; the damage done to the targeted organization is the true measure. An unreported incident that is the initiating event in a business failure is more damaging to the affected parties than a seemingly more significant outage with less severe consequences.

Other attacks (e.g., the sadmind/IIS worm) have used address scanners to target randomly selected machines. Obscurity is not a defense against address-scanner attacks.

30.4.3 Loss of Customers/Business.

Internet customers are highly mobile. Web site problems quickly translate into permanently lost customers. The reason for the outage is immaterial; the fact that there is a problem is often sufficient to provoke erosion of customer loyalty.

In most areas, there is competitive overlap. Using the overnight shipping business as an example, in most U.S. metropolitan areas there are (in alphabetical order) Airborne Express, Federal Express, United Parcel Service, and the United States Postal Service. All of these firms offer Web-based shipment tracking, a highly popular service. Problems or difficulties with shipment tracking will quickly lead to a loss of business in favor of a competitor with easier tracking.

30.4.4 Interruptions.

Increasingly, modern enterprises are being constructed around ubiquitous 24/7/365 information systems, most often with Web sites playing a major role. In this environment, interruptions of any kind are catastrophic.

Production. The past 20 years have seen a streamlining of production processes in all areas of endeavor. Twenty years ago, it was common for facilities to have multiday supplies of components on hand in inventory. Today, zero latency or just-in-time (JIT) environments are common, permitting large facilities to operate with little or no on-hand inventory. Fiscally, zero-latency environments may be optimally cost efficient, yet the paradigm leaves little margin for disruptions of the supporting logistical chain. This chain is sometimes fragile, and subject to disruption by any number of hazards.

Supply Chain. Increasingly, it is common for Web-based sites to be an integral part of the supply chain. Firms may encourage their vendors to use a Web-based portal to gain access to the vendor side of the purchasing system. XML10-based gateways and service-oriented architecture (SOA) approaches, together with other Web technologies, are used to arrange for and manage the flow of raw materials and components required to support production processes. The same streamlining that speeds information between supplier and manufacturer also provides a potential for serious mischief and liability.

Delivery Chain. Web-based sites, both internal and external, also have become the vehicle of choice for tracking the status of orders and shipments, and increasingly as the backbone of many enterprises' delivery chain management and inquiry systems.

Information Delivery. Banks, brokerages, utilities, and municipalities are increasingly turning to the Web as a convenient, low-cost method for managing their relationships with consumers. Firms are also supporting downloading records of transactions and other relationship information in formats required by personal database programs and organizers. These outputs, in turn, are often used as inputs to other processes, which then generate other transactions. Not surprisingly, as time passes, more and more people and businesses depend on the availability of information on demand. Today's Web-based customers presume that information is accessible wherever they can use a personal computer or even a Web-enabled cellular telephone. This is reminiscent of usage patterns of automatic teller machines in an earlier decade, which allowed people to base their plans on access to teller machines, often making multiple $20 withdrawals instead of cashing a $200 check weekly.

30.4.5 Proactive versus Reactive Threats.

Some threats and hazards can be addressed proactively, whereas others are inherently reactive. When strategies and tactics are developed to protect a Web presence, the strategies and tactics themselves can induce availability problems.

As an example, consider the common strategy of having multiple name servers responsible for providing the translation of domain names to IP addresses. Before a domain name (properly referred to as a Domain Name System [DNS] zone) can be entered into the parent zone's name servers, at least two name servers must be identified to process name resolution requests. Name servers are a prime example of resources that should be geographically diverse.

Updating DNS zones requires care. If an update is performed improperly, then the resources referenced via the symbolic DNS names will become unresolvable, regardless of the actual state of the Web server and related infrastructure. The risk calculus involving DNS names is further complicated by the common, efficient, and appropriate practice of designating ISP name servers as the primary mechanism for the resolution of domain names. In short, name translation provides a good example of the possible risks that can affect a Web presence.
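
The risk of an improper update can be reduced with a routine consistency check across all of the zone's name servers. The following is a minimal sketch, assuming the third-party dnspython package; the host name and name server addresses are hypothetical placeholders.

    # Sketch: confirm that every name server for a zone answers
    # consistently before declaring a DNS update complete.
    # Assumes the dnspython package; host and servers are hypothetical.
    import dns.resolver

    HOST = "www.example.com"                       # hypothetical host name
    NAME_SERVERS = ["192.0.2.1", "198.51.100.1"]   # hypothetical, diverse servers

    def answers_from(server_ip):
        """Ask one specific name server for HOST's A records."""
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server_ip]
        return frozenset(rr.address for rr in resolver.resolve(HOST, "A"))

    results = {ip: answers_from(ip) for ip in NAME_SERVERS}
    if len(set(results.values())) != 1:
        print("WARNING: name servers disagree; update may be incomplete:", results)
    else:
        print("All name servers agree:", results)

Run after every zone change, such a check catches the most common failure mode: an update applied to one server but not propagated to its peers.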

30.4.6 Threat and Hazard Assessment.

Some threats are universal, whereas others are specific to an individual environment. The most devastating and severe threats are those that simultaneously affect large areas or populations, where efforts to repair damage and correct the problem are hampered by the scale of the problem.

On a basic level, threats can be divided into several categories. The first is between deliberate acts and accidents. Deliberate acts comprise actions done with the intent to damage the Web site or its infrastructure. Accidents include natural phenomena (acts of God) and clumsiness, carelessness, and unconsidered consequences (acts of clod).

Broadly put, a deliberate act is one whose goal is to impair the system. Deliberate acts come in a broad spectrum of skill and intent. For the purpose of risk analysis and planning, deliberate acts against infrastructure providers can often appear to be acts of God. To an organization running a Web site, an employee attack against a telephone carrier appears simply as a service interruption of unknown origin.

No enterprise or agency should consider itself an unlikely target. Past high-profile incidents have targeted the FBI (May 26 and 27, 1999), major political parties, and interest groups in the United States. On the consumer level, numerous digital subscriber line (DSL)-connected home systems have been targeted for subversion as preparation for the launching of distributed denial-of-service (DDoS) attacks. In 2007, several investigations resulted in the arrests of individuals for running so-called botnets (ensembles of compromised computers); these networks numbered hundreds of thousands of machines.11 The potential for such networks to be used for mischief cannot be overestimated.

For more details of threats and hazards, see many other chapters in this Handbook, including 1, 2, 4, 5, 13, 14, 15, 16, 17, 18, 19, 20, 22, and 23.

30.5 OPERATIONAL REQUIREMENTS.

Internet-visible systems are those with any connection to the worldwide Internet. It is tempting to consider protecting Internet-visible systems as a purely technical issue. However, technical and business issues are inseparable in today's risk management. For example, as noted earlier in this chapter, the degree to which systems should be exposed to the Internet is fundamentally a business risk-management issue. Protection technologies and the policies behind the protection can be discussed only after the business risk questions have been considered and decided, setting the context for the technical discussions. In turn, business risk-management evaluation (see Chapter 62) must include a full awareness of all of the technical risks. Ironically, nontechnical business managers can accurately assess the degree of business risk only after the technical risks have been fully exposed.

Additional business and technical risks result from outsourcing. Today, many enterprises include equipment owned, maintained, and managed by third parties. Some of this equipment resides on the organization's own premises and other equipment resides off site: for example, at application service provider facilities.

Protecting a Web site begins with the initial selection and configuration of the equipment and its supporting elements, and continues throughout its life. In general, care and proactive consideration of the availability and security aspects of the site from the beginning will reduce costs and operational problems. Although perfection is virtually impossible to achieve, the goal is a configuration whose architecture and implementation continue operating even in the face of problems, with minimal customer impact.

That is not to say that a Web site can operate without supervision. Ongoing, proactive monitoring is critical to ensuring the secure operation of the site. Redundancy only reduces the need for real-time response by bypassing a problem temporarily; it does not eliminate the underlying cause. The initial failure must be detected, isolated, and corrected as soon as possible, albeit on a more schedulable basis. Otherwise, the system will operate in its successively degraded redundancy modes until the last redundant component fails, at which time the system will fail completely.

30.5.1 Ubiquitous Internet Protocol Networking.

Business has been dealing with the security of internets (i.e., interconnected networks) since the advent of internetworking in the late 1960s. However, the growing use of Transmission Control Protocol/Internet Protocol (TCP/IP) networks and of the public Internet has exposed much more equipment to attack than in the days of closed corporate networks. In addition, a much wider range of equipment, such as voice telephones based on voice-over IP (VoIP), fax machines, copiers, and even soft drink dispensers, are now network accessible.

IP connectivity has been a great boon to productivity and ease of use, but it has not been without a darker side. Network accessibility also has created unprecedented opportunities for improper, unauthorized access to networked resources and other mischief. It is not uncommon to experience probes and break-in attempts within hours or even minutes of unannounced connection to the global Internet.

Protecting Internet-visible assets is inherently a conflict between ease of access and security. The safest systems are those unconnected to the outside world; similarly, the easiest systems to use are those that have no perceivable restrictions on use. Adjusting the limits on user activities and access must balance these conflicting requirements. As an example, many networks are managed in-band, meaning that the switches, routers, firewalls, and other elements of the network infrastructure are managed using the production network itself as the connection medium. If the management interfaces are not accessed over properly encrypted connections, management passwords are visible on the network; an outsider, or an unauthorized insider, who monitors the network may gain information sufficient to paralyze the network and, in turn, the organization.

30.5.2 Internal Partitions.

Complex corporate environments can often be secured effectively by dividing the organization into a variety of interrelated and nested security domains, each with its own legal, technical, and cultural requirements. For example, there are specific legal requirements for medical records (see Chapter 71 in this Handbook) and for privacy protection (see Chapters 69 and 70). Partners and suppliers, as well as consultants, contractors, and customers, often need two-way access to corporate data and facilities. These diverse requirements mean that a single corporate firewall is often insufficient. Different domains within the organization will often require their own firewalls and security policies. Keeping track of the multitude of data types, protection and access requirements, and different legal jurisdictions and regulations makes for previously unheard-of degrees of complexity.

Damage control is another property of a network with internal partitions. A system compromised by an undetected malware component will be limited in its ability to spread the contagion beyond its own compartment.

30.5.3 Critical Availability.

Networks are often critical for second-to-second operations; as a result, the side effects of ill-considered countermeasures may be worse than the damage from the actual attack. For example, shutting down the network, or even part of it, for maintenance or repair can wreak more havoc than penetration by a malicious hacker.

30.5.4 Accessibility.

Users must be involved in the evolution of rules and procedures. Today, it is still not unheard of for a university faculty to take the position that any degree of security will undermine the very nature of their community, compromising their ability to perform research and inquiries. This extreme position persists despite the attention of the mass media, the justified beliefs of the technical community, and documented evidence that lack of protection of any Internet-connected system undermines the safety of the entire connected community.

Connecting a previously isolated computer system or network to the global Internet creates a communications pathway to every corner of the world. Customers, partners, and employees can obtain information, send messages, place orders, and otherwise interact 24 hours a day, seven days a week, 365 days a year, from literally anywhere on or near Earth. Even the Space Shuttle and the International Space Station have Internet access. Under these circumstances, the possibilities for attack or inadvertent misuse are limitless.

Despite the vast increase in connectivity, some businesses and individuals do not need extensive access to the global Internet for their day-to-day activities, although they may resent being excluded. The case for universal access is therefore a question of business policy and political considerations.

30.5.5 Applications Design.

Protecting a Web site begins with the most basic steps. First, a site processing confidential information should always support the secure hypertext transfer protocol (HTTPS), typically using TCP port 443. Properly supporting HTTPS requires the presence of an appropriate digital certificate (see Chapter 37).

When the security requirements are uncertain, the site design should err on the side of using HTTPS for communications. Although the available literature on the details of Internet eavesdropping is sparse, the Communications Intelligence and Signals Intelligence (COMINT/SIGINT) historical literature from World War II makes it abundantly clear that encryption of all potentially sensitive traffic is the only way to protect information.
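
Where erring on the side of HTTPS is handled in the application tier rather than at a front-end proxy, a few lines suffice. The sketch below assumes the Flask framework and is illustrative only; production sites usually configure the redirect, along with related headers, on the Web server or load balancer itself.

    # Sketch: redirect any cleartext HTTP request to its HTTPS
    # equivalent. Flask is an assumption; the pattern applies to
    # most Web frameworks.
    from flask import Flask, redirect, request

    app = Flask(__name__)

    @app.before_request
    def force_https():
        # request.is_secure is False for plain-HTTP requests.
        if not request.is_secure:
            return redirect(request.url.replace("http://", "https://", 1),
                            code=301)

    @app.route("/")
    def index():
        return "Served over HTTPS."

A permanent (301) redirect also teaches search engines and bookmarks to prefer the encrypted address.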

Encryption also should be used within an organization, possibly with a different digital certificate, for sensitive internal communications and transactions. Earlier, it was noted that organizations are not monolithic security domains. This is nowhere more true than when dealing with human resources, employee evaluations, compensation, benefits, and other sensitive employee information. There are positive requirements that this information be safeguarded, but few organizations have truly secured their internal networks against internal monitoring. It is far safer to route all such communications through securely encrypted channels. Such measures also demonstrate a good faith effort to ensure the privacy and confidentiality of sensitive information.

It is also important to avoid providing all of the authentication information on a single page or, for that matter, in a sequence of pages. When parts of information are suppressed, as, for example, portions of a credit card or account number, the division between suppressed and displayed portions should be maintained. Displaying all of the information, even if it is on different screens, is an invitation to a security breach.
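
One way to honor this rule is to centralize masking in a single routine, so that every page suppresses the same portion of the number and the hidden digits can never be reassembled across screens. A minimal sketch follows; the function name is hypothetical.

    # Sketch: mask an account number consistently, so every page
    # displays the same final digits and never the remainder.
    def mask_account_number(number, visible=4):
        digits = [c for c in number if c.isdigit()]
        # Always suppress the same leading portion; if one page showed
        # the first 12 digits and another the last 4, the two views
        # combined would reveal the entire number.
        return "*" * (len(digits) - visible) + "".join(digits[-visible:])

    print(mask_account_number("4111 1111 1111 1111"))  # ************1111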

30.5.6 Provisioning.

Although today's hardware has unprecedented reliability, any failure of hardware between the customer and the data center will impair an enterprise's Web presence. For a highly available, front-line Web site, the effective requirement is a minimum of two diversely located facilities, each with a minimum of two servers. This is not necessarily an expensive proposition. Fairly powerful Web servers can be purchased for less than $5,000, so the total hardware expenditure for four servers is reasonable, substantially less than the annual cost of a single technician. In most cases, the cost of the extra hardware is more than offset by the business cost of downtime, which can sometimes exceed the total cost of the duplicative hardware by a factor as much as 100, in a single episode.12

Duplicate hardware and geographic diversity ensure constant customer access to some degree of functionality. The degree of functionality that must be maintained depends on the market and the customers. Financial firms supporting online stock trading have different operational, regulatory, and legal requirements than supermarkets. The key is matching the support level to the activities. Some degree of planned degradation is generally acceptable. Total unavailability is not an option.

30.5.7 Restrictions.

All Web servers should be located behind a firewall in a demilitarized zone (DMZ), as discussed earlier in this chapter. Incoming and outgoing services should be restricted to needed protocols such as HTTP, HTTPS, and the Internet Control Message Protocol (ICMP). For troubleshooting purposes, it is desirable to permit ICMP, which is used by PING, an echo requester, as a way to check connectivity. All unused ports should be disabled. Furthermore, the disabled ports should be blocked by the firewalls separating the DMZ from the outside world.
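
Expressed abstractly, this policy is a default-deny rule set. The toy sketch below illustrates the logic only; real deployments express the same rules in the firewall's own configuration language, and the permitted services shown are drawn from the text above.

    # Toy illustration of a default-deny rule set for a DMZ Web server:
    # only HTTP, HTTPS, and ICMP are admitted; everything else,
    # including all unused ports, is dropped.
    ALLOWED = {
        ("tcp", 80),     # HTTP
        ("tcp", 443),    # HTTPS
        ("icmp", None),  # echo request/reply for troubleshooting (PING)
    }

    def admit(protocol, dest_port=None):
        """Default deny: admit only explicitly allowed (protocol, port) pairs."""
        key = (protocol, None if protocol == "icmp" else dest_port)
        return key in ALLOWED

    assert admit("tcp", 443)        # HTTPS passes
    assert not admit("tcp", 23)     # TELNET is dropped
    assert not admit("udp", 161)    # SNMP is dropped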

Customer information should, to the extent possible, be stored on systems separate from the systems actually providing Web serving. Many security episodes appear to exploit file protection errors on the Web server, in order to access the database directly. Segregating customer data on separate machines, and ensuring that the only way to access customer data is through the documented pathways, is likely to impede severely any improper attempts to access and modify information.

These safeguards are especially important for high-security information such as credit card numbers. The number of incidents in which malefactors have downloaded credit card numbers directly from Web sites is an indication of the importance of such precautions. The systems actually storing the sensitive information should never be accessible from the public Internet.

The TJX case, described in the introduction to this chapter, remains instructive. It is reported on the TJX Web site that at least 45.7 million credit cards were compromised (original reports in USA Today and other publications cite 94 million credit card numbers as being compromised). On November 30, 2007, it was reported that TJX, the parent organization of stores including TJ Maxx and Marshall's, had agreed to settle bank claims related to VISA cards for US$ 40.9M.13 Also, as of March 2008, a class action suit on behalf of customers was in the process of being settled. It called for fees to be paid to each of the plaintiffs for credit monitoring and identity theft, and for new driver's licenses, as well as $6.5 million to plaintiffs' counsel. In spite of several such high-profile attacks, the lessons appear not to have been learned. As recently as March 2008, Hannaford Bros. Co. reported that 4.2 million credit card numbers had been exposed, with about 1,800 cases of fraudulent usage reported as of that date. Clearly, more stringent controls and safeguards are called for.

30.5.8 Multiple Security Domains.

The front-line Web servers, and the database servers supporting their activities, comprise two different security domains.

The Web servers, as noted previously, need to be globally accessible via HTTP, HTTPS, and ICMP. In turn, they need to access the application or database servers, and only those servers. In a production system, it is preferable that application or database servers interact with the Web servers using a dedicated, restricted-use protocol. Properly implemented, such a restriction prevents a hijacked Web server from exploiting its access to the application or database server.
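
The following sketch illustrates such a restricted-use protocol in miniature: the data tier exposes only named, pre-defined operations, so even a fully hijacked Web server cannot issue arbitrary queries. The operation names and the stub lookup are hypothetical.

    # Sketch: a deliberately narrow interface between the Web tier and
    # the data tier. The data tier dispatches only named operations.
    def lookup_order_status(order_id):
        # Stand-in for a parameterized database query on the data tier.
        return "order %s: shipped" % order_id

    ALLOWED_OPERATIONS = {
        "get_order_status": lookup_order_status,
    }

    def handle_request(operation, argument):
        """Dispatch only named, pre-defined operations; refuse all else."""
        handler = ALLOWED_OPERATIONS.get(operation)
        if handler is None:
            raise PermissionError("operation not permitted: %r" % operation)
        return handler(argument)

    print(handle_request("get_order_status", "12345"))
    # handle_request("SELECT * FROM cards", "") would raise PermissionError.

Because no raw query language crosses the tier boundary, a compromised front end is limited to the operations the data tier chooses to offer.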

These second-tier servers should be in a security domain separated by restrictive firewalls from the externally accessible front-line Web servers. This seems like a significant expenditure, but it is often less expensive and lower risk than a single significant incident.

30.5.9 What Needs to Be Exposed?

Publicly accessible Web sites need publicly accessible Web servers to perform their functions. The challenge is to provide the desired services to the public without simultaneously providing levers that can be used in unauthorized ways to subvert the site. A penetration incident may lead to significant financial losses, embarrassment, and financial and (in some cases) criminal liability.

No system on the Web site should be directly connected to the public Internet. All connections to the public network should be made through a firewall system, with the firewall configured to pass only Web-related traffic to those hosts.

Many sites will opt to place externally visible Web servers in a security compartment of their own, on a separate port of the firewall (if not a totally separate firewall), using a separate DMZ from other publicly accessible resources. These precautions may seem excessive, but having improperly secured systems can lead to security breaches that are extremely difficult to correct and can lead to extended downtime while the problems are analyzed and remedied. In this case, an ounce of prevention is worth substantially more than a pound of cure.

30.5.9.1 Exposed Systems.

Exposed systems are inherently a security hazard. Systems that are not accessible from the public network cannot be compromised from the public network. Only systems that absolutely need to be publicly accessible should be so configured. Minimizing the number of exposed systems is generally desirable, but this is best considered in terms of machine roles rather than actual counts of systems. Increasing the load on each publicly accessible server by increasing the size of the server, thus increasing the amount of capacity impacted by a single system outage, is not a benefit. However, this must be balanced against the new trend towards server virtualization, which does not increase the size of a server, but which increases its utilization, in order to lower costs and improve reliability.

The introduction of SSL-based VPN implementations, and of services supporting remote access via secure HTTPS connections (e.g., gotomypc.com), creates an entirely new class of security hazard.

30.5.9.2 Hidden Subnets.

The servers directly supporting the Web site need to be accessed by the outside world, and thus must generally have normal Internet addresses. However, in most cases the other systems supporting the Web servers generally have no legitimate need for unrestricted access from or to the public network.

The safest address assignments for supporting, non–outside-visible systems are the IPv4 addresses allocated for use in private Internets14 and the corresponding IPv6 equivalents. Needless to say, these systems should be in a security compartment separated from the publicly accessible Web servers, and that compartment should be isolated from the publicly accessible compartment with a very restrictive firewall.
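
A simple audit of address assignments can be automated with Python's standard ipaddress module, whose is_private test covers the RFC 1918 blocks along with loopback and link-local space. The host list below is hypothetical; 8.8.8.8 appears only as a well-known public address for contrast.

    # Sketch: confirm that back-end systems carry only private
    # (RFC 1918 and related) addresses.
    import ipaddress

    BACK_END_HOSTS = ["10.0.5.20", "172.16.9.8", "8.8.8.8"]  # hypothetical list

    for host in BACK_END_HOSTS:
        addr = ipaddress.ip_address(host)
        status = "private" if addr.is_private else "PUBLIC -- review!"
        print("%s: %s" % (host, status))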

30.5.10 Access Controls.

Publicly accessible systems are both the focus of an organization's security efforts and the primary target of attempts to compromise that security. The number of individuals authorized to make changes to the systems, and the ways in which changes may be made, need to be carefully controlled, reported, and monitored. Cleared individuals should use individual accounts, and access to sensitive functions through such accounts should be invalidated immediately if the individual's authorization lapses, whether because the employee ceases employment, comes under investigation for impropriety, or for any other reason. For more information on access controls, see Chapters 9, 15, 23, 24, 25, 26, 27, and 28 in this Handbook.

30.5.11 Site Maintenance.

Maintaining and updating the Web site requires great care. The immediate nature of the Web makes it possible for a single-character error in a major enterprise-level application to cause hundreds of thousands of dollars of damage within moments. Web servers need to be treated with respect; the entire enterprise is riding on the electronic image projected by the server. Cybervandalism, which most commonly consists of defacing the home page of a well-known site, requires unauthorized updating of the files comprising the site. In 2000 alone, well-known public and private entities including the FBI, OPEC, the World Trade Organization, and NASA, as well as educational institutions such as the University of Limerick, were harmed in this way. These attacks continue to be a danger, both overtly, in terms of damage to the organization's image, and covertly, through use of the Web site as a launching pad for Web-based exploits.

The Web is inherently a highly leveraged environment. Small changes in the content of a single page percolate throughout the Web in a matter of minutes. Information disseminates easily and quickly at low cost. Leverage helps tremendously when things go right; when things go badly, leverage dramatically compounds the damage. For more information on change control, see Chapters 40, 47, and 52 in this Handbook.

30.5.12 Maintaining Site Integrity.

Every Web site and its servers are targets. Antagonists can be students, activists, terrorists, disgruntled former employees, or unhappy customers. Because Web sites are an enterprise's most public face, they represent extremely desirable targets.

Maintaining integrity requires that updates and changes to the site be done in a disciplined manner. Write access to the site must be restricted, and those authorized must use secure methods to access the Web servers. The majority of reported incidents appear to be the result of weak security in the update process. For example, unsecured FTP access from the general Internet is a poor practice. Safer mechanisms include:

  • Secure FTP
  • FTP from a specific node within the inner firewall
  • KERMIT on a directly wired port
  • Logins and file transfers over secure authenticated connections via SSH
  • Physical media transfers

Most of these technologies do not inherently require that an on-site individual perform server updates; remote maintenance remains possible. It does mean that, in order to reach a machine from which an update can be performed, it is necessary to come through a virtual private network using the point-to-point tunneling protocol or the Layer 2 tunneling protocol (PPTP/L2TP), authenticated by at least one of the secure gateways. For more information on VPNs, see Chapters 32, 33, 34, and 35 in this Handbook.
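
As an illustration of the secure-update path, the sketch below performs a file transfer over SSH using the paramiko library (an assumption; any SFTP client serves equally well). The host, account, and paths are hypothetical, and the connection is presumed to originate from an authorized management host behind the inner firewall.

    # Sketch: push a site update over SFTP (SSH), verifying the
    # server's host key rather than accepting any key offered.
    import paramiko

    client = paramiko.SSHClient()
    client.load_system_host_keys()                # known, verified server keys
    client.set_missing_host_key_policy(paramiko.RejectPolicy())
    client.connect("webserver.example.com",       # hypothetical host
                   username="webupdate",          # hypothetical account
                   key_filename="/home/webupdate/.ssh/id_ed25519")

    sftp = client.open_sftp()
    sftp.put("site/index.html", "/var/www/html/index.html")  # hypothetical paths
    sftp.close()
    client.close()

Rejecting unknown host keys, as above, prevents a man-in-the-middle from silently intercepting update credentials.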

30.6 TECHNICAL ISSUES.

There are many technical issues involved in protecting Internet-accessible resources. The technologies used to protect network assets include routers, firewalls, proxy servers, redundancy, and dispersion. When properly designed and implemented, security measures produce a positive feedback loop, where improvements in network security and robustness are self-reinforcing. Each improvement makes other improvements possible and more effective.

30.6.1 Inside/Outside.

Some visions of the future include a utopian world where everything is directly accessible from anywhere without effort, and with only beneficial results. The original Internet operated on this basis, until a number of incidents (including the 1988 Morris worm) caused people to rethink the perceived inherent peacefulness of the networked world. That rethinking has only accelerated as the total number of systems grows ever larger.

The architecture and design of protective measures for a network depend on differentiating inside trustable systems from outside untrustworthy systems. This is equally true for intranets, the Internet, or an extranet, so called to distinguish private interconnections of networks from the public Internet. Unfortunately, trust is not a black-and-white issue. A system may be trustworthy from one perspective and untrustworthy from another, thus complicating the security design. In addition, the vast majority of inappropriate computer use is thought to be done by those with legitimate access to some aspect of the system and its data. Moreover, it is not possible to be strong everywhere. There is a significant danger from trusted but compromised systems.

Basic connectivity configuration is one of those few areas that are purely technical, without a business risk element. One of the most obvious elements involves the tables implemented in routers connecting the enterprise to the public carrier–supplied IP connection. The table rules must prevent IP spoofing, which is the misrepresentation of IP packet origins. This is also true when originators are within the organization.

There are three basic rules for preventing IP spoofing applicable to all properly configured networks (a sketch implementing these checks follows the corollary below):

  1. Packets entering the network from the outside should never have originator addresses within the target network.
  2. Packets leaving a network and going to the public network must have originator addresses within the originating network.
  3. Packets leaving a network and going to the public network must not have destination addresses within the originating network.

An exception to these rules is in the use of stealth internal networks, those whose internal addresses correspond to legal external addresses.15

A corollary to these rules is that packets with originator or destination addresses in the local intranet address ranges16 or dynamic IP address ranges17 should never be permitted to enter or leave an internal network. Nested address spaces may deliberately create aliased address spaces.18
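
A minimal sketch of these ingress and egress checks, using Python's standard ipaddress module, appears below. The internal prefix is a hypothetical placeholder; production routers express the same tests in their own filter syntax.

    # Sketch: the three anti-spoofing rules as border-router checks.
    import ipaddress

    INTERNAL = ipaddress.ip_network("192.0.2.0/24")   # hypothetical prefix

    def valid_inbound(src, dst):
        # Rule 1: traffic arriving from outside must not claim an
        # inside source address.
        return ipaddress.ip_address(src) not in INTERNAL

    def valid_outbound(src, dst):
        # Rules 2 and 3: traffic leaving must have an inside source
        # and an outside destination.
        return (ipaddress.ip_address(src) in INTERNAL
                and ipaddress.ip_address(dst) not in INTERNAL)

    assert not valid_inbound("192.0.2.9", "192.0.2.10")  # spoofed source, dropped
    assert valid_outbound("192.0.2.9", "8.8.8.8")        # legitimate egress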

30.6.2 Hidden Subnets.

Firewalls funnel network traffic through one or more choke points, concentrating the security task in a small number of systems. The reasoning behind such concentration is that the likelihood of a security breach of the entire network rises rapidly with the number of independent access points.

Firewalls and proxy servers (see Chapter 26 in this Handbook) are effective only in topologies where the firewall filters all traffic between the protected systems and the less-trusted world outside its perimeter. If the protected systems can in any way be accessed without going through the firewall, then the firewall itself has been rendered irrelevant. Security audits often uncover systems that violate this policy; relying on administrative sanctions to preclude such holes in the security perimeter generally does not work. The rule of international diplomacy, “Trust but verify,” applies.

The simplest solution to this problem is the use of RFC 1918 addresses within protected networks.19 RFC 1918 provides ranges of IPv4 addresses guaranteed by the Internet Assigned Numbers Authority (IANA) never to occur in the normal, public Internet. The address ranges used for dynamic IP address assignment have similar properties.20

Filtering these addresses on both inbound and outbound data streams is straightforward and highly effective at stopping a wide range of attacks.21 Requiring the use of such addresses, and prohibiting the internal use of externally valid addresses, goes a long way toward preventing the use of unauthorized protocols and connections.

30.6.3 What Need Be Exposed?

A security implementation starts with an analysis of the enterprise's mission, needs, and requirements. All designs and implementations are a compromise between absolute security, achieved only in a powered-down or disconnected system, and total openness, in which a system is completely open to any and all access from the outside.

Although in most cases communications must be enabled between the enterprise and the outside world for normal business functions, total disconnection, known as an air gap, is sometimes both needed and appropriate between specific components. Industrial real-time control systems, life-critical systems, and systems with high requirements for confidentiality remain appropriate candidates for total air gaps.

Often, systems that do not need to receive information from outside sources must publish statistical or other information to less secure systems. This requirement can often be satisfied through the use of limited functionality links such as media exchange, or through tightly controlled one-way transfers. These mechanisms can be implemented with IP-related technologies or with more limited technologies, including KERMIT,22 UUCP, or vendor-specific solutions.

In other cases, restrictions reflect policies for permitted use and access rather than for protection against outside attack. For example, it is reasonable and appropriate for a public library to limit access to HTTP and related protocols while prohibiting access to such facilities as FTP and TELNET. There is a collateral issue of what provisions should be made for systems within the perimeter to connect to outside networks using various protocols, of which PPTP and L2TP are the best known. Whether these methods of accessing outside networks should be permitted is a management question of no small import, comparable in weight to the decision to suppress outside communications entirely.

From a security standpoint, tunnels to the outside world are a perfect storm, potentially permitting unlimited, nonmonitored communications to the outside. From the standpoint of enabling business operations, such access may be a practical necessity. Vendors, contractors, suppliers, and customers all need access to internal systems at their respective organizations, and such access will almost invariably require the use of a tunnel. Perhaps the best solution is to reverse the traditional inside/outside dichotomy, making the LAN an untrusted network that serves as a universal dial tone; access to internal corporate systems would then be secured separately using VPN technology.

The advent of SSL/HTTP-based tunnels presents another challenge. gotomypc.com offers such a service, and individuals have used it to circumvent firewalls and implement ad hoc remote access. The cost is nominal, often a small fraction of the monthly out-of-pocket outlay for an individual's daily commute to the office. Blocking connections to the IP addresses assigned to such a provider is a recommended, yet inherently flawed, prophylactic: blocking one or more such services does nothing to secure the network against an as-yet-unidentified service of this type, such as one hosted on a home server. Limiting the duration of SSL connections is also merely a speed bump to such schemes. These remain a challenge, as does the advent of directly usable SSL/HTTP tunnels communicating over the standard HTTP (TCP port 80) and HTTPS (TCP port 443) ports.

In these cases, firewalls are a reasonable solution, so long as their limitations are recognized. For example, firewalls do not prevent mobile code such as ActiveX and Java from working around prohibited functions. Indeed, several network attacks have been published using code supplied to browsers for local execution within the local network context. Such code, in combination with weak security practices on network infrastructure, can cause severe damage (see Chapter 17 in this Handbook).

30.6.4 Multiple Security Domains.

Many networks implement security solely at the point of entry, where the organization's network connects to the public Internet. Such a monolithic firewall is a less than effective choice for all but the simplest of small organizations. As a starting point, systems available to the general population should be outside of the internal security domain, in a no-man's land between the public Internet and the private internal net; such a zone is referred to as a demilitarized zone (DMZ). These systems should be afforded a degree of protection by sandwiching them between an outer firewall (protecting the DMZ from the public network) and an inner firewall (protecting the internal network from the public network and controlling communications between the publicly accessible servers located in the DMZ and the internal network). Alternatively, the DMZ may be attached to a separate port on a single firewall, using a different set of traffic management rules. Such a topology permits implementation of the differing security restrictions applicable to the two networks. Systems located within the DMZ should also be treated as suspect, as they are targets for compromise.

EXHIBIT 30.3 Sibling and Nested Security Domains

Although the industry as a whole agrees on the need for DMZ configurations, it is less appreciated that such restrictions also have a place within organizations. Different groups, departments, and functions within an organization have different security and access requirements. For example, in a financial services firm, the security requirements differ dramatically among departments. Three obvious examples of departments with different requirements are personnel, mergers and acquisitions, and research and development (see Exhibit 30.3).

The personnel department is the custodian of a wide range of sensitive information about the firm, its employees, and often outsiders who are either regularly on company premises or work regularly with the company on projects. Some of this information, such as residence addresses, pay levels, and license plates, is sensitive for personal or cultural reasons. Other information subjects the organization to legal or regulatory sanctions if it is improperly disclosed or used. In the United States, examples of sensitive data include social security numbers, age, sexual orientation, and the presence of human immunodeficiency virus (HIV) or other medical details.

The mergers and acquisitions department handles sensitive information of a different sort. Information about business negotiations or future plans is subject to strict confidentiality requirements. Furthermore, the disclosure of such information is subject to a variety of regulations on governmental and securities industry levels. Within the mergers and acquisitions department, access to information often must be on a need-to-know basis, both to protect the deal and to protect the firm from exposure to civil and criminal liability.

Some information in the research and development department is completely open to the public, whereas other information is restricted to differing degrees.

A full implementation of an adequate security environment requires protections that are not only logically different on a departmental basis but that also keep different departments protected from each other. It is therefore difficult, if not topologically impossible, for a single firewall, located at the connection to the outside world, to implement the required security measures.

Securing systems in isolated logical areas is an example of necessary distrust: a matter of ensuring that interactions between third-party systems and the outside world are permitted only to the extent expected. As an example, consider the straightforward situation at Hypothetical Brokerage, which uses two trading networks, Omega and Gamma. At first glance, it would seem acceptable to place Omega's and Gamma's network gateways on the usual DMZ, together with Hypothetical's Web servers.

However, this grants a high degree of trust to Omega and Gamma and all of their staff, suppliers, and contractors. The most important operative question is whether there is a credible hazard.

Either of the two gateways is well situated to:

  • Monitor the communications traffic to and from Hypothetical's Web servers
  • Monitor the traffic between Hypothetical and the other, competing network
  • Attack the other gateway
  • Disrupt communications to and from the other gateway
  • Attack Hypothetical's network

Network providers also represent an attractive attack option. A single break-in to a network provider–supplied system component has the effect of compromising large numbers of end user sites. There is ample history of private (PBX) and public (carrier-owned) switches being preferred targets.23

The solution (see Exhibit 30.4) is to isolate the third-party systems in separate DMZs, with the traffic between each of the DMZs and the rest of the network scrupulously checked as to transmission control protocol and user datagram protocol (TCP/UDP), port number, and source and destination addresses, to ensure that all traffic is authorized. One method is to use a single firewall, with multiple local area network (LAN) ports, each with different filtering rules, to recast Hypothetical's original single DMZ into disjoint, protected, DMZs.
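
In miniature, such scrupulous checking amounts to an explicit allowlist keyed on source, destination, protocol, and port, with everything unlisted refused. The sketch below uses hypothetical zone names and port numbers for the Hypothetical Brokerage example.

    # Toy illustration: per-DMZ traffic rules. Each rule names source
    # zone, destination zone, protocol, and port; anything not listed
    # is refused, so the Omega and Gamma gateways cannot reach each
    # other or anything beyond their designated feeds.
    RULES = {
        ("omega_dmz", "internal", "tcp", 7001),   # Omega trading feed only
        ("gamma_dmz", "internal", "tcp", 7002),   # Gamma trading feed only
        ("web_dmz",   "internal", "tcp", 8443),   # Web tier to application tier
    }

    def permitted(src_zone, dst_zone, proto, port):
        """Default deny between security compartments."""
        return (src_zone, dst_zone, proto, port) in RULES

    assert permitted("omega_dmz", "internal", "tcp", 7001)
    assert not permitted("omega_dmz", "gamma_dmz", "tcp", 7002)  # gateways isolated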

30.6.5 Compartmentalization.

Breaking the network into separate security compartments reduces the potential for total network meltdown. Limiting the potential damage of an incident is an important step in resolving the problems.

The same rule applies to the DMZs. For example, in a financial trading or manufacturing enterprise, it is not uncommon to have gateways representing access points to trading and partner networks. Where one places these friendly systems is problematic. Many organizations have chosen to place these systems within their regular DMZ.

Sites have belatedly discovered that such gateways have, on occasion, acted as routers, taking over that function from the intended routers. In other cases, the gateways have experienced malfunctions and impaired the functioning of the rest of the network (or DMZ). As always, the only solutions certain to work are shutdown or isolation.

Compartmentalization also prevents accidents from cascading. A failure in a single gateway is not likely to propagate throughout the network, because the unexpected traffic will be stopped by the firewall isolating the gateway from the network. Such an event can be made to trigger a firewall's attack alarms.

EXHIBIT 30.4 Omega and Gamma Servers in Separate DMZs from Hypothetical's Server

The network problem is not limited to externally provided nodes. An errant system operating within a noncompartmented network, located as it is within the outer security perimeter, can wreak havoc throughout the entire corporation. Constructing the network as a series of nested and peer-related security domains, each protected by appropriate firewalls, localizes the impact of the inevitable incidents, whereas an uncompartmented network permits the contagion to spread throughout the organization, unchecked. The larger the network, the more expensive an incident is. It is quite conceivable that the entire budget for compartmentalizing a corporate network will be less expensive than the single hour of downtime resulting from the first errant system.

The ready availability of portable storage devices, including USB memory devices and external hard drives, makes compartmentalization an even more serious issue.

30.6.6 Need to Access.

Sometimes it is easy to determine who requires access, to which resources, and to which information. However, the question of access control often involves painful choices, with many nuances and subtleties. Legal and contractual responsibilities further complicate the question. For example, lack of access may be a benefit to persons alleged to have misused information.

Physical and logical access controls (see Chapters 27, 28, and 29 in this Handbook) need to be implemented for Internet-accessible systems as for any other sensitive and critical systems. Controls for such systems must be enforced and respected by all members of the organization. It is important that personnel whose access to the network and security infrastructure is restricted understand the reasons for the security rules, and that they not take measures that circumvent those rules. The integrity of an organization's firewalls and network infrastructure is only as good as the physical and logical security of the personnel, equipment, infrastructure, and systems comprising them. Regular auditing of both physical and logical access to infrastructure assets is critical.

The need to maintain information security on communications within the organization argues for the extensive use of security technologies, even when the data packets are never expected to leave the premises. It is sound planning, even within the enterprise, to require applications with privacy or confidentiality requirements to make use of privacy infrastructure such as the secure sockets layer (SSL) for Web-based applications or tunneling such as layer 2 tunneling protocol24 and point-to-point tunneling protocol.25 This helps ensure that sensitive information is limited in distribution.

Needless to say, these approaches should employ high-grade encryption and properly signed X.509 certificates from well-accepted Certification Authorities. Self-signed, expired, and not-generally-accepted certificates should not be used.
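
Modern TLS libraries enforce these requirements by default when used correctly. The sketch below uses Python's standard ssl module, whose default context rejects self-signed, expired, and otherwise untrusted certificates; the internal host name is hypothetical.

    # Sketch: connect to an internal service and insist on a valid,
    # properly signed certificate. ssl.create_default_context()
    # verifies the certificate chain and the host name by default.
    import socket
    import ssl

    HOST = "internal-hr.example.com"   # hypothetical internal host

    context = ssl.create_default_context()

    with socket.create_connection((HOST, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            # Raises ssl.SSLCertVerificationError for self-signed,
            # expired, or untrusted certificates.
            print("verified certificate subject:",
                  tls.getpeercert()["subject"])

The important design point is that verification failures raise errors rather than warnings; code that silently disables verification defeats the purpose of the certificate.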

30.6.7 Accountability.

People often talk about impenetrable systems. However, despite the castle-and-moat analogies used in many discussions of security, no perimeter is likely to be perfect. Given the likelihood of successful attacks, security personnel must use both technical and managerial measures for effective response.

When securing infrastructure, priority should be given to protective measures that ensure accountability for actions. Just as it is desirable to prevent inappropriate activity, it is even more important to ensure that activities can be accounted for.

For example, there are many ways to carry out denial-of-service (DoS) attacks; the most troublesome are those that are completely legal. Although some of these attacks, such as distributed denial of service, are belligerent or politically and ideologically motivated, involving remote-control zombie programs and botnets (described in Chapters 17 and 18), many accidental DoS events occur without malice in the course of software development. For example, two of the most famous worms that led to DoS, the Morris worm and the WANK worm, were detected as a result of implementation errors in their replication mechanisms: both worms proliferated extremely quickly, effectively producing unintended DoS attacks and prompting their own detection.

When a compromised machine is analyzed forensically, the presence of the attack may remain undetected. This is particularly true when the attack involves designer malware not in general circulation (see Chapter 16 in this Handbook).

It is important to analyze security breaches to distinguish among attacks, accidents, and experiments. It is better for weaknesses to be uncovered, even accidentally, within an organization than it is to deal with a truly belligerent security breach. Policies and practices should therefore encourage employees to report accidents rather than try to hide them. As for false alarms caused by overenthusiastic security neophytes, management should avoid punishing those who report illusory breaches or attacks. Accountability provides the raw material to determine what actually happened. The resulting information is the critical underpinning for analysis and education, thus enabling the enterprise to evolve to higher levels of security and integrity.

30.6.8 Read-Only File Security.

Many sites allow downloads, usually via FTP, of large numbers of files. These files may include copies of forms, manuals, instructions, maps, and service guides. If such file serving is provided, then it is critical to ensure that (a sketch of one supporting integrity check follows the list):

  • The servers supporting the FTP service are secure.
  • The contents of the publicly accessible file store are read-only and subject to change control.
  • The entire contents of the public file store can be restored quickly in the event of a possible compromise.
  • There is a designated party who is responsible for maintaining and protecting the public file service.
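
The change-control and rapid-restore requirements can be supported with a hash manifest of the public file store: any file added, removed, or modified since the manifest was approved is flagged. A minimal sketch, assuming a hypothetical file store path:

    # Sketch: detect unauthorized changes to a read-only file store by
    # comparing current file hashes against an approved manifest.
    import hashlib
    import pathlib

    FILE_STORE = pathlib.Path("/srv/ftp/public")   # hypothetical location

    def digest(path):
        """SHA-256 hash of one file's contents."""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def build_manifest():
        """Map every file under the store to its current hash."""
        return {str(p): digest(p) for p in FILE_STORE.rglob("*") if p.is_file()}

    def verify(approved_manifest):
        """Return paths added, removed, or modified since approval."""
        current = build_manifest()
        all_paths = approved_manifest.keys() | current.keys()
        return sorted(p for p in all_paths
                      if approved_manifest.get(p) != current.get(p))

Run periodically, and after any suspected compromise, such a check both detects tampering and identifies exactly which files must be restored.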

30.6.9 Going Off-Line.

The responsiveness required to a problem with an Internet-connected system is directly related to the out-of-service costs. In some organizations, this may be the cost of lost business; in other organizations, the cost may be that of lost professional time, of damaged public relations, and of lowered morale. In any event, the larger the proportion of the organization (or its customers) affected by the problem, the higher the cost, and the greater the urgency to effect repairs.

Today's interconnected world makes Internet disconnection a truly painful option for a network or security manager. Although the cost of disconnection can run into hundreds of thousands of dollars in lost business and productivity, disconnection in certain situations is both necessary and appropriate. At times, disconnection presents the lowest-cost, most effective way to protect users, systems, and the public. For example, on May 4, 2000, during the epidemic of the Microsoft Outlook–exploiting “I Love You” virus attack, network managers at Ford Motor Company26 disconnected Ford's network from the outside world to limit the entry of contaminated e-mail into Ford's systems and to prevent Ford's systems from spreading the contagion. The response achieved its goals. It was not painless, but the alternatives were more painful.

The primary issue surrounding disconnection is what can be disconnected, on whose authority. According to well-known corollaries of Murphy's Law, such important incidents require short response times, inevitably on occasions when senior managers are not available. The best way to provide for this contingency is to furnish the personnel with authority to defend the systems within guidelines and a guarantee that actions within the guidelines will be immune from reprisal. If an organization chooses, as some do, not to authorize such actions, they also forswear the benefits.

30.6.10 Auditing.

In any organization, facilities usage should be monitored and analyzed, including network activity. Because people interested in attacking or subverting the enterprise's networks will attack when they wish, such monitoring and analysis must be part of a continuing process. Ongoing observation and review should include:

  • Physical communications infrastructure
  • Firewalls, router tables, and filtering rules
  • Host security
  • File security
  • Traffic patterns on backbones, DMZ, and other network segments
  • Physical security of systems and communications infrastructure

These areas are both synergistic and independent. They are synergistic in that the combination of all of them is mutually reinforcing in promoting a secure computing environment. They are independent because any one of them may represent a weak point in an otherwise secure environment. For more information on auditing, see Chapter 54 in this Handbook.
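
As one small illustration of reviewing traffic patterns on an ongoing basis, the sketch below aggregates a hypothetical firewall log (assumed layout: timestamp, source, destination, port, bytes, whitespace-separated; an assumption for illustration, not a standard format) and flags hosts whose outbound volume is far above the norm, which is exactly the kind of anomaly continuing review is meant to surface.

    import statistics
    from collections import defaultdict

    def flag_heavy_talkers(log_lines, sigma=3.0):
        """Total outbound bytes per source host; flag hosts more than
        `sigma` standard deviations above the mean volume."""
        volume = defaultdict(int)
        for line in log_lines:
            fields = line.split()       # assumed: timestamp src dst port bytes
            if len(fields) != 5:
                continue                # skip malformed records
            src, nbytes = fields[1], int(fields[4])
            volume[src] += nbytes

        if len(volume) < 2:
            return []
        mean = statistics.mean(volume.values())
        stdev = statistics.pstdev(volume.values())
        if stdev == 0:
            return []
        return [(src, total) for src, total in sorted(volume.items())
                if (total - mean) / stdev > sigma]

    # Example use with a hypothetical log file:
    # with open("/var/log/fw/outbound.log") as f:
    #     for host, total in flag_heavy_talkers(f):
    #         print(f"Review {host}: {total} bytes outbound")

The statistical threshold is deliberately crude; the point is that the review is automated and repeated, not that any single heuristic is sufficient.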

30.6.11 Emerging Technologies.

Emerging technologies continue to recast the challenges in the Internet technology arena. One such challenge is emerging in the use of SSL-based firewalls.

Using HTTPS as a basis for building encrypted tunnels appears at first impression to be a tremendously enabling technology. Most firewalls permit the initiation of connections from within the protected zone on TCP port 443, to allow connection to the wide variety of Web sites that need to secure information, as noted elsewhere in this chapter. For privacy and security, such traffic is opaque to monitoring.

The use of HTTPS as the basis for tunneling also provides a tailor-made technique for abuse. For example, such technologies make it possible for a compromised desktop system to monitor the network, extracting data of interest, and to transmit the resulting data securely to an outside location through the firewall. The opaque nature of the SSL-based connection makes it impossible to scan the outgoing data.

This technology may well be the death knell of shared-connectivity LANs. It is ever more important to secure the network infrastructure against attacks that allow a rogue machine to accumulate data by monitoring network traffic.

Intrusion detection systems may also need to be extended to identify HTTPS connections with significant traffic volumes, as possible signs of compromise or abuse. Approaches that treat the entire network as inherently compromised, using VPN technology from the desktop to the server to secure the communications infrastructure, may be needed to protect against such attacks and malfeasance.
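
A minimal sketch of the volume-based screening just described, under the assumption that per-flow records (source, destination, port, duration, bytes sent) are available from a flow collector: it flags TCP port 443 flows that are both long-lived and heavy, since ordinary browser sessions rarely sustain bulk outbound transfer for hours. The thresholds are illustrative assumptions, not recommended values.

    from typing import Iterable, NamedTuple

    class Flow(NamedTuple):
        src: str
        dst: str
        dport: int
        duration_s: float
        bytes_out: int

    def suspicious_https_flows(flows: Iterable[Flow],
                               min_duration_s: float = 3600,
                               min_bytes_out: int = 100 * 2**20):
        """Yield port-443 flows that are both long-lived and high-volume;
        these deserve review as possible tunnels or exfiltration."""
        for f in flows:
            if (f.dport == 443
                    and f.duration_s >= min_duration_s
                    and f.bytes_out >= min_bytes_out):
                yield f

    # A ninety-minute, 500 MB upload to port 443 would be flagged;
    # a brief 40 KB HTTPS page fetch would not.
    sample = [Flow("10.1.2.3", "203.0.113.9", 443, 5400, 500 * 2**20),
              Flow("10.1.2.4", "198.51.100.7", 443, 12, 40_000)]
    for f in suspicious_https_flows(sample):
        print(f"Review {f.src} -> {f.dst}: {f.bytes_out} bytes in {f.duration_s}s")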

30.7 ETHICAL AND LEGAL ISSUES.

Managing a Web site poses a variety of ethical and legal issues, mostly surrounding the information that the site accumulates from processing transactions, performance tracking, problem identification, and auditing. Policies need to be promulgated to ensure that information is not used for unauthorized purposes, and the staff running such systems needs to be aware of, and conform to, those policies. Many of these ethical issues have civil and criminal considerations as well. The topic of information privacy is addressed more completely in Chapter 69 of this Handbook. For more information on security policy standards, development, and implementation, see Chapters 44, 45, 49, 50, and 51.

30.7.1 Liabilities.

The liability environment surrounding Web servers is too new for litigation to have run its course. However, there is no reason to believe that the myriad laws governing the disclosure of personal information will not be fully enforced in the context of Web sites.

Web sites increasingly handle sensitive information. Financial industry sites routinely handle bank and securities transactions. E-mail services handle large volumes of consumer traffic, and more and more insurance companies, employee benefits departments, and others are using Web sites to deal with extremely sensitive information covered by a variety of regulations.

Part of the rationale for recommending careful attention to the control of Web servers and their supporting systems is the need to create an environment of due diligence, where an organization can show that it took reasonable steps to ensure the integrity, safety, and confidentiality of information.

30.7.2 Customer Monitoring, Privacy, and Disclosure.

Customer monitoring is inherently a sensitive subject. The ability to accumulate detailed information about spending patterns, for example, is subject to abuse. A valid use of this information helps to pinpoint sales offerings that a customer would find relevant while eliminating contacts that would be of no interest. Used unethically or even illegally, such information could be used to assemble a dossier that could be the subject of embarrassing disclosure, of insurance refusal, and even of job termination. The overall problem predates the Web. In fact, more than 20 years ago a major network news segment reconstructed someone's life using nothing more than the information contained in their canceled checks, supplemented with publicly available information. The resulting analysis was surprisingly detailed. A similar experiment was reported in 1999, using the Web.27

Organizations sometimes violate the most basic security practices for protecting online information; for example, a single page of a billing statement may contain all of the information required to access an account improperly. There have been repeated incidents (e.g., CDUniverse, CreditCard.com) in which extremely sensitive information was stored unencrypted on Web-accessible systems. These incidents recur with regularity and are almost always the result of storing large amounts of sensitive client information on systems that are Internet accessible. There is little question that it is inappropriate to store customer credit card numbers, and similarly sensitive data, on exposed systems.
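
If sensitive fields must be retained at all, they should at least never sit in cleartext on Internet-facing systems. A minimal sketch of field-level encryption at rest, assuming the third-party Python cryptography package and a key generated and held on a protected system separate from the Web server:

    from cryptography.fernet import Fernet  # assumes: pip install cryptography

    def make_key() -> bytes:
        """Generate once; store in a protected key-management facility,
        never on the Web-accessible system."""
        return Fernet.generate_key()

    def encrypt_field(key: bytes, plaintext: str) -> bytes:
        """Encrypt a sensitive field (e.g., a card number) before storage."""
        return Fernet(key).encrypt(plaintext.encode())

    def decrypt_field(key: bytes, token: bytes) -> str:
        """Decrypt only on the protected back-office system, never on
        the exposed Web tier."""
        return Fernet(key).decrypt(token).decode()

    key = make_key()
    token = encrypt_field(key, "4111111111111111")  # industry test card number
    assert decrypt_field(key, token) == "4111111111111111"

Encryption at rest does not remove the obligation to minimize what is stored in the first place; it only narrows the damage when an exposed system is breached.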

The security and integrity of systems holding customer order information is critical. The disclosure of customer ordering information is a significant privacy hazard. Failure to protect customer banking information (e.g., credit card numbers and expiration dates) can be extremely costly, both in economic terms and in damaged customer relations. Credit card merchant agreements also may subject the enterprise to additional liabilities and obligations.

A Web site, by its monitoring of customer activity, will accumulate a collection of sensitive material. It may be presumed that the information is useful only for the site, and is inherently valid, but there are a variety of hazards here, most of which are not obvious:

  • Information appearing to originate from a single source may indeed be a compilation of data from multiple sources. Shared computers, firewalls, and proxy servers can give rise to this phenomenon.
  • Casual correlations may arise between otherwise unrelated items. For example, it is a common and acceptable business practice for one member of a group of traveling colleagues to pay all of the group's expenses. Interpreted incorrectly, such an event could be misconstrued as evidence of illicit or inappropriate behavior.

The problem with casual associations is the damage they can cause. In the national security area, the use of casual associations to gather intelligence is a useful tool, albeit one that is recognized to have serious limitations. In other situations, it is an extremely dangerous tool, with significant potential to damage individuals and businesses. An example:

A California-based married businessman flies to New York City. When he arrives, he checks into a major hotel. A short time later, he makes a telephone call, and shortly a young woman goes up to his room and is greeted warmly. Apparently, a compromising situation. The businessman is old enough to be the woman's father; in fact, he is her father. That single fact changes apparently inappropriate behavior into a harmless family get-together. Peter Lewis28 of The New York Times correctly notes that this single fact, easily overlooked, dramatically changes the import of the information.

The danger with correlations and customer monitoring is that there is often no control on the expansive use of the conclusions generated. The information often has some degree of validity, but it is both easy to overstep the bounds of validity, and difficult to correct the damage, once damage has been done.

30.7.3 Litigation.

The increasing pervasiveness of the Web has led to increasing volumes of related litigation. In this chapter, the emphasis is on litigation or regulatory investigation involving commercial or consumer transactions, and on the issues surrounding criminal prosecution for criminal activities involving a Web site. More detailed information on this subject appears in Chapter 11 of this Handbook.

Civil. Web site logs and records can become involved in litigation in many ways. In an increasing number of cases, neither the site owner nor the operator is a party to the action; the records merely document transactions involved in a dispute. The dispute may be a general commercial matter, a personnel matter, or even a domestic relations matter involving divorce.

It is important that records handling and retention policies be developed in concert with counsel. The firm's management and counsel also must determine what the policy is to be with regard to subpoenas and related requests. The counsel also will determine what materials are subject to which procedures and regulations. For example, in a case of an e-mail provider (e.g., hotmail.com), material may be subject to the Electronic Communications Privacy Act of 1986 (18 U.S.C.A. § 2510 et seq.). Other material may be subject to different legal or contractual obligations.

Regulatory. A wide range of rules is enforced (and often promulgated) by various regulatory agencies. In the United States, such agencies exist at the federal, state, regional, and local level. (Outside the United States, many nations have agencies only at the national and provincial levels.) Many of these agencies have the authority to request records and conduct various types of investigations. For many organizations, such investigations are significantly more likely than civil or criminal investigations. Regulatory agencies also may impose record-keeping and retention requirements on companies within their jurisdiction.

Criminal. Criminal prosecutions receive more attention than the preceding two categories of investigation yet are much less frequent. Criminal matters are expensive to investigate and prosecute and must meet a higher standard of proof than regulatory or civil actions. Relatively few computer-related incidents reach the stage of a criminal prosecution although, because of their seriousness, such prosecutions are the most visible.

Logs, Evidence, and Recording What Happened. The key to dealing effectively with any legal proceeding relating to a Web site is the maintenance of accurate, complete records in a secure manner. This is a complex topic, some details of which are covered in Chapter 55 of this Handbook.

In the context of protecting a Web site, records and logs of activity should be offloaded to external media and preserved for possible later use, as determined by the site's policy and its legal obligations. Once offloaded, these records should be stored using the strict procedures suitable for evidence in a criminal matter. The advent of inexpensive CD-ROM and DVD writers greatly simplifies the physical issues of securely storing such media.
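
One way to make offloaded records tamper-evident is sketched below, assuming a signing key that is held apart from the Web servers: each archived log is hashed, and the hash and timestamp are authenticated with an HMAC, so that later modification of either the log or the manifest entry is detectable.

    import hashlib
    import hmac
    import json
    import time
    from pathlib import Path

    def seal_log(log_path: Path, key: bytes) -> dict:
        """Bind the log's content hash to a timestamp, authenticated
        with an HMAC over the whole entry."""
        entry = {
            "file": log_path.name,
            "sha256": hashlib.sha256(log_path.read_bytes()).hexdigest(),
            "sealed_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hmac"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
        return entry

    def verify_seal(log_path: Path, entry: dict, key: bytes) -> bool:
        """Recompute both the HMAC and the content hash; any change to
        the log or to the manifest entry causes verification to fail."""
        claimed = dict(entry)
        mac = claimed.pop("hmac")
        payload = json.dumps(claimed, sort_keys=True).encode()
        expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(mac, expected):
            return False
        return claimed["sha256"] == hashlib.sha256(log_path.read_bytes()).hexdigest()

Digital seals of this kind complement, and do not replace, the physical chain-of-custody measures described below.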

Archival records not intended for evidentiary use also should be stored off-line, either physically, or at least on systems that are not accessible from the public Internet.

The media should be stored in signed, sealed containers in an inventoried, secure storage facility with controlled access. In the event of an investigation or other problem, these records will be carefully examined for possible modification or misuse; for this reason, the copies archived for records purposes should not be the copies normally used for system recovery.

30.7.4 Application Service Providers.

In recent years, it has become increasingly common to outsource entire application services. External organizations providing such services are known as application service providers, more commonly referred to as ASPs. More recently, this trend has diversified with the emergence of software as a service. Both raise similar security and integrity concerns: in both cases, significant applications and data are stored outside of the organization, with the organization retaining responsibility for the integrity and confidentiality of those records.

Conceptually, ASPs are not new. Many organizations have historically outsourced payroll processing and other applications. Theoretically, the ASP is responsible for the entire application. Often, paying a package price for the application seems attractive: no maintenance charges, no depreciation costs, lower personnel costs, latest technology, and moderately priced upgrades. However, just as a ship's captain retains responsibility for the safety of his ship, despite the presence of a harbor pilot, an enterprise must not forget that if something goes wrong, the enterprise, not the ASP, will likely bear the full consequences. In short, the ASP must be required to answer the same questions, and held to the same standards, as an inside IT organization regarding privacy, security, and integrity issues.

The security and integrity issues surrounding the use of ASPs are the same as those surrounding the use of an internal corporate service. Questions of privacy, integrity, and reliability remain relevant, but as with any form of outsourcing, there are additional questions. For example, is stored information commingled with that of other firms, perhaps competitors? Is information stored encrypted? What backup provisions exist? Is there off-site storage of backups? What connectivity does the ASP have? Where are the ASP's servers, and are they dispersed? What are the personnel practices of the ASP? Does the ASP itself own and operate its facilities, or does it in turn contract out to other providers?

The bottom line is that although outsourcing promises speedy implementation, lower personnel costs, and economies of scale, the customer organization will suffer considerable harm if there is a problem with the ASP, whether of availability, results, or confidentiality.

Finally, in the end analysis, the organization retains liability for its operations and its data. While it might be comforting to assume that the legal system will accept “My provider did it” as an excuse for lost, compromised, or untrustworthy data, that remains an untested theory. Recent experiences with major manufacturers, subcontractors, and tainted products suggest that outsourcing risk remains a serious potential liability.

Legal process requests against ASPs for third-party data, whether criminal warrants or civil subpoenas, represent another hazard. If data belonging to multiple customers is commingled on the ASP's systems, care is required to prevent the unauthorized and inappropriate disclosure of data belonging to parties other than the one that is the subject of the request.

None of the issues cited justifies a negative finding on ASPs as a group. They should, however, serve as reminders that each of the issues involved in keeping a Web presence secure, available, and effective applies no less to an ASP than to an in-house IT organization.

For information about outsourcing security, see Chapter 68 in this Handbook.

30.8 SUMMARY.

Availability is the cornerstone of all Web-related strategies. Throughout this chapter, it has been noted that redundant hosting and routing are necessary to ensure 24/7/365 availability. It was also noted that although some service providers offer various guarantees, these guarantees almost never provide adequate compensation for consequential damage done to the enterprise. In the end, an enterprise's only protection is to take adequate measures to ensure its own security and integrity.

Operating guidelines and authority are also critical to ensuring the availability of Web resources on a 24/7/365 basis. Systems must be architected and implemented to enhance availability on an overall level. Operating personnel must have the freedom and authority to take actions that they perceive as necessary without fear of reprisal if the procedures do not produce the desired outcome.

Privacy and integrity of information exchanged with the Web site is also important. The implementation and operation of the site and its components must be in compliance with the appropriate laws, regulations, and obligations of the site owner, in addition to being in conformance with the expectations of the user community.

Protecting Internet and Web assets is a multifaceted task encompassing many disciplines. It is an area that requires attention at all levels, beginning at the highest levels of business strategy and descending successively to ever more detailed implementation and technology issues.

It is also an area where the smallest detail can have catastrophic impact. Recent events have shown that the lessons of communications history apply. Even a small error in network architecture, key management, or implementation can snowball until it is an issue that can be felt in the corporate boardroom.

30.9 FURTHER READING

Alderman, E., and C. Kennedy. The Right to Privacy. New York: Alfred A. Knopf, 1995. Well-written work concerning legal issues relating to privacy, particularly in the United States.

Ashley, B. K. “The United States Is Vulnerable to Cyberterrorism,” SIGNAL Magazine (March 2004); retrieved from www.afcea.org/signal/articles/templates/SIGNAL_Article_Template.asp?articleid=32&zoneid=10 on March 31, 2008.

Barnes, S. “In Ice-Coated Arkansas and Oklahoma, Chaos Rules.” New York Times, December 28, 2000.

Bernstein, D. “We've Been Hacked.” Inc Technology, No. 3 (2000).

Bernstein, T., A.B. Bhimani, E. Schultz, and C. A. Siegel. Internet Security for Business. New York: John Wiley & Sons, 1996.

Bowman, D., and AFP. “Internet Problems Continue with Fourth Cable Break,” Arabian Business, February 3, 2008; retrieved from www.arabianbusiness.com/510132-internet-problems-continue-with-fourth-cable-break on March 31, 2008.

CERT. “sadmind/IIS Worm.” May 8, 2001; retrieved from www.cert.org/advisories/CA-2001-11.html on May 31, 2008.

Cheshire, S., B. Aboba, and E. Guttman. “RFC 3927—Dynamic Configuration of IPv4 Link-Local Addresses,” 2005; retrieved from www.ietf.org/rfc/rfc3927.txt on February 18, 2008.

Children's Online Privacy Protection Act of 1998; 15 U.S.C. § 6501 et seq.

CNN. “Internet Failure Hits Two Continents,” January 31, 2008, retrieved from www.cnn.com/2008/WORLD/meast/01/31/dubai.outage on February 19, 2008.

deGroot, G. J., D. Karrenberg, B. Moskowitz, Y. Rekhter, and E. Lear. “RFC 1918—Address Allocation for Private Internets” (obsoletes RFC 1597). February 1996; retrieved from www.ietf.org/rfc/rfc1918.txt on May 27, 2008.

Derfler, Frank J., and Jay Munro. “Home Appliances Hit the Net.” PC Magazine, January 2, 2001; retrieved from www.pcmag.com/article2/0,2817,110893,00.asp on May 28, 2008.

Electronic Communications Privacy Act of 1986; 18 U.S.C.A. § 2510 et seq.

“Employees in the Twin Towers,” New York Times, September 16, 2001, p. 10.

Ford, W., and M. S. Baum. Secure Electronic Commerce: Building the Infrastructure for Digital Signatures and Encryption. Upper Saddle River, NJ: Prentice Hall, 1997.

Fraser, B. “RFC2196—Site Security Handbook,” September 1997. Retrieved from www.ietf.org/rfc/rfc2196.txt on May 28, 2008.

Garfinkel, S., and G. Spafford. Web Security and Commerce. Sebastopol, CA: O'Reilly & Associates, 1997.

Gezelter, R. “System Security—The Forgotten Issues.” Conference Session, US DE-CUS Symposium, Las Vegas, Nevada, Fall 1990; retrievable via www.rlgsc.com/presentations.html.

Gezelter, R. “Internet Security.” In The Computer Security Handbook, 3rd ed. New York: John Wiley & Sons, 1995.

Gezelter, R. “Plain Talk Management Needs to Hear from Its Technical Support Staff.” Commerce in Cyberspace: Expanding Your Enterprise via the Internet, The Conference Board (February 1996); retrieved from www.rlgsc.com/tcb/plaintalk.html on May 28, 2008.

Gezelter, R. “Security Prosecution: Records and Evidence.” DECUS Magazine (Spring 1996).

Gezelter, R. “Internet Security.” In The Computer Security Handbook, 3rd ed. supplement. New York: John Wiley & Sons, 1997.

Ghosh, A. K. E-Commerce Security: Weak Links, Best Defenses. New York: John Wiley & Sons, 1998.

Glater, J. “Hemming in the World Wide Web,” New York Times, January 7, 2001.

Guernsey, L. “Keeping the Life Lines Open,” New York Times, September 20, 2001.

Heyman, K. “A New Virtual Private Network for Today's Mobile World,” IEEE Computer (December 2007): 17–19.

Internet standards: www.ietf.org

Janofsky, M. “Police Seek Record of Bookstore Patrons in Bid for Drug Charge,” New York Times, November 24, 2000.

Kahn, D. The Codebreakers. New York: Macmillan, 1970.

Kahn, D. Seizing the Enigma. Boston: Houghton Mifflin, 1991.

Khare, R., ed. Web Security: A Matter of Trust. Sebastopol, CA: O'Reilly & Associates, 1998.

Klensin, J., ed. “RFC 2821—Simple Mail Transfer Protocol” (obsoletes RFC 821, RFC 974, and RFC 1869), May 2001; retrieved from www.ietf.org/rfc/rfc2821.txt on May 28, 2008.

Knight, W. “Browser-Based Network Attack Discovered,” New Scientist Tech, July 31, 2006; retrieved from http://technology.newscientist.com/article.ns?id=dn9645 on February 18, 2008.

Layton, E. And I Was There—Pearl Harbor and Midway—Breaking the Secrets. New York: William Morrow, 1985.

Lichtblau, E. “F.B.I. Received Unauthorized E-Mail Access,” New York Times, February 17, 2008; retrieved from www.nytimes.com/2008/02/17/washington/17fisa.html on February 17, 2008.

Littman, J. The Fugitive Game—Online with Kevin Mitnick. Boston: Little, Brown, 1996.

Liu, C., and P. Albitz. DNS and BIND, 5th ed. Sebastopol, CA: O'Reilly & Associates, 2006.

Llosa, M. V. “Crossing the Moral Boundary” (op-ed.), New York Times, January 7, 2001.

Mockapetris, P. V. “RFC1034—Domain Names: Concepts and Facilities,” November 1, 1987; retrieved from www.ietf.org/rfc/rfc1034.txt on May 28, 2008.

Mockapetris, P. V. “RFC1035—Domain Names: Implementation and Specification,” November 1, 1987; retrieved from www.ietf.org/rfc/rfc1035.txt on May 28, 2008.

National Computer Network Emergency Response Technical Team/Coordination Center of China, “2006 Annual Report by CNCERT/CC”; retrieved from www.cert.org/cn/english_Web/document/2006AnnualReportByCNCERT.pdf on February 20, 2008.

National Infrastructure Protection Center “Cyber Protests: The Threat to U.S. Information Infrastructure” (October 2001); retrieved from www.au.af.mil/au/awc/awcgate/nipc/cyberprotests.html on February 19, 2008.

Overbye, Dennis. “Engineers Tackle Havoc Underground,” New York Times, September 18, 2001.

Postel, J., and J.K. Reynolds. “RFC 854—Telnet Protocol Specification” (May 1983); retrieved from http://www.ietf.org/rfc/rfc0854.txt on May 28, 2008.

Postel, J., and J.K. Reynolds. “RFC 959—File Transfer Protocol” (October 1985); retrieved from http://www.ietf.org/rfc/rfc0959.txt on May 28, 2008.

Preatoni, Roberto. “Digital Implication of the Attack to Lebanon: It's Cyber-war,” Zone-H.org, August 1, 2006; retrieved from www.zone-h.org/index2.php?option=com_content&task=view&id=13937 on February 19, 2008.

Pummill, T., and B. Manning. “RFC 1878—Variable Length Subnet Table for IPv4” (December 1995); retrieved from http://www.ietf.org/rfc/rfc1878.txt on May 28, 2008.

Schwartau, W. Information Warfare—Chaos on the Information Superhighway. New York: Avalon, 1994.

Security-related information: www.cert.org

Shimomura, T., and J. Markoff. Takedown. New York: Hyperion, 1996.

Simmons, M. The Credit Card Catastrophe. Fort Lee, NJ: Barricade Books, 1995.

Slatalla, M., and J. Quittner. Masters of Deception. New York: Harper Collins, 1995.

Stoll, C. The Cuckoo's Egg. New York: Bantam Doubleday, 1989.

Talbot, M. “The Devil in the Nursery,” New York Times Magazine, January 7, 2001.

Tripathy, D. “Internet Chaos Nears Its End,” Arabian Business, retrieved from www.arabianbusiness.com/510649 on February 19, 2008.

Weinberg, G. The Psychology of Computer Programming. New York: Van Nostrand Reinhold, 1971.

Weizenbaum, J. Computer Power and Human Reason. San Francisco: W.H. Freeman, 1976.

Wright, B. The Law of Electronic Commerce: EDI, E-Mail and Internet—Technology, Proof and Liability, 2nd ed. Boston: Little, Brown, 1996.

Wright, P. Spycatcher. Viking Penguin, 1987.

Zoellick, B. “Wide Use of Electronic Signatures Awaits Market Decisions About the Risks and Benefits,” New York State Bar Association Journal (November/December 2000).

30.10 NOTES

1. J. Swartz, “TJX Data Breach May Involve 94 Million Credit Cards,” USA Today, October 24, 2007; retrieved from www.usatoday.com/money/industries/technology/2007-10-24-tjx-security-breach_N.htm on March 31, 2008.

2. J. Vijayan, “TJX Agrees to Pay $40.9M to Visa Card Issuers in Breach Case,” Computerworld, November 30, 2007; retrieved from www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9050322 on May 29, 2008.

3. E. Murray, “Survey on Information Security,” Information Security Magazine (October 2000).

4. VISA, “PCI Compliance Continued to Grow in 2007,” Press Release, January 22, 2008.

5. S. Schiesel and R. Atlas, “By-the-Numbers Operation at Morgan Stanley Finds Its Human Side,” New York Times, September 16, 2001.

6. A. Harmon, “Breaking Up the Central Office; Staffs Make a Virtue of Necessity,” New York Times, October 29, 2001 (corrected October 30, 2001).

7. www.truste.org.

8. P. G. Neumann, “The Green Card Flap,” RISKS Forum Digest 15, April 18, 1994.

9. G. Keizer, “Hacker Contest Weekend,” Web Design & Technology News, July 2, 2003; retrieved from www.webdesignsnow/news/070203.html on February 19, 2008.

10. Extensible Markup Language: A universal standard for structured documents and data on the World Wide Web sponsored by the World Wide Web Consortium (W3C), www.w3c.org.

11. J. Serjeant, “‘Botmaster’ Admits Infecting 250,000 Computers,” Reuters, November 9, 2007; “Computer Security Consultant Charged with Infecting up to a Quarter Million Computers That Were Used to Wiretap, Engage in Identity Theft, Defraud Banks,” November 9, 2007; retrieved from www.reuters.com/article/domesticNews/idUSN0823938120071110 on May 31, 2008. Press Release No. 07-143, United States Attorney's Office, Central District of California, retrieved from www.usdoj.gov/usao/cac/pressroom/pr2007/143.html on May 31, 2008. FBI, “BOT ROAST II,” Cracking Down on CyberCrime, retrieved from www.fbi.gov/page2/nov07/botnet112907.html on May 31, 2008. FBI, “Bot Roast II Nets 8 [sic] Individuals,” www.fbi.gov/pressrel/pressrel07/botroast112907.htm on May 31, 2008.

12. TechWise Research, Inc. “Quantifying the Value of Availability” (June 2000).

13. Vijayan, “TJX Agrees to Pay $40.9M.”

14. G. de Groot, D. Karrenberg, B. Moskowitz, and Y. Rekhter, “RFC 1597—Address Allocation for Private Internets” (March 1994); retrieved from www.ietf.org/rfc/rfc1597.txt on May 27, 2008.

15. R. Gezelter, “Internet Dial Tones & Firewalls: One Policy Does Not Fit All,” IEEE Computer Society, Charleston, SC, June 10, 2003; retrieved from www.rlgsc.com/ieee/charleston/2003-6/internetdial.html on May 28, 2008. R. Gezelter, “Safe Computing in the Age of Ubiquitous Connectivity,” LISAT 2007 (May 2007); retrieved from www.rlgsc.com/ieee/longisland/2007/ubiquitous.html on February 19, 2007.

16. RFC1597, superseded by RFC1918.

17. RFC 3927.

18. Gezelter, “Internet Dial Tones & Firewalls.” Gezelter, “Safe Computing in the Age of Ubiquitous Connectivity.”

19. Formerly RFC 1597.

20. RFC 3927.

21. Robert Gezelter, “Stopping Spoofed Addresses Can Cut Down on DDoS Attacks,” Network World Fusion, August 14, 2000; retrieved from www.networkworld.com/columnists/2000/0814gezelter.html on May 28, 2008.

22. Frank da Cruz, and Christine M. Gianone. Using C-KERMIT, 2nd ed. (Boston: Digital Press, 1997).

23. M. Slatalla and J. Quittner, Masters of Deception (New York: Harper Collins, 1995).

24. W. Townsley, A. Valencia, A. Rubens, G. Pall, G. Zorn, and B. Palter. “RFC 2661—Layer Two Tunneling Protocol ‘L2TP’” (August 1999); retrieved from www.ietf.org/rfc/rfc2661.txt on May 28, 2008.

25. K. Hamzeh, G. Pall, W. Verthein, J. Taarud, W. Little, and G. Zorn “RFC 2637—Point-to-Point Tunneling Protocol” (July 1999); retrieved from www.ietf.org/rfc/rfc2637.txt on May 28, 2008.

26. K. Timms, Telephone interview (Summer 2001); K. Bradsher, “With Its E-mail Infected, Ford Scrambled and Caught Up,” New York Times, May 8, 2000.

27. T. Kelley, “An Expert in Computer Security Finds His Life Is a Wide Open Book,” New York Times, December 31, 1999.

28. P. Lewis, “Forget Big Brother,” New York Times, March 19, 1998.
