CHAPTER 10

NETWORK SECURITY

THIS CHAPTER describes why networks need security and how to provide it. The first step in any security plan is risk assessment, understanding the key assets that need protection, and assessing the risks to each. There are a variety of steps that can be taken to prevent, detect, and correct security problems due to disruptions, destruction, disaster, and unauthorized access.

OBJECTIVES

  • Be familiar with the major threats to network security
  • Be familiar with how to conduct a risk assessment
  • Understand how to ensure business continuity
  • Understand how to prevent intrusion

CHAPTER OUTLINE

10.1 INTRODUCTION

10.1.1 Why Networks Need Security

10.1.2 Types of Security Threats

10.1.3 Network Controls

10.2 RISK ASSESSMENT

10.2.1 Develop a Control Spreadsheet

10.2.2 Identify and Document the Controls

10.2.3 Evaluate the Network's Security

10.3 ENSURING BUSINESS CONTINUITY

10.3.1 Virus Protection

10.3.2 Denial of Service Protection

10.3.3 Theft Protection

10.3.4 Device Failure Protection

10.3.5 Disaster Protection

10.4 INTRUSION PREVENTION

10.4.1 Security Policy

10.4.2 Perimeter Security and Firewalls

10.4.3 Server and Client Protection

10.4.4 Encryption

10.4.5 User Authentication

10.4.6 Preventing Social Engineering

10.4.7 Intrusion Prevention Systems

10.4.8 Intrusion Recovery

10.5 BEST PRACTICE RECOMMENDATIONS

10.6 IMPLICATIONS FOR MANAGEMENT

10.1 INTRODUCTION

Business and government have always been concerned with physical and information security. They have protected physical assets with locks, barriers, guards, and the military since organized societies began. They have also guarded their plans and information with coding systems for at least 3,500 years. What has changed in the last 50 years is the introduction of computers and the Internet.

The rise of the Internet has completely redefined the nature of information security. Companies now face global threats to their networks and, more importantly, to their data. Viruses and worms have long been a problem, but credit card theft and identity theft, two of the fastest-growing crimes, pose immense liability to firms that fail to protect their customers’ data. Laws have been slow to catch up, even though breaking into a computer in the United States, even without causing damage, is now a federal crime punishable by a fine and/or imprisonment. We now face a new kind of transborder cyber crime to which laws may apply but will be very difficult to enforce. The United States and Canada may extradite and allow prosecution of digital criminals operating within their borders, but investigating and prosecuting cyber crime that crosses national borders is much more challenging. And even when offenders are caught, they face lighter sentences than bank robbers do.

Computer security has become increasingly important over the last 10 years with the passage of the Sarbanes-Oxley Act (SOX) and the Health Insurance Portability and Accountability Act (HIPAA). The number of Internet security incidents reported to the Computer Emergency Response Team (CERT) doubled every year up until 2003, when CERT stopped keeping records because there were so many incidents that it was no longer meaningful to keep track.2 CERT was established by the U.S. Department of Defense at Carnegie Mellon University with a mission to work with the Internet community to respond to computer security problems, raise awareness of computer security issues, and prevent security breaches.

Approximately 70 percent of organizations experienced security breaches in the last 12 months.3 The median number of security incidents was four, but the top 5 percent of organizations were attacked more than 100 times a year. About 60 percent reported suffering a measurable financial loss due to a security problem, with the average loss being about $350,000, significantly higher than in previous years. The median loss was much smaller, under $50,000. Experts estimate that worldwide annual losses due to security problems exceed $2 trillion. Two of the most notable security breaches of 2010 were the hack of AT&T's website, which exposed the email addresses of 114,000 iPad 3G owners, and the Aurora Attack, which targeted Google and affected dozens of other organizations that collaborate with Google.

Part of the reason for the increase in computer security problems is the increasing availability of sophisticated tools for breaking into networks. Ten years ago, someone wanting to break into a network needed to have some expertise. Today, even inexperienced attackers can download tools from a Web site and immediately begin trying to break into networks.

Two other factors are also at work increasing security problems. First, organized crime has recognized the value of computer attacks. Criminal organizations have launched spam campaigns with fraudulent products and claims, created viruses to steal information, and have even engaged in extortion by threatening to disable a small company's network unless it pays them a fee. Computer crime is less risky than traditional crime and also pays a lot better.

Second, there is considerable evidence that the Chinese military and security services have engaged in a major, ongoing cyberwarfare campaign against military and government targets in the western world.4 To date these attacks have focused on espionage and disabling military networks. Most large Chinese companies are owned by the Chinese military or security services or by leaders recently departed from these organizations. There is some evidence that China's cyberwarfare campaign has expanded to include industrial espionage against western companies in support of Chinese companies.

As a result, the cost of network security has increased. Firms spent an average of about 5 percent of their total IT budget on network security. The average expenditure was about $1,250 per employee per year (and that's all employees in the organization, not just IT employees), so an organization with 1,000 employees spends an average of $1.25 million per year on IT security. About 30 percent of organizations had purchased insurance for security risks.
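
To see how these survey averages translate into a budget, here is a quick back-of-the-envelope calculation as a Python sketch; the head count is an assumption for illustration.

    # Back-of-the-envelope security budget from the survey averages above.
    employees = 1000                  # assumed organization size
    cost_per_employee = 1250          # dollars per employee per year (survey average)

    annual_security_spend = employees * cost_per_employee
    print(f"Estimated annual security spend: ${annual_security_spend:,}")
    # prints: Estimated annual security spend: $1,250,000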

10.1.1 Why Networks Need Security

In recent years, organizations have become increasingly dependent on data communication networks for their daily business communications, database information retrieval, distributed data processing, and the internetworking of LANs. The rise of the Internet with opportunities to connect computers anywhere in the world has significantly increased the potential vulnerability of the organization's assets. Emphasis on network security also has increased as a result of well-publicized security break-ins and as government regulatory agencies have issued security-related pronouncements.

The losses associated with security failures can be huge. An average annual loss of about $350,000 sounds large enough, but this is just the tip of the iceberg. The potential loss of consumer confidence from a well-publicized security break-in can cost much more in lost business. More important than these, however, are the potential losses from the disruption of application systems that run on computer networks. As organizations have come to depend on computer systems, computer networks have become “mission-critical.” Bank of America, one of the largest banks in the United States, estimates that it would cost the bank $50 million if its computer networks were unavailable for 24 hours. Other large organizations have produced similar estimates.

Protecting customer privacy and reducing the risk of identity theft also drive the need for increased network security. In 1998, the European Union passed strong data privacy laws that impose fines on companies that disclose information about their customers. In the United States, organizations have begun complying with the data protection requirements of HIPAA and with a California law providing fines of up to $250,000 for each unauthorized disclosure of customer information (e.g., if someone were to steal 100 customer records, the fine could be $25 million).

As you might suspect, the value of the data stored on most organizations’ networks and the value provided by the application systems in use far exceeds the cost of the networks themselves. For this reason, the primary goal of network security is to protect organizations’ data and application software, not the networks themselves.

10.1.2 Types of Security Threats

For many people, security means preventing intrusion, such as preventing an attacker from breaking into your computer. Security is much more than that, however. There are three primary goals in providing security: confidentiality, integrity, and availability (also known as CIA). Confidentiality refers to the protection of customer and proprietary organizational data from unauthorized disclosure. Integrity is the assurance that data have not been altered or destroyed. Availability means providing continuous operation of the organization's hardware and software so that staff, customers, and suppliers can be assured of no interruptions in service.

There are many potential threats to confidentiality, integrity, and availability. Figure 10.1 shows some threats to a computer center, the data communication circuits, and the attached computers. In general, security threats can be classified into two broad categories: ensuring business continuity and preventing unauthorized access.

Ensuring business continuity refers primarily to ensuring availability, with some aspects of data integrity. There are three main threats to business continuity. Disruptions are the loss of or reduction in network service. Disruptions may be minor and temporary. For example, a network switch might fail or a circuit may be cut, causing part of the network to cease functioning until the failed component can be replaced. Some users may be affected, but others can continue to use the network. Some disruptions may also be caused by or result in the destruction of data. For example, a virus may destroy files, or the “crash” of a hard disk may cause files to be destroyed. Other disruptions may be catastrophic. Natural (or human-made) disasters may occur that destroy host computers or large sections of the network. For example, hurricanes, fires, floods, earthquakes, mudslides, tornadoes, or terrorist attacks can destroy large parts of the buildings and networks in their path.

Intrusion refers primarily to confidentiality, but also to integrity, because an intruder may change important data. Intrusion is often viewed as external attackers gaining access to organizational data files and resources from across the Internet. However, almost half of all intrusion incidents involve employees. Intrusion may have only minor effects. A curious intruder may simply explore the system, gaining knowledge that has little value. A more serious intruder may be a competitor bent on industrial espionage attempting to gain access to information on products under development or the details and price of a bid on a large contract, or a thief trying to steal customer credit card numbers or information to carry out identity theft. Worse still, the intruder could change files to commit fraud or theft or could destroy information to injure the organization.

10.1.3 Network Controls

Developing a secure network means developing controls. Controls are software, hardware, rules, or procedures that reduce or eliminate the threats to network security. Controls prevent, detect, and/or correct whatever might happen to the organization because of threats facing its computer-based systems.

FIGURE 10.1 Some threats to a computer center, data communication circuits, and client computers

Preventive controls mitigate or stop a person from acting or an event from occurring. For example, a password can prevent illegal entry into the system, and a set of redundant circuits can prevent the network from crashing. Preventive controls also act as a deterrent by discouraging or restraining someone from acting or proceeding because of fear or doubt. For example, a guard or a security lock on a door may deter an attempt to gain illegal entry.

Detective controls reveal or discover unwanted events. For example, software that looks for illegal network entry can detect these problems. They also document an event, a situation, or an intrusion, providing evidence for subsequent action against the individuals or organizations involved or enabling corrective action to be taken. For example, the same software that detects the problem must report it immediately so that someone or some automated process can take corrective action.

10.1 AURORA ATTACK

MANAGEMENT FOCUS

Even information technology giants, such as Google, are not safe when it comes to cyber security. The Aurora Attack began in mid-2009 and ended in December 2009, when it was discovered by Google. The name Aurora is believed to be an internal name that the attackers gave to this operation. The attack is believed to have been ordered by the Chinese government, and its goal was to gain access to, and potentially modify, the source code repositories of high-tech, security, and defense contractors.

This wasn't a simple attack done by script kiddies, young adults who download scripts written by somebody else to exploit known vulnerabilities. Security experts were amazed by the sophistication of this attack and some claim it changed the threat model. Nearly a dozen pieces of malware and several levels of encryption were used to exploit a zero-day vulnerability in Internet Explorer and to break deeply into the corporate network while avoiding common detection methods.

This attack also hit 33 other companies in the United States, including Adobe, Yahoo, Symantec, and Dow Chemical. In response, several foreign governments publicly issued warnings to users of Internet Explorer. The Aurora Attack reminds us that cyber security is a global problem and that everybody who uses the Internet can, and probably will, come under attack. Therefore, learning about security and investing in it is necessary to survive and thrive in the Internet era.

__________

SOURCES: http://www.wired.com/threatlevel/2010/01/operation-aurora/

http://en.wikipedia.org/wiki/Operation_Aurora

http://www.wired.com/threatlevel/2010/01/google-hack-attack/

Corrective controls remedy an unwanted event or an intrusion. Either computer programs or humans verify and check data to correct errors or fix a security breach so it will not recur in the future. They also can recover from network errors or disasters. For example, software can recover and restart the communication circuits automatically when there is a data communication failure.

The remainder of this chapter discusses the various controls that can be used to prevent, detect, and correct threats. We also present a control spreadsheet and risk analysis methodology for identifying the threats and their associated controls. The control spreadsheet provides a network manager with a good view of the current threats and any controls that are in place to mitigate the occurrence of threats.

Nonetheless, it is important to remember that it is not enough just to establish a series of controls; someone or some department must be accountable for the control and security of the network. This includes being responsible for developing controls, monitoring their operation, and determining when they need to be updated or replaced.

Controls must be reviewed periodically to be sure that they are still useful and must be verified and tested. Verifying ensures that the control is present, and testing determines whether the control is working as originally specified.

It is also important to recognize that there may be occasions in which a person must temporarily override a control, for instance when the network or one of its software or hardware subsystems is not operating properly. Such overrides should be tightly controlled, and there should be a formal procedure to document this occurrence should it happen.

10.2 RISK ASSESSMENT

One key step in developing a secure network is to conduct a risk assessment. This assigns levels of risk to various threats to network security by comparing the nature of the threats to the controls designed to reduce them. It is done by developing a control spreadsheet and then rating the importance of each risk. This section provides a brief summary of the risk assessment process.5

10.2.1 Develop a Control Spreadsheet

To be sure that the data communication network and microcomputer workstations have the necessary controls and that these controls offer adequate protection, it is best to build a control spreadsheet (Figure 10.2). Threats to the network are listed across the top, organized by business continuity (disruption, destruction, disaster) and intrusion, and the network assets down the side. The center of the spreadsheet incorporates all the controls that currently are in the network. This will become the benchmark on which to base future security reviews.

FIGURE 10.2 Sample control spreadsheet with some assets and threats. DNS = Domain Name Service; LAN = local area network

10.1 BASIC CONTROL PRINCIPLES OF A SECURE NETWORK

TECHNICAL FOCUS

  • The less complex a control, the better.
  • A control's cost should be commensurate with the identified risk. It often is not possible to ascertain the expected loss, so this is a subjective judgment in many cases.
  • Preventing a security incident is always preferable to detecting and correcting it after it occurs.
  • An adequate system of internal controls is one that provides “just enough” security to protect the network, taking into account both the risks and costs of the controls.
  • Automated controls (computer-driven) always are more reliable than manual controls that depend on human interaction.
  • Controls should apply to everyone, not just a few select individuals.
  • When a control has an override mechanism, make sure that it is documented and that the override procedure has its own controls to avoid misuse.
  • Institute the various security levels in an organization on the basis of “need to know.” If you do not need to know, you do not need to access the network or the data.
  • The control documentation should be confidential.
  • Names, uses, and locations of network components should not be publicly available.
  • Controls must be sufficient to ensure that the network can be audited, which usually means keeping historical transaction records.
  • When designing controls, assume that you are operating in a hostile environment.
  • Always convey an image of high security by providing education and training.
  • Make sure the controls provide the proper separation of duties. This applies especially to those who design and install the controls and those who are responsible for everyday use and monitoring.
  • It is desirable to implement entrapment controls in networks to identify attackers who gain illegal access.
  • When a control fails, the network should default to a condition in which everyone is denied access. A period of failure is when the network is most vulnerable.
  • Controls should still work even when only one part of a network fails. For example, if a backbone network fails, all local area networks connected to it should still be operational, with their own independent controls providing protection.
  • Don't forget the LAN. Security and disaster recovery planning has traditionally focused on host mainframe computers and WANs. However, LANs now play an increasingly important role in most organizations but are often overlooked by central site network managers.
  • Always assume your opponent is smarter than you.
  • Always have insurance as the last resort should all controls fail.

Assets The first step is to identify the assets on the network. An asset is something of value and can be either hardware, software, data, or applications. Probably the most important asset on a network is the organization's data. For example, suppose someone destroyed a mainframe worth $10 million. The mainframe could be replaced simply by buying a new one. It would be expensive, but the problem would be solved in a few weeks. Now suppose someone destroyed all the student records at your university so that no one knows what courses anyone had taken or their grades. The cost would far exceed the cost of replacing a $10 million computer. The lawsuits alone would easily exceed $10 million, and the cost of staff to find and reenter paper records would be enormous and certainly would take more than a few weeks. Figure 10.3 summarizes some typical assets.

FIGURE 10.3 Types of assets. DNS = Domain Name Service; DHCP = Dynamic Host Configuration Protocol; LAN = local area network; WAN = wide area network

An important type of asset is the mission-critical application, which is an information system that is critical to the survival of the organization. It is an application that cannot be permitted to fail, and if it does fail, the network staff drops everything else to fix it. For example, for an Internet bank that has no brick-and-mortar branches, the Web site is a mission-critical application. If the Web site crashes, the bank cannot conduct business with its customers. Mission-critical applications are usually clearly identified, so their importance is not overlooked.

Once you have a list of assets, they should be evaluated based on their importance. There will rarely be enough time and money to protect all assets completely, so it is important to focus the organization's attention on the most important ones. Prioritizing asset importance is a business decision, not a technology decision, so it is critical that senior business managers be involved in this process.

Threats A threat to the data communication network is any potential adverse occurrence that can do harm, interrupt the systems using the network, or cause a monetary loss to the organization. Although threats may be listed in generic terms (e.g., theft of data, destruction of data), it is better to be specific and use actual data from the organization being assessed (e.g., theft of customer credit card numbers, destruction of the inventory database).

Once the threats are identified, they can be ranked according to their probability of occurrence and the likely cost if the threat occurs. Figure 10.4 summarizes the most common threats and their likelihood of occurring, plus a typical cost estimate, based on several surveys. The actual probability of a threat to your organization and its costs depend on your business. An Internet bank, for example, is more likely to be a target of fraud and to suffer a higher cost if it occurs than a restaurant with a simple Web site. Nonetheless, Figure 10.4 provides some general guidance.
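
One simple way to do this ranking is to multiply each threat's probability of occurrence by its likely cost per event, giving an expected annual loss. The Python sketch below illustrates the idea; the probability and cost figures are loosely based on the numbers discussed in this section but are assumptions for illustration only.

    # Rank threats by expected annual loss = probability x cost per event.
    threats = {
        "virus infection":    {"probability": 0.60, "cost": 33_000},
        "denial of service":  {"probability": 0.35, "cost": 25_000},
        "external intrusion": {"probability": 0.30, "cost": 100_000},
    }

    ranked = sorted(threats.items(),
                    key=lambda item: item[1]["probability"] * item[1]["cost"],
                    reverse=True)

    for name, t in ranked:
        loss = t["probability"] * t["cost"]
        print(f"{name:18s} expected annual loss: ${loss:,.0f}")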

FIGURE 10.4 Likelihood and costs of common risks

From Figure 10.4 you can see that the most likely event is a virus infection, suffered by more than 60 percent of organizations each year. The average cost to clean up a virus that slips through the security system and infects an average number of computers is about $33,000 per virus. Depending on your background, this was probably not the first security threat that came to mind; most people first think about unknown attackers breaking into a network across the Internet. This does happen, too; unauthorized access by an external hacker is experienced by about 30 percent of all organizations each year, with some experiencing an act of sabotage or vandalism. The average cost to recover after these attacks is $100,000.

Interestingly, companies suffer intrusion by their own employees about as often as by outsiders, although the dollar loss is usually less—unless fraud or theft of information is involved. While few organizations experience fraud or theft of information from internal or external attackers, the cost to recover afterward can be very high, both in dollar cost and bad publicity. Several major companies have had their networks broken into and have had proprietary information such as customer credit card numbers stolen. Winning back customers whose credit card information was stolen can be an even greater challenge than fixing the security breach.

You will also see that device failure and computer equipment theft are common problems but usually result in low dollar losses compared to other security violations. Natural disasters (e.g., fire, flood) are also common, and result in high dollar losses.

Denial of service attacks, in which someone external to your organization blocks access to your networks, are also common (35 percent) and somewhat costly ($25,000 per event). Even temporary disruptions in service that cause no data loss can have significant costs. Estimating the cost of denial of service is very organization-specific; the cost of disruptions to a company that does a lot of e-commerce through a Web site is often measured in the millions.

Amazon.com, for example, has revenues of more than $10 million per hour, so if its Web site were unavailable for an hour or even part of an hour it would cost millions of dollars in lost revenue. Companies that do no e-commerce over the Web would have lower costs, but recent surveys suggest losses of $100,000–200,000 per hour are not uncommon for major disruptions of service. Even the disruption of a single LAN has cost implications; surveys suggest that most businesses estimate the cost of lost work at $1,000–5,000 per hour.

There are two “big picture” messages from Figure 10.4. First, the most common threat that has a noticeable cost is viruses. In fact, if we look at the relative probabilities of the different threats, we can see that the threats to business continuity (e.g., virus, theft of equipment, or denial of service) have a greater chance of occurring than intrusion. Nonetheless, given the cost of fraud and theft of information, even a single event can have significant impact.6

The second important message is that the threat of intrusion by outside attackers coming at you over the Internet has increased. For the past 30 years, more organizations have reported security breaches caused by employees than by outsiders; this has been true ever since the early 1980s, when the FBI first began keeping computer crime statistics and security firms began conducting surveys of computer crime. In recent years, however, the number of external attacks has increased at a much greater rate, while the number of internal attacks has stayed relatively constant. Some of this may be due to better internal security and better communication with employees to prevent security problems, but much of it is simply due to an increase in activity by external attackers and the global reach of the Internet. Today, external attackers pose almost as great a risk as internal employees.

10.2.2 Identify and Document the Controls

Once the specific assets and threats have been identified, you can begin working on the network controls, which mitigate or stop a threat, or protect an asset. During this step, you identify the existing controls and list them in the cell for each asset and threat.

Begin by considering the asset and the specific threat, and then describe each control that prevents, detects, or corrects that threat. The description of the control (and its role) is placed in a numerical list, and the control's number is placed in the cell. For example, assume 24 controls have been identified as being in use. Each one is described, named, and numbered consecutively. The numbered list of controls has no ranking attached to it: the first control is number 1 just because it is the first control identified.

Figure 10.5 shows a partially completed spreadsheet. The assets and their priority are listed as rows, with threats as columns. Each cell lists one or more controls that protect one asset against one threat. For example, in the first row, the mail server is currently protected from a fire threat by a Halon fire suppression system, and there is a disaster recovery plan in place. The placement of the mail server above ground level protects against flood, and the disaster recovery plan helps here too.
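
For readers who think in code, the spreadsheet can be pictured as a numbered list of controls plus a mapping from each (asset, threat) cell to the control numbers that apply there. A minimal Python sketch, using the mail server example above:

    # Numbered list of controls; as noted above, the numbering carries no ranking.
    controls = {
        1: "Halon fire suppression system",
        2: "Disaster recovery plan",
        3: "Mail server located above ground level",
    }

    # Each (asset, threat) cell lists the controls currently in place.
    spreadsheet = {
        ("mail server", "fire"):  [1, 2],
        ("mail server", "flood"): [3, 2],
    }

    for (asset, threat), numbers in spreadsheet.items():
        names = "; ".join(controls[n] for n in numbers)
        print(f"{asset} / {threat}: {names}")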

FIGURE 10.5 Sample control spreadsheet with some assets, threats, and controls. DNS = Domain Name Service; LAN = local area network

10.2.3 Evaluate the Network's Security

The last step in using a control spreadsheet is to evaluate the adequacy of the existing controls and the resulting degree of risk associated with each threat. Based on this assessment, priorities can be established to determine which threats must be addressed immediately. Assessment is done by reviewing each set of controls as it relates to each threat and network component. The objective of this step is to answer the specific question: Are the controls adequate to effectively prevent, detect, and correct this specific threat?

The assessment can be done by the network manager, but it is better done by a team of experts chosen for their in-depth knowledge about the network and environment being reviewed. This team, known as the Delphi team, is composed of three to nine key people. Key managers should be team members because they deal with both the long-term and day-to-day operational aspects of the network. More important, their participation means the final results can be implemented quickly, without further justification, because they make the final decisions affecting the network.

10.3 ENSURING BUSINESS CONTINUITY

Business continuity means that the organization's data and applications will continue to operate even in the face of disruption, destruction, or disaster. A business continuity plan has two major parts: the development of controls that will prevent these events from having a major impact on the organization, and a disaster recovery plan that will enable the organization to recover if a disaster occurs. In this section, we discuss controls designed to prevent, detect, and correct these threats.7 We focus on the major threats to business continuity: viruses, theft, denial-of-service attacks, device failure, and disasters. Business continuity planning is sometimes overlooked because intrusion is more often the subject of news reports.

10.3.1 Virus Protection

Special attention must be paid to preventing computer viruses. Some are harmless and just cause nuisance messages, but others do serious damage, such as destroying data. In most cases, disruptions or the destruction of data are local and affect only a small number of computers. Such disruptions are usually fairly easy to deal with; the virus is removed and the network continues to operate. Some viruses cause widespread infection, although this has not occurred in recent years.

10.2 ATTACK OF THE AUDITORS

MANAGEMENT FOCUS

Security has become a major issue over the past few years. With the passage of HIPAA and the Sarbanes-Oxley Act, more and more regulations are addressing security. It takes years for most organizations to become compliant because the rules are vague and there are many ways to meet the requirements.

“If you've implemented commonsense security, you're probably already in compliance from an IT standpoint,” says Kim Keanini, Chief Technology Officer of nCircle, a security software firm. “Compliance from an auditing standpoint, however, is something else.” Auditors require documentation. It is no longer sufficient to put key network controls in place; now you have to provide documented proof that a control is working, which usually requires event logs of transactions and thwarted attacks.

When it comes to security, Bill Randal, MIS Director of Red Robin Restaurants, can't stress the importance of documentation enough. “It's what the auditors are really looking for,” he says. “They're not IT folks, so they're looking for documented processes they can track. At the start of our [security] compliance project, we literally stopped all other projects for another three weeks while we documented every security and auditing process we had in place.”

Software vendors are scrambling to ensure that their security software not only performs the functions it is designed to do but also provides the documentation that auditors require.

__________

SOURCE: Oliver Rist, “Attack of the Auditors,” InfoWorld, March 21, 2005, pp. 34–40.

Most viruses attach themselves to other programs or to special parts of disks. As those files execute or are accessed, the virus spreads. Macro viruses, viruses that are contained in documents, emails, or spreadsheet files, can spread when an infected file is simply opened. Some viruses change their appearance as they spread, making detection more difficult.

A worm is a special type of virus that spreads itself without human intervention. Many viruses attach themselves to a file and require a person to copy the file, but a worm copies itself from computer to computer. Worms spread when they install themselves on a computer and then send copies of themselves to other computers, sometimes by email, sometimes via security holes in software. (Security holes are described later in this chapter.)

The best way to prevent the spread of viruses is to install antivirus software such as that made by Symantec. Most organizations automatically install antivirus software on their computers, but many people fail to install it on their home computers. Antivirus software is only as good as its last update, so it is critical that the software be updated regularly. Be sure to set your software to update automatically, or do it manually on a regular basis.

Viruses are often spread by downloading files from the Internet, so do not copy or download files of unknown origin (e.g., music, videos, screen savers), or at least check every file you do download. Always check all files for viruses before using them (even those from friends!). Researchers estimate that 10 new viruses are developed every day, so it is important to frequently update the virus information files that are provided by the antivirus software.
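
At its core, much antivirus scanning is signature matching: comparing what is on disk against a database of known malware patterns. The Python sketch below is a greatly simplified illustration of the idea, using whole-file hashes; real antivirus products match byte patterns and behavior rather than whole-file hashes, and the signature shown is a self-generated stand-in, not a real malware signature.

    import hashlib

    # Greatly simplified signature scan: flag any file whose SHA-256 hash
    # matches a known-malware signature. Real antivirus software is far
    # more sophisticated (pattern matching, heuristics, behavior analysis).
    KNOWN_BAD_SHA256 = {
        hashlib.sha256(b"EICAR-like test pattern").hexdigest(),  # stand-in signature
    }

    def looks_infected(path: str) -> bool:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return digest in KNOWN_BAD_SHA256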

10.3.2 Denial of Service Protection

With a denial-of-service (DoS) attack, an attacker attempts to disrupt the network by flooding it with messages so that the network cannot process messages from normal users. The simplest approach is to flood a Web server, mail server, and so on with incoming messages. The server attempts to respond to these, but there are so many messages that it cannot.

One might expect that it would be possible to filter messages by source IP address so that if one user floods the network, the messages from this person can be filtered out before they reach the Web server being targeted. This could work, but most attackers use tools that put false source IP addresses on the incoming messages, so it is difficult to distinguish a real message from a DoS message.

A distributed denial-of-service (DDoS) attack is even more disruptive. With a DDoS attack, the attacker breaks into and takes control of many computers on the Internet (often several hundred to several thousand) and plants software on them called a DDoS agent (or sometimes a zombie or a bot). The attacker then uses software called a DDoS handler (sometimes called a botnet) to control the agents. The handler issues instructions to the computers under the attacker's control, which simultaneously begin sending messages to the target site. In this way, the target is deluged with messages from many different sources, making it harder to identify the DoS messages and greatly increasing the number of messages hitting the target (see Figure 10.6). Some DDoS attacks have sent more than one million packets per second at the target.

There are several approaches to preventing DoS and DDoS attacks from affecting the network. The first is to configure the main router that connects your network to the Internet (or the firewall, which will be discussed later in this chapter) to verify that the source address of all incoming messages is in a valid address range for that connection (called traffic filtering). For example, if an incoming message has a source address from inside your network, then it is obviously a false address. This ensures that only messages with valid addresses are permitted into the network, although it requires more processing in the router and thus slows incoming traffic.
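
The heart of this check is a simple membership test: does the packet's claimed source address fall in a range that could legitimately appear on this interface? A minimal Python sketch follows; the internal address block is an assumption for illustration.

    import ipaddress

    # Ingress filtering sketch: a packet arriving FROM the Internet should
    # never claim a source address INSIDE our own network.
    INTERNAL_NET = ipaddress.ip_network("192.168.0.0/16")   # assumed block

    def permit_inbound(source_ip: str) -> bool:
        return ipaddress.ip_address(source_ip) not in INTERNAL_NET

    print(permit_inbound("203.0.113.9"))    # True:  plausible external source
    print(permit_inbound("192.168.4.20"))   # False: spoofed internal address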

A second approach is to configure the main router (or firewall) to limit the number of incoming packets of the types commonly used in DoS/DDoS attacks that it allows to enter the network, regardless of their source (called traffic limiting). Technical Focus box 10.2 describes some of the types of DoS/DDoS attacks and the packets they use. Such packets have the same content as legitimate packets that should be permitted into the network; it is a flood of such packets that indicates a DoS/DDoS attack, so by discarding packets that arrive above a certain rate per second, one can reduce the impact of the attack. The disadvantage is that during an attack, some valid packets from regular customers will be discarded, so they will be unable to reach your network. Thus the network will continue to operate, but some customer packets (e.g., Web requests, emails) will be lost.
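
Traffic limiting is often implemented with a token-bucket style rate limiter: a packet is admitted only while tokens remain, and tokens refill at a fixed rate. A minimal sketch; the rate and burst thresholds are arbitrary illustrative values.

    import time

    # Token-bucket rate limiter: admit at most `rate` packets per second on
    # average, with short bursts of up to `burst` packets.
    class TokenBucket:
        def __init__(self, rate: float, burst: float):
            self.rate, self.burst = rate, burst
            self.tokens, self.last = burst, time.monotonic()

        def admit(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False      # over the limit: discard the packet

    limiter = TokenBucket(rate=1000, burst=200)   # assumed thresholds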

FIGURE 10.6 A distributed denial-of-service attack

10.3 CAN DDOS ATTACKS BREAK THE INTERNET?

MANAGEMENT FOCUS

Although the idea of DDoS is not new at all, it causes headaches for more and more companies with an online presence. According to the Arbor Networks Security Report, DDoS attacks have increased by 1,000 percent since 2005. Attackers are now able to bombard a target at 100 Gbps, which is twice the size of the largest attack in 2009. The concern is that DDoS attacks may break the Internet in the near future.

There are three reasons for this. First, after the Wikileaks website was taken down for several hours in 2010 by a DDoS attack, we now realize that DDoS can be seen as a mass demonstration or protest that happens online. Second, faster household Internet connections make it easier to launch successful attacks because each bot can now generate more traffic. Third, some speculate that the very fast-growing network of mobile devices running on fast mobile networks can be used in DDoS attacks. The concern that DDoS could be the threat of the next decade is even more tangible these days because of an announcement made by a computer science graduate student, Max Schuchard. He claims to have discovered a way to launch a DDoS attack on the Border Gateway Protocol (BGP), which runs on all major Internet routers, and thus to crash the Internet. The question, then, is no longer whether one can break a website but rather whether one can break the Internet.

__________

SOURCES: http://www.pcworld.com/businesscenter/article/218533/will_ddos_attacks_take_over_the_internet.html

http://www.zdnet.com/blog/networking/how-to-crash-the-internet/680

A third and more sophisticated approach is to use a special-purpose security device, called a traffic anomaly detector, that is installed in front of the main router (or firewall) to perform traffic analysis. This device monitors traffic patterns and learns what normal traffic looks like. Most DoS/DDoS attacks target a specific server or device, so when the anomaly detector recognizes a sudden burst of abnormally high traffic destined for a specific server or device, it quarantines those incoming packets but allows normal traffic to flow through into the network. This results in minimal impact on the network as a whole. The anomaly detector reroutes the quarantined packets to a traffic anomaly analyzer (see Figure 10.7). The anomaly analyzer examines the quarantined traffic, attempts to recognize valid source addresses and “normal” traffic, and selects which of the quarantined packets to release into the network. The detector can also inform the router owned by the ISP that is sending the traffic into the organization's network to reroute the suspect traffic to the anomaly analyzer, thus avoiding the main circuit leading into the organization. This process is never perfect but is significantly better than the other approaches.
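
Conceptually, the anomaly detector compares current traffic against a learned baseline and quarantines anything far outside it. A toy Python sketch of that comparison follows; the baseline figures and the 10x threshold are assumptions for illustration.

    # Toy anomaly check: quarantine traffic to any server whose current
    # packet rate is far above its learned baseline.
    baseline = {"web server": 500.0, "mail server": 120.0}    # packets/sec, learned
    current = {"web server": 9500.0, "mail server": 110.0}    # packets/sec, observed

    THRESHOLD = 10    # quarantine when traffic exceeds 10x the baseline

    for server, rate in current.items():
        status = "QUARANTINE" if rate > THRESHOLD * baseline[server] else "normal"
        print(f"{server}: {status}")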

FIGURE 10.7 Traffic analysis reduces the impact of denial of service attacks

10.2 INSIDE A DOS ATTACK

TECHNICAL FOCUS

A DoS attack typically involves the misuse of standard TCP/IP protocols or connection processes so that the target for the DoS attack responds in a way designed to create maximum trouble. Common types of attacks include:

  • ICMP Attacks:

    The network is flooded with ICMP echo requests (i.e., pings) that have a broadcast destination address and a faked source address of the intended target. Because it is a broadcast message, every computer on the network responds to the faked source address so that the target is overwhelmed by responses. Because there are often dozens of computers in the same broadcast domain, each message generates dozens of messages at the target.

  • UDP Attacks:

    This attack is similar to an ICMP attack except that it uses UDP echo requests instead of ICMP echo requests.

  • TCP SYN Floods:

    The target is swamped with repeated SYN requests to establish a TCP connection, but when the target responds (usually to a faked source address), there is no reply. The target continues to allocate TCP control blocks, expecting each of the requests to be completed, and gradually runs out of memory.

  • UNIX Process Table Attacks:

    This is similar to a TCP SYN flood, but instead of TCP SYN packets, the target is swamped by UNIX open connection requests that are never completed. The target allocates open connections and gradually runs out of memory.

  • Finger of Death Attacks:

    This is similar to the TCP SYN flood, but instead the target is swamped by finger requests that are never disconnected.

  • DNS Recursion Attacks:

    The attacker sends DNS requests to DNS servers (often within the target's network) but spoofs the source address so that the requests appear to come from the target computer, which is then overwhelmed by DNS responses. DNS responses are larger than ICMP, UDP, or SYN responses, so the effects can be stronger.

    __________

    SOURCE: “Web Site Security and Denial of Service Protection,” www.nwfusion.com.

Another possibility under discussion by the Internet community as a whole is to require Internet Service Providers (ISPs) to verify that all incoming messages they receive from their customers have valid source IP addresses. This would prevent the use of faked IP addresses and enable users to easily filter out DoS messages from a given address. It would make it virtually impossible for a DoS attack to succeed, and much harder for a DDoS attack to succeed. Because small- to medium-sized businesses often have poor security and become the unwilling accomplices in DDoS attacks, many ISPs are beginning to impose security restrictions on them, such as requiring firewalls to prevent unauthorized access (firewalls are discussed later in this chapter).

10.3.3 Theft Protection

One often overlooked security risk is theft. Computers and network equipment are commonplace items that have a good resale value. Several industry sources estimate that over $1 billion is lost to computer theft each year, with many of the stolen items ending up on Internet auction sites (e.g., eBay).

Physical security is a key component of theft protection. Most organizations require anyone entering their offices to go through some level of physical security. For example, most offices have security guards and require all visitors to be authorized by an organization employee. Universities are one of the few organizations that permit anyone to enter their facilities without verification. Therefore, at universities you'll see most computer equipment and network devices protected by locked doors or security cables so that they cannot easily be stolen.

One of the most common targets for theft is laptop computers. More laptop computers are stolen from employees' homes, cars, and hotel rooms than any other device. Airports are another common place for laptop thefts. It is hard to provide physical security for traveling employees, but most organizations provide regular reminders to their employees to take special care when traveling with laptops. Nonetheless, laptops are still the most commonly stolen devices.

10.3.4 Device Failure Protection

Eventually, every computer network device, cable, or leased circuit will fail. It's just a matter of time. Some computers, devices, cables, and circuits are more reliable than others, but every network manager has to be prepared for a failure.

The best way to prevent a failure from impacting business continuity is to build redundancy into the network. For any network component that would have a major impact on business continuity, the network designer provides a second, redundant component. For example, if the Internet connection is important to the organization, the network designer ensures that there are at least two connections into the Internet—each provided by a different common carrier, so that if one common carrier's network goes down, the organization can still reach the Internet via the other common carrier's network. This means, of course, that the organization now requires two routers to connect to the Internet, because there is little use in having two Internet connections if they both run through the same router; if that one router goes down, having a second Internet connection provides no value.

This same design principle applies to the organization's internal networks. If the core backbone is important (and it usually is), then the organization must have two core backbones, each served by different devices. Each distribution backbone that connects to the core backbone (e.g., a building backbone that connects to a campus backbone) must also have two connections (and two routers) into the core backbone.

The next logical step is to ensure that each access layer LAN also has two connections into the distribution backbone. Redundancy can be expensive, so at some point, most organizations decide that not all parts of the network need to be protected. Most organizations build redundancy into their core backbone and their Internet connections, but are very careful in choosing which distribution backbones (i.e., building backbones) and access layer LANs will have redundancy. Only those building backbones and access LANs that are truly important will have redundancy. This is why a risk assessment with a control spreadsheet is important, because it is too expensive to protect the entire network. Most organizations only provide redundancy in mission critical backbones and LANs (e.g., those that lead to servers).

Redundancy also applies to servers. Most organizations use a server farm, rather than a single server, so that if one server fails, the other servers in the farm continue to operate and there is little impact. Some organizations use fault-tolerant servers that contain many redundant components, so that if one of their components fails, they continue to operate.

Redundant array of independent disks (RAID) is a storage technology that, as the name suggests, is made of many separate disk drives. When a file is written to a RAID device, it is written across several separate, redundant disks.

There are several types of RAID. RAID 0 uses multiple disk drives and therefore is faster than traditional storage, because the data can be written or read in parallel across several disks, rather than sequentially on the same disk. RAID 1 writes duplicate copies of all data on at least two different disks; this means that if one disk in the RAID array fails, there is no data loss because there is a second copy of the data stored on a different disk. This is sometimes called disk mirroring, because the data on one disk is copied (or mirrored) onto another. RAID 2 provides error checking to ensure no errors have occurred during the reading or writing process. RAID 3 provides a better and faster error checking process than RAID 2. RAID 4 provides slightly faster read access than RAID 3 because of the way it allocates the data to different disk drives. RAID 5 provides slightly faster read and write access because of the way it allocates the error checking data to different disk drives. RAID 6 can survive the failure of two drives with no data loss.
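
The error-checking data in several of these RAID levels is parity: an extra block computed from the data blocks, from which any single lost block can be rebuilt. A minimal Python sketch of the XOR parity scheme used in RAID 5:

    # XOR parity across three data blocks, RAID 5 style. If any one block
    # is lost, XOR-ing the survivors with the parity reconstructs it.
    def xor_blocks(*blocks: bytes) -> bytes:
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
    parity = xor_blocks(d1, d2, d3)

    recovered = xor_blocks(d1, d3, parity)   # suppose the disk holding d2 fails
    assert recovered == d2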

Power outages are one of the most common causes of network failures. An uninterruptible power supply (UPS) is a device that detects power failures and permits the devices attached to it to operate as long as its battery lasts. UPSs for home use are inexpensive and often provide power for up to 15 minutes, long enough for you to save your work and shut down your computer. UPSs for large organizations often have batteries that last for an hour and permit mission-critical servers, switches, and routers to operate until the organization's backup generator can be activated.

10.3.5 Disaster Protection

A disaster is an event that destroys a large part of the network and computing infrastructure in one part of the organization. Disasters are usually caused by natural forces (e.g., hurricanes, floods, earthquakes, fires), but some are human-made (e.g., arson, bombs, terrorism).

Avoiding Disaster Ideally, you want to avoid a disaster altogether, which can be difficult. How do you avoid an earthquake, for example? There are, however, some commonsense steps you can take to keep the full impact of a disaster from affecting your network. The most fundamental is again redundancy: store critical data in at least two very different places, so that if a disaster hits one place, your data are still safe.

Other steps depend on the disaster to be avoided. For example, to avoid the impact of a flood, key network components and data should never be located near rivers or in the basement of a building. To avoid the impact of a tornado, key network components and data should be located underground. To reduce the impact of fire, a fire suppression system should be installed in all key data centers. To reduce the impact of terrorist activities, the location of key network components and data should be kept a secret and should be protected by security guards.

10.4 RECOVERING FROM KATRINA

MANAGEMENT FOCUS

As Hurricane Katrina swept over New Orleans, Ochsner Hospital lost two of its three backup power generators, knocking out air conditioning in the 95-degree heat. Fans were brought out to cool patients, but temperatures inside critical computer and networking equipment reached 150 degrees. Kurt Induni, the hospital's network manager, shut down part of the network and the mainframe with its critical patient records system to ensure they survived the storm. The hospital returned to paper-based record keeping, but Induni managed to keep email alive, which became critical when the telephone system failed and a main fiber line was cut. Email through the hospital's T-3 line into Baton Rouge became the only reliable means of communication. After the storm, the mainframe was turned back on and the patient records were updated.

While Ochsner Hospital remained open, Kindred Hospital was forced to evacuate patients (under military protection from looters and snipers). The patients’ files, all electronic, were simply transferred over the network to other hospitals with no worry about lost records, X-rays, CT scans, and such.

In contrast, the Louisiana court system learned a hard lesson. The court system is administered by each individual parish (i.e., county), and not every parish had a disaster recovery plan or even backups of key documents; many parishes still used old paper files, which were destroyed by the storm. “We've got people in jails all over the state right now that have no paperwork and we have no way to offer them any kind of means for adjudication,” says Freddie Manit, CIO for the Louisiana Ninth Judicial District Court. No paperwork means no prosecution, even for felons with long records, so many prisoners will simply be released. Sometimes losing data is not the worst thing that can happen.

__________

SOURCES: Phil Hochmuth, “Weathering Katrina,” NetworkWorld, September 19, 2005, pp. 1, 20; and M. K. McGee, “Storm Shows Benefits, Failures of Technology,” Informationweek, September 15, 2005, p. 34.

Disaster Recovery A critical element in correcting problems from a disaster is the disaster recovery plan, which should address various levels of response to a number of possible disasters and should provide for partial or complete recovery of all data, application software, network components, and physical facilities. A complete disaster recovery plan covering all these areas is beyond the scope of this text. Figure 10.8 provides a summary of many key issues. A good example of a disaster recovery plan is MIT's business continuity plan at web.mit.edu/security/www/pubplan.htm. Some firms prefer the term business continuity plan.

The most important elements of the disaster recovery plan are backup and recovery controls that enable the organization to recover its data and restart its application software should some portion of the network fail. The simplest approach is to make backup copies of all organizational data and software routinely and to store these backup copies off-site. Most organizations make daily backups of all critical information, with less important information (e.g., email files) backed up weekly. Backups used to be done on tapes that were physically shipped to an off-site location, but more and more, companies are using their WAN connections to transfer data to remote locations (it's faster and cheaper than moving tapes). Backups should always be encrypted (encryption is discussed later in the chapter) to ensure that no unauthorized users can access them.
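
As a concrete illustration of encrypting a backup before it leaves the building, the Python sketch below uses the third-party cryptography package. The file names are assumptions, and real deployments must also manage keys carefully, a topic omitted here.

    from cryptography.fernet import Fernet   # pip install cryptography

    # Encrypt a backup file before shipping it off-site. The key must be
    # stored separately and safely, or the backup is unrecoverable.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    with open("backup.tar", "rb") as f:       # assumed backup file name
        ciphertext = cipher.encrypt(f.read())

    with open("backup.tar.enc", "wb") as f:
        f.write(ciphertext)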

FIGURE 10.8 Elements of a disaster recovery plan

Continuous data protection (CDP) is another option that firms are using in addition to, or instead of, regular backups. With CDP, copies of all data and transactions on selected servers are written to CDP servers as each transaction occurs. CDP is more flexible than traditional backups, which take snapshots of data at specific times, or disk mirroring, which duplicates the contents of a disk from second to second. CDP enables data to be stored miles from the originating server and time-stamps all transactions, enabling organizations to restore data to any specific point in time. For example, suppose a virus brings down a server at 2:45 P.M. The network manager can restore the server to the state it was in at 2:30 P.M. and simply resume operations as though the virus had not hit.
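
The point-in-time restore that CDP provides can be pictured as replaying a time-stamped transaction journal up to a chosen cutoff. A toy Python sketch follows; the timestamps and values are illustrative assumptions.

    # Restore state "as of" a chosen time by replaying a time-stamped
    # journal of transactions, the way CDP restores to a point in time.
    journal = [
        (1000, ("set", "balance", 100)),
        (1500, ("set", "balance", 250)),
        (2000, ("set", "balance", 999)),   # written after the virus hit
    ]

    def restore(as_of: int) -> dict:
        state = {}
        for timestamp, (op, key, value) in journal:
            if timestamp > as_of:
                break
            if op == "set":
                state[key] = value
        return state

    print(restore(as_of=1800))   # {'balance': 250} -- state before the virus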

Backups and CDP ensure that important data are safe, but they do not guarantee the data can be used. The disaster recovery plan should include a documented and tested approach to recovery. The recovery plan should have specific goals for different types of disasters. For example, if the main database server was destroyed, how long should it take the organization to have the software and data back in operation by using the backups? Conversely, if the main data center was completely destroyed, how long should it take? The answers to these questions have very different implications for costs. Having a spare network server or a server with extra capacity that can be used in the event of the loss of the primary server is one thing. Having a spare data center ready to operate within 12 hours (for example) is an entirely different proposition.

Many organizations have a disaster recovery plan, but only a few test their plans. A disaster recovery drill is much like a fire drill in that it tests the disaster recovery plan and provides staff the opportunity to practice little-used skills to see what works and what doesn't work before a disaster happens and the staff must use the plan for real. Without regular disaster recovery drills, the only time a plan is tested is when it must be used. For example, when an island-wide blackout shut down all power in Bermuda, the backup generator in the British Caymanian Insurance office automatically took over and kept the company operating. However, the key-card security system, which was not on the generator, shut down, locking out all employees and forcing them to spend the day at the beach. No one had thought about the security system and the plan had not been tested.

Organizations are usually much better at backing up important data than individual users are. When did you last back up the data on your computer? What would you do if your computer were stolen or destroyed? There is an inexpensive alternative to CDP for home users. Online backup services such as mozy.com enable you to back up the data on your computer to their server on the Internet. You download and install client software that lets you select which folders to back up. After you back up the data for the first time, which takes a while, the software runs every few hours and automatically backs up all changes to the server, so you never have to think about backups again. If you need to recover some or all of your data, you can go to the service's Web site and download it.

Disaster Recovery Outsourcing Most large organizations have a two-level disaster recovery plan. When they build networks they build enough capacity and have enough spare equipment to recover from a minor disaster such as loss of a major server or portion of the network (if any such disaster can truly be called minor). This is the first level. Building a network that has sufficient capacity to quickly recover from a major disaster such as the loss of an entire data center is beyond the resources of most firms. Therefore, most large organizations rely on professional disaster recovery firms to provide this second-level support for major disasters.

Many large firms outsource their disaster recovery efforts by hiring disaster recovery firms that provide a wide range of services. At the simplest, disaster recovery firms provide secure storage for backups. Full services include a complete networked data center that clients can use when they experience a disaster. Once a company declares a disaster, the disaster recovery firm immediately begins recovery operations using the backups stored onsite and can have the organization's entire data network back in operation on the disaster recovery firm's computer systems within hours. Full services are not cheap, but compared to the potential millions of dollars that can be lost per day from the inability to access critical data and application systems, these services quickly pay for themselves in a disaster.

10.5 DISASTER RECOVERY HITS HOME

MANAGEMENT FOCUS

“The building is on fire” were the first words she said as I answered the phone. It was just before noon and one of my students had called me from her office on the top floor of the business school at the University of Georgia. The roofing contractor had just started what would turn out to be the worst fire in the region in more than 20 years although we didn't know it then. I had enough time to gather up the really important things from my office on the ground floor (memorabilia, awards, and pictures from 10 years in academia) when the fire alarm went off. I didn't bother with the computer; all the files were backed up off-site.

Ten hours, 100 firefighters, and 1.5 million gallons of water later, the fire was out. Then our work began. The fire had completely destroyed the top floor of the building, including my 20-computer networking lab. Water had severely damaged the rest of the building, including my office, which, I learned later, had been flooded by almost 2 feet of water at the height of the fire. My computer, and virtually all the computers in the building, were damaged by the water and unusable.

My personal files were unaffected by the loss of the computer in my office; I simply used the backups and continued working—after making new backups and giving them to a friend to store at his house. The Web server I managed had been backed up to another server on the opposite side of campus 2 days before (on its usual weekly backup cycle), so we had lost only 2 days’ worth of changes. In less than 24 hours, our Web site was operational; I had our server's files mounted on the university library's Web server and redirected the university's DNS server to route traffic from our old server address to our new temporary home.

Unfortunately, the rest of our network did not fare as well. Our primary file server had been backed up to tape the night before, and while the tapes were stored off-site, the tape drive was not; the tape drive was destroyed, and no one else on campus had one that could read our tapes. It took 5 days to get a replacement and restore the server. Within 30 days we were operating from temporary offices with a new network, and 90 percent of the office computers and their data had been successfully recovered.

Living through a fire changes a person. I'm more careful now about backing up my files, and I move ever so much more quickly when a fire alarm sounds. But I still can't get used to the rust that is slowly growing on my “recovered” computer.

__________

SOURCE: Alan Dennis

10.4 INTRUSION PREVENTION

Intrusion is the second main type of security problem and the one that tends to receive the most attention. No one wants an intruder breaking into their network.

There are four types of intruders who attempt to gain unauthorized access to computer networks. The first are casual intruders who have only a limited knowledge of computer security. They simply cruise along the Internet trying to access any computer they come across. Their unsophisticated techniques are the equivalent of trying doorknobs, and, until recently, only those networks that left their front doors unlocked were at risk. Unfortunately, there are now a variety of hacking tools available on the Internet that enable even novices to launch sophisticated intrusion attempts. Novice attackers who use such tools are sometimes called script kiddies.

The second type of intruders are experts in security, but their motivation is the thrill of the hunt. They break into computer networks because they enjoy the challenge and enjoy showing off for friends or embarrassing the network owners. These intruders are called hackers and often have a strong philosophy against ownership of data and software. Most cause little damage and make little attempt to profit from their exploits, but those that do can cause major problems. Hackers that cause damage are often called crackers.

The third type of intruder is the most dangerous: professional hackers who break into corporate or government computers for specific purposes, such as espionage, fraud, or intentional destruction. The U.S. Department of Defense (DoD), which routinely monitors attacks against U.S. military targets, had until recently concluded that most attacks came from individuals or small groups of hackers in the first two categories. Although some of their attacks were embarrassing (e.g., defacement of some military and intelligence Web sites), they posed no serious security risks. However, in the late 1990s the DoD noticed a small but growing set of intentional attacks that it classifies as exercises: exploratory attacks designed to test the effectiveness of certain software attack weapons. It therefore established an information warfare program and a new organization, under the U.S. Space Command, responsible for coordinating the defense of military networks.

The fourth type of intruder is also very dangerous: organization employees who have legitimate access to the network but who gain access to information they are not authorized to use. This information could be used for their own personal gain, sold to competitors, or fraudulently changed to give the employee extra income. Many security break-ins are caused by this type of intruder.

The key principle in preventing intrusion is to be proactive. This means routinely testing your security systems before an intruder does. Many steps can be taken to prevent intrusion and unauthorized access to organizational data and networks, but no network is completely safe. The best rule for high security is to do what the military does: Do not keep extremely sensitive data online. Data that need special security are stored in computers isolated from other networks. In the following sections, we discuss the most important security controls for preventing intrusion and for recovering from intrusion when it occurs.

10.4.1 Security Policy

In the same way that a disaster recovery plan is critical to controlling risks due to disruption, destruction, and disaster, a security policy is critical to controlling risk due to intrusion. The security policy should clearly define the important assets to be safeguarded and the important controls needed to do that. It should have a section devoted to what employees should and should not do. Also, it should contain a clear plan for routinely training employees—particularly end-users with little computer expertise—on key security rules and a clear plan for routinely testing and improving the security controls in place (Figure 10.9). A good set of examples and templates is available at www.sans.org/resources/policies.

10.4.2 Perimeter Security and Firewalls

Ideally, you want to stop external intruders at the perimeter of your network, so that they cannot reach the servers inside. There are three basic access points into most networks: the Internet, LANs, and WLANs. Recent surveys suggest that the most common access point for intrusion is the Internet connection (70 percent of organizations experienced an attack from the Internet), followed by LANs and WLANs (30 percent). External intruders are most likely to use the Internet connection, whereas internal intruders are most likely to use the LAN or WLAN. Because the Internet is the most common source of intrusions, the focus of perimeter security is usually on the Internet connection, although physical security is also important.

images

FIGURE 10.9 Elements of a security policy

A firewall is commonly used to secure an organization's Internet connection. A firewall is a router or special-purpose device that examines packets flowing into and out of a network and restricts access to the organization's network. The network is designed so that a firewall is placed on every network connection between the organization and the Internet (Figure 10.10). No access is permitted except through the firewall. Some firewalls have the ability to detect and prevent denial-of-service attacks, as well as unauthorized access attempts. Three commonly used types of firewalls are packet-level firewalls, application-level firewalls, and NAT firewalls.

images

FIGURE 10.10 Using a firewall to protect networks

Packet-Level Firewalls A packet-level firewall examines the source and destination address of every network packet that passes through it. It only allows packets into or out of the organization's networks that have acceptable source and destination addresses. In general, the addresses are examined only at the transport layer (TCP port id) and network layer (IP address). Each packet is examined individually, so the firewall has no knowledge of what packets came before. It simply chooses to permit entry or exit based on the contents of the packet itself. This type of firewall is the simplest and least secure because it does not monitor the contents of the packets or why they are being transmitted, and typically does not log the packets for later analysis.

The network manager writes a set of rules (called an access control list [ACL]) for the packet-level firewall so it knows what packets to permit into the network and what packets to deny entry. Remember that the IP packet contains the source and destination IP addresses and that the TCP segment has the destination port number that identifies the application layer software to which the packet is going. Most application layer software on servers uses standard TCP port numbers. The Web (HTTP) uses port 80, whereas email (SMTP) uses port 25.

Suppose that the organization had a public Web server with an IP address of 128.192.44.44 and an email server with an address of 128.192.44.45 (see Figure 10.11). The network manager wants to make sure that no one outside of the organization can change the contents of the Web server (e.g., by using telnet or FTP). The ACL could be written to include a rule that permits the Web server to receive HTTP packets from the Internet (but other types of packets would be discarded). For example, the rule would say if the source address is anything, the destination IP address is 128.192.44.44 and the destination TCP port is 80, then permit the packet into the network; see the ACL on the firewall in Figure 10.11. Likewise, we could add a rule to the ACL that would permit SMTP packets to reach the email server: If the source address is anything, the destination is 128.192.44.45 and the destination TCP port is 25, then permit the packet through (see Figure 10.11). The last line in the ACL is usually a rule that says to deny entry to all other packets that have not been specifically permitted (some firewalls come automatically configured to deny all packets other than those explicitly permitted, so this command would not be needed). With this ACL, if an external intruder attempted to use telnet (port 23) to reach the Web server, the firewall would deny entry to the packet and simply discard it.
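To make the rule matching concrete, here is a minimal sketch of packet-level filtering in Python. The addresses and ports mirror the Figure 10.11 example; the rule format and function names are illustrative inventions, since real firewalls implement ACLs in specialized hardware and software.

```python
# A minimal sketch of a packet-level (stateless) filter using the
# addresses from Figure 10.11. Rules are checked top to bottom, and
# each packet is judged in isolation, with no memory of earlier packets.

ACL = [
    # (source IP, destination IP, destination TCP port, action)
    ("any", "128.192.44.44", 80, "permit"),   # HTTP to the Web server
    ("any", "128.192.44.45", 25, "permit"),   # SMTP to the email server
    ("any", "any", "any", "deny"),            # default: deny everything else
]

def filter_packet(src_ip, dst_ip, dst_port):
    for rule_src, rule_dst, rule_port, action in ACL:
        if (rule_src in ("any", src_ip)
                and rule_dst in ("any", dst_ip)
                and rule_port in ("any", dst_port)):
            return action
    return "deny"

print(filter_packet("24.1.2.3", "128.192.44.44", 80))  # permit (HTTP)
print(filter_packet("24.1.2.3", "128.192.44.44", 23))  # deny (telnet attempt)
```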

images

FIGURE 10.11 How packet-level firewalls work

Although source IP addresses can be used in the ACL, they often are not used. Most hackers have software that can change the source IP address on the packets they send (called IP spoofing) so using the source IP address in security rules is not usually worth the effort. Some network managers do routinely include a rule in the ACL that denies entry to all packets coming from the Internet that have a source IP address of a subnet inside the organization, because any such packets must have a spoofed address and therefore obviously are an intrusion attempt.

Application-Level Firewalls An application-level firewall is more expensive and more complicated to install and manage than a packet-level firewall, because it examines the contents of the application layer packet and searches for known attacks (see Security Holes later in this chapter). Application-level firewalls have rules for each application they can process. For example, most application-level firewalls can check Web packets (HTTP), email packets (SMTP), and other common protocols. In some cases, special rules must be written by the organization to permit the use of application software it has developed.

Remember from Chapter 5 that TCP uses connection-oriented messaging in which a client first establishes a connection with a server before beginning to exchange data. Application-level firewalls can use stateful inspection, which means that they monitor and record the status of each connection and can use this information in making decisions about what packets to discard as security threats.
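A rough sketch of the stateful idea follows, under the simplifying assumption that tracking (address, port) tuples is enough to recognize a reply; real stateful firewalls also track TCP flags, sequence numbers, and connection timeouts.

```python
# A toy illustration of stateful inspection: an inbound packet is
# accepted only if it reverses a connection that an inside client
# opened first.

connections = set()  # (client IP, client port, server IP, server port)

def record_outbound(src_ip, src_port, dst_ip, dst_port):
    connections.add((src_ip, src_port, dst_ip, dst_port))

def inbound_allowed(src_ip, src_port, dst_ip, dst_port):
    # An inbound packet is a reply if it reverses a recorded connection.
    return (dst_ip, dst_port, src_ip, src_port) in connections

record_outbound("10.3.3.55", 51000, "93.184.216.34", 80)
print(inbound_allowed("93.184.216.34", 80, "10.3.3.55", 51000))  # True: a reply
print(inbound_allowed("6.6.6.6", 80, "10.3.3.55", 51000))        # False: unsolicited
```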

Many application-level firewalls prohibit external users from uploading executable files. In this way, intruders (or authorized users) cannot modify any software unless they have physical access to the firewall. Some refuse changes to their software unless it is done by the vendor. Others also actively monitor their own software and automatically disable outside connections if they detect any changes.

Network Address Translation Firewalls Network address translation (NAT) is the process of converting between one set of public IP addresses that are viewable from the Internet and a second set of private IP addresses that are hidden from people outside of the organization. NAT is transparent, in that no computer knows it is happening. While NAT can be done for several reasons, the most common reason is security. If external intruders on the Internet can't see the private IP addresses inside your organization, they can't attack your computers. Most routers and firewalls today have NAT built into them, even inexpensive routers designed for home use.

The NAT firewall uses an address table to translate the private IP addresses used inside the organization into proxy IP addresses used on the Internet. When a computer inside the organization accesses a computer on the Internet, the firewall changes the source IP address in the outgoing IP packet to its own address. It also sets the source port number in the TCP segment to a unique number that it uses as an index into its address table to find the IP address of the actual sending computer in the organization's internal network. When the external computer responds to the request, it addresses the message to the firewall's IP address. The firewall receives the incoming message, and after ensuring the packet should be permitted inside, changes the destination IP address to the private IP address of the internal computer and changes the TCP port number to the correct port number before transmitting it on the internal network.

This way, systems outside the organization never see the actual internal IP addresses and thus think there is only one computer on the internal network. Most organizations also increase security by using private internal addresses. For example, if the organization has been assigned the Internet 128.192.55.X address domain, the NAT firewall would be assigned an address such as 128.192.55.1. Internal computers, however, would not be assigned addresses in the 128.192.55.X subnet. Instead, they would be assigned addresses such as 10.3.3.55 (addresses in the 10.X.X.X domain are not assigned to organizations but are reserved for use by private intranets). Because these internal addresses are never used on the Internet but are always converted by the firewall, this poses no problems for the users. Even if attackers discover an actual internal IP address, they cannot reach it from the Internet, because private addresses cannot be routed across the Internet.8
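The following toy Python sketch shows the two translations described above; the addresses follow the 128.192.55.1 and 10.3.3.55 example, and the table layout is an illustrative simplification of what real NAT devices keep.

```python
# A toy sketch of the NAT bookkeeping described above: outbound packets
# get the firewall's address and a fresh source port; that port later
# serves as the index for translating replies back.

FIREWALL_IP = "128.192.55.1"
nat_table = {}            # proxy source port -> (private IP, private port)
next_proxy_port = 49152

def translate_outbound(src_ip, src_port):
    global next_proxy_port
    proxy_port = next_proxy_port
    next_proxy_port += 1
    nat_table[proxy_port] = (src_ip, src_port)
    return FIREWALL_IP, proxy_port        # what the Internet sees

def translate_inbound(dst_port):
    return nat_table[dst_port]            # restore the private address/port

print(translate_outbound("10.3.3.55", 51000))  # ('128.192.55.1', 49152)
print(translate_inbound(49152))                # ('10.3.3.55', 51000)
```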

Firewall Architecture Many organizations use layers of NAT, packet-level, and application-level firewalls (Figure 10.12). Packet-level firewalls are used as an initial screen from the Internet into a network devoted solely to servers intended to provide public access (e.g., Web servers, public DNS servers). This network is sometimes called the DMZ (demilitarized zone) because it contains the organization's servers but does not provide complete security for them. This packet-level firewall will permit Web requests and similar access to the DMZ network servers but will deny FTP access to these servers from the Internet because no one except internal users should have the right to modify the servers. Each major portion of the organization's internal networks has its own NAT firewall to grant (or deny) access based on rules established by that part of the organization.

This figure also shows how a packet sent by a client computer inside one of the internal networks protected by a NAT firewall would flow through the network. The packet created by the client has the client's false source address and the source port number of the process on the client that generated the packet (an HTTP packet going to a Web server, as you can tell from the destination port address of 80). When the packet reaches the firewall, the firewall changes the source address on the IP packet to its own address and changes the source port number to an index it will use to identify the client computer's address and port number. The destination address and port number are unchanged. The firewall then sends the packet on its way to the destination. When the destination Web server responds to this packet, it will respond using the firewall's address and port number. When the firewall receives the incoming packets it will use the destination port number to identify what IP address and port number to use inside the internal network, change the inbound packet's destination and port number, and send it into the internal network so it reaches the client computer.

images

FIGURE 10.12 A typical network design using firewalls

Physical Security One important element in preventing unauthorized users from accessing an internal LAN is physical security: preventing outsiders from gaining access to the organization's offices, server room, or network equipment facilities. Both main and remote physical facilities should be secured adequately and have the proper controls. Good security requires implementing the proper access controls so that only authorized personnel can enter closed areas where servers and network equipment are located or access the network. The network components themselves also have a level of physical security. Computers can have locks on their power switches or passwords that disable the screen and keyboard.

In the previous section, we discussed the importance of locating backups and servers at separate (off-site) locations. Some companies have also argued that having many servers in different locations reduces risk and improves business continuity. But does dispersing servers disperse risk, or does it multiply the points of vulnerability? A clear disaster recovery plan with an off-site backup and server facility can disperse risk just as distributed servers do, while distributed servers present many more physical vulnerabilities to an attacker: more machines to guard, upgrade, patch, and defend. Often these dispersed machines are all part of the same logical domain, which means that breaking into one of them can give the attacker access to the resources of the others. It is our feeling that a well-backed-up, centralized data center can be made inherently more secure than a proliferated base of servers.

Proper security education, background checks, and the implementation of error and fraud controls are also very important. In many cases, the simplest means of gaining access is to get hired as a janitor and tap into the network at night. In some ways this is easier than attacking over the Internet, because the intruder only has to insert a listening device or computer into the organization's network to record messages. Three areas are vulnerable to this type of unauthorized access: wireless LANs, network cabling, and network devices.

Wireless LANs are the easiest target for eavesdropping because they often reach beyond the physical walls of the organization. Chapter 6 discussed the techniques of WLAN security, so we do not repeat them here.

Network cables are the next easiest target for eavesdropping because they often run long distances and usually are not regularly checked for tampering. The cables owned by the organization and installed within its facility are usually the first choice for eavesdropping. It is 100 times easier to tap a local cable than it is to tap an interexchange channel because it is extremely difficult to identify the specific circuits belonging to any one organization in a highly multiplexed switched interexchange circuit operated by a common carrier. Local cables should be secured behind walls and above ceilings, and telephone equipment and switching rooms (wiring closets) should be locked and their doors equipped with alarms. The primary goal is to control physical access by employees or vendors to the connector cables and modems. This includes restricting their access to the wiring closets in which all the communication wires and cables are connected.

The type of cable used can also weaken or strengthen security by making eavesdropping easier or more difficult. Obviously, any wireless network is at extreme risk for eavesdropping because anyone in the area of the transmission can easily install devices to monitor the radio or infrared signals. Conversely, fiber-optic cables are harder to tap, thus increasing security. Some companies offer armored cable that is virtually impossible to cut without special tools. Other cables have built-in alarm systems. The U.S. Air Force, for example, uses pressurized cables that are filled with gas. If the cable is cut, the gas escapes, pressure drops, and an alarm sounds.

Network devices such as switches and routers should be secured in a locked wiring closet. As discussed in Chapter 6, all messages within a given local area network are actually received by all computers on the LAN although they only process those messages addressed to them. It is rather simple to install a sniffer program that records all messages received for later (unauthorized) analysis. A computer with a sniffer program could then be plugged into an unattended switch to eavesdrop on all message traffic. A secure switch makes this type of eavesdropping more difficult by requiring a special authorization code to be entered before new computers can be added.

10.3 DATA SECURITY REQUIRES PHYSICAL SECURITY

TECHNICAL FOCUS

The general consensus is that if someone can physically get to your server for some period of time, then all of your information on the computer (except perhaps strongly encrypted data) is available to the attacker.

With a Windows server, the attacker simply boots the computer from the CD drive with a Knoppix version of Linux. (Knoppix is Linux on a CD.) If the computer won't boot from the CD, the attacker simply changes the BIOS settings to make it boot from the CD. Knoppix finds all the drivers for the specific computer and presents a Linux desktop that can fully read all of the NTFS or FAT32 files.

But what about Windows password access? Nothing to it. Knoppix completely bypasses it. The attacker can then read, copy, or transmit any of the files on the Windows machine. Similar attacks are also possible on a Linux or Unix server, but they are slightly more difficult.

10.4.3 Server and Client Protection

Security Holes Even with physical security and firewalls, the servers and client computers on a network may not be safe because of security holes. A security hole is simply a bug that permits unauthorized access. Many commonly used operating systems have major security holes well known to potential intruders. Many security holes have been documented and “patches” are available from vendors to fix them, but network managers may be unaware of all the holes or simply forget to update their systems with new patches regularly.

A complete discussion of security holes is beyond the scope of this book. Many security holes are highly technical; for example, sending a message designed to overflow a memory buffer, thereby placing a short command into a very specific memory area that performs some function. Others are rather simple, but not obvious. For example, the attacker sends a message that lists the server's address as both the sender and the destination, so the server repeatedly sends messages to itself until it crashes.

Once a security hole is discovered, word of it circulates quickly through the Internet, and a race begins: hackers share the discovery with other hackers, while security teams share it with other security teams. CERT is the central clearinghouse for major Internet-related security holes, so the CERT team quickly responds to reports of new security problems and posts alerts and advisories on the Web and emails them to those who subscribe to its service. The developer of the software with the security hole usually works quickly to fix it and produces a patch that corrects the hole. This patch is then shared with customers so they can download and apply it to their systems before hackers exploit the hole to break in. Attacks that take advantage of a newly discovered security hole before a patch is developed are called zero-day attacks. One problem is that many network managers do not respond to such security threats by immediately downloading and installing the patch; it often takes many months for patches to be applied at most sites.9 Do you regularly install all the Windows or Mac updates on your computer?

10.6 FAKE ANTIVIRUS?

MANAGEMENT FOCUS

The world of computer viruses is constantly evolving and becoming more advanced. In the early days of the Internet, viruses were designed to do funny things (such as turn the text on your screen upside down), but today they are designed to steal your money and private information. Once a virus is installed on a computer, it will interact with a remote computer and transfer sensitive data to it. Antivirus software was developed to prevent viruses from being installed on computers. However, not all antivirus software is created equal.

Many antivirus software companies offer to scan your computer for free. Yes, for free! An old saying holds that if something sounds too good to be true, it probably is. Free antivirus software is no exception. Chester Wisniewski, at Sophos Labs, explains that when you download some of these "free antivirus" packages, you have actually downloaded malware. Many of these packages are fully multilingual and have a very user-friendly GUI (graphical user interface) that looks and behaves like a legitimate antivirus product. However, once you start scanning your computer, the software marks legitimate files as worms and Trojans and warns you that your computer is infected. A typical user gets scared at this point and allows the software to "remove" the infected files. What is really happening is that malware is installed on your computer that will scan for sensitive information and send it to a remote host.

Rather than trying to get a free antivirus, spend money on a legitimate product such as Sophos, Symantec, or McAfee. Popular computer magazines, such as PC Magazine, provide annual reviews of commercial antivirus software as well as the legitimate free alternatives. Your best protection against exploits of this kind is education.

__________

SOURCES: http://www.buzzle.com/articles/computer-viruses2010.html and http://www.sophos.com/security/anatomy-of-anattack/

Other security holes are not really holes but simply policies adopted by computer vendors that open the door for security problems, such as computer systems that come with a variety of preinstalled user accounts. These accounts and their initial passwords are well documented and known to all potential attackers. Network managers sometimes forget to change the passwords on these well-known accounts thus enabling an attacker to slip in.

Operating Systems The American government requires certain levels of security in the operating systems and network operating systems it uses for certain applications. The minimum level of security is C2. Most major operating systems (e.g., Windows) provide at least C2. Most widely used systems are striving to meet the requirements of much higher security levels such as B2. Very few systems meet the highest levels of security (A1 and A2).

There has been a long-running debate about whether the Windows operating system is less secure than other operating systems such as Linux. Every new attack on Windows systems reignites the debate; Windows detractors repeat "I told you so," while Windows defenders state that this happens mostly because Windows, as the most commonly used operating system, is the obvious system to attack, and partly because of the hostility of the Windows detractors themselves.

There is a critical difference in what applications can do in Windows and in Linux. Linux (and its ancestor Unix) was first written as a multiuser operating system in which different users had different rights. Only some users were system administrators and had the rights to access and make changes to the critical parts of the operating system. All other users were barred from doing so.

In contrast, Windows (and its ancestor DOS) was first written as an operating system for a single personal computer, an environment in which the user was in complete control of the computer and could do anything he or she liked. As a result, Windows applications regularly access and make changes to critical parts of the operating system. There are advantages to this. Windows applications can do many powerful things without the user needing to understand them. These applications can be very rich in features, and, more important, they can appear to the user to be very friendly and easy to use. Everything appears to run "out of the box" without modification. Microsoft has built these features into the core of Windows. Any major rewrite of Windows to prevent this would most likely cause significant incompatibilities with all applications designed to run under previous versions of Windows. To many, this would be a high price to pay for some unseen benefit called "security."

10.4 EXPLOITING A SECURITY HOLE

TECHNICAL FOCUS

In order to exploit a security hole, the hacker has to know it's there. So how does a hacker find out? It's simple in the era of automated tools.

First, the hacker has to find the servers on a network. The hacker could start by using network scanning software to systematically probe every IP address on a network to find all the servers on the network. At this point, the hacker has narrowed the potential targets to a few servers.

Second, the hacker needs to learn what services are available on each server. To do this, he or she could use port scanning software to systematically probe every TCP/IP port on a given server. This would reveal which ports are in use and thus what services the server offers. For example, if the server has software that responds to port 80, it is a Web server, while if it responds to port 25, it is a mail server.

Third, the hacker would begin to seek out the exact software and version number of the server software providing each service. For example, suppose the hacker decides to target mail servers. There are a variety of tools that can probe the mail server software, and based on how the server software responds to certain messages, determine which manufacturer and version number of software is being used.

Finally, once the hacker knows which package and version number the server is using, the hacker uses tools designed to exploit the known security holes in the software. For example, some older mail server software packages do not require users to authenticate themselves (e.g., by a user id and password) before accepting SMTP packets for the mail server to forward. In this case, the hacker could create SMTP packets with fake source addresses and use the server to flood the Internet with spam (i.e., junk mail). In another case, a certain version of a well-known e-commerce package enabled users to pass operating system commands to the server simply by appending a UNIX pipe symbol (|) and the command to the name of a file to be uploaded; when the system opened the uploaded file, it also executed the command attached to it.
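The port-scanning step described in the box above can be approximated in a few lines of standard-library Python. This is a deliberately simple "connect" scan for illustration only, to be run solely against hosts you are authorized to test; real tools are far faster and stealthier.

```python
# A deliberately simple TCP "connect" scan: try to open a connection to
# each port and record the ones that answer.
import socket

def scan(host, ports, timeout=0.5):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

# Port 80 open suggests a Web server; port 25 suggests a mail server.
print(scan("127.0.0.1", [22, 25, 80, 443]))
```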

10.5 OPEN SOURCE VERSUS CLOSED SOURCE SOFTWARE

TECHNICAL FOCUS

“A cryptographic system should still be secure if everything is known about it except its key. You should not base the security of your system upon its obscurity.”—Auguste Kerckhoffs (1883).

Auguste Kerckhoffs was a Flemish cryptographer and linguist who studied military communications during the Franco-Prussian War. He observed that neither side could depend on hiding their telegraph lines and equipment from the other side because the enemy would find the hidden telegraph lines and tap into the communications. One could not rely on their system being obscure. In 1948, Claude Shannon of Bell Labs extended Kerckhoffs’ Law when he said, “Always assume that the enemy knows your system.” Cryptographers and military colleges teach Kerckhoffs’ and Shannon's laws as fundamental rules in information security.

How does this apply to computer security? There are a few basics that we should understand first: Programmers write their code in human-readable source code, which is then compiled to produce binary object code (i.e., zeros and ones); very few people can read binary code. For-profit developers do not release their source code when they sell software; they only release the binary object code. This closed source code is their proprietary “crown jewels,” to be jealously guarded. In contrast, open source software is not-for-profit software in which the source code is provided along with the binary object code so that other developers can read the code and write new features or find and fix bugs.

So, does this mean that closed source is safer than open source because no one can see any bugs or security holes that might be hidden in the source code? No. With closed source, there is the temptation to use “security via obscurity.” The history of security holes is that they become well known. Why? First, because there may be literally hundreds of people with access to the source code. Some of those people come and go. Some take the code with them. And some talk to others, who post it on the Internet.

And then there are the decompilers. A decompiler converts binary object code back into source code. Decompilers do not produce exact copies of the original source code, but they are getting better and better. With their use, attackers can better guess where the security holes are.

There is also a tendency within the closed source community to rely on the source code being hidden as a line of defense. In effect, the users drop their guard, falsely thinking that they are safe behind the obscurity of hidden code. The open source community has far more people able to examine the code than any closed source system. One of the tenets of the open source community is “No bug is too obscure or difficult for a million eyes.”

Also, the motives of the developers are different. Open source coders generally do not write for profit. Closed source developers are inevitably writing for profit. With the profit motive comes more pressure to release software quickly to “beat the market.” Rushing code to market is one of the surest ways of releasing flawed code. This pressure does not exist in the open source world since no one is going to make much money on it anyway.

Can there be secure closed source software? Yes. But the developers must be committed to security from the very beginning of development. By most reasonable measures, open source software has been and continues to be more secure than closed source software. This is what Auguste Kerckhoffs would have predicted.

But there is a price for this friendliness. Hostile applications can easily take over the computer and literally do whatever they want without the user knowing. Simply put, there is a tradeoff between ease of use and security. Increasing needs for security demand more checks and restrictions, which translates into less friendliness and fewer features. It may very well be that there is an inherent and permanent contradiction between the ease of use of a system and its security.

Trojan Horses One important tool for gaining unauthorized access is a Trojan horse. Trojans are remote access management consoles (sometimes called rootkits) that enable users to access a computer and manage it from afar. If you see free software that will enable you to control your computer from anywhere, be careful; the software may also permit an attacker to control your computer from anywhere! Trojans are most often concealed in other software that unsuspecting users download over the Internet (their name alludes to the original Trojan horse). Music and video files shared on Internet music sites are common carriers of Trojans. When the user downloads and plays a music file, it plays normally while the attached Trojan software silently installs a small program that enables the attacker to take complete control of the user's computer; the user is unaware that anything bad has happened. The attacker then simply connects to the user's computer and has the same access and controls as the user. Many Trojans are completely undetectable by even the best antivirus software.

One of the first major Trojans was Back Orifice, which aggressively attacked Windows servers. Back Orifice gave the attacker the same functions as the administrator of the infected server, and then some: complete file and network control, device and registry access, with packet and application redirection. It was every administrator's worst nightmare, and every attacker's dream.

More recently, Trojans have morphed into tools such as MoSucker and Optix Pro. These attack consoles now have one-button clicks to disable firewalls, antivirus software, and any other defensive process that might be running on the victim's computer. The attacker can choose what port the Trojan runs on, what it is named, and when it runs. They can listen in to a computer's microphone or look through an attached camera—even if the device appears to be off. Figure 10.13 shows a menu from one Trojan that illustrates some of the “fun stuff” that an attacker can do, such as opening and closing the CD tray, beeping the speaker, or reversing the mouse buttons so that clicking on the left button actually sends a right click.

Not only have these tools become powerful, but they are also very easy to use—much easier to use than the necessary defensive countermeasures to protect oneself from them. And what does the near future hold for Trojans? We can easily envision Trojans that schedule themselves to run at, say 2:00 A.M., choosing a random port, emailing the attacker that the machine is now “open for business” at port # NNNNN. The attackers can then step in, do whatever they want to do, run a script to erase most of their tracks, and then sign out and shut off the Trojan. Once the job is done, the Trojan could even erase itself from storage. Scary? Yes. And the future does not look better.

images

FIGURE 10.13 One menu on the control console for the Optix Pro Trojan

Spyware, adware, and DDoS agents are three types of Trojans. DDoS agents were discussed in the previous section. As the name suggests, spyware monitors what happens on the target computer. Spyware can record keystrokes that appear to be user IDs and passwords so the intruder can gain access to the user's accounts (e.g., bank accounts). Adware monitors a user's actions and displays pop-up advertisements on the user's screen. For example, suppose you clicked on the Web site of an online retailer. Adware might pop up a window for a competitor or, worse still, redirect your browser to the competitor's Web site. Many antivirus packages now routinely search for and remove spyware, adware, and other Trojans, and special-purpose antispyware software is available (e.g., Spybot). Some firewall vendors are also adding anti-Trojan logic to their devices to block transmissions from infected computers entering or leaving their networks.

10.4.4 Encryption

One of the best ways to prevent intrusion is encryption, which is a means of disguising information by the use of mathematical rules known as algorithms.10 Actually, cryptography is the more general and proper term. Encryption is the process of disguising information, whereas decryption is the process of restoring it to readable form. When information is in readable form, it is called plaintext; when in encrypted form, it is called ciphertext. Encryption can be used to encrypt files stored on a computer or to encrypt data in transit between computers.11

10.7 SONY'S SPYWARE

MANAGEMENT FOCUS

Sony BMG Entertainment, the music giant, included a spyware rootkit on audio CDs sold in the fall of 2005, including CDs by such artists as Celine Dion, Frank Sinatra, and Ricky Martin. The rootkit was automatically installed on any PC that played the infected CD. The rootkit was designed to track the behavior of users who might be illegally copying and distributing the music on the CD, with the goal of preventing illegal copies from being widely distributed.

Sony made two big mistakes. First, it failed to inform customers who purchased its CDs about the rootkit, so users unknowingly installed it. The rootkit used standard spyware techniques to conceal its existence and prevent users from discovering it. Second, Sony used a widely available rootkit, which meant that any knowledgeable user on the Internet could use it to take control of an infected computer. Several viruses that exploit the rootkit have been written and are now circulating on the Internet. The irony is that the rootkit infringed on copyrights held by several open source projects, which means Sony was engaged in the very act it was trying to prevent: piracy.

When the rootkit was discovered, Sony was slow to apologize, slow to stop selling rootkit-infected CDs, and slow to help customers remove the rootkit. Several lawsuits were filed in the United States and abroad seeking damages. The Federal Trade Commission (FTC) found on January 30, 2007, that Sony BMG's CD copy protection had violated federal law. Sony BMG had to reimburse consumers up to $150 to repair damage caused by the illegal software installed on users' computers without their consent. The adventure proved very costly for Sony BMG.

__________

SOURCES: J.A. Halderman and E.W. Felten, “Lessons from the Sony CD DRM Episode,” working paper, Princeton University, 2006; “Sony Anti-Customer Technology Roundup and Time-Line,” www.boingboing.net, February 15, 2006; and Wikipedia.com.

There are two fundamentally different types of encryption: symmetric and asymmetric. With symmetric encryption, the key used to encrypt a message is the same as the one used to decrypt it. With asymmetric encryption, the key used to decrypt a message is different from the key used to encrypt it.

Single Key Encryption Symmetric encryption (also called single-key encryption) has two parts: the algorithm and the key, which personalizes the algorithm by making the transformation of data unique. Two pieces of identical information encrypted with the same algorithm but with different keys produce completely different ciphertexts. With symmetric encryption, the communicating parties must share the one key. If the algorithm is adequate and the key is kept secret, acquisition of the ciphertext by unauthorized personnel is of no consequence to the communicating parties.

Good encryption systems do not depend on keeping the algorithm secret. Only the keys need to be kept secret. The key is a relatively small numeric value (in terms of the number of bits). The larger the key, the more secure the encryption because large “key space” protects the ciphertext against those who try to break it by brute-force attacks—which simply means trying every possible key.

There should be a large enough number of possible keys that an exhaustive brute-force attack would take inordinately long or would cost more than the value of the encrypted information.
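A quick back-of-the-envelope calculation shows why key length dominates brute-force attacks. The attack rate below (one trillion keys per second) is an assumed figure chosen only for illustration:

```python
# Back-of-the-envelope brute-force arithmetic. The attack rate is an
# assumed figure (10^12 keys per second) chosen only for illustration.
RATE = 10**12
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for bits in (56, 128, 256):
    keys = 2**bits
    years = keys / RATE / SECONDS_PER_YEAR
    print(f"{bits}-bit key: about {years:.1e} years to try every key")

# 56-bit key:  about 2.3e-03 years (roughly a day, even at this rate)
# 128-bit key: about 1.1e+19 years
# 256-bit key: about 3.7e+57 years
```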

Because the same key is used to encrypt and decrypt, symmetric encryption can cause problems with key management; keys must be shared among the senders and receivers very carefully. Before two computers in a network can communicate using encryption, both must have the same key, which means both computers can then send and read any messages that use that key. A company usually does not want one business partner to be able to read the messages it sends to another, so a separate key must be used for communication with each partner. These keys must be recorded but kept secure so that they cannot be stolen. Because the algorithm is known publicly, the disclosure of a key means the total compromise of the messages it encrypted. Managing this system of keys can be challenging.

10.8 TROJANS AT HOME

MANAGEMENT FOCUS

It started with a routine phone call to technical support—one of our users had a software package that kept crashing. The network technician was sent to fix the problem but couldn't, so thoughts turned to a virus or Trojan. After an investigation, the security team found a remote FTP Trojan installed on the computer that was storing several gigabytes of cartoons and making them available across the Internet. The reason for the crash was that the FTP server was an old version that was not compatible with the computer's operating system. The Trojan was removed and life went on.

Three months later the same problem occurred on a different computer. Because the previous Trojan had been logged, the network support staff quickly recognized it as a Trojan. The same hacker had returned, storing the same cartoons on a different computer. This triggered a complete investigation. All computers on our Business School network were scanned and we found 15 computers that contained the Trojan. We gathered forensic evidence to help identify the attacker (e.g., log files, registry entries) and filed an incident report with the University incident response team advising them to scan all computers on the university network immediately.

The next day, we found more computers containing the same FTP Trojan and the same cartoons. The attacker had come back overnight and taken control of more computers. This immediately escalated the problem. We cleaned some of the machines but left some available for use by the hacker to encourage him not to attack other computers. The network security manager replicated the software and used it to investigate how the Trojan worked. We determined that the software used a brute force attack to break the administrative password file on the standard image that we used in our computer labs. We changed the password and installed a security patch to our lab computer's standard configuration. We then upgraded all the lab computers and only then cleaned the remaining machines controlled by the attacker.

The attacker had also taken over many other computers on campus for the same purpose. With the forensic evidence that we and the university security incident response team had gathered, the case is now in court.

__________

SOURCE: Alan Dennis

One commonly used symmetric encryption technique is the Data Encryption Standard (DES), which was developed in the mid-1970s by the U.S. government in conjunction with IBM. DES is standardized by the National Institute of Standards and Technology (NIST). The most common form of DES uses a 56-bit key, which experts with the right tools can break in less than a day; that is, they can figure out what a DES-encrypted message says without knowing the key in under 24 hours. DES is no longer recommended for data needing high security, although some companies continue to use it for less important data.

Triple DES (3DES) is a newer standard that is harder to break. As the name suggests, it applies DES three times, usually with three different keys, which yields a stronger level of security because the total key length is 168 bits (3 × 56 bits).12

The NIST's new standard, called Advanced Encryption Standard (AES), has replaced DES. AES has key sizes of 128, 192, and 256 bits. NIST estimates that, using the most advanced computers and techniques available today, it will require about 150 trillion years to crack AES by brute force. As computers and techniques improve, the time requirement will drop, but AES seems secure for the foreseeable future; the original DES lasted 20 years, so AES may have a similar life span.
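As an illustration of single-key encryption in practice, the following sketch uses the third-party Python cryptography package (an assumption; any AES implementation would behave similarly). Note that one 256-bit key both encrypts and decrypts, which is exactly why the key must be shared and kept secret:

```python
# Single-key (symmetric) AES encryption, sketched with the third-party
# "cryptography" package (pip install cryptography). The same key both
# encrypts and decrypts, so it must be shared secretly in advance.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # the shared secret key
nonce = os.urandom(12)                      # unique per message, not secret
plaintext = b"Wire $500,000 to account 12345"

ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
recovered = AESGCM(key).decrypt(nonce, ciphertext, None)
assert recovered == plaintext  # without the key, only ciphertext is visible
```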

Another commonly used symmetric encryption algorithm is RC4, developed by Ron Rivest of RSA Data Security, Inc. RC4 can use a key up to 256 bits long but most commonly uses a 40-bit key. It is faster to use than DES but suffers from the same problems from brute-force attacks: Its 40-bit key can be broken by a determined attacker in a day or two.

Today, the U.S. government considers encryption to be a weapon and regulates its export in the same way it regulates the export of machine guns or bombs. Present rules prohibit the export of encryption techniques with keys longer than 64 bits without permission, although exports to Canada and the European Union are permitted, and American banks and Fortune 100 companies are now permitted to use more powerful encryption techniques in their foreign offices. This policy made sense when only American companies had the expertise to develop powerful encryption software. Today, however, many non-American companies are developing encryption software that is more powerful than American software, which is limited by these rules. Therefore, the American software industry is lobbying the government to change the rules so that American companies can compete successfully overseas.13

Public Key Encryption The most popular form of asymmetric encryption (also called public key encryption) is RSA, which was invented at MIT in 1977 by Rivest, Shamir, and Adleman, who founded RSA Data Security in 1982.14 The patent expired in 2000, so many new companies entered the market and public key software dropped in price. The RSA technique forms the basis for today's public key infrastructure (PKI).

Public key encryption is inherently different from symmetric single-key systems like DES. Because public key encryption is asymmetric, there are two keys. One key (called the public key) is used to encrypt the message, and a second, very different private key is used to decrypt the message. Keys are often 512, 1,024, or 2,048 bits in length.

Public key systems are based on one-way functions. Even though you originally know both the contents of your message and the public encryption key, once the message is encrypted by the one-way function, it cannot be decrypted without the private key. One-way functions are relatively easy to calculate in one direction but computationally infeasible to “uncalculate” in the reverse direction. Public key encryption is one of the most secure encryption techniques available, excluding special encryption techniques developed by national security agencies.

Public key encryption greatly reduces the key management problem. Each user has its public key that is used to encrypt messages sent to it. These public keys are widely publicized (e.g., listed in a telephone book-style directory)—that's why they're called “public” keys. In addition, each user has a private key that decrypts only the messages that were encrypted by its public key. This private key is kept secret (that's why it's called the “private” key). The net result is that if two parties wish to communicate with one another, there is no need to exchange keys beforehand. Each knows the other's public key from the listing in a public directory and can communicate encrypted information immediately. The key management problem is reduced to the on-site protection of the private key.

Figure 10.14 illustrates how this process works. All public keys are published in a directory. When Organization A wants to send an encrypted message to Organization B, it looks through the directory to find B's public key. It then encrypts the message using B's public key. This encrypted message is sent through the network to Organization B, which decrypts the message using its private key.
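The same flow can be sketched in Python with the third-party cryptography package; the message text, key size, and padding choices are illustrative, not prescriptive:

```python
# The Figure 10.14 flow: A encrypts with B's published public key, and
# only B's private key can decrypt. Key size and OAEP padding are
# typical modern choices.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Organization B generates a key pair and publishes the public half.
b_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
b_public = b_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = b_public.encrypt(b"Order 1,000 widgets", oaep)  # A's step
plaintext = b_private.decrypt(ciphertext, oaep)              # B's step
assert plaintext == b"Order 1,000 widgets"
```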

Authentication Public key encryption also permits the use of digital signatures through a process of authentication. When one user sends a message to another, it is difficult to legally prove who actually sent the message. Legal proof is important in many communications, such as bank transfers and buy/sell orders in currency and stock trading, which normally require legal signatures. Public key encryption algorithms are invertible, meaning that text encrypted with either key can be decrypted by the other. Normally, we encrypt with the public key and decrypt with the private key. However, it is possible to do the inverse: encrypt with the private key and decrypt with the public key. Because the private key is secret, only the real user could use it to encrypt a message. Thus, a digital signature or authentication sequence is used as a legal signature on many financial transactions. This signature is usually the name of the signing party plus other key contents such as unique information from the message (e.g., date, time, or dollar amount). The signature and the other key contents are encrypted by the sender using the sender's private key. The receiver uses the sender's public key to decrypt the signature block and compares the result to the name and other key contents in the rest of the message to ensure a match.
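A minimal sketch of signing and verification follows, again using the third-party cryptography package with illustrative message contents and parameters. The private key signs; anyone with the public key can verify:

```python
# A digital signature: A signs with its PRIVATE key, and anyone holding
# A's PUBLIC key can verify the signature.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

a_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
a_public = a_private.public_key()

message = b"Sell 100 shares at $25; 2:30 P.M., June 1"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

signature = a_private.sign(message, pss, hashes.SHA256())

try:
    a_public.verify(signature, message, pss, hashes.SHA256())
    print("Valid: signed by the holder of A's private key")
except InvalidSignature:
    print("Invalid: message altered or not signed by A")
```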

images

FIGURE 10.14 Secure transmission with public key encryption

Figure 10.15 illustrates how authentication can be combined with public key encryption to provide a secure and authenticated transmission with a digital signature. The plaintext message is first encrypted using Organization A's private key and then encrypted using Organization B's public key. It is then transmitted to B. Organization B first decrypts the message using its private key. It sees that part of the message (the key contents) is still in ciphertext, indicating it is an authenticated message. B then decrypts the key contents part of the message using A's public key to produce the plaintext message. Because only A has the private key that matches A's public key, B can safely assume that A sent the message.

images

FIGURE 10.15 Authenticated and secure transmission with public key encryption

The only problem with this approach lies in ensuring that the person or organization who sent the document with the correct private key is actually the person or organization they claim to be. Anyone can post a public key on the Internet, so there is no way of knowing for sure who they actually are. For example, it would be possible for someone to create a Web site and claim to be “Organization A” when in fact they are really someone else.

This is where the Internet's public key infrastructure (PKI) becomes important.15 The PKI is a set of hardware, software, organizations, and policies designed to make public key encryption work on the Internet. PKI begins with a certificate authority (CA), which is a trusted organization that can vouch for the authenticity of the person or organization using authentication (e.g., VeriSign). A person wanting to use a CA registers with the CA and must provide some proof of identity. There are several levels of certification, ranging from a simple confirmation of a valid email address to a complete police-style background check with an in-person interview. The CA issues a digital certificate, which is the requestor's public key encrypted using the CA's private key, as proof of identity. This certificate is then attached to the user's email or Web transactions, in addition to the authentication information. The receiver then verifies the certificate by decrypting it with the CA's public key and must also contact the CA to ensure that the user's certificate has not been revoked by the CA.

For higher security certifications, the CA requires that a unique “fingerprint” be issued by the CA for each message sent by the user. The user submits the message to the CA, who creates the unique fingerprint by combining the CA's private key with the message's authentication key contents. Because the user must obtain a unique fingerprint for each message, this ensures that the CA has not revoked the certificate between the time it was issued and the time the message was sent by the user.
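
A minimal sketch of the receiver's verification step is shown below, under stated assumptions: the file names are hypothetical, the certificates are assumed to be RSA-signed PEM files, and the revocation check (contacting the CA) is omitted.

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

# Load the CA's certificate (containing its public key) and the
# digital certificate the sender attached to the message.
ca_cert = x509.load_pem_x509_certificate(open("ca.pem", "rb").read())
user_cert = x509.load_pem_x509_certificate(open("user.pem", "rb").read())

# Verify that the CA's private key really signed the user's
# certificate; this raises InvalidSignature if it did not.
ca_cert.public_key().verify(
    user_cert.signature,
    user_cert.tbs_certificate_bytes,
    padding.PKCS1v15(),                      # assumes RSA signatures
    user_cert.signature_hash_algorithm,
)
print("Certificate issued to:", user_cert.subject.rfc4514_string())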

Encryption Software Pretty Good Privacy (PGP) is a freeware public key encryption package developed by Philip Zimmermann that is often used to encrypt email. Users post their public key on Web pages, for example, and anyone wishing to send them an encrypted message simply cuts and pastes the key off the Web page into the PGP software, which encrypts and sends the message.16

Secure Sockets Layer (SSL) is an encryption protocol widely used on the Web. It operates between the application layer software and the transport layer (in what the OSI model calls the presentation layer). SSL encrypts outbound packets coming out of the application layer before they reach the transport layer and decrypts inbound packets coming out of the transport layer before they reach the application layer. With SSL, the client and the server start with a handshake for PKI authentication and for the server to provide its public key and preferred encryption technique to the client (usually RC4, DES, 3DES, or AES). The client then generates a key for this encryption technique, which is sent to the server encrypted with the server's public key. The rest of the communication then uses this encryption technique and key.
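
Python's standard library exposes this handshake directly; the short sketch below is illustrative, not from the book. (Modern Python actually negotiates TLS, SSL's successor, but the handshake structure matches the description above.)

import socket
import ssl

context = ssl.create_default_context()   # verifies the server's PKI certificate

with socket.create_connection(("example.com", 443)) as raw:
    # wrap_socket performs the handshake: the certificate check, cipher
    # negotiation, and session key exchange all happen here.
    with context.wrap_socket(raw, server_hostname="example.com") as tls:
        print("Protocol:", tls.version())   # e.g., TLSv1.3
        print("Cipher:", tls.cipher())      # the negotiated bulk cipher
        tls.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(tls.recv(200))                # first bytes of the reply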

IP Security Protocol (IPSec) is another widely used encryption protocol. IPSec differs from SSL in that SSL is focused on Web applications, whereas IPSec can be used with a much wider variety of application layer protocols. IPSec sits between IP at the network layer and TCP/UDP at the transport layer. IPSec can use a wide variety of encryption techniques so the first step is for the sender and receiver to establish the technique and key to be used. This is done using Internet Key Exchange (IKE). Both parties generate a random key and send it to the other using an encrypted authenticated PKI process, and then put these two numbers together to produce the key.17 The encryption technique is also negotiated between the two, often being 3DES. Once the keys and technique have been established, IPSec can begin transmitting data.

IP Security Protocol can operate in either transport mode or tunnel mode for VPNs. In IPSec transport mode, IPSec encrypts just the IP payload, leaving the IP packet header unchanged so it can be easily routed through the Internet. In this case, IPSec adds an additional packet (either an Authentication Header [AH] or an Encapsulating Security Payload [ESP]) at the start of the IP packet that provides encryption information for the receiver.

In IPSec tunnel mode, IPSec encrypts the entire IP packet and must therefore add an entirely new IP packet that contains the encrypted packet, as well as the IPSec AH or ESP packets. In tunnel mode, the newly added IP packet just identifies the IPSec encryption agent at the next destination, not the final destination; once the IPSec packet arrives at the encryption agent, the encrypted packet is decrypted and sent on its way. In tunnel mode, attackers can only learn the endpoints of the VPN tunnel, not the ultimate source and destination of the packets.

10.4.5 User Authentication

Once the network perimeter and the network interior have been secured, the next step is to develop a way to ensure that only authorized users are permitted into the network and into specific resources in the interior of the network. This is called user authentication.

The basis of user authentication is the user profile for each user's account that is assigned by the network manager. Each user's profile specifies what data and network resources he or she can access, and the type of access (read only, write, create, delete).

User profiles can limit the allowable log-in days, time of day, physical locations, and the allowable number of incorrect log-in attempts. Some systems will also automatically log a user out if that person has not performed any network activity for a certain length of time (e.g., the user has gone to lunch and has forgotten to log off the network). Regular security checks throughout the day can determine whether a user is still permitted access to the network. For example, the network manager might have disabled the user's profile while the user is logged in, or the user's account may have run out of funds.
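
As a concrete illustration, here is a toy profile check in Python. Everything in it (field names, limits, resources) is hypothetical; real network operating systems store profiles in their own directory services.

import datetime

PROFILE = {
    "user": "jsmith",
    "access": {"/sales/orders": "read", "/sales/reports": "write"},
    "allowed_days": {0, 1, 2, 3, 4},      # Monday through Friday
    "allowed_hours": range(7, 19),        # 7 a.m. to 7 p.m.
    "max_failed_logins": 3,
}

def may_log_in(profile, failed_logins, now=None):
    """Enforce the log-in limits described above."""
    now = now or datetime.datetime.now()
    return (failed_logins < profile["max_failed_logins"]
            and now.weekday() in profile["allowed_days"]
            and now.hour in profile["allowed_hours"])

def may_access(profile, resource, mode):
    """Check the type of access (read or write) the profile grants."""
    granted = profile["access"].get(resource)
    return granted == mode or (granted == "write" and mode == "read")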

Creating accounts and profiles is simple. When a new staff member joins an organization, that person is assigned a user account and profile. One security problem is the removal of user accounts when someone leaves an organization. Often, network managers are not informed of the departure and accounts remain in the system. For example, an examination of the user accounts at the University of Georgia found 30 percent belonged to staff members no longer employed by the university. If the staff member's departure was not friendly, there is a risk that he or she may attempt to access data and resources and use them for personal gain, or destroy them to “get back at” the organization. Many systems permit the network manager to assign expiration dates to user accounts to ensure that unused profiles are automatically deleted or deactivated, but these actions do not replace the need to notify network managers about an employee's departure as part of the standard Human Resources procedures.

10.6 CRACKING A PASSWORD

TECHNICAL FOCUS

To crack Windows passwords, you just need to get a copy of the security account manager (SAM) file in the WINNT directory, which contains all the Windows passwords in an encrypted format. If you have physical access to the computer, that's sufficient. If not, you might be able to hack in over the network. Then you just need to use a Windows-based cracking tool such as L0phtCrack. Depending on the difficulty of the password, the time needed to crack it via brute force could range from minutes to a day.

Or that's the way it used to be. Recently, the Cryptography and Security Lab in Switzerland developed a new password-cracking tool that relies on very large amounts of RAM. It does indexed searches of possible passwords that are already in memory. This tool can cut cracking times to less than one-tenth of the time of previous tools. Keep adding RAM and processing speed, and you could reduce the crack times to one-hundredth that of the older cracking tools. This means that if you can get your hands on the Windows-encrypted password file, then the game is over. It can literally crack complex Windows passwords in seconds.

It's different for Linux, Unix, or Apple computers. These systems add a 12-bit random “salt” to the password, which means that cracking their passwords will take 4,096 (2^12) times longer. That margin is probably sufficient for now, until the next generation of cracking tools comes along. Maybe.
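
The arithmetic is easy to see in a sketch. This toy Python example (with SHA-256 standing in for the older Unix crypt function) shows why a 12-bit salt multiplies a precomputed table's size by 4,096: the same password hashes differently under every salt value.

import hashlib
import secrets

def store_password(password):
    """Hash a password with a random 12-bit salt, as described above."""
    salt = secrets.randbelow(2 ** 12)     # one of 4,096 possible salts
    digest = hashlib.sha256(f"{salt}:{password}".encode()).hexdigest()
    return salt, digest                   # the salt is stored in the clear

def check_password(password, salt, digest):
    return hashlib.sha256(f"{salt}:{password}".encode()).hexdigest() == digest

# A precomputed table now needs 4,096 entries per candidate password:
print(store_password("hapwicac"))         # different output every run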

So what can we say from all of this? That you are 4,096 times safer with Linux? Well, not necessarily. But what we may be able to say is that strong password protection, by itself, is an oxymoron. We must combine it with other methods of security to have reasonable confidence in the system.

Gaining access to an account can be based on something you know, something you have, or something you are.

Passwords The most common approach is something you know, usually a password. Before users can log in, they need to enter a password. Unfortunately, passwords are often poorly chosen, enabling intruders to guess them and gain access. Some organizations now require that users choose passwords that meet certain security requirements, such as a minimum length or the inclusion of numbers and/or special characters (e.g., $, #, !). Some have moved to passphrases, which, as the name suggests, are a series of words separated by spaces. Requiring complex passwords and passphrases has also been called one of the top five least effective security controls because it can frustrate users and lead them to record their passwords in places from which they can be stolen.
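
A composition rule of this kind reduces to a few tests. The thresholds below are hypothetical examples, not a recommendation from the chapter.

import string

def acceptable(password):
    """Minimum length plus at least one digit and one special character."""
    return (len(password) >= 8
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password))

print(acceptable("hunter2"))      # False: too short, no special character
print(acceptable("1hapwic,&c"))   # True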

Access Cards Requiring passwords provides, at best, midlevel security (much like locking your doors when you leave the house); it won't stop the professional intruder, but it will slow amateurs. Nonetheless, most organizations today use only passwords. About a third of organizations go beyond this and require users to enter a password in conjunction with something they have, an access card. A smart card is a card about the size of a credit card that contains a small computer chip. The card is read by a device, and to gain access to the network, the user must present both the card and the password. Intruders must have access to both before they can break in. The best example of this is the automated teller machine (ATM) network operated by your bank. Before you can gain access to your account, you must have both your ATM card and the access number.

Another approach is to use one-time passwords. The user connects to the network as usual, and after the user's password is accepted, the system generates a one-time password. The user must enter this password to gain access; otherwise, the connection is terminated. The user can receive this one-time password in a number of ways (e.g., via a pager). Other systems provide the user with a unique number that must be entered into a separate handheld device (called a token), which in turn displays the password for the user to enter. Still other systems use time-based tokens in which the one-time password changes every 60 seconds. The user has a small card (often attached to a key chain) that is synchronized with the server and displays the current one-time password. With any of these systems, an attacker must know the user's account name and password and have access to the user's password device before he or she can log in.
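
Time-based tokens are simple enough to sketch. The following is a minimal RFC 6238-style illustration, not the algorithm of any particular commercial token: the card and the server share a secret, and both derive the same six-digit code from the current 60-second window.

import hashlib
import hmac
import struct
import time

def time_based_password(secret, interval=60):
    counter = int(time.time()) // interval          # current 60-second window
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                         # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 10**6:06d}"

shared = b"secret-provisioned-into-the-token"      # hypothetical shared secret
print(time_based_password(shared))                  # changes every 60 seconds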

Biometrics In high-security applications, a user may be required to present something they are, such as a finger, hand, or the retina of their eye for scanning by the system. These biometric systems scan the user to ensure that the user is the sole individual authorized to access the network account. About 15 percent of organizations now use biometrics. While most biometric systems are developed for high-security users, several low-cost biometric systems are now on the market. The most popular biometric system is the fingerprint scanner. Several vendors sell devices the size of a mouse that can scan a user's fingerprint for less than $100. Some laptops now come with built-in fingerprint scanners that replace traditional Windows logins. While some banks have begun using fingerprint devices for customer access to their accounts over the Internet, use of such devices has not become widespread, which we find a bit puzzling. The fingerprint is unobtrusive and means users no longer have to remember arcane passwords.

10.9 SELECTING PASSWORDS

MANAGEMENT FOCUS

Passwords are the keys to users' accounts; each account has a unique password chosen by the user. The problem is that passwords are often chosen poorly and not changed regularly. Many network managers require users to change passwords periodically (e.g., every 90 days), but this does not ensure that users choose “good” passwords.

A good password is one that the user finds easy to remember, but is difficult for potential intruders to guess. Several studies have found that about three-quarters of passwords fall into one of four categories:

  • Names of family members or pets
  • Important numbers in the user's life (e.g., SSN or birthday)
  • Words in a dictionary, whether an English or other language dictionary (e.g., cat, hunter, supercilious, gracias, ici)
  • Keyboard patterns (e.g., QWERTY, ASDF)

The best advice is to avoid these categories because such passwords can be easily guessed.

Better choices are passwords that:

  • Are meaningful to the user but no one else
  • Are at least seven characters long
  • Are made of two or more words that have several letters omitted (e.g., PPLEPI [apple pie]) or are the first letters of the words in a phrase that is not in common usage (e.g., no song lyrics), such as hapwicac (hot apple pie with ice cream and cheese)
  • Include characters such as numbers or punctuation marks in the middle of the password (e.g., 1hapwic,&c for one hot apple pie with ice cream, and cheese)
  • Include some uppercase and lowercase letters (e.g., 1HAPwic,&c)
  • Substitute numbers for certain letters that are similar, such as using a 0 instead of an O, a 1 instead of an I, a 2 instead of a Z, a 3 instead of an E, and so on (e.g., 1HAPw1c,&c)

For more information, see www.securitystats.com/tools/password.asp.

Central Authentication One long-standing problem has been that users are often assigned user profiles and passwords on several different computers. Each time a user wants to access a new server, he or she must supply his or her password. This is cumbersome for the users, and even worse for the network manager who must manage all the separate accounts for all the users.

More and more organizations are adopting central authentication (also called network authentication, single sign-on, or directory services), in which a log-in server is used to authenticate the user. Instead of logging into a file server or application server, the user logs into the authentication server. This server checks the user id and password against its database and, if the user is authorized, issues a certificate (also called credentials). Whenever the user attempts to access a restricted service or resource that requires a user id and password, the user is challenged, and his or her software presents the certificate (which the authentication server revalidates at that time). If the authentication server validates the certificate, the service or resource lets the user in. In this way, the user no longer needs to enter a password to be authenticated to each new resource or service. This also ensures that the user does not accidentally give out his or her password to an unauthorized service; it provides mutual authentication of both the user and the service or resource. The most commonly used authentication protocol is Kerberos, developed at MIT (see web.mit.edu/kerberos/www).

Although many systems use only one authentication server, it is possible to establish a series of authentication servers for different parts of the organization. Each server authenticates clients in its domain but can also pass authentication credentials to authentication servers in other domains.

10.4.6 Preventing Social Engineering

One of the most common ways for attackers, even master hackers, to break into a system is through social engineering, which refers to breaking security simply by asking. For example, attackers routinely phone unsuspecting users and, imitating someone such as a technician or senior manager, ask for a password. Unfortunately, too many users want to be helpful and simply provide the requested information. At first, it seems ridiculous to believe that someone would give their password to a complete stranger, but a skilled social engineer is like a good con artist: he (and most social engineers are men) can manipulate people.18

10.7 INSIDE KERBEROS

TECHNICAL FOCUS

Kerberos, the most commonly used central authentication protocol, uses symmetric encryption (usually DES). Kerberos is used by a variety of central authentication services, including Windows Active Directory services. When you log in to a Kerberos-based system, you provide your user id and password to the Kerberos software on your computer. This software sends a request containing the user id, but not the password, to the Kerberos authentication server (called the Key Distribution Center [KDC]).

The KDC checks its database for the user id and, if it finds it, accepts the log-in and does two things. First, it generates a service ticket (ST) for the KDC that contains information about the KDC, a time stamp, and, most importantly, a unique session key (SK1), which will be used to encrypt all further communication between the client computer and the KDC until the user logs off. SK1 is generated separately for each user and is different every time the user logs in. Now, here's the clever part: the ST is encrypted using a key based on the password that matches the user id. The client computer can decrypt the ST only if it knows the password that matches the user id used to log in. If the user enters an incorrect password, the Kerberos software on the client can't decrypt the ST and asks the user to enter a new password. This way, the password is never sent over the network.

Second, the KDC creates a Ticket-Granting Ticket (TGT). The TGT includes information about the client computer and a time stamp that is encrypted using a secret key known only to the KDC and other validated servers. The KDC sends the TGT to the client computer encrypted with SK1 (so no one else can read the TGT), because all communications between the client and the KDC are encrypted with SK1. The client decrypts the transmission to receive the TGT, but because the client does not know the KDC's secret key, it cannot decrypt the contents of the TGT. From now until the user logs off, the user does not need to provide his or her password again; the Kerberos client software will use the TGT to gain access to all servers that require a password.

The first time a user attempts to use a server that requires a password, that server directs the user's Kerberos software to obtain a service ticket (ST) for it from the KDC. The user's Kerberos software sends the TGT to the KDC along with information about which server the user wants to access (remember that all communications between the client and the KDC are encrypted with SK1). The KDC checks to make sure that the user has not logged off and if the TGT is validated, the KDC sends the client an ST for the desired server and a new session key (SK2) that the client will use to communicate with that server, both of which have been encrypted using SK1. The ST contains authentication information and SK2, both of which have been encrypted using the secret key known only to the KDC and the server.

The client presents to the server a log-in request (which specifies the user id, a time and date stamp, and other information) encrypted with SK2, along with the ST. The server decrypts the ST using the KDC's secret key to find the authentication information and SK2. It uses SK2 to decrypt the log-in request. If the log-in request is valid after decrypting with SK2, the server accepts the log-in and sends the client a packet that contains information about the server and that has been encrypted with SK2. This process authenticates the client to the server and also authenticates the server to the client. Both now communicate using SK2. Notice that the server never learns the user's password.
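
The essence of the exchange can be compressed into a toy Python sketch. This is a drastic simplification for illustration only: Fernet symmetric encryption (from the third-party cryptography package) stands in for DES, and lifetimes, time stamps, and realms are omitted.

from cryptography.fernet import Fernet

kdc_secret = Fernet(Fernet.generate_key())   # known only to the KDC and servers
user_key = Fernet(Fernet.generate_key())     # derived from the user's password

# 1. The KDC builds a TGT that it alone can open, plus a session key
#    SK1, and seals both under the password-derived key.
sk1_raw = Fernet.generate_key()
tgt = kdc_secret.encrypt(b"client=alice")
login_reply = user_key.encrypt(sk1_raw + b"||" + tgt)

# 2. Only the right password can decrypt the reply, so the password
#    itself never crosses the network.
sk1_bytes, _, tgt_copy = user_key.decrypt(login_reply).partition(b"||")
sk1 = Fernet(sk1_bytes)

# 3. To reach a server, the client returns the (opaque) TGT under SK1;
#    the KDC can open the TGT, while the client never could.
request = sk1.encrypt(tgt_copy + b"||server=fileserver")
ticket_request = sk1.decrypt(request).partition(b"||")
assert kdc_secret.decrypt(ticket_request[0]) == b"client=alice"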

Most security experts no longer test for social engineering attacks; they know from experience that social engineering will eventually succeed in any organization and therefore assume that attackers can gain access at will to normal user accounts. Training end users not to divulge passwords may not eliminate social engineering attacks, but it may reduce their effectiveness so that hackers give up and move on to easier targets. Acting out social engineering skits in front of users often works very well; when employees see how they can be manipulated into giving out private information, it becomes more memorable and they tend to become much more careful.

Phishing is a very common type of social engineering. The attacker simply sends an email to millions of users telling them that their bank account has been shut down due to an unauthorized access attempt and that they need to reactivate it by logging in. The email contains a link that directs the user to a fake Web site that appears to be the bank's Web site. After the user logs into the fake site, the attacker has the user's user id and password and can break into his or her account at will. Clever variants include an email informing you that a new user has been added to your PayPal account, one stating that the IRS has issued you a refund and you need to verify your Social Security number, or one offering a mortgage at a very low rate for which you need to provide your Social Security number and credit card number.
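
One telltale sign can even be checked mechanically: the visible text of the link names the bank, but the underlying address points elsewhere. A hypothetical Python sketch (the domain names are invented):

from html.parser import HTMLParser

class LinkChecker(HTMLParser):
    """Flag anchors whose visible text names a site the href doesn't match."""
    def handle_starttag(self, tag, attrs):
        self.href = dict(attrs).get("href", "") if tag == "a" else ""
    def handle_data(self, data):
        if getattr(self, "href", "") and "mybank.example" in data \
                and "mybank.example" not in self.href:
            print(f"Suspicious link: text {data!r} goes to {self.href}")

LinkChecker().feed(
    '<a href="http://evil.example.net/login">www.mybank.example</a>')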

10.4.7 Intrusion Prevention Systems

Intrusion prevention systems (IPS) are designed to detect an intrusion and take action to stop it. There are two general types of IPSs, and many network managers choose to install both. The first type is a network-based IPS. With a network-based IPS, an IPS sensor is placed on key network circuits. An IPS sensor is simply a device running a special operating system that monitors all network packets on that circuit and reports intrusions to an IPS management console. The second type of IPS is the host-based IPS, which, as the name suggests, is a software package installed on a host or server. The host-based IPS monitors activity on the server and reports intrusions to the IPS management console.

There are two fundamental techniques that these types of IPSs can use to determine that an intrusion is in progress; most IPSs use both techniques. The first technique is misuse detection, which compares monitored activities with signatures of known attacks. Whenever an attack signature is recognized, the IPS issues an alert and discards the suspicious packets. The problem, of course, is keeping the database of attack signatures up to date as new attacks are invented.

The second fundamental technique is anomaly detection, which works well in stable networks by comparing monitored activities with the “normal” set of activities. When a major deviation is detected (e.g., a sudden flood of ICMP ping packets, an unusual number of failed log-ins to the network manager's account), the IPS issues an alert and discards the suspicious packets. The problem, of course, is false alarms when situations occur that produce valid network traffic that is different from normal (e.g., on a heavy trading day on Wall Street, e-trade receives a larger than normal volume of messages).
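
Both techniques reduce to simple ideas, sketched below in Python. The signatures and thresholds are invented for illustration; real IPS products use far larger signature databases and much richer traffic models.

import statistics
from collections import deque

SIGNATURES = [b"GET /../../", b"' OR 1=1 --"]   # hypothetical attack patterns

def misuse_detected(payload):
    """Misuse detection: match traffic against known attack signatures."""
    return any(sig in payload for sig in SIGNATURES)

class AnomalyDetector:
    """Anomaly detection: alert when activity strays far from normal."""
    def __init__(self, window=100, threshold=3.0):
        self.rates = deque(maxlen=window)   # recent packets-per-second samples
        self.threshold = threshold

    def observe(self, rate):
        alarm = False
        if len(self.rates) >= 10:           # need a baseline first
            mean = statistics.mean(self.rates)
            spread = statistics.pstdev(self.rates) or 1.0
            alarm = abs(rate - mean) > self.threshold * spread
        self.rates.append(rate)
        return alarm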

Intrusion prevention systems are often used in conjunction with other security tools such as firewalls (Figure 10.16). In fact, some firewalls are now including IPS functions. One problem is that the IPS and its sensors and management console are a prime target for attackers. Whatever IPS is used, it must be very secure against attack. Some organizations deploy redundant IPSs from different vendors (e.g., a network-based IPS from one vendor and a host-based IPS from another) in order to decrease the chance that the IPS can be hacked.

Although IPS monitoring is important, it has little value unless there is a clear plan for responding to a security breach in progress. Every organization should have a clear response planned if a break-in is discovered. Many large organizations have emergency response “SWAT” teams ready to be called into action if a problem is discovered. The best example is CERT, which is the Internet's emergency response team. CERT has helped many organizations establish such teams.

10.10 SOCIAL ENGINEERING WINS AGAIN

MANAGEMENT FOCUS

Danny had collected all the information he needed to steal the plans for the new product. He knew the project manager's name (Bob Billings), phone number, department name, office number, computer user id, and employee number, as well as the project manager's boss's name. These had come from the company Web site and a series of innocuous phone calls to helpful receptionists. He had also tricked the project manager into giving him his password, but that hadn't worked because the company used one-time passwords using a time-based token system called Secure ID. So, after getting the phone number of the computer operations room from another helpful receptionist, all he needed was a snowstorm.

Late one Friday night, a huge storm hit and covered the roads with ice. The next morning, Danny called the computer operations room:

Danny: “Hi, this is Bob Billings in the Communications Group. I left my Secure ID in my desk and I need it to do some work this weekend. There's no way I can get into the office this morning. Could you go down to my office and get it for me? And then read my code to me so I can log-in?”

Operations: “Sorry, I can't leave the Operations Center.”

Danny: “Do you have a Secure ID yourself?”

Operations: “There's one here we keep for emergencies.”

Danny: “Listen. Can you do me a big favor? Could you let me borrow your Secure ID? Just until it's safe to drive in?”

Operations: “Who are you again?”

Danny: “Bob Billings. I work for Ed Trenton.”

Operations: “Yeah, I know him.”

Danny: “My office is on the second floor (2202B). Next to Roy Tucker. It'd be easier if you could just get my Secure ID out of my desk. I think it's in the upper left drawer.” (Danny knew the guy wouldn't want to walk to a distant part of the building and search someone else's office.)

Operations: “I'll have to talk to my boss.”

After a pause, the operations technician came back on and asked Danny to call his manager on his cell phone. After talking with the manager and providing some basic information to “prove” he was Bob Billings, Danny kept asking about having the Operations technician go to “his” office.

Finally, the manager decided to let Danny use the Secure ID in the Operations Center. The manager called the technician and gave permission for him to tell “Bob” the one-time password displayed on their Secure ID any time he called that weekend. Danny was in.

__________

SOURCE: Kevin Mitnick and William Simon, The Art of Deception, John Wiley and Sons, 2002.

Responding to an intrusion can be more complicated than it at first seems. For example, suppose the IPS detects a DoS attack from a certain IP address. The immediate reaction could be to discard all packets from that IP address; however, in the age of IP spoofing, the attacker could fake the address of your best customer and trick you into discarding packets from it.

10.4.8 Intrusion Recovery

Once an intrusion has been detected, the first step is to identify how the intruder gained unauthorized access and prevent others from breaking in the same way. Some organizations will simply choose to close the door on the attacker and fix the security problem. About 30 percent of organizations take a more aggressive response by logging the intruder's activities and working with police to catch the individuals involved. Once identified, the attacker can be charged with criminal activities and/or sued in civil court. Several states and provinces have introduced laws requiring organizations to report intrusions and theft of customer data, so the percentage of intrusions reported and prosecuted will increase.

images

FIGURE 10.16 Intrusion prevention system (IPS). DMZ = demilitarized zone; DNS = Domain Name Service; NAT = network address translation

A whole new area called computer forensics has recently opened up. Computer forensics is the use of computer analysis techniques to gather evidence for criminal and/or civil trials. The basic steps of computer forensics are similar to those of traditional forensics, but the techniques are different. First, identify potential evidence. Second, preserve evidence by making backup copies and use those copies for all analysis. Third, analyze the evidence. Finally, prepare a detailed legal report for use in prosecutions. While companies are sometimes tempted to launch counterattacks (or counterhacks) against intruders, this is illegal.
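
The "preserve evidence" step, in particular, has a standard mechanical core: analyze only a copy, and record a cryptographic hash so the copy can later be shown to match the original. A minimal Python sketch follows (the paths are hypothetical):

import hashlib
import pathlib
import shutil

def preserve(original, workdir="evidence"):
    src = pathlib.Path(original)
    dst = pathlib.Path(workdir) / src.name
    dst.parent.mkdir(exist_ok=True)
    shutil.copy2(src, dst)                  # all analysis uses this copy
    digest = hashlib.sha256(dst.read_bytes()).hexdigest()
    pathlib.Path(str(dst) + ".sha256").write_text(digest + "\n")
    return digest

print(preserve("mail_server.log"))          # hypothetical seized file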

Some organizations have taken their own steps to snare intruders by using entrapment techniques. The objective is to divert the attacker's attention from the real network to an attractive server that contains only fake information. This server is often called a honey pot. The honey pot server contains highly interesting, fake information available only through illegal intrusion to “bait” the intruder. The honey pot server has sophisticated tracking software to monitor access to this information that allows the organization and law enforcement officials to trace and legally document the intruder's actions. Possession of this information then becomes final legal proof of the intrusion.

10.5 BEST PRACTICE RECOMMENDATIONS

This chapter provides numerous suggestions on business continuity planning and intrusion prevention. Good security starts with a clear disaster recovery plan and a solid security policy. Probably the best security investment is user training: training individual users on data recovery and on ways to defeat social engineering. But this doesn't mean that technology isn't needed as well.

Figure 10.17 shows the most commonly used security controls. Most organizations now routinely use antivirus software, firewalls, VPNs, encryption, and IPS.

Even so, rarely does a week pass without a new warning of a major vulnerability. Leave a server unattended for two weeks, and you may find that you have five critical patches to install.

People are now asking, “Will it end?” Is (in)security just a permanent part of the information systems landscape? In a way, yes. The growth of information systems, along with the new and dangerous ability to reach into them from around the world, has created new opportunities for criminals. Mix the possibility of stealing valuable, marketable information with the low probability of getting caught and punished, and we would expect increasing numbers of attacks.

Perhaps the question should be: Does it have to be this bad? Unquestionably, we could be protecting ourselves better. We could better enforce security policies and restrict access. But all of this has a cost. Attackers are writing and distributing a new generation of attack tools right before us—tools that are very powerful, more difficult to detect, and very easy to use. Usually such tools are much easier to use than their defensive countermeasures.

The attackers have another advantage, too. Whereas the defenders have to protect all vulnerable points all the time in order to be safe, the attacker just has to break into one place one time to be successful.

images

FIGURE 10.17 What security controls are used

So what may we expect in the future in “secure” organizational environments? We would expect to see strong desktop management, including the use of thin clients (perhaps even network PCs that lack hard disks). Centralized desktop management, in which individual users are not permitted to change the settings on their computers, may become common, along with regular reimaging of computers to prevent Trojans and viruses and to install the most recent security patches. All external software downloads will likely be prohibited.

Continuous content filtering, in which all incoming packets (e.g., Web, email) are scanned, may become common, thus significantly slowing down the network. All server files and communications with client computers would be encrypted, further slowing down transmissions.

Finally, all written security policies would be rigorously enforced. Violations of security policies might even become a “capital offense” (i.e., one violation and you are fired).

We may look forlornly back to the early days of the Internet when we could “do anything” as its Golden Days.

10.6 IMPLICATIONS FOR MANAGEMENT

Network security was once an esoteric field of interest to only a few dedicated professionals. Today, it is the fastest-growing area in networking. The cost of network security will continue to increase as the tools available to network attackers become more sophisticated, as organizations rely more and more on networks for critical business operations, and as information warfare perpetrated by nations or terrorists becomes more common. As the cost of networking technology decreases, the cost of staff and networking technologies providing security will become an increasingly larger proportion of an organization's networking budget. As organizations and governments see this, there will be a call for tougher laws and better investigation and prosecution of network attackers.

A Day in the Life: Network Security Manager

“Managing security is a combination of detective work and prognostication about the future.”

A network security manager spends much of his or her time doing three major things. First, much time is spent looking outside the organization by reading and researching potential security holes and new attacks because the technology and attack opportunities change so fast. It is important to understand new attack threats, new scripting tools used to create viruses, remote access Trojans and other harmful software, and the general direction in which the hacking community is moving. Much important information is contained at Web sites such as those maintained by CERT (www.cert.org) and SANS (www.sans.org). This information is used to create new versions of standard computer images that are more robust in defeating attacks, and to develop recommendations for the installation of application security patches. It also means that he or she must update the organization's written security policies and inform users of any changes.

Second, the network security manager looks inward toward the networks he or she is responsible for. He or she must check the vulnerability of those networks by thinking like a hacker to understand how the networks may be susceptible to attack, which often means scanning for open ports and unguarded parts of the networks and looking for computers that have not been updated with the latest security patches. It also means looking for symptoms of compromised machines such as new patterns of network activity or unknown services that have been recently opened on a computer.

Third, the network security manager must respond to security incidents. This usually means “firefighting”—quickly responding to any security breach, identifying the cause, collecting forensic evidence for use in court, and fixing the computer or software application that has been compromised.

With thanks to Kenn Crook

Security tools available to organizations will continue to increase in sophistication and the use of encryption will become widespread in most organizations. There will be an ongoing “arms race” between security officers in organizations and attackers. Software security will become an important factor in selecting operating systems, networking software, and application software. Those companies that provide more secure software will see a steady increase in market share while those that don't will gradually lose ground.

SUMMARY

Types of Security Threats In general, network security threats can be classified into one of two categories: (1) business continuity and (2) intrusions. Business continuity can be interrupted by disruptions that are minor and temporary, but some may also result in the destruction of data. Natural (or man-made) disasters may occur that destroy host computers or large sections of the network. Intrusion refers to intruders (external attackers or organizational employees) gaining unauthorized access to files. The intruder may gain knowledge, change files to commit fraud or theft, or destroy information to injure the organization.

Risk Assessment Developing a secure network means developing controls that reduce or eliminate threats to the network. Controls prevent, detect, and correct whatever might happen to the organization when its computer-based systems are threatened. The first step in developing a secure network is to conduct a risk assessment. This is done by identifying the key assets and threats and comparing the nature of the threats to the controls designed to protect the assets. A control spreadsheet lists the assets, threats, and controls that a network manager uses to assess the level of risk.

Business Continuity The major threats to business continuity are viruses, theft, denial-of-service attacks, device failure, and disasters. Installing and regularly updating antivirus software is one of the most important and commonly used security controls. Protecting against denial-of-service attacks is challenging and often requires special hardware. Theft is one of the most often overlooked threats and can be prevented by good physical security, especially the physical security of laptop computers. Devices fail, so the best way to prevent network outages is to ensure that the network has redundant circuits and devices (e.g., switches and routers) on mission-critical network segments (e.g., the Internet connection and core backbone). A few commonsense steps can reduce the chance of disaster, but no disaster can be completely avoided; most organizations focus on ensuring that important data are backed up off-site and on having a good, tested disaster recovery plan.

Intrusion Prevention Intruders can be organization employees or external hackers who steal data (e.g., customer credit card numbers) or destroy important records. A security policy defines the key stakeholders and their roles, including what users can and cannot do. Firewalls often stop intruders at the network perimeter by permitting only authorized packets into the network, by examining application layer packets for known attacks, and/or by hiding the organization's private IP addresses from the public Internet. Physical and dial-up security are also useful perimeter security controls. Patching security holes (known bugs in an operating system or application software package) is important to prevent intruders from using them to break in. Single-key or public key encryption can protect data in transit or data stored on servers. User authentication ensures that only authorized users can enter the network and can be based on something you know (passwords), something you have (access cards), or something you are (biometrics). Preventing social engineering, in which hackers trick users into revealing their passwords, is very difficult. Intrusion prevention systems are tools that detect known attacks and unusual activity and enable network managers to stop an intrusion in progress. Intrusion recovery involves correcting any damaged data, reporting the intrusion to the authorities, and taking steps to prevent other intruders from gaining access the same way.

KEY TERMS

access card

access control list (ACL)

account

Advanced Encryption Standard (AES)

adware

algorithm

anomaly detection

antivirus software

application-level firewall

asset

asymmetric encryption

authentication

authentication server

availability

backup controls

biometric system

brute-force attack

business continuity

central authentication

certificate

certificate authority (CA)

ciphertext

closed source

Computer Emergency Response Team (CERT)

computer forensics

confidentiality

continuous data protection (CDP)

control spreadsheet

controls

corrective control

cracker

Data Encryption Standard (DES)

DDoS agent

DDoS handler

decryption

Delphi team

denial-of-service (DoS) attack

desktop management

detective control

disaster recovery drill

disaster recovery firm

disaster recovery plan

disk mirroring

distributed denial-of-service (DDoS) attack

eavesdropping

encryption

entrapment

fault-tolerant server

firewall

hacker

honey pot

host-based IPS

information warfare

integrity

Internet Key Exchange (IKE)

intrusion prevention system (IPS)

IP Security Protocol (IPSec)

IP spoofing

IPS management console

IPS sensor

IPSec transport mode

IPSec tunnel mode

Kerberos

key

key management

mission-critical application

misuse detection

NAT firewall

network address translation (NAT)

network-based IPS

one-time password

online backup

open source

packet-level firewall

passphrase

password

patch

phishing

physical security

plaintext

Pretty Good Privacy (PGP)

preventive control

private key

public key

public key encryption

public key infrastructure (PKI)

RC4

recovery controls

redundancy

redundant array of independent disks (RAID)

risk assessment

rootkit

RSA

script kiddies

Secure Sockets Layer (SSL)

secure switch

security hole

security policy

smart card

sniffer program

social engineering

something you are

something you have

something you know

spyware

symmetric encryption

threat

time-based token

token

traffic analysis

traffic anomaly analyzer

traffic anomaly detector

traffic filtering

traffic limiting

triple DES (3DES)

Trojan horse

uninterruptible power supply (UPS)

user profile

user authentication

virus

worm

zero-day attack

QUESTIONS

  1. What factors have brought increased emphasis on network security?
  2. Briefly outline the steps required to complete a risk assessment.
  3. Name at least six assets that should have controls in a data communication network.
  4. What are some of the criteria that can be used to rank security risks?
  5. What are the most common security threats? What are the most critical? Why?
  6. Explain the primary principle of device failure protection.
  7. What is the purpose of a disaster recovery plan? What are five major elements of a typical disaster recovery plan?
  8. What is a computer virus? What is a worm?
  9. How can one reduce the impact of a disaster?
  10. Explain how a denial-of-service attack works.
  11. How does a denial-of-service attack differ from a distributed denial-of-service attack?
  12. What is a disaster recovery firm? When and why would you establish a contract with them?
  13. What is online backup?
  14. People who attempt intrusion can be classified into four different categories. Describe them.
  15. There are many components in a typical security policy. Describe three important components.
  16. What are three major aspects of intrusion prevention (not counting the security policy)?
  17. How do you secure the network perimeter?
  18. What is physical security and why is it important?
  19. What is eavesdropping in a computer security sense?
  20. What is a sniffer?
  21. How do you secure dial-in access?
  22. Describe how an ANI modem works.
  23. What is a firewall?
  24. How do the different types of firewalls work?
  25. What is IP spoofing?
  26. What is a NAT firewall and how does it work?
  27. What is a security hole and how do you fix it?
  28. Explain how a Trojan horse works.
  29. Compare and contrast symmetric and asymmetric encryption.
  30. Describe how symmetric encryption and decryption work.
  31. Describe how asymmetric encryption and decryption work.
  32. What is key management?
  33. How does DES differ from 3DES? From RC4? From AES?
  34. Compare and contrast DES and public key encryption.
  35. Explain how authentication works.
  36. What is PKI and why is it important?
  37. What is a certificate authority?
  38. How does PGP differ from SSL?
  39. How does SSL differ from IPSec?
  40. Compare and contrast IPSec tunnel mode and IPSec transport mode.
  41. What are the three major ways of authenticating users? What are the pros and cons of each approach?
  42. What are the different types of one-time passwords and how do they work?
  43. Explain how a biometric system can improve security. What are the problems with it?
  44. Why is the management of user profiles an important aspect of a security policy?
  45. How does network authentication work and why is it useful?
  46. What is social engineering? Why does it work so well?
  47. What techniques can be used to reduce the chance that social engineering will be successful?
  48. What is an intrusion prevention system?
  49. Compare and contrast a network-based IPS and a host-based IPS.
  50. How does IPS anomaly detection differ from misuse detection?
  51. What is computer forensics?
  52. What is a honey pot?
  53. What is desktop management?
  54. A few security consultants have said that broadband and wireless technologies are their best friends. Explain.
  55. Most hackers start their careers breaking into computer systems as teenagers. What can we as a community of computer professionals do to reduce the temptation to become a hacker?
  56. Some experts argue that CERT's posting of security holes on its Web site causes more security break-ins than it prevents and should be stopped. What are the pros and cons on both sides of this argument? Do you think CERT should continue to post security holes?
  57. What is one of the major risks of downloading unauthorized copies of music files from the Internet (aside from the risk of jail, fines, and lawsuits)?
  58. Suppose you started working as a network manager at a medium-sized firm with an Internet presence, and discovered that the previous network manager had done a terrible job of network security. Which four security controls would be your first priority? Why?
  59. How can we reduce the number of viruses that are created every month?
  60. Although it is important to protect all servers, some servers are more important than others. What server(s) are the most important to protect and why?

EXERCISES

10-1. Conduct a risk assessment of your organization's networks. Some information may be confidential, so report what you can.

10-2. Investigate and report on the activities of CERT (the Computer Emergency Response Team).

10-3. Investigate the capabilities and costs of a disaster recovery service.

10-4. Investigate the capabilities and costs of a firewall.

10-5. Investigate the capabilities and costs of an intrusion prevention system.

10-6. Investigate the capabilities and costs of an encryption package.

10-7. Investigate the capabilities and costs of an online backup service.

MINI-CASES

I. Belmont State Bank

Belmont State Bank is a large bank with hundreds of branches that are connected to a central computer system. Some branches are connected over dedicated circuits and others use Multiprotocol Label Switching (MPLS). Each branch has a variety of client computers and ATMs connected to a server. The server stores the branch's daily transaction data and transmits it several times during the day to the central computer system. Tellers at each branch use a four-digit numeric password, and each teller's computer is transaction-coded to accept only its authorized transactions. Perform a risk assessment.

II. Western Bank

Western Bank is a small, family-owned bank with six branches spread over the county. It has decided to move onto the Internet with a Web site that permits customers to access their accounts and pay bills. Design the key security hardware and software the bank should use.

III. Classic Catalog Company, Part 1

Classic Catalog Company runs a small but rapidly growing catalog sales business. It outsourced its Web operations to a local ISP for several years but as sales over the Web have become a larger portion of its business, it has decided to move its Web site onto its own internal computer systems. It has also decided to undertake a major upgrade of its own internal networks. The company has two buildings, an office complex, and a warehouse. The two-story office building has 60 computers. The first floor has 40 computers, 30 of which are devoted to telephone sales. The warehouse, located 400 feet across the company's parking lot from the office building, has about 100,000 square feet, all on one floor. The warehouse has 15 computers in the shipping department located at one end of the warehouse. The company is about to experiment with using wireless handheld computers to help employees more quickly locate and pick products for customer orders. Based on traffic projections for the coming year, the company plans to use a T1 connection from its office to its ISP. It has three servers: the main Web server, an email server, and an internal application server for its application systems (e.g., orders, payroll). Perform a risk assessment.

IV. Classic Catalog Company, Part 2

Read Minicase III above. Outline a brief business continuity plan including controls to reduce the risks in advance as well as a disaster recovery plan.

V. Classic Catalog Company, Part 3

Read Minicase III above. Outline a brief security policy and the controls you would implement to control unauthorized access.

VI. Classic Catalog Company, Part 4

Read Minicase III above. What patching policy would you recommend for Classic Catalog?

VII. Personal Security

Conduct a risk assessment and develop a business continuity plan and security policy for the computer(s) you own.

CASE STUDY

NEXT-DAY AIR SERVICE

See the Web site.

HANDS-ON ACTIVITY 10A

Securing Your Computer

This chapter has focused on security, including risk analysis, business continuity, and intrusion prevention. At first glance, you may think security applies to corporate networks, not your own. However, if you have a LAN at your house or apartment, or even if you just own a desktop or laptop computer, security should be one of your concerns. There are so many potential threats to your business continuity (which might be your education) and so many opportunities for intrusion into your computer(s) that you need to take action.

You should perform your own risk analysis, but this section provides a brief summary of some simple actions that will greatly increase your security. Do this this week; don't procrastinate. Our focus is on Windows security, because most readers of this book use Windows computers, but the same advice (though with different commands) applies to Apple computers.

Business Continuity

If you run your own business, then ensuring business continuity should be a major focus of your efforts. But even if you are “just” an employee or a student, business continuity is important. What would happen if your hard disk failed just before the due date for a major report?

  1. The first and most important security action you can take is to configure Windows to perform automatic updates. This will ensure you have the latest patches and updates installed.
  2. The second most important action is to buy and install antivirus software such as that from Symantec. Be sure to configure it for regular updates too. If you perform just these two actions, you will be relatively secure from viruses, but you should scan your system for viruses on a regular basis, such as the first of every month when you pay your rent or mortgage.
  3. Spyware is another threat. You should buy and install antispyware software that provides the same protection against spyware that antivirus software provides against viruses. Spybot is a good package. Be sure to configure this software for regular updates and scan your system on a regular basis.
  4. One of the largest sources of viruses, spyware, and adware is free software and music/video files downloaded from the Internet. Simply put, don't download any file unless it is from a trusted vendor or distributor of software and files.
  5. Develop a disaster recovery plan. You should plan today for what you would do if your computer were destroyed. What files would you need? If there are any important files that you wouldn't want to lose (e.g., reports you're working on, key data, or precious photos), you should develop a backup and recovery plan for them. The simplest is to copy the files to a shared directory on another computer on your LAN. But this won't enable you to recover the files if your apartment or house were destroyed by fire, for example (see Management Focus 10.5). A better plan is to subscribe to a free online backup service such as mozy.com (think CDP on the cheap). If you don't use such a site, buy a large USB drive, copy your files to it, and store it off-site in your office or at a friend's house. A plan is only good if it is followed, so back up your data regularly, such as on the first of every month (a minimal backup sketch follows this list).
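
Here is the kind of minimal monthly backup script item 5 describes, as a Python sketch. The source folder and drive letter are placeholders; point them at your own files and your own USB drive or network share.

import datetime
import pathlib
import shutil

SOURCE = pathlib.Path.home() / "Documents"   # the files you can't lose
DEST_ROOT = pathlib.Path("E:/backups")       # e.g., a USB drive kept off-site

def backup():
    stamp = datetime.date.today().isoformat()
    dest = DEST_ROOT / f"documents-{stamp}"  # a fresh dated copy each run
    shutil.copytree(SOURCE, dest)
    return dest

if __name__ == "__main__":
    print("Backed up to", backup())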

Deliverables

  1. Perform risk analysis for your home network.
  2. Prepare a disaster recovery plan for your home network.
  3. Research antivirus and antispyware software that you can purchase for your home network.

Intrusion Prevention

With the increase of Internet-based attacks, everyone's computer is at greater risk for intrusion, not just the computers of prominent organizations. There are a few commonsense steps you can take to prevent intrusion.

  1. Think good physical security. Always turn off your computer when you are finished using it. A computer that is off cannot be attacked, either over the Internet or from someone walking by your desk.
  2. Windows has the ability to have multiple user accounts. The default accounts are Administrator and Guest. You should disable the Guest account and change the name of the Administrator account so that any intruders attacking the computer will have to guess the user names as well as the passwords. It's also a good idea to create an account other than the Administrator account that you can use on a day-to-day basis. The Administrator account should be used only when you are installing software or changing configurations that require administrator privileges on your computer. You can manage these user accounts from the Control Panel, User Accounts. Be sure to add passwords that are secure but easy to remember for all the accounts that you use.
  3. Turn on the Windows Firewall. Use Control Panel, Security Center to examine your security settings, including the “firewall” built into Windows. The firewall is software that prevents other computers from accessing your computer. You can turn it on and examine the settings. The default settings are usually adequate, but you may want to make changes. Click on Internet Options. This will enable you to configure the firewall for four different types of sites: the Internet, your local intranet (i.e., LAN), trusted sites (those with a valid PKI certificate), and restricted sites (sites of known hackers). Figure 10.18 shows some of the different security settings.
  4. Disable unneeded services. Windows was designed to support as many applications as the developers could think of. Many of these services are not needed by most users, and unfortunately, some have become targets of intruders. For example, Windows is a Telnet server (see Chapter 2) so that anyone with a Telnet client can connect to your computer and issue operating system commands. The Telnet server is usually turned off by the person who installed Windows on your computer, but it is safer to make sure.
    1. Right click on My Computer and select Manage.
    2. Click on Services and Applications and then click on Services.
    3. You should see a screen like that in Figure 10.19. Make sure the Telnet service says “Disabled.” If it doesn't, right click on it, Select Properties, and change the Startup Type to Disabled.
    4. Three other services that should be set to Disabled are Messenger (don't worry, this is not any type of Instant Messenger), Remote Registry, and Routing and Remote Access.

    images

    FIGURE 10.18 Security controls in Windows

    images

    FIGURE 10.19 Services in Windows

  5. If you have a LAN in your apartment or house, be sure the router connecting you to the Internet is a NAT firewall. This will prevent many intruders from attacking your computers. The Disable WAN connections option on my router permits me to deny any TCP request from the Internet side of the router—that is, my client computer can establish outgoing TCP connections, but no one on the Internet can establish a TCP connection to a computer in my LAN.
  6. In Chapter 6, we described how to share files on your LAN. If you don't need to share files right now, this capability should be turned off. See Chapter 6 for more details.
  7. Avoid phishing attacks. A recent analysis of email traffic found that 80 percent of all email was spam or phishing attacks. That's right: “real” email is outnumbered four to one by fake email. Never click a link in an email. No exceptions. Not if you are a “valued customer,” not if you have been offered a chance to participate in a survey or receive a low-cost mortgage, and not if the email appears to come from a well-known firm. Let us say that again: never click an email link. If you want to visit a Web site mentioned in an email, open a new browser window and type the correct address yourself. Figure 10.20 shows a phishing attack I received. Looks real, doesn't it? I particularly enjoyed the parts that talk about spotting and avoiding fraudulent emails. Had I clicked on the link, it would have taken me to a Web site owned by a Singaporean company. (The second sketch after this list shows one automated way to spot such mismatched links.)
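
As promised in step 4, the check on service startup types can be scripted instead of done through the GUI. Below is a minimal Python sketch that shells out to the standard Windows sc utility and reports each service's startup type. The internal service names used here (TlntSvr, Messenger, RemoteRegistry, RemoteAccess) are the usual names for these services on XP-era systems, but treat them as assumptions and verify them on your own machine with sc query.

    import subprocess

    # Internal Windows service names (assumptions; verify with "sc query" on your system)
    SERVICES = {
        "TlntSvr": "Telnet",
        "Messenger": "Messenger",
        "RemoteRegistry": "Remote Registry",
        "RemoteAccess": "Routing and Remote Access",
    }

    def startup_type(service):
        """Return the START_TYPE reported by 'sc qc', or None if the service is absent."""
        result = subprocess.run(["sc", "qc", service], capture_output=True, text=True)
        for line in result.stdout.splitlines():
            if "START_TYPE" in line:
                return line.split(":", 1)[1].strip()
        return None

    for name, label in SERVICES.items():
        state = startup_type(name)
        if state is None:
            print(label + ": not installed")
        elif "DISABLED" in state:
            print(label + ": disabled (good)")
        else:
            print(label + ": " + state + " -- consider disabling it as described in step 4")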
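
The giveaway in many phishing messages is a link whose visible text names one site while the underlying link points somewhere else. As a second sketch (an illustration, not a substitute for the never-click rule above), this Python fragment parses the HTML body of an email and flags any link whose displayed host differs from its real target; the sample message at the bottom is invented.

    from html.parser import HTMLParser
    from urllib.parse import urlparse

    class LinkCollector(HTMLParser):
        """Collect (href, visible text) pairs for every <a> tag in an HTML body."""
        def __init__(self):
            super().__init__()
            self.links = []
            self._href = None
            self._text = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self._href = dict(attrs).get("href") or ""
                self._text = []

        def handle_data(self, data):
            if self._href is not None:
                self._text.append(data)

        def handle_endtag(self, tag):
            if tag == "a" and self._href is not None:
                self.links.append((self._href, "".join(self._text).strip()))
                self._href = None

    def suspicious_links(html_body):
        """Flag links whose visible text looks like a URL but names a different host."""
        collector = LinkCollector()
        collector.feed(html_body)
        flagged = []
        for href, text in collector.links:
            if " " in text or "." not in text:
                continue  # the visible text is not URL-like, so there is nothing to compare
            shown = urlparse(text if "://" in text else "http://" + text).hostname
            real = urlparse(href).hostname
            if shown and real and shown != real:
                flagged.append((text, href))
        return flagged

    # Invented example: the text claims to be a bank, but the link goes somewhere else
    sample = '<a href="http://attacker.example.sg/login">www.mybank.com</a>'
    print(suspicious_links(sample))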

Deliverables

  1. Print out the report for your computer (like Figure 10.19).
  2. Find 3 examples of phishing emails and explain which one is the most convincing and why.

HANDS-ON ACTIVITY 10B

Testing Your Computer's Intrusion Prevention

There are many ways an intruder could attack your computer. Many attacks exploit well-known weaknesses in the Windows and Mac operating systems that even novice hackers can use. Several Web sites will test your computer's vulnerability to these common attacks. Our favorite is Shields Up by Gibson Research Corporation.

  1. Go to the Gibson Research home page at www.grc.com and click on the link to Shields Up.
  2. The first screen will provide some background information on the tests to be conducted and then will provide the application layer name of your computer (if it has one). After you read the details, click the Proceed button.
  3. Figure 10.21 displays the main menu page. Shields Up can conduct four different security tests: file sharing, port scanning, messenger spam, and browser information.
  4. The file sharing test will see if your computer is set up to enable other users to read files on your computer. Windows was originally designed to be very simple to operate in a LAN environment in a peer-to-peer sharing mode using NetBIOS and NetBEUI. Unfortunately, most users don't want their computer to share their personal files with anyone on the Internet who knows how to issue these requests. Click the File Sharing button to run the test.
  5. Figure 10.22 shows the test results for my computer. My computer has this function disabled so it is secure from this type of attack. If your computer fails this test, Shields Up will explain what you need to do.
  6. Scroll this screen and near the bottom of the page, you will see the main menu again. Let's run the port scanning test. As we explained in Chapter 5, TCP ports are used to connect the network to the application software. There are about two dozen commonly used ports for standard Internet applications such as Web servers (80), SMTP email (25), Telnet (23), and so on. It is not unusual for Windows computers to have some of these ports operational; for example, Windows has a built-in Web server that operates on port 80. Hackers who know this and have the right software can send HTTP messages to your computer's Web server and use it to break into your computer and read your files.

    images

    FIGURE 10.20 Phishing attack

    images

    FIGURE 10.21 Shields Up main menu page

    images

    FIGURE 10.22 Shields Up file sharing test

  7. Figure 10.23 shows the results of the port scan on common ports for my computer. I have disabled all of the standard ports on my computer because I do not want anyone to use these ports to talk to software on my computer. If your computer fails this test, Shields Up will explain what you need to do.
  8. Scroll this screen and near the bottom of the page, you will see the main menu again. You can also scan all service ports, which scans every port number that can be defined on your computer, not just the commonly used ports (this takes a long time). This is useful for detecting Trojan horses that have installed themselves and are using an uncommon port to send and receive messages. (A do-it-yourself port check in Python is sketched after this list.)
  9. The next test is the messenger spam test. In addition to instant messaging applications such as AOL IM and MSN Messenger, Windows also provides a service called Messenger, a separate application designed to let network administrators send short messages to users. Unfortunately, this means that anyone with the right tools (e.g., a spammer) can send your computer a message that will pop up on your screen. Scroll to the bottom of this screen to see the main menu again, and click the Messenger Spam button.
  10. This will show a page that enables you to type a message and send it to your computer via Messenger. If you receive the spam message, you can read the prior Hands-On activity for information on how to turn off Messenger.
  11. Scroll to the bottom of this screen and use the main menu to do the Browser test. This will show you the information your Web browser sends as part of the HTTP request header. (The second sketch after this list lets you see the same headers on your own machine.)
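
To see what the port scanning test is doing under the hood, here is a minimal Python sketch of the same idea: it attempts a TCP connection to a handful of well-known ports and reports which ones answer. A refused connection means the port is closed; no reply at all usually means a firewall silently dropped the request. Run it only against machines you own (127.0.0.1, the local loopback address, is used here).

    import socket

    # A few of the well-known ports mentioned in the activity
    COMMON_PORTS = {21: "FTP", 23: "Telnet", 25: "SMTP", 80: "HTTP", 110: "POP3", 443: "HTTPS"}

    def scan(host, ports, timeout=1.0):
        """Try a TCP connection to each port and report what happened."""
        results = {}
        for port in ports:
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.settimeout(timeout)
            try:
                sock.connect((host, port))
                results[port] = "open"
            except socket.timeout:
                results[port] = "no reply (filtered or stealth)"
            except OSError:
                results[port] = "closed (connection refused)"
            finally:
                sock.close()
        return results

    for port, state in scan("127.0.0.1", sorted(COMMON_PORTS)).items():
        print("%5d %-6s %s" % (port, COMMON_PORTS[port], state))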
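
The browser test in step 11 is easy to reproduce on your own machine. This second sketch starts a one-shot server on the local loopback address and prints whatever request headers your browser sends when you visit it; the port number 8080 is an arbitrary choice.

    import socket

    # Listen once on localhost and print the raw HTTP request a browser sends
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", 8080))
    server.listen(1)
    print("Point your browser at http://127.0.0.1:8080/ and watch this window")

    conn, addr = server.accept()
    print(conn.recv(4096).decode(errors="replace"))  # the request line plus headers
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    conn.close()
    server.close()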

Deliverables

  1. Perform the four different security tests described in point 3 and make a printout of the results.
  2. Did your computer pass the test? If not, what steps are you planning to take to make it secure?

HANDS-ON ACTIVITY 10C

Encryption Lab

The purpose of this lab is to practice encrypting and decrypting email messages using a standard called PGP (Pretty Good Privacy), implemented in the open-source software GNU Privacy Guard (GnuPG). You will need to download and install Gpg4win, which includes the Kleopatra key manager, from this website: http://ftp.gpg4win.org/Beta/gpg4win-2.1.0-rc2.exe. Mac OS X users should instead visit http://macgpg.sourceforge.net/.

  1. Open Kleopatra. The first step in sending encrypted messages is to create your personal OpenPGP key pair—your personal private and public key.
  2. Click on File and select New Certificate and then select Create a personal OpenPGP key pair and click Next.
  3. Enter your name as you want it to be displayed with your public key, along with the email address from which you will be sending and receiving emails. The comment field is optional and you can leave it empty. Click Next. Check that your name and email address are entered correctly; if so, click Create Key.
  4. The system will now prompt you to enter a passphrase. This is your password for accessing your key, and it will also allow you to encrypt and decrypt messages. If the passphrase is not secure enough, the system will tell you; the quality indicator has to be green and show 100% for an acceptable passphrase. Once your passphrase is accepted, the system will prompt you to re-enter it. Once this is done, Kleopatra will create your public and private key pair.
  5. The next screen will indicate that a “fingerprint” of your newly created key pair has been generated. This fingerprint is unique; no one else's key has the same one. You don't need to select any of the next steps suggested by the system.

    images

    FIGURE 10.23 Shields Up port scanning test

    images

    FIGURE 10.24 Example of a public key

  6. The next step is to make your public key public so that other people can send encrypted messages to you. In the Kleopatra window, right click on your certificate and select Export Certificates from the menu. Select a folder on your computer where you want to save the public key and name it YourName public key.asc.
  7. To see your public key, open this file in Notepad. You should see a block of fairly confusing text and numbers. My public key is shown in Figure 10.24. To share this public key, post your asc file on the class website. This key should be made public, so don't worry about sharing it. You can even post it on your own website so that other people can send you encrypted messages.
  8. Now, you should import the public key of the person you want to exchange encrypted messages with. Save the asc file with the public key on your computer. Then click the Import Certificates icon in Kleopatra. Select the asc file you want to import and click OK. Kleopatra will acknowledge the successful import of the public key.
  9. The final step in importing the public key is to set the trust level to full trust. Left click on the certificate and from the menu select Change Owner Trust, and select “I believe checks are very accurate.”
  10. Now you are ready to exchange encrypted messages! Open Webmail, Outlook, or any other email client and compose a message. Copy the text of the message to the clipboard by selecting it and pressing CTRL+C. Right-click the Kleopatra icon on your status bar and select Clipboard and then Encrypt (Figure 10.25). Click on Add Recipient and select the person you want to send this message to (Figure 10.26). I will send a message to Alan. Once the recipient is selected, just click Next. Kleopatra will display a message confirming that the encryption was successful.

    images

    FIGURE 10.25 Encrypting a message using Kleopatra

    images

    FIGURE 10.26 Selecting a recipient of an encrypted message

  11. The encrypted message is now stored in your computer's clipboard. Open the email message window and paste (CTRL+V) the encrypted message into the body of the email. Now you are ready to send your first encrypted email!
  12. To decrypt an encrypted message, select the text in the email (you need to select the entire message, from BEGIN PGP MESSAGE to END PGP MESSAGE) and copy it to the clipboard with CTRL+C. Right-click the Kleopatra icon on your status bar and select Clipboard and then Decrypt & Verify; this mirrors how you encrypted the message. The decrypted message will be stored in the clipboard. To read it, just paste it into Word or any other text editor. You are done! (The sketch after this list shows the same round trip done in a few lines of script.)
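
As noted in step 12, the whole Kleopatra round trip can also be scripted. The sketch below uses the third-party python-gnupg package (pip install python-gnupg), which drives the same GnuPG engine that Kleopatra uses; it assumes GnuPG is installed and on your path, and the name, email address, and passphrase are placeholders.

    import gnupg  # third-party package: pip install python-gnupg

    gpg = gnupg.GPG()  # uses your default GnuPG home directory and key ring

    # Steps 2-4: generate a key pair (name, email, and passphrase are placeholders)
    params = gpg.gen_key_input(name_real="Alice Example",
                               name_email="alice@example.com",
                               passphrase="a long passphrase")
    key = gpg.gen_key(params)

    # Step 6: export the ASCII-armored public key, ready to post on the class website
    public_key = gpg.export_keys(key.fingerprint)
    print(public_key[:64], "...")

    # Step 10: encrypt a message for a recipient whose public key you hold
    # (here we encrypt to ourselves; always_trust stands in for step 9's trust setting)
    encrypted = gpg.encrypt("My favorite food is sushi.",
                            recipients=[key.fingerprint],
                            always_trust=True)

    # Step 12: decrypt it again with the private key and the passphrase
    decrypted = gpg.decrypt(str(encrypted), passphrase="a long passphrase")
    print(decrypted.data.decode())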

Deliverables

  1. Create your PGP key pair using Kleopatra. Post the asc file of your public key on a server/class website as instructed by your professor.
  2. Import a certificate (public key) of your professor to Kleopatra. Send your instructor an encrypted message that contains information about your favorite food, hobbies, places to travel, and so on.
  3. Your professor will send you a response that will be encrypted. Decrypt the email and print its content so that you can submit a hard copy in class.

1 This chapter was written by Alan Dennis and Dwight Worker.

2 CERT maintains a Web site on security at www.cert.org. Another site for security information is www.infosyssec.net.

3 The statistics in this chapter are based on surveys conducted by CSO magazine (www.csoonline.com) and the Computer Security Institute (www.gocsi.com).

4 John J. Tkacik Jr., “Trojan Dragon: China's Cyber Threat,” Backgrounder, no. 2106, February 2008, The Heritage Foundation.

5 CERT has developed a detailed risk assessment procedure called OCTAVE(SM), which is available at www.cert.org/octave.

6 We should point out, though, that the losses associated with computer fraud are small compared with other sources of fraud.

7 There are many good business continuity planning sites such as www.disasterrecoveryworld.com.

8 Most routers and firewalls manufactured by Linksys (a manufacturer of networking equipment for home and small office use owned by Cisco) use NAT. Rather than setting the internal address to 10.x.x.x, Linksys sets them to 192.168.1.x, which is another subnet reserved for private intranets. If you have Linksys equipment with a NAT firewall, your internal IP address is likely to be 192.168.1.100.

9 For an example of one CERT advisory posted about problems with the most common DNS server software used on the Internet, see www.cert.org/advisories/CA-2001-02.html. The history in this advisory shows that it took about eight months for the patch for the previous advisory in this family (issued in November 1999) to be installed on most DNS servers around the world. This site also has histories of more recent advisories.

10 For more information on cryptography, see the FAQ at www.rsa.com.

11 If you use Windows, you can encrypt files on your hard disk: Just use the Help facility and search on encryption to learn how.

12 There are several versions of 3DES. One version (called 3DES-EEE) simply encrypts the message three times with different keys as one would expect. Another version (3DES-EDE) encrypts with one key, decrypts with a second key (i.e., reverse encrypts), and then encrypts with a third key. There are other variants, as you can imagine.

13 The rules have been changed several times in recent years, so for more recent information, see www.bis.doc.gov.

14 Rivest, Shamir, and Adleman have traditionally been given credit as the original developers of public key encryption (based on theoretical work by Whitfield Diffie and Martin Hellman), but recently declassified material has revealed that public key encryption was actually first developed years earlier by Clifford Cocks based on theoretical work by James Ellis, both of whom were employees of a British spy agency.

15 For more on the PKI, go to www.ietf.org and search on PKI.

16 For example, Cisco posts the public keys it uses for security incident reporting on its Web site; go to www.cisco.com and search on “security incident response.” For more information on PGP, see www.pgpi.org and www.pgp.com.

17 This is done using the Diffie-Hellman process; see the FAQ at www.rsa.com.

18 For more information about social engineering and many good examples, see The Art of Deception by Kevin Mitnick and William Simon.
