Chapter 4. Where the Exposures Lie

Now that we have examined the lurking threat to computer security and analyzed the profiles of potential hackers, we need to look at where the holes lie in systems and networks that allow these hackers to be successful. These security holes, which can be due to misconfiguration or poor programming, should be identified for several reasons. First, common security holes are the areas the organization should address quickly. You need to either close the hole or learn more about it in order to mitigate the risk created by the exposure. Second, the common holes are the areas you need to look for during your penetration test. These holes are often called the “low-hanging fruit” because they are fairly easy to identify and exploit.

Breaking into systems can be relatively simple if someone has not properly patched and secured the systems against the latest vulnerabilities. Keeping systems up to date has become increasingly difficult with larger multi-OS distributed networks and smaller staff budgets. The issue facing administrators trying to keep systems up to date is that 20–70 new vulnerabilities are published each month on Bugtraq, eSecurityonline, and other vulnerability services. Unfortunately, hackers have a window of opportunity between the time someone publishes the vulnerability and the time the vulnerability is patched or addressed on the systems. The longer this window stays open, the greater the odds of compromise. One of the keys to keeping your network secure is to constantly monitor for emerging vulnerabilities and to patch your systems against them. The more responsive administrators are to closing the holes, the more secure your systems will be.

Configuration errors create a risk that enables attackers to penetrate systems. Examples of configuration errors include leaving unnecessary services open, assigning incorrect file permissions, and using poor controls for passwords and other settings that a system administrator can set. Organizations can reduce configuration errors by creating baseline standards and configuration management procedures. In addition, proper penetration testing will identify many configuration holes that could allow an attacker to gain access to systems.

There is no way to close all possible access points to a network. With enough time or money, any system could be compromised. However, keeping patches up to date and testing your systems will effectively close 80–90 percent of the holes.

Our experience with testing system security has revealed exposures that consistently resurface in multiple companies. Consequently, we have developed a list of common security holes that we have successfully exploited. The list is not all inclusive, but it can serve as a starting point for organizations taking steps to secure their systems. Organizations should look for these and other vulnerabilities when performing penetration testing.

Not surprisingly, many of the holes we list in this chapter are the same as those published by the System Administration, Networking, and Security (SANS) Institute in October 2001. The SANS Institute did an excellent job of consolidating its list to the top 20 high-risk vulnerabilities. Our list covers many of the SANS items plus other holes we have found to affect networks. The SANS list is an excellent reference, and a complete copy of the report can be found in Appendix B.

Some of the vulnerabilities we list below enabled us to directly compromise the target systems, while others provided information that helped us develop our attack. Some of the holes are specific, while others cover larger, more general issues. We follow the list with a description of each vulnerability and, where applicable, give countermeasures to help close the hole.

Application Holes

Application holes are a general category of specific programming errors or oversights that allow hackers to penetrate systems. (Throughout the list we separately cover holes in specific applications, such as sendmail, that we are able to exploit frequently.) As part of a penetration test you identify the applications running on remote systems. Once they are identified, you can search for vulnerabilities and exploits that affect those applications. Application identification is often performed by capturing the application's banner, which frequently reveals version information. By searching vulnerability databases and the Web for exploits specific to these versions, you can often find exploits or procedures that can lead to a system compromise. For example, in one engagement we were initially unable to gain access to any of the systems in the company's demilitarized zone (DMZ), but we did identify several applications and versions that were running on the systems. After performing some research, we discovered a vulnerability in the Compaq Web management service that enabled us to capture the backup SAM file from the system's repair directory. The system OS was patched and configured correctly. However, the applications running on the system were not.
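
As an illustration of how simple banner grabbing can be, the following Python sketch connects to a few services that announce themselves on connection and prints whatever version string they volunteer. The address, ports, and timeout are placeholders, not values from any engagement:

    import socket

    def grab_banner(host, port, timeout=5):
        """Connect to host:port and return the first chunk of data the service sends."""
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                sock.settimeout(timeout)
                return sock.recv(1024).decode(errors="replace").strip()
        except OSError:
            return None

    # FTP, SMTP, and POP3 all send a greeting banner; the address is a placeholder.
    for port in (21, 25, 110):
        banner = grab_banner("192.0.2.10", port)
        if banner:
            print(port, banner)

The version strings returned by a loop like this are exactly what you feed into a vulnerability database search.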

Berkeley Internet Name Domain (BIND) Implementations

BIND is a common package used to provide domain name service. Systems use DNS to resolve host names to IP addresses and vice versa. The SANS list names BIND as one of the top security threats. Since BIND is so widely distributed and the DNS servers on which it is installed are usually accessible from the Internet, it is a common target for attacks. Unfortunately, many versions of BIND are vulnerable to exploits that enable hackers to gain control of the system or extract information that will help exploit the DNS server or other systems. The BIND vulnerabilities commonly found include buffer overflows and denial-of-service attacks.

BIND should be limited to only those servers that are performing a DNS role. These servers should run the latest version of BIND, and a process should be in place to keep them up to date. In addition, BIND can be run under a nonprivileged account and should be installed in a chroot()ed directory structure.

Common Gateway Interface (CGI)

CGI vulnerabilities can be found on many Web servers. CGI programs make Web pages interactive by enabling them to collect information, run programs, or access files. Vulnerable CGI programs normally run with the same privileges as the Web server software. Therefore, a hacker who can exploit CGI programs can deface Web pages, attempt to steal information, or compromise the system.

Developers need to think about the security implications of the CGI programs they develop and incorporate security into them. CGI programs should run with the minimum privileges needed to complete the operations they were designed to accomplish. Also, Web servers should not run as the system's root or administrator. Interpreters used with CGI scripts, such as “perl” and “sh,” should be removed from CGI program directories. Leaving these interpreters in CGI program directories allows attackers to execute malicious CGI scripts. Scanning software such as vulnerability scanners or CGI scanners can also help find CGI vulnerabilities and provide information to correct them. More information on vulnerability scanners and CGI scanners can be found in Chapters 11 and 17, respectively.

Clear Text Services

Clear text (unencrypted data) services represent another weakness in networks. Clear text services transmit all information, including user names and passwords, in unencrypted format. Hackers with sniffers (tools that passively view network traffic) can identify user name and password pairs and use them to gain unauthorized access. Services such as HTTP basic authentication, e-mail, file transfer protocol (FTP), and telnet are examples of services that transmit all communications in clear text. A hacker with a sniffer could easily capture the user name and password from the network without anyone's knowledge and gain administrator access to the system.
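
HTTP basic authentication is a good illustration: the credentials are only base64 encoded, which is a reversible encoding rather than encryption. A short Python example (with made-up credentials) shows how trivially a captured Authorization header is reversed:

    import base64

    # What a browser places in the Authorization header for user "alice", password "secret":
    header_value = base64.b64encode(b"alice:secret").decode()
    print(header_value)                             # YWxpY2U6c2VjcmV0

    # Anyone who sniffs the header recovers the credentials instantly:
    print(base64.b64decode(header_value).decode())  # alice:secret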

You should avoid using clear text services. Secure services that encrypt communications, such as Secure Shell (SSH) and Secure Sockets Layer (SSL), should be used instead. Additionally, network segmentation using switches and routers can help defend against sniffing. You can find more information on sniffers in Chapter 14.

Default Accounts

Some applications install with default accounts and passwords. In some instances, the installation documentation specifies a default user ID and password that the installer is expected to change later. Most of these default accounts have default passwords associated with them, and even if administrators have changed the default passwords on these accounts, the accounts themselves are common targets for attack. Hackers know these default account names and use them as a starting point for brute force attacks and password guessing. The hacker can supply the default account to a brute force tool so that the tool then has to find only the correct password. Often these default application accounts have administrator privileges. Therefore, once a hacker compromises the account, he or she has administrator rights over the system. System administrators should rename or delete these default accounts so that they are less likely to become targets for attackers.
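
During a test, trying a short list of default accounts against a management interface takes only a few lines of code. The sketch below uses HTTP basic authentication as an example; the URL and the credential pairs are purely illustrative, since real default accounts are specific to each application:

    import requests

    # Illustrative defaults only; consult vendor documentation for the real ones.
    DEFAULT_CREDENTIALS = [
        ("admin", "admin"),
        ("administrator", "password"),
        ("root", "root"),
        ("guest", "guest"),
    ]

    def try_default_logins(url):
        """Return the first default user/password pair the page accepts, if any."""
        for user, password in DEFAULT_CREDENTIALS:
            response = requests.get(url, auth=(user, password), timeout=10)
            if response.status_code == 200:
                return user, password
        return None

    # Hypothetical management interface on a host you are authorized to test.
    print(try_default_logins("http://192.0.2.20/admin/"))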

Domain Name Service (DNS)

While the DNS software BIND has vulnerabilities associated with it, the DNS service in general also has exposures that affect security. Systems use DNS to resolve host names to IP addresses and vice versa. Unfortunately, many servers are configured to provide too much information about a network. For instance, a DNS server can be misconfigured to allow zone transfers, by which an attacker can obtain host information about an entire domain. In addition, DNS records may provide unnecessary information, such as the addresses of internal servers, TXT records, secondary system names, and system roles, that an attacker could use to formulate an attack.

Organizations should verify the information their DNS servers are providing to ensure no unnecessary information can be obtained from the Internet. In addition, administrators should configure DNS servers to restrict zone transfers. Discovery tools are helpful for performing zone transfers and DNS queries to review the information provided by the server.

Unfortunately, since these servers need to be accessible from the Internet in order to provide the service, they are also a popular target for attackers. Steps should be taken to make sure the DNS server has been securely configured and that the system (hardware, operating system, and any applications running on it) is updated and monitored for vulnerabilities. Zone transfers should be limited to specific IP addresses that require the ability to update zone information. Vulnerability scanners and discovery tools can be used to help identify exposures in DNS implementations. You can find more information on these tools in Chapters 11 and 12, respectively.
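
As a quick check, a zone transfer can be requested directly. The sketch below assumes the third-party dnspython package; the name server address and domain are placeholders:

    import dns.query
    import dns.zone

    def attempt_zone_transfer(nameserver, domain):
        """Request a full zone transfer (AXFR); a properly restricted server refuses it."""
        try:
            zone = dns.zone.from_xfr(dns.query.xfr(nameserver, domain, timeout=10))
        except Exception:
            return None   # transfer refused or failed, which is the desired outcome
        return sorted(str(name) for name in zone.nodes.keys())

    # Placeholder name server and domain for a network you are authorized to test.
    names = attempt_zone_transfer("192.0.2.53", "example.com")
    if names:
        print("Zone transfer allowed; %d names exposed." % len(names))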

File Permissions

Improper file permissions can be the source of several vulnerabilities. File permissions determine not only what the user has access to but also what programs that user can run. Additionally, since some programs run under the context of a higher-level user, misconfiguration of these programs might allow a user to elevate his or her access. Sometimes directories are made world writable or are configured to give full control to the “everyone” group, leaving hackers with an open door into the systems. You should regularly review file permissions and set them at the most restrictive level possible while still achieving the desired result of the sharing operation.
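
A simple script can flag the most obvious problem, world-writable files and directories. This is a minimal sketch for UNIX-style permissions; the starting directory is just an example:

    import os
    import stat

    def find_world_writable(root):
        """Walk a directory tree and report entries writable by everyone."""
        findings = []
        for dirpath, dirnames, filenames in os.walk(root):
            for name in dirnames + filenames:
                path = os.path.join(dirpath, name)
                try:
                    mode = os.lstat(path).st_mode
                except OSError:
                    continue       # skip entries that vanish or cannot be read
                if mode & stat.S_IWOTH:
                    findings.append(path)
        return findings

    for path in find_world_writable("/var/www"):   # example starting point
        print("world-writable:", path)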

FTP and telnet

We mentioned FTP and telnet earlier under clear text services, but they have other security exposures in addition to transmitting information in unencrypted format. If an attacker can obtain access to a login prompt for FTP or telnet, he or she may be able to use brute force to guess a user name and password. In addition, anonymous FTP is frequently open on systems running FTP. Normally the anonymous user can obtain only read access, but even read access can yield valuable information that will enable the hacker to exploit more systems. Improperly configured anonymous FTP may allow write access or enable the attacker to access directories other than the FTP directory (for example, /etc/passwd or /winnt/repair/sam._).
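
Checking for anonymous FTP is straightforward to script. The sketch below, using Python's standard ftplib module, logs in as the anonymous user and lists the top-level directory (the host address is a placeholder):

    from ftplib import FTP, error_perm

    def check_anonymous_ftp(host):
        """Return a directory listing if anonymous FTP login succeeds, else None."""
        try:
            ftp = FTP(host, timeout=10)
            ftp.login("anonymous", "test@example.com")
        except (error_perm, OSError):
            return None
        listing = ftp.nlst()
        ftp.quit()
        return listing

    # Placeholder address; test only systems you are authorized to assess.
    print(check_anonymous_ftp("192.0.2.30"))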

Also, many versions of FTP have vulnerabilities that can lead to compromise of the system. For example, WFTP is reported to be vulnerable to several buffer overflows that enable an attacker to execute code on the host or to view files and directory structures. The FTP server that was included with older versions of Solaris was susceptible to a buffer overflow that could enable an attacker to recover passwords for local users. You should research the version of the FTP server you are running to see whether any vulnerabilities are associated with it.

If telnet and FTP are not needed on a system, they should be removed. Also, rather than using services like FTP and telnet, administrators should use products such as SSH that encrypt the entire session. In addition, system administrators could limit access to the login prompts for these applications to specific IP addresses using TCP wrappers or similar programs.

ICMP

We have found many organizations fail to block ICMP at the border router or firewall. ICMP is best known for the ping utility, as well as for its use in many denial-of-service tools. In addition, other vulnerabilities are associated with ICMP, such as obtaining the network mask, time stamp, and other useful information. Several scanner programs are configured, by default, to not scan systems that are unresponsive to pings. Disabling ICMP makes it more difficult for unskilled hackers to scan the network. Ping and traceroute, which use ICMP, are often used to troubleshoot systems by determining whether the systems' network interface cards are functioning or where, in a network path, communications errors may be occurring. However, attackers can use ping to identify systems as targets. The attacker can also use traceroute to map network paths to systems.
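
The sketch below shows the basic ping-sweep idea using the third-party scapy package (it needs root privileges to send raw ICMP packets); the address range is a placeholder:

    from scapy.all import IP, ICMP, sr1

    def ping_sweep(prefix, start=1, end=20):
        """Send one ICMP echo request per address and report which hosts answer."""
        live = []
        for host in range(start, end + 1):
            address = "%s.%d" % (prefix, host)
            reply = sr1(IP(dst=address) / ICMP(), timeout=1, verbose=0)
            if reply is not None:
                live.append(address)
        return live

    # Placeholder range; blocking ICMP at the border makes this sweep come back empty.
    print(ping_sweep("192.0.2"))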

While ICMP is useful for troubleshooting, its necessity should be carefully reviewed. ICMP should be denied at the border router and firewall. If it is necessary, ICMP should be limited to the select hosts that need it for troubleshooting.

IMAP and POP

IMAP and POP are mail protocols that enable users to remotely access e-mail. Since these protocols are designed and used for remotely accessing mail, holes are frequently open in the firewall allowing IMAP and POP traffic to pass into and out of the internal network. Because this access is open to the Internet, hackers frequently target these protocols for attack. Many exploits are available that enable hackers to gain root access to systems running IMAP and POP protocols.

To defend against these exploits, system administrators should first remove IMAP and POP from the systems that do not need these services. Additionally, system administrators should ensure they are running the latest versions of the software and should monitor for and obtain all system patches.

Modems

Rogue modems on user desktop machines represent another back door into corporate networks, usually unknown to system administrators. In addition, we have found several instances where some system administrators used modems to connect to internal corporate systems from their homes. In some cases, employees put modems on their desktop PCs when they left for the day so they could continue working or Internet surfing from home. The systems containing these unknown modems are often poorly configured and are susceptible to attacks. Hackers use brute force dialing programs called war dialers to scan ranges of corporate phone numbers to identify modems. Some war dialer programs can also identify the type of system to which the modem is connected. Hackers can exploit such a modem connection to gain access to the system and use it as an entry point into the network. Poorly controlled or unknown modems contribute to a major security weakness in today's corporate environment.

Organizations should develop strong policies against the use of unauthorized modems. Security administrators should routinely scan their company's phone number blocks looking for unknown modems and identifying the response of known modems. Authentication for authorized modems should be strengthened to two-factor or token-based authentication. War dialing and dial-up penetration testing are covered in more detail in Chapter 6.

Lack of Monitoring and Intrusion Detection

Lack of monitoring and intrusion detection is another common hole that enables attackers to penetrate systems undetected. Many of the organizations we have encountered do not have monitoring in place, have it improperly configured, or do not review it on a regular basis. Without proper monitoring, attacks can go unnoticed. If not detected, an attacker can perform more intrusive techniques to compromise the systems. Given enough time the attacker can probe the systems until he or she finds a weakness. In addition, the attacker can run brute force tools until successful or until someone finally notices the attack. Proper monitoring and intrusion detection are essential to security. We cover monitoring and intrusion detection in greater detail in Chapter 19.

Network Architecture

In several engagements poor network architecture has enabled us to bypass firewalls and other controls to obtain access to the internal network. A secure network architecture should be designed to segment the internal network from the Internet and filter all traffic through a firewall (see Figure 4-1). Also, publicly accessible systems such as Web servers, DNS servers, and mail relays should be located in secure DMZs. The organizations we have found that did not follow these best practices experienced weaknesses that enabled us to obtain unauthorized access. For instance, several organizations have dual-homed hosts in the DMZ. A dual-homed host is one that has a second network card connected to another network segment and is not intended to act as a router. In these instances, the second network card was connected to the internal network. Therefore, by exploiting the dual-homed host in the DMZ we were able to access the internal network without having to penetrate the firewall. In other cases, publicly accessible systems were placed in front of the firewall with no protection. To make matters worse, administrators allowed some of these systems to communicate with internal systems through the firewall. By compromising these external systems, we were able to go through the firewall (since the rules permitted these hosts to communicate with internal systems) to internal systems. Administrators should not allow systems in DMZs to initiate communications with internal systems.

Figure 4-1. Network architecture diagram

For instance, a DMZ system should not be allowed to FTP to an internal system. The internal system should FTP to the DMZ system. In this way, if an attacker compromises a DMZ system, he or she is less likely to be able to access the internal network.

The essential point is that network architectures need to be designed properly to enforce proper security policies. Organizations should not allow DMZ systems to have dual-homed connections to internal networks. Firewall rules should not permit external systems or DMZ systems to connect to the internal network. Chapter 20 describes network architecture in greater detail.

Network File System (NFS)

NFS is used for sharing files and drives on UNIX systems. Exported NFS file systems that are accessible from the Internet are an open target for hackers. Improperly configured permissions on NFS shares can give attackers read access to sensitive information or even write access. For instance, an attacker could write an entry to an “.rhosts” file to permit his or her IP address to rlogin to the system. Additionally, other vulnerabilities are associated with NFS. Vulnerabilities within versions of the NFS daemon, “nfsd,” enable attackers to access file systems with root privileges.

If NFS is needed, ensure it is configured properly. The port used to access networked file shares, normally 2049, should be blocked at the firewall and on filtering routers. Additionally, permissions should be set appropriately to control access. Finally, you should install the latest patches for the NFS services you are using and constantly monitor for newly published vulnerabilities and system patches for NFS.
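
A quick way to review what an NFS server exposes is to query its export list. This sketch simply wraps the standard showmount utility; the host address is a placeholder:

    import subprocess

    def list_nfs_exports(host):
        """Ask the host's mount daemon for its export list via the showmount utility."""
        try:
            result = subprocess.run(
                ["showmount", "-e", host],
                capture_output=True, text=True, timeout=15,
            )
        except (OSError, subprocess.TimeoutExpired):
            return None
        return result.stdout if result.returncode == 0 else None

    # An export list readable from outside the firewall is itself a finding.
    print(list_nfs_exports("192.0.2.40"))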

NT Ports 135–139

File sharing on NT systems is just as vulnerable as on UNIX. NT systems share files and communicate over NetBIOS ports 135–139. On Windows 2000 systems the communications port is 445. All unnecessary ports should be blocked at the firewall, but administrators should verify that these ports (135–139 and 445) are closed. These ports allow for enumeration of users, open shares, and system information. In addition, these ports enable attackers to use many of the “NET” commands listed in Chapter 16. Hackers frequently scan the Internet for file-sharing ports 135–139, 2049, and 445. Any site with these ports open will most likely become a target for attacks.
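
Verifying that these ports are closed from the outside is a simple TCP connect test. The sketch below checks the TCP file-sharing ports (137 and 138 are primarily UDP and are omitted); the host address is a placeholder:

    import socket

    FILE_SHARING_PORTS = [135, 139, 445, 2049]   # TCP ports for NetBIOS, SMB, and NFS

    def check_sharing_ports(host):
        """Report which common file-sharing ports accept a TCP connection."""
        open_ports = []
        for port in FILE_SHARING_PORTS:
            try:
                with socket.create_connection((host, port), timeout=2):
                    open_ports.append(port)
            except OSError:
                pass
        return open_ports

    # From the Internet side of the firewall, this list should come back empty.
    print(check_sharing_ports("192.0.2.50"))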

NT Null Connection

Related to NT file sharing is the NT null connection, which we felt was important enough to mention separately. A null connection is an anonymous connection, with no password, to the NT default interprocess communication share IPC$. With a null connection, attackers are able to connect to this IPC$ share and enumerate critical information about the NT systems. Hackers can gather this information either manually using NET commands or with tools such as DumpSec. Attackers are able to obtain a list of all users on the system, their account statuses, account policies, share information, registry settings, and other information that is useful in building attacks.

To defend against this attack, set the RestrictAnonymous registry key. This can be accomplished by following the steps below.

  1. Launch the regedt32 Registry Editor.

  2. Locate the following registry key:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\LSA
    
  3. Create or modify the value of RestrictAnonymous. A REG_DWORD value of 1 will enable this feature.

  4. Exit the Registry Editor and restart the computer for the change to take effect. Null connections can still be established but no information can be obtained.
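
To verify the setting afterwards, the value can be read back from the registry. The following sketch assumes Python's winreg module is available on the host being checked:

    import winreg   # standard library, Windows only

    def restrict_anonymous_value():
        """Read the RestrictAnonymous value; returns None if it has not been set."""
        key_path = r"SYSTEM\CurrentControlSet\Control\Lsa"
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
            try:
                value, _value_type = winreg.QueryValueEx(key, "RestrictAnonymous")
            except FileNotFoundError:
                return None
        return value

    if not restrict_anonymous_value():
        print("RestrictAnonymous is not set; null sessions can enumerate this host.")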

Poor Passwords and User IDs

One of the biggest vulnerabilities affecting systems today is weak passwords. This is a problem that will go away only with the use of stronger authentication systems, such as digital certificates, one-time passwords, and two-factor authentication. Even though there are techniques for remembering secure passwords, users often select easy-to-remember, insecure passwords. This is often due to a lack of security awareness and enforcement of strong passwords. New password-cracking programs are so effective that any word in the dictionary can be cracked in minutes. Simple permutations of dictionary words, such as spelling them backwards, adding a number to the beginning or end of the word, and other simple manipulations of the word, are almost as susceptible as the original dictionary word. Users often make it even easier for hackers by selecting very simple passwords such as names, dates, sports teams, or other significant facts that can be easily guessed.
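
To see why dictionary words and their simple permutations offer so little protection, consider the guessing loop at the heart of any password cracker. The sketch below uses an unsalted MD5 hash purely for illustration; real crackers work against LM/NTLM or UNIX crypt hashes, but the logic is identical:

    import hashlib

    def candidates(word):
        """Yield a dictionary word plus the trivial permutations users rely on."""
        yield word
        yield word[::-1]               # spelled backwards
        yield word.capitalize()
        for digit in "0123456789":
            yield word + digit         # trailing digit
            yield digit + word         # leading digit

    def crack(target_hash, wordlist):
        for word in wordlist:
            for guess in candidates(word):
                if hashlib.md5(guess.encode()).hexdigest() == target_hash:
                    return guess
        return None

    # Illustration only: hash a weak password locally, then "recover" it in milliseconds.
    stolen = hashlib.md5(b"dragon7").hexdigest()
    print(crack(stolen, ["password", "dragon", "letmein"]))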

System administrators are at times just as guilty as users in selecting poor passwords or even sharing passwords. On several occasions, we have found administrator passwords that were very simple so that several administrators could remember them. In addition, we have encountered situations where system administrators did not regularly change the administrator password because so many systems would have to be updated and many other administrators notified. Thus, the accounts that are most powerful are frequently just as easy to compromise. Administrators should belong to an “Admin” group with individual passwords. On UNIX, each administrator should log into his or her own account and use the su command to change to root.

Users and administrators need to select strong passwords consisting of metacharacters and nondictionary words. Passwords should be set to expire often, and password history should prevent users from reusing old passwords. One way to test password strength is to use password-cracking tools such as L0phtCrack or John the Ripper (see Chapter 15 for further information on password cracking). In addition, system administrators should use utilities such as the NT passfilt.dll to force users to select strong passwords. On Windows NT, system administrators should also use Syskey encryption to further secure the password files. Syskey adds a second layer of encryption to the password hashes on NT systems, making them harder to obtain. On UNIX systems, administrators should use password shadowing. Password shadowing makes the UNIX passwords accessible only to root.

Poor passwords are just part of a larger problem involving weak authentication methods. Many systems rely on user names and passwords, personal identification numbers, or cookies (a digital identifier used by many Web applications to maintain sessions or identify users) for authentication. These means of authentication can be easy to bypass, enabling a hacker to obtain unauthorized access to an account, data, or services. Authentication methods that securely identify users are key to improving security. Digital certificates, public key infrastructure (PKI), biometrics, and smart cards are all examples of authentication methods that are generally considered very secure. These improved methods of authentication involve the principle of something you possess and something you know. If your method of authentication relies solely on something you know (a password) or something you possess (a token), either one could be stolen or compromised. By requiring both means of authentication, something you know and something you possess, or a biometric feature based on something you are (like a fingerprint), the authentication process becomes much more secure. The problem is that many of these authentication mechanisms are still being refined or are very expensive and complex to implement.

Remote Administration Services

Another common vulnerability originates from the method in which system administrators manage remote systems. We have already discussed the insecurities of using FTP and telnet, but other relatively secure remote control programs also have vulnerabilities associated with them. We have come across several system administrators who use programs such as pcAnywhere and Virtual Network Computing (VNC) for remote system administration. Administrators might install these services with improper or insufficient security controls. By exploiting these services, hackers could gain administrator access to the systems.

If system administrators are going to use remote administration tools, they should make sure the tools are secure. The tools should encrypt all communications, support strong authentication, lock out accounts after several invalid login attempts, and support logging to detect unauthorized access attempts. For desktop machines, the programs should force the user to accept the remote connection before establishing it. In addition, access to these remote administration programs should be limited to specific IP addresses of administrator terminals.

Remote Procedure Call (RPC)

RPC services are another area where we commonly find new exploits. RPC enables a program on one system to execute procedures on a remote system. RPCs are common in network environments, especially where file sharing such as NFS is being used. Unfortunately, there are holes in RPC that enable hackers to exploit the service. RPC vulnerabilities can be used for denial-of-service attacks or to enable attackers to gain unauthorized access to the system.

Administrators should not use RPC services on systems directly connected to the Internet. The firewall should block all RPC services so that remote attackers cannot access them from the Internet. To defend against the internal threat, administrators should remove RPC services from any system that does not need them. On systems that need RPC services, it becomes critical to update and patch the system. Vulnerability scanners and port scanners can help identify RPC services running on the network. Chapters 11 and 13 cover these tools in greater detail.

Sendmail

Sendmail is another service that may be installed by default on some UNIX systems. While sendmail is an SMTP implementation, it is deployed widely enough and has enough vulnerabilities that we felt it should be covered independently. It has been a favorite target for hackers over the years since there are numerous exploits associated with it. The exploits include commands designed to send spam mail, to extract password files, and to invoke a denial of service. Patches have been developed to address almost all known vulnerabilities, and the latest versions of sendmail should include these patches. There have been instances when sendmail was running on a system without the system administrator's knowledge. Therefore, you may want to check the installed services and, if sendmail is running and not needed, remove it. If you do need sendmail, upgrade to the latest version and keep current with patches.

Services Started by Default

Many times when an application or even an operating system is installed, services are installed and started without the knowledge of the installer. For instance, some installations of UNIX start several services, such as sendmail, FTP, rstat, rspray, and rmount, that are not normally required and may open vulnerabilities on the system. Many installations of Windows NT include Internet Information Server (IIS), even when it is not needed. Turnover in the system administrator community is common, and the new system administrator may not identify the services running on each system. Because of this, the new system administrator may have no idea that vulnerable services are running on a system. Penetration testing can often reveal services running on systems of which the administrator was not aware. This information can be extrapolated to other systems to secure similar installations.

Read the documentation to learn of any services that may be installed by the software package and test the system after the installation. New system administrators should determine what services are running on the servers for which they are responsible. In addition, system administrators should periodically scan servers with port scanners to verify no new services have been started. Finally, all unnecessary ports should be blocked at the firewall so that a remote attacker on the Internet cannot access a service that was mistakenly started.

Simple Mail Transport Protocol (SMTP)

SMTP is another service that is a popular target since it is accessible from the Internet. There are many different implementations of SMTP including sendmail, which we have covered in its own category. Each implementation of SMTP has its own vulnerabilities, but they are usually similar. The vulnerabilities involve commands designed to relay mail through the server, buffer overflows, and denial-of-service attacks.
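
A common relay test is to ask the server to accept mail from one outside address to another and watch the reply code. The sketch below uses Python's standard smtplib; the host and the two domains are placeholders:

    import smtplib

    def looks_like_open_relay(host):
        """Ask the server to relay between two outside domains and check its reply."""
        server = smtplib.SMTP(host, 25, timeout=15)
        try:
            server.ehlo()
            server.mail("probe@outside-a.example")
            code, _message = server.rcpt("probe@outside-b.example")
        finally:
            server.quit()
        # A 250 or 251 reply for an external recipient suggests the server will relay.
        return code in (250, 251)

    print(looks_like_open_relay("192.0.2.60"))   # placeholder address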

Patches have been developed to address most known vulnerabilities, and the latest versions of the software should include these patches. System administrators should constantly monitor for and apply the latest patches for their SMTP servers.

Simple Network Management Protocol (SNMP) Community Strings

Improperly configured SNMP devices can yield useful information to hackers or enable them to gain unauthorized access to the network. SNMP is used to manage network devices such as routers, hubs, and switches. SNMP devices can be configured for read only or read/write SNMP access. Access to these privileges is controlled by the use of relatively insecure community strings. A community string is essentially a password used to access SNMP. The default community strings are set to “public” (read) and “private” (read/write) and sometimes have been changed to another easily guessed word. Any user who can access the SNMP device could supply the community string and gain access to the SNMP device. If a user can gain write access to the device, he or she may be able to reconfigure it, shut it down, or install unauthorized services as back doors. If a user can only gain read access, he or she can still obtain valuable network and system information that may enable the attacker to compromise the actual SNMP device or other hosts on the network.
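
Testing for default community strings can be as simple as wrapping the net-snmp snmpget utility and requesting the system description object. The host address is a placeholder; add any other strings you want to try:

    import subprocess

    COMMON_STRINGS = ["public", "private"]   # the usual defaults

    def guess_community_strings(host):
        """Return the community strings the device accepts for the sysDescr.0 object."""
        accepted = []
        for community in COMMON_STRINGS:
            try:
                result = subprocess.run(
                    ["snmpget", "-v1", "-c", community, host, "1.3.6.1.2.1.1.1.0"],
                    capture_output=True, text=True, timeout=10,
                )
            except (OSError, subprocess.TimeoutExpired):
                continue
            if result.returncode == 0:
                accepted.append(community)
        return accepted

    print(guess_community_strings("192.0.2.70"))   # placeholder address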

To defend against SNMP insecurities, system administrators should configure SNMP devices to respond only to secret, unique, difficult-to-guess community strings. Additionally, all SNMP access should be blocked at the firewall, and SNMP access should be controlled through the use of access control lists (ACLs) on internal and external routers. Information about tools for testing SNMP can be found in Chapter 12.

Viruses and Hidden Code

We have already discussed the amount of devastation viruses can wreak on systems. Melissa, ILOVEYOU (also known as the Love Bug), and other viruses shut down companies for days while they dealt with cleanup and recovery. The threat from viruses varies with the type of malicious activity they attempt to perform. Some viruses are only simple annoyances, while others enable remote attackers to gain unauthorized access to systems. The widespread problems resulting from these viruses demonstrate hackers' abilities to hide malicious code relatively well. They also show how easy it is for users to unknowingly execute this code and compromise the security of the company. Virus-scanning products are quite advanced now, but the scanners are only as good as their virus definitions. Virus scanners must be constantly updated. Additionally, many new viruses may not yet appear in the definitions and may be missed. Virus-scanning tools that employ heuristics and sandboxes should be used to attempt to catch these undefined viruses. Heuristics involve looking for code or programs that resemble or could potentially be viruses. Sandboxes actually execute the code in a quarantined environment and examine what the program does. If the program appears to be a virus, the virus package quarantines the program and raises an alert. The heuristics and sandboxes hopefully catch newly developed exploits and viruses that may not have been included in the most recent virus definitions update.

Hidden code is directly related to viruses. A hacker can trick users into executing hidden code that will open access for the hacker into the internal network or system. The code can be hidden in a number of ways. Hackers can hide malicious Java or ActiveX code on a remote Web server, and users could unknowingly execute this code while browsing the site. Hackers also frequently hide malicious code in e-mails or e-mail attachments. The malicious programs and scripts commonly open holes in the victim's system, enabling the hacker to effectively bypass firewalls and other perimeter controls and directly access the internal network.

System administrators need to take a layered approach to defend against this threat. First, users must be educated not to accept and open e-mail and attachments from unknown sources. Perimeter virus and heuristics scanning should be installed at the network's border so that all incoming e-mail, attachments, and Internet downloads are scanned before they are allowed to enter the network. By employing a layered scanning defense (heuristics, gateway scanning, and desktop scanning), security administrators hopefully will be able to catch viruses that may have bypassed one or two layers of the defense. Finally, administrators should configure users' browsers not to run remote Java and ActiveX scripts.

Web Server Sample Files

Almost every type of Web server software installs sample files by default. Microsoft IIS, Apache, ColdFusion, Netscape, and others all install sample files to assist in the installation and maintenance of the server or to provide an example of how to use the software. While these files are often useful to first-time developers or administrators, the sample files are often susceptible to exploits. Several well-known exploits have been developed for these sample files, such as the IIS showcode.asp exploit. Hackers exploit the known code contained in these sample files to perform unauthorized functions. Since hackers have direct access to these files on other systems and know the exact locations where the sample files will be placed on the server, they can develop detailed, surgical attacks targeting these files.
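
Checking for leftover sample files amounts to requesting a handful of well-known paths and noting which ones answer. The list below is illustrative and far from complete (showcode.asp is the classic IIS example; the other paths are assumptions that vary by product and version), and the server address is a placeholder:

    import requests

    SAMPLE_PATHS = [
        "/msadc/Samples/SELECTOR/showcode.asp",   # classic IIS sample-file exposure
        "/iissamples/",
        "/scripts/samples/",
        "/cfdocs/",
    ]

    def find_sample_files(base_url):
        """Report which well-known sample paths the Web server answers for."""
        present = []
        for path in SAMPLE_PATHS:
            response = requests.get(base_url.rstrip("/") + path, timeout=10)
            if response.status_code == 200:
                present.append(path)
        return present

    print(find_sample_files("http://192.0.2.80"))   # placeholder server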

The best defense against these types of attacks is to remove all sample files on the Web server. If the sample files are needed, move them to a different location and ensure that they are not on production systems. In addition, scan the systems with a vulnerability scanner to help identify vulnerabilities associated with the Web server software.

Web Server General Vulnerabilities

There are many general vulnerabilities on Web servers such as Microsoft's IIS, Netscape, Apache, and others. Since these systems are accessible from the Internet, they have been targets for attackers. IIS seems to have been a favorite target for hackers, but most complex Web servers also have vulnerabilities associated with them. The vendors are very responsive in providing patches to address new vulnerabilities as they are discovered. However, if the patch is not applied quickly, the system is at risk. A quick search for exploits associated with each of these Web-hosting applications yields several responses. Many of these Web exploits enable attackers to gain administrative privileges over the server.

Many of the popular vulnerability scanners are fairly accurate in detecting vulnerabilities on Web servers. However, the safest way to ensure protection is to keep up to date on the system patches.

Monitoring Vulnerabilities

We have touched on many of the more common vulnerabilities found in today's computing environment. There are numerous other vulnerabilities associated with operating systems and applications. We have seen a common theme in our recommended procedures to deal with each vulnerability—monitor for and install system patches as they become available. Each month between 20 and 70 new vulnerabilities are published on the Internet. There is a critical time period between the publication of the vulnerability and the application of the patch that needs to be managed. In addition, security monitoring of intrusion detection systems and system logs can detect attacks as they occur and enable the organization to respond accordingly. Appropriate incident response procedures may prevent the attack from being successful or may help to minimize and contain any potential damage.

While vendors are generally responsive in publishing newly discovered vulnerabilities and the patches or procedures to address them, system administrators do not have time to visit each vendor Web site on a daily or even weekly basis. There are mailing lists such as CERT, Bugtraq, and others that will notify subscribers as new vulnerabilities are published. However, the e-mails cover all systems and can be overwhelming to read and sort through. Fortunately, there are services to help system administrators monitor and locate system patches. Vulnerability subscription services provide information on new vulnerabilities as they are published. The level of information included with the services varies from a straight listing of vulnerabilities to searchable databases to customized profiles that e-mail you when a new vulnerability affecting your profile is published. Subscribing to or monitoring one of these services is the only way to keep up to date with emerging vulnerabilities. There are several free services that publish new vulnerabilities as they are found. Sites such as SecurityFocus (www.securityfocus.com), eSecurityonline (www.esecurityonline.com), and the ICAT site (http://csrc.nist.gov/icat/) run by the Computer Security Division of the National Institute of Standards and Technology (NIST), pictured in Figure 4-2, contain searchable databases of vulnerabilities. Searchable databases enable administrators to look for new vulnerabilities related to products they use. Many of the databases enable a user to search by operating system, application, severity, date, and other fields.

Figure 4-2. ICAT vulnerability database

While these searchable vulnerability databases provide a starting point for system administrators trying to track new vulnerabilities, they do not completely solve the problem. One of the biggest problems for the system administrator trying to monitor newly emerging vulnerabilities is time. Even using sites that e-mail vulnerabilities tends to overwhelm administrators with e-mail of vulnerabilities that do not pertain to the systems under their control. Using services that are customizable and notify system administrators when a new vulnerability emerges that affects their systems is a way administrators can save time in addressing vulnerabilities on a regular basis.

Cutting down on the work involved with vulnerability monitoring is a step in the right direction. However, to eliminate the exposures to new vulnerabilities, an enforcement mechanism is needed to validate that identified vulnerabilities are addressed and repaired in a timely manner. Testing using the techniques and tools described in this book is one method of enforcement. Even these steps require quite a bit of structure and coordination to be effective over time. Automated security scans and monitoring cut down on the time required to determine whether security exposures have been addressed. Regular scans using tools such as CyberCop, ISS Internet Scanner, or Nessus will help in this area. Configuration management tools such as Symantec's Omniguard Enterprise Security Manager (ESM) provide another enforcement mechanism. These tools are not cheap, but the implications of not plugging security holes regularly are not cheap either. Vulnerability scanning tools are discussed in further detail in Chapter 11.
