We’ve come a long way since the 1980s, when Richard Stallman advocated using a carriage return as your password—and a long, sad trip it’s been. Today’s highly connected systems and the very existence of the Internet have provided exponential increases in productivity. The downside of this connectivity is that it also provides infinite opportunities for malicious intruders to crack your systems. The goals in attempting this range from curiosity to industrial espionage, but you can’t tell who’s who or take any chances. It’s the responsibility of every system administrator to make sure that the systems that they’re responsible for are secure and don’t end up as worm-infested zombies or warez servers serving up bootleg software and every episode of SG-1 to P2P users everywhere.
The hacks in this chapter address system security at multiple levels. Several discuss how to set up secure systems, detect network intrusions, and lock out hosts that clearly have no business trying to access your machines. Others discuss software that enables you to record the official state of your machine's filesystems and catch changes to files that shouldn't be changing. Another hack discusses how to automatically detect well-known types of Trojan horse software that, once installed, let intruders roam unmolested by hiding their existence from standard system commands. Together, the hacks in this chapter discuss a wide spectrum of system security applications and techniques that will help you not only minimize or (hopefully) eliminate intrusions, but also protect yourself if someone does manage to crack your network or a specific box.
Many network services that may be enabled by default are both unnecessary and insecure. Take the minimalist approach and enable only what you need.
Though today's systems are powerful and have gobs of memory, optimizing the processes they start by default is a good idea for two primary reasons. First, regardless of how much memory you have, why waste it by running things that you don't need or use? Second, and more importantly, every service you run on your system is a point of exposure, a potential cracking opportunity for the enlightened or lucky intruder or script kiddie.
There are three standard places from which system services can be started on a Linux system. The first is /etc/inittab. The second is scripts in the /etc/rc.d/rc?.d directories (/etc/init.d/rc?.d on SUSE and other more LSB-compliant Linux distributions). The third is the Internet daemon, which is usually inetd or xinetd. This hack explores the basic Linux startup process, shows where and how services are started, and explains easy ways of disabling superfluous services to minimize the places where your systems can be attacked.
Changes to /etc/inittab itself are rarely necessary, but this file is the key to most of the startup processes on systems such as Linux that use what is known as the “Sys V init” mechanism (this startup mechanism was first implemented on AT&T’s System V Unix systems). The /etc/inittab file initiates the standard sequence of startup scripts, as described in the next section. The commands that start the initialization sequence for each runlevel are contained in the following entries from /etc/inittab. These run the scripts in the runlevel control directory associated with each runlevel:
l0:0:wait:/etc/rc.d/rc 0
l1:1:wait:/etc/rc.d/rc 1
l2:2:wait:/etc/rc.d/rc 2
l3:3:wait:/etc/rc.d/rc 3
l4:4:wait:/etc/rc.d/rc 4
l5:5:wait:/etc/rc.d/rc 5
l6:6:wait:/etc/rc.d/rc 6
When the init process (the seminal process on Linux and Unix systems) encounters these entries, it runs the startup scripts in the directory associated with its target runlevel in numerical order, as discussed in the next section.
As shown in the previous section, there are usually seven rc?.d directories, numbered 0 through 6, that are found in the /etc/init.d or the /etc/rc.d directory, depending on your Linux distribution. The numbers correspond to the Linux runlevels. A description of each runlevel, appropriate for the age and type of Linux distribution that you're using, can be found in the init man page. (Thanks a lot, Debian!) Common runlevels for most Linux distributions are 3 (multi-user text) and 5 (multi-user graphical).
The directory for each runlevel contains symbolic links to the actual scripts that start and stop various services, which reside in /etc/rc.d/init.d or /etc/init.d. Links that begin with S will be started when entering that runlevel, while links that begin with K will be stopped (or killed) when leaving that runlevel. The numbers after the S or K determine the order in which the scripts are executed, in ascending order.
The easiest way to disable a service is to remove the S script that is associated with it, but I tend to make a directory called DISABLED in each runlevel directory and move the symlinks to start and kill scripts that I don’t want to run there. This enables me to see what services were previously started or terminated when entering and leaving each runlevel, should I discover that some important service is no longer functioning correctly at a specified runlevel.
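For example, parking the links for a service at runlevel 3 might look something like the following (the runlevel directory path and the S85httpd/K15httpd link names are placeholders; check your own runlevel directory for the actual names):

# cd /etc/rc.d/rc3.d
# mkdir DISABLED
# mv S85httpd K15httpd DISABLED/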
One of the startup scripts in the directory for each runlevel starts the Internet daemon, which is inetd on older Linux distributions or xinetd on most newer Linux distributions. The Internet daemon starts specified services in response to incoming requests and eliminates the need for your system to permanently run daemons that are accessed only infrequently. If your distribution is still using inetd and you want to disable specific services, edit /etc/inetd.conf and comment out the line related to the service you wish to disable. To disable services managed by xinetd, cd to the /etc/xinetd.d directory, which contains its service control files, and edit the file associated with the service you no longer want to provide. To disable a specific service, set the disable entry in the stanza in its control file to yes. After making changes to /etc/inetd.conf or any of the control files in /etc/xinetd.d, you'll need to send a HUP signal to inetd or xinetd to cause it to restart and re-read its configuration information:
# kill -HUP PID
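As a sketch of what a disabled xinetd service looks like, a trimmed /etc/xinetd.d/telnet control file might read as follows (the exact attributes and server path vary by distribution):

service telnet
{
        disable         = yes
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/sbin/in.telnetd
}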
Many Linux distributions provide tools that simplify managing rc scripts and xinetd configuration. For example, Red Hat Linux provides chkconfig, while SUSE Linux provides this functionality within its YaST administration tool.
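For instance, on a Red Hat-style system you can list and disable services without touching the symlinks or xinetd files by hand (the service names here are only examples):

# chkconfig --list
# chkconfig telnet off
# chkconfig --level 35 httpd off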
Of course, the specific services each system requires depends on what you’re using it for. However, if you’re setting up an out-of-the-box Linux distribution, you will often want to deactivate default services such as a web server, an FTP server, a TFTP server, NFS support, and so on.
Running extra services on your systems consumes system resources and provides opportunities for malicious users to attempt to compromise your systems. Following the suggestions in this hack can help you increase the performance and security of the systems that you or the company you work for depend upon.
—Lance Tost
Using the power of your text editor, you can quickly lock out malicious systems.
When running secure services, you’ll often find that you want to allow and/or deny access to and from certain machines. There are many different ways you can go about this. For instance, you could implement access control lists (ACLs) at the switch or router level. Alternatively, you could configure iptables or ipchains to implement your access restrictions. However, a simpler method of implementing access control is via the proper configuration of the /etc/hosts.allow and /etc/hosts.deny files. These are standard text files found in the /etc directory on almost every Linux system. Like many configuration files found within Linux, they can appear daunting at first glance, but with a little help, setting them up is actually quite easy.
Before we jump into writing complex network access rules, we need to spend a few moments reviewing how the Linux access control software works. Inbound connections handled by tcpd, the TCP wrappers daemon, are filtered through the rules in hosts.allow first, and then, if there are no matches, they are checked against the rules in hosts.deny. It's important to note this order, because if you have contradictory rules in each file, the rule in hosts.allow will always win: the first match is found there, filtering stops, and the incoming connection is never checked against hosts.deny. If a matching rule is not found in either file, access is granted.
In their most simple form, the lines in each of these files should conform to the following format:
daemon-name: hostname or ip-address
Here’s a more recognizable example:
sshd: 192.168.1.55, 192.168.1.56
If we inserted this line into hosts.allow, all SSH traffic between our local host and 192.168.1.55 and 192.168.1.56 would be allowed. Conversely, if we placed it in hosts.deny, no SSH traffic would be permitted from those two machines to the local host. This would seem to limit the usability of these files for access control—but wait, there’s more!
The Linux TCP daemon provides an excellent language and syntax for configuring access control restrictions in the hosts.allow and hosts.deny files. This syntax includes pattern matching, operators, wildcards, and even shell commands to extend the capabilities. This might sound confusing at first, but we’ll run through some examples that should clear things up. Continuing with our previous SSH example, let’s expand the capabilities of the rule a bit:
# hosts.allow
sshd: .foo.bar
In the example above, take note of the leading dot. This tells Linux to match anything with .foo.bar in its hostname. In this example, both www.foo.bar and mail.foo.bar would be granted access. Alternatively, you can place a trailing dot to filter anything that matches the prefix:
# hosts.deny
sshd: 192.168.2.
This would effectively block SSH connections from every address between 192.168.2.1 and 192.168.2.255. Another way to block a subnet is to provide the full network address and subnet mask in the xxx.xxx.xxx.xxx/mmm.mmm.mmm.mmm format, where the xs represent the network address and the ms represent the subnet mask.
A simple example of this is the following:
sshd: 192.168.6.0/255.255.255.0
This entry is equivalent to the previous example but uses the network/subnet mask syntax.
Several other wildcards can be used to specify client addresses, but we'll focus on the two that are most useful: ALL and LOCAL. ALL is the universal wildcard. Everything will match this, and access will be granted or denied based on which file you've used it in. Being careless with this wildcard can leave you open to attacks that you would normally think you're safe from, so make sure that you mean to open up a service to the world when you use it in hosts.allow. LOCAL is used to specify any hostname that doesn't have a dot (.) within it. This can be used to match against any entries contained in the local /etc/hosts file.
Now that we’ve mastered all that, let’s move on to a more complex setup. We’ll set up a hosts.allow configuration that allows SSH connections from anywhere and restricts HTTP traffic to our local network and entries specifically configured in our hosts file. As intelligent sysadmins, we know that telnet shares many of the same security features as string cheese, so we’ll use hosts.deny to deny telnet connections from everywhere as well.
First, edit hosts.allow to read:
sshd: ALL
httpd: LOCAL, 192.168.1.0/255.255.255.0
Next, edit hosts.deny to read:
telnet: ALL
As you can see, securing your machine locally isn’t that hard. If you need to filter on a much more complicated scale, employing network-level ACLs or using iptables to create specific packet-filtering rules might be appropriate. However, for simple access control, the simplicity of hosts.allow and hosts.deny can’t be beat.
One thing to keep in mind is that it is typically bad practice to perform this kind of filtering upon hostnames. If you rely on hostnames, you’re also relying on name resolution. Should your network lose the ability to resolve hostnames, you could potentially leave yourself wide open to attack, or cause all your protected services to come to a screeching halt as all network traffic to them is denied. Usually, it’s better to play it safe and stick to IP addresses.
Wouldn’t it be cool if we could set up a rule in our access control files that alerted us whenever an attempt was made from an unauthorized IP address? The hosts.allow and hosts.deny files provide a way to do just that! To make this work, we’ll have to use the shell command option from the previously mentioned syntax. Here’s an example hosts.deny config to get you started:
sshd: 192.168.2. : spawn (/bin/echo illegal connection attempt from %h %a to %d %p at `date` >> /var/log/unauthorized.log | tee /var/log/unauthorized.log | mail root)
Using this command in our hosts.deny file will append the hostname (%h), address (%a), daemon process (%d), and PID (%p), as well as the date and time, to the file /var/log/unauthorized.log. Traditionally, the finger or safe_finger commands are used; however, you're certainly not limited to these.
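If you do want to follow tradition, a booby trap along the lines of the classic safe_finger example from the TCP wrappers documentation might look like this in hosts.deny (the daemon name and paths are illustrative; adjust them for your system):

in.telnetd: ALL : spawn (/usr/sbin/safe_finger -l @%h | /usr/bin/mail -s "%d attempt from %h" root) &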
Let snort watch for network intruders and log attacks—and alert you when problems arise.
Security is a big deal in today’s connected world. Every school and company of any decent size has an internal network and a web site, and they are often directly connected to the Internet. Many connected sites use dedicated firewall hardware to allow only certain types of access through certain network ports or from certain network sites, networks, and subnets. However, when you’re traveling and using random Internet connections from hotels, cafes, or trade shows, you can’t necessarily bank on the security that your academic or work environment traditionally provides. Your machine may actually be on the Net, and therefore a potential target for script kiddies and dedicated hackers anywhere. Similarly, if your school or business has machines that are directly on the Net with no intervening hardware, you may as well paint a big red bull’s-eye on yourself.
Most Linux distributions nowadays come with built-in firewalls based on the in-kernel packet-filtering rules that are supported by the most excellent iptables package. However, these can be complex even to iptables devotees, and they can also be irritating if you need to use standard old-school transfer and connectivity protocols such as TFTP or telnet, since these are often blocked by firewall rule sets. Unfortunately, this leads many people to disable the firewall rules, which is the conceptual equivalent of dropping your pants on the Internet. You’re exposed!
This hack explores the snort package, an open source software intrusion detection system (IDS) that monitors incoming network requests to your system, alerts you to activity that appears to be spurious, and captures an evidence trail. While there are a number of other popular open source packages that help you detect and react to network intruders, none is as powerful, flexible, and actively supported as snort.
The source code for snort is freely available from its home page at http://www.snort.org. At the time this book was written, the current version was 2.4. Because snort needs to be able to capture and interpret raw Ethernet packets, it requires that you have the Packet Capture library and headers (libpcap) installed on your system. libpcap is installed as a part of most modern Linux distributions, but it is also available in source form from http://www.tcpdump.org.
You can configure and build snort with the standard configuration, build, and install commands used by any software package that uses autoconf:
$ tar zxf snort-2.4.0.tar.gz
$ cd snort-2.4.0
$ ./configure
[much output removed]
$ make
[much output removed]
As with most open source software, installing into /usr/local is the default. You can change this behavior by specifying a new location using the configure command's --prefix option. To install snort, su to root or use sudo to install the software to the appropriate subdirectories of /usr/local using the standard make install command:
# make install
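For example, to keep the entire installation under a single directory tree of your choosing (the path below is arbitrary), you could configure and install like this:

$ ./configure --prefix=/opt/snort
$ make
# make install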
At this point, you can begin using snort in various simple packet capture modes, but to take advantage of its full capabilities, you’ll want to create a snort configuration file and install a number of default rule sets, as explained in the next section.
snort is a highly customizable IDS that is driven by a combination of configuration statements and loadable rule sets. The default snort configuration file is /etc/snort.conf, though you can use a configuration file in any location by specifying its full path and name with the snort command's -c option. The snort source package includes a generic configuration file that is preconfigured to load many sets of rules, which are also available from the snort web site at http://www.snort.org/pub-bin/downloads.cgi.
To get up-to-the-minute rule sets, subscribe to the latest snort updates from the SourceFire folks, the people who wrote, support, and update snort. Subscriptions are explained at http://www.snort.org/rules/why_subscribe.html. This is generally a good idea, especially if you’re using snort in a business environment, but this hack focuses on using the free rule sets that are also available from the snort site.
It’s perfectly fine to create your own configuration file, but since the template provided with the snort source is quite complete and shows how to take advantage of many of the capabilities of snort, we’ll focus on adapting the template configuration file to your system.
To begin customizing snort, su to root and create two directories that we'll use to hold information produced by and about snort:

# mkdir -p /var/log/snort
# mkdir -p /etc/snort/rules
The /var/log/snort directory is required by snort; this is where alerts are recorded and packet captures are archived. The /etc/snort directory and its subdirectories are where I like to centralize snort configuration information and rules. You can select any location that you want, but the instructions in this hack will assume that you’re putting everything in /etc/snort.
Next, cd to /etc/snort and copy the files snort.conf and unicode.map to the parent directory (/etc). The /etc directory is the default location specified in the source code for these core snort configuration files. As we'll see in the rest of this hack, we'll put everything else in our own /etc/snort directory.
Now you can bring up the file /etc/snort.conf in your favorite text editor (which should be emacs, by the way), and start making changes.
First, set the value of the HOME_NET variable to the base value of your home or business network. This prevents snort from logging outbound and generic intermachine communication on your network unless it triggers an IDS rule.
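For example, if your internal network is 192.168.1.0/24 (substitute your own address range), the declaration is simply:

var HOME_NET 192.168.1.0/24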
If the machine on which you'll be running snort gets its IP address via DHCP, you can set HOME_NET using the declaration var HOME_NET $eth0_ADDRESS, which sets the variable to the IP address assigned to your Ethernet interface. Note that this will require restarting snort if the interface goes down and comes back up while snort is running.
Next, set the variable EXTERNAL_NET to identify the hosts/networks from which you want to monitor traffic. To avoid logging local traffic between hosts on the network, the most convenient setting is !$HOME_NET:
var EXTERNAL_NET !$HOME_NET
Forgetting the $ is a common mistake that will generate an error about snort not being able to resolve the address HOME_NET. Make sure you include the $ so that snort references the value of the $HOME_NET variable, not the string HOME_NET.
If your network runs various servers, the next step is to update the configuration file to identify the hosts on which they are running. This enables snort to focus on looking for certain types of attacks on systems that are actually running those services. snort provides a number of variables for various services, all of which are set to the value of the HOME_NET variable by default:
# List of DNS servers on your network
var DNS_SERVERS $HOME_NET
# List of SMTP servers on your network
var SMTP_SERVERS $HOME_NET
# List of web servers on your network
var HTTP_SERVERS $HOME_NET
# List of sql servers on your network
var SQL_SERVERS $HOME_NET
# List of telnet servers on your network
var TELNET_SERVERS $HOME_NET
# List of snmp servers on your network
var SNMP_SERVERS $HOME_NET
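For example, if the only web servers on your network were 192.168.1.10 and 192.168.1.11 (addresses purely illustrative), you could narrow the relevant variable like this:

var HTTP_SERVERS [192.168.1.10/32,192.168.1.11/32]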
Next, copy the classification.config and reference.config files to /etc/snort and set the include statements for these in snort.conf to point to the full paths of these files:
include /etc/snort/classification.config
include /etc/snort/reference.config
Now set the value of the RULE_PATH variable in the snort configuration file to /etc/snort/rules (this variable can point anywhere, of course, but I prefer to centralize as much of the snort configuration information in /etc/snort as possible):
var RULE_PATH /etc/snort/rules
Finally, configure snort’s output plug-ins to log rule transgressions (known as alerts) however you’d like. By default, snort enables you to log alerts to the system log and various databases, and also makes it easy for you to define custom alert mechanisms. I’ll focus on using the system log, since that’s the most common (and generic) logging mechanism. To enable logging alerts to the system log (/var/log/messages), simply uncomment the following line in /etc/snort.conf:
output alert_syslog: LOG_AUTH LOG_ALERT
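If you'd rather log to a database, the configuration file template also contains commented-out examples of the database output plug-in. An entry along these lines (the credentials and database name are placeholders, and the plug-in must have been compiled in) would log to a local MySQL database instead:

output database: log, mysql, user=snort password=snortpass dbname=snort host=localhost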
Almost there! You’re now ready to download and install the rules files that are referenced in your snort configuration file. As mentioned previously, you should seriously consider subscribing to these if you’re using snort in an enterprise environment, both in order to support further development of snort and because it’s simply the right thing to do. For the purposes of this hack, you can retrieve and install the free (unregistered user) rules files from http://www.snort.org/pub-bin/downloads.cgi by searching the page for the “unregistered user release” section and retrieving a gzipped tarball of the rules that match the version of snort you’ve built.
To install these rules, change directory to your /etc/snort directory and su to root or use sudo to extract the contents of the tarball with a standard tar incantation:
$ cd /etc/snort
$ sudo tar zxvf /home/wvh/snortrules-pr-2.4.tar.gz
This will create rules and doc subdirectories in /etc/snort. (Again, these rules can actually live anywhere on your system, since their location is identified by the RULE_PATH variable in the snort configuration file. We set this variable to /etc/snort/rules earlier.)
At this point, you’re ready to run snort. Though snort offers a daemon mode, it’s generally useful to run it in interactive mode from the command line until you’re sure you’ve made the correct modifications to your /etc/snort.conf file. To do this, execute the following command:
# snort -A full
You’ll see a lot of output as snort parses your configuration file and rule sets. If you’ve done everything right and not made any typos, this output will conclude with the following block of output:
        --== Initialization Complete ==--

   ,,_     -*> Snort! <*-
  o"  )~   Version 2.4.0 (Build 18) x86_64
   ''''    By Martin Roesch & The Snort Team: http://www.snort.org/team.html
           (C) Copyright 1998-2005 Sourcefire Inc., et al.
If you see this, all is well and snort is running correctly. If not, correct the problems identified by the snort error messages (which are usually quite good), and try the snort command again until snort starts correctly.
One especially common and irritating message when getting started using snort is the following:
socket: Address family not supported by protocol
You will see this message if your system's kernel is not configured to support the CONFIG_PACKET option, which enables applications (the packet capture library, in this case) to read directly from network interfaces. This capability can be compiled directly into the kernel, but it's more commonly built as a loadable kernel module (LKM) with the name af_packet.ko (af_packet.o if you're still running a pre-2.6 Linux kernel).
If this capability is provided as an LKM on your system, you can generally load it by executing the modprobe af_packet command as root or via sudo. If modprobe doesn't work for some reason, you can load the module directly using the insmod command. The name of the appropriate /lib/modules subdirectory where the module is located is contingent on the version of the kernel you're running, which you can determine by executing the uname -r command. For example:
# uname -r
2.6.11.4-21.8-default
# insmod /lib/modules/2.6.11.4-21.8-default/kernel/net/packet/af_packet.ko
Testing Snort
The fact that snort is running without complaints is all well and good, but executing correctly isn't the same thing as doing what you want it to do. It's therefore useful to actually test snort by triggering one of its rules. The easiest of these to trigger are the port scan rules. To test these, connect to a machine outside your network and issue the nmap command, identifying the machine on which you're running snort as the target, as in the following example:
$ nmap -P0 24.3.53.235
Starting nmap V. 2.54BETA31 ( www.insecure.org/nmap/ )
Warning: You are not root -- using TCP pingscan rather than ICMP
Nmap run completed -- 1 IP address (0 hosts up) scanned in 60 seconds
You can now check /var/log/snort, in which you should see a file named alert with contents like the following:
[**] [122:17:0] (portscan) UDP Portscan [**]
09/14-20:53:16.024463 24.3.53.235 -> 192.168.6.64
RAW TTL:0 TOS:0xC0 ID:29863 IpLen:20 DgmLen:163
You will also see a directory with the name 24.3.53.235. This directory contains logs of the offending packets that triggered the alert. Congratulations! snort is working correctly.
If you have port forwarding active on a home or business gateway, you’ll probably see a file with the IP address of the gateway instead of the IP address of the host from which you did the port scan.
Once you’re satisfied that snort is working correctly, you’ll probably want to terminate the interactive snort session we started earlier and restart snort in daemon mode, using the following command:
# snort -A full -D
This starts snort in the background and sends its initialization messages to /var/log/messages. To add this command to your system's startup mechanisms, either append it to a startup script such as /etc/rc.local or integrate it into the standard system startup process by creating a start/stop script in /etc/init.d and adding the appropriate symbolic links to the rc<runlevel>.d directory that corresponds to the default runlevel for the system on which you're running snort.
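For the rc.local approach, a single line like the following is usually enough (the snort binary path depends on the --prefix you chose at build time, and some distributions keep the file at /etc/rc.d/rc.local):

/usr/local/bin/snort -A full -D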
You can extend snort in an infinite number of ways. One of the easiest is to take advantage of more of its default capabilities by activating additional rule sets that are provided in the bundle that you downloaded but are commented out of the default snort configuration file template. Some of my favorites to uncomment are the following:
include $RULE_PATH/web-attacks.rules
include $RULE_PATH/backdoor.rules
include $RULE_PATH/shellcode.rules
include $RULE_PATH/virus.rules
Once you uncomment these and restart snort, you’ll probably start to see additional snort alerts such as the following:
[**] [1:651:8] SHELLCODE x86 stealth NOOP [**]
[Classification: Executable code was detected] [Priority: 1]
09/15-04:49:32.299135 70.48.80.189:6881 -> 192.168.6.64:52757
TCP TTL:109 TOS:0x0 ID:53803 IpLen:20 DgmLen:1432 DF
***AP*** Seq: 0x1869E9D1 Ack: 0x18F60ED8 Win: 0xFFFF TcpLen: 32
TCP Options (3) => NOP NOP TS: 719694 594700245
[Xref => http://www.whitehats.com/info/IDS291]
Better to know about attempted attacks than to be blissfully unaware! Of course, whether or not you want to monitor your network for these types of attacks is entirely dependent on your site’s network policies—which is why they’re commented out of the snort configuration file template. Your mileage may vary, but I find these quite useful.
snort is an extremely powerful, flexible, and configurable intrusion detection system. This hack focused on getting it up and running in a standard fashion—explaining how to create your own rules and take advantage of all of its capabilities would require its own book. Actually, a number of books on snort are available, as well as extensive discussions in more general networking texts such as O’Reilly’s own Network Security Hacks, by Andrew Lockhart.
If you’re interested in a simpler network-monitoring package, PortSentry (http://sourceforge.net/projects/sentrytools/) is one of the best known, though it hasn’t been updated for quite a while now. However, snort is a much more powerful tool and is actively under development. Newer snort developments include the ability to actively respond to certain types of attacks by sending certain types of packages (known as flexresp, or flexible response) and increasing integration with dynamic notification tools on both the Linux and Windows platforms. In today’s connected world, you can’t really afford not to firewall your hosts and scan for clever folks that can still punch through your defenses. In the open source world, there’s no better tool for the latter task than snort.
“Monitor Network Traffic with MRTG” [Hack #79]
Network Security Hacks, by Andrew Lockhart (O’Reilly)
man snort
Snort Central: http://www.snort.org
The Tripwire program is a great intrusion-detection system, but it can also be a pain to configure. Save yourself time and trouble with these tips and tricks.
Do you ever wake up in a cold sweat at night, worrying about someone compromising your servers? Have you ever found yourself wondering if the ls binary that you execute on your machine is actually telling you the truth about the files in your home directory? If so, welcome to the wonderful world of system administrator paranoia. And here’s a tip: you should look into the possibility of deploying an intrusion-detection system on your servers so that you can rest easy every night.
There are many different types of IDS out there. Some focus on analyzing incoming network connections, some simply monitor logs and send alerts to sleeping sysadmins, and others analyze the binaries, configuration files, and libraries on a system and notify sysadmins of any changes. Tripwire is an excellent example of the third type of IDS software. It creates a database of the characteristics of the files in your filesystem and can then monitor the integrity of every single file and directory on your server. But while such security can be massively reassuring to the paranoid sysadmin, it doesn’t come without a cost. Tripwire can be a beast to set up and configure properly, and hours of tweaking may be required to tune it properly for your filesystem. However, with a little bit of help, you can have Tripwire running strong on your system without too much effort.
Obviously, the first step is to obtain and install the software. You have two options for this. The first, and by far the easiest, is to use your package management software to install Tripwire. Alternatively, you can install from an RPM available on a third-party site. The procedure I’m going to go through is for installing Tripwire on Fedora Core 4 via the RPM available on an independent Fedora software site, but the procedure should be similar for any other RPM-based distribution.
First, download the RPM from http://rpm.chaz6.com/?p=fedora/tripwire/tripwire-2.3.1-18.fdr.3.1.fc4.i686.rpm. Install it as normal from the command line:
# rpm -Uvh tripwire-2.3.1-18.fdr.3.1.fc4.i686.rpm
If you don’t have any unsatisfied dependencies, the RPM will successfully load Tripwire onto your system.
Now that the application is installed, take a moment to become familiar with the configuration files that control Tripwire. There are two main files, and we’ll cover each of them in detail.
The file /etc/tripwire/twcfg.txt controls the environment and manner in which Tripwire operates. It is in this file that you can specify alternate installation directories, the location of the policy and database files, where to output reports, and where to find the site and local keys so that everything can be securely signed. The following is a sample twcfg.txt file:
ROOT                   =/usr/sbin
POLFILE                =/etc/tripwire/tw.pol
DBFILE                 =/var/lib/tripwire/$(HOSTNAME).twd
REPORTFILE             =/var/lib/tripwire/report/$(HOSTNAME)-$(DATE).twr
SITEKEYFILE            =/etc/tripwire/site.key
LOCALKEYFILE           =/etc/tripwire/$(HOSTNAME)-local.key
EDITOR                 =/bin/vi
LATEPROMPTING          =false
LOOSEDIRECTORYCHECKING =false
MAILNOVIOLATIONS       =true
EMAILREPORTLEVEL       =3
REPORTLEVEL            =3
MAILMETHOD             =SENDMAIL
SYSLOGREPORTING        =false
MAILPROGRAM            =/usr/sbin/sendmail -oi -t
Most of the directives within this file are self-explanatory; however, there are a few that can be somewhat misleading. My favorites are:
LATEPROMPTING
Controls how long Tripwire will wait before asking for a password. If this option is set to true, Tripwire will wait as long as possible before prompting the user for a password. This limits the password’s time of exposure within system memory, therefore keeping it more secure.
LOOSEDIRECTORYCHECKING
Controls how Tripwire reports changes to files inside monitored directories. If this is set to false and a file within a watched directory changes, Tripwire will notify you that both the directory and the file have changed. When set to true, it will simply notify you that the file has changed. This option is present to prevent you from becoming inundated with redundant messages within the Tripwire reports.
MAILNOVIOLATIONS
Instructs Tripwire whether or not to email you even if everything has checked out okay. When set to true, Tripwire will send you email just to let you know everything is okay. When set to false, only problem reports are sent.
EMAILREPORTLEVEL
Configures the level of detail that Tripwire should report. Experiment with this one and see how you prefer it. Alternatively, you may override this option when launching Tripwire from the command line.
MAILMETHOD
Enables you to identify how Tripwire reports are delivered via email. There are two possible values: SMTP, for using an open SMTP relay, and SENDMAIL, for using your own Sendmail server. This variable should be configured to reflect the configuration of your network and mail servers.
MAILPROGRAM
Tells Tripwire where to find the mail program you want it to use to send out email notifications.
SYSLOGREPORTING
Tells Tripwire whether or not it should report its findings to syslog. If you enable this, your syslog configuration determines where those messages are ultimately routed.
Now that we’ve configured how Tripwire will execute and behave, let’s examine the configuration file that controls how and what it analyzes.
The file /etc/tripwire/twpol.txt tells Tripwire how you want your filesystem monitored. This file can seem overwhelming at first, but don’t panic! It’s actually quite straightforward once you know what you’re looking at. Tripwire includes a sample configuration file on which you can base your configuration. In our case some tweaking will be needed, as this template file is geared toward a default Red Hat system.
The first part of the configuration file that you should pay attention to is the section labeled @@section FS. This section provides the details that should be taken into account when checking different types of files. For instance, SIG_HI is used to monitor files that are critical aspects of a system's overall vulnerability, including binaries devoted to kernel modification, IP and routing commands, and a host of other applications. Another good one to pay attention to is SEC_LOG, which notes ownership permissions, inodes, and other attributes. Files watched by this parameter will not trip the alarm if their file sizes change, as log files often do.
The best way to learn the syntax of the Tripwire policy file is by modifying an existing config file. We won’t go into much detail here—Tripwire is powerful and complex enough that a complete explanation of effective Tripwire policies deserves a book of its own—but we will go through one simple modification.
Since this file is based on a default Red Hat installation, YaST would not be protected if we were to install it on a SUSE box. Let’s make some minor changes to the twpol.txt file to fix that:
# protect the yast binaries
(
  rulename = "Watch Yast Binaries",
  severity = $(SIG_CRIT)
)
{
  /sbin/yast   -> $(SEC_CRIT) ;
  /sbin/yast2  -> $(SEC_CRIT) ;
  /sbin/zast   -> $(SEC_CRIT) ;
  /sbin/zast2  -> $(SEC_CRIT) ;
}
This is a very simple rule that doesn’t take advantage of even a quarter of Tripwire’s customization features. In this case, the entries between the opening parentheses define the name of the rule and its severity. The parentheses are followed by a list of binaries to check, enclosed within curly braces.
As you can imagine, creating a perfect Tripwire policy will take some trial and error. You’ll need to take into account every application that you have installed and make sure that they’re being adequately monitored. Start with the sample policy, and begin adding and modifying from there. It will take a few runs, but sooner or later you’ll end up with a perfect policy for your system. For more information on generating a strong policy and a full explanation of the features, consult the man page for Tripwire and the official open source Tripwire documentation at http://sourceforge.net/project/shownotes.php?release_id=18142.
Once you have Tripwire configured, you need to perform a couple of steps before you can run it. To begin, cd to /etc/tripwire and run the Tripwire installation script:
# ./twinstall.sh
Once you've done this, you'll need to accept the license agreement by typing accept at the prompt. After you've accepted the license terms, you'll then move on to generating the site and local keys. These are keys that Tripwire uses to sign your configuration files, policies, and the filesystem database. Be sure to use good, strong passphrases for this:
----------------------------------------------
Creating key files…
(When selecting a passphrase, keep in mind that good passphrases typically
have upper and lower case letters, digits and punctuation marks, and are
at least 8 characters in length.)

Enter the site keyfile passphrase:
Verify the site keyfile passphrase:
Generating key (this may take several minutes)…Key generation complete.
Once the key files have been generated, you’ll have to enter your site and local passphrases again so that Tripwire can sign your configuration files. Using your unique passphrase to generate a key to sign the important application files ensures that no one will be able to replace your configuration files with doctored ones that might ignore suspicious activity. Signing them also keeps them from being read in plain text.
Once everything is installed, the next step is to initialize your Tripwire database. Do this by running the following command:
# /usr/sbin/tripwire --init
When you do this for the first time, you’re likely to get a lot of errors. This is OK; you’ll just need to note what errors come up and fix them in the policy file. It might take several minutes to fully initialize your Tripwire database, so don’t worry if you think it’s taking too long.
Once the database has been initialized, you’ll want to run your first integrity check:
# /usr/sbin/tripwire --check
Again, this will take a few minutes, but when it’s done you can examine the report that it generates on stdout for changes that have occurred within your filesystem.
Once you’ve done that, there’s not much to do but fine-tune your policy file and add Tripwire to cron to run as often as you want. To add Tripwire to root’s list of nightly cron jobs, run the following command as root:
# crontab -e
This will open root’s crontab file in your default text editor. Add the following line, substituting the appropriate path:
0 1 * * * /path/to/tripwire --check
This will schedule Tripwire to run every night at 1 A.M. Running Tripwire once per night is usually sufficient (especially because, depending on the complexity of your Tripwire configuration file, it can take a long time to run).
As you make changes to your twpol.txt and twcfg.txt files, you'll need to use the twadmin tool to re-encrypt them with your passphrase. To recreate your policy file, use the following syntax:
# /usr/sbin/twadmin --create-polfile -S site.key /etc/tripwire/twpol.txt
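If you've edited twcfg.txt as well, the analogous twadmin mode regenerates the signed configuration file; under the same assumptions about file locations, that looks like the following:

# /usr/sbin/twadmin --create-cfgfile -S site.key /etc/tripwire/twcfg.txt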
You should follow a few simple policies and procedures in order to keep your Tripwire installation secure. First, don’t leave the twpol.txt and twcfg.txt files that you used to generate your Tripwire database on your hard drive. Instead, store them somewhere off the server. If your system’s security is compromised, as long as these files aren’t available the intruder will not be able to view them to identify any unmonitored parts of your filesystem. Second, it’s a good idea to change the Tripwire configuration and policy files so that your database is stored on some form of read-only media, such as a CD. This prevents anyone from being able to recreate your database with modifications, thus hiding root-kits or other malware. And finally, don’t wait until your machine has been exposed to the Internet to install and configure Tripwire. It will serve you best when it’s been installed on a clean machine and is able to begin keeping track of your filesystem from a fresh install. This way, you can be assured that you’re not monitoring a system that has already been compromised.
While it might seem at first that Tripwire is too overwhelming to bother with, this is not actually the case. The policy file is good at scaring people off, and the default settings and initial setup can generate a lot of noise and strange error messages. However, with a little bit of work and some exploration of your own filesystem, you can learn quite a bit about how your system operates while you configure Tripwire. In addition, Tripwire has many uses outside the security realm. For example, you can use Tripwire to ensure that an application uninstalls all of its components or to identify all the changes made when you install an RPM. The possible uses for Tripwire are endless, and after you’ve mastered it, it can be an incredibly powerful tool for monitoring and maintaining your systems.
—Brian Warshawsky
Monitor filesystem integrity with this easy-to-use tool.
Online security concerns grow every day as new viruses and worms are released. Because of this, it is now more important than ever to monitor your server's filesystem for signs of compromise. "Tame Tripwire" [Hack #66] introduced intrusion detection systems and discussed using the filesystem integrity checker Tripwire to monitor the multitude of changes that occur within your filesystem. Tripwire is an excellent tool, but to many people the steep learning curve is a big turnoff in deploying it. If for whatever reason Tripwire isn't for you, other integrity checkers are available. This is Linux, after all! Afick (Another File Integrity Checker) is one such tool that provides numerous configuration methods, including a perl/tk GUI and a Webmin module. This hack will get you up and running using Afick while your other sysadmin friends are still reading the Tripwire manual.
There are few dependencies involved in deploying Afick. Since Afick is written in Perl, you’ll obviously need to have Perl and its libraries installed. Beyond that, simply download the source code from http://afick.sourceforge.net, unpack it to your favorite build location, and run the installation as follows:
# perl Makefile.pl
If you don’t want to install the perl/tk GUI, you can ignore any warnings you may see regarding missing perl/tk modules.
Once Perl has finished processing the Makefile, run the following command to actually install the software:
# make install
Now that we’ve built and installed Afick, let’s configure it and put it through its paces.
The first step in configuring Afick to suit your filesystem is editing the Afick configuration file, which determines what attributes of your filesystem Afick pays attention to when scanning, and thus how it knows when to alert you to specific changes. Afick provides a default configuration file, but as every system is different, you should not depend on it to keep your server safe. Ultimately, fine-tuning Afick to match your filesystem will be a process of trial and error.
To start this process, first take a look at the Afick configuration file, which is called linux.conf and is located in the directory where you unpacked Afick. The configuration file contains several sections, two of which are of particular interest to us. The file is presented and laid out in a very user-friendly manner, making the sections of the file very easy to differentiate.
The first section we're interested in is the alias section. In this section, we'll set up the different combinations of file checks that Afick can perform. We will later apply the aliases defined here to specific types of files and directories. Here are some common aliases:
# alias :
#########
DIR = p+i+n+u+g
ETC = p+d+i+u+g+s+md5
Logs = p+n+u+g
MyRule = p+d+i+n+u+g+s+b+md5+m
The first part of each directive is simply the name of the alias being defined. You’ll use this later to assign these aliases to specific files and directories. The second part of each alias is a list of the filesystem checks to be performed, separated by plus signs. A list of these options is presented in Table 7-1 for your reference.
Table 7-1. Afick filesystem check options

Option   Associated filesystem check
md5      Verify md5 checksum of file contents
sha1     Verify sha1 checksum of file contents
d        Verify major and minor number of device
i        Verify inode number
p        Verify file permissions
n        Verify number of links
u        Verify file ownership (user)
g        Verify file ownership (group)
s        Verify file size
b        Verify number of blocks allocated to file
m        Verify last modification time (mtime)
c        Verify last change time (ctime)
a        Verify last access time (atime)
The second part of the configuration file we're interested in is the Files to Scan section. In this section, you define which individual Afick checks, or which of the alias combinations you defined earlier, will be performed against specific files and directories on your filesystem. Here are some examples for you to use to start the process of tuning your configuration:
/etc/adjtime ETC
/etc/aliases.db ETC -md5
/etc/mail/statistics ETC -md5
/etc/dhcpd.conf c+sha1+s+p
!/etc/cups/certs/0
This excerpt highlights much of the syntax of the config file. Each of the first three files uses the predefined ETC alias to specify what attributes should be checked. However, the second two use the -md5 directive to tell Afick to use the ETC alias minus the md5 checking option. This approach is useful if you'd like to specify a generic alias to work from with a little modification for different files. The fourth entry checks only the last change time, sha1 checksum, file size, and permissions of the file /etc/dhcpd.conf. The final entry listed above uses the ! option (or bang, for you old school *nixers out there), which tells Afick not to check the specified file or directory at all. This option should be used sparingly, and only where truly necessary.
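Building on the aliases defined earlier, hypothetical additions of your own might look like the following (the paths and alias choices are only examples):

/usr/local/bin MyRule
/var/log Logs
!/var/log/lastlog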
Once you’ve taken a few minutes to adjust the configuration file to suit your filesystem, you’re ready to run Afick for the first time. Afick operates by creating a snapshot of your filesystem in the form of a database. When you run Afick for the first time, this database will be initialized, stored, and used as the basis for comparison in later integrity checks. To create the database, run the following command:
# afick -c /path_to_linux.conf/linux.conf -i
The -c directive tells Afick where to find the configuration file it should use, while the -i tells Afick to create an initial database. This operation may take a few minutes, but when it completes you'll find the database in the location specified in the first directive within your linux.conf file. Once the initial database is created, wait a few moments and rerun Afick, this time with the -k option:
# afick -c /path_to_linux.conf/linux.conf -k
The -k option tells Afick to compare the existing filesystem against the snapshot in the database and report any errors. It is at this point that you'll begin the trial-and-error phase of your Afick configuration. As errors and changes are reported, sort through them and modify your configuration file accordingly. As long as you aren't changing things, and your system is in a quiet state, what will show up are things on your system that are probably constantly changing. In some cases it will be appropriate to continue monitoring attributes such as ownership and inodes, but not mtime or atime values. Experiment and adjust your config file accordingly. Once you can run Afick without returning a flood of alerts, you're ready to add it to root's crontab to automate it to run on a schedule. To have Afick added to root's crontab, run the following command as root:
# crontab -e
This will open root’s crontab in your default text editor. Add the following line, substituting the appropriate path:
0 */8 * * * /path_to_afick.cron/afick.cron
This will schedule Afick to run every eight hours, emailing root with any changes that occur.
Once you’ve reached this point in your configuration, you should consider moving your database to a read-only storage medium. In my experience, an old zip disk is an excellent choice (although you can also use a CD-R or DVD). To move your database to a zip disk, first mount the zip drive and then run the following command:
# mv /var/lib/afick/afick.pag /mnt/zip/afick.pag
Once you've done this, make sure you modify your configuration file to point to your newly moved database using a database := /path/to/database entry. You can then move your configuration file over to the zip disk as well, and flip the switch on the back of the zip disk to mark the disk as being read-only. By doing this, you're protecting your database and configuration file from being modified by anyone without physical access to the server.
When you make changes to your filesystem, you’ll need to update your database. You can do this by issuing the following command:
# afick -c /path_to_linux.conf/linux.conf -u
Once the command finishes executing, your database is updated. You should perform an update any time you upgrade an application, apply new software or kernel patches, or perform any other activity that will alter your filesystem.
As you can probably tell, Afick is a less complicated version of Tripwire. The two applications share many similarities, but I find Afick to be the more useful and user-friendly of the two. In my experience with Afick, I’ve found a few other uses for it beyond ensuring my system isn’t compromised. Among these uses are ensuring that applications properly uninstall themselves as well as tracking the exact changes made by running applications. There are many other uses to be found for this and other integrity checkers, and just a little bit of experimentation is guaranteed to reveal one or two that are relevant to you.
Let chkrootkit automatically check your externally facing machines for rootkits and other attacks.
A rootkit is a software package that enables an unauthorized user to obtain root or administrative privileges on a machine. Rootkits are usually installed by exploiting a known security problem. Once installed, they can capture passwords, monitor system status, send system authentication information to other hosts, and even execute programs at scheduled intervals.
While rootkits are conceptually quite interesting, being “rooted” (the term for being compromised such that unauthorized people have root access to your system) is not. Luckily, just as there are plenty of scripts that automate installing rootkits, there are also some great software packages that detect rootkits and identify compromised systems and applications. Some packages, such as Tripwire [Hack #66] and Afick [Hack #67] , generally monitor file sizes and signatures and let you know if something has changed that shouldn’t have. This hack explores chkrootkit, one of the most powerful and popular software packages for actually detecting rootkits themselves and discusses how to install and use it to detect and close down invasions.
Linux rootkits work in various ways, usually as kernel modules, user-space software packages that replace system binaries, or a combination of both. Kernel rootkits insert loadable kernel modules that replace system calls with hacked versions that capture information and often hide information about specific processes from the user, whereas user-space rootkits generally replace system binaries such as ps, login, passwd, and so on with hacked versions that also capture information and hide information about specific processes and directories. For example, the t0rn rootkit mentioned in the “True Confessions” sidebar replaces system binaries such as ps, top, and ls with versions that won’t list anything that is running from its /usr/src/.puta directory. Pretty clever, actually.
chkrootkit runs on Linux systems using any 2.x kernel and has also been used and tested on FreeBSD 2.2.x, 3.x, 4.x and 5.x systems; OpenBSD 2.x and 3.x systems; NetBSD 1.6.x systems; Solaris 2.5.1, 2.6, 8.0, and 9.0 systems; and various HP-UX, Tru64, and BSDI system releases. At the time that this book was written, chkrootkit could detect rootkits such as 55808.A Worm, Adore LKM, Adore Worm, AjaKit, Anonoying, Aquatica, ARK, Bobkit, dsc-rootkit, duarawkz, Ducoci, ESRK, Fu, George, Gold2, Hidrootkit, Illogic, Kenga3, kenny-rk, knark LKM, Lion Worm, LOC, LPD Worm, lrk, Madalin, Maniac-RK, MithRa’s Rootkit, Monkit, Omega Worm, OpenBSD rk v1, Optickit, Pizdakit, Ramen Worm, rh-shaper, RK17, Romanian, RSHA, RST.b trojan, Scalper, Sebek LKM, ShitC Worm, Shkit, Showtee, shv4, SK, Slapper A-D, SucKIT, TC2 Worm, t0rn, TRK, Volc, Wormkit Worm, x.c Worm, zaRwT, and ZK.
A basic problem in rootkit detection is that any system on which a rootkit has been installed can't be trusted to detect rootkits. One way to address this is to run chkrootkit from a bootable CD as part of regular system maintenance. We'll come back to that later. For now, let's install chkrootkit and put it through its paces.
chkrootkit is open source and is freely available from http://www.chkrootkit.org/download. The current version at the time this book was written was 0.45. Newer versions are better, since each version of chkrootkit adds software and support for detecting more and more rootkits. The chkrootkit executable is a shell script that runs the binaries and other scripts that are included as part of the chkrootkit package.
After downloading the source tarball, you can build chkrootkit as shown in the following example:
$ tar zxf chkrootkit.tar.gz
$ cd chkrootkit-0.45
$ make
*** stopping make sense ***
make[1]: Entering directory `/home/wvh/src/chkrootkit-0.45'
gcc -DHAVE_LASTLOG_H -o chklastlog chklastlog.c
gcc -DHAVE_LASTLOG_H -o chkwtmp chkwtmp.c
gcc -DHAVE_LASTLOG_H -D_FILE_OFFSET_BITS=64 -o ifpromisc ifpromisc.c
gcc -o chkproc chkproc.c
gcc -o chkdirs chkdirs.c
gcc -o check_wtmpx check_wtmpx.c
gcc -static -o strings-static strings.c
gcc -o chkutmp chkutmp.c
make[1]: Leaving directory `/home/wvh/src/chkrootkit-0.45'
chkrootkit’s Makefile doesn’t provide an install target, so you must either manually copy its binaries somewhere or run it from the directory in which you built it. If you do the latter, I’d suggest removing all the source code files to make it harder for anyone who has cracked your system to hack your chkrootkit installation—not impossible, just harder.
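One way to handle the manual copy is something like the following (the destination directory is arbitrary, and the file list simply mirrors the binaries produced by the make output above plus the chkrootkit script itself):

# mkdir -p /usr/local/chkrootkit
# cp chkrootkit chklastlog chkwtmp ifpromisc chkproc chkdirs check_wtmpx strings-static chkutmp /usr/local/chkrootkit/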
Once you've built chkrootkit, you simply run it from wherever you've put the binaries by executing ./chkrootkit or by invoking the full pathname to the chkrootkit shell script. You must execute chkrootkit as the root user or via sudo. The output from a run of chkrootkit looks like the following:
# ./chkrootkit
ROOTDIR is '/'
Checking 'amd'… not found
Checking 'basename'… not infected
Checking 'biff'… not found
Checking 'chfn'… not infected
Checking 'chsh'… not infected
Checking 'cron'… not infected
Checking 'date'… not infected
Checking 'du'… not infected
Checking 'dirname'… not infected
Checking 'echo'… not infected
Checking 'egrep'… not infected
Checking 'env'… not infected
Checking 'find'… not infected
Checking 'fingerd'… not found
Checking 'gpm'… not infected
Checking 'grep'… not infected
Checking 'hdparm'… not infected
Checking 'su'… not infected
Checking 'ifconfig'… not infected
Checking 'inetd'… not tested
Checking 'inetdconf'… not found
Checking 'identd'… not found
Checking 'init'… not infected
Checking 'killall'… not infected
Checking 'ldsopreload'… not infected
Checking 'login'… not infected
Checking 'ls'… not infected
Checking 'lsof'… not infected
Checking 'mail'… not infected
Checking 'mingetty'… not infected
Checking 'netstat'… not infected
Checking 'named'… not infected
Checking 'passwd'… not infected
Checking 'pidof'… not infected
Checking 'pop2'… not found
Checking 'pop3'… not found
Checking 'ps'… not infected
Checking 'pstree'… not infected
Checking 'rpcinfo'… not infected
Checking 'rlogind'… not found
Checking 'rshd'… not found
Checking 'slogin'… not infected
Checking 'sendmail'… not infected
Checking 'sshd'… not infected
Checking 'syslogd'… not infected
Checking 'tar'… not infected
Checking 'tcpd'… not infected
Checking 'tcpdump'… not infected
Checking 'top'… not infected
Checking 'telnetd'… not found
Checking 'timed'… not found
Checking 'traceroute'… not infected
Checking 'vdir'… not infected
Checking 'w'… not infected
Checking 'write'… not infected
Checking 'aliens'… no suspect files
Searching for sniffer's logs, it may take a while… nothing found
Searching for HiDrootkit's default dir… nothing found
Searching for t0rn's default files and dirs… nothing found
Searching for t0rn's v8 defaults… nothing found
Searching for Lion Worm default files and dirs… nothing found
Searching for RSHA's default files and dir… nothing found
Searching for RH-Sharpe's default files… nothing found
Searching for Ambient's rootkit (ark) default files and dirs…nothing found
Searching for suspicious files and dirs, it may take a while…
/usr/lib/jvm/java-1.4.2-sun-1.4.2.08/jre/.systemPrefs
/usr/lib/perl5/5.8.6/x86_64-linux-thread-multi/.packlist
Searching for LPD Worm files and dirs… nothing found
Searching for Ramen Worm files and dirs… nothing found
Searching for Maniac files and dirs… nothing found
Searching for RK17 files and dirs… nothing found
Searching for Ducoci rootkit… nothing found
Searching for Adore Worm… nothing found
Searching for ShitC Worm… nothing found
Searching for Omega Worm… nothing found
Searching for Sadmind/IIS Worm… nothing found
Searching for MonKit… nothing found
Searching for Showtee… nothing found
Searching for OpticKit… nothing found
Searching for T.R.K… nothing found
Searching for Mithra… nothing found
Searching for OBSD rk v1… nothing found
Searching for LOC rootkit… nothing found
Searching for Romanian rootkit… nothing found
Searching for Suckit rootkit… nothing found
Searching for Volc rootkit… nothing found
Searching for Gold2 rootkit… nothing found
Searching for TC2 Worm default files and dirs… nothing found
Searching for Anonoying rootkit default files and dirs… nothing found
Searching for ZK rootkit default files and dirs… nothing found
Searching for ShKit rootkit default files and dirs… nothing found
Searching for AjaKit rootkit default files and dirs… nothing found
Searching for zaRwT rootkit default files and dirs… nothing found
Searching for Madalin rootkit default files… nothing found
Searching for Fu rootkit default files… nothing found
Searching for ESRK rootkit default files… nothing found
Searching for anomalies in shell history files… nothing found
Checking 'asp'… not infected
Checking 'bindshell'… not infected
Checking 'lkm'… chkproc: nothing detected
Checking 'rexedcs'… not found
Checking 'sniffer'…
eth0: not promisc and no PF_PACKET sockets
vmnet8: not promisc and no PF_PACKET sockets
vmnet1: not promisc and no PF_PACKET sockets
Checking 'w55808'… not infected
Checking 'wted'… chkwtmp: nothing deleted
Checking 'scalper'… not infected
Checking 'slapper'… not infected
Checking 'z2'… chklastlog: nothing deleted
Checking 'chkutmp'… chkutmp: nothing deleted
It seems like I’m clean, and that’s a lot of tests! As you can see, chkrootkit first checks a variety of system binaries for strings that would indicate that they’ve been hacked, then checks for the indicators of known rootkits, checks network ports for spurious processes, and so on. I feel better already.
If you are running additional security software such as PortSentry (http://sourceforge.net/projects/sentrytools/), you may get false positives (i.e., reports of problems that aren’t actually problems) from the bindshell test, which looks for processes that are monitoring specific ports.
If you want to be even more paranoid than chkrootkit's normal behavior, you can run chkrootkit with its -x (expert) option. This option causes chkrootkit to display detailed test output in order to give you the opportunity to detect potential problems that may be evidence of rootkits that the version of chkrootkit you're using may not (yet) be able to identify.
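Because the expert-mode output is voluminous, it's easiest to page through it or save it for later comparison, for example:

# ./chkrootkit -x | less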
Running chkrootkit "every so often" is a good idea, but running it regularly via cron is a better one. To run chkrootkit automatically, log in as root, su to root, or use sudo to run crontab -e and add chkrootkit to root's list of processes that are run automatically by cron. For example, the following entry would run chkrootkit every night at 1 A.M. and would mail its output to [email protected]:
0 1 * * * (cd /path/to/chkrootkit; ./chkrootkit 2>&1 | mail -s "chkrootkit output" [email protected])
A basic problem in rootkit detection is that any system on which a rootkit has been installed can’t be trusted to detect rootkits. Even if you follow the instructions in this hack and run chkrootkit via cron, you only have a small window of opportunity before the clever cracker checks root’s crontab entry and either disables or hacks chkrootkit itself. The combination of chkrootkit and software such as Tripwire or Afick can help make this window as small as possible, but regular system security checks of externally facing machines from a bootable CD that includes chkrootkit, such as Inside Security’s Insert Security Rescue CD (http://sourceforge.net/projects/insert/), is your best solution for identifying rootkits so that you can restore compromised systems.
“Tame Tripwire” [Hack #66]
“Verify Filesystem Integrity with Afick” [Hack #67]
Insert Security Rescue CD: http://www.inside-security.de/insert_en.html
Rootkit Hunter: http://www.rootkit.nl
Windows users: http://research.microsoft.com/rootkit/
Windows users: http://www.sysinternals.com/utilities/rootkitrevealer.html