CHAPTER 12

Improving System Security

Early in this book, we established that managing the contents and permissions of files is the core of UNIX/Linux system administration. UNIX/Linux security is also almost entirely concerned with file contents and permissions. Even when configuring network settings for security reasons, we're usually configuring file contents. This means that, in general, we'll be performing very familiar operations when using cfengine to increase the security of our UNIX and Linux hosts.

At various points in this book, we've taken security into account when configuring our systems or when implementing some new functionality:

  • We centralized our user account files right away in our example site, in order to easily change passwords and add and remove accounts across our site.
  • We run all of our internal web sites over HTTPS only (Nagios, Ganglia, and Subversion).
  • We compiled our own Apache from source so that our externally facing web site has the fewest features possible, which should decrease the likelihood of our site being vulnerable to remote Apache exploits.
  • We don't allow root privileges over NFS.
  • We set up a central log host along with automated log reporting.
  • We made sure our centralized cfexecd log uploads were protected against malicious users.
  • We configured version control and backups at our site. This may seem like more of a disaster recovery measure, but modern data security is just as concerned with a disaster destroying information as it is about damage from attackers.

In this chapter, we focus on security itself, but we don't mean to give you the idea that security is a separate duty from your normal ones. Good security is difficult to obtain if treated as an afterthought and, in fact, becomes something of a burden when addressed only during the later phases of a project.

Since we're working only on the hosts on our network, we're addressing host-based security. We feel that the importance of host-based security measures cannot be overstated. Many sites implement network security through the use of firewalls and put very little work into the security of the hosts on their network. Such an approach assumes (or naively hopes) that no threats exist on the internal network. Most firewalls by their very nature allow particular traffic through to hosts on the internal network. This traffic could be utilized by attackers to compromise internal hosts, which can then be used as a jumping-off point to attack other hosts.

We need to remember that internal users are a major risk. Even if the users themselves aren't malicious, their credentials or their computer systems can be compromised and used by attackers to access the internal network via a VPN or other remote access methods. No modern network should have a crunchy exterior and a chewy interior—meaning perimeter network protection without internal protection mechanisms.

Host-based security mechanisms go a long way toward hardening the internal network. Shutting down unneeded daemons, removing unnecessary accounts, removing or minimizing trust between hosts, implementing proper file permissions and host-based firewalls, and frequently applying system patches and updated packages will address the vast majority of local and remote vulnerabilities.


Note As you might guess, we can't provide a comprehensive security guide in just one chapter. What we can do, however, is recommend the book Practical UNIX & Internet Security by Simson Garfinkel, Alan Schwartz, and Gene Spafford (O'Reilly Media Inc., 2003).


Security Enhancement with cfengine

Cfengine can improve system security in many ways. First, it allows you to automatically configure systems in a consistent manner. The cfengine configuration is general enough that you can quickly apply your changes to other hosts in the same or different classes, even to systems that haven't been installed yet. This means that if you correct a security problem on your Linux systems through cfengine, and then later install a new Linux system, the security problem will be fixed there as well (if necessary).

Some other ways cfengine can help with system security are illustrated within the following sections. Just be aware that this is far from a comprehensive list. Your own systems will almost certainly have more areas where you can use cfengine to enhance their security. You may choose to run applications like FTP servers that can be serious security problems if not properly configured. We can't cover all of these situations, but a good security book will tell you what to configure, and cfengine can do the actual configuration for you.

As always, we do all of our system administration in our example infrastructure using cfengine, so this final chapter doesn't look all that different from the earlier ones. The difference here is that we're not focusing much on the cfengine configuration but more on the security gains from the changes we make.

Removing the SUID Bit

One of the most common ways for a malicious user to gain privileged access is via flaws in programs with the setuid (or SUID) bit set. This permission setting causes a program to be executed with the privileges of the file's owner, not those of the user executing the program. It is a UNIX mechanism that allows nonprivileged users to perform tasks that require elevated privileges (usually, though not always, root privileges). A programming error or flaw in such a program is often disastrous to local security. The two ways to avoid becoming a victim of such a flaw are to keep your system up to date with security and bug fixes and to limit the number of setuid binaries on your system that are owned by the root user.

We should first give you an idea of what SUID binaries are present on our systems, which will allow us to make educated decisions about what to exclude from a file sweep that removes the SUID bit. The following find command will work on all systems at our example site, should be run as root, and allows us to view the list and determine what to allow:

# find / -fstype nfs -prune -o -user root -perm -04000 -ls | tee /var/tmp/suid.list

This find command will not descend into filesystems mounted over NFS and will find programs owned by the root user that have the SUID bit set. It then uses the tee command to save the output into a file for later investigation, while still displaying the output to the screen.

On our Debian systems, which we imaged with FAI and configured via cfengine, the output was rather short: a total of 20 programs. Part of the reason is that we haven't installed the X Window System, but it mainly reflects a very security-conscious Linux distribution.

On our Solaris system imaged with Jumpstart, we got an astonishingly long list, with 75 total entries.

On the Red Hat system that we imaged via Kickstart, we found 36 SUID root-owned files, which also isn't too bad. Kudos go to Red Hat for cleaning up the situation; in the past, Red Hat was one of the worst offenders among Linux distributions.

To remove the SUID bit from all the binaries except those that we deemed important, we created a task at PROD/inputs/tasks/os/cf.suid_removal with these contents:

files:
       debian.Hr03.Min40_45::
               /
                       filter=rootownedfiles
                       mode=-4000      # no SUID for rootownedfiles
                       recurse=inf
                       action=fixall
                       inform=true
                       ignore=/usr/bin/passwd
                       ignore=/usr/bin/traceroute.lbl
                       ignore=/usr/pkg/nagios-plugins-1.4.12/libexec/check_icmp
                       ignore=/usr/pkg/nagios-plugins-1.4.12/libexec/check_dhcp
                       ignore=/usr/lib/pt_chown
                       ignore=/sbin/unix_chkpwd
                       ignore=/bin/ping
                       ignore=/bin/su
                       syslog=on
                       xdev=on

       redhat.Hr03.Min40_45::
               /
                       filter=rootownedfiles
                       mode=-4000      # no SUID for rootownedfiles
                       recurse=inf
                       action=fixall
                       inform=true
                       ignore=/usr/pkg/nagios-plugins-1.4.12/libexec/check_dhcp
                       ignore=/usr/pkg/nagios-plugins-1.4.12/libexec/check_icmp
                       ignore=/usr/bin/sudo
                       ignore=/usr/bin/crontab
                       ignore=/usr/bin/at
                       ignore=/usr/bin/sudoedit
                       ignore=/usr/sbin/ccreds_validate
                       ignore=/bin/ping
                       ignore=/bin/su
                       ignore=/sbin/unix_chkpwd
                       ignore=/sbin/pam_timestamp_check
                       syslog=on
                       xdev=on
       (solaris|solarisx86).Hr03.Min40_45::
               /
                       filter=rootownedfiles
                       mode=u-s         # no SUID
                       recurse=inf
                       action=fixall
                       inform=true
                       ignore=/proc
                       ignore=/opt/csw/bin/sudo.minimal
                       ignore=/opt/csw/bin/sudo
                       ignore=/opt/csw/bin/sudoedit
                       ignore=/usr/bin/at
                       ignore=/usr/bin/atq
                       ignore=/usr/bin/atrm
                       ignore=/usr/bin/crontab
                       ignore=/usr/bin/su
                       ignore=/usr/lib/pt_chmod
                       ignore=/usr/lib/utmp_update
                       ignore=/usr/sbin/traceroute
                       ignore=/usr/sbin/ping
                       ignore=/usr/pkg/nagios-plugins-1.4.12/libexec/check_dhcp
                       ignore=/usr/pkg/nagios-plugins-1.4.12/libexec/check_icmp
                       ignore=/usr/pkg/nagios-plugins-1.4.12/libexec/pst3
                       syslog=on
                       xdev=on

We set xdev=on so that cfengine doesn't cross filesystem boundaries. We know that we imaged all of our systems with a single root filesystem, so this keeps us from crawling the NFS directories. Even if we wanted to fix the permissions on NFS mounts, we couldn't because the root user is mapped to the nobody user over NFS (unless the no_root_squash option is used on the NFS server, which we don't use; refer to the NFS section in Chapter 8).

We utilized the rootownedfiles filter from the file PROD/inputs/filters/cf.root_owned, which is imported from cfagent.conf. The file has these contents:

filters:

        { rootownedfiles

        Owner: "root"

        Result: "Owner"
}

Filters in cfengine can get very complicated and are able to look for several items with particular attributes in order to successfully match. The preceding filter is a very simple file filter that matches when a file is owned by root. In conjunction with these lines from cf.suid_removal:

mode=u-s         # no SUID
recurse=inf
action=fixall

we tell cfengine that we want the files to lack the SUID bit, that cfengine should infinitely recurse directories, and that the action to take is to fix the files. The final setting is to ignore the files that we don't want changed, using the ignore= lines.

To activate this task, we added this line to PROD/inputs/hostgroups/cf.any:

tasks/os/cf.suid_removal

Be careful to test out these changes on just one host of each platform. As a temporary measure, you can override the hostgroups mechanism with lines like these in PROD/inputs/hostgroups/cf.any:

aurora|rhlamp|loghost1::
     tasks/os/cf.suid_removal

any::

Just be sure to set the any:: class again at the end, since any entries added later will otherwise apply only to the three hosts specified. Restoring the any:: class helps avoid a situation where a task meant for all hosts is erroneously imported only for the three hosts mentioned previously. We don't want to leave an entry like this in place long term, since it circumvents our hostgroups cfengine configuration file organization method. Anytime you specify hostnames as classes directly in any sort of actions, even imports, you're making it harder to maintain your infrastructure. Sticking with role-based classes aids maintainability in the long term. Ideally, hostnames should show up only in class definitions.

When new software is installed, be aware that if it installs a root-owned program with the setuid bit set, the software may break due to this nightly task. We consider this a feature, not a bug. No new programs will last more than a day with the setuid bit set on our systems.

Protecting System Accounts

Standard system accounts are commonly used for brute force login attempts to systems. Every day, lists of common system accounts along with common passwords are used to attempt unauthorized logins by attackers.

We can protect ourselves against such attacks in three ways:

  • Set system accounts to use nonworking shells.
  • Remove unneeded system accounts.
  • Lock the system accounts' passwords.

We will attempt to make the system accounts on our systems unusable for interactive login. We have already set up our new accounts (such as the ganglia user) to not have a valid shell:

ganglia:x:106:109:Ganglia Monitor:/var/lib/ganglia:/bin/false

We need to duplicate this shell entry for all system accounts, with the notable exception of the root account.
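
Before making changes, it helps to audit which system accounts still have a working shell. Here is a quick sketch using awk; the UID cutoff of 1000 is an assumption based on Debian's convention, so adjust it for your platforms (older Red Hat releases start regular users at UID 500, and Solaris system accounts sit below UID 100):

$ awk -F: '$1 != "root" && $3 < 1000 && $7 !~ /(false|nologin)$/ \
      {print $1, $7}' /etc/passwd

Any account this prints is a candidate for a /bin/false shell.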


Note In the past, we've observed problems with daemons that utilized su - ACCOUNT in start-up scripts. If a daemon or script tries to execute a login shell this way, it won't function in our environment. Such start-up scripts don't require us to give the account a working shell; we can simply modify the script to use the -s /bin/sh option to su in order to make them work.


Since we have automated the distribution of centralized /etc/passwd files in our environment, we simply need to edit the copies in our Subversion DEV repository and test on some nonproduction hosts. We feel that an extra level of caution is needed with such changes. Once tested, merge the changed passwd files back to the PROD branch, and perform a Subversion check out in the production working copy on your cfengine master.

While editing the system accounts to change the shell to /bin/false, remove any accounts that aren't needed at your site. This may take some trial and error and should also be tested in a nonproduction environment before the changes are used in the PROD branch.

Next, edit the shadow files for all your site's platforms. Make sure that each account's encrypted password entry has an invalid string:

nagios:!:14115:0:99999:7:::

The bang (!) character in the encrypted password field of the nagios user account is an invalid string, locking the account. You can validate this with the -S argument to the passwd command on Linux:

$ sudo passwd -S nagios
nagios L 08/24/2008 0 99999 7 -1

The L in the output shows that the account is locked. This is the desired state for all our system accounts (besides the root account, of course). On Solaris the -s argument is used:

$ sudo passwd -s nagios
nagios    PS    08/24/08     0  99999     7

The PS status denotes a passworded account, but we know our nagios account has no valid password! The Solaris passwd command expects a particular string in the encrypted password field in order for it to report the LK (locked) status—the string *LK*. We can leave the account with just the bang and know that we're safe even though the Solaris passwd command doesn't understand it.
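
Rather than hand-editing the shadow entry, you can also let the passwd command do the locking. A quick sketch, assuming the stock passwd on each platform:

# Linux: prepends '!' to the stored password hash
$ sudo passwd -l nagios

# Solaris: writes *LK* into the password field, so passwd -s reports LK
$ sudo passwd -l nagios

Either way, verify the result with passwd -S (Linux) or passwd -s (Solaris) as shown previously.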

Applying Patches and Vendor Updates

Both Debian and Red Hat distributions make keeping systems up to date with security patches and bug fixes extremely easy. When using Red Hat Enterprise or the stable Debian branch (as we are), automatically updating system software is quite safe. Simple shellcommands sections that execute these commands will keep your Debian and Red Hat Enterprise systems fully patched and up to date (a sketch of such a task follows the list):

  • Red Hat: # /usr/bin/yum upgrade
  • Debian: # /usr/bin/apt-get update && /usr/bin/apt-get upgrade
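
Here is a minimal sketch of such a task; the schedule class is our own choice, and the -y flags keep the package tools from stopping to prompt for confirmation:

shellcommands:
        redhat.Hr04.Min05_10::
                "/usr/bin/yum -y upgrade" timeout=3600 inform=true

        debian.Hr04.Min05_10::
                "/usr/bin/apt-get update && /usr/bin/apt-get -y upgrade"
                        timeout=3600 inform=true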

Solaris is another matter entirely. At the shops where we work full time, we still utilize Sun-recommended patch clusters and install them on a per-system basis in single-user mode. Every Sun tool that claims to automate system patches has either not worked to our satisfaction or required major infrastructure changes to accommodate the suite of Sun tools that are required. We find it useful to have a console connection to view the patch cluster output before attempting a system reboot, since serious problems have occurred that prevented a proper reboot without prior repair.

One of the wisest ways to patch Sun systems is probably the Sun Live Upgrade procedure, where a patched Solaris operating system is installed to an alternate slice on a system's disks, and the host is then booted into the newly patched OS. If there are problems, the system can be booted back into the original OS install and full functionality is restored.

This approach requires some planning at initial installation time, since unused space needs to be left on the drives. The system's swap slice can be used, but this method isn't ideal, since the system is deprived of swap space and the swap slice often isn't large enough to hold a complete Solaris installation.

At the time of this writing, we recommend Live Upgrade and look forward to developing a proper automated mechanism for the third edition of this book.
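
For reference, the manual Live Upgrade procedure looks roughly like this: lucreate builds the alternate boot environment, luupgrade -t applies a patch cluster to it, and luactivate plus a reboot switches over. Treat this as a sketch only; the slice, boot environment names, and patch directory are placeholders, and the exact flags should be verified against the lucreate and luupgrade man pages (luupgrade also accepts an explicit list of patch IDs):

# lucreate -c current -n patched -m /:/dev/dsk/c0t0d0s4:ufs
# luupgrade -t -n patched -s /var/tmp/10_Recommended
# luactivate patched
# init 6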

Shutting Down Unneeded Daemons

Programs that accept network connections are like a door into your systems. Those doors might be locked, but most doors—like many network-enabled daemons—can be forced open. If you don't need the program, it should be shut down to reduce the overall exposure of your systems to network-based intrusion.

In this section, we will develop a task that shuts down a single service on each of the platforms in our example infrastructure to give you an example of how to do it on your own. Please carefully examine all running processes on your systems and, where possible, disable unneeded daemons at installation time. We will write our cfengine task in such a way that if the programs aren't enabled, cfengine will do nothing.

We placed a task at PROD/inputs/tasks/os/cf.kill_unwanted_services with these contents:

control:
         any::
                AddInstallable          = ( disable_xfs )

processes:
        solarisx86|solaris::
                "dtlogin" signal=kill

        redhat::
                "xfs" action=warn matches=<1 define=disable_xfs

shellcommands:
        redhat.disable_xfs::
                "/sbin/service xfs stop"  timeout=60 inform=true
                "/sbin/chkconfig xfs off" timeout=60 inform=true

disable:
        solarisx86|solaris::
                /etc/rc2.d/S99dtlogin

We chose to shut down two different daemons used for the X Window System. On Solaris, the dtlogin daemon handles graphical logins, which we don't need on our server systems. On Red Hat, the xfs daemon is the X font server, also not needed on our server systems.

Fortunately for our security, but unfortunately for this book, none of our Debian systems was running any unneeded daemons. Based on the examples here and the experience gained so far in this book, you shouldn't have any trouble working out how to shut down Debian services. It could be done the same way the Solaris dtlogin daemon is shut down: via a process kill along with a disable of the start-up script.
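
For instance, if you found the lpd print daemon running unneeded on a Debian host, a sketch of the equivalent task might look like this (the lpd process name and rc2.d link name are hypothetical; check /etc/rc2.d on your own hosts for the real ones):

processes:
        debian::
                "lpd" signal=kill

disable:
        debian::
                /etc/rc2.d/S20lpd

On Debian you could instead remove the links with update-rc.d -f lpd remove, but disabling the start-up script keeps the change visible to (and enforced by) cfengine.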

We added the cf.kill_unwanted_services task to the cf.any hostgroup, checked in our changes, and updated the PROD tree on the cfengine master.

Removing Unsafe Files

You can use cfengine to disable a variety of files and programs on your system (if they exist). When executables and any other files are disabled, they are renamed with a .cfdisabled extension and their permissions are set to 0400. In our example environment, we use a global backup directory ($workdir/backups), so the files are moved there for long-term storage.

Here is an example:

disable:
    any::
        /root/.rhosts                   inform=true
        /etc/hosts.equiv                inform=true

        # SunOS / NSDAP Rootkit
        /usr/lib/vold/nsdap/.kit        inform=true
        /usr/lib/vold/nsdap/defines     inform=true
        /usr/lib/vold/nsdap/patcher     inform=true

This disables the files /root/.rhosts and /etc/hosts.equiv on all systems (class any) because using these files is often considered a security risk. We also remove some files that result from the installation of an old rootkit. Rootkits are ready-to-run code made available on the Internet for attackers to maintain control of compromised hosts.

The inform=true entries will result in cfagent sending a message to standard output if and when it disables the files. This message will show up in cfexecd e-mails, as well as in the cfoutputs and syslog reports (see Chapter 9). Here's an example pair of syslog entries (one for the file rename and one for the move to the cfengine backup repository):

Sep 23 01:52:04 aurora cfengine:aurora[10573]: [ID 702911 daemon.notice]
Disabling/renaming file /etc/hosts.equiv to /etc/hosts.equiv.cfdisabled
(pending repository move)

Sep 23 01:52:04 aurora cfengine:aurora[10573]: [ID 702911 daemon.notice] Moved
/etc/hosts.equiv.cfdisabled to repository location
/var/cfengine/backups/_etc_hosts.equiv.cfdisabled

Note Removing the example rootkit files with cfengine's disable action doesn't remove a rootkit from your system. Look into rootkit detection programs such as chkrootkit. If you confirm that a rootkit is installed on one of your systems, remove the system from the network, retrieve any important data, and reimage the host. The follow-on actions are to confirm that your data isn't compromised, that the attacker isn't on any of your other systems, and that your system is secured after reimaging (preferably during reimaging) so that the attacker doesn't get back in again.


File Checksum Monitoring

You can also use cfengine to monitor binary files on your system. Like any other file, the permissions of a binary file can be checked and any problems can be fixed. For binaries, particularly those of the setuid root variety, this feature can be very useful. You can also use cfengine to provide some tripwire functionality: you can use it to monitor the MD5 checksum of a file. Here is an example:

files:
   /bin/mount mode=4555 owner=root group=root action=fixall checksum=md5

On many systems, the /bin/mount program has the setuid bit set and is owned by the root user. This allows normal users to mount specific drives without superuser privileges. The parameters given in this example tell cfengine to check the permissions on this binary (and all others that are setuid root) and to record its checksum in a database.

If the checksum does change, you will be notified every time cfagent runs. This notification will continue until you execute cfagent with the following setting in the control section:

control:
   ChecksumUpdates = ( on )

This setting will cause all stored file checksums to be updated to their current values.


Using the Lightweight Directory Access Protocol

The Lightweight Directory Access Protocol (LDAP) allows you to use a central information repository for a variety of system and application uses. Although just about any information can be stored in an LDAP server, the most common thing to store is your user account information. For each user, you can specify an account, full name, phone number, office location, and any other information you may need.

Using LDAP for user directory and authentication at your site can increase your site's overall security, because a centralized authentication directory service enables the following:

  • You can set up user account lockout when a user has a certain number of failed logins across one or many systems. If the lockout settings are local to each system, an attacker can attempt guesses against all systems at your site before the account is totally locked out.
  • Passwords can be centralized across more applications than just UNIX/Linux logins, which allows the administrator to enable a single sign-on infrastructure. The administrators can then enforce strong password policies in this centralized directory.

We already have user account information at our example site centralized in the account files on our cfengine master. We have many of the benefits of using LDAP for centralized authentication, such as easy account auditing, easy password changes, and unified user IDs across all systems.

Any LDAP-aware application can retrieve data from the LDAP server. The Apache web server, for example, can use this information when it is authenticating users who are visiting a restricted web site. It is even more common to use LDAP to store the actual user accounts for your systems. Your operating system can probably use a remote LDAP server in addition to the local user list (/etc/passwd), since most modern UNIX systems support Pluggable Authentication Modules (PAM).
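
As an illustrative sketch, on a Debian system with the libpam-ldap package installed, the auth section of a PAM configuration might look something like this; module names and options vary by distribution and release, so treat it as a starting point only:

# /etc/pam.d/common-auth (excerpt)
auth    sufficient      pam_unix.so nullok_secure
auth    sufficient      pam_ldap.so use_first_pass
auth    required        pam_deny.so

Local accounts in /etc/passwd are tried first, then the LDAP directory, and authentication is denied if neither succeeds.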

If your system does not come with an LDAP server or you need additional LDAP clients, take a look at OpenLDAP (http://www.openldap.org/). It provides an LDAP server as well as client libraries and compiles on a wide variety of systems. A second, newer alternative is the Fedora Directory Server (http://directory.fedoraproject.org/). We haven't used it, but the existence of a graphical utility for Fedora Directory Server administration will surely help many new LDAP administrators.

We think LDAP is a great system for a medium to large company or other organization. It takes a bit of work to set up, and you have to make sure your systems can take advantage of it, but it is worth it when you have a lot of account information to manage. If you decide to use LDAP, take a look at LDAP System Administration by Gerald Carter (O'Reilly Media Inc., 2003).

Security with Kerberos

Kerberos is an authentication system designed to be used between trusted hosts on an untrusted network. Most commonly, a Kerberos server is used to authenticate remote users without sending their passwords over the network. Kerberos is a pretty common security system, and basic information can be found at http://web.mit.edu/kerberos/www/.

Kerberos is the best option (that we know of) available today for authenticating the same accounts across multiple systems. Unlike many other options, the users' passwords are rarely sent over the network. When they are, they are strongly encrypted.

Using Kerberos for authentication on your systems is not always easy, unfortunately. First of all, you need to set up a Kerberos server, which is beyond the scope of this book. It isn't the hardest thing in the world to do, but it will require a fairly serious time investment. Good documentation can be found at MIT's Kerberos site: http://web.mit.edu/kerberos/www/.

You will also need to make sure any programs that require user authentication on your systems are able to use Kerberos. Most systems support PAM, which allows you to use Kerberos easily for all system-level authentication. If you do have PAM, probably most of the applications that came with your systems and require authentication can also use PAM. Other applications, like Apache and Samba, may directly support Kerberos as well (with or without PAM).

Another advantage of Kerberos is its ability to use one authentication service from several unique software packages. It is not uncommon for each user to have a separate password for logging into systems over SSH, accessing a restricted web server, and accessing a Samba share. With Kerberos, you can use the same user password for all of these different services and any other services that support Kerberos.

Like LDAP, Kerberos is an excellent choice if you have a large number of user accounts and a decent number of systems. In fact, if you have a large enough number of systems, it can be worth the effort regardless of the number of accounts you use. Because Kerberos is also the safest way to authenticate users over the network and can be used from such a wide variety of software, it is something you should consider using in almost any environment.

Implementing Host-Based Firewalls

A firewall is any hardware or software that blocks or otherwise disallows IP traffic, based on rules or policy settings. Deploying firewalls at the periphery of a network, usually on or near the links that connect to other networks or to the Internet, is common practice. In recent years, it has become increasingly common for individual computers to run firewall software.

Even if a host isn't running any unneeded network daemons, a local firewall can help in several ways:

  • If unwanted traffic makes it through a perimeter firewall, a local firewall can still block it. The practice of running several redundant security systems at once is called defense in depth and is a wise way to handle security.
  • A system can prevent connections from unwanted hosts on the local network where there is no network-based firewall between the hosts.
  • UNIX operating systems sometimes require daemons to run and listen on the network in order for the base system to work properly. There may be no need for the daemon to accept connections from remote hosts, so a local firewall is the only remaining option for protecting this service from the network. This problem is less prevalent with base UNIX installs these days, but the issue might still come up with third-party software.

Software that blocks IP traffic directly in a system's TCP/IP stack is called packet-filtering software. True to their name, packet filters use attributes of an incoming packet such as source IP and destination port to block and/or allow network traffic.

Software that proxies connections and only allows permitted application protocol operations is also a firewall, but we don't cover proxying in this book. We do recommend that you evaluate the use of proxy software for both inbound and outbound traffic at your site, where appropriate.

Software that runs outside the operating system kernel to block traffic is also firewall software, though most people don't think of it as such. Software such as TCP Wrappers (covered in the next section) fits this description.

Using TCP Wrappers

You will always want some network services to remain active. If any of these services are executed by inetd, using TCP Wrappers is a good idea. TCP Wrappers is a program (usually named tcpd or in.tcpd) that can be executed by inetd. It performs some checks on the network connection, applies any access control rules, and ultimately launches the necessary program.

All of the systems in our example network come with TCP Wrappers installed by default.

Even though the TCP Wrappers program is already installed (in a location like /usr/sbin/tcpd), you need to make sure your systems use it. A system without TCP Wrappers enabled would have an /etc/inetd.conf with entries like this (your file location and entry format may vary):

ftp stream tcp nowait root /usr/sbin/in.ftpd in.ftpd
telnet stream tcp nowait root /usr/sbin/in.telnetd in.telnetd

To activate TCP Wrappers, you want to modify these entries to call the tcpd program as follows:

ftp stream tcp nowait root /usr/sbin/tcpd in.ftpd
telnet stream tcp nowait root /usr/sbin/tcpd in.telnetd

You can do this using the editfiles section:

editfiles:
   { /etc/inetd.conf
      ReplaceAll "/usr/sbin/in.ftpd" With "/usr/sbin/tcpd"
      ReplaceAll "/usr/sbin/in.telnetd" With "/usr/sbin/tcpd"
      DefineClasses "modified_inetd"
   }

This will cause your system to use TCP Wrappers for both the FTP and Telnet services.


Note We don't recommend using Telnet for remote system access; this is for demonstration purposes only.


Don't forget to send the HUP signal to inetd:

processes:
   modified_inetd::
      "inetd" signal=hup

Simply enabling TCP Wrappers enhances the security of the selected network services. You can gain additional benefits by restricting access to these services using /etc/hosts.allow and /etc/hosts.deny. A properly configured corporate firewall, a system-level firewall if possible (as described in the next section), and TCP Wrappers with access control enabled provide three tiers of protection for your network services. Using all three may seem like overkill, but when you can do all of this automatically, there really is little reason not to be overly cautious. Any one of these security devices could fail or be misconfigured, but probably not all three.
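
A minimal deny-by-default sketch of those two files follows; the network prefix is a placeholder for your own (a trailing dot in TCP Wrappers syntax matches the whole network):

# /etc/hosts.deny -- refuse anything not explicitly allowed
ALL: ALL

# /etc/hosts.allow -- permit only hosts on our internal network
sshd: 192.168.1.
in.ftpd: 192.168.1.

hosts.allow is consulted first, so these entries take precedence over the blanket deny.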

Using Host-Based Packet Filtering

As previously mentioned, packet filtering is a way of allowing or disallowing IP traffic as it comes into a system's network interface, based on filtering rules. All three of our example operating systems at our site install packet-filtering software with the base system. On both Linux distributions (Debian and Red Hat), the iptables software is used, and on Solaris the ipfilter software is used.

In this section, we'll provide a quick introduction to iptables and demonstrate how to fully enforce a local host packet filtering policy. From there, it will be up to you to configure a firewall policy that's appropriate for your site.

For help with ipfilter, consult the project home page at http://coombs.anu.edu.au/~avalon/ and the Sun online documentation at http://docs.sun.com/app/docs/doc/816-4554/eupsq?a=view.

Iptables on Debian

Iptables is the packet filtering framework used by the Linux kernel since major version 2.4. It consists of kernel code and user-space tools to set up, maintain, and inspect the kernel tables of IP packet filter rules. Each table contains several built-in chains and may also contain user-defined chains.

A chain is simply a list of iptables rules with patterns to match particular packets. Each rule specifies a target, which defines what to do with the packet (i.e., allow or drop the packet). A target can also be a jump to a user-defined chain in the same table.

Our Red Hat systems have an iptables firewall installed and configured at boot, as automated by our Kickstart configuration. We also automated the distribution of the firewall configuration file to our Red Hat web server (using cfengine) back in Chapter 10, so that we could remotely connect to the NRPE daemon. Since iptables on Red Hat is already configured and automated on our network, we'll focus on setting up packet filtering for Debian, specifically on our Debian-based log host, loghost1. This host is ideal because of its security-related duties.

In order to set up iptables on Debian, we'll need to

  1. Define a firewall policy.
  2. Create iptables rules that implement our policy.
  3. Copy the file to loghost1 using cfengine.
  4. Configure the system to start the firewall rules before the network interfaces are brought up.
  5. Restart a network interface or reboot the host, and verify our firewall settings.

We think a very simple firewall policy is appropriate for our log host. We will allow incoming network connections only for these daemons and disallow the rest:

  • syslog-ng
  • NRPE
  • sshd
  • cfengine (to the cfservd daemon)

The daemons and processes on the local system that connect to services on remote hosts will be allowed by our policy, as well as any return traffic for those connections. Any incoming traffic to services other than those listed previously will be blocked.

The rules defined by iptables are enabled by the iptables command line utility. Rules apply to traffic as it comes into an interface, as it leaves an interface, or as it is forwarded between interfaces.

An iptables rule set that implements our log host policy follows. The rules are evaluated in order, and packets that fail to match any explicit rules will have the default policy applied.

#!/bin/sh

# make sure we use the right iptables command
PATH=/sbin:$PATH

# policies (policy can be either ACCEPT or DROP)
# block incoming traffic by default
iptables -P INPUT DROP
# don't forward any traffic
iptables -P FORWARD DROP
# we allow all outbound traffic
iptables -P OUTPUT ACCEPT

# flush old rules so that we start with a blank slate
iptables -F

# flush the nat table so that we start with a blank slate
iptables -F -t nat

# delete any user-defined chains, again, blank slate :)
iptables -X

# allow all loopback interface traffic
iptables -I INPUT -i lo -j ACCEPT

# A TCP connection is initiated with the SYN flag, so the --syn
# rules below match only new connections.

# allow new SSH connections
iptables -A INPUT -i eth0 -p TCP --dport 22 --syn -j ACCEPT
# allow new cfengine connections
iptables -A INPUT -i eth0 -p TCP --dport 5308 --syn -j ACCEPT
# allow new NRPE connections
iptables -A INPUT -i eth0 -p TCP --dport 5666 --syn -j ACCEPT
# allow new syslog-ng over TCP connections
iptables -A INPUT -i eth0 -p TCP --dport 51400 --syn -j ACCEPT

# allow syslog, UDP port 514. UDP lacks state so allow all.
iptables -A INPUT -i eth0 -p UDP --dport 514 -j ACCEPT

# drop invalid packets (not associated with any connection)
# and any new connections
iptables -A INPUT -m state --state NEW,INVALID -j DROP

# stateful filter, allow all traffic to previously allowed connections
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# no final rule, so the default policies apply

Note that this is a shell script; we configure our host this way because a script is easy for you to use and experiment with on your own. It is also possible to save the currently active iptables rules by redirecting the output of the iptables-save command to a file, and to load them again later via the iptables-restore command.
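
For example (the rules file location here is our own choice, not a system default):

# capture the running rule set
iptables-save > /etc/network/iptables.rules

# load it again later, in a single atomic operation
iptables-restore < /etc/network/iptables.rules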

We copied this script into the directory /etc/network/if-pre-up.d/ on loghost1 and made sure the script was executable. Scripts in this directory are run before the network interfaces are brought up. Linux allows firewall rules to be defined for interfaces that don't exist, so with this configuration we never bring up interfaces without packet filtering rules.

Note that we didn't go into the details of how we copied the file using cfengine. By this point in the book, we think that you are probably an expert at copying files using cfengine and don't need yet another example.

Once the iptables script was copied in place, we rebooted the host loghost1. When it came back up, we ran this command as root to inspect the current iptables rule set:

# iptables -L -n -v
Chain INPUT (policy DROP 0 packets, 0 bytes)
 pkts bytes target    prot opt in out source destination
 11  1084 ACCEPT udp -- eth0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:514
  5   300 ACCEPT   tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:51400 flags:0x17/0x02
  0     0 ACCEPT     tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:5666 flags:0x17/0x02
  0     0 ACCEPT     tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:5308 flags:0x17/0x02
  3   180 ACCEPT   tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 flags:0x17/0x02
 328 67720 ACCEPT 0 -- lo   *  0.0.0.0/0 0.0.0.0/0
 76  8793 DROP       0  -- *    *  0.0.0.0/0 0.0.0.0/0  state INVALID,NEW
 1033  157K ACCEPT  0 -- *  *  0.0.0.0/0 0.0.0.0/0  state RELATED,ESTABLISHED

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 1712 packets, 216K bytes)
 pkts bytes target     prot opt in     out     source               destination

We ran iptables with the options to list all rules in all chains and to disable DNS resolution for the IP addresses listed in the rules. The long lines wrap around the page, making it harder to read, but the active rule set matches our policy. The host loghost1 will not allow any inbound network traffic except that required for administration, monitoring, and the one network service it offers to the network: syslog.

For further information on iptables, consult the iptables home page at http://www.netfilter.org/.


Enabling Sudo at Our Example Site

We discussed sudo extensively in Chapter 1, but we didn't have any systems to deploy it to back then. We need to start using sudo at our example site.

The sudoers file installed by the sudo package on Red Hat has a rich set of commands grouped into related tasks, which can be easily delegated to users with particular roles. You might find it to be a good starting point for the global sudoers file at your site.

We used the default Red Hat sudoers file for a new file at the location PROD/repl/root/etc/sudoers/sudoers (plus we added it to our Subversion repository), and simply added this line:

%root           ALL=(ALL)       ALL

To have a good audit trail, we want administrators to execute commands that require root privileges with a single sudo command like this:

$ sudo chmod 700 /root

This way, root commands are logged via syslog by the sudo command, so our log host gets the logs, and the regular logcheck reports will include all commands run as root.

There is a problem, though. Nothing stops our administrators from running a command that gives them a root shell:

$ sudo /bin/sh

When a shell is executed as root, sudo will not (and cannot) log each command run inside the shell. This is a blind spot in our audit trail, and the way to avoid this is to not give unlimited sudo command access to administrators. Instead, you should build on the examples provided in the Red Hat sudoers file to provide only the needed set of commands to your administrator staff. Here are some example entries from the Red Hat sudoers file that delegate privileges in a desirable manner (slightly modified for example purposes):

## User Aliases
## These aren't often necessary, as you can use regular groups
## (ie, from files, LDAP, NIS, etc) in this file - just use %groupname
## rather than USERALIAS
User_Alias ADMINS = nate, kirk

## Command Aliases
## These are groups of related commands...

## Networking
Cmnd_Alias NETWORKING = /sbin/route, /sbin/ifconfig, /bin/ping, /sbin/dhclient, \
 /usr/bin/net, /sbin/iptables, /usr/bin/rfcomm, /usr/bin/wvdial, /sbin/iwconfig, \
 /sbin/mii-tool

## Installation and management of software
Cmnd_Alias SOFTWARE = /bin/rpm, /usr/bin/up2date, /usr/bin/yum

## Next comes the main part: which users can run what software on
## which machines (the sudoers file can be shared between multiple
## systems).
## Syntax:
##
##      user    MACHINE=COMMANDS
##
## The COMMANDS section may have other options added to it.

## Allows members of the 'sys' group to run networking
%sys ALL = NETWORKING

## Allows the ADMINS user alias to run software commands
ADMINS ALL = SOFTWARE

The two command aliases (SOFTWARE and NETWORKING) are perfect examples of using roles to delegate privileges. If a user or administrator needs access to only commands to modify network settings or to install software, the preceding command aliases allow this. The delegation of NETWORKING to a group of users is done via traditional UNIX group membership in this example, and the delegation of SOFTWARE privileges is done via a list of users in the sudoers file itself.


Note Always check to make sure the commands you enable, especially the ones that grant root privileges, don't have shell escapes. Shell escapes are features that allow shell commands to be executed. Any such commands will run with root privileges and completely circumvent the access limitations that we're using sudo for in the first place.
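
The classic example is an editor. Granting sudo access to /usr/bin/vi effectively grants root shells, because vi can run arbitrary commands from within the editor:

$ sudo vi /etc/motd
:!/bin/sh        <-- vi's shell escape spawns a root shell that sudo never logs

Recent sudo releases offer a NOEXEC tag that can prevent many (though not all) shell escapes; see the sudoers man page for details.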


To copy our new sudoers file to the hosts in our example environment, we added a task to PROD/inputs/tasks/app/sudo/cf.copy_sudoers with these contents:

control:
        any::
               AllowRedefinitionOf      = ( sudoers_destination )
               sudoers_destination     = ( "/etc/sudoers" )

        solaris|solarisx86::
               sudoers_destination     = ( "/opt/csw/etc/sudoers" )

copy:
       any::
               $(master_etc)/sudoers/sudoers
                       dest=$(sudoers_destination)
                       mode=440
                       owner=root
                       group=root
                       server=$(fileserver)
                       type=checksum
                       encrypt=true
                       inform=true

We imported the task in PROD/inputs/hostgroups/cf.any, committed our change to Subversion, and checked it out to the live PROD tree on the cfengine master.

We were dismayed to find that we are missing the sudo package on our Debian hosts. To get sudo installed via FAI on future Debian installs, we added the line sudo to PROD/repl/root/srv/fai/config/package_config/FAIBASE. Our Solaris Jumpstart postinstall script already installs sudo, and our Red Hat systems come with it as well. For now, you can manually install sudo on your Debian systems to avoid reimaging just for one package to get installed.
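
The FAIBASE file itself is just a package list under a PACKAGES directive. The excerpt below is reconstructed for illustration only; the installation method keyword and the surrounding package names will differ at your site:

# excerpt from package_config/FAIBASE; other packages omitted
PACKAGES aptitude
sudo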

Be sure to add ignore=/usr/bin/sudo to the Debian section of the PROD/inputs/tasks/os/cf.suid_removal task so that sudo actually works for more than one day!

Security Is a Journey, Not a Destination

We need to be mindful that a secure state is never reached. We can only increase security to where we feel that we have decreased the risk of successful penetration to a low level. We need to keep up to date with security announcements and have security in mind with all administrative activities at our site.

We have now enhanced the security at our site by reducing the overall exposure of our systems to the network, as well as to local threats. Even if an attacker gained access to one of our systems using a nonprivileged account, only a limited number of SUID binaries owned by root can be used for privilege escalation, and local software should be up to date and therefore free of publicly known vulnerabilities.

Host-based security measures are the final line of defense when network firewalls fail to protect our internal hosts. Systems that run daemons that are accessible from outside networks (or the Internet at large) should also be firewalled off from internal networks such as workstation and internal server networks. These measures help prevent exposure in the event that a remote attacker gains access to systems that have to be exposed to hostile networks themselves.

The final weak spot that we didn't cover is that of any internally developed software in use at your site. Such software is especially risky if it is exposed to other networks or the Internet at large. Security advisories and vendor announcements address problems with vendor and open source software, but only source code audits by trusted third parties and good coding practices can protect internally developed software. If you support such software, be sure to take extra steps to firewall the hosts running the software from the rest of the hosts on your network.
