Objectives
Advanced security tools and techniques
How to enhance security for DNS using chroot
To modify kernel parameters to enhance network security
Some advanced IPTables security tips
Advanced backup techniques
The use of ClamAV to check for viruses
To configure basic intrusion detection using TripWire
To detect root kits using Root Kit Hunter and chkrootkit
To use SELinux to prevent crackers from modifying critical system files
Introduction
Security is often overlooked or only considered as an afterthought. Despite the fact that this chapter comes near the end of this course, security has been a primary consideration throughout. We have looked at many aspects of security as we explored the administration of Fedora.
There are still some things we need to consider, and this chapter looks at aspects of security that have not yet been discussed. It is important to realize that security is obtrusive. It will get in your way and frustrate you when you can least afford it. Security that does not cause you at least some inconvenience is not going to deter any decent cracker.
Advanced DNS security
The BIND DNS service is not especially secure, and certain vulnerabilities can allow a malicious user to gain access to the root filesystem and possibly escalate privileges. This issue is easily resolved with the bind-chroot package.
We have already had a brief brush with the chroot command in Volume 1, Chapter 19, but did not cover it in any detail. Tightening security on BIND DNS requires that we use chroot, so let’s take a closer look.
About chroot
The chroot tool confines a program to a specified directory tree, which contains copies of only those parts of the Linux filesystem that the program needs; that tree becomes the program's root filesystem. In this way, if a cracker does force a vulnerability in BIND to access the filesystem, this chroot'ed copy of the filesystem is the only thing at risk. A simple restart of BIND is sufficient to revert to a clean and unhacked version of the filesystem. The chroot utility can be used for more than adding security to BIND, but this is one of the best illustrations of its use.
Enabling bind-chroot
It takes only a little work to enable the chroot’ed BIND environment. We installed the bind-chroot package in Chapter 4 of this volume and we will now put it to use.
Experiment 16-1
Perform this task as the root user on StudentVM2. Because the bind-chroot package was previously installed, we need only to stop and disable the named service and then enable and start the named-chroot service.
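A minimal sketch of those steps, assuming the service names used by the Fedora bind-chroot package:

    # Stop and disable the standard named service
    systemctl stop named
    systemctl disable named

    # Enable and start the chroot'ed version of BIND
    systemctl enable named-chroot
    systemctl start named-chroot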
First, explore the /var/named directory. It already contains the chroot subdirectory because we installed the bind-chroot package. Explore the contents of the /var/named/chroot directory for a few moments. Note that there are directories for /dev, /etc, /run, /usr, and /var. Each of those directories contains copies of only the files required to run a chroot'ed version of BIND. Thus, if a cracker gains access to the host via BIND, these copies are all that they will have access to.
Notice also that there are no zone or other configuration files in the /var/named/chroot/var/named/ directory structure.
You should also check to see if the correct results are returned for external domains such as www.example.org, opensource.com, and apress.com.
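One way to perform these checks is to query the name server directly with the dig command; the domains here are just the ones suggested above:

    # Query the local (chroot'ed) name server for external domains
    dig @localhost www.example.org
    dig @localhost opensource.com
    dig @localhost apress.com

Each query should return an ANSWER section containing the correct IP address for the domain.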
Note
When using the chroot’ed version of named, changes to the zone files must be made in /var/named/chroot/var/named.
In the simple example shown in Figure 16-1, we allow queries from the local network and block queries from the network that leads to the outside world.
Hardening the network
There are some additional steps we can take to harden our network interfaces. Several lines can be added to the /etc/sysctl.d/98-network.conf file which will make our network interfaces more secure. This is highly advisable on the firewall/router.
Experiment 16-2
Begin this experiment as the root user on StudentVM2.
Add the entries shown to the /etc/sysctl.d/98-network.conf file so that the file looks like that shown in Figure 16-2. We created this file in Chapter 6 of this volume when we made our StudentVM2 host into a router; at that time we added two lines to it, one of which is a comment.
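For reference, a representative set of hardening entries of the kind Figure 16-2 contains might look like this; the exact lines and comments in your file may differ:

    # Enable IP packet forwarding - this host is a router (added in Chapter 6)
    net.ipv4.ip_forward = 1

    # Do not accept or send ICMP redirects, which can be used to alter routing
    net.ipv4.conf.all.accept_redirects = 0
    net.ipv4.conf.all.send_redirects = 0

    # Do not accept source-routed packets
    net.ipv4.conf.all.accept_source_route = 0

    # Enable reverse-path filtering to drop packets with spoofed source addresses
    net.ipv4.conf.all.rp_filter = 1

    # Log packets with impossible (martian) source addresses
    net.ipv4.conf.all.log_martians = 1

    # Ignore broadcast pings to prevent participation in smurf attacks
    net.ipv4.icmp_echo_ignore_broadcasts = 1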
After making the above additions to the /etc/sysctl.d/98-network.conf file, reboot the computer. A reboot is not strictly necessary; you could instead make these changes directly to the specified files in the /proc filesystem. In my opinion, however, testing these changes with a reboot is the correct approach because the file is intended to set these variable values during Linux startup.
After the reboot, verify in the /proc filesystem that the variables have their values set as defined in the 98-network.conf file.
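A couple of hedged examples of how to check, using the variable names from the sketch above:

    # Check individual values directly in /proc
    cat /proc/sys/net/ipv4/conf/all/rp_filter
    cat /proc/sys/net/ipv4/conf/all/accept_redirects

    # Or list all current values for the interface-related variables
    sysctl -a 2>/dev/null | grep 'net.ipv4.conf.all'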
It is difficult to test how the changes themselves work without a way to generate offending packets. What we can test is that things still work as they should. Ping each host from the other and log in to each from the other using SSH. From StudentVM1, ping a host outside the local network, use SSH to log in to an external host, send email, and use a browser to view an external web site. If these tests work, then everything should be in good shape.
I found an error during this testing, so you might also. In my case, I had not set ip_forward to 1 in order to configure StudentVM2 as a router. As a result, I could not ping hosts outside the local network.
These changes can be added to all Linux hosts on a network but should always be added to a system acting as a router with an outside connection to the Internet. Be sure to change the statements related to routing as required for a non-routing host.
Advanced iptables
We have been working with IPTables throughout this course, and especially in this volume, to provide firewall rules for our hosts. There are some additional things we can do to improve the efficacy of our firewall.
The primary method for enhancing our IPTables rule set is to specify the network from which the firewall will accept packets requesting a connection either to the firewall or to an external network. We do this with the -s (source) option which allows us to specify a network or specific hosts by IP address. We could also specify the source by interface name such as enp0s3.
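For example, a rule of the following form accepts SSH connections only when the packets originate on the internal network; the 192.168.56.0/24 address range is an assumption based on the internal network used in this course, so substitute your own:

    # Accept SSH connections only from the internal network
    iptables -A INPUT -s 192.168.56.0/24 -p tcp --dport 22 -j ACCEPT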
Experiment 16-3
Start this experiment as the root user on StudentVM2. Let’s start by performing a test to verify that an SSH connection can be made from the external network, 10.0.2.0/24.
Testing from the outside network is a bit complicated, but we can do it. Remember our StudentVM3 virtual machine? We can use that VM for this purpose. It should already be configured to use the outside network, “StudentNetwork,” and it should still be configured to boot from the Live USB image. Because the external network is not configured for DHCP, we must configure the network after the VM has completed its startup.
To make this a little easier, before starting StudentVM3, open the general settings page for StudentVM3 and configure the shared clipboard to be bidirectional. You should also power off StudentVM2 and do the same thing.
Boot StudentVM2 and then boot StudentVM3 to the Fedora Live image. When the startup has completed, open a terminal session and su to root. Create a new file, /etc/sysconfig/network-scripts/ifcfg-enp0s3, with the content shown below. I did this easily: I copied the content of the ifcfg-enp0s3 file from StudentVM2, opened a new enp0s3 file in an editor on StudentVM3, pasted the data into it, made the changes necessary to create the file below, and saved it.
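A representative version of the file looks like this; the specific IP address is illustrative, so use any free address on the 10.0.2.0/24 network:

    # /etc/sysconfig/network-scripts/ifcfg-enp0s3 on StudentVM3
    TYPE=Ethernet
    BOOTPROTO=static
    NAME=enp0s3
    DEVICE=enp0s3
    ONBOOT=yes
    IPADDR=10.0.2.81
    PREFIX=24
    GATEWAY=10.0.2.1
    DNS1=10.0.2.1

After saving the file, activate the interface with ifup enp0s3 or by restarting NetworkManager.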
So before changing the IPTables rules, we can open an SSH connection to StudentVM2 from StudentVM3.
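One way to make the modification, sketched here under the assumptions that the internal network is 192.168.56.0/24 and that the unrestricted SSH rule is at position 5, is to replace the existing rule in place; list the rules first to find the correct position on your host:

    # Show the INPUT chain rules with their line numbers
    iptables -L INPUT -n --line-numbers

    # Replace the unrestricted SSH rule with one limited to the internal network
    iptables -R INPUT 5 -s 192.168.56.0/24 -p tcp --dport 22 -j ACCEPT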
The CIDR notation for our internal network IP address range allows SSH connections from our internal network and no other. SSH connection attempts from the outside network, 10.0.2.0/24, are rejected by the last rule in the filter table because the packets do not match any other rules.
This attempt now generates a “No route to host” error message.
Allow StudentVM3 to continue to run because you will need it for the exercises at the end of the chapter. If you turn it off, you will need to reconfigure the network by adding the ifcfg-enp0s3 file again.
Advanced backups
We have already discussed backups as a necessary part of a good security policy, but we did not go into any detail. Let’s do that now and look at the rsync utility as a tool for backups.
rsync
None of the commercial or more complex open source backup solutions fully met my needs, and restoring from a tarball can be time-consuming and sometimes a bit frustrating. I also really wanted to use another tool I had heard about, rsync.1
I had been experimenting with the rsync command, which has some very interesting features that I have been able to use to good advantage. My primary objectives were to create backups from which users could locate and restore files quickly, without having to extract data from a backup tarball, and to reduce the amount of time taken to create the backups.
This section is intended only to describe my own use of rsync in a backup scenario. It is not a look at all of the capabilities of rsync or the many other interesting ways in which it can be used.
The rsync command was written by Andrew Tridgell and Paul Mackerras and first released in 1996. The primary intention for rsync is to remotely synchronize the files on one computer with those on another. Did you notice what they did to create the name there? rsync is open source software and is provided with all of the distros with which I am familiar.
The rsync command can be used to synchronize two directories or directory trees whether they are on the same computer or on different computers, but it can do so much more than that. rsync creates or updates the target directory to be identical to the source directory. The target directory is freely accessible by all the usual Linux tools because it is not stored in a tarball or zip file or any other archival file type; it is just a regular directory with regular Linux files that can be navigated by regular users using basic Linux tools. This meets one of my primary objectives.
One of the most important features of rsync is the method it uses to synchronize preexisting files that have changed in the source directory. Rather than copying the entire file from the source, it uses checksums to compare blocks of the source and target files. If all of the blocks in the two files are the same, no data is transferred. If the data differs, only the blocks that have changed on the source are transferred to the target. This saves an immense amount of time and network bandwidth for remote sync. For example, when I first used my rsync Bash script to back up all of my hosts to a large external USB hard drive, it took about 3 hours. That is because all of the data had to be transferred because none of it had been previously backed up. Subsequent backups took between 3 and 8 minutes of real time, depending upon how many files had been changed or created since the previous backup. I used the time command to determine this, so it is empirical data. Last night, for example, it took 3 minutes and 12 seconds to complete a backup of approximately 750GB of data from 6 remote systems and the local workstation. Of course, only a few hundred megabytes of data were actually altered during the day and needed to be backed up.
Let’s see how this works.
Experiment 16-4
Start this experiment as the student user on StudentVM2. Note the current contents of the student user’s home directory. On StudentVM1, also note the contents of the student user’s home directory. They should be quite different.
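The synchronization itself can be done with a single command, run as the student user on StudentVM2; this sketch assumes the hostname studentvm1 resolves on your network:

    # Pull the student home directory from StudentVM1 to StudentVM2;
    # -a preserves ownership, permissions, and timestamps; -H preserves hard links
    rsync -aH student@studentvm1:/home/student/ /home/student/

The trailing slash on the source path tells rsync to copy the contents of the directory rather than the directory itself.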
That was easy! Now check the home directory for the student user on StudentVM2. All of the student user's files that were present on StudentVM1 are now on StudentVM2.
Verify that file7.txt (or whichever file you chose to work with) is now the same, larger size on both hosts. Compare the times for both instances of the command. The real time is not important because that includes the time we took to type in the password. The important times are the amounts of user and system time used by the commands, which are significantly less during the second invocation. Although some of that savings may be due to caching, on a system where the command is run once a day to synchronize huge amounts of data, the time savings is very noticeable.
Now let’s assume that yesterday we used rsync to synchronize two directories. Today we want to re-synchronize them, but we have deleted some files from the source directory. The normal way in which rsync would work using the syntax we used in Experiment 16-4 is to simply copy all the new or changed files to the target location and leave the deleted files in place on the target. This may be the behavior you want, but if you would prefer that files deleted from the source also be deleted from the target, that is, the backup, you can add the --delete option to make that happen.
Another interesting option, and my personal favorite because it increases the power and flexibility of rsync immensely, is the --link-dest option. The --link-dest option uses hard links2,3 to create a series of daily backups that take up very little additional space for each day and also take very little time to create.
Specify the previous day's target directory with this option and a new directory for today. The rsync command then creates today's new directory, and for each file in yesterday's directory it creates a hard link in today's directory. So we now have a bunch of hard links to yesterday's files in today's directory, but no new data has been copied or duplicated. rsync then performs its sync as usual: when a change is detected in a source file, it deletes the hard link in today's directory, replaces it with an exact copy of the file from yesterday's backup, and then copies the changes from the source to today's copy. It also deletes files in the target directory that have been deleted from the source directory.
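Putting this together, a single day's backup command might look like the following sketch; the dates and backup paths are illustrative:

    # Create today's backup as hard links to yesterday's backup,
    # copying only the files that have changed
    rsync -aH --delete --link-dest=/media/Backup/2018-01-01 \
        /home/student/ /media/Backup/2018-01-02/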
Files and directories that do not need to be backed up can be skipped with the --exclude option. Note that each file pattern you want to exclude must have a separate --exclude option.
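For example, to skip cache directories, something like this; the patterns and paths are illustrative:

    # Each excluded pattern requires its own --exclude option
    rsync -aH --delete --exclude .cache --exclude .thumbnails \
        /home/student/ /media/Backup/2018-01-02/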
The rsync command has a very large number of options that you can use to customize the synchronization process. For the most part, the relatively simple commands that I have described here are perfect for making backups for my personal needs. Be sure to read the extensive man page for rsync to learn about more of its capabilities as well as details of the options discussed here.
Performing backups
I automated my backups because of one of my basic tenets: "automate everything." I wrote a Bash script, rsbu, that handles the details of creating a series of daily backups using rsync. This includes ensuring that the backup medium is mounted, generating the names for yesterday's and today's backup directories, creating appropriate directory structures on the backup medium if they are not already there, performing the actual backups, and unmounting the medium.
The end result of the method in which I employ the rsync command in my script is that I end up with a date-sequence of backups for each host in my network. The backup drives end up with a structure similar to the one shown in Figure 16-6. This makes it easy to locate specific files that might need to be restored.
So, starting with an empty disk on January 1, the rsbu script makes a complete backup for each host of all the files and directories that I have specified in the configuration file. This first backup can take several hours if you have a lot of data like I do.
On January 2, the rsync command uses the --link-dest= option to create a complete new directory structure identical to that of January 1; then it looks for files that have changed in the source directories. If any have changed, a copy of the original file from January 1 is made in the January 2 directory and then the parts of the file that have been altered are updated from the original.
Figure 16-6 also shows a bit more detail for the host2 series of backups for one file, /home/student/file1.txt, on the dates January 1, 2, and 3. On January 2, the file has not changed since January 1. In this case, the rsync backup does not copy the original data from January 1. It simply creates a directory entry with a hard link in the January 2 directory to the January 1 directory which is a very fast procedure. We now have two directory entries pointing to the same data on the hard drive. On January 3, the file has been changed. In this case, the data for ../2018-01-02/home/student/file1.txt is copied to the new directory, ../2018-01-03/home/student/file1.txt and any data blocks that have changed are then copied to the backup file for January 3. These strategies, which are implemented using features of the rsync program, allow backing up huge amounts of data while saving disk space and much of the time that would otherwise be required to copy data files that are identical.
One of my procedures is to run the backup script twice each day from a single cron job. The first iteration performs a backup to an internal 4TB hard drive. This is the backup that is always available and always at the most recent version of all my data. If something happens and I need to recover one file or all of them, the most I could possibly lose is a few hours’ worth of work.
The second backup is made to one of a rotating series of 4TB external USB hard drives. I take the most recent drive to my safe deposit box at the bank at least once per week. If my home office is destroyed and the backups I maintain there are destroyed along with it, I just have to get the external hard drive from the bank, and I have lost at most a single week of data. That type of loss is easily recovered.
The drives I am using for backups, not just the internal hard drive but also the external USB hard drives that I rotate weekly, never fill up. This is because the rsbu script I wrote checks the ages in days of the backups on each drive before a new backup is made. If there are any backups on the drive that are older than the specified number of days, they are deleted. The script uses the find command to locate these backups. The number of days is specified in the rsbu.conf configuration file.
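A hedged sketch of the kind of find command the script might use; the backup path, directory layout, and 30-day retention period are all illustrative:

    # Locate dated backup directories older than 30 days; the script
    # removes the directories this command finds
    find /media/Backup -maxdepth 2 -type d -name "20*" -mtime +30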
Of course, after a complete disaster, I would first have to find a new place to live with office space for my wife and me, purchase parts and build new computers, restore from the remaining backup, and then recreate any lost data.
My script, rsbu, is available along with its configuration file, rsbu.conf, and a READ.ME file as a tarball, rsbu.tar, from https://github.com/Apress/using-and-administering-linux-volume-3/raw/master/rsbu.tar.gz.
You can use that script as the basis for your own backup procedures. Be sure to make any modifications you need and test thoroughly.
Recovery testing
No backup regimen would be complete without testing. You should regularly test recovery of random files or entire directory structures to ensure not only that the backups are working, but that the data in the backups can be recovered for use after a disaster. I have seen too many instances where a backup could not be restored for one reason or another and valuable data was lost because the lack of testing prevented discovery of the problem.
Just select a file or directory to test and restore it to a test location such as /tmp so that you won’t overwrite a file that may have been updated since the backup was performed. Verify that the files’ contents are as you expect them to be. Restoring files from a backup made using the preceding rsync commands is simply a matter of finding the file you want to restore from the backup and then copying it to the location to which you want to restore it.
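For example, restoring a single file from a dated backup directory is just a copy; the paths here are illustrative:

    # Restore one file to /tmp for testing
    cp /media/Backup/2018-01-03/home/student/file1.txt /tmp/

    # Verify the restored file against the current version
    diff /tmp/file1.txt /home/student/file1.txt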
I have had a few circumstances where I have had to restore individual files and, occasionally, a complete directory structure. I have had to restore the entire contents of a hard drive on a couple occasions. Most of the time this has been self-inflicted when I accidentally deleted a file or directory. At least a few times it has been due to a crashed hard drive. So those backups do come in handy.
Restrict SSH remote root login
Sometimes it is necessary to allow SSH connections from external sources, and it may not be possible to specify which IP addresses might be the source. In this situation we can prevent root logins via SSH entirely. It would then be necessary to log in to the firewall host as a non-root user and su to root, or to SSH to an internal host as that non-root user and then su to root.
Experiment 16-5
As the root user on StudentVM1, log in to StudentVM2 via SSH. You should be able to do this. After confirming that you have logged in to StudentVM2, log out again.
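On StudentVM2, make the following change; this is a sketch, and the existing PermitRootLogin line in your sshd_config may be commented out or set to yes:

    # In /etc/ssh/sshd_config, set:
    PermitRootLogin no

    # Then restart the service to enable the change
    systemctl restart sshd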
With the change made and SSHD restarted, as the root user on StudentVM1, try again to log in to StudentVM2 as root.
You should receive a "Permission denied" error. Also be sure to verify that you can log in to StudentVM2 as the student user.
Now change this back to allow remote root login via SSH, and test to ensure that you can log in to StudentVM2 as root.
Malware
Protecting our systems against malware like viruses, root kits, and Trojan horses is a big part of security. We have several tools we can use to do this, four of which we will cover here. Viruses and Trojan horses are usually delivery agents and can be used to deliver malware such as root kits.
Root kits
A root kit is malware that replaces or modifies legitimate GNU utilities to both perform its own activities and hide the existence of its own files. For example, a root kit can replace tools like ls so that it won't display any of the files installed by the root kit. Other tools can scan log files and remove any entries that might betray the existence of files belonging to the root kit.
Most root kits are intended to allow a remote attacker to take over a computer and use it for their own purposes. With this type of malware, the objective of the attacker is to remain undetected. They are not usually after ransom or to damage your files.
There are two good programs that can be used to scan your system for rootkits. The chkrootkit4 and Root Kit Hunter5 tools are both used to locate files that may have been infected, replaced, or compromised by root kits.
Root Kit Hunter also checks for network breaches such as backdoor ports that have been opened, as well as for normal services that are listening on various ports, such as HTTP and IMAPS. If those services are listening, a warning is printed.
Experiment 16-6
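Perform these steps as the root user on StudentVM2. A minimal sketch, assuming the chkrootkit package is available in the Fedora repository:

    # Install and run chkrootkit
    dnf -y install chkrootkit
    chkrootkit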
You can see all of the checks performed by this tool. Any anomalies would be noted. There is no man page for chkrootkit but there is some documentation in /usr/share/doc/chkrootkit. Be sure to read that for additional information.
I think that the Root Kit Hunter program is a better and more complete program. It is more flexible because it can update the signature files without upgrading the entire program. Like chkrootkit, it also checks for changes to certain system executable files that are frequently targeted by crackers.
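A minimal sketch of installing and running it, assuming the rkhunter package in the Fedora repository:

    # Install Root Kit Hunter
    dnf -y install rkhunter

    # Update the signature files, then run the checks
    rkhunter --update
    rkhunter --check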
Note
The rkhunter --propupd command should be run after updates are installed and after upgrades to new releases such as from Fedora 29 to Fedora 30.
This program also displays a long list of tests and their results as it runs, along with a nice summary at the end. You can find a complete log with even more detailed information at /var/log/rkhunter/rkhunter.log.
Note that the installation RPM for Root Kit Hunter sets up a daily cron job with a script in /etc/cron.daily. The script performs this check every morning at about 3 a.m. If a problem is detected, an email message is sent to root. If no problems are detected, no email or any other indication that the rkhunter program was even run is provided.
ClamAV
ClamAV is an open source antivirus program; there are others, some of which are not open source. ClamAV can be used to scan a computer for viruses.
ClamAV is not installed on your host by default. It will be installed with an empty database file and will fail when run if a valid database is not installed. We will install the ClamAV update utility which will also install all dependencies. Installing the clamav-update package allows easy update of the ClamAV database using the freshclam command.
Experiment 16-7
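A minimal sketch of the installation and database update, assuming the standard Fedora package names:

    # Install the scanner and the database update utility
    dnf -y install clamav clamav-update

    # Download the current virus definition database
    freshclam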
You will notice that there are a couple warnings in that output data. ClamAV needs to be updated, but the latest version has not yet been uploaded to the Fedora repository.
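Next, run the scan itself; the target directory and output file name here are illustrative:

    # Recursively scan /home, recording the results while also displaying them
    clamscan --recursive /home | tee /tmp/clamscan.txt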
This command emits a very long data stream, so I reproduced only a bit of it here. Using the tee command records the data stream in the specified file while also sending it on to STDOUT. This makes it easy to use different tools and searches on the file.
View the content of the clamscan.txt file. See if you can find files that do not have “OK” appended to the end of the line.
The clamscan utility should be run on a regular basis to ensure that no viruses have penetrated your defenses.
Tripwire
Tripwire is intrusion detection software. It can report on system files that have been altered in some way, possibly by malware installed as part of a root kit or Trojan horse. Like many tools of this type, Tripwire cannot prevent an intrusion; it can only report on one after it has occurred and left behind some evidence that can be detected and identified.
Tripwire6 is also a commercial company that sells a version of Tripwire and other cybersecurity products. We will install an open source version of Tripwire and configure it for use on our server.
Experiment 16-8
The Tripwire RPM for Fedora does not create a complete and working configuration. The documentation in the /usr/share/doc/tripwire/README.Fedora file contains instructions for performing that configuration. I strongly suggest you read that file, but we will proceed here with the bare minimum required to get Tripwire working.
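A minimal sketch of that bare minimum, assuming the helper script installed by the Fedora tripwire package:

    # Generate the site and local keys and sign the configuration and
    # policy files; you will be prompted to create two passphrases
    tripwire-setup-keyfiles

    # Initialize the Tripwire database
    tripwire --init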
You will see some warnings about files that the default policy expects to see. You can ignore those for this experiment, but for a production environment, you would want to create a policy file that reflects the files you actually have and the actions to be taken if one changes.
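Then run an integrity check. The report file name is built from the hostname and a timestamp, so substitute the actual name of the file on your host:

    # Run an integrity check; an encrypted report is also written to
    # /var/lib/tripwire/report/
    tripwire --check

    # Print a stored report in readable form
    twprint --print-report --twrfile /var/lib/tripwire/report/<hostname-datetime>.twr | less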
Once again, Tripwire generates the same warnings. Explore the Tripwire report file we created which contains a nice summary near the beginning.
SELinux
We disabled SELinux early in this course so we would not need to deal with side effects in other experiments caused by this important security tool. SELinux was developed by the NSA to provide a highly secure computing environment. True to the GPL, they have made this code available to the rest of the Linux community, and it is included as part of nearly every mainstream distribution.
I have no idea how much we should trust the NSA itself, but because the code is open source and can be and has been examined by many programmers around the world, the likelihood of it containing malicious code is quite low. With that said, SELinux is an excellent security tool.
SELinux provides Mandatory Access Control (MAC), which ensures that users must be provided explicit access rights to each object in the host system. The objective of SELinux7 is to prevent a security breach, an intrusion, and to limit the damage an intruder can wreak if one does manage to access a protected host. It accomplishes this by labeling every filesystem object and process. It uses policy rules to define the possible interactions between labeled objects, and the kernel enforces these rules.
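On a system with SELinux enabled, these labels are easy to see; for example:

    # Display the SELinux context (user:role:type) of a couple of filesystem objects
    ls -Z /etc/passwd
    ls -dZ /var/www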
Red Hat has a well-done document that covers SELinux.8 Although written for RHEL 7, it will also apply to all current versions of RHEL, CentOS, Fedora, and other Red Hat-derived distributions.
In this section, we will explore some basic SELinux tasks.
Experiment 16-9
Perform this experiment as the root user on StudentVM2.
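Begin by looking at the currently installed policy, then install the two additional policy types; the package names here are the standard Fedora ones:

    # Only the default targeted policy is present at first
    ls /etc/selinux

    # Install the minimum and mls policy packages
    dnf -y install selinux-policy-minimum selinux-policy-mls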
Each policy is installed in a subdirectory of /etc/selinux. Look at the contents of the /etc/selinux directory again and notice the new minimum and mls subdirectories for their respective policy files.
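The large collection of SELinux man pages can be generated with the sepolicy utility; this is a hedged sketch, assuming sepolicy is provided by the policycoreutils-devel package:

    # Install the tools that include sepolicy
    dnf -y install policycoreutils-devel

    # Generate man pages for all SELinux domains and rebuild the man index
    sepolicy manpage -a -p /usr/share/man/man8
    mandb

    # Count the SELinux-related man pages
    man -k _selinux | wc -l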
You should now find over 900 relevant man pages.
Use a command such as ps -eZ to display the contexts of the running processes. Note that many processes are unconfined but that some, such as various kernel and HTTPD processes, are running in the system_u:system_r context. Some services run in the kernel_t domain, while the HTTPD service tasks run in a special httpd_t domain.
Users who do not have authority for those contexts are unable to manipulate those processes, even when they su to root. However, the "targeted enforcing" mode allows all users to have all privileges, so it would be necessary to restrict some or all users in the seusers file.
To see this, as root, stop the HTTPD service and verify that it has stopped.
It is not necessary to reboot or restart SELinux. Now log in as the user student and su to root. What happens? What is the user's current context?
This is a rather blunt approach, but SELinux does allow you to get much more granular. Creating and compiling those more granular policies is beyond the scope of this course.
Now set the policy mode to "permissive" using the setenforce command, as shown below, and try again to su to root. What happens?
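A sketch of the mode change:

    # Switch to permissive mode immediately; no reboot is required
    setenforce Permissive

    # Confirm the current mode
    getenforce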
Do a bit of cleanup: edit the /etc/selinux/config file again and set SELINUX=disabled. Reboot your StudentVM2 host.
Additional SELinux considerations
Making changes to the filesystem while SELinux is disabled may result in improperly labeled objects and possible vulnerabilities. The best way to ensure that everything is properly labeled is to add an empty file named /.autorelabel in the root directory and reboot the system.
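For example:

    # Force a complete filesystem relabel during the next boot
    touch /.autorelabel
    reboot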
SELinux is intolerant of extra whitespace. Be sure to eliminate extra whitespace in SELinux configuration files in order to ensure that there are no errors.
Social engineering
A search on “Internet safety” will result in a huge number (well over a billion) of hits, but the best results will be in the first few pages. Many are aimed at youth, teens, and parents, but they have good information for everyone.
Chapter summary
This chapter has explored some additional security precautions that we can take to further harden our Fedora systems against various types of cracking attacks. It also explored some advanced backup techniques because, failing all else, good, usable backups can allow us to recover from almost any disaster, including attacks by crackers.
None of the tools discussed in this chapter provide a single solution for Linux system security – there is no such thing. Taken together in combinations that make sense for your environment, as well as along with all of the other security we have previously implemented in this course, these tools can significantly improve the security of any Linux host. Although our virtual network and the virtual machines contained in it are now safer, there is always more that can be done. The question we need to ask is whether the cost of the effort required to lock down our systems and networks even more is worth the benefits accrued by doing so.
Remember, like most of the subjects we have covered in this course, we have just touched the surface. You should now be aware of a few of the dangers and some of the tools we have to counter those threats. This is only the beginning, and you should explore these tools, and others not covered here, in more depth in order to ensure that the Linux hosts for which you have responsibility are secured to the greatest extent possible.
Exercises
- 1.
In Experiment 16-3, there is no valid DNS server that can be used for our SSH command to StudentVM2. Why does the name server on StudentVM2 not work for this?
- 2.
On StudentVM2, identify the network ports that we have open with IPTables rules and which should be open only on the internal network and not the external network. Modify those rules to accept connections only from the internal network. Test the results.
- 3.
If you have not already, download the rsbu.tar.gz file from the Apress web site https://github.com/Apress/using-and-administering-linux-volume-3/raw/master/rsbu.tar.gz and install it. Using the enclosed script and configuration file, set up a simple backup configuration that runs once per day and backs up the entire home directories of both StudentVM1 and StudentVM2.
- 4.
With SELinux enabled, determine the student user's context.
- 5.
Why should clamscan be run on the /home directory of a mail server?
- 6.
Configure Tripwire to ignore the files that do not exist on StudentVM2. Initialize the database and run the integrity check.
- 7.
Why are the Tripwire report files encrypted?
- 8.
What other services, besides HTTPD, have their own SELinux domains?