© David Both 2020
D. Both, Using and Administering Linux: Volume 3, https://doi.org/10.1007/978-1-4842-5485-1_16

16. Security

David Both, Raleigh, NC, USA

Objectives

In this chapter, you will learn
  • Advanced security tools and techniques

  • How to enhance security for DNS using chroot

  • To modify kernel parameters to enhance network security

  • Some advanced IPTables security tips

  • Advanced backup techniques

  • The use of ClamAV to check for viruses

  • To configure basic intrusion detection using TripWire

  • To detect root kits using Root Kit Hunter and chkrootkit

  • To use SELinux to prevent crackers from modifying critical system files

Introduction

Security is often overlooked or only considered as an afterthought. Despite the fact that this chapter comes near the end of this course, security has been a primary consideration throughout. We have looked at many aspects of security as we explored the administration of Fedora.

There are still some things we need to consider and this chapter looks at aspects of security that have not yet been discussed. It is important to realize that security is obtrusive. It will get in your way and frustrate you when you least need that. Security that does not cause you at least some inconvenience is not going to deter any decent cracker.

Advanced DNS security

The BIND DNS service is not especially secure, and certain vulnerabilities can allow a malicious user to gain access to the root filesystem and possibly escalate privileges. This issue is easily resolved with the BIND chroot package.

We have already had a brief brush with the chroot command in Volume 1, Chapter 19, but did not cover it in any detail. Tightening security on BIND DNS requires that we use chroot, so let’s take a closer look.

About chroot

The chroot tool is used to create a secure copy of parts of the Linux filesystem. In this way, if a cracker does exploit a vulnerability in BIND to access the filesystem, this chroot’ed copy of the filesystem is the only thing at risk. A simple restart of BIND is sufficient to revert to a clean and unhacked version of the filesystem. The chroot utility can be used for more than adding security to BIND, but this is one of the best illustrations of its use.

Enabling bind-chroot

It takes only a little work to enable the chroot’ed BIND environment. We installed the bind-chroot package in Chapter 4 of this volume and we will now put it to use.

Experiment 16-1

Perform this task as the root user on StudentVM2. Because the bind-chroot package was previously installed, we need only to stop and disable the named service and then enable and start the named-chroot service.

First, explore the /var/named directory. It already contains the chroot subdirectory because we installed the bind-chroot package. Explore the contents of the /var/named/chroot directory for a few moments. Note that there are directories for /dev, /etc, /run, /usr, and /var. Each of those directories contains copies of only the files required to run a chroot’ed version of BIND. Thus, if a cracker gains access to the host via BIND, these copies are all that they will have access to.
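One convenient way to look around is with a recursive listing.
[root@studentvm2 ~]# ls -l /var/named/chroot
[root@studentvm2 ~]# ls -lR /var/named/chroot | less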

Notice also that there are no zone or other configuration files in the /var/named/chroot/var/named/ directory structure.

Make /var/named the PWD. Stop and disable the named service.
[root@studentvm2 ~]# systemctl disable named ; systemctl stop named
Removed /etc/systemd/system/multi-user.target.wants/named.service.
[root@studentvm2 ~]#
Now start and enable the named-chroot service.
[root@studentvm2 ~]# systemctl enable named-chroot ; systemctl start named-chroot
Created symlink /etc/systemd/system/multi-user.target.wants/named-chroot.service → /usr/lib/systemd/system/named-chroot.service.
[root@studentvm2 ~]#
Now examine the /var/named/chroot/var/named/ directory and see that the required configuration files are present. Verify the status of the named-chroot service.
[root@studentvm2 ~]#  systemctl status named-chroot
• named-chroot.service - Berkeley Internet Name Domain (DNS)
   Loaded: loaded (/usr/lib/systemd/system/named-chroot.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-08-26 13:46:51 EDT; 2min 43s ago
  Process: 20092 ExecStart=/usr/sbin/named -u named -c ${NAMEDCONF} -t /var/named/chroot $OPTIONS (code=>
  Process: 20089 ExecStartPre=/bin/bash -c if [ ! "$DISABLE_ZONE_CHECKING" == "yes" ]; then /usr/sbin/na>
 Main PID: 20093 (named)
    Tasks: 5 (limit: 4696)
   Memory: 54.7M
   CGroup: /system.slice/named-chroot.service
           └─20093 /usr/sbin/named -u named -c /etc/named.conf -t /var/named/chroot
Aug 26 13:46:51 studentvm2.example.com named[20093]: network unreachable resolving './DNSKEY/IN': 2001:5>
Aug 26 13:46:51 studentvm2.example.com named[20093]: network unreachable resolving './NS/IN': 2001:503:b>
Aug 26 13:46:51 studentvm2.example.com named[20093]: network unreachable resolving './DNSKEY/IN': 2001:5>
Aug 26 13:46:51 studentvm2.example.com named[20093]: network unreachable resolving './NS/IN': 2001:500:2>
Aug 26 13:46:51 studentvm2.example.com named[20093]: network unreachable resolving './DNSKEY/IN': 2001:5>
Aug 26 13:46:51 studentvm2.example.com named[20093]: network unreachable resolving './NS/IN': 2001:500:2>
Aug 26 13:46:51 studentvm2.example.com named[20093]: network unreachable resolving './DNSKEY/IN': 2001:d>
Aug 26 13:46:51 studentvm2.example.com named[20093]: network unreachable resolving './NS/IN': 2001:dc3::>
Aug 26 13:46:51 studentvm2.example.com named[20093]: managed-keys-zone: Key 20326 for zone . acceptance >
Aug 26 13:46:51 studentvm2.example.com named[20093]: resolver priming query complete
lines 1-21/21 (END)
Now do a lookup to further verify that it is working as it should. Check that the server that responds to this query has the correct IP address for StudentVM2.
[root@studentvm2 ~]# dig studentvm1.example.com
; <<>> DiG 9.11.6-P1-RedHat-9.11.6-2.P1.fc29 <<>> studentvm1.example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 50661
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 2
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: dff58dbd1c4a55385fc785a65d641d0c5ea6f3a3d5550099 (good)
;; QUESTION SECTION:
;studentvm1.example.com.                IN      A
;; ANSWER SECTION:
studentvm1.example.com. 86400   IN      A       192.168.56.21
;; AUTHORITY SECTION:
example.com.            86400   IN      NS      studentvm2.example.com.
;; ADDITIONAL SECTION:
studentvm2.example.com. 86400   IN      A       192.168.56.1
;; Query time: 0 msec
;; SERVER: 192.168.56.1#53(192.168.56.1)
;; WHEN: Mon Aug 26 13:55:24 EDT 2019
;; MSG SIZE  rcvd: 136
[root@studentvm2 ~]#

You should also check to see if the correct results are returned for external domains such as www.example.org, opensource.com, and apress.com.

Note

When using the chroot’ed version of named, changes to the zone files must be made in /var/named/chroot/var/named.

It is also possible to add ACLs (Access Control Lists) to specify which hosts are allowed to access the name server. These ACL definitions and host lists are added to the /etc/named.conf file. Hosts can be explicitly allowed or denied. Figure 16-1 shows how this can be configured. These statements would be added to the options section of /etc/named.conf.
Figure 16-1

Sample ACL entries for named.conf

In the simple example shown in Figure 16-1, we allow queries from the local network and block queries from the network that leads to the outside world.
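The figure itself is an image and is not reproduced here, but in BIND syntax the idea looks something like the following sketch. The ACL names are illustrative assumptions; note that acl clauses are defined at the top level of /etc/named.conf, while the allow-query statement that references them goes in the options section.
acl goodguys { 192.168.56.0/24; localhost; };
acl badguys { 10.0.2.0/24; };
options {
        ...
        allow-query { !badguys; goodguys; };
        ...
};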

Hardening the network

There are some additional steps we can take to harden our network interfaces. Several lines can be added to the /etc/sysctl.d/98-network.conf file which will make our network interfaces more secure. This is highly advisable on the firewall/router.

Experiment 16-2

Begin this experiment as the root user on StudentVM2.

Add the entries shown to the /etc/sysctl.d/98-network.conf file so that the file looks like the one shown in Figure 16-2. We created this file in Chapter 6 of this volume when we made our StudentVM2 host into a router; at that time we added two lines to it, one of which is a comment.

These changes will take effect at the next boot. Of course, you can also enable them immediately without a reboot by making the appropriate changes to the associated files in the /proc filesystem.
Figure 16-2

These entries in the /etc/sysctl.d/98-network.conf file provide additional network security
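The figure image is not reproduced here; a typical set of hardening entries looks something like the following sketch, though the exact lines in the book's figure may differ. Each of these corresponds to a file under /proc/sys/net/ipv4/.
# Do not accept or send ICMP redirects
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.all.send_redirects = 0
# Do not accept source-routed packets
net.ipv4.conf.all.accept_source_route = 0
# Enable reverse-path filtering to drop spoofed packets
net.ipv4.conf.all.rp_filter = 1
# Log packets with impossible source addresses
net.ipv4.conf.all.log_martians = 1
# Use SYN cookies to resist SYN flood attacks
net.ipv4.tcp_syncookies = 1
# Ignore broadcast pings (smurf attack mitigation)
net.ipv4.icmp_echo_ignore_broadcasts = 1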

After making the above additions to the /etc/sysctl.d/98-network.conf file, reboot the computer. We don’t really need to reboot, but it is faster for the purposes of this experiment – unless you want to make these changes directly to the specified files in the /proc filesystem. In my opinion, testing these changes with a reboot is the correct way to test this because the file is intended to set these variable values during Linux startup.

After the reboot, verify in the /proc filesystem that the variables have their values set as defined in the 98-network.conf file.
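For example, assuming the rp_filter entry from the sketch above, you could check it either directly in /proc or with the sysctl command:
[root@studentvm2 ~]# cat /proc/sys/net/ipv4/conf/all/rp_filter
1
[root@studentvm2 ~]# sysctl net.ipv4.conf.all.rp_filter
net.ipv4.conf.all.rp_filter = 1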

It is difficult to test how the changes themselves work without a way to generate offending packets. What we can test is that things still work as they should. Ping each host from the other and log in to each from the other using SSH. From StudentVM1, ping a host outside the local network, use SSH to log in to an external host, send email, and use a browser to view an external web site. If these tests work, then everything should be in good shape.

I found an error during this testing, so you might also. In my case, I had not set ip_forward to 1 in order to configure StudentVM2 as a router. As a result, I could not ping hosts outside the local network.

These changes can be added to all Linux hosts on a network but should always be added to a system acting as a router with an outside connection to the Internet. Be sure to change the statements related to routing as required for a non-routing host.

Advanced iptables

We have been working with IPTables throughout this course, but especially in this volume, to provide firewall rules for our hosts. There are some additional things we can do to improve the efficacy of our firewall.

The primary method for enhancing our IPTables rule set is to specify the network from which the firewall will accept packets requesting a connection either to the firewall or to an external network. We do this with the -s (source) option, which allows us to specify a network or specific hosts by IP address. We could also match on an interface name such as enp0s3 using the -i option.

Experiment 16-3

Start this experiment as the root user on StudentVM2. Let’s start by performing a test to verify that an SSH connection can be made from the external network, 10.0.2.0/24.

Testing from the outside network is a bit complicated, but we can do it. Remember our StudentVM3 virtual machine? We can use that VM for this purpose. It should already be configured to use the outside network, “StudentNetwork,” and it should still be configured to boot from the Live USB image. Because the external network is not configured for DHCP, we must configure the network after the VM has completed its startup.

To make this a little easier, before starting StudentVM3, open the general settings page for StudentVM3 and configure the shared clipboard to be bidirectional. You should also power off StudentVM2 and do the same thing.

Boot StudentVM2 and then boot StudentVM3 to the Fedora Live image. When the startup has completed, open a terminal session and su to root. Create a new file, /etc/sysconfig/network-scripts/ifcfg-enp0s3, and add the following content. I did this easily: I copied the content of the ifcfg-enp0s3 file from StudentVM2, opened a new enp0s3 file in an editor on StudentVM3, pasted the data into it, made the changes necessary to create the file below, and saved it.

I removed the UUID line and set the IPADDR to 10.0.2.31. Everything else can stay the same.
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=10.0.2.31
PREFIX=24
GATEWAY=10.0.2.1
DNS1=192.168.56.1
DNS2=10.0.2.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s3
DEVICE=enp0s3
ONBOOT=yes
It should not be necessary to enable the connection as that should happen almost immediately after the file is saved. If it does not start, you can use the ip command.
[liveuser@localhost-live ~]$ ip link set enp0s3 up
Now we can do our initial testing. As the root user on StudentVM3, open an SSH connection to StudentVM2. Because we do not have a valid name server that will resolve studentvm2, we must use its IP address.
[root@localhost-live ~]# ssh 10.0.2.11
The authenticity of host '10.0.2.11 (10.0.2.11)' can't be established.
ECDSA key fingerprint is SHA256:NDM/B5L3eRJaalex6IOUdnJsE1sm0SiQNWgaI8BwcVs.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.2.11' (ECDSA) to the list of known hosts.
Password: <Enter Password>
Last login: Tue Aug 27 10:41:45 2019 from 10.0.2.31
[root@studentvm2 ~]#

So before changing the IPTables rules, we can open an SSH connection to StudentVM2 from StudentVM3.

Now edit the /etc/sysconfig/iptables file on StudentVM2 and change the filter table rule for SSH from this:
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
to this:
-A INPUT -s 192.168.56.0/24 -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT

The CIDR notation for our internal network IP address range allows SSH connections from our internal network and no other. SSH connection attempts from the outside network, 10.0.2.0/24, are rejected by the last rule in the filter table because the packets do not match any other rules.

Save the file and activate the revised rule set.
[root@studentvm2 ~]# cd /etc/sysconfig/ ; iptables-restore iptables
Begin testing this by ensuring that an SSH connection can still be initiated from StudentVM1.
[root@studentvm1 ~]# ssh studentvm2
Last login: Mon Aug 26 16:41:27 2019
[root@studentvm2 ~]# exit
logout
Connection to studentvm2 closed.
[root@studentvm1 ~]#
As root on StudentVM3, ping StudentVM2 just to verify the connection.
[root@localhost-live ~]# ping -c2 10.0.2.11
PING 10.0.2.11 (10.0.2.11) 56(84) bytes of data.
64 bytes from 10.0.2.11: icmp_seq=1 ttl=64 time=0.450 ms
64 bytes from 10.0.2.11: icmp_seq=2 ttl=64 time=0.497 ms
--- 10.0.2.11 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 56ms
rtt min/avg/max/mdev = 0.450/0.473/0.497/0.032 ms
Now try to initiate an SSH connection from StudentVM3.
[root@localhost-live ~]# ssh 10.0.2.11
ssh: connect to host 10.0.2.11 port 22: No route to host
[root@localhost-live ~]#

This attempt now generates a “No route to host” error message.

Allow StudentVM3 to continue to run because you will need it for the exercises at the end of the chapter. If you turn it off, you will need to reconfigure the network by adding the ifcfg-enp0s3 file again.

Advanced backups

We have already discussed backups as a necessary part of a good security policy, but we did not go into any detail. Let’s do that now and look at the rsync utility as a tool for backups.

rsync

None of the commercial or more complex open source backup solutions fully met my needs, and restoring from a tarball can be time-consuming and sometimes a bit frustrating. I also really wanted to use another tool I had heard about, rsync.1

I had been experimenting with the rsync command, which has some very interesting features that I have been able to use to good advantage. My primary objectives were to create backups from which users could locate and restore files quickly, without having to extract data from a backup tarball, and to reduce the amount of time taken to create the backups.

This section is intended only to describe my own use of rsync in a backup scenario. It is not a look at all of the capabilities of rsync or the many other interesting ways in which it can be used.

The rsync command was written by Andrew Tridgell and Paul Mackerras and first released in 1996. The primary intention for rsync is to remotely synchronize the files on one computer with those on another. Did you notice what they did to create the name there? rsync is open source software and is provided with all of the distros with which I am familiar.

The rsync command can be used to synchronize two directories or directory trees whether they are on the same computer or on different computers, but it can do so much more than that. rsync creates or updates the target directory to be identical to the source directory. The target directory is freely accessible by all the usual Linux tools because it is not stored in a tarball or zip file or any other archival file type; it is just a regular directory with regular Linux files that can be navigated by regular users using basic Linux tools. This meets one of my primary objectives.

One of the most important features of rsync is the method it uses to synchronize preexisting files that have changed in the source directory. Rather than copying the entire file from the source, it uses checksums to compare blocks of the source and target files. If all of the blocks in the two files are the same, no data is transferred. If the data differs, only the blocks that have changed on the source are transferred to the target. This saves an immense amount of time and network bandwidth for remote sync. For example, when I first used my rsync Bash script to back up all of my hosts to a large external USB hard drive, it took about 3 hours. That is because all of the data had to be transferred because none of it had been previously backed up. Subsequent backups took between 3 and 8 minutes of real time, depending upon how many files had been changed or created since the previous backup. I used the time command to determine this, so it is empirical data. Last night, for example, it took 3 minutes and 12 seconds to complete a backup of approximately 750GB of data from 6 remote systems and the local workstation. Of course, only a few hundred megabytes of data were actually altered during the day and needed to be backed up.

The simple rsync command shown in Figure 16-3 can be used to synchronize the contents of two directories and any of their subdirectories. That is, the contents of the target directory are synchronized with the contents of the source directory so that at the end of the sync, the target directory is identical to the source directory.
Figure 16-3

The minimum command necessary to synchronize two directories using rsync
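The figure is an image; the command it shows is essentially of this form, where the directory names are placeholders:
rsync -aH sourcedir targetdir
The -a (archive) option recurses into subdirectories and preserves permissions, ownership, and timestamps; -H preserves hard links.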

Let’s see how this works.

Experiment 16-4

Start this experiment as the student user on StudentVM2. Note the current contents of the student user’s home directory. On StudentVM1, also note the contents of the student user’s home directory. They should be quite different.

Now we want to sync the student user’s home directory on StudentVM1 to that on StudentVM2. As the student user on StudentVM1, use the following command to do that.
[student@studentvm1 ~]$ time rsync -aH . studentvm2:/home/student
Password: <Enter password>
real    0m7.517s
user    0m0.101s
sys     0m0.172s
[student@studentvm1 ~]$

That was easy! Now check the home directory for the student user on StudentVM2. All of the student user’s files that were present on StudentVM1 are now on StudentVM2.

Now let’s change a file and see what happens when we run the same command. Pick an existing file and append some more data to it. I did this:
[student@studentvm1 ~]$ dmesg >> file7.txt
Verify the file sizes on both VMs. They should be different. Now run the previous command again.
[student@studentvm1 ~]$ time rsync -aH . studentvm2:/home/student
Password: <Enter password>
real    0m3.136s
user    0m0.021s
sys     0m0.052s
[student@studentvm1 ~]$

Verify that file7.txt (or whichever file you chose to work with) is now the same, larger size on both hosts. Compare the times for both instances of the command. The real time is not important because that includes the time we took to type in the password. The important times are the amounts of user and system time used by the commands, which are significantly less during the second invocation. Although some of that savings may be due to caching, on a system where the command is run once a day to synchronize huge amounts of data, the time savings is very noticeable.

Now let’s assume that yesterday we used rsync to synchronize two directories. Today we want to re-synchronize them, but we have deleted some files from the source directory. The normal way in which rsync would work using the syntax we used in Experiment 16-4 is to simply copy all the new or changed files to the target location and leave the deleted files in place on the target. This may be the behavior you want, but if you would prefer that files deleted from the source also be deleted from the target, that is, the backup, you can add the --delete option to make that happen.

Another interesting option, and my personal favorite because it increases the power and flexibility of rsync immensely, is the --link-dest option. The --link-dest option uses hard links2,3 to create a series of daily backups that take up very little additional space for each day and also take very little time to create.

Specify the previous day’s target directory with this option and a new directory for today. The rsync command then creates today’s new directory and a hard link for each file in yesterday’s directory is created in today’s directory. So we now have a bunch of hard links to yesterday’s files in today’s directory. No new files have been created or duplicated. After creating the target directory for today with this set of hard links to yesterday’s target directory, rsync performs its sync as usual, but when a change is detected in a file, the target hard link is replaced by a copy of the file from yesterday and the changes to the file are then copied from the source to the target.

So now our command looks like that in Figure 16-4. This version of our rsync command first creates hard links in today’s backup directory for each file in yesterday’s backup directory. The files in the source directory – the one being backed up – are then compared to the hard links that were just created. If there are no changes to the files in the source directory, no further action is taken.
Figure 16-4

This command uses hard links to link unchanged files from yesterday’s directory to today’s. This saves a lot of time
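Again, the figure is an image; with placeholder directory names, the command it shows is essentially of this form:
rsync -aH --delete --link-dest=yesterdaydir sourcedir todaydir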

If there are changes to files in the source directory, rsync deletes the hard link in today’s backup directory and replaces it with an exact copy of the file from yesterday’s backup. It then copies the changes made to the file from the source directory to today’s target backup directory. It also deletes files on the target drive or directory that have been deleted from the source directory.

There are also times when it is desirable to exclude certain directories or files from being synchronized. We usually do not care about backing up cache directories and, because of the large amount of data they can contain, the amount of time required to back them up can be huge compared to other data directories. For this there is the --exclude option. Use this option and the pattern for the files or directories you want to exclude. You might want to exclude browser cache files so your new command will look like Figure 16-5.
Figure 16-5

This syntax can be used to exclude specified directories or files based on a pattern

Note that each file pattern you want to exclude must have a separate exclude option.
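Figure 16-5 is an image; with placeholder names, the command looks something like this, with one --exclude per pattern:
rsync -aH --delete --exclude "*cache*" --exclude "*Cache*" --link-dest=yesterdaydir sourcedir todaydir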

The rsync command has a very large number of options that you can use to customize the synchronization process. For the most part, the relatively simple commands that I have described here are perfect for making backups for my personal needs. Be sure to read the extensive man page for rsync to learn about more of its capabilities as well as details of the options discussed here.

Performing backups

I automated my backups because – “automate everything.” I wrote a Bash script, rsbu, that handles the details of creating a series of daily backups using rsync. This includes ensuring that the backup medium is mounted, generating the names for yesterday and today’s backup directories, creating appropriate directory structures on the backup medium if they are not already there, performing the actual backups, and unmounting the medium.

The end result of the method in which I employ the rsync command in my script is that I end up with a date-sequence of backups for each host in my network. The backup drives end up with a structure similar to the one shown in Figure 16-6. This makes it easy to locate specific files that might need to be restored.

So, starting with an empty disk on January 1, the rsbu script makes a complete backup for each host of all the files and directories that I have specified in the configuration file. This first backup can take several hours if you have a lot of data like I do.

On January 2, the rsync command uses the --link-dest= option to create a complete new directory structure identical to that of January 1; then it looks for files that have changed in the source directories. If any have changed, a copy of the original file from January 1 is made in the January 2 directory and then the parts of the file that have been altered are updated from the original.

After the first backup onto an empty drive, the backups take very little time because the hard links are created first, and then only the files that have been changed since the previous backup need any further work.
Figure 16-6

The directory structure for my backup data disks
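The figure is an image; the structure it depicts is essentially like the following sketch, in which the backup drive mount point is an illustrative assumption:
/media/Backup-Drive/Backups/
    host1/
        2018-01-01/
        2018-01-02/
        2018-01-03/
    host2/
        2018-01-01/home/student/file1.txt    <- original data
        2018-01-02/home/student/file1.txt    <- hard link to the January 1 data
        2018-01-03/home/student/file1.txt    <- new copy containing the changed data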

Figure 16-6 also shows a bit more detail for the host2 series of backups for one file, /home/student/file1.txt, on the dates January 1, 2, and 3. On January 2, the file has not changed since January 1. In this case, the rsync backup does not copy the original data from January 1. It simply creates a directory entry with a hard link in the January 2 directory to the January 1 directory which is a very fast procedure. We now have two directory entries pointing to the same data on the hard drive. On January 3, the file has been changed. In this case, the data for ../2018-01-02/home/student/file1.txt is copied to the new directory, ../2018-01-03/home/student/file1.txt and any data blocks that have changed are then copied to the backup file for January 3. These strategies, which are implemented using features of the rsync program, allow backing up huge amounts of data while saving disk space and much of the time that would otherwise be required to copy data files that are identical.

One of my procedures is to run the backup script twice each day from a single cron job. The first iteration performs a backup to an internal 4TB hard drive. This is the backup that is always available and always at the most recent version of all my data. If something happens and I need to recover one file or all of them, the most I could possibly lose is a few hours’ worth of work.

The second backup is made to one of a rotating series of 4TB external USB hard drives. I take the most recent drive to my safe deposit box at the bank at least once per week. If my home office is destroyed and the backups I maintain there are destroyed along with it, I just have to get the external hard drive from the bank, and I have lost at most a single week of data. That type of loss is easily recovered.

The drives I am using for backups, not just the internal hard drive but also the external USB hard drives that I rotate weekly, never fill up. This is because the rsbu script I wrote checks the ages in days of the backups on each drive before a new backup is made. If there are any backups on the drive that are older than the specified number of days, they are deleted. The script uses the find command to locate these backups. The number of days is specified in the rsbu.conf configuration file.
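The rsbu script itself is not reproduced here, but the locating step is along these lines; the mount point and retention period are illustrative assumptions:
# Locate dated backup directories more than 30 days old
find /media/Backup-Drive/Backups -mindepth 2 -maxdepth 2 -type d -mtime +30
The script then deletes the directories that this search locates.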

Of course, after a complete disaster, I would first have to find a new place to live with office space for my wife and me, purchase parts and build new computers, restore from the remaining backup, and then recreate any lost data.

My script, rsbu, is available along with its configuration file, rsbu.conf, and a READ.ME file as a tarball, rsbu.tar, from https://github.com/Apress/using-and-administering-linux-volume-3/raw/master/rsbu.tar.gz.

You can use that script as the basis for your own backup procedures. Be sure to make any modifications you need and test thoroughly.

Recovery testing

No backup regimen would be complete without testing. You should regularly test recovery of random files or entire directory structures to ensure not only that the backups are working, but that the data in the backups can be recovered for use after a disaster. I have seen too many instances where a backup could not be restored for one reason or another and valuable data was lost because the lack of testing prevented discovery of the problem.

Just select a file or directory to test and restore it to a test location such as /tmp so that you won’t overwrite a file that may have been updated since the backup was performed. Verify that the files’ contents are as you expect them to be. Restoring files from a backup made using the preceding rsync commands is simply a matter of finding the file you want to restore from the backup and then copying it to the location to which you want to restore it.

I have had a few circumstances where I have had to restore individual files and, occasionally, a complete directory structure. I have had to restore the entire contents of a hard drive on a couple occasions. Most of the time this has been self-inflicted when I accidentally deleted a file or directory. At least a few times it has been due to a crashed hard drive. So those backups do come in handy.

Restrict SSH remote root login

Sometimes it is necessary to allow SSH connections from external sources, and it may not be possible to specify which IP addresses might be the source. In this situation we can prevent root logins via SSH entirely. It would then be necessary to log in to the firewall host as a non-root user and su to root, or to SSH to an internal host as that non-root user and then su to root.

Experiment 16-5

As the root user on StudentVM1, log in to StudentVM2 via SSH. You should be able to do this. After confirming that you have logged in to StudentVM2, log out again.

On StudentVM2, edit the /etc/ssh/sshd_config file and change the following line:
#PermitRootLogin yes
To:
PermitRootLogin no

Then restart the sshd service to enable the change.
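The restart is a standard systemd operation:
[root@studentvm2 ~]# systemctl restart sshd
As the root user on StudentVM1, try again to log in to StudentVM2 as root.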

You should receive a “Permission denied” error. Also be sure to verify that you can log in to StudentVM2 as the student user.

Change this back to allow remote root login via SSH, and test to ensure that you can log in to StudentVM2 as root.

Malware

Protecting our systems against malware like viruses, root kits, and Trojan horses is a big part of security. We have several tools we can use to do this, four of which we will cover here. Viruses and Trojan horses are usually delivery agents and can be used to deliver malware such as root kits.

Root kits

A root kit is malware that replaces or modifies legitimate GNU utilities both to perform its own activities and to hide the existence of its own files. For example, a root kit can replace tools like ls so that it won’t display any of the files installed by the root kit. Other tools can scan log files and remove any entries that might betray the existence of files belonging to the root kit.

Most root kits are intended to allow a remote attacker to take over a computer and use it for their own purposes. With this type of malware, the objective of the attacker is to remain undetected. They are not usually after ransom or to damage your files.

There are two good programs that can be used to scan your system for rootkits. The chkrootkit4 and Root Kit Hunter5 tools are both used to locate files that may have been infected, replaced, or compromised by root kits.

Root Kit Hunter also checks for network breaches such as backdoor ports that have been opened, as well as for normal services that are listening on various ports such as HTTP and IMAPS. If those services are listening, a warning is printed.

Experiment 16-6

Perform this experiment as root on StudentVM2. Install the chkrootkit and rkhunter RPMs.
[root@studentvm2 ~]# dnf -y install chkrootkit rkhunter
Run the chkrootkit command. You should get a long list of tests as they are run.
[root@studentvm2 ~]# chkrootkit
ROOTDIR is `/'
Checking `amd'... not found
Checking `basename'... not infected
Checking `biff'... not found
Checking `chfn'... not infected
Checking `chsh'... not infected
Checking `cron'... not infected
Checking `crontab'... not infected
Checking `date'... not infected
Checking `du'... not infected
Checking `dirname'... not infected
Checking `echo'... not infected
Checking `egrep'... not infected
<snip>
Searching for Hidden Cobra ... nothing found
Searching for Rocke Miner ... nothing found
Searching for suspect PHP files... nothing found
Searching for anomalies in shell history files... nothing found
Checking `asp'... not infected
Checking `bindshell'... not infected
Checking `lkm'... chkproc: nothing detected
chkdirs: nothing detected
Checking `rexedcs'... not found
Checking `sniffer'... enp0s3: PF_PACKET(/usr/sbin/NetworkManager)
enp0s8: PF_PACKET(/usr/sbin/dhcpd, /usr/sbin/NetworkManager)
Checking `w55808'... not infected
Checking `wted'... 2 deletion(s) between Sat Dec 22 12:59:04 2018 and Sat Dec 22 13:16:15 2018
2 deletion(s) between Sat Dec 22 13:22:00 2018 and Sat Dec 22 14:28:58 2018
2 deletion(s) between Sat Dec 22 14:29:23 2018 and Sat Dec 22 20:45:25 2018
Checking `scalper'... not infected
Checking `slapper'... not infected
Checking `z2'... chklastlog: nothing deleted
Checking `chkutmp'...  The tty of the following user process(es) were not found
 in /var/run/utmp !
! RUID          PID TTY    CMD
! student      2310 pts/0  bash
! student      2339 pts/0  su -
! root         2344 pts/0  -bash
! root         2367 pts/0  screen
! -oPubkeyAcceptedKeyTypes=rsa-sha256,[email protected],ecdsa-sha2-nistp256,[email protected],ecdsa-sha2-nistp384,[email protected],rsa-sha2-512,[email protected],ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@op   28783 [email protected],-oPubkeyAcceptedKeyTypes=rsa-sha256,[email protected],ecdsa-sha2-nistp256,[email protected],ecdsa-sha2-nistp384,[email protected],rsa-sha2-512,[email protected],ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@op 256,[email protected],ecdsa-sha2-nistp256,[email protected],ecdsa-sha2-nistp384,[email protected],rsa-sha2-512,[email protected],ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@opchkutmp: nothing deleted
Checking `OSX_RSPLUG'... not tested

You can see all of the checks performed by this tool. Any anomalies would be noted. There is no man page for chkrootkit but there is some documentation in /usr/share/doc/chkrootkit. Be sure to read that for additional information.

I think that Root Kit Hunter is the better and more complete of the two programs. It is more flexible because it can update its signature files without upgrading the entire program. Like chkrootkit, it also checks for changes to certain system executable files that are frequently targeted by crackers.

Before running Root Kit Hunter the first time, update the signature files.
[root@studentvm2 ~]# rkhunter --update
Checking rkhunter data files...
  Checking file mirrors.dat                                  [ Updated ]
  Checking file programs_bad.dat                             [ Updated ]
  Checking file backdoorports.dat                            [ No update ]
  Checking file suspscan.dat                                 [ Updated ]
  Checking file i18n/cn                                      [ No update ]
  Checking file i18n/de                                      [ Updated ]
  Checking file i18n/en                                      [ No update ]
  Checking file i18n/tr                                      [ Updated ]
  Checking file i18n/tr.utf8                                 [ Updated ]
  Checking file i18n/zh                                      [ Updated ]
  Checking file i18n/zh.utf8                                 [ Updated ]
  Checking file i18n/ja                                      [ Updated ]
[root@studentvm2 ~]#
Now create the initial database of critical files.
[root@studentvm2 ~]# rkhunter --propupd
[ Rootkit Hunter version 1.4.6 ]
File created: searched for 177 files, found 138
[root@studentvm2 ~]#

Note

The rkhunter --propupd command should be run after updates are installed and after upgrades to new releases such as from Fedora 29 to Fedora 30.

Now run the command to check for rootkits. The -c option tells rkhunter to check for rootkits, and the --sk option skips the normal pause between the different tests.
[root@studentvm2 ~]# rkhunter -c --sk
[ Rootkit Hunter version 1.4.6 ]
Checking system commands...
  Performing 'strings' command checks
    Checking 'strings' command                               [ OK ]
  Performing 'shared libraries' checks
    Checking for preloading variables                        [ None found ]
    Checking for preloaded libraries                         [ None found ]
    Checking LD_LIBRARY_PATH variable                        [ Not found ]
  Performing file properties checks
    Checking for prerequisites                               [ OK ]
    /usr/sbin/adduser                                        [ OK ]
    /usr/sbin/chkconfig                                      [ OK ]
<snip>
    Knark Rootkit                                            [ Not found ]
    ld-linuxv.so Rootkit                                     [ Not found ]
    Li0n Worm                                                [ Not found ]
    Lockit / LJK2 Rootkit                                    [ Not found ]
    Mokes backdoor                                           [ Not found ]
    Mood-NT Rootkit                                          [ Not found ]
    MRK Rootkit                                              [ Not found ]
    Ni0 Rootkit                                              [ Not found ]
    Ohhara Rootkit                                           [ Not found ]
    Optic Kit (Tux) Worm                                     [ Not found ]
    Oz Rootkit                                               [ Not found ]
    Phalanx Rootkit                                          [ Not found ]
    Phalanx2 Rootkit                                         [ Not found ]
    Phalanx2 Rootkit (extended tests)                        [ Not found ]
    Portacelo Rootkit                                        [ Not found ]
    R3dstorm Toolkit                                         [ Not found ]
<snip>

This program also displays a long list of tests and their results as it runs, along with a nice summary at the end. You can find a complete log with even more detailed information at /var/log/rkhunter/rkhunter.log.

Note that the installation RPM for Root Kit Hunter sets up a daily cron job with a script in /etc/cron.daily. The script performs this check every morning at about 3 a.m. If a problem is detected, an email message is sent to root. If no problems are detected, no email or any other indication that the rkhunter program was even run is provided.

ClamAV

ClamAV is an open source anti-virus program that can be used to scan a computer for viruses. There are other such programs, both open source and proprietary.

ClamAV is not installed on your host by default. It installs with an empty database file and will fail when run if a valid database is not present. We will also install the clamav-update package, which pulls in all dependencies and allows easy updates of the ClamAV database using the freshclam command.

Experiment 16-7

Perform this experiment as the root user on StudentVM2. First, install ClamAV and clamav-update.
[root@studentvm2 ~]# dnf -y install clamav clamav-update
Now edit the /etc/freshclam.conf file and delete or comment out the “Example” line. Update the ClamAV database.
[root@studentvm2 ~]# freshclam
ClamAV update process started at Thu Aug 29 12:17:31 2019
WARNING: Your ClamAV installation is OUTDATED!
WARNING: Local version: 0.101.3 Recommended version: 0.101.4
DON'T PANIC! Read https://www.clamav.net/documents/upgrading-clamav
Downloading main.cvd [100%]
main.cvd updated (version: 58, sigs: 4566249, f-level: 60, builder: sigmgr)
Downloading daily.cvd [100%]
daily.cvd updated (version: 25556, sigs: 1740591, f-level: 63, builder: raynman)
Downloading bytecode.cvd [100%]
bytecode.cvd updated (version: 330, sigs: 94, f-level: 63, builder: neo)
Database updated (6306934 signatures) from database.clamav.net (IP: 104.16.219.84)
[root@studentvm2 ~]#

You will notice that there are a couple of warnings in that output data. ClamAV needs to be updated, but the latest version has not yet been uploaded to the Fedora repository.

Now you can run the clamscan command on the directories you specify. The -r option scans directories recursively. The output data stream is a list of files; those that are not infected have OK at the end of the line. Pipe the output through tee to record it in a file.
[root@studentvm2 ~]# clamscan -r /root /var/spool /home | tee /root/clamscan.txt
/root/.viminfo: OK
/root/.local/share/mc/history: OK
/root/.razor/server.c303.cloudmark.com.conf: OK
/root/.razor/server.c302.cloudmark.com.conf: OK
/root/.razor/server.c301.cloudmark.com.conf: OK
/root/.razor/servers.nomination.lst: OK
/root/.razor/servers.discovery.lst: OK
/root/.razor/servers.catalogue.lst: OK
/root/.config/htop/htoprc: OK
/root/.config/mc/panels.ini: Empty file
<snip>
/home/student/.thunderbird/w453leb8.default/AlternateServices.txt: Empty file
/home/student/.thunderbird/w453leb8.default/SecurityPreloadState.txt: Empty file
/home/student2/.bash_logout: OK
/home/student2/.bashrc: OK
/home/student2/.bash_profile: OK
/home/student2/.bash_history: OK
/home/email1/.bash_logout: OK
/home/email1/.bashrc: OK
/home/email1/.esd_auth: OK
/home/email1/.bash_profile: OK
/home/email1/.config/pulse/b62e5e58cdf74e0e967b39bc94328d81-default-source: OK
/home/email1/.config/pulse/b62e5e58cdf74e0e967b39bc94328d81-device-volumes.tdb: OK
/home/email1/.config/pulse/b62e5e58cdf74e0e967b39bc94328d81-card-database.tdb: OK
/home/email1/.config/pulse/b62e5e58cdf74e0e967b39bc94328d81-stream-volumes.tdb: OK
/home/email1/.config/pulse/cookie: OK
/home/email1/.config/pulse/b62e5e58cdf74e0e967b39bc94328d81-default-sink: OK
/home/smauth/.bash_logout: OK
/home/smauth/.bashrc: OK
/home/smauth/.bash_profile: OK
----------- SCAN SUMMARY -----------
Known viruses: 6296694
Engine version: 0.101.3
Scanned directories: 431
Scanned files: 3463
Infected files: 0
Data scanned: 574.65 MB
Data read: 489.07 MB (ratio 1.17:1)
Time: 479.099 sec (7 m 59 s)
[root@studentvm2 ~]#

This command emits a very long data stream, so I reproduced only a bit of it here. Using the tee command records the data stream in the specified file while also sending it on to STDOUT. This makes it easy to use different tools and searches on the file.

View the content of the clamscan.txt file. See if you can find files that do not have “OK” appended to the end of the line.

The clamscan utility should be run on a regular basis to ensure that no viruses have penetrated your defenses.
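One way to do that is with a small script in /etc/cron.daily. This is an illustrative sketch, not something installed by the ClamAV packages; the -r, -i (print only infected files), and --no-summary options are standard clamscan options:
#!/bin/bash
# /etc/cron.daily/clamscan-home -- example daily scan
# Any output, i.e., infected files, will be mailed to root by cron.
/usr/bin/clamscan -r -i --no-summary /home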

Tripwire

Tripwire is intrusion detection software. It can report on system files that have been altered in some way, possibly by malware installed as part of a root kit or Trojan horse. Like many tools of this type, Tripwire cannot prevent an intrusion; it can only report on one after it has occurred and left behind some evidence that can be detected and identified.

Tripwire6 is also a commercial company that sells a version of Tripwire and other cybersecurity products. We will install an open source version of Tripwire and configure it for use on our server.

Experiment 16-8

Perform this experiment as the root user on StudentVM2. First, install Tripwire.
[root@studentvm2 ~]# dnf -y install tripwire

The Tripwire RPM for Fedora does not create a complete and working configuration. The documentation in the /usr/share/doc/tripwire/README.Fedora file contains instructions for performing that configuration. I strongly suggest you read that file, but we will proceed here with the bare minimum required to get Tripwire working.

Next we need to create the Tripwire key files that will be used to encrypt and sign the database files.
[root@studentvm2 tripwire]# tripwire-setup-keyfiles
----------------------------------------------
The Tripwire site and local passphrases are used to sign a  variety  of
files, such as the configuration, policy, and database files.
Passphrases should be at least 8 characters in length and contain  both
letters and numbers.
See the Tripwire manual for more information.
----------------------------------------------
Creating key files...
(When selecting a passphrase, keep in mind that good passphrases typically
have upper and lower case letters, digits and punctuation marks, and are
at least 8 characters in length.)
Enter the site keyfile passphrase:<Enter passphrase>
Verify the site keyfile passphrase:<Enter passphrase>
Generating key (this may take several minutes)...Key generation complete.
(When selecting a passphrase, keep in mind that good passphrases typically
have upper and lower case letters, digits and punctuation marks, and are
at least 8 characters in length.)
Enter the local keyfile passphrase:<Enter passphrase>
Verify the local keyfile passphrase:<Enter passphrase>
Generating key (this may take several minutes)...Key generation complete.
----------------------------------------------
Signing configuration file...
Please enter your site passphrase: <Enter passphrase>
Wrote configuration file: /etc/tripwire/tw.cfg
A clear-text version of the Tripwire configuration file:
/etc/tripwire/twcfg.txt
has been preserved for your inspection.  It  is  recommended  that  you
move this file to a secure location and/or encrypt it in place (using a
tool such as GPG, for example) after you have examined it.
----------------------------------------------
Signing policy file...
Please enter your site passphrase: <Enter passphrase>
Wrote policy file: /etc/tripwire/tw.pol
A clear-text version of the Tripwire policy file:
/etc/tripwire/twpol.txt
has been preserved for  your  inspection.  This  implements  a  minimal
policy, intended only to test  essential  Tripwire  functionality.  You
should edit the policy file to  describe  your  system,  and  then  use
twadmin to generate a new signed copy of the Tripwire policy.
Once you have a satisfactory Tripwire policy file, you should move  the
clear-text version to a secure location  and/or  encrypt  it  in  place
(using a tool such as GPG, for example).
Now run "tripwire --init" to enter Database Initialization  Mode.  This
reads the policy file, generates a database based on its contents,  and
then cryptographically signs the resulting  database.  Options  can  be
entered on the command line to specify which policy, configuration, and
key files are used  to  create  the  database.  The  filename  for  the
database can be specified as well. If no  options  are  specified,  the
default values from the current configuration file are used.
[root@studentvm2 tripwire]#
Now we need to initialize the Tripwire database. This command scans the files and creates a signature for each file. It encrypts and signs the database to ensure that it cannot be altered without alerting us to that fact. This command can take several minutes to complete. We can use the --init option or its synonym, -m i.
[root@studentvm2 ~]# tripwire --init

You will see some warnings about files that the default policy expects to see. You can ignore those for this experiment, but for a production environment, you would want to create a policy file that reflects the files you actually have and the actions to be taken if one changes.

We can now run an integrity check of our system.
[root@studentvm2 tripwire]# tripwire --check | tee /root/tripwire.txt

Once again, Tripwire generates the same warnings. Explore the Tripwire report file we created, which contains a nice summary near the beginning.

Note that the file created by Tripwire, /var/lib/tripwire/report/studentvm2.example.com-20190829-144939.twr, is an encrypted file. It can only be viewed using the twprint utility.
[root@studentvm2 ~]# twprint --print-report --twrfile /var/lib/tripwire/report/studentvm2.example.com-20190829-144939.twr | less

SELinux

We disabled SELinux early in this course so we would not need to deal with side effects in other experiments caused by this important security tool. SELinux was developed by the NSA to provide a highly secure computing environment. True to the GPL, they have made this code available to the rest of the Linux community, and it is included as part of nearly every mainstream distribution.

I have no idea how much we should trust the NSA itself, but because the code is open source and can be and has been examined by many programmers around the world, the likelihood of it containing malicious code is quite low. With that said, SELinux is an excellent security tool.

SELinux provides Mandatory Access Control (MAC), which ensures that users must be provided explicit access rights to each object in the host system. The objective of SELinux7 is to prevent a security breach – an intrusion – and to limit the damage that an intruder may wreak after managing to access a protected host. It accomplishes this by labeling every filesystem object and process. It uses policy rules to define the possible interactions between labeled objects, and the kernel enforces these rules.

Red Hat has a well-done document that covers SELinux.8 Although written for RHEL 7, it will also apply to all current versions of RHEL, CentOS, Fedora, and other Red Hat-derived distributions.

Fedora provides three sets of SELinux policy files, although you can create your own, and other distros may have other pre-configured policies. By default, only the targeted policy files are installed by Fedora. Figure 16-7 shows the pre-configured policies along with a short description of each.
Figure 16-7

These are the three default SELinux policies provided by Fedora
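The figure is an image; the gist matches the policy descriptions in the comments of the Fedora /etc/selinux/config file, approximately:
# SELINUXTYPE= can take one of these three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.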

SELinux also has three modes of operation – enforcing, permissive, and disabled – as described in Figure 16-8.
Figure 16-8

SELinux operational modes
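Again, the figure is an image; the modes are summarized in the comments of /etc/selinux/config, approximately:
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.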

In this section, we will explore some basic SELinux tasks.

Experiment 16-9

Perform this experiment as the root user on StudentVM2.

Go to the /etc/selinux directory and view the directories there. You should see that, by default, only the Targeted policy files are installed by Fedora. Install the MLS and Minimal SELinux policy files and look at the contents of this directory again.
[root@studentvm2 selinux]# dnf install -y selinux-policy-minimum selinux-policy-mls

Each policy is installed in a subdirectory of /etc/selinux. Look at the contents of the /etc/selinux directory again and notice the new minimum and mls subdirectories for their respective policy files.

First, let’s see if there are any man pages to which we can refer if we have questions.
[root@studentvm2 ~]# apropos selinux
No pages are listed so we should also install the SELinux man pages and documentation.
[root@studentvm2 ~]# dnf install selinux-policy-doc
Now look for SELinux man pages. Still nothing. We forgot to rebuild the man page database.
[root@studentvm2 ~]# mandb

You should now find over 900 relevant man pages.

The default mode for SELinux is “Targeted – Permissive.” An early experiment had you disable SELinux. Edit the /etc/selinux/config file and set the following options.
SELINUX=permissive
SELINUXTYPE=targeted
Reboot the system. The first reboot will take several minutes while SELinux relabels the targeted files and directories. This can be seen in Figure 16-9. Labeling is the process of assigning a security context to a process or a file. The system will reboot again at the end of the relabel process.
Figure 16-9

SELinux relabels system objects during a reboot

Log in to the desktop as student. Open a terminal session as student and another as root. Run the command id -Z in both terminals. The results should be the same, with both IDs being completely unconfined.
[root@studentvm2 ~]# id -Z
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
As root, use the getenforce command to verify the current state of enforcement.
[root@studentvm2 ~]# getenforce
Permissive
Run the sestatus command to view an overall status for SELinux. The following sample output shows typical results.
[root@studentvm2 etc]# sestatus -v
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          permissive
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      31
Process contexts:
Current context:                unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
Init context:                   system_u:system_r:init_t:s0
/usr/sbin/sshd                  system_u:system_r:sshd_t:s0-s0:c0.c1023
File contexts:
Controlling terminal:           unconfined_u:object_r:user_devpts_t:s0
/etc/passwd                     system_u:object_r:passwd_file_t:s0
/etc/shadow                     system_u:object_r:shadow_t:s0
/bin/bash                       system_u:object_r:shell_exec_t:s0
/bin/login                      system_u:object_r:login_exec_t:s0
/bin/sh                         system_u:object_r:bin_t:s0 -> system_u:object_r:shell_exec_t:s0
/sbin/agetty                    system_u:object_r:getty_exec_t:s0
/sbin/init                      system_u:object_r:bin_t:s0 -> system_u:object_r:init_exec_t:s0
/usr/sbin/sshd                  system_u:object_r:sshd_exec_t:s0
[root@studentvm2 ~]#
Run the following command to set the mode to enforcing. Run the sestatus -v command to verify that the SELinux mode is set to “enforcing”. Run the id command to determine the user's context. The user should still be unconfined.
[root@studentvm2 ~]# setenforce enforcing
Start the HTTPD service if it is not already running, and run the command
[root@studentvm2 ~]# ps -efZ

to display the context of the running processes. Note that many processes are unconfined but that some processes, such as various kernel and HTTPD ones, are running in the system_u:system_r context. Some services run in the kernel_t domain while the HTTPD service tasks run in a special httpd_t domain.

Users who do not have authority for those contexts are unable to manipulate those processes, even when they su to root. However, the “targeted enforcing” mode allows all users to have all privileges, so it would be necessary to restrict some or all users in the seusers file.

To see this, as root, stop the HTTPD service and verify that it has stopped.

Log out of the student user’s session. As root, add the following line to the /etc/selinux/targeted/seusers file. Note that each policy has its own seusers file.
student:user_u:s0-s0:c0.c1023

It is not necessary to reboot or restart SELinux. Now log in as the user student and su to root. What happens? What is the user's current context?

This is a rather blunt approach, but SELinux does allow you to get much more granular. Creating and compiling those more granular policies is beyond the scope of this course.

Now set the policy mode to “permissive” using the setenforce command and try again to su to root. What happens?

Do a bit of cleanup: edit the /etc/selinux/config file again and set SELINUX=disabled. Reboot your StudentVM2 host.

Additional SELinux considerations

Making changes to the filesystem while SELinux is disabled may result in improperly labeled objects and possible vulnerabilities. The best way to ensure that everything is properly labeled is to add an empty file named /.autorelabel in the root directory and reboot the system.
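For example:
[root@studentvm2 ~]# touch /.autorelabel ; reboot
The relabel runs during the subsequent startup, and the file is removed automatically when it completes.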

SELinux is intolerant of extra whitespace. Be sure to eliminate extra whitespace in SELinux configuration files in order to ensure that there are no errors.

Social engineering

There is not room or time to delve into all the ways that crackers can use social engineering to convince users to click a URL that will take them to a web site that will infect their computer with some form of malware. The human factor is well beyond the scope of this book, but there are some excellent web sites that can be used as references to help users understand the online threats and how to protect themselves. Figure 16-10 lists some of these web sites.
Figure 16-10

A few of the many web sites that provide Internet-safety materials

A search on “Internet safety” will result in a huge number (well over a billion) of hits, but the best results will be in the first few pages. Many are aimed at youth, teens, and parents, but they have good information for everyone.

Chapter summary

This chapter has explored some additional security precautions that we can take to further harden our Fedora systems against various types of cracking attacks. It also explored some advanced backup techniques because, failing all else, good, usable backups can allow us to recover from almost any disaster, including crackers.

None of the tools discussed in this chapter provide a single solution for Linux system security – there is no such thing. Taken together in combinations that make sense for your environment, as well as along with all of the other security we have previously implemented in this course, these tools can significantly improve the security of any Linux host. Although our virtual network and the virtual machines contained in it are now safer, there is always more that can be done. The question we need to ask is whether the cost of the effort required to lock down our systems and networks even more is worth the benefits accrued by doing so.

Remember, like most of the subjects we have covered in this course, we have just touched the surface. You should now be aware of a few of the dangers and some of the tools we have to counter those threats. This is only the beginning, and you should explore these tools, and others not covered here, in more depth in order to ensure that the Linux hosts for which you have responsibility are secured to the greatest extent possible.

Exercises

Perform the following exercises to complete this chapter.
  1. In Experiment 16-3, there is no valid DNS server that can be used for our SSH command to StudentVM2. Why does the name server on StudentVM2 not work for this?

  2. On StudentVM2, identify the network ports that we have opened with IPTables rules and which should be open only on the internal network and not the external network. Modify those rules to accept connections only from the internal network. Test the results.

  3. If you have not already, download the rsbu.tar.gz file from the Apress web site https://github.com/Apress/using-and-administering-linux-volume-3/raw/master/rsbu.tar.gz and install it. Using the enclosed script and configuration file, set up a simple backup configuration that runs once per day and backs up the entire home directories of both StudentVM1 and StudentVM2.

  4. With SELinux enabled, determine the student user's context.

  5. Why should clamscan be run on the /home directory of a mail server?

  6. Configure Tripwire to ignore the files that do not exist on StudentVM2. Initialize the database and run the integrity check.

  7. Why are the Tripwire report files encrypted?

  8. What other services, besides HTTPD, have their own SELinux domains?