Securing Your Server

It seems like every month there are new reports about companies getting their servers compromised. In some cases, entire databases end up freely available on the internet, which may even include sensitive user information that can aid miscreants in stealing identities. Linux is a very secure platform, but it’s only as secure as the administrator who sets it up. Security patches are made available on a regular basis, but they offer no value unless you install them. OpenSSH is indispensable for remote administration, but it’s also a popular target for threat actors trying to break into servers. Backups are a must-have but are potentially useless if they’re not tested regularly or they fall into the wrong hands. In some cases, even your own employees can cause intentional or unintentional damage. In this chapter, we’ll look at some of the ways you can secure your servers from threats.

In this chapter, we will cover:

  • Lowering your attack surface
  • Understanding and responding to Common Vulnerabilities and Exposures (CVEs)
  • Installing security updates
  • Automatically installing patches with the Canonical Livepatch service
  • Securing OpenSSH
  • Installing and configuring Fail2ban
  • MariaDB best practices for secure database servers
  • Setting up a firewall
  • Encrypting and decrypting disks with Linux Unified Key Setup (LUKS)
  • Locking down sudo

To get started, let’s first talk about ways you may be able to lower your attack surface.

Lowering your attack surface

Your Ubuntu Server installations will likely have one or more important applications running on them, some of which might be available to the public internet. This is very common for web servers, for example, as it’s the primary goal of a web server to offer a website that your users can access.

Every application that is accessible from outside the walls of your organization is a potential entry point for threat actors who might attempt to break into your server. The attack surface of a server is essentially a list of all the things that are potentially exploitable. In regards to security, it’s important to understand which applications must be accessible remotely, and which ones you can lock down. Every application you lock down lowers the likelihood of it being taken over by an outside threat. The process of locking things down is what we refer to as lowering your attack surface.

Ideally, in a perfect world, we would disallow all outside connections to all of our servers. Threat actors can’t break into a server that is completely inaccessible from the outside. That doesn’t mean that there aren’t any threats at all, as disgruntled employees are always a potential risk. But a server that’s completely inaccessible is the most secure of all. However, it’s often not feasible to disallow all outside connections. If your company provides a popular public website, then it has to be publicly available. However, if you have an application running on your server that is only used by users internally, then you should lock it down if you can. Whenever possible, it’s good to implement a policy that outside connections are always disallowed by default unless there’s a business need to open it up.

What do we mean by “disallow?” There are multiple ways you can disallow access to an application on your server. The most effective of these is to completely uninstall the application. If you don’t have an application installed at all, it’s impossible for it to be a problem. It probably goes without saying that you should uninstall applications that aren’t necessary, but the entire point of running a server is to serve resources to users, so you’ll always have applications running on your server (otherwise, there wouldn’t be a point in having a server at all). Aside from removing an application, you can utilize a firewall to only allow specific connections. We’ll take a look at setting up a firewall later on in this chapter.

Most importantly, after a new server is implemented, an administrator should always perform a security check to ensure that it’s as secure as it can possibly be. No administrator can think of everything, and even the best among us can make a mistake, but it’s always important that we do our best to ensure we secure a server as much as we can. There are many ways you can secure a server, but the first thing you should do is lower your attack surface. This means that you should close as many holes as you can, and limit the number of things that outsiders could potentially be able to access. In a nutshell, if it’s not required to be available from the outside, lock it down. If it’s not necessary at all, remove it.

To start inspecting your attack surface, the first thing you should do is see which ports are listening for network connections. When an attacker wants to break into your server, it’s almost certain that a port scan is the first thing they will perform. They’ll inventory which ports on your server are listening for connections, determine what kind of application is listening on those ports, and then try a list of known vulnerabilities to try to gain access. To inspect which ports are listening on your server, you can do a simple port query with the ss command:

sudo ss -tulpn

The sudo portion of that command is optional, but if you do include sudo, you’ll see more information in the output. Normally I’d include a screenshot here, but there’s so much information that it won’t fit on this page. From the output, you’ll see a list of ports that are listening for connections. If a port is listening on 0.0.0.0, it’s accepting connections from any network, which is potentially bad. If a port is listening on 127.0.0.1, it’s only accepting local connections and isn’t reachable from outside the machine. Take a minute to inspect one of your servers with the ss command and note which services are listening for outside connections.
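For example, trimmed-down output might look something like the following (the processes and addresses here are hypothetical, and your columns may wrap differently depending on your terminal width):

Netid  State   Recv-Q  Send-Q  Local Address:Port   Peer Address:Port  Process
tcp    LISTEN  0       128           0.0.0.0:22          0.0.0.0:*     users:(("sshd",pid=812,fd=3))
tcp    LISTEN  0       80          127.0.0.1:3306        0.0.0.0:*     users:(("mariadbd",pid=954,fd=21))

In this hypothetical output, OpenSSH (port 22) is reachable from any network, while MariaDB (port 3306) is only listening on the loopback interface and can’t be reached from outside the server.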

Armed with the knowledge of what ports your server is listening on, you can make a decision about what to do with each one. Some of those ports may be required, as the entire purpose of a server is to serve something, which usually means communicating over the network. All of these legitimate ports should be protected in some way, which usually means configuring the service after reviewing its documentation for best practices (which will depend on the particular service) or enabling a firewall, which we’ll get to in the Setting up a firewall section. If any of the ports are not needed, you should close them down. You can either stop their daemon and disable it, or remove the package outright. I usually go for the latter, since it would just be a matter of reinstalling the package if I changed my mind.
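If you do decide a listening service isn’t needed, the cleanup is straightforward. As a quick sketch (using rpcbind purely as an example of a package you might find listening and not need), you would stop the unit, disable it from starting at boot, and then remove the package:

sudo systemctl stop rpcbind
sudo systemctl disable rpcbind
sudo apt remove rpcbind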

OpenSSH is a service that you’re almost always going to have running on your servers. As you are already well aware, it’s a great tool for remote administration. But as useful as it is, it’s usually going to be the first target for any attacker attempting to gain entry into your server.

We won’t want to remove this though, because it’s something we’ll want to take advantage of. What should we do? Not to worry, I’ll be dedicating a section to securing OpenSSH later in this chapter. I mention this now in order to make sure you’re aware that lowering your attack surface will absolutely need to include at least a basic amount of security tweaking for OpenSSH. In addition, I’ll go over Fail2ban in this chapter as well, which can help add an additional layer of security to OpenSSH.

As I’ve mentioned, I’m a big fan of removing packages that aren’t needed. The more packages you have installed, the larger your attack surface is. It’s important to remove anything you don’t absolutely need. Even if a package isn’t listed as an open port, it could still be leveraged in a vulnerability chain. If an attacker uses a vulnerability chain, that essentially means that they first break into one service and then use a vulnerability in another (possibly unrelated) package to elevate their privileges and attempt to gain full access. For that reason, I will need to underscore the fact that you should remove any packages you don’t need on your server. An easy way to get a list of all the packages you have installed is with the following command:

dpkg --get-selections > installed_packages.txt

This command creates a text file containing a list of all the packages installed on your server. Take a moment to look at it. Does anything stand out that you know for sure you don’t need? You most likely won’t know the purpose of every single package, and there could be hundreds or more. Many of the packages in the text file are distribution-required packages you can’t remove if you want your server to boot up the next time you restart it. If you don’t know whether or not a package can be removed, do some research on Google. If you still don’t know, it’s safer to leave that package alone and move on to inspect others. You’ll never remember the purpose of every single package and library, but by going through this exercise you should still find some things you’ll be able to clean up. Eventually, you’ll come up with a list of typical packages most of your servers don’t need, which you can make sure are removed each time you set up a new server. You could even curate a list of unneeded packages, and then create an Ansible playbook to make sure they’re not installed (a minimal sketch follows).
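As a minimal sketch of that idea (the play target and the package names are just examples of things you might decide your servers shouldn’t have), an Ansible playbook could look like this:

---
- hosts: all
  become: true
  tasks:
    - name: Ensure unneeded packages are absent
      ansible.builtin.apt:
        name:
          - telnetd
          - rsh-server
        state: absent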

While attempting to clean up unneeded packages, a useful trick is to use the following command to check whether or not other packages depend on the package you are thinking of removing:

apt-cache rdepends <package-name>

As an example, I ran that command against the tmux package that I installed on a test server, but you can use whichever package name you’d like as an argument to check to see if anything depends on it:

apt-cache rdepends tmux

The output I received on my end is the following:

Figure 21.1: Checking which packages depend on tmux with apt-cache rdepends

With the output of the previous command, you can easily identify if another package depends on the package you are thinking about removing. In the example output, we can see that tmux is actually installed as a dependency of the ubuntu-server package. This means that tmux is quite possibly installed by default on your server, but that may vary depending on whether or not you’ve installed Ubuntu Server yourself or are using a cloud image. Cloud providers don’t always configure Ubuntu Server images the same way. But at the very least, you can identify dependencies and make a more informed decision on whether or not you can safely remove a package.

Even if the output shows a package has no dependencies, you still may not want to remove it unless you understand the functionality it provides and what impact removing the package may have on your system. You can always Google the package name for more details, but at the very least you should look for open ports and focus on those first, since open ports have a greater impact on the security of your server. We’ll look at this in more detail later in this chapter, in the Setting up a firewall section.

Another important consideration is making sure to use only strong passwords. This probably goes without saying, since I’m sure you already understand the importance of strong passwords. However, I’ve seen hacks recently in the news caused by administrators who set weak passwords for their external-facing database or web console, so you never know. The most important rule is that if you must have a service listening for outside connections, then it absolutely must have a strong, randomly generated password. Granted, some daemons don’t have a password associated with them (Apache is one example; it doesn’t require authentication for someone to view a web page on port 80). However, if a daemon does have authentication, it should have a very strong password. OpenSSH is an example of this. If you must allow external access to OpenSSH, that user account should have a strong randomly generated password. Otherwise, it will likely be taken over within a couple of weeks by a multitude of bots that routinely go around scanning for these types of things. In fact, it’s best to disable password authentication in OpenSSH entirely, which we will do later in this chapter. Disabling password authentication increases the security around OpenSSH quite a bit.
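On the subject of strong passwords, there’s no single right way to generate one, but as a quick example, the openssl utility (which is typically already present on Ubuntu Server) can produce a random string that works well as a password:

openssl rand -base64 24

Whatever tool you use, the point is that the password should be long and randomly generated rather than something a human (or a bot with a word list) could guess.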

Finally, it’s important to employ the principle of least privilege for all your user accounts. You’ve probably gotten the impression from several points I’ve made throughout the book that I distrust users. While I always want to think the best of everyone, sometimes the biggest threats can come from within (disgruntled employees, accidental deletions of critical files, and so on). Therefore, it’s important to lock down user accounts as much as possible, and allow them access to only what they actually need to perform their job. This may involve, but is certainly not limited to:

  • Adding a user to the smallest possible number of groups
  • Defaulting all network shares to read-only (users can’t delete what they don’t have permission to delete)
  • Routinely auditing all your servers for user accounts that haven’t been logged into for an extended period of time (see the example commands after this list)
  • Setting account expirations for user accounts, and requiring users to reapply to maintain account status (this prevents stale user accounts from lingering)
  • Allowing user accounts to access as few system directories as possible (preferably none, if you can help it)
  • Restricting sudo to specific commands (more on that later on in this chapter)
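As a quick sketch of the auditing and expiration points above, the lastlog and chage utilities (both included with Ubuntu Server) can help; the username and date are placeholders:

sudo lastlog
sudo lastlog -b 90
sudo chage -E 2025-12-31 jdoe
sudo chage -l jdoe

The first two commands show the most recent login for every account and then only the accounts that haven’t logged in within the last 90 days, while the chage commands set and review an expiration date for a specific account.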

Above all, make sure you document each of the changes you make to your servers, in the name of security. After you develop a good list, you can turn that list into a security checklist to serve as a baseline for securing your servers. Then, you can set reminders to routinely scan your servers for unused user accounts, unnecessary group memberships, and any newly opened ports.

Now you should have some good ideas on how you can lower your attack surface. It’s also important to keep up to date on the current trends and notices surrounding security issues that were reported. In the next section, we’ll take a look at Common Vulnerabilities and Exposures (CVEs), which can help you better understand the nature of threats in the wild.

Understanding and responding to CVEs

I’ve already mentioned some of the things you can do in order to protect your server from some common threats, and I’ll give you more tips later on in this chapter. But how does one know when there’s a vulnerability that needs to be patched? How do you know when to take action? The best practices I’ll mention in this chapter will only go so far; at some point, there may be some sort of security issue that will require you to do something beyond generating a strong password or locking down a port.

The most important thing to do is to keep up with the news. Subscribe to sites that report news on security vulnerabilities, and I’ll even place a few of these in the Further reading section of this chapter. When a security flaw is revealed, it’s typically reported on these sites and given a CVE number, where security researchers will document their findings.

CVEs are found in special online catalogs detailing security vulnerabilities and their related information. In fact, many Linux distributions (Ubuntu included) maintain their own CVE catalogs with vulnerabilities specific to their platform. On such a page, you can see which CVEs the version of your distribution is vulnerable to, which have been responded to, and what updates to install in order to address them.

Often, when a security vulnerability is discovered, it will receive a CVE identification right away, even before mitigation techniques are known. In my case, I’ll often watch a CVE page for a flaw when one is discovered, and look for it to be updated with information on how to mitigate it once that’s determined.

Most often, closing the hole will involve installing a security update, which the security team for Ubuntu will create to address the flaw. In some cases, the new update will require restarting the server or at least a running service, which means I may have to wait for a maintenance period to perform the mitigation.
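As a side note, if your server has the Ubuntu Pro client installed (the pro command, provided by the ubuntu-advantage-tools package), it can also check whether a specific CVE affects your system and, where possible, apply the fix; the CVE number below is only a placeholder:

sudo pro fix CVE-2023-1234

The exact capabilities depend on the client version and your subscription, so treat this as a convenience on top of the CVE tracker rather than a replacement for it.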

I recommend taking a look at the Ubuntu CVE tracker, available at https://ubuntu.com/security/cves. On this site, Canonical (the makers of Ubuntu) keeps information regarding CVEs that affect the Ubuntu platform. There, you can get a list of vulnerabilities that are known to the platform as well as the steps required to address them. There’s no one rule about securing your server, but paying attention to CVEs is a good place to start. We’ll go over installing security updates in the next section, which is the most common method of mitigation.

Installing security updates

Since I’ve mentioned updating packages several times, let’s have a formal conversation about it. Updated packages are made available for Ubuntu quite often, sometimes even daily. These updates mainly include the latest security updates but may also include new features. Since Ubuntu 22.04 is an LTS release, security updates are much more common than feature updates.

Installing the latest updates on your server is a very important practice, but, unfortunately, it’s not something that all administrators keep up with for various reasons.

When installed, security updates very rarely make many changes to your server, other than helping to keep it secure against the latest threats. However, it’s always possible that a security update that’s intended to fix a security issue ends up breaking something else. This is rare, but I’ve seen it happen. When it comes to production servers, it’s often difficult to keep them updated, since it may be catastrophic to your organization to introduce change within a server that’s responsible for a large portion of your profits. If a server goes down, it could be very costly. Then again, if your servers become compromised and your organization ends up the subject of a CNN hacking story, you’ll definitely wish you had kept your packages up to date!

The key to a happy data center is to test all updates before you install them. Many administrators will feature a system where updates will graduate from one environment to the next. For example, some may create virtual clones of their production servers, update them, and then see whether anything breaks. If nothing breaks, then those updates will be allowed on the production servers.

In a clustered environment, an administrator may just update one of the production servers, see how it gets impacted, and then schedule a time to update the rest. In the case of workstations, I’ve seen policies where select users are chosen for security updates before they are uploaded to the rest of the population. I’m not necessarily suggesting you treat your users as guinea pigs, but everyone’s organization is different, and finding the right balance for installing updates is very important. Although these updates represent change, there’s a reason that Ubuntu’s developers went through the hassle of making them available. These updates fix issues, some of which are security concerns that are already being exploited as you read this.

To begin the process of installing security updates, the first step is to update your local repository index. As we’ve discussed before, the way to do so is to run sudo apt update. This will instruct your server to check all of its subscribed repositories to see whether any new packages were added or whether any out-of-date packages were removed. Then, you can start the actual process.

There are two commands you can use to update packages. You can run either sudo apt upgrade or sudo apt dist-upgrade.

The difference is that running apt upgrade will never remove any packages, which makes it the safer of the two to use. It updates the packages that are already installed on your server, and while it can pull in a new package when one is strictly required as a dependency, it will never remove an existing package; any upgrade that would require a removal is held back.

The apt dist-upgrade command will update absolutely everything available. It will make sure all packages on your server are updated, even if that means installing a new package as a dependency that wasn’t required before. If a package needs to be removed in order to satisfy a dependency, it will do that as well. If an updated kernel is available, it will be installed. If you use this command, just take a moment to look at the proposed changes before you agree to have it run, as it will allow you to confirm the changes during the process.

Generally speaking, the dist-upgrade variation should represent your end goal, but it’s not necessarily where you should start. Updated kernels are important, since your distribution’s kernel receives security updates just like any other package. All packages should be updated eventually, even if that means something is removed because it’s no longer needed or something new ends up getting installed.
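Putting those pieces together, a typical update session on an Ubuntu server looks something like the following, run in this order:

sudo apt update
sudo apt upgrade
sudo apt dist-upgrade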

When you start the process of updating, it will look similar to the following:

Figure 21.2: Updating packages on an Ubuntu server

Before the update process actually starts, you’ll be given an overview of what it wants to do. In my case, it wants to upgrade 11 packages. If you were to enter Y and then press Enter, the update process would begin. At this point, I’ll need to leave the terminal window open; it’s actually dangerous to close it in the middle of the update process. Closing the terminal window in the middle of a package management task may result in corrupted or partially installed packages.

Assuming that this process finishes successfully, we can run the apt dist-upgrade command to update the rest – specifically, any packages that were held back because upgrading them would have required more invasive changes, such as removing existing packages. There weren’t any in my case, but in such a situation you may see text indicating that some upgrades were held back, which is normal with apt upgrade. At that point, you’ll run sudo apt dist-upgrade to install any remaining updates that didn’t get installed with the first command.

In regard to updating the kernel, this process deserves some additional discussion. Some distributions are riskier than others when it comes to kernel updates. Arch Linux is an example of this, where only one kernel is installed at any one time. When that kernel gets updated, you really need to reboot the machine so that it can be used properly (various system components can misbehave while a reboot is pending after a kernel update).

Ubuntu, on the other hand, handles kernel upgrades very efficiently. When you update a kernel in Ubuntu, it doesn’t replace the kernel your server is currently running on. Instead, it installs the updated kernel alongside your existing one.

In fact, these kernels will continue to be stacked and none of them will be removed as new ones are installed. When new versions of the Ubuntu kernel are installed, the GNU GRUB boot loader will be updated automatically to boot the new kernel the next time you perform a reboot.

Until you do, you can continue to run on your current kernel for as long as you need to, and you shouldn’t notice any difference. The only real difference is the fact that you’re not taking advantage of the additional security patches of the new kernel until you reboot, which you can do during your next maintenance window. The reason this method of updating is great is that if you run into a problem where the new kernel doesn’t boot or has some sort of issue, you’ll have a chance to press Esc at the beginning of the boot process, where you’ll be able to browse a list of all of your installed kernels. Using this list, you can choose between your previous (known, working) kernels and continue to use your server as you were before you updated the kernel. This is a valuable safety net!
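If you’re curious which kernel you’re currently running versus which kernels are installed and waiting, the following commands will tell you (the first prints the running kernel version, the second lists the installed kernel image packages):

uname -r
dpkg --list 'linux-image-*' | grep ^ii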

After you update the packages on your server, you may want to restart services in order to take advantage of the new security updates. In the case of kernels, you would need to reboot your entire server in order to take advantage of kernel updates, but other updates don’t require a reboot. Instead, if you restart the associated service, you’ll generally be fine (if the update itself didn’t already trigger a restart of a service). For example, if your DNS service (bind9) was updated, you would only need to execute the following to restart the service:

sudo systemctl restart bind9
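If you’re ever unsure whether a batch of updates requires a full reboot, Ubuntu creates a flag file when one is needed; if these files don’t exist, no reboot is pending:

cat /var/run/reboot-required
cat /var/run/reboot-required.pkgs

The first file simply states that a restart is required, while the second lists the packages that triggered the requirement.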

In addition to keeping packages up to date, it’s also important that you understand how to roll back an updated package in a situation where something went wrong. You can recover from such a situation by simply reinstalling an older version of a package manually. Previously downloaded packages are stored in the following directory:

/var/cache/apt/archives

There, you should find the actual packages that were downloaded as a part of your update process. In a case where you need to restore an updated package to a previously installed version, you can manually install a package with the dpkg command. Generally, the syntax will be similar to the following:

sudo dpkg -i /path/to/package.deb

To be more precise, you would use a command such as the following to reinstall a previously downloaded package, using an older Linux kernel as an example:

sudo dpkg -i /var/cache/apt/archives/linux-image-5.15.0-30-generic_5.15.0-30.31_amd64.deb

However, the dpkg command doesn’t handle dependencies automatically, so if a package that your target package requires as a dependency is missing, the package will be unpacked but left unconfigured, and you’ll have unresolved dependencies you’ll need to fix. You can try to resolve this situation with apt:

sudo apt -f install

The apt -f install command will attempt to fix your installed packages, looking for packages that are missing (but are required by an installed package), and will offer to install the missing dependencies for you. In a case where it cannot find a missing dependency, it will offer to remove the package that requires the missing packages if the situation cannot be worked out any other way.
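If a specific updated package turned out to be the culprit, you may also want to prevent it from being upgraded again until a fixed version is available. The apt-mark command can hold a package at its current version (the package name here is just an example), and release the hold later:

sudo apt-mark hold linux-image-generic
sudo apt-mark unhold linux-image-generic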

Well, there you have it. At this point, you should be well on your way to not only installing packages but keeping them updated as well. There’s also a feature in Ubuntu that you can utilize to take advantage of the concept of live patching, which you can use to patch your server’s kernel automatically. That’s what we’ll cover in the next section.

Automatically installing patches with the Canonical Livepatch service

In the previous section, I mentioned that if your updates include an update to the kernel, you’ll need to reboot your server for the new kernel to take effect. While this is generally true, Canonical offers a Livepatch service for Ubuntu, which allows it to receive updates and have them applied without rebooting. This is a game changer, as it takes care of keeping your running kernel patched without you having to do anything, not even reboot. This is a massive benefit to security, as it gives you the benefits of the latest security patches without the inconvenience of scheduling a restart of your servers right away.

However, the service is not free or included with Ubuntu by default. Even so, you can install the Livepatch service on three of your servers without paying, so it’s still something you may want to consider. You’re even able to utilize this service on the desktop version of Ubuntu if you’d like. Since you can use this service for free on three servers, I see no reason why you shouldn’t benefit from this on your most critical resources.

Even though you generally won’t need to reboot your server in order to take advantage of patches with the Livepatch service, there may be some exceptions depending on the nature of the vulnerability. There have been exploits in the past that required complex changes, and even servers subscribed to this service still needed to reboot. This is the exception rather than the rule, though. Most of the time, a reboot is simply not something you’ll need to worry about if you’re utilizing Livepatch. More often than not, your server will have all patches applied and inserted right into the running kernel, which is an amazing thing.

One important thing to note is that this doesn’t stop you from needing to install updates via apt. Live patches are inserted right into the kernel, but they’re not permanent. You’ll still want to install all of your package updates on a regular basis through the regular means. At the very least, live patches will make it so that you won’t be in such a hurry to reboot. If an exploit is revealed on Monday but you aren’t able to reboot your server until Sunday, it’s no big deal.

Since the Livepatch service requires a subscription, you’ll need to create an account in order to get started using it. You can get started with this process at https://auth.livepatch.canonical.com/.

The process will involve having you create an Ubuntu One account (https://login.ubuntu.com/), which is Canonical’s centralized login system. You’ll enter your email address, choose a password, and then at the end of the process, you’ll be given a token to use with your Livepatch service, which will be a string of random characters.

Now that you have a token, you can decide on the three servers that are most important to you. On each of those servers, you can run the following commands to get started:

sudo snap install canonical-livepatch
sudo canonical-livepatch enable <token>

Believe it or not, that’s all there is to it. With how amazing the Livepatch service is, you’d think it would be a complicated process to set up. The most time-consuming part is registering for a new account, as it only takes two commands to set this service up on a server. You can check the status of Livepatch with the following command:

sudo canonical-livepatch status

Depending on the budget of your organization, you may decide that this service is worth paying for, which will allow you to benefit from having it on more than three servers. It’s definitely worth considering. You’ll need to contact Canonical to inquire about additional support, should you decide to explore that option.

At this point, we should switch gears and discuss some things we can do to better secure OpenSSH. I’ve mentioned a few times throughout this chapter that OpenSSH is a common target for outside threat actors, so in the next section, it’s time to take a closer look at this.

Securing OpenSSH

OpenSSH is a very useful utility; it allows us to configure our servers from a remote location as if we were sitting in front of the console. In the case of cloud resources, it’s typically the only way to access our servers. Considering the nature of OpenSSH itself (remote administration), it’s a very tempting target for miscreants who are looking to cause trouble. If we simply leave OpenSSH unsecured, this useful utility may be our worst nightmare.

Thankfully, configuring OpenSSH itself is very easy. However, the large number of configuration options may be intimidating to someone who doesn’t have much experience tuning it. While it’s a good idea to peruse the documentation for OpenSSH, in this section, we’ll take a look at the common configuration options you’ll want to focus your attention on first.

The configuration file for OpenSSH itself is located at /etc/ssh/sshd_config, and we touched on it in Chapter 10, Connecting to Networks. This is the file we’re going to focus on in this section, as the configuration options I’m going to give you are to be placed in that file.

With each of the tweaks in this section, make sure you first look through the file in order to see whether the setting is already there and change it accordingly. If the setting is not present in the file, add it. After you make your changes, it’s important to restart the OpenSSH daemon:

sudo systemctl restart ssh
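Before restarting, it’s also worth asking OpenSSH to validate the configuration file; if the following command prints nothing, the syntax is fine, and if there’s a typo, it will tell you which line is at fault:

sudo sshd -t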

Go ahead and open this file in your editor, and we’ll go through some tweaks.

One really easy tweak is to change the port number that OpenSSH listens on, which defaults to port 22. Since this is the first port that hackers will attempt, it makes sense to change it, and it’s a very easy change to make. However, I don’t want you to think that just because you change the port for OpenSSH, it’s magically hidden and cannot be detected. A persistent threat actor will still be able to find the port by running a port scan against your server. Still, since the change is so easy to make, why not do it? To change it, simply look for the Port line in the /etc/ssh/sshd_config file and change it from its default of 22:

Port 65332

The only downsides I can think of in regards to changing the SSH port are that you’ll have to remember to specify the port number when using SSH, and you’ll have to communicate the change to anyone that uses the server. To specify the port, we use the -p option with the ssh command:

ssh -p 65332 myhost

If you’re using scp, you’ll need to use an uppercase P instead:

scp -P 65332 myfile myserver:/path/to/dir
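To avoid having to remember the port at all, you can also record it in the ~/.ssh/config file on your workstation; the host alias and hostname below are placeholders:

Host myhost
    HostName myhost.example.com
    Port 65332

With that entry in place, a plain ssh myhost (or scp to myhost) will use the custom port automatically.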

Even though changing the port number won’t make your server bulletproof, we shouldn’t underestimate the value of doing so. In a hypothetical example where an attacker is scanning servers on the internet for an open port 22, they’ll probably skip your server and move on to the next. Only determined attackers that specifically want to break into your server will scan other ports looking for it. This also keeps your log file clean; you’ll see intrusion attempts only from miscreants doing aggressive port scans, rather than random bots looking for open ports.

If your server is internet-facing, this will result in far fewer entries in the logs! OpenSSH logs connection attempts in the authorization log, located at /var/log/auth.log. Feel free to check out that log file to see what typical logging looks like.

Another change that’s worth mentioning is which protocol version OpenSSH accepts. Most versions of OpenSSH available in repositories today default to Protocol 2. This is what you want. Protocol 2 is much more secure than Protocol 1, and you should never allow Protocol 1 in production under any circumstances. Chances are you’re already using the default of Protocol 2 on your server, unless you changed it for some reason. I mention it here just in case you have older servers still in production that are defaulting to the older protocol; any modern release of a Linux distribution ships OpenSSH with Protocol 2. If you do have an older server that’s still using Protocol 1, you can adjust that by finding the following line in the /etc/ssh/sshd_config file:

Protocol 1

Switching OpenSSH to use Protocol 2 is as simple as changing the 1 on that line to 2, and then restarting the OpenSSH server:

sudo systemctl restart ssh

Next, I’ll give you two tweaks for the price of one. There are two settings that deal with which users and groups are allowed to log in via SSH: AllowUsers and AllowGroups, respectively. By default, every user you create is allowed to log in to your server via SSH. With regard to root, password logins aren’t accepted by default (more on that later), but each user you create is allowed in. However, only users that must have access should be allowed in. There are two ways to accomplish this.

One option is to use AllowUsers. With the AllowUsers option, you can specifically set which users can log in to your server. With AllowUsers present (which is not found in the config file by default), your server will not allow anyone to use SSH that you don’t specifically call out with that option. You can separate each user with a space:

AllowUsers larry moe curly

Personally, I find AllowGroups easier to manage. It works pretty much the same as AllowUsers, but with groups. If present, it will restrict OpenSSH connections to users who are a member of this group. To use it, you’ll first create the group in question (you can name it whatever makes sense to you):

sudo groupadd sshusers

Then, you’ll make one or more users a member of that group:

sudo usermod -aG sshusers myuser

Once you have added the group and made a user or two a member of that group, add the following to your /etc/ssh/sshd_config file, replacing the sample groups with yours:

AllowGroups admins sshusers gremlins

It’s fine to use only one group. Just make sure you add yourself to the group before you log out; otherwise, you’ll lock yourself out. I recommend you use only one or the other between AllowUsers and AllowGroups. I think that it’s much easier to use AllowGroups, since you’ll never need to touch the sshd_config file again; you’ll simply add or remove user accounts to and from the group to control access. Just so you’re aware, AllowUsers overrides AllowGroups.

Another important option is PermitRootLogin, which controls whether or not the root user account is able to make SSH connections. This should always be set to no. By default, this is usually set to prohibit-password, which means key authentication is allowed for root while passwords for root aren’t accepted. I don’t see a good reason to allow even that; in my opinion, you should turn this off entirely. Having root able to log in to your server over a network connection is never a good idea, and it’s always the first user account attackers will try to use:

PermitRootLogin no

There is one exception to the no-root rule with SSH. Some providers of cloud servers, such as Linode, may have you log in as root by default. This isn’t really typical, but some providers are set up that way. In such a case, I recommend creating a regular user with sudo access, and then disallowing root login.

My next suggestion is by no means easy to set up, but it’s worth it. By default, OpenSSH allows users to authenticate via passwords. This is one of the first things I disable on all my servers. Allowing users to enter passwords to establish a connection means that attackers will also be able to brute-force your server. If passwords aren’t allowed, then they can’t do that. What’s tricky is that before you can disable password authentication for SSH, you’ll first need to configure and test an alternate means of authenticating, which will usually be public key authentication. This is something we’ve gone over, in Chapter 10, Connecting to Networks. Basically, you can generate an SSH key pair on your local workstation, and then add that key to the authorized_keys file on the server, which will allow you in without a password. Again, refer to Chapter 10, Connecting to Networks, if you haven’t played around with this yet.

If you disable password authentication for OpenSSH, then public key authentication will be the only way in. If someone tries to connect to your server and they don’t have the appropriate key, the server will deny their access immediately. If password authentication is enabled and you have a key relationship, then the server will ask the user for their password if their key isn’t installed. In my view, after you set up access via public key cryptography, you should disable password authentication (just make sure you test it first):

PasswordAuthentication no
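Once you’ve restarted OpenSSH, it’s worth confirming from another machine that password logins really are refused. One way to do that is to tell your SSH client to skip key authentication and attempt a password instead; if the configuration took effect, the server should deny the connection rather than prompt for a password (the hostname is a placeholder):

ssh -o PubkeyAuthentication=no -o PreferredAuthentications=password myhost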

There you are – those are my most recommended tweaks for securing OpenSSH. There’s certainly more where that came from, but those are the settings you’ll benefit from the most. In the next section, we’ll add an additional layer, in the form of Fail2ban. With Fail2ban protecting OpenSSH and coupled with the tweaks I mentioned in this section, attackers will have a tough time trying to break into your server. For your convenience, here are all the OpenSSH configuration options I’ve covered in this section:

Port 65332 
Protocol 2 
AllowUsers larry moe curly 
AllowGroups admins sshusers gremlins 
PermitRootLogin no 
PasswordAuthentication no 

With OpenSSH better secured, we should be a bit more confident now when it comes to the security of our server. However, each tweak or improvement we make to improve security only helps us so much. The more protections we implement, the better. In the next section, we’ll explore Fail2ban, which can greatly increase the security of our server.

Installing and configuring Fail2ban

Fail2ban, how I love thee! Fail2ban is one of those tools that once I learned how valuable it is, I wondered how I ever lived so long without it. Fail2ban is able to keep an eye on your log files, looking for authentication failures. You can set the number of failures that are allowed from any given IP address, and if there are more than the allowed number of failures, Fail2ban will block that individual’s IP address. It’s highly configurable and can enhance the security of your server.

Installing and configuring Fail2ban is relatively straightforward. First, install its package:

sudo apt install fail2ban

After installation, the fail2ban daemon will start up and be configured to automatically start at boot time. Configuring fail2ban is simply a matter of creating a configuration file. But this is one of the more interesting aspects of Fail2ban: you shouldn’t use its default config file. The default file is /etc/fail2ban/jail.conf. The problem with this file is that it can be overwritten when you install security updates, if those security updates ever include Fail2ban itself. To remedy this, Fail2ban also reads the /etc/fail2ban/jail.local file, if it exists. It will never replace that file, and the presence of a jail.local file will supersede the jail.conf file. The simplest way to get started is to make a copy of jail.conf and save it as jail.local:

sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

Next, I’ll go over some of the very important settings you should configure, so open up the /etc/fail2ban/jail.local file you just copied in a text editor. The first configuration item to change is located on or around line 92 and is commented out:

#ignoreip = 127.0.0.1/8 ::1

First of all, uncomment it. Then, you should add additional networks that you don’t want to be blocked by Fail2ban. Basically, this will help prevent you from getting locked out in a situation where you accidentally trigger Fail2ban. Fail2ban is relentless; it will ban any host that meets its criteria, and it won’t think twice about it. This includes banning you. To prevent that, add your company’s network here, as well as any other IP address you never want to be blocked. Make sure to leave the localhost IP intact:

ignoreip = 127.0.0.1/8 ::1 192.168.1.0/24 192.168.1.245

In that example, I added the 192.168.1.0/24 network, as well as the single IP address 192.168.1.245. Add your own networks to this line to ensure you don’t lock yourself out.

Next, line 101 includes the bantime option. This option controls how long a host stays banned once Fail2ban blocks it; a bare number is interpreted as seconds, and it defaults to 10m, or 10 minutes:

bantime  = 10m

Change this number to whatever you find reasonable, or just leave it as its default, which will also be fine. If a host gets banned, it will be banned for this specific number of minutes, and then it will eventually be allowed again.

Continuing, we have the maxretry setting:

maxretry = 5

This is specifically the number of failures that need to occur before Fail2ban takes action. If an IP address racks up this many failures against a service Fail2ban is watching, game over! That IP will be blocked for the duration set in the bantime option.

You can change this if you want to, if you don’t find 5 failures to be reasonable. The highest I would set it to is 7, for those users on your network who insist they’re typing the correct password and they type the same (wrong) thing over and over. Hopefully, they’ll realize their error before their seventh attempt and won’t need to call the helpdesk.

Skipping ahead all the way down to line 272 or thereabouts, we have the Jails section. From here, the config file will list several jails you can configure, which is basically another word for something Fail2ban will pay attention to. The first is [sshd], which configures its protection of the OpenSSH daemon. Look for this option underneath [sshd]:

port    = ssh

port being equal to ssh basically means that it’s defaulting to port 22. If you’ve changed your SSH port, change this to reflect whatever that port is. There are two such occurrences, one under [sshd] and another underneath [sshd-ddos]:

port    = 65332

Before we go too much further, I want to underscore the fact that we should test whether Fail2ban is working after each configuration change we make. To do this, restart Fail2ban and then check its status:

sudo systemctl restart fail2ban
sudo systemctl status -l fail2ban

The status should always be active (running). If it’s anything else (such as failed), that means that Fail2ban doesn’t like something in your configuration. Usually, that means that Fail2ban’s status will reflect that it exited. So, as we go, make sure to restart Fail2ban after each change and make sure it’s not complaining about something. The status command will show lines from Fail2ban’s log file for your convenience.

Another useful command to run after restarting Fail2ban is the following:

sudo fail2ban-client status

The output from that command will show all the jails that you have enabled. If you enable a new jail in the config file, you should see it listed within the output of that command.
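You can also ask for the status of one specific jail, which additionally shows the current failure count and any IP addresses that are presently banned:

sudo fail2ban-client status sshd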

So, how do you enable a jail? By default, all jails are disabled, except for the one for OpenSSH. To enable a jail, place the following within its config block in the /etc/fail2ban/jail.local file:

enabled = true 

If you want to enable the apache-auth jail, find its section, and place enabled = true right underneath its section header. For example, apache-auth will look like the following after you add the enabled line:

[apache-auth] 
enabled = true 
port     = http,https 
logpath  = %(apache_error_log)s 

In that example, the enabled = true portion wasn’t present in the default file. I added it. Now that I’ve enabled a new jail, we should restart fail2ban:

sudo systemctl restart fail2ban

Next, check its status to make sure it didn’t explode on startup:

sudo systemctl status -l fail2ban

Assuming all went well, we should see the new jail listed in the output of the following command:

sudo fail2ban-client status

On my test server, the output became the following once I enabled apache-auth:

Status
|- Number of jail:      2
`- Jail list:   apache-auth, sshd

If you enable a jail for a service you don’t have installed, Fail2ban may fail to start up. In my example, I actually did have apache2 installed on that server before I enabled its jail. If I hadn’t, Fail2ban would likely have exited, complaining that it wasn’t able to find log files for Apache. This is the reason why I recommend that you test Fail2ban after enabling any jail. If Fail2ban decides it doesn’t like something, or something it’s looking for isn’t present, it may stop. Then, it won’t be protecting you at all, which is not good.

The basic order of operations for Fail2ban is to peruse the jail config file, looking for any jails you may benefit from. If you have a daemon running on your server, there’s a chance that there’s a jail for that. If there is, enable it and see whether Fail2ban breaks. If not, you’re in good shape. If it does fail to restart properly, inspect the status output and check what it’s complaining about.

One thing you may want to do is add the enabled = true line to [sshd] and [sshd-ddos]. Sure, the [sshd] jail is already enabled by default, but since it wasn’t specifically called out in the config file, I don’t trust it. So you might as well add an enabled line to be safe. There are several jails you may benefit from. If you are using SSL with Apache, enable [apache-modsecurity]. Also, consider enabling [apache-shellshock] while you’re at it to potentially protect Apache from the Shellshock vulnerability. If you’re running your own mail server and have Roundcube running, enable [roundcube-auth] and [postfix]. There are a lot of default jails at your disposal!

Like all security applications, Fail2ban isn’t going to automatically make your server impervious to all attacks, but it is a helpful additional layer you can add to your security regimen. When it comes to the jails for OpenSSH, Fail2ban is worth its weight in gold, and that’s really the least you should enable. Go ahead and give Fail2ban a go on your servers; just make sure you also add your own network to the ignoreip list that was covered earlier, in case you accidentally type your own SSH password incorrectly too many times and potentially lock yourself out. Fail2ban doesn’t discriminate; it’ll block anyone. Once you get it fully configured, I think you’ll agree that Fail2ban is a worthy ally for your servers.
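If you do end up banning yourself (or a legitimate user) despite those precautions, you don’t have to wait for the ban to expire; you can lift it manually with the Fail2ban client. The jail name and IP address below are placeholders:

sudo fail2ban-client set sshd unbanip 192.168.1.50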

Earlier, I mentioned that each service that runs on your computer listening for connections is a potential target. While it’s impossible to go over every service you could possibly run on your server and how to secure it, we will want to consider securing our database server (if we have one) since organizations typically store valuable data there. We’ll learn some methods we can utilize to better secure MariaDB next.

MariaDB best practices for secure database servers

MariaDB, as well as MySQL, is a very useful resource to have at your disposal. However, it can also be used against you if configured improperly. Thankfully, it’s not too hard to secure, but there are several points of consideration to make regarding your database server when developing your security design.

The first point is probably obvious to most of you, and I have mentioned it before, but I’ll mention it just in case. Your database server should not be reachable from the internet. I do understand that there are some edge cases when developing a network, and certain applications may require access to a MySQL database over the internet. However, if your database server is accessible over the internet, miscreants will try their best to attack it and gain entry. If there’s any vulnerability in your version of MariaDB or MySQL, they’ll most likely be able to hack into it.

In most organizations, a great way to implement a database server is to make it accessible by only internal servers. This means that while your web server would obviously be accessible from the internet, its backend database should exist on a different server on your internal network and accept communications only from the web server. If your database server is a VPS or cloud instance, it should especially be configured to only accept communications from your web server, as VPS machines are accessible via the internet by default. Therefore, it’s still possible for your database server to be breached if your web server is also breached, but it would be less likely to be compromised if it resides on a separate and restricted server.

Some VPS providers, such as DigitalOcean and Linode, feature local networking, which you can leverage for your database server instead of allowing it to be accessible over the internet. If your VPS provider features local networking, you should definitely utilize it and deny traffic from outside the local network.
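In addition to network-level restrictions, MariaDB itself can be told which address to listen on via the bind-address option. On Ubuntu, this option typically lives in /etc/mysql/mariadb.conf.d/50-server.cnf (the exact filename can vary between versions); pointing it at the loopback address, or at the server’s private/internal IP, keeps the daemon off the public interface entirely:

[mysqld]
bind-address = 127.0.0.1

After changing this setting, restart MariaDB with sudo systemctl restart mariadb for it to take effect.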

With regard to limiting which servers are able to access a database server, there are a few tweaks we can use to accomplish this. First, we can leverage the /etc/hosts.allow and /etc/hosts.deny files. With the /etc/hosts.deny file, we can stop traffic from certain networks or from specific services. With /etc/hosts.allow, we allow the traffic. This works because entries in /etc/hosts.allow override /etc/hosts.deny. So basically, if you deny everything in /etc/hosts.deny and allow a resource or two in /etc/hosts.allow, you’re saying: deny everything, except the resources I explicitly allow in /etc/hosts.allow.

To make this change, we’ll want to edit the /etc/hosts.allow file first. By default, this file has no configuration other than some helpful comments. Within the file, we can include a list of resources we’d like to be able to access our server, no matter what. Make sure that you include your web server here, and also make sure that you immediately add the IP address you’ll be using to SSH into the machine; otherwise, you’ll lock yourself out once we edit the /etc/hosts.deny file.

Here are some example hosts.allow entries, with a description of what each example rule does.

The first example rule allows a machine with an IP address of 192.168.1.50 to access the server:

ALL: 192.168.1.50

This rule allows any machine within the 192.168.1.0/24 network to access the server:

ALL: 192.168.1.0/255.255.255.0

In this rule, we have an incomplete IP address. This acts as a wildcard, which means that any IP address beginning with 192.168.1 is allowed:

ALL: 192.168.1.

This rule allows everything. You definitely don’t want to do this:

ALL: ALL

We can also allow specific daemons. Here, I’m allowing OpenSSH traffic originating from any IP address beginning with 192.168.1:

ssh: 192.168.1.

On your end, if you wish to utilize this security approach, add the resources on the database server you’ll be comfortable accepting communications from. Make sure you at least add the IP address of another server with access to OpenSSH, so you’ll have a way to manage the machine. You can also add all your internal IP addresses with a rule similar to the previous examples. Once you have this set up, we can edit the /etc/hosts.deny file.

The /etc/hosts.deny file utilizes the same syntax as /etc/hosts.allow. To finish this little exercise, we can block any traffic not included in the /etc/hosts.allow file with the following rule:

ALL: ALL

The /etc/hosts.allow and /etc/hosts.deny files don’t represent a complete layer of security but are a great first step in securing a database server, especially one that might contain sensitive user or financial information. They’re by no means specific to MariaDB either, but I mention them here because databases very often contain data that, if leaked, could potentially wreak havoc on your organization and even put someone out of business. A database server should only ever be accessible by the application that needs to utilize it.

Another point of consideration is user security. We walked through creating database users in Chapter 13, Managing Databases. In that chapter, we walked through the MySQL commands for creating a user as well as GRANT, performing both in one single command. This is the example I used:

GRANT SELECT ON mysampledb.* TO 'appuser'@'localhost' IDENTIFIED BY 'password';

What’s important here is that we’re allowing access to the mysampledb database by a user named appuser. If you look closer at the command, we’re also specifying that this connection is allowed only if it’s coming in from localhost. If we tried to access this database remotely, it wouldn’t be allowed.

This is a great default. But at some point, you’ll also need to access the database from a different server. Perhaps your web server and database server are separate machines, which is a common enterprise setup. You could do this:

GRANT SELECT ON mysampledb.* TO 'appuser'@'%' IDENTIFIED BY 'password';

However, in my opinion, this is a very bad practice. The % character in a MySQL GRANT command is a wildcard, similar to * with other commands. Here, we’re basically telling our MariaDB or MySQL instance to accept connections from this user, from any network. There is almost never a good reason to do this. I’ve heard some administrators use the argument that they don’t allow external traffic from their company firewall, so allowing MySQL traffic from any machine shouldn’t be a problem. However, that logic breaks down when you consider that if an attacker does gain access to any machine in your network, they can immediately target your database server. If an internal employee gets angry at management and wants to destroy the database, they’ll be able to access it from their workstation. If an employee’s workstation becomes affected by malware that targets database servers, it may find your database server and try to brute-force it. I could go on and on with examples of why allowing access to your database server from any machine is a bad idea. Just don’t do it!

If we want to give access to a specific IP address, we can do so with the following instead:

GRANT SELECT ON mysampledb.* TO 'appuser'@'192.168.1.50' IDENTIFIED BY 'password';

With the previous example, only a server or workstation with an IP address of 192.168.1.50 is allowed to use the appuser account to obtain access to the database. That’s much better. You can, of course, allow an entire subnet as well:

GRANT SELECT ON mysampledb.* TO 'appuser'@'192.168.1.%' IDENTIFIED BY 'password';

Here, any IP address beginning with 192.168.1 is allowed. Honestly, I really don’t like allowing an entire subnet. But depending on your network design, you may have a dozen or so machines that need access. Hopefully, the subnet you allow is not the same subnet your users’ workstations use!

Finally, another point of consideration is security patches for your database server software. I know I talk about updates quite a bit, but as I’ve mentioned, these updates exist for a reason. Developers don’t release patches for enterprise software simply because they’re bored; these updates often patch real problems that real people are taking advantage of right now as you read this. Install updates regularly. I understand that updates on server applications can scare some people, as an update always comes with the risk that it may disrupt business. But as an administrator, it’s up to you to create a rollout plan for security patches, and ensure they’re installed in a timely fashion. Sure, it’s tough and often has to be done after hours. But the last thing I want to do is read about yet another company where the contents of their database server were leaked and posted freely online. A good security design includes regular patching.

Now that our database server is more secure, there’s another topic worth diving into, and that is the subject of implementing a firewall. There are several different firewall solutions out there, but UFW is a great choice. It’s easy to set up, and quite effective. In the next section, I’ll go over how to implement it.

Setting up a firewall

Firewalls are a very important part of your network and security design. They're extremely easy to implement, but sometimes hard to implement well. The problem with firewalls is that they can offer a false sense of security to those who aren't familiar with the best ways to manage them. Sure, they're good to have, but simply having a firewall isn't enough by itself.

The false sense of security comes when someone thinks they're protected just because a firewall is installed and enabled, while at the same time traffic from any network is allowed through to internal ports. Take, for example, the firewall that was introduced with Windows XP and enabled by default with Windows XP Service Pack 2. Yes, it was a step in the right direction, but users simply clicked the allow button whenever something wanted access, which defeats the entire purpose of having a firewall. Windows implements this better nowadays, but the false sense of security it created remains. Firewalls are not a "set it and forget it" solution!

Firewalls work by allowing or disallowing access to a network port from other networks. Most good firewalls block outside traffic by default. When a user or administrator enables a service, they open a port for it. Then, that service is allowed in. This is great in theory, but where it breaks down is that administrators will often allow access from everywhere when they open a port. If an administrator does this, they may as well not have a firewall at all. If you need access to a server via OpenSSH, you may open up port 22 (or whatever port OpenSSH is listening on) to allow it through the firewall. But if you simply allow the port, it’s open for everyone else as well.

When configured properly, a firewall will enable access to a port only from specific places. For example, rather than allowing port 22 for OpenSSH to your entire network, why not just allow traffic to port 22 from specific IP addresses or subnets? Now we’re getting somewhere! In my opinion, allowing all traffic through a port is usually a bad idea, though some services actually do need this (such as web traffic to your web server). If you can help it, only allow traffic from specific networks when you open a port. This is where the use case for a firewall really shines.

In Ubuntu Server, Uncomplicated Firewall (UFW) is a really useful tool for configuring your firewall. As the name suggests, it makes firewall management a breeze. To get started, install the ufw package:

sudo apt install ufw

By default, UFW is inactive. This is a good thing, because we wouldn't want to enable a firewall until after we've configured it. The ufw package features its own command for checking its status:

sudo ufw status

Unless you’ve already configured your firewall, the status will come back as inactive.

With the ufw package installed, the first thing we’ll want to do is enable traffic via SSH, so we won’t get locked out when we do enable the firewall:

sudo ufw allow from 192.168.1.156 to any port 22

You can probably see from that example how easy UFW's syntax is. With that rule, we're allowing the 192.168.1.156 IP address access to port 22 on the server. In your case, you would change the IP address accordingly, as well as the port number if you're not using the OpenSSH default port. The any keyword refers to any destination address on the server, and since we didn't specify a protocol, the rule applies to both TCP and UDP.

You can also allow traffic by subnet:

sudo ufw allow from 192.168.1.0/24 to any port 22
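UFW also lets you narrow a rule down to a single protocol with the proto keyword. Since OpenSSH only uses TCP, a slightly tighter version of the previous rule could look like this:

sudo ufw allow proto tcp from 192.168.1.0/24 to any port 22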

Although I don’t recommend this, you can allow all traffic from a specific IP to access anything on your server. Use this with care, if you have to use it at all:

sudo ufw allow from 192.168.1.50

Now that we’ve configured our firewall to allow access via OpenSSH, you should also allow any other ports or IP addresses that are required for your server to operate efficiently. If your server is a web server, for example, you’ll want to allow traffic from ports 80 and 443. This is one of those few exceptions where you’ll want to allow traffic from any network, assuming your web server serves an external page on the internet:

sudo ufw allow 80
sudo ufw allow 443
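Web traffic is TCP-only, so if you'd rather not open the UDP side of these ports as well, you can append the protocol to the shorthand form:

sudo ufw allow 80/tcp
sudo ufw allow 443/tcp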

There are various other usage patterns for the ufw command; refer to the man page (http://manpages.ubuntu.com/manpages/focal/man8/ufw.8.html) for more. In a nutshell, these examples should enable you to allow traffic through specific ports, as well as from specific networks and IP addresses. Once you've finished configuring the firewall, we can enable it:

sudo ufw enable
Firewall is active and enabled on system startup

Just as the output suggests, our firewall is active and will start up automatically whenever we reboot the server.
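As your configuration grows, it's worth reviewing and pruning rules from time to time. One approach is to list the current rules with index numbers and then delete a rule by its number; the number shown here is just an example, so check the listing first:

sudo ufw status numbered
sudo ufw delete 2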

UFW is basically an easy-to-use frontend to the iptables firewall, and it acts as the default firewall configuration tool for Ubuntu. The commands we've executed so far in this section are translated into iptables rules behind the scenes; iptables is the lower-level command that administrators can use to set up a firewall manually. A full walkthrough of iptables is outside the scope of this chapter, and it's largely unnecessary, since Ubuntu features UFW as its preferred firewall administration tool and it's the tool you should use when administering a firewall on your Ubuntu server.
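If you're curious about what UFW is doing behind the scenes, you can peek at the rules it has generated:

sudo iptables -L -n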

With a well-planned firewall implementation, you can better secure your Ubuntu Server installation from outside threats. Preferably, each port you open should only be accessible from specific machines, with the exception being servers that are meant to serve data or resources to external networks. Like all security solutions, a firewall won’t make your server invincible, but it does represent an additional layer that attackers would have to bypass in order to do harm.

If your company stores sensitive information, it’s important to ensure the storage underneath that data is encrypted. Next, we’re going to look at Linux Unified Key Setup (LUKS), which will help us encrypt and decrypt disks.

Encrypting and decrypting disks with LUKS

An important aspect of security that many people don’t even think about is encryption. As I’m sure you know, backups are essential for business continuity. If a server breaks down, or a resource stops functioning, backups will be your saving grace. But what happens if your backup medium gets stolen or somehow falls into the wrong hands? If your backup is not encrypted, then anyone will be able to view its contents. Some data isn’t sensitive, so encryption isn’t always required. But anything that contains personally identifiable information, company secrets, or anything else that would cause any kind of hardship if leaked should be encrypted. In this section, I’ll walk you through setting up LUKS encryption on an external backup drive.

Before we get into that though, I want to quickly mention the importance of full-disk encryption for the operating system itself as well. Although this section is going to go over how to encrypt external disks, it's possible to encrypt the volume that holds your entire Linux installation too. In the case of Ubuntu, full-disk encryption is an option during installation, for both the server and workstation flavors. This is especially important when it comes to mobile devices, such as laptops, which are stolen quite frequently. If a laptop will store confidential data that you cannot afford to have leaked, you should choose the option during installation to encrypt your entire Ubuntu installation. If you don't, anyone who knows how to boot a live OS disc and mount a hard drive will be able to view your data. I've seen unencrypted company laptops get stolen before, and it's not a wonderful experience.

Anyway, back to the topic of encrypting external volumes. For the purpose of encrypting disks, we’ll need to install the cryptsetup package:

sudo apt install cryptsetup

The cryptsetup utility allows us to encrypt and decrypt disks. To continue, you'll need an external disk you can safely format, as setting up encryption will remove any data stored on it. This can be an external hard disk or a flash drive; both are treated exactly the same way. In addition, you can use this same process to encrypt a secondary internal hard disk attached to your virtual machine or server. I'm assuming that you don't care about the contents saved on the drive, because the process of setting up encryption will wipe it.

If you’re using an external disk, use the fdisk -l command as root or the lsblk command to view a list of hard disks attached to your computer or server before you insert it. After you insert your external disk or flash drive, run the command again to determine the device designation for your removable media.

In my examples, I used /dev/sdb, but you should use whatever designation your device was given. This is important, because you don’t want to wipe out your root partition or an existing data partition!

First, we’ll need to use cryptsetup to format our disk:

sudo cryptsetup luksFormat /dev/sdb

You’ll receive the following warning:

WARNING!
========
This will overwrite data on /dev/sdb irrevocably.
Are you sure? (Type uppercase yes):

Type YES and press Enter to continue. Next, you’ll be asked for the passphrase. This passphrase will be required in order to unlock the drive. Make sure you use a good, randomly generated password and that you store it somewhere safe. If you lose it, you will not be able to unlock the drive. You’ll be asked to confirm the passphrase.

Once the command completes, we can format our encrypted disk. At this point, it has no filesystem, so we’ll need to create one. First, open the disk with the following command:

sudo cryptsetup luksOpen /dev/sdb backup_drive

The backup_drive name can be anything you want; it’s just an arbitrary name you can refer to the disk as. At this point, the disk will be attached to /dev/mapper/disk_name, where disk_name is whatever you called your disk in the previous command (in my case, backup_drive). Next, we can format the disk. The following command will create an ext4 filesystem on the encrypted disk:

sudo mkfs.ext4 -L "backup_drive" /dev/mapper/backup_drive

The -L option allows us to add a label to the drive, so feel free to change that label to whatever you prefer to name the drive.
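We'll also need a directory to serve as the mount point for the disk; if it doesn't exist yet, create it now (the path here matches the example that follows):

sudo mkdir -p /media/backup_drive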

With the formatting out of the way, we can now mount the disk:

sudo mount /dev/mapper/backup_drive /media/backup_drive

The mount command will mount the encrypted disk located at /dev/mapper/backup_drive and attach it to a mount point, such as /media/backup_drive in my example. The target mount directory must already exist. With the disk mounted, you can now save data onto the device as you would any other volume. When finished, you can unmount the device with the following commands:

sudo umount /media/backup_drive
sudo cryptsetup luksClose /dev/mapper/backup_drive

First, we unmount the volume just like we normally would. Then, we tell cryptsetup to close the volume. To mount it again, we would issue the following commands:

sudo cryptsetup luksOpen /dev/sdb backup_drive
sudo mount /dev/mapper/backup_drive /media/backup_drive

The first of those commands should prompt you for your passphrase. If successful, you can use the second of those commands to mount the volume.

If we wish to change the passphrase, we can use the following command. The disk must not be mounted or open in order for this to work:

sudo cryptsetup luksChangeKey /dev/sdb -S 0

The command will ask you for the current passphrase (the -S 0 option targets key slot 0, which is the default slot), and then the new passphrase twice.

Keep in mind that you should absolutely be careful typing in the new passphrase, so that you don’t lock yourself out of the drive.
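One way to reduce the risk of locking yourself out is to add a second passphrase to a spare key slot, for example a recovery passphrase that you keep in a password manager. Using the same device as before, the command would be:

sudo cryptsetup luksAddKey /dev/sdb

You'll be prompted for an existing passphrase first, and then for the new one.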

That’s basically all there is to it. With the cryptsetup utility, you can set up your own LUKS-encrypted volumes for storing your most sensitive information. If the disk ever falls into the wrong hands, it won’t be as bad a situation as it would have been if the disk had been unencrypted. Breaking a LUKS-encrypted volume would take considerable effort that wouldn’t be feasible.

In the next section, we’ll explore how we can lock down sudo. Since sudo is an essential command that gives us the ability to run tasks as other users, we’ll want to be sure to lock that down too.

Locking down sudo

We’ve been using the sudo command throughout the book. In fact, we took a deeper look at it in Chapter 2, Managing Users and Permissions. Therefore, I won’t go into too much detail regarding sudo here, but some things bear repeating as sudo has a direct impact on security.

First and foremost, access to sudo should be locked down as much as possible. A user with full sudo access is a threat, plain and simple. All it would take is for someone with full sudo access to make a single mistake with the rm command to cause you to lose data or render your entire server useless. After all, a user with full sudo access can do anything root can do (which is everything).

By default, the user you’ve created during installation will be made a member of the sudo group. Members of this group have full access to the sudo command. Therefore, you shouldn’t make any users a member of this group unless you absolutely have to. In Chapter 2, Managing Users and Permissions, I talked about how to control access to sudo with the visudo command; refer to that chapter for a refresher if you need it. In a nutshell, you can lock down access to sudo to specific commands, rather than allowing your users to do everything. For example, if a user needs access to shut down or reboot a server, you can give them access to perform those tasks (and only those tasks) with the following setting:

charlie    ALL=(ALL:ALL) /usr/sbin/reboot,/usr/sbin/shutdown

For the most part, if a user needs access to sudo, just give them access to the specific commands that are required as part of their job. If a user needs access to work with removable media, give them sudo access for the mount and umount commands. If they need to be able to install new software, give them access to the apt suite of commands, and so on. The fewer permissions you give a user, the better. This goes all the way back to the principle of least privilege that we went over near the beginning of this chapter.
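As a sketch of what that might look like in practice (the username is hypothetical, and you should confirm the command paths on your own system, for example with which mount), the corresponding entries could resemble the following:

# Allow dscully to mount and unmount removable media
dscully    ALL=(ALL:ALL) /usr/bin/mount,/usr/bin/umount
# Allow dscully to install and update packages
dscully    ALL=(ALL:ALL) /usr/bin/apt

You can then double-check what a given user is actually allowed to run with sudo -l -U dscully.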

Although most of the information in this section is not new to anyone who has already read Chapter 2, Managing Users and Permissions, sudo access is one of those things a lot of people don’t think about when it comes to security. The sudo command with full access is equivalent to giving someone full access to the entire server. Therefore, it’s an important thing to keep in mind when it comes to hardening the security of your network.

Summary

In this chapter, we looked at the ways in which we can harden the security of our server. A single chapter or book can never give you an all-inclusive list of all the security settings you could possibly configure, but the examples we worked through in this chapter are a great starting point. Along the way, we looked at the concepts of lowering your attack surface, as well as the principle of least privilege. We also looked into securing OpenSSH, which is a common service that many attackers will attempt to use in their favor.

We also looked at Fail2ban, a handy daemon that can block other hosts after a certain number of authentication failures, and we discussed configuring our firewall using the UFW utility. Since data theft is unfortunately common, we also covered encrypting our backup disks.

In the next chapter, we’ll take a look at troubleshooting our server when things go wrong.

Join our community on Discord

Join our community’s Discord space for discussions with the author and other readers:

https://packt.link/LWaZ0
