12

Sharing and Transferring Files

In the previous chapter, we looked at what's involved in the process of setting up a few network services, such as DHCP and DNS. Those are two important components of a network, but there are quite a few different types of applications and resources you can make available on your network to further enhance it. A file server is one such example, which can give your users a central place to store critical files and can even enhance collaboration.

Perhaps you've used a file server before, or even set one up on a different platform. With Ubuntu Server, there are multiple methods to not only store files, but also to transfer files from one node to another over a network link. In this chapter, we'll look into setting up a central file server using both Samba and NFS, as well as how to transfer files between nodes with utilities such as scp and rsync. We'll also go over some situations in which one solution may work better for you than another. As we go through these concepts, we will cover the following topics:

  • File server considerations
  • Sharing files with Windows users via Samba
  • Setting up NFS shares
  • Transferring files with rsync
  • Transferring files with scp
  • Mounting remote directories with SSHFS

Before we can get started with configuring our server to enable it to share files with other users, we should first understand what our available options are to enable us to choose the best technology for our use case.

File server considerations

When it comes to setting up a file server, the process is a matter of setting up some sort of daemon to accept connections, sharing specific directories, and ensuring that the appropriate users are able to access them. You'll also implement permissions to determine who can access specific directories, and what type of access they will have (read/write, read-only, and so on). When deciding how to share the files, it's generally a choice between two common technologies that can facilitate the actual sharing: Samba and NFS.

All in all, there's nothing stopping you from hosting both Samba and NFS shares on a single server. The two technologies can actually co-exist on the same device. However, each of the two popular solutions is valid for particular use cases. Before we get started with setting up our file server, we should first understand the differences between Samba and NFS, so we can make an informed decision as to which one is more appropriate for our environment. As a general rule of thumb, Samba is great for mixed environments (where you have Windows as well as Linux clients), and NFS is more appropriate for use in Linux or Unix environments, but there's a bit more to it than that.

Samba is a great solution for many environments, because it allows you to share files with Windows, Linux, and macOS machines. Basically, pretty much everyone will be able to access your shares, provided you give them permission to do so. The reason this works is because Samba is a re-implementation of the Server Message Block (SMB) protocol, which is primarily used by Windows systems. However, you don't need to use the Windows platform in order to be able to access Samba shares, since many platforms offer support for this protocol. Even Android phones are able to access Samba file shares with the appropriate app, as can many other platforms.

You may be wondering why I am going to cover two different solutions in this chapter. After all, if Samba shares can be accessed by pretty much everything and everyone, why bother with anything else? Even with Samba's many strengths, it has weaknesses as well. First of all, Samba handles permissions very differently than standard Linux permissions, so you'll need to configure your shares in specific ways in order to prevent access to users that shouldn't be able to see confidential data. NFS, on the other hand, fully supports standard UNIX permissions, so you'll only need to configure your permissions once. If permissions and confidentiality are important to you, you may want to look closer at NFS.

It's also not accurate to say that Windows systems cannot access NFS shares, because some versions actually can. By default, no version of Windows supports NFS outright, but some editions offer a plugin you can install that enables this support. The name of the NFS plugin in Windows has changed from one version to another (such as Services for UNIX, Subsystem for UNIX-based Applications, NFS Client, and most recently, Windows Subsystem for Linux) but the idea is the same. In the past, Microsoft required a more expensive Windows license on your laptop or desktop to allow the installation of the NFS client, but that's no longer the case. The Windows Subsystem for Linux can be installed on any version of Windows 10, so this licensing restriction only comes into play on older versions. If you do have legacy Windows machines in use (which is becoming increasingly rare), using NFS may actually increase costs. In that situation, Samba is a clear winner.

Regarding an all-Linux environment or a situation where you only have Linux machines that need to access your shares, NFS is a great choice because it integrates much more tightly with the rest of the distribution. Permissions can be more easily enforced and, depending on your hardware, performance may be higher. The specifics of your computing environment will ultimately make your decision for you. Perhaps you'll choose Samba for your mixed environment, or NFS for your all-Linux environment. Maybe you'll even set up both NFS and Samba, having shares available for each platform. My recommendation is to learn and practice both, since you'll use both solutions at one point or another during your career anyway.

Before you continue to the sections on setting up Samba and NFS, I recommend you first decide where in your filesystem you'd like to act as a parent directory for your file shares. This isn't actually required, but I think it makes for better organization. There is no one right place to store your shares, but personally I like to create a /share directory at the root filesystem and create sub-directories for my network shares within it. For example, I can create /share/documents, /share/public, and so on for Samba shares. With regard to NFS, I usually create shared directories within /exports. You can choose how to set up your directory structure. As you read the remainder of this chapter, make sure to change my example paths to match yours if you use a different style.

Sharing files with Windows users via Samba

In this section, I'll walk you through setting up your very own Samba file server. I'll also go over a sample configuration to get you started so that you can add your own shares. One feature that Samba supports is integration with Active Directory, but that's outside the scope of this book as that's a feature specific to Windows Server.

I mention that here because our Samba implementation will be relatively wide open, which is a bad practice. With Active Directory, you can have more effective control over user access. But to keep it simple, we'll create a simple Samba server to get you started, and then from there, you can research more complex implementations if it makes sense to do so for your organization.

First, we'll need to make sure that the samba package is installed on our server:

sudo apt install samba 

When you install the samba package, you'll have a new daemon installed on your server, smbd. The smbd daemon will be automatically started and enabled for you. You'll also be provided with a default configuration file for Samba, located at /etc/samba/smb.conf. For now, I recommend stopping samba since we have yet to configure it:

sudo systemctl stop smbd 

Since we're going to configure Samba from scratch, we should start with an empty configuration file. Let's back up the original file, rather than overwrite it. The default file includes some useful notes and samples, so we should keep it around for future reference:

sudo mv /etc/samba/smb.conf /etc/samba/smb.conf.orig

Now, we can begin a fresh configuration. Although it's not required, I like to split my Samba configuration up between two files, /etc/samba/smb.conf and /etc/samba/smbshared.conf. You don't have to do this, but I think it makes the configuration cleaner and easier to read. First, here is a sample /etc/samba/smb.conf file:

[global] 
server string = File Server 
workgroup = WORKGROUP 
security = user 
map to guest = Bad User 
name resolve order = bcast wins 
include = /etc/samba/smbshared.conf 

As you can see, this is a really short file. Basically, we're including only the lines we absolutely need to in order to set up a file server with Samba. Next, I'll explain each line and what it does.

[global] 

With the [global] stanza, we're declaring the global section of our configuration, which will consist of settings that will impact Samba as a whole. There will also be additional stanzas for individual shares, which we'll get to later.

server string = File Server 

The server string is essentially a description field for the server. If you've browsed networks from Windows computers before, you may have seen this field. Whatever you type here will display underneath the server's name in Windows Explorer. This isn't required, but it's nice to have.

workgroup = WORKGROUP 

Here, we're setting the workgroup, which is the exact same thing as a workgroup on Windows PCs. In short, the workgroup is a namespace that describes a group of machines. When browsing network shares on Windows systems, you'll see a list of workgroups, and then one or more computers within that workgroup. In short, this is a way to logically group your nodes. You can set this to whatever you like. If you already have a workgroup in your organization, you should set it here to match the workgroup names of your other machines. The default workgroup name is simply WORKGROUP on Windows PCs, if you haven't customized the workgroup name at all.

security = user 

This setting sets up Samba to utilize usernames and passwords for authentication to the server. Here, we're setting the security mode to user, which means we're using local users to authenticate, rather than other options such as ads (Active Directory) or domain (domain controller), which are both outside the scope of this book.

map to guest = Bad User 

This option configures Samba to treat unauthenticated users as guest users. Basically, unauthenticated users will still be able to access shares, but they will have guest permissions instead of full permissions. If that's not something you want, then you can omit this line from your file. Note that if you do omit this, you'll need to make sure that both your server and client PCs have the same user account names on either side. Ideally, we want to use directory-based authentication, but that's beyond the scope of this book.

name resolve order = bcast wins 

The name resolve order setting configures how Samba resolves hostnames. In this case, we're trying broadcast name resolution first, followed by WINS. Since WINS has been pretty much abandoned (and replaced by DNS), we include it here solely for compatibility with legacy networks.

include = /etc/samba/smbshared.conf 

Remember how I mentioned that I usually split my Samba configurations into two different files? On this line, I'm calling that second /etc/samba/smbshared.conf file. The contents of the smbshared.conf file will be inserted right here, as if we only had one file. We haven't created the smbshared.conf file yet. Let's take care of that next. Here's a sample smbshared.conf file:

[Documents] 
path = /share/documents 
force user = myuser 
force group = users 
public = yes 
writable = no 
 
[Public] 
path = /share/public 
force user = myuser 
force group = users 
create mask = 0664 
force create mode = 0664 
directory mask = 0777 
force directory mode = 0777 
public = yes 
writable = yes 

As you can see, I'm separating share declarations in their own file. We can see several interesting things within smbshared.conf. First, we have two stanzas, [Documents] and [Public]. Each stanza is a share name, which will allow Windows users to access the share under //servername/share-name. In this case, this file will give us two shares: //servername/Documents and //servername/Public. The Public share is writable for everyone, though the Documents share is restricted to read only. The Documents share has the following options:

path = /share/documents 

This is the path to the share, which must exist on the server's filesystem. In this case, when a user reads files from //servername/Documents on a Windows system, they will be reading data from /share/documents on the Ubuntu server that's housing the share.

force user = myuser 
force group = users 

These two lines are basically bypassing user ownership. When a user reads this share, they are treated as myuser instead of their actual user account. Normally, you would want to set up LDAP or Active Directory to manage your user accounts and handle their mapping to Ubuntu Server, but a full discussion of directory-based user access is beyond the scope of this book, so I provided the force options as an easy starting point. The user account you set here must exist on the server.

public = yes 
writable = no 

With these two lines, we're configuring what users are able to do once they connect to this share. In this case, public = yes means that the share is publicly available, though writable = no prevents anyone from making changes to the contents of this share. This is useful if you want to share files with others, but you want to restrict access and stop anyone from being able to modify the content.

The Public share has some additional settings that weren't found in the Documents share:

create mask = 0664 
force create mode = 0664 
directory mask = 0777 
force directory mode = 0777 

With these options, I'm setting up how the permissions of files and directories will be handled when new content is added to the share. Directories will be given 777 permissions and files will be given permissions of 664. Yes, these permissions are very open; note that the share is named Public, which implies full access anyway, and its intended purpose is to house data that isn't confidential or restricted:

public = yes 
writable = yes 

Just as I did with the previous share, I'm setting up the share to be publicly available, but this time I'm also configuring it to allow users to make changes.
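If the octal modes above are unfamiliar, here's a quick, self-contained illustration (using throwaway temporary paths, nothing from the actual share) of what 0664 and 0777 look like on the filesystem:

```shell
# Illustrate the 0664 and 0777 modes used in the Public share (temp paths only)
tmp=$(mktemp -d)

touch "$tmp/example.txt"
chmod 0664 "$tmp/example.txt"   # rw-rw-r--: owner and group can write, others can read

mkdir -m 0777 "$tmp/subdir"     # rwxrwxrwx: everyone can enter, list, and write

ls -l "$tmp"
rm -r "$tmp"
```

The ls -l output will show rw-rw-r-- for the file and rwxrwxrwx for the directory, which is exactly what newly created content within the Public share will receive.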

To take advantage of this configuration, we need to start the Samba daemon. Before we do though, we want to double-check that the directories we entered into our smbshared.conf file exist, so if you're using my example, you'll need to create /share/documents and /share/public:

sudo mkdir -p /share/documents /share/public

Also, the user account that was referenced in the force user and the group referenced in the force group must both exist and have ownership over the shared directories:

sudo chown -R myuser:users /share

At this point, it's a good idea to use the testparm command, which will test the syntax of our Samba configuration files for us:

testparm

It won't necessarily catch every mistake we could have made, but it's a quick and easy sanity check. The command will first check the syntax, and then print the entire parsed configuration to the terminal to give you a chance to review it. If you see no errors here, then you can proceed to start the service:

sudo systemctl start smbd 

Then, check the status to ensure that the daemon is running:

sudo systemctl status smbd

This will produce output similar to the following:

Figure 12.1: Checking the status of the smbd daemon

That really should be all there is to it; you should now have Documents and Public shares on your file server that Windows users should be able to access. In fact, your Linux machines should be able to access these shares as well. On Windows, Windows Explorer has the ability to browse file shares on your network. If in doubt, try pressing the Windows key and the r key at the same time to open the Run dialog box, and then type the Universal Naming Convention (UNC) path to the share (\\servername\Documents or \\servername\Public). You should be able to see any files stored in either of those directories. In the case of the Public share, you should be able to create new files there as well.

On Linux systems, if you have a desktop environment installed, most of them feature a file manager that supports browsing network shares. Since there are a handful of different desktop environments available, the method varies from one distribution or configuration to another. Typically, most Linux file managers will have a network link within the file manager, which will allow you to easily browse your local shares:

Figure 12.2: Browsing a Samba share from a Linux client

If your file manager doesn't show you the available shares on your server, you can also access a Samba share by adding an entry for it in the /etc/fstab file, such as the following:

//myserver/Documents /mnt/documents cifs username=myuser,noauto 0 0

In order for the fstab entry to work, your Linux client will need to have the Samba client packages installed. If your distribution is Debian-based (such as Ubuntu), you will need to install the smbclient and cifs-utils packages:

sudo apt install smbclient cifs-utils 

Then, assuming the local directory exists (/mnt/documents in the example fstab line), you should be able to mount the share with the following command:

sudo mount /mnt/documents 

In the fstab entry, I included the noauto option so that your system won't mount the Samba share at boot time (you'll need to do so manually with the mount command). If you do want the Samba share automatically mounted at boot time, change noauto to auto. However, you may receive errors during boot if for some reason the server hosting your Samba shares isn't accessible, which is why I prefer the noauto option.

If you'd prefer to mount the Samba share without adding an fstab entry, the following example command should do the trick; just change the share name and mount point to match your local configuration:

sudo mount -t cifs //myserver/Documents /mnt/documents -o username=myuser

From here, feel free to experiment. You can add additional shares as appropriate, and customize your Samba implementation as you see fit. In the next section, we'll explore NFS.

Setting up NFS shares

An alternative to Samba is the Network File System (NFS). It's a great method of sharing files from a Linux or Unix server to Linux or Unix clients. As I mentioned earlier in the chapter, Windows systems can access NFS shares as well, but that requires an add-on to be enabled. Therefore, NFS is preferred in a Linux or Unix environment, since it fully supports Linux- and Unix-style permissions. As you can see from our dive into Samba earlier, we essentially forced all shares to be treated as being accessed by a particular user, which was messy, but was the easiest example of setting up a Samba server without also walking you through setting up a Windows Active Directory controller. Samba can certainly support per-user access restrictions and benefits greatly from a centralized directory server, though that would basically be a book of its own! NFS is a bit more involved to set up, but in the long run, I think it's easier and integrates better in a non-mixed environment.

Earlier, we set up a parent directory in our filesystem to house our Samba shares, and we should do the same thing with NFS. While a dedicated parent directory wasn't mandatory with Samba (I had you create one for the sake of neatness, but you weren't required to), NFS really does want its own directory to house all of its shares. It's not strictly required with NFS either, but there's an added benefit in doing so, which I'll go over before the end of this section. In my case, I'll use /exports as an example, so you should make sure that this directory, or whatever you've chosen for NFS, exists:

sudo mkdir /exports 

Next, let's install the required NFS packages on our server. The following command will install NFS and its dependencies:

sudo apt install nfs-kernel-server 

Once you install the nfs-kernel-server package, the nfs-kernel-server daemon will start up automatically. It will also create a default /etc/exports file (which is the main file that NFS reads its share information from), but it doesn't contain any useful settings, just some commented lines. Let's back up the /etc/exports file, since we'll be creating our own:

sudo mv /etc/exports /etc/exports.orig 

To set up NFS, let's first create some directories that we will share with other users. Each share in NFS is known as an export. I'll use the following directories as examples, but you can export any directory you like:

/exports/backup 
/exports/documents 
/exports/public 
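Just as with the Samba shares earlier, these directories need to exist on the filesystem before NFS can export them. A single mkdir -p call (which creates parent directories as needed) handles all three:

```shell
# Create the example export directories; -p also creates /exports if missing
sudo mkdir -p /exports/backup /exports/documents /exports/public
```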

In the /etc/exports file (which we're creating fresh), I'll insert the following four lines:

/exports *(ro,fsid=0,no_subtree_check) 
/exports/backup 192.168.1.0/255.255.255.0(rw,no_subtree_check) 
/exports/documents 192.168.1.0/255.255.255.0(ro,no_subtree_check) 
/exports/public 192.168.1.0/255.255.255.0(rw,no_subtree_check) 

The first line is the export root, which I'll go over a bit later. The next three lines are individual shares, or exports. The backup, documents, and public directories are being shared from the /exports parent directory. Each of these lines is not only specifying which directory is being shared with each export, but also which network is able to access them.

In this case, after the directory is called out in a line, we're also setting which network is able to access them (192.168.1.0/255.255.255.0 in our case). This means that if you're connecting from a different network, your access will be denied. Each connecting machine must be a member of the 192.168.1.0/24 network in order to proceed (so make sure you change this to match your IP scheme). Finally, we include some options for each export, for example, rw,no_subtree_check.
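As a side note, the exports file also accepts CIDR prefix notation in place of a full netmask; this hypothetical variant of the public export is equivalent to the 255.255.255.0 form above:

```
/exports/public 192.168.1.0/24(rw,no_subtree_check)
```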

As far as what these options do, the first (rw) is rather self-explanatory. Here, we can set whether or not other nodes will be able to make changes to data within the export. In the examples I gave, the /documents export is read-only (ro), while the others allow read and write.

The next option in each example is no_subtree_check. This option disables what is known as subtree checking, which has had some stability issues in the past. Normally, when a directory is exported, NFS might scan parent directories as well, which is sometimes problematic, and can cause issues when it comes to open file handles. Disabling subtree checking is actually the default behavior in recent versions of NFS, but if you leave the option out of an export line, NFS will print a warning when it restarts; that's nothing that will actually stop it from working, but including the option explicitly keeps the output clean and your intent unambiguous.

There are several other options that can be included in an export, and you can read more about them by checking the man page for exports:

man exports

One option you'll see quite often in the wild is no_root_squash. Normally, the root user on one system is mapped to nobody on the other for security reasons. In most cases, one system having root access to another is a bad idea. The no_root_squash option disables this, and it allows the root user on one end to be treated as the root user on the other. I can't think of a reason, personally, where this would be useful (or even recommended), but I have seen it used often enough that I figured I would bring it up. Again, check the man page for exports for more information on additional options you can pass to your exports.
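Purely for illustration, here's what the backup export from our example would look like with root squashing disabled (again, this isn't something I recommend without a specific reason):

```
/exports/backup 192.168.1.0/255.255.255.0(rw,no_subtree_check,no_root_squash)
```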

Next, we have one more file to edit before we can actually seal the deal on our NFS setup. The /etc/idmapd.conf file is necessary for mapping permissions on one node to another. In Chapter 2, Managing User Permissions, we talked about the fact that each user has an ID (UID) assigned to them. The problem, though, is that from one system to another, a user will not typically have the same UID. For example, user jdoe may be UID 1001 on server A, but 1007 on server B. When it comes to NFS, this greatly confuses the situation, because UIDs are used in order to reference permissions. Mapping IDs with idmapd allows this to stay consistent and handles translating each user properly, though it must be configured correctly and consistently on each node. Basically, as long as you use the same domain name on each server and client and configure the /etc/idmapd.conf file properly on each, you should be fine.

To configure this, open /etc/idmapd.conf in your text editor. Look for an option that is similar to the following:

# Domain = localdomain 

First, remove the # symbol from that line to uncomment it. Then, change the domain to match the one used within the rest of your network. You can leave this as it is as long as it's the same on each node, but if you recall from Chapter 11, Setting Up Network Services, we used a sample domain of local.lan in our DHCP configuration, so it's best to make sure you use the same domain name everywhere—even the domain provided by DHCP. Basically, just be as consistent as you can and you'll have a much easier time overall. You'll also want to edit the /etc/idmapd.conf file on each node that will access your file server, to ensure they are configured the same as well.
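Once edited, the relevant portion of /etc/idmapd.conf would look something like the following; I'm using the local.lan sample domain from the previous chapter, so substitute your own domain name:

```
[General]
Domain = local.lan
```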

With our /etc/exports and /etc/idmapd.conf files in place, and assuming you've already created the exported directories on your filesystem, we should be all set to restart NFS to activate our configuration:

sudo systemctl restart nfs-kernel-server 

After restarting NFS, you should check the daemon's output via systemctl to ensure that there are no errors:

systemctl status nfs-kernel-server 

As long as there are no errors, our NFS server should be working. Now, we just need to learn how to mount these shares on another system. Unlike Samba, using a Linux file manager and browsing the network will not show NFS exports by default; we'll need to mount them manually. Client machines, assuming they are Debian-based (Ubuntu fits this description), will need the nfs-common package installed in order to access these exports. It may already be installed, but if it's not, we can install it with apt like any other package:

sudo apt install nfs-common 
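As an optional sanity check before mounting anything, the showmount utility (installed as part of nfs-common) can query a server for its list of exports. Here, myserver is a placeholder for your file server's hostname or IP address:

```shell
# List the exports a remote NFS server is offering (myserver is a placeholder)
showmount -e myserver
```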

With the client installed, we can now use the mount command to mount NFS exports on a client. For example, with regard to our documents export, the following variation of the mount command will do the trick:

sudo mount myserver:/documents /mnt/documents 

Replace myserver with either your server's hostname or its IP address, documents with the name of the actual share on the server, and /mnt/documents with the path on your local server where you want to mount the share. From this point forward, you should be able to access the contents of the documents export on your file server. Notice, however, that the exported directory on the server was /exports/documents, but we only asked for /documents instead of the full path with the example mount command. The reason this works is because we identified an export root of /exports. To save you from flipping back, here's the first line from the /etc/exports file, where we identified our export root:

/exports *(ro,fsid=0,no_subtree_check) 

With the export root, we basically set the base directory for our NFS exports. We set it as read-only (ro), because we don't want anyone making any changes to the /exports directory itself. Other directories within /exports have their own permissions and will thus override the ro setting on a per-export basis, so there's no real reason to set our export root as anything other than read-only. With our export root set, we don't have to call out the entire path of the export when we mount it; we only need the directory name. This is why we can mount an NFS export from myserver:/documents instead of having to type the entire path. While this does save us a bit of typing, it's also useful because from the user's perspective, they aren't required to know anything about the underlying filesystem on the server. There's simply no value in the user having to memorize the fact that the server is sharing a documents directory from /exports; all they're interested in is getting to their data. Another benefit is that if we ever need to move our export root to a different directory (during a maintenance period), our users won't have to change their configuration to reference the new place; they'll only need to unmount and remount the exports.

So, at this point, you'll have three directories being exported from your file server, and you can always add others as you go. However, whenever you add a new export, it won't be automatically added and read by NFS. You can restart NFS to activate new exports, but that's not really a good idea while users may be connected to them, since that will disrupt their access. Thankfully, the following command will cause NFS to reread the /etc/exports file without disrupting existing connections. This will allow you to activate new exports immediately without having to wait for users to finish what they're working on:

sudo exportfs -a 
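Relatedly, if you ever want to confirm which exports are currently active, along with the options applied to each, exportfs can print the live list:

```shell
# Show the currently active NFS exports and their effective options
sudo exportfs -v
```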

With this section out of the way, you should be able to export a directory from your Ubuntu Server, and then mount that export on another Linux machine. Feel free to practice creating and mounting exports until you get the hang of it. In addition, you should familiarize yourself with a few additional options and settings that are allowable in the /etc/exports file, after consulting the man page on exports.

When you've had more NFS practice than you can tolerate, we'll move on to a few ways in which you can copy files from one node to another without needing to set up an intermediary service or daemon.

Transferring files with rsync

Of all the countless tools and utilities available in the Linux and Unix world, few are as beloved as rsync. rsync is a utility that you can use to copy data from one place to another very easily, and there are many options available to allow you to be very specific about how you want the data to be transferred. Examples of its many use cases include copying files while preserving permissions, copying files while backing up replaced files, and even setting up incremental backups. If you don't already know how to use rsync, you'll probably want to get lots of practice with it; as you'll soon see, it's indispensable during a career as a Linux administrator, and it's also something that the Linux community generally assumes you already know. rsync is not hard to learn. Most administrators can pick up the basic usage in an hour or less, but the countless options available will lead you to learn new tricks even years down the road.

Another aspect that makes rsync flexible is the many ways you can manipulate the source and target directories. I mentioned earlier that rsync is a tool you can use to copy data from one place to another. The beauty of this is that the source and target can literally be anywhere you'd like. For example, the most common usage of rsync is to copy data from a directory on one server to a directory on another server over the network. However, you don't even have to use the network; you can even copy data from one directory to another on the same server. While this may not seem like a useful thing to do at first, consider that the target directory may be a mount point that leads to a backup disk, or an NFS share that actually exists on another server. This also works in reverse: you can copy data from a network location to a local directory if you desire.

To get started with rsync, I recommend that you find some sample files to work with. Perhaps you have a collection of documents you can use, MP3 files, videos, text files; basically, any kind of data you have lying around. It's important to work with a copy of this data: if we make a mistake, we could overwrite things, so it's best to use a copy, or data you don't care about, while you're practicing. If you don't have any files to work with, you can create some text files. The idea is to practice copying files from one place to another; it really doesn't matter what you copy or where you send it to. I'll walk you through some rsync examples that will progressively increase in complexity. The first few examples will show you how to back up a home directory, but later examples will be potentially destructive, so you will probably want to work with sample files until you get the hang of it.

Here's our first example:

sudo rsync -r /home/myuser /backup 

With that command, we're using rsync (as root) to copy the home directory of the user myuser to a backup directory, /backup (make sure the target directory exists). In the example, I used the -r option, which tells rsync to copy directories recursively as well. You should now see a copy of the /home/myuser directory inside your /backup directory.

However, we have a bit of a problem. If you look at the permissions in the /backup/myuser directory, you can see that everything in the target is now owned by root. This isn't a good thing; when you back up a user's home directory, you'll want to retain their permissions. In addition, you should retain as much metadata as you can, including things like timestamps. Let's try another variation of rsync. Don't worry about the fact that /backup already has a copy of the myuser home directory from our previous backup. Let's perform the backup again, but this time, we'll use the -a option:

sudo rsync -a /home/myuser /backup

This time, we replaced the -r option with -a (archive), which retains as much metadata as possible (in most cases, it should make everything an exact copy). What you should notice now is that the permissions within the backup match the permissions within the user's home directory we copied from. The timestamps of the files will now match as well. This works because whenever rsync runs, it will copy what's different from the last time it ran. The files from our first backup were already there, but the permissions were wrong. When we ran the second command, rsync only needed to copy what was different, so it applied the correct permissions to the files. If any new files were added to the source directory since we last ran the command, the new or updated files would be copied over as well.

The archive mode (the -a option that we used with the previous command) is very popular; you'll see it a lot during your travels. The -a option is actually a wrapper option that enables the following options all at the same time:

-rlptgoD 

If you're curious about what each of these options does, consult the man page for rsync for more detailed information. In summary, the -r option copies data recursively (which we already know), the -l option copies symbolic links, -p preserves permissions, -t preserves timestamps, -g preserves group ownership, -o preserves the owner, and -D preserves device files. If you put those options together, we get -rlptgoD. Therefore, -a is actually equal to -rlptgoD. I find -a easier to remember.

The archive mode is great and all, but wouldn't it be nice to be able to watch what rsync is up to when it runs? Add the -v option and try the command again:

sudo rsync -av /home/myuser /backup 

This time, rsync will display on your terminal what it's doing as it runs (-v activates verbose mode). This is actually one of my favorite variations of the rsync command, as I like to copy everything and retain all the metadata, as well as watch what rsync is doing as it works.

What if I told you that rsync supports SSH by default? It's true! Using rsync, you can easily copy data from one node to another, even over SSH. The same options apply, so you don't actually have to do anything different other than point rsync to the other server, rather than to another directory on your local server:

sudo rsync -av /home/myuser admin@192.168.1.5:/backup

With this example, I'm copying the home directory for myuser to the /backup directory on server 192.168.1.5. I'm connecting to the other server as the admin user. Make sure you change the user account and IP address accordingly, and also make sure the user account you use has access to the /backup directory. When you run this command, you should get prompted for the SSH password as you would when using plain SSH to connect to the server. After the connection is established, the files will be copied to the target server and directory.

Now, we'll get into some even cooler examples (some of which are potentially destructive), and we probably won't want to work with an actual home directory for these, unless it's a test account and you don't care about its contents. As I've mentioned before, you should have some test files to play with. When practicing, simply replace my directories with yours. Here's another variation worth trying:

sudo rsync -av --delete /src /target 

Now I'm introducing you to the --delete option. This option allows you to synchronize two directories. Let me explain why this is important. With every rsync example up until now, we've been copying files from point A to point B, but we weren't deleting anything. For example, let's say you've already used rsync to copy contents from point A to point B. Then, you delete some files from point A. When you use rsync to copy files from point A to point B again, the files you deleted in point A won't be deleted in point B. They'll still be there. This is because by default, rsync copies data between two locations, but it doesn't remove anything. With the --delete option, you're effectively synchronizing the two points, thus you're telling rsync to make them the same by allowing it to delete files in the target that are no longer in the source.

Next, we'll add the -b (backup) option:

sudo rsync -avb --delete /src /target 

This one is particularly useful. Normally, when a file is updated on /src and then copied over to /target, the copy on /target is overwritten with the new version. But what if you don't want any files to be replaced? The -b option renames files on the target that are being overwritten, so you'll still have the original file. If you add the --backup-dir option, things get really interesting:

sudo rsync -avb --delete --backup-dir=/backup/incremental /src /target 

Now, we're copying files from /src to /target as we were before, but we're now sending replaced files to the /backup/incremental directory. This means that when a file is going to be replaced on the target, the original file will be copied to /backup/incremental. This works because we used the -b option (backup) but we also used the --backup-dir option, which means that the replaced files won't be renamed, they'll simply be moved to the designated directory. This allows us to effectively perform incremental backups.

Building on our previous example, we can use the Bash shell itself to make incremental backups work even better. Consider these commands:

CURDATE=$(date +%m-%d-%Y) 
export CURDATE 
sudo rsync -avb --delete --backup-dir=/backup/incremental/$CURDATE /src /target 

With this example, we grab the current date and store it in a variable (CURDATE). We also export the variable so that it's available to any child processes or scripts we might call. In the rsync portion of the command, we use that variable for the --backup-dir option. This will copy the replaced files to a backup directory named after the date the command was run. For example, if today's date were 08-17-2020, the resulting command would be the same as if we had run the following:

sudo rsync -avb --delete --backup-dir=/backup/incremental/08-17-2020 /src /target 

Hopefully, you can see how flexible rsync is and how it can be used to not only copy files between directories and/or nodes, but also to serve as a backup solution as well (assuming you have a remote destination to copy files to). The best part is that this is only the beginning. If you consult the man page for rsync, you'll see that there are a lot of options you can use to customize it even further. Give it some practice, and you should get the hang of it in no time.

Transferring files with SCP

A useful alternative to rsync is the Secure Copy (SCP) utility, which comes bundled with the OpenSSH client. It allows you to quickly copy files from one node to another. While rsync also allows you to copy files to other network nodes via SSH, SCP is more practical for one-off tasks; rsync is geared toward more complex jobs. If your goal is to send a single file or a small number of files to another machine, SCP is a great tool you can use to get the job done. If nothing else, it's yet another item for your administration toolbox. To utilize SCP, we'll use the scp command. Since you most likely already have the OpenSSH client installed, you should already have the scp command available. If you execute which scp, you should receive the following output:

/usr/bin/scp 

If you don't see any output, make sure that the openssh-client package is installed.

Using SCP is very similar in nature to rsync. The command requires a source, a target, and a filename. To transfer a single file from your local machine to another, the resulting command would look similar to the following:

scp myfile.txt jdoe@192.168.1.50:/home/jdoe

With this example, we're copying the myfile.txt file (which is located in our current working directory) to a server located at 192.168.1.50. If the target server is recognized by DNS, we could've used the DNS name instead of the IP address. The command will connect to the server as user jdoe and place the file into that user's home directory. Actually, we can shorten that command a bit:

scp myfile.txt jdoe@192.168.1.50:

Notice that I removed the target path, which was /home/jdoe. I'm able to omit the path to the target, since the home directory is assumed if you don't give the scp command a target path. Therefore, the myfile.txt file will end up in /home/jdoe whether or not I included the path to the home directory explicitly. If I wanted to copy the file somewhere else, I would definitely need to call out the location. Make sure you always include at least the colon when copying a file, since if you don't include it, you'll end up copying the file to your current working directory instead of the target.
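This pitfall is easy to demonstrate locally, because without a colon, scp makes no connection at all and behaves as an ordinary local copy (the file names below are just examples, and the OpenSSH client is assumed to be installed):

```shell
cd /tmp
echo "important data" > myfile.txt
# Forgetting the colon: scp never contacts 192.168.1.50 -- it simply
# performs a local copy to a file literally named 192.168.1.50
scp myfile.txt 192.168.1.50
ls -l 192.168.1.50
```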

The scp command also works in reverse:

scp jdoe@192.168.1.50:myfile.txt .

With this example, we're assuming that myfile.txt is located in the home directory for the user jdoe. This command will copy that file to the current working directory of our local machine, since I designated the local path as a single period (which corresponds to our current working directory). Using scp in reverse isn't always practical, since you have to already know where the desired file is stored on the target before transferring it.

With our previous scp examples, we've only been copying a single file. If we want to transfer or download an entire directory and its contents, we will need to use the -r option, which allows us to do a recursive copy:

scp -r /home/jdoe/downloads/linux_iso jdoe@192.168.1.50:downloads

With this example, we're copying the local folder /home/jdoe/downloads/linux_iso to remote machine 192.168.1.50. Since we used the -r option, scp will transfer the linux_iso folder and all of its contents. On the remote end, we're again connecting via the user jdoe. Notice that the target path is simply downloads. Since scp defaults to the user's home directory, this will copy the linux_iso directory from the source machine to the target machine under the /home/jdoe/downloads directory. The following command would've had the exact same result:

scp -r /home/jdoe/downloads/linux_iso jdoe@192.168.1.50:/home/jdoe/downloads

The home directory is not the only assumption the scp command makes. It also assumes that SSH is listening on port 22 on the remote machine. Since it's possible to change the SSH port on a server to something else, port 22 may or may not be what's in use. If you need to designate a different port for scp to use, use the -P option:

scp -P 2222 -r /home/jdoe/downloads/linux_iso jdoe@192.168.1.50:downloads

With that example, we're connecting to the remote machine via port 2222. If you've configured SSH to listen on a different port, change the number accordingly.

Although port 22 is the default for OpenSSH, it's common for some administrators to change it to something else. While changing the SSH port doesn't add a great deal of security by itself (an intensive port scan will still find your SSH daemon), it's a relatively easy change to make, and anything that makes the daemon even a little harder to find can be worthwhile. We'll discuss this further in Chapter 21, Securing Your Server.

Like most commands in the Linux world, the scp command supports verbose mode. If you want to see how the scp command progresses as it copies multiple files, add the -v option:

scp -rv /home/jdoe/downloads/linux_iso jdoe@192.168.1.50:downloads

Well, there you have it. The scp command isn't overly complex or advanced, but it's really great for situations in which you want to perform a one-time copy of a file from one node to another. Since it copies files over SSH, you benefit from its security, and it also integrates well with your existing SSH configuration. An example of this integration is the fact that scp recognizes your ~/.ssh/config file (if you have one), so you can shorten the command even further. Go ahead and practice with it a bit, and in the next section, we'll go over yet another trick that OpenSSH has up its sleeve.
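As a quick illustration of that integration, a hypothetical ~/.ssh/config entry like the following lets you replace the user, IP address, and even a custom port with a short alias (adjust all values to match your own server):

```shell
# Append a hypothetical host alias to the SSH client config
mkdir -p ~/.ssh
cat >> ~/.ssh/config << 'EOF'
Host fileserver
    HostName 192.168.1.50
    User jdoe
    Port 2222
EOF
# With that entry in place, this command:
#   scp myfile.txt fileserver:
# is equivalent to:
#   scp -P 2222 myfile.txt jdoe@192.168.1.50:
```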

Mounting remote directories with SSHFS

Earlier in this chapter, we took a look at several ways in which we can set up a Linux file server using Samba and/or NFS. There's another type of file-sharing solution I haven't mentioned yet, the SSH Filesystem (SSHFS). NFS and Samba are great solutions for designating file shares that are to be made available to other users, but these technologies may be more complex than necessary if you want to set up a temporary file-sharing service to use for a specific period of time. SSHFS allows you to mount a remote directory on your local machine, and have it treated just like any other directory. The mounted SSHFS directory will be available for the life of the SSH connection. When you're finished, you simply disconnect the SSHFS mount.

There are some downsides when it comes to SSHFS, however. First, file transfer performance won't be as fast as with an NFS mount, since the data is encrypted in transit, which adds overhead. However, unless you're performing really resource-intensive work, you probably won't notice much of a difference anyway. Another downside is that you'd want to save your work regularly as you work on files within an SSHFS mount, because if the SSH connection drops for any reason, you may lose data. This is also true of NFS and Samba shares, but SSHFS is more of an on-demand solution and not something intended to remain connected and in place all the time.

To get started with SSHFS, we'll need to install it:

sudo apt install sshfs 

Now we're ready to roll. For SSHFS to work, we'll need a directory on both your local Linux machine and a remote Linux server. SSHFS can be used to mount any directory from the remote server you would normally be able to access via SSH. That's really the only requirement. What follows is an example command to mount an external directory to a local one via SSHFS. In your tests, make sure to replace my sample directories with actual directories on your nodes, as well as using a valid user account:

sshfs jdoe@192.168.1.50:/share/myfiles /mnt/myfiles

As you can see, the sshfs command is fairly straightforward. With this example, we're mounting /share/myfiles on 192.168.1.50 to /mnt/myfiles on our local machine. Assuming the command didn't provide an error (such as access denied, if you didn't have access to one of the directories on either side), your local directory should show the contents of the remote directory. Any changes you make to the files in the local directory will be made to the target. The SSHFS mount will basically function in the same way as if you had mounted an NFS or Samba share locally.

When we're finished with the mount, we should unmount it. There are two ways to do so. First, we can use the umount command as the root (just like we normally would):

sudo umount /mnt/myfiles 

Using the umount command isn't always practical for SSHFS, though. The user that's setting up the SSHFS link may not have root permissions, which means that they won't be able to unmount it with the umount command. If you tried the umount command as a regular user, you would see an error similar to the following:

umount: /mnt/myfiles: Permission denied 

It may seem rather strange that a normal user can mount an external directory via SSHFS, but not unmount it. Thankfully, there's a specific command a normal user can use, so we won't need to give them root or sudo access:

fusermount -u /mnt/myfiles 

That should do it. With the fusermount command, we can unmount the SSHFS connection we set up, even without root access. The fusermount command is part of the Filesystem in Userspace (FUSE) suite, which is what SSHFS uses as its virtual filesystem to facilitate such a connection. The -u option, as you've probably guessed, is for unmounting the connection normally. There is also the -z option, which unmounts the SSHFS mount lazily. By lazily, I mean it basically unmounts the filesystem without any cleanup of open resources. This is a last resort that you should rarely need to use, as it could result in data loss.

Connecting to an external resource via SSHFS can be simplified by adding an entry for it in /etc/fstab. Here's an example entry using our previous example:

jdoe@192.168.1.50:/share/myfiles    /mnt/myfiles    fuse.sshfs  rw,noauto,users,_netdev  0  0 

Notice that I used the noauto option in the fstab entry, which means that your system will not automatically attempt to bring up this SSHFS mount when it boots. Typically, this is ideal. The nature of SSHFS is to create on-demand connections to external resources, and we wouldn't be able to input the password for the connection while the system is in the process of booting anyway. Even if we set up password-less authentication, the SSH daemon may not be ready by the time the system attempts to mount the directory, so it's best to leave the noauto option in place and just use SSHFS as the on-demand solution it is. With this /etc/fstab entry in place, any time we would like to mount that resource via SSHFS, we would only need to execute the following command going forward:

mount /mnt/myfiles 

Since we now have an entry for /mnt/myfiles in /etc/fstab, the mount command knows that this is an SSHFS mount, where to locate it, and which user account to use for the connection. After you execute the example mount command, you should be asked for the user's SSH password (if you don't have password-less authentication configured) and then the resource should be mounted.

SSH sure does have a few unexpected tricks up its sleeve. Not only is it the de facto standard in the industry for connecting to Linux servers, but it also offers us a neat way of transferring files quickly and mounting external directories. I find SSHFS very useful in situations where I'm working on a large number of files on a remote server but want to work on them with applications I have installed on my local workstation. SSHFS allows us to do exactly that.

Summary

In this chapter, we explored multiple ways of accessing remote resources. Just about every network has a central location for storing files, and we explored two ways of accomplishing this with NFS and Samba. Both NFS and Samba have their place in the data center and are very useful ways we can make resources on a server available to our users who need to access them. We also talked about rsync and scp, two great utilities for transferring data without needing to set up a permanent share. We closed off the chapter with a look at SSHFS, which is a very handy utility for mounting external resources locally, on demand.

Next up is Chapter 13, Managing Databases. Now that we have all kinds of useful services running on our Ubuntu Server network, it's only fitting that we take a look at serving databases as well. Specifically, we'll look at MariaDB. See you there!
