Chapter 6. Standardizing, Sharing, and Synchronizing Resources

Hacks 56–62: Introduction

Once you finally get over the hump of setting up centralized access to various resources in your environment, you won’t know how you lived without it. Maintaining resources in a central location for use by the masses saves countless trips to people’s offices, and it can save you money because you’ll only have to back up a central file server instead of its individual clients.

This chapter will delve into various methods of file sharing, each applicable in different circumstances. For web farms, an NFS server can store the web pages, making backups and repurposing a breeze. For end user file access, Samba can provide cross-platform, authenticated file sharing. For web-based collaboration, have a look at WebDAV.

Centralize Resources Using NFS

Make recovering from disaster—and preparing for it—simpler by centralizing shared resources and service configuration.

A key goal of all system administrators is to maximize the availability of the services they maintain. With an unlimited budget you could create a scenario where there are two or three “hot standby” machines for every one machine in production, waiting to seamlessly take over in the event of a problem. But who has an unlimited budget?

Standalone machines that store their own local copies of configuration and data can be nice, if you have lots of them, and you have load balancers, and you have a good cloning mechanism so you don’t spend all your time making sure all of your mail servers (for example) are identical. Oh yeah, and when you make a configuration change to one, you’ll need a system to push it out to the other clones. This could take quite a bit of time and/or money to get right—and this doesn’t even touch on the expense of putting backup software on every single machine on your network. I’m sure there are some smaller sites using standard Unix and Linux utilities for backup and nothing else, but the majority of sites are using commercial products, and they’re not cheap!

Wouldn’t it be nice if a test box could be repurposed in a matter of minutes to take over for a server with a failed drive? Wouldn’t it be great if you only needed to back up from a couple of file servers instead of every single service machine? NFS, the Network File System, can get you to this place, and this hack will show you how.

Admins new to Linux, particularly those coming from Microsoft products, may not be familiar with NFS, the file-sharing protocol used in traditional Unix shops. What’s great about NFS is that it allows you to store configuration files and data in a centralized location and transparently access that location from multiple machines, which treat the remote share as a local filesystem.

Let’s say you have five Apache web servers, all on separate hardware. One is the main web presence for your company, one is a backup, and the other three perform other functions, such as hosting user home pages, an intranet site, and a trouble-ticket system. They’re all configured to be standalone machines right now, but you want to set things up so that the machine that’s currently just a hot standby to the main web server can serve as a standby for pretty much any web server.

To do this, we’ll create an NFS server with mountable partitions that provide the configuration information, as well as the content, to the web servers. The first step is to configure the NFS server.

Configuring the NFS Server

To configure the NFS server, you must first create a directory hierarchy to hold Apache configurations for all of your different web servers, since it’s hubris to assume they’re all configured identically. There are numerous ways to organize the hierarchy. You could try to emulate the native filesystem as closely as possible, using symlinks to get it all perfect. You could also create a tree for each web server to hold its configuration, so that when you add another web server you can just add another directory on the NFS server for its configuration. I’ve found the latter method to be a bit less taxing on the brain.

The first thing to do on the NFS server is to create the space where this information will live. Let’s say your servers are numbered web1 through web5. Here’s an example of what the directory structure might look like:

	/servconf
		mail/
		common/
		web/
			web1/
				conf/
					httpd.conf
					access.conf
					modules.conf
				conf.d/
					php4.conf
			web2/
				conf/
					httpd.conf
					access.conf
					modules.conf
				conf.d/
					php4.conf
					python.conf
					mod_auth_mysql.conf

This sample hierarchy illustrates a few interesting points. First, notice the directories mail/ and common/. As these show, the configuration tree doesn’t need to be limited to a single service. In fact, it doesn’t actually have to be service-specific at all! For example, the common/ tree can hold configuration files for things like global shell initialization files that you want to be constant on all production service machines (you want this, believe me) and the OpenSSH server configuration file, which ensures that the ssh daemon acts the same way on each machine.

That last sentence brings up another potential benefit of centralized configuration: if you want to make global changes to something like the ssh daemon, you can make the changes in one place instead of many, since all of the ssh daemons will be looking at the centralized configuration file. Once a change is made, the daemons will need to be restarted or sent a SIGHUP to pick up the change. “Execute Commands Simultaneously on Multiple Servers” [Hack #29] shows a method that will allow you to do this on multiple servers quickly.
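
For example, a quick-and-dirty way to nudge sshd on all of the web servers in this example might look like the following sketch. The hostnames and the PID file location are assumptions; sshd writes its PID to /var/run/sshd.pid on most distributions, and [Hack #29] covers more robust approaches:

	for host in web1 web2 web3 web4 web5; do
	    ssh root@$host 'kill -HUP $(cat /var/run/sshd.pid)'
	done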

All of this is wonderful, and some sites can actually use a hierarchy like this to have a single NFS server provide configuration to all the services in their department or business. However, it’s important to recognize that, depending on how robust your NFS deployment is, you could be setting yourself up with the world’s largest single point of failure. It’s one thing to provide configuration to all your web servers, in which case a failure of the NFS server affects the web servers. It’s quite another to use a single NFS server to provide configuration data to every production service across the board. In this case, if there’s a problem with the file server, you’re pretty much dead in the water, all owing to a glitch in a single machine! It would be smart to either invest in technologies that ensure the availability of the NFS service, or break up the NFS servers to lessen the impact of a failure of any one server.

Now it’s time to export our configuration tree. It’s important to note that some NFS daemons are somewhat “all or nothing” in the sense that they cannot export a subdirectory of an already exported directory. The exception to that rule is if the subdirectory is actually living on a separate physical device on the NFS server. For safety’s sake, I’ve made it a rule never to do this anyway, in the event that changes in the future cause the subdirectory to share a device with its parent. Note that the same rule applies to exporting a subdirectory and then trying to export a parent directory separately.

Some implementations of the nfsd server do allow subdirectory exports, but for the sake of simplicity I avoid this, because it has implications as to the rules applied to a particular exported directory and can make debugging quite nightmarish.

Let’s see how this works. Using the above “best practices,” you cannot export the whole /servconf tree in our example to one server, and then export mail/ separately to the mail servers. You can export each of the directories under /servconf separately if /servconf itself is not exported, but that would make it slightly more work to repurpose a server, because you’d have to make sure permissions were in place to allow the mount of the new configuration tree, and you’d have to make sure the /etc/fstab file on the NFS client was updated—otherwise, a reboot would cause bad things to happen.

It’s easier just to export the entire /servconf tree to a well-defined subset of the machines, so that /etc/fstab never has to be changed and permissions are not an issue from the NFS server side of the equation. That’s what we’ll do here. The file that tells the NFS server who can mount what is almost always /etc/exports. After all this discussion, here’s the single line we need to accomplish the goal of allowing our web servers to mount the /servconf directory:

	/servconf 192.168.198.0/24(ro,root_squash) @trusted(rw,no_root_squash)

The network specified above is a DMZ where my service machines live. Two important things to note here are the options applied to the export. The ro option ensures that changes cannot be made to the configuration of a given machine by logging into the machine itself. This is for the sake of heightened security, to help guarantee that a compromised machine can’t be used to change the configuration files of all the other machines. Also to that end, I’ve explicitly added the root_squash option. This is a default in some NFS implementations, but I always state it explicitly in case that default ever changes (this is generally good practice for all applications, by the way). This option maps UID 0 on the client to nobody on the server, so even root on the client machine cannot make changes to files anywhere under the mount point.

The second group of hosts I’m exporting this mount point to are those listed in an NIS netgroup named trusted. This netgroup consists of two machines that are locked down and isolated such that only administrators can get access to them. I’ve given those hosts read/write (rw) access, which allows administrators to make changes to configuration files from machines other than the NFS server itself. I’ve also specified the no_root_squash option here, so that admins can use these machines even to change configuration files on the central server owned by root.
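
After editing /etc/exports, you need to tell the running NFS server about the change. With the standard Linux nfs-utils, something like the following should do it:

	# exportfs -ra              # re-read /etc/exports and apply any changes
	# exportfs -v               # list what is currently exported, and with which options
	# showmount -e localhost    # double-check the export list as a client would see it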

For the Apache web server example, we can create a very similar hierarchy on our NFS server to store content served up by the servers, and export it in the exact same way we did for the configuration. However, keep in mind that many web sites assume they can write in the directories they own, so you’ll need to make sure that you either export a writable directory for these applications to use, or export the content tree with read/write privileges.

Configuring the NFS Clients

Getting NFS clients working is usually a breeze. You’ll need to decide where you want the local Apache daemon to find its configuration and content, create the mount points for any trees you’ll need to mount, and then edit the /etc/fstab file to make sure that the directory is always mounted at boot time.

Generally, I tend to create the local mount points under the root directory, mainly for the sake of consistency. No matter what server I’m logged in to, I know I can always run ls -l / and see all of the mount points on that server. This is simpler than having to remember what services are running on the machine, then hunting around the filesystem to check that the mount points are all there. Putting them under / means that if I run the mount command to see what is mounted, and something is missing, I can run one command to make sure the mount point exists, which is usually the first step in troubleshooting an NFS-related issue.

I also attempt to name the mount point the same as the exported directory on the server. This makes debugging a bit simpler, because I don’t have to remember that the mount point named webstuff on the client is actually servconf on the server. So, we create a mount point on the NFS client like this:

	# mkdir /servconf

Then we add a line like the following to our /etc/fstab file:

	mynfs:/servconf /servconf nfs ro,intr,nfsvers=3,proto=tcp 0 0

Now we’re assured that the tree will be mounted at boot time. The other important factor to consider is that the tree is mounted before the service that needs the files living there is started. It should be safe to assume that this will just work, but if you’re trying to debug services that seem to be ignoring configuration directives, or that fail to start at all, you’ll want to double check, just in case!
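
A quick way to check without rebooting is to mount the entry by hand and inspect the result:

	# mount /servconf              # uses the options already listed in /etc/fstab
	# mount | grep servconf        # confirm it shows up as a read-only NFS mount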

Configuring the Service

We’ve now mounted our web server configuration data to all of our web servers. Let’s assume for now that you’ve done the same with the content. What we’ve essentially accomplished is a way to have one hot spare machine, which also mounts all of this information, that can take over for any failed web server in the blink of an eye. Two ways to get it to work are to use symlinks or to edit the service’s initialization script.

To use the symlink method, you consult the initialization script for the service. In the case of Apache, the script will most likely be /etc/init.d/apache or /etc/init.d/httpd. This script, like almost all service initialization scripts, will tell you where the daemon will look for its configuration file(s). In my case, it looks under /etc/apache. The next thing to do is to move this directory out of the way and make a symlink to the directory that will take its place. This is done with commands like the following:

	# mv /etc/apache /etc/apache.DIST
	# ln -s /servconf/web/web1 /etc/apache

Now when the service starts up, it will use whichever configuration files are pointed to by the symlink. The critical thing to make sure of here is that the files under the mount point conform to what the initialization script expects. For example, if the initialization script for Apache in this case was looking for /etc/apache/config/httpd.conf, it would fail to start at all, because the /etc/apache directory is now a symlink to a mount point that has put the file under a subdirectory called conf/, not config/. These little “gotchas” are generally few, and are worked out early in the testing phase of any such deployment.

Now, if we want to make our hot spare look like web3 instead of web1, we can simply remove the symlink we had in place, create a new symlink to point to web3’s configuration directory, and restart the service. Note that if all of the web servers mount the content in the same way under the same mount points, you don’t have to change any symlinks for content, since the configuration file in Apache’s case tells the daemon where to find the content, not the initialization script! Here are the commands to change the personality of our hot spare to web3:

	# rm /etc/apache; ln -s /servconf/web/web3 /etc/apache
	# /etc/init.d/apache restart

The commands used to restart Apache can vary depending on the platform. You might run the apachectl program directly, or you might use the service command available on some Linux distributions.

A Final Consideration

You can’t assume that you’re completely out of the woods just because a server looks and acts like the one it replaces. In the case of Apache, you’ll also want to make sure that your hot spare is actually reachable by clients without them having to change any of their bookmarks. This might involve taking down the failed web server and assigning its IP address to the hot spare or making the DNS record for the failed web server point to the hot spare.

Automount NFS Home Directories with autofs

Let users log in from any machine and be in familiar territory.

If you administer an environment that supports large numbers of users who occasionally need access to any one of a wide array of hosts on your network, you might find it a bit tiring having to answer support calls every time your users try to log into a machine only to find that their home directories are nowhere to be found. Sure, you could run over and edit the /etc/fstab file to NFS-mount the remote home directories and fix things using that machine’s NFS client, but there are a couple of downsides to handling things in this way.

First, your /etc/fstab file will eventually grow quite large as you add more and more mounts. Second, if a user leaves your department, you’ll be left with the choice of either dealing with failed mount requests in your logfiles (assuming you removed the user’s home directory at the time of departure) or running around and editing files on all of the machines that have the entry causing the error. Which machines have the offending entry? Well, you’ll just have to look, won’t you? This is not a position you want to find yourself in if you maintain large labs, clusters, and testing or development environments.

One thought might be to mount a directory from an NFS server that holds the /etc/fstab file. This is asking for trouble, since this file is in charge of handling not only NFS mounts, but the mounts of your local devices (read: hard drives). In the end, you’re sure to find that centralizing this file on an NFS share is impossible, since the local machine needs to mount the hard drives before it can do anything with the network, including mounting NFS shares.

A good solution is one that allows you to mount NFS shares without using /etc/fstab. Ideally, it could also mount shares dynamically, as they are requested, so that when they’re not in use there aren’t all of these unused directories hanging around and messing up your ls -l output. In a perfect world, we could centralize the mount configuration file and allow it to be used by all machines that need the service, so that when a user leaves, we just delete the mount from one configuration file and go on our merry way.

Happily, you can do just this with Linux’s autofs. The autofs facility pairs a kernel filesystem module with the userspace automount daemon, which reads its configuration from “maps” that can be stored in local files, centralized NFS-mounted files, or directory services such as NIS or LDAP. Of course, there has to be a master configuration file to tell autofs where to find its mounting information. That file is almost always stored in /etc/auto.master. Let’s have a look at a simple example configuration file:

	/.autofs	file:/etc/auto.direct	--timeout 300
	/mnt		file:/etc/auto.mnt		--timeout 60
	/u			yp:homedirs				--timeout 300

The main purpose of this file is to let the daemon know where to create its mount points on the local system (detailed in the first column of the file), and then where to find the mounts that should live under each mount point (detailed in the second column). The rest of each line consists of mount options. In this case, the only option is a timeout, in seconds. If the mount is idle for that many seconds, it will be unmounted.

In our example configuration, starting the autofs service will create three mount points. /u is one of them, and that’s where we’re going to put our home directories. The data for that mount point comes from the homedirs map on our NIS server. Running ypcat homedirs shows us the following line:

	hdserv:/vol/home:users

The server that houses all of the home directories is called hdserv. When the automounter starts up, it will read the entry in auto.master, contact the NIS server, ask for the homedirs map, get the above information back, and then contact hdserv and ask to mount /vol/home/users. (The colon in the file path above is an NIS-specific requirement. Everything under the directory named after the colon will be mounted.) If things complete successfully, everything that lives under /vol/home/users on the server will now appear under /u on the client.

Of course, we don’t have to use NIS to store our mount maps—we can store them in an LDAP directory or in a plain-text file on an NFS share. Let’s explore this latter option, for those who aren’t working with a directory service or don’t want to use their directory service for automount maps.

The first thing we’ll need to alter is our auto.master file, which currently thinks that everything under /u is mounted according to NIS information. Instead, we’ll now tell it to look in a file, by replacing the original /u line with this one:

	/u		file:/usr/local/etc/auto.home	 --timeout 300

This tells the automounter that the file /usr/local/etc/auto.home is the authoritative source for information regarding all things mounted under the local /u directory.

In the file on my system are the following lines:

	jonesy	-rw hdserv:/vol/home/users/&
	matt	-rw hdserv:/vol/home/users/&

What?! One line for every single user in my environment?! Well, no. I’m doing this to prove a point. In order to hack the automounter, we have to know what these fields mean.

The first field is called a key. The key in the first line is jonesy. Since this is a map for things to be found under /u, this first line’s key specifies that this entry defines how to mount /u/jonesy on the local machine.

The second field is a list of mount options, which are pretty self-explanatory. We want all users to be able to mount their directories with read/write access (-rw).

The third field is the location field, which specifies the server from which the automounter should request the mount. In this case, our first entry says that /u/jonesy will be mounted from the server hdserv. The path on the server that will be requested is /vol/home/users/&. The ampersand is a wildcard that will be replaced in the outgoing mount request with the key. Since our key in the first line is jonesy, the location field will be transformed to a request for hdserv:/vol/home/users/jonesy.

Now for the big shortcut. There’s an extra wildcard you can use in the key field, which allows you to shorten the configuration for every user’s home directory to a single line that looks like this:

	*	-rw hdserv:/vol/home/users/&

The * means, for all intents and purposes, “anything.” Since we already know the ampersand takes the value of the key, we can now see that, in English, this line is really saying “Whichever directory a user requests under /u, that is the key, so replace the ampersand with the key value and mount that directory from the server.”

This is wonderful for two reasons. First, my configuration file is a single line. Second, as user home directories are added and removed from the system, I don’t have to edit this configuration file at all. If a user requests a directory that doesn’t exist, he’ll get back an error. If a new directory is created on the file server, this configuration line already allows it to be mounted.
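
Once auto.master and auto.home are in place, reload the automounter and test it against any existing home directory. The init script name and the username here are assumptions; some distributions use the service command instead:

	# /etc/init.d/autofs reload    # pick up the new map configuration
	$ ls /u/jonesy                 # first access triggers the mount on demand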

Keep Filesystems Handy, but Out of Your Way

Use the amd automounter and its handy defaults to keep remote resources mounted and within reach, without giving up your own local resources.

The amd automounter isn’t the most ubiquitous production service I’ve ever seen, but it can certainly be a valuable tool for administrators in the setup of their own desktop machines. Why? Because it lets you easily and conveniently access any NFS share in your environment, and the default settings for amd put all of them under their own directory, out of the way, without you having to do much more than simply start the service.

Here’s an example of how useful this can be. I work in an environment in which the /usr/local directories on our production machines are mounted from a central NFS server. This is great, because if we need to build software for our servers that isn’t supplied by the distribution vendor, we can just build it from source in that tree, and all of the servers can access it as soon as it’s built. However, occasionally we receive support tickets saying that something is acting strangely or isn’t working. Most times, the issue is environmental: the user is getting at the wrong binary because /usr/local is not in her PATH, or something simple like that. Sometimes, though, the problem is ours, and we need to troubleshoot it.

The most convenient way to do that is just to mount the shared /usr/local to our desktops and use it in place of our own. For me, however, this is suboptimal, because I like to use my system’s /usr/local to test new software. So I need another way to mount the shared /usr/local without conflicting with my own /usr/local. This is where amd comes in, as it allows me to get at all of the shares I need, on the fly, without interfering with my local setup.

Here’s an example of how this works. I know that the server that serves up the /usr/local partition is named fs, and I know that the directory mounted as /usr/local on the clients is actually called /linux/local on the server. With a properly configured amd, I just run the following command to mount the shared directory:

	$ cd /net/fs/linux/local

There I am, ready to test whatever needs to be tested, having done next to no configuration whatsoever!

The funny thing is, I’ve run into lots of administrators who don’t use amd and didn’t know that it performed this particular function. This is because the amd mount configuration is a little bit cryptic. To understand it, let’s take a look at how amd is configured. Soon you’ll be mounting remote shares with ease.

amd Configuration in a Nutshell

The main amd configuration file is almost always /etc/amd.conf. This file sets up default behaviors for the daemon and defines other configuration files that are authoritative for each configured mount point. Here’s a quick look at a totally untouched configuration file, as supplied with the Fedora Core 4 am-utils package, which supplies the amd automounter:

	[ global ]
	normalize_hostnames =	no
	print_pid =				yes
	pid_file =				/var/run/amd.pid
	restart_mounts =		yes
	auto_dir =				/.automount
	#log_file =				/var/log/amd
	log_file =				syslog	
	log_options =			all
	#debug_options =		all
	plock =				no
	selectors_on_default =	yes
	print_version =			no
	# set map_type to "nis" for NIS maps, or comment it out to search for all
	# types
	map_type =				file
	search_path =			/etc
	browsable_dirs =		yes
	show_statfs_entries =	no
	fully_qualified_hosts = no
	cache_duration =		300

	# DEFINE AN AMD MOUNT POINT
	[ /net ]
	map_name =				amd.net
	map_type =				file

The options in the [global] section specify behaviors of the daemon itself and rarely need changing. You’ll notice that search_path is set to /etc, which means it will look for mount maps under the /etc directory. You’ll also see that auto_dir is set to /.automount. This is where amd will mount the directories you request. Since amd cannot perform mounts “in-place,” directly under the mount point you define, it actually performs all mounts under the auto_dir directory, and then returns a symlink to that directory in response to the incoming mount requests. We’ll explore that more after we look at the configuration for the [/net] mount point.

From looking at the above configuration file, we can tell that the file that tells amd how to mount things under /net is amd.net. Since the search_path option in the [global] section is set to /etc, it’ll really be looking for /etc/amd.net at startup time. Here are the contents of that file:

	/defaults fs:=${autodir}/${rhost}/root/${rfs};opts:=nosuid,nodev
	* rhost:=${key};type:=host;rfs:=/

Eyes glazing over? Well, then let’s translate this into English. The first entry is /defaults, which is there to define the symlink that gets returned in response to requests for directories under [/net] in amd.conf. Here’s a quick tour of the variables being used here:

  • ${autodir} gets its value from the auto_dir setting in amd.conf, which in this case will be /.automount .

  • ${rhost} is the name of the remote file server, which in our example is fs. It is followed closely by /root, which is really just a placeholder for / on the remote host.

  • ${rfs} is the actual path under the / directory on the remote host that gets mounted.

Also note that fs: on the /defaults line specifies the local location where the remote filesystem is to be mounted. It’s not the name of our remote file server.

In reality, there are a couple of other variables in play behind the scenes that help resolve the values of these variables, but this is enough to discern what’s going on with our automounter. You should now be able to figure out what was really happening in our simple cd command earlier in this hack.

Because of the configuration settings in amd.conf and amd.net, when I ran the cd command earlier, I was actually requesting a mount of fs:/linux/local under the directory /net/fs/linux/local. Behind my back, amd replaced that directory with a symlink to /.automount/fs/root/linux/local, and that’s where I really wound up. Running pwd with no options will say you’re in /net/fs/linux/local, but there’s a quick way to tell where you really are, taking symlinks into account. Look at the output from these two pwd commands:

	$ pwd
	/net/fs/linux/local
	$ pwd -P
	/.automount/fs/root/linux/local

The -P option reveals your true location.

So, now that we have some clue as to how the amd.net /defaults entry works, we need to figure out exactly why our wonderful hack works. After all, we haven’t yet told amd to explicitly mount anything!

Here’s the entry in /etc/amd.net that makes this functionality possible:

	* rhost:=${key};type:=host;rfs:=/

The * wildcard entry says to attempt to mount any requested directory, rather than specifying one explicitly. When you request a mount, the part of the path after /net defines the host and path to mount. If amd is able to perform the mount, it is served up to the user on the client host. The rfs:=/ bit means that amd should look for the requested directory relative to the root directory of that server. So, if we set rfs:=/mnt and then request /linux/local, the request will be for fs:/mnt/linux/local.
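
For example, with this wildcard entry in place, any host and path on the network can be reached simply by naming them under /net (the host and path here are hypothetical):

	$ cd /net/otherhost/export/projects    # amd mounts otherhost:/export/projects on demand
	$ pwd -P
	/.automount/otherhost/root/export/projects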

Synchronize root Environments with rsync

When you’re managing multiple servers with local root logins, rsync provides an easy way to synchronize the root environments across your systems.

Synchronizing files between multiple computer systems is a classic problem. Say you’ve made some improvements to a file on one machine, and you would like to propagate it to others. What’s the best way? Individual users often encounter this problem when trying to work on files on multiple computer systems, but it’s even more common for system administrators who tend to use many different computer systems in the course of their daily activities.

rsync is a popular and well-known remote file and directory synchronization program that enables you to ensure that specified files and directories are identical on multiple systems. Some files that you may want to include for synchronization are:

  • .profile

  • .bash_profile

  • .bashrc

  • .cshrc

  • .login

  • .logout

Choose one server as your source server (referred to as srchost in the examples in this hack). This is the server where you will maintain the master copies of the files that you want to synchronize across multiple systems’ root environments. After selecting this system, you’ll add a stanza to the rsync configuration file (/etc/rsyncd.conf) containing, at a minimum, options for specifying the path to the directory that you want to synchronize (path), preventing remote clients from uploading files to the source server (read only), the user ID that you want synchronization to be performed as (uid), a list of files and directories that you want to exclude from synchronization (exclude), and the list of files that you want to synchronize (include). A sample stanza will look like this:

	[rootenv]
		path = /
		# the default uid is nobody
		uid = root
		read only = yes
		exclude = * .* 	
		include = .bashrc .bash_profile .aliases
		hosts allow = 192.168.1.
		hosts deny = *
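
This stanza is read by the rsync daemon on the source server, so srchost needs to be running one. Assuming the stanza lives in the default /etc/rsyncd.conf, a minimal way to start the daemon by hand is:

	# rsync --daemon    # reads /etc/rsyncd.conf; many sites run it from xinetd instead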

Then add the following command to your shell’s login command file (.profile, .bash_profile, .login, etc.) on the source host. Because the login file is itself one of the synchronized files, the command will propagate to every server and re-run each time root logs in:

	rsync -qa rsync://srchost/rootenv /

Next, you’ll need to manually synchronize the files for the first time. After that, they will automatically be synchronized when your shell’s login command file is executed. On each server you wish to synchronize, run this rsync command on the host as root:

	rsync -qa rsync://srchost/rootenv /

For convenience, add the following alias to your .bashrc file, or add an equivalent statement to the command file for whatever shell you’re using (.cshrc, .kshrc, etc.):

	alias envsync='rsync -qa rsync://srchost/rootenv / && source .bashrc'

By running the envsync alias, you can immediately sync up and source your rc files.

To increase security, you can use the /etc/hosts.allow and /etc/hosts.deny files to ensure that only specified hosts can use rsync on your systems [Hack #64].

See Also

  • man rsync

Lance Tost

Share Files Across Platforms Using Samba

Linux, Windows, and Mac OS X all speak SMB/CIFS, which makes Samba a one-stop shop for all of their resource-sharing needs.

It used to be that if you wanted to share resources in a mixed-platform environment, you needed NFS for your Unix machines, AppleTalk for your Mac crowd, and Samba or a Windows file and print server to handle the Windows users. Nowadays, all three platforms can mount file shares and use printing and other resources through SMB/CIFS, and Samba can serve them all.

Samba can be configured in a seemingly endless number of ways. It can share just files, or printer and application resources as well. You can authenticate users for some or all of the services using local files, an LDAP directory, or a Windows domain server. This makes Samba an extremely powerful, flexible tool in the fight to standardize on a single daemon to serve all of the hosts in your network.

At this point, you may be wondering why you would ever need to use Samba with a Linux client, since Linux clients can just use NFS. Well, that’s true, but whether that’s what you really want to do is another question. Some sites have users in engineering or development environments who maintain their own laptops and workstations. These folks have the local root password on their Linux machines. One mistyped NFS export line, or a chink in the armor of your NFS daemon’s security, and you could be inadvertently allowing remote, untrusted users free rein on the shares they can access. Samba can be a great solution in cases like this, because it allows you to grant those users access to what they need without sacrificing the security of your environment.

This is possible because Samba can be (and generally is, in my experience) configured to ask for a username and password before allowing a user to mount anything. Whichever user supplies the username and password to perform the mount operation is the user whose permissions are enforced on the server. Thus, if a user becomes root on his local machine it needn’t concern you, because local root access is trumped by the credentials of the user who performed the mount.

Setting Up Simple Samba Shares

Technically, the Samba service consists of two daemons, smbd and nmbd. The smbd daemon is the one that handles the SMB file- and print-sharing protocol. When a client requests a shared directory from the server, it’s talking to smbd. The nmbd daemon is in charge of answering NetBIOS over IP name service requests. When a Windows client broadcasts to browse Windows shares on the network, nmbd replies to those broadcasts.

The configuration file for the Samba service is /etc/samba/smb.conf on both Debian and Red Hat systems. If you have a tool called swat installed, you can use it to help you generate a working configuration without ever opening vi—just uncomment the swat line in /etc/inetd.conf on Debian systems, or edit /etc/xinetd.d/swat on Red Hat and other systems, changing the disable key’s value to no. Once that’s done, restart your inetd or xinetd service, and you should be able to get to swat’s graphical interface by pointing a browser at http://localhost:901.
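
On a Red Hat-style system, /etc/xinetd.d/swat looks something like the following (paths and defaults may differ slightly on your distribution); the only line you need to change is disable:

	service swat
	{
		port            = 901
		socket_type     = stream
		wait            = no
		only_from       = 127.0.0.1
		user            = root
		server          = /usr/sbin/swat
		log_on_failure  += USERID
		disable         = no
	}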

Many servers are installed without swat, though, and for those systems editing the configuration file works just fine. Let’s go over the config file for a simple setup that gives access to file and printer shares to authenticated users. The file is broken down into sections. The first section, which is always called [global], is the section that tells Samba what its “personality” should be on the network. There are a myriad of possibilities here, since Samba can act as a primary or backup domain controller in a Windows domain, can use various printing subsystem interfaces and various authentication backends, and can provide various different services to clients.

Let’s take a look at a simple [global] section:

	[global]
	   workgroup = PVT
	   server string = apollo
	   hosts allow = 192.168.42. 127.0.0.
	   printcap name = CUPS
	   load printers = yes
	   printing = CUPS
	   logfile = /var/log/samba/log.smbd
	   max log size = 50
	   security = user
	   smb passwd file = /etc/samba/smbpasswd
	   socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192	
	   interfaces = eth0
	   wins support = yes
	   dns proxy = no

Much of this is self-explanatory. This excerpt is taken from a working configuration on a private SOHO network, which is evidenced by the hosts allow values. This option can take values in many different formats, and it uses the same syntax as the /etc/hosts.allow and /etc/hosts.deny files (see hosts_access(8) and “Allow or Deny Access by IP Address” [Hack #64]). Here, it allows access from the local host and any host whose IP address begins with 192.168.42. Note that a netmask is not given or assumed; it’s a simple prefix match against the IP address of the connecting host. Note also that this setting can be removed from the [global] section and placed in each subsection. If it exists in the [global] section, however, it will supersede any settings in other areas of the configuration file.

In this configuration, I’ve opted to use CUPS as the printing mechanism. There’s a CUPS server on the local machine where the Samba server lives, so Samba users will be able to see all the printers that CUPS knows about when they browse the PVT workgroup, and use them (more on this in a minute).

The server string setting determines the descriptive name users will see next to the host when it shows up in a Network Neighborhood listing, or in other SMB network browsing software. I generally set this to the actual hostname of the server if it’s practical, so that users who need to manually request something from the Samba server don’t end up trying to address my Linux Samba server as “Samba Server.”

The other important setting here is security. If you’re happy with using the /etc/samba/smbpasswd file for authentication, this setting is fine. There are many other ways to configure authentication, however, so you should definitely read the fine (and copious) Samba documentation to see how it can be integrated with just about any authentication backend. Samba includes native support for LDAP and PAM authentication. There are PAM modules available to sync Unix and Samba passwords, as well as to authenticate to remote SMB servers.

We’re starting with a simple password file in our configuration. Included with the Samba package is a tool called mksmbpasswd.sh, which will add users to the password file en masse so you don’t have to do it by hand. However, it cannot migrate existing Unix passwords into the file: Unix passwords are stored as one-way hashes, and they don’t match the hash format that Windows clients send to Samba, so each user’s Samba password must still be set separately.
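
The usual invocation feeds mksmbpasswd.sh the system password file on standard input (the script’s installed location varies by distribution):

	# cat /etc/passwd | mksmbpasswd.sh > /etc/samba/smbpasswd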

To change the Samba password for a user, run the following command on the server:

	# smbpasswd username

This will prompt you for the new password, and then ask you to confirm it by typing it again. If a user ran the command, she’d be prompted for her current Samba password first. If you want to manually add a user to the password file, you can use the -a flag, like this:

	# smbpasswd -a username

This will also prompt for the password that should be assigned to the user.

Now that we have users, let’s see what they have access to by looking at the sections for each share. In our configuration, users can access their home directories, all printers available through the local CUPS server, and a public share for users to dabble in. Let’s look at the home directory configuration first:

	[homes]
	   comment = Home Directories
	   browseable = no
	   writable = yes

The [homes] section, like the [global] section, is recognized by the server as a “special” section. Without any more settings than these few minimal ones, Samba will, by default, take the username given during a client connection and look it up in the local password file. If it exists, and the correct password has been provided, Samba clones the [homes] section on the fly, creating a new share named after the user. Since we didn’t use a path setting, the actual directory that gets served up is the home directory of the user, as supplied by the local Linux system. However, since we’ve set browseable = no, users will only be able to see their own home directories in the list of available shares, rather than those of every other user on the system.

Here’s the printer share section:

	[printers]
	   comment = All Printers
	   path = /var/spool/samba
	   browseable = yes
	   public = yes
	   guest ok = yes
	   writable = no
	   printable = yes
	   use client driver = yes

This section is also a “special” section, which works much like the [homes] special section. It clones the section to create a share for the printer being requested by the user, with the settings specified here. We’ve made printers browseable, so that users know which printers are available. This configuration will let any authenticated user view and print to any printer known to Samba.

Finally, here’s our public space, which anyone can read or write to:

	[tmp]
       comment = Temporary file space
	   path = /tmp
	   read only = no
	   public = yes

This space will show up in a browse listing as “tmp on Apollo,” and it is accessible in read/write mode by anyone authenticated to the server. This is useful in our situation, since users cannot mount and read from each other’s home directories. This space can be mounted by anyone, so it provides a way for users to easily exchange files without, say, gumming up your email server.

Once your smb.conf file is in place, start up your smb service and give it a quick test. You can do this by logging into a Linux client host and using a command like this one:

	$ smbmount '//apollo/jonesy' ~/foo/ -o username=jonesy,workgroup=PVT

This command will mount my home directory on Apollo to ~/foo/ on the local machine. I’ve passed along my username and the workgroup name, and the command will prompt for my password and happily perform the mount. If it doesn’t, check your logfiles for clues as to what went wrong.

You can also log in to a Windows client, and see if your new Samba server shows up in your Network Neighborhood (or My Network Places under Windows XP).

If things don’t go well, another command you can try is smbclient. Run the following command as a normal user:

	$ smbclient -L apollo

On my test machine, the output looks like this:

	Domain=[APOLLO] OS=[Unix] Server=[Samba 3.0.14a-2]

		Sharename      Type      Comment
		---------      ----      -------
		tmp            Disk      Temporary file space
		IPC$           IPC       IPC Service (Samba Server)
		ADMIN$         IPC       IPC Service (Samba Server)
		MP780          Printer   MP780
		hp4m           Printer   HP LaserJet 4m
		jonesy         Disk      Home Directories
	Domain=[APOLLO] OS=[Unix] Server=[Samba 3.0.14a-2]

		Server         Comment
		---------      -------

		Workgroup      Master
		---------      -------
		PVT            APOLLO

This list shows the services available to me from the Samba server, and I can also use it to confirm that I’m using the correct workgroup name.

Quick and Dirty NAS

Combining LVM, NFS, and Samba on new file servers is a quick and easy solution when you need more shared disk resources.

Network Attached Storage (NAS) and Storage Area Networks (SANs) aren’t making as many people rich nowadays as they did during the dot-com boom, but they’re still important concepts for any system administrator. SANs depend on high-speed disk and network interfaces, and they’re responsible for the increasing popularity of other magic acronyms such as iSCSI (Internet Small Computer Systems Interface) and AoE (ATA over Ethernet), which are cool and upcoming technologies for transferring block-oriented disk data over fast Ethernet interfaces. On the other hand, NAS is quick and easy to set up: it just involves hanging new boxes with shared, exported storage on your network.

“Disk use will always expand to fill all available storage” is one of the immutable laws of computing. It’s sad that it’s as true today, when you can pick up a 400-GB disk for just over $200, as it was when I got my CS degree and the entire department ran on some DEC-10s that together had a whopping 900 MB of storage (yes, I am old). Since then, every computing environment I’ve ever worked in has eventually run out of disk space. And let’s face it—adding more disks to existing machines can be a PITA (pain in the ass). You have to take down the desktop systems, add disks, create filesystems, mount them, copy data around, reboot, and then figure out how and where you’re going to back up all the new space.

This is why NAS is so great. Need more space? Simply hang a few more storage devices off the network and give your users access to them. Many companies made gigabucks off this simple concept during the dot-com boom (more often by selling themselves than by selling hardware, but that’s beside the point). The key for us in this hack is that Linux makes it easy to assemble your own NAS boxes from inexpensive PCs and add them to your network for a fraction of the cost of preassembled, nicely painted, dedicated NAS hardware. This hack is essentially a meta-hack, in which you can combine many of the tips and tricks presented throughout this book to save your organization money while increasing the control you have over how you deploy networked storage, and thus your general sysadmin comfort level. Here’s how.

Selecting the Hardware

Like all hardware purchases, what you end up with is contingent on your budget. I tend to use inexpensive PCs as the basis for NAS boxes, and I’m completely comfortable with basing NAS solutions on today’s reliable, high-speed EIDE drives. The speed of the disk controller(s), disks, and network interfaces is far more important than the CPU speed. This is not to say that recycling an old 300-MHz Pentium as the core of your NAS solutions is a good idea, but any reasonably modern 1.5-GHz or greater processor is more than sufficient. Most of what the box will be doing is serving data, not playing Doom. Thus, motherboards with built-in graphics are also fine for this purpose, since fast, hi-res graphics are equally unimportant in the NAS environment.

Tip

In this hack, I’ll describe minimum requirements for hardware characteristics and capabilities rather than making specific recommendations. As I often say professionally, “Anything better is better.” That’s not me taking the easy way out—it’s me ensuring that this book won’t be outdated before it actually hits the shelves.

My recipe for a reasonable NAS box is the following:

  • A mini-tower case with at least three external, full-height drive bays (four is preferable) and a 500-watt or greater power supply with the best cooling fan available. If you can get a case with mounting brackets for extra cooling fans on the sides or bottom, do so, and purchase the right number of extra cooling fans. This machine is always going to be on, pushing at least four disks, so it’s a good idea to get as much power and cooling as possible.

  • A motherboard with integrated video hardware, at least 10/100 onboard Ethernet (10/100/1000 is preferable), and USB or FireWire support. Make sure that the motherboard supports booting from external USB (or FireWire, if available) drives, so that you won’t have to waste a drive bay on a CD or DVD drive. If at all possible, on-board SATA is a great idea, since that will enable you to put the operating system and swap space on an internal disk and devote all of the drive bays to storage that will be available to users. I’ll assume that you have on-board SATA in the rest of this hack.

  • A 1.5-GHz or better Celeron, Pentium 4, or AMD processor compatible with your motherboard.

  • 256 MB of memory.

  • Five removable EIDE/ATA drive racks and trays, hot-swappable if possible. Four are for the system itself; the extra one gives you a spare tray to use when a drive inevitably fails.

  • One small SATA drive (40 GB or so).

  • Four identical EIDE drives, as large as you can afford. At the time I’m writing this, 300-GB drives with 16-MB buffers cost under $150. If possible, buy a fifth so that you have a spare and two others for backup purposes.

  • An external CD/DVD USB or FireWire drive for installing the OS.

I can’t really describe the details of assembling the hardware because I don’t know exactly what configuration you’ll end up purchasing, but the key idea is that you put a drive tray in each of the external bays, with one of the IDE/ATA drives in each, and put the SATA drive in an internal drive bay. This means that you’ll still have to open up the box to replace the system disk if it ever fails, but it enables you to maximize the storage that this system makes available to users, which is its whole reason for being. Putting the EIDE/ATA disks in drive trays means that you can easily replace a failed drive without taking down the system if the trays are hot-swappable. Even if they’re not, you can bounce a system pretty quickly if all you have to do is swap in another drive and you already have a spare tray available.

At the time I wrote this the hardware setup cost me around $1000 (exclusive of the backup hard drives) with some clever shopping, thanks to http://www.pricewatch.com. This got me a four-bay case; a motherboard with onboard GigE, SATA, and USB; four 300-GB drives with 16-MB buffers; hot-swappable drive racks; and a few extra cooling fans.

Installing and Configuring Linux

As I’ve always told everyone (regardless of whether they ask), I always install everything, regardless of which Linux distribution I’m using. I personally prefer SUSE for commercial deployments, because it’s supported, you can get regular updates, and I’ve always found it to be an up-to-date distribution in terms of supporting the latest hardware and providing the latest kernel tweaks. Your mileage may vary. I’m still mad at Red Hat for abandoning everyone on the desktop, and I don’t like GNOME (though I install it “because it’s there” and because I need its libraries to run Evolution, which is my mailer of choice due to its ability to interact with Microsoft Exchange). Installing everything is easy. We’re building a NAS box here, not a desktop system, so 80% of what I install will probably never be used, but I hate to find that some tool I’d like to use isn’t installed.

To install the Linux distribution of your choice, attach the external CD/DVD drive to your machine and configure the BIOS to boot from it first and the SATA drive second. Put your installation media in the external CD/DVD drive and boot the system. Install Linux on the internal SATA drive. As discussed in “Reduce Restart Times with Journaling Filesystems” [Hack #70] , I use ext3 for the /boot and / partitions on my systems so that I can easily repair them if anything ever goes wrong, and because every Linux distribution and rescue disk in the known universe can handle ext2/ext3 partitions. There are simply more ext2/ext3 tools out there than there are for any other filesystem. You don’t have to partition or format the drives in the bays—we’ll do that after the operating system is installed and booting.

Done installing Linux? Let’s add and configure some storage.

Configuring User Storage

Determining how you want to partition and allocate your disk drives is one of the key decisions you’ll need to make, because it affects both how much space your new NAS box will be able to deliver to users and how maintainable your system will be. To build a reliable NAS box, I use Linux software RAID to mirror the master on the primary IDE interface to the master on the secondary IDE interface and the slave on the primary IDE interface to the slave on the secondary IDE interface. I put them in the case in the following order (from the top down): master primary, slave primary, master secondary, and slave secondary. Having a consistent, specific order makes it easy to know which is which since the drive letter assignments will be a, b, c, and d from the top down, and also makes it easy to know in advance how to jumper any new drive that I’m swapping in without having to check.

By default, I then set up Linux software RAID and LVM so that the two drives on the primary IDE interface are in a logical volume group [Hack #47] .

On systems with 300-GB disks, this gives me 600 GB of reliable, mirrored storage to provide to users. If you’re less nervous than I am, you can skip the RAID step and just use LVM to deliver all 1.2 TB to your users, but backing that up will be a nightmare, and if any of the drives ever fail, you’ll have 1.2 TB worth of angry, unproductive users. If you need 1.2 TB of storage, I’d strongly suggest that you spend the extra $1000 to build a second one of the boxes described in this hack. Mirroring is your friend, and it doesn’t get much more stable than mirroring a pair of drives to two identical drives.
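
As a rough sketch, the RAID and LVM setup described here comes down to something like the following. The partition names, the volume group name (data, to match the fstab examples later in this hack), and the logical volume size are assumptions:

	# mirror primary master to secondary master, and primary slave to secondary slave
	mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1
	mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hdb1 /dev/hdd1

	# pool both mirrors into one volume group, then carve out space for user storage
	pvcreate /dev/md0 /dev/md1
	vgcreate data /dev/md0 /dev/md1
	lvcreate -L 250G -n music data
	mkfs -t xfs /dev/data/music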

Tip

If you experience performance problems and you need to export filesystems through both NFS and Samba, you may want to consider simply making each of the drives on the main IDE interface its own volume group, keeping the same mirroring layout, and exporting each drive as a single filesystem—one for SMB storage for your Windows users and the other for your Linux/Unix NFS users.

The next step is to decide how you want to partition the logical storage. This depends on the type of users you’ll be delivering this storage to. If you need to provide storage to both Windows and Linux users, I suggest creating separate partitions for SMB and NFS users. The access patterns for the two classes of users and the different protocols used for the two types of networked filesystems are different enough that it’s not a good idea to export a filesystem via NFS and have other people accessing it via SMB. With separate partitions they’re still both coming to the same box, but at least the disk and operating system can cache reads and handle writes appropriately and separately for each type of filesystem.

Getting insights into the usage patterns of your users can help you decide what type of filesystem you want to use on each of the exported filesystems [Hack #70] . I’m a big ext3 fan because so many utilities are available for correcting problems with ext2/ext3 filesystems.

Regardless of the type of filesystem you select, you’ll want to mount it using noatime to minimize file and filesystem updates due to access times. Creation time (ctime) and modification time (mtime) are important, but I’ve never cared much about access time and it can cause a big performance hit in a shared, networked filesystem. Here’s a sample entry from /etc/fstab that includes the noatime mount option:

	/dev/data/music   /mnt/music   xfs   defaults,noatime   0 0

Similarly, since many users will share the filesystems in your system, you’ll want to create the filesystem with a relatively large log. For ext3 filesystems, the size of the journal is always at least 1,024 filesystem blocks, but larger logs can be useful for performance reasons on heavily used systems. I typically use a log of 64 MB on NAS boxes, because that seems to give the best tradeoff between caching filesystem updates and the effects of occasionally flushing the logs. If you are using ext3, you can also specify the journal flush/sync interval using the commit=number-of-seconds mount option. Higher values help performance, and anywhere between 15 and 30 seconds is a reasonable value on a heavily used NAS box (the default value is 5 seconds). Here’s how you would specify this option in /etc/fstab:

	/dev/data/writing	/mnt/writing	ext3	defaults,noatime,commit=15	0 0
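
To create an ext3 filesystem with the larger 64-MB journal mentioned above, something like this should work (the logical volume name is an assumption):

	# mke2fs -j -J size=64 /dev/data/writing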

A final consideration is how to back up all this shiny new storage. I generally let the RAID subsystem do my backups for me by shutting down the systems weekly, swapping out the mirrored drives with a spare pair, and letting the RAID system rebuild the mirrors automatically when the system comes back up. Disk backups are cheaper and less time-consuming than tape [Hack #50] , and letting RAID mirror the drives for you saves you the manual copy step discussed in that hack.

Configuring System Services

Fine-tuning the services running on the soon-to-be NAS box is an important step. Turn off any services you don’t need [Hack #63] . The core services you will need are an NFS server, a Samba server, a distributed authentication mechanism, and NTP. It’s always a good idea to run an NTP server [Hack #22] on networked storage systems to keep the NAS box’s clock in sync with the rest of your environment—otherwise, you can get some weird behavior from programs such as make.

You should also configure the system to boot in a non-graphical runlevel, which is usually runlevel 3 unless you’re a Debian fan. I also typically install Fluxbox [Hack #73] on my NAS boxes and configure X to automatically start that rather than a desktop environment such as GNOME or KDE. Why waste cycles?
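On distributions that use a SysV-style /etc/inittab (a reasonable assumption for systems of this vintage), the default runlevel is controlled by the initdefault line, which looks like this for runlevel 3:

	id:3:initdefault: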

“Centralize Resources Using NFS” [Hack #56] explains how to set up NFS, and “Share Files Across Platforms Using Samba” [Hack #60] does the same for Samba. If you don’t have Windows users, you have my congratulations, and you don’t have to worry about Samba.

The last step in configuring your system is to select an appropriate authentication mechanism so that the NAS box knows about the same users as your desktop systems. This depends entirely on the authentication mechanism used in your environment in general; Chapter 1 of this book discusses a variety of available authentication mechanisms and how to set them up. If you’re working in an environment with heavy dependencies on Windows for infrastructure such as Exchange (shudder!), it’s often best to bite the bullet and configure the NAS box to use Windows authentication. The critical point for NAS storage is that your NAS box must share the same UIDs, users, and groups as your desktop systems, or your users are going to have problems with the new storage the NAS box provides. One round of authentication problems is generally enough to make any sysadmin fall in love with a distributed authentication mechanism; which one you choose depends on how your computing environment has been set up in general and what types of machines it contains.
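Before letting users loose on the new storage, it’s worth a quick sanity check that a sample account resolves to the same UID and GID on the NAS box as on a desktop machine; joe here is just a placeholder username:

	# getent passwd joe
	# id joe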

Deploying NAS Storage

The final step in building your NAS box is to actually make it available to your users. This involves creating some number of directories for the users and groups who will be accessing the new storage. For Linux users and groups who are focused on NFS, you can create top-level directories for each user and automatically mount them for your users using the NFS automounter and a similar technique to that explained in [Hack #57] , wherein you automount your users’ NAS directories as dedicated subdirectories somewhere in their accounts. For Windows users who are focused on Samba, you can do the same thing by setting up an [NAS] section in the Samba server configuration file on your NAS box and exporting your users’ directories as a named NAS share.
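As a rough illustration of both approaches, here is what the pieces might look like; the host name nasbox, the exported paths, and the share name are all placeholders, and the real details are covered in the hacks referenced above. An autofs master map entry plus a wildcard map mounts each NFS user’s directory on demand:

	# /etc/auto.master
	/nas	/etc/auto.nas

	# /etc/auto.nas
	*	-fstype=nfs,rw,soft,intr	nasbox:/mnt/nfs/&

A corresponding Samba share for Windows users might look like this in smb.conf, where %U expands to the connecting user’s name:

	[NAS]
	   comment = NAS storage
	   path = /mnt/smb/%U
	   valid users = %U
	   read only = no
	   browseable = no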

Summary

Building and deploying your own NAS storage isn’t really hard, and it can save you a significant amount of money over buying an off-the-shelf NAS box. Building your own NAS systems also helps you understand how they’re organized, which simplifies maintenance, repairs, backups, and even the occasional but inevitable replacement of failed components. Try it—you’ll like it!

See Also

Share Files and Directories over the Web

WebDAV is a powerful, platform-independent mechanism for sharing files over the Web without resorting to standard networked filesystems.

WebDAV (Web-based Distributed Authoring and Versioning) lets you edit and manage files stored on remote web servers. Many applications support direct access to WebDAV servers, including web-based editors, file-transfer clients, and more. WebDAV enables you to edit files where they live on your web server, without making you go through a standard but tedious download, edit, and upload cycle.

Because it relies on the HTTP protocol rather than a specific networked filesystem protocol, WebDAV provides yet another way to leverage the inherent platform-independence of the Web. Though many Linux applications can access WebDAV servers directly, Linux also provides a convenient mechanism for accessing WebDAV directories from the command line through the davfs filesystem driver. This hack will show you how to set up WebDAV support on the Apache web server, which is the most common mechanism for accessing WebDAV files and directories.

Installing and Configuring Apache’s WebDAV Support

WebDAV support in Apache is made possible by the mod_dav module. Servers running Apache 2.x will already have mod_dav included in the package apache2-common, so you should only need to make a simple change to your Apache configuration in order to run mod_dav. If you compiled your own version of Apache, make sure that you compiled it with the --enable-dav option to enable and integrate WebDAV support.
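If you’re building Apache 2.x from source, the relevant configure flags look something like the following; any other options you normally build with are unchanged:

	$ ./configure --enable-dav --enable-dav-fs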

Tip

To enable WebDAV on an Apache server that is still running Apache 1.x, you must download and install the original Version 1.0 of mod_dav, which is stable but is no longer being actively developed. This version can be found at http://www.webdav.org/mod_dav/.

If WebDAV support wasn’t statically linked into your version of Apache2, you’ll need to load the modules that provide WebDAV support. To load the Apache2 modules for WebDAV, do the following:

	# cd /etc/apache2/mods-enabled/
	# ln -s /etc/apache2/mods-available/dav.load dav.load
	# ln -s /etc/apache2/mods-available/dav_fs.load dav_fs.load
	# ln -s /etc/apache2/mods-available/dav_fs.conf dav_fs.conf
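If your system provides the a2enmod helper (as Debian-derived apache2 packages do), you can accomplish the same thing with:

	# a2enmod dav
	# a2enmod dav_fs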

Next, add these two directives to your httpd.conf file to configure Apache’s WebDAV support:

	DAVLockDB /tmp/DAVLock
	DAVMinTimeout 600

These can be added anywhere in the top level of your httpd.conf file; in other words, anywhere that is not inside the definition of a single directory or virtual host. The DAVLockDB directive gives the path of the lock database that mod_dav uses to track WebDAV locks; the directory containing it must exist and must be writable by the Apache service account’s user and group. The DAVMinTimeout directive sets the minimum lock lifetime, in seconds, that the server will grant when a client requests a lock.

Next, you’ll need to create a WebDAV root directory. Users will have their own subdirectories beneath this one, so it’s a bit like an alternative /home directory. The directory itself only needs to be readable and searchable by the Apache service account; write access is granted on the per-user subdirectories created beneath it. On most distributions, this user will probably be called apache or www-data. You can check this by searching for the Apache process in ps using one of the following commands:

	# ps -ef | grep apache2
	# ps -ef | grep httpd

A good location for the WebDAV root is at the same level as your Apache document root. Apache’s document root is usually at /var/www/apache2-default (or, on some systems, /var/www/html). I tend to use /var/www/webdav as a standard WebDAV root on my systems.

Create this directory, assign it to the Apache service account’s group (apache, www-data, or whatever other name is used on your systems), and give that group read and search access:

	# mkdir /var/www/webdav
	# chown root:www-data /var/www/webdav
	# chmod 750 /var/www/webdav

Now that you’ve created your directory, you’ll need to enable it for WebDAV in Apache. This is done with a simple Dav On directive, which can be located inside a directory definition anywhere in your Apache configuration file (httpd.conf):

	<Directory /var/www/webdav>
		Dav On
	</Directory>

Creating WebDAV Users and Directories

If you simply activate WebDAV on a directory, any user can access and modify the files in that directory through any WebDAV-capable client. While a complete absence of security is convenient, it is not “the right thing” in any modern computing environment. You will therefore want to apply the standard Apache techniques for specifying the authentication requirements for a given directory in order to properly protect files stored in WebDAV.

As an example, to set up simple password authentication you can use the htpasswd command to create a password file and set up an initial user, whom we’ll call joe:

	# mkdir /etc/apache2/passwd
	# htpasswd -c /etc/apache2/passwd/htpass.dav joe

Tip

The htpasswd command’s -c flag creates a new password file, overwriting any previously created file (and all usernames and passwords it contains), so it should only be used the first time the password file is created.

The htpasswd command will prompt you once for joe’s new WebDAV password, and then again for confirmation. Once you’ve specified the password, you should set the permissions on your new password file so that it can’t be read by standard users but is readable by any member of the Apache service account group:

	# chown root:www-data /etc/apache2/passwd/htpass.dav
	# chmod 640 /etc/apache2/passwd/htpass.dav

Next, the sample user joe will need a WebDAV directory of his own, with the right permissions set:

	# mkdir /var/www/webdav/joe
	# chown www-data:www-data /var/www/webdav/joe
	# chmod 750 /var/www/webdav/joe

The sample user will also need Apache to check credentials against the password file you just created with htpasswd when his directory is accessed, so you’ll have to update httpd.conf with authentication directives for that directory:

	<Directory /var/www/webdav/joe/>
		AuthType Basic
		AuthName "WebDAV"
		AuthUserFile /etc/apache2/passwd/htpass.dav
		Require user joe
	</Directory>

Tip

WebDAV in Apache uses the same authorization conventions as any Apache authentication declaration. You can therefore require group membership, enable access to a single directory by multiple users by listing them, and so on. See your Apache documentation for more information.

Now just restart your Apache server, and you’re done with the Apache side of things:

	# /usr/sbin/apache2ctl restart

At this point, you should be able to connect to your web server and access files in /var/www/webdav/joe as the user joe from any WebDAV-enabled application.
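One quick way to test this from a Linux client is with a command-line WebDAV tool such as cadaver (if it’s installed), or by mounting the directory through the davfs driver mentioned earlier (provided by the davfs2 package); webserver.example.com and /mnt/dav are placeholders for your own server name and mount point:

	$ cadaver http://webserver.example.com/webdav/joe/
	# mount -t davfs http://webserver.example.com/webdav/joe/ /mnt/dav

Either way, you’ll be prompted for joe’s WebDAV password before you can browse or edit the files.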

See Also
