CHAPTER 9
Bringing It All Together

So far, this book has focused mostly on technical details of the portability of the shell. Good portable code, however, requires additional skills. It is impossible to successfully test on everything; sooner or later, a script you write will be used on a system that didn't even exist when you wrote it. New standards will come out, new extensions will be defined, and new bugs will sneak into production releases. This chapter discusses some of the ways in which you can write scripts that are more likely to survive new systems.

It is not usually enough to have a script that will run on the existing systems you are targeting. Furthermore, it may not be enough to have a script that runs everywhere. If your script is confusing or unmaintainable as a result of your portability efforts, you will end up with more bugs; this makes your script useless to you.

Robustness

A program is called robust when it works despite unexpected failures. Robustness is useful on many levels. A robust program is more likely to work when something minor goes wrong; it is more likely to give useful diagnostics when something major goes wrong. Robust programs are more likely to detect and correct for bugs, to handle unexpected circumstances, and to survive transitions or changes in their working environment.

Robustness matters a little more in portable code than in other code because there are more things that might go unexpectedly wrong. Systems may provide multiple incompatible versions of a utility, but if you try to specify a particular one by path, your code may not survive a transition to other systems.

Computer security and reliability people often advocate a strategy of having multiple redundant layers of protection against errors; this is called defense in depth. You should design your code to detect, and protect against, errors at multiple points. Verify that file names are valid; check that operations succeed. There will be bugs sooner or later, even in your error-handling code. Test your assumptions early on, but test them later, too. Sanity-check values. If you think you've gotten the absolute path to a file, it had better have a path separator in it; if it doesn't, something went wrong.

Handling Failure

The essence of robust code is that it handles failure. You cannot ensure that nothing will ever fail; all you can do is check for failure and handle it. Handling an error need not mean correcting it. Sometimes, all you can do is diagnose that something went wrong and possibly abort execution before things get worse.

Handling Is Not Always Correction

In some cases, you can correct an error. That's great. In some cases, it is not possible to correct an error. At this point, you should emit a diagnostic explaining what went wrong, clean up, and abort. It is rarely beneficial to try to continue after a problem, although in some cases it can be. As a general rule, if future operations are not dependent on previous operations, try them all, reporting errors for the ones that fail. If operations are in a logical sequence, abort execution once something has gone wrong.

If You Can't, Don't

Let me start with the most important lesson of all in failure handling: If you cannot do anything about an error, it is useless to check for it. This doesn't mean you shouldn't check for errors that you can't completely correct; only errors you can't do anything about. For instance, consider the following code fragment:

func_die() {
  if echo "$@" >&2
  then :
  else : # what goes here?
  fi
  exit 1
}

This function tries to display a message and then exit unsuccessfully, much like the standard Perl function die. It tests whether the echo command succeeded, and if it didn't... well, now what? If you can't write to standard error, you can't display a message on standard error saying that you can't write to standard error. The script was already going to exit with an abnormal exit code, so it can't use that to communicate that something has gone horribly wrong. While being unable to display error messages is perhaps a problem, it is not a problem you can solve. There is nothing that can be done to correct this error, or accommodate it, or work around it. If you are thinking, "Well, you could try to write a message to standard output," you get bonus points for creativity. But this is a very bad idea. If there is one thing worse than an undiagnosed error, it's an error diagnostic making it into what should have been a pure data stream.

There is another potential problem with this proposed function. If your script was expecting to do cleanup after a problem, that cleanup code may not get run, leaving temporary files or other objects in limbo, possibly cluttering things or causing errors on future runs. If you have cleanup code, run it before calling any function that is designed to exit the script (or use trap cleanup_code 0; see Chapter 6).
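
As a minimal sketch of that advice (the file name and function here are illustrative, not part of the earlier example), registering the exit trap before defining an exiting function means the cleanup runs no matter which code path calls exit:

workfile=/tmp/work.$$
trap 'rm -f "$workfile"' 0

func_die() {
  echo "$@" >&2
  exit 1
}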

When You Find Yourself in a Hole, Stop Digging

The great disasters of my script programming career have usually been code that worked perfectly running after code that failed. Here is a sample of the sort of thing I have done wrong:

for i in $names
do
  mkdir $i
  cd $i
  [... do stuff ...]
  cd ..
  rm -rf $i
done

Nice, simple script, right? Here's my advice: Do not try this script when there is a possibility that one of the names in $names will be something like . (the current directory), will have a space in it, or will be anything else weird. Here's what happens with the word . in $names:

mkdir .            # fails
cd .               # succeeds, but I'm still in the directory I started in
[... do stuff ...] # might or might not work
cd ..              # oops, I'm now above the directory I started in
rm -rf .           # and now I remove the new working directory

Nicely catastrophic for a seemingly harmless chunk of code. Now, imagine that I'd written this with even minimal error checking:

for i in $names
do
  mkdir $i || continue
  ( cd $i || exit
  [... do stuff ...]
  )
  rm -rf $i
done

In this case, if the initial mkdir fails, nothing gets done at all. (No diagnostic message, which is bad style, but nothing happens, so at least I don't have to go looking for backups.) Putting the cd command in a subshell ensures that the next command after the subshell is in the directory I started in, no matter what happens. Whether I create or change directories during "do stuff," whatever happens is in a subshell and does not affect the calling code. There is still a lot of room for cleaning up this code and fixing it, but the two most common errors are now prevented.

Do not omit error checking. For readability and brevity, this book omits a large amount of error checking in many examples. Be more careful than that in production code.

One possibility to consider is using set -e to cause the shell to abort if an error occurs. When the -e option is set, the shell exits immediately after executing any complete command that yields a non-zero exit status. Commands that are part of explicit tests, such as the control expressions of if statements or while loops, do not cause the shell to exit, but a failing command in the body of a loop will. If you use set -e, any command that might legitimately fail must be explicitly tested, or the script will abort without comment. For instance, the following fragment would not be safe:

diff -u file.old file.new > file.diffs

The diff command, in addition to writing differences between two files to standard output, returns a non-zero status if it encounters any differences. In cases where you simply do not care about the exit status of a command, you can follow it by || true, ensuring that the command as a whole yields a successful return status, as in the following example:

diff -u file.old file.new > file.diffs || true

I do not recommend using set -e; it is uncomfortably vulnerable to overlooking boundary conditions that are genuinely harmless. Furthermore, the lack of any diagnostic message from the shell makes it hard to figure out what went wrong. (You can use a trap to print some kind of diagnostic, but there is still no way to say what went wrong.)
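
If you do use set -e, here is a minimal sketch (the variable name is illustrative) of using an exit trap to at least announce that the script died early; it still cannot say which command failed:

set -e
script_done=false
trap 'test "$script_done" = true || echo "myscript: aborted before completion" >&2' 0
# ... the rest of the script ...
script_done=true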

Temporary Files and Cleanup

In previous chapters, passing reference was made to using the trap command to handle cleanup of temporary files. The first thing you must know is that you cannot ensure that cleanup will be run successfully. If someone sends your script a SIGKILL signal, execution ceases and nothing more gets done. If you need to ensure that data are never exposed, do not put them in temporary files.

The first stage of implementing good cleanup is simply to perform cleanup. If you create temporary files, delete them when you are done with them. However, there are a number of additional subtleties to the creation and use of temporary files. First, it is hard to create a temporary file securely, ensuring that other programs cannot create problems for your script's temporary file, whether maliciously or accidentally. Secondly, cleaning up temporary files can be complicated, especially if you create a number of them.

Creating Temporary Files

There are a number of issues you need to consider when creating a temporary file. First, you must ensure that there are no clashes. You want to make sure that other programs will not inadvertently end up using the same files you do. This applies both to instances of other programs and to multiple instances of the same script running at the same time. As a general rule, a good starting place is to use the process ID as one component of a file name. This generally provides reasonable protection against accidents. The only major caveat to keep in mind is that a long-running system will eventually recycle process IDs, so be sure to empty or truncate temporary files before using them even when using a process ID. Since two processes running at the same time cannot have the same pid, this may be enough.

However, there are a few limitations. If the location in which your temporary files are created is shared storage, there may be two programs running on different computers with the same pid, leading to clashes. Subshells have the same $$ value as their parent process, so a subshell trying to generate a unique name might clash. Finally, there is the most serious problem: Not all clashes are accidents. Malicious users often use the semi-predictable naming of temporary files as a way to attack vulnerable programs.

It is not sufficient to check whether a file exists before creating it; the window between the existence check and the creation of the file is plenty of time for an attacker to create a file your script can open, giving the attacker access to your temporary file. (Do not rely on the notion that this is too rare to occur; the attacker only has to get lucky once, but you have to get lucky every time.)

Quite simply, you cannot portably avoid this problem in shell. It isn't even entirely portable to work around this in C. The good solutions are not as widely standardized as you might hope. When you open a file, it is possible that it already existed; if it did, your script is compromised.

So, here is the secret to creating temporary files safely: Don't. If you really need a temporary file, you have to be in control not just of the file, but of the directory it is created in.

Creating Temporary Directories

While file creation is prone to risks, directory creation has a significant advantage: If you try to create a directory that already exists, mkdir fails. This allows you to be sure that the directory you finally create is actually owned by you. The only hard part is ensuring that the directory is not writeable by other users; otherwise, any attempt to create files in the directory is vulnerable to the problems previously described for temporary files. Ensure that the directory's mode is restrictive by using umask or using the -m mode option to mkdir. The following two examples are functionally equivalent:

(umask 077; mkdir "$tmpdir")
mkdir -m 0700 "$tmpdir"

The -m mode option is portable to modern systems and avoids the use of a subshell. If you need to target Windows systems, you might prefer it. To use this in a script, be sure to check whether mkdir succeeded. It is not sufficient to check whether the directory exists; if an attacker created the directory already, it will exist but will not be under your control. A typical usage might look like this:

if mkdir -m 0700 "$tmpdir" 2>/dev/null; then
  echo "Successful creation of temporary directory." >&2
else
  echo "Could not create temporary directory." >&2
fi

You might want to wrap this in a loop to try to generate likely directory names. For a more complete solution, look at the func_mktempdir function in libtool. There are a number of additional utilities, such as mktemp, that might help you out but are not universally available. Know your target systems. If mktemp is not available, at least try to make your temporary file names a little unpredictable. Using your pid ($$) alone is not very good at protecting against attackers; in shells that have the $RANDOM variable, you can use that, as in the following example (extracted from libtool):

my_tmpdir="${my_template}-${RANDOM-0}$$"

In a shell that has no special $RANDOM parameter, ${RANDOM-0} expands to 0 (or whatever a user may have set it to).
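
To make the retry loop concrete, here is a minimal sketch (not the libtool implementation; the names are illustrative):

my_template="${TMPDIR-/tmp}/myscript"
my_tmpdir=
attempt=0
while test $attempt -lt 10; do
  try="${my_template}-${RANDOM-0}$$.$attempt"
  if mkdir -m 0700 "$try" 2>/dev/null; then
    my_tmpdir=$try
    break
  fi
  attempt=`expr $attempt + 1`
done
test -n "$my_tmpdir" || {
  echo "could not create a temporary directory" >&2
  exit 1
}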

Once you have succeeded in creating a directory, you can use it to hold temporary files. Because the directory is owned by you and has a restrictive mode, you do not need to worry about race conditions or attackers, as long as the temporary directory itself is reasonably secure. (Of course, a user with root privileges can override this, but a user with root privileges always wins a security fight.)

Do not use the -p option with mkdir in this circumstance. First, it is nonportable. Second, mkdir -p silently succeeds if the target directory already exists. This eliminates the security benefit of using mkdir instead of just creating individual temporary files. If you wish to make an arbitrarily nested directory, you can do so by looping through making the directories one at a time. For a more detailed example, examine the func_mkdir_p implementation in libtoolize.
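
Here is a minimal sketch of that loop, splitting the path on / and creating one component at a time (func_mkdir_p in the libtool sources is considerably more thorough):

func_mkdir_nested() {
  save_IFS=$IFS
  IFS=/
  set -- $1              # split the path into its components
  IFS=$save_IFS
  _sofar=
  for _component
  do
    _sofar="$_sofar$_component/"
    test -d "$_sofar" || mkdir "$_sofar" || return 1
  done
}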

Removing Temporary Files

When you are done with temporary files, delete them. (For debugging, you may wish to have an option to your script that suppresses this normally desired behavior.) If you are using a temporary directory, and you should be, this is made much easier by the fact that you can simply delete the whole directory and its contents.

In general, you should be aggressive about deleting files as soon as you can; this reduces the amount of junk left around the system if a script is killed unexpectedly. Try to avoid relying on exit traps (see the discussion in the next section, "Handling Interrupts"); instead, ensure that files are deleted as soon as you are done with them. It may make sense to store a list of files to remove when done, or you can remove a whole temporary directory at once. In general, though, leaving all the files until the end is careless and may result in unwanted surprises.

Handling Interrupts

The shell does not have a real exception handling mechanism in the sense that some more recent programming languages do. However, the trap command can provide for some simple emergency recovery after errors. In particular, you can use the special signal 0 to perform cleanup tasks whenever the script exits, assuming it exits cleanly (rather than being killed by another signal, for instance). The following script fragment creates a temporary directory (using an admittedly insecure name for brevity), then registers a handler to remove it on exit:

mkdir /tmp/example.$$
trap "rm -rf /tmp/example.$$" 0

This example works as designed, but it is vulnerable to a subtle bug. Imagine that two pieces of code in your script do the same thing:

mkdir /tmp/example_a.$$
trap "rm -rf /tmp/example_a.$$" 0
mkdir /tmp/example_b.$$
trap "rm -rf /tmp/example_b.$$" 0

This script removes the example_b directory, but not the example_a directory; the second exit trap replaces the first.
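
One way around this, sketched below with illustrative names, is to accumulate the things to be removed in a single variable and reinstall one trap that removes them all:

cleanup_list=
func_cleanup_add() {
  cleanup_list="$cleanup_list $1"
  trap 'rm -rf $cleanup_list' 0
}
mkdir /tmp/example_a.$$ && func_cleanup_add /tmp/example_a.$$
mkdir /tmp/example_b.$$ && func_cleanup_add /tmp/example_b.$$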

While this provides for last-minute cleanup for normal script exits, it doesn't do anything when a signal is caught by another handler. If the shell exits from an interrupt handler, it is likely to run an exit trap handler (though zsh does not).

There is no definite rule as to whether or not you should trap interrupts. In general, it is nice to clean up any temporary files you are creating, although you may want an option to suppress this behavior; it can be maddening to debug a script that deletes all the evidence when it screws up. Leave that strategy to the politicians. The case where it's most important to start trapping interrupts is code with critical sections where a system's intermediate states are unusable. If you are modifying system files in a script, it may make sense to trap interrupts to prevent accidents. Most scripts have no reason to trap most signals.

Startup Files and Environment Variables

The entire environment in which you write scripts is potentially subject to user interference. Executing commands relies on the $PATH environment variable, but there's more. Many utilities have behavior that can be influenced by environment variables. The $BLOCKSIZE environment variable can, on some systems, alter the output of many common utilities.

In some shells, there are startup scripts that may be processed even when running a shell script. For instance, pdksh and ash run the $ENV setup script at the start of execution even when running a script. Because shell functions and aliases both take priority over external commands, it is possible that a user's startup environment will substantially alter the behavior of a script.

There is very little you can do to be sure that none of this has happened to you. You can set the $BLOCKSIZE variable to an empty string while you are using utilities that rely on it. But be sure to set it back later; users probably set it for a reason. By the time your code is executing, however, it is too late to try to prevent $ENV from being run.
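
For instance, here is a minimal sketch of clearing $BLOCKSIZE for only the one command whose output a script will parse (the df invocation is just an example). Scoping the assignment to the single command leaves the user's setting untouched, so there is nothing to restore afterward:

disk_report=`BLOCKSIZE= df /tmp`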

Ultimately, this is an unwinnable fight. Take a few reasonable precautions, but apart from that, if users run in a sufficiently misconfigured shell, scripts will fail. This is a good reason for users not to configure their environment badly. There are clever tricks (quoting the names of aliased commands, trying to reexecute the script with $ENV set to an empty string), but ultimately it is not worth the hassle. It is up to the user not to give you a hopelessly misconfigured environment.

Documentation and Comments

In general, the shell does not care about comments. You should.

The concept of defense in depth extends beyond just the question of how you try to ensure that there are no bugs. There will be bugs. You will need to maintain this code, or someone else will. (And don't get careless about that; you'll be the "someone else" for other people's code some day.) When you have to debug a script that you or someone else wrote long ago, you will need to understand how it works to identify the bugs. Good comments are a big part of successful debugging.

Furthermore, beyond the mere question of individual code comments explaining code fragments, be sure to have some top-level documentation. What is this script? What does it do? What arguments does it take? What arguments are valid? What systems has it been tested on? What assumptions does it make? Every one of those questions could, quite easily, turn out to be the source of a major problem somewhere down the road. Answer them early on, ideally in comments within your script.

When you validate arguments (and you should always do this), be sure you give a clear error message (to standard error, not standard output) showing your script's usage options. Stick to the normal UNIX conventions to express options and arguments.
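
A minimal sketch of such validation (the script name and options are purely illustrative):

usage() {
  echo "usage: myscript [-v] [-o outfile] file ..." >&2
  exit 2
}
test $# -gt 0 || usage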

What to Document

Describe the basic purpose and design of your script. Explain what job it does, and how it should be invoked. Here's a sample:

# errno: Explain error names or numbers
# usage: errno error...
# e.g., errno ENOENT
# output:
# ENOENT [2]:            No such file or directory
# inputs should be integers or symbolic errno values.
# relies on /usr/include/sys/errno.h

This small chunk of text tells you pretty much what you need to know to maintain this program, and if you're an experienced C programmer who's used a number of platforms, it even gives you a pretty good idea of what's likely to go wrong. Different systems use different files to hold the error definitions, here described as existing in /usr/include/sys/errno.h. If this utility gives cryptic error messages (other than those intended) on a new system, it is quite likely that the problem has to do with the choice of header file. If I had taken the two minutes it took to write that back in 1994 when I wrote this script, I would have saved myself about ten minutes of staring at a script I no longer remembered anything much about in 2008. (This was the one shell script from my previous work that needed modification when I started doing more Linux work.)

What to Comment

Not everything. There is no code so unreadable as code that has been commented by someone who thinks everything needs comments. Avoid commenting on common and well-known idioms; use such idioms frequently so that you need fewer comments. Your goal is not to comment lots; it is to comment well and clearly. A reader only needs to see this once to lose all hope that the program in question is going to work:

count=`expr $count + 1`    # add 1 to count

In general, comments should tell the reader something that might not be obvious. Go ahead and assume that your reader knows what basic UNIX commands do. The subject of comments should be an explanation of why you are doing something, rather than just a simple description of what you are doing. Compare these two comments:

args="$args $i" # append $i to $args
args="$args $i" # build list of files

The former comment is useless; the latter comment at least tells you what the purpose or goal of the code is.

Any function you define should have at least a brief comment explaining its arguments, behaviors, and any outputs. Distinguish between return code, output, and side effects (such as file modification). This description should be in addition to any comments needed on the function's actual code.

Comment on mechanics sparingly, but there are times when it is appropriate. If it takes you a while to get a very small piece of code right, go ahead and explain it. I have rarely seen a shell script using eval in a way that could not have benefited from an explanatory comment.
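
For instance, a hypothetical helper function might carry a header comment like this, plus a note explaining its eval:

# func_append VAR VALUE
# appends VALUE, preceded by a space, to the shell variable named VAR.
# produces no output; modifies the named variable as a side effect.
func_append() {
  # eval is needed so that $1 can name the variable being modified
  eval "$1=\"\${$1} \$2\""
}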

Stylistically, feel free to put small comments on the same line as the code they explain, although I recommend a bit of extra space to make them visually distinct. If you have several commented lines in a row, aligning the comments can make them easier to read. Longer comments or comments on whole blocks of code tend to look better above the code they explain. Some programs are obliged to process options, even if they occur later in the command line. The following script fragment does this and explains what it is doing:

# sort arguments into options and file names
files=""
opts=""
for i
do
  case $i in
  -*) opts="$opts $i";; # name starting with - is an option
  *) files="$files $i";;
  esac
done

The comment at the top of the fragment explains the purpose of the code; the inline comment explains a particular convention to the reader. Of course, you would do better to use something more flexible, such as the command processing code illustrated in Chapter 6.

The most common problem with comments in old code, and one of the key arguments against over-commenting, is that comments tend to become inaccurate over time. When you update code, be sure to update the comments as well. This is more work in code with more comments, especially trivial comments. It is common for tiny details of a script to change; it is rare for fundamental algorithms or designs to change. This suggests a good guideline in commenting; your comments should explain the code, not repeat it. Otherwise, you end up with comments that start out useless, and eventually become wrong. Imagine encountering the following line in a script you are debugging:

BLOCK=4096    # use one-kilobyte BLOCKSIZE

Is this a bug? If it is, is the bug that the comment is wrong or that the definition of BLOCK is wrong? Is the name of the variable wrong? This comment creates more questions than it answers. It is also, distressingly, not a particularly atypical comment. In this case, the best guess is that the code originally read BLOCKSIZE=1024, and that the code has changed and the comment hasn't. If you are looking for a bug, especially a bug involving handling of block sizes, it is quite possible that this is it. (This example is based on real code I saw, although it was not in a shell script.)

Degrade Gracefully

Programs that do the best they can, correctly, rather than failing dismally, are said to degrade gracefully. It is quite reasonable to try to provide extra features when possible, but if those features impact portability, it is often better to provide an alternative, even if it may be less functional. For instance, some installation scripts that need root privileges try to use the sudo utility to gain them. When the utility is installed, and when the user has access to it, this can be quite convenient. However, if the sudo utility is missing, such a script may fail unconditionally, even when run as root. That makes the script less useful to users who have root access but lack the sudo utility. A better choice would be to try to use sudo only if it is installed. If the utility is unavailable, check for permissions instead. If you need additional privileges, tell the user what privileges you need, and exit gracefully without doing anything else; if you already have the needed privileges, just run normally.
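
Here is a minimal sketch of that approach; it assumes the id and command utilities behave as they do on modern POSIX systems, which is worth verifying for your targets:

if test "`id -u`" -eq 0; then
  as_root=                  # already root; run commands directly
elif command -v sudo >/dev/null 2>&1; then
  as_root=sudo              # sudo is available; use it
else
  echo "This script needs root privileges; please rerun it as root." >&2
  exit 1
fi
$as_root cp newfile /usr/local/bin/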

If there is some check you must make in a fairly frequent operation, make it into a shell function. There is no reason to write your test for a given utility, or even just a conditional operation, dozens of times. You might want to use the system's install utility if it is available, but fall back on manual copying. (A disclaimer: There are differences between traditional BSD and System V install programs. You may not want to use either.) First, you would determine the path to the system utility, if it's in $PATH:

found_install=` IFS=':';
  for dir in $PATH; do
    test -x "$dir"/install && { echo "$dir"/install; exit 0; }
  done `

This mildly elaborate chunk of code checks each directory in $PATH for an executable named install; if it finds one, it echoes the name and the subshell exits. (The exit is needed in case of a system on which there is more than one such program in $PATH.) Given this, you could write code like the following to install a program in $HOME/bin:

if test -n "$found_install"; then
  "$found_install" -m 755 newscript "$HOME/bin"
else
  rm -f "$HOME/bin/newscript"
  cp newscript "$HOME/bin"
  chmod 755 "$HOME/bin/newscript"
fi

While this works fine for a single file, it quickly becomes awkward. The first step in correcting this is to move it into a function:

func_install() {
  for file
  do
    if test -n "$found_install"; then
      "$found_install" -m 755 "$file" "$HOME/bin"
    else
      rm -f "$HOME/bin/$file"
      cp "$file" "$HOME/bin"
      chmod 755 "$HOME/bin/$file"
    fi
  done
}

Now, calls to this function are much briefer, and easier to write, than the longer if-else construct was. However, there is another improvement possible. In general, the value of $found_install should never change. So why test it all the time?

if test -n "$found_install"; then
  func_install() {
    for file
    do
      "$found_install" -m 755 "$file" "$HOME/bin"
    done
  }
else
  func_install() {
    for file
    do
      rm -f "$HOME/bin/$file"
      cp "$file" "$HOME/bin"
      chmod 755 "$HOME/bin/$file"
    done
  }
fi

Now, the function definition depends on the results of the initial test, and each function call omits the separate test. While in this case the behaviors are fairly similar (though not identical), this works even when the net result is a noticeable difference in provided functionality.

Of course, this assumes that all the install utilities are compatible; they are not, and this is why many programs ship with an external install-sh script, which tries to provide a reasonably stable set of options and semantics. The tricky part is that the conventional System V install utility has completely different semantics for the -c option. The BSD semantics are probably better; this is why people tend to specify it or provide wrappers (such as the portable install-sh distributed with many configure scripts).

Specify, and Test For, Requirements

Whenever possible, test for the preconditions your script requires rather than just failing dismally. A script that needs root privileges should test for them first and give an informative error message if it doesn't have them. Trying to run with a genuine requirement absent is crazy. It is not just that your script may not work; it is that it may work partially. A few pages full of "Permission denied" messages are bad enough, but the commands that don't fail may have surprising effects (see "When You Find Yourself in a Hole, Stop Digging" earlier in this chapter).

As a general rule, once you have a list of requirements for your script (inputs, valid arguments, privileges, programs you depend on), you should check for them all before starting to do anything substantial. (This can impose a substantial performance cost; see the next section, "Scripts That Write Scripts.")
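
A minimal sketch of such an up-front check, scanning $PATH the same way the install example earlier in this chapter does (the list of utilities is illustrative):

missing=
for prog in awk sed sort; do
  found=` IFS=':';
    for dir in $PATH; do
      test -x "$dir/$prog" && { echo yes; exit 0; }
    done `
  test -n "$found" || missing="$missing $prog"
done
if test -n "$missing"; then
  echo "required utilities not found:$missing" >&2
  exit 1
fi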

As you write the documentation describing your requirements, write tests for any that you can figure out a way to test for. Be as cautious and thorough as you have time for; the frequency with which surprising things go wrong is itself surprising.

Finally, if you come up with an elegant test for a requirement, and it implies a work-around, feel free to remove the requirement and just write the script to be more portable in the first place. It will save you time later.

Scripts That Write Scripts

Sometimes the best way to develop a portable script is to use another utility to create the final script. There are two major ways to pursue this. One is to use a tool like m4sh to build a very carefully tuned portable script while hiding most of the hard work from you. Another is to write a script that creates as output a less portable script tuned for a given system.

Building a Script for a Specific Target

If it is practical to ensure that a script is always recreated for each target system, you can run another program (often a script) on each target that performs all the usual tests and builds the script correctly for a particular target. The result is an output file that is not portable, but is built in a way that allows it to target multiple systems. People who have worked in compiled languages will find this oddly familiar. This can noticeably improve the runtime performance of a script on a given system, but it does leave you with a problem: If you fix bugs on a given system, you have to do extra work to propagate them to other systems. This is usually only useful if performance is very important for a given script. A few milliseconds of startup time are usually a nonissue.

The simplest way to do this is to run something similar to an execution preamble; however, instead of executing the variable definitions and function definitions, write them into a file that becomes a header for a script. The remainder of the script code can be appended to this preamble to create a working script. For instance, the following header might work on a POSIX standard system with no special requirements:

#!/bin/sh

A system where the default shell is pre-POSIX might need a more elaborate header:

#!/bin/zsh
emulate sh
NULLCMD=:

In each case, the idea is to replace ten or twenty lines of execution preamble with the special case code needed for a particular system. The "script file" appended to these would lack the shebang line and be written with the assumption that the shell is always a standard POSIX shell. The preceding headers could be generated by a simple script:

#!/bin/sh
if eval '! false' 2>/dev/null; then
  func_script() {
    for i
    do
      ( echo "#!/bin/sh"; cat $i ) > $i.out
    done
  }
else
  func_script() {
    for i
    do
      ( echo "#!/bin/zsh"; echo "emulate sh"; echo "NULLCMD=:"; cat $i ) > $i.out
    done
  }
fi
func_script "$@"

Given the names of input script files, this creates new files (with .out appended to their names) with suitable execution preambles. There is plenty more you could usefully do in such a script; this is a minimal example to illustrate the technique. (More complete examples may be found in Chapter 7.)

Mixing with Other Languages

It is not necessary that a program used to create shell scripts be itself a shell script; the m4sh language uses m4, and some people have done reasonably well using make to create shell scripts. In particular, you should be comfortable with using both sed and awk, which are excellent candidates for textual manipulation (such as rewriting or modifying shell scripts). It is also useful to learn m4. Chapter 11 comes back to the question of how to mix shell code with other languages.

There are very few targets for which Perl is not available, but there are a fair number on which it is not installed out of the box (most notably, NetBSD). Although Perl is undoubtedly a more powerful and convenient language than the Bourne shell for many tasks, I continue to write many scripts primarily in shell.

What's Next?

Chapter 10 gets even farther away from the fiddly details and explores the question of what makes a shell script work well in a broader environment: conventions your scripts should follow, ways to be sure your script will stay useful on new systems and in new circumstances, and more.
