Nobody’s perfect. We all make mistakes, especially when we are first learning something new. We have all been there, done that. You know, the silly mistake that seems so obvious once you’ve had it explained, or the time you thought for sure that the system must be broken because you were doing it exactly right, only to find that you were off by one little character—one which made all the difference. Certain mistakes seem common, almost predictable, among beginners. We’ve all had to learn the hard way that scripts don’t run unless you set execute permissions on them—a real newbie kind of error. Now that we’re experienced, we never make those mistakes anymore. What, never? Well, hardly ever. After all, nobody’s perfect.
You have two choices. First, you could invoke bash and give it the name of the script as a parameter:
bash my.script
Or second (and better still), you could set execute permissions on the script so that you can run it directly:
chmod a+x my.script
./my.script
Either method will get the script running. If you intend to use the script over and over, you’ll probably want to set execute permissions on it. You only have to do that once; thereafter you can invoke the script directly. With the permissions set, it feels more like a command, since you don’t have to explicitly invoke bash (of course, behind the scenes bash is still being invoked, but you don’t have to type it).
In setting the permissions here, we used a+x to give execute permissions to all. There’s little reason to restrict execute permissions on the file unless it is in some directory where others might accidentally encounter your executable (e.g., if as a system admin you were putting something of your own in /usr/bin). Besides, if the file has read permissions for all, then others can still execute the script if they use our first form of invocation, with the explicit reference to bash. In octal mode, common permissions on shell scripts are 0700 for the suspicious/careful folk (giving read/write/execute permission to only the owner) and 0755 for the more open/carefree folk (giving read and execute permissions to all others).
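Putting those pieces together, here is a sketch you can try in a scratch directory; my.script is just a throwaway example name:

```shell
# Work in a throwaway directory so nothing is cluttered
cd "$(mktemp -d)"

# Create a trivial one-line script (my.script is just an example name)
printf '#!/bin/bash -\necho "hello"\n' > my.script

chmod 0700 my.script    # rwx------ : the cautious choice, owner only
chmod 0755 my.script    # rwxr-xr-x : readable and executable by everyone
chmod a+x  my.script    # symbolic equivalent of "execute for all"

./my.script             # runs directly now that execute is set; prints: hello
bash my.script          # also works, even without execute permission
```

Either invocation produces the same output; the difference is only in whether you have to name bash explicitly.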
You’ve set execute permissions as described in Recipe 19.1, but when you run the script you get a “No such file or directory” error.
Try running the script using bash explicitly:
bash ./busted
If it works, you have some kind of permissions error, or a typo in your shebang line. If you get a bunch more errors, you probably have the wrong line endings. This can happen if you’ve edited the file on Windows (perhaps via Samba), or if you’ve simply copied the file around.
If you run the file command on your suspect script, it can tell you if your line endings are wrong. It may say something like this:
$ file ./busted
./busted: Bourne-Again shell script, ASCII text executable, with CRLF line terminators
$
To fix it, try the dos2unix program if you have it, or see Recipe 8.11. Note that if you use dos2unix it will probably create a new file and delete the old one, which will change the permissions and might also change the owner or group and affect hard links. If you’re not sure what any of that means, the key point is that you’ll probably have to chmod it again (Recipe 19.1).
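If dos2unix isn’t available, a portable fallback is tr, which is present on any Unix-like system. This sketch assumes the broken script is named busted, as above:

```shell
# Delete every carriage return; tr can't edit in place, so write to a
# temporary file and then replace the original
tr -d '\r' < busted > busted.fixed
mv busted.fixed busted

# mv replaced the file, so restore execute permissions
chmod a+x busted
```

As with dos2unix, you end up with a new file, which is why the chmod at the end is needed.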
If you really do have bad line endings (i.e., anything that isn’t ASCII 10 or hex 0a), the error you get depends on your shebang line. Here are some examples for a script named busted:
$ cat busted
#!/bin/bash -
echo "Hello World!"

# This works
$ ./busted
Hello World!

# But if the file gets DOS line endings, we get:
$ ./busted
: invalid option
Usage: /bin/bash [GNU long option] [option] ...
[...]

# Different shebang line
$ cat ./busted
#!/usr/bin/env bash
echo "Hello World!"

$ ./busted
: No such file or directory
It is a common mistake for beginners to forget to add the leading ./ to the name of the script that they want to execute. We have had a lot of discussion about the $PATH variable, so we won’t repeat ourselves here except to remind you of a solution for frequently used scripts.

A common practice is to keep your useful and often-used scripts in a directory called bin inside of your home directory, and to add that bin directory to your $PATH variable so that you can execute those scripts without needing the leading ./.
The important part about adding your own bin directory to your $PATH variable is to place the change that modifies your $PATH variable in the right startup script. You don’t want it in the .bashrc script because that gets invoked by every interactive subshell, which would mean that your path would get added to every time you “shell out” of an editor or run some other commands. You don’t need repeated copies of your bin directory in the $PATH variable.
Instead, put it in the appropriate login profile for bash. According to the bash manpage, when you log in bash “looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable.” So, edit whichever one of those you already have in your home directory or, if none exists, create ~/.bash_profile and put this line in at the bottom of the file (or elsewhere if you understand enough of what else the profile is doing):
PATH="${PATH}:$HOME/bin"
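If you’re worried about the profile being sourced more than once, a common defensive variant (a sketch, not a requirement) appends the directory only when it is missing:

```shell
# Append $HOME/bin only if it is not already present in $PATH,
# so re-sourcing the profile never grows the variable
case ":$PATH:" in
    *":$HOME/bin:"*) ;;                      # already there; nothing to do
    *)               PATH="$PATH:$HOME/bin" ;;
esac
```

The surrounding colons make the match exact, so a directory like $HOME/binaries won’t be mistaken for $HOME/bin.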
You typed up a bash script to test out some of this interesting material that you’ve been reading about. You typed it exactly right, and you even remembered to set execute permissions on the file and put it in one of the directories in your $PATH, but when you try to run it, nothing happens.
It is natural enough to want to name a file test when you just want a quick scratch file for trying out some small bit of code. The problem is that test is a shell builtin command, making it a kind of shell reserved word. You can see this with the type command:
$ type test
test is a shell builtin
$
Since test is a builtin, no adjusting of the path will override it. You could create an alias, but we strongly advise against that in this case. Just name your script something else, or invoke it with a pathname, as in ./test or /home/path/test.
You can’t get a subshell or script to pass an exported variable back to its parent shell or script. For example, the following script will set a value, invoke a second script, and then display the value after the second script completes, so as to show what (if anything) has changed:
$ cat first.sh
#
# a simple example of a common mistake
#
# set the value:
export VAL=5
printf "VAL=%d\n" $VAL
# invoke our other script:
./second.sh
#
# now see what changed (hint: nothing!)
printf "%b" "back in first\n"
printf "VAL=%d\n" $VAL
$
The second script messes with the variable named $VAL, too:
$ cat second.sh
printf "%b" "in second\n"
printf "initially VAL=%d\n" $VAL
VAL=12
printf "changed so VAL=%d\n" $VAL
$
When you run the first script (which invokes the second one) here’s what you get:
$ ./first.sh
VAL=5
in second
initially VAL=5
changed so VAL=12
back in first
VAL=5
$
The old joke goes something like this:
Patient: “Doctor, it hurts when I do this.”
Doctor: “Then don’t do that.”
The solution here is going to sound like the doctor’s advice: don’t do that. You will have to structure your shell scripts so that such a handoff is not necessary. One way to do that is by explicitly echoing the results of the second script so that the first script can invoke it with the $() operator (or `` for the old shell hands). In the first script, the line ./second.sh becomes VAL=$(./second.sh), and the second script has to echo the final value (and only the final value) to STDOUT (it could redirect its other messages to STDERR):
$ cat second.sh
printf "%b" "in second\n" >&2
printf "initially VAL=%d\n" $VAL >&2
VAL=12
printf "changed so VAL=%d\n" $VAL >&2
echo $VAL
$
Exported environment variables are not globals that are shared between scripts. They are a one-way communication. All the exported environment variables are marshaled and passed together as part of the invocation of a Linux or Unix (sub)process (see the fork(2) manpage). There is no mechanism whereby these environment variables are passed back to the parent process. (Remember that a parent process can fork lots and lots of subprocesses…so if you could return values from a child process, which child’s values would the parent get?)
Your script is assigning some values to a variable, but when you run it, the shell reports “command not found” on part of what you thought you assigned to the variable:
$ cat goof1.sh
#!/bin/bash -
# common goof:
# X=$Y $Z
# isn't the same as
# X="$Y $Z"
#
OPT1=-l
OPT2=-h
ALLOPT=$OPT1 $OPT2
ls $ALLOPT .
$ ./goof1.sh
goof1.sh: line 9: -h: command not found
aaa.awk  cdscript.prev  ifexpr.sh  oldsrc  xspin2.sh
$
You need quotes around the righthand side of the assignment to $ALLOPT. What is written in the script as:
ALLOPT=$OPT1 $OPT2
really should be:
ALLOPT="$OPT1 $OPT2"
This problem arises because of the space between the arguments. If the arguments were separated by an intervening slash, for example, or if there were no space at all between them, this problem wouldn’t crop up—it would all be a single word, and thus a single assignment.
But that intervening space tells bash to parse this into two words. The first word is a variable assignment. Such assignments at the beginning of a command tell bash to set a variable to a given value just for the duration of the command—the command being the word that follows next on the command line. At the next line, the variable is back to its prior value (if any) or just not set.
The second word of our example statement is therefore seen as a command. That word is the command that is reported as “not found.” Of course, it is possible that the value for $OPT2 might have been something that actually was the name of an executable (though that’s not likely in this case, with ls). Such a situation could lead to very undesirable results.
Did you notice, in our example, that when ls ran, it didn’t use the long-format output even though we had (tried to) set the -l option? That shows that $ALLOPT was no longer set. It had only been set for the duration of the previous command, which was the (nonexistent) -h command bash attempted to run.
An assignment on a line by itself sets a variable for the remainder of the script. An assignment at the beginning of a line, one that has an additional command invoked on that line, sets the variable only for the execution of that command.
It’s generally a good idea to quote your assignments to a shell variable. That way you are assured of getting only one assignment and not encountering this problem.
bash will alphabetize the data in a pattern match:
$ echo x.[ba]
x.a x.b
$
Even though you specified b then a in the square brackets, when the pattern matching is done and the results found, they will be alphabetized before being given to the command to execute. That means that you don’t want to do this:
mv x.[ba]
thinking that it will expand to:
mv x.b x.a
Rather, it will expand to:
mv x.a x.b
since bash alpha-sorts the results before putting them in the command line, which is exactly the opposite of what you intended!
However, if you use braces to enumerate your different values, it will keep them in the specified order. This will do what you intended and not change the order:
mv x.{b,a}
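You can watch the difference directly with echo; this sketch creates the two files in a scratch directory first:

```shell
cd "$(mktemp -d)"
touch x.a x.b       # two files for the glob to match against

echo x.[ba]         # glob: matches are sorted  -> prints: x.a x.b
echo x.{b,a}        # brace expansion: order kept -> prints: x.b x.a
```

Brace expansion happens before (and independently of) filename matching, which is why the order you wrote survives.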
You have a script that works just fine, reading input in a while loop:
# This works as expected
COUNT=0
while read ALINE
do
    let COUNT++
done
echo $COUNT
And then you change it like this, to read from a file, with the name of that file specified as the first parameter to the script:
# Don't use; this does NOT work as expected!
COUNT=0
cat $1 |
while read ALINE
do
    let COUNT++
done
echo $COUNT
# $COUNT is always '0', which is useless
But now it no longer works; $COUNT keeps coming out as zero.
Pipelines create subshells. Changes in the while loop do not affect the variables in the outer part of the script, because this while loop, as with each command of a pipeline, is run in a subshell. (The cat command is run in a subshell, too, but it doesn’t alter shell variables.)
One solution: don’t do that (if you can help it). That is, don’t use a pipeline. In this example, there was no need to use cat to pipe the file’s content into the while statement—you could use I/O redirection rather than setting up a pipeline:
# Avoid the | and subshell; use "done < $1" instead
# It now works as expected
COUNT=0
while read ALINE
do
    let COUNT++
done < $1    # <<<< This is the line with the key difference
echo $COUNT
Such an easy rearrangement might not work for your problem, however, in which case you’ll have to use another technique.
As of version 4 of bash, you can prevent this problem in a script simply by setting the shell option lastpipe early on in the script:
shopt -s lastpipe
If that still doesn’t work or you’re using a version of bash older than 4.0, see the discussion.
If you add an echo statement inside the while loop of the example script, you can see $COUNT increasing, but once you exit the loop, $COUNT will be back to zero. The way that bash sets up the pipeline of commands means that each command in the pipeline will execute in its own subshell. So the while loop is in a subshell, not in the main shell. The while loop will begin with the same value that the main shell script was using for $COUNT, but since the while loop is executing in a subshell there is no way to get the value back up to the parent shell.
One approach to deal with this is to take all the additional work and make it part of the same subshell that includes the while loop. For example:
COUNT=0
cat $1 |
{
    while read ALINE
    do
        let COUNT++
    done
    echo $COUNT ;
}    # spaces are important here
The placement of the braces is crucial here. What we’ve done is explicitly delineate a section of the script to be run together in the same (sub)shell. It includes both the while loop and the other work that we want to do after the while loop completes (here all we’re doing is echoing $COUNT). Since the while and echo statements are not connected via a pipeline, they will both run in the same subshell delineated by the braces. The $COUNT that was accumulated during the while loop will remain until the end of the subshell—that is, until the close brace is reached.
If you use this technique it might be good to format the statements a bit differently, to make the use of the bracketed subshell stand out more. Here’s the example script reformatted:
COUNT=0
cat $1 |
{
    while read ALINE
    do
        let COUNT++
    done
    echo $COUNT
}
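Process substitution offers yet another way to keep the loop out of a subshell: feed the loop from < <(command) and it runs in the current shell, with no braces needed. A sketch (note the space between the two < characters):

```shell
# Sketch: count lines of output from a command without a pipeline.
# The loop reads from a process substitution, so it runs in this shell
# and COUNT survives past the loop.
COUNT=0
while read ALINE
do
    let COUNT++
done < <(cat "$1")   # space between the two < is required
echo $COUNT          # shows the real count
```

This needs bash (process substitution is not in plain POSIX sh), but it works on versions older than 4.0 where lastpipe is unavailable.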
This issue can be avoided altogether if you are using version 4 of bash. In your script, simply set the shell option lastpipe (some sysadmins might even want to set this in their /etc/profile or a related .rc file so no one else needs to set it):
shopt -s lastpipe
This option tells the shell to run the last command of a pipeline in the current shell, rather than a subshell, thereby making its variables available to the rest of the shell script that comes after the pipeline.
Here is an example similar to the previous example, though it uses ls rather than cat as the source of its data:
shopt -s lastpipe    # as of ver. 4 bash
COUNT=0
ls |
while read ALINE
do
    let COUNT++
done
echo $COUNT
Try it with and without the shopt statement and you can see the effect.
The lastpipe behavior only works if job control is disabled, which is the default condition for noninteractive shells (i.e., bash scripts). If you want to use lastpipe interactively, then you need to disable job control with set +m—but in doing so you lose the ability to interrupt (^C) or to suspend (^Z) a running command, and you cannot use the fg and bg commands. We recommend against doing so.
Type stty sane and then press the Enter key, even if you can’t see what you are typing, to restore sane terminal settings. You may want to hit Enter a few times first, to make sure you don’t have anything else on your input line before you start typing the stty command.
If you do this a lot, you might consider creating an alias that’s easier to type blind (see Recipe 10.7).
Aborting some older versions of ssh at a password prompt may leave terminal echo (the displaying of characters as you type them, not the shell echo command) turned off so you can’t see what you are typing. Depending on what kind of terminal emulation you are using, displaying a binary file can also accidentally change terminal settings. In either case, stty’s sane setting attempts to return all terminal settings to their default values. This includes restoring echo capability, so that what you type on the keyboard appears in your terminal window. It will also likely undo whatever strangeness has occurred with other terminal settings.
Your terminal application may have some kind of reset function too, so explore the menu options and documentation. You may also want to try the reset and tset commands, though in our testing stty sane worked as desired while reset and tset were more drastic in what they fixed.
man reset
man stty
man tset
Never do:
rm -rf $files_to_delete
Never, ever, ever do:
rm -rf /$files_to_delete
Use this instead:
[ -n "$files_to_delete" ] && rm -rf $files_to_delete
The first example isn’t too bad; it’ll just throw an error. The second one is pretty bad because it will try to delete your root directory. If you are running as a regular user (and you should be—see Recipe 14.18), it may not be too bad, but if you are running as root then you’ve just killed your system but good. (Yes, we’ve done this.)
The solution is easy. First, make sure that there is some value in the variable you’re using, and second, never precede that variable with a /.
Your script is giving you values that don’t match what you expected. Consider this simple script and its output:
$ bash oddscript
good nodes: 0
bad nodes: 6
miss nodes: 0
GOOD=6 BAD=0 MISS=0
$ cat oddscript
#!/bin/bash -
badnode=6
printf "good nodes: %d\n" $goodnode
printf "bad nodes: %d\n" $badnode
printf "miss nodes: %d\n" $missnode
printf "GOOD=%d BAD=%d MISS=%d\n" $goodnode $badnode $missnode
Why is 6 showing up as the value for the good count, when it is supposed to be the value for the bad count?
Either give the variables an initial value (e.g., 0) or put quotes around the references to them on the printf lines.
What’s happening here? bash does its substitutions on that last line, and when it evaluates $goodnode and $missnode they both come out null, empty, not there. So the line that is handed off to printf to execute looks like this:
printf "GOOD=%d BAD=%d MISS=%d\n" 6
When printf tries to print the three decimal values (the three %d formats), it has a value (i.e., 6) for the first one but doesn’t have anything for the next two, so they come out as 0 and you get:
GOOD=6 BAD=0 MISS=0
You can’t really blame printf, since it never saw the other arguments; bash had done its parameter substitution before printf ever got to run.
Even declaring them as integer values, like this:
declare -i goodnode badnode missnode
isn’t enough. You need to actually assign them a value.
The other way to avoid this problem is to quote the arguments when they are used in the printf statement, like this:

printf "GOOD=%d BAD=%d MISS=%d\n" "$goodnode" "$badnode" "$missnode"
Then the first argument won’t disappear, but an empty string will be put in its place, so that what printf gets is the three needed arguments:
printf "GOOD=%d BAD=%d MISS=%d\n" "" "6" ""
While we’re on the subject of printf, it has one other odd behavior. We have just seen how it behaves when there are too few arguments; when there are too many arguments, printf will keep repeating and reusing the format line and it will look like you are getting multiple lines of output when you expected only one.
Of course, this can be put to good use, as in the following case:
$ dirs
/usr/bin /tmp ~/scratch/misc
$ printf "%s\n" $(dirs)
/usr/bin
/tmp
~/scratch/misc
$
Here, printf takes the directory stack (i.e., the output from the dirs command) and displays the directories one per line, repeating and reusing the format, as described earlier.
Let’s summarize the best practices:
Initialize your variables, especially if they are numbers and you want to use them in printf statements.

Put quotes around your arguments if they could ever be null, and especially when used in printf statements.

Make sure you have the correct number of arguments, especially considering what the line will look like after the shell substitutions have occurred.

The safest way to display an arbitrary string is to use printf '%s\n' "$string".
Use the -n argument to bash to test syntax often, ideally after every save, and certainly before committing any changes to a revision control system:
$ bash -n my_script
$ echo 'echo "Broken line' >> my_script
$ bash -n my_script
my_script: line 4: unexpected EOF while looking for matching `"'
my_script: line 5: syntax error: unexpected end of file
The -n option is tricky to find in the bash manpage or other reference material since it’s located under the set builtin. It is noted in passing in bash --help for -D, but it is never explained there. This flag tells bash to “read commands but do not execute them,” which of course will find bash syntax errors.
As with all syntax checkers, this will not catch logic errors or syntax errors in other commands called by the script.
man bash
bash --help
bash -c "help set"
Add set -x to the top of the script when you run it, or use set -x to turn on xtrace before a troublesome spot and set +x to turn it off after. You may also wish to experiment with the $PS4 prompt (Recipe 16.2). xtrace also works on the interactive command line. Example 19-1 is a script that we suspect is buggy.
#!/usr/bin/env bash
# cookbook filename: buggy
#
set -x
result=$1
[ $result = 1 ] \
  && { echo "Result is 1; excellent." ; exit 0;   } \
  || { echo "Uh-oh, ummm, RUN AWAY! " ; exit 120; }
Now we invoke this script, but first we set and export the value of the $PS4 prompt. bash will print out the value of $PS4 before each command that it displays during an execution trace (i.e., after a set -x):
$ export PS4='+xtrace $LINENO: '
$ echo $PS4
+xtrace $LINENO:
$ ./buggy
+xtrace 4: result=
+xtrace 6: '[' = 1 ']'
./buggy: line 6: [: =: unary operator expected
+xtrace 8: echo 'Uh-oh, ummm, RUN AWAY! '
Uh-oh, ummm, RUN AWAY!

$ ./buggy 1
+xtrace 4: result=1
+xtrace 6: '[' 1 = 1 ']'
+xtrace 7: echo 'Result is 1; excellent.'
Result is 1; excellent.

$ ./buggy 2
+xtrace 4: result=2
+xtrace 6: '[' 2 = 1 ']'
+xtrace 8: echo 'Uh-oh, ummm, RUN AWAY! '
Uh-oh, ummm, RUN AWAY!

$ /tmp/jp-test.sh 3
+xtrace 4: result=3
+xtrace 6: '[' 3 = 1 ']'
+xtrace 8: echo 'Uh-oh, ummm, RUN AWAY! '
Uh-oh, ummm, RUN AWAY!
It may seem odd to turn something on using - and turn it off using +, but that’s just the way it worked out. Many Unix tools use -n for options or flags, and since you need a way to turn -x off, +x seems natural.
As of bash 3.0 there are a number of new variables to better support debugging: $BASH_ARGC, $BASH_ARGV, $BASH_SOURCE, $BASH_LINENO, $BASH_SUBSHELL, $BASH_EXECUTION_STRING, and $BASH_COMMAND. There is also a new extdebug shell option. These are in addition to existing bash variables like $LINENO and the array variable $FUNCNAME.
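A quick sketch of two of these variables in action; where_am_i is a made-up function name used only for illustration:

```shell
#!/usr/bin/env bash
# where_am_i is a hypothetical function that reports its own location
# using FUNCNAME and BASH_LINENO
function where_am_i {
    echo "function ${FUNCNAME[0]}, called from line ${BASH_LINENO[0]}"
}

echo "this is line $LINENO of $BASH_SOURCE"
where_am_i
```

$LINENO gives the current line, while ${BASH_LINENO[0]} inside a function gives the line number of the call site, which is exactly what you want in an error-reporting or trap DEBUG helper.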
From the Bash Reference Manual:
If [extdebug is] set at shell invocation, arrange to execute the debugger profile before the shell starts, identical to the --debugger option. If set after invocation, behavior intended for use by debuggers is enabled:

The -F option to the declare builtin…displays the source file name and line number corresponding to each function name supplied as an argument.

If the command run by the DEBUG trap returns a nonzero value, the next command is skipped and not executed.

If the command run by the DEBUG trap returns a value of 2, and the shell is executing in a subroutine (a shell function or a shell script executed by the . or source builtins), the shell simulates a call to return.

BASH_ARGC and BASH_ARGV are updated…

Function tracing is enabled: command substitution, shell functions, and subshells invoked with ( command ) inherit the DEBUG and RETURN traps.

Error tracing is enabled: command substitution, shell functions, and subshells invoked with ( command ) inherit the ERR trap.
Using xtrace is a very handy debugging technique, but it is not the same as having a real debugger. For that, see the Bash Debugger Project, which contains patched sources to bash that enable better debugging support as well as improved error reporting. In addition, this project contains, in the developer’s words, “the most comprehensive source-code debugger for BASH that has been written.”
help set
man bash
Chapter 9 in Cameron Newham’s Learning the bash Shell, 3rd Edition (O’Reilly), which includes a shell script for debugging other shell scripts
https://www.gnu.org/software/bash/manual/html_node/The-Shopt-Builtin.html
Shell scripts are read and executed in a top-to-bottom linear way, so you must define any functions before you use them.
Some other languages, such as Perl, go through intermediate steps during which the entire script is parsed as a unit. That allows you to write your code so that main() is at the top, and functions (or subroutines) are defined later. By contrast, a shell script is read into memory and then executed one line at a time, so you can’t use a function before you define it.
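A minimal illustration; greet is just an example name:

```shell
#!/usr/bin/env bash
# Calling greet up here, before its definition, would fail with
# "greet: command not found" -- the shell hasn't read that far yet.

function greet {
    printf 'Hello, %s!\n' "$1"
}

greet "World"    # works: the definition has already been read
```

If you like a main-at-the-top layout, a common idiom is to define all your functions first and put a single call such as main "$@" as the last line of the script.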
Relax; take a deep breath. You’re probably confused because you’re learning so much (or just using it too infrequently to remember it). Practice makes perfect, so keep trying.
The rules aren’t that hard to remember for bash itself. After all, regular expression syntax is only used with the =~ comparison operator in bash. All of the other expressions in bash use shell pattern matching.
The pattern matching used by bash uses some of the same symbols as regular expressions, but with different meanings. But it is also the case that you often have calls in your shell scripts to commands that use regular expressions—commands like grep and sed.
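The contrast is easy to see side by side; note that . and * mean different things under each syntax:

```shell
str="hello.txt"

# Shell pattern matching: * is a wildcard, . is a literal dot
[[ $str == *.txt ]] && echo "glob matched"

# Regular expression matching, only with =~ :
# here . means "any character" and + means "one or more"
[[ $str =~ ^hello\.[a-z]+$ ]] && echo "regex matched"
```

Both tests succeed on this string, but for different reasons: the glob treats the dot literally, while the regex had to escape it.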
We asked Chet Ramey, the current keeper of the bash source and all-around bash guru, if it was really the case that =~ was the only use of regular expressions in bash. He said yes. He also was kind enough to supply a list of the various parts of bash syntax that use shell pattern matching. We’ve covered most, but not all, of these topics in various recipes in this book. We offer the list here for completeness.
Shell pattern matching is performed by:
Filename globbing (pathname expansion)
The == and != operators for [[
case statements
$GLOBIGNORE handling
$HISTIGNORE handling
${parameter#[#]word}
${parameter%[%]word}
${parameter/pattern/string}
Several bindable readline commands (glob-expand-word, glob-complete-word, etc.)
complete -G and compgen -G
complete -X and compgen -X
The help builtin’s pattern argument
Thanks, Chet!
Learn to read the manpage for bash and refer to it often—it is long but precise. If you want an online version of the bash manpage or other bash-related documents, visit http://www.bashcookbook.com for the latest bash information. Keep this book handy for reference, too.