Chapter 3. Shells and Scripting

In this chapter we will focus on interacting with Linux on the terminal, that is, via the shell, which exposes a command-line interface (CLI). Being able to use the shell effectively to accomplish everyday tasks is vitally important, and to that end we focus on usability here.

First, we review some terminology and provide a gentle, concise introduction to shell basics. Then we have a look at modern, human-friendly shells, such as the Fish shell, covering configuration and common tasks in the shell. Then we move on to the topic of working effectively on the CLI using a terminal multiplexer, enabling you to work with multiple sessions, local and remote alike. In the last part of this chapter we switch gears and focus on automating tasks in the shell using scripts, including best practices for writing scripts in a safe, secure, and portable manner, as well as how to lint and test scripts.

There are two major ways to interact with Linux from a CLI perspective. The first way is manual, that is, a human user sits in front of the terminal, interactively typing commands and consuming the output. This ad hoc interaction covers most of the things you want to do in the shell on a day-to-day basis, including:

  • Listing directories, finding files, or looking for content in files.

  • Copying files between directories or to remote machines.

  • Reading emails, news, or tweets from the terminal.

Further, we will learn how to conveniently and efficiently work with multiple shell sessions at the same time.

The other mode of operation is the automated processing of a series of commands in a special kind of file that the shell interprets and, in turn, executes for you. This mode is usually called shell scripting, or just scripting. You typically want to use a script rather than manually repeating certain tasks. Also, scripts are the basis of many configuration and installation systems. Scripts are indeed very convenient; however, they can also pose a danger if used without precautions. So, whenever you think about writing a script, keep the XKCD web comic shown in Figure 3-1 in mind (kudos to Randall Munroe; made available under CC BY-NC 2.5).

xkcd on automation
Figure 3-1. XKCD on Automation

I strongly recommend that you have a Linux environment available and try out the examples shown here right away. With that, are you ready for some (inter)action? If so, then let’s start with some terminology and basic shell usage.


Before we get into different options and configurations, let’s focus on some basic terms such as terminal and shell. In this section I will define the terminology and show you how to accomplish everyday tasks in the shell. We will also review modern commands and see them in action.


We start with the terminal, or terminal emulator, or soft terminal, all of which refer to the same thing: a terminal is a program that provides a textual user interface. That is, a terminal supports reading characters from the keyboard and displaying them on the screen. Many years ago these used to be integrated devices (keyboard and screen together) but nowadays terminals are simply apps.

In addition to basic character-oriented input and output, terminals support so-called escape sequences, or escape codes, for cursor and screen handling, and potentially colors. For example, pressing CTRL+H causes a backspace, that is, it deletes the character to the left of the cursor.

The environment variable TERM identifies the terminal emulator in use, and its configuration is available via infocmp as follows (note that the output has been shortened):

$ infocmp
#       Reconstructed via infocmp from file: /lib/terminfo/s/screen-256color
screen-256color|GNU Screen with 256 colors,
        am, km, mir, msgr, xenl,
        colors#0x100, cols#80, it#8, lines#24, pairs#0x10000,
        bel=^G, blink=\E[5m, bold=\E[1m, cbt=\E[Z, civis=\E[?25l,
        clear=\E[H\E[J, cnorm=\E[34h\E[?25h, cr=\r,

Examples of terminals include not only xterm, rxvt, and the GNOME Terminator, but also new-generation, GPU-accelerated ones such as Alacritty, kitty, or Warp.

In “Terminal multiplexer” we will come back to the topic of the terminal.


Next up is the shell, a program that runs inside the terminal and acts as a command interpreter. The shell offers input and output handling via streams, supports variables, has some built-in commands you can use, deals with command execution and status, and usually supports both interactive usage as well as scripted usage (“Scripting”).

The shell is formally defined in the POSIX sh specification, and we often come across the term POSIX shell, which will become more important in the context of scripts and portability.

Originally we had the Bourne shell sh, named after the author, but nowadays you usually find it replaced with the bash shell—a wordplay on the original version, short for “Bourne Again Shell”—which is widely used as the default.

If you are curious which shell you are using, run the file -h /bin/sh command to find out, or if that fails, try echo $0 or echo $SHELL.


At least in this section, we assume the Bash shell (bash), unless we call it out explicitly.

There are many more implementations of sh, as well as other variants such as the Korn shell (ksh) and the C shell (csh), though these are not widely used nowadays. We will, however, review modern Bash replacements in “Human-friendly Shells”.

Let’s start our shell basics with two fundamental features: streams and variables.


Let’s start with the topic of input streams and output streams, or I/O for short. How can you feed a program some input? How do you control where the output of a program lands, say, on the terminal or in a file?

First off, the shell equips every process with three default file descriptors (FD) for input and output:

  • stdin (FD 0)

  • stdout (FD 1)

  • stderr (FD 2)

These FDs are, as depicted in Figure 3-2, by default connected to your keyboard and screen. In other words, unless you specify something else, a command you enter in the shell will take its input (stdin) from your keyboard and deliver its output (stdout) to your screen, like so:

$ cat
This is some input I type on the keyboard and read on the screen^C

Above, using cat as an example, you see the defaults in action; also note that I used CTRL+C (shown as ^C) to terminate the command.

shell streams
Figure 3-2. Shell I/O default streams

If you don’t want to use the defaults the shell gives you (say, you don’t want stderr to show up on the screen but want to save it in a file), you can redirect the streams.

You redirect the output stream of a process using $FD>, with $FD being the file descriptor number; for example, 2> redirects the stderr stream. Note that 1> and > are the same, since stdout is the default. If you want to redirect both stdout and stderr, use &>, and when you want to get rid of a stream entirely, redirect it to /dev/null. Input is redirected with <, as you will see with tr below.

Let’s see how that works in the context of a concrete example, downloading some HTML content via curl:

$ curl &> /dev/null 1

$ curl > /tmp/content.txt 2> /tmp/curl-status 2
$ head -3 /tmp/content.txt
<!doctype html>
$ cat /tmp/curl-status
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1256  100  1256    0     0   3187      0 --:--:-- --:--:-- --:--:--  3195

$ cat > /tmp/interactive-input.txt 3

$ tr < /tmp/curl-status [A-Z] [a-z] 4
  % total    % received % xferd  average speed   time    time     time  current
                                 dload  upload   total   spent    left  speed
100  1256  100  1256    0     0   3187      0 --:--:-- --:--:-- --:--:--  3195

Discard all output by redirecting both stdout and stderr to /dev/null.


Redirect the output and status to different files.


Interactively enter input and save to file, use CTRL+D to stop capturing and store the content.


Lowercase all words, using the tr command that reads from stdin.

Shells usually understand a number of special characters, such as:

  • & … at the end of a command executes it in the background; see also “Job Control”.

  • \ … continues a command on the next line; use this for better readability of long commands.

  • | … the pipe connects the stdout of one process to the stdin of the next process, allowing you to pass data along without storing it in temporary files.

Again, let’s see some of the theoretical content in action. Let’s try to figure out how many lines an HTML file contains by downloading it using curl and then piping the content to the wc tool:

$ curl 2> /dev/null |  1
  wc -l 2

Use curl to download the content from the URL and discard the status it outputs on stderr (note: in practice you’d use curl’s -s option, but we want to apply our hard-gained knowledge, right?).


The stdout of curl is fed to stdin of wc which counts the number of lines with the -l option.

Now that you have a basic understanding of commands, streams and redirection, let’s move on to another core shell feature, the handling of variables.


A term you will come across often in the context of shells is variables. Whenever you don’t want to or cannot hard-code a value, you can use a variable to store and change it. Use cases include:

  • Configuration items that Linux exposes, for example, the place where the shell looks for executables captured in the $PATH variable. This is kind of an interface where a variable might be read/write.

  • You want to interactively query the user for a value, say, in the context of a script.

  • When you want to shorten input by defining a long value once, for example, the URL of an HTTP API. This use case roughly corresponds to a const value in a programming language, since you don’t change the value after you have declared the variable.

We distinguish between two kinds of variables:

  1. Environment variables are system-wide settings; list them with env.

  2. Shell variables are valid in the context of the current execution; list with set in Bash. Shell variables are not inherited by sub-processes.

You can, in Bash, use export to create an environment variable. When you want to access the value of a variable, put a $ in front of its name, and when you want to get rid of it, use unset.

OK, that was a lot of information, let’s see how that looks in practice (Bash):

$ MY_VAR=42 1
$ set | grep MY_VAR 2

$ export MY_GLOBAL_VAR="fun with vars" 3

$ set | grep 'MY_*' 4
MY_GLOBAL_VAR='fun with vars'
MY_VAR=42

$ env | grep 'MY_*' 5
MY_GLOBAL_VAR=fun with vars

$ bash 6
$ echo $MY_GLOBAL_VAR 7
fun with vars

$ set | grep 'MY_*' 8
MY_GLOBAL_VAR='fun with vars'

$ exit 9
$ unset MY_VAR
$ set | grep 'MY_*'
MY_GLOBAL_VAR='fun with vars'

Create a shell variable called MY_VAR and assign a value of 42.


List shell variables and filter for MY_VAR; note that it is a shell variable, not exported.


Create a new environment variable called MY_GLOBAL_VAR.


List shell variables and filter for all that start with MY_; as expected, we see both variables we created in the previous steps.


List environment variables, and we see MY_GLOBAL_VAR as we would hope for.


Create a new shell session, that is, a child process of the current shell session, which doesn’t inherit MY_VAR.


Access environment variable MY_GLOBAL_VAR.


List shell variables, which gives us only MY_GLOBAL_VAR since we’re in a child process.


Exit child process, remove the MY_VAR shell variable and list our shell variables; as expected MY_VAR is gone.

In Table 3-1 I have put together common shell and environment variables. You will find these variables almost everywhere, and they are important to understand and use. For any of the variables you can have a look at the respective value using echo $XXX, with XXX being the variable name.

Table 3-1. Common shell and environment variables
Variable    Type         Semantics
EDITOR                   the path to the program used by default to edit files
HOME                     the path of the home directory of the current user
HOSTNAME    Bash shell   the name of the current host
IFS                      list of characters to separate fields, used when the shell splits words on expansion
PATH                     contains a list of directories in which the shell looks for executable programs, binaries or scripts alike
PS1                      the primary prompt string in use
PWD                      the full path of the working directory
RANDOM      Bash shell   a random integer between 0 and 32767
SHELL                    contains the currently used shell
TERM                     the terminal emulator used
UID                      current user unique ID (integer value)
USER                     current user name
_           Bash shell   last argument to the previous command executed in the foreground
?           Bash shell   exit status, see “Exit Status”
$           Bash shell   the ID of the current process (integer value)
0           Bash shell   the name of the current process
Further, check out the full list of Bash specific variables and also note that the variables from Table 3-1 will come in handy again in the context of “Scripting”.

Exit Status

The shell communicates the completion of a command execution to the caller using what is called the exit status. In general, a Linux command is expected to return a status when it terminates. This can be either a normal termination (happy path) or an abnormal termination (something went wrong). A 0 exit status means the command ran successfully, without any errors, whereas a non-zero value between 1 and 255 signals a failure. To query the exit status, use echo $?.

Be careful with exit status handling in a pipeline, since some shells only make the status of the last command available. You can work around that limitation by using $PIPESTATUS in Bash.
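To make this concrete, here is a short Bash sketch; false and true are just stand-ins for real commands:

```shell
# A failing command yields a non-zero exit status
ls /nonexistent 2> /dev/null || echo "ls failed with status $?"

true
echo $?                   # 0: success

# In a pipeline, $? reflects only the last stage:
false | true
echo $?                   # 0, even though the first stage failed

# Bash records the status of every stage in the PIPESTATUS array:
false | true
echo "${PIPESTATUS[@]}"   # 1 0
```

Note that PIPESTATUS is reset after every command, so capture it immediately after the pipeline if you need it later.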

Built-in Commands

Shells come with a number of built-in commands. Some useful examples are cd, echo, read, or type. You can use help to list them, and remember that everything else is a shell-external program, which you can usually find in /usr/bin. How do you know where to find an executable? Here are some ways:

$ which ls

$ type ls
ls is aliased to `ls --color=auto'
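For instance, type can tell you whether a name is a builtin, an alias, or an external binary (the exact output varies per system and configuration):

```shell
type cd        # cd is a shell builtin
type -a echo   # lists the builtin as well as the binary, e.g. /usr/bin/echo
type -t ls     # prints just the kind: alias, builtin, file, ...
```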

Job Control

A feature most shells support is called job control. By default, when you enter a command, it takes control of the screen and the keyboard, which we usually call running in the foreground. But what if you don’t want to run something interactively, or, in the case of a server, what if there is no input from stdin at all? Enter job control and background jobs: to launch a process in the background, put an & at the end; to send a foreground process to the background, press CTRL+Z to suspend it and then use bg to resume it in the background.

The following example shows this in action, giving you a rough idea:

$ watch -n 5 "ls" & 1

$ jobs 2
Job     Group   CPU     State   Command
1       3021    0%      stopped watch -n 5 "ls" &

$ fg 3
Every 5.0s: ls                                         Sat Aug 28 11:34:32 2021


By putting the & at the end we launch the command in the background.


List all jobs.


With the fg command we can bring a process to the foreground.

Further, if you want to keep a background process running even after you close the shell, you can prepend the nohup command. If you want to get rid of a process, you can use the kill command; see also Chapter 9.
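A minimal sketch of both, where sleep stands in for a real long-running workload:

```shell
# Start a process immune to hangups, capturing its output in a file
nohup sleep 300 > /tmp/sleep.log 2>&1 &
pid=$!           # $! holds the PID of the last background process

kill -0 "$pid"   # signal 0 probes whether the process is still alive
kill "$pid"      # send SIGTERM (the default) to terminate it
```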

Rather than job control, I recommend using a terminal multiplexer, as discussed in “Terminal multiplexer”. These programs take care of the most common use cases (the shell closing, multiple processes running that need coordination, etc.) and also support working with remote systems.

Let’s move on to discuss modern replacements for frequently used core commands that have been around forever.

Modern Commands

There are a handful of commands you will find yourself using over and over again, on a daily basis. These include navigating directories (cd), listing the contents of a directory (ls), finding files (find), and displaying the contents of files (cat, less). Given how often you use these commands, you want to be as efficient as possible: every keystroke counts.

Now, for some of these often-used commands there exist modern variations. Some of them are drop-in replacements, others extend the functionality. All of them offer sane defaults for common operations and rich output that is generally easier to comprehend, and they usually lead to less typing to accomplish the same task. This reduces friction when you work with the shell, making it more enjoyable and improving your flow. If you want to learn more about modern tooling, check out Appendix B.

Listing Directory Contents with exa

Whenever you want to know what a directory contains, you use ls or one of its variants with parameters. For example, in Bash I used to have l aliased to ls -GAhltr. But there’s a better way: exa, a modern replacement for ls, written in Rust, with built-in support for Git and tree rendering. In this context, what would you guess is the most often used command after you’ve listed the directory contents? In my experience it’s clearing the screen, and very often people use the clear command. That’s typing five characters and then hitting ENTER. You can get the same effect much faster: simply press CTRL+L.
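For example, you might replace such an alias with exa in your shell config. This assumes exa is installed; the flags below are one reasonable choice, not the only one:

```shell
# ~/.bashrc (in Fish, use abbr instead)
alias l='exa --long --all --git'   # detailed listing, including Git status
alias lt='exa --tree --level=2'    # render a two-level directory tree
```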

Viewing File Contents with bat

Let’s assume that you listed a directory’s contents and found a file you want to inspect. You’d use cat, maybe? There’s something better I recommend you have a look at: bat. The bat command, shown in Figure 3-3, comes with syntax highlighting, shows non-printable characters, supports Git, and has an integrated pager (the page-wise viewing of files longer than what can be displayed on the screen).

bat rendering
Figure 3-3. Rendering of a Go file (left) and a YAML file (right) by bat

Finding Content in Files with rg

Traditionally, you would use grep to find something in a file. However, there’s a modern command, rg, which is fast and powerful.

We’re going to compare rg to a find and grep combination in this example, where we want to find YAML files that contain the string “sample”:

$ find . -type f -name "*.yaml" -exec grep "sample" '{}' \; -print 1
      app: sample
        app: sample

$ rg -t "yaml" sample 2
9:      app: sample
14:        app: sample

Use find and grep together to find a string in YAML files.


Use rg for the same task.

If you compare the commands and the results in the previous example you see that not only is rg easier to use but also the results are more informative (providing context).

JSON Data Processing with jq

And now for a bonus command. This one, jq, is not an actual replacement but rather a specialized tool for JSON, a popular textual data format. You find JSON in HTTP APIs and configuration files alike.

So, use jq rather than awk or sed to pick out certain values. For example, by using a JSON generator to generate some random data, I have a 2.4 kB JSON file, example.json, that looks something like this (only showing the first record here):

[
  {
    "_id": "612297a64a057a3fa3a56fcf",
    "latitude": -25.750679,
    "longitude": 130.044327,
    "friends": [
      {
        "id": 0,
        "name": "Tara Holland"
      },
      {
        "id": 1,
        "name": "Giles Glover"
      },
      {
        "id": 2,
        "name": "Pennington Shannon"
      }
    ],
    "favoriteFruit": "strawberry"
  }
]

Let’s say we’re interested in all “first” friends, that is, entry 0 in the friends array, of people whose favorite fruit is “strawberry”. With jq you would do the following:

$ jq '.[] | select(.favoriteFruit=="strawberry") | .friends[0].name' example.json
"Tara Holland"
"Christy Mullins"
"Snider Thornton"
"Jana Clay"
"Wilma King"

That was some CLI fun, right? If you’re interested in learning more about modern commands and other candidates that might replace your daily drivers, check out the modern-unix repo, which lists suggestions. Let’s now move our focus to common tasks beyond directory navigation and file content viewing, and how to go about them.

Common Tasks

There are a number of things you find yourself doing often, and in addition there are certain tricks you can use to speed up your tasks in the shell. Let’s review these common tasks and see how we can be more efficient.

Shorten Often-used Commands

One fundamental insight about interfaces is that commands you use very often should take the least effort and be quick to enter. Now apply this idea to the shell: rather than git diff --color-moved, I type d (a single character), since I view changes in my repositories many hundreds of times per day. Depending on the shell, there are different ways to achieve this: in Bash this is called an alias, and in Fish (“Fish Shell”) there are abbreviations you can use.
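As a sketch, here is the same shortcut in both shells (d is just my personal choice of letter):

```shell
# Bash, in ~/.bashrc:
alias d='git diff --color-moved'

# Fish: an abbreviation expands in place as you type,
# so the full command lands in your history:
abbr --add d 'git diff --color-moved'
```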


When you enter commands at the shell prompt, there are a number of things you might want to do, such as navigating (for example, moving the cursor to the start of the line) or manipulating the line (deleting everything left of the cursor). In Table 3-2 you see common shell shortcuts listed.

Table 3-2. Shell navigation and editing shortcuts
Action                               Command   Note
move cursor to start of line         CTRL+a
move cursor to end of line           CTRL+e
move cursor forward one character    CTRL+f
move cursor back one character       CTRL+b
move cursor forward one word         ALT+f
move cursor back one word            ALT+b
delete current character             CTRL+d
delete character left of cursor      CTRL+h
delete word left of cursor           CTRL+w
delete everything right of cursor    CTRL+k
delete everything left of cursor     CTRL+u
clear screen                         CTRL+l
cancel command                       CTRL+c
undo                                 CTRL+_    Bash only
search history                       CTRL+r    Some shells
cancel search                        CTRL+g    Some shells

Note that not all shortcuts may be supported in all shells and that certain actions such as history management may be implemented differently in certain shells. Take the table as a starting point and try out what your shell supports.

File Content Management

You don’t always want to fire up an editor such as vi just to add a single line of text. And sometimes you can’t, for example when you’re in the context of writing a shell script (“Scripting”).

So, how can you manipulate textual content? Let’s have a look at a few examples:

$ echo "First line" > /tmp/something 1

$ cat /tmp/something 2
First line

$ echo "Second line" >> /tmp/something &&  3
  cat /tmp/something
First line
Second line

$ sed 's/line/LINE/' /tmp/something 4
First LINE
Second LINE

$ cat << 'EOF' > /tmp/another 5
First line
Second line
Third line
EOF

$ diff -y /tmp/something /tmp/another 6
First line                                                      First line
Second line                                                     Second line
                                                              > Third line

Create a file by redirecting the echo output.


View content of file.


Append a line to file using the >> operator and then view content.


Replace content from file using sed and output to stdout.


Create a file using the here document.


Show differences between the files we created above.

Now that you know the basic file content manipulation techniques let’s have a look at advanced viewing of file contents.

Viewing Long Files

For long files, that is, files with more lines than the shell can display on your screen, you can use a pager like less or bat (which comes with a built-in pager). With paging, the program splits the output into pages, each fitting what the screen can display, and provides commands to navigate between them (view next page, previous page, etc.).

Another way to deal with long files is to display only a selected region of the file, such as the first few lines. There are two handy commands for this: head and tail.

For example, to display the beginning of a file:

$ for i in {1..100} ; do echo $i >> /tmp/longfile ; done 1

$ head -5 /tmp/longfile 2

Create a long file (100 lines here).


Display the first five lines of the long file.

Or, to get live updates of a file that is constantly growing, we could use:

$ sudo tail -f /var/log/Xorg.0.log 1
[ 36065.898] (II) event14 - ALPS01:00 0911:5288 Mouse: is tagged by udev as: Mouse
[ 36065.898] (II) event14 - ALPS01:00 0911:5288 Mouse: device is a pointer
[ 36065.900] (II) event15 - ALPS01:00 0911:5288 Touchpad: is tagged by udev as: Touchpad
[ 36065.900] (II) event15 - ALPS01:00 0911:5288 Touchpad: device is a touchpad
[ 36065.901] (II) event4  - Intel HID events: is tagged by udev as: Keyboard
[ 36065.901] (II) event4  - Intel HID events: device is a keyboard

Display the end of a log file using tail with the -f option meaning to follow, that is, to update periodically.

Lastly in this section we look at dealing with date and time.

Date and Time Handling

The date command can be a useful way to generate unique file names. It allows you to generate dates in various formats, including the Unix timestamp, as well as to convert between different date and time formats.

$ date +%s 1

$ date -d @1629742883 '+%m/%d/%Y:%H:%M:%S' 2

Create a Unix time stamp.


Convert Unix time stamp to a human-readable date.
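Building on that round trip, a common pattern is to embed a timestamp in a file name so that names are unique and sort chronologically; backup- here is just an example prefix:

```shell
# Sortable, timestamped file name
backup="/tmp/backup-$(date +%Y-%m-%d_%H%M%S).tar.gz"
echo "$backup"    # e.g. /tmp/backup-2021-08-28_113432.tar.gz

# And back again: render a Unix timestamp as a readable date (GNU date)
date -u -d @1629742883 '+%m/%d/%Y:%H:%M:%S'
```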

With that we wrap up the shell basics section. By now you should have a good understanding of what terminals and shells are, and how to use them for basic tasks such as navigating the filesystem, finding files, and more. We now move on to the topic of human-friendly shells.

Human-friendly Shells

While the Bash shell is likely still the most widely used shell, it is not necessarily the most human-friendly one. It has been around since the late 1980s, and its age sometimes shows. There are a number of modern, human-friendly shells I strongly recommend you evaluate and use instead of Bash.

We will first take a detailed look at one concrete example of a modern, human-friendly shell, the Fish shell, and then briefly discuss others, just to make sure you have an idea of the range of choices. We wrap up this section with a quick recommendation and conclusion in “Which Shell Should I Use?”.

Fish Shell

The Fish shell describes itself as a smart and user-friendly command line shell. Let’s have a look at some basic usage first and then move on to configuration topics.

Basic Usage

For many of the daily tasks you won’t notice a big difference from Bash in terms of input; most of the commands provided in Table 3-2 are valid. However, there are two areas where fish is different and much more convenient than Bash:

  • No explicit history management. You simply type, and previous executions of a command are shown. You can use the up and down keys to select one; see Figure 3-4.

  • Autosuggestions for many commands, as shown in Figure 3-5. In addition, when you press TAB, the Fish shell tries to complete the command, argument, or path, giving you visual hints such as coloring your input red if it doesn’t recognize the command.

fish history
Figure 3-4. Fish history handling in action
fish autocompletion
Figure 3-5. Fish autosuggestion in action

In Table 3-3 you see some common fish commands listed, and in this context, note specifically the handling of environment variables.

Table 3-3. Fish shell reference
Task Command

Export environment variable KEY with value VAL

set -x KEY VAL

Delete environment variable KEY

set -e KEY

Inline env var KEY for command cmd

env KEY=VAL cmd

Change path length to 1

set -g fish_prompt_pwd_dir_length 1

Manage abbreviations

abbr

Manage functions

functions and funced

Unlike other shells, fish stores the exit status of the last command in a variable called $status instead of in $?.

If you’re coming from Bash, you may also want to consult the Fish FAQ which addresses most of the gotchas.


To configure the Fish shell, you simply enter the fish_config command; fish launches a server on http://localhost:8000 and automatically opens your default browser with a fancy UI, shown in Figure 3-6, that allows you to view and change settings.

fish config ui
Figure 3-6. Fish shell configuration via browser

Let’s now see how I have configured my environment.

My config is rather short; in ~/.config/fish/config.fish I have the following:

set -x FZF_DEFAULT_OPTS "-m --bind='ctrl-o:execute(nvim {})+abort'"
set -x FZF_DEFAULT_COMMAND 'rg --files'
set -g FZF_CTRL_T_COMMAND "command find -L $dir -type f 2> /dev/null | sed '1d; s#^./##'"
set -x EDITOR nvim
set -x KUBE_EDITOR nvim
set -ga fish_user_paths /usr/local/bin

My prompt, defined in ~/.config/fish/functions/fish_prompt.fish, looks as follows:

function fish_prompt
    set -l retc red
    test $status = 0; and set retc blue

    set -q __fish_git_prompt_showupstream
    or set -g __fish_git_prompt_showupstream auto

    function _nim_prompt_wrapper
        set retc $argv[1]
        set field_name $argv[2]
        set field_value $argv[3]

        set_color normal
        set_color $retc
        echo -n '─'
        set_color -o blue
        echo -n '['
        set_color normal
        test -n $field_name
        and echo -n $field_name:
        set_color $retc
        echo -n $field_value
        set_color -o blue
        echo -n ']'
    end

    set_color $retc
    echo -n '┬─'
    set_color -o blue
    echo -n [
    set_color normal
    set_color c07933
    echo -n (prompt_pwd)
    set_color -o blue
    echo -n ']'

    # Virtual Environment
    set -q VIRTUAL_ENV
    and _nim_prompt_wrapper $retc V (basename "$VIRTUAL_ENV")

    # git
    set prompt_git (fish_git_prompt | string trim -c ' ()')
    test -n "$prompt_git"
    and _nim_prompt_wrapper $retc G $prompt_git

    # New line
    echo

    # Background jobs
    set_color normal
    for job in (jobs)
        set_color $retc
        echo -n '│ '
        set_color brown
        echo $job
    end
    set_color blue
    echo -n '╰─> '
    set_color -o blue
    echo -n '$ '
    set_color normal
end
The above prompt definition yields the prompt shown in Figure 3-7. Note there the difference between a directory that contains a Git repo and one that does not: yet another piece of built-in visual context that speeds up your flow. Also, notice the current time on the right-hand side.

fish prompt
Figure 3-7. Fish shell prompt

My abbreviations—think: alias replacement, found in other shells—look as follows:

$ abbr
abbr -a -U -- :q exit
abbr -a -U -- cat bat
abbr -a -U -- d 'git diff --color-moved'
abbr -a -U -- g git
abbr -a -U -- grep 'grep --color=auto'
abbr -a -U -- k kubectl
abbr -a -U -- l 'exa --long --all --git'
abbr -a -U -- ll 'ls -GAhltr'
abbr -a -U -- m make
abbr -a -U -- p 'git push'
abbr -a -U -- pu 'git pull'
abbr -a -U -- s 'git status'
abbr -a -U -- stat 'stat -x'
abbr -a -U -- vi nvim
abbr -a -U -- wget 'wget -c'

To add a new abbreviation, use abbr --add. Abbreviations are handy for simple commands that take no arguments. But what if you have a more complicated construct you want to shorten, say, a sequence of git commands that also takes an argument? Meet functions in Fish.

Let’s now take a look at an example function, defined in ~/.config/fish/functions/c.fish. We can use the functions command to list functions, the function command to create a new one, and, in this case, funced c to edit it:

function c
    git add --all
    git commit -m "$argv"
end

With that we have reached the end of the Fish section, which provided a usage tutorial and configuration tips. Let’s now have a quick look at other modern shells.

The Z-shell

Z-shell or zsh is a Bourne-like shell with a powerful completion system and rich theming support. With Oh My Zsh you can pretty much configure and use zsh in the way you’ve seen earlier on with fish while retaining wide backwards compatibility with Bash.

zsh uses five startup files as shown in the following (note that if $ZDOTDIR is not set, then zsh uses $HOME instead):

$ZDOTDIR/.zshenv 1
$ZDOTDIR/.zprofile 2
$ZDOTDIR/.zshrc 3
$ZDOTDIR/.zlogin 4
$ZDOTDIR/.zlogout 5

Sourced on all invocations of the shell. It should contain commands to set the search path and other important environment variables, but should not contain commands that produce output or assume the shell is attached to a tty.


Is meant as an alternative to .zlogin for ksh fans (these two are not intended to be used together); similar to .zlogin, except that it is sourced before .zshrc.


Sourced in interactive shells, should contain commands to set up aliases, functions, options, key bindings, etc.


Sourced in login shells. It should contain commands that should be executed only in login shells. Note that .zlogin is not the place for alias definitions, options, environment variable settings, etc.


Sourced when login shells exit.
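To make this division of labor concrete, here is a minimal sketch of what might go where; all settings shown (paths, aliases, options) are illustrative, not prescriptive:

```shell
# ~/.zshenv (every invocation, scripts included; no output here):
export PATH="$HOME/bin:$PATH"
export EDITOR=nvim

# ~/.zshrc (interactive shells: aliases, options, key bindings):
alias ll='ls -al'
setopt AUTO_CD          # type a directory name to cd into it
bindkey -e              # Emacs-style key bindings

# ~/.zlogin (login shells; producing output is fine here):
echo "Welcome back, $USER"
```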

For more zsh plugins see also the awesome-zsh-plugins repo on GitHub and if you want to learn zsh, consider reading An Introduction to the Z Shell by Paul Falstad and Bas de Bakker.

Other Modern Shells

In addition to fish and zsh there are a number of other interesting, but not necessarily always Bash compatible shells available out there. When you have a look at those, ask yourself what the focus of the respective shell is (interactive usage vs. scripting) and how active the community around it is.

Some examples of modern shells for Linux I came across and can recommend you to have a look at include:

  • Oil shell is targeting Python and JavaScript users. In other words: the focus is less on interactive use and more on scripting.

  • murex, a POSIX shell that sports interesting features such as an integrated testing framework, typed pipelines, and event-driven programming.

  • Nushell is an experimental new shell paradigm, featuring tabular output with a powerful query language. Learn more via the detailed Nu Book.

  • PowerShell, a cross-platform shell that started off as a fork of the Windows PowerShell and offers a different set of semantics and interactions than POSIX shells.

There are many more shells out there, so keep looking and try out what works best for you: think beyond Bash and optimize for your use case.

Which Shell Should I Use?

At this point in time, every modern shell—other than Bash—seems like a good choice, from a human-centric perspective. Smooth auto-complete, easy config, and smart environments are no luxury in 2021 and, given the time you usually spend on the command line, you should try out different shells and pick the one you like most. I personally use the Fish shell, but many of my peers are super happy with the Z-shell.

You may have issues that make you hesitant to move away from Bash, specifically:

  • Remote systems: you cannot install your own shell there and have to use Bash.

  • Compatibility, muscle memory. It can be hard to get rid of certain habits.

  • Almost all instructions (implicitly) assume Bash, for example, you would see instructions like export FOO=BAR which is Bash specific.

It turns out that the above issues are by and large not relevant to most users. While you may temporarily have to use Bash on a remote system, most of the time you will be working in an environment that you control. There is a learning curve, but the investment pays off in the long run.

With that, let’s focus on another way to boost your productivity in the terminal: multiplexers.

Terminal Multiplexers

We came across terminals already at the beginning of this chapter, in “Terminals”. Now let’s dive deeper into the topic of how to improve your terminal usage, building on a concept that is both simple and powerful: multiplexing.

Think of it the following way: you usually work on different things that can be grouped together, for example, you may work on an open source project, authoring of a blog post or docs, some server remote access, interacting with an HTTP API to test things, and so forth. These tasks may each require one or more terminal windows and oftentimes you want to or need to do potentially interdependent tasks in two windows at the same time, for example:

  • You are using the watch command to periodically execute a directory listing and at the same time edit a file.

  • You start a server process (a Web server or application server) and want to have it running in the foreground (see also “Job Control”) to keep an eye on the logs.

  • You want to edit a file using vi and at the same time use git to query the status and commit changes.

  • You have a VM running in the public cloud and want to ssh into it while having the possibility to manage files locally.

Think of all of the above examples as things that logically belong together and that, in terms of duration, can range from short-term (a few minutes) to long-term (days or weeks). The grouping of these tasks is usually called a session.

Now, there are a number of challenges if you want to achieve the above:

  • You need multiple windows, so one solution is to launch multiple terminals or if the UI supports it, multiple instances (tabs).

  • You would like to have all the windows and paths around, even if you close the terminal or the remote side closes down.

  • You want to expand or zoom in and out to focus on certain tasks, while keeping an overview of all your sessions, being able to navigate between them.

To enable these tasks, people came up with the idea of overlaying a terminal with multiple windows (and sessions, to group windows); in other words, of multiplexing the terminal I/O.

Let’s have a brief look at the original implementation of terminal multiplexing, called screen. Then we focus in-depth on a widely used implementation called tmux and wrap up with other options in this space.


screen

screen is the original terminal multiplexer and is still in use. Unless you’re in a remote environment where nothing else is available and/or you can’t install another multiplexer, you should probably not be using screen nowadays. One reason is that it’s not actively maintained anymore; another is that it is not very flexible and lacks a number of features that modern terminal multiplexers have.


tmux

tmux is a flexible and rich terminal multiplexer that you can bend to your needs. As you can see in Figure 3-8, there are three core elements you’re interacting with in tmux, from coarse-grained to fine-grained units:

Figure 3-8. The tmux elements: sessions, windows, and panes.
  • Sessions: a logical unit, think of it as a working environment dedicated to a specific task such as “working on project X” or “writing blog post Y”. It’s the container for all other units.

  • Windows: you can think of a window as a tab in a browser, belonging to a session. It’s optional to use and oftentimes you only have one window per session.

  • Panes: those are your workhorses, effectively a single shell instance running. A pane is part of a window, and you can easily split it vertically or horizontally, as well as expand/collapse it (think: zoom), and close panes as you need them.

Just like in screen, tmux has the concept of attaching to and detaching from a session. Let’s assume we start from scratch and launch it with a session called test:

$ tmux new -s test

With the above command, tmux is running as a server and you find yourself in a shell you’ve configured in tmux, running as the client. This client/server model allows you to create, enter, leave, and destroy sessions, and to use the shells running in them, without having to think about the processes running (or failing) in them.

tmux uses CTRL+b as the default keyboard shortcut, also called prefix or trigger. For example, to list all windows you would press CTRL+b and then w, or to expand the current (active) pane you would use CTRL+b and then z.


In tmux the default trigger is CTRL+b. To improve the flow, I mapped the trigger to an unused key, so a single keystroke is sufficient. The way I did it is as follows: I mapped the trigger to the Home key in tmux and further that Home key to the CAPS LOCK key by changing its mapping in /usr/share/X11/xkb/symbols/pc to key <CAPS> { [ Home ] };.

The double-mapping described here is a workaround I needed to do. So, depending on your target key or terminal you might not have to do this, but I strongly encourage you to map CTRL+b to an unused key you can easily reach since you will press it many times a day.

You can now use any of the commands listed in Table 3-4 to manage further sessions, windows, and panes. Also, by pressing CTRL+b and then d you can detach a session, which effectively means that you put tmux into the background.

When you then start a new terminal instance or, say, you ssh to your machine from a remote place, you can attach to an existing session. Let’s do that with the test session we created earlier:

$ tmux attach -t test 1

Attach to existing session called test. Note that if you want to detach the session from its previous terminal you would also supply the -d parameter.

Table 3-4 lists common tmux commands grouped by the units we discussed above, from widest scope (session) to narrowest one (pane).

Table 3-4. tmux reference
Target    Task               Command
session   create new         :new -s NAME
session   rename             trigger + $
session   list all           trigger + s
window    create new         trigger + c
window    rename             trigger + ,
window    switch to          trigger + 1 … 9
window    list all           trigger + w
window    close              trigger + &
pane      split horizontal   trigger + "
pane      split vertical     trigger + %
pane      toggle zoom        trigger + z
pane      close              trigger + x

Now that you have a basic idea of how to use tmux, let’s turn our attention to how to configure and customize it. My .tmux.conf looks as follows:

unbind C-b 1
set -g prefix Home
bind Home send-prefix
bind r source-file ~/.tmux.conf ; display "tmux config reloaded :)" 2
bind \ split-window -h -c "#{pane_current_path}" 3
bind - split-window -v -c "#{pane_current_path}"
bind X confirm-before kill-session 4
set -s escape-time 1 5
set-option -g mouse on 6
set -g default-terminal "screen-256color" 7
set-option -g status-position top 8
set -g status-bg colour103
set -g status-fg colour215
set -g status-right-length 120
set -g status-left-length 50
set -g window-status-style fg=colour215
set -g pane-active-border-style fg=colour215
set -g @plugin 'tmux-plugins/tmux-resurrect' 9
set -g @plugin 'tmux-plugins/tmux-continuum'
set -g @continuum-restore 'on'
run '~/.tmux/plugins/tpm/tpm'

This line and the next two change the trigger to Home.


Reload the config via TRIGGER + r.


This line and the next redefine pane splitting; retain the current directory of the existing pane.


Adds a shortcut to kill the session, with a confirmation prompt.


No delays.


Enable mouse selections.


Set the default terminal mode to 256-color mode.


Theme settings (next six lines).


From here to the end: plugin management.

First install tpm, the tmux plugin manager and then TRIGGER + I for the plugins. The plugins used here are:

  • tmux-resurrect allows you to restore sessions with trigger + Ctrl-s (save) and trigger + Ctrl-r (restore).

  • tmux-continuum automatically saves/restores sessions (in a 15-minute interval).

Figure 3-9 shows my Alacritty terminal running tmux; you can see the sessions with the shortcuts 0 to 9 in the upper-left corner.

Figure 3-9. An example tmux instance in action, showing available sessions

While tmux certainly is an excellent choice, there are indeed other options than tmux, so let’s have a peek.

Other Multiplexers

Other terminal multiplexers you can have a look at and try out include:

  • tmuxinator is a meta-tool, allowing you to manage tmux sessions.

  • Byobu is a wrapper around either screen or tmux, especially interesting if you’re using Ubuntu or Debian-based Linux distros.

  • Zellij calls itself a terminal workspace, is written in Rust and goes beyond what tmux offers, including a layout engine and a powerful plugin system.

  • dvtm brings the concept of tiling window management to the terminal; powerful, but it also has a learning curve, like tmux.

  • 3mux is a simple terminal multiplexer written in Go; it is easy to use but not as powerful as tmux.

With this quick review of multiplexer options out of the way, let’s talk about selecting one.

Which Multiplexer Should I Use?

Unlike with shells for human users, I do have a concrete preference in the context of terminal multiplexers: use tmux. The reasons are manifold: it is mature, stable, feature-rich (many plugins), and flexible. Many folks are using it, so there’s plenty of material out there to read up on, as well as help available. The others are exciting, but either relatively new or, as is the case with screen, past their prime.

With that, I hope I was able to convince you to consider using a terminal multiplexer to improve your terminal and shell experience, speed up your tasks and make the overall flow smoother.

Now, we turn our attention to the last topic in this chapter, automating tasks with shell scripts.


Scripting

In the previous sections of this chapter we focused on the manual, interactive usage of the shell. Once you’ve done a certain task over and over again manually at the prompt, it’s likely time to automate the task. This is where scripts come in.

We focus on writing scripts in Bash here, for two reasons:

  • Most of the scripts out there are written in Bash and hence you will find a lot of examples and help available for Bash scripts.

  • The likelihood of finding Bash available on a target system is high, making your potential user base bigger than if you were using a (potentially more powerful, but esoteric and not widely used) alternative to Bash.

Just to provide you with some context before we start: there are shell scripts out there that clock in at several thousand lines of code. Not that I encourage you to aim for this, quite the opposite: if you find yourself writing long scripts, ask yourself if a proper scripting language such as Python or Ruby is the better choice.

Let’s step back now and develop a short but useful example, applying good practices along the way. Let’s assume we want to automate the task of displaying a single statement on the screen that, given a user’s GitHub handle, shows when the user joined, using their full name. Something along the lines of:

XXXX XXXXX joined GitHub in YYYY

How do we go about automating this task with a script? Let’s start with the basics, then review portability, and work our way up to the “business logic” of the script.

Scripting Basics

The good news is that by interactively using a shell you already know most of the relevant terms and techniques. In addition to variables, streams and redirection, and common commands, there are a few specific things you want to be familiar with in the context of scripts, so let’s review them.

Advanced Data Types

While shells usually treat everything as strings (if you want to perform more complicated numerical tasks, you should probably not use a shell script), they do support some advanced data types such as arrays.

Let’s have a look at arrays in action:

os=('Linux' 'macOS' 'Windows') 1
echo ${os[0]} 2
numberofos=${#os[@]} 3

Define an array with three elements.


Access the first element, would print Linux.


Get the length of the array, resulting in numberofos being 3.
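Beyond defining and measuring arrays, you will typically append to them and loop over them. A small sketch continuing the os example above (quoting "${os[@]}" keeps elements containing spaces intact):

```shell
os=('Linux' 'macOS' 'Windows')
os+=('FreeBSD')                  # append an element

for current in "${os[@]}"; do    # iterate over all elements
    echo "$current"
done

echo "${#os[@]}"                 # length is now 4
```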

Flow Control

Flow control allows you to branch (if) or repeat (for and while) in your script, making the execution dependent on a certain condition.

Some usage examples of flow control:

for afile in /tmp/* ; do 1
    echo "$afile"
done

for i in {1..10}; do 2
    echo "$i"
done

while true; do 3
    sleep 1
done

Basic loop iterating over a directory, printing each file name.


Range loop.


Forever loop, break out with CTRL+c.
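These constructs combine naturally; for example, here is a counter sketch that pairs a while loop with an if and break, which is also how you would leave a forever loop programmatically rather than with CTRL+c:

```shell
counter=0
while true; do
    counter=$((counter + 1))     # arithmetic expansion
    if [ "$counter" -ge 3 ]; then
        break                    # leave the loop after three rounds
    fi
done
echo "$counter"                  # prints: 3
```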


Functions

Functions allow you to write more modular and reusable scripts. You have to define a function before you use it, since the shell interprets the script from top to bottom.

A simple function example:

sayhi() { 1
    echo "Hi $1 hope you are well!"
}

sayhi "Michael" 2

Function definition, parameters implicitly passed via $n.


Function invocation, the output is “Hi Michael hope you are well!”.
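One thing to be aware of: shell functions do not return values the way functions in most languages do. The usual pattern is to write the result to stdout and capture it with command substitution; return only sets an exit code. A sketch:

```shell
double() {
    echo $(( $1 * 2 ))       # "return" the result by printing it
}

result=$(double 21)          # capture the function's output
echo "$result"               # prints: 42
```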

Advanced I/O

With read you can read user input from stdin, which you can use to elicit runtime input, for example via a menu of options. Further, rather than using echo, consider printf, which allows you fine-grained control over the output, including colors. printf is also more portable than echo.

An example usage of the advanced I/O in action:

read name 1
printf "Hello %s\n" "$name" 2

Read value from user input.


Output value read in the previous step.
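To illustrate the fine-grained control printf gives you, here are a few of its format specifiers in action (unlike echo, these behave consistently across shells):

```shell
printf "%s\n" "plain string"          # string plus explicit newline
printf "%05d\n" 42                    # zero-padded integer: 00042
printf "%-10s|%s\n" "left" "right"    # left-aligned, 10-character column
printf "%.2f\n" 3.14159               # two decimal places: 3.14
```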

There are other, more advanced concepts available to you, such as signals and traps. Given that we only want to provide an overview of and introduction to the scripting topic here, I refer you to the excellent Bash scripting cheat sheet for a comprehensive reference of all the relevant constructs. If you are serious about writing shell scripts, I recommend reading the bash Cookbook by Carl Albing, JP Vossen, and Cameron Newham, which contains lots and lots of great snippets you can use as a starting point.
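To give you a taste of traps before moving on: a common use is registering a cleanup handler that runs whenever the script exits, normally or on error. A minimal sketch (the temporary work file is illustrative):

```shell
scratch=$(mktemp)            # create a temporary work file

cleanup() {
    rm -f "$scratch"         # remove it again
}
trap cleanup EXIT            # run cleanup on any exit, errors included

echo "working with $scratch"
# ... do work here; no explicit rm needed, the EXIT trap handles it
```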

Writing Portable Bash Scripts

We now have a look at what it means to write portable scripts in Bash. But wait, what does portable mean, and why should you care?

At the beginning of “Shells” we defined what POSIX means, so let’s build on that. When I say portable, I mean that we are not making too many assumptions, implicitly or explicitly, about the environment the script will be executed in. If a script is portable, it runs on many different systems (shells, Linux distros, etc.).

But remember that, even if you pin down the type of shell, in our case to Bash, not all features work the same way across different versions of a shell. At the end of the day it boils down to the number of different environments you can test your script in.

Executing Portable Scripts

How are scripts executed? First, let’s state that scripts really are simple text files; the extension doesn’t matter, although you will often find .sh used as a convention. There are two things that turn a text file into an executable script that the shell can run:

  • The text file needs to declare the interpreter in the first line, using what is called shebang (or hashbang) that is written as #!, see also the first line of the template below.

  • Then, you need to make the script executable, for example with chmod +x, which allows everyone to run it, or even better chmod 750, which is more along the lines of least privilege. We will dive deeper into this topic in Chapter 4.
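Those two steps together look as follows in practice; the file name hello.sh is just an example:

```shell
cat >hello.sh <<'EOF'
#!/usr/bin/env bash
echo "hello from a script"
EOF

chmod 750 hello.sh           # owner: rwx, group: r-x, others: nothing
./hello.sh                   # prints: hello from a script
```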

Now that you know about the basics, let’s have a look at a concrete template we can use as a starting point.

A Skeleton Template

A skeleton template for a portable Bash shell script that you can use as a seed looks as follows:

#!/usr/bin/env bash 1
set -o errexit 2
set -o nounset 3
set -o pipefail 4

firstargument=${1:-somedefaultvalue} 5

echo $firstargument

The hashbang instructing the program loader that we want it to use bash to interpret this script.


Define that we want to stop the script execution if an error happens.


Define that we treat unset variables as an error (so the script is less likely to fail silently).


Define that when one part of a pipe fails the whole pipe should be considered failed. This helps to avoid silent failures.


An example command line parameter with a default value.
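You can observe the effect of these options with throwaway commands; here is a sketch showing what nounset and pipefail change compared to the defaults:

```shell
# With nounset, referencing an unset variable aborts the shell:
bash -c 'set -o nounset; echo "$notset"'           # fails: unbound variable
echo $?                                            # non-zero exit code

# Without pipefail, a pipe's exit code is that of its last command;
# with it, a failure anywhere makes the whole pipe fail:
bash -c 'false | true'; echo $?                    # 0
bash -c 'set -o pipefail; false | true'; echo $?   # 1
```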

We will use this template later in this section to implement our GitHub info script.

Good Practices

I’m using “good practices” instead of “best practices” because what you should do depends on the situation and how far you want to go. There is a difference between a script you write for yourself and one that you ship to thousands of users, but in general, the high-level good practices for writing scripts are as follows:

Fail fast and loud

Avoid silent fails and fail fast, things like errexit and pipefail do that for you. Since Bash tends to fail silently by default, failing fast is almost always a good idea.

Sensitive information

Don’t hardcode any sensitive information, such as passwords, into the script. Such information should be provided at runtime, via user input or by calling out to an API. Also, consider that ps reveals program parameters (and more), so that’s another way sensitive information can be leaked.

Input sanitization

Set and provide sane defaults for variables where possible, and sanitize the input you receive, for example launch parameters or values interactively ingested via read, to avoid situations where an innocent-looking rm -rf "$PROJECTHOME/"* wipes your drive because the variable wasn’t set.
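A sketch of two parameter-expansion guards for exactly that rm -rf scenario: ${VAR:-default} substitutes a fallback, and ${VAR:?message} aborts if the variable is unset or empty (PROJECTHOME and the default path are just the example names from above):

```shell
# Substitute a sane default if PROJECTHOME is unset or empty:
workdir=${PROJECTHOME:-/tmp/myproject}
echo "would operate on: $workdir"

# Or hard-stop with a message instead of proceeding:
# rm -rf "${PROJECTHOME:?must be set, refusing to delete}"/*
```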

Check dependencies

Don’t assume that a certain tool or command is available unless it’s a built-in or you know your target environment. Just because your machine has curl installed doesn’t mean the target machine does. If possible, provide fallbacks, for example, if no curl is available, use wget.
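A sketch of such a check with fallback, using command -v, which, unlike which, is itself specified by POSIX (the download helper and its flags are illustrative):

```shell
if command -v curl >/dev/null 2>&1; then
    download() { curl -sL -o "$1" "$2"; }    # preferred tool
elif command -v wget >/dev/null 2>&1; then
    download() { wget -q -O "$1" "$2"; }     # fallback
else
    echo "neither curl nor wget found, aborting" >&2
    exit 1
fi

# Usage: download outfile URL
```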

Error handling

When your script fails (and it’s not a matter of if, but only of when and where), provide actionable instructions for your users. For example, rather than Error 123, say what has failed and how your user can fix the situation, such as Tried to write to /project/xyz/ but it seems this is read-only for me.


Documentation

Document your scripts inline (using # Some doc here) for the main blocks, and try to stick to 80 columns width for readability and diffing.


Versioning

Consider versioning your scripts using Git.


Testing

Lint and test your scripts; since this is such an important practice, we will discuss it in greater detail in “Linting and Testing Scripts”.

Let’s now move on to making scripts safe(r) by linting them while developing and testing them before you distribute them.

Linting and Testing Scripts

While you’re developing, you want to check and lint your scripts, making sure that you’re using commands and instructions correctly. There’s a nice way to do that, depicted in Figure 3-10: a program called shellcheck. You can download and install it locally, or use the online version.

Figure 3-10. A screenshot of the online shellcheck tool

Further, before you check your script into a repo, consider using bats to test it: bats stands for “Bash Automated Testing System” and allows you to define test files as Bash scripts with special syntax for test cases. Each test case is simply a Bash function with a description, and you would typically invoke these test scripts as part of a CI pipeline, for example as a GitHub Action.

Now let’s put our good practices for script writing, linting, and testing into practice and implement the example script we specified at the beginning of this section.

End-to-end Example: GitHub User Info Script

In this end-to-end example we bring all of the above tips and tooling together to implement our example script, which is supposed to take a GitHub user handle and print out a message containing the year the user joined, along with their full name.

This is how one implementation looks, taking the good practices into account. Store the following in a file and make it executable:

#!/usr/bin/env bash

set -o errexit
set -o errtrace
set -o nounset
set -o pipefail

### Command line parameter:
targetuser="${1:-mhausenblas}" 1

### Check if our dependencies are met:
if ! [ -x "$(command -v jq)" ]; then
  echo "jq is not installed" >&2
  exit 1
fi
### Main:
githubapi=""   # the GitHub users API endpoint
tmpuserdump=$(mktemp)                           # temp file for the JSON dump

result=$(curl -s $githubapi$targetuser) 2
echo $result > $tmpuserdump

name=$(jq .name $tmpuserdump -r) 3
created_at=$(jq .created_at $tmpuserdump -r)

joinyear=$(echo $created_at | cut -f1 -d"-") 4
echo $name joined GitHub in $joinyear 5

Provide a default value to use if user doesn’t supply us with one.


Using curl, access the GitHub API to download the user info as a JSON file and store it in a temporary file (next line).


Using jq pull out the fields we need. Note that the created_at field has a value that looks something like "2009-02-07T16:07:32Z".


Using cut to extract the year from created_at field in the JSON file.


Assemble the output message and print to screen.
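You can verify the cut step in isolation on the prompt, using the sample created_at value from above:

```shell
created_at="2009-02-07T16:07:32Z"
joinyear=$(echo "$created_at" | cut -f1 -d"-")
echo "$joinyear"             # prints: 2009
```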

Now let’s run it with the defaults:

$ ./
Michael Hausenblas joined GitHub in 2009

Congratulations, you now have everything at your disposal to use the shell, both interactively at the prompt and for scripting. Before we wrap up, take a moment to think about the following issues concerning our script:

  • What if the JSON blob the GitHub API returns is not valid? What if we encounter a 500 HTTP error? Maybe adding a message along the lines of “try later” is more useful if there’s nothing the user can do themselves.

  • For the script to work you need network access, otherwise the curl call will fail. What could you do about a lack of network access? Informing the user about it and suggest what they can do to check networking may be an option.

  • Think about improvements around the dependency checks. For example, we implicitly assume here that curl is installed. Can you add a check that makes the download command a variable and falls back to wget?

  • How about adding some usage help? Maybe, if the script is called with an -h or --help parameter, show a concrete usage example and the options users can use to influence the execution (ideally including the default values used).

You see now that, although this script looks good and works in most cases, there’s always something you can improve to make the user experience better and the script more robust, failing with helpful and actionable user messages. In this context, consider using frameworks such as bashing, rerun, or rr to improve modularity.


Conclusion

In this chapter we focused on working with Linux in the terminal, a textual user interface. We discussed shell terminology, provided a hands-on introduction to shell basics, reviewed common tasks, and looked at how you can improve your shell productivity using modern variants of certain commands.

Then, we looked at modern, human-friendly shells, specifically fish, and how to configure and use it. Further, we covered terminal multiplexers, using tmux as the hands-on example, enabling you to work with multiple local or remote sessions, windows, and panes.

Lastly, we discussed automating tasks by writing safe and portable shell scripts, including linting and testing said scripts. Remember that shells effectively are command interpreters, and as with any kind of language you have to practice to get fluent. Having said this, now that you’re equipped with the basics of using Linux from the command line, you can already work with the majority of Linux-based systems out there, be it an embedded system or a cloud VM. In any case, you will find a way to get hold of a terminal and issue commands interactively or via executing scripts.

If you want to dive deeper into the topics discussed in this chapter, here are some further resources:

  1. Terminals:

  2. Shells:

  3. Terminal multiplexers:

  4. Shell scripts:

With the shell basics at our disposal we now turn our focus to access control and enforcement in Linux.
