Jamie's Blog

How to write BASH without falling over and punching yourself in the face

Tags: Programming

The working title, “How to BASH one out in 30 minutes,” was rejected by committee.

Note that this article has been recently recovered after a fire. Some formatting issues may be present.

First of all, put the following at the top of every file:

set -eu

set -e means, if a command produces an error code, stop executing the script. set -u means that substituting a variable we haven’t set yet is an error.
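You can watch both options bite without endangering your own session by running a throwaway shell (a sketch – the exact wording of the `set -u` error varies between shells):

```shell
# set -e: the script stops at the first failing command.
sh -c 'set -e; false; echo "not reached"'   # prints nothing, exits non-zero

# set -u: expanding a variable that was never set is a hard error.
sh -c 'set -u; echo "$undefined_var"'       # error message, non-zero exit
```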

If you’re definitely using Bash, then prefer

set -euo pipefail

At the end of the day, run your script through shellcheck.

There is never a good reason not to do this.

Wha? Huh? What is this?

In this short, easily digestible guide, I’m going to show you how to write Bash.

Bash is a unique, much-maligned, yet indispensable programming environment. Describing it as you might any other language, it offers unparalleled primitives for:

  • file-system access,
  • process execution,
  • inter-process communication,
  • multi-processing,

as well as extensive ecosystem support for text and data handling, multimedia processing, and networking.

Taken together, these make it an excellent “glue-code” for combining the output of other programs, and for writing throw-away batch operations on files. Its ubiquity and close synergy with Linux mean many developers, sysadmins, scientists and even casual users have some experience with it. The only problem is that approximately 1 in every 10 Bash scripts spontaneously explodes, killing the author and many of their friends and family.

The aim of this guide is to not let that happen.

Who this guide is for


This guide is for people who find themselves writing Bash whether they like it or not. That is, if you’re someone who:

  • Needs to write Bash scripts …
  • … without collapsing over your desk, setting your computer on fire, or deleting your dog.

You might also be interested if any of the following things are true:

  • You prefer learning how things actually work to a string of “How do I X” Google searches.
  • You want to learn more about Linux, The UNIX Way(TM), or program execution.
  • You harbour some deep-seated masochism, self-loathing or obstinate compulsion to suffer.

You really should have experience in another programming language – please don’t learn Bash as your first language. Go and learn Python instead, I’m begging you – I’ll wait!

A Proper Introduction

Bash belongs to a class of programs known as shells – you often hear the two terms used interchangeably. In fact, its name is an acronym for “Bourne Again Shell”. There are many shells available, including Bash, tcsh and Zsh. In this article, I’ll say “Bash” only when talking about a Bash-specific concept. Normally, I’ll use “shell” or “the shell” to refer more generally to “POSIX-compliant” shells1, a category that includes Bash. By sticking to POSIX compliance, we increase the chance our script will run the same after we update our computer or send it to a friend.

It’s also worth distinguishing between the terms “shell” and “terminal” – a terminal is a graphical user interface (Gnome terminal, urxvt, &c.) that attaches to an interactive shell session, which is the program that actually interprets input, runs commands, and produces their output. Terminals offer a variety of different features, and can re-interpret IO (for example, displaying ligatures, turning specially-formatted output into rendered images, or inserting and displaying Unicode characters), but they are really only a window onto the shell.

A typical interactive shell looks like this:

[jamie@JAMIE-T580-ARCH ~]$ ls
Desktop Documents Downloads Projects Pictures Videos bash_guide.md

That’s me, at my computer – in case I ever forget. After the $ is the command line, and on the following line is the result of running the ls command. In this guide, code snippets, commands and file paths will appear like this. Some examples will be designed specifically for typing into an interactive shell, in which case they’ll be shown with a single dollar at the start of every shell command, followed by its output:

$ echo "hi"
hi

However, the shell doesn’t just run interactively. You can write a file called a shell script, in which every line executes a command exactly as though you’d typed it into the shell yourself. As such, most examples will be presented with no dollar and no output:

echo "hi"

We’re going to talk a lot about “commands” and “programs”. My use of these terms is slightly at odds with the common terminology, but helps drive home some important points for the purposes of this guide. In the outside world, the more common term for “program” is “command”, and “command” would be more formally termed “command string”.

I prefer explaining things pedantically, but even so not everything in this guide is completely accurate – the aim is to approximate well enough that the inaccuracies won’t come back and bite you. The examples will be dense with information – the best way to read this guide is with a shell open in another window so you can play with things as I introduce them, answering your own questions and seeing how they work for yourself.

If you have any feedback, complaints, or an inaccuracy does come back and bite you, don’t @ me.

With that all out of the way, let the suffering commence!


Comments

Comments start with # anywhere in a line and continue to the end of the line. There is no multi-line comment syntax, but you can do this at the beginning of a line:

: "
This is, in effect,
a multi-line comment.
"

By the end you should be able to explain why this works.


Strings

Shell is a stringly typed programming language. There is no such thing as a number in shell. The value 1 is the string 1, hence there is no way to add two numbers.2
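You can watch this stringiness in action (a small sketch – variable assignment and $-expansion are covered properly later):

```shell
n=1
n=$n+1      # looks like addition, but…
echo "$n"   # prints "1+1": plain string concatenation
```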

Every string is split on white-space into a list of “words”. Repeated spaces between words are ignored. In a command string, the 0th word becomes the name of the program to run, with the rest its arguments:

            echo    "hello there" my     "good          friend"
          # ----     -----------  --      --------------------
  # word:    0            1       2                3

# All of these are equivalent:
echo "hello there" my "good          friend"
echo hello" "there my good"          "friend
"echo" "hello there"        "my"        "good          friend"

Yes: the program to run is just a string like everything else.

Strings do not need to be wrapped in quotes. Rather, " and ' disable word splitting and escape special characters (see Special Characters, next).

Quotes are removed from the resulting words, so in echo "hello", echo sees hello, not "hello".

Finally, quoted strings continue until the closing quote, and will contain any white-space that appears:

$ echo "quick
> brown
> fox"
quick
brown
fox

Special Characters

Outputting special characters in shell is a special pain in the arse.

The characters that have special meaning to the shell must be escaped (have their special meaning removed) with the \ special character:

$ echo '   # Invalid
$ echo \'
'

You can also wrap them in quotes, see Things That Aren’t Strings, next.

Other special characters are more tricky. The echo command is quite nice, but lulls you into a false sense of security: its familiar handling of \n and \t to mean “newline” and “tab” is internal, the shell doesn’t interpret them for you. To insert a newline character in shell, the correct syntax is $'\n'.

echo has two flags, -e and -E, which turn on and off its special interpretation of backslash sequences. Different systems have different default behaviours, so if you rely on either behaviour you must specify it.
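For example (a sketch run under Bash – the Bash-only $'\n' form and printf make an appearance, and printf always interprets backslash sequences, which is why it’s often the safer bet):

```shell
echo -e 'col1\tcol2'   # -e: \t becomes a real tab
echo -E 'col1\tcol2'   # -E: the backslash sequence is printed literally
printf 'col1\tcol2\n'  # printf always interprets \t

nl=$'\n'               # Bash-only: a variable holding a literal newline
echo "first${nl}second"
```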

Things That Aren’t Strings

There are a few things in shell that aren’t strings. This is an almost-exhaustive list.

Any of these at the start of a line: if, fi, for, while, do, done, case, continue, break, until, !, and any number of varname=word pairs. (! is also the special history accessor in Bash, so needs to be escaped everywhere).

Other than that, the only non-strings are the following special symbols:

Category                       Characters
Quotes, escape and comments    ' " \ #
Sequencing Operations          && || ; ;;
Variables                      $varname ${varname}
Globbing and home              * ? ~ ~user
IO Redirections                < << <<< > >> |
Sub-processes and conditions   $( ... ) ` { } () [[ ]] &

[ and ] are strings, but a glob character class like [0-9] is not. $ with no variable name is a string, and a ~ that isn’t at the start of a word, as in str~, is a string.

Don’t expect to memorise all this, of course. What is worth remembering is that single quotes escape everything except themselves, while double quotes escape all except the $ forms, \, and themselves.

This leads to unusual escape sequences to write a '. Can you tell why these are the same?

echo 'Shell is my favourite '\''language'\''...'
echo 'Shell is my favourite '\'language\''...'
echo "Shell is my favourite 'language'..."


Commands

In the terminology of this guide, “command”, “process” and “program” all reflect different qualities of the same thing: a process is a running thread of execution. A program is an executable file on your computer (or a shell builtin); a command is a line of shell that leads to a program being run as a process.

What makes the shell so unique is that almost all of its functionality comes from other programs on your system. Very rarely will the commands you run be implemented in shell, or run as part of your interpreter’s process. In fact, shell only barely constitutes a language – what makes it so powerful is the ease with which it can execute and compose these other programs, according to a standardised interprocess communication (IPC) system provided by the operating system. This means you can interface with any program using the same basic techniques, no matter what language it was originally written in or how it executes.

This means that the shell command syntax is incredibly simple, but the actual program semantics are up to the person who wrote that program. This leads to program behaviours that are as wildly inconsistent and frustrating as anything you could possibly imagine: what holds true for the behaviour of one program may not hold for another. For that reason, there are conventions on how programs should behave when executed from the shell, which we’ll cover shortly. Most programs we’ll talk about in this guide are well-behaved.

cat is a program that takes a list of filenames and con cat enates them. echo is a program which takes a list of strings and echo es them. These are programs living on your computer3, most likely under /usr/bin:

$ echo Hi there, how are you?
Hi there, how are you?
$ which echo
/usr/bin/echo
$ /usr/bin/echo Hi there, how are you?
Hi there, how are you?

The 0th word in each line is the program to be executed, and the following words are the arguments to the program, together constituting a command.

Separate commands can be sequenced with ; instead of newlines, as in many other languages:

sleep 1; sleep 5; sleep 10

Commands can be chained in a few other ways, some of which we’ll see shortly4:

sleep 1 && sleep 5 && sleep 10
sleep 1 || sleep 5 || sleep 10
sleep 1 & sleep 5 & sleep 10
sleep 1 | sleep 5 | sleep 10

Try these out and see if you can work out their behaviours!

Command Output

Every command has three (-ish) possible outputs:

  • Its exit code (always)
  • Its “stdout” (Standard Output) and “stderr” (Standard Error)
  • Any file-system outputs performed by the program (creating a file, updating a system log, deleting your hard-drive)

Exit Codes

Every command you run will result in an exit code, which is a number between 0 and 255. If it’s 0, then the program has (probably) succeeded, while anything else indicates some kind of failure. Immediately after a command, the $? variable is set to its exit-code.

There are some programs which only output exit codes:

$ true
$ echo $?
0
$ false
$ echo $?
1
$ test -n ""
$ echo $?
1

The first two are fairly self-explanatory. The third is also called [ (existing as both /usr/bin/test and /usr/bin/[), and we’ll see it again very soon.

If set -e is in place and a command returns a non-zero exit-code, your script will fail. You can test this in a terminal you don’t care much about like this:

$ set -e
$ false # Terminal will immediately close

This may seem bad, but consider the alternative: an integral part of your script fails, but it just carries on executing, whatever that means in the context of its failure. Put set -e at the top of your script as a liturgy against sin.

! before a command “negates” the exit code:

! true
echo $?   # Result: 1, execution continues
! false
echo $?   # Result: 0

What if set -e is rightfully applied, but you expect an error from a particular command? In this case, you can either wrap it in set +e; command; set -e, or prepend !. A common example where this is needed is when making a directory with mkdir: mkdir t will return 1 if folder t already exists; if you wish to create a folder that may already exist, you can do:

! mkdir t

What do you think would happen if you didn’t include a space before the command? What about multiple spaces? Try it out!
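For the record, mkdir itself has a flag for this exact situation – an aside, not the only way to do it:

```shell
mkdir -p t   # -p: succeed silently if t already exists
             # (it also creates intermediate directories, e.g. mkdir -p t/a/b)
mkdir -p t   # running it again is still exit-code 0
```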

The uses of exit-codes are two-fold. One use is obviously error handling. As you may now guess, the other is control flow. For example, exit-codes are the mechanism for the shell implementation of if and while:

if command; then
    # runs if command exits 0
elif other_command; then
    # runs if other_command exits 0
fi

while command; do
    # repeats until command exits non-zero
done
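Put together, that looks like this (a sketch – it borrows [, grep and a pipe slightly ahead of their formal introductions):

```shell
# if: the branch runs only when the command exits 0
if echo "hello" | grep -q ell; then
    echo "matched"
fi

# while: keep looping until the command exits non-zero
i=0
while [ "$i" -lt 3 ]; do
    echo "count $i"
    i=$((i + 1))      # arithmetic expansion – one of shell's escape hatches
done
```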

This is where [ comes in: [, AKA test, is a program that looks like a syntactical structure, which takes some arguments and returns an exit-code. For example,

if [ "my string" = "my string" ]; then
    # something here
fi

test is a pretty terrible program. Most of its tests take a list of one-character flags which are completely inscrutable, like -n or -z. Some of them are more obvious, like -gt and -eq, but what isn’t obvious is that -eq is for numbers, while = is for strings:

[ -n "" ] # exit-code: 1
[ -z "" ] # exit-code: 0
[ 5 = " 5" ] # exit-code: 1
[ 05 -eq " 5" ] # exit-code: 0

There are loads of file-related flags like -e to check a file exists, -w to check if we can write to it, &c. – a full list can be found in man test.
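A sketch of the file tests in action (mktemp just conjures a scratch file for us):

```shell
f=$(mktemp)            # create a scratch file
[ -e "$f" ]; echo $?   # 0: it exists
[ -w "$f" ]; echo $?   # 0: we can write to it
[ -d "$f" ]; echo $?   # 1: it's not a directory
rm "$f"
[ -e "$f" ]; echo $?   # 1: gone
```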

Another problem with test is that its Boolean logic is absolutely disgusting:

[ s = s -a 0 -gt 1 ] # exit-code: 1
[ s = s -o 0 -gt 1 ] # exit-code: 0
[ s = s -a \( 0 -gt 1 -o 1 -gt 0 \) ] # exit-code: 0

It’s better to write this as

[ s = s ] && [ 0 -gt 1 ]
[ s = s ] || [ 0 -gt 1 ]
[ s = s ] && ( [ 0 -gt 1 ] || [  1 -gt 0  ] )

but by the last line your eyes have started rolling back in their sockets, blood is pouring out of your mouth, and you f e e l yo ur bo*dy thirsting with an insatiab ble! de*sire! for flesh.

To prevent that from happening, Bash also provides a builtin, [[. It behaves similarly to test, but with some advantages. It can evaluate arithmetic, match regular expressions, match globs (see Pathname Expansion and Globbing), and has nicer syntax for Boolean logic:

# Globbing
[[ hello == he* ]] # exit-code: 0

# Boolean logic
[[ hello == he* && ! ( 0 -gt 1  && 1 -gt 0 ) ]] # exit-code: 0

# Regular expressions
[[ 93.a7   =~ [93]+\.[[:alnum:]]{2}$ ]] # exit-code: 0
[[ 93.a7a  =~ [93]+\.[[:alnum:]]{2}$ ]] # exit-code: 1

The downside is that this code requires Bash (see Compatibility)

As a shorthand for if statements guarding single commands, you can also sequence commands using && (and) and || (or), where a && b executes b only if a exits zero, while a || b only executes b if a is non-zero.

$ false || echo "hello!"
hello!
$ true && echo "hello!"
hello!
$ [ -n "$var" ] || echo "var is null"
var is null

This is comme il faut in shell5.

stdout, stderr

Every line of text you see resulting from a command is printed over either standard output (stdout) or standard error (stderr). For example, we’ve seen echo: it’s a “print” program, to give us feedback from our script. But another way to look at it is as a program which takes input as arguments and outputs them on stdout.

Usually, the output you’re working with is ASCII or Unicode text, often with newlines and other structuring elements, but there’s no limitation on what kind of data a program can send to stdout. To see an example of some non-text, try head -c 100 /usr/bin/head to use the head program to take the first 100 bytes of the head program. “Oh fuck,” you scream, as you watch your terminal try to eat itself. To convert this into a more copacetic ASCII format, we can do head -c 100 /usr/bin/head | xxd.

stdout and stderr are both output streams. Streams are a key concept of the shell, and much shell scripting is an exercise in moving data to and from streams. Conceptually, a stream is just a load of ordered information. When you start reading from a stream, its information may not have arrived yet, such as a file downloaded from the internet; it may not even exist yet, such as the result of compression or a numerical algorithm. In the worst case, you may got nuthin’ comin’. Therefore, a stream has indeterminate length – it does not make sense to ask how long it is.

Redirections

    Redirections allow us to redirect a program’s streams to any file. This is one of the most powerful features of the shell; coupled with Linux’s “everything is a file” philosophy, it can accomplish a remarkable array of behaviours.

    In this diagram, the running program is the 0th word, $0, and its arguments are the nth words $1, $2, &c. The program starts with two file-descriptors, 1 and 2, which start out pointing to the stdout and stderr end-points respectively.

    |  Command $0           +----->  (1) stdout
    |                       |
    |  Args $1 $2 ...       +----->  (2) stderr

    We can redirect 2 to a log-file with the following syntax:

    command arg1 arg2 2>word

    where I mean “word” in the word-splitting sense. This word will be interpreted as a file, for example,

    mkdir directory 2>errors.log
    |  mkdir                +----->  (1) stdout
    |                       |
    |  directory            +----->  (2) errors.log

    This is called “redirecting stderr to a file”.

    A major gotcha is that n>file truncates the file if it already exists, deleting its original contents. In shell terminology, this is called “clobbering” the file, and if you think there is a risk this might happen (say, misconfiguration by the user), you can make your shell throw an error instead with set -o noclobber.

    An alternative to > is >>, which behaves exactly the same except that it appends the new data to the end of the file.
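For example (log.txt is a throwaway name):

```shell
echo "first run"  >log.txt   # > clobbers: log.txt now holds one line
echo "second run" >>log.txt  # >> appends: now two lines
cat log.txt
# first run
# second run
echo "third run"  >log.txt   # clobbered again: back to one line
```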

    A very common use of output redirection is to silence a command whose output we don’t want to see:

    echo "hi" >/dev/null  # output sent to virtual file which 
                          # throws away all input
    echo "hi" >&-  # program has its stdout forcefully closed
                   # (may quit or crash)
    |  echo        +----->  (1) /dev/null
    |              |
    |  hi          +----->  (2) stderr
    |  echo        +-/
    |              |
    |  hi          +----->  (2) stderr

    Oh yeah – if you don’t provide the n in n>target, it defaults to 1.

    You can also redirect a file descriptor to &m, where m is the number of another file descriptor. This works like pointers in C – I’m sure that will clear it up for you.

    Just in case, though – if file descriptor m points to /some/file, then n>&m makes n point to /some/file too:

    echo hi 2>&1
    |  echo        +----> (1,2) stdout
    |              | /
    |  hi          '/   /-----> stderr

    Here’s the most complicated output redirection you’re likely to need – see if you can work it out before reading the explanation:

    echo qwertyuiop 3>&1 >/dev/null 2>&3
    +--------------+  stdout
    |  echo        +--->  (1,3) -----(1)--------> (1) /dev/null
    |              |           \ 
    |              |            ---> (3) -----> (2,3) stdout
    |              |                       /
    |  qwertyuiop  +--->  (2) -----> (2) -/   /-----> stderr
    +--------------+  stderr

    We create a new file handle 3 and point it at what 1 is pointing at: stdout. Then, we point 1 to /dev/null – 3 still points to stdout. Finally, we point 2 at what 3 is pointing at.

    The end result is to throw away the program’s normal stdout, but send its stderr to stdout instead.
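A practical cousin of that dance is capturing only a command’s stderr. This sketch borrows command substitution, $( … ), ahead of its proper introduction – it captures whatever the command writes to stdout:

```shell
# 2>&1 runs first: stderr now points where stdout points (the capture).
# Then >/dev/null re-points stdout alone at the bin.
errs=$(ls /no/such/file 2>&1 >/dev/null) || true
echo "stderr said: $errs"
```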

Output Files

Another possible output of programs is to a file. An example which writes a file is tar:

tar -cf example.tar myfile

This creates the archive file example.tar in the working directory (see The Environment), containing the file myfile. Note that we tell the program which file to create through its arguments (the argument to the -f flag). To the shell, this is just a string – it’s up to tar to interpret it as a path and open it.

In practice, very few programs write to a file unless you tell them to, but again, nothing guarantees this. By default any program runs with the same permissions as your user, so can access any file you can.

Of course, I’m cheating a bit here – everything is a file, so programs can and will also output over the network, “pwning” your “boxes” and sending all your secrets to the NASA.

Command Input

There are essentially four different ways to get input into a program6:

  • the command arguments, which are all the split words after word 0;
  • the program’s “stdin” (Standard Input);
  • any file-handles opened by the program itself;
  • “environment variables”, which we’ll cover later.


Arguments

As we’ve learned, arguments are just all the words following the 0th word. All arguments are equal in the eyes of the shell. We talked about how words are split in Strings, and we’ll see later how various shell constructs are expanded in Variables and Other Substitutions. The only thing not covered by those topics is flags.

The typical way to configure a program is to provide it with arguments called flags. You’ve seen some already:

# Outputs "Hello" with no newline
echo -n "Hello"

Flags aren’t special from the shell’s perspective, they’re just strings. It’s entirely down to whoever wrote a program to decide which flags it accepts, what format they take, and what they do. Don’t assume a flag for one program works for another!

There are some common conventions: single-letter flags are preceded with a single -, like echo -n, set -e -u. Flags can take an argument, like head -n 20, and the space is often optional, as in head -n20. Many programs allow you to combine single-letter flags for brevity: set -eu. Only the last flag so combined can take an argument, like -o in set -euo pipefail.

Most single-letter flags have a longname equivalent: head --lines 10. Longnames rarely allow omitting the separator between name and value, but often allow an optional = in place of the space, as in --lines=10.

Many commands which take a file as an argument to read from will also support taking the file contents over stdin; commonly, you must pass them a single - argument to enable this behaviour. For example cat can take both files and stdin at once with cat file1 - file2.
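For example (nums.txt is just an example file we conjure up first):

```shell
printf 'one\ntwo\n' >nums.txt          # a small example file
echo "from stdin" | cat nums.txt - nums.txt
# one
# two
# from stdin
# one
# two
```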

Some commands take a list of arguments which they pass to another command, while also taking arguments themselves. In these cases, it is common to use -- between the arguments for the main command and the arguments for the subcommand.
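-- has a closely related use as an “end of options” marker, telling a command to stop parsing flags – handy when a file name starts with a dash:

```shell
touch -- '-n'   # a file literally named "-n"; without --, touch
                # would try to parse -n as a flag and complain
rm -- '-n'      # same trick to delete it again
```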

Again, not every program will follow these conventions. Some programs have shorthand for very common flags, like head -20. It’s very unusual to allow --lines10, but it might happen! Sometimes programs insist no space come between single letter flags and their arguments; others insist it be there. Sometimes programs will offer only longnames or only shortnames. Some programs don’t use flags at all, taking arguments by position only, or parsing some special domain-specific language instead of arguments.

It’s a mess!


stdin is – you guessed it – a stream. If it wore trousers, it would wear them like this:

          |  Command $0      +----->  (1) stdout
(0) ----->+                  |
    stdin |  Args $1 $2 ...  +----->  (2) stderr

By default, if a program reads stdin, it gets… you! (Or, the interactive terminal it’s plugged into).

A command that reads stdin is cat:

$ cat

Run it in your terminal now. You’ll notice that, like other cats, cat is hungry for input, ravenously devouring every character and new-line you give it. You can tell it to quit with Control + C, but like most programs taking input, what it’s really waiting for is the end-of-file signal. In the interactive shell, you can send this with Control + D.

We can of course redirect into stdin:

wc -l <my_file

I won’t draw a diagram for that because it would be too obscene for print.

(and because you should get the picture by now.)

In fact, if there isn’t a terminal or some other source for stdin, but some part of your script tries to read it, then your script will crash or hang. For example:

$ cat <&-
cat: -: Bad file descriptor
cat: closing standard input: Bad file descriptor

Input and Output. At the same time.

We’ve seen how to get input into a command, using arguments and the < redirection. We’ve seen how to get output out of a command, using the > redirection. Now, I’m going to blow your mind:

$ echo "hello" | sed 's/l/n/'

On the left, we have a command which produces hello on its stdout. On the right, we have a command which replaces an l with an n, on every line that comes through its stdin.

          +--------+      +----------+
          | echo   +----->+ sed      +----->  (1) stdout
(0) --/ /-|        .      |          .
    stdin | hello  |\     | s/l/n/   |\
          +--------+ \    +----------+ \
                      \                 \     (2) stderr

This is called piping, and | is called a pipe (to the French-speaking audience members, please try to maintain your composure). Together, commands piped into each other form a pipeline. These commands all start at the same time, and the stdout of commands on the left of a | is connected to the stdin of commands on the right, appearing as it becomes available7.

Man pages

The best thing I can teach you is how to learn 😎.

Almost every command worth using in a shell script has a man page, which can be accessed, from the interactive shell, with man [program_name]. Man pages are rendered in the less program, which allows searching by typing /search-word<Enter>. If you want to know how the -p flag works for mkdir, you can do man mkdir, followed by /-p<Enter>.

The only way to learn how an unfamiliar program behaves is to inspect its man page or other documentation. If you’re writing a shell script, you should consult the man-page for almost every unfamiliar command you write.

The key things to work out about a program are:

  • What flags does it take? What do they do?
  • How does it get its configuration?
    • Just flags? A config file? The environment?
  • How does it get its input?
    • Likely answers: stdin or a file
    • Many programs have special arguments to take input
  • How does it produce output? What format does that take?
    • Likely answers: stdout or a file

It’s worth reading the man pages even for simple programs you think you understand – some will surprise you. For example, shuf, a program to shuffle its input, can also work as a dice roller for your D&D campaign:

shuf -rn 3 -i1-20
# Output: three random rolls between 1 and 20, for example
# 17
# 3
# 11

man bash contains pretty much everything you could possibly want to know about Bash, (though maybe not in a format you would want to read). You can often search it for a particular syntactic construct and find what you’re looking for. Let’s say you want to understand this code:

cat <<HERE
everything up to the closing word
is fed to cat's stdin
HERE

but you don’t know what to search for. In this case, you can open up man bash, forward-search (/) for << with /<<, and hit enter to find the section on Here Documents. It also describes all of the Bash builtin commands, such as read – to read about read, go to the end of the man page by pressing Shift + G, then reverse-search (?) with a regex to find the entry: “? +read ”. If you don’t use the regex, then you’ll get every occurrence of the word “read” in the man page, which is a lot of read to read.

Some programs have multiple sections, denoted programname(n), but confusingly the section goes before the program name, e.g. man n program_name. Different section numbers are dedicated to different kinds of documentation; for example, man 1 exec will tell you about the exec command, while man 2 exec will tell you about the Linux C function exec(). A P suffix indicates the POSIX standard documentation, which may not fully describe the version you’re actually using, but guarantees compatibility8. Be sure you’re reading the section you think you are!

Some useful programs

Finding the programs for the task at hand is down to you, but there are some generally quite useful ones for scripting. In some rough order of usefulness:

  • echo, printf, cat, read (printf is sorely underrated!)
  • mv, rm, cp, mkdir, ln, rmdir, touch (file handling)
  • find
  • grep, tr, sed, awk (progressively more advanced string manipulation)
  • head, tail, cut, paste, join, jq (slicing and dicing text)
  • basename, dirname
  • wc (count lines or characters of input)
  • sort, uniq, shuf
  • xargs, parallel
  • pv, tee (piping utilities)
  • seq, [, :, true, false, yes (scripting utilities)
  • stat, realpath, readlink, du, df (file and file-system information)
  • sleep, ps, pgrep, wait, jobs, fuser, kill (processes and multi-processing)
  • netcat, curl, scp, rsync, wget (networking)
  • diff, comm (file comparison)
  • strings, xxd, od (binary data)
  • dc, bc (calculators)

You may have noticed that useful programs tend to have short names – this is a premier feature of the shell. All of these barring parallel, netcat, pv, jq, bc and dc are likely to be on your system already, and some of them are shell builtins. If you’re serious about learning shell scripting, set yourself the challenge of skimming the man page for each of these commands, say one a day.

There are some other scripting languages which are useful in a shell context, but you’d have to go away and learn them. Perl, sed and awk crop up frequently in shell scripts, but you can even write inline Python if you want. If you’ll be writing a lot of shell I highly recommend the sed and awk book.

There are also some programs you shouldn’t use when writing scripts. Generally this is because their behaviour varies or is subject to change in future, or perhaps that they’re unlikely to even be available on other systems. For example, ls, apt, and a lot of git commands: ls output varies wildly depending on which system you use – a superior alternative is find – while apt and “porcelain” git commands are intended solely for interactive usage.


Variables and Other Substitutions

Variables are the bread and butter of programming. Most programming guides start with variables – nice, easy to understand, and one of the most important building blocks of any code you may write.

Variables in the shell are not as important and not as easy to understand. I leave them so late to discourage you from defining dozens of the bloody things – there are too many illegible shell scripts out there already. Many algorithms make only minimal use of variables when written with the right commands, redirections and substitutions.

One thing is straightforward, at least: every variable in shell holds a string.

Variable definition

Most guides start with some basic operations you can perform on variables. For shell variables there are no operations besides assignment and expansion, and no operators that work on them besides $. But so you don’t feel you’re missing out, here’s how you assign to one:

varname="A string"

There are no spaces around = – otherwise, the shell would think varname was a program – and that’s not the only booby-trap:

$ varname=A string
sh: string: command not found

We’ll see why this happens in The Environment. Meanwhile, make sure you wrap assignments in quotes!

Variable (Parameter) expansion

echo $varname
echo ${varname}
echo "$varname"
echo "${varname}"
# But NOT
echo '$varname'

We use the word “expansion” to remind ourselves what’s happening: $varname “expands into” the string value held by the variable varname. Taking varname="hello world" as an example:

Command              Expanded words
echo $varname        echo, hello, world
echo ${varname}      echo, hello, world
echo "$varname"      echo, hello world
echo "${varname}"    echo, hello world
echo '$varname'      echo, $varname

This has huge consequences for any variable storing file paths or user input. Consider that a UNIX filename can start with ' '. Now imagine I have a folder, / my folder. The following code does not do quite what I want it to do:

my_folder="/ my folder"
rm -rf $my_folder
# removes the folders "/" "./my" "./folder"

For this reason, whenever working with paths or other values which may have spaces, wrap every variable substitution in double quotes.
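To see the splitting in action, here’s a tiny self-contained sketch (the value is arbitrary):

```shell
# An unquoted expansion is split into words; a quoted one is not.
path="my folder"
printf '[%s]\n' $path      # two arguments: [my] then [folder]
printf '[%s]\n' "$path"    # one argument:  [my folder]
```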

Even wrapping things in quotes we can still come a cropper. A common case is if we have a variable prefix for a file:

for prefix in a b c; do
    cat "$prefix_1"   # oops!
done

This will try to substitute the variable prefix_1, which doesn’t exist. If set -u is in place, this will crash our script. 😢.

To fix this, we need to add braces:

for prefix in a b c; do
    cat "${prefix}_1"
done

This is definitely a lot to keep in your head at once. If in doubt, no one will judge you if you wrap every variable expansion in "${}" and let god sort them out.

Note that the variable in the for loop is called prefix, not $prefix. $ is the expansion operator9 – you don’t include it when you assign to or create a variable, such as when naming the for loop variable or a target of the read command. However, in this guide, I’ll always include the $ in inline text to make it clear that a name refers to a variable.

Special Shell Variables

The shell has a number of important built-in variables, which you will need to use on occasion. We’ve already seen $? in Exit Codes, and $0, $1, &c. in Redirections, but we’ll go over them again here.

We can explain the $n variables by returning to one of our first examples:

        echo    "hello again" my     "old           friend"
#       ----     -----------  --      --------------------
# $n:    $0          $1       $2               $3

               # -----------xx--xxxxxx--------------------
# combined:                      $@, $*

Here we see $@, which contains all of the arguments for a program. Now realise that our shell script is a program too – it may also take arguments, which are then available from these variables.

To show how $@ works, we’ll use the printf program, which takes a format string. The format string contains some number of format specifiers, and matches any number of arguments. For example:

$ printf '%s!\n' war huh "good god, yall"
war!
huh!
good god, yall!

Now let’s see how it works with $@. In the interactive shell, we can simulate arguments with set:

$ set war huh "good god, yall"
$ printf '%s!\n' $@
war!
huh!
good!
god,!
yall!

Well, that’s no good. We really want to split up the arguments, but avoid splitting within arguments. To do this, we use "$@". If this makes no sense to you, good: you’ve been paying attention. "$@" is a special exception to the quoting rules for just this purpose, and it’s almost always what we want when we use $@:

$ printf '%s!\n' "$@"
war!
huh!
good god, yall!

It’s worth remembering that $* also expands to all the arguments, but it has idiosyncratic expansion rules whose uses are niche, so I won’t explain it here. "$@" is the droid you are looking for.

If your script takes arguments, you often need to know how many arguments you’ve received. $# will tell you. There is a builtin, shift, which “shifts” the arguments down one: $2 becomes $1, and $1 is thrown away. This is useful for processing a list of arguments in a while loop, e.g. for consuming flags:

while [ $# -gt 0 ]; do
    # process $1 ...
    shift
done
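As a concrete sketch of that pattern, here’s a minimal flag consumer using case – the -v flag and the files variable are made up for illustration:

```shell
# Consume the argument list one word at a time with shift.
verbose=0
files=""
while [ $# -gt 0 ]; do
    case "$1" in
        -v) verbose=1 ;;           # a flag we recognise
        *)  files="$files $1" ;;   # collect everything else
    esac
    shift
done
```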

Another sometimes-useful variable is $? (gasp!). As a reminder, it gives the exit code of the last command. Usually, you can just use the command directly, which is better:

grep --silent "word" my_file
if [ $? -ne 0 ]; then
    echo "hi"
fi

# is the same as
if ! grep --silent "word" my_file; then
    echo "hi"
fi

# (is the same as)
grep --silent "word" my_file || echo "hi"

If you need to run a command, but refer to its error-code later on, store the error code to a variable immediately:

grep --silent "word" my_file
grep_status=$?

The final shell variable I think you need to know about is $IFS, which stands for internal field separator. $IFS should be treated with caution, and a full description of its behaviours merits a full section to itself. Essentially, it controls how the shell splits words, including the behaviour of the read builtin and $*. Its default value is space, tab, newline10, but this value is treated specially. An idiom for using $IFS is:

old_ifs=$IFS
IFS=':'
# Do something with IFS as ':'
IFS=$old_ifs

If you forget to reset $IFS, weird shit will happen and it won’t be my fault.

Bash Arrays

To be honest, I don’t often use Bash arrays, and I’m not going to talk about them much. Part of the reason is that they’re non-portable, but the real problem is that the syntax is absolutely appalling, and the language support isn’t much better, making them error-prone and surprisingly difficult to use.

Usually, if your program is complicated enough to need many arrays, you’re better off switching to Python. If you’re convinced you need the benefits the shell brings, then seriously consider Ruby or (shudder) Perl.

With that said, an array looks like this:

$ my_array=(war huh "good god, yall")

Using {} when accessing arrays is non-optional:

$ echo ${my_array[0]}
war
$ echo ${my_array[-2]}
huh
$ echo ${#my_array[@]}
3

The special indices @ and * work exactly as $@ and $* for arguments:

$ printf '%s!\n' "${my_array[@]}"
war!
huh!
good god, yall!

You can append an array to another array:

$ # Note: no spaces around +=!
$ my_array+=(what is it good for)
$ printf '%s\n' "${my_array[@]}"
war
huh
good god, yall
what
is
it
good
for

If you hurt yourself, don’t come crying to me.

Other Substitutions

We’ve talked about how command strings are broken up and executed. We’re about to see a bunch of things that aren’t strings that also appear in commands: “expansions” and “substitutions”. They’re called this because they “expand” into strings and are “substituted” with strings. You can think of every “command”, with its expansions, special symbols and what-have-you, as being boiled down into a “command string” which is then split into words and executed.

Variable expansion is but one form of replacement the shell performs. Three other well-known substitutions are “tilde expansion”, “pathname expansion” and “command substitution”.

Remember, though, that things like redirections and pipes are not strings and so you cannot create them by substitution.

Tilde Expansion

~ expands to your home directory, unless it’s followed by a username (e.g. ~jamie), in which case it expands to their home directory. This is safer than doing /home/jamie, as home directories aren’t guaranteed to be in /home – for example, the root user’s home directory is usually /root11.

There’s a gotcha here, though. cd ~/.config will expand correctly, but some_command --path=~/.config won’t, because ~ only expands at the front of a string. To safeguard against this, it’s better to use the $HOME variable, which we’ll see later in The Environment.
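A quick demonstration of the gotcha (the --path flag is made up; printf just shows us the words the command would receive):

```shell
# ~ only expands at the front of a word, so the first line keeps a literal ~.
printf '%s\n' --path=~/.config        # prints: --path=~/.config
printf '%s\n' --path="$HOME"/.config  # prints e.g.: --path=/home/jamie/.config
```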

Pathname Expansion and Globbing

Pathname expansions happen on “globs”. A familiar example of a glob is path/*, which expands to every file in the folder ./path. . here means the current directory, which is why you’ll see a lone . appear in the output of the ls -a command. Its twin, .., refers to the parent directory (as in cd ..). Relative paths (not starting with /) have an implicit ./, where . starts off as the directory from which your script is run. (We’ll talk about this later in The Environment).

Globs are similar to regular expressions, but not the same. They’re much simpler. If you know regular expressions, here’s an equivalence table:

Regex    Glob    Match
.*       *       Zero or more characters
.        ?       Any one character
[a-z]    [a-z]   Any character within the [] once. Ranges work as shown.

Globs expand to match files – the presence of a glob in a word tells the shell “try to match this with files on the file-system”.

Depending on shell, glob patterns may be treated as strings if the glob fails (this is the default setting in Bash). This makes globbing in scripts dangerous – consider this apparently sensible for loop:

for file in folder/file_prefix*; do
    echo "file: $file"
done

In the case that folder doesn’t exist, or contains no files starting with file_prefix, the output would be the glob-string itself12:

file: folder/file_prefix*

In these cases it’s better to use the find utility, which can interpret globs and regex internally, but produces no output without a match.
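For instance, here’s a small self-contained sketch using a throwaway directory – note that the failed match produces no output at all, rather than a glob-string:

```shell
# find prints matches one per line, and nothing at all when there's no match.
dir=$(mktemp -d)
touch "$dir/file_prefix_one" "$dir/file_prefix_two"
find "$dir" -type f -name 'file_prefix*'   # prints the two matching paths
find "$dir" -type f -name 'other*'         # prints nothing
rm -r "$dir"
```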

It’s also dangerous to use glob characters unescaped when you expect no matches:

$ echo continue?

This could print continue? as intended, but it could also print nothing in some shells, or continue0 continue1 continue2 if those happen to be files in the current directory. However, glob characters don’t expand in quotes, so this is fine:

$ echo "hello?"

In that case, how do you combine globbing with variable expansion, without word-splitting the variable? This works fine:

$ ls "$my_folder"/*

TL;DR: avoid globs in scripts.

Command Substitution

You can use the output of a command in another command by wrapping that command in $(command). command can include redirections, pipes, variable expansions, and even other command substitutions13. All that good stuff.

[ "$(wc -l <file)" -gt 100 ] && echo "Large file!"

The most common use-case is to store the output of a command in a variable:

start_time=$(date +%H:%M:%S)

Note that you don’t need quotes when assigning to a variable – beware that this isn’t the case when it appears in a command:

$ printf "%s!\n" $(yes | head -3)
y!
y!
y!

As usual with shell, it won’t hurt to use quotes in both cases – better safe than sorry!

You can use "" to wrap a command substitution that also contains "":

$ printf "%s!!!!!\n" "$(yes "Hello World" | head -3 | tr '\n' ' ')"
Hello World Hello World Hello World!!!!!

However, strategic use of word-splitting can achieve things that would be annoying to achieve another way:

printf 'file_one\nfile_two\nfile_three\n' > file_list
grep "keywords" $(<file_list)

You can store a whole file in a variable:

contents=$(cat my_file)

However, there are some caveats: variables cannot store the 0 byte, and trailing newlines are stripped, making them totally unsuitable for working with binary data14.

Expansion Order

When the shell processes a line, it makes replacements in the following order:

  1. ~ expansion, variable expansion and command substitution from left to right
  2. Word-splitting
  3. Pathname expansion
  4. Redirections
  5. Execution

Given that, and given that $var holds the letter m, can you work out what this expands to?

$(echo -n "ls ")/?$(echo -n o)$var*

First, ls (with a trailing space) is substituted for $(echo -n "ls "). Then o for $(echo -n o). Then m for $var. This leaves us with

ls /?om*

Next, word-splitting separates ls from /?om*, a glob which will hopefully expand to /home. Finally, with all expansions completed, the command executes, and the ls command gets the argument /home.

Looping Constructs

Whenever you think you need a loop, stop and think: do you really need a loop? Many common programs, such as grep, wc and rm, will happily take the same list of files that for will, and many text-processing tools like grep, cut and paste will work just as happily on a full file as on a single line extracted by while. Combined with tools like find and xargs, there are many things that can be done more efficiently and economically without loop constructs.

While Loops
The shell has while loops and they behave much as you would expect:

printf 'W\nh\n'
while true; do
    echo e
done

# Shorthand:
while :; do
    echo e
done

(Control + C to stop this).

: is identical to true, which is just a program that exits 0.

A common idiom for reading input is:

while read -r line; do
    echo "$line"
done

This will read a line from stdin, storing the result in $line, until it encounters an EOF. The -r prevents read from interpreting any \ characters specially, and you should always include it.
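To see what -r is protecting you from, here’s a sketch (the input string is arbitrary; printf is used for output because echo’s handling of backslashes varies between shells):

```shell
# Without -r, read eats backslashes; with -r, they survive intact.
printf 'a\\tb\n' | { read -r line; printf '%s\n' "$line"; }   # prints: a\tb
printf 'a\\tb\n' | { read line; printf '%s\n' "$line"; }      # prints: atb
```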

read takes any number of arguments, and will try to fill them from the result of splitting the input (to show this requires a here document15, which I won’t bother to explain):

$ read first second <<HERE
a b c
HERE
$ echo $first
a
$ echo $second
b c

What’s more, you can redirect into while:

while read -r first_name surname; do
    echo "$first_name" "$surname"
done < karamazovs.txt   # a file of names, one per line

# Output:
# Alyosha Karamazov
# Dmitri Karamazov
# Ivan Karamazov

You can even pipe into while, but beware that if you do this, you cannot modify variables that appear outside the while loop16.

Exercise for the reader: use $IFS with while read -r to parse simple CSV data or the $PATH variable. Under what circumstances does your CSV reader fail?

For Loops
For loops are deceptively simple in shell. They just loop over every split word in their argument:

for word in word1 word2 word3; do
    echo "$word"
done

This is the case whether you use variable expansions, pathname expansions like /usr/lib/*, or command substitutions like $(pgrep sh). As a result, most errors in for loops are simply the standard word-splitting gotchas: unmatched globs, empty expansions, and words splitting unexpectedly within a single value.

Writing and Running a Script

With all this in hand, a mere 7000 words later, you’re ready to write a real shell script. I hope you’re excited!

You go away and write your masterpiece. It’s got redirections, command substitutions, pipes left right and center. Satisfied, with a final deft patter of keys, you save it to the file, opus.sh.

Now, how do you run it?

Your first option is to explicitly run the sh (or bash) command:

$ sh opus.sh

This works, but what if someone else needs to run it? There’s no way for them to know whether it should be sh opus.sh or bash opus.sh or zsh opus.sh. If you don’t give it a .sh extension, the mystery deepens further – it could be python opus or pretty much anything.

To associate a script with its interpreter, UNIX-like operating systems will read a specially formatted line at the top of a file affectionately known as a shebang:

#!/bin/sh
The name comes from “hash” (#) “bang” (!). You can designate literally any program as a shebang; when you run your script, the operating system will read the first line, extract the shebang, and execute that program instead, passing the path to your script as an argument. It’s usually only useful to use language interpreters17.

If you use the #!/bin/sh shebang, you’re indicating that your script is POSIX-compliant, and can be run by any POSIX shell. If you use #!/bin/bash, you’re indicating that it must be run with bash or it may behave unexpectedly. We’ll talk more about this later in Compatibility.

With a shebang at the top of your script, you need to mark the file as executable:

$ chmod a+x opus.sh

and finally, you can run it like so:

$ ./opus.sh

The ./ is a security feature18.

You may be tempted to run your script another way, sourcing, which would look like this:

$ source opus.sh
$ . opus.sh      # both are equivalent

However, this is a generally a bad idea. When you source a script, every line is run by the current (interactive) shell. For one thing, this ignores the shebang, which can lead to trouble, especially if you use an unusual shell like zsh or fish. It also means any variables the script modifies will be modified for your current interactive session, which in turn means that running the same script twice could break things. If you continue to do this anyway, I will find you and I will put you in the shell jail.

The Working Directory

When you run a command like rm file, you expect rm to delete the file in your current working directory – not just any old file it finds in the filesystem. Luckily, rm looks for paths you give it relative to the directory you run it in. To support this, every program on your system has a “current working directory”, to which every relative path is… well, relative.

When you write a script, it may depend on other files you provide along with it, like a settings file or a database. In this case, it’s very important to remember that relative paths in your script are interpreted from the directory in which your user runs the script, not the directory in which the script resides. To find the script’s own directory reliably, you must construct it by examining $0 and the value of $PWD.
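One common sketch of that construction (not bulletproof – it ignores symlinked scripts, and settings.conf is a hypothetical bundled file):

```shell
# Build an absolute path to the script's own directory from $0 and $PWD.
case "$0" in
    /*) script_dir=$(dirname "$0") ;;        # $0 is already absolute
    *)  script_dir=$PWD/$(dirname "$0") ;;   # $0 is relative to the caller's directory
esac
settings="$script_dir/settings.conf"         # hypothetical file shipped with the script
```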

Functions
Writing a function is like making your own tiny little program right here in the shell, with all the input and output maguffins we talked about earlier. It has a standard output, which is the result of all of its commands’ standard outputs:

my_func() {
    echo "$1" "$3" "$1" "$2"
}

my_func a b c
# Output:
# a c a b

(Note that $0 is not changed inside a function – in sh and Bash it still refers to the script or shell, not the function.)

Its exit-code is the exit-code of its last command:

truer() {
    false; false; true
}

truer && echo "hi"   # Output: hi

It has stdin, but there’s a gotcha here: each command that reads from stdin will consume what it reads. This means that subsequent commands may not see what you expect them to:

readreadread() {
    read -r a
    read -r b
    read -r c
    echo "$a, $b, $c"
}

readreadread <<EOF
apple
banan
pear
EOF

# Output:
# apple, banan, pear

If readreadread started with cat, cat would consume all of the input, leaving the read commands nothing. There are some poorly behaved programs which read stdin whenever it’s available, which can totally ruin your day if you’re not expecting it – I’m looking at you, ffmpeg.

In Bash, functions can also be defined with the function keyword19, but only in Bash, so it’s best not to:

function my_function() {  # this will break if the shell isn't bash
    echo "hi"
}

Scope
Variable scope in Bash is dynamic, as compared to most languages, which use some form of lexical scope. In English, this means that variables are global, and a function can refer to a variable which hasn’t been declared yet, so long as it has been set by some other part of the program by the time the function is called.

our_function() {
    echo "$global"  # refer to global before it is defined
}

global=hello  # define global
our_function  # run our_function

This is generally kind of a nightmare, and there’s a reason most languages use lexical scope instead. I recommend against taking advantage of it, unless it’s to win a round of code golf.

You can define function-local variables with the local keyword, and you should do so wherever you can.
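A sketch of local in action (local is not strictly POSIX, but every common shell supports it; the function is made up):

```shell
greet() {
    local name         # confined to this function
    name=$1
    echo "hello, $name"
}

greet world            # prints: hello, world
echo "${name:-unset}"  # prints: unset - greet's variable didn't leak out
```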

The Environment

Every process in Linux has an environment, which is nothing but a set of environment variables. In a shell, you can see your current environment by running the env program with no arguments. This is a combination of the program’s initial environment with any environment variables set since startup. To see the initial environment of your interactive shell, run the slightly cryptic:

$ tr '\0' '\n' </proc/$$/environ

This zero-delimited list is stored in read-only memory by the operating system for every process, and does not update to reflect the current environment.

In the shell, there is no syntactic distinction between environment variables and just variables – they work the same. To put a shell variable in the shell’s environment, you use the export command:

local_var=foo        # create a local variable
export local_var     # add local_var to the environment
export ENV_VAR=bar   # ENV_VAR starts out as an environment variable

You only have to do this once per variable: once exported, it stays exported, and later assignments to it are visible to child processes too.
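A sketch of the inheritance rules (assuming msg isn’t already in your environment):

```shell
msg=hello
sh -c 'echo "${msg:-not set}"'   # prints: not set - plain variables aren't inherited
export msg
msg=goodbye                      # no need to re-export
sh -c 'echo "${msg:-not set}"'   # prints: goodbye
```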

Conventionally, environment variables are ALL CAPS, which has led to the misguided practice of naming every variable in capitals.

So how do we use environment variables as input? Firstly, our current environment is automatically inherited by any child processes we spawn – this is how the env program shows us our environment in the first place. We can add or override environment variables seen by a single command by prepending it with variable assignments:

$ temp_var=abc PATH="/usr/bin" env

We can also remove a variable in a few ways:

CXX_FLAGS= make             # CXX_FLAGS is null but set
env --unset CXX_FLAGS make  # CXX_FLAGS is unset

Finally, we can stop automatic inheritance of the environment completely:

env - make
env --ignore-environment make # equivalent

You can do this for security, and re-add environment variables the process will need.

Useful Environment Variables

There are several environment variables that are guaranteed to be present in the shell, or are typically set in the terminal environment. Here are a few useful ones.

$PATH contains a set of file-system paths separated by colons (:). The shell knows how to interpret this variable – whenever you type a program name without a path, it will search each path in the $PATH in order for an executable with that name.
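Two handy one-liners for poking at the search order (command -v is the POSIX way to ask which executable wins):

```shell
echo "$PATH" | tr ':' '\n'   # one search directory per line
command -v sh                # the path the shell would actually run, e.g. /bin/sh
```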

$USER gives you the name (not the UID) of the current user.

$HOME expands to the current user’s home directory, e.g. /home/jamie.

$PWD contains the current working directory – it’s equivalent to $(pwd).

$SHELL contains the path to the user’s login shell (not the current shell!) for example /bin/zsh.

$COLUMNS and $LINES give the width and height of the current terminal in characters, which can be useful for formatting output.

In Bash, $UID contains the user ID for the current user. The POSIX-compliant way to get this value is to use the id program:

UID=$(id -u $USER)
GID=$(id -g $USER)

Compatibility
I’ve talked about compatibility throughout this guide, highlighting the things most likely to trip you up. Here, we’ll recap that information in one place, with some additional things to keep in mind.

Remember when I said most tasks in shell are achieved via external programs? Well, I only harped on about it for ten paragraphs, so you can be forgiven for not paying attention. What if you use a program that exists on your system, but not on someone else’s? What if you use a program that exists on both systems, but with differing behaviour? This is the crux of portability issues in shell.

The normative resource for portable shell scripting is the POSIX shell standard. There’s a list of POSIX utilities and builtins (some of which are optional), and so long as you stick within the standard, it is reasonable to assume that your script will be portable across many systems and shells. If you target a specific shell you can use its builtins as well. Other than that, it’s your duty to correctly identify which commands your script depends on, and what additional software your users may need to install.

When talking about compatibility, it’s important to understand the difference between a program and a builtin, which we’ve conflated so far. As we discussed, /usr/bin/echo is most likely a program on your computer20. However, when you call echo, it is most likely the shell builtin echo which is invoked instead. There are a few reasons for this – one, it is more efficient for the shell to avoid spawning a new program, which it can do by executing the expected behaviour itself; two, it can improve portability for a script targeting a particular shell, say Bash, knowing that all Bash interpreters do the same thing when you say echo; three, it allows the shell to provide additional functionality. This third point is where the problem arises, as it makes it more difficult to be sure that your script is portable between shells.

If you write a script with the shebang #!/bin/sh, it will typically be run by either ash, dash, or Bash in POSIX compatibility mode, depending on the system. The problem is, bash in POSIX compatibility mode… ain’t all that POSIX compatible. It still allows a number of non-POSIX syntactic constructs, and many of its builtins support non-POSIX behaviours. This means you could write your #!/bin/sh script on your Bash-based system, blissfully unaware that you had depended on Bash specific behaviour (Bashisms). To avoid this you can use shellcheck, which is very good at picking up on this kind of error, but the best thing is to know ahead of time which things are Bash specific and which things aren’t.

On the other hand, some Bashisms are really useful. If you decide you want a Bash feature, that’s fine; Bash is very widely available. But you must make sure to set your shebang to /bin/bash appropriately.

You may hear people saying that #!/bin/sh isn’t portable, and you should use #!/usr/bin/env sh instead. They’re half-right, and this advice is unreservedly correct for languages like Perl, Python, &c. For sh, while POSIX doesn’t guarantee that the shell lives at /bin/sh, systems that do not support #!/bin/sh for practical purposes do not exist. If a system does move sh somewhere else, chances are good env will be somewhere unexpected as well. In fact, there are systems today where env lives in /bin. The official POSIX guidance is that it is up to the system installing a script to amend any shebangs to account for unusual paths (see man 1P sh21).

Bashisms
Arrays are bashisms, as is the double bracket construct [[.

Bash also offers the <(command) substitution and <<<word redirection. These fill gaps in the cartesian product of possible IO redirections. commandA <(commandB) will execute commandB, storing the result in a temporary file22, and pass the path to that file to commandA. It’s equivalent to the following POSIX commands:

$ fifo=$(mktemp -u)
$ mkfifo "$fifo"
$ commandB > "$fifo" &
$ commandA "$fifo"
$ rm "$fifo"

I bet you can’t wait to write that every time.

<<<word is not quite such a space-saver, but is much more widely applicable. command <<<word is equivalent to the following:

$ echo word | command

That means that in Bash, there is never any reason to do echo string |.

The final bashism I’ll mention is named file descriptors. This requires you understand Global Redirections from the next section, so read that first. Read it? Great.

exec {myname}>some_file
echo hi >&${myname}

This can make your script a bit more legible if you’re redirecting to many file descriptors. Wild.

Extra Credit

Once you’ve internalised all of this, you can hopefully write an entire shell script without immediately collapsing on the floor, pantsing yourself in front of your childhood sweetheart, and begging for death. Congratulations. By no means can you now write shell without suffering – but here are a few things that might help ease the pain.

Useless use of cat

If you need to work with a file, it’s tempting to use cat to get the stream going. To take columns 2-4 of a simple CSV, sorted numerically by the first, we can do:

cat data.csv | sort -t, -k1n | cut -d, -f2-4

But in this case, quick consultation of man sort informs us:

       sort [OPTION]... [FILE]...
       sort [OPTION]... --files0-from=F

So we can do better by writing:

sort -t, -k1n data.csv | cut -d, -f2-4

In some cases, commands really do only take input via stdin. One example is, um… 🤔

tr! Anyway, even with tr, we could write the pipeline as:

<data.csv tr ',' '!' | sort -t'!' -k1n | cut -d'!' -f2-4

There is never any reason to write cat file |.

Arithmetic Expansions

Remember when I said there was no way to add two numbers in the shell? And you believed that? You fool. You absolute buffoon. Watch this:

$ a=1 b=2
$ echo $(( (a + b) / 3 ))
1

This $((expr)) construct is called arithmetic expansion. A lot of C-like arithmetic operations are available. You can even assign to variables, but remember that the final result will be expanded. The full set of permissible expressions differs a lot from shell to shell.
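A small sketch of assignment plus arithmetic, sticking to operators that work in both Bash and POSIX sh:

```shell
i=0
i=$((i + 1))       # arithmetic on the right, ordinary assignment on the left
echo $((i * 10))   # prints: 10
echo $((7 % 3))    # prints: 1
```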

Default Expansions

At any point you make a variable substitution, you can provide a default value for the case that it’s empty or not set:

$ assigned=abc
$ echo ${assigned:-My Default!}
abc
$ echo ${unassigned:-My Default!}
My Default!

What if you want to use myvar many times after this, with the same default? In that case, you can do:

$ echo ${myvar:=Assigned Default Value!}
Assigned Default Value!
$ echo $myvar
Assigned Default Value!

You may sometimes want to read an environment variable, but have a default if it isn’t set. To do this, a slightly strange incantation is needed:

: ${CC:=clang}

This looks like some weird new syntax, but recall that : is in fact a program, identical to true.

A shorthand for requiring a variable (e.g. a positional argument) is:

$ target_folder=${1:?Error: must provide target folder as first argument!}

There’s some subtlety to using := vs = forms, but you almost always want the : versions. I often struggle to remember the exact rules, so I either consult this table or man bash. There’s one last expansion in this group, ${var:+string}: try to work out what it does yourself.

Slicing Expansions

We often work with a variable containing a filename, such as a for loop over *.png *.jpg. What if we know that for every image we find, there is an associated .txt file? For this purpose, we can use the special ${var%glob} pattern:

$ x=selfie.png
$ echo ${x%.*}.txt
selfie.txt

How about we want to take a path, relative or absolute and find the root folder? For example, for a/b/c/d, we want a. We could use repeated application of dirname, but who has time for that? In this case, we use %%. % will take the shortest match, while %% will take the longest, because %% is longer than %:

$ path=a/b/c/d
$ echo "${path%%/*}"
a

There are equivalent options which take from the front of a string, # and ##:

$ file="file_with.some.ridiculous.ext"
$ ext=${file#*.}
$ echo $ext
some.ridiculous.ext
$ echo ${file##*.}
ext

I always had to look them up to get them the right way round, but now I remember that in other contexts # starts a comment, so it goes in front of things, while a percentage sign comes after a number.

Global Redirections

We’ve seen that you can redirect the stderr of a command to a file. This can be useful for generating log files for your scripts – on the other hand, it seems a bit cluttered to add 2>$logfile to every command. We could wrap our whole script in a function and then redirect its output, but somehow that doesn’t feel right either. For that reason, shell allows us to set global redirections using the exec command:

$ exec >$outfile
$ echo "hi"        # outfile contains hi\n
$ echo "bye"       # outfile contains hi\nbye\n
$ exec >$outfile   # outfile truncated
$ echo "hi"        # outfile contains hi\n
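If you want to switch the redirection off again later, one idiom is to stash the original stdout on a spare file descriptor first (a sketch; the log file here is a stand-in created with mktemp so it’s self-contained):

```shell
logfile=$(mktemp)    # stand-in for your real log file
exec 3>&1            # save the current stdout as fd 3
exec >"$logfile"     # from here on, stdout goes to the log
echo "logged"
exec 1>&3 3>&-       # restore stdout and close fd 3
echo "back on the terminal"
```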

Style
  • Variables you want to be set from the environment can be CAPITAL_SNAKE, the rest should be snake_case.
  • On the command line, prefer single-character flags. In scripts, prefer longnames.
  • Maintain indentation, 4 spaces per indent level.
  • Prefer && and || to if statements for one- or two-line expressions.
  • Use functions to wrap unpleasant single-line commands in human-readable names (I’m looking at you, sed).
  • Prefer < to cat, and | to storing intermediary results in variables.
  • Use [[ over [.
  • Avoid arrays if possible.


Thanks to Linas Kondrackis for thoughtful questions, comments and corrections.

  1. There’s nothing wrong with non-POSIX compliant shells like fish, of course – using them as your interactive shell can yield many benefits. However, if you script in them, you may struggle to distribute your scripts to other machines.↩︎

  2. This is the first and most important lie. Please believe it.↩︎

  3. In fact, while echo probably exists as a program on your system, bash and most shells provide an echo builtin for efficiency and portability.↩︎

  4. The & is a slightly advanced topic, which I may cover in a later post.↩︎

  5. that’s French for, like, “really good,” or something↩︎

  6. In fact, there are only three distinct mechanisms for input – stdin and other streams are essentially files, which are created and opened automatically by the shell for all commands.↩︎

  7. Linas asks: Can you emulate | with > and <? The answer is yes, by making use of named pipes:

    mkfifo pipe
    cmd1 >pipe & cmd2 <pipe
    rm pipe

    is equivalent to

    cmd1 | cmd2

    However, that’s cheating, as a named pipe is still a pipe. If you were to try to do this with a normal file, then cmd2 could reach the end of the file before cmd1 had finished writing to it. One of the main benefits of a pipe is that each process knows if the other has closed its end of the pipe and can act accordingly.↩︎

  8. Shell executables are page 1, the Linux C API is page 2, while the POSIX sections are accessible at 1P, 2P, &c. See the man Wikipedia page.↩︎

  9. Superficially similar to the pointer dereferencing operator * in C and other languages.↩︎

  10. In quite recent versions of Bash, you can see the escaped value of IFS (or any variable) by typing echo -E ${IFS@Q}.↩︎

  11. In fact, users are not guaranteed to have a home directory at all. If this footnote had a name, it would be “E.T. NO HOME”.↩︎

  12. In Bash, you can do shopt -s nullglob at the top of your script to make empty globs expand to the empty string.↩︎

  13. The older syntax `command` also works, but is discouraged because it doesn’t nest well.↩︎

  14. If you really want to then you can use zsh, which has neither of these shortcomings.↩︎

  15. A here document: everything between <<MARKER and a line consisting only of MARKER is fed to the command’s standard input.↩︎

  16. This is because a pipe creates a fork, so you can no longer modify variables in the current shell.↩︎

  17. Try having some fun with shebangs. What programs work? What programs don’t?↩︎

  18. If you could run your script with just opus, then you could also run a script that was just called cd. Then, to well and truly pwnz you, all I would have to do is convince you to download an evil script called cd, and wait for you to cd into and then out of your Downloads directory. To prevent this, we can only execute scripts that are not found in our $PATH variable by giving an explicit path to them – in this case, the current directory, ..↩︎

  19. The motivation for this was to avoid an unfortunate clash with alias. If you do alias a=b; a() {...}, the alias is invoked and you actually end up with b() {...}. function a() disables aliasing for a.↩︎

  20. Or at least $(which echo) will be.↩︎

  21. You may need to install the POSIX man-pages from your system packages.↩︎

  22. Technically, the result of <(command) is stored in a named pipe.↩︎