In the fight against Unix, one of your first opponents is the Bourne shell,
/bin/sh. It is your best and only hope for writing portable code
for Unix systems. When you write anything for Unix, you write shell scripts,
and the most portable shell you can use is the Bourne shell. (The scripts may
be embedded in makefiles, but they're there.)
For additional, less thoughtful ranting, read this.
The Bourne shell has some serious problems as an interactive shell, but these don't interest me at the moment. Anyway, for interactive use, we now have many excellent alternatives, including Bash, Tcsh, and, yes, that's right, Emacs!
The sad thing is that not only is /bin/sh useless as an
interactive shell, but the Bourne shell's programming language is really nasty
too. So it's really not good for anything. But we have to use it anyway.
The Bourne shell has pros and cons, just like any other tool. On the plus
side, it's small, it comes with essentially every Unix system, it's almost
always installed as /bin/sh, and it's reflective via eval,
so it's easy to extend. These features give you a good deal of assurance that
you can write a simple Bourne shell script, top it off with
#!/bin/sh, and expect it to work almost everywhere. Of course,
you have to be careful about the dialect you use and the programs you run from
the script. But it's possible to write portable Bourne shell scripts that do
real work.
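A minimal sketch of such a script, with an invented greet function, might look like this:

```shell
#!/bin/sh
# A deliberately boring, portable script: no arrays, no local
# variables, no bashisms -- just a function, a default, and echo.
greet() {
    # Default to "world" if no name was given.
    name=${1:-world}
    echo "hello, $name"
}
greet "$@"
```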
Unfortunately, the Bourne shell typifies Unix in that it is evil. It has dynamic scope and numbered (rather than named) parameters. The semantics are based on string substitution, which is gross. There are no data structures, so don't even try to do anything interesting and expect to be able to keep your program clear. The Bourne shell doesn't even have numbers!
No one blames this Bourne guy, certainly not me. The Bourne shell was designed long ago. Who knew how important scripting would become or how wonderful it could be with the right language?
Back to the positive side, the Bourne shell's competition has some problems, even when you include all those scripting languages that aren't ubiquitous.
The C shell, /bin/csh, was supposed to be useful for programming, as the name suggests. Unfortunately, it sucks.
Scsh comes closer, but it has its own problems, among them its configure script and the resource requirements of the runtime system. More importantly, Scsh is not currently portable to 64-bit Unices, because the underlying Scheme interpreter, Scheme 48, happens to have some 32-bit things built in at the VM level. These problems are being fixed, though, as I write this.
So, I tend to write Unix code with the Bourne shell.
Something I've thought about is how to make Bourne shell scripts that can be used as both commands and subroutine libraries. That is, writing scripts that can be run like any other Unix program but can also be loaded directly into a running Bourne shell interpreter without running anything. It doesn't work.
Making a shell script executable as a command means that it runs in its own
process: you pay for a fork and an exec, but the script cannot disturb the
caller's variables or positional parameters.
On the other hand, making a shell script directly loadable via the Bourne
shell's `.' primitive means that it runs in the calling shell itself: there is
no new process, and it can define functions and variables for the caller to
use, but it can also stomp on the caller's state.
Making a shell script executable and loadable means that you can choose the appropriate tradeoff for your application. Unfortunately, it turns out to be more than a little tricky to write a single shell script that works well as a command and as a function library.
The problem is that a client's positional parameters ("$1", "$2", and so on)
are handed directly to the code read by `
.'. When you are
reading a script for the purpose of defining a bunch of functions, you usually
don't want it to look at the positional parameters. If it did something with
them, you would have to rearrange them before calling the script. You just
want it to define some functions that you will call.
On the other hand, if you run a script as a command, you obviously want it
to look at its parameters. The script, however, has no simple way to tell if
it's running as a command or being loaded by `.'.
More importantly, the behaviour of `
.' is rather odd. If you
load a script without passing any arguments, like `
. foo.sh', it
sees your positional parameters. If you pass it some arguments, like
`. foo.sh bar', it sees those arguments as its positional
parameters. (But this behavior isn't even portable! See below.)
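The portable half of that behavior is easy to demonstrate. This sketch (throwaway file name invented) shows a dotted script inheriting the caller's positional parameters:

```shell
#!/bin/sh
# A script read with `.' sees the caller's positional parameters.
# (We deliberately avoid the non-portable `. file args' form.)
lib=/tmp/dot_demo.$$
cat > "$lib" <<'EOF'
echo "library sees: $@"
EOF

set -- one two
seen=`. "$lib"`
rm -f "$lib"
echo "$seen"
```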
As long as the script doesn't change the positional parameters, they will
be reset after it is read. Thus, if
foo.sh doesn't modify its
positional parameters, the expression `
set bar; . foo.sh baz; echo
"$@"' will print
bar. (Now I know
that this behavior, in shells that provide it, is a compatibility hack for
shells that don't have dot-parameters. Arg!)
However, regardless of whether you pass a script parameters or
not, a script that you load can modify your positional
parameters using the Bourne shell's
set primitive. Thus, reading
a script with `
.' is not quite like a function call.
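A toy demonstration of that clobbering, with an invented file name:

```shell
#!/bin/sh
# A script loaded with `.' runs in the calling shell, so its `set'
# replaces the caller's positional parameters.
lib=/tmp/clobber_demo.$$
cat > "$lib" <<'EOF'
set stomped
EOF

set -- precious arguments
. "$lib"
rm -f "$lib"
echo "caller now has: $@"
```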
I've considered some possible protocols for dealing with this problem.
"$0"to guess whether you are being run as a command or read by `
.'. This is a pretty bad solution, but it has the pleasant property that clients don't have to worry about it much until it breaks. They just say `
. foo.sh' to load you or `
foo.sh bar' to run you with an argument. You can just check if
"$0"is of the form
foo.sh|*/foo.sh, in which case you are probably being run as a command. The obvious breakage is that you may be loaded by a program with the same name as yourself, in this example
foo.sh. Also, symlinks to you will not work unless they have the same basename.
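A sketch of the heuristic as a predicate, assuming the script is named foo.sh:

```shell
#!/bin/sh
# Classify a "$0" value: success if it looks like our own name, in
# which case we were probably executed rather than loaded with `.'.
# Fragile, as described above: a loader that happens to be named
# foo.sh, or a differently-named symlink, fools it.
looks_like_command() {
    case "$1" in
        foo.sh|*/foo.sh) return 0 ;;
        *)               return 1 ;;
    esac
}

# A real script would end with something like:
#   if looks_like_command "$0"; then main "$@"; fi
```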
foo.sh could specify that, if it is passed the single option -nop, then it
will act like a subroutine library, and otherwise it will process its
arguments as usual. This requires some up-front care by clients that load your
script, but I prefer this technique. For clients that forget the -nop
argument, very weird behavior might result, but if you have very stringent
argument checking then your clients will catch their error quickly. (But this
technique isn't portable! See below.)
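A sketch of the protocol; to keep the demo portable and self-contained, the library is built in /tmp and the client arranges its own positional parameters rather than relying on non-portable dot-arguments:

```shell
#!/bin/sh
# The -nop protocol: one script that is a quiet function library when
# its first argument is -nop, and a normal command otherwise.
lib=/tmp/nop_demo.$$
cat > "$lib" <<'EOF'
hello() { echo "hello from the library"; }
if [ "$1" = -nop ]
then
    :              # library mode: define functions, do no work
else
    hello          # command mode: do the work
fi
EOF

set -- -nop        # portable stand-in for `. foo.sh -nop'
. "$lib"
rm -f "$lib"
out=`hello`        # the functions are defined, nothing else happened
echo "$out"
```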
foo.sh could ignore its arguments entirely unless the first one was -do.
Clients that load you with `.' can pretty much ignore their arguments. But
clients that exec you will be very confused if they make a mistake, because
you will just sit there and do nothing all the time. Worse, the -do option
doesn't scale well, because you may be loaded from another script that uses
the same convention, defeating the whole purpose. For these reasons, I don't
like this technique at all.
Finally, you could just use two files: the subroutine library foo.sh and the
command foo. (As you know from the parenthetical expressions above, this is
your only real option. Welcome to the future.)
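The arrangement might look like this sketch, which builds both files in /tmp just so it can run; in real life foo.sh and foo would be installed side by side:

```shell
#!/bin/sh
# Two files: foo.sh is a pure function library that never touches
# "$@"; foo is a tiny front-end command that loads it and dispatches.
cat > /tmp/foo.sh.$$ <<'EOF'
frob() { echo "frobbing $1"; }
EOF

cat > /tmp/foo.$$ <<EOF
#!/bin/sh
. /tmp/foo.sh.$$
frob "\$@"
EOF

out=`sh /tmp/foo.$$ widget`
rm -f /tmp/foo.sh.$$ /tmp/foo.$$
echo "$out"
```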
Regardless of which way you make your command scripts loadable, you have to
be careful about
exiting from functions in your script.
Remember, the last command executed in the body of a function determines its
return status, so you can use that to indicate success or failure. You don't
need to call
exit just to return a status!
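For instance, a predicate function can report its answer through its return status, no exit needed (invented example):

```shell
#!/bin/sh
# The status of the last command in the body is the function's return
# status; `return' makes it explicit.  `exit' here would kill the
# whole calling shell, which is exactly what a loadable library must
# never do.
is_decimal() {
    case "$1" in
        *[!0-9]*|"") return 1 ;;   # empty, or has a non-digit: failure
        *)           return 0 ;;   # all digits: success
    esac
}

if is_decimal 42; then echo yes; else echo no; fi
```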
Also, you can never set your positional parameters in a script intended to be
both executed and loaded. (Obviously, it's OK to set them inside a
function, however.) It's easy to live with this restriction: define a
main function and call it with the positional
parameters. Then the shell allocates a call frame for the function's
"$@", and it can play with its own copy as much as it wants.
And get this: passing parameters to a script via `.' isn't
even portable! (I'm sure it's never flagged as an error; the manual just
doesn't specify what happens. It's one of those bugs turned into a feature.) God
damn it, I give up. There's no hope. Just use two files, one for the
function library and one for the front-end command.
I think I will skip schizophrenic scripts scrupulously.
These are some things I keep in mind while writing scripts.
Use set to chop simple things up.
Beware of pwd and magic automounter symlinks.
Use >&- to close file descriptors, but only when that's what you mean.
You may need to refer to the manual before these tips make sense.
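For instance, the set tip uses the shell's own word splitting as a cheap parser (invented example):

```shell
#!/bin/sh
# Chop a colon-separated record with `set': temporarily point IFS at
# the delimiter and let the shell's word splitting do the parsing.
record="root:x:0:0"
old_ifs=$IFS
IFS=:
set -- $record          # unquoted on purpose: we want the splitting
IFS=$old_ifs
echo "user=$1 uid=$3"
```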
-- Peter Szilagyi <email@example.com>, 1.33