When I first encountered the Linux command line, I was pretty bad at it.

I thought of it as a purely imperative interface.

You run xyz <enter> and you see some output. Then you can run xyz again if you want, or a different command if you prefer.

It’s just the “push a button, get a result” model of execution. It’s not wrong — every process does return an exit code indicating success or failure — but it’s a bit limiting.

The problem with the imperative model is: you can only push buttons. If you want to do something for which there’s no button, you’re stuck. It’s not a very extensible approach.

I’ve learned a few things since then, but I still feel like I’m bad at the command line.

Map and filter

Eventually I learned Python and discovered its map and filter functions.

Sometime afterwards, it started to sink in that the command line is also a playground for functional transformations of an input. (Conventionally, the inputs are line delimited plaintext or maybe CSV/TSV, but with tools like jq it’s easier to transform other structured formats too. I’m constantly reaching for curl | jq.)
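
For instance, to pluck one field out of a JSON response (the endpoint here is made up):

$ curl -s https://api.example.com/users | jq '.[].name'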

I find it helpful to think of grep as a tool that implements filter(), and cut as a tool that implements map(). That way, you can see the general patterns beneath the arcane Unix tool names. For example, you can run grep "WARN" log.txt | cut -d ':' -f2 to filter for the warnings and then map out just the part of each line you want to read.
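
Here's that pipeline run against a couple of invented log lines:

$ cat log.txt
2024-05-01 WARN:disk usage at 91%
2024-05-01 INFO:backup complete

$ grep "WARN" log.txt | cut -d ':' -f2
disk usage at 91%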

I’ll mention xargs (or find -exec) as essential functional programming tools too. Often you want to do something with the results of your functional pipelines.
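
For example (assuming a logs/ directory full of .txt files), you can hand a pipeline's results to another command with xargs, or let find run one directly:

$ grep -l "WARN" logs/*.txt | xargs wc -l    # line counts for files containing warnings
$ find . -name '*.log' -exec gzip {} \;      # compress every .log file under here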

Process stacks and trees

The pipe | that we use for data transformations above is obviously a way of chaining processes together, so that data flows through them in order. It’s a bit like .then() in JavaScript promises. But you can also think of the shell as a stack of processes that you can navigate up and down.

I remember learning really early on about & to run something in the background. I can’t say that I use & much anymore. Remember when it was common to have an account on some big Unix server? The kind where you could leave some processes running in the background, log off, and come back later? Those days are gone. Now it’s all ephemeral containers and replaceable VMs.
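
The mechanics haven't changed, though:

$ sleep 300 &
[1] 4242      # job number and PID (yours will differ)
$ jobs
[1]+  Running    sleep 300 &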

Anyway, the process stack got more interactive when I found out about using ^Z to suspend an interactive process, and then fg to bring it back. For example:

$ bin/rails console
>     # do some rails console commands
^Z    # suspends the rails console; it sits stopped in the background

$     # back at the regular shell to run something there...
$ fg
>     # back to the rails console again...

Meanwhile, it’s also sometimes helpful to picture the larger process space as an irregular tree, one whose branches are weakly linked together by dictionaries of environment variables that propagate from parent to child processes.
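
You can watch a variable flow down that tree, and see that it never flows back up:

$ export GREETING=hello
$ bash -c 'echo $GREETING'      # a child process inherits it
hello
$ bash -c 'GREETING=changed'    # but a child's changes don't propagate back
$ echo $GREETING
hello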

Shell vocabulary

Those are a few conceptual approaches to the shell environment. There are functional pipelines; there are process stacks; there are process trees.

But it’s not just about concepts either, right? It’s about learning the “vocabulary” or, dare I say, the “culture” of the shell environment. You can’t get too far at the command line without finding out about shell configuration and aliases, or why it’s annoying to use sh for your shell, or how $PATH works, or how Unix groups and file modes work. It’s hugely useful to know how redirection operators < and > work. And certain tools like vim or curl are in constant use – it doesn’t always matter if there is an inner logic to them; you just have to get used to them to find your way around. (I actually like vim.)
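
For example, > writes a command's output to a file, and < feeds a file to a command as input:

$ echo hello > greeting.txt    # stdout goes into the file
$ wc -c < greeting.txt         # the file becomes stdin
6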

The shell’s culture is kind of vast, and I always feel like I’m still learning more about it.

Things I’m still bad at

It’s funny to be writing a command line tool and still feel like a beginner, in some ways.

After a while, you’re not really a beginner anymore. But you’re experienced enough to start understanding your own limits, and so you still feel … modest.