When you spend a few years writing code, the principles of programming can start to spill over into other parts of your life. Programming has so many of its own names, its own procedures, its little rituals. Some of them are (as anthropologists like to say) “good to think with,” providing useful metaphors that we can take elsewhere.
I’ve gotten interested in programming as a stock of useful metaphors for thinking about intellectual labor. Here I want to think about scholarly reading in terms of what programmers call caching. Never heard of caching? Here’s what Wikipedia says:
In computing, a cache is a component that stores data so future requests for that data can be served faster; the data stored in a cache might be the result of an earlier computation, or the duplicate of data stored elsewhere. A cache hit occurs when the requested data can be found in a cache, while a cache miss occurs when it cannot. Cache hits are served by reading data from the cache, which is faster than recomputing a result or reading from a slower data store; thus, the more requests can be served from the cache, the faster the system performs.
Basically, the idea is this: if you need information about X, and retrieving that information is time-consuming, it makes more sense to look X up once and keep the result nearby for future use than to fetch it fresh every time. That way, if you refer to X over and over, you don’t waste time retrieving it again and again. You just look X up in your cache, and the cache is designed to be quick to access.
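If you like, here is the idea in miniature as Python code. This is just a sketch: the `slow_lookup` function and its one-second delay are made-up stand-ins for any expensive retrieval (a database query, a web request, a trip to the library). The point is that the dictionary serves repeat requests without redoing the work.

```python
import time

cache = {}  # results we have already retrieved, kept nearby

def slow_lookup(x):
    """Stand-in for any expensive retrieval."""
    time.sleep(1)  # pretend this takes a long time
    return x * x

def lookup(x):
    if x in cache:           # cache hit: serve the stored result
        return cache[x]
    result = slow_lookup(x)  # cache miss: do the slow work once...
    cache[x] = result        # ...and keep the result for next time
    return result

lookup(7)  # slow the first time (a miss)
lookup(7)  # instant ever after (a hit)
```

The first call to `lookup(7)` takes a full second; every later call with the same argument comes back immediately, because the answer is sitting in the cache.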