scholarly economies – decasia
https://decasia.org/academic_culture
critical anthropology of academic culture

Academic work as charity
https://decasia.org/academic_culture/2017/03/15/academic-work-as-charity/ (Wed, 15 Mar 2017)

In so many ways, academic work is hard to recognize as being work in the standard wage-labor sense of that word. It can take place at all hours of day or night, outside of standard workplaces, without wearing standard work clothing — in bed with the laptop at midnight, perhaps. American popular stereotypes allege that teaching is outside the realm of productive action and thus second-rate — “those who can’t do, teach.” That’s a maxim which devalues the feminine work of reproduction in favor of an implicitly masculine image of labor, but I digress; my point here is just that such claims reinforce the image of academic work as being in a world of its own.


The motivations for academic work are similarly supposed to be other than pecuniary. One is supposed to work for existential reasons, or out of commitments to higher values that go beyond the purely economic — the “pursuit of knowledge” in some quarters, the dedication to making citizens or producing social justice in others. Yet it’s no criticism of these values to observe, as many have already observed, that these higher values can become alibis for an amplified self-exploitation. “You’re doing it out of personal commitment,” they tell you as you donate your weekend to the institution.

A strange moment in this process, though, is the moment where colleges and universities beg their own employees for charitable donations.

Thus I’ve been surprised to receive email and paper mail requests numerous times per year from my current employer, Whittier College, originating in their Office of Advancement. As the illustration for this post shows, they even emailed me before the end of 2016 to suggest that “Charitable giving might help reduce your income tax bill.” But the only reason I have a tax bill is that they themselves are paying me a salary. So if I gave them a donation, that would … essentially be returning a portion of my salary to my employer.

Which amounts to asking me to work for free, or anyway for less, as if, again, academic work wasn’t actually something you do for a living. (I say “for a living” and not “for the money” to signal that what motivates me is the practical survival of our household, rather than money for its own sake. For people motivated by the latter goal, academia is obviously an inefficient route.) In any event, this seems a strange message to send to one’s employees. The same thing used to happen when I worked at the University of Chicago, so it isn’t just Whittier College in question; but in that case at least I was actually an alumnus.

I would recommend that academic employers at least ask their employees to opt in to the list of prospective donors, rather than handing their names to Institutional Advancement simply because they happen to be employees.

Overproduction as mass existentialism
https://decasia.org/academic_culture/2016/04/20/overproduction-as-mass-existentialism/ (Wed, 20 Apr 2016)

Earlier this year, I observed that there are two kinds of scholarly overproduction, “herd” overproduction and “star” overproduction. I’d like to come back to that line of thought to push it a bit farther.

I previously argued that if academic overproduction is in many ways market-like we might want to push for a better regulated market in knowledge. I suggested that this could be a complementary strategy to the usual denunciations of market forms in academic life. There is nothing the matter with critiques of market forms, I will stress again; but for all that, they need not be the end point of our thinking.

Continuing that line of thought, I’m wondering whether mass overproduction of academic knowledge may not have some unexpected effects. Its most obvious effect, of course, is the massive amount of “waste knowledge” it generates — the papers that are never read (or barely), citation for its own sake, prolixity for institutional or career reasons, pressures to publish half-finished or mediocre work, etc. All of these are the seemingly “bad” effects of mass overproduction.

But does mass overproduction have any clearly good effects? I like to imagine that one day, machine learning will advance to the point where all the unread scholarly papers of the early 21st century will become accessible to new syntheses, new forms of searching, and so on. We don’t know how our unread work might be used in the future; perhaps it will be a useful archive for someone.

More immediately, I’m also wondering if mass overproduction is creating new forms of self-consciousness in the present. In Anglophone cultural anthropology, it seems to me that mass overproduction is forcing us to constantly ask “what is at stake here?” Older scholarship seldom needed to ask itself that question, as far as I can tell, and certainly not routinely, with every article published. It became common, somewhere along the way, to ask, “so what?”

As one crude measure of this, I checked how often the literal phrase “what is at stake” co-appeared with “anthropology” in works indexed by Google Scholar, dividing up by decade (1951-2010). What I found is that this exact phrase occurred last decade in 14,600 out of 853,000 scholarly works in anthropology (or at least works matching the keyword “anthropology”). This comes to 1.71% of anthropological scholarship published last decade. Obviously, 1.71% is not a large percentage, but what matters, as a barometer of tendencies in the field, is that the percentage has risen considerably since the 1950s. Back in 1951-1960, only 35 publications mentioned “what is at stake” (0.2% of the 17,300 works published that decade).

Here’s the data:

Decade          Hits for          Hits incl.            Percent incl.
                "anthropology"    "what is at stake"    "what is at stake"
1951-1960           17,300                35                 0.20%
1961-1970           37,700               160                 0.42%
1971-1980           89,900               480                 0.53%
1981-1990          198,000             1,480                 0.75%
1991-2000          609,000             5,860                 0.96%
2001-2010          853,000            14,600                 1.71%

Growth since
the 1950s            49.3x            417.1x                  8.5x

Put another way, there was 49 times more anthropology published in 2001-2010 than in 1951-1960, but the expression “what is at stake” was used 417 times more often in 2001-2010 than in 1951-1960, thereby growing a bit more than 8 times as fast as the field in general. Google Scholar’s crude keyword search is too imprecise to measure how much work actually discusses what is at stake one way or another, but I expect that a more sophisticated linguistic analysis would show similar patterns over time.
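The arithmetic behind those growth figures can be checked in a few lines. This is just a quick sketch recomputing the ratios from the Google Scholar counts quoted above:

```python
# Google Scholar hit counts from the table above:
# decade -> (hits for "anthropology", hits also containing "what is at stake")
counts = {
    "1951-1960": (17300, 35),
    "2001-2010": (853000, 14600),
}

field_growth = counts["2001-2010"][0] / counts["1951-1960"][0]
phrase_growth = counts["2001-2010"][1] / counts["1951-1960"][1]
relative_growth = phrase_growth / field_growth

print(round(field_growth, 1))     # → 49.3  (growth of the field overall)
print(round(phrase_growth, 1))    # → 417.1 (growth of the phrase)
print(round(relative_growth, 1))  # → 8.5   (phrase growth relative to the field)
```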

So. Let’s say it’s true that cultural anthropologists now talk about “what is at stake” much more than they used to. The standard explanation for this is basically cultural and political. Cultural anthropologists are just much more self-conscious than they used to be, or so the story goes. They’re attuned to the politics of their representations. They’ve had to ask themselves about the relationship between their theories and colonial regimes. They no longer write under the assumption that producing objective knowledge is possible or even desirable. That’s what many of my colleagues would say, I think.

There’s plenty of truth there. But I wonder whether the sheer fact of overproduction – the massive flood of publications, the massive pressure to publish, the fact that we are not just a small village where everyone knows each other – may not also contribute to a sort of routinization of existential crisis. After all, if we are in a massive market of knowledge and attention that’s driven by the pressure to constantly produce, it stands to reason that the value of our product is constantly under scrutiny. I think that that’s partly what the “stakes” question reveals: an assumption that, until proven otherwise, our epistemic product has no value.

On some level, it is of course ridiculous to constantly have to prove that something major is at stake in every article, because when one is in a system of mass production, it is illogical to demand that the mass-produced part be singular, or even distinctly valuable. On the bright side, this massive existential focus on “the stakes” does help puncture an older generation’s dogma that scholarship is intrinsically virtuous. Existential self-doubt is a healthy thing, in some measure.

The downside, though, is that this focus on the stakes can oblige us to constantly exaggerate the value of our work — if only in order to get published and to attract readers. When everyone has to declare the great stakes of their scholarly products, this opens up a vast new space for self-promotional hyperbole. One might conclude, then, that mass overproduction can produce new forms of existential self-consciousness and self-scrutiny; but ironically, this existential awareness can itself readily become a new self-marketing opportunity.

Reading as caching
https://decasia.org/academic_culture/2016/03/03/reading-as-caching/ (Fri, 04 Mar 2016)

When you spend a few years writing code, the principles of programming can start to spill over into other parts of your life. Programming has so many of its own names, its own procedures, its little rituals. Some of them are (as anthropologists like to say) “good to think with,” providing useful metaphors that we can take elsewhere.

I’ve gotten interested in programming as a stock of useful metaphors for thinking about intellectual labor. Here I want to think about scholarly reading in terms of what programmers call caching. Never heard of caching? Here’s what Wikipedia says:

In computing, a cache is a component that stores data so future requests for that data can be served faster; the data stored in a cache might be the result of an earlier computation, or the duplicate of data stored elsewhere. A cache hit occurs when the requested data can be found in a cache, while a cache miss occurs when it cannot. Cache hits are served by reading data from the cache, which is faster than recomputing a result or reading from a slower data store; thus, the more requests can be served from the cache, the faster the system performs.

Basically the idea is that, if you need information about X, and it is time-consuming to get that information, then it makes more sense to look up X once and then keep the results nearby for future use. That way, if you refer to X over and over, you don’t waste time retrieving it again and again. You just look up X in your cache; the cache is designed to be quick to access.
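In code, the hit/miss distinction looks like this — a minimal sketch using Python’s standard `functools.lru_cache`, with a toy function standing in for the slow lookup:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def lookup(x):
    # Stand-in for a slow computation or remote query;
    # the decorator stores each result after the first call.
    return x * 2

lookup(21)   # first call: cache miss, result computed and stored
lookup(21)   # second call: cache hit, served from memory
print(lookup.cache_info().hits, lookup.cache_info().misses)  # → 1 1
```

The second call never touches the slow function at all: it is served straight from the cache.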

Caching – like pretty much everything that programmers do – is a tradeoff. You gain one thing, you lose something else. Typically, with a cache, you save time, but you take up more space in memory, because the cached data has to get stored someplace. For example, in my former programming job, we used to keep a cache of campus directory data. Instead of having to query a central server for our users’ names and email addresses, we would just request all the data we needed every night, around 2am, and keep it on hand for 24 hours. That used up some space on our servers but made our systems run much faster.
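That nightly-refresh pattern is easy to sketch. Here is a minimal illustration of a time-to-live (TTL) cache; the fetch function, the keys, and the email addresses are hypothetical stand-ins, not the actual directory system:

```python
import time

class TTLCache:
    """Cache the results of a slow fetch, refreshing entries older than the TTL."""

    def __init__(self, fetch, ttl_seconds):
        self.fetch = fetch        # the slow lookup, e.g. a directory query
        self.ttl = ttl_seconds    # how long an entry stays fresh
        self._store = {}          # key -> (value, timestamp when stored)

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None and time.time() - entry[1] < self.ttl:
            return entry[0]       # cache hit: entry is still fresh
        value = self.fetch(key)   # cache miss (or stale entry): refetch
        self._store[key] = (value, time.time())
        return value

# A 24-hour cache over a pretend directory lookup:
directory = TTLCache(lambda user: f"{user}@example.edu", ttl_seconds=24 * 3600)
print(directory.get("jane"))  # → jane@example.edu
```

The tradeoff from the text is visible in `_store`: the cache spends memory (every fetched entry is kept around) in order to save time on repeated lookups.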

One day, I had a thought: scholarly reading is really just a form of caching. When you read, in essence, you are caching a representation of some text in your head. Maybe your cache focuses on the main argument; maybe it focuses on the methodology; maybe on the examples or evidence. In any event, though, what you stick in your memory is always a provisional representation of whatever the original document says. If you are not sure whether your representation is accurate, you can consult the original, but consulting your memory is much faster.

I should probably issue a disclaimer here. I’m intentionally leaving aside a lot of other things about reading in order to make my point. Of course, academic reading isn’t only caching. Reading can be a form of pleasure, a form of experience valuable in itself; it can be a process of imaginary argument, or a way of training your brain to absorb scholarly ideas (which is why graduate students do a lot of it), or a way of forming a more general representation of an academic field. All of that is, of course, valuable and important. But I find that, after you spend long enough in academia, you don’t need to have imaginary arguments with every journal article; you don’t need to love the experience of reading; and you don’t need to constantly remind yourself about the overall shape of your field. Often, you need to read only a relatively well-defined set of things that are directly relevant to your own immediate research.

The analogy between reading and caching becomes important, in any event, when you start to ask yourself a question that haunts lots of graduate students: what should I read? I used to go around feeling terribly guilty that there were dozens, or probably hundreds, of books in my field that I should, theoretically, have been reading. I bought lots of these books, but honestly, I mostly never got around to reading them. That wasn’t because I don’t like reading; I do. It’s because reading (especially when done carefully) is very time-consuming, and time is in horribly short supply for most academics, precarious or not.

Now if we think about reading as a form of caching, we begin to realize that it might be pretty pointless to prematurely cache data that we may never use. For that’s what it is to read books pre-emptively, out of a general sense of moral obligation — you’re essentially caching scholarly knowledge whether or not it has any immediate use-value. To be sure, up to a point, it’s good to read just to get a sense of your field. But there is so much scholarship now that no one human being can, in effect, cache it all in their brain. It’s just not possible to have comprehensive knowledge of a field anymore.

I find this a comforting thought. Once you drop comprehensive knowledge as an impossible academic ideal, you can replace it with something better: knowing how to look things up. In other words, you do need to know how to go find the right knowledge when you need it. If you’re writing about political protests, you need to cache some of the recent literature on protests in your brain. But you don’t need to do this years in advance. You can just do this as part of the writing process.

That’s a rather instrumentalist view of reading, I know, and I don’t always follow it. I do read things sometimes purely because they seem fascinating, or because my friends wrote them, or whatever. But these days, given the time pressures affecting every part of an academic career, we ought to know how to be efficient when that’s appropriate. So: have a caching strategy, and try not to cache scholarly knowledge prematurely.

Failed research ought to count
https://decasia.org/academic_culture/2016/01/20/failed-research-ought-to-count/ (Wed, 20 Jan 2016)

Failed research projects ought to count for something! It’s too bad they don’t. They just disappear into nowhere, it seems to me: into filing cabinets, abandoned notebooks, or forgotten folders on some computer. The data goes nowhere; nothing is published about it and no talks are given; no blog posts are written and no credit is claimed. You stop telling anyone you’re working on your dead projects, once they’re dead.

I’m imagining here that other social researchers are like me: they have a lot of ideas for research projects, but only some of them come to fruition. Here are some of mine:

  • Interview project on the personal experience of people applying to graduate school in English and Physics. (It got started, but didn’t have a successful strategy for subject recruitment.)
  • Interview project on student representatives to university Boards of Trustees in the Chicago area. (I got started with this, but didn’t have the time to continue.)
  • Historical research project on what I hypothesized was a long-term decline of organized campus labor at the University of Chicago. (I only ever did some preliminary archival poking around.)
  • Project on faculty homes in the Paris region. (I only had fragmentary data about this, and it was too hard to collect more, and never the main focus of my work.)
  • Discourse analysis project on “bad writing” in the U.S. humanities. (I did write my MA thesis about this topic, but it needed a lot more work to continue, and for now it just sits there, half-dead.)

One might even argue that my dissertation research project in France was a sort of “failure,” in the sense that I never really did what I set out to do, methodologically. The original project was going to be a multi-sited, comparative ethnography of French philosophy departments. But it took a long time to really get accustomed to the first department where I did research (at Paris 8); and although I did preliminary research at a couple of other departments, after 18 months I was just too worn out to throw myself into them. So I made my dissertation into a study of a single department instead. Most ethnographers wouldn’t call that a failure, exactly — it felt more like pragmatism in the face of fieldwork. But at some level, it wasn’t what I originally wanted to do.

This reminds me that failure is one of those ambiguous, retroactively assigned states. How do you know something failed? Because it never “succeeded”, so eventually you did something else, or stopped trying. You don’t have to classify as “failure” everything that doesn’t succeed; my dissertation research evolved into something different and more doable, and its very criteria of success shifted along the way. Some things are neither success nor failure, they just morph. Or sit in limbo, somewhere between failure and success. Maybe I’ll revive some of my failed projects someday.

But failure’s ambiguity doesn’t entail that there is no such thing as failure. And my point here is that, even though academics live in a world where they are supposed to constantly project success, it would be better if failure was treated more openly. I suspect a lot of us have failed projects. I think they should be something you can list on your CV. They’re a barometer of your ambitions, a diary of how you became a better researcher, a set of unfinished paths that someone else might want to follow. In short, failed projects are a kind of (negative) knowledge. As such, it strikes me that they ought to have a more dignified existence.

Herd overproduction and star overproduction
https://decasia.org/academic_culture/2016/01/13/herd-overproduction-and-star-overproduction/ (Thu, 14 Jan 2016)

I’ve been thinking about certain scholars who have written, for lack of a more precise way of putting it, a lot. The sort of people who seem to write a book a year for thirty years. I don’t necessarily mean scholars in, say, the laboratory sciences, but more like the humanists, the anthropologists, the philosophers. Today a post by Brian Leiter quoting a caustic review of the prolific scholar Steve Fuller reminded me of the topic.

If one description of scholarly activity is “producing knowledge,” then logically, wouldn’t we expect that there would be such a thing as “overproducing knowledge”? Can there be an overproduction crisis of scholarship?

It’s been said before. For instance, here’s Tim Burke from Swarthmore writing in 2005:

The drive to scholarly overproduction which now reaches even the least selective institutions and touches every corner and niche of academia is a key underlying source of the degradation of the entire scholarly enterprise. It produces repetition. It encourages obscurantism. It generates knowledge that has no declared purpose or passion behind it, not even the purpose of anti-purpose, of knowledge undertaken for knowledge’s sake. It fills the academic day with a tremendous excess of peer review and distractions. It makes it increasingly hard to know anything, because to increase one’s knowledge requires ever more demanding heuristics for ignoring the tremendous outflow of material from the academy. It forces overspecialization as a strategy for controlling the domains to which one is responsible as a scholar and teacher.

You can’t blame anyone in particular for this. Everyone is doing the simple thing, the required thing, when they publish the same chapter from an upcoming manuscript in six different journals, when they go out on the conference circuit, when they churn out iterations of the same project in five different manuscripts over ten years. None of that takes conscious effort: it’s just being swept along by an irresistible tide. It’s the result of a rigged market: it’s as if some gigantic institutional machinery has placed an order for scholarship by the truckload regardless of whether it’s wanted or needed. It’s like the world’s worst Five-Year Plan ever: a mountain of gaskets without any machines to place them in.

But this isn’t exactly the kind of overproduction that I’m talking about. This is what I would call herd or mass overproduction, the sort of overproduction that “ordinary academics” engage in as a matter of survival in a scholarly system that incentivizes publication quantity. (By “ordinary academics” I mean the ones who have, or want, jobs of the kind where publishing is required — which isn’t all academic jobs, by a long shot.)

Herd overproduction, on Burke’s view, is a generic state of being, a thing “everyone” is doing. But what I’m thinking about is a different kind of overproduction: let’s call it elite or star overproduction. That is, the kind of overproduction that academostars often practice. To be clear, not all recognized academostars overproduce in quite the way I mean, and conversely, some singularly prolific academics are not necessarily at the very top of the scholarly prestige hierarchy — but there is a strong correlation between hyper-prolific writing and star status, in my experience.

Star overproduction does something different than just produce a mass of mass expertise. If herd overproduction produces relatively generic, interchangeable, unremarkable research commodities, then star overproduction reinforces big-name scholarly brands; it’s more like releasing a new iPhone every year than building a minor variation on a cheap digital watch. Star overproduction captures an outsized share of academics’ collective attention, more as a matter of general brand loyalty (“I like to keep up on Zizek”) than because it is necessarily the highest quality academic product. As a corollary to this — and this particularly irks me — certain hyper-prolific academics really let the quality of their work slip as they begin to hyper-overproduce. It’s as if they just have too many obligations, too much exposure, like a decent band that just doesn’t have a dozen good albums in it. Fact-checking sometimes gets iffy; the same arguments get repeated.

All this makes me wonder: after a certain point in a hyper-productive career, might it be more ethical to pass the floor to other, more marginalized academics?

More generally, if there is a market for scholarship, could it be a better-regulated one? Scholars like Marc Bousquet (The Rhetoric of “Job Market” and the Reality of the Academic Labor System) have rightly criticized the notion of a “market” as an adequate description of academic labor allocation, but market-style social dynamics do crop up in a lot of academic life, in my experience, and the critique of market ideology need not preclude regulatory projects of one sort or another. For instance, might we have a collective interest in preventing oligopolies of knowledge? In preventing overly large accumulations of academic capital? Could we help the marginalized publish more by placing limits on publishing success?

Or to be a bit hyperbolic, but also more concrete: Could there be, hypothetically, a lifetime quota on publication, a career-length word limit? Suppose, for instance, that such a limit were set at a very high level that most of us would never approach — but if you did get to the limit, your time would be up?
