University neoliberalism in America: Greenwood on Spellings

I hadn’t meant to take such a long break from the blog. I will try to write weekly, at least, since there is so much here in France to write about. But for the time being, one more in a series of posts on neoliberalism…

Davydd Greenwood, an economic anthropologist turned action researcher from Cornell University, has been writing critically about social science and higher education for at least a decade now. In a long stream of essays, often co-written with his collaborator Morten Levin, he has castigated the “inhumanities and inaction research” that he views as leading to socially useless theoreticism, commented on Taylorist organization in university structures, and argued for far more extensive social research on academic institutions.

In a recent essay that I want to talk about here, Greenwood takes up what he calls “Bologna in America,” which is to say, the belated importation of neoliberal reform projects into U.S. higher education. His primary symptom of this phenomenon is a 2006 report put out by George W. Bush’s secretary of education, Margaret Spellings, which advocated a program of newly imposed “accountability” regimes for American universities, a “reform through imposed free market discipline.” Greenwood is quick to point out the contradiction inherent in the “imposition” of a “free” market:

“If [these would-be reformers] actually believed in the free market, this would make no sense. After all, by free market logic, institutions that are not accountable, not transparent, not affordable and not efficient would simply be put to death by the market itself. However, in their world view, the free market always needs the oversight of authoritative policymakers who know better than the consumers and producers what they all need” (22).

Now oversight, as the Spellings Report imagines it, consists of several things: a changing regulatory and financial structure, a new push towards policy integration of university and economy, and particularly a new regime of “transparency and accountability.” Accountability here largely involves instituting “output controls” instead of “input controls” — “output controls” meaning measuring the results of an education (demonstrable skills afterwards, job placement) rather than the inputs (money spent, teacher qualifications, or whatever). Greenwood notes correctly that no reasonable person could be against understanding the results of educational processes and trying to improve them, but as he points out, the Spellings Report’s version of accountability involves reducing educational processes to a set of uniform, quantifiable outcomes.

The report itself complains that it is currently hard to measure “how much students learn in college or whether they learn more at one college than another” (Spellings 13). This presupposes, first of all, that learning can be adequately quantified: they think of learning as what linguists call a mass noun, like a heap of grain, susceptible to indefinite expansion and to precise measurements of how much learning has happened. And second, clearly, it presupposes that all college educations are fully, quantitatively comparable to each other.

There is an elaborate fantasy later in the report (20f) about creating a massive national “consumer database” that will track every student and thus make possible centralized data on all universities’ aggregate student outcomes. Though Greenwood notes that this proposal raised major privacy concerns, it strikes me that a great deal of useful critical sociology could probably be derived from such a database. At any rate, the system was never implemented (we will return below to the outcome of the Report). But it’s interesting to see that this database, clearly envisioned as the primary means of evaluating higher education, would have contained a massive tacit bias towards a uniform, fully quantified, and almost entirely vocationally-oriented view of higher education. For the Spellings Report, educational “results” largely means “job placement.” That’s a sign in itself of the ongoing integration of higher education into the American system of labor and class reproduction, but also a symptom of not-entirely-satisfied corporate desires for universities to produce an even more “prepared” workforce.

This integration with the American labor system is one that permeates the very definition of learning employed by the report writers. When they explain what they think is wrong with American college education, they say that college students are “not prepared to work, lacking the critical thinking, writing and problem-solving skills needed in today’s workplaces” (3). What they mean by “critical” thinking is itself interesting here. Needless to say, what’s optimistically called critical thinking in the American liberal arts is often not allowed in work contexts, since critique can threaten or at least annoy the established order. The famous cartoon about the grad student deconstructing the Mexican takeout menu might end in dismissal if it were the cook doing the deconstructing.

But “critical” is a word seldom used in the abstract in this report. “Critical” occurs nine times in the report’s text; by my count, they mention abstract “critical thinking” skills twice, but seven times they talk about education being critical to something else. Certain disciplines are “critical to global competitiveness” (15); the workforce has certain “critical needs” (24); literacy is “critical to the nation’s continued success in the global economy” (26). Criticality here, in short, means economic instrumentality; what’s critical is what’s economically useful. (I note in passing that, while many humanists would instinctively critique such a definition of critique, any defense of non-economic critical values is likely to have its own economic conditions of possibility which quite often are concealed in the defense of supposedly higher values. Humanists still get a paycheck.)

Greenwood suggests that there is practically nothing about actual teaching in the report, but I’m struck myself, reading it over, by its emphasis at certain moments on an incredibly crude and low-level set of educational indicators. These consist essentially of performance on standardized tests of literacy and math. According to the report, as of 2003, 31% of college graduates were “proficient in prose literacy,” 25% in “document literacy,” and 31% in “quantitative literacy” (13). I have no idea what these numbers really measure or how accurate they are, but they certainly sound astoundingly low in every case. Less than one in three graduates is apparently “literate” in any of these senses. As my friend Mike Bishop might put it, whatever you may think about schooling reform, people ought to leave school able to read… but as Greenwood points out, it is equally mistaken to blame schools and universities for resources that are lacking or unequally distributed when those resources are controlled by broader sociopolitical forces. To take Connecticut, where I grew up, as one example: variation between schools seemed to be largely the product of wealth differences between towns, coupled with a mainly town-based system of school funding. Apparently there was no effective mechanism for equalizing wealth (or cultural capital) disparities between towns. Any purely internal reform of education is misguided in such circumstances (a point which has been stressed by critics of No Child Left Behind).

At any rate, the Spellings Report cited bad test scores for basic skills and a lack of (quantified, standardized) accountability mechanisms, in addition to a bad financial aid system, unequal access, impossibly rising costs, and a lack of “innovation,” as the main areas needing remedy in American higher ed. In fact, however, the Spellings Report didn’t directly yield major reforms. As Greenwood notes, the major difference from a European Minister of Education is that the U.S. Secretary of Education has no power to directly control American universities. Her powers mainly cover the disbursement of federal funds and the oversight of accreditation agencies for colleges and universities. Spellings had no power to directly reform university practice.

Moreover, the Spellings Report got in trouble over its proposal for an invasively comprehensive national student database, and, Greenwood tells us, Spellings’ policy approach was eventually repudiated by a prominent Republican, Lamar Alexander. However, and this I think is one of Greenwood’s most provocative claims, the apparent defeat of the Spellings initiative did not spell the end of neoliberal “accountability” reform in American universities. In Greenwood’s view:

“Everywhere in the U.S. now, every institution and all accreditation bodies are scrambling to create output controls, systems of evaluation and accountability like the ones envisioned by the Commission… [there is] a clear recognition that gestures in the direction of quality assurance and accreditation are necessary to keep the federal government from taking even more authoritarian actions to control higher education. The higher education press I read and the people I talk to make it clear that they believe the only strategy is to keep your head down and appear to play along. The secretary’s agenda for output measures and input controls is therefore not being implemented by executive fiat but by universities’ doing it to themselves” (24).

As if in a ruse of neoliberal history, the decentralization of American higher education, which provides such a powerful defense against the kind of top-down neoliberal reforms that have happened in other nations, makes it possible to adopt a seemingly “voluntary” route to neoliberal audit cultures. Greenwood’s reading is pessimistic, as if resistance were almost impossible. “There can be very little question,” he comments, “that accountability and increased transparency in higher education are with us now for the long haul” (34). To be sure, he is no historical determinist; it is just that, in his analysis, the faculty with the power to mobilize for better reforms are currently too ignorant and passive to do so.

I would have liked, at this point, to have seen some consideration of the various activist movements that do currently exist within the university. Unionization efforts and labor movements among adjunct faculty and graduate students are one major new phenomenon, perhaps the most pragmatic; there have been social justice campaigns on campus over the years, though not always successful ones; currently there seems to be significant mobilization at the University of California. I would have liked also to have seen more substantial ethnographic examination of the internal life of American university administrators and of the new forms of auditing that they are supposedly all implementing. Greenwood, who has held various administrative posts, would be well placed to conduct a real ethnographic study of that world, which is inaccessible to younger researchers like myself.

And I am not sure of his blanket statement that “all” American universities are voluntarily implementing audit controls on the scale of the UK’s Quality Assurance Agency. I would like to hear more empirical details about this, since it isn’t something I have seen personally in the US. For the time being, however, it crosses my mind that there is a broader lesson to be drawn here about neoliberalism. While Greenwood writes as if neoliberal reform were almost inevitable, we could equally view the political ups and downs of the Spellings Report as a sign that the political results we call neoliberal are somewhat historically unstable, contingently instituted, dependent on the shifting balance of political forces in a given national moment, and, far from an inevitable historical force, necessarily mediated by a local political process.

It might thus be better to think of neoliberalism as being more like a pliable, portable political ideology than a concrete set of historical results, though that would then raise the problem of the relation between the ideology and the remarkably uniform set of neoliberal institutional reforms around the world. But let’s give Greenwood the last word. “To me,” he says, “the ‘neo’ in neo-liberalism seems out of place. What is taking place is a reversion to commodity capitalism, with its pseudo-free markets, state and elite control and the imposition of discipline on non-state actors whose survival requires them to accept subjection to a particular version of the market that serves elite interests. Rather than ‘neo’, it marks a return to the end of Robber Baron capitalism” (7). Which seems a thought worth pondering.

20 thoughts on “University neoliberalism in America: Greenwood on Spellings”

  1. Eli: Thanks again for yet another thoughtful post. Your continued public writing on the subject is helping me think about my own work as a historian dealing with the creation of a university in a late twentieth-century context. But it looks like you should be studying at Cornell under Professor Greenwood?! Then again, you wouldn’t guess he’d be doing this kind of work given his background. – TL

  2. Living in Korea, I’ve noticed that higher education largely follows the model suggested by the Spellings report, but that government, college administrators, and academics favor a progressive shift toward a more “critical” (as in “critical thought”) model. Oddly, even as America shifts toward this workforce utilitarianism model in its higher education system, Korean administrators look to the American system as a guidepost for the opposite. It would be strange, indeed, if at the same time that America closes in on pure workforce utilitarianism in higher education, Korea were to somehow become a beacon for the “critical” model.

  3. Hi Max and Tim,
    Nice to hear from you both. Max, that’s interesting what you say about Korean universities. How do your observations fit with that paper I sent you about Korean reforms? I recall Korean reforms as being at least somewhat business oriented — aren’t ‘business universities’ being constructed? And I have to say: it’s not like the American system has ever been fully dedicated to the pursuit of pure critical thought, so even if it is becoming more vocationally oriented than ever, it would still seem to be the case that Koreans would be aspiring more to an ideal humanistic-critical model of the American university than to something that would really replicate the whole American system. I mean, American higher ed includes things like endless freshman comp classes and underfunded state university branch campuses… presumably Korean reformers are not dreaming of such things?

    Tim, I’d love to know what your research is about. Which university are you looking at? And as for Greenwood, I did go to college at Cornell, which is why I know him. But one isn’t encouraged to go to grad school where one goes to college, for better or worse.

  4. Mere terminology:
    “Critical thinking” does not usually refer to anything like “critical theory.” I don’t particularly like either use of the word critical. http://www.aft.org/pubs-reports/american_educator/issues/summer07/Crit_Thinking.pdf

    Neoliberalism is used as a pejorative term (almost exclusively) and as such, is not conducive to dialogue with people supporting the ideas so labeled.
    I also avoid it because it is too vague for the sort of conversations I want to have.

  5. Now onto the substance… I’m sure I could find some things to disagree with the Spellings commission about, but I think there are a lot of very reasonable ideas in that report. If an institution receives a lot of money from the government/taxpayers, I think they should be required to share some basic data on what courses are offered, who is taking them, what grades they receive, who is teaching them, tuition and financial aid, etc. This seems like the bare minimum. It should also be possible to link data on students as they move from one school to another without endangering privacy.

    The goals include giving researchers the ability to analyze our education system, and giving prospective students additional information on which they might base their educational decisions. I don’t see how one can complain about that.

  6. Hi Mike,

    good to hear from you as usual. I sort of agree with you about the first part, actually; ‘neoliberalism’ is a somewhat vague and definitely pejorative term. I avoided it for a long time too, but on starting to read more international comparisons in university reform I think that there is something true about there being global trends in higher education reforms (maybe in governance more generally, people allege that but I’m no expert) and these often involve elements that are collectively called neoliberal. See my first post on neoliberalism for a list. I think before you dismiss it, you should think twice about whether in fact there is something useful about the term (if used precisely, obviously). I can send you the articles that form my empirical evidence if you want too.

    Thanks for the link about “critical thinking.” In contexts where I work, I think it really is quite close to “critical theory” or, rather, to the spirit of argument involved in critical theory (in the large sense that refers to more than just the Frankfurt School).

    Finally, while I did in fact say something similar to you about the useful sociology that could be derived from massive national data sets, I think you have a longing as a policy person to live in a world with a stronger national government, like France, where this kind of data is generally far more centralized. I don’t really trust your intuition that a huge government database is automatically a good thing — I think it would take a historical research project to examine the costs and benefits of that kind of centralization of information. And your overall reading of the Spellings Report, I have to say, strikes me as a little short-sighted. Yes, I agree that some of the information-gathering proposals are potentially useful. Even Greenwood agrees with parts of the report. But you don’t begin to address his larger questions about standardization, about the ways that assessment can distort an educational system, about the obsession with job placement, and indeed about the ideological agenda of the Bush administration in general. Surely these things are relevant to an analysis of who authored the report and what they said? Here’s my broader question for you: how do you theorize the place of ideology and political positions in your analysis of educational reforms? Like for example: how do you theorize the influence of business and the corporate world on education and furthermore on the policy circles that decide education policy?

  7. The government has fairly little control over what is happening in our institutions of higher education. This gives us quite a bit of protection from the ideology of those in power, and more importantly, special interest groups. The freedom that our institutions of higher ed. enjoy may be the primary reason why higher education in the U.S. is often considered to be the best in the world while our primary and secondary education is considered mediocre.

    Corporations do have an influence on education, but it is not, primarily, through policy. It is because they hire students from institutions that have historically produced good employees, and those who have taken courses of study considered relevant to their jobs. Students then seek out those institutions and those courses of study. I’m glad that many students care about other things as well… choosing courses that they find interesting or enriching, but I am also glad they can choose things that set them up for a career. I want them to have whatever information they would like to help them make these choices.

    While I’m inclined to give private individuals and organizations a decent amount of privacy, I think the default for organizations that receive lots of public money should be transparency. I made some specific proposals (off the top of my head) regarding information that should be released: “what courses are offered, who is taking them, what grades they receive, who is teaching them, tuition and financial aid”; to that I will add student course reviews, reviews of other aspects of the institution, and the degree completion rate. Which of these reporting requirements would you object to?

  8. I’m going to leave aside the question about reporting requirements for now, because my claim there was simply that one probably can’t assume a priori that massive new government information collection is automatically a good thing. I gather that you don’t agree with that claim and are instead on the side of more information being automatically better, but this isn’t really the place to settle the issue. On the broader level, however, it sounds like you’re overlooking a lot of major avenues of government and corporate influence on higher ed, not to mention an important point from the actual blog post here that we’re commenting on.

    For one of the most striking assertions in Greenwood’s article is that the output controls that the Spellings Report advocated are now being put in place voluntarily by university administrators who want to preempt any future government regulation. That’s one mechanism that approaches government control of universities. To which one could add: the influence of big donors and granting agencies over university research and investments (the Milton Friedman Institute was originally conceived as something that would be popular with donors, don’t forget), the incredible corporate dominance of boards of trustees (which you have to be crazy to imagine has no impact)… in fact I’ll just post a quote from this article by Christopher Newfield:

    “By the year 2000, university-industry relations seemed all-encompassing. Athletes had become human billboards for sporting goods companies while their coaches collected large endorsement fees. Student centers had assumed most of the functions of suburban shopping malls, and a large portion of campus Internet traffic was devoted to consumer uses like downloading music files. Universities marketed themselves as prestige brands to the most affluent demographic and raised tuition rates so consistently that graduates carried credit card debt to rival the ever-increasing size of their student loans. From coast to coast, campus life seemed as much about buying stuff as about learning things. After two decades of marketing tie-ins, fiscal crises, and financial incentives, commerce had moved from the edges to the core of the academic mission.”

    You might find Newfield’s article worth reading: it’s schematic but it really conflicts with your generally cheerful reading of university-capitalism relations.

  9. What are the scary “output controls” that universities are implementing?

    The specifics mentioned in the Newfield quote don’t seem like such a big deal, and the sweeping declarations about how learning is being damaged seem woefully unsupported. I am somewhat concerned about the cost of college, and that is one of the things the Spellings report actually tries to address.

    Yes, individual and corporate donors influence things, but mainly by adding new things, rather than eliminating older things. I’m not aware that the MFI takes anything away from sociologists’ educational experience.

  10. Mike, do you ever have the feeling that we live in separate intellectual worlds? I find my position (and, mostly, Greenwood’s) a priori plausible and you find it a priori implausible. You find corporate influence generally unimportant and/or insignificant and I am inclined to view it as malign and probably widespread. You are eager to agree with whatever there is to agree with in the Spellings report and Greenwood (whose views I am trying to report here) is eager to disagree with whatever there is to disagree with. About the cost of college, everyone agrees that it is problematic, but some find the Spellings Report’s proposed solutions at least as bad, if not worse (the argument being that they are complicit in the conservative scheme for shrinking public institutions by cutting public funding while demanding that they cut their own budgets to become more efficient).

    Given the conceptual rifts between our outlooks on this question, I don’t see that much is to be gained by continuing this particular discussion. I too would like to know what kinds of voluntary output controls Greenwood is talking about, but I also rather doubt that you are likely to be sympathetic to his case, given the sarcasm of your tone.

  11. Right. Our moral judgments rest, to a large extent, on our different beliefs about the world. I believe there is a way forward for people in our situation. It is to focus our discussion on narrower claims for which we can summon stronger empirical evidence.

    From a scientific point of view, the ideal would be for the federal government to randomly assign some policies to different states so that we could see what effects they had. More realistically, we look at historical data as states/countries (ideally a lot of them) change policies and do our best to evaluate them. Of course, the outcomes we choose to measure are important; every policy is destined to help some groups/goals and hurt other groups/goals. But with sufficient sharing of evidence, I think we could get much closer to agreement on the desirability of various policies.

  12. Since Eli invited me to respond to the specific issue of voluntary compliance in an e-mail but suggested I read your exchanges, it seems smarter to respond here briefly.

    My point about voluntary controls is based on a variety of sources. It is relatively easy to document the scramble for at least feigned compliance with this mode of operation among the accreditation agencies. They stumbled all over each other to implement statistical, commodity production models and are still at it. This cascades directly down to institutional officials, who are now imposing this kind of accountability on their campuses. Many institutional research offices at big institutions are doing this, and many small colleges either have new VPs for this or have hired consultants. The faculty are then tasked with coming up with outcome models. Is this voluntary? I think the voluntary-involuntary distinction in a deeply ideological universe with lots of money at stake is probably not good enough.

    Regarding neo-liberalism, I actually don’t like the term either. However, it certainly is not conservatism or liberalism. What I don’t like about the term is that there is nothing “neo” about pseudo-market corporatism in politics. This is the old time religion that has led us into one cycle of socio-economic disasters after another.

    I have nothing against measurement and evaluation. It is true that universities have been shamelessly careless about seeing whether what they do and teach is good for anything and adjusting their behavior in response. However, stupid variables stupidly measured without any thought for the nature of what education actually might be will produce stupid policies and stupid behaviors. Garbage in, garbage out.

    Finally, I have found more of value on higher education in the writings of serious conservative economists like William MacMahon and Ronald Ehrenberg than in the diatribes against corporatization that abound.

  13. “However, stupid variables stupidly measured without any thought for the nature of what education actually might be will produce stupid policies and stupid behaviors. Garbage in, garbage out.”

    I agree. Currently, the most important variables are whatever U.S. News and World Report decides to measure. Some of what they measure may have some value, but we could do far better if colleges were required to share more information.

    I take it the voluntary controls are “statistical, commodity production models” but I still don’t understand what exactly they are measuring, and whether/how this significantly impedes other goals.

  14. Hi Davydd — following up on Mike’s recent question, I’m curious to have a bit more clarification on your view on program evaluation. I take it you are not in favor of a purely qualitative and therefore almost wholly incommensurable method of evaluating the educational outcomes of a particular program; but what kinds of quantitative or cross-institutional data would you like to see compared? If you are ultimately just advocating for a richer plurality of forms of evaluation, your views are probably quite a lot like Mike’s, but if you are more rigorously against this kind of nation-wide quantitative evaluation project, your views are probably more in conflict.

    In thinking again about your paper, I gather that you are against the apparent Spellings fantasy of a world where policymakers or student-consumers will judge all institutions according to some fairly simplistic sets of outcome ratings — but I assume you are not actually against large-scale data collection, but rather are more concerned about simplistic or even punitive means of using such data in top-down policymaking? This would suggest a further question: how do your views on national/quantitative data collection relate to your views on the appropriate use of this data? Is higher ed data collection necessarily going to be tightly coupled to the use of that data in governance regimes?

  15. One further question. I hate to badger you about this, but when it comes to the issue of output controls etc., I still just don’t quite know where to look to get a more concrete sense of what is happening. As you know, we are graduate students, hence have very poor access to trends in the university administrative world… I did look through the websites of 7 or 8 institutional research offices to try to get a sense of the trend you describe, but most of their more detailed analyses seem not to be publicly available, and only one or two explicitly mentioned program evaluation as one of their tasks. Publicly, they give the impression that they basically just do demographics, HR, and budget analysis — which is pretty far from the kind of outcomes evaluation we’re talking about. I looked up a couple of accreditation agency websites too, but didn’t really find anything out about ongoing changes in their assessment practices.

    Can you give any help in looking for more detail about this, at least at the accreditation agency level? And about the local institutional cases, I would love to hear some details or a sketch of a case study, if you feel inclined… Is this happening specifically at the Ivies, for example? At Cornell? Or perhaps the details will just have to await some future research project…

  16. If all we are talking about is requiring colleges to report some data for the feds to put up on a website then I don’t think it deserves the label “output controls,” or even “evaluation,” because the government is not imposing any controls or consequences on the basis of the data.

  17. Hi, Eli and Mike, I gather from your posts and also from Eli’s email that there are two questions here. One is what evidence I have for the idea of voluntary imposition of output controls. The answer is complicated in nature, but also made worse because I am in Spain without my library of hundreds of sources.

    The first part of this goes to the imposition effort by Secretary Spellings. That debate is well documented, and her attempts to impose a unit record database are easily found. She did not have the authority to impose it and the Congress made her back off. However, she then discovered the accrediting agencies (it appears that accreditation is itself voluntary, sometimes involving a choice of which accreditor you want, and that the agencies have been very loosely watched). In any case, there are a large number of them, but I found out that the one authority the Secretary has is to officially recognize such agencies or not. There was a huge stir about this and the agencies all rushed to start using her accountancy models as part of their accreditation process, claiming it was a voluntary action. It is now already taken for granted. Given that, is it voluntary or not?

    A good way to get the flavor of this is to go to Inside Higher Education and then search on “accreditation”. You will find 27 pages of articles on the subject in reverse chronological order. Skimming through, you can see the contours of this process.

    The second issue is whether I believe in large-scale comparisons and databases. My answer is yes and no. Do I think there is a method for deciding whether you learn more and better anthropology at Duke or at Brown? I say there is no method that compares apples and oranges. See Robert Hunt’s book on comparison for an anthropologist’s careful view of this.

    Do I think it is worth knowing the relationship between what we think we are doing when we teach and what the students find themselves able to do with it throughout their life course? Yes. But even this is a qualified yes.

    I am an action researcher and thus I really only accept the merit and value of formative evaluation rather than summative evaluation. Formative evaluation involves evaluating what you are doing in order to improve what you are doing, by consulting all parties and by examining the long-term effects of what has been done. Maresi Nerad’s work is the closest I know to this, and even that still needs lots of work. But simpleminded statements about outputs are at least as meaningless as the pathetic ranking systems being used.

    So I come back to evaluation for what? Spellings had only one goal: total control of the schools and the curriculum. Many administrators on campuses are happy to adopt this goal and use it as a justification for slashing subjects they don’t like and building those they do like. If, on the other hand, the goal is to compare the fates of anthropology Ph.D.s from 200 schools at 5, 10, 15, and 20 years out, and to talk to their employers too, then let’s get at it. My impression is that much of what is taught is utterly useless and much of what is needed is not taught. I would be happy to have both quantitative and qualitative data on this. However, this is not what most people are volunteering for. They are trying to co-opt the evaluation process in order both to keep doing what they have been doing and to put the screws to people they don’t like.

    Is this clearer?

    Davydd

  18. Davydd, Thank you for writing such a helpful response. I wish I had checked back here sooner. (Eli, can we get an “email me with new posts to the thread” option?)

    It is interesting to learn that the Ed. department certifies accreditation agencies. That does seem like a way to exercise some influence. But even taking that into consideration, it doesn’t seem like that much influence. To put this in perspective, I would say the federal government has ten times as much influence on curriculum and pedagogy in secondary education as it does in higher education. Yet even in secondary education, the feds’ influence is far less than the individual states’ influence. (Note, I think this is for the best.)

    Perhaps Spellings would have liked “total control of the schools and the curriculum”; leaders, both good and bad, almost always want more power. But framing this as Spellings’ “one goal” is as bad as when Sean Hannity calls Obama a socialist. The secretary of education will never have that kind of power. The policy proposal under discussion is whether institutions which get a lot of federal money should be required to report some basic data on the students they are educating (info on what data here: http://www.universityofcalifornia.edu/senate/news/source/nces.pdf ). I think that students would benefit from having more detailed and accurate information on the graduation rate, financial aid, and the popularity of different majors. What exactly are the consequences people fear from making this data available?

    The type of research on anthro Ph.D. education you recommend sounds great and would complement the data collection being discussed.

    It is no surprise to me that universities oppose this proposal. The less information that is available, the less competition they face. The less information that is available, the less they have to worry about critics.

    I’m open to being convinced that some particular type of data should not be reported, but I think we should embrace the basic idea that more information should be made public.

    1. Yeah, I’ve read Carey’s piece. It basically falls in the Spellings camp when it comes to the issues we’ve discussed here, and fails to acknowledge the existence of a view on higher ed that would be against the kind of mass quantitative, market-oriented position that it presumes here as an obvious good. But we needn’t get into the details for now…
