The Rankings Game

A few months back, I was meeting with one of our alums, and he asked me how I would categorize universities in terms of quality. I was taken by surprise, so I had to think for a minute, and then I told him that U.S. universities could be separated into three categories:

  1. Ivies, and Ivy equivalents – I won’t try to list them, but there aren’t many, and you could probably name most of them off the top of your head, although people will argue around the margins. These schools are so prestigious that just going there creates enormous opportunities for students regardless of what they actually do while they are there. Their extensive wealth and ample endowments allow for an impressive array of student services, and actually make the cost of attending lower than at many other schools even if their stated tuition is high. Their loyal alumni are always available to lend a hand or open a door for fellow alums.
  2. Many for-profit institutions, and some mismanaged or unscrupulous non-profits – These schools don’t provide quality instruction, advising, or student services, and their retention and graduation rates are abysmal. Their stock in trade is accepting students who are ill-prepared for college work and who don’t fully understand the costs involved. These students load up on cheap government loans to pay their tuition, and when they inevitably drop out, they are left with a lot of debt they have no way of repaying and no degree to show for it.
  3. Everyone else.

That’s right, I said it – I believe that outside of the very top and bottom tiers, the prestige level of every school is more or less the same, and a student’s decision of which one to attend should come down to personal fit (programs, geographic area, campus vibe, etc.) and economics rather than grasping for the “best” school. That is why I’ve also suggested that schools should focus on finding points of differentiation from their competitors and attracting the students for whom those points are meaningful.

If you don’t believe me, look around your own workplace. Do you really know where anyone went to school? Do you really care? I’m the dean of a business school with 23 full-time faculty members, and I’d have to rack my brain to come up with a list of the schools where they did their doctoral work (I could probably hit 80%, but I’d have to think for a while!). Your undergrad alma mater matters for your first job and/or graduate school application. After that, it just comes down to which football team you follow.

Yet many schools don’t believe me. Their administrations feel strongly that every tick up in the rankings means more and better applicants. My doctoral advisor Alan Meyer warned me against taking a faculty position at what he called a “program on the make,” and that’s what he meant – a program that is playing The Rankings Game. This classic study of how business school deans and administrators viewed the Business Week and US News rankings found that “…the rankings are widely perceived to be the single most useful gauge of a school’s ability (or inability) to compete in this marketplace” (p. 321). It also found that administrators felt they had to play the game, even though the game is inherently unwinnable.

An analysis of movement within the Business Week rankings supported that view. Over a 12-year period, only 15 schools had ever been ranked in the top ten, and only 29 in the top 25. The rankings are remarkably stable, and the top slots are held by a small number of elite, mostly private, schools with large endowments. A later study confirmed and extended this finding. The original top-20 list came out in 1988; by 2004, 18 of the original 20 programs had been ranked in every edition, and all 20 were still ranked. In fact, the two factors that explained most of the variance in the rankings were the age of the program and its initial 1988 ranking. Since schools can’t change either of these, the rankings game seems doomed to failure.
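To put a number on what “remarkably stable” looks like, here is a quick back-of-the-envelope sketch – a toy illustration with made-up ranks, not the actual data from either study. Spearman’s rank correlation between two consecutive years’ orderings is one standard way to quantify stability: a value near 1.0 means the list barely moved.

```python
# Toy illustration with hypothetical ranks (not the studies' data):
# if a ranking is stable, the ordering barely changes year over year,
# and Spearman's rank correlation stays near 1.0.
from scipy.stats import spearmanr

# Hypothetical top-ten ranks (1 = best) in two consecutive years.
# Only two adjacent pairs of schools swap places between years.
year1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
year2 = [1, 2, 4, 3, 5, 6, 8, 7, 9, 10]

rho, p = spearmanr(year1, year2)
print(f"Spearman rho = {rho:.3f}")  # 0.976: near-perfect stability
```

Even with two pairs of schools trading places, the correlation is still about 0.976 – and if only 15 schools cracked the top ten in 12 years, the real lists plausibly shuffle even less than this toy example does.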

Lest you think that these results only apply to B-schools, a very important study was just released in Research in Higher Education (I’m linking to an article about it in Inside Higher Ed because our library doesn’t carry this journal). It confirmed the same pattern for the general US News ranking of top US universities: the rankings are very stable year in and year out, and any movement is likely just statistical noise. It also found out why. To make a serious move in the rankings, a school would have to do three things simultaneously: dramatically increase spending on student services, faculty salaries, and the like (by potentially hundreds of millions of dollars); dramatically improve its entering student profile (at a time of declining enrollments and increased competition for top students); and dramatically change its reputation with other schools’ administrators (worth 15% of a school’s ranking). I don’t think it’s a stretch to say that doing all of these things at once is essentially impossible.

The lesson here is that schools CAN change, sometimes dramatically, and those changes CAN result in increased enrollment and retention. However, those changes are unlikely to cause the school to make a large move in the various school rankings, so any time or money devoted to that cause is likely misspent.
