I was recently interviewed by Koran Addo of the (Baton Rouge) Advocate regarding my work with the Washington Monthly college rankings. I’ve had quite a few phone and e-mail exchanges with college officials and the media about my work, but I want to highlight the resulting article both because it was extremely well done and because it illustrates what I consider to be the foolish obsession with college rankings.
Two pieces of the article deserve special attention. First, consider this tidbit:
“LSU System President and Baton Rouge Chancellor William Jenkins said he was ‘clearly disappointed’ to learn that LSU had tumbled six spots from 128th last year to 134th in the U.S. News ‘Best Colleges 2013’ list.”
I wish that college rankings came with confidence intervals—which would provide a rough guide to whether a change over time is larger than what we would expect from chance or statistical noise alone. Based on my work with rankings, I can safely say that such a small change in the rankings is not statistically significant and certainly not educationally meaningful.
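To see why a six-spot move can be pure noise, here is a toy simulation—my own illustration with made-up parameters, not any ranking's actual methodology. It assumes 200 colleges with fixed "true" quality scores and re-ranks them each "year" after adding a modest amount of measurement noise, then tracks how much the rank of the 130th-place college bounces around even though nothing real has changed:

```python
import random
import statistics

# Hypothetical setup: 200 colleges, quality scores with sd = 1,
# and measurement noise with sd = 0.1 added fresh each "year".
random.seed(1)
N_COLLEGES = 200
N_YEARS = 500
NOISE_SD = 0.1

# Fixed "true" quality, sorted so index 129 is the 130th-best college.
true_quality = sorted(
    (random.gauss(0, 1) for _ in range(N_COLLEGES)), reverse=True
)
TARGET = 129  # the college whose true rank is 130th

yearly_ranks = []
for _ in range(N_YEARS):
    # Each year's observed score = true quality + measurement noise.
    scores = [q + random.gauss(0, NOISE_SD) for q in true_quality]
    order = sorted(range(N_COLLEGES), key=lambda i: scores[i], reverse=True)
    yearly_ranks.append(order.index(TARGET) + 1)

spread = statistics.stdev(yearly_ranks)
print(f"true rank 130; observed ranks ranged "
      f"{min(yearly_ranks)}-{max(yearly_ranks)}, sd = {spread:.1f}")
```

Under these assumed numbers, year-to-year swings of six or more spots happen routinely for a college whose underlying quality never moves at all—which is exactly why a 128-to-134 slide should not alarm anyone.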
The next fun quote from the article is from LSU’s director of research and economic development, Nicole Baute Honorée. She argues that only rankings from the National Science Foundation matter:
“Universities are in the knowledge business, as in creating new knowledge and passing it along. That’s why the NSF rankings are the gold standard.”
The problem is that research expenditures (a) do not guarantee high-quality undergraduate education, (b) do not have to be used effectively in order to generate a high score, and (c) do not reward many disciplines (such as the humanities). They are a useful measure of research clout in the sciences, but I would rely on them as only one of many measures (which is what the Washington Monthly rankings have done since long before I took the reins).
Once again, I urge readers not to rely on a single measure of college quality—and to make sure any measure is actually aligned with student success.