The 2013 U.S. News college rankings were released today and are certain to be a topic of discussion across much of the higher education community. Many people grumble about the rankings, but they are hard to dismiss given their profound impact on how colleges and universities operate. It is not uncommon for colleges to set goals to improve their ranking, and data falsification is sadly a real occurrence. As someone who does research on college rankings and accountability policy, I am glad to see the rankings come out every fall. However, I urge readers to take them for what they are intended to be: a measure of prestige rather than of college effectiveness.
The measures used to calculate the rankings are generally the same as last year and focus on six or seven factors, depending on the type of university:
–Academic reputation (from peers and/or high school counselors)
–Selectivity (admit rate, high school rank, and ACT/SAT scores)
–Faculty resources (salary, terminal degree status, full-time status, and class size)
–Graduation and retention rates
–Financial resources per student
–Alumni giving rates
–Graduation rate performance (only for research universities and liberal arts colleges)
Most of these measures can be controlled directly by increasing tuition and/or enrolling only the most academically prepared students. The only measure that is truly independent of prestige is graduation rate performance, in which the actual graduation rate is regressed on student characteristics and spending to generate a predicted graduation rate; the performance measure then compares the actual rate to the predicted one. While U.S. News doesn’t release the methodology behind its predicted graduation rate measure, the results are likely similar to what I produced for the Washington Monthly rankings.
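To make the general approach concrete, here is a minimal sketch in Python of a regression-based graduation rate performance measure. The data, predictor choices, and functional form are hypothetical placeholders for illustration only; they are not U.S. News’s unpublished model, nor the exact Washington Monthly specification.

```python
# Illustrative sketch of a graduation rate performance measure.
# All data and variable choices below are hypothetical.
import numpy as np

# Hypothetical institution-level data
actual_grad_rate = np.array([0.62, 0.81, 0.55, 0.90, 0.70])      # actual 6-year rates
pct_pell = np.array([0.45, 0.20, 0.55, 0.10, 0.35])               # student characteristics
avg_sat = np.array([1050, 1250, 990, 1400, 1150])
spending_per_student = np.array([14000, 22000, 12000, 35000, 18000])

# Regress the actual graduation rate on student characteristics and spending
X = np.column_stack([
    np.ones(len(actual_grad_rate)),   # intercept
    pct_pell,
    avg_sat,
    spending_per_student,
])
coefs, *_ = np.linalg.lstsq(X, actual_grad_rate, rcond=None)

# Predicted rate from the model; "performance" is actual minus predicted
predicted_grad_rate = X @ coefs
performance = actual_grad_rate - predicted_grad_rate
print(np.round(performance, 3))
```

A college that graduates more students than its characteristics and spending would predict gets a positive performance value; one that underperforms its prediction gets a negative value.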
I am pleased to see that U.S. News is using an average of multiple years of data for some of its measures. I do this in my own work on estimating college effectiveness, although it was not a part of Washington Monthly’s methodology this year (it may be in the future, however). The use of multiple years of data does reduce the effects of random variation (and helps to smooth the rankings), but I am concerned that U.S. News averages only the years that are submitted when a college does not submit all years. This gives colleges an incentive not to report a bad year of data on measures such as alumni giving rates.
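As a simple illustration of that incentive, consider a hypothetical alumni giving series in which one bad year is withheld; the average of the submitted years alone looks noticeably better than the true multi-year average. The figures below are invented for illustration.

```python
# Hypothetical three-year alumni giving rates; the middle year was unusually bad.
giving_rates = [0.12, 0.05, 0.11]

# Average over all three years (what a complete submission would yield)
full_average = sum(giving_rates) / len(giving_rates)          # about 0.093

# If the bad year is withheld, only the submitted years are averaged
submitted = [0.12, 0.11]
submitted_average = sum(submitted) / len(submitted)           # 0.115

print(round(full_average, 3), round(submitted_average, 3))
```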
Overall, these rankings are virtually unchanged from last year, but watch colleges crow about moving up two or three spots even though their scores hardly changed. These rankings are big business in the prestige market; I hope that students who care more about educational effectiveness will consider other measures of college quality in addition to these rankings.
I’ll put up a follow-up post in the next few days discussing the so-called “Best Value” college list from U.S. News. As a preview, I don’t hold it in high regard.
Disclaimer: I am the consulting methodologist for the 2012 Washington Monthly college rankings. This post reflects only my thoughts and was not subject to review by any other individual or organization.