I can always tell when a piece about college rankings makes an appearance in the general media. College administrators see the piece, tend to panic, and reach out to their institutional research and/or enrollment management staffs. The question asked is typically the same: why don’t we look better in this set of college rankings? As the methodologist for Washington Monthly magazine’s rankings, I get a flurry of e-mails from these panicked analysts trying to get answers for their leaders—as well as from local journalists asking questions about their hometown institution.
The most recent article to generate a burst of questions to me was on the front page of Monday’s New York Times. It noted the rise of lists that look at colleges’ value to students instead of their overall performance on a broader set of criteria. (A list of the top ten value colleges across numerous criteria can be found here.) While Washington Monthly’s 2012 bang-for-the-buck article was not the first value list (Princeton Review has that honor, to the best of my knowledge), we were the first to incorporate a cost-adjusted performance measure that accounts for student characteristics and the net price of attendance.
When I talk with institutional researchers or journalists, my answer is straightforward. To look better on a bang-for-the-buck list, colleges have to either increase their bang (higher graduation rates and lower default rates, for example) or lower their buck (with a lower net price of attendance). Prioritizing these measures does come with concerns (see Daniel Luzer’s Washington Monthly piece), but the good most likely outweighs the bad.
Moving forward, it will be interesting to see how these lists continue to develop, and whether they are influenced by the Obama Administration’s proposed college ratings. It’s an interesting time in the world of college rankings, ratings, and guides.
Actually, my past experience was that the chancellor would be briefed by the IR staff and then the communication experts would begin to spin campus articles. At my university (your alma mater), our press releases went from criticizing and challenging the myriad commercial rankings to celebrating the positive ones. How times change.
Thanks for the comment, Noel. I generally don’t see much of the institutions’ responses, but I do get a fair number of e-mails from IR staff or senior administrators either wanting to know more about the methodology or challenging the data used in the rankings (even though, in most cases, they reported it to the feds themselves). I think that most college leaders are just taking rankings as a given now and highlighting the ones in which they look good or whose methodology supports their strategic plan.
Is there a thesis topic or a dissertation topic in this exchange? I’ve read several analyzing the commercial ranking industry, but not much on how presidents/chancellors receive, interpret, and then communicate “their” ratings via web sites, press releases, and fundraising efforts.
I think this would be a really interesting dissertation topic. The biggest challenge would be getting access to a few university presidents who would be willing to talk about a fairly sensitive topic. If that can be addressed, then it’s a great idea for someone’s research!
Yeah, a random sample wouldn’t be the appropriate methodology…how about a representative sample of recently retired presidents/chancellors? Usually they are more candid than “sitting” leaders.
I like your suggestion about recently retired leaders. I’m more okay with a convenience sample of leaders here, since it’s a dissertation and I don’t think there has been a great study on this topic before. Anything would be an advance in the literature, as long as the sample can be justified in some meaningful way.