How the New Carnegie Classifications Scrambled College Rankings

Carnegie classifications are one of the wonkiest, most inside-baseball concepts in the world of higher education policy. Updated every three years by the good folks at Indiana University, these classifications serve as a useful tool to group similar colleges based on their mix of programs, degree offerings, and research intensity. And since I have been considered “a reliable source of deep-weeds wonkery” in the past, I wrote about the most recent changes to Carnegie classifications earlier this year.

But for most people outside institutional research offices, the updated Carnegie classifications first got noticed during this fall’s college rankings season. Both the Washington Monthly rankings that I compile and the U.S. News rankings that I am frequently asked to comment on rely on Carnegie classifications to define the group of national universities. We both use the Carnegie doctoral/research university category for this purpose, placing master’s institutions in a master’s university category (us) or the regional universities category (U.S. News). Because the number of Carnegie research universities spiked from 334 in the 2015 classifications to 423 in the most recent 2018 classifications, a bunch of new universities entered the national rankings this year.

To be more exact, 92 universities appeared in Washington Monthly’s national university rankings for the first time this year, with nearly all of them coming out of last year’s master’s rankings. The full dataset of these colleges and their positions in both the US News and Washington Monthly rankings can be downloaded here, but I will highlight a few colleges that cracked the top 100 in either ranking below:

Santa Clara University: #54 in US News, #137 in Washington Monthly

Loyola Marymount University: #64 in US News, #258 in Washington Monthly

Gonzaga University: #79 in US News, #211 in Washington Monthly

Elon University: #84 in US News, #282 in Washington Monthly

Rutgers University-Camden: #166 in US News, #57 in Washington Monthly

Towson University: #197 in US News, #59 in Washington Monthly

Mary Baldwin University: #272 in US News, #35 in Washington Monthly

The appearance of these new colleges in the national university rankings means that other colleges got squeezed down the list. Given the priority that many colleges and their boards place on the US News rankings, it’s a tough day on some campuses. Meanwhile, judging by the press releases, the new top-100 national universities are probably having a good time right now.

Rankings, Rankings, and More Rankings!

We’re finally reaching the end of the college rankings season for 2014. Money magazine started off the season with its rankings of 665 four-year colleges based on “educational quality, affordability, and alumni earnings.” (I generally like these rankings, in spite of the inherent limitations of using Rate My Professor scores and Payscale data in lieu of more complete information.) I jumped into the fray late in August with my friends at Washington Monthly for our annual college guide and rankings. This was closely followed by a truly bizarre list from the Daily Caller of “The 52 Best Colleges In America PERIOD When You Consider Absolutely Everything That Matters.”

But like any good infomercial, there’s more! Last night, the New York Times released its set of rankings focusing on how elite colleges are serving students from lower-income families. They examined the roughly 100 colleges with a four-year graduation rate of 75% or higher, only three of which (University of North Carolina-Chapel Hill, University of Virginia, and the College of William and Mary) are public. By examining the percentage of students receiving Pell Grants over the past three years and the net price of attendance (the total sticker price less all grant aid) for 2012-13, they created a “College Access Index” that measures how many standard deviations from the mean each college sits.
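For readers who want to see the arithmetic, here is a minimal sketch of what a standard-deviation-based index like this could look like. The column names, the equal weighting of the two measures, and the sign convention are my assumptions for illustration; the Times has not published code for its formula, so this shows the general idea rather than their exact calculation.

```python
# A minimal sketch of a standardized access index, not the NYT's actual formula.
# Assumes a pandas DataFrame with hypothetical columns:
#   pct_pell  - share of freshmen receiving Pell Grants, averaged over three years
#   net_price - average net price of attendance for 2012-13, in dollars
import pandas as pd

def college_access_index(df: pd.DataFrame) -> pd.Series:
    # How many standard deviations from the mean is each college's Pell share?
    pell_z = (df["pct_pell"] - df["pct_pell"].mean()) / df["pct_pell"].std()
    # A lower net price means better access, so flip the sign of its z-score.
    price_z = -(df["net_price"] - df["net_price"].mean()) / df["net_price"].std()
    # Average the two standardized measures; higher values indicate more access.
    return (pell_z + price_z) / 2
```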

My first reaction upon reading the list is that it seems a lot like what we introduced in Washington Monthly’s College Guide this year—a list of “Affordable Elite” colleges. We looked at the 224 most selective colleges (including many public universities) and ranked them using graduation rate, graduation rate performance (are they performing as well as we would expect given the students they enroll?), and student loan default rates in addition to percent Pell and net price. Four University of California colleges were in our top ten, with the NYT’s top college (Vassar) coming in fifth on our list.

I’m glad to see the New York Times focusing on economic diversity in their list, but it would be nice to look at a slightly broader swath of colleges that serve more than a handful of lower-income students. As The Chronicle of Higher Education notes, the Big Ten Conference enrolls more Pell recipients than all of the colleges ranked by the NYT. The index’s focus on the net price paid by families making between $30,000 and $48,000 per year is also a concern at these institutions because of small sample sizes. In 2011-12 (the most recent year of publicly available data), Vassar enrolled 669 first-year students, of whom only 67 were in the $30,000-$48,000 income bracket.

The U.S. News & World Report college rankings also came out this morning, and not much changed from last year. Princeton, which is currently fighting a lawsuit challenging whether the entire university should be considered a nonprofit enterprise, is the top national university on the list, while Williams College in Massachusetts is the top liberal arts college. Nick Anderson at the Washington Post has put together a nice table showing changes in rankings over five years; most changes wouldn’t register as being statistically significant. Northeastern University, which has risen into the top 50 in recent years, is an exception. However, as this great piece in Boston Magazine explains, Northeastern’s only focus is to rise in the U.S. News rankings. (They’re near the bottom of the Washington Monthly rankings, in part because they’re really expensive.)

Going forward, the biggest set of rankings for the rest of the fall will be the new college football rankings—as the Bowl Championship Series rankings have been replaced by a 13-person committee. (And no, Bob Morse from U.S. News is not a member, although Condoleezza Rice is.) I like Gregg Easterbrook’s idea at ESPN about including academic performance as a component in college football rankings. That might be worth considering as a tiebreaker if the playoff committee gets deadlocked using on-field performance alone. They could also use the Washington Monthly rankings, but Minnesota will win a Rose Bowl before that happens.

[ADDENDUM: Let’s also not forget about the federal government’s effort to rate (not rank) colleges through the Postsecondary Institution Ratings System (PIRS). That is supposed to come out this fall, as well.]

Free the Pell Graduation Data!

Today is an exciting day in my little corner of academia, as the end of the partial government shutdown means that federal education datasets are once again available for researchers to use. But the most exciting data to come out today are from Bob Morse, rankings guru for U.S. News and World Report. He has collected graduation rates for Pell Grant recipients, long an unknown for the majority of colleges. Despite the nearly $35 billion per year we spend on the Pell program, we have no idea what the national graduation rate is for Pell recipients. (Richard Vedder, a higher education economist at Ohio University, has mentioned a ballpark estimate of 30%-40% in many public appearances, but he notes that is just a guess.)

Morse notes in his blog post that colleges have been required to collect and disclose graduation rates for Pell recipients since the 2008 renewal of the Higher Education Act. I’ve heard rumors of this for years, but these data have not yet made their way into IPEDS. I have absolutely no problems with him using the data he collects in the proprietary U.S. News rankings, nor do I object to him holding the data close—after all, U.S. News did spend time and money collecting it.

However, given that the federal government requires that Pell graduation rates be collected, the Department of Education should collect this data and make it freely and publicly available as soon as possible. This would also be a good place for foundations to step in and help collect this data in the meantime, as it is certainly a potential metric for the President’s proposed college ratings.

Update: An earlier version of this post stated that the Pell graduation data are a part of the Common Data Set. Bob Morse tweeted me to note that they are not a part of that set and are collected by U.S. News. My apologies for the initial error! He also agreed that NCES should collect the data, which only underscores the importance of this collection.

Burning Money on the Quad? Why Rankings May Increase College Costs

Regardless of whether President Obama’s proposed rating system for colleges based on affordability and performance becomes reality (I expect ratings to appear in 2015, but not have a great deal of meaning), his announcement has affected the higher education community. My article listing “bang for the buck” colleges in Washington Monthly ran the same day he announced his plan, a few days ahead of our initial timeline. We were well-positioned with respect to the President’s plan, which led to much more media attention than we would have expected.

A few weeks after the President’s media blitz, U.S. News & World Report unveiled its annual rankings to the great interest of many students, their families, and higher education professionals, as well as to the typical criticism of its methodology. But the magazine also faced a new set of critiques based on its perceived focus on prestige and selectivity instead of affordability and social mobility. Bob Morse, U.S. News’s methodologist, answered some of those critiques in a recent blog post. Most of what Morse said isn’t terribly surprising, especially his point that U.S. News has much different goals than the President does. He also hopes to take advantage of any additional data the federal government collects for its ratings, and I certainly share that interest. However, I strongly disagree with one particular part of his post.

When asked whether U.S. News rewards colleges for raising costs and spending more money, Morse said no. He reminded readers that the methodology only counts spending on the broadly defined category of educational expenditures, implying that additional spending on instruction, student services, research, and academic support always benefits students. (Spending on items such as recreation, housing, and food service does not count.)

I contend that rewarding colleges for spending more in the broad area of educational expenditures is definitely a way to increase the cost of college, particularly since this category makes up 10% of the rankings. Morse and the U.S. News team want their rankings to reflect academic quality, which can be enhanced by additional spending—I think this is the point they are trying to make. But the critique is mechanically true: more spending on “good” expenditures still raises the cost of college. Additionally, this additional spending need not be on factors that benefit undergraduate students and may not be cost-effective. I discuss both points below.

1. Additional spending on “educational expenditures” may not benefit undergraduate students. A good example of this is spending on research, which runs in the tens or even hundreds of millions of dollars per year at many larger universities. Raising tuition to pay for research would increase educational expenditures—and hence an institution’s spot in the U.S. News rankings—but primarily would benefit faculty, graduate students, and postdoctoral scholars. This sort of spending may very well benefit the public through increased research productivity, but it is very unlikely to benefit first-year and second-year undergraduates.

[Lest this be seen solely as a critique of the U.S. News rankings, the Washington Monthly rankings (for which I’m the methodologist) can also be criticized for potentially contributing to the increase in college costs. Our rankings also reward colleges for research expenditures, so the same critiques apply.]

2. Additional spending may fail a cost-effectiveness test. As I previously noted, any spending in the broad area of “educational expenditures” counts as a positive in the rankings. But there is no requirement that the money be used in an efficient way, or even an effective one. I am reminded of a quote by John Duffy, formerly on the faculty of George Washington University’s law school. He famously said in a 2011 New York Times article: “I once joked with my dean that there is a certain amount of money that we could drag into the middle of the school’s quadrangle and burn, and when the flames died down, we’d be a Top 10 school as long as the point of the bonfire was to teach our students.” On a more serious note, additional spending could be used for legitimate programs that fail to move the needle on student achievement, perhaps due to diminishing returns.

I have a great deal of respect for Bob Morse and the U.S. News team, but they are incorrect to claim that their rankings do not have the potential to increase the cost of college. I urge them to reconsider that statement and instead focus on why additional spending for primarily educational purposes could benefit students.

Comparing the US News and Washington Monthly Rankings

In yesterday’s post, I discussed the newly released 2014 college rankings from U.S. News & World Report and how they changed from last year. In spite of some changes in methodology that were billed as “significant,” the R-squared value when comparing this year’s rankings with last year’s rankings among ranked national universities and liberal arts colleges was about 0.98. That means that 98% of the variation in this year’s rankings can be explained by last year’s rankings—a nearly perfect prediction.
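For anyone curious how that number is produced, here is a minimal sketch of the year-over-year comparison, assuming a DataFrame with hypothetical rank columns for each year; it is nothing more than the R-squared from a simple linear regression of this year’s rank on last year’s.

```python
# Minimal sketch: R-squared between two years of ranks for colleges ranked in
# both years. The column names are placeholders, not from any real dataset.
import pandas as pd
from scipy import stats

def rank_persistence(df: pd.DataFrame) -> float:
    both = df.dropna(subset=["rank_last_year", "rank_this_year"])
    result = stats.linregress(both["rank_last_year"], both["rank_this_year"])
    # An R-squared near 0.98 means last year's ranks explain about 98% of the
    # variation in this year's ranks.
    return result.rvalue ** 2
```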

In today’s post, I compare the results of the U.S. News rankings to those from the Washington Monthly rankings for national universities and liberal arts colleges ranked by both sources. The Washington Monthly rankings (which I compiled as the consulting methodologist) are based on three criteria: social mobility, research, and service. These are not the particular goals of the U.S. News rankings. Yet it could still be the case that colleges that recruit high-quality students, have lots of resources, and have a great reputation (the main factors in the U.S. News rankings) do a good job recruiting students from low-income families, produce outstanding research, and graduate servant-leaders.

The results of my comparisons show large differences between the two sets of rankings, particularly at liberal arts colleges. The R-squared value at national universities is 0.34, but only 0.17 at liberal arts colleges, as shown below:

[Scatter plots: Washington Monthly rankings vs. U.S. News rankings for national universities and for liberal arts colleges]

It is worth highlighting some of the colleges that are high on both rankings. Harvard, Stanford, Swarthmore, Pomona, and Carleton all rank in the top ten in both magazines, showing that it is possible to be both highly selective and serve the public in an admirable way. (Of course, we should expect that to be the case given the size of their endowments and their favorable tax treatment!) However, Middlebury and Claremont McKenna check in around 100th in the Washington Monthly rankings in spite of a top-ten U.S. News ranking. These well-endowed institutions don’t seem to have the same commitment to the public good as some of their highly selective peers.

On the other hand, colleges ranked lower by U.S. News do well in the Washington Monthly rankings. Some examples include the University of California-Riverside (2nd in WM, 112th in U.S. News), Berea College (3rd in WM, 76th in U.S. News), and the New College of Florida (8th in WM, 89th in U.S. News). If nothing else, the high placements in the Washington Monthly rankings give these institutions a chance to toot their own horn and highlight their successes.

I fully realize that only a small percentage of prospective students will be interested in the Washington Monthly rankings compared to those from U.S. News. But it is worth highlighting the differences across college rankings so students and policymakers can decide which institutions are better for them given their own demands and preferences.

Breaking Down the 2014 US News Rankings

Today is a red-letter day for many people in the higher education community—the release of the annual college rankings from U.S. News and World Report. While many people love to hate the rankings for an array of reasons (from the perceived focus on prestige to a general dislike of accountability in some sectors), their influence on colleges and universities is undeniable. Colleges love to put out press releases touting their place in the rankings even while decrying their general premise.

I’m no stranger to the college ranking business, having been the consulting methodologist for Washington Monthly’s annual college rankings for the past two years. (All opinions in this piece, of course, are my own.) While Washington Monthly’s rankings rank colleges based on social mobility, service, and research performance, U.S. News ranks colleges primarily based on “academic quality,” which consists of inputs such as financial resources and standardized test scores as well as peer assessments for certain types of colleges.

I’m not necessarily in the U.S. News-bashing camp here, as they provide a useful service for people who are interested in prestige-based rankings (which I think is most people who want to buy college guides). But the public policy discussion, driven in part by the President’s proposal to create a college rating system, has been moving toward an outcome-based focus. The Washington Monthly rankings do capture some elements of this focus, as can be seen in my recent appearance on MSNBC and an outstanding panel discussion hosted by New America and Washington Monthly last week in Washington.

Perhaps in response to criticism or the apparent direction of public policy, Robert Morse (the well-known and well-respected methodologist for U.S. News) announced some changes last week in the magazine’s methodology for this year’s rankings. The changes place slightly less weight on peer assessment and selectivity, while putting slightly more weight on graduation rate performance and graduation/retention rates. Yet Morse bills the changes as meaningful, noting that “many schools’ ranks will change in the 2014 [this year’s] edition of the Best Colleges rankings compared with the 2013 edition.”

But the rankings have tended to be quite stable from year to year (here are the 2014 rankings). The top six research universities in the first U.S. News survey (in 1983—based on peer assessments by college presidents) were Stanford, Harvard, Yale, Princeton, Berkeley, and Chicago, with Amherst, Swarthmore, Williams, Carleton, and Oberlin being the top five liberal arts colleges. All of the research universities except Berkeley are in the top six this year and all of the liberal arts colleges except Oberlin are in the top eight.

In this post, I’ve examined all national universities (just over 200) and liberal arts colleges (about 180) ranked by U.S. News in this year’s and last year’s rankings. Note that this is only a portion of qualifying colleges, but the magazine doesn’t rank lower-tier institutions. The two graphs below show the changes in the rankings for national universities and liberal arts colleges between the two years.

[Scatter plots: last year’s vs. this year’s U.S. News rankings for national universities and for liberal arts colleges]

The first thing that jumps out at me is the high R-squared, around 0.98 for both classifications. What this essentially means is that 98% of the variation in this year’s rankings can be explained by last year’s rankings—a remarkable amount of persistence even when considering the slow-moving nature of colleges. The graphs show more movement among liberal arts colleges, which are much smaller and can be affected by random noise much more than large research universities.

The biggest blip in the national university rankings is South Carolina State, which went from 147th last year to unranked (no higher than 202nd) this year. Other universities which fell more than 20 spots are Howard University, the University of Missouri-Kansas City, and Rutgers University-Newark, all urban and/or minority-serving institutions. Could the change in formulas have hurt these types of institutions?

In tomorrow’s post, I’ll compare the U.S. News rankings to the Washington Monthly rankings for this same sample of institutions. Stay tuned!

To Be a “Best Value,” Charge Higher Tuition

In addition to the better-known college rankings from U.S. News & World Report, the magazine also publishes a listing of “Best Value” colleges. The listing seems helpful enough, with the goal of highlighting colleges which are a good value for students and their families. However, this list rewards colleges for charging higher tuition and being more selective, factors that are not necessarily associated with true educational effectiveness.

U.S. News uses dubious methodology to calculate its Best Value list (a more detailed explanation can be found here). Before I get into the ranking components, there are two serious flaws with the methodology. First, colleges are only eligible to be on the list if they are approximately in the top half of the overall rankings. Since we already know that the rankings better measure prestige than educational effectiveness, the Best Value list must be taken with a shaker of salt right away. Additionally, for public colleges, U.S. News uses the cost of attendance for out-of-state students, despite the fact that the vast majority of students come from in-state. It is true that relatively more students at elite public universities (like the University of Wisconsin-Madison) come from out of state, but even there over 70% of freshmen come from Wisconsin or Minnesota. This decision inflates the cost of attending public institutions and thus pushes them farther down the list.

The rankings components are as follows (a rough sketch of how they combine appears after the list):

(1) “Ratio of quality to price”—60%. This is the score on the U.S. News ranking (their measure of quality) divided by the net price of attendance, which is the cost of attendance less need-based financial aid. It is similar to what I did in the Washington Monthly rankings to calculate the cost-adjusted graduation rate measure. This measure has some merits, but suffers from the flaws of a prestige-based numerator and a net price of attendance that is biased toward private institutions.

(2) The percentage of undergraduates receiving need-based grants—25%. This measure rewards colleges with lots of Pell Grant recipients (which is just fine) as well as colleges with large endowments or high posted tuition that can offer lots of grants (which isn’t related to the actual price a student pays). If every student with a household income under one million dollars received a $5 need-based grant, a college would look good on this measure. In other words, this measure can be gamed.

(3) Average discount—15%. This is the average amount of need-based grants divided by the net price of attendance. This certainly rewards colleges with high posted tuition and lots of financial aid.
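Putting the pieces together, here is a rough sketch of how a composite like this could be assembled. The weights come from the list above, but the column names, the 0-100 rescaling step, and all other details are my own assumptions rather than U.S. News’s actual computation.

```python
# Rough sketch of a "Best Value" composite using the weights described above.
# All column names are hypothetical placeholders; this is not U.S. News's code.
import pandas as pd

def best_value_score(df: pd.DataFrame) -> pd.Series:
    # (1) Ratio of quality to price: overall ranking score / net price (60%)
    quality_to_price = df["usnews_score"] / df["net_price"]
    # (2) Share of undergraduates receiving need-based grants (25%)
    pct_need_grants = df["pct_need_grants"]
    # (3) Average discount: average need-based grant / net price (15%)
    avg_discount = df["avg_need_grant"] / df["net_price"]

    # Rescale each component to 0-100 so the weights operate on a common scale.
    def rescale(s: pd.Series) -> pd.Series:
        return 100 * (s - s.min()) / (s.max() - s.min())

    return (0.60 * rescale(quality_to_price)
            + 0.25 * rescale(pct_need_grants)
            + 0.15 * rescale(avg_discount))
```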

Once again, I don’t focus on the actual rankings, but I will say that the top of the list is dominated by elite private colleges with massive endowments. Daniel Luzer of Washington Monthly (with whom I’ve had the pleasure of working over the past six months) has a good take on the top of the Best Value list. He notes that although these well-endowed institutions do provide a lot of financial aid to needy students, they don’t educate too many of these students.

I am glad that the 800-pound gorilla in the college rankings game is thinking about whether a college is a good value for students, but its methodological choices mean that colleges that are actually educating students in a cost-effective manner are not being rewarded.

Measuring Prestige: Analyzing the U.S. News & World Report College Rankings

The 2013 U.S. News college rankings were released today and are certain to be the topic of discussion for much of the higher education community. Many people grumble about the rankings, but they are hard to dismiss given their profound impact on how colleges and universities operate. It is not uncommon for colleges to set goals to improve their ranking, and data falsification is sadly a real occurrence. As someone who does research in the fields of college rankings and accountability policy, I am glad to see the rankings come out every fall. However, I urge readers to take these rankings as what they are intended to be: a measure of prestige rather than college effectiveness.

The measures used to calculate the rankings are generally the same as last year and focus on six or seven factors, depending on the type of university:

–Academic reputation (from peers and/or high school counselors)

–Selectivity (admit rate, high school rank, and ACT/SAT scores)

–Faculty resources (salary, terminal degree status, full-time status, and class size)

–Graduation and retention rates

–Financial resources per student

–Alumni giving rates

–Graduation rate performance (only for research universities and liberal arts colleges)

Most of these measures can be directly improved by increasing tuition and/or enrolling only the most academically prepared students. The only measure that is truly independent of prestige is the graduation rate performance measure, in which the actual graduation rate is regressed on student characteristics and spending to generate a predicted graduation rate. While U.S. News doesn’t release its methodology for calculating its predicted graduation rate measure, the results are likely similar to what I did in the Washington Monthly rankings.
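To make that logic concrete, here is a minimal sketch of a graduation rate performance measure built from an ordinary least squares regression. The predictor variables are illustrative placeholders of my own choosing; this shows the general approach, not the model U.S. News (or Washington Monthly) actually estimates.

```python
# Minimal sketch of graduation rate performance: actual minus predicted, where
# the prediction comes from regressing graduation rates on student characteristics
# and spending. Predictor names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

def grad_rate_performance(df: pd.DataFrame) -> pd.Series:
    predictors = df[["avg_sat", "pct_pell", "spending_per_student"]]
    X = sm.add_constant(predictors)
    model = sm.OLS(df["grad_rate"], X).fit()
    predicted = model.predict(X)
    # Positive values: the college graduates more students than predicted.
    return df["grad_rate"] - predicted
```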

I am pleased to see that U.S. News is using an average of multiple years of data for some of its measures. I do this in my own work on estimating college effectiveness, although this was not a part of Washington Monthly’s methodology this year (it may be in the future, however). The use of multiple years of data does reduce the effects of random variation (and helps to smooth the rankings), but I am concerned that U.S. News uses only the submitted years of data if not all years are submitted. This gives colleges an incentive to not report a year of bad data on measures such as alumni giving rates.

Overall, these rankings are virtually unchanged from last year, but watch colleges crow about moving up two or three spots when their score hardly changed. These rankings are big business in the prestige market; hopefully, students who care more about educational effectiveness consider other measures of college quality in addition to these rankings.

I’ll put up a follow-up post in the next few days discussing the so-called “Best Value” college list from U.S. News. As a preview, I don’t hold it in high regard.

Disclaimer: I am the consulting methodologist for the 2012 Washington Monthly college rankings. This post reflects only my thoughts and was not subject to review by any other individual or organization.