Beware Dubious College Rankings

Just like the leaves starting to change colors (in spite of the miserable 93-degree heat outside my New Jersey office window) and students returning to school are clear signs of fall, another indicator of the change in seasons is the proliferation of college rankings that get released in late August and early September. The Washington Monthly college rankings that I compile were released the week before Labor Day, and MONEY and The Wall Street Journal have also released their rankings recently. U.S. News & World Report caps off rankings season by unveiling their undergraduate rankings later this month.

People quibble with the methodology of these rankings all the time (I get e-mails by the dozens about the Washington Monthly rankings, and we’re not the 800-pound gorilla of the industry). Yet these rankings are all based on data that can be defended to at least some extent, and the methodologies are generally transparent. Even rankings of party schools, such as this Princeton Review list, have a methodology section that does not seem patently absurd.

But since America loves college rankings—and colleges love touting rankings they do well in and grumbling about the rest of them—a number of dubious college rankings have developed over the years. I was forwarded a press release about one particular set of rankings that immediately set my BS detectors into overdrive. This press release was about a ranking of the top 20 fastest online doctoral programs, and here is a link to the rankings that will not boost their search engine results.

First, let’s take a walk through the methods section. There are three red flags that immediately stand out:

(1) The writing resembles a “word salad” and clearly was never edited by anyone. Reputable rankings sites use copy editors to help methodologists communicate with the public.

(2) College Navigator is a good data source for undergraduates, but does not contain any information on graduate programs (which they are trying to rank) other than the number of graduates.

(3) Reputable rankings will publish their full methodology, even if certain data elements are proprietary and cannot be shared. And trust me—nobody wants to duplicate this set of rankings!

As an example of what these rankings look like, here is a screenshot of how Seton Hall’s online EdD in higher education is presented. Again, let’s walk through the issues.

(1) There are typos galore in their description of the university. This is not a good sign.

(2) Acceptance/retention rate data are for undergraduate students, not for a doctoral program. The only way they could get these data is by contacting programs directly, which costs money and runs into logistical problems.

(3) Seton Hall is accredited by Middle States, not the Higher Learning Commission. (Thanks to Sam Michalowski for bringing this to my attention via Twitter.)

(4) In a slightly important point, Seton Hall does not offer an online EdD in higher education. Given that I teach in the higher education graduate programs and am featured on the webpage for the in-person EdD program, I’m pretty confident in this statement.

For any higher education professionals who are reading this post, I have a few recommendations. First, be skeptical of any rankings that come from sources that you are not familiar with—and triple that skepticism for any program-level rankings. (Ranking programs is generally much harder due to a lack of available data.) Second, look through the methodology with the help of institutional research staff members and/or higher education faculty members. Does it pass the smell test? And finally, keep in mind that many rankings websites are only profitable because colleges highlight their rankings, driving clicks back to these sites. If colleges were more cautious about posting dubious rankings, it would shut down some of these websites while also sparing colleges the embarrassment of being caught falling for what is essentially a ruse.

Comments on the Brookings Value-Added Rankings

Jonathan Rothwell and Siddharth Kulkarni of the Metropolitan Policy Program at Brookings made a big splash today with the release of a set of college “value-added” rankings (link to full study and Inside Higher Ed summary) focused primarily on labor market outcomes. Value-added measures, which adjust for student and institutional characteristics to get a better handle on a college’s contribution to student outcomes, are becoming increasingly common in higher education. (I’ve written about college value-added in the past, which led to me taking the reins as Washington Monthly’s rankings methodologist.) Pretty much all of the major college rankings at this point include at least one value-added component, and this set of rankings actually shares some similarities with Money’s rankings. And the Brookings report does mention correlations with the U.S. News, Money, and Forbes rankings—but not Washington Monthly. (Sigh.)

The Brookings report uses five different outcome measures, which are then adjusted for available student characteristics and institutional characteristics such as the sector of the college and where it is located:

(1) Mid-career salary of alumni: This measures the median salary of full-time workers with a degree from a particular college and at least ten years of experience. The data come from PayScale and suffer from being self-reported by only a subset of graduates, but they likely still have value for two reasons. First, the authors do a careful job of trying to decompose any biases in the data—for example, correlating PayScale reported earnings with data from other sources. Second, even if there is an upward bias in the data, it should be similar across institutions. As I’ve written about before, I trust the order of colleges in PayScale data more than I trust the dollar values—which are likely inflated.

But there are still a few concerns with this measure. Some of them, such as limiting the sample to graduates (excluding dropouts) and dropping students with an advanced degree, are fairly well-known. The focus on salary also definitely rewards colleges with large engineering programs, as evidenced by those colleges’ dominance of the value-added list (while art schools look horrible). However, given that ACT and SAT math scores are the only academic preparation measures used, the bias favoring engineering schools may actually be smaller than if verbal/reading scores were also used. I would also have estimated models separately for two-year and four-year colleges instead of putting them in the same model with a dummy variable for sector, but that’s just my preference; a quick sketch of the two specifications follows.
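Here is a minimal sketch of what that distinction looks like in practice. This is my own illustration rather than the Brookings code, and the file and column names are hypothetical placeholders:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical institution-level file: one row per college.
df = pd.read_csv("college_outcomes.csv")

# Pooled specification: one set of coefficients, with sector entering as a dummy.
pooled = smf.ols(
    "mid_career_salary ~ act_math_25th + pct_pell + C(sector)", data=df
).fit()

# Separate specifications: coefficients can differ between two-year and
# four-year colleges instead of being forced to be identical.
by_sector = {
    sector: smf.ols("mid_career_salary ~ act_math_25th + pct_pell", data=grp).fit()
    for sector, grp in df.groupby("sector")
}

# Value-added is the residual: the actual outcome minus the predicted outcome.
df["value_added_pooled"] = pooled.resid
```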

(2) Student loan repayment rate: This is the complement of the average three-year student loan cohort default rate over the last three years (so a 10% default rate is framed as a 90% repayment rate). This measure is pretty straightforward, although I do have to question the value-added estimates for colleges with very high repayment rates. Value-added estimates are difficult to conceptualize for colleges with a high probability of success, as there is typically little room for improvement. But here, the highest predicted repayment rate for four-year colleges is 96.8%, while several dozen colleges have actual repayment rates in excess of 96.8%. It appears that linear regressions were used, while some type of robust generalized linear model should also have been considered; a quick sketch of that alternative follows. (In the Washington Monthly rankings, I use simple linear regressions for graduation rate performance, but very few colleges are so close to the ceiling of 100%.)
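Here is a minimal sketch of the ceiling problem and one alternative, a fractional-logit GLM, whose predictions stay below 100% by construction. Again, this is my own illustration with hypothetical file and column names:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("college_outcomes.csv")   # hypothetical input file
df["repay_rate"] = 1 - df["default_rate"]  # a 10% default rate becomes a 90% repayment rate

# OLS treats the 0-1 outcome as unbounded, so predictions can exceed 100%.
ols = smf.ols("repay_rate ~ act_math_25th + pct_pell", data=df).fit()

# A fractional-logit GLM keeps predicted repayment rates strictly between 0 and 1.
frac_logit = smf.glm(
    "repay_rate ~ act_math_25th + pct_pell",
    data=df,
    family=sm.families.Binomial(),
).fit()

print("Max OLS prediction:", ols.fittedvalues.max())         # can exceed 1.0
print("Max GLM prediction:", frac_logit.fittedvalues.max())  # always below 1.0
```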

(3) Occupational earnings potential: This is a pretty nifty measure that uses LinkedIn data to get a handle on which occupations a college’s graduates pursue during their careers. This mix of occupations is then tied to Bureau of Labor Statistics data to estimate the average salary of a college’s graduates, and advanced degree holders are also included. The value-added measure attempts to control for student and institutional characteristics, although it doesn’t control for the preferences of students toward certain majors when entering college.

I’m excited by the potential to use LinkedIn data (warts and all) to look at students’ eventual outcomes. However, it should be noted that LinkedIn is more heavily used in some fields, as might be expected (business and engineering), but also in others that might not be expected (communication and cultural studies). The authors adjust for these differences in representation and are very transparent about it in the appendix. This appendix is definitely on the technical side, but I welcome their transparency.

They also report five different quality measures which are not included in the value-added estimate: ‘curriculum value’ (the value of the degrees offered by the college), the value of skills alumni list on LinkedIn, the percentage of graduates deemed STEM-ready, completion rates within 200% of normal time (8 years for a 4-year college, or 4 years for a 2-year college), and average institutional grant aid. These measures are not input-adjusted, but generally reflect what people think of as quality. However, average institutional grant aid is a lousy measure to include as it rewards colleges with a high-tuition, high-aid model over colleges with a low-tuition, low-aid model—even if students pay the exact same price.

In conclusion, the Brookings report tells readers some things we already know (engineering programs are where to go to make money), but provides a good—albeit partial—look at outcomes across an unusually broad swath of American higher education. I would advise readers to focus on comparing colleges with similar missions and goals, given the importance of occupation in determining earnings. I would also be more hesitant to use the metrics for very small colleges, where all of these measures can be influenced by a relatively small number of people. But the transparency of the methodology and use of new data sources make these value-added rankings a valuable contribution to the public discourse.

Comments on the CollegeNET-PayScale Social Mobility Index

The last two years have seen a great deal of attention being placed on the social mobility function that many people expect colleges to perform. Are colleges giving students from lower-income families the tools and skills they need in order to do well (and good) in society? The Washington Monthly college rankings (which I calculate) were the first entrant in this field nearly a decade ago, and we also put out lists of the Best Bang for the Buck and Affordable Elite colleges in this year’s issue. The New York Times put out a social mobility ranking in September, essentially a more elite version of our Affordable Elite list that covered only about 100 colleges with a four-year graduation rate of at least 75%.

The newest entity in the cottage industry of social mobility rankings comes from PayScale and CollegeNET, an information technology and scholarship provider. Their Social Mobility Index (SMI) includes five components for 539 four-year colleges, with the following weights:

Tuition (lower is better): 126 points

Economic background (percent of students with family incomes below $48,000): 125 points

Graduation rate (apparently six years): 66 points

Early career salary (from PayScale data): 65 points

Endowment (lower is better): 30 points

The top five colleges in the rankings are Montana Tech, Rowan, Florida A&M, Cal Poly-Pomona, and Cal State-Northridge, while the bottom five are Oberlin, Colby, Berklee College of Music, Washington University, and the Culinary Institute of America.
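To make the weighting scheme concrete, here is a toy sketch of how a points-based composite like the SMI could be assembled from the published weights. The actual SMI scaling formulas are not public, so the min-max normalization and the column names here are purely my own assumptions:

```python
import pandas as pd

# (points, higher is better?) for the five published SMI components
WEIGHTS = {
    "tuition": (126, False),
    "pct_low_income": (125, True),
    "grad_rate": (66, True),
    "early_career_salary": (65, True),
    "endowment": (30, False),
}

def composite_score(df: pd.DataFrame) -> pd.Series:
    """Min-max scale each component to 0-1, flip the lower-is-better ones,
    and sum the weighted pieces (an assumption, not the SMI's actual formula)."""
    score = pd.Series(0.0, index=df.index)
    for col, (points, higher_is_better) in WEIGHTS.items():
        scaled = (df[col] - df[col].min()) / (df[col].max() - df[col].min())
        if not higher_is_better:
            scaled = 1 - scaled
        score += points * scaled
    return score

df = pd.read_csv("smi_inputs.csv")   # hypothetical input file
df["smi_like_score"] = composite_score(df)
print(df.sort_values("smi_like_score", ascending=False).head())
```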

Many people will critique the use of PayScale’s data in rankings, and I would partially agree—although they are the best data available nationwide until the ban on unit record data is eliminated. My two main critiques of these rankings are the following:

Tuition isn’t the best measure of college affordability. Judging by the numbers used in the rankings, it’s clear that the SMI uses posted tuition and fees for affordability. This doesn’t necessarily reflect what the typical lower-income student would actually pay, for two reasons: it excludes room, board, and other necessary expenses, and it ignores any grant aid. The net price of attendance (the total cost of attendance less all grant aid) is a far better measure of what students from lower-income families may pay, even though the SMI measure does capture sticker shock.

The weights are justified, but still arbitrary. The SMI methodology includes the following howler of a sentence:

“Unlike the popular periodicals, we did not arbitrarily assign a percentage weight to the five variables in the SMI formula and add those values together to obtain a score.”

Not to put my philosopher hat on too tightly, but any weights given in college rankings are arbitrarily assigned. A good set of rankings is fairly insensitive to changes in the weighting methodology, and the SMI methodology does nothing to show whether that is the case here; a sketch of that kind of sensitivity check follows.
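Here is the sort of check I have in mind, as a minimal sketch: jitter each weight by up to 20 percent, recompute the composite, and see how much the ordering moves. A Spearman rank correlation near 1.0 would suggest the ranking is robust to the weighting choice. The normalization and column names are my own assumptions rather than anything published by the SMI:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.read_csv("smi_inputs.csv")   # hypothetical input file

cols = ["tuition", "pct_low_income", "grad_rate", "early_career_salary", "endowment"]
weights = np.array([126, 125, 66, 65, 30], dtype=float)
higher_is_better = [False, True, True, True, False]

# Min-max scale each component, flipping the lower-is-better ones.
scaled = (df[cols] - df[cols].min()) / (df[cols].max() - df[cols].min())
for col, better in zip(cols, higher_is_better):
    if not better:
        scaled[col] = 1 - scaled[col]

base_rank = (scaled @ weights).rank(ascending=False)

# Jitter each weight by up to 20% and see how much the ranking moves.
spearmans = []
for _ in range(200):
    jittered = weights * rng.uniform(0.8, 1.2, size=len(weights))
    new_rank = (scaled @ jittered).rank(ascending=False)
    spearmans.append(base_rank.corr(new_rank, method="spearman"))

print("Median rank correlation under weight jitter:", float(np.median(spearmans)))
```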

I’m pleased to welcome another college rankings website to this increasingly fascinating mix of providers—and I remain curious about the extent to which these rankings (along with many others) will be used as either an accountability or a consumer information tool.

Rankings, Rankings, and More Rankings!

We’re finally reaching the end of the college rankings season for 2014. Money magazine started off the season with its rankings of 665 four-year colleges based on “educational quality, affordability, and alumni earnings.” (I generally like these rankings, in spite of the inherent limitations of using Rate My Professors scores and PayScale data in lieu of more complete information.) I jumped in the fray late in August with my friends at Washington Monthly for our annual college guide and rankings. This was closely followed by a truly bizarre list from the Daily Caller of “The 52 Best Colleges In America PERIOD When You Consider Absolutely Everything That Matters.”

But like any good infomercial, there’s more! Last night, the New York Times released its set of rankings focusing on how elite colleges are serving students from lower-income families. They examined the roughly 100 colleges with a four-year graduation rate of 75% or higher, only three of which (University of North Carolina-Chapel Hill, University of Virginia, and the College of William and Mary) are public. By examining the percentage of students receiving Pell Grants in the past three years and the net price of attendance (the total sticker price less all grant aid) for 2012-13, they created a “College Access Index” looking at how many standard deviations from the mean each college was.
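For readers curious about the mechanics, here is a rough sketch of a z-score index in that spirit: standardize each measure, flip the sign on net price (lower is better), and average. The Times’ exact construction may differ, and the file and column names are hypothetical:

```python
import pandas as pd

# Hypothetical file: the ~100 colleges with four-year graduation rates of 75%+.
df = pd.read_csv("elite_colleges.csv")

pell_z = (df["pct_pell"] - df["pct_pell"].mean()) / df["pct_pell"].std()
price_z = (df["net_price"] - df["net_price"].mean()) / df["net_price"].std()

# High Pell enrollment helps; a high net price hurts.
df["access_index"] = (pell_z - price_z) / 2
print(df.sort_values("access_index", ascending=False).head(10))
```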

My first reaction upon reading the list is that it seems a lot like what we introduced in Washington Monthly’s College Guide this year—a list of “Affordable Elite” colleges. We looked at the 224 most selective colleges (including many public universities) and ranked them using graduation rate, graduation rate performance (are they performing as well as we would expect given the students they enroll?), and student loan default rates in addition to percent Pell and net price. Four University of California colleges were in our top ten, with the NYT’s top college (Vassar) coming in fifth on our list.

I’m glad to see the New York Times focusing on economic diversity in their list, but it would be nice to look at a slightly broader swath of colleges that serve more than a handful of lower-income students. As The Chronicle of Higher Education notes, the Big Ten Conference enrolls more Pell recipients than all of the colleges ranked by the NYT. Focusing on the net price for families making between $30,000 and $48,000 per year is also a concern at these institutions due to small sample sizes. In 2011-12 (the most recent year of publicly available data), Vassar enrolled 669 first-year students, of whom 67 were in the $30,000-$48,000 income bracket.

The U.S. News & World Report college rankings also came out this morning, and not much changed from last year. Princeton, which is currently fighting a lawsuit challenging whether the entire university should be considered a nonprofit enterprise, is the top national university on the list, while Williams College in Massachusetts is the top liberal arts college. Nick Anderson at the Washington Post has put together a nice table showing changes in rankings over five years; most changes wouldn’t register as being statistically significant. Northeastern University, which has risen into the top 50 in recent years, is an exception. However, as this great piece in Boston Magazine explains, Northeastern’s only focus is to rise in the U.S. News rankings. (They’re near the bottom of the Washington Monthly rankings, in part because they’re really expensive.)

Going forward, the biggest set of rankings for the rest of the fall will be the new college football rankings—as the Bowl Championship Series rankings have been replaced by a 13-person committee. (And no, Bob Morse from U.S. News is not a member, although Condoleezza Rice is.) I like Gregg Easterbrook’s idea at ESPN about including academic performance as a component in college football rankings. That might be worth considering as a tiebreaker if the playoff committee gets deadlocked using on-field performance alone. They could also use the Washington Monthly rankings, but Minnesota has a better chance of winning the Rose Bowl than of that happening.

[ADDENDUM: Let’s also not forget about the federal government’s effort to rate (not rank) colleges through the Postsecondary Institution Ratings System (PIRS). That is supposed to come out this fall, as well.]

The Value of “Best Value” Lists

I can always tell when a piece about college rankings makes an appearance in the general media. College administrators see the piece and tend to panic while reaching out to their institutional research and/or enrollment management staffs. The question asked is typically the same: why don’t we look better in this set of college rankings? As the methodologist for Washington Monthly magazine’s rankings, I get a flurry of e-mails from these panicked analysts trying to get answers for their leaders—as well as from local journalists asking questions about their hometown institution.

The most recent article to generate a burst of questions to me was on the front page of Monday’s New York Times. It noted the rise in lists that look at colleges’ value to students instead of overall performance on a broader set of criteria. (A list of the top ten value colleges across numerous criteria can be found here.) While Washington Monthly’s bang-for-the-buck article from 2012 was not the first effort at looking at a value list (Princeton Review has that honor, to the best of my knowledge), we were the first to incorporate a cost-adjusted performance measure that accounts for student characteristics and the net price of attendance.

When I talk with institutional researchers or journalists, my answer is straightforward. To look better on a bang-for-the-buck list, colleges have to either increase their bang (higher graduation rates and lower default rates, for example) or lower their buck (with a lower net price of attendance). Prioritizing these measures does come with concerns (see Daniel Luzer’s Washington Monthly piece), but the good most likely outweighs the bad.

Moving forward, it will be interesting to see how these lists continue to develop, and whether they are influenced by the Obama Administration’s proposed college ratings. It’s an interesting time in the world of college rankings, ratings, and guides.

What Should Be in the President’s College Ratings?

President Obama’s August announcement that his administration would work to develop a college rating system by 2015 has been the topic of a great deal of discussion in the higher education community. While some prominent voices have spoken out against the ratings system (including my former dissertation advisor at Wisconsin, Sara Goldrick-Rab), the Administration appears to have redoubled its efforts to create a rating system during the next eighteen months. (Of course, that assumes the federal government’s partial shutdown is over by then!)

As the ratings system is being developed, Secretary Duncan and his staff must make a number of important decisions:

(1) Do they push for ratings to be tied to federal financial aid (requiring Congressional authorization), or should they just be made available to the public as one of many information sources?

(2) Should they be designed to highlight the highest-performing colleges, or should they call out the lowest-performing institutions?

(3) Should public, private nonprofit, and for-profit colleges be held to separate standards?

(4) Should community colleges be included in the ratings?

(5) Will outcome measures be adjusted for student characteristics (similar to the value-added models often used in K-12 education)?

After these decisions have been made, then the Department of Education can focus on selecting possible outcomes. Graduation rates and student loan default rates are likely to be a part of the college ratings, but what other measures could be considered—both now and in the future? An expanded version of gainful employment, which is currently used for vocationally-oriented programs, is certainly a possibility, as is some measure of earnings. These measures may be subject to additional legal challenges. Some measure of cost may also make its way into the ratings, rewarding colleges that operate in a more efficient manner.

I would like to hear your thoughts (in the comments section below) about whether these ratings are a good idea and what measures should be included. And when the Department of Education starts accepting comments on the ratings, likely sometime in 2014, I encourage you to submit your thoughts directly to them!

Burning Money on the Quad? Why Rankings May Increase College Costs

Regardless of whether President Obama’s proposed rating system for colleges based on affordability and performance becomes reality (I expect ratings to appear in 2015, but not to carry a great deal of meaning), his announcement has affected the higher education community. My article listing “bang for the buck” colleges in Washington Monthly ran the same day he announced his plan, a few days ahead of our initial timeline. We were well-positioned with respect to the President’s plan, which led to much more media attention than we would have expected.

A few weeks after the President’s media blitz, U.S. News & World Report unveiled their annual rankings to the great interest of many students, their families, and higher education professionals as well as to the typical criticism of their methodology. But they also faced a new set of critiques based on their perceived focus on prestige and selectivity instead of affordability and social mobility. Bob Morse, U.S. News’s methodologist, answered some of those critiques in a recent blog post. Most of what Morse said isn’t terribly surprising, especially his point that U.S. News has much different goals than the President does. He also hopes to take advantage of any additional data the federal government collects for its ratings, and I certainly share that interest. However, I strongly disagree with one particular part of his post.

When asked whether U.S. News rewards colleges for raising costs and spending more money, Morse said no. He reminded readers that the methodology only counts spending on the broadly defined category of educational expenditures, implying that additional spending on instruction, student services, research, and academic support always benefits students. (Spending on items such as recreation, housing, and food service does not count.)

I contend that rewarding colleges for spending more in the broad area of educational expenditures is definitely a way to increase the cost of college, particularly since this category makes up 10% of the rankings. Morse and the U.S. News team desire to have their rankings based on academic quality, which can be enhanced by additional spending—I think this is the point they are trying to make. But the critique is mechanically true, as more spending on “good” expenditures still would raise the cost of college. Moreover, this additional spending need not go toward factors that benefit undergraduate students and may not be cost-effective. I discuss both points below.

1. Additional spending on “educational expenditures” may not benefit undergraduate students. A good example of this is spending on research, which runs in the tens or even hundreds of millions of dollars per year at many larger universities. Raising tuition to pay for research would increase educational expenditures—and hence an institution’s spot in the U.S. News rankings—but primarily would benefit faculty, graduate students, and postdoctoral scholars. This sort of spending may very well benefit the public through increased research productivity, but it is very unlikely to benefit first-year and second-year undergraduates.

[Lest this be seen solely as a critique of the U.S. News rankings, the Washington Monthly rankings (for which I’m the methodologist) can also be criticized for potentially contributing to the increase in college costs. Our rankings also reward colleges for research expenditures, so the same critiques apply.]

2. Additional spending may fail a cost-effectiveness test. As I previously noted, any spending in the broad area of “educational expenditures” counts as a positive in the rankings. But there is no requirement that the money be used in an efficient way, or even an effective one. I am reminded of a quote by John Duffy, formerly on the faculty of George Washington University’s law school. He famously said in a 2011 New York Times article: “I once joked with my dean that there is a certain amount of money that we could drag into the middle of the school’s quadrangle and burn, and when the flames died down, we’d be a Top 10 school as long as the point of the bonfire was to teach our students.” On a more serious note, additional spending could be used for legitimate programs that fail to move the needle on student achievement, perhaps due to diminishing returns.

I have a great deal of respect for Bob Morse and the U.S. News team, but they are incorrect to claim that their rankings do not have the potential to increase the cost of college. I urge them to reconsider that statement, instead focusing on why the additional spending for primarily educational purposes could benefit students.

Comparing the US News and Washington Monthly Rankings

In yesterday’s post, I discussed the newly released 2014 college rankings from U.S. News & World Report and how they changed from last year. In spite of some changes in methodology that were billed as “significant,” the R-squared value when comparing this year’s rankings with last year’s rankings among ranked national universities and liberal arts colleges was about 0.98. That means that 98% of the variation in this year’s rankings can be explained by last year’s rankings—a nearly perfect prediction.

In today’s post, I compare the results of the U.S. News rankings to those from the Washington Monthly rankings for national universities and liberal arts colleges ranked by both sources. The Washington Monthly rankings (which I compiled as the consulting methodologist) are based on three criteria: social mobility, research, and service, which are not the particular goals of the U.S. News rankings. Yet it could still be the case that colleges that recruit high-quality students, have lots of resources, and have a great reputation (the main factors in the U.S. News rankings) do a good job recruiting students from low-income families, produce outstanding research, and graduate servant-leaders.

The results of my comparisons show large differences between the two sets of rankings, particularly at liberal arts colleges. The R-squared value at national universities is 0.34, but only 0.17 at liberal arts colleges, as shown below:

[Scatterplot: Washington Monthly vs. U.S. News ranks for national universities]

[Scatterplot: Washington Monthly vs. U.S. News ranks for liberal arts colleges]

It is worth highlighting some of the colleges that are high on both rankings. Harvard, Stanford, Swarthmore, Pomona, and Carleton all rank in the top ten in both magazines, showing that it is possible to be both highly selective and serve the public in an admirable way. (Of course, we should expect that to be the case given the size of their endowments and their favorable tax treatment!) However, Middlebury and Claremont McKenna check in around 100th in the Washington Monthly rankings in spite of a top-ten U.S. News ranking. These well-endowed institutions don’t seem to have the same commitment to the public good as some of their highly selective peers.

On the other hand, colleges ranked lower by U.S. News do well in the Washington Monthly rankings. Some examples include the University of California-Riverside (2nd in WM, 112th in U.S. News), Berea College (3rd in WM, 76th in U.S. News), and the New College of Florida (8th in WM, 89th in U.S. News). If nothing else, the high ranks in the Washington Monthly rankings give these institutions a chance to toot their own horn and highlight their own successes.

I fully realize that only a small percentage of prospective students will be interested in the Washington Monthly rankings compared to those from U.S. News. But it is worth highlighting the differences across college rankings so students and policymakers can decide which institutions are better for them given their own demands and preferences.

Breaking Down the 2014 US News Rankings

Today is a red-letter day for many people in the higher education community—the release of the annual college rankings from U.S. News and World Report. While many people love to hate the rankings for an array of reasons (from the perceived focus on prestige to a general dislike of accountability in some sectors), their influence on colleges and universities is undeniable. Colleges love to put out press releases touting their place in the rankings even while decrying their general premise.

I’m no stranger to the college ranking business, having been the consulting methodologist for Washington Monthly’s annual college rankings for the past two years. (All opinions in this piece, of course, are my own.) While Washington Monthly’s rankings rank colleges based on social mobility, service, and research performance, U.S. News ranks colleges primarily based on “academic quality,” which consists of inputs such as financial resources and standardized test scores as well as peer assessments for certain types of colleges.

I’m not necessarily in the U.S. News-bashing camp here, as they provide a useful service for people who are interested in prestige-based rankings (which I think is most people who want to buy college guides). But the public policy discussion, driven in part by the President’s proposal to create a college rating system, has been moving toward an outcome-based focus. The Washington Monthly rankings do capture some elements of this focus, as can be seen in my recent appearance on MSNBC and an outstanding panel discussion hosted by New America and Washington Monthly last week in Washington.

Perhaps in response to criticism or the apparent direction of public policy, Robert Morse (the well-known and well-respected methodologist for U.S. News) announced some changes last week in the magazine’s methodology for this year’s rankings. The changes place slightly less weight on peer assessment and selectivity, while putting slightly more weight on graduation rate performance and graduation/retention rates. Yet Morse bills the changes as meaningful, noting that “many schools’ ranks will change in the 2014 [this year’s] edition of the Best Colleges rankings compared with the 2013 edition.”

But the rankings have tended to be quite stable from year to year (here are the 2014 rankings). The top six research universities in the first U.S. News survey (in 1983—based on peer assessments by college presidents) were Stanford, Harvard, Yale, Princeton, Berkeley, and Chicago, with Amherst, Swarthmore, Williams, Carleton, and Oberlin being the top five liberal arts colleges. All of the research universities except Berkeley are in the top six this year and all of the liberal arts colleges except Oberlin are in the top eight.

In this post, I’ve examined all national universities (just over 200) and liberal arts colleges (about 180) ranked by U.S. News in this year’s and last year’s rankings. Note that this is only a portion of qualifying colleges, as the magazine doesn’t rank lower-tier institutions. The two graphs below show the changes in the rankings for national universities and liberal arts colleges between the two years.

[Scatterplot: 2013 vs. 2014 U.S. News ranks for national universities]

[Scatterplot: 2013 vs. 2014 U.S. News ranks for liberal arts colleges]

The first thing that jumps out at me is the high R-squared, around 0.98 for both classifications. What this essentially means is that 98% of the variation in this year’s rankings can be explained by last year’s rankings—a remarkable amount of persistence even when considering the slow-moving nature of colleges. The graphs show more movement among liberal arts colleges, which are much smaller and can be affected by random noise much more than large research universities.
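The calculation behind that statement is simple; here is a minimal sketch, assuming a file with each institution’s rank in both editions (the file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical file with each institution's rank in both editions.
df = pd.read_csv("usnews_ranks.csv")

r = df["rank_2013"].corr(df["rank_2014"])   # Pearson correlation of the two rank lists
print("R-squared:", r ** 2)                 # ~0.98: last year explains ~98% of this year
```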

The biggest blip in the national university rankings is South Carolina State, which went from 147th last year to unranked (no higher than 202nd) this year. Other universities which fell more than 20 spots are Howard University, the University of Missouri-Kansas City, and Rutgers University-Newark, all urban and/or minority-serving institutions. Could the change in formulas have hurt these types of institutions?

In tomorrow’s post, I’ll compare the U.S. News rankings to the Washington Monthly rankings for this same sample of institutions. Stay tuned!

“Bang for the Buck” and College Ratings

President Obama made headlines in the higher education world last week with a series of speeches about possible federal plans designed to bring down the cost of college. While the President made several interesting points (such as cutting law school from three to two years), the most interesting proposal to me was his plan to create a series of federal ratings based on whether colleges provide “good value” to students—tying funding to those ratings.

How could those ratings be constructed? As noted by Libby Nelson in Politico, the federal government plans to publish currently collected data on the net price of attendance (what students pay after taking grant aid into account), average borrowing amounts, and enrollment of Pell Grant recipients. Other measures could potentially be included, some of which are already collected but not readily available (graduation rates for Pell recipients) and others which would be brand new (let your imagination run wild).

Regular readers of this blog are probably aware of my work with Washington Monthly magazine’s annual set of college rankings. Last year was my first year as the consulting methodologist, meaning that I collected the data underlying the rankings, compiled them, and created the rankings—including a new measure of cost-adjusted graduation rate performance. This measure seeks to reward colleges which do a good job serving and graduating students from modest economic means, a far cry from many prestige-based rankings.

The metrics in the Washington Monthly rankings are at least somewhat similar to those proposed by President Obama in his speeches. As a result, we bumped up the release of the new 2013 “bang for the buck” rankings to Thursday afternoon. These rankings reward colleges which performed well on four different metrics:

  • Have a graduation rate of at least 50%.
  • Match or exceed their predicted graduation rate given student and institutional characteristics.
  • Have at least 20% of students receive Pell Grants (a measure of effort in enrolling low-income students).
  • Have a three-year student loan default rate of less than 10%.

Only one in five four-year colleges in America met all four of those criteria, which meant the list highlighted a very different group of colleges than the usual suspects. Colleges such as CUNY Baruch College and Cal State University-Fullerton ranked well, while most Ivy League institutions failed to make the list due to Pell Grant enrollment rates in the teens.
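For the analytically inclined, here is a minimal sketch of the four screens described above. The thresholds come from the list; the file and column names, including the predicted graduation rate field, are hypothetical placeholders for the underlying data:

```python
import pandas as pd

df = pd.read_csv("four_year_colleges.csv")   # hypothetical input file; rates as proportions

passes = df[
    (df["grad_rate"] >= 0.50)                          # graduation rate of at least 50%
    & (df["grad_rate"] >= df["predicted_grad_rate"])   # meets or exceeds expectations
    & (df["pct_pell"] >= 0.20)                         # at least 20% Pell recipients
    & (df["default_rate_3yr"] < 0.10)                  # default rate below 10%
]

print(f"{len(passes) / len(df):.0%} of four-year colleges meet all four criteria")
```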

This work caught the eye of the media, as I was asked to be on MSNBC’s “All in with Chris Hayes” on Friday night to discuss the rankings and their policy implications. Here is a link to the full segment, where I’m on with Matt Taibbi of Rolling Stone and well-known author Anya Kamenetz:

http://video.msnbc.msn.com/all-in-/52832257/

This was a fun experience, and now I can put the “As Seen on TV” label on my CV. (Right?) Seriously, though, stay tuned for the full Washington Monthly rankings coming out in the morning!