How the New Carnegie Classifications Scrambled College Rankings

Carnegie classifications are one of the wonkiest, most inside baseball concepts in the world of higher education policy. Updated every three years by the good folks at Indiana University, these classifications serve as a useful tool to group similar colleges based on their mix of programs, degree offerings, and research intensity. And since I have been considered “a reliable source of deep-weeds wonkery” in the past, I wrote about the most recent changes to Carnegie classifications earlier this year.

But for most people outside institutional research offices, the first time the updated Carnegie classifications really got noticed was with this fall’s college rankings season. Both the Washington Monthly rankings that I compile and the U.S. News rankings that I get asked to comment about quite a bit rely on Carnegie classifications to define the group of national universities. We both use the Carnegie doctoral/research university category for this purpose, placing master’s institutions into a master’s university category (us) or a regional university category (U.S. News). Because the number of Carnegie research universities spiked from 334 in the 2015 classifications to 423 in the most recent 2018 classifications, a bunch of new universities entered the national rankings.

To be more exact, 92 universities appeared in Washington Monthly’s national university rankings for the first time this year, with nearly all of them coming out of the master’s rankings last year. The full dataset of these colleges and their rankings in both the U.S. News and Washington Monthly rankings can be downloaded here, but I will highlight a few colleges that cracked the top 100 in either ranking below:

Santa Clara University: #54 in US News, #137 in Washington Monthly

Loyola Marymount University: #64 in US News, #258 in Washington Monthly

Gonzaga University: #79 in US News, #211 in Washington Monthly

Elon University: #84 in US News, #282 in Washington Monthly

Rutgers University-Camden: #166 in US News, #57 in Washington Monthly

Towson University: #197 in US News, #59 in Washington Monthly

Mary Baldwin University: #272 in US News, #35 in Washington Monthly

These new colleges appearing in the national university rankings means that other colleges got squeezed down the rankings. Given the priority that many colleges and their boards place on the US News rankings, it’s a tough day on some campuses. Meanwhile, judging by press releases, the new top-100 national universities are probably having a good time right now.

Comments on the CollegeNET-PayScale Social Mobility Index

The last two years have seen a great deal of attention paid to the social mobility function that many people expect colleges to perform. Are colleges giving students from lower-income families the tools and skills they need in order to do well (and good) in society? The Washington Monthly college rankings (which I calculate) were the first entrant in this field nearly a decade ago, and we also put out lists of the Best Bang for the Buck and Affordable Elite colleges in this year’s issue. The New York Times put out a social mobility ranking in September, which was essentially a more elite version of our Affordable Elite list, covering only about 100 colleges with four-year graduation rates of at least 75%.

The newest entrant in the cottage industry of social mobility rankings comes from PayScale and CollegeNET, an information technology and scholarship provider. Their Social Mobility Index (SMI) includes five components for 539 four-year colleges, with the following weights:

Tuition (lower is better): 126 points

Economic background (percent of students with family incomes below $48,000): 125 points

Graduation rate (apparently six years): 66 points

Early career salary (from PayScale data): 65 points

Endowment (lower is better): 30 points

The top five colleges in the rankings are Montana Tech, Rowan University, Florida A&M, Cal Poly-Pomona, and Cal State-Northridge, while the bottom five are Oberlin, Colby, Berklee College of Music, Washington University, and the Culinary Institute of America.

Many people will critique the use of PayScale’s data in rankings, and I partially agree—although it is the best data available nationwide until the federal ban on student unit record data is lifted. My two main critiques of these rankings are the following:

Tuition isn’t the best measure of college affordability. Judging by the numbers used in the rankings, it’s clear that the SMI uses posted tuition and fees for affordability. This doesn’t necessarily reflect what the typical lower-income student would actually pay, for two reasons: it excludes room, board, and other necessary expenses, and it ignores any grant aid. The net price of attendance (the total cost of attendance less all grant aid) is a far better measure of what students from lower-income families may pay, even though the SMI measure does capture sticker shock.

The weights are justified, but still arbitrary. The SMI methodology includes the following howler of a sentence:

“Unlike the popular periodicals, we did not arbitrarily assign a percentage weight to the five variables in the SMI formula and add those values together to obtain a score.”

Not to put my philosopher hat on too tightly, but any weights given in college rankings are arbitrarily assigned. A good set of rankings is fairly insensitive to changes in the weighting methodology; the SMI methodology does not address whether its results would pass that test.
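
To make that critique concrete, here is a minimal sketch of the kind of sensitivity check I have in mind, in Python with made-up data. It treats the SMI as a simple weighted sum of standardized component scores (an assumption on my part; the point values come from the list above, but everything else is hypothetical) and asks how much the ordering moves when the weights are nudged.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical standardized component scores (0 = worst, 1 = best) for 539 colleges,
# with tuition and endowment assumed to be reverse-coded so higher is always better.
components = ["tuition", "econ_background", "grad_rate", "early_salary", "endowment"]
scores = pd.DataFrame(rng.random((539, len(components))), columns=components)

# Point values from the SMI methodology, treated here as weights in a simple weighted sum.
weights = pd.Series({"tuition": 126, "econ_background": 125,
                     "grad_rate": 66, "early_salary": 65, "endowment": 30})

def rank_colleges(scores: pd.DataFrame, weights: pd.Series) -> pd.Series:
    """Rank colleges (1 = best) by a weighted sum of their component scores."""
    composite = scores.mul(weights, axis=1).sum(axis=1)
    return composite.rank(ascending=False)

baseline = rank_colleges(scores, weights)

# Nudge each weight by up to +/-20 percent and see how much the ordering moves.
perturbed = rank_colleges(scores, weights * rng.uniform(0.8, 1.2, size=len(weights)))

rho, _ = spearmanr(baseline, perturbed)
print(f"Rank correlation after perturbing the weights: {rho:.3f}")
```

If the rank correlation stays near 1 under reasonable perturbations, the rankings are robust to the weighting choices; if it drops substantially, the weights are doing a lot of the work.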

I’m pleased to welcome another college rankings website to this increasingly fascinating mix of providers—and I remain curious about the extent to which these rankings (along with many others) will be used as either an accountability or a consumer information tool.

Rankings, Rankings, and More Rankings!

We’re finally reaching the end of the college rankings season for 2014. Money magazine started off the season with its rankings of 665 four-year colleges based on “educational quality, affordability, and alumni earnings.” (I generally like these rankings, in spite of the inherent limitations of using Rate My Professors scores and PayScale data in lieu of more complete information.) I jumped into the fray late in August with my friends at Washington Monthly for our annual college guide and rankings. This was closely followed by a truly bizarre list from the Daily Caller of “The 52 Best Colleges In America PERIOD When You Consider Absolutely Everything That Matters.”

But like any good infomercial, there’s more! Last night, the New York Times released its set of rankings focusing on how elite colleges are serving students from lower-income families. They examined the roughly 100 colleges with a four-year graduation rate of 75% or higher, only three of which (University of North Carolina-Chapel Hill, University of Virginia, and the College of William and Mary) are public. By combining the percentage of students receiving Pell Grants over the past three years with the 2012-13 net price of attendance (the total sticker price less all grant aid), they created a “College Access Index” that measures how many standard deviations above or below the mean each college falls.
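
Based on that description, here is a rough sketch of what a standard-deviations-from-the-mean index looks like in practice. The data and the way the two measures are combined are my assumptions for illustration, not the Times’ actual formula.

```python
import pandas as pd

# Hypothetical values for a few of the roughly 100 colleges in the analysis.
df = pd.DataFrame({
    "college":    ["College A", "College B", "College C"],
    "pell_share": [0.22, 0.14, 0.25],    # share of recent students receiving Pell Grants
    "net_price":  [21000, 28000, 17500], # 2012-13 net price in dollars
})

# How many standard deviations from the mean is each college on each measure?
z_pell  = (df["pell_share"] - df["pell_share"].mean()) / df["pell_share"].std()
z_price = (df["net_price"]  - df["net_price"].mean())  / df["net_price"].std()

# More Pell recipients is good; a lower net price is good, so its z-score is subtracted.
df["access_index"] = z_pell - z_price
print(df.sort_values("access_index", ascending=False))
```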

My first reaction upon reading the list is that it seems a lot like what we introduced in Washington Monthly’s College Guide this year—a list of “Affordable Elite” colleges. We looked at the 224 most selective colleges (including many public universities) and ranked them using graduation rate, graduation rate performance (are they performing as well as we would expect given the students they enroll?), and student loan default rates in addition to percent Pell and net price. Four University of California colleges were in our top ten, with the NYT’s top college (Vassar) coming in fifth on our list.

I’m glad to see the New York Times focusing on economic diversity in their list, but it would be nice to look at a slightly broader swath of colleges that serve more than a handful of lower-income students. As The Chronicle of Higher Education notes, the Big Ten Conference enrolls more Pell recipients than all of the colleges ranked by the NYT. Focusing on the net price for families making between $30,000 and $48,000 per year is also a concern at these institutions due to small sample sizes. In 2011-12 (the most recent year of publicly available data), Vassar enrolled 669 first-year students, of whom 67 were in the $30,000-$48,000 income bracket.

The U.S. News & World Report college rankings also came out this morning, and not much changed from last year. Princeton, which is currently fighting a lawsuit challenging whether the entire university should be considered a nonprofit enterprise, is the top national university on the list, while Williams College in Massachusetts is the top liberal arts college. Nick Anderson at the Washington Post has put together a nice table showing changes in rankings over five years; most changes wouldn’t register as being statistically significant. Northeastern University, which has risen into the top 50 in recent years, is an exception. However, as this great piece in Boston Magazine explains, Northeastern’s only focus is to rise in the U.S. News rankings. (They’re near the bottom of the Washington Monthly rankings, in part because they’re really expensive.)

Going forward, the biggest set of rankings for the rest of the fall will be the new college football rankings—as the Bowl Championship Series rankings have been replaced by a 13-person committee. (And no, Bob Morse from U.S. News is not a member, although Condoleezza Rice is.) I like Gregg Easterbrook’s idea at ESPN about including academic performance as a component in college football rankings. That might be worth considering as a tiebreaker if the playoff committee gets deadlocked using on-field performance alone. They could also use the Washington Monthly rankings, but Minnesota will probably win a Rose Bowl before that happens.

[ADDENDUM: Let’s also not forget about the federal government’s effort to rate (not rank) colleges through the Postsecondary Institution Ratings System (PIRS). That is supposed to come out this fall, as well.]

Are “Affordable Elite” Colleges Growing in Size, or Just Selectivity?

A new addition to this year’s Washington Monthly college guide is a ranking of “Affordable Elite” colleges. Given that many students and families (rightly or wrongly) focus on trying to get into the most selective colleges, we decided to create a special set of rankings covering only the 224 most highly-competitive colleges in the country (as defined by Barron’s). Colleges are assigned scores based on student loan default rates, graduation rates, graduation rate performance, the percentage of students receiving Pell Grants, and the net price of attendance. UCLA, Harvard, and Williams made the top three, with four University of California campuses in the top ten.

I received an interesting piece of criticism regarding the list from Sara Goldrick-Rab, a professor at the University of Wisconsin-Madison (and my dissertation chair in graduate school). Her critique noted that the size of the school and the type of admissions standards are missing from the rankings. She wrote:

“Many schools are so tiny that they educate a teensy-weensy fraction of American undergraduates. So they accept 10 poor kids a year, and that’s 10% of their enrollment. Or maybe even 20%? So what? Why is that something we need to laud at the policy level?”

While I don’t think that the size of the college should be a part of the rankings, it’s certainly worth highlighting the selective colleges that have expanded over time compared to those which have remained at the same size in spite of an ever-growing applicant pool.

I used undergraduate enrollment data from the fall semesters of 1980, 1990, 2000, and 2012 from IPEDS for both the 224 colleges in the Affordable Elite list and 2,193 public and private nonprofit four-year colleges not on the list. I calculated the percentage change between each year and 2012 for the selective colleges on the Affordable Elite list and the other less-selective colleges to get an idea of whether selective colleges are curtailing enrollment.

[UPDATE: The fall enrollment numbers include all undergraduates, including non-degree-seeking students. This doesn’t have a big impact on most colleges, but it does at Harvard, where about 30% of total undergraduate enrollment is not seeking a degree. This means that enrollment growth may be overstated. Thanks to Ben Wildavsky for leading me to investigate this point.]
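
For anyone who wants to replicate the basic calculation, a rough sketch is below. The file and column names are placeholders for an IPEDS fall enrollment extract, not actual IPEDS variable names.

```python
import pandas as pd

# Placeholder extract: one row per college, with fall undergraduate enrollment
# for each year and a flag for membership in the Affordable Elite list.
df = pd.read_csv("ipeds_fall_enrollment.csv")
# columns: unitid, affordable_elite, enr_1980, enr_1990, enr_2000, enr_2012

# Percentage change between each earlier year and 2012.
for year in (1980, 1990, 2000):
    df[f"pct_change_{year}"] = 100 * (df["enr_2012"] - df[f"enr_{year}"]) / df[f"enr_{year}"]

# Median change for the 224 Affordable Elite colleges versus everyone else.
change_cols = [f"pct_change_{y}" for y in (1980, 1990, 2000)]
print(df.groupby("affordable_elite")[change_cols].median().round(1))
```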

The median Affordable Elite college enrolled 3,354 students in 2012, compared to 1,794 students at the median less-selective college. The percentage change at the median college between each year and 2012 is below:

Period       Affordable Elite   Less selective
2000-2012    10.9%              18.3%
1990-2012    16.0%              26.3%
1980-2012    19.9%              41.7%

The distribution of growth rates is shown below:

[Figure: distribution of enrollment growth rates, Affordable Elite vs. less-selective colleges]

So, as a whole, less-selective colleges are growing at a more rapid pace than the ones on the Affordable Elite list. But do higher-ranked elite colleges grow faster? The scatterplot below suggests not really: the correlation between rank and growth is just -0.081, indicating essentially no relationship between where a college sits on the list and how much it has grown.

[Figure: scatterplot of enrollment growth vs. Affordable Elite rank]
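
For reference, the correlation behind that claim is a one-liner once the rank and growth rate for each Affordable Elite college are in a data frame (the file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical file with one row per Affordable Elite college.
elite = pd.read_csv("affordable_elite_growth.csv")  # columns: rank, pct_change_1980_2012

corr = elite["rank"].corr(elite["pct_change_1980_2012"])
print(f"Correlation between rank and enrollment growth: {corr:.3f}")
```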

But some elite colleges have grown. The top ten colleges in the Affordable Elite list have the following growth rates:

Rank  Name (* means public)                        2012 enrollment   % change from 2000   from 1990   from 1980
1     University of California–Los Angeles (CA)*   27,941            11.7                 15.5        28.0
2     Harvard University (MA)                      10,564            6.9                  1.7         62.3
3     Williams College (MA)                        2,070             2.5                  3.2         6.3
4     Dartmouth College (NH)                       4,193             3.4                  11.1        16.8
5     Vassar College (NY)                          2,406             0.3                  -1.8        1.9
6     University of California–Berkeley (CA)*      25,774            13.7                 20.1        21.9
7     University of California–Irvine (CA)*        22,216            36.9                 64.6        191.6
8     University of California–San Diego (CA)*     22,676            37.5                 57.9        152.5
9     Hanover College (IN)                         1,123             -1.7                 4.5         11.0
10    Amherst College (MA)                         1,817             7.2                  13.7        15.8

Some elite colleges have not grown since 1980, including the University of Pennsylvania, MIT, Boston College, and the University of Minnesota. Public colleges have generally grown slightly faster than private colleges (the UC colleges are a prime example), but there is substantial variation in their growth.

The Multiple Stakeholder Problem in Assessing College Quality

One of the biggest challenges the Department of Education’s proposed Postsecondary Institution Ratings System (PIRS) will face is how to present a valid set of ratings to multiple audiences. Much of the discussion at the recent technical symposium was about who should be the key audience: colleges (for accountability purposes) or students (for informational purposes). The determination of what the audience should be will likely influence what the ratings should look like. My research primarily focuses on institutional accountability, and I think that the federal government should focus on that as the goal of PIRS. (I said as much in my presentation earlier this month.)

The student information perspective is much trickier in my view. Students tend to flock to rankings and information sources that are largely based on prestige instead of some measure of “value-added” or societal good. As a result, I view the Washington Monthly college rankings (which I’ve worked on for the past two years) as a much more influential tool for incentivizing colleges and policymakers than for informing students. I think that is the right path for influencing colleges’ priorities, as I have to question whether many students will use college rankings that provide very useful information but do not line up with their preexisting idea of what a “good” college is.

I was quoted in an article in Politico this morning regarding PIRS and what can be learned from existing rankings systems. In that article, I expressed similar sentiments, although in a less elegant way. (It’s also a good time to clarify that all opinions I express are my own.) I certainly hope that more than six students use the Washington Monthly rankings to inform their college choice sets, but I do not harbor grand expectations that students will suddenly choose to use our rankings over U.S. News. However, the influence of the rankings on colleges has the potential to help a large number of students through changing institutional priorities.

The Value of “Best Value” Lists

I can always tell when a piece about college rankings makes an appearance in the general media. College administrators see the piece and tend to panic while reaching out to their institutional research and/or enrollment management staffs. The question asked is typically the same: why don’t we look better in this set of college rankings? As the methodologist for Washington Monthly magazine’s rankings, I get a flurry of e-mails from these panicked analysts trying to get answers for their leaders—as well as from local journalists asking questions about their hometown institution.

The most recent article to generate a burst of questions to me was on the front page of Monday’s New York Times.  It noted the rise in lists that look at colleges’ value to students instead of the overall performance on a broader set of criteria. (A list of the top ten value colleges across numerous criteria can be found here.) While Washington Monthly’s bang-for-the-buck article from 2012 was not the first effort at looking at a value list (Princeton Review has that honor, to the best of my knowledge), we were the first to incorporate a cost-adjusted performance measure that accounts for student characteristics and the net price of attendance.

When I talk with institutional researchers or journalists, my answer is straightforward. To look better on a bang-for-the-buck list, colleges have to either increase their bang (higher graduation rates and lower default rates, for example) or lower their buck (with a lower net price of attendance). Prioritizing these measures does come with concerns (see Daniel Luzer’s Washington Monthly piece), but the good most likely outweighs the bad.

Moving forward, it will be interesting to see how these lists continue to develop, and whether they are influenced by the Obama Administration’s proposed college ratings. It’s an interesting time in the world of college rankings, ratings, and guides.

Burning Money on the Quad? Why Rankings May Increase College Costs

Regardless of whether President Obama’s proposed rating system for colleges based on affordability and performance becomes reality (I expect ratings to appear in 2015, but not have a great deal of meaning), his announcement has affected the higher education community. My article listing “bang for the buck” colleges in Washington Monthly ran the same day he announced his plan, a few days ahead of our initial timeline. We were well-positioned with respect to the President’s plan, which led to much more media attention than we would have expected.

A few weeks after the President’s media blitz, U.S. News & World Report unveiled their annual rankings to the great interest of many students, their families, and higher education professionals as well as to the typical criticism of their methodology. But they also faced a new set of critiques based on their perceived focus on prestige and selectivity instead of affordability and social mobility. Bob Morse, U.S. News’s methodologist, answered some of those critiques in a recent blog post. Most of what Morse said isn’t terribly surprising, especially his noting that U.S. News has much different goals than the President’s goals. He also hopes to take advantage of any additional data the federal government collects for its ratings, and I certainly share that interest. However, I strongly disagree with one particular part of his post.

When asked whether U.S. News rewards colleges for raising costs and spending more money, Morse said no. He reminded readers that the methodology only counts spending on the broadly defined category of educational expenditures, implying that additional spending on instruction, student services, research, and academic support always benefits students. (Spending on items such as recreation, housing, and food service does not count.)

I contend that rewarding colleges for spending more in the broad area of educational expenditures is definitely a way to increase the cost of college, particularly since this category makes up 10% of the rankings. Morse and the U.S. News team want their rankings to reflect academic quality, which can be enhanced by additional spending—I think this is the point they are trying to make. But the critique is mechanically true: more spending, even on “good” expenditures, still raises the cost of college. Additionally, this additional spending need not go to factors that benefit undergraduate students and may not be cost-effective. I discuss both points below.

1. Additional spending on “educational expenditures” may not benefit undergraduate students. A good example of this is spending on research, which runs in the tens or even hundreds of millions of dollars per year at many larger universities. Raising tuition to pay for research would increase educational expenditures—and hence an institution’s spot in the U.S. News rankings—but primarily would benefit faculty, graduate students, and postdoctoral scholars. This sort of spending may very well benefit the public through increased research productivity, but it is very unlikely to benefit first-year and second-year undergraduates.

[Lest this be seen solely as a critique of the U.S. News rankings, the Washington Monthly rankings (for which I’m the methodologist) can also be criticized for potentially contributing to the increase in college costs. Our rankings also reward colleges for research expenditures, so the same critiques apply.]

2. Additional spending may fail a cost-effectiveness test. As I previously noted, any spending in the broad area of “educational expenditures” counts as a positive in the rankings. But there is no requirement that the money be used in an efficient way, or even an effective one. I am reminded of a quote from John Duffy, formerly on the faculty of George Washington University’s law school. He famously said in a 2011 New York Times article: “I once joked with my dean that there is a certain amount of money that we could drag into the middle of the school’s quadrangle and burn, and when the flames died down, we’d be a Top 10 school as long as the point of the bonfire was to teach our students.” On a more serious note, additional spending could go to legitimate programs that fail to move the needle on student achievement, perhaps due to diminishing returns.

I have a great deal of respect for Bob Morse and the U.S. News team, but they are incorrect to claim that their rankings do not have the potential to increase the cost of college. I urge them to reconsider that statement, instead focusing on why the additional spending for primarily educational purposes could benefit students.

Comparing the US News and Washington Monthly Rankings

In yesterday’s post, I discussed the newly released 2014 college rankings from U.S. News & World Report and how they changed from last year. In spite of some changes in methodology that were billed as “significant,” the R-squared value when comparing this year’s rankings with last year’s rankings among ranked national universities and liberal arts colleges was about 0.98. That means that 98% of the variation in this year’s rankings can be explained by last year’s rankings—a nearly perfect prediction.

In today’s post, I compare the results of the U.S. News rankings to those from the Washington Monthly rankings for national universities and liberal arts colleges ranked by both sources. The Washington Monthly rankings (which I compile as the consulting methodologist) are based on three criteria: social mobility, research, and service, none of which are explicit goals of the U.S. News rankings. Yet it could still be the case that colleges that recruit high-quality students, have lots of resources, and have a great reputation (the main factors in the U.S. News rankings) do a good job recruiting students from low-income families, produce outstanding research, and graduate servant-leaders.

The results of my comparisons show large differences between the two sets of rankings, particularly at liberal arts colleges. The R-squared value at national universities is 0.34, but only 0.17 at liberal arts colleges, as shown below:

[Figure: U.S. News vs. Washington Monthly ranks, national universities]

[Figure: U.S. News vs. Washington Monthly ranks, liberal arts colleges]

It is worth highlighting some of the colleges that are high on both rankings. Harvard, Stanford, Swarthmore, Pomona, and Carleton all rank in the top ten in both magazines, showing that it is possible to be both highly selective and serve the public in an admirable way. (Of course, we should expect that to be the case given the size of their endowments and their favorable tax treatment!) However, Middlebury and Claremont McKenna check in around 100th in the Washington Monthly rankings in spite of a top-ten U.S. News ranking. These well-endowed institutions don’t seem to have the same commitment to the public good as some of their highly selective peers.

On the other hand, colleges ranked lower by U.S. News do well in the Washington Monthly rankings. Some examples include the University of California-Riverside (2nd in WM, 112th in U.S. News), Berea College (3rd in WM, 76th in U.S. News), and the New College of Florida (8th in WM, 89th in U.S. News). If nothing else, the high ranks in the Washington Monthly rankings give these institutions a chance to toot their own horn and highlight their successes.

I fully realize that only a small percentage of prospective students will be interested in the Washington Monthly rankings compared to those from U.S. News. But it is worth highlighting the differences across college rankings so students and policymakers can decide what institutions are better for them given their own demands and preferences.

Breaking Down the 2014 US News Rankings

Today is a red-letter day for many people in the higher education community—the release of the annual college rankings from U.S. News and World Report. While many people love to hate the rankings for an array of reasons (from the perceived focus on prestige to a general dislike of accountability in some sectors), their influence on colleges and universities is undeniable. Colleges love to put out press releases touting their place in the rankings even while decrying their general premise.

I’m no stranger to the college ranking business, having been the consulting methodologist for Washington Monthly’s annual college rankings for the past two years. (All opinions in this piece, of course, are my own.) While Washington Monthly’s rankings rank colleges based on social mobility, service, and research performance, U.S. News ranks colleges primarily based on “academic quality,” which consists of inputs such as financial resources and standardized test scores as well as peer assessments for certain types of colleges.

I’m not necessarily in the U.S. News-bashing camp here, as they provide a useful service for people who are interested in prestige-based rankings (which I think is most people who want to buy college guides). But the public policy discussion, driven in part by the President’s proposal to create a college rating system, has been moving toward an outcome-based focus. The Washington Monthly rankings do capture some elements of this focus, as can be seen in my recent appearance on MSNBC and an outstanding panel discussion hosted by New America and Washington Monthly last week in Washington.

Perhaps in response to criticism or the apparent direction of public policy, Robert Morse (the well-known and well-respected methodologist for U.S. News) announced some changes last week in the magazine’s methodology for this year’s rankings. The changes place slightly less weight on peer assessment and selectivity, while putting slightly more weight on graduation rate performance and graduation/retention rates. Yet Morse bills the changes as meaningful, noting that “many schools’ ranks will change in the 2014 [this year’s] edition of the Best Colleges rankings compared with the 2013 edition.”

But the rankings have tended to be quite stable from year to year (here are the 2014 rankings). The top six research universities in the first U.S. News survey (in 1983—based on peer assessments by college presidents) were Stanford, Harvard, Yale, Princeton, Berkeley, and Chicago, with Amherst, Swarthmore, Williams, Carleton, and Oberlin being the top five liberal arts colleges. All of the research universities except Berkeley are in the top six this year and all of the liberal arts colleges except Oberlin are in the top eight.

In this post, I’ve examined all national universities (just over 200) and liberal arts colleges (about 180) ranked by U.S. News in this year’s and last year’s rankings. Note that this is only a portion of qualifying colleges, but the magazine doesn’t rank lower-tier institutions. The two graphs below show the changes in the rankings for national universities and liberal arts colleges between the two years.

[Figure: 2013 vs. 2014 U.S. News ranks, national universities]

[Figure: 2013 vs. 2014 U.S. News ranks, liberal arts colleges]

The first thing that jumps out at me is the high R-squared, around 0.98 for both classifications. What this essentially means is that 98% of the variation in this year’s rankings can be explained by last year’s rankings—a remarkable amount of persistence even when considering the slow-moving nature of colleges. The graphs show more movement among liberal arts colleges, which are much smaller and can be affected by random noise much more than large research universities.
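
As a rough sketch, this kind of persistence check only takes a few lines once both years of ranks are in one file (the file and column names below are hypothetical):

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical file: one row per institution ranked by U.S. News in both years.
ranks = pd.read_csv("usnews_ranks.csv")  # columns: institution, rank_2013, rank_2014

# For a simple linear fit of this year's rank on last year's,
# R-squared is just the squared correlation between the two years.
r, _ = pearsonr(ranks["rank_2013"], ranks["rank_2014"])
print(f"R-squared between last year's and this year's ranks: {r ** 2:.3f}")
```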

The biggest blip in the national university rankings is South Carolina State, which went from 147th last year to unranked (no higher than 202nd) this year. Other universities which fell more than 20 spots are Howard University, the University of Missouri-Kansas City, and Rutgers University-Newark, all urban and/or minority-serving institutions. Could the change in formulas have hurt these types of institutions?

In tomorrow’s post, I’ll compare the U.S. News rankings to the Washington Monthly rankings for this same sample of institutions. Stay tuned!

“Bang for the Buck” and College Ratings

President Obama made headlines in the higher education world last week with a series of speeches about possible federal plans designed to bring down the cost of college. While the President made several interesting points (such as cutting law school from three to two years), the most interesting proposal to me was his plan to create a series of federal ratings based on whether colleges provide “good value” to students—tying funding to those ratings.

How could those ratings be constructed? As noted by Libby Nelson in Politico, the federal government plans to publish currently collected data on the net price of attendance (what students pay after taking grant aid into account), average borrowing amounts, and enrollment of Pell Grant recipients. Other measures could potentially be included, some of which are already collected but not readily available (graduation rates for Pell recipients) and others which would be brand new (let your imagination run wild).

Regular readers of this blog are probably aware of my work with Washington Monthly magazine’s annual set of college rankings. Last year was my first year as the consulting methodologist, meaning that I collected the data underlying the rankings, compiled it, and created the rankings—including a new measure of cost-adjusted graduation rate performance. This measure seeks to reward colleges which do a good job serving and graduating students from modest economic means, a far cry from many prestige-based rankings.
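
To give a flavor of how a graduation rate performance measure works in general (predict each college’s graduation rate from its characteristics, then compare the actual rate to the prediction), here is a bare-bones sketch. The predictors, the data file, and the cost adjustment are illustrative assumptions on my part, not the actual Washington Monthly model.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical college-level data.
df = pd.read_csv("college_data.csv")
# columns: grad_rate, pct_pell, avg_sat, pct_part_time, net_price

# Predict graduation rates from student and institutional characteristics.
X = sm.add_constant(df[["pct_pell", "avg_sat", "pct_part_time"]])
fit = sm.OLS(df["grad_rate"], X).fit()

# Performance: positive values mean a college graduates more students
# than its characteristics alone would predict.
df["grad_rate_performance"] = df["grad_rate"] - fit.predict(X)

# One crude way to fold in cost: performance per $1,000 of net price.
df["cost_adjusted_performance"] = df["grad_rate_performance"] / (df["net_price"] / 1000)
```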

The metrics in the Washington Monthly rankings are at least somewhat similar to those proposed by President Obama in his speeches. As a result, we bumped up the release of the new 2013 “bang for the buck” rankings to Thursday afternoon. These rankings reward colleges which performed well on four different metrics:

  • Have a graduation rate of at least 50%.
  • Match or exceed their predicted graduation rate given student and institutional characteristics.
  • Have at least 20% of students receive Pell Grants (a measure of effort in enrolling low-income students).
  • Have a three-year student loan default rate of less than 10%.

Only one in five four-year colleges in America met all four of those criteria, which put the spotlight on a different group of colleges than is normally highlighted. Colleges such as CUNY Baruch College and Cal State University-Fullerton ranked well, while most Ivy League institutions failed to make the list due to Pell Grant enrollment rates in the teens.
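
As a rough sketch, applying those four screens to a college-level dataset is a simple filter (the file and column names are hypothetical; the predicted graduation rate would come from a model like the one sketched above):

```python
import pandas as pd

df = pd.read_csv("four_year_colleges.csv")
# hypothetical columns: name, grad_rate, predicted_grad_rate, pct_pell, default_rate

bang_for_buck = df[
    (df["grad_rate"] >= 0.50)                         # graduation rate of at least 50%
    & (df["grad_rate"] >= df["predicted_grad_rate"])  # matches or exceeds predicted graduation rate
    & (df["pct_pell"] >= 0.20)                        # at least 20% of students receive Pell Grants
    & (df["default_rate"] < 0.10)                     # three-year default rate under 10%
]
print(f"{len(bang_for_buck)} of {len(df)} colleges meet all four criteria")
```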

This work caught the eye of the media, as I was asked to be on MSNBC’s “All In with Chris Hayes” on Friday night to discuss the rankings and their policy implications. Here is a link to the full segment, where I’m on with Matt Taibbi of Rolling Stone and well-known author Anya Kamenetz:

http://video.msnbc.msn.com/all-in-/52832257/

This was a fun experience, and now I can put the “As Seen on TV” label on my CV. (Right?) Seriously, though, stay tuned for the full Washington Monthly rankings coming out in the morning!