Breaking Down the 2014 US News Rankings

Today is a red-letter day for many people in the higher education community—the release of the annual college rankings from U.S. News and World Report. While many people love to hate the rankings for an array of reasons (from the perceived focus on prestige to a general dislike of accountability in some sectors), their influence on colleges and universities is undeniable. Colleges love to put out press releases touting their place in the rankings even while decrying their general premise.

I’m no stranger to the college ranking business, having been the consulting methodologist for Washington Monthly’s annual college rankings for the past two years. (All opinions in this piece, of course, are my own.) While Washington Monthly’s rankings rank colleges based on social mobility, service, and research performance, U.S. News ranks colleges primarily based on “academic quality,” which consists of inputs such as financial resources and standardized test scores as well as peer assessments for certain types of colleges.

I’m not necessarily in the U.S. News-bashing camp here, as they provide a useful service for people who are interested in prestige-based rankings (which I think is most people who want to buy college guides). But the public policy discussion, driven in part by the President’s proposal to create a college rating system, has been moving toward an outcome-based focus. The Washington Monthly rankings do capture some elements of this focus, as can be seen in my recent appearance on MSNBC and an outstanding panel discussion hosted by New America and Washington Monthly last week in Washington.

Perhaps in response to criticism or the apparent direction of public policy, Robert Morse (the well-known and well-respected methodologist for U.S. News) announced some changes last week in the magazine’s methodology for this year’s rankings. The changes place slightly less weight on peer assessment and selectivity, while putting slightly more weight on graduation rate performance and graduation/retention rates. Yet Morse bills the changes as meaningful, noting that “many schools’ ranks will change in the 2014 [this year’s] edition of the Best Colleges rankings compared with the 2013 edition.”

But the rankings have tended to be quite stable from year to year (here are the 2014 rankings). The top six research universities in the first U.S. News survey (in 1983—based on peer assessments by college presidents) were Stanford, Harvard, Yale, Princeton, Berkeley, and Chicago, with Amherst, Swarthmore, Williams, Carleton, and Oberlin being the top five liberal arts colleges. All of the research universities except Berkeley are in the top six this year and all of the liberal arts colleges except Oberlin are in the top eight.

In this post, I’ve examined all national universities (just over 200) and liberal arts colleges (about 180) ranked by U.S. News in this year’s and last year’s rankings. Note that this is only a portion of qualifying colleges, but the magazine doesn’t rank lower-tier institutions. The two graphs below show the changes in the rankings for national universities and liberal arts colleges between the two years.

[Figure: Changes in U.S. News rankings between the 2013 and 2014 editions, national universities]

[Figure: Changes in U.S. News rankings between the 2013 and 2014 editions, liberal arts colleges]

The first thing that jumps out at me is the high R-squared, around 0.98 for both classifications. What this essentially means is that 98% of the variation in this year’s rankings can be explained by last year’s rankings—a remarkable amount of persistence even when considering the slow-moving nature of colleges. The graphs show more movement among liberal arts colleges, which are much smaller and can be affected by random noise much more than large research universities.
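
For readers who want to check this kind of persistence themselves, here is a minimal sketch of the calculation. The ranks below are invented for illustration (the real analysis uses the roughly 200 national universities ranked in both editions); the R-squared shown in the graphs is just the squared correlation between the two years' ranks.

```python
import numpy as np

# Invented ranks for a dozen institutions in the 2013 and 2014 editions;
# the real analysis uses the ~200 ranked national universities.
rank_2013 = np.array([1, 2, 3, 5, 8, 12, 20, 35, 50, 75, 100, 150])
rank_2014 = np.array([1, 3, 2, 4, 9, 11, 22, 33, 52, 70, 104, 147])

# The R-squared in the graphs is the squared correlation between
# last year's and this year's ranks.
r = np.corrcoef(rank_2013, rank_2014)[0, 1]
print(f"R-squared: {r ** 2:.3f}")
```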

The biggest blip in the national university rankings is South Carolina State, which went from 147th last year to unranked (no higher than 202nd) this year. Other universities which fell more than 20 spots are Howard University, the University of Missouri-Kansas City, and Rutgers University-Newark, all urban and/or minority-serving institutions. Could the change in formulas have hurt these types of institutions?

In tomorrow’s post, I’ll compare the U.S. News rankings to the Washington Monthly rankings for this same sample of institutions. Stay tuned!

“Bang for the Buck” and College Ratings

President Obama made headlines in the higher education world last week with a series of speeches about possible federal plans designed to bring down the cost of college. While the President made several interesting points (such as cutting law school from three years to two), the most interesting proposal to me was his plan to create a series of federal ratings based on whether colleges provide “good value” to students, with federal funding tied to those ratings.

How could those ratings be constructed? As noted by Libby Nelson in Politico, the federal government plans to publish currently collected data on the net price of attendance (what students pay after taking grant aid into account), average borrowing amounts, and enrollment of Pell Grant recipients. Other measures could potentially be included, some of which are already collected but not readily available (graduation rates for Pell recipients) and others which would be brand new (let your imagination run wild).

Regular readers of this blog are probably aware of my work with Washington Monthly magazine’s annual set of college rankings. Last year was my first year as the consulting methodologist, meaning that I collected the data underlying the rankings, compiled it, and created the rankings—including a new measure of cost-adjusted graduation rate performance. This measure seeks to reward colleges which do a good job serving and graduating students from modest economic means, a far cry from many prestige-based rankings.

The metrics in the Washington Monthly rankings are at least somewhat similar to those proposed by President Obama in his speeches. As a result, we bumped up the release of the new 2013 “bang for the buck” rankings to Thursday afternoon. These rankings reward colleges which performed well on four different metrics:

  • Have a graduation rate of at least 50%.
  • Match or exceed their predicted graduation rate given student and institutional characteristics.
  • Have at least 20% of students receive Pell Grants (a measure of effort in enrolling low-income students).
  • Have a three-year student loan default rate of less than 10%.

Only one in five four-year colleges in America met all four of those criteria, putting the spotlight on a different group of institutions than the one usually highlighted. Colleges such as CUNY Baruch College and Cal State University-Fullerton ranked well, while most Ivy League institutions failed to make the list due to Pell Grant enrollment rates in the teens.
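
For the curious, here is a stylized sketch of how the four screens above can be applied to institution-level data. The column names and numbers are hypothetical, not the actual Washington Monthly variables, but the logic is the same: a college has to clear all four bars to make the list.

```python
import pandas as pd

# Hypothetical institution-level data; these are not the actual
# Washington Monthly variable names or values.
colleges = pd.DataFrame({
    "name": ["College A", "College B", "College C"],
    "grad_rate": [0.62, 0.45, 0.55],
    "predicted_grad_rate": [0.58, 0.50, 0.57],
    "pct_pell": [0.31, 0.42, 0.18],
    "default_rate_3yr": [0.06, 0.12, 0.04],
})

# Apply the four screens: graduation rate of at least 50%, meeting or
# beating the predicted graduation rate, at least 20% Pell enrollment,
# and a three-year default rate under 10%.
meets_all = colleges[
    (colleges["grad_rate"] >= 0.50)
    & (colleges["grad_rate"] >= colleges["predicted_grad_rate"])
    & (colleges["pct_pell"] >= 0.20)
    & (colleges["default_rate_3yr"] < 0.10)
]
print(meets_all["name"].tolist())  # only College A clears every bar
```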

This work caught the eye of the media, as I was asked to be on MSNBC’s “All In with Chris Hayes” on Friday night to discuss the rankings and their policy implications. Here is a link to the full segment, where I’m on with Matt Taibbi of Rolling Stone and well-known author Anya Kamenetz:

http://video.msnbc.msn.com/all-in-/52832257/

This was a fun experience, and now I can put the “As Seen on TV” label on my CV. (Right?) Seriously, though, stay tuned for the full Washington Monthly rankings coming out in the morning!

Yes, Student Characteristics Matter. But So Do Colleges.

It is no surprise to those in the higher education world that student characteristics and institutional resources are strongly associated with student outcomes. Colleges which attract academically elite students and have the ability to spend large sums of money on instruction and student support should be able to graduate more of their students than open-access, financially-strapped universities, even after holding factors such as teaching quality constant. But an article in today’s Inside Higher Ed shows that there is a great deal of interest in determining the correlation between inputs and outputs (such as graduation).

The article highlights two new studies that examine the relationship between inputs and outputs. The first, by the Department of Education’s Advisory Committee on Student Financial Assistance, breaks down graduation rates by the percentage of students who are Pell Grant recipients, per-student endowments, and ACT/SAT scores using IPEDS data. The second new study, by the president of Colorado Technical University, finds that four student characteristics (race, EFC, transfer credits, and full-time status) explain 74% of the variation in an unidentified for-profit college’s graduation rate. His conclusion is that “public [emphasis original] policy will not increase college graduates by focusing on institution characteristics.”

While these studies take different approaches (one using institutional-level data and the other using student-level data), they highlight the importance that student and institutional characteristics currently have in predicting student success rates. These studies are not novel or unique—they follow a series of papers in HCM Strategists’ Context for Success project in 2012 and even more work before that. I contributed a paper to the project (with Doug Harris at Tulane University) examining input-adjusted graduation rates using IPEDS data. We found R-squared values of approximately 0.74 using a range of student and institutional characteristics, although the predictive power varied by Carnegie classification. It is also worth noting that the ACSFA report calculated predicted graduation rates with an R-squared value of 0.80, but they control for factors (like expenditures and endowment) that are at least somewhat within an institution’s control and don’t allow for a look at cost-effectiveness.

This suggests the importance of taking a value-added approach in performance measurement. Just like K-12 education is moving beyond rewarding schools for meeting raw benchmarks and adopting a gain score approach, higher education needs to do the same. Higher education also needs to look at cost-adjusted models to examine cost-effectiveness, something which we do in the HCM paper and I have done in the Washington Monthly college rankings (a new set of which will be out later this month).

However, even if a regression model explains 74% of the variation in graduation rates, a substantial amount of the remaining variation can be attributed either to omitted variables (such as motivation) or to institutional actions. The article by the Colorado Technical University president takes exactly the wrong approach, saying that “student graduation may have little to do with institutional factors.” If his statement were accurate, we would expect colleges’ predicted graduation rates to be equal to their actual graduation rates. But, as anyone who has spent time on college campuses should know, institutional practices and policies can play an important role in retention and graduation. The 2012 Washington Monthly rankings included a predicted vs. actual graduation rate component. While Colorado Tech basically hit its predicted graduation rate of 25% (with an actual graduation rate one percentage point higher), other colleges did far better than expected given student and institutional characteristics. San Diego State University and Rutgers University-Newark, for example, outperformed their predictions by more than ten percentage points.
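
To illustrate the predicted-versus-actual idea, here is a minimal sketch using synthetic data (not the actual IPEDS model from the HCM paper or the Washington Monthly rankings): regress graduation rates on student and institutional characteristics, then treat the residual, actual minus predicted, as a rough measure of institutional performance.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic institution-level data standing in for IPEDS-style variables.
n = 500
df = pd.DataFrame({
    "pct_pell": rng.uniform(0.10, 0.60, n),
    "median_sat": rng.normal(1100, 120, n),
    "pct_full_time": rng.uniform(0.50, 1.00, n),
})
# A made-up relationship plus noise, just so there is something to fit.
df["grad_rate"] = (0.25 - 0.40 * df["pct_pell"] + 0.0004 * df["median_sat"]
                   + 0.20 * df["pct_full_time"] + rng.normal(0, 0.05, n))

# Regress actual graduation rates on the characteristics.
X = sm.add_constant(df[["pct_pell", "median_sat", "pct_full_time"]])
model = sm.OLS(df["grad_rate"], X).fit()

# "Performance" is the residual: actual minus predicted graduation rate.
df["predicted"] = model.predict(X)
df["performance"] = df["grad_rate"] - df["predicted"]

print(f"R-squared: {model.rsquared:.2f}")
print(df.sort_values("performance", ascending=False).head())
```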

While incoming student characteristics do affect graduation rates (and I’m baffled by the amount of attention on this known fact), colleges’ actions do matter. Let’s highlight the colleges which appear to be doing a good job with their inputs (and at a reasonable price to students and taxpayers) and see what we can learn from them.

How Not to Rate the Worst Professors

I was surprised to come across an article from Yahoo! Finance claiming knowledge of the “25 Universities with the Worst Professors.” (Maybe I shouldn’t have been surprised, but that is another discussion for another day.) The top 25 list includes many technology and engineering-oriented institutions, as well as liberal arts colleges. I am particularly amused by the inclusion of my alma mater (Truman State University) at number 21, as well as my new institution starting next fall (Seton Hall University) at number 16. Additionally, 11 of the 25 universities are located in the Midwest, with none in the South.

This unusual distribution immediately led me to examine the methodology of the list, which comes from Forbes and CCAP’s annual college rankings. The worst professors list is based on Rate My Professor, a website which allows students to rate their instructors on a variety of characteristics. The rankings use a mix of the helpfulness and clarity measures, with a partial control for a professor’s “easiness.”

I understand their rationale for using Rate My Professor, as it’s the only widespread source of information about faculty teaching performance. I’m not opposed to using Rate My Professor as part of this measure, but controlling for grades received and the course’s home discipline is essential. At many universities, science and engineering courses have much lower average grades, which may influence students’ perceptions of the professor. The same is true at certain liberal arts colleges.

The course’s home discipline is already included in the Rate My Professor data, and I recommend that Forbes and CCAP weight results by discipline in order to make comparisons across institutions more accurate. I would also push them to aggregate a representative sample of comments for each institution, so prospective students can learn more about what current students think beyond a Likert-scale score.
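
One simple way to do this kind of adjustment (not necessarily the approach Forbes and CCAP would take) is to center each rating on its discipline mean before averaging up to the institution. The data and column names below are hypothetical.

```python
import pandas as pd

# Hypothetical ratings; in practice these would be scraped Rate My Professor
# records with the course's home discipline attached.
ratings = pd.DataFrame({
    "institution": ["U1", "U1", "U1", "U2", "U2", "U2"],
    "discipline": ["engineering", "engineering", "english",
                   "english", "english", "engineering"],
    "quality": [3.1, 3.3, 4.2, 4.0, 4.4, 3.4],
})

# Center each rating on its discipline mean so institutions heavy in
# low-rated (often low-grading) fields aren't penalized for their mix.
ratings["adj_quality"] = (ratings["quality"]
                          - ratings.groupby("discipline")["quality"].transform("mean"))

# Compare institutions on the discipline-adjusted scale.
print(ratings.groupby("institution")["adj_quality"].mean())
```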

Student course evaluations are not going away (much to the chagrin of some faculty members), and they may be used in institutional accountability systems as well as a very small part of the tenure and promotion process. But like many of the larger college rankings, Forbes/CCAP’s work results in at best an incomplete and at worst a biased comparison of colleges. (And I promise that I will work hard on my helpfulness and clarity measures next fall!)

College Reputation Rankings Go Global

College rankings are not a phenomenon which is limited to the United States. Shanghai Jiao Tong University has ranked research universities for the past decade, and the well-known Times Higher Education rankings have been around for several years. While the Shanghai rankings tend to focus on metrics such as citations and research funding, THE has compiled a reputational ranking of universities around the world. Reputational rankings are a concern in U.S.-only rankings, but extending them to a global scale makes little sense to me.

Thomson Reuters (the group behind the THE rankings) makes a great fuss about the sound methodology of the reputational rankings, which, to their credit, they acknowledge is a subjective measure. They collected 16,639 responses from academics around the world, with some demographic information available here. But they fail to provide any information about the sampling frame, a devastating omission. The researchers behind the rankings do note that the initial sample was constructed to be broadly representative of global academics, but we know nothing about the response rate or whether the final sample was representative. In my mind, that omission disqualifies the rankings from further consideration. But I’ll push on and analyze the content of the reputational rankings.

The reputational rankings are a combination of separate ratings for teaching and research quality. I really don’t have serious concerns about the research component of the ranking, as the survey asks about research quality of given institutions within the academic’s discipline. Researchers who stay on top of their field should be able to reasonably identify universities with top research departments. I have much less confidence in the teaching portion of the rankings, as someone needs to observe classes in a given department to have any idea of teaching effectiveness. Yet I would be surprised if teaching and research evaluations were not strongly correlated.

The University of Wisconsin-Madison ranks 30th on the global reputation scale, with a slightly higher score for research than for teaching. (And according to the map, the university has been relocated to the greater Marshfield area.) That has not stopped Kris Olds, a UW-Madison faculty member, from leveling a devastating critique of the idea of global rankings—or the UW-Madison press office from putting out a favorable release on the news.

I have mixed emotions on this particular set of rankings; the research measure is probably capturing research productivity well, but the teaching measure is likely lousy. However, without more information about the response rate to the THE survey, I cannot view these rankings as being valid.

Another Random List of “Best Value” Colleges

Getting a good value for attending college is on the mind of most prospective students and their families, and as a result, numerous publishers of college rankings have come out with lists of “best value” colleges. I have highlighted the best value lists from Kiplinger’s and U.S. News in previous posts and have discussed my work incorporating a cost component into Washington Monthly’s rankings. Today’s entry in this series comes from the Princeton Review, a company better known for test preparation classes and private counseling but one that is also in the rankings business.

The Princeton Review released its “Best Value Colleges” list today in conjunction with USA Today, and it is heavily populated with a “who’s who” of selective, wealthy colleges and universities. Among the top ten private colleges, several are wealthy enough to be able to waive all tuition and fees for their few students from modest financial backgrounds. The top ten public institutions do tend to attract a fair number of out-of-state and full-pay students, although there is one surprise name on the list (North Carolina State University—well done!). More data on the top 150 colleges can be found here.

My main complaint with this ranking system, as with other best value colleges lists, is with the methodology. They begin by narrowing their sample from about 2,000 colleges to 650—what they call “the nation’s academically best undergraduate institutions.” This effectively limits the utility of these rankings to students who score a 25 or higher on the ACT, or even higher if students wish to qualify for merit-based grant aid. Student selectivity is further rewarded in the academic rating, even though selectivity does not guarantee future academic performance. Many of the measures in the academic and financial aid ratings come from student surveys, which are fraught with selection bias. Basically, many colleges handpick the students who take these surveys, which results in an overly optimistic set of opinions being registered. I wish I could say more about their methodology and point values, but no information is available.

The top 150 list (which can be found here by state) certainly favors wealthy, prestigious colleges with a few exceptions (University of South Dakota, University of Tennessee-Martin, and Southern Utah University, for example). In Wisconsin, only Madison and Eau Claire (two of the three most selective universities in the UW System) made the list. In the Big Ten, there are some notable omissions—Iowa (but Iowa State is included), Michigan State (but Michigan is included), Ohio State, and Penn State.

The best value rankings try to provide information about what college will cost and whether some colleges offer better “bang for the buck” than others. Providing useful information is an important endeavor, as this recent article in the Chronicle emphasizes. However, the Princeton Review’s list provides useful information to only a small number of academically elite students, many of whom have the financial means to pay for college without taking on much debt. This is illustrated by the accompanying USA Today article featuring the rankings, which notes that fewer than half of all students attending Best Value Colleges take on debt, compared to two-thirds of students nationwide. This differential isn’t just a result of the cost of attendance; it also reflects students’ ability to pay for college.

Bill Gates on Measuring Educational Effectiveness

The Bill and Melinda Gates Foundation has become a very influential force in shaping research in health and education policy over the past decade, both due to the large sums of money the foundation has spent funding research in these areas and because of the public influence that someone as successful as Bill Gates can have. (Disclaimer: I’ve worked on several projects which have received Gates funding.) In both the health and education fields, the Gates Foundation is focusing on the importance of being able to collect data and measure a program’s effectiveness. This is evidenced by the Gates Foundation’s annual letter to the public, which I recommend reading.

In the education arena, the Gates letter focuses on creating useful and reliable K-12 teacher feedback and evaluation systems. They have funded a project called Measures of Effective Teaching, which finds some evidence that it is possible to measure teacher effectiveness in a repeatable manner that can be used to help teachers improve. (A hat tip to my friend Trey Miller, who worked on the report.) To me, the important part of the MET report is that multiple measures, including evaluations, observations, and student scores, need to be used when assessing teaching effectiveness.

The Gates Foundation is also moving into performance measurement in higher education. I have been a part of one of Gates’s efforts in this arena—a project examining best practices in input-adjusted performance metrics. What this essentially means is that colleges should be judged based on some measure of their “value added” instead of the raw performance of their students. Last week, Bill Gates commented to a small group of journalists that college rankings are doing the exact opposite (as reported by Luisa Kroll of Forbes):

“The control metric shouldn’t be that kids aren’t so qualified. It should be whether colleges are doing their job to teach them. I bet there are community colleges and other colleges that do a good job in this area, but US News & World Report rankings pushes you away from that.”

The Forbes article goes on to mention that Gates would like to see metrics that focus on the performance of students from low-income families and the effectiveness of teacher education programs. Both of these measures are currently in progress, and are likely to continue moving forward given the Gates Foundation’s deep pockets and influence.

Transparency and Teacher Education Programs

I am a firm believer in the public’s right to know nearly everything about government-funded institutions unless there is a clear and compelling reason for privacy. For that reason, I have been following the University of Wisconsin System’s fight against the National Council on Teacher Quality (NCTQ), a group seeking to make information on the standards of teacher education programs public. In conjunction with U.S. News and World Report, NCTQ is compiling course syllabi, textbooks, student handbooks, and other information to rate education schools based on whether they are adequately preparing future K-12 teachers for their professions.

Many public colleges and universities (the full list is here) have objected to this review process on the grounds that the proposed methodology is inadequate for rating colleges. (Yet these same colleges boast about their U.S. News rankings in other areas, even though those rankings are just as flawed.) The University of Wisconsin System has long refused to cooperate with NCTQ on this, as evidenced by their March 2011 letter to NCTQ.

Yet the UW System and many other public universities are failing the public trust by refusing to make important information produced by public employees available at a reasonable cost. The Wisconsin Institute for Law and Liberty, a Milwaukee-based public interest law firm, sued the UW System last January on behalf of NCTQ to get the records turned over. WILL’s suit was ultimately successful in obtaining its objective, as the UW System agreed to turn over the relevant materials and pay WILL nearly $10,000 in damages and fees after obtaining additional privacy assurances.

Wisconsin taxpayers and students will foot the bill for the UW System’s initial refusal to make information public under open records laws. This is a big PR mistake for Wisconsin higher education, as it gives the appearance that universities think they are above accountability—this isn’t a good thing in the current political climate, to say the least.

Now on to the meat of the new rankings, which should come out sometime this year. There are 17 standards which will be a part of the rankings, centered on four areas:

(1)    Selectivity of teacher education programs and students’ incoming academic characteristics

(2)    Teacher knowledge of subject matter

(3)    Classroom management and student teaching skills

(4)    Outcomes of graduates’ future classes on state tests

As regular readers of this blog know, I’m not a fan of the selectivity criterion. If a college does a good job of training teachers, who cares about their ACT score? But the other three measures are certainly important; the question is whether the available data will be sufficient to accurately rate programs and provide stakeholders with useful information.

I expect a big fuss when these ratings are released, just like there is a big fuss whenever the U.S. News undergraduate rankings are released every fall. While I’m concerned about the ability to draw conclusions from available data, these ratings will provide information about whether institutions are collecting relevant types of data (such as their graduates’ outcomes) and certainly won’t be any worse than the peer rating part of the undergraduate rankings that has existed for nearly three decades.

Examining Kiplinger’s Best Value Colleges

Not too many articles on higher education feature my alma mater, Truman State University. In spite of a long tradition of internal accountability and doing a good job of graduating students on a shoestring budget, Truman lacks the name recognition of larger universities in most circles. This is why I was surprised to see the article discussing Kiplinger’s Best Values in Public Colleges feature Truman so prominently.

[Photo: Winter at Truman State University]

Kiplinger’s ranks the top 100 public four-year colleges and universities based on a combination of five different measures, with the point values being just as arbitrary as those in all of the other rankings (including the Washington Monthly rankings that I compiled last fall). This is in spite of the claim that “neither our opinion nor anyone else’s affects the calculation.” While this may be true in the strictest sense, someone had to determine the point values!

The methodology is as follows:

(1)    Total cost of attendance and net price (after subtracting grant aid)—35%. This is calculated separately for in-state and out-of-state students.

(2)    Academic competitiveness (ACT/SAT scores, admit rate, and yield)—22.5%.

(3)    Graduation rates (four-year and six-year)—18.75%.

(4)    Academic support (retention rates and students per FTE faculty)—13.75%.

(5)    Student debt at graduation—10%.

As most college rankings are prone to do, the Kiplinger’s best value list still unnecessarily rewards colleges for being highly selective, both in the academic competitiveness and graduation measures. The focus on cost is very useful, although it does to some extent reward colleges in states which provide more public support (this is good for the student, but not necessarily as good for the taxpayer).
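
To show where the subjectivity enters, here is a minimal sketch of a weighted composite using Kiplinger’s published weights. The component scores are made up and assumed to be already standardized; the point is simply that someone had to pick the weights, and changing them changes the ranking.

```python
import pandas as pd

# Hypothetical, already-standardized component scores (higher = better);
# real data would need to be rescaled to a common range first.
scores = pd.DataFrame({
    "cost_and_net_price": [0.80, 0.50, 0.90],
    "competitiveness":    [0.90, 0.60, 0.40],
    "grad_rates":         [0.85, 0.70, 0.60],
    "academic_support":   [0.70, 0.80, 0.50],
    "low_debt":           [0.60, 0.90, 0.80],
}, index=["College A", "College B", "College C"])

# Kiplinger's published weights; a different set of point values would
# produce a different ordering.
weights = {
    "cost_and_net_price": 0.35,
    "competitiveness":    0.225,
    "grad_rates":         0.1875,
    "academic_support":   0.1375,
    "low_debt":           0.10,
}

composite = sum(scores[col] * w for col, w in weights.items())
print(composite.sort_values(ascending=False))
```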

I do have one other gripe with the Kiplinger’s rankings—they are done separately for public and private colleges (the private college list came out last month). The editors should combine the two lists so the information can be more useful for students and their families. With that being said, the information in these lists is certainly useful to a segment of the college-going population.

More Fun With College Rankings

I was recently interviewed by Koran Addo of the (Baton Rouge) Advocate regarding my work with the Washington Monthly college rankings. I’ve had quite a few phone and e-mail exchanges with college officials and the media about my work, but I want to highlight the resulting article both because it was extremely well done and because it highlights what I consider to be the foolish obsession with college rankings.

Two pieces of the article deserve special attention. First, consider this tidbit:

“LSU System President and Baton Rouge Chancellor William Jenkins said he was ‘clearly disappointed’ to learn that LSU had tumbled six spots from 128th last year to 134th in the U.S. News ‘Best Colleges 2013’ list.”

I wish that college rankings came with confidence intervals—which would provide a rough guide of whether a change over time is more than what we would expect by chance or statistical noise. Based on my work with rankings, I can safely say that such a small change in the rankings is not statistically significant and certainly not educationally meaningful.
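
Here is a rough sketch of what such a confidence interval could look like: perturb the underlying composite scores with a plausible amount of measurement noise and see how much a college’s rank bounces around. Everything below, from the scores to the noise level, is invented for illustration and has nothing to do with how U.S. News actually computes its rankings.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented composite scores for 200 institutions; the focal college sits
# somewhere in the middle of the pack.
scores = rng.normal(50, 10, 200)
focal = 128  # index of the focal college

# Add a plausible amount of measurement noise to every score and record
# where the focal college lands each time.
ranks = []
for _ in range(5000):
    noisy = scores + rng.normal(0, 1.5, scores.size)  # assumed noise level
    ranks.append(1 + np.sum(noisy > noisy[focal]))    # 1 = top-ranked

low, high = np.percentile(ranks, [2.5, 97.5])
print(f"95% interval for the focal college's rank: {low:.0f} to {high:.0f}")
```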

The next fun quote from the article is from LSU’s director of research and economic development, Nicole Baute Honorée. She argues that only rankings from the National Science Foundation matter:

“Universities are in the knowledge business, as in creating new knowledge and passing it along. That’s why the NSF rankings are the gold standard.”

The problem is that research expenditures (a) do not guarantee high-quality undergraduate education, (b) do not have to be used effectively in order to generate a high score, and (c) do not reward many disciplines (such as the humanities). They are a useful measure of research clout in the sciences, but I would rely on them as only one of many measures (which is what the Washington Monthly rankings have done since long before I took the reins).

Once again, I urge readers not to rely on a single measure of college quality—and to make sure any measure is actually aligned with student success.