Blog (Kelchen on Education)

Comparing the US News and Washington Monthly Rankings

In yesterday’s post, I discussed the newly released 2014 college rankings from U.S. News & World Report and how they changed from last year. In spite of some changes in methodology that were billed as “significant,” the R-squared value when comparing this year’s rankings with last year’s rankings among ranked national universities and liberal arts colleges was about 0.98. That means that 98% of the variation in this year’s rankings can be explained by last year’s rankings—a nearly perfect prediction.

In today’s post, I compare the results of the U.S. News rankings to those from the Washington Monthly rankings for national universities and liberal arts colleges ranked by both sources. The Washington Monthly rankings (which I compiled as the magazine’s consulting methodologist) are based on three criteria: social mobility, research, and service, none of which are particular goals of the U.S. News rankings. Yet it could still be the case that colleges that recruit high-quality students, have lots of resources, and have great reputations (the main factors in the U.S. News rankings) also do a good job recruiting students from low-income families, producing outstanding research, and graduating servant-leaders.

The results of my comparisons show large differences between the two sets of rankings, particularly at liberal arts colleges. The R-squared value at national universities is 0.34, but only 0.17 at liberal arts colleges, as shown below:

[Scatterplot: Washington Monthly vs. U.S. News ranks, national universities]

[Scatterplot: Washington Monthly vs. U.S. News ranks, liberal arts colleges]

It is worth highlighting some of the colleges that are high on both rankings. Harvard, Stanford, Swarthmore, Pomona, and Carleton all rank in the top ten in both magazines, showing that it is possible to be both highly selective and serve the public in an admirable way. (Of course, we should expect that to be the case given the size of their endowments and their favorable tax treatment!) However, Middlebury and Claremont McKenna check in around 100th in the Washington Monthly rankings in spite of a top-ten U.S. News ranking. These well-endowed institutions don’t seem to have the same commitment to the public good as some of their highly selective peers.

On the other hand, colleges ranked lower by U.S. News do well in the Washington Monthly rankings. Some examples include the University of California-Riverside (2nd in WM, 112th in U.S. News), Berea College (3rd in WM, 76th in U.S. News), and New College of Florida (8th in WM, 89th in U.S. News). If nothing else, their high positions in the Washington Monthly rankings give these institutions a chance to toot their own horns and highlight their successes.

I fully realize that only a small percentage of prospective students will be interested in the Washington Monthly rankings compared to those from U.S. News. But it is worth highlighting the differences across college rankings so students and policymakers can decide what institutions are better for them given their own demands and preferences.

Breaking Down the 2014 US News Rankings

Today is a red-letter day for many people in the higher education community—the release of the annual college rankings from U.S. News and World Report. While many people love to hate the rankings for an array of reasons (from the perceived focus on prestige to a general dislike of accountability in some sectors), their influence on colleges and universities is undeniable. Colleges love to put out press releases touting their place in the rankings even while decrying their general premise.

I’m no stranger to the college ranking business, having been the consulting methodologist for Washington Monthly’s annual college rankings for the past two years. (All opinions in this piece, of course, are my own.) While Washington Monthly’s rankings rank colleges based on social mobility, service, and research performance, U.S. News ranks colleges primarily based on “academic quality,” which consists of inputs such as financial resources and standardized test scores as well as peer assessments for certain types of colleges.

I’m not necessarily in the U.S. News-bashing camp here, as they provide a useful service for people who are interested in prestige-based rankings (which I think is most people who want to buy college guides). But the public policy discussion, driven in part by the President’s proposal to create a college rating system, has been moving toward an outcome-based focus. The Washington Monthly rankings do capture some elements of this focus, as can be seen in my recent appearance on MSNBC and an outstanding panel discussion hosted by New America and Washington Monthly last week in Washington.

Perhaps in response to criticism or the apparent direction of public policy, Robert Morse (the well-known and well-respected methodologist for U.S. News) announced some changes last week in the magazine’s methodology for this year’s rankings. The changes place slightly less weight on peer assessment and selectivity, while putting slightly more weight on graduation rate performance and graduation/retention rates. Yet Morse bills the changes as meaningful, noting that “many schools’ ranks will change in the 2014 [this year’s] edition of the Best Colleges rankings compared with the 2013 edition.”

But the rankings have tended to be quite stable from year to year (here are the 2014 rankings). The top six research universities in the first U.S. News survey (in 1983—based on peer assessments by college presidents) were Stanford, Harvard, Yale, Princeton, Berkeley, and Chicago, with Amherst, Swarthmore, Williams, Carleton, and Oberlin being the top five liberal arts colleges. All of the research universities except Berkeley are in the top six this year and all of the liberal arts colleges except Oberlin are in the top eight.

In this post, I’ve examined all national universities (just over 200) and liberal arts colleges (about 180) ranked by U.S. News in this year’s and last year’s rankings. Note that this is only a portion of qualifying colleges, but the magazine doesn’t rank lower-tier institutions. The two graphs below show the changes in the rankings for national universities and liberal arts colleges between the two years.

[Scatterplot: 2013 vs. 2014 U.S. News ranks, national universities]

[Scatterplot: 2013 vs. 2014 U.S. News ranks, liberal arts colleges]

The first thing that jumps out at me is the high R-squared, around 0.98 for both classifications. What this essentially means is that 98% of the variation in this year’s rankings can be explained by last year’s rankings—a remarkable amount of persistence even when considering the slow-moving nature of colleges. The graphs show more movement among liberal arts colleges, which are much smaller and can be affected by random noise much more than large research universities.
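
Concretely, the R-squared here is just the squared correlation between the two years’ ranks. A minimal sketch, using made-up ranks for eight hypothetical colleges (the rank values are illustrative only, not actual U.S. News data):

```python
def r_squared(xs, ys):
    """Squared Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov ** 2 / (vx * vy)

last_year = [1, 2, 3, 4, 5, 6, 7, 8]
this_year = [1, 3, 2, 4, 5, 7, 6, 8]   # a few swaps, mostly stable

print(round(r_squared(last_year, this_year), 2))  # → 0.91
```

Even with three pairs of colleges trading places, the R-squared stays above 0.9, which gives a feel for just how little movement an R-squared of 0.98 implies.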

The biggest blip in the national university rankings is South Carolina State, which went from 147th last year to unranked (no higher than 202nd) this year. Other universities which fell more than 20 spots are Howard University, the University of Missouri-Kansas City, and Rutgers University-Newark, all urban and/or minority-serving institutions. Could the change in formulas have hurt these types of institutions?

In tomorrow’s post, I’ll compare the U.S. News rankings to the Washington Monthly rankings for this same sample of institutions. Stay tuned!

Policy Options for Pell Reform: The CBO’s Analysis

The federal Pell Grant program has grown dramatically over the past decade, due to both the effects of the Great Recession and changes to the program that made it more generous to students from low- to middle-income families. As spending has more than doubled since 2006 (although it slightly fell in the most recent year for which data is available), some in Congress have grown concerned about the sustainability of the program. This led Senator Jeff Sessions (R-AL), ranking member of the Senate Budget Committee, to request a review of Pell spending and information about the likely costs of various reform options going forward.

The Congressional Budget Office, the nonpartisan agency charged with “scoring” fiscal proposals, released a report yesterday summarizing the estimated fiscal effects of a host of changes to the Pell program. (Inside Higher Ed has a nice summary of the report.) While the goal of the requesting Senator may have been to find ways to lower spending on the program by better targeting awards, the CBO also looked at proposals to make the Pell program more generous and to simplify Pell eligibility.

While I’m glad that the CBO looked at the fiscal effects of various changes to restrict or expand eligibility, I think that Congress will make those decisions on a year-to-year basis (pending the availability of funds) instead of thinking forward over a ten-year window. However, it is notable that the proposal to restrict Pell Grants to students with an expected family contribution of zero—by far the students with the greatest need—would only cut expenditures by $10 billion per year, or just over one-fourth of the program cost. I am more interested in the CBO’s cost estimates for simplifying eligibility criteria. They propose two possible reforms, which are discussed in more detail on pages 24 and 25 of the report.

Proposal 1: Simplify the FAFSA by requiring students and their families to provide only income data from tax returns, instead of also pulling in asset data and income from other sources. This would slightly affect targeting, as some resources would be unknown to the government, but research has shown that basic income data predicts Pell awards well for most students. The CBO estimates that about two percent more students would receive the Pell Grant and that about one in five students would see an increase of approximately $350. This is estimated to increase program costs by $1 billion per year, or less than 3% of the annual program cost.

Proposal 2: Tie Pell eligibility to federal poverty guidelines instead of EFCs. I am quite interested in this idea, as it would greatly streamline the financial aid eligibility process, though I’m not sure whether it is the best idea out there. The federal poverty guidelines vary by household size (and by state of residency for Alaska and Hawaii), and a family’s income relative to the guideline could be used to determine Pell eligibility. This is indirectly done right now through means-tested benefit programs; for example, eligibility for the free/reduced-price lunch program is based on the poverty line (130% for free, 185% for reduced-price). Since students with a family member receiving free or reduced-price lunch can already qualify for a simpler FAFSA, this may not be such a leap. The CBO estimates that about one in ten students would have their Pell status affected by this option and that costs would fall by $1.4 billion per year, although the percent of poverty used (up to 250%) would likely be changed in the legislative process.
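
As a rough sketch of how such a calculation might look: the 2013 poverty guidelines for the 48 contiguous states were $11,490 for a one-person household plus $4,020 per additional member. The simple eligibility rule below is my own illustration, not the CBO’s actual schedule:

```python
# 2013 poverty guidelines, 48 contiguous states (Alaska and Hawaii differ).
def poverty_guideline(household_size):
    return 11_490 + 4_020 * (household_size - 1)

def percent_of_poverty(income, household_size):
    return 100 * income / poverty_guideline(household_size)

# Hypothetical Pell rule: eligible up to 250% of the poverty line.
def eligible_for_pell(income, household_size, cutoff=250):
    return percent_of_poverty(income, household_size) <= cutoff

# The same ratio already drives school-lunch eligibility:
# free at <=130% of poverty, reduced-price at <=185%.
def lunch_status(income, household_size):
    pct = percent_of_poverty(income, household_size)
    if pct <= 130:
        return "free"
    if pct <= 185:
        return "reduced-price"
    return "full price"

# A family of four earning $30,000 sits at about 127% of poverty.
print(lunch_status(30_000, 4))  # → free
```

The appeal of this approach is visible in the code: a single income-to-poverty ratio replaces the entire EFC formula.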

In the alternatives section of the report (page 26), the CBO discusses committing Pell funds to students in middle and high school, noting that such a program could increase academic and financial preparation for postsecondary education. This sounds very similar to a paper that Sara Goldrick-Rab and I wrote on a possible early commitment Pell program (a citation would have been nice!), but the CBO doesn’t provide any cost estimates for that idea. We estimate in our paper that the program would cost about $1.5 billion per year, with the federal government likely to at least break even in the long run via increased tax payments (something not discussed for any of the policy options in the brief).

I’m glad to see this report on possible options for Pell reform, and I hope the CBO will continue to get requests to score and examine innovative ideas to improve the delivery of financial aid.

“Bang for the Buck” and College Ratings

President Obama made headlines in the higher education world last week with a series of speeches about possible federal plans designed to bring down the cost of college. While the President made several interesting points (such as cutting law school from three years to two), the most interesting proposal to me was his plan to create a series of federal ratings based on whether colleges provide “good value” to students, and then to tie funding to those ratings.

How could those ratings be constructed? As noted by Libby Nelson in Politico, the federal government plans to publish currently collected data on the net price of attendance (what students pay after taking grant aid into account), average borrowing amounts, and enrollment of Pell Grant recipients. Other measures could potentially be included, some of which are already collected but not readily available (graduation rates for Pell recipients) and others which would be brand new (let your imagination run wild).

Regular readers of this blog are probably aware of my work with Washington Monthly magazine’s annual set of college rankings. Last year was my first year as the consulting methodologist, meaning that I collected the data underlying the rankings, compiled it, and created the rankings—including a new measure of cost-adjusted graduation rate performance. This measure seeks to reward colleges which do a good job serving and graduating students from modest economic means, a far cry from many prestige-based rankings.

The metrics in the Washington Monthly rankings are at least somewhat similar to those proposed by President Obama in his speeches. As a result, we bumped up the release of the new 2013 “bang for the buck” rankings to Thursday afternoon. These rankings reward colleges which performed well on four different metrics:

  • Have a graduation rate of at least 50%.
  • Match or exceed their predicted graduation rate given student and institutional characteristics.
  • Have at least 20% of students receive Pell Grants (a measure of effort in enrolling low-income students).
  • Have a three-year student loan default rate of less than 10%.
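
A hedged sketch of that screen as code (the field names and sample records are hypothetical; the four thresholds come from the list above):

```python
# Apply the four "bang for the buck" criteria to a college record.
def meets_all_criteria(college):
    return (college["grad_rate"] >= 0.50
            and college["grad_rate"] >= college["predicted_grad_rate"]
            and college["pct_pell"] >= 0.20
            and college["default_rate"] < 0.10)

# Two made-up colleges for illustration.
colleges = [
    {"name": "Example State U", "grad_rate": 0.62, "predicted_grad_rate": 0.55,
     "pct_pell": 0.38, "default_rate": 0.06},
    {"name": "Selective Private C", "grad_rate": 0.95, "predicted_grad_rate": 0.93,
     "pct_pell": 0.12, "default_rate": 0.01},   # fails the Pell threshold
]

qualifiers = [c["name"] for c in colleges if meets_all_criteria(c)]
print(qualifiers)  # → ['Example State U']
```

Note how the second college, despite a sterling graduation rate and default rate, is screened out solely by its low Pell enrollment, which is exactly the pattern that kept most Ivies off the list.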

Only one in five four-year colleges in America met all four of those criteria, which elevates a different group of colleges than those normally celebrated. Colleges such as CUNY Baruch College and Cal State University-Fullerton ranked well, while most Ivy League institutions failed to make the list due to Pell Grant enrollment rates in the teens.

This work caught the eye of the media, as I was asked to be on MSNBC’s “All in with Chris Hayes” on Friday night to discuss the rankings and their policy implications. Here is a link to the full segment, where I’m on with Matt Taibbi of Rolling Stone and well-known author Anya Kamenetz:

http://video.msnbc.msn.com/all-in-/52832257/

This was a fun experience, and now I can put the “As Seen on TV” label on my CV. (Right?) Seriously, though, stay tuned for the full Washington Monthly rankings coming out in the morning!

Financial Aid as a Paycheck?

President Obama is set to make a series of speeches this week addressing college affordability—a hot topic on college campuses as new students move into their dorm rooms. An article in this morning’s New York Times provides some highlights of the plan. While there are other interesting proposals, most notably tying funding to some measure of college success, I’m focusing this brief post on the idea to disburse Pell Grants throughout the semester—“aid like a paycheck.”

The goal of “aid like a paycheck” is to spread grant aid disbursals out through the semester so students take ownership of their education. Sounds great, right? The problem is that it has only been tested at a small number of community colleges in low-tuition states, such as California. If a student has more financial aid than is needed to cover tuition and fees, there is “extra” aid to spread out. But this doesn’t apply to the vast majority of students, particularly those at four-year schools. Spreading out aid awards for students with unmet need creates an even bigger financial gap at the beginning of the semester.

In order for “aid like a paycheck” to work for the vast majority of students, we have to make other costs look like a monthly bill. If students still have to pay for tuition, books, and housing upfront (or face a hefty interest rate), this program will create a yawning financial gap. If colleges want to be accountable to students, perhaps they should bill students per month for their courses; that way, dropped courses hurt the institution’s bottom line more than the student’s. This would delay funds coming into a college, which can mean a loss of interest income given the large amounts of tuition revenue involved.

Before we try “aid like a paycheck” on a large scale, Mr. President, let’s try making colleges get their funds from students in that same way. And let’s also get some research on how it works for students whose financial need isn’t fully met by the Pell Grant. The feds have the power to try demonstration programs, and this would be worth a shot.

Simplifying the FAFSA–How Far Can We Go?

It is painfully obvious to students, their families, and financial aid administrators alike that the current system of determining federal financial aid eligibility is incredibly complex and time-consuming. Although there should be broad support for changes to the financial aid system, any progress has been halting at best. I have devoted much of my time to researching and discussing potential changes to the financial aid system. Below is some of my work, going from relatively minor to major changes.

I’ve been working on an ongoing study with the National Association of Student Financial Aid Administrators examining the extent to which students’ financial aid packages would change if income data from one year earlier (the “prior-prior year”) than is currently used were to be used in the FAFSA calculations. Although a full report from this study won’t be out until sometime next month, here is a nice summary of the work from the Chronicle of Higher Education. The key point from this work is that, since family resources don’t change that much for students with the greatest financial need, students could file their FAFSA several months earlier using income data from the prior-prior year without a substantial change in aid targeting.

Under a prior-prior year system, students would still have to file the FAFSA each year. Given the fact that many students don’t see that much income volatility, there is a case to be made that students should only have to file the FAFSA once—at the beginning of college—unless their family or financial circumstances change by a considerable margin. In a piece hot off the virtual presses at the Chronicle, Sara Goldrick-Rab and I discuss why it would be better for many students to only have to file the FAFSA once.  I would like to know more about the costs and benefits of such a program (weighing the benefits of reduced complexity and administrative compliance costs versus the likelihood of higher aid spending), but the net fiscal cost is likely to be small or even positive.

So let’s take this one step further. Do we even need to have all students file the FAFSA? Sara and I have looked at the possibility of automatically granting students the maximum Pell Grant if anyone in their family qualifies for means-tested benefits (primarily free and reduced price lunches). We detail the results of our fiscal analysis and simulation in an Institute for Research on Poverty working paper, where we find that such a program is likely to remain reasonably well targeted and pass a cost-benefit test in the long run.

There is a broad menu of options available to simplify the FAFSA, from giving students more time to complete the form to getting rid of it altogether. Let’s talk more about these options (plus many more) and actually get something done that can help all stakeholders in the higher education arena.

Yes, Student Characteristics Matter. But So Do Colleges.

It is no surprise to those in the higher education world that student characteristics and institutional resources are strongly associated with student outcomes. Colleges which attract academically elite students and have the ability to spend large sums of money on instruction and student support should be able to graduate more of their students than open-access, financially-strapped universities, even after holding factors such as teaching quality constant. But an article in today’s Inside Higher Ed shows that there is a great deal of interest in determining the correlation between inputs and outputs (such as graduation).

The article highlights two new studies that examine the relationship between inputs and outputs. The first, by the Department of Education’s Advisory Committee on Student Financial Assistance, breaks down graduation rates by the percentage of students who are Pell Grant recipients, per-student endowments, and ACT/SAT scores using IPEDS data. The second new study, by the president of Colorado Technical University, finds that four student characteristics (race, EFC, transfer credits, and full-time status) explain 74% of the variation in an unidentified for-profit college’s graduation rate. His conclusion is that “public [emphasis original] policy will not increase college graduates by focusing on institution characteristics.”

While these studies take different approaches (one using institutional-level data and the other using student-level data), they highlight the importance that student and institutional characteristics currently have in predicting student success rates. These studies are not novel or unique—they follow a series of papers in HCM Strategists’ Context for Success project in 2012 and even more work before that. I contributed a paper to the project (with Doug Harris at Tulane University) examining input-adjusted graduation rates using IPEDS data. We found R-squared values of approximately 0.74 using a range of student and institutional characteristics, although the predictive power varied by Carnegie classification. It is also worth noting that the ACSFA report calculated predicted graduation rates with an R-squared value of 0.80, but they control for factors (like expenditures and endowment) that are at least somewhat within an institution’s control and don’t allow for a look at cost-effectiveness.
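
As a rough illustration of this kind of input-adjusted analysis, the sketch below fits a one-variable least-squares model to simulated data. Real analyses use many IPEDS covariates; the single “inputs index,” the coefficients, and the noise level here are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake data: 50 colleges with an inputs index (think test scores, resources)
# and graduation rates that depend on inputs plus noise.
inputs = rng.uniform(0, 1, 50)
grad = 0.25 + 0.5 * inputs + rng.normal(0, 0.08, 50)

# Ordinary least squares via numpy (coefficients highest degree first).
slope, intercept = np.polyfit(inputs, grad, 1)
predicted = intercept + slope * inputs
residual = grad - predicted          # the "value added" left over

ss_res = np.sum(residual ** 2)
ss_tot = np.sum((grad - grad.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R-squared: {r2:.2f}")        # inputs explain much, but not all
```

The residual column is the interesting part for policy: a college well above its predicted rate is doing something its inputs alone can’t explain.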

This suggests the importance of taking a value-added approach in performance measurement. Just like K-12 education is moving beyond rewarding schools for meeting raw benchmarks and adopting a gain score approach, higher education needs to do the same. Higher education also needs to look at cost-adjusted models to examine cost-effectiveness, something which we do in the HCM paper and I have done in the Washington Monthly college rankings (a new set of which will be out later this month).

However, even if a regression model explains 74% of the variation in graduation rates, a substantial amount of the remainder can be attributed either to omitted variables (such as motivation) or to institutional actions. The article by the Colorado Technical University president takes exactly the wrong approach, saying that “student graduation may have little to do with institutional factors.” If his statement were accurate, we would expect colleges’ predicted graduation rates to equal their actual graduation rates. But, as anyone who has spent time on college campuses should know, institutional practices and policies can play an important role in retention and graduation. The 2012 Washington Monthly rankings included a predicted vs. actual graduation rate component. While Colorado Tech basically hit its predicted graduation rate of 25% (with an actual graduation rate one percentage point higher), other colleges outperformed their predictions given student and institutional characteristics. For example, San Diego State University and Rutgers University-Newark, among others, beat their predictions by more than ten percentage points.

While incoming student characteristics do affect graduation rates (and I’m baffled by the amount of attention on this known fact), colleges’ actions do matter. Let’s highlight the colleges which appear to be doing a good job with their inputs (and at a reasonable price to students and taxpayers) and see what we can learn from them.

Does This Explain Opposition to Market-Based Interest Rates?

As of this writing, it appears that the U.S. Senate has finally reached an agreement on student loan interest rates after subsidized Stafford rates doubled from 3.4% to 6.8% on July 1. The general terms of the agreement are similar to what President Obama proposed in his FY 2014 budget and what the House of Representatives agreed to back in June, with some compromises on each side.

The big difference between current law and the Senate agreement is that interest rates for nearly all student loans will be tied to the 10-year Treasury rate, which currently sits at about 2.5%. (Undergraduates would pay a 2.05-percentage-point premium above the Treasury rate on Stafford loans to account for program costs and the risk of offering the loans.) However, the Treasury rate is expected to rise to 5.6% by 2016, pushing the interest rate for undergraduates from less than 5% to 7.65%. The plan includes a cap at 8.25%, which CBO projections suggest may eventually be reached.
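
The rate formula described above is simple enough to sketch directly. The function below encodes the Treasury-plus-premium structure with the 8.25% cap (the function name is mine; the 2.05-point premium and the cap come from the agreement as described above):

```python
# Undergraduate Stafford rate under the Senate agreement:
# 10-year Treasury rate plus a 2.05-point premium, capped at 8.25%.
def undergrad_stafford_rate(treasury_rate, premium=2.05, cap=8.25):
    return min(treasury_rate + premium, cap)

print(round(undergrad_stafford_rate(2.5), 2))   # today's ~2.5% Treasury → 4.55
print(round(undergrad_stafford_rate(5.6), 2))   # projected 2016 Treasury → 7.65
print(round(undergrad_stafford_rate(7.0), 2))   # high-rate scenario → 8.25 (capped)
```

Two things fall out of the formula: students borrowing today get a better deal than the fixed 6.8% rate, and the cap only binds once the Treasury rate exceeds 6.2%.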

The Senate agreement is not without its critics, particularly on the political Left. Senator Bernie Sanders, a Vermont independent and a self-described “socialist,” criticized the plan as “dangerous” in an article in The Hill. His criticism lies in the fact that interest rates can rise well above the current 6.8% over time, a very real concern given the interest rate projections. While the plan is expected to pass the Senate (and the House), some other Senate Democrats will likely vote no as well.

At this point in the great interest rate debate, I have to wonder if there is another reason some politicians oppose market-based interest rates. Tying student loan interest rates to the 10-year Treasury note directly connects students’ future payments to the cost of federal borrowing. And that cost of federal borrowing is influenced by the federal government’s fiscal policy.

This connection between federal borrowing and student loan rates could potentially have the following repercussions. If loans are tied to Treasury notes—and there is no way to fix the rate as has been done for the past seven years—students should have an incentive to push for federal policies which lower the federal government’s cost of borrowing. (With the decline in home ownership rates among younger adults, fewer 20- and 30-somethings have mortgages, which are affected by federal borrowing costs.)

The policy that best reduces the cost of borrowing is a balanced budget, which reduces the need for additional borrowing. The passage of the Omnibus Budget Reconciliation Act of 1993, which reduced the budget deficit through a combination of tax hikes and spending cuts, had the effect of driving down long-term interest rates. (For more on this, I highly recommend reading Bob Woodward’s Maestro about Alan Greenspan’s role in the policy discussions.)

My question to readers is whether you think that some politicians may oppose market-based interest rates because more young adults may place pressure on Congress to find some legislative solution to balance the budget—although the solutions certainly vary by political persuasion. Say it’s 2016 and undergraduate Stafford rates are 7.5%, with hitting the 8.25% cap becoming more likely. Could we see student advocacy organizations pushing for a balanced budget to bring down interest rates? I don’t know how many people will think this way, but it’s something to consider.

Can “Paying it Forward” Work?

While Congress is deadlocked on what to do regarding student loan interest rates (I have to note here that interest rates on existing loans WILL NOT CHANGE!), others have pushed forward with innovative ways to make college more affordable. I wrote last fall about an innovative proposal from the Economic Opportunity Institute, a liberal think tank in Washington State, which suggests an income-based repayment program for students attending that state’s public colleges and universities. The Oregon Legislature just approved, with bipartisan support after a short period of discussion, a plan to try out a version of the EOI’s program.

This proposal, which the EOI refers to as “Pay It Forward,” is similar to how college is financed in Australia. It would charge students no tuition or fees upfront and would require students to sign a contract stating that they would pay a certain percentage of their adjusted gross income (possibly three percent in total, or one percent for each year in college) for at least 20 years after leaving college. It appears that the state would rely on the IRS to enforce payment in order to capture part of the earnings of those who leave Oregon. This would be tricky to enforce in practice, given the IRS’s general reticence to step into state-level policies.
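
To get a feel for the repayment stream, here is a minimal sketch. The 3% share and 20-year term come from the proposal as described above; the starting salary and wage-growth assumption are hypothetical:

```python
# Nominal total repaid under a Pay It Forward-style contract:
# a fixed share of adjusted gross income each year for a fixed term.
def total_repaid(starting_agi, share=0.03, years=20, annual_growth=0.03):
    total = 0.0
    agi = starting_agi
    for _ in range(years):
        total += share * agi
        agi *= 1 + annual_growth   # hypothetical wage growth
    return total

# A graduate starting at $40,000 with 3% annual wage growth:
print(round(total_repaid(40_000)))  # total nominal payments over 20 years
```

The sensitivity is the point: doubling the starting salary doubles the total repaid, which is why the program’s finances hinge so heavily on graduates’ labor market outcomes.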

While I am by no means convinced by the simulations conducted regarding the feasibility of the program, I think the idea is worth a shot as a demonstration program. I suspect the cost of the program will be larger than expected, especially since income-based repayment decouples the cost of college from what students pay: colleges suddenly have a strong incentive to raise their posted tuition substantially in order to capture this additional revenue. In addition to the demonstration program, I would like to see a robust set of cost-effectiveness estimates under different enrollment, labor market, and repayment parameters. I’ve done this before in my research examining the feasibility of a hypothetical early commitment Pell program.

Needless to say, I’ll be keeping an eye on this program moving forward to see how the demonstration program plays out. It has the potential to change state funding of higher education, and at the very least will be an interesting program to evaluate.

The Vast Array of Net Price Calculators

Net price calculators are designed to give students and their families a clear idea of how much college will cost them each year after taking available financial aid into account. All colleges are required to post a net price calculator under the Higher Education Opportunity Act of 2008, but these calculators take a range of different forms. The Department of Education has proposed a standardized “shopping sheet” that has been adopted by some colleges, but there is still wide variation in net price calculators across institutions, as shown in a 2012 report by The Institute for College Access and Success that examined 50 randomly selected colleges across the country.

In this blog post, I examine net price calculators from six University of Wisconsin System institutions for the 2013-14 academic year. Although these colleges might be expected to have similar net price calculators and cost assumptions, this is far from the case, as the screenshots below show. In all cases, I used the same student profile: an in-state, dependent, zero-EFC student.

Two of the six colleges selected (the University of Wisconsin Colleges and UW-La Crosse) require students to enter several screens of financial and personal information in order to get an estimate of their financial aid package. While that can be useful for some students, there should be an option to directly enter the EFC for students who have filed the FAFSA or are automatically eligible for a zero EFC. For the purposes of this post, I stopped there with those campuses—as some students may decide to do.

(UW Colleges and UW-La Crosse, respectively)

UW Colleges Net Price Calculator

La Crosse Net Price Calculator

UW-Milwaukee deserves special commendation for clearly listing the net price before mentioning loans and work-study. Additionally, they do not list out each grant a student could expect to receive, simplifying the information display (although this does have its tradeoffs).

Milwaukee Net Price Calculator

The other three schools examined (Eau Claire, Madison, and Oshkosh) list each type of financial aid and present an unmet need figure (which can be zero) before reporting the estimated net price of attendance. Students may read these calculators and think that no borrowing is necessary in order to attend college, when this is not the case. The net price should be listed first; the tool is, after all, a net price calculator.

(UW-Eau Claire, UW-Madison, and UW-Oshkosh, respectively)

Eau Claire Net Price Calculator

Madison Net Price Calculator

Oshkosh Net Price Calculator

The net price calculators also differ in their terminology for different types of financial aid. For example, UW-Eau Claire calls the Wisconsin Higher Education Grant the “Wisconsin State Grant,” a name which appears nowhere else in the information students receive. The miscellaneous and travel budgets vary by more than $1,000 across the four campuses examined, highlighting the subjective nature of these categories. However, these budgets are very important to students because they cannot receive more in financial aid than their total cost of attendance. If colleges want to report a low net price, they have an incentive to report low living allowances.

I was surprised to see the amount of variation in net price calculators across UW System institutions. I hope that financial aid officers and data managers from these campuses can continue to work together to refine best practices and present a more unified net price calculator.