Comments on the New College Scorecard Data

The Obama Administration’s two-year effort to develop a federal college ratings system appeared to have hit a dead end in June, with the announcement that no ratings would actually be released before the start of the 2015-2016 academic year. At that point, Department of Education officials promised to focus instead on creating a consumer-friendly website with new data elements that had never before been released to the public. I was skeptical, as there were significant political hurdles to overcome before releasing data on employment rates, the percentage of students paying down their federal loans, and graduation rates for low-income students.

But things changed this week. First, a great new paper out of the Brookings Institution by Adam Looney and Constantine Yannelis showed trends in student loan defaults over time, going well beyond the typical three-year cohort default rate measure. They also included earnings data, something that was not previously available. But although they made summary tables of results available to the public, those tables covered only a small number of individual institutions. That’s great for researchers, but not so great for students choosing among colleges.

The big bombshell dropped this morning. In an extremely rare Saturday morning release (something that frustrates journalists and the higher education community to no end), the Department of Education released a massive trove of data (fully downloadable!) underlying the new College Scorecard. The consumer-facing Scorecard is fairly simple (see below for what Seton Hall’s entry looks like), and I look forward to hearing about whether students and their families use this new version more than previous ones. I also recommend ProPublica’s great new data tool for low-income students.


But my focus today is on the new data. Some of the key new data elements include the following:

  • Transfer rates: The percentage of students who transfer from a two-year to a four-year college. This helps community colleges, given their transfer mission, but still puts colleges at a disadvantage if they serve a more transient student body.
  • Earnings: The distribution of earnings 10 years after starting college and the percentage earning more than those with a high school diploma. This comes from federal tax return data and is a huge step forward. However, given very reasonable concerns about a focus on earnings hurting colleges with public service missions, there is also a metric for the percentage of students making more than $25,000 per year. Plenty of people will focus on presenting earnings data, so I’ll leave the graphics to others. (This is a big step forward over the admirable work done by Payscale in this area.)
  • Student loan repayment: The percentage of students (both completers and non-completers) who are able to pay down some principal on loans within a certain period of time. Seven-year loan repayment data are available, as illustrated here:


In the master data file, many of these outcomes are available by family income, first-generation status, and Pell receipt. First-generation status is a new data element to be made available to the public; although the question is on the FAFSA, it has never before been made available to researchers. For those who are curious, here’s the breakdown of the percentage of first-generation students (typically defined as students whose parents don’t have a bachelor’s degree) by institutional type:


There are a lot of data elements to explore here, and I expect lots of great work from the higher education research community in the coming months and years using these data. In the short term, it will be fascinating to watch colleges and politicians respond to this game-changing release of outcome data on students receiving federal financial aid.

The Rise and Fall of Federal College Ratings

President Obama’s 2013 announcement that a set of federal college ratings would be created and then tied to federal financial aid dollars caught the higher education world by surprise. Some media coverage at the time even expected what came to be known as the Postsecondary Institution Ratings System (PIRS) to challenge U.S. News & World Report’s dominance in the higher education rankings marketplace. But most researchers and people intimately involved in policy discussions saw a substantial set of hurdles (both methodological and political) that college ratings would have to clear before being tied to financial aid. This resulted in a number of delays in the development of PIRS, as evidenced by last fall’s belated release of a general framework for developing ratings.

The U.S. Department of Education’s March announcement that two college ratings systems would be created, one oriented toward consumers and one for accountability purposes, further complicated the efforts to develop a ratings system. As someone who has written extensively on college ratings, I weighed in with my expectation that any ratings were becoming extremely unlikely (due to both political pressures and other pressing needs for ED to address):

This week’s announcement that the Department of Education is dropping the ratings portion of PIRS (is it PIS now?) comes as little surprise to higher education policy insiders—particularly in the face of bipartisan legislation in Congress that sought to block the development of ratings and fierce opposition from much of the higher education community. I have to chuckle at Education Undersecretary Ted Mitchell’s comments on the changes; he told The Chronicle of Higher Education that dropping ratings “is the exact opposite of a collapse” and “a sprint forward.” But politically, this is a good time for ED to focus on consumer information after its recent court victory against the for-profit sector that allows the gainful employment accountability system to go into effect next week.

It does appear that the PIRS effort will not be in vain, as ED has promised that additional data on colleges’ performance will be made available on consumer-friendly websites. Although I am skeptical that federal websites like the College Scorecard and College Navigator directly reach students and their families, I am a believer in the power of information to help students make at least decent decisions. But I think this information will be most effective when packaged by intermediaries such as guidance counselors and college access organizations.

On a historical note, the 2013-2015 effort to rate colleges failed to live up to efforts a century ago, in which ratings were actually created but President Taft blocked their release. As Libby Nelson at Vox noted last summer, President Wilson created a ratings committee in 1914, which then came to the conclusion that publishing ratings was not desirable at the time. 101 years later, some things still haven’t changed. College ratings are likely dead for decades at the federal level, but performance-based funding or “risk-sharing” ideas enjoy some bipartisan support and are the next big accountability policy discussion.

I’d love to be able to write more at this time about the path forward for federal higher education accountability policy, but I’ve got to get back to putting together the annual Washington Monthly college rankings (look for them in late August). Hopefully, future versions of the rankings will be able to include some of the new information that has been promised in this new consumer information system.

The FY 2016 Obama Budget: A Few Surprises

The Obama Administration released its $3.999 trillion budget proposal for Fiscal Year 2016, and the higher education portion of the budget was largely as expected. Some proposals, such as increasing research funding, providing a bonus pool of funds for colleges with high graduation rates, and reallocating the Supplemental Educational Opportunity Grant to be based on current financial need instead of an antiquated formula, were repeats from previous years. Others, such as the idea of tuition-free community college, had already been sketched out. And one controversial proposal—the plan to tax new 529 college savings plans—had already been nixed, but remained in the budget document due to a “printing deadline.”

But the budget proposal (the vast majority of which is dead on arrival in a GOP Congress thanks to differences in viewpoints and preferred budget levels) did have some surprising details. The three most interesting higher education-related details are below.

(1) “Universal” free community college isn’t exactly universal. Pages 59 and 60 of the education budget proposal noted that students with a family Adjusted Gross Income over $200,000 would be ineligible for tuition-free community college. Although this detail was apparently decided before the program was announced, the Obama Administration for some reason chose to withhold it from the public until Monday. As the chart below shows, only 2.7% of dependent community college students had family incomes above $200,000 in 2011-12 (data from the National Postsecondary Student Aid Study).


But in order to get family income, students have to file the FAFSA. Research by Lyle McKinney and Heather Novak suggests that 42% of low-income community college students didn’t file the FAFSA in 2007-08, meaning that something big needs to be done to get these students to file. Requiring the FAFSA also means that noncitizens typically would not qualify for free community college, something that is likely to upset advocates for “dreamer” students (but make many on the Right happy).

Additionally, as Susan Dynarski at the University of Michigan pointed out, the GPA requirements (a 2.5 instead of a 2.0) make a big difference. In 2011-12, 15.9% of Pell recipients had GPAs between a 2.0 and 2.49, meaning they would not qualify for free community college.
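Taken together, the income cap, FAFSA requirement, and GPA threshold amount to a simple eligibility rule. A minimal sketch of that check follows; the function and field names are mine, and the actual proposal includes further conditions (such as at least half-time enrollment) that I omit here:

```python
def eligible_for_free_cc(filed_fafsa: bool, family_agi: float, gpa: float) -> bool:
    """Rough sketch of the proposal's eligibility rules as described above:
    must file the FAFSA, family AGI at or below $200,000, and a GPA of 2.5+."""
    return filed_fafsa and family_agi <= 200_000 and gpa >= 2.5

# A Pell recipient with a 2.4 GPA is shut out despite low family income:
eligible_for_free_cc(filed_fafsa=True, family_agi=30_000, gpa=2.4)  # False
```

The GPA cutoff, not the income cap, is what would exclude the low-income students Dynarski highlights.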



(2) Asset questions may be off the FAFSA. The budget document called for the following changes to the FAFSA, including the elimination of assets (thanks to Ben Miller at New America for the screenshot):



Getting rid of assets won’t affect most families, as research by Susan Dynarski and Judith Scott-Clayton shows. But it matters more to selective colleges, which might increasingly turn to additional financial aid forms like the CSS/PROFILE to get the information they want. Policymakers should weigh the benefits of FAFSA simplicity against the potential costs to students of additional forms.

(3) Mum’s the word on college ratings. After last year’s budget featured $10 million for the development of the Postsecondary Institution Ratings System (PIRS), this year’s budget made no mention of it. Inside Higher Ed reported that the ratings will be developed using existing funds and personnel. Will that slow down the development of ratings? Given the slow progress at this point, it’s hard to argue otherwise.

Finally, the budget document also contained details about the “true” default rate for student loans, measured over the life of the loan instead of the three-year default window used for accountability purposes. The results aren’t pretty for undergraduate students, with default rates pushing 23% on undergraduate Stafford loans. But default rates for graduate loans hover around 6%-7%, roughly in line with the interest rates many of these students face.



What are your thoughts on the President’s budget proposal for higher education? Please share them in the comments section.

Public Comments to the Department of Education on College Ratings

It may be a new year, but the Obama Administration’s proposed Postsecondary Institution Ratings System (PIRS) is still a hot topic. Most observers in the higher education policy and research communities (myself included) were less than overwhelmed by the proposed metrics released on December 19—sixteen months after the idea of ratings was first floated. My first take on the metrics can be found here, and there are too many good pieces about the metrics to mention them all.

The U.S. Department of Education has invited the public to provide additional feedback about the metrics used in PIRS (as well as the ratings system itself). You can submit your comments here before February 17. Below are my comments that I will submit to ED.


January 5, 2015

My name is Robert Kelchen and I am an assistant professor in the Department of Education Leadership, Management and Policy at Seton Hall University as well as the methodologist for Washington Monthly magazine’s annual college rankings. (All opinions are my own.) After carefully examining the draft metrics proposed for potential inclusion in the Postsecondary Institution Ratings System (PIRS), I have the following comments and suggestions:

First, I am encouraged by the decision to exclude nondegree-granting colleges (mainly small for-profit colleges) from PIRS, as they are already subject to gainful employment. Holding them accountable under two different systems is unreasonable. But in the two-year sector, it is essential to rate colleges that primarily grant associate’s degrees separately from those that primarily grant certificates due to the different lengths of those programs. The Department must divide two-year colleges by their program emphasis (degree or certificate) in order for those ratings to be viewed as reasonable.

While I am glad to see discussions of multiple data sources in the draft metrics, I think the focus in the short term has to be using IPEDS data and previously-collected National Student Loan Data System (NSLDS) data for student loan repayment or default rates. Using NSLDS data for student background characteristics (such as first-generation status) is nice for the future, but is unlikely to be ready by this fall—particularly if colleges wish to dispute those data. I encourage the Department to focus on two sets of measures: refining readily available metrics from IPEDS and NSLDS for the draft ratings and continuing to develop new metrics for 2018 and beyond.

Most of the metrics proposed seem reasonable, although I am thoroughly confused by the “EFC gap” metric due to the lack of details provided. Would this be a measure of unmet need, of the percentage of FAFSA filers below a certain EFC, or something else? The Department should consider how strongly correlated the EFC gap measure may be with existing net price or family income data already in IPEDS—and also issue additional guidance on what the metric might be so the public can provide more informed comments.

I was disappointed not to see a technical discussion of potential weights that could be used in the system, and there were no mentions of the possibility of using multiple years of data in developing PIRS. It is important that the ratings be reasonably robust to a number of model specifications, including variables selected and weights used. I encourage the Department to continue working in this area and consulting with statisticians and education researchers.

While I do not expect PIRS to be tied to any federal financial aid dollars—and it is quite possible that draft ratings are never released to the public—the Department has a tremendous opportunity to improve data collection. Overturning the ban on student unit record data would significantly improve the quality of the data, but this is a great time to have a conversation about what information should be collected and processed for both public-sector and private-sector accountability systems. I am happy to provide assistance to the Department if desired and I wish them the best of luck in this difficult endeavor.


I encourage everyone with an interest in PIRS to submit comments on the ratings, and to leave a copy of your comments in the comments section of this blog post.

Comments on Federal College Rating Metrics

The U.S. Department of Education (ED) released a document containing draft metrics for the Postsecondary Institution Ratings System (PIRS) today (link via Inside Higher Ed), with a request for comments from stakeholders and the general public by February. Although the release of the metrics was delayed several months (and we were initially expecting ratings this fall instead of just some potential metrics), the potential metrics and the explanations provided by ED provide insights about what the ratings will look like if (and when) they are finalized. Below are some of the key pieces of the released metrics, along with my comments.


Which colleges will be rated, and how will they be grouped? ED is planning to rate degree-granting and certificate-granting two-year colleges separately from four-year colleges. They are still considering whether to have finer gradations among four-year colleges. Given the substantial differences in mission and completion rates between associate’s degree-granting and certificate-granting two-year colleges, I strongly recommend separating the two groups. Four-year colleges can all be rated together if input adjustments are used, or they can be put into much smaller peer groups (the latter seems to be what colleges prefer).


Leaving non-degree-granting colleges out of PIRS sounds trivial, but it leaves out a fair number of small for-profit colleges. I think many of the colleges not subject to PIRS will be subject to gainful employment, should that survive its latest legal challenge. Given that gainful employment has financial consequences while PIRS does not at this point, the colleges left out of PIRS are subject to more stringent accountability than many of those in PIRS.


What will the ratings categories and scoring system look like? I’m glad to see ED considering three rating categories: high-performing, in the middle, and low-performing. That’s about all the fine gradation the data can support, in my view, and it is far more politically feasible to have fewer ratings categories. No information was provided about how individual metrics will be weighted or scored, which likely indicates that ED is still in the preliminary stage on PIRS.


What metrics are being considered? And which ones do we already have data on? The metrics fall into three main categories: access, affordability, and student outcomes.


Access: Percent Pell, distribution of expected family contributions (EFC), enrollment by family income quintile, percent first-generation. Percent Pell and enrollment by family income quintile are already collected by the Department of Education, although these measures have gaps because not all students from low-income families file the Free Application for Federal Student Aid (FAFSA). The EFC distribution measure is intriguing, but it’s not currently collected. Perhaps considering the percentage of students with zero EFC (who have the least ability to pay) would make sense. The FAFSA asks students about parental education, so first-generation status could be made available in a few years. There is a question of how to define first-generation status, as it could include a student whose parents have some college but no degree or be limited to those with no college experience.


Affordability: Net price of attendance (overall and by income quintile). The net price reflects the total cost of attendance (tuition, fees, books/supplies, and living costs) less all grant or scholarship aid received. It’s a good measure to include, even if it can be gamed by institutions that cut their living allowances to absurdly low levels or use income from the FAFSA instead of the CSS PROFILE (where more sources are counted). I’m surprised not to see a measure for debt burdens or student borrowing here as a measure of affordability.
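The arithmetic behind net price is trivial, which is exactly what makes it gameable: shrink the living-cost allowance and the published net price drops with no change in what students actually pay. A toy illustration (all figures hypothetical):

```python
def net_price(tuition_fees: int, books_supplies: int, living_costs: int,
              grant_aid: int) -> int:
    """Net price = total cost of attendance minus all grant/scholarship aid."""
    return tuition_fees + books_supplies + living_costs - grant_aid

# Same tuition and same aid package; only the living allowance differs:
net_price(10_000, 1_200, 12_000, 8_000)  # 15200
net_price(10_000, 1_200, 7_000, 8_000)   # 10200 -- looks far more affordable
```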


Outcomes: Graduation and transfer rates, short-term employment, longer-term earnings, graduate school attendance, and “loan performance outcomes.” As of right now, the only measures available are graduation/transfer rates (for first-time, full-time students) and student loan repayment. ED is working to improve the graduation and transfer metrics by 2017, which is welcome. I’m intrigued by how loan performance was described:


“Relatively simple metrics like the percentage of students repaying their loans on time might be important as consumers weigh whether or not they will be able to handle their financial obligations after attending a specific school.”


This is different from the standard cohort default rate measure, which counts a student as in default after 270 days without a payment. Measuring the percentage in active repayment would show a lower percentage of students having a successful outcome, but it better reflects former students’ performance than a cohort default rate does. Kudos to ED for making this suggestion.
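The gap between the two metrics is easy to see with a toy cohort; the statuses below are illustrative, not ED’s exact definitions:

```python
# Each borrower: (defaulted within the window, paid down any principal)
borrowers = [
    (False, True),   # current on payments and reducing the balance
    (False, False),  # in forbearance: not in default, but no progress
    (True,  False),  # defaulted
    (False, False),  # interest-only payments: balance not shrinking
]

n = len(borrowers)
default_rate = sum(d for d, _ in borrowers) / n    # 0.25
repayment_rate = sum(p for _, p in borrowers) / n  # 0.25

# Only 25% count as successfully repaying, yet 75% are "not in default" --
# the repayment metric is the stricter test of borrower outcomes.
```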


I see employment, earnings, and graduate enrollment outcomes as being good things to consider, but they won’t be ready to include in PIRS for several years. The ban on student unit record data makes tracking employment and earnings difficult unless ED relies on colleges to self-report data from their former students. It’s worth emphasizing the importance of including dropouts as well as graduates in these metrics. Graduate enrollment could in theory be done with the National Student Clearinghouse, but colleges may not want to participate in the voluntary system if it is used for accountability.


Any other surprises? I was pleasantly surprised to see ED include a section on considering how to reward colleges for improving their outcomes over time. This might be a way to get around the question of how to adjust for student inputs and institutional resources, or it could be a piece designed to bring more colleges to the discussion table.


What does all of this mean? It appears that PIRS is very much in its infancy at this point, given the broadness of the suggested metrics and the difficulty in getting data on some of them in the next year or two. Putting college ratings together is methodologically quite easy to do, but politically very difficult. The delay in the timeline and the call for additional feedback by February highlight the political difficulty of PIRS. Given the GOP takeover of Congress, I think it’s safe to say that even if a full set of ratings comes out next week, the likelihood of ratings being tied to aid by 2018 (as the President has proposed) is basically nil. (For more on why I think PIRS is a difficult political sell, read my new piece in Politico Magazine.) But even getting draft ratings ready for the start of the 2015-16 academic year will be very difficult. ED has a lot of work to do before then.


But PIRS does have the potential to substantially improve data availability and transparency on a number of important student outcomes, even without becoming a high-stakes accountability system. I expect that college access organizations, higher education publications, guidance counselors, and even those of us in the rankings business will work to get any new data sources out to students and their families in a consumer-friendly format. That may be the lasting legacy of PIRS.


Gainful Employment and the Federal Ability to Sanction Colleges

The U.S. Department of Education’s second attempt at “gainful employment” regulations, which apply to the majority of vocationally-oriented programs at for-profit colleges and certain nondegree programs at public and private nonprofit colleges, was released to the public this morning. The Department’s first effort in 2010 was struck down by a federal judge after the for-profit sector challenged a loan repayment rate metric, on the grounds that it would have required additional student data collection that is illegal under current federal law.

The 2014 measure was widely expected to contain two components: a debt-to-earnings ratio requiring that program completers’ annual loan payments be less than 8% of total income or 20% of “discretionary income” above 150% of the poverty line, and a cohort default rate measure requiring that fewer than 30% of program borrowers (regardless of completion status) default on federal loans within three years. As excellent articles on the newly released measure in The Chronicle of Higher Education and Inside Higher Ed this morning detail, the cohort default rate measure was unexpectedly dropped from the final regulation. This change in rules, Inside Higher Ed reports, would reduce the number of affected programs from 1,900 to 1,400 and the number of affected students from about one million to 840,000.
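As summarized above, the debt-to-earnings test passes a program if either threshold is met. A sketch of that check, assuming simple annual figures (the actual regulation’s “zone” category and payment amortization details are omitted, and the poverty-line value in the example is hypothetical):

```python
def passes_debt_to_earnings(annual_payment: float, annual_income: float,
                            poverty_line: float) -> bool:
    """Pass if payments are under 8% of total income OR under 20% of
    discretionary income (income above 150% of the poverty guideline)."""
    discretionary = max(annual_income - 1.5 * poverty_line, 0)
    return (annual_payment < 0.08 * annual_income
            or annual_payment < 0.20 * discretionary)

passes_debt_to_earnings(2_000, 28_000, 11_670)  # True: 8% of income is $2,240
```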

There will be a number of analyses of the exact details of gainful employment over the coming days (I highly recommend anything written by Ben Miller at the New America Foundation), but I want to briefly discuss what the changes to the gainful employment rule mean for other federal accountability policies. Just over a month ago, the Department of Education released cohort default rate data, but it tweaked a calculation at the last minute that had the effect of allowing more colleges to get under the 30% default rate threshold at least once in three years and avoid sanctions.

The last-minute changes to both gainful employment and cohort default rate accountability measures highlight the political difficulty of the current sanctioning system, which operates on an all-or-nothing basis. When the only funding lever the federal government uses is so crude, colleges have a strong incentive to lobby against rules that could effectively shut them down. It is long past time for the Department of Education to consider sliding sanctions against colleges with less-than-desirable outcomes if the goal is to eventually cut off financial aid to the poorest-performing institutions.

Finally, the successful lobbying efforts of different sectors of higher education make it appear less likely that the Obama Administration’s still-forthcoming Postsecondary Institution Ratings System (PIRS) will be able to tie financial aid to college ratings. This measure still requires Congressional approval, but the Department of Education’s willingness to propose sanctions has been substantially weakened over the last month. It remains to be seen if the Department of Education under the current administration will propose how PIRS will be tied to aid before the clock runs out on the Obama presidency.

It’s PIRS Prediction Time!

It’s definitely springtime in most of the United States—the time of year in which the U.S. Department of Education initially said its draft college ratings under the Postsecondary Institution Ratings System (PIRS) would be released. Department of Education staffers have since stated that the timeline may be closer to midyear, but many observers wouldn’t be too surprised if the project were delayed even more given the difficulty of the task.

Given the uncertainty of the draft ratings’ release, I think it would be fun to ask for predictions for the release date. Submit your guesses on this form, and leave your name if you want to be eligible to receive first prize: bragging rights for the next year. I’ll protect your anonymity and will contact the winner(s) to ask whether I can make their name(s) public.

For what it’s worth, I’m predicting August 15. It almost has to be a Friday, and that is timed nicely with the start of the new academic year. But remember: my prediction is right or you get a full refund of the ($0) entrance fee!

[UPDATE (5/21/14)]: Deputy Undersecretary Jamienne Studley announced today in a blog post that the draft ratings will be out “by this fall,” a delay compared to what has been previously announced. Libby Nelson at Vox also notes some of the difficulties in creating credible ratings in a new post.

PIRS prediction form


College Accountability and the Obama Budget Proposal

The Obama Administration’s $3.9 trillion budget document for Fiscal Year 2015 includes a request of $68.6 billion in discretionary funds for the Department of Education, up $1.3 billion from 2014 funding. This excludes a great deal of mandatory spending on entitlements, including student loan costs/subsidies, some Pell Grant funding, and some other types of financial aid. (Mandatory spending is much harder to eliminate than discretionary funding, as illustrated by this helpful CBO summary.) The budget is also a reflection of the Administration’s priorities, even if many components are unlikely to be approved by Congress. For a nice summary of the Department of Education’s request, see this policy brief from the New America Foundation.

On the higher education front, the Obama budget implies that accountability will be a key priority of the Department of Education. The Administration made two key requests in this area: $10 million to fund continued development of the Postsecondary Institution Ratings System (PIRS) and $647 million for a fund to reward colleges that enroll and graduate Pell recipients. There was a holdover request for $4 billion in mandatory funds for a version of Race to the Top in higher education, but few in the higher education policy community are taking this plan seriously.

The $10 million for PIRS would go toward “further development and refinement of a new college rating system” (see p. T-156). This request is a signal that the Administration is taking the development of PIRS seriously, but the modest amount suggests that large-scale additional data collection is unlikely to happen in the near future. It is also unlikely that the federal government will work to audit IPEDS data for the ratings, something that I called for in my recent policy brief on ratings. Even if the specific $10 million request for PIRS is not acted upon, the Department of Education will use other discretionary funds to move forward.

The $647 million request for College Opportunity and Graduation Bonuses, if approved, would provide bonuses to colleges that are successful in enrolling and graduating large numbers of Pell recipients. I view this as a first attempt to tie federal funds to college performance using metrics that are likely to be in PIRS. I would be surprised if any Pell Grant funds get reallocated through college ratings except for perhaps a handful of very low-performing colleges, but it is possible that some additional bonus funds could be tied to ratings.

I ran a poll on a blog post a couple of weeks ago asking for readers’ thoughts on the likelihood that PIRS would be tied to student financial aid dollars by 2018. The majority of respondents gave this less than a 50% chance of happening, and I am inclined to agree. The Administration’s budget priorities suggest a serious push toward tying some funds to performance, although it is worth emphasizing that a future Congress and President must agree.

What are your thoughts on the Obama Administration’s higher education budget, particularly on accountability? If you have any comments to share, please do so and continue the conversation!

New Policy Brief on College Ratings

I am pleased to announce the release of my newest policy brief, “Moving Forward with Federal College Ratings: Goals, Metrics, and Recommendations” through my friends at the Wisconsin Center for the Advancement of Postsecondary Education (WISCAPE). In the brief, I outline the likely goals of the Obama Administration’s proposed Postsecondary Institution Ratings System (PIRS), discuss some potential outcome measures, and provide recommendations for a fair and effective ratings system given available data. I welcome your comments on the brief!

The Multiple Stakeholder Problem in Assessing College Quality

One of the biggest challenges the Department of Education’s proposed Postsecondary Institution Ratings System (PIRS) will face is how to present a valid set of ratings to multiple audiences. Much of the discussion at the recent technical symposium was about who should be the key audience: colleges (for accountability purposes) or students (for informational purposes). The determination of what the audience should be will likely influence what the ratings should look like. My research primarily focuses on institutional accountability, and I think that the federal government should focus on that as the goal of PIRS. (I said as much in my presentation earlier this month.)

The student information perspective is much trickier in my view. Students tend to flock to rankings and information sources that are largely based on prestige instead of some measure of “value-added” or societal good. As a result, I view the Washington Monthly college rankings (which I’ve worked on for the past two years) as far more influential with colleges and policymakers than with students. I think that is the right path to take to influence colleges’ priorities, as I have to question whether many students will use college rankings that provide very useful information but do not line up with preexisting ideas of what a “good” college is.

I was quoted in an article in Politico this morning regarding PIRS and what can be learned from existing rankings systems. In that article, I expressed similar sentiments, although in a less elegant way. (It’s also a good time to clarify that all opinions I express are my own.) I certainly hope that more than six students use the Washington Monthly rankings to inform their college choice sets, but I do not harbor grand expectations that students will suddenly choose to use our rankings over U.S. News. However, the influence of the rankings on colleges has the potential to help a large number of students through changing institutional priorities.