Examining Trends in Debt to Earnings Ratios

I was just starting to wonder when the U.S. Department of Education would release a new year of College Scorecard data, so I wandered over to the website to check for anything new. I was pleasantly surprised to see a date stamp of April 25 (today!), which meant that it was time for me to give my computer a workout.

There are a lot of great new data elements in the updated Scorecard. Some features include a fourth year of post-graduation earnings, information on the share of students who stayed in state after college, earnings by Pell receipt and gender, and an indicator for whether no, some, or all programs in a field of study can be completed via distance education. There are plenty of things to keep me busy for a while, to say the least. (More on some of the ways I will use the data coming soon!)

In this update, I share data on trends in debt to earnings ratios by field of study. I used median student debt accumulated by the first Scorecard cohorts (2014-15 and 2015-16 leavers) and tracked median earnings one, two, three, and four years after graduating college. The downloadable dataset includes 34,466 programs with data for each element.
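For anyone who wants to replicate the calculation with the public files, here is a minimal sketch. The file name, debt column, and earnings columns are placeholders for whatever the current Scorecard field-of-study data dictionary calls them (only CREDLEV is a name I expect to exist), so check the documentation before running.

```python
# Minimal sketch: program-level debt-to-earnings ratios at several horizons.
# Column and file names are placeholders, not the official Scorecard variables.
import pandas as pd

fos = pd.read_csv("Most-Recent-Cohorts-Field-of-Study.csv", low_memory=False)

debt_col = "DEBT_MEDIAN"            # placeholder: median accumulated student debt
earn_cols = {                        # placeholders: median earnings 1-4 years out
    "1yr": "EARN_MDN_1YR",
    "2yr": "EARN_MDN_2YR",
    "3yr": "EARN_MDN_3YR",
    "4yr": "EARN_MDN_4YR",
}

# Convert to numeric (privacy-suppressed cells become missing) and keep
# programs with data for every element
cols = [debt_col] + list(earn_cols.values())
fos[cols] = fos[cols].apply(pd.to_numeric, errors="coerce")
complete = fos.dropna(subset=cols).copy()

# Debt-to-earnings ratio at each horizon, averaged by credential level
for label, earn_col in earn_cols.items():
    complete[f"dte_{label}"] = complete[debt_col] / complete[earn_col]

summary = complete.groupby("CREDLEV")[[f"dte_{l}" for l in earn_cols]].mean()
print(summary.round(3))
```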

The table below shows debt-to-earnings ratios for the four most common credential levels. The good news is that the average ratio ticked downward for each credential level, with bachelor’s and master’s degrees showing steeper declines in their ratios than undergraduate certificates and associate degrees.

Credential     1 year   2 years   3 years   4 years
Certificate    0.455    0.430     0.421     0.356
Associate      0.528    0.503     0.473     0.407
Bachelor’s     0.703    0.659     0.569     0.485
Master’s       0.833    0.793     0.734     0.650

The scatterplot shows debt versus earnings four years later across all credential levels. There is a positive correlation (correlation coefficient of 0.454), but still quite a bit of noise present.

Enjoy the new data!

Sharing a Dataset of Program-Level Debt and Earnings Outcomes

Within a couple of hours of posting my comments on the Department of Education’s proposal to create a list of programs with low financial value, I received multiple inquiries about whether there was a user-friendly dataset of current debt-to-earnings ratios for programs. Since I work with College Scorecard data on a regular basis and have used the data to write about debt-to-earnings ratios, it only took a few minutes to put something together that I hope will be useful.

To create a debt-to-earnings ratio that covered as many programs as possible, I pulled median student debt accumulated at that institution for the cohorts of students who left college in the 2016-17 or 2017-18 academic years and matched it with earnings for those same cohorts one calendar year later (calendar year 2018 or 2019). The College Scorecard has some earnings data more than one year out at this point, but a much smaller share of programs are covered. I then calculated a debt-to-earnings ratio. And for display purposes, I also pulled median parent debt from that institution.

The resulting dataset covers 45,971 programs at 5,033 institutions with data on both student debt and earnings for those same cohorts. You can download the dataset here in Excel format and use filter/sort functions to your heart’s content.
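For anyone who wants to rebuild or extend the spreadsheet, here is a rough sketch of the assembly steps. The debt, parent debt, and earnings column names below are placeholders rather than the official Scorecard variable names, so match them to the data dictionary first.

```python
# Rough sketch: one-year debt-to-earnings ratios exported to a filter/sort-friendly
# spreadsheet. Debt and earnings column names are placeholders.
import pandas as pd

fos = pd.read_csv("Most-Recent-Cohorts-Field-of-Study.csv", low_memory=False)

keep = {
    "INSTNM": "institution",
    "CIPDESC": "program",
    "CREDDESC": "credential",
    "STUDENT_DEBT_MDN": "median_student_debt",   # placeholder column name
    "PARENT_DEBT_MDN": "median_parent_debt",     # placeholder column name
    "EARN_MDN_1YR": "median_earnings_1yr",       # placeholder column name
}
out = fos[list(keep)].rename(columns=keep)

# Convert to numeric and keep programs with both student debt and earnings
num_cols = ["median_student_debt", "median_parent_debt", "median_earnings_1yr"]
out[num_cols] = out[num_cols].apply(pd.to_numeric, errors="coerce")
out = out.dropna(subset=["median_student_debt", "median_earnings_1yr"])
out["debt_to_earnings"] = out["median_student_debt"] / out["median_earnings_1yr"]

# Write an Excel file (requires the openpyxl package)
out.to_excel("program_debt_to_earnings.xlsx", index=False)
```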

Comments on a Proposed Federal List of Low-Value Programs

The U.S. Department of Education recently announced that they will be creating a list of low-value postsecondary programs, and they requested input from the public on how to do so. They asked seven key questions, and I put together 3,000-plus words of comments to submit in response. Here, I list the questions and briefly summarize my key points.

Question 1: What program-level data and metrics would be most helpful to students to understand the financial (and other) consequences of attending a program?

Four data elements would be helpful. The first is program-level completion rates, especially for graduate or certificate programs where students are directly admitted into programs. Second, given differential tuition and different credit requirements across programs, time to completion and sticker/net prices by program would be incredibly valuable. The last two are debt and earnings, which are largely present in the current College Scorecard.

Question 2: What program-level data and metrics would be most helpful to understand whether public investments in the program are worthwhile? What data might be collected uniformly across all students who attend a program that would help assess the nonfinancial value created by the program?

I would love to see information on federal income taxes paid by former students and use of public benefits (if possible). More information on income-driven repayment use would also be helpful. Finally, there is a great need to rethink definitions of “public service,” as it currently depends on the employer instead of the job function. That is a concern in fields like nursing that send graduates to do good things in for-profit and nonprofit settings.

Question 3: In addition to the measures or metrics used to determine whether a program is placed on the low-financial-value program list, what other measures and metrics should be disclosed to improve the information provided by the list?

Nothing too fancy here. Just list any sanctions/warnings from the federal government, state agencies, or accreditors along with general outcomes for all students at the undergraduate level to account for major switching.

Question 4: The Department intends to use the 6-digit Classification of Instructional Program (CIP) code and the type of credential awarded to define programs at an institution. Should the Department publish information using the 4-digit CIP codes or some other type of aggregation in cases where we would not otherwise be able to report program data?

This is my nerdy honey hole, as I have spent a lot of time thinking about these issues. The two biggest issues with student debt/earnings data right now are that some campuses get aggregated together in reporting and that it’s impossible to separate outcomes for fully online versus hybrid/in-person programs. Those nuts need to be cracked first, and then data can be aggregated up when cell sizes are too small.

Question 5: Should the Department produce only a single low-financial-value program list, separate lists by credential level, or use some other breakdown, such as one for graduate and another for undergraduate programs?

Separate out by credential level and ideally have a good search function by program of study. Otherwise, some low-paying programs will clog up the lists and not let students see relatively lousy programs in higher-paying areas.

Question 6: What additional data could the Department collect that would substantially improve our ability to provide accurate data for the public to help understand the value being created by the program? Please comment on the value of the new metrics relative to the burden institutions would face in reporting information to the Department.

I would love to see program-level completion rates (where appropriate) and better pricing information at the program level. Those items aren’t free to implement, so I would gladly explore other cuts to IPEDS (such as the academic libraries survey) to help reduce additional burden.

Question 7: What are the best ways to make sure that institutions and students are aware of this information?

Colleges will be aware of this information without the federal government doing much, and they may respond to information that they didn’t have before. But colleges don’t have a great record of responding to public shaming if they already knew that affordability was a concern, so I’m not expecting massive changes.

The College Scorecard produced only small changes around the margins in student behaviors, primarily driven by more advantaged students. I’m not an expert in reaching out to prospective students, but I know that outreach to as many groups as possible is key.

What Happened to College Spending During the Pandemic?

It’s definitely the holiday season here at Kelchen on Education HQ (my home office in beautiful east Tennessee). My Christmas tree is brightly lit and I’m certainly enjoying my share of homemade cookies right now. But as a researcher, I got an early gift this week when the U.S. Department of Education released the latest round of data for the Integrated Postsecondary Education Data System (IPEDS). Yes, I’m nerdy, but you probably are too if you’re reading this.

This data update included finance data from the 2020-21 fiscal year—the first year to be fully affected by the pandemic following a partially affected 2019-20 fiscal year. At the time, I wrote plenty about how I expected 2020-21 to be a challenging year for institutional finances. Thanks to stronger-than-expected state budgets and timely rounds of federal support, colleges largely avoided the worst-case scenario of closure. But they cut back their spending wherever possible, with personnel being the easiest area to cut. I took cuts to salary and retirement benefits during the 2020-21 academic year at my last job, and that was a university that made major cuts to staff while protecting full-time faculty employment.

In this post, I took a look at the percentage change in total expenditures over each of the last four years with data (2017-18 through 2020-21) for degree-granting public and private nonprofit institutions. These values are not adjusted for inflation.
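For those curious about the mechanics, here is a small sketch of the percentage-change calculation and the bucketing used in the tables below. The expenditure columns and values are made-up stand-ins for the IPEDS finance totals, which live in separate files for GASB and FASB reporters.

```python
# Illustrative sketch: year-over-year change in total expenditures, bucketed as
# in the tables below. The numbers here are invented placeholders.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "unitid": [1, 2, 3, 4],
    "total_exp_2020": [500e6, 80e6, 40e6, 120e6],
    "total_exp_2021": [450e6, 82e6, 45e6, 119e6],
})

df["pct_change"] = 100 * (df["total_exp_2021"] / df["total_exp_2020"] - 1)

# Bucket institutions into the four categories used below
bins = [-np.inf, -10, 0, 10, np.inf]
labels = [">10% decrease", "<10% decrease", "<10% increase", ">10% increase"]
df["bucket"] = pd.cut(df["pct_change"], bins=bins, labels=labels)

print(round(df["pct_change"].median(), 1))
print(df["bucket"].value_counts())
```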

Changes in total spending, public 4-years (n=550)

Characteristic        2020-21   2019-20   2018-19   2017-18
Median change (pct)   -1.2      2.3       2.2       2.6
>10% decrease         58        19        39        19
<10% decrease         256       152       141       151
<10% increase         174       318       316       307
>10% increase         62        62        54        72

Changes in total spending, private nonprofit 4-years (n=1,002)

Characteristic        2020-21   2019-20   2018-19   2017-18
Median change (pct)   -1.8      -0.5      2.3       2.1
>10% decrease         119       53        35        22
<10% decrease         472       494       262       305
<10% increase         340       415       620       595
>10% increase         71        39        79        73

Changes in total spending, public 2-years (n=975)

Characteristic        2020-21   2019-20   2018-19   2017-18
Median change (pct)   1.0       3.6       1.4       1.5
>10% decrease         77        45        79        52
<10% decrease         353       222       305       330
<10% increase         406       548       488       489
>10% increase         139       160       103       104

These numbers tell several important stories. First, spending in the community college sector was affected less than spending in the four-year sector. This could be due to fewer auxiliary enterprises (housing, dining, and the like) that were affected by the pandemic, or it could be due to the existing leanness of their operations. As community college enrollments continue to decline, this is worth watching when new data come out around this time next year.

Second, private nonprofit colleges were the only sector to cut spending in the 2019-20 academic year. The pandemic likely nudged the median number below zero from what it otherwise would have been, as these tuition-dependent institutions were trying to respond immediately to pressures in spring 2020. Finally, there is a lot of variability in institutional expenses from year to year. If you are interested in a particular college, reading its financial statements can be a great way to learn more about what is going on beyond what is available in IPEDS data.

A quick and unrelated final note: I have gotten to know many of you all via Twitter, and it is far from clear whether the old blue bird will be operational in the future. I will stay on Twitter as long as it’s a useful and enjoyable experience, although I recognize that my experience has been better than many others. You can follow my blog directly by clicking “follow” on the bottom right of my website, and you can also find me on LinkedIn. I haven’t gone to any of the other social media sites yet, but that may change in the future.

Have a safe and wonderful holiday season and let’s have a great 2023!

What is Next for College Rankings?

It’s safe to say that leaders in higher education typically have a love/hate relationship with college rankings. Traditionally, they love them when they do well and hate them when they move down a few pegs. Yet, outside of a small number of liberal arts colleges, few institutions have made the choice not to cooperate with the 800-pound gorilla in the college rankings industry–U.S. News and World Report. This is because research has shown that the profile of new students changes following a decline in the rankings and because many people care quite a bit about prestige.

This makes the recent decision by Yale Law School, followed by ten other law schools (and likely more by the time you read this), to stop cooperating with the U.S. News ranking of those programs fascinating. In this post, I offer some thoughts on what is next for college rankings based on my experiences as a researcher and as the longtime data editor for Washington Monthly’s rankings.

Prestige still matters. There are two groups of institutions that feel comfortable ignoring U.S. News’s implied threat to drop colleges lower in the rankings if they do not voluntarily provide data. The first group is broad-access institutions with a mission to serve all comers within their area, as these students tend not to look at rankings and U.S. News relegates them to the bottom of the list anyway. Why bother sending them data if your ranking won’t change?

The second group is institutions that already think they are the most prestigious, and thus have no need for rankings to validate their opinions. This is what is happening in the law school arena right now. Most of the top 15 institutions have announced that they will no longer provide data, and to some extent this group is a club of its own. Will this undermine the U.S. News law school rankings if none of the boycotting programs are where people expect them to be? That will be fascinating to watch.

What about the middle of the pack? The institutions that have been most sensitive to college rankings are the not-quite-elite but still selective institutions that are trying to enhance their profiles and jump over some of their competitors. Moving up in the rankings is often part of their strategic plans and can increase presidential salaries at public universities, and U.S. News metrics have played a large part in how Florida has funded its public universities. Institutional leaders will be under intense pressure to keep cooperating with U.S. News so they can keep moving up.

Another item to keep an eye on: I would not be surprised if conservative state legislators loudly object to any moves away from rankings among public universities. In an era of growing political polarization and concerns about so-called “woke” higher education, this could serve as yet another flashpoint. Expect the boycotts to be at the most elite private institutions and at blue-state public research universities.

Will the main undergraduate rankings be affected? Graduate program rankings depend heavily on data provided by institutions because there are often no other available data sources. Law schools are a little different than many other programs because the accrediting agency (the American Bar Association) collects quite a bit of useful data. For programs such as education, U.S. News is forced to rely on data provided by institutions along with its reputational survey.

At the undergraduate level, U.S. News relies on two main data sources that are potentially at risk from boycotts. The first is the Common Data Set, which is a data collection partnership among U.S. News, Peterson’s, and the College Board. The rankings scandal at Columbia earlier this year came out of data anomalies that a professor identified based on their Common Data Set submissions, and Columbia just started releasing their submission to the public for the first time this fall. Opting out of the Common Data Set affects the powerful College Board, so institutions may not want to do that. The second is the long-lamented reputational survey, which has a history of being gamed by institutional leaders and has suffered from falling response rates. At some point, U.S. News may need to reconsider its methodology if more leaders decline to respond.

From where I sit as the Washington Monthly data editor, it’s nice to not rely on any data that institutions submit. (We don’t have the staff to do large data collections, anyway.) But I think the Common Data Set will survive, although there may need to be some additional checks put into the data collection process to make sure numbers are reasonable. The reputational survey, however, is slowly fading away. It would be great to see a measure of student success replace it, and I would suggest something like the Gallup Alumni Survey. That would be a tremendous addition to the U.S. News rankings and may even shake up the results.

Will colleges or programs ask not to be ranked? So far, the law school announcements that I have seen have mentioned that programs will not be providing data to U.S. News. But they could go one step further and ask to be completely excluded from the rankings. From an institutional perspective, if most of the top-15 law schools opt out, is it better for them to be ranked in the 30s (or thereabouts) or not to appear at all? This would create an ethical question to ponder. Rankings exist in part to provide useful information to students and their families, but should a college that doesn’t want to be ranked still show up based on whatever data sources are available? I don’t have a great answer to that one.

Buckle up, folks. The rankings fun is likely to continue over the next year.

Why I’m Skeptical of Cost of Attendance Figures

In the midst of a fairly busy week for higher education (hello, Biden’s student loan forgiveness and income-driven repayment plans!), the National Center for Education Statistics began adding a new year of data into the Integrated Postsecondary Education Data System. I have long been interested in cost of attendance figures, as colleges often face intense pressure to keep these numbers low. A higher cost of attendance means a higher net price, which makes colleges look bad even if this number is driven by student living allowances that colleges do not receive. For my scholarly work on this, see this Journal of Higher Education article—and I also recommend this new Urban Institute piece on the topic.

After finishing up a bunch of interviews on student loan debt, I finally had a chance to dig into cost of attendance data from IPEDS for the 2020-21 and 2021-22 academic years. I focused on the reported cost of attendance for students living off-campus at 1,568 public and 1,303 private nonprofit institutions (academic year reporters) with data in both years. This time period is notable for two things: more modest increases in tuition and sharply higher living costs due to the pandemic and resulting changes to college attendance and society at large.

And the data bear this out on listed tuition prices. The average increase in tuition was just 1.67%, with similar increases across public and private nonprofit colleges. A total of 116 colleges had lower listed tuition prices in fall 2021 than in fall 2020, while about two-thirds of public and one-third of private nonprofit colleges did not increase tuition for fall 2021. This resulted in a tuition increase well below the rate of inflation, which is generally good news for students but bad news for colleges.

The cost of attendance numbers, as shown below, look a little different. Nearly three times as many institutions (322) reported a lower cost of attendance than reported lower tuition, which is surprising given rising living costs. More colleges also reported increasing the cost of attendance relative to increasing tuition, with fewer colleges reporting no changes.

Changes in tuition and cost of attendance, fall 2020 to fall 2021.

                      Public (n=1,568)   Private (n=1,303)
Tuition
  Decrease            64                 52
  No change           955                439
  Increase            549                812
Cost of attendance
  Decrease            188                134
  No change           296                172
  Increase            1,084              997

Some of the reductions in cost of attendance are sizable without a corresponding cut in tuition. For example, California State University-Monterey Bay reduced its listed cost of attendance from $31,312 to $26,430 while tuition increased from $7,143 to $7,218. [As Rich Hershman pointed out on Twitter, this is potentially due to California updating its cost of attendance survey instead of increasing it by inflation every year.]

Texas Wesleyan University increased tuition from $33,408 to $34,412, while the cost of attendance fell from $52,536 to $49,340. These decreases could be due to a more accurate estimate of living expenses, moving to open educational resources instead of textbooks, or reducing student fees. But the magnitude of these decreases during an inflationary period leads me to continue questioning the accuracy of cost of attendance values or the associated net prices.
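As an illustration of how the decrease/no change/increase categories in the table above can be computed, here is a small sketch using the two institutions just mentioned as examples. The column names are placeholders for the IPEDS charges variables.

```python
# Sketch: label each institution's tuition and cost of attendance change as a
# decrease, no change, or increase between fall 2020 and fall 2021.
import numpy as np
import pandas as pd

def change_category(old: pd.Series, new: pd.Series) -> pd.Series:
    """Classify the direction of change between two years of charges."""
    diff = new - old
    cats = np.select([diff < 0, diff == 0], ["Decrease", "No change"], "Increase")
    return pd.Series(cats, index=old.index)

# Values from the two examples discussed above (CSU-Monterey Bay, Texas Wesleyan)
charges = pd.DataFrame({
    "tuition_2020": [7143, 33408], "tuition_2021": [7218, 34412],
    "coa_2020": [31312, 52536],    "coa_2021": [26430, 49340],
})

charges["tuition_change"] = change_category(charges["tuition_2020"], charges["tuition_2021"])
charges["coa_change"] = change_category(charges["coa_2020"], charges["coa_2021"])
print(charges[["tuition_change", "coa_change"]].value_counts())
```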

As a quick note, this week marks the ten-year anniversary of my blog. Thanks for joining me through 368 posts! I don’t have the time to do as many posts as I used to, but it is sure nice to have an outlet for some occasional thoughts and data pieces.

How Colleges’ Carnegie Classifications Have Changed Over Time

NOTE: This post was updated on February 2, 2022 to reflect substantial changes between the initial and final Carnegie data releases.

Every three years, Indiana University’s Center on Postsecondary Research updates the Carnegie classifications, a key measure of prestige for some colleges that helps define peer groups. Much of the higher education community looks closely at these lists and doesn’t hesitate to share opinions about whether colleges are correctly classified.

The 2021 version includes many different types of classifications based on different institutional characteristics. But the basic classification (based on size, degrees awarded, and research intensity) always garners the most attention from the higher education community. I took a look at the 2018 update three years ago, and this post provides an updated analysis of the 2021 classifications.

The item that always gets the most attention in the Carnegie classifications is Research 1 (doctoral universities with very high research activity) status, as this is based on research metrics and is a key indicator of prestige. The R1 list has continued to grow, moving from 96 universities in 2005 to 146 in 2021. Notably, nine additional universities were added to the R1 list between the initial data release in December 2021 and the final release in January 2022. This includes three universities that were initially moved down to R2 and successfully managed to get moved back by either correcting data errors or appealing their classification.

Year   R1    R2    R3    Total
2021   146   134   189   469
2018   131   132   161   423
2015   115   107   112   334
2010   108   98    89    295
2005   96    102   81    279

At the two-year level, there are competing trends of institutional consolidations in the for-profit sector and more community colleges offering bachelor’s degree programs. The number of baccalaureate/associate colleges declined substantially in 2021 (going from 269 in 2018 to 202 in 2021), but this is mainly driven by reclassifications between the initial and final data releases (going from 250 to 202).

Baccalaureate/associate colleges by year:
2021: 202
2018: 269
2015: 248
2010: 182
2005: 144

IPEDS counts these institutions as four-year universities, but the Carnegie classification (basic codes 14 and 23) is a better way to flag them as two-year colleges.
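If you want to build that flag yourself, a quick sketch is below. It assumes the IPEDS header file and a C21BASIC-style column name, so verify both against the current codebook before relying on it.

```python
# Small sketch: flag baccalaureate/associate colleges (Carnegie basic codes 14
# and 23) as two-year institutions even though IPEDS counts them as four-year.
# The file and column names follow the usual IPEDS header-file conventions but
# should be checked against the codebook.
import pandas as pd

hd = pd.read_csv("hd2021.csv", encoding="latin-1")
hd["two_year_flag"] = hd["C21BASIC"].isin([14, 23])
print(hd["two_year_flag"].sum())
```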

Going forward, Carnegie classifications will continue to be updated every three years in order to keep up with a rapidly-changing higher education environment. It remains to be seen who will host the classifications following a falling-out with Albion College, and I’m very much intrigued by the high number of reclassifications this time around. It’s never dull in higher ed data land!


Options for Replacing Standardized Test Scores for Researchers and Rankers

It’s the second Monday in September, so it’s time for the annual college rankings season to conclude with U.S. News & World Report’s entry. The top institutions in the rankings change little from year to year, but colleges pay lots of attention to statistically insignificant movements. Plenty has been written on those points, and plenty of digital ink has also been spilled on U.S. News’s decision to keep standardized test scores in their rankings this year.

In this blog post, I want to look a few years further down the line. Colleges were already starting to adopt test-optional policies prior to March 2020, but the pandemic accelerated that trend. Now a sizable share of four-year colleges have taken a hiatus from requiring ACT or SAT scores, and many may not go back. This means that people who have used test scores in their work (whether as academic researchers or college rankings methodologists) will have to think about how to proceed going forward.

The best metrics to replace test scores depend in part on the goals of the work. Most academic researchers use test scores as a control variable in regression models as a proxy for selectivity or as a way to understand the incoming academic performance of students. High school GPA is an appealing measure, but is not available in the Integrated Postsecondary Education Data System and also varies considerably across high schools. Admit rates and yield rates are available in IPEDS and capture some aspects of selectivity and student preferences to attend particular colleges. Admit rates can be gamed by trying to get as many students as possible with no interest in the college to apply and be rejected, and yield rates vary considerably based on the number of colleges students apply to.

Other potential metrics are likely not nuanced enough to capture smaller variations across colleges. Barron’s Profiles of American Colleges has a helpful admission competitiveness rating (and as a plus, that thick book held up my laptop for hundreds of hours of Zoom calls during the pandemic). But there are not that many categories and they change relatively little over time. Carnegie classifications focus more on the research side of things (a key goal for some colleges), but again are not as nuanced and are only updated every few years.

If the goal is to get at institutional prestige, then U.S. News’s reputational survey could be a useful resource. The challenge there is that colleges have a history of either not caring about filling out the survey or trying to strategically game the results by ranking themselves far higher than their competitors. But if a researcher wants to get at prestige and is willing to compile a dataset of peer assessment scores over time, it’s not a bad idea to consider.

Finally, controlling for socioeconomic and racial/ethnic diversity is also an option given the correlations between test scores and these factors. I was more skeptical of these correlations until moving to New Jersey and seeing all of the standardized test tutors and independent college counselors that existed in one of the wealthiest parts of the country.

As the longtime data editor for the Washington Monthly rankings, it’s time for me to start thinking about changes to the 2022 rankings. The 2021 rankings continued to use test scores as a control for predicting student outcomes and I already used admit rates and demographic data from IPEDS as controls. Any suggestions that people have for publicly-available data to replace test scores in the regressions would be greatly appreciated.
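To make the setup concrete, here is a hedged sketch of the kind of regression I have in mind, with admit rate and demographic shares from IPEDS standing in for test scores as controls. The data and variable names are invented for illustration only, not drawn from the actual Washington Monthly models.

```python
# Sketch: predict an institutional outcome (e.g., graduation rate) with admit
# rate and demographic shares as controls in place of test scores.
# All values below are made up for illustration.
import pandas as pd
import statsmodels.formula.api as smf

colleges = pd.DataFrame({
    "grad_rate":  [0.55, 0.72, 0.40, 0.81, 0.63],
    "admit_rate": [0.70, 0.45, 0.90, 0.30, 0.60],
    "pct_pell":   [0.40, 0.25, 0.55, 0.15, 0.35],  # share of Pell recipients (placeholder)
    "pct_urm":    [0.30, 0.20, 0.45, 0.18, 0.28],  # share of underrepresented students (placeholder)
})

model = smf.ols("grad_rate ~ admit_rate + pct_pell + pct_urm", data=colleges).fit()
print(model.params)
```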

Examining Trends in Tuition Freezes and Declines

Greetings from beautiful eastern Tennessee! Since my last post, I have accepted a position as professor and head of the Department of Educational Leadership and Policy Studies at the University of Tennessee, Knoxville. It is an incredible professional opportunity for me, and the Knoxville area is a wonderful place to raise a family. I start on August 1, so the last month has been a combination of moving, taking some time off, and getting data in order to keep making progress on research.

Speaking of getting new data in order, the U.S. Department of Education’s newest iteration of Integrated Postsecondary Education Data System (IPEDS) data came out with a fresh year of data on tuition and fee charges, enrollment, and completions. In this post, I am using the new data on tuition and fees in the 2020-21 academic year to look at how colleges changed their listed prices during the pandemic.

I limited my analysis to 3,356 colleges and universities that met three criteria. First, they had IPEDS data on in-district or in-state tuition and fees in both the 2019-20 and 2020-21 academic years. Second, they reported data for the typical program of study (academic year reporters) instead of separately for large programs (program reporters). This excluded most certificate-dominant institutions. Third, I kept colleges with Carnegie classifications in 2018 and excluded tribal colleges due to their unique governance structures. I then classified public institutions into two-year and four-year colleges based on Carnegie classifications to properly classify associate-dominant institutions as two-year colleges.
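For transparency, here is an approximate sketch of those three restrictions in code. The file and column names are placeholders for a merged IPEDS/Carnegie extract, and the list of associate-dominant Carnegie basic codes is illustrative rather than official.

```python
# Approximate sketch of the sample restrictions described above.
# Column names are placeholders for a merged IPEDS/Carnegie institution file.
import pandas as pd

inst = pd.read_csv("ipeds_tuition_panel.csv")  # hypothetical merged file

sample = inst[
    inst["tuition_fees_2019_20"].notna()
    & inst["tuition_fees_2020_21"].notna()          # criterion 1: charges reported in both years
    & (inst["reporter_type"] == "academic_year")    # criterion 2: exclude program reporters
    & inst["carnegie_2018"].notna()                 # criterion 3: has a 2018 Carnegie classification
    & (inst["tribal"] == 0)                         # criterion 3: exclude tribal colleges
].copy()

# Use Carnegie basic codes (not IPEDS level) to place associate-dominant publics
# in the two-year sector; the code list here is illustrative, not official.
associate_codes = {1, 2, 3, 4, 5, 6, 7, 8, 9, 14}
sample["public_two_year"] = (sample["control"] == "public") & sample["carnegie_2018"].isin(associate_codes)
```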

Now on to the analysis. There was a lot of coverage of colleges cutting tuition and/or fees for 2020-21 on account of the pandemic, but now analysts can see how prevalent this actually was. The majority of public and for-profit colleges either froze or decreased tuition in 2020-21, but two-thirds of private nonprofit colleges increased their list prices. This does not mean that private colleges actually increased tuition revenue due to the possibility of increased financial aid, and this answer will not be known in publicly available data until early 2023. Public colleges and universities were somewhat more likely to reduce fees than tuition, while for-profit colleges were less likely to do so.

The rightmost column of the first table below combines tuition and fees and provides a more comprehensive picture of student charges. Although a majority of public universities froze tuition and a majority froze fees, combined tuition and fees still increased at 56% of institutions. This suggests that colleges that froze tuition increased fees and colleges that froze fees increased tuition. Colleges found a way to get the money that they needed. Fifty-three percent of community colleges increased combined tuition and fees, as did 71% of private nonprofit colleges and just 42% of for-profit colleges.

Changes in tuition and fees, 2019-20 to 2020-21.

                      Tuition   Fees    Tuition and fees
Four-year public
  Increase            35.6%     35.8%   56.1%
  No change           61.2%     55.5%   30.7%
  Decrease            3.2%      8.6%    13.1%
Two-year public
  Increase            41.0%     45.1%   53.4%
  No change           55.3%     40.0%   39.9%
  Decrease            3.8%      14.9%   6.6%
Private nonprofit
  Increase            66.9%     38.6%   71.2%
  No change           28.2%     54.0%   21.3%
  Decrease            4.9%      7.4%    7.5%
For-profit
  Increase            34.7%     25.3%   42.3%
  No change           49.6%     66.8%   41.5%
  Decrease            15.7%     7.8%    16.2%
Source: Robert Kelchen’s analysis of IPEDS data.

The next obvious question is whether the 2020-21 trends differed from past years. I pulled IPEDS data going back to 2015-16 to look at trends in tuition and fees over the past five years. The share of colleges freezing combined tuition and fees increased in every sector of higher education, with the increase being most pronounced among public universities (from 9.5% in 2019-20 to 30.7% in 2020-21). Other sectors had smaller increases, although around one-third of community colleges and for-profit institutions had no changes in tuition and fees in prior years as well. The only sector with a large increase in tuition and fee cuts was public universities, with a jump from 5.1% to 13.1% between 2019-20 and 2020-21.

Changes in tuition and fees over time.

             4-year public   2-year public   Private nonprofit   For-profit
2020-21
  No change  30.7%           39.9%           21.3%               41.5%
  Decrease   13.1%           6.6%            7.5%                16.2%
2019-20
  No change  9.5%            31.7%           13.7%               34.0%
  Decrease   5.1%            6.9%            4.7%                12.0%
2018-19
  No change  10.5%           27.2%           14.6%               38.0%
  Decrease   6.0%            6.5%            5.2%                22.4%
2017-18
  No change  7.4%            27.2%           12.4%               34.8%
  Decrease   3.2%            5.9%            3.6%                25.0%
2016-17
  No change  13.8%           24.5%           12.4%               28.7%
  Decrease   4.4%            8.0%            3.8%                16.0%
2015-16
  No change  8.7%            29.3%           10.8%               33.5%
  Decrease   5.2%            7.3%            4.3%                18.2%
Source: Robert Kelchen’s analysis of IPEDS data.

As the pandemic enters a new stage, the higher education community continues to get more information on the broader effects on the 2020-21 academic year. It will take a few years to get a complete picture of what happened in the sector, but each data release provides additional insights for researchers and policymakers.

An Updated Look at Financial Responsibility Scores and College Closures

The topic of college closures has gotten even more attention since the beginning of the coronavirus pandemic last spring. Even though the number of private nonprofit colleges closing remained around recent norms in 2020 (approximately ten degree-granting institutions), many colleges have absorbed sizable losses during the pandemic and will continue to do so in coming years. In a recent working paper that I wrote with Dubravka Ritter of the Federal Reserve Bank of Philadelphia and Doug Webber of Temple University, we estimate that colleges and universities may lose approximately $100 billion in revenue over the next five years. This means that colleges are still going to face financial challenges going forward.

One of the federal government’s main tools to identify colleges at risk of closure is financial responsibility scores. Private nonprofit and for-profit colleges are scored on a scale of -1.0 to 3.0 based on three measures: a primary reserve ratio, a net income ratio, and an equity ratio. Colleges that score at 1.5 or above pass, while colleges that score between 1.0 and 1.4 are in an oversight zone and colleges that score at 0.9 or below fail. Colleges that fail must submit a letter of credit in order to keep receiving federal financial aid, and colleges that are in the oversight zone or fail are subject to additional financial monitoring.
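Since these cutoffs matter for everything that follows, here is a tiny sketch of how the pass/zone/fail categories map onto the composite score.

```python
# Minimal sketch: bucket financial responsibility composite scores (which range
# from -1.0 to 3.0) into the pass / zone / fail categories described above.
def classify_score(score: float) -> str:
    if score >= 1.5:
        return "pass"   # no additional oversight triggered by the score
    if score >= 1.0:
        return "zone"   # oversight zone, subject to additional monitoring
    return "fail"       # must submit a letter of credit to keep receiving federal aid

for s in (2.1, 1.3, 0.4, -1.0):
    print(s, classify_score(s))
```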

The financial responsibility scores for fiscal years ending in 2018-19 were recently released by Federal Student Aid, representing the final pre-pandemic look at colleges’ finances. The distribution of listed scores by sector is below. The vast majority of colleges in both sectors passed, but far more for-profit colleges failed than private nonprofit colleges.

Outcome   For-profit   Nonprofit
Pass      1,517        1,479
Zone      51           51
Fail      123          49
Total     1,691        1,579

Four years ago, I looked at the financial responsibility scores of private nonprofit colleges that closed in 2016. Of the 12 colleges with available data, four passed, two were in the oversight zone, three failed, and the final three were placed on heightened cash monitoring for financial responsibility score issues without being assigned a score.

I repeated this exercise for twelve private nonprofit colleges that closed or merged in 2020 or 2021 and had available data. As shown below, not a single college that closed received a failing financial responsibility score. Three were in the oversight zone, three were instead placed on heightened cash monitoring for financial responsibility concerns, and the other six all passed. Holy Family College in Wisconsin, which closed in 2020, had a perfect score.

Name                             Financial responsibility score
Judson College                   2.1 (pass)
Becker College                   Placed on HCM1
Concordia College (NY)           Placed on HCM1
Marlboro College                 1.8 (pass)
Wesley College (DE)              Placed on HCM1
Pine Manor College               1.0 (zone)
Holy Family College (WI)         3.0 (pass)
Urbana University                2.9 (pass)
MacMurray College                2.6 (pass)
Robert Morris University (IL)    1.3 (zone)
Concordia University (OR)        1.1 (zone)
Watkins College of Art           2.2 (pass)

Next year’s release of financial responsibility scores will begin to show the effects of the pandemic at colleges whose fiscal years ended after the pandemic began. The 2020-21 IPEDS data collection cycle also includes, for the first time, values for each component of the financial responsibility score, so analysts will have more information about colleges’ financial positions.