New Research on Heightened Cash Monitoring

I have spent most of the last year digging into the topic of heightened cash monitoring (HCM), perhaps the federal government’s most important tool in its higher education accountability toolbox at this time. HCM places colleges’ federal financial aid disbursements under additional scrutiny in order to protect taxpayer dollars. There are two levels of scrutiny: HCM1 requires additional oversight, while the more severe HCM2 requires colleges to pay out money to students before being reimbursed by Federal Student Aid.

This seems like an obscure topic, but it affects a substantial portion of American higher education. In 2023, 493 colleges were on HCM1 and 78 colleges were on HCM2—together representing about 10% of all colleges receiving federal financial aid. And in the mid-2010s, more than 1,000 colleges were on HCM1 or HCM2 at one time.[1]

Thanks to the generous support of Arnold Ventures, my graduate research assistant Holly Evans and I dove into whether colleges responded to being placed on the more severe HCM2 status by changing their financial priorities, closing, or influencing student debt and graduation outcomes. We compared colleges placed on HCM2 to colleges that were not on HCM2, but had failed the federal financial responsibility metric (and thus also had issues identified by the federal government). Using three analytic approaches, we generally found no relationships between HCM2 status and these outcomes. It was a lot of work for no clear findings, but that is pretty typical when studying institutional responses to government policies.

Here is a copy of our working paper, which I am posting here in the hope of receiving feedback. I am particularly interested in thoughts about the analytic strategy, interpreting results, and potential journals to send this paper to. Stay tuned for more work from this project!


[1] HCM1 data were first made public in 2015 following news coverage from Inside Higher Ed, while retroactive HCM2 data were also released in 2015 with the unveiling of the modern College Scorecard.

Sharing a Dataset of Program-Level Debt and Earnings Outcomes

Within a couple of hours of posting my comments on the Department of Education’s proposal to create a list of programs with low financial value, I received multiple inquiries about whether there was a user-friendly dataset of current debt-to-earnings ratios for programs. Since I work with College Scorecard data on a regular basis and have used the data to write about debt-to-earnings ratios, it only took a few minutes to put something together that I hope will be useful.

To create a debt-to-earnings ratio that covered as many programs as possible, I pulled median student debt accumulated at that institution for the cohorts of students who left college in the 2016-17 or 2017-18 academic years and matched it with earnings for those same cohorts one calendar year later (calendar year 2018 or 2019). The College Scorecard has some earnings data more than one year out at this point, but a much smaller share of programs are covered. I then calculated a debt-to-earnings ratio. And for display purposes, I also pulled median parent debt from that institution.
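For anyone who wants to reproduce the matching step, here is a minimal sketch. The keys and values are illustrative only; the actual Scorecard files use their own variable names and pool the two exit cohorts.

```python
# Illustrative program records keyed by (UNITID, CIP code); the real
# Scorecard files use different variable names and pooled cohorts.
debt = {("100654", "11.0701"): 25000, ("100654", "52.0201"): 22000}
earnings_1yr = {("100654", "11.0701"): 50000, ("100654", "52.0201"): 44000,
                ("100706", "14.0801"): 60000}

# Keep only programs with both debt and earnings for the same cohorts,
# then divide median debt by median earnings one year after leaving.
debt_to_earnings = {
    key: debt[key] / earnings_1yr[key]
    for key in debt
    if key in earnings_1yr
}
print(debt_to_earnings)
```

Programs missing either element simply drop out of the ratio, which is why the final dataset only covers programs with both pieces of data.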

The resulting dataset covers 45,971 programs at 5,033 institutions with data on both student debt and earnings for those same cohorts. You can download the dataset here in Excel format and use filter/sort functions to your heart’s content.

Comments on a Proposed Federal List of Low-Value Programs

The U.S. Department of Education recently announced that it will be creating a list of low-value postsecondary programs, and it requested input from the public on how to do so. The Department asked seven key questions, and I put together more than 3,000 words of comments to submit in response. Here, I list the questions and briefly summarize my key points.

Question 1: What program-level data and metrics would be most helpful to students to understand the financial (and other) consequences of attending a program?

Four data elements would be helpful. The first is program-level completion rates, especially for graduate or certificate programs where students are directly admitted into programs. Second, given differential tuition and different credit requirements across programs, time to completion and sticker/net prices by program would be incredibly valuable. The last two are debt and earnings, which are largely present in the current College Scorecard.

Question 2: What program-level data and metrics would be most helpful to understand whether public investments in the program are worthwhile? What data might be collected uniformly across all students who attend a program that would help assess the nonfinancial value created by the program?

I would love to see information on federal income taxes paid by former students and use of public benefits (if possible). More information on income-driven repayment use would also be helpful. Finally, there is a great need to rethink definitions of “public service,” as it currently depends on the employer instead of the job function. That is a concern in fields like nursing that send graduates to do good things in for-profit and nonprofit settings.

Question 3: In addition to the measures or metrics used to determine whether a program is placed on the low-financial-value program list, what other measures and metrics should be disclosed to improve the information provided by the list?

Nothing too fancy here. Just list any sanctions/warnings from the federal government, state agencies, or accreditors along with general outcomes for all students at the undergraduate level to account for major switching.

Question 4: The Department intends to use the 6-digit Classification of Instructional Program (CIP) code and the type of credential awarded to define programs at an institution. Should the Department publish information using the 4-digit CIP codes or some other type of aggregation in cases where we would not otherwise be able to report program data?

This is my nerdy honey hole, as I have spent a lot of time thinking about these issues. The two biggest issues with student debt/earnings data right now are that some campuses get aggregated together in reporting and that it is impossible to separate outcomes for fully online versus hybrid/in-person programs. Those nuts need to be cracked first, and then the Department can aggregate up if cell sizes are too small.

Question 5: Should the Department produce only a single low-financial-value program list, separate lists by credential level, or use some other breakdown, such as one for graduate and another for undergraduate programs?

Separate out by credential level and ideally have a good search function by program of study. Otherwise, some low-paying programs will clog up the lists and not let students see relatively lousy programs in higher-paying areas.

Question 6: What additional data could the Department collect that would substantially improve our ability to provide accurate data for the public to help understand the value being created by the program? Please comment on the value of the new metrics relative to the burden institutions would face in reporting information to the Department.

I would love to see program-level completion rates (where appropriate) and better pricing information at the program level. Those items aren’t free to implement, so I would gladly explore other cuts to IPEDS (such as the academic libraries survey) to help reduce additional burden.

Question 7: What are the best ways to make sure that institutions and students are aware of this information?

Colleges will be aware of this information without the federal government doing much, and they may respond to information that they didn’t have before. But colleges don’t have a great record of responding to public shaming if they already knew that affordability was a concern, so I’m not expecting massive changes.

The College Scorecard had small changes around the margins for student behaviors, primarily driven by more advantaged students. I’m not an expert in reaching out to prospective students, but I know that outreach to as many groups as possible is key.

Why I’m Skeptical of Cost of Attendance Figures

In the midst of a fairly busy week for higher education (hello, Biden’s student loan forgiveness and income-driven repayment plans!), the National Center for Education Statistics began adding a new year of data into the Integrated Postsecondary Education Data System. I have long been interested in cost of attendance figures, as colleges often face intense pressure to keep these numbers low. A higher cost of attendance means a higher net price, which makes colleges look bad even if this number is driven by student living allowances that colleges do not receive. For my scholarly work on this, see this Journal of Higher Education article—and I also recommend this new Urban Institute piece on the topic.

After finishing up a bunch of interviews on student loan debt, I finally had a chance to dig into cost of attendance data from IPEDS for the 2020-21 and 2021-22 academic years. I focused on the reported cost of attendance for students living off campus at 1,568 public and 1,303 private nonprofit institutions (academic year reporters) with data in both years. This time period is notable for two things: more modest increases in tuition and sharply higher living costs due to the pandemic and the resulting changes to college attendance and society at large.

And the data bear this out on listed tuition prices. The average tuition increase was just 1.67%, with similar increases across public and private nonprofit colleges. A total of 116 colleges had lower listed tuition prices in fall 2021 than in fall 2020, while about two-thirds of public and one-third of private nonprofit colleges did not increase tuition for fall 2021. This resulted in a tuition increase well below the rate of inflation, which is generally good news for students but bad news for colleges.

The cost of attendance numbers, as shown below, look a little different. Nearly three times as many institutions (322) reported a lower cost of attendance than reported lower tuition, which is surprising given rising living costs. More colleges also reported increasing the cost of attendance relative to increasing tuition, with fewer colleges reporting no changes.

Changes in tuition and cost of attendance, fall 2020 to fall 2021.

                        Public (n=1,568)   Private (n=1,303)
Tuition
  Decrease                    64                  52
  No change                  955                 439
  Increase                   549                 812
Cost of attendance
  Decrease                   188                 134
  No change                  296                 172
  Increase                 1,084                 997
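The decrease/no change/increase counts come from a straightforward comparison of each institution's fall 2020 and fall 2021 values. A minimal sketch, using the two institutions discussed next (the helper function is mine, not IPEDS code):

```python
def change_category(old, new):
    """Classify a year-over-year change in a listed price."""
    if new < old:
        return "Decrease"
    if new > old:
        return "Increase"
    return "No change"

# (name, tuition fall 2020, tuition fall 2021, COA fall 2020, COA fall 2021)
colleges = [
    ("CSU-Monterey Bay",  7143,  7218, 31312, 26430),
    ("Texas Wesleyan",   33408, 34412, 52536, 49340),
]
for name, t0, t1, c0, c1 in colleges:
    # Both examples raised tuition while cutting cost of attendance.
    print(name, change_category(t0, t1), change_category(c0, c1))
```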

Some of the reductions in cost of attendance are sizable without a corresponding cut in tuition. For example, California State University-Monterey Bay reduced its listed cost of attendance from $31,312 to $26,430 while tuition increased from $7,143 to $7,218. [As Rich Hershman pointed out on Twitter, this is potentially due to California updating its cost of attendance survey instead of increasing it by inflation every year.]

Texas Wesleyan University increased tuition from $33,408 to $34,412, while the cost of attendance fell from $52,536 to $49,340. These decreases could be due to a more accurate estimate of living expenses, moving to open educational resources instead of textbooks, or reducing student fees. But the magnitude of these decreases during an inflationary period leads me to continue questioning the accuracy of cost of attendance values or the associated net prices.

As a quick note, this week marks the ten-year anniversary of my blog. Thanks for joining me through 368 posts! I don’t have the time to do as many posts as I used to, but it is sure nice to have an outlet for some occasional thoughts and data pieces.

What Happened to Colleges at Risk of Closing?

The issue of college closures has gotten a lot of attention in recent years, as evidenced by this recent Chronicle of Higher Education piece summarizing the growing field of researchers and organizations trying to identify colleges at risk of closure. I am one of those people, as I am working on a paper on this topic that I hope to release in the spring.

In doing my research for the paper, I stumbled upon a piece that I wrote for the Chronicle back in 2015 and completely forgot about. (The joys of writing short pieces and blog posts…sometimes I forget what I wrote about several years ago!) In that piece, titled “Where 3 Accountability Measures Meet, A Hazardous Intersection,” I used a brand-new data source from the U.S. Department of Education combined with two other existing measures to identify private nonprofit and for-profit colleges that may be at high risk of closing. The metrics were the following:

(1) Whether the college was on Heightened Cash Monitoring in the first-ever public data release in April 2015.

(2) Scored in the oversight zone or failed the financial responsibility test at least once between 2010-11 and 2012-13.

(3) Had a three-year cohort default rate above 30% (subjecting the institution to extra oversight) at least once between the 2009 and 2011 cohorts.

While 1,150 private colleges tripped at least one of the three metrics in my 2015 analysis, 26 colleges (six private nonprofit and 20 for-profit) tripped all three. Now, with five years' worth of hindsight, it is a good time to look at how many of these colleges are still open. I counted a college as open if it did not appear in Federal Student Aid's database of closed schools (which is updated weekly) and a Google search turned up evidence that it was still operating.
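The first pass of that check is easy to automate; a sketch with made-up OPEIDs (the real closed school file is keyed by OPEID):

```python
# Hypothetical OPEIDs; the real closed school database is maintained by
# Federal Student Aid and updated weekly.
closed_school_opeids = {"02269700", "00573900"}
flagged_colleges = ["02269700", "00101200", "00573900"]

# A college counts as closed if its OPEID is in the closed school file;
# everything else still needs a manual web/Wayback Machine check.
status = {
    opeid: ("closed" if opeid in closed_school_opeids else "needs manual check")
    for opeid in flagged_colleges
}
print(status)
```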

The results of my investigation are the following:

College name (* indicates nonprofit): status as of February 2020

Academy of Healing Arts, Massage & Facial Skin Care: Possibly open, but website redirected
Allen University*: Open
American Broadcasting School: Closed June 2018
American Indian College of the Assemblies of God*: Merged with another college in 2016
Antonelli College: Open
eClips School of Cosmetology and Barbering: Open
Everest College: Closed July 2016
Fayetteville Beauty College: Closed March 2019
Florida School of Traditional Midwifery*: Open
Hairmasters Institute of Cosmetology: Open
Helms Career Institute*: Closed December 2015
Hiwassee College*: Closed May 2019
Institute of Therapeutic Massage: Open
Los Angeles ORT Technical Institute*: Open
Mai-trix Beauty College: Possibly open, but website redirected
National Institute of Massotherapy: Closed June 2017
Northwest Career College: Open
Oklahoma School of Photography: Closed June 2017
Omega Studios' School of Applied Recording Arts & Sciences: Open
Professional Massage Training Center: Closed July 2015
South Texas Vocational Technical Institute: Open
Star Career Academy: Closed November 2016
Stylemasters College of Hair Design: Open
Taylor Business Institute: Open
Technical Career Institutes: Closed September 2017
Texas Vocational School: Open

To summarize, 13 of the 26 colleges that triggered all three accountability metrics in 2015 were clearly open as of February 2020, with two other colleges potentially open but lacking a clear Internet presence to support their existence. One college merged with another institution, while the other ten closed between July 2015 and May 2019. Of the ten colleges that closed, two closed in 2015, two in 2016, three in 2017, one in 2018, and two in 2019.

At the suggestion of Kevin McClure of UNC-Wilmington, I added an indicator for whether a college was private nonprofit (*) after I initially posted this piece. Of the six private nonprofit colleges, three remained open, two closed, and one merged. So the closure rate was about the same across both sectors.

This quick retrospective shows mixed implications for federal accountability policies. While fewer than half of the colleges that the federal government identified as being of the highest risk to students and taxpayers clearly closed within five years, this closure rate (especially among for-profit colleges) does suggest some predictive power of federal accountability policies. On the other hand, more than half of the colleges remained open despite the odds. This highlights the resilient (stubborn?) nature of some small private colleges that are determined to persist and improve their performance.

Again, stay tuned later this spring for a more thorough analysis of factors associated with college closures!

New Working Paper on the Effects of Gainful Employment Regulations

As debates regarding Higher Education Act reauthorization continue in Washington, one of the key sticking points between Democrats and Republicans is the issue of accountability for the for-profit sector of higher education. Democrats typically want to have tighter for-profit accountability measures, while Republicans either want to loosen regulations or at the very least hold all colleges to the same standards where appropriate.

The case of federal gainful employment (GE) regulations is a great example of partisan differences regarding for-profit accountability. The Department of Education spent much of its time during the Obama administration trying to implement regulations that would have stripped away aid from programs (mainly at for-profit colleges) that could not pass debt-to-earnings ratios. They finally released the first year of data in January 2017—in the final weeks of the Obama administration. The Trump administration then set about undoing the regulations and finally did so earlier this year. (For those who like reading the Federal Register, here is a link to all of the relevant documents.)

There has been quite a bit of talk in the higher ed policy world that GE led colleges to close poor-performing programs, and Harvard closing its poor-performing graduate certificate program in theater right after the data dropped received a lot of attention. But to this point, there has been no rigorous empirical research examining whether the GE regulations changed colleges’ behaviors.

Until now. Together with my sharp PhD student Zhuoyao Liu, I set out to examine whether the owners of for-profit colleges closed lousy programs or colleges after receiving information about their performance.

You can download our working paper, which we are presenting at the Association for the Study of Higher Education conference this week, here.

For-profit colleges can respond more quickly to new information than nonprofit colleges due to a more streamlined governance process and a lack of annoying tenured faculty, and they are also more motivated to make changes if they expect to lose money going forward. It is worth noting that no college should have expected to lose federal funding due to poor GE performance since the Trump administration was on its way in when the dataset was released.

Data collection for this project took a while. For 4,998 undergraduate programs at 1,462 for-profit colleges, we collected information on whether the college was still open using the U.S. Department of Education’s closed school database. Looking at whether programs were still open took a lot more work. We went to college websites, Facebook pages for mom-and-pop operations, and used the Wayback Machine to find information on whether a program appeared to still be open as of February 2019.

After doing that, we used a regression discontinuity research design to look at whether passing GE outright (relative to not passing) or being in the oversight zone (versus failing) affected the likelihood of college or program closures. While the results for the zone versus fail analyses were not consistently significant across all of our bandwidth and control variable specifications, there were some interesting findings for the passing versus not passing comparisons. Notably, programs that passed GE were much less likely to close than those that did not pass. This suggests that for-profit colleges, possibly encouraged by accrediting agencies and/or state authorizing agencies, closed lower-performing programs and focused their resources on their best-performing programs.
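For readers unfamiliar with the design, here is a minimal local linear regression discontinuity sketch on simulated data. The running variable, bandwidth, and closure probabilities are invented for illustration and are not our actual estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated running variable: distance of a program's GE measure from
# the passing cutoff (negative = passed).
x = rng.uniform(-1, 1, 4000)
# True closure probabilities jump at the cutoff: passing programs close less.
p_close = np.where(x < 0, 0.10, 0.30)
closed = rng.random(4000) < p_close

# Local linear RD: within a bandwidth of the cutoff, fit an intercept,
# slope, treatment indicator, and treatment-slope interaction. The
# treatment coefficient estimates the jump in closure probability.
h = 0.5
in_bw = np.abs(x) < h
xb, yb = x[in_bw], closed[in_bw].astype(float)
t = (xb >= 0).astype(float)
X = np.column_stack([np.ones_like(xb), xb, t, t * xb])
beta, *_ = np.linalg.lstsq(X, yb, rcond=None)
jump = beta[2]  # should recover a discontinuity near the true 0.20
print(round(jump, 3))
```

In the paper we additionally vary the bandwidth and control variable specifications to check the sensitivity of the estimates.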

We are putting this paper out as a working paper as a first form of peer review before undergoing the formal peer review process at a scholarly journal. We welcome all of your comments and hope that you find this paper useful—especially as the Department of Education gets ready to release program-level earnings data in the near future.

Twenty-Two Thoughts on House Democrats’ Higher Education Act Reauthorization Bill

House Democrats released the framework for the College Affordability Act today, their attempt at a long-overdue comprehensive reauthorization of the Higher Education Act. This follows the release of Senator Lamar Alexander's (R-TN) more targeted version last month. As I like to do when time allows, I live-tweeted my way through the 16-page summary document. Below are my 22 thoughts on certain parts of the bill (annotating some of my initial tweets with references) and what the bill means going forward.

(1) Gainful employment would go back in effect for the same programs covered in the Obama-era effort. (Did that policy induce programs to close? Stay tuned for a new paper on that…I’m getting back to work on it right after putting up this blog post!)

(2) In addition to lifting the student unit record ban, the bill would require data to be disaggregated based on the American Community Survey definitions of race (hopefully with a crosswalk for a couple of years).

(3) Federal Student Aid’s office would have updated performance goals, but there is no mention of a much-needed replacement of the National Student Loan Data System (decidedly unsexy and not cheap, though).

(4) Regarding the federal-state partnership, states would have access to funds to “support the adoption and expansion of evidence-based reforms and practices.” I would love to see a definition of “evidence”—is it What Works Clearinghouse standards or something less?

(5) The antiquated SEOG allocation formula would be phased out and replaced with a new formula based on unmet need and percent low-income. Without new money, this may work as well as the 1980 effort (which flopped). Here is my research on the topic.

(6) Same story for federal work-study. Grad students would still be allowed to participate, which doesn’t seem like the best use of money to me.

(7) Students would start repaying loans at 250% of the federal poverty line, up from 150%. Automatically recertifying income makes a lot of sense.

(8) There are relatively small changes to Public Service Loan Forgiveness, mainly regarding old FFEL loans and consolidation (they would benefit quite a few people). But people still have to wait ten years and hope for the best.

(9) I’m in a Halloween mood after seeing the awesome Pumpkin Blaze festival in the Hudson Valley last night. So, on that note, Zombie Perkins returns!

The Statue of Liberty, made entirely out of pumpkins. Let HEA reauthorization ring???

(10) ED would take a key role in cost of attendance calculations, with a requirement that they create at least one method for colleges to use. Here is my research on the topic, along with a recent blog post showing colleges with very low and very high living allowances.

(11) And if that doesn’t annoy colleges, a requirement about developing particular substance abuse safety programs will. Campus safety and civil rights requirements may also irk some colleges, but will be GOP nonstarters.

(12) The bill places a larger role on accreditors and state authorizers for accountability while not really providing any support. Expect colleges to sue accreditors/states…and involve their members of Congress.

(13) Improving the cohort default rate metric is long-overdue, and a tiered approach could be promising. (More details needed.)

(14) There would be a new on-time loan repayment metric, defined as the share of borrowers who made 33 of 36 payments on time. $0 payments and educational deferments count as payments, and ED would set the threshold with waivers possible.

(15) This is an interesting metric, and I would love to see it alongside the Scorecard repayment rate broken down by IDR and non-IDR students. But if the bill improves IDR, expect the on-time rate to (hopefully!) be high.

(16) It would be great to see new IPEDS data on marketing, recruitment, advertising, and lobbying expenses. Definitions matter a lot here, and the Secretary gets to create them. These are the types of metrics that the field showed interest in when the IPEDS folks asked Tammy Kolbe and me to do a landscape analysis of higher ed finance metrics.

(17) Most of higher ed wants financial responsibility scores to be updated (see my research on this), and this would set up a negotiated rulemaking panel to work on it.

(18) There is also language about “rebalancing” who participates in neg reg. The legislative text will be fun to parse.

(19) Teach for America will be reauthorized, but it’s in a list of programs with potential changes. Democrats will watch that closely.

(20) And pour one out for the programs that were authorized in the last Higher Education Act back in 2008, but never funded. This bill wants to get rid of some of them.

(21) So what’s next? Expect this to get a committee vote fairly quickly, but other events might swamp it (pun intended) in the House. I doubt the Senate will take it up as Alexander has his preferred bill.

(22) Then why do this? It’s a good messaging tool that can keep higher ed in the spotlight. Both parties are positioning for 2021, and this bill (which is moderate by Dem primary standards) is a good starting place for Democrats.

Thanks for reading!

Trends in For-Profit Colleges’ Reliance on Federal Funds

One of the many issues currently derailing bipartisan agreement on federal Higher Education Act reauthorization is how to treat for-profit colleges. Democrats and their ideologically-aligned interest groups, such as Elizabeth Warren and the American Federation of Teachers, have called on Congress to cut off all federal funds to for-profit colleges—a position that few publicly took before this year. Meanwhile, Republicans have generally pushed for all colleges to be held to the same accountability standards, as evidenced by the Department of Education's recent decision to rescind the Obama-era gainful employment regulations that primarily focused on for-profit colleges. (Thankfully, program-level debt-to-earnings data—which were used to calculate gainful employment metrics—will be available for all programs later this year.)

I am spending quite a bit of time thinking about gainful employment right now as I work on a paper with one of my graduate students that examines whether programs at for-profit colleges that failed the gainful employment metrics shut down at higher rates than similar colleges that passed. Look for a draft of this paper to be out later this year, and I welcome feedback from the field as soon as we have something that is ready to share.

But while I was putting together the dataset for that paper, I realized that new data on the 90/10 rule came out with basically no attention last December. (And this is how blog posts are born, folks!) This rule requires for-profit colleges to get at least 10% of their revenue from sources other than federal Title IV financial aid (veterans’ benefits count toward the non-Title IV funds). Democrats who are not calling for the end of federal student aid to for-profits are trying to get 90/10 changed to 85/15 and putting veterans’ benefits in with the rest of federal aid, while Republicans are trying to eliminate the rule entirely. (For what it’s worth, here are my thoughts about a potential compromise.)
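The mechanics of the rule, and of the proposed change, can be sketched in a few lines; the revenue figures below are made up for illustration:

```python
def fails_rule(title_iv, veterans_benefits, other_revenue,
               cap=0.90, count_veterans_as_federal=False):
    """True if the college's federal share of revenue exceeds the cap.

    Under the current 90/10 rule, veterans' benefits count on the
    non-Title IV side; the proposed change moves them to the federal side
    and lowers the cap to 85%.
    """
    federal = title_iv + (veterans_benefits if count_veterans_as_federal else 0)
    total = title_iv + veterans_benefits + other_revenue
    return federal / total > cap

# Illustrative college (revenues in $ millions): $8.8M Title IV,
# $0.5M veterans' benefits, $0.7M other revenue.
print(fails_rule(8.8, 0.5, 0.7))                   # current 90/10: False
print(fails_rule(8.8, 0.5, 0.7, cap=0.85,
                 count_veterans_as_federal=True))  # proposed 85/15: True
```

The example shows why the veterans' benefits question matters so much: a college can sit comfortably under 90% today but fail a hypothetical 85/15 rule once those benefits move to the federal side of the ledger.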

With the release of the newest data (covering fiscal years ending in the 2016-17 award year), there are now ten years of 90/10 rule data available on Federal Student Aid’s website. I have written in the past about how much for-profit colleges rely on federal funds, and this post extends the dataset from the 2007-08 through the 2016-17 award years. I limited the sample to colleges located in the 50 states and Washington, DC as well as to the 965 colleges that reported data over all ten years that data have been publicly released. The general trends in the reliance on Title IV revenues are similar when looking at the full sample, which ranges from 1,712 to 1,999 colleges across the ten years.

The table below shows how much the median college in the sample relied on Title IV federal financial aid revenues in each of the ten years of available data. The typical institution's share of revenue coming from federal financial aid increased sharply from 63.2% in 2007-08 to 73.6% in 2009-10. At least part of this increase is attributable to two factors: the Great Recession making more students eligible for need-based financial aid (and encouraging an increase in college enrollment) and the increased generosity of the Pell Grant program. Title IV reliance peaked at 76.0% in 2011-12 and has declined in each of the five most recent years, reaching 71.5% in 2016-17.

Award year            Median reliance on Title IV (%)
2007-08               63.2
2008-09               68.3
2009-10               73.6
2010-11               74.0
2011-12               76.0
2012-13               75.5
2013-14               74.6
2014-15               73.2
2015-16               72.5
2016-17               71.5
Number of colleges:   965

I then looked at reliance on Title IV aid by a college’s total revenues in the 2016-17 award year, dividing colleges into less than $1 million (n=318), $1 million-$10 million (n=506), $10 million-$100 million (n=122), and more than $100 million (n=19). The next graphic highlights that the groups all exhibited similar patterns of change over the last decade. The smallest colleges tended to rely on Title IV funds the least, while colleges with revenue of between $10 million and $100 million in 2016-17 had the highest shares of funds coming from federal financial aid. However, the differences among the groups were less than five percentage points from 2009-10 forward.

For those interested in diving deeper into the data, I highly recommend downloading the source spreadsheets from Federal Student Aid along with the explanations for colleges that have exceeded the 90% threshold. I have also uploaded an Excel spreadsheet of the 965 colleges with data in each of the ten years examined above.

Why the Next Secretary of Education Should Come from Higher Ed

Elizabeth Warren is one of several Democratic presidential candidates who is highlighting education as a key policy issue in their campaigns. A few weeks after announcing an ambitious proposal to forgive nearly half of all outstanding student debt and strip for-profit colleges’ access to federal financial aid (among other issues), she returned to the topic in advance of a town hall event with the American Federation of Teachers in Philadelphia. In a tweet, Warren promised that her Secretary of Education would be a public school teacher.

This would be far from unprecedented: both Rod Paige (under George W. Bush) and John King (under Barack Obama) were public school teachers. But if Warren or any other Democrat wants to influence American education to the greatest extent possible, the candidate should appoint someone from higher education instead of K-12 education. (The same also applies to Donald Trump, who apparently will need a new Secretary of Education if he wins a second term.) Below, I discuss a few reasons why ED’s next leader should come from higher ed.

First, the Every Student Succeeds Act, signed into law in 2015, shifted a significant amount of power from ED to the states. This means that the federal government’s power in K-12 education has shifted more toward the appropriations process, which is controlled by Congress. Putting a teacher in charge of ED may result in better K-12 policy, but the change is likely to be small due to the reduced amount of discretion.

Meanwhile, on the higher education side of the ranch, I still see a comprehensive Higher Education Act reauthorization as being unlikely before 2021—even though Lamar Alexander is promising a bill soon. I could see a narrowly-targeted bill on FAFSA simplification getting through Congress, but HEA reauthorization is going to be tough in three main areas: for-profit college accountability, income-driven student loan repayment plans, and social issues (Title IX, campus safety, and free speech). Warren’s proposal last month probably makes HEA reauthorization tougher as it will pull many Senate Democrats farther to the left.

This means that ED will continue to have a great amount of power to make policy through the negotiated rulemaking process under the current HEA. Both the Obama and Trump administrations used neg reg to shape policies without going through Congress, and a Democratic president is likely to rely on ED to undo Trump-era policies. Meanwhile, a second-term Trump administration will still have a number of loose ends to tie up given the difficulty of getting the sheer number of regulatory changes through the process by November 1 of this year (the deadline to have rules take effect before the 2020 election).

I fully realize that promising a public school teacher as Secretary of Education is a great political statement to win over teachers’ unions—a key ally for Democrats. But in terms of changing educational policies, candidates should be looking toward higher education veterans who can help them reshape a landscape in which there is more room to maneuver.

Which Colleges Failed the Latest Financial Responsibility Test?

Every year, the U.S. Department of Education is required to issue a financial responsibility score for private nonprofit and for-profit colleges, which serves as a crude measure of an institution’s financial health. Colleges are scored on a scale from -1.0 to 3.0, with colleges scoring 0.9 or below failing the test (and having to put up a letter of credit) and colleges scoring between 1.0 and 1.4 being placed in a zone of additional oversight.
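As a rough illustration, the cutoffs described above can be expressed as a simple classification rule. This is only a sketch of the thresholds, not ED's actual composite score calculation, which is derived from institutional financial statements:

```python
def classify_score(score: float) -> str:
    """Classify an ED financial responsibility composite score.

    Thresholds follow the description above: 0.9 or below fails
    (letter of credit required), 1.0 to 1.4 is the oversight zone,
    and 1.5 or above passes.
    """
    if not -1.0 <= score <= 3.0:
        raise ValueError("composite scores range from -1.0 to 3.0")
    if score <= 0.9:
        return "Fail"  # must post a letter of credit
    if score <= 1.4:
        return "Zone"  # subject to additional oversight
    return "Pass"
```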

Ever since I first learned of the existence of this metric five or six years ago, I have been bizarrely fascinated by its mechanics and how colleges respond to the score as an accountability pressure. I have previously written about how these scores are only loosely correlated with college closures in the past and also wrote an article about how colleges do not appear to change their fiscal priorities as a result of receiving a low score.

ED typically releases financial responsibility scores with no fanfare, and it looks like they updated their website with new scores in late March without anyone noticing (at least based on a Google search of the term “financial responsibility score”). I was adding a link to the financial responsibility score to a paper I am writing and noticed that the newest data—for the fiscal year ending between July 1, 2016 and June 30, 2017—were out. So here is a brief summary of the data.

Of the 3,590 colleges (at the OPEID level) that were subject to the financial responsibility test in 2016-17, 269 failed, 162 were in the oversight zone, and 3,159 passed. Failure rates were higher in the for-profit sector than in the nonprofit sector, as the table below indicates.

Financial responsibility scores by institutional type, 2016-17.

                     Nonprofit   For-profit   Total
Fail (-1.0 to 0.9)          82          187     269
Zone (1.0 to 1.4)           58          104     162
Pass (1.5 to 3.0)        1,559        1,600   3,159
Total                    1,699        1,891   3,590
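To make the sector comparison concrete, the failure rates implied by the table can be computed directly from the counts above (a quick sketch, nothing more):

```python
# Counts by sector from the 2016-17 financial responsibility data.
counts = {
    "Nonprofit": {"Fail": 82, "Zone": 58, "Pass": 1559},
    "For-profit": {"Fail": 187, "Zone": 104, "Pass": 1600},
}

for sector, c in counts.items():
    total = sum(c.values())
    rate = c["Fail"] / total
    print(f"{sector}: {c['Fail']}/{total} failed ({rate:.1%})")
```

The for-profit failure rate (about 9.9%) is roughly double the nonprofit rate (about 4.8%).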


Among the 91 institutions with the absolute lowest score of -1.0, 85 were for-profit. And many of them were part of larger chains: Education Management Corporation (17), Education Affiliates, Inc. (19), and Nemo Investor Aggregator (11) were responsible for more than half of the -1.0 scores. Most of the Education Affiliates (Fortis) and Nemo (Cortiva) campuses still appear to be open, but Education Management Corporation (Argosy, Art Institutes) recently suffered a spectacular collapse.

I am increasingly skeptical of financial responsibility scores as a useful measure of financial health because they are so backward-looking. The data are already three years old, which is an eternity for a college on the brink of collapse (but perhaps not awful for a cash-strapped nonprofit college with a strong will to live). I joined Kenny Megan from the Bipartisan Policy Center to write an op-ed for Roll Call on a better way to move forward with collecting more updated financial health measures, and I would love your thoughts on new ways to proceed!