Key Takeaways from the Negotiated Rulemaking Data Release

I thought that the end of 2025 was going to be relatively quiet when I wrote my last piece a couple of weeks ago, but my words to the Chronicle of Higher Education for their 25-year retrospective came back to bite me:

“It’s hard not to focus just on what has happened in 2025, because it seems like this year alone has been 25 years long.”

So I’m back with one more piece on New Year’s Eve before I throw a standing rib roast in the smoker to celebrate the coming of 2026. On December 30, the Department of Education released several new datasets on program-level outcomes in advance of early January’s negotiated rulemaking session on implementing accountability provisions set in place by last July’s budget reconciliation law (OBBB) that effectively served as a reauthorization of the Higher Education Act.

The key focus of the rulemaking session will be to determine whether there should be one or two accountability systems. Currently, gainful employment regulations cover for-profit institutions and certificate programs at other institutions, and they base their calculations on debt-to-earnings metrics. The new system approved by Congress in OBBB, however, excludes undergraduate certificates and bases passing on whether earnings four years after completion are higher than a complicated threshold metric. Because OBBB also caps loans for graduate students, the debt-to-earnings threshold is arguably less important now than in the past, which makes a good argument for a single metric for graduate students. But for undergraduates, debt is still a somewhat useful metric, although I do not know whether two systems would be worth the hassle.

This is a substantial dataset that goes well beyond the minimum needed to implement accountability, and I applaud the skeleton staff at the Department of Education for getting this done. I have pointed out issues with the first two IPEDS data releases of the second Trump administration, but this one seems to have gone quite well. This also means that ED and the Internal Revenue Service have figured out interagency cooperation to update earnings data, meaning that a big College Scorecard update is also likely to come. I am still quite worried about ED’s ability to manage the huge proposed admissions data collection, so stay tuned on the data front.

The most important new program-level data elements are the following:

  • Noncompletion rate: This counts students who received federal financial aid in a given year and then do not show up as graduates or as enrolled federal aid recipients in the next two years. This definition is pretty generous in allowing students to transfer programs or institutions, but it can also miscount students who stay enrolled without receiving federal aid. I’m watching the metric closely, but am not sure about using it yet.
  • Earnings: We finally get a new cohort of earnings data! For years, the program-level Scorecard has focused on students who graduated in 2014-15 or 2015-16, and that now gets refreshed to 2017-18 and 2018-19 graduates. This makes it possible to track changes in earnings across multiple cohorts, which is neat.
  • The number of financial aid recipients and grant/loan disbursements by program: This is brand-new data, and it is available for ten years (Fiscal Years 2016 through 2025).
  • OBBB earnings metric status: This compares the four-year earnings metric to the threshold, which is essentially what the student is estimated to have earned if they had not pursued that credential. Failing that metric in two of three consecutive years subjects the program to the loss of federal loan eligibility (see the sketch after this list).
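Since the two-of-three-consecutive-years rule is easy to misread, here is a minimal sketch of the logic as I understand it, assuming a simple list of annual pass/fail results. This reflects my reading of the statute, not ED's official implementation.

```python
# Minimal sketch of the OBBB two-of-three-consecutive-years test.
# This is my reading of the rule, not ED's official implementation.

def loses_loan_eligibility(passed_by_year: list[bool]) -> bool:
    """True if the program fails the earnings metric in at least
    two of any three consecutive years."""
    for i in range(len(passed_by_year) - 2):
        window = passed_by_year[i:i + 3]
        if window.count(False) >= 2:
            return True
    return False

# A program that fails in two consecutive years loses eligibility...
print(loses_loan_eligibility([True, False, False, True]))   # True
# ...but a single failure does not.
print(loses_loan_eligibility([True, False, True, True]))    # False
```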

I am focusing on two key questions in the rest of this blog post, and I put together a dataset for download that contains the 91,989 programs with at least some data (just under half of all programs, as defined at the 4-digit CIP level).

Question 1: Which programs would fail the earnings threshold metric?

Overall, 2,964 of the 49,860 programs (5.9%) with sufficient data on program-level earnings are estimated to be below the earnings threshold. But there is a lot of variation by institution type, credential level, and field of study.

| Sector | Pass | Fail | Failure rate |
| --- | --- | --- | --- |
| For-profit | 2,345 | 1,268 | 35.1% |
| Nonprofit | 14,446 | 492 | 3.3% |
| Public | 30,105 | 1,204 | 3.8% |

This is a bit of an eye-popping number: programs at for-profit colleges are about ten times more likely to fail than programs in other sectors. But let’s dig deeper. Credential level matters a lot, with quite a few undergraduate certificates (which are not covered by the OBBB metric) failing. The failure rates by sector and credential level are not quite as jarring for the for-profit sector.

| Credential level | Public | Private nonprofit | For-profit |
| --- | --- | --- | --- |
| Undergrad certificate | 13.3% | 28.5% | 55.8% |
| Associate | 5.9% | 8.2% | 12.0% |
| Bachelor | 1.0% | 1.4% | 3.8% |
| Post-bacc certificate | N/A | N/A | N/A |
| Master’s | 3.1% | 6.2% | 12.0% |
| Graduate certificate | 3.8% | 3.9% | 5.3% |
| First professional | 0.0% | 3.0% | 31.3% |
| Doctoral | 0.2% | 2.2% | 0.0% |

I then looked by some of the most common fields of study (by 2-digit CIP code). Here is how the fields with at least 1,000 programs fared:

| Field | Failure rate |
| --- | --- |
| Biology | 1.1% |
| Business | 1.8% |
| Communications | 2.5% |
| Computer science | 1.2% |
| Education | 2.9% |
| Engineering | 0.0% |
| Health | 8.3% |
| Liberal arts | 4.2% |
| Personal/culinary services | 78.5% |
| Psychology | 2.3% |
| Public administration | 1.9% |
| Security | 1.2% |
| Social sciences | 1.3% |
| Visual/performing arts | 17.7% |

In general, most fields do pretty well (and engineering had exactly zero programs fail). But personal/culinary services, a field with a lot of undergraduate certificates (and tip income that often goes unreported to the IRS), and visual and performing arts perform much worse.
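For readers who want to reproduce breakdowns like the tables above from the downloadable dataset, here is a minimal pandas sketch. The file and column names are hypothetical stand-ins, not the actual field names in the posted file.

```python
# Hypothetical sketch of the sector/field breakdowns above; the file name
# and the columns ("sector", "earnings_metric_status") are stand-ins.
import pandas as pd

df = pd.read_excel("program_outcomes.xlsx")  # hypothetical file name

rates = (
    df.dropna(subset=["earnings_metric_status"])          # sufficient data only
      .assign(fail=lambda d: d["earnings_metric_status"].eq("fail"))
      .groupby("sector")["fail"]
      .agg(programs="size", failure_rate="mean")
)
print(rates)  # one row per sector: program count and share failing
```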

Question 2: Which programs are facing challenges with reduced graduate student loan limits?

Effective July 1, 2026 (with the exception of some programs that are granted a brief reprieve), only a short list of so-called “professional” programs can access up to $50,000 per year in federal student loans and $200,000 over the entire length of the program. All other “graduate” programs are limited to $20,500 per year and $100,000 for the entire program. Based on my coding of the relevant CIP codes and the available data, I see 1,120 programs with available data that are likely professional and 17,297 that are likely graduate.

The bad news for higher education is that quite a few programs have average debt (among borrowers) that is above these annual limits. Thirty percent of professional programs and 26% of graduate programs are over their caps, and in some cases well over the caps. For example, 461 graduate programs have more than $50,000 per year in annual borrowing and 20 professional programs (mostly in dentistry) have more than $100,000 per year in annual borrowing.
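As a rough illustration of the cap comparison, here is a sketch that flags programs whose average annual borrowing exceeds the new limits. The data frame and column names are hypothetical; only the dollar caps come from the law.

```python
# Hypothetical sketch: flag programs whose average annual borrowing
# (among borrowers) exceeds the new OBBB annual loan caps.
import pandas as pd

ANNUAL_CAP = {"professional": 50_000, "graduate": 20_500}

programs = pd.DataFrame({  # stand-in for the real program-level file
    "program_type": ["graduate", "professional", "graduate"],
    "avg_annual_debt": [22_000, 48_000, 18_000],
})
programs["over_cap"] = (
    programs["avg_annual_debt"] > programs["program_type"].map(ANNUAL_CAP)
)
print(programs)  # the first program is over its $20,500 cap
```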

There are also differences by field of study in the number of programs over their loan limits. Of the most popular graduate programs (at least 450 observations), nearly half of health and biology programs averaged over the $20,500 annual limit among borrowers. Education, as usual, had the lowest rate of overages. Among professional programs, half of all health-related and veterinary medicine programs were over $50,000 per year in average debt. One-fourth of law schools exceeded the new limit, while only nine of 211 psychology programs and zero theology programs were in excess of $50,000.

| Field | Share over the limit |
| --- | --- |
| Biology | 46.8% |
| Business | 18.8% |
| Computer science | 21.7% |
| Education | 8.7% |
| Health | 46.9% |
| Multidisciplinary studies | 30.7% |
| Psychology | 22.1% |
| Public administration | 21.8% |
| Visual/performing arts | 37.8% |

There is a lot more in this dataset, and there is always the possibility of additional data releases next week as negotiators ask for additional information. But for now, Happy New Year!

Examining the Debt and Earnings of “Professional” Programs

Negotiated rulemaking, in which the federal government convenes representatives of affected parties before implementing major policy changes, is one of the wonkier topics in higher education. (I cannot recommend enough Rebecca Natow’s book on the topic.) Negotiated rulemaking has been in the news quite a bit lately as the Department of Education works to implement changes to federal student loan borrowing limits passed in this summer’s budget reconciliation law.

Since 2006, students attending graduate and professional programs have been able to borrow up to the cost of attendance. But the reconciliation law limited graduate programs to $100,000 and professional programs to $200,000, setting off negotiations on which programs counted as “professional” (and thus received higher loan limits). The Department of Education started with ten programs and the list eventually went to eleven with the addition of clinical psychology.

In this short post, I take a look at the debt and earnings of these programs that meet ED’s definition of “professional,” along with a few other programs that could be considered professional but were not.

Data and Methods

I used program-level College Scorecard data, focusing on debt data from 2019 and five-year earnings data from 2020. (These are the most recent data points available, as the Scorecard has not been meaningfully updated during the second Trump administration. Five-year earnings get students in health fields beyond medical residencies.) I pulled all doctoral/first professional fields from the data by four-digit Classification of Instructional Programs codes, as well as master’s degrees in theology, to meet the listed criteria.
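For the curious, a sketch of that pull is below. The file name, the credential-level codes, and the CIP handling are from memory and should be checked against the Scorecard data dictionary before relying on them.

```python
# Sketch of the pull described above. The file name, CREDLEV codes
# (5 = master's, 6 = doctoral, 7 = first professional), and the 4-digit
# CIPCODE format are assumptions; verify against the College Scorecard
# data dictionary.
import pandas as pd

df = pd.read_csv("Most-Recent-Cohorts-Field-of-Study.csv", low_memory=False)

# All doctoral/first-professional programs.
doctoral_prof = df[df["CREDLEV"].isin([6, 7])]

# Master's degrees in theology (CIP 39 covers theology/religious vocations).
theology_ma = df[
    (df["CREDLEV"] == 5)
    & df["CIPCODE"].astype(str).str.zfill(4).str.startswith("39")
]

sample = pd.concat([doctoral_prof, theology_ma])
```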

Nine of the eleven programs had enough graduates with debt and earnings to report data; osteopathic medicine and podiatry did not. There were five other fields of study with at least 14 programs reporting data: education, educational administration, rehabilitation, nursing, and business administration. All of these clearly prepare people for employment in a profession, but are not currently recognized as “professional.”

Key takeaways

Below is a summary table of debt and earnings for professional programs, including the number of programs above the $100,000 (graduate) and $200,000 (professional) thresholds. Dentistry, pharmacy, and medicine have a sizable share of programs above the $100,000 threshold, while law (the largest field) has only four of 195 programs over $200,000. Theology is the only one of the nine “professional” programs with sufficient data that has higher five-year earnings than debt, suggesting that students in other programs may have a hard time accessing the private market to fill the gap between $200,000 and the full cost of attendance.

On the other hand, four of the five programs not included as “professional” have higher earnings than debt, with nursing and educational administration being the only programs with sufficient data that had debt levels below 60% of earnings. More than one-third of rehabilitation programs had debt over the new $100,000 cap, while few programs in other fields had that high of a debt level. (Education looks pretty good now, doesn’t it?)

I expect the debate over what counts as “professional” to end up in courts and to possibly make its way into a future budget reconciliation bill (about the only way Congress passes legislation at this point). Until then, I will be hoping for newer and more granular data about affected programs.

More Research on Heightened Cash Monitoring

As the academic summer quickly wraps up (nine-month faculty contracts at Tennessee begin on August 1), I am working to wrap up some research projects while preparing for new ones. One of the projects that is near completion (thanks to Arnold Ventures for their support of this work) examines the prevalence and implications of the federal government’s heightened cash monitoring (HCM) policy in higher education.

In the spring, I shared my first paper on the topic, which examined whether HCM placement was associated with changes in institutional finances or student outcomes. We generally found null results, which matches much of the broader literature on higher education accountability policies that are not directly tied to the loss of federal financial aid. In this post, I am sharing two new papers.

The first paper descriptively examines trends in HCM status over time, the interaction with other federal accountability policies, and whether colleges placed on HCM tend to close. There are two levels of HCM: HCM1 requires additional oversight, while the more severe HCM2 requires colleges to pay out money to students before being reimbursed by Federal Student Aid. As shown below, there was a spike in usage of HCM2 status around 2015, which was also the first year that HCM1 data were made publicly available by the Department of Education.

Colleges end up on HCM1 and HCM2 for very different reasons. The less severe HCM1 is dominated by colleges with low financial responsibility scores, while more serious accreditation and administrative capacity concerns are the key drivers of HCM2 placement. Additionally, colleges on HCM2 tend to close at higher rates than colleges on HCM1.

The second paper builds on the other research from this project to examine whether student enrollment patterns are affected by signals of institutional distress. The motivation for this work is that in an era of heightened concerns about the stability of colleges, students may seek to enroll elsewhere if a college they are attending (or considering attending) displays warning signs. On the other hand, colleges may redouble their recruitment efforts to try to dig themselves out of the financial hole.

We examined these questions using two different accountability thresholds. The first was to compare colleges on HCM2 to colleges with a failing financial responsibility score, as HCM2 is a much smaller list of colleges and comes with restrictions on institutional operations. The second was to compare colleges that just failed the financial responsibility metric to colleges that were in an oversight zone that allowed them to avoid being placed on HCM1 if they posted a letter of credit with the Department of Education. As the below figure shows, there is not a huge jump in the number of colleges that barely avoided failing (the left line)—and that allows for the use of a regression discontinuity design.
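For readers unfamiliar with the approach, here is a stylized sketch of the zone-versus-fail comparison as a local linear regression discontinuity. The cutoff (a composite financial responsibility score of 1.0, below which a college fails) matches the federal rules; the file, variable names, and outcome are hypothetical stand-ins for our analysis file.

```python
# Stylized regression discontinuity sketch for the zone-versus-fail
# comparison. The 1.0 cutoff matches the federal financial responsibility
# rules; everything else here is a hypothetical stand-in.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hcm_panel.csv")            # hypothetical analysis file
df["score_c"] = df["composite_score"] - 1.0  # center running variable at cutoff
df["fail"] = (df["score_c"] < 0).astype(int)

bw = 0.5                                     # illustrative bandwidth
local = df[df["score_c"].abs() <= bw]

# Local linear model with separate slopes on each side of the cutoff;
# the coefficient on "fail" is the estimated jump at the threshold.
model = smf.ols("enrollment_growth ~ fail * score_c", data=local).fit(
    cov_type="cluster", cov_kwds={"groups": local["unitid"]}
)
print(model.params["fail"])
```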

After several different analyses, the key takeaway is that students did not respond to bad news about their college’s situation by changing their enrollment patterns. If anything, enrollment may have increased slightly in some cases following placement on HCM2 or receiving a failing financial responsibility score (such as in the below figure). This finding would be consistent with students never hearing about this news or simply not having other feasible options of where to attend. I really wonder if this changes in the future as more attention is being paid to the struggles of small private colleges in particular.

I would love your feedback on these papers, as well as potential journals to explore. Thanks for reading!

New Research on Heightened Cash Monitoring

I have spent most of the last year digging into the topic of heightened cash monitoring (HCM), perhaps the federal government’s most important tool in its higher education accountability toolbox at this time. HCM places colleges’ federal financial aid disbursements under additional scrutiny in order to protect taxpayer dollars. There are two levels of scrutiny: HCM1 requires additional oversight, while the more severe HCM2 requires colleges to pay out money to students before being reimbursed by Federal Student Aid.

This seems like an obscure topic, but it affects a substantial portion of American higher education. In 2023, 493 colleges were on HCM1 and 78 colleges were on HCM2—together representing about 10% of all colleges receiving federal financial aid. And in the mid-2010s, more than 1,000 colleges were on HCM1 or HCM2 at one time.[1]

Thanks to the generous support of Arnold Ventures, my graduate research assistant Holly Evans and I dove into whether colleges responded to placement on the more severe HCM2 status by changing their financial priorities or closing, and whether placement affected student debt and graduation outcomes. We compared colleges placed on HCM2 to colleges that were not on HCM2 but had failed the federal financial responsibility metric (and thus also had issues identified by the federal government). Using three analytic approaches, we generally found no relationships between HCM2 status and these outcomes. It was a lot of work for no clear findings, but that is pretty typical when studying institutional responses to government policies.

Here is a copy of our working paper, which I am posting here in the hope of receiving feedback. I am particularly interested in thoughts about the analytic strategy, interpreting results, and potential journals to send this paper to. Stay tuned for more work from this project!


[1] HCM1 data were first made public in 2015 following news coverage from Inside Higher Ed, while retroactive HCM2 data were also released in 2015 with the unveiling of the modern College Scorecard.

Sharing a Dataset of Program-Level Debt and Earnings Outcomes

Within a couple of hours of posting my comments on the Department of Education’s proposal to create a list of programs with low financial value, I received multiple inquiries about whether there was a user-friendly dataset of current debt-to-earnings ratios for programs. Since I work with College Scorecard data on a regular basis and have used the data to write about debt-to-earnings ratios, it only took a few minutes to put something together that I hope will be useful.

To create a debt-to-earnings ratio that covered as many programs as possible, I pulled median student debt accumulated at that institution for the cohorts of students who left college in the 2016-17 or 2017-18 academic years and matched it with earnings for those same cohorts one calendar year later (calendar year 2018 or 2019). The College Scorecard has some earnings data more than one year out at this point, but a much smaller share of programs are covered. I then calculated a debt-to-earnings ratio. And for display purposes, I also pulled median parent debt from that institution.
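Here is a sketch of that construction, in case you want to rebuild or extend it. The two Scorecard column names are from memory and should be verified against the data dictionary, and the Scorecard’s “PrivacySuppressed” cells need to be coerced to missing before dividing.

```python
# Sketch of the debt-to-earnings construction described above. The two
# Scorecard column names are from memory; verify against the data
# dictionary before using.
import pandas as pd

df = pd.read_csv("Most-Recent-Cohorts-Field-of-Study.csv", low_memory=False)

cols = ["DEBT_ALL_STGP_EVAL_MDN", "EARN_MDN_HI_1YR"]  # debt, 1-year earnings
for c in cols:
    # "PrivacySuppressed" cells become NaN so the ratio uses real values only.
    df[c] = pd.to_numeric(df[c], errors="coerce")

out = df.dropna(subset=cols).copy()
out["debt_to_earnings"] = out[cols[0]] / out[cols[1]]
out.to_excel("program_debt_earnings.xlsx", index=False)  # needs openpyxl
```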

The resulting dataset covers 45,971 programs at 5,033 institutions with data on both student debt and earnings for those same cohorts. You can download the dataset here in Excel format and use filter/sort functions to your heart’s content.

Comments on a Proposed Federal List of Low-Value Programs

The U.S. Department of Education recently announced that it will be creating a list of low-value postsecondary programs, and it requested input from the public on how to do so. The Department asked seven key questions, and I put together 3,000-plus words of comments to submit in response. Here, I list the questions and briefly summarize my key points.

Question 1: What program-level data and metrics would be most helpful to students to understand the financial (and other) consequences of attending a program?

Four data elements would be helpful. The first is program-level completion rates, especially for graduate or certificate programs where students are directly admitted into programs. Second, given differential tuition and different credit requirements across programs, time to completion and sticker/net prices by program would be incredibly valuable. The last two are debt and earnings, which are largely present in the current College Scorecard.

Question 2: What program-level data and metrics would be most helpful to understand whether public investments in the program are worthwhile? What data might be collected uniformly across all students who attend a program that would help assess the nonfinancial value created by the program?

I would love to see information on federal income taxes paid by former students and use of public benefits (if possible). More information on income-driven repayment use would also be helpful. Finally, there is a great need to rethink definitions of “public service,” as it currently depends on the employer instead of the job function. That is a concern in fields like nursing that send graduates to do good things in for-profit and nonprofit settings.

Question 3: In addition to the measures or metrics used to determine whether a program is placed on the low-financial-value program list, what other measures and metrics should be disclosed to improve the information provided by the list?

Nothing too fancy here. Just list any sanctions/warnings from the federal government, state agencies, or accreditors along with general outcomes for all students at the undergraduate level to account for major switching.

Question 4: The Department intends to use the 6-digit Classification of Instructional Program (CIP) code and the type of credential awarded to define programs at an institution. Should the Department publish information using the 4-digit CIP codes or some other type of aggregation in cases where we would not otherwise be able to report program data?

This is my nerdy honey hole, as I have spent a lot of time thinking about these issues. The two biggest issues with student debt/earnings data right now are that some campuses get aggregated together in reporting and that it is impossible to separate outcomes for fully online versus hybrid/in-person programs. Those nuts need to be cracked first; then aggregate up if cell sizes are too small.

Question 5: Should the Department produce only a single low-financial-value program list, separate lists by credential level, or use some other breakdown, such as one for graduate and another for undergraduate programs?

Separate out by credential level and ideally have a good search function by program of study. Otherwise, some low-paying programs will clog up the lists and not let students see relatively lousy programs in higher-paying areas.

Question 6: What additional data could the Department collect that would substantially improve our ability to provide accurate data for the public to help understand the value being created by the program? Please comment on the value of the new metrics relative to the burden institutions would face in reporting information to the Department.

I would love to see program-level completion rates (where appropriate) and better pricing information at the program level. Those items aren’t free to implement, so I would gladly explore other cuts to IPEDS (such as the academic libraries survey) to help reduce additional burden.

Question 7: What are the best ways to make sure that institutions and students are aware of this information?

Colleges will be aware of this information without the federal government doing much, and they may respond to information that they didn’t have before. But colleges don’t have a great record of responding to public shaming if they already knew that affordability was a concern, so I’m not expecting massive changes.

The College Scorecard produced only small changes in student behavior around the margins, primarily driven by more advantaged students. I’m not an expert in reaching out to prospective students, but I know that outreach to as many groups as possible is key.

Why I’m Skeptical of Cost of Attendance Figures

In the midst of a fairly busy week for higher education (hello, Biden’s student loan forgiveness and income-driven repayment plans!), the National Center for Education Statistics began adding a new year of data into the Integrated Postsecondary Education Data System. I have long been interested in cost of attendance figures, as colleges often face intense pressure to keep these numbers low. A higher cost of attendance means a higher net price, which makes colleges look bad even if this number is driven by student living allowances that colleges do not receive. For my scholarly work on this, see this Journal of Higher Education article—and I also recommend this new Urban Institute piece on the topic.

After finishing up a bunch of interviews on student loan debt, I finally had a chance to dig into cost of attendance data from IPEDS for the 2020-21 and 2021-22 academic years. I focused on the reported cost of attendance for students living off campus at 1,568 public and 1,303 private nonprofit institutions (academic year reporters) with data in both years. This time period is notable for two things: more modest increases in tuition and sharply higher living costs due to the pandemic and the resulting changes to college attendance and society at large.

And the data bear this out on listed tuition prices. The average tuition increase was just 1.67%, with similar increases across public and private nonprofit colleges. A total of 116 colleges had lower listed tuition prices in fall 2021 than in fall 2020, while about two-thirds of public and one-third of private nonprofit colleges did not increase tuition for fall 2021. This resulted in tuition increases well below the rate of inflation, which is generally good news for students but bad news for colleges.

The cost of attendance numbers, as shown below, look a little different. Nearly three times as many institutions (322) reported a lower cost of attendance than reported lower tuition, which is surprising given rising living costs. More colleges also reported increasing the cost of attendance relative to increasing tuition, with fewer colleges reporting no changes.

Changes in tuition and cost of attendance, fall 2020 to fall 2021.

| Change | Public (n=1,568) | Private (n=1,303) |
| --- | --- | --- |
| Tuition: Decrease | 64 | 52 |
| Tuition: No change | 955 | 439 |
| Tuition: Increase | 549 | 812 |
| Cost of attendance: Decrease | 188 | 134 |
| Cost of attendance: No change | 296 | 172 |
| Cost of attendance: Increase | 1,084 | 997 |
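For those curious how a table like this might be tabulated from the raw IPEDS files, here is a quick sketch; the extract and column names are hypothetical.

```python
# Hypothetical sketch of tabulating tuition and cost of attendance
# changes from an IPEDS extract; the column names are stand-ins.
import numpy as np
import pandas as pd

ipeds = pd.read_csv("ipeds_coa.csv")  # hypothetical extract, one row per college

def change_category(new: pd.Series, old: pd.Series) -> pd.Series:
    """Classify each institution's year-over-year change."""
    labels = np.select([new < old, new == old], ["Decrease", "No change"],
                       default="Increase")
    return pd.Series(labels, index=new.index)

ipeds["tuition_change"] = change_category(ipeds["tuition_2021"], ipeds["tuition_2020"])
ipeds["coa_change"] = change_category(ipeds["coa_2021"], ipeds["coa_2020"])

# Counts by sector, mirroring the table above.
print(pd.crosstab(ipeds["tuition_change"], ipeds["control"]))
print(pd.crosstab(ipeds["coa_change"], ipeds["control"]))
```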

Some of the reductions in cost of attendance are sizable without a corresponding cut in tuition. For example, California State University-Monterey Bay reduced its listed cost of attendance from $31,312 to $26,430 while tuition increased from $7,143 to $7,218. [As Rich Hershman pointed out on Twitter, this is potentially due to California updating its cost of attendance survey instead of increasing it by inflation every year.]

Texas Wesleyan University increased tuition from $33,408 to $34,412, while the cost of attendance fell from $52,536 to $49,340. These decreases could be due to a more accurate estimate of living expenses, moving to open educational resources instead of textbooks, or reducing student fees. But the magnitude of these decreases during an inflationary period leads me to continue questioning the accuracy of cost of attendance values or the associated net prices.

As a quick note, this week marks the ten-year anniversary of my blog. Thanks for joining me through 368 posts! I don’t have the time to do as many posts as I used to, but it is sure nice to have an outlet for some occasional thoughts and data pieces.

What Happened to Colleges at Risk of Closing?

The issue of college closures has gotten a lot of attention in recent years, as evidenced by this recent Chronicle of Higher Education piece summarizing the growing field of researchers and organizations trying to identify colleges at risk of closure. I am one of those people, as I am working on a paper on this topic that I hope to release in the spring.

In doing my research for the paper, I stumbled upon a piece that I wrote for the Chronicle back in 2015 and completely forgot about. (The joys of writing short pieces and blog posts…sometimes I forget what I wrote about several years ago!) In that piece, titled “Where 3 Accountability Measures Meet, A Hazardous Intersection,” I used a brand-new data source from the U.S. Department of Education combined with two other existing measures to identify private nonprofit and for-profit colleges that may be at high risk of closing. The metrics were the following:

(1) Whether the college was on Heightened Cash Monitoring in the first-ever public data release in April 2015.

(2) Whether the college scored in the oversight zone or failed the financial responsibility test at least once between 2010-11 and 2012-13.

(3) Whether the college had a three-year cohort default rate above 30% (subjecting the institution to extra oversight) at least once between the 2009 and 2011 cohorts.

While 1,150 private colleges tripped at least one of the three metrics in my 2015 analysis, just 26 colleges (six private nonprofit and 20 for-profit) tripped all three. Now with five years’ worth of hindsight, it’s a good time to look at how many of these colleges are still open. I counted a college as open if it did not appear in Federal Student Aid’s database of closed schools (which is updated weekly) and a Google search turned up evidence that it was still operating.

The results of my investigation are the following:

| College name (* indicates nonprofit) | Status as of Feb. 2020 |
| --- | --- |
| Academy of Healing Arts, Massage & Facial Skin Care | Possibly open, but website redirected |
| Allen University* | Open |
| American Broadcasting School | Closed June 2018 |
| American Indian College of the Assemblies of God* | Merged with another college in 2016 |
| Antonelli College | Open |
| eClips School of Cosmetology and Barbering | Open |
| Everest College | Closed July 2016 |
| Fayetteville Beauty College | Closed March 2019 |
| Florida School of Traditional Midwifery* | Open |
| Hairmasters Institute of Cosmetology | Open |
| Helms Career Institute* | Closed December 2015 |
| Hiwassee College* | Closed May 2019 |
| Institute of Therapeutic Massage | Open |
| Los Angeles ORT Technical Institute* | Open |
| Mai-trix Beauty College | Possibly open, but website redirected |
| National Institute of Massotherapy | Closed June 2017 |
| Northwest Career College | Open |
| Oklahoma School of Photography | Closed June 2017 |
| Omega Studios’ School of Applied Recording Arts & Sciences | Open |
| Professional Massage Training Center | Closed July 2015 |
| South Texas Vocational Technical Institute | Open |
| Star Career Academy | Closed November 2016 |
| Stylemasters College of Hair Design | Open |
| Taylor Business Institute | Open |
| Technical Career Institutes | Closed September 2017 |
| Texas Vocational School | Open |

To summarize, 13 of the 26 colleges that triggered all three accountability metrics in 2015 were clearly open as of February 2020, with two other colleges potentially being open but having no clear Internet presence to support their existence. One college merged with another institution, while the other ten closed between July 2015 and March 2019. Of the ten colleges that closed, two closed in 2015, two closed in 2016, three closed in 2017, one closed in 2018, and two closed in 2019.

At the suggestion of Kevin McClure of UNC-Wilmington, I added an indicator for whether a college was private nonprofit (*) after I initially posted this piece. Of the six private nonprofit colleges, three remained open, two closed, and one merged. So the closure rate was about the same across both sectors.

This quick retrospective shows mixed implications for federal accountability policies. Fewer than half of the colleges that the federal government identified as posing the highest risk to students and taxpayers clearly closed within five years, but that closure rate (especially among for-profit colleges) does suggest some predictive power for federal accountability metrics. On the other hand, half or more of the colleges remained open despite the odds. This highlights the resilient (stubborn?) nature of some small private colleges that are determined to persist and improve their performance.

Again, stay tuned later this spring for a more thorough analysis of factors associated with college closures!

New Working Paper on the Effects of Gainful Employment Regulations

As debates regarding Higher Education Act reauthorization continue in Washington, one of the key sticking points between Democrats and Republicans is the issue of accountability for the for-profit sector of higher education. Democrats typically want to have tighter for-profit accountability measures, while Republicans either want to loosen regulations or at the very least hold all colleges to the same standards where appropriate.

The case of federal gainful employment (GE) regulations is a great example of partisan differences regarding for-profit accountability. The Department of Education spent much of its time during the Obama administration trying to implement regulations that would have stripped away aid from programs (mainly at for-profit colleges) that could not pass debt-to-earnings ratios. They finally released the first year of data in January 2017—in the final weeks of the Obama administration. The Trump administration then set about undoing the regulations and finally did so earlier this year. (For those who like reading the Federal Register, here is a link to all of the relevant documents.)
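For context, the 2014 GE rule classified each program using two debt-to-earnings ratios, and the pass/zone/fail logic is easy to express directly. The thresholds below follow my reading of the published regulations:

```python
# The 2014 gainful employment classification: pass if annual
# debt-to-earnings <= 8% or discretionary debt-to-earnings <= 20%;
# fail if annual > 12% and discretionary > 30%; zone otherwise.
def ge_category(annual_dte: float, discretionary_dte: float) -> str:
    if annual_dte <= 0.08 or discretionary_dte <= 0.20:
        return "pass"
    if annual_dte > 0.12 and discretionary_dte > 0.30:
        return "fail"
    return "zone"

print(ge_category(0.07, 0.25))  # pass (annual ratio under 8%)
print(ge_category(0.10, 0.25))  # zone
print(ge_category(0.13, 0.35))  # fail
```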

There has been quite a bit of talk in the higher ed policy world that GE led colleges to close poor-performing programs, and Harvard closing its poor-performing graduate certificate program in theater right after the data dropped received a lot of attention. But to this point, there has been no rigorous empirical research examining whether the GE regulations changed colleges’ behaviors.

Until now. Together with my sharp PhD student Zhuoyao Liu, I set out to examine whether the owners of for-profit colleges closed lousy programs or colleges after receiving information about their performance.

You can download our working paper, which we are presenting at the Association for the Study of Higher Education conference this week, here.

For-profit colleges can respond more quickly to new information than nonprofit colleges due to a more streamlined governance process and a lack of annoying tenured faculty, and they are also more motivated to make changes if they expect to lose money going forward. It is worth noting that no college should have expected to lose federal funding due to poor GE performance since the Trump administration was on its way in when the dataset was released.

Data collection for this project took a while. For 4,998 undergraduate programs at 1,462 for-profit colleges, we collected information on whether the college was still open using the U.S. Department of Education’s closed school database. Determining whether individual programs were still open took a lot more work: we went to college websites and the Facebook pages of mom-and-pop operations, and we used the Wayback Machine to establish whether a program appeared to still be open as of February 2019.

After doing that, we used a regression discontinuity research design to look at whether passing GE outright (relative to not passing) or being in the oversight zone (versus failing) affected the likelihood of college or program closures. While the results for the zone versus fail analyses were not consistently significant across all of our bandwidth and control variable specifications, there were some interesting findings for the passing versus not passing comparisons. Notably, programs that passed GE were much less likely to close than those that did not pass. This suggests that for-profit colleges, possibly encouraged by accrediting agencies and/or state authorizing agencies, closed lower-performing programs and focused their resources on their best-performing programs.

We are putting this paper out as a working paper as a first form of peer review before undergoing the formal peer review process at a scholarly journal. We welcome all of your comments and hope that you find this paper useful—especially as the Department of Education gets ready to release program-level earnings data in the near future.

Twenty-Two Thoughts on House Democrats’ Higher Education Act Reauthorization Bill

House Democrats released the framework for the College Affordability Act today, which is their effort for a comprehensive reauthorization of the long-overdue Higher Education Act. This follows the release of Senator Lamar Alexander’s (R-TN) more targeted version last month. As I like to do when time allows, I live-tweeted my way through the 16-page summary document. Below are my 22 thoughts on certain parts of the bill (annotating some of my initial tweets with references) and what the bill means going forward.

(1) Gainful employment would go back in effect for the same programs covered in the Obama-era effort. (Did that policy induce programs to close? Stay tuned for a new paper on that…I’m getting back to work on it right after putting up this blog post!)

(2) In addition to lifting the student unit record ban, the bill would require data to be disaggregated based on the American Community Survey definitions of race (hopefully with a crosswalk for a couple of years).

(3) Federal Student Aid’s office would have updated performance goals, but there is no mention of a much-needed replacement of the National Student Loan Data System (decidedly unsexy and not cheap, though).

(4) Regarding the federal-state partnership, states would have access to funds to “support the adoption and expansion of evidence-based reforms and practices.” I would love to see a definition of “evidence”—is it What Works Clearinghouse standards or something less?

(5) The antiquated SEOG allocation formula would be phased out and replaced with a new formula based on unmet need and percent low-income. Without new money, this may work as well as the 1980 effort (which flopped). Here is my research on the topic.

(6) Same story for federal work-study. Grad students would still be allowed to participate, which doesn’t seem like the best use of money to me.

(7) Students would start repaying loans at 250% of the federal poverty line, up from 150%. Automatically recertifying income makes a lot of sense.

(8) There are relatively small changes to Public Service Loan Forgiveness, mainly regarding old FFEL loans and consolidation (they would benefit quite a few people). But people still have to wait ten years and hope for the best.

(9) I’m in a Halloween mood after seeing the awesome Pumpkin Blaze festival in the Hudson Valley last night. So, on that note, Zombie Perkins returns!

The Statue of Liberty, made entirely out of pumpkins. Let HEA reauthorization ring???

(10) ED would take a key role in cost of attendance calculations, with a requirement that they create at least one method for colleges to use. Here is my research on the topic, along with a recent blog post showing colleges with very low and very high living allowances.

(11) And if that doesn’t annoy colleges, a requirement about developing particular substance abuse safety programs will. Campus safety and civil rights requirements may also irk some colleges, but will be GOP nonstarters.

(12) The bill places a larger role on accreditors and state authorizers for accountability while not really providing any support. Expect colleges to sue accreditors/states…and involve their members of Congress.

(13) Improving the cohort default rate metric is long-overdue, and a tiered approach could be promising. (More details needed.)

(14) There would be a new on-time loan repayment metric, defined as the share of borrowers who made 33 of 36 payments on time. $0 payments and educational deferments count as payments, and ED would set the threshold with waivers possible.

(15) This is an interesting metric, and I would love to see it alongside the Scorecard repayment rate broken down by IDR and non-IDR students. But if the bill improves IDR, expect the on-time rate to (hopefully!) be high.

(16) It would be great to see new IPEDS data on marketing, recruitment, advertising, and lobbying expenses. Definitions matter a lot here, and the Secretary gets to create them. These are the types of metrics that the field showed interest in when the IPEDS folks asked Tammy Kolbe and me to do a landscape analysis of higher ed finance metrics.

(17) Most of higher ed wants financial responsibility scores to be updated (see my research on this), and this would set up a negotiated rulemaking panel to work on it.

(18) There is also language about “rebalancing” who participates in neg reg. The legislative text will be fun to parse.

(19) Teach for America will be reauthorized, but it’s in a list of programs with potential changes. Democrats will watch that closely.

(20) And pour one out for the programs that were authorized in the last Higher Education Act back in 2008, but never funded. This bill wants to get rid of some of them.

(21) So what’s next? Expect this to get a committee vote fairly quickly, but other events might swamp it (pun intended) in the House. I doubt the Senate will take it up as Alexander has his preferred bill.

(22) Then why do this? It’s a good messaging tool that can keep higher ed in the spotlight. Both parties are positioning for 2021, and this bill (which is moderate by Dem primary standards) is a good starting place for Democrats.

Thanks for reading!