Why the Next Secretary of Education Should Come from Higher Ed

Elizabeth Warren is one of several Democratic presidential candidates who are highlighting education as a key policy issue in their campaigns. A few weeks after announcing an ambitious proposal to forgive nearly half of all outstanding student debt and strip for-profit colleges of access to federal financial aid (among other provisions), she returned to the topic in advance of a town hall event with the American Federation of Teachers in Philadelphia. In a tweet, Warren promised that her Secretary of Education would be a public school teacher.

This would be far from unprecedented: both Rod Paige (under George W. Bush) and John King (under Barack Obama) were public school teachers. But if Warren or any other Democrat wants to influence American education to the greatest extent possible, the candidate should appoint someone from higher education instead of K-12 education. (The same also applies to Donald Trump, who apparently will need a new Secretary of Education if he wins a second term.) Below, I discuss a few reasons why the next leader of the U.S. Department of Education (ED) should come from higher ed.

First, the Every Student Succeeds Act, signed into law in 2015, shifted a significant amount of power from ED to the states. As a result, the federal government’s influence over K-12 education now runs largely through the appropriations process, which is controlled by Congress. Putting a teacher in charge of ED may result in better K-12 policy, but the change is likely to be small given the department’s reduced discretion.

Meanwhile, on the higher education side of the ranch, I still see a comprehensive Higher Education Act reauthorization as unlikely before 2021—even though Lamar Alexander is promising a bill soon. I could see a narrowly targeted bill on FAFSA simplification getting through Congress, but HEA reauthorization is going to be tough in three main areas: for-profit college accountability, income-driven student loan repayment plans, and social issues (Title IX, campus safety, and free speech). Warren’s proposal last month probably makes HEA reauthorization even tougher, as it will pull many Senate Democrats farther to the left.

This means that ED will continue to have a great deal of power to make policy through the negotiated rulemaking process under the current HEA. Both the Obama and Trump administrations used neg reg to shape policies without going through Congress, and a Democratic president is likely to rely on ED to undo Trump-era policies. Meanwhile, a second-term Trump administration will still have a number of loose ends to tie up, given the difficulty of getting the sheer number of proposed regulatory changes through the process by November 1 of this year (the deadline for new rules to take effect by July 1, 2020, ahead of the presidential election).

I fully realize that promising a public school teacher as Secretary of Education is a great political statement to win over teachers’ unions—a key ally for Democrats. But in terms of changing educational policies, candidates should be looking toward higher education veterans who can help them reshape a landscape in which there is more room to maneuver.

Which Colleges Failed the Latest Financial Responsibility Test?

Every year, the U.S. Department of Education is required to issue financial responsibility scores for private nonprofit and for-profit colleges, which serve as a crude measure of an institution’s financial health. Colleges are scored on a scale from -1.0 to 3.0, with colleges scoring 0.9 or below failing the test (and having to put up a letter of credit) and colleges scoring between 1.0 and 1.4 being placed in a zone of additional oversight.
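For readers who want to apply these cutoffs to ED’s posted scores themselves, here is a minimal Python sketch. The function name and the example value are mine; the thresholds simply restate the rule described above.

```python
def classify_score(score: float) -> str:
    """Translate a composite financial responsibility score into ED's categories.

    Scores run from -1.0 to 3.0: 0.9 or below fails the test (and requires a
    letter of credit), 1.0 to 1.4 lands in the zone of additional oversight,
    and 1.5 or above passes.
    """
    if score <= 0.9:
        return "Fail"
    elif score <= 1.4:
        return "Zone"
    return "Pass"


# Example: a college scoring 1.2 falls in the oversight zone.
print(classify_score(1.2))  # Zone
```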

Ever since I first learned of the existence of this metric five or six years ago, I have been bizarrely fascinated by its mechanics and how colleges respond to the score as an accountability pressure. I have previously written about how these scores are only loosely correlated with college closures in the past and also wrote an article about how colleges do not appear to change their fiscal priorities as a result of receiving a low score.

ED typically releases financial responsibility scores with no fanfare, and it looks like the agency updated its website with new scores in late March without anyone noticing (at least based on a Google search of the term “financial responsibility score”). I was adding a link to the financial responsibility score to a paper I am writing and noticed that the newest data—for fiscal years ending between July 1, 2016 and June 30, 2017—were out. So here is a brief summary of the data.

Of the 3,590 colleges (at the OPEID level) that were subject to the financial responsibility test in 2016-17, 269 failed, 162 were in the oversight zone, and 3,159 passed. Failure rates were higher in the for-profit sector than in the nonprofit sector, as the table below indicates.

Financial responsibility scores by institutional type, 2016-17.

                     Nonprofit   For-profit    Total
Fail (-1.0 to 0.9)          82          187      269
Zone (1.0 to 1.4)           58          104      162
Pass (1.5 to 3.0)        1,559        1,600    3,159
Total                    1,699        1,891    3,590

Among the 91 institutions with the absolute lowest score of -1.0, 85 were for-profit, and many of them were part of larger chains. Education Management Corporation (17), Education Affiliates, Inc. (19), and Nemo Investor Aggregator (11) were responsible for more than half of the -1.0 scores. Most of the Education Affiliates (Fortis) and Nemo (Cortiva) campuses still appear to be open, but Education Management Corporation (Argosy, Art Institutes) recently suffered a spectacular collapse.

I am increasingly skeptical of financial responsibility scores as a useful measure of financial health because they are so backward-looking. The data are already three years old, which is an eternity for a college on the brink of collapse (but perhaps not awful for a cash-strapped nonprofit college with a strong will to live). I joined Kenny Megan from the Bipartisan Policy Center to write an op-ed for Roll Call on a better way to move forward with collecting more current financial health measures, and I would love your thoughts on new ways to proceed!

Three New Articles on Performance-Based Funding Policies

As an academic, few things make me happier than reading cutting-edge research conducted by talented scholars. So I was thrilled to see three new articles on a topic near and dear to my heart—performance-based funding (PBF) in higher education—come out in top-tier journals. In this post, I briefly summarize the three articles and look at where the body of research is heading.

Nathan Favero (American University) and Amanda Rutherford (Indiana University). “Will the Tide Lift all Boats? Examining the Equity Effects of Performance Funding Policies in U.S. Higher Education.” Research in Higher Education.

In this article, the authors look at state PBF policies (divided into earlier 1.0 policies and later 2.0 policies) to examine whether PBF affects four-year colleges within a state differently. They found evidence that the wave of 2.0 policies may negatively affect less-selective and less-resourced public universities, while 1.0 policies affected colleges in relatively similar ways. In a useful Twitter thread (another reason why all policy-relevant researchers should be on Twitter!), Nathan discusses the implications for equity.

Lori Prince Hagood (University System of Georgia). “The Financial Benefits and Burdens of Performance Funding in Higher Education.” Educational Evaluation and Policy Analysis.

Lori’s article digs into the extent to which PBF policies affect per-student state appropriations at four-year colleges, defining PBF as whether a state had any funded policy in a given year. The first item worth noting from the paper is that per-student funding in PBF states has traditionally been lower than in non-PBF states. This may change going forward as states with more generous funding (such as California) are now adopting PBF policies. Lori’s main finding is that selective and research universities tend to see increased state funding following the implementation of PBF, while less-selective institutions see decreased funding, raising concerns about equity.
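To make this kind of estimate concrete, here is a generic two-way fixed effects sketch of a state-year panel regression of per-student appropriations on a binary PBF indicator. To be clear, this is not Lori’s actual specification; the file and variable names (state_panel.csv, approps_per_fte, pbf_funded) are invented for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical state-year panel: one row per state per year, with per-student
# appropriations and a 0/1 indicator for whether a funded PBF policy was in place.
panel = pd.read_csv("state_panel.csv")

# Generic two-way fixed effects (difference-in-differences style) regression:
# state fixed effects absorb time-invariant differences across states, and
# year fixed effects absorb national funding trends.
model = smf.ols("approps_per_fte ~ pbf_funded + C(state) + C(year)", data=panel)
result = model.fit(cov_type="cluster", cov_kwds={"groups": panel["state"]})

# Estimated change in per-student appropriations associated with PBF adoption.
print(result.params["pbf_funded"])
```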

As an aside, I had the pleasure of discussing an earlier version of this paper at the 2017 Association for the Study of Higher Education conference (although I had forgotten about that until Lori sent me a nice note when the article came out). I wrote in my comments at that time: “I think it has potential to go to a good journal with a modest amount of additional work.” I’m not often right, but I’m glad I was in this case!

Denisa Gándara (Southern Methodist University). “Does Evidence Matter? An Analysis of Evidence Use in Performance-Funding Policy Design.” The Review of Higher Education.

Denisa’s article is a wonderful read alongside the other two because it does not use difference-in-differences techniques to look at quantitative effects of PBF. Instead, she digs into how the legislative sausage of a PBF policy is actually made by studying the policy processes in Colorado (which adopted PBF across two-year and four-year colleges) and Texas (which never adopted PBF in the four-year sector). Her interviews reveal that PBF models in other states and national advocacy groups such as Complete College America and HCM Strategists were far more influential than lowly academic researchers.

In a Twitter thread about her new article, Denisa highlighted the following statement:

As a fellow researcher who also talks with policymakers on a regular basis, I have quite a few thoughts on this statement. Policymakers (including in blue states) are increasingly hesitant to give colleges more money without tying a portion of those funds to student outcomes, and other ways of funding colleges also raise equity concerns. So expect PBF to expand in the next several years.

Does this mean that academic research on PBF is irrelevant? I don’t think so. Advocacy organizations are at least partially influenced by academic research; for example, see how the research on equity metrics in PBF policies has shaped their work. It is the job of researchers to keep raising critical questions about the design of PBF policies, and it is also our job to conduct more nuanced analyses that dive into the details of how policies are constructed. That is why my new project with Kelly Rosinger of Penn State and Justin Ortagus of the University of Florida to collect these details over time excites me so much—it is what the field needs to keep building upon great studies such as the ones highlighted here.

A Possible For-Profit Accountability Compromise?

In the midst of an absolutely bonkers week in the world of higher education (highlighted by an FBI investigation into an elite college admissions scandal, although the sudden closure of Argosy University deserves far more attention than rich families doing stupid things), the U.S. House Appropriations Committee held a hearing on for-profit colleges. Not surprisingly, the hearing quickly developed familiar fault lines: Democrats pushed for tighter oversight of “predatory” colleges, while Republicans repeatedly called for applying the same regulations to both for-profit and nonprofit colleges.

One of the key sticking points in Higher Education Act (HEA) reauthorization is likely to be the so-called “90/10” rule, which requires for-profit colleges to get at least 10% of their revenue from sources other than federal financial aid in order to remain eligible for that aid (veterans’ benefits currently do not count as federal aid in the calculation). Democrats want to return the rule to 85/15 (as it was in the past) and count veterans’ benefits in the federal funds portion of the calculation, which would trip up many for-profit colleges. (Because public colleges get state funds and many private colleges have at least modest endowments, this rule is generally not an issue for them.) Republicans have proposed getting rid of 90/10 in their vision for HEA reauthorization.

I have a fair amount of skepticism about the effectiveness of the 90/10 rule in the for-profit sector, particularly because it pushes colleges to set tuition above federal aid limits (so that some revenue comes from other sources) even though for-profit colleges tend to serve students with relatively little ability to pay for their own education. But I also worry about colleges with poor student outcomes sucking up large amounts of federal funds with relatively few strings attached. So, while watching the panelists at the House hearing talk about the 90/10 rule, the framework of an idea on for-profit accountability (which may admittedly be crazy) came to mind.

I am tossing out the idea of tying the percentage of revenue that colleges can receive from federal funds (including veterans’ benefits as federal funds) to the institution’s performance on a number of metrics. For the sake of simplicity, let’s assume the three outcomes are graduation rates, earnings after attending college, and student loan repayment rates—although other measures are certainly possible. Then I will break each of these outcomes into thirds based on the predominant type of credential awarded (certificate, associate degree, or bachelor’s degree), restricting the sample to broad-access colleges to reflect the realities of the for-profit sector.

A college that performed in the top third on all three measures would qualify for the maximum share of revenue from federal funds—let’s say 100%. A college in the top third on two measures and in the middle third on the other one could get 95%, and the percentage would drop by five percentage points (or some other set amount) as the college’s performance dropped. Colleges in the bottom third on all three measures would only get 60% of their revenue from federal funds.
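As a rough sketch of how the sliding scale could work, here is one way to code it. The point values, the five-point step, and the 60% floor are parameters I chose to match the examples above; note that with a flat five-point step, a college in the bottom third on everything would land at 70% rather than 60%, so the bottom of the schedule would need a steeper drop (or a different set amount) to match that example.

```python
# Tercile placement on each of the three outcomes: "top", "middle", or "bottom".
TERCILE_POINTS = {"top": 2, "middle": 1, "bottom": 0}


def max_federal_share(terciles, step=5.0, floor=60.0):
    """Maximum share of revenue (in percent) a college could draw from federal funds.

    The 100% ceiling and 60% floor come from the examples in the post; the flat
    per-notch step is one possible way to fill in the values in between.
    """
    notches_below_top = sum(2 - TERCILE_POINTS[t] for t in terciles)  # 0 (best) to 6 (worst)
    return max(floor, 100.0 - step * notches_below_top)


print(max_federal_share(["top", "top", "top"]))           # 100.0
print(max_federal_share(["top", "top", "middle"]))        # 95.0
print(max_federal_share(["bottom", "bottom", "bottom"]))  # 70.0 with a flat step; the post's example would set this at 60
```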

This type of system would effectively remove the limit on federal funds for high-performing for-profit colleges, while severely tightening it for low performers. Could this idea gain bipartisan support (after a fair amount of model testing)? Possibly. Is it worth at least thinking through? I would love your thoughts on that.

New Data on Pell Grant Recipients’ Graduation Rates

Graduation rates of students who begin college with Pell Grants are a key marker of colleges’ commitment to socioeconomic diversity, yet it has only recently been possible to see these rates at the institutional level. I wrote a piece for Brookings in late 2017 based on the first data release from the U.S. Department of Education and later posted a spreadsheet of graduation rates at the request of readers—highlighting public interest in the metric.

ED released the second year of data late last year, and Melissa Korn of The Wall Street Journal (one of the best education writers in the business) reached out to me to see if I had those data handy for a piece she wanted to write on Pell graduation rate gaps. Since I do my best to keep up with new data releases from the Integrated Postsecondary Education Data System, I was able to send her a file and share my thoughts on the meaning of the data. This turned into a great piece on completion gaps at selective colleges.

Since I have already gotten requests to share the underlying data in the WSJ piece, I am happy to post the spreadsheet again on my site.

Download the spreadsheet here!

A few cautions:

(1) There are likely a few colleges that screwed up data reporting to ED. For example, gaps of 50 percentage points at larger colleges are likely errors that nobody at the college caught.

(2) Beware the rates for small colleges (with fewer than 50 students in a cohort).

(3) This graduation rate measure covers first-time, full-time students who complete a bachelor’s degree at the same institution within six years. It excludes part-time and transfer students, so overall completion numbers will be higher.

(4) As my last post highlighted, there are some legitimate concerns with using percent Pell as an accountability measure. However, it’s the best measure that is currently available.

Some Thoughts on Using Pell Enrollment for Accountability

It is relatively rare for an academic paper to both dominate the headlines in the education media and be covered by mainstream outlets, but a new paper by economists Caroline Hoxby and Sarah Turner did exactly that. The paper, benignly titled “Measuring Opportunity in U.S. Higher Education” (technical and accessible versions) raised two major concerns with using the number or percentage of students receiving federal Pell Grants for accountability purposes:

(1) Because states have different income distributions, it is far easier for universities in some states to enroll a higher share of Pell recipients than others. For example, Wisconsin has a much lower share of lower-income adults than does California, which could help explain why California universities have a higher percentage of students receiving Pell Grants than do Wisconsin universities.

(2) At least a small number of selective colleges appear to be gaming the Pell eligibility threshold by enrolling far more students who barely qualify for Pell Grants than students who have significant financial need but just miss qualifying. Here is the awesome graph that Catherine Rampell made in her Washington Post article summarizing the paper:

[Figure: graph by Catherine Rampell in The Washington Post summarizing the Hoxby and Turner findings]

As someone who writes about accountability and social mobility while also pulling together Washington Monthly’s college rankings (all opinions here are my own, of course), I have a few thoughts inspired by the paper. Here goes!

(1) Most colleges likely aren’t gaming the number of Pell recipients in the way that some elite colleges appear to be doing. As this Twitter thread chock-full of information from great researchers discusses, there is no evidence nationally that colleges are manipulating enrollment right around the Pell eligibility cutoff. Since most colleges are broad-access and/or are trying to simply meet their enrollment targets, it follows that they are less concerned with maximizing their Pell enrollment share (which is likely high already).

(2) How are elite colleges manipulating Pell enrollment? This could be happening in one or more of three possible ways. First, if these colleges are known for generous aid to Pell recipients, more students just on the edge of Pell eligibility may choose to apply. Second, colleges could be explicitly recruiting students from areas likely to have larger shares of Pell recipients who are near the eligibility threshold. Finally, colleges could make admissions and/or financial aid decisions based on Pell eligibility. It would be ideal to see data on each step of the process to better figure out what is going on.

(3) What other metrics can currently be used to measure social mobility in addition to Pell enrollment? Three other metrics currently jump out as possibilities. The first is enrollment by family income bracket (such as below $30,000 or $30,001-$48,000), which is collected for first-time, full-time, in-state students in IPEDS. It suffers from the same manipulation issues around the cutoffs, though. The second is first-generation status, which the College Scorecard collects for FAFSA filers. The third is race/ethnicity, which tends to be correlated with the previous two measures but is likely a political nonstarter in a number of states (while being a requirement in others).

(4) How can percent Pell still be used? The first finding of Hoxby and Turner’s work is far more important than the second finding for nationwide analyses (within states, it may be worth looking at regional differences in income, too). The Washington Monthly rankings use both the percentage of Pell recipients and an actual-versus-predicted Pell enrollment measure (controlling for ACT/SAT scores and the percentage of students admitted); a rough sketch of that kind of measure appears after this list. I plan to play around with ways to take a state’s income distribution into account to see how this changes the predicted Pell enrollments and will report back on my findings in a future blog post.

(5) How can social mobility be measured better? States can dive much deeper into social mobility than the federal government can thanks to their detailed student-level datasets. This allows sliding scales of social mobility to be created, or measures such as median household income to be used instead of just percent Pell. It would be great to have a measure of the percentage of students with zero expected family contribution (the neediest students) at the national level, and this would be pretty easy to add to IPEDS as a new measure.
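As a rough illustration of the actual-versus-predicted approach mentioned in item (4), here is a sketch that regresses each institution’s Pell share on selectivity measures and a state income control, then treats the residual as over- or under-performance. This is not the Washington Monthly code; the file and column names (colleges.csv, pct_pell, median_sat, admit_rate, state_low_income_share) are invented, and the state income control is the kind of adjustment I plan to experiment with rather than something already in the rankings.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical institution-level file with Pell share, test scores, admit rate,
# and the share of the institution's state population that is lower-income.
colleges = pd.read_csv("colleges.csv")

# Predict Pell enrollment from selectivity, plus a state income-distribution control.
model = smf.ols("pct_pell ~ median_sat + admit_rate + state_low_income_share",
                data=colleges).fit()

# Residuals: positive values mean a college enrolls more Pell students than
# predicted given its selectivity and state context.
colleges["pell_vs_predicted"] = model.resid
print(colleges.sort_values("pell_vs_predicted", ascending=False).head())
```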

I would like to close this post by thanking Hoxby and Turner for provoking important conversations on data, social mobility, and accountability. I look forward to seeing their next paper in this area!

Announcing a New Data Collection Project on State Performance-Based Funding Policies

Performance-based funding (PBF) policies in higher education, in which states fund colleges in part based on student outcomes instead of enrollment measures or historical tradition, have spread rapidly across states in recent years. This push for greater accountability has resulted in more than half of all states currently using PBF to fund at least some colleges, with deep-blue California joining a diverse group of states by developing a PBF policy for its community colleges.

Academic researchers have flocked to the topic of PBF over the last decade and have produced dozens of studies looking at the effects of PBF both on a national level and for individual states. In general, this research has found modest effects of PBF, with some differences across states, sectors, and how long the policies have been in place. There have also been concerns about the potential unintended consequences of PBF on access for low-income and minority students, although new policies that provide bonuses to colleges that graduate historically underrepresented students seem to be promising in mitigating these issues.

In spite of the intense research and policy interest in PBF, relatively little is known about what is actually in these policies. States vary considerably in how much money is tied to student outcomes, which outcomes (such as retention and degree completion) are incentivized, and whether there are bonuses for serving low-income, minority, first-generation, rural, adult, or veteran students. Some states also give bonuses for STEM graduates, which is even more important to understand given this week’s landmark paper by Kevin Stange and colleagues documenting differences in the cost of providing an education across disciplines.

Most research has relied on binary indicators of whether a state has a PBF policy or an incentive to encourage equity, with some studies trying to get at the importance of the strength of PBF policies by looking at individual states. But researchers and advocacy organizations cannot even agree on whether certain states had PBF policies in certain years, and no research has tried to fully catalog the different strengths of policies (“dosage”) across states over time.

Because collecting high-quality data on the nuances of PBF policies is a time-consuming endeavor, I was just about ready to walk away from studying PBF given my available resources. But last fall at the Association for the Study of Higher Education conference, two wonderful colleagues approached me with an idea to go out and collect the data. After a year of working with Justin Ortagus of the University of Florida and Kelly Rosinger of Pennsylvania State University—two tremendous assistant professors of higher education—we are pleased to announce that we have received a $204,528 grant from the William T. Grant Foundation to build a 20-year dataset containing detailed information about the characteristics of PBF policies and how much money is at stake.

Our dataset, which will eventually be made available to the public, will help us answer a range of policy-relevant questions about PBF. Some particularly important questions are whether dosage matters regarding student outcomes, whether different types of equity provisions are effective in reducing educational inequality, and whether colleges respond to PBF policies differently based on what share of their funding comes from the state. We are still seeking funding to do these analyses over the next several years, so we would love to talk with interested foundations about the next phases of our work.

To close, one thing that I tell often-skeptical audiences of institutional leaders and fellow faculty members is that PBF policies are not going away anytime soon and that many state policymakers will not give additional funding to higher education without at least a portion being directly tied to student outcomes. These policies are also rapidly changing, in part driven by some of the research over the last decade that was not as positive toward many early PBF systems. This dataset will allow us to examine which types of PBF systems can improve outcomes across all students, thus helping states improve their current PBF systems.

Some Good News on Student Loan Repayment Rates

The U.S. Department of Education released updates to its massive College Scorecard dataset earlier this week, including new data on student debt burdens and student loan repayment rates. In this blog post, I look at trends in repayment rates (defined as the share of borrowers who have repaid at least $1 in principal) at one, three, five, and seven years after entering repayment. I present data for colleges with unique six-digit Federal Student Aid OPEID numbers (to eliminate duplicate results), weighting the final estimates to reflect the total number of borrowers entering repayment.[1]
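For anyone who wants to approximate the weighting step, here is a pandas sketch. The file name is a placeholder, and the column names (OPEID6, RPY_1YR_RT, RPY_1YR_N) are my best reading of the Scorecard data dictionary, so check them against the actual release before running anything.

```python
import pandas as pd

# Placeholder file name for the most recent institution-level Scorecard file.
scorecard = pd.read_csv("scorecard_most_recent.csv", low_memory=False)

# Keep one record per six-digit OPEID to avoid double-counting branch campuses.
scorecard = scorecard.drop_duplicates(subset="OPEID6")

# Repayment rate and the number of borrowers in that repayment cohort.
rates = pd.to_numeric(scorecard["RPY_1YR_RT"], errors="coerce")
borrowers = pd.to_numeric(scorecard["RPY_1YR_N"], errors="coerce")
valid = rates.notna() & borrowers.notna()

# Weight each college's rate by the number of borrowers entering repayment.
weighted_rate = (rates[valid] * borrowers[valid]).sum() / borrowers[valid].sum()
print(f"Borrower-weighted 1-year repayment rate: {weighted_rate:.1%}")
```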

The table below shows the trends in the 1-year, 3-year, 5-year, and 7-year repayment rates for each cohort of students with available data.

Repayment cohort   1-year rate (%)   3-year rate (%)   5-year rate (%)   7-year rate (%)
2006-07                 63.2              65.1              66.7              68.4
2007-08                 55.7              57.4              59.5              62.2
2008-09                 49.7              51.7              55.3              59.5
2009-10                 45.7              48.2              52.6              57.4
2010-11                 41.4              45.4              51.3              N/A
2011-12                 39.8              44.4              50.6              N/A
2012-13                 39.0              45.0              N/A               N/A
2013-14                 40.0              46.1              N/A               N/A

One piece of good news is that 1-year and 3-year repayment rates ticked up slightly for the most recent cohort of students who entered repayment in 2013 or 2014. The 1-year repayment rate of 40.0% is the highest rate since the 2010-11 cohort and the 3-year rate of 46.1% is the highest since the 2009-10 cohort. Another piece of good news is that the gain between the 5-year and 7-year repayment rates for the most recent cohort with data (2009-10) is the largest among the four cohorts with data.

Across all sectors of higher education, repayment rates increased as students got farther into the repayment period. The charts below show differences by sector for the cohort entering repayment in 2009 or 2010 (the most recent cohort to be tracked over seven years), and it is worth noting that students at for-profit colleges see somewhat smaller increases in repayment rates than students in other sectors.

But even somewhat better repayment rates still indicate significant issues with student loan repayment. Only half of borrowers have repaid any principal within five years of entering repayment, which is a concern for students and taxpayers alike. Data from a Freedom of Information Act request by Ben Miller of the Center for American Progress highlight that student loan default rates continue to increase beyond the three-year accountability window currently used by the federal government, and other students are muddling through deferment and forbearance while outstanding debt continues to increase.

Other students are relying on income-driven repayment and Public Service Loan Forgiveness to remain current on their payments. This presents a long-term risk to taxpayers, as at least a portion of balances will be written off over the next several decades. It would be helpful for the Department of Education to add data to the College Scorecard on the percentage of each college’s students enrolled in income-driven repayment plans so it is possible to separate students who may not be repaying principal due to income-driven plans from those who are placing their credit at risk by falling behind on payments.

[1] Some of the numbers for prior cohorts slightly differ from what I presented last year due to a change in how I merged datasets (starting with the most recent year of the Scorecard instead of the oldest year, as the latter method excluded some colleges that merged). However, this did not affect the general trends presented in last year’s post. Thanks to Andrea Fuller at the Wall Street Journal for helping me catch that bug.

How to Provide Context for College Scorecard Data

The U.S. Department of Education’s revamped College Scorecard website celebrated its third anniversary last month with another update to the underlying dataset. It is good to see this important consumer information tool continue to be updated, given the role that Scorecard data can play in market-based accountability (a key goal of many conservatives). But the Scorecard’s change log—a great resource for those using the dataset—revealed a few changes to the public-facing site. (Thanks to the indefatigable Clare McCann at New America for pointing this out in a blog post.)

[Screenshot: the College Scorecard change log entry described below]

So to put the above screenshot into plain English, the Scorecard used to have indicators for how a college’s performance on outcomes such as net price, graduation rate, and post-college salary compared to the median institution—and now it doesn’t. In many ways, the Department of Education’s decision to stop comparing colleges with different levels of selectivity and institutional resources to each other makes all the sense in the world. But it would be helpful to provide website users with a general idea of how the college performs relative to more similar institutions (without requiring users to enter a list of comparison colleges).

For example, here is what the Scorecard data now look like for Cal State—Sacramento (the closest college to me as I write this post). The university sure looks affordable, but the context is missing.

[Screenshot: College Scorecard results for Cal State Sacramento]

It would sure be helpful if ED already had a mechanism to generate a halfway reasonable set of comparison institutions to help put federal higher education data into context. Hold on just a second…

[Screenshot: the IPEDS comparison institution tool]

It turns out that there is already an option within the Integrated Postsecondary Education Data System (IPEDS) to generate a list of peer institutions. ED creates a list of institutions similar to the focal college based on factors such as sector and level, Carnegie classification, enrollment, and geographic region. For Sacramento State, here is part of the list of 32 comparison institutions that is generated. People can certainly quibble with some of the institutions chosen, but they clearly do have some similarities.

[Screenshot: part of the IPEDS-generated list of 32 comparison institutions for Sacramento State]

I then graphed the net prices of these 32 institutions to help put Sacramento State (in black below) into context. Sacramento State had the fifth-lowest net price among this set of universities, information that is at least somewhat more helpful than looking at a national average across all sectors and levels.

[Chart: net prices of Sacramento State and its 32 comparison institutions, with Sacramento State highlighted in black]
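Here is a small sketch of that comparison, assuming a hand-built CSV of the IPEDS-generated peer list plus Sacramento State with a net price column; the file and column names (peer_net_prices.csv, institution, net_price) are mine, not IPEDS variable names.

```python
import pandas as pd

# Hypothetical file: the 32 IPEDS comparison institutions plus Sacramento State,
# each with its average net price.
peers = pd.read_csv("peer_net_prices.csv")

# Rank institutions from lowest to highest net price.
peers = peers.sort_values("net_price").reset_index(drop=True)
rank = peers.index[peers["institution"] == "California State University-Sacramento"][0] + 1
print(f"Sacramento State has the #{rank} lowest net price out of {len(peers)} institutions.")
```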

My takeaway here: the folks behind the College Scorecard should talk with the IPEDS people to consider bringing back a comparison group average based on a methodology that is already used within the Department of Education.

Comments on the Proposed Gainful Employment Regulations

The U.S. Department of Education is currently accepting public comments (through September 13) on their proposal to rescind the Obama administration’s gainful employment regulations, which had the goal of tying federal financial aid eligibility to whether graduates of certain vocationally-focused programs had an acceptable debt-to-earnings ratio. My comments are reprinted below.

September 4, 2018

Annmarie Weisman

U.S. Department of Education

400 Maryland Avenue SW, Room 6W245

Washington, DC 20202

Re: Comments on the proposed rescinding of the gainful employment regulations

Dear Annmarie,

My name is Robert Kelchen and I am an assistant professor of higher education at Seton Hall University.[1] As a researcher who studies financial aid, accountability policies, and higher education finance, I have been closely following the Department of Education (ED)’s 2017-18 negotiated rulemaking efforts regarding gainful employment. I write to offer my comments on certain aspects of the proposed rescinding of the regulations.

First, as an academic, I was pleasantly surprised to see ED immediately referring to a research paper in making its justification to change the debt-to-earnings (D/E) threshold. But that quickly turned to dismay: as Sandy Baum clarified in a blog post, ED had incorrectly interpreted what she and Saul Schwartz wrote a decade ago.[2] I am not wedded to any particular threshold regarding D/E ratios, but I would recommend that ED reach out to researchers before using their findings in order to make sure they are being interpreted correctly.

Second, the point that D/E ratios can be affected by the share of independent students, who have higher loan limits than dependent students, is quite valid. But it can potentially be addressed in one of two ways if D/E ratios are reported in the future. One option is to report D/E ratios separately for independent and dependent students, but that runs the risk of creating more small cell sizes by splitting the sample. Another option is to cap the amount of independent student borrowing credited toward D/E ratios at the same level as dependent students (which would also address the possibility that some dependent students have higher limits because Parent PLUS loan applications were rejected). This is less useful from a consumer information perspective, but could solve issues regarding high-stakes accountability.

Third, ED’s point that gainful employment used a ten-year amortization period for certificate programs while the Department also offers 20-year repayment plans under REPAYE is well-taken. Switching to a 20-year period would allow some lower-performing programs to pass the D/E test, but it is reasonable given that ED offers a loan repayment plan of that length. (I also view the idea that programs would actually lose Title IV eligibility under the prior administration’s regulations as highly unlikely, based on experience with very few colleges losing eligibility due to high cohort default rates.) In any case, aligning amortization periods with repayment plan periods makes sense.
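To illustrate why the amortization period matters for D/E ratios, here is a worked example using the standard amortization formula. The debt, earnings, and interest rate are made-up numbers chosen only to show that stretching the same debt over 20 years meaningfully lowers the annual payment, and therefore the ratio.

```python
def annual_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard amortized loan payment, summed over 12 monthly payments."""
    r = annual_rate / 12
    n = years * 12
    monthly = principal * r / (1 - (1 + r) ** -n)
    return monthly * 12


# Made-up example: $15,000 in debt at 6% interest and $28,000 in annual earnings.
debt, rate, earnings = 15_000, 0.06, 28_000

for years in (10, 20):
    payment = annual_payment(debt, rate, years)
    print(f"{years}-year amortization: annual payment ${payment:,.0f}, "
          f"D/E ratio {payment / earnings:.1%}")
```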

Fourth, I am highly skeptical that requiring institutions to disclose various outcomes on their own websites would have much value. Net price calculators, which colleges are required to post under the Higher Education Act, are a prime example. Research has shown that many colleges place these calculators on obscure portions of their websites and that the information is often up to five years out of date.[3] Continuing to publish centralized data on outcomes is far preferable to letting colleges do their own thing, and this highlights the importance of continuing to publish outcomes information without any pauses in the data.

Fifth, while providing median debt and median earnings data allows analysts to continue to calculate a D/E ratio, there is no harm in continuing to provide such a ratio in the future alongside the raw data. There is no institutional burden for doing so, and it is possible that some prospective students may find that ratio to be more useful than simply looking at median debt. At the very least, ED should conduct several focus groups to make sure that D/E ratios lack value before getting rid of them.

Sixth, while it is absolutely correct to note that people working in certain service industries receive a high portion of their overall compensation in tips, I find it dismaying as a taxpayer that there is no interest in creating incentives for individuals to report their income as required by law. A focus on D/E ratios created a possibility for colleges to encourage their students to follow the law and accurately report their incomes in order to increase earnings relative to debt payments. ED should instead work with the IRS and colleges to help protect taxpayers by making sure that everyone pays income taxes as required.

In closing, I do not have a strong preference about whether ED ties Title IV eligibility to program-level D/E thresholds due to my skepticism that any sanctions would actually be enforced.[4] However, I strongly oppose efforts by ED to completely stop publishing program-level student outcomes data until the College Scorecard data are ready (which could be a few years). Continuing to publish data on certificate graduates’ outcomes in the interim is an essential step since all sectors of higher education already have to report certificate outcomes—meaning that keeping these data treats all sectors equally. Publishing outcomes of degree programs would be nice, but it is not as important since only some colleges would be included.

As I showed with my colleagues in the September/October issue of Washington Monthly magazine, certificate students’ outcomes vary tremendously both within and across CIP codes as well as within different types of higher education institutions.[5] Once the College Scorecard data are ready, this dataset can be phased out. But in the meantime, continuing to publish data meets a key policy goal of fostering market-based accountability in higher education.

[1] All opinions reflected in this commentary are solely my own and do not represent the views of my employer or funders.

[2] Baum, S. (2018, August 22). DeVos misinterprets the evidence in seeking gainful employment deregulation. Urban Wire. https://www.urban.org/urban-wire/devos-misrepresents-evidence-seeking-gainful-employment-deregulation.

[3] Anthony, A. M., Page, L. C., & Seldin, A. (2016). In the right ballpark? Assessing the accuracy of net price calculators. Journal of Student Financial Aid, 46(2), 25-50. Cheng, D. (2012). Adding it all up 2012: Are college net price calculators easy to find, use, and compare? Oakland, CA: The Institute for College Access and Success.

[4] For more reasons why I am skeptical that all-or-nothing accountability systems such as the prior administration’s gainful employment regulations would actually be effective, see my book Higher Education Accountability (Johns Hopkins University Press, 2018).

[5] Washington Monthly (2018, September/October). 2018 best colleges for vocational certificates. https://washingtonmonthly.com/2018-vocational-certificate-programs.