New Research on the Relationship between Nonresident Enrollment and In-State College Prices

Public colleges and universities in most states are under increased financial stress as they strain to compete with other institutions while state appropriations fail to keep up with increases in both inflation and student enrollment. As a result, universities have turned to other revenue sources to raise additional funds. One commonly targeted source is out-of-state students, particularly in Northeastern and Midwestern states with declining populations of recent high school graduates. But prior research has found that trying to enroll more out-of-state students can reduce the number of in-state students attending selective public universities, and this crowding-out effect particularly impacts minority and low-income students.

I have long been interested in studying how colleges use their revenue, so I began sketching out a paper looking at whether public universities appeared to use additional revenue from out-of-state students to improve affordability for in-state students. Since I am particularly interested in prices faced by students from lower-income families, I was also concerned that any potential increase in amenities driven by out-of-state students could actually make college less affordable for in-state students.

I started working on this project back in the spring of 2015 and enjoyed two and a half conference rejections (one paper submission was rejected into a poster presentation), two journal rejections, and a grant application rejection during the first two years. But after getting helpful feedback from the journal reviewers (unfortunately, most conference reviewers provide little feedback and most grant applications are rejected with no feedback), I made improvements and finally got the paper accepted for publication.

The resulting article, just published in Teachers College Record (and available for free for a limited time after signing up as a visitor), addresses the following research questions:

(1) Do the listed cost of attendance and components such as tuition and fees and housing expenses for in-state students change when nonresident enrollment increases?

(2) Does the net price of attendance (both overall and by family income bracket) for in-state students change when nonresident enrollment increases?

(3) Do the above relationships differ by institutional selectivity?

After years of working on this paper and multiple iterations, I am pleased to report…null findings. (Seriously, though, I am glad that higher education journals seem to be willing to publish null findings, as long as the estimates are precisely located around zero without huge confidence intervals.) These findings suggest two things about the relationship between nonresident enrollment and prices faced by in-state students. First, it does not look like nonresident tuition revenue is being used to bring down in-state tuition prices. Second, it also does not appear that in-state students are paying more for room and board after more out-of-state students enroll, suggesting that any amenities demanded by wealthier out-of-state students may be modest in nature.

I am always happy to take any questions on the article or to share a copy if there are issues accessing it. I am also happy to chat about the process of getting research published in academic journals, since that is often a long and winding road!

How Financial Responsibility Scores Do Not Affect Institutional Behaviors

One of the federal government’s longstanding accountability efforts in higher education is the financial responsibility score—a metric designed to reflect a private college’s financial stability. The federal government has an interest in making sure that only stable colleges receive federal funds, as taxpayers often end up footing at least part of the bill when colleges shut down and students may struggle to resume their education elsewhere. The financial responsibility score metric ranges from -1.0 to 3.0, with colleges scoring between 1.0 and 1.4 being placed under additional oversight and those scoring below 1.0 being required to post a letter of credit with the Department of Education.
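For readers who think in code, here is a minimal sketch of how those score thresholds map onto oversight tiers. The 1.5 passing cutoff and the tier labels are my simplification of ED's rules rather than an official implementation.

```python
# Minimal sketch of the financial responsibility score tiers described above.
# The 1.5 passing cutoff and the labels are a simplification, not ED's exact rules.
def oversight_tier(score: float) -> str:
    if score >= 1.5:
        return "passing: no additional requirements"
    if score >= 1.0:
        return "zone: additional federal oversight"
    return "failing: letter of credit required"

for score in (2.4, 1.2, 0.3):
    print(f"{score:>4} -> {oversight_tier(score)}")
```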

Although these scores have been released to the public since the 2006-07 academic year and there was a great deal of dissatisfaction among private colleges regarding how the scores were calculated, there had been no prior academic research on the topic before I started my work in the spring of 2014. My question was simple: did receiving a poor financial responsibility score induce colleges to shift their financial priorities (either increasing revenues or decreasing expenditures) in an effort to avoid future sanctions?

But as is often the case in academic research, the road to a published article was far from smooth and direct. Getting rejected by two different journals took nearly two years and then it took another two years for this paper to wind its way through the review, page proof, and publication process at the Journal of Education Finance. (In the meantime, I scratched my itch on the topic and put a stake in the ground by writing a few blog posts highlighting the data and teasing my findings.)

More than four and a half years after starting work on this project, I am thrilled to share that my paper, “Do Financial Responsibility Scores Affect Institutional Behaviors?” is a part of the most recent issue of the Journal of Education Finance. I examined financial responsibility score data from 2006-07 to 2013-14 in this paper, although I tried to get data going farther back since these scores have been calculated since at least 1996. I filed a Freedom of Information Act request back in 2014 for the data, and my appeal was denied in 2017 on the grounds that the request to receive data (that already existed in some format!) was “too burdensome and expensive.” At that point, the paper was already accepted at JEF, but I am obviously still a little annoyed with how that process went.

Anyway, I failed to find any clear evidence that private nonprofit or for-profit colleges changed their fiscal priorities after receiving an unfavorable financial responsibility score. To some extent, this result made sense among private nonprofit colleges; colleges tend to move fairly slowly and many of their costs are sticky (such as facilities and tenured faculty). But for for-profit colleges, which generally tend to be fairly agile critters, the null findings were more surprising. There is certainly more work to do in this area (particularly given the changes in higher education that have occurred over the last five years), so I encourage more researchers to delve into this topic.

To aspiring researchers and those who rely on research in their jobs—I hope this blog post provides some insights into the scholarly publication process and all of the factors that can slow down the production of research. I started this paper during my first year on faculty and it finally came out during my tenure review year (which is okay because accepted papers still count even if they are not yet in print). Many papers move more quickly than this one, but it is worth highlighting that research is a pursuit for people with a fair amount of patience.

What is Public Service Loan Forgiveness? And How Do I Qualify to Get It?

This piece was originally published at The Conversation.

The first group of borrowers who tried to get Public Service Loan Forgiveness – a George W. Bush-era program meant to provide relief to those who went into socially valuable but poorly paid public service jobs, such as teachers and social workers – mostly ran into a brick wall.

Of the 28,000 public servants who applied for Public Service Loan Forgiveness earlier this year, only 96 were approved. Many were denied in large part due to government contractors being less than helpful when it came to telling borrowers about Public Service Loan Forgiveness. Some of these borrowers will end up getting part of their loans forgiven, but will have to make more payments than they expected.

With Democrats having regained control of the U.S. House of Representatives in the November 2018 midterm elections, the Department of Education will likely face greater pressure to provide better information to borrowers, as the Government Accountability Office recently told it to do.

The Public Service Loan Forgiveness program forgives loans for students who made 10 years of loan payments while they worked in public service jobs. Without this loan forgiveness plan, many of these borrowers would have been paying off their student loans for 20 to 25 years.

Borrowers must follow a complex set of rules in order to be eligible for the Public Service Loan Forgiveness program. As a professor who studies federal financial aid policies, I explain these rules below so that up to 1 million borrowers who have expressed interest in the program can have a better shot at receiving forgiveness.

What counts as public service?

In general, working for a government agency – such as teaching in a public school – or for a nonprofit organization that is not partisan in nature counts as public service for the purposes of the program. For some types of jobs, this means that borrowers need to choose their employers carefully. Teaching at a for-profit school, even if the job is similar to teaching at a public school, would not qualify someone for Public Service Loan Forgiveness. Borrowers must also work at least 30 hours per week in order to qualify.

What types of loans and payment plans qualify?

Only Federal Direct Loans automatically qualify for Public Service Loan Forgiveness. Borrowers with other types of federal loans must consolidate their loans into a Direct Consolidation Loan before any payments count toward Public Service Loan Forgiveness. The failure to consolidate is perhaps the most common reason why borrowers who applied for forgiveness have been rejected, although Congress did provide US$350 million to help some borrowers who were in an ineligible repayment plan qualify for Public Service Loan Forgiveness.

In order to receive Public Service Loan Forgiveness, borrowers must also be enrolled in an income-driven repayment plan, which ties payments to a percentage of a borrower’s income. The default repayment option is not income-driven and consists of 10 years of fixed monthly payments, but these fixed payments are much higher than income-driven payments. The bottom line is it’s not enough to just make 10 years of payments. You have to make those payments through an income-driven repayment plan to get Public Service Loan Forgiveness.

Parent PLUS Loans and Direct Consolidation Loans have fewer repayment plan options than Direct Loans made to students, so borrowers must enroll in an approved income-driven repayment plan for that type of loan. Borrowers must make 120 months of payments, which do not need to be consecutive, while enrolled in the correct payment plan to receive forgiveness.
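For readers who like to see the rules in one place, here is a minimal sketch of the eligibility checklist described above. It is a simplification for illustration only; actual Public Service Loan Forgiveness determinations involve far more detail than these five conditions.

```python
# Simplified sketch of the PSLF conditions described above; illustration only.
from dataclasses import dataclass

@dataclass
class Borrower:
    has_direct_loan: bool           # Direct Loan or Direct Consolidation Loan
    public_service_employer: bool   # government agency or qualifying nonprofit
    hours_per_week: float           # must work at least 30 hours per week
    on_income_driven_plan: bool     # payments made under an income-driven plan
    qualifying_payments: int        # 120 payments, not necessarily consecutive

def pslf_eligible(b: Borrower) -> bool:
    return (b.has_direct_loan
            and b.public_service_employer
            and b.hours_per_week >= 30
            and b.on_income_driven_plan
            and b.qualifying_payments >= 120)

print(pslf_eligible(Borrower(True, True, 40, True, 120)))   # True
print(pslf_eligible(Borrower(False, True, 40, True, 120)))  # False: consolidate first
```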

How can borrowers track their progress?

First of all, keep every piece of information possible regarding your student loan. Pay stubs, correspondence with student loan servicers and contact information for prior employers can all help support a borrower’s case for qualifying for Public Service Loan Forgiveness. Unfortunately, borrowers have had a hard time getting accurate information from loan servicers and the Department of Education about how to qualify for Public Service Loan Forgiveness.

The U.S. Government Accountability Office told the Department of Education earlier this year to improve its communication with servicers and borrowers, so this process should – at least in theory – get better going forward.

Borrowers should also fill out the Department of Education’s Employment Certification Form each year, as the Department of Education will respond with information on the number of payments made that will qualify toward Public Service Loan Forgiveness. This form should also be filed with the Department of Education each time a borrower starts a new job to make sure that position also qualifies for loan forgiveness.

Can new borrowers still access Public Service Loan Forgiveness?

Yes. Although congressional Republicans proposed eliminating Public Service Loan Forgiveness for new borrowers, the changes have not been approved by Congress. Current borrowers would not be affected under any of the current policy proposals. However, it would be a good idea for borrowers to fill out an Employment Certification Form as soon as possible just in case Congress changes its mind.

Are there other affordable payment options available?

Yes. The federal government offers a number of income-driven repayment options that limit monthly payments to between 10 and 20 percent of “discretionary income.” The federal government defines “discretionary income” as anything you earn above 150 percent of the poverty line, which translates to an annual income of about $18,000 for a single adult. So if you earn $25,000 a year, your payments would be limited to somewhere between $700 and $1,400 per year, or about $58 to $117 per month.
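The arithmetic behind those numbers is simple enough to sketch. The $18,000 threshold is the approximate 150-percent-of-poverty figure from above; the exact poverty guidelines and plan formulas vary by year and by plan.

```python
# Minimal sketch of the income-driven payment arithmetic described above.
# The $18,000 discretionary income floor is the approximate figure from the text;
# actual poverty guidelines and plan formulas vary by year and plan.
DISCRETIONARY_FLOOR = 18_000  # ~150% of the poverty line for a single adult

def annual_payment(income: float, share: float) -> float:
    """Payment as a share (0.10 to 0.20) of income above the discretionary floor."""
    return share * max(0.0, income - DISCRETIONARY_FLOOR)

income = 25_000
low, high = annual_payment(income, 0.10), annual_payment(income, 0.20)
print(f"Annual: ${low:,.0f} to ${high:,.0f}")        # Annual: $700 to $1,400
print(f"Monthly: ${low/12:,.0f} to ${high/12:,.0f}")  # Monthly: $58 to $117
```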

These plans are not as generous as Public Service Loan Forgiveness because payments must be made for between 20 and 25 years – instead of 10 years under Public Service Loan Forgiveness. Also, any forgiven balance under income-driven repayment options is subject to income taxes, whereas balances forgiven through Public Service Loan Forgiveness are not taxed.

Some Good News on Student Loan Repayment Rates

The U.S. Department of Education released updates to its massive College Scorecard dataset earlier this week, including new data on student debt burdens and student loan repayment rates. In this blog post, I look at trends in repayment rates (defined as whether a student repaid at least $1 in principal) at one, three, five, and seven years after entering repayment. I present data for colleges with unique six-digit Federal Student Aid OPEID numbers (to eliminate duplicate results), weighting the final estimates to reflect the total number of borrowers entering repayment.[1]
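For anyone who wants to reproduce the general flavor of this calculation, here is a minimal pandas sketch. The file name and the column names (OPEID6, RPY_1YR_RT, RPY_1YR_N) reflect my reading of the Scorecard data dictionary and should be checked against the current documentation; my actual analysis also merged across multiple years of files, which this sketch does not show.

```python
# Sketch: borrower-weighted 1-year repayment rate from a College Scorecard file.
# Column and file names are assumptions based on the public data dictionary.
import pandas as pd

df = pd.read_csv(
    "MERGED2013_14_PP.csv",
    usecols=["OPEID6", "RPY_1YR_RT", "RPY_1YR_N"],
    na_values=["PrivacySuppressed", "NULL"],
    low_memory=False,
)

# Keep one record per six-digit OPEID to avoid double-counting branch campuses.
df = df.dropna().drop_duplicates(subset="OPEID6")

# Weight each college's rate by the number of borrowers entering repayment.
weighted = (df["RPY_1YR_RT"] * df["RPY_1YR_N"]).sum() / df["RPY_1YR_N"].sum()
print(f"Borrower-weighted 1-year repayment rate: {weighted:.1%}")
```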

The table below shows the trends in the 1-year, 3-year, 5-year, and 7-year repayment rates for each cohort of students with available data.

Repayment cohort   1-year rate (%)   3-year rate (%)   5-year rate (%)   7-year rate (%)
2006-07            63.2              65.1              66.7              68.4
2007-08            55.7              57.4              59.5              62.2
2008-09            49.7              51.7              55.3              59.5
2009-10            45.7              48.2              52.6              57.4
2010-11            41.4              45.4              51.3              N/A
2011-12            39.8              44.4              50.6              N/A
2012-13            39.0              45.0              N/A               N/A
2013-14            40.0              46.1              N/A               N/A

One piece of good news is that 1-year and 3-year repayment rates ticked up slightly for the most recent cohort of students who entered repayment in 2013 or 2014. The 1-year repayment rate of 40.0% is the highest rate since the 2010-11 cohort and the 3-year rate of 46.1% is the highest since the 2009-10 cohort. Another piece of good news is that the gain between the 5-year and 7-year repayment rates for the most recent cohort with data (2009-10) is the largest among the four cohorts with data.

Across all sectors of higher education, repayment rates increased as students got farther into the repayment period. The charts below show differences by sector for the cohort entering repayment in 2009 or 2010 (the most recent cohort to be tracked over seven years), and it is worth noting that students at for-profit colleges see somewhat smaller increases in repayment rates than students in other sectors.

But even somewhat better repayment rates still indicate significant issues with student loan repayment. Only half of borrowers have repaid any principal within five years of entering repayment, which is a concern for students and taxpayers alike. Data from a Freedom of Information Act request by Ben Miller of the Center for American Progress highlight that student loan default rates continue to increase beyond the three-year accountability window currently used by the federal government, and other students are muddling through deferment and forbearance while outstanding debt continues to increase.

Other students are relying on income-driven repayment and Public Service Loan Forgiveness to remain current on their payments. This presents a long-term risk to taxpayers, as at least a portion of these balances will be written off over the next several decades. It would be helpful for the Department of Education to add data to the College Scorecard on the percentage of students at each college enrolled in income-driven repayment plans so it is possible to separate students who may not be repaying principal due to income-driven plans from those who are placing their credit at risk by falling behind on payments.

[1] Some of the numbers for prior cohorts slightly differ from what I presented last year due to a change in how I merged datasets (starting with the most recent year of the Scorecard instead of the oldest year, as the latter method excluded some colleges that merged). However, this did not affect the general trends presented in last year’s post. Thanks to Andrea Fuller at the Wall Street Journal for helping me catch that bug.

Which Strings Will States Attach to Free College Programs?

There is plenty of uncertainty about exactly how the upcoming midterm elections (enough nasty campaign ads already, everyone!) will shake out at the state and federal levels. Regardless of the outcomes, the idea of tuition-free college will continue to be discussed across both conservative and liberal states. But one thing is becoming clear: states are exploring a range of restrictions as they begin to adopt programs. In this post, I discuss some of the restrictions in today’s programs (see this Education Trust report for a more thorough treatment from an advocacy perspective) and some of the restrictions that I would not be surprised to see going forward.

Currently, there are four types of restrictions that exist across many current and proposed programs. The first one is the type of institution that students can attend. Most tuition-free college programs cover community colleges only due to the higher price tag of covering four-year colleges. (New York’s Excelsior program skirts this somewhat by not covering fees, which are substantial in the state.)

The second restriction is based on family income, since the last-dollar nature of tuition-free college programs means that programs become much more expensive up the income distribution. New Jersey’s new program, which covers tuition and fees at 13 of the state’s 19 community colleges, set an income cutoff of $45,000 per year to stretch limited state funds. The state set the income cap that low in part so that two other common restrictions (the age of the student and enrollment intensity) would not apply. Other states, however, limit their programs to full-time students straight out of high school (and this is also common for standard grant aid programs).

Two other restrictions have popped up in a small number of states, and I would not be surprised to see them expand to other states that are considering tuition-free college programs. The programs in New York and Rhode Island require students to stay in state after college for a number of years or the grant converts into a loan (the dreaded “groan” in financial aid lingo). A few other states, such as Kentucky, have discussed limiting tuition-free programs to certain high-demand majors to better meet state workforce needs. This is similar to how some states provide additional money in their performance-based funding systems for each STEM major who graduates.

The intersection of the power of the phrase “free college” and concerns about the state’s return on investment is likely to result in even more restrictions appearing in states’ new programs. West Virginia saw a proposed program pass the state Senate (but die in the House) in 2018 that would have included both a residency requirement and a drug test requirement—something that does not apply to other types of financial aid the state gives. Students would have had to pay for the drug test, which would have kept down the price tag.

While I was on a panel on free college at the Brookings Institution earlier this fall, one idea came to my mind during the discussion. I said that I would not be surprised to see legislators propose that free college come with a clawback provision that pulls the money back if a student does not graduate within a certain number of years. This would be an incredibly painful provision for students who do not finish college for a variety of reasons, but it would be popular among budget hawks. States are also likely to set high initial academic requirements in the future (such as high school grades and ACT/SAT scores), essentially turning existing merit aid programs into new “free college” programs.

The 2019 legislative season is likely to bring dozens of free college proposals of various types across states, even as higher education policy gridlock remains likely in Washington. My request for states is that they be open to having their programs, particularly those with new restrictions, be evaluated by researchers so they can be improved going forward as needed.

How to Provide Context for College Scorecard Data

The U.S. Department of Education’s revamped College Scorecard website celebrated its third anniversary last month with another update to the underlying dataset. It is good to see this important consumer information tool continue to be updated, given the role that Scorecard data can play in market-based accountability (a key goal of many conservatives). But the Scorecard’s change log—a great resource for those using the dataset—revealed a few changes to the public-facing site. (Thanks to the indefatigable Clare McCann at New America for pointing this out in a blog post.)

[Screenshot: scorecard_fig1_oct18]

So to put the above screenshot into plain English, the Scorecard used to have indicators for how a college’s performance on outcomes such as net price, graduation rate, and post-college salary compared to the median institution—and now it doesn’t. In many ways, the Department of Education’s decision to stop comparing colleges with different levels of selectivity and institutional resources to each other makes all the sense in the world. But it would be helpful to provide website users with a general idea of how the college performs relative to more similar institutions (without requiring users to enter a list of comparison colleges).

For example, here is what the Scorecard data now look like for Cal State—Sacramento (the closest college to me as I write this post). The university sure looks affordable, but the context is missing.

[Screenshot: scorecard_fig2_oct18]

It would sure be helpful if ED already had a mechanism to generate a halfway reasonable set of comparison institutions to help put federal higher education data into context. Hold on just a second…

[Screenshot: scorecard_fig3_oct18]

It turns out that there is already an option within the Integrated Postsecondary Education Data System (IPEDS) to generate a list of peer institutions. ED creates a list of similar institutions to the focal college based on factors such as sector and level, Carnegie classification, enrollment, and geographic region. For Sacramento State, here is part of the list of 32 comparison institutions that is generated. People can certainly quibble with some of the institutions chosen, but they clearly do have some similarities.

[Screenshot: scorecard_fig4_oct18]

I then graphed the net prices of these 32 institutions to help put Sacramento State (in black below) into context. Sacramento State had the fifth-lowest net price among the set of universities, information that is at least somewhat more helpful than looking at a national average across all sectors and levels.

[Chart: scorecard_fig5_oct18]
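For those curious, producing that kind of chart takes only a few lines once the net prices are in hand. Here is a minimal sketch; the peer institutions and dollar figures below are placeholders for illustration, not the actual IPEDS values.

```python
# Sketch of a net price comparison chart; all figures below are placeholders.
import matplotlib.pyplot as plt

net_prices = {
    "Sacramento State": 8_200,   # placeholder, not the real IPEDS value
    "Peer campus A": 9_100,
    "Peer campus B": 10_400,
    "Peer campus C": 11_800,
    "Peer campus D": 13_500,
}
colors = ["black" if name == "Sacramento State" else "gray" for name in net_prices]

plt.barh(list(net_prices), list(net_prices.values()), color=colors)
plt.xlabel("Average net price (dollars)")
plt.title("Net price relative to IPEDS comparison group (placeholder data)")
plt.tight_layout()
plt.show()
```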

My takeaway here: the folks behind the College Scorecard should talk with the IPEDS people to consider bringing back a comparison group average based on a methodology that is already used within the Department of Education.

How to Improve Living Cost Estimates for Students

I am thrilled to be presenting at the Real College conference in Philadelphia this weekend on potential ways to improve the living allowance estimates that students receive as a part of their cost of attendance. This expands on my prior research (with Sara Goldrick-Rab of Temple and Braden Hosch of Stony Brook) documenting the incredible amount of variation in living allowances within individual counties.

Since the conference is completely booked and I have heard from several people who were interested in seeing a copy of my slides, I am happy to share them here. Stay tuned for a new working paper this fall that dives deeper into these topics!

Beware Dubious College Rankings

Just like the leaves starting to change colors (in spite of the miserable 93-degree heat outside my New Jersey office window) and students returning to school are clear signs of fall, another indicator of the change in seasons is the proliferation of college rankings that get released in late August and early September. The Washington Monthly college rankings that I compile were released the week before Labor Day, and MONEY and The Wall Street Journal have also released their rankings recently. U.S. News & World Report caps off rankings season by unveiling their undergraduate rankings later this month.

People quibble with the methodology of these rankings all the time (I get e-mails by the dozens about the Washington Monthly rankings, and we’re not the 800-pound gorilla of the industry). Yet at least these rankings are all based on data that can be defended to at least some extent and the methodologies are generally transparent. Even rankings of party schools, such as this Princeton Review list, have a methodology section that does not seem patently absurd.

But since America loves college rankings—and colleges love touting rankings they do well in and grumbling about the rest of them—a number of dubious college rankings have developed over the years. I was forwarded a press release about one particular set of rankings that immediately set my BS detectors into overdrive. This press release was about a ranking of the top 20 fastest online doctoral programs, and here is a link to the rankings that will not boost their search engine results.

First, let’s take a walk through the methods section. There are three red flags that immediately stand out:

(1) The writing resembles a “word salad” and clearly was never edited by anyone. Reputable rankings sites use copy editors to help methodologists communicate with the public.

(2) College Navigator is a good data source for undergraduates, but does not contain any information on graduate programs (which they are trying to rank) other than the number of graduates.

(3) Reputable rankings publish their full methodology, even if certain data elements are proprietary and cannot be shared; that is not the case here. And trust me—nobody wants to duplicate this set of rankings!

As an example of what these rankings look like, here is a screenshot of how Seton Hall’s online EdD in higher education is presented. Again, let’s walk through the issues.

(1) There are typos galore in their description of the university. This is not a good sign.

(2) Acceptance/retention rate data are for undergraduate students, not for a doctoral program. The only way they could get these data is by contacting programs, which costs money and runs into logistical problems.

(3) Seton Hall is accredited by Middle States, not the Higher Learning Commission. (Thanks to Sam Michalowski for bringing this to my attention via Twitter.)

(4) In a slightly important point, Seton Hall does not offer an online EdD in higher education. Given that I teach in the higher education graduate programs and am featured on the webpage for the in-person EdD program, I’m pretty confident in this statement.

For any higher education professionals who are reading this post, I have a few recommendations. First, be skeptical of any rankings that come from sources that you are not familiar with—and triple that skepticism for any program-level rankings. (Ranking programs is generally much harder due to a lack of available data.) Second, look through the methodology with the help of institutional research staff members and/or higher education faculty members. Does it pass the smell test? And finally, keep in mind that many rankings websites are only able to be profitable by getting colleges to highlight their rankings, thus driving clicks to these sites. If colleges were more cautious about posting dubious rankings, it would shut down some of these websites while also avoiding embarrassment when someone finds out that a college fell for what is essentially a ruse.

Comments on the Proposed Gainful Employment Regulations

The U.S. Department of Education is currently accepting public comments (through September 13) on their proposal to rescind the Obama administration’s gainful employment regulations, which had the goal of tying federal financial aid eligibility to whether graduates of certain vocationally-focused programs had an acceptable debt-to-earnings ratio. My comments are reprinted below.

September 4, 2018

Annmarie Weisman

U.S. Department of Education

400 Maryland Avenue SW, Room 6W245

Washington, DC 20202

Re: Comments on the proposed rescinding of the gainful employment regulations

Dear Annmarie,

My name is Robert Kelchen and I am an assistant professor of higher education at Seton Hall University.[1] As a researcher who studies financial aid, accountability policies, and higher education finance, I have been closely following the Department of Education (ED)’s 2017-18 negotiated rulemaking efforts regarding gainful employment. I write to offer my comments on certain aspects of the proposed rescinding of the regulations.

First, as an academic, I was pleasantly surprised to see ED immediately referring to a research paper in making its justification to change the debt-to-earnings (D/E) threshold. But that quickly turned into dismay as it became clear that ED had incorrectly interpreted what Sandy Baum and Saul Schwartz wrote a decade ago after Baum clarified the findings of the paper in a blog post.[2] I am not wedded to any particular threshold regarding D/E ratios, but I would recommend that ED reach out to researchers before using their findings in order to make sure they are being interpreted correctly.

Second, the point that D/E ratios can be affected by the share of adult students, who have higher loan limits than dependent students, is quite valid. But it can potentially be addressed in one of two ways if D/E ratios are reported in the future. One option is to report D/E ratios separately for independent and dependent students, but that runs the risk of creating more issues of small cell sizes by splitting the sample. Another option is to cap the amount of independent student borrowing credited toward D/E ratios at the same level as dependent students (also addressing the possibility that some dependent students have higher limits due to Parent PLUS loan applications being rejected). This is less useful from a consumer information perspective, but could solve issues regarding high-stakes accountability.

Third, ED’s point that gainful employment used a ten-year amortization period for certificate programs while ED also offers 20-year repayment plans under REPAYE is well-taken. Switching to a 20-year period would allow some lower-performing programs to pass the D/E test, but it is reasonable given that ED offers a loan repayment plan of that length. (I also view it as highly unlikely that programs would actually have lost Title IV eligibility under the prior administration’s regulations, given how few colleges have lost eligibility based on high cohort default rates.) In any case, aligning amortization periods to repayment plan periods makes sense.

Fourth, I am highly skeptical that requiring institutions to disclose various outcomes on their own websites would have much value. Net price calculators, which colleges are required to post under the Higher Education Act, are a prime example. Research has shown that many colleges place these calculators on obscure portions of their websites and that the information is often up to five years out of date.[3] Continuing to publish centralized data on outcomes is far preferable to letting colleges do their own thing, and this highlights the importance of continuing to publish outcomes information without any pauses in the data.

Fifth, while providing median debt and median earnings data allows analysts to continue to calculate a D/E ratio, there is no harm in continuing to provide such a ratio in the future alongside the raw data. There is no institutional burden for doing so, and it is possible that some prospective students may find that ratio to be more useful than simply looking at median debt. At the very least, ED should conduct several focus groups to make sure that D/E ratios lack value before getting rid of them.

Sixth, while it is absolutely correct to note that people working in certain service industries receive a high portion of their overall compensation in tips, I find it dismaying as a taxpayer that there is no interest in creating incentives for individuals to report their income as required by law. A focus on D/E ratios created a possibility for colleges to encourage their students to follow the law and accurately report their incomes in order to increase earnings relative to debt payments. ED should instead work with IRS and colleges to help protect taxpayers by making sure that everyone pays income taxes as required.

In closing, I do not have a strong preference about whether ED ties Title IV eligibility to program-level D/E thresholds due to my skepticism that any sanctions would actually be enforced.[4] However, I strongly oppose efforts by ED to completely stop publishing program-level student outcomes data until the College Scorecard data are ready (which could take a few years). Continuing to publish data on certificate graduates’ outcomes in the interim is an essential step since all sectors of higher education already have to report certificate outcomes—meaning that keeping these data treats all sectors equally. Publishing outcomes of degree programs would be nice, but not as important since only some colleges would be included.

As I showed with my colleagues in the September/October issue of Washington Monthly magazine, certificate students’ outcomes vary tremendously both within and across CIP codes as well as within different types of higher education institutions.[5] Once the College Scorecard data are ready, this dataset can be phased out. But in the meantime, continuing to publish data meets a key policy goal of fostering market-based accountability in higher education.

[1] All opinions reflected in this commentary are solely my own and do not represent the views of my employer or funders.

[2] Baum, S. (2018, August 22). DeVos misinterprets the evidence in seeking gainful employment deregulation. Urban Wire. https://www.urban.org/urban-wire/devos-misrepresents-evidence-seeking-gainful-employment-deregulation.

[3] Anthony, A. M., Page, L. C., & Seldin, A. (2016). In the right ballpark? Assessing the accuracy of net price calculators. Journal of Student Financial Aid, 46(2), 25-50. Cheng, D. (2012). Adding it all up 2012: Are college net price calculators easy to find, use, and compare? Oakland, CA: The Institute for College Access and Success.

[4] For more reasons why I am skeptical that all-or-nothing accountability systems such as the prior administration’s gainful employment regulations would actually be effective, see my book Higher Education Accountability (Johns Hopkins University Press, 2018).

[5] Washington Monthly (2018, September/October). 2018 best colleges for vocational certificates. https://washingtonmonthly.com/2018-vocational-certificate-programs.

Comments on the Proposed Borrower Defense to Repayment Regulations

The U.S. Department of Education is currently accepting public comments (through August 30) on their proposed borrower defense to repayment regulations, which affect students’ ability to get loans forgiven in the case of closed schools or colleges that misrepresented important facts. Since these regulations also affect colleges and taxpayers, I weighed in to provide a researcher’s perspective. My comments are reprinted below.

August 21, 2018

Jean-Didier Gaina

U.S. Department of Education

400 Maryland Avenue SW, Mail Stop 294-20

Washington, DC 20202

Re: Comments on the proposed borrower defense to repayment regulations

Dear Jean-Didier,

My name is Robert Kelchen and I am an assistant professor of higher education at Seton Hall University.[1] As a researcher who studies financial aid, accountability policies, and higher education finance, I have been closely following the Department of Education (ED)’s 2017-18 negotiated rulemaking efforts regarding borrower defense to repayment and financial responsibility scores. Since there were no academic researchers included in the negotiated rulemaking committee (something that should be reconsidered in the future!), I write to offer my comments on certain segments of the proposed regulations.

My first comment is on the question of whether ED should accept so-called affirmative claims from borrowers who are not yet in default and seek to make a claim against a college instead of only accepting defensive claims from borrowers who have already defaulted. For colleges that are still open, this is a clear decision in my view: affirmative claims should be allowed because ED can then attempt to recoup the money from the college instead of effectively requiring the taxpayer to subsidize at least some amount of loan forgiveness. However, the decision is somewhat more complicated in the case of a closed school, where taxpayers are more likely to foot the bill. My sense is that affirmative claims should probably still be allowed given the relationship between defaulting on student loans and adverse outcomes such as reduced credit scores.[2]

To protect taxpayers and students alike, more needs to be done to strengthen federal requirements for colleges that are at risk of closure. If a college closes suddenly, students may be eligible to receive closed school discharges at taxpayer expense. Yet my research and analyses show that ED’s current rules for determining a college’s financial health (the financial responsibility score) are only weakly related to what they seek to measure. For example, several private nonprofit colleges that closed in 2016 had passing financial responsibility scores in 2014-15, while many colleges have continued to operate with failing scores for years.[3] I also found that colleges did not change their revenue or expenditure patterns in any meaningful way after receiving a failing financial responsibility score, suggesting that colleges are not taking the current measure seriously.[4]

I am heartened to see that ED is continuing to work on updating the financial responsibility score metric to better reflect a college’s real-time risk of closing through another negotiated rulemaking session. However, I am concerned that students and taxpayers could suffer from continuing with the status quo during a potential six-year phase-in period, so anything to shorten the period would be beneficial. I again urge ED to include at least one academic researcher on the negotiated rulemaking panel to complement institutional officials and accountants, as the research community studies how colleges respond to the kinds of potential policy changes that the rest of the committee may propose.

Finally, I am concerned about ED’s vague promise to encourage colleges to offer teach-out plans instead of suddenly closing, as the regulations provide no incentives for colleges on the brink of financial collapse to work with accreditors and states to develop a teach-out plan. It would be far better for ED to require colleges to be proactive and develop teach-out plans at the first sign of financial difficulties, reducing the risk to taxpayers by minimizing the risk of closed school discharges. These plans can then be approved by an accreditor and/or state agency as a part of the regular review process. Colleges would likely contend that having to develop a pre-emptive teach-out plan may affect their ability to recruit and retain students, but tying this to an existing benchmark of federal concern (such as a low financial responsibility score or being on HCM2) should alleviate that issue.

Thank you for the opportunity to provide comments on these proposed regulations and I am happy to respond to any questions that ED staffers may have.

[1] All opinions reflected in this commentary are solely my own and do not represent the views of my employer.

[2] Blagg, K. (2018). Underwater on student debt: Understanding consumer credit and student loan default. Washington, DC: Urban Institute.

[3] Kelchen, R. (2017, March 8). Do financial responsibility scores predict college closures? https://robertkelchen.com/2017/03/08/do-financial-responsibility-scores-predict-college-closures/.

[4] Kelchen, R. (forthcoming). Do financial responsibility scores affect institutional behaviors? Journal of Education Finance.