Should Students in “Boot Camps” Get Federal Financial Aid?

In the last several years, a number of companies have started short-term, intensive training programs in fields such as computer programming, Web design, and business, designed to give recent college graduates the skills they need to land lucrative jobs in growing fields. These “boot camps” include offerings from start-up companies such as Dev Bootcamp, General Assembly, and Koru, as well as entries from branches of traditional colleges (such as Rutgers). The sector is growing rapidly, with one organization estimating that about 16,000 students will complete coding boot camps in 2015.

Boot camps may tout their high job placement rates, but they are not cheap for students. A typical program runs about 11 weeks and costs about $11,000, although shorter options are available in some fields. Unlike most undergraduate and graduate programs at traditional colleges, these programs are not currently eligible for federal financial aid dollars. This leaves students with two ways to pay: out of pocket or with a private loan. However, the U.S. Department of Education is beginning an “experimental sites” program that will allow a small number of colleges to partner with unaccredited providers like boot camps to offer courses and receive federal financial aid.

Should students in boot camp programs be able to receive federal grants and loans? The best argument for allowing students to receive federal funds for these programs (after a careful vetting process) is that federal aid would allow students with modest financial means and little credit history of their own to pay for some or all of these programs. These programs tend to recruit heavily from selective colleges with fewer low-income students (see the list of Koru’s partners), so ability to pay has not been a major concern to this point. But as the sector expands to include colleges with more economic diversity, financing these programs could become a problem.

On the other hand, the highly vocational nature of these programs means that different financing structures can make sense. One option is private loans focused on high-quality programs, which is the goal of the partnership between the private lender Skills Fund and six boot camps. Income share agreements are also a potential fit, although I have concerns about whether successful graduates would want to give up equity in themselves rather than simply make loan payments. Finally, it remains to be seen whether boot camps would actually be interested in going through the certification and quality assurance processes that are likely to accompany federal student aid. For example, General Assembly’s co-founder told Inside Higher Ed that he did not want to receive federal student aid because of concerns that federal aid could lead to higher prices down the road (the so-called “Bennett Hypothesis”). Others, such as Alex Holt at New America, worry that additional federal oversight would reduce program quality and innovation.

I’ve thought about the dueling concerns of access and flexibility regarding boot camps, and I still don’t know exactly where I stand. The good news is that only a small number of programs are likely to get access to federal financial aid at first, so the effects of federal funding (and rules) can be examined before opening the spigot to more interested programs. I’d love to hear your thoughts on this question below, as this is a developing issue that badly needs research.

How Well Do Default Rates Reflect Student Loan Repayment?

This post initially appeared at the Brown Center Chalkboard blog.

The U.S. Department of Education released new data this week on colleges’ cohort default rates (CDRs), which reflect the percentage of a college’s former students with federal student loans who entered repayment in Fiscal Year 2012 and defaulted by the end of Fiscal Year 2014. The average CDR dropped to 11.8 percent for the FY 2012 cohort, down from 13.7 percent in FY 2011 and 14.7 percent in FY 2010. Eight colleges had a CDR over 30 percent for three consecutive years, subjecting them to the loss of all federal financial aid dollars. Over 100 additional colleges had a CDR over 30 percent in the FY 2012 cohort, putting them at risk of losing funds if their performance does not improve.

Yet although CDRs are important for accountability purposes, they do not necessarily reflect whether students are repaying their loans. As of June 30, 2015, just over half of the $623 billion in Direct Loans made to students who have entered repayment is in current repayment. Beyond the $48 billion in default, another $63 billion is more than 30 days delinquent and $180 billion is in deferment or forbearance. Deferment and forbearance are not always bad things, as students can qualify for either by serving in the military or pursuing graduate studies. However, students can also request deferment or forbearance for economic hardship, during which interest still accrues. The presence of income-based repayment plans, in which students earning below 150 percent of the federal poverty line can make no payments while still remaining current on their loans, further complicates any analysis. All of these complications make cohort default rates a weak metric of whether students are actually paying back their loans.
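
As a quick arithmetic check on the portfolio figures above, here is a minimal back-of-the-envelope sketch; it assumes the categories listed roughly exhaust the balance of loans that have entered repayment.

```python
# Dollar figures (in billions) from the Department of Education portfolio data
# cited above; the category list is assumed to be roughly exhaustive.
total_entered_repayment = 623
in_default = 48
delinquent_30_plus = 63
deferment_or_forbearance = 180

current = total_entered_repayment - in_default - delinquent_30_plus - deferment_or_forbearance
print(f"In current repayment: about ${current} billion "
      f"({current / total_entered_repayment:.0%} of the total)")
# Prints roughly $332 billion, or about 53 percent -- "just over half"
```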

Are students repaying their loans? A look using College Scorecard data

The Department of Education’s recent release of College Scorecard data provides new insight into whether students are repaying their loans, while also allowing comparisons with the current CDR metric. The Scorecard contains a new measure: the percentage of students whose loan balance is lower than it was when they entered repayment, which captures whether borrowers have been able to pay down at least some principal.

Comparing this new metric on declining student loan balances with colleges’ CDRs, I come to three findings. Note that for the purposes of this post, I compare the three-year cohort default rate for students who entered repayment in FY 2011 with the one-year and three-year repayment rates for students who entered repayment in FY 2010 and FY 2011. The key findings are below.

(1) Cohort default rates substantially underestimate the percentage of students who have been unable to lower their loan balances. Across the nearly 5,700 colleges with data on both CDRs and repayment rates, the median three-year CDR was 14.9 percent, while at the median college 40.8 percent of students did not repay any principal in the first three years after entering repayment. This means that roughly one in four exiting students was not in default yet had not made a dent in their loan balance in those first three years. Figure 1 below shows the relationship between CDRs and repayment rates. A low CDR for a college is associated with higher rates of repayment (with a correlation coefficient of 0.76), but there are plenty of exceptions. For example, 25 percent of colleges with default rates below 10 percent had more than one-fourth of their students failing to repay any principal.

[Figure 1: Relationship between cohort default rates and repayment rates]
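
For readers who want to try this comparison themselves, below is a minimal sketch of the kind of merge and summary described above, using pandas. The file name and column names (cdr3, rpy_3yr_rate) are placeholders, not the Scorecard’s actual variable names.

```python
import pandas as pd

# Hypothetical merged file of FY 2011 three-year CDRs and College Scorecard
# three-year repayment rates; file and column names are illustrative only.
df = pd.read_csv("scorecard_with_cdrs.csv")

# Non-repayment rate: share of borrowers who have NOT paid down any principal
df["non_repayment"] = 1 - df["rpy_3yr_rate"]

# Median CDR (~14.9%) and median non-repayment rate (~40.8%) reported above
print(df[["cdr3", "non_repayment"]].median())

# The post reports a correlation coefficient of 0.76 between CDRs and repayment outcomes
print(df["cdr3"].corr(df["non_repayment"]))

# Colleges with low default rates but many borrowers failing to repay principal
low_cdr = df[df["cdr3"] < 0.10]
share = (low_cdr["non_repayment"] > 0.25).mean()
print(f"{share:.0%} of colleges with CDRs below 10% have more than a quarter "
      "of borrowers repaying no principal")
```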

(2) The percentage of students paying down principal doesn’t change much between one and three years after entering repayment. One year after entering repayment, 62.8 percent of students at the median college had paid down at least $1 in principal; that percentage dipped slightly to 59.2 percent after three years (see Figure 2 for the distribution of repayment rates). The drop is likely due to some students falling behind on their payments under the standard repayment plan, as well as payments under income-based plans being too small to cover accumulating interest. In either case, stagnant or falling repayment rates should raise red flags about students’ ability to eventually pay off their loans within 10 to 20 years.

[Figure 2: Distribution of repayment rates one and three years after entering repayment]


(3) As a whole, repayment outcomes take a turn for the worse at for-profit colleges relative to public and private nonprofit colleges. This is best illustrated by the difference in repayment rates between one and three years after entering repayment, broken out by institutional type. As Figure 3 below shows, for-profit colleges tended to have lower repayment rates after three years than after one year, a red flag that their borrowers are struggling, while public and private nonprofit colleges saw similar repayment rates over time. Only one in four for-profit colleges had a larger share of students paying down principal at three years than at one year, which points to potential problems for students and taxpayers alike. And although for-profit colleges have somewhat lower CDRs than community colleges, community colleges do not see the drop in repayment rates that exists in the for-profit sector.

[Figure 3: Change in repayment rates between one and three years after entering repayment, by institutional type]
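
A minimal sketch of the sector comparison behind finding (3) is below; again, the file and column names (rpy_1yr_rate, rpy_3yr_rate, control) are placeholders rather than actual Scorecard variable names, and the sector coding is assumed.

```python
import pandas as pd

df = pd.read_csv("scorecard_with_cdrs.csv")  # hypothetical merged file, as above

# Change in the repayment rate between one and three years after entering repayment
df["rpy_change"] = df["rpy_3yr_rate"] - df["rpy_1yr_rate"]

# Assumed sector coding: 1 = public, 2 = private nonprofit, 3 = private for-profit
print(df.groupby("control")["rpy_change"].median())

# Share of for-profit colleges where the repayment rate is HIGHER at three years
# than at one year (roughly one in four, per the post)
for_profit = df[df["control"] == 3]
print((for_profit["rpy_change"] > 0).mean())
```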

The new loan repayment rate data provides an additional tool for policymakers to use when holding colleges accountable for their performance. Although this metric represents a substantial improvement over CDRs because it captures students who are struggling to make payments but have not defaulted, the presence of income-based repayment plans (in which students can stay current on their loans by making small payments if their incomes are sufficiently low) complicates any accountability effort. Further research is needed to examine the implications of income-based repayment plans for principal repayment rates.

New Paper and Testimony on Risk Sharing

The concept of risk sharing, in which colleges are held at least partially financially responsible for the outcomes of their students, has become a hot topic of political discussion in recent months. The idea has gained bipartisan support, at least in theory, as presidential candidates Hillary Clinton and Scott Walker have both supported the basic principles of risk sharing. Yet by penalizing colleges with high student loan default rates, risk-sharing systems can give colleges an incentive to reduce access to higher education without actually pushing them to improve.

With generous support from the Lumina Foundation, I set out to sketch a risk-sharing system that increases accountability for poor outcomes while recognizing differences in the types of students colleges serve. I released the resulting paper this week and testified before the U.S. Department of Education’s Advisory Committee on Student Financial Assistance on the topic. (My testimony is below.) I welcome your comments on risk sharing, as the goal of this paper and testimony is to advance a thoughtful conversation about what a fair and effective system could look like.

For more reading on risk sharing, I highly recommend the thoughtful takes of the American Enterprise Institute’s Andrew Kelly and Temple University’s Doug Webber.


Testimony to the Advisory Committee on Student Financial Assistance

Hearing on Higher Education Act Reauthorization

Robert Kelchen

Good afternoon, members of the Advisory Committee on Student Financial Assistance, Department of Education officials, and other guests. My name is Robert Kelchen and I am an assistant professor in the Department of Education Leadership, Management and Policy at Seton Hall University. All opinions expressed in this testimony are my own, and I thank the Committee for the opportunity to present.

My testimony today will be on the topic of risk sharing in higher education, which is typically defined as holding colleges financially accountable for their students’ performance. It is a topic that has been discussed by politicians on both sides of the aisle, including in legislation recently introduced by Republican Senator Orrin Hatch and Democratic Senator Jeanne Shaheen that would require colleges to pay a percentage of the student loan dollars that were not repaid in the previous year.[1] But simple risk-sharing proposals like this give colleges incentives to reduce borrowing, either by leaving the Direct Loan program or by reducing the non-tuition expense allowances included in the cost of attendance.

In a recently released policy paper funded by the Lumina Foundation, I introduced a risk-sharing proposal that attempts to hold colleges accountable for their performance with respect to both Pell Grant and federal student loan dollars.[2] My proposal would reward colleges for strong performance on Pell Grant success and student loan repayment rates, while requiring colleges with weaker performance to pay a penalty to the Department of Education from a source other than institutional aid dollars.

The federal government’s portion of my proposed risk-sharing system would have three main components:

  • First, penalties or rewards for Pell Grant recipients’ performances would be separate from penalties or rewards for student loan performance. This would end the current situation in which colleges face incentives to opt out of federal student loans in order to protect Pell Grant dollars.[3]
  • Second, the federal government would provide better tracking and reporting of outcomes for students receiving federal financial aid. The set of metrics available to examine performance is extremely limited, and could be improved by either overturning the ban on federal student unit record data systems or committing to providing additional subgroup performance information using IPEDS and the National Student Loan Data System.
  • Third, to allow more accurate comparisons of student loan performance across campuses, federal guidelines for defining the non-tuition components of the cost of attendance would be helpful. Research has found large variation within a given metropolitan area in the off-campus room and board and other expense allowances, which are set by individual colleges.[4] Colleges need to be placed on a more level playing field for accountability purposes.

Colleges would be required to meet three criteria to receive Title IV funds:

  • First, colleges must agree to put “skin in the game” by being willing to match a percentage of Title IV loan or grant aid with institutional funds if their performance falls below a specified benchmark.
  • Second, colleges must participate in the Federal Direct Loan program in order for their students to receive Pell Grant dollars, giving students access to credit while not directly putting Pell dollars at risk.
  • Third, colleges must be willing to meet heightened accreditation and consumer information provision standards.

Colleges’ performance would be compared to that of similar institutions using peer groups based on the characteristics of students served, the types of degrees and certificates offered, and the level of resources colleges possess. Notably, because institutional selectivity, per-student revenues, and endowment values would serve as grouping characteristics, a college that tried to become more selective would be compared to more selective peers, limiting its ability to game the system.

The Pell Grant portion of risk sharing would be based on outcomes such as Pell recipients’ retention rates, graduation rates, transfer rates, and the number of graduates. Colleges with performance a certain percentage below the peer group average would have to pay a penalty, out of their own budgets, equal to a percentage of Pell funds awarded, while colleges a certain percentage above the average would receive a bonus to use to supplement need-based financial aid programs.

The student loan portion of risk sharing would be based on outcomes such as cohort default rates three to five years after entering repayment, the percentage of students current on their payments, and the percentage of students who have repaid at least $1 of principal. I would also include PLUS loans in the risk-sharing metric. Colleges performing substantially above the peer group average would receive additional work-study funds, while colleges performing substantially below average would face a penalty.

The implementation of any risk-sharing proposal must be carefully considered in order to avoid perverse incentives and to gain support from colleges and policymakers. Lessons from state performance-based funding programs show that phasing in a system over a period of several years is important, as is giving colleges some way to limit penalties while they make organizational changes.[5] Colleges that present clear plans for improvement that are supported by their accreditor should be able to receive reduced penalties and logistical support from the federal government for a limited period of time.

Thank you once again for the opportunity to present and I look forward to answering any questions.

[1] Student Protection and Success Act (S. 1939, introduced August 5, 2015). http://www.shaheen.senate.gov/imo/media/doc/Student%20Protection%20and%20Sucess%20Act.pdf.

[2] The paper is available at http://www.luminafoundation.org/resources/proposing-a-federal-risk-sharing-policy.

[3] Hillman, N. W. (2015). Cohort default rates: Predicting the probability of federal sanctions. Educational Policy, 29(4), 559-582. Hillman, N. W., & Jaquette, O. (2014). Opting out of federal student loan programs: Examining the community college sector. Paper presented at the Association for Education Finance and Policy annual conference, San Antonio, TX.

[4] Kelchen, R., Hosch, B. J., & Goldrick-Rab, S. (2014). The costs of college attendance: Trends, variation, and accuracy in institutional living cost allowances. Madison, WI: Wisconsin HOPE Lab.

[5] For example, see Dougherty, K. J., & Natow, R. S. (2015). The politics of performance-based funding: Origins, discontinuations, and transformations. Baltimore, MD: Johns Hopkins Press.

Comments on Senator Clinton’s Higher Education Proposal

Hillary Clinton’s presidential campaign released her framework for higher education reform at midnight on Monday morning (see details here and here). The plan, officially listed at a cost of $350 billion over ten years, would move closer to the idea of debt-free public college, require states to increase their spending on public higher education, and potentially embrace some accountability reforms with bipartisan appeal. Below are some of my first-take comments on Sen. Clinton’s proposal, as I look forward to seeing complete details. (I didn’t get an embargoed copy in advance.)

  • This proposal feels like a direct reaction to pressure that Sen. Clinton was facing from the political Left. Both of her main rivals, independent Senator Bernie Sanders of Vermont and former Maryland Governor Martin O’Malley, have supported versions of debt-free public college plans. This has zero chance of passing Congress as is, particularly as the House of Representatives is likely to stay in Republican hands through 2020 and the proposal would be paid for by additional taxes on wealthy Americans.
  • I’m highly skeptical of the $350 billion price tag, or at least of how it is framed as just $35 billion per year (roughly equal to annual federal Pell Grant spending). New federal programs take several years to phase in, meaning that most of the expenses come in later years. (President Obama’s free community college proposal is similar.) Once this plan is fully in place, I’d expect the price tag to be closer to $70 billion per year; a stylized back-of-the-envelope calculation follows this list. All politicians like to massage the ten-year budget window used for cost estimates, and Sen. Clinton is no different.
  • Unlike some other “free college” proposals, Sen. Clinton’s proposal brings at least some private nonprofit colleges to the table by potentially making some of their students eligible for additional aid. This is a politically smart move, as the private nonprofit lobby is strong and many colleges in this sector do good work for students. But as noted in Inside Higher Ed this morning, the leadership of the private college lobby is concerned about any proposal that directs relatively less money to private colleges, as it could affect some institutions’ ability to survive.
  • This plan includes a federal/state partnership, which is typical of Democratic higher education proposals (and a good way to keep the price tag down somewhat). However, as the experience with Medicaid expansion suggests, many Republican governors may not take the extra funds in exchange for having to assume additional responsibilities. For that reason, Sen. Clinton’s proposal to allow public colleges in those states to bypass state government and work directly with the federal government is politically brilliant. But states probably won’t be happy.
  • Much of the price tag will go to reduce interest rates on student loans, both for current students and to allow former students to refinance their loans. This is a big deal for the Elizabeth Warren faction of the Democratic Party—the folks who really make their voices heard in primary elections. But this money will do little to improve access and completion rates, in part because much of the money goes to students after they have left college and because income-based repayment plans make interest rates less relevant. Additionally, students who tried to avoid debt as much as possible (many from lower-income families) won’t benefit as much and may be upset by the subsidies going to higher-income borrowers. I wrote about this in my previous post.
  • There are bipartisan pieces in this plan, including accreditation reform, better consumer information, and risk-sharing for student loans. If Sen. Clinton becomes the nominee, look for her to pivot to the center and highlight some of these proposals.
  • The Clinton staff are claiming this proposal will help bring down the cost of providing a college education, in addition to the price that students pay. I just can’t help but be skeptical when suggested cost-saving areas include administration and technology. Colleges have been facing pressure to tighten their belts for years from states (and many have actually done so), so I don’t think the federal government will be any more successful. But it makes for a good soundbite.

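Here is the stylized back-of-the-envelope calculation referenced in the cost bullet above. The year-by-year figures are invented purely to illustrate how a phase-in reconciles a $350 billion ten-year score with a much larger steady-state annual cost; they are not from the campaign.

```python
# Illustrative only: a program that phases in slowly can score at about $350 billion
# over the ten-year budget window even though its steady-state cost is far more
# than $35 billion per year. These yearly figures (in billions) are invented.
yearly_cost = [0, 0, 5, 15, 30, 45, 60, 65, 65, 65]

total = sum(yearly_cost)
print(f"Ten-year total: ${total} billion")                                # $350 billion
print(f"Average per year over the window: ${total / 10:.0f} billion")     # $35 billion
print(f"Cost once fully phased in: ${yearly_cost[-1]} billion per year")  # $65 billion
```
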
The three main Democratic candidates have now laid out their higher education agendas. Hopefully, the Republican field (which, with the exception of Sen. Marco Rubio, has been fairly quiet on the issue) will follow suit.

The Rise and Fall of Federal College Ratings

President Obama’s 2013 announcement that a set of federal college ratings would be created and then tied to federal financial aid dollars caught the higher education world by surprise. Some media coverage at the time even expected what came to be known as the Postsecondary Institution Ratings System (PIRS) to challenge U.S. News & World Report’s dominance in the higher education rankings marketplace. But most researchers and people intimately involved in policy discussions saw a substantial set of hurdles (both methodological and political) that college ratings would have to clear before being tied to financial aid. The result was a series of delays in developing PIRS, as evidenced by last fall’s late release of only a general framework for the ratings.

The U.S. Department of Education’s March announcement that two college ratings systems would be created, one oriented toward consumers and one for accountability purposes, further complicated the efforts to develop a ratings system. As someone who has written extensively on college ratings, I weighed in with my expectation that any ratings were becoming extremely unlikely (due to both political pressures and other pressing needs for ED to address):

This week’s announcement that the Department of Education is dropping the ratings portion of PIRS (is it PIS now?) comes as little surprise to higher education policy insiders—particularly in the face of bipartisan legislation in Congress that sought to block the development of ratings and fierce opposition from much of the higher education community. I have to chuckle at Education Undersecretary Ted Mitchell’s comments on the changes; he told The Chronicle of Higher Education that dropping ratings “is the exact opposite of a collapse” and “a sprint forward.” But politically, this is a good time for ED to focus on consumer information after its recent court victory against the for-profit sector that allows the gainful employment accountability system to go into effect next week.

It does appear that the PIRS effort will not be in vain, as ED has promised that additional data on colleges’ performance will be made available on consumer-friendly websites. I am skeptical that federal websites like the College Scorecard and College Navigator directly reach students and their families, but I do believe in the power of information to help students make at least decent decisions. That information will be more effective, though, when packaged by intermediaries such as guidance counselors and college access organizations.

On a historical note, the 2013-2015 effort to rate colleges fell short of an effort a century ago, when ratings were actually created but President Taft blocked their release. As Libby Nelson at Vox noted last summer, President Wilson then created a ratings committee in 1914, which concluded that publishing ratings was not desirable at the time. 101 years later, some things still haven’t changed. College ratings are likely dead for decades at the federal level, but performance-based funding and “risk-sharing” ideas enjoy some bipartisan support and are the next big accountability policy discussion.

I’d love to be able to write more at this time about the path forward for federal higher education accountability policy, but I’ve got to get back to putting together the annual Washington Monthly college rankings (look for them in late August). Hopefully, future versions of the rankings will be able to include some of the new information that has been promised in this new consumer information system.

It’s Time to Make Accreditation Reports Public

The higher education world is abuzz about this week’s great piece in The Wall Street Journal questioning the effectiveness of higher education accrediting agencies, whose seal of approval is required for a college to receive federal student financial aid dollars. In the front-page article, Andrea Fuller and Douglas Belkin of the WSJ note that at least 11 accredited four-year colleges had federal graduation rates (which exclude part-time and transfer students, among others) below 10 percent, which leads one to question whether accreditors are doing their job of ensuring institutional quality. A 2014 Government Accountability Office report concluded that accreditors are more likely to yank a college’s accreditation over financial concerns than academic concerns and called for additional oversight from the U.S. Department of Education.

Congress has also been placing pressure on accreditors in recent weeks due to the collapse of the accredited Corinthian chain of for-profit colleges and the Department of Education’s announcement that at least some Corinthian students will qualify for loan forgiveness. The head of the main accreditation body responsible for most Corinthian campuses got grilled by Senate Democrats in a hearing this week for not pulling the campuses’ accreditation before the chain collapsed. As a part of the (hopefully) impending reauthorization of the Higher Education Act, members of Congress on both sides of the aisle are interested in a potential overhaul of the accreditation system.

Students, their families, policymakers, and the general public have a clear and compelling interest in reading the reports from accrediting agencies and knowing whether colleges are facing sanctions for some aspect of academic or fiscal performance. Yet these reports, which are produced by nonprofit accrediting agencies, are rarely available to the public. For the WSJ piece, the reporters were able to use open-records requests to get accreditation reports for 50 colleges with the lowest graduation rates. I was recently at a conference where the GAO presented its aforementioned accreditation report, and I asked whether the data it compiled on accreditor sanctions were available to the public. The GAO staff suggested I file an open-records request, something I have already done (unsuccessfully) for another paper.

Basic information about a college’s accreditation status and reports, including any sanctions and key recommendations for improvement, should be readily available to the public as a requirement for federal financial aid eligibility. And this should cover all types of colleges, including private nonprofit and for-profit colleges that accept federal funds. The federal government doesn’t necessarily have to get involved in the accreditation process itself (a key concern of colleges and universities), but it can use its clout to make additional data available to the public. (Students probably won’t go to a college’s website and read the reports, but third parties like guidance counselors and college rankings providers would work to get the information out in more usable form.) A little sunshine in the accreditation process has the potential to be a wonderful disinfectant.

Analyzing the Heightened Cash Monitoring Data Release

NOTE: This post was updated April 3 to reflect the Department of Education’s latest release of data on heightened cash monitoring.

In my previous post, I wrote about the U.S. Department of Education’s release of a list of 544 colleges subject to heightened cash monitoring standards due to various academic, financial, and administrative concerns. I constructed a dataset of the 512 U.S. colleges known to be facing heightened cash monitoring (HCM) along with two other key accountability measures: the percentage of students who default on loans within three years (cohort default rates) and an additional measure of private colleges’ financial strength (financial responsibility scores). In this post, I examine the reasons why colleges face heightened cash monitoring, as well as whether HCM correlates with the other accountability metrics.

The table below shows the number of colleges facing HCM-1 (shorter delays in ED’s disbursement of student financial aid dollars; colleges not facing HCM have no delays) and HCM-2 (longer delays) by type of institution (public, private nonprofit, and private for-profit).

Table 1: HCM status by institutional type.
Sector                 HCM-1   HCM-2
Public                    68       6
Private nonprofit         97      18
Private for-profit       284      39
Total                    449      63


While only six of 74 public colleges are facing HCM-2, more than one in ten private nonprofit (18 of 115) and for-profit colleges (39 of 323) are facing this higher standard of oversight. The next table shows the various reasons listed for why colleges are facing HCM.

Table 2: HCM status by reason for additional oversight.
Reason                                HCM-1   HCM-2
Low financial responsibility score      320       4
Financial statements late                66       9
Program review                            1      21
Administrative capability                22       7
Accreditation concerns                    1      12
Other                                    39      10


More than two-thirds (320) of the 449 colleges facing HCM-1 are included due to low financial responsibility scores (below 1.5 on a scale ranging from -1 to 3), but only four colleges are facing HCM-2 for that reason. The next most common reason, affecting 75 colleges, is a late submission of required financial statements or audits; this group includes 43 public colleges in Minnesota, which make up a majority of the public colleges subject to HCM. Program review concerns were the main driver of HCM-2, with 21 colleges (including many of the newly released institutions) in that category. Other serious concerns included administrative capability (22 in HCM-1 and 7 in HCM-2), accreditation (2 in HCM-1 and 12 in HCM-2), and a range of other factors (39 in HCM-1 and 10 in HCM-2).

The next table includes three of the most common or serious reasons for facing HCM (low financial responsibility scores, administrative capacity concerns, and accreditation issues) and examines their median financial responsibility scores and cohort default rates.

Table 3: Median outcome values on other accountability metrics.
Reason for inclusion in HCM          Financial responsibility score   Cohort default rate
Low financial responsibility score   1.2                              12.1%
Administrative capability            1.6                              20.3%
Accreditation issues                 2.0                               2.8%


Not surprisingly, the typical college subject to HCM for a low financial responsibility score had a score of 1.2 in Fiscal Year 2012, low enough to require additional federal oversight. The median cohort default rate for this group, 12.1%, is slightly lower than the national default rate of 13.7%, but some of these colleges do not participate in the federal student loan program and are thus counted as zeroes. The median college with administrative capability concerns barely passed the financial responsibility test (with a score of 1.6), while 20.3% of its students defaulted. Colleges with accreditation issues (either academic or financial) had higher financial responsibility scores (a median of 2.0) and lower cohort default rates (a median of 2.8%).
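
For those working with the downloadable dataset described above, a sketch of how Table 3 could be reproduced is below; the file and column names are placeholders for whatever the merged HCM/CDR/financial responsibility file is actually called.

```python
import pandas as pd

# Hypothetical file combining HCM status, financial responsibility scores,
# and three-year cohort default rates; names are illustrative only.
hcm = pd.read_csv("hcm_with_cdr_and_frs.csv")

reasons = ["Low financial responsibility score",
           "Administrative capability",
           "Accreditation issues"]

medians = (hcm[hcm["hcm_reason"].isin(reasons)]
           .groupby("hcm_reason")[["fin_resp_score", "cdr3"]]
           .median())
print(medians)  # should roughly match Table 3 above
```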

What does this release of heightened cash monitoring data tell us? Since most colleges are on the list for already known concerns (low financial responsibility scores or accreditation issues) or rather silly errors (failing to submit financial statements on time), the value is fairly limited. But there is still some value, particularly in the administrative capability category. These colleges deserve additional scrutiny, and the release of this list should help provide it.

New Data on Heightened Cash Monitoring and Accountability Policies

Earlier this week, I wrote about the U.S. Department of Education’s pending release of a list of colleges that are currently subject to heightened cash monitoring requirements. On Tuesday morning, ED released the list of 556 colleges (updated to 544 on Friday), thanks to dogged reporting by Michael Stratford at Inside Higher Ed (see his take on the release here).

My interest lies in comparing the colleges facing heightened cash monitoring (HCM) to two other key accountability measures: the percentage of students who default on loans within three years (cohort default rates) and an additional measure of private colleges’ financial strength (financial responsibility scores). I have compiled a dataset with all of the domestic colleges known to be facing HCM, their cohort default rates, and their financial responsibility scores.

That dataset is available for download on my site, and I hope it is useful for those interested in examining these new data on federal accountability policies. I will have a follow-up post with a detailed analysis, but at this point it is more important for me to get the data out in a convenient form to researchers, policymakers, and the public.

DOWNLOAD the dataset here.

Why is it So Difficult to Sanction Colleges for Poor Performance?

The U.S. Department of Education has the ability to sanction colleges for poor performance in several ways. A few weeks ago, I wrote about ED’s most recent release of financial responsibility scores, which require colleges deemed financially unstable to post a letter of credit with the federal government before receiving financial aid dollars. ED can also strip a college’s federal financial aid eligibility if too high a percentage of students default on their federal loans, if data are not provided on key measures such as graduation rates, or if laws such as Title IX (prohibiting discrimination based on sex) are not followed.

The Department of Education can also sanction colleges by placing them on heightened cash monitoring (HCM), which requires additional documentation and a hold on funds before student financial aid dollars are released. Corinthian Colleges, which partially collapsed last summer, blamed suddenly imposed HCM requirements for its collapse, as the hold left the company short on cash. Notably, ED has the authority to determine which colleges should face HCM without relying on a fixed and transparent formula.

In spite of the power of the HCM designation, ED had previously refused to release a list of the colleges subject to it. The outstanding Michael Stratford at Inside Higher Ed tried to get the list for nearly a year through a Freedom of Information Act request (which was largely denied over concerns about hurting colleges’ market positions), finally making the dispute public in an article last week. That sunlight proved to be a powerful disinfectant: ED relented late Friday and will publish the list of names this week.

The reluctance to release the HCM list is but one of many difficulties the Department of Education has had in sanctioning colleges for poor performance across different dimensions. Last fall, the cohort default rate measure was tweaked at the last minute, which had the effect of allowing more colleges to pass and retain access to federal aid. Financial responsibility scores have been challenged over concerns that ED’s calculations are incorrect. The gainful employment metrics are still tied up in court, and tying any federal aid dollars to college ratings appears to have no chance of passing Congress at this point. Notably, these sanctions are rarely triggered by direct concerns about academics, as academic matters are left to accreditors.

Why is it so difficult to sanction poorly-performing colleges, and why is the Department of Education so hesitant to release performance data? I suggest three reasons below, and I would love to hear your thoughts in the comments section.

(1) The first reason is the classic political science axiom of concentrated benefits (to colleges) and diffuse costs (to students and the general public). Since there is a college in every Congressional district (Andrew Kelly at AEI shows the median district had 11 colleges in 2011-12), colleges and their professional associations can put forth a fight whenever they feel threatened.

(2) Some of these accountability measures are either all-or-nothing in nature (such as default rates) or incredibly costly for financially struggling colleges (HCM or posting a letter of credit for a low financial responsibility score). More nuanced systems with a sliding scale might make some sanctions possible, and this is a possible reform under Higher Education Act reauthorization.

(3) The complex relationship between accrediting bodies and the Department of Education leaves ED unable to directly sanction colleges for poor academic performance. A 2014 GAO report suggested accrediting bodies also focus more on finances than academics and called for a greater federal role in accreditation, something that will not sit well with colleges.

I look forward to the release of the list of colleges facing heightened cash monitoring later this week (please, not Friday afternoon!) and will share my thoughts on the list in a future piece.

Do Financial Responsibility Scores Reflect Colleges’ Financial Strength?

In spite of the vast majority of federal government operations being closed on Thursday due to snow (it’s been a rough end to winter in this part of the country), the U.S. Department of Education released financial responsibility scores for private nonprofit and for-profit colleges and universities based on 2012-13 data. These scores are based on calculations designed to measure a college’s financial strength in three key areas: the primary reserve ratio (liquidity), the equity ratio (ability to borrow additional funds), and net income (profitability or excess revenue).

A college can score between -1 and 3. Colleges that score 1.5 or above are considered financially responsible without any qualifications and can access federal funds. Colleges scoring between 1.0 and 1.4 are considered financially responsible and can access federal funds for up to three years, but they are subject to additional Department of Education oversight of their financial aid programs; if a college does not improve its score within three years, it will no longer be considered financially responsible. Colleges scoring 0.9 or below are not considered financially responsible and must submit a letter of credit and accept additional oversight to retain access to funds. Such a college can submit a letter of credit equal to 50% of the federal student aid funds it received in the prior year and be deemed financially responsible, or it can submit a letter equal to 10% of those funds and keep access to funds while still not being fully considered financially responsible.
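
A minimal sketch of the zones described above, written as a small helper function (this summarizes the post’s description of the rules, not the regulatory text itself):

```python
def financial_responsibility_zone(score: float) -> str:
    """Classify a composite score (-1 to 3) into the zones described above."""
    if score >= 1.5:
        return "financially responsible; full access to federal funds"
    elif score >= 1.0:
        return "in the zone: up to three years of access with added ED oversight"
    else:
        return ("not financially responsible: letter of credit required "
                "(10% or 50% of prior-year federal student aid)")

for s in (2.4, 1.2, 0.3):
    print(s, "->", financial_responsibility_zone(s))
```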

As Goldie Blumenstyk (who knows more about the topic than any other journalist) and Joshua Hatch of The Chronicle of Higher Education discover in their snap analysis of the data, 158 private degree-granting colleges (108 nonprofit and 50 for-profit) failed the test in 2012-13, ten fewer than last year. Among all colleges eligible to receive federal financial aid, 192 failed outright in 2012-13 by scoring 0.9 or lower, and another 128 faced extra oversight by scoring between 1.0 and 1.4.

But, as Blumenstyk and Hatch note in their piece, private colleges have repeatedly questioned how financial responsibility scores are determined and whether they are accurate measures of a college’s financial health. I’m working on an article examining whether and how colleges and other stakeholders respond to financial responsibility scores and therefore have a bunch of data at the ready to look at this topic.

Thanks to the help of my sharp research assistant Michelle Magno, I have a dataset of 270 private nonprofit colleges with financial responsibility scores and Moody’s credit ratings in the 2010-11 academic year. (Colleges only have Moody’s ratings if they seek additional capital, which explains the smaller sample size and why few colleges with low financial responsibility scores are included.) The scatterplot below shows the relationship between Moody’s ratings and financial responsibility scores, with credit ratings ranging from Caa to Aaa and financial responsibility scores ranging from 1.3 to 3.0.

[Figure: Scatterplot of Moody’s credit ratings versus financial responsibility scores for 270 private nonprofit colleges]

The correlation between the two measures of fiscal health was just 0.038, which is not significantly different from zero. Of the 57 colleges with the maximum financial responsibility score of 3.0, only three (Northwestern, Stanford, and Swarthmore) had the highest possible credit rating of Aaa. Twenty-five colleges with financial responsibility scores of 3.0 had credit ratings in the Baa range, seven to nine notches below Aaa. On the other hand, six of the 15 colleges with Aaa credit ratings (including Harvard and Yale) had financial responsibility scores of 2.2, well below the maximum possible score.
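
To compute a correlation like the one reported above, the letter ratings have to be put on a numeric scale. Here is a sketch of that step; the file and column names are placeholders, and the mapping shown simply assigns one point per notch in the standard Moody’s ordering rather than reflecting anything specific to my dataset.

```python
import pandas as pd

# Map Moody's long-term ratings to a numeric scale (higher = stronger credit)
notches = {"Aaa": 21, "Aa1": 20, "Aa2": 19, "Aa3": 18,
           "A1": 17, "A2": 16, "A3": 15,
           "Baa1": 14, "Baa2": 13, "Baa3": 12,
           "Ba1": 11, "Ba2": 10, "Ba3": 9,
           "B1": 8, "B2": 7, "B3": 6,
           "Caa1": 5, "Caa2": 4, "Caa3": 3}

df = pd.read_csv("frs_and_moodys_2010_11.csv")      # hypothetical merged file
df["rating_numeric"] = df["moodys_rating"].map(notches)

# Pearson correlation between the two measures of financial health
print(df["fin_resp_score"].corr(df["rating_numeric"]))  # ~0.04 in the post
```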

This suggests that the federal government and private credit rating agencies measure colleges’ financial health in different ways, at least among colleges with the ability to access credit markets. Financial responsibility scores certainly have the potential to affect how colleges structure their finances, but it is unclear whether they accurately reflect a college’s ability to keep operating going forward.