How Financial Responsibility Scores Do Not Affect Institutional Behaviors

One of the federal government’s longstanding accountability efforts in higher education is the financial responsibility score—a metric designed to reflect a private college’s financial stability. The federal government has an interest in making sure that only stable colleges receive federal funds, as taxpayers often end up footing at least part of the bill when colleges shut down and students may struggle to resume their education elsewhere. The financial responsibility score metric ranges from -1.0 to 3.0, with colleges scoring between 1.0 and 1.4 being placed under additional oversight and those scoring below 1.0 being required to post a letter of credit with the Department of Education.
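For readers who think in code, the sanction tiers boil down to a simple classification. Here is a minimal sketch based on the thresholds described above (the function name and tier labels are my own shorthand):

```python
def sanction_tier(score: float) -> str:
    """Classify a financial responsibility score into ED's sanction tiers
    (a simplified sketch; labels are my own shorthand)."""
    if not -1.0 <= score <= 3.0:
        raise ValueError("Scores range from -1.0 to 3.0")
    if score < 1.0:
        return "fails: must post a letter of credit"
    if score <= 1.4:
        return "passes, but placed under additional oversight"
    return "passes outright"

# A college scoring 1.2 lands in the extra-oversight zone.
print(sanction_tier(1.2))
```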

Although these scores have been released to the public since the 2006-07 academic year and there was a great deal of dissatisfaction among private colleges regarding how the scores were calculated, there had been no prior academic research on the topic before I started my work in the spring of 2014. My question was simple: did receiving a poor financial responsibility score induce colleges to shift their financial priorities (either increasing revenues or decreasing expenditures) in an effort to avoid future sanctions?

But as is often the case in academic research, the road to a published article was far from smooth and direct. Getting rejected by two different journals took nearly two years, and then it took another two years for this paper to wind its way through the review, page proof, and publication process at the Journal of Education Finance. (In the meantime, I scratched my itch on the topic by writing a few blog posts highlighting the data and teasing my findings.)

More than four and a half years after starting work on this project, I am thrilled to share that my paper, “Do Financial Responsibility Scores Affect Institutional Behaviors?” is part of the most recent issue of the Journal of Education Finance. I examined financial responsibility score data from 2006-07 to 2013-14 in this paper, although I tried to get data going back further since these scores have been calculated since at least 1996. I filed a Freedom of Information Act request for the older data back in 2014, and my appeal was denied in 2017 on the grounds that the request to receive data (that already existed in some format!) was “too burdensome and expensive.” At that point, the paper had already been accepted at JEF, but I am obviously still a little annoyed with how that process went.

Anyway, I failed to find any clear evidence that private nonprofit or for-profit colleges changed their fiscal priorities after receiving an unfavorable financial responsibility score. To some extent, this result made sense among private nonprofit colleges; colleges tend to move fairly slowly and many of their costs are sticky (such as facilities and tenured faculty). But for for-profit colleges, which generally tend to be fairly agile critters, the null findings were more surprising. There is certainly more work to do in this area (particularly given the changes in higher education that have occurred over the last five years), so I encourage more researchers to delve into this topic.

To aspiring researchers and those who rely on research in their jobs—I hope this blog post provides some insights into the scholarly publication process and all of the factors that can slow down the production of research. I started this paper during my first year on faculty and it finally came out during my tenure review year (which is okay because accepted papers still count even if they are not yet in print). Many papers move more quickly than this one, but it is worth highlighting that research is a pursuit for people with a fair amount of patience.

Some Good News on Student Loan Repayment Rates

The U.S. Department of Education released updates to its massive College Scorecard dataset earlier this week, including new data on student debt burdens and student loan repayment rates. In this blog post, I look at trends in repayment rates (defined as whether a student repaid at least $1 in principal) at one, three, five, and seven years after entering repayment. I present data for colleges with unique six-digit Federal Student Aid OPEID numbers (to eliminate duplicate results), weighting the final estimates to reflect the total number of borrowers entering repayment.[1]
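For anyone replicating these figures, here is a minimal sketch of the weighting step. The Scorecard variable names below (RPY_1YR_RT and RPY_1YR_N) follow the data dictionary as I recall it; verify them against the current dictionary before relying on this:

```python
import pandas as pd

# Load one year of the College Scorecard institution-level file.
df = pd.read_csv("scorecard.csv", low_memory=False)

# Keep one record per six-digit OPEID to avoid double-counting
# branch campuses that report under the same ID.
df = df.drop_duplicates(subset="OPEID6")

# Coerce suppressed values (e.g., "PrivacySuppressed") to missing.
df["RPY_1YR_RT"] = pd.to_numeric(df["RPY_1YR_RT"], errors="coerce")
df["RPY_1YR_N"] = pd.to_numeric(df["RPY_1YR_N"], errors="coerce")

# Borrower-weighted 1-year repayment rate: weight each college's
# rate by the number of borrowers in its repayment cohort.
valid = df.dropna(subset=["RPY_1YR_RT", "RPY_1YR_N"])
weighted = (valid["RPY_1YR_RT"] * valid["RPY_1YR_N"]).sum() / valid["RPY_1YR_N"].sum()
print(f"Borrower-weighted 1-year repayment rate: {weighted:.1%}")
```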

The table below shows the trends in the 1-year, 3-year, 5-year, and 7-year repayment rates for each cohort of students with available data.

Repayment cohort   1-year rate (%)   3-year rate (%)   5-year rate (%)   7-year rate (%)
2006-07                 63.2              65.1              66.7              68.4
2007-08                 55.7              57.4              59.5              62.2
2008-09                 49.7              51.7              55.3              59.5
2009-10                 45.7              48.2              52.6              57.4
2010-11                 41.4              45.4              51.3              N/A
2011-12                 39.8              44.4              50.6              N/A
2012-13                 39.0              45.0              N/A               N/A
2013-14                 40.0              46.1              N/A               N/A

One piece of good news is that 1-year and 3-year repayment rates ticked up slightly for the most recent cohort of students, who entered repayment in 2013 or 2014. The 1-year repayment rate of 40.0% is the highest rate since the 2010-11 cohort and the 3-year rate of 46.1% is the highest since the 2009-10 cohort. Another piece of good news is that the gain between the 5-year and 7-year repayment rates for the most recent cohort tracked that long (2009-10) is the largest among the four cohorts with seven-year data.

Across all sectors of higher education, repayment rates increased as students got farther into the repayment period. The charts below show differences by sector for the cohort entering repayment in 2009 or 2010 (the most recent cohort to be tracked over seven years), and it is worth noting that for-profit students see somewhat smaller increases in repayment rates than students in other sectors.

But even somewhat better repayment rates still indicate significant issues with student loan repayment. Only half of borrowers have repaid any principal within five years of entering repayment, which is a concern for students and taxpayers alike. Data from a Freedom of Information Act request by Ben Miller of the Center for American Progress highlight that student loan default rates continue to increase beyond the three-year accountability window currently used by the federal government, and other students are muddling through deferment and forbearance while outstanding debt continues to increase.

Still other students are relying on income-driven repayment and Public Service Loan Forgiveness to remain current on their payments. This presents a long-term risk to taxpayers, as at least a portion of balances will be written off over the next several decades. It would be helpful for the Department of Education to add data to the College Scorecard on the percentage of students at each college enrolled in income-driven repayment plans so it is possible to separate students who may not be repaying principal due to income-driven plans from those who are placing their credit at risk by falling behind on payments.

[1] Some of the numbers for prior cohorts slightly differ from what I presented last year due to a change in how I merged datasets (starting with the most recent year of the Scorecard instead of the oldest year, as the latter method excluded some colleges that merged). However, this did not affect the general trends presented in last year’s post. Thanks to Andrea Fuller at the Wall Street Journal for helping me catch that bug.
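For fellow data nerds, here is a minimal sketch of the revised merge order described in this footnote (the file and column names are hypothetical):

```python
import pandas as pd

# Start from the most recent Scorecard file so that colleges which merged
# (and now report under a single OPEID) are kept, then left-join the
# older years onto that frame.
merged = pd.read_csv("scorecard_2015_16.csv", usecols=["OPEID6", "RPY_1YR_RT"])
merged = merged.rename(columns={"RPY_1YR_RT": "rpy_2015_16"})

for year in ["2014_15", "2013_14"]:  # older years, newest first
    older = pd.read_csv(f"scorecard_{year}.csv", usecols=["OPEID6", "RPY_1YR_RT"])
    older = older.rename(columns={"RPY_1YR_RT": f"rpy_{year}"})
    merged = merged.merge(older, on="OPEID6", how="left")
```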

How to Provide Context for College Scorecard Data

The U.S. Department of Education’s revamped College Scorecard website celebrated its third anniversary last month with another update to the underlying dataset. It is good to see this important consumer information tool continue to be updated, given the role that Scorecard data can play in market-based accountability (a key goal of many conservatives). But the Scorecard’s change log—a great resource for those using the dataset—revealed a few changes to the public-facing site. (Thanks to the indefatigable Clare McCann at New America for pointing this out in a blog post.)

[Screenshot: scorecard_fig1_oct18]

So to put the above screenshot into plain English, the Scorecard used to have indicators for how a college’s performance on outcomes such as net price, graduation rate, and post-college salary compared to the median institution—and now it doesn’t. In many ways, the Department of Education’s decision to stop comparing colleges with different levels of selectivity and institutional resources to each other makes all the sense in the world. But it would be helpful to provide website users with a general idea of how the college performs relative to more similar institutions (without requiring users to enter a list of comparison colleges).

For example, here is what the Scorecard data now look like for Cal State—Sacramento (the closest college to me as I write this post). The university sure looks affordable, but the context is missing.

[Screenshot: scorecard_fig2_oct18]

It would sure be helpful if ED already had a mechanism to generate a halfway reasonable set of comparison institutions to help put federal higher education data into context. Hold on just a second…

[Screenshot: scorecard_fig3_oct18]

It turns out that there is already an option within the Integrated Postsecondary Education Data System (IPEDS) to generate a list of peer institutions. ED creates a list of institutions similar to the focal college based on factors such as sector and level, Carnegie classification, enrollment, and geographic region. For Sacramento State, here is part of the list of 32 comparison institutions that is generated. People can certainly quibble with some of the institutions chosen, but they clearly do have some similarities.

[Screenshot: scorecard_fig4_oct18]

I then graphed the net prices of these 32 institutions to help put Sacramento State (in black below) into context. Sacramento State had the fifth-lowest net price among the set of universities, information that is at least somewhat more helpful than looking at a national average across all sectors and levels.

[Screenshot: scorecard_fig5_oct18]
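As a rough illustration of how such a chart can be built (the file and column names here are hypothetical):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Net prices for Sacramento State and its IPEDS comparison group,
# exported from the IPEDS Data Center (file and column names are mine).
peers = pd.read_csv("ipeds_peer_net_prices.csv")  # columns: name, net_price
peers = peers.sort_values("net_price")

# Highlight the focal institution in black, peers in gray.
colors = ["black" if name == "CSU Sacramento" else "gray" for name in peers["name"]]
plt.barh(peers["name"], peers["net_price"], color=colors)
plt.xlabel("Average net price ($)")
plt.title("Net price, Sacramento State vs. IPEDS comparison group")
plt.tight_layout()
plt.show()
```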

My takeaway here: the folks behind the College Scorecard should talk with the IPEDS people to consider bringing back a comparison group average based on a methodology that is already used within the Department of Education.

Beware Dubious College Rankings

Just like the leaves starting to change colors (in spite of the miserable 93-degree heat outside my New Jersey office window) and students returning to school are clear signs of fall, another indicator of the change in seasons is the proliferation of college rankings that get released in late August and early September. The Washington Monthly college rankings that I compile were released the week before Labor Day, and MONEY and The Wall Street Journal have also released their rankings recently. U.S. News & World Report caps off rankings season by unveiling their undergraduate rankings later this month.

People quibble with the methodology of these rankings all the time (I get e-mails by the dozens about the Washington Monthly rankings, and we’re not the 800-pound gorilla of the industry). Yet these rankings are all based on data that can be defended to at least some extent, and the methodologies are generally transparent. Even rankings of party schools, such as this Princeton Review list, have a methodology section that does not seem patently absurd.

But since America loves college rankings—and colleges love touting rankings they do well in and grumbling about the rest of them—a number of dubious college rankings have developed over the years. I was forwarded a press release about one particular set of rankings that immediately set my BS detectors into overdrive. This press release was about a ranking of the top 20 fastest online doctoral programs, and here is a link to the rankings that will not boost their search engine results.

First, let’s take a walk through the methods section. There are three red flags that immediately stand out:

(1) The writing resembles a “word salad” and clearly was never edited by anyone. Reputable rankings sites use copy editors to help methodologists communicate with the public.

(2) College Navigator is a good data source for undergraduates, but does not contain any information on graduate programs (which they are trying to rank) other than the number of graduates.

(3) Reputable rankings will publish their full methodology, even if certain data elements are proprietary and cannot be shared. And trust me—nobody wants to duplicate this set of rankings!

As an example of what these rankings look like, here is a screenshot of how Seton Hall’s online EdD in higher education is presented. Again, let’s walk through the issues.

(1) There are typos galore in their description of the university. This is not a good sign.

(2) Acceptance/retention rate data are for undergraduate students, not for a doctoral program. The only way they could get these data is by contacting programs, which costs money and runs into logistical problems.

(3) Seton Hall is accredited by Middle States, not the Higher Learning Commission. (Thanks to Sam Michalowski for bringing this to my attention via Twitter.)

(4) In a slightly important point, Seton Hall does not offer an online EdD in higher education. Given that I teach in the higher education graduate programs and am featured on the webpage for the in-person EdD program, I’m pretty confident in this statement.

For any higher education professionals who are reading this post, I have a few recommendations. First, be skeptical of any rankings that come from sources that you are not familiar with—and triple that skepticism for any program-level rankings. (Ranking programs is generally much harder due to a lack of available data.) Second, look through the methodology with the help of institutional research staff members and/or higher education faculty members. Does it pass the smell test? And finally, keep in mind that many rankings websites can only turn a profit by getting colleges to highlight their rankings, thus driving clicks to these sites. If colleges were more cautious about posting dubious rankings, it would shut down some of these websites while also avoiding the embarrassment of someone finding out that a college fell for what is essentially a ruse.

Comments on the Proposed Gainful Employment Regulations

The U.S. Department of Education is currently accepting public comments (through September 13) on their proposal to rescind the Obama administration’s gainful employment regulations, which had the goal of tying federal financial aid eligibility to whether graduates of certain vocationally-focused programs had an acceptable debt-to-earnings ratio. My comments are reprinted below.

September 4, 2018

Annmarie Weisman

U.S. Department of Education

400 Maryland Avenue SW, Room 6W245

Washington, DC 20202

Re: Comments on the proposed rescinding of the gainful employment regulations

Dear Annmarie,

My name is Robert Kelchen and I am an assistant professor of higher education at Seton Hall University.[1] As a researcher who studies financial aid, accountability policies, and higher education finance, I have been closely following the Department of Education (ED)’s 2017-18 negotiated rulemaking efforts regarding gainful employment. I write to offer my comments on certain aspects of the proposed rescinding of the regulations.

First, as an academic, I was pleasantly surprised to see ED immediately referring to a research paper in making its justification to change the debt-to-earnings (D/E) threshold. But that quickly turned into dismay as it became clear, after Sandy Baum clarified the findings of the paper in a blog post, that ED had incorrectly interpreted what she and Saul Schwartz wrote a decade ago.[2] I am not wedded to any particular threshold regarding D/E ratios, but I would recommend that ED reach out to researchers before using their findings in order to make sure they are being interpreted correctly.

Second, the point that D/E ratios can be affected by the share of independent students, who have higher loan limits than dependent students, is quite valid. But it can potentially be addressed in one of two ways if D/E ratios are reported in the future. One option is to report D/E ratios separately for independent and dependent students, but that runs the risk of creating more issues of small cell sizes by splitting the sample. Another option is to cap the amount of independent student borrowing credited toward D/E ratios at the same level as dependent students (also addressing the possibility that some dependent students have higher limits due to Parent PLUS loan applications being rejected). This is less useful from a consumer information perspective, but could solve issues regarding high-stakes accountability.

Third, ED’s point about gainful employment using a ten-year amortization period for certificate programs while also offering 20-year repayment plans under REPAYE is well-taken. Switching to a 20-year period would allow some lower-performing programs to pass the D/E test, but it is reasonable given that ED offers a loan repayment plan of that length. (I also view it as highly unlikely that programs would have lost Title IV eligibility under the prior administration’s regulations, based on experience with how few colleges lose eligibility due to high cohort default rates.) In any case, aligning amortization periods to repayment plan periods makes sense.

Fourth, I am highly skeptical that requiring institutions to disclose various outcomes on their own websites would have much value. Net price calculators, which colleges are required to post under the Higher Education Act, are a prime example. Research has shown that many colleges place these calculators on obscure portions of their websites and that the information is often up to five years out of date.[3] Publishing centralized data on outcomes is far preferable to letting colleges do their own thing, which highlights the importance of continuing to publish outcomes information without any pauses in the data.

Fifth, while providing median debt and median earnings data allows analysts to continue to calculate a D/E ratio, there is no harm in continuing to provide such a ratio in the future alongside the raw data. There is no institutional burden for doing so, and it is possible that some prospective students may find that ratio to be more useful than simply looking at median debt. At the very least, ED should conduct several focus groups to make sure that D/E ratios lack value before getting rid of them.

Sixth, while it is absolutely correct to note that people working in certain service industries receive a high portion of their overall compensation in tips, I find it dismaying as a taxpayer that there is no interest in creating incentives for individuals to report their income as required by law. A focus on D/E ratios created a possibility for colleges to encourage their students to follow the law and accurately report their incomes in order to increase earnings relative to debt payments. ED should instead work with IRS and colleges to help protect taxpayers by making sure that everyone pays income taxes as required.

In closing, I do not have a strong preference about whether ED ties Title IV eligibility to program-level D/E thresholds due to my skepticism that any sanctions would actually be enforced.[4] However, I strongly oppose efforts by ED to completely stop publishing program-level student outcomes data until the College Scorecard data are ready (which could be a few years). Continuing to publish data on certificate graduates’ outcomes in the interim is an essential step since all sectors of higher education already have to report certificate outcomes—meaning that keeping these data treats all sectors equally. Publishing outcomes of degree programs would be nice, but not as important since only some colleges would be included.

As I showed with my colleagues in the September/October issue of Washington Monthly magazine, certificate students’ outcomes vary tremendously both within and across CIP codes as well as within different types of higher education institutions.[5] Once the College Scorecard data are ready, this dataset can be phased out. But in the meantime, continuing to publish data meets a key policy goal of fostering market-based accountability in higher education.

[1] All opinions reflected in this commentary are solely my own and do not represent the views of my employer or funders.

[2] Baum, S. (2018, August 22). DeVos misrepresents the evidence in seeking gainful employment deregulation. Urban Wire. https://www.urban.org/urban-wire/devos-misrepresents-evidence-seeking-gainful-employment-deregulation.

[3] Anthony, A. M., Page, L. C., & Seldin, A. (2016). In the right ballpark? Assessing the accuracy of net price calculators. Journal of Student Financial Aid, 46(2), 25-50. Cheng, D. (2012). Adding it all up 2012: Are college net price calculators easy to find, use, and compare? Oakland, CA: The Institute for College Access and Success.

[4] For more reasons why I am skeptical that all-or-nothing accountability systems such as the prior administration’s gainful employment regulations would actually be effective, see my book Higher Education Accountability (Johns Hopkins University Press, 2018).

[5] Washington Monthly (2018, September/October). 2018 best colleges for vocational certificates. https://washingtonmonthly.com/2018-vocational-certificate-programs.
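A postscript for blog readers (this was not part of my submitted comments): to see why the amortization period in my third point matters, here is a minimal sketch of a D/E calculation, with a made-up interest rate and made-up program medians:

```python
def annual_loan_payment(principal: float, annual_rate: float, years: int) -> float:
    """Annualized payment on a standard amortized loan."""
    r = annual_rate / 12              # monthly interest rate
    n = years * 12                    # number of monthly payments
    monthly = principal * r / (1 - (1 + r) ** -n)
    return monthly * 12

# Made-up program medians: $20,000 debt, $28,000 annual earnings, 6% interest.
debt, earnings, rate = 20_000, 28_000, 0.06

for years in (10, 20):
    de_ratio = annual_loan_payment(debt, rate, years) / earnings
    print(f"{years}-year amortization: annual D/E ratio = {de_ratio:.1%}")

# The longer amortization period lowers the annual payment, so identical
# debt and earnings yield a smaller D/E ratio under a 20-year schedule.
```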

A Look at Federal Student Loan Borrowing by Field of Study

The U.S. Department of Education’s Office of Federal Student Aid has slowly been releasing interesting new data on federal student loans over the last few years. In previous posts, I have highlighted data on the types of borrowers who use income-driven repayment plans and average federal student loan balances by state. But one section of Federal Student Aid’s website that gets less attention than the student loan portfolio page (where I pulled data for the previous posts) is the Title IV program volume reports page. For years, this page—which is updated quarterly with current data—has been a useful source of information on how many students at each college receive federal grants and loans.

While pulling the latest data on Pell Grant and student loan volumes by college last week, I noticed three new spreadsheets on the page that contained interesting statistics from the 2015-16 academic year. One spreadsheet shows grant and loan disbursements by age group, while a second spreadsheet shows the same information by state. But in this blog post, I look at a third spreadsheet of student loan disbursements by students’ fields of study. The original spreadsheet contained data on the number of recipients and the amount of loans disbursed, and I added a third column of per-student annual average loans by dividing the two columns. This revised spreadsheet can be downloaded here.
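The added column is just a division; here is a minimal sketch (the file name and column labels are my own, not the ones in the original spreadsheet):

```python
import pandas as pd

# Federal Student Aid spreadsheet of loan volume by field of study.
loans = pd.read_excel("loan_volume_by_field_2015_16.xlsx")
loans["avg_per_borrower"] = loans["dollars_disbursed"] / loans["recipients"]

# Fields with the highest per-student annual borrowing.
print(loans.sort_values("avg_per_borrower", ascending=False).head(10))
```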

Of the 1,310 distinct fields of study included in the spreadsheet, 14 included more than $1 billion of student loans in 2015-16 and together made up over $36 billion of the $94 billion in disbursed loans. Business majors made up 600,000 of the 9.1 million borrowers, taking out $6.1 billion in loans, with nursing majors having the second most borrowers and loans. The majors with the third and fourth largest loan disbursements were law and medicine, fields that enroll almost exclusively graduate students, who can thus borrow up to the full cost of attendance without the need for Parent PLUS loans. As a result, both of these fields took out more loans than general studies majors in spite of having far fewer borrowers. On the other end (not shown here), the ten students majoring in hematology technology/technician took out a combined $28,477 in loans, just ahead of the 14 students in explosive ordinance/bomb disposal programs who hopefully are not blowing up over incurring a total of $61,069 in debt.

Turning next to programs where per-student annual borrowing is the highest, the top ten list is completely dominated by health sciences programs (the first program outside the health sciences two-digit CIP is international business, trade, and tax law at #16). It is pretty remarkable that dentistry students take on $71,451 of student loans each year while advanced general dentistry students (all 51 of them!) borrow even more than that. Given that dental school is four years long and that interest accumulates during school, a reported average debt burden among private dental school graduates of $341,190 seems quite plausible. Toss in income-driven repayment during additional training and it makes sense that at least one of the 101 people with $1 million in federal student loan debt is an orthodontist. On the low end of average debt, the 164 bartending majors ran up an average tab of $2,963 in student loans in 2015-16 while the 144 personal awareness and self-improvement majors are well into their 12-step plan to repay their average of $4,361 in loans.

Trends in Net Prices by Family Income

I continue my look through newly-released data from the National Postsecondary Student Aid Study by turning to trends in the net price of attendance by family income. The net price, which is the full cost of attendance (tuition and fees, books and supplies, room and board, and miscellaneous living expenses) less all grant aid received, is a key college affordability measure as it represents how much money students and their families have to come up with each year to attend college. This net price can be covered by a combination of savings, work income, and student loans, but it is worth noting that student loan limits for many undergraduate students are far below the net price. This means that many families face challenges in paying for college if the net price is a large share of their income.
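In code form, the affordability measure works like this (a minimal sketch with made-up numbers):

```python
def net_price_share(cost_of_attendance: float, grant_aid: float, family_income: float) -> float:
    """Net price (full cost of attendance minus all grant aid)
    as a share of family income."""
    return (cost_of_attendance - grant_aid) / family_income

# Made-up student: $24,000 cost of attendance, $9,000 in grants,
# $40,000 family income -> net price is 37.5% of income.
print(f"{net_price_share(24_000, 9_000, 40_000):.1%}")
```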

The first figure here shows trends (since 2004) in the percentage of family income needed to cover the net price. In 2015-16, 48% of students faced net prices of less than 25% of their family income, 20% were between 26% and 50%, 9% were between 51% and 99%, and 23% of students had net prices greater than their family incomes. The good news is that the distribution of net prices held almost constant since 2011-12 after having taken a jump during the Great Recession.

In the second figure, I break down the percentage of students with net prices higher than their family income by type of college attended. Nearly half of students attending for-profit colleges were in this category, which is not surprising given the high prices charged by many for-profit colleges and their students’ low household incomes. About one in five students attending public and private nonprofit four-year colleges were also in this category. Meanwhile, even 18% of community college students had net prices higher than their family’s income, which is a particular concern as quite a few community colleges do not allow their students to take out federal loans.

A Look at College Students’ Living Arrangements

Those of us in the research and policy worlds generally had a different college experience than most American college students have today. One example of this is where students live during college. I had a very traditional college experience, which began with me as a recent high school graduate moving into my (non-air conditioned) dorm room in Truman State University’s Ryle Hall in the sweltering August heat.[1] Yet that residential experience is not what most students experience, as I show in my fourth blog post using newly-released data from the National Postsecondary Student Aid Study (NPSAS).

As the chart below shows, only 15.6% of all undergraduate students lived on campus in the 2015-16 academic year, a percentage that has been largely consistent since 2000. 56.9% of students lived off campus away from their parent(s), while 27.5% lived off campus with their parents. Aside from a strange blip in 2011-12, these percentages have also been fairly consistent over time.[2]

This low percentage could be explained in part by students living on campus during their first year of college and then moving off campus later on in an effort to either save money or gain more independence. I then focused the next chart on the roughly 38%-40% of students who were first-year students (about 25% at four-year public and private nonprofit colleges and 50% at community colleges and for-profits) to get an idea of whether patterns changed among new students only.[3] Interestingly, the percentages of first-year students living on campus (12.9%) and off campus away from their parent(s) (53.8%) were lower than for all students, which I figured was due to the smaller percentage of four-year students among the first-year student cohort.

I then broke down student living arrangements by institutional type for the 2015-16 academic year, showing numbers both for all students and only for first-year students. The finding that will surprise many is that less than 50% of first-year students at four-year colleges lived on campus, in spite of this being viewed as the traditional college experience. 49% of first-year students at private nonprofit colleges and 36% of first-year students at public four-year colleges lived on campus, while very few community colleges or for-profit colleges even have campus housing. The most common living arrangement in both the community college and for-profit sectors was living off campus away from parent(s), with about 60% of community college and 75% of for-profit students doing this regardless of year in college. About 40% of community college students lived with their parent(s), with private nonprofit students being the least likely to do this (13%).

These data show that the “typical” residential college experience that many of us had was not the typical experience even when we went to college.[4] A more typical college student is the young woman who rang me up as an outlet mall cashier last weekend. She was an education major at the local community college and said that she lived at home to save money. After I introduced myself as a professor, she mentioned that she was hoping to continue living at home and commuting to a nearby four-year college. Although I was unable to get an extra teacher discount from her at the cash register, it was a good reminder that most students never live in a residence hall.

[1] Air conditioning matters a lot in education, folks. For empirical evidence in a K-12 setting, see this great new NBER working paper by Josh Goodman and colleagues.

[2] Fellow data nerds, any idea what happened in 2011-12? I looked at each sector and the pattern is still there (with it being strongest among four-year colleges). For that reason, I am hesitant to place much value on the 2011-12 off campus percentages.

[3] I used the NPSAS variable of year in school for financial aid purposes, as the year in school for credit accumulation purposes could be skewed based on attendance status. However, the general pattern of results held across both definitions.

[4] I’m represented by the 2003-04 NPSAS cohort, where about 46% of first-year students on public university campuses lived in residence halls.

Trends in Zero EFC Receipt

In my third blog post using newly-released data from the 2015-16 National Postsecondary Student Aid Study (NPSAS), I turn my attention away from graduate and professional students and toward undergraduate students. Here, I update a 2015 article that I wrote for the Journal of Student Financial Aid examining trends in the share and types of students who have an expected family contribution of zero—the students who have the least financial ability to pay for college and thus qualify for the maximum Pell Grant.

Using the handy TrendStats tool on the National Center for Education Statistics’s DataLab website, I looked at six NPSAS waves from 1995-96 to 2015-16 and pulled data for all students and then by student and institutional characteristics. The full spreadsheet can be downloaded here (including data by gender and age that I do not cover in this post), and I go through some of the highlights below.

Overall, the percentage of students with a zero EFC has steadily increased every four years since the 1999-2000 academic year in spite of ebbs and flows in the economy. Part of this is likely due to changes in the rules for who automatically qualifies for a zero EFC based on family income and means-tested benefit receipt (currently, the income limit is $25,000 per year; a simplified sketch of this check follows the list below), but increased student diversity in American higher education also plays a role. The percentages in each year are as follows:

1995-96: 18.6%

1999-2000: 17.7%

2003-04: 20.7%

2007-08: 25.4%

2011-12: 37.9%

2015-16: 39.1%
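Here is the simplified sketch of the automatic zero EFC check promised above (the actual federal formula has additional qualifying conditions that I omit):

```python
def automatic_zero_efc(family_income: float, receives_means_tested_benefit: bool) -> bool:
    """Deliberately simplified automatic zero EFC check.

    The statutory formula has additional qualifying paths that
    this sketch omits; the $25,000 limit is the one cited above.
    """
    INCOME_LIMIT = 25_000
    return family_income <= INCOME_LIMIT and receives_means_tested_benefit

# A family earning $22,000 that receives a means-tested benefit qualifies.
print(automatic_zero_efc(22_000, True))
```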

There are stark differences in the percentage of students with a zero EFC by dependency status, and those differences have grown larger over time. Independent students with dependents of their own have always been the most likely to have a zero EFC, especially because childcare obligations often limit work hours (resulting in a lower household income). The percentage of students in this category with a zero EFC remained between 35 and 40 percent through 2007-08 before spiking to 61% in 2011-12 and 67.3% in 2015-16. Dependent students and independent students without dependents had generally similar zero EFC rates in the teens through 2003-04, but then independent students started to qualify for zero EFCs at much higher rates. By 2015-16, the gap had grown to 18 percentage points (42.2% versus 24.2%).

Turning next to institutional type, for-profit colleges (which tend to enroll more independent students with families of their own) have traditionally had higher zero EFC rates than other sectors. 62.2% of students at for-profits had a zero EFC in 2015-16, up from 56.8% in the previous NPSAS wave and around 40% before the Great Recession. In the 1990s, community colleges, public four-year colleges, and private nonprofit four-year colleges all had zero EFC rates of around 15%. Community colleges’ rates passed 40% in 2011-12, while four-year public and nonprofit colleges’ rates exceeded 30% in 2015-16. Notably, the percentage of zero EFC students at four-year private nonprofit colleges jumped from 25.7% to 30.5% in this NPSAS wave, a much larger increase than among public four-year colleges.

Readers of my last two blog posts should not be terribly surprised to see that African-American students have been the most likely to have a zero EFC across the last six NPSAS administrations, although there was a slight decrease between 2011-12 and 2015-16 (60.0% to 58.2%). American Indian/Alaska Native students had the next highest zero EFC percentage (51.2%), followed by Hispanic/Latino students (47.6%), Asian students (39.2%), and white students (29.8%). Multiracial students saw an increase in zero EFC rates from 39.1% to 41.8%, but this group is not shown in the chart due to changes in how the Department of Education has classified race and ethnicity over time.

Finally, I examine zero EFC receipt trends by parental education—beginning in the 1999-2000 academic year due to changes in the survey question following the 1995-96 NPSAS. There is a clear relationship between parental education and zero EFC rates, with more than half of all students whose parents never attended college having a zero EFC in 2015-16 and progressively lower rates for students with highly-educated parents. However, two trends stand out among non-first-generation students. The largest increase in zero EFC rates by parental education in the last two NPSAS waves was among families with some college experience or an associate degree (rising from 37.9% to 42.6%). Meanwhile, even among students who had at least one parent with a graduate degree, 27.5% still qualified for a zero EFC.

Readers, if there are any pieces of the new NPSAS data that you would like me to examine in a future blog post, leave me a note in the comments section or send me a tweet. I’m happy to dig into other pieces of the dataset!

What Explains Racial Gaps in Large Graduate Student Debt Burdens?

In my previous blog post, I used brand-new data from the 2015-16 National Postsecondary Student Aid Study (NPSAS) to look at trends in debt burdens among graduate students. The data point that quickly got the most attention was the growth in the percentage of African-American graduate students with at least $100,000 in debt from their undergraduate and graduate programs combined, with 30% of black students having six-figure debt burdens in 2015-16 compared to just 12% of white borrowers. This means that roughly 150,000 black borrowers had at least $100,000 in debt, more than half the number of white borrowers with the same debt level (250,000), despite white graduate student enrollment being four times as large as black graduate student enrollment.

My next step is to examine whether the black-white borrowing gap could be explained by other demographic and educational factors. I ran two logistic regressions in PowerStats, with the outcome of interest being $100,000 or more in total educational debt and the results presented as odds ratios. (To interpret odds ratios, note that they represent percent changes in the odds relative to 1: an odds ratio of 0.5 means the odds of the outcome are 50% lower, and 1.5 means they are 50% higher.) The first regression below only controls for race/ethnicity.

Table 1: Partial regression predicting likelihood of $100,000 or more in debt among graduate students.

Characteristic                             Odds ratio   95% CI         p-value
Race/ethnicity (reference: white)
  Black or African American                2.50         (1.91, 3.26)   0.000
  Hispanic or Latino                       1.12         (0.89, 1.41)   0.347
  Asian                                    0.62         (0.46, 0.83)   0.002
  American Indian or Alaska Native         1.31         (0.49, 3.50)   0.595
  Native Hawaiian/other Pacific Islander   1.35         (0.38, 4.74)   0.640
  More than one race                       1.73         (1.08, 2.77)   0.023
Source: National Postsecondary Student Aid Study 2015-16.

This shows that black students had 150% higher odds of six-figure debt than white students (p<.001), while Asian students had 38% lower odds (p<.01). Hispanic students had a slightly higher point estimate than white students, but it was not statistically significant.
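For readers who want to run something similar outside of PowerStats, here is a minimal sketch of an analogous logit (the variable names are hypothetical, and a real analysis would need to account for NPSAS's survey weights and complex design):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical student-level extract: a 0/1 indicator for $100k+ total
# debt plus race/ethnicity dummies (white is the omitted reference group).
df = pd.read_csv("grad_students.csv")
predictors = ["black", "hispanic", "asian", "aian", "nhpi", "multiracial"]
X = sm.add_constant(df[predictors])

model = sm.Logit(df["debt_100k_plus"], X).fit()

# Exponentiated coefficients are the odds ratios reported in the tables;
# e.g., exp(beta) near 2.5 means roughly 150% higher odds than the reference.
odds_ratios = np.exp(model.params)
conf_ints = np.exp(model.conf_int())
print(pd.concat([odds_ratios, conf_ints], axis=1))
```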

I then controlled for a number of factors that could be associated with high graduate student debt amounts, including other demographic characteristics (gender, age, and marital status), level of study (master’s or doctoral), institution type, and field of study. The regression results are shown below.

Table 2: Full regression predicting likelihood of $100,000 or more in debt among graduate students.

Characteristic                             Odds ratio   95% CI         p-value
Race/ethnicity (reference: white)
  Black or African American                2.30         (1.79, 2.97)   0.000
  Hispanic or Latino                       1.03         (0.80, 1.33)   0.828
  Asian                                    0.69         (0.48, 0.98)   0.036
  American Indian or Alaska Native         0.97         (0.25, 3.77)   0.964
  Native Hawaiian/other Pacific Islander   1.61         (0.44, 5.84)   0.468
  More than one race                       1.82         (1.12, 2.95)   0.015
Female                                     1.00         (0.84, 1.19)   0.990
Age as of 12/31/2015                       1.04         (1.03, 1.04)   0.000
Marital status (reference: single)
  Married                                  0.68         (0.55, 0.85)   0.001
  Separated                                0.94         (0.51, 1.73)   0.840
Graduate institution (reference: public)
  Private nonprofit                        1.64         (1.36, 1.98)   0.000
  For-profit                               2.15         (1.64, 2.82)   0.000
Graduate degree program (reference: master’s)
  Research doctorate                       3.00         (2.38, 3.78)   0.000
  Professional doctorate                   7.07         (5.61, 8.90)   0.000
Field of study (reference: education)
  Humanities                               0.99         (0.66, 1.48)   0.943
  Social/behavioral sciences               1.85         (1.38, 2.48)   0.000
  Life sciences                            1.71         (1.14, 2.56)   0.009
  Math/Engineering/Computer science        0.34         (0.20, 0.57)   0.000
  Business/management                      0.91         (0.64, 1.28)   0.577
  Health                                   1.93         (1.47, 2.53)   0.000
  Law                                      1.38         (0.90, 2.11)   0.140
  Others                                   1.26         (0.89, 1.79)   0.186
Source: National Postsecondary Student Aid Study 2015-16.

Notably, the coefficient for being African-American (relative to white) decreased slightly in the regression with additional control variables. Black students had 130% higher odds of a six-figure debt burden than white students, down from 150% in the previous regression. Not surprisingly, doctoral students, students at private nonprofit and for-profit colleges, and students studying health, life sciences, and social/behavioral sciences were more likely to have $100,000 in debt than master’s students, public university students, and education students, respectively. Meanwhile, STEM students were far less likely to have $100,000 in debt than education students, which is not surprising given the large number of assistantships available in STEM fields.

This regression strongly suggests that the black/white gap in large student debt burdens cannot be explained by other demographic characteristics or individuals’ fields of study. Differences in financial resources (such as the large wealth gap between black and white families) are likely to blame, but these resources are not well measured in the NPSAS. The best proxy is a student’s expected family contribution (EFC), which for independent graduate students only measures the student’s own resources. Including EFC as a variable in the model brings the black/white gap down to 120% higher odds (not shown here for the sake of brevity), but a good measure of wealth would likely shrink the gap by a much larger amount.