Blog (Kelchen on Education)

How to Provide Context for College Scorecard Data

The U.S. Department of Education’s revamped College Scorecard website celebrated its third anniversary last month with another update to the underlying dataset. It is good to see this important consumer information tool continue to be updated, given the role that Scorecard data can play in market-based accountability (a key goal of many conservatives). But the Scorecard’s change log—a great resource for those using the dataset—revealed a few changes to the public-facing site. (Thanks to the indefatigable Clare McCann at New America for pointing this out in a blog post.)

[Screenshot: College Scorecard change log, October 2018 update]

So to put the above screenshot into plain English, the Scorecard used to have indicators for how a college’s performance on outcomes such as net price, graduation rate, and post-college salary compared to the median institution—and now it doesn’t. In many ways, the Department of Education’s decision to stop comparing colleges with different levels of selectivity and institutional resources to each other makes all the sense in the world. But it would be helpful to provide website users with a general idea of how the college performs relative to more similar institutions (without requiring users to enter a list of comparison colleges).

For example, here is what the Scorecard data now look like for Cal State—Sacramento (the closest college to me as I write this post). The university sure looks affordable, but the context is missing.

[Screenshot: College Scorecard page for Cal State-Sacramento]

It would sure be helpful if ED already had a mechanism to generate a halfway reasonable set of comparison institutions to help put federal higher education data into context. Hold on just a second…

[Screenshot: IPEDS option to generate a list of comparison institutions]

It turns out that there is already an option within the Integrated Postsecondary Education Data System (IPEDS) to generate a list of peer institutions. ED creates a list of institutions similar to the focal college based on factors such as sector and level, Carnegie classification, enrollment, and geographic region. For Sacramento State, here is part of the list of 32 comparison institutions that is generated. People can certainly quibble with some of the institutions chosen, but they clearly do have some similarities.

[Screenshot: partial list of the 32 IPEDS comparison institutions for Sacramento State]

I then graphed the net prices of these 32 institutions to help put Sacramento State (in black below) into context. Sacramento State had the fifth-lowest net price among this set of universities, information that is at least somewhat more helpful than looking at a national average across all sectors and levels.
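For readers who want to replicate this kind of comparison on their own, here is a minimal Python sketch of the ranking step. It assumes the comparison group's net prices have already been exported from the IPEDS Data Center to a CSV file; the file name and column names below are placeholders rather than the actual IPEDS export format.

```python
# Minimal sketch: rank a focal college's net price within its IPEDS comparison group.
# The CSV name and column names ("institution", "net_price") are hypothetical placeholders.
import pandas as pd

peers = pd.read_csv("ipeds_comparison_group.csv")  # 32 comparison institutions plus the focal college
peers = peers.sort_values("net_price").reset_index(drop=True)

focal = "California State University-Sacramento"
rank = int(peers.index[peers["institution"] == focal][0]) + 1  # 1 = lowest net price

print(f"{focal} has the #{rank} lowest net price of {len(peers)} institutions.")
print(f"Comparison group median net price: ${peers['net_price'].median():,.0f}")
```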

[Figure: net prices of Sacramento State and its 32 IPEDS comparison institutions]

My takeaway here: the folks behind the College Scorecard should talk with the IPEDS people to consider bringing back a comparison group average based on a methodology that is already used within the Department of Education.

How to Improve Living Cost Estimates for Students

I am thrilled to be presenting at the Real College conference in Philadelphia this weekend on potential ways to improve the living allowance estimates that students receive as a part of their cost of attendance. This expands on my prior research (with Sara Goldrick-Rab of Temple and Braden Hosch of Stony Brook) documenting the incredible amount of variation in living allowances within individual counties.

Since the conference is completely booked and I have heard from several people who were interested in seeing a copy of my slides, I am happy to share them here. Stay tuned for a new working paper this fall that dives deeper into these topics!

Beware Dubious College Rankings

Just like the leaves starting to change colors (in spite of the miserable 93-degree heat outside my New Jersey office window) and students returning to school are clear signs of fall, another indicator of the change in seasons is the proliferation of college rankings that get released in late August and early September. The Washington Monthly college rankings that I compile were released the week before Labor Day, and MONEY and The Wall Street Journal have also released their rankings recently. U.S. News & World Report caps off rankings season by unveiling their undergraduate rankings later this month.

People quibble with the methodology of these rankings all the time (I get e-mails by the dozens about the Washington Monthly rankings, and we’re not the 800-pound gorilla of the industry). Yet these rankings are at least based on data that can be defended to some extent, and their methodologies are generally transparent. Even rankings of party schools, such as this Princeton Review list, have a methodology section that does not seem patently absurd.

But since America loves college rankings—and colleges love touting rankings they do well in and grumbling about the rest of them—a number of dubious college rankings have developed over the years. I was forwarded a press release about one particular set of rankings that immediately set my BS detectors into overdrive. This press release was about a ranking of the top 20 fastest online doctoral programs, and here is a link to the rankings that will not boost their search engine results.

First, let’s take a walk through the methods section. There are three red flags that immediately stand out:

(1) The writing resembles a “word salad” and clearly was never edited by anyone. Reputable rankings sites use copy editors to help methodologists communicate with the public.

(2) College Navigator is a good data source for undergraduates, but does not contain any information on graduate programs (which they are trying to rank) other than the number of graduates.

(3) Reputable rankings will publish their full methodology, even if certain data elements are proprietary and cannot be shared. And trust me—nobody wants to duplicate this set of rankings!

As an example of what these rankings look like, here is a screenshot of how Seton Hall’s online EdD in higher education is presented. Again, let’s walk through the issues.

(1) There are typos galore in their description of the university. This is not a good sign.

(2) Acceptance/retention rate data are for undergraduate students, not for a doctoral program. The only way they could get these data is by contacting programs directly, which costs money and runs into logistical problems.

(3) Seton Hall is accredited by Middle States, not the Higher Learning Commission. (Thanks to Sam Michalowski for bringing this to my attention via Twitter.)

(4) In a slightly important point, Seton Hall does not offer an online EdD in higher education. Given that I teach in the higher education graduate programs and am featured on the webpage for the in-person EdD program, I’m pretty confident in this statement.

For any higher education professionals who are reading this post, I have a few recommendations. First, be skeptical of any rankings that come from sources that you are not familiar with—and triple that skepticism for any program-level rankings. (Ranking programs is generally much harder due to a lack of available data.) Second, look through the methodology with the help of institutional research staff members and/or higher education faculty members. Does it pass the smell test? And finally, keep in mind that many rankings websites can only turn a profit by getting colleges to highlight their rankings, thus driving clicks back to these sites. If colleges were more cautious about posting dubious rankings, it would shut down some of these websites while also avoiding the embarrassment of someone finding out that a college fell for what is essentially a ruse.

Comments on the Proposed Gainful Employment Regulations

The U.S. Department of Education is currently accepting public comments (through September 13) on their proposal to rescind the Obama administration’s gainful employment regulations, which had the goal of tying federal financial aid eligibility to whether graduates of certain vocationally-focused programs had an acceptable debt-to-earnings ratio. My comments are reprinted below.

September 4, 2018

Annmarie Weisman

U.S. Department of Education

400 Maryland Avenue SW, Room 6W245

Washington, DC 20202

Re: Comments on the proposed rescinding of the gainful employment regulations

Dear Annmarie,

My name is Robert Kelchen and I am an assistant professor of higher education at Seton Hall University.[1] As a researcher who studies financial aid, accountability policies, and higher education finance, I have been closely following the Department of Education (ED)’s 2017-18 negotiated rulemaking efforts regarding gainful employment. I write to offer my comments on certain aspects of the proposed rescinding of the regulations.

First, as an academic, I was pleasantly surprised to see ED immediately referring to a research paper in making its justification to change the debt-to-earnings (D/E) threshold. But that quickly turned into dismay as it became clear that ED had incorrectly interpreted what Sandy Baum and Saul Schwartz wrote a decade ago after Baum clarified the findings of the paper in a blog post.[2] I am not wedded to any particular threshold regarding D/E ratios, but I would recommend that ED reach out to researchers before using their findings in order to make sure they are being interpreted correctly.

Second, the point that D/E ratios can be affected by the share of adult students, who have higher loan limits than dependent students, is quite valid. But it can potentially be addressed in one of two ways if D/E ratios are reported in the future. One option is to report D/E ratios separately for independent and dependent students, but that runs the risk of creating more small-cell-size issues by splitting the sample. Another option is to cap the amount of independent student borrowing credited toward D/E ratios at the same level as dependent students (also addressing the possibility that some dependent students have higher limits due to Parent PLUS loan applications being rejected). This is less useful from a consumer information perspective, but could solve issues regarding high-stakes accountability.
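To illustrate the capping option, here is a minimal sketch of the calculation; the dependent-student aggregate limit and the sample borrower records are illustrative assumptions rather than ED's actual methodology.

```python
# Illustrative sketch: credit independent students' borrowing toward the D/E calculation
# only up to the dependent-student aggregate limit. All figures below are assumptions.
from statistics import median

DEPENDENT_AGGREGATE_LIMIT = 31_000  # assumed dependent undergraduate aggregate loan limit

borrowers = [
    {"debt": 24_000, "independent": False},
    {"debt": 41_000, "independent": True},   # credited as 31,000 under the cap
    {"debt": 28_500, "independent": True},
]

def credited_debt(borrower):
    """Cap independent students' borrowing at the dependent aggregate limit."""
    if borrower["independent"]:
        return min(borrower["debt"], DEPENDENT_AGGREGATE_LIMIT)
    return borrower["debt"]

capped = [credited_debt(b) for b in borrowers]
print(f"Median debt credited toward the D/E ratio: ${median(capped):,.0f}")
```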

Third, ED’s point about gainful employment using a ten-year amortization period for certificate programs while also offering 20-year repayment plans under REPAYE is well-taken. Switching to a 20-year period would allow some lower-performing programs to pass the D/E test, but it is reasonable given that ED offers a loan repayment plan of that length. (I also view it as highly unlikely that programs would actually have lost Title IV eligibility under the prior administration’s regulations, based on the experience of very few colleges losing eligibility due to high cohort default rates.) In any case, aligning amortization periods to repayment plan periods makes sense.
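To make the amortization point concrete, here is a small sketch of how the repayment period changes a D/E ratio. The median debt, median earnings, and interest rate are hypothetical, and the 8 percent figure reflects the annual-earnings passing threshold under the prior rule.

```python
# Sketch: how the amortization period changes an annual debt-to-earnings (D/E) ratio.
# Median debt, median earnings, and the interest rate are hypothetical values.
def annual_payment(principal, annual_rate, years):
    """Standard amortized loan payment, computed monthly and summed over a year."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n) * 12

median_debt = 15_000      # hypothetical certificate-program median debt
median_earnings = 24_000  # hypothetical median annual earnings
rate = 0.06               # assumed interest rate

for years in (10, 20):
    de = annual_payment(median_debt, rate, years) / median_earnings
    status = "passes" if de <= 0.08 else "does not pass"
    print(f"{years}-year amortization: D/E = {de:.1%} ({status} the 8% annual earnings threshold)")
```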

Fourth, I am highly skeptical that requiring institutions to disclose various outcomes on their own websites would have much value. Net price calculators, which colleges are required to post under the Higher Education Act, are a prime example. Research has shown that many colleges place these calculators on obscure portions of their websites and that the underlying information is often up to five years out of date.[3] Continuing to publish centralized data on outcomes is far preferable to letting colleges do their own thing, and it highlights the importance of continuing to publish outcomes information without any pauses in the data.

Fifth, while providing median debt and median earnings data allows analysts to continue to calculate a D/E ratio, there is no harm in continuing to provide such a ratio in the future alongside the raw data. There is no institutional burden for doing so, and it is possible that some prospective students may find that ratio to be more useful than simply looking at median debt. At the very least, ED should conduct several focus groups to make sure that D/E ratios lack value before getting rid of them.

Sixth, while it is absolutely correct to note that people working in certain service industries receive a high portion of their overall compensation in tips, I find it dismaying as a taxpayer that there is no interest in creating incentives for individuals to report their income as required by law. A focus on D/E ratios created a possibility for colleges to encourage their students to follow the law and accurately report their incomes in order to increase earnings relative to debt payments. ED should instead work with the IRS and colleges to help protect taxpayers by making sure that everyone pays income taxes as required.

In closing, I do not have a strong preference about whether ED ties Title IV eligibility to program-level D/E thresholds due to my skepticism that any sanctions would actually be enforced.[4] However, I strongly oppose efforts by ED to completely stop publishing program-level student outcomes data until the College Scorecard data are ready (which could be a few years). Continuing to publish data on certificate graduates’ outcomes in the interim is an essential step since all sectors of higher education already have to report certificate outcomes—meaning that keeping these data treats all sectors equally. Publishing outcomes of degree programs would be nice, but it is not as important since only some colleges would be included.

As I showed with my colleagues in the September/October issue of Washington Monthly magazine, certificate students’ outcomes vary tremendously both within and across CIP codes as well as within different types of higher education institutions.[5] Once the College Scorecard data are ready, this dataset can be phased out. But in the meantime, continuing to publish data meets a key policy goal of fostering market-based accountability in higher education.

[1] All opinions reflected in this commentary are solely my own and do not represent the views of my employer or funders.

[2] Baum, S. (2018, August 22). DeVos misinterprets the evidence in seeking gainful employment deregulation. Urban Wire. https://www.urban.org/urban-wire/devos-misrepresents-evidence-seeking-gainful-employment-deregulation.

[3] Anthony, A. M., Page, L. C., & Seldin, A. (2016). In the right ballpark? Assessing the accuracy of net price calculators. Journal of Student Financial Aid, 46(2), 25-50. Cheng, D. (2012). Adding it all up 2012: Are college net price calculators easy to find, use, and compare? Oakland, CA: The Institute for College Access and Success.

[4] For more reasons why I am skeptical that all-or-nothing accountability systems such as the prior administration’s gainful employment regulations would actually be effective, see my book Higher Education Accountability (Johns Hopkins University Press, 2018).

[5] Washington Monthly (2018, September/October). 2018 best colleges for vocational certificates. https://washingtonmonthly.com/2018-vocational-certificate-programs.

Comments on the Proposed Borrower Defense to Repayment Regulations

The U.S. Department of Education is currently accepting public comments (through August 30) on their proposed borrower defense to repayment regulations, which affect students’ ability to get loans forgiven in the case of closed schools or colleges that misrepresented important facts. Since these regulations also affect colleges and taxpayers, I weighed in to provide a researcher’s perspective. My comments are reprinted below.

August 21, 2018

Jean-Didier Gaina

U.S. Department of Education

400 Maryland Avenue SW, Mail Stop 294-20

Washington, DC 20202

Re: Comments on the proposed borrower defense to repayment regulations

Dear Jean-Didier,

My name is Robert Kelchen and I am an assistant professor of higher education at Seton Hall University.[1] As a researcher who studies financial aid, accountability policies, and higher education finance, I have been closely following the Department of Education (ED)’s 2017-18 negotiated rulemaking efforts regarding borrower defense to repayment and financial responsibility scores. Since there were no academic researchers included in the negotiated rulemaking committee (something that should be reconsidered in the future!), I write to offer my comments on certain segments of the proposed regulations.

My first comment is on the question of whether ED should accept so-called affirmative claims from borrowers who are not yet in default and seek to make a claim against a college instead of only accepting defensive claims from borrowers who have already defaulted. For colleges that are still open, this is a clear decision in my view: affirmative claims should be allowed because ED can then attempt to recoup the money from the college instead of effectively requiring the taxpayer to subsidize at least some amount of loan forgiveness. However, the decision is somewhat more complicated in the case of a closed school, where taxpayers are more likely to foot the bill. My sense is that affirmative claims should probably still be allowed given the relationship between defaulting on student loans and adverse outcomes such as reduced credit scores.[2]

To protect taxpayers and students alike, more needs to be done to strengthen federal requirements for colleges that are at risk of closure. If a college closes suddenly, students may be eligible to receive closed school discharges at taxpayer expense. Yet my research and analyses show that ED’s current rules for determining a college’s financial health (the financial responsibility score) are only weakly related to what they seek to measure. For example, several private nonprofit colleges that closed in 2016 had passing financial responsibility scores in 2014-15, while many colleges have continued to operate with failing scores for years.[3] I also found that colleges did not change their revenue or expenditure patterns in any meaningful way after receiving a failing financial responsibility score, suggesting that colleges are not taking the current measure seriously.[4]

I am heartened to see that ED is continuing to work on updating the financial responsibility score metric to better reflect a college’s real-time risk of closing through another negotiated rulemaking session. However, I am concerned that students and taxpayers could suffer from continuing with the status quo during a potential six-year phase-in period, so anything to shorten that period would be beneficial. I again urge ED to include at least one academic researcher on the negotiated rulemaking panel to complement institutional officials and accountants, as the research community studies how colleges respond to the types of policy changes that the rest of the committee may propose.

Finally, I am concerned about ED’s vague promise to encourage colleges to offer teach-out plans instead of suddenly closing, as the regulations provide no incentives for colleges on the brink of financial collapse to work with accreditors and states to develop a teach-out plan. It would be far better for ED to require colleges to be proactive and develop teach-out plans at the first sign of financial difficulties, reducing the cost to taxpayers by minimizing the likelihood of closed school discharges. These plans could then be approved by an accreditor and/or state agency as a part of the regular review process. Colleges would likely contend that having to develop a pre-emptive teach-out plan may affect their ability to recruit and retain students, but tying this requirement to an existing benchmark of federal concern (such as a low financial responsibility score or being placed on HCM2) should alleviate that issue.

Thank you for the opportunity to provide comments on these proposed regulations and I am happy to respond to any questions that ED staffers may have.

[1] All opinions reflected in this commentary are solely my own and do not represent the views of my employer.

[2] Blagg, K. (2018). Underwater on student debt: Understanding consumer credit and student loan default. Washington, DC: Urban Institute.

[3] Kelchen, R. (2017, March 8). Do financial responsibility scores predict college closures? https://robertkelchen.com/2017/03/08/do-financial-responsibility-scores-predict-college-closures/.

[4] Kelchen, R. (forthcoming). Do financial responsibility scores affect institutional behaviors? Journal of Education Finance.

Some Thoughts on the Academic Peer Review Process

Like most research-intensive faculty members, I receive regular requests to review papers for legitimate scholarly journals. (My spam e-mail folder is also full of requests to join editorial boards for phony journals, but that’s another topic for another day.) Earlier this week, I was working on reviewing a paper submitted to The Review of Higher Education, one of the three main higher education field journals in the United States (Journal of Higher Education and Research in Higher Education are the other two). I went to check one of the submission guidelines on the journal’s website and was surprised to see that the journal is temporarily closed for new manuscript submissions to help clear a backlog of submissions.

After I shared news of the journal’s decision on Twitter, I received a response from one of the associate editors of the journal. Her statement astonished me:

This sets off all kinds of alarms. How can a well-respected journal struggle so much to get qualified reviewers, pushing the length of the initial peer review process to six months or beyond? As someone who both submits to and reviews for a wide range of journals, here are some of my thoughts on how to potentially streamline the academic peer review process.

(1) Editors should ‘desk reject’ a higher percentage of submissions. Since it can be difficult to find qualified reviewers and most respectable journals accept less than 20% of all submissions, there is no reason to send all papers out to multiple external reviewers. If a member of the editorial board glances through the paper and can easily determine that it has a very low chance of publication, the paper should be immediately ‘desk rejected’ and quickly returned to the author with a brief note about why it was not sent out for full review. Journals in some fields, such as economics, already do this and it is sorely needed in education to help manage workloads. It is also humane to authors, as they are not waiting several months to hear back on a paper that will end up being rejected. I have been desk rejected several times during my career, and it allowed me to keep moving papers through the publication pipeline as a tenure-track faculty member.

(2) Journals should consider rejecting submissions from serial free riders. The typical academic paper is reviewed by two or three external scholars in the peer review process, with more people potentially getting involved if the paper goes through multiple revise and resubmit rounds. This means that for every sole-authored paper that someone submits, that person should be prepared to review two or three other papers in order to maintain balance. But in practice, since journals prefer reviewers with doctoral degrees and graduate students need to submit papers in order to be eligible for academic jobs, those of us with doctoral degrees should probably plan on reviewing 3-4 papers for each sole-authored paper we submit. (Divide that number accordingly based on the number of co-authors on your submissions.) It’s okay to decline review invitations if the paper is outside your scope of knowledge, but otherwise scholars need to accept most invitations. Declining because we are too busy doing our own research—and thus further jamming the publication pipeline—is not acceptable, particularly for tenured faculty members. If journals publicly commit to rejecting submissions from serial free riders, there may be fewer difficulties finding reviewers.

(3) There needs to be some incentive for reviewers to submit in a timely manner. Right now, journals can only beg and plead to get reviewers to submit their reviews within a reasonable time period (usually 3-6 weeks). But in my conversations with journal editors, reviewers often fail to meet that timeline. In an ideal world, journal reviewers would actually get paid for their work, just as many foundations and scholarly presses pay a few hundred dollars for thorough reviews. Absent that incentive, it may be worth establishing some sort of priority structure that rewards those who review quickly with quick reviews of their own submissions.

(4) In some cases, there needs to be better vetting of reviews before they are sent to authors. Most reputable academic journals have relatively few problems with this, as this is the job of the editorial board. Reviews generally come with a letter from the editor explaining discrepancies among reviewers and noting which comments can potentially be ignored. But the peer review process at academic conferences has more quality control issues, potentially due to the large number of reviews that are requested (being asked to review ten 2,000- to 2,500-word proposals is not uncommon). It often seems like reviewers rush through these proposals and lack knowledge of the subject matter. Limiting the number of submissions that any individual can make and carefully vetting conference reviewers could help with this concern.

By helping to restrict the number of items that go out for peer review and providing incentives for people to fulfill their professional reviewing obligations, it should be possible to bring the peer review timeline down to a more humane 2-3 months rather than the 4-8 months that seems to be the norm in much of education. This is crucial for junior scholars trying to meet tenure requirements, but it will also help get peer-reviewed research out to the public and policymakers more quickly. Journals such as AERA Open, Educational Evaluation and Policy Analysis, and Economics of Education Review are models in quick and thorough peer review processes that the rest of the field can emulate.

Why the Democrats’ New ‘Debt-Free’ College Plan Won’t Really Make College Debt-Free

This article was originally published at The Conversation and is co-authored with Dennis Kramer II of the University of Florida.

Rising student loan debt and concerns about college affordability got considerable attention from Democrats in the 2016 presidential campaign. Those issues are bound to get renewed attention since House Democrats recently introduced the Aim Higher Act – an effort to update the Higher Education Act, the federal law that governs federal higher education programs.

The bill promises “debt-free” college to students. As scholars who focus on higher education finance and student aid, we believe the bill actually falls well short of that promise.

What ‘free’ really means

In its current form, the bill guarantees two years of tuition-free community college to students. However, the Democratic bill does not address the fact that tuition is only about one-fifth of the total cost of attending community college. Rent, food, books and transportation make up the rest of the cost of attendance and are not covered by this plan.

The “debt-free” label is problematic for other reasons. For instance, the maximum Pell Grant – $6,095 for the 2018-2019 school year – already covers community college tuition in nearly all states. This means the neediest students likely already have access to federal grant funds to cover tuition. Although the bill would increase Pell awards by $500 each year and reduce debt somewhat for the neediest students, many needy students will still need to take out loans to attend college.

States may not cooperate

Another reason the Democrats’ “debt-free” college plan does not live up to its name is that its tuition-free provision requires states to maintain their funding for public colleges in order to qualify for more federal funds under the proposed bill. This approach is similar to the state-federal partnership that was part of the recent Medicaid expansion, which led 16 conservative states to decline to expand Medicaid. Many conservative-leaning states might push back against the Aim Higher Act’s tuition-free provision because it restricts states’ ability to cut higher education spending.

Slim chance of becoming law

It is unlikely that either the Republican-sponsored PROSPER Act or the Aim Higher Act will become law in the near future, given the lack of comprehensive support within the Republican Party and Democrats’ minority status in Congress. But there are a few parts of both bills that could get bipartisan support, such as simplifying the process for applying for federal financial aid, creating better data systems to help track students’ outcomes, and allowing Pell Grants to be used for shorter-term training programs. Although neither the Republican nor the Democratic bill appears likely to pass, expect both parties to use their proposals in the upcoming midterm elections.

New Experimental Evidence on the Effectiveness of Need-Based Financial Aid

My first experience doing higher education research began in the spring of 2008, when I (then a graduate student in economics) responded to an e-mail from an education professor at the University of Wisconsin who was looking for students to help her with an interesting new study. Sara Goldrick-Rab was co-leading an evaluation of the Wisconsin Scholars Grant (WSG)—a rare case of need-based financial aid being given to students from low-income families via random assignment. Over the past decade, the Wisconsin Hope Lab team published articles on the effectiveness of the WSG in improving on-time graduation rates among university students and in changing students’ work patterns.

A decade later, we were able to conduct a follow-up study to examine the outcomes of treatment and control group students who started college between 2008 and 2011. This sort of long-term analysis of financial aid programs has rarely been conducted—and the two best existing evaluations (of the Cal Grant and the West Virginia PROMISE program) are on programs with substantial merit-based components. Eligibility for the WSG was solely based on financial need (conditional on being a first-time, full-time student), providing the first long-term experimental evaluation of a need-based program.

Along with longtime collaborators from our days in Wisconsin (Drew Anderson of the RAND Corporation, Katharine Broton of the University of Iowa, and Sara Goldrick-Rab of Temple University), I am pleased to announce the release of our new working paper on the long-term effects of the WSG to kick off the opening of the new Hope Center for College, Community and Justice at Temple University. We found some evidence that students who began at four-year colleges and were assigned to receive the WSG had improved academic outcomes. The positive impacts on degree completion for the initial cohort of students in 2008 did fade out over a period of up to nine years, but the grant still helped students complete their degrees more quickly than the comparison group. Additionally, there was a positive impact on six-year graduation rates in later cohorts, with treatment students in the 2011 cohort being 5.4 percentage points more likely to graduate than the control group.
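For readers less familiar with experimental designs, the calculation behind a figure like that 5.4 percentage point estimate is simply a comparison of graduation rates between randomly assigned groups. The group sizes and rates in the sketch below are hypothetical and only meant to illustrate the mechanics, not to reproduce the study's data.

```python
# Sketch of the intent-to-treat comparison behind a randomized aid evaluation.
# Group sizes and graduation rates below are hypothetical illustrations.
from math import sqrt

n_treat, grad_treat = 600, 0.534   # hypothetical treatment group size and graduation rate
n_ctrl, grad_ctrl = 600, 0.480     # hypothetical control group size and graduation rate

effect = grad_treat - grad_ctrl    # with random assignment, this difference estimates the causal effect
se = sqrt(grad_treat * (1 - grad_treat) / n_treat + grad_ctrl * (1 - grad_ctrl) / n_ctrl)

print(f"Estimated effect on graduation: {effect:+.1%} (standard error about {se:.1%})")
```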

The grant generated clear increases in the percentage of students who both declared and completed STEM majors, even though the grant made no mention whatsoever of STEM and had no major requirements. A second new paper by Katharine Broton and David Monaghan of Shippensburg University found that university students assigned to treatment were eight percentage points more likely to declare a STEM major, while our paper estimated a 3.6 percentage point increase in the likelihood of graduating with a STEM major. This strongly suggests that additional need-based financial aid can free students to pursue a wider range of majors, including ones that may require more expensive textbooks and additional hours spent in laboratory sessions.

However, the WSG did not generate across-the-board positive impacts. Impacts on persistence, degree completion, and transfer for students who began at two-year colleges were generally null, which could be due to the smaller size of the grant ($1,800 per year at two-year colleges versus $3,500 at four-year colleges) or the rather unusual population of first-time, full-time students attending mainly transfer-focused two-year colleges. We also found no effects of the grant on graduate school enrollment among students who started at four-year colleges, although this trend is worth re-examining in the future as people may choose to enroll after several years of work experience.

It has been an absolute delight to reunite with my longstanding group of colleagues to conduct this long-term evaluation of the WSG. We welcome any comments on our working paper and look forward to continuing our work in this area through the Hope Center.

A Look at Federal Student Loan Borrowing by Field of Study

The U.S. Department of Education’s Office of Federal Student Aid has slowly been releasing interesting new data on federal student loans over the last few years. In previous posts, I have highlighted data on the types of borrowers who use income-driven repayment plans and average federal student loan balances by state. But one section of Federal Student Aid’s website that gets less attention than the student loan portfolio page (where I pulled data from for the previous posts) is the Title IV program volume reports page. For years, this page—which is updated quarterly with current data—has been a useful source of information on how many students at each college receive federal grants and loans.

While pulling the latest data on Pell Grant and student loan volumes by college last week, I noticed three new spreadsheets on the page that contained interesting statistics from the 2015-16 academic year. One spreadsheet shows grant and loan disbursements by age group, while a second spreadsheet shows the same information by state. But in this blog post, I look at a third spreadsheet of student loan disbursements by students’ fields of study. The original spreadsheet contained data on the number of recipients and the amount of loans disbursed, and I added a third column of per-student annual average loans by dividing the two columns. This revised spreadsheet can be downloaded here.
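For anyone who wants to recreate that column, here is a minimal sketch of the calculation in Python. The file name and column headers are placeholders; they would need to be matched to however Federal Student Aid labels them in the actual spreadsheet.

```python
# Sketch: add a per-recipient average loan column to the field-of-study spreadsheet.
# The file name and column names below are placeholders, not Federal Student Aid's labels.
import pandas as pd

loans = pd.read_excel("fsa_loan_volume_by_field_2015_16.xlsx")
loans["avg_per_recipient"] = loans["loan_dollars"] / loans["recipients"]

# Sort to see which fields have the highest per-student annual borrowing
top = loans.sort_values("avg_per_recipient", ascending=False).head(10)
print(top[["field_of_study", "recipients", "loan_dollars", "avg_per_recipient"]])
```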

Of the 1,310 distinct fields of study included in the spreadsheet, 14 of them included more than $1 billion of student loans in 2015-16 and made up over $36 billion of the $94 billion in disbursed loans. Business majors made up 600,000 of the 9.1 million borrowers, taking out $6.1 billion in loans, with nursing majors having the second most borrowers and loans. The majors with the third and fourth largest loan disbursements were law and medicine, fields that enroll almost exclusively graduate students, who can thus borrow up to the full cost of attendance without the need for Parent PLUS loans. As a result, both of these fields took out more loans than general studies majors in spite of being far fewer in number. On the other end (not shown here), the ten students majoring in hematology technology/technician took out a combined $28,477 in loans, just ahead of the 14 students in explosive ordinance/bomb disposal programs who hopefully are not blowing up over incurring a total of $61,069 in debt.

Turning next to programs where per-student annual borrowing is the highest, the top ten list is completely dominated by health sciences programs (the first field outside the two-digit health sciences CIP is international business, trade, and tax law at #16). It is pretty remarkable that dentists take on $71,451 of student loans each year, while advanced general dentists (all 51 of them!) borrow even more than that. Given that dental school is four years long and that interest accumulates during school, an average debt burden of $341,190 for private dental school graduates seems quite reasonable. Toss in income-driven repayment during additional training and it makes sense that at least one of the 101 people with $1 million in federal student loan debt is an orthodontist. On the low end of average debt, the 164 bartending majors ran up an average tab of $2,963 in student loans in 2015-16, while the 144 personal awareness and self-improvement majors are well into their 12-step plan to repay their average of $4,361 in loans.
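As a rough check on that dental figure, four years of borrowing at $71,451 per year plus interest accruing during school does land in the same ballpark. The 7 percent rate and the simple-interest accrual assumption in the sketch below are my own assumptions, not figures from the Federal Student Aid data.

```python
# Rough arithmetic sketch: four years of dental school borrowing plus in-school interest.
# The 7% rate and simple-interest accrual until graduation are illustrative assumptions.
ANNUAL_BORROWING = 71_451   # average annual borrowing for dentistry majors (from the FSA data)
YEARS = 4
RATE = 0.07                 # assumed blended unsubsidized/Grad PLUS interest rate

principal = ANNUAL_BORROWING * YEARS
# Each year's disbursement accrues interest for the years remaining until graduation (4, 3, 2, 1)
accrued = sum(ANNUAL_BORROWING * RATE * (YEARS - year) for year in range(YEARS))

print(f"Principal borrowed: ${principal:,.0f}")
print(f"Interest accrued by graduation: ${accrued:,.0f}")
print(f"Approximate balance at graduation: ${principal + accrued:,.0f}")  # roughly $335,800
```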