Announcing a New Data Collection Project on State Performance-Based Funding Policies

Performance-based funding (PBF) policies in higher education, in which states fund colleges in part based on student outcomes instead of enrollment measures or historical tradition, have spread rapidly across states in recent years. This push for greater accountability has resulted in more than half of all states currently using PBF to fund at least some colleges, with deep-blue California joining a diverse group of states by developing a PBF policy for its community colleges.

Academic researchers have flocked to the topic of PBF over the last decade and have produced dozens of studies looking at the effects of PBF both on a national level and for individual states. In general, this research has found modest effects of PBF, with some differences across states, sectors, and how long the policies have been in place. There have also been concerns about the potential unintended consequences of PBF on access for low-income and minority students, although new policies that provide bonuses to colleges that graduate historically underrepresented students seem to be promising in mitigating these issues.

In spite of the intense research and policy interest in PBF, relatively little is known about what is actually in these policies. States vary considerably in how much money is tied to student outcomes, which outcomes (such as retention and degree completion) are incentivized, and whether there are bonuses for serving low-income, minority, first-generation, rural, adult, or veteran students. Some states also give bonuses for STEM graduates, a practice that is even more important to understand given this week's landmark paper by Kevin Stange and colleagues documenting differences in the cost of providing an education across disciplines.

Most research has relied on binary indicators of whether a state has a PBF policy or an incentive to encourage equity, with some studies trying to get at the importance of the strength of PBF policies by looking at individual states. But researchers and advocacy organizations cannot even agree on whether certain states had PBF policies in certain years, and no research has tried to fully catalog the different strengths of policies (“dosage”) across states over time.

Because collecting high-quality data on the nuances of PBF policies is a time-consuming endeavor, I was just about ready to walk away from studying PBF given my available resources. But last fall at the Association for the Study of Higher Education conference, two wonderful colleagues approached me with an idea to go out and collect the data. After a year of working with Justin Ortagus of the University of Florida and Kelly Rosinger of Pennsylvania State University—two tremendous assistant professors of higher education—we are pleased to announce that we have received a $204,528 grant from the William T. Grant Foundation to build a 20-year dataset containing detailed information about the characteristics of PBF policies and how much money is at stake.

Our dataset, which will eventually be made available to the public, will help us answer a range of policy-relevant questions about PBF. Some particularly important questions are whether dosage matters regarding student outcomes, whether different types of equity provisions are effective in reducing educational inequality, and whether colleges respond to PBF policies differently based on what share of their funding comes from the state. We are still seeking funding to do these analyses over the next several years, so we would love to talk with interested foundations about the next phases of our work.

To close, one thing that I tell often-skeptical audiences of institutional leaders and fellow faculty members is that PBF policies are not going away anytime soon and that many state policymakers will not give additional funding to higher education without at least a portion being directly tied to student outcomes. These policies are also rapidly changing, in part driven by some of the research over the last decade that was not as positive toward many early PBF systems. This dataset will allow us to examine which types of PBF systems can improve outcomes across all students, thus helping states improve their current PBF systems.

New Research on the Relationship between Nonresident Enrollment and In-State College Prices

Public colleges and universities in most states are under increased financial stress as they strain to compete with other institutions while state appropriations fail to keep up with increases in both inflation and student enrollment. As a result, universities have turned to other revenue sources to raise additional funds. One commonly targeted source is out-of-state students, particularly in Northeastern and Midwestern states with declining populations of recent high school graduates. But prior research has found that trying to enroll more out-of-state students can reduce the number of in-state students attending selective public universities, and this crowding-out effect particularly impacts minority and low-income students.

I have long been interested in studying how colleges use their revenue, so I began sketching out a paper looking at whether public universities appeared to use additional revenue from out-of-state students to improve affordability for in-state students. Since I am particularly interested in prices faced by students from lower-income families, I was also concerned that any potential increase in amenities driven by out-of-state students could actually make college less affordable for in-state students.

I started working on this project back in the spring of 2015 and enjoyed two and a half conference rejections (one paper submission was rejected into a poster presentation), two journal rejections, and a grant application rejection during the first two years. But after getting helpful feedback from the journal reviewers (unfortunately, most conference reviewers provide little feedback and most grant applications are rejected with no feedback), I made improvements and finally got the paper accepted for publication.

The resulting article, just published in Teachers College Record (and available for free for a limited time upon signing up as a visitor), addresses the following research questions:

(1) Do the listed cost of attendance and its components, such as tuition and fees and housing expenses, change for in-state students when nonresident enrollment increases?

(2) Does the net price of attendance (both overall and by family income bracket) for in-state students change when nonresident enrollment increases?

(3) Do the above relationships differ by institutional selectivity?

After years of working on this paper and multiple iterations, I am pleased to report…null findings. (Seriously, though, I am glad that higher education journals seem to be willing to publish null findings, as long as the estimates are precisely located around zero without huge confidence intervals.) These findings suggest two things about the relationship between nonresident enrollment and prices faced by in-state students. First, it does not look like nonresident tuition revenue is being used to bring down in-state tuition prices. Second, it also does not appear that in-state students are paying more for room and board after more out-of-state students enroll, suggesting that any amenities demanded by wealthier out-of-state students may be modest in nature.
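For readers curious about what precisely estimated null results look like in practice, below is a minimal sketch of the kind of two-way fixed effects panel regression an analysis like this might run. The filename and variable names (university_panel.csv, instate_net_price, nonres_share, unitid) are hypothetical placeholders, and the published article's exact specification differs in its details.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical institution-by-year panel of public universities.
panel = pd.read_csv("university_panel.csv")

# Institution and year fixed effects absorb stable institutional
# differences and national trends; standard errors are clustered
# by institution.
model = smf.ols(
    "instate_net_price ~ nonres_share + C(unitid) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["unitid"]})

print(model.params["nonres_share"])          # point estimate
print(model.conf_int().loc["nonres_share"])  # how tightly is it centered on zero?
```

A null finding is convincing when that confidence interval is narrow and brackets zero, rather than being so wide that it is consistent with large effects in either direction.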

I am always happy to take any questions on the article or to share a copy if there are issues accessing it. I am also happy to chat about the process of getting research published in academic journals, since that is often a long and winding road!

How Financial Responsibility Scores Do Not Affect Institutional Behaviors

One of the federal government’s longstanding accountability efforts in higher education is the financial responsibility score—a metric designed to reflect a private college’s financial stability. The federal government has an interest in making sure that only stable colleges receive federal funds, as taxpayers often end up footing at least part of the bill when colleges shut down and students may struggle to resume their education elsewhere. The financial responsibility score metric ranges from -1.0 to 3.0, with colleges scoring between 1.0 and 1.4 being placed under additional oversight and those scoring below 1.0 being required to post a letter of credit with the Department of Education.
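As a quick illustration of how those cutoffs work in practice, here is a minimal sketch; the consequence descriptions paraphrase the rules above rather than quoting ED's official language.

```python
def financial_responsibility_tier(score: float) -> str:
    """Map a composite score (-1.0 to 3.0) to its oversight consequence.

    Descriptions paraphrase the rules discussed above rather than
    quoting the Department of Education's official language.
    """
    if not -1.0 <= score <= 3.0:
        raise ValueError("Scores range from -1.0 to 3.0.")
    if score < 1.0:
        return "must post a letter of credit with the Department of Education"
    if score <= 1.4:
        return "placed under additional oversight"
    return "passes without additional oversight"

print(financial_responsibility_tier(0.9))  # letter of credit required
print(financial_responsibility_tier(1.2))  # additional oversight
print(financial_responsibility_tier(2.5))  # passes
```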

Although these scores have been released to the public since the 2006-07 academic year and there was a great deal of dissatisfaction among private colleges regarding how the scores were calculated, there had been no prior academic research on the topic before I started my work in the spring of 2014. My question was simple: did receiving a poor financial responsibility score induce colleges to shift their financial priorities (either increasing revenues or decreasing expenditures) in an effort to avoid future sanctions?

But as is often the case in academic research, the road to a published article was far from smooth and direct. Getting rejected by two different journals took nearly two years, and then it took another two years for this paper to wind its way through the review, page proof, and publication process at the Journal of Education Finance. (In the meantime, I scratched my itch on the topic and put a stake in the ground by writing a few blog posts highlighting the data and teasing my findings.)

More than four and a half years after starting work on this project, I am thrilled to share that my paper, “Do Financial Responsibility Scores Affect Institutional Behaviors?” is a part of the most recent issue of the Journal of Education Finance. I examined financial responsibility score data from 2006-07 to 2013-14 in this paper, although I tried to get data going farther back since these scores have been calculated since at least 1996. I filed a Freedom of Information Act request back in 2014 for the data, and my appeal was denied in 2017 on the grounds that the request to receive data (that already existed in some format!) was “too burdensome and expensive.” At that point, the paper was already accepted at JEF, but I am obviously still a little annoyed with how that process went.

Anyway, I failed to find any clear evidence that private nonprofit or for-profit colleges changed their fiscal priorities after receiving an unfavorable financial responsibility score. To some extent, this result made sense among private nonprofit colleges; colleges tend to move fairly slowly and many of their costs are sticky (such as facilities and tenured faculty). But for for-profit colleges, which generally tend to be fairly agile critters, the null findings were more surprising. There is certainly more work to do in this area (particularly given the changes in higher education that have occurred over the last five years), so I encourage more researchers to delve into this topic.

To aspiring researchers and those who rely on research in their jobs—I hope this blog post provides some insights into the scholarly publication process and all of the factors that can slow down the production of research. I started this paper during my first year on faculty and it finally came out during my tenure review year (which is okay because accepted papers still count even if they are not yet in print). Many papers move more quickly than this one, but it is worth highlighting that research is a pursuit for people with a fair amount of patience.

Some Good News on Student Loan Repayment Rates

The U.S. Department of Education released updates to its massive College Scorecard dataset earlier this week, including new data on student debt burdens and student loan repayment rates. In this blog post, I look at trends in repayment rates (defined as whether a student repaid at least $1 in principal) at one, three, five, and seven years after entering repayment. I present data for colleges with unique six-digit Federal Student Aid OPEID numbers (to eliminate duplicate results), weighting the final estimates to reflect the total number of borrowers entering repayment.[1]
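As a rough illustration, here is a minimal sketch of the deduplication and weighting steps in Python. The column names are hypothetical placeholders (the actual Scorecard file uses variables such as RPY_1YR_RT):

```python
import pandas as pd

scorecard = pd.read_csv("college_scorecard.csv")  # hypothetical extract

# Keep one row per six-digit OPEID so branch campuses reporting under
# the same Federal Student Aid ID are not double-counted.
deduped = scorecard.drop_duplicates(subset="opeid6")

# Borrower-weighted average: colleges with more borrowers entering
# repayment count proportionally more toward the overall rate.
valid = deduped.dropna(subset=["repay_rate_1yr", "borrowers_entering_repayment"])
weighted = (
    (valid["repay_rate_1yr"] * valid["borrowers_entering_repayment"]).sum()
    / valid["borrowers_entering_repayment"].sum()
)
print(f"Weighted 1-year repayment rate: {weighted:.1%}")
```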

The table below shows the trends in the 1-year, 3-year, 5-year, and 7-year repayment rates for each cohort of students with available data.

Repayment cohort | 1-year rate (%) | 3-year rate (%) | 5-year rate (%) | 7-year rate (%)
2006-07 | 63.2 | 65.1 | 66.7 | 68.4
2007-08 | 55.7 | 57.4 | 59.5 | 62.2
2008-09 | 49.7 | 51.7 | 55.3 | 59.5
2009-10 | 45.7 | 48.2 | 52.6 | 57.4
2010-11 | 41.4 | 45.4 | 51.3 | N/A
2011-12 | 39.8 | 44.4 | 50.6 | N/A
2012-13 | 39.0 | 45.0 | N/A | N/A
2013-14 | 40.0 | 46.1 | N/A | N/A

One piece of good news is that 1-year and 3-year repayment rates ticked up slightly for the most recent cohort of students who entered repayment in 2013 or 2014. The 1-year repayment rate of 40.0% is the highest rate since the 2010-11 cohort and the 3-year rate of 46.1% is the highest since the 2009-10 cohort. Another piece of good news is that the gain between the 5-year and 7-year repayment rates for the most recent cohort with data (2009-10) is the largest among the four cohorts with data.

Across all sectors of higher education, repayment rates increased as students got farther into the repayment period. The charts below show differences by sector for the cohort entering repayment in 2009 or 2010 (the most recent cohort to be tracked over seven years), and it is worth noting that for-profit students see somewhat smaller increases in repayment rates than students in other sectors.

But even somewhat better repayment rates still indicate significant issues with student loan repayment. Only half of borrowers have repaid any principal within five years of entering repayment, which is a concern for students and taxpayers alike. Data from a Freedom of Information Act request by Ben Miller of the Center for American Progress highlight that student loan default rates continue to increase beyond the three-year accountability window currently used by the federal government, and other students are muddling through deferment and forbearance while outstanding debt continues to increase.

Other students are relying on income-driven repayment and Public Service Loan Forgiveness to remain current on their payments. This presents a long-term risk to taxpayers, as at least a portion of balances will be written off over the next several decades. It would be helpful for the Department of Education to add data to the College Scorecard on the percentage of students at each college enrolled in income-driven repayment plans, making it possible to separate students who may not be repaying principal due to income-driven plans from those who are placing their credit at risk by falling behind on payments.

[1] Some of the numbers for prior cohorts slightly differ from what I presented last year due to a change in how I merged datasets (starting with the most recent year of the Scorecard instead of the oldest year, as the latter method excluded some colleges that merged). However, this did not affect the general trends presented in last year’s post. Thanks to Andrea Fuller at the Wall Street Journal for helping me catch that bug.
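For fellow data nerds, the fix described in the footnote looks roughly like this minimal sketch (filenames and column names are hypothetical):

```python
import pandas as pd

# Anchor the panel on the most recent file so colleges that merged or
# changed identifiers over time are not dropped from the sample.
files = {
    "2017": "scorecard_2017.csv",
    "2016": "scorecard_2016.csv",
    "2015": "scorecard_2015.csv",
}

panel = None
for year, path in files.items():
    cohort = pd.read_csv(path)[["opeid6", "repay_rate_1yr"]]
    cohort = cohort.rename(columns={"repay_rate_1yr": f"rate_{year}"})
    # Left merges keep every college present in the most recent file.
    panel = cohort if panel is None else panel.merge(cohort, on="opeid6", how="left")
```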

How to Provide Context for College Scorecard Data

The U.S. Department of Education’s revamped College Scorecard website celebrated its third anniversary last month with another update to the underlying dataset. It is good to see this important consumer information tool continue to be updated, given the role that Scorecard data can play in market-based accountability (a key goal of many conservatives). But the Scorecard’s change log—a great resource for those using the dataset—revealed a few changes to the public-facing site. (Thanks to the indefatigable Clare McCann at New America for pointing this out in a blog post.)

[Screenshot: College Scorecard change log entry noting the removal of the comparison indicators]

So to put the above screenshot into plain English, the Scorecard used to have indicators for how a college’s performance on outcomes such as net price, graduation rate, and post-college salary compared to the median institution—and now it doesn’t. In many ways, the Department of Education’s decision to stop comparing colleges with different levels of selectivity and institutional resources to each other makes all the sense in the world. But it would be helpful to provide website users with a general idea of how the college performs relative to more similar institutions (without requiring users to enter a list of comparison colleges).

For example, here is what the Scorecard data now look like for Cal State—Sacramento (the closest college to me as I write this post). The university sure looks affordable, but the context is missing.

[Screenshot: College Scorecard results page for Cal State-Sacramento]

It would sure be helpful if ED already had a mechanism to generate a halfway reasonable set of comparison institutions to help put federal higher education data into context. Hold on just a second…

[Screenshot: IPEDS option for generating a list of comparison institutions]

It turns out that there is already an option within the Integrated Postsecondary Education Data System (IPEDS) to generate a list of peer institutions. ED creates a list of similar institutions to the focal college based on factors such as sector and level, Carnegie classification, enrollment, and geographic region. For Sacramento State, here is part of the list of 32 comparison institutions that is generated. People can certainly quibble with some of the institutions chosen, but they clearly do have some similarities.

[Screenshot: part of the IPEDS comparison group generated for Sacramento State]
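The underlying idea is straightforward to sketch in code, although the actual IPEDS algorithm is more involved than this rough, hypothetical approximation:

```python
import pandas as pd

ipeds = pd.read_csv("ipeds_directory.csv")  # hypothetical extract
focal = ipeds[ipeds["name"] == "California State University-Sacramento"].iloc[0]

# Match on sector/level, Carnegie classification, and region, and allow
# enrollment within a band around the focal institution.
peers = ipeds[
    (ipeds["sector"] == focal["sector"])
    & (ipeds["carnegie"] == focal["carnegie"])
    & (ipeds["region"] == focal["region"])
    & (ipeds["enrollment"].between(focal["enrollment"] * 0.5,
                                   focal["enrollment"] * 2.0))
]
print(len(peers))
```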

I then graphed the net prices of these 32 institutions to help put Sacramento State (in black below) into context. The university had the fifth-lowest net price among the set, information that is at least somewhat more helpful than looking at a national average across all sectors and levels.

[Figure: net prices of Sacramento State (in black) and its 32 IPEDS comparison institutions]
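For anyone who wants to recreate this kind of chart, here is a minimal sketch assuming the peer list has been saved with hypothetical columns for institution name and net price:

```python
import pandas as pd
import matplotlib.pyplot as plt

peers = pd.read_csv("sac_state_peers.csv").sort_values("net_price")

# Highlight the focal campus in black and the comparison group in gray.
colors = [
    "black" if name == "California State University-Sacramento" else "lightgray"
    for name in peers["institution"]
]
plt.barh(peers["institution"], peers["net_price"], color=colors)
plt.xlabel("Average net price (dollars)")
plt.tight_layout()
plt.show()
```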

My takeaway here: the folks behind the College Scorecard should talk with the IPEDS people to consider bringing back a comparison group average based on a methodology that is already used within the Department of Education.

Beware Dubious College Rankings

Just like the leaves starting to change colors (in spite of the miserable 93-degree heat outside my New Jersey office window) and students returning to school are clear signs of fall, another indicator of the change in seasons is the proliferation of college rankings that get released in late August and early September. The Washington Monthly college rankings that I compile were released the week before Labor Day, and MONEY and The Wall Street Journal have also released their rankings recently. U.S. News & World Report caps off rankings season by unveiling their undergraduate rankings later this month.

People quibble with the methodology of these rankings all the time (I get e-mails by the dozens about the Washington Monthly rankings, and we're not the 800-pound gorilla of the industry). Yet these rankings are all based on data that can be defended to at least some extent, and the methodologies are generally transparent. Even rankings of party schools, such as this Princeton Review list, have a methodology section that does not seem patently absurd.

But since America loves college rankings—and colleges love touting rankings they do well in and grumbling about the rest of them—a number of dubious college rankings have developed over the years. I was forwarded a press release about one particular set of rankings that immediately set my BS detectors into overdrive. This press release was about a ranking of the top 20 fastest online doctoral programs, and here is a link to the rankings that will not boost their search engine results.

First, let’s take a walk through the methods section. There are three red flags that immediately stand out:

(1) The writing resembles a “word salad” and clearly was never edited by anyone. Reputable rankings sites use copy editors to help methodologists communicate with the public.

(2) College Navigator is a good data source for undergraduates, but does not contain any information on graduate programs (which they are trying to rank) other than the number of graduates.

(3) Reputable rankings publish their full methodology, even if certain data elements are proprietary and cannot be shared; that is not the case here. And trust me, nobody wants to duplicate this set of rankings!

As an example of what these rankings look like, here is a screenshot of how Seton Hall’s online EdD in higher education is presented. Again, let’s walk through the issues.

(1) There are typos galore in their description of the university. This is not a good sign.

(2) Acceptance/retention rate data are for undergraduate students, not for a doctoral program. The only way they could get these data is by contacting programs, which costs money and runs into logistical problems.

(3) Seton Hall is accredited by Middle States, not the Higher Learning Commission. (Thanks to Sam Michalowski for bringing this to my attention via Twitter.)

(4) In a slightly important point, Seton Hall does not offer an online EdD in higher education. Given that I teach in the higher education graduate programs and am featured on the webpage for the in-person EdD program, I’m pretty confident in this statement.

For any higher education professionals who are reading this post, I have a few recommendations. First, be skeptical of any rankings that come from sources that you are not familiar with—and triple that skepticism for any program-level rankings. (Ranking programs is generally much harder due to a lack of available data.) Second, look through the methodology with the help of institutional research staff members and/or higher education faculty members. Does it pass the smell test? And finally, keep in mind that many rankings websites are only able to be profitable by getting colleges to highlight their rankings, thus driving clicks to these sites. If colleges were more cautious about posting dubious rankings, it would shut down some of these websites while also avoiding embarrassment when someone finds out that a college fell for what is essentially a ruse.

Comments on the Proposed Gainful Employment Regulations

The U.S. Department of Education is currently accepting public comments (through September 13) on their proposal to rescind the Obama administration’s gainful employment regulations, which had the goal of tying federal financial aid eligibility to whether graduates of certain vocationally-focused programs had an acceptable debt-to-earnings ratio. My comments are reprinted below.

September 4, 2018

Annmarie Weisman

U.S. Department of Education

400 Maryland Avenue SW, Room 6W245

Washington, DC 20202

Re: Comments on the proposed rescinding of the gainful employment regulations

Dear Annmarie,

My name is Robert Kelchen and I am an assistant professor of higher education at Seton Hall University.[1] As a researcher who studies financial aid, accountability policies, and higher education finance, I have been closely following the Department of Education (ED)’s 2017-18 negotiated rulemaking efforts regarding gainful employment. I write to offer my comments on certain aspects of the proposed rescinding of the regulations.

First, as an academic, I was pleasantly surprised to see ED immediately referring to a research paper in making its justification to change the debt-to-earnings (D/E) threshold. But that quickly turned into dismay after Sandy Baum clarified in a blog post that ED had incorrectly interpreted what she and Saul Schwartz wrote a decade ago.[2] I am not wedded to any particular threshold regarding D/E ratios, but I would recommend that ED reach out to researchers before using their findings in order to make sure they are being interpreted correctly.

Second, the point that D/E ratios can be affected by the share of adult (independent) students, who have higher loan limits than dependent students, is quite valid. But it can potentially be addressed in one of two ways if D/E ratios are reported in the future. One option is to report D/E ratios separately for independent and dependent students, but that runs the risk of creating more small-cell-size issues by splitting the sample. Another option is to cap the amount of independent student borrowing credited toward D/E ratios at the same level as dependent students (also addressing the possibility that some dependent students have higher limits due to Parent PLUS loan applications being rejected). This is less useful from a consumer information perspective, but could solve issues regarding high-stakes accountability.

Third, ED's point about gainful employment using a ten-year amortization period for certificate programs while also offering 20-year repayment plans under REPAYE is well-taken. Switching to a 20-year period would allow some lower-performing programs to pass the D/E test, but it is reasonable given that ED offers a loan repayment plan of that length. (I also view the idea that programs would lose Title IV eligibility under the prior administration's regulations as highly unlikely, based on the experience of very few colleges losing eligibility over high cohort default rates.) In any case, aligning amortization periods to repayment plan periods makes sense.

Fourth, I am highly skeptical that requiring institutions to disclose various outcomes on their own websites would have much value. Net price calculators, which colleges are required to post under the Higher Education Act, are a prime example. Research has shown that many colleges place these calculators on obscure portions of their websites and that the information is often up to five years out of date.[3] Continuing to publish centralized data on outcomes is far preferable to letting colleges do their own thing, which highlights the importance of continuing to publish outcomes information without any pauses in the data.

Fifth, while providing median debt and median earnings data allows analysts to continue to calculate a D/E ratio, there is no harm in continuing to provide such a ratio in the future alongside the raw data. There is no institutional burden for doing so, and it is possible that some prospective students may find that ratio to be more useful than simply looking at median debt. At the very least, ED should conduct several focus groups to make sure that D/E ratios lack value before getting rid of them.

Sixth, while it is absolutely correct to note that people working in certain service industries receive a high portion of their overall compensation in tips, I find it dismaying as a taxpayer that there is no interest in creating incentives for individuals to report their income as required by law. A focus on D/E ratios created a possibility for colleges to encourage their students to follow the law and accurately report their incomes in order to increase earnings relative to debt payments. ED should instead work with the IRS and colleges to help protect taxpayers by making sure that everyone pays income taxes as required.

In closing, I do not have a strong preference about whether ED ties Title IV eligibility to program-level D/E thresholds due to my skepticism that any sanctions would actually be enforced.[4] However, I strongly oppose efforts by ED to completely stop publishing program-level student outcomes data until the College Scorecard data are ready (which could be a few years). Continuing to publish data on certificate graduates' outcomes in the interim is an essential step since all sectors of higher education already have to report certificate outcomes, meaning that keeping these data treats all sectors equally. Publishing outcomes of degree programs would be nice, but it is not as important since only some colleges would be included.

As I showed with my colleagues in the September/October issue of Washington Monthly magazine, certificate students’ outcomes vary tremendously both within and across CIP codes as well as within different types of higher education institutions.[5] Once the College Scorecard data are ready, this dataset can be phased out. But in the meantime, continuing to publish data meets a key policy goal of fostering market-based accountability in higher education.

[1] All opinions reflected in this commentary are solely my own and do not represent the views of my employer or funders.

[2] Baum, S. (2018, August 22). DeVos misinterprets the evidence in seeking gainful employment deregulation. Urban Wire. https://www.urban.org/urban-wire/devos-misrepresents-evidence-seeking-gainful-employment-deregulation.

[3] Anthony, A. M., Page, L. C., & Seldin, A. (2016). In the right ballpark? Assessing the accuracy of net price calculators. Journal of Student Financial Aid, 46(2), 25-50. Cheng, D. (2012). Adding it all up 2012: Are college net price calculators easy to find, use, and compare? Oakland, CA: The Institute for College Access and Success.

[4] For more reasons why I am skeptical that all-or-nothing accountability systems such as the prior administration’s gainful employment regulations would actually be effective, see my book Higher Education Accountability (Johns Hopkins University Press, 2018).

[5] Washington Monthly (2018, September/October). 2018 best colleges for vocational certificates. https://washingtonmonthly.com/2018-vocational-certificate-programs.

A Look at Federal Student Loan Borrowing by Field of Study

The U.S. Department of Education's Office of Federal Student Aid has slowly been releasing interesting new data on federal student loans over the last few years. In previous posts, I have highlighted data on the types of borrowers who use income-driven repayment plans and average federal student loan balances by state. But one section of Federal Student Aid's website that gets less attention than the student loan portfolio page (the source of the data for those previous posts) is the Title IV program volume reports page. For years, this page, which is updated quarterly with current data, has been a useful source of information on how many students at each college receive federal grants and loans.

While pulling the latest data on Pell Grant and student loan volumes by college last week, I noticed three new spreadsheets on the page that contained interesting statistics from the 2015-16 academic year. One spreadsheet shows grant and loan disbursements by age group, while a second spreadsheet shows the same information by state. But in this blog post, I look at a third spreadsheet of student loan disbursements by students’ fields of study. The original spreadsheet contained data on the number of recipients and the amount of loans disbursed, and I added a third column of per-student annual average loans by dividing the two columns. This revised spreadsheet can be downloaded here.
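The calculation itself is trivial; a minimal sketch (with hypothetical filenames and column names) looks like this:

```python
import pandas as pd

loans = pd.read_excel("loan_volume_by_field.xlsx")  # hypothetical filename

# Per-student annual average = dollars disbursed / number of recipients.
loans["avg_per_recipient"] = loans["dollars_disbursed"] / loans["recipients"]

# Fields with the highest average annual borrowing.
print(loans.sort_values("avg_per_recipient", ascending=False).head(10))
```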

Of the 1,310 distinct fields of study included in the spreadsheet, 14 accounted for more than $1 billion of student loans apiece in 2015-16 and together made up over $36 billion of the $94 billion in disbursed loans. Business majors made up 600,000 of the 9.1 million borrowers, taking out $6.1 billion in loans, with nursing majors having the second-most borrowers and loans. The majors with the third- and fourth-largest loan disbursements were law and medicine, fields that enroll almost exclusively graduate students, who can thus borrow up to the full cost of attendance without the need for Parent PLUS loans. As a result, both of these fields took out more loans than general studies majors in spite of being far fewer in number. On the other end (not shown here), the ten students majoring in hematology technology/technician took out a combined $28,477 in loans, just ahead of the 14 students in explosive ordnance/bomb disposal programs who hopefully are not blowing up over incurring a total of $61,069 in debt.

Turning next to programs where per-student annual borrowing is the highest, the top ten list is completely dominated by health sciences programs (the first field outside the two-digit health sciences CIP category is international business, trade, and tax law at #16). It is pretty remarkable that dentistry students take on $71,451 of student loans each year, while advanced general dentistry students (all 51 of them!) borrow even more than that. Given that dental school is four years long (about $285,800 at that annual rate) and that interest accumulates during school, an average debt burden of $341,190 among private dental school graduates seems quite reasonable. Toss in income-driven repayment during additional training and it makes sense that at least one of the 101 people with $1 million in federal student loan debt is an orthodontist. On the low end of average debt, the 164 bartending majors ran up an average tab of $2,963 in student loans in 2015-16, while the 144 personal awareness and self-improvement majors are well into their 12-step plan to repay their average of $4,361 in loans.

Trends in Net Prices by Family Income

I continue my look through newly-released data from the National Postsecondary Student Aid Study by turning to trends in the net price of attendance by family income. The net price, which is the full cost of attendance (tuition and fees, books and supplies, room and board, and miscellaneous living expenses) less all grant aid received, is a key college affordability measure as it represents how much money students and their families have to come up with each year to attend college. This net price can be covered by a combination of savings, work income, and student loans, but it is worth noting that student loan limits for many undergraduate students are far below the net price. This means that many families face challenges in paying for college if the net price is a large share of their income.
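To make the measure concrete, here is the arithmetic with made-up numbers:

```python
# Illustrative numbers only; actual costs and aid vary widely by college.
cost_of_attendance = 24_000  # tuition/fees, books, room and board, living costs
grant_aid = 9_000            # all grant and scholarship aid received
family_income = 40_000

net_price = cost_of_attendance - grant_aid
share_of_income = net_price / family_income
print(f"Net price: ${net_price:,} ({share_of_income:.0%} of family income)")
# Net price: $15,000 (38% of family income)
```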

The first figure here shows trends (since 2004) in the percentage of family income needed to cover the net price. In 2015-16, 48% of students faced net prices of less than 25% of their family income, 20% faced net prices between 26% and 50%, 9% between 51% and 99%, and 23% of students had net prices greater than their family incomes. The good news is that the distribution of net prices has held almost constant since 2011-12 after having taken a jump during the Great Recession.

In the second figure, I break down the percentage of students with net prices higher than their family income by type of college attended. Nearly half of students attending for-profit colleges were in this category, which is not surprising given the high prices charged by many for-profit colleges and their students' low household incomes. About one in five students attending public and private nonprofit four-year colleges were also in this category. Meanwhile, even 18% of community college students had net prices higher than their families' incomes, which is a particular concern as quite a few community colleges do not allow their students to take out federal loans.

A Look at College Students’ Living Arrangements

Those of us in the research and policy worlds generally had a different college experience than most American college students have today. One example of this is where students live during college. I had a very traditional college experience, which began with me as a recent high school graduate moving into my (non-air conditioned) dorm room in Truman State University’s Ryle Hall in the sweltering August heat.[1] Yet that residential experience is not what most students experience, as I show in my fourth blog post using newly-released data from the National Postsecondary Student Aid Study (NPSAS).

As the chart below shows, only 15.6% of all undergraduate students lived on campus in the 2015-16 academic year, a percentage that has been largely consistent since 2000. Another 56.9% of students lived off campus away from their parent(s), while 27.5% lived off campus with their parents. Aside from a strange blip in 2011-12, these percentages have also been fairly consistent over time.[2]

This low percentage could be explained in part by students living on campus during their first year of college and then moving off campus later on in an effort to either save money or gain more independence. I then focused the next chart on the roughly 38%-40% of students who were first-year students (about 25% at four-year public and private nonprofit colleges and 50% at community colleges and for-profits) to get an idea of whether patterns changed among new students only.[3] Interestingly, the percentages of first-year students living on campus (12.9%) and off campus away from their parent(s) (53.8%) were lower than for all students, which I figured was due to the smaller percentage of four-year students among the first-year student cohort.

I then broke down student living arrangements by institutional type for the 2015-16 academic year, showing numbers both for all students and only for first-year students. The finding that will surprise many is that less than 50% of first-year students at four-year colleges lived on campus, in spite of this being viewed as the traditional college experience. 49% of first-year students at private nonprofit colleges and 36% of first-year students at public four-year colleges lived on campus, while very few community colleges or for-profit colleges even have campus housing. The most common living arrangement in both the community college and for-profit sectors was living off campus away from parent(s), with about 60% of community college and 75% of for-profit students doing this regardless of year in college. About 40% of community college students lived with their parent(s), with private nonprofit students being least likely to do this (13%).

These data show that the "typical" residential college experience that many of us had was not the typical experience even when we went to college.[4] A more typical college student is the young woman who rang me up as an outlet mall cashier last weekend. She was an education major at the local community college and said that she lived at home to save money. After I introduced myself as a professor, she mentioned that she was hoping to continue living at home and commuting to a nearby four-year college. Although I was unable to get an extra teacher discount from her at the cash register, it was a good reminder that most students never live in a residence hall.

[1] Air conditioning matters a lot in education, folks. For empirical evidence in a K-12 setting, see this great new NBER working paper by Josh Goodman and colleagues.

[2] Fellow data nerds, any idea what happened in 2011-12? I looked at each sector and the pattern is still there (with it being strongest among four-year colleges). For that reason, I am hesitant to place much value on the 2011-12 off campus percentages.

[3] I used the NPSAS variable of year in school for financial aid purposes, as the year in school for credit accumulation purposes could be skewed based on attendance status. However, the general pattern of results held across both definitions.

[4] I’m represented by the 2003-04 NPSAS cohort, where about 46% of first-year students on public university campuses lived in residence halls.