Which Private Colleges Always Lose Money?

I write this piece with the sounds of excavators and dump trucks in the background, as we are getting the 30-year-old pool at our house replaced this month. Pools should last a lot longer than that, but the original owner of the house decided to save money by installing the pool on top of a pile of logs and stumps left over from clearing the land. As those logs settled and decayed, the pool began to leak and we are left with a sizable bill to dig everything out and do things right. Even though we budgeted for this, it is still painful to see every load of junk exit and every load of gravel enter what I am calling the money pit.

On the higher education front, it has been a week with several announced or threatened closures. On Monday, the University of Wisconsin-Milwaukee announced that it would close its Waukesha branch campus, marking at least the fourth of the 13 former University of Wisconsin Colleges to close in the last several years. Fontbonne University in St. Louis also announced its closure on Monday, although they get a lot of credit from me for giving students and at least some employees more than a year to adjust. Today, Northland College in Wisconsin announced that it will close at the end of this academic year unless they can raise $12 million—one-third of their annual budget—in the next three weeks. Closures just keep dripping out, and I am really concerned about a late wave of closures this spring once colleges finally get FAFSA information from the U.S. Department of Education.

The two topics blended together for me (along with my students’ budget analyses being due on Friday) on my run this morning, and I quickly jotted down the gist of this post. The coverage of both Fontbonne and Northland focused on the number of years that they had lost money, so I used IPEDS data to take a look at the operating margins (revenues minus expenses) of private nonprofit colleges for the past ten years (2012-13 to 2021-22). This analysis included 924 institutions in the 50 states and Washington, DC, and excluded colleges with any missing data as well as special-focus institutions based on the most recent Carnegie classifications.

You can download the dataset here; highlighted colleges have closed since the IPEDS data were collected.
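For anyone who wants to replicate the basic calculation, here is a minimal sketch in pandas. It assumes a long file of IPEDS finance data with one row per college per year; the column names (unitid, year, total_revenues, total_expenses) are placeholders rather than actual IPEDS variable names.

```python
import pandas as pd

# Hypothetical long-format IPEDS finance extract: one row per college per year.
# Column names are placeholders, not actual IPEDS variable names.
df = pd.read_csv("ipeds_finance_private_nonprofits.csv")

# Operating margin is revenues minus expenses; a loss is a negative margin.
df["operating_margin"] = df["total_revenues"] - df["total_expenses"]
df["operating_loss"] = df["operating_margin"] < 0

# Share of colleges posting a loss in each year (first table below).
loss_share_by_year = df.groupby("year")["operating_loss"].mean().mul(100).round(1)
print(loss_share_by_year)

# Number of loss years per college over the decade (distribution discussed below).
loss_years = df.groupby("unitid")["operating_loss"].sum()
print(loss_years.value_counts().sort_index())

# Colleges with eight or more years of losses.
print(loss_years[loss_years >= 8].sort_values(ascending=False))
```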

The first takeaway is that the share of colleges posting an operating loss varied considerably from year to year, with the high-loss years largely driven by investment losses. Setting aside the pandemic-aided 2020-21 year (3.5 percent), the lowest share of operating losses came in 2013-14 (9.1 percent). At the other extreme, two-thirds of colleges posted an operating loss in 2021-22 as pandemic aid began to fade and investments had a rough year.

Year      Operating loss (pct)
2012-13   11.8
2013-14   9.1
2014-15   31.6
2015-16   56.2
2016-17   12.8
2017-18   20.4
2018-19   37.2
2019-20   43.8
2020-21   3.5
2021-22   67.2

Below is the distribution of the number of years that each college posted an operating loss. Seventy-nine colleges never lost money, and most of these institutions have small endowments but growing enrollment. The modal college had an operating loss in three years, and 90% of colleges at least broke even in five years out of the last decade.

On the other hand, 19 colleges posted losses in eight or more years. Notably, nine of these colleges have closed in the last year or so, compared to nine of the other 905 colleges. (Let me know if I’m missing any obvious closures!) The full list of colleges with eight or more years of losses is below.

Name                                            State   Losses
Polytechnic University of Puerto Rico-Orlando   FL      10
Roberts Wesleyan University                     NY      10
Trinity International University-Florida        FL      9
Iowa Wesleyan University                        IA      9
Cambridge College                               MA      9
Fontbonne University                            MO      9
Medaille University                             NY      9
Bethany College                                 WV      9
American Jewish University                      CA      8
Polytechnic University of Puerto Rico-Miami     FL      8
Hawaii Pacific University                       HI      8
Great Lakes Christian College                   MI      8
Alliance University                             NY      8
Cazenovia College                               NY      8
Yeshiva University                              NY      8
Bacone College [not enrolling students]         OK      8
University of Valley Forge                      PA      8
Cardinal Stritch University                     WI      8
Alderson Broaddus University                    WV      8

On a related note, I wrote a piece for the Chronicle of Higher Education that reviews a new book that makes the case for more colleges declaring financial exigency in order to cut academic programs. I think that it is more important than ever for faculty, staff, and students to have a sense of the financial health of their college by being equipped to read budget documents and enrollment projections. That is crucial in order for shared governance to have a chance of working in difficult situations and to help avoid money pit situations like the one in my own backyard.

Discovering Issues with IPEDS Completions Data

The U.S. Department of Education’s Integrated Postsecondary Education Data System (IPEDS) is a great resource in the field of higher education. While it is the foundation of much of my research, the data are self-reported by colleges and occasionally include errors or implausible values. A great example of some of the issues with IPEDS data is this recent Wall Street Journal analysis of the finances of flagship public universities. When their great reporting team started asking questions, colleges often said that their IPEDS submission was incorrect. That’s not good.

I received grants from Arnold Ventures over the summer to fund two new projects. One of them is examining the growth in master’s degree programs over time and the implications for students and taxpayers. (More on the other project sometime soon.) This led me to work with my sharp graduate research assistant Faith Barrett to dive into IPEDS program completions data.

As we worked to get the data ready for analysis, we noticed a surprisingly large number of master’s programs that apparently had been discontinued. Colleges can report zero graduates in a given year if a program still exists, so we assumed that programs with no data (instead of a reported zero) had been discontinued. But when we looked at the years immediately following the apparent discontinuation, graduates showed up again. This suggests that a gap in the data between years with reported graduates is usually not a true discontinuation but rather a data entry error: either failing to enter a positive number of graduates or failing to report zero graduates for a program that remained active. This is not great news for IPEDS data quality.

We then took this a step further by attempting to find evidence that programs that seem to disappear and reappear actually still exist. We used the Wayback Machine (https://archive.org/web/) to look at institutional websites by year to see whether the apparently discontinued program appeared to be active in years without graduates. We found consistent evidence from websites that programs continued to exist during their hiatus in IPEDS data. To provide an example, the Mental and Social Health Services and Allied Professions master’s program at Rollins College did not report data for 2015 after reporting 25 graduates in 2013 and 24 graduates in 2014. They then reported 30 graduates in 2016, 26 graduates in 2017, 27 graduates in 2018, 26 graduates in 2019, and 22 graduates in 2020. Additionally, they had active program websites throughout the period, providing more evidence of a data error.

The table below shows the number of master’s programs (defined at the 4-digit Classification of Instructional Programs level) for each year between 2005 and 2020 after we dropped all programs that never reported any graduates during this period. The “likely true discontinuations” column consists of programs that never reported any graduates to IPEDS following a year of missing data. The “likely false discontinuations” column consists of programs that reported graduates to IPEDS in subsequent years, meaning that most of these are likely institutional reporting errors. These likely false discontinuations made up 31% of all discontinuations during the period, suggesting that data quality is not a trivial issue.

Number of active programs and discontinuations by year, 2005-2020.

Year   Number of programs   Likely true discontinuations   Likely false discontinuations
2005   20,679               195                            347
2006   21,167               213                            568
2007   21,326               567                            445
2008   21,852               436                            257
2009   22,214               861                            352
2010   22,449               716                            357
2011   22,816               634                            288
2012   23,640               302                            121
2013   24,148               368                            102
2014   24,766               311                            89
2015   25,170               410                            97
2016   25,808               361                            66
2017   26,335               344                            35
2018   26,804               384                            41
2019   27,572               581                            213
2020   27,883               742                            23

For the purposes of our analyses, we will recode years of missing data for these likely false discontinuations to have zero graduates. This likely understates the number of graduates for some of these programs, but this conservative approach at least fixes issues with programs disappearing and reappearing when they should not be. Stay tuned for more fun findings from this project!
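For readers who work with the completions files themselves, below is a minimal sketch of how this gap detection and recoding could be done in pandas. It assumes a program-by-year panel has already been built (with a row for every year, even years with no IPEDS record), and the column names are placeholders rather than actual IPEDS variable names; this illustrates the logic rather than reproducing our exact code.

```python
import pandas as pd

# Hypothetical program-by-year panel built from the IPEDS completions files: one row per
# institution, 4-digit CIP code, and year from 2005 to 2020. Column names (unitid, cip4,
# year, graduates) are placeholders; graduates is NaN in years with no IPEDS record.
panel = pd.read_csv("masters_completions_panel.csv")

def classify_gaps(g: pd.DataFrame) -> pd.DataFrame:
    """Flag missing years in one program's history based on whether graduates reappear later."""
    g = g.sort_values("year").copy()
    reported = g["graduates"].notna()
    # True if graduates are reported in any strictly later year.
    later_report = reported[::-1].cummax()[::-1].shift(-1, fill_value=False)
    g["likely_false_discontinuation"] = ~reported & later_report
    g["likely_true_discontinuation"] = ~reported & ~later_report
    return g

panel = pd.concat(classify_gaps(g) for _, g in panel.groupby(["unitid", "cip4"]))

# Conservative fix described above: treat likely false discontinuations as zero graduates.
panel.loc[panel["likely_false_discontinuation"], "graduates"] = 0
```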

There are two broader takeaways from this post. First, researchers relying on program-level completions data should carefully check for likely data errors such as the ones that we found and figure out how to best address them in their own analyses. Second, this is yet another reminder that IPEDS data are not audited for quality and quite a few errors are in the data. As IPEDS data continue to be used to make decisions for practice and policy, it is essential to improve the quality of the data.

Four Big Questions on Carnegie Classifications Changes

It is World Series time, so why not devote a blog post to one of the most fascinating inside-baseball conversations within higher education? The Carnegie classifications have served for decades as perhaps the most prominent way to group colleges into buckets of reasonably similar institutions. Indiana University hosted the Carnegie classifications for a long time, but they moved to the American Council on Education after a rather bizarre planned move to Albion College fell through.

After multiple blue-ribbon panels and meetings across the higher education industry, ACE gave the public a first glimpse of what the Carnegie classifications may look like in 2025. There is still a lot of uncertainty about the final results, but the most concrete change is to the criteria for the coveted Research I designation. Instead of being based on ten criteria, the classification will rest on just two moving forward: $50 million in annual research expenditures and 70 doctorates awarded. Other classifications are also likely to change, but many more details are needed before I can comment.

After thinking about this for a while and having a great conversation with The Chronicle of Higher Education on the proposed changes, here are the four big questions that I have at this point.

(1) This changes incentives for research universities, so expect plenty of strategizing to reach R1 status. Colleges have always been able to appeal their preliminary classification, and it appears that some institutions have successfully shifted from R2 to R1 status before the final classifications were released. But it is a lot easier to game two clearly defined metrics than a large set of variables hidden behind complicated statistical analyses.

Consider the research expenditures figure, which comes from the National Science Foundation’s Higher Education Research and Development (HERD) survey. HERD data include research expenditures from a range of sources, including federal, industry, foundation, state, and institutional funds. While the first four of these sources are difficult to manipulate, colleges can tweak the amount of institutional funding in a way that meaningfully increases total funding. For example, if faculty are expected to spend 40% of their time on research, the institution can legitimately put 40% of that person’s salary on a research line. Some colleges appear to do this already: I found 35 institutions that reported total research expenses between $40 million and $60 million in 2021, and their institutionally funded research expenses ranged from $3.7 million to $31.4 million. So there is probably some room for colleges to increase their figures in completely legitimate ways.
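Here is a rough sketch of how one might look at this in a HERD extract using pandas; the file and column names (total_rd, institution_funds) are placeholders for the relevant HERD fields, not the survey’s actual variable names.

```python
import pandas as pd

# Hypothetical extract from the NSF HERD survey; column names are placeholders.
herd = pd.read_csv("herd_2021.csv")  # inst_name, total_rd, institution_funds (dollars)

# Institutions near the proposed $50 million R1 threshold.
near_threshold = herd[herd["total_rd"].between(40_000_000, 60_000_000)].copy()

# How much of their total R&D is institutionally funded (the most adjustable source)?
near_threshold["institutional_share"] = (
    near_threshold["institution_funds"] / near_threshold["total_rd"]
)
print(len(near_threshold))  # 35 institutions in the 2021 data described above
print(near_threshold["institution_funds"].describe())
```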

The previous R1 criteria heavily rewarded doctoral degree production in a wide range of fields, and now that is gone. This means that health science-focused institutions will now qualify for R1 status, and universities can now feel comfortable reducing their breadth of PhD programs without losing their coveted R1 status. Humanities PhD programs really didn’t need this change, but it is happening anyway.

(2) Will Research I status have less meaning as the club expands? Between 2005 and 2021, the number of universities classified as Research I increased from 96 to 146. The Chronicle’s data team estimates that the number would grow to approximately 168 in 2025 based on current data. Institutions that gain Research I status are darn proud of themselves and have used their newfound status to pursue additional funding. But as the group of Research I institutions continues to grow, expect distinctions within the group (such as AAU membership) to become more important markers of prestige.

(3) Will other classifications of colleges develop? The previous Carnegie classifications were fairly stable and predictable for decades, and this looks likely to change in a big way in 2025. This provides a rare opportunity for others to get into the game of trying to classify institutions into similar groups. Institutional researchers and professional associations may try to rely on the old classifications for a while if the new ones do not match their needs, but there is also the possibility that someone else develops a set of criteria for new classifications.

(4) How will college rankings respond? Both the U.S. News and Washington Monthly rankings have historically relied on Carnegie classifications to help group colleges, with the research university category being used to define national universities and the baccalaureate colleges/arts and sciences category defining liberal arts colleges. But as more colleges have gained research university status, the national university category has swelled to about 400 institutions. The creation of a new research college designation and the unclear fate of master’s and baccalaureate institutions classifications are going to force rankings teams to respond.

I’m not just writing this as a researcher in the higher ed field, as I have been the Washington Monthly data editor since 2012. I have some thinking ahead of me about how best to group colleges for meaningful comparisons. And ACE will be happy to have colleges stop calling them about how their classification affects where they land in the U.S. News rankings (looking at you, High Point).

If you made it to the end of this piece, you’re as interested in this rather arcane topic as I am. It will be interesting to see how this all plays out over the next year or two.

Making Sense of Changes to the U.S. News Rankings Methodology

Standard disclaimer: I have been the data editor for Washington Monthly’s rankings since 2012. All thoughts here are solely my own.

College rankings season officially concluded today with the release of the newest year of rankings from U.S. News and World Report. I wrote last year about things that I was watching for in the rankings industry, particularly regarding colleges no longer voluntarily providing data to U.S. News. The largest ranker announced a while back that this year’s rankings would not be based on data provided by colleges, and that is mostly true. (More on this below.)

When I see a set of college rankings, I don’t even look at the position of individual colleges. (To be perfectly honest, I don’t pay attention to this when I put together the Washington Monthly rankings every year.) I look at the methodology to see what their priorities are and what has changed since last year. U.S. News usually puts together a really helpful list of metrics and weights, and this year is no exception. Here are my thoughts on changes to their methodology and how colleges might respond.

Everyone is focusing more on social mobility. Here, I will start by giving a shout-out to the new Wall Street Journal rankings, which were reconstituted this year after moving away from a partnership with Times Higher Education. Fully seventy percent of these rankings are tied to metrics of social mobility, with a massive survey of students and alumni (20%) and diversity metrics (10%) making up the remainder. Check them out if you haven’t already. I also like Money magazine’s rankings, which are focused on social mobility.

U.S. News is slowly creeping in the direction that other rankers have taken over the last decade by including a new metric: the share of graduates earning more than $32,000 per year (from the College Scorecard). They also added graduation rates for first-generation students using College Scorecard data, although this covers only students who received federal financial aid. This is a metric worth watching, especially as completion flags improve in the Scorecard data. (They may already be good enough.)

Colleges that did not provide data were evaluated slightly differently. After a well-publicized scandal involving Columbia University, U.S. News mostly moved away from the Common Data Set, a voluntary data collection also involving Peterson’s and the College Board, but still relied on it for the share of full-time faculty, faculty salaries, and student-to-faculty ratios. If colleges did not provide those data, U.S. News used IPEDS data instead. To give an example of the difference, here is what the methodology says about the percentage of full-time faculty:

“Schools that declined to report faculty data to U.S. News were assessed on fall 2021 data reported to the IPEDS Human Resources survey. Besides being a year older, schools reporting to IPEDS are instructed to report on a broader group of faculty, including those in roles that typically have less interaction with undergraduates, such as part-time staff working in university hospitals.”

I don’t know if colleges are advantaged or disadvantaged by reporting Common Data Set data, but I would bet that institutional research offices around the country are running their analyses right now to see which method gives them a strategic advantage.

The reputation survey continues to struggle. One of the most criticized portions of the U.S. News rankings is their annual survey sent to college administrators with the instructions to judge the academic quality of other institutions. There is a long history of college leaders providing dubious ratings or trying to game the metrics by judging other institutions poorly. As a result, the response rate has declined from 68% in 1989 to 48% in 2009 and 30.8% this year. Notably, response rates were much lower at liberal arts colleges (28.6%) than national universities (44.1%).

Another interesting nugget from the methodology is the following:

“Whether a school submitted a peer assessment survey or statistical survey had no impact on the average peer score it received from other schools. However, new this year, nonresponders to the statistical survey who submitted peer surveys had their ratings of other schools excluded from the computations.”

To translate that into plain English, if a college does not provide data through the Common Data Set, the surveys their administrators complete get thrown out. That seems like an effort to tighten the screws a bit on CDS participation.

New research metrics! It looks like there is a new partnership with the publishing giant Elsevier to provide data on the citation counts and impact of publications, for national universities only. It’s just four percent of the overall score, but I see it more as a preview of coming attractions for graduate program rankings than anything else. U.S. News is really vulnerable to a boycott among graduate programs in most fields, so introducing external data sources is a way to shore up that part of their portfolio.

What now? My biggest question is about whether institutions will cooperate in providing Common Data Set data (since apparently U.S. News would still really like to have it) and completing reputation surveys. The CDS data help flesh out institutional profiles and it’s a nice thing for U.S. News to have on their college profile pages. But dropping the reputation survey, which is worth 20% of the total score, would result in major changes. I have been surprised that efforts to stop cooperating with U.S. News have not centered on the reputation survey, but maybe that is coming in the future.

Otherwise, I expect to continue to see growth in the number of groups putting out rankings each year as the quantity and quality of federal data sources continue to improve. Just pay close attention to the methodology before promoting rankings!

Examining Trends in Debt to Earnings Ratios

I was just starting to wonder when the U.S. Department of Education would release a new year of College Scorecard data, so I wandered over to the website to check for anything new. I was pleasantly surprised to see a date stamp of April 25 (today!), which meant that it was time for me to give my computer a workout.

There are a lot of great new data elements in the updated Scorecard. Some features include a fourth year of post-graduation earnings, information on the share of students who stayed in state after college, earnings by Pell receipt and gender, and an indicator for whether no, some, or all programs in a field of study can be completed via distance education. There are plenty of things to keep me busy for a while, to say the least. (More on some of the ways I will use the data coming soon!)

In this update, I share data on trends in debt to earnings ratios by field of study. I used median student debt accumulated by the first Scorecard cohorts (2014-15 and 2015-16 leavers) and tracked median earnings one, two, three, and four years after graduating college. The downloadable dataset includes 34,466 programs with data for each element.
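For anyone curious how a table like the one below can be built, here is a hedged sketch of the aggregation in pandas. It assumes the debt and earnings measures have already been pulled into one program-level file, and the column names are placeholders rather than actual Scorecard variable names.

```python
import pandas as pd

# Hypothetical program-level file built from College Scorecard field-of-study data.
# Placeholder columns: credential_level, median_debt, earn_1yr ... earn_4yr.
programs = pd.read_csv("scorecard_programs.csv")

# Debt-to-earnings ratio at each horizon after leaving college.
for yr in range(1, 5):
    programs[f"dte_{yr}yr"] = programs["median_debt"] / programs[f"earn_{yr}yr"]

# Average ratio by credential level (the structure of the table below).
ratio_cols = [f"dte_{yr}yr" for yr in range(1, 5)]
summary = programs.groupby("credential_level")[ratio_cols].mean().round(3)
print(summary)

# Correlation between debt and earnings four years out (noted after the table).
print(round(programs["median_debt"].corr(programs["earn_4yr"]), 3))
```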

The table below shows debt-to-earnings ratios for the four most common credential levels. The good news is that the average ratio ticked downward over time for each credential level, with bachelor’s and master’s degrees showing steeper declines in their ratios than undergraduate certificates and associate degrees.

Credential    1 year   2 years   3 years   4 years
Certificate   0.455    0.430     0.421     0.356
Associate     0.528    0.503     0.473     0.407
Bachelor’s    0.703    0.659     0.569     0.485
Master’s      0.833    0.793     0.734     0.650

The scatterplot shows debt versus earnings four years later across all credential levels. There is a positive correlation (correlation coefficient of 0.454), but still quite a bit of noise present.

Enjoy the new data!

Sharing a Dataset of Program-Level Debt and Earnings Outcomes

Within a couple of hours of posting my comments on the Department of Education’s proposal to create a list of programs with low financial value, I received multiple inquiries about whether there was a user-friendly dataset of current debt-to-earnings ratios for programs. Since I work with College Scorecard data on a regular basis and have used the data to write about debt-to-earnings ratios, it only took a few minutes to put something together that I hope will be useful.

To create a debt-to-earnings ratio that covered as many programs as possible, I pulled median student debt accumulated at that institution for the cohorts of students who left college in the 2016-17 or 2017-18 academic years and matched it with earnings for those same cohorts one calendar year later (calendar year 2018 or 2019). The College Scorecard has some earnings data more than one year out at this point, but a much smaller share of programs are covered. I then calculated a debt-to-earnings ratio. And for display purposes, I also pulled median parent debt from that institution.
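For those who want to rebuild something similar from the raw files, here is a minimal sketch of the filtering and ratio calculation in pandas; the column names are placeholders rather than actual Scorecard field-of-study variable names.

```python
import pandas as pd

# Hypothetical field-of-study extract from the College Scorecard. Placeholder columns:
# unitid, inst_name, cipcode, credential, median_student_debt, median_parent_debt,
# earnings_1yr (all for the 2016-17/2017-18 leaving cohorts).
fos = pd.read_csv("scorecard_field_of_study.csv")

# Keep programs with both debt and earnings reported, then compute the ratio.
complete = fos.dropna(subset=["median_student_debt", "earnings_1yr"]).copy()
complete["debt_to_earnings"] = (
    complete["median_student_debt"] / complete["earnings_1yr"]
)

print(len(complete))                 # programs with both elements reported
print(complete["unitid"].nunique())  # institutions represented
```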

The resulting dataset covers 45,971 programs at 5,033 institutions with data on both student debt and earnings for those same cohorts. You can download the dataset here in Excel format and use filter/sort functions to your heart’s content.

Comments on a Proposed Federal List of Low-Value Programs

The U.S. Department of Education recently announced that they will be creating a list of low-value postsecondary programs, and they requested input from the public on how to do so. They asked seven key questions, and I put together 3,000-plus words of comments to submit in response. Here, I list the questions and briefly summarize my key points.

Question 1: What program-level data and metrics would be most helpful to students to understand the financial (and other) consequences of attending a program?

Four data elements would be helpful. The first is program-level completion rates, especially for graduate or certificate programs where students are directly admitted into programs. Second, given differential tuition and different credit requirements across programs, time to completion and sticker/net prices by program would be incredibly valuable. The last two are debt and earnings, which are largely present in the current College Scorecard.

Question 2: What program-level data and metrics would be most helpful to understand whether public investments in the program are worthwhile? What data might be collected uniformly across all students who attend a program that would help assess the nonfinancial value created by the program?

I would love to see information on federal income taxes paid by former students and use of public benefits (if possible). More information on income-driven repayment use would also be helpful. Finally, there is a great need to rethink definitions of “public service,” as it currently depends on the employer instead of the job function. That is a concern in fields like nursing that send graduates to do good things in for-profit and nonprofit settings.

Question 3: In addition to the measures or metrics used to determine whether a program is placed on the low-financial-value program list, what other measures and metrics should be disclosed to improve the information provided by the list?

Nothing too fancy here. Just list any sanctions/warnings from the federal government, state agencies, or accreditors along with general outcomes for all students at the undergraduate level to account for major switching.

Question 4: The Department intends to use the 6-digit Classification of Instructional Program (CIP) code and the type of credential awarded to define programs at an institution. Should the Department publish information using the 4-digit CIP codes or some other type of aggregation in cases where we would not otherwise be able to report program data?

This is my nerdy honey hole, as I have spent a lot of time thinking about these issues. The two biggest issues with student debt/earnings data right now are that some campuses get aggregated together in reporting and that it is impossible to separate outcomes for fully online versus hybrid/in-person programs. Those nuts need to be cracked, and then data should be aggregated up if cell sizes are too small.

Question 5: Should the Department produce only a single low-financial-value program list, separate lists by credential level, or use some other breakdown, such as one for graduate and another for undergraduate programs?

Separate out by credential level and ideally have a good search function by program of study. Otherwise, some low-paying programs will clog up the lists and not let students see relatively lousy programs in higher-paying areas.

Question 6: What additional data could the Department collect that would substantially improve our ability to provide accurate data for the public to help understand the value being created by the program? Please comment on the value of the new metrics relative to the burden institutions would face in reporting information to the Department.

I would love to see program-level completion rates (where appropriate) and better pricing information at the program level. Those items aren’t free to implement, so I would gladly explore other cuts to IPEDS (such as the academic libraries survey) to help reduce additional burden.

Question 7: What are the best ways to make sure that institutions and students are aware of this information?

Colleges will be aware of this information without the federal government doing much, and they may respond to information that they didn’t have before. But colleges don’t have a great record of responding to public shaming if they already knew that affordability was a concern, so I’m not expecting massive changes.

The College Scorecard produced only small changes around the margins in student behavior, primarily driven by more advantaged students. I’m not an expert in reaching out to prospective students, but I know that outreach to as many groups as possible is key.

What Happened to College Spending During the Pandemic?

It’s definitely the holiday season here at Kelchen on Education HQ (my home office in beautiful east Tennessee). My Christmas tree is brightly lit and I’m certainly enjoying my share of homemade cookies right now. But as a researcher, I got an early gift this week when the U.S. Department of Education released the latest round of data for the Integrated Postsecondary Education Data System (IPEDS). Yes, I’m nerdy, but you probably are too if you’re reading this.

This data update included finance data from the 2020-21 fiscal year—the first year to be fully affected by the pandemic following a partially affected 2019-20 fiscal year. At the time, I wrote plenty about how I expected 2020-21 to be a challenging year for institutional finances. Thanks to stronger-than-expected state budgets and timely rounds of federal support, colleges largely avoided the worst-case scenario of closure. But they cut back their spending wherever possible, with personnel being the easiest area to cut. I took cuts to salary and retirement benefits during the 2020-21 academic year at my last job, and that was a university that made major cuts to staff while protecting full-time faculty employment.

In this post, I took a look at the percentage change in total expenditures over each of the last four years with data (2017-18 through 2020-21) for degree-granting public and private nonprofit institutions. These values are not adjusted for inflation.
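For readers who want to reproduce this kind of table, here is a minimal sketch of the calculation in pandas. It assumes a long panel of IPEDS finance data with one row per institution per fiscal year; the column names (unitid, sector, year, total_expenses) are placeholders rather than actual IPEDS variable names.

```python
import pandas as pd

# Hypothetical panel of IPEDS total expenditures; column names are placeholders.
fin = pd.read_csv("ipeds_expenditures.csv").sort_values(["unitid", "year"])

# Year-over-year percentage change in total spending, within each institution.
fin["pct_change"] = fin.groupby("unitid")["total_expenses"].pct_change() * 100

# Bucket changes the same way as the tables below.
bins = [-float("inf"), -10, 0, 10, float("inf")]
labels = [">10% decrease", "<10% decrease", "<10% increase", ">10% increase"]
fin["change_category"] = pd.cut(fin["pct_change"], bins=bins, labels=labels)

# Median change and category counts by sector and year.
print(fin.groupby(["sector", "year"])["pct_change"].median().round(1))
print(fin.groupby(["sector", "year"])["change_category"].value_counts().unstack())
```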

Changes in total spending, public 4-years (n=550)

Characteristic        2020-21   2019-20   2018-19   2017-18
Median change (pct)   -1.2      2.3       2.2       2.6
>10% decrease         58        19        39        19
<10% decrease         256       152       141       151
<10% increase         174       318       316       307
>10% increase         62        62        54        72

Changes in total spending, private nonprofit 4-years (n=1,002)

Characteristic        2020-21   2019-20   2018-19   2017-18
Median change (pct)   -1.8      -0.5      2.3       2.1
>10% decrease         119       53        35        22
<10% decrease         472       494       262       305
<10% increase         340       415       620       595
>10% increase         71        39        79        73

Changes in total spending, public 2-years (n=975)

Characteristic        2020-21   2019-20   2018-19   2017-18
Median change (pct)   1.0       3.6       1.4       1.5
>10% decrease         77        45        79        52
<10% decrease         353       222       305       330
<10% increase         406       548       488       489
>10% increase         139       160       103       104

These numbers tell several important stories. First, spending in the community college sector was affected less than the four-year sector. This could be due to fewer auxiliary enterprises (housing, dining, and the like) that were affected by the pandemic, or it could be due to the existing leanness of their operations. As community college enrollments continue to decline, this is worth watching when new data come out around this time next year.

Second, private nonprofit colleges were the only sector to cut spending in the 2019-20 academic year. The pandemic likely nudged the median number below zero from what it otherwise would have been, as these tuition-dependent institutions were trying to respond immediately to pressures in spring 2020. Finally, there is a lot of variability in institutional expenses from year to year. If you are interested in a particular college, reading its financial statements can be a great way to learn more about what is going on beyond what is available in IPEDS data.

A quick and unrelated final note: I have gotten to know many of you all via Twitter, and it is far from clear whether the old blue bird will be operational in the future. I will stay on Twitter as long as it’s a useful and enjoyable experience, although I recognize that my experience has been better than many others. You can follow my blog directly by clicking “follow” on the bottom right of my website, and you can also find me on LinkedIn. I haven’t gone to any of the other social media sites yet, but that may change in the future.

Have a safe and wonderful holiday season and let’s have a great 2023!

What is Next for College Rankings?

It’s safe to say that leaders in higher education typically have a love/hate relationship with college rankings. Traditionally, they love them when they do well and hate them when they move down a few pegs. Yet, outside of a small number of liberal arts colleges, few institutions have made the choice not to cooperate with the 800-pound gorilla in the college rankings industry–U.S. News and World Report. This is because research has shown that the profile of new students changes following a decline in the rankings and because many people care quite a bit about prestige.

This is what makes the recent decision by Yale Law School, followed by ten other law schools (and likely more by the time you read this), to stop cooperating with the U.S. News ranking of those programs so fascinating. In this post, I offer some thoughts on what is next for college rankings based on my experiences as a researcher and as the longtime data editor for Washington Monthly’s rankings.

Prestige still matters. There are two groups of institutions that feel comfortable ignoring U.S. News’s implied threat to drop colleges lower in the rankings if they do not voluntarily provide data. The first group is broad-access institutions with a mission to serve all comers within their area, as these students tend not to look at rankings and U.S. News relegates them to the bottom of the list anyway. Why bother sending them data if your ranking won’t change?

The second group is institutions that already think they are the most prestigious, and thus have no need for rankings to validate their opinions. This is what is happening in the law school arena right now. Most of the top 15 institutions have announced that they will no longer provide data, and to some extent this group is a club of its own. Will this undermine the U.S. News law school rankings if none of the boycotting programs are where people expect them to be? That will be fascinating to watch.

What about the middle of the pack? The institutions that have been most sensitive to college rankings are the not-quite-elite but still selective universities that are trying to enhance their profiles and jump over some of their competitors. Moving up in the rankings is often part of their strategic plans and can increase presidential salaries at public universities, and U.S. News metrics have played a large part in how Florida funds its public universities. Institutional leaders will be under intense pressure to keep cooperating with U.S. News so they can keep moving up.

Another item to keep an eye on: I would not be surprised if conservative state legislators loudly object to any moves away from rankings among public universities. In an era of growing political polarization and concerns about so-called “woke” higher education, this could serve as yet another flashpoint. Expect the boycotts to be at the most elite private institutions and at blue-state public research universities.

Will the main undergraduate rankings be affected? Graduate program rankings depend heavily on data provided by institutions because there are often no other available data sources. Law schools are a little different than many other programs because the accrediting agency (the American Bar Association) collects quite a bit of useful data. For programs such as education, U.S. News is forced to rely on data provided by institutions along with its reputational survey.

At the undergraduate level, U.S. News relies on two main data sources that are potentially at risk from boycotts. The first is the Common Data Set, which is a data collection partnership among U.S. News, Peterson’s, and the College Board. The rankings scandal at Columbia earlier this year came out of data anomalies that a professor identified based on their Common Data Set submissions, and Columbia just started releasing their submission to the public for the first time this fall. Opting out of the Common Data Set affects the powerful College Board, so institutions may not want to do that. The second is the long-lamented reputational survey, which has a history of being gamed by institutional leaders and has suffered from falling response rates. At some point, U.S. News may need to reconsider its methodology if more leaders decline to respond.

From where I sit as the Washington Monthly data editor, it’s nice to not rely on any data that institutions submit. (We don’t have the staff to do large data collections, anyway.) But I think the Common Data Set will survive, although there may need to be some additional checks put into the data collection process to make sure numbers are reasonable. The reputational survey, however, is slowly fading away. It would be great to see a measure of student success replace it, and I would suggest something like the Gallup Alumni Survey. That would be a tremendous addition to the U.S. News rankings and may even shake up the results.

Will colleges or programs ask not to be ranked? So far, the law school announcements that I have seen have mentioned that programs will not be providing data to U.S. News. But they could go one step farther and ask to be completely excluded from the rankings. From an institutional perspective, if most of the top-15 law schools opt out, is it better for them to be ranked in the 30s (or something like that) or just to not appear at all on paper? This would create an ethical question to ponder. Rankings exist in part to provide useful information to students and their families, but should a college that doesn’t want to be ranked still show up based on whatever data sources are available? I don’t have a great answer to that one.

Buckle up, folks. The rankings fun is likely to continue over the next year.

Why I’m Skeptical of Cost of Attendance Figures

In the midst of a fairly busy week for higher education (hello, Biden’s student loan forgiveness and income-driven repayment plans!), the National Center for Education Statistics began adding a new year of data into the Integrated Postsecondary Education Data System. I have long been interested in cost of attendance figures, as colleges often face intense pressure to keep these numbers low. A higher cost of attendance means a higher net price, which makes colleges look bad even if this number is driven by student living allowances that colleges do not receive. For my scholarly work on this, see this Journal of Higher Education article—and I also recommend this new Urban Institute piece on the topic.

After finishing up a bunch of interviews on student loan debt, I finally had a chance to dig into cost of attendance data from IPEDS for the 2020-21 and 2021-22 academic years. I focused on the reported cost of attendance for students living off campus at 1,568 public and 1,303 private nonprofit institutions (academic-year reporters) with data in both years. This period is notable for two things: more modest increases in tuition and sharply higher living costs due to the pandemic and the resulting changes to college attendance and society at large.

And the data bear this out on listed tuition prices. The average tuition increase was just 1.67%, with similar increases across public and private nonprofit colleges. A total of 116 colleges had lower listed tuition prices in fall 2021 than in fall 2020, while about two-thirds of public and one-third of private nonprofit colleges did not increase tuition for fall 2021. This resulted in tuition increases well below the rate of inflation, which is generally good news for students but bad news for colleges.

The cost of attendance numbers, as shown below, look a little different. Nearly three times as many institutions (322) reported a lower cost of attendance than reported lower tuition, which is surprising given rising living costs. More colleges also reported increasing the cost of attendance relative to increasing tuition, with fewer colleges reporting no changes.

Changes in tuition and cost of attendance, fall 2020 to fall 2021.

                      Public (n=1,568)   Private (n=1,303)
Tuition
  Decrease            64                 52
  No change           955                439
  Increase            549                812
Cost of attendance
  Decrease            188                134
  No change           296                172
  Increase            1,084              997

Some of the reductions in cost of attendance are sizable without a corresponding cut in tuition. For example, California State University-Monterey Bay reduced its listed cost of attendance from $31,312 to $26,430 while tuition increased from $7,143 to $7,218. [As Rich Hershman pointed out on Twitter, this is potentially due to California updating its cost of attendance survey instead of increasing it by inflation every year.]

Texas Wesleyan University increased tuition from $33,408 to $34,412, while the cost of attendance fell from $52,536 to $49,340. These decreases could be due to a more accurate estimate of living expenses, moving to open educational resources instead of textbooks, or reducing student fees. But the magnitude of these decreases during an inflationary period leads me to continue questioning the accuracy of cost of attendance values or the associated net prices.
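As a rough sketch of how to flag cases like these, the pandas snippet below compares the two years of tuition and cost of attendance; the file and column names are placeholders rather than actual IPEDS variable names.

```python
import pandas as pd

# Hypothetical wide file with off-campus cost of attendance and tuition for both falls.
# Columns (placeholders): unitid, inst_name, sector, tuition_2020, tuition_2021,
# coa_2020, coa_2021.
costs = pd.read_csv("ipeds_coa_2020_2021.csv")

costs["tuition_change"] = costs["tuition_2021"] - costs["tuition_2020"]
costs["coa_change"] = costs["coa_2021"] - costs["coa_2020"]

# Colleges whose listed cost of attendance fell even as tuition rose,
# like the examples discussed above.
flagged = costs[(costs["coa_change"] < 0) & (costs["tuition_change"] > 0)]
print(
    flagged.sort_values("coa_change")[["inst_name", "tuition_change", "coa_change"]].head(20)
)
```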

As a quick note, this week marks the ten-year anniversary of my blog. Thanks for joining me through 368 posts! I don’t have the time to do as many posts as I used to, but it is sure nice to have an outlet for some occasional thoughts and data pieces.