Are “Affordable Elite” Colleges Growing in Size, or Just Selectivity?

A new addition to this year’s Washington Monthly college guide is a ranking of “Affordable Elite” colleges. Given that many students and families (rightly or wrongly) focus on trying to get into the most selective colleges, we decided to create a special set of rankings covering only the 224 most highly competitive colleges in the country (as defined by Barron’s). Colleges are assigned scores based on student loan default rates, graduation rates, graduation rate performance, the percentage of students receiving Pell Grants, and the net price of attendance. UCLA, Harvard, and Williams made the top three, with four University of California campuses in the top ten.

I received an interesting criticism of the list from Sara Goldrick-Rab, a professor at the University of Wisconsin-Madison (and my dissertation chair in graduate school). Her critique noted that the rankings account for neither the size of a school nor the type of its admissions standards. She wrote:

“Many schools are so tiny that they educate a teensy-weensy fraction of American undergraduates. So they accept 10 poor kids a year, and that’s 10% of their enrollment. Or maybe even 20%? So what? Why is that something we need to laud at the policy level?”

While I don’t think that the size of the college should be a part of the rankings, it’s certainly worth highlighting the selective colleges that have expanded over time compared to those that have remained the same size in spite of an ever-growing applicant pool.

I used undergraduate enrollment data from the fall semesters of 1980, 1990, 2000, and 2012 from IPEDS for both the 224 colleges in the Affordable Elite list and 2,193 public and private nonprofit four-year colleges not on the list. I calculated the percentage change between each year and 2012 for the selective colleges on the Affordable Elite list and the other less-selective colleges to get an idea of whether selective colleges are curtailing enrollment.
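The growth figure here is an ordinary percentage change between an earlier fall's enrollment and 2012's. A minimal sketch of the calculation, using a hypothetical college rather than the actual IPEDS data:

```python
def pct_change(old_enrollment: float, new_enrollment: float) -> float:
    """Percentage change from an earlier fall's enrollment to 2012's."""
    return (new_enrollment - old_enrollment) / old_enrollment * 100

# Hypothetical college that grew from 3,000 to 3,354 undergraduates
print(round(pct_change(3_000, 3_354), 1))  # 11.8
```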

[UPDATE: The fall enrollment numbers include all undergraduates, including nondegree-seeking students. This doesn’t have a big impact on most colleges, but does at Harvard, where about 30% of total undergraduate enrollment is not seeking a degree. This means that enrollment growth may be overstated. Thanks to Ben Wildavsky for leading me to investigate this point.]

The median Affordable Elite college enrolled 3,354 students in 2012, compared to 1,794 students at the median less-selective college. The percentage change at the median college between each year and 2012 is below:

Period       Affordable Elite   Less selective
2000-2012         10.9%              18.3%
1990-2012         16.0%              26.3%
1980-2012         19.9%              41.7%


The distribution of growth rates is shown below:


So, as a whole, less-selective colleges are growing at a more rapid pace than the ones on the Affordable Elite list. But do higher-ranked elite colleges grow faster? The scatterplot below suggests not: the correlation between rank and growth is -0.081, meaning that higher-ranked colleges grow at slightly slower rates than lower-ranked colleges.
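That figure is a Pearson correlation between a college's rank and its growth rate. As a self-contained sketch, here it is computed by hand using only the top ten colleges' 2000-2012 growth rates from the table below; the -0.081 comes from all 224 colleges, so this tiny sample need not match it:

```python
# 2000-2012 growth rates for the top ten Affordable Elite colleges
ranks  = list(range(1, 11))
growth = [11.7, 6.9, 2.5, 3.4, 0.3, 13.7, 36.9, 37.5, -1.7, 7.2]

# Pearson correlation: covariance divided by the product of std. deviations
n = len(ranks)
mean_r = sum(ranks) / n
mean_g = sum(growth) / n
cov   = sum((r - mean_r) * (g - mean_g) for r, g in zip(ranks, growth))
var_r = sum((r - mean_r) ** 2 for r in ranks)
var_g = sum((g - mean_g) ** 2 for g in growth)
pearson = cov / (var_r * var_g) ** 0.5

print(round(pearson, 3))  # positive for this small sample, unlike the
                          # slightly negative value across all 224 colleges
```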


But some elite colleges have grown. The top ten colleges in the Affordable Elite list have the following growth rates:

Rank  Name (* means public)                       2012 enrollment   % change to 2012 from:
                                                                     2000    1990    1980
1     University of California–Los Angeles (CA)*       27,941        11.7    15.5    28.0
2     Harvard University (MA)                          10,564         6.9     1.7    62.3
3     Williams College (MA)                             2,070         2.5     3.2     6.3
4     Dartmouth College (NH)                            4,193         3.4    11.1    16.8
5     Vassar College (NY)                               2,406         0.3    -1.8     1.9
6     University of California–Berkeley (CA)*          25,774        13.7    20.1    21.9
7     University of California–Irvine (CA)*            22,216        36.9    64.6   191.6
8     University of California–San Diego (CA)*         22,676        37.5    57.9   152.5
9     Hanover College (IN)                              1,123        -1.7     4.5    11.0
10    Amherst College (MA)                              1,817         7.2    13.7    15.8


Some elite colleges have not grown since 1980, including the University of Pennsylvania, MIT, Boston College, and the University of Minnesota. Public colleges have generally grown slightly faster than private colleges (the UC colleges are a prime example), but there is substantial variation in their growth.

Are Some Elite Colleges Understating Net Prices?

As a faculty member researching higher education finance, I’m used to seeing the limitations in federal data available to students and their families as they choose colleges. For example, the net price of attendance (tuition and fees, room and board, books, and other expenses, less any grants received) is calculated only for first-time, full-time students, and therefore excludes many students with great financial need. But a new graphic-heavy report from The Chronicle of Higher Education on net price revealed another huge limitation of the net price data.

The report, titled “Are Poor Families Really Paying Half Their Income at Elite Colleges?” looked at the two ways that some of the most selective public and private colleges calculate household income. About 400 colleges require students to file the CSS/Financial Aid PROFILE (or PROFILE for short) in addition to the FAFSA in order to receive institutional aid; unlike the FAFSA, the PROFILE requires all but the lowest-income students to pay an application fee. Selective colleges require the PROFILE because it includes more questions about household assets than the FAFSA, with the goal of getting a more complete picture of middle-income and upper-income families’ ability to pay for college. This form isn’t really necessary for families with low incomes and little wealth, and can serve as a barrier to attending certain colleges, as noted by Rachel Fishman of the New America Foundation.

The Chronicle piece looked at income data from Notre Dame, which provided both the FAFSA and PROFILE definitions of income. The PROFILE definition of family income resulted in far fewer students in the lowest income bracket (below $30,000 per year) than the FAFSA definition. Because Notre Dame targets more aid to the neediest students, the net price using PROFILE income below $30,000 (the very lowest-income students) was just $4,472 per year, compared to $11,626 using the FAFSA definition.

Notre Dame reported net prices to the Department of Education using the FAFSA definition of family income, which is the same way that all non-PROFILE colleges report income for net price. But the kicker in the Chronicle piece is that apparently some colleges use the PROFILE definition of income to generate net price data for the federal government. These selective colleges look much less expensive than a college like Notre Dame that reports data like most colleges do, giving them great publicity. Reporting PROFILE-based net prices can also improve these colleges’ performance on Washington Monthly’s list of best bang-for-the-buck colleges, as we use the average net price paid by students making less than $75,000 per year in the metric. (But many of the elite colleges don’t make the list because they fail to enroll at least 20% Pell recipients in their student body.)

The Department of Education should put forth language clarifying that net price data should be based on the FAFSA definition of income and not the PROFILE definition that puts fewer students in the lower income brackets and results in a seemingly lower net price. Colleges can report both FAFSA and PROFILE definitions on their own websites, but federal data need to be consistent across colleges.

Building a Better Student Loan Default Measure

Student loan default rates have been a hot political topic as of late given increased accountability pressures at the federal level. Currently, colleges can lose access to all federal financial aid (grants as well as loans) if, for three consecutive cohorts, more than 25% of students default on their loans within two years of leaving college. Starting later this year, the measure used will be the default rate within three years of leaving college, and the cutoff for federal eligibility will rise to 30%. (Colleges can appeal this result if there are relatively few borrowers.)

But few students should ever have to default on their loans given the availability of various income-based repayment (IBR) plans. (PLUS loans typically aren’t eligible for income-based repayment, but their default rates oddly aren’t tracked and aren’t used for accountability purposes.) If a former student enrolled in IBR falls on tough times, his or her monthly payment will go down—potentially to zero if income is less than 150% of the federal poverty line. As a result, savvy colleges should be encouraging their students to enroll in IBR in order to reduce default rates.
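To make the mechanics concrete, here is a simplified sketch of the classic IBR payment calculation: 15% of income above 150% of the federal poverty guideline, spread over twelve months. The guideline value below is illustrative, and real IBR includes payment caps and other wrinkles this ignores:

```python
def monthly_ibr_payment(agi: float, poverty_guideline: float,
                        share: float = 0.15) -> float:
    """Simplified classic IBR: 15% of discretionary income (income above
    150% of the poverty guideline), paid monthly. Floors at zero, so a
    borrower below 150% of the poverty line pays nothing."""
    discretionary = max(0.0, agi - 1.5 * poverty_guideline)
    return share * discretionary / 12

# Illustrative 2014 guideline for a two-person household; 1.5x this is
# roughly the ~$23,000 threshold cited later in this piece
GUIDELINE_TWO_PERSON = 15_730

print(round(monthly_ibr_payment(20_000, GUIDELINE_TWO_PERSON), 2))  # 0.0
print(round(monthly_ibr_payment(40_000, GUIDELINE_TWO_PERSON), 2))  # 205.06
```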

And more students are enrolling in IBR. Jason Delisle at the New America Foundation analyzed new Federal Student Aid data out this week that showed that the number of students in IBR doubled from 950,000 to 1.9 million in the last year while outstanding loan balances went from $52.2 billion to $101.0 billion. The federal government’s total Direct Loan portfolio increased from $361.3 billion to $464.3 billion in the last year, meaning that IBR was responsible for nearly half of the increase in loan dollars.

This shift to IBR means that the federal government needs to consider new options for holding colleges accountable for their outcomes. Some options include:

(1) Use a longer default window. The “standard” loan repayment plan is ten years, but defaults are only tracked for three years. A longer window wouldn’t give an accurate picture of outcomes if more students enroll in IBR, but it would provide useful information on students who expect to do well enough after college that standard payments will be a better deal than IBR. This probably requires replacing the creaky National Student Loan Data System, which may not be able to handle that many more data requests.

(2) Look at the percentage of students who don’t pay anything under IBR. This would measure the percentage of students making more than 150% of the poverty line, or about $23,000 per year for a former borrower with one other family member. Even borrowers earning the woeful salaries common in many public service jobs (such as teaching) would likely have to pay something under this measure.

(3) Look at the total amount repaid compared to the amount borrowed. If the goal is to make sure the federal government gets its money back, a measure of the percentage of funds repaid might be useful. Colleges could even be held accountable for part of the unpaid amount if desired.
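Options (2) and (3) are both simple ratios. A hypothetical sketch of what either metric could look like at one college (the cohort numbers are invented):

```python
def share_paying_zero(monthly_payments: list[float]) -> float:
    """Option (2): percentage of IBR borrowers whose monthly payment is $0."""
    return sum(p == 0 for p in monthly_payments) / len(monthly_payments) * 100

def repayment_rate(total_repaid: float, total_borrowed: float) -> float:
    """Option (3): percentage of borrowed dollars repaid so far."""
    return total_repaid / total_borrowed * 100

# Hypothetical cohort of six IBR borrowers at one college
payments = [0.0, 0.0, 125.50, 310.00, 0.0, 89.25]
print(round(share_paying_zero(payments), 1))             # 50.0
print(round(repayment_rate(8_000_000, 10_000_000), 1))   # 80.0
```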

As the Department of Education continues to develop draft college ratings (to come out later this fall), it is hopefully having these types of conversations when considering outcome measures. I hope this piece sparks a conversation about potential loan default or repayment measures that can improve upon the currently inadequate measure, so please offer your suggestions in the comments below.