Which Colleges Enroll First-Generation Students?

The higher education world is abuzz over the Obama Administration’s Saturday morning release of a new College Scorecard tool (and underlying trove of data). In my initial reaction piece, I discussed some of the new elements that are available for the first time. Earnings of former students are getting the most attention (and have been frequently misinterpreted as being the earnings of graduates only), but today I am focusing on a new data element that should be of interest to students, researchers, and policymakers alike.

The Free Application for Federal Student Aid (FAFSA) has long included a question about the highest education level of the student’s parent(s), but this information had never been included in publicly available data. (And, yes, the FAFSA application period will be moved up three months starting in 2016—and my research on the topic may have played a small role in it!) In my blog post on Saturday, I showed the distribution of the percentage of first-generation students (defined here as not having a parent with at least some college) among students receiving federal financial aid dollars. Here it is again:


I dug deeper into the data to highlight the ten four-year public and private nonprofit colleges with the lowest and highest percentages of first-generation students (among those receiving federal aid) in 2013. The results are below:

Four-year private nonprofit colleges with the fewest first-generation students, 2013.
Name Pct First Gen
California Institute of Technology 5.9
Wheaton College (IL) 8.3
Oberlin College (OH) 8.5
Elon University (NC) 8.6
Dickinson College (PA) 9.0
Macalester College (MN) 9.1
University of Notre Dame (IN) 9.7
Carnegie Mellon University (PA) 9.8
Hobart and William Smith Colleges (NY) 9.8
Rhode Island School of Design 10.6
Source: College Scorecard/NSLDS.
Note: Only includes students receiving Title IV aid, excludes specialty colleges.
Four-year public colleges with the fewest first-generation students, 2013.
Name Pct First Gen
College of William and Mary (VA) 13.2
University of Vermont 14.1
Georgia Institute of Technology 16.5
University of North Carolina School of the Arts 17.4
University of Virginia 17.6
New College of Florida 18.0
University of Michigan-Ann Arbor 18.0
SUNY College at Geneseo 18.4
Clemson University 18.5
University of Wisconsin-Madison 19.1
Source: College Scorecard/NSLDS.
Note: Only includes students receiving Title IV aid, excludes specialty colleges.

Just 5.9% of students receiving federal financial aid at the California Institute of Technology were classified as first-generation in 2013, and eight other private nonprofit colleges were under 10% (including Oberlin, Notre Dame, and Carnegie Mellon). The public college with the lowest percentage was the College of William and Mary, at just 13.2%. Several flagships appear on the list, including Vermont, Virginia, Michigan, and Wisconsin (where I got my PhD).

The list of colleges with the highest percentage of first-generation students is quite different:

Four-year private nonprofit colleges with the most first-generation students, 2013.
Name Pct First Gen
Colorado Heights University 75.6
Beulah Heights University (GA) 66.0
Heritage University (WA) 64.3
Grace Mission University (CA) 64.1
Hodges University (FL) 63.3
Humphreys College (CA) 60.5
Selma University (AL) 59.8
Mid-Continent University (KY) 59.7
Sojourner-Douglass College (MD) 59.2
University of Rio Grande (OH) 58.5
Source: College Scorecard/NSLDS.
Note: Only includes students receiving Title IV aid, excludes specialty colleges.
Four-year public colleges with the most first-generation students, 2013.
Name Pct First Gen
Cal State University-Los Angeles 64.0
Cal State University-Dominguez Hills 60.2
Cal State University-Stanislaus 60.2
Cal State University-San Bernardino 59.4
Cal State University-Bakersfield 58.2
University of Texas-Pan American* 56.9
University of Arkansas at Monticello 56.1
University of Texas at Brownsville* 55.2
Cal State University-Fresno 53.4
Cal State University-Northridge 53.1
Source: College Scorecard/NSLDS.
Note: Only includes students receiving Title IV aid, excludes specialty colleges.
* These two colleges are now UT-Rio Grande Valley as of Sept. 1.

Six private nonprofit and three public four-year colleges had at least three-fifths of their federal aid recipients classified as first-generation students, ten times the rate at Caltech. The top-ten lists for both public and private colleges include many minority-serving institutions, as well as a good chunk of the Cal State University System. These engines of social mobility deserve credit, as do the flagship institutions that do far better than average in enrolling first-generation students; UC-Berkeley, where 37% of aided students are first-generation, deserves special commendation.
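For anyone who wants to reproduce rankings like these from the downloadable Scorecard file, the logic is just a filter and a sort. Here is a minimal sketch: the column names (INSTNM, CONTROL, PAR_ED_PCT_1STGEN) follow the Scorecard data dictionary, but the inline rows are made up purely for illustration, and privacy-suppressed cells have to be skipped.

```python
# Sketch of the ranking logic against the downloadable Scorecard CSV.
# Column names follow the Scorecard data dictionary; the sample rows are
# made up for illustration (shares are proportions, not percentages).
import csv
import io

sample_csv = """INSTNM,CONTROL,PAR_ED_PCT_1STGEN
College A,1,0.132
College B,2,0.059
College C,2,0.756
College D,1,0.640
College E,2,PrivacySuppressed
"""

def rank_first_gen(rows, control, lowest=True, n=10):
    """Return (name, first-gen share) pairs for one sector, sorted."""
    usable = []
    for row in rows:
        try:
            pct = float(row["PAR_ED_PCT_1STGEN"])
        except ValueError:  # skip privacy-suppressed or missing values
            continue
        if row["CONTROL"] == str(control):  # 1 = public, 2 = private nonprofit
            usable.append((row["INSTNM"], pct))
    usable.sort(key=lambda t: t[1], reverse=not lowest)
    return usable[:n]

rows = list(csv.DictReader(io.StringIO(sample_csv)))
print(rank_first_gen(rows, control=2, lowest=True))  # College B ranks lowest
```

With the real file, swap the inline sample for a csv.DictReader over the downloaded CSV; tables like the ones above fall out of sorting each sector in both directions.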

There are a lot of great data elements in the College Scorecard that go beyond earnings, and I hope they get attention from researchers and are disseminated to the public.

Comments on the New College Scorecard Data

The Obama Administration’s two-year effort to develop a federal college ratings system appeared to have hit a dead end in June, with the announcement that no ratings would be released before the start of the 2015-2016 academic year. At that point, Department of Education officials promised to focus instead on creating a consumer-friendly website with new data elements that had never before been released to the public. I was skeptical, as there were significant political hurdles to overcome before releasing data on employment rates, the percentage of students paying down their federal loans, and graduation rates for low-income students.

But things changed this week. First, a great new paper out of the Brookings Institution by Adam Looney and Constantine Yannelis showed trends in student loan defaults over time—going well beyond the typical three-year cohort default rate measure. They also included earnings data, which were not previously available. But although they made summary tables of results available to the public, these tables included only a small number of individual institutions. That’s great for researchers, but not so great for students choosing among colleges.

The big bombshell dropped this morning. In an extremely rare Saturday morning release (something that frustrates journalists and the higher education community to no end), the Department of Education released a massive trove of data (fully downloadable!) underlying the new College Scorecard. The consumer-facing Scorecard is fairly simple (see below for what Seton Hall’s entry looks like), and I look forward to hearing about whether students and their families use this new version more than previous ones. I also recommend ProPublica’s great new data tool for low-income students.


But my focus today is on the new data. Some of the key new data elements include the following:

  • Transfer rates: The percentage of students who transfer from a two-year to a four-year college. This helps community colleges, given their transfer mission, but still puts colleges at a disadvantage if they serve a more transient student body.
  • Earnings: The distribution of earnings 10 years after starting college and the percentage of students earning more than those with only a high school diploma. This comes from federal tax return data and is a huge step forward. However, given very reasonable concerns that a focus on earnings could hurt colleges with public service missions, there is also a metric for the percentage of students making more than $25,000 per year. Plenty of people will focus on presenting earnings data, so I’ll leave the graphics to others. (It is also a big improvement over the admirable work done by Payscale in this area.)
  • Student loan repayment: The percentage of students (both completers and non-completers) who are able to pay down some principal on loans within a certain period of time. Seven-year loan repayment data are available, as illustrated here:


In the master data file, many of these outcomes are available by family income, first-generation status, and Pell receipt. First-generation status is a new data element being made available to the public; although the question is on the FAFSA, it had never before been made available to researchers. For those who are curious, here’s the breakdown of the percentage of first-generation students (typically defined as students whose parents don’t have a bachelor’s degree) by institutional type:


There are a lot of data elements to explore here, and I expect lots of great work from the higher education research community using these data in the coming months and years. In the short term, it will be fascinating to watch colleges and politicians respond to this game-changing release of outcome data on students receiving federal financial aid.

New Paper and Testimony on Risk Sharing

The concept of risk sharing, in which colleges are held at least partially financially responsible for the outcomes of their students, has become a hot topic of political discussion in recent months. The idea has gained bipartisan support (at least in theory), as presidential candidates Hillary Clinton and Scott Walker have both supported the basic principles of risk sharing. Yet by penalizing colleges with high student loan default rates, risk sharing systems can create incentives to reduce access to higher education without actually pushing colleges to improve.

With generous support from the Lumina Foundation, I set out to sketch a risk sharing system with the goal of increasing accountability for poor outcomes while recognizing differences in the types of students colleges serve. I released the resulting paper this week and testified on the topic in front of the U.S. Department of Education’s Advisory Committee on Student Financial Assistance. (My testimony is below.) I welcome your comments on risk sharing, as the goal of this paper and testimony is to advance a thoughtful conversation about what a fair and effective system could look like.

For more reading on risk sharing, I highly recommend the thoughtful takes of the American Enterprise Institute’s Andrew Kelly and Temple University’s Doug Webber.


Testimony to the Advisory Committee on Student Financial Assistance

Hearing on Higher Education Act Reauthorization

Robert Kelchen

Good afternoon, members of the Advisory Committee on Student Financial Assistance, Department of Education officials, and other guests. My name is Robert Kelchen and I am an assistant professor in the Department of Education Leadership, Management and Policy at Seton Hall University. All opinions expressed in this testimony are my own, and I thank the Committee for the opportunity to present.

My testimony today will be on the topic of risk sharing in higher education, which is typically defined as holding colleges financially accountable for their students’ performance. It is a topic that has been discussed by politicians on both sides of the aisle, including legislation recently introduced by Republican Senator Orrin Hatch and Democratic Senator Jeanne Shaheen that would require colleges to pay a percentage of students’ loan dollars on which no payments were made in the previous year.[1] But simple risk sharing proposals like this give colleges incentives to reduce borrowing by either leaving the Direct Loan program or reducing the non-tuition expense allowances included in the cost of attendance.

In a recently-released policy paper funded by the Lumina Foundation, I introduced a risk sharing proposal that attempts to hold colleges accountable for their performance with respect to both Pell Grant and federal student loan dollars.[2] My proposal would reward colleges for strong performance on Pell Grant success and student loan repayment rates, while requiring colleges with weaker performance to pay a penalty to the Department of Education from a source other than institutional aid dollars.

The federal government’s portion of my proposed risk-sharing system would have three main components:

  • First, penalties or rewards for Pell Grant recipients’ performances would be separate from penalties or rewards for student loan performance. This would end the current situation in which colleges face incentives to opt out of federal student loans in order to protect Pell Grant dollars.[3]
  • Second, the federal government would provide better tracking and reporting of outcomes for students receiving federal financial aid. The set of metrics available to examine performance is extremely limited, and could be improved by either overturning the ban on federal student unit record data systems or committing to providing additional subgroup performance information using IPEDS and the National Student Loan Data System.
  • Third, in order to make more accurate comparisons about student loan performance across campuses, federal guidelines for how the non-tuition components of the cost of attendance are defined would be helpful. Research has found large variations in the off-campus room and board and other expense allowances, which are determined by individual colleges, within a given metropolitan area.[4] Colleges need to be placed on a more level playing field for accountability purposes.

Colleges would be required to meet three criteria to receive Title IV funds:

  • First, colleges must agree to put “skin in the game” by being willing to match a percentage of Title IV loan or grant aid with institutional funds if their performance falls below a specified benchmark.
  • Second, colleges must participate in the Federal Direct Loan program in order for their students to receive Pell Grant dollars, giving students access to credit while not directly putting Pell dollars at risk.
  • Third, colleges must be willing to meet heightened accreditation and consumer information provision standards.

Colleges’ performance would be compared to similar institutions using peer groups based on the characteristics of students served, types of degrees and certificates offered, and the level of resources different colleges possess. Notably, by using institutional selectivity, per-student revenues, and endowment values as grouping characteristics, a college would be compared to more selective colleges if it tried to become more selective—limiting its ability to game the system.

The Pell Grant portion of risk sharing would be based on outcomes such as Pell recipients’ retention rates, graduation rates, transfer rates, and the number of graduates. Colleges with performance a certain percentage below the peer group average would have to pay a penalty equal to a percentage of Pell funds awarded out of their own budget, while colleges a certain percentage above the average would receive a bonus to use to supplement need-based financial aid programs.

The student loan portion of risk sharing would be based on outcomes such as cohort default rates 3-5 years after entering repayment, the percent of students current on their payments, and the percentage of students making payments of at least $1 of principal. I would also include PLUS loans in the risk sharing metric. Colleges performing substantially above the peer group average would get additional work-study funds, while colleges performing substantially below average would face a penalty.

The implementation of any risk sharing proposal must be carefully considered in order to avoid perverse incentives and to gain support from colleges and policymakers. Lessons from state performance-based funding programs show that phasing in a system over several years is important, as is some method for colleges to limit penalties until they can adapt.[5] Colleges that can present clear plans for improvement that are supported by their accreditor should be able to get reduced penalties and logistical support from the federal government for a limited period of time.

Thank you once again for the opportunity to present and I look forward to answering any questions.

[1] Student Protection and Success Act (S. 1939, introduced August 5, 2015). http://www.shaheen.senate.gov/imo/media/doc/Student%20Protection%20and%20Sucess%20Act.pdf.

[2] The paper is available at http://www.luminafoundation.org/resources/proposing-a-federal-risk-sharing-policy.

[3] Hillman, N. W. (2015). Cohort default rates: Predicting the probability of federal sanctions. Educational Policy, 29(4), 559-582. Hillman, N. W., & Jaquette, O. (2014). Opting out of federal student loan programs: Examining the community college sector. Paper presented at the Association for Education Finance and Policy annual conference, San Antonio, TX.

[4] Kelchen, R., Hosch, B. J., & Goldrick-Rab, S. (2014). The costs of college attendance: Trends, variation, and accuracy in institutional living cost allowances. Madison, WI: Wisconsin HOPE Lab.

[5] For example, see Dougherty, K. J., & Natow, R. S. (2015). The politics of performance-based funding: Origins, discontinuations, and transformations. Baltimore, MD: Johns Hopkins University Press.

Do SAT-Mandatory States Explain Declining Scores?

Yesterday, I wrote about how it was likely that some of the decline in SAT scores was due to states and districts requiring students to take the SAT. At the request of several esteemed readers, I did a back-of-the-envelope calculation to see how much of the change in SAT scores over the last five years is due to states requiring all students to take the SAT (hat tip to Kan-Ye Test (love the name!) for pointing me to the data). Between 2011 and 2015, Delaware, the District of Columbia, and Idaho moved from having only some of their students (14,765) take the SAT to having all of them (32,236) take it. Meanwhile, the average SAT score fell from 1500 to 1490.

Based on 2011 state-by-state data, I recalculated average 2015 SAT scores while substituting 2011 participation levels and scores for 2015 levels and scores in those three states. Erasing the additional 17,471 test-takers (and their average SAT of 1292) from those three states was enough to raise the average SAT score of the 1.6 million other test-takers by 2.1 points. These three states explain approximately 21% of the decline in SAT scores, as outlined below.

Required SAT states explain at least 21% of the decline in SAT scores since 2011
States Num. students Avg. SAT
DC, DE, & ID (2011) 14,765 1445
DC, DE, & ID (2015) 32,236 1362
All others (2015) 1,614,887 1493
Total (using ’11 DC, DE, & ID) 1,629,652 1492
Total (using ’15 DC, DE, & ID) 1,647,123 1490
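The substitution behind this table is just an enrollment-weighted average. A quick sketch using only the figures reported above:

```python
# Back-of-the-envelope check: how much of the 10-point SAT decline
# (1500 in 2011 to 1490 in 2015) is explained by DC, DE, and ID moving
# to universal testing? All figures come from the table above.

others = (1_614_887, 1493)       # all other 2015 test-takers: (count, avg score)
three_2011 = (14_765, 1445)      # DC/DE/ID at 2011 participation levels
three_2015 = (32_236, 1362)      # DC/DE/ID at 2015 participation levels

def weighted_avg(*groups):
    """Enrollment-weighted mean score across (count, avg) groups."""
    total = sum(n for n, _ in groups)
    return sum(n * avg for n, avg in groups) / total

actual = weighted_avg(others, three_2015)          # the reported 2015 average
counterfactual = weighted_avg(others, three_2011)  # with 2011 participation

print(round(actual, 1))          # 1490.4
print(round(counterfactual, 1))  # 1492.6
print(round((counterfactual - actual) / 10, 2))    # 0.21, i.e. ~21% of the decline
```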

I’d still love to see the College Board pull out data from the districts that moved to require the SAT, as it’s entirely possible that half of the decline in SAT scores could be due to students who were required to take the test. They’ve got the data, and I hope they take a look!

Why SAT Scores Going Down May Be Just Fine

The average score for students taking the venerable SAT exam in 2014-2015 was 1490, seven points below last year’s scores and the lowest score since the writing section was added in 2005. Not surprisingly, this drop is generating a lot of media coverage—much of it focused on how high schools are failing America’s children. But while high schools may very well be a concern (and those of us in colleges shouldn’t get off without criticism, either), I contend that the decline in SAT scores may be just fine.

The simple reason for my lack of concern is that the decline may very well be due to more students taking the exams in response to new state laws and district rules requiring or encouraging testing. For example, beginning in 2012 Idaho required students to take the ACT or SAT to graduate, with the state covering the cost of the SAT. In 2011-2012, 27% of Idaho students took the SAT and got an average score of 1613, while practically all Idaho students in 2014-2015 took the SAT and got an average score of 1372. (The District of Columbia, Delaware, and Maine—the other three jurisdictions where basically everyone takes the SAT—had similarly low scores.) Either Idaho high schools imploded over a three-year window, or the types of students who weren’t previously taking the test didn’t have the same level of ability on standardized tests as the 27% who were likely considering selective four-year colleges.

The chart below shows the relationship between the percentage of students taking the SAT and scores (data available via the Washington Post). The R-squared is 0.82, suggesting that 82% of the variation in state-level test scores can be explained by the percentage of students tested in each state.
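As a refresher on what that number means: for a one-variable regression, R-squared is just the squared correlation between the two series. A minimal sketch of the computation (the data points below are illustrative stand-ins, not the Washington Post figures, so the result will not match the 0.82 reported above):

```python
# R-squared for a one-variable regression: the squared correlation between
# percent tested and average score. The data below are ILLUSTRATIVE ONLY,
# not the actual Washington Post state-level figures.
from statistics import mean

pct_tested = [3, 5, 14, 27, 54, 71, 100, 100]                  # % of students tested
avg_score = [1780, 1750, 1680, 1613, 1550, 1500, 1450, 1372]   # average SAT score

def r_squared(x, y):
    """Share of the variance in y explained by a linear fit on x."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov ** 2 / (var_x * var_y)

# Higher participation tends to mean lower averages, so the fit is strong here too.
print(round(r_squared(pct_tested, avg_score), 2))
```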


What I would like to see are comparisons of similar types of students over time. Among students who signal a clear intent to go to a four-year college, are SAT scores declining? Or is the entire decline driven by different students taking the test? And are students considering college for the first time because they took the SAT and did reasonably well? There is value in everyone taking a standardized test across states (given the differences in state high school exams), but it’s inappropriate to look at trends over time with such large differences in the types of students taking the test.