Downloadable Dataset of Marriage Rates by College

I enjoyed reading this recent piece in the Chronicle of Higher Education that looked at the “ring by spring” pressures that students at some Christian colleges face to be engaged by graduation. I looked into factors affecting marriage rates across colleges in a blog post earlier this year and found that marriage rates among 23- to 25-year-olds were nearly six percentage points higher at religiously affiliated colleges than at public institutions, as shown in the figure below.

As a data person—and someone who married his college sweetheart only three years after graduation—I wanted to share a dataset that I had already compiled for that piece so people can dig through it to their heart’s content. It contains data on 820 public and private nonprofit four-year colleges from the Equality of Opportunity Project, with marriage rates for cohorts ages 23-25 and 32-34 in 2014. The three colleges featured in the Chronicle piece all have higher-than-average marriage rates by age 25: Cedarville University at 41%, Houghton College at 34%, and Baylor University at 18%.

You can download the dataset here, and have fun exploring the data!

A special thanks to Carol Meinhart for catching a silly error in an earlier version of the dataset, where the two marriage rate column headings were switched. It has since been fixed.

Downloadable Dataset of Pell Recipient Graduation Rates

Earlier this week, my blog post summarizing new data on Pell Grant recipients’ graduation rates at four-year colleges was released through the Brookings Institution’s Brown Center Chalkboard blog. I have since received several questions about the data and requests for detailed data for specific colleges, showing the higher education community’s interest in better data on social mobility.

I put together a downloadable Excel file of six-year graduation rates and cohort sizes by Pell Grant receipt in the first year of college (yes/no) and race/ethnicity (black/white/Hispanic). One tab has all of the data, while the “Read Me” tab includes some additional details and caveats that users should be aware of. Hopefully, this dataset can be useful to others!
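For those who would rather start in pandas than in Excel, here is a minimal sketch of loading and summarizing the file. The file name, sheet name, and column names are placeholders I made up for illustration, not the actual layout of the posted spreadsheet.

```python
import pandas as pd

# Load the main data tab (file, sheet, and column names are hypothetical).
df = pd.read_excel("pell_grad_rates.xlsx", sheet_name="Data")

# Average six-year graduation rates by Pell receipt, weighting each
# college by the size of the relevant cohort.
for group in ["pell", "non_pell"]:
    rates = df[f"{group}_grad_rate"]
    weights = df[f"{group}_cohort"]
    mask = rates.notna() & weights.notna()
    avg = (rates[mask] * weights[mask]).sum() / weights[mask].sum()
    print(f"{group}: {avg:.1f}%")
```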

A Look at Pell Grant Recipients’ Graduation Rates

This post originally appeared on the Brookings Institution’s Brown Center Chalkboard blog.

The federal government provides nearly $30 billion in grant aid each year to nearly eight million students from lower-income families (mainly with household incomes below $50,000 per year) through the Pell Grant program, which can give students up to $5,920 per year to help pay for college. Yet in spite of research showing that the Pell Grant and similar need-based grant programs are effective in increasing college completion rates, there are still large gaps in graduation rates by family income. For example, among students who began college in the fall 2003 semester, Pell recipients were seven percentage points less likely to earn a college credential within six years than non-Pell students.

In spite of the federal government’s sizable investment in students, relatively little has been known about whether Pell recipients succeed at particular colleges. The last Higher Education Act reauthorization in 2008 required colleges to disclose Pell graduation rates upon request, but two studies have shown that colleges have been unable or unwilling to do so. As a result, until now little information has been available about whether individual colleges graduate their students from lower-income families.[1]

The U.S. Department of Education recently updated its Integrated Postsecondary Education Data System (IPEDS) to include long-awaited graduation rates for Pell Grant recipients, and I focus on graduation rates for students at four-year colleges (about half of all Pell recipients) in this post. I examined the percentage of Pell recipients and non-Pell recipients who graduated with a bachelor’s degree from the same four-year college within six years of entering college in 2010.[2] After limiting the sample to four-year colleges that had at least 50 Pell recipients and 50 non-Pell recipients in their incoming cohorts, my analysis included 1,266 institutions (504 public, 747 private nonprofit, and 15 for-profit).
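A minimal sketch of that sample restriction is below, assuming the IPEDS extract has already been downloaded; the file and column names are hypothetical stand-ins for the actual IPEDS variable names.

```python
import pandas as pd

# Hypothetical file and column names; IPEDS uses its own variable names.
ipeds = pd.read_csv("ipeds_grad_rates_2010_cohort.csv")

# Keep four-year colleges with at least 50 Pell and 50 non-Pell
# students in the incoming cohort, mirroring the restriction above.
sample = ipeds[(ipeds["pell_cohort"] >= 50) &
               (ipeds["non_pell_cohort"] >= 50)].copy()

# Gap in percentage points (Pell minus non-Pell graduation rate).
sample["gap"] = sample["pell_grad_rate"] - sample["non_pell_grad_rate"]
print(sample["gap"].describe())
```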

The average six-year graduation rate for Pell recipients in my sample was 51.4%, compared to 59.2% for non-Pell recipients. The graphic below shows the graduation rates for non-Pell students on the horizontal axis and Pell graduation rates on the vertical axis, with colleges to the left of the red line having higher graduation rates for Pell recipients than non-Pell recipients. Most of the colleges (1,097) had non-Pell graduation rates higher than Pell graduation rates, but 169 colleges (13.3%) had higher Pell graduation rates.
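Continuing the sketch above, the scatterplot described here is straightforward to reproduce; the red 45-degree line marks equal Pell and non-Pell rates.

```python
import matplotlib.pyplot as plt

# Non-Pell rates on the horizontal axis, Pell rates on the vertical
# axis; points above the red line have higher Pell graduation rates.
fig, ax = plt.subplots()
ax.scatter(sample["non_pell_grad_rate"], sample["pell_grad_rate"],
           s=8, alpha=0.5)
ax.plot([0, 100], [0, 100], color="red")  # equal-rates (45-degree) line
ax.set_xlabel("Non-Pell six-year graduation rate (%)")
ax.set_ylabel("Pell six-year graduation rate (%)")
plt.show()
```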

Table 1 below shows the five colleges where Pell students graduate at the lowest rates relative to non-Pell students.[3] For example, the University of Akron (which had 3,370 students in its incoming class of first-time, full-time students) reported that just 8.8% of its 1,505 Pell recipients in its incoming class graduated within six years, compared to 70.1% of its 1,865 non-Pell students—a yawning gap of 61.3 percentage points and the second-largest in the country. Assuming the Pell and non-Pell graduation rates are not the result of a data error in the university’s IPEDS submission, this is a serious concern for institutional equity. On the other hand, some colleges had far higher graduation rates for Pell recipients than non-Pell students. An example is Howard University, where 79.4% of Pell recipients and just 46.1% of non-Pell students graduated.

Table 1: Colleges with the largest Pell/non-Pell graduation rate gaps.
| Name | State | New students | Pell grad rate (%) | Non-Pell grad rate (%) | Gap (pp) | Pell share of cohort (%) |
| --- | --- | --- | --- | --- | --- | --- |
| Saint Augustine’s University | NC | 440 | 2.7 | 92.2 | -89.5 | 76.8 |
| University of Akron | OH | 3,370 | 8.8 | 70.1 | -61.3 | 44.7 |
| St. Thomas Aquinas College | NY | 290 | 20.7 | 78.3 | -57.6 | 31.7 |
| Southern Virginia University | VA | 226 | 20.7 | 54.3 | -33.6 | 64.2 |
| Upper Iowa University | IA | 201 | 27.9 | 60.8 | -32.9 | 51.7 |

Ninety-seven of the colleges with at least 50 Pell and 50 non-Pell recipients had graduation rates of over 80% for both Pell and non-Pell students. Most of these colleges are highly selective institutions with relatively low percentages of Pell recipients, but six institutions had Pell and non-Pell graduation rates above 80% while having at least 30% of students in their incoming class receive Pell Grants. All six are in California, with five in the University of California system (Davis, Irvine, Los Angeles, San Diego, and Santa Barbara) and one private institution (Pepperdine). This suggests that it is possible to be both socioeconomically diverse and successful in graduating students.

As a comparison, I also examined the black/white graduation rate gaps for the 499 colleges that had at least 50 black and 50 white students in their graduation rate cohorts. The average black/white graduation rate gap at these colleges was 13.5 percentage points (59.0% for white students versus 45.5% for black students). As the figure below shows, only 39 colleges had higher graduation rates for black students than for white students, while the other 460 colleges had higher graduation rates for white students.

Fourteen colleges had higher graduation rates for Pell recipients than non-Pell students and for black students than white students. This group includes elite institutions with small percentages of Pell recipients and black students, such as Dartmouth, Duke, and Yale, as well as broader-access and more diverse colleges such as CUNY York College, Florida Atlantic, and South Carolina-Upstate. Table 2 shows the full list of 14 colleges that had higher success rates for Pell and black students than for non-Pell and white students.

Table 2: Colleges with higher graduation rates for Pell and black students.
| Name | State | Pell grad rate (%) | Non-Pell grad rate (%) | Black grad rate (%) | White grad rate (%) |
| --- | --- | --- | --- | --- | --- |
| U of South Carolina-Upstate | SC | 50.4 | 34.0 | 47.3 | 38.8 |
| CUNY York College | NY | 31.5 | 27.3 | 32.7 | 28.0 |
| Agnes Scott College | GA | 71.1 | 68.3 | 72.4 | 62.1 |
| Clayton State University | GA | 34.0 | 31.5 | 33.2 | 31.0 |
| Duke University | NC | 96.6 | 94.3 | 95.1 | 95.0 |
| Florida Atlantic University | FL | 50.6 | 49.0 | 50.1 | 48.5 |
| Wingate University | NC | 54.5 | 53.1 | 60.0 | 51.4 |
| UMass-Boston | MA | 45.8 | 44.7 | 50.0 | 40.6 |
| U of South Florida | FL | 68.1 | 67.1 | 68.7 | 65.5 |
| CUNY City College | NY | 47.2 | 46.3 | 52.8 | 45.6 |
| Dartmouth College | NH | 97.2 | 96.5 | 97.3 | 97.1 |
| CUNY John Jay College | NY | 44.1 | 43.4 | 43.5 | 42.4 |
| Yale University | CT | 98.2 | 97.7 | 100.0 | 97.6 |
| Stony Brook University | NY | 72.5 | 72.3 | 71.3 | 70.5 |

The considerable variation in Pell recipients’ graduation rates across colleges deserves additional investigation. Colleges with similar Pell and non-Pell graduation rates should be examined to see whether they have implemented practices that support students with financial need. The less-selective colleges that have erased graduation rate gaps by race and family income could serve as exemplars for other equity-minded colleges. Meanwhile, policymakers, college leaders, and the public should be asking tough questions of colleges with reasonable graduation rates for non-Pell students but abysmal outcomes for Pell recipients.

Finally, the U.S. Department of Education deserves credit for the release of Pell students’ graduation rates, as well as several other recent datasets that provide new information on student outcomes. This includes new data on students’ long-term student loan default and repayment outcomes and the completion rates of students who were not first-time, full-time students, along with an updated College Scorecard that now includes a nifty college comparison tool. Though the Pell graduation rate measure fails to cover all students and does not credit institutions if a student transfers and completes elsewhere, it is still a useful measure of whether colleges are effectively educating students from lower-income families. In the future, student-level data that includes part-time and transfer students would be useful to help examine whether colleges are helping all of their students succeed.

[1] Focusing on Pell Grant recipients undercounts the number of lower-income students because a sizable percentage of lower-income students do not file the Free Application for Federal Student Aid, which is required for students to be eligible to receive a Pell Grant.

[2] I calculated the number of non-Pell recipients by subtracting the number of Pell recipients from the total graduation rate cohort in the IPEDS dataset.

[3] This excludes two colleges that reported a 0% or 100% graduation rate for their Pell students, which is likely a data reporting error.

New Data on Long-Term Student Loan Default Rates

In recent years, more data have come out on how well students are able to manage repaying their loans beyond the three-year window currently used for federal accountability purposes (via cohort default rates). A great 2015 paper by Adam Looney and Constantine Yannelis used tax records merged with data from the National Student Loan Data System (NSLDS) to show longer-term trends in default and repayment. Two days after that paper’s release, the College Scorecard provided college-level data on student loan repayment rates going out seven years (even though the repayment rates were initially calculated incorrectly).

Thanks to a lot of hard work by the data folks at the U.S. Department of Education and their contractor RTI, there are new data available on long-term student loan default rates. ED and RTI used NSLDS data going through 2015 to match records from the Beginning Postsecondary Students studies of cohorts beginning college in 1995-96 and 2003-04. This allowed a 20-year look at student loan default and payoff rates for the 1995-96 cohort and a 12-year look at the 2003-04 cohort, as detailed in this useful report from the National Center for Education Statistics.

Thanks to NCES’s wonderful PowerStats tool, I took a look at the percentage of students in the 2003-04 entering cohort (my college cohort) who had defaulted on at least one of their federal student loans within 12 years. Many of the news headlines focused on the high default rates of students at for-profit colleges (about 52%!), but this isn’t an entirely fair comparison because for-profit colleges tend to serve more economically disadvantaged students. So in this post, I focused on racial/ethnic differences in default rates by type of college attended to give a flavor of what the data can do.

As the chart below shows, nearly half of all black students (49%) defaulted on at least one loan within 12 years—more than twice the rate of white students (20%) and more than four times the rate of Asian students (11%). The differentials persist within each sector: more than one-third of black students defaulted in every sector, while relatively few Asian students defaulted in any nonprofit sector. Default rates at for-profit colleges are high for all racial/ethnic groups, with almost half of white students defaulting alongside nearly two-thirds of black students.

An advantage of the PowerStats tool is that it allows users to run regressions via NCES’s remote server. This allows interested people to analyze the relationship between long-term default rates and attending a for-profit college after controlling for other characteristics. However, PowerStats is overwhelmed by requests from my fellow higher education data nerds at this point, so I gave up on trying to run the regression after several hours of waiting. But if someone wants to run some regressions using the new loan repayment data in the BPS once the server calms down, I’m happy to feature their work on my blog!

Examining Trends in Student Loan Repayment Rates

It’s been a good week for higher education data nerds. The Department of Education released updated student loan cohort default rates on Wednesday afternoon (see my summary here), followed by an update to the massive College Scorecard dataset on Thursday morning. This is the third update to the Scorecard, with this year’s update also featuring a nice new comparison tool on the student-facing version of the site.

In this post, I focus on trends in student loan repayment rates (defined as the percentage of students who have repaid at least $1 in principal) at various periods after entering loan repayment. I present data for colleges with unique six-digit Federal Student Aid OPEID numbers (to eliminate duplicate results), weighting the final estimates to reflect the total number of borrowers entering repayment. Additionally, I use the January 2017 data release for the 2012-13 Scorecard data because the newest release appears to contain an error that leaves very few colleges with repayment rates for that cohort.
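A minimal sketch of that calculation is below. The file and column names are placeholders (the actual Scorecard files use their own variable names), so treat this as an outline of the approach rather than a drop-in script.

```python
import pandas as pd

# Hypothetical file/column names standing in for the Scorecard layout.
scorecard = pd.read_csv("scorecard_2012_13.csv")

# Keep one record per six-digit OPEID so branch campuses that report
# under a parent institution are not double-counted.
unique = scorecard.drop_duplicates(subset="opeid6")

# Weight each college's one-year repayment rate by the number of
# borrowers entering repayment.
valid = unique.dropna(subset=["repay_rate_1yr", "borrowers_entering_repayment"])
weights = valid["borrowers_entering_repayment"]
weighted = (valid["repay_rate_1yr"] * weights).sum() / weights.sum()
print(f"Weighted 1-year repayment rate: {weighted:.1f}%")
```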

I begin by showing the trends in the 1-year, 3-year, 5-year, and 7-year repayment rates for each cohort of students with available data.

| Repayment cohort | 1-year rate (%) | 3-year rate (%) | 5-year rate (%) | 7-year rate (%) |
| --- | --- | --- | --- | --- |
| 2006-07 | 61.8 | 63.5 | 64.6 | 66.6 |
| 2007-08 | 53.0 | 54.2 | 56.1 | 59.7 |
| 2008-09 | 46.1 | 47.9 | 52.0 | 56.0 |
| 2009-10 | 41.0 | 43.2 | 48.7 | N/A |
| 2010-11 | 36.6 | 40.7 | 46.3 | N/A |
| 2011-12 | 32.2 | 38.1 | N/A | N/A |
| 2012-13 | 33.0 | 38.3 | N/A | N/A |

There are two clear trends from this table. First, repayment rates have steadily dropped for more recent cohorts of students. The one-year repayment rate for students entering repayment in 2006-07 (before the Great Recession) was 61.8%, while the most recent cohort of students had a one-year repayment rate of just 33.0%. Much of this decline is likely due to the growth of income-driven repayment plans (which can allow students to be current on their payments while not making a dent in the overall principal). But economic circumstances also likely play a role here.

Second, repayment rates steadily rise for a given cohort as students have more time in the labor market after college. In the 2008-09 repayment cohort, the seven-year repayment rate was 56.0%, 9.9 percentage points higher than the one-year rate. These trends still suggest that it will be a long time before students repay their loans, but this is a step in the right direction.

I also show the distribution of colleges’ repayment rates for the 2008-09 cohort across all of the repayment periods by the type of college (public, private nonprofit, and for-profit). In general, private nonprofit colleges have higher repayment rates than both public and for-profit colleges (in part because private nonprofit colleges are primarily four-year institutions), but all sectors see slight improvements between the one-year and seven-year repayment rates.

Finally, a programming note: I’ll be getting the final page proofs for my book shortly and have to do final checks and put together an index during the month of October. I’ll try to write a couple of short blog posts when the new National Postsecondary Student Aid Study and full IPEDS Outcomes Measures survey come out; otherwise, stay tuned for some exciting new research that I’ll be unveiling in early November.

It’s Time to Move Beyond Cohort Default Rates

Today marked the annual release of data on cohort default rates—representing the percentage of students at a given college who default on their federal student loans within three years. The newest data show that 11.5% of students who entered repayment in Fiscal Year 2014 defaulted during this period, which is up slightly from 11.3% for those who entered repayment in Fiscal Year 2013.

Cohort default rates (CDRs) have been used for decades as an accountability metric by the federal government, with colleges posting CDRs of over 40% in a given year losing access to federal student loans for a two-year period and colleges with CDRs above 30% in three consecutive years losing access to all federal financial aid for two years. This year, six colleges posted default rates high enough to lose all Title IV aid and four more had default rates high enough to lose loan access.

Yet CDRs suffer from two key concerns that make them almost toothless from an accountability perspective—and show the need for better accountability metrics. I discuss both in brief below (and if you like this topic, you’ll love my book on higher education accountability that will come out in January!).

Point 1: Default rates are an almost meaningless indicator of student outcomes. The availability of income-driven repayment programs means that no student should ever default on their obligations (although these programs are still clunky and some students simply don’t ever want to repay their loans). Students with very low incomes who are able and willing to jump through the hoops of income-driven programs can be current on their loans while making zero payments. Many colleges also adopt default management programs that encourage students to either enroll in income-driven plans or defer their obligations beyond the three-year accountability window.
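To see why a $0 payment can still count as current, here is a toy calculation using the common 10%-of-discretionary-income formula (discretionary income being adjusted gross income minus 150% of the poverty guideline). The guideline figure is the approximate 2017 value for a one-person household, and the exact rules vary by plan.

```python
# Toy illustration of an income-driven payment calculation; the 10%
# share and $12,060 poverty guideline (2017, one-person household)
# are approximations, and actual plan rules differ.
def monthly_idr_payment(agi, poverty_line=12060, share=0.10):
    discretionary = max(0.0, agi - 1.5 * poverty_line)
    return share * discretionary / 12

# A borrower earning $15,000 owes $0 per month but is still current.
print(monthly_idr_payment(15000))   # 0.0
print(monthly_idr_payment(30000))   # roughly $99 per month
```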

In a recent article (a summary is available here), Amy Li of the University of Northern Colorado and I explored the relationship between default and repayment rates (defined as paying down at least $1 in principal over a given period of time). We showed that although reported default rates stayed low, the percentage of students failing to repay any principal—a key question for taxpayers—was far higher.

Point 2: Default rate sanctions affect almost no colleges. Ben Miller of the Center for American Progress summed up how few colleges faced the loss of federal aid.

The all-or-nothing nature of potential sanctions gives colleges a tremendous incentive to make sure they aren’t affected. In 2014, the Obama Department of Education agreed to a controversial last-minute change to CDRs that allowed some colleges to sneak just below the 30% threshold. In 2017, a provision appeared in the FY 2018 budget that would effectively void CDR sanctions for colleges in economically distressed areas.

It turns out that Senator Mitch McConnell (R-KY) inserted the provision, likely to help out Southeast Kentucky Community and Technical College—one of the six institutions that is at risk of losing all federal financial aid due to high default rates. It pays to have friends in high places, I reckon.

So what can be done to improve federal accountability policies on student loans? I offer two simple ideas to start. First, move from default rates to repayment rates in order to get a better idea of students’ post-college circumstances. Second, move from an all-or-nothing sanction system to gradual sanctions. I go into both of these points in more depth in a paper I wrote in 2015 on the idea of “risk sharing” for student loans. It is essential to move away from CDRs as quickly as possible, even though some in the higher education community may prefer a CDR system that affects relatively few colleges.

Trends in Student Fees at Public Universities

Out of all the research I have done during my time as an assistant professor, I get more questions from journalists and policymakers about my research on student fees than any other study. In this study (published in the Review of Higher Education in 2016), I showed trends in student fees at public four-year institutions and also examined the institutional-level and state-level factors associated with higher levels of fees. Yet due to the time it takes to write a paper and eventually get it published, the newest data on fees in the paper came from the 2012-13 academic year. In this blog post, I update the data on trends in fees at public universities for in-state students to go through the 2016-17 academic year.

It’s quite a bit harder than it appears to show trends in student fees because of the presence of fee rollbacks—colleges resetting their fees to a lower level and increasing tuition to compensate. Between the 2000-01 and 2016-17 academic years, 89 public universities reset their fees at least once (as measured by decreasing fees by at least $500 while increasing tuition by a larger amount). This includes most public universities in California, Massachusetts, Minnesota, and South Dakota, as well as a smattering of institutions in other states. Universities that reset their fees had a 115.3% increase in inflation-adjusted tuition and fees since 2000-01 (from $4,286 to $9,228), compared to an 83.7% increase for the 441 universities that did not reset their fees (from $4,936 to $9,068). Because tuition and fees cannot be consistently separated for the resetting colleges (some of which had the largest overall price increases), I present trends in tuition and fees for the other 441 institutions below.
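For readers who want to replicate the rollback flag, a minimal sketch is below; it assumes a college-by-year panel with inflation-adjusted tuition and fee columns, and all file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical panel: one row per college (unitid) per academic year,
# with inflation-adjusted "tuition" and "fees" columns.
panel = pd.read_csv("tuition_fees_panel.csv").sort_values(["unitid", "year"])

panel["fee_change"] = panel.groupby("unitid")["fees"].diff()
panel["tuition_change"] = panel.groupby("unitid")["tuition"].diff()

# Flag a reset: fees fall by at least $500 while tuition rises by more
# than the fee decrease, per the rule described above.
panel["reset"] = (panel["fee_change"] <= -500) & \
                 (panel["tuition_change"] > -panel["fee_change"])

print(panel.loc[panel["reset"], "unitid"].nunique(), "universities reset fees")
```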

The first figure shows average tuition (dashed) and fee (solid) levels for each year from 2000-01 through 2016-17. During this period, tuition increased from $3,999 to $7,183 in inflation-adjusted dollars (a 79.6% increase). Fees went up even faster, rising 106.7% from $912 to $1,885.

The second figure shows student fees as a percentage of overall tuition and fees. This percentage increased from 18.6% in 2000-01 to 20.8% in 2016-17.

This increase in fees is particularly important in conversations about free public college. Many of the policy proposals for free public higher education (such as the Excelsior Scholarship in New York) only cover tuition—and thus give states an incentive to encourage colleges to increase their fees while holding the line on tuition. It’s also unclear whether students and their families look at fees in the college search process in the same way they look at tuition, meaning that growing fee levels could surprise students when the first bills come due. More research needs to be done on how students and their families perceive fees.

A Peek Inside the New IPEDS Outcome Measures Dataset

Much of higher education policy focuses on “traditional” college students—those who started college at age 18 after getting dropped off in the family station wagon or minivan, enrolled full-time, and stayed at that institution until graduation. Yet although this is how many policymakers and academics experienced college (I’m no exception), these students represent a minority of American higher education today. Higher education data systems have often followed this mold, with the U.S. Department of Education’s Integrated Postsecondary Education Data System (IPEDS) collecting some key success and financial aid metrics for first-time, full-time students only.

As a result of the 1990 Student Right-to-Know Act, all colleges were required to start compiling graduation rates (and disclosing them upon request) for first-time, full-time students, and a smaller group of colleges was also required to collect transfer-out rates. Colleges were then required to submit the data to IPEDS for students who began college in the 1996-97 academic year so the information would be available to the public. This was a step forward for transparency, but it did little to accurately represent community colleges and less-selective four-year institutions. Some groups, such as the Student Achievement Measure, have emerged to voluntarily provide information on completion rates for part-time and transfer students. These data have shown that IPEDS significantly understates overall completion rates even among students who initially fit the first-time, full-time definition.

After years of technical review panels and discussions about how to best collect data on part-time and non-first-time students along with a one-year delay to “address data quality issues,” the National Center for Education Statistics released the first year of the new Outcome Measures survey via College Navigator earlier this week. This covers students who began college in 2008 and were tracked for a period of up to eight years. Although the data won’t be easily downloadable via the IPEDS Data Center until mid-October, I pulled up data on six colleges (two community colleges, two public four-year colleges, and two private nonprofit colleges in New Jersey) to show the advantages of more complete outcomes data.

Examples of IPEDS Outcome Measures survey data, 2008 entering cohort.
| Institution / cohort | 6-year grad rate | 8-year grad rate | Still enrolled within 8 years | Enrolled elsewhere within 8 years |
| --- | --- | --- | --- | --- |
| Community colleges | | | | |
| Atlantic Cape Community College | | | | |
| First-time, full-time | 26% | 28% | 3% | 27% |
| Not first-time, but full-time | 41% | 45% | 0% | 29% |
| First-time, part-time | 12% | 14% | 5% | 20% |
| Not first-time, but part-time | 23% | 26% | 0% | 38% |
| Brookdale Community College | | | | |
| First-time, full-time | 33% | 35% | 3% | 24% |
| Not first-time, but full-time | 36% | 39% | 2% | 33% |
| First-time, part-time | 17% | 18% | 3% | 25% |
| Not first-time, but part-time | 25% | 28% | 0% | 28% |
| Public four-year colleges | | | | |
| Rowan University | | | | |
| First-time, full-time | 64% | 66% | 0% | 20% |
| Not first-time, but full-time | 82% | 82% | 1% | 7% |
| First-time, part-time | 17% | 17% | 0% | 0% |
| Not first-time, but part-time | 49% | 52% | 5% | 21% |
| Thomas Edison State University | | | | |
| Not first-time, but part-time | 42% | 44% | 3% | 29% |
| Private nonprofit colleges | | | | |
| Centenary University of NJ | | | | |
| First-time, full-time | 61% | 62% | 0% | 4% |
| Seton Hall University | | | | |
| First-time, full-time | 66% | 68% | 0% | 24% |
| Not first-time, but full-time | 67% | 68% | 0% | 18% |
| First-time, part-time | 0% | 0% | 33% | 33% |
| Not first-time, but part-time | 38% | 38% | 0% | 38% |

There are several key points that the new data highlight:

(1) A sizable percentage of students enrolled at another college within eight years of enrolling in the initial college. The percentages at the two community colleges in the sample (Atlantic Cape and Brookdale) are roughly similar to the eight-year graduation rates, suggesting that quite a few students are transferring without receiving degrees. These rates are lower in the four-year sector, but still far from trivial.

(2) New colleges show up in the graduation rate data! Thomas Edison State University is well-known for focusing on adult students (it only accepts students age 21 or older), so it has never had a first-time, full-time cohort for the traditional graduation rate. But TESU has a respectable 42% six-year graduation rate for its not-first-time, part-time students, with another 29% of that cohort enrolled elsewhere within eight years. On the other hand, residential colleges may have only a first-time, full-time cohort (such as Centenary University) or small cohorts of other students for which data shouldn’t be trusted (such as Seton Hall’s tiny cohort of first-time, part-time students).

(3) Students who are not first-time in college (i.e., transfer students) graduate at rates similar to or higher than those of first-time students. To some extent, this is not surprising, as these students enter with more credits. For example, at Rowan University, 82% of transfer students who entered full-time graduated within six years, compared to 64% of first-time students.

(4) Institutional graduation rates don’t change much after six years. Among these six colleges, graduation rates went up by less than five percentage points between six and eight years and few students are still enrolled after eight years. It’s important to see if this is a broader trend, but this suggests that six-year graduation rates are fairly reasonable metrics.

Once the full dataset is available in October, I’ll return to analyze broader trends in the Outcome Measures data. But for now, take a look at a few colleges and enjoy a sneak peek into the new data!

Beware OPEIDs and Super OPEIDs

In higher education discussions, everyone wants to know how a particular college or university is performing across a range of metrics. For metrics such as graduation rates and enrollment levels, this isn’t a big problem. Each freestanding college (typically meaning that they have their own accreditation and institutional governance structure) has to report this information to the U.S. Department of Education’s Integrated Postsecondary Education Data System (IPEDS) each year. But other metrics are more challenging to use and interpret because they can cover multiple campuses—something I dig into in this post.

In the 2015-16 academic year, there were 7,409 individual colleges (excluding administrative offices) in the 50 states and Washington, DC that reported data to IPEDS and were uniquely identified by a UnitID number. A common mistake that analysts make is to assume that all federal higher education (or even all IPEDS) data metrics represent just one UnitID, but that is not always the case. Enter researchers’ longtime nemesis—the OPEID.

OPEIDs are assigned by the U.S. Department of Education’s Office of Postsecondary Education (OPE) to reflect each postsecondary institution that has a program participation agreement to participate in federal student aid programs. However, some colleges within a system of higher education share a program participation agreement, in which one parent institution has a number of child institutions for financial aid purposes.

Parent/child relationships can generally be identified using OPEID codes: parent institutions typically have OPEIDs ending in “00,” while child institutions typically have OPEIDs ending in another value. These reporting relationships are fairly prevalent: based on OPEID values, IPEDS included approximately 5,744 parent and 1,665 child institutions in the 2015-16 academic year. For-profit college chains typically report using parent/child relationships, and a number of public college and university systems also aggregate institutional data to the OPEID level. For example, Penn State and Rutgers have parent/child relationships, while the University of Missouri and the University of Wisconsin do not.
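A minimal sketch of flagging parents and children from the IPEDS header file is below; the file name and the assumption that OPEIDs are eight-digit strings with a two-digit branch suffix are mine, so verify against the actual file layout.

```python
import pandas as pd

# IPEDS header file for 2015-16 (name assumed); read OPEID as a string
# so leading zeros survive.
hd = pd.read_csv("hd2015.csv", dtype={"OPEID": str})

# Pad to eight digits; the last two digits are the branch suffix,
# with "00" conventionally marking the parent institution.
hd["opeid"] = hd["OPEID"].str.zfill(8)
hd["is_parent"] = hd["opeid"].str[-2:] == "00"

print(hd["is_parent"].value_counts())  # roughly 5,744 parents, 1,665 children
```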

In the case of a parent/child relationship, all data that come from the Office of Federal Student Aid or from the National Student Loan Data System are aggregated across a number of colleges. This includes all data on student loan repayment rates, earnings, and debt from the College Scorecard, as well as the student loan default rates that are currently used for accountability purposes. Additionally, some colleges report finance data at the OPEID level on a seemingly chaotic basis—something that can only be discovered by combing through the data to see whether child institutions are missing values. For example, Penn State always reports at the parent level, while Rutgers has reported at the parent level and the child level on different occasions over the last 15 years. Ozan Jaquette and Edna Parra have pointed out in some great research that failing to address parent/child issues can result in estimates from IPEDS or Delta Cost Project data being inaccurate (although trend data are generally reasonable).

If UnitIDs and OPEIDs were not enough, the Equality of Opportunity Project (EOP) dataset added a new term—super-OPEIDs—to researchers’ jargon. This innovative dataset, compiled by economists Raj Chetty, John Friedman, and Nathaniel Hendren, uses federal income tax records to construct social mobility metrics for 2,461 institutions of higher education based on pre-college family income and post-college student income. (I used this dataset last month in a blog post looking at variations in marriage rates across four-year colleges.) However, the limitation of this approach is that the researchers have to rely on the names of the institutions on tax forms, which are sometimes aggregated beyond UnitIDs or OPEIDs. Hence, the super-OPEID.

The researchers helpfully included a flag for super-OPEIDs that combined multiple OPEIDs (the variable name is “multi” in the dataset, for those playing along at home). There are 96 super-OPEIDs that have this multiple-OPEID flag, including a number of states’ public university systems. The full list can be found in this spreadsheet, but I wanted to pull out some of the most interesting pairings. Here are a few:

–Arizona State And Northern Arizona University And University Of Arizona

–University Of Maryland System (Except University College) And Baltimore City Community College

–Minnesota State University System, Century And Various Other Minnesota Community Colleges

–SUNY Upstate Medical University And SUNY College Of Environmental Science And Forestry

–Certain Colorado Community Colleges

To get an idea of how many colleges (as measured by UnitIDs) have their own super-OPEID, I examined the number of colleges that did not have a multiple-OPEID flag in the EOP data and did not have any child institutions based on their OPEID. This resulted in 2,143 colleges having their own UnitID, OPEID, and super-OPEID—meaning that none of their data across these sources are combined with other institutions’. (This number would likely be higher if all colleges were in the EOP data, but some institutions were either too new or too small to be included in the dataset.)
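A sketch of that count, building on the parent/child flags above, might look like the following; the EOP file name, the merge key, and the column names are assumptions for illustration.

```python
import pandas as pd

# Reuse the padded OPEIDs from the earlier sketch (names assumed).
hd = pd.read_csv("hd2015.csv", dtype={"OPEID": str})
hd["opeid"] = hd["OPEID"].str.zfill(8)
hd["opeid6"] = hd["opeid"].str[:6]

# Six-digit stems that have any child record (suffix other than "00").
stems_with_children = set(hd.loc[hd["opeid"].str[-2:] != "00", "opeid6"])

# EOP crosswalk with its "multi" flag (1 = super-OPEID combines OPEIDs).
eop = pd.read_csv("eop_crosswalk.csv", dtype={"opeid": str})
merged = hd.merge(eop[["opeid", "multi"]], on="opeid", how="inner")

standalone = merged[(merged["multi"] == 0) &
                    (~merged["opeid6"].isin(stems_with_children))]
print(len(standalone), "colleges have their own UnitID, OPEID, and super-OPEID")
```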

I want to close by noting the limitations of both the EOP and Federal Student Aid/College Scorecard data for analytic purposes, as well as highlighting the importance of the wonky terms UnitID, OPEID, and super-OPEID. Analysts should carefully note when data are being aggregated across separate UnitIDs (particularly when different types of colleges are being combined) and consider omitting colleges where aggregation may be a larger concern across OPEIDs or super-OPEIDs.

For example, earnings data from the College Scorecard would be fine for the University of Maryland-College Park (as the dataset reflects just that campus’s earnings), but social mobility data would include a number of other institutions. Users of these data sources should also describe their strategies in their methods discussions in enough detail to allow others to replicate their decisions.

Thanks to Sherman Dorn at Arizona State University for inspiring this blog post via Twitter.

Not-so-Free College and the Disappointment Effect

One of the most appealing aspects of tuition-free higher education proposals is that they convey a simple message about higher education affordability. Although students will need to come up with a substantial amount of money to cover textbooks, fees, and living expenses, one key expense will be covered if students hold up their end of the bargain. That simplicity helps explain why the results of existing private-sector college promise programs are generally promising, as shown in this policy brief that I wrote for my friends at the Midwestern Higher Education Compact.

But free college programs in the public sector often come with a key limitation—the amount of money that the state has to fund the program in a given year. Tennessee largely avoided this concern by endowing the Tennessee Promise program through lottery funds, and the program appears to be in good financial shape at this point. However, two other states are finding that available funds are insufficient to meet program demand.

  • Oregon will provide only $40 million of the $48 million needed to fund its nearly tuition-free community college program (which requires a $50 student copay). As a result, the state will eliminate grants to the 15% to 20% of students with the highest expected family contributions (a very rough proxy for ability to pay).
  • New York received 75,000 completed applications for its tuition-free public college program, yet still only expects to give out 23,000 scholarships. Some of this dropoff may be due to students attending other colleges, but other students are probably still counting on the money.

In both states, a number of students who expected to get state grant aid will not receive any money. While rationing of state aid dollars is nothing new (many states’ aid programs are first-come, first-served), advertising tuition-free college and then telling students close to the start of the academic year that they won’t receive grant aid may have negative effects, such as students choosing not to attend college at all or performing worse academically if they do attend. There is a sizable body of literature documenting the “disappointment effect” in other areas, but relatively little in financial aid. There is evidence that losing grant aid can hurt continuing students, yet this research does not separate the potential effect of not having money from the potential disappointment effect.

The Oregon and New York experiences provide for a great opportunity to test the disappointment effect. Both states could compare students who applied for but did not receive the grant in 2017-18 to similar students in years prior to the free college programs. This would allow for a reasonably clean test of whether the disappointment effect had any implications for college choice and eventual persistence.