A First Look at Program-Level Earnings Data by Credential Level

The U.S. Department of Education has been promising program-level earnings data in the College Scorecard for several months now following the release of program-level debt data back in May. Debt data are interesting, but I think everyone was waiting for the earnings data to come out. And they came out today, sending me scrambling to dig into the data between meetings, teaching, and the other responsibilities of a tenured faculty member. The data can be found here, and please do read the documentation before digging in.

Before I get back to meetings, here are a few takeaways:

(1) Debt and earnings data are based on different samples of students. Debt data only include people with federal loans, while earnings data include people with any type of federal financial aid. At community colleges, these samples are quite different because more students typically receive Pell Grants than loans. But for graduate programs, the samples differ by only a few work-study students.

(2) Most programs aren’t covered in the data, but most students are. In the most recent data file, 216,638 programs are listed. Of these, 45,371 have earnings data and 51,423 have debt data.

(3) Earnings are measured soon after graduation. Earnings were measured in 2016-17 for students graduating in 2014-15 and 2015-16. More years of data will be included in the future.

(4) Want to make money? Be a dentist. The program with the highest earnings was (The) Ohio State University’s dental program, with earnings of $231,200 and debt of $173,309. Dental and other health sciences programs dominated the top of the earnings distribution, with a few law and business programs thrown in. Most of these programs carry high debt burdens. On the other hand, Parker University’s chiropractic program brought up the rear with debt of $193,328 and earnings of $2,700. Something strange is probably going on with the data there.

(5) Earnings and debt vary considerably by credential level. In general, both debt and earnings increase across credential levels, but debt increases at a faster rate: the median debt-to-earnings ratio across first professional (law, medicine, etc.) programs was 191%. Earnings often increase quickly in later years, but the first few years won’t be fun.
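For anyone who wants to replicate this kind of calculation, here is a minimal pandas sketch. The file and column names are placeholders to be checked against the Scorecard data dictionary; the main wrinkle is that Scorecard files code suppressed cells as “PrivacySuppressed,” which needs to be converted to missing before doing any math.

```python
import pandas as pd

# Hypothetical file and column names -- check the Scorecard data dictionary
# for the real ones before running.
df = pd.read_csv("Most-Recent-Field-Data-Elements.csv", low_memory=False)

# Coercing to numeric turns "PrivacySuppressed" cells into missing values.
for col in ["DEBT_MDN", "EARN_MDN"]:
    df[col] = pd.to_numeric(df[col], errors="coerce")

# Program-level debt-to-earnings ratio, then the median within each
# credential level
df["debt_to_earnings"] = df["DEBT_MDN"] / df["EARN_MDN"]
print(df.groupby("CREDDESC")["debt_to_earnings"].median().round(2))
```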

I look forward to seeing a whole host of (responsible) analyses using the new data, so keep me posted on any good takes. This has the potential to influence families and colleges alike, and I’m particularly interested to see if the data release affects whether colleges close low-performing programs (as I discussed in my last blog post).

Highlighting Some Interesting Living Allowance Estimates

As a self-proclaimed higher education data nerd, I was thrilled to see the U.S. Department of Education release the first of the 2018-19 data via its Integrated Postsecondary Education Data System (IPEDS) website. Among the new components released today were fresh data on tuition, fees, and other components of the total cost of attendance. After taking a little bit of time to update my datasets (a tip to users: investing the time to work with the full data files instead of the point-and-click interface is well worth it), I’m surfacing with a look at some of the more interesting living allowance estimates for off-campus students.

Some quick details on why this is important: colleges are responsible for setting the cost of attendance (COA) for students, which includes estimated expenses for room and board, books and supplies, and other miscellaneous expenses like transportation and personal care. Students can access financial aid up to the COA, and the net price of attendance (a key accountability measure) is derived by subtracting grant aid from the COA. Colleges are thus caught in a bind: they must give students access to the aid—often loans—that they need to succeed while not looking too expensive or raising concerns about ‘overborrowing’ (which I am generally skeptical of at the undergraduate level).

Building on previous work that I did with Sara Goldrick-Rab of Temple University and Braden Hosch of Stony Brook University (here is a publicly-available version of our journal article), I pulled colleges’ reported on-campus and off-campus room and board estimates for the 2018-19 academic year.[1] To put this information in context, I also pulled in the average county-level nine-month rent for a two-bedroom apartment that is shared with a roommate. To make this fully comparable, I added $1,800 to account for nine months of food; this amount falls between the USDA’s current cost estimates for its thrifty and low-cost food plans.
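To make that construction concrete, here is a minimal sketch of the comparison estimate; the nine-month window and $1,800 food figure come straight from the description above, while the function name is mine.

```python
MONTHS = 9
FOOD = 1800  # nine months of food, between USDA's thrifty and low-cost plans

def estimated_living_allowance(two_bedroom_monthly_rent: float) -> float:
    """Nine-month estimate for a student splitting a two-bedroom apartment."""
    rent_share = two_bedroom_monthly_rent / 2  # one of two roommates
    return rent_share * MONTHS + FOOD

# Example: a county where a two-bedroom runs about $700 per month ($350 per
# roommate), roughly the situation near Wilmington College discussed below
print(estimated_living_allowance(700))  # 4950.0, close to the $4,945.50 in the table
```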

Here is a link to the data for all 3,403 colleges that reported off-campus room and board data for the 2018-19 academic year.[2] Below, I highlight some colleges on the high end and on the low end of the estimated living allowances.

Extremely low living allowances

Thirty colleges listed living allowances of $3,000 or below in the 2018-19 academic year. Given that food is approximately $1,800 for nine months, this leaves less than $150 per month for rent. Even in affordable parts of the country, this is essentially impossible. For example, Wilmington College in Ohio is in a reasonably affordable region with the price tag of sharing a two-bedroom apartment coming in at about $350 per month. But an off-campus allowance of $2,650 for nine months is insufficient to cover this and food. (The on-campus price tag is $9,925 for nine months, suggesting that price-sensitive students are probably looking to live off campus as much as possible.)

Name | State | On-campus room and board, 2018-19 | Off-campus room and board, 2018-19 | Off-campus room and board, 2017-18 | Estimated off-campus room and board, 2018-19
Southern California University of Health Sciences | CA | N/A | $1,600 | $4,800 | $9,859.50
University of the People | CA | N/A | $2,001 | $2,001 | $9,859.50
Wellesley College | MA | $16,468 | $2,050 | $2,050 | $11,673
Kehilath Yakov Rabbinical Seminary | NY | $2,800 | $2,100 | $2,100 | $9,787.50
Western International University | AZ | N/A | $2,160 | $2,160 | $6,628.50
Central Georgia Technical College | GA | N/A | $2,184 | $2,600 | $5,823
Washington Adventist University | MD | $9,370 | $2,226 | $2,226 | $9,292.50
The Southern Baptist Theological Seminary | KY | $7,150 | $2,460 | $2,460 | $5,638.50
The College of Wooster | OH | $11,850 | $2,500 | $2,500 | $5,107.50
Ohio Institute of Allied Health | OH | N/A | $2,500 | $2,500 | $5,346
Agnes Scott College | GA | $12,330 | $2,500 | $2,500 | $6,777
Sharon Regional School of Nursing | PA | N/A | $2,500 | $4,800 | $4,995
John Brown University | AR | $9,224 | $2,500 | $2,500 | $5,211
Elmira College | NY | $12,000 | $2,500 | $2,500 | $5,553
Estelle Medical Academy | IL | N/A | $2,500 | $2,500 | $7,254
Mountain Empire Community College | VA | N/A | $2,600 | $2,600 | $4,995
Wilmington College | OH | $9,925 | $2,650 | $2,650 | $4,945.50
Cleveland Community College | NC | N/A | $2,700 | $2,700 | $4,882.50
Michigan Career and Technical Institute | MI | $6,156 | $2,716 | $2,664 | $5,823
Hope College | MI | $10,310 | $2,760 | $2,790 | $5,733
Bryant & Stratton College-Online | NY | N/A | $2,800 | $2,800 | $5,571
Allegheny Wesleyan College | OH | $3,600 | $2,880 | $2,880 | $4,869
Daemen College | NY | $12,915 | $2,900 | $2,900 | $5,571
George C Wallace Community College-Dothan | AL | N/A | $2,983 | $2,983 | $4,630.50
Long Island Business Institute | NY | N/A | $3,000 | $3,000 | $10,039.50
Uta Mesivta of Kiryas Joel | NY | $6,000 | $3,000 | $3,000 | $7,857
Wytheville Community College | VA | N/A | $3,000 | $3,000 | $4,959
Skokie Institute of Allied Health and Technology | IL | N/A | $3,000 | N/A | $7,254
Rabbinical College Ohr Yisroel | NY | $3,000 | $3,000 | $3,000 | $10,039.50
Bishop State Community College | AL | N/A | $3,000 | $3,000 | $5,616


Extremely high living allowances

On the high end, 28 colleges checked in with nine-month living allowances above $19,000. Even at colleges in expensive areas, students could easily afford to split a two-bedroom apartment and eat reasonably well with this allowance. For example, Pace University in New York has a room and board allowance of $19,774 for nine months, while splitting a two-bedroom apartment and buying food checks in at about $10,040. But if the student has a child and needs the full two-bedroom apartment, this estimate is almost spot-on.

Name | State | On-campus room and board, 2018-19 | Off-campus room and board, 2018-19 | Off-campus room and board, 2017-18 | Estimated off-campus room and board, 2018-19
Acupuncture and Massage College | FL | N/A | $19,144 | $16,880 | $8,343
Central California School of Continuing Education | CA | N/A | $19,210 | $19,210 | $8,739
Arcadia University | PA | $13,800 | $19,292 | $18,365 | $7,200
University of Baltimore | MD | N/A | $19,350 | $14,200 | $7,839
Circle in the Square Theatre School | NY | N/A | $19,375 | $18,500 | $10,039.50
Little Priest Tribal College | NE | $7,000 | $19,440 | $19,440 | $4,950
Pace University | NY | $18,529 | $19,774 | $18,756 | $10,039.50
New York Film Academy | CA | N/A | $19,800 | $19,800 | $9,859.50
Fashion Institute of Technology | NY | $14,480 | $19,968 | $19,558 | $10,039.50
Miami Ad School at Portfolio Center | GA | N/A | $20,000 | $14,520 | $6,777
Atlantic Cape Community College | NJ | N/A | $20,100 | $19,600 | $7,555.50
John F. Kennedy University | CA | N/A | $20,112 | N/A | $11,367
Hofstra University | NY | $14,998 | $20,323 | $19,850 | $10,381.50
School of Visual Arts | NY | $20,400 | $20,400 | $19,600 | $10,039.50
California Institute of Arts & Technology | CA | N/A | $20,496 | $19,271 | $11,106
Hawaii Medical College | HI | N/A | $20,712 | $19,152 | $11,101.50
Ocean County College | NJ | N/A | $20,832 | $20,496 | $8,455.50
Colorado School of Healing Arts | CO | N/A | $20,940 | $12,267 | $8,586
New York School of Interior Design | NY | $21,300 | $21,300 | $21,000 | $10,039.50
Monterey Peninsula College | CA | N/A | $21,753 | $17,298 | $8,730
School of Professional Horticulture, New York Botanical Garden | NY | N/A | $22,000 | $22,000 | $10,039.50
The University of America | CA | N/A | $23,000 | N/A | $7,344
Carolinas College of Health Sciences | NC | N/A | $24,831 | $24,108 | $6,426
Long Island University | NY | $14,020 | $25,000 | $25,000 | $10,381.50
Carlos Albizu University-Miami | FL | N/A | $25,536 | $25,083 | $8,343
Miami Ad School-San Francisco | CA | N/A | $29,400 | $29,400 | $16,065
Miami Ad School-New York | NY | N/A | $29,400 | $29,400 | $10,039.50
Miami Ad School-Wynwood | FL | N/A | $29,400 | $29,400 | $8,343


As a final note in this post, I would like to say that I frequently hear from colleges that I am using incorrect data for their institution in my analyses. My response is to remind them to make sure the data they provide to the U.S. Department of Education are correct. I do my best not to highlight colleges that had massive changes from year to year, as those could be reporting errors. But ultimately, it’s up to each college to get the data right until the federal government finally decides to audit a few colleges’ data each year as a quality assurance tool.

[1] To allow for a consistent comparison across nine-month academic years, this excludes colleges that report living allowances for the entire length of the program. Additionally, room and board estimates are for students living off campus away from their families, as students living ‘at home’ do not have living allowance data in IPEDS.

[2] If a college requires all first-year students to live on campus, it may be missing from this dataset.

How the New Carnegie Classifications Scrambled College Rankings

Carnegie classifications are one of the wonkiest, most inside baseball concepts in the world of higher education policy. Updated every three years by the good folks at Indiana University, these classifications serve as a useful tool to group similar colleges based on their mix of programs, degree offerings, and research intensity. And since I have been considered “a reliable source of deep-weeds wonkery” in the past, I wrote about the most recent changes to Carnegie classifications earlier this year.

But for most people outside institutional research offices, the first time the updated Carnegie classifications really got noticed was with this fall’s college rankings season. Both the Washington Monthly rankings that I compile and the U.S. News rankings that I get asked to comment about quite a bit rely on Carnegie classifications to define the group of national universities. We both use the Carnegie doctoral/research university category for this, placing master’s institutions into a master’s university category (us) or a regional university category (U.S. News). With the number of Carnegie research universities spiking from 334 in the 2015 classifications to 423 in the most recent 2018 classifications, a bunch of new universities entered the national rankings.

To be more exact, 92 universities appeared in Washington Monthly’s national university rankings for the first time this year, with nearly all of these universities coming out of the master’s rankings last year. The full dataset of these colleges and their rankings in both the US News and Washington Monthly rankings can be downloaded here, but I will highlight a few colleges that cracked the top 100 in either ranking below:

Santa Clara University: #54 in US News, #137 in Washington Monthly

Loyola Marymount University: #64 in US News, #258 in Washington Monthly

Gonzaga University: #79 in US News, #211 in Washington Monthly

Elon University: #84 in US News, #282 in Washington Monthly

Rutgers University-Camden: #166 in US News, #57 in Washington Monthly

Towson University: #197 in US News, #59 in Washington Monthly

Mary Baldwin University: #272 in US News, #35 in Washington Monthly

The appearance of these new colleges in the national university rankings means that other colleges got squeezed down. Given the priority that many colleges and their boards place on the US News rankings, it’s a tough day on some campuses. Meanwhile, judging by press releases, the new top-100 national universities are probably having a good time right now.

Some Updates on the State Performance Funding Data Project

Last December, I publicly announced a new project with Justin Ortagus of the University of Florida and Kelly Rosinger of Pennsylvania State University that would collect data on the details of states’ performance-based funding (PBF) systems. We have spent the last nine months diving even deeper into policy documents and obscure corners of the Internet as well as talking with state higher education officials to build our dataset. Now is a good chance to come up for air for a few minutes and provide an update on our project and our status going forward.

First, I’m happy to share that data collection is moving along pretty well. We gave a presentation at the State Higher Education Executive Officers Association’s annual policy conference in Boston in early August and were also able to make some great connections with people from more states at the conference. We are getting close to having a solid first draft of a 20-plus year dataset on state-level policies, and we are working hard to build institution-level datasets for each state. As we discuss in the slide deck, our painstaking data collection process is leading us to question some of the prior typologies of performance funding systems. We will have more to share on that in the coming months, but going back to get data on early PBF systems is quite illuminating.

Second, our initial announcement about the project included a one-year, $204,528 grant from the William T. Grant Foundation to fund our data collection efforts. We recently received $373,590 in funding from Arnold Ventures and the Joyce Foundation to extend the project through mid-2021. This will allow us to build a project website, analyze the data, and disseminate results to policymakers and the public.

Finally, we have learned an incredible amount about data collection over the last couple of years working together as a team. (And I couldn’t ask for better colleagues!) One thing that we learned is that there is little guidance to researchers on how to collect the types of detailed data needed to provide useful information to the field. We decided to write up a how-to guide on data collection and analyses, and I’m pleased to share our new article on the topic in AERA Open. In this article (which is fully open access), we share some tips and tricks for collecting data (the Wayback Machine might as well be a member of our research team at this point), as well as how to do difference-in-differences analyses with continuous treatment variables. Hopefully, this article will encourage other researchers to launch similar data collection efforts while helping them avoid some of the missteps that we made early in our project.
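For readers curious about what a difference-in-differences model with a continuous treatment looks like in practice, here is a minimal sketch on simulated data. This is my own illustration of the general approach, not the specification from the article, and every variable name here is invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated panel: 200 colleges observed over ten years
rng = np.random.default_rng(0)
panel = pd.DataFrame({
    "college": np.repeat(np.arange(200), 10),
    "year": np.tile(np.arange(2008, 2018), 200),
})

# Continuous treatment: e.g., the share of state funding tied to
# performance, switched on in 2012 for illustration
panel["pbf_dose"] = rng.uniform(0, 1, len(panel)) * (panel["year"] >= 2012)
panel["outcome"] = 0.5 * panel["pbf_dose"] + rng.normal(size=len(panel))

# College and year fixed effects absorb time-invariant institution traits
# and common shocks; the coefficient on pbf_dose is the DiD estimate.
# Standard errors are clustered by college.
model = smf.ols("outcome ~ pbf_dose + C(college) + C(year)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["college"]}
)
print(model.params["pbf_dose"])
```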

Stay tuned for future updates on our project, as we will have some exciting new research to share throughout the next few years!

Trends in For-Profit Colleges’ Reliance on Federal Funds

One of the many issues currently derailing bipartisan agreement on federal Higher Education Act reauthorization is how to treat for-profit colleges. Democrats and their ideologically-aligned interest groups, such as Elizabeth Warren and the American Federation of Teachers, have called on Congress to cut off all federal funds to for-profit colleges—a position that few publicly took before this year. Meanwhile, Republicans have generally pushed for all colleges to be held to the same accountability standards, as evidenced by the Department of Education’s recent decision to rescind the Obama-era gainful employment regulations that primarily focused on for-profit colleges. (Thankfully, program-level debt-to-earnings data—which were used to calculate gainful employment metrics—will be available for all programs later this year.)

I am spending quite a bit of time thinking about gainful employment right now as I work on a paper with one of my graduate students that examines whether programs at for-profit colleges that failed the gainful employment metrics shut down at higher rates than similar programs that passed. Look for a draft of this paper later this year, and I welcome feedback from the field as soon as we have something that is ready to share.

But while I was putting together the dataset for that paper, I realized that new data on the 90/10 rule came out with basically no attention last December. (And this is how blog posts are born, folks!) This rule requires for-profit colleges to get at least 10% of their revenue from sources other than federal Title IV financial aid (veterans’ benefits count toward the non-Title IV funds). Democrats who are not calling for the end of federal student aid to for-profits are trying to change 90/10 to 85/15 and to count veterans’ benefits with the rest of federal aid, while Republicans are trying to eliminate the rule entirely. (For what it’s worth, here are my thoughts about a potential compromise.)

With the release of the newest data (covering fiscal years ending in the 2016-17 award year), there are now ten years of 90/10 rule data available on Federal Student Aid’s website. I have written in the past about how much for-profit colleges rely on federal funds, and this post extends the dataset from the 2007-08 through the 2016-17 award years. I limited the sample to colleges located in the 50 states and Washington, DC, and to the 965 colleges that reported data in all ten years for which data have been publicly released. The general trends in reliance on Title IV revenues are similar when looking at the full sample, which ranges from 1,712 to 1,999 colleges across the ten years.

The table below shows how much the median college in the sample relied on Title IV federal financial aid revenues in each of the ten years of available data. The typical institution’s share of revenue coming from federal financial aid increased sharply from 63.2% in 2007-08 to 73.6% in 2009-10. At least part of this increase is attributable to two factors: the Great Recession, which made more students eligible for need-based financial aid (and encouraged an increase in college enrollment), and the increased generosity of the Pell Grant program. Title IV reliance peaked at 76.0% in 2011-12 and has declined in each of the five most recent years, reaching 71.5% in 2016-17.

Award year | Median reliance on Title IV revenue (%)
2007-08 | 63.2
2008-09 | 68.3
2009-10 | 73.6
2010-11 | 74.0
2011-12 | 76.0
2012-13 | 75.5
2013-14 | 74.6
2014-15 | 73.2
2015-16 | 72.5
2016-17 | 71.5
Number of colleges | 965
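For anyone rebuilding this from the source spreadsheets, the balanced-panel restriction is straightforward. Here is a minimal sketch that assumes the ten annual files have already been stacked into one long file; the file and column names are mine rather than Federal Student Aid’s.

```python
import pandas as pd

# Hypothetical combined file with one row per college per award year
df = pd.read_csv("ninety_ten_2008_to_2017.csv")

# Keep only colleges (by OPEID) that report in all ten award years
years_reported = df.groupby("opeid")["award_year"].nunique()
balanced = df[df["opeid"].isin(years_reported[years_reported == 10].index)]

# Median reliance on Title IV revenue by award year
print(balanced.groupby("award_year")["title_iv_pct"].median())
```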

I then looked at reliance on Title IV aid by a college’s total revenues in the 2016-17 award year, dividing colleges into less than $1 million (n=318), $1 million-$10 million (n=506), $10 million-$100 million (n=122), and more than $100 million (n=19). All four groups exhibited similar patterns of change over the last decade. The smallest colleges tended to rely on Title IV funds the least, while colleges with revenues between $10 million and $100 million in 2016-17 had the highest shares of funds coming from federal financial aid. However, the differences among the groups were less than five percentage points from 2009-10 forward.
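Continuing the sketch above, the revenue grouping is a one-liner with pd.cut; the column names are again illustrative.

```python
import numpy as np
import pandas as pd

# 2016-17 rows from the balanced panel built in the previous sketch
latest = balanced.loc[balanced["award_year"] == "2016-17"].copy()
latest["size_group"] = pd.cut(
    latest["total_revenue"],
    bins=[0, 1e6, 10e6, 100e6, np.inf],
    labels=["<$1M", "$1M-$10M", "$10M-$100M", ">$100M"],
)

# Median Title IV share within each revenue group
print(latest.groupby("size_group", observed=True)["title_iv_pct"].median())
```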

For those interested in diving deeper into the data, I highly recommend downloading the source spreadsheets from Federal Student Aid along with the explanations for colleges that have exceeded the 90% threshold. I have also uploaded an Excel spreadsheet of the 965 colleges with data in each of the ten years examined above.

Some Thoughts on Program-Level College Scorecard Data

The U.S. Department of Education has been promising to make program-level outcome data available on the College Scorecard for several years now. The Obama administration started the underlying data collection after releasing the initial Scorecard to the public in 2015, and the Trump administration elevated this topic by issuing an executive order earlier this year. I was at a technical review panel at ED last month on this topic, and I just noticed earlier today that members of the public can now comment on our two-day discussion in one of Washington’s most scenic windowless conference rooms.

So I was surprised to see a press release this afternoon announcing that the College Scorecard had been updated in several important ways. This update includes more than just program-level data: the public-facing site now has data on certificate-granting institutions and uses IPEDS graduation rates that go beyond first-time, full-time students. Needless to say, I’m happy to see both of these improvements, even though I am somewhat skeptical that students pursuing vocational certificates will access the public-facing Scorecard to the same extent that students seeking bachelor’s degrees will.

But this blog post focuses on the program-level Scorecard data, which are preliminary and could be updated as soon as later this year. I used the combined 2015-16 and 2016-17 dataset (the most recent available), which includes data on all graduates who received federal financial aid. This means that coverage is better for some programs than others; for example, law schools are better covered than PhD programs since relatively few PhD students borrow compared to law students. The dataset contains 194,575 programs across 6,094 institutions.

Here are some highlights:

  • Median debt data are only available for 42,430 programs (21.8% of the sample), as small programs do not have data shown due to privacy concerns. But based on IPEDS completions, about 70% of students are enrolled in programs where debt data are available.
  • Here are the average median debt burdens by credential level (a sketch for computing these from the raw file follows this list):
    • Undergraduate certificate: $10,953 (n=4,146)
    • Associate: $15,134 (n=5,952)
    • Bachelor’s: $23,382 (n=23,649)
    • Graduate certificate: $48,513 (n=266)
    • Master’s: $42,335 (n=7,011)
    • First professional: $141,310 (n=660)
    • Doctoral: $95,715 (n=716)
  • 172 programs had over $200,000 in median debt, and it looks like the top 116 programs are all in health sciences. The data are preliminary, but Roseman University of Health Sciences’s dentistry program has the top listed debt burden at a cool $410,213. Meanwhile, 3,970 programs had median debt burdens below $10,000.
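Here is a sketch of how one might compute those credential-level averages and the counts at the extremes from the program-level file. As in the earlier sketch, the file and column names are placeholders to be checked against the data dictionary.

```python
import pandas as pd

# Hypothetical file and column names
df = pd.read_csv("scorecard_program_debt.csv", low_memory=False)

# Coercing to numeric turns "PrivacySuppressed" cells into missing values
df["DEBT_MDN"] = pd.to_numeric(df["DEBT_MDN"], errors="coerce")

# Average of program-level median debt within each credential level, plus
# the number of programs with reportable data
print(df.groupby("CREDDESC")["DEBT_MDN"].agg(["mean", "count"]).round(0))

# Programs at the extremes of the debt distribution
print((df["DEBT_MDN"] > 200000).sum(), "programs with median debt over $200,000")
print((df["DEBT_MDN"] < 10000).sum(), "programs with median debt under $10,000")
```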

I am thrilled to see program-level debt data, both as a researcher (if I only had more time to sit down and dive into the data!) and as the newly-minted director of higher education graduate programs. Thanks to this dataset, I now know that roughly half of the students in the educational leadership doctoral program (K-12 and higher ed) at Seton Hall borrow, and median debt among graduates is $63,045. I hope that colleges around the country use this tool to get a handle on their graduates’ situations now that data are available for more than just those programs that were covered by gainful employment.

Oh, and about gainful employment: once earnings data come out (hopefully soon), it will be possible to calculate debt-to-earnings ratios for programs covering a large number of students, even without the sanctions present in the now-mothballed gainful employment regulations. Also expect to see loan repayment rates in the updated Scorecard, which will shed some interesting light on income-driven repayment usage and the implications for students and taxpayers.

Which Colleges Failed the Latest Financial Responsibility Test?

Every year, the U.S. Department of Education is required to issue a financial responsibility score for private nonprofit and for-profit colleges, which serves as a crude measure of an institution’s financial health. Colleges are scored on a scale from -1.0 to 3.0, with colleges scoring 0.9 or below failing the test (and having to put up a letter of credit) and colleges scoring between 1.0 and 1.4 being placed in a zone of additional oversight.
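The banding logic is simple enough to express in a few lines. Here is a tiny sketch; the edge handling reflects my reading of the thresholds above (scores are reported in tenths), not ED’s official procedure.

```python
def classify_score(score: float) -> str:
    """Map a composite score (-1.0 to 3.0) to its accountability band."""
    if score <= 0.9:
        return "Fail: must post a letter of credit"
    if score <= 1.4:
        return "Zone: subject to additional oversight"
    return "Pass"

for s in (-1.0, 0.9, 1.0, 1.4, 1.5, 3.0):
    print(f"{s:>4}: {classify_score(s)}")
```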

Ever since I first learned of the existence of this metric five or six years ago, I have been bizarrely fascinated by its mechanics and how colleges respond to the score as an accountability pressure. I have previously written about how these scores have been only loosely correlated with college closures, and I also wrote an article about how colleges do not appear to change their fiscal priorities as a result of receiving a low score.

ED typically releases financial responsibility scores with no fanfare, and it looks like they updated their website with new scores in late March without anyone noticing (at least based on a Google search of the term “financial responsibility score”). I was adding a link to the financial responsibility score to a paper I am writing and noticed that the newest data—for fiscal years ending between July 1, 2016 and June 30, 2017—were out. So here is a brief summary of the data.

Of the 3,590 colleges (at the OPEID level) that were subject to the financial responsibility test in 2016-17, 269 failed, 162 were in the oversight zone, and 3,159 passed. Failure rates were higher in the for-profit sector than in the nonprofit sector, as the table below indicates.

Financial responsibility scores by institutional type, 2016-17.

Score band | Nonprofit | For-profit | Total
Fail (-1.0 to 0.9) | 82 | 187 | 269
Zone (1.0 to 1.4) | 58 | 104 | 162
Pass (1.5 to 3.0) | 1,559 | 1,600 | 3,159
Total | 1,699 | 1,891 | 3,590


Among the 91 institutions with the lowest possible score of -1.0, 85 were for-profit, and many were part of larger chains. Education Management Corporation (17), Education Affiliates, Inc. (19), and Nemo Investor Aggregator (11) were responsible for more than half of the -1.0 scores. Most of the Education Affiliates (Fortis) and Nemo (Cortiva) campuses still appear to be open, but Education Management Corporation (Argosy, Art Institutes) recently suffered a spectacular collapse.

I am increasingly skeptical of financial responsibility scores as a useful measure of financial health because they are so backwards-looking. The data are already three years old, which is an eternity for a college on the brink of collapse (but perhaps not awful for a cash-strapped nonprofit college with a strong will to live on). I joined Kenny Megan from the Bipartisan Policy Center to write an op-ed for Roll Call on a better way to move forward with collecting more updated financial health measures, and I would love your thoughts on new ways to proceed!

The 2019 Net Price Madness Tournament

Ever since 2013, I have taken the 68 teams in the NCAA Division I men’s basketball tournament and filled out a bracket based on which college has the lower net price of attendance (defined as the total cost of attendance less all grant aid received). While the winners are not known for on-court success (see my 2018 bracket and older brackets along with my other writing on net price), it’s still great to highlight colleges that are affordable for their students. (Also, as UMBC’s win on the court last year over Virginia—which my bracket did call—shows, anything is theoretically possible!)

I created a bracket using 2016-17 data (the most recent available through the U.S. Department of Education) on the net price of attendance for all first-time, full-time students receiving grant aid. I should note that these net price measures are far from perfect—the data are now three years old, and colleges can manipulate these numbers through the living allowance portion of the cost of attendance. Nevertheless, they provide some insights regarding college affordability—and they may not be a bad way to pick that tossup 8/9 or 7/10 game that you’ll probably get wrong anyway.
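The bracket-filling rule itself takes only a few lines of code. Here is a sketch using two of the final four net prices listed below; the function is mine, and ties (which essentially never happen) advance the first team.

```python
def pick_winner(team_a: str, team_b: str, net_price: dict) -> str:
    """Advance whichever team's college has the lower net price."""
    return team_a if net_price[team_a] <= net_price[team_b] else team_b

# Net prices from the 2016-17 IPEDS data used for this bracket
net_price = {"Northern Kentucky": 9338, "Washington": 9443}
print(pick_winner("Northern Kentucky", "Washington", net_price))  # Northern Kentucky
```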

The final four teams in the bracket are the following, with the full dataset for all NCAA institutions available here:

East: Northern Kentucky ($9,338)

West: UNC-Chapel Hill ($10,077)

South: Purdue ($12,117)

Midwest: Washington ($9,443)

Kudos to Northern Kentucky for having the lowest net price for all students ($9,338), with an additional shout-out to UNC-Chapel Hill for having the lowest net price among teams that are likely to make it to the final weekend of basketball ($11,100). Not to be forgotten, UNC’s Tobacco Road rival Duke deserves a shout-out for having net prices below $1,000 for students with family incomes below $48,000 per year even as its overall net price is high.

As a closing note, this is the first NCAA tournament for which gambling is legal in certain states (including New Jersey). I can’t bring myself to wager on games in which student-athletes who are technically amateurs are playing. If a portion of gambling revenues went to trusts that players could activate after their collegiate careers are over (and they do not benefit from a particular outcome of a game), I might be interested in putting down a few dollars. But until then, I will use this bracket for bragging rights and educating folks about available higher education data.

New Data on Pell Grant Recipients’ Graduation Rates

Although the graduation rates of students who begin college with Pell Grants are a key marker of colleges’ commitments to socioeconomic diversity, it has only recently become possible to see them at the institution level. I wrote a piece for Brookings in late 2017 based on the first data release from the U.S. Department of Education and later posted a spreadsheet of graduation rates upon the request of readers—highlighting public interest in the metric.

ED released the second year of data late last year, and Melissa Korn of The Wall Street Journal (one of the best education writers in the business) reached out to me to see if I had those data handy for a piece she wanted to write on Pell graduation rate gaps. Since I do my best to keep up with new data releases from the Integrated Postsecondary Education Data System, I was able to send her a file and share my thoughts on the meaning of the data. This turned into a great piece on completion gaps at selective colleges.

Since I have already gotten requests to share the underlying data in the WSJ piece, I am happy to post the spreadsheet again on my site.

Download the spreadsheet here!

A few cautions:

(1) There are likely a few colleges that screwed up their data reporting to ED. For example, gaps of 50 percentage points at larger colleges are likely errors that nobody at the college caught.

(2) Beware the rates for small colleges (with fewer than 50 students in a cohort).

(3) This graduation rate measure covers first-time, full-time students who complete a bachelor’s degree at the same institution within six years. It excludes part-time and transfer students, so overall completion numbers will be higher.

(4) As my last post highlighted, there are some legitimate concerns with using percent Pell as an accountability measure. However, it’s the best measure that is currently available.

Some Thoughts on Using Pell Enrollment for Accountability

It is relatively rare for an academic paper to both dominate the headlines in the education media and be covered by mainstream outlets, but a new paper by economists Caroline Hoxby and Sarah Turner did exactly that. The paper, benignly titled “Measuring Opportunity in U.S. Higher Education” (technical and accessible versions) raised two major concerns with using the number or percentage of students receiving federal Pell Grants for accountability purposes:

(1) Because states have different income distributions, it is far easier for universities in some states to enroll a higher share of Pell recipients than others. For example, Wisconsin has a much lower share of lower-income adults than does California, which could help explain why California universities have a higher percentage of students receiving Pell Grants than do Wisconsin universities.

(2) At least a small number of selective colleges appear to be gaming the Pell eligibility threshold by enrolling far more students who barely receive Pell Grants than those who have significant financial need but barely do not qualify. Here is the awesome graph that Catherine Rampell made in her Washington Post article summarizing the paper:

[Figure: Hoxby and Turner’s graph of enrollment around the Pell eligibility threshold, via Catherine Rampell’s Washington Post article]

As someone who writes about accountability and social mobility while also pulling together Washington Monthly’s college rankings (all opinions here are my own, of course), I have a few thoughts inspired by the paper. Here goes!

(1) Most colleges likely aren’t gaming the number of Pell recipients in the way that some elite colleges appear to be doing. As this Twitter thread chock-full of information from great researchers discusses, there is no evidence nationally that colleges are manipulating enrollment right around the Pell eligibility cutoff. Since most colleges are broad-access and/or are trying to simply meet their enrollment targets, it follows that they are less concerned with maximizing their Pell enrollment share (which is likely high already).

(2) How are elite colleges manipulating Pell enrollment? This could be happening in one or more of three ways. First, if these colleges are known for generous aid to Pell recipients, more students just on the edge of Pell eligibility may choose to apply. Second, colleges could be explicitly recruiting students from areas likely to have larger shares of Pell recipients near the eligibility threshold. Finally, colleges could make admissions and/or financial aid decisions based on Pell eligibility. It would be ideal to see data on each step of the process to better figure out what is going on.

(3) What other metrics can currently be used to measure social mobility in addition to Pell enrollment? Three other metrics currently jump out as possibilities. The first is enrollment by family income bracket (such as below $30,000 or $30,001-$48,000), which is collected for first-time, full-time, in-state students in IPEDS. It suffers from the same manipulation issues around the cutoffs, though. The second is first-generation status, which the College Scorecard collects for FAFSA filers. The third is race/ethnicity, which tends to be correlated with the previous two measures but is likely a political nonstarter in a number of states (while being a requirement in others).

(4) How can percent Pell still be used? The first finding of Hoxby and Turner’s work is far more important than the second for nationwide analyses (within states, it may be worth looking at regional differences in income, too). The Washington Monthly rankings use both the percentage of Pell recipients and an actual-versus-predicted Pell enrollment measure (controlling for ACT/SAT scores and the percentage of students admitted). I plan to play around with ways to take a state’s income distribution into account to see how this changes the predicted Pell enrollments and will report back on my findings in a future blog post. (A stylized sketch of the actual-versus-predicted approach follows this list.)

(5) How can social mobility be measured better? States can dive much deeper into social mobility than the federal government can thanks to their detailed student-level datasets. This allows sliding scales of social mobility to be created, or measures like median household income to be used instead of just percent Pell. It would be great to have a national measure of the percentage of students with a zero expected family contribution (the neediest students), and this would be pretty easy to add to IPEDS as a new measure.
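As promised above, here is a stylized sketch of the actual-versus-predicted idea on simulated data: regress percent Pell on test scores and selectivity, then treat the residual as over- or under-performance. The real Washington Monthly calculation differs in its controls and functional form, and every column name here is invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated institution-level data
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "pct_pell": rng.uniform(0.1, 0.6, 500),
    "sat_75th": rng.uniform(900, 1500, 500),
    "admit_rate": rng.uniform(0.1, 1.0, 500),
})

# Predict percent Pell from test scores and selectivity; the residual
# (actual minus predicted) is the performance measure
fit = smf.ols("pct_pell ~ sat_75th + admit_rate", data=df).fit()
df["pell_performance"] = df["pct_pell"] - fit.fittedvalues

# Institutions enrolling more Pell students than their profile predicts
print(df.sort_values("pell_performance", ascending=False).head())
```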

I would like to close this post by thanking Hoxby and Turner for provoking important conversations on data, social mobility, and accountability. I look forward to seeing their next paper in this area!