How the New Carnegie Classifications Scrambled College Rankings

Carnegie classifications are one of the wonkiest, most inside baseball concepts in the world of higher education policy. Updated every three years by the good folks at Indiana University, these classifications serve as a useful tool to group similar colleges based on their mix of programs, degree offerings, and research intensity. And since I have been considered “a reliable source of deep-weeds wonkery” in the past, I wrote about the most recent changes to Carnegie classifications earlier this year.

But for most people outside institutional research offices, the updated Carnegie classifications first really got noticed with this fall’s college rankings season. Both the Washington Monthly rankings that I compile and the U.S. News rankings that I am frequently asked to comment on rely on Carnegie classifications to define the group of national universities. We both use the Carnegie doctoral/research university category for this purpose, placing master’s institutions into a master’s university category (us) or regional university categories (U.S. News). Because the number of Carnegie research universities spiked from 334 in the 2015 classifications to 423 in the most recent 2018 classifications, a bunch of new universities entered the national rankings.

To be more exact, 92 universities appeared in Washington Monthly’s national university rankings for the first time this year, with nearly all of these universities coming out of the master’s rankings last year. The full dataset of these colleges and their rankings in both the US News and Washington Monthly rankings can be downloaded here, but I will highlight a few colleges that cracked the top 100 in either ranking below:

Santa Clara University: #54 in US News, #137 in Washington Monthly

Loyola Marymount University: #64 in US News, #258 in Washington Monthly

Gonzaga University: #79 in US News, #211 in Washington Monthly

Elon University: #84 in US News, #282 in Washington Monthly

Rutgers University-Camden: #166 in US News, #57 in Washington Monthly

Towson University: #197 in US News, #59 in Washington Monthly

Mary Baldwin University: #272 in US News, #35 in Washington Monthly

The appearance of these new colleges in the national university rankings means that other colleges got squeezed down the rankings. Given the priority that many colleges and their boards place on the US News rankings, it’s a tough day on some campuses. Meanwhile, judging by press releases, the new top-100 national universities are probably having a good time right now.

Some Updates on the State Performance Funding Data Project

Last December, I publicly announced a new project with Justin Ortagus of the University of Florida and Kelly Rosinger of Pennsylvania State University that would collect data on the details of states’ performance-based funding (PBF) systems. We have spent the last nine months diving even deeper into policy documents and obscure corners of the Internet as well as talking with state higher education officials to build our dataset. Now is a good chance to come up for air for a few minutes and provide an update on our project and our status going forward.

First, I’m happy to share that data collection is moving along pretty well. We gave a presentation at the State Higher Education Executive Officers Association’s annual policy conference in Boston in early August and were also able to make some great connections with people from more states at the conference. We are getting close to having a solid first draft of a 20-plus year dataset on state-level policies, and we are working hard to build institution-level datasets for each state. As we discuss in the slide deck, our painstaking data collection process is leading us to question some of the prior typologies of performance funding systems. We will have more to share on that in the coming months, but going back to get data on early PBF systems is quite illuminating.

Second, our initial announcement about the project included a one-year, $204,528 grant from the William T. Grant Foundation to fund our data collection efforts. We recently received $373,590 in funding from Arnold Ventures and the Joyce Foundation to extend the project through mid-2021. This will allow us to build a project website, analyze the data, and disseminate results to policymakers and the public.

Finally, we have learned an incredible amount about data collection over the last couple of years working together as a team. (And I couldn’t ask for better colleagues!) One thing that we learned is that there is little guidance to researchers on how to collect the types of detailed data needed to provide useful information to the field. We decided to write up a how-to guide on data collection and analyses, and I’m pleased to share our new article on the topic in AERA Open. In this article (which is fully open access), we share some tips and tricks for collecting data (the Wayback Machine might as well be a member of our research team at this point), as well as how to do difference-in-differences analyses with continuous treatment variables. Hopefully, this article will encourage other researchers to launch similar data collection efforts while helping them avoid some of the missteps that we made early in our project.
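
For readers curious about what that last piece looks like in practice, here is a bare-bones sketch of a two-way fixed effects regression with a continuous treatment (“dosage”) variable. The data and variable names are made up for illustration; the AERA Open article walks through the actual estimation issues in far more detail.

```python
# Minimal sketch of difference-in-differences with a continuous treatment:
# a synthetic state-by-year panel, two-way fixed effects, clustered SEs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
states = [f"State{i}" for i in range(20)]
years = range(2000, 2020)
df = pd.DataFrame([(s, y) for s in states for y in years], columns=["state", "year"])
df["pbf_dosage"] = rng.uniform(0, 0.3, len(df))   # hypothetical share of funding tied to outcomes
df["completions"] = 1000 + 500 * df["pbf_dosage"] + rng.normal(0, 50, len(df))

# State and year fixed effects absorb time-invariant state differences and
# common shocks; the pbf_dosage coefficient is the continuous-treatment estimate.
fit = smf.ols("completions ~ pbf_dosage + C(state) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["state"]}
)
print(fit.params["pbf_dosage"], fit.bse["pbf_dosage"])
```

Our actual analyses are considerably more involved, but the core idea of replacing a 0/1 PBF indicator with a dosage measure is the same.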

Stay tuned for future updates on our project, as we will have some exciting new research to share throughout the next few years!

Trends in For-Profit Colleges’ Reliance on Federal Funds

One of the many issues currently derailing bipartisan agreement on federal Higher Education Act reauthorization is how to treat for-profit colleges. Democrats such as Elizabeth Warren and ideologically aligned interest groups such as the American Federation of Teachers have called on Congress to cut off all federal funds to for-profit colleges—a position that few publicly took before this year. Meanwhile, Republicans have generally pushed for all colleges to be held to the same accountability standards, as evidenced by the Department of Education’s recent decision to rescind the Obama-era gainful employment regulations that primarily focused on for-profit colleges. (Thankfully, program-level debt-to-earnings data—which were used to calculate gainful employment metrics—will be available for all programs later this year.)

I am spending quite a bit of time thinking about gainful employment right now as I work on a paper with one of my graduate students that examines whether programs at for-profit colleges that failed the gainful employment metrics shut down at higher rates than similar programs that passed. Look for a draft of this paper to be out later this year, and I welcome feedback from the field as soon as we have something that is ready to share.

But while I was putting together the dataset for that paper, I realized that new data on the 90/10 rule came out with basically no attention last December. (And this is how blog posts are born, folks!) This rule requires for-profit colleges to get at least 10% of their revenue from sources other than federal Title IV financial aid (veterans’ benefits count toward the non-Title IV side). Democrats who are not calling for the end of federal student aid to for-profits are trying to change 90/10 to 85/15 and to count veterans’ benefits with the rest of federal aid, while Republicans are trying to eliminate the rule entirely. (For what it’s worth, here are my thoughts about a potential compromise.)
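
For readers who have not worked with these data before, the 90/10 arithmetic itself is simple. Here is a quick sketch with made-up revenue figures showing how the Title IV share and the pass/fail call work under the current rule and under a hypothetical 85/15 threshold:

```python
# Sketch of the 90/10 calculation with made-up revenue figures.
# Under current law, veterans' benefits count on the non-Title IV side.
def title_iv_share(title_iv_revenue: float, other_revenue: float) -> float:
    """Share of total revenue coming from Title IV federal student aid."""
    return title_iv_revenue / (title_iv_revenue + other_revenue)

share = title_iv_share(title_iv_revenue=9_200_000, other_revenue=1_100_000)
print(f"Title IV share: {share:.1%}")                      # 89.3%
print("Fails 90/10" if share > 0.90 else "Passes 90/10")   # Passes 90/10
print("Fails 85/15" if share > 0.85 else "Passes 85/15")   # Fails a hypothetical 85/15
```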

With the release of the newest data (covering fiscal years ending in the 2016-17 award year), there are now ten years of 90/10 rule data available on Federal Student Aid’s website. I have written in the past about how much for-profit colleges rely on federal funds, and this post extends the dataset from the 2007-08 through the 2016-17 award years. I limited the sample to colleges located in the 50 states and Washington, DC, and to the 965 colleges that reported data in all ten years for which data have been publicly released. The general trends in reliance on Title IV revenues are similar when looking at the full sample, which ranges from 1,712 to 1,999 colleges across the ten years.

The graphic below shows how much the median college in the sample relied on Title IV federal financial aid revenues in each of the ten years of available data. The typical institution’s share of revenue coming from federal financial aid increased sharply from 63.2% in 2007-08 to 73.6% in 2009-10. At least part of this increase is attributable to two factors: the Great Recession making more students eligible for need-based financial aid (and encouraging an increase in college enrollment) and increased generosity of the Pell Grant program. Title IV reliance peaked at 76.0% in 2011-12 and has declined each of the most recent five years, reaching 71.5% in 2016-17.

Award year    Median reliance on Title IV (%)
2007-08       63.2
2008-09       68.3
2009-10       73.6
2010-11       74.0
2011-12       76.0
2012-13       75.5
2013-14       74.6
2014-15       73.2
2015-16       72.5
2016-17       71.5

Number of colleges: 965

I then looked at reliance on Title IV aid by a college’s total revenues in the 2016-17 award year, dividing colleges into less than $1 million (n=318), $1 million-$10 million (n=506), $10 million-$100 million (n=122), and more than $100 million (n=19). The next graphic highlights that the groups all exhibited similar patterns of change over the last decade. The smallest colleges tended to rely on Title IV funds the least, while colleges with revenue of between $10 million and $100 million in 2016-17 had the highest shares of funds coming from federal financial aid. However, the differences among the groups were less than five percentage points from 2009-10 forward.
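
For anyone who wants to replicate this kind of breakdown, the recipe is a revenue-based binning followed by a group-by-year median. Here is a sketch using a toy stand-in panel; the column names are my own placeholders rather than what appears in the Federal Student Aid spreadsheets.

```python
import pandas as pd

# Toy stand-in for the ten-year 90/10 panel; in practice this would be built
# from the Federal Student Aid spreadsheets (column names are placeholders).
df = pd.DataFrame({
    "opeid":          ["A", "A", "B", "B", "C", "C"],
    "award_year":     ["2015-16", "2016-17"] * 3,
    "total_revenue":  [8e5, 9e5, 4e6, 5e6, 2e8, 2.1e8],
    "title_iv_share": [61.0, 60.5, 74.0, 73.2, 70.1, 69.8],
})

# Bin colleges by 2016-17 total revenue, then compute each bin's median
# Title IV reliance in every year.
bins = [0, 1e6, 1e7, 1e8, float("inf")]
labels = ["<$1M", "$1M-$10M", "$10M-$100M", ">$100M"]
size_2017 = df[df["award_year"] == "2016-17"].assign(
    size_group=lambda d: pd.cut(d["total_revenue"], bins=bins, labels=labels)
)[["opeid", "size_group"]]

medians = (df.merge(size_2017, on="opeid")
             .groupby(["size_group", "award_year"], observed=True)["title_iv_share"]
             .median())
print(medians.unstack("award_year"))
```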

For those interested in diving deeper into the data, I highly recommend downloading the source spreadsheets from Federal Student Aid along with the explanations for colleges that have exceeded the 90% threshold. I have also uploaded an Excel spreadsheet of the 965 colleges with data in each of the ten years examined above.

Some Thoughts on Program-Level College Scorecard Data

The U.S. Department of Education has been promising to make program-level outcome data available on the College Scorecard for several years now. The Obama administration started the underlying data collection after releasing the initial Scorecard to the public in 2015, and the Trump administration elevated this topic by issuing an executive order earlier this year. I was at a technical review panel at ED last month on this topic, and I just noticed earlier today that members of the public can now comment on our two-day discussion in one of Washington’s most scenic windowless conference rooms.

So I was surprised to see a press release this afternoon announcing that the College Scorecard had been updated in several important ways. This update includes more than just program-level data. The public-facing site now has data on certificate-granting institutions and uses IPEDS graduation rates that go beyond first-time, full-time students. Needless to say, I’m happy to see both of these improvements, even though I am somewhat skeptical that students pursuing vocational certificates will access the public-facing Scorecard to the same extent that students seeking bachelor’s degrees will.

But this blog post focuses on program-level Scorecard data, which are preliminary and could be updated as soon as later this year. I used the combined 2015-16 and 2016-17 dataset (the most recent data available), which includes data on all graduates who received federal financial aid. This means that coverage is better for some programs than others; for example, law schools are better covered than PhD programs since relatively few PhD students borrow compared to law students. The dataset contains 194,575 programs across 6,094 institutions.

Here are some highlights:

  • Median debt data are only available for 42,430 programs (21.8% of the sample), as small programs do not have data shown due to privacy concerns. But based on IPEDS completions, about 70% of students are enrolled in programs where debt data are available.
  • Here are the average median debt burdens by credential level (a sketch of the underlying computation appears after this list):
    • Undergraduate certificate: $10,953 (n=4,146)
    • Associate: $15,134 (n=5,952)
    • Bachelor’s: $23,382 (n=23,649)
    • Graduate certificate: $48,513 (n=266)
    • Master’s: $42,335 (n=7,011)
    • First professional: $141,310 (n=660)
    • Doctoral: $95,715 (n=716)
  • 172 programs had over $200,000 in median debt, and it looks like the top 116 programs are all in the health sciences. The data are preliminary, but Roseman University of Health Sciences’ dentistry program has the top listed debt burden at a cool $410,213. Meanwhile, 3,970 programs had median debt burdens below $10,000.
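
For anyone who wants to reproduce the credential-level averages above, the core of the computation is a single group-by after dropping privacy-suppressed programs. The column names in this sketch are placeholders, not the Scorecard file’s actual variable names:

```python
import pandas as pd

def summarize_debt_by_credential(programs: pd.DataFrame) -> pd.DataFrame:
    """Average the program-level median debt within each credential level.

    Column names ("credential_level", "median_debt") are placeholders,
    not the Scorecard file's actual variable names.
    """
    has_debt = programs.dropna(subset=["median_debt"])
    return (has_debt.groupby("credential_level")["median_debt"]
                    .agg(n="count", avg_median_debt="mean")
                    .round(0))

# Tiny illustrative example (not real Scorecard values):
demo = pd.DataFrame({
    "credential_level": ["Bachelor's", "Bachelor's", "Master's", "Master's"],
    "median_debt": [24000, None, 41000, 43500],
})
print(summarize_debt_by_credential(demo))
```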

I am thrilled to see program-level debt data, both as a researcher (if I only had more time to sit down and dive into the data!) and as the newly-minted director of higher education graduate programs. Thanks to this dataset, I now know that roughly half of the students in the educational leadership doctoral program (K-12 and higher ed) at Seton Hall borrow, and median debt among graduates is $63,045. I hope that colleges around the country use this tool to get a handle on their graduates’ situations now that data are available for more than just those programs that were covered by gainful employment.

Oh, and about gainful employment. Once earnings data come out (which hopefully will be soon), it will be possible to calculate a debt-to-earnings ratio for programs that cover a large number of students, even without the sanctions present in the now-mothballed gainful employment regulations. Also expect to see loan repayment rates in the updated Scorecard, which will shed some interesting light on income-driven repayment usage and its implications for students and taxpayers.

Which Colleges Failed the Latest Financial Responsibility Test?

Every year, the U.S. Department of Education is required to issue a financial responsibility score for private nonprofit and for-profit colleges, which serves as a crude measure of an institution’s financial health. Colleges are scored on a scale from -1.0 to 3.0, with colleges scoring 0.9 or below failing the test (and having to put up a letter of credit) and colleges scoring between 1.0 and 1.4 being placed in a zone of additional oversight.
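
The scores themselves come from a composite formula based on audited financial statements that I will not rehash here, but the thresholds are easy to illustrate. Here is a small sketch of the bands described above:

```python
def score_category(score: float) -> str:
    """Map a composite financial responsibility score (reported from -1.0 to 3.0,
    in tenths) to the three outcome bands described above."""
    if score <= 0.9:
        return "Fail (letter of credit required)"
    if score <= 1.4:
        return "Zone (additional oversight)"
    return "Pass"

for s in (-1.0, 0.9, 1.0, 1.4, 1.5, 3.0):
    print(f"{s:>4}: {score_category(s)}")
```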

Ever since I first learned of the existence of this metric five or six years ago, I have been bizarrely fascinated by its mechanics and how colleges respond to the score as an accountability pressure. I have previously written about how these scores are only loosely correlated with college closures, and I also wrote an article about how colleges do not appear to change their fiscal priorities as a result of receiving a low score.

ED typically releases financial responsibility scores with no fanfare, and it looks like they updated their website with new scores in late March without anyone noticing (at least based on a Google search of the term “financial responsibility score”). I was adding a link to the financial responsibility score to a paper I am writing and noticed that the newest data—for the fiscal year ending between July 1, 2016 and June 30, 2017—were out. So here is a brief summary of the data.

Of the 3,590 colleges (at the OPEID level) that were subject to the financial responsibility test in 2016-17, 269 failed, 162 were in the oversight zone, and 3,159 passed. Failure rates were higher in the for-profit sector than in the nonprofit sector, as the table below indicates.

Financial responsibility scores by institutional type, 2016-17.

Score category       Nonprofit   For-profit   Total
Fail (-1.0 to 0.9)          82          187     269
Zone (1.0 to 1.4)           58          104     162
Pass (1.5 to 3.0)        1,559        1,600   3,159
Total                    1,699        1,891   3,590

Among the 91 institutions with the absolute lowest score of -1.0, 85 were for-profit, and many of them were part of larger chains. Education Management Corporation (17), Education Affiliates, Inc. (19), and Nemo Investor Aggregator (11) were responsible for more than half of the -1.0 scores. Most of the Education Affiliates (Fortis) and Nemo (Cortiva) campuses still appear to be open, but Education Management Corporation (Argosy, Art Institutes) recently suffered a spectacular collapse.

I am increasingly skeptical of financial responsibility scores as a useful measure of financial health because they are so backwards-looking. The data are already three years old, which is an eternity for a college on the brink of collapse (but perhaps not awful for a cash-strapped nonprofit college with a strong will to live on). I joined Kenny Megan from the Bipartisan Policy Center to write an op-ed for Roll Call on a better way to move forward with collecting more updated financial health measures, and I would love your thoughts on new ways to proceed!

The 2019 Net Price Madness Tournament

Ever since 2013, I have taken the 68 teams in the NCAA Division I men’s basketball tournament and filled out a bracket based on colleges with the lowest net price of attendance (defined as the total cost of attendance less all grant aid received). While the winners are not known for on-court success (see my 2018 bracket and older brackets along with my other writing on net price), it’s still great to highlight colleges that are affordable for their students. (Also, as UMBC’s win on the court last year over Virginia—which my bracket did call—shows, anything is theoretically possible!)

I created a bracket using 2016-17 data (the most recent available through the U.S. Department of Education) for the net price of attendance for all first-time, full-time students receiving grant aid. I should note that these net price measures are far from perfect—the data are now three years old, and colleges can manipulate these numbers through the living allowance portion of the cost of attendance. Nevertheless, they provide some insights regarding college affordability—and they may not be a bad way to pick that tossup 8/9 or 7/10 game that you’ll probably get wrong anyway.

The final four teams in the bracket are the following, with the full dataset for all NCAA institutions available here:

East: Northern Kentucky ($9,338)

West: UNC-Chapel Hill ($10,077)

South: Purdue ($12,117)

Midwest: Washington ($9,443)
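
The method is exactly as mechanical as it sounds: in every matchup, advance the school with the lower net price. Here is a toy sketch that runs the four regional winners above head-to-head (paired arbitrarily for illustration):

```python
# Advance the lower-net-price school in each matchup until one remains.
def run_bracket(field):
    """field: list of (school, net_price) tuples in bracket order."""
    while len(field) > 1:
        field = [min(pair, key=lambda team: team[1])
                 for pair in zip(field[::2], field[1::2])]
    return field[0]

regional_winners = [("Northern Kentucky", 9338), ("UNC-Chapel Hill", 10077),
                    ("Purdue", 12117), ("Washington", 9443)]
print(run_bracket(regional_winners))  # ('Northern Kentucky', 9338)
```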

Kudos to Northern Kentucky for having the lowest net price for all students ($9,338), with an additional shout-out to UNC-Chapel Hill for having the lowest net price among teams that are likely to make it to the final weekend of basketball ($11,100). Not to be forgotten, UNC’s Tobacco Road rival Duke deserves a shout-out for having net prices below $1,000 for students with family incomes below $48,000 per year even as its overall net price is high.

As a closing note, this is the first NCAA tournament for which gambling is legal in certain states (including New Jersey). I can’t bring myself to wager on games in which student-athletes who are technically amateurs are playing. If a portion of gambling revenues went to trusts that players could activate after their collegiate careers are over (and they do not benefit from a particular outcome of a game), I might be interested in putting down a few dollars. But until then, I will use this bracket for bragging rights and educating folks about available higher education data.

New Data on Pell Grant Recipients’ Graduation Rates

Although graduation rates for students who begin college with Pell Grants are a key marker of colleges’ commitment to socioeconomic diversity, it has only recently been possible to see these rates at the institution level. I wrote a piece for Brookings in late 2017 based on the first data release from the U.S. Department of Education and later posted a spreadsheet of graduation rates upon the request of readers—highlighting public interest in the metric.

ED released the second year of data late last year, and Melissa Korn of The Wall Street Journal (one of the best education writers in the business) reached out to me to see if I had those data handy for a piece she wanted to write on Pell graduation rate gaps. Since I do my best to keep up with new data releases from the Integrated Postsecondary Education Data System, I was able to send her a file and share my thoughts on the meaning of the data. This turned into a great piece on completion gaps at selective colleges.

Since I have already gotten requests to share the underlying data in the WSJ piece, I am happy to post the spreadsheet again on my site.

Download the spreadsheet here!

A few cautions:

(1) There are likely a few colleges that screwed up data reporting to ED. For example, gaps of 50 percentage points at larger colleges are probably errors that nobody at the college caught.

(2) Beware the rates for small colleges (with fewer than 50 students in a cohort).

(3) This graduation rate measure covers first-time, full-time students who complete a bachelor’s degree at the same institution within six years. It excludes part-time and transfer students, so the total number of graduates at each college is higher than this measure captures.

(4) As my last post highlighted, there are some legitimate concerns with using percent Pell as an accountability measure. However, it’s the best measure that is currently available.
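
If you do dig into the spreadsheet, a quick screen along the lines of cautions (1) and (2) might look like the sketch below; the column names are placeholders for whatever you call them in your own file.

```python
import pandas as pd

def flag_suspect_rows(df: pd.DataFrame,
                      min_cohort: int = 50,
                      max_gap: float = 50.0) -> pd.DataFrame:
    """Flag small Pell cohorts and implausibly large Pell/non-Pell gaps.

    Expects placeholder columns pell_cohort, pell_grad_rate, and
    non_pell_grad_rate (rates in percentage points).
    """
    out = df.copy()
    out["gap"] = out["non_pell_grad_rate"] - out["pell_grad_rate"]
    out["small_cohort"] = out["pell_cohort"] < min_cohort
    out["suspect_gap"] = out["gap"].abs() > max_gap
    return out

demo = pd.DataFrame({
    "institution": ["College A", "College B"],
    "pell_cohort": [30, 900],
    "pell_grad_rate": [55.0, 20.0],
    "non_pell_grad_rate": [60.0, 75.0],
})
print(flag_suspect_rows(demo)[["institution", "gap", "small_cohort", "suspect_gap"]])
```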

Some Thoughts on Using Pell Enrollment for Accountability

It is relatively rare for an academic paper to both dominate the headlines in the education media and be covered by mainstream outlets, but a new paper by economists Caroline Hoxby and Sarah Turner did exactly that. The paper, benignly titled “Measuring Opportunity in U.S. Higher Education” (technical and accessible versions), raised two major concerns with using the number or percentage of students receiving federal Pell Grants for accountability purposes:

(1) Because states have different income distributions, it is far easier for universities in some states to enroll a higher share of Pell recipients than others. For example, Wisconsin has a much lower share of lower-income adults than does California, which could help explain why California universities have a higher percentage of students receiving Pell Grants than do Wisconsin universities.

(2) At least a small number of selective colleges appear to be gaming the Pell eligibility threshold by enrolling far more students who barely receive Pell Grants than those who have significant financial need but barely do not qualify. Here is the awesome graph that Catherine Rampell made in her Washington Post article summarizing the paper:

[Graph from Catherine Rampell’s Washington Post article: the distribution of students around the Pell eligibility cutoff at selective colleges.]

As someone who writes about accountability and social mobility while also pulling together Washington Monthly’s college rankings (all opinions here are my own, of course), I have a few thoughts inspired by the paper. Here goes!

(1) Most colleges likely aren’t gaming the number of Pell recipients in the way that some elite colleges appear to be doing. As this Twitter thread chock-full of information from great researchers discusses, there is no evidence nationally that colleges are manipulating enrollment right around the Pell eligibility cutoff. Since most colleges are broad-access and/or are trying to simply meet their enrollment targets, it follows that they are less concerned with maximizing their Pell enrollment share (which is likely high already).

(2) How are elite colleges manipulating Pell enrollment? This could be happening in one or more of three ways. First, if these colleges are known for generous aid to Pell recipients, more students just on the edge of Pell eligibility may choose to apply. Second, colleges could be explicitly recruiting students in areas likely to have larger shares of Pell recipients near the eligibility threshold. Finally, colleges could make admissions and/or financial aid decisions based on Pell eligibility. It would be ideal to see data on each step of the process to better figure out what is going on.

(3) What other metrics can currently be used to measure social mobility in addition to Pell enrollment? Three other metrics currently jump out as possibilities. The first is enrollment by family income bracket (such as below $30,000 or $30,001-$48,000), which is collected for first-time, full-time, in-state students in IPEDS. It suffers from the same manipulation issues around the cutoffs, though. The second is first-generation status, which the College Scorecard collects for FAFSA filers. The third is race/ethnicity, which tends to be correlated with the previous two measures but is likely a political nonstarter in a number of states (while being a requirement in others).

(4) How can percent Pell still be used? The first finding of Hoxby and Turner’s work is far more important than the second for nationwide analyses (within states, it may be worth looking at regional differences in income, too). The Washington Monthly rankings use both the percentage of Pell recipients and an actual-versus-predicted Pell enrollment measure (controlling for ACT/SAT scores and the percentage of students admitted); a bare-bones sketch of that approach follows this list. I plan to play around with ways to take a state’s income distribution into account to see how this changes the predicted Pell enrollments and will report back on my findings in a future blog post.

(5) How can social mobility be measured better? States can dive much deeper into social mobility than the federal government can thanks to their detailed student-level datasets. This allows states to create sliding scales of social mobility or to use something like median household income instead of just percent Pell. It would be great to have a national measure of the percentage of students with zero expected family contribution (the neediest students), and this would be pretty easy to add to IPEDS as a new measure.
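
As promised in point (4), here is a bare-bones sketch of the actual-versus-predicted idea on synthetic data. The real rankings model is more involved, and the state income adjustment discussed above is not included here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data: predict percent Pell from test scores and admit rates, then
# rank colleges by how far their actual Pell share exceeds the prediction.
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "sat_median": rng.normal(1150, 120, n),
    "admit_rate": rng.uniform(0.1, 0.95, n),
})
df["pct_pell"] = 70 - 0.04 * df["sat_median"] + 15 * df["admit_rate"] + rng.normal(0, 5, n)

fit = smf.ols("pct_pell ~ sat_median + admit_rate", data=df).fit()
df["predicted_pell"] = fit.fittedvalues
df["pell_performance"] = df["pct_pell"] - df["predicted_pell"]   # actual minus predicted
print(df.nlargest(5, "pell_performance")[["pct_pell", "predicted_pell", "pell_performance"]])
```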

I would like to close this post by thanking Hoxby and Turner for provoking important conversations on data, social mobility, and accountability. I look forward to seeing their next paper in this area!

How Colleges’ Carnegie Classifications Have Changed Over Time

Right as the entire higher education community was beginning to check out for the holiday season last month, Indiana University’s Center on Postsecondary Research released the 2018 Carnegie classifications. While there are many different types of classifications based on different institutional characteristics, the basic classification (based on size, degrees awarded, and research intensity) always garners the most attention from the higher education community. In this post, I look at some of the biggest changes between the 2015 and 2018 classifications and how the number of colleges in key categories has changed over time. (The full dataset can be downloaded here.)

The biggest change in the 2018 classifications was how doctoral universities were defined. In previous classifications, a college was considered a doctoral university if it awarded at least 20 research/scholarship doctoral degrees (PhDs and a few other types of doctorates such as EdDs). The 2018 revisions also count a college as a doctoral university if it awarded at least 30 professional practice doctorates (JDs, MDs, and related degrees in fields such as the health sciences). This change accelerated the increase in the number of doctoral universities that has been underway since 2000:

2018: 423

2015: 334

2010: 295

2005: 279

2000: 258
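
The inclusion rule itself boils down to a simple either/or test. Here is a tiny sketch of my reading of the criteria above (not the official Carnegie methodology, which has more nuance):

```python
def is_doctoral_2018(research_doctorates: int, professional_practice_doctorates: int) -> bool:
    """A college counts as a doctoral university if it awards at least 20
    research/scholarship doctorates OR at least 30 professional practice doctorates."""
    return research_doctorates >= 20 or professional_practice_doctorates >= 30

# A law- or health-focused university with few PhDs now qualifies:
print(is_doctoral_2018(research_doctorates=5, professional_practice_doctorates=120))  # True
```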

This reclassification is important to universities because college rankings systems often classify institutions based on their Carnegie classification. U.S. News and Washington Monthly (the latter of which I compile) both base the national university category on the Carnegie doctoral university classification. The desire to be in the national university category (instead of regional or master’s university categories that get less public attention) has contributed to some universities developing doctoral programs (as Villanova did prior to the 2015 reclassification).

The revision of the lowest two levels of doctoral universities (which I will call R2 and R3 for shorthand, matching common language) did quite a bit to scramble the number of colleges in each category, with a number of R3 colleges moving into R2 status. Here is the breakdown among the three doctoral university groups since 2005 (the first year of three categories):

Year R1 R2 R3
2018 130 132 161
2015 115 107 112
2010 108 98 89
2005 96 102 81

Changing categories within the doctoral university group is important for benchmarking purposes. As I told Inside Higher Ed back in December, my university’s move within the Carnegie doctoral category (from R3 to R2) affects its peer group. All of a sudden, tenure and pay comparisons will be based on a different—and somewhat more research-focused—group of institutions.

There has also been an increase in the number of two-year colleges offering at least some bachelor’s degrees, driven by the growth of community college baccalaureate efforts in states such as Florida and a diversifying for-profit sector. Here is the trend in the number of baccalaureate/associate colleges since 2005:

2018: 269

2015: 248

2010: 182

2005: 144

Going forward, Carnegie classifications will continue to be updated every three years in order to keep up with a rapidly-changing higher education environment. Colleges will certainly be paying attention to future updates that could affect their reputation and peer groups.

Announcing a New Data Collection Project on State Performance-Based Funding Policies

Performance-based funding (PBF) policies in higher education, in which states fund colleges in part based on student outcomes instead of enrollment measures or historical tradition, have spread rapidly across states in recent years. This push for greater accountability has resulted in more than half of all states currently using PBF to fund at least some colleges, with deep-blue California joining a diverse group of states by developing a PBF policy for its community colleges.

Academic researchers have flocked to the topic of PBF over the last decade and have produced dozens of studies looking at the effects of PBF both on a national level and for individual states. In general, this research has found modest effects of PBF, with some differences across states, sectors, and how long the policies have been in place. There have also been concerns about the potential unintended consequences of PBF on access for low-income and minority students, although new policies that provide bonuses to colleges that graduate historically underrepresented students seem to be promising in mitigating these issues.

In spite of the intense research and policy interest in PBF, relatively little is known about what is actually in these policies. States vary considerably in how much money is tied to student outcomes, which outcomes (such as retention and degree completion) are incentivized, and whether there are bonuses for serving low-income, minority, first-generation, rural, adult, or veteran students. Some states also give bonuses for STEM graduates, which is even more important to understand given this week’s landmark paper by Kevin Stange and colleagues documenting differences in the cost of providing an education across disciplines.

Most research has relied on binary indicators of whether a state has a PBF policy or an incentive to encourage equity, with some studies trying to get at the importance of the strength of PBF policies by looking at individual states. But researchers and advocacy organizations cannot even agree on whether certain states had PBF policies in certain years, and no research has tried to fully catalog the different strengths of policies (“dosage”) across states over time.

Because collecting high-quality data on the nuances of PBF policies is a time-consuming endeavor, I was just about ready to walk away from studying PBF given my available resources. But last fall at the Association for the Study of Higher Education conference, two wonderful colleagues approached me with an idea to go out and collect the data. After a year of working with Justin Ortagus of the University of Florida and Kelly Rosinger of Pennsylvania State University—two tremendous assistant professors of higher education—we are pleased to announce that we have received a $204,528 grant from the William T. Grant Foundation to build a 20-year dataset containing detailed information about the characteristics of PBF policies and how much money is at stake.

Our dataset, which will eventually be made available to the public, will help us answer a range of policy-relevant questions about PBF. Some particularly important questions are whether dosage matters regarding student outcomes, whether different types of equity provisions are effective in reducing educational inequality, and whether colleges respond to PBF policies differently based on what share of their funding comes from the state. We are still seeking funding to do these analyses over the next several years, so we would love to talk with interested foundations about the next phases of our work.

To close, one thing that I tell often-skeptical audiences of institutional leaders and fellow faculty members is that PBF policies are not going away anytime soon and that many state policymakers will not give additional funding to higher education without at least a portion being directly tied to student outcomes. These policies are also rapidly changing, in part driven by some of the research over the last decade that was not as positive toward many early PBF systems. This dataset will allow us to examine which types of PBF systems can improve outcomes across all students, thus helping states improve their current PBF systems.