New Working Paper on the Effects of Gainful Employment Regulations

As debates regarding Higher Education Act reauthorization continue in Washington, one of the key sticking points between Democrats and Republicans is the issue of accountability for the for-profit sector of higher education. Democrats typically want tighter accountability measures for for-profit colleges, while Republicans either want to loosen regulations or at the very least hold all colleges to the same standards where appropriate.

The case of federal gainful employment (GE) regulations is a great example of partisan differences regarding for-profit accountability. The Department of Education spent much of its time during the Obama administration trying to implement regulations that would have stripped away aid from programs (mainly at for-profit colleges) that could not meet debt-to-earnings thresholds. They finally released the first year of data in January 2017—in the final weeks of the Obama administration. The Trump administration then set about undoing the regulations and finally did so earlier this year. (For those who like reading the Federal Register, here is a link to all of the relevant documents.)
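For readers who want the mechanics: under the 2014 regulations, a program passed if its annual loan payments were at most 8% of graduates' total earnings or at most 20% of their discretionary earnings, failed if payments exceeded 12% and 30% respectively, and landed in a warning "zone" in between. Here is a minimal sketch of that classification logic (it ignores edge cases such as zero discretionary earnings and the multi-year rules that determined when a program actually lost aid):

```python
def ge_category(annual_rate, discretionary_rate):
    """Classify a program under the 2014 gainful employment rule.

    annual_rate: annual loan payment / total earnings (as a decimal)
    discretionary_rate: annual loan payment / discretionary earnings
    """
    if annual_rate <= 0.08 or discretionary_rate <= 0.20:
        return "pass"
    if annual_rate > 0.12 and discretionary_rate > 0.30:
        return "fail"
    return "zone"

print(ge_category(0.07, 0.25))  # pass
print(ge_category(0.10, 0.25))  # zone
print(ge_category(0.13, 0.35))  # fail
```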

There has been quite a bit of talk in the higher ed policy world that GE led colleges to close poor-performing programs, and Harvard's closure of its underperforming graduate certificate program in theater right after the data dropped received a lot of attention. But to this point, there has been no rigorous empirical research examining whether the GE regulations changed colleges’ behaviors.

Until now. Together with my sharp PhD student Zhuoyao Liu, I set out to examine whether the owners of for-profit colleges closed lousy programs or colleges after receiving information about their performance.

You can download our working paper, which we are presenting at the Association for the Study of Higher Education conference this week, here.

For-profit colleges can respond more quickly to new information than nonprofit colleges due to a more streamlined governance process and a lack of annoying tenured faculty, and they are also more motivated to make changes if they expect to lose money going forward. It is worth noting that no college should have expected to lose federal funding due to poor GE performance since the Trump administration was on its way in when the dataset was released.

Data collection for this project took a while. For 4,998 undergraduate programs at 1,462 for-profit colleges, we collected information on whether the college was still open using the U.S. Department of Education’s closed school database. Determining whether individual programs were still open took a lot more work. We checked college websites and the Facebook pages of mom-and-pop operations, and we used the Wayback Machine to determine whether a program appeared to still be open as of February 2019.
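Much of that checking was done by hand, but the Wayback Machine piece is easy to script against its public availability API. Here is a sketch of the kind of query involved (the program URL is hypothetical):

```python
import requests

WAYBACK_API = "https://archive.org/wayback/available"

def snapshot_near(url, timestamp="20190201"):
    """Return the archived snapshot of `url` closest to `timestamp`
    (YYYYMMDD), or None if the Wayback Machine has no capture."""
    resp = requests.get(WAYBACK_API,
                        params={"url": url, "timestamp": timestamp},
                        timeout=30)
    resp.raise_for_status()
    return resp.json().get("archived_snapshots", {}).get("closest")

# A capture near February 2019 suggests the program's web presence
# was still live around that time.
snap = snapshot_near("www.example-college.edu/programs/medical-assisting")
print(snap["url"] if snap else "no archived copy found")
```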

After doing that, we used a regression discontinuity research design to look at whether passing GE outright (relative to not passing) or being in the oversight zone (versus failing) affected the likelihood of college or program closures. While the results for the zone versus fail analyses were not consistently significant across all of our bandwidth and control variable specifications, there were some interesting findings for the passing versus not passing comparisons. Notably, programs that passed GE were much less likely to close than those that did not pass. This suggests that for-profit colleges, possibly encouraged by accrediting agencies and/or state authorizing agencies, closed lower-performing programs and focused their resources on their best-performing programs.
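For readers unfamiliar with regression discontinuity, the basic idea is to compare programs just on either side of the cutoff and to re-estimate across several bandwidths. Below is a stylized version on simulated data—not our actual specification, and the variable names are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated programs: `running` is the distance from the passing cutoff,
# and `closed` indicates closure by February 2019. Purely illustrative.
rng = np.random.default_rng(0)
df = pd.DataFrame({"running": rng.uniform(-0.05, 0.05, 2000)})
df["closed"] = rng.binomial(1, np.where(df["running"] >= 0, 0.15, 0.30))

def rd_estimate(data, bandwidth):
    """Local linear regression with separate slopes on each side of the cutoff."""
    sample = data[data["running"].abs() <= bandwidth].copy()
    sample["passed"] = (sample["running"] >= 0).astype(int)
    fit = smf.ols("closed ~ passed + running + passed:running",
                  data=sample).fit(cov_type="HC1")  # robust standard errors
    return fit.params["passed"], fit.bse["passed"]

# Any RD analysis should check sensitivity to the bandwidth choice
for bw in (0.01, 0.02, 0.05):
    est, se = rd_estimate(df, bw)
    print(f"bandwidth {bw}: effect of passing on closure = {est:.3f} (SE {se:.3f})")
```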

We are putting this paper out as a working paper as a first form of peer review before undergoing the formal peer review process at a scholarly journal. We welcome all of your comments and hope that you find this paper useful—especially as the Department of Education gets ready to release program-level earnings data in the near future.

Twenty-Two Thoughts on House Democrats’ Higher Education Act Reauthorization Bill

House Democrats released the framework for the College Affordability Act today, which is their effort at the long-overdue comprehensive reauthorization of the Higher Education Act. This follows the release of Senator Lamar Alexander’s (R-TN) more targeted version last month. As I like to do when time allows, I live-tweeted my way through the 16-page summary document. Below are my 22 thoughts on certain parts of the bill (annotating some of my initial tweets with references) and what the bill means going forward.

(1) Gainful employment would go back in effect for the same programs covered in the Obama-era effort. (Did that policy induce programs to close? Stay tuned for a new paper on that…I’m getting back to work on it right after putting up this blog post!)

(2) In addition to lifting the student unit record ban, the bill would require data to be disaggregated based on the American Community Survey definitions of race (hopefully with a crosswalk for a couple of years).

(3) Federal Student Aid’s office would have updated performance goals, but there is no mention of a much-needed replacement of the National Student Loan Data System (decidedly unsexy and not cheap, though).

(4) Regarding the federal-state partnership, states would have access to funds to “support the adoption and expansion of evidence-based reforms and practices.” I would love to see a definition of “evidence”—is it What Works Clearinghouse standards or something less?

(5) The antiquated SEOG allocation formula would be phased out and replaced with a new formula based on unmet need and percent low-income. Without new money, this may work as well as the 1980 effort (which flopped). Here is my research on the topic.

(6) Same story for federal work-study. Grad students would still be allowed to participate, which doesn’t seem like the best use of money to me.

(7) Students would start repaying loans at 250% of the federal poverty line, up from 150%. Automatically recertifying income makes a lot of sense.

(8) There are relatively small changes to Public Service Loan Forgiveness, mainly regarding old FFEL loans and consolidation (they would benefit quite a few people). But people still have to wait ten years and hope for the best.

(9) I’m in a Halloween mood after seeing the awesome Pumpkin Blaze festival in the Hudson Valley last night. So, on that note, Zombie Perkins returns!

[Photo: the Statue of Liberty, made entirely out of pumpkins. Let HEA reauthorization ring???]

(10) ED would take a key role in cost of attendance calculations, with a requirement that they create at least one method for colleges to use. Here is my research on the topic, along with a recent blog post showing colleges with very low and very high living allowances.

(11) And if that doesn’t annoy colleges, a requirement about developing particular substance abuse safety programs will. Campus safety and civil rights requirements may also irk some colleges, but will be GOP nonstarters.

(12) The bill assigns a larger accountability role to accreditors and state authorizers while not really providing any support. Expect colleges to sue accreditors/states…and involve their members of Congress.

(13) Improving the cohort default rate metric is long-overdue, and a tiered approach could be promising. (More details needed.)

(14) There would be a new on-time loan repayment metric, defined as the share of borrowers who made 33 of 36 payments on time. $0 payments and educational deferments count as payments, and ED would set the threshold with waivers possible.

(15) This is an interesting metric, and I would love to see it alongside the Scorecard repayment rate broken down by IDR and non-IDR students. But if the bill improves IDR, expect the on-time rate to (hopefully!) be high.
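To make the metric in thought (14) concrete, here is a toy version of the calculation; the payment-status labels are my own hypothetical encoding of the bill summary:

```python
def on_time_rate(borrowers, required=33, window=36):
    """Share of borrowers making at least `required` qualifying payments in
    the first `window` months. Per the bill summary, $0 income-driven
    payments and educational deferments count as on-time."""
    qualifying = {"on_time", "zero_dollar", "educational_deferment"}
    def meets_threshold(months):
        return sum(m in qualifying for m in months[:window]) >= required
    return sum(meets_threshold(b) for b in borrowers) / len(borrowers)

# Two toy borrowers: one with 34 qualifying months, one with only 20
borrowers = [["on_time"] * 34 + ["late"] * 2,
             ["on_time"] * 20 + ["late"] * 16]
print(on_time_rate(borrowers))  # 0.5
```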

(16) It would be great to see new IPEDS data on marketing, recruitment, advertising, and lobbying expenses. Definitions matter a lot here, and the Secretary gets to create them. These are the types of metrics that the field showed interest in when the IPEDS folks asked Tammy Kolbe and me to do a landscape analysis of higher ed finance metrics.

(17) Most of higher ed wants financial responsibility scores to be updated (see my research on this), and this would set up a negotiated rulemaking panel to work on it.

(18) There is also language about “rebalancing” who participates in neg reg. The legislative text will be fun to parse.

(19) Teach for America would be reauthorized, but it’s in a list of programs with potential changes. Democrats will watch that closely.

(20) And pour one out for the programs that were authorized in the last Higher Education Act back in 2008, but never funded. This bill wants to get rid of some of them.

(21) So what’s next? Expect this to get a committee vote fairly quickly, but other events might swamp it (pun intended) in the House. I doubt the Senate will take it up as Alexander has his preferred bill.

(22) Then why do this? It’s a good messaging tool that can keep higher ed in the spotlight. Both parties are positioning for 2021, and this bill (which is moderate by Dem primary standards) is a good starting place for Democrats.

Thanks for reading!

Trends in For-Profit Colleges’ Reliance on Federal Funds

One of the many issues currently derailing bipartisan agreement on federal Higher Education Act reauthorization is how to treat for-profit colleges. Democrats such as Elizabeth Warren and ideologically-aligned interest groups such as the American Federation of Teachers have called on Congress to cut off all federal funds to for-profit colleges—a position that few publicly took before this year. Meanwhile, Republicans have generally pushed for all colleges to be held to the same accountability standards, as evidenced by the Department of Education’s recent decision to rescind the Obama-era gainful employment regulations that primarily focused on for-profit colleges. (Thankfully, program-level debt-to-earnings data—which were used to calculate gainful employment metrics—will be available for all programs later this year.)

I am spending quite a bit of time thinking about gainful employment right now as I work on a paper with one of my graduate students that examines whether programs at for-profit colleges that failed the gainful employment metrics shut down at higher rates than similar programs that passed. Look for a draft of this paper to be out later this year, and I welcome feedback from the field as soon as we have something that is ready to share.

But while I was putting together the dataset for that paper, I realized that new data on the 90/10 rule came out with basically no attention last December. (And this is how blog posts are born, folks!) This rule requires for-profit colleges to get at least 10% of their revenue from sources other than federal Title IV financial aid (veterans’ benefits count toward the non-Title IV funds). Democrats who are not calling for the end of federal student aid to for-profits are trying to change 90/10 to 85/15 and to count veterans’ benefits with the rest of federal aid, while Republicans are trying to eliminate the rule entirely. (For what it’s worth, here are my thoughts about a potential compromise.)
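The arithmetic behind these proposals is straightforward. The sketch below, with made-up revenue numbers, shows how moving veterans' benefits to the federal side of the ledger can flip a college from passing to failing:

```python
def federal_share(title_iv, veterans_benefits, other_revenue,
                  count_veterans_as_federal=False):
    """Share of revenue from federal sources in the 90/10 calculation.

    Current rule: veterans' benefits count as non-federal revenue.
    The Democratic proposal counts them as federal (with an 85% cap)."""
    federal = title_iv + (veterans_benefits if count_veterans_as_federal else 0)
    return federal / (title_iv + veterans_benefits + other_revenue)

# Hypothetical college: $8.5M Title IV, $1M veterans' benefits, $0.5M other
print(federal_share(8.5, 1.0, 0.5))        # 0.85 -> passes today's 90/10
print(federal_share(8.5, 1.0, 0.5, True))  # 0.95 -> fails both 90/10 and 85/15
```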

With the release of the newest data (covering fiscal years ending in the 2016-17 award year), there are now ten years of 90/10 rule data available on Federal Student Aid’s website. I have written in the past about how much for-profit colleges rely on federal funds, and this post extends the dataset from the 2007-08 through the 2016-17 award years. I limited the sample to colleges located in the 50 states and Washington, DC, as well as to the 965 colleges that reported data in each of the ten years for which data have been publicly released. The general trends in the reliance on Title IV revenues are similar when looking at the full sample, which ranges from 1,712 to 1,999 colleges across the ten years.
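For anyone replicating this from the Federal Student Aid spreadsheets, the balanced-panel restriction and the median calculations below take only a few lines of pandas (the column names here are my own invention):

```python
import pandas as pd

# Toy stand-in for the stacked Federal Student Aid spreadsheets
df = pd.DataFrame({
    "opeid":          ["A", "A", "B", "B", "C"],
    "award_year":     ["2015-16", "2016-17", "2015-16", "2016-17", "2016-17"],
    "title_iv_share": [0.70, 0.72, 0.80, 0.78, 0.65],
})

# Keep only colleges reporting in every year (a balanced panel),
# then take the median Title IV share by year
n_years = df["award_year"].nunique()
balanced = df.groupby("opeid").filter(
    lambda g: g["award_year"].nunique() == n_years)
print(balanced.groupby("award_year")["title_iv_share"].median())
```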

The table below shows how much the median college in the sample relied on Title IV federal financial aid revenues in each of the ten years of available data. The typical institution’s share of revenue coming from federal financial aid increased sharply from 63.2% in 2007-08 to 73.6% in 2009-10. At least part of this increase is attributable to two factors: the Great Recession making more students eligible for need-based financial aid (and encouraging an increase in college enrollment) and the increased generosity of the Pell Grant program. Title IV reliance peaked at 76.0% in 2011-12 and has declined in each of the five most recent years, reaching 71.5% in 2016-17.

Award Year | Median Reliance on Title IV (%)
2007-08 | 63.2
2008-09 | 68.3
2009-10 | 73.6
2010-11 | 74.0
2011-12 | 76.0
2012-13 | 75.5
2013-14 | 74.6
2014-15 | 73.2
2015-16 | 72.5
2016-17 | 71.5
Number of colleges: 965

I then looked at reliance on Title IV aid by a college’s total revenues in the 2016-17 award year, dividing colleges into less than $1 million (n=318), $1 million-$10 million (n=506), $10 million-$100 million (n=122), and more than $100 million (n=19). All four groups exhibited similar patterns of change over the last decade. The smallest colleges tended to rely on Title IV funds the least, while colleges with revenue of between $10 million and $100 million in 2016-17 had the highest shares of funds coming from federal financial aid. However, the differences among the groups were less than five percentage points from 2009-10 forward.

For those interested in diving deeper into the data, I highly recommend downloading the source spreadsheets from Federal Student Aid along with the explanations for colleges that have exceeded the 90% threshold. I have also uploaded an Excel spreadsheet of the 965 colleges with data in each of the ten years examined above.

Why the Next Secretary of Education Should Come from Higher Ed

Elizabeth Warren is one of several Democratic presidential candidates who are highlighting education as a key policy issue in their campaigns. A few weeks after announcing an ambitious proposal to forgive nearly half of all outstanding student debt and strip for-profit colleges’ access to federal financial aid (among other items), she returned to the topic in advance of a town hall event with the American Federation of Teachers in Philadelphia. In a tweet, Warren promised that her Secretary of Education would be a public school teacher.

This would be far from unprecedented: both Rod Paige (under George W. Bush) and John King (under Barack Obama) were public school teachers. But if Warren or any other Democrat wants to influence American education to the greatest extent possible, the candidate should appoint someone from higher education instead of K-12 education. (The same also applies to Donald Trump, who apparently will need a new Secretary of Education if he wins a second term.) Below, I discuss a few reasons why ED’s next leader should come from higher ed.

First, the Every Student Succeeds Act, signed into law in 2015, shifted a significant amount of power from ED to the states. This means that the federal government’s power in K-12 education has shifted more toward the appropriations process, which is controlled by Congress. Putting a teacher in charge of ED may result in better K-12 policy, but the change is likely to be small due to the reduced amount of discretion.

Meanwhile, on the higher education side of the ranch, I still see a comprehensive Higher Education Act reauthorization as being unlikely before 2021—even though Lamar Alexander is promising a bill soon. I could see a narrowly-targeted bill on FAFSA simplification getting through Congress, but HEA reauthorization is going to be tough in three main areas: for-profit college accountability, income-driven student loan repayment plans, and social issues (Title IX, campus safety, and free speech). Warren’s proposal last month probably makes HEA reauthorization tougher as it will pull many Senate Democrats farther to the left.

This means that ED will continue to have a great amount of power to make policy through the negotiated rulemaking process under the current HEA. Both the Obama and Trump administrations used neg reg to shape policies without going through Congress, and a Democratic president is likely to rely on ED to undo Trump-era policies. Meanwhile, a second-term Trump administration will still have a number of loose ends to tie up given the difficulty of getting the sheer number of regulatory changes through the process by November 1 of this year (the deadline to have rules take effect before the 2020 election).

I fully realize that promising a public school teacher as Secretary of Education is a great political statement to win over teachers’ unions—a key ally for Democrats. But in terms of changing educational policies, candidates should be looking toward higher education veterans who can help them reshape a landscape in which there is more room to maneuver.

Which Colleges Failed the Latest Financial Responsibility Test?

Every year, the U.S. Department of Education is required to issue a financial responsibility score for private nonprofit and for-profit colleges, which serves as a crude measure of an institution’s financial health. Colleges are scored on a scale from -1.0 to 3.0, with colleges scoring 0.9 or below failing the test (and having to put up a letter of credit) and colleges scoring between 1.0 and 1.4 being placed in a zone of additional oversight.
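Those cutoffs are simple enough to spell out in a few lines of code, which makes the table later in this post easier to follow:

```python
def responsibility_category(score):
    """Bucket a composite score (-1.0 to 3.0) using ED's published cutoffs."""
    if score <= 0.9:
        return "fail"  # must post a letter of credit
    if score <= 1.4:
        return "zone"  # subject to additional oversight
    return "pass"

print(responsibility_category(-1.0))  # fail
print(responsibility_category(1.2))   # zone
print(responsibility_category(2.3))   # pass
```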

Ever since I first learned of the existence of this metric five or six years ago, I have been bizarrely fascinated by its mechanics and how colleges respond to the score as an accountability pressure. I have previously written about how these scores have been only loosely correlated with college closures, and I also wrote an article showing that colleges do not appear to change their fiscal priorities as a result of receiving a low score.

ED typically releases financial responsibility scores with no fanfare, and it looks like they updated their website with new scores in late March without anyone noticing (at least based on a Google search of the term “financial responsibility score”). I was adding a link to the financial responsibility score to a paper I am writing and noticed that the newest data—for the fiscal year ending between July 1, 2016 and June 30, 2017—was out. So here is a brief summary of the data.

Of the 3,590 colleges (at the OPEID level) that were subject to the financial responsibility test in 2016-17, 269 failed, 162 were in the oversight zone, and 3,159 passed. Failure rates were higher in the for-profit sector than in the nonprofit sector, as the table below indicates.

Financial responsibility scores by institutional type, 2016-17.

Category | Nonprofit | For-profit | Total
Fail (-1.0 to 0.9) | 82 | 187 | 269
Zone (1.0 to 1.4) | 58 | 104 | 162
Pass (1.5 to 3.0) | 1,559 | 1,600 | 3,159
Total | 1,699 | 1,891 | 3,590

Among the 91 institutions with the absolute lowest score of -1.0, 85 were for-profit. And many of them were part of larger chains: Education Management Corporation (17), Education Affiliates, Inc. (19), and Nemo Investor Aggregator (11) were responsible for more than half of the -1.0 scores. Most of the Education Affiliates (Fortis) and Nemo (Cortiva) campuses still appear to be open, but Education Management Corporation (Argosy, Art Institutes) recently suffered a spectacular collapse.

I am increasingly skeptical of financial responsibility scores as a useful measure of financial health because they are so backwards-looking. The data are already three years old, which is an eternity for a college on the brink of collapse (but perhaps not awful for a cash-strapped nonprofit college with a strong will to live on). I joined Kenny Megan from the Bipartisan Policy Center to write an op-ed for Roll Call on a better way to move forward with collecting more updated financial health measures, and I would love your thoughts on new ways to proceed!

Three New Articles on Performance-Based Funding Policies

As an academic, few things make me happier than reading cutting-edge research conducted by talented scholars. So I was thrilled to see three new articles on a topic near and dear to my heart—performance-based funding (PBF) in higher education—come out in top-tier journals. In this post, I briefly summarize the three articles and look at where the body of research is heading.

Nathan Favero (American University) and Amanda Rutherford (Indiana University). “Will the Tide Lift all Boats? Examining the Equity Effects of Performance Funding Policies in U.S. Higher Education.” Research in Higher Education.

In this article, the authors look at state PBF policies (divided into earlier 1.0 policies and later 2.0 policies) to examine whether PBF affects four-year colleges within a state differently. They found evidence that the wave of 2.0 policies may negatively affect less-selective and less-resourced public universities, while 1.0 policies affected colleges in relatively similar ways. In a useful Twitter thread (another reason why all policy-relevant researchers should be on Twitter!), Nathan discusses the implications for equity.

Lori Prince Hagood (University System of Georgia). “The Financial Benefits and Burdens of Performance Funding in Higher Education.” Educational Evaluation and Policy Analysis.

Lori’s article digs into the extent to which PBF policies affect per-student state appropriations at four-year colleges, defining PBF as whether a state had any funded policy in a given year. The first item worth noting from the paper is that per-student funding in PBF states has traditionally been lower than in non-PBF states. This may change going forward as states with more generous funding (such as California) are now adopting PBF policies. Lori’s main finding is that selective and research universities tend to see increased state funding following the implementation of PBF, while less-selective institutions see decreased funding, raising concerns about equity.

As an aside, I had the pleasure of discussing an earlier version of this paper at the 2017 Association for the Study of Higher Education conference (although I had forgotten about that until Lori sent me a nice note when the article came out). I wrote in my comments at that time: “I think it has potential to go to a good journal with a modest amount of additional work.” I’m not often right, but I’m glad I was in this case!

Denisa Gándara (Southern Methodist University). “Does Evidence Matter? An Analysis of Evidence Use in Performance-Funding Policy Design.” The Review of Higher Education.

Denisa’s article is a wonderful read alongside the other two because it does not use difference-in-differences techniques to look at quantitative effects of PBF. Instead, she digs into how the legislative sausage of a PBF policy is actually made by studying the policy processes in Colorado (which adopted PBF across two-year and four-year colleges) and Texas (which never adopted PBF in the four-year sector). Her interviews reveal that PBF models in other states and national advocacy groups such as Complete College America and HCM Strategists were far more influential than lowly academic researchers.

In a Twitter thread about her new article, Denisa highlighted the following statement:

[Image: quoted excerpt from the article.]

As a fellow researcher who also talks with policymakers on a regular basis, I have quite a few thoughts on this statement. Policymakers (including in blue states) are increasingly hesitant to give colleges more money without tying a portion of those funds to student outcomes, and other ways of funding colleges also raise equity concerns. So expect PBF to expand in the next several years.

Does this mean that academic research on PBF is irrelevant? I don’t think so. Advocacy organizations are at least partially influenced by academic research; for example, see how the research on equity metrics in PBF policies has shaped their work. It is the job of researchers to keep raising critical questions about the design of PBF policies, and it is also our job to conduct more nuanced analyses that dive into the details of how policies are constructed. That is why my new project with Kelly Rosinger of Penn State and Justin Ortagus of the University of Florida to collect these details over time excites me so much—it is what the field needs to keep building upon great studies such as the ones highlighted here.

A Possible For-Profit Accountability Compromise?

In the midst of an absolutely bonkers week in the world of higher education (highlighted by an FBI investigation into an elite college admissions scandal, although the sudden closure of Argosy University deserves far more attention than rich families doing stupid things), the U.S. House Appropriations Committee held a hearing on for-profit colleges. Not surprisingly, the hearing quickly developed familiar fault lines: Democrats pushed for tighter oversight of “predatory” colleges, while Republicans repeatedly called for applying the same regulations to both for-profit and nonprofit colleges.

One of the key sticking points in Higher Education Act (HEA) reauthorization is likely to be the so-called “90/10” rule, which requires for-profit colleges to get at least 10% of their revenue from sources other than federal financial aid (with veterans’ benefits counting as non-federal funds) in order to qualify for federal financial aid. Democrats want to return the rule to 85/15 (as it was in the past) and count veterans’ benefits in the federal funds portion of the calculation, which would trip up many for-profit colleges. (Because public colleges get state funds and many private colleges have at least modest endowments, this rule is generally not an issue for them.) Republicans have proposed getting rid of 90/10 in their vision for HEA reauthorization.

I have a fair amount of skepticism about the effectiveness of the 90/10 rule in the for-profit sector, particularly because satisfying the rule effectively requires tuition prices to be above federal aid limits even though for-profit colleges tend to serve students with relatively little ability to pay for their own education. But I also worry about colleges with poor student outcomes sucking up large amounts of federal funds with relatively few strings attached. So, while watching the panelists at the House hearing talk about the 90/10 rule, the framework of an idea on for-profit accountability (which may admittedly be crazy) came into my mind.

I am tossing out the idea of tying the percentage of revenue that colleges can receive from federal funds (including veterans’ benefits as federal funds) to the institution’s performance on a number of metrics. For the sake of simplicity, let’s assume the three outcomes are graduation rates, earnings after attending college, and student loan repayment rates—although other measures are certainly possible. Each of these outcomes would then be broken into thirds based on the predominant type of credential awarded (certificate, associate degree, or bachelor’s degree), restricting the sample to broad-access colleges to reflect the realities of the for-profit sector.

A college that performed in the top third on all three measures would qualify for the maximum share of revenue from federal funds—let’s say 100%. A college in the top third on two measures and in the middle third on the other one could get 95%, and the percentage would drop by five percentage points (or some set amount) as the college’s performance dropped. Colleges in the bottom third on all three measures would only get 60% of revenue from federal funds.
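Here is one way to code up that schedule. A flat five-point step from 100% would bottom out at 70% rather than the 60% floor I suggested, so this sketch instead scales linearly between the two endpoints; the exact schedule is obviously negotiable:

```python
TERCILE_POINTS = {"top": 2, "middle": 1, "bottom": 0}

def max_federal_share(grad_tercile, earnings_tercile, repayment_tercile):
    """Map tercile performance on the three outcomes to the maximum share
    of revenue (in percent) a college could receive from federal funds,
    scaled linearly from 60 (all bottom) to 100 (all top)."""
    points = sum(TERCILE_POINTS[t]
                 for t in (grad_tercile, earnings_tercile, repayment_tercile))
    return 60 + 40 * points / 6  # six steps between the endpoints

print(max_federal_share("top", "top", "top"))           # 100.0
print(max_federal_share("top", "top", "middle"))        # ~93.3 (a flat 5-point step gives 95)
print(max_federal_share("bottom", "bottom", "bottom"))  # 60.0
```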

This type of system would effectively remove the limit on federal funds for high-performing for-profit colleges, while severely tightening it for low performers. Could this idea gain bipartisan support (after a fair amount of model testing)? Possibly. Is it worth at least thinking through? I would love your thoughts on that.

New Data on Pell Grant Recipients’ Graduation Rates

Although the graduation rates of students who begin college with Pell Grants are a key marker of colleges’ commitments to socioeconomic diversity, it has only recently been possible to see these rates at the institution level. I wrote a piece for Brookings in late 2017 based on the first data release from the U.S. Department of Education and later posted a spreadsheet of graduation rates upon the request of readers—highlighting public interest in the metric.

ED released the second year of data late last year, and Melissa Korn of The Wall Street Journal (one of the best education writers in the business) reached out to me to see if I had those data handy for a piece she wanted to write on Pell graduation rate gaps. Since I do my best to keep up with new data releases from the Integrated Postsecondary Education Data System, I was able to send her a file and share my thoughts on the meaning of the data. This turned into a great piece on completion gaps at selective colleges.

Since I have already gotten requests to share the underlying data in the WSJ piece, I am happy to post the spreadsheet again on my site.

Download the spreadsheet here!

A few cautions:

(1) There are likely a few colleges that screwed up data reporting to ED. For example, gaps of 50 percentage points at larger colleges are likely errors that nobody at the college caught.

(2) Beware the rates for small colleges (with fewer than 50 students in a cohort).

(3) This measure is the graduation rate for first-time, full-time students who complete a bachelor’s degree at the same institution within six years. It excludes part-time and transfer students, so overall completion numbers will be higher.

(4) As my last post highlighted, there are some legitimate concerns with using percent Pell as an accountability measure. However, it’s the best measure that is currently available.

Some Thoughts on Using Pell Enrollment for Accountability

It is relatively rare for an academic paper to both dominate the headlines in the education media and be covered by mainstream outlets, but a new paper by economists Caroline Hoxby and Sarah Turner did exactly that. The paper, benignly titled “Measuring Opportunity in U.S. Higher Education” (technical and accessible versions), raised two major concerns with using the number or percentage of students receiving federal Pell Grants for accountability purposes:

(1) Because states have different income distributions, it is far easier for universities in some states to enroll a higher share of Pell recipients than others. For example, Wisconsin has a much lower share of lower-income adults than does California, which could help explain why California universities have a higher percentage of students receiving Pell Grants than do Wisconsin universities.

(2) At least a small number of selective colleges appear to be gaming the Pell eligibility threshold by enrolling far more students who barely receive Pell Grants than those who have significant financial need but barely do not qualify. Here is the awesome graph that Catherine Rampell made in her Washington Post article summarizing the paper:

[Figure: Rampell’s chart showing enrollment clustered just above the Pell eligibility cutoff at selective colleges.]

As someone who writes about accountability and social mobility while also pulling together Washington Monthly’s college rankings (all opinions here are my own, of course), I have a few thoughts inspired by the paper. Here goes!

(1) Most colleges likely aren’t gaming the number of Pell recipients in the way that some elite colleges appear to be doing. As this Twitter thread chock-full of information from great researchers discusses, there is no evidence nationally that colleges are manipulating enrollment right around the Pell eligibility cutoff. Since most colleges are broad-access and/or are trying to simply meet their enrollment targets, it follows that they are less concerned with maximizing their Pell enrollment share (which is likely high already).

(2) How are elite colleges manipulating Pell enrollment? This could be happening in one or more of three possible ways. First, if these colleges are known for generous aid to Pell recipients, more students just on the edge of Pell eligibility may choose to apply. Second, colleges could be explicitly recruiting students from areas likely to have larger shares of Pell recipients near the eligibility threshold. Finally, colleges could make admissions and/or financial aid decisions based on Pell eligibility. It would be ideal to see data on each step of the process to better figure out what is going on.

(3) What other metrics can currently be used to measure social mobility in addition to Pell enrollment? Three other metrics currently jump out as possibilities. The first is enrollment by family income bracket (such as below $30,000 or $30,001-$48,000), which is collected for first-time, full-time, in-state students in IPEDS. It suffers from the same manipulation issues around the cutoffs, though. The second is first-generation status, which the College Scorecard collects for FAFSA filers. The third is race/ethnicity, which tends to be correlated with the previous two measures but is likely a political nonstarter in a number of states (while being a requirement in others).

(4) How can percent Pell still be used? The first finding of Hoxby and Turner’s work is far more important than the second finding for nationwide analyses (within states, it may be worth looking at regional differences in income, too). The Washington Monthly rankings use both the percentage of Pell recipients and an actual versus predicted Pell enrollment measure (controlling for ACT/SAT scores and the percentage of students admitted); a rough sketch of that type of regression follows this list. I plan to play around with ways to take a state’s income distribution into account to see how this changes the predicted Pell enrollments and will report back on my findings in a future blog post.

(5) How can social mobility be measured better? States can dive much deeper into social mobility than the federal government can thanks to their detailed student-level datasets. This allows for sliding scales of social mobility to be created or to use something like median household income instead of just percent Pell. It would be great to have a measure of the percentage of students with zero expected family contribution (the neediest students) at the national level, and this would be pretty easy to add onto IPEDS as a new measure.
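Returning to thought (4), here is the promised sketch of an actual-versus-predicted Pell measure, run on simulated data. It is not the Monthly’s exact model, the column names are hypothetical, and a state income measure could be added as another predictor:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the rankings data
rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "median_sat": rng.normal(1100, 120, n),
    "admit_rate": rng.uniform(0.2, 0.95, n),
})
df["pct_pell"] = (0.9 - 0.0005 * df["median_sat"]
                  + 0.15 * df["admit_rate"] + rng.normal(0, 0.05, n))

# Predict percent Pell from selectivity measures; the residual
# (actual minus predicted) is the social mobility performance measure
fit = smf.ols("pct_pell ~ median_sat + admit_rate", data=df).fit()
df["pell_performance"] = df["pct_pell"] - fit.fittedvalues

print(df.sort_values("pell_performance", ascending=False).head())
```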

I would like to close this post by thanking Hoxby and Turner for provoking important conversations on data, social mobility, and accountability. I look forward to seeing their next paper in this area!

Announcing a New Data Collection Project on State Performance-Based Funding Policies

Performance-based funding (PBF) policies in higher education, in which states fund colleges in part based on student outcomes instead of enrollment measures or historical tradition, have spread rapidly across states in recent years. This push for greater accountability has resulted in more than half of all states currently using PBF to fund at least some colleges, with deep-blue California joining a diverse group of states by developing a PBF policy for its community colleges.

Academic researchers have flocked to the topic of PBF over the last decade and have produced dozens of studies looking at the effects of PBF both at the national level and for individual states. In general, this research has found modest effects of PBF, with some differences across states, sectors, and how long the policies have been in place. There have also been concerns about the potential unintended consequences of PBF for access for low-income and minority students, although newer policies that provide bonuses to colleges that graduate historically underrepresented students seem promising in mitigating these issues.

In spite of the intense research and policy interest in PBF, relatively little is known about what is actually in these policies. States vary considerably in how much money is tied to student outcomes, which outcomes (such as retention and degree completion) are incentivized, and whether there are bonuses for serving low-income, minority, first-generation, rural, adult, or veteran students. Some states also give bonuses for STEM graduates, which is even more important to understand given this week’s landmark paper by Kevin Stange and colleagues documenting differences in the cost of providing an education across disciplines.

Most research has relied on binary indicators of whether a state has a PBF policy or an incentive to encourage equity, with some studies trying to get at the importance of the strength of PBF policies by looking at individual states. But researchers and advocacy organizations cannot even agree on whether certain states had PBF policies in certain years, and no research has tried to fully catalog the different strengths of policies (“dosage”) across states over time.
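To give a sense of what “fully catalog” means in practice, here is a hypothetical sketch of what a single state-year record in such a dataset might contain; the field names and values are illustrative, not our final coding scheme:

```python
from dataclasses import dataclass, field

@dataclass
class PBFPolicyYear:
    """One state-year record in a hypothetical PBF policy dataset."""
    state: str
    year: int
    sectors: list             # e.g., ["two-year", "four-year"]
    share_of_funding: float   # "dosage": share of appropriations tied to outcomes
    outcomes: list            # e.g., ["retention", "degree completion"]
    equity_premiums: dict = field(default_factory=dict)  # group -> bonus weight

example = PBFPolicyYear(
    state="XX", year=2015, sectors=["two-year", "four-year"],
    share_of_funding=0.25, outcomes=["degree completion", "credit accumulation"],
    equity_premiums={"low-income": 0.4, "adult": 0.4},
)
print(example)
```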

Because collecting high-quality data on the nuances of PBF policies is a time-consuming endeavor, I was just about ready to walk away from studying PBF given my available resources. But last fall at the Association for the Study of Higher Education conference, two wonderful colleagues approached me with an idea to go out and collect the data. After a year of working with Justin Ortagus of the University of Florida and Kelly Rosinger of Pennsylvania State University—two tremendous assistant professors of higher education—we are pleased to announce that we have received a $204,528 grant from the William T. Grant Foundation to build a 20-year dataset containing detailed information about the characteristics of PBF policies and how much money is at stake.

Our dataset, which will eventually be made available to the public, will help us answer a range of policy-relevant questions about PBF. Some particularly important questions are whether dosage matters regarding student outcomes, whether different types of equity provisions are effective in reducing educational inequality, and whether colleges respond to PBF policies differently based on what share of their funding comes from the state. We are still seeking funding to do these analyses over the next several years, so we would love to talk with interested foundations about the next phases of our work.

To close, one thing that I tell often-skeptical audiences of institutional leaders and fellow faculty members is that PBF policies are not going away anytime soon and that many state policymakers will not give additional funding to higher education without at least a portion being directly tied to student outcomes. These policies are also rapidly changing, in part driven by some of the research over the last decade that was not as positive toward many early PBF systems. This dataset will allow us to examine which types of PBF systems can improve outcomes across all students, thus helping states improve their current PBF systems.