A Possible For-Profit Accountability Compromise?

In the midst of an absolutely bonkers week in the world of higher education (highlighted by an FBI investigation into an elite college admissions scandal, although the sudden closure of Argosy University deserves far more attention than rich families doing stupid things), the U.S. House Appropriations Committee held a hearing on for-profit colleges. Not surprisingly, the hearing quickly developed along familiar fault lines: Democrats pushed for tighter oversight of “predatory” colleges, while Republicans repeatedly called for applying the same regulations to both for-profit and nonprofit colleges.

One of the key sticking points in Higher Education Act (HEA) reauthorization is likely to be the so-called “90/10” rule, which requires for-profit colleges to get at least 10% of their revenue from sources other than federal financial aid (excluding veterans’ benefits) in order to remain eligible for federal financial aid. Democrats want to return the rule to 85/15 (as it was in the past) and count veterans’ benefits in the federal portion of the calculation, which would trip up many for-profit colleges. (Because public colleges get state funds and many private colleges have at least modest endowments, this rule is generally not an issue for them.) Republicans have proposed getting rid of 90/10 entirely in their vision for HEA reauthorization.
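To make the stakes concrete, here is a rough back-of-the-envelope calculation (with entirely hypothetical revenue figures) showing how a college that passes the current 90/10 test could fail an 85/15 test that counts veterans’ benefits as federal revenue.

```python
# Hypothetical revenue figures (in millions) for a for-profit college.
title_iv = 86.0     # Title IV federal student aid (Pell Grants, federal loans)
veterans = 6.0      # GI Bill and other veterans' benefits
other = 8.0         # cash payments, employer tuition benefits, private loans
total = title_iv + veterans + other

current_share = title_iv / total                 # veterans' benefits count on the "10" side today
proposed_share = (title_iv + veterans) / total   # the proposal moves them to the federal side

print(f"Current 90/10 calculation:  {current_share:.1%} federal (limit: 90%)")   # 86.0% -> passes
print(f"Proposed 85/15 calculation: {proposed_share:.1%} federal (limit: 85%)")  # 92.0% -> fails
```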

I have a fair amount of skepticism about the effectiveness of the 90/10 rule in the for-profit sector, particularly because complying with it effectively requires setting tuition above federal aid limits, and for-profit colleges tend to serve students with relatively little ability to pay for their own education. But I also worry about colleges with poor student outcomes sucking up large amounts of federal funds with relatively few strings attached. So, while watching the panelists at the House hearing talk about the 90/10 rule, the framework of an idea on for-profit accountability (which may admittedly be crazy) came to mind.

I am tossing out the idea of tying the percentage of revenue that colleges can receive from federal funds (including veterans’ benefits as federal funds) to the institution’s performance on a number of metrics. For the sake of simplicity, let’s assume the three outcomes are graduation rates, earnings after attending college, and student loan repayment rates—although other measures are certainly possible. Then I will break each of these outcomes into thirds based on the predominant type of credential awarded (certificate, associate degree, or bachelor’s degree), restricting the sample to broad-access colleges to reflect the realities of the for-profit sector.

A college that performed in the top third on all three measures would qualify for the maximum share of revenue from federal funds—let’s say 100%. A college in the top third on two measures and in the middle third on the other one could get 95%, and the percentage would drop by five percentage points (or some other set amount) as the college’s performance declined. Colleges in the bottom third on all three measures would only get 60% of their revenue from federal funds.
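To make the framework a bit more concrete, here is a minimal sketch of how the sliding scale could be computed. The metric cutoffs, the intermediate percentages, and the variable names are illustrative assumptions on my part; the paragraph above only pins down the 100%, 95%, and 60% points.

```python
def tier(value, bottom_cutoff, top_cutoff):
    """Return 0, 1, or 2 for the bottom, middle, or top third of the comparison group."""
    if value >= top_cutoff:
        return 2
    return 1 if value >= bottom_cutoff else 0

# Total tier points (0-6) mapped to the allowed share of revenue from federal funds.
# Only the 100, 95, and 60 endpoints come from the post; the rest are illustrative.
CAP_BY_POINTS = {6: 100, 5: 95, 4: 90, 3: 85, 2: 80, 1: 70, 0: 60}

def federal_revenue_cap(grad_rate, median_earnings, repayment_rate, cutoffs):
    points = (tier(grad_rate, *cutoffs["grad_rate"])
              + tier(median_earnings, *cutoffs["earnings"])
              + tier(repayment_rate, *cutoffs["repayment"]))
    return CAP_BY_POINTS[points]

# Hypothetical cutoffs for certificate-dominant, broad-access institutions.
cutoffs = {"grad_rate": (0.35, 0.60),
           "earnings": (22_000, 30_000),
           "repayment": (0.30, 0.50)}

print(federal_revenue_cap(0.65, 31_000, 0.55, cutoffs))  # top third on all three -> 100
print(federal_revenue_cap(0.20, 20_000, 0.25, cutoffs))  # bottom third on all three -> 60
```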

This type of system would effectively remove the limit on federal funds for high-performing for-profit colleges, while severely tightening it for low performers. Could this idea gain bipartisan support (after a fair amount of model testing)? Possibly. Is it worth at least thinking through? I would love your thoughts on that.

New Data on Pell Grant Recipients’ Graduation Rates

Although graduation rates for students who begin college with Pell Grants are a key marker of colleges’ commitments to socioeconomic diversity, it has only recently become possible to see these rates at the institution level. I wrote a piece for Brookings in late 2017 based on the first data release from the U.S. Department of Education and later posted a spreadsheet of graduation rates at the request of readers—highlighting public interest in the metric.

ED released the second year of data late last year, and Melissa Korn of The Wall Street Journal (one of the best education writers in the business) reached out to me to see if I had those data handy for a piece she wanted to write on Pell graduation rate gaps. Since I do my best to keep up with new data releases from the Integrated Postsecondary Education Data System, I was able to send her a file and share my thoughts on the meaning of the data. This turned into a great piece on completion gaps at selective colleges.

Since I have already gotten requests to share the underlying data in the WSJ piece, I am happy to post the spreadsheet again on my site.

Download the spreadsheet here!

A few cautions:

(1) There are likely a few colleges that screwed up data reporting to ED. For example, gaps of 50 percentage points at larger colleges are likely errors that nobody at the college caught.

(2) Beware the rates for small colleges (with fewer than 50 students in a cohort).

(3) This measure is the graduation rate for first-time, full-time students who complete a bachelor’s degree at the same institution within six years. It excludes part-time and transfer students, so overall completion numbers will be higher.

(4) As my last post highlighted, there are some legitimate concerns with using percent Pell as an accountability measure. However, it’s the best measure that is currently available.

Some Thoughts on Using Pell Enrollment for Accountability

It is relatively rare for an academic paper to both dominate the headlines in the education media and be covered by mainstream outlets, but a new paper by economists Caroline Hoxby and Sarah Turner did exactly that. The paper, benignly titled “Measuring Opportunity in U.S. Higher Education” (technical and accessible versions), raised two major concerns with using the number or percentage of students receiving federal Pell Grants for accountability purposes:

(1) Because states have different income distributions, it is far easier for universities in some states to enroll a higher share of Pell recipients than others. For example, Wisconsin has a much lower share of lower-income adults than does California, which could help explain why California universities have a higher percentage of students receiving Pell Grants than do Wisconsin universities.

(2) At least a small number of selective colleges appear to be gaming the Pell eligibility threshold by enrolling far more students who barely qualify for Pell Grants than students who have significant financial need but just miss qualifying. Here is the awesome graph that Catherine Rampell made in her Washington Post article summarizing the paper:

[Image: hoxby_turner]

As someone who writes about accountability and social mobility while also pulling together Washington Monthly’s college rankings (all opinions here are my own, of course), I have a few thoughts inspired by the paper. Here goes!

(1) Most colleges likely aren’t gaming the number of Pell recipients in the way that some elite colleges appear to be doing. As this Twitter thread chock-full of information from great researchers discusses, there is no evidence nationally that colleges are manipulating enrollment right around the Pell eligibility cutoff. Since most colleges are broad-access and/or are trying to simply meet their enrollment targets, it follows that they are less concerned with maximizing their Pell enrollment share (which is likely high already).

(2) How are elite colleges manipulating Pell enrollment? This could be happening in one or more of three possible ways. First, if these colleges are known for generous aid to Pell recipients, more students just on the edge of Pell eligibility may choose to apply. Second, colleges could be explicitly recruiting students from areas likely to have larger shares of Pell recipients close to the eligibility threshold. Finally, colleges could make admissions and/or financial aid decisions based on Pell eligibility. It would be ideal to see data on each step of the process to better figure out what is going on.

(3) What other metrics can currently be used to measure social mobility in addition to Pell enrollment? Three other metrics currently jump out as possibilities. The first is enrollment by family income bracket (such as below $30,000 or $30,001-$48,000), which is collected for first-time, full-time, in-state students in IPEDS. It suffers from the same manipulation issues around the cutoffs, though. The second is first-generation status, which the College Scorecard collects for FAFSA filers. The third is race/ethnicity, which tends to be correlated with the previous two measures but is likely a political nonstarter in a number of states (while being a requirement in others).

(4) How can percent Pell still be used? The first finding of Hoxby and Turner’s work is far more important than the second for nationwide analyses (within states, it may be worth looking at regional differences in income, too). The Washington Monthly rankings use both the percentage of Pell recipients and an actual versus predicted Pell enrollment measure (controlling for ACT/SAT scores and the percentage of students admitted). I plan to play around with ways to take a state’s income distribution into account to see how this changes the predicted Pell enrollments and will report back on my findings in a future blog post.
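As a rough illustration of what that adjustment could look like (this is not the actual Washington Monthly methodology), here is a sketch of an actual-versus-predicted Pell regression with a state income control added. The file name, column names, and linear specification are all assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("institution_level_data.csv")  # hypothetical analysis file

# pct_pell: share of undergraduates receiving Pell Grants
# sat_math_75, admit_rate: the test score and selectivity controls mentioned above
# state_pct_pell_eligible: share of the state's adults in Pell-eligible income ranges (the added control)
model = smf.ols("pct_pell ~ sat_math_75 + admit_rate + state_pct_pell_eligible", data=df).fit()

df["predicted_pell"] = model.predict(df)
df["pell_performance"] = df["pct_pell"] - df["predicted_pell"]  # positive = more Pell students than predicted
print(df.sort_values("pell_performance", ascending=False).head())
```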

(5) How can social mobility be measured better? States can dive much deeper into social mobility than the federal government can thanks to their detailed student-level datasets. This allows states to create sliding scales of social mobility or to use measures such as median household income instead of just percent Pell. It would be great to have a national measure of the percentage of students with a zero expected family contribution (the neediest students), and this would be pretty easy to add to IPEDS as a new measure.

I would like to close this post by thanking Hoxby and Turner for provoking important conversations on data, social mobility, and accountability. I look forward to seeing their next paper in this area!

Announcing a New Data Collection Project on State Performance-Based Funding Policies

Performance-based funding (PBF) policies in higher education, in which states fund colleges in part based on student outcomes instead of enrollment measures or historical tradition, have spread rapidly across states in recent years. This push for greater accountability has resulted in more than half of all states currently using PBF to fund at least some colleges, with deep-blue California joining a diverse group of states by developing a PBF policy for its community colleges.

Academic researchers have flocked to the topic of PBF over the last decade and have produced dozens of studies looking at the effects of PBF both on a national level and for individual states. In general, this research has found modest effects of PBF, with some differences across states, sectors, and how long the policies have been in place. There have also been concerns about the potential unintended consequences of PBF on access for low-income and minority students, although new policies that provide bonuses to colleges that graduate historically underrepresented students seem to be promising in mitigating these issues.

In spite of the intense research and policy interest in PBF, relatively little is known about what is actually in these policies. States vary considerably in how much money is tied to student outcomes, which outcomes (such as retention and degree completion) are incentivized, and whether there are bonuses for serving low-income, minority, first-generation, rural, adult, or veteran students. Some states also give bonuses for STEM graduates, which is even more important to understand given this week’s landmark paper by Kevin Stange and colleagues documenting differences in the cost of providing an education across disciplines.

Most research has relied on binary indicators of whether a state has a PBF policy or an incentive to encourage equity, with some studies trying to get at the importance of the strength of PBF policies by looking at individual states. But researchers and advocacy organizations cannot even agree on whether certain states had PBF policies in certain years, and no research has tried to fully catalog the different strengths of policies (“dosage”) across states over time.
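To give a sense of what one record in this kind of dataset might capture, here is a purely hypothetical sketch. The field names and example values are my own illustrations, not the structure of our actual data collection.

```python
from dataclasses import dataclass, field

@dataclass
class PBFPolicyYear:
    """One state-year record; all field names and values here are hypothetical."""
    state: str
    year: int
    share_of_funding_tied: float                      # e.g., 0.10 = 10% of state appropriations
    outcomes_incentivized: list = field(default_factory=list)
    equity_bonuses: list = field(default_factory=list)  # e.g., ["low-income", "adult", "veteran"]
    stem_bonus: bool = False

example = PBFPolicyYear(
    state="XX", year=2018, share_of_funding_tied=0.10,
    outcomes_incentivized=["retention", "degree completion"],
    equity_bonuses=["low-income", "adult"],
    stem_bonus=True,
)
```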

Because collecting high-quality data on the nuances of PBF policies is a time-consuming endeavor, I was just about ready to walk away from studying PBF given my available resources. But last fall at the Association for the Study of Higher Education conference, two wonderful colleagues approached me with an idea to go out and collect the data. After a year of working with Justin Ortagus of the University of Florida and Kelly Rosinger of Pennsylvania State University—two tremendous assistant professors of higher education—we are pleased to announce that we have received a $204,528 grant from the William T. Grant Foundation to build a 20-year dataset containing detailed information about the characteristics of PBF policies and how much money is at stake.

Our dataset, which will eventually be made available to the public, will help us answer a range of policy-relevant questions about PBF. Some particularly important questions are whether dosage matters regarding student outcomes, whether different types of equity provisions are effective in reducing educational inequality, and whether colleges respond to PBF policies differently based on what share of their funding comes from the state. We are still seeking funding to do these analyses over the next several years, so we would love to talk with interested foundations about the next phases of our work.

To close, one thing that I tell often-skeptical audiences of institutional leaders and fellow faculty members is that PBF policies are not going away anytime soon and that many state policymakers will not give additional funding to higher education without at least a portion being directly tied to student outcomes. These policies are also rapidly changing, in part driven by some of the research over the last decade that was not as positive toward many early PBF systems. This dataset will allow us to examine which types of PBF systems can improve outcomes across all students, thus helping states improve their current PBF systems.

Some Good News on Student Loan Repayment Rates

The U.S. Department of Education released updates to its massive College Scorecard dataset earlier this week, including new data on student debt burdens and student loan repayment rates. In this blog post, I look at trends in repayment rates (defined as whether a student repaid at least $1 in principal) at one, three, five, and seven years after entering repayment. I present data for colleges with unique six-digit Federal Student Aid OPEID numbers (to eliminate duplicate results), weighting the final estimates to reflect the total number of borrowers entering repayment.[1]
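For readers who want to replicate the aggregation, here is a minimal sketch of the approach for a single Scorecard file. The file name and variable names (such as RPY_3YR_RT and OPEID6) are written from memory and should be checked against the current data dictionary before running.

```python
import pandas as pd

# One College Scorecard cohort file (name is a placeholder).
scorecard = pd.read_csv("MERGED2016_17_PP.csv", low_memory=False)

# Keep one record per six-digit OPEID to avoid double-counting branch campuses.
deduped = scorecard.drop_duplicates(subset="OPEID6")

def weighted_rate(df, rate_col, n_col):
    """Borrower-weighted mean repayment rate, ignoring suppressed or missing cells."""
    valid = df[[rate_col, n_col]].apply(pd.to_numeric, errors="coerce").dropna()
    return (valid[rate_col] * valid[n_col]).sum() / valid[n_col].sum()

for years in (1, 3, 5, 7):
    rate = weighted_rate(deduped, f"RPY_{years}YR_RT", f"RPY_{years}YR_N")
    print(f"{years}-year repayment rate: {rate:.1%}")
```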

The table below shows the trends in the 1-year, 3-year, 5-year, and 7-year repayment rates for each cohort of students with available data.

Repayment cohort    1-year rate (%)    3-year rate (%)    5-year rate (%)    7-year rate (%)
2006-07                  63.2               65.1               66.7               68.4
2007-08                  55.7               57.4               59.5               62.2
2008-09                  49.7               51.7               55.3               59.5
2009-10                  45.7               48.2               52.6               57.4
2010-11                  41.4               45.4               51.3               N/A
2011-12                  39.8               44.4               50.6               N/A
2012-13                  39.0               45.0               N/A                N/A
2013-14                  40.0               46.1               N/A                N/A

One piece of good news is that 1-year and 3-year repayment rates ticked up slightly for the most recent cohort of students who entered repayment in 2013 or 2014. The 1-year repayment rate of 40.0% is the highest rate since the 2010-11 cohort and the 3-year rate of 46.1% is the highest since the 2009-10 cohort. Another piece of good news is that the gain between the 5-year and 7-year repayment rates for the most recent cohort with data (2009-10) is the largest among the four cohorts with data.

Across all sectors of higher education, repayment rates increased as students got farther into the repayment period. The charts below show differences by sector for the cohort entering repayment in 2009 or 2010 (the most recent cohort that can be tracked over seven years), and it is worth noting that students at for-profit colleges see somewhat smaller increases in repayment rates than students in other sectors.

But even somewhat better repayment rates still indicate significant issues with student loan repayment. Only half of borrowers have repaid any principal within five years of entering repayment, which is a concern for students and taxpayers alike. Data from a Freedom of Information Act request by Ben Miller of the Center for American Progress highlight that student loan default rates continue to increase beyond the three-year accountability window currently used by the federal government, and other students are muddling through deferment and forbearance while outstanding debt continues to increase.

Other students are relying on income-driven repayment and Public Service Loan Forgiveness to remain current on their payments. This presents a long-term risk to taxpayers, as at least a portion of balances will be written off over the next several decades. It would be helpful for the Department of Education to add data to the College Scorecard on the percentage of each college’s borrowers enrolled in income-driven repayment plans so it is possible to separate students who may not be repaying principal because of income-driven plans from those who are placing their credit at risk by falling behind on payments.

[1] Some of the numbers for prior cohorts slightly differ from what I presented last year due to a change in how I merged datasets (starting with the most recent year of the Scorecard instead of the oldest year, as the latter method excluded some colleges that merged). However, this did not affect the general trends presented in last year’s post. Thanks to Andrea Fuller at the Wall Street Journal for helping me catch that bug.

How to Provide Context for College Scorecard Data

The U.S. Department of Education’s revamped College Scorecard website celebrated its third anniversary last month with another update to the underlying dataset. It is good to see this important consumer information tool continue to be updated, given the role that Scorecard data can play in market-based accountability (a key goal of many conservatives). But the Scorecard’s change log—a great resource for those using the dataset—revealed a few changes to the public-facing site. (Thanks to the indefatigable Clare McCann at New America for pointing this out in a blog post.)

[Screenshot: scorecard_fig1_oct18]

So to put the above screenshot into plain English, the Scorecard used to have indicators for how a college’s performance on outcomes such as net price, graduation rate, and post-college salary compared to the median institution—and now it doesn’t. In many ways, the Department of Education’s decision to stop comparing colleges with different levels of selectivity and institutional resources to each other makes all the sense in the world. But it would be helpful to provide website users with a general idea of how the college performs relative to more similar institutions (without requiring users to enter a list of comparison colleges).

For example, here is what the Scorecard data now look like for Cal State—Sacramento (the closest college to me as I write this post). The university sure looks affordable, but the context is missing.

[Screenshot: scorecard_fig2_oct18]

It would sure be helpful if ED already had a mechanism to generate a halfway reasonable set of comparison institutions to help put federal higher education data into context. Hold on just a second…

[Screenshot: scorecard_fig3_oct18]

It turns out that there is already an option within the Integrated Postsecondary Education Data System (IPEDS) to generate a list of peer institutions. ED creates a list of institutions similar to the focal college based on factors such as sector and level, Carnegie classification, enrollment, and geographic region. For Sacramento State, here is part of the list of 32 comparison institutions that is generated. People can certainly quibble with some of the institutions chosen, but they clearly do have some similarities.

[Screenshot: scorecard_fig4_oct18]

I then graphed the net prices of these 32 institutions to help put Sacramento State (in black below) into context. It had the fifth-lowest net price among the set of universities, information that is at least somewhat more helpful than comparing against a national average across all sectors and levels.

[Screenshot: scorecard_fig5_oct18]
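For anyone who wants to recreate that chart, here is a rough sketch once the comparison group’s net prices have been downloaded from IPEDS; the file and column names are placeholders, not an actual IPEDS export format.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical extract of average net prices for the 32 IPEDS comparison institutions.
peers = pd.read_csv("ipeds_comparison_group_net_price.csv").sort_values("net_price")

# Highlight the focal institution in black, peers in gray.
colors = ["black" if name == "California State University-Sacramento" else "gray"
          for name in peers["institution"]]
plt.barh(peers["institution"], peers["net_price"], color=colors)
plt.xlabel("Average net price (dollars)")
plt.tight_layout()
plt.savefig("net_price_comparison.png", dpi=200)
```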

My takeaway here: the folks behind the College Scorecard should talk with the IPEDS people to consider bringing back a comparison group average based on a methodology that is already used within the Department of Education.

Comments on the Proposed Gainful Employment Regulations

The U.S. Department of Education is currently accepting public comments (through September 13) on their proposal to rescind the Obama administration’s gainful employment regulations, which had the goal of tying federal financial aid eligibility to whether graduates of certain vocationally-focused programs had an acceptable debt-to-earnings ratio. My comments are reprinted below.

September 4, 2018

Annmarie Weisman

U.S. Department of Education

400 Maryland Avenue SW, Room 6W245

Washington, DC 20202

Re: Comments on the proposed rescinding of the gainful employment regulations

Dear Annmarie,

My name is Robert Kelchen and I am an assistant professor of higher education at Seton Hall University.[1] As a researcher who studies financial aid, accountability policies, and higher education finance, I have been closely following the Department of Education (ED)’s 2017-18 negotiated rulemaking efforts regarding gainful employment. I write to offer my comments on certain aspects of the proposed rescinding of the regulations.

First, as an academic, I was pleasantly surprised to see ED immediately referring to a research paper in making its justification to change the debt-to-earnings (D/E) threshold. But that quickly turned to dismay after Sandy Baum clarified the paper’s findings in a blog post, making it clear that ED had incorrectly interpreted what she and Saul Schwartz wrote a decade ago.[2] I am not wedded to any particular threshold regarding D/E ratios, but I would recommend that ED reach out to researchers before using their findings in order to make sure they are being interpreted correctly.

Second, the point that D/E ratios can be affected by the share of adult students, who have higher loan limits than dependent students, is quite valid. But it can potentially be addressed in one of two ways if D/E ratios are reported in the future. One option is to report D/E ratios separately for independent and dependent students, but that runs the risk of creating more issues with small cell sizes by splitting the sample. Another option is to cap the amount of independent student borrowing credited toward D/E ratios at the same level as dependent students (also addressing the possibility that some dependent students have higher limits due to Parent PLUS loan applications being rejected). This is less useful from a consumer information perspective, but could solve issues regarding high-stakes accountability.
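As a rough illustration of the second option, the sketch below caps independent borrowing at an assumed dependent-student aggregate limit before computing a program-level D/E ratio. The loan limit, interest rate, amortization period, and example debts are illustrative only and would need to come from the regulation itself.

```python
from statistics import median

# Assumed aggregate loan limit for dependent undergraduates (illustrative only).
DEPENDENT_CAP = 31_000

def annual_payment(balance, annual_rate=0.05, years=10):
    """Standard amortized annual payment on a loan balance."""
    r = annual_rate
    return balance * r / (1 - (1 + r) ** -years)

def capped_de_ratio(borrowers, median_earnings):
    """borrowers: list of (debt, is_independent) tuples for a program's completers."""
    capped = [min(debt, DEPENDENT_CAP) if is_independent else debt
              for debt, is_independent in borrowers]
    return annual_payment(median(capped)) / median_earnings

# Hypothetical program: two dependent and three independent borrowers.
example = [(12_000, False), (18_000, False), (25_000, True), (38_000, True), (45_000, True)]
print(f"{capped_de_ratio(example, median_earnings=32_000):.1%}")  # annual D/E ratio
```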

Third, ED’s point about gainful employment using a ten-year amortization period for certificate programs while also offering 20-year repayment plans under REPAYE is well-taken. Switching to a 20-year period would allow some lower-performing programs to pass the D/E test, but it is reasonable given that ED offers a loan repayment plan of that length. (I also view the possibility that programs would actually lose Title IV eligibility under the prior administration’s regulations as highly unlikely, based on the experience that very few colleges have lost eligibility due to high cohort default rates.) In any case, aligning amortization periods to repayment plan periods makes sense.
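A quick worked example (with an assumed interest rate, debt level, and earnings level) shows how much the amortization period matters for a program near the annual D/E threshold:

```python
def annual_payment(balance, annual_rate=0.06, years=10):
    """Standard amortized annual payment on a loan balance."""
    r = annual_rate
    return balance * r / (1 - (1 + r) ** -years)

median_debt = 15_000      # assumed median debt for a certificate program
median_earnings = 24_000  # assumed median earnings of completers

for years in (10, 20):
    payment = annual_payment(median_debt, years=years)
    # The prior rule's annual-rate thresholds were roughly 8% (pass) and 12% (fail).
    print(f"{years}-year amortization: ${payment:,.0f} per year "
          f"= {payment / median_earnings:.1%} of earnings")
```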

Fourth, I am highly skeptical that requiring institutions to disclose various outcomes on their own websites would have much value. Net price calculators, which colleges are required to post under the Higher Education Act, are a prime example. Research has shown that many colleges place these calculators on obscure portions of their website and that information is often up to five years out of date.[3] Continuing to publish centralized data on outcomes is far preferable to letting colleges do their own thing, and highlights the importance of continuing to publish outcomes information without any pauses in the data.

Fifth, while providing median debt and median earnings data allows analysts to continue to calculate a D/E ratio, there is no harm in continuing to provide such a ratio in the future alongside the raw data. There is no institutional burden for doing so, and it is possible that some prospective students may find that ratio to be more useful than simply looking at median debt. At the very least, ED should conduct several focus groups to make sure that D/E ratios lack value before getting rid of them.

Sixth, while it is absolutely correct to note that people working in certain service industries receive a high portion of their overall compensation in tips, I find it dismaying as a taxpayer that there is no interest in creating incentives for individuals to report their income as required by law. A focus on D/E ratios created a possibility for colleges to encourage their students to follow the law and accurately report their incomes in order to increase earnings relative to debt payments. ED should instead work with IRS and colleges to help protect taxpayers by making sure that everyone pays income taxes as required.

In closing, I do not have a strong preference about whether ED ties Title IV eligibility to program-level D/E thresholds due to my skepticism that any sanctions would actually be enforced.[4] However, I strongly oppose efforts by ED to completely stop publishing program-level student outcomes data until the College Scorecard data are ready (which could be a few years). Continuing to publish data on certificate graduates’ outcomes in the interim is an essential step since all sectors of higher education already have to report certificate outcomes—meaning that keeping these data treats all sectors equally. Publishing outcomes of degree programs would be nice, but not as important since only some colleges would be included.

As I showed with my colleagues in the September/October issue of Washington Monthly magazine, certificate students’ outcomes vary tremendously both within and across CIP codes as well as within different types of higher education institutions.[5] Once the College Scorecard data are ready, this dataset can be phased out. But in the meantime, continuing to publish data meets a key policy goal of fostering market-based accountability in higher education.

[1] All opinions reflected in this commentary are solely my own and do not represent the views of my employer or funders.

[2] Baum, S. (2018, August 22). DeVos misinterprets the evidence in seeking gainful employment deregulation. Urban Wire. https://www.urban.org/urban-wire/devos-misrepresents-evidence-seeking-gainful-employment-deregulation.

[3] Anthony, A. M., Page, L. C., & Seldin, A. (2016). In the right ballpark? Assessing the accuracy of net price calculators. Journal of Student Financial Aid, 46(2), 25-50. Cheng, D. (2012). Adding it all up 2012: Are college net price calculators easy to find, use, and compare? Oakland, CA: The Institute for College Access and Success.

[4] For more reasons why I am skeptical that all-or-nothing accountability systems such as the prior administration’s gainful employment regulations would actually be effective, see my book Higher Education Accountability (Johns Hopkins University Press, 2018).

[5] Washington Monthly (2018, September/October). 2018 best colleges for vocational certificates. https://washingtonmonthly.com/2018-vocational-certificate-programs.

Comments on the Proposed Borrower Defense to Repayment Regulations

The U.S. Department of Education is currently accepting public comments (through August 30) on their proposed borrower defense to repayment regulations, which affect students’ ability to get loans forgiven in the case of closed schools or colleges that misrepresented important facts. Since these regulations also affect colleges and taxpayers, I weighed in to provide a researcher’s perspective. My comments are reprinted below.

August 21, 2018

Jean-Didier Gaina

U.S. Department of Education

400 Maryland Avenue SW, Mail Stop 294-20

Washington, DC 20202

Re: Comments on the proposed borrower defense to repayment regulations

Dear Jean-Didier,

My name is Robert Kelchen and I am an assistant professor of higher education at Seton Hall University.[1] As a researcher who studies financial aid, accountability policies, and higher education finance, I have been closely following the Department of Education (ED)’s 2017-18 negotiated rulemaking efforts regarding borrower defense to repayment and financial responsibility scores. Since there were no academic researchers included in the negotiated rulemaking committee (something that should be reconsidered in the future!), I write to offer my comments on certain segments of the proposed regulations.

My first comment is on the question of whether ED should accept so-called affirmative claims from borrowers who are not yet in default and seek to make a claim against a college instead of only accepting defensive claims from borrowers who have already defaulted. For colleges that are still open, this is a clear decision in my view: affirmative claims should be allowed because ED can then attempt to recoup the money from the college instead of effectively requiring the taxpayer to subsidize at least some amount of loan forgiveness. However, the decision is somewhat more complicated in the case of a closed school, where taxpayers are more likely to foot the bill. My sense is that affirmative claims should probably still be allowed given the relationship between defaulting on student loans and adverse outcomes such as reduced credit scores.[2]

To protect taxpayers and students alike, more needs to be done to strengthen federal requirements for colleges that are at risk of closure. If a college closes suddenly, students may be eligible to receive closed school discharges at taxpayer expense. Yet my research and analyses show that ED’s current rules for determining a college’s financial health (the financial responsibility score) are only weakly related to what they seek to measure. For example, several private nonprofit colleges that closed in 2016 had passing financial responsibility scores in 2014-15, while many colleges have continued to operate with failing scores for years.[3] I also found that colleges did not change their revenue or expenditure patterns in any meaningful way after receiving a failing financial responsibility score, suggesting that colleges are not taking the current measure seriously.[4]

I am heartened to see that ED is continuing to work on updating the financial responsibility score metric to better reflect a college’s real-time risk of closing through another negotiated rulemaking session. However, I am concerned that students and taxpayers could suffer from continuing with the status quo during a potential six-year phase-in period, so anything to shorten the period would be beneficial. I again urge ED to include at least one academic researcher on the negotiated rulemaking panel to complement institutional officials and accountants, as the research community studies how colleges respond to the kinds of potential policy changes that the rest of the committee may propose.

Finally, I am concerned about ED’s vague promise to encourage colleges to offer teach-out plans instead of suddenly closing, as the regulations provide no incentives for colleges on the brink of financial collapse to work with accreditors and states to develop a teach-out plan. It would be far better for ED to require colleges to be proactive and develop teach-out plans at the first sign of financial difficulties, reducing the risk to taxpayers by minimizing the risk of closed school discharges. These plans can then be approved by an accreditor and/or state agency as a part of the regular review process. Colleges would likely contend that having to develop a pre-emptive teach-out plan may affect their ability to recruit and retain students, but tying this to an existing benchmark of federal concern (such as a low financial responsibility score or being on Heightened Cash Monitoring 2) should alleviate that issue.

Thank you for the opportunity to provide comments on these proposed regulations and I am happy to respond to any questions that ED staffers may have.

[1] All opinions reflected in this commentary are solely my own and do not represent the views of my employer.

[2] Blagg, K. (2018). Underwater on student debt: Understanding consumer credit and student loan default. Washington, DC: Urban Institute.

[3] Kelchen, R. (2017, March 8). Do financial responsibility scores predict college closures? https://robertkelchen.com/2017/03/08/do-financial-responsibility-scores-predict-college-closures/.

[4] Kelchen, R. (forthcoming). Do financial responsibility scores affect institutional behaviors? Journal of Education Finance.

Is Administrative Bloat Really a Big Problem?

I usually begin talks on my book Higher Education Accountability with a discussion of why accountability pressures now are stronger than ever for much of nonprofit higher education. Not surprisingly, one of the key reasons that I discuss is the rising price tag of a college education. I usually get at least one question from audience members in every talk about the extent to which administrative bloat in higher education is driving up college prices. I have written before about how difficult it is to pin the rising cost of providing a college education on any given factor, but I am diving in deeper on the administrative bloat concern in this post.

First, let’s take a look at trends in administrative expenditures and staffing over the last decade or two. Here are charts on inflation-adjusted per-FTE expenditures for instruction, academic support, institutional support, and student services between 2003 and 2013 (courtesy of Delta Cost Project analyses). The charts show that spending on student services and academic support increased faster than both inflation and instructional expenditures, while institutional support expenditures (the IPEDS expenditure category most closely associated with administration) increased about as fast as instructional expenditures.

Turning to staffing trends, I again use Delta Cost Project data to look at the ratios of full-time faculty, part-time faculty, administrators, and staff per 1,000 FTE students. In general, the ratio of full-time faculty and administrators per 1,000 students held fairly constant across time in most sectors of higher education. However, the ratio of part-time faculty and professional staff members (lower-level administrators) increased markedly across higher education.

The data suggest that there has not been a massive explosion of high-level administrators, but there has been substantial growth in low- to mid-level academic support and student services staff members. What might be behind that growth in professional staff members? I offer two potential explanations below.

Explanation 1: Students need/want more services than in the past. As most colleges have enrolled increasingly diverse student bodies and institutions respond to pressures to graduate more students, it’s not surprising that colleges have hired additional staff members to assist with academic and social engagement. Students have also demanded additional services, such as more staff members to support campus diversity initiatives. (Lazy rivers and climbing walls could factor in here, but they are limited to such a small segment of higher education that they’re likely to be a rounding error in the grand scheme of things.)

Explanation 2: Staff members are doing tasks that faculty members used to do, which may not necessarily be a bad thing. A good example here is academic advising. Decades ago, it was far more common for faculty members to advise undergraduate students from their first year on. But over the years, professional academic advisers have taken on these responsibilities at many campuses, leaving faculty members to advise juniors and seniors within a major. To me, it seems logical to allow lower-paid professional advisers to work with first-year and second-year students, freeing up the time of higher-paid faculty members to do something else such as teach or do research. (I also have a strong hunch that professional advisers are better at helping students through general education requirements than faculty members, but I’d love to see more research on that point.)

In summary, there are lots of gripes coming from both faculty members and the public about the number of assistant and associate deans on college campuses. But most of the growth in non-faculty employees is among lower-level student and academic affairs staff members, not among highly-paid deans. There is still room for a robust debate about the right number of staff members and administrators, but claims of massive administrative bloat are not well-supported across all of higher education.

It’s hard to believe that a faculty member is writing this, but I do feel that most administrators do serve a useful purpose. As I told The Chronicle of Higher Education in a recent interview (conducted via e-mail while I was waiting for a meeting with an associate dean—I kid you not!), “Faculty do complain about all of the assistant and associate deans out there, but this workload would otherwise fall on faculty. And given the research, teaching, and service expectations that we face, we can’t take on those roles.”

Why Accountability Efforts in Higher Education Often Fail

This article was originally published at The Conversation.

As the price tag of a college education continues to rise along with questions about academic quality, skepticism about the value of a four-year college degree has grown among the American public.

This has led both the federal government and many state governments to propose new accountability measures that seek to spur colleges to improve their performance.

This is one of the key goals of the PROSPER Act, a House bill to reauthorize the federal Higher Education Act, which is the most important law affecting American colleges and universities. For example, one provision in the act would end access to federal student loans for students who major in subjects with low loan repayment rates.

Accountability is also one of the key goals of efforts in many state legislatures to tie funding for colleges and universities to their performance.

As a researcher who studies higher education accountability – and also just wrote a book on the topic – I have examined why policies that have the best of intentions often fail to produce their desired results. Two examples in particular stand out.

Federal and state failures

The first is a federal policy that is designed to end colleges’ access to federal grants and loans if too many students default on their loans. Only 11 colleges have lost federal funding since 1999, even though nearly 600 colleges have fewer than 25 percent of their students paying down any principal on their loans five years after leaving college, according to my analysis of data available on the federal College Scorecard. This shows that although students may be avoiding defaulting on their loans, they will be struggling to repay their loans for years to come.
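For those interested in reproducing that tabulation, here is a rough sketch using the public College Scorecard file; the file and variable names are assumptions based on my recollection of the data dictionary and should be verified against the current documentation.

```python
import pandas as pd

# Placeholder file name for the public College Scorecard institution-level file.
scorecard = pd.read_csv("Most-Recent-Cohorts-All-Data-Elements.csv", low_memory=False)

# RPY_5YR_RT: share of borrowers repaying any principal five years into repayment (assumed name).
rpy5 = pd.to_numeric(scorecard["RPY_5YR_RT"], errors="coerce")
low_repayment = (rpy5 < 0.25).sum()
print(low_repayment, "colleges have fewer than 25% of borrowers repaying any principal after five years")
```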

The second is state performance funding policies, which have encouraged colleges to make much-needed improvements to academic advising but have not resulted in meaningful increases in the number of graduates.

Based on my research, here are four of the main reasons why many accountability efforts fall short.

1. Competing initiatives

Colleges face many pressures that provide conflicting incentives, which in turn makes any individual accountability policy less effective. In addition to the federal government and state governments, colleges face strong pressures from other stakeholders. Accrediting agencies require colleges to meet certain standards. Faculty and student governments have their own visions for the future of their college. And private sector organizations, such as college rankings providers, have their own visions for what colleges should prioritize. (In the interest of full disclosure, I am the methodologist for Washington Monthly magazine’s college rankings, which ranks colleges on social mobility, research and service.)

As one example of these conflicting pressures, consider a public research university in a state with a performance funding policy that ties money to the number of students who graduate. One way to meet this goal is to admit more students, including some who have modest ACT or SAT scores but are otherwise well-prepared to succeed in college. This strategy would hurt the university in the U.S. News & World Report college rankings, which judge colleges in part based on ACT/SAT scores, selectivity and academic reputation.

Research shows that students considering selective colleges are influenced by rankings, so a university may choose to focus on improving their rankings instead of broadening access in an effort to get more state funds.

2. Policies can be gamed

Colleges can satisfy some performance metrics by gaming the system, instead of actually improving their performance. The theory behind many accountability policies is that colleges are not operating in an efficient manner and that they must be given incentives in order to improve their performance. But if colleges are already operating efficiently – or if they do not want to change their practices in response to an external mandate – the only option to meet the performance goal may be to try to game the system.

An example of this practice is with the federal government’s student loan default rate measure, which tracks the percentage of borrowers who default on their loans within three years of when they are supposed to start repaying their loans. Colleges that are concerned about their default rates can encourage students to enroll in temporary deferment or forbearance plans. These plans result in students owing more money in the long run, but they also push the risk of default outside the three-year period that the federal government tracks, which essentially lets colleges off the hook.

3. Unclear connections

It’s hard to tie individual faculty members to student outcomes. The idea of evaluating teachers based on their students’ outcomes is nothing new; 38 states require student test scores to be used in K-12 teacher evaluations, and most colleges include student evaluations as a criterion of the faculty review process. Tying an individual teacher to a student’s achievement test scores has been controversial in K-12 education, but it is far easier than identifying how much an individual faculty member contributes to a student’s likelihood of graduating from college or repaying their loans.

For example, a student pursuing a bachelor’s degree will take roughly 40 courses during their course of study. That student may have 30 different professors over four or five years. And some of them may no longer be employed when the student graduates. Colleges can try to encourage all faculty to teach better, but it’s difficult to identify and motivate the worst teachers because of the elapsed time between when a student takes a class and when he or she graduates or enters the workforce.

4. Politics as usual

Even when a college should be held accountable, politics often get in the way. Politicians may be skeptical of the value of higher education, but they will work to protect their local colleges, which are often one of the largest employers in their home states. This means that politicians often act to stop a college from losing money under an accountability system.

Take, for example, Senate Majority Leader Mitch McConnell, R-Ky., who was sympathetic to the plight of a Kentucky community college with a student loan default rate that should have resulted in a loss of federal financial aid. He got a provision added to the recent federal budget agreement that allowed only that college to appeal the sanction.