New Data on Pell Grant Recipients’ Graduation Rates

Although graduation rates of students who begin college with Pell Grants are a key marker of colleges' commitments to socioeconomic diversity, institution-level rates have only recently become available. I wrote a piece for Brookings in late 2017 based on the first data release from the U.S. Department of Education and later posted a spreadsheet of graduation rates at readers' request, highlighting public interest in the metric.

ED released the second year of data late last year, and Melissa Korn of The Wall Street Journal (one of the best education writers in the business) reached out to me to see if I had those data handy for a piece she wanted to write on Pell graduation rate gaps. Since I do my best to keep up with new data releases from the Integrated Postsecondary Education Data System, I was able to send her a file and share my thoughts on the meaning of the data. This turned into a great piece on completion gaps at selective colleges.

Since I have already gotten requests to share the underlying data in the WSJ piece, I am happy to post the spreadsheet again on my site.

Download the spreadsheet here!

A few cautions:

(1) There are likely a few colleges that screwed up data reporting to ED. For example, gaps of 50 percentage points at larger colleges are likely errors that nobody at the college caught.

(2) Beware the rates for small colleges (with fewer than 50 students in a cohort).

(3) This graduation rate measure is the graduation rate for first-time, full-time students who complete a bachelor’s degree at the same institution within six years. It excludes part-time and transfer students, so global completion numbers will be higher.

(4) As my last post highlighted, there are some legitimate concerns with using percent Pell as an accountability measure. However, it’s the best measure that is currently available.
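For readers who want to analyze the spreadsheet, the cautions above are easy to apply programmatically. Here is a minimal sketch in plain Python; the field names and thresholds are my own illustrative choices, not the actual variable names in the file:

```python
# Sketch: apply the cautions above to rows from the spreadsheet.
# Field names ("cohort", "pell_rate", "non_pell_rate") are hypothetical,
# not the actual column names in the downloadable file.

def screen_row(row, min_cohort=50, max_plausible_gap=50.0):
    """Return a status for one college's graduation-rate row.

    'small'   -> Pell cohort under min_cohort; rates are unreliable
    'suspect' -> gap so large it is likely a reporting error
    'ok'      -> passes both screens
    """
    if row["cohort"] < min_cohort:
        return "small"
    gap = row["non_pell_rate"] - row["pell_rate"]
    if abs(gap) >= max_plausible_gap:
        return "suspect"
    return "ok"

rows = [
    {"name": "College A", "cohort": 400, "pell_rate": 55.0, "non_pell_rate": 68.0},
    {"name": "College B", "cohort": 30,  "pell_rate": 80.0, "non_pell_rate": 85.0},
    {"name": "College C", "cohort": 900, "pell_rate": 12.0, "non_pell_rate": 71.0},
]
statuses = {r["name"]: screen_row(r) for r in rows}
```

Rows flagged as "small" or "suspect" can then be excluded or double-checked before computing any summary statistics.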

Some Thoughts on Using Pell Enrollment for Accountability

It is relatively rare for an academic paper to both dominate the headlines in the education media and be covered by mainstream outlets, but a new paper by economists Caroline Hoxby and Sarah Turner did exactly that. The paper, benignly titled “Measuring Opportunity in U.S. Higher Education” (technical and accessible versions), raised two major concerns with using the number or percentage of students receiving federal Pell Grants for accountability purposes:

(1) Because states have different income distributions, it is far easier for universities in some states to enroll a higher share of Pell recipients than others. For example, Wisconsin has a much lower share of lower-income adults than does California, which could help explain why California universities have a higher percentage of students receiving Pell Grants than do Wisconsin universities.

(2) At least a small number of selective colleges appear to be gaming the Pell eligibility threshold by enrolling far more students who barely receive Pell Grants than those who have significant financial need but barely do not qualify. Here is the awesome graph that Catherine Rampell made in her Washington Post article summarizing the paper:

[Graph from Catherine Rampell's Washington Post article summarizing Hoxby and Turner's findings]

As someone who writes about accountability and social mobility while also pulling together Washington Monthly’s college rankings (all opinions here are my own, of course), I have a few thoughts inspired by the paper. Here goes!

(1) Most colleges likely aren’t gaming the number of Pell recipients in the way that some elite colleges appear to be doing. As this Twitter thread chock-full of information from great researchers discusses, there is no evidence nationally that colleges are manipulating enrollment right around the Pell eligibility cutoff. Since most colleges are broad-access and/or are trying to simply meet their enrollment targets, it follows that they are less concerned with maximizing their Pell enrollment share (which is likely high already).

(2) How are elite colleges manipulating Pell enrollment? This could be happening in one or more of three possible ways. First, if these colleges are known for generous aid to Pell recipients, more students just on the edge of Pell eligibility may choose to apply. Second, colleges could be explicitly recruiting in areas likely to have larger shares of students just on the Pell side of the eligibility threshold. Finally, colleges could make admissions and/or financial aid decisions based on Pell eligibility. It would be ideal to see data on each step of the process to better figure out what is going on.

(3) What other metrics can currently be used to measure social mobility in addition to Pell enrollment? Three other metrics currently jump out as possibilities. The first is enrollment by family income bracket (such as below $30,000 or $30,001-$48,000), which is collected for first-time, full-time, in-state students in IPEDS. It suffers from the same manipulation issues around the cutoffs, though. The second is first-generation status, which the College Scorecard collects for FAFSA filers. The third is race/ethnicity, which tends to be correlated with the previous two measures but is likely a political nonstarter in a number of states (while being a requirement in others).

(4) How can percent Pell still be used? The first finding of Hoxby and Turner's work is far more important than the second finding for nationwide analyses (within states, it may be worth looking at regional differences in income, too). The Washington Monthly rankings use both the percentage of Pell recipients and an actual versus predicted Pell enrollment measure (controlling for ACT/SAT scores and the percentage of students admitted). I plan to play around with ways to take a state's income distribution into account to see how this changes the predicted Pell enrollments and will report back on my findings in a future blog post.
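For readers curious what an actual versus predicted Pell measure looks like in practice, here is a bare-bones sketch. The real Washington Monthly model controls for ACT/SAT scores and the percentage of students admitted; this version uses a single hypothetical selectivity proxy and made-up numbers purely to illustrate the residual logic:

```python
# Sketch of an actual-vs.-predicted Pell enrollment measure: regress percent
# Pell on a selectivity proxy, then score each college by its residual.
# The data below are invented for illustration only.

def ols_fit(xs, ys):
    """Simple least-squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    a = my - b * mx
    return a, b

# Hypothetical data: median SAT score (x) and percent Pell (y) for five colleges.
sat = [1000, 1100, 1200, 1300, 1400]
pell = [45.0, 38.0, 30.0, 24.0, 18.0]

a, b = ols_fit(sat, pell)
# A college beats its prediction when actual minus predicted is positive.
residuals = [y - (a + b * x) for x, y in zip(sat, pell)]
```

Colleges with positive residuals enroll more Pell recipients than their selectivity would predict; adjusting predictions for a state's income distribution would amount to adding another predictor to the model.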

(5) How can social mobility be measured better? States can dive much deeper into social mobility than the federal government can thanks to their detailed student-level datasets. This allows sliding scales of social mobility to be created, or something like median household income to be used instead of just percent Pell. It would be great to have a national measure of the percentage of students with a zero expected family contribution (the neediest students), and this would be pretty easy to add to IPEDS as a new measure.

I would like to close this post by thanking Hoxby and Turner for provoking important conversations on data, social mobility, and accountability. I look forward to seeing their next paper in this area!

How Colleges’ Carnegie Classifications Have Changed Over Time

Right as the entire higher education community was beginning to check out for the holiday season last month, Indiana University’s Center on Postsecondary Research released the 2018 Carnegie classifications. While there are many different types of classifications based on different institutional characteristics, the basic classification (based on size, degrees awarded, and research intensity) always garners the most attention from the higher education community. In this post, I look at some of the biggest changes between the 2015 and 2018 classifications and how the number of colleges in key categories has changed over time. (The full dataset can be downloaded here.)

The biggest change in the 2018 classifications was in how doctoral universities were classified. In previous classifications, a college was considered a doctoral university if it awarded at least 20 research/scholarship doctoral degrees (PhDs and a few other types of doctorates such as EdDs). The 2018 revisions also counted a college as a doctoral university if it awarded at least 30 professional practice doctorates (JDs, MDs, and degrees in related fields such as the health sciences). This accelerated the increase in the number of doctoral universities that has been underway since 2000:

2018: 423

2015: 334

2010: 295

2005: 279

2000: 258
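The revised rule described above can be written as a simple check. The degree thresholds come from the description above; everything else (function names, example counts) is illustrative, and the real classification considers additional factors such as size:

```python
# Sketch of the doctoral-university rule described above: at least 20
# research/scholarship doctorates (the pre-2018 rule), or, as of the 2018
# revision, at least 30 professional practice doctorates (JDs, MDs, etc.).
# The real classification weighs other factors too; this is illustrative only.

def is_doctoral_2018(research_doctorates, professional_doctorates):
    return research_doctorates >= 20 or professional_doctorates >= 30

def is_doctoral_2015(research_doctorates, professional_doctorates):
    return research_doctorates >= 20

# A university with few PhDs but a large law or medical school is newly
# doctoral under the 2018 rules (hypothetical counts).
newly_doctoral = is_doctoral_2018(5, 120) and not is_doctoral_2015(5, 120)
```

An institution heavy on professional practice degrees could cross the doctoral threshold under the 2018 rules without awarding a single additional PhD, which is consistent with the jump from 334 to 423 doctoral universities.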

This reclassification is important to universities because college rankings systems often classify institutions based on their Carnegie classification. U.S. News and Washington Monthly (the latter of which I compile) both base the national university category on the Carnegie doctoral university classification. The desire to be in the national university category (instead of regional or master’s university categories that get less public attention) has contributed to some universities developing doctoral programs (as Villanova did prior to the 2015 reclassification).

The revision of the lowest two levels of doctoral universities (which I will call R2 and R3 for shorthand, matching common language) did quite a bit to scramble the number of colleges in each category, with a number of R3 colleges moving into R2 status. Here is the breakdown among the three doctoral university groups since 2005 (the first year of three categories):

Year R1 R2 R3
2018 130 132 161
2015 115 107 112
2010 108 98 89
2005 96 102 81

Changing categories within the doctoral university group is important for benchmarking purposes. As I told Inside Higher Ed back in December, my university's move within the Carnegie doctoral category (from R3 to R2) affects its peer group. All of a sudden, tenure and pay comparisons will be based on a different, and somewhat more research-focused, group of institutions.

There has also been an increase in the number of two-year colleges offering at least some bachelor’s degrees, driven by the growth of community college baccalaureate efforts in states such as Florida and a diversifying for-profit sector. Here is the trend in the number of baccalaureate/associate colleges since 2005:

2018: 269

2015: 248

2010: 182

2005: 144

Going forward, Carnegie classifications will continue to be updated every three years in order to keep up with a rapidly changing higher education environment. Colleges will certainly be paying attention to future updates that could affect their reputations and peer groups.

Why Negotiated Rulemaking Committees Should Include a Researcher

The U.S. Department of Education officially unveiled on Monday the membership of the committees for its spring 2019 negotiated rulemaking sessions on accreditation and innovation. This incredibly ambitious rulemaking effort includes subcommittees on the TEACH Grant, distance education, and faith-based institutions and has wide-ranging implications for nearly all of American higher education. If all negotiators do not reach consensus on a given topic (the most likely outcome), ED can write regulations as it sees fit. (For more on the process, I highly recommend Rebecca Natow’s great book on negotiated rulemaking.)

The Department of Education is tasked with selecting the membership of negotiated rulemaking committees and subcommittees by choosing from among people who are nominated to participate by various stakeholders. Traditionally, ED has limited the positions to those who are representing broad sectors of institutions (such as community colleges) or affected organizations (like accrediting agencies). But given the breadth of the negotiations, I felt that it was crucial for at least one researcher to be a negotiator.

I heard from dozens of people both online and offline in support of my quixotic effort. But ED declined to include any researchers in this negotiated rulemaking session, which I find to be a major concern.

Why is the lack of an academic researcher such a big deal? First of all, it is important to have an understanding of how colleges may respond to major changes in federal policies. Institutional stakeholders may have a good idea of what their college might do, but it may be hard for them to honestly explain unintended consequences when all negotiations are being livestreamed to the public. Including a researcher who is not representing a particular sector or perspective provides the opportunity for someone to speak more candidly without the potential fear of reprisal.

Additionally, the Department of Education’s white papers on reform and innovation (see here and here) did not demonstrate full knowledge of the research on the areas to be negotiated. As I told The Chronicle of Higher Education:

“In general, ED didn’t do an awful job describing the few high-quality studies that they chose to include, but they only included a few quality studies alongside some seemingly random websites. If one of my students turned in this paper as an assignment, I would send it back with guidance to include more rigorous research and fewer opinion pieces.”

Including a researcher who knows the body of literature can help ensure that the resulting regulations have a sound backing in research. This is an important consideration given that the regulations can be challenged for either omitting or misusing prior research, as is the case with Sandy Baum’s research and the gainful employment regulations. Including a researcher can help ED get things right the first time.

In the future, I urge the Department of Education to include a spot in negotiated rulemaking committees for a researcher. This could be done in conjunction with professional associations such as the American Educational Research Association or the Association for Education Finance and Policy. This change has the potential to improve the quality of regulations and reduce the potential that regulations must be revised after public comment periods.

The only alternative right now is for someone to show up in Washington on Monday morning—the start of the semester for many academics—and petition to get on the committee in person. While I would love to see that happen, it is not feasible for most researchers to do. So I wish negotiators the best in the upcoming sessions, while reminding the Department of Education that researchers will continue to weigh in during public comment periods.

The 2018 “Not Top Ten” List in Higher Education

Yesterday, I unveiled my sixth annual list of the top ten events in American higher education in 2018. Now it’s time for the annual list of the “not top ten” events—which are a mix of puzzling decisions and epic fails that leave most of us wondering what people were thinking. (Catch up on my previous lists here.) Enjoy the list—and send along any feedback that you have!

(10) Some college presidents can’t help but do stupid things. As I wrote about yesterday, being a college president is a difficult job in part because of the constant public spotlight. This means that presidents (who usually receive media training) should know how to avoid really silly mistakes. But someone forgot to tell that to two presidents this year. Southeast Missouri State University’s president was forced to apologize after drinking from a beer bong at a tailgate party before a road football game. The (now-former) president of Edinboro University made national headlines after he tried to use a “Wag the Dog” disinformation strategy to make changes to his university. This might have worked if he hadn’t tried to pitch a story to The Chronicle about himself, a move that he later came to regret.

(9) It was a busy year for poop-related stories in higher education. Three fascinating stories about good old number 2 stood out in 2018. First, the University of Kansas made national news for installing a bicycle rack shaped like the letters P-A-R-C. Reading it from the other side, though, revealed a different message. Staying with the University of Kansas, someone stole an inflatable ten-foot colon owned by its cancer center in October (what, you don’t want to walk through it?). Thankfully, the colon was found in a vacant house in Kansas City. More seriously, Nebraska congressman Jeff Fortenberry tried to get a University of Nebraska professor in trouble with his employer for liking a Facebook post of a vandalized campaign sign that changed the “o” in the representative’s name to an “a” (what is with my fellow Midwesterners and poop?). Thankfully, although the professor’s behaviors were sophomoric at best, cooler heads prevailed and he did not get fired for expressing his First Amendment rights.

(8) Adventures in research, part 1. As a researcher who interacts with a wide range of people (largely through higher education Twitter), I come across some blatantly awful ways that people do or explain research. Here are four tweets, two from me and two from the tremendous Doug Webber at Temple University, that explain why I buy headache medicine in bulk.

Journalists, please read this post from earlier in the year in which I offer my thoughts on how to think about the pitches you all receive about every five minutes.

(7) The Review of Higher Education grinds to a halt after becoming hopelessly backlogged with accepted articles. On a beautiful summer evening, I was living the academic dream (reviewing an article for The Review of Higher Education while listening to a Cardinals game with the windows open) when I stumbled across a note on the journal’s website that no new papers were being accepted due to a two-year backlog. I tweeted this out and both Inside Higher Ed and the Chronicle soon covered the shutdown. Part of the issue is with the peer review process itself (the length of the process is brutal for grad students and junior faculty), and part is that the journal simply accepted too many good papers. The journal will reopen for submissions late next summer after being closed for about 15 months, so my thoughts are with the other major higher ed journals as they get swamped even more.

(6) Folks, don’t make up a fake counteroffer in an effort to get a pay raise. Like most employees, faculty members often feel that they are underpaid. (And thanks to public employees’ salaries generally being available to the public, it’s often not that hard to get an idea of whether that is true.) It’s also not that unusual for top-shelf faculty at research universities to go on the job market to get an offer from another university in the hope that their home university matches it. While some faculty may feel icky about this process, it’s not illegal. However, faking a counteroffer like former Colorado State University professor Brian McNaughton did was bound to backfire, and it did in epic fashion. Do not try this at home.

(5) The first wave of Public Service Loan Forgiveness requests met a woefully underprepared Department of Education. One of the data points that got a tremendous amount of attention in 2018 was the finding that more than 99% of students who applied for PSLF were rejected. Part of the high rejection rate makes sense, because the program began in 2007 and students had to work for a qualified public or private nonprofit organization while making 120 loan payments. But part of this can be attributed to ED dropping the ball across three different administrations, as noted by a Government Accountability Office report. Congress did pass a Temporary Expanded PSLF program this year for students who were in the wrong loan program, but expect chaos for a few years while the bugs get worked out. (For students interested in receiving PSLF, see this piece I wrote with some key pointers.)

(4) West Virginia’s proposed tuition-free community college programs would have required students to take (and pay for) drug tests each semester. The state’s governor and Senate president introduced the bill as an effort to improve college access in a state with low college attainment rates. How could anyone be opposed to this idea, which is increasingly common for public assistance programs? My answer: the state’s longstanding merit aid program (which serves more students from higher-income families) does not have that provision. Either make all students pee in a cup or don’t make anyone do it…what a gee whiz idea!

(3) The University of Texas-Tyler pulled the rug out from under accepted Nepalese students at the last minute. Aside from being a college president or a football or basketball coach, being in charge of enrollment management is one of the most visible and high-stakes jobs in higher education (plus, success is easily observed, unlike for most staff and faculty members). But the University of Texas at Tyler’s enrollment management director made a tremendous error by offering far too many full scholarships to international students. Rather than bite the bullet and cover the costs, UT-Tyler chose to revoke about 60 Nepalese students’ scholarships in mid-April. This rightly resulted in a PR nightmare for the university (the UT system belatedly apologized), and lesser-resourced colleges and the international counseling community stepped up to help many students. Time will tell whether this debacle affects UT-Tyler’s standing and admissions profile going forward.

(2) Adventures in research, part 2. As regular readers of my blog (or at least those who made it this far on this post) know, I’m not afraid to call out research that seems to feature dubious research methods. I was in a particularly grumpy mood one Monday morning when I stumbled across this “study” that was starting to get fawning coverage in the higher ed press.

Thanks to my grumpy tweets, I was able to talk with Chris Quintana of The Chronicle of Higher Education to air my concerns. He wrote a nice piece in which I was able to explain these issues to a broader audience, and I thought the story ended there. But boy, was I ever wrong! It turns out that the “expert” in the Bitcoin piece, Drew Cloud, was not a real person. Kudos to Chris and Dan Bauman from the Chronicle for exposing this creation of a corporate website.

(1) Let’s just say that 2018 was not a great year for big-time college athletics. If all was well with college athletics, I could use this space to highlight some of the more woeful teams out there (such as the University of Connecticut’s historically awful football defense and Rutgers volleyball’s 1-99 conference record since 2014). But instead, the main focus is on scandals at Maryland (where the board tried to fire the president over his efforts to fire a football coach who had a player die under his watch), Michigan State (where the former president now faces two felony charges over lying to police in the Larry Nassar scandal), and Ohio State (where the one trustee willing to stand up to retiring football coach Urban Meyer over his handling of a violent assistant coach resigned in protest). Sparty, please tell us what you think of college athletics this year!

(Dis)honorable mentions (courts division): Appalachian State University and Cape Cod Community College fell victim to fraudsters, a Florida Atlantic student tweeted a threat against his professor for scheduling a 7 AM exam, Teachers College’s former financial aid director was charged with fraud and bribery after running a student loan kickback scam, Temple’s MBA program faced student lawsuits after reporting phony data to US News.

(Dis)honorable mentions (non-courts division): Former Iranian president (!?!) weighs in on Michigan-Michigan State football rivalry, the Department of Education tries to make Freedom of Information Act requests as painful as possible, RateMyProfessors somehow let the ‘hotness’ chili pepper survive until June 2018, Oakland University hands out hockey pucks as a school shooting defense mechanism, online dating study shows men find women with a graduate degree to be less desirable (what year is this again?)

And with this post, I am taking my annual hiatus for winter break (unless something very important breaks in the meantime). Until then, I hope that all of my readers can enjoy some quiet time with friends and family and I will see you all in 2019!

The 2018 Higher Education Top Ten List

2018 has been another busy year in American higher education at both the federal and state levels, and it has been hard to keep up with all of the goings-on that affect students and colleges alike. In my sixth annual top ten list (see past lists here), I present the ten events of the year that I consider to be the most important or influential. (My slightly irreverent list of “not top ten” events comes out tomorrow.) As always, I’d love to hear your thoughts about the list and what I missed!

(10) America continues to become more politically polarized by educational attainment. Over the last several years, public opinion surveys have shown bipartisan concerns about the higher education enterprise. (The reasons differ—Democrats focus more on tuition prices, while Republicans zoom in on the liberal leanings of higher education as a whole.) But the midterm elections clearly showed a growing divide in political preference based on education, continuing a trend going back decades. White adults with a bachelor’s degree were about 15 percentage points more likely to vote for Democrats than those without a degree. This contributed to a near-wipeout of Republican House members in well-educated suburban districts (like my Congressional district in New Jersey), while likely also yielding Republican gains in rural Senate seats.

(9) New international student enrollments fell again, placing stresses on the budgets of less-selective colleges. A growing number of public and private nonprofit colleges now rely on international students (who often pay the full sticker price) to help balance budgets amid fierce competition for domestic students. Total international student enrollment increased again in the 2017-18 academic year, but new enrollment fell by 6.6%, marking a second consecutive decline. This decline, which is likely driven by a combination of how Trump-era immigration policies are perceived internationally and rising prices faced by international students, appears to be hitting less-selective colleges and graduate programs the hardest. Expect colleges to double down on the international market over the next few years, and for some to be spectacularly unsuccessful to the point where some closures may be blamed on cratering international enrollments.

(8) Michael Bloomberg’s $1.8 billion donation to Johns Hopkins University sparks conversations about access to higher education and divides in institutional resources. The media mogul and potential 2020 presidential candidate’s gift would, by itself, rank among the sixty largest endowments in the country, so his announcement of a massive gift to meet students’ financial need got a fair amount of media attention. The roughly $80 million-$90 million in earnings from the gift will help improve access to Hopkins for at least some students with financial need, but Bloomberg was also criticized for giving to a wealthy university instead of a community college. Elite colleges have faced a fair amount of pushback in recent years (see the endowment tax of 2017, which Hopkins will likely have to pay), and expect criticism to come from both the liberal Left and populist Right.

(7) Public research universities launch an ambitious collaboration to improve college completion rates. One of the neatest innovations in recent years is the University Innovation Alliance, a group of 11 public research universities that got together in 2014 to share best practices regarding student success. It seems like the effort has been successful in improving completion rates (although I haven’t seen a rigorous evaluation confirming this), and universities such as Georgia State and Arizona State have been overwhelmed by other university leaders wanting to look under their hoods. The new Transformation Cluster Initiative by the Association of Public and Land-Grant Universities takes a similar approach at a larger scale, including 130 universities in 16 regional clusters that will take a data-driven approach to improving student outcomes. I’m pulling for this initiative to succeed and get adopted by other sectors of higher education that have fewer resources (such as community colleges or struggling small private colleges).

(6) The K-12 school funding protests in statehouses around the country got the attention of lawmakers—and are likely to affect higher education. One of the biggest political developments in 2018 was the push by K-12 teachers in a number of traditionally conservative states (Arizona, Kentucky, Oklahoma, and West Virginia) for higher salaries. These efforts were fairly successful in upping salaries and some teachers took matters into their own hands by running for office and ousting incumbents. Yet since the money to pay for teacher salaries generally comes from other parts of the state budget given states’ hesitance to increase taxes (with Oklahoma being an exception), higher education becomes an appealing place to raid for revenue. Keep an eye out for whether future initiatives to increase K-12 teacher salaries affect state support for higher education.

(5) The PROSPER Act did not live long. In late 2017, House Republicans introduced the PROSPER Act as their long-overdue bill to reauthorize the federal Higher Education Act (here was my hot take at the time). But this ambitious bill to reshape federal policy in a conservative image soon hit roadblocks among fellow Republicans. Some of the concerns were evident in a 14-hour markup hearing in the House Education and the Workforce Committee, and the series of hearings that the Senate Health, Education, Labor, and Pensions Committee held in early 2018 barely even referenced PROSPER. Nothing moved beyond committee in the House or Senate in 2018, and the prospects for 2019 and 2020 are dim given discord among Republicans and the new Democratic leadership in the House Education and Labor Committee (change the stationery, folks!).

(4) Corporate-university partnerships continue to proliferate, creating both risks and opportunities. These partnerships have taken several forms in recent years, including some prominent 2018 examples. The first is online program management (OPM) companies, which basically run all of the non-academic parts of an academic program. (Purdue’s purchase of Kaplan and Grand Canyon’s nonprofit conversion both involved OPMs.) However, fewer details are available about OPMs than when institutions run their own programs—and ED’s Office of Inspector General will be investigating OPMs in 2019. The high-tech manufacturing company Foxconn will give up to $100 million to the University of Wisconsin-Madison to support the university’s engineering program amid criticisms of secrecy and lack of faculty oversight. Finally, Virginia Tech’s announcement of a new $1 billion campus in northern Virginia was a key factor behind Amazon’s decision to locate half of its new second headquarters in Crystal City.

(3) The University of Texas System introduces an important database of student outcomes. Efforts to track the outcomes of former students have generally taken one of two forms. A number of states (such as Colorado and Virginia) have great databases of students who stay within the state after leaving college, while the U.S. Department of Education’s College Scorecard captures students who received federal financial aid but can track them around the country. Both of these databases lose about 30% of all students, and the College Scorecard does not report data by major yet (perhaps in 2019?). This is what makes the University of Texas System’s new SEEK database so exciting. By partnering with the Census Bureau, the system can now provide major-level data for all campuses and students. Perhaps the U.S. Department of Education will catch up with this important consumer information tool in the coming years, but for now look to states for the most fascinating developments.

(2) Why would anyone want to be the president of a public university? Given how presidents have to deal with often-hostile governing boards and politicians while living in the spotlight during the 24-hour media cycle, it takes a special type of person to fill this job for more than a few years. (The cynical answer would be for the nice paycheck, but I would contend that pay has increased in part to compensate for the higher risk of getting fired.) The saga of Margaret Spellings, who announced her upcoming resignation after three years as president of the University of North Carolina, is a great example. She faced withering criticism from faculty members upon being appointed (likely in part because she is fairly conservative relative to most of higher education), but she was not conservative enough for the state’s legislature. It was also clear that she did not want anything to do with the Silent Sam statue situation in Chapel Hill, which may or may not be resolved by constructing a new building to house the relic. UNC will struggle to get candidates anywhere close to the caliber of the highly-qualified Spellings, and expect more public universities to have a hard time getting good candidates going forward.

(1) Two Obama-era policies survive for another year (at least on paper) after the Department of Education misses key deadlines. Much of the Trump administration's higher education policy efforts to this point have focused on undoing Obama-era regulatory actions. Two of the most prominent policies are borrower defense to repayment (affecting whether students can get loans forgiven if their college misrepresented facts to them) and gainful employment (which would have eventually tied federal financial aid eligibility for certain vocational programs to debt-to-earnings ratios). The Department of Education went through negotiated rulemaking sessions and then proposed new regulations to effectively repeal the current ones (see my comments to ED here and here). However, ED missed a key November 1 deadline that would have allowed it to repeal the regulations on July 1, 2019. This means that the Obama-era regulations will be on the books until July 1, 2020, even though ED would prefer not to enforce them. ED announced last week that it would implement the closed school discharge portion of borrower defense to repayment, but expect plenty of lawsuits from left-leaning advocacy groups in the coming year.

Honorable mentions: University of Maryland-Baltimore County basks in the glory of a March Madness victory and highlights its STEM programs, investigation reveals Russian trolls helped foster the Mizzou protests, local professor unveils a delightful paperweight on accountability in higher education, the Supreme Court’s Janus decision weakens unions at public universities, enrollments and public funding for higher education generally remain stable.

How to Respond to Rejection in Academia

There is an old saying in baseball that even the best hitters fail 70% of the time, which shows the difficulty of hitting a round ball with a round bat. But while achieving a .300 batting average in baseball is becoming harder than it has been in decades, most academics would be overjoyed by a 30% success rate across all of their endeavors. This high failure rate often comes as a surprise for new graduate students, who only see the successes of faculty members and think that they never get rejected. I tweeted about this earlier this week and was asked to say more about ways to model responding to rejection.

I feel like I am approaching expert status in rejection by now (even while developing a solid CV), and I am far from the only one. Doug Webber at Temple University put together an impressive CV of his failures, and here are some of mine:

  • I applied to about twenty PhD programs in economics straight out of college, and was only accepted by one of them (Wisconsin). I then promptly ended up on academic probation, got off of probation, failed the micro theory prelim twice, and was unceremoniously dismissed with a terminal master’s degree. Failing out of that program was the best thing that ever happened to me professionally, as many econ PhD programs are known for being brutal on students’ mental health (mine included). I then applied to the ed policy PhD program at Wisconsin and had a great experience there.
  • I applied for multiple dissertation fellowships and was rejected by all of them.
  • I applied to about 80 tenure-track jobs while finishing my dissertation. I never even heard back from about half of them and only had one flyout (which thankfully worked out). And I’m one of the small percentage of interested PhD students who got a tenure-track position!
  • My first eight external grant applications were all rejected.
  • Journals have rejected my submissions 39 times over the last six years using a number of methods. Quick desk rejections (in which the editor says submissions don’t meet the journal’s standards or are outside their area of focus) are always appreciated, as are timely (2-4 month) rejections with helpful feedback. But I have had papers rejected in far worse ways: revise and resubmits rejected after I made every requested change, papers rejected without feedback because reviewers never responded, and delayed (8-12 month) rejections with snarky or unhelpful comments.
  • Every early career award that I have been nominated for (or applied for) has ended with a rejection to this point. C’est la vie.

So how can established academics model how to respond to rejection for graduate students and junior scholars? I offer four suggestions.

(1) Be transparent about failures as well as successes. Doug’s failure CV is a great example of how academics can show the many potholes on the road to success. It is important for us to talk more about our failures (and not just in the form of snarky comments or tweets). There is an element of randomness in nearly every process in higher education (I have had mediocre articles get easily accepted, while better ones have struggled), and we need to do a better job of communicating that reality.

(2) Share the odds of success and how to learn from failures. The fact that I struck out on my first eight grant applications sounds terrible to almost any person new to the field of higher education. But being below the Mendoza line (a .200 batting average) is typical for many funding agencies, which often fund less than one in ten applicants. Rejected grant applications often do not come with feedback, which is frustrating. But getting rejected by a selective journal (conditional on getting past the editor’s desk and out for review) usually results in useful feedback that can result in an acceptance by the next journal. And since there is that element of randomness in acceptances, it is often worthwhile to send a paper to a journal that offers a low likelihood of publication. Sharing this information with rising scholars provides useful context into academic life.
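To make the arithmetic behind those odds concrete, here is a minimal sketch; the function name and the sample numbers are mine, not drawn from any particular funding agency:

```python
# Illustrative only: if each submission succeeds independently with
# probability p, the chance of at least one success in n tries is
# 1 - (1 - p)**n.

def prob_at_least_one(p: float, n: int) -> float:
    """Probability of one or more successes in n independent tries."""
    return 1 - (1 - p) ** n

# A funder accepting 1 in 10 applications (p = 0.10):
for n in (1, 5, 10):
    print(f"{n:2d} submissions -> {prob_at_least_one(0.10, n):.0%} chance of a hit")
```

Under these (admittedly simplified) independence assumptions, eight consecutive rejections happens about 43% of the time (0.9 raised to the eighth power), which is hardly evidence of a weak portfolio.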

(3) Be there to support colleagues and students during difficult times. Aside from teaching, academics often do much of their work in isolation. And rejections (particularly the first few) can be even more devastating in isolation. Part of mentoring new scholars should include being there to just listen while people vent about being rejected.

(4) Be considerate while rejecting people. For those of us in the position to reject a large percentage of people (search committee chairs, journal reviewers, and the like), it is important to be as compassionate as possible in the process. As a job applicant, I appreciated getting confirmation that I was out of the running for a position, even when it was already pretty clear that I was not the chosen candidate. However, HR policies at some campuses make that difficult or impossible. On the journal side, reviewers need to think about how to shape comments to the author(s) versus their confidential comments to the editor. It's okay to tell the editor that the paper falls far below the expectations for that journal or that the paper should have been desk rejected, but try to provide author(s) with at least some constructive feedback.

One final note: Even after being rejected dozens of times, the sting never fully goes away. I don't think it ever will, but as long as the rejection is reasonably considerate, I finally feel comfortable trying again without too much self-doubt. And that is important given that sometimes my efforts feel as futile as trying to hit an eephus pitch!

Announcing a New Data Collection Project on State Performance-Based Funding Policies

Performance-based funding (PBF) policies in higher education, in which states fund colleges in part based on student outcomes instead of enrollment measures or historical tradition, have spread rapidly across states in recent years. This push for greater accountability has resulted in more than half of all states currently using PBF to fund at least some colleges, with deep-blue California joining a diverse group of states by developing a PBF policy for its community colleges.

Academic researchers have flocked to the topic of PBF over the last decade and have produced dozens of studies looking at the effects of PBF both on a national level and for individual states. In general, this research has found modest effects of PBF, with some differences across states, sectors, and how long the policies have been in place. There have also been concerns about the potential unintended consequences of PBF on access for low-income and minority students, although new policies that provide bonuses to colleges that graduate historically underrepresented students seem promising for mitigating these issues.

In spite of the intense research and policy interest in PBF, relatively little is known about what is actually in these policies. States vary considerably in how much money is tied to student outcomes, which outcomes (such as retention and degree completion) are incentivized, and whether there are bonuses for serving low-income, minority, first-generation, rural, adult, or veteran students. Some states also give bonuses for STEM graduates, which is even more important to understand given this week’s landmark paper by Kevin Stange and colleagues documenting differences in the cost of providing an education across disciplines.

Most research has relied on binary indicators of whether a state has a PBF policy or an incentive to encourage equity, with some studies trying to get at the importance of the strength of PBF policies by looking at individual states. But researchers and advocacy organizations cannot even agree on whether certain states had PBF policies in certain years, and no research has tried to fully catalog the different strengths of policies (“dosage”) across states over time.
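To illustrate the difference between a binary indicator and a dosage measure, here is a hypothetical sketch of what a richer state-year policy record might look like. Every field name here is my own invention for illustration, not the schema of our actual dataset:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PBFPolicyYear:
    """One state-year observation in a hypothetical PBF dataset."""
    state: str
    year: int
    share_of_funding: float  # fraction of appropriations tied to outcomes ("dosage")
    outcomes: List[str] = field(default_factory=list)        # e.g., retention, completion
    equity_bonuses: List[str] = field(default_factory=list)  # e.g., low-income, adult

    @property
    def has_pbf(self) -> bool:
        # The binary indicator most prior research has relied on --
        # it discards everything except whether dosage is nonzero.
        return self.share_of_funding > 0

# Two states can share the same binary indicator while differing
# enormously in dosage (numbers are made up):
a = PBFPolicyYear("State A", 2015, 0.85, ["completion"], ["adult"])
b = PBFPolicyYear("State B", 2015, 0.02, ["completion"], [])
print(a.has_pbf == b.has_pbf)  # same binary indicator, very different policies
```

A binary flag would code both of these hypothetical states identically, even though one ties more than forty times as large a share of funding to outcomes as the other.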

Because collecting high-quality data on the nuances of PBF policies is a time-consuming endeavor, I was just about ready to walk away from studying PBF given my available resources. But last fall at the Association for the Study of Higher Education conference, two wonderful colleagues approached me with an idea to go out and collect the data. After a year of working with Justin Ortagus of the University of Florida and Kelly Rosinger of Pennsylvania State University—two tremendous assistant professors of higher education—we are pleased to announce that we have received a $204,528 grant from the William T. Grant Foundation to build a 20-year dataset containing detailed information about the characteristics of PBF policies and how much money is at stake.

Our dataset, which will eventually be made available to the public, will help us answer a range of policy-relevant questions about PBF. Some particularly important questions are whether dosage matters regarding student outcomes, whether different types of equity provisions are effective in reducing educational inequality, and whether colleges respond to PBF policies differently based on what share of their funding comes from the state. We are still seeking funding to do these analyses over the next several years, so we would love to talk with interested foundations about the next phases of our work.

To close, one thing that I tell often-skeptical audiences of institutional leaders and fellow faculty members is that PBF policies are not going away anytime soon and that many state policymakers will not give additional funding to higher education without at least a portion being directly tied to student outcomes. These policies are also rapidly changing, in part driven by some of the research over the last decade that was not as positive toward many early PBF systems. This dataset will allow us to examine which types of PBF systems can improve outcomes across all students, thus helping states improve their current PBF systems.

New Research on the Relationship between Nonresident Enrollment and In-State College Prices

Public colleges and universities in most states are under increased financial stress as they strain to compete with other institutions while state appropriations fail to keep up with increases in both inflation and student enrollment. As a result, universities have turned to other revenue sources to raise additional funds. One commonly targeted source is out-of-state students, particularly in Northeastern and Midwestern states with declining populations of recent high school graduates. But prior research has found that trying to enroll more out-of-state students can reduce the number of in-state students attending selective public universities, and this crowding-out effect particularly impacts minority and low-income students.

I have long been interested in studying how colleges use their revenue, so I began sketching out a paper looking at whether public universities appeared to use additional revenue from out-of-state students to improve affordability for in-state students. Since I am particularly interested in prices faced by students from lower-income families, I was also concerned that any potential increase in amenities driven by out-of-state students could actually make college less affordable for in-state students.

I started working on this project back in the spring of 2015 and enjoyed two and a half conference rejections (one paper submission was rejected into a poster presentation), two journal rejections, and a grant application rejection during the first two years. But after getting helpful feedback from the journal reviewers (unfortunately, most conference reviewers provide little feedback and most grant applications are rejected with no feedback), I made improvements and finally got the paper accepted for publication.

The resulting article, just published in Teachers College Record (and available for free for a limited time upon signing up as a visitor), addresses the following research questions:

(1) Do the listed cost of attendance and components such as tuition and fees and housing expenses for in-state students change when nonresident enrollment increases?

(2) Does the net price of attendance (both overall and by family income bracket) for in-state students change when nonresident enrollment increases?

(3) Do the above relationships differ by institutional selectivity?

After years of working on this paper and multiple iterations, I am pleased to report…null findings. (Seriously, though, I am glad that higher education journals seem to be willing to publish null findings, as long as the estimates are precisely located around zero without huge confidence intervals.) These findings suggest two things about the relationship between nonresident enrollment and prices faced by in-state students. First, it does not look like nonresident tuition revenue is being used to bring down in-state tuition prices. Second, it also does not appear that in-state students are paying more for room and board after more out-of-state students enroll, suggesting that any amenities demanded by wealthier out-of-state students may be modest in nature.
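For readers curious what "precisely located around zero" means in practice, here is a hedged sketch. The function name and the dollar figures are illustrative, not estimates from the paper: the idea is that a null is informative only when the 95% confidence interval both contains zero and is narrow enough to rule out substantively meaningful effects.

```python
def is_precise_null(coef: float, se: float, bound: float) -> bool:
    """True if the 95% CI contains zero and lies entirely within +/- bound."""
    lo, hi = coef - 1.96 * se, coef + 1.96 * se
    return lo <= 0 <= hi and lo >= -bound and hi <= bound

# A $15 estimated tuition change with a $40 standard error, judged
# against a $200 threshold for a "meaningful" effect:
print(is_precise_null(15, 40, 200))   # True: an informative null
print(is_precise_null(15, 400, 200))  # False: CI too wide to conclude much
```

The second case shows why huge confidence intervals are the problem: the point estimate is the same, but the interval is consistent with both large positive and large negative effects.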

I am always happy to take any questions on the article or to share a copy if there are issues accessing it. I am also happy to chat about the process of getting research published in academic journals, since that is often a long and winding road!

How Financial Responsibility Scores Do Not Affect Institutional Behaviors

One of the federal government’s longstanding accountability efforts in higher education is the financial responsibility score—a metric designed to reflect a private college’s financial stability. The federal government has an interest in making sure that only stable colleges receive federal funds, as taxpayers often end up footing at least part of the bill when colleges shut down and students may struggle to resume their education elsewhere. The financial responsibility score metric ranges from -1.0 to 3.0, with colleges scoring between 1.0 and 1.4 being placed under additional oversight and those scoring below 1.0 being required to post a letter of credit with the Department of Education.
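The oversight thresholds described above can be summarized in a few lines of code. This is a minimal sketch of those published cutoffs; the function name and category labels are mine:

```python
def oversight_category(score: float) -> str:
    """Map a financial responsibility score to its oversight consequence."""
    if not -1.0 <= score <= 3.0:
        raise ValueError("scores range from -1.0 to 3.0")
    if score >= 1.5:
        return "passing"
    if score >= 1.0:
        return "additional oversight"
    return "letter of credit required"

print(oversight_category(2.3))  # passing
print(oversight_category(1.2))  # additional oversight
print(oversight_category(0.4))  # letter of credit required
```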

Although these scores have been released to the public since the 2006-07 academic year and there was a great deal of dissatisfaction among private colleges regarding how the scores were calculated, there had been no prior academic research on the topic before I started my work in the spring of 2014. My question was simple: did receiving a poor financial responsibility score induce colleges to shift their financial priorities (either increasing revenues or decreasing expenditures) in an effort to avoid future sanctions?

But as is often the case in academic research, the road to a published article was far from smooth and direct. Getting rejected by two different journals took nearly two years, and it then took another two years for this paper to wind its way through the review, page proof, and publication process at the Journal of Education Finance. (In the meantime, I scratched my itch on the topic and put a stake in the ground by writing a few blog posts highlighting the data and teasing my findings.)

More than four and a half years after starting work on this project, I am thrilled to share that my paper, "Do Financial Responsibility Scores Affect Institutional Behaviors?" is a part of the most recent issue of the Journal of Education Finance. I examined financial responsibility score data from 2006-07 to 2013-14 in this paper, although I tried to get data going further back because these scores have been calculated since at least 1996. I filed a Freedom of Information Act request for the data back in 2014, and my appeal was denied in 2017 on the grounds that the request to receive data (that already existed in some format!) was "too burdensome and expensive." By that point, the paper had already been accepted at JEF, but I am obviously still a little annoyed with how that process went.

Anyway, I failed to find any clear evidence that private nonprofit or for-profit colleges changed their fiscal priorities after receiving an unfavorable financial responsibility score. To some extent, this result made sense for private nonprofit colleges: they tend to move fairly slowly, and many of their costs (such as facilities and tenured faculty) are sticky. But for for-profit colleges, which generally tend to be fairly agile critters, the null findings were more surprising. There is certainly more work to do in this area (particularly given the changes in higher education that have occurred over the last five years), so I encourage more researchers to delve into this topic.

To aspiring researchers and those who rely on research in their jobs—I hope this blog post provides some insights into the scholarly publication process and all of the factors that can slow down the production of research. I started this paper during my first year on faculty and it finally came out during my tenure review year (which is okay because accepted papers still count even if they are not yet in print). Many papers move more quickly than this one, but it is worth highlighting that research is a pursuit for people with a fair amount of patience.