Discovering Issues with IPEDS Completions Data

The U.S. Department of Education’s Integrated Postsecondary Education Data System (IPEDS) is an invaluable resource in the field of higher education. While it is the foundation of much of my research, the data are self-reported by colleges and occasionally include errors or implausible values. A recent Wall Street Journal analysis of the finances of flagship public universities illustrates the problem: when the Journal’s reporters started asking questions, colleges often said that their IPEDS submissions were incorrect. That’s not good.

I received grants from Arnold Ventures over the summer to fund two new projects. One of them is examining the growth in master’s degree programs over time and the implications for students and taxpayers. (More on the other project sometime soon.) This led me to work with my sharp graduate research assistant Faith Barrett to dive into IPEDS program completions data.

As we worked to get the data ready for analysis, we noticed a surprisingly large number of master’s programs apparently being discontinued. Colleges can report zero graduates in a given year if a program still exists, so we assumed that programs with no data (instead of a reported zero) were discontinued. But when we looked at the years immediately following these apparent discontinuations, graduates showed up again. This suggests that a gap in the data between years with reported graduates usually reflects either a data entry error (failing to enter a positive number of graduates) or a failure to report zero graduates for an active program, rather than a true program discontinuation. This is not great news for IPEDS data quality.
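To make the pattern concrete, here is a minimal sketch in pandas of the kind of check involved, using toy data that mirror the Rollins College example discussed below. The column names (unitid, cipcode, year, completions) are hypothetical stand-ins for the IPEDS fields, and this is an illustration rather than our actual cleaning code.

```python
import pandas as pd

# Hypothetical long-format completions data: one row per program-year in which
# graduates were reported. Note that 2015 is simply absent rather than reported as zero.
df = pd.DataFrame({
    "unitid":      [1111] * 7,
    "cipcode":     ["51.15"] * 7,
    "year":        [2013, 2014, 2016, 2017, 2018, 2019, 2020],
    "completions": [25, 24, 30, 26, 27, 26, 22],
})

flags = []
for (unitid, cip), grp in df.groupby(["unitid", "cipcode"]):
    years = set(grp["year"])
    first, last = min(years), max(years)
    # A missing year sandwiched between years with reported graduates is an
    # apparent discontinuation that later "un-discontinues" -- a likely false one.
    for y in range(first, last + 1):
        if y not in years:
            flags.append({"unitid": unitid, "cipcode": cip, "year": y})

likely_false = pd.DataFrame(flags)
print(likely_false)  # one row per gap year with graduates reported afterward
```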

We then took this a step further by attempting to find evidence that programs that seem to disappear and reappear actually still exist. We used the Wayback Machine (https://archive.org/web/) to look at institutional websites by year to see whether the apparently discontinued program appeared to be active in years without graduates. We found consistent evidence from websites that programs continued to exist during their hiatus in IPEDS data. To provide an example, the Mental and Social Health Services and Allied Professions master’s program at Rollins College did not report data for 2015 after reporting 25 graduates in 2013 and 24 graduates in 2014. They then reported 30 graduates in 2016, 26 graduates in 2017, 27 graduates in 2018, 26 graduates in 2019, and 22 graduates in 2020. Additionally, they had active program websites throughout the period, providing more evidence of a data error.

The table below shows the number of master’s programs (defined at the 4-digit Classification of Instructional Programs level) for each year between 2005 and 2020 after we dropped all programs that never reported any graduates during this period. The “likely true discontinuations” column consists of programs that never again reported graduates to IPEDS following a year of missing data. The “likely false discontinuations” column consists of programs that reported graduates to IPEDS in subsequent years, meaning that most of these gaps are likely institutional reporting errors. These likely false discontinuations made up 31% of all discontinuations during the period, suggesting that data quality is not a trivial issue.

Number of active programs and discontinuations by year, 2005-2020.

Year    Number of programs    Likely true discontinuations    Likely false discontinuations
2005    20,679                195                             347
2006    21,167                213                             568
2007    21,326                567                             445
2008    21,852                436                             257
2009    22,214                861                             352
2010    22,449                716                             357
2011    22,816                634                             288
2012    23,640                302                             121
2013    24,148                368                             102
2014    24,766                311                             89
2015    25,170                410                             97
2016    25,808                361                             66
2017    26,335                344                             35
2018    26,804                384                             41
2019    27,572                581                             213
2020    27,883                742                             23

For the purposes of our analyses, we will recode years of missing data for these likely false discontinuations to have zero graduates. This likely understates the number of graduates for some of these programs, but this conservative approach at least fixes issues with programs disappearing and reappearing when they should not be. Stay tuned for more fun findings from this project!
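Here is a sketch of what that recode looks like under the same hypothetical column names as above (again an illustration, not our production code): reindex each program onto every year between its first and last reported year and fill the gaps with zero.

```python
import pandas as pd

# Same hypothetical data: the 2015 gap is a likely false discontinuation.
df = pd.DataFrame({
    "unitid":      [1111] * 7,
    "cipcode":     ["51.15"] * 7,
    "year":        [2013, 2014, 2016, 2017, 2018, 2019, 2020],
    "completions": [25, 24, 30, 26, 27, 26, 22],
})

def fill_gap_years(grp: pd.DataFrame) -> pd.DataFrame:
    # Reindex onto every year between the first and last reported year, so
    # within-span gaps become explicit rows with zero completions.
    full_years = pd.RangeIndex(int(grp["year"].min()), int(grp["year"].max()) + 1, name="year")
    out = grp.set_index("year").reindex(full_years).reset_index()
    out["completions"] = out["completions"].fillna(0).astype(int)
    out[["unitid", "cipcode"]] = out[["unitid", "cipcode"]].ffill()
    return out

recoded = (df.groupby(["unitid", "cipcode"], group_keys=False)
             .apply(fill_gap_years)
             .reset_index(drop=True))
print(recoded)  # 2015 now appears with zero completions
```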

There are two broader takeaways from this post. First, researchers relying on program-level completions data should carefully check for likely data errors such as the ones we found and figure out how best to address them in their own analyses. Second, this is yet another reminder that IPEDS data are not audited for quality and contain quite a few errors. As IPEDS data continue to be used to make decisions for practice and policy, it is essential to improve the quality of the data.

Changing Contributions to the Peer Review Process

One of the joys and challenges of being an academic is helping to shape the future of scholarship through the peer review process. Much has been written about the issues with academic peer review, most notably the limited incentives to spend time reviewing submissions and the increasing length of time between when an academic submits a paper to a journal and when they finally receive feedback. Heck, I wrote about this issue five years ago when The Review of Higher Education stopped accepting new submissions for about a year and a half due to this imbalance.

Throughout my ten years as a tenure-line faculty member, what I give to and take from the peer review system has changed considerably. When I was first starting on the tenure track, I was reliant on relatively quick reviews of my own submissions and was receiving 5-10 requests to review each year from legitimate journals. And since I keep a spreadsheet of the details of each journal submission, I can see that I received decisions on many articles within 2-4 months. I have never missed a deadline (typically around 30 days) to submit my thoughts as a reviewer, and I have tried to accept as many requests as possible.

The peer review system changed considerably in the late 2010s. As I got closer to tenure, I received more requests to review (25-30 legitimate requests per year) and accepted them all because I was in a position to do so. Decisions on my article submissions moved more toward the 4-6 month range, which was frustrating but not a big deal for me because I figured that I had already met the standards for tenure and promotion. My philosophy at that point became to be a giver to the field because of the privileged position I was in. I needed to review at least 2-3 times as many submissions as I submitted myself, both to account for each submission requiring multiple reviewers and to spare grad students and brand-new faculty from having to review.

Going through the tenure and promotion process exposed me to another crucial kind of reviewing: external reviews of tenure applications. Most research-focused universities expect somewhere between three and eight external letters speaking to the quality of an applicant’s scholarship. I am grateful to the anonymous reviewers who accepted my department chair’s invitation to write, and now a part of my job most years as a department head is soliciting letters from some of the most accomplished (and busiest) scholars in the world.

All of this is to say that being a full professor in a field that loses a lot of full professors to full-time administrative positions (the joy of specializing in higher education!) means that my priorities for external service have changed. I am focusing my reviewing time and energy in two areas that are particularly well suited for full professors at the expense of accepting the majority of journal review requests that I receive.

The first is that I just started as an associate editor at Research in Higher Education and am thrilled to join a great leadership team after being on the editorial board for several years. I took this position because I am a big fan of the journal and I believe that we can work to improve the author experience in two key areas: keeping authors updated on the status of their submissions and quickly desk rejecting manuscripts that are outside of the scope of the journal. Researchers, please send us your best higher education manuscripts. And reviewers, please say yes if at all possible.

The second is to continue trying to accept as many requests as possible for reviewing faculty members for tenure and/or promotion. I am doing 6-8 reviews per year at this point, and it is a sizable task to review tenure packets and relevant departmental, college, and university standards. But as a department head, I am used to doing faculty evaluations and rather enjoy reading through different bylaws. It is an incredible honor to review great faculty from around the country, and it is a job that I take seriously. (Plus, as someone who solicits letters from colleagues, a little karma never hurts!)

As I prepare to enter my second decade as a faculty member, I wanted to share my thoughts about how my role has changed and will continue to change. My apologies to my fellow associate editors and editors at other journals (I will complete my term on the editorial board at The Review of Higher Education and continue to be active there), but I will say no to many requests that I would have gladly accepted a few years ago. I hope you all understand as I rebalance my scholarly portfolio to try to help the field as much as possible.

Options for Replacing Standardized Test Scores for Researchers and Rankers

It’s the second Monday in September, so it’s time for the annual college rankings season to conclude with U.S. News & World Report’s entry. The top institutions in the rankings change little from year to year, but colleges pay lots of attention to statistically insignificant movements. Plenty has been written on those points, and plenty of digital ink has also been spilled on U.S. News’s decision to keep standardized test scores in their rankings this year.

In this blog post, I want to look a few years farther down the line. Colleges were already starting to adopt test-optional policies prior to March 2020, but the pandemic accelerated that trend. A sizable share of four-year colleges have now taken a hiatus from requiring ACT or SAT scores, and many may never go back. This means that people who have used test scores in their work, whether as academic researchers or college rankings methodologists, will have to think about how to proceed.

The best metrics to replace test scores depend in part on the goals of the work. Most academic researchers use test scores as a control variable in regression models, either as a proxy for selectivity or as a way to understand the incoming academic performance of students. High school GPA is an appealing measure, but it is not available in the Integrated Postsecondary Education Data System (IPEDS) and also varies considerably across high schools. Admit rates and yield rates are available in IPEDS and capture some aspects of selectivity and of student preferences to attend particular colleges. But admit rates can be gamed by encouraging as many students as possible to apply, even those with no real interest in the college, and yield rates vary considerably based on the number of colleges to which students apply.

Other potential metrics are likely not nuanced enough to capture smaller variations across colleges. Barron’s Profiles of American Colleges has a helpful admission competitiveness rating (and as a plus, that thick book held up my laptop for hundreds of hours of Zoom calls during the pandemic). But there are not that many categories and they change relatively little over time. Carnegie classifications focus more on the research side of things (a key goal for some colleges), but again are not as nuanced and are only updated every few years.

If the goal is to get at institutional prestige, then U.S. News’s reputational survey could be a useful resource. The challenge there is that colleges have a history of either not caring about filling out the survey or trying to strategically game the results by ranking themselves far higher than their competitors. But if a researcher wants to get at prestige and is willing to compile a dataset of peer assessment scores over time, it’s not a bad idea to consider.

Finally, controlling for socioeconomic and racial/ethnic diversity is also an option given the correlations between test scores and these factors. I was more skeptical of these correlations until moving to New Jersey and seeing all of the standardized test tutors and independent college counselors in one of the wealthiest parts of the country.

As the longtime data editor for the Washington Monthly rankings, it’s time for me to start thinking about changes to the 2022 rankings. The 2021 rankings continued to use test scores as a control for predicting student outcomes, and I already used admit rates and demographic data from IPEDS as controls. Any suggestions for publicly available data to replace test scores in the regressions would be greatly appreciated.
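For what it’s worth, here is a minimal sketch of the kind of specification swap I have in mind, using statsmodels and simulated data with hypothetical IPEDS-style variable names (grad_rate, admit_rate, pct_pell, pct_underrep). This is an illustration, not the actual Washington Monthly model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Simulated institution-level data standing in for IPEDS variables.
colleges = pd.DataFrame({
    "grad_rate":    rng.uniform(0.2, 0.95, n),
    "admit_rate":   rng.uniform(0.05, 1.0, n),
    "pct_pell":     rng.uniform(0.1, 0.7, n),
    "pct_underrep": rng.uniform(0.05, 0.6, n),
})

# Predict graduation rates without test scores, using selectivity and
# demographic controls that are available for (nearly) every institution.
model = smf.ols("grad_rate ~ admit_rate + pct_pell + pct_underrep", data=colleges).fit()
colleges["predicted_grad_rate"] = model.fittedvalues
colleges["performance"] = colleges["grad_rate"] - colleges["predicted_grad_rate"]
print(model.params)
```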

New Working Paper on the Effects of Gainful Employment Regulations

As debates regarding Higher Education Act reauthorization continue in Washington, one of the key sticking points between Democrats and Republicans is the issue of accountability for the for-profit sector of higher education. Democrats typically want to have tighter for-profit accountability measures, while Republicans either want to loosen regulations or at the very least hold all colleges to the same standards where appropriate.

The case of federal gainful employment (GE) regulations is a great example of partisan differences regarding for-profit accountability. The Department of Education spent much of its time during the Obama administration trying to implement regulations that would have stripped away aid from programs (mainly at for-profit colleges) that could not pass debt-to-earnings ratios. They finally released the first year of data in January 2017—in the final weeks of the Obama administration. The Trump administration then set about undoing the regulations and finally did so earlier this year. (For those who like reading the Federal Register, here is a link to all of the relevant documents.)

There has been quite a bit of talk in the higher ed policy world that GE led colleges to close low-performing programs, and Harvard’s decision to close its poor-performing graduate certificate program in theater right after the data dropped received a lot of attention. But to this point, there has been no rigorous empirical research examining whether the GE regulations changed colleges’ behaviors.

Until now. Together with my sharp PhD student Zhuoyao Liu, I set out to examine whether the owners of for-profit colleges closed lousy programs or colleges after receiving information about their performance.

You can download our working paper, which we are presenting at the Association for the Study of Higher Education conference this week, here.

For-profit colleges can respond more quickly to new information than nonprofit colleges due to a more streamlined governance process and a lack of annoying tenured faculty, and they are also more motivated to make changes if they expect to lose money going forward. It is worth noting that no college should have expected to lose federal funding due to poor GE performance since the Trump administration was on its way in when the dataset was released.

Data collection for this project took a while. For 4,998 undergraduate programs at 1,462 for-profit colleges, we collected information on whether each college was still open using the U.S. Department of Education’s closed school database. Determining whether individual programs were still open took a lot more work. We checked college websites and Facebook pages (for mom-and-pop operations) and used the Wayback Machine to determine whether a program appeared to still be open as of February 2019.

After doing that, we used a regression discontinuity research design to look at whether passing GE outright (relative to not passing) or being in the oversight zone (versus failing) affected the likelihood of college or program closures. While the results for the zone versus fail analyses were not consistently significant across all of our bandwidth and control variable specifications, there were some interesting findings for the passing versus not passing comparisons. Notably, programs that passed GE were much less likely to close than those that did not pass. This suggests that for-profit colleges, possibly encouraged by accrediting agencies and/or state authorizing agencies, closed lower-performing programs and focused their resources on their best-performing programs.
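For readers curious about the mechanics, here is a minimal sketch of a local linear regression discontinuity of this flavor on simulated data. The variable names (centered_de for the debt-to-earnings ratio centered at the passing cutoff, closed for the closure indicator) and the bandwidth are hypothetical; this is an illustration rather than the exact specification in the working paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000

# Simulated program-level data: the running variable is the debt-to-earnings
# ratio centered at the passing cutoff (negative values passed GE).
centered_de = rng.uniform(-0.06, 0.06, n)
passed = (centered_de < 0).astype(int)
closed = rng.binomial(1, 0.25 - 0.10 * passed + 0.5 * np.abs(centered_de))
df = pd.DataFrame({"centered_de": centered_de, "passed": passed, "closed": closed})

# Local linear RD: keep observations within a bandwidth of the cutoff and let
# the slope in the running variable differ on each side of it.
bandwidth = 0.03
local = df[df["centered_de"].abs() <= bandwidth]
rd = smf.ols("closed ~ passed + centered_de + passed:centered_de",
             data=local).fit(cov_type="HC1")
print(rd.params["passed"])  # estimated jump in closure probability at the cutoff
```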

We are putting this paper out as a working paper as a first form of peer review before undergoing the formal peer review process at a scholarly journal. We welcome all of your comments and hope that you find this paper useful—especially as the Department of Education gets ready to release program-level earnings data in the near future.

Some Updates on the State Performance Funding Data Project

Last December, I publicly announced a new project with Justin Ortagus of the University of Florida and Kelly Rosinger of Pennsylvania State University that would collect data on the details of states’ performance-based funding (PBF) systems. We have spent the last nine months diving even deeper into policy documents and obscure corners of the Internet as well as talking with state higher education officials to build our dataset. Now is a good chance to come up for air for a few minutes and provide an update on our project and our status going forward.

First, I’m happy to share that data collection is moving along pretty well. We gave a presentation at the State Higher Education Executive Officers Association’s annual policy conference in Boston in early August and were able to make some great connections with people from more states at the conference. We are getting close to having a solid first draft of a 20-plus year dataset on state-level policies, and we are working hard to build institution-level datasets for each state. As we discuss in the slide deck, our painstaking data collection process is leading us to question some of the prior typologies of performance funding systems. We will have more to share on that in the coming months, but going back to get data on early PBF systems is quite illuminating.

Second, our initial announcement about the project included a one-year, $204,528 grant from the William T. Grant Foundation to fund our data collection efforts. We recently received $373,590 in funding from Arnold Ventures and the Joyce Foundation to extend the project through mid-2021. This will allow us to build a project website, analyze the data, and disseminate results to policymakers and the public.

Finally, we have learned an incredible amount about data collection over the last couple of years working together as a team. (And I couldn’t ask for better colleagues!) One thing that we learned is that there is little guidance to researchers on how to collect the types of detailed data needed to provide useful information to the field. We decided to write up a how-to guide on data collection and analyses, and I’m pleased to share our new article on the topic in AERA Open. In this article (which is fully open access), we share some tips and tricks for collecting data (the Wayback Machine might as well be a member of our research team at this point), as well as how to do difference-in-differences analyses with continuous treatment variables. Hopefully, this article will encourage other researchers to launch similar data collection efforts while helping them avoid some of the missteps that we made early in our project.
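As a flavor of the approach the article describes, here is a minimal sketch of a two-way fixed effects difference-in-differences model with a continuous treatment variable on simulated data. The names (dose for the share of funding tied to performance, outcome for a state-year outcome) are hypothetical, and the AERA Open article is the place to go for the real details.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for state in range(20):
    adoption_year = rng.integers(2004, 2016)
    for year in range(2000, 2020):
        # Continuous treatment: zero before adoption, a funding share afterward.
        dose = rng.uniform(0.05, 0.4) if year >= adoption_year else 0.0
        outcome = 50 + 2 * state + 0.3 * (year - 2000) + 5 * dose + rng.normal(0, 2)
        rows.append({"state": state, "year": year, "dose": dose, "outcome": outcome})
panel = pd.DataFrame(rows)

# Two-way fixed effects DiD with a continuous treatment intensity,
# clustering standard errors by state.
twfe = smf.ols("outcome ~ dose + C(state) + C(year)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["state"]})
print(twfe.params["dose"])  # estimated effect per unit of treatment intensity
```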

Stay tuned for future updates on our project, as we will have some exciting new research to share throughout the next few years!

Trends in For-Profit Colleges’ Reliance on Federal Funds

One of the many issues currently derailing bipartisan agreement on federal Higher Education Act reauthorization is how to treat for-profit colleges. Democrats and their ideologically aligned interest groups, such as Elizabeth Warren and the American Federation of Teachers, have called on Congress to cut off all federal funds to for-profit colleges, a position that few publicly took before this year. Meanwhile, Republicans have generally pushed for all colleges to be held to the same accountability standards, as evidenced by the Department of Education’s recent decision to rescind the Obama-era gainful employment regulations that primarily focused on for-profit colleges. (Thankfully, program-level debt-to-earnings data, which were used to calculate gainful employment metrics, will be available for all programs later this year.)

I am spending quite a bit of time thinking about gainful employment right now as I work on a paper with one of my graduate students that examines whether programs at for-profit colleges that failed the gainful employment metrics shut down at higher rates than similar programs that passed. Look for a draft of this paper to be out later this year, and I welcome feedback from the field as soon as we have something that is ready to share.

But while I was putting together the dataset for that paper, I realized that new data on the 90/10 rule came out with basically no attention last December. (And this is how blog posts are born, folks!) This rule requires for-profit colleges to get at least 10% of their revenue from sources other than federal Title IV financial aid (veterans’ benefits count toward the non-Title IV side). Democrats who are not calling for the end of federal student aid to for-profits are trying to change 90/10 to 85/15 and to count veterans’ benefits with the rest of federal aid, while Republicans are trying to eliminate the rule entirely. (For what it’s worth, here are my thoughts about a potential compromise.)
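As a quick illustration of the arithmetic (a simplified sketch, not the official Federal Student Aid calculation, which involves additional revenue adjustments), the rule boils down to a ratio:

```python
def title_iv_share(title_iv_revenue: float, other_revenue: float) -> float:
    """Share of revenue from Title IV aid; must be at or below 0.90 under the 90/10 rule."""
    return title_iv_revenue / (title_iv_revenue + other_revenue)

# A hypothetical college with $9.2 million in Title IV revenue and $0.8 million
# from other sources (including, under current law, veterans' benefits).
share = title_iv_share(9_200_000, 800_000)
print(f"{share:.1%}:", "fails 90/10" if share > 0.90 else "passes 90/10")
```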

With the release of the newest data (covering fiscal years ending in the 2016-17 award year), there are now ten years of 90/10 rule data available on Federal Student Aid’s website. I have written in the past about how much for-profit colleges rely on federal funds, and this post extends the dataset from the 2007-08 through the 2016-17 award years. I limited the sample to colleges located in the 50 states and Washington, DC, and to the 965 colleges that reported data in all ten years for which data have been publicly released. The general trends in reliance on Title IV revenues are similar when looking at the full sample, which ranges from 1,712 to 1,999 colleges across the ten years.

The graphic below shows how much the median college in the sample relied on Title IV federal financial aid revenues in each of the ten years of available data. The typical institution’s share of revenue coming from federal financial aid increased sharply from 63.2% in 2007-08 to 73.6% in 2009-10. At least part of this increase is attributable to two factors: the Great Recession making more students eligible for need-based financial aid (and encouraging an increase in college enrollment) and the increased generosity of the Pell Grant program. Title IV reliance peaked at 76.0% in 2011-12 and has declined in each of the five most recent years, reaching 71.5% in 2016-17.

Award year    Median share of revenue from Title IV (%)
2007-08       63.2
2008-09       68.3
2009-10       73.6
2010-11       74.0
2011-12       76.0
2012-13       75.5
2013-14       74.6
2014-15       73.2
2015-16       72.5
2016-17       71.5

Number of colleges: 965

I then looked at reliance on Title IV aid by a college’s total revenues in the 2016-17 award year, dividing colleges into less than $1 million (n=318), $1 million-$10 million (n=506), $10 million-$100 million (n=122), and more than $100 million (n=19). The next graphic highlights that the groups all exhibited similar patterns of change over the last decade. The smallest colleges tended to rely on Title IV funds the least, while colleges with revenue of between $10 million and $100 million in 2016-17 had the highest shares of funds coming from federal financial aid. However, the differences among the groups were less than five percentage points from 2009-10 forward.
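For anyone replicating this, the grouping step is straightforward in pandas. Here is a sketch with simulated data and hypothetical column names (total_revenue, title_iv_share) standing in for the Federal Student Aid spreadsheet fields:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 965

# Simulated college-level data standing in for the 90/10 spreadsheet fields.
colleges = pd.DataFrame({
    "total_revenue":  rng.lognormal(mean=15, sigma=1.5, size=n),
    "title_iv_share": rng.uniform(0.4, 0.9, size=n),
})

# Bin colleges by total revenue and compare median Title IV reliance by group.
bins = [0, 1e6, 1e7, 1e8, np.inf]
labels = ["< $1M", "$1M-$10M", "$10M-$100M", "> $100M"]
colleges["revenue_group"] = pd.cut(colleges["total_revenue"], bins=bins, labels=labels)
print(colleges.groupby("revenue_group", observed=True)["title_iv_share"].median())
```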

For those interested in diving deeper into the data, I highly recommend downloading the source spreadsheets from Federal Student Aid along with the explanations for colleges that have exceeded the 90% threshold. I have also uploaded an Excel spreadsheet of the 965 colleges with data in each of the ten years examined above.

How to Maintain Research Productivity

This summer is my first summer after receiving tenure at Seton Hall. While tenure and promotion to associate professor officially do not kick in until the start of the next academic year in August, there have already been some changes to my job responsibilities. The most notable change is that I have taken over as the director of the higher education graduate programs at Seton Hall, which means taking on a heaping helping of administrative work that is needed to make things run smoothly. While this work does come with a teaching reduction during the academic year, it’s a year-round job that takes a hefty bite out of my schedule. (And yes, professors do work—which is often unpaid—during the summer!)

Over the past few years, a few other factors have sharply reduced the amount of time I have available to work on research. Because I teach in a doctoral program, I am asked to chair more and more dissertation committees as I gain experience. I also spend quite a bit of time on the road giving talks and attending meetings on higher education policy issues across the country, which is a great opportunity to catch up on reading dissertations in transit but makes it hard to write. These demands have really hit hard over the last few months, which is why blog posts have been relatively few and far between this year.

I had the chance to participate in a panel discussion through Seton Hall’s Center for Faculty Development last academic year on the topic of maintaining research productivity. I summarize some of my key points below, and people who are interested can listen to the entire podcast. Hopefully, some of these tips are especially useful for new faculty members who are beginning the exciting transition into a tenure-track position and often face more demands on their time than they faced in the past.

(1) Take care of yourself. One challenge of being a faculty member is that an unusually large proportion of our time is unstructured. Even for colleagues who teach three or four classes a semester (I teach two), direct teaching and office hour obligations may only be 20 hours per week. But the amount of work to do is seemingly infinite, resulting in pressures to work absurd hours. Set a reasonable bound on the number of hours that you are willing to work each week and stick to it the best that you can. Also make sure to have some hobbies to get away from the computer. I enjoy running, gardening, and cooking—as demonstrated by these homemade pizzas from last weekend.

(2) Keep your time allocation in mind. In addition to not working too many hours each week, it is important to spend time on what is eventually rewarded. If your annual review or tenure/promotion guidelines specify that your evaluation is based 40% on research, 40% on teaching, and 20% on service, it is a problem to be spending 25 hours each week on teaching. Talk with experienced faculty members about what you can do to improve your teaching efficiency. If efficiency isn’t the issue, it’s time to talk with trusted colleagues about what can be done (if anything) to protect your research time. I do my best to block off two days each week for research during the academic year, although that does get tough with travel, conference calls, and interviews.

Another helpful hint is structuring assignment due dates so you don’t get overwhelmed. I usually have a conference to attend during the middle of the semester, so I schedule the due date for midterm papers to be right before the trip. That way, I can read papers on the train or plane (since I’m not good at writing away from my trusted home office).

(3) Guard your most productive writing time. Most faculty members that I talk with have a much harder time getting into a research mindset than getting into a teaching or service mindset. This means that for many people, their writing time needs to be the time of day in which they are at their sharpest. When you teach and meet with students is often outside your control, but when you answer e-mails and prepare for classes typically is up to you. It’s hard enough to write, so blocking off several of your most productive hours each week to write is a must when tenure and promotion depend on it. Conference calls and nonessential meetings can fit nicely into the rest of your week.

(4) Collaborations can be awesome. (Caveat: Make sure your discipline/institution rewards collaborative research first. Most do, but some don’t.) In the tenure and promotion process, it is crucial for faculty members to be able to demonstrate their own research agenda and contribution to their field of study. But strategically using collaborations in addition to sole-authored projects can be a wonderful way to maintain research productivity and stay excited about your work. I have been fortunate to work with a number of great collaborators over the last few years, and just had a great time last week going out to Penn State to meet with my colleagues on a fun research project on state performance funding policies. These collaborations motivate me to keep working on new projects!

Colleagues, I would love to hear your thoughts about how you keep your research agenda moving forward amid a host of other demands. Either comment below or send me a note; I would love to do a follow-up post with more suggestions!

Some Thoughts on Using Pell Enrollment for Accountability

It is relatively rare for an academic paper to both dominate the headlines in the education media and be covered by mainstream outlets, but a new paper by economists Caroline Hoxby and Sarah Turner did exactly that. The paper, benignly titled “Measuring Opportunity in U.S. Higher Education” (technical and accessible versions) raised two major concerns with using the number or percentage of students receiving federal Pell Grants for accountability purposes:

(1) Because states have different income distributions, it is far easier for universities in some states to enroll a higher share of Pell recipients than others. For example, Wisconsin has a much lower share of lower-income adults than does California, which could help explain why California universities have a higher percentage of students receiving Pell Grants than do Wisconsin universities.

(2) At least a small number of selective colleges appear to be gaming the Pell eligibility threshold by enrolling far more students who barely receive Pell Grants than those who have significant financial need but barely do not qualify. Here is the awesome graph that Catherine Rampell made in her Washington Post article summarizing the paper:

[Figure: graph from Catherine Rampell’s Washington Post article summarizing Hoxby and Turner]

As someone who writes about accountability and social mobility while also pulling together Washington Monthly’s college rankings (all opinions here are my own, of course), I have a few thoughts inspired by the paper. Here goes!

(1) Most colleges likely aren’t gaming the number of Pell recipients in the way that some elite colleges appear to be doing. As this Twitter thread chock-full of information from great researchers discusses, there is no evidence nationally that colleges are manipulating enrollment right around the Pell eligibility cutoff. Since most colleges are broad-access and/or are trying to simply meet their enrollment targets, it follows that they are less concerned with maximizing their Pell enrollment share (which is likely high already).

(2) How are elite colleges manipulating Pell enrollment? This could be happening in one or more of three ways. First, if these colleges are known for generous aid to Pell recipients, more students just on the edge of Pell eligibility may choose to apply. Second, colleges could be explicitly recruiting in areas likely to have larger shares of Pell recipients who are close to the eligibility threshold. Finally, colleges could make admissions and/or financial aid decisions based on Pell eligibility. It would be ideal to see data on each step of the process to better figure out what is going on.

(3) What other metrics can currently be used to measure social mobility in addition to Pell enrollment? Three other metrics currently jump out as possibilities. The first is enrollment by family income bracket (such as below $30,000 or $30,001-$48,000), which is collected for first-time, full-time, in-state students in IPEDS. It suffers from the same manipulation issues around the cutoffs, though. The second is first-generation status, which the College Scorecard collects for FAFSA filers. The third is race/ethnicity, which tends to be correlated with the previous two measures but is likely a political nonstarter in a number of states (while being a requirement in others).

(4) How can percent Pell still be used? The first finding of Hoxby and Turner’s work is far more important than the second for nationwide analyses (within states, it may be worth looking at regional differences in income, too). The Washington Monthly rankings use both the percentage of Pell recipients and an actual versus predicted Pell enrollment measure (controlling for ACT/SAT scores and the percentage of students admitted). I plan to play around with ways to take a state’s income distribution into account to see how this changes the predicted Pell enrollments (see the sketch after this list) and will report back on my findings in a future blog post.

(5) How can social mobility be measured better? States can dive much deeper into social mobility than the federal government can thanks to their detailed student-level datasets. This allows researchers to create sliding scales of social mobility or to use something like median household income instead of just percent Pell. It would be great to have a national measure of the percentage of students with a zero expected family contribution (the neediest students), and this would be pretty easy to add to IPEDS as a new measure.
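Returning to point (4), here is a sketch of how a state income measure might enter the predicted-Pell regression, using simulated data. The variable names (pct_pell, sat_math_75, admit_rate, state_share_low_income) are hypothetical, and this is not the current Washington Monthly specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 800

# Simulated institution-level data with a state income-distribution control.
df = pd.DataFrame({
    "pct_pell":               rng.uniform(0.05, 0.6, n),
    "sat_math_75":            rng.uniform(450, 790, n),
    "admit_rate":             rng.uniform(0.05, 1.0, n),
    "state_share_low_income": rng.uniform(0.15, 0.45, n),
})

# Predicted Pell enrollment given selectivity and the state's income distribution;
# the residual is the actual-versus-predicted performance measure.
model = smf.ols("pct_pell ~ sat_math_75 + admit_rate + state_share_low_income",
                data=df).fit()
df["pell_vs_predicted"] = df["pct_pell"] - model.fittedvalues
print(model.params)
```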

I would like to close this post by thanking Hoxby and Turner for provoking important conversations on data, social mobility, and accountability. I look forward to seeing their next paper in this area!

Why Negotiated Rulemaking Committees Should Include a Researcher

The U.S. Department of Education officially unveiled on Monday the membership committees for its spring 2019 negotiated rulemaking sessions on accreditation and innovation. This incredibly ambitious rulemaking effort includes subcommittees on the TEACH Grant, distance education, and faith-based institutions and has wide-ranging implications for nearly all of American higher education. If all negotiators do not reach consensus on a given topic (the most likely outcome), ED can write regulations as it sees fit. (For more on the process, I highly recommend Rebecca Natow’s great book on negotiated rulemaking.)

The Department of Education is tasked with selecting the membership of negotiated rulemaking committees and subcommittees by choosing from among people who are nominated to participate by various stakeholders. Traditionally, ED has limited the positions to those who are representing broad sectors of institutions (such as community colleges) or affected organizations (like accrediting agencies). But given the breadth of the negotiations, I felt that it was crucial for at least one researcher to be a negotiator.

I heard from dozens of people both online and offline in support of my quixotic effort. But ED declined to include any researchers in this negotiated rulemaking session, which I find to be a major concern.

Why is the lack of an academic researcher such a big deal? First of all, it is important to have an understanding of how colleges may respond to major changes in federal policies. Institutional stakeholders may have a good idea of what their own college might do, but they may find it hard to honestly explain unintended consequences when all negotiations are being livestreamed to the public. Including a researcher who is not representing a particular sector or perspective provides the opportunity for someone to speak more candidly without the potential fear of reprisal.

Additionally, the Department of Education’s white papers on reform and innovation (see here and here) did not demonstrate full knowledge of the research on the areas to be negotiated. As I told The Chronicle of Higher Education:

“In general, ED didn’t do an awful job describing the few high-quality studies that they chose to include, but they only included a few quality studies alongside some seemingly random websites. If one of my students turned in this paper as an assignment, I would send it back with guidance to include more rigorous research and fewer opinion pieces.”

Including a researcher who knows the body of literature can help ensure that the resulting regulations have a sound backing in research. This is an important consideration given that the regulations can be challenged for either omitting or misusing prior research, as is the case with Sandy Baum’s research and the gainful employment regulations. Including a researcher can help ED get things right the first time.

In the future, I urge the Department of Education to include a spot in negotiated rulemaking committees for a researcher. This could be done in conjunction with professional associations such as the American Educational Research Association or the Association for Education Finance and Policy. This change has the potential to improve the quality of regulations and reduce the potential that regulations must be revised after public comment periods.

The only alternative right now is for someone to show up in Washington on Monday morning—the start of the semester for many academics—and petition to get on the committee in person. While I would love to see that happen, it is not feasible for most researchers to do. So I wish negotiators the best in the upcoming sessions, while reminding the Department of Education that researchers will continue to weigh in during public comment periods.