How to Maintain Research Productivity

This summer is my first summer after receiving tenure at Seton Hall. While tenure and promotion to associate professor officially do not kick in until the start of the next academic year in August, there have already been some changes to my job responsibilities. The most notable change is that I have taken over as the director of the higher education graduate programs at Seton Hall, which means taking on a heaping helping of administrative work that is needed to make things run smoothly. While this work does come with a teaching reduction during the academic year, it’s a year-round job that takes a hefty bite out of my schedule. (And yes, professors do work—which is often unpaid—during the summer!)

Over the past few years, a few other factors have combined to sharply reduce the amount of time I have available for research. Because I teach in a doctoral program, I am asked to chair more and more dissertation committees as I gain experience. I also spend quite a bit of time on the road giving talks and attending meetings on higher education policy issues across the country, which is a great opportunity to catch up on reading dissertations in transit but makes it hard to write. These demands have really hit hard over the last few months, which is why blog posts have been relatively few and far between this year.

I had the chance to participate in a panel discussion through Seton Hall’s Center for Faculty Development last academic year on the topic of maintaining research productivity. I summarize some of my key points below, and people who are interested can listen to the entire podcast. Hopefully, some of these tips are especially useful for new faculty members who are beginning the exciting transition into a tenure-track position and often face more demands on their time than they faced in the past.

(1) Take care of yourself. One challenge of being a faculty member is that an unusually large proportion of our time is unstructured. Even for colleagues who teach three or four classes a semester (I teach two), direct teaching and office hour obligations may only be 20 hours per week. But the amount of work to do is seemingly infinite, resulting in pressure to work absurd hours. Set a reasonable bound on the number of hours that you are willing to work each week and stick to it as best you can. Also make sure to have some hobbies to get away from the computer. I enjoy running, gardening, and cooking—as demonstrated by these homemade pizzas from last weekend.

(2) Keep your time allocation in mind. In addition to not working too many hours each week, it is important to spend your time on what is eventually rewarded. If your annual review or tenure/promotion guidelines specify that your evaluation is based 40% on research, 40% on teaching, and 20% on service, spending 25 hours each week on teaching is a problem. Talk with experienced faculty members or trusted colleagues about what you can do to improve your teaching efficiency. If efficiency isn’t the issue, it’s time to talk with trusted colleagues about what can be done (if anything) to protect your time for research. I do my best to block off two days each week for research during the academic year, although that does get tough with travel, conference calls, and interviews.

Another helpful hint is structuring assignment due dates so you don’t get overwhelmed. I usually have a conference to attend during the middle of the semester, so I schedule the due date for midterm papers to be right before the trip. That way, I can read papers on the train or plane (since I’m not good at writing away from my trusted home office).

(3) Guard your most productive writing time. Most faculty members that I talk with have a much harder time getting into a research mindset than getting into a teaching or service mindset. This means that for many people, their writing time needs to be the time of day in which they are at their sharpest. When you teach and meet with students is often outside your control, but when you answer e-mails and prepare for classes typically is not. It’s hard enough to write, so blocking off several of your most productive hours each week to write is a must when tenure and promotion depend on it. Conference calls and nonessential meetings can fit nicely into the rest of your week.

(4) Collaborations can be awesome. (Caveat: Make sure your discipline/institution rewards collaborative research first. Most do, but some don’t.) In the tenure and promotion process, it is crucial for faculty members to be able to demonstrate their own research agenda and contribution to their field of study. But strategically using collaborations in addition to sole-authored projects can be a wonderful way to maintain research productivity and stay excited about your work. I have been fortunate to work with a number of great collaborators over the last few years, and I just had a great time last week going out to Penn State to meet with my colleagues on a fun research project on state performance funding policies. These collaborations motivate me to keep working on new projects!

Colleagues, I would love to hear your thoughts about how you keep your research agenda moving forward amid a host of other demands. Either comment below or send me a note; I would love to do a follow-up post with more suggestions!

Some Thoughts on Using Pell Enrollment for Accountability

It is relatively rare for an academic paper to both dominate the headlines in the education media and be covered by mainstream outlets, but a new paper by economists Caroline Hoxby and Sarah Turner did exactly that. The paper, benignly titled “Measuring Opportunity in U.S. Higher Education” (technical and accessible versions), raised two major concerns with using the number or percentage of students receiving federal Pell Grants for accountability purposes:

(1) Because states have different income distributions, it is far easier for universities in some states to enroll a higher share of Pell recipients than others. For example, Wisconsin has a much lower share of lower-income adults than does California, which could help explain why California universities have a higher percentage of students receiving Pell Grants than do Wisconsin universities.

(2) At least a small number of selective colleges appear to be gaming the Pell eligibility threshold by enrolling far more students who barely receive Pell Grants than those who have significant financial need but barely do not qualify. Here is the awesome graph that Catherine Rampell made in her Washington Post article summarizing the paper:

[Figure: graph by Catherine Rampell (Washington Post) summarizing Hoxby and Turner’s findings.]

As someone who writes about accountability and social mobility while also pulling together Washington Monthly’s college rankings (all opinions here are my own, of course), I have a few thoughts inspired by the paper. Here goes!

(1) Most colleges likely aren’t gaming the number of Pell recipients in the way that some elite colleges appear to be doing. As this Twitter thread chock-full of information from great researchers discusses, there is no evidence nationally that colleges are manipulating enrollment right around the Pell eligibility cutoff. Since most colleges are broad-access and/or are trying to simply meet their enrollment targets, it follows that they are less concerned with maximizing their Pell enrollment share (which is likely high already).

(2) How are elite colleges manipulating Pell enrollment? This could be happening in one or more of three possible ways. First, if these colleges are known for generous aid to Pell recipients, more students just on the edge of Pell eligibility may choose to apply. Second, colleges could be explicitly recruiting students in areas likely to have larger shares of Pell recipients near the eligibility threshold. Finally, colleges could make admissions and/or financial aid decisions based on Pell eligibility. It would be ideal to see data on each step of the process to better figure out what is going on.

(3) What other metrics can currently be used to measure social mobility in addition to Pell enrollment? Three other metrics currently jump out as possibilities. The first is enrollment by family income bracket (such as below $30,000 or $30,001-$48,000), which is collected for first-time, full-time, in-state students in IPEDS. It suffers from the same manipulation issues around the cutoffs, though. The second is first-generation status, which the College Scorecard collects for FAFSA filers. The third is race/ethnicity, which tends to be correlated with the previous two measures but is likely a political nonstarter in a number of states (while being a requirement in others).

(4) How can percent Pell still be used? The first finding of Hoxby and Turner’s work is far more important than the second for nationwide analyses (within states, it may be worth looking at regional differences in income, too). The Washington Monthly rankings use both the percentage of Pell recipients and an actual-versus-predicted Pell enrollment measure (controlling for ACT/SAT scores and the percentage of students admitted); a rough sketch of that approach appears after this list. I plan to play around with ways to take a state’s income distribution into account to see how this changes the predicted Pell enrollments and will report back on my findings in a future blog post.

(5) How can social mobility be measured better? States can dive much deeper into social mobility than the federal government can thanks to their detailed student-level datasets. This allows sliding scales of social mobility to be created, or measures such as median household income to be used instead of just percent Pell. At the national level, it would be great to have a measure of the percentage of students with zero expected family contribution (the neediest students), and this would be pretty easy to add to IPEDS as a new measure.
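
For readers who want to see the mechanics behind an “actual versus predicted” comparison, here is a minimal sketch under stated assumptions: the data, variable names, and model are invented for illustration and are not the Washington Monthly’s actual code or data.

```python
# Illustrative sketch of an actual-versus-predicted Pell comparison.
# All data and variable names are made up; this is not the rankings' code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # synthetic institutions

df = pd.DataFrame({
    "sat_median": rng.normal(1150, 120, n),       # selectivity controls
    "admit_rate": rng.uniform(0.20, 0.95, n),
    "state_low_inc": rng.uniform(0.25, 0.40, n),  # share of state adults who are lower-income
})
# Simulated "true" Pell share: more selective schools enroll fewer Pell recipients,
# schools in lower-income states enroll more, plus noise.
df["pell_share"] = (0.6 - 0.0003 * df["sat_median"] + 0.15 * df["admit_rate"]
                    + 0.5 * df["state_low_inc"] + rng.normal(0, 0.03, n)).clip(0, 1)

# Baseline model (selectivity only) versus a model that also adjusts for the
# state income distribution, which is the adjustment discussed in point (4).
baseline = smf.ols("pell_share ~ sat_median + admit_rate", data=df).fit()
adjusted = smf.ols("pell_share ~ sat_median + admit_rate + state_low_inc", data=df).fit()

# "Performance" = actual minus predicted Pell share under each model; positive
# values mean a college enrolls more Pell recipients than predicted.
df["perf_baseline"] = df["pell_share"] - baseline.fittedvalues
df["perf_adjusted"] = df["pell_share"] - adjusted.fittedvalues
print(df[["perf_baseline", "perf_adjusted"]].describe())
```

A college with a positive residual under the adjusted model enrolls more Pell recipients than its selectivity and its state’s income mix would predict, which is the kind of comparison described in point (4).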

I would like to close this post by thanking Hoxby and Turner for provoking important conversations on data, social mobility, and accountability. I look forward to seeing their next paper in this area!

Why Negotiated Rulemaking Committees Should Include a Researcher

The U.S. Department of Education officially unveiled on Monday the membership of the committees for its spring 2019 negotiated rulemaking sessions on accreditation and innovation. This incredibly ambitious rulemaking effort includes subcommittees on the TEACH Grant, distance education, and faith-based institutions and has wide-ranging implications for nearly all of American higher education. If negotiators do not reach unanimous consensus on a given topic (the most likely outcome), ED can write regulations as it sees fit. (For more on the process, I highly recommend Rebecca Natow’s great book on negotiated rulemaking.)

The Department of Education is tasked with selecting the membership of negotiated rulemaking committees and subcommittees by choosing from among people who are nominated to participate by various stakeholders. Traditionally, ED has limited the positions to those who are representing broad sectors of institutions (such as community colleges) or affected organizations (like accrediting agencies). But given the breadth of the negotiations, I felt that it was crucial for at least one researcher to be a negotiator.

I heard from dozens of people both online and offline in support of my quixotic effort. But ED declined to include any researchers in this negotiated rulemaking session, which I find to be a major concern.

Why is the lack of an academic researcher such a big deal? First of all, it is important to have an understanding of how colleges may respond to major changes in federal policies. Institutional stakeholders may have a good idea of what their college might do, but it may be hard for them to honestly explain unintended consequences when all negotiations are being livestreamed to the public. Including a researcher who is not representing a particular sector or perspective provides the opportunity for someone to speak more candidly without the potential fear of reprisal.

Additionally, the Department of Education’s white papers on reform and innovation (see here and here) did not demonstrate full knowledge of the research on the areas to be negotiated. As I told The Chronicle of Higher Education:

“In general, ED didn’t do an awful job describing the few high-quality studies that they chose to include, but they only included a few quality studies alongside some seemingly random websites. If one of my students turned in this paper as an assignment, I would send it back with guidance to include more rigorous research and fewer opinion pieces.”

Including a researcher who knows the body of literature can help ensure that the resulting regulations have a sound backing in research. This is an important consideration given that the regulations can be challenged for either omitting or misusing prior research, as is the case with Sandy Baum’s research and the gainful employment regulations. Including a researcher can help ED get things right the first time.

In the future, I urge the Department of Education to include a spot in negotiated rulemaking committees for a researcher. This could be done in conjunction with professional associations such as the American Educational Research Association or the Association for Education Finance and Policy. This change has the potential to improve the quality of regulations and reduce the potential that regulations must be revised after public comment periods.

The only alternative right now is for someone to show up in Washington on Monday morning—the start of the semester for many academics—and petition to get on the committee in person. While I would love to see that happen, it is not feasible for most researchers to do. So I wish negotiators the best in the upcoming sessions, while reminding the Department of Education that researchers will continue to weigh in during public comment periods.

How to Respond to Rejection in Academia

There is an old saying in baseball that even the best hitters fail 70% of the time, which shows the difficulty of hitting a round ball with a round bat. But while achieving a .300 batting average in baseball is becoming harder than it has been in decades, most academics would be overjoyed by a 30% success rate across all of their endeavors. This high failure rate often comes as a surprise for new graduate students, who only see the successes of faculty members and think that they never get rejected. I tweeted about this earlier this week and was asked to say more about ways to model responding to rejection.

I feel like I am approaching expert status in rejection by now (even while developing a solid CV), and I am far from the only one. Doug Webber at Temple University put together an impressive CV of his failures, and here are some of mine:

  • I applied to about twenty PhD programs in economics straight out of college, and was only accepted by one of them (Wisconsin). I then promptly ended up on academic probation, got off of probation, failed the micro theory prelim twice, and was unceremoniously dismissed with a terminal master’s degree. Failing out of that program was the best thing that ever happened to me professionally, as many econ PhD programs are known for being brutal on students’ mental health (mine included). I then applied to the ed policy PhD program at Wisconsin and had a great experience there.
  • I applied for multiple dissertation fellowships and was rejected by all of them.
  • I applied to about 80 tenure-track jobs while finishing my dissertation. I never even heard back from about half of them and only had one flyout (which thankfully worked out). And I’m one of the small percentage of interested PhD students who got a tenure-track position!
  • My first eight external grant applications were all rejected.
  • Journals have rejected my submissions 39 times over the last six years using a number of methods. Quick desk rejections (in which the editor says submissions don’t meet the journal’s standards or are outside their area of focus) are always appreciated, as are timely (2-4 month) rejections with helpful feedback. But I have had papers rejected in far worse ways: revise and resubmits rejected after I made every requested change, papers rejected without feedback because reviewers never responded, and delayed (8-12 month) rejections with snarky or unhelpful comments.
  • Every early career award that I have been nominated for (or applied for) has ended with a rejection to this point. C’est la vie.

So how can established academics model how to respond to rejection for graduate students and junior scholars? I offer four suggestions.

(1) Be transparent about failures as well as successes. Doug’s failure CV is a great example of how academics can show the many potholes on the road to success. It is important for us to talk more about our failures (and not just in the form of snarky comments or tweets). There is an element of randomness in nearly every process in higher education (I have had mediocre articles get easily accepted, while better ones have struggled), and we need to do a better job of communicating that reality.

(2) Share the odds of success and how to learn from failures. The fact that I struck out on my first eight grant applications sounds terrible to almost any person new to the field of higher education. But being below the Mendoza line (a .200 batting average) is typical for many funding agencies, which often fund less than one in ten applicants. Rejected grant applications often do not come with feedback, which is frustrating. But getting rejected by a selective journal (conditional on getting past the editor’s desk and out for review) usually comes with useful feedback that can lead to an acceptance at the next journal. And since there is that element of randomness in acceptances, it is often worthwhile to send a paper to a journal even if the likelihood of publication there is low. Sharing this information with rising scholars provides useful context about academic life.

(3) Be there to support colleagues and students during difficult times. Aside from teaching, academics often do much of their work in isolation. And rejections (particularly the first few) can be even more devastating in isolation. Part of mentoring new scholars should include being there to just listen while people vent about being rejected.

(4) Be considerate while rejecting people. For those of us in the position to reject a large percentage of people (search committee chairs, journal reviewers, and the like), it is important to be as compassionate as possible in the process. As a job applicant, I appreciated getting some confirmation that I was out of the running for a position—even though by a certain point it was pretty clear that I was not the candidate. However, HR policies at some campuses make that difficult or impossible. On the journal side, reviewers need to think about how to shape comments to the author(s) versus their confidential comments to the editor. It’s okay to tell the editor that the paper falls far below the expectations for that journal or that the paper should have been desk rejected, but try to provide author(s) with at least some constructive feedback.

One final note: even after dozens of rejections, the sting never fully goes away. I don’t think it ever will, but as long as the rejection is reasonably considerate, I finally feel comfortable trying again without too much self-doubt. And that is important given that sometimes my efforts feel as futile as trying to hit an eephus pitch!

Announcing a New Data Collection Project on State Performance-Based Funding Policies

Performance-based funding (PBF) policies in higher education, in which states fund colleges in part based on student outcomes instead of enrollment measures or historical tradition, have spread rapidly across states in recent years. This push for greater accountability has resulted in more than half of all states currently using PBF to fund at least some colleges, with deep-blue California joining a diverse group of states by developing a PBF policy for its community colleges.

Academic researchers have flocked to the topic of PBF over the last decade and have produced dozens of studies looking at the effects of PBF both on a national level and for individual states. In general, this research has found modest effects of PBF, with some differences across states, sectors, and how long the policies have been in place. There have also been concerns about the potential unintended consequences of PBF on access for low-income and minority students, although new policies that provide bonuses to colleges that graduate historically underrepresented students seem promising in mitigating these issues.

In spite of the intense research and policy interest in PBF, relatively little is known about what is actually in these policies. States vary considerably in how much money is tied to student outcomes, which outcomes (such as retention and degree completion) are incentivized, and whether there are bonuses for serving low-income, minority, first-generation, rural, adult, or veteran students. Some states also give bonuses for STEM graduates, which is even more important to understand given this week’s landmark paper by Kevin Stange and colleagues documenting differences in the cost of providing an education across disciplines.

Most research has relied on binary indicators of whether a state has a PBF policy or an incentive to encourage equity, with some studies trying to get at the importance of the strength of PBF policies by looking at individual states. But researchers and advocacy organizations cannot even agree on whether certain states had PBF policies in certain years, and no research has tried to fully catalog the different strengths of policies (“dosage”) across states over time.

Because collecting high-quality data on the nuances of PBF policies is a time-consuming endeavor, I was just about ready to walk away from studying PBF given my available resources. But last fall at the Association for the Study of Higher Education conference, two wonderful colleagues approached me with an idea to go out and collect the data. After a year of working with Justin Ortagus of the University of Florida and Kelly Rosinger of Pennsylvania State University—two tremendous assistant professors of higher education—we are pleased to announce that we have received a $204,528 grant from the William T. Grant Foundation to build a 20-year dataset containing detailed information about the characteristics of PBF policies and how much money is at stake.

Our dataset, which will eventually be made available to the public, will help us answer a range of policy-relevant questions about PBF. Some particularly important questions are whether dosage matters regarding student outcomes, whether different types of equity provisions are effective in reducing educational inequality, and whether colleges respond to PBF policies differently based on what share of their funding comes from the state. We are still seeking funding to do these analyses over the next several years, so we would love to talk with interested foundations about the next phases of our work.

To close, one thing that I tell often-skeptical audiences of institutional leaders and fellow faculty members is that PBF policies are not going away anytime soon and that many state policymakers will not give additional funding to higher education without at least a portion being directly tied to student outcomes. These policies are also rapidly changing, in part driven by some of the research over the last decade that was not as positive toward many early PBF systems. This dataset will allow us to examine which types of PBF systems can improve outcomes across all students, thus helping states improve their current PBF systems.

New Research on the Relationship between Nonresident Enrollment and In-State College Prices

Public colleges and universities in most states are under increased financial stress as they strain to compete with other institutions while state appropriations fail to keep up with inflation and enrollment growth. As a result, universities have turned to other revenue sources to raise additional funds. One commonly targeted source is out-of-state students, particularly in Northeastern and Midwestern states with declining populations of recent high school graduates. But prior research has found that trying to enroll more out-of-state students can reduce the number of in-state students attending selective public universities, and this crowding-out effect particularly affects minority and low-income students.

I have long been interested in studying how colleges use their revenue, so I began sketching out a paper looking at whether public universities appeared to use additional revenue from out-of-state students to improve affordability for in-state students. Since I am particularly interested in prices faced by students from lower-income families, I was also concerned that any potential increase in amenities driven by out-of-state students could actually make college less affordable for in-state students.

I started working on this project back in the spring of 2015 and enjoyed two and a half conference rejections (one paper submission was rejected into a poster presentation), two journal rejections, and a grant application rejection during the first two years. But after getting helpful feedback from the journal reviewers (unfortunately, most conference reviewers provide little feedback and most grant applications are rejected with no feedback), I made improvements and finally got the paper accepted for publication.

The resulting article, just published in Teachers College Record (and available for free for a limited time after signing up as a visitor), addresses the following research questions:

(1) Do the listed cost of attendance and components such as tuition and fees and housing expenses for in-state students change when nonresident enrollment increases?

(2) Does the net price of attendance (both overall and by family income bracket) for in-state students change when nonresident enrollment increases?

(3) Do the above relationships differ by institutional selectivity?

After years of working on this paper and multiple iterations, I am pleased to report…null findings. (Seriously, though, I am glad that higher education journals seem to be willing to publish null findings, as long as the estimates are precisely located around zero without huge confidence intervals.) These findings suggest two things about the relationship between nonresident enrollment and prices faced by in-state students. First, it does not look like nonresident tuition revenue is being used to bring down in-state tuition prices. Second, it also does not appear that in-state students are paying more for room and board after more out-of-state students enroll, suggesting that any amenities demanded by wealthier out-of-state students may be modest in nature.

I am always happy to take any questions on the article or to share a copy if there are issues accessing it. I am also happy to chat about the process of getting research published in academic journals, since that is often a long and winding road!

How Financial Responsibility Scores Do Not Affect Institutional Behaviors

One of the federal government’s longstanding accountability efforts in higher education is the financial responsibility score—a metric designed to reflect a private college’s financial stability. The federal government has an interest in making sure that only stable colleges receive federal funds, as taxpayers often end up footing at least part of the bill when colleges shut down and students may struggle to resume their education elsewhere. The financial responsibility score metric ranges from -1.0 to 3.0, with colleges scoring between 1.0 and 1.4 being placed under additional oversight and those scoring below 1.0 being required to post a letter of credit with the Department of Education.
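
To make those cutoffs concrete, here is a tiny illustrative function that maps a score to the consequence described above. This is my own sketch of the thresholds as stated in this paragraph, not the Department of Education’s actual implementation.

```python
def oversight_category(score: float) -> str:
    """Map a financial responsibility score to its federal consequence.

    Illustrative only; the cutoffs mirror the description in this post,
    not the Department of Education's actual code.
    """
    if score < -1.0 or score > 3.0:
        raise ValueError("Financial responsibility scores range from -1.0 to 3.0")
    if score >= 1.5:
        return "passing: no additional requirements"
    if score >= 1.0:
        return "zone: placed under additional oversight"
    return "failing: must post a letter of credit with the Department of Education"


for s in (2.4, 1.2, 0.3):
    print(s, "->", oversight_category(s))
```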

Although these scores have been released to the public since the 2006-07 academic year and there was a great deal of dissatisfaction among private colleges regarding how the scores were calculated, there had been no prior academic research on the topic before I started my work in the spring of 2014. My question was simple: did receiving a poor financial responsibility score induce colleges to shift their financial priorities (either increasing revenues or decreasing expenditures) in an effort to avoid future sanctions?

But as is often the case in academic research, the road to a published article was far from smooth and direct. Getting rejected by two different journals took nearly two years, and then it took another two years for this paper to wind its way through the review, page proof, and publication process at the Journal of Education Finance. (In the meantime, I scratched my itch on the topic and put a stake in the ground by writing a few blog posts highlighting the data and teasing my findings.)

More than four and a half years after starting work on this project, I am thrilled to share that my paper, “Do Financial Responsibility Scores Affect Institutional Behaviors?” is a part of the most recent issue of the Journal of Education Finance. I examined financial responsibility score data from 2006-07 to 2013-14 in this paper, although I tried to get data going farther back since these scores have been calculated since at least 1996. I filed a Freedom of Information Act request back in 2014 for the data, and my appeal was denied in 2017 on the grounds that the request to receive data (that already existed in some format!) was “too burdensome and expensive.” At that point, the paper was already accepted at JEF, but I am obviously still a little annoyed with how that process went.

Anyway, I failed to find any clear evidence that private nonprofit or for-profit colleges changed their fiscal priorities after receiving an unfavorable financial responsibility score. To some extent, this result made sense among private nonprofit colleges, which tend to move fairly slowly and have many sticky costs (such as facilities and tenured faculty). But for for-profit colleges, which generally tend to be fairly agile critters, the null findings were more surprising. There is certainly more work to do in this area (particularly given the changes in higher education that have occurred over the last five years), so I encourage more researchers to delve into this topic.

To aspiring researchers and those who rely on research in their jobs—I hope this blog post provides some insights into the scholarly publication process and all of the factors that can slow down the production of research. I started this paper during my first year on faculty and it finally came out during my tenure review year (which is okay because accepted papers still count even if they are not yet in print). Many papers move more quickly than this one, but it is worth highlighting that research is a pursuit for people with a fair amount of patience.

Some Thoughts on the Academic Peer Review Process

Like most research-intensive faculty members, I receive regular requests to review papers for legitimate scholarly journals. (My spam e-mail folder is also full of requests to join editorial boards for phony journals, but that’s another topic for another day.) Earlier this week, I was working on reviewing a paper submitted to The Review of Higher Education, one of the three main higher education field journals in the United States (Journal of Higher Education and Research in Higher Education are the other two). I went to check one of the submission guidelines on the journal’s website and was surprised to see that the journal is temporarily closed for new manuscript submissions to help clear a backlog of submissions.

After I shared news of the journal’s decision on Twitter, I received a response from one of the associate editors of the journal: the journal has been struggling to find enough qualified reviewers, which has stretched initial reviews to six months or longer. Her statement astonished me.

This sets off all kinds of alarms. How can a well-respected journal struggle so much to get qualified reviewers, pushing the length of the initial peer review process to six months or beyond? As someone who both submits to and reviews for a wide range of journals, here are some of my thoughts on how to potentially streamline the academic peer review process.

(1) Editors should ‘desk reject’ a higher percentage of submissions. Since it can be difficult to find qualified reviewers and most respectable journals accept less than 20% of all submissions, there is no reason to send all papers out to multiple external reviewers. If a member of the editorial board glances through the paper and can easily determine that it has a very low chance of publication, the paper should be immediately ‘desk rejected’ and quickly returned to the author with a brief note about why it was not sent out for full review. Journals in some fields, such as economics, already do this and it is sorely needed in education to help manage workloads. It is also humane to authors, as they are not waiting several months to hear back on a paper that will end up being rejected. I have been desk rejected several times during my career, and it allowed me to keep moving papers through the publication pipeline as a tenure-track faculty member.

(2) Journals should consider rejecting submissions from serial free riders. The typical academic paper is reviewed by two or three external scholars in the peer review process, with more people potentially getting involved if the paper goes through multiple revise and resubmit rounds. This means that for every sole-authored paper that someone submits, that person should be prepared to review two or three other papers in order to maintain balance. But in practice, since journals prefer reviewers with doctoral degrees and graduate students need to submit papers in order to be eligible for academic jobs, those of us with doctoral degrees should probably plan on reviewing 3-4 papers for each sole-authored paper we submit. (Divide that number accordingly based on the number of co-authors on your submissions; a quick sketch of this arithmetic appears after this list.) It’s okay to decline review invitations if the paper is outside your scope of knowledge, but otherwise scholars need to accept most invitations. Declining because we are too busy doing our own research—and thus further jamming the publication pipeline—is not acceptable, particularly for tenured faculty members. If journals publicly commit to rejecting submissions from serial free riders, there may be fewer difficulties finding reviewers.

(3) There needs to be some incentive for reviewers to submit in a timely manner. Right now, journals can only beg and plead to get reviewers to submit their reviews within a reasonable time period (usually 3-6 weeks). But in my conversations with journal editors, reviewers often fail to meet that timeline. In an ideal world, journal reviewers would actually get paid for their work, much as many foundations and scholarly presses pay a few hundred dollars for thorough reviews. Absent that incentive, it may be worth establishing some sort of priority structure that rewards those who review quickly with quick reviews on their own submissions.

(4) In some cases, there needs to be better vetting of reviews before they are sent to authors. Most reputable academic journals have relatively few problems with this, as this is the job of the editorial board. Reviews generally come with a letter from the editor explaining discrepancies among reviewers and which comments can potentially be ignored. But the peer review process at academic conferences has more quality control issues, potentially due to the large number of reviews that are requested (being asked to review ten proposals of 2,000-2,500 words each is not uncommon). It seems like reviewers rush through these proposals and often lack knowledge of the subject matter. Limiting the number of submissions that any individual can make and carefully vetting conference reviewers could help with this concern.
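
As a footnote to point (2) above, here is the back-of-the-envelope arithmetic in code form. The reviewers-per-paper figure is an assumption based on the two or three external reviews mentioned earlier, not any journal’s policy.

```python
def reviews_owed(submissions, reviewers_per_paper=3):
    """Rough estimate of how many reviews a scholar 'owes' the field.

    `submissions` has one entry per paper submitted, giving the number of
    authors on that paper. Illustrative arithmetic only, not a formal rule.
    """
    return sum(reviewers_per_paper / n_authors for n_authors in submissions)


# Example: one sole-authored paper plus one paper with three authors,
# assuming three reviewers per submission: 3/1 + 3/3 = 4 reviews owed.
print(reviews_owed([1, 3]))
```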

By helping to restrict the number of items that go out for peer review and providing incentives for people to fulfill their professional reviewing obligations, it should be possible to bring the peer review timeline down to a more humane 2-3 months rather than the 4-8 months that seems to be the norm in much of education. This is crucial for junior scholars trying to meet tenure requirements, but it will also help get peer-reviewed research out to the public and policymakers more quickly. Journals such as AERA Open, Educational Evaluation and Policy Analysis, and Economics of Education Review are models in quick and thorough peer review processes that the rest of the field can emulate.

New Experimental Evidence on the Effectiveness of Need-Based Financial Aid

My first experience doing higher education research began in the spring of 2008, when I (then a graduate student in economics) responded to an e-mail from an education professor at the University of Wisconsin who was looking for students to help her with an interesting new study. Sara Goldrick-Rab was co-leading an evaluation of the Wisconsin Scholars Grant (WSG)—a rare case of need-based financial aid being given to students from low-income families via random assignment. Over the past decade, the Wisconsin Hope Lab team published articles on the effectiveness of the WSG in improving on-time graduation rates among university students and on changing students’ work patterns.

A decade later, we were able to conduct a follow-up study to examine the outcomes of treatment and control group students who started college between 2008 and 2011. This sort of long-term analysis of financial aid programs has rarely been conducted—and the two best existing evaluations (of the Cal Grant and the West Virginia PROMISE program) are on programs with substantial merit-based components. Eligibility for the WSG was solely based on financial need (conditional on being a first-time, full-time student), providing the first long-term experimental evaluation of a need-based program.

Along with longtime collaborators from our days in Wisconsin (Drew Anderson of the RAND Corporation, Katharine Broton of the University of Iowa, and Sara Goldrick-Rab of Temple University), I am pleased to announce the release of our new working paper on the long-term effects of the WSG to kick off the opening of the new Hope Center for College, Community and Justice at Temple University. We found some evidence that students who began at four-year colleges and were assigned to receive the WSG had improved academic outcomes. The positive impacts on degree completion for the initial 2008 cohort of students did fade out over a period of up to nine years, but the grant still helped students complete their degrees more quickly than the comparison group. Additionally, there was a positive impact on six-year graduation rates in later cohorts, with treatment students in the 2011 cohort being 5.4 percentage points more likely to graduate than the control group.

The grant generated clear increases in the percentage of students who both declared and completed STEM majors, even though the grant made no mention whatsoever of STEM and had no major requirements. A second new paper by Katharine Broton and David Monaghan of Shippensburg University found that university students assigned to treatment were eight percentage points more likely to declare a STEM major, while our paper estimated a 3.6 percentage point increase in the likelihood of graduating with a STEM major. This strongly suggests that additional need-based financial aid can free students to pursue a wider range of majors, including ones that may require more expensive textbooks and additional hours spent in laboratory sessions.

However, the WSG did not generate across-the-board positive impacts. Impacts on persistence, degree completion, and transfer for students who began at two-year colleges were generally null, which could be due to the smaller size of the grant ($1,800 per year at two-year colleges versus $3,500 at four-year colleges) or the rather unusual population of first-time, full-time students attending mainly transfer-focused two-year colleges. We also found no effects of the grant on graduate school enrollment among students who started at four-year colleges, although this trend is worth re-examining in the future as people may choose to enroll after several years of work experience.

It has been an absolute delight to reunite with my longstanding group of colleagues to conduct this long-term evaluation of the WSG. We welcome any comments on our working paper and look forward to continuing our work in this area through the Hope Center.