How to Respond to Rejection in Academia

There is an old saying in baseball that even the best hitters fail 70% of the time, which shows the difficulty of hitting a round ball with a round bat. But while achieving a .300 batting average in baseball is becoming harder than it has been in decades, most academics would be overjoyed by a 30% success rate across all of their endeavors. This high failure rate often comes as a surprise to new graduate students, who only see the successes of faculty members and assume that they never get rejected. I tweeted about this earlier this week and was asked to say more about ways to model responding to rejection.

I feel like I am approaching expert status in rejection by now (even while developing a solid CV), and I am far from the only one. Doug Webber at Temple University put together an impressive CV of his failures, and here are some of mine:

  • I applied to about twenty PhD programs in economics straight out of college, and was only accepted by one of them (Wisconsin). I then promptly ended up on academic probation, got off of probation, failed the micro theory prelim twice, and was unceremoniously dismissed with a terminal master’s degree. Failing out of that program was the best thing that ever happened to me professionally, as many econ PhD programs are known for being brutal on students’ mental health (mine included). I then applied to the ed policy PhD program at Wisconsin and had a great experience there.
  • I applied for multiple dissertation fellowships and was rejected by all of them.
  • I applied to about 80 tenure-track jobs while finishing my dissertation. I never even heard back from about half of them and had only one flyout (which thankfully worked out). And I'm among the small percentage of interested PhD students who landed a tenure-track position!
  • My first eight external grant applications were all rejected.
  • Journals have rejected my submissions 39 times over the last six years, and in a number of different ways. Quick desk rejections (in which the editor says a submission does not meet the journal’s standards or is outside its area of focus) are always appreciated, as are timely (2-4 month) rejections with helpful feedback. But I have had papers rejected in far worse ways: revise and resubmits rejected after I made every requested change, papers rejected without feedback because reviewers never responded, and delayed (8-12 month) rejections with snarky or unhelpful comments.
  • Every early career award that I have been nominated for (or applied for) has ended with a rejection to this point. C’est la vie.

So how can established academics model how to respond to rejection for graduate students and junior scholars? I offer four suggestions.

(1) Be transparent about failures as well as successes. Doug’s failure CV is a great example of how academics can show the many potholes on the road to success. It is important for us to talk more about our failures (and not just in the form of snarky comments or tweets). There is an element of randomness in nearly every process in higher education (I have had mediocre articles get easily accepted, while better ones have struggled), and we need to do a better job of communicating that reality.

(2) Share the odds of success and how to learn from failures. The fact that I struck out on my first eight grant applications sounds terrible to almost anyone new to the field of higher education. But being below the Mendoza line (a .200 batting average) is typical for many funding agencies, which often fund fewer than one in ten applicants. Rejected grant applications often come with no feedback, which is frustrating. But getting rejected by a selective journal (conditional on getting past the editor’s desk and out for review) usually results in useful feedback that can lead to an acceptance at the next journal. And since there is that element of randomness in acceptances, it is often worthwhile to send a paper to a journal that offers only a low likelihood of publication. Sharing this information with rising scholars provides useful context about academic life.

(3) Be there to support colleagues and students during difficult times. Aside from teaching, academics often do much of their work in isolation. And rejections (particularly the first few) can be even more devastating in isolation. Part of mentoring new scholars should include being there to just listen while people vent about being rejected.

(4) Be considerate while rejecting people. For those of us in a position to reject a large percentage of people (search committee chairs, journal reviewers, and the like), it is important to be as compassionate as possible in the process. As a job applicant, it was nice to get some confirmation that I was out of the running for a position—even though it was usually pretty clear by that point that I was not the chosen candidate. However, HR policies at some campuses make that difficult or impossible. On the journal side, reviewers need to think about how to shape their comments to the author(s) versus their confidential comments to the editor. It’s okay to tell the editor that a paper falls far below the expectations for that journal or that it should have been desk rejected, but try to provide the author(s) with at least some constructive feedback.

One final note: Even after being rejected dozens of times, the sting never fully goes away. I don’t think it ever will, but as long as the rejection is reasonably considerate, I finally feel comfortable trying again without too much self-doubt. And that is important given that sometimes my efforts feel as futile as trying to hit an eephus pitch!

Announcing a New Data Collection Project on State Performance-Based Funding Policies

Performance-based funding (PBF) policies in higher education, in which states fund colleges in part based on student outcomes instead of enrollment measures or historical tradition, have spread rapidly across states in recent years. This push for greater accountability has resulted in more than half of all states currently using PBF to fund at least some colleges, with deep-blue California joining a diverse group of states by developing a PBF policy for its community colleges.

Academic researchers have flocked to the topic of PBF over the last decade and have produced dozens of studies looking at the effects of PBF both on a national level and for individual states. In general, this research has found modest effects of PBF, with some differences across states, sectors, and how long the policies have been in place. There have also been concerns about the potential unintended consequences of PBF on access for low-income and minority students, although new policies that provide bonuses to colleges that graduate historically underrepresented students seem to be promising in mitigating these issues.

In spite of the intense research and policy interest in PBF, relatively little is known about what is actually in these policies. States vary considerably in how much money is tied to student outcomes, which outcomes (such as retention and degree completion) are incentivized, and whether there are bonuses for serving low-income, minority, first-generation, rural, adult, or veteran students. Some states also give bonuses for STEM graduates, which is even more important to understand given this week’s landmark paper by Kevin Stange and colleagues documenting differences in the cost of providing an education across disciplines.

Most research has relied on binary indicators of whether a state has a PBF policy or an incentive to encourage equity, with some studies trying to capture the strength of PBF policies by looking at individual states. But researchers and advocacy organizations cannot even agree on whether certain states had PBF policies in certain years, and no research has tried to fully catalog the different strengths of policies (“dosage”) across states over time.
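To make the binary-versus-dosage distinction concrete, here is a minimal sketch of what the two coding approaches might look like in a hypothetical state-year panel. The variable names and values are invented for illustration and are not drawn from our forthcoming dataset.

```python
# Hypothetical state-year records; values are invented for illustration only.
records = [
    {"state": "State A", "year": 2015, "has_pbf": 1, "pbf_dosage": 0.05},  # 5% of funding tied to outcomes
    {"state": "State B", "year": 2015, "has_pbf": 1, "pbf_dosage": 0.85},  # 85% of funding tied to outcomes
    {"state": "State C", "year": 2015, "has_pbf": 0, "pbf_dosage": 0.00},  # no PBF policy in place
]

# A binary indicator treats States A and B identically even though the share
# of funding at stake differs by an order of magnitude; a dosage measure
# preserves that difference.
for r in records:
    print(r["state"], r["year"], r["has_pbf"], f"{r['pbf_dosage']:.0%}")
```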

Because collecting high-quality data on the nuances of PBF policies is a time-consuming endeavor, I was just about ready to walk away from studying PBF given my available resources. But last fall at the Association for the Study of Higher Education conference, two wonderful colleagues approached me with an idea to go out and collect the data. After a year of working with Justin Ortagus of the University of Florida and Kelly Rosinger of Pennsylvania State University—two tremendous assistant professors of higher education—we are pleased to announce that we have received a $204,528 grant from the William T. Grant Foundation to build a 20-year dataset containing detailed information about the characteristics of PBF policies and how much money is at stake.

Our dataset, which will eventually be made available to the public, will help us answer a range of policy-relevant questions about PBF. Some particularly important questions are whether dosage matters regarding student outcomes, whether different types of equity provisions are effective in reducing educational inequality, and whether colleges respond to PBF policies differently based on what share of their funding comes from the state. We are still seeking funding to do these analyses over the next several years, so we would love to talk with interested foundations about the next phases of our work.

To close, one thing that I tell often-skeptical audiences of institutional leaders and fellow faculty members is that PBF policies are not going away anytime soon and that many state policymakers will not give additional funding to higher education without at least a portion being directly tied to student outcomes. These policies are also rapidly changing, in part driven by some of the research over the last decade that was not as positive toward many early PBF systems. This dataset will allow us to examine which types of PBF systems can improve outcomes across all students, thus helping states improve their current PBF systems.

New Research on the Relationship between Nonresident Enrollment and In-State College Prices

Public colleges and universities in most states are under increased financial stress as they strain to compete with other institutions while state appropriations fail to keep up with increases in both inflation and student enrollment. As a result, universities have turned to other revenue sources to raise additional funds. One commonly targeted source is out-of-state students, particularly in Northeastern and Midwestern states with declining populations of recent high school graduates. But prior research has found that trying to enroll more out-of-state students can reduce the number of in-state students attending selective public universities, and this crowding-out effect particularly impacts minority and low-income students.

I have long been interested in studying how colleges use their revenue, so I began sketching out a paper looking at whether public universities appeared to use additional revenue from out-of-state students to improve affordability for in-state students. Since I am particularly interested in prices faced by students from lower-income families, I was also concerned that any potential increase in amenities driven by out-of-state students could actually make college less affordable for in-state students.

I started working on this project back in the spring of 2015 and enjoyed two and a half conference rejections (one paper submission was rejected into a poster presentation), two journal rejections, and a grant application rejection during the first two years. But after getting helpful feedback from the journal reviewers (unfortunately, most conference reviewers provide little feedback and most grant applications are rejected with no feedback), I made improvements and finally got the paper accepted for publication.

The resulting article, just published in Teachers College Record (and available for free for a limited time after signing up as a visitor), addresses the following research questions:

(1) Do the listed cost of attendance and components such as tuition and fees and housing expenses for in-state students change when nonresident enrollment increases?

(2) Does the net price of attendance (both overall and by family income bracket) for in-state students change when nonresident enrollment increases?

(3) Do the above relationships differ by institutional selectivity?

After years of working on this paper and multiple iterations, I am pleased to report…null findings. (Seriously, though, I am glad that higher education journals seem to be willing to publish null findings, as long as the estimates are precisely located around zero without huge confidence intervals.) These findings suggest two things about the relationship between nonresident enrollment and prices faced by in-state students. First, it does not look like nonresident tuition revenue is being used to bring down in-state tuition prices. Second, it also does not appear that in-state students are paying more for room and board after more out-of-state students enroll, suggesting that any amenities demanded by wealthier out-of-state students may be modest in nature.
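For readers newer to reading quantitative results, here is a small illustration of what I mean by estimates “precisely located around zero.” The numbers below are made up and are not from the article; the point is that a null result is informative when the confidence interval is tight around zero, not merely when the estimate fails to reach statistical significance.

```python
def ci95(estimate: float, std_error: float) -> tuple[float, float]:
    """Approximate 95% confidence interval for a point estimate."""
    half_width = 1.96 * std_error
    return (estimate - half_width, estimate + half_width)

# Hypothetical effects of nonresident enrollment on in-state tuition (dollars).
precise_null = ci95(estimate=15.0, std_error=40.0)   # about (-63, 93): tight around zero
imprecise = ci95(estimate=15.0, std_error=900.0)     # about (-1749, 1779): consistent with large effects

print("Informative null:", tuple(round(x) for x in precise_null))
print("Uninformative:   ", tuple(round(x) for x in imprecise))
```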

I am always happy to take any questions on the article or to share a copy if there are issues accessing it. I am also happy to chat about the process of getting research published in academic journals, since that is often a long and winding road!

How Financial Responsibility Scores Do Not Affect Institutional Behaviors

One of the federal government’s longstanding accountability efforts in higher education is the financial responsibility score—a metric designed to reflect a private college’s financial stability. The federal government has an interest in making sure that only stable colleges receive federal funds, as taxpayers often end up footing at least part of the bill when colleges shut down and students may struggle to resume their education elsewhere. The financial responsibility score metric ranges from -1.0 to 3.0, with colleges scoring between 1.0 and 1.4 being placed under additional oversight and those scoring below 1.0 being required to post a letter of credit with the Department of Education.
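As a quick illustration of how those cutoffs work in practice, here is a small sketch in Python. This is my own rendering of the thresholds described above, not the Department of Education’s actual implementation, and the example scores are invented.

```python
def oversight_category(score: float) -> str:
    """Classify a financial responsibility score using the cutoffs described
    above. Sketch only; not the Department of Education's official logic."""
    if not -1.0 <= score <= 3.0:
        raise ValueError("Scores range from -1.0 to 3.0")
    if score >= 1.5:
        return "passing"
    if score >= 1.0:
        return "passing, but subject to additional oversight"
    return "failing: must post a letter of credit with the Department of Education"

# Invented example scores.
for s in (2.4, 1.2, 0.3):
    print(s, "->", oversight_category(s))
```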

Although these scores have been released to the public since the 2006-07 academic year and there was a great deal of dissatisfaction among private colleges regarding how the scores were calculated, there had been no prior academic research on the topic before I started my work in the spring of 2014. My question was simple: did receiving a poor financial responsibility score induce colleges to shift their financial priorities (either increasing revenues or decreasing expenditures) in an effort to avoid future sanctions?

But as is often the case in academic research, the road to a published article was far from smooth and direct. Getting rejected by two different journals took nearly two years and then it took another two years for this paper to wind its way through the review, page proof, and publication process at the Journal of Education Finance. (In the meantime, I scratched my itch on the topic and put a stake in the ground by writing a few blog posts highlighting the data and teasing my findings.)

More than four and a half years after starting work on this project, I am thrilled to share that my paper, “Do Financial Responsibility Scores Affect Institutional Behaviors?” is a part of the most recent issue of the Journal of Education Finance. I examined financial responsibility score data from 2006-07 to 2013-14 in this paper, although I tried to get data going farther back since these scores have been calculated since at least 1996. I filed a Freedom of Information Act request back in 2014 for the data, and my appeal was denied in 2017 on the grounds that the request to receive data (that already existed in some format!) was “too burdensome and expensive.” At that point, the paper was already accepted at JEF, but I am obviously still a little annoyed with how that process went.

Anyway, I failed to find any clear evidence that private nonprofit or for-profit colleges changed their fiscal priorities after receiving an unfavorable financial responsibility score. To some extent, this result made sense for private nonprofit colleges; these institutions tend to move fairly slowly, and many of their costs are sticky (such as facilities and tenured faculty). But for for-profit colleges, which generally tend to be fairly agile critters, the null findings were more surprising. There is certainly more work to do in this area (particularly given the changes in higher education that have occurred over the last five years), so I encourage more researchers to delve into this topic.

To aspiring researchers and those who rely on research in their jobs—I hope this blog post provides some insights into the scholarly publication process and all of the factors that can slow down the production of research. I started this paper during my first year on faculty and it finally came out during my tenure review year (which is okay because accepted papers still count even if they are not yet in print). Many papers move more quickly than this one, but it is worth highlighting that research is a pursuit for people with a fair amount of patience.

Some Thoughts on the Academic Peer Review Process

Like most research-intensive faculty members, I receive regular requests to review papers for legitimate scholarly journals. (My spam e-mail folder is also full of requests to join editorial boards for phony journals, but that’s another topic for another day.) Earlier this week, I was working on reviewing a paper submitted to The Review of Higher Education, one of the three main higher education field journals in the United States (Journal of Higher Education and Research in Higher Education are the other two). I went to check one of the submission guidelines on the journal’s website and was surprised to see that the journal is temporarily closed for new manuscript submissions to help clear a backlog of submissions.

After I shared news of the journal’s decision on Twitter, I received a response from one of the journal’s associate editors. Her explanation astonished me: the journal simply cannot find enough qualified reviewers to keep up with submissions.

This sets off all kinds of alarms. How can a well-respected journal struggle so much to get qualified reviewers that the initial peer review process stretches to six months or beyond? As someone who both submits to and reviews for a wide range of journals, I offer some thoughts below on how to streamline the academic peer review process.

(1) Editors should ‘desk reject’ a higher percentage of submissions. Since it can be difficult to find qualified reviewers and most respectable journals accept less than 20% of all submissions, there is no reason to send all papers out to multiple external reviewers. If a member of the editorial board glances through the paper and can easily determine that it has a very low chance of publication, the paper should be immediately ‘desk rejected’ and quickly returned to the author with a brief note about why it was not sent out for full review. Journals in some fields, such as economics, already do this and it is sorely needed in education to help manage workloads. It is also humane to authors, as they are not waiting several months to hear back on a paper that will end up being rejected. I have been desk rejected several times during my career, and it allowed me to keep moving papers through the publication pipeline as a tenure-track faculty member.

(2) Journals should consider rejecting submissions from serial free riders. The typical academic paper is reviewed by two or three external scholars in the peer review process, with more people potentially getting involved if the paper goes through multiple revise and resubmit rounds. This means that for every sole-authored paper that someone submits, that person should be prepared to review two or three other papers in order to keep the system in balance. But in practice, since journals prefer reviewers with doctoral degrees and graduate students need to submit papers in order to be eligible for academic jobs, those of us with doctoral degrees should probably plan on reviewing 3-4 papers for each sole-authored paper we submit, dividing that number accordingly based on the number of co-authors on our submissions (see the back-of-the-envelope sketch after this list). It’s okay to decline review invitations if the paper is outside your scope of knowledge, but otherwise scholars need to accept most invitations. Declining because we are too busy doing our own research—and thus further jamming the publication pipeline—is not acceptable, particularly for tenured faculty members. If journals publicly commit to rejecting submissions from serial free riders, there may be fewer difficulties finding reviewers.

(3) There needs to be some incentive for reviewers to submit in a timely manner. Right now, journals can only beg and plead to get reviewers to submit their reviews within a reasonable time period (usually 3-6 weeks). But in my conversations with journal editors, reviewers often fail to meet that timeline. In an ideal world, journal reviewers would actually get paid for their work, just as many foundations and scholarly presses pay a few hundred dollars for thorough reviews. Absent that incentive, it may be worth establishing some sort of priority structure that rewards those who review quickly with quick reviews of their own submissions.

(4) In some cases, there needs to be better vetting of reviews before they are sent to authors. Most reputable academic journals have relatively few problems with this, as this is the job of the editorial board. Reviews generally come with a letter from the editor explaining discrepancies among reviewers and which comments can potentially be ignored. But the peer review process at academic conferences has more quality control issues, potentially due to the large number of reviews that are requested (ten 2,000-2,500 word proposals is not uncommon). It seems like reviewers rush through these proposals and often lack knowledge in the subject matter. Limiting the number of submissions that any individual can make and carefully vetting conference reviewers could help with this concern.
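Returning to the reviewing arithmetic in point (2), here is the back-of-the-envelope sketch promised above. The specific numbers are illustrative assumptions rather than any journal’s official policy.

```python
def reviews_owed(submissions: int, avg_authors_per_paper: float,
                 reviewers_per_paper: float = 2.5) -> float:
    """Rough estimate of the reviews a scholar 'owes' the system each year.

    Each submission consumes roughly two to three external reviews, so an
    author's share of that cost is (reviewers per paper) divided by the
    number of authors, summed across submissions. Illustration only.
    """
    return submissions * reviewers_per_paper / avg_authors_per_paper

# A scholar who submits four papers a year with two authors each should plan
# on roughly five reviews over the same period.
print(round(reviews_owed(submissions=4, avg_authors_per_paper=2), 1))  # 5.0
```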

By helping to restrict the number of items that go out for peer review and providing incentives for people to fulfill their professional reviewing obligations, it should be possible to bring the peer review timeline down to a more humane 2-3 months rather than the 4-8 months that seems to be the norm in much of education. This is crucial for junior scholars trying to meet tenure requirements, but it will also help get peer-reviewed research out to the public and policymakers more quickly. Journals such as AERA Open, Educational Evaluation and Policy Analysis, and Economics of Education Review are models in quick and thorough peer review processes that the rest of the field can emulate.

New Experimental Evidence on the Effectiveness of Need-Based Financial Aid

My first experience doing higher education research began in the spring of 2008, when I (then a graduate student in economics) responded to an e-mail from an education professor at the University of Wisconsin who was looking for students to help her with an interesting new study. Sara Goldrick-Rab was co-leading an evaluation of the Wisconsin Scholars Grant (WSG)—a rare case of need-based financial aid being awarded to students from low-income families via random assignment. Over the past decade, the Wisconsin Hope Lab team published articles on the effectiveness of the WSG in improving on-time graduation rates among university students and in changing students’ work patterns.

A decade later, we were able to conduct a follow-up study to examine the outcomes of treatment and control group students who started college between 2008 and 2011. This sort of long-term analysis of financial aid programs has rarely been conducted—and the two best existing evaluations (of the Cal Grant and the West Virginia PROMISE program) are of programs with substantial merit-based components. Eligibility for the WSG was based solely on financial need (conditional on being a first-time, full-time student), making this the first long-term experimental evaluation of a need-based program.

Along with longtime collaborators from our days in Wisconsin (Drew Anderson of the RAND Corporation, Katharine Broton of the University of Iowa, and Sara Goldrick-Rab of Temple University), I am pleased to announce the release of our new working paper on the long-term effects of the WSG, timed to kick off the opening of the new Hope Center for College, Community and Justice at Temple University. We found some evidence that students who began at four-year colleges and were assigned to receive the WSG had improved academic outcomes. The positive impacts on degree completion for the initial 2008 cohort of students did fade out over a period of up to nine years, but the grant still helped students complete their degrees more quickly than the comparison group. Additionally, there was a positive impact on six-year graduation rates in later cohorts, with treatment students in the 2011 cohort being 5.4 percentage points more likely to graduate than the control group.

The grant generated clear increases in the percentage of students who both declared and completed STEM majors, even though the grant made no mention whatsoever of STEM and imposed no requirements regarding students’ majors. A second new paper by Katharine Broton and David Monaghan of Shippensburg University found that university students assigned to treatment were eight percentage points more likely to declare a STEM major, while our paper estimated a 3.6 percentage point increase in the likelihood of graduating with a STEM major. This strongly suggests that additional need-based financial aid can free students to pursue a wider range of majors, including ones that may require more expensive textbooks and additional hours spent in laboratory sessions.

However, the WSG did not generate across-the-board positive impacts. Impacts on persistence, degree completion, and transfer for students who began at two-year colleges were generally null, which could be due to the smaller size of the grant ($1,800 per year at two-year colleges versus $3,500 at four-year colleges) or the rather unusual population of first-time, full-time students attending mainly transfer-focused two-year colleges. We also found no effects of the grant on graduate school enrollment among students who started at four-year colleges, although this trend is worth re-examining in the future as people may choose to enroll after several years of work experience.

It has been an absolute delight to reunite with my longstanding group of colleagues to conduct this long-term evaluation of the WSG. We welcome any comments on our working paper and look forward to continuing our work in this area through the Hope Center.

Musings from a Midwest Road Trip

One of the best things about being a faculty member is the incredible flexibility during the summer. Although I am only on a nine-month contract and have to hustle for grant or contract funding to maintain a nice standard of living (here is what I did last summer), it’s great to be in almost complete control of my schedule for three months out of the year. I had the pleasure of spending much of early June on the road in the Midwest, mixing some time with my family and friends alongside more typical academic obligations. Here are some musings from 900 miles behind the wheel across some of the most beautiful scenery in America.

After some time with my parents, my wife and I went to Kansas City for a friend’s wedding. But since we are both Truman State University alumni, we had to make a stop at the Harry S. Truman presidential library in Independence, Missouri. In the midst of all of the exhibits (including the famous Zimmermann Telegram), there was a well-worn display on some aspects of Truman’s legacy that are still being debated today. Truman is well-known in higher education circles for the commission that he established, and many of these ideas keep popping up on a regular basis.

We then took a walk in downtown Kansas City, which has been revitalized over the last decade. (Ed policy friends: you’re going to love going to AEFP there next year!) One of the downtown attractions is the College Basketball Experience, which also hosts the National Collegiate Basketball Hall of Fame. I was struck by the graphic outside the building, which prominently featured a Creighton basketball player. This raises questions about whether players should be paid for their likenesses, even when the organization using the likeness is nonprofit.

After a gorgeous drive through corn and soybean fields (and listening to a near no-hitter on the radio), I was in Champaign, Illinois for a conference on state funding volatility in higher education hosted by the University of Illinois. Illinois knows something about that topic, but it was good to see a sense of normalcy (and construction cranes!) now that a second consecutive year of consistent state funding has come through. I presented my draft paper examining whether star research faculty members leave public research universities after state funding cuts—and I found little evidence that they do. (Thanks to Eric Kelderman for this nice writeup in The Chronicle!) I also enjoyed the art outside the conference room, including this nice sign that would look great in my office.

I was then back in New Jersey for a few days to chair a dissertation defense and cut the grass before heading to Minneapolis to give a talk on higher education accountability at the Lawlor Group’s Summer Seminar for administrators at private nonprofit colleges. I usually speak with policy and scholarly audiences, so it was great to learn from a different group of people over the course of two days. It has been great to travel around for a while over the last few weeks, but now it’s nice to be back in New Jersey for a prolonged stretch of time. Time to write!

New Research on Equity Provisions in State Performance Funding Policies

Previous versions of state performance-based funding (PBF) policies were frequently criticized for encouraging colleges to simply become more selective in order to receive more state funding (see a good summary of the research here). This raises potential equity concerns, as lower-income, first-generation, adult, and racial/ethnic minority students often need additional supports to succeed in college compared to their more advantaged peers.

With the support of foundations and advocacy organizations, the most recent wave of state PBF policies has often included provisions that encourage colleges to enroll traditionally underrepresented students. For example, Indiana now gives $6,000 to a college if a low-income student completes a bachelor’s degree; while this is far less than the $23,000 that the college gets if a student completes their degree in four years, it still provides an incentive for colleges to change their recruitment and admissions practices. Today, at least sixteen states provide incentives for colleges to serve underrepresented students.
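To see how such weights can add up for a single campus, here is a toy calculation using the two Indiana figures quoted above. The enrollment counts and the overall structure are invented for illustration; real PBF formulas include many more metrics and adjustments.

```python
# Toy illustration using the per-completion bonuses quoted above.
ON_TIME_COMPLETION_BONUS = 23_000    # per bachelor's degree completed in four years
LOW_INCOME_COMPLETION_BONUS = 6_000  # per bachelor's degree completed by a low-income student

def completion_funding(on_time_grads: int, low_income_grads: int) -> int:
    """Sum the two completion bonuses; real formulas are far more complex."""
    return (on_time_grads * ON_TIME_COMPLETION_BONUS
            + low_income_grads * LOW_INCOME_COMPLETION_BONUS)

# A hypothetical campus with 500 on-time graduates, 200 of them low-income:
print(f"${completion_funding(500, 200):,}")  # $12,700,000
```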

Given the growth of these equity provisions, it is not surprising that researchers are now turning their attention to these policies. Denisa Gandara of SMU and Amanda Rutherford of Indiana University published a great article in Research in Higher Education last fall looking at the effects of these provisions among four-year colleges. They found that the policies were at least somewhat effective in encouraging colleges to enroll more racial/ethnic minority and lower-income students.

As occasionally happens in the research world, multiple research teams were studying the same topic at the same time. I was also studying the same topic, and my article was accepted in The Journal of Higher Education a few days before their article was released. My article is now available online (the pre-publication version is here), and my findings are generally similar—PBF policies with equity provisions can at the very least help reduce incentives for colleges to enroll fewer at-risk students.

The biggest contribution of my work is how I define the comparison group in my analyses. The treatment group is easy to define (colleges that are subject to a PBF policy with equity provisions), but comparison groups often lump together colleges that face PBF without equity provisions and colleges that are not subject to PBF at all. By dividing those two types of colleges into separate comparison groups, I can dig deeper into how the provisions of performance funding policies affect colleges. And I did find some differences in the results across the two comparison groups, highlighting the importance of defining comparison groups carefully.
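For readers who like to see a research design expressed in code, here is a minimal sketch of the grouping logic described above. The data frame, column names, and values are hypothetical; the actual analysis in the article is considerably more involved.

```python
import pandas as pd

# Hypothetical institution-level data; column names and values are invented.
df = pd.DataFrame({
    "college":     ["A", "B", "C", "D"],
    "has_pbf":     [True, True, False, False],
    "equity_prov": [True, False, False, False],
})

def assign_group(row) -> str:
    if row["has_pbf"] and row["equity_prov"]:
        return "treatment: PBF with equity provisions"
    if row["has_pbf"]:
        return "comparison 1: PBF without equity provisions"
    return "comparison 2: no PBF policy"

df["group"] = df.apply(assign_group, axis=1)
print(df[["college", "group"]])
```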

Much more work still needs to be done to understand the implications of these new equity provisions. In particular, more details are needed about which components are in a state’s PBF system, and qualitative work is sorely needed to help researchers and policymakers understand how colleges respond to the nuances of different states’ policies. Given the growing group of scholars doing research in this area, I am confident that the state of PBF research will continue to improve over the next few years.

New Research on Brain Drain and Recent College Graduates

As I discussed in my previous post, I believe there is value in education scholars using social media in spite of the concerns that being active on venues like Twitter can raise. One example of this occurred last April, when Doug Webber of Temple University ran some numbers from the American Community Survey looking at the percentage of young college graduates who left New York (in the context of the state’s proposed Excelsior Scholarship program). The numbers got quite a bit of attention in a very nerdy portion of higher ed Twitter and led me to encourage Doug to write up the results.

He then reached out to me about working on the paper with him, which ended up being a lot of fun to write. After going through the peer review process (one substantive and one minor round of changes), our resulting article is now online at Educational Researcher. (And a big kudos to the ER reviewers and editorial team for taking the paper from initial submission to appearing online in just eight months!)

We ended up looking at state-level interstate mobility rates among young (age 22-24) bachelor’s degree recipients using ACS data, focusing on the 2005-2015 period to examine pre-recession and post-recession patterns. Overall mobility rates dropped from 12.7% in 2005 to 10.4% in 2015, which is a surprising finding given that people have historically tended to move at higher rates during economic downturns. We found quite a bit of variation across states in net interstate mobility rates both pre-recession (2005-07) and post-recession (2013-15), as summarized in the table below.

State-level gain or loss of young adults (age 22-24) with bachelor’s degrees, in percent
State 2005-2007 2013-2015
Alabama -4.0 -4.6
Alaska 3.9 -5.0
Arizona 4.2 -0.5
Arkansas -1.4 -2.7
California 3.9 3.7
Colorado 0.7 8.0
Connecticut -2.3 -4.1
Delaware -17.5 -7.2
District of Columbia 20.0 19.0
Florida 2.6 1.0
Georgia 6.5 -1.0
Hawaii 7.6 8.1
Idaho -3.9 -10.8
Illinois 3.6 3.4
Indiana -12.9 -7.2
Iowa -5.1 -8.1
Kansas -10.3 -4.6
Kentucky -1.2 -2.8
Louisiana -8.3 3.4
Maine -12.5 -8.7
Maryland 4.9 -1.5
Massachusetts 0.7 2.1
Michigan -8.7 -5.6
Minnesota 1.9 -1.2
Mississippi -2.3 -10.8
Missouri -0.7 -2.6
Montana -23.4 -13.3
Nebraska 3.6 -4.3
Nevada 13.3 10.0
New Hampshire -4.6 -10.0
New Jersey 3.0 -3.4
New Mexico 4.3 2.1
New York -0.2 -0.3
North Carolina 3.6 4.2
North Dakota -9.0 -1.8
Ohio -5.9 -3.5
Oklahoma -5.8 -4.4
Oregon -2.1 1.4
Pennsylvania -6.2 -6.1
Rhode Island -19.1 -11.3
South Carolina -2.7 -2.8
South Dakota -8.0 0.0
Tennessee -1.6 1.9
Texas 3.5 3.4
Utah -12.4 -3.7
Vermont -15.4 -10.9
Virginia 3.6 2.8
Washington 6.2 6.8
West Virginia -12.7 -1.9
Wisconsin -3.3 -0.2
Wyoming 6.1 3.5
Notes:
(1) The percentages reflect net gains or losses relative to the number of 22-24 year olds with a bachelor’s degree who were living in the state in a given year.
(2) These values represent averages across the years referenced above.
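For anyone curious how a net gain/loss percentage like those in the table can be calculated, here is a simplified sketch. All numbers below are invented, and the paper’s actual ACS-based measure handles survey weights and residency definitions more carefully.

```python
def net_mobility_rate(moved_in: int, moved_out: int, resident_grads: int) -> float:
    """Net gain/loss of young bachelor's degree holders as a percentage of a
    state's stock of 22-24 year olds with a BA. Simplified illustration only."""
    return 100 * (moved_in - moved_out) / resident_grads

# Invented example: 4,000 young graduates move in and 5,200 move out, against
# a base of 60,000 resident graduates, for a net loss of 2 percent.
print(round(net_mobility_rate(4_000, 5_200, 60_000), 1))  # -2.0
```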

This article is a great example of how a willingness to share some preliminary data on social media can lead both to a (hopefully) policy-relevant publication and to the chance to work with a new collaborator. I can’t say enough great things about working with Doug—and I hope to have more of these types of collaborations in the future!