What Will the College Opportunity Summit Mean for Higher Education?

Today, the White House is hosting a second College Opportunity Summit, following up on a summit held in January that was roundly criticized for focusing on elite institutions. Both this summit and the previous summit involved colleges and other organizations making pledges designed to improve college access and completion rates, particularly for underrepresented populations and in STEM. The first round of pledges (and progress made) and the second round of pledges can both be found on the White House’s website.

Several hundred people, including administrators, policy analysts, and researchers, are at today’s summit, which has the potential to generate useful discussions. But it could also turn into a stereotypical academic conference, where plenty gets discussed but no action is ever taken. So what could the summit mean for higher education?

The first thing that jumps out from the list of pledges is the sheer number: over 600 actions that colleges, associations, and other organizations plan to take, which is admirable. But as a researcher, I see two key questions:

(1) Would colleges and organizations have adopted these policies even without a formal pledge? In research language, this is the counterfactual: what would have happened in the absence of the policy being studied. The pledges could simply be things that colleges already planned to do (with good PR and tickets to the White House tree lighting as a bonus), or they could be new goals set in response to the White House’s call for commitments. When considering the impact of this summit, researchers should talk to some college administrators (while promising confidentiality) to see whether the pledges were already in the works or a new development.

(2) Will these pledges improve student outcomes? Answering this requires careful thinking about program design and data collection, so that experimental or quasi-experimental methods can be combined with in-depth interviews to examine program impacts and potential moderating and mediating factors. The Institute of Education Sciences announced an additional $10 million in funding for postsecondary research, but that amount won’t go far when funding an intervention and conducting an evaluation can easily cost several million dollars.

I hope the summit helps colleges and organizations develop partnerships similar to the University Innovation Alliance, the Student Achievement Measure, and other organizations that link colleges with similar goals to each other. But it’s worth keeping in mind that many of these pledges are likely things that colleges planned to do anyway.

Lessons Learned as a First-Year Assistant Professor

When I was finishing my dissertation at the University of Wisconsin-Madison and going on the academic job market, I got a lot of great advice from my dissertation committee, other academics, and friends from around the country about how to survive the first year. The typical advice was to work really hard, be nice to everyone, and do everything possible to lay the groundwork for the rest of my career while somehow making it to the middle of May.

I got a great job as a tenure-track assistant professor of higher education at Seton Hall University, and it’s safe to say that the first year flew by. It feels like I moved to New Jersey just a few weeks ago, yet here I am, taking a break between rounds of grading student papers to write down what I learned from my first year on the tenure track. The three basic principles I outlined above definitely still hold true, but I want to take a minute to share some other lessons I learned this year. (Note that some of this advice is most applicable to tenure-track faculty at institutions where research is a key expectation for tenure.)

(1) Try to get courses prepared as far ahead of time as possible. New course preparations take a lot of time. I estimate that I spent 30-40 hours preparing the syllabus for each of my solo course preparations, including finding the assigned articles, thinking about potential assignments, and posting materials to Blackboard. I then spent about 6-8 hours preparing lecture notes for a typical week’s class. That is a big upfront cost, but I’ll only need to spend a fraction of that time updating the course for next year.

There are three major concerns with advance course preparation. First, course assignments can change, so don’t invest too much time in a course until it’s definitely yours. Second, you may not have access to your new institution’s library and technology resources until close to the start of the semester; if that might be a concern, ask your new department whether they can help. Finally, build some flexibility into the course in case your expectations about the class’s prior knowledge or your pacing turn out to be off. I built a flexible day into the schedule this spring semester, which came in handy when New Jersey got 62 inches of snow during the winter.

(2) Budget blocks of research time far in advance. Teaching will take up a lot of time during the first year, and service responsibilities such as advising and committee work will vary considerably across colleges. But research cannot be neglected during the first year, particularly given the amount of time between submitting an article to a journal and finally seeing it in print (two years is not uncommon). Keep a close eye on submission deadlines for conferences and small grants, as these proposals are good ways to continue developing a research agenda and meet more senior researchers in your field.

One word of caution: Although conference proposals don’t take that much time to write, keep in mind that the papers must be written if the proposals are accepted. I submitted three paper proposals last fall for conferences this spring, and was pleasantly surprised to see all of them accepted. The drawback was that I had to draft three papers in a six-week period, which was a lot of work. However, my previous work to get ahead of the curve on course preparation allowed me the time to write the papers.

Some people like to dedicate certain days of the week and/or times of day to focus on research and writing. I would advise not trying to write in more than two-hour blocks due to diminishing returns after a long period of concentration, but people quickly find their own style. What matters more is finding the time of day when you have the most energy and placing your most cognitively demanding tasks (research or lecture preparation) in those periods. Save the tedious data work or editing for another time.

(3) Make time to be a public scholar, but proceed with caution. Many of us in academia entered the profession because of a strong interest in shaping public discourse on important topics. I’m no exception: I have a strong interest in providing policy-relevant research in the areas of higher education finance, accountability, and policy. Academics therefore tend to be defensive about criticisms that we don’t care about public policy. My blog post on the topic in February got a large amount of traffic and was covered by other media outlets.

With that being said, proceed into the public arena with caution. Make sure your statements can be supported with research; ideally, they should fit well into your research agenda. Not every department is supportive of young faculty members who engage in policy discussions, so talk with your colleagues to get their thoughts. I’m thankful to be in a very supportive department and university, which allows me to engage policymakers while advancing my teaching and research.

(4) Plan goals for the summer after the first year and beyond. The summer after the first year is certainly a good time to take a break. It’s been a busy first year and many new professors haven’t had a proper break for years. But that summer is also crucial for thinking about grant applications, planning new projects, and looking ahead to the tenure review process. Given the long arc of many projects, it’s not unreasonable to expect a project that is started right after the first year to bear fruit not long before the tenure application is submitted.

Friends and colleagues in academia, what other suggestions would you have for new faculty?

Come See Me at AERA!

I’m involved in two presentations at this weekend’s gigantic American Educational Research Association conference in Philadelphia. (And I’m not kidding about the gigantic part. There are often more than 100 sessions going on at any particular time!)

“Making Sense of Loan Aversion: Evidence from Wisconsin.” (Friday, 2:15-3:45, Marriott, 407) I’ve worked on this paper with Sara Goldrick-Rab of the University of Wisconsin-Madison (this year’s recipient of an early career award from AERA), who will be giving the presentation. She will discuss our work examining loan-taking patterns among a sample of Pell Grant recipients in Wisconsin.

“Financial Need and Income Volatility among Students with Zero Expected Family Contribution.” (Sunday, 10:35-12:05, Marriott, Fourth Level, Franklin 11) In this paper, I look at students with a zero EFC using both nationally representative data and student-level FAFSA data from nine colleges and universities. I examine trends in zero EFC receipt and break zero EFC students into groups based on how the EFC was calculated (full FAFSA, simplified FAFSA excluding assets, and automatic zero EFC). Here are the slides from this presentation.

I hope to see you at AERA, and please send along suggestions for sessions I should attend between Friday and Sunday!

Come See Me at AEFP!

I’m presenting two papers at the annual conference of the Association for Education Finance and Policy (AEFP) this week in San Antonio. Below are short descriptions of the papers that I’ll be presenting, along with information about the time and room location.

Are Federal Allocations for Campus-Based Financial Aid Programs Equitable and Effective? (Thursday at 2:45 PM, Conference Room 4, Third Floor)

Abstract: Two federal campus-based financial aid programs, the Supplemental Educational Opportunity Grant (SEOG) and the Federal Work-Study program (FWS), combine to provide nearly $2 billion in funding to students with financial need. However, the allocation formulas have changed little since 1965, resulting in community colleges and newer institutions getting much smaller awards than longstanding private colleges with high costs of attendance. I document the trends in campus-level allocations over the past two decades and explore several different methods to reallocate funds based on current financial need while limiting the influence of high-tuition colleges. I show that allocation formulas that count a modest amount of tuition toward financial need reallocate aid away from private nonprofit colleges and toward public colleges and universities.
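
To make the reallocation idea concrete, here is a minimal sketch of the kind of capped-need formula the paper explores; the campuses, dollar figures, and the $6,000 tuition cap below are hypothetical values chosen purely for illustration, not the paper’s actual data or parameters.

```python
# Illustrative sketch of a need-based reallocation that caps how much tuition
# counts toward financial need. Campuses, dollar figures, and the $6,000 cap
# are hypothetical, not the paper's actual data.

TUITION_CAP = 6_000          # only this much tuition counts toward "need"
NON_TUITION_COSTS = 10_000   # living costs, books, etc. (assumed flat here)
TOTAL_POT = 2_000_000_000    # roughly the combined SEOG and FWS funding

campuses = {
    # name: (students with need, tuition, average EFC among those students)
    "Community College A": (20_000, 4_000, 1_000),
    "Public University B": (15_000, 9_000, 2_500),
    "Private College C":   (2_000, 45_000, 5_000),
}

def capped_need(students, tuition, avg_efc):
    """Total campus need, counting at most TUITION_CAP of tuition per student."""
    per_student = max(min(tuition, TUITION_CAP) + NON_TUITION_COSTS - avg_efc, 0)
    return students * per_student

needs = {name: capped_need(*vals) for name, vals in campuses.items()}
total_need = sum(needs.values())

for name, need in needs.items():
    share = need / total_need
    print(f"{name}: {share:.1%} of the pot (${share * TOTAL_POT:,.0f})")
```

The cap is what prevents a high-tuition campus from claiming an outsized share of the pot simply by charging more.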

A Longitudinal Analysis of Student Fees: The Roles of States and Institutions (Saturday at 9:45 AM, Conference Room 12, Third Floor)

Abstract: Student fees are used to finance a growing number of services and programs at colleges and universities, including core academic functions, and make up 20% of the total cost of tuition and fees at the typical four-year public college. Yet little research has been conducted to examine state-level and institutional-level factors that may affect student fee charges. In this paper, I use state-level data on tuition and fee policy, the role of state governments and higher education systems, and partisan political balance combined with institutional-level data on athletics programs and selectivity to create a panel from the 1999-2000 to 2011-12 academic years. I find that some state-level factors that would be expected to reduce student fees, such as fee caps, do reduce fees at four-year public colleges, but giving the legislature authority to set fees results in higher fees. Additional state grant aid and higher-level athletics programs are also associated with higher fees in my primary model.
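
For readers curious about the general shape of the analysis, below is a minimal sketch of a two-way fixed effects panel model of the kind that could be estimated here; the variable names and simulated data are my own stand-ins, not the paper’s actual specification or data.

```python
# Illustrative two-way fixed effects model for student fees; the variables
# and simulated data are invented stand-ins, not the paper's specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for college in range(50):
    cap_year = rng.integers(2000, 2012)  # year a fee cap (if any) takes effect
    leg_year = rng.integers(2000, 2012)  # year the legislature takes over fee setting
    for year in range(2000, 2012):
        fee_cap = int(college % 2 == 0 and year >= cap_year)
        legislature_sets_fees = int(college % 5 == 0 and year >= leg_year)
        d1_athletics = int(college % 3 == 0 and year >= 2006)
        rows.append({
            "college": college,
            "year": year,
            "fee_cap": fee_cap,
            "legislature_sets_fees": legislature_sets_fees,
            "d1_athletics": d1_athletics,
            # Toy data-generating process: caps lower fees, other factors raise them.
            "log_fees": 7.0 - 0.10 * fee_cap + 0.08 * legislature_sets_fees
                        + 0.05 * d1_athletics + 0.02 * (year - 2000)
                        + rng.normal(scale=0.05),
        })
df = pd.DataFrame(rows)

# College and year fixed effects absorb time-invariant campus traits and
# common shocks; standard errors are clustered at the college level.
model = smf.ols(
    "log_fees ~ fee_cap + legislature_sets_fees + d1_athletics"
    " + C(college) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["college"]})
print(model.params[["fee_cap", "legislature_sets_fees", "d1_athletics"]])
```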

I welcome any feedback you may have on either of these papers, as they are both preliminary works that still need polishing at the very least. I hope to see you at AEFP!

New Policy Brief on College Ratings

I am pleased to announce the release of my newest policy brief, “Moving Forward with Federal College Ratings: Goals, Metrics, and Recommendations” through my friends at the Wisconsin Center for the Advancement of Postsecondary Education (WISCAPE). In the brief, I outline the likely goals of the Obama Administration’s proposed Postsecondary Institution Ratings System (PIRS), discuss some potential outcome measures, and provide recommendations for a fair and effective ratings system given available data. I welcome your comments on the brief!

Are Academics Public Intellectuals? (And What Can We Do?)

The Sunday New York Times included an opinion piece by Nicholas Kristof with the title “Professors, We Need You!” In it, Kristof argued that the vast majority of faculty do a poor job of connecting with the media and policymakers and thus fail to communicate the importance of their work beyond the proverbial ivory tower. Perhaps the most damning statement in the piece is Kristof’s assertion that “there are, I think, fewer public intellectuals on American university campuses today than a generation ago.”

Some of Kristof’s statements about the disincentives to public engagement are certainly true, at least for some faculty at some institutions. Tenure-track faculty are often judged by the number of peer-reviewed publications in top journals, at the expense of public service and publishing in open-access journals. The increased specialization of many faculty members also makes communicating with the public more difficult, given the often technical nature of our work. Faculty who are not on the tenure track face an additional set of concerns in engaging with the public because of their often unstable employment situations.

With those concerns being noted, I think that Kristof is providing a somewhat misguided view of faculty engagement. Some (but not enough) academics, regardless of their employment situation, do make the extra effort to be public as well as private intellectuals. (If you’re reading this blog post, I’ve succeeded to at least some extent.) The Internet lit up with complaints from academics about Kristof’s take, which are well-summarized in a blog post by Chuck Pearson, an associate professor at Virginia Intermont College. He also created the #engagedacademics hashtag on Twitter, which is worth a look.

While I would love to see elite media outlets like the New York Times reach out beyond their usual list of sources at the most prestigious institutions, I don’t see that as tremendously likely to happen. So what can academics do in order to get their work out to policymakers and the media? Here are a few suggestions based on my experiences, which have included a decent amount of media coverage for a first-year assistant professor:

1. Work on cultivating a public presence. Academics who are serious about being public intellectuals need a strong public presence. If your institution supports a professional website under the faculty directory, be sure to set one up. Otherwise, use Twitter, Facebook, or blogging to create connections with other academics and the general public. One word of caution: if you have strong opinions on other topics, consider keeping separate personal and professional accounts.

2. Try to reach out to journalists. Most journalists are available via social media, and some of them are more than willing to engage with academics doing work of interest to their readers. Providing useful information to journalists and responding to their tweets can lead to becoming a source for their articles. Help a Reporter Out (HARO), which sends out regular e-mails about journalists seeking sources on certain topics, is a good resource for academics in some disciplines. I have used HARO to get several interviews in various media outlets regarding financial aid questions.

3. Work through professional associations and groups. Academics who belong to professional associations can potentially use the association’s connections to advance their work. I am encouraged by associations like the American Educational Research Association, which highlights particularly relevant papers through its media outreach efforts. Another option is to connect with other academics with similar goals. An example of this is the Scholars Strategy Network, a network of “progressive-minded citizens” working to get their research out to the public.

4. Don’t forget your campus resources. If your college or university has a media relations person or staff, make sure to reach out to them as soon as possible. This may not be appropriate for all research topics, but colleges tend to like to highlight faculty members’ research—particularly at smaller institutions. The media relations staff can potentially help with messaging and making connections.

While Kristof’s piece overstates the problem that faculty face in being viewed as public intellectuals, it is a worthwhile wakeup call for us to step up our efforts at public engagement. Perhaps Kristof will turn his op-ed column over to some publicly engaged academics to highlight successful examples?

[UPDATE: Thanks to The Chronicle of Higher Education for linking to this piece. Readers, I would love to get your comments on my post and your suggestions on how to engage the media and public!]

Is the Term “College Ratings” Toxic?

In what will come as a surprise to few observers, much of the higher education community isn’t terribly fond of President Obama’s plan to develop a college ratings system for the 2015-16 academic year. An example of this is a recently released Inside Higher Ed/Gallup survey of college provosts and chief academic officers. Only a small percentage of the 829 individuals who returned surveys were supportive of the ratings and thought they would be effective, as shown below:

  • 12% of provosts agree the ratings will help families make better comparisons across institutions.
  • 12% of provosts agree the ratings will reflect their own college’s strengths.
  • Just 9% agree the ratings will accurately reflect their own college’s weaknesses.

There is some variation in support by type of college. Provosts at for-profit institutions and public research universities tended to offer more support, while those at private nonprofit institutions were almost unanimous in opposition. But regardless of whether provosts like the idea of ratings, the plan seems to be full steam ahead.

The Association of Public and Land-Grant Universities (APLU) took a productive step in the ratings conversation by releasing their own plan for accountability and cost-effectiveness. This plan centers on three components that could be used to allocate financial aid to colleges: risk-adjusted retention and graduation rates, employment/graduate degree rates, and default/loan repayment rates. Under APLU’s proposal, colleges could fall into one of three groups: a top tier that receives bonus Title IV funds, a middle tier that is held harmless, and a bottom tier that loses some or all Title IV funds.
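
As a rough illustration of how a tiered system like this might operate, here is a small sketch; the metric weights, cutoffs, and scores below are my own hypotheticals, not part of APLU’s proposal.

```python
# Hypothetical sketch of a three-tier accountability scheme in the spirit of
# the APLU proposal; the weights, cutoffs, and scores are invented.

def composite_score(grad_rate, employment_rate, repayment_rate):
    """Equal-weight average of three risk-adjusted outcome measures (0-1 scale)."""
    return (grad_rate + employment_rate + repayment_rate) / 3

def assign_tier(score, top_cutoff=0.75, bottom_cutoff=0.45):
    if score >= top_cutoff:
        return "top tier: receives bonus Title IV funds"
    if score >= bottom_cutoff:
        return "middle tier: held harmless"
    return "bottom tier: loses some or all Title IV funds"

colleges = {
    "College X": (0.85, 0.80, 0.90),
    "College Y": (0.55, 0.65, 0.60),
    "College Z": (0.30, 0.45, 0.35),
}

for name, rates in colleges.items():
    score = composite_score(*rates)
    print(f"{name}: composite {score:.2f} -> {assign_tier(score)}")
```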

To me, that sounds like a ratings system. But APLU took care not to call its plan a ratings system, and viewed the Administration’s plans as “extremely difficult to structure.” The phrase “college ratings” seems to have become toxic; rather than call for a simplified set of ratings, APLU discussed the use of “performance tiers.” This sounds a little like the Common Core debate in K-12 education, in which some states have considered renaming the standards in an attempt to reduce opposition.

It will be interesting to see how the discussion on college ratings moves forward over the next several weeks, particularly as more associations either offer their plans or decry the entire idea.  The technical ratings symposium previously scheduled for January 22 will now occur on February 6 on account of snow, and I’ll be presenting my thoughts on how to develop a ratings system for postsecondary education. I’ll post my presentation on this blog at that time.

The 2013 Higher Education Not Top Ten List

Yesterday, I put out my top-ten list of higher education policy and finance issues from 2013. And today, I’m back with a list of not-top-ten events from the year (big thanks to Justin Chase Brown for inspiring me to write this post). These are events that left me shaking my head in disbelief or wondering how someone could fail so dramatically.

(Did I miss anything? Start the discussion below!)

10. Monsters University isn’t real. The higher education community was abuzz this summer with the premiere of Pixar’s newest movie about one of the few universities outside Fear Tech specializing in scaring studies. The Monsters University website is quite good, and as Jens Larson at U of Admissions Marketing notes, it’s hard to distinguish from many Title IV-participating institutions. I’ll use this blog post to announce my willingness to give a lecture or two at Monsters University. (As an aside, since the two main characters didn’t graduate, their post-college success may not help MU’s scores in a college rating system.)

9. Brent Musburger set men back at least five decades in the course of 30 seconds. His public ogling of the girlfriend of Alabama quarterback A.J. McCarron during January’s BCS championship game instantly became a YouTube sensation. Musburger shouldn’t have listened to his partner in The Waterboy, Dan Fouts, who urged him to not hold anything back in the last game of the season. McCarron, on the other hand, is preparing to play Oklahoma in the Sugar Bowl on January 2.

8. Rankings and ratings are not the same thing. While college leaders tend not to like the Obama Administration’s proposed Postsecondary Institution Ratings System, it is important to emphasize the difference between rankings and ratings. Rankings assign a unique position to each institution (like the college football or basketball polls), while ratings lump colleges into broad categories (think A-F grades). Maybe because I work on college rankings, I’m particularly annoyed by the confusion. In any case, it’s enough to make my list.

7. Mooooove over: The College Board has another rough year. This follows a rough 2012 for the publisher of the SAT, as more students took the ACT than the SAT for the first time last year. But in 2013, the redesign of the SAT was pushed back from 2015 to 2016, giving the ACT more time to gain market share. The College Board followed that up with a head-scratching example of “brand-ing,” passing out millions of cow stickers to students taking the PSAT. As if that weren’t enough, the College Board also runs the CSS Profile, a supplemental (and not free) application for financial aid required by many expensive institutions. Rachel Fishman at New America has written extensively about concerns with the Profile.

6. Gordon Gee is the most interesting man in higher education. The well-traveled university president began 2013 leading Ohio State University, but left the post this summer after his 2012 comments disparaging Notre Dame, Catholic priests, and the ability of the Southeastern Conference to read came to light. Yet, he and his large bowtie collection will be heading to West Virginia University this spring as he assumes the role of interim president. There is still no word if the Little Sisters of the Poor will show up on WVU’s 2014 football schedule.

5. Rate My Professor is a lousy measure of institutional teaching quality. I’m not going to fully dismiss Rate My Professor, as I do believe it can be correlated with an individual professor’s teaching quality. But a Yahoo! Finance piece claiming to have identified the 25 colleges with the worst professors crossed into the absurd. I quickly wrote a response to that piece, noting that controlling for a student’s grade and the difficulty of the course is essential in order to isolate teaching quality. This was by far my most-viewed blog post of 2013.

4. Elizabeth Warren’s interest rate follies. The Democratic senator from Massachusetts became even more of a progressive darling this spring when she announced a plan to tie student loan interest rates to the Federal Reserve’s overnight borrowing rate of 0.75%. Unfortunately, this plan made no sense on several dimensions. While overnight borrowing carries almost no risk, student loans (repaid over a ten-year period) carry considerable risk. Additionally, if interest rates were set this low, the forgone revenue would have to come from somewhere else. I would much rather see the subsidy go to students upfront through larger Pell Grants than through lower interest payments after leaving college. Fortunately, Congress listened to smart people like Jason Delisle at New America, and her plan went nowhere.

3. The Common Application fails early applicants. The Common Application, used by a substantial number of elite colleges, did not work for some students applying in October and November. The reason: the Common App’s new software didn’t work, and the organization failed to keep the previous version available as a fallback. Although this didn’t affect the vast majority of students, who aspire to attend less-selective institutions, it certainly got the chattering classes talking.

2. The federal government shut down and budget games ensued all year long. The constant partisan battle culminated with a sixteen-day shutdown in October, bringing much of the Department of Education to a screeching halt. While the research community used Twitter to trade downloaded copies of IPEDS data and government reports, other disruptions were more substantial. 2013 also featured sequestration of some education spending, although it looks like the budget process might return to regular order for the next two years.

1. Georgetown Law finds a way to stick taxpayers with the entire cost of law school. It is no secret that law school is an expensive proposition, with six-figure debt burdens becoming the norm at many institutions. But some of those loans can be forgiven if students pursue public service careers for a decade, through a program designed to help underpaid and overworked folks like public defenders and prosecuting attorneys.

Georgetown’s Loan Repayment Assistance Program advertises that “public interest borrowers might not pay a single penny on their loans—ever!” To do this, the law school increased tuition to cover the cost of 10 years’ worth of loan payments under income-based repayment for students making under $75,000 per year. Students take out Grad PLUS loans to fund this upfront but never have to pay a dime of those loans back, as Georgetown makes the payments. Jason Delisle and Alex Holt, who busted this scheme wide open this summer, estimate that students will have over $150,000 in loans forgiven and put on the backs of taxpayers. Although Georgetown tries to defend the practice as good for society, that argument is extremely hard to make.

Honorable mentions: #Karma, lousy attacks on performance-based funding research, financial stability of athletics at Rutgers and Maryland, and parking at 98% of campuses.

Don’t Dismiss Performance-Based Funding Research

Performance-based funding (PBF), in which at least a small portion of state higher education appropriations are tied to outcomes, is a hot political topic in many states. According to the National Conference of State Legislatures and work by Janice Friedel and others, 22 states have PBF in place, seven more are transitioning to PBF, and ten more have discussed a switch.

The theory of PBF is simple: if colleges are incentivized to focus on improving student retention and graduation rates, they will redirect effort and funds from other areas to do so. PBF should work if two conditions hold:

(1) Colleges must currently be using their resources in ways that do not strongly correlate with student success, a point of contention with many institutions. If colleges are already operating in a way that maximizes student success, then PBF will not have an impact. PBF could also have negative effects if colleges end up using resources less effectively than they currently are.

(2) The expected funding tied to performance must be larger than the expected cost of changing institutional practices. Most state PBF systems currently tie only small amounts of state appropriations to outcomes, which could mean that the cost of making changes exceeds the expected benefits. Colleges also need to be convinced that PBF systems will be around for the long run, rather than lasting only until the next governor ends the plan or a state budget crisis cuts any funds for PBF. Otherwise, they may choose to wait out the current PBF system and not make any changes; a stylized version of this calculation is sketched below. Research by Kevin Dougherty and colleagues through the Community College Research Center highlights the unstable nature of many PBF systems.
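
Here is that stylized sketch of condition (2); every dollar figure and probability below is hypothetical and chosen only to illustrate why small pots of performance funding, combined with doubts about program longevity, can mute the incentive to change.

```python
# Stylized expected-value check of condition (2); every number is hypothetical.

funds_at_stake = 1_500_000     # annual appropriation tied to performance
prob_pbf_survives = 0.5        # college's belief the PBF system outlasts politics
planning_horizon = 5           # years over which the investment must pay off
cost_of_changes = 4_000_000    # advising, course redesign, data systems, etc.

expected_benefit = funds_at_stake * prob_pbf_survives * planning_horizon

print(f"Expected benefit: ${expected_benefit:,.0f} vs. cost: ${cost_of_changes:,.0f}")
if expected_benefit > cost_of_changes:
    print("Condition (2) holds: changing institutional practices pays off.")
else:
    print("Condition (2) fails: the college may rationally wait out the PBF system.")
```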

For these reasons, the expected impacts of state PBF plans on student outcomes may not be positive. A recent WISCAPE policy brief by David Tandberg, an assistant professor at Florida State University, and Nicholas Hillman, an assistant professor at the University of Wisconsin-Madison, examines whether PBF plans appear to affect the number of associate’s and bachelor’s degrees awarded by institutions in affected states. Their primary findings are that although some states saw statistically significant gains in degrees awarded (four at the four-year level and four at the two-year level), other states saw significant declines (four at the four-year level and five at the two-year level). Moreover, PBF was most effective in inducing additional degree completions in states with long-running programs.

The general consensus in the research community is that more work needs to be done to understand the effects of state performance-based funding policies on student outcomes. PBF policies differ considerably by state, and it is too early to evaluate the impact of policies on states that have recently adopted the systems.

For these reasons, I was particularly excited to read the Inside Higher Ed piece by Nancy Shulock and Martha Snyder entitled, “Don’t Dismiss Performance Funding,” in response to Tandberg and Hillman’s policy brief. Shulock and Snyder are well-known individuals in the policy community and work for groups with significant PBF experience. However, their one-sided look at the research and cavalier assumptions about the authors’ motives upset me to the point that writing this response became necessary.

First of all, ad hominem attacks about the motives of well-respected researchers should never be a part of a published piece, regardless of the audience. Shulock and Snyder’s reference to the authors’ “surprising lack of curiosity about their own findings” is both an unfair personal attack and untrue. Tandberg and Hillman not only talk about the eight states with some positive impacts, they also discuss the nine states with negative impacts and a larger number of states with no statistically significant effects. Yet Shulock and Snyder do not bother mentioning the states with negative effects in their piece.

Shulock and Snyder are quite willing to attack Tandberg and Hillman for a perceived lack of complexity in their statistical model, particularly its lack of controls for “realism and complexities.” In the academic community, criticisms like this are usually followed by suggestions for how to improve the model given available data. Yet they offer none.

It is also unusual to see a short policy brief like this receive such a great degree of criticism, particularly when the findings are null, the methodology is not under serious question, and the authors are assistant professors. As a new assistant professor myself, I hope that this sort of criticism does not deter untenured faculty and graduate students from pursuing research in policy-relevant fields.

I teach higher education finance to graduate students, and one of the topics this semester was performance-based funding and accountability policy. If Shulock and Snyder submitted their essay for my class, I would ask for a series of revisions before the end of the semester: they need to provide empirical evidence in support of their position and to describe the work done by Tandberg and Hillman accurately. Tandberg and Hillman deserve to have their research fairly characterized in the public sphere.

Simplifying the FAFSA–How Far Can We Go?

It is painfully obvious to students, their families, and financial aid administrators alike that the current system of determining federal financial aid eligibility is incredibly complex and time-consuming. Although there should be broad support for changes to the financial aid system, any progress has been halting at best. I have devoted much of my time to researching and discussing potential changes to the financial aid system. Below is some of my work, going from relatively minor to major changes.

I’ve been working on an ongoing study with the National Association of Student Financial Aid Administrators examining the extent to which students’ financial aid packages would change if the FAFSA used income data from one year earlier than it currently does (the “prior-prior year”). Although a full report from this study won’t be out until sometime next month, here is a nice summary of the work from the Chronicle of Higher Education. The key point is that, since family resources don’t change much for students with the greatest financial need, students could file the FAFSA several months earlier using prior-prior year income data without a substantial change in aid targeting.

Under a prior-prior year system, students would still have to file the FAFSA each year. Given that many students don’t see much income volatility, there is a case to be made that students should only have to file the FAFSA once, at the beginning of college, unless their family or financial circumstances change considerably. In a piece hot off the virtual presses at the Chronicle, Sara Goldrick-Rab and I discuss why it would be better for many students to only have to file the FAFSA once. I would like to know more about the costs and benefits of such a program (weighing the benefits of reduced complexity and lower administrative compliance costs against the likelihood of higher aid spending), but the net fiscal impact is likely to be small, and the change could even save money on net.

So let’s take this one step further. Do we even need to have all students file the FAFSA? Sara and I have looked at the possibility of automatically granting students the maximum Pell Grant if anyone in their family qualifies for means-tested benefits (primarily free and reduced-price lunches). We detail the results of our fiscal analysis and simulation in an Institute for Research on Poverty working paper, where we find that such a program is likely to remain reasonably well targeted and pass a cost-benefit test in the long run.
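
As a minimal sketch of the eligibility rule we simulate (the benefit list and data representation here are simplified illustrations, not the full rule set in the working paper):

```python
# Simplified sketch of the automatic maximum Pell Grant rule we simulate.
# The benefit programs listed are illustrative, not the full set of criteria.

MEANS_TESTED_BENEFITS = {"free_reduced_lunch", "SNAP", "TANF", "SSI"}

def auto_max_pell(family_benefits):
    """Award the maximum Pell Grant automatically if anyone in the student's
    family receives a means-tested benefit; otherwise use the regular FAFSA."""
    return bool(MEANS_TESTED_BENEFITS & set(family_benefits))

print(auto_max_pell({"free_reduced_lunch"}))  # True: no FAFSA required
print(auto_max_pell(set()))                   # False: file the FAFSA as usual
```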

There is a broad menu of options available to simplify the FAFSA, from giving students more time to complete the form to getting rid of it altogether. Let’s talk more about these options (plus many more) and actually get something done that can help all stakeholders in the higher education arena.