Let’s Track First-Generation Students’ Outcomes

I’ve recently written about the need to report the outcomes of students based on whether they received a Pell Grant during their first year of college. Given that annual spending on the Pell Grant program is about $35 billion, this should be a no-brainer—especially since colleges are already required to collect the data under the Higher Education Opportunity Act. Household income is a strong predictor of educational attainment, so people interested in social mobility should support publishing Pell graduation rates. I’m grateful to have the support of Ben Miller of the New America Foundation on this point.

Yet there has not been a corresponding call to collect information based on parental education, even though there are federal programs targeted at supporting first-generation students. The federal government already collects parental education on the FAFSA, although the response option of “college or beyond” may be unclear to some filers. (It would be simple enough to clarify the question if desired.)

My proposal here is simple: track graduation rates by parental education. This could be done fairly easily through the current version of IPEDS, although the usual caveats about IPEDS’s focus on first-time, full-time students still apply. It would be another useful data point for students and their families, as well as for policymakers and potentially for President Obama’s proposed college ratings. Collecting these data shouldn’t be an enormous burden on institutions, particularly relative to the Title IV funds they receive.

Let’s continue to work to improve IPEDS by collecting more useful data, and this should be a part of the conversation.

Two and a Half Cheers for Prior Prior Year!

Earlier this week, the National Association of Student Financial Aid Administrators (NASFAA) released a report I wrote with NASFAA’s Gigi Jones on the potential to use prior prior year (PPY) income data in determining students’ financial aid awards. Under PPY, students could file the FAFSA up to a year earlier than under the current prior year (PY) policy. (See this previous post for a more detailed summary of PPY.)

Although the use of PPY could advance the timeline for financial aid notification, it could also change some students’ aid packages. For example, if a dependent student’s family had a large decrease in income the year before the student entered college, the financial aid award would be more generous under PY. Other students’ aid packages would be more generous under PPY. Although we might expect the aid increases and decreases from a move to PPY to roughly balance out, the existence of professional judgments (in which financial aid officers can adjust students’ aid packages based on unusual circumstances) complicates that analysis. As a result, it’s possible that PPY could increase program costs in addition to increasing the burden on financial aid offices.

To examine the feasibility and potential distributional effects of PPY, we received student-level FAFSA data from nine colleges and universities from the 2007-08 through the 2012-13 academic years. We then estimated the expected family contribution (EFC) for students using PY and PPY data to see how much Pell Grant awards would vary by the year of financial data used. (This exercise also gave me a much greater appreciation for how complicated it truly is to calculate the EFC…and how much data is currently needed in the FAFSA!)
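
For readers curious about the mechanics, here is a minimal sketch of the type of comparison we ran. It is heavily simplified: the EFC values are taken as given, and the flat maximum and minimum award amounts are illustrative placeholders rather than the actual federal Pell payment schedule.

```python
# Illustrative sketch: compare Pell awards calculated from prior-year (PY)
# versus prior-prior-year (PPY) income data. The award rule below is a
# deliberate simplification, not the actual federal Pell payment schedule.

MAX_PELL = 5550   # roughly the 2012-13 maximum award (illustrative)
MIN_PELL = 555    # very small calculated awards are not paid (illustrative)

def simple_pell(efc):
    """Rough Pell award: maximum award minus EFC, with a minimum cutoff."""
    award = MAX_PELL - efc
    return award if award >= MIN_PELL else 0

def compare_awards(students):
    """students: list of dicts with already-estimated 'efc_py' and 'efc_ppy'."""
    same = changed_1000 = 0
    for s in students:
        pell_py, pell_ppy = simple_pell(s["efc_py"]), simple_pell(s["efc_ppy"])
        if pell_py == pell_ppy:
            same += 1
        elif abs(pell_py - pell_ppy) >= 1000:
            changed_1000 += 1
    n = len(students)
    return same / n, changed_1000 / n

# Made-up example EFCs (in dollars):
students = [
    {"efc_py": 0, "efc_ppy": 0},          # zero EFC both years: identical award
    {"efc_py": 20000, "efc_ppy": 18000},  # never Pell-eligible: identical (zero) award
    {"efc_py": 2000, "efc_ppy": 3500},    # near the threshold: award changes by $1,500
]
print(compare_awards(students))  # roughly (0.67, 0.33) for this toy example
```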

The primary result of the study is that about two-thirds of students would see the exact same Pell award using PPY as they would using PY. These students tend to fall into two groups—students who would never be eligible for the Pell (and are largely filing the FAFSA to be eligible for federal student loans) and those with zero EFC. Students near the Pell eligibility threshold are the bigger concern, as about one in seven students would see a change in their Pell award of at least $1,000 under PPY compared to PY. However, many of these students would never know their PY eligibility, somewhat reducing concerns about the fairness of the change.

To me, the benefits of PPY are pretty clear. So why two and a half cheers? I have three reasons to knock half a cheer off my assessment of a program that is still quite promising:

(1) We don’t know much about the burden of PPY on financial aid offices. When I’ve presented earlier versions of this work to financial aid administrators, they generally think that the additional burden of professional judgments (students appealing their aid awards due to extenuating circumstances) won’t be too bad. I hope they’re right, but it is worth a note of caution going forward.

(2) If students request professional judgments and are successful in getting a larger Pell award, program costs will increase. Roughly 5-7% of students would see their Pell fall by $1,000 or more under PPY. If about 2% of the Pell population successfully appeals (roughly 200,000 students), program costs could rise by something like $300-$500 million per year (a rough version of this arithmetic is sketched after this list). Compared to a $34 billion program budget, that’s noticeable, but not enormous.

(3) A perfectly implemented PPY program would let students know their eligibility for certain types of financial aid a year earlier than current rules, so as early as the spring of a traditional-age student’s junior year of high school. While that is an improvement, it may still not be early enough to sufficiently influence students’ academic and financial preparation for college. Early commitment and college promise programs reach students at earlier ages, and thus have more potential to be successful.
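
As a back-of-the-envelope check on the cost figure in point (2): the 2 percent appeal share and the implied Pell population of roughly 10 million come from the numbers above, while the average award increases below are my own illustrative assumptions, not estimates from the report.

```python
# Rough cost arithmetic for point (2). The average award increases below are
# assumed for illustration; the 2% appeal share comes from the text above.
pell_recipients = 10_000_000                  # approximate annual Pell population
successful_appeals = 0.02 * pell_recipients   # about 2%, or roughly 200,000 students

for avg_increase in (1_500, 2_500):           # assumed average increase per successful appeal
    added_cost = successful_appeals * avg_increase
    print(f"${avg_increase:,} average increase -> ${added_cost / 1e6:,.0f} million per year")
```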

Even after noting these caveats, I would like to see PPY get a shot at a demonstration program in the next few years. If it can help at least some students at a reasonable cost, let’s give it a try and see if it does induce students to enroll and persist in college.

Free the Pell Graduation Data!

Today is an exciting day in my little corner of academia, as the end of the partial government shutdown means that federal education datasets are once again available for researchers to use. But the most exciting data to come out today come from Bob Morse, rankings guru for U.S. News and World Report. He has collected graduation rates for Pell Grant recipients, long an unknown for the majority of colleges. Despite the nearly $35 billion per year we spend on the Pell program, we have no idea what the national graduation rate is for Pell recipients. (Richard Vedder, economist of higher education at Ohio University, has mentioned a ballpark estimate of 30%-40% in many public appearances, but he notes that is just a guess.)

Morse notes in his blog post that colleges have been required to collect and disclose graduation rates for Pell recipients since the 2008 renewal of the Higher Education Act. I’ve heard rumors of this for years, but these data have not yet made their way into IPEDS. I have absolutely no problem with him using the data he collects in the proprietary U.S. News rankings, nor do I object to him holding the data tightly—after all, U.S. News did spend time and money collecting it.

However, given that the federal government requires that Pell graduation rates be collected, the Department of Education should collect this data and make it freely and publicly available as soon as possible. This would also be a good place for foundations to step in and help collect this data in the meantime, as it is certainly a potential metric for the President’s proposed college ratings.

Update: An earlier version of this post stated that the Pell graduation data are a part of the Common Data Set. Bob Morse tweeted me to note that they are not a part of that set and are collected by U.S. News. My apologies for the initial error! He also agreed that NCES should collect the data, which only underscores the importance of this collection.

State Need and Merit Aid Spending

I’m fortunate to be teaching a class in higher education finance this semester, as it’s a class that I greatly enjoy and one that is closely intertwined with my research interests. I’m working on slides for a lecture on grant aid (both need-based and merit-based) in the next few weeks, which involves creating graphics about trends in aid. In this post, I’m sharing two of my graphics about state-level financial aid.

States have adopted different philosophies regarding financial aid. Some states, particularly in the South, have focused more of their resources on merit-based aid, rewarding students with strong pre-college records of academic achievement. Other states, such as Wisconsin and New Jersey, have put their resources into need-based aid. Yet others have chosen to keep the cost of college low instead of providing aid to students.

The two charts below illustrate these differences in philosophy. The state-level data come from the National Association of State Student Aid & Grant Programs (NASSGAP) for the 2011-12 academic year. The first chart shows the percentage of funds allocated to need-based aid (green) and merit-based aid:

[Chart: distribution of state aid between need-based and merit-based programs]

Two states currently have no need-based aid (Georgia and South Dakota), and six other states allocate 75% or more of state aid to merit-based programs. On the other hand, nine states only have need-based aid programs and 16 more allocate 90% or more to need-based aid. Two states (New Hampshire and Wyoming) did not report having student aid programs in 2011-12.

The second chart measures the intensity of spending on state-level student aid. I divide overall spending by the state’s population in 2012, as estimated by the Census Bureau. States that spend more on aid per resident are in green, while lower-spending states are in red:

[Chart: state student aid spending per resident]

South Carolina leads the way in state student aid, with nearly $69 per resident; four other Southern states provide $50 or more per resident. The other extreme sees 15 states spending less than $10 per person on aid.

Notably, states with more of an emphasis on merit aid spend more on per-resident aid. The correlation between the percentage of funds allocated to need-based aid and per-resident spending is -0.33, suggesting that merit-based programs (regardless of their effectiveness) are more capable of generating resources for students.
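
For anyone who wants to build similar charts, here is a rough sketch of the underlying calculations using pandas. The column names and the tiny example table are placeholders for the actual NASSGAP and Census files, not the real data.

```python
# Sketch of the two calculations behind the charts: the need-based share of
# state aid and aid spending per resident, plus the correlation between them.
# The example rows and column names are placeholders, not actual NASSGAP data.
import pandas as pd

data = pd.DataFrame({
    "state": ["A", "B", "C", "D"],
    "need_based_aid": [120e6, 30e6, 0, 90e6],         # 2011-12 dollars
    "merit_based_aid": [10e6, 250e6, 180e6, 5e6],
    "population_2012": [5.0e6, 4.5e6, 9.0e6, 2.0e6],  # Census estimates
})

total_aid = data["need_based_aid"] + data["merit_based_aid"]
data["pct_need_based"] = data["need_based_aid"] / total_aid
data["aid_per_resident"] = total_aid / data["population_2012"]

# Correlation between the need-based share and per-resident spending
print(data["pct_need_based"].corr(data["aid_per_resident"]))
```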

I’m looking forward to using these graphics (and several others) in my class on grant aid, as the class has been so much fun this semester. I hope my students feel the same way!

Associate’s Degree Recipients are College Graduates

Like most faculty members, I have my fair share of quirks, preferences, and pet peeves. While some of them are fairly minor and come from my training (such as referring to Pell Grant recipients as students from low-income families instead of low-income students, since most students have very little income of their own), others are more important because of the way they incorrectly classify students and fail to recognize their accomplishments.

With that in mind, I’m particularly annoyed by a Demos piece with the headline “Since 1991, Only College Graduates Have Seen Their Income Rise.” This claim comes from Pew data showing that only households headed by someone with a bachelor’s degree or more had a real income gain between 1991 and 2012, while households headed by those with less education lost ground. However, this headline implies that students who graduate with associate’s degrees are not college graduates—a value judgment that comes off as elitist.

According to the Current Population Survey, over 21 million Americans have an associate’s degree, with about 60% of them being academic degrees and the rest classified as occupational. This is nearly half the size of the 43 million Americans whose highest degree is a bachelor’s degree. Many of these students are the first in their families to even attend college, so an associate’s degree represents a significant accomplishment with meaning in the labor market.

Although most people in the higher education world have an abundance of degrees, let’s not forget that our college experiences are becoming the exception rather than the norm. I urge writers to clarify their language and recognize that associate’s degree holders are most certainly college graduates.

Improving Data on PhD Placements

Graduate students love to complain about the lack of accurate placement data for students who graduated from their programs. Programs are occasionally accused of only reporting data for students who successfully received tenure-track jobs, and other programs apparently do not have any information on what happened to their graduates. Not surprisingly, this can frustrate students as they try to make a more informed decision about where to pursue graduate studies.

An article in today’s Chronicle of Higher Education highlights the work of Dean Savage, a sociologist who has tracked the outcomes of CUNY sociology PhD recipients for decades. His work shows a wide range of paths for CUNY PhDs, many of whom have been successful outside tenure-track jobs. Tracking these students over their lifetimes is certainly a time-consuming job, but it should be much easier to determine the initial placements of doctoral degree recipients.

All students who complete doctoral degrees are required to complete the Survey of Earned Doctorates (SED), which is supported by the National Science Foundation and administered by the National Opinion Research Center. The SED contains questions designed to elicit a whole host of useful information, such as where doctoral degree recipients earned their undergraduate degrees (something which I use in the Washington Monthly college rankings as a measure of research productivity) and information about the broad sector in which the degree recipient will be employed.

The utility of the SED could be improved by clearly asking degree recipients where their next job is located, as well as their job title and academic department. The current survey asks about the broad sector of employment, but the most relevant response option for postgraduate plans is “have signed contract or made definite commitment to a ‘postdoc’ or other work.” Later questions do ask about the organization where the degree recipient will work, but there is no clear distinction between postdoctoral positions, temporary faculty positions, and tenure-track faculty positions. Additionally, there is no information requested about the department in which the recipient will work.

My proposed changes to the SED are little more than tweaks in the grand scheme of things, but have the potential to provide much better data about where newly minted PhDs take academic or administrative positions. This still wouldn’t fix the lack of data on the substantial numbers of students who do not complete their PhDs, but it’s a start to providing better data at a reasonable cost using an already-existing survey instrument.

Is there anything else we should be asking about the placements of new doctoral recipients? Please let me know in the comments section.

Breaking Down the 2014 US News Rankings

Today is a red-letter day for many people in the higher education community—the release of the annual college rankings from U.S. News and World Report. While many people love to hate the rankings for an array of reasons (from the perceived focus on prestige to a general dislike of accountability in some sectors), their influence on colleges and universities is undeniable. Colleges love to put out press releases touting their place in the rankings even while decrying their general premise.

I’m no stranger to the college ranking business, having been the consulting methodologist for Washington Monthly’s annual college rankings for the past two years. (All opinions in this piece, of course, are my own.) While the Washington Monthly rankings evaluate colleges on social mobility, service, and research performance, U.S. News ranks colleges primarily on “academic quality,” which consists of inputs such as financial resources and standardized test scores, as well as peer assessments for certain types of colleges.

I’m not necessarily in the U.S. News-bashing camp here, as they provide a useful service for people who are interested in prestige-based rankings (which I think is most people who want to buy college guides). But the public policy discussion, driven in part by the President’s proposal to create a college rating system, has been moving toward an outcome-based focus. The Washington Monthly rankings do capture some elements of this focus, as can be seen in my recent appearance on MSNBC and an outstanding panel discussion hosted by New America and Washington Monthly last week in Washington.

Perhaps in response to criticism or the apparent direction of public policy, Robert Morse (the well-known and well-respected methodologist for U.S. News) announced some changes last week in the magazine’s methodology for this year’s rankings. The changes place slightly less weight on peer assessment and selectivity, while putting slightly more weight on graduation rate performance and graduation/retention rates. Yet Morse bills the changes as meaningful, noting that “many schools’ ranks will change in the 2014 [this year’s] edition of the Best Colleges rankings compared with the 2013 edition.”

But the rankings have tended to be quite stable from year to year (here are the 2014 rankings). The top six research universities in the first U.S. News survey (in 1983—based on peer assessments by college presidents) were Stanford, Harvard, Yale, Princeton, Berkeley, and Chicago, with Amherst, Swarthmore, Williams, Carleton, and Oberlin being the top five liberal arts colleges. All of the research universities except Berkeley are in the top six this year and all of the liberal arts colleges except Oberlin are in the top eight.

In this post, I’ve examined all national universities (just over 200) and liberal arts colleges (about 180) ranked by U.S. News in this year’s and last year’s rankings. Note that this is only a portion of qualifying colleges, as the magazine doesn’t publish numerical ranks for lower-tier institutions. The two graphs below show the changes in the rankings for national universities and liberal arts colleges between the two years.

[Chart: change in U.S. News rankings from the 2013 to the 2014 edition, national universities]

[Chart: change in U.S. News rankings from the 2013 to the 2014 edition, liberal arts colleges]

The first thing that jumps out at me is the high R-squared, around 0.98 for both classifications. What this essentially means is that 98% of the variation in this year’s rankings can be explained by last year’s rankings—a remarkable amount of persistence even when considering the slow-moving nature of colleges. The graphs show more movement among liberal arts colleges, which are much smaller and can be affected by random noise much more than large research universities.
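
For readers interested in how the R-squared values were calculated, here is a sketch of the year-over-year comparison. The file name and column names are placeholders for however the two years of rankings are stored.

```python
# Sketch: year-over-year persistence in the U.S. News rankings.
# The file and column names are placeholders for the assembled rankings data.
import pandas as pd

ranks = pd.read_csv("usnews_ranks.csv")  # columns: institution, rank_2013, rank_2014
ranks = ranks.dropna(subset=["rank_2013", "rank_2014"])  # keep schools ranked both years

# For a simple regression of this year's rank on last year's rank,
# R-squared equals the squared Pearson correlation between the two.
r = ranks["rank_2013"].corr(ranks["rank_2014"])
print(f"R-squared: {r ** 2:.2f}")  # about 0.98 for both classifications
```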

The biggest blip in the national university rankings is South Carolina State, which went from 147th last year to unranked (no higher than 202nd) this year. Other universities which fell more than 20 spots are Howard University, the University of Missouri-Kansas City, and Rutgers University-Newark, all urban and/or minority-serving institutions. Could the change in formulas have hurt these types of institutions?

In tomorrow’s post, I’ll compare the U.S. News rankings to the Washington Monthly rankings for this same sample of institutions. Stay tuned!

Can “Paying it Forward” Work?

While Congress is deadlocked on what to do regarding student loan interest rates (I have to note here that interest rates on existing loans WILL NOT CHANGE!), others have pushed forward with new ways to make college more affordable. I wrote last fall about an innovative proposal from the Economic Opportunity Institute (EOI), a liberal think tank in Washington State, which suggests an income-based repayment program for students attending that state’s public colleges and universities. The Oregon Legislature just approved a plan to try out a version of the EOI’s program after a short period of discussion, with bipartisan approval.

This proposal, which the EOI refers to as “Pay It Forward,” is similar to how college is financed in Australia. It would charge students no tuition or fees upfront and would require them to sign a contract stating that they would pay a certain percentage of their adjusted gross income per year (possibly three percent of income, or one percent per year in college) for at least 20 years after leaving college. It appears that the state would rely on the IRS to enforce payment in order to capture part of the earnings of those who leave the state of Oregon. This would be tricky to enforce in practice, given the IRS’s general reluctance to step into state-level policies.
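
To make the repayment mechanics concrete, here is a small illustration using the parameters described above (three percent of adjusted gross income for 20 years). The starting income and growth rate are made-up example values, not projections from the proposal.

```python
# Illustration of Pay It Forward repayment: 3% of AGI per year for 20 years.
# The starting income and income growth rate are made-up example values.

payment_share = 0.03    # share of adjusted gross income paid each year
years = 20              # minimum repayment period described in the proposal
income = 40_000         # illustrative starting AGI after leaving college
growth = 0.03           # illustrative annual nominal income growth

total_paid = 0.0
for _ in range(years):
    total_paid += payment_share * income
    income *= 1 + growth

print(f"Total paid over {years} years: ${total_paid:,.0f}")
```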

While I am by no means convinced by the simulations conducted so far regarding the feasibility of the program, I think the idea is worth a shot as a demonstration program. I think the cost of the program will be larger than expected, especially since income-based repayment programs decouple the cost of college from what students pay. Colleges suddenly have a strong incentive to raise their posted tuition substantially in order to capture this additional revenue. In addition to the demonstration program, I would like to see a robust set of cost-effectiveness estimates under different enrollment, labor market, and repayment parameters. I’ve done this before in my research examining the feasibility of a hypothetical early commitment Pell program.

Needless to say, I’ll be keeping an eye on this program moving forward to see how the demonstration program plays out. It has the potential to change state funding of higher education, and at the very least will be an interesting program to evaluate.

The Vast Array of Net Price Calculators

Net price calculators are designed to give students and their families a clear idea of how much college will cost them each year after taking available financial aid into account. All colleges are required to post a net price calculator under the Higher Education Opportunity Act of 2008, but these calculators take a range of different forms. The Department of Education has proposed a standardized “shopping sheet” that has been adopted by some colleges, but there is still substantial variation in net price calculators across institutions. This is shown in a 2012 report by The Institute for College Access and Success, which examined 50 randomly selected colleges across the country.

In this blog post, I examine net price calculators from six University of Wisconsin System institutions for the 2013-14 academic year. Although these colleges might be expected to have similar net price calculators and cost assumptions, this is far from the case, as shown in the screenshots below. In all cases, I used the same student profile—an in-state, dependent, zero-EFC student.

Two of the six colleges selected (the University of Wisconsin Colleges and UW-La Crosse) require students to enter several screens of financial and personal information in order to get an estimate of their financial aid package. While that can be useful for some students, there should be an option to directly enter the EFC for students who have filed the FAFSA or are automatically eligible for a zero EFC. For the purposes of this post, I stopped there with those campuses—as some students may decide to do.

(UW Colleges and UW-La Crosse, respectively)

[Screenshot: UW Colleges net price calculator]

[Screenshot: UW-La Crosse net price calculator]

UW-Milwaukee deserves special commendation for clearly listing the net price before mentioning loans and work-study. Additionally, it does not list out each grant a student could expect to receive, simplifying the information display (although this does have its tradeoffs).

[Screenshot: UW-Milwaukee net price calculator]

The other three schools examined (Eau Claire, Madison, and Oshkosh) list out each type of financial aid and present an unmet need figure (which can be zero) before reporting the estimated net price of attendance. Students may read these calculators and think that no borrowing is necessary in order to attend college, when this is not the case. The net price should be listed first, since this tool is, after all, a net price calculator.

(UW-Eau Claire, UW-Madison, and UW-Oshkosh, respectively)

[Screenshot: UW-Eau Claire net price calculator]

[Screenshot: UW-Madison net price calculator]

[Screenshot: UW-Oshkosh net price calculator]
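
The distinction between the net price and the unmet need figures shown above matters: a calculator can truthfully display little or no unmet need for a zero-EFC student even though the aid package still relies on borrowing. Here is a simplified illustration with made-up dollar amounts:

```python
# Simplified illustration of net price versus unmet need for a zero-EFC student.
# All dollar amounts are made up for the example.

cost_of_attendance = 20_000
grants = 9_000        # federal, state, and institutional grant aid
loans = 6_500         # packaged federal loans
work_study = 2_000
efc = 0               # expected family contribution

net_price = cost_of_attendance - grants  # what the student must cover with loans, work, or savings
unmet_need = cost_of_attendance - efc - grants - loans - work_study

print(f"Net price: ${net_price:,}")    # $11,000
print(f"Unmet need: ${unmet_need:,}")  # $2,500 -- looks small, but the loans still must be repaid
```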

The net price calculators also differ in their terminology for different types of financial aid. For example, UW-Eau Claire calls the Wisconsin Higher Education Grant the “Wisconsin State Grant,” a name that appears nowhere else in the information students receive. The miscellaneous and travel budgets vary by more than $1,000 across the four campuses whose calculators I completed, highlighting the subjective nature of these categories. These budgets matter a great deal to students, because students cannot receive more in financial aid than their total cost of attendance. If colleges want to report a low net price, they have an incentive to report low living allowances.

I was surprised to see the amount of variation in net price calculators across UW System institutions. I hope that financial aid officers and data managers from these campuses can continue to work together to refine best practices and present a more unified net price calculator.

More on Rate My Professors and the Worst Universities List

It turns out that the question of whether Rate My Professors should be used to rank colleges is a popular topic. My previous blog post on the topic, in which I discuss why the website shouldn’t be used as a measure of teaching quality, was by far the most-viewed post that I’ve ever written and got picked up by other media outlets. I’m briefly returning to the topic to acknowledge a wonderful (albeit late) statement released by the Center for College Affordability and Productivity (CCAP), the organization that compiled the Rate My Professors (RMP) data for Forbes.

CCAP’s statement notes that the RMP data should only be considered a measure of student satisfaction and not a measure of teaching quality. This is a much more reasonable interpretation, given the documented correlation between official course evaluations and RMP data—it’s also no secret that certain disciplines receive lower student evaluations regardless of teaching quality. The previous CBS MoneyWatch list should be interpreted as a list of schools with the least satisfied students, before controlling for academic rigor or major fields, but that doesn’t make for as spicy a headline.

Kudos to the CCAP for calling out CBS regarding its misinterpretation of the RMP data. Although I think that it is useful for colleges to document student satisfaction, this measure should not be interpreted as a measure of instructional quality—let alone student learning.