More Proposed Financial Aid Reforms

The past few months have been an exciting time for financial aid researchers, as many reports proposing changes to federal financial aid policies and practices have been released as part of the Bill and Melinda Gates Foundation’s Reimagining Aid Design and Delivery (RADD) project. The most recent proposal comes from the Education Policy Program at the New America Foundation, a left-of-center Washington think tank. Their proposal (summary here, full .pdf here) would dramatically shift federal priorities in student financial aid—prioritizing the federal Pell Grant over all other types of aid and changing loan repayment options—without creating any additional costs for the government. Below, I detail some of the key proposals and offer my comments.

Pell Grant program

(1)     Shift the program from discretionary spending to an entitlement. I’m torn over this proposal. The goal is to guarantee that funding will be available for students in order to provide more certainty in the college planning process (a goal in my own work), but moving more items to the entitlement side of the ledger makes cutting spending in any meaningful way exceedingly difficult. A potential compromise would be to authorize spending several years in advance without locking us into a program for generations to come.

(2)    Limit Pell eligibility to 125% of program length (five years for a four-year college and three years for a two-year college). Currently, students are allowed 12 full-time-equivalent semesters of Pell eligibility, which can be used through the bachelor’s degree. This means that a student who only seeks an associate’s degree can draw the Pell for six years in a two-year program. That can safely be cut back (to three years, perhaps), but I’m not sure that cutting all the way to 125% of stated program length is ideal. I would be concerned about students who can’t quite make it across the finish line financially.

(3)    Create institutional incentives to enroll and graduate Pell recipients. New America has several prongs in this policy, including bonuses for colleges that enroll and graduate large numbers of Pell recipients. But the most interesting part is a proposed requirement that colleges that enroll few Pell recipients, charge them high net prices of attendance, and hold substantial endowments provide matching funds in order for their students to be Pell-eligible. I think this policy has potential and doesn’t punish colleges for actions they can’t control—unlike other proposals, which have sought to tie Pell funding for public and private colleges to state appropriations.

Student loans

(1)    Switch all students to income-based repayment (IBR) of loans. This would reduce default rates and simplify financial aid, but it has the potential to let students attending expensive colleges off the hook. New America shares my concern on this, but switching to IBR could still have substantial upfront costs (which would later be repaid).

(2)    Set student loan interest rates based on government borrowing costs plus three percentage points. This proposal should result in a revenue-neutral student loan program (after accounting for defaults) and stop the games of reauthorizing artificially low interest rates for political gain. Loan rates would be fixed for each cohort of students but would vary across incoming cohorts (a minimal sketch of the rule appears after this list).

(3)    Allow colleges to lower federal loan limits “to discourage excessive borrowing.” I’m concerned about this part of the proposal, at least for undergraduate students. Loan limits are currently fairly modest, and students should have the right to borrow enough money to attend college, whether or not the college agrees.
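
To make the rate-setting rule concrete, here is a minimal sketch. The proposal, as summarized above, specifies only “government borrowing costs plus three percentage points”; the choice of the 10-year Treasury as the benchmark and the sample yields below are my own illustrative assumptions.

```python
# Illustrative sketch of cohort-based loan pricing: each cohort's rate is
# fixed at origination as the government's borrowing cost plus 3 points.
# The 10-year Treasury benchmark and these sample yields are hypothetical
# assumptions, not details from the New America proposal.
SPREAD = 0.03

treasury_10yr = {2013: 0.019, 2014: 0.025, 2015: 0.022}  # assumed yields

cohort_rates = {year: y + SPREAD for year, y in treasury_10yr.items()}

for year, rate in sorted(cohort_rates.items()):
    print(f"Cohort {year}: fixed rate of {rate:.2%} for the life of the loan")
```

Note the design choice: a student keeps the rate set at origination, so the political fight over extending any particular rate disappears while the program tracks the government’s actual cost of funds.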

Other key points

(1)    Pay for the additional Pell expenditures by cutting education tax credits, savings plans, and student loan interest deductions. This is a common call among financial aid researchers, and not just because academia tilts heavily to the left. Economic theory suggests that reducing the cost of college through tax credits and deductions should work as well as doing so through grants, but this assumes both that students and their families fully account for the tax benefits in their decisionmaking and that the students who take up these programs are on the margin of attending college. Neither appears to be true. An additional tax deduction for being a student would likely be more effective than the current credit system.

(2)    Require better data systems and consumer information. I’m fully on board with building better data systems so researchers can finally figure out whether financial aid works and so student outcomes can be tracked across colleges. I’m a little more concerned about some of the consumer information measures, as colleges should retain the ability to tailor materials somewhat.

(3)    Create publicly available accountability standards. Gainful employment, in which for-profit colleges are examined based on job placement rates, could be a model for extending some sort of accountability to all colleges receiving federal funds. Graduation rates, earnings, and other measures could be used—or at the very least, the information could be made public to students, their families, and policymakers.

I don’t agree with everything that New America suggests in their policy proposals, but many of the suggestions would help improve financial aid delivery and our ability to examine whether programs work for students. To me, that is the mark of a successful proposal that could at least partially be adopted by Congress.

Another Commission on Improving Graduation Rates

College leaders and policymakers are rightly concerned about the percentage of incoming students who graduate in a reasonable period of time. Although there have been numerous reports and commissions at the university, state, and national levels aimed at improving college completion rates, about the same percentage of incoming students graduate college now as a decade ago. This spurred the creation of the National Commission on Higher Education Attainment, a group of college presidents from various types of public and private nonprofit colleges and universities. The group released its report on improving graduation rates today; it offers few new suggestions and repeats many of the concerns of past commissions.

The report made the following recommendations, with my comments below:

Recommendation 1: Change campus culture to boost student success.

We’ve heard this one before, to say the least. The problem is that few campus-level innovations have proven “scalable”—able to expand to other colleges with the same results. Other programs appear promising but have never been rigorously evaluated, or they cost a lot of money. Rigorous evaluation is essential to determine what we can learn from other colleges’ apparent successes.

Recommendation 2: Improve cost-effectiveness and quality.

In theory, this sounds great—and many of the recommendations sound reasonable. But policymakers and college leaders should be wary of any potential cost savings resulting in a lower-quality education. A slightly less personalized education at a lower price may be a worthwhile tradeoff and pass a cost-effectiveness test, but the tradeoff should be made explicit.

A bigger concern, not addressed in the report, is the actual cost of teaching a given course. First-year students tend to subsidize upper-level undergraduates, and all undergraduates tend to subsidize doctoral students. Much more research needs to be done on the costs of individual courses in order to provide lower-cost offerings to certain groups of students.

Recommendation 3: Make better use of data to boost success.

The commission calls for better use of institution-level data to identify at-risk students and keep students on track to graduation. It also calls for more students to be included in the federal IPEDS dataset, which currently tracks only first-time, full-time, traditional-age students at their first institution of attendance. While this would be an improvement, I would like to see a pilot test of a student-level dataset instead of an institution-level one, which would be much better for identifying success patterns among groups of students with a lower probability of success.

The report also had a few notable omissions. First, the decision to exclude leaders of for-profit colleges is troubling. While many for-profit colleges have low completion rates, their cost structure (in terms of tracking per-student expenditures) is worth examining, and they disproportionately serve at-risk students. There is no reason to leave out an important, if controversial, sector of higher education. Second, the report includes the typical passage on declining public support for higher education (on a per-student basis). While it might make college presidents feel good, any request for additional funding in this political and economic climate needs to be tied more closely to improving college completion rates. Finally, little attention was paid to having the different sectors of higher education share best practices in spite of their often symbiotic relationship.

I don’t expect more than a few months to go by before the next commission issues a very similar report to this one. Stakeholders in the higher education arena need to think of how potential success stories can actually be brought to scale to benefit a meaningful number of students.

Examining Kiplinger’s Best Value Colleges

Not many articles on higher education feature my alma mater, Truman State University. In spite of a long tradition of internal accountability and a record of graduating students on a shoestring budget, Truman lacks the name recognition of larger universities in most circles. That is why I was surprised to see the article discussing Kiplinger’s Best Values in Public Colleges feature Truman so prominently.

(Photo: Winter at Truman State University)

Kiplinger’s ranks the top 100 public four-year colleges and universities based on a combination of five measures, with point values just as arbitrary as those in all of the other rankings (including the Washington Monthly rankings that I compiled last fall). This is in spite of the claim that “neither our opinion nor anyone else’s affects the calculation.” While that may be true in the strictest sense, someone had to determine the point values!

The methodology is as follows (a toy version of the weighted composite appears after the list):

(1)    Total cost of attendance and net price (after subtracting grant aid)—35%. This is calculated separately for in-state and out-of-state students.

(2)    Academic competitiveness (ACT/SAT scores, admit rate, and yield)—22.5%.

(3)    Graduation rates (four-year and six-year)—18.75%.

(4)    Academic support (retention rates and students per FTE faculty)—13.75%.

(5)    Student debt at graduation—10%.
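
To see how much the (arbitrary) weights matter, here is a toy version of the weighted composite. The component scores are invented, and Kiplinger’s does not publish exactly how each component is normalized, so treat this strictly as a sketch of the general approach.

```python
# Toy weighted composite in the spirit of the Kiplinger's methodology.
# Component scores (0-100) are invented; the weights come from the list above.
WEIGHTS = {
    "cost_and_net_price": 0.35,
    "academic_competitiveness": 0.225,
    "graduation_rates": 0.1875,
    "academic_support": 0.1375,
    "student_debt": 0.10,
}

# Hypothetical normalized scores for one college.
scores = {
    "cost_and_net_price": 82,
    "academic_competitiveness": 64,
    "graduation_rates": 71,
    "academic_support": 58,
    "student_debt": 77,
}

composite = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
print(f"Composite score: {composite:.1f}")  # nudge the weights and the ranking moves
```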

As most college rankings are prone to do, the Kiplinger’s best-value list still unnecessarily rewards colleges for being highly selective, through both the academic competitiveness and graduation measures. The focus on cost is very useful, although it does to some extent reward colleges in states that provide more public support (good for the student, but not necessarily for the taxpayer).

I do have one other gripe with the Kiplinger’s rankings—they are done separately for public and private colleges (the private college list came out last month). The editors should combine the two lists so the information is more useful to students and their families. That said, the information in these lists is certainly useful to a segment of the collegegoing population.

More Data on the Returns to College

Most people consider attending college to be a good bet in the long run, in spite of the rising cost of attendance and increasing levels of student loan debt. While I’m definitely not in the camp that everyone should earn a bachelor’s degree, I do believe that some sort of postsecondary training benefits the majority of adults. A recent report from the State Higher Education Executive Officers (SHEEO) highlights the benefits of graduating with a college degree from public colleges and universities.

Not surprisingly, their report suggests that there are substantial benefits to graduating from college. Using data from IPEDS and the American Community Survey, they find that the average associate’s degree holder earned 31.2% more (about $9,200 per year) than the average person with a high school diploma. The premium associated with a bachelor’s degree is even larger: 71.2%, or nearly $21,000 per year. These figures are on the high end of the returns-to-education literature, which suggests that students get roughly an additional 10-15% boost in wages for each year of college completed, but they are quite plausible.
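
As a quick sanity check on those figures, the percentage premiums and the dollar premiums should imply a consistent baseline wage for high school graduates. The arithmetic below simply back-solves for that baseline.

```python
# Back out the implied average wage of high school diploma holders from the
# SHEEO figures quoted above: premium_dollars = baseline * premium_pct.
associate_baseline = 9_200 / 0.312   # ~ $29,500
bachelor_baseline = 21_000 / 0.712   # ~ $29,500

print(f"Implied baseline (associate's figures): ${associate_baseline:,.0f}")
print(f"Implied baseline (bachelor's figures):  ${bachelor_baseline:,.0f}")
# Both back out to roughly $29,500, so the two premiums are internally consistent.
```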

I do have some concerns with the analysis, which limit its generalizability and policy relevance:

(1)    Given that SHEEO represents public colleges and universities, it is not surprising that they focused on that sector in their analysis. Policymakers who are interested in the overall returns to education will need data covering the private not-for-profit and for-profit sectors as well.

(2)    This study is in line with the classic returns-to-education literature, which compares students who completed a degree to those with a high school diploma. But the comparison group of “high school graduates” may include people who attempted college and left without a degree, which makes for a different comparison than students and policymakers would expect. I would like to see studies compare everyone who entered college with those who never attended, to get a better idea of the average wage premium among those who attempt college.

(3)    While the average student benefits from completing a college degree, not all students do. For example, a welder with a high school diploma may very well make more than a preschool teacher with a bachelor’s degree. A 2011 report by Georgetown University’s Center on Education and the Workforce does a nice job showing that not everyone benefits.

(4)    Most reports like this one do a good job estimating the benefits of education (in terms of higher wages) but neglect the costs in terms of forgone earnings and tuition expenses. While most people are still likely to benefit from attending relatively inexpensive public colleges, some students’ expected returns may turn negative once these costs are taken into account (see the sketch after this list).

(5)    Students who complete certificates (generally one-year programs in technical fields) are excluded from the analyses for data reasons, which is truly a shame. Students and policymakers should keep in mind that many of these programs have high completion rates and positive payoffs in the long run.
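
On point (4), a minimal net-present-value sketch shows how forgone earnings and tuition can shrink, or even flip, the expected return. Every number here (discount rate, costs, premium, career length) is an assumption of mine, chosen only to illustrate the mechanics.

```python
# Minimal NPV sketch: benefits are the annual wage premium over a career;
# costs are tuition plus forgone earnings during four years of college.
# All parameters are illustrative assumptions, not estimates from the report.
def npv(cash_flows, rate=0.03):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

YEARS_IN_COLLEGE = 4
CAREER_YEARS = 40
annual_cost = 9_000 + 25_000      # assumed tuition + forgone earnings
wage_premium = 21_000             # bachelor's premium from the report

flows = [-annual_cost] * YEARS_IN_COLLEGE
flows += [wage_premium] * (CAREER_YEARS - YEARS_IN_COLLEGE)
print(f"NPV of the degree: ${npv(flows):,.0f}")

# Halve the premium (e.g., a low-paying field) and the margin shrinks fast.
flows_low = [-annual_cost] * YEARS_IN_COLLEGE
flows_low += [wage_premium / 2] * (CAREER_YEARS - YEARS_IN_COLLEGE)
print(f"NPV with half the premium: ${npv(flows_low):,.0f}")
```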

My gripes notwithstanding, I encourage readers to check out the state-level estimates of the returns to different types of college degrees and majors. It’s worth a read.

(Note: This will likely be my last post of 2012, as I am looking forward to spending some time far away from computer screens and datasets next week. I’ll be back in January…enjoy the holidays and please travel carefully!)

Innovating for Success in Financial Aid

Most education researchers and policymakers would likely agree that the current financial aid distribution system is both inefficient and less effective than it could be. Under current rules, the vast majority of students do not learn about their eligibility for need-based financial aid until their senior year of high school. Waiting this long helps the federal and state governments make sure their aid dollars are targeted toward students who are currently the most financially needy, but it makes little sense for students from persistently poor families.

There have been numerous efforts to streamline the financial aid process over the past several years, but they have neglected the importance of timing. If students know their financial aid packages well before reaching college age, they can prepare academically and financially for college, should it match their career and personal ambitions. However, most research stops short of suggesting solutions to these informational deficiencies.

Today, I am pleased to release a working paper with my frequent co-author (and political opposite) Sara Goldrick-Rab that seeks to advance the research agenda on the importance of timing in the financial aid process. Under current policy, students whose families receive federal means-tested benefits in grade 12 are awarded the maximum Pell Grant (which results in the maximum award for many state and institutional grants). In our paper, we estimate what could happen to both college enrollment rates and government revenue if the aid award were made in grade 8 instead of grade 12.

Pell Grant program costs would increase under this policy change for two reasons: some students would likely be induced to attend college by the promise of financial aid, and about 30% of students would likely receive more money than under current law. But the federal government would also see an increase in tax revenue through the additional earnings of these students. Under a fairly conservative set of assumptions in a Monte Carlo simulation (make your own assumptions here and here), the program is fairly likely to result in positive net fiscal benefits over the long run.
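
For readers who want a feel for the mechanics, here is a stripped-down sketch of that kind of Monte Carlo exercise. Every distribution and parameter below is a placeholder assumption of my own, not a value from the paper; see the paper for the assumptions we actually used.

```python
# Stripped-down Monte Carlo of the net fiscal effect of awarding aid in
# grade 8 instead of grade 12. All distributions and parameters are
# placeholder assumptions for illustration, not values from the paper.
import random

random.seed(42)
N_SIMS = 10_000
results = []

for _ in range(N_SIMS):
    induced_students = random.uniform(10_000, 60_000)       # new enrollees (assumed)
    extra_grant_cost = random.uniform(500, 2_000)           # added Pell $/recipient (assumed)
    affected_recipients = random.uniform(200_000, 400_000)  # recipients getting more aid (assumed)
    lifetime_tax_gain = random.uniform(20_000, 80_000)      # PV of extra taxes per induced grad (assumed)
    completion_rate = random.uniform(0.3, 0.6)              # share of induced enrollees finishing (assumed)

    cost = affected_recipients * extra_grant_cost + induced_students * 5_000
    benefit = induced_students * completion_rate * lifetime_tax_gain
    results.append(benefit - cost)

positive_share = sum(r > 0 for r in results) / N_SIMS
print(f"Share of draws with positive net fiscal benefit: {positive_share:.0%}")
```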

Even though the initial results from this study appear to be promising, I still lose sleep at night about whether people will respond in the expected ways and whether any perverse incentives could be in play. As a result, any such policy change should be explored in a demonstration program to see whether the program is cost-effective in real life.

This paper will get a fair amount of media attention, which will hopefully result in useful feedback from smart people in the academic and policy communities. I would also love to hear your thoughts on the paper as well as the fun methodological assumptions.

Am I Selling “Mathematical Nonsense”?

When I started a line of research on college rankings and value-added, I assumed that if my work ever saw the light of day, it would be at least somewhat controversial. I’ve gotten plenty of feedback on my academic research on the topic, and most of that has been at least mildly encouraging. And I’ve gotten even more feedback on the Washington Monthly college rankings, most of which has also been fairly positive. This work has given me the opportunity to talk with dozens of institutional researchers, college presidents, and provosts from around the country about their best practices for measuring student success.

But one e-mail that we received was sharply negative and over the top. Frederik Ohles, president of Nebraska Wesleyan University in Lincoln, Nebraska, sent along a wonderful missive. Here is the edited version that ran in this month’s magazine (subscribe to the print version here):

—————–

“There are lots of things that I’ve long admired about your magazine. And for that reason, I had thought you might do a better job in the business of college rankings than U.S. News & World Report. But on reading this year’s issue, I was disappointed. In the Monthly college rankings, Nebraska Wesleyan University is predicted to graduate 66 percent, graduates 65 percent, and you rank us number 144 [out of 254] for that result.

What kind of Rube Goldberg-inspired formula would lead to this result? Sorry, folks, but you’ve discredited yourselves with such mathematical nonsense. In the future you’d better stick to subjects that you know something about. Math and ranking methodologies sure aren’t among them.”

—————–

The e-mail went on to call me and the rest of the College Guide staff “charlatans,” just like the U.S. News staff, but you get the point. In any case, I resisted a strong urge to snark in the published response, an excerpt of which is below:

“You focus entirely on the numerator of the measure in your letter and do not mention the denominator—the annual net price of attendance, in your school’s case $20,723. If the net price of Nebraska Wesleyan University were lower, the school’s ranking on this measure would be higher.”

In my full response, I assured Mr. Ohles that it is my goal to never be a charlatan. But am I selling mathematical nonsense? You be the judge.
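
To make the numerator-and-denominator point concrete, here is a toy calculation. The actual Washington Monthly formula is not reproduced here; the stand-in score below (graduation performance per $10,000 of net price) is purely illustrative, but it behaves the same way: holding performance fixed, a lower net price yields a higher score.

```python
# Toy illustration of the numerator/denominator point from the reply above.
# The real Washington Monthly formula is not reproduced here; as a stand-in,
# use the ratio of actual to predicted graduation rate per $10,000 of net price.
def toy_score(actual, predicted, net_price):
    return (actual / predicted) / (net_price / 10_000)

# Nebraska Wesleyan's figures from the letter above:
print(toy_score(65, 66, 20_723))   # ~0.48

# Same graduation performance, but with a lower net price:
print(toy_score(65, 66, 15_000))   # ~0.66 -- a higher score, as the reply notes
```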

Pell Grants and Data-Driven Decisions

I am a big proponent of making data-driven decisions whenever possible, but sadly that isn’t the case among many policymakers. Recently, in an effort to reduce costs, Congress and the Obama Administration agreed to reduce the maximum number of semesters of Pell Grant eligibility from 18 to 12 (in line with the federal government’s primary graduation rate measure for students attending four-year colleges). However, this decision was made without considering the cost-effectiveness of the policy change, or even having a good idea of how many students would be affected.

Today’s online version of The Chronicle of Higher Education includes a piece that I co-authored with Sara Goldrick-Rab on this policy change. We’re both strong proponents of data-driven decisionmaking, as well as of conducting experiments whenever possible to evaluate the effects of policy changes. We come from very different places on the political spectrum (which is why we disagree on whether the federal government can and should hold states accountable for their funding decisions), but some fundamental points are simply part of an effective policymaking process.

College Selectivity Does Not Imply Quality

For me, the greatest benefit of attending academic conferences is the ability to clarify my own thinking about important issues in educational policy. At a conference last week, I attended several outstanding sessions on issues in higher education, in addition to presenting my own work on early commitment programs for financial aid. (I’ll have more on that in a post in the near future, so stay tuned.) I greatly enjoyed the talks and learned quite a bit from them, but the biggest thing I am taking away from them is something that I think they’re doing wrong—conflating college selectivity with college quality.

When most researchers refer to the concept of “college quality,” they are really referring to a college’s inputs, such as financial resources, student ACT/SAT scores, and high school class rank. What this really means is that a college is selective and has what we consider quality inputs. But plentiful inputs do not guarantee quality outcomes relative to what we would expect from the student and the college. Rather, a quality college helps its student body succeed instead of just recruiting a select group of students. This does not mean that selective colleges cannot be quality colleges; it does mean that the relationship is not guaranteed.

I am particularly interested in measuring college quality based on an estimate of its value added to students instead of a measure highly correlated with inputs. Part of my research agenda is on that topic, as illustrated by my work compiling the Washington Monthly college rankings. However, other popular college rankings continue to reward colleges for their selectivity, which creates substantial incentives to game the rankings system in unproductive ways.

For example, a recent piece in The Chronicle of Higher Education illustrates how one college submitted inaccurate and overly optimistic data for the U.S. News rankings. George Washington University, one of the few colleges in the country with a posted cost of attendance of over $50,000 per year, had previously reported that 78% of their incoming freshman class was in the top ten percent of their high school graduating class, in spite of large numbers of high schools declining to rank students in recent years. An eagle-eyed staffer in the provost’s office realized that the number was too high and discovered that the admissions staff was inaccurately estimating the rank for students with no data. As a result, the revised figure was only 58%.

Regardless of whether GWU’s error was one of omission or malfeasance, the result was that the university appeared to be a higher-quality school under the fatally flawed U.S. News rankings. [UPDATE 11/15/12: U.S. News has removed GWU’s ranking from this year’s online guide.] GWU certainly aspires to be more selective, but keep in mind that selectivity does not imply quality in a value-added sense. Academics and policymakers would be wise to be careful when discussing quality when they really mean selectivity.

More Information on the Education Week Article

I was happy to learn this morning that my research on value-added with respect to college graduation rates (with Doug Harris) was covered in an Education Week blog post by Sarah Sparks. While I am glad to get media coverage for this work, the author never reached out to me to make sure her take on the article was accurate. (I had a radio show in college, and fact-checking was one of the things drilled into my head, so I am probably a little too sensitive on the subject.) As a result, there are some concerns with the Ed Week post that need to be addressed:

(1)    The blog post states that we “analyzed data on six-year graduation rates, ACT or SAT placement-test scores and the percentage of students receiving federal need-based Pell grants at 1,279 colleges and universities from all 50 states from 2006-07 through 2008-09.” While that is true, we also used a range of other demographic and institutional measures in our value-added models. Using ACT/SAT scores and Pell Grant receipt to predict graduation rates explains only about 60% of the variation in institutional graduation rates, while adding the demographic measures that we use explains another 15% or so (a synthetic illustration of this pattern appears after this list). The post should have briefly mentioned this, as it helps set our work apart from previous work (and particularly the U.S. News rankings).

(2)    After generating the predicted graduation rate and comparing it to the actual graduation rate, we adjust for cost in two different ways. In what we call the student/family model, we adjust for the net price of attendance (this is what I used in the Washington Monthly rankings this year). And in the policymaker model, we adjust for educational expenditures per full-time-equivalent student. The blog post characterizes our rankings as “value-added rankings and popularity with families.” While “popularity with families” is an accurate depiction of the student/family model, the term “value-added rankings” doesn’t reflect the policymaker model that well.

(3)    While we do present the schools in the top ten of our measures by Carnegie classification, we spend a great amount of time discussing the issues of confidence intervals and statistical significance. Even if a school has the highest value-added score, its score is generally not statistically different from those of other high-performing institutions. We present the top-ten lists for illustrative purposes only and encourage readers not to treat them as definitive.
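
Returning to point (1), the sketch below uses synthetic data to show the general pattern: adding demographic covariates to a graduation-rate regression raises the share of variation explained. The data-generating process and coefficients are invented; only the modeling pattern mirrors the paper.

```python
# Synthetic illustration of point (1): adding demographic covariates to a
# graduation-rate regression raises R-squared. Data and coefficients are
# invented; only the modeling pattern mirrors the paper.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1_279  # same number of institutions as in the paper, for flavor

act = rng.normal(22, 3, n)            # average ACT score (assumed)
pell = rng.uniform(0.1, 0.7, n)       # share of Pell recipients (assumed)
demog = rng.normal(0, 1, n)           # stand-in demographic index (assumed)
grad = 0.03 * act - 0.25 * pell + 0.06 * demog + rng.normal(0, 0.07, n)

base = sm.OLS(grad, sm.add_constant(np.column_stack([act, pell]))).fit()
full = sm.OLS(grad, sm.add_constant(np.column_stack([act, pell, demog]))).fit()

print(f"R^2, scores + Pell only:     {base.rsquared:.2f}")  # roughly 0.6
print(f"R^2, plus demographic index: {full.rsquared:.2f}")  # roughly 0.75
```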

As an aside, there are five other papers in the Context for Success working group that also examine how to measure college value-added but were not mentioned in the article, plus an outstanding literature review by Tom Bailey and Di Xu. I highly recommend reading through the summaries of those articles to learn more about the state of research in this field.

UPDATE (10/29): I had a wonderful e-mail conversation with the author and the above points have now been addressed. Chalk this up as another positive experience with the education press.

Using Input-Adjusted Measures to Estimate College Performance

I have been privileged to work with HCM Strategists over the past two years on a Gates Foundation-funded project exploring how to use input-adjusted measures to estimate a college’s performance. Although the terminology sounds fancy, the basic goal of the project is to figure out better ways to measure whether a college does a good job educating the types of students it actually enrolls. It doesn’t make any sense to measure a highly selective and well-resourced flagship university against an open-access commuter college; doing so is akin to comparing my ability to run a marathon with that of an elite professional athlete. Just as me finishing a marathon would be the more substantial accomplishment, getting a first-generation student with modest academic preparation to graduate is a much bigger deal than graduating someone whom everyone expected to race through their coursework with ease.

The seven-paper project was officially unveiled in Washington on Friday, and I was able to make it out there for the release. My paper (joint work with Doug Harris) is essentially a policymaker’s version of our academic paper on the pitfalls of popular rankings. It’s worth a read if you want to learn more about my research beyond the Washington Monthly rankings. Additional media coverage can be found in The Chronicle of Higher Education and Inside Higher Ed.

As a side note, it’s pretty neat that the Inside Higher Ed article links to the “authors” page of the project’s website (which includes my bio and information) under the term “prominent scholars.” I know I’m by no means a prominent scholar, but maybe some of that will rub off on me via association.