Predicting Student Loan Default Rates

Regular readers of this blog know that there are several concerns with using outcome measures in a higher education accountability system. One of my primary concerns is that outcomes must be adjusted to reflect a college's inputs—in non-economist language, this means that colleges need to be assessed based on how well they do given their available resources. I have done quite a bit of work in this area with respect to graduation rates, but the same principle can be applied to many other areas in higher education.

The Education Sector also shares this concern, as evidenced by their recent blog post on the importance of input-adjusted graduation measures. In this post (at the Quick and the Ed), Andrew Gillen examines four-year colleges’ performance in student loan default rates. He adjusts for the percentage of Pell Grant recipients, the percentage of part-time students, and the average student loan size to get a measure of student default rate performance.

I replicate this estimate using the most recent loan default data (through 2009-10) and IPEDS data for the above characteristics for the 2009-10 academic year. This simple model does a fair job of predicting loan default rates, with an R-squared value of 0.422. Figure 1 below shows actual vs. predicted loan default rates for 1,876 four-year institutions with complete data:

[Figure 1: Actual vs. predicted loan default rates]
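The input-adjusted approach described above can be sketched in a few lines. This is a minimal illustration on synthetic data, not the actual IPEDS analysis; all variable names, coefficients, and values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical number of institutions

# Stand-ins for the three IPEDS inputs used in the post:
# percent Pell, percent part-time, and average loan size (in $1,000s).
pell = rng.uniform(0.05, 0.80, n)
part_time = rng.uniform(0.00, 0.50, n)
loan_size = rng.uniform(3.0, 12.0, n)

# Simulated "actual" default rates (coefficients are purely illustrative).
actual = (0.02 + 0.15 * pell + 0.08 * part_time
          + 0.002 * loan_size + rng.normal(0, 0.02, n))

# Fit the simple model: default rate ~ Pell % + part-time % + loan size.
X = np.column_stack([np.ones(n), pell, part_time, loan_size])
beta, *_ = np.linalg.lstsq(X, actual, rcond=None)
predicted = X @ beta

# Performance differential: positive means worse (higher) than predicted.
differential = actual - predicted

# R-squared of the fit (the post reports 0.422 for the real data).
ss_res = np.sum(differential ** 2)
ss_tot = np.sum((actual - actual.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
```

Sorting institutions by `differential` would reproduce the kind of sector comparison shown in Figures 2 and 3.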

The Education Sector analysis did not break down student default rate performance by important institutional characteristics, such as type of control (public, private not-for-profit, or for-profit) or the cost of attendance. Figures 2 and 3 below compare the performance of public universities with that of their private not-for-profit and for-profit peers:

[Figures 2 and 3: Default rate performance by institutional control]

Note: A positive differential means that default rates are higher than predicted. Negative numbers are good.

The default rate performances of public and private not-for-profit colleges do not differ in a meaningful way, but a significant number of for-profit colleges have substantially higher-than-predicted default rates. This difference is obscured when all colleges' performances are combined.

Finally, Figure 4 compares default rate performance by the net price of attendance (the sticker cost of attendance less grant aid) and finds no relationship between the net price and loan default rates:

[Figure 4: Default rate performance by net price of attendance]

Certainly, more work needs to be done before adopting input-adjusted student loan default rates as an accountability tool. But it does appear that a certain group of colleges tend to have a higher percentage of former students default, which is worth additional investigation.

Wisconsin Higher Education Policy Issues for 2013

2013 marks a potential benchmark year for state higher education policy debates. More tends to happen in odd-numbered years because politicians are farther away from elections and more willing to make difficult budget decisions—and the influx of federal stimulus dollars is rapidly drying up. In Wisconsin, 2013 is a particularly important year as discussions begin on the state’s biennial budget. The American Association of State Colleges and Universities, an association representing primarily non-flagship public four-year schools, has released its list of the top ten state policy issues for 2013. They are the following:

(1)    Increasing college performance

(2)    Funding for public colleges and universities

(3)    Tuition prices and policy

(4)    State grant aid programs

(5)    Academic preparation for college

(6)    Immigration policy

(7)    Competency-based education

(8)    Concealed carry on campus

(9)    Workforce/economic development

(10) For-profit college regulation

Not all of these issues are a major concern in Wisconsin (such as whether to grant in-state tuition to illegal immigrants who graduated from a Wisconsin public high school), are particularly relevant to student success (such as concealed carry regulations), or are likely to change much (tuition policy). My take on the five most important issues facing the Wisconsin Legislature in 2013 is the following:

Priority #1: Workforce and economic development

Although many in the academic community might disagree with how I have these key issues ordered, the Legislature is clearly focused on workforce and economic development. I expect a focus on vocational and technical education in 2013, as outlined in an August 2012 report by Tim Sullivan, special consultant on economic, workforce, and education development. I’ve written about this report in a previous blog post; overall, the key points in the proposal are reasonable, as long as the Legislature doesn’t go off on a tangent regarding immigration policy or setting unreasonable expectations.

Priority #2: Increasing college performance

Legislation was passed in the previous session that required colleges to make certain accountability information public. (I analyzed UW-Madison’s 2012 report in a post last August.) This legislation didn’t really have any teeth in terms of changing a university’s funding level. This looks very likely to change in 2013, as performance-based funding is going to be a key point of discussion. As Gov. Walker outlined in a speech last fall, he is pushing for some of the higher education funding to be based on a college’s performance in key areas, such as graduation rates and possibly enrolling Pell Grant recipients. I’ll have much more to say about performance-based funding in future blog posts, but for now I will emphasize the importance of using some sort of value-added measure as part of the performance score. (I’ve written quite a bit on this in the past, as well.)

Priority #3: Competency-based education

Wisconsin has become a leader in competency-based education in specialized degree programs, allowing students to earn credit for prior knowledge in certain areas. Unlike some states, which are contracting with the not-for-profit Western Governors University, Wisconsin is developing its effort in-house through the University of Wisconsin System. This experiment will be watched closely around the nation to see whether students take up the program in meaningful numbers and whether it proves cost-effective.

Priority #4: State grant aid programs

In 2012, the Legislature tasked the Higher Education Aids Board, the state’s agency administering need-based and merit-based grant programs, with exploring ways to consolidate and modernize the state’s financial aid system. The report, released in December, failed to suggest any meaningful changes that would help ensure a more reasonable distribution of financial aid to students. I hope that the Legislature will reconsider ways to reduce the number of separate need-based grants in order to have a more streamlined and student-friendly aid system, but I am not terribly optimistic.

Priority #5: Funding for public colleges and universities

After several rough budget cycles, Wisconsin looks to be in reasonable fiscal health entering the 2013-15 biennium. As such, Wisconsin higher education is requesting a funding increase over the 2011-13 cycle. The University of Wisconsin System is requesting a $224 million increase (1.9%), while the Wisconsin Technical College System is requesting an additional $92 million (a 31.6% increase). Most of the requested increases for the UW System are designated for meeting the accountability goals, while most of WTCS’s requested increases are designated for meeting workforce shortages in high-demand occupations. These requested increases show the importance of the top two priorities on my list to Wisconsin legislators.


I expect 2013 to be a much calmer year in Wisconsin politics than the past several years, but no less important to the higher education community. Hopefully, the state will continue to make progress in meeting key performance goals and fostering student success.

Transparency and Teacher Education Programs

I am a firm believer in the public’s right to know nearly everything about government-funded institutions unless there is a clear and compelling reason for privacy. For that reason, I have been following the University of Wisconsin System’s fight against the National Council on Teacher Quality (NCTQ), a group seeking to make information on the standards of teacher education programs public. In conjunction with U.S. News and World Report, NCTQ is compiling course syllabi, textbooks, student handbooks, and other information to rate education schools based on whether they are adequately preparing future K-12 teachers for their professions.

Many public colleges and universities (the full list is here) have objected to this review process on the grounds that the proposed methodology is inadequate for rating colleges. (Yet these same colleges boast about their U.S. News rankings in other respects, although those rankings are just as flawed.) The University of Wisconsin System has long refused to cooperate with NCTQ, as evidenced by its March 2011 letter to NCTQ.

Yet the UW System and many other public universities are failing the public trust by refusing to make important information produced by public employees available at a reasonable cost. The Wisconsin Institute for Law and Liberty, a Milwaukee-based public interest law firm, sued the UW System last January on behalf of NCTQ to get the records turned over. WILL’s suit was ultimately successful in obtaining its objective, as the UW System agreed to turn over the relevant materials and pay WILL nearly $10,000 in damages and fees after obtaining additional privacy assurances.

Wisconsin taxpayers and students will foot the bill for the UW System’s initial refusal to make information public under open records laws. This is a big PR mistake for Wisconsin higher education, as it gives the appearance that universities think they are above accountability—this isn’t a good thing in the current political climate, to say the least.

Now on to the meat of the new rankings, which should come out sometime this year. There are 17 standards which will be a part of the rankings, centered on four areas:

(1)    Selectivity of teacher education programs and students’ incoming academic characteristics

(2)    Teacher knowledge of subject matter

(3)    Classroom management and student teaching skills

(4)    Outcomes of graduates’ future classes on state tests

As regular readers of this blog know, I’m not a fan of the selectivity criterion. If a college does a good job of training teachers, who cares about their ACT score? But the other three measures are certainly important; the question is whether the available data will be sufficient to accurately rate programs and provide stakeholders with useful information.

I expect a big fuss when these ratings are released, just like there is a big fuss whenever the U.S. News undergraduate rankings are released every fall. While I’m concerned about the ability to draw conclusions from available data, these ratings will provide information about whether institutions are collecting relevant types of data (such as their graduates’ outcomes) and certainly won’t be any worse than the peer rating part of the undergraduate rankings that has existed for nearly three decades.

Making the College Scorecard More Student Friendly

The Obama Administration and the U.S. Department of Education have spent a great deal of time and effort in developing a simple one-page “college scorecard.” The goal of this scorecard is to provide information about the cost of attending college, average graduation rates, and information regarding student debt. The Department of Education has followed suit with a College Affordability and Transparency Center, which seeks to highlight colleges with unusually high or low costs to students.

Although I have no doubt that the Administration shares my goal of facilitating the availability of useful information to prospective students and their families, I doubt the current measures are having any effect. The college scorecard is difficult to understand, with technical language that is second nature to higher education professionals but is completely foreign to many prospective students. Because of this, I was happy to see a new report from the Center for American Progress, a liberal think tank, suggesting improvements to the measures. (As a side note, liberal and conservative think tanks work together quite a bit on issues of higher education. Transparency and information provision are nearly universal principles, and partisan concerns such as state-level teachers’ unions and charter schools just aren’t as present in higher ed.)

The authors of the report took the federal government's scorecard and a version of their own to groups of high school students, where they tested the two versions and suggested improvements. The key points aren't terribly surprising—focusing on a few important measures with simple language is critical—but it appears that the Department of Education has not yet done adequate testing of its scorecard. I am also not surprised that students prefer to see four-year graduation rates instead of six-year rates, as everyone thinks they will graduate on time—even though we know that is far from the case.

The changes to the college scorecard are generally a good idea, but I remain concerned about students’ ability to access the information. Even if the scorecard is required to be posted on a college website (like certain outcome measures currently are), it does not mean that it will be easy to access. For example, the graduation rate for first-time, full-time students who received a Pell Grant during their first year of college must be posted on the college’s website, but actually finding this information is difficult. I hope outside groups (such as CAP) will continue to publicize the information, as greater use of the data is the best way to influence colleges’ behavior.

Paying It Forward: A Different Take on Income-Based Repayment

In prior blog posts, I have been less than charitable toward the federal government’s changes to the income-based repayment policies for student loans. (As a reminder, these changes provide large subsidies to students who attend expensive colleges and particularly those who earn good salaries after having attended law or medical school.) My criticism of the federal government’s way of implementing the program does not mean that I am not open to a better way of income-based repayment. With this in mind, I look at a proposal from the Economic Opportunity Institute, a liberal think tank from Washington State, which suggests an income-based repayment program for students attending that state’s public colleges and universities.

The EOI's proposal, called "Pay It Forward," would charge students no tuition or fees upfront and would require students to sign a contract stating that they would pay a certain percentage of their adjusted gross income per year (possibly one percent per year in college) for 25 years after leaving college. It appears that the state would rely on the IRS to enforce payment in order to capture part of the earnings of those who leave the state of Washington. This would be tricky to enforce in practice, given the IRS's general reticence to step into state-level policies.
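As a rough sketch of the repayment mechanics described above (the one-percent-per-year-of-enrollment figure comes from the proposal; the income level is hypothetical, and income growth is ignored):

```python
def pay_it_forward_total(agi, years_enrolled, rate_per_year=0.01,
                         payback_years=25):
    """Total repaid under the Pay It Forward sketch: graduates pay
    (1% x years enrolled) of adjusted gross income annually for 25
    years after leaving college. Flat-income simplification."""
    share = rate_per_year * years_enrolled
    return share * agi * payback_years

# Hypothetical four-year graduate with a flat $50,000 AGI:
total_repaid = pay_it_forward_total(50000.0, 4)
```

Under these assumptions, the graduate repays roughly $50,000 in total, which shows why the program's finances hinge on enrollees' later earnings.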

I am by no means convinced by the group's crude simulations regarding the feasibility of the program. The proposal is currently short on details and would also require a large one-time investment to get off the ground and enroll an initial cohort of students. Additionally, it is not clear whether the authors of the report accounted for part-time enrollment patterns in their cost estimates. I also urge caution with this program, as this sort of income-based repayment program decouples the cost of attendance from what students actually pay. Colleges would suddenly have a strong incentive to raise their posted tuition substantially in order to capture this additional revenue.

With all of these caveats, the Pay It Forward program does have the potential to serve as a simple income-based repayment program once analysts conduct more cost-effectiveness analyses. But this will only work if policymakers keep a close eye on the cost of college in order to keep the program revenue-neutral. My gut feeling is that the group's estimates understate the cost of college under current rules and do not account for the incentives that would drive costs higher.

A November Surprise in Student Loans

A few weeks ago, I co-authored a piece in The Chronicle of Higher Education on the federal government’s authority to relax income-based repayment requirements for student loans. To summarize the proposal, the federal government was granted the authority (starting in 2014) to allow students to repay student loans using only ten percent of their discretionary income for 20 years, down from 15 percent for 25 years. Our argument in the Chronicle piece is that the program represents an enormous subsidy for students attending expensive colleges and particularly professional schools. We were not the only people with those concerns; the left-leaning New America Foundation put out a similar set of concerns.
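The size of the subsidy can be seen in a back-of-the-envelope comparison of the two sets of terms. This sketch assumes a flat income and uses the discretionary-income definition from the federal formula (AGI above 150 percent of the poverty guideline); the borrower's numbers are hypothetical.

```python
def annual_payment(agi, poverty_line, share):
    """Income-based payment: a share of discretionary income, defined
    as AGI above 150% of the applicable poverty guideline."""
    discretionary = max(0.0, agi - 1.5 * poverty_line)
    return share * discretionary

def total_paid(agi, poverty_line, share, years):
    # Flat-income simplification: identical payments until forgiveness.
    return annual_payment(agi, poverty_line, share) * years

# Hypothetical borrower: $45,000 AGI; ~$11,170 single-person poverty line.
agi, poverty = 45000.0, 11170.0
old_total = total_paid(agi, poverty, 0.15, 25)  # 15% for 25 years
new_total = total_paid(agi, poverty, 0.10, 20)  # 10% for 20 years
```

Any balance remaining at the end of the repayment window is forgiven, so whenever the debt is large enough not to be repaid in full, the gap between the two totals falls on the government.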

I was very surprised to learn yesterday that the Obama Administration published the final regulations for the new income-based repayment program (called “Pay as You Earn” or PAYE) in the Federal Register, which will suddenly take effect much sooner than 2014 and apparently no later than July 1, 2013. There appears to be no regulatory authority for speeding up the changes, other than the federal requirement that regulations be published by November 1 in order to take effect on July 1 of the following year. These regulations continue a disturbing trend of this administration ignoring Congressionally mandated timelines. It is my sincere hope that someone will ask the Department of Education for clarification as to how speeding up implementation is legal, especially when Congress did not agree to that timeline and there is a cost impact (more on that later).

Substantial legal issues aside, it appears that the Department of Education did not seriously consider the moral hazard concerns of people taking out more debt simply because they will not have to repay it. Buried on page 28 of the 61-page regulation document is the following nugget:

“Income-based repayment options may encourage higher borrowing and potentially introduce an unintended moral hazard, especially for borrowers enrolled at schools with high tuitions and with low expected income streams. Some commenters disagreed with the inclusion of this moral hazard statement, noting that the aspect of more generous income-based repayment plans causing increased borrowing has not been established. The Department has not found any definitive studies on the matter but since some analysts, academics, and others have suggested the possibility of this inducement effect, we wanted to address it to ensure comprehensive coverage of this issue.”

The Department of Education then never addresses the topic in the rest of the regulation document, instead focusing on the benefits for borrowers with more modest amounts of debt and household incomes of less than $60,000 per year. I side with Jason Delisle and the New America policy folks, who still note that moral hazard is a substantial concern.

The cost estimates seem too good to be true, which is often the case when implementing new federal programs. The Department of Education (on p. 35 of the regulations) estimates that the cost will be only $2.1 billion over the next ten years, which seems incredibly low. Roughly $250 million per year in additional costs over the peak years of the budget window (the last few years should not be included because they don't capture full cohort costs) would cover only $50,000 in loan forgiveness for each of 5,000 students. There are a lot more than 5,000 law and medical school graduates who could benefit under this program, yet it is unclear whether the Department of Education actually modeled professional school students (it mentions on page 34 that graduate students were modeled, but they have much less debt on average).
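The arithmetic behind that skepticism is simple enough to check directly (the per-year figure is the rough peak-year estimate from the paragraph above):

```python
# Back-of-the-envelope check on the Department's cost estimate.
total_cost = 2.1e9        # $2.1 billion over ten years (Dept. estimate)
peak_annual_cost = 250e6  # rough additional cost per peak year

# If only 5,000 borrowers received forgiveness each year, the peak
# annual cost works out to $50,000 apiece -- far less than typical
# professional-school debt loads.
borrowers = 5000
forgiveness_each = peak_annual_cost / borrowers
```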

Despite this change being a substantial shift in student loan policy, the education community and the media don't seem to be very interested in the substantial cost shifting. The Chronicle had a nice article (subscription required) on the moral hazard concerns surrounding the program, and Inside Higher Ed mentioned the release of the final rules but completely missed the point. The conservative Daily Caller also mentioned the changes, but didn't get into the questionable legal foundation of advancing the policy before 2014 or the issue of who benefits.

It is easy to link the release of these regulations to electoral politics as usual, and I am certainly skeptical of what happens this time of year. However, given the limited media coverage and the fact that not all of it has been positive, it appears that the Department of Education wanted to release the rules so that they take effect before receiving more public scrutiny. Hopefully, there will be a successful lawsuit delaying the rules on the grounds that the Obama Administration overstepped its legal authority by having the rules take effect in 2013 instead of 2014. That would give researchers and policymakers a chance to rewrite the rules to target aid to those who truly need it instead of subsidizing expensive professional education programs.

More Information on the Education Week Article

I was happy to learn this morning that my research on value-added with respect to college graduation rates (with Doug Harris) was covered in an Education Week blog post by Sarah Sparks. While I am glad to get media coverage for this work, the author never reached out to me to make sure her take on the article was accurate. (I had a radio show in college and this was one of the things that was drilled into my head, so I am probably a little too sensitive regarding fact-checking.) As a result, there are some concerns with the Ed Week post that need to be addressed. My concerns are as follows:

(1)    The blog post states that we “analyzed data on six-year graduation rates, ACT or SAT placement-test scores and the percentage of students receiving federal need-based Pell grants at 1,279 colleges and universities from all 50 states from 2006-07 through 2008-09.” While that is true, we also used a range of other demographic and institutional measures in our value-added models. Using ACT/SAT scores and Pell Grants to predict graduation rates explains only about 60% of the variation in institutional graduation rates, while including the additional demographic measures that we use explains an additional 15% or so of the variation. The post should have briefly mentioned this, as it helps set our work apart from previous work (and particularly the U.S. News rankings).

(2)    After generating the predicted graduation rate and comparing it to the actual graduation rate, we adjust for cost in two different ways. In what we call the student/family model, we adjust for the net price of attendance (this is what I used in the Washington Monthly rankings this year). And in the policymaker model, we adjust for educational expenditures per full-time equivalent student. The blog post characterizes our rankings as “value-added rankings and popularity with families.” While the popularity with families is an accurate depiction of the student/family model, the term “value-added rankings” doesn’t reflect the policymaker model that well.

(3)    While we do present the schools in the top ten of our measures by Carnegie classification, we spend a great amount of time discussing the issues of confidence intervals and statistical significance. Even if a school has the highest value-added score, its score is generally not different from other high-performing institutions. We present the top-ten lists for illustrative purposes only and would encourage readers not to consider the lists to be perfect.
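The incremental explanatory power described in point (1) can be illustrated with a nested-model comparison. Everything below is synthetic; the names mirror the measures discussed, but the data and coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 800  # hypothetical number of institutions

# Synthetic institutional measures.
act_sat = rng.normal(0.0, 1.0, n)            # standardized test scores
pct_pell = rng.normal(0.0, 1.0, n)           # percent Pell recipients
demographics = rng.normal(0.0, 1.0, (n, 3))  # extra demographic measures

# Simulated graduation rates that depend on all of the measures.
grad_rate = (0.8 * act_sat - 0.5 * pct_pell
             + demographics @ np.array([0.4, 0.3, 0.3])
             + rng.normal(0.0, 0.5, n))

def r_squared(X, y):
    """In-sample R-squared of an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Small model (test scores + Pell) vs. full model (adds demographics).
r2_small = r_squared(np.column_stack([act_sat, pct_pell]), grad_rate)
r2_full = r_squared(np.column_stack([act_sat, pct_pell, demographics]),
                    grad_rate)
gain = r2_full - r2_small  # the extra variation the added measures explain
```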

As an aside, there are five other papers in the Context for Success working group which also examine measuring college value-added that were not mentioned in the article, plus an outstanding literature review by Tom Bailey and Di Xu. I highly recommend reading through the summaries of those articles to learn more about the state of research in this field.

UPDATE (10/29): I had a wonderful e-mail conversation with the author and the above points have now been addressed. Chalk this up as another positive experience with the education press.

Using Input-Adjusted Measures to Estimate College Performance

I have been privileged to work with HCM Strategists over the past two years on a Gates Foundation-funded project to explore how to use input-adjusted measures to estimate a college’s performance. Although the terminology sounds fancy, the basic goal of the project is to figure out better ways to measure whether a college does a good job educating the types of students that it actually enrolls. It doesn’t make any sense to measure a highly selective and well-resourced flagship university against an open-access commuter college; doing so is akin to comparing my ability to run a marathon with that of an elite professional athlete. Just like me finishing a marathon is a much more substantial accomplishment, getting a first-generation student with modest academic preparation to graduate is a much bigger deal than someone whom everyone expected to race through their coursework with ease.

The seven-paper project was officially unveiled in Washington on Friday, and I was able to make it out there for the release. My paper (joint work with Doug Harris) is essentially a policymaker’s version of our academic paper on the pitfalls of popular rankings. It’s worth a read if you want to find out more about my research beyond the Washington Monthly rankings.  Additional media coverage can be found in The Chronicle of Higher Education and Inside Higher Ed.

As a side note, it's pretty neat that the Inside Higher Ed article links to the "authors" page of the project's website (which includes my bio and information) under the term "prominent scholars." I know I'm by no means a prominent scholar, but maybe some of that will rub off from the others via association.

Is College Cost Certainty a Possibility?

There are only a few certainties in life, such as death, taxes, and Murphy’s Law holding true at the most inopportune times. For nearly everyone, however, knowing the cost of college more than a few months in advance is definitely not one of those certainties. But given the high sticker price of attending college (which is not tremendously useful for most people), what can be done to provide cost certainty for students and their families over the course of several years?

As a part of a nifty series of essays on possible ways to overhaul the college experience, Beckie Supiano of The Chronicle of Higher Education takes a quick look at what has been done to provide better cost information. She focuses on a useful goal—being able to provide students with the net price of education (tuition less grant aid) over the course of several years. While this is a great idea in theory, it quickly runs into several problems in practice:

(1)    Colleges only control a fraction of grant aid, especially for financially needy students. Most federal need-based grants require that a student be eligible to receive the Pell Grant; if family income rises by a small amount, the loss of aid can be devastating. This also makes forecasting net price nearly impossible for middle-income families.

(2)    Most colleges cannot forecast their available resources several years in advance. Public colleges are at the whims of economic circumstances and the state government, while many private colleges rely on a combination of endowment revenue and rear ends in seats in order to make ends meet.

(3)    Colleges, like most of us out there, tend to be what economists call risk-averse. In English, that means that we don’t like being exposed to uncertainty. Students and their families currently bear the brunt of the uncertainty with respect to higher education pricing, but locking in a price regardless of the economic circumstances would shift that risk to the college. Most colleges would likely set a very high initial price in order to account for this uncertainty.

A small number of states have experimented with guaranteed tuition plans through prepaid tuition programs, which helps provide at least some certainty (although they do not guarantee set amounts of financial aid). Alabama’s program ran out of money during the financial crisis and is currently closed to new enrollment while facing legal challenges. A similar program in Illinois is also drastically underfunded, which is a common theme in the Land of Lincoln.

The article makes a key mistake in discussing cost certainty—it ignores the fact that known cost increases still represent cost certainty. If a college guarantees that tuition will go up by five percent per year for the next five years, students and parents can still have an idea of what college will cost in the future. The State University of New York system took a similar path in 2011, allowing each university in the system to raise tuition by up to $300 per year for five years. This proposal is quite useful as it puts increases in dollar terms, which are easier to understand than percentages.
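Either flavor of guaranteed increase produces a knowable price path. Here is a quick sketch with a hypothetical $6,000 starting tuition:

```python
def tuition_path_pct(base, annual_pct, years):
    """Tuition each year under a guaranteed percentage increase."""
    return [round(base * (1 + annual_pct) ** t, 2) for t in range(years + 1)]

def tuition_path_flat(base, annual_increase, years):
    """Tuition each year under a guaranteed flat dollar increase
    (the SUNY-style cap of up to $300 per year)."""
    return [base + annual_increase * t for t in range(years + 1)]

base = 6000.0  # hypothetical starting tuition
pct_path = tuition_path_pct(base, 0.05, 5)     # 5% per year for five years
flat_path = tuition_path_flat(base, 300.0, 5)  # $300 per year for five years
```

Under these assumptions, the flat schedule ends at $7,500 while the five-percent schedule ends near $7,658, and the dollar version is easier for families to reason about, which is the advantage of the SUNY approach noted above.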

I am interested in providing more information on cost certainty, but my research focuses on the financial aid side of the cost equation instead of the tuition side. I am currently working on two studies examining whether students and their families can receive earlier notifications of their financial aid, with results hopefully to come in the next few months. It is far from perfect, but it does help provide a little more information in an uncertain world.

New Data on the Returns to College

Many people love to hate college rankings, but they have traditionally been one of the most easily digestible sources of information about institutions of higher education. We know very little about the outcomes of students who attend a particular college over time, so we tend to rely on simplistic measures such as graduation rates or measures of prestige. It is difficult to follow and assess the outcomes of students once they leave a given college for multiple reasons:

(1)    A substantial percentage of students transfer colleges at least once. A recent report estimated that about one-third of students who enrolled in fall 2006 were enrolled elsewhere sometime in the next five years. The growth of the National Student Clearinghouse has made following students easier, but it is difficult to figure out how to split the credit for successful outcomes across the colleges that a given student attends.

(2)    While the group of students to be assessed (everyone!) sounds straightforward, most of the push has been to focus on the outcomes of graduates. This makes for a reasonable comparison group across colleges, but colleges have different graduation rates. It makes sense to focus on all students who entered a college, but this would lower the returns to college (and doesn’t fit well with selective colleges, where everyone is assumed to graduate).

(3)    Some people choose to postpone entry into the full-time labor market, whether for good reasons (such as starting a family) or for more dubious reasons (such as getting a master’s degree and working on a PhD). Given the lack of a federal data system, other students will not be observed if they move out-of-state to work.

Even with all of the limitations of measuring student outcomes once they leave college, I am heartened to see states starting to track the labor market outcomes of students who attended public colleges and stay in-state. This requires the merging of two data systems that don’t always exist in some states and don’t talk to each other in others—state higher education data systems and unemployment insurance (UI) records. Two states, Arkansas and Tennessee, just launched websites with labor market information for graduates from their public institutions of higher education. While the sample included is far from perfect, it still provides useful data to many students, families, and policymakers.

Not surprisingly, many in academia are worried about these new measures, as they prioritize one of the purposes of higher education (employment) at the expense of other important purposes (such as critical thinking and higher-order learning). The comments on this recent Chronicle of Higher Education article are worth a read. I am concerned about policymakers solely relying on these imperfect measures of student outcomes, but stakeholders should be able to have more information about the effectiveness of colleges on as many outcomes as possible.