The Great Student Loan Interest Rate Debate

As I write this post, the House of Representatives is debating the future of student loan interest rates. Under current law, the rates on subsidized Stafford loans for undergraduates (the rates that get the most attention) will double on July 1 from 3.4% to 6.8% without Congressional action. The same debate was held last year under the same parameters, but Congress and the President agreed to extend the current rates for an additional year.
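To put the doubling in perspective, here is a quick back-of-the-envelope sketch. The $10,000 balance and the standard 10-year repayment term are my own illustrative assumptions, not figures from any of the proposals:

```python
# Back-of-the-envelope comparison of monthly payments on a subsidized
# Stafford loan under the two rates being debated. The $10,000 balance
# and standard 10-year term are illustrative assumptions, not figures
# from any proposal.

def monthly_payment(principal, annual_rate, years):
    """Standard fixed-rate amortization formula."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

for rate in (0.034, 0.068):
    payment = monthly_payment(10_000, rate, 10)
    print(f"{rate:.1%}: ${payment:,.2f}/month, ${payment * 120:,.2f} total")
```

Under these assumptions, the doubling works out to roughly $17 more per month, or about $2,000 in additional interest over the life of the loan.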

A wide range of proposals has been put forth to address the interest rate cliff, an outstanding summary of which was written by Libby Nelson at Inside Higher Ed. (In addition to the plans listed in that article, some Senate Democrats have supported a two-year extension of current law in order to allow the Higher Education Act to be reauthorized.) Most proposals would tie interest rates to the market (represented here by borrowing costs for the federal government), but the plans vary widely in their ideas of what the relevant market should be.

Proposals put forth by the Obama Administration and House and Senate Republicans all tie interest rates to long-term Treasury borrowing (the 10-year Treasury note), but vary in their other features. (I’ve previously written on the Obama Administration’s proposal.) While the President has threatened to veto the House GOP proposal over certain aspects, there is enough common ground here to reach an agreement.

However, proposals put forth by certain Democratic senators, particularly Sen. Elizabeth Warren of Massachusetts, confuse long-term lending risks with short-term credit markets. She has proposed tying student loan interest rates (on loans that are repaid over at least ten years once a student leaves college) to the interest rate the Federal Reserve charges banks for very short-term borrowing. Jason Delisle of the New America Foundation, hardly a bastion of conservatism, dismantles her argument in a great piece of writing. He notes the confusion between short-term and long-term rates, as well as the need to account for the probability of default. I would also note that if Congress wishes to make college more affordable, it’s a better idea to give students the funds upfront than to lower interest rates later on, long after enrollment decisions have been made.
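To make the upfront-money point concrete, here is a hedged sketch that discounts the stream of interest savings from a rate cut back to the time of enrollment, so it can be compared with a grant paid today. The loan size, repayment term, and discount rate are all my own illustrative assumptions:

```python
# Illustrative comparison: a grant paid at enrollment vs. the present
# value of the interest savings from a lower loan rate during repayment.
# All figures are assumptions: $10,000 borrowed, 10-year repayment,
# and a 5% annual discount rate.

def monthly_payment(principal, annual_rate, years):
    r, n = annual_rate / 12, years * 12
    return principal * r / (1 - (1 + r) ** -n)

def pv_of_rate_cut(principal, high, low, years, discount):
    """Discount the stream of monthly payment savings back to today."""
    saving = (monthly_payment(principal, high, years)
              - monthly_payment(principal, low, years))
    d = discount / 12
    return sum(saving / (1 + d) ** m for m in range(1, years * 12 + 1))

print(f"${pv_of_rate_cut(10_000, 0.068, 0.034, 10, 0.05):,.2f}")
```

Under these assumptions, cutting the rate from 6.8% to 3.4% is worth roughly $1,600 in present value, and none of it arrives until repayment begins, which is why a grant of similar budgetary cost delivered at enrollment can do more to shape the decision to attend.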

The federal government should move toward some sort of market-based strategy for interest rates with certain student protections. This would allow the costs of student loans to be more accurately reflected in the federal budget. (And if interest rates get too high, maybe that’s a reminder for Congress and the President to produce a balanced budget!) That being said, I would still expect to see a short-term extension of the current interest rates, as Congress may end up deadlocked on this issue until the Higher Education Act is reauthorized.

More on Rate My Professors and the Worst Universities List

It turns out that whether Rate My Professors should be used to rank colleges is a popular topic. My previous blog post on the subject, in which I discuss why the website shouldn’t be used as a measure of teaching quality, was by far the most-viewed post I’ve ever written and was picked up by other media outlets. I’m briefly returning to the topic to acknowledge a wonderful (albeit late) statement released by the Center for College Affordability and Productivity (CCAP), the organization that compiled the Rate My Professors (RMP) data for Forbes.

CCAP’s statement notes that the RMP data should be considered only a measure of student satisfaction and not a measure of teaching quality. This is a much more reasonable interpretation given the documented correlation between official course evaluations and RMP data; it’s also no secret that certain disciplines receive lower student evaluations regardless of teaching quality. The previous CBS MoneyWatch list should be interpreted as a list of schools with the least satisfied students before controlling for academic rigor or major fields, but that doesn’t make for as spicy a headline.

Kudos to the CCAP for calling out CBS regarding its misinterpretation of the RMP data. Although I think that it is useful for colleges to document student satisfaction, this measure should not be interpreted as a measure of instructional quality—let alone student learning.

Net Price and Pell Enrollment: The Good and the Bad

I am thrilled to see more researchers and policymakers taking advantage of the net price data (the cost of attendance less all grant aid) available through the federal IPEDS dataset. These data can be used to identify colleges that do a good job of keeping out-of-pocket costs low, either for all students who receive federal financial aid or just for students from the lowest-income families.
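The calculation itself is simple; here is a minimal sketch, with the dollar figures invented for illustration:

```python
# IPEDS-style net price: cost of attendance minus all grant aid.
# Dollar figures are invented for illustration; IPEDS reports this
# measure for students receiving federal financial aid, broken out
# by household income bracket.

def net_price(cost_of_attendance, grant_aid):
    """Out-of-pocket cost before loans and work-study."""
    return cost_of_attendance - grant_aid

# Example: a $45,000 sticker price with a $32,000 grant package
print(net_price(45_000, 32_000))  # -> 13000
```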

Stephen Burd of the New America Foundation released a fascinating report today showing the net prices for the lowest-income students (those with household incomes below $30,000 per year) in conjunction with the percentage of students receiving Pell Grants. The report lists colleges that are successful in keeping the net price low for the neediest students while enrolling a substantial proportion of Pell recipients, along with colleges that charge relatively high net prices to a small number of low-income students.

The report advocates more of a focus on financially needy students and a shift toward aid based on financial need instead of academic qualifications. Indeed, the phrase “merit aid” has fallen out of favor in a good portion of the higher education community. An example of this came at last week’s Education Writers Association conference, where many journalists stressed the importance of using the phrase “non-need-based aid” instead of “merit aid” to change the public’s perspective on the term. But regardless of the preferred name, aid based on academic characteristics is used to attract students with more financial resources and to stay near the top of prestige-based rankings such as U.S. News and World Report.

While the report is a great addition to the policy debate, it deserves a substantial caveat. The measure of net price for low-income students includes only students with household incomes below $30,000 per year. This does not line up perfectly with Pell recipients, who often have household incomes around $40,000 per year. Additionally, focusing on just the lowest income bracket can result in a small number of students being used in the analysis; in the case of small liberal arts colleges, the net price may be based on fewer than 100 students. The cutoff also invites gaming, as colleges could charge much higher prices to families making just over $30,000 per year, a potentially undesirable outcome.
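The small-sample concern is easy to see with a quick simulation. Every number below is hypothetical (a true average net price of $12,000 with a $6,000 student-to-student standard deviation), chosen only to show how the reported average bounces around when few students are behind it:

```python
# Hypothetical simulation of how noisy an average net price is when it
# is based on few students. Assumed: true mean $12,000, student-level
# standard deviation $6,000.

import random

random.seed(0)

def observed_mean(n, mu=12_000, sigma=6_000):
    """Average net price computed from a sample of n students."""
    return sum(random.gauss(mu, sigma) for _ in range(n)) / n

for n in (50, 500, 5_000):
    estimates = [observed_mean(n) for _ in range(1_000)]
    rmse = (sum((e - 12_000) ** 2 for e in estimates) / len(estimates)) ** 0.5
    print(f"n={n:>5}: typical error in the reported average ≈ ${rmse:,.0f}")
```

With only 50 students behind the average, the reported figure can easily be off by several hundred dollars, which is large relative to the differences between colleges that the report highlights.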

As an aside, I’m defending my dissertation tomorrow, so wish me luck! I hope to get back to blogging somewhat more frequently in the next few weeks.

How Not to Rate the Worst Professors

I was surprised to come across an article from Yahoo! Finance claiming knowledge of the “25 Universities with the Worst Professors.” (Maybe I shouldn’t have been surprised, but that is another discussion for another day.) The top 25 list includes many technology- and engineering-oriented institutions, as well as liberal arts colleges. I am particularly amused by the inclusion of my alma mater (Truman State University) at number 21, as well as my new institution starting next fall (Seton Hall University) at number 16. Additionally, 11 of the 25 universities are located in the Midwest, with none in the South.

This unusual distribution immediately led me to examine the methodology of the list, which comes from Forbes and CCAP’s annual college rankings. The worst professors list is based on Rate My Professors, a website that allows students to rate their instructors on a variety of characteristics. The rankings use a mix of the helpfulness and clarity measures while partially controlling for a professor’s “easiness.”
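Forbes and CCAP don’t publish an exact formula, so conceptually the score looks something like the sketch below. The 0.5 easiness weight and the 3.0 scale midpoint are purely my guesses, not their actual method:

```python
# Conceptual sketch of an RMP-based score: average helpfulness and
# clarity, then partially adjust for easiness. The 0.5 weight and the
# 3.0 scale midpoint are my guesses; Forbes/CCAP do not publish the
# exact formula.

def rmp_score(helpfulness, clarity, easiness, easiness_weight=0.5):
    quality = (helpfulness + clarity) / 2
    # Dock part of the rating for professors rated unusually "easy,"
    # so easy grading only partially inflates the score.
    return quality - easiness_weight * (easiness - 3.0)

print(rmp_score(helpfulness=4.5, clarity=4.0, easiness=4.8))  # -> 3.35
```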

I understand their rationale for using Rate My Professors, as it’s the only widespread source of information about faculty teaching performance. I’m not opposed to using Rate My Professors as part of this measure, but controlling for grades received and the course’s home discipline is essential. At many universities, science and engineering courses have much lower average grades, which may influence students’ perceptions of the professor. The same is true at certain liberal arts colleges.

The course’s home discipline is already in the Rate My Professors data, and I recommend that Forbes and CCAP weight results by discipline in order to make more accurate comparisons across institutions. I would also push them to aggregate a representative sample of comments for each institution, so prospective students can learn more about what current students think beyond a Likert score.
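One hedged way to implement that weighting is to apply a single fixed discipline mix to every institution’s discipline-level averages, so that comparisons hold the mix of fields constant. The ratings and shares below are invented for illustration:

```python
# Sketch of discipline-weighted ratings: apply one fixed national
# discipline mix to every institution's discipline-level averages so
# that comparisons hold the mix of fields constant. All numbers are
# invented for illustration.

# Average RMP rating by discipline at one hypothetical institution
ratings = {"engineering": 3.2, "humanities": 4.1, "business": 3.8}

# Hypothetical national enrollment shares, used for every institution
national_shares = {"engineering": 0.20, "humanities": 0.50, "business": 0.30}

adjusted = sum(national_shares[d] * ratings[d] for d in ratings)
print(f"Discipline-adjusted rating: {adjusted:.2f}")  # -> 3.83
```

Holding the discipline mix constant keeps an institution that happens to be heavy in low-rated fields, such as engineering, from looking worse than a peer with similar teaching but a different portfolio of programs.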

Student course evaluations are not going away (much to the chagrin of some faculty members), and they may be used in institutional accountability systems as well as playing a very small part in the tenure and promotion process. But like many of the larger college rankings, Forbes/CCAP’s work results in at best an incomplete and at worst a biased comparison of colleges. (And I promise that I will work hard on my helpfulness and clarity measures next fall!)