Improving Net Price Data Reporting

As the sticker price of attending colleges and universities has steadily increased over the past decade, researchers and policymakers have begun to focus on the actual price that students and their families face. The federal government collects a measure of the net price of attendance in its IPEDS database, which is calculated as the total cost of attendance (tuition, fees, room and board, and other expenses) less any grant aid received. (More information can be found on the IPEDS website.) I have used the net price measure in my prior work, including the Washington Monthly rankings and my previous post on the Net Price Madness tournament. However, the data do have substantial limitations—some of which could be easily addressed in the data collection process.

There are two different net price measures currently available in the IPEDS dataset—one for all students receiving grant aid (federal, state, and/or institutional) and one for students receiving any federal financial aid (grants, loans, or work-study). The average net price is available for the first measure, while the second measure breaks down the net price by family income (but does not report an average net price). For public institutions, both of these measures include only first-time, full-time, degree-seeking students paying in-state tuition, which can substantially limit the generalizability of the results.

Here, I use my current institution (the University of Wisconsin-Madison) as an example. The starting sample for IPEDS is the 3,487 first-time, full-time, degree-seeking freshmen who are in-state students. Of those students, net price by family income is calculated for the 1,983 students receiving Title IV aid. (This suggests that just over half of in-state Madison freshmen file the FAFSA.) Here are the net price and number of students by income group:

$0-30k: $6,363 (n=212)
$30-48k: $10,098 (n=232)
$48-75k: $15,286 (n=406)
$75-110k: $19,482 (n=542)
$110k+: $20,442 (n=591)

The average net price is calculated for a slightly different group of students—those who received grant aid from any source (n=1,858). The average net price is $14,940, which is lower than the average net price faced by students who file the FAFSA ($16,409), as the latter measure also includes students who receive federal loans or work-study but no grant aid. However, the latter number is not reported in the main IPEDS dataset and can only be calculated by digging into the institutional reports.
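The $16,409 figure for FAFSA filers is simply the enrollment-weighted average of the group-level net prices above. A quick sketch of the calculation:

```python
# Net price and Title IV student counts by family income group, from the
# UW-Madison figures above.
groups = {
    "$0-30k":   (6_363, 212),
    "$30-48k":  (10_098, 232),
    "$48-75k":  (15_286, 406),
    "$75-110k": (19_482, 542),
    "$110k+":   (20_442, 591),
}

total_students = sum(n for _, n in groups.values())  # 1,983 Title IV recipients
weighted_avg = sum(price * n for price, n in groups.values()) / total_students

print(f"${weighted_avg:,.0f}")  # $16,409
```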

I would encourage IPEDS to add the average net price for all FAFSA filers to the dataset, as that better reflects what students from financially modest backgrounds will pay. Additionally, because the number of students with family incomes below $30,000 can be relatively small at some institutions, and to tie into policy discussions, I would like to see the average net price for all Pell Grant recipients. These changes could easily be made given current data collection procedures and would provide more useful data to stakeholders.

The 2013 Net Price Madness Tournament

Millions and millions of Americans will be sitting on the couch over the next several weeks watching the NCAA college basketball tournaments—and I’ll be keeping an eye on my Wisconsin Badgers as the men’s team makes its way through the tournament. Those of us in the higher education community have made a variety of brackets highlighting different aspects of the participating institutions (see Inside Higher Ed’s looks at the men’s and women’s tournaments, using the NCAA’s Academic Progress Rate for student-athletes, and one from The Awl based on tuition, with higher tuition resulting in advancement).

I take a different look at advancing colleges through the tournament—based on having the lowest net price of attendance. Net price is calculated as the total cost of attendance (tuition and fees, room and board, books, and a living allowance) less any grant aid received—among students receiving any grant aid. I use IPEDS data from 2010-11 for this analysis, and I also show results when the analysis is limited to students with family incomes below $30,000 per year (most of whom will have an expected family contribution of zero). Data for the 2013 Net Price Madness Tournament are below:
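The advancement rule is mechanical enough to simulate: in each matchup, the college with the lower net price moves on. A minimal sketch (the `advance` helper is mine, purely for illustration), using the Midwest regional figures from the overall-net-price bracket:

```python
def advance(teams):
    """One round of Net Price Madness: the cheaper college in each matchup advances."""
    return [min(pair, key=lambda team: team[1])
            for pair in zip(teams[::2], teams[1::2])]

# Midwest region, overall net price (figures from the bracket below).
midwest = [("North Carolina A&T", 6_147), ("New Mexico State", 8_492),
           ("Middle Tennessee State", 9_148), ("Albany", 12_697)]

semifinal_winners = advance(midwest)
regional_champion = advance(semifinal_winners)[0]
print(regional_champion)  # ('North Carolina A&T', 6147)
```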

Overall Net Price

Round of 16

Midwest: North Carolina A&T ($6,147) vs. New Mexico State ($8,492), Middle Tennessee State ($9,148) vs. Albany ($12,697)

West: Wichita State ($8,079) vs. Ole Miss ($12,516), New Mexico ($10,272) vs. Iowa State ($13,554)

South: North Carolina ($11,028) vs. South Dakota State ($12,815), Northwestern State ($7,939) vs. San Diego State ($8,527)

East: North Carolina State ($9,847) vs. UNLV ($9,943), Davidson ($23,623) vs. Illinois ($15,610)

Final Four

North Carolina A&T ($6,147) vs. Wichita State ($8,079)

Northwestern State ($7,939) vs. North Carolina State ($9,847)

WINNER: North Carolina A&T (59% Pell, 41% graduation rate)

Net Price (household income below $30k)

Round of 16

Midwest: North Carolina A&T ($4,774) vs. New Mexico State ($5,966), Michigan State ($5,569) vs. Duke ($8,049)

West: Southern University ($8,752) vs. Wisconsin ($6,363), Harvard ($1,297) vs. Iowa State ($8,636)

South: North Carolina ($4,101) vs. Michigan ($4,778), Florida ($3,778) vs. San Diego State ($3,454)

East: Indiana ($3,919) vs. UNLV ($6,412), Davidson ($7,165) vs. Illinois ($7,432)

Final Four

North Carolina A&T ($4,774) vs. Harvard ($1,297)

San Diego State ($3,454) vs. Indiana ($3,919)

WINNER: Harvard (11% Pell, 97% graduation rate)

Depending on which version of net price is used, the results do change substantially. Some colleges dramatically lower their net price of attendance for the neediest students, while others keep theirs more constant in spite of Pell Grant funds being available. Harvard’s victory on the lowest-income measure does ring somewhat hollow, as its percentage of students receiving Pell Grants (11%) tied with Villanova for the lowest in the tournament.

Thanks for reading this post, and feel free to use these picks if you choose to fill out a bracket for the real tournament. Do keep in mind that low net prices and basketball prowess may not exactly be correlated!

The Benefits of Biennial Budgets

The federal government has had a substantial problem with its budgeting process over the past several years, with funding provided by a series of continuing resolutions outside the annual process for more than three years. With bipartisan frustration over this process growing, a group of centrist Senators, led by Jeanne Shaheen (D-NH) and Johnny Isakson (R-GA), has proposed a switch from annual to biennial budgets. The proposal was introduced in the past Congress and was not seriously discussed, but it is likely to be considered this time around given the interest of Senate Majority Leader Harry Reid (D-NV).

Biennial budgets are not uncommon at the state level. A 2011 report from the National Conference of State Legislatures shows that 19 states have biennial budgets, including Ohio, Texas, and Wisconsin. Only four of these states have legislatures that meet every other year, meaning that 15 states have actively chosen the biennial path.

Biennial budgeting allows more time for debate and discussion of tricky matters, but state budgets often have to be adjusted because of balanced-budget requirements. (Budget repair bills are well-known here in Wisconsin.) The lack of such a requirement at the federal level makes biennial budgeting even more feasible. While I am a staunch supporter of a balanced budget, I recognize that a small error in economic growth or demographic assumptions can result in a slightly unbalanced budget over a two-year period. As long as the assumptions are reasonable, I’m fine with a small error that can be addressed in the future.

Requiring a budget every two years instead of every year could help provide more stability to federal education funding, particularly regarding policies and levels of student financial aid and education research. This stability has the potential to have positive impacts independent of the actual funding levels. For example, if the exact dollar amount of the maximum Pell Grant is known further in advance, a push could be made to communicate that amount to students who are likely to qualify before they enter college. Providing information about financial aid earlier could induce the marginal student to enroll in college, and perhaps even to take an additional high school course that would lower the likelihood of remediation. This push toward earlier notification of financial aid is consistent with other parts of my research agenda, and would have the added benefit (in my view) of allowing Pell Grant funding to be flexible as needed in the future.

A biennial budget process could also make student loan interest rates more predictable. Under current law, the interest rate on subsidized Stafford loans for undergraduates is set to double (from 3.4% to 6.8%) on July 1. (This is a budgetary matter because the interest rate determines the level of profit or loss for the federal government.) While I am a strong supporter of plans to tie student loan interest rates to market conditions—such as the rate paid on Treasury bills plus 3%—biennial budgeting would at least keep interest rates from facing a cliff every single year.

Biennial budgeting has the potential to bring more stability to education funding, as well as budgets that are thoroughly debated and passed under regular order. For those reasons, I support moving from annual to biennial budgets. I would love to hear your thoughts on this proposal in the comments!

The Higher Learning Commission’s Accreditation Gamble

Accrediting bodies play an important role in judging the quality (or at least the competency) of American colleges and universities. There are six regional accreditors that cover the majority of nonprofit, nonreligious postsecondary institutions, including the powerful Higher Learning Commission in the Midwest. The HLC recently informed Apollo Group, the owner of the University of Phoenix, that it may be placed on probation due to concerns about administrative and governance structures.

Part of Phoenix’s accreditation concerns may be due to a philosophical shift at the HLC, which now emphasizes the public purposes of higher education. As noted in an Inside Higher Ed article on the topic, Sylvia Manning, president of the HLC, stated the commission’s priority that education serve as a public good. The new accrediting criteria include the following statement:

“The institution’s educational responsibilities take primacy over other purposes, such as generating financial returns for investors, contributing to a related or parent organization, or supporting external interests.”

This shift occurs in the midst of questions about the purposes of the current accreditation structure. While colleges must be accredited in order for students to receive federal financial aid dollars, the federal government currently has no direct involvement in the accreditation structure. Accrediting bodies also focus on degree programs instead of individual courses, something which has also been questioned.

Given the current decentralized structure of accreditation, Phoenix could easily move to another of the main regional nonprofit accrediting bodies—or it could go through a body focusing on private colleges and universities. The latter would likely be easier for Phoenix, as it would answer to more like-minded reviewers. While these bodies are viewed as less prestigious than the HLC, it is an open question whether students care about the accrediting body—as long as they can receive financial aid.

The Higher Learning Commission is taking a gamble with its move toward placing Phoenix on probation, partially due to the new criteria. It needs to consider carefully whether it is better to have oversight over one of the nation’s largest and most powerful postsecondary institutions or to steer it toward a friendlier accrediting body. Traditional accrediting bodies should also consider the possibility that the federal government will get into the accreditation business if for-profits leave groups like the HLC. If the HLC chooses to focus on Phoenix’s control instead of its academic competency, a chain reaction could be set off that may end with regional accreditors being replaced by federal oversight.

College Reputation Rankings Go Global

College rankings are not a phenomenon which is limited to the United States. Shanghai Jiao Tong University has ranked research universities for the past decade, and the well-known Times Higher Education rankings have been around for several years. While the Shanghai rankings tend to focus on metrics such as citations and research funding, THE has compiled a reputational ranking of universities around the world. Reputational rankings are a concern in U.S.-only rankings, but extending them to a global scale makes little sense to me.

Thomson Reuters (the group behind the THE rankings) makes a great fuss about the sound methodology of the reputational rankings, which, to its credit, it acknowledges is a subjective measure. It collected 16,639 responses from academics around the world, with some demographic information available here. But it fails to provide key information about the sampling frame: the researchers behind the rankings do note that the initial sample was constructed to be broadly representative of global academics, but we know nothing about the response rate or whether the final sample was representative—a devastating omission. In my mind, that omission disqualifies the rankings from further consideration. But I’ll push on and analyze the content of the reputational rankings.

The reputational rankings are a combination of separate ratings for teaching and research quality. I really don’t have serious concerns about the research component of the ranking, as the survey asks about research quality of given institutions within the academic’s discipline. Researchers who stay on top of their field should be able to reasonably identify universities with top research departments. I have much less confidence in the teaching portion of the rankings, as someone needs to observe classes in a given department to have any idea of teaching effectiveness. Yet I would be surprised if teaching and research evaluations were not strongly correlated.

The University of Wisconsin-Madison ranks 30th on the global reputation scale, with a slightly higher score for research than teaching. (And according to the map, the university has been relocated to the greater Marshfield area.) That has not stopped Kris Olds, a UW-Madison faculty member, from leveling a devastating critique of the idea of global rankings—or the UW-Madison press office from putting out a favorable release on the news.

I have mixed emotions on this particular set of rankings; the research measure is probably capturing research productivity well, but the teaching measure is likely lousy. However, without more information about the response rate to the THE survey, I cannot view these rankings as being valid.

Tying FAFSA Data to IPEDS: The Need for “Medium Data”

It is safe to say that I’m a fan of data in higher education. Students and their families, states, and the federal government spend a massive amount of money on higher education, yet we have relatively little data on outcomes other than graduation rates and student loan default rates for a small subset of students—those who started as first-time, full-time students. The federal government currently operates on what I call a “little data” model, with some rough institutional-level measures available through IPEDS. Some of these measures are also available through a slightly more student-friendly portal in the College Navigator website.

As is often the case, some states are light years ahead of the federal government in data collection and availability. Florida, Texas, and Ohio are often recognized as leaders in higher education data, both in collecting (de-identified) student-level records and in tying together K-12, higher education, and workforce outcomes. The Spellings Commission in 2006 did call for a national student-level dataset, but Congress explicitly denied the Department of Education this authority in the reauthorization of the Higher Education Act. Although there are sporadic movements toward “big data” at the national level, making this policy shift will require Congressional support and a substantial amount of resources.

Although I am willing to direct resources to a much more robust data system (after all, how can we determine funding priorities if we know so little about student outcomes?), a “medium data” approach could easily be enacted by using data sources already collected by colleges or the federal government. I spent a fair amount of the morning today trying to find a fairly simple piece of data—the percentage of students at given colleges whose parent(s) did not complete college. The topic of first-generation students is important in policy circles, yet we have no systematic data on how large this group of students is at most colleges.

FAFSA data could be used to expand the number of IPEDS measures to include such topics as the following, in addition to first-generation status:

(1) The percentage of students who file the FAFSA

(2) Average/median family income

(3) Percentage of students with zero EFC

(4) Information on means-tested benefit receipt (such as food stamps or TANF)

(5) Marital status

Of course, these measures would only include students who file the FAFSA—which would exclude many students who would not qualify for need-based aid, as well as some students who are unable to navigate the complicated form. But these measures would provide a better idea of institutional diversity beyond racial/ethnic diversity and the percentage of students receiving Pell Grants, and they could be incorporated into IPEDS at a fairly low cost. Adding these FAFSA measures would help move IPEDS from “little data” to “medium data” and provide more useful measures to higher education stakeholders.
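The aggregation itself would be a light lift: each measure above is a simple roll-up over records institutions already process. A sketch of the idea, with invented field names and made-up records (these are not actual FAFSA data elements):

```python
from statistics import median

# Hypothetical de-identified FAFSA records for one institution. Field names
# are invented for illustration only.
fafsa_records = [
    {"family_income": 22_000, "efc": 0,     "parent_college": False,
     "means_tested_benefits": True,  "married": False},
    {"family_income": 85_000, "efc": 9_500, "parent_college": True,
     "means_tested_benefits": False, "married": False},
    {"family_income": 41_000, "efc": 1_200, "parent_college": False,
     "means_tested_benefits": True,  "married": True},
]
enrollment = 10  # total enrolled students, filers and non-filers alike

n = len(fafsa_records)
measures = {
    "pct_filing_fafsa":     n / enrollment,
    "median_family_income": median(r["family_income"] for r in fafsa_records),
    "pct_zero_efc":         sum(r["efc"] == 0 for r in fafsa_records) / n,
    "pct_benefit_receipt":  sum(r["means_tested_benefits"] for r in fafsa_records) / n,
    "pct_first_generation": sum(not r["parent_college"] for r in fafsa_records) / n,
    "pct_married":          sum(r["married"] for r in fafsa_records) / n,
}
```

Each institution would report only the aggregated `measures` dictionary, so no student-level data would need to leave campus.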