Improving Net Price Data Reporting

As the sticker price of attending colleges and universities has steadily increased over the past decade, researchers and policymakers have begun to focus on the actual price that students and their families face. The federal government collects a measure of the net price of attendance in its IPEDS database, which is calculated as the total cost of attendance (tuition, fees, room and board, and other expenses) less any grant aid received. (More information can be found on the IPEDS website.) I have used the net price measure in my prior work, including the Washington Monthly rankings and my previous post on the Net Price Madness tournament. However, the data do have substantial limitations—some of which could be easily addressed in the data collection process.

There are two different net price measures currently available in the IPEDS dataset: one for all students receiving grant aid (federal, state, and/or institutional) and one for students receiving any federal (Title IV) financial aid (grants, loans, or work-study). The first measure reports an average net price, while the second breaks down net price by family income but does not report an overall average. For public institutions, both measures include only first-time, full-time, degree-seeking students paying in-state tuition, which can substantially limit the generalizability of the results.

Here, I use my current institution (the University of Wisconsin-Madison) as an example. The starting sample for IPEDS is the 3,487 first-time, full-time, degree-seeking freshmen who are in-state students. Of those students, net price by family income is calculated for the 1,983 students receiving Title IV aid. (This suggests that just over half of in-state Madison freshmen file the FAFSA.) Here are the net price and number of students by income group:

$0-30k: $6,363 (n=212)
$30-48k: $10,098 (n=232)
$48-75k: $15,286 (n=406)
$75-110k: $19,482 (n=542)
$110k+: $20,442 (n=591)

The average net price is calculated for a slightly different group of students: those who received grant aid from any source (n=1,858). That average is $14,940, which is lower than the average net price faced by students who file the FAFSA ($16,409), as the FAFSA-filer measure includes some students who receive no grant aid at all. However, the latter number is not reported in the main IPEDS dataset and can only be calculated by digging into the institutional reports.
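
For readers who want to verify the $16,409 figure, it can be reconstructed as an enrollment-weighted average of the bracket-level net prices listed above. Here is a minimal sketch in Python; the numbers come straight from the IPEDS figures for Madison, but the calculation is my reconstruction, not an official IPEDS computation:

```python
# Enrollment-weighted average net price for UW-Madison's Title IV
# (FAFSA-filing) freshmen, reconstructed from the figures above.
brackets = [
    ("$0-30k",    6_363, 212),
    ("$30-48k",  10_098, 232),
    ("$48-75k",  15_286, 406),
    ("$75-110k", 19_482, 542),
    ("$110k+",   20_442, 591),
]

students = sum(n for _, _, n in brackets)           # 1,983 Title IV recipients
total = sum(price * n for _, price, n in brackets)  # total net price paid
print(f"Weighted average net price: ${total / students:,.0f}")  # -> $16,409
```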

I would encourage IPEDS to add the average net price for all FAFSA filers to the dataset, as it better reflects what students from financially modest backgrounds will pay. Additionally, to address the relatively small number of students with family incomes below $30,000 and to tie into policy discussions, I would like to see the average net price for all Pell Grant recipients. These changes could easily be made under current data collection procedures and would provide more useful data to stakeholders.

Tying FAFSA Data to IPEDS: The Need for “Medium Data”

It is safe to say that I’m a fan of data in higher education. Students and their families, states, and the federal government spend a massive amount of money on higher education, yet we have relatively little data on outcomes other than graduation rates and student loan default rates for a small subset of students: those who started as first-time, full-time students. The federal government currently operates on what I call a “little data” model, with some rough institution-level measures available through IPEDS. Some of these measures are also available through a slightly more student-friendly portal, the College Navigator website.

As is often the case, some states are light years ahead of the federal government in data collection and availability. Florida, Texas, and Ohio are often recognized as leaders in higher education data, both in collecting (deidentified) student-level records and in tying together K-12, higher education, and workforce outcomes. The Spellings Commission did call for a national student-level dataset in 2006, but Congress explicitly denied the Department of Education this authority in the reauthorization of the Higher Education Act. Although there are sporadic movements toward “big data” at the national level, making this policy shift will require Congressional support and a substantial amount of resources.

Although I am willing to direct resources to a much more robust data system (after all, how can we determine funding priorities if we know so little about student outcomes?), a “medium data” approach could easily be enacted using data sources already collected by colleges or the federal government. I spent a fair amount of time this morning trying to find a fairly simple piece of data: the percentage of students at given colleges whose parent(s) did not complete college. First-generation students are an important topic in policy circles, yet we have no systematic data on how large this group is at most colleges.

FAFSA data could be used to expand the number of IPEDS measures to include such topics as the following, in addition to first-generation status:

(1) The percentage of students who file the FAFSA

(2) Average/median family income

(3) Percentage of students with zero EFC

(4) Information on means-tested benefit receipt (such as food stamps or TANF)

(5) Marital status

Of course, these measures would only cover students who file the FAFSA, excluding many students who would not qualify for need-based aid as well as some who are unable to navigate the complicated form. But they would provide a better picture of institutional diversity beyond race/ethnicity and the percentage of students receiving Pell Grants, and they could be incorporated into IPEDS at a fairly low cost. Adding these FAFSA measures would help move IPEDS from “little data” to “medium data” and provide more useful measures to higher education stakeholders.
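
To make the idea concrete, here is a minimal sketch of how several of the measures above could be aggregated from deidentified student-level FAFSA records to the institution level. The record layout is hypothetical (the column names are illustrative, not an actual FAFSA or IPEDS schema), and measure (1), the share of students filing the FAFSA, would additionally require an enrollment denominator:

```python
import pandas as pd

# Hypothetical deidentified student-level FAFSA extract; column names
# are illustrative, not an actual FAFSA or IPEDS schema.
fafsa = pd.DataFrame({
    "institution_id": [1001, 1001, 1001, 1002, 1002],
    "family_income":  [18_000, 52_000, 95_000, 24_000, 41_000],
    "efc":            [0, 3_500, 12_000, 0, 1_200],
    "parent_college": [False, True, True, False, False],  # first-gen if False
    "means_tested":   [True, False, False, True, True],   # e.g., SNAP/TANF
})

measures = fafsa.groupby("institution_id").agg(
    filers=("family_income", "size"),
    median_income=("family_income", "median"),
    pct_zero_efc=("efc", lambda s: (s == 0).mean()),
    pct_first_gen=("parent_college", lambda s: (~s).mean()),
    pct_means_tested=("means_tested", "mean"),
)
print(measures)
```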

Bill Gates on Measuring Educational Effectiveness

The Bill and Melinda Gates Foundation has become a very influential force in shaping research in health and education policy over the past decade, both due to the large sums of money the foundation has spent funding research in these areas and because of the public influence that someone as successful as Bill Gates can have. (Disclaimer: I’ve worked on several projects which have received Gates funding.) In both the health and education fields, the Gates Foundation is focusing on the importance of being able to collect data and measure a program’s effectiveness. This is evidenced by the Gates Foundation’s annual letter to the public, which I recommend reading.

In the education arena, the Gates letter focuses on creating useful and reliable K-12 teacher feedback and evaluation systems. The foundation has funded a project called Measures of Effective Teaching (MET), which finds some evidence that teacher effectiveness can be measured in a repeatable manner and used to help teachers improve. (A hat tip to my friend Trey Miller, who worked on the report.) To me, the important part of the MET report is that multiple measures, including evaluations, observations, and student scores, need to be used when judging teaching effectiveness.

The Gates Foundation is also moving into performance measurement in higher education. I have been a part of one of its efforts in this arena: a project examining best practices in input-adjusted performance metrics. In essence, this means judging colleges on some measure of their “value added” rather than the raw performance of their students. Last week, Bill Gates commented to a small group of journalists that college rankings are doing the exact opposite (as reported by Luisa Kroll of Forbes):

“The control metric shouldn’t be that kids aren’t so qualified. It should be whether colleges are doing their job to teach them. I bet there are community colleges and other colleges that do a good job in this area, but US News & World Report rankings pushes you away from that.”

The Forbes article goes on to mention that Gates would like to see metrics that focus on the performance of students from low-income families and the effectiveness of teacher education programs. Both of these measures are currently in progress, and are likely to continue moving forward given the Gates Foundation’s deep pockets and influence.
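
For readers unfamiliar with input-adjusted metrics, the core idea is to predict an outcome (say, the graduation rate) from the characteristics of the students a college enrolls and treat the leftover as the college’s “value added.” Below is a minimal sketch with made-up numbers; it illustrates the general technique, not the Gates project’s actual methodology:

```python
import numpy as np

# Made-up institution-level data: graduation rates plus two "input"
# controls (incoming test percentile, share of Pell recipients).
grad_rate = np.array([0.55, 0.80, 0.42, 0.91, 0.63])
test_pct  = np.array([0.45, 0.85, 0.30, 0.95, 0.60])
pell_pct  = np.array([0.50, 0.20, 0.65, 0.10, 0.40])

# Regress the outcome on the inputs; the residual is a crude measure
# of "value added" -- performance net of whom the college enrolls.
X = np.column_stack([np.ones_like(test_pct), test_pct, pell_pct])
beta, *_ = np.linalg.lstsq(X, grad_rate, rcond=None)
value_added = grad_rate - X @ beta
print(np.round(value_added, 3))  # positive = above expectation
```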

An Incomplete Comparison of College Costs and Expenditures

A recent piece by Derek Thompson of The Atlantic features a provocative chart suggesting that students from the lowest-income families pay much more out of pocket to attend college than their colleges actually spend on their education:

[Chart from The Atlantic comparing low-income students’ out-of-pocket costs with instructional spending.]

This chart comes from data reported in a recent NBER working paper by Caroline Hoxby and Christopher Avery (Table 1). While the premise of the NBER paper is otherwise strong (noting that lower-income, high-achieving students from rural areas are very unlikely to attend highly selective colleges), I do have some concerns about this table and how the broader media are interpreting it. My biggest concern is the following:

The total out-of-pocket cost of attendance is compared to instructional expenses, an incomplete look at how much a college spends on a particular student.

I don’t have a problem with the measure used for the total out-of-pocket cost of attendance: the net price for someone at the 20th percentile of family income. But instructional expenses are only a portion of per-student expenditures. The cost of providing room and board to on-campus students is an important part of the expenditure equation, but one can certainly argue that it isn’t directly tied to education. So I will focus on the broader category of educational expenditures, which includes spending on academic support and student services as well as instruction.

Instructional expenditures (which Hoxby and Avery report and Thompson uses in his chart) include the costs of teaching courses, but not the costs of closely related functions that enhance the classroom experience and even make it possible. In the 2009-10 academic year, the average four-year university in the Washington Monthly college rankings spent $8,728 per full-time-equivalent (FTE) student on instruction.

Academic support expenditures help to keep the university operating and include essential functions such as advising, course development, and libraries, as well as some administrative costs. The average academic support expenditure was $6,832 per FTE, nearly as much as direct instructional spending.

Student service expenditures include financial aid, admissions, and social development in addition to some spending on athletics and transportation. Average expenditures in this category were $2,981 per FTE in 2009-10, although truly necessary expenses may be somewhat lower.

Combining these three categories, the average educational expenditure per full-time equivalent student was $18,542 in 2009-10, more than twice the cost of instructional expenditures and very similar to the out-of-pocket cost for students from lower-income families. In that light (and after accounting for the cost of room and board), these students are receiving at least a modest subsidy.
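
A quick sketch of the arithmetic behind that total (note that summing the rounded per-category figures gives $18,541; the $18,542 above presumably reflects rounding in the underlying data):

```python
# Average per-FTE spending at four-year universities in 2009-10,
# as reported above from the Washington Monthly rankings data.
spending_per_fte = {
    "instruction":      8_728,
    "academic support": 6_832,
    "student services": 2_981,
}

educational = sum(spending_per_fte.values())
print(f"Educational expenditures per FTE: ${educational:,}")  # -> $18,541
```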

Hoxby and Avery should add a caveat that educational expenditures involve more than the cost of teaching classes. This would help the education press avoid leaping to hasty conclusions that do not pass the smell test.

Another Commission on Improving Graduation Rates

College leaders and policymakers are rightly concerned about the percentage of incoming students who graduate in a reasonable period of time. Although there have been numerous reports and commissions at the university, state, and national levels aimed at improving college completion, about the same percentage of incoming students graduate now as a decade ago. This concern spurred the creation of the National Commission on Higher Education Attainment, a group of college presidents from various types of public and private nonprofit colleges and universities. The commission released its report on improving graduation rates today; it offers few new suggestions and repeats many of the concerns raised by past commissions.

The report made the following recommendations, with my comments below:

Recommendation 1: Change campus culture to boost student success.

We’ve heard this one before, to say the least. The problem is that few campus-level innovations have proven “scalable,” able to expand to other colleges with the same results. Other programs appear promising but have never been rigorously evaluated, or they simply cost too much to replicate. Rigorous evaluation is essential to determine what we can learn from other colleges’ apparent successes.

Recommendation 2: Improve cost-effectiveness and quality.

In theory, this sounds great—and many of the recommendations sound reasonable. But policymakers and college leaders should be concerned about any potential cost savings resulting in a lower-quality education. A slightly less personalized education for a lower price may be a worthwhile tradeoff and pass a cost-effectiveness test, but these concerns should be addressed.

A bigger cost concern, not addressed in the report, is the actual cost of teaching a given course. First-year students tend to subsidize upper-level undergraduates, and all undergraduates tend to subsidize doctoral students. Much more research is needed on the costs of individual courses in order to provide lower-cost offerings to certain groups of students.

Recommendation 3: Make better use of data to boost success.

The commission calls for better use of institution-level data to identify at-risk students and keep students on track to graduation. It also calls for more students to be included in the federal IPEDS dataset, which currently tracks only first-time, full-time, traditional-age students at their first institution of attendance. While this would be an improvement, I would like to see a pilot test of a student-level dataset instead of an institution-level one, which would be much better for identifying success patterns among groups with a lower probability of graduating.


The report also has a few notable omissions. First, the decision to exclude leaders of for-profit colleges is troubling. While many for-profit colleges have low completion rates, their cost structure (in terms of tracking per-student expenditures) is worth examining, and they disproportionately serve at-risk students. There is no reason to leave out an important, if controversial, sector of higher education. Second, the typical lament about declining per-student public support for higher education is present. While it might make college presidents feel good, any request for additional funding in this political and economic climate needs to be more closely tied to improving completion rates. Finally, little attention is paid to how the different sectors of higher education might share best practices in spite of their often symbiotic relationship.

I don’t expect more than a few months to go by before the next commission issues a very similar report to this one. Stakeholders in the higher education arena need to think of how potential success stories can actually be brought to scale to benefit a meaningful number of students.

More Data on the Returns to College

Most people consider attending college a good bet in the long run, in spite of the rising cost of attendance and increasing levels of student loan debt. While I’m definitely not in the camp that believes everyone should earn a bachelor’s degree, I do believe that some sort of postsecondary training benefits the majority of adults. A recent report from the State Higher Education Executive Officers (SHEEO) highlights the benefits of graduating with a degree from public colleges and universities.

Not surprisingly, the report suggests that there are substantial benefits to graduating from college. Using data from IPEDS and the American Community Survey, the authors find that the average associate’s degree holder earned 31.2% more (about $9,200 per year) than the average person with a high school diploma. The premium associated with a bachelor’s degree is even larger: 71.2%, or nearly $21,000 per year. These figures are on the high end (but quite plausible) of the returns-to-education literature, which suggests that students tend to get an additional 10-15% boost in wages for each year of college completed.
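
As a quick consistency check on those figures, both premiums imply roughly the same baseline earnings for high school graduates (about $29,500). The back-of-the-envelope arithmetic below is mine, not SHEEO’s:

```python
# Back out the implied average high school graduate earnings from the
# SHEEO premiums quoted above (a consistency check, not SHEEO's data).
associate_pct, associate_dollars = 0.312, 9_200
bachelor_pct,  bachelor_dollars  = 0.712, 21_000

print(f"Baseline implied by associate's: ${associate_dollars / associate_pct:,.0f}")
print(f"Baseline implied by bachelor's:  ${bachelor_dollars / bachelor_pct:,.0f}")
# Both land near $29,500, so the two premiums are internally consistent.
```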

I do have some concerns with the analysis that limit its generalizability and policy relevance:

(1) Given that SHEEO represents public colleges and universities, it is not surprising that the analysis focuses on that sector. Policymakers interested in the overall returns to education (including the private nonprofit and for-profit sectors) should seek additional data.

(2) The study is in line with the classic returns-to-education literature, which compares degree completers to people with a high school diploma. But that comparison group may also include people who completed some college and left without a degree, which is not the comparison students and policymakers would expect. I would like to see studies compare all students who entered college with students who never attended, to get a better idea of the average wage premium among those who attempt college.

(3) While the average student benefits from completing a college degree, not all students benefit. For example, a welder with a high school diploma may very well make more than a preschool teacher with a bachelor’s degree. A 2011 report by Georgetown University’s Center on Education and the Workforce does a nice job showing that not everyone benefits.

(4) Most reports like this one do a good job estimating the benefits of education (in terms of higher wages) but neglect the costs in terms of forgone earnings and tuition expenses. While most people are still likely to benefit from attending relatively inexpensive public colleges, some students’ expected returns may turn negative once these costs are taken into account.

(5) Students who complete a certificate (generally a one-year program in a technical field) are excluded from the analysis for data reasons, which is truly a shame. Students and policymakers should keep in mind that many of these programs have high completion rates and positive payoffs in the long run.

My gripes notwithstanding, I encourage readers to check out the state-level estimates of the returns to different types of college degrees and majors. It’s worth a read.

(Note: This will likely be my last post of 2012, as I am looking forward to spending some time far away from computer screens and datasets next week. I’ll be back in January…enjoy the holidays and please travel carefully!)

My College is a Better Value than Yours

It is not surprising that college officials are proud of their institutions. But a recent survey released by the Association of Governing Boards, a body representing trustees of four-year colleges and universities, takes this pride a little too far. Trustees were asked several questions about their own institution as well as about higher education in general, and in each case they rated their own college much more favorably.

A prime example of this (irrational?) pride is shown in a question asking whether trustees view the cost of attending their college (relative to the value) as being too high, too low, or just about right. While 62% of trustees thought their college cost the right amount and only 17% thought it was too expensive relative to its value, 38% of trustees thought that higher education in general cost the right amount and 55% considered higher education to be too expensive. (Don’t look at my college…the problem is elsewhere!)

The perception that one’s own institution is better than average is not limited to higher education or Lake Wobegon. National surveys have consistently shown that parents give high marks to their child’s public school while giving much dimmer reviews to other schools in their district and to K-12 education in general. Perhaps Americans should consider that the great unknown is probably not as bad as they think, and that their own school may not be a paragon of excellence.

Making the College Scorecard More Student Friendly

The Obama Administration and the U.S. Department of Education have spent a great deal of time and effort in developing a simple one-page “college scorecard.” The goal of this scorecard is to provide information about the cost of attending college, average graduation rates, and information regarding student debt. The Department of Education has followed suit with a College Affordability and Transparency Center, which seeks to highlight colleges with unusually high or low costs to students.

Although I have no doubt that the Administration shares my goal of facilitating the availability of useful information to prospective students and their families, I doubt the current measures are having any effect. The college scorecard is difficult to understand, with technical language that is second nature to higher education professionals but is completely foreign to many prospective students. Because of this, I was happy to see a new report from the Center for American Progress, a liberal think tank, suggesting improvements to the measures. (As a side note, liberal and conservative think tanks work together quite a bit on issues of higher education. Transparency and information provision are nearly universal principles, and partisan concerns such as state-level teachers’ unions and charter schools just aren’t as present in higher ed.)

The authors of the report took the federal government’s scorecard and their own version to groups of high school students, tested the two versions, and suggested improvements. The key findings aren’t terribly surprising—focusing on a few important measures with simple language is critical—but it appears that the Department of Education has not yet done adequate testing of its measure. I am also not surprised that students prefer to see four-year graduation rates instead of six-year rates, as everyone thinks they will graduate on time, even though we know that is far from the case.

The changes to the college scorecard are generally a good idea, but I remain concerned about students’ ability to access the information. Even if the scorecard is required to be posted on a college website (like certain outcome measures currently are), it does not mean that it will be easy to access. For example, the graduation rate for first-time, full-time students who received a Pell Grant during their first year of college must be posted on the college’s website, but actually finding this information is difficult. I hope outside groups (such as CAP) will continue to publicize the information, as greater use of the data is the best way to influence colleges’ behavior.

More Fun With College Rankings

I was recently interviewed by Koran Addo of the (Baton Rouge) Advocate regarding my work with the Washington Monthly college rankings. I’ve had quite a few phone and e-mail exchanges with college officials and the media about my work, but I want to highlight the resulting article both because it was extremely well done and because it illustrates what I consider a foolish obsession with college rankings.

Two pieces of the article deserve special attention. First, consider this tidbit:

“LSU System President and Baton Rouge Chancellor William Jenkins said he was ‘clearly disappointed’ to learn that LSU had tumbled six spots from 128th last year to 134th in the U.S. News ‘Best Colleges 2013’ list.”

I wish that college rankings came with confidence intervals, which would provide a rough guide to whether a change over time is larger than what we would expect from chance or statistical noise. Based on my work with rankings, I can safely say that such a small change is neither statistically significant nor educationally meaningful.
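
To see why a six-spot move can be pure noise, consider a small simulation: hold each college’s “true” quality fixed, remeasure it with modest error each year, and watch how much the ranks bounce around. The numbers here are made up for illustration and are not based on the U.S. News methodology:

```python
import numpy as np

# 200 colleges with fixed "true" quality, measured with noise each year.
rng = np.random.default_rng(0)
true_quality = rng.normal(size=200)

def observed_rank(college=100):
    """Rank (1 = best) of one college in a single noisy 'ranking year'."""
    noisy = true_quality + rng.normal(scale=0.3, size=200)
    return int((-noisy).argsort().argsort()[college]) + 1

# Compare the same college across many pairs of simulated years.
moves = [abs(observed_rank() - observed_rank()) for _ in range(1_000)]
print(f"Median year-to-year move from noise alone: {np.median(moves):.0f} spots")
```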

The next fun quote from the article is from LSU’s director of research and economic development, Nicole Baute Honorée. She argues that only rankings from the National Science Foundation matter:

“Universities are in the knowledge business, as in creating new knowledge and passing it along. That’s why the NSF rankings are the gold standard.”

The problem is that research expenditures (a) do not guarantee high-quality undergraduate education, (b) do not have to be used effectively in order to generate a high score, and (c) do not reward many disciplines (such as the humanities). They are a useful measure of research clout in the sciences, but I would rely on them as only one of many measures (which is what the Washington Monthly rankings have done since long before I took the reins).

Once again, I urge readers not to rely on a single measure of college quality—and to make sure any measure is actually aligned with student success.

Pell Grants and Data-Driven Decisions

I am a big proponent of making data-driven decisions whenever possible, but sadly that isn’t the case among many policymakers. Recently, in an effort to reduce costs, Congress and the Obama Administration agreed to reduce the maximum length of Pell Grant eligibility from 18 to 12 semesters (in line with the federal government’s primary graduation rate measure for students attending four-year colleges). However, this decision was made without considering the cost-effectiveness of the policy change or even having a good idea of how many students would be affected.

Today’s online version of The Chronicle of Higher Education includes a piece that I co-authored with Sara Goldrick-Rab on this policy change. We’re both strong proponents of data-driven decision making, as well as of conducting experiments whenever possible to evaluate the effects of policy changes. We come from very different places on the political spectrum (which is why we disagree on whether the federal government can and should hold states accountable for their funding decisions), but certain fundamental points are simply part of an effective policymaking process.