More Data on the Returns to College

Most people consider attending college to be a good bet in the long run, in spite of the rising cost of attendance and increasing levels of student loan debt. While I’m definitely not in the camp that everyone should earn a bachelor’s degree, I do believe that some sort of postsecondary training benefits the majority of adults. A recent report from the State Higher Education Executive Officers (SHEEO) highlights the benefits of graduating from public colleges and universities.

Not surprisingly, their report suggests that there are substantial benefits to graduating from college. Using data from IPEDS and the American Community Survey, they find that the average associate’s degree holder earned 31.2% more (or about $9,200 per year) than the average person with a high school diploma. The premium associated with a bachelor’s degree is even larger: 71.2%, or nearly $21,000 per year. These figures seem to be on the high end (but quite plausible) of the returns to education literature, which suggests that students tend to get an additional 10-15% boost in wages for each year of college completed.
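As a quick consistency check, the percentage and dollar premiums should imply the same baseline wage for high school graduates. A minimal sketch in Python, using only the report’s own figures:

```python
# Back out the implied baseline wage for high school graduates:
# premium_pct = (degree_wage - hs_wage) / hs_wage,
# so hs_wage = dollar_premium / premium_pct.
degrees = {
    "associate's": (0.312, 9_200),   # 31.2% premium, ~$9,200/year
    "bachelor's": (0.712, 21_000),   # 71.2% premium, ~$21,000/year
}

for name, (pct, dollars) in degrees.items():
    print(f"{name}: implied high school baseline = ${dollars / pct:,.0f}")

# Both degrees imply a baseline of roughly $29,500 per year, so the
# report's percentage and dollar figures are internally consistent.
```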

I do have some concerns with the analysis, which limit its generalizability and policy relevance:

(1)    Given that SHEEO represents public colleges and universities, it is not surprising that the report focuses on that sector. Policymakers who are interested in the overall returns to education (including the private not-for-profit and for-profit sectors) will need additional data covering those institutions.

(2)    This study is in line with the classic returns to education literature, which compares students who completed a degree to those with a high school diploma. But the high-school-diploma group may also include people who attended some college and left without a degree, which makes for a different comparison group than students and policymakers would expect. I would like to see studies compare all students who entered college with students who never attended, to get a better idea of the average wage premium among those who attempt college.

(3)    While the average student benefits from completing a college degree, not all students do. For example, a welder with a high school diploma may very well make more than a preschool teacher with a bachelor’s degree. A 2011 report by Georgetown University’s Center on Education and the Workforce does a nice job showing that not everyone benefits.

(4)    Most reports like this one do a good job estimating the benefits of education (in terms of higher wages), but neglect the costs in terms of forgone earnings and tuition expenses. While most people are still likely to come out ahead at relatively inexpensive public colleges, some students’ expected returns may turn negative once those costs are taken into account (see the sketch after this list).

(5)    Students who complete a certificate (generally a one-year program in a technical field) are excluded from the analyses for data reasons, which is truly a shame. Students and policymakers should keep in mind that many of these programs have high completion rates and positive long-run payoffs.
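To make point (4) concrete, here is a minimal net-present-value sketch. Only the wage premium comes from the report; the baseline wage, tuition, discount rate, and career length are hypothetical placeholders chosen just to show how forgone earnings and tuition enter the calculation:

```python
# Hypothetical NPV of a bachelor's degree. All inputs except the wage
# premium are illustrative assumptions, not figures from the SHEEO report.
def npv(cash_flows, discount_rate=0.03):
    """Discount a sequence of annual cash flows back to year 0."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

hs_wage = 29_500         # assumed baseline annual earnings
premium = 21_000         # annual bachelor's premium (from the report)
tuition = 9_000          # assumed annual net tuition at a public university
years_in_school = 4
career_years = 40

# Years 0-3: pay tuition and forgo the high school wage.
costs = [-(tuition + hs_wage)] * years_in_school
# Remaining years: earn the wage premium.
benefits = [premium] * (career_years - years_in_school)

print(f"NPV of attending: ${npv(costs + benefits):,.0f}")
# Positive with these inputs, but a smaller premium (think low-paying
# majors), higher tuition, or non-completion can push the figure negative.
```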

My gripes notwithstanding, I encourage readers to check out the state-level estimates of the returns to different types of college degrees and majors. It’s worth a read.

(Note: This will likely be my last post of 2012, as I am looking forward to spending some time far away from computer screens and datasets next week. I’ll be back in January…enjoy the holidays and please travel carefully!)

My College is a Better Value than Yours

It is not surprising that college officials are proud of their institutions. But a recent survey released by the Association of Governing Boards, a body representing trustees of four-year colleges and universities, takes this pride a little too far. Trustees were asked several questions about their own institution as well as about higher education in general, and in each case they rated their own college much more favorably.

A prime example of this (irrational?) pride is shown in a question asking whether trustees view the cost of attending their college (relative to the value) as being too high, too low, or just about right. While 62% of trustees thought their college cost the right amount and only 17% thought it was too expensive relative to its value, 38% of trustees thought that higher education in general cost the right amount and 55% considered higher education to be too expensive. (Don’t look at my college…the problem is elsewhere!)

The perception that one’s own institution is better than average is not just limited to higher education or Lake Wobegon. National surveys have consistently shown that parents give high marks to their child’s public school, while giving much dimmer reviews to other schools in their district or to K-12 education in general. Perhaps Americans should consider that the great unknown is probably not as bad as they think—and that their own school may not be a paragon of excellence.

Making the College Scorecard More Student Friendly

The Obama Administration and the U.S. Department of Education have spent a great deal of time and effort developing a simple one-page “college scorecard.” The goal of the scorecard is to provide information about the cost of attending college, graduation rates, and student debt. The Department of Education has also launched a College Affordability and Transparency Center, which seeks to highlight colleges with unusually high or low costs to students.

Although I have no doubt that the Administration shares my goal of facilitating the availability of useful information to prospective students and their families, I doubt the current measures are having any effect. The college scorecard is difficult to understand, with technical language that is second nature to higher education professionals but is completely foreign to many prospective students. Because of this, I was happy to see a new report from the Center for American Progress, a liberal think tank, suggesting improvements to the measures. (As a side note, liberal and conservative think tanks work together quite a bit on issues of higher education. Transparency and information provision are nearly universal principles, and partisan concerns such as state-level teachers’ unions and charter schools just aren’t as present in higher ed.)

The authors of the report took the federal government’s scorecard and their own version to groups of high school students, tested the two versions, and collected suggested improvements. The key points aren’t terribly surprising—focusing on a few important measures with simple language is critical—but it appears that the Department of Education has not yet adequately tested its measure. I am also not surprised that students prefer to see four-year graduation rates instead of six-year rates, as everyone thinks they will graduate on time—even though we know that is far from the case.

The changes to the college scorecard are generally a good idea, but I remain concerned about students’ ability to access the information. Even if the scorecard is required to be posted on college websites (as certain outcome measures currently are), that does not mean it will be easy to find. For example, the graduation rate for first-time, full-time students who received a Pell Grant during their first year of college must be posted on each college’s website, but actually finding this information is difficult. I hope outside groups (such as CAP) will continue to publicize the information, as greater use of the data is the best way to influence colleges’ behavior.

More Fun With College Rankings

I was recently interviewed by Koran Addo of the (Baton Rouge) Advocate regarding my work with the Washington Monthly college rankings. I’ve had quite a few phone and e-mail exchanges with college officials and the media about my work, but I want to highlight the resulting article both because it was extremely well done and because it highlights what I consider to be the foolish obsession with college rankings.

Two pieces of the article deserve special attention. First, consider this tidbit:

“LSU System President and Baton Rouge Chancellor William Jenkins said he was ‘clearly disappointed’ to learn that LSU had tumbled six spots from 128th last year to 134th in the U.S. News ‘Best Colleges 2013’ list.”

I wish that college rankings came with confidence intervals—which would provide a rough guide to whether a change over time reflects anything more than chance or statistical noise. Based on my work with rankings, I can safely say that such a small change is neither statistically significant nor educationally meaningful.
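To see why, consider a small simulation under an assumed model: 200 colleges with fixed underlying scores, each measured annually with a little noise. The number of colleges and the noise level are arbitrary assumptions for illustration, not a model of any actual ranking:

```python
# How much do ranks move year to year from measurement noise alone?
import random

random.seed(42)
n_colleges = 200
true_scores = [random.gauss(0, 1) for _ in range(n_colleges)]

def observed_ranks(scores, noise_sd=0.1):
    """Rank colleges (1 = best) after adding Gaussian measurement noise."""
    noisy = sorted(((s + random.gauss(0, noise_sd), i) for i, s in enumerate(scores)),
                   reverse=True)
    ranks = [0] * len(scores)
    for rank, (_, i) in enumerate(noisy, start=1):
        ranks[i] = rank
    return ranks

year1 = observed_ranks(true_scores)
year2 = observed_ranks(true_scores)
moves = sorted(abs(a - b) for a, b in zip(year1, year2))

print(f"median year-to-year move: {moves[n_colleges // 2]} spots")
print(f"share moving 6+ spots: {sum(m >= 6 for m in moves) / n_colleges:.0%}")
# Even with identical underlying quality, six-spot swings are routine.
```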

The next fun quote from the article is from LSU’s director of research and economic development, Nicole Baute Honorée. She argues that only rankings from the National Science Foundation matter:

“Universities are in the knowledge business, as in creating new knowledge and passing it along. That’s why the NSF rankings are the gold standard.”

The problem is that research expenditures (a) do not guarantee high-quality undergraduate education, (b) do not have to be used effectively in order to generate a high score, and (c) do not reward many disciplines (such as the humanities). They are a useful measure of research clout in the sciences, but I would rely on them as only one of many measures (which is what the Washington Monthly rankings have done since long before I took the reins).

Once again, I urge readers not to rely on a single measure of college quality—and to make sure any measure is actually aligned with student success.

Pell Grants and Data-Driven Decisions

I am a big proponent of making data-driven decisions whenever possible, but sadly that isn’t the case among many policymakers. Recently, in an effort to reduce costs, Congress and the Obama Administration agreed to reduce the maximum number of semesters of Pell Grant eligibility from 18 to 12 (which is in line with the federal government’s primary graduation rate measure for students attending four-year colleges). However, this decision was made without considering the cost-effectiveness of the policy change, or even having a good idea of how many students would be affected.

Today’s online version of The Chronicle of Higher Education includes a piece that I co-authored with Sara Goldrick-Rab on this policy change. We’re both strong proponents of data-driven decision making, as well as of conducting experiments whenever possible to evaluate the effects of policy changes. We come from very different places on the political spectrum (which is why we disagree on whether the federal government can and should hold states accountable for their funding decisions), but we agree on fundamental points that should be part of any effective policymaking process.

College Selectivity Does Not Imply Quality

For me, the greatest benefit of attending academic conferences is the chance to clarify my own thinking about important issues in educational policy. At a conference last week, I attended several outstanding sessions on issues in higher education in addition to presenting my own work on early commitment programs for financial aid. (I’ll have more on that in a post in the near future, so stay tuned.) I greatly enjoyed the talks and learned quite a bit from them, but the biggest thing I took away is something I think many presenters are doing wrong—conflating college selectivity with college quality.

When most researchers refer to the concept of “college quality,” they are really referring to a college’s inputs, such as financial resources, student ACT/SAT scores, and high school class rank. In other words, these measures capture whether a college is selective and enrolls students with strong credentials. But strong inputs do not guarantee strong outcomes relative to what we would expect given those students and resources. Rather, a quality college helps its student body succeed instead of just recruiting a select group of students. This does not mean that selective colleges cannot be quality colleges; however, it does mean that the relationship is not guaranteed.
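Here is a minimal sketch of what a value-added measure could look like, under the assumption that we regress graduation rates on input measures and treat the residual as (very rough) value added. The colleges, inputs, and outcomes below are all invented for illustration; a real analysis would use IPEDS data and far more careful modeling:

```python
# Toy value-added estimate: predict each college's graduation rate from
# its inputs, then treat actual-minus-predicted as "value added."
import numpy as np

# Hypothetical inputs: [intercept, standardized median SAT, share Pell]
X = np.array([
    [1.0,  1.2, 0.20],
    [1.0,  0.3, 0.45],
    [1.0, -0.5, 0.60],
    [1.0,  0.9, 0.30],
    [1.0,  0.0, 0.50],
    [1.0, -1.0, 0.70],
])
grad_rate = np.array([0.85, 0.55, 0.48, 0.70, 0.60, 0.35])

beta, *_ = np.linalg.lstsq(X, grad_rate, rcond=None)  # OLS fit
value_added = grad_rate - X @ beta  # positive = beats expectations

for college, va in enumerate(value_added, start=1):
    print(f"college {college}: value added = {va:+.3f}")
# A selective college with a high raw graduation rate can still show
# zero or negative value added once its inputs are accounted for.
```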

I am particularly interested in measuring college quality based on an estimate of its value added to students instead of a measure highly correlated with inputs. Part of my research agenda is on that topic, as illustrated by my work compiling the Washington Monthly college rankings. However, other popular college rankings continue to reward colleges for their selectivity, which creates substantial incentives to game the rankings system in unproductive ways.

For example, a recent piece in The Chronicle of Higher Education illustrates how one college submitted inaccurate and overly optimistic data for the U.S. News rankings. George Washington University, one of the few colleges in the country with a posted cost of attendance of over $50,000 per year, had previously reported that 78% of their incoming freshman class was in the top ten percent of their high school graduating class, in spite of large numbers of high schools declining to rank students in recent years. An eagle-eyed staffer in the provost’s office realized that the number was too high and discovered that the admissions staff was inaccurately estimating the rank for students with no data. As a result, the revised figure was only 58%.

Regardless of whether GWU’s error was one of omission or malfeasance, the result was that the university appeared to be a higher-quality school under the fatally flawed U.S. News rankings. [UPDATE 11/15/12: U.S. News has removed the ranking for GWU in this year’s online guide.] GWU certainly aspires to be more selective, but keep in mind that selectivity does not imply quality in a value-added sense. Academics and policymakers should take care not to say “quality” when they really mean “selectivity.”

New Data on the Returns to College

Many people love to hate college rankings, but they have traditionally been one of the most easily digestible sources of information about institutions of higher education. We know very little about the long-run outcomes of students who attend a particular college, so we tend to rely on simplistic measures such as graduation rates or measures of prestige. It is difficult to follow and assess students’ outcomes once they leave a given college, for multiple reasons:

(1)    A substantial percentage of students transfer colleges at least once. A recent report estimated that about one-third of students who enrolled in fall 2006 were enrolled elsewhere sometime in the next five years. The growth of the National Student Clearinghouse has made following students easier, but it is difficult to figure out how to split the credit for successful outcomes across the colleges that a given student attends.

(2)    While the group of students to be assessed (everyone!) sounds straightforward, most of the push has been to focus on the outcomes of graduates. Graduates make for a reasonable comparison group across colleges, but this ignores the fact that colleges have very different graduation rates. It makes more sense to focus on all students who entered a college, though doing so would lower the measured returns (and sits awkwardly with selective colleges, where nearly everyone is expected to graduate).

(3)    Some people choose to postpone entry into the full-time labor market, whether for good reasons (such as starting a family) or for more dubious reasons (such as getting a master’s degree and working on a PhD). Given the lack of a federal data system, other students will not be observed if they move out-of-state to work.

Even with all of the limitations of measuring student outcomes once they leave college, I am heartened to see states starting to track the labor market outcomes of students who attended public colleges and stayed in-state. This requires merging two data systems that don’t exist in some states and don’t talk to each other in others: state higher education records and unemployment insurance (UI) wage records. Two states, Arkansas and Tennessee, just launched websites with labor market information for graduates of their public institutions of higher education. While the sample included is far from perfect, the sites still provide useful data to many students, families, and policymakers.
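Mechanically, that merge is a record linkage between two administrative files. Here is a toy sketch with invented identifiers and layouts (real state systems vary and are far messier):

```python
# Toy linkage of a higher-ed completions file to UI wage records.
# All field names and values are invented for illustration.
import pandas as pd

graduates = pd.DataFrame({
    "id_hash": ["a1", "b2", "c3", "d4"],
    "institution": ["State U", "State U", "Tech CC", "Tech CC"],
    "credential": ["BA", "BS", "AAS", "Certificate"],
})

ui_wages = pd.DataFrame({
    "id_hash": ["a1", "c3", "e5"],   # b2 and d4 have no in-state wage record
    "annual_wage": [41_000, 38_500, 52_000],
})

# A left join keeps every graduate; missing wages flag out-of-state moves
# or non-UI-covered work, a key limitation of state-level matches.
merged = graduates.merge(ui_wages, on="id_hash", how="left")
print(merged)
print(f"in-state match rate: {merged['annual_wage'].notna().mean():.0%}")
```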

Not surprisingly, many in academia are worried about these new measures, as they prioritize one purpose of higher education (employment) at the expense of other important purposes (such as critical thinking and higher-order learning). The comments on this recent Chronicle of Higher Education article are worth a read. I am concerned about policymakers relying solely on these imperfect measures of student outcomes, but stakeholders deserve access to information about colleges’ effectiveness on as many outcomes as possible.

Knowing Before You Go

The American Enterprise Institute today hosted a discussion of the Student Right to Know Before You Go Act, introduced by Senator Ron Wyden (D-OR) and co-sponsored by Senator Marco Rubio (R-FL). The two senators, both of whom are known for working across party lines, briefly discussed the legislation and were then followed by a panel of higher education experts. Video of the discussion will be available on AEI’s website shortly.

The goal of the legislation, as the senators discuss in a column in USA Today, is to provide more information about labor market and other important outcomes to students and their families. While labor market outcomes are rarely available in any systematic manner, this legislation would support states that release the data both at the institution level and by academic program. This sort of information cannot be collected at the federal level due to a restriction in Section 134 of the 2008 Higher Education Act reauthorization, which bans the Department of Education from operating a student-level data system of the sort used in some states.

While nearly everyone across the political spectrum agrees that making additional data available is good for students and their families, there are certainly concerns about the proposed legislation. One concern is that the availability of employment data will make more rigorous accountability systems feasible, even though state-level data systems can only track students who stay within that state. This concern is shared by colleges, which tend to loathe regulation, and some conservatives, who don’t feel that the federal government should regulate higher education.

Additionally, measuring employment outcomes does place more focus on employment than on other goals of college (such as learning for learning’s sake). The security of these large unit-record datasets also concerns some people; I am less worried about this given the difficulty of accessing even deidentified data. (I’ve worked with the data from Florida, which has possibly the most advanced state-level data system. Getting access is extremely difficult.)

Although I certainly recognize those concerns, I strongly support this piece of legislation. It would reduce reporting requirements for colleges, since they would work primarily with states instead of the federal government. (In that respect, the legislation is quite conservative.) It makes more data available to all stakeholders in education and gives researchers more opportunities to examine promising educational practices and interventions. Finally, it allows states to make more informed decisions about how to allocate their scarce resources.

I don’t expect this legislation to go anywhere during this session of Congress, even with bipartisan support. Let’s see what happens next session, by which time I hope we are away from the “fiscal cliff.”