Blog (Kelchen on Education)

New Data on the Returns to College

Many people love to hate college rankings, but they have traditionally been one of the most easily digestible sources of information about institutions of higher education. We know very little about the outcomes of students who attend a particular college over time, so we tend to rely on simplistic measures such as graduation rates or measures of prestige. It is difficult to follow and assess the outcomes of students once they leave a given college for multiple reasons:

(1)    A substantial percentage of students transfer colleges at least once. A recent report estimated that about one-third of students who enrolled in fall 2006 were enrolled elsewhere sometime in the next five years. The growth of the National Student Clearinghouse has made following students easier, but it is difficult to figure out how to split the credit for successful outcomes across the colleges that a given student attends (one simple allocation rule is sketched after this list).

(2)    While the group of students to be assessed (everyone!) sounds straightforward, most of the push has been to focus on the outcomes of graduates. This makes for a reasonable comparison group across colleges, but colleges have different graduation rates. It makes more sense to focus on all students who entered a college, but this would lower the measured returns to college (and doesn’t fit well with selective colleges, where everyone is assumed to graduate).

(3)    Some people choose to postpone entry into the full-time labor market, whether for good reasons (such as starting a family) or for more dubious reasons (such as getting a master’s degree and working on a PhD). Given the lack of a federal data system, other students will not be observed if they move out-of-state to work.
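None of these problems is fatal, but each forces an explicit modeling choice. For the credit-splitting problem in item (1), one defensible rule (though hardly the only one) is to allocate a completed degree across colleges in proportion to the credits a student earned at each. A minimal sketch in Python, with entirely hypothetical data:

```python
# A minimal sketch of one way to split credit for a completed degree
# across the colleges a student attended, in proportion to credits
# earned at each. The colleges and credit counts are hypothetical.

def split_credit(enrollments):
    """enrollments: list of (college, credits_earned) tuples for one student.
    Returns a dict mapping each college to its share of the outcome."""
    total = sum(credits for _, credits in enrollments)
    shares = {}
    for college, credits in enrollments:
        # A college attended more than once accumulates all of its stints.
        shares[college] = shares.get(college, 0) + credits / total
    return shares

# A transfer student: 30 credits at a two-year college, 90 at a university.
print(split_credit([("Two-Year College", 30), ("State University", 90)]))
# {'Two-Year College': 0.25, 'State University': 0.75}
```

Giving full credit to the degree-granting institution, or extra weight to the first college attended, would be just as defensible and would rank colleges quite differently.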

Even with all of the limitations of measuring student outcomes once they leave college, I am heartened to see states starting to track the labor market outcomes of students who attended public colleges and stayed in-state. This requires merging two data systems that don’t exist in some states and don’t talk to each other in others—state higher education data systems and unemployment insurance (UI) records. Two states, Arkansas and Tennessee, just launched websites with labor market information for graduates of their public institutions of higher education. While the sample included is far from perfect, it still provides useful data to many students, families, and policymakers.
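To make that merge concrete, here is a rough sketch of the kind of join these states have to perform. Every file, column, and number below is my own invention for illustration, not a description of either state's actual system:

```python
# Hypothetical sketch of merging a state higher education data system
# with unemployment insurance (UI) wage records. All names and numbers
# are invented for illustration.
import pandas as pd

# Student-level completions from the (hypothetical) higher ed system.
students = pd.DataFrame({
    "hashed_id": ["a1", "b2", "c3"],
    "institution": ["State U", "State U", "Tech College"],
    "grad_year": [2010, 2010, 2010],
})

# Annual wages from the (hypothetical) UI system. Student "b2" is
# missing: graduates who move out of state, join the military, or
# become self-employed simply vanish from UI records.
wages = pd.DataFrame({
    "hashed_id": ["a1", "c3"],
    "year": [2011, 2011],
    "wages": [38000, 42000],
})

# Inner join on a shared identifier: only students found in the UI
# records survive, which is exactly the coverage problem noted above.
merged = students.merge(wages, on="hashed_id", how="inner")
print(merged.groupby("institution")["wages"].mean())
```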

Not surprisingly, many in academia are worried about these new measures, as they prioritize one of the purposes of higher education (employment) at the expense of other important purposes (such as critical thinking and higher-order learning). The comments on this recent Chronicle of Higher Education article are worth a read. I am concerned about policymakers relying solely on these imperfect measures of student outcomes, but stakeholders should have more information about the effectiveness of colleges on as many outcomes as possible.

Right Idea, Wrong Time

It’s election season once again, so President Obama is coming back to Madison for a large campaign event right smack dab in the middle of the University of Wisconsin-Madison campus Thursday afternoon. Given the amount of security required to host a Presidential visit (regardless of the purpose), it is not surprising that all of the buildings on Bascom Hill will be closed on Thursday. This campaign rally will require all classes in affected buildings to be moved—many of them will likely be cancelled despite this being midterm exam season for undergraduates.

I am always happy to have politicians come to campus to ask for the community’s support, but two things about this visit rub me the wrong way. The first is the timing. When Obama came to campus the previous two times (February 2008 and September 2010), his events were scheduled later in the day. While classes were still moved from the immediate area of Bascom Hill for the 2010 visit, the rally was held later in the afternoon so more classes could be held. Ann Althouse, prominent blogger and faculty member in the UW Law School, isn’t too happy about the class disruption:

“Nice for the campaign, but positioned to maximize disruption of regular classes. Is that a bug or a feature? If there are no classes and it’s a class day, students are around and they are free to attend. Classes are being cancelled to supply the photogenic crowd for the President?”

Badgers are a pretty photogenic lot. (It’s hard to be humble when you’re from Wisconsin, after all.) But starting the event at, say, 4 PM instead of noon would allow for a much more normal day of classes. For reference, recall the hubbub about having a night football game on the Thursday before classes even started. I’m guessing that the folks complaining about a night football game aren’t complaining about the President’s campaign stop—I’m happy to complain about both.

I have one more gripe about the rally: in order to get into the event in the heart of campus, people have to register with the President’s campaign team. I don’t have any problems with metal detectors and tight security (there are plenty of crazy people out there), but requiring registration with an aggressive political campaign team to attend an on-campus event does not support sifting and winnowing. (To be fair, Romney’s folks do the same thing to harvest voter information—but he is never coming to far-left Madison.)

I have taken steps to cancel or postpone all of my events on campus on Thursday and will likely listen to the rally online. Hopefully, all of the people displaced by the campaign event can have a fairly normal day of work if they so choose.

An Elite Take on College Rankings

As a conservative, small-town Midwesterner, I get a great deal of amusement out of the education coverage in the New York Times. I have never quite understood the newspaper’s consistent focus on the most elite portions of America’s educational systems, from kindergartens which cost more than most colleges (is the neighborhood school really that bad?) to the special section of the website regarding the Ivy League. In that light, I was interested when several friends sent me the NYT’s take on college rankings and surprised to find a discussion that didn’t focus solely on the Ivy League.

In Saturday’s edition of the paper, columnist Joe Nocera noted some of the limitations of the U.S. News and World Report college rankings, such as rewarding selectivity and higher spending regardless of outcomes. (I’ve written plenty on this topic.) He notes that the Washington Monthly rankings do seek to reward colleges which effectively educate their students, and also states that a reduced focus on institutional prestige might help reduce student stress.

I am hardly a fan of Nocera (who is best known for comparing Tea Party supporters to terrorists), but the piece is worth a read. I highly recommend reading through the comments on the article, as they show a sharp divide between commenters who believe that attending solid—but not elite—colleges is a good investment and those who believe strongly in attending an elite institution. For those of us who are not regular readers of the Gray Lady, the comments also give us an idea of what some of America’s elite think about the value of certain types of higher education.

The Wisconsin Idea in Action

One of the factors which attracted me to the University of Wisconsin-Madison for graduate school was the Wisconsin Idea—the belief that the boundaries of the university should be the boundaries of the state. (Yes, that is much more important than being able to see my beloved Packers on television each week—and I’m a shareholder in the team.) When the University of Wisconsin System was formed in the early 1970s, the Wisconsin Idea was adopted by the rest of the state’s public colleges and universities. While some people say that the Wisconsin Idea has passed its prime due to the focus on arcane research topics, I still think the idea is alive and well.

I saw a great example of the Wisconsin Idea in action at UW-Parkside that made the state newspapers this morning. Two Parkside students did research for a class project and discovered that moving prisoners’ medical records from paper to electronic formats could save millions of dollars and likely improve patient outcomes. This is a win-win for the students (who gain valuable research experience and analytic skills), the university (which gets great publicity), and the state (which should be able to save money).

I have been privileged to study the Wisconsin public higher education system for the past four-plus years through the Wisconsin Scholars Longitudinal Study. It is not uncommon for someone at UW-Madison to look down their noses at the rest of the UW System, but it is critical to recognize the contributions of the entire system toward making Wisconsin a better place to live.

Sticker Shock in Choosing Colleges: What Can Be Done?

Very few items are priced in the same manner as a college education. While the price of some items, such as cars and houses, can be negotiated downward from a posted (sticker) price, the actual price and the sticker price are usually in the same ballpark. In higher education, however, the difference between the sticker price and the actual price paid can be enormous. This has posed a substantial problem for students and their families, especially those with less knowledge of the collegegoing and financial aid processes.

Until recently, students had to apply for financial aid to get an idea of how much college would actually cost them. The latest iteration of the Higher Education Opportunity Act, signed in 2008, required that institutions place a net price calculator on their website by last October. This calculator uses basic financial information such as income, household size, and dependency status to estimate a student’s expected family contribution (EFC), which would then give students an idea of their grant aid.
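The actual federal EFC formula runs to dozens of pages, so real calculators embed far more detail than the sketch below; the income-protection allowance, assessment rates, and the assumption that grants fill the full gap between cost and EFC are all invented for illustration:

```python
# A deliberately oversimplified sketch of what a net price calculator
# does. The allowance and rates below are invented; the real federal
# EFC formula is far more detailed.

def estimate_efc(income, household_size, is_dependent):
    """Crude expected family contribution (EFC) estimate."""
    protected = 7000 * household_size          # hypothetical income protection
    available = max(income - protected, 0)
    rate = 0.25 if is_dependent else 0.40      # hypothetical assessment rates
    return available * rate

def estimate_net_price(cost_of_attendance, income, household_size, is_dependent):
    """Net price = cost of attendance minus estimated grant aid, assuming
    (idealistically -- few colleges actually meet full need) that grants
    fill the entire gap between cost and EFC."""
    efc = estimate_efc(income, household_size, is_dependent)
    grant_aid = max(cost_of_attendance - efc, 0)
    return cost_of_attendance - grant_aid      # equivalently, min(efc, cost)

# A dependent student from a family of four earning $60,000, looking at
# a college with a $50,000 sticker price:
print(estimate_net_price(50000, income=60000, household_size=4, is_dependent=True))
# 8000.0 -- a long way from the sticker price
```

Families who never see this calculation react only to the sticker price, which is exactly the problem the poll below documents.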

The need for more transparent information on the actual cost of college is shown by a recently released poll conducted by the College Board and Art & Science Group, LLC. These groups polled a nonrandom sample of SAT test-takers applying to mainly selective four-year colleges and universities in late 2011 and early 2012 and found that nearly 60% of low- and middle-income families ruled out colleges solely because of the sticker price. This is in spite of generous need-based financial aid programs at some expensive, well-endowed colleges.

Given that the survey was conducted right as net price calculators became mandatory, it is likely that more students are aware of these tools by now. But it is unlikely that net price calculators have been used as widely as they could be, especially by first-generation students. To make the net price more apparent, the Department of Education has put forth a proposed “Shopping Sheet” that can be easily compared across colleges. This proposal has advocates in Washington, but there are reasonable concerns that a one-size-fits-all model may not benefit all colleges.

As an economist, I hope that better information can help students and their families make good decisions about whether to go to college and where to attend. However, I am also hesitant to believe that requiring uniform information across colleges will result in something useful.

Public Research at its Finest: The 2012 Ig Nobel Prize Winners

It is no secret that academics research some obscure topics—and are known to write about these topics in ways that obfuscate the importance of such research. This is one reason why former Senator William Proxmire (D-WI) started the Golden Fleece Awards to highlight research that he did not consider cost-effective. Here are some examples, courtesy of the Wisconsin Historical Society. (Academia has started to push back through the Golden Goose Awards, conceived by Rep. Jim Cooper (D-TN).)

Some of these seemingly strange topics either have useful applications or are just plain thought-provoking. To recognize some of the most unusual research in a given year, some good chaps at Harvard organized the first Ig Nobel Prize ceremony in 1991. This wonderful tradition continues to this day, with the 2012 ceremony being held yesterday. Real Nobel Prize winners are even known to hand out the awards!

Ten awards are handed out each year, so it is difficult to pick the best award. My initial thought was to highlight the Government Accountability Office’s report titled, “Actions Needed to Evaluate the Impact of Efforts to Estimate Costs of Reports and Studies,” but this sort of report is not unusual in the federal government. So I’ll single out a nice little article on whether multiple comparisons bias can produce apparent brain wave activity in a dead Atlantic salmon (no word on whether the study participant was consumed after completion of the study) as my favorite award. Multiple comparisons bias is certainly real and the authors provide a nice example of how to lie with statistics, but the subject tested sure is unusual. I encourage people to take a look at the other awards and try to figure out how these research projects got started. Some seem more useful than others, but that is the nature of academic research.
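For readers who have not run across the problem, a quick simulation shows why enough uncorrected tests will find “activity” even in a dead fish: test pure noise at thousands of sites and roughly five percent will clear p < .05 by chance alone. This is a generic illustration of the statistical point, not a recreation of the study's fMRI analysis:

```python
# Simulation of multiple comparisons bias: run many independent tests
# on pure noise, with no true effect anywhere. A generic illustration
# of the statistical point, not the salmon study itself.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sites, n_obs = 5000, 30        # many "voxels," a few observations each

false_positives = 0
for _ in range(n_sites):
    noise = rng.normal(0, 1, n_obs)          # no signal by construction
    _, p = stats.ttest_1samp(noise, 0.0)     # test H0: mean = 0
    if p < 0.05:
        false_positives += 1

# Roughly 5% of sites "light up" despite containing nothing but noise.
print(false_positives, "of", n_sites, "sites significant at p < .05")
# A Bonferroni threshold of 0.05 / n_sites would eliminate nearly all of them.
```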

The Annals of Improbable Research, the folks who put on the Ig Nobel ceremony, also have three hair clubs for scientists: The Luxuriant Flowing Hair Club for Scientists, the Luxuriant Former Hair Club for Scientists, and the Luxuriant Facial Hair Club for Scientists.

Here is the full video of the ceremony.

Knowing Before You Go

The American Enterprise Institute today hosted a discussion of the Student Right to Know Before You Go Act, introduced by Senator Ron Wyden (D-OR) and co-sponsored by Senator Marco Rubio (R-FL). The two senators, both of whom are known for working across party lines, briefly discussed the legislation and were then followed by a panel of higher education experts. Video of the discussion will be available on AEI’s website shortly.

The goal of the legislation, as the senators discuss in a column in USA Today, is to provide more information about labor market and other important outcomes to students and their families. While labor market outcomes are rarely available in any systematic manner, this legislation would support states which release the data both at the school level and by academic program. This sort of information cannot be collected at the federal level due to a restriction placed in Section 134 of the Higher Education Act reauthorization in 2008, which bans the Department of Education from having a student-level data system of the sort used in some states.

While nearly everyone across the political spectrum agrees that making additional data available is good for students and their families, there are certainly concerns about the proposed legislation. One concern is that the availability of employment data will make more rigorous accountability systems feasible, even though state-level data systems can only track students who stay within that state. This concern is shared by colleges, which tend to loathe regulation, and some conservatives, who don’t feel that the federal government should regulate higher education.

Additionally, measuring employment outcomes does place more of a focus on employment than on some of the other goals of college (such as learning for learning’s sake). The security of these large unit-record datasets is also a concern of some people; I am less concerned about this given the difficulty of accessing deidentified data. (I’ve worked with the data from Florida, which has possibly the most advanced state-level data system. Getting access is extremely difficult.)

Although I certainly recognize those concerns, I strongly support this piece of legislation. It would reduce reporting requirements for colleges, since they would work primarily with states instead of the federal government. (In that respect, the legislation is quite conservative.) It makes more data available to all stakeholders in education and provides researchers with more opportunities to examine promising educational practices and interventions. Finally, it allows states to make more informed decisions about how to allocate their scarce resources.

I don’t expect this legislation to go anywhere during this session of Congress, even with bipartisan support. Let’s see what happens next session, by which time I hope we are away from the “fiscal cliff.”

The Limitations of “Data-Driven” Decisions

It’s safe to say that I am a data-driven person. I am an economist of education by training, and I get more than a little giddy when I get a new dataset that can help me examine an interesting policy question (and even giddier when I can get the dataset coded correctly). But there are limits to what quantitative analysis can tell us, which comes as no surprise to nearly everyone in the education community (but can be surprising to some other researchers). Given my training and perspectives, I found an Education Week article on the limitations of data-driven decisions by Alfie Kohn, a noted critic of quantitative analyses in education, interesting.

Kohn writes that our reliance on quantifiable measures (such as test scores) in education results in the goals of education being transformed to meet those measures. He also notes that educators and policymakers have frequently created rubrics to quantify performance that used to be assessed more qualitatively, such as writing assignments. These critiques are certainly valid and should be kept in mind at all times, but then his clear agenda against what is often referred to as data-driven decision making shows through.

Toward the end of his essay, he launches into a scathing criticism of the “pseudoscience” of value-added models, in which students’ gains on standardized tests or other outcomes are estimated over time. While nobody in the education or psychometric communities is (or should be) claiming that value-added models give us a perfect measure of student learning, they do provide us with at least some useful information. A good source for more information on value-added models and data-driven decisions in K-12 education can be found in a book by my longtime mentor and dissertation committee member Doug Harris (with a foreword by the president of the American Federation of Teachers).
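For readers unfamiliar with the mechanics, the core of a simple value-added model is a regression of current test scores on prior scores plus teacher (or school) indicators, with each indicator's coefficient serving as that teacher's estimated value added. The bare-bones sketch below uses simulated data; real implementations add student covariates, multiple years, and shrinkage of noisy estimates:

```python
# Bare-bones value-added model on simulated data: regress current
# scores on prior scores plus teacher indicators. The teacher effects
# are simulated, so we can check that the regression recovers them.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
teacher = rng.choice(["A", "B", "C"], size=n)
prior = rng.normal(50, 10, n)
true_effect = {"A": 0.0, "B": 2.0, "C": -1.0}   # unobserved in real life
score = prior + np.array([true_effect[t] for t in teacher]) + rng.normal(0, 5, n)

df = pd.DataFrame({"score": score, "prior": prior, "teacher": teacher})
# C(teacher) creates indicator variables; teacher A is the omitted
# baseline, so the coefficients are value added relative to teacher A.
model = smf.ols("score ~ prior + C(teacher)", data=df).fit()
print(model.params[["C(teacher)[T.B]", "C(teacher)[T.C]"]])
```

Even in this idealized setup the estimates carry real sampling error, which is exactly why a single year's value-added estimate should never be treated as a precise measure of a teacher.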

Like it or not, policy debates in education are increasingly being shaped by the available quantitative data in conjunction with more qualitative sources such as teacher evaluations. I certainly don’t put full faith in what large-scale datasets can tell us, but it is abundantly clear that the accountability movement at all levels of education is not going away anytime soon. If Kohn disagrees with the type of assessment going on, he should propose an actionable alternative; otherwise, his objections cannot be taken seriously.

To Be a “Best Value,” Charge Higher Tuition

In addition to the better-known college rankings from U.S. News & World Report, the magazine also publishes a listing of “Best Value” colleges. The listing seems helpful enough, with the goal of highlighting colleges which are a good value for students and their families. However, this list rewards colleges for charging higher tuition and being more selective, factors that are not necessarily associated with true educational effectiveness.

U.S. News uses dubious methodology to calculate its Best Value list (a more detailed explanation can be found here). Before I get into the rankings components, there are two serious flaws with the methodology. First, colleges are only eligible to be on the list if they are approximately in the top half of the overall rankings. Since we already know that the rankings measure prestige better than educational effectiveness, the Best Value list must be taken with a shaker of salt right away. Additionally, for public colleges, U.S. News uses the cost of attendance for out-of-state students, despite the fact that the vast majority of students come from in-state. It is true that relatively more students at elite public universities (like the University of Wisconsin-Madison) come from out of state, but even here over 70% of freshmen come from Wisconsin or Minnesota. This decision inflates the cost of attending public institutions and thus shoves them farther down the list.

The rankings components are as follows (a toy computation of the composite follows the list):

(1) “Ratio of quality to price”—60%. This is the score on the U.S. News ranking (their measure of quality) divided by the net price of attendance, which is the cost of attendance less need-based financial aid. It is similar to what I did in the Washington Monthly rankings to calculate the cost-adjusted graduation rate measure. This measure has some merits, but suffers from the flaws of a prestige-based numerator and a net price of attendance that is biased toward private institutions.

(2) The percentage of undergraduates receiving need-based grants—25%. This measure rewards colleges with lots of Pell Grant recipients (which is just fine) as well as colleges with large endowments or high posted tuition which can offer lots of grants (which isn’t related to the actual price a student pays). If every student with a household income under one million dollars received a $5 need-based grant, a college would look good on this measure. In short, this measure can be gamed.

(3) Average discount—15%. This is the average amount of need-based grants divided by the net price of attendance. This certainly rewards colleges with high posted tuition and lots of financial aid.
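To see how those weights interact, here is a toy computation of the composite. The numbers and the normalization are my own inventions, meant only to show the direction of the incentives rather than U.S. News's actual arithmetic:

```python
# Toy version of the Best Value composite. The scaling is my own
# simplification; the point is the direction of the incentives, not
# U.S. News's exact arithmetic.

def best_value_score(quality, net_price, pct_need_grants, avg_discount):
    # Scale quality/price so a typical value lands near the 0-1 range
    # of the other two components (a stand-in for whatever
    # normalization U.S. News actually applies).
    quality_to_price = (quality / net_price) / 0.01
    return (0.60 * quality_to_price
            + 0.25 * pct_need_grants
            + 0.15 * avg_discount)

# Identical quality and identical NET price, but College Y posts a high
# sticker price and discounts it heavily with need-based grants.
x = best_value_score(quality=80, net_price=18000, pct_need_grants=0.30, avg_discount=0.15)
y = best_value_score(quality=80, net_price=18000, pct_need_grants=0.90, avg_discount=0.60)
print(round(x, 2), round(y, 2))   # 0.36 0.58 -- Y outscores X even though
                                  # students at both pay the same net price
```

The first component is identical for both colleges; everything separating them comes from components (2) and (3), which reward the high-sticker, high-discount pricing model.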

Once again, I don’t focus on the actual rankings, but I will say that the top of the list is dominated by elite private colleges with massive endowments. Daniel Luzer of Washington Monthly (with whom I’ve had the pleasure of working over the past six months) has a good take on the top of the Best Value list. He notes that although these well-endowed institutions do provide a lot of financial aid to needy students, they don’t educate very many of these students.

I am glad that the 800-pound gorilla in the college rankings game is thinking about whether a college is a good value to students, but its methodological choices mean that colleges which are really educating students in a cost-effective manner are not being rewarded.

Explaining Subgroup Effects: The New York City Voucher Experiment

The last three-plus years of my professional life have been consumed by attempting to determine the extent to which a privately funded need-based grant program affected the outcomes of college students from low-income families in the state of Wisconsin. Although the overall impact on university students in the program’s first cohort is effectively zero, this masks a substantial amount of heterogeneity in outcomes across different types of people. One of the greatest challenges that we have faced in interpreting the results is determining whether program impacts are truly different across subgroups (such as academic preparation and type of initial institution attended), something which is sorely lacking in many studies. (See our latest findings here.)

Matthew Chingos of the Brookings Institution and Paul Peterson of Harvard had to face a similar challenge in explaining subgroup effects in their research on a voucher program in New York City. They concluded that, although the overall impact of offering vouchers to disadvantaged students on college enrollment was null, there were positive and statistically significant effects for black students. This research got a great deal of publicity, including in the right-leaning Wall Street Journal. (I’m somewhere in the political middle on vouchers in K-12 education—I am strongly in favor of open enrollment across public school districts and support vouchers for qualified non-profit programs, but am much more hesitant to support vouchers for faith-based and unproven for-profit programs.)

This research got even more attention today with a report by Sara Goldrick-Rab of the University of Wisconsin-Madison (my dissertation chair) which sought to downplay the subgroup effects (see this summary of the debate in Inside Higher Ed). This brief, released through the left-leaning National Education Policy Center (here is the review panel), notes that the reported impacts for black students are in fact not statistically different from Hispanic students and that the impacts for black students may not even be statistically significant due to data limitations (word to the wise: the National Student Clearinghouse is not a perfect data source). I share Sara’s concerns about statistical significance and subgroup effects. [UPDATE: Here is the authors’ response to Sara’s report, which is not surprising. If you like snark with your policy debates, I recommend checking out their Twitter discussion.]
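The statistical point at the heart of the dispute is worth making concrete: two subgroup estimates can look very different in isolation while their difference is statistically indistinguishable. Here is a standard large-sample test for whether two independently estimated effects differ, using invented numbers rather than the study's actual estimates:

```python
# Simplified large-sample test of whether two independently estimated
# subgroup effects differ. The effects and standard errors below are
# invented for illustration, not the NYC study's actual estimates.
from math import sqrt
from scipy.stats import norm

def subgroup_difference(b1, se1, b2, se2):
    """z-test for H0: effect_1 == effect_2, assuming independent estimates."""
    z = (b1 - b2) / sqrt(se1**2 + se2**2)
    p = 2 * (1 - norm.cdf(abs(z)))
    return z, p

# Group 1: +8 point effect (se 3), individually significant (z = 2.67).
# Group 2: +1 point effect (se 4), individually insignificant (z = 0.25).
z, p = subgroup_difference(8.0, 3.0, 1.0, 4.0)
print(f"z = {z:.2f}, p = {p:.2f}")   # z = 1.40, p = 0.16
# One effect is "significant" and the other is not, yet we cannot
# reject the hypothesis that the two effects are equal.
```

This is the classic warning that the difference between “significant” and “not significant” is not itself necessarily significant.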

I am generally extremely hesitant to make much out of differences in impacts by race (as well as other characteristics like parental education and family income) for several reasons. First, it is difficult to consistently measure race. (How are multiracial students classified? Why do some states classify students differently?) Second, although researchers should look at differences in outcomes by race, the question then becomes, “So what?” If black students do benefit more from a voucher program than Hispanic students, the policy lever isn’t clear. It is extremely difficult to target a policy toward one race and not another. Chingos and Peterson were right in their WSJ piece to make the relevant comparison—if vouchers worked for black students in New York City, they might work in Washington, DC. Finally, good luck enacting a policy that makes opportunities available only for people of a certain racial background; this is much less of a problem when considering family income or parental education.

Although the true effects of the voucher program for black students may not have been statistically significant, the program is still likely to be cost-effective given the much lower costs of the private schools. Researchers and educators should carefully consider what these private schools are doing to generate similar educational outcomes at a lower cost—and also consider whether private schools are spending less money per student due to educating students with fewer special needs. I would like to see more of a discussion of cost-effectiveness in both of these pieces.