The Wisconsin Idea in Action

One of the factors that attracted me to the University of Wisconsin-Madison for graduate school was the Wisconsin Idea—the belief that the boundaries of the university should be the boundaries of the state. (Yes, that is much more important than being able to see my beloved Packers on television each week—and I’m a shareholder in the team.) When the University of Wisconsin System was formed in the early 1970s, the Wisconsin Idea was adopted by the rest of the state’s public colleges and universities. While some people say that the Wisconsin Idea has passed its prime due to the focus on arcane research topics, I still think the idea is alive and well.

I saw a great example of the Wisconsin Idea in action at UW-Parkside that made the state newspapers this morning. Two Parkside students did research for a class project and discovered that moving prisoners’ medical records from paper to electronic formats could save millions of dollars and likely improve patient outcomes. This is a win-win for the students (who gain valuable research experience and analytic skills), the university (which gets great publicity), and the state (which should be able to save money).

I have been privileged to study the Wisconsin public higher education system for the past four-plus years through the Wisconsin Scholars Longitudinal Study. It is not uncommon for people at UW-Madison to look down their noses at the rest of the UW System, but it is critical to recognize the contributions of the entire system toward making Wisconsin a better place to live.

Sticker Shock in Choosing Colleges: What Can Be Done?

Very few items are priced in the same manner as a college education. While the price of some items, such as cars and houses, can be negotiated downward from a posted (sticker) price, the actual price and the sticker price are usually in the same ballpark. However, the difference between the sticker price and the actual price paid can be enormous in higher education. This poses a substantial problem for students and their families, especially those with less knowledge of the college-going and financial aid processes.

Until recently, students had to apply for financial aid to get an idea of how much college would actually cost them. The latest iteration of the Higher Education Opportunity Act, signed in 2008, required institutions to place a net price calculator on their websites by last October. This calculator uses basic financial information such as income, household size, and dependency status to estimate a student’s expected family contribution (EFC), which then gives students an idea of the grant aid they can expect.
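
The mechanics behind these calculators are straightforward, even though the actual federal formula is lengthy. Here is a minimal sketch in Python of the general idea; the allowance amounts, assessment rates, and grant cap below are entirely hypothetical and are only meant to show how income, household size, and dependency status flow into an estimated net price.

```python
# Minimal, illustrative sketch of how a net price calculator works.
# The real federal EFC formula is far more detailed; every number and
# threshold below is hypothetical, chosen only to show the mechanics.

def estimate_efc(income, household_size, is_dependent):
    """Very rough stand-in for an expected family contribution (EFC)."""
    # Hypothetical income protection allowance that grows with household size
    protected_income = 20_000 + 4_000 * household_size
    available_income = max(income - protected_income, 0)
    # Hypothetical assessment rates for dependent vs. independent students
    rate = 0.25 if is_dependent else 0.40
    return available_income * rate

def estimate_net_price(cost_of_attendance, income, household_size,
                       is_dependent, max_grant=15_000):
    """Net price = cost of attendance minus estimated grant aid."""
    efc = estimate_efc(income, household_size, is_dependent)
    need = max(cost_of_attendance - efc, 0)
    grant_aid = min(need, max_grant)  # grant aid rarely covers full need
    return cost_of_attendance - grant_aid

# Example: $45,000 sticker price, $50,000 family income, family of four
print(estimate_net_price(45_000, 50_000, 4, True))  # well below sticker
```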

The need for more transparent information on the actual cost of college is shown by a recently released poll conducted by the College Board and Art & Science Group, LLC. These groups polled a nonrandom sample of SAT test-takers applying to mainly selective four-year colleges and universities in late 2011 and early 2012 and found that nearly 60% of low- and middle-income families ruled out colleges solely because of the sticker price. This is in spite of generous need-based financial aid programs at some expensive, well-endowed colleges.

Given that the survey was conducted right as net price calculators became mandatory, more students are likely aware of these tools by now. But it is unlikely that net price calculators are being used as widely as they could be, especially by first-generation students. To make the net price more apparent, the Department of Education has put forth a proposed “Shopping Sheet” that can be easily compared across colleges. This proposal has advocates in Washington, but there are reasonable concerns that a one-size-fits-all model may not benefit all colleges.

As an economist, I hope that better information can help students and their families make good decisions about whether to go to college and where to attend. However, I am also hesitant to believe that requiring uniform information across colleges will result in something useful.

Public Research at its Finest: The 2012 Ig Nobel Prize Winners

It is no secret that academics research some obscure topics—and are known to write about these topics in ways that obfuscate the importance of such research. This is one reason why former Senator William Proxmire (D-WI) started the Golden Fleece Awards to highlight research that he did not consider cost-effective. Here are some examples, courtesy of the Wisconsin Historical Society. (Academia has started to push back through the Golden Goose Awards, conceived by Rep. Jim Cooper (D-TN).)

Some of these seemingly strange topics either have potentially useful applications or are just plain thought-provoking. To recognize some of the most unusual research in a given year, some good chaps at Harvard organized the first Ig Nobel Prize ceremony in 1991. This wonderful tradition continues to this day, with the 2012 ceremony being held yesterday. Real Nobel Prize winners are even known to hand out the awards!

Ten awards are handed out each year, so it is difficult to pick the best one. My initial thought was to highlight the Government Accountability Office’s report titled “Actions Needed to Evaluate the Impact of Efforts to Estimate Costs of Reports and Studies,” but this sort of report is not unusual in the federal government. So my favorite is a nice little article on whether multiple comparisons bias can produce apparent brain activity in a dead Atlantic salmon (no word on whether the study participant was consumed after completion of the study). Multiple comparisons bias is certainly real and the authors provide a nice example of how to lie with statistics, but the subject tested sure is unusual. I encourage people to take a look at the other awards and try to figure out how these research projects got started. Some seem more useful than others, but that is the nature of academic research.
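
For readers who have not run into multiple comparisons bias before, a short simulation (mine, not the salmon authors’) makes the point: run enough tests on pure noise and some will look significant unless you correct for the number of tests.

```python
# Simulating multiple comparisons bias: 1,000 tests of pure noise.
# All of the numbers here are made up purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, n_obs, alpha = 1000, 20, 0.05

# 1,000 independent t-tests where the true effect is exactly zero
p_values = np.array([
    stats.ttest_1samp(rng.normal(0, 1, n_obs), 0).pvalue
    for _ in range(n_tests)
])

print("'Significant' results, no correction:", (p_values < alpha).sum())
print("After a Bonferroni correction:", (p_values < alpha / n_tests).sum())
# Expect roughly 50 false positives uncorrected and essentially none after.
```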

The Annals of Improbable Research, the folks who put on the Ig Nobel ceremony, also have three hair clubs for scientists: The Luxuriant Flowing Hair Club for Scientists, the Luxuriant Former Hair Club for Scientists, and the Luxuriant Facial Hair Club for Scientists.

Here is the full video of the ceremony.

Knowing Before You Go

The American Enterprise Institute today hosted a discussion of the Student Right to Know Before You Go Act, introduced by Senator Ron Wyden (D-OR) and co-sponsored by Senator Marco Rubio (R-FL). The two senators, both of whom are known for working across party lines, briefly discussed the legislation and were then followed by a panel of higher education experts. Video of the discussion will be available on AEI’s website shortly.

The goal of the legislation, as the senators discuss in a column in USA Today, is to provide more information about labor market and other important outcomes to students and their families. While labor market outcomes are rarely available in any systematic manner, this legislation would support states that release the data both at the school level and by academic program. This sort of information cannot be collected at the federal level due to a restriction placed in Section 134 of the Higher Education Act reauthorization in 2008, which bans the Department of Education from having a student-level data system of the sort used in some states.

While nearly everyone across the political spectrum agrees that making additional data available is good for students and their families, there are certainly concerns about the proposed legislation. One concern is that the availability of employment data will make more rigorous accountability systems feasible, even though state-level data systems can only track students who stay within that state. This concern is shared by colleges, which tend to loathe regulation, and some conservatives, who don’t feel that the federal government should regulate higher education.

Additionally, measuring employment outcomes does place more of a focus on employment at the expense of some of the other goals of college (such as learning for learning’s sake). The security of these large unit-record datasets also concerns some people; I am less concerned about this given the difficulty of accessing deidentified data. (I’ve worked with the data from Florida, which has possibly the most advanced state-level data system. Getting access is extremely difficult.)

Although I certainly recognize those concerns, I strongly support this piece of legislation. It would reduce reporting requirements for colleges, since they would work primarily with states instead of the federal government. (In that respect, the legislation is quite conservative.) It makes more data available to all stakeholders in education and provides researchers with more opportunities to examine promising educational practices and interventions. Finally, it allows states to make more informed decisions about how to allocate their scarce resources.

I don’t expect this legislation to go anywhere during this session of Congress, even with bipartisan support. Let’s see what happens next session, by which time I hope we are away from the “fiscal cliff.”

The Limitations of “Data-Driven” Decisions

It’s safe to say that I am a data-driven person. I am an economist of education by training, and I get more than a little giddy when I receive a new dataset that can help me examine an interesting policy question (and even giddier when I can get the dataset coded correctly). But there are limits to what quantitative analysis can tell us, which comes as no surprise to nearly everyone in the education community (but can be surprising to some other researchers). Given my training and perspectives, I found an Education Week article on the limitations of data-driven decisions by Alfie Kohn, a noted critic of quantitative analyses in education, interesting.

Kohn writes that our reliance on quantifiable measures (such as test scores) in education results in the goals of education being transformed to meet those measures. He also notes that educators and policymakers have frequently created rubrics to quantify performance that used to be assessed more qualitatively, such as writing assignments. These critiques are certainly valid and should be kept in mind at all times, but then his clear agenda against what is often referred to as data-driven decision making shows through.

Toward the end of his essay, he launches into a scathing criticism of the “pseudoscience” of value-added models, in which students’ gains on standardized tests or other outcomes are estimated over time. While nobody in the education or psychometric communities is (or should be) claiming that value-added models give us a perfect measure of student learning, they do provide us with at least some useful information. A good source for more information on value-added models and data-driven decisions in K-12 education is a book by my longtime mentor and dissertation committee member Doug Harris (with a foreword by the president of the American Federation of Teachers).
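
For readers who want the intuition rather than the full treatment in Harris’s book, here is a deliberately stripped-down sketch of the idea; the data file, variable names, and one-covariate specification are hypothetical and far simpler than the models actually used in practice.

```python
# A minimal sketch of the idea behind a value-added model: predict this
# year's test score from last year's score (and a control), then treat the
# average residual for each teacher as that teacher's "value added."
# Real models are far more elaborate; column names here are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_scores.csv")  # hypothetical student-level file

# Regress current score on prior score and a student-level control
model = smf.ols("score_2012 ~ score_2011 + free_lunch", data=df).fit()
df["residual"] = model.resid

# A teacher's value-added estimate: mean residual of his or her students
value_added = df.groupby("teacher_id")["residual"].mean().sort_values()
print(value_added.head())
```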

Like it or not, policy debates in education are increasingly being shaped by the available quantitative data in conjunction with more qualitative sources such as teacher evaluations. I certainly don’t put full faith in what large-scale datasets can tell us, but it is abundantly clear that the accountability movement at all levels of education is not going away anytime soon. If Kohn disagrees with the type of assessment going on, he should propose an actionable alternative; otherwise, his objections cannot be taken seriously.

To Be a “Best Value,” Charge Higher Tuition

In addition to the better-known college rankings from U.S. News & World Report, the magazine also publishes a listing of “Best Value” colleges. The listing seems helpful enough, with the goal of highlighting colleges which are a good value for students and their families. However, this list rewards colleges for charging higher tuition and being more selective, factors that are not necessarily associated with true educational effectiveness.

U.S. News uses dubious methodology to calculate its Best Value list (a more detailed explanation can be found here). Before I get into the rankings components, there are two serious flaws with the methodology. First, colleges are only eligible to be on the list if they are approximately in the top half of the overall rankings. Since we already know that the rankings better measure prestige than educational effectiveness, the Best Value list must be taken with a shaker of salt right away. Additionally, for public colleges, U.S. News uses the cost of attendance for out-of-state students, despite the fact that the vast majority of students come from within the state. It is true that relatively more students at elite public universities (like the University of Wisconsin-Madison) come from out of state, but even here over 70% of freshmen come from Wisconsin or Minnesota. This decision inflates the cost of attending public institutions and thus shoves them farther down the list.

The rankings components are as follows (a toy calculation combining them appears after the list):

(1) “Ratio of quality to price”—60%. This is the score on the U.S. News ranking (their measure of quality) divided by the net price of attendance, which is the cost of attendance less need-based financial aid. It is similar to what I did in the Washington Monthly rankings to calculate the cost-adjusted graduation rate measure. This measure has some merits, but suffers from the flaws of a prestige-based numerator and a net price of attendance that is biased toward private institutions.

(2) The percentage of undergraduates receiving need-based grants—25%. This measure rewards colleges with lots of Pell Grant recipients (which is just fine) as well as colleges with large endowments or high posted tuition that can offer lots of grants (which isn’t related to the actual price a student pays). If every student with a household income under one million dollars received a $5 need-based grant, a college would look good on this measure; in other words, this measure can be gamed.

(3) Average discount—15%. This is the average amount of need-based grants divided by the net price of attendance. This certainly rewards colleges with high posted tuition and lots of financial aid.
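
U.S. News does not publish exactly how each component is scaled before the weights are applied, so the sketch below is only a toy illustration with made-up numbers. It shows why a college with a high sticker price that is heavily discounted through need-based grants can outscore an otherwise identical college, even when students pay the same net price at both.

```python
# Toy sketch of how the three stated components could combine under the
# 60/25/15 weights. The rescaling factor and the numbers for the two
# hypothetical colleges are purely illustrative, not the actual formula.

def best_value_score(quality, net_price, pct_need_grants, avg_discount):
    quality_to_price = quality / net_price          # component 1 (60%)
    return (0.60 * quality_to_price * 10_000 +      # arbitrary rescaling
            0.25 * pct_need_grants +                # component 2 (25%)
            0.15 * avg_discount)                    # component 3 (15%)

# College A: modest sticker price, little need-based aid
a = best_value_score(quality=80, net_price=20_000,
                     pct_need_grants=0.30, avg_discount=0.10)

# College B: same quality and same net price, but a high sticker price
# discounted with lots of need-based grants
b = best_value_score(quality=80, net_price=20_000,
                     pct_need_grants=0.70, avg_discount=0.50)

print(round(a, 2), round(b, 2))  # B scores higher despite identical net price
```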

Once again, I don’t focus on the actual rankings, but I will say that the top of the list is dominated by elite private colleges with massive endowments. Daniel Luzer of Washington Monthly (with whom I’ve had the pleasure of working over the past six months) has a good take on the top of the Best Value list. He notes that although these well-endowed institutions do provide a lot of financial aid to needy students, they don’t educate very many of these students.

I am glad that the 800-pound gorilla in the college rankings game is thinking about whether a college is a good value for students, but its methodological choices mean that colleges that really are educating students in a cost-effective manner are not being rewarded.

Explaining Subgroup Effects: The New York City Voucher Experiment

The last three-plus years of my professional life have been consumed by attempting to determine the extent to which a privately funded need-based grant program affected the outcomes of college students from low-income families in the state of Wisconsin. Although the overall impact on university students in the program’s first cohort is effectively zero, this masks a substantial amount of heterogeneity in outcomes across different types of students. One of the greatest challenges we have faced in interpreting the results is determining whether program impacts are truly different across subgroups (such as academic preparation and type of initial institution attended), something that is sorely lacking in many studies. (See our latest findings here.)
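
The standard way to ask that question is to test an interaction term directly rather than eyeballing two separately estimated subgroup effects. Here is a generic sketch of the approach; the data file, variable names, and subgroup are hypothetical, and this is not our actual WSLS analysis.

```python
# Testing whether a program effect truly differs across subgroups by
# including a treatment-by-subgroup interaction in one regression.
# The data file and column names here are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("grant_study.csv")  # hypothetical student-level data

# 'enrolled' = persisted to the next year (0/1), 'treated' = offered the
# grant (0/1), 'first_gen' = first-generation student (the subgroup of interest)
model = smf.ols("enrolled ~ treated * first_gen + hs_gpa", data=df).fit(
    cov_type="HC1"  # heteroskedasticity-robust standard errors
)

# The interaction term is the direct test of whether effects differ
print(model.params["treated:first_gen"],   # estimated difference in effects
      model.pvalues["treated:first_gen"])  # p-value for that difference
```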

Matthew Chingos of the Brookings Institution and Paul Peterson of Harvard had to face a similar challenge in explaining subgroup effects in their research on a voucher program in New York City. They concluded that, although the overall impact of offering vouchers to disadvantaged students on college enrollment was null, there were positive and statistically significant effects for black students. This research got a great deal of publicity, including in the right-leaning Wall Street Journal. (I’m somewhere in the political middle on vouchers in K-12 education—I am strongly in favor of open enrollment across public school districts and support vouchers for qualified non-profit programs, but am much more hesitant to support vouchers for faith-based and unproven for-profit programs.)

This research got even more attention today with a report by Sara Goldrick-Rab of the University of Wisconsin-Madison (my dissertation chair) that sought to downplay the subgroup effects (see this summary of the debate in Inside Higher Ed). This brief, released through the left-leaning National Education Policy Center (here is the review panel), notes that the reported impacts for black students are in fact not statistically different from those for Hispanic students and that the impacts for black students may not even be statistically significant due to data limitations (word to the wise: the National Student Clearinghouse is not a perfect data source). I share Sara’s concerns about statistical significance and subgroup effects. [UPDATE: Here is the authors’ response to Sara’s report, which is not surprising. If you like snark with your policy debates, I recommend checking out their Twitter discussion.]

I am generally extremely hesitant to make much out of differences in impacts by race (as well as other characteristics like parental education and family income) for several reasons. First, it is difficult to measure race consistently. (How are multiracial students classified? Why do some states classify students differently?) Second, although researchers should look at differences in outcomes by race, the question then becomes, “So what?” If black students do benefit more from a voucher program than Hispanic students, the policy lever isn’t clear; it is extremely difficult to target a program toward one race and not another. Chingos and Peterson were right in their WSJ piece to make the relevant comparison—if vouchers worked for black students in New York City, they might work in Washington, DC. Finally, good luck enacting a policy that makes opportunities available only to people of a certain racial background; this is much less of a problem when considering family income or parental education.

Although the true effects of the voucher program for black students may not have been statistically significant, the program is still likely to be cost-effective given the much lower costs of the private schools. Researchers and educators should carefully consider what these private schools are doing to generate similar educational outcomes at a lower cost—and also consider whether private schools are spending less money per student due to educating students with fewer special needs. I would like to see more of a discussion of cost-effectiveness in both of these pieces.

Measuring Prestige: Analyzing the U.S. News & World Report College Rankings

The 2013 U.S. News college rankings were released today and are certain to be the topic of discussion for much of the higher education community. Many people grumble about the rankings, but it’s hard to dismiss them given their profound impact on how colleges and universities operate. It is not uncommon for colleges to set goals to improve their ranking, and data falsification is sadly a real occurrence. As someone who does research in the fields of college rankings and accountability policy, I am glad to see the rankings come out every fall. However, I urge readers to take these rankings for what they are intended to be—a measure of prestige rather than college effectiveness.

The measures used to calculate the rankings are generally the same as last year and focus on six or seven factors, depending on the type of university:

–Academic reputation (from peers and/or high school counselors)

–Selectivity (admit rate, high school rank, and ACT/SAT scores)

–Faculty resources (salary, terminal degree status, full-time status, and class size)

–Graduation and retention rates

–Financial resources per student

–Alumni giving rates

–Graduation rate performance (only for research universities and liberal arts colleges)

Most of these measures can be influenced directly by increasing tuition and/or enrolling only the most academically prepared students. The only measure that is truly independent of prestige is the graduation rate performance measure, in which the actual graduation rate is regressed on student characteristics and spending to generate a predicted graduation rate. While U.S. News doesn’t release its methodology for calculating its predicted graduation rate measure, the results are likely similar to what I did in the Washington Monthly rankings.
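
Since U.S. News keeps its specification to itself, the sketch below shows only the general approach I have in mind, with a hypothetical data file and variables: regress graduation rates on inputs, then rank colleges by how far their actual rates exceed their predicted rates.

```python
# Sketch of a graduation rate performance measure: regress actual graduation
# rates on student characteristics and spending, then compare actual to
# predicted. Variables and the data file are hypothetical stand-ins.
import pandas as pd
import statsmodels.formula.api as smf

colleges = pd.read_csv("college_data.csv")  # hypothetical institution-level data

model = smf.ols(
    "grad_rate ~ avg_sat + pct_pell + spending_per_student",
    data=colleges,
).fit()

colleges["predicted_grad_rate"] = model.fittedvalues
colleges["performance"] = colleges["grad_rate"] - colleges["predicted_grad_rate"]

# Colleges graduating more students than their inputs predict rank highest
print(colleges.sort_values("performance", ascending=False)
              [["name", "grad_rate", "predicted_grad_rate", "performance"]]
              .head())
```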

I am pleased to see that U.S. News is using an average of multiple years of data for some of its measures. I do this in my own work on estimating college effectiveness, although it was not a part of Washington Monthly’s methodology this year (it may be in the future, however). The use of multiple years of data does reduce the effects of random variation (and helps to smooth the rankings), but I am concerned that U.S. News uses only the submitted years of data when not all years are submitted. This gives colleges an incentive not to report a year of bad data on measures such as alumni giving rates.

Overall, these rankings are virtually unchanged from last year, but watch colleges crow about moving up two or three spots when their score hardly changed. These rankings are big business in the prestige market; hopefully, students who care more about educational effectiveness consider other measures of college quality in addition to these rankings.

I’ll put up a follow-up post in the next few days discussing the so-called “Best Value” college list from U.S. News. As a preview, I don’t hold it in high regard.

Disclaimer: I am the consulting methodologist for the 2012 Washington Monthly college rankings. This post reflects only my thoughts and was not subject to review by any other individual or organization.

How (Not) to Rank Colleges

The college rankings marketplace became a little more crowded this week with the release of a partial set of rankings from an outfit called The Alumni Factor. The effort, headed by Monica McGurk, a former partner at McKinsey & Company, ranks colleges primarily based on the results of proprietary alumni surveys. The site claims to have surveyed approximately 100-500 alumni across the age distribution at 450 universities in creating these rankings. (By this point, a few alarms should be sounding. More on those later.)

The Alumni Factor uses thirteen measures from its surveys to rank colleges, in addition to graduation and alumni giving rates from external sources. These measures include development of cognitive and noncognitive skills, bang for the buck, average income and net worth, and “overall happiness of graduates.” The data behind the rankings have supposedly been verified by an independent statistician, but the verification is just a description of confidence intervals (which are never mentioned again). The site provides rankings for the top 177 schools, which are not visible unless you sign up for its services. As a result, I do not discuss the overall rankings.

While I commend The Alumni Factor for making more information available to consumers (albeit behind a paywall), I don’t see this set of rankings as being tremendously useful. My first concern is how the colleges included in the rankings were selected. The list of the top 177 schools consists primarily of prestigious research universities and liberal arts colleges; few, if any, good public bachelor’s- and master’s-level universities (such as my alma mater) are included. This suggests that the creators of the rankings care much more about catering to an audience of well-heeled students and parents than about making a serious attempt to examine the effectiveness of lesser-known, but potentially higher-quality, colleges.

I am extremely skeptical of survey-based college ranking systems because they generally make every college look better than it actually is. The website mentions that multiple sources were used to reach alumni, but the types of people who respond to surveys are generally happy with their college experience and doing well in life. (Just ask anyone who has ever worked in alumni relations.)

The website states that “we did not let our subjective judgment of what is important in education lead to values-based weightings that might favor some schools more than others.” I am by no means a philosopher of education, but even I know that the choice of outcomes to be measured is a value judgment about what is important in education. The fact that four of the fifteen measures capture income and net worth, while only one reflects bang for the buck, shows the priority the creators place on labor market outcomes. I don’t have a problem with that, but they cannot ignore the decisions they made in creating the rankings.

Finally, these survey outcomes are for alumni only and exclude anyone who did not graduate from college. Granted, the colleges on the initial list of 450 all probably have pretty high graduation rates, but it is important to remember that not everyone graduates. It’s another example of these rankings being limited to prestigious colleges, and of the assumption that everyone who starts college will graduate. That assumption is simply untrue.

Although no set of college rankings is perfect (not even the ones I’ve worked on), The Alumni Factor’s rankings serve a very limited population of students who are likely already well-served by existing college guides and rankings. This is a shame, because the survey measures could have been tailored to better suit students who are much closer to the margin of graduating from college.

For those who are interested, the 2013 U.S. News & World Report college rankings will come out on Wednesday, September 12. I’ll have my analysis of those rankings later in the week.

Disclaimer: I am the consulting methodologist for the 2012 Washington Monthly college rankings. This post reflects only my thoughts and was not subject to review by any other individual or organization.