College Selectivity Does Not Imply Quality

For me, the greatest benefit of attending academic conferences is the ability to clarify my own thinking about important issues in educational policy. At my most recent conference last week, I attended several outstanding sessions on issues in higher education in addition to presenting my own work on early commitment programs for financial aid. (I’ll have more on that in a post in the near future, so stay tuned.) I greatly enjoyed the talks and learned quite a bit from them, but the biggest thing I am taking away from them is something that I think they’re doing wrong—conflating college selectivity with college quality.

When most researchers refer to the concept of “college quality,” they are really referring to a college’s inputs, such as financial resources, student ACT/SAT scores, and high school class rank. In other words, a “quality” college is a selective one with what we consider to be quality inputs. But plentiful inputs do not guarantee a quality outcome relative to what we would expect from those students and that level of resources. Rather, a quality college helps its student body succeed instead of merely recruiting a select group of students. This does not mean that selective colleges cannot be quality colleges; it does mean that the relationship is not guaranteed.

I am particularly interested in measuring college quality based on an estimate of its value added to students instead of a measure highly correlated with inputs. Part of my research agenda is on that topic, as illustrated by my work compiling the Washington Monthly college rankings. However, other popular college rankings continue to reward colleges for their selectivity, which creates substantial incentives to game the rankings system in unproductive ways.

For example, a recent piece in The Chronicle of Higher Education illustrates how one college submitted inaccurate and overly optimistic data for the U.S. News rankings. George Washington University, one of the few colleges in the country with a posted cost of attendance of over $50,000 per year, had previously reported that 78% of its incoming freshman class was in the top ten percent of their high school graduating class, even though many high schools have declined to rank students in recent years. An eagle-eyed staffer in the provost’s office realized that the number was too high and discovered that the admissions staff had been inaccurately estimating the rank of students with no data. The corrected figure was only 58%.

Regardless of whether GWU’s error was one of omission or malfeasance, the result was that the university appeared to be a higher-quality school under the fatally flawed U.S. News rankings. [UPDATE 11/15/12: U.S. News has removed the ranking for GWU in this year’s online guide.] GWU certainly aspires to be more selective, but keep in mind that selectivity does not imply quality in a value-added sense. Academics and policymakers would be wise to be careful about discussing quality when they really mean selectivity.

Overvaluing Harvard

Many parents want to send their children to what they consider to be the best colleges and universities. For quite a few of these families, this means that Junior should go to fair Harvard (after all, it’s the top-rated university by U.S. News and World Report). But few families are willing to go as far as Gerald and Lily Chow of Hong Kong.

An article in the Boston Globe tells the sad saga of the Chow family, who were duped out of $2.2 million by a former Harvard professor who claimed to be able to get the family’s two sons into the university. The family filed a fraud suit against defendant Mark Zimmy’s company after their children were not accepted there (although they did get into other elite colleges). Zimmy’s website is still active and targets Chinese students, many of whom have little knowledge of the American educational system. Needless to say, I am interested in how this case proceeds through the legal system.

I am fairly familiar with the academic literature on the returns to attending a prestigious college. Although students from disadvantaged backgrounds may see some additional benefit from attending a more prestigious college, the literature is quite clear that the typical student should not expect to gain over one million dollars by attending Harvard rather than a slightly less prestigious college. It’s safe to say that the Chow family was likely going to waste its money, even if Mr. Zimmy had been able to get their children into Harvard.

An Elite Take on College Rankings

As a conservative, small-town Midwesterner, I get a great deal of amusement out of the education coverage in the New York Times. I have never quite understood the newspaper’s consistent focus on the most elite portions of America’s educational systems, from kindergartens that cost more than most colleges (is the neighborhood school really that bad?) to the special section of the website devoted to the Ivy League. In that light, I was interested when several friends sent me the NYT’s take on college rankings and surprised to find a discussion that didn’t focus solely on the Ivy League.

In Saturday’s edition of the paper, columnist Joe Nocera noted some of the limitations of the U.S. News and World Report college rankings, such as rewarding selectivity and higher spending regardless of outcomes. (I’ve written plenty on this topic.) He notes that the Washington Monthly rankings do seek to reward colleges that effectively educate their students, and he suggests that a reduced focus on institutional prestige might help reduce student stress.

I am hardly a fan of Nocera (who is best known for comparing Tea Party supporters to terrorists), but the piece is worth a read. I highly recommend reading through the comments on the article, as they show a sharp divide between commenters who believe that attending solid—but not elite—colleges is a good investment and those who believe strongly in attending an elite institution. For those of us who are not regular readers of the Gray Lady, the comments also give us an idea of what some of America’s elite think about the value of certain types of higher education.

To Be a “Best Value,” Charge Higher Tuition

In addition to the better-known college rankings from U.S. News & World Report, the magazine also publishes a listing of “Best Value” colleges. The listing seems helpful enough, with the goal of highlighting colleges which are a good value for students and their families. However, this list rewards colleges for charging higher tuition and being more selective, factors that are not necessarily associated with true educational effectiveness.

U.S. News uses dubious methodology to calculate its Best Value list (a more detailed explanation can be found here). Before I get into the rankings components, I should note two serious flaws in the methodology. First, colleges are only eligible for the list if they are approximately in the top half of the overall rankings. Since we already know that the rankings measure prestige better than educational effectiveness, the Best Value list must be taken with a shaker of salt right away. Additionally, for public colleges, U.S. News uses the cost of attendance for out-of-state students, despite the fact that the vast majority of students come from in-state. It is true that relatively more students at elite public universities (like the University of Wisconsin-Madison) come from out of state, but even there over 70% of freshmen come from Wisconsin or Minnesota. This decision inflates the cost of attending public institutions and thus shoves them farther down the list.

The rankings components are as follows:

(1) “Ratio of quality to price”—60%. This is the score on the U.S. News ranking (their measure of quality) divided by the net price of attendance, which is the cost of attendance less need-based financial aid. It is similar to what I did in the Washington Monthly rankings to calculate the cost-adjusted graduation rate measure. This measure has some merits, but suffers from the flaws of a prestige-based numerator and a net price of attendance that is biased toward private institutions.

(2) The percentage of undergraduates receiving need-based grants—25%. This measure rewards colleges with lots of Pell Grant recipients (which is just fine) as well as colleges with large endowments or high posted tuition that can offer lots of grants (which says nothing about the actual price a student pays). If every student with a household income under one million dollars received a $5 need-based grant, a college would look good on this measure. In other words, this measure can easily be gamed.

(3) Average discount—15%. This is the average amount of need-based grants divided by the net price of attendance. This certainly rewards colleges with high posted tuition and lots of financial aid.
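To make the weighting concrete, here is a minimal sketch of how those three components might be combined. The 60/25/15 weights come from the list above, but U.S. News does not publish exactly how it normalizes each component, so the min-max scaling and the toy figures below are purely illustrative assumptions.

```python
# Sketch of combining the three "Best Value" components with the 60/25/15
# weights described above. The min-max normalization and the data are
# illustrative assumptions; U.S. News does not publish its exact formula.

def minmax(values):
    """Rescale a list of numbers to the 0-1 range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def best_value_scores(colleges):
    """colleges: list of dicts with keys 'quality' (overall ranking score),
    'net_price', 'pct_need_grants', and 'avg_discount'."""
    quality_to_price = minmax([c["quality"] / c["net_price"] for c in colleges])
    need_grants = minmax([c["pct_need_grants"] for c in colleges])
    discounts = minmax([c["avg_discount"] for c in colleges])
    return [0.60 * q + 0.25 * n + 0.15 * d
            for q, n, d in zip(quality_to_price, need_grants, discounts)]

# Toy example with three hypothetical colleges.
colleges = [
    {"quality": 90, "net_price": 32000, "pct_need_grants": 55, "avg_discount": 0.60},
    {"quality": 80, "net_price": 14000, "pct_need_grants": 40, "avg_discount": 0.20},
    {"quality": 70, "net_price": 10000, "pct_need_grants": 30, "avg_discount": 0.10},
]
print([round(s, 2) for s in best_value_scores(colleges)])
```

Even in this toy example, the first and most expensive college collects the full 40 percent of the weight tied to components 2 and 3 purely from its grant activity, regardless of what its students actually pay. That is the distortion described above.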

Once again, I don’t focus on the actual rankings, but I will say that the top of the list is dominated by elite private colleges with massive endowments. Daniel Luzer of Washington Monthly (with whom I’ve had the pleasure of working over the past six months) has a good take on the top of the Best Value list. He notes that although these well-endowed institutions do provide a lot of financial aid to needy students, they don’t educate too many of these students.

I am glad that the 800-pound gorilla of the college rankings game is thinking about whether a college is a good value to students, but its methodological choices mean that colleges which are actually educating students in a cost-effective manner are not being rewarded.

Measuring Prestige: Analyzing the U.S. News & World Report College Rankings

The 2013 U.S. News college rankings were released today and are certain to be a topic of discussion for much of the higher education community. Many people grumble about the rankings, but they are hard to dismiss given their profound impact on how colleges and universities operate. It is not uncommon for colleges to set goals to improve their ranking, and data falsification is sadly a real occurrence. As someone who does research on college rankings and accountability policy, I am glad to see the rankings come out every fall. However, I urge readers to take these rankings for what they really are—a measure of prestige rather than of college effectiveness.

The measures used to calculate the rankings are generally the same as last year and focus on six or seven factors, depending on the type of university:

–Academic reputation (from peers and/or high school counselors)

–Selectivity (admit rate, high school rank, and ACT/SAT scores)

–Faculty resources (salary, terminal degree status, full-time status, and class size)

–Graduation and retention rates

–Financial resources per student

–Alumni giving rates

–Graduation rate performance (only for research universities and liberal arts colleges)

Most of these measures can be directly influenced by raising tuition and/or enrolling only the most academically prepared students. The only measure that is truly independent of prestige is graduation rate performance, in which the actual graduation rate is regressed on student characteristics and spending to generate a predicted graduation rate. While U.S. News doesn’t release its methodology for calculating the predicted graduation rate, the results are likely similar to what I did in the Washington Monthly rankings.
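As a rough illustration of the general approach (not U.S. News’s undisclosed model), here is a minimal sketch using made-up data: actual graduation rates are regressed on a few institutional inputs, and the residual serves as the performance measure.

```python
# Minimal sketch of a graduation rate performance measure: regress actual
# graduation rates on institutional inputs and treat the residual as
# performance. The predictors and figures are hypothetical; U.S. News does
# not publish its exact model.
import numpy as np

# Columns: average ACT score, percent Pell recipients, spending per student ($000s)
inputs = np.array([
    [24.0, 35.0, 18.0],
    [29.0, 15.0, 42.0],
    [21.0, 55.0, 12.0],
    [26.0, 30.0, 25.0],
    [23.0, 45.0, 15.0],
    [28.0, 20.0, 35.0],
])
actual_grad_rate = np.array([58.0, 88.0, 45.0, 70.0, 56.0, 80.0])

# Ordinary least squares with an intercept term
design = np.column_stack([np.ones(len(inputs)), inputs])
coefs, *_ = np.linalg.lstsq(design, actual_grad_rate, rcond=None)

predicted_grad_rate = design @ coefs
performance = actual_grad_rate - predicted_grad_rate  # positive = beating expectations
print(np.round(performance, 1))
```

A positive residual means a college graduates more students than its inputs would predict; a negative one means it falls short.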

I am pleased to see that U.S. News is using an average of multiple years of data for some of its measures. I do this in my own work on estimating college effectiveness, although this was not a part of Washington Monthly’s methodology this year (it may be in the future, however). The use of multiple years of data does reduce the effects of random variation (and helps to smooth the rankings), but I am concerned that U.S. News uses only the submitted years of data if not all years are submitted. This gives colleges an incentive to not report a year of bad data on measures such as alumni giving rates.

Overall, these rankings are virtually unchanged from last year, but watch colleges crow about moving up two or three spots even though their scores hardly changed. These rankings are big business in the prestige market; hopefully, students who care more about educational effectiveness will consider other measures of college quality in addition to these rankings.

I’ll put up a follow-up post in the next few days discussing the so-called “Best Value” college list from U.S. News. As a preview, I don’t hold it in high regard.

Disclaimer: I am the consulting methodologist for the 2012 Washington Monthly college rankings. This post reflects only my thoughts and was not subject to review by any other individual or organization.

How (Not) to Rank Colleges

The college rankings marketplace became a little more crowded this week with the release of a partial set of rankings from an outfit called The Alumni Factor. The effort, headed by Monica McGurk, a former partner at McKinsey & Company, ranks colleges primarily on the results of proprietary alumni surveys. The site claims to have surveyed approximately 100-500 alumni across the age distribution at each of 450 universities in creating these rankings. (By this point, a few alarms should be sounding. More on those later.)

The Alumni Factor uses thirteen measures from these surveys to rank colleges, in addition to graduation and alumni giving rates from external sources. The measures include development of cognitive and noncognitive skills, bang for the buck, average income and net worth, and “overall happiness of graduates.” The data behind the rankings have supposedly been verified by an independent statistician, but the verification is just a description of confidence intervals (which are never mentioned again). Rankings are provided for the top 177 schools, but they are not visible unless you sign up for the service. As a result, I do not discuss the overall rankings.

While I commend The Alumni Factor for making more information available to consumers (albeit behind a paywall), I don’t see this set of rankings as being tremendously useful. My first concern is how they selected the colleges to be included in the rankings. The list of the top 177 schools consists primarily of prestigious research universities and liberal arts colleges; few, if any, strong public bachelor’s- and master’s-level universities (such as my alma mater) are included. This suggests that the creators of the rankings care much more about catering to an audience of well-heeled students and parents than about making a serious attempt to examine the effectiveness of lesser-known, but potentially higher-quality, colleges.

I am extremely skeptical of survey-based college ranking systems because they generally make every college look better than it actually is. The website mentions that multiple sources were used to reach alumni, but the types of people who respond to such surveys are generally happy with their college experience and doing well in life. (Just ask anyone who has ever worked in alumni relations.)

The website states that “we did not let our subjective judgment of what is important in education lead to values-based weightings that might favor some schools more than others.” I am by no means a philosopher of education, but even I know that the choice of outcomes to be measured is itself a value judgment about what is important in education. The fact that four of the fifteen measures capture income and net worth, while only one reflects bang for the buck, shows that labor market outcomes are a priority. I don’t have a problem with that, but the creators cannot ignore the value judgments they made in building the rankings.

Finally, these survey outcomes are for alumni only and exclude anyone who did not graduate from college. Granted, the colleges on the initial list of 450 probably all have fairly high graduation rates, but it is important to remember that not everyone graduates. It’s another example of these rankings being limited to prestigious colleges, and of the perception that everyone who starts college will graduate. That’s just not true.

Although no set of college rankings is perfect (not even the ones I’ve worked on), The Alumni Factor’s rankings serve a very limited population of students who are likely already well-served by the existing college guides and rankings. This is a shame, because the survey measures could have been tailored to better suit students who are much closer to the margin of graduating from college.

For those who are interested, the 2013 U.S. News & World Report college rankings will come out on Wednesday, September 12. I’ll have my analysis of those rankings later in the week.

Disclaimer: I am the consulting methodologist for the 2012 Washington Monthly college rankings. This post reflects only my thoughts and was not subject to review by any other individual or organization.

Estimating College Effectiveness: The 2012 Washington Monthly College Rankings

One of my primary research interests is examining whether and how we can estimate the extent to which colleges are doing a good job educating their students. Estimating institutional effectiveness in K-12 education is difficult, even with the ready availability of standardized tests across multiple grades. But in higher education, we do not even have these imperfect measures across a wide range of institutions.

Students, their families, and policymakers have been left to grasp at straws as they try to make important financial and educational decisions. The most visible set of college performance measures is the wide array of college rankings, with the ubiquitous U.S. News and World Report rankings being the most influential in the market. However, these rankings serve much more as a measure of a college’s resources and selectivity than of whether students actually benefit.

I have been working with Doug Harris, who is now an associate professor of economics at Tulane University, on developing a model for estimating a college’s value-added to students. Unfortunately, due to data limitations, we can only estimate value added with respect to graduation rates—the only outcome available for nearly all colleges. To do this, we estimate a college’s predicted graduation rate given certain student and fixed institutional characteristics and then compare that to the actual graduation rate.

This measure of college value-added gives us an idea of whether a college graduates as many students as would be expected, but this does not address whether colleges are educating students in an efficient manner. To estimate a college’s cost-effectiveness for students and their families, we divide the above value-added measure by the net price of attendance (defined as the cost of attendance minus all grant and scholarship aid). Colleges can then be ranked on this cost-effectiveness measure, which yields much different results than the U.S. News rankings.
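To make the arithmetic concrete, here is a small sketch with hypothetical figures: value added is the actual graduation rate minus the predicted one, and dividing by the net price of attendance yields the cost-effectiveness measure (expressed here per $1,000 of net price purely for readability).

```python
# Sketch of the cost-adjusted value-added calculation described above, using
# hypothetical figures. Value added = actual minus predicted graduation rate;
# dividing by net price gives a rough cost-effectiveness measure.

colleges = {
    # name: (actual grad rate %, predicted grad rate %, net price in dollars)
    "College A": (72.0, 64.0, 11000),
    "College B": (90.0, 88.0, 27000),
    "College C": (55.0, 50.0, 8000),
}

for name, (actual, predicted, net_price) in colleges.items():
    value_added = actual - predicted
    # Expressed per $1,000 of net price purely for readability
    cost_effectiveness = value_added / (net_price / 1000)
    print(f"{name}: value added = {value_added:+.1f} points, "
          f"cost-adjusted = {cost_effectiveness:.2f} points per $1,000")
```

Ranking colleges on this cost-adjusted figure, rather than on raw value added, is what separates colleges that merely beat expectations from those that do so at a price families can afford.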

This research has attracted attention in policy circles, which has been a great experience for me. I thought that at least some people would be interested in this way of looking at college performance, but I was quite flattered to get an e-mail from Washington Monthly magazine asking if I would be their methodologist for the 2012 college rankings. I have always appreciated their rankings, as the focus is much more on estimating educational effectiveness and public service than on the quality of the incoming student body.

The 2012 college rankings are now available online and include a somewhat simplified version of our cost-adjusted value-added measure. It makes up one-half of the social mobility score, which in turn is one-third of the overall rankings (research and service are the other two components). The college guide also includes an article I wrote with Rachel Fishman of the New America Foundation on how some colleges provide students with good “bang for the buck.”

I am excited to see some of my research put into practice, and hopefully at least some people will find it useful. I welcome any questions or comments about the rankings and methodology!