Not-so-Free College and the Disappointment Effect

One of the most appealing aspects of tuition-free higher education proposals is that they convey a simple message about higher education affordability. Although students will still need to come up with a substantial amount of money to cover textbooks, fees, and living expenses, one key expense will be covered if students hold up their end of the bargain. That is why the results of existing private-sector college promise programs are generally encouraging, as shown in this policy brief that I wrote for my friends at the Midwestern Higher Education Compact.

But free college programs in the public sector often come with a key limitation—the amount of money that the state has to fund the program in a given year. Tennessee largely avoided this concern by endowing the Tennessee Promise program through lottery funds, and the program appears to be in good financial shape at this point. However, two other states are finding that available funds are insufficient to meet program demand.

  • Oregon will provide only $40 million of the $48 million needed to fund its nearly tuition-free community college program (which requires a $50 student copay). As a result, the state will eliminate grants to the 15% to 20% of students with the highest expected family contributions (a very rough proxy for ability to pay).
  • New York received 75,000 completed applications for its tuition-free public college program, yet expects to award only 23,000 scholarships. Some of this drop-off may be due to students attending other colleges, but other students are probably still counting on the money.

In both states, a number of students who expected to get state grant aid will not receive any money. While rationing of state aid dollars is nothing new (many states’ aid programs are first-come, first-served), advertising tuition-free college and then telling students close to the start of the academic year that they won’t receive grant aid may lead some of them not to attend college at all, or to perform worse academically if they do attend. There is a sizable body of literature documenting the “disappointment effect” in other areas, but relatively little in financial aid. There is evidence that losing grant aid can hurt continuing students, yet that research does not separate the effect of simply not having the money from the potential disappointment effect.

The Oregon and New York experiences provide a great opportunity to test the disappointment effect. Both states could compare students who applied for but did not receive the grant in 2017-18 to similar students in the years before the free college programs began. This would allow for a reasonably clean test of whether the disappointment effect had any implications for college choice and eventual persistence.
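If either state made applicant-level data available, a minimal sketch of that comparison might look like the following. The file and variable names here are hypothetical placeholders, not actual state data.

```python
# A minimal sketch (hypothetical file and variable names) of the comparison
# described above: applicants who were denied the grant in 2017-18 versus
# observably similar applicants from cohorts before the free college program.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("applicants.csv")  # one row per applicant, all cohorts

# denied_2017 = 1 for 2017-18 applicants who received no grant, 0 for
# comparable pre-program applicants; hs_gpa and efc stand in for whatever
# covariates are used to define "similar students."
model = smf.logit("enrolled ~ denied_2017 + hs_gpa + efc", data=df).fit()
print(model.summary())
```

The same specification could be rerun with a persistence indicator as the outcome to capture effects beyond initial enrollment.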

Understanding Financial Responsibility Scores for Private Colleges

This post originally appeared on the Brookings Institution’s Brown Center Chalkboard blog.

The stories of financially struggling private colleges, both nonprofit and for-profit, have been told in many news articles. Small private nonprofit colleges are increasing tuition discount rates in an effort to attract a shrinking pool of traditional-age students in many parts of the country, while credit rating agency Moody’s expects the number of private nonprofit college closings to triple to about 15 per year by next year. Meanwhile, the for-profit sector has seen large enrollment decreases in the last few years amid the collapse of Corinthian Colleges and the University of Phoenix’s 50 percent drop in enrollment since 2010.

In an effort to identify financially struggling colleges and protect federal investments in student financial aid, Congress requires the U.S. Department of Education to calculate financial responsibility composite scores that are designed to measure a college’s overall financial strength based on metrics of liquidity, ability to borrow additional funds if needed, and net income. Private nonprofit and for-profit colleges are required to submit financial data each year, while public colleges are excluded under the assumption that state funding makes them unlikely to become insolvent.

Though not commonly known, these financial responsibility scores have important consequences for private colleges. Scores can range from -1.0 to 3.0. Colleges scoring at or above 1.5 are considered financially responsible and are allowed to access federal funds. Colleges scoring between 1.0 and 1.4 can access financial aid dollars, but are subject to additional Department of Education oversight of their financial aid programs. Finally, colleges scoring 0.9 or below are not considered financially responsible; to access funds, they must submit a letter of credit worth at least 10 percent of the federal student aid they received the previous year and accept additional oversight. The Department of Education can also determine that a college does not meet “initial eligibility requirements due to a failing composite score” and assign it a failing grade without releasing a score to the public. In this case, a college is immediately subject to heightened cash monitoring rules that delay the federal government’s disbursement of financial aid dollars to colleges. However, private nonprofit colleges dispute the validity of the formula, claiming it is inaccurate and does not meet current accounting standards.
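To make those thresholds concrete, here is a small sketch (my own illustration, not Department of Education code) of how a composite score maps onto the three categories described above:

```python
def aid_eligibility(score: float) -> str:
    """Map a composite score (-1.0 to 3.0) to the categories described above."""
    if score >= 1.5:
        return "passes: considered financially responsible"
    elif score >= 1.0:
        return "zone: aid access with additional oversight"
    else:
        return "fails: letter of credit and heightened oversight required"

for s in (2.4, 1.2, 0.3):
    print(f"{s}: {aid_eligibility(s)}")
```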

I first examined the distribution of financial responsibility scores among the 3,435 institutions (1,683 private nonprofit and 1,752 for-profit) with scores in the 2013-14 academic year, using data released to the public earlier this month. As illustrated in the figure below, only a small percentage of colleges that were assigned a score did not pass the test. In 2013-14, 203 colleges (73 nonprofit and 130 for-profit) received a failing score and an additional 136 (51 nonprofit and 85 for-profit) were in the oversight zone. Most of the colleges with failing scores are obscure institutions, such as the Champion Institute of Cosmetology in California and The Chicago School for Piano Technology. However, a few of these institutions, such as the for-profit Charleston School of Law, ITT Technical Institute, and Vatterott College as well as the nonprofit Erskine College in South Carolina, Everglades University in Florida (a former for-profit), and Finlandia University in Michigan, are at least somewhat better-known.

[Figure: Distribution of financial responsibility scores, 2013-14]

I then examined trends in financial responsibility scores since they were first released to the public for the 2006-07 academic year. The first finding to note in the table below is that the number of nonprofit colleges that did not pass the financial responsibility test nearly doubled between 2007-08 and 2008-09, to more than one in six institutions. Much of this increase appears to be due to the collapse in endowment values, as even a decline in a rather small endowment would affect a college’s score by reducing net income. During the same period, there was only a slight increase in the number of for-profit colleges facing additional oversight.

[Table: Colleges failing the financial responsibility test or facing additional oversight, by sector and year, 2006-07 to 2013-14]

The second interesting trend is that, in spite of concerns about the viability of small colleges with high tuition prices since the Great Recession, the number of colleges that either received a failing score or faced additional oversight has slowly declined since 2010-11. Only 12 percent of for-profits and 7 percent of nonprofits fell into one of those two categories in 2013-14, reflecting a general stabilizing trend for struggling private institutions. Although there are certainly valid concerns about how these scores are calculated, most colleges with failing scores and some others facing additional oversight are likely on shaky financial footing. Many of the colleges with failing scores—particularly those failing for several years in a row—will be forced to consider merging with another institution or closing their doors entirely in the near future. Other colleges closer to the passing threshold may face tight budgets for years to come, but their short-term viability is generally secure.

It is unlikely that a substantial number of students and families know that financial responsibility scores even exist, let alone use them in their college choice decisions. However, these scores do provide some insight into the financial stability of a college and could potentially be included in the new College Scorecard tool. Students who are considering attending a college that repeatedly receives a failing score should ask tough questions of college officials about whether the institution will be financially solvent several years from now. Policymakers should use these scores as a way to identify financially struggling institutions and provide support for those with solid academic outcomes, while also asking tough questions about the viability of cash-strapped colleges that academically underperform similar institutions.

Comments on the New College Scorecard Data

The Obama Administration’s two-year effort to develop a federal college ratings system appeared to have hit a dead-end in June, with the announcement that no ratings would actually be released before the start of the 2015-2016 academic year. At that point in time, Department of Education officials promised to instead focus on creating a consumer-friendly website with new data elements that had never before been released to the public. I was skeptical, as there were significant political hurdles to overcome before releasing data on employment rates, the percentage of students paying down their federal loans, and graduation rates for low-income students.

But things changed this week. First, a great new paper out of the Brookings Institution by Adam Looney and Constantine Yannelis showed trends in student loan defaults over time—going well beyond the typical three-year cohort default rate measure. They also included earnings data, something that was not previously available. But, although they made summary tables of results available to the public, those tables only included a small number of individual institutions. That is great for researchers, but not so great for students choosing among colleges.

The big bombshell dropped this morning. In an extremely rare Saturday morning release (something that frustrates journalists and the higher education community to no end), the Department of Education released a massive trove of data (fully downloadable!) underlying the new College Scorecard. The consumer-facing Scorecard is fairly simple (see below for what Seton Hall’s entry looks like), and I look forward to hearing about whether students and their families use this new version more than previous ones. I also recommend ProPublica’s great new data tool for low-income students.

[Image: Seton Hall University’s College Scorecard entry]

But my focus today is on the new data. Some of the key new data elements include the following:

  • Transfer rates: The percentage of students who transfer from a two-year to a four-year college. This helps community colleges, given their mission of transfer, but still puts colleges at a disadvantage if they serve a more transient student body.
  • Earnings: The distribution of earnings 10 years after starting college and the percentage of students earning more than those with only a high school diploma. This comes from federal tax return data and is a huge step forward. However, given very reasonable concerns about a focus on earnings hurting colleges with public service missions, there is also a metric for the percentage of students making more than $25,000 per year. Plenty of people will focus on presenting earnings data, so I’ll leave the graphics to others. (This is a big improvement over the admirable work done by PayScale in this area.)
  • Student loan repayment: The percentage of students (both completers and non-completers) who are able to pay down some principal on loans within a certain period of time. Seven-year loan repayment data are available, as illustrated here:

[Figure: Seven-year student loan repayment rates]

In the master data file, many of these outcomes are available by family income, first-generation status, and Pell receipt. First-generation status is a new data element being made available to the public; although the question is on the FAFSA, it has never before been available to researchers. For those who are curious, here is the breakdown of the percentage of first-generation students (typically defined as students whose parents don’t have a bachelor’s degree) by institutional type:

[Figure: Percentage of first-generation students by institutional type]
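For those who want to dig into the downloadable file itself, here is a minimal sketch of pulling a few of the new elements. The file and column names (INSTNM, CONTROL, MD_EARN_WNE_P10, RPY_7YR_RT, PAR_ED_PCT_1STGEN) reflect my reading of the data dictionary and should be checked against it before use.

```python
# A minimal sketch of loading a few new College Scorecard elements.
# Column and file names are my best reading of the data dictionary;
# verify them against the documentation before relying on this.
import pandas as pd

cols = ["INSTNM", "CONTROL", "MD_EARN_WNE_P10", "RPY_7YR_RT", "PAR_ED_PCT_1STGEN"]
df = pd.read_csv("Most-Recent-Cohorts-All-Data-Elements.csv",
                 usecols=cols, na_values=["PrivacySuppressed"])

# Convert the outcome columns to numeric and summarize the first-generation
# share by sector (1 = public, 2 = private nonprofit, 3 = for-profit).
for c in cols[2:]:
    df[c] = pd.to_numeric(df[c], errors="coerce")
print(df.groupby("CONTROL")["PAR_ED_PCT_1STGEN"].mean())
```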

There are a lot of data elements to explore here, and I expect lots of great work from the higher education research community in the upcoming months and years using these data. In the short term, it will be fascinating to watch colleges and politicians respond to this game-changing release of outcome data on students receiving federal financial aid.

Comments on the Brookings Value-Added Rankings

Jonathan Rothwell and Siddharth Kulkarni of the Metropolitan Policy Program at Brookings made a big splash today with the release of a set of college “value-added” rankings (link to full study and Inside Higher Ed summary) focused primarily on labor market outcomes. Value-added measures, which adjust for student and institutional characteristics to get a better handle on a college’s contribution to student outcomes, are becoming increasingly common in higher education. (I’ve written about college value-added in the past, which led to me taking the reins as Washington Monthly’s rankings methodologist.) Pretty much all of the major college rankings at this point include at least one value-added component, and this set of rankings actually shares some similarities with Money’s rankings. And the Brookings report does mention correlations with the U.S. News, Money, and Forbes rankings—but not Washington Monthly. (Sigh.)

The Brookings report uses five different outcome measures, which are then adjusted for available student characteristics and institutional characteristics such as the sector of the college and where it is located:

(1) Mid-career salary of alumni: This measures the median salary of full-time workers with a degree from a particular college and at least ten years of experience. The data come from PayScale and suffer from being self-reported by a subset of students, but they likely still have value for two reasons. First, the authors do a careful job of trying to assess any biases in the data—for example, by correlating PayScale-reported earnings with data from other sources. Second, even if there is an upward bias in the data, it should be similar across institutions. As I’ve written about before, I trust the order of colleges in PayScale data more than I trust the dollar values—which are likely inflated.

But there are still a few concerns with this measure. Some of the concerns, such as limiting the sample to graduates (excluding dropouts) and dropping students with an advanced degree, are fairly well-known. And the focus on salary definitely rewards colleges with large engineering programs, as evidenced by those colleges’ dominance of the value-added list (while art schools look horrible). However, given that ACT and SAT math scores are used as the academic preparation measure, the bias favoring engineering schools may actually be smaller than if verbal/reading scores were also used. I would also have estimated models separately for two-year and four-year colleges instead of putting them in the same model with a dummy variable for sector, but that’s just my preference.

(2) Student loan repayment rate: This is the complement of the average three-year student loan cohort default rate over the last three years (so a 10% default rate is framed as a 90% repayment rate). This measure is pretty straightforward, although I do have to question the value-added estimates for colleges with very high repayment rates. Value-added estimates are difficult to conceptualize for colleges with a high probability of success, as there is typically little room for improvement. But here, the highest predicted repayment rate is 96.8% for four-year colleges, while several dozen colleges have actual repayment rates in excess of 96.8%. It appears that linear regressions were used, while some type of robust generalized linear model should also have been considered; I sketch one such alternative after the three measures below. (In the Washington Monthly rankings, I use simple linear regressions for graduation rate performance, but very few colleges are so close to the ceiling of 100%.)

(3) Occupational earnings potential: This is a pretty nifty measure that uses LinkedIn data to get a handle on which occupations a college’s graduates pursue during their careers. This mix of occupations is then tied to Bureau of Labor Statistics data to estimate the average salary of a college’s graduates, with advanced degree holders also included. The value-added measure attempts to control for student and institutional characteristics, although it doesn’t control for the preferences of students toward certain majors when entering college.

I’m excited by the potential to use LinkedIn data (warts and all) to look at students’ eventual outcomes. However, it should be noted that LinkedIn is more heavily used in some fields where that might be expected (business and engineering) and in others where it might not be (communication and cultural studies). The authors adjust for these differences in representation and are very transparent about it in the appendix. The appendix is definitely on the technical side, but I welcome their transparency.
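To make my two suggestions above concrete (estimating models separately for two-year and four-year colleges, and using a model that respects the 100% ceiling on repayment rates), here is a minimal sketch. The column names are hypothetical placeholders, not the Brookings variables or their actual specification.

```python
# A stylized sketch of the value-added idea (actual outcome minus the outcome
# predicted from student and institutional characteristics), estimated
# separately by level and with a binomial GLM so predictions stay below 100%.
# Column names are hypothetical, not the Brookings data.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("colleges.csv")  # hypothetical institution-level file

pieces = []
for level, group in df.groupby("level"):  # "two-year" vs. "four-year"
    # Fractional-response model: repayment_rate is a proportion in [0, 1].
    fit = smf.glm("repayment_rate ~ sat_math + pct_pell + C(sector)",
                  data=group, family=sm.families.Binomial()).fit()
    group = group.assign(predicted=fit.predict(group))  # bounded in (0, 1)
    group = group.assign(value_added=group["repayment_rate"] - group["predicted"])
    pieces.append(group)

value_added = pd.concat(pieces).sort_values("value_added", ascending=False)
print(value_added[["name", "level", "repayment_rate", "predicted", "value_added"]].head())
```

The logit link keeps every predicted repayment rate below 100 percent, which matters for the several dozen colleges already bumping against that ceiling.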

They also report five different quality measures which are not included in the value-added estimate: ‘curriculum value’ (the value of the degrees offered by the college), the value of skills alumni list on LinkedIn, the percentage of graduates deemed STEM-ready, completion rates within 200% of normal time (8 years for a 4-year college, or 4 years for a 2-year college), and average institutional grant aid. These measures are not input-adjusted, but generally reflect what people think of as quality. However, average institutional grant aid is a lousy measure to include as it rewards colleges with a high-tuition, high-aid model over colleges with a low-tuition, low-aid model—even if students pay the exact same price.

In conclusion, the Brookings report tells readers some things we already know (engineering programs are where to go to make money), but provides a good—albeit partial—look at outcomes across an unusually broad swath of American higher education. I would advise readers to focus on comparing colleges with similar missions and goals, given the importance of occupation in determining earnings. I would also be more hesitant to use the metrics for very small colleges, where all of these measures can be influenced by a relatively small number of people. But the transparency of the methodology and use of new data sources make these value-added rankings a valuable contribution to the public discourse.

How to Calculate–and Not Calculate–Net Prices

Colleges’ net prices, which the U.S. Department of Education defines as the total cost of attendance (tuition and fees, room and board, books and supplies, and other living expenses) less all grant and scholarship aid, have received a lot of attention in the last few years. All colleges are required by the Higher Education Opportunity Act to have a net price calculator on their website, where students can get an estimate of their net price by inputting financial and academic information. Net prices are also used for accountability purposes, including in the Washington Monthly college rankings that I compile, and are likely to be included in the Obama Administration’s Postsecondary Institution Ratings System (PIRS) that could be released in the next several weeks.

Two recently released reports have looked at the net price of attendance, but only one of them is useful to either researchers or families considering colleges. A new Brookings working paper by Phillip Levine makes a good contribution to the net price discussion by making the case for using the median net price (instead of the average) for both consumer information and accountability purposes. He uses data from Wellesley College’s net price calculator to show that the median low-income student faces a net price well below the listed average net price. The average is higher than the median at Wellesley because a small number of low-income students pay a high net price while a much larger number pay a relatively low price; the outlying values for that small group pull up the average.
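A quick toy example (made-up numbers, not Wellesley’s data) shows the mechanism:

```python
# An illustrative sketch of why the average net price can sit well above the
# median: a few high-net-price students pull the mean up while the typical
# student pays much less. These numbers are invented for illustration only.
import numpy as np

net_prices = np.array([3000, 4500, 6000, 7500, 8000, 9000, 30000, 35000])
print("mean:  ", net_prices.mean())      # 12875.0
print("median:", np.median(net_prices))  # 7750.0
```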

I used data from the 2011-12 National Postsecondary Student Aid Study, a nationally-representative sample of undergraduate students, to compare the average and median net prices for dependent and independent students by family income quartile. The results are below:

Comparing average and median net prices by family income quartile.

                                       Average   10th %ile   25th %ile    Median   75th %ile   90th %ile
Dependent students: parents’ income ($1,000s)
  <30                                   10,299       2,500       4,392     8,113      13,688      20,734
  30-64                                 13,130       3,699       6,328    11,077      17,708      24,750
  65-105                                16,404       4,383       8,178    14,419      21,839      30,174
  106+                                  20,388       4,753       9,860    18,420      27,122      39,656
Independent students: student and spouse’s income ($1,000s)
  <7                                    10,972       3,238       5,000     8,889      14,385      22,219
  7-19                                  11,114       3,475       5,252     9,068      14,721      22,320
  20-41                                 10,823       3,426       4,713     8,744      14,362      21,996
  42+                                   10,193       3,196       4,475     7,931      13,557      20,795

SOURCE: National Postsecondary Student Aid Study 2011-12.


Across all family income quartiles for both dependent and independent students, the average net price is higher than the median net price. About 60% of students pay a net price at or below the average net price reported to IPEDS, suggesting that switching to reporting the median net price might improve the quality of available information.

The second report was the annual Trends in College Pricing report, published by the College Board. The report concluded that net prices are modest and have actually decreased in several years over the last decade. However, its definition of “net price” suffers from two fatal flaws:

(1) “Net price” doesn’t include all cost of attendance components. The report publicizes a “net tuition” measure and a “net tuition, fees, room and board” measure, but the cost of attendance also includes books and supplies as well as other living expenses such as transportation, personal care, and a small entertainment allowance. (For more on living costs, see this new working paper I have out with Braden Hosch of Stony Brook and Sara Goldrick-Rab of Wisconsin.) Excluding these components understates what students and their families should actually expect to pay for college, although living costs can vary across individuals.

(2) Tax credits are included with grant aid in their “net price” definition. Students and their families do not receive the tax credit until they file their taxes the following year, meaning that costs incurred in August may be partially reimbursed the following spring. That does little to help families pay for college upfront, when the money is actually needed. Additionally, not all families that qualify for education tax credits actually claim them. In this New America Foundation blog post, Stephen Burd notes that about 25% of families don’t claim tax credits, and the take-up rate is likely lower among lower-income families.
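For clarity, here is a minimal sketch of the difference between the full federal net price definition and the narrower measure the College Board reports. The numbers in the example are invented for illustration.

```python
# The federal definition: full cost of attendance less all grant and
# scholarship aid. The second function mirrors the narrower "net tuition,
# fees, room and board" measure; neither subtracts tax credits, which arrive
# only after the bills are paid. Example figures are invented.
def net_price(tuition_fees, room_board, books_supplies, other_living, grants):
    """Net price under the Department of Education definition."""
    return tuition_fees + room_board + books_supplies + other_living - grants

def net_tfrb(tuition_fees, room_board, grants):
    """Narrower measure: excludes books, supplies, and other living expenses."""
    return tuition_fees + room_board - grants

print(net_price(9000, 10000, 1200, 3800, 8000))  # 16000
print(net_tfrb(9000, 10000, 8000))               # 11000
```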

Sadly, the College Board report has gotten a lot of attention in spite of its inaccurate net price definitions. I would like to see a robust discussion about the important Brookings paper and how we can work to improve net price data—with the correct definition used.

Do Student Loans Result in Tuition Increases? Why It’s So Hard to Tell

One of the longstanding questions in higher education finance is whether access to federal financial aid dollars is one of the factors behind tuition increases. This idea was famously stated by Education Secretary William Bennett in a 1987 New York Times op-ed:

“If anything, increases in financial aid in recent years have enabled colleges and universities blithely to raise their tuitions, confident that Federal loan subsidies would help cushion the increase. In 1978, subsidies became available to a greatly expanded number of students. In 1980, college tuitions began rising year after year at a rate that exceeded inflation. Federal student aid policies do not cause college price inflation, but there is little doubt that they help make it possible.”

Since Secretary Bennett made his statement (now called the Bennett Hypothesis), more students are receiving federal financial aid. In 1987-1988, the average full-time equivalent student received $2,414 in federal loans, which rose to $6,374 in 2012-2013. The federal government has also increased spending on Pell Grants during this period, although the purchasing power of the grant has eroded due to large increases in tuition.

The Bennett Hypothesis continues to be popular in certain circles, as illustrated by comments by Dallas Mavericks owner and technology magnate Mark Cuban. In 2012, he wrote:

“The point of the numbers is that getting a student loan is easy. Too easy.

You know who knows that the money is easy better than anyone ? The schools that are taking that student loan money in tuition. Which is exactly why they have no problems raising costs for tuition each and every year.

Why wouldn’t they act in the same manner as real estate agents acted during the housing bubble? Raise prices and easy money will be there to pay your price. Good business, right ? Until its not.”

Recently, Cuban called for limiting student loans to $10,000 per year, as reported by Inc.:

“If Mark Cuban is running the economy, I’d go and say, ‘Sallie Mae, the maximum amount that you’re allowed to guarantee for any student in a year is $10,000, period, end of story.’  

We can talk about Republican or Democratic approaches to the economy but until you fix the student loan bubble–and that’s where the real bubble is–we don’t have a chance. All this other stuff is shuffling deck chairs on the Titanic.”

Cuban’s plan wouldn’t actually affect the vast majority of undergraduate students, as loan limits are often below $10,000 per year. Dependent students are limited to no more than $7,500 per year in subsidized and unsubsidized loans and independent students are capped at $12,500 per year. But this would affect graduate students, who can borrow $20,500 per year in unsubsidized loans, as well as students and their families taking out PLUS loans, which are only capped by the cost of attendance.
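A quick sketch using the annual limits cited above shows which types of borrowers a $10,000 cap would actually bind on. The cost of attendance figure standing in for the PLUS limit is invented.

```python
# Compare current annual federal loan limits (as cited above) to a proposed
# $10,000-per-year cap. The PLUS entry uses an invented cost of attendance,
# since PLUS loans are capped only by COA.
annual_limits = {
    "dependent undergraduate": 7500,
    "independent undergraduate": 12500,
    "graduate (unsubsidized)": 20500,
    "parent/grad PLUS (up to COA)": 45000,  # hypothetical cost of attendance
}

cap = 10000
for borrower, limit in annual_limits.items():
    effect = "would be capped" if limit > cap else "unaffected"
    print(f"{borrower}: current limit ${limit:,} -> {effect} by a ${cap:,} cap")
```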

Other commentators do not believe in the Bennett Hypothesis. An example of this is from David Warren, president of the National Association of Independent Colleges and Universities (the professional association for private nonprofit colleges). In 2012, he wrote that “the hypothesis is nothing more than an urban legend,” citing federal studies that did not find a relationship.

The research on the Bennett Hypothesis can best be classified as mixed, with some studies finding a modest causal relationship between federal financial aid and tuition increases and others finding no relationship. (See this Wonkblog piece for a short overview or Donald Heller’s monograph for a more technical treatment.) But for data reasons, the studies of the Bennett Hypothesis either focus on all financial aid lumped together (which is broader than the original hypothesis) or just Pell Grants.

So do student loans result in tuition increases? There is certainly a correlation between federal financial aid availability and college tuition, but the first rule of empirical research is that correlation does not imply causation. And establishing causality is extremely difficult given the near-universal nature of student loans and the lack of change in program rules over time. It is essential to have some change in the program in order to identify effects separate from other types of financial aid.

In an ideal world (from a researcher’s perspective), some colleges would be randomly assigned to have lower loan limits than others and then longer-term trends in tuition could be examined. That, of course, is politically difficult to do. Another methodological possibility would be to look at the colleges that do not participate in federal student loan programs, which are concentrated among community colleges in several states. But the low tuition charges and low borrowing rates at community colleges make it difficult to even postulate that student loans could potentially drive tuition increases at community colleges.

A potential natural experiment (in which a change is introduced to a system unexpectedly) could have been the short-lived credit tightening of parent PLUS loans, which hit some historically black colleges hard. Students who could no longer borrow the full cost of attendance had to scramble to find other funding, which put pressure on colleges to find additional money for students. But the credit changes were partially reversed before colleges had to make long-term decisions about pricing.

I’m not too concerned about student loans driving tuition increases at the vast majority of institutions. I think the Bennett Hypothesis is likely strongest (meaning a modest relationship between loans and tuition) at the most selective undergraduate institutions and most graduate programs, as loan amounts can be substantial and access to credit is typically good. But, without a way to identify variations in loan availability across similar institutions, that will remain a postulation.

[NOTE (7/7/15): Since this piece was initially posted, more research has come out on the topic. See this updated blog post for my most up-to-date take.]

Should College Admissions be Randomized?

Sixty-nine percent of students who apply to Stanford University with perfect SAT scores are rejected. Let that sink in for a minute…getting a perfect SAT score is far from easy. In 2013, the College Board reported that only 494 students out of over 1.6 million test-takers got a 2400. Stanford enrolled roughly 1,700 students in its first-year class in 2012, so clearly not all of them had perfect SAT scores. Indeed, according to federal IPEDS data, the 25th and 75th percentiles of SAT scores for the fall 2012 incoming class were 2080 and 2350. But all of those scores are pretty darned high.

It is abundantly clear that elite institutions like Stanford can pick and choose from students with impeccable academic qualifications. The piece from the Stanford alumni magazine that noted the 69% rejection rate for perfect SAT scorers also noted the difficulty of shaping a freshman class from such an embarrassment of riches. All of the students Stanford considers are likely to graduate from that institution—or from any other college.

Given that admissions decisions seem to be somewhat random anyway, some have suggested that elite colleges actually randomize their admissions processes by selecting students at random from among those who meet certain criteria. While the current approach provides certain benefits to colleges (most notably allowing them to shape certain types of diversity and to guarantee spots to children of wealthy alumni), randomizing admissions could drastically cut the cost of running an admissions office and reduce the ability of students and their families to complain about the outcome. (“Sorry, folks…you called heads and it came up tails.”)
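Mechanically, the lottery is trivial to run. Here is a minimal sketch; the qualification threshold and class size are arbitrary placeholders, not anyone’s actual admissions criteria.

```python
# A minimal sketch of lottery admissions: admit at random from the pool of
# applicants who clear a qualification bar. Threshold and class size are
# arbitrary placeholders.
import random

def lottery_admit(applicants, is_qualified, class_size, seed=2015):
    """Return a randomly selected class from the qualified applicant pool."""
    qualified = [a for a in applicants if is_qualified(a)]
    rng = random.Random(seed)
    return rng.sample(qualified, min(class_size, len(qualified)))

# Example: qualify on a combined SAT score, then draw the class at random.
applicants = [{"id": i, "sat": 1400 + (i % 1000)} for i in range(10000)]
admitted = lottery_admit(applicants, lambda a: a["sat"] >= 2080, class_size=1700)
print(len(admitted))  # 1700
```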

As a researcher, I would love to see a college commit to randomizing most or all of its admissions process over a period of several years. The outcomes of the randomly accepted students should be compared both to those of students who were qualified but randomly rejected and to those of previous classes of students. My sense is that the randomly accepted students would be roughly as successful as those admitted under regular procedures in prior years.

Would any colleges like to volunteer a few incoming classes?

Comparing the US News and Washington Monthly Rankings

In yesterday’s post, I discussed the newly released 2014 college rankings from U.S. News & World Report and how they changed from last year. In spite of some changes in methodology that were billed as “significant,” the R-squared value when comparing this year’s rankings with last year’s rankings among ranked national universities and liberal arts colleges was about 0.98. That means that 98% of the variation in this year’s rankings can be explained by last year’s rankings—a nearly perfect prediction.
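For readers who want to replicate this kind of comparison, the calculation is a one-liner once two rank lists are lined up. The file and column names below are hypothetical.

```python
# A minimal sketch of comparing two sets of rankings: regress one on the other
# and report R-squared. File and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rankings.csv")  # one row per college, one column per ranking
r2 = smf.ols("rank_2014 ~ rank_2013", data=df).fit().rsquared
print(f"R-squared: {r2:.2f}")  # about 0.98 for the U.S. News year-over-year comparison
```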

In today’s post, I compare the results of the U.S. News rankings to those from the Washington Monthly rankings for national universities and liberal arts colleges ranked by both sources. The Washington Monthly rankings (which I compile as the consulting methodologist) are based on three criteria: social mobility, research, and service, none of which are particular goals of the U.S. News rankings. Yet it could still be the case that colleges that recruit high-quality students, have lots of resources, and have a great reputation (the main factors in the U.S. News rankings) also do a good job recruiting students from low-income families, producing outstanding research, and graduating servant-leaders.

The results of my comparisons show large differences between the two sets of rankings, particularly at liberal arts colleges. The R-squared value at national universities is 0.34, but only 0.17 at liberal arts colleges, as shown below:

[Figure: U.S. News vs. Washington Monthly rankings, national universities]

[Figure: U.S. News vs. Washington Monthly rankings, liberal arts colleges]

It is worth highlighting some of the colleges that are high on both rankings. Harvard, Stanford, Swarthmore, Pomona, and Carleton all rank in the top ten in both magazines, showing that it is possible to be both highly selective and serve the public in an admirable way. (Of course, we should expect that to be the case given the size of their endowments and their favorable tax treatment!) However, Middlebury and Claremont McKenna check in around 100th in the Washington Monthly rankings in spite of a top-ten U.S. News ranking. These well-endowed institutions don’t seem to have the same commitment to the public good as some of their highly selective peers.

On the other hand, some colleges ranked lower by U.S. News do well in the Washington Monthly rankings. Examples include the University of California-Riverside (2nd in WM, 112th in U.S. News), Berea College (3rd in WM, 76th in U.S. News), and the New College of Florida (8th in WM, 89th in U.S. News). If nothing else, the high ranks in the Washington Monthly rankings give these institutions a chance to toot their own horn and highlight their successes.

I fully realize that only a small percentage of prospective students will be interested in the Washington Monthly rankings compared to those from U.S. News. But it is worth highlighting the differences across college rankings so students and policymakers can decide which institutions are better for them given their own needs and preferences.

College Reputation Rankings Go Global

College rankings are not a phenomenon limited to the United States. Shanghai Jiao Tong University has ranked research universities for the past decade, and the well-known Times Higher Education (THE) rankings have been around for several years. While the Shanghai rankings tend to focus on metrics such as citations and research funding, THE has compiled a reputational ranking of universities around the world. Reputational measures are already a concern in U.S.-only rankings, but extending them to a global scale makes little sense to me.

Thomson Reuters (the group behind the THE rankings) makes a great fuss about the sound methodology of the reputational rankings, which, to its credit, it acknowledges are a subjective measure. It collected 16,639 responses from academics around the world, with some demographic information available here. But it fails to provide any information about the sampling frame, a devastating omission. The researchers behind the rankings do note that the initial sample was constructed to be broadly representative of global academics, but we know nothing about the response rate or whether the final sample was representative. In my mind, that omission disqualifies the rankings from serious use. But I’ll push on and analyze the content of the reputational rankings anyway.

The reputational rankings are a combination of separate ratings for teaching and research quality. I really don’t have serious concerns about the research component of the ranking, as the survey asks about research quality of given institutions within the academic’s discipline. Researchers who stay on top of their field should be able to reasonably identify universities with top research departments. I have much less confidence in the teaching portion of the rankings, as someone needs to observe classes in a given department to have any idea of teaching effectiveness. Yet I would be surprised if teaching and research evaluations were not strongly correlated.

The University of Wisconsin-Madison ranks 30th on the global reputation scale, with a slightly higher score for research than for teaching. (And according to the map, the university has been relocated to the greater Marshfield area.) That has not stopped Kris Olds, a UW-Madison faculty member, from leveling a devastating critique of the idea of global rankings—or the UW-Madison press office from putting out a favorable release on the news.

I have mixed emotions about this particular set of rankings; the research measure probably captures research productivity well, but the teaching measure is likely lousy. However, without more information about the response rate to the THE survey, I cannot view these rankings as valid.

Another Random List of “Best Value” Colleges

Getting a good value for attending college is on the mind of most prospective students and their families, and as a result, numerous publishers of college rankings have come out with lists of “best value” colleges. I have highlighted the best value college lists from Kiplinger’s and U.S. News in previous posts, as well as discussing my work incorporating a cost component into Washington Monthly’s rankings. Today’s entry in this series comes from the Princeton Review, a company better known for test preparation classes and private counseling but one that is also in the rankings business.

The Princeton Review released its list of “Best Value Colleges” today in conjunction with USA Today, and the list is heavily populated with a “who’s who” of selective, wealthy colleges and universities. Among the top ten private colleges, several are wealthy enough to waive all tuition and fees for their few students from modest financial backgrounds. The top ten public institutions do tend to attract a fair number of out-of-state and full-pay students, although there is one surprise name on the list (North Carolina State University—well done!). More data on the top 150 colleges can be found here.

My main complaint with this ranking system, as with other best value college lists, is the methodology. The Princeton Review begins by narrowing its sample from about 2,000 colleges to 650—what it calls “the nation’s academically best undergraduate institutions.” This effectively limits the utility of the rankings to students who score a 25 or higher on the ACT, or even higher if they wish to qualify for merit-based grant aid. Student selectivity is further rewarded in the academic rating, even though selectivity is no guarantee of future academic performance. Much of the academic and financial aid ratings comes from student surveys, which are fraught with selection bias: many colleges handpick the students who take these surveys, which results in an optimistic set of opinions being registered. I wish I could say more about the methodology and point values, but no information is available.

The top 150 list (which can be found here by state) certainly favors wealthy, prestigious colleges with a few exceptions (University of South Dakota, University of Tennessee-Martin, and Southern Utah University, for example). In Wisconsin, only Madison and Eau Claire (two of the three most selective universities in the UW System) made the list. In the Big Ten, there are some notable omissions—Iowa (but Iowa State is included), Michigan State (but Michigan is included), Ohio State, and Penn State.

The best value rankings try to provide information about what college will cost and whether some colleges provide better “bang for the buck” than others. Providing useful information is an important endeavor, as this recent article in the Chronicle emphasizes. However, the Princeton Review’s list provides useful information to only a small number of academically elite students, many of whom have the financial means to pay for college without taking on much debt. This is illustrated by the accompanying USA Today article featuring the rankings, which notes that fewer than half of all students attending Best Value Colleges take on debt, compared to two-thirds of students nationwide. This differential isn’t just a result of the cost of attendance; it also reflects students’ ability to pay for college.