Comments on the Brookings Value-Added Rankings

Jonathan Rothwell and Siddharth Kulkarni of the Metropolitan Policy Program at Brookings made a big splash today with the release of a set of college “value-added” rankings (link to full study and Inside Higher Ed summary) focused primarily on labor market outcomes. Value-added measures, which adjust for student and institutional characteristics to get a better handle on a college’s contribution to student outcomes, are becoming increasingly common in higher education. (I’ve written about college value-added in the past, which led to me taking the reins as Washington Monthly’s rankings methodologist.) Pretty much all of the major college rankings at this point include at least one value-added component, and this set of rankings actually shares some similarities with Money’s rankings. And the Brookings report does mention correlations with the U.S. News, Money, and Forbes rankings—but not Washington Monthly. (Sigh.)

The Brookings report uses three different outcome measures, which are then adjusted for available student characteristics and institutional characteristics such as the sector of the college and where it is located:

(1) Mid-career salary of alumni: This measures the median salary of full-time workers with a degree from a particular college and at least ten years of experience. The data are from PayScale, which relies on self-reported data from a subset of graduates, but the data likely still have value for two reasons. First, the authors do a careful job of trying to diagnose any biases in the data—for example, correlating PayScale reported earnings with data from other sources. Second, even if there is an upward bias in the data, it should be similar across institutions. As I’ve written before, I trust the order of colleges in PayScale data more than I trust the dollar values—which are likely inflated.

But there are still a few concerns with this measure. Some of the concerns, such as limiting the sample to graduates (excluding dropouts) and dropping students with an advanced degree, are fairly well-known. And the focus on salary definitely rewards colleges with large engineering programs, as evidenced by those colleges’ dominance of the value-added list (while art schools look horrible). However, given that math ACT and SAT scores are the only academic preparation measures used, the bias favoring engineering schools may actually be smaller than if verbal/reading scores were also used. I also would have estimated separate models for two-year and four-year colleges instead of putting them in the same model with a dummy variable for sector, but that’s just my preference.
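To make the mechanics concrete, here is a minimal sketch of a value-added calculation of this general type, using synthetic data rather than the Brookings data (the variables and coefficients below are invented for illustration): regress the outcome on student and institutional characteristics, then treat each college’s residual—actual minus predicted—as its value-added.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # synthetic "colleges"

# Synthetic characteristics: median SAT math score and a sector dummy (1 = four-year)
sat_math = rng.normal(550, 60, n)
four_year = rng.integers(0, 2, n)

# Synthetic outcome: mid-career salary driven partly by inputs, partly by noise
salary = 20_000 + 60 * sat_math + 8_000 * four_year + rng.normal(0, 5_000, n)

# OLS: predict the outcome from the characteristics
X = np.column_stack([np.ones(n), sat_math, four_year])
beta, *_ = np.linalg.lstsq(X, salary, rcond=None)
predicted = X @ beta

# Value-added = actual minus predicted; positive means "beats expectations"
value_added = salary - predicted
ranking = np.argsort(-value_added)  # colleges ordered from most to least value-added
```

The point of the adjustment is that a college is compared against what its own student body would predict, not against an absolute salary bar.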

(2) Student loan repayment rate: This is the complement of the three-year student loan cohort default rate, averaged over the last three years (so a 10% default rate is framed as a 90% repayment rate). This measure is pretty straightforward, although I do have to question the value-added estimates for colleges with very high repayment rates. Value-added estimates are difficult to conceptualize for colleges with a high probability of success, as there is typically little room for improvement. But here, the highest predicted repayment rate is 96.8% for four-year colleges, while several dozen colleges have actual repayment rates in excess of 96.8%. It appears that linear regressions were used, while some type of robust generalized linear model should also have been considered. (In the Washington Monthly rankings, I use simple linear regressions for graduation rate performance, but very few colleges are so close to the ceiling of 100%.)
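To illustrate the ceiling problem with synthetic data (invented numbers, not the Brookings data): an ordinary linear regression fit to rates that saturate near 100% can produce predictions above the ceiling, while fitting on a logit-transformed outcome—a simple stand-in for the kind of generalized linear model mentioned above—keeps every prediction strictly between 0 and 100%.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Synthetic predictor and repayment rates that saturate near the 100% ceiling
x = np.linspace(-3, 3, n)
rate = 1 / (1 + np.exp(-(2.5 + 1.2 * x))) + rng.normal(0, 0.01, n)
rate = np.clip(rate, 0.01, 0.99)

X = np.column_stack([np.ones(n), x])

# Ordinary linear fit: nothing constrains predictions to stay below 100%
b_lin, *_ = np.linalg.lstsq(X, rate, rcond=None)
pred_linear = X @ b_lin

# Logit-transformed fit: inverting the transform keeps all predictions in (0, 1)
b_logit, *_ = np.linalg.lstsq(X, np.log(rate / (1 - rate)), rcond=None)
pred_logit = 1 / (1 + np.exp(-(X @ b_logit)))
```

Here the linear fit crosses 100% for the strongest colleges, while the transformed fit cannot—which is exactly the situation where value-added estimates near the ceiling become hard to interpret.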

(3) Occupational earnings potential: This is a pretty nifty measure that uses LinkedIn data to get a handle on which occupations a college’s graduates pursue during their careers. This mix of occupations is then tied to Bureau of Labor Statistics data to estimate the average salary of a college’s graduates, with advanced degree holders also included. The value-added measure attempts to control for student and institutional characteristics, although it doesn’t control for students’ preferences toward certain majors when entering college.
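In spirit, this is a share-weighted average: the fraction of a college’s alumni in each occupation is multiplied by that occupation’s average salary. A toy sketch—all shares and salaries below are invented, not actual LinkedIn or BLS figures:

```python
# Hypothetical BLS-style average salaries by occupation (invented numbers)
occupation_salaries = {
    "software_engineer": 105_000,
    "teacher": 62_000,
    "nurse": 77_000,
}

# Hypothetical share of one college's alumni working in each occupation
college_mix = {
    "software_engineer": 0.30,
    "teacher": 0.50,
    "nurse": 0.20,
}

# Occupational earnings potential = sum over occupations of share * salary
earnings_potential = sum(
    share * occupation_salaries[occ] for occ, share in college_mix.items()
)
# 0.30*105000 + 0.50*62000 + 0.20*77000 = 77900
```

A college’s estimate thus reflects where its graduates work, not what they individually report earning—which is why it complements the self-reported PayScale measure.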

I’m excited by the potential to use LinkedIn data (warts and all) to look at students’ eventual outcomes. However, it should be noted that LinkedIn is more heavily used in some fields that might be expected (business and engineering) and in others that might not be (communication and cultural studies). The authors adjust for these differences in representation and are very transparent about it in the appendix. The appendix is definitely on the technical side, but I welcome their transparency.

They also report five different quality measures that are not included in the value-added estimate: ‘curriculum value’ (the value of the degrees offered by the college), the value of skills alumni list on LinkedIn, the percentage of graduates deemed STEM-ready, completion rates within 200% of normal time (8 years for a 4-year college, or 4 years for a 2-year college), and average institutional grant aid. These measures are not input-adjusted, but generally reflect what people think of as quality. However, average institutional grant aid is a lousy measure to include, as it rewards colleges with a high-tuition, high-aid model over colleges with a low-tuition, low-aid model—even if students pay the exact same price.

In conclusion, the Brookings report tells readers some things we already know (engineering programs are where to go to make money), but provides a good—albeit partial—look at outcomes across an unusually broad swath of American higher education. I would advise readers to focus on comparing colleges with similar missions and goals, given the importance of occupation in determining earnings. I would also be more hesitant to use the metrics for very small colleges, where all of these measures can be influenced by a relatively small number of people. But the transparency of the methodology and use of new data sources make these value-added rankings a valuable contribution to the public discourse.

Review of “Designing the New American University”

Since Michael Crow became the president of Arizona State University in 2002, he has worked to reorganize and grow the institution into his vision of a ‘New American University.’ ASU has grown to over 80,000 students during his time as president through a commitment to admit all students who meet a relatively modest set of academic qualifications. At the same time, the university has embarked upon a number of significant academic reorganizations that have eliminated many traditional academic departments and replaced them with larger interdisciplinary schools. Crow has also attracted his fair share of criticism over the years, including for alleged micromanaging and his willingness to venture into online education. (I’ve previously critiqued ASU Online’s program with Starbucks, although many of my concerns have since been alleviated.)

Crow partnered with William Dabars, an ASU professor, to write Designing the New American University (Johns Hopkins University Press, $34.95 hardcover) to more fully explain how the ASU model works. The first several chapters of the book, although rather verbose, focus on the development of the American research university. A key concept that the authors raise is isomorphism—the tendency of organizations to resemble a leading organization in their field. Crow and Dabars contend that research universities have largely followed the lead of elite private universities such as Harvard and the big Midwestern land-grant universities that developed following the Civil War. Much has changed since then, so they argue that a new structure is needed.

Chapter 7 is the key chapter of the book, in which the authors detail the design of Arizona State as a ‘New American University’ (and make a nice sales pitch for the university in the process). Crow and Dabars celebrate the growth of Arizona State, which has been matched by only a small number of public research universities. They note that a stronger focus on access has hurt them in the U.S. News rankings, a key measure of prestige—while celebrating their ranking as an ‘Up-and-Coming’ school. (In the Washington Monthly rankings that I compile, ASU is a very respectable 28th.) The scale of ASU allows the possibility of cost-effective operations, something the university is trying to measure through its Center for Measuring University Performance.

It certainly seems like some elements of the changes at ASU could potentially be adopted at other research universities, but it is worth noting that research universities make up only about 200-300 of the over 7,500 postsecondary institutions in the United States. I am left wondering what the ‘New American’ model would look like in other sectors of higher education, which is beyond the scope of this book but an important question to answer. Some other questions to consider are the following:

(1) How would a commitment to growth happen at colleges without the prestige or market power to attract significant numbers of out-of-state students?

(2) ASU seems to have done more academic reorganizations in research-intensive departments. How would this work at a more teaching-oriented institution?

(3) How will the continuing growth of ASU Online, as well as the multiple branch campuses in the Phoenix metropolitan area, affect the organizational structure? At what point, if any, does a university reach the maximum optimal size?

(4) Will ASU’s design remain the same once Michael Crow is not president? (And is that a good thing?)

Overall, this is a solid book that is getting a substantial amount of attention for good reason. While the book could have been about 50 pages shorter while still conveying all of the important information, the final chapter is highly recommended reading. I plan to assign that chapter to my organization and governance classes in the future so they can understand how ASU is growing and succeeding through an atypical higher education model.