The U.S. Department of Education is hard at work developing a Postsecondary Institution Ratings System (PIRS) that will rate colleges before the start of the 2015-16 academic year. In addition to a four-city listening tour in November 2013, ED is seeking public comments and technical expertise to help guide it through the process. The full details about what ED is seeking can be found on the Federal Register’s website, but the key questions for the public are the following:
(1) What types of measures should be used to rate colleges’ performance on access, affordability, and student outcomes? ED notes that they are interested in measures that are currently available, as well as ones that could be developed with additional data.
(2) How should all of the data be reduced into a set of ratings? This gets into concerns about what statistical weights should be assigned to each measure, as well as whether an institution’s score should be adjusted to account for the characteristics of its students. The issue of “risk adjusting” is a hot topic: it can help broad-access institutions perform well on the ratings, but it has also been blamed for lowering standards in the K-12 world.
(3) What is the appropriate set of institutional comparisons? Should there be different metrics for community colleges versus research universities? And how should the data be displayed to students and policymakers?
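To make question (2) concrete, here is a toy sketch of what a weighted, risk-adjusted composite rating could look like. All of the measure names, weights, and data below are invented for illustration; PIRS has not defined any of them. The "risk adjustment" here is the simplest possible version: regress graduation rates on the percentage of Pell recipients, and credit each school for its residual (how it performs relative to what its student body would predict).

```python
# Hypothetical sketch of a weighted, risk-adjusted composite rating.
# Measures, weights, and all data values are invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

def fit_line(x, y):
    # Ordinary least squares slope and intercept for a single predictor.
    mx, my = mean(x), mean(y)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

schools = {
    # name: (pct_pell, grad_rate, affordability_score) -- made-up data
    "A": (0.20, 0.80, 0.6),
    "B": (0.60, 0.55, 0.8),
    "C": (0.40, 0.50, 0.5),
}

pell = [v[0] for v in schools.values()]
grad = [v[1] for v in schools.values()]
slope, intercept = fit_line(pell, grad)

# Hypothetical weights -- the real policy debate is over these numbers.
WEIGHTS = {"adjusted_outcome": 0.5, "access": 0.25, "affordability": 0.25}

ratings = {}
for name, (p, g, afford) in schools.items():
    # Residual: actual graduation rate minus the rate predicted from Pell share.
    residual = g - (intercept + slope * p)
    ratings[name] = (WEIGHTS["adjusted_outcome"] * residual
                     + WEIGHTS["access"] * p
                     + WEIGHTS["affordability"] * afford)
```

Note how the risk adjustment changes the picture: school B has the lowest raw graduation rate except for C, but because it serves the most Pell students, its residual is positive and it rates well. That is exactly the property that makes risk adjustment attractive to broad-access institutions and worrying to critics.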
The Department of Education will convene a technical panel on January 22 to grapple with these questions, and I will be among the presenters at that symposium. I would appreciate your thoughts on these questions (as well as on the utility of federal college ratings in general), either in the comments section of this blog or via e-mail. I also encourage readers to submit their comments to regulations.gov by January 31.
2 thoughts on “The College Ratings Suggestion Box is Open”
1) Rating/ranking is a ritual here — while the argument by Duncan et al is that it serves a policy purpose, it is as much Accountability Theater as anything else. So I’d take everything here with a grain of salt.
2) At the same time, using ratings and rankings as triggers for administrative action still can have short and long-term consequences for individual schools, so I generally look at this type of process at the K-12 level as “let’s see what we can do to minimize idiotic results.”
Based on this, my inclination is to look at the conversations and see what types of institutions are most vulnerable to misidentification, and then identify which mechanisms are most likely to misidentify them.
Your comments also highlight the importance of considering multiple years of data, something the current college rankings typically don’t do. I would also note that schools identified as really low-achieving (in the K-12 literature) tend not to be great schools, but we need to be more certain that a school flagged as among the worst truly has a score in the bottom quartile.
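The multi-year point can be illustrated with a toy example (all numbers invented): a school can land at the bottom of the rankings on a single noisy year yet look fine once several years are averaged, which is precisely the misidentification risk a one-year rating creates.

```python
# Toy illustration (invented numbers) of why multi-year averages matter.
# School X has one bad year; school Y is consistently mediocre.
yearly_scores = {
    "X": [0.70, 0.30, 0.72],  # single anomalous year
    "Y": [0.45, 0.46, 0.44],  # consistently low
    "Z": [0.60, 0.58, 0.62],
}

# Rank on the middle year alone: X looks like the worst school.
worst_one_year = min(yearly_scores, key=lambda s: yearly_scores[s][1])

# Rank on a three-year average: Y, not X, is the weakest performer.
three_year_avg = {s: sum(v) / len(v) for s, v in yearly_scores.items()}
worst_averaged = min(three_year_avg, key=three_year_avg.get)
```

A single-year rating would flag X for sanction; the three-year average correctly points at Y instead. The same logic motivates the comment above about wanting confidence that a flagged school's true score really sits in the bottom quartile.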
Thanks for the comments!
Comments are closed.