Testimony to the Advisory Committee on Student Financial Assistance

Below is a copy of my September 12, 2014 testimony at the Advisory Committee on Student Financial Assistance’s hearing regarding the Postsecondary Institution Ratings System (PIRS):

Good morning, members of the Advisory Committee on Student Financial Assistance, Department of Education officials, and other guests. My name is Robert Kelchen and I am an assistant professor in the Department of Education Leadership, Management and Policy at Seton Hall University and the methodologist for Washington Monthly magazine’s annual college rankings. All opinions expressed in this testimony are my own, and I thank the Committee for the opportunity to present.

I am focusing my testimony on PIRS as an accountability mechanism, as that appears to be the Obama Administration’s stated goal in developing the ratings. A student-friendly rating tool can have value, but I am confident that third parties can use the Department’s data to develop a better tool. The Department should not simultaneously develop a consumer-oriented ratings system, nor should it release a draft of PIRS without providing information about where colleges stand under the proposed system. I am also not taking a position on the utility of PIRS as an accountability measure, as the value of the system depends on details that have not yet been decided.

The Department has a limited number of potential choices for metrics in PIRS regarding access, affordability, and outcomes. While I will submit comments on a range of metrics for the record, I would like to discuss earnings metrics today. In order not to harm colleges that educate large numbers of teachers, social workers, and others who hold important but lower-salary jobs, I encourage the Department to adopt an earnings metric indexed to the federal poverty guidelines. For example, the cutoff could be 150% of the federal poverty line for a family of two, or roughly $23,000 per year.
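As a rough arithmetic check of that figure, here is a minimal sketch. It assumes the 2014 HHS poverty guideline for a two-person household in the 48 contiguous states ($15,730); the guideline year and household size the Department would actually use are my assumptions, not a decided policy.

```python
# Back-of-the-envelope check of the earnings cutoff described above.
# The guideline figure and the 150% multiplier are illustrative assumptions.
POVERTY_GUIDELINE_TWO_PERSON_2014 = 15_730  # dollars per year, 2014 HHS guideline
MULTIPLIER = 1.5  # 150% of the poverty line

earnings_cutoff = POVERTY_GUIDELINE_TWO_PERSON_2014 * MULTIPLIER
print(f"Earnings cutoff: ${earnings_cutoff:,.0f} per year")  # ~$23,595, roughly $23,000
```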

There are a number of methodological decisions that the Department must make in developing PIRS. I focus on five in this testimony.

The first decision is whether to classify colleges into peer groups. While supporters of the idea argue that it is needed to allow fairer comparisons of similar colleges, I do not feel it is necessary in a well-designed accountability system. I suggest combining all four-year institutions into one group and then separating two-year institutions based on whether they award more associate’s degrees or more certificates, as this distinction affects graduation rates.

Instead of placing colleges into peer groups, the Department should adjust some outcomes for inputs such as student characteristics and selectivity. This partially controls for important differences across colleges that are correlated with outcomes, providing an estimate of a college’s “value-added” to students. But colleges should also be held to minimum outcome standards (such as a 25% graduation rate) in addition to minimum value-added standards.
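As a minimal sketch of the kind of input adjustment described above, one could regress an outcome on student characteristics and selectivity and treat the residual as a rough “value-added” estimate. The simulated data, variable names, and simple linear specification below are my own illustrative assumptions, not a proposed Department methodology.

```python
import numpy as np

# Illustrative data: one row per college. All columns are assumptions for this
# sketch: percent Pell, percent part-time, average admissions test score
# (selectivity), and the observed graduation rate.
rng = np.random.default_rng(0)
n = 200
pct_pell = rng.uniform(0.1, 0.8, n)
pct_part_time = rng.uniform(0.0, 0.6, n)
test_score = rng.uniform(800, 1400, n)
grad_rate = (0.9 - 0.5 * pct_pell - 0.3 * pct_part_time
             + 0.0002 * (test_score - 1100) + rng.normal(0, 0.05, n))

# Regress the outcome on inputs; the residual is a rough "value-added" estimate.
X = np.column_stack([np.ones(n), pct_pell, pct_part_time, test_score])
coef, *_ = np.linalg.lstsq(X, grad_rate, rcond=None)
value_added = grad_rate - X @ coef

# Hold colleges to BOTH a minimum raw outcome (25% graduation rate)
# and a minimum value-added standard (here, not in the bottom decile).
meets_floor = grad_rate >= 0.25
meets_value_added = value_added >= np.percentile(value_added, 10)
passes = meets_floor & meets_value_added
print(f"{passes.sum()} of {n} colleges meet both standards")
```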

The scoring system and the number of colleges in each rating tier are crucial to the potential feasibility and success of PIRS. A simple system with three or four carefully named tiers (no A-F grades, please!) is sufficient to identify the lowest-performing and highest-performing colleges. I would suggest three tiers, with the lowest 10% of colleges in the bottom tier, the middle 80% in the next tier, and the highest 10% in the top tier. While the scores all contain error due to data limitations, focusing on the bottom 10% makes it unlikely that any college in the lowest tier has a true performance outside the bottom one-third of colleges. Using multiple years of data will also help reduce year-to-year noise; I use three years of data for the Washington Monthly rankings.
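A hedged sketch of how that tier structure could be applied: average several years of scores to smooth year-to-year noise, then use percentile cutoffs to assign tiers. The simulated scores, score scale, and number of colleges below are illustrative assumptions only.

```python
import numpy as np

# Simulated composite scores: 3 years x 500 colleges (both numbers are assumptions).
rng = np.random.default_rng(1)
scores_by_year = rng.normal(50, 10, size=(3, 500))
avg_score = scores_by_year.mean(axis=0)  # multi-year average reduces noise

# Bottom 10% in the lowest tier, top 10% in the highest tier, the rest in the middle.
low_cut, high_cut = np.percentile(avg_score, [10, 90])
tier = np.where(avg_score < low_cut, "bottom",
                np.where(avg_score > high_cut, "top", "middle"))
for label in ("bottom", "middle", "top"):
    print(label, (tier == label).sum())
```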

Finally, the Department must carefully consider how to weight the individual metrics. While I would expect access, affordability, and outcomes to be equally weighted, the colleges in the top and bottom tiers should not change much when different weights are used for each metric. If the Department finds that the results are highly sensitive to model specifications, the utility of PIRS comes into question.
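One rough way to run that kind of sensitivity check is to vary the weights on the three metric groups and see how much the bottom-tier membership changes. The metric values and candidate weighting schemes below are illustrative assumptions, not proposed weights.

```python
import numpy as np

# Hypothetical metric values, one row per college.
# Columns (assumed): access, affordability, outcomes.
rng = np.random.default_rng(2)
n = 500
metrics = rng.uniform(0, 1, size=(n, 3))

def bottom_tier(weights):
    """Return the set of colleges in the bottom 10% of the weighted composite."""
    composite = metrics @ np.asarray(weights)
    return set(np.where(composite <= np.percentile(composite, 10))[0])

baseline = bottom_tier([1/3, 1/3, 1/3])  # equal weights as the baseline
for w in ([0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]):
    overlap = len(baseline & bottom_tier(w)) / len(baseline)
    print(f"weights {w}: {overlap:.0%} of the baseline bottom tier unchanged")
```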

I conclude with three recommendations: two for the Department and one for the policy community. The Department must be willing to adjust the ratings criteria as needed and to accept feedback on the draft ratings from a wide variety of stakeholders. It must also begin auditing IPEDS data from a random sample of colleges to ensure the data are accurate, as the implications of incorrectly or falsely reported data are substantial. Finally, the policy community needs to continue to push for better higher education data. The Student Achievement Measure project has the potential to improve graduation rate reporting, and overturning the federal ban on unit record data would greatly improve the Department’s ability to accurately measure colleges’ performance.

Thank you once again for the opportunity to present and I look forward to answering any questions.

Author: Robert

I am a professor at the University of Tennessee, Knoxville who studies higher education finance, accountability policies and practices, and student financial aid. All opinions expressed here are my own.
