The U.S. Department of Education recently announced that it will create a list of low-value postsecondary programs, and it requested input from the public on how to do so. The Department asked seven key questions, and I put together a 3,000-plus-word comment in response. Here, I list the questions and briefly summarize my key points.
Question 1: What program-level data and metrics would be most helpful to students to understand the financial (and other) consequences of attending a program?
Four data elements would be helpful. The first is program-level completion rates, especially for graduate and certificate programs where students are admitted directly into a program. Second, given differential tuition and different credit requirements across programs, time to completion and sticker/net prices by program would be incredibly valuable. The last two are debt and earnings, which are largely present in the current College Scorecard.
Question 2: What program-level data and metrics would be most helpful to understand whether public investments in the program are worthwhile? What data might be collected uniformly across all students who attend a program that would help assess the nonfinancial value created by the program?
I would love to see information on federal income taxes paid by former students and their use of public benefits (if possible). More information on income-driven repayment use would also be helpful. Finally, there is a great need to rethink definitions of “public service,” which currently depend on the employer instead of the job function. That is a concern in fields like nursing, which send graduates to do the same good work in both for-profit and nonprofit settings.
Question 3: In addition to the measures or metrics used to determine whether a program is placed on the low-financial-value program list, what other measures and metrics should be disclosed to improve the information provided by the list?
Nothing too fancy here. Just list any sanctions or warnings from the federal government, state agencies, or accreditors, along with general outcomes for all students at the undergraduate level to account for students switching majors.
Question 4: The Department intends to use the 6-digit Classification of Instructional Program (CIP) code and the type of credential awarded to define programs at an institution. Should the Department publish information using the 4-digit CIP codes or some other type of aggregation in cases where we would not otherwise be able to report program data?
This is my nerdy honey hole, as I have spent a lot of time thinking about these issues. The two biggest issues with student debt/earnings data right now are that some campuses get aggregated together in reporting and that it is impossible to separate outcomes for fully online versus hybrid/in-person programs. Those nuts need to be cracked first; only then should small cells be aggregated up to broader CIP codes, as sketched below.
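To make that reporting rule concrete, here is a minimal sketch in Python/pandas. Everything here is a placeholder I made up for illustration, not anything the Department has specified: the column names (cip6, credential, modality, debt, earnings, student_id) and the 30-student cell threshold are all hypothetical. The idea is to report each 6-digit CIP × credential × modality cell directly, and roll only the small cells up to the 4-digit CIP level.

```python
import pandas as pd

MIN_CELL = 30  # hypothetical privacy threshold; the actual cutoff may differ


def summarize(students: pd.DataFrame, cip_col: str) -> pd.DataFrame:
    """Median debt/earnings for each CIP x credential x modality cell."""
    out = students.groupby([cip_col, "credential", "modality"], as_index=False).agg(
        n=("student_id", "count"),
        median_debt=("debt", "median"),
        median_earnings=("earnings", "median"),
    )
    return out.rename(columns={cip_col: "cip"})


def report_with_rollup(students: pd.DataFrame) -> pd.DataFrame:
    students = students.copy()
    # The 4-digit CIP is the 6-digit code's family, e.g. "51.3801" -> "51.38"
    students["cip4"] = students["cip6"].str[:5]

    six = summarize(students, "cip6")
    keep = six[six["n"] >= MIN_CELL].assign(cip_level="6-digit")

    # Students whose 6-digit cell was too small get re-aggregated at 4 digits
    small_keys = six.loc[six["n"] < MIN_CELL, ["cip", "credential", "modality"]]
    small = students.merge(
        small_keys.rename(columns={"cip": "cip6"}),
        on=["cip6", "credential", "modality"],
    )
    four = summarize(small, "cip4").assign(cip_level="4-digit")
    # Anything still under the threshold after rollup stays suppressed
    four = four[four["n"] >= MIN_CELL]

    return pd.concat([keep, four], ignore_index=True)
```

Note that modality is part of the cell definition from the start: once fully online and in-person students are pooled together, no later aggregation choice can pull them apart again.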
Question 5: Should the Department produce only a single low-financial-value program list, separate lists by credential level, or use some other breakdown, such as one for graduate and another for undergraduate programs?
Separate the lists out by credential level and ideally provide a good search function by program of study. Otherwise, programs in low-paying fields will clog up the lists and keep students from seeing relatively lousy programs in higher-paying fields, as the sketch below illustrates.
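One hypothetical way to operationalize “relatively lousy,” reusing the report frame and column names from the sketch above: compare each program to the median for its own field (2-digit CIP) at the same credential level, rather than to one national bar. The 0.75 shortfall factor is purely illustrative.

```python
def flag_low_value_within_field(report: pd.DataFrame, shortfall: float = 0.75) -> pd.DataFrame:
    """Flag programs earning well below their own field's median.

    Assumes the `report` frame produced by the rollup sketch above;
    the shortfall factor is an illustrative placeholder, not a real rule.
    """
    report = report.copy()
    report["field"] = report["cip"].str[:2]  # 2-digit CIP family, e.g. "51" (health)
    field_median = report.groupby(["field", "credential"])["median_earnings"].transform("median")
    report["low_value_in_field"] = report["median_earnings"] < shortfall * field_median
    return report
```

A within-field flag like this can surface a lousy engineering program even though its graduates out-earn most graduates in lower-paying fields; a single combined list sorted by earnings would bury it.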
Question 6: What additional data could the Department collect that would substantially improve our ability to provide accurate data for the public to help understand the value being created by the program? Please comment on the value of the new metrics relative to the burden institutions would face in reporting information to the Department.
I would love to see program-level completion rates (where appropriate) and better pricing information at the program level. Those items aren’t free to implement, so I would gladly explore cuts elsewhere in IPEDS (such as the academic libraries survey) to offset the additional reporting burden.
Question 7: What are the best ways to make sure that institutions and students are aware of this information?
Colleges will be aware of this information without the federal government doing much, and they may respond to information that they didn’t have before. But colleges don’t have a great record of responding to public shaming if they already knew that affordability was a concern, so I’m not expecting massive changes.
The College Scorecard produced only small changes in student behavior around the margins, driven primarily by more advantaged students. I’m not an expert in reaching out to prospective students, but I know that outreach to as many groups as possible is key.