Analyzing the New Cohort Default Rate Data

The U.S. Department of Education today released cohort default rates (CDRs) by college, which reflect the percentage of students who default on their loans within three years of entering repayment. This is a big deal for colleges, as any college with a CDR of more than 30% for three consecutive years could lose its federal financial aid eligibility. I analyzed what we can learn from CDRs—a limited amount—in a blog post earlier this week.
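
For readers who want the mechanics, here is a minimal sketch in Python of the underlying arithmetic and the sanction test described above; the function names and figures are made up for illustration.

```python
def three_year_cdr(defaulted_within_3_years: int, entered_repayment: int) -> float:
    """Share of a college's borrowers entering repayment in a fiscal year
    who default within the three-year tracking window, as a percentage."""
    return 100.0 * defaulted_within_3_years / entered_repayment

def subject_to_sanctions(last_three_cdrs: list[float], threshold: float = 30.0) -> bool:
    """A college whose three-year CDR tops the threshold for three
    consecutive cohort years can lose federal financial aid eligibility."""
    return len(last_three_cdrs) == 3 and all(c > threshold for c in last_three_cdrs)

# Hypothetical college: 74 of 500 FY 2011 borrowers defaulted by the end of FY 2013.
fy2011 = three_year_cdr(74, 500)                           # 14.8
print(fy2011, subject_to_sanctions([33.0, 31.5, fy2011]))  # 14.8 False
```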

And then things got interesting in Washington. The Department of Education put out a release yesterday noting that some students with loans held by multiple servicers (known as “split servicers”) were current on some loans and in default on others. In this release, ED noted that the split servicer students were being dropped from CDRs for the last three years—but only if a college was close to the eligibility threshold. This led many to question whether ED was serious about using CDRs as an accountability tool and to try to glean implications for the upcoming college ratings system.

The summary data for cohort default rates by year and sector are available here, and show a decline from a 14.7% default rate in Fiscal Year 2010 to 13.7% in FY 2011. Default rates in each major sector of higher education also fell, led by a decline from 21.8% to 19.1% in the for-profit sector. However, comparing the FY 2009 and FY 2010 data in this release with the same years in last year’s release shows no changes from last year, before the split servicer change was adopted. Something doesn’t seem right there.
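
Checking that claim is straightforward once both releases are in machine-readable form. Here is a minimal sketch, with hypothetical file names and column labels (the actual ED spreadsheets are laid out differently), of how the two releases could be compared:

```python
import pandas as pd

# Hypothetical extracts of last year's and this year's releases,
# one row per college: opeid, fy2009, fy2010 (plus fy2011 in the new file).
old = pd.read_csv("cdr_release_2013.csv")
new = pd.read_csv("cdr_release_2014.csv")

merged = old.merge(new, on="opeid", suffixes=("_old", "_new"))
for fy in ("fy2009", "fy2010"):
    revised = merged[merged[f"{fy}_old"] != merged[f"{fy}_new"]]
    print(f"{fy}: {len(revised)} colleges with a revised rate")
```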

Twenty-one colleges are subject to sanctions under the new CDRs, all but one of which (Ventura Adult and Continuing Education) are for-profit. Most of the colleges subject to sanctions are small beauty or cosmetology institutions and reflect a very small percentage of total enrollment. We don’t know how many other colleges would have crossed over 30%, if not for the split servicer changes.

This year’s data show some very fortunate colleges. Among colleges with a sufficiently high participation rate, six institutions had CDRs between 29 and 29.9 percent after being over 30% in the previous two years. They are led by Paris Junior College, with a 29.9% CDR in FY 2011 after being over 40% in each of the previous two years. Other colleges weren’t so lucky. For example, the Aviation Institute of Maintenance was at 38.9% in FY 2009, 36.1% in FY 2010, and improved to 31.1% in FY 2011—but is still subject to sanctions.

FY 2011 CDRs for colleges with FY 2009 and FY 2010 CDRs above 30% (all figures in percent)
Name FY 2011 FY 2010 FY 2009
SEARCY BEAUTY COLLEGE 9.3 30.7 38.2
NEW CONCEPT MASSAGE AND BEAUTY SCHOOL 9.7 30.1 35.2
UNIVERSITY OF ANTELOPE VALLEY 12 31.8 30.6
PAUL MITCHELL THE SCHOOL ESCANABA 12.1 40 68.7
SAFFORD COLLEGE OF BEAUTY CULTURE 13.1 36.8 36.3
COMMUNITY CHRISTIAN COLLEGE 13.9 33.3 38.8
UNIVERSITY OF SOUTHERNMOST FLORIDA 14.6 30.8 35.1
SOUTHWEST UNIVERSITY AT EL PASO 15.5 36.1 37.5
CENTRO DE ESTUDIOS MULTIDISCIPLINARIOS 15.6 39.2 50.9
VALLEY COLLEGE 17.2 36.9 32.7
AMERICAN BROADCASTING SCHOOL 17.5 30.8 44.6
SUMMIT COLLEGE 17.6 30.9 30.5
VALLEY COLLEGE 19.4 56.5 37.5
AMERICAN UNIVERSITY OF PUERTO RICO 21 31.2 36.6
BRYAN UNIVERSITY 21.1 30.2 30.4
SOUTH CENTRAL CAREER CENTER 22 32.6 35.1
PAUL MITCHELL THE SCHOOL ARKANSAS 22 37.5 30
D-JAY’S SCHOOL OF BEAUTY, ARTS & SCIENCES 22.2 37.5 41.9
PAUL MITCHELL THE SCHOOL GREAT LAKES 22.2 34.6 33.9
KILGORE COLLEGE 22.7 30.2 33.5
ANTONELLI COLLEGE 22.8 33 35.1
OLD TOWN BARBER COLLEGE 23 37.7 40
OZARKA COLLEGE 23.1 41.8 35
TESST COLLEGE OF TECHNOLOGY 23.4 33.7 32
CENTURA COLLEGE 23.7 32 35
RUST COLLEGE 23.7 32 31.6
CARSON CITY BEAUTY ACADEMY 23.8 31.8 43.3
BACONE COLLEGE 24.1 32 30
KAPLAN CAREER INSTITUTE 24.1 32.5 33.6
TECHNICAL CAREER INSTITUTES 24.3 38.8 34.9
VICTOR VALLEY COMMUNITY COLLEGE 24.6 32.6 31
SOUTHWESTERN CHRISTIAN COLLEGE 24.6 32.7 43.1
AMERICAN BEAUTY ACADEMY 24.8 35.7 34.6
CENTURA COLLEGE 24.8 31.5 34.7
DENMARK TECHNICAL COLLEGE 25 30.8 31.6
MILAN INSTITUTE OF COSMETOLOGY 25 32.4 41.5
TREND BARBER COLLEGE 25 43.5 60.5
JACKSONVILLE BEAUTY INSTITUTE 25.2 33.3 41.7
CONCEPT COLLEGE OF COSMETOLOGY 25.3 41.5 34.2
EASTERN OKLAHOMA STATE COLLEGE 25.4 31.8 30
OTERO JUNIOR COLLEGE 25.5 34.2 38.2
LANGSTON UNIVERSITY 25.5 32.5 32.9
COLLEGEAMERICA DENVER 25.5 34.8 38.3
AVIATION INSTITUTE OF MAINTENANCE 25.8 36.9 39.6
EMPLOYMENT SOLUTIONS 26 38.5 30
SANFORD-BROWN COLLEGE 26.2 31.6 31.5
CAMBRIDGE INSTITUTE OF ALLIED HEALTH AND TECHNOLOGY 26.6 33.3 35
ANTELOPE VALLEY COLLEGE 26.9 32.6 33.2
UNIVERSITY OF ARKANSAS COMMUNITY COLLEGE AT BATESVILLE 26.9 30.6 31.6
CC’S COSMETOLOGY COLLEGE 27.4 40.3 35.9
MILWAUKEE CAREER COLLEGE 27.6 34.1 32.7
NTMA TRAINING CENTERS OF SOUTHERN CALIFORNIA 27.8 32.1 34.2
CONCORDIA COLLEGE ALABAMA 27.9 31.4 37.5
NORTH AMERICAN TRADE SCHOOLS 28 31 31.1
AVIATION INSTITUTE OF MAINTENANCE 28.1 37.9 39.8
MEDIATECH INSTITUTE 28.4 33.3 33.3
SEBRING CAREER SCHOOLS 29 54.1 57.5
MOHAVE COMMUNITY COLLEGE 29.3 32.7 36.7
CHERYL FELL’S SCHOOL OF BUSINESS 29.4 38 31.2
AVIATION INSTITUTE OF MAINTENANCE 29.4 36.1 38.9
KLAMATH COMMUNITY COLLEGE 29.4 33 31.7
PARIS JUNIOR COLLEGE 29.9 40.7 41.5
STYLEMASTERS COLLEGE OF HAIR DESIGN 30.6 46.6 37
LASSEN COLLEGE 30.8 37.1 37.7
AVIATION INSTITUTE OF MAINTENANCE 31.1 37.5 32.2
CHARLESTON SCHOOL OF BEAUTY CULTURE 31.7 37.5 34
PALLADIUM TECHNICAL ACADEMY 33 39.4 46.2
L T INTERNATIONAL BEAUTY SCHOOL 38.1 37.7 38
TIDEWATER TECH 38.6 42.7 55
JAY’S TECHNICAL INSTITUTE 40.6 53.8 51.5
OHIO STATE COLLEGE OF BARBER STYLING 41.1 37.8 32.9
MEMPHIS INSTITUTE OF BARBERING 44.7 47.2 44.4
FLORIDA BARBER ACADEMY 46.5 41.7 32.5
SAN DIEGO COLLEGE 49.3 34 35.7

Fully 35 colleges with sufficient participation rates had CDRs between 29.0% and 29.9% in FY 2011, including a mix of small for-profit colleges, HBCUs, and community colleges. The University of Arkansas-Pine Bluff, a designated minority-serving institution, has had CDRs of 29.9%, 29.2%, and 29.8% in the last three years. Mt. San Jacinto College and Harris-Stowe State University also had CDRs just under 30% in each of the last three years. Only 19 colleges, representing a mix of institutional types, had CDRs between 30.0% and 30.9%. This includes Murray State College in Oklahoma, which was at 30.0% in FY 2011, 28.9% in FY 2010, and 31.1% in FY 2009. Forty-three colleges were between 28.0% and 28.9%.

FY 2011 CDRs between 29 and 31 percent
Name FY 2011 FY 2010 FY 2009
OHIO TECHNICAL COLLEGE 29 24.1 21.3
DAYMAR COLLEGE 29 28.9 46.2
SEBRING CAREER SCHOOLS 29 54.1 57.5
L’ESPRIT ACADEMY 29.1 0 0
BLACK RIVER TECHNICAL COLLEGE 29.1 27.9 26.6
NEW SCHOOL OF RADIO & TELEVISION 29.1 26.2 28.1
LOUISBURG COLLEGE 29.2 28.7 24.7
MOHAVE COMMUNITY COLLEGE 29.3 32.7 36.7
HARRIS SCHOOL OF BUSINESS 29.3 25.6 17.8
INTELLITEC MEDICAL INSTITUTE 29.3 27.1 24.7
GALLIPOLIS CAREER COLLEGE 29.3 33.9 29.4
CHERYL FELL’S SCHOOL OF BUSINESS 29.4 38 31.2
COLLEGE OF THE SISKIYOUS 29.4 27.7 27.1
AVIATION INSTITUTE OF MAINTENANCE 29.4 36.1 38.9
KLAMATH COMMUNITY COLLEGE 29.4 33 31.7
COLORLAB ACADEMY OF HAIR, THE 29.4 24.3 12.5
DIGRIGOLI SCHOOL OF COSMETOLOGY 29.4 21.6 23.5
VIRGINIA SCHOOL OF MASSAGE 29.4 14.8 22
WASHINGTON COUNTY COMMUNITY COLLEGE 29.5 20.5 12.7
MT. SAN JACINTO COLLEGE 29.5 29.9 26.5
WEST TENNESSEE BUSINESS COLLEGE 29.5 32.6 21.8
BRITTANY BEAUTY SCHOOL 29.5 31.9 26.4
JOHN PAOLO’S XTREME BEAUTY INSTITUTE, GOLDWELL PRODUCTS ARTISTRY 29.5 25 0
HARRIS-STOWE STATE UNIVERSITY 29.6 27.9 26.5
CARIBBEAN UNIVERSITY 29.6 29.9 29.9
GUILFORD TECHNICAL COMMUNITY COLLEGE 29.7 26 19
WARREN COUNTY CAREER CENTER 29.7 22.9 25
STARK STATE COLLEGE 29.7 24.5 17.2
STRAND COLLEGE OF HAIR DESIGN 29.7 17.9 11.1
INDEPENDENCE COLLEGE OF COSMETOLOGY 29.8 21.6 18.4
FRANK PHILLIPS COLLEGE 29.8 25.2 29.1
MEDICAL ARTS SCHOOL (THE) 29.8 21.6 13.1
NEW MEXICO JUNIOR COLLEGE 29.8 24.1 23.1
PARIS JUNIOR COLLEGE 29.9 40.7 41.5
UNIVERSITY OF ARKANSAS AT PINE BLUFF 29.9 29.2 29.8
MURRAY STATE COLLEGE 30 28.9 31.1
JARVIS CHRISTIAN COLLEGE 30 36.5 29.3
BUSINESS INDUSTRIAL RESOURCES 30.1 19.1 20.9
LONG BEACH CITY COLLEGE 30.1 24.2 19
EASTERN GATEWAY COMMUNITY COLLEGE 30.1 0 0
MARTIN UNIVERSITY 30.2 19.8 18.7
LANE COMMUNITY COLLEGE 30.2 30.6 19.5
CAREER QUEST LEARNING CENTER 30.2 24.1 16.1
NIGHTINGALE COLLEGE 30.3 25 16.6
EMPIRE BEAUTY SCHOOL 30.4 31.6 25.2
NATIONAL ACADEMY OF BEAUTY ARTS 30.4 20.6 5.6
BAR PALMA BEAUTY CAREERS ACADEMY 30.5 35.8 26.8
WEST VIRGINIA UNIVERSITY – PARKERSBURG 30.5 25.8 24.1
PENSACOLA SCHOOL OF MASSAGE THERAPY & HEALTH CAREERS 30.5 17.3 10
PROFESSIONAL MASSAGE TRAINING CENTER 30.6 14.8 13
UNIVERSAL THERAPEUTIC MASSAGE INSTITUTE 30.6 23.5 17.2
STYLEMASTERS COLLEGE OF HAIR DESIGN 30.6 46.6 37
CCI TRAINING CENTER 30.8 26.5 26.7
INSTITUTE OF AUDIO RESEARCH 30.8 29.7 17
LASSEN COLLEGE 30.8 37.1 37.7
KAPLAN CAREER INSTITUTE 30.8 34.6 29.7
TRANSFORMED BARBER AND COSMETOLOGY ACADEMY 30.9 66.6 0
MAYSVILLE COMMUNITY AND TECHNICAL COLLEGE 30.9 26.4 24.5
TRI-COUNTY TECHNICAL COLLEGE 30.9 27.2 16.1
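
The band counts discussed above come straight from the institution-level file. A minimal sketch of reproducing them, assuming a cleaned file with one row per college and a numeric FY 2011 column (file and column names are hypothetical):

```python
import pandas as pd

cdr = pd.read_csv("cdr_fy2011_clean.csv")  # hypothetical columns: opeid, name, fy2011

bands = pd.cut(
    cdr["fy2011"],
    bins=[28.0, 29.0, 30.0, 31.0],
    right=False,  # [28.0, 29.0), [29.0, 30.0), [30.0, 31.0)
    labels=["28.0-28.9", "29.0-29.9", "30.0-30.9"],
)
print(bands.value_counts().sort_index())
```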

Some of the larger for-profits fared better, potentially due to the split servicer change. The University of Phoenix’s CDR was 19.0% in FY 2011, down from 26.0% in FY 2010 and 26.4% in FY 2009. DeVry University was at 18.5% in FY 2011, down from 23.4% in FY 2010 and 24.1% in FY 2009. ITT Technical Institute also improved, going from 33.3% in FY 2009 to 28.6% in FY 2010 and 22.4% this year. (Everest College disaggregates its data by campus, but the results are similar.)

The CDR data are not without controversy, but they are an important accountability tool going forward. It will be interesting to see whether and how these data will be used in the draft Postsecondary Institution Ratings System later this fall.

What Are Cohort Default Rates Good For?

Today marks the start of the U.S. Department of Education’s annual release of cohort default rates (CDRs), which reflect the percentage of students who default on their loans within three years of entering repayment. Colleges were informed of their rates today, with a release to the public coming sometime soon. This release, tracking students who entered repayment in Fiscal Year 2011, will be the third year that three-year CDRs have been collected and completes the shift from two-year to three-year CDRs for accountability purposes.

Before this year, colleges were subject to sanctions based on their two-year CDRs. Any college with a two-year CDR of more than 40% in a single year could lose its federal student loan eligibility, while any college with a two-year CDR of over 25% for three consecutive years could lose all federal financial aid eligibility. (Colleges with a very small percentage of borrowers can get an exemption.) While sanctions were rare (fewer than ten colleges were affected last year), the switch to a three-year CDR has worried colleges even though the threshold for three consecutive years rose from 25% to 30%.
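
For comparison with the new 30% rule sketched earlier, here is a minimal sketch of the old two-year sanction tests described above (the small-borrower exemption and the appeals process are ignored, and the numbers are hypothetical):

```python
def two_year_sanctions(two_year_cdrs: list[float]) -> dict[str, bool]:
    """Old regime: a two-year CDR above 40% in a single year risks federal loan
    eligibility; above 25% for three consecutive years risks all federal aid."""
    return {
        "loses_loan_eligibility": two_year_cdrs[-1] > 40.0,
        "loses_all_aid_eligibility": len(two_year_cdrs) >= 3
                                     and all(c > 25.0 for c in two_year_cdrs[-3:]),
    }

# Hypothetical college with two-year CDRs of 26%, 27%, and 28% in consecutive years:
print(two_year_sanctions([26.0, 27.0, 28.0]))
# {'loses_loan_eligibility': False, 'loses_all_aid_eligibility': True}
```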

But as the methodology changes, we need to consider what CDR data are actually good for. Colleges take cohort default rates very seriously, and the federal government is likely to use default rates as a component of the often-discussed (and frequently delayed) Postsecondary Institution Ratings System (PIRS). But should the higher education community, policymakers, or the general public take CDRs seriously? Below are some reasons why the default data are far from complete.

(1) Students are tracked over only three years, and income-based repayment makes the data less valuable. I have previously written about these two issues—and why it’s absurd that the Department of Education doesn’t track students over at least ten years. Income-based repayment means that students can be current on their payments even if their payments are zero, which is good for the student but isn’t exactly a ringing endorsement of a given college’s quality.

(2) Individual campuses are often aggregated to the system level, but this isn’t consistent. One of the biggest challenges as a researcher in higher education finance is that data on loan and grant volumes and student loan default rates come from Federal Student Aid instead of the National Center for Education Statistics. This may sound trivial, but some colleges aggregate FSA data to the system level for reporting purposes while all NCES data are at the campus level. This means that while default data on individual campuses within the University of Wisconsin System are available, data from all of the Penn State campuses are aggregated. Most for-profit systems also aggregate data, likely obscuring some individual branches that would otherwise face sanctions.
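
CDRs are reported by OPE ID, and branch campuses of the same institution generally share the first six digits of the eight-digit OPE ID. A minimal sketch (column names hypothetical) of how campus-level borrower counts could be rolled up to the institution level, which is essentially what the aggregated reporting does:

```python
import pandas as pd

# Hypothetical campus-level file: eight-digit OPE ID plus borrower counts.
# columns: opeid8, campus_name, defaulted, entered_repayment
campuses = pd.read_csv("campus_borrowers.csv", dtype={"opeid8": str})

campuses["opeid6"] = campuses["opeid8"].str[:6]  # parent institution identifier
systems = campuses.groupby("opeid6")[["defaulted", "entered_repayment"]].sum()
systems["cdr"] = 100.0 * systems["defaulted"] / systems["entered_repayment"]

print(systems.sort_values("cdr", ascending=False).head())
```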

(3) Defaults are far from the only adverse outcome, but they are the only one for which data are reported. Students are not counted as being in default until no payment has been made for at least 271 days, but we have no idea of delinquency rates, hardship deferments, or forbearances related to financial problems by campus. As I recently wrote in a guest post for Access to Completion, the percentage of students having repayment difficulties ranges between 17% and 51%, depending on the assumptions made. But we don’t have data on delinquency rates by campus, something in which many stakeholders would be interested.

Does this mean cohort default rates are good for absolutely nothing? No. They’re still useful in identifying colleges (or systems) where a large percentage of borrowers default quickly and a substantial percentage of students borrow. And very low default rates can be a sign either that students are doing well in the labor market after leaving college or that they have the knowledge to enter income-based repayment programs. But for many colleges with middling default rates, far more data (which the Department of Education collects but does not release) are needed to get a better picture of performance.

When the CDR data come out, I’ll have part 2 of this post—focusing on the colleges that are subject to sanctions and what that means for current and future accountability systems.

Testimony to the Advisory Committee on Student Financial Assistance

Below is a copy of my September 12, 2014 testimony at the Advisory Committee on Student Financial Assistance’s hearing regarding the Postsecondary Institution Ratings System (PIRS):

Good morning, members of the Advisory Committee on Student Financial Assistance, Department of Education officials, and other guests. My name is Robert Kelchen and I am an assistant professor in the Department of Education Leadership, Management and Policy at Seton Hall University and the methodologist for Washington Monthly magazine’s annual college rankings. All opinions expressed in this testimony are my own, and I thank the Committee for the opportunity to present.

I am focusing my testimony on PIRS as an accountability mechanism, as that appears to be the Obama Administration’s stated goal in developing ratings. A student-friendly rating tool can have value, but I am confident that third parties can use the Department’s data to develop a better tool. The Department should not simultaneously develop a consumer-oriented ratings system, nor should they release a draft of PIRS without providing information about where colleges stand under the proposed system. I am also not offering an opinion on the utility of PIRS as an accountability measure, as the value of the system depends on details that have not yet been decided.

The Department has a limited number of potential choices for metrics in PIRS regarding access, affordability, and outcomes. While I will submit comments on a range of metrics for the record, I would like to discuss earnings metrics today. In order not to harm colleges that educate large numbers of teachers, social workers, and others who have important but lower-salary jobs, I encourage the Department to adopt an earnings metric indexed to the federal poverty guidelines. For example, the cutoff could be 150% of the federal poverty line for a family of two, or roughly $23,000 per year.
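
The arithmetic behind that figure is simple. Here is a minimal sketch, assuming the 2014 poverty guideline of $15,730 for a two-person household in the contiguous states (the exact guideline should be checked against the HHS figures for each year):

```python
# Earnings cutoff indexed to the federal poverty guidelines rather than a fixed dollar amount.
POVERTY_GUIDELINE_FAMILY_OF_TWO_2014 = 15_730  # assumed 2014 HHS guideline, contiguous states

def earnings_cutoff(multiplier: float = 1.50,
                    guideline: float = POVERTY_GUIDELINE_FAMILY_OF_TWO_2014) -> float:
    """Minimum post-college earnings threshold as a multiple of the poverty guideline."""
    return multiplier * guideline

print(earnings_cutoff())  # 23595.0, roughly the $23,000 per year cited above
```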

There are a number of methodological decisions that the Department must make in developing PIRS. I focus on five in this testimony.

The first decision is whether to classify colleges into peer groups. While supporters of the idea state that it is necessary in order to make fairer comparisons among similar colleges, I do not feel this is necessary in a well-designed accountability system. I suggest combining all four-year institutions into one group and then separating two-year institutions based on whether more associate’s degrees or certificates were awarded, as this distinction affects graduation rates.

Instead of placing colleges into peer groups, some outcomes should be adjusted for inputs such as student characteristics and selectivity. This partially controls for important differences across colleges that are correlated with outcomes, providing an estimate of a college’s “value-added” to students. But colleges should also be held to minimum outcome standards (such as a 25% graduation rate) in addition to minimum value-added standards.
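
One way to operationalize the input adjustment is to regress the outcome on the input measures and treat the residual as the value-added estimate, while still enforcing the absolute floor. A minimal sketch with made-up institution-level data, using percent Pell and admit rate as stand-ins for student characteristics and selectivity:

```python
import numpy as np

# Hypothetical institution-level data.
grad_rate  = np.array([0.62, 0.45, 0.33, 0.71, 0.28, 0.55])  # outcome
pct_pell   = np.array([0.25, 0.40, 0.55, 0.20, 0.60, 0.35])  # input: student characteristics
admit_rate = np.array([0.50, 0.70, 0.85, 0.40, 0.90, 0.65])  # input: selectivity

# OLS: predict graduation rate from inputs; the residual is the "value-added" estimate.
X = np.column_stack([np.ones_like(pct_pell), pct_pell, admit_rate])
beta, *_ = np.linalg.lstsq(X, grad_rate, rcond=None)
value_added = grad_rate - X @ beta

meets_floor = grad_rate >= 0.25  # minimum outcome standard (e.g., a 25% graduation rate)
print(np.round(value_added, 3))
print(meets_floor)
```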

The scoring system and the number of colleges in each rating tier are crucial to the potential feasibility and success of PIRS. A simple system with three or four carefully named tiers (no A-F grades, please!) is sufficient to identify the lowest-performing and highest-performing colleges. I would suggest three tiers, with the lowest 10% of colleges in the bottom tier, the middle 80% in the next tier, and the highest 10% in the top tier. While the scores all have error due to data limitations, focusing on the bottom 10% makes it unlikely that any college in the lowest tier has a true performance outside the bottom one-third of colleges. Using multiple years of data will also help reduce year-to-year randomness; I use three years of data for the Washington Monthly rankings.
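
A minimal sketch of that tier assignment, assuming each college already has a composite score averaged over three years of data (all values hypothetical):

```python
import pandas as pd

scores = pd.DataFrame({
    "college": list("ABCDEFGHIJ"),
    "score_3yr_avg": [0.12, 0.55, 0.48, 0.91, 0.33, 0.67, 0.05, 0.74, 0.59, 0.88],
})

# Percentile rank, then bottom 10% / middle 80% / top 10% tiers.
pct = scores["score_3yr_avg"].rank(pct=True)
scores["tier"] = pd.cut(pct, bins=[0.0, 0.10, 0.90, 1.0],
                        labels=["bottom 10%", "middle 80%", "top 10%"])
print(scores.sort_values("score_3yr_avg"))
```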

Finally, the Department must carefully consider how to weight individual metrics. While I would expect access, affordability, and outcomes to be equally weighted, the colleges in the top and bottom tier should not change much when different weights are used for each metric. If the Department finds the results are highly sensitive to model specifications, the utility of PIRS comes into question.
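
A quick sensitivity check along those lines is to recompute the composite under alternative weights and see whether the bottom-tier colleges change (again, all data and names are hypothetical):

```python
import pandas as pd

metrics = pd.DataFrame({
    "college": list("ABCDEFGHIJ"),
    "access":        [0.30, 0.80, 0.50, 0.90, 0.20, 0.70, 0.10, 0.60, 0.40, 0.85],
    "affordability": [0.40, 0.70, 0.60, 0.80, 0.30, 0.65, 0.20, 0.50, 0.45, 0.90],
    "outcomes":      [0.35, 0.75, 0.55, 0.95, 0.25, 0.60, 0.15, 0.70, 0.50, 0.80],
})

def bottom_tier(weights: dict[str, float], share: float = 0.10) -> set[str]:
    """Colleges whose weighted composite falls at or below the given percentile."""
    composite = sum(w * metrics[col] for col, w in weights.items())
    return set(metrics.loc[composite <= composite.quantile(share), "college"])

equal_weights = bottom_tier({"access": 1/3, "affordability": 1/3, "outcomes": 1/3})
outcome_heavy = bottom_tier({"access": 0.2, "affordability": 0.2, "outcomes": 0.6})
print(equal_weights, outcome_heavy, equal_weights == outcome_heavy)
```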

I conclude with three recommendations—two for the Department and one for the policy community. The Department must be willing to adjust ratings criteria as needed and accept feedback on the draft ratings from a wide variety of stakeholders. They also must start auditing IPEDS data from a random sample of colleges in order to make sure the data are accurate, as the implications of incorrectly or falsely reported data are substantial. Finally, the policy community needs to continue to push for better higher education data. The Student Achievement Measure project has the potential to improve graduation rate reporting, and overturning the federal ban on unit record data would greatly improve the Department’s ability to accurately measure colleges’ performance.

Thank you once again for the opportunity to present and I look forward to answering any questions.

Rankings, Rankings, and More Rankings!

We’re finally reaching the end of the college rankings season for 2014. Money magazine started off the season with its rankings of 665 four-year colleges based on “educational quality, affordability, and alumni earnings.” (I generally like these rankings, in spite of the inherent limitations of using Rate My Professor scores and Payscale data in lieu of more complete information.) I jumped into the fray late in August with my friends at Washington Monthly for our annual college guide and rankings. This was closely followed by a truly bizarre list from the Daily Caller of “The 52 Best Colleges In America PERIOD When You Consider Absolutely Everything That Matters.”

But like any good infomercial, there’s more! Last night, the New York Times released its set of rankings focusing on how elite colleges are serving students from lower-income families. They examined the roughly 100 colleges with a four-year graduation rate of 75% or higher, only three of which (the University of North Carolina-Chapel Hill, the University of Virginia, and the College of William and Mary) are public. By examining the percentage of students receiving Pell Grants in the past three years and the net price of attendance (the total sticker price less all grant aid) for 2012-13, they created a “College Access Index” measuring how many standard deviations from the mean each college falls.
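
The index appears to be a straightforward standardization of those two measures. A minimal sketch of that kind of calculation follows; the NYT’s exact recipe may differ, and the numbers here are hypothetical.

```python
import pandas as pd

elite = pd.DataFrame({
    "college":   ["U1", "U2", "U3", "U4"],
    "pct_pell":  [0.22, 0.13, 0.17, 0.10],      # share of recent freshmen receiving Pell Grants
    "net_price": [21000, 28000, 17000, 30000],  # 2012-13 net price of attendance, dollars
})

# Standardize each measure (standard deviations from the mean); a lower net price
# is better for access, so its z-score is subtracted.
z_pell  = (elite["pct_pell"]  - elite["pct_pell"].mean())  / elite["pct_pell"].std()
z_price = (elite["net_price"] - elite["net_price"].mean()) / elite["net_price"].std()
elite["access_index"] = z_pell - z_price

print(elite.sort_values("access_index", ascending=False))
```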

My first reaction upon reading the list is that it seems a lot like what we introduced in Washington Monthly’s College Guide this year—a list of “Affordable Elite” colleges. We looked at the 224 most selective colleges (including many public universities) and ranked them using graduation rate, graduation rate performance (are they performing as well as we would expect given the students they enroll?), and student loan default rates in addition to percent Pell and net price. Four University of California colleges were in our top ten, with the NYT’s top college (Vassar) coming in fifth on our list.

I’m glad to see the New York Times focusing on economic diversity in their list, but it would be nice to look at a slightly broader swath of colleges that serve more than a handful of lower-income students. As The Chronicle of Higher Education notes, the Big Ten Conference enrolls more Pell recipients than all of the colleges ranked by the NYT. Focusing on the net price for families making between $30,000 and $48,000 per year is also a concern at these institutions due to small sample sizes. In 2011-12 (the most recent year of publicly available data), Vassar enrolled 669 first-year students, of whom 67 were in the $30,000-$48,000 income bracket.

The U.S. News & World Report college rankings also came out this morning, and not much changed from last year. Princeton, which is currently fighting a lawsuit challenging whether the entire university should be considered a nonprofit enterprise, is the top national university on the list, while Williams College in Massachusetts is the top liberal arts college. Nick Anderson at the Washington Post has put together a nice table showing changes in rankings over five years; most changes wouldn’t register as being statistically significant. Northeastern University, which has risen into the top 50 in recent years, is an exception. However, as this great piece in Boston Magazine explains, Northeastern’s only focus is to rise in the U.S. News rankings. (They’re near the bottom of the Washington Monthly rankings, in part because they’re really expensive.)

Going forward, the biggest set of rankings for the rest of the fall will be the new college football rankings—as the Bowl Championship Series rankings have been replaced by a 13-person committee. (And no, Bob Morse from U.S. News is not a member, although Condoleezza Rice is.) I like Gregg Easterbrook’s idea at ESPN about including academic performance as a component in college football rankings. That might be worth considering as a tiebreaker if the playoff committee gets deadlocked solely using on-field performance. They could also use the Washington Monthly rankings, but Minnesota has a better chance of winning a Rose Bowl before that happens.

[ADDENDUM: Let’s also not forget about the federal government’s effort to rate (not rank) colleges through the Postsecondary Institution Ratings System (PIRS). That is supposed to come out this fall, as well.]