Public Comments to the Department of Education on College Ratings

It may be a new year, but the Obama Administration’s proposed Postsecondary Institution Ratings System (PIRS) is still a hot topic. Most observers in the higher education policy and research communities (myself included) were less than overwhelmed by the proposed metrics released on December 19—sixteen months after the idea of ratings was first floated. My first take on the metrics can be found here, and there are too many good pieces about the metrics to mention them all.

The U.S. Department of Education has invited the public to provide additional feedback about the metrics used in PIRS (as well as the ratings system itself). You can submit your comments here before February 17. Below are my comments that I will submit to ED.

—————

January 5, 2015

My name is Robert Kelchen and I am an assistant professor in the Department of Education Leadership, Management and Policy at Seton Hall University as well as the methodologist for Washington Monthly magazine’s annual college rankings. (All opinions are my own.) After carefully examining the draft metrics proposed for potential inclusion in the Postsecondary Institution Ratings System (PIRS), I have the following comments and suggestions:

First, I am encouraged by the decision to exclude nondegree-granting colleges (mainly small for-profit colleges) from PIRS, as they are already subject to gainful employment regulations. Holding them accountable under two different metrics is unreasonable. But in the two-year sector, it is essential to rate colleges that primarily grant associate’s degrees separately from those that primarily grant certificates due to the different lengths of those programs. The Department must divide two-year colleges by their program emphasis (degree or certificate) in order for those ratings to be viewed as reasonable.

While I am glad to see discussions of multiple data sources in the draft metrics, I think the focus in the short term has to be using IPEDS data and previously-collected National Student Loan Data System (NSLDS) data for student loan repayment or default rates. Using NSLDS data for student background characteristics (such as first-generation status) is nice for the future, but is unlikely to be ready by this fall—particularly if colleges wish to dispute those data. I encourage the Department to focus on two sets of measures: refining readily available metrics from IPEDS and NSLDS for the draft ratings and continuing to develop new metrics for 2018 and beyond.

Most of the metrics proposed seem reasonable, although I am thoroughly confused by the “EFC gap” metric due to the lack of details provided. Would this be a measure of unmet need, of the percentage of FAFSA filers below a certain EFC, or something else? The Department should consider how strongly correlated the EFC gap measure may be with existing net price or family income data already in IPEDS—and also issue additional guidance on what the metric might be so the public can provide more informed comments.

I was disappointed not to see a technical discussion of potential weights that could be used in the system, and there were no mentions of the possibility of using multiple years of data in developing PIRS. It is important that the ratings be reasonably robust to a number of model specifications, including variables selected and weights used. I encourage the Department to continue working in this area and consulting with statisticians and education researchers.

While I do not expect PIRS to be tied to any federal financial aid dollars—and it is quite possible that draft ratings are never released to the public—the Department has a tremendous opportunity to improve data collection. Overturning the ban on student unit record data would significantly improve the quality of the data, but this is a great time to have a conversation about what information should be collected and processed for both public-sector and private-sector accountability systems. I am happy to provide assistance to the Department if desired and I wish them the best of luck in this difficult endeavor.

—————

I encourage everyone with an interest in PIRS to submit comments on the ratings, and to leave a copy of your comments in the comments section of this blog post.

How to Calculate–and Not Calculate–Net Prices

Colleges’ net prices, which the U.S. Department of Education defines as the total cost of attendance (tuition and fees, room and board, books and supplies, and other living expenses) less all grant and scholarship aid, have received a lot of attention in the last few years. All colleges are required by the Higher Education Opportunity Act to have a net price calculator on their website, where students can get an estimate of their net price by inputting financial and academic information. Net prices are also used for accountability purposes, including in the Washington Monthly college rankings that I compile, and are likely to be included in the Obama Administration’s Postsecondary Institution Ratings System (PIRS) that could be released in the next several weeks.
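Because that definition underlies everything that follows, a minimal sketch of the calculation may be useful; the component amounts below are hypothetical and purely for illustration.

```python
def net_price(tuition_fees, room_board, books_supplies, other_expenses, grant_aid):
    """Federal net price: total cost of attendance less all grant and scholarship aid."""
    cost_of_attendance = tuition_fees + room_board + books_supplies + other_expenses
    return cost_of_attendance - grant_aid

# A hypothetical student: $23,000 cost of attendance, $12,000 in grants.
print(net_price(tuition_fees=9_500, room_board=10_000,
                books_supplies=1_200, other_expenses=2_300,
                grant_aid=12_000))  # 11000
```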

Two recently released reports have looked at the net price of attendance, but only one of them is useful to either researchers or families considering colleges. A new Brookings working paper by Phillip Levine makes a good contribution to the net price discussion by making a case for using the median net price (instead of the average) for both consumer information and accountability purposes. He uses data from Wellesley College’s net price calculator to show that the median low-income student faces a net price well below the listed average net price. The average is higher than the median at Wellesley because a small number of low-income students pay a high net price, while a much larger number of students pay a relatively low price. The outlying values for a small number of students pull up the average.
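A quick illustration of Levine’s point, using invented numbers rather than the Wellesley data: a handful of high-net-price outliers pulls the mean well above the median.

```python
import statistics

# Hypothetical net prices for ten low-income students at one college.
net_prices = [3_000, 3_500, 4_000, 4_200, 4_500,
              5_000, 5_500, 6_000, 25_000, 30_000]

print(statistics.mean(net_prices))    # 9070.0 -- inflated by the two outliers
print(statistics.median(net_prices))  # 4750.0 -- closer to the typical student
```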

I used data from the 2011-12 National Postsecondary Student Aid Study, a nationally-representative sample of undergraduate students, to compare the average and median net prices for dependent and independent students by family income quartile. The results are below:

Comparing average and median net prices by family income quartile (dollars per year).
Income quartile   Average   10th %ile   25th %ile   Median   75th %ile   90th %ile
Dependent students: Parents’ income ($1,000s)
<30 10,299 2,500 4,392 8,113 13,688 20,734
30-64 13,130 3,699 6,328 11,077 17,708 24,750
65-105 16,404 4,383 8,178 14,419 21,839 30,174
106+ 20,388 4,753 9,860 18,420 27,122 39,656
Independent students: student and spouse’s income ($1,000s)
<7 10,972 3,238 5,000 8,889 14,385 22,219
7-19 11,114 3,475 5,252 9,068 14,721 22,320
20-41 10,823 3,426 4,713 8,744 14,362 21,996
42+ 10,193 3,196 4,475 7,931 13,557 20,795
SOURCE: National Postsecondary Student Aid Study 2011-12.


Across all family income quartiles for both dependent and independent students, the average net price is higher than the median net price. About 60% of students pay a net price at or below the average net price reported to IPEDS, suggesting that switching to reporting the median net price might improve the quality of available information.

The second report was the annual Trends in College Pricing report, published by the College Board. The report concluded that net prices are modest and have actually decreased in several years during the last decade. However, its definition of “net price” suffers from two fatal flaws:

(1) “Net price” doesn’t include all cost of attendance components. The report publicizes a “net tuition” measure and a “net tuition, fees, room and board” measure, but the cost of attendance also includes books and supplies as well as other living expenses such as transportation, personal care, and a small entertainment allowance. (For more on living costs, see this new working paper I have out with Braden Hosch of Stony Brook and Sara Goldrick-Rab of Wisconsin.) Omitting these components understates what students and their families should actually expect to pay for college, although living costs can vary across individuals.

(2) Tax credits are included with grant aid in the report’s “net price” definition. Students and their families do not receive the tax credit until they file their taxes in the following year, meaning that costs incurred in August may be partially reimbursed the following spring. That does little to help families pay for college upfront, when the money is actually needed. Additionally, not all families that qualify for education tax credits actually claim them. In this New America Foundation blog post, Stephen Burd notes that about 25% of families don’t claim tax credits—and the take-up rate is likely even lower among lower-income families.

Sadly, the College Board report has gotten a lot of attention in spite of its inaccurate net price definitions. I would like to see a robust discussion about the important Brookings paper and how we can work to improve net price data—with the correct definition used.

Gainful Employment and the Federal Ability to Sanction Colleges

The U.S. Department of Education’s second attempt at “gainful employment” regulations, which apply to the majority of vocationally-oriented programs at for-profit colleges and certain nondegree programs at public and private nonprofit colleges, was released to the public this morning. The Department’s first effort in 2010 was struck down by a federal judge after the for-profit sector challenged a loan repayment rate metric because it required additional data collection on students that would be illegal under current federal law.

The 2014 measure was widely expected to contain two components: a debt-to-earnings ratio that required program completers’ annual loan payments to be less than 8% of total income or 20% of “discretionary income” above 150% of the poverty line, and a cohort default rate measure that required fewer than 30% of program borrowers (regardless of completion status) to default on federal loans within three years. As excellent articles on the newly released measure in The Chronicle of Higher Education and Inside Higher Ed this morning detail, the cohort default rate measure was unexpectedly dropped from the final regulation. This change, Inside Higher Ed reports, would reduce the number of affected programs from 1,900 to 1,400 and the number of affected students from about one million to 840,000.
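For readers who want the mechanics, here is a rough sketch of the debt-to-earnings test as described above; the poverty guideline value and the example numbers are my own illustrative assumptions, not figures from the regulation.

```python
POVERTY_GUIDELINE = 11_670  # approximate 2014 guideline, one-person household

def passes_debt_to_earnings(annual_loan_payment, annual_earnings,
                            poverty_guideline=POVERTY_GUIDELINE):
    """Pass if payments are <= 8% of total earnings, or <= 20% of
    discretionary earnings (the portion above 150% of the poverty line)."""
    discretionary = max(annual_earnings - 1.5 * poverty_guideline, 0)
    annual_rate = annual_loan_payment / annual_earnings
    discretionary_rate = (annual_loan_payment / discretionary
                          if discretionary > 0 else float("inf"))
    return annual_rate <= 0.08 or discretionary_rate <= 0.20

print(passes_debt_to_earnings(2_400, 32_000))  # True: 7.5% of total earnings
print(passes_debt_to_earnings(3_000, 25_000))  # False: 12% of total, ~40% of discretionary
```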

There will be a number of analyses of the exact details of gainful employment over the coming days (I highly recommend anything written by Ben Miller at the New America Foundation), but I want to briefly discuss what the changes to the gainful employment rule mean for other federal accountability policies. Just over a month ago, the Department of Education released cohort default rate data, but they tweaked a calculation at the last minute that had the effect of allowing more colleges to get under the 30% default rate threshold at least once in three years to avoid sanctions.

The last-minute changes to both gainful employment and cohort default rate accountability measures highlight the political difficulty of the current sanctioning system, which operates on an all-or-nothing basis. When the only funding lever the federal government uses is so crude, colleges have a strong incentive to lobby against rules that could effectively shut them down. It is long past time for the Department of Education to consider sliding sanctions against colleges with less-than-desirable outcomes if the goal is to eventually cut off financial aid to the poorest performing institutions.

Finally, the successful lobbying efforts of different sectors of higher education make it appear less likely that the Obama Administration’s still-forthcoming Postsecondary Institution Ratings System (PIRS) will be able to tie financial aid to college ratings. This measure still requires Congressional approval, but the Department of Education’s willingness to propose sanctions has been substantially weakened over the last month. It remains to be seen if the Department of Education under the current administration will propose how PIRS will be tied to aid before the clock runs out on the Obama presidency.

Analyzing the New Cohort Default Rate Data

The U.S. Department of Education today released cohort default rates (CDR) by college, which reflect the percentage of students who default on their loans within three years of entering repayment. This is a big deal for colleges, as any college that had a CDR of more than 30% for three consecutive years could lose its federal financial aid eligibility. I analyzed what we can learn from CDRs—a limited amount—in a blog post earlier this week.

And then things got interesting in Washington. The Department of Education put out a release yesterday noting that some students with loans from multiple servicers (known as “split servicers”) were current on some loans and defaulting on others. In this release, ED noted that the split servicer students were being dropped from CDRs over the last three years—but only if a college was close to the eligibility threshold. This led many to question whether ED was serious about using CDRs as an accountability tool, and to try to glean implications for the upcoming college ratings system.

The summary data for cohort default rates by year and sector are available here, and show a decline from a 14.7% default rate in Fiscal Year 2010 to 13.7% in FY 2011. Default rates in each major sector of higher education also fell, led by a decline from 21.8% to 19.1% in the for-profit sector. However, the FY 2009 and 2010 data in this release are identical to the FY 2009 and 2010 data in last year’s release, which came out before the split servicer change was adopted. Something doesn’t seem to be right there.

Twenty-one colleges are subject to sanctions under the new CDRs, all but one of which (Ventura Adult and Continuing Education) are for-profit. Most of the colleges subject to sanctions are small beauty or cosmetology institutions and reflect a very small percentage of total enrollment. We don’t know how many other colleges would have crossed over 30%, if not for the split servicer changes.

This year’s data show some very fortunate colleges. Among colleges with a sufficiently high participation rate, six institutions had CDRs of between 29 and 29.9 percent after being over 30% in the previous two years. They are led by Paris Junior College, with a 29.9% CDR in FY 2011 after being over 40% in each of the two previous years. Other colleges weren’t so lucky. For example, the Aviation Institute of Maintenance was at 38.9% in FY 2009 and 36.1% in FY 2010, and improved to 31.1% in FY 2011—but is still subject to sanctions.

FY 2011 CDRs for colleges with FY 2009 and FY 2010 CDRs above 30% (all figures in percent)
Name FY 2011 FY 2010 FY 2009
SEARCY BEAUTY COLLEGE 9.3 30.7 38.2
NEW CONCEPT MASSAGE AND BEAUTY SCHOOL 9.7 30.1 35.2
UNIVERSITY OF ANTELOPE VALLEY 12 31.8 30.6
PAUL MITCHELL THE SCHOOL ESCANABA 12.1 40 68.7
SAFFORD COLLEGE OF BEAUTY CULTURE 13.1 36.8 36.3
COMMUNITY CHRISTIAN COLLEGE 13.9 33.3 38.8
UNIVERSITY OF SOUTHERNMOST FLORIDA 14.6 30.8 35.1
SOUTHWEST UNIVERSITY AT EL PASO 15.5 36.1 37.5
CENTRO DE ESTUDIOS MULTIDISCIPLINARIOS 15.6 39.2 50.9
VALLEY COLLEGE 17.2 36.9 32.7
AMERICAN BROADCASTING SCHOOL 17.5 30.8 44.6
SUMMIT COLLEGE 17.6 30.9 30.5
VALLEY COLLEGE 19.4 56.5 37.5
AMERICAN UNIVERSITY OF PUERTO RICO 21 31.2 36.6
BRYAN UNIVERSITY 21.1 30.2 30.4
SOUTH CENTRAL CAREER CENTER 22 32.6 35.1
PAUL MITCHELL THE SCHOOL ARKANSAS 22 37.5 30
D-JAY’S SCHOOL OF BEAUTY, ARTS & SCIENCES 22.2 37.5 41.9
PAUL MITCHELL THE SCHOOL GREAT LAKES 22.2 34.6 33.9
KILGORE COLLEGE 22.7 30.2 33.5
ANTONELLI COLLEGE 22.8 33 35.1
OLD TOWN BARBER COLLEGE 23 37.7 40
OZARKA COLLEGE 23.1 41.8 35
TESST COLLEGE OF TECHNOLOGY 23.4 33.7 32
CENTURA COLLEGE 23.7 32 35
RUST COLLEGE 23.7 32 31.6
CARSON CITY BEAUTY ACADEMY 23.8 31.8 43.3
BACONE COLLEGE 24.1 32 30
KAPLAN CAREER INSTITUTE 24.1 32.5 33.6
TECHNICAL CAREER INSTITUTES 24.3 38.8 34.9
VICTOR VALLEY COMMUNITY COLLEGE 24.6 32.6 31
SOUTHWESTERN CHRISTIAN COLLEGE 24.6 32.7 43.1
AMERICAN BEAUTY ACADEMY 24.8 35.7 34.6
CENTURA COLLEGE 24.8 31.5 34.7
DENMARK TECHNICAL COLLEGE 25 30.8 31.6
MILAN INSTITUTE OF COSMETOLOGY 25 32.4 41.5
TREND BARBER COLLEGE 25 43.5 60.5
JACKSONVILLE BEAUTY INSTITUTE 25.2 33.3 41.7
CONCEPT COLLEGE OF COSMETOLOGY 25.3 41.5 34.2
EASTERN OKLAHOMA STATE COLLEGE 25.4 31.8 30
OTERO JUNIOR COLLEGE 25.5 34.2 38.2
LANGSTON UNIVERSITY 25.5 32.5 32.9
COLLEGEAMERICA DENVER 25.5 34.8 38.3
AVIATION INSTITUTE OF MAINTENANCE 25.8 36.9 39.6
EMPLOYMENT SOLUTIONS 26 38.5 30
SANFORD-BROWN COLLEGE 26.2 31.6 31.5
CAMBRIDGE INSTITUTE OF ALLIED HEALTH AND TECHNOLOGY 26.6 33.3 35
ANTELOPE VALLEY COLLEGE 26.9 32.6 33.2
UNIVERSITY OF ARKANSAS COMMUNITY COLLEGE AT BATESVILLE 26.9 30.6 31.6
CC’S COSMETOLOGY COLLEGE 27.4 40.3 35.9
MILWAUKEE CAREER COLLEGE 27.6 34.1 32.7
NTMA TRAINING CENTERS OF SOUTHERN CALIFORNIA 27.8 32.1 34.2
CONCORDIA COLLEGE ALABAMA 27.9 31.4 37.5
NORTH AMERICAN TRADE SCHOOLS 28 31 31.1
AVIATION INSTITUTE OF MAINTENANCE 28.1 37.9 39.8
MEDIATECH INSTITUTE 28.4 33.3 33.3
SEBRING CAREER SCHOOLS 29 54.1 57.5
MOHAVE COMMUNITY COLLEGE 29.3 32.7 36.7
CHERYL FELL’S SCHOOL OF BUSINESS 29.4 38 31.2
AVIATION INSTITUTE OF MAINTENANCE 29.4 36.1 38.9
KLAMATH COMMUNITY COLLEGE 29.4 33 31.7
PARIS JUNIOR COLLEGE 29.9 40.7 41.5
STYLEMASTERS COLLEGE OF HAIR DESIGN 30.6 46.6 37
LASSEN COLLEGE 30.8 37.1 37.7
AVIATION INSTITUTE OF MAINTENANCE 31.1 37.5 32.2
CHARLESTON SCHOOL OF BEAUTY CULTURE 31.7 37.5 34
PALLADIUM TECHNICAL ACADEMY 33 39.4 46.2
L T INTERNATIONAL BEAUTY SCHOOL 38.1 37.7 38
TIDEWATER TECH 38.6 42.7 55
JAY’S TECHNICAL INSTITUTE 40.6 53.8 51.5
OHIO STATE COLLEGE OF BARBER STYLING 41.1 37.8 32.9
MEMPHIS INSTITUTE OF BARBERING 44.7 47.2 44.4
FLORIDA BARBER ACADEMY 46.5 41.7 32.5
SAN DIEGO COLLEGE 49.3 34 35.7

Fully 35 colleges with sufficient participation rates had CDRs between 29.0% and 29.9% in FY 2011, including a mix of small for-profit colleges, HBCUs, and community colleges. The University of Arkansas-Pine Bluff, a designated minority-serving institution, has had CDRs of 29.9%, 29.2%, and 29.8% in the last three years. Mt. San Jacinto College and Harris-Stowe State University also had CDRs just under 30% in each of the last three years. Only 19 colleges, representing a mix of institutional types, had CDRs between 30.0% and 30.9%. This includes Murray State College in Oklahoma, which was at 30.0% in FY 2011, 28.9% in FY 2010, and 31.1% in FY 2009. Forty-three colleges were between 28.0% and 28.9%.

FY 2011 CDRs between 29 and 31 percent
Name FY 2011 FY 2010 FY 2009
OHIO TECHNICAL COLLEGE 29 24.1 21.3
DAYMAR COLLEGE 29 28.9 46.2
SEBRING CAREER SCHOOLS 29 54.1 57.5
L’ESPRIT ACADEMY 29.1 0 0
BLACK RIVER TECHNICAL COLLEGE 29.1 27.9 26.6
NEW SCHOOL OF RADIO & TELEVISION 29.1 26.2 28.1
LOUISBURG COLLEGE 29.2 28.7 24.7
MOHAVE COMMUNITY COLLEGE 29.3 32.7 36.7
HARRIS SCHOOL OF BUSINESS 29.3 25.6 17.8
INTELLITEC MEDICAL INSTITUTE 29.3 27.1 24.7
GALLIPOLIS CAREER COLLEGE 29.3 33.9 29.4
CHERYL FELL’S SCHOOL OF BUSINESS 29.4 38 31.2
COLLEGE OF THE SISKIYOUS 29.4 27.7 27.1
AVIATION INSTITUTE OF MAINTENANCE 29.4 36.1 38.9
KLAMATH COMMUNITY COLLEGE 29.4 33 31.7
COLORLAB ACADEMY OF HAIR, THE 29.4 24.3 12.5
DIGRIGOLI SCHOOL OF COSMETOLOGY 29.4 21.6 23.5
VIRGINIA SCHOOL OF MASSAGE 29.4 14.8 22
WASHINGTON COUNTY COMMUNITY COLLEGE 29.5 20.5 12.7
MT. SAN JACINTO COLLEGE 29.5 29.9 26.5
WEST TENNESSEE BUSINESS COLLEGE 29.5 32.6 21.8
BRITTANY BEAUTY SCHOOL 29.5 31.9 26.4
JOHN PAOLO’S XTREME BEAUTY INSTITUTE, GOLDWELL PRODUCTS ARTISTRY 29.5 25 0
HARRIS – STOWE STATE UNIVERSITY 29.6 27.9 26.5
CARIBBEAN UNIVERSITY 29.6 29.9 29.9
GUILFORD TECHNICAL COMMUNITY COLLEGE 29.7 26 19
WARREN COUNTY CAREER CENTER 29.7 22.9 25
STARK STATE COLLEGE 29.7 24.5 17.2
STRAND COLLEGE OF HAIR DESIGN 29.7 17.9 11.1
INDEPENDENCE COLLEGE OF COSMETOLOGY 29.8 21.6 18.4
FRANK PHILLIPS COLLEGE 29.8 25.2 29.1
MEDICAL ARTS SCHOOL (THE) 29.8 21.6 13.1
NEW MEXICO JUNIOR COLLEGE 29.8 24.1 23.1
PARIS JUNIOR COLLEGE 29.9 40.7 41.5
UNIVERSITY OF ARKANSAS AT PINE BLUFF 29.9 29.2 29.8
MURRAY STATE COLLEGE 30 28.9 31.1
JARVIS CHRISTIAN COLLEGE 30 36.5 29.3
BUSINESS INDUSTRIAL RESOURCES 30.1 19.1 20.9
LONG BEACH CITY COLLEGE 30.1 24.2 19
EASTERN GATEWAY COMMUNITY COLLEGE 30.1 0 0
MARTIN UNIVERSITY 30.2 19.8 18.7
LANE COMMUNITY COLLEGE 30.2 30.6 19.5
CAREER QUEST LEARNING CENTER 30.2 24.1 16.1
NIGHTINGALE COLLEGE 30.3 25 16.6
EMPIRE BEAUTY SCHOOL 30.4 31.6 25.2
NATIONAL ACADEMY OF BEAUTY ARTS 30.4 20.6 5.6
BAR PALMA BEAUTY CAREERS ACADEMY 30.5 35.8 26.8
WEST VIRGINIA UNIVERSITY – PARKERSBURG 30.5 25.8 24.1
PENSACOLA SCHOOL OF MASSAGE THERAPY & HEALTH CAREERS 30.5 17.3 10
PROFESSIONAL MASSAGE TRAINING CENTER 30.6 14.8 13
UNIVERSAL THERAPEUTIC MASSAGE INSTITUTE 30.6 23.5 17.2
STYLEMASTERS COLLEGE OF HAIR DESIGN 30.6 46.6 37
CCI TRAINING CENTER 30.8 26.5 26.7
INSTITUTE OF AUDIO RESEARCH 30.8 29.7 17
LASSEN COLLEGE 30.8 37.1 37.7
KAPLAN CAREER INSTITUTE 30.8 34.6 29.7
TRANSFORMED BARBER AND COSMETOLOGY ACADEMY 30.9 66.6 0
MAYSVILLE COMMUNITY AND TECHNICAL COLLEGE 30.9 26.4 24.5
TRI-COUNTY TECHNICAL COLLEGE 30.9 27.2 16.1

Some of the larger for-profits fared better, potentially due to split servicers. The University of Phoenix’s CDR was 19.0% in FY 2011, down from 26.0% in FY 2010 and 26.4% in FY 2009. DeVry University was at 18.5% in FY 2011, down from 23.4% in FY 2010 and 24.1% in FY 2009. ITT Technical Institute also improved, going from 33.3% in FY 2009 to 28.6% in FY 2010 and 22.4% this year. (Everest College disaggregates its data by campus, but the results are similar.)

The CDR data are not without controversy, but they are an important accountability tool going forward. It will be interesting to see whether and how these data will be used in the draft Postsecondary Institution Ratings System later this fall.

What Are Cohort Default Rates Good For?

Today marks the start of the U.S. Department of Education’s annual release of cohort default rates (CDR), which reflect the percentage of students who default on their loans within three years of entering repayment. Colleges were informed of their rates today, with a release to the public coming sometime soon. This release, tracking students who entered repayment in Fiscal Year 2011, will be the third year that three-year CDRs have been collected and completes a shift from two-year to three-year CDRs for accountability purposes.

Before this year, colleges were subject to sanctions based on their two-year CDRs. Any college that had a two-year CDR of more than 40% in one year could lose its federal student loan eligibility, while any college with a two-year CDR of over 25% for three consecutive years could lose all federal financial aid eligibility. (Colleges with a very small percentage of borrowers can get an exemption.) While this was a rare occurrence (fewer than ten colleges were impacted last year), the switch to a three-year CDR has worried colleges even as the allowed CDR over three years rose from 25% to 30%.
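As a minimal sketch of the sanction logic described above, with the thresholds as stated in this post and everything else (such as the small-borrower exemption) simplified away:

```python
def at_risk_of_sanctions(cdrs, one_year_cap=40.0, consecutive_cap=25.0):
    """cdrs: a college's three most recent annual CDRs, in percent."""
    over_one_year_cap = any(rate > one_year_cap for rate in cdrs)
    over_consecutive_cap = all(rate > consecutive_cap for rate in cdrs)
    return over_one_year_cap or over_consecutive_cap

# The same hypothetical college under the old 25% cap and the new 30% cap:
print(at_risk_of_sanctions([26.0, 27.5, 28.0]))                        # True
print(at_risk_of_sanctions([26.0, 27.5, 28.0], consecutive_cap=30.0))  # False
```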

But as the methodology changes, we need to consider what CDR data are actually good for. Colleges take cohort default rates very seriously, and the federal government is likely to use default rates as a component of the often-discussed (and frequently delayed) Postsecondary Institution Ratings System (PIRS). But should the higher education community, policymakers, or the general public take CDRs seriously? Below are some reasons why the default data are far from complete.

(1) Students are tracked over only three years, and income-based repayment makes the data less valuable. I have previously written about these two issues—and why it’s absurd that the Department of Education doesn’t track students over at least ten years. Income-based repayment means that students can be current on their payments even if their payments are zero, which is good for the student but isn’t exactly a ringing endorsement of a given college’s quality.

(2) Individual campuses are often aggregated to the system level, but this isn’t consistent. One of the biggest challenges as a researcher in higher education finance is that data on loan and grant volumes and student loan default rates come from Federal Student Aid instead of the National Center for Education Statistics. This may sound trivial, but some colleges aggregate FSA data to the system level for reporting purposes while all NCES data are at the campus level. This means that while default data on individual campuses within the University of Wisconsin System are available, data from all of the Penn State campuses are aggregated. Most for-profit systems also aggregate data, likely obscuring some individual branches that would otherwise face sanctions.
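A hypothetical example of why that aggregation matters: a system-level CDR is effectively a borrower-weighted average, so a small branch with a sanction-worthy rate can disappear inside a larger system. All numbers below are invented.

```python
campuses = {  # campus name: (borrowers entering repayment, defaulters)
    "Main campus":  (4_000, 600),  # 15.0% CDR on its own
    "Small branch": (500, 200),    # 40.0% CDR on its own
}

total_borrowers = sum(n for n, _ in campuses.values())
total_defaulters = sum(d for _, d in campuses.values())
system_cdr = 100 * total_defaulters / total_borrowers
print(f"{system_cdr:.1f}%")  # 17.8% -- well under every sanction threshold
```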

(3) Defaults are far from the only adverse outcome, but they are the only one with reported data. Students are not counted as being in default until no payment has been made for at least 271 days, but we have no data on delinquency rates, hardship deferments, or forbearances related to financial problems by campus. As I recently wrote in a guest post for Access to Completion, the percentage of students having repayment difficulties ranges between 17% and 51%, depending on the assumptions made. Campus-level data on these outcomes would be of great interest to many stakeholders, but they are not released.

Does this mean cohort default rates are good for absolutely nothing? No. They’re still useful in identifying colleges (or systems) where a large percentage of borrowers default quickly and a substantial percentage of students borrow. And very low default rates can be a sign either that students are doing well in the labor market after leaving college or that they have the knowledge to enter income-based repayment programs. But for many colleges with middling default rates, far more data (which the Department of Education collects but does not release) are needed to get a better picture of performance.

When the CDR data come out, I’ll have part 2 of this post—focusing on the colleges that are subject to sanctions and what that means for current and future accountability systems.

Are Some Elite Colleges Understating Net Prices?

As a faculty member researching higher education finance, I’m used to seeing the limitations in federal data available to students and their families as they choose colleges. For example, the net price of attendance measure (tuition and fees, room and board, books, and other expenses less any grants received) covers only first-time, full-time students—and therefore excludes a lot of students with great financial need. But a new graphic-heavy report from The Chronicle of Higher Education on net price revealed another huge limitation of the net price data.

The report, titled “Are Poor Families Really Paying Half Their Income at Elite Colleges?” looked at the two ways that some of the most selective public and private colleges calculate household income. About 400 colleges require students to file the CSS/Financial Aid PROFILE (or PROFILE for short) in addition to the FAFSA in order to receive institutional aid; unlike the FAFSA, the PROFILE requires all but the lowest-income students to pay an application fee. Selective colleges require the PROFILE because it includes more questions about household assets than the FAFSA, with the goal of getting a more complete picture of middle-income and upper-income families’ ability to pay for college. This form isn’t really necessary for families with low incomes and little wealth, and can serve as a barrier to attending certain colleges, as noted by Rachel Fishman of the New America Foundation.

The Chronicle piece looked at income data from Notre Dame, which provided both the FAFSA and PROFILE definitions of income. The PROFILE definition of family income resulted in far fewer students in the lowest income bracket (below $30,000 per year) than the FAFSA definition. Because Notre Dame targets more aid to the neediest students, the net price using PROFILE income below $30,000 (the very lowest-income students) was just $4,472 per year, compared to $11,626 using the FAFSA definition.

Notre Dame reported net prices to the Department of Education using the FAFSA definition of family income, which is the same way that all non-PROFILE colleges report income for net price. But the kicker in the Chronicle piece is that apparently some colleges use the PROFILE definition of income to generate net price data for the federal government. These selective colleges look much less expensive than a college like Notre Dame that reports data the way most colleges do, giving them great publicity. Reporting PROFILE-based net prices can also improve these colleges’ performance on Washington Monthly’s list of best bang-for-the-buck colleges, as we use the average net price paid by students from families making less than $75,000 per year in the metric. (But many of the elite colleges don’t make the list because they fail to enroll at least 20% Pell recipients in their student body.)

The Department of Education should put forth language clarifying that net price data should be based on the FAFSA definition of income and not the PROFILE definition that puts fewer students in the lower income brackets and results in a seemingly lower net price. Colleges can report both FAFSA and PROFILE definitions on their own websites, but federal data need to be consistent across colleges.

Building a Better Student Loan Default Measure

Student loan default rates have been a hot political topic as of late given increased accountability pressures at the federal level. Currently, colleges can lose access to all federal financial aid (grants as well as loans) if more than 25% of students defaulted on their loans within two years of leaving college for three consecutive cohorts. Starting later this year, the measure used will be the default rate within three years of leaving college, and the cutoff for federal eligibility will rise to 30%. (Colleges can appeal this result if there are relatively few borrowers.)

But few students should ever have to default on their loans given the availability of various income-based repayment (IBR) plans. (PLUS loans typically aren’t eligible for income-based repayment, but their default rates oddly aren’t tracked and aren’t used for accountability purposes.) If a former student enrolled in IBR falls on tough times, his or her monthly payment will go down—potentially to zero if income is less than 150% of the federal poverty line. As a result, savvy colleges should be encouraging their students to enroll in IBR in order to reduce default rates.
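To make that incentive concrete, here is a rough sketch of an IBR-style payment calculation; the 15% share of discretionary income reflects the original IBR plan, and the poverty guideline is an illustrative one-person figure, so both are assumptions rather than figures from this post.

```python
POVERTY_GUIDELINE = 11_670  # approximate guideline, one-person household

def monthly_ibr_payment(annual_income, poverty_guideline=POVERTY_GUIDELINE,
                        share=0.15):
    """Payment is a share of income above 150% of the poverty line."""
    discretionary = max(annual_income - 1.5 * poverty_guideline, 0)
    return share * discretionary / 12

print(round(monthly_ibr_payment(30_000), 2))  # 156.19
print(monthly_ibr_payment(15_000))            # 0.0 -- current, but paying nothing
```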

And more students are enrolling in IBR. Jason Delisle at the New America Foundation analyzed new Federal Student Aid data out this week that showed that the number of students in IBR doubled from 950,000 to 1.9 million in the last year while outstanding loan balances went from $52.2 billion to $101.0 billion. The federal government’s total Direct Loan portfolio increased from $361.3 billion to $464.3 billion in the last year, meaning that IBR was responsible for nearly half of the increase in loan dollars.

This shift to IBR means that the federal government needs to consider new options for holding colleges accountable for their outcomes. Some options include:

(1) Use a longer default window. The “standard” loan repayment plan is ten years, but defaults are only tracked for three years. A longer window wouldn’t give an accurate picture of outcomes if more students enroll in IBR, but it would provide useful information on students who expect to do well enough after college that standard payments will be a better deal than IBR. This probably requires replacement of the creaky National Student Loan Data System, which may not be able to handle that many more data requests.

(2) Look at the percentage of students who don’t pay anything under IBR. This would effectively measure whether former students are making more than 150% of the poverty line, or about $23,000 per year for a former borrower with one other family member. Even with the woeful salaries in many public service jobs (such as teaching), most former students will likely have to pay something here.

(3) Look at the total amount repaid compared to the amount borrowed. If the goal is to make sure the federal government gets its money back, a measure of the percentage of funds repaid might be useful. Colleges could even be held accountable for part of the unpaid amount if desired. (A rough sketch of such a measure follows below.)
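As a minimal sketch of that repayment-rate idea, with an invented three-borrower cohort: note that a student in IBR with a zero payment never defaults, but also never shows up as repaying anything.

```python
cohort = [  # (amount borrowed, amount repaid so far) per former student
    (20_000, 6_000),
    (12_000, 0),       # current in IBR with a $0 monthly payment
    (30_000, 15_000),
]

borrowed = sum(b for b, _ in cohort)
repaid = sum(r for _, r in cohort)
print(f"Repayment rate: {repaid / borrowed:.1%}")  # 33.9%
```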

As the Department of Education continues to develop draft college ratings (to come out later this fall), they are hopefully having these types of conversations when considering outcome measures. I hope this piece sparks a conversation about potential loan default or repayment measures that can improve upon the currently inadequate measure, so please offer your suggestions as comments below.

Quick Thoughts on the Ryan Higher Education Budget Discussion Draft

Representative Paul Ryan (R-WI) released a proposal called Expanding Opportunity in America this morning, which covered topics including social benefits, the Earned Income Tax Credit, education, criminal justice, and regulatory reform. My focus is on the higher education section, starting on page 44.

First of all, I’m glad to see a discussion of targeting federal funds right at the start of the higher education section. Ryan notes concerns about subsidies going to students who don’t need them (such as education tax credits going to households making up to $180,000 per year) and the large socioeconomic gaps in college completion. This is important to note for both economic efficiency and targeting middle-income voters.

The policy points are below:

  • Simplify the FAFSA. Most policymakers like this idea at this point, but the question is how to do so. The document doesn’t specify how it should be simplified, or if it should go as far as the Alexander/Bennet proposal to knock the FAFSA back to two questions. Ryan supports getting information about aid available to students in eighth grade and using tax data from two years ago (“prior prior year”) to determine aid eligibility, both of which make great sense. I’ve written papers on both early aid commitment and prior prior year.
  • Reform and modernize the Pell program. Ryan is concerned about the fiscal health of the Pell program and is looking for ways to shore up its finances. He raises the idea of using the Supplemental Educational Opportunity Grant (SEOG)—a Pell supplement distributed by campuses—to help fund Pell. I’ve written a paper about how SEOG and work-study allocations benefit very expensive private colleges over colleges that actually serve Pell recipients. It’s a great idea to consider, but parts of One Dupont just may object. Ryan also suggests allowing students to use their Pell funds however they want (effectively restoring the summer Pell Grant), something which much of the higher education community supports.
  • Cap federal loans to graduate students and parents. This will prove to be a controversial recommendation, with the possibility of interesting political bedfellows. While many are concerned about rising debt and the fiscal implications, there are different solutions. The Obama Administration has instead proposed capping forgiveness at $57,500, while letting students borrow more. I’m conflicted as to what the better path is. Is it better to shift students to the private loan market to get any additional funds, or should they get loans with lower interest rates through the federal government that may result in a fiscal train wreck if loan forgiveness isn’t capped? The Ryan proposal has the potential to help slow the growth in college costs, but potentially at the expense of some students’ goals.
  • Consider reforms to the TRIO programs. TRIO programs serve low-income, first-generation students, but Ryan notes that there isn’t a lot of evidence supporting these programs. I admittedly don’t know as much about TRIO as I should, but I like the call for additional research before judging their effectiveness.
  • Expand funding for federal Work-Study programs. The proposal increases work-study funds by allowing colleges to keep expiring Perkins Loan funds instead of returning them to the federal government. This is the wrong way to proceed because Perkins allocations (and current work-study allocations) are also correlated with the cost of attendance. I would rather see a redistribution of work-study funds based on Pell Grant receipt instead of by cost of attendance, as I’ve noted previously.
  • Build stronger partnerships with post-secondary institutions. Most of this is empty platitudes toward colleges, but the last sentence is critical: “Colleges should also have skin in the game, to further encourage their commitment to outcome-based learning.” There seems to be some support on both sides of the aisle for holding institutions accountable for their performance through methods such as partial responsibility for loan defaults, tying financial aid to outcomes, or college ratings, but an agreement looks less likely at this point.
  • Reform the accreditation process. Ryan supports Senator Lee (R-UT)’s proposal to allow accreditors to certify particular courses instead of degree programs. This is a good idea in general, but the political landscape gets much trickier due to the existence of MOOCs, for-profit colleges (and course providers), and the power of the current higher education lobby. I’ll be interested to see how this moves forward.

Overall, the tenets of the proposal seem reasonable and some parts are likely to get bipartisan support. The biggest questions remaining are whether the Senate will be okay with the House passing Higher Education Act reauthorization components piecemeal (as they are currently doing) and what funding levels will look like for particular programs. In any case, these ideas should generate useful discussions in policy and academic circles.

Should Colleges Be Able to Determine Costs of Living?

I was reading through the newest National Center for Education Statistics report with just-released federal data on the cost of college and found some interesting numbers. (The underlying data are available under the “preliminary release” tab of the IPEDS Data Center.) Table 2 of the report shows the change in inflation-adjusted costs for tuition and fees, books and supplies, room and board, and other expenses included in the cost of attendance figure between 2011-12 and 2013-14.

Tuition and fees rose between three and five percent above inflation in public and private nonprofit two-year and four-year colleges between 2011-12 and 2013-14 while slightly dipping at for-profit colleges (perhaps a response to declining enrollment in that sector). Room and board for students living on campus at four-year colleges also went up about three percent faster than inflation, which seems reasonable given the increasing quality of amenities. But the other results struck me as a little odd.

A tweet I posted about these numbers got picked up by The Chronicle of Higher Education, and led to a nice piece by Jonah Newman talking to me and a financial aid official about what could be explaining these results. In my view, there are three potential reasons why other costs included in the cost of attendance measure could be falling:

(1) Students could be under such financial stress that they’re doing everything possible to cut back on costs at least partially within their control. Given the rising cost of college, this could potentially explain part of the drop.

(2) Colleges could be trying to keep the total cost of attendance—and thus the net price of attendance, which is the cost of attendance less all grant aid received—low for accountability and public shaming purposes. In my work as methodologist for the Washington Monthly college rankings, a college’s net price factors into its score on the social mobility portion of the rankings and its position on our list of America’s “Best Bang for the Buck” Colleges. A higher net price could also hurt colleges in the Obama Administration’s proposed college ratings, a draft of which is due to be released later this fall.

(3) Colleges could be trying to keep the cost of attendance low in order to limit student borrowing because students cannot borrow more than the total cost of attendance. Colleges may think that limiting student loan debt will result in lower default rates (a key accountability measure), and there is some evidence that the for-profit sector may be doing this even if it cuts off students’ access to funds needed to pay for living expenses.

Looking at the individual components beyond tuition, fees, and room and board: books and supplies costs staying level with inflation or slightly falling in the nonprofit sector could be reasonable. Pushes to make textbook costs more transparent could be having an impact, as could the ability of students to rent books or access online course material at a lower price than conventional materials.

While room and board for students living on campus increased 3-4 percentage points faster than inflation over the last two years, the cost of living off campus (not with family) was estimated to stay constant. However, as Ben Miller at the New America Foundation pointed out to me, some colleges cut their off-campus living expenses to implausibly low values.

The “other expenses” category (such as transportation, travel costs, and some entertainment) dropped between one and five percentage points. These drops could be a function of colleges not accurately capturing what it costs to live modestly because surveying students is an expensive and time-consuming proposition for understaffed financial aid offices. But it could also be a result of pressure from administrators or trustees who want to keep the total cost (on paper) lower.

A potential solution would be to take the room and board estimates for off-campus students and the “other expenses” category out of the hands of colleges and instead use a regionally-adjusted measure of living expenses. The Department of Education could survey students at a selected number of representative colleges to get an idea of their expenses and whether they are what students need in order to be successful in college. They could use this survey to develop estimates that apply to all colleges. There is some precedent for doing this, as the cost of attendance estimates for Federal Work-Study and Supplemental Educational Opportunity Grant campus funding add a $9,975 living cost allowance and a $600 books and supplies allowance for all students. This should be adjusted for regional cost of living (and what costs actually are), but it’s something to consider going forward.
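Here is a sketch of what that standardized, regionally-adjusted allowance might look like; the base allowances are the campus-based aid figures cited above, while the region names and index values are purely hypothetical.

```python
BASE_LIVING_ALLOWANCE = 9_975  # living cost allowance in the FWS/SEOG formula
BASE_BOOKS_ALLOWANCE = 600     # books and supplies allowance in the same formula

# Hypothetical regional price indexes (1.0 = national average).
regional_index = {"rural Midwest": 0.85, "mid-size South": 0.95,
                  "coastal metro": 1.30}

def standardized_allowance(region):
    """Scale the flat federal allowance by a regional cost-of-living index."""
    return (BASE_LIVING_ALLOWANCE + BASE_BOOKS_ALLOWANCE) * regional_index[region]

for region in regional_index:
    print(f"{region}: ${standardized_allowance(region):,.0f}")
# rural Midwest: $8,989; mid-size South: $10,046; coastal metro: $13,748
```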

Does College Improve Happiness? What the Gallup Poll Doesn’t Tell Us

The venerable polling organization Gallup released a much-anticipated national survey of 30,000 college graduates on Tuesday, focusing on student satisfaction in the workplace and in life as a whole. I’m not going to spend a lot of time getting into all of the details (see great summaries at Inside Higher Ed, NPR, and The Chronicle of Higher Education), but two key findings merit further discussion.

The first key finding is that relatively few graduates are engaged with their jobs and thriving across a number of elements of well-being (including purpose, social, community, financial, and physical). Having supportive professors is the strongest predictor of being engaged at work, and being engaged at work is a strong predictor of having a high level of well-being.

Second, the happiness of graduates doesn’t vary that much across types of nonprofit institutions, with students graduating from (current?) top-100 colleges in the U.S. News & World Report rankings reporting similar results to graduates of less-selective institutions. Graduates of for-profit institutions are less engaged at work and are less happy than graduates of nonprofit colleges, although no causal mechanisms are proposed.

While it is wonderful to have data on a representative sample of 30,000 college graduates, adults who started college but did not complete a degree are notably excluded. Given that about 56% of first-time students complete a college degree within six years of first enrolling (according to the National Student Clearinghouse), surveying only students who graduated leaves out a large percentage of adults with some postsecondary experience. Given the (average) economic returns to completing a degree, it might be reasonable to expect dropouts to be less satisfied than graduates; however, this is an empirical question.

Surveying dropouts would also provide better information on the counterfactual outcome for certain types of students. For example, are students who attend for-profit colleges happier than dropouts—and are both of these groups happier than high school graduates who did not attempt college? This is a particularly important policy question given the ongoing skirmishes between the U.S. Department of Education and the proprietary sector regarding gainful employment data.

Surveying people across the educational distribution would allow for more detailed analyses of the potential impacts of college by comparing adults who appear similar on observable characteristics (such as race, gender, and socioeconomic status) but received different levels of education. While these studies would not be causal, the results would certainly be of interest to researchers, policymakers, and the general public. I realize the Gallup Education poll exists in part to sell data to interested colleges, but the broader education community should be interested in what happens to students who did not complete college—or did not even enroll. Hopefully, future versions of the poll will include adults who did not complete college.