Should There Be Gainful Employment for College Athletes?

College athletics, particularly the big-revenue sports of NCAA Division I football and basketball, have been in the news lately for less-than-athletic reasons. The recent push by the Northwestern football team to unionize has led to further discussion of whether college athletes* should be compensated beyond their athletic scholarships. And the University of Connecticut’s national championship in men’s basketball comes a year after the team was banned from the tournament due to woeful academic performance and an eight percent graduation rate. (Big congrats to the UConn women’s team, which won another national championship while graduating 92% of its players!)

Now things may not be quite as bad as they look. The NCAA’s preferred measure of academic progress is the Academic Progress Rate (APR), which is scored from 0 to 1000 based on the retention and eligibility of athletes. Colleges aren’t penalized for athletes who leave without a degree, as long as those athletes stay eligible while competing. That leniency is likely reasonable for athletes who leave for the professional ranks, but it overlooks students who exhaust their eligibility and do not become professionals. The APR doesn’t take graduation into account, which is a significant limitation in this case.
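For readers who want to see the mechanics, here is a rough sketch in Python of how a retention-and-eligibility score like the APR works, assuming the standard structure of one point for staying eligible and one point for being retained in each term an athlete is on scholarship. It is an illustration of the concept, not the NCAA’s official calculation.

def academic_progress_rate(athletes):
    """Sketch of an APR-style score: each athlete-term can earn one point
    for remaining academically eligible and one point for being retained
    (or graduating). Score = points earned / points possible, scaled to
    1000. Illustrative only."""
    points_earned = 0
    points_possible = 0
    for athlete in athletes:
        for term in athlete["terms"]:
            points_possible += 2
            if term["eligible"]:
                points_earned += 1
            if term["retained"]:
                points_earned += 1
    return round(1000 * points_earned / points_possible)

# Example: two athletes, one of whom leaves school ineligible after one term.
roster = [
    {"terms": [{"eligible": True, "retained": True},
               {"eligible": True, "retained": True}]},
    {"terms": [{"eligible": False, "retained": False}]},
]
print(academic_progress_rate(roster))  # 4 of 6 possible points -> 667

Note that a team full of athletes who stay eligible and enrolled but never graduate can still post a perfect 1000, which is exactly the limitation described above.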

I can’t help but think of what could happen if the general principles of gainful employment, a hot political topic in the vocational portions of higher education, were applied to students with athletic scholarships. While the primary metrics of the current gainful employment proposal (debt-to-income ratios) may not apply to students with full scholarships, some sort of earnings and employment measure could be used to track the future success of former athletes. If former players were unable to obtain professional athletic employment or employment related to their academic majors, the team could be subject to sanctions.
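To make the metric concrete, here is a rough sketch of the kind of debt-to-earnings test at the center of the gainful employment proposal. The 8% and 12% cutoffs approximate the annual debt-to-earnings thresholds in the proposed rule and are included purely for illustration.

def debt_to_earnings_outcome(annual_loan_payment, annual_earnings,
                             pass_cutoff=0.08, fail_cutoff=0.12):
    """Sketch of a gainful-employment-style debt-to-earnings test. The
    cutoffs roughly follow the proposed rule's annual-earnings thresholds
    and are illustrative only."""
    ratio = annual_loan_payment / annual_earnings
    if ratio <= pass_cutoff:
        return "pass"
    if ratio <= fail_cutoff:
        return "zone"
    return "fail"

# A hypothetical program whose graduates pay $2,400 per year on $26,000 in earnings.
print(debt_to_earnings_outcome(2400, 26000))  # ratio of about 9.2% -> "zone"

For scholarship athletes with little or no debt, a ratio like this is mostly meaningless, which is why an earnings or employment measure would likely have to stand on its own.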

I’d love to hear your thoughts on gainful employment for college athletes in the comment section. I’m not taking a stand for or against this idea, but it’s something potentially worth additional discussion.

* I’m sure the NCAA would rather that I call them “student-athletes,” but I use “athletes” and “students” where appropriate.

The Black Hole of PLUS Loan Outcomes

Much of the debate about improving federal higher education data quality has focused on whether a student unit record dataset is necessary to give students, their families, and policymakers the information they need to make better decisions. Last month’s release of College Blackout: How the Higher Education Lobby Fought to Keep Students in the Dark, by Amy Laitinen and Clare McCann of the New America Foundation, highlighted the potential role of the higher education lobby in opposing unit record data. However, privacy advocates note the concerns with these types of datasets, and those are concerns that policymakers must always keep in mind.

Colleges are already required under the Higher Education Act to report institutional-level data on some outcomes to the federal government, and those data are typically made publicly available through the Integrated Postsecondary Education Data System (IPEDS). In an annoying quirk of the federal government’s data reporting systems, the best source for data on the amount of certain types of aid received (such as work-study or the Supplemental Educational Opportunity Grant) is the Office of Postsecondary Education’s website, which is not tied to IPEDS. Student loan default rates (for Stafford loans) are available on Federal Student Aid’s website, which is also not tied to IPEDS. The lack of a central database for all of these data sources is a pain for analysts (consider the technical appendix to my paper on campus-based aid programs), but it typically can be overcome with a mix of elbow grease and knowledge of the difference between UnitIDs and OPEIDs.
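As a quick illustration of that last point, here is a sketch (with made-up data) of merging a UNITID-keyed IPEDS file with an OPEID-keyed Federal Student Aid file. The trick is that IPEDS carries both identifiers, and OPEIDs need to be normalized to their six-digit main-campus form before merging.

import pandas as pd

# IPEDS institutional characteristics files include both UNITID and OPEID,
# so they can serve as the crosswalk. Data below are invented for illustration.
ipeds = pd.DataFrame({
    "UNITID": [100001, 100002, 100003],
    "OPEID": ["00123400", "00123401", "00567800"],  # 8 digits, branch-coded
    "name": ["Main Campus U", "Branch Campus U", "Example College"],
})
default_rates = pd.DataFrame({
    "OPEID": ["001234", "005678"],   # 6-digit main-campus codes
    "cdr3": [0.071, 0.134],          # three-year cohort default rates
})

# Normalize both files to the 6-digit main-campus OPEID before merging.
ipeds["OPEID6"] = ipeds["OPEID"].str.zfill(8).str[:6]
default_rates["OPEID6"] = default_rates["OPEID"].str.zfill(6)

merged = ipeds.merge(default_rates[["OPEID6", "cdr3"]], on="OPEID6", how="left")
print(merged[["UNITID", "name", "cdr3"]])

Branch campuses share the main campus’s default rate in this merge, which is one of the quirks analysts have to live with when combining the two systems.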

Yet, until last week, we knew absolutely nothing about the outcomes for students and families who took out federal PLUS loans. These loans, which allow parents of undergraduate students to borrow for their children’s education after passing a credit check, have gained attention recently due to the federal government’s 2011 decision to tighten eligibility criteria in order to reduce default rates. That change disproportionately affected enrollment at historically black colleges and universities, many of which are private and do not have large endowments that provide institutional aid funds. Some analysts, such as Rachel Fishman at New America, have called for PLUS loans to be severely curtailed or even eliminated.

The Department of Education provided a negotiated rulemaking committee with data on PLUS denial rates and default rates by institutional sector (public, private nonprofit, and for-profit) last week, marking the first time these data had ever been made public. The data were only provided after members of the committee complained about a lack of information on the proposals they were discussing. (The data are available here, under the pre-session 2 materials header.) The data on loan balances suggest that the average parent PLUS loan balance among borrowers at four-year private colleges is $27,443, compared to $19,491 at four-year publics and $18,133 at four-year for-profit institutions. Three-year default rates at for-profit colleges were 13.3% in fiscal year 2010, compared to 3.4% at private nonprofits and 3.1% at public institutions. And the total amount of outstanding PLUS loans (for undergraduate and graduate students combined) is just over $100 billion, or roughly 10% of all student loan debt.

A piece in Thursday’s Inside Higher Ed quoted an HBCU president who noted that there was no reason to tighten loan criteria given the low default rates in the data. But the public has no idea what any individual college’s default rate is on PLUS loans, given that only broad sector-level data were released. The piece goes on to note that the Department of Education says institutional-level data are not available for PLUS loans, in part because there is no appeal process in place for colleges. This has the effect of insulating programs that take in large amounts of PLUS funds, fail to graduate those students, and leave their borrowers to default as a result. Right now, there is no accountability whatsoever.

The Department of Education needs to release institutional-level PLUS loan data to improve transparency and accountability. However, they claim that these measures do not exist, a claim that borders on the absurd given the existence of the data in the National Student Loan Data System and their demonstrated ability to calculate sector-level measures. ED’s response has been that colleges do not have the ability to appeal the data, but this can be easily remedied. In the meantime, I hope that the higher education community uses the Freedom of Information Act to request these data and that advocates are willing to go to court when ED says the data do not exist.

Should Campus-Based Financial Aid Be Reallocated?

I am presenting a paper, “Exploring Trends and Alternative Allocation Strategies for Campus-Based Financial Aid Programs,” at the Association for Education Finance and Policy’s annual conference this afternoon.  Here is the abstract:

Two federal campus-based financial aid programs, the Supplemental Educational Opportunity Grant (SEOG) and the Federal Work-Study program (FWS), combine to provide nearly $2 billion in funding to students with financial need. However, the allocation formulas have changed little since 1965, resulting in community colleges and newer institutions getting much smaller awards than longstanding private colleges with high costs of attendance. I document the trends in campus-level allocations over the past two decades and explore several different methods to reallocate funds based on current financial need while limiting the influence of high-tuition colleges. I show that allocation formulas that count a modest amount of tuition toward financial need reallocate aid away from private nonprofit colleges and toward public colleges and universities.

And here are the slides from my presentation, summarizing the study (which is still a work in progress). Any comments are greatly appreciated!
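For those curious about what a reallocation might look like mechanically, here is a simplified sketch of an allocation rule that caps how much tuition counts toward need before splitting a fixed pot of funds proportionally. The cap, cost components, and data are hypothetical and much simpler than the specifications explored in the paper.

def capped_need_allocation(colleges, total_funds, tuition_cap=6000.0):
    """Compute each college's aggregate need as (capped tuition + living
    allowance - average EFC) times the number of needy students, then split
    the total appropriation proportionally to need. The cap limits the
    influence of high-tuition colleges. Illustrative numbers only."""
    need = {}
    for c in colleges:
        capped_cost = min(c["tuition"], tuition_cap) + c["living_allowance"]
        per_student_need = max(capped_cost - c["avg_efc"], 0)
        need[c["name"]] = per_student_need * c["needy_students"]
    total_need = sum(need.values())
    return {name: total_funds * n / total_need for name, n in need.items()}

colleges = [
    {"name": "High-Tuition Private", "tuition": 45000, "living_allowance": 12000,
     "avg_efc": 8000, "needy_students": 2000},
    {"name": "Community College", "tuition": 4000, "living_allowance": 12000,
     "avg_efc": 2000, "needy_students": 8000},
]
print(capped_need_allocation(colleges, total_funds=10_000_000))

With the cap in place, the community college’s larger pool of needy students drives most of the allocation, rather than the private college’s high sticker price.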

College Accountability and the Obama Budget Proposal

The Obama Administration’s $3.9 trillion budget proposal for Fiscal Year 2015 includes a request of $68.6 billion in discretionary funds for the Department of Education, up $1.3 billion from 2014 funding. This excludes a great deal of mandatory spending on entitlements, including student loan costs and subsidies, some Pell Grant funding, and some other types of financial aid. (Mandatory spending is much harder to eliminate than discretionary funding, as illustrated by this helpful CBO summary.) The budget is also a reflection of the Administration’s priorities, even if many components are unlikely to be approved by Congress. For a nice summary of the Department of Education’s request, see this policy brief from the New America Foundation.

On the higher education front, the Obama budget implies that accountability will be a key priority of the Department of Education. The Administration made two key requests in this area: $10 million to fund continued development of the Postsecondary Institution Ratings System (PIRS) and $647 million for a fund to reward colleges that enroll and graduate Pell recipients. There was a holdover request for $4 billion in mandatory funds for a version of Race to the Top in higher education, but few in the higher education policy community are taking this plan seriously.

The $10 million for PIRS would go toward “further development and refinement of a new college rating system” (see p. T-156). This request is a signal that the Administration is taking the development of PIRS seriously, but the $10 million in funds suggests that large-scale additional data collection is unlikely to happen in the near future. It is also unlikely that the federal government will work to audit IPEDS data for the rating, something that I called for in my recent policy brief on ratings. Even if the specific $10 million request for PIRS is not acted upon, the Department of Education will use other discretionary funds to move forward.

The $647 million request for College Opportunity and Graduation Bonuses, if approved, would provide bonuses to colleges that are successful in enrolling and graduating large numbers of Pell recipients. I view this as a first attempt to tie federal funds to college performance using metrics that are likely to be in PIRS. I would be surprised if any Pell Grant funds were reallocated through college ratings, except perhaps for a handful of very low-performing colleges, but it is possible that some additional bonus funds could be tied to ratings.

I ran a poll on a blog post a couple of weeks ago asking for readers’ thoughts on the likelihood that PIRS would be tied to student financial aid dollars by 2018. The majority of respondents gave this less than a 50% chance of happening, and I am inclined to agree. The Administration’s budget priorities suggest a serious push toward tying some funds to performance, although it is worth emphasizing that a future Congress and President would have to agree.

What are your thoughts on the Obama Administration’s higher education budget, particularly on accountability? If you have any comments to share, please do so and continue the conversation!

The Multiple Stakeholder Problem in Assessing College Quality

One of the biggest challenges the Department of Education’s proposed Postsecondary Institution Ratings System (PIRS) will face is how to present a valid set of ratings to multiple audiences. Much of the discussion at the recent technical symposium was about who the key audience should be: colleges (for accountability purposes) or students (for informational purposes). That determination will likely influence what the ratings should look like. My research primarily focuses on institutional accountability, and I think the federal government should make accountability the goal of PIRS. (I said as much in my presentation earlier this month.)

The student information perspective is much trickier in my view. Students tend to flock to rankings and information sources that are largely based on prestige rather than some measure of “value-added” or societal good. As a result, I view the Washington Monthly college rankings (which I’ve worked on for the past two years) as a much more influential tool for shaping the incentives of colleges and policymakers than the choices of students. I think that is the right path for influencing colleges’ priorities, as I have to question whether many students will use college rankings that provide very useful information but do not line up with preexisting ideas of what a “good” college is.

I was quoted in an article in Politico this morning regarding PIRS and what can be learned from existing rankings systems. In that article, I expressed similar sentiments, although in a less elegant way. (It’s also a good time to clarify that all opinions I express are my own.) I certainly hope that more than six students use the Washington Monthly rankings to inform their college choice sets, but I do not harbor grand expectations that students will suddenly choose to use our rankings over U.S. News. However, the influence of the rankings on colleges has the potential to help a large number of students through changing institutional priorities.

Will Federal Aid Be Tied to College Ratings? (Poll)

With all of the discussion of what will be included in the proposed Postsecondary Institution Ratings System (PIRS), there has been relatively little discussion about whether federal Title IV financial aid will actually be tied to the ratings by 2018—as the President has specified. I would love to get your thoughts on the feasibility by taking the following poll, and leaving any additional comments below.


I’ll share my thoughts in a subsequent post, so stay tuned!

Spring Admissions: Expanding Access or Skirting Accountability?

More than one in five first-year students at the University of Maryland now start their studies in the spring instead of the fall, according to this recent article by Nick Anderson in the Washington Post. This seems to be an unusually high percentage among colleges and universities, but the plan makes a lot of sense. Even at selective institutions, some students will leave at the end of the first semester, and more space opens up on campus after other students graduate, study abroad, or take on internships. It can be a way to maximize revenue by better utilizing facilities throughout the academic year.

However, the article also notes that the SAT scores of spring admits at Maryland are lower. Among students starting in spring 2015, the median score was roughly 1210 (out of 1600), compared to about 1300 for fall admits in 2012, the most recent year with available data. These students’ test scores suggest that spring admits are well qualified to succeed in college, even if they didn’t quite make the cut the first time around. (It’s much less realistic to expect high-SAT students to defer, given the other attractive options they likely have.) This suggests Maryland’s program may have a strong access component.

But deferring admission to lower-SAT students could also be done for other reasons. Currently, colleges only have to report to the federal government graduation rates for first-time, full-time students who enrolled in the fall semester. (That’s one of the many flaws of the creaky Integrated Postsecondary Education Data System, and one that I would love to see fixed.) If these spring admits do graduate at lower rates, the public will never know. Additionally, many college ranking systems give colleges credit for being more selective. With the intense pressure to rise in the U.S. News rankings, even a small increase in reported SAT scores can be very important to colleges.

So is Maryland expanding access, or is it skirting accountability systems for a portion of its students? I would probably say it’s more of the former, but don’t discount the pressure to look good to the federal government and external rankings bodies. This practice is something to watch going forward, even though better federal data systems would reduce its effectiveness as a way of shaping a first-year class.

Is the Term “College Ratings” Toxic?

In what will come as a surprise to few observers, much of the higher education community isn’t terribly fond of President Obama’s plan to develop a college ratings system for the 2015-16 academic year. An example of this is a recently released Inside Higher Ed/Gallup survey of college provosts and chief academic officers. Only a small percentage of the 829 individuals who returned surveys were supportive of the ratings and thought they would be effective, as shown below:

  • 12% of provosts agree the ratings will help families make better comparisons across institutions.
  • 12% of provosts agree the ratings will reflect their own college’s strengths.
  • Just 9% agree the ratings will accurately reflect their own college’s weaknesses.

There is some variation in support by type of college. Provosts at for-profit institutions and public research universities tended to offer more support, while those at private nonprofit institutions were almost unanimous in opposition. But regardless of whether provosts like the idea of ratings, the plan seems to be full steam ahead.

The Association of Public and Land-Grant Universities (APLU) took a productive step in the ratings conversation by releasing their own plan for accountability and cost-effectiveness. This plan centers on three components that could be used to allocate financial aid to colleges: risk-adjusted retention and graduation rates, employment/graduate degree rates, and default/loan repayment rates. Under APLU’s proposal, colleges could fall into one of three groups: a top tier that receives bonus Title IV funds, a middle tier that is held harmless, and a bottom tier that loses some or all Title IV funds.
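To see how a tiered system like this might operate, here is a simplified sketch that sorts a college into one of the three tiers. The metrics, cutoffs, and decision rule are my own hypothetical stand-ins, not APLU’s actual specifications.

def assign_performance_tier(college, thresholds):
    """Sketch of a three-tier classification: colleges clearing every
    threshold land in the bonus tier, colleges clearing none land in the
    bottom tier, and everyone else is held harmless. Cutoffs are
    hypothetical."""
    passes = [
        college["adj_grad_rate"] >= thresholds["adj_grad_rate"],
        college["employment_rate"] >= thresholds["employment_rate"],
        college["repayment_rate"] >= thresholds["repayment_rate"],
    ]
    if all(passes):
        return "bonus"          # receives additional Title IV funds
    if not any(passes):
        return "sanction"       # loses some or all Title IV funds
    return "held harmless"      # middle tier

thresholds = {"adj_grad_rate": 0.50, "employment_rate": 0.80, "repayment_rate": 0.60}
college = {"adj_grad_rate": 0.55, "employment_rate": 0.85, "repayment_rate": 0.45}
print(assign_performance_tier(college, thresholds))  # -> "held harmless"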

To me, that sounds like a ratings system. But APLU took care not to call their plan a ratings system, and viewed the Administration’s plans as being “extremely difficult to structure.” It seems that the phrase “college ratings” has become toxic; rather than call for a simplified set of ratings, APLU discussed the use of “performance tiers.” This sounds a little like the Common Core debate in K-12 education, in which some states have considered renaming the standards in an attempt to reduce opposition.

It will be interesting to see how the discussion on college ratings moves forward over the next several weeks, particularly as more associations either offer their plans or decry the entire idea.  The technical ratings symposium previously scheduled for January 22 will now occur on February 6 on account of snow, and I’ll be presenting my thoughts on how to develop a ratings system for postsecondary education. I’ll post my presentation on this blog at that time.

Can Maintenance of Effort Programs Fund Public Higher Education?

The American Association of State Colleges and Universities (AASCU) released a policy paper this week calling for the federal government to enact (and fund) a program designed to encourage states to increase their support for public higher education. The AASCU brief rightly notes that per-student funding for public higher education has fallen over the past three decades (although the magnitude of the decline is somewhat overstated by their choice of inflation adjustment), and they propose a potential solution in the form of a maintenance of effort provision.

AASCU’s proposal would give states a partial federal match of their higher education appropriations, as long as per-FTE funding to institutions is higher than 50% of the value of the maximum Pell Grant and does not decline from the previous year’s level. The value of the matching funds would increase along with state appropriations to institutions. They estimate that the proposed program would cost something in the neighborhood of $10-$15 billion per year, which could be paid for by cutting waste, fraud, and abuse in current financial aid systems (particularly among for-profits) and by implementing some sort of risk-sharing for student loans, which I’ve written about recently.
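To show how the basic mechanism would work, here is a simplified sketch of the maintenance of effort test and the resulting federal match. The 25% match rate is an assumption for illustration, and the Pell value is roughly the 2014-15 maximum; neither is AASCU’s actual parameter.

def federal_match(state, max_pell=5730.0, match_rate=0.25):
    """Sketch of an AASCU-style maintenance of effort check: a state
    qualifies for a partial federal match only if its per-FTE appropriation
    is at least half of the maximum Pell Grant and has not fallen from the
    prior year. Match rate and Pell value are illustrative assumptions."""
    per_fte = state["appropriations"] / state["fte_enrollment"]
    prior_per_fte = state["prior_appropriations"] / state["prior_fte_enrollment"]
    if per_fte < 0.5 * max_pell or per_fte < prior_per_fte:
        return 0.0
    return match_rate * state["appropriations"]

state = {"appropriations": 1_200_000_000, "fte_enrollment": 300_000,
         "prior_appropriations": 1_150_000_000, "prior_fte_enrollment": 295_000}
print(f"${federal_match(state):,.0f}")  # per-FTE funding of $4,000 clears both tests

Notice that the test looks only at appropriations to institutions, which is exactly what creates the gaming opportunity described below.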

However, I view the plan as having a fatal flaw. By only including state appropriations to institutions in the calculation—and not requiring that the matching funds be spent on higher education—states can game the system to get additional money from the federal government. States could reduce funding to their financial aid programs and direct those funds toward institutional appropriations in order to get federal dollars, which could be used for K-12 education, healthcare, or tax cuts.

If states followed the incentive to eliminate all grant aid and fund institutions instead, tuition would likely decrease (something that AASCU institutions would appreciate). The most recent NASSGAP survey of state aid programs found that states spend $9.4 billion per year on grant aid, two-thirds of which is allocated based on financial need. Putting this money into state appropriations would cost the federal government several billion dollars, with no guarantees of any additional funding for students or institutions.

I have a hard time seeing Congress approve this maintenance of effort plan, regardless of its merits. Lobbyists for the private nonprofit and for-profit sectors are likely to strongly oppose the measure, as are lobbying groups for K-12 education, healthcare, and corrections spending (behind the scenes), since those areas often benefit when higher education gets cut. In addition, the plan is likely to be a nonstarter in the House because it places restrictions on state priorities.

I’m glad to see this proposal from AASCU, but I don’t see it becoming law anytime soon. I would suggest that they follow up with some more details on their proposed risk-sharing program, as well as how elements of this plan could be incorporated into the Obama Administration’s proposed college ratings.