Associate’s Degree Recipients are College Graduates

Like most faculty members, I have my fair share of quirks, preferences, and pet peeves. Some of them are fairly minor and come from my training (such as referring to Pell Grant recipients as students from low-income families rather than low-income students, since most students have very little income of their own). Others matter more because they involve language that incorrectly classifies students and fails to recognize their accomplishments.

With that in mind, I’m particularly annoyed by a Demos piece with the headline “Since 1991, Only College Graduates Have Seen Their Income Rise.” The claim comes from Pew data showing that only households headed by someone with a bachelor’s degree or more saw a real income gain between 1991 and 2012, while households headed by those with less education lost ground. By equating “college graduates” with bachelor’s degree holders, however, the headline implies that students who graduate with associate’s degrees are not college graduates, a value judgment that comes off as elitist.

According to the Current Population Survey, over 21 million Americans hold an associate’s degree, about 60% of which are academic degrees and the rest occupational. That is nearly half the number of the 43 million Americans whose highest degree is a bachelor’s degree. Many of these graduates are the first in their families even to attend college, so an associate’s degree represents a significant accomplishment that carries meaning in the labor market.

Although most people in the higher education world have an abundance of degrees, let’s not forget that our college experiences are becoming the exception rather than the norm. I urge writers to clarify their language and recognize that associate’s degree holders are most certainly college graduates.

Improving Data on PhD Placements

Graduate students love to complain about the lack of accurate placement data for their programs’ graduates. Some programs are accused of reporting data only for students who landed tenure-track jobs, while others apparently have no information at all on what happened to their graduates. Not surprisingly, this frustrates prospective students as they try to make an informed decision about where to pursue graduate studies.

An article in today’s Chronicle of Higher Education highlights the work of Dean Savage, a sociologist who has tracked the outcomes of CUNY sociology PhD recipients for decades. His work shows a wide range of paths for CUNY PhDs, many of whom have been successful outside tenure-track jobs. Tracking these students over their lifetimes is certainly a time-consuming job, but it should be much easier to determine the initial placements of doctoral degree recipients.

All students who complete doctoral degrees are required to complete the Survey of Earned Doctorates (SED), which is supported by the National Science Foundation and administered by the National Opinion Research Center. The SED contains questions designed to elicit a whole host of useful information, such as where doctoral degree recipients earned their undergraduate degrees (something which I use in the Washington Monthly college rankings as a measure of research productivity) and information about the broad sector in which the degree recipient will be employed.

The utility of the SED could be improved by clearly asking degree recipients where their next job is located, along with their job title and academic department. The current survey asks about the broad sector of employment, but the most relevant response option for postgraduate plans is “have signed contract or made definite commitment to a ‘postdoc’ or other work.” Later questions do ask about the organization where the degree recipient will work, but there is no clear distinction among postdoctoral positions, temporary faculty positions, and tenure-track faculty positions. Additionally, no information is requested about the department in which the recipient will work.
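To make the proposal concrete, here is a minimal sketch of the placement record the revised items could capture, with an explicit position type that separates postdoctoral, temporary faculty, and tenure-track appointments. This is my own illustration, not an actual SED instrument, and all field names and the example record are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class PositionType(Enum):
    """Categories the current SED items do not clearly separate."""
    POSTDOC = "postdoc"
    TEMPORARY_FACULTY = "temporary faculty"
    TENURE_TRACK_FACULTY = "tenure-track faculty"
    ADMINISTRATIVE = "administrative/staff"
    NONACADEMIC = "nonacademic employment"

@dataclass
class InitialPlacement:
    """One graduate's first post-PhD position, as revised SED items could record it."""
    employer: str               # organization where the recipient will work
    location: str               # city and state (or country) of the position
    job_title: str
    department: str             # academic department, if applicable
    position_type: PositionType

# Hypothetical example record:
placement = InitialPlacement(
    employer="City University of New York",
    location="New York, NY",
    job_title="Assistant Professor of Sociology",
    department="Sociology",
    position_type=PositionType.TENURE_TRACK_FACULTY,
)
print(placement.position_type.value)
```

Collecting even this short list of fields would make it straightforward to distinguish initial academic placements from postdocs and nonacademic jobs.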

My proposed changes to the SED are little more than tweaks in the grand scheme of things, but have the potential to provide much better data about where newly minted PhDs take academic or administrative positions. This still wouldn’t fix the lack of data on the substantial numbers of students who do not complete their PhDs, but it’s a start to providing better data at a reasonable cost using an already-existing survey instrument.

Is there anything else we should be asking about the placements of new doctoral recipients? Please let me know in the comments section.

Burning Money on the Quad? Why Rankings May Increase College Costs

Regardless of whether President Obama’s proposed rating system for colleges based on affordability and performance becomes reality (I expect ratings to appear in 2015, but not have a great deal of meaning), his announcement has affected the higher education community. My article listing “bang for the buck” colleges in Washington Monthly ran the same day he announced his plan, a few days ahead of our initial timeline. We were well-positioned with respect to the President’s plan, which led to much more media attention than we would have expected.

A few weeks after the President’s media blitz, U.S. News & World Report unveiled its annual rankings, to the great interest of many students, families, and higher education professionals, and to the usual criticism of its methodology. But the magazine also faced a new set of critiques based on its perceived focus on prestige and selectivity instead of affordability and social mobility. Bob Morse, U.S. News’s methodologist, answered some of those critiques in a recent blog post. Most of what Morse said isn’t terribly surprising, especially his point that U.S. News has very different goals than the President does. He also hopes to take advantage of any additional data the federal government collects for its ratings, and I certainly share that interest. However, I strongly disagree with one particular part of his post.

When asked whether U.S. News rewards colleges for raising costs and spending more money, Morse said no. He reminded readers that the methodology only counts spending on the broadly defined category of educational expenditures, implying that additional spending on instruction, student services, research, and academic support always benefits students. (Spending on items such as recreation, housing, and food service does not count.)

I contend that rewarding colleges for spending more in the broad area of educational expenditures is indeed a way to increase the cost of college, particularly since this category makes up 10% of the rankings. Morse and the U.S. News team want their rankings to reflect academic quality, which can be enhanced by additional spending; I think that is the point they are trying to make. But the critique is mechanically true: more spending on “good” expenditures still raises the cost of college. Moreover, that additional spending need not go toward factors that benefit undergraduate students, and it may not be cost-effective. I discuss both points below, followed by a short sketch illustrating the mechanical point.

1. Additional spending on “educational expenditures” may not benefit undergraduate students. A good example of this is spending on research, which runs in the tens or even hundreds of millions of dollars per year at many larger universities. Raising tuition to pay for research would increase educational expenditures—and hence an institution’s spot in the U.S. News rankings—but primarily would benefit faculty, graduate students, and postdoctoral scholars. This sort of spending may very well benefit the public through increased research productivity, but it is very unlikely to benefit first-year and second-year undergraduates.

[Lest this be seen solely as a critique of the U.S. News rankings, the Washington Monthly rankings (for which I’m the methodologist) can also be criticized for potentially contributing to the increase in college costs. Our rankings also reward colleges for research expenditures, so the same critiques apply.]

2. Additional spending may fail a cost-effectiveness test. As I noted above, any spending in the broad category of “educational expenditures” counts as a positive in the rankings, but there is no requirement that the money be used efficiently, or even effectively. I am reminded of a quote from John Duffy, formerly on the faculty of George Washington University’s law school, in a 2011 New York Times article: “I once joked with my dean that there is a certain amount of money that we could drag into the middle of the school’s quadrangle and burn, and when the flames died down, we’d be a Top 10 school as long as the point of the bonfire was to teach our students.” On a more serious note, additional spending could go toward legitimate programs that fail to move the needle on student achievement, perhaps because of diminishing returns.
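Here is the toy sketch of the mechanical point promised above. It is not the actual U.S. News formula; it simply shows that when a ranking component rewards per-student educational spending, a tuition-funded increase in that spending raises both the component score and the price students pay. All figures are made up.

```python
# Toy illustration: a spending-rewarding ranking component also rewards higher costs.
# All numbers are hypothetical; this is not the U.S. News methodology.

colleges = {
    "College A": 22_000,  # educational spending per student (dollars)
    "College B": 25_000,
    "College C": 28_000,
    "College D": 31_000,
}

def spending_score(name, spending):
    """Score a college by its percentile rank on per-student educational spending."""
    values = sorted(spending.values())
    rank = values.index(spending[name]) + 1
    return rank / len(values)

before = spending_score("College A", colleges)

# College A raises tuition by $5,000 per student and puts it all toward
# "educational expenditures" (instruction, research, academic support).
colleges["College A"] += 5_000
after = spending_score("College A", colleges)

print(f"College A spending score: {before:.2f} -> {after:.2f}")
print("Its ranking component improves, and the cost to its students rises by $5,000.")
```

Nothing in this toy model asks whether the extra $5,000 per student actually improved teaching or learning, which is exactly the concern raised in the two points above.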

I have a great deal of respect for Bob Morse and the U.S. News team, but they are incorrect to claim that their rankings do not have the potential to increase the cost of college. I urge them to reconsider that statement and instead focus on how additional spending for primarily educational purposes could benefit students.

Comparing the U.S. News and Washington Monthly Rankings

In yesterday’s post, I discussed the newly released 2014 college rankings from U.S. News & World Report and how they changed from last year. In spite of some changes in methodology that were billed as “significant,” the R-squared value when comparing this year’s rankings with last year’s rankings among ranked national universities and liberal arts colleges was about 0.98. That means that 98% of the variation in this year’s rankings can be explained by last year’s rankings—a nearly perfect prediction.

In today’s post, I compare the results of the U.S. News rankings to those from the Washington Monthly rankings for national universities and liberal arts colleges ranked by both sources. The Washington Monthly rankings (for which I’m the consulting methodologist and compiled the data) are based on three criteria: social mobility, research, and service. None of these is a particular goal of the U.S. News rankings. Yet it could still be the case that colleges that recruit high-quality students, have lots of resources, and enjoy a great reputation (the main factors in the U.S. News rankings) also do a good job recruiting students from low-income families, produce outstanding research, and graduate servant-leaders.

The results of my comparisons show large differences between the two sets of rankings, particularly at liberal arts colleges. The R-squared value at national universities is 0.34, but only 0.17 at liberal arts colleges, as shown below:

[Figure: U.S. News vs. Washington Monthly ranks, national universities]

[Figure: U.S. News vs. Washington Monthly ranks, liberal arts colleges]
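For readers curious how these R-squared values are computed, here is a minimal sketch of the comparison. It assumes the matched ranks have been saved in a CSV file with one row per institution ranked by both magazines; the file name and column names are hypothetical. R-squared is simply the squared correlation between the two sets of ranks within each classification.

```python
import csv
from statistics import correlation  # Python 3.10+

# Hypothetical file with columns: institution, classification, usnews_rank, wm_rank
with open("rankings_2014.csv", newline="") as f:
    rows = list(csv.DictReader(f))

for group in ("National Universities", "Liberal Arts Colleges"):
    subset = [r for r in rows if r["classification"] == group]
    usnews = [float(r["usnews_rank"]) for r in subset]
    wm = [float(r["wm_rank"]) for r in subset]
    r_squared = correlation(usnews, wm) ** 2  # squared Pearson correlation
    print(f"{group}: R-squared = {r_squared:.2f} (n = {len(subset)})")
```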

It is worth highlighting some of the colleges that are high on both rankings. Harvard, Stanford, Swarthmore, Pomona, and Carleton all rank in the top ten in both magazines, showing that it is possible to be both highly selective and serve the public in an admirable way. (Of course, we should expect that to be the case given the size of their endowments and their favorable tax treatment!) However, Middlebury and Claremont McKenna check in around 100th in the Washington Monthly rankings in spite of a top-ten U.S. News ranking. These well-endowed institutions don’t seem to have the same commitment to the public good as some of their highly selective peers.

On the other hand, colleges ranked lower by U.S. News do well in the Washington Monthly rankings. Some examples include the University of California-Riverside (2nd in WM, 112th in U.S. News), Berea College (3rd in WM, 76th in U.S. News), and the New College of Florida (8th in WM, 89th in U.S. News). If nothing else, the high ranks in the Washington Monthly rankings give these institutions a chance to toot their own horn and highlight their successes.

I fully realize that only a small percentage of prospective students will be interested in the Washington Monthly rankings compared to those from U.S. News. But it is worth highlighting the differences across college rankings so students and policymakers can decide which institutions are better for them given their own demands and preferences.

Breaking Down the 2014 U.S. News Rankings

Today is a red-letter day for many people in the higher education community—the release of the annual college rankings from U.S. News and World Report. While many people love to hate the rankings for an array of reasons (from the perceived focus on prestige to a general dislike of accountability in some sectors), their influence on colleges and universities is undeniable. Colleges love to put out press releases touting their place in the rankings even while decrying their general premise.

I’m no stranger to the college ranking business, having been the consulting methodologist for Washington Monthly’s annual college rankings for the past two years. (All opinions in this piece, of course, are my own.) While the Washington Monthly rankings are based on social mobility, service, and research performance, U.S. News ranks colleges primarily on “academic quality,” which consists of inputs such as financial resources and standardized test scores, as well as peer assessments for certain types of colleges.

I’m not necessarily in the U.S. News-bashing camp here, as they provide a useful service for people who are interested in prestige-based rankings (which I think is most people who want to buy college guides). But the public policy discussion, driven in part by the President’s proposal to create a college rating system, has been moving toward an outcome-based focus. The Washington Monthly rankings do capture some elements of this focus, as can be seen in my recent appearance on MSNBC and an outstanding panel discussion hosted by New America and Washington Monthly last week in Washington.

Perhaps in response to criticism or the apparent direction of public policy, Robert Morse (the well-known and well-respected methodologist for U.S. News) announced some changes last week in the magazine’s methodology for this year’s rankings. The changes place slightly less weight on peer assessment and selectivity, while putting slightly more weight on graduation rate performance and graduation/retention rates. Yet Morse bills the changes as meaningful, noting that “many schools’ ranks will change in the 2014 [this year’s] edition of the Best Colleges rankings compared with the 2013 edition.”

But the rankings have tended to be quite stable from year to year (here are the 2014 rankings). The top six research universities in the first U.S. News survey (in 1983—based on peer assessments by college presidents) were Stanford, Harvard, Yale, Princeton, Berkeley, and Chicago, with Amherst, Swarthmore, Williams, Carleton, and Oberlin being the top five liberal arts colleges. All of the research universities except Berkeley are in the top six this year and all of the liberal arts colleges except Oberlin are in the top eight.

In this post, I’ve examined all national universities (just over 200) and liberal arts colleges (about 180) ranked by U.S. News in both this year’s and last year’s rankings. Note that this is only a portion of eligible colleges, since the magazine doesn’t rank lower-tier institutions. The two graphs below show the changes in the rankings for national universities and liberal arts colleges between the two years.

[Figure: 2013 vs. 2014 U.S. News ranks, national universities]

[Figure: 2013 vs. 2014 U.S. News ranks, liberal arts colleges]

The first thing that jumps out at me is the high R-squared, around 0.98 for both classifications. What this essentially means is that 98% of the variation in this year’s rankings can be explained by last year’s rankings—a remarkable amount of persistence even when considering the slow-moving nature of colleges. The graphs show more movement among liberal arts colleges, which are much smaller and can be affected by random noise much more than large research universities.

The biggest blip in the national university rankings is South Carolina State, which went from 147th last year to unranked (no higher than 202nd) this year. Other universities which fell more than 20 spots are Howard University, the University of Missouri-Kansas City, and Rutgers University-Newark, all urban and/or minority-serving institutions. Could the change in formulas have hurt these types of institutions?
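For those curious how such moves can be flagged, here is a minimal sketch, assuming the two editions’ ranks sit in a hypothetical CSV file (the file and column names are mine, with a blank rank meaning the institution was unranked that year):

```python
import csv

# Hypothetical file with columns: institution, rank_2013, rank_2014
with open("usnews_national_2013_2014.csv", newline="") as f:
    rows = list(csv.DictReader(f))

for r in rows:
    old, new = r["rank_2013"], r["rank_2014"]
    if old and not new:
        # Ranked last year but not this year (e.g., dropped below the ranked tier)
        print(f"{r['institution']}: ranked {old} last year, unranked this year")
    elif old and new and int(new) - int(old) > 20:
        # A larger rank number means a lower position
        print(f"{r['institution']}: fell {int(new) - int(old)} spots ({old} -> {new})")
```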

In tomorrow’s post, I’ll compare the U.S. News rankings to the Washington Monthly rankings for this same sample of institutions. Stay tuned!

Policy Options for Pell Reform: The CBO’s Analysis

The federal Pell Grant program has grown dramatically over the past decade, due to both the effects of the Great Recession and changes to the program that made it more generous to students from low- to middle-income families. As spending has more than doubled since 2006 (although it slightly fell in the most recent year for which data is available), some in Congress have grown concerned about the sustainability of the program. This led Senator Jeff Sessions (R-AL), ranking member of the Senate Budget Committee, to request a review of Pell spending and information about the likely costs of various reform options going forward.

The Congressional Budget Office, the nonpartisan agency charged with “scoring” fiscal proposals, released a report yesterday summarizing the estimated fiscal effects of a host of changes to the Pell program. (Inside Higher Ed has a nice summary of the report.) While the goal of the requesting Senator may have been to find ways to lower spending on the program by better targeting awards, the CBO also looked at proposals to make the Pell program more generous and to simplify Pell eligibility.

While I’m glad that the CBO looked at the fiscal effects of various changes to restrict or expand eligibility, I think that Congress will make those decisions on a year-to-year basis (pending the availability of funds) instead of thinking forward over a ten-year window. However, it is notable that the proposal to restrict Pell Grants to students with an expected family contribution of zero (by far the students with the greatest need) would only cut expenditures by $10 billion per year, or just over one-fourth of the program cost. I am more interested in the CBO’s cost estimates for simplifying eligibility criteria. The report examines two possible reforms, which are discussed in more detail on pages 24 and 25.

Proposal 1: Simplify the FAFSA by only requiring students and their families to provide income data from tax returns instead of pulling in asset and income data from other sources. This would slightly affect targeting, as some resources would be unknown to the government, but research has shown that basic income data predicts Pell awards well for most students. The CBO estimates that about two percent more students would receive the Pell Grant and that about one in five students would see an increase of approximately $350. This is estimated to increase program costs by $1 billion per year, or less than 3% of the annual program cost.
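As a rough check on the order of magnitude, here is a back-of-the-envelope sketch. The recipient count and the average award for newly eligible students are my own illustrative assumptions, not figures from the CBO report, so the total should be read only as showing that the pieces are in the same ballpark as the CBO’s estimate of roughly $1 billion per year.

```python
# Back-of-the-envelope check of Proposal 1.
# All inputs below are illustrative assumptions, not CBO figures.
current_recipients = 9_000_000   # assumed number of current Pell recipients
avg_new_award = 3_000            # assumed average award for newly eligible students

new_recipients = 0.02 * current_recipients        # "about two percent more students"
cost_of_new_awards = new_recipients * avg_new_award

increased_awards = 0.20 * current_recipients      # "about one in five students"
cost_of_increases = increased_awards * 350        # "an increase of approximately $350"

total = cost_of_new_awards + cost_of_increases
print(f"New awards:      ${cost_of_new_awards / 1e9:.2f} billion")
print(f"Award increases: ${cost_of_increases / 1e9:.2f} billion")
print(f"Rough total:     ${total / 1e9:.2f} billion per year")
```

With these made-up inputs the total lands a bit above $1 billion; the exact figure obviously depends on the assumed recipient count and the awards that newly eligible students would receive, but the arithmetic is consistent with the CBO’s bottom line.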

Proposal 2: Tie Pell eligibility to federal poverty guidelines instead of EFCs. I am quite interested in this idea, as it would greatly streamline the financial aid eligibility process, but I’m not sure it is the best idea out there. The federal poverty guidelines vary by household size (and are higher in Alaska and Hawaii), so Pell eligibility could be determined by comparing a family’s income to the relevant guideline. This is indirectly done right now through means-tested benefit programs; for example, eligibility for the free/reduced-price lunch program is based on the poverty line (130% of the guideline for free lunch, 185% for reduced-price). Since students with a family member receiving free or reduced-price lunch can already qualify for a simplified FAFSA, this may not be such a leap. The CBO estimates that about one in ten students would have their Pell status affected by the option it modeled and that costs would fall by $1.4 billion per year, but the percentage of the poverty guideline used (up to 250% in the CBO’s analysis) would likely change in the legislative process.
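Here is a minimal sketch of how such an eligibility test could work, using the 2013 federal poverty guidelines for the 48 contiguous states and a hypothetical 250% cutoff (the upper bound in the CBO’s analysis; the actual cutoff would be set in legislation). The eligibility rule itself is my own simplification, not a formula from the report.

```python
# Sketch of poverty-guideline-based Pell eligibility.
# 2013 federal poverty guidelines, 48 contiguous states and D.C.:
# $11,490 for a one-person household plus $4,020 for each additional person.
# (Alaska and Hawaii use higher guidelines, omitted here for simplicity.)

def poverty_guideline(household_size: int) -> int:
    return 11_490 + 4_020 * (household_size - 1)

def percent_of_poverty(income: float, household_size: int) -> float:
    return 100 * income / poverty_guideline(household_size)

def pell_eligible(income: float, household_size: int, cutoff: float = 250) -> bool:
    """Hypothetical rule: eligible if family income is below `cutoff` percent of poverty."""
    return percent_of_poverty(income, household_size) < cutoff

# A family of four earning $40,000 is at about 170% of the poverty guideline:
print(round(percent_of_poverty(40_000, 4)))   # -> 170
print(pell_eligible(40_000, 4))               # -> True under a 250% cutoff
```

The appeal for streamlining is that the only inputs are income and household size, both of which are far easier to collect than the full set of items on the current FAFSA.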

In the alternatives section of the report (page 26), the CBO discusses committing Pell funds to students in middle and high school, noting that such a program could increase academic and financial preparation for postsecondary education. This sounds very similar to a paper that Sara Goldrick-Rab and I wrote on a possible early commitment Pell program (a citation would have been nice!), but the CBO doesn’t provide any estimates of that program’s costs. We estimate in our paper that the program would cost about $1.5 billion per year, with the federal government likely to at least break even in the long run through increased tax payments (something not discussed in any of the policy options in the brief).

I’m glad to see this report on possible options for Pell reform, and I hope the CBO will continue to get requests to score and examine innovative ideas to improve the delivery of financial aid.