Am I On the Wrong Job Market?

In light of being on the academic job market this year, I was amused to get the following mailing from the local branch of Globe University. (Even though the mailing was addressed to me, it was also addressed to “Or Current Resident.”)

Globe Ad

The message is quite simple: graduates of this university get jobs. The mailing advertises that 100% of graduates with a bachelor’s degree in business administration get employment, although the fine print does mention that “employment is not guaranteed.” However, not much can be said about the graduation rates of students attending any of the Globe campuses, both because very few students attending Globe are first-time, full-time students (which are the only students counted in federal graduation rate calculations) and because many campuses (including Madison) have not been open long enough to have a graduation rate cohort.

To get an idea of graduation rates, I looked at the oldest Globe campus, in Brooklyn Center, MN. The reported graduation rate is 23%, with an overall career placement rate of 72%. Meanwhile, tuition is over $5,000 per quarter before mandatory course fees. I’m not saying that Globe University is a bad bet for students, but some students are likely to benefit more by attending the local technical college.

Moral of the story: Don’t believe colleges that imply everyone gets a job. This isn’t even true at the most prestigious colleges, let alone at relatively unknown for-profit institutions. (That said, I do hope my UW-Madison PhD helps me get a great job!)

Not Every College is Elite

Like many happenings in American society, the perception of the college selection process is driven by the most elite people and institutions. There are plenty of stories out there about how students apply to more than ten colleges, yet are lucky to get accepted to only their “safety school.” (For example, look at this blog from the newspaper of America’s elite.) But many prospective students and their families do not realize that the majority of four-year colleges are not highly selective and will admit most high school graduates.

An article in today’s Inside Higher Ed (titled “The (Needless?) Frenzy”) highlights the results of a national survey conducted by the National Association for College Admission Counseling (membership required to see the full report). The survey shows that the average four-year institution admits about two-thirds of its applicants, with little difference between public and private colleges. I examined federal IPEDS data on 2010-11 admission rates for the 1,569 colleges in this year’s Washington Monthly college rankings and also found that the average college admitted 65% of applicants. The graph below shows the distribution of admission rates:

SOURCE: IPEDS.

Not every college is elite enough to reject most of its applicants. Although students tend to apply to colleges where they have at least some chance of admission, it is worth noting that some top-rated colleges in the Washington Monthly rankings admit more students than the average. For example, my alma mater, Truman State University, admits 74% of its applicants while ranking sixth among master’s universities. Truman is certainly selective (with a median ACT of 28), but it is far from elite. Prospective students and their families should keep in mind that there are very good schools that are not absurdly selective, and policymakers should focus their efforts on supporting student success at these institutions.

More Fun With College Rankings

I was recently interviewed by Koran Addo of the (Baton Rouge) Advocate regarding my work with the Washington Monthly college rankings. I’ve had quite a few phone and e-mail exchanges with college officials and the media about my work, but I want to highlight the resulting article both because it was extremely well done and because it highlights what I consider to be the foolish obsession with college rankings.

Two pieces of the article deserve special attention. First, consider this tidbit:

“LSU System President and Baton Rouge Chancellor William Jenkins said he was ‘clearly disappointed’ to learn that LSU had tumbled six spots from 128th last year to 134th in the U.S. News ‘Best Colleges 2013’ list.”

I wish that college rankings came with confidence intervals, which would provide a rough guide to whether a change over time reflects more than chance or statistical noise. Based on my work with rankings, I can safely say that such a small change is neither statistically significant nor educationally meaningful.
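To see why a six-spot move can be pure noise, here is a minimal simulation under my own assumptions (200 hypothetical schools, a composite score with modest year-to-year measurement noise, no change at all in underlying quality). It is not the U.S. News methodology, just an illustration of how densely packed scores turn small noise into large rank swings.

```python
import random

random.seed(0)

N_SCHOOLS = 200   # hypothetical field of ranked institutions
NOISE_SD = 2.0    # assumed year-to-year noise in the composite score

# True underlying quality scores, held fixed across both years.
true_scores = [random.gauss(50, 10) for _ in range(N_SCHOOLS)]

def observed_ranks(scores, noise_sd):
    """Rank schools (1 = best) after adding measurement noise to each score."""
    noisy = [(s + random.gauss(0, noise_sd), i) for i, s in enumerate(scores)]
    noisy.sort(reverse=True)
    ranks = [0] * len(scores)
    for rank, (_, i) in enumerate(noisy, start=1):
        ranks[i] = rank
    return ranks

year1 = observed_ranks(true_scores, NOISE_SD)
year2 = observed_ranks(true_scores, NOISE_SD)

# Rank movement between "years" even though nothing real changed.
moves = [abs(a - b) for a, b in zip(year1, year2)]
print("median rank change:", sorted(moves)[len(moves) // 2])
print("share moving 6+ spots:", sum(m >= 6 for m in moves) / len(moves))
```

With scores this densely packed, a meaningful share of schools "tumble" six or more spots on noise alone, which is exactly why a confidence interval around each rank would be useful.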

The next fun quote from the article is from LSU’s director of research and economic development, Nicole Baute Honorée. She argues that only rankings from the National Science Foundation matter:

“Universities are in the knowledge business, as in creating new knowledge and passing it along. That’s why the NSF rankings are the gold standard.”

The problem is that research expenditures (a) do not guarantee high-quality undergraduate education, (b) do not have to be used effectively in order to generate a high score, and (c) do not reward many disciplines (such as the humanities). They are a useful measure of research clout in the sciences, but I would rely on them as only one of many measures (which is what the Washington Monthly rankings have done since long before I took the reins).

Once again, I urge readers not to rely on a single measure of college quality—and to make sure any measure is actually aligned with student success.

The Big N Conference and Athletic Realignment

Mentioning the name “Big Ten” evokes certain sentiments in the minds of many Americans. Although the conference is much more than just smashmouth, low-scoring football games played on chilly November days under gunmetal skies in places as exotic as Iowa City, Ann Arbor, and West Lafayette, football certainly does rule the roost in the conference. But there has traditionally been much more to the conference than big-time football.

As a doctoral student and a fan of so-called “minor sports” such as wrestling, I appreciate the other benefits of the Big Ten. The academic wing of the Big Ten (plus the University of Chicago, a former conference member in athletics before moving to Division III), the Committee on Institutional Cooperation, is an outstanding resource that helps make accessing research materials from other member colleges much easier. And the runaway financial success of the Big Ten Network helps make athletic programs a more free-standing enterprise, reduces student subsidies for athletics, and provides coverage of a wide range of sports besides football and basketball.

Few other conferences are as financially stable as the Big Ten; the Pacific 12 and Southeastern Conferences are the only other truly stable conferences. While the ironically named Big Ten swallowed up its twelfth member (Nebraska) in the last round of conference realignment, the Pac-12 added Utah and Colorado and the SEC added Missouri and Texas A&M. Three other large-school conferences (the ten-member Big 12, the Atlantic Coast Conference, and the Big East Conference, which also has members who do not play football) have been fighting to survive, as at least one of them is unlikely to remain near the top of the athletic pecking order after another round of realignment.

The current conference order seemed fairly stable (minus the strange moves made by the Big East Conference) until this week. I was watching the Ohio State-Wisconsin football game Saturday afternoon when I saw an item on the bottom of the screen mentioning that the University of Maryland was in serious discussions to move to the Big Ten from the ACC. Sure enough, that move was made official on Monday, with the clear reasons for the expansion being Maryland’s large budget shortfall in athletics and the goal of adding more TV revenue in the next round of negotiations. Rutgers followed on Tuesday, with a big move from the struggling Big East Conference.

The fourteen-member Big Ten (let’s just call it the Big N, shall we?) may not be done adding members. Sixteen members is an appealing number, with potential candidates in the Universities of Virginia, North Carolina, Kansas, and Connecticut as well as possibly Notre Dame, a longtime point of discussion. Despite the increased amount of travel that college athletes must endure and the weakening of regional rivalries (such as Wisconsin versus Iowa in football), superconferences appear to be the way of the future. It is likely that the SEC and Pac-12 will add members to get to sixteen schools, with some merger of the ACC and Big East making up the fourth power conference. Everyone else will be fighting for scraps at the proverbial kids’ table.

I would love to hear predictions for how the big-time college conferences end up shaking out and whether academic factors will continue to be important for the Big Ten. Your comments and predictions are encouraged!

Paying It Forward: A Different Take on Income-Based Repayment

In prior blog posts, I have been less than charitable toward the federal government’s changes to income-based repayment policies for student loans. (As a reminder, these changes provide large subsidies to students who attend expensive colleges, particularly those who earn good salaries after law or medical school.) But my criticism of the federal government’s implementation does not mean I am closed to a better approach to income-based repayment. With this in mind, I look at a proposal from the Economic Opportunity Institute, a liberal think tank in Washington State, for an income-based repayment program covering students at that state’s public colleges and universities.

The EOI’s proposal, called “Pay It Forward,” would charge students no tuition or fees upfront; instead, students would sign a contract agreeing to pay a set percentage of their adjusted gross income (possibly one percent for each year enrolled) for 25 years after leaving college. It appears that the state would rely on the IRS to enforce payment in order to capture part of the earnings of those who leave Washington. That would be tricky in practice, given the IRS’s general reticence to step into state-level policies.
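The mechanics described above are simple enough to sketch. The parameters below (a one-percent rate per year enrolled, a $40,000 starting salary with 3% nominal raises) are my own illustrative assumptions, not figures from the EOI report:

```python
def pay_it_forward_total(agi_by_year, rate_per_year_enrolled, years_enrolled):
    """Total repaid under a Pay It Forward-style contract: a fixed share of
    AGI each year for the repayment window.

    As I read the proposal, the rate is roughly one percent for each year
    enrolled (so 4% of AGI for a four-year degree); the exact parameters
    here are assumptions for illustration.
    """
    rate = rate_per_year_enrolled * years_enrolled
    return sum(rate * agi for agi in agi_by_year)

# Hypothetical graduate: starts at $40,000, 3% nominal raises, 25 years.
incomes = [40_000 * 1.03 ** t for t in range(25)]
total = pay_it_forward_total(incomes, 0.01, 4)
print(f"total repaid over 25 years: ${total:,.0f}")
```

Even a quick sketch like this makes the design question obvious: whether 25 years of these payments covers the upfront tuition the state forgoes depends entirely on earnings paths and on whether out-of-state earners can actually be made to pay.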

I am by no means convinced by the group’s crude simulations of the program’s feasibility. The proposal is currently short on details and would require a large one-time investment to get off the ground and enroll an initial cohort of students. It is also not clear whether the authors accounted for part-time enrollment patterns in their cost estimates. I urge further caution because this sort of income-based repayment program decouples the cost of attendance from what students actually pay: colleges suddenly have a strong incentive to raise their posted tuition substantially in order to capture the additional revenue.

With all of these caveats, Pay It Forward could serve as a simple income-based repayment program once analysts conduct more thorough cost-effectiveness analyses. But it will only work if policymakers keep a close eye on the cost of college so that the program remains revenue-neutral. My gut feeling is that the group’s estimates understate the cost of college under current rules and ignore the incentives the program creates to drive costs higher.

Am I Selling “Mathematical Nonsense”?

When I started a line of research on college rankings and value-added, I assumed that if my work ever saw the light of day, it would be at least somewhat controversial. I’ve gotten plenty of feedback on my academic research on the topic, and most of that has been at least mildly encouraging. And I’ve gotten even more feedback on the Washington Monthly college rankings, most of which has also been fairly positive. This work has given me the opportunity to talk with dozens of institutional researchers, college presidents, and provosts from around the country about their best practices for measuring student success.

But one e-mail that we received was sharply negative and over the top. Frederik Ohles, president of Nebraska Wesleyan University in Lincoln, Nebraska, sent along a wonderful missive. Here is the edited version that ran in this month’s magazine (subscribe to the print version here):

—————–

“There are lots of things that I’ve long admired about your magazine. And for that reason, I had thought you might do a better job in the business of college rankings than U.S. News & World Report. But on reading this year’s issue, I was disappointed. In the Monthly college rankings, Nebraska Wesleyan University is predicted to graduate 66 percent, graduates 65 percent, and you rank us number 144 [out of 254] for that result.

What kind of Rube Goldberg-inspired formula would lead to this result? Sorry, folks, but you’ve discredited yourselves with such mathematical nonsense. In the future you’d better stick to subjects that you know something about. Math and ranking methodologies sure aren’t among them.”

—————–

The e-mail went on to call me and the rest of the College Guide staff “charlatans,” just like the U.S. News staff, but you get the point. In any case, I resisted a strong urge to snark in the published response, an excerpt of which is below:

“You focus entirely on the numerator of the measure in your letter and do not mention the denominator—the annual net price of attendance, in your school’s case $20,723. If the net price of Nebraska Wesleyan University were lower, the school’s ranking on this measure would be higher.”

In my full response, I assured Mr. Ohles that it is my goal to never be a charlatan. But am I selling mathematical nonsense? You be the judge.
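The numerator/denominator logic in my response can be made concrete. The sketch below is my own simplified illustration of a cost-adjusted measure, not the Washington Monthly’s published formula; the only real figures are the ones from the exchange above (predicted 66%, actual 65%, net price $20,723), and the comparison school is hypothetical:

```python
def score(actual_pct, predicted_pct, net_price):
    # Numerator: graduation performance (actual minus predicted, in points,
    # shifted so the score stays positive). Denominator: annual net price.
    # A sketch of the logic in my response, not the published formula.
    return (actual_pct - predicted_pct + 100) / net_price

# Nebraska Wesleyan's figures from the letter, plus a hypothetical peer
# with identical graduation performance but a lower net price.
nwu  = score(65, 66, 20_723)
peer = score(65, 66, 12_000)
assert peer > nwu  # same performance, lower price -> better score
print(f"NWU score: {nwu:.5f}, hypothetical peer score: {peer:.5f}")
```

The point of the exercise: two schools can graduate students at exactly their predicted rates and still rank very differently, because one delivers that result at a much lower price to students.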

Pell Grants and Data-Driven Decisions

I am a big proponent of making data-driven decisions whenever possible, but sadly that isn’t the norm among policymakers. Recently, in an effort to reduce costs, Congress and the Obama Administration agreed to reduce the maximum number of semesters of Pell Grant eligibility from 18 to 12 (which is in line with the federal government’s primary graduation rate measure for students attending four-year colleges). However, this decision was made without considering the cost-effectiveness of the policy change, or even a good estimate of how many students would be affected.

Today’s online version of The Chronicle of Higher Education includes a piece that I co-authored with Sara Goldrick-Rab on this policy change. We are both strong proponents of data-driven decision making and of conducting experiments whenever possible to evaluate the effects of policy changes. We come from very different places on the political spectrum (which is why we disagree on whether the federal government can and should hold states accountable for their funding decisions), but some fundamental points should simply be part of any effective policymaking process.

College Selectivity Does Not Imply Quality

For me, the greatest benefit of attending academic conferences is the ability to clarify my own thinking about important issues in educational policy. At my most recent conference last week, I attended several outstanding sessions on issues in higher education in addition to presenting my own work on early commitment programs for financial aid. (I’ll have more on that in a post in the near future, so stay tuned.) I greatly enjoyed the talks and learned quite a bit from them, but the biggest thing I am taking away from them is something that I think they’re doing wrong—conflating college selectivity with college quality.

When most researchers refer to “college quality,” they are really referring to a college’s inputs, such as financial resources, student ACT/SAT scores, and high school class rank. In other words, a college is called high-quality because it is selective and has what we consider quality inputs. But plentiful inputs do not guarantee quality outcomes relative to what we would expect given those students and resources. Rather, a quality college helps its student body succeed instead of merely recruiting a select group of students. This does not mean that selective colleges cannot be quality colleges; it does mean the relationship is not guaranteed.

I am particularly interested in measuring college quality based on an estimate of its value added to students instead of a measure highly correlated with inputs. Part of my research agenda is on that topic, as illustrated by my work compiling the Washington Monthly college rankings. However, other popular college rankings continue to reward colleges for their selectivity, which creates substantial incentives to game the rankings system in unproductive ways.
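The value-added idea can be sketched in a few lines: predict an outcome (here, graduation rates) from input measures, and treat each college’s residual, how far it lands above or below its prediction, as the value-added estimate. The data below are synthetic and the single-input regression is deliberately minimal; it is an illustration of the approach, not the Washington Monthly’s model:

```python
import random

random.seed(1)

# Synthetic colleges: graduation rates driven partly by an input measure
# (median SAT) and partly by what the college itself adds.
n = 500
sat = [random.uniform(900, 1500) for _ in range(n)]
value_added = [random.gauss(0, 5) for _ in range(n)]          # unobserved quality
grad = [20 + 0.04 * s + v for s, v in zip(sat, value_added)]  # grad rate, percent

# OLS of graduation rate on the input measure (closed form, one predictor).
mean_x = sum(sat) / n
mean_y = sum(grad) / n
beta = (sum((x - mean_x) * (y - mean_y) for x, y in zip(sat, grad))
        / sum((x - mean_x) ** 2 for x in sat))
alpha = mean_y - beta * mean_x

# Value-added estimate: how much each college beats its prediction.
residuals = [y - (alpha + beta * x) for x, y in zip(sat, grad)]

def corr(a, b):
    """Pearson correlation between two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((x - mb) ** 2 for x in b) ** 0.5
    return cov / (va * vb)

print("corr(raw grad rate, true value added):", round(corr(grad, value_added), 2))
print("corr(residual, true value added):", round(corr(residuals, value_added), 2))
```

In this simulation the residuals track true value added far more closely than raw graduation rates do, which is precisely the case for outcome measures that adjust for inputs rather than reward them.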

For example, a recent piece in The Chronicle of Higher Education illustrates how one college submitted inaccurate and overly optimistic data for the U.S. News rankings. George Washington University, one of the few colleges in the country with a posted cost of attendance over $50,000 per year, had previously reported that 78% of its incoming freshman class graduated in the top ten percent of their high school class, despite large numbers of high schools declining to rank students in recent years. An eagle-eyed staffer in the provost’s office realized that the number was too high and discovered that the admissions staff had been inaccurately estimating ranks for students with no data. The revised figure was only 58%.

Regardless of whether GWU’s error was one of omission or malfeasance, the result was that the university appeared to be a higher-quality school under the fatally flawed U.S. News rankings. [UPDATE 11/15/12: U.S. News has removed GWU’s ranking from this year’s online guide.] GWU certainly aspires to be more selective, but keep in mind that selectivity does not imply quality in a value-added sense. Academics and policymakers would be wise to be careful not to say “quality” when they really mean selectivity.

A November Surprise in Student Loans

A few weeks ago, I co-authored a piece in The Chronicle of Higher Education on the federal government’s authority to relax income-based repayment requirements for student loans. To summarize the proposal, the federal government was granted the authority (starting in 2014) to allow students to repay student loans using only ten percent of their discretionary income for 20 years, down from 15 percent for 25 years. Our argument in the Chronicle piece is that the program represents an enormous subsidy for students attending expensive colleges and particularly professional schools. We were not the only people with those concerns; the left-leaning New America Foundation put out a similar set of concerns.

I was very surprised to learn yesterday that the Obama Administration published the final regulations for the new income-based repayment program (called “Pay as You Earn,” or PAYE) in the Federal Register, and that the program will now take effect well before 2014, apparently by July 1, 2013. There appears to be no regulatory authority for speeding up the changes beyond the federal requirement that regulations be published by November 1 in order to take effect on July 1 of the following year. These regulations continue a disturbing trend of this administration ignoring Congressionally mandated timelines. It is my sincere hope that someone will ask the Department of Education to clarify how speeding up implementation is legal, especially when Congress did not agree to that timeline and there is a cost impact (more on that later).

Substantial legal issues aside, it appears that the Department of Education did not seriously consider the moral hazard concerns of people taking out more debt simply because they will not have to repay it. Buried on page 28 of the 61-page regulation document is the following nugget:

“Income-based repayment options may encourage higher borrowing and potentially introduce an unintended moral hazard, especially for borrowers enrolled at schools with high tuitions and with low expected income streams. Some commenters disagreed with the inclusion of this moral hazard statement, noting that the aspect of more generous income-based repayment plans causing increased borrowing has not been established. The Department has not found any definitive studies on the matter but since some analysts, academics, and others have suggested the possibility of this inducement effect, we wanted to address it to ensure comprehensive coverage of this issue.”

The Department of Education then never addresses the topic in the rest of the regulation document, instead focusing on the benefits for borrowers with more modest amounts of debt and household incomes of less than $60,000 per year. I side with Jason Delisle and the New America policy folks, who still note that moral hazard is a substantial concern.

The cost estimates seem far too good to be true, which is often the case with new federal programs. The Department of Education (on p. 35 of the regulations) estimates that the program will cost only $2.1 billion over the next ten years, which seems incredibly low. Roughly $250 million per year in additional costs over the peak years of the budget window (the last few years should be discounted because they do not capture full cohort costs) covers only $50,000 in loan forgiveness for each of 5,000 students. There are far more than 5,000 law and medical school graduates each year who could benefit from this program, yet it is unclear whether the Department actually modeled professional school students (page 34 mentions that graduate students were modeled, but they carry much less debt on average).
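The back-of-envelope arithmetic behind that skepticism is worth writing out. The $2.1 billion figure is the Department’s; the $250 million annual pace and the $50,000 average forgiveness are my own rounding assumptions from the paragraph above:

```python
# Back-of-envelope check on the Department of Education's cost estimate.
total_estimate = 2.1e9             # Department's ten-year cost estimate
peak_annual = 250e6                # assumed annual cost in the peak years
forgiveness_per_student = 50_000   # assumed average additional forgiveness

# How many students per year could receive that much forgiveness
# within the assumed annual budget?
students_covered = peak_annual / forgiveness_per_student
print(f"students covered per year at $50,000 each: {students_covered:,.0f}")
```

Five thousand students per year is a small number next to the annual output of law and medical schools alone, which is why the official estimate looks thin to me.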

Despite this substantial shift in student loan policy, the education community and the media do not seem very interested in the cost shifting involved. The Chronicle had a nice article (subscription required) on the program’s moral hazard, and Inside Higher Ed mentioned the release of the final rules but missed this point entirely. The conservative Daily Caller also mentioned the changes but did not get into the questionable legal basis for advancing the policy before 2014 or the question of who benefits.

It is easy to link the release of these regulations to electoral politics as usual, and I am certainly skeptical of anything that happens this time of year. Given the sparse media coverage, and the fact that what coverage exists has not all been positive, it appears the Department of Education wanted the rules to take effect before they could receive more public scrutiny. I hope a successful lawsuit delays the rules on the grounds that the Obama Administration overstepped its legal authority in having them take effect in 2013 instead of 2014, giving researchers and policymakers a chance to rewrite the rules to target aid to those who truly need it instead of subsidizing expensive professional education programs.