Streamlining Financial Aid in Wisconsin

The Wisconsin Higher Educational Aids Board (HEAB), the state agency that administers need-based and merit-based financial aid programs, was recently tasked with forming a commission on financial aid consolidation and modernization. The commission had two primary charges:

(1) Explore consolidating all state need-based grants into one program.

(2) Study options for providing grants to students attending college less than half-time.

The current system of need-based grants has separate grants for four sectors of Wisconsin higher education: the University of Wisconsin System (UWS), the Wisconsin Technical College System (WTCS), the state’s tribal colleges, and the private, non-profit sector (WAICU). Sadly, HEAB’s recently released final report failed to streamline this complicated system. Each sector’s grant currently draws on a separate pool of funding, and the report encourages that practice to continue.

The current system of awarding grants by sector needs to be revamped. Buried on page 42 of the report is the current distribution of funding by sector:

Sector       Num. Eligible   Awarded (%)     Spent ($)   Unfunded ($)   Max Award ($)
UW System           43,808          70.1    58,321,266     32,922,506           2,384
WTCS                74,284          26.2    18,326,312     63,835,738           1,084
Tribal               1,204          26.0       441,963      1,593,276           1,800
Privates            17,935          58.6    26,613,208     23,291,709           2,900
Total              137,231          44.4   103,702,749    121,643,229


This distribution makes no sense, either in the percentage of eligible students awarded grant money (a function of budget constraints) or in the maximum award. I can’t speak to the needs of students attending the tribal colleges, given my lack of knowledge of those institutions and of their students’ other financial aid, but it seems logical for the same percentage of students to receive need-based aid across systems. Given the technical colleges’ lower tuition, I can see why their students receive smaller grants.
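To make the disparities concrete, the figures in the report's table can be checked with a few lines of Python. The headcounts, percentages, and dollar amounts come straight from the table; the implied average award is my own back-of-the-envelope calculation and is approximate, since the percent-awarded figures are rounded.

```python
# Figures from page 42 of the HEAB report: (eligible students,
# percent awarded, dollars spent) by sector.
sectors = {
    "UW System": (43_808, 70.1, 58_321_266),
    "WTCS":      (74_284, 26.2, 18_326_312),
    "Tribal":    (1_204,  26.0, 441_963),
    "Privates":  (17_935, 58.6, 26_613_208),
}

for name, (eligible, pct, spent) in sectors.items():
    awarded = round(eligible * pct / 100)   # approx. students actually funded
    avg_award = spent / awarded             # implied average grant per student
    print(f"{name:10s} ~{awarded:6d} funded, implied avg award ${avg_award:,.0f}")
```

The output makes the sector gap plain: technical college students receive well under half the average grant of students at the private colleges, despite a far lower share of eligible students being funded at all.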

I also don’t see a compelling reason for the state to give more aid to students attending private colleges than to those attending public colleges. It is true that the state saves money if a student attends a private college (by being able to appropriate less money for the public sector), but I seriously doubt that students will abandon a private college if their grant aid is cut by about $500. This is especially true since some students attending private colleges can receive need-based state aid even when they are ineligible for the federal Pell Grant, an option not available to students at public colleges.

The report also endorsed the status quo on eligibility: students attending college less than half-time (five or fewer credits per semester) remain ineligible for state grants. This would change only if each sector supported new eligibility rules, sufficient funding became available, and HEAB gained additional staff to monitor the additional students, conditions that are unlikely to be met anytime soon.

In my view, the commission completely failed to meet its charge: little was done to streamline financial aid in Wisconsin or to fix persistent inequities in the funding system. The Legislature should seriously consider combining all need-based grant programs into one pot, even though the stakeholders on the committee disagree.

Making the College Scorecard More Student Friendly

The Obama Administration and the U.S. Department of Education have spent a great deal of time and effort developing a simple one-page “college scorecard.” The goal of the scorecard is to provide information about the cost of attending college, average graduation rates, and student debt. The Department of Education has also launched a College Affordability and Transparency Center, which highlights colleges with unusually high or low costs to students.

Although I have no doubt that the Administration shares my goal of making useful information available to prospective students and their families, I doubt the current measures are having much effect. The college scorecard is difficult to understand, full of technical language that is second nature to higher education professionals but completely foreign to many prospective students. Because of this, I was happy to see a new report from the Center for American Progress, a liberal think tank, suggesting improvements to the measures. (As a side note, liberal and conservative think tanks work together quite a bit on issues of higher education. Transparency and information provision are nearly universal principles, and partisan flashpoints such as state-level teachers’ unions and charter schools just aren’t as present in higher ed.)

The authors of the report took the federal government’s scorecard and their own version to groups of high school students, who tested the two versions and suggested improvements. The key points aren’t terribly surprising—focusing on a few important measures with simple language is critical—but it appears that the Department of Education has not yet adequately tested its scorecard. I am also not surprised that students prefer to see four-year graduation rates instead of six-year rates, as everyone thinks they will graduate on time—even though we know that is far from the case.

The changes to the college scorecard are generally a good idea, but I remain concerned about students’ ability to access the information. Even if the scorecard is required to be posted on a college website (like certain outcome measures currently are), it does not mean that it will be easy to access. For example, the graduation rate for first-time, full-time students who received a Pell Grant during their first year of college must be posted on the college’s website, but actually finding this information is difficult. I hope outside groups (such as CAP) will continue to publicize the information, as greater use of the data is the best way to influence colleges’ behavior.

Am I On the Wrong Job Market?

In light of being on the academic job market this year, I was amused to get the following mailing from the local branch of Globe University. (Even though the mailing was addressed to me, it was also addressed to “Or Current Resident.”)

[Image: Globe University mailing]

The message is quite simple: graduates of this university get jobs. The mailing advertises that 100% of graduates with a bachelor’s degree in business administration get employment, although the fine print does mention that “employment is not guaranteed.” However, not much can be said about the graduation rates of students attending any of the Globe campuses, both because very few students attending Globe are first-time, full-time students (which are the only students counted in federal graduation rate calculations) and because many campuses (including Madison) have not been open long enough to have a graduation rate cohort.

To get an idea of graduation rates, I looked at the oldest Globe campus, in Brooklyn Center, MN. The reported graduation rate is 23%, with an overall career placement rate of 72%. Meanwhile, tuition is over $5,000 per quarter before mandatory course fees. I’m not saying that Globe University is a bad bet for students, but some students are likely to benefit more by attending the local technical college.

Moral of the story: Don’t believe colleges that imply that everyone gets a job. This isn’t even true at the most prestigious colleges, let alone at relatively unknown for-profit institutions. (Now, I do hope that my UW-Madison PhD helps me get a great job!)

Not Every College is Elite

Like many happenings in American society, the perception of the college selection process is driven by the most elite people and institutions. There are plenty of stories out there about how students apply to more than ten colleges, yet are lucky to get accepted to only their “safety school.” (For example, look at this blog from the newspaper of America’s elite.) But many prospective students and their families do not realize that the majority of four-year colleges are not highly selective and will admit most high school graduates.

An article in today’s Inside Higher Ed (titled “The (Needless?) Frenzy”) highlights the results of a national survey conducted by the National Association for College Admission Counseling (membership required to see the full report). The survey shows that the average four-year institution admits about two-thirds of its applicants, with little difference between public and private colleges. I examined federal IPEDS data on 2010-11 admit rates for the 1,569 colleges in this year’s Washington Monthly college rankings and also found that the average college admitted 65% of applicants. The below graph shows the distribution of admission rates:

[Graph: distribution of 2010-11 admission rates]

Not every college is elite enough to reject most of its applicants. Although students tend to apply to colleges that give them at least a reasonable chance of admission, it is worth noting that top-rated colleges in the Washington Monthly rankings admit more students than average. For example, my alma mater, Truman State University, admits 74% of its applicants while ranking sixth among master’s universities. Truman is certainly selective (with a median ACT of 28), but it is far from elite. Prospective students and their families should keep in mind that there are very good schools out there that are not absurdly selective, and policymakers should focus their efforts on helping these institutions succeed.

More Fun With College Rankings

I was recently interviewed by Koran Addo of the (Baton Rouge) Advocate regarding my work with the Washington Monthly college rankings. I’ve had quite a few phone and e-mail exchanges with college officials and the media about my work, but I want to highlight the resulting article both because it was extremely well done and because it highlights what I consider to be the foolish obsession with college rankings.

Two pieces of the article deserve special attention. First, consider this tidbit:

“LSU System President and Baton Rouge Chancellor William Jenkins said he was ‘clearly disappointed’ to learn that LSU had tumbled six spots from 128th last year to 134th in the U.S. News ‘Best Colleges 2013’ list.”

I wish that college rankings came with confidence intervals—which would provide a rough guide of whether a change over time is more than what we would expect by chance or statistical noise. Based on my work with rankings, I can safely say that such a small change in the rankings is not statistically significant and certainly not educationally meaningful.
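To make that point concrete, here is a toy simulation of my own construction (not any publisher's actual methodology): 200 hypothetical colleges with evenly spaced "true" quality scores are re-ranked after a small amount of measurement noise is added to every score, and we count how often the school truly ranked 128th moves six or more spots by chance alone.

```python
import random

random.seed(42)

N = 200                  # hypothetical colleges
GAP = 1.0 / N            # spacing between adjacent "true" scores
NOISE = 4 * GAP          # measurement noise worth roughly 4 rank-gaps
TARGET = 127             # index of the college truly ranked 128th

shifts = []
for _ in range(2000):
    # Add noise to every score, then re-rank from highest to lowest.
    noisy = sorted(((1 - i * GAP + random.gauss(0, NOISE), i) for i in range(N)),
                   reverse=True)
    new_rank = next(pos for pos, (_, i) in enumerate(noisy, 1) if i == TARGET)
    shifts.append(abs(new_rank - 128))

big_moves = sum(s >= 6 for s in shifts) / len(shifts)
print(f"Share of noisy re-rankings moving the school 6+ spots: {big_moves:.0%}")
```

With even this modest noise level, a six-spot swing occurs in a substantial share of draws, which is exactly why a move from 128th to 134th is not worth a chancellor's disappointment.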

The next fun quote from the article is from LSU’s director of research and economic development, Nicole Baute Honorée. She argues that only rankings from the National Science Foundation matter:

“Universities are in the knowledge business, as in creating new knowledge and passing it along. That’s why the NSF rankings are the gold standard.”

The problem is that research expenditures (a) do not guarantee high-quality undergraduate education, (b) do not have to be used effectively in order to generate a high score, and (c) do not reward many disciplines (such as the humanities). They are a useful measure of research clout in the sciences, but I would rely on them as only one of many measures (which is what the Washington Monthly rankings have done since long before I took the reins).

Once again, I urge readers not to rely on a single measure of college quality—and to make sure any measure is actually aligned with student success.

The Big N Conference and Athletic Realignment

Mentioning the name “Big Ten” evokes certain sentiments in the minds of many Americans. Although the conference is much more than just smashmouth, low-scoring football games played on chilly November days under gunmetal skies in places as exotic as Iowa City, Ann Arbor, and West Lafayette, football certainly does rule the roost in the conference. But there has traditionally been much more to the conference than big-time football.

As a doctoral student and a fan of so-called “minor sports” such as wrestling, I appreciate the other benefits of the Big Ten. The academic wing of the Big Ten (plus the University of Chicago, a former conference member in athletics before moving to Division III), the Committee on Institutional Cooperation, is an outstanding resource that helps make accessing research materials from other member colleges much easier. And the runaway financial success of the Big Ten Network helps make athletic programs a more free-standing enterprise, reduces student subsidies for athletics, and provides coverage of a wide range of sports besides football and basketball.

Few other conferences are as financially stable as the Big Ten—the Pacific-12 and Southeastern Conferences are the only other truly stable ones. While the ironically named Big Ten swallowed up its twelfth member (Nebraska) in the last round of conference realignment, the Pac-12 added Utah and Colorado and the SEC added Missouri and Texas A&M. Three other large-school conferences (the ten-member Big 12, the Atlantic Coast Conference, and the Big East Conference—which also has members who do not play football) have been trying to survive, as at least one of them is unlikely to remain near the top of the athletic pecking order after another round of realignment.

The current conference order seemed fairly stable (minus the strange moves made by the Big East Conference) until this week. I was watching the Ohio State-Wisconsin football game Saturday afternoon when I saw an item on the bottom of the screen mentioning that the University of Maryland was in serious discussions to move to the Big Ten from the ACC. Sure enough, that move was made official on Monday, with the clear reasons for the expansion being Maryland’s large budget shortfall in athletics and the goal of adding more TV revenue in the next round of negotiations. Rutgers followed on Tuesday, with a big move from the struggling Big East Conference.

The fourteen-member Big Ten (let’s just call it the Big N, shall we?) may not be done adding members. Sixteen members is an appealing number, with potential candidates in the Universities of Virginia, North Carolina, Kansas, and Connecticut as well as possibly Notre Dame, a longtime point of discussion. Despite the increased amount of travel that college athletes must endure and the weakening of regional rivalries (such as Wisconsin versus Iowa in football), superconferences appear to be the way of the future. It is likely that the SEC and Pac-12 will add members to get to sixteen schools, with some merger of the ACC and Big East making up the fourth power conference. Everyone else will be fighting for scraps at the proverbial kids’ table.

I would love to hear predictions for how the big-time college conferences end up shaking out and whether academic factors will continue to be important for the Big Ten. Your comments and predictions are encouraged!

Paying It Forward: A Different Take on Income-Based Repayment

In prior blog posts, I have been less than charitable toward the federal government’s changes to the income-based repayment policies for student loans. (As a reminder, these changes provide large subsidies to students who attend expensive colleges and particularly those who earn good salaries after having attended law or medical school.) My criticism of the federal government’s way of implementing the program does not mean that I am not open to a better way of income-based repayment. With this in mind, I look at a proposal from the Economic Opportunity Institute, a liberal think tank from Washington State, which suggests an income-based repayment program for students attending that state’s public colleges and universities.

The EOI’s proposal, called “Pay It Forward,” would charge students no tuition or fees upfront; instead, students would sign a contract agreeing to pay a certain percentage of their adjusted gross income (possibly one percent per year spent in college) for 25 years after leaving college. It appears that the state would rely on the IRS to enforce payment in order to capture part of the earnings of those who leave the state of Washington. This would be tricky to enforce in practice, given the IRS’s general reluctance to step into state-level policies.
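The basic arithmetic of the contract is easy to sketch. The rate (one percent of AGI per year enrolled) and the 25-year term come from the proposal; the starting income and growth rate below are my own hypothetical assumptions, not EOI's figures.

```python
# Back-of-the-envelope sketch of the Pay It Forward repayment contract.
years_enrolled = 4
rate = 0.01 * years_enrolled   # 4% of adjusted gross income per year
term = 25                      # years of repayment after leaving college

income = 40_000                # assumed starting AGI (hypothetical)
growth = 0.03                  # assumed annual income growth (hypothetical)

total_paid = 0.0
for _ in range(term):
    total_paid += rate * income
    income *= 1 + growth

print(f"Total repaid over {term} years: ${total_paid:,.0f}")
```

Under these assumptions a graduate repays somewhere in the high five figures over the full term, which gives a sense of why the program's solvency hinges so heavily on graduates' earnings paths and on keeping the upfront cost of attendance in check.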

I am by no means convinced by the group’s crude simulations of the program’s feasibility. The proposal is currently short on details and would require a large one-time investment to get off the ground and enroll an initial cohort of students. It is also not clear whether the authors of the report accounted for part-time enrollment patterns in their cost estimates. Finally, I urge caution because this sort of income-based repayment program decouples the cost of attendance from what students actually pay: colleges suddenly have a strong incentive to raise their posted tuition substantially in order to capture the additional revenue.

With all of these caveats, the Pay It Forward program does have the potential to serve as a simple income-based repayment program once analysts conduct more cost-effectiveness analyses. But it will work only if policymakers keep a close eye on the cost of college so that the program remains revenue-neutral. My gut feeling is that the group’s estimates understate the cost of college under current rules and ignore the incentives the program creates to increase costs.

Am I Selling “Mathematical Nonsense”?

When I started a line of research on college rankings and value-added, I assumed that if my work ever saw the light of day, it would be at least somewhat controversial. I’ve gotten plenty of feedback on my academic research on the topic, and most of that has been at least mildly encouraging. And I’ve gotten even more feedback on the Washington Monthly college rankings, most of which has also been fairly positive. This work has given me the opportunity to talk with dozens of institutional researchers, college presidents, and provosts from around the country about their best practices for measuring student success.

But one e-mail that we received was sharply negative and over the top. Frederik Ohles, president of Nebraska Wesleyan University in Lincoln, Nebraska, sent along a wonderful missive. Here is the edited version that ran in this month’s magazine (subscribe to the print version here):


“There are lots of things that I’ve long admired about your magazine. And for that reason, I had thought you might do a better job in the business of college rankings than U.S. News & World Report. But on reading this year’s issue, I was disappointed. In the Monthly college rankings, Nebraska Wesleyan University is predicted to graduate 66 percent, graduates 65 percent, and you rank us number 144 [out of 254] for that result.

What kind of Rube Goldberg-inspired formula would lead to this result? Sorry, folks, but you’ve discredited yourselves with such mathematical nonsense. In the future you’d better stick to subjects that you know something about. Math and ranking methodologies sure aren’t among them.”


The e-mail went on to call me and the rest of the College Guide staff “charlatans,” just like the U.S. News staff, but you get the point. In any case, I resisted a strong urge to snark in the published response, an excerpt of which is below:

“You focus entirely on the numerator of the measure in your letter and do not mention the denominator—the annual net price of attendance, in your school’s case $20,723. If the net price of Nebraska Wesleyan University were lower, the school’s ranking on this measure would be higher.”
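For what it's worth, the published response names the pieces of the measure (graduation performance in the numerator, annual net price in the denominator) but not the exact formula, so the ratio below is purely my own illustrative assumption rather than the Washington Monthly methodology. It shows why a lower net price would raise the score even with an identical graduation rate:

```python
def cost_adjusted(grad_rate_pct, net_price):
    # Illustrative only: graduation percentage points per $1,000 of net
    # price. The actual rankings formula may differ.
    return grad_rate_pct / (net_price / 1_000)

nwu = cost_adjusted(65, 20_723)      # Nebraska Wesleyan's published figures
peer = cost_adjusted(65, 15_000)     # same outcome, hypothetical lower price

print(f"NWU score: {nwu:.2f}; lower-price hypothetical peer: {peer:.2f}")
```

Any measure of this general shape rewards delivering the same outcome at a lower price to students, which is the point the published response was making.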

In my full response, I assured Mr. Ohles that it is my goal to never be a charlatan. But am I selling mathematical nonsense? You be the judge.

Pell Grants and Data-Driven Decisions

I am a big proponent of making data-driven decisions whenever possible, but sadly that isn’t the norm among policymakers. Recently, in an effort to reduce costs, Congress and the Obama Administration agreed to reduce the maximum number of semesters of Pell Grant eligibility from 18 to 12 (in line with the federal government’s primary graduation rate measure for students attending four-year colleges). However, this decision was made without considering the cost-effectiveness of the policy change, or even knowing how many students would be affected.

Today’s online version of The Chronicle of Higher Education includes a piece that I co-authored with Sara Goldrick-Rab on this policy change. We’re both strong proponents of data-driven decision making, as well as of conducting experiments whenever possible to evaluate the effects of policy changes. We come from very different places on the political spectrum (which is why we disagree on whether the federal government can and should hold states accountable for their funding decisions), but we agree on fundamental points that are simply part of an effective policymaking process.

College Selectivity Does Not Imply Quality

For me, the greatest benefit of attending academic conferences is the ability to clarify my own thinking about important issues in educational policy. At a conference last week, I attended several outstanding sessions on higher education in addition to presenting my own work on early commitment programs for financial aid. (I’ll have more on that in a post in the near future, so stay tuned.) I greatly enjoyed the talks and learned quite a bit from them, but my biggest takeaway is something that I think many researchers are doing wrong—conflating college selectivity with college quality.

When most researchers refer to “college quality,” they are really referring to a college’s inputs, such as financial resources, student ACT/SAT scores, and high school class rank. In other words, the college is selective and has what we consider quality inputs. But plentiful inputs do not guarantee quality outcomes relative to what we would expect from the student and the college. Rather, a quality college helps its student body succeed instead of just recruiting a select group of students. This does not mean that selective colleges cannot be quality colleges; it does mean that the relationship is not guaranteed.

I am particularly interested in measuring college quality based on an estimate of its value added to students instead of a measure highly correlated with inputs. Part of my research agenda is on that topic, as illustrated by my work compiling the Washington Monthly college rankings. However, other popular college rankings continue to reward colleges for their selectivity, which creates substantial incentives to game the rankings system in unproductive ways.

For example, a recent piece in The Chronicle of Higher Education illustrates how one college submitted inaccurate and overly optimistic data for the U.S. News rankings. George Washington University, one of the few colleges in the country with a posted cost of attendance over $50,000 per year, had previously reported that 78% of its incoming freshman class was in the top ten percent of their high school graduating classes, in spite of large numbers of high schools declining to rank students in recent years. An eagle-eyed staffer in the provost’s office realized that the number was too high and discovered that the admissions staff had been inaccurately estimating the rank of students with no data. The revised figure was only 58%.

Regardless of whether GWU’s error was one of omission or malfeasance, the result was that the university appeared to be a higher-quality school under the fatally flawed U.S. News rankings. [UPDATE 11/15/12: U.S. News has removed GWU’s ranking in this year’s online guide.] GWU certainly aspires to be more selective, but keep in mind that selectivity does not imply quality in a value-added sense. Academics and policymakers would be wise to be careful when discussing quality when they really mean selectivity.