Blog (Kelchen on Education)

To Be a “Best Value,” Charge Higher Tuition

In addition to the better-known college rankings from U.S. News & World Report, the magazine also publishes a list of “Best Value” colleges. The list seems helpful enough, with the stated goal of highlighting colleges that are a good value for students and their families. However, it rewards colleges for charging higher tuition and being more selective, factors that are not necessarily associated with true educational effectiveness.

U.S. News uses dubious methodology to calculate its Best Value list (a more detailed explanation can be found here). Before I get into the rankings components, there are two serious flaws with the methodology. First, colleges are only eligible to be on the list if they are approximately in the top half of the overall rankings. Since we already know that the rankings better measure prestige than educational effectiveness, the Best Value list must be taken with a shaker of salt right away. Additionally, for public colleges, U.S. News uses the cost of attendance for out-of-state students, despite the fact that the vast majority of students come from in-state. It is true that relatively more students at elite public universities (like the University of Wisconsin-Madison) come from out of state, but even here over 70% of freshmen come from Wisconsin or Minnesota. This decision inflates the cost of attending public institutions and thus shoves them farther down the list.

The rankings components are as follows:

(1) “Ratio of quality to price”—60%. This is the score on the U.S. News ranking (their measure of quality) divided by the net price of attendance, which is the cost of attendance less need-based financial aid. It is similar to what I did in the Washington Monthly rankings to calculate the cost-adjusted graduation rate measure. This measure has some merit, but it suffers from a prestige-based numerator and a net price denominator that is biased in favor of private institutions (thanks to the out-of-state pricing decision noted above).

(2) The percentage of undergraduates receiving need-based grants—25%. This measure rewards colleges with lots of Pell Grant recipients (which is just fine) as well as colleges with large endowments or high posted tuition that can offer lots of grants (which isn’t related to the actual price a student pays). If every student with a household income under one million dollars received a $5 need-based grant, a college would look good on this measure. In other words, this measure can be gamed.

(3) Average discount—15%. This is the average amount of need-based grants divided by the net price of attendance, which certainly rewards colleges with high posted tuition and lots of financial aid. (A rough sketch of how these three components might combine into a composite score appears below.)
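Since U.S. News does not publish its exact normalization, the rescaling in this minimal Python sketch is purely an assumption; only the 60/25/15 weights come from the methodology described above, and all of the institutional numbers are hypothetical.

```python
# Illustrative only: U.S. News does not publish its exact normalization, so the
# rescaling below is an assumption; the 60/25/15 weights come from the published
# component descriptions above.

def best_value_score(quality_score, net_price, pct_need_grants, avg_discount):
    """Combine the three Best Value components using the published weights.

    quality_score   -- overall U.S. News ranking score (higher is better)
    net_price       -- cost of attendance minus need-based aid, in dollars
    pct_need_grants -- share of undergraduates receiving need-based grants (0-1)
    avg_discount    -- average need-based grant divided by net price (0-1)
    """
    quality_to_price = quality_score / net_price                 # component 1, 60%
    return (0.60 * quality_to_price * 10_000   # arbitrary rescaling for readability
            + 0.25 * pct_need_grants * 100     # component 2, 25%
            + 0.15 * avg_discount * 100)       # component 3, 15%

# Hypothetical inputs: a high-tuition, high-aid private college can outscore a
# cheaper public one (whose net price is inflated by the out-of-state assumption).
print(best_value_score(90, 30_000, 0.55, 0.60))   # elite private: ~40.8
print(best_value_score(75, 22_000, 0.35, 0.20))   # flagship public: ~32.2
```

Under these made-up numbers, the public college actually wins on the quality-to-price component, but the two grant-based components push the high-sticker-price private college ahead overall.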

Once again, I won’t focus on the actual rankings, but I will say that the top of the list is dominated by elite private colleges with massive endowments. Daniel Luzer of Washington Monthly (with whom I’ve had the pleasure of working over the past six months) has a good take on the top of the Best Value list. He notes that although these well-endowed institutions do provide a lot of financial aid to needy students, they don’t actually educate very many of those students.

I am glad that the 800-pound gorilla in the college rankings game is thinking about whether a college is a good value to students, but its methodological choices mean that colleges that are actually educating students in a cost-effective manner are not the ones being rewarded.

Explaining Subgroup Effects: The New York City Voucher Experiment

The last three-plus years of my professional life have been consumed by attempting to determine the extent to which a privately funded need-based grant program affected the outcomes of college students from low-income families in the state of Wisconsin. Although the overall impact on university students in the program’s first cohort is effectively zero, this masks a substantial amount of heterogeneity in outcomes across different types of students. One of the greatest challenges we have faced in interpreting the results is determining whether program impacts are truly different across subgroups (such as academic preparation and type of initial institution attended), an analysis that is sorely lacking in many studies. (See our latest findings here.)

Matthew Chingos of the Brookings Institution and Paul Peterson of Harvard had to face a similar challenge in explaining subgroup effects in their research on a voucher program in New York City. They concluded that, although the overall impact of offering vouchers to disadvantaged students on college enrollment was null, there were positive and statistically significant effects for black students. This research got a great deal of publicity, including in the right-leaning Wall Street Journal. (I’m somewhere in the political middle on vouchers in K-12 education—I am strongly in favor of open enrollment across public school districts and support vouchers for qualified non-profit programs, but am much more hesitant to support vouchers for faith-based and unproven for-profit programs.)

This research got even more attention today with a report by Sara Goldrick-Rab of the University of Wisconsin-Madison (my dissertation chair) which sought to downplay the subgroup effects (see this summary of the debate in Inside Higher Ed). This brief, released through the left-leaning National Education Policy Center (here is the review panel), notes that the reported impacts for black students are in fact not statistically different from Hispanic students and that the impacts for black students may not even be statistically significant due to data limitations (word to the wise: the National Student Clearinghouse is not a perfect data source). I share Sara’s concerns about statistical significance and subgroup effects. [UPDATE: Here is the authors’ response to Sara’s report, which is not surprising. If you like snark with your policy debates, I recommend checking out their Twitter discussion.]

I am generally extremely hesitant to make much out of differences in impacts by race (as well as by other characteristics like parental education and family income) for several reasons. First, it is difficult to consistently measure race. (How are multiracial students classified? Why do some states classify students differently?) Second, although researchers should look at differences in outcomes by race, the question then becomes, “So what?” If black students do benefit more from a voucher program than Hispanic students, the policy lever isn’t clear; it is extremely difficult to target a policy toward one race and not another. Chingos and Peterson were right in their WSJ piece to make the relevant comparison—if vouchers worked for black students in New York City, they might work in Washington, DC. Finally, good luck enacting a policy that makes opportunities available only to people of a certain racial background; this is much less of a problem when considering family income or parental education.

Although the true effects of the voucher program for black students may not have been statistically significant, the program is still likely to be cost-effective given the much lower costs of the private schools. Researchers and educators should carefully consider what these private schools are doing to generate similar educational outcomes at a lower cost—and also consider whether private schools are spending less money per student due to educating students with fewer special needs. I would like to see more of a discussion of cost-effectiveness in both of these pieces.
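As a rough illustration of the kind of cost-effectiveness comparison I have in mind, here is a minimal sketch with entirely hypothetical numbers; neither study reports per-pupil costs or effects in this form, so treat the figures as placeholders.

```python
# Entirely hypothetical numbers; neither study reports figures in this form.
# Cost-effectiveness here = percentage-point gain in college enrollment per
# $1,000 spent per pupil per year.

def cost_effectiveness(effect_pp, annual_cost_per_pupil):
    return effect_pp / (annual_cost_per_pupil / 1000)

# Suppose both sectors produce the same 5-point enrollment gain, but the private
# (voucher) schools cost far less per pupil than the public alternative:
print(cost_effectiveness(5, 4_000))    # voucher: 1.25 points per $1,000
print(cost_effectiveness(5, 12_000))   # public:  ~0.42 points per $1,000
```

Even identical outcomes at a lower cost imply a better cost-effectiveness ratio, which is why the question of whether private schools are simply educating students with fewer special needs matters so much.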

Measuring Prestige: Analyzing the U.S. News & World Report College Rankings

The 2013 U.S. News college rankings were released today and are certain to be the topic of discussion for much of the higher education community. Many people grumble about the rankings, but it’s hard to dismiss them given their profound impact on how colleges and universities operate. It is not uncommon for colleges to set goals to improve their ranking, and data falsification is sadly a real occurrence. As someone who does research on college rankings and accountability policy, I am glad to see the rankings come out every fall. However, I urge readers to take these rankings for what they really are—a measure of prestige rather than college effectiveness.

The measures used to calculate the rankings are generally the same as last year and focus on six or seven factors, depending on the type of university:

–Academic reputation (from peers and/or high school counselors)

–Selectivity (admit rate, high school rank, and ACT/SAT scores)

–Faculty resources (salary, terminal degree status, full-time status, and class size)

–Graduation and retention rates

–Financial resources per student

–Alumni giving rates

–Graduation rate performance (only for research universities and liberal arts colleges)

Most of these measures can be directly improved by raising tuition and/or enrolling only the most academically prepared students. The only measure that is truly independent of prestige is the graduation rate performance measure, in which the actual graduation rate is regressed on student characteristics and spending to generate a predicted graduation rate. While U.S. News doesn’t release its methodology for calculating this measure, the results are likely similar to what I did in the Washington Monthly rankings.
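The general approach is straightforward even without the proprietary details: regress actual graduation rates on student characteristics and spending, then treat the gap between actual and predicted rates as “performance.” Here is a minimal sketch using simulated, made-up institution-level data; it illustrates the technique, not U.S. News’s actual specification.

```python
# Minimal sketch of a graduation rate performance measure using simulated data.
# Variables and coefficients are made up; U.S. News's actual inputs are not public.
import numpy as np

rng = np.random.default_rng(0)
n = 200
sat = rng.normal(1100, 120, n)        # median SAT score of the entering class
pell = rng.uniform(0.1, 0.6, n)       # share of students receiving Pell Grants
spend = rng.normal(25000, 8000, n)    # educational spending per student

# Simulated "actual" six-year graduation rates (percent)
grad = np.clip(0.06 * sat - 30 * pell + 0.0004 * spend + rng.normal(0, 5, n), 0, 100)

# Regress actual graduation rates on the predictors via ordinary least squares
X = np.column_stack([np.ones(n), sat, pell, spend])
beta, *_ = np.linalg.lstsq(X, grad, rcond=None)

predicted = X @ beta
performance = grad - predicted   # positive = graduating more students than predicted
```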

I am pleased to see that U.S. News is using an average of multiple years of data for some of its measures. I do this in my own work on estimating college effectiveness, although this was not a part of Washington Monthly’s methodology this year (it may be in the future, however). The use of multiple years of data does reduce the effects of random variation (and helps to smooth the rankings), but I am concerned that U.S. News uses only the submitted years of data if not all years are submitted. This gives colleges an incentive to not report a year of bad data on measures such as alumni giving rates.
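To see why averaging only the submitted years is a problem, consider a toy example with hypothetical alumni giving rates:

```python
# Hypothetical alumni giving rates; shows the incentive to withhold a bad year
# when only the submitted years are averaged.
rates_all = [18.0, 9.0, 17.0]          # college reports all three years
rates_selective = [18.0, None, 17.0]   # college withholds the bad middle year

def average_submitted(rates):
    submitted = [r for r in rates if r is not None]
    return sum(submitted) / len(submitted)

print(average_submitted(rates_all))        # ~14.7 -- the full picture
print(average_submitted(rates_selective))  # 17.5  -- looks better by omission
```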

Overall, these rankings are virtually unchanged from last year, but watch colleges crow about moving up two or three spots even when their scores hardly changed. These rankings are big business in the prestige market; I hope that students who care more about educational effectiveness will consider other measures of college quality in addition to these rankings.

I’ll put up a follow-up post in the next few days discussing the so-called “Best Value” college list from U.S. News. As a preview, I don’t hold it in high regard.

Disclaimer: I am the consulting methodologist for the 2012 Washington Monthly college rankings. This post reflects only my thoughts and was not subject to review by any other individual or organization.

How (Not) to Rank Colleges

The college rankings marketplace became a little more crowded this week with the release of a partial set of rankings from an outfit called The Alumni Factor. The effort, headed by Monica McGurk, a former partner at McKinsey & Company, ranks colleges primarily on the results of proprietary alumni surveys. The site claims to have surveyed roughly 100 to 500 alumni across the age distribution at 450 universities in creating these rankings. (By this point, a few alarm bells should be sounding. More on those later.)

The Alumni Factor uses thirteen measures from its surveys to rank colleges, in addition to graduation and alumni giving rates from external sources. These measures include development of cognitive and noncognitive skills, bang for the buck, average income and net worth, and “overall happiness of graduates.” The data behind the rankings have supposedly been verified by an independent statistician, but the verification is just a description of confidence intervals (which are never mentioned again). Rankings are provided for the top 177 schools, but they are not visible unless you sign up for the service, so I do not discuss the overall rankings here.
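For a rough sense of what those confidence intervals imply with 100 to 500 respondents, here is a back-of-the-envelope calculation; the rating scale and standard deviation below are my assumptions, not figures from the site.

```python
# Back-of-the-envelope only; The Alumni Factor does not report these statistics.
import math

def ci_halfwidth(sd, n, z=1.96):
    """Approximate half-width of a 95% confidence interval for a sample mean."""
    return z * sd / math.sqrt(n)

# Suppose alumni rate "overall happiness" on a 1-5 scale with a standard
# deviation of 1.0:
print(round(ci_halfwidth(1.0, 100), 2))   # about +/- 0.20 with 100 respondents
print(round(ci_halfwidth(1.0, 500), 2))   # about +/- 0.09 with 500 respondents
```

When colleges cluster closely on scales like these, intervals that wide can span many ranking positions, which is why a one-time description of confidence intervals is not much of a verification.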

While I commend The Alumni Factor for making more information available to consumers (albeit behind a paywall), I don’t see this set of rankings as being tremendously useful. My first concern is how the colleges included in the rankings were selected. The list of the top 177 schools consists primarily of prestigious research universities and liberal arts colleges; few, if any, good public bachelor’s/master’s-level universities (such as my alma mater) make the list. This suggests that the creators of the rankings care much more about catering to an audience of well-heeled students and parents than about making a serious attempt to examine the effectiveness of lesser-known, but potentially higher-quality, colleges.

I am extremely skeptical of survey-based college ranking systems because they generally make every college look better than it actually is. The website mentions that multiple sources were used to reach alumni, but the types of people who respond to surveys are generally happy with their college experience and doing well in life. (Just ask anyone who has ever worked in alumni relations.)

The website states that “we did not let our subjective judgment of what is important in education lead to values-based weightings that might favor some schools more than others.” I am by no means a philosopher of education, but even I know that the choice of outcomes to be measured is a value judgment about what is important in education. The fact that four of the fifteen measures capture income and net worth, while only one reflects bang for the buck, shows that they prioritize labor market outcomes. I don’t have a problem with that, but they should acknowledge the value judgments they made in creating the rankings.

Finally, these survey outcomes are for alumni only and exclude anyone who did not graduate from college. Granted, the colleges on the initial list of 450 probably all have pretty high graduation rates, but it is important to remember that not everyone graduates. It’s another example of these rankings being limited to prestigious colleges, and of the mistaken perception that everyone who starts college will graduate. That perception is simply not true.

Although no set of college rankings is perfect (not even the ones I’ve worked on), The Alumni Factor’s rankings serve a very limited population of students who are likely already well served by the existing college guides and rankings. This is a shame, because the survey measures could have been tailored to better suit students who are much closer to the margin of graduating from college.

For those who are interested, the 2013 U.S. News & World Report college rankings will come out on Wednesday, September 12. I’ll have my analysis of those rankings later in the week.

Disclaimer: I am the consulting methodologist for the 2012 Washington Monthly college rankings. This post reflects only my thoughts and was not subject to review by any other individual or organization.

Analyzing UW-Madison’s Accountability Report

Recent legislative changes required the University of Wisconsin-Madison to submit an annual accountability report summarizing the university’s accomplishments over the previous year. While the UW System and UW-Madison already do a commendable job of making basic performance data public, this year’s accountability report summarizes that data nicely. A few highlights are below:

–The retention and graduation rates (for first-time, full-time students) are very high, as they should be given students’ academic and financial resources. Nearly 94% of students returned for a second year and 83% graduated within six years using the most recent data available. The retention and graduation rates are lower for targeted minority students (91% and 69%, respectively), but the gap is not nearly as large at UW-Madison as at many other universities.

–Just over half (52%) of all undergraduate students filed the FAFSA in 2011. Of these students, the median family income was just over $99,000. Given that most students who do not file the FAFSA and enroll in a selective college come from high-income families, the median family income of UW-Madison undergraduates is likely well in excess of $100,000 per year. This report does not include retention and graduation rates by Pell Grant receipt, but other UW-Madison data reports do.

–Roughly eight in ten students reported being able to enroll in desired classes most or all of the time (using data from the National Survey of Student Engagement). This is an improvement of roughly ten percentage points in the past five years, but more still needs to be done.

–ECON 101 (principles of microeconomics) was taken by 2,831 students in fall 2010 or spring 2011. That number makes me glad that I am no longer a TA for that course!

–UW-Madison claims a $12.4 billion impact on the Wisconsin economy and credit for creating or supporting over 128,000 jobs. I am skeptical of those numbers, but the impact is clearly large. (But the question remains—what can we be doing better with our available funds?)

The accountability report for the rest of the UW System is available here.

Analyzing Wisconsin’s Workforce Development Report

Some people in Wisconsin, particularly in the business community, feel that the state’s secondary and postsecondary education systems do not efficiently prepare students for the types of jobs available in the local labor market. To address these concerns, Tim Sullivan, special consultant to Governor Walker on Economic, Workforce, and Education Development, compiled a report suggesting changes to the state’s educational systems and policies with the goal of improving Wisconsin’s workforce development programs.

I am pleased to see a clear plan on how the Walker administration may wish to move forward with economic development and education policy. Many of the suggestions are reasonable and can be implemented in a tough fiscal climate, but I do encourage careful evaluation of the policies and the consideration of unintended consequences. As an economist of education, I am focusing on the pieces of the report directly pertaining to Wisconsin’s postsecondary education systems instead of the recommendations regarding tax policy, unemployment insurance, and immigration reform. While these policies may tangentially affect higher education, they are not my area of expertise. They are also much less likely to be adopted than the core educational recommendations.

The first policy recommendation that I was expecting to see was that Wisconsin’s data from K-12 and postsecondary education should be linked with unemployment insurance data to examine the labor market outcomes of students by secondary and postsecondary institution and program. This has been done in other states, most notably Florida, with at least a modest amount of success. However, this long-overdue policy change was not a part of the report. The report’s call for increased use of the Labor Market Information software is worthwhile, as that software provides useful information to students and institutions, but it is less useful from a policy perspective.

The report is right to note that Wisconsin has a large number of entities offering workforce development services. To the extent that it is feasible, these should be consolidated into one office. (I would also suggest that the Council for Workforce Readiness and the College and Workforce Readiness Council, both chaired by Mr. Sullivan, be consolidated.) The recommendation that there be a common core of transferable courses across WTCS and the UW System should also be implemented, as it has the potential to give students peace of mind and potentially reduce costs.

I am optimistic about the recommendations regarding evidence-based budgeting and performance-based funding. Both of these proposals have the potential to put the Wisconsin Idea to work, drawing upon the expertise of the higher education community to provide digestible data to decision makers in the state. They can also drive a cost-effectiveness agenda, which is essential to maintaining public support for higher education funding. As someone who does research on cost-adjusted performance measures in higher education, I am glad to see the attention paid to some of the drawbacks of performance-based funding; however, I urge policymakers to consider the ways in which colleges can “game the system” to increase graduation and job placement rates by accepting only the best students. Funding colleges in part based on outcomes fits well with the call for stackable credentials, which provide good measurement points.

There are several recommendations in the report that concern me, as they have the potential to reallocate state resources in less than optimal ways. The first is the call for the UW System to guarantee that students can graduate in four years if certain conditions are met by the student (such as taking a sufficient number of classes per semester). This type of program can work well in certain situations; for example, many students at UW-Madison cannot get into intermediate microeconomics—a prerequisite for courses in several majors—because too few faculty members are assigned to teach the course. If this recommendation is to become policy, I strongly encourage the creation of a set of clear administrative rules so students and colleges can operate with a degree of certainty. The University of Minnesota’s four-year graduation agreement is a good starting point.

The focus on four-year graduation rates is not appropriate for all students or institutions. If a student comes to college without any prior credits, enrolls full-time (12 credits per semester), and works during the summer without taking any classes, he or she will take five years to accumulate 120 credits (12 credits per semester over two semesters is 24 credits per year, so 120 credits takes five years). However, if students enter with prior credits, graduating in four years while maintaining a good GPA is much more feasible. Policymakers should focus on making sure students who begin the semester enrolled full-time complete at least 12 credits per semester and make satisfactory progress toward a degree.

I am skeptical that making students who already hold a bachelor’s degree pay a higher tuition rate to attend a WTCS institution will be cost-effective in the long run. Although there are currently no available data to examine this question, I would expect the majority of these individuals to have been in the workforce for at least a decade and to have lost their jobs prior to returning to college. Charging them a higher tuition rate would likely induce some of them not to enroll, leaving them with substantially lower incomes. It may be a better investment for the state to foot more of the bill if the additional tax revenue exceeds the tuition subsidy (a stylized break-even sketch appears below). I would encourage a thorough analysis of these students and their circumstances before making a decision.
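The break-even logic is simple to sketch; the figures below are entirely hypothetical placeholders, since we do not have earnings or completion data for these returning students.

```python
# Entirely hypothetical figures; a real analysis would need earnings and
# completion data for returning adult students, plus discounting over time.
extra_subsidy_per_year = 3000     # assumed additional state subsidy per student
program_length_years = 2          # typical associate degree program
earnings_gain_per_year = 8000     # assumed annual earnings gain from the credential
state_tax_rate = 0.06             # rough effective state tax rate on that income
working_years_remaining = 15      # assumed years left in the workforce

subsidy_cost = extra_subsidy_per_year * program_length_years
added_tax_revenue = earnings_gain_per_year * state_tax_rate * working_years_remaining

print(subsidy_cost, added_tax_revenue)   # 6000 vs 7200
print(added_tax_revenue > subsidy_cost)  # True under these assumptions
```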

The recommendation that the WHEG be made available to part-time students in WTCS will, at best, do little to increase the skill sets of Wisconsin workers. The WHEG currently has two separate pots of money—one for UW System students and one for WTCS students. The WTCS pool of money is consistently exhausted well before the start of the semester, which is when part-time students are more likely to enroll and apply for aid. This means that no money will be available to fund the program expansion unless it is taken from full-time WTCS or UW System students. Part-time students do still receive Pell Grants, which are often sufficient to cover the cost of tuition for the neediest individuals. For these reasons, extending the WHEG to part-time students, many of whom are working adults, is unlikely to increase enrollment and completion rates.

In order to provide the WHEG to more students in WTCS, whether part-time or full-time, I would suggest scaling back or ending the Wisconsin Postsecondary Education Credit. This nonrefundable tax credit for employers is unlikely to be as effective as a voucher given directly to the student, because not all businesses are willing to invest heavily in an employee who may not stay with the company long enough to recoup the investment. Additionally, the nonrefundable nature of the credit means that only certain businesses are willing to make the investment—those which are highly profitable and/or have relatively small amounts of depreciation or other deductible expenses. I would also recommend dropping the distinction between UW System and WTCS students in the WHEG funding pool, which would also streamline the administrative process.

The report highlights the sizable minority of students with less than a bachelor’s degree who make more money than those with a bachelor’s degree. However, it is unclear whether we can identify these students in advance and guide them toward the path with the higher expected salary. The Academic and Career Plan (ACP) may be able to help with this somewhat, but it also requires K-12 teachers and guidance counselors to understand a student’s strengths and weaknesses. We do not have any empirical research on the effectiveness of the ACP, but it could certainly work well under the right circumstances.

I am not concerned that certain WTCS campuses spend money on GED preparation and liberal arts/transfer courses. Someone has to pay for GED preparation, and although the argument could certainly be made that high schools should pay for the coursework, it is easier from the standpoints of technical expertise and administrative burden to offer the courses through the technical colleges. While the UW Colleges offer the majority of liberal arts/transfer courses at the two-year level, the three WTCS campuses that have liberal arts articulation agreements (Madison, Milwaukee, and Nicolet) are not near a UW College that offers the same services. This means that services are not being duplicated to as great an extent as feared.

This report is far from perfect, but it does raise important questions about Wisconsin’s workforce development programs and proposes some feasible, implementable policies to address the concerns. I hope that the K-12 and higher education communities and the Walker administration can work together to improve the plan and find at least some common ground.

Estimating College Effectiveness: The 2012 Washington Monthly College Rankings

One of my primary research interests is examining whether and how we can estimate the extent to which colleges are doing a good job educating their students. Estimating institutional effectiveness in K-12 education is difficult, even with the ready availability of standardized tests across multiple grades. But in higher education, we do not even have these imperfect measures across a wide range of institutions.

Students, their families, and policymakers have been left to grasp at straws as they try to make important financial and educational decisions. The most visible set of college performance measures is the wide array of college rankings, with the ubiquitous U.S. News & World Report college rankings being the most influential in the rankings market. However, these rankings serve much more as a measure of a college’s resources and selectivity than of whether students actually benefit.

I have been working with Doug Harris, who is now an associate professor of economics at Tulane University, on developing a model for estimating a college’s value-added to students. Unfortunately, due to data limitations, we can only estimate value added with respect to graduation rates—the only outcome available for nearly all colleges. To do this, we estimate a college’s predicted graduation rate given certain student and fixed institutional characteristics and then compare that to the actual graduation rate.

This measure of college value-added gives us an idea of whether a college graduates as many students as would be expected, but it does not address whether colleges are educating students in an efficient manner. To estimate a college’s cost-effectiveness for students and their families, we divide the value-added measure by the net price of attendance (defined as the cost of attendance minus all grant and scholarship aid). Colleges can then be ranked on this cost-effectiveness measure, which yields very different results from the U.S. News rankings.
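For readers who want to see the mechanics, here is a minimal sketch of the two steps with made-up numbers; the real model uses actual IPEDS data and a much richer set of predictors to generate the predicted graduation rates.

```python
# Made-up numbers for illustration only; the actual model uses IPEDS data and a
# richer set of predictors to generate each college's predicted graduation rate.
colleges = [
    # (name, actual grad rate %, predicted grad rate %, net price in $)
    ("College A", 72.0, 65.0, 14_000),
    ("College B", 90.0, 88.0, 28_000),
    ("College C", 55.0, 60.0, 9_000),
]

for name, actual, predicted, net_price in colleges:
    value_added = actual - predicted                    # points above (or below) prediction
    cost_adjusted = value_added / (net_price / 1_000)   # points per $1,000 of net price
    print(f"{name}: value-added = {value_added:+.1f}, "
          f"per $1,000 of net price = {cost_adjusted:+.2f}")
```

Under these made-up numbers, the cheaper, less selective College A comes out on top, which is exactly the kind of reordering that distinguishes this measure from the U.S. News approach.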

This research has attracted attention in policy circles, which has been a great experience for me. I thought that at least some people would be interested in this way of looking at college performance, but I was quite flattered to get an e-mail from Washington Monthly magazine asking if I would be their methodologist for the 2012 college rankings. I have always appreciated their rankings, as the focus is much more on estimating educational effectiveness and public service instead of the quality of the incoming student body.

The 2012 college rankings are now available online and include a somewhat simplified version of our cost-adjusted value-added measure. This measure makes up one-half of the social mobility score, which in turn is one-third of the overall ranking (research and service are the other two components). The college guide also includes an article I wrote with Rachel Fishman of the New America Foundation looking at how some colleges provide students with good “bang for the buck.”

I am excited to see some of my research put into practice, and hopefully at least some people will find it useful. I welcome any questions or comments about the rankings and methodology!

Hello world!

Hello! My name is Robert Kelchen and I am a doctoral candidate in the Department of Educational Policy Studies at the University of Wisconsin-Madison. I am interested in higher education policy, particularly cost-effectiveness, financial aid, and data concerns. I plan to update this blog on a fairly regular basis as I go on the job market this year and beyond, so stay tuned for updates. And I always appreciate possible suggestions for blog topics!

–Robert