Another Random List of “Best Value” Colleges

Getting a good value for attending college is on the minds of most prospective students and their families, and as a result, numerous publishers of college rankings have come out with lists of “best value” colleges. I have highlighted the best value college lists from Kiplinger’s and U.S. News in previous posts and have discussed my work incorporating a cost component into Washington Monthly’s rankings. Today’s entry in this series comes from the Princeton Review, a company better known for test preparation classes and private counseling but one that is also in the rankings business.

The Princeton Review released its “Best Value Colleges” list today in conjunction with USA Today, and the list reads like a “who’s who” of selective, wealthy colleges and universities. Among the top ten private colleges, several are wealthy enough to waive all tuition and fees for their few students from modest financial backgrounds. The top ten public institutions do tend to attract a fair number of out-of-state and full-pay students, although there is one surprise name on the list (North Carolina State University—well done!). More data on the top 150 colleges can be found here.

My main complaint with this ranking system, as with other best value colleges lists, is the methodology. The Princeton Review begins by narrowing its sample from about 2,000 colleges to 650—what it calls “the nation’s academically best undergraduate institutions.” This effectively limits the utility of the rankings to students who score a 25 or higher on the ACT, or even higher if students wish to qualify for merit-based grant aid. Student selectivity is further rewarded in the academic rating, even though selectivity carries no guarantee of future academic performance. Many of the measures in the academic and financial aid ratings come from student surveys, which are fraught with selection bias: many colleges handpick the students who take these surveys, resulting in an overly optimistic set of opinions being registered. I wish I could say more about the methodology and point values, but no information is available.

The top 150 list (which can be found here by state) certainly favors wealthy, prestigious colleges with a few exceptions (University of South Dakota, University of Tennessee-Martin, and Southern Utah University, for example). In Wisconsin, only Madison and Eau Claire (two of the three most selective universities in the UW System) made the list. In the Big Ten, there are some notable omissions—Iowa (but Iowa State is included), Michigan State (but Michigan is included), Ohio State, and Penn State.

The best value rankings try to provide information about what college will cost and whether some colleges provide better “bang for the buck” than others. Providing useful information is an important endeavor, as this recent article in the Chronicle emphasizes. However, the Princeton Review’s list provides useful information to only a small number of academically elite students, many of whom have the financial means to pay for college without taking on much debt. This is illustrated by the accompanying USA Today article featuring the rankings, which notes that fewer than half of all students attending Best Value Colleges take on debt, compared to two-thirds of students nationwide. This differential is not just a result of the cost of attendance; it also reflects students’ ability to pay for college.

Is Money from Parents Bad for Students?

Most people would consider a student getting money from his or her parents while in college to be a good thing—after all, traditional-age college students tend to have few resources of their own, and additional money from Mom and Dad might allow them to work fewer hours while enrolled. But a new paper in the American Sociological Review by Laura Hamilton, an assistant professor of sociology at the University of California-Merced, challenges this assumption. In a paper titled “More Is More or More Is Less? Parental Financial Investments During College” (abstract here), she finds that parental financial assistance increases the likelihood of graduation but is associated with lower student GPAs.

As a sociologist, Hamilton came to the project with the perspective that more financial resources are a good thing for a student due to the mere availability of resources and social capital. I don’t start from that perspective—I look instead at what students can do with the available funds. I am also concerned that no-strings-attached gifts from parents might not be a good thing, since they lack the performance requirements of merit-based financial aid. Moreover, a student’s need for parental funds might reflect the inability of a student from a middle- or upper-income family to secure merit-based aid.

Hamilton uses two old workhorse datasets in her analysis—the Baccalaureate and Beyond Study (B&B) of students who graduated in 1993 and the Beginning Postsecondary Students Study (BPS) of students who began college in 1990. She uses the B&B to focus on cumulative GPA at graduation as an outcome, which has two main limitations: we cannot observe the relationship between parental assistance and dropout, and we cannot observe changes in college major that may be associated with GPA. Because of the first limitation, she uses the BPS to look at graduation rates. Neither dataset is perfect, and neither analysis is free of concerns about causality, but it’s not a bad starting point (and the data have to be appropriate to get into a top-tier journal like ASR).

The positive relationship between parental assistance and graduation rates won’t raise many eyebrows, but her claim that, among students who make it to graduation, those with higher levels of parental assistance have lower GPAs is more controversial. My biggest concern with the article is that it appears that more help from parents allows some marginal students to stay in school who otherwise would not have appeared in the dataset. If some of the 2.0-GPA students with parental assistance would otherwise have dropped out, there may be no difference in the GPAs of students who successfully completed college. Because of this, I have to take the finding on GPAs with a grain of salt.
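
To make this selection story concrete, here is a minimal simulation (in Python, with entirely made-up numbers rather than Hamilton’s data) in which parental aid has no effect on GPA whatsoever but keeps some low-GPA students enrolled through graduation:

```python
# A toy model of selection into graduation. All numbers are hypothetical;
# by construction, parental aid has zero direct effect on GPA.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

aid = rng.binomial(1, 0.5, n)                   # half of students get parental help
gpa = np.clip(rng.normal(3.0, 0.5, n), 0, 4)    # GPA drawn independently of aid

# Low-GPA students are at risk of dropping out; aid keeps more of them enrolled.
p_graduate = np.where(gpa < 2.5, 0.4 + 0.4 * aid, 0.95)
graduated = rng.binomial(1, p_graduate).astype(bool)

print("Graduation rate (aid / no aid):",
      graduated[aid == 1].mean().round(3), graduated[aid == 0].mean().round(3))
print("Mean GPA among graduates (aid / no aid):",
      gpa[graduated & (aid == 1)].mean().round(3),
      gpa[graduated & (aid == 0)].mean().round(3))
```

In this toy world, the aid group graduates at a higher rate and posts a lower average GPA among graduates even though aid never touches GPA, which is exactly the pattern that makes me cautious about conditional-on-graduation comparisons.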

On another note, this article can also teach scholars quite a bit about how to interact with the media. The mixed findings give the education press and the general public an opportunity to run with a provocative conclusion—parents shouldn’t give their kids money (even if they can) because the kids might just slack off. The headline of today’s Inside Higher Ed piece on the article (“Spoiled Children”) is an example of how research findings can be spun to get more eyeballs. While the media should run more reasonable headlines, it is the responsibility of academics to call out the education press when it plays these sorts of games.

Innovating for Success in Financial Aid

Most education researchers and policymakers would likely agree that the current financial aid distribution system is both inefficient and less effective than it could be. Under current rules, the vast majority of students do not learn about their eligibility for need-based financial aid until their senior year of high school. While this delay helps the federal and state governments make sure their aid dollars are targeted toward students who are currently the most financially needy, waiting that long to notify students of their aid awards makes little sense for students from persistently poor families.

There have been numerous efforts to streamline the financial aid process over the past several years, but they have neglected the importance of timing. If students know their financial aid package well before reaching college age, they can prepare both academically and financially for college, should it be a match with their career and personal ambitions. However, most research documents these informational deficiencies without suggesting possible solutions.

Today, I am pleased to release a working paper with my frequent co-author (and political opposite) Sara Goldrick-Rab that seeks to advance the research agenda on the importance of timing in the financial aid process. Under current policy, students whose families receive federal means-tested benefits in grade 12 are awarded the maximum Pell Grant (which results in the maximum award for many state and institutional grants). In our paper, we estimate what could happen to both college enrollment rates and government revenue if the award were made in grade 8 instead of grade 12.

Pell Grant program costs would increase under this policy change for two reasons—because some students would likely be induced to attend college by the promise of financial aid and because about 30% of students would receive more money than under current law. But the federal government would also see an increase in tax revenue through the additional earnings of these students. Under a fairly conservative set of assumptions in a Monte Carlo simulation (make your own assumptions here and here), the program is quite likely to result in positive net fiscal benefits over the long run.
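
For readers who want a flavor of the exercise, here is a stylized sketch of this kind of Monte Carlo calculation. Every parameter range below is an illustrative placeholder rather than a value from our paper, and I ignore discounting entirely:

```python
# A stylized Monte Carlo sketch of the net fiscal calculation.
# All parameter ranges are hypothetical placeholders, not values from the paper.
import numpy as np

rng = np.random.default_rng(42)
draws = 10_000

induced_enrollees = rng.uniform(10_000, 50_000, draws)    # students induced to enroll
extra_grant_cost  = rng.uniform(500, 2_000, draws)        # added Pell cost per affected student
affected_students = rng.uniform(200_000, 400_000, draws)  # students receiving larger awards
earnings_premium  = rng.uniform(3_000, 8_000, draws)      # annual earnings gain per induced enrollee
tax_rate          = rng.uniform(0.15, 0.25, draws)        # effective tax rate on that premium
work_years        = 30                                    # career length (no discounting here)

grant_per_inducee = 12_000  # hypothetical total grant paid to each induced enrollee
cost    = affected_students * extra_grant_cost + induced_enrollees * grant_per_inducee
revenue = induced_enrollees * earnings_premium * tax_rate * work_years

net = revenue - cost
print(f"Share of draws with positive net fiscal benefit: {(net > 0).mean():.2f}")
```

The value of running thousands of draws is that the bottom line comes with a sense of how sensitive it is to the assumptions, rather than arriving as a single point estimate.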

Even though the initial results from this study appear to be promising, I still lose sleep at night about whether people will respond in the expected ways and whether any perverse incentives could be in play. As a result, any such policy change should be explored in a demonstration program to see whether the program is cost-effective in real life.

This paper will get a fair amount of media attention, which will hopefully result in useful feedback from smart people in the academic and policy communities. I would also love to hear your thoughts on the paper as well as the fun methodological assumptions.

The Limitations of “Data-Driven” Decisions

It’s safe to say that I am a data-driven person. I am an economist of education by training, and I get more than a little giddy when I receive a new dataset that can help me examine an interesting policy question (and even giddier when I can get the dataset coded correctly). But there are limits to what quantitative analysis can tell us, which comes as no surprise to nearly everyone in the education community (but can be surprising to some other researchers). Given my training and perspectives, I was interested to read an Education Week article on the limitations of data-driven decisions by Alfie Kohn, a noted critic of quantitative analyses in education.

Kohn writes that our reliance on quantifiable measures (such as test scores) in education results in the goals of education being transformed to meet those measures. He also notes that educators and policymakers have frequently created rubrics to quantify performance on work that used to be assessed more qualitatively, such as writing assignments. These critiques are certainly valid and should be kept in mind at all times, but his clear agenda against what is often referred to as data-driven decision making then shows through.

Toward the end of his essay, he launches into a scathing criticism of the “pseudoscience” of value-added models, in which students’ gains on standardized tests or other outcomes are estimated over time. While nobody in the education or psychometric communities is (or should be) claiming that value-added models give us a perfect measure of student learning, they do provide at least some useful information. For more on value-added models and data-driven decisions in K-12 education, see the book by my longtime mentor and dissertation committee member Doug Harris (with a foreword by the president of the American Federation of Teachers).
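
For the uninitiated, here is a minimal sketch of one common value-added specification, estimated on simulated data: regress students’ current test scores on their prior-year scores plus teacher indicators, and treat the estimated teacher coefficients as the value-added measures. (This is a generic textbook setup, not the specific models Harris discusses.)

```python
# A bare-bones value-added model on simulated data: post-test on pre-test
# plus teacher fixed effects. All numbers are made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_teachers, class_size = 20, 30
true_va = rng.normal(0, 5, n_teachers)                 # each teacher's true effect

teacher = np.repeat(np.arange(n_teachers), class_size)
pre  = rng.normal(500, 50, n_teachers * class_size)    # prior-year score
post = 50 + 0.9 * pre + true_va[teacher] + rng.normal(0, 20, len(pre))

df = pd.DataFrame({"post": post, "pre": pre, "teacher": teacher})
fit = smf.ols("post ~ pre + C(teacher)", data=df).fit()

# The teacher coefficients (relative to teacher 0) are the value-added estimates.
print(fit.params.filter(like="C(teacher)").head())
```

Estimates like these are informative but noisy, especially with only one classroom of students per teacher, and that noise is the crux of the policy debate.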

Like it or not, policy debates in education are increasingly being shaped by the available quantitative data in conjunction with more qualitative sources such as teacher evaluations. I certainly don’t put full faith in what large-scale datasets can tell us, but it is abundantly clear that the accountability movement at all levels of education is not going away anytime soon. If Kohn disagrees with the type of assessment going on, he should propose an actionable alternative; otherwise, his objections cannot be taken seriously.

Explaining Subgroup Effects: The New York City Voucher Experiment

The last three-plus years of my professional life have been consumed by attempting to determine the extent to which a privately funded need-based grant program affected the outcomes of college students from low-income families in the state of Wisconsin. Although the overall impact on university students in the program’s first cohort is effectively zero, this masks a substantial amount of heterogeneity in outcomes across different types of students. One of the greatest challenges we have faced in interpreting the results is determining whether program impacts are truly different across subgroups (such as by academic preparation and type of initial institution attended), an analysis that is sorely lacking in many studies. (See our latest findings here.)

Matthew Chingos of the Brookings Institution and Paul Peterson of Harvard had to face a similar challenge in explaining subgroup effects in their research on a voucher program in New York City. They concluded that, although the overall impact of offering vouchers to disadvantaged students on college enrollment was null, there were positive and statistically significant effects for black students. This research got a great deal of publicity, including in the right-leaning Wall Street Journal. (I’m somewhere in the political middle on vouchers in K-12 education—I am strongly in favor of open enrollment across public school districts and support vouchers for qualified non-profit programs, but am much more hesitant to support vouchers for faith-based and unproven for-profit programs.)

This research got even more attention today with a report by Sara Goldrick-Rab of the University of Wisconsin-Madison (my dissertation chair) that sought to downplay the subgroup effects (see this summary of the debate in Inside Higher Ed). The brief, released through the left-leaning National Education Policy Center (here is the review panel), notes that the reported impacts for black students are in fact not statistically different from those for Hispanic students and that the impacts for black students may not even be statistically significant due to data limitations (word to the wise: the National Student Clearinghouse is not a perfect data source). I share Sara’s concerns about statistical significance and subgroup effects. [UPDATE: Here is the authors’ response to Sara’s report, which is not surprising. If you like snark with your policy debates, I recommend checking out their Twitter discussion.]
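
The underlying statistical point is worth spelling out: whether the effect for black students differs from the effect for Hispanic students should be tested directly, typically with an interaction term, rather than inferred from one subgroup estimate being significant while the other is not. Here is a minimal illustration on simulated data (nothing below uses the actual New York City data):

```python
# A linear probability model with a voucher-by-race interaction on simulated data.
# By construction, the true voucher effect is identical for both groups.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2_000
voucher = rng.binomial(1, 0.5, n)
black = rng.binomial(1, 0.5, n)   # 1 = black, 0 = Hispanic (a simplified two-group world)

# True enrollment probability: the voucher adds 5 percentage points for everyone.
p_enroll = 0.40 + 0.05 * voucher + 0.02 * black
enroll = rng.binomial(1, p_enroll)

df = pd.DataFrame({"enroll": enroll, "voucher": voucher, "black": black})
fit = smf.ols("enroll ~ voucher * black", data=df).fit()

# The interaction term is the direct test of whether the subgroup effects differ.
print(f"interaction estimate: {fit.params['voucher:black']:.3f}, "
      f"p-value: {fit.pvalues['voucher:black']:.3f}")
```

With equal true effects, the interaction term should usually be insignificant, even in samples where one subgroup estimate happens to clear the significance bar and the other does not.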

I am generally extremely hesitant to make much out of differences in impacts by race (as well as by other characteristics like parental education and family income) for several reasons. First, it is difficult to measure race consistently. (How are multiracial students classified? Why do some states classify students differently?) Second, although researchers should look at differences in outcomes by race, the question then becomes, “So what?” If black students do benefit more from a voucher program than Hispanic students, the policy lever isn’t clear, as it is extremely difficult to target a program toward one race and not another. Chingos and Peterson were right in their WSJ piece to make the relevant comparison—if vouchers worked for black students in New York City, they might work in Washington, DC. Finally, good luck enacting a policy that makes opportunities available only to people of a certain racial background; this is much less of a problem when considering family income or parental education.

Although the true effects of the voucher program for black students may not have been statistically significant, the program is still likely to be cost-effective given the much lower costs of the private schools. Researchers and educators should carefully consider what these private schools are doing to generate similar educational outcomes at a lower cost—and whether private schools spend less money per student because they educate students with fewer special needs. I would like to see more discussion of cost-effectiveness in both of these pieces.
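
A back-of-the-envelope version of that calculation, with entirely made-up numbers, shows why cost matters so much: divide spending by the outcome it buys and compare across sectors.

```python
# Hypothetical per-student costs and college enrollment rates, for illustration only.
public_cost, private_cost = 12_000, 7_000
public_enroll_rate, private_enroll_rate = 0.42, 0.45

# Cost per student who enrolls in college: similar outcomes at a much lower cost
# can make a program cost-effective even when the outcome difference is noisy.
for name, cost, rate in [("public", public_cost, public_enroll_rate),
                         ("private (voucher)", private_cost, private_enroll_rate)]:
    print(f"{name}: ${cost / rate:,.0f} per college enrollee")
```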