I was happy to learn this morning that my research on value-added with respect to college graduation rates (with Doug Harris) was covered in an Education Week blog post by Sarah Sparks. While I am glad to get media coverage for this work, the author never reached out to me to make sure her take on the article was accurate. (I had a radio show in college and fact-checking was one of the things drilled into my head, so I am probably a little too sensitive on this front.) As a result, there are a few points in the Ed Week post that need to be addressed. My concerns are as follows:
(1) The blog post states that we “analyzed data on six-year graduation rates, ACT or SAT placement-test scores and the percentage of students receiving federal need-based Pell grants at 1,279 colleges and universities from all 50 states from 2006-07 through 2008-09.” While that is true, we also used a range of other demographic and institutional measures in our value-added models. Using ACT/SAT scores and Pell Grant receipt alone to predict graduation rates explains only about 60% of the variation in institutional graduation rates, while adding the other demographic measures we use explains roughly another 15% of the variation. The post should have briefly mentioned this, as it helps set our study apart from previous work (and particularly the U.S. News rankings).
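For readers who want to see what that comparison looks like in practice, here is a minimal sketch (not our actual code; the data file and column names are hypothetical) of fitting a sparse model against a fuller model and comparing the share of variation each explains:

```python
# Minimal sketch, not the actual analysis: compare how much of the variation
# in institutional six-year graduation rates is explained by test scores and
# Pell share alone versus a model with additional demographic and
# institutional controls. The file path and all column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

colleges = pd.read_csv("college_data.csv")  # hypothetical institution-level data

sparse = smf.ols("grad_rate_6yr ~ median_act + pct_pell", data=colleges).fit()
full = smf.ols(
    "grad_rate_6yr ~ median_act + pct_pell + pct_female + pct_minority"
    " + pct_part_time + log_enrollment + C(carnegie_class)",
    data=colleges,
).fit()

# In the paper, the sparse specification explains roughly 60% of the variation
# and the richer specification roughly 75%.
print(f"Sparse model R-squared: {sparse.rsquared:.2f}")
print(f"Full model R-squared:   {full.rsquared:.2f}")
```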
(2) After generating the predicted graduation rate and comparing it to the actual graduation rate, we adjust for cost in two different ways. In what we call the student/family model, we adjust for the net price of attendance (this is what I used in the Washington Monthly rankings this year). In the policymaker model, we adjust for educational expenditures per full-time equivalent student. The blog post characterizes our rankings as “value-added rankings and popularity with families.” While “popularity with families” is an accurate description of the student/family model, the term “value-added rankings” does not reflect the policymaker model particularly well.
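To make the distinction between the two adjustments concrete, here is a rough sketch of my reading of them. The column names are hypothetical, and the published measures may adjust for cost in a more sophisticated way; this is only meant to illustrate that the two models scale the same value-added gap by different cost measures.

```python
# Minimal sketch of the two cost adjustments (hypothetical column names; the
# paper's exact adjustment may be implemented differently). Value-added is the
# gap between actual and predicted graduation rates; each version then scales
# that gap by a different cost measure.
import pandas as pd

colleges = pd.read_csv("college_data.csv")  # hypothetical institution-level data

# Predicted rates would come from a regression like the one sketched above.
colleges["value_added"] = colleges["grad_rate_6yr"] - colleges["predicted_grad_rate"]

# Student/family model: adjust by the net price students actually pay.
colleges["va_student_family"] = colleges["value_added"] / colleges["net_price"]

# Policymaker model: adjust by educational spending per full-time equivalent student.
colleges["va_policymaker"] = colleges["value_added"] / colleges["spend_per_fte"]
```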
(3) While we do present the schools in the top ten of our measures by Carnegie classification, we spend a great deal of time discussing confidence intervals and statistical significance. Even if a school has the highest value-added score, its score is generally not statistically distinguishable from those of other high-performing institutions. We present the top-ten lists for illustrative purposes only and would encourage readers not to treat them as definitive.
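As a rough illustration of why the exact rank ordering should not be taken too literally (again, hypothetical column names, not our actual code), a simple overlap check might look like this:

```python
# Minimal sketch: count how many of the top ten schools have value-added
# confidence intervals that overlap the top-ranked school's interval.
# The file path and column names (va_lower_ci, va_upper_ci) are hypothetical.
import pandas as pd

colleges = pd.read_csv("college_value_added.csv")  # hypothetical results file

top10 = colleges.nlargest(10, "value_added")
leader = top10.iloc[0]

overlaps = (top10["va_lower_ci"] <= leader["va_upper_ci"]) & (
    top10["va_upper_ci"] >= leader["va_lower_ci"]
)

# If most intervals overlap the leader's, the ordering within the top ten
# is not statistically meaningful.
print(f"{int(overlaps.sum()) - 1} of the other 9 schools overlap the leader.")
```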
As an aside, there are five other papers in the Context for Success working group that also examine how to measure college value-added and were not mentioned in the article, plus an outstanding literature review by Tom Bailey and Di Xu. I highly recommend reading through the summaries of those papers to learn more about the state of research in this field.
UPDATE (10/29): I had a wonderful e-mail conversation with the author and the above points have now been addressed. Chalk this up as another positive experience with the education press.