It’s safe to say that I am a data-driven person. I am an economist of education by training, and I get more than a little giddy when I receive a new dataset that can help me examine an interesting policy question (and even more so when the dataset comes coded correctly). But there are limits to what quantitative analysis can tell us, which comes as no surprise to nearly everyone in the education community (though it can surprise some other researchers). Given my training and perspective, I was interested to read an Education Week article by Alfie Kohn, a noted critic of quantitative analyses in education, on the limitations of data-driven decisions.
Kohn writes that our reliance on quantifiable measures (such as test scores) in education results in the goals of education being transformed to meet those measures. He also notes that educators and policymakers have frequently created rubrics to quantify performance that used to be assessed more qualitatively, such as writing assignments. These critiques are certainly valid and should be kept in mind at all times, but his clear agenda against what is often referred to as data-driven decision making then shows through.
Toward the end of his essay, he launches into a scathing criticism of the “pseudoscience” of value-added models, in which students’ gains on standardized tests or other outcomes are estimated over time. While nobody in the education or psychometric communities is (or should be) claiming that value-added models give us a perfect measure of student learning, they do provide us with at least some useful information. For more on value-added models and data-driven decisions in K-12 education, see the book by my longtime mentor and dissertation committee member Doug Harris (with a foreword by the president of the American Federation of Teachers).
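To make the discussion a bit more concrete: in its simplest form, a value-added model is just a regression of students’ current test scores on their prior scores plus indicators for their teacher (or school), with the teacher coefficients serving as the “value-added” estimates. The sketch below is a minimal illustration in Python using made-up data and hypothetical teacher labels, not any state’s actual model; it also hints at why the estimates should be read with humility, since they come with real statistical uncertainty.

```python
# Minimal illustrative value-added sketch (simulated data, hypothetical teachers A/B/C).
# Regress post-test scores on prior scores plus teacher indicators; the teacher
# coefficients (relative to the omitted teacher) are the value-added estimates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "teacher": rng.choice(["A", "B", "C"], size=n),
    "score_pre": rng.normal(50, 10, size=n),
})

# Simulate post-test scores with small, purely hypothetical teacher effects.
true_effect = {"A": 0.0, "B": 2.0, "C": -1.0}
df["score_post"] = (5 + 0.9 * df["score_pre"]
                    + df["teacher"].map(true_effect)
                    + rng.normal(0, 5, size=n))

model = smf.ols("score_post ~ score_pre + C(teacher)", data=df).fit()
print(model.params)  # teacher coefficients = estimated value-added
print(model.bse)     # standard errors: the estimates carry real uncertainty
```

Real value-added systems layer on student demographics, multiple years of scores, and shrinkage adjustments, but the core logic is no more mysterious than this.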
Like it or not, policy debates in education are increasingly being shaped by the available quantitative data in conjunction with more qualitative sources such as teacher evaluations. I certainly don’t put full faith in what large-scale datasets can tell us, but it is abundantly clear that the accountability movement at all levels of education is not going away anytime soon. If Kohn disagrees with the type of assessment going on, he should propose an actionable alternative; otherwise, his objections cannot be taken seriously.