As many readers of this blog know, I like to start my weekdays by catching up on the education news of the day during the 5:00 hour of the morning and sharing a few particularly interesting pieces via Twitter. But a few times every month, I see coverage of reports or articles that leaves me grumbling over dubious methodology or a lack of critiques of the work. For example, there was a piece in Inside Higher Ed this week looking at a report by a little-known consulting group that set off flashing alarm bells in my head as the sun was rising. Judging by the tweets I received in response to my quick take, it is time to expand my thoughts into blog form.
Here are four questions everyone should ask when reading media coverage of new research in the education world (some of which also apply to the research itself).
Question 1: Is information available about the data and methods? Even at online-only media outlets, journalists often operate under word limits and cannot get into the full details of a study. This means that some important questions about a study’s data and methods may not be answered without looking at the full report. Academic articles can sometimes be hard to get from behind journal paywalls, and think tank reports are sometimes slightly delayed by embargoes and fancy DC release events, but they can generally be obtained fairly easily. Be skeptical, though, of coverage of reports that are not being released to the public at all, as the work may not be ready for public scrutiny in spite of getting media attention.
Question 2: Do the author(s) properly describe the results given the research design? There are two common errors that authors make both in writing up their research and in talking about it with reporters. (Granted, the latter is harder than the former, given that most researchers don’t get trained in talking with the media.) The first is overgeneralizing results: if a study took place at just two colleges, the results likely do not hold across all of American higher education. The second is confusing correlation with causation by using causal language such as “impact” or “effect” to describe observational studies or simple regressions. (Save that language for quasi-experimental or experimental studies, as spurious correlations abound!) Also, for survey research, avoid false precision (like reporting results to two decimal places, as in the IHE piece mentioned above) and be clear about things like recruitment strategies and response rates. Qualitative researchers, in my experience, tend to be much more thoughtful about their study’s limitations than quantitative researchers are.
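To see why those extra decimal places are misleading, here is a rough back-of-the-envelope sketch. The sample size and response share below are purely hypothetical, and the calculation assumes a simple random sample, which many of these surveys are not:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical survey: 1,000 respondents, 50% answer "yes".
moe = margin_of_error(0.50, 1000)
print(f"Margin of error: +/- {moe * 100:.1f} percentage points")  # roughly +/- 3.1
```

With a margin of error of about three percentage points, reporting a figure like “50.37%” implies a level of precision the data simply cannot support, and that is before accounting for nonresponse and recruitment issues.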
Question 3: Did the reporter interview the author(s)? In March, I loudly called out a report by a website called the Student Loan Report that claimed 21% of students used at least some financial aid funds to buy bitcoin. But the report still received coverage from outlets such as The Chronicle of Higher Education’s daily briefing, Inside Higher Ed, and CNBC. A Chronicle investigation later revealed that the “author” of the report, Drew Cloud, was not a real person. The company got away with maintaining his online presence for years because he would only do short e-mail interviews. Researchers are often concerned about the unpredictability of phone interviews and have a hard time fitting them into their schedules (trust me on this—I do several of them every week, including an unscheduled interview while writing this post!), but they are a good way for reporters to probe deeper into the study and verify that the author is a real person. E-mail interviews can be appropriate (and I do them on occasion), but phone interviews are ideal.
Question 4: Did the reporter talk with other experts? Before a publication runs a piece describing a new research study, the editor should always make sure that the reporter talked with other experts (in the subject matter and/or research methods) to verify the quality of the study. For example, a recent Inside Higher Ed piece covering a new article on the implications of open educational resources for students did this very well. The reporter talked with three outside researchers (myself included) as well as both of the article’s authors to give readers a fuller picture of the study. Going back to the Drew Cloud saga, my persistent gripes on Twitter led to a Chronicle piece in which I was interviewed about the limitations of the dubious bitcoin survey, so at least some readers could get an outside opinion of the study.