Earlier this month, I was in Philadelphia for the annual American Educational Research Association conference—the largest gathering of education researchers in the country. It is, in a word, overwhelming: roughly 13,000 education researchers sporting name tags and blue bags, looking exhausted or annoyed; scurrying through the vast maze of conference center hallways searching for sessions, colleagues, coffee.
Three things struck me about the whole event and the state of education research. First, with thousands of researchers each examining some aspect of education they hope no one else has studied, a lot should be known about education by now. The amount of research and breadth of topics is staggering. I randomly sat down at a roundtable discussion (number 21 of 40 or so tables), and it turned out that five researchers were studying how teachers use assessment data in their instructional practices. This is exactly what I'd hoped to hear more about, and I had just stumbled into the conversation.
But I also discovered that while much is investigated, little is definitive. Of the five studies, a few were still in early stages, others had conducted some research but found "no effects," and so on. Even with the "no effects" research, we all (including the researcher) hypothesized that perhaps this tweak or that would improve the results and they should try again. I've found this often happens in education research. Maybe it is appropriate that research is never really finished, but it is nonetheless frustrating when the primary recommendation is usually "more research is needed."
Every once in a great while, something is actually concluded. We have decided, for example, that class size reduction, once considered the great silver bullet, does not consistently correlate with higher student achievement. Good to know. At least we don't need to spend money on that anymore. But even here, it turns out that class size reduction under the right circumstances does produce effects. So, the answer to the question "does this work?" is most accurately "it depends." Context, culture, fidelity of implementation, and the mysteries of the human brain all figure into the answer, and the inexplicable alchemy of it all makes it difficult to derive many generalizable, usable action steps from research.
Look, I get it. No self-respecting researcher is ever going to make an unqualified statement that X always causes Y. That doesn’t describe reality, particularly in complex systems populated by unpredictable humans. But research is being funded in large part because we want to understand how to get better at what we are doing. To improve our education system we will need to take action and we’d like to do this based on evidence.
So, what to do? I have three suggestions. One, make it easier to find the research. One way to do this would be to create a taxonomy that categorizes all of those thousands of research studies and a clearinghouse with a really good search engine that stores them. Two, we need more meta-analyses that sum up what is known and give practitioners guidance (as best we know) about what should be done and what should be avoided. Three, we need open, anonymized data (that protects student privacy) so that the millions of data points emanating from online education are accessible to those thousands of researchers, and they can learn things and speak with greater authority about what they know.
And we need more coffee stands in convention centers.