Photo Credit: “In the stacks” by Anna Creech, licensed under CC BY-NC-SA 2.0

We fund a lot of research in the Global Development and Population Program. So we want to be sure that the researchers we’re supporting are using sound methods and reporting accurate findings. We want to contribute to the global public good of new knowledge, not the global public bad of weak science. Living up to that aspiration can be a challenge.

We are not the largest funder of research on issues in global development, but we are a significant one. Out of some $25 million in grants approved for our program at our Board meeting earlier this week, about a third of the dollars went to institutions that primarily engage in research; that proportion is characteristic of our grants portfolio as a whole. If you count up all our active research- and evaluation-oriented grants, the total comes to about $100 million.

We occasionally fund specific studies with narrow research questions, but more often we support research programs in think tanks, universities, and similar organizations. Topics range from political science questions about how citizens interact with local governments to public health investigations that estimate the incidence of unsafe abortion. The proposals aren’t like the thick protocols submitted to the National Science Foundation, the Wellcome Trust, or the National Institutes of Health, and when we review them we don’t anonymize, or blind, them as some of those institutions do. But—just like the funds from public research funders—the dollars we provide to researchers are used to design and field surveys, conduct field trials, analyze large data sets, and run policy simulations.

Eventually, some of the research will appear as published papers in political science, economics, demography, and public health; those work products will be subject to journals’ peer review, and investigators will struggle through the “revise and resubmit” obstacle course. Many of the studies, though, are not headed for publication in professional journals. Rather, findings are shared through institutional websites with a range of audiences in the form of working papers, reports, and policy briefs. All of it—we dearly hope—will help to increase the chances that policymakers will have (and use!) more and better information to make key decisions.

Which brings me to today’s conundrum: We are not staffed like a research funding institution, and we cannot count on journals’ quality assurance processes to vet all of the final products. So we have to figure out how to judge research quality, from proposal to finished product. That’s not so easy.

Research quality is a concept with many dimensions: Is it relevant? Are the choices about how to collect and analyze data appropriate, and are the methods applied correctly? Are the findings communicated in ways that work for technical and for policy audiences?

As grant makers with deep knowledge of the fields in which we work, we’re in a pretty good position to assess the relevance of the questions and the accessibility of the findings. It’s far harder, though, to figure out whether the sampling design is sound, or whether the statistical methods are the right ones and are applied correctly. We don’t have time to read every research paper our grantees produce, and I’m pretty sure they don’t want their program officer asking them a lot of questions about statistical power, endogenous variables, and fixed-effects modeling. But we do have to find ways to assess the soundness of the research.

Here are a few ways we do it, and I freely admit that none of them is perfect:

  • The most common, and my least favorite by a country mile, is reputation. We assume the quality of the research is high when we’re working with researchers and institutions that have an established reputation for quality. This is self-evidently a risky strategy, but I’m pretty sure we are not alone among funders in using it. This isn’t blind review; it’s blinded-by-star-power review. And it’s one I’d like us to depend on a lot less.
  • We often ask about an institution’s own systems for quality assurance. Many think tanks, for instance, have peer review arrangements that include both in-house and external reviewers. We’ll ask questions about how they select the reviewers, what they do with comments, and whether they’ve ever had to retract a paper. We applaud grantees’ efforts to adhere to high standards of scientific transparency, including making original data sets available to permit reanalysis.
  • We sometimes suggest ways to reinforce an organization’s own quality assurance processes, and may even provide extra resources for this purpose—for example, to recruit an advisory board that includes members with specialized knowledge who can vet the technical details. This can have a lot of benefits, including strengthening institutions beyond the one-time research effort.
  • We occasionally commission a quality assessment in which an outside expert audits a sample of a grantee’s work products and reports findings to us. While not a full institutional evaluation, this can give us valuable information about strengths and weaknesses we might not otherwise have detected.
  • Knowing our own limitations, we occasionally bundle research funding into a regranting arrangement administered by a group that does have research skills in-house. This is the case, for example, with the International Initiative for Impact Evaluation and the International Development Research Centre, both of which are partners in large regranting efforts.
  • We invest in field-wide efforts to foster greater quality, such as impact evaluation registries and replication studies.

We believe in the value of research to refine concepts, develop coherent theory, and create a strong empirical basis for decision-making. That’s why, year after year, we recommend to our Board that they dedicate significant funds to individual studies and to research-based organizations. But with every grant recommendation we feel a heavy sense of responsibility: that research had better be good research. We know that’s the real test of a good research funder.