Back to basics: What we’re learning about supporting the field of evidence-informed policymaking
Over the last 12 months we have made a concerted effort to understand where we can make the most valuable contribution to strengthening the field of evidence-informed policymaking (EIP). We brought together some of the world’s leading evidence-informed policy thinkers to help generate big ideas. We sought broader feedback on these ideas and drew on findings from the Hewlett Foundation’s Grantee Perception Report. We launched a call for proposals to identify new African policy research partners and learn how they think about overcoming systemic barriers to evidence use. We also turned to network analysis of Twitter data to get a preliminary sense of how the field is structured, and took advantage of global conferences to talk with as many partners as possible.
Throughout this period, we heard from a wide spectrum of researchers and “doers” from at least five continents: from small single-country organizations to large multinational NGOs; from small private donors to large multilateral institutions. From all of this, several common themes emerged.
What we learned about the field of evidence-informed policymaking
Common challenges but not yet common goals.
Many of the people we consulted cited similar challenges to getting the evidence they produce (or fund the production of) into use by government decisionmakers: high turnover of government champions; performance or political incentives that don’t reward evidence use; entrenched bureaucratic systems that were not built for routine consideration of evidence and are hard to adapt; budget shortfalls; low capacity to find and discern among evidence sources; and so on. Yet most of the organizations generating data and evidence focus primarily on finding better ways to get their own evidence used in specific cases. For example, a health policy researcher might focus on trying to influence a specific program or policy at the Ministry of Health in a given country. Far fewer put themselves in the shoes of the ministry official who is on the receiving end of various scholars’ research; they tend not to have an explicit goal of changing the government capacities, incentives, and systems that hinder evidence use across the board. So, while many organizations in the EIP “field” are united in their challenges, many remain individual in their goals.
Many siloes, a few bridges.
Most individuals are deeply committed to the work they do, whether it is in the area of impact evaluation, data analytics, economic and social policy research, or open data. They value connections, shared goals, and professional advancement within these communities. The field mapping revealed various tightly knit evidence communities that are not very well connected to each other. They coalesce around specific methods, sectors, or professional affiliations rather than around a common goal that evidence be used to improve government decision-making and outcomes for people. Yet the mapping and our own experience also suggest that some people and organizations have explicitly positioned themselves as bridgers and connectors. They share ideas, focus on higher-level common goals, promote the ideas of others even when those ideas come from outside their silo, and draw on a range of evidence methods (from policy research to impact evaluation, from demographic surveys to satellite data) to advance their own work. This kind of bridging is essential because no one type of evidence can address all policy questions.
Messaging, not hierarchy, matters.
In the spirit of fostering shared goals and bridging siloes, many of the people we consulted see the value of having a shared vocabulary and framework around evidence-informed policymaking. Some wanted to more formally conceptualize the field with a theory of change, defined outcomes, and shared principles. However, others expressed concerns that such a structured effort would exacerbate exclusivity (whose evidence is more important) and risk reifying a hierarchy of types of knowledge (whether impact evaluations are the “gold standard”).
While formal conceptualization of the field may be too heavy-handed or counterproductive, many observed that better EIP messaging is essential. For example, many people have told us how powerful they found Ruth Levine’s speech about the moral case for using evidence. While no single set of messages will work everywhere, many saw value in creating a better narrative, possibly including a messaging framework (such as this messaging guide about ocean conservation) that could be tailored to resonate with policymakers and with citizens in various contexts.
Fascination with institutionalization.
Almost everyone we consulted is interested in learning how governments can systematically embed evidence into decision-making, and what outside organizations can do to help. Indeed, some already try to do this by helping government counterparts establish evidence-oriented government units or develop requirements to consult or use evidence, such as national evaluation policies. Yet there is clear appetite for more learning about existing government mechanisms: how they work, which work better for fostering learning and which for accountability, what role transparency and non-governmental actors play in their success, and so on.
Start with what you’ve got.
The people we consulted highlighted the importance of inspiring political commitments to use evidence. However, there was little appetite for a new global evidence-oriented initiative or partnership. Some doubted that a call for better evidence could achieve high-level political buy-in, and others worried that even if governments made commitments, they would not deliver on them. Instead, people advised that an evidence-to-policy agenda be integrated more intentionally into existing global efforts, such as the Open Government Partnership or global health initiatives like GAVI, the Vaccine Alliance and the Global Fund to Fight AIDS, Tuberculosis, and Malaria. Likewise, we heard interest in focusing on sector or country opportunities to advance the use of evidence, and in using learning from these experiences to inform and inspire the field.
Getting the funders’ house in order.
One message came through loud and clear: what funders do affects how the field works. If we want actors in different evidence communities to break down siloes and work to foster more systemic evidence use by governments, we need a community of funders that thinks this way too. We need funders that not only care about and invest in research and data products, but also care about and invest in the institutions that produce them and have the initiative, position, and capacity to encourage their use. Many shared our view that flexible core support is critical for institutional health and policy influence, and that covering full costs ensures great projects aren’t delivered at the expense of great institutions.
Likewise, we need funders who don’t take for granted that quality research will make its way into policymaking. Rather, we encourage funders to be curious about the barriers that hinder uptake of evidence and to invest in helping overcome them. Funders often orient portfolios around evidence types: sector-specific research projects or impact evaluations; open data, data science, or official statistics. Rarely do they look across their portfolios and consider supporting efforts that would allow all these programs to have more impact, such as strengthening the capacities, incentives, and institutional systems that determine whether the evidence they fund will be used by government officials to inform decisions.
Ideally, this community of funders can also be deliberate about coordinating and avoiding duplication. Our consultations revealed a small set of funding organizations interested in exploring light coordination: at the very least, sharing learning and keeping tabs on each other’s strategies and grantmaking, and perhaps eventually co-creating some type of coordinated effort.
Our next steps
Where does all this learning leave us? We started this journey thinking about big new things we might do to strengthen the EIP field. We are ending it with a back-to-basics mindset and some humbler questions. We would like to hear from you about how these field-building basics relate to your work and how you respond to the questions below.
To us, the basics include:
Invest in institutions: Fundamentally, a field is a set of governmental and non-governmental institutions and the ways they work together. The heart of our work is supporting non-governmental organizations, many of which work very closely with governmental institutions, to increase the use of evidence by governments. We will continue to provide flexible funding that allows organizations to respond to compelling opportunities, to start new initiatives and partnerships, and to position themselves to have lasting impact beyond the duration of individual projects. We will do a better job of supporting our grantees to capture and share their learning in ways that will benefit other researchers, practitioners, scholars, government technocrats, political leaders, and champions for evidence-informed policymaking.
Provide catalytic support: In addition to longer-term support for organizations, we can move fast, flexible, targeted resources in response to emerging opportunities, especially those related to advocacy, strategic planning, convening, and launching multi-stakeholder initiatives.
Make connections and foster learning: The Twitter mapping hints at the role funders can play in making connections across communities. We will continue to connect excellent people and institutions that might benefit from knowing about each other’s work. We will do a better job of understanding and using the language of others, bridging to them rather than imposing our own new framing on them.
Find and support positive deviance: Rather than trying to orchestrate globally coordinated initiatives, we can look for positive deviance that fortifies the field. We can identify and support those who are already bridging across communities, addressing the systemic barriers to evidence use, sharing frank learning, communicating well, taking the long view, or taking evidence principles into existing initiatives.