Putting the “decisions” into evidence-informed decision making: Three lessons from MCC and beyond
Government officials around the world make thousands of decisions a day: how to allocate budgets, how to design and target public programs, where to invest in infrastructure or public services, and when to expand or end a program based on its performance. These decisions are consequential for the people governments are meant to serve. Whether or not they are informed by data can even be a matter of life and death. In democratic societies, government officials have a fiduciary, moral, and sometimes legislative obligation to make decisions based on the best available data and evidence.
There is a growing community focused on evidence-informed decision making — groups working to promote better use of data and evidence everywhere from U.S. cities to countries across Africa, and around the globe.
But they are often more focused on producing the evidence than on the critical piece of how decisions are made with evidence.
Focusing on the evidence leads us to first produce data, research, or evaluation, then ask how to promote its use by government officials.
Focusing on the decisions leads us to first understand what decisions officials make on a regular basis, and how they make them, then consider what evidence is needed.
This focus on how governments make decisions with evidence could be described as “institutionalizing” evidence use or taking a more “systemic” approach. No matter what you call it, the point is for evidence use to be routine and integrated into day-to-day operations, rather than sporadic.
When it comes to producing evidence, it is easy to imagine the role of non-governmental organizations. When it comes to changing how governments make decisions with evidence, what is the role of external actors?
To figure this out, I look back to my experience at the Millennium Challenge Corporation (MCC)[1]. MCC is a U.S. development assistance agency designed from inception to bring evidence and data into decision making. I spent six happy years at MCC, often on the forefront of promoting, learning from, and navigating tensions in its evidence-use principles.
The MCC experience illustrates three ways outsiders can shape how government insiders make decisions with evidence:
1. Focus on routine decisions and the tools and practices that inform them.
If you want to make evidence use more routine, start with the decisions government agencies make regularly. These repeated decisions — like annual budget allocations or quarterly reports to parliament — are expected (by citizens or legislators) and generally have structures to support them. At MCC, these routines include deciding which countries to fund and which sectors to invest in within each country.
The tools MCC uses include country scorecards to inform partner selection, economic rate of return (ERR) models to design programs so their expected benefits outweigh costs, and independent evaluations. It also has standard practices like annual board meetings to decide which countries to select (informed by the scorecards).
MCC isn’t alone in codifying evidence-use practices. South Africa’s Department of Planning, Monitoring and Evaluation (DPME) offers ministries guidance on how to address the findings of an evaluation (citation). Mayor Jim Strickland helped Memphis, Tennessee, earn its “smart city” credentials by holding monthly meetings with department heads to assess performance data on everything from crime to library attendance (and nicely showcases stories of how data are used to improve city services).
So, for outsiders, the first step is to understand what the routine decisions are and how they are made. The second is to focus on the culture in which they are made.
2. Cultivate culture and transparency.
Even when the right tools and practices are in place, using evidence can be difficult politically. This was certainly true at MCC. One of its biggest tests came with the release of its first five impact evaluations. After years of promising increases in household income as a result of its investments, evaluations of its agricultural programs did not find this outcome. Some officials were tempted to sweep these evaluations under the rug, but after much internal debate, MCC’s political leaders — led valiantly by Sheila Herrling, then Vice President of Policy and Evaluation, and Chuck Cooper, then Vice President of Congressional and Public Affairs — ultimately went public and shared deep institutional learning from the experience. Most importantly, MCC made fundamental changes in how it designs and monitors progress across programs based on lessons from this experience. It did all of this because personal integrity, internal culture, and external expectations demanded it.
It also helped that MCC ranked in the top tier on Publish What You Fund’s Aid Transparency Index, and had a reputation to uphold. This offers a clue about the role of outsiders in influencing internal agency culture. I can speak to the endless hours spent inside MCC and other U.S. agencies working to improve performance on this index. Indeed the role of external advocates and scholars has been critical in making progress and weathering setbacks on open government, open budgets, and open contracting. Improved transparency in these areas gives non-governmental organizations lots of opportunities to see and inform routine decisions that governments make.
3. Be the ecosystem of support and accountability.
MCC’s culture of evidence use, together with the transparency of its decision-making tools and practices, has enabled an ecosystem of scholars and advocates to grow up around it, performing both support and accountability functions. The Center for Global Development’s MCA Monitor publishes annual predictions of MCC’s country selection based on independent analysis of the third-party indicators on which MCC relies. MCC’s analytical tools have gotten stronger over the years thanks to external scholars at the Center for Global Development and the World Bank regularly providing research, dialogue, and critique to improve them. Pressure for even greater accountability and transparency about results from these and other groups led MCC to develop evaluation briefs that summarize major findings from long and complex evaluation reports.
One of the most striking examples of this ecosystem at work came during the roll-out of MCC’s first five impact evaluations. In the face of fear that members of Congress would use the evaluation findings against MCC, a chorus of NGO leaders from the Modernizing Foreign Assistance Network publicly praised MCC for transparency, accountability, and its commitment to evaluation rather than criticizing it for failing to achieve its goals. The Center for Global Development also chimed in with multiple pieces calling on politicians to focus on the value of transparent evaluation rather than only on the shortcomings in program results, and on the importance of learning from the evaluations. This gave MCC leaders the nudge and support they needed to stick to their evidence principles.
The ecosystem beyond MCC
This kind of ecosystem is not limited to MCC. It is exciting to see more and more organizations providing support and accountability for government decision-making.
In some cases, organizations generate evidence to inform a specific decision, such as IDinsight’s decision-focused impact evaluations. Others apply existing evidence to pressing decisions, such as the rapid response work of the Africa Centre for Systematic Reviews & Knowledge Translation.
Others are going a step further, to influence how governments make decisions in an ongoing way. The Institute for Economic Affairs (IEA) in Kenya is strengthening decision-making tools for some of the most consequential decisions the Kenyan government makes. Together with the Urban Institute, IEA is working to improve and make more transparent the models that Kenyan government agencies use to forecast and allocate revenue.
Some are helping codify evidence-use practices within governments. The African Institute for Development Policy has worked with parliaments and ministries in Malawi and Kenya to develop evidence-use guidelines. The Overseas Development Institute, in collaboration with the South African Department of Environmental Affairs (DEA), has developed tools to strengthen evidence use in policymaking. In addition to the serious job of changing how government agencies work, these collaborations also build momentum and champions for evidence use. Government officials from these partnerships in Malawi and South Africa won the Africa Evidence Leadership Award in 2018 and 2019, respectively.
When it comes to internal agency culture, the big responsibility lies within government. But non-governmental actors like Results for All, the Urban Institute, and What Works Cities are helping with the how-to of institutional culture change. The Invest in What Works Federal Standard of Excellence tries to increase incentives for evidence use by ranking U.S. federal agencies on their use of data and evidence. Others are working directly with government decisionmakers to change their practices. The African Center for Parliamentary Affairs homed in on something Ghanaian parliamentary committees were already doing — overseeing national implementation of the Sustainable Development Goals — and is helping them build the capacity and culture of using data to bolster their oversight and decisions.
Hewlett is proud to support and learn from a group of African organizations working in different ways to improve how governments make decisions with evidence, and we have heard a lot of interest from the broader evidence community about fostering institutionalized evidence use.
This story started with a government agency, MCC. But the moral of the story is that non-governmental organizations have an important role in changing how governments work. The research and advocacy ecosystem that has grown up around MCC, and is emerging across Africa and beyond, shows that outsiders can influence how governments make decisions with evidence by focusing on routine decisions and helping strengthen and hold accountable the tools, practices, and culture that inform them.
[1] The author and the Hewlett Foundation do not have any formal relationship with MCC at this time.
The author thanks Norma Altshuler, Dana Hovig, Eliya Zulu, Rose Oronje, Buddy Shah and Brad Parks for helping shape the ideas outlined here.