Research follows one rhythm, policy another. Researchers arrive at study questions, design fieldwork, collect and analyze new data, figure out what it means, write up technical articles, and then – and only then – may translate their new knowledge into policy recommendations. Sometimes they have to get funding along the way, which slows the whole business down even more. Months melt into years.
Policymakers, on the other hand, act with alacrity, responding to problems they observe or to pressure from constituents regardless of whether there are facts to draw on. Simply because of the need for speed, “get it done” dominates “get it right.”
Or so we seem to like to believe.
One of the most common excuses for the failure of policymakers to use high quality evidence is that they can’t wait for, say, the cost-effectiveness analysis of a job training program or the randomized controlled trial of a new intervention to keep girls in secondary school. But I think that assumption needs to be challenged. With the evidence.
Exhibit A: In some domains, policy waits (a long time) for analysis. In health and medicine, certainly, we are all accustomed to the fact that rigorous testing is needed before promising interventions are widely available. From the start of clinical testing of a new medicine, it’s typically a dozen years until a product appears on pharmacy shelves – and many drugs that start clinical trials don’t demonstrate safety and efficacy, so never get approved.
But beyond that obvious example, research and development lead time is long when governments are creating new weapons systems, and when they undertake feasibility studies and design work for bridges, roads and dams. The governors of central banks look at long trends in inflation and investment before moving interest rates by even a quarter of a percentage point. And in many other areas of public policy, from environmental regulations to curriculum reform, decision makers don’t just permit analytic work to be done and digested; they require it.
Exhibit B: Policymaking can be agonizingly slow. Politicians and agency heads may like to think of themselves as acting with urgency, but – let’s be honest here – they often do not. To take one example: For at least 15 years, critics of food aid have pushed for reform of a system that does some good, but often inefficiently and in ways that distort local food markets in recipient countries.
Policy analysts have systematically documented the problems and proposed several alternatives, highlighting the costs and benefits of each. For at least 10 years, the U.S. Congress has debated food aid reform, and the current Administration has proposed modest changes in the way food is purchased and transported. However, major actions – the actions for which evidence has been amassed – have yet to be taken. In this area and in many others, policymaking is, in fact, far slower than the research. Only when the weight of the evidence is strong enough, and the advocacy around it sufficiently powerful, do decision makers overcome inertia.
Exhibit C: There are enduring questions in many fields. Is it better to pay health care workers on the basis of services rendered or outcomes achieved? How can welfare and other social protection programs for the poor be well targeted without creating burdensome paperwork or stigmatizing recipients? Why do people in need refuse free services? What are the ways to accelerate changes in harmful social norms, like those that underlie race or gender discrimination?
These are questions that can be answered, in part, through thoughtful empirical research, generating a body of evidence about how people live and how they respond to different services or incentives – empirical research that can provide insights when specific policy initiatives arise. Investments in these studies don’t pay off immediately, but they can contribute to better policy over a long time horizon.
For these reasons, I’ve stopped giving much weight to the shibboleth that the slow pace of research diminishes its value for policymaking. That’s the good news. The bad news is that approaches touting “rapid” research may not be the solution at all. We’ll have to think harder about the real constraints to the use of evidence, and work harder to solve them.
Women learning about health issues in a village in Sahre Bocar, Senegal. (Photo Credit: Jonathan Torgovnik/Reportage by Getty Images)
The positions are polarized. The debates are divisive. Arguments mischaracterize opponents’ views. Am I talking about the U.S. presidential election? Nope. I’m talking about the repetitive, tendentious quarrels on the merits and disadvantages of random assignment methods to assess “what works” in social programs in developing countries. Seriously.
For the past 15 years or so, evaluation methods originally inspired by tests of new medicines have been applied to answer very different kinds of questions in the developing world. Randomized controlled trials, or RCTs, have been conducted to measure the effectiveness of social programs, which provide resources—health care, schooling, job training or even cash—in particular ways to individuals or households with the expectation that those interventions will improve specific outcomes.
Evaluations of program impact using random assignment methods try to find out whether a particular program really made a difference. Did the job training get young people jobs or would they have been hired anyway? Will community oversight improve the quality of local infrastructure projects so that roads and water systems last?
In general terms these evaluations are asking: What is the net effect of the program? And were the assumptions about what it would take to improve social outcomes correct? Finding answers to these questions is of great – and shared – interest to those who fund, design, implement, and potentially benefit from programs. How to get those answers is where opinions diverge.
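The counterfactual question these evaluations pose ("would they have been hired anyway?") can be made concrete with a toy simulation. This is purely illustrative and not drawn from any actual evaluation: it fabricates a population in which more motivated people self-select into a program, and shows why a naive comparison of participants to non-participants overstates the program's effect, while random assignment recovers it.

```python
# Toy simulation (illustrative only): why random assignment isolates a
# program's net effect while naive observational comparisons can be biased.
import random

random.seed(0)

N = 100_000
TRUE_EFFECT = 5.0  # the program genuinely raises the outcome by 5 points

# Each person has a baseline "motivation" score that affects the outcome
# AND, absent randomization, their likelihood of joining the program.
people = [random.gauss(50, 10) for _ in range(N)]

def outcome(baseline, treated):
    """Observed outcome: baseline, plus the program effect if treated, plus noise."""
    return baseline + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 2)

# Naive observational comparison: only high-baseline people sign up.
self_selected = [(b, b > 55) for b in people]
naive_t = [outcome(b, True) for b, t in self_selected if t]
naive_c = [outcome(b, False) for b, t in self_selected if not t]
naive_estimate = sum(naive_t) / len(naive_t) - sum(naive_c) / len(naive_c)

# Randomized assignment: a coin flip, not motivation, decides who is treated,
# so the treatment and control groups have the same baseline on average.
randomized = [(b, random.random() < 0.5) for b in people]
rct_t = [outcome(b, True) for b, t in randomized if t]
rct_c = [outcome(b, False) for b, t in randomized if not t]
rct_estimate = sum(rct_t) / len(rct_t) - sum(rct_c) / len(rct_c)

print(f"true effect:    {TRUE_EFFECT:.1f}")
print(f"naive estimate: {naive_estimate:.1f}")  # inflated by self-selection
print(f"RCT estimate:   {rct_estimate:.1f}")    # close to the true effect
```

The design choice being illustrated is exactly the one at issue in the debate: randomization buys an unbiased estimate of net impact, but only for interventions that can be assigned to some people and withheld from comparable others.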
On one side we have academic researchers who design and conduct studies using a method that tries to separate the effects of a particular intervention from changes that would have occurred anyway. Think: Michael Kremer at Harvard, Abhijit Banerjee and Esther Duflo at MIT, Paul Gertler at UC Berkeley, Dean Karlan at Yale, and others.
On the other side are academic researchers who have multiple and varied critiques of the method and its application. There are Princeton’s Nobel laureate Angus Deaton, and Lant Pritchett and Ricardo Hausmann, both at Harvard.
For reasons that are beyond my understanding, the fight is intense, personal and confusing to those of us who see a dispute that is framed as “either-or” when it could (and should) be “both-and.”
Three basic concerns are leveled at the use of RCTs, although they are often woven together in perplexing ways.
1. It’s not the only way to know what works
RCTs are not the only legitimate source of knowledge – even most of their strongest proponents will agree to that. RCTs require a specific intervention and a defined population. That rules out many important policy changes that are nationwide, like civil service reforms, or untargeted, like mass media campaigns. We care about the effectiveness of those efforts, too, and non-RCT analyses can shed light on whether they are working, even if “with the program” cannot be compared to an actual population “without the program.”
Even when an RCT might be the strongest way to estimate net impact, an experimental design may not be feasible because of practical constraints. In those instances, no one would ignore the insights that observational studies and other sources of evidence can provide – although we should do our best to figure out if there are alternative explanations for what’s observed. And when RCTs are possible, the findings don’t answer every important question. Complementary analyses of the quality of program implementation are invaluable.
It is true enough that RCTs do not provide the answer to all questions, but is that a reason to reject them? We don’t set the bar that high for any other methods in which we invest time and money. So let’s not do that for RCTs.
2. It’s the wrong way to know what works
Far more than for drugs, context crucially affects how social programs are implemented and what impact they have. So, some argue, findings from a randomized evaluation in one context won’t apply to others.
We have a lot of evidence, however, about the value of social experiments from both high-income and developing settings. We’ve benefited from randomized evaluations of social programs for decades, and in the United States they have contributed to both accountability and learning in early childhood education, social protection, and job training.
RAND’s health insurance experiment in the 1970s showed that neither critics nor proponents were entirely correct about how people would respond to subsidized premiums. RCTs of “Scared Straight,” a program designed to reduce juvenile delinquency, revealed that it had exactly the opposite effect. A social experiment with conditional cash transfers in Mexico demonstrated not only that the program improved the health and education of children, but also that fears that it would encourage domestic abuse did not materialize.
These social experiments provided convincing evidence of net impact. But they also helped program designers clarify their theories of human behavior and reveal key assumptions. The knowledge the experiments generated helped refine and improve the programs.
Yes, RCTs should be conducted in settings as close to real-world conditions as possible, and we shouldn’t overspend on boutique experiments that could never be implemented at significant scale. But randomized evaluations remain a valuable tool for generating knowledge that is sorely needed to figure out how money and other inputs can be turned into better health, education and employment outcomes.
3. It’s driving us to emphasize the wrong type of development programs
This is a critique that invokes the specter of a Food and Drug Administration for development interventions. In that world, funders would support only a subset of discrete (rather than system-wide) interventions, each aimed at a single development outcome (rather than a whole constellation of them). You get “deworming” and “chlorine dispensers” rather than “address the structural drivers of poverty.” This competes directly with systems thinking, complexity theory, multisectoral work, and a whole set of approaches that resist the “if x then y” thinking intrinsic to impact evaluation. It is a critique of a nonexistent world in which RCTs not only crowd out other types of inquiry but also crowd out programs that cannot be evaluated through RCTs.
But let’s be real. We don’t live in that world and never will, no matter how many RCTs are conducted. The majority of official and private development dollars are spent on programs that are not and will not be subject to RCTs. But by providing insights into individual and community behaviors, RCTs can generate information that is useful even for questions that involve complexity.
And the existence of evidence from RCTs – the fact that the effectiveness of some programs can be established in a systematic, scientific way – does raise the bar in a healthy way for the use of the strongest possible data and evidence for all decisions. In an age when RCTs have visibility and even cachet, there’s less space for pure political discretion and greater incentive to find ways to use data and evidence to make decisions about where to spend precious dollars.
Advocating for greater attention to what is happening rather than what we hope will happen – the essence of rigorous evaluation – is where both sides can meet. You don’t have to choose methodologies to agree on the value of transparent, consistent measurement that neither ignores the complexity of context nor uses it as an excuse not to ask hard questions. In some circumstances, RCTs will be the best way to answer those questions. In others, the best approach may be to design programs that continuously search for improvements in a dynamic try-measure-learn-fix-try-again cycle.
All of the people involved in the disputes about whether RCTs are or are not the “right” methodology are themselves brilliant champions for the integration of reason, logic and evidence into public policy. That brilliance will shine brighter if they come together to support the generation and use of evidence of many kinds for the many types of policy decisions that matter.
Sometime in the next month or so, I’m going to sit down at my dining room table, open up a file folder stuffed with paper, and double-click on TurboTax. Then, like most Americans, I’m going to fill out forms that connect me to my government – forms that create a connection stronger, perhaps, than a passport application or even a ballot. Yes, my 1040.
Paying taxes is never fun, but it is an essential part of the relationship citizens and corporations have with their governments, national governments have with states or provinces, and citizens have with other citizens. All those relationships are features of a functional state that can express both ambition and compassion.
While most Americans don’t enjoy paying taxes, we certainly like and use the roads, schools and services they pay for. But let’s think for a moment about what it means when those systems barely exist.
Tax systems in many Sub-Saharan African countries are severely underdeveloped. Governments are starved for resources to serve their citizens because they don’t collect enough taxes from a sufficiently diverse set of sources, and they don’t generate revenue for public purposes in a way that is transparent and free from corruption and evasion.
In many countries in the region, tax as a share of GDP has risen little in the past 15 years; in some countries it has declined. In most high-income countries, tax revenue amounts to 25 to 45 percent of GDP; in African countries, the average is about 16 percent. And most of the taxes that African governments do collect come not from taxes on income and on goods and services, as is common in higher-income countries, but from international trade taxes like import duties.
The most obvious consequence of low levels of tax collection is that there are simply not enough domestic revenues to pay for health care, roads, schooling, police and civil defense, water systems, energy grids . . . and on and on. Poor countries have even poorer public services and infrastructure, and they become more dependent on external donors than they should be. Governments that tax poorly are losing out on a host of opportunities to build and strengthen the social contract that keeps a nation together.
Transparent, well-designed and well-administered taxation serves many purposes beyond collecting money. For instance, it is through a progressive tax structure that well-off households share with those who have less. It is through sharing of tax revenues that a national government can even out regional differences in resources. It can adjust, for instance, for the disproportionate wealth in coastal regions that have the benefits of trade and tourism, and for the disproportionate poverty in inland regions that do not. Through a balanced tax system, a government can live up to the promises of economic and social equity. That knits together the social fabric, reduces the risks of civil conflict, and creates national unity.
A fit-for-purpose tax system can appropriately capture a portion of the gains from commercial activity. This is a particularly important challenge to grapple with in developing countries, where the informal economy plays such a significant role and the informality itself leaves so many people – especially women – so vulnerable.
Informal workers such as street traders supply essential goods and services, but they operate outside of government regulation and, usually, evade taxes. Without regulation they are also without protection, and are subject to abuses; police can confiscate their goods, and chase them from their place of business. In concert with other policies, a well-designed tax regime can help to create the regulatory framework that recognizes informal work, and treats it like other forms of economic activity – with all the responsibilities and all the benefits.
A strong, responsible tax system can help reduce corruption in both the public and corporate sectors. For example, professionally run revenue collection can mitigate the risk that a public official will be bribed so that a wealthy business owner can get away without paying his fair share. And good tax policy, implemented well, can greatly reduce the more than $1 trillion in illicit financial flows that are draining out of low-income countries each year.
Finally, a tax system that touches most people gives citizens a stake in what their governments do – a hook to hold governments to account for delivering quality services at the local level and for making the right big-picture budget allocation decisions at the national level. Yes, voters can demand accountability; but voters who are also taxpayers have a much louder voice and greater legitimacy in making their claim.
In short, the policies and practices that permit governments to raise resources from domestic sources are hugely important to both the tangible development we all care about – the staffed and stocked clinics, the pipes to deliver clean water – and to the intangibles – the quality of relationships among and between citizens and their governments. It’s hard, in fact, to think of a domain of policy and public sector action that has a greater influence over both near- and long-term development outcomes.
For long-term development, helping to build a tax system that is appropriate to the context and draws on good practices from around the world is surely among the most important contributions to sustainability and good governance. In fact, it’s essential to fulfill the oft-claimed ultimate goal of development work – to “put ourselves out of business.” And the return on investment can be remarkable: A $5.8 million USAID project to strengthen tax administration in El Salvador, for example, yielded a $350 million increase in annual revenue. Moreover, development agencies can and should invest in the capacity of civil society groups to hold governments accountable for fair tax practices, and good use of the money raised. This is a crucial element to reinforce good policies and practices.
Are donor countries stepping up to the challenge? There are some positive signs, but a long way to go. Let’s just take a look at what the United States is up to. In September 2014, Secretary of State John Kerry announced that the United States would invest $63.5 million, focused on mobilizing domestic resources for health. And last July at the Financing for Development Conference, the United States joined more than 30 countries – wealthy and poor alike – in an initiative to strengthen tax systems in developing nations.
This is important progress. But it’s starting from a low baseline. To help countries improve their tax systems, USAID currently spends a tiny amount – something like two-tenths of a percent of its global development and health budgets. A big bump in resources on domestic resource mobilization is long overdue, and will bring not just the cold, hard cash to fill public coffers that are running low, but also a strong and healthy relationship between governments and civil society.
What all this means to me, as an American taxpayer, is that when I sit down to fill out my 1040 I’ll be thinking about the one penny (or less) per dollar that goes to development assistance. And I’ll be hoping that a larger share of that penny will be used to help other countries beef up their own capacity to collect shillings, francs, centavos, and naira in ways that build stronger, safer and more just societies.
Policymakers diverting funds earmarked for long-term investments to pay for immediate political fire-fighting, then using accounting technicalities to conceal their budgetary sleight of hand. NGOs calling attention to the high-stakes shell game, only to see the government practices worsen the next year. This all sounds depressingly familiar, doesn’t it?
What’s surprising is where it’s happening: in the very countries that have been the standard-bearers of good governance, transparency, and adherence to a long-term vision of fairness in social and economic development around the world.
In Denmark, Sweden, Norway, the Netherlands, and other European countries, government funding for international aid—known as official development assistance—is increasingly being reassigned to cover the expenses of receiving and resettling refugees within the donor countries’ borders. But thanks to a specific set of international reporting rules, the totals for aid spending remain high—so high, in fact, that those countries are likely to retain their ranking atop the Center for Global Development’s Commitment to Development Index.
The shift in funding is dramatic. In 2016, Denmark intends to spend one-third of its total aid budget—some $600 million—to cover refugee support within its own borders. Sweden and Norway are making even more drastic reallocations.
These are countries that have been stalwart supporters of long-term development investments in many countries in Africa and Asia. Their funding has permitted both the public sector and non-governmental organizations to improve education, provide vital health services, protect the rights of women and girls, and expand the infrastructure that allows farmers to get products to market and businesses to create jobs for young people. They have been among the most active advocates for generous, predictable, transparent aid flows, often chastising the United States for failing to adopt effective practices such as participation in sector-wide pooled funding aligned with the priorities of recipient governments. They were also influential in promoting ambitious international goal-setting, codified in the Sustainable Development Goals agreed by UN member states just a few months ago.
The consequences of these budget cuts are profound. Many of the non-governmental organizations we support have learned recently that promised funding will be cut by 30% or more, leaving them unable to finish projects and, in some cases, threatening the viability of the organizations themselves. NGOs have few, if any, alternative sources to draw on. (Although our own funding to them remains steady, we are unable to fill in this sudden gap.) Governments, too, are reeling from cuts in programs that have long been dependent on donor aid. The most likely public sector response will be to seek greater support from other sources, and particularly from China.
But this is more than a story about changing priorities in a stressed-out world; it’s a story about transparency. European donor countries are able to shift resources to needs within their own borders while appearing to be as generous as they have always been for one simple reason: the Organization for Economic Cooperation and Development (OECD) permits “in-donor refugee costs”—expenditures related to receiving and resettling refugees during the first 12 months—to be included in the reported totals of official development assistance. While this has been allowed informally since the early 1980s, and formally since a 1988 Statistical Reporting Directive from the OECD Development Assistance Committee (available in this 2013 report), the practice has gained in popularity in the past couple of years.

As the challenge of refugees has grown, governments in Northern Europe have applied the reporting rule with more vigor and less rigor. They have shifted large shares of their aid budgets, including to expenses associated with resettlement past the 12-month mark. Several watchdog NGOs in Europe have tried to draw attention to this budget game, seeking to galvanize opposition to the use of aid resources for donor-country needs. To date, those efforts appear to have failed, in part because the overall “aid” numbers stay high and the pain is being felt far away.
Without question, the explosive increase in refugees from Syria and many other countries is putting a major strain on European countries, and the response demands that resources be mobilized. The money must be found somewhere. But drawing resources down from development assistance is short-sighted, sacrificing long-term benefits and disrupting important work in progress. And exploiting OECD reporting rules hides the governments’ actions from their own people—taxpayers who, in the past, set the standard for global citizenship. They deserve better, and so do these European countries’ longtime partners in the developing world.
If you’ve been around the international development business long enough, you’ve probably heard someone ask, “So, what do we know about what works?” Maybe it’s a parliamentarian, a minister, or the Administrator of USAID. Maybe it’s a reporter. Or maybe it’s a newly arrived junior member of a project team, hoping her middle-aged colleagues—who clearly have all the answers—will rattle off a list of evidence-based interventions to improve health, teach kids, empower women, or reform the civil service.
The response depends on who’s doing the answering. “Old development hands,” the folks who have worked in dozens of countries on project after project, are likely to say we know a lot about what works, and success depends on community engagement (or gender-sensitive project design, or careful monitoring, or something else hard-won experience has convinced them is critical but too-often overlooked). Members of the research community will often say we know very little—maybe even nothing—because few interventions have been subject to rigorous evaluation, or because they haven’t had time to keep up with the literature. If the person answering happens to have done a systematic review of all the research on a particular topic, the response will be an exhaustive (and thoroughly exhausting) review of what works and what doesn’t. And the rest of us? Well, we’ll just shrug—or bluff.
That’s why I love the International Initiative for Impact Evaluation’s Evidence Gap Maps: elegant, colorful visualizations of what we know and what we have yet to discover. Based on searches for all relevant impact evaluations and systematic reviews on a given topic, the maps are a one-stop evidence shop for expert and curious non-expert alike. They make us smarter.
Take, for example, the question of what we know about how to improve teaching, pupil attendance, and learning outcomes in primary and secondary school. A pretty important question. The 3ie Evidence Gap Map displays on a single page how interventions like school meals, scholarships, cash transfers, teacher incentives and community monitoring affect each of these outcomes. For well-studied interventions, you can click through to curated impact evaluations and systematic reviews. For interventions that have rarely or never been evaluated, the white space tells the tale of a major evidence gap. Like all the Gap Maps, the one on education captures far more research than most people have time or energy to sift through.
Just as geographic maps chart territory without telling you precisely where to go, the Evidence Gap Maps themselves don’t offer recommendations for policy or practice. But they tell us what studies are most relevant to our specific question and context, and are an invaluable tool to help those in the research community (and their funders) set priorities.
It is precisely to help set research priorities that we’ve started working with 3ie on an Evidence Gap Map on adolescent reproductive health programs and outcomes. We’re at the start of the process, which entails precisely defining what we mean by “adolescent” and “reproductive health programs,” and what outcomes—from youth empowerment to pregnancy prevention—we and other funders might want to learn most about. The next few months will be filled with extensive literature searches, and then sorting, synthesizing, and summarizing.
My colleagues and I are eager to see the result, because we’ll be able to home in on the questions that have been investigated the least, which in turn will help us focus our (always limited) research dollars. Best of all, we won’t have to resort to outdated experience or whatever research findings we happen to remember when someone asks, “So, what do we know about what works?”
If I had to come up with one word to describe what I want to see in a grantee organization, it’s this: courage.
Courage comes in many forms. We see it most clearly in the organizations we support to ensure that women around the world have access to comprehensive reproductive health care, including abortion. These organizations are made up of people who do their work under extraordinarily trying conditions. Overseas, they must stretch scarce resources and find creative ways to reach women who themselves are disempowered. In the United States, they face relentless attempts to throw up legal and regulatory barriers to their work. And all too often, they also confront threats to their personal safety and security. Unlike most of us, they need courage just to keep showing up for work every day.
Grantee organizations in Africa, Asia, and Latin America also demonstrate courage in their commitment to shining a light on government policies and practices. They risk intimidation, illegitimate restrictions on their actions, and even harsh reprisals from those who fear or resent their work on behalf of their fellow citizens. After an era of liberalization, we now see governments around the world imposing more and more limits on civil society. While often cloaked in the language of “national security” or fraud prevention, government actions are closing the space for civil society, and mission-driven organizations may be unable to maintain their funding, find offices to rent, or hold public meetings. When your own government challenges your organization’s right to exist, it takes real courage to stick with it.
Finally—and to be clear, this is courage of a far less dramatic kind—some of the organizations we support in Washington are willing to speak truth to power, even at the risk of becoming unpopular within a policy community that prizes access. Criticizing a sitting administration, particularly one that is “friendly” toward the issues you care about, can affect your standing in subtle but important ways. It takes a kind of courage to be the skunk at the garden party.
So what can we do to support courageous organizations and the people within them? The obvious answer is that we can put them at the top of our priority list for grants. And we do. Money can’t protect organizations from the many threats they face, or keep backbones stiff in the face of attacks. But it can help take one challenge off the table for them, particularly when our support comes with the reassurance that it is offered not in spite of the difficulties they face but because of them.
As people around here headed out the door for a week of vacation filled with family, friends, and yes, Christmas presents, I asked a few of my colleagues in the Global Development and Population Program what gifts they’d already gotten this year from our grantees. Puzzled, one colleague gently reminded me that our employee handbook specifically proscribes accepting gifts from people whose work we support. So I explained: “No, not material gifts. I mean what gifts of knowledge or joy or time did you get?”
Here are some of their responses:
“Several of my grantees have provided very frank and honest assessments of challenges they’re facing in their work. That candor is a gift. They’re not trying to sell us anything—they’re treating us as trusted partners. It allows us to know what’s going on and to help if we can. That’s a gift, too.”
“I’m grateful for the gift of something-better-than-I-ever-imagined from the team that put together the Data Impacts cases. I shared the seed of an idea with amazing people, and they grew it into something fabulous.”
“I got the gift of seeing how our funding benefits real people—through the storytelling done by Marie Stopes in their great video series we supported, and through the Images of Empowerment project, which generated hundreds of beautiful photos that anyone can use for free.”
“One prospective grantee gave me the gift of breaking up with me before I had to break up with them. The grant was getting complicated and I was starting to try to problem-solve and ask them to change elements of the proposal. But it looked like it was going to fall through. Before I had a chance to say ‘no,’ they emailed to tell me they decided not to pursue the grant. That was a gift.”
“Although we’re concluding our funding for the Population and Poverty Research Network, some of the young researchers who had been funded under that initiative have volunteered to self-organize future conferences on a smaller scale to continue the scholarly exchange on population and development research. That’s a wonderful gift—a testament to the value of the PopPov network, and it shows that we did indeed help to revitalize the field.”
“The inaugural steering committee meeting for the People’s Action for Learning Network was a true gift. The spirit of collaboration, open sharing of success and failure, and incredible warmth across the network is so special.”
“We all got the gift of feedback from the Grantee Perception Survey.”
That’s only a partial list, of course. We are often (and repeatedly) amazed at the creativity and commitment of the people whose work we support. We learn new things and feel buoyed by the spirit of genuine partnership they bring to each conversation. We are indeed grateful for the many gifts our grantees give us every day.
For more than a decade, the Hewlett Foundation has supported organizations working on a very big worldwide challenge: how to increase people’s ability to understand where their governments get money, how that money is spent, and whether commitments for delivering health care, education and other public services are being fulfilled. While this is not an effort to export American-style democracy, it is based on the notion that, empowered with information, citizens can encourage politicians, public officials and other power-brokers to be accountable; greater accountability will translate into better government services; and better services, in turn, will improve health and well-being.
Our portfolio of grants in the field of “transparency and accountability” is wide-ranging. It includes coalitions of civil society organizations pressing their national governments to publish information about how much they receive for access to oil, gas and mineral resources. And it includes groups getting the word out to citizens that they have a right to ask local public officials to fix broken wells and poorly performing schools. Today, we’re sharing our updated strategy, which builds on the past and aims to make our transparency efforts more effective in leading to government accountability.
That work has paid off. Compared with just a decade ago, far more information about government revenue sources, public budgets, and expenditures is now routinely available. Beyond this “new normal” of fiscal transparency, civil society is pressing for—and sometimes achieving—greater citizen participation. In some cases, this takes the form of involvement in participatory budgeting. In others, it means that citizens themselves are collecting information about the quality of government services in their communities, and are providing feedback to responsible officials.
But for all the progress, we have to face a basic fact: greater transparency in most countries has not triggered many citizens to use the newly available information. Without citizens acting on this information to hold their leaders accountable, the problems of poor quality government services persist.
So, ten years in, we’ve taken a fresh look. We’ve spent the past year or so digging into what’s resulted from our grantmaking, consulting with experts in the field of transparency and accountability, and reflecting on our own strengths and limitations. The result? A revised strategy to advance transparency, participation and accountability that reaffirms our commitment to the field, while recognizing and bolstering the critical role of citizen participation.
The first challenge is improving the enabling environment. This means that we will continue to work with organizations to create and reinforce norms and standards that enable greater transparency and participation. We have a lot of experience in this area, and remain committed to the promotion of global norms—as well as related efforts at the regional level—that create the conditions for citizens to have access to more and better information so they know what to expect from their governments. The existence of global norms encourages disclosure of information about government activities, and particularly public expenditures. While most grants in this category will involve consolidating gains already made, we expect also to explore some new frontiers. For instance, because of the growing importance of domestic finance, we expect to support work that reveals the magnitude and sources of illicit outflow of funds from developing countries, and that increases transparency around tax revenues and public procurement.
The other aspect of the enabling environment that we are well positioned to work on is ensuring that information about resources and service quality is collected and can be used (and in some cases generated) by citizens. We continue to believe that access to information is a fundamental enabler for accountability, but we know that just releasing information is not enough. We’ll expect that transparency-related efforts will push harder to achieve citizen participation by considering questions of who will use the information and how. Thus, we’ll seek more opportunities to make information relevant and accessible to citizens. In some cases, this means supporting the collection of information by citizens themselves, and we are particularly enthusiastic about the potential of citizen-led learning assessments—which were taken up by nine countries with our Quality Education in Developing Countries initiative—to reveal shortcomings in the quality of basic education.
The second challenge represents newer territory for us, but we see it as crucially important. This is strengthening the ability of citizens to speak and act around service delivery challenges, while building channels that permit citizens to engage with all levels of government.
In our grantmaking, we expect to work with citizen groups and coalitions of civil society organizations that are working collectively and in a sustained way around using accountability mechanisms to tackle service delivery gaps and challenges. While we’re not able to provide support directly to small, local citizen groups, we will look for opportunities to support regional and national organizations with strong connections to and legitimacy among local groups. We will pay special attention to ensuring that partnerships we foster between smaller and larger organizations are open, productive and based on mutual respect, trust, and learning.
We will also work to identify, construct, and learn how to use conduits for citizens to interact with and provide feedback to public sector institutions. We expect that some organizations we’ll support will press to strengthen formal institutions, such as those responsible for requests for information or for overseeing audits. Other grantees may pursue more informal routes to constructive citizen-government engagement.
Throughout our grantmaking, a primary aim is to contribute knowledge to a dynamic field. Much of that learning will be undertaken by grantees themselves, and we look forward to strong relationships with organizations that have demonstrated a commitment to learning, adapting, and sharing what they learn with others. We will also make specific investments in research, evaluation, and learning networks, with an emphasis on building or leveraging strong connections between academic and practitioner communities.
Overall, our new strategy is more a story of continuity than change. We hope our grantmaking will continue to contribute to the larger conversation in the transparency and accountability field, as well as to progress toward strengthening the accountability relationship between governments and their citizens to improve the delivery of quality public services.
We welcome comments on the new strategy. Share your thoughts below, or email us at GDandP@hewlett.org.
One of the most straightforward transactions in the foundation world is the no-cost extension. In our program at the Hewlett Foundation, at least, our practice has been to approve them routinely. A grantee simply writes to say that work is progressing more slowly than expected and that they wish to have the end-date for the project pushed out a few months (or longer) with no change in the overall budget. We say yes, and change the dates in our grant file so that we know when to expect the final report. This is so easy—a few keystrokes, really—that it’s natural to think there is no negative consequence, no reason to question, no cost.
But that’s wrong.
The cost of no-cost extensions is, in fact, quite high. If the Foundation is doing its work correctly, we should be supporting work that is relevant, high-impact, and urgent. It should matter—a lot—if a study is published this year or two years from now, if a policy working group concludes its work before or after the next regional summit. We should be supporting work that is going to change lives for the better—with no time to waste. It should matter to us, and it should matter even more to the researchers and policy advocates who are doing the work. If the delay doesn’t matter, then I think we have some pretty important questions to ask about why not.
We should never refuse the occasional, well-justified request for a no-cost extension, of course. A longer-than-usual rainy season can delay data collection, and when a crucial team member quits unexpectedly, the clock has to stop until a replacement can be found. Life happens. But we’ve had lots of grantees who routinely ask for and seem to depend on no-cost extensions—yes, my academic friends, I’m looking at you (although not just at you). No-questions-asked no-cost extensions enable procrastination and poor project management, and send the signal that later is as good as sooner. That’s simply not true, and it’s a habit we want to break.
In the Global Development and Population Program, some grant awards now come with the warning that we will not approve no-cost extensions, and that if the funds are not expended by the scheduled end of the project, they will have to be returned to the Foundation for other uses. We’ve heard some grumbling, and in a couple of cases we’ve ended up getting checks back from prestigious universities. But, all in all, the reactions have been more positive than negative, because our message has been correctly interpreted: your work matters a lot, to us and to the world. So get it done and make a difference, today. Isn’t that what we should all want to hear?
In a world where both resources for policy research and the attention spans to take it in are finite—that is, in the real world—less can definitely be more. Less research can mean that there’s more money and time for the activities other than data collection, analysis, and synthesis—more time and money, that is, for the activities that can make the difference between the proverbial “report on a shelf” and the study that informs better policy decisions in a meaningful way. Less sophistication can mean more understanding: Asking simple, straightforward questions and using descriptive data can provide just the type of information that decision makers understand and value. Less verbiage can mean more reading: Writing shorter documents focused on the questions the intended audience actually wants answered can dramatically increase the likelihood that they will be read.
Within the Global Development and Population Program, we support economists and other social scientists at universities and think tanks who are pursuing important research questions: Which economic growth path will disproportionately benefit the poor, women, and those in the informal economy? What types of health service delivery can most effectively reach young people with vital reproductive health care? How can countries avoid the “resource curse” to obtain broad benefits from extractive industries? As the research is designed and conducted, whether the studies are one-country case studies, multi-country data analysis, or experimental impact evaluations, the researchers make thousands of decisions about the scope, methods, and means of communicating results. In each grant proposal, and in our ongoing relationships with researchers, we try to see whether those decisions are made with an eye toward getting the greatest possible value out of the investment in research.
The tradeoff we see most often is between doing more research and spending more time on engagement with journalists, advocates, policymakers, and others who might interpret and use the findings. In general, researchers maximize budgets and schedules for the research itself, and shortchange the activities that help ensure their research is relevant to current policy debates and is shared in the many venues and formats needed to achieve real impact beyond any research publication. As funders, we frequently ask how members of the policy community will be engaged from the outset and how the work will be disseminated in a way and at a time that corresponds to the intended audiences’ needs. Too often, the answers are vague, with little evidence that the proposed budget could accommodate the significant amounts of labor, travel, coffee-and-sandwiches, and other quotidian, essential costs of policy outreach.
We also often see researchers reaching to explore ever more nuanced policy questions and applying sophisticated econometric and other abstruse techniques. It’s impressive, and may be just the ticket to get the resulting paper into a prestigious journal (or at least into a years-long cycle of revising-and-resubmitting). But more often than not the analyses that serve policy audiences are those that simply and compellingly bring to light facts about the conditions of people’s lives, the quality of public services, and the potential costs or savings from a particular government program. That is, the studies that present descriptive and basic analytic results in straightforward ways that connect to specific policy domains and decisions—the kind that a technocrat in the Ministry of Health, Education, Planning, or Finance might need to come up with a better program design and stronger budget request.
The final way that less can be more is in the presentation of findings. For their academic and think tank peers, policy researchers feel compelled to “show their work,” sharing all the details of study design, conceptual framework, analytic approach and—where they sometimes lose even me—the multiple specifications of the multivariate models they tried before landing on the “right” one. This adds up to far too much information for policy audiences, who are likely to tune out at the first mention of “sample size” and “statistical power.” What most people in the advocacy and policy community want to know is: Why is this important? What did you find? How does this fit into what we know from other sources (i.e., does this challenge or confirm conventional wisdom)? And, crucially, so what? What should we do differently now that we have these results? Researchers who can answer those questions succinctly and precisely are able to attract and sustain attention—and win our admiration for their communication skills.
As I’ve written before, we believe in the power of evidence to improve people’s lives, and we commit millions of dollars to specific studies and to policy research institutions. We want those dollars, in aggregate, to have the greatest impact they can—but that won’t happen until the researchers themselves do less to do more.