In March of 2014, when we recommended an initial three-year, $50 million phase of exploratory grantmaking for the Madison Initiative to the Hewlett Foundation board, we noted in our proposal memo that “the democratic system we want to change is more accurately described as a system of systems (and subsystems) on a national scale. These interconnect in ways no one fully understands, partly because the systems and subsystems are themselves dynamic. This, in turn, requires what has come to be known as an ‘emergent’ strategy – meaning a strategy that is itself dynamic and meant to be reevaluated and adapted as the work proceeds. Instead of proceeding for some period of time before commissioning a retrospective assessment by an independent evaluator, we will implement an ongoing assessment process designed to inform prospective decision making, one that will be performed by evaluators working not at arm’s length but closely with our team from the beginning of the work.”
As part of our commitment to openness and learning, this overview describes how we have gone about evaluating the Madison Initiative in this fashion – from early foundational work we did beginning in 2013, to efforts that informed the initiative’s renewal in 2016, to a preview of future evaluation efforts. This overview provides a narrative context for the key data, information, and assessments that we have developed during the course of our ongoing evaluation and includes links to the underlying material.
Laying the foundations for the Initiative
In 2013, as we were beginning to assess whether and how we might enter the field, we knew that other funders were supporting grantees working on democracy-related issues. But we did not have a solid understanding of what constituted the U.S. democracy field, nor did we have a clear view of who was funding whom, to do what, at what levels of support – key pieces of information for a new funder seeking to develop and assess potential funding strategies. To establish this basic information, we joined forces with seven other foundations already working to improve democracy as well as the Foundation Center to develop a comprehensive taxonomy of the democracy field, breaking it down into four main categories and 18 sub-categories. Drawing on the Foundation Center’s comprehensive database, we could map the funders and grantee organizations working in different parts of the field, along with how much and where grant dollars were flowing. We have subsequently joined forces with the Democracy Fund and the Foundation Center to keep this information updated so that it can serve as a resource for other funders, grantseekers, and researchers in the field.
By the time the Hewlett board approved the initial phase of the Madison Initiative in March of 2014, we had consulted with more than 100 fellow funders, grantees, commentators, researchers, and political and civic leaders across the ideological spectrum to solicit their views about the problems facing U.S. democracy and potential solutions for them. We had also commissioned a set of assessments from leading political scientists on solutions to polarization and convened them to discuss their proposed remedies. The papers were subsequently published in Nate Persily, ed., Solutions to Polarization in America (New York: Cambridge University Press, 2015). What we had not yet done was systematically share our proposed approach with outside parties and solicit their questions and input about it. In the spring and summer of 2014, our evaluators at the Center for Evaluation Innovation (CEI) solicited anonymous feedback from 17 outside experts – fellow funders, academic researchers, think tankers, and commentators – to get their candid perspectives on the approach we were planning to take. This “pressure test” of our strategy gave us a better feel for what was driving polarization, for the potential impact and feasibility of the different interventions we were considering, and, more broadly, for what an informed and insightful group of critical friends saw as the appropriate goals for our work and what we might be missing.
Another key underpinning of our developmental evaluation in the early stages of our work was the generation of a comprehensive systems map to identify what we believed were the primary variables in the “system of systems” we were seeking to change, as well as the relationships and interplay among these variables. This was an important and useful exercise in multiple respects. It helped us surface and reconcile different assumptions and perspectives about potential intervention points within our team. It also helped us share our integrated point of view and solicit feedback on it from the broader field, as we made our systems map publicly available and reviewed and discussed it with external partners in multiple settings. Finally, the mapping also proved to be a boon for philanthropic collaboration. The governance team at the Democracy Fund used our map as a starting point for a similar undertaking they invited us to participate in during 2015 and 2016. Their process, and the map it produced, in turn substantially transformed and improved upon our work. This process also helped our teams – the two largest funders focused on this area of the democracy field – deepen our working relationships and sync up our understanding of the problems we are endeavoring to solve and potential solutions for them.
Beginning to evaluate our grantmaking
In mid-2015, as we approached the halfway point of the initial exploratory phase of grantmaking for the initiative, we made a point of gathering systematic feedback from our grantees on both the substance of our work and how we were going about it, to assess whether we were headed in the right overall direction and using the best route to get there. On the substance of our strategy, our evaluators at CEI surveyed our existing grantees under conditions of anonymity to ascertain what they thought of our strategy, which lines of grantmaking held more or less promise for realizing our goals, and what the field as a whole needed to be successful. On the process side, we gained particular insight about what we were doing well and where and how we could improve our grant practices from our grantees’ input via the Grantee Perception Report (GPR) survey that the Center for Effective Philanthropy fielded among Hewlett Foundation grantees earlier in 2015. You can review the Madison Initiative’s 2015 Grantee Perception Report results and compare them with internal and external foundation benchmarks here. Finally, to fine-tune our grant practices – the processes through which we solicit and decide on proposals, make grants, and gather and learn from grantee reports – we undertook an internal assessment that also sought out grantee perspectives and led to substantial changes in our grant practices, which are described in this SSIR article.
Beginning in 2015, we also initiated a series of evaluations of different clusters of grantee work across our portfolio. These assessments were done in close conjunction with the grantees involved in the hope that they would inform not only our work but also the work of the organizations we have been supporting. The goal of this first wave of cluster evaluations was formative and developmental, not summative – i.e., we were focused more on questions like “what is the nature of the work being done?” and “where and how could it be improved?” than on “what impact are we having?” You can access evaluations of work we have supported in the following areas by clicking on the links below:
Renewing and expanding the Madison Initiative
In the summer of 2016, as we were preparing to propose a renewal of the Madison Initiative to our board, we prepared an overview of our evolving strategy and what we had learned through the first phase of our work to share with and get feedback from 100 partners (grantees, co-funders, and advisors) at the annual meeting we held for this group. We subsequently posted this document on the foundation’s website to get additional input. The main points of feedback we took away from the discussion of it at our partner meeting are available here.
Later that summer, to inform our final renewal proposal, our evaluators from CEI prepared an extensive Developmental Evaluation Report that integrated and distilled the findings and recommendations from all of the evaluation work they and we had undertaken during the exploratory phase of the Madison Initiative, from the initial “pressure test” memo through the various cluster assessments that had been completed by this point in time.
We submitted our renewal proposal to the board in October of 2016. We had taken into account the political ferment and disruption that marked the 2016 presidential campaign, which underscored in our minds the urgency of the work we had embarked on. In speaking to our board less than a week after this tumultuous and surprising election, we admitted to feeling a bit at sea. We were still assessing what had just happened, why, what it might portend, and what we should do about it. The board, recognizing the gravity of the moment, approved our request for a five-year, $100 million renewal of the initiative. But they also instructed us to proceed in a spirit of contingency. We agreed that we might need to redefine the problem or modify our approach to tackling it, depending on how we made sense of what had just happened as well as what we observed in the ensuing months.
2017 thus found us revisiting our plans for renewing the initiative. We laid out the scope and initial conclusions of this reassessment in another update memo to solicit feedback during our partner meeting in June of that year. Midway through the year, we had zeroed in on the problem of digital disinformation, and the role it plays in polarizing our politics, as the one that could warrant an expansion of our work. We undertook an assessment of where and how philanthropy could make a positive impact in this area. After concluding that there was a particularly pressing need to jump-start research in this area, we commissioned a literature review of the key open questions and convened scholars, advocates, and funders to begin to develop a shared research agenda. In March of 2018, in the wake of this exploratory work, the Hewlett board authorized expanding the Madison Initiative to include the objective of combating digital disinformation, with an initial focus on supporting research on the nature of the rapidly developing problem and what could be done to address it. In June of 2018, we integrated this new objective into a revised strategy for the initiative.
Next steps in the evaluation of the Initiative
After revising our strategy in the first part of 2018, we proceeded to flesh out a corresponding outcomes framework that we will use to focus and assess the Initiative through 2021. This summary of the framework gives an overview of the key objectives and longer-run outcomes we are working towards. We also used the framework to develop interim outcomes, implementation markers, learning questions, and evaluation plans that will guide our work and learning in the years ahead. With the specificity of this new framework, our efforts to assess and adapt our work based on what we are learning along the way have entered a new phase, one that will be increasingly though still not exclusively focused on summative questions about whether, where, and how we are having impact through our work.
The initial evaluation work in this next phase is already underway. For example, later in 2018 we had the opportunity to reflect on and learn from rich troves of data from two surveys of our grantees – one focused on assessing the demographic diversity of the boards, leadership, and staff of our grantees, and the other a second grantee perception report administered by the Center for Effective Philanthropy. Both surveys illuminate paths forward for improving our work. In addition, we have evaluations planned in 2019 for work we have supported to improve the culture of Congress and to reform the budget and appropriations process. We have also commissioned a historical assessment of the adoption and initial implementation of ranked-choice voting in Maine.
We will update this overview document with links to all the relevant evaluation documents that are planned and pending as they become available. In the meantime, if you have any questions about the evaluation work described above, or suggestions for future evaluations, please contact Daniel Stid, Director of the Madison Initiative, at email@example.com.
*The William and Flora Hewlett Foundation is a nonpartisan, private charitable foundation that advances ideas and supports institutions to promote a better world. The foundation does not lobby or earmark grant funds for prohibited lobbying activities, as defined in the federal tax laws. The foundation’s funding for policy work is limited to permissible forms of support only, such as general operating support grants that grantees can allocate at their discretion and project support grants for nonlobbying activities (e.g., public education and nonpartisan research). Additionally, the foundation may fund nonpartisan political activities by grantees in compliance with the electioneering rules. The foundation does not engage in or use its resources to support or oppose political candidates or parties.
**This post was originally published on January 24, 2019.