December 2, 2014 — By Julia Coffman and Tanya Beer
Julia Coffman and Tanya Beer are the director and associate director, respectively, of the Center for Evaluation Innovation, an organization that focuses on the evaluation of "areas that are challenging to assess, such as advocacy and systems change." They are supporting the staff of the Madison Initiative as that strategy is developed and launched.
At the Center for Evaluation Innovation, our mission is to push the evaluation field forward in new directions. This often means we’re doing things we’ve never done before, like using systems mapping in our evaluation work. In an earlier blog post, Daniel introduced the systems map for the Madison Initiative and the rationale for creating it. Now that the first draft is public (and open for comments!), we have some early thoughts about using systems mapping as an evaluation tool.
First, to set the context: We are conducting a developmental evaluation of the Madison Initiative’s initial phase of experimentation, learning, and field building. Our role is to be a “critical friend” to the strategy team, asking tough evaluative questions, uncovering assumptions, and collecting and interpreting data that inform ongoing strategic decisions. This role deeply affects our choice of evaluation questions, tools, and methods, starting with our choice to use a systems map rather than a theory of change to guide our evaluation work.
Why use a systems map instead of a theory of change to guide the evaluation?
We don’t think of theories of change and systems mapping as mutually exclusive, nor do we think one tool is better than the other. They both help to frame and shape evaluation priorities and plans. But different circumstances call for different tools. We chose systems mapping because it’s particularly well suited for thinking through possibilities for change in a complex and uncertain environment like democracy reform. It helps us to see how cause-and-effect relationships are entangled and mutually reinforcing, rather than one-way and linear. It helps us to explore how pushing on one lever in the system might have knock-on effects in other parts, or how a change strategy might need to interrupt a vicious cycle.
Another critical difference is that a systems map is not a representation of the Foundation’s strategy. Instead, it illustrates the Foundation’s understanding of the broader system in which its strategies are positioned. It also helps us to see how various actors—including other democracy funders and non-grantees—are positioned in that same system. So rather than isolating the Madison Initiative from other change efforts as theories of change often do, the systems map keeps us mindful of how the Madison Initiative interacts with other change efforts.
Finally, during the Madison Initiative’s first phase, rather than make grants aligned with a specific theory of change, the Foundation is starting by spreading a series of smaller grantmaking “bets” within these systems to see where grantees might get traction and what this reveals about the system’s hazier parts. As we learn more, the Foundation may move toward more specific theories of change. We’re not sure yet. But for now, the grantmaking strategy is best understood and supported by seeing how the Foundation’s investments and the work of the grantees it supports are situated within the larger systems they aim to change.
How will we use the systems map for evaluation?
First, we are using the map to “pressure test” the spread-bet strategy, examining the extent to which the Foundation’s grants (individually and as a whole) correspond with the conditions and dynamics that drive Congressional dysfunction. Are organizations focusing on dynamics of the system that research and other actors suggest are movable? Is it possible that change in one area will have unintended consequences in other parts of the system?
We’ve also used the map to generate and prioritize our early evaluation and learning questions, which differ from more traditional evaluation questions that ask, “Are we doing things right (or in accordance with the foundation’s strategy)?” Our questions are broader and get at: “Given what we think and understand about the system, are we doing the right things to promote Congressional functioning?” and “Do we need to understand the problem and the dynamics of the system differently?”
Now we are building an early learning framework and evaluation plan to explore those questions. As we evaluate and learn more about how the system operates, we will revise the map every six months or so to reflect our changing understanding and to identify emerging questions.
Any disadvantages or concerns?
There was a risk that the mapping exercise could become tedious, with an overemphasis on “getting it right” (see Jeff Mohr’s advice about that). In truth, the process did take longer than we thought. While there are some things we could have done to avoid that, be forewarned that this is not a one-day endeavor! It’s also difficult to get the right level of “zoom,” such that there is sufficient detail to uncover important dynamics that are beyond the obvious, but not so much detail that no one has the stomach to use it.
After looking at the map, an evaluator friend asked whether the process really produced enough fresh insights that it was worth the effort. The mapping process did raise meaningful questions and spur tough discussions. But we recognize it now risks becoming a static communications artifact laid to rest on the same proverbial dusty shelf as countless theories of change. While we’ve got ideas about how to integrate it into our ongoing learning process, we expect there will be many lessons about how to make this work.
We look forward to building the map out and drilling down in certain areas. But we don’t want to go overboard. With the Kumu software, we have endless possibilities for adding bells and whistles. But unless there is a strategic or learning utility to adding more detail, we want to be careful of being lured by its shine. If any evaluators have thoughts about how to keep the map alive and useful—or cautionary tales about systems mapping—please share them. We’ll keep you posted on how it’s going!