How can foundations help grantees measure and improve their performance? This is a tough question for philanthropy, and we have wrestled with it in the Madison Initiative. While foundations routinely express their determination to drive outcomes and impact, too often their interactions with grantees on issues of performance measurement are counterproductive. The more insistently foundations press those they fund for accountability for results, the greater the temptation grantees face to show how well they are doing and how much impact they are having. These imperatives tend to undermine the sustained patterns of measurement, reflection, and learning that are needed for ongoing improvement in outcomes. I have taken to calling this dynamic the performance measurement trap.
With the advice of our developmental evaluators at the Center for Evaluation Innovation, we recently refined our grant application forms in an effort to break out of this trap. The rest of this post is a verbatim excerpt of what we now ask of prospective grantees with respect to performance measurement. We’d welcome your feedback so we can continue to fine-tune our approach:
One of our most important goals in supporting your organization is to help you measure, reflect on, and learn from your results so that you can continue to focus and improve your impact. To the extent that our grantees are able to do this, so can we. But this requires a different approach to measurement and reporting than is typical for foundations and grantees. In this alternative approach, the primary constituency is your organization, not ours. The primary purpose is supporting your improvement in the future, not reporting to us on your performance in the past. The primary disposition is a spirit of inquiry and openness, not one of advocating for your organization and trying to put your best foot forward.
We appreciate grantees that approach their work with the understanding that things don’t always work out as planned, and that their grasp on the circumstances they are trying to change in the world is inevitably imperfect and in need of adjustment. Conversely, we will be inclined toward skepticism about grantees whose worldview and strategy hold up year after year, for whom everything is materializing as they had intended, and who have only success stories to share.
To help establish the right learning dynamics, we would like you to situate what you will be measuring, and how you will be measuring it, in the context of the hypotheses your work is in effect testing. Whatever your strategies or plans, you are no doubt pursuing one or more hypotheses through which you seek to bring about positive change(s) in the world. You are essentially saying, “if we do XX, then YY will happen.”
We would like you to articulate at least one and no more than three hypotheses that you will be testing over the period of the grant. We recognize that you may be working on more hypotheses than this; if so, please prioritize the subset that you think will be most important to share and track with us. For the sake of clarity and focus, each hypothesis should be captured in one sentence as an “if / then” statement. The “if” clause should describe one line of work, activity, investment, or some other “input.” The “then” clause should describe the positive change(s) that will result, or, put differently, the outcome(s) you expect to follow from this particular “input.”
For each hypothesis, we also want to know the key evidence that you will assemble in order to test and refine it during the period of the grant. To the greatest extent possible, the evidence should include objective measures and indicators (quantitative or qualitative) to which you will have access. Define the measure or indicator, describe the current state or “baseline,” and project the improvement on that baseline that you are targeting. In some instances, the evidence may need to be generated in part from subjective judgments and feedback; in that case, let’s talk about how you or a disinterested observer or evaluator will gather this evidence as systematically as possible.
With your interim and final reports back to us, and in our subsequent discussions of those reports, we can reflect together on what you have learned and how you might course-correct. What insights have you gained about your work? Which aspects of your hypotheses have been validated? Which have been invalidated? Why? What evidence is informing your conclusions? Where are refinements needed in your work, and how will you go about making them? What are your revised hypotheses to test in the subsequent work? These are the questions we look forward to discussing with you.