Accountability for What? Results Produced or Producing Results?

People solving social problems in innovative ways naturally want to know that their time and grantmaking dollars are making a difference and that their efforts moving forward will produce an even better payoff. Hence the investment in evaluation. But social innovation and evaluation can be uncomfortable bedfellows. Traditional evaluation fails social innovation on three counts:

  1. It evaluates results against outcomes and indicators established at the beginning of the initiative. But the pace inherent in innovation requires that people adjust their thinking as the work unfolds. When this happens, the initiative and the evaluation can diverge, making evaluation activities feel out of sync and constraining.
  2. Because evaluation reports are typically written for audiences external to, and separate from, the actual work, evaluation insists on a high level of data fidelity. As a result, such reporting too often excludes the important but non-linear, non-triangulated outlier results that the people doing the work need to consider as they adjust and improve. Reports then risk being “whitewashed” and too generalized to be of much value.
  3. Finally, the traditional cycle of annual or biannual reporting is too slow for the dynamic environments in which innovative change agents operate. Reports arrive too late to help track and test thinking at the defining moments when they would be most useful.

What’s the alternative? We and others have been working on this problem. The field of Developmental Evaluation is devoted to rethinking evaluation for complex and dynamic social change initiatives. But we at 4QP have been thinking about this question in a completely different context, one that we believe holds some lessons for the field.

Several years ago, 4QP partner Marilyn and her former colleague Charles Parry had the opportunity to study an urban police department’s adoption of New York’s CompStat model. CompStat relies on very simple trend data for a bucket of crimes: burglaries, car thefts, and aggravated assaults. Every three months, each district leader would discuss their district’s trend data with peers and the commissioner. If burglaries were up, they were expected to do their best to understand why and to talk about what they planned to do about it. They knew that, a few meetings later, they would be in front of their peers again, and they wanted to be able to demonstrate that their thinking and actions had succeeded in improving the trend line.

Meanwhile, their peers were free, when the need arose, to “steal” and refine these innovations to improve the trend data in their own districts. This impressive, self-reinforcing platform for institutional learning made room for humility and curiosity, even in the face of accountability and competition. (One unexpected result: beat cops began requesting better data and analysis tools.)
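To make “very simple trend data” concrete, here is a minimal sketch in Python of the kind of quarterly trend report a CompStat-style review might revolve around. The district names, crime categories, and counts are hypothetical, invented purely for illustration; this is not the department’s actual system.

    from collections import defaultdict

    # Hypothetical quarterly incident counts per (district, crime type).
    counts = {
        ("District 1", "burglary"):           [120, 132, 118, 141],
        ("District 1", "car theft"):          [60, 58, 63, 55],
        ("District 2", "burglary"):           [95, 97, 110, 125],
        ("District 2", "aggravated assault"): [40, 38, 35, 33],
    }

    def quarterly_trend(series):
        """Percent change from the previous quarter to the latest one."""
        previous, latest = series[-2], series[-1]
        return 100.0 * (latest - previous) / previous

    # Group trends by district so each leader sees their own lines.
    report = defaultdict(list)
    for (district, crime), series in counts.items():
        report[district].append((crime, quarterly_trend(series)))

    for district, trends in sorted(report.items()):
        print(district)
        for crime, pct in trends:
            direction = "up" if pct > 0 else "down"
            print(f"  {crime}: {direction} {abs(pct):.1f}% vs. last quarter")

The point of keeping the computation this simple is that the numbers themselves are not the learning; they are the prompt for the conversation in which district leaders explain their thinking and commit to testing it.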

This story illustrates how people on the ground can strengthen their capacity to produce results by reflecting deliberately on simple, frequent data reporting, which both stimulates and captures outlier innovations. That reflection sharpens their thinking and, therefore, their ability to keep producing results, even as their environments change.

Our big takeaway? Evaluation of social innovation should hold people accountable not just for results, but for surfacing and testing the thinking that produced those results. It is that capacity to think through how to achieve outcomes in complex and dynamic situations that will ensure greater payoff in the future.

This fast-cycle learning is what we aim to support with Emergent Learning. It is not easy. But when everyone around you is doing this kind of quick, fit-for-purpose reflection on results, innovation starts to become “just how we do our work here.”


4QP and Tanya Beer of the Center for Evaluation Innovation will be co-facilitating a discussion of this topic at this fall’s American Evaluation Association conference. Please join us.


1 Response to Accountability for What? Results Produced or Producing Results?

  1. Dan Wilson says:

    Great post. We at the Ontario Trillium Foundation and other folks using a Developmental Evaluation approach have experienced a weird tension between “accountability” and “learning.” In fact, I think DE and emergent learning offer a different model of accountability: an accountability for improving and maximizing results, not just for measuring and reporting them. Engineers Without Borders has also done a lot of thinking about this.
