Evidence in Action: Using Evidence as a Tool, Not an Axe – Creating a Culture for Learning from Evidence

Every day we see strategies that are working and delivering results in a rapidly changing world. This Evidence in Action blog series elevates the voices of social innovation organizations delivering effective interventions in communities across the country and of evidence-based policy and practice leaders, highlighting both the results-driven solutions being advanced to help solve our most pressing social problems and the evidence-based federal programs that are critical to developing and scaling effective human and social services. Today, we hear from Beth Boulay of Abt Associates, a global leader in research, evaluation, and program implementation, about the value of evidence not only as a tool for determining what works but also for uncovering why something didn't work.

How do we realize the promise of evidence-based policymaking if we only report evidence when it reveals what works, and file it away when it identifies what doesn’t? We don’t. How do we ensure that null and negative findings are reported along with positive ones? We need to nurture a culture in which such evidence is valued for learning, and not used as an axe. And in at least one case, we have a model on which we can build.

Until recently, the focus has been on simply increasing the amount of available evidence, because evidence-based policymaking was hamstrung by a dearth of evidence in many policy areas: there simply was not enough for decision makers to use. But that is changing. For example, federal agencies are investing substantial taxpayer dollars in innovation programs aimed at leveraging the creativity and energy of those working on the ground to refine, implement, and evaluate approaches to tackling persistent problems in areas like education (the Investing in Innovation Fund and now the Education Innovation and Research program), workforce development (the Workforce Innovation Fund), and community solutions (the Social Innovation Fund), to name a few. These investments are broadening the evidence base available to decision makers.

But we should expect the development of effective social programs to proceed in fits and starts. Increasingly, however, receipt of funding is tied to an organization's ability to point to findings from rigorous impact studies that demonstrate effectiveness. This contributes to a culture in which conducting a rigorous evaluation of your work carries very high stakes for your organization. That is, as we address the overall lack of evidence, we risk creating a new barrier to evidence-based policymaking: an evidence base skewed toward including only those programs that work. What didn't work, and importantly why it didn't work, is spoken of only in whispers.

Despite the current high-stakes culture, at least one group of grantees willingly took the leap and reported findings no matter the results. Our team at Abt Associates, overseen by the Institute of Education Sciences, asked all of the grantees funded by the Investing in Innovation (i3) Fund and their evaluators to establish, in advance, the research questions their evaluations would address [see this presentation for more]. Specifically, we asked them to specify four key aspects of their research questions: the intervention under study, the group to which those receiving the intervention would be compared, the educational level of the participants, and the outcome domains they would measure to assess the intervention's effectiveness. The earliest cohorts of i3 grants are now complete, the results of their independent evaluations are being released, and one thing is clear: the grantees delivered on their promise to report both what worked and what didn't. They reported findings to our team, no matter the direction or size, for 97 percent of the research questions they posed.

Importantly, the null and negative findings give grantees and the broader field food for thought. For example, faced with overall findings that its approach to helping schools use data from interim assessments and standards-based planning was ineffective, the Achievement Network (ANet) dug deep into the evaluation data and identified 'readiness conditions' that distinguished schools that benefited from ANet from those that did not (see here for more). The lesson that there may be prerequisites to achieving impacts is relevant to many organizations working to help teachers and schools improve outcomes for kids.

In the end, asking grantees to establish research questions in advance and collecting the findings from those analyses at the end ensured that all of the evidence is available to decision makers. But it is only one step toward creating a learning culture in which null or negative findings are a tool and not an axe. Indeed, the i3 grantees' decision to be transparent may have future ramifications. Will funding be available for further development and innovation for those that missed the mark early on? Will interest in turning null findings into meaningful impacts wane? Our hope is that funders, researchers, and policymakers instead create a culture of learning in which we keep our collective eye on the prize: figuring out what works and scaling it to improve outcomes. With that as our guidepost, evidence about what didn't work is as meaningful and valuable as evidence about what did.

This post is part of America Forward’s Evidence in Action blog series. Follow along on Twitter with #EvidenceinAction and catch up on the series here.
