How A Developmental Evaluation Will Improve Tanzania’s Boresha Afya Program

Related Project:
Coordinating Implementation Research to Communicate Learning and Evidence (CIRCLE)
Author(s):
Ashwin Budden
Organization:
USAID Health Research Program, CIRCLE Project

This blog describes the basic principles of a developmental evaluation and examines how it differs from traditional evaluation approaches. A subsequent blog will spotlight specific themes as well as evidence and learning from the CIRCLE Project in Tanzania. We invite you to follow us on this site.

USAID’s Health Research Program is conducting a Developmental Evaluation of Boresha Afya, a 5-year, multi-partner initiative that supports the Tanzanian government in improving the quality, efficiency, and utilization of integrated health care services. Boresha Afya focuses on 19 regions in three zones: Lake/Western, North/Central, and Southern. The project aims to deliver an integrated package of essential services in communities and health facilities with a high disease burden, while streamlining service delivery.

Given the project’s complexity, a Developmental Evaluation (DE) was deemed best suited to monitor the project and adapt it as needed. Through the DE process, USAID can generate evidence in real time, support adaptive management practices, and catalyze rapid learning and decision-making to improve the quality, efficiency, utilization, and scalability of integrated health services in Tanzania.

A Developmental Evaluation workshop in progress
Source: CIRCLE Project

So what is a Developmental Evaluation?

Developmental Evaluation is an approach to evaluation that supports continuous learning and adaptation of interventions through real-time evidence generation and engagement with intended evaluation users.

DE is an example of a newer implementation research methodology being used in Tanzania; it can also be applied in other contexts, either as part of a quality improvement effort or as a stand-alone activity.

What are the hallmarks of a Developmental Evaluation?

Managing change. The ‘development’ in DE is fundamentally about the capacity to adapt existing interventions to changing conditions. Global health programs, like the Boresha Afya project in Tanzania, operate in cultural, institutional, and environmental contexts that continually change, sometimes in unpredictable ways. Interventions struggle or fail when they fixate on predetermined objectives and results frameworks, at the risk of overlooking what is emerging and what may really matter in dynamic environments.

“We need to be honest about why we are doing evaluation…just for auditing, just for donor requirements, or for something else?”

Boresha Afya implementing partner

Supporting the ‘use’ of evaluation. A key challenge facing global health programs is that evaluation results often go unused for planning and program improvement. This was all too apparent when speaking with Boresha Afya stakeholders about their prior experiences with evaluation. Evaluation was viewed as “an administrative box to be ticked” for future funding. What’s more, end-line evaluations typically “end up on the shelf, unused” because they are often mis-timed with planning cycles or too long and cumbersome to digest. DE turns the standard approach to evaluation on its head by focusing on evaluation users and evaluation use for timely learning and innovation.

So how is DE different?

DE involves continuous cycles of data collection, analysis, and feedback, rather than discrete pre- and post-evaluation activities. By gathering evidence in real time as the project unfolds, evaluators have frequent opportunities to provide feedback on project implementation and outcomes when it is most needed.

DE is embedded in the program, physically co-locating the evaluation team with implementers. In this way, evaluators become intimately familiar with the context, the people involved, and the nuances of the implementation process, while remaining independent. Evaluators and implementers develop a collaborative partnership in which the evaluator provides timely feedback and decision support.

DE is flexible in design and implementation. DE favors an agile, complex-systems-aware approach to capture emerging results and the non-linear processes, relationships, and contextual factors that can influence and help explain them. The credibility of DE rests, in part, on rigorously applying methods that are appropriate to the situation and questions at hand and that address evolving priorities.

DE is user-focused. Traditional evaluations typically focus on delivering a mid- or end-line evaluation report to a wide network of stakeholders.


DE shifts the focus to the needs and priorities of key decision-makers. By fostering collaborative engagement, evaluators and intended evaluation users can rapidly solve problems and make decisions about the intervention’s development, translating evidence into meaningful action. Ultimately, the success of DE rests on whether results are used for positive change.

As an alternative to the final evaluation report, DE can use a range of products to rapidly communicate evidence, such as data briefs, infographics, digital dashboards, field reports, and even short videos.

DE emphasizes learning and “smart failure.” Rather than rendering external judgment of a program’s merit, DE focuses on learning and co-designing solutions during the project cycle, and it views failures as opportunities for innovation: it’s about “smart failure.” To support learning, evaluators systematically document decision-making processes and actions over time. At a deeper level, this iterative practice contributes to organizational change by fostering a culture of learning and adaptive management.

This developmental evaluation was conducted by the CIRCLE* Project through the generous support of the American people. Stay tuned for more on developmental evaluation. You can also access additional information in the references below and at BetterEvaluation.org.

* Coordinating Implementation Research to Communicate Learning and Evidence

References

Eoyang G and Holladay RJ. 2013. Adaptive Action: Leveraging Uncertainty in Your Organization. Stanford University Press.

Gamble JAA. 2008. A Developmental Evaluation Primer. The J.W. McConnell Family Foundation.

Laycock A, Bailie J, Matthews V, Cunningham F, Harvey G, Percival N, and Bailie B. 2017. Using developmental evaluation to strengthen dissemination & use of quality improvement data from Aboriginal & Torres Strait Islander health centres. https://www.aes.asn.au/images/stories/files/conferences/2017/91AlisonLaycock.pdf

Parkhurst M, Preskill H, Lynn J, and Moore M. 2016. The Case for Developmental Evaluation. FSG website: https://www.fsg.org/blog/case-developmental-evaluation

Patton MQ. 2012. Essentials of Utilization-Focused Evaluation: A Primer. Thousand Oaks, CA: Sage Publications.