Health Research Program

When should you use IR to address implementation challenges?

Picture yourself as the Director of Reproductive Health (RH) in the Ministry of Health. A recent survey showed that women often wait many months after delivery to resume family planning and that birth spacing (the interpregnancy interval) is shorter than what is considered optimal. The RH Technical Working Group wants to strengthen post-partum family planning services to address this problem. What is the best way to do this? Should you consider IR? Would quality improvement, program evaluation, or a collaborating, learning and adapting (CLA) approach be a better choice? What do these terms mean, and how do they differ from each other? Various approaches can be used to strengthen programs that are not performing well. IR is one of these approaches, but it is not always the best way forward. This TIP will help the reader understand how IR compares with other approaches to addressing bottlenecks and when it is appropriate to use IR.

How does IR compare with other approaches to addressing program challenges?

Definitions: "Implementation [is] the act of carrying an intention into effect…which can be policies, programmes, or individual practices (collectively called interventions)."4 "Implementation research is the scientific inquiry into questions concerning implementation."4

Comparing IR with other methods to strengthen programs should begin with accepted definitions of implementation and implementation research (IR) (see the definitions of the two terms above). There is relative consensus on the definition of implementation; however, definitions of IR vary, and in these definitions IR tends to overlap with, or include, related methods. For example, in a program intended to improve maternal and neonatal outcomes, measuring the quality of antenatal care (ANC) can be applied both in quality improvement and in IR. Likewise, data from an endline survey in a program evaluation of maternal care could be used to identify problems with the timeliness of ANC, which could then inform an IR study to strengthen the effectiveness of messaging to community members about ANC. Table 3.1 describes several methods that are frequently used to assess and address program challenges. While the purpose of those strategies is program improvement, the purpose of IR is to generate generalizable or contextually specific knowledge about a specific research question that should lead to program or policy development or change. Figure 3.1 shows how these two purposes are complementary by illustrating the relationship between IR and routine monitoring and program evaluation (M&E), and the value IR adds to the M&E that has become a mainstream component of most health program efforts.

[Figure 3.1: Relationship between IR and routine monitoring and program evaluation (M&E)]

Using the ANC example above, program evaluation data might reveal that the program is not reaching the targeted beneficiaries and/or is not being implemented as intended (process evaluation), and that the expected results are not being achieved (outcome evaluation). By engaging stakeholders in an inclusive process of planning and review of program evaluation data, an IR question can be formulated that seeks to generate knowledge about why the program is not meeting expectations. For instance, IR might seek to understand whether the implementation strategy being used is feasible in this setting or acceptable to community members. IR might also test two or more different implementation strategies for improving outcomes. When embedded within existing routine monitoring and program evaluation, IR supports a deeper understanding of the context.

Table 3.1: Commonly used methods to assess and address implementation challenges

[Table 3.1]

 

When should you use IR?

Program managers normally review data to identify problems in health programs. These data may come from routine monitoring, recent studies (e.g., quality of care studies or program evaluations), surveillance systems, or implementers' knowledge of the program. The best way forward will depend on the problem, its context, and key considerations such as available resources and time. IR may be useful when…

  • Necessary managerial changes have been implemented, but the problem persists
  • The root of the problem is not fully understood, and there is no clear solution
  • A potential solution has been identified, but its effectiveness or appropriateness in the local context is not known
  • More than one possible solution has been identified, but it is not clear which is most appropriate in the local context

In addition, Figure 3.2 presents a step-by-step approach to help you decide whether IR might be appropriate for your problem and context. It is important to acknowledge that some IR questions can be answered, in part or in full, through routine data collection mechanisms that may already be in place (e.g., service delivery records, supervision visit reports, client exit interviews). Where Ministry of Health buy-in and collaboration are strong, utilizing and enhancing these data collection mechanisms may eliminate the need for a larger IR study.

[Figure 3.2: Step-by-step approach to deciding whether IR is appropriate for your problem and context]

Key resources

  • Routine Monitoring: USAID Monitoring e-Toolkit: https://usaidlearninglab.org/monitoring-toolkit
  • Collaborating, Learning and Adapting: USAID Collaborating, Learning and Adapting (CLA): https://usaidlearninglab.org/faq/collaborating%2C-learning%2C-and-adapting-cla
  • Quality Improvement: Rakhmanova, N., & Bouchet, B. (2017). Quality improvement handbook: a guide for enhancing the performance of health care systems. FHI 360; University Research Co., LLC. Quality improvement overview. https://www.urc-chs.com/sites/default/files/urc-overview-quality-improvement.pdf
  • Program Evaluation: Centers for Disease Control and Prevention (2011). Introduction to program evaluation for public health programs: a self-study guide; USAID Evaluation e-Toolkit: https://usaidlearninglab.org/evaluation-toolkit
  • Operations Research: Zachariah, R., Harries, A. D., et al. (2009). Operational research in low-income countries: what, why, and how? The Lancet Infectious Diseases, 9(11), 711-717.
  • Implementation Research: Peters, D. H., Tran, N. T., & Adam, T. (2013). Implementation research in health: a practical guide. World Health Organization.
  • Health Services Research: Lohr, K. N., & Steinwachs, D. M. (2002). Health services research: an evolving definition of the field. Health Services Research, 37(1), 15.
  • Health Program Evaluation: MEASURE Evaluation M&E Fundamentals (a free 2-hour online training). Available from: http://www.globalhealthlearning.org/course/m-e-fundamentals
  • Health Policy and Systems Research: Gilson, L. (Ed.) (2012). Health policy and systems research: a methodology reader. AHPSR, WHO, Geneva.

References
  1. Neta, G., Brownson, R. C., & Chambers, D. A. (2018). Opportunities for epidemiologists in implementation science: a primer. American Journal of Epidemiology, 187(5), 899-910.
  2. Rabin, B. A., & Brownson, R. C. (2018). Terminology for dissemination and implementation research. In Brownson, R. C., Colditz, G. A., & Proctor, E. K. (Eds.), Dissemination and implementation research in health: translating science to practice. Oxford University Press.
  3. Peters, D. H., Adam, T., et al. (2013). Implementation research: what it is and how to do it. BMJ, 347, f6753.
  4. Theobald, S., Brandes, N., et al. (2018). Implementation research: new imperatives and opportunities in global health. The Lancet, 392(10160), 2214-2228.
  5. Hirschhorn, L. R., Ojikutu, B., & Rodriguez, W. (2007). Research for change: using implementation research to strengthen HIV care and treatment scale-up in resource-limited settings. The Journal of Infectious Diseases, 196(S3), S516-S522.
  6. Bhatia, M. (2018, July 16). What is monitoring and evaluation? A guide to the basics. https://humansofdata.atlan.com/2018/07/what-is-monitoring-and-evaluation/
  7. CDC. Types of evaluation. https://www.cdc.gov/std/Program/pupestd/Types%20of%20Evaluation.pdf