Incorporating Program Model Adaptation into Implementation Research
Researchers studying education, youth development, or family support interventions sometimes encounter situations in which program staff are adjusting and adapting program components. Sometimes these adjustments are made to fit the needs of clients or to stay within budget; adaptations may also be made over the course of multiyear programs and studies, for reasons such as staff turnover and budget changes. Such adaptations are likely to occur whether or not sites in a randomized controlled trial are receiving specific implementation guidance.
Instead of viewing adaptation only as an impediment to treatment fidelity, a nuisance to be managed, we’ve been thinking about how to anticipate these adaptations in our implementation research.
Consider the program plan. If the program is implemented as planned, an impact is expected. If the program plan is adapted before implementation begins, we can call it planned adaptation. If the plan is adapted while implementation is ongoing, the change is usually ad hoc or not sanctioned by the program developer; in that case, we can consider it unplanned adaptation. Either type of adaptation may change the dosage or quality of the intervention, which in turn could change the expected impact.
By carefully considering potential adaptations during the study planning phase, implementation researchers will be better positioned to study the adaptations themselves and their relationships to program impacts. In thinking about this phenomenon, we have drawn on work such as Chambers and Norton (2016), Hulleman and Cordray (2009), and Durlak and DuPre (2008).
With program adaptations in mind, here are some questions that implementation researchers might ask when planning a study:
- Intervention: Is the intervention a discrete service or a package of services with multiple components? What can be changed while still providing a fair test of the intervention? How much technical assistance or monitoring is needed to ensure that nonnegotiable elements of the intervention are unchanged?
- Implications for dosage: Is it possible for service providers to adapt the timing or intensity of the intervention in a way that would change the amount of the intervention that participants receive? If so, what minimum amount is needed for a fair test?
- Implications for sample: Is it possible for service providers to adapt eligibility screening, thus changing who receives the intervention?
- Implications for contrast: What kinds of adaptations would make the intervention align more closely with the control or comparison conditions in the study? To what extent can the study’s technical assistance team discourage such adaptations while still monitoring them?
- Implications for impacts: How might adaptation and fidelity of implementation interact to dilute or enhance estimated impacts? The table below provides a framework for thinking about these issues.
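To make the dilution question concrete, here is one standard, simplified illustration; the notation is ours and is not part of the framework above. If a fraction $c$ of the treatment group receives the intervention as intended, and participants who do not receive it as intended experience no effect, then the intent-to-treat (ITT) impact estimate is approximately the product of that fraction and the impact under full-fidelity delivery:

$$
\Delta_{\text{ITT}} \approx c \times \Delta_{\text{full fidelity}}, \qquad 0 \le c \le 1.
$$

Under these assumptions, if adaptations mean that only half of participants receive the intended dosage, an intervention with a true full-fidelity impact of 0.20 standard deviations would yield an estimated average impact of roughly 0.10 standard deviations.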
Suggested citation for this post:
Balu, Rekha. 2017. “Incorporating Program Model Adaptation into Implementation Research.” Implementation Research Incubator (blog), March. http://www.mdrc.org/publication/incorporating-program-model-adaptation-implementation-research-0.