The Many Benefits of Mixed-Methods Evaluations of Social Programs
A Q&A with MDRC President Virginia Knox
On June 5, 2019, MDRC's then-Vice President Virginia Knox participated in a forum, “Bringing Rigor and Intentionality to Mixed Methods Evaluations of Social Programs,” sponsored by the Association for Public Policy Analysis and Management (APPAM) and Westat. Prior to the event, APPAM Communications Manager Ramon Robinson interviewed Knox about the role of mixed-methods research in evaluating social programs and about her vision for the methodology.
Ramon Robinson: Why do you think quantitative analysis is often prized over qualitative? Why is a mix of the two better?
Virginia Knox: This is an intriguing question since, of course, we all combine quantitative and qualitative analysis every day to make decisions. As an individual, when deciding between walking to work and taking the subway, I weigh some factors that are easy to quantify (cost, time, calories burned, carbon emissions) and some that are qualitative or harder to measure (my enjoyment of nice weather). Policy analysis and program evaluation, though, help us make collective decisions by systematically weighing the benefits and costs of alternative ways to use public resources. I think the evaluation field relies heavily on quantitative analyses because ranking what one option accomplishes relative to another requires quantitative measures of their performance. The limitations of quantitative analyses depend on the goals of a particular evaluation, but we often need qualitative perspectives to fully understand the problem that an intervention or policy is trying to solve, to grasp the nature of the intervention as it actually operates, and later to accurately interpret the “how and why” that underlie any quantitative assessments of program performance.
Robinson: You’ve spent a big part of your career evaluating social programs. In your experience, how do evaluations of social programs benefit from combining quantitative and qualitative analysis?
Knox: There are too many benefits to list in this short space, but perhaps the most important is that open-ended qualitative inquiry adds a dose of humility to an evaluation. Rather than assume that evaluators can know from the outset all of the ways in which a new program or policy might affect people’s outcomes, a more inductive approach — such as conducting qualitative interviews or observations of the setting — explicitly acknowledges that we have a lot to learn from managers, program staff, or participants who are directly engaged in the activity we’re evaluating. They can often provide insights into why a new approach is succeeding or falling short, if we design our evaluations to include their perspectives. (For more on the benefits of combining inductive and deductive reasoning, see a recent post by Branda Nowell and Kate Albrecht in our Implementation Research Incubator.)
Robinson: What are some ways you’d like to see the envelope pushed further in mixed-methods evaluation?
Knox: At MDRC, we are trying to move beyond multi-method evaluations, which simply include both quantitative and qualitative components in a study. When it is feasible and appropriate, we want to develop mixed-methods evaluations that intentionally integrate findings from quantitative and qualitative methods, either to improve the study as it unfolds sequentially or to combine deep, nuanced inquiry with broad, generalizable conclusions. For example, a set of interviews with program staff or observations conducted early in a study can inform the final version of a quantitative survey instrument. That is just one example of mixed-methods study design that we will discuss on June 5.
A video of the APPAM-Westat forum will be available on the APPAM website in mid-June.