Nine Lessons About Doing Evaluation Research
Howard Bloom’s Remarks on Accepting the Peter H. Rossi Award
Remarks on Accepting the Peter H. Rossi Award by Howard S. Bloom, MDRC's Chief Social Scientist, at the Association for Public Policy Analysis and Management Conference, Boston, November 5, 2010.
It is a true privilege to receive the Peter H. Rossi Award for Contributions to the Theory or Practice of Program Evaluation. This award is especially meaningful to me because Professor Rossi had two important, although indirect, influences on the evaluation path that I took.
Professor Rossi’s first influence was through his former student and my good friend, Bill McAuliffe, with whom I shared an office during our first years teaching at Harvard in the early 1970s. Through endless conversations, Bill convinced me that research design was far more important for assessing causality than was statistical analysis (although both are clearly crucial). Professor Rossi’s second influence was through our joint participation on a Department of Labor Advisory Panel in the early 1980s. This panel caused the Department to radically change its approach to evaluating federal employment and training programs from longitudinal comparison-group studies to a randomized trial, the National JTPA Study — for which I subsequently became Co-Principal Investigator with Judy Gueron and Larry Orr, two outstanding colleagues.
That leads me to my next point — the fact that I have been blessed by wonderful colleagues throughout my career. As an Assistant Professor at Harvard, I worked especially closely with Sunny Ladd and Johnny Yinger, who supported my growing fascination with evaluation research by co-teaching courses on its methods and applying them to our joint research. As a Professor at NYU, I had many supportive colleagues, among whom Jim Knickman, Jan Blustein, and Dennis Smith in particular shared my interest in evaluation research. I also had many fine graduate students who now work in the field, foremost among whom are Hans Bos and Laura Peck.
But it was not until Judy Gueron convinced me to come to MDRC in 1999 that I began to get real traction in my work. Being at MDRC is like having an indefinite sabbatical on steroids. I get to do what I care about most (develop, use, and teach rigorous evaluation methods) with unending support from the best colleagues imaginable. For this I give special thanks to Judy Gueron, Gordon Berlin, Bob Granger, Jim Riccio, Jim Kemple, Fred Doolittle, Pei Zhu, Marie-Andrée Somers, Mike Weiss, Corrine Herlihy, Alison Black, and Becky Unterman. While at MDRC, I have also been privileged to work closely with terrific academic colleagues like Steve Raudenbush, Mark Lipsey, Sean Reardon, and Carolyn Hill. Then, of course, there is my wife Sue, the ultimate colleague. With the help of people like these, I would have to be brain-dead not to be productive.
I was told that my remarks today should be substantive and last no more than eight minutes. So, very quickly and in no particular order, here are nine important lessons that I have learned about doing evaluation research and would like to share with you:
- The three keys to success are "design, design, design" (just like "location, location, location" in real estate). No form of statistical analysis can fully rescue a weak research design.
- You cannot get the "right answer" unless you pose the right question. Disagreements among capable researchers often reflect differences in the research questions that motivate them (explicitly or implicitly). Thus, it is well worth spending the time needed to clearly articulate your research questions, being as specific as possible about the intervention, population, and outcomes of interest.
- A "fair test" of an intervention requires that there be a meaningful treatment contrast (the difference in services received by treatment group and control or comparison group members). This condition has two subparts: (1) the intervention must be implemented properly, and (2) services to control or comparison group members cannot be too substantial.
- The most credible evidence is that which is based on assumptions that are clear and convincing. Thus, researchers should put all of their cards on the table when explaining what they did, what they found, and what they think it means.
- The old saying, "keep it simple, stupid," is crucial for meeting the preceding condition. This is especially important for evaluation research — because no matter how simple a research design is, the resulting study will be more complicated because of its interaction with the real world.
- You probably don’t fully understand something if you cannot explain it. The best way to avoid this trap is to teach everyone who is willing to listen about what you are trying to do and how you are trying to do it.
- Thoughtful and constructive feedback is a researcher’s best friend. Hence, you should seek review early and often.
- Evaluation research is a team sport. It is impossible to overstate the importance of complementary policy, programmatic, data, research, and dissemination skills on an evaluation team.
- The best way to change how evaluation researchers do their work is to change how they are taught to think about it. Thus, methodological training is essential both during graduate school and throughout one’s career.
In closing, I would like to thank this year’s selection committee for adding me to the pantheon of prior Rossi Award winners.