Program evaluation—balancing rigor with reality
  Susan S Gallagher
  Center for Injury and Violence Prevention, Education Development Center, 55 Chapel Street, Newton, MA 02458-1060, USA; e-mail: sgallagher@edc.org


    Funders require it. Practitioners fear it. Epidemiologists narrowly focus it on outcomes. Perspectives on evaluation vary, but there is general agreement that we need more of it in the injury field. The questions are how much, and what kinds of evaluation can be done in real world situations with limited resources. Randomized controlled trials may be the “gold standard” and quasi-experimental designs a close second choice, but most of us do not have the luxury of engaging in such studies. Rigorous, multilevel evaluations that encompass formative (design), process (implementation), and outcome (knowledge, behaviors, injury rates, and institutionalization) measures are still relatively rare in the injury prevention literature.

    Four articles in this issue (see 125, 130, 151, and 154) illustrate the range of possibilities as well as many of the difficulties inherent in evaluating community based programs. Three of the four employ multiple strategies in a pre-post design, with varying outcome measures. Although none of these studies is perfect, there is, nevertheless, much that can be learned from them.

    The study by Bhide et al (154) contains some elements of a process evaluation, examining how effectively a one-shot, prepackaged educational program was distributed. It reports the number of people exposed to the program and which components were implemented as intended, for example the leader's guide. Given the minimal penetration (16%) and only partial implementation (46%), it is unlikely that a better designed, outcome based evaluation would have shown more encouraging, statistically significant changes. In …

