Funders require it. Practitioners fear it. Epidemiologists focus it narrowly on outcomes. Perspectives on evaluation vary, but there is general agreement that we need more of it in the injury field. The questions are: how much, and what kinds of evaluation can be done in real world situations with limited resources? Randomized controlled trials may be the "gold standard" and quasi-experimental designs a close second choice, but most of us do not have the luxury of engaging in such studies. Rigorous, multilevel evaluations that encompass formative (design), process (implementation), and outcome (knowledge, behaviors, injury rates, and institutionalization) measures are still relatively rare in the injury prevention literature.
Four articles in this issue (see pages 125, 130, 151, and 154) illustrate the range of possibilities, as well as many of the difficulties inherent in evaluating community based programs. Three of the four employ multiple strategies in a pre-post design, with varying outcome measures. Although none of these studies is perfect, there is nevertheless much that can be learned from them.
The study by Bhide et al (page 154) contains some elements of a process evaluation, examining how effectively a one shot, prepackaged educational program was distributed. It reports the number of people exposed to the program and which components were implemented as intended, for example the leader's guide. Given the minimal penetration (16%) and only partial implementation (46%), it is unlikely that a better designed outcome based evaluation would have shown more encouraging, statistically significant changes. In …