Evaluation and other issues
- Montreal Children's Hospital and McGill University, Montreal, Canada
- Correspondence to: Professor Pless;
Evaluation, bones of contention, Injury Prevention Online, how to annoy an editor, and board changes
EVALUATION AGAIN: THINK BIG
For many, “evaluation” is a feared word. It may be as intimidating for researchers as it is for those responsible for programs. The threat it conveys reflects the difficulty of conducting scientifically respectable evaluation studies and, for program people, the ever-present possibility that the results will fail to justify their efforts. In spite of these barriers, we cannot responsibly ignore the pressure to evaluate. We cannot justify applying a different standard to the preventive interventions we advocate than the one that applies to pharmaceutical manufacturers, for example. What is sauce for the goose is sauce for the gander: we are all bound by the need to make our programs evidence based. Thus, every preventive initiative should be evaluated as well as resources permit, and any initiative being promoted with no attempt at evaluation must be viewed with caution and skepticism.
By far the most challenging task for evaluators is assessing the worth of community programmes. In this issue we present two examples of how difficult this can be (p 18 and p 23). As well, invited commentaries (p 6 and p 8) offer words of advice and some of consolation. Their suggestions are important for readers who intend to conduct this sort of evaluation in the future.
One of the most daunting issues is that most evaluations compare only one or, at best, two communities, usually before and after an intervention. But no matter how many subjects (or injured people) there may be in each community (that is, no matter how large its population), the main comparison uses an “n” of 2. This is so because, from a statistical viewpoint, those in one community share many characteristics that may affect how they respond to …