
Evaluating injury prevention interventions

M Hodge

Department of Epidemiology and Biostatistics, McGill University, Montreal, Canada

Correspondence to: Dr Matthew Hodge, UNICEF Health Section, 3 UN Plaza, New York, NY 10017, USA; mhodge@unicef.org


Evaluating what works is essential in efforts to prevent injuries

Two papers, one from Sweden1 and one from Australia,2 in this issue describe evaluations of injury prevention interventions. In both, multimodal community based interventions were implemented in defined geographic areas. Both face the challenges of evaluating a complex intervention, delivered in a “real world” setting and without a randomized trial structure.

Evaluating injury prevention efforts is vital to reduce the rising toll of mortality, morbidity, and economic losses arising from injuries, not only to identify effective prevention measures but also to shift resources from what does not work to what does. For these reasons, it is essential that evaluation be of the highest methodological standard possible.

BELIEF IN INJURY PREVENTION IS WHY WE DO EVALUATION BUT NOT HOW

In most scientific inquiry, the investigator approaches a problem with a hunch or, more formally, a hypothesis. In the injury prevention field, most of us are believers, of varying degrees of fervency, that injuries can be prevented and that interventions to do so can be implemented. These beliefs motivate evaluation, but the evaluation itself can rarely, if ever, provide the positivist proof that the intervention reduced the rates or severity of injuries.

Ensuring that the evaluation's conclusions are able to withstand alternative explanations is critical. For this reason, evaluations ideally begin from a premise of no effect and seek to demonstrate that this is not so. This distinction is important, in part because the falsification of the hypothesis of “no effect” is at the heart of commonly used frequentist statistical tests such as those reported in both papers.
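To make this concrete, the sketch below is illustrative only, with invented counts rather than data from either paper. It shows the kind of frequentist test that starts from the "no effect" premise: a two-sided test of the null hypothesis that the injury rate ratio between an intervention and a comparison community equals one.

```python
import math

def rate_ratio_test(events_a, person_years_a, events_b, person_years_b):
    """Two-sided z-test of the null hypothesis of 'no effect' (rate ratio = 1),
    using the usual normal approximation on the log rate ratio."""
    rate_a = events_a / person_years_a
    rate_b = events_b / person_years_b
    log_rr = math.log(rate_a / rate_b)
    se = math.sqrt(1 / events_a + 1 / events_b)   # SE of the log rate ratio
    z = log_rr / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return rate_a / rate_b, p

# Invented counts: 180 injuries in 20,000 person-years (intervention community)
# versus 240 injuries in 21,000 person-years (comparison community).
rr, p = rate_ratio_test(180, 20_000, 240, 21_000)
print(f"rate ratio = {rr:.2f}, p = {p:.3f}")
```

A small p value licenses rejection of "no effect"; it does not, by itself, establish that the intervention caused the difference.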

Two additional elements are also important: clarification of the contrast and the need for efforts to rule out alternative explanations of the observed effects.

CLARIFYING THE CONTRAST

In a randomized study, the contrast is clear. Some subjects (individuals or communities) receive one intervention and the others do not. In pharmaceutical trials, placebos further clarify the contrast by theoretically nullifying the difference between taking a pill and not taking a pill, since all participants take something and typically are unaware of whether it is active agent or placebo (blinding).

The interventions reported in this issue were not randomly allocated to communities, and even if they had been, providing a “placebo” counterpart for the communities not receiving the intervention would be difficult. Nevertheless, analytic tools can offset some of the differences between the communities being compared.

On the positive side, Ozanne-Smith and colleagues report using age standardization to remove differences in injury rates attributable to the different age structures of the communities they are comparing. More common, however, is a “diagnosis” (that is, an acknowledgement that an unaccounted-for difference may exist) with no attempt to offset the diagnosed problem. For example, Lindqvist and colleagues note that the proportion of people with injuries receiving care at the university hospital differed fourfold between the communities under comparison, but they provide minimal analysis of how this affects the injury rates being compared.
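For readers unfamiliar with the technique, the following is a minimal sketch of direct age standardization using invented strata and rates (not the data used by Ozanne-Smith and colleagues): each community's age-specific rates are weighted by a common standard population, so that differing age structures no longer drive the comparison.

```python
# Standard population used as the common set of weights (hypothetical)
standard_pop = {"0-14": 20_000, "15-39": 35_000, "40-64": 30_000, "65+": 15_000}

def directly_standardized_rate(age_specific_rates, standard=standard_pop):
    """Weighted average of age-specific injury rates, with weights taken
    from the standard population rather than the community's own structure."""
    total = sum(standard.values())
    return sum(age_specific_rates[age] * n for age, n in standard.items()) / total

# Age-specific injury rates per person-year (hypothetical)
community_a = {"0-14": 0.030, "15-39": 0.020, "40-64": 0.015, "65+": 0.040}
community_b = {"0-14": 0.032, "15-39": 0.021, "40-64": 0.016, "65+": 0.042}

print(directly_standardized_rate(community_a))  # now comparable across communities
print(directly_standardized_rate(community_b))
```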

Many evaluations assess interventions that have been in place, or were implemented, over several years. In such situations, a rigorous evaluation would endeavour to account for secular changes in injury rates over the duration of the intervention. For example, if injury rates fell by 20% in the intervention community, a conclusion that the intervention was successful would not be warranted if the national rate also fell by 20% over the same period. Both evaluations could have done better on this issue.
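One simple way to express this adjustment, sketched below with the invented 20% figures from the example above, is a ratio of rate ratios: the change in the intervention community divided by the change in a national (or comparison) series over the same period.

```python
def relative_change(before, after):
    return (after - before) / before

# Hypothetical injury rates per 10,000 person-years
intervention_change = relative_change(before=100.0, after=80.0)   # -20%
national_change     = relative_change(before=150.0, after=120.0)  # also -20%

# Ratio of rate ratios: a value of 1.0 means the intervention community
# simply tracked the national secular trend.
ratio_of_ratios = (80.0 / 100.0) / (120.0 / 150.0)
print(f"intervention {intervention_change:.0%}, national {national_change:.0%}, "
      f"ratio of rate ratios {ratio_of_ratios:.2f}")
```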

An additional feature of injury prevention interventions is the potential for severity shifts—that is, little change in overall injury rates but a relative decrease in more severe injuries offset by a relative increase in less severe injuries. Standardized injury severity data are typically not available from medical or hospital records but could potentially be extrapolated using standardized post-event assessments of severity. In many developed countries where the bulk of injuries are non-fatal, shifting severity to less severe injuries may well be an important public health victory.
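A toy illustration, with invented severity-specific rates, of what such a shift would look like in the data:

```python
# Injury rates per 10,000 person-years by severity (hypothetical)
before = {"severe": 12.0, "moderate": 40.0, "minor": 148.0}
after  = {"severe":  8.0, "moderate": 38.0, "minor": 154.0}

print("overall before:", sum(before.values()))   # 200.0 -- overall rate unchanged
print("overall after: ", sum(after.values()))    # 200.0
severe_change = (after["severe"] - before["severe"]) / before["severe"]
print(f"severe injuries: {severe_change:.0%}")   # -33%, the real public health gain
```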

RULING OUT ALTERNATIVE EXPLANATIONS

One of the greatest challenges for non-randomized studies is ensuring that the results are free from bias. Selection of the “control” or “non-intervention” community is a potential source of bias in both of these studies. Details as to what sort of reproducible process was used to select the comparison communities would be a first step to demonstrating that the results are not merely quirks of comparison.

In addition, where the validity of medical or hospital records is questioned (whether for accuracy of injury diagnosis or for geographic location of the victim), scenario analysis can provide some sense of the robustness of the evaluation's conclusions to the vagaries of administrative data. Consider a study of two bordering communities sharing a single health care facility. Community A receives an injury prevention intervention and community B does not. This design has some attractive features, including a single source for medical care that covers both communities. Evaluation of this hypothetical intervention demonstrates that injuries for which medical care was sought declined by 10% in community A when compared with community B. In this case, one such scenario analysis could involve reclassifying a random 10% of medical records from both communities, as sketched below. Then, if the effect persists (that is, community A still has a lower rate than community B), one would be more confident that the difference is attributable to the intervention. Merely stating that sensitivity analysis or scenario analysis was done is a poor substitute for reporting the results of the most relevant of such analyses.
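A minimal sketch of such a scenario analysis, using invented record counts for the hypothetical communities A and B described above: the community label on a random 10% of records is flipped repeatedly, and we observe how often community A's apparent advantage survives the misclassification.

```python
import random

def effect_persists_fraction(n_a=900, n_b=1000, misclassified=0.10,
                             trials=1000, seed=1):
    """Fraction of simulated scenarios in which community A still has fewer
    injury records than community B after random reclassification."""
    random.seed(seed)
    persisted = 0
    for _ in range(trials):
        records = ["A"] * n_a + ["B"] * n_b
        # Flip the community label on a random 10% of all records.
        for i in random.sample(range(len(records)),
                               int(misclassified * len(records))):
            records[i] = "B" if records[i] == "A" else "A"
        if records.count("A") < records.count("B"):
            persisted += 1
    return persisted / trials

print(f"effect persisted in {effect_persists_fraction():.0%} of scenarios")
```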

A FINAL WORD

Some would argue that evaluations such as those reported in these two papers can never reach the standards of proof provided by randomized trials. Although correct, that view misses the point—most public health interventions are not evaluable in randomized trial conditions and the combination of political and technical infeasibility suggests that this will not change soon. Both groups are to be commended for grappling with important questions regarding what works and what does not work to prevent injuries. It is up to all of us in the field to ensure that we address those questions as rigorously and scientifically as possible and by so doing, provide a solid evidence foundation for advocating intensified efforts to prevent injuries.


REFERENCES
