There is a belief, widely held in the injury prevention community, that injury surveillance is an important—perhaps crucial—prerequisite for effective injury control. The argument is typically made that surveillance systems are needed to demonstrate the magnitude of an injury problem, to identify target populations (defined demographically, geographically, or on the basis of a shared risk factor) or high-priority injury mechanisms, and to monitor trends in incidence over time (presumably as incidence improves in response to the prevention programs implemented). In response to this belief, a great deal of effort is spent promoting, designing, and administering surveillance systems. Recognizing that the greatest burden of injury is borne by persons in the developing world, the WHO published guidelines in 2001 to “provide practical advice on how to develop information systems for the collection of systematic data on injuries … in settings where resources, including trained staff and electronic equipment, are limited.”1
The guidelines are thoughtful, practical, and clearly designed to be implementable by individuals operating in less-resourced environments. The developers, to their credit, included a scheme for evaluating any surveillance mechanism thus created. In this issue of the journal, Liu and colleagues (see page 105) report their experience using the WHO approach to design an injury surveillance system in an urban Chinese emergency department and then evaluating that system using the metrics suggested by the WHO.2 The results are interesting insofar as they illustrate where the guidelines could be improved, where surveillance systems fall short and—ultimately—why the focus on …
Competing interests: None.