
Surveillance: to what end?
  1. Brian D Johnston
  1. Dr Brian D Johnston, Harborview Medical Center, 325 Ninth Ave, Box 359774, Seattle, WA 98104, USA; ipeditor{at}bmjgroup.com


There is a belief, widely held in the injury prevention community, that injury surveillance is an important—perhaps crucial—prerequisite for effective injury control. The argument is typically made that surveillance systems are needed to demonstrate the magnitude of an injury problem, to identify target populations (defined demographically, geographically, or on the basis of a shared risk factor) or high-priority injury mechanisms, and to monitor trends in incidence over time (presumably as these improve in response to the prevention programs implemented). In service of this belief, a great deal of effort is spent promoting, designing, and administering surveillance systems. Recognizing that the greatest burden of injury is borne by persons in the developing world, the WHO published guidelines in 2001 to “provide practical advice on how to develop information systems for the collection of systematic data on injuries … in settings where resources, including trained staff and electronic equipment, are limited.”1

The guidelines are thoughtful, practical, and clearly designed to be implementable by individuals operating in less-resourced environments. The developers, to their credit, included a scheme for evaluating any surveillance mechanism thus created. In this issue of the journal, Liu and colleagues (see page 105) report their experience using the WHO approach to design an injury surveillance system in an urban Chinese emergency department and then evaluating that system using the metrics suggested by the WHO.2 The results are interesting in so far as they illustrate where the guidelines could be improved, where surveillance systems fall short and—ultimately—why the focus on surveillance as a prerequisite for injury prevention is misguided.

The problems start early for the Shantou injury surveillance project. It is not clear why the system was designed: who needed the information, and how would it be used? Without this understanding, we cannot know how an “injury” is defined in their system; nor can we assess the adequacy of their case definition and the gold standard they select (against which their surveillance system will be tested).3

Liu and colleagues note that their existing ED registries are believed to capture the primary clinical diagnoses of all who present for care and, as such, can be used as a gold standard to identify injuries in their study. It would be nice to see this claim validated through a more detailed chart review. One can imagine, for example, that some cases coded as an “injury” might not fulfill the authors’ case definition of “first time” visits to the ED for injury, instead representing follow-up or continued care for an existing injury.

The authors, however, are most interested in the use of their surveillance system to capture detailed injury information on the subset of ED admissions flagged as injury-related, and they evaluate it using WHO-defined metrics. The nomenclature is confusing, but—as the authors note—drawn directly from the WHO materials. They report an “injury rate,” which turns out to be the proportion of injury-related visits among all ED visits, and characterize the system’s “accuracy” (which seems to be what most epidemiologists would call sensitivity) and “accuracy rate” (a measure of data quality: the proportion of injury charts with no missing or incomplete data). An effort to revise the terminology applied in this section of the guidelines seems justified.
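In conventional epidemiological terms, each of these WHO metrics reduces to a simple proportion. A minimal sketch (in Python; the function names and all counts except the study's reported 53% sensitivity are illustrative, not drawn from the Liu paper or the WHO guidelines):

```python
# WHO surveillance-evaluation metrics restated as standard proportions.
# All counts below are hypothetical, for illustration only.

def sensitivity(cases_captured, cases_in_gold_standard):
    """WHO 'accuracy': proportion of gold-standard injury cases
    that the surveillance system also captured."""
    return cases_captured / cases_in_gold_standard

def completeness(charts_fully_completed, charts_total):
    """WHO 'accuracy rate': proportion of surveillance forms with
    no missing or incomplete fields (a data-quality measure)."""
    return charts_fully_completed / charts_total

def injury_rate(injury_visits, all_ed_visits):
    """WHO 'injury rate': proportion of all ED visits that are
    injury-related (a proportion, not a population rate)."""
    return injury_visits / all_ed_visits

# Hypothetical: 530 of 1000 gold-standard cases captured by the system
print(f"sensitivity = {sensitivity(530, 1000):.0%}")  # prints: sensitivity = 53%
```

Framing the metrics this way makes the terminological problem plain: the WHO's “injury rate” has no person-time denominator, and its “accuracy” is what the wider literature calls sensitivity.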

The sensitivity of the system was disappointingly low at 53%. As the authors note, this may be because their surveillance system relied on treating clinicians to complete the extended data forms. While the WHO guidelines do suggest building surveillance into existing workflow, with clinical diagnostic information supplied by healthcare workers, the Chinese experience illustrates the difficulty of doing so. This shortcoming again highlights the initial design flaw: who is going to use these data? Clinicians have little incentive to complete additional forms unless they can see a benefit in doing so. Uptake might have been better if data collection had been folded into a structured injury care worksheet that offered added value to the user (pictorial “trauma grams,” or tables to quickly calculate and document the Glasgow Coma Scale, for example). Alternatively, personnel hired and trained by the surveillance system may be required to achieve the desired data capture and data quality.

More disappointing, to me at least, was that no data were given in this report on the distribution or accuracy of system-reported injury mechanisms, intent or severity measures. These seem to be the new data made available through the surveillance system. It would be helpful to know whether the data made available were reliable, accurate, or useful. But, again, the evaluation framework in the WHO guidelines does not provide direction or structure in this respect.

What are the lessons learnt in this study? First, there is clearly room for improvement in the particular surveillance system described. The system was designed according to the 2001 WHO guidelines, evaluated against the same, and found wanting. The authors point to a number of potential improvements to their approach. Second, and more important in my view, the WHO guidelines understate the importance of tying surveillance to the end user. This is apparent in the introduction, where the practical application of surveillance data is relegated to a footnote reminding us that “the final link in the surveillance chain is the application of these data to prevention and control.”1 In the evaluation framework proposed by the WHO and employed by Liu and colleagues, no metrics are suggested to measure the use of surveillance data to inform, direct, or monitor prevention interventions.

As Barry Pless has eloquently argued in these pages, there is little evidence to suggest that surveillance per se results in injury prevention.4 It is naïve to assume that creating a surveillance system—even a really good one—will result in the sudden alignment of political will and resources to address identified problems. One can even argue that surveillance done poorly is detrimental to the cause of prevention, as it wastes time and resources, and may mis-specify or understate the true burden of the injury problem.

Too often, surveillance systems are structured in a manner that divorces the information collected from those best positioned to make use of it. Ideally the end users of the data should be subsidizing the surveillance system, integrally involved in its design and conduct, and—ultimately—getting what they need from it. How will these data be used? In what format would they be most usefully reported? What data capture the true burden of the conditions of interest, and where can these data be found? In many cases, the answer will not be creation of an elaborate surveillance system. To document a snapshot of the local injury burden, a well-designed community-based survey may be more accurate (especially in capturing injuries that fail to reach the hospital or are treated outside of the formal healthcare system) and more efficient. Community surveys are nicely described in a companion WHO document5 and have been used with remarkable success, for example, to document the burden of child injury mortality in Asia.6

No one will argue that we are better off without data to inform our decisions, but we need to start by knowing what decisions we will be making and, thus, what data we really need. I suspect that most successful examples of surveillance systems directing prevention programs have started with broad input from a variety of stakeholders committed to taking action on their findings. We have published examples of this success7 and would be pleased to see more manuscripts detailing this process. Ultimately, it is this outcomes-driven approach to surveillance that should be emphasized in the next iteration of the WHO guidelines and in the design of any programs those guidelines inform.

REFERENCES

Footnotes

  • Competing interests: None.
