Although most routine health statistics provide information on injuries, the level of detail is often insufficient for injury prevention purposes. To fill this gap, many industrialised countries have introduced specially designed national and local injury surveillance systems based in emergency departments. Examples have been reported from Australia, Canada, Greece, New Zealand, the Netherlands, Norway, Sweden, the United States, and the UK.1 The European Home and Leisure Accident Surveillance System (EHLASS) operates in the 15 current member states of the European Union. The EHLASS system collects data on all (or most) patients presenting with an injury or poisoning to a sample of hospitals. Because of the high costs of data collection, the usefulness and cost effectiveness of such systems are coming under increasing scrutiny.
Surveillance has been defined as the “continuous analysis, interpretation and feedback of systematically collected data”.2 Injury surveillance systems have a number of important virtues. They monitor injury incidence, identify risk factors, and assist in the planning and evaluation of injury prevention programmes.3 Emergency department based systems have helped fill a major gap in our understanding of non-fatal injuries, which comprise a large (but usually unknown) proportion of medically treated injuries. In comparison with routine data sources, these systems provide greater detail and more timely information. Moreover, their stability over time allows the analysis of secular trends and the identification both of rare events and of new and emerging hazards, such as a dangerous new toy.
Nevertheless, in spite of these advantages, several crucial questions regarding emergency department surveillance need to be addressed. The United States Centers for Disease Control have developed criteria for evaluating any surveillance system. These include usefulness, simplicity, flexibility, acceptability, sensitivity, positive predictive value, representativeness, and timeliness.4 Few systems, however, have been subjected to rigorous scrutiny. One exception is the Canadian Hospitals Injury Reporting and Prevention Program (CHIRPP). A study to determine the positive predictive value, sensitivity, and representativeness of the system has recently been reported.5 Across the CHIRPP centres, sensitivity varied from 30% to 91%. Systematic errors in data capture were observed, with poisonings, injuries resulting in hospital admission, and injuries presenting overnight more likely to be missed. The authors concluded that CHIRPP data were of relatively high quality, but should be used with caution in aetiological studies. In this issue of Injury Prevention, an evaluation of CHIRPP at the Children's Hospital of Eastern Ontario by the same authors found that 35% of children attended other hospitals in the region not covered by surveillance.6
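To make these two measures concrete, consider a purely hypothetical illustration (the figures below are invented, not drawn from the CHIRPP evaluation). Suppose an independent audit identifies 1000 eligible injury presentations to a centre during a study period, the surveillance system holds 650 records for the same period, and 600 of those records match audited cases. Then

\[
\text{sensitivity} = \frac{\text{audited cases captured}}{\text{all audited cases}} = \frac{600}{1000} = 60\%,
\qquad
\text{positive predictive value} = \frac{\text{records matching audited cases}}{\text{all records held}} = \frac{600}{650} \approx 92\%.
\]

A centre at the lower end of the reported sensitivity range would thus be missing a large share of eligible presentations even if nearly everything it did record was genuine.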
Whatever surveillance system is used, data derived from emergency departments are inherently flawed. They are seldom population based and may be biased by a number of factors, including age, sex, ethnic origin, socioeconomic position, health insurance status, time, and geographic location. Moreover, a substantial proportion of medically treated injuries may be managed elsewhere: the extent to which people use emergency departments after sustaining an injury varies widely between communities, depending on the other services available. In some areas but not others, primary care clinicians may provide extensive telephone advice and consultation, obviating the need for presentation to an emergency department. It is estimated, for example, that telephone consultations to poison control centres reduce medically treated non-hospitalised visits by 24%.7 The injuries captured by surveillance may therefore not be at all representative of injuries seen over a wider area.
Resources are a perennial problem. While the cost per injury of collecting data at emergency departments is lower than for a health survey,8 emergency department surveillance generates large numbers of injury records in a short period of time: in the case of CHIRPP, over 120 000 records are added annually.9 Emergency department systems are expensive in staff time and salaries, and data collection, coding, and computerisation are labour intensive, particularly for total patient surveillance. As discussed by Mackenzie and Pless in this issue,9 dedicated, trained staff can lead to a marked improvement in data capture and coding; CHIRPP coordinators, for example, make considerable efforts to track down missing cases. Investing heavily in data collection may leave the equally essential processes of analysis, dissemination, and use of data inadequately resourced. Because the injury patterns and causes revealed by surveillance tend to be stable over time, the inevitable question arises of diminishing returns: would other approaches be more cost effective?
Alternative strategies to total surveillance of all cases presenting with injuries should be considered. Retrospective systematic sampling of the CHIRPP data in Glasgow showed that a well planned and well conducted sampling strategy may be a valid alternative to total patient surveillance in emergency departments.10 Although sampling would produce few savings on staff training, the burden of coding and computerisation would be substantially reduced. Of course, surveillance based on data collected from just one hospital (whether sampled or not) is a relatively poor alternative to a population based system. But since sampling does not appear to be detrimental to the quality of data collected at any one hospital, adopting a sampling strategy in a representative sample of hospitals in an area may yield better quality epidemiological information than a single hospital collecting information on all injury events. Sampling has its own drawbacks: staff forgetfulness, potentially biased case selection according to severity, and the inability of a sample to provide a truly comprehensive profile of injuries. The cost savings offered by this method may, however, outweigh these objections.
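By way of illustration only, the sketch below shows how a systematic sample might be drawn from a file of emergency department presentations. The file name, field layout, and one-in-four sampling interval are assumptions for the example, not details of the Glasgow study.

```python
import csv

SAMPLING_INTERVAL = 4  # keep every fourth presentation: a 25% systematic sample


def systematic_sample(records, interval=SAMPLING_INTERVAL, start=0):
    """Return every `interval`-th record, beginning at position `start`.

    Records should already be ordered by time of presentation so that the
    sample is spread across days and shifts rather than clustered.
    """
    return [r for i, r in enumerate(records) if (i - start) % interval == 0]


# Hypothetical input: one row per injury presentation to the department.
with open("ed_injury_presentations.csv", newline="") as f:
    presentations = list(csv.DictReader(f))

sample = systematic_sample(presentations)
print(f"Coding workload reduced from {len(presentations)} to {len(sample)} records")
```

In practice a random starting point within the first interval would usually be chosen, so that the sampling step bears no systematic relation to shift changes or other periodic features of attendance.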
A sampling approach to surveillance is already used for product related injuries in both the United States National Electronic Injury Surveillance System (NEISS) and the European Union's EHLASS. NEISS uses a sample of 101 hospitals to represent the nation, and a recent report has recommended that it be expanded to include all injuries.3 Yet another option is to collect data in sample weeks from a sample of hospitals, as is done in the United States National Hospital Ambulatory Medical Care Survey, which includes injury data as part of a broader survey.11
An even more basic question should be asked when evaluating emergency department surveillance systems. Is it the best use of resources to rely on emergency department surveillance as the principal injury surveillance system (as appears to be the case in Canada), particularly when the types of injuries seen are very different from those that result in hospitalisation or death? Most injuries seen in emergency departments are minor and heal rapidly, with few long term sequelae; deaths and hospitalisations, on the other hand, impose a much larger and longer term cost on society. Insufficient effort has so far been made to improve the use of mortality and hospital data for injury prevention activities; little use is made, for example, of data from medical examiners or coroners. Even for emergency department data, it may be useful to collect data only on injuries over a certain severity threshold, for example an abbreviated injury score of greater than 2. This would result in a smaller and more manageable dataset, and would be less expensive to operate than total patient surveillance. Such a strategy may also help to eradicate biases introduced through social or geographical differences in hospital admission.
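As a minimal sketch of what such a threshold rule might look like in practice (the field name ais_score and the record structure are assumptions for illustration, not part of any existing system):

```python
def above_severity_threshold(records, min_ais=3):
    """Keep only presentations whose abbreviated injury score exceeds 2."""
    return [r for r in records if int(r.get("ais_score", 0) or 0) >= min_ais]


# Reusing the hypothetical presentations loaded in the sampling sketch above.
serious_cases = above_severity_threshold(presentations)
```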
A multimethod approach to monitoring injuries is the ideal. Research methodologists advocate triangulation of data from two or more sources to reduce bias, validate data, and expand the information base. Indeed, Mulder has proposed that emergency department surveillance and household or health surveys should be used in tandem to limit the biases in each.8 Similarly, the use of other sources of mortality data (such as free text) has improved surveillance for drownings.12 There is no single “best buy” for monitoring injury in a population: individual institutions and agencies should consider all the options carefully and select the one most appropriate to their needs within the resources available.
Acknowledgments
The authors wish to acknowledge the International Collaborative Effort (ICE) on Injury Statistics for stimulating the debate that led to this article. The ICE is sponsored by the National Center for Health Statistics, United States Centers for Disease Control and Prevention, with funding from the National Institute of Child Health and Human Development, National Institutes of Health.