Pick up almost any issue of Injury Prevention and you will find statements such as the following:
“Results of this study are subject to limitations. NEISS-AIP only includes injuries treated in hospital emergency departments and thus excludes those treated in physician offices, outpatient clinics or at home. Our results then are probably an undercount of the total injuries sustained.”1
This statement is meaningless unless the authors say what they are talking about: what injuries they are seeking to describe. An essential element of the description of the study methods, for papers that deal with quantitative results, is a clear statement of the definition of injury that is the intended focus of the paper.
In the above example, if the authors were interested in describing all injuries, including trivial or superficial ones, then they had no business using attendance at the emergency department (ED) as their case definition, as ED data would patently capture only a minority of those trivial injuries. The justification for using ED data to describe the epidemiology of injury is, rather, that no-one is interested in preventing trivial injuries: we are interested in preventing injuries that are consequential, eg, in terms of death, threat to life, (threat of) disability, reduced quality of life, and cost.
This quote typifies what we have read in many injury epidemiology papers. Many such papers use case definitions of convenience (eg, ambulance attendances, ED attendances, hospitalizations) to describe the epidemiology of injury without any definition of the underlying injury outcome of interest. As we will demonstrate, this is “putting the cart before the horse”. We contend that, for the reader of a paper to assess whether the chosen case definition is appropriate, he/she needs to know what injuries are the focus of the authors’ investigation.
It is essential therefore that all papers that present quantitative research findings, where the aim is to provide an unbiased estimate of prevalence, incidence (rate), association, or effect, should include the following: a theoretical definition of injury relevant to the research question; a case definition, including the nature and quality of the data from which the cases were obtained; and a discussion of any mismatch between the theoretical definition of injury and the case definition used. If authors use (implicitly or explicitly) inappropriate definitions, then misleading descriptions of the epidemiology of injury are likely to ensue, with potential downstream effects on priority setting, policy making, prevention, and control.
In this commentary, we show that failure to do this undermines the results and conclusions of offending papers. We take the following approach: we (a) conceptualize the problem, (b) discuss aspects of the scope and definition of injury, (c) discuss the choice of case definition, and (d) illustrate our points with some empirical evidence, before (e) providing a set of recommendations about what quantitative papers should present in this regard.
CONCEPTUALIZING THE PROBLEM
As an illustration, fig 1A represents the injuries captured by two hypothetical data sources. For example, the circle could represent hospital discharges of cases of work-related injury, and the triangle compensated work-related injury.
It will be apparent from this hypothetical example that these shapes represent different populations (as defined by what injuries are eligible to be captured), and that they include some common cases (the area with darker shading).
In fig 1B, the circle and the triangle are the same as in fig 1A. The square represents injuries that are the focus of a hypothetical investigation in a study—in this example, injury that is serious in terms of threat to life.
In fig 1A, simply adding hospital data to compensated injury data (and removing duplicates) would result in some representation of work-related injury. However, if our focus is serious threat to life injuries (as indicated by the box in fig 1B), the total shaded area in fig 1A (light and dark shading) bears no relationship to it. That is, without some intelligent case selection, the data sources do not reflect what we are interested in. For example, if a case is defined as any hospital discharge with a diagnosis of work-related injury, then many of the cases captured using this case definition would not be consistent with the focus of the study/definition of injury.
It should be noted, however, that many of the serious injuries, as represented by the square, result in admission to hospital. So it is possible to select hospital discharges that approximate well the focus of this hypothetical study (eg, by choosing a case definition that includes only hospitalizations that exceed some threat to life severity threshold).
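To make the idea of a threat-to-life severity threshold concrete, here is a minimal sketch of ICISS-based case selection, in which a case’s ICISS is the product of diagnosis-specific survival risk ratios (SRRs) across all of its coded injury diagnoses, and cases below a cut-point are classed as serious. The SRR values, record layout, and the 0.941 cut-point are illustrative assumptions, not data from this commentary.

```python
# Hypothetical SRR lookup: ICD-10 code -> probability of survival, given
# that diagnosis (values below are invented for illustration only)
SRR = {
    "S06.2": 0.90,   # diffuse traumatic brain injury
    "S72.0": 0.96,   # fracture of neck of femur
    "S52.5": 0.999,  # fracture of lower end of radius
}

def iciss(diagnoses):
    """ICISS = product of SRRs over all of a case's injury diagnoses."""
    p = 1.0
    for code in diagnoses:
        p *= SRR.get(code, 1.0)  # unknown codes treated as no threat to life
    return p

def serious(case, threshold=0.941):
    """Class a case as 'serious' (threat to life) if its ICISS falls below
    the threshold (ie, estimated probability of survival is low enough)."""
    return iciss(case["diagnoses"]) < threshold

# Two hypothetical hospital discharges
discharges = [
    {"id": 1, "diagnoses": ["S06.2", "S72.0"]},  # brain injury + hip fracture
    {"id": 2, "diagnoses": ["S52.5"]},           # wrist fracture only
]

# Case selection: keep only discharges meeting the severity threshold
serious_cases = [c for c in discharges if serious(c)]
```

The point of the sketch is only that the case definition (the filter) is derived from the theoretical definition (threat to life above a threshold), rather than from whatever the data source happens to contain.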
Figure 1C conceptualizes a new example. Again the circle and the triangle are the same as in fig 1A,B. In fig 1C, the diamond represents injury that is the focus of investigation in this second hypothetical study: namely, work-related injury that is serious in terms of threat of disability. This again illustrates the mismatch between the coverage of the data sources (either on their own or in combination) and the injury definition used: the dataset represented by the triangle (eg, compensated work-related injury) captures almost all of the cases of serious (threat of disability) injury, but it also captures many work-related injuries that are not the focus of the study.
The dataset represented by the circle (eg, hospital discharges) captures only a minority of the injuries that are the focus of this second hypothetical study—so hospital discharges are a poor basis for a case definition if the interest is in serious injury as measured by threat of disability.
It is worth noting that coverage of injuries that are important in terms of threat to life is not identical with coverage in terms of threat of disability (fig 1D). This is illustrated by the examples of (1) penetrating eye injury and (2) amputation of the thumb. Both of these injuries, on their own, are associated with little threat to life, but both are disabling injuries.
The examples given above are not intended to imply that the investigator should necessarily aim to identify a single data source. They have simply been included as illustrations. Where appropriate, studies should use multiple sources. For example, in a recent study of serious (threat to life) non-fatal work-related injury, we used linked data sources and identified whether the event was work-related from one source that captures compensated work-related injury data, and serious threat to life injuries from another (namely hospital inpatient data).2
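The linked-source approach just described can be sketched as a simple intersection: work-relatedness comes from one source (compensation data) and seriousness (threat to life) from another (hospital inpatient data). The identifiers, record structures, and linkage key below are hypothetical; real record linkage is of course far more involved.

```python
# Source 1 (hypothetical): person/event IDs with a compensated work-related injury
compensated_work_claims = {"P001", "P002", "P003"}

# Source 2 (hypothetical): hospital inpatient data, with seriousness judged
# by a threat-to-life severity score (eg, ICISS below a serious threshold)
hospital_serious = {
    "P001": True,
    "P002": False,  # admitted, but little threat to life
    "P004": True,   # serious, but not work-related
}

# Serious non-fatal work-related injury =
#   work-related (source 1) AND serious threat to life (source 2)
serious_work_related = {
    pid for pid, is_serious in hospital_serious.items()
    if is_serious and pid in compensated_work_claims
}
# -> {"P001"}
```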
INJURIES AND THEIR DEFINITION
Scope: physical and psychological injury, acute and chronic?
One issue that should be addressed early in any paper is the scope of the injury definition. Does the definition that is used include only physical injury, or does it also include psychological injury? Internationally, the most commonly accepted operational definitions of injury include those pathologies classified to the “Injury” chapter of the International Classification of Diseases (ICD). It is of interest to note that the “Injury” chapter of ICD-10 includes “Maltreatment syndromes” (T74). This category includes “Neglect and abandonment”, “Physical abuse”, “Sexual abuse”, and “Psychological abuse” without any reference to physical injury. In other words, some forms of intentional psychological harm/injury are covered by the “Injury” chapter of ICD. Psychological injury is important. However, it is ill-defined and difficult to measure. Almost certainly, many cases of psychological injury would be coded to other chapters of the ICD, notably to codes within the “Mental and behavioural disorders” chapter. If an author’s injury definition includes psychological injury, then its definition and measurement need to be addressed. For this commentary, and our illustrations, we deal exclusively with physical injury.
Does the definition include “chronic” injury, such as occupational overuse syndrome? These injuries are typically classified to diagnosis codes outside the “Injury” chapter of ICD-9 and ICD-10, although, as Langley and Brenner3 note:
“Some have argued that most of these conditions are chronic and should thus be excluded from the operational definition of injury… Assuming one accepts this argument, it raises an interesting question. Are we to assume that all strains and sprains coded in the range 840–848 [ie within the injury chapter of ICD-9] have occurred acutely? Given that there are no guidelines in this respect we feel such an assumption unwise.” (p 70)
If an author’s injury definition includes “chronic injury”, again a clear theoretical and case definition is needed.
Given these considerations, it is important that an unequivocal definition of injury be included in any quantitative research paper, making clear which pathologies are included in, and excluded from, the investigation. This is essential because the ability to replicate an investigator’s work is fundamental to scientific investigation.
Theoretical definition of injury
The theoretical definition of injury that is used should make it clear what the focus of the investigation is—for example:
injuries that are important because they result in death or carry a significant threat to life
injuries that result in disability or carry a significant threat of disability
injuries that are of significant cost, whether at the individual or societal level.
The next issues are the choice of:
a case definition that is consistent with the chosen theoretical definition of injury
the source(s) of data from which cases will be ascertained
Often, hospital inpatient data are chosen as the source. What follows focuses on the potential problems associated with this choice.
Injury that is a threat to life
For example, let’s say that we are interested in describing the epidemiology of serious injury, where “serious” is defined by threat to life exceeding a given threshold. We choose as an outcome measure discharges from hospital after at least an overnight stay. This case definition is biased with respect to the chosen definition of serious injury, for the reasons described below.
There is a tendency to assume that all injury that results in hospital admission is serious (eg, in this example, in terms of threat to life). Work we published recently shows that many traumatic brain injury cases admitted to hospital, for example, are of minor severity as measured by the Abbreviated Injury Scale (AIS-85) or by ICISS, severity scores that describe threat to life.4 In addition, fig 2 shows the distribution of hospital discharges from New Zealand hospitals for publicly funded treatment by ICISS threat to life severity score. It shows that many hospitalizations have a score close to 1 (guaranteed survival)—that is, many hospitalizations are associated with little or no threat to life.
Furthermore, whether a person is admitted to hospital for their injury is affected by a wide range of extraneous factors independent of injury severity as measured by threat to life.7 These include social factors (eg, distance between hospital and home), concern about intentionality (eg, child abuse), doctors’ workloads, bed/theatre availability, and the cost of access to care. These factors affect admission for minor injury far more than admission for serious injury. For example, if you fracture your femur, you will almost always be admitted,8 whereas if you have a minor head injury, you may or may not be admitted, depending on who is available to assess you in the ED and the diagnostic tools available (eg, scanning equipment and staff to investigate brain injury in the hospital outpatient department).9 In the former case, we can be confident that the hospital discharge database captures almost all cases of interest; in the latter, we cannot.
It should be noted that serious injuries, as measured by ICISS or AIS (ie, AIS 3+) for example, are in the minority among hospitalizations compared with minor and moderately severe injuries (see fig 2 and Stephenson et al4). To avoid a misleading description of the epidemiology of injury when using hospitalization data, it is important to control for these extraneous factors (eg, health service supply and access factors). One method is to use a severity threshold that captures only serious injuries (such as fractured femur), which have a high probability of admission.12
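One way to control for health service supply and access factors, as described above, is to restrict the case definition to injury diagnoses that are admitted almost regardless of such factors (eg, fractured femur). A minimal sketch follows; the diagnosis groups and admission probabilities are illustrative assumptions, not estimates from this commentary.

```python
# Hypothetical probability of admission, given the injury, by diagnosis group
P_ADMIT = {
    "fractured_femur": 0.99,     # admitted almost regardless of service factors
    "minor_head_injury": 0.40,   # admission depends heavily on extraneous factors
    "wrist_fracture": 0.20,
}

def in_case_definition(diagnosis, min_p_admit=0.9):
    """Include only diagnoses with a high probability of admission, so that
    counts of admissions approximate counts of injury events."""
    return P_ADMIT.get(diagnosis, 0.0) >= min_p_admit

# Hypothetical stream of admitted cases, filtered to the restricted definition
cases = ["fractured_femur", "minor_head_injury", "fractured_femur", "wrist_fracture"]
selected = [d for d in cases if in_case_definition(d)]
# selected == ["fractured_femur", "fractured_femur"]
```

The restricted definition deliberately sacrifices coverage of minor injury in exchange for counts that are not distorted by where and how care happens to be supplied.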
For the above reasons, hospitalization data alone are appropriate only when describing the epidemiology of serious threat-to-life injury. If the author’s definition includes minor or moderately severe injury (in terms of threat to life), then other sources should also be used.
Threat of disability
If the injuries of interest are defined in terms of threat of disability, then choosing a relevant outcome measure based on hospital inpatient data will not be useful in many instances. This is illustrated in fig 1C using our theoretical example. For example, falls on the same level in older people may result in relatively superficial physical injury that does not result in hospital attendance. Nevertheless, the injury can be very disabling, in that it can result in a fear of future falls, which leads to a fear of leaving home, and so results in major restriction of participation—one dimension of a model of disability.13 These events are common, and therefore basing an outcome measure solely on hospitalizations would fail to capture many serious injuries, where “serious” in this case is measured by (threat of) disability.
Figure 3 also illustrates this for the working population. It shows the distribution of length of time off work for injury (source: Accident Compensation Corporation (ACC)), separated into those who were admitted to hospital and those who were not (source: ACC data linked to hospital discharges). Time off work is an indicator of participation restriction, ie, of disability. It can be seen from fig 3 that, even for injuries resulting in 3 months or more off work, the proportion of cases that resulted in admission to hospital was very low. Thus, if used to describe the epidemiology of significant injury defined in terms of (threat of) disability, hospitalizations do not capture most of the cases of interest.
If one is interested in describing the epidemiology of significant injury, as measured by cost (at either the individual or societal level), then hospital inpatient data may also be unsatisfactory. For example, New Zealand has a no-fault compensation system for injury, which is administered by the ACC (www.acc.co.nz). Back injury/pain is a significant compensation cost to the ACC, for example, through the payment of earnings-related compensation for time off work because of back injury/pain. Most of these cases of back injury/pain do not result in admission to hospital. If these outcomes are included in the definition of injury, some important injuries as measured by cost would not be captured by using hospitalization data alone.
We have illustrated the problems associated with (a) a failure to define what injuries are the focus of a study, and (b) a failure to describe how the case definition used relates to the target definition of injury, using incident cases of injury that result in hospitalization as an example of a case definition. This assessment could be repeated for other data sources, but has not been done here to keep this commentary within reasonable limits. Nevertheless, it will be apparent that inappropriate definitions, whether implicit or explicit, are likely to produce misleading descriptions of the epidemiology of injury, with the downstream effects on priority setting, policy making, prevention, and control noted above.
In view of this, we argue that journals that accept epidemiological studies for review should require authors to:
Define in their paper what injuries were the focus (but not in terms of the utilization of particular health services, survey response or capture by one or more reporting systems), and indicate how the definition is relevant to the research question
Describe the data sources (including their quality) from which the outcome measures were derived
Specify the case definition, ie, the methods used to ascertain cases for the study
Discuss any mismatch between the injury definition and the case definition used
As Injury Prevention is the leading injury prevention journal, we challenge the new Editorial Board to lead such an initiative.
Thanks to Gabrielle Davie (IPRU) for preparing figs 2 and 3. These figures are based on data supplied to IPRU by the New Zealand Health Information Service and by the Accident Compensation Corporation of New Zealand. Thanks also to Professor Charlotte Paul, Department of Preventive and Social Medicine, University of Otago, who provided critical comment on the penultimate draft of this commentary.
Competing interests: None.