Objective: To assess whether the use of integrated hospitalization and mortality data sources and/or the inclusion of comorbidity improve the predictive ability of the International Classification of Disease (ICD)-based Injury Severity Score (ICISS).
Design: Models using either the ICISS based solely on hospital discharge data or one of nine modified ICISSs as the predictor variable were assessed on their ability to predict survival using logistic regression modeling.
Setting: New Zealand.
Patients or subjects: Inpatients, with an S00–T89 ICD-10-AM principal diagnosis, and fatalities, with any S00–T89 ICD-10-AM diagnosis, occurring in 2000–2003.
Main outcome measures: Models were compared in terms of their discrimination (concordance), calibration, and goodness-of-fit.
Results: 186 835 cases including 9968 deaths met the inclusion criterion. The modified ICISS that included both mortality data and Charlson comorbid conditions at the ICD-10-AM level had the best concordance and high calibration. Calibration curves indicated that scores using hospital discharge data only to calculate survival risk ratios underestimated mortality, whereas scores using hospital discharge and mortality data overestimated mortality.
Conclusions: Valid measurement of injury severity is important for both meaningful research and surveillance and to assist in classifying information to meet specific injury policy, prevention, and control needs. This study suggests that the predictive ability of ICISS would be improved if both mortality and comorbidity data were included in its calculation.
Valid measurement of injury severity is critical for the production of valid information from the analysis of injury data to inform policy and injury prevention practice. In injury research, routinely collected administrative databases are often used, as they provide an inexpensive opportunity to analyze hospital outcomes. The International Classification of Disease (ICD)-based Injury Severity Score (ICISS) has been shown to be a useful tool in estimating injury severity from administrative databases.1–4 Investigating whether the measurement of injury severity can be improved by modifying the ICISS methodology is important, as it presents the potential to produce more trustworthy injury statistics for the purposes of measuring the impact of policy and practice in reducing injury worldwide.
The ICISS methodology involves estimating probability of death directly from ICD injury diagnoses by examining a large set of cases for which survival status is known. Although most research to date on the development of ICISS has been based on trauma center data, the same techniques have been successfully applied to national population-level statistics using injury hospitalizations databases.5
Determining which injuries are “serious” by the ICISS method involves calculating a survival risk ratio (SRR) for individual injury diagnosis codes. An SRR is the proportion of cases with a certain injury diagnosis in which the patient does not die, or in other words, a given SRR represents the likelihood that a patient will survive a particular injury.
Using the standard ICISS methodology, each patient’s ICISS (survival probability) is the product of the SRRs associated with all the diagnoses listed on the patient’s hospital discharge record.1 For patients with an isolated injury, the ICISS is simply the SRR of the particular injury diagnosis. In 2003, Kilgo et al6 compared the standard multiple-injury ICISS with a worst-injury ICISS which, rather than being the product of the SRRs, was the smallest SRR among the diagnoses for a patient. Results indicated that worst-injury ICISS discriminated survival better, fitted better, and explained more variance than the multiple-injury ICISS.
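The SRR and ICISS calculations described above can be sketched in Python; the records and diagnosis codes below are hypothetical, and both the standard product-of-SRRs score and Kilgo's worst-injury variant are shown for contrast.

```python
from collections import defaultdict

def survival_risk_ratios(records):
    """One SRR per diagnosis code: the proportion of cases carrying
    that code in which the patient survived.
    records: iterable of (diagnosis_codes, survived) pairs, with
    survived equal to 1 (lived) or 0 (died). Illustrative only."""
    seen = defaultdict(int)
    lived = defaultdict(int)
    for codes, survived in records:
        for code in set(codes):       # count each code once per case
            seen[code] += 1
            lived[code] += survived
    return {code: lived[code] / seen[code] for code in seen}

def iciss_multiple(codes, srr):
    """Standard multiple-injury ICISS: the product of the SRRs of all
    diagnoses listed for the case."""
    product = 1.0
    for code in codes:
        product *= srr[code]
    return product

def iciss_worst(codes, srr):
    """Worst-injury ICISS (Kilgo et al): the smallest single SRR
    among the case's diagnoses."""
    return min(srr[code] for code in codes)
```

For a patient with a single injury diagnosis, both scores reduce to the SRR of that diagnosis.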
Previous research applying the ICISS methodology to injury hospitalizations has estimated the probability of survival to hospital discharge given admission.1 2 5 If patients who died from their injuries after being discharged from hospital were included, the probability of survival given admission would be estimated. As some patients are never admitted to hospital (eg, they die at the scene of the injury), the ideal set of cases would include all injury deaths irrespective of whether or not hospital admission occurred, as this gives an estimate of probability of survival or, in other words, threat to life.
Although numerous papers detail the contribution of pre-existing non-injury conditions to injury severity, the literature on injury severity scores that include comorbidity is limited. The Trauma and Injury Severity Score (TRISS) was outperformed by TRISSCOM, a modified score that integrated comorbidity as the presence/absence of at least one of eight predefined comorbid conditions.7 In contrast, Gabbe et al8 reported that the inclusion of comorbidity using an index score did not result in substantial improvement to the performance of TRISS.
Existing approaches for assessing comorbidity in routinely collected data were identified. A prominent measure in the literature on the effects of comorbidity on health outcomes and burden of care is the Charlson Comorbidity Index. This was first adapted to administrative databases in 1993, with the resulting algorithm developed further to generate 17 yes/no comorbidity variables for each patient record.9 An alternative approach was the Harborview Assessment for Risk of Mortality (HARM).10 The 11 comorbid conditions considered for the HARM model were based on work by Morris et al11 in 1990.
The purpose of the present study was to determine whether the predictive ability of ICISS can be improved, firstly, by including deaths that occur outside hospital, and, secondly, through the inclusion of either a Charlson or HARM comorbidity component.
New Zealand Health Information Service’s (NZHIS’s) National Minimum Dataset (NMDS) is a national collection of hospital discharge information.12 For this work, a subset of the NMDS, namely all publicly funded inpatient treatment of injuries in New Zealand hospitals, was used. It is estimated that 99% of all hospital injury discharges in 2002 were publicly funded.13 14
For 2000–2003, there were 318 394 hospital discharges in the NMDS with a principal diagnosis in the range S00–T89 (Injury and Poisoning chapter excluding sequelae). Over this period, the maximum number of diagnosis fields available per discharge was 99. Re-admissions for the same injury were identified using a previously described approach.15
NZHIS’s Mortality Collection (MC) classifies the underlying cause of all deaths registered in New Zealand. Data collection is mainly from death certificates and coroner’s reports.16 In the MC, 111 278 fatalities were registered in 2000–2003, of which 6641 had at least one diagnosis in the range S00–T89.
Linkage between the NMDS and MC was deterministic using patients’ Master National Health Index, which is the cornerstone of NZHIS’s data collections. Master National Health Indexes uniquely identify healthcare users and are mandatory fields in both the NMDS and MC.
During this research it became apparent that the recording and coding of deaths that occur outside hospital (MC) was very different from the recording and coding of deaths in hospital (NMDS). For cases where data were available from both datasets, only the NMDS was used. For cases of injury death in which hospitalization did not occur before the death, the diagnosis information had to be obtained from the MC. As date of injury is not available in the MC, fatal injuries in which the person was not admitted to hospital were assumed to occur on the date of death.
The dataset for analysis was obtained from cases where the injury occurred within the period 1 January 2000 to 31 August 2003 and satisfied one of the following:
(1) Hospitalization with an S00–T89 ICD-10-AM principal diagnosis where the patient was discharged dead and the admission was within 90 days of the injury date, excluding those re-admissions where the first admission was not an injury (n = 1969)
(2) First admission with an S00–T89 ICD-10-AM principal diagnosis where the patient either stayed at least one night in hospital or died within 90 days (n = 182 414)
(3) Fatality located in the MC with an S00–T89 ICD-10-AM diagnosis in any field (n = 6008)
There were 186 835 injury cases from 167 479 people identified from these non-mutually exclusive criteria. Occurrences of people hospitalized more than once for separate injury events explain why there are fewer people than cases.
The outcome measured for each case was survival/death. Cases were coded as “dead” if they met one of the following conditions:
Were in (1) or (3) above
Were the most recent first admission with an S00–T89 ICD-10-AM principal diagnosis and an injury date within the period 1 January 2000 to 31 August 2003, and could be located in the 2000–2003 MC with a date of death within 90 days of the injury date (n = 5210)
Of the 186 835 cases, there were 9968 fatalities (5.3%). For further details, see appendix 1 online.
Table 1 lists the Charlson and HARM comorbid conditions. A Stata module “charlson” that maps ICD-9-CM Charlson comorbidity diagnoses codes to ICD-10 was used with minor modifications to adapt the module for ICD-10-AM.17 18 The HARM comorbid conditions were defined using ICD-9-CM diagnoses codes, so forward and back mapping was used to determine the appropriate ICD-10-AM codes.10 19 For further details, see appendix 2 online.
Although the data contained multiple patient records for some people, comorbidity information was calculated independently for each case. Comorbidity SRRs were calculated at the ICD-10-AM code level (ie, one SRR for each ICD-10-AM code within a given comorbidity) and at the variable level (ie, one SRR for a given comorbidity, meaning that all ICD-10-AM codes that make up the “comorbidity” were aggregated).
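The variable-level granularity can be sketched as follows; the code-to-variable mapping and the function name are hypothetical (the actual mappings appear in appendix 2 online), and the code-level calculation is simply the per-diagnosis SRR computation applied to individual comorbidity codes.

```python
from collections import defaultdict

# Hypothetical mapping of ICD-10-AM codes to a Charlson-style
# comorbidity variable; the real mappings are given in appendix 2.
CODE_TO_VARIABLE = {"E10": "diabetes", "E11": "diabetes", "I50": "chf"}

def variable_level_srrs(records):
    """One SRR per comorbidity variable: all ICD codes mapped to a
    variable are pooled before the survival proportion is taken.
    records: iterable of (diagnosis_codes, survived) pairs."""
    seen = defaultdict(int)
    lived = defaultdict(int)
    for codes, survived in records:
        variables = {CODE_TO_VARIABLE[c] for c in codes
                     if c in CODE_TO_VARIABLE}
        for v in variables:           # count each variable once per case
            seen[v] += 1
            lived[v] += survived
    return {v: lived[v] / seen[v] for v in seen}
```

Pooling in this way gives each comorbidity variable one SRR regardless of which constituent code was recorded, whereas the code-level approach keeps a separate SRR for each ICD-10-AM code.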
Calculation of ICISS
Ten ICISSs were calculated for each case (table 2). SRRs for ICISS1–5 were calculated from a subset that did not use the MC to identify cases (182 673 cases including 1969 (1.1%) fatalities). All cases with a particular diagnosis code (eg, S01.1) listed anywhere on the patient’s NMDS or MC record were included in the SRR calculation for that diagnosis code. All ICISSs were calculated using the full dataset.
For the 13 349 cases located in both the NMDS and MC, a comparison of the average number of diagnoses per person (mean = 6.2 and 1.9, respectively) indicated that the number of diagnoses recorded was different. Owing to concern about the bias that this may introduce, Kilgo’s “worst-injury” methodology was applied.6 Thus ICISS1 was calculated as follows:
ICISS1 = smallest (injurySRR1, injurySRR2,…,injurySRRn)
Consistency between the two parts of the method was obtained by having the comorbid SRRs contribute as follows:
ICISS = smallest (injurySRR1, injurySRR2,…,injurySRRn) × smallest (comorbiditySRR1, comorbiditySRR2,…, comorbiditySRRn)
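A minimal sketch of this combined score, assuming (as the formula implies but the text does not state) that a case with no comorbidity diagnoses contributes a comorbidity factor of 1:

```python
def iciss_with_comorbidity(injury_srrs, comorbidity_srrs):
    """Worst-injury SRR multiplied by the worst comorbidity SRR.
    Treating an empty comorbidity list as a factor of 1 is our
    assumption; the paper does not state how such cases were handled."""
    score = min(injury_srrs)
    if comorbidity_srrs:
        score *= min(comorbidity_srrs)
    return score
```

Taking the minimum on each side keeps the two parts of the method consistent: each contributes only its single worst (smallest) SRR.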
Logistic regression models using the full dataset, with ICISS as the predictor variable and survival as the outcome variable, were fitted. To compare the ICISSs, discrimination and calibration were assessed.20 21 Discrimination is the ability of the model to distinguish survivors from non-survivors and was measured by concordance on a scale of 0–1, with 1 indicating perfect separation of the groups. The concordance is equal to the area under the receiver operating characteristic curve.21
Non-parametric bootstrapping with 200 replications was used to correct the standard errors in the calculation of the confidence intervals for bias caused by the use of a single dataset for design and testing.21
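That resampling scheme can be sketched with the percentile method below; this is illustrative only, since the paper corrects standard errors rather than taking raw percentiles, and `statistic` here stands in for any model summary such as concordance.

```python
import random

def bootstrap_ci(statistic, data, reps=200, alpha=0.05, seed=1):
    """Non-parametric bootstrap: resample cases with replacement,
    recompute the statistic for each replicate, and take percentile
    limits as an approximate (1 - alpha) confidence interval."""
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(
        statistic([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(reps)
    )
    return stats[int(reps * alpha / 2)], stats[int(reps * (1 - alpha / 2)) - 1]
```

Resampling whole cases (rather than residuals) makes no distributional assumptions, which is why the approach suits a score validated on the same dataset used to build it.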
The relative calibration of scores was compared using calibration curves and the Hosmer–Lemeshow (H-L) statistic. Calibration curves are plots of observed against estimated mortality (1 − ICISS), with cases grouped by estimated mortality. These curves enable comparison of the scores with each other and with a straight 45° line (perfect calibration). The H-L statistic indicates the accuracy of the model’s estimates of probability of death. A perfectly calibrated model has an H-L statistic of zero. The higher the H-L statistic, the poorer the fit.20 The statistical significance of the H-L statistic was not assessed, as it is inappropriate to do so with large samples.22 The R2 is a descriptive goodness-of-fit measure between 0 and 1 that describes the proportion of variance explained by the model; higher values are better. Stata V9.2 was used for all statistical analysis.
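Both summary measures can be sketched directly; the equal-sized grouping in the H-L computation below is a common convention and an assumption on our part, as the paper does not state its grouping.

```python
def concordance(est_mortality, died):
    """Probability that a randomly chosen death has a higher estimated
    mortality than a randomly chosen survivor, ties counting one half.
    This equals the area under the ROC curve."""
    deaths = [p for p, d in zip(est_mortality, died) if d]
    survivors = [p for p, d in zip(est_mortality, died) if not d]
    wins = sum(1.0 if pd > ps else 0.5 if pd == ps else 0.0
               for pd in deaths for ps in survivors)
    return wins / (len(deaths) * len(survivors))

def hosmer_lemeshow(est_mortality, died, groups=10):
    """H-L statistic: cases are sorted and split into groups by
    estimated mortality; each group contributes the squared gap between
    observed and expected deaths, scaled by the binomial variance.
    Zero indicates perfect calibration; higher values mean poorer fit."""
    pairs = sorted(zip(est_mortality, died))
    n, stat = len(pairs), 0.0
    for g in range(groups):
        chunk = pairs[g * n // groups:(g + 1) * n // groups]
        if not chunk:
            continue
        expected = sum(p for p, _ in chunk)
        observed = sum(d for _, d in chunk)
        pbar = expected / len(chunk)
        variance = len(chunk) * pbar * (1 - pbar)
        if variance > 0:
            stat += (observed - expected) ** 2 / variance
    return stat
```

A model can discriminate well yet calibrate poorly (or the reverse), which is why the paper reports both measures for every score.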
Table 3 lists the concordance values, H-L statistics, and R2 statistics for all ICISS models.
ICISS9, which included both mortality data and Charlson comorbid conditions at the ICD-10-AM level, had the best concordance. Scores calculated using the NMDS and MC (ICISS6–10) all had better concordance than those calculated using only the NMDS. Scores calculated using comorbidity data had higher concordance than corresponding scores that did not include comorbidity (ICISS2–5 vs ICISS1; ICISS7–10 vs ICISS6). The inclusion of comorbidity using Charlson conditions produced scores with better concordance than those that used the HARM approach (ICISS4–5 vs ICISS2–3; ICISS9–10 vs ICISS7–8). Scores calculated using comorbidity SRRs at the ICD-10-AM level had higher concordance than respective scores calculated using SRRs at the comorbidity variable level (ICISS2 vs ICISS3; ICISS4 vs ICISS5; ICISS7 vs ICISS8; ICISS9 vs ICISS10).
As the vast majority of cases have ICISSs close to 1 (low estimated mortality), only cases with estimated mortality of 30% or less are presented in the calibration curves (figs 1 and 2). This corresponds to presenting 90–99% of the data. Although the differences in performance between the scores are difficult to assess from the calibration curves, it is evident that scores for which only the NMDS was used to calculate SRRs underestimated mortality, whereas scores that also included the MC overestimated mortality. Calibration was generally better at lower levels of estimated mortality.
Typically the patterns observed in the relative calibration of the scores were the same as those noted for concordance. ICISS10, which included both mortality data and Charlson comorbid conditions at the variable level, had the lowest H-L statistic, although the calibration of ICISS9 was very similar.
The same order of performance observed for concordance was seen for the R2 statistics. ICISS9, which included both mortality data and Charlson comorbid conditions at the ICD-10-AM level, explained 33% of the variation. This was almost three times higher than that observed for the score with the lowest R2 statistic, ICISS1.
Table 4 lists the results by diagnosis group. “Head injuries” represented 13% of the cases, with “other mechanical trauma” representing 61%. “Complications” and “other injuries” represented 13% and 12%, respectively. Head injuries had the highest concordance, and complications the least. Within each diagnostic group, ICISS9 had the best concordance. For head injuries, ICISS7 had the lowest H-L statistic, whereas, for other mechanical trauma, ICISS3 did. For complications, the best calibrated score was ICISS9, whereas for other injuries it was ICISS8.
Comparing the performance of scores indicates that the predictive ability of ICISS can be enhanced through the use of integrated hospitalization and mortality data sources. A marked improvement in the performance of ICISS was also evident when comorbidity diagnoses were taken into account.
The high H-L statistics for ICISS1–5 and the underestimation of mortality in the calibration curves were not surprising, as these scores were calculated using only NMDS-based SRRs, which included 55% of the deaths attributed to injury, whereas the dataset used to validate the scores used both NMDS and MC data. The overestimation of mortality by ICISS6–10 may have been caused by the tendency for less specific injury diagnosis codes to be used in the MC than in the NMDS. For example, for hospitalized injury deaths with both NMDS and MC data, S09.9 (Unspecified head injury) was used more than twice as often as the first diagnosis in the MC than in the NMDS.
The H-L statistic was used to summarize calibration and to compare the relative performance of the models. Although other approaches could have been used, its use is consistent with previous published work investigating the validity of ICISS.2 4 5
As indicated above, where the case could be located in the NMDS, hospital diagnostic information was used. For the 4473 cases not hospitalized (2% of the 186 835 cases; 45% of the 9968 fatalities), we had no option but to use the diagnosis information from the MC. The use of diagnosis information from different sources has the potential to introduce error, but no alternative method was apparent.
A linkage of injury deaths that occurred in hospital to injury deaths in the MC highlighted some concerns. Defining a “hospitalized injury death” as one in which a patient with a principal ICD-10-AM diagnosis in the range S00–T89 was discharged dead from a first admission, an S00–T89 diagnosis was listed on the MC record for only half of the hospitalized injury deaths. These findings are not unexpected, as it has been previously reported that a range of factors may lead to the exclusion of important or predominant causes of the process leading to death.23 For this study, it was assumed that the diagnoses for cases identified solely from the MC were recorded correctly and fully. This may not always be true, but, in the absence of a proper review of case notes, it is impossible to know.
The predictive ability of the ICD-based Injury Severity Score (ICISS) can be enhanced by the use of integrated hospitalization and mortality data sources.
A marked improvement in the performance of ICISS was evident when comorbidity diagnoses were taken into account.
The modified ICISS that included both mortality data and Charlson comorbid conditions at the ICD-10-AM level had the best concordance and high calibration.
Calibration curves indicated that survival risk ratios calculated using hospital discharge data only underestimated mortality, whereas those calculated using hospital discharge and mortality data overestimated mortality.
Using the MC enabled inclusion of deaths of people never admitted to hospital and deaths after discharge from hospital. For the latter, the appropriate time lapse between injury and death to ensure that death could be attributed to the injury was unclear, so an enquiry was sent to international colleagues via the International Collaborative Effort (ICE) discussion list. No clear response was elicited, so an approach was developed based on empirical evidence that, of injury hospitalizations from which the patient was discharged dead, 95% died in hospital within 90 days of admission for injury. When a 90-day cut-off was proposed to ICE, it was well accepted and thus adopted for this study. Given that 29% (2899/9968) of the deaths were included solely because they satisfied this criterion, further work should explore the effect of using a different cut-off.
Rather than using the standard ICISS methodology and obtaining the product of the SRRs for every injury diagnosis listed for a case, this work used only the SRR associated with the worst injury. This restricted the variation in scores and limited the number of covariate patterns in the models. The 186 835 cases produced only 458 unique scores for ICISS1. The inclusion of comorbidity increased the number of covariate patterns. This lack of variation may have limited the predictive ability of the scores.
Allowing a separate SRR for each of the ICD-10-AM comorbidity codes considerably increased the set of SRRs from which the worst was chosen. Not surprisingly, scores based on comorbidity SRRs at the ICD-10-AM level performed better than those that included SRRs calculated at the comorbidity variable level.
The age of patients was not included here. Although a recent study reported that the presence of comorbidity was associated with increased mortality after trauma and that this was additional to, and independent of, the effect of increasing age,24 further work should look at the importance of including age in ICISS. Future work should focus on whether the inclusion of comorbidity is better than inclusion of the simpler item, age, and whether there is justification for including both age and comorbidity in ICISS.
Valid measurement of injury severity is important both for meaningful monitoring of trends and to assist in classifying information to meet specific injury policy, prevention, and control needs. This study suggests that the predictive ability of the standard ICISS method of determining injury severity would be improved if both mortality data and comorbidity were considered.
We thank Paul Brown, Information Manager, Statistics New Zealand for sponsoring this research, and an anonymous reviewer for helpful comments on the draft.
▸ Appendices 1 and 2 are published online only at http://injuryprevention.bmj.com/content/vol14/issue4
Funding: This research was supported by a grant from Official Statistics Research, Statistics New Zealand.
Competing interests: None.