

Accuracy of external cause-of-injury coding in hospital records
K McKenzie,1 E L Enraght-Moony,1 S M Walker,1 R J McClure,2 J E Harrison3

1 National Centre for Classification in Health, Queensland University of Technology, Brisbane, Queensland, Australia
2 Monash University Accident Research Centre, Monash University, Melbourne, Victoria, Australia
3 Research Centre for Injury Studies, Flinders University, Adelaide, South Australia, Australia

Correspondence to: Dr K McKenzie, National Centre for Classification in Health, School of Public Health, Queensland University of Technology, Kelvin Grove 4059, Queensland, Australia; k.mckenzie{at}qut.edu.au

Abstract

Objective: To appraise the published evidence regarding the accuracy of external cause-of-injury codes in hospital records.

Design: Systematic review.

Data sources: Electronic databases searched included PubMed, PubMed Central, Medline, CINAHL, Academic Search Elite, Proquest Health and Medical Complete, and Google Scholar. Snowballing strategies were used by searching the bibliographies of retrieved references to identify relevant associated articles.

Selection criteria: Studies were included in the review if they assessed the accuracy of external cause-of-injury coding in hospital records via a recoding methodology.

Methods: The papers identified through the search were independently screened by two authors for inclusion. Because of heterogeneity between studies, meta-analysis was not performed.

Results: Very little research has been conducted on the accuracy of external cause coding for injury-related hospitalisation using medical record review and recoding methodologies, with only five studies matching the selection criteria. The accuracy of external cause coding using ICD-9-CM ranged from ∼64% when exact code agreement was examined to ∼85% when agreement for broader groups of codes was examined.

Conclusions: Although broad external cause groupings coded in ICD-9-CM can be used with some confidence, researchers should exercise caution for very specific codes until further research is conducted to validate these data. As all previous studies have been conducted using ICD-9-CM, research is needed to quantify the accuracy of coding using ICD-10-AM, and validate the use of these data for injury surveillance purposes.


Injuries are a significant cause of morbidity and mortality internationally, with the World Health Organization estimating that fatal injuries affect almost 6 million people worldwide, that injury-related hospitalisations are around 30 times as numerous as injury deaths, and that emergency department presentations for injury are around 300 times as numerous.1 Hospital separations and mortality data are routinely used to monitor and assess injury causation and incidence, and to inform injury research, policy and practice.

The International Classification of Diseases (ICD) is the major system in use worldwide for the coding of morbidity data, and the ICD-10-AM (the Australian Modification of ICD-10) is the version of this classification used in all Australian hospitals.2 The ICD-10-AM is used to assign alphanumeric codes to diagnoses, procedures and external causes of injury recorded in patient medical records, to enable analysis and comparison of Australian morbidity data. In addition, the ICD-10-AM is used in 11 other countries worldwide and is being evaluated for use in an additional 16 countries.

Medical record reviews have a long history in the disease diagnosis area for validating and assessing the accuracy of coding of different clinical diagnoses. There is considerable pressure for accurate diagnosis coding from those responsible for case mix funding and resource allocation, and from the clinical researchers who use the data for clinical categorisation of diseases and for epidemiological purposes. Hence, there has been interest over many years, and in many countries, in assessing the accuracy of diagnosis coding. A review by Williamson3 in 2004 reported 129 published documents on accuracy in morbidity coding. Similarly, Campbell et al,4 in a systematic review of diagnosis coding accuracy, identified 30 studies in the UK alone. Despite the plethora of research on the accuracy of diagnosis coding, there has been very limited research on the accuracy of external cause-of-injury coding in hospital data.5–9 Currently, there is a lack of knowledge, understanding and familiarity with the use of hospital data for injury surveillance, and there are very few injury researchers driving a programme of quality assurance of these data.

We conducted a systematic review of the literature to appraise the available evidence on the accuracy of external cause coding in hospital records in Australia. This review provides evidence for evaluation of the validity of national injury estimates based on these morbidity data.

METHODS

Study question

What is the accuracy of ICD external cause-of-injury coding in hospital admissions records in Australia?

Search strategy

The following phrase was used to search a range of databases, and the results were collated by two reviewers: (“external cause” OR e-code OR “e code”) AND injury AND (quality OR validity OR reliability OR accuracy OR concordance OR consistency OR completeness OR documentation) AND (coding OR ICD) AND hospital AND (recod* OR abstract* OR review*). The text search was conducted in conjunction with the following MeSH terms strategy: ((Medical Records/*classification) OR (International Classification of Diseases+)) AND (Wounds and Injuries/*classification OR Wounds and Injuries/*etiology) AND Documentation/standards AND Hospitals.
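As an illustration only, and not part of the original review protocol, the sketch below shows how the keyword phrase could be run programmatically against PubMed through NCBI's Entrez E-utilities using the Biopython package; the contact e-mail address and retrieval limit are placeholder assumptions, and in the review itself the retrieved results were collated and screened by two reviewers across all of the databases listed below.

```python
# A minimal sketch (assumption: Biopython is installed) of running the review's
# keyword phrase against PubMed via NCBI's Entrez E-utilities.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # placeholder contact address required by NCBI

query = (
    '("external cause" OR e-code OR "e code") AND injury AND '
    '(quality OR validity OR reliability OR accuracy OR concordance OR '
    'consistency OR completeness OR documentation) AND (coding OR ICD) AND '
    'hospital AND (recod* OR abstract* OR review*)'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=500)  # 500 is an arbitrary cap
result = Entrez.read(handle)
handle.close()

print(f"{result['Count']} PubMed records matched; first IDs: {result['IdList'][:10]}")
```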

The databases searched included PubMed, PubMed Central, Medline, CINAHL, Academic Search Elite, Proquest Health and Medical Complete, and Google Scholar. No time restrictions were applied, to ensure that all articles indexed within each database were retrieved. In addition to the systematic keyword search approach, snowballing strategies (ie, following up on citations that emerge from other citations) were used by searching the bibliographies and citation links of retrieved references to identify relevant associated articles. Grey literature searches were conducted to identify locally published reports and presentations. In addition, the following key journals were hand searched for articles on external cause data: Journal of Trauma, Injury and Infection Control, American Journal of Public Health, Australia & New Zealand Journal of Public Health, Injury Prevention.

Inclusion/exclusion criteria

The papers identified were independently screened by two authors (KM and EE-M) for inclusion. Studies were included in the review if they assessed the accuracy of external cause-of-injury coding in hospital records by a recoding methodology (n = 5). Seventy-nine studies were excluded because they were: (a) not recoding studies (eg, epidemiological studies or data/policy recommendation reports) (n = 73); (b) recoding studies not specifically focusing on community injuries (eg, recoding studies of other clinical diagnoses or of adverse events) (n = 2); or (c) studies in which data were collected from emergency department records only (n = 4). (Note: a large number of irrelevant papers that were not recoding studies were returned in PubMed Central using this search phrase.)

Synthesis of study results

Papers were reviewed and summarised in tabular and text form. Because of heterogeneity between studies, meta-analysis was not performed.

RESULTS

Very limited research has been conducted on the accuracy of external cause-of-injury coding for injury-related hospitalisation using medical record review and recoding methodologies. Only five studies were found that matched the selection criteria.5–8 10 Table 1 summarises the details of these five studies.

Table 1 Summary of previous research using medical record recoding methods to examine the accuracy of external cause coding in injury-related hospital records

Study setting, population and study design

All published studies of this nature have been conducted using hospital data coded using ICD-9-CM. No studies have been conducted on ICD-10-AM, which has been used in Australia since 1998. Three of the studies were in the USA,6 7 10 one in New Zealand,5 and one in Australia.8

The number of case records reviewed ranged from 323 to 1670, with the reviewed records drawn from hospitalisations occurring between 1985 and 1998. All except one of the studies selected cases on the basis of a principal diagnosis of an injury (ICD-9-CM code range 800–999); the remaining study10 selected cases on the basis of the presence of an external cause code. Within the studies, a mixture of simple random sampling and stratified random sampling was used to select cases for review.
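To illustrate the case-selection step, the sketch below filters a hypothetical hospital separations extract to injury diagnoses and draws a stratified random sample using pandas; the column names, strata and sample size are invented for illustration and are not details reported by the reviewed studies.

```python
# Minimal sketch (hypothetical data) of selecting cases by principal injury
# diagnosis and drawing a stratified random sample, here stratified by hospital.
import pandas as pd

separations = pd.DataFrame({
    "record_id":    [1, 2, 3, 4, 5, 6, 7, 8],
    "hospital":     ["A", "A", "A", "B", "B", "B", "C", "C"],
    "principal_dx": ["805.2", "812.0", "850.9", "823.8", "910.0", "959.9", "820.8", "881.0"],
})

# Keep records whose principal diagnosis falls in the ICD-9-CM injury range 800-999,
# mirroring the selection rule used by most of the reviewed studies.
injury_cases = separations[
    separations["principal_dx"].str[:3].astype(int).between(800, 999)
]

# Draw an equal number of records from each hospital stratum (illustrative size).
sample = injury_cases.groupby("hospital", group_keys=False).sample(n=2, random_state=42)
print(sample)
```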

All of the studies used an independent coder to review and recode the selected medical records, and three of the studies specifically stated that attempts were made to blind the reviewer to the original codes.5 7 10 Only one of the studies stated that additional information was abstracted beyond the recoding task, with a narrative description of the cause of injury and the place of occurrence recorded on a separate form from the medical record.6

“Accuracy” measures and statistical analysis

Accuracy of coding was largely operationalised as the concordance/agreement between the original codes and the recoded data. Each of these studies examined accuracy in terms of levels of agreement, with complete external cause code agreement, agreement to the 4th digit ICD code, agreement to the 3rd digit ICD code, agreement at the group level, and disagreement being the main “accuracy” categories used. Differences in the assignment of intent and/or mechanism were explored in four of these five studies to identify where coding patterns diverged.5–7 10
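As a concrete illustration of these agreement levels, the sketch below compares original and recoded ICD-9-CM external cause (E) codes at the complete-code level and at two truncated levels; the code pairs and the character cut-points are hypothetical and are intended only to show the general approach used in recoding studies, not to reproduce any study's method.

```python
# Minimal sketch of percentage agreement between original and recoded ICD-9-CM
# external cause (E) codes at decreasing levels of specificity.
# The code pairs below are hypothetical, not data from the reviewed studies.

def strip_dot(code):
    """Drop the decimal point so codes can be compared as character prefixes."""
    return code.replace(".", "")

def agree(original, recoded, level=None):
    """True if the codes match on the first `level` characters
    (or match completely when `level` is None)."""
    a, b = strip_dot(original), strip_dot(recoded)
    return a == b if level is None else a[:level] == b[:level]

pairs = [
    ("E885.9", "E885.9"),  # identical codes
    ("E885.9", "E885.2"),  # same category, different final digit
    ("E812.0", "E819.0"),  # different categories within the transport block
    ("E960.0", "E968.8"),  # different categories within the assault block
]

for label, level in [("complete code", None),
                     ("category level (eg, E885)", 4),
                     ("broad block level (eg, E88_)", 3)]:
    matches = sum(agree(orig, recode, level) for orig, recode in pairs)
    print(f"{label}: {100 * matches / len(pairs):.0f}% agreement")
```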

Statistical analysis was largely descriptive, reporting the percentage agreement of coding. Two studies used logistic regression to identify correlates of coding accuracy, examining whether certain characteristics (such as hospital size, length of stay and patient age) were associated with a higher likelihood of a record being assigned a different code from the original.5 8
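To show how such an analysis might be set up, the sketch below fits a logistic regression of coding disagreement on a few record characteristics using statsmodels; the dataset is simulated and the covariate names are placeholders, so the output is purely illustrative and does not correspond to either study's model.

```python
# Minimal sketch (simulated data) of modelling correlates of coding disagreement
# with logistic regression, in the spirit of the two studies described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

records = pd.DataFrame({
    "hospital_size":  rng.choice(["small", "large"], size=n),
    "length_of_stay": rng.integers(1, 15, size=n),
    "patient_age":    rng.integers(1, 90, size=n),
})

# Hypothetical outcome: 1 if the recoded external cause differed from the original
# code; simulated here with a weak dependence on length of stay for illustration.
p_disagree = 1 / (1 + np.exp(-(-1.0 + 0.1 * records["length_of_stay"])))
records["disagree"] = rng.binomial(1, p_disagree)

model = smf.logit(
    "disagree ~ C(hospital_size) + length_of_stay + patient_age",
    data=records,
).fit(disp=0)
print(model.summary())  # odds ratios can be obtained via np.exp(model.params)
```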

Study findings

Studies examining external cause coding accuracy found that percentage agreement between coders ranged from 59% when very specific code assignment was examined to 95% when broad category assignment was examined (table 1).

The studies that evaluated the accuracy of the complete external cause code reported an average percentage agreement of 64% (59%,10 66%7 and 67%6). Where accuracy was examined to the 4th digit ICD code level (with errors in the 5th digit), the percentage agreement of coders was reported as 82% by both Langley et al5 and Langlois et al.6 In addition, the data of Langley et al showed 85% agreement to the 3rd digit ICD code.

The studies that examined percentage agreement of coding by code block found variable results across the different code blocks. LeMier et al7 examined the accuracy of coding by external cause mechanism (ie, the degree to which coders agreed on the way in which the injury was sustained), and found 87% agreement in mechanism of injury. Both LeMier et al and Smith et al10 examined the agreement of coders in terms of the intent (ie, unintentional, intentional self-harm, assault) and found that the percentage agreement in coding of intent was 95% and 86%, respectively.7 10 In terms of the accuracy of codes for unintentional falls, code agreement was on average 70% (66%,7 73%5). Motor vehicle traffic crashes were reported as having an agreement in code assignment of 63–81%.5 7 Finally, Langley et al5 examined the extent of coder agreement within the intent blocks of intentional self-harm and assault and found percentage agreement within these code blocks of 83% and 86%, respectively.

MacIntyre et al8 examined the types of errors in external cause code assignment and identified three categories of error: errors of omission (ie, missing external causes); superfluous external cause codes (ie, unnecessary codes); and discrepant external cause codes (ie, those where coders did not agree on code assignment as traditionally examined in recoding studies). They found that errors of omission accounted for 21% of errors identified, superfluous external cause codes accounted for 11% of errors identified, and discrepant external cause codes comprised 68% of errors identified.

Key points

  • Injuries are a major cause of morbidity and mortality in the Australian population.

  • Hospital separations and mortality data are routinely used to monitor and assess injury causation and incidence in order to inform injury research, policy and practice.

  • There is currently a limited empirical basis for validating these data, with only five studies identified internationally that examined the accuracy of external cause coding in hospital separation data (all using ICD-9-CM, which has been superseded in many countries by ICD-10).

  • The accuracy of external cause coding using ICD-9-CM ranges from ∼64% when exact code agreement is examined to ∼85% when agreement for broader groups of codes is examined.

  • Although researchers may be able to use broad external cause code blocks with some level of confidence, they should exercise caution for very specific external cause codes until further research is conducted to validate these data using the current version of ICD.

Two studies examined correlates of coding accuracy using logistic regression. The first study examined the size of the hospital as a correlate of external cause coding accuracy, controlling for the principal injury type.5 The second study examined several correlates of external cause coding accuracy, including whether the admission was an emergency admission, length of stay, number of diagnoses and procedures, type of injury, age of the patient, hospital, and mortality outcomes.8 Both studies found that none of the factors examined showed any significant correlation with coding accuracy.5 8

DISCUSSION

One of the most striking findings of this systematic review was the lack of research on the accuracy of external cause-of-injury coding in hospital records; only five papers met the inclusion criteria for this review. Although these data are used routinely to monitor and assess injury causation and incidence, to develop burden-of-disease estimates, and to inform injury research, policy and practice, there is currently a limited empirical basis for validating their quality.

This review shows that the accuracy of external cause coding using ICD-9-CM ranges from ∼64% when exact code agreement is examined to ∼85% when agreement to the 3rd digit level is examined. Differences in coding accuracy were evident when different external cause axes and code blocks were examined; that is, agreement levels differed depending on the intent assigned (eg, unintentional, intentional self-harm, assault) and on the mechanism of injury (eg, motor vehicle crash, fall). Thus, although researchers examining data from countries using ICD-9-CM may be able to use broad external cause code blocks with some level of confidence, they should exercise caution with very specific code blocks until further research is conducted to validate these data.

As all previous studies were conducted using ICD-9-CM, urgent research is needed to quantify the accuracy of external cause coding using ICD-10 (and the clinical variants of ICD-10 such as ICD-10-AM, ICD-10-CM and ICD-10-CA) and to validate the use of these data for injury surveillance purposes. ICD-10 external cause codes are very different from ICD-9-CM codes in terms of structure, and the clinical variants of ICD-10 provide additional codes for place of injury and activity at the time of injury, as well as increased specificity across code blocks. As a consequence, the accuracy of coding under the ICD-10 classification system can be expected to differ from that under ICD-9-CM.

REFERENCES

Footnotes

  • Contributors: KM contributed to the conceptual design of the manuscript, and was responsible for conducting the systematic literature review, writing the first draft of the manuscript, compiling all authors’ responses, and preparing the final version of the manuscript. ELE-M contributed to the conceptual design of the manuscript, assisted with the systematic literature review, and reviewed and commented on each draft of the manuscript. SW contributed to the conceptual design of the manuscript, provided context to the manuscript in terms of clinical coding processes, and reviewed and commented on each draft of the manuscript. RJM contributed to the conceptual design of the manuscript, provided context to the manuscript in terms of injury prevention implications, and reviewed and commented on each draft of the manuscript. JEH contributed to the conceptual design of the manuscript, and provided context to the manuscript in terms of injury surveillance implications.

  • Funding: This research is funded by an Australian Research Council Linkage Project grant, Injury Prevention and Control Australia, the Victorian Department of Human Services, and the Queensland Health - Health Information Centre.

  • Competing interests: None.