In a recent letter, Cummings and Rivara1 misstate my point regarding changes in estimated belt effectiveness in the mid-1980s using the comparison of front seat occupant pairs. They cite my statement, “What is not explained by the theory [about misclassification of seatbelt use by police] is the sudden gap in police reported use by the dead and survivors that appeared in the mid-1980s”2 as faulting them for not explaining why prevalence of seatbelt use changed from 1975 to 1998. How could anyone who uses the English language with a modicum of proficiency interpret “sudden” as 23 years and “gap in police reported use by the dead and survivors” as general prevalence of belt use?
Actually, a cursory look at the graph in Cummings' paper that I critiqued indicates that the major reduction in risk ratios indicative of seatbelt effectiveness occurred during a short period in the mid-1980s when belt use laws were being debated and initially enacted in a few states. I noted that this debate could have changed police behavior in belt use classification in crashes, a point they ignored. I also pointed out that reductions in deaths related to on-road observations of belt use prevalence, controlling for other factors, do not support their claim of 65%–70% belt effectiveness when used, a point they ignored.
I understand the distinction between what they call differential and non-differential misclassification. In a 1976 paper, I indicated how a small systematic error by police in assessing belt use in crashes would result in a large error in estimating belt effectiveness, a paper which Cummings dismissed as expressing “concern”.3 Cummings claims that his comparison of NASS investigators’ reports and police reports of belt use supports the non-differential misclassification theory, but that assumes that the NASS investigators possess the gold standard for assessing belt use. One of the major criteria for acceptance of research findings is plausibility. The risk ratios derived from post-1984 FARS and NASS data are not plausible given changes in belt use and death rates, controlling for other factors.
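To illustrate the mechanism at issue, a small calculation with purely hypothetical counts (not data from FARS or NASS) shows how a modest differential error in police reporting inflates an effectiveness estimate derived from comparing reported belt use among the dead and survivors:

```python
# Hypothetical counts, for illustration only: 1000 dead and 1000
# surviving front seat occupants, with true belt use of 40% among
# the dead and 50% among the survivors.
dead_belted, dead_unbelted = 400, 600
surv_belted, surv_unbelted = 500, 500

def effectiveness(db, du, sb, su):
    """Effectiveness estimated as 1 minus the odds ratio of reported
    belt use among the dead versus the survivors."""
    return 1 - (db / du) / (sb / su)

true_eff = effectiveness(dead_belted, dead_unbelted,
                         surv_belted, surv_unbelted)

# Differential misclassification: police record 10% of the belted
# dead (40 occupants) as unbelted; survivor reports stay accurate.
misclassified = int(dead_belted * 0.10)
reported_eff = effectiveness(dead_belted - misclassified,
                             dead_unbelted + misclassified,
                             surv_belted, surv_unbelted)

print(round(true_eff, 3))      # 0.333 - true effectiveness about 33%
print(round(reported_eff, 3))  # 0.438 - reported effectiveness about 44%
```

An error in only 40 of 2000 records shifts the estimated effectiveness from about 33% to about 44%, which is why a small systematic reporting bias can produce a large error in the estimate.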
So what is the big deal if seatbelts are standard equipment and reduce injury? Excessive claims of belt effectiveness lead to overemphasis on increasing belt use to the neglect of other needed policies. Belt use in the US is near 70% and yet about 32 000 occupants of passenger cars, sport utility vehicles, and light trucks are dying each year in collisions. In recent US Congressional hearings on sport utility vehicles, for example, spokespersons for the auto industry claimed that belt use is low in fatal sport utility vehicle rollovers, based on erroneous police reports in FARS, as if low belt use absolved the industry of making stable vehicles. If belt use were 100%, many people would nevertheless die and be maimed in rollovers of vehicles that are unnecessarily unstable.
Assessing belt use after the fact of a rollover is particularly problematic because crash forces in the body area where the belt touches the person are less severe in a laterally rotating vehicle than in more direct impacts with other vehicles and objects, so belt marks on the torso may be less evident and damage to the belts is less likely. People die more often from head injury when the roof crushes in, or from striking surfaces external to the vehicle if they are ejected. Police officers, and apparently NASS investigators, too often assume that an ejected occupant was unbelted when, in fact, rotation of the vehicle results in occupants slipping out of belts in some cases and belts becoming unlatched by impact on the latches in others. In both rollovers and non-rollovers, crash investigators may assume non-use of belts simply because the occupant died.
In a second letter, Koepsell et al also misrepresent what I wrote about their ill-considered use of imputation of missing values.4 They quote my statement, “… missing data on velocity changes in crashes were imputed partly from injury severity scores, again a cause imputed from an effect and then used as a control in the study, a true scientific ‘no-no’”. They construe that statement as saying that “Robertson argues that measures of crash outcome should not be used to impute values on a covariate which will later enter the main analysis as a predictor of crash outcome”. In fact, I would not publish a study if I had to rely on imputed data. In my opinion, their study should not have been done or published, given that more than 40% of cases in NASS have missing values of delta-V and the seatbelt use assessment contains the serious biases noted previously. If someone imputed values on a variable in more than 40% of the cases of an evaluation of efficacy and safety of a drug, the study would not likely be published or taken seriously if it were. Why should any less be acceptable in the study of injury control measures?
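The circularity can be made concrete with a deliberately simplified sketch, in which every number and the imputation rule itself are hypothetical rather than drawn from NASS: if missing delta-V is imputed from the injury score, then “controlling” for the imputed value amounts to controlling for the outcome itself, and the apparent belt effect disappears from the analysis.

```python
import numpy as np

# Hypothetical occupants: belt status, true delta-V (mph), and an
# injury score in which belt use reduces injury by 10 points.
belt    = np.array([0, 0, 0, 1, 1, 1], dtype=float)
true_dv = np.array([20, 30, 40, 20, 30, 40], dtype=float)
injury  = true_dv - 10 * belt

def belt_coefficient(dv):
    """Least squares fit of injury on an intercept, belt use, and a
    delta-V covariate; returns the estimated belt coefficient."""
    X = np.column_stack([np.ones_like(belt), belt, dv])
    coefs, *_ = np.linalg.lstsq(X, injury, rcond=None)
    return coefs[1]

# Controlling for the true delta-V recovers the protective effect.
print(belt_coefficient(true_dv))    # about -10.0

# Naive imputation rule (hypothetical): delta-V inferred from the
# injury score alone. The covariate is now a function of the outcome,
# and it absorbs the belt effect entirely.
imputed_dv = injury + 10
print(belt_coefficient(imputed_dv)) # about 0.0
```

With the true covariate the belt coefficient is −10, the protective effect built into the data; with the covariate imputed from the injury score, the coefficient collapses to zero, which is the sense in which a cause imputed from an effect cannot legitimately serve as a control.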
As a previous admirer of a substantial proportion of the research produced at the University of Washington’s Injury Prevention and Research Center by several of these same authors, it pains me to see them produce foolish papers and attempt to discredit a critic by distorting the criticism.