I offer brief rejoinders to Robertson's critique of my comments:
(a) Robertson may indeed have all the data available for the
specified vehicles in his statistical analysis. Nonetheless, the
theoretical underpinnings in any such statistical analysis assume an
infinite population from which the real-world data are drawn.
(b) I am not an adherent of the risk compensation hypothesis, which
is normally attributed to Wilde. Rather, I find Fuller's [1] learning
theory more satisfying. Whatever one may think about individual theories,
it is certainly the case that engineering interventions can entail
untoward side-effects that can undermine their expected beneficial
effects.
(c) Whatever the merits of engineering interventions, those who can
be adversely affected by them are often pedestrians and cyclists. In
the UK and many other European countries, a real problem relating to
obesity arises from the perceived and actual dangers of the roads for
pedestrians and cyclists; hence the widespread "school run", for example
[2]. This problem is associated, amongst other things, with engineering
interventions designed to improve conditions for drivers. It is unlikely
to be reversed by further such interventions.
References
1. Fuller R. On learning to make risky decisions. Ergonomics 1988;
31: 519-526.
2. Hillman M, Adams J, Whitelegg J. One false move: a study of
children's independent mobility. London: Policy Studies Institute, 1991.
In Table 2 on page 2 of the manuscript "Seatbelt and child-restraint
use in Kazakhstan: attitudes and behaviours of medical university
students," the last two questions focus on how often the respondent
fastened children appropriately. However, there is no option for
respondents who never rode with children in the past year. If that was
the case, respondents may have chosen the response "never", not because
they did not fasten the seatbelt or appropriate restraint but because
they did not travel with children. This would lead to false negative
findings. In addition, the total N for these respondents looks the same
as for the other questions, so it does not appear that filtering took
place. Since the sample was young and mostly single, it may be that they
were never in a vehicle with young children. I hope the authors can
provide some clarity about this issue.
Our study had the specific, stated objective of determining whether
New York’s ban on drivers’ use of hand-held phones led to short-term and
long-term changes in the use rates of hand-held phones while driving. Our
intent was not to assess the relative safety effects of hands-free versus
handheld devices. In the discussion, we note that any substitution of
hands-free phones for handheld phones may have diluted any potential crash
effects of the law, but that we were not able to assess the extent to
which such substitution took place.
There is a growing body of experimental research on the effects of
drivers’ phone use on driver performance. As we state in the introduction
to our study, there are experimental studies that suggest that performance
degradations are similar for conversations with hand-held or hands-free
devices. There also are studies that show impairments associated with the
manual aspects of using phones that are not fully hands-free. It is
unknown whether the findings of experimental studies apply to real-world
driving, and the association between phone use and crash risk is not
addressed with these methods. Thus, we do not agree with you that “there
is a large body of evidence showing that there is no safety benefit to be
gained from hands free devices.” Rather we believe it is an unsettled
issue. We also are not aware of evidence from real-world studies of
drivers to support the assertion that bans on drivers’ use of hand-held
phones produce more frequent and longer calls on hands-free devices with
more negative consequences. This also remains an unsettled issue.
Reference
1. Hockey. Handheld vs handsfree [electronic response to McCartt AT, Geary LL. Longer term effects of New York State’s law on drivers’ handheld cell phone use]. injuryprevention.com 2004. http://ip.bmjjournals.com/cgi/eletters/10/1/11#42
Changes in %HI unrelated to %HW
Common sense tells us that if the reduction in head injuries were due to helmet laws, percent head injury (%HI) should decline in response to the increase in percent helmet wearing (%HW).
Fig 1 shows this was not the case either in Ontario or British Columbia (BC), two provinces containing 90% of the population of helmet-law provinces in Canada. The greatest decline for BC was a fall of 7.4 percentage points from 94/95 to 95/96 (before the law was enacted). The greatest decline for Ontario (5.4 percentage points) was from 96/97 to 97/98, when helmet wearing was also declining.
The more recent data for Ontario confirm the lack of relationship. The downward trend continues, despite a return to pre-law helmet wearing by 1999. The lowest %HI was for 01/02 when helmet wearing had returned to pre-law levels.
The lack of relationship between %HW and %HI would convince most people that Canada's helmet laws had little benefit, so it is difficult to understand why Macpherson et al. continue to claim:
a) that the data from 94/95 to 97/98 show the laws were effective and
b) we can't draw any useful conclusions from the more recent data, because there is no "concurrent comparison group"[5].
Other road safety measures
Even with "concurrent comparison groups", common sense is needed to interpret them correctly. Fig 2 shows a greater declining trend in fatal and serious pedestrian injuries in helmet-law provinces than no-law provinces. The divergence in pedestrian trends obviously wasn't caused by helmet laws. So it seems illogical to claim the trends in %HI (which bear no relationship with the timing of the laws) demonstrate that the Canadian legislation was effective. The trends could have had similar underlying causes (e.g. safer roads) rather than those for cyclists being due simply to helmet laws.
Although bike/motor vehicle collisions (BMVC) cause only a small proportion of total injuries to cyclists, a study of all brain injuries to cyclists in an entire year in San Diego county found that BMVC caused every single fatal or seriously debilitating brain injury.[6] Overall road safety is therefore a major determinant of the risk of debilitating head injury.
A peer-reviewed paper in 1996 showed very strong relationships (r = 0.94, P ≤ 0.02) between the %HI of child cyclists and pedestrians in Victoria, Australia.[7] The %HI of pedestrians fell from 18.3% in the year before the helmet law to 10.7% in the second year of legislation, compared with a decline from 15.6% to 13.5% for cyclists.[7] The greater decline for pedestrians strongly suggests the main cause was not the helmet law. These results supersede those of Cameron et al. (1994)[8]. I cannot understand why Macpherson's response to Wardlaw[5] still cites Cameron as evidence that Victoria's helmet law "was effective in reducing head injuries", instead of later research pointing out the significant effects of reduced cycling and large declines in %HI of pedestrians.
Timely reporting of results
Finally, Malcolm Wardlaw is correct that timely and accurate reporting of results is important. Numbers counted in the 1999 Ontario survey were published in 2001, but helmet wearing rates for the same year (1999) were not published until August 2006. If, as Macpherson says, she agrees that timely reporting is important, why was the vitally important information that helmet wearing rates returned to pre-law levels by 1999 not mentioned earlier?
If, in 2005, the BMA had known that enforcement in Ontario was ineffective and %HW was at pre-law levels from 1999 onwards, as well as that (as Fig 1 shows) neither the timing of helmet laws nor the changes in %HW bears any relationship to the trends in %HI, their stance on helmet legislation might have been different.
References
1. Transport Canada. Road Safety in Canada - 2003. Report prepared for the Canadian Council of Motor Transport Administrators (CCMTA) Standing Committee on Road Safety Research and Policies: Road Safety and Motor Vehicle Regulation Directorate, 2006. Available at: http://www.tc.gc.ca/roadsafety/tp/tp13951/2003/pdf/tp13951%20EN-S.pdf
2. Macpherson AK. An evaluation of the effectiveness of bicycle helmet legislation (powerpoint presentation, available at http://www.circl.pitt.edu/home/webinars/ppt/macphersonwebinar.ppt), 2006.
3. Macpherson AK, Parkin PC, To TM. Mandatory helmet legislation and children's exposure to cycling. Inj Prevent 2001;7(3):228-30.
4. CIHI. Injury Hospitalizations (includes 2000-01 and 2001-02 data): Canadian Institute for Health Information, 2003.
5. Macpherson AK, Macarthur C, To T, Wright J, Chipman M, Parkin P. Reply to Mr. Wardlaw's letter "Timely reporting of research is necessary": E-letter, Injury Prevention http://ip.bmj.com/cgi/eletters/12/4/231#1667, 2007.
6. Kraus JF, Fife D, Conroy C. Incidence, severity, and outcomes of brain injuries involving bicycles. Am J Public Health 1987;77(1):76-8.
7. Hillman M. Health benefits of cycling greatly outweigh loss of life years from deaths. BMJ 1997;314:69.
8. Cameron MH, Vulcan AP, Finch CF, Newstead SV. Mandatory bicycle helmet use following a decade of helmet promotion in Victoria, Australia--an evaluation. Accid Anal Prevent 1994;26(3):325-37.
This article brings into perspective a dangerous and habitual
practice of leaving infants and children unattended in vehicles and its
serious ill effects on health, most notably death. Although the article
describes the scenario in a developed Western setting, such incidents
are increasingly common in developing Asian countries like India and
require immediate attention.
The study addresses an important and often ignored issue: the
occurrence of hyperthermia in children left unattended in vehicles.
There is a strong need for print and social media to bring such
shocking occurrences into the public domain more forcefully, so that
people, especially parents and caretakers, become aware of such
problems and are more careful and attentive. The community as a whole
should bring about the basic changes in attitude and perception needed
to prevent such dreadful events.
We agree with the authors that policy makers and law makers need to
take a serious and stringent look at such issues so that timely
interventions and strict regulations can be put into place. The
community and government should work together to develop guidelines and
safety norms and, above all, to create awareness amongst parents.
Lastly, we would like to state that in a country like India, with warm
winters and ambient temperatures high in most parts of the country
throughout the year, along with little awareness of such a catastrophic
phenomenon, a study in the Indian context is urgently required.
The paper asserts that the diminution of risk is due to the increase
in cyclists. Could it be the other way round: that more people cycle as
it becomes less risky (due to unknown factors...)?
The risk reduction is purely for cyclists/walkers. Would the
population as a whole experience less risk if they all drove? In extremis,
if all cycled, they would have no cars to collide with, while if none
cycled, there would be zero cycling risk.
It would be instructive to know if walkers/cyclists reduced their
risk of heart attacks and other diseases mediated by regular exercise.
The study by Denton and Fabricius [1] uses local newspaper accounts to discover
instances of defensive gun use in the Phoenix, Arizona area during a brief
period in 1998 and concludes that there are far fewer such occurrences
than reported by criminologists who performed nationwide telephone
surveys.
While telephone surveys are certainly vulnerable to some significant
sources of bias, including those related to recall and self-reporting, it
is hard to imagine that anyone would consider the methods used by Denton
and Fabricius to be sound.
We believe that this work is fundamentally flawed for at least two reasons.
First, the findings of criminologists confirm the intuitively obvious fact
that most instances of defensive gun use are never reported to the police.
Those who successfully use their guns in self-defense often would just as
soon not involve the police. If a shooting does not result in a wounding
or death, the police might very well never learn of the occurrence. If an
individual wounded in such an incident did not seek medical attention
(which would be subject to mandatory reporting to authorities), the police
(again) would likely never learn of the incident. Finally,
criminologists' surveys cited by Denton and Fabricius indicate that in the
majority of defensive gun uses the firearm is not actually discharged.
Instead, mere brandishing of the weapon deters the intentions of a
criminal.
Second, the authors apparently assume that newspaper accounts are a
reliable means of counting incidents of defensive gun use reported to the
police. As economist John Lott documents in his recent book (The Bias
Against Guns), newspapers routinely run stories of the criminal use of
guns but rarely report defensive gun uses, which are considered much less
"newsworthy." This judgment of newsworthiness may simply be based on the
notion that an incident in which nobody actually got shot is less
interesting to readers, but it probably also reflects the well-documented
anti-gun bias of news reporters and editors.
Given that the data collection methods employed by Denton and Fabricius
are clearly inadequate to discover the actual number of defensive gun uses
in the area and during the time period they attempted to examine, it is
certainly impossible to use their data as the basis for drawing any valid
conclusions.
I must say that I am nothing short of astonished that a journal produced
by the elite BMJ Publishing Group would have accepted this manuscript for
publication.
Reference
1. Denton JF, Fabricius WV. Reality check: using newspapers, police reports, and court records to assess defensive gun use. Inj Prev 2004;10:96-98.
The question before the reader is this: is Olivier and Walter's
reanalysis[1] of Walker's data[2] constructed around the false claim
that increasing the sample size increases the risk of Type I
errors;[3] or around "increasing power when computing sample
size leads to an increase in the probability of a type I error"[4]--
and is the latter claim true or false anyway, if it means anything at
all?
There is no space here to properly dispense with this matter, or the many
other faults of Olivier and Walter's reanalysis. This is done
elsewhere.[5, 6] Instead I confine myself to three observations.
1. "Increasing power when computing sample size" means sample
size is a varying output (result), while power is a variable input.
Since the probability of Type I errors is claimed to thereby
increase, it too is an output. But then no result is forthcoming,
because Olivier and Walter are positing one equation in the two
unknowns. Thus their counter-assertion here is also false: the Type I
error level is left undetermined, free to be chosen as seen fit. And
indeed Olivier and Walter saw fit to choose exactly the same
criterion for statistical significance as Walker: alpha = 0.05.[1, 2]
How then did Olivier and Walter[4] seemingly give an example where the
Type I error rate thereby increased? While purporting to increase
power "when computing sample size", in fact they held
sample size fixed.
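This statistical point can be checked directly. The following is my own illustrative sketch, not part of the original exchange: a standard-library Python simulation (using a normal approximation to the t-test p-value) drawing both samples from the same distribution, so that every rejection is a Type I error. With the significance criterion held fixed at alpha = 0.05, the observed Type I error rate stays near 0.05 however large the sample grows.

```python
# Illustrative simulation: with alpha fixed, the Type I error rate of a
# two-sample test does not grow with sample size. Standard library only;
# the p-value uses a normal approximation to the t distribution.
import math
import random
import statistics

def two_sample_p(x, y):
    """Two-sided p-value for Welch's t statistic, normal approximation."""
    t = (statistics.fmean(x) - statistics.fmean(y)) / math.sqrt(
        statistics.variance(x) / len(x) + statistics.variance(y) / len(y)
    )
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

def type_i_rate(n, trials=1000, alpha=0.05, seed=1):
    """Fraction of null simulations rejected at the given alpha."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        # Both samples come from the SAME distribution: the null is true,
        # so every rejection counted here is a Type I error.
        x = [rng.gauss(0, 1) for _ in range(n)]
        y = [rng.gauss(0, 1) for _ in range(n)]
        if two_sample_p(x, y) < alpha:
            rejections += 1
    return rejections / trials

for n in (10, 100, 1000):
    print(n, round(type_i_rate(n), 3))
```

Whatever n is chosen, the rejection rate under the null hovers around the chosen alpha; only power against false null hypotheses changes with sample size.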
2. Here Olivier and Walter claim an effect size of d = 0.12, and
elsewhere d < 0.2, "is trivial by Cohen's
definition". That too is false: Cohen never defined any effect
size as trivial.[7] He proposed only that d = 0.2 was "small"
but not trivial.[7] The numerical example Cohen gave to introduce his
concept of d was in fact d = 0.1, without any
disparaging remark-- but with the corresponding sample sizes
extensively tabulated.[8]
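To make the magnitudes concrete, here is a sketch of my own (standard-library Python, not taken from Cohen or the letters) of the usual normal-approximation sample-size formula for a two-group comparison of means, n per group = 2((z_{1-alpha/2} + z_{1-beta})/d)^2. It shows that an effect as small as d = 0.1 is perfectly detectable; it simply demands a large sample, which is why Cohen tabulated the corresponding sizes rather than dismissing the effect.

```python
# Sketch of the normal-approximation sample-size formula for a
# two-sample comparison of means at effect size d (Cohen's d):
#     n per group = 2 * ((z_{1-alpha/2} + z_{1-beta}) / d) ** 2
import math

def z_quantile(p):
    """Standard normal quantile, found by bisection on the erf-based CDF."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def n_per_group(d, alpha=0.05, power=0.80):
    """Sample size per group to detect effect size d (two-sided test)."""
    z_a = z_quantile(1 - alpha / 2)
    z_b = z_quantile(power)
    return math.ceil(2 * ((z_a + z_b) / d) ** 2)

print(n_per_group(0.1))  # 1570 per group for d = 0.1
print(n_per_group(0.2))  # 393 per group for Cohen's "small" d = 0.2
```

Halving the effect size roughly quadruples the required sample, which is the practical content of calling d = 0.2 "small" rather than trivial.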
3. Seven millimetres is approximately one-third the diameter of
handlebar tubing, and seven centimetres is a vital fraction of the
diameter of a human limb, skull, or torso. If, after an initial set-up
at whatever passing distance, an unexpected excursion of driver or
rider changes the gap to zero, and helmet wearing brings the contact
one or other of those distances closer, then the contact goes from none,
to brushing, to one with sufficient mechanical purchase to be
disastrous. In other words,
finding out what effect size is clinically significant or not is the
business of the scientist, not the statistician.
I imagine the reader may not yet have enough information to decide
which false claim Olivier and Walter's reanalysis is based upon. I also
imagine that, for most readers, knowing it is one or another or all of
them is enough.
References
1. Olivier J, Walter SR. Bicycle helmet wearing is not associated with close motor vehicle passing: a re-analysis of Walker, 2007. PLoS One 2013;8(9):e75424. doi:10.1371/journal.pone.0075424
2. Walker I. Drivers overtaking bicyclists: objective data on the effects of riding position, helmet use, vehicle type and apparent gender. Accident Analysis and Prevention 2007;39:417-425. doi:10.1016/j.aap.2006.08.010
3. Kary M. Unsuitability of the epidemiological approach to bicycle transportation injuries and traffic engineering problems. Inj Prev Published Online First: doi:10.1136/injuryprev-2013-041130
4. Olivier J, Walter SR. Too much statistical power can lead to false conclusions: a response to 'Unsuitability of the epidemiological approach to bicycle transportation injuries and traffic engineering problems' by Kary. Inj Prev Published Online First: doi:10.1136/injuryprev-2014-041452
5. Kary M. False and more false than ever. PLoS One [eLetter], published 8 Dec 2014. http://www.plosone.org/annotation/listThread.action?root=84090
6. Kary M. Some context. PLoS One [eLetter], published 9 Dec 2014. http://www.plosone.org/annotation/listThread.action?root=75589
7. Cohen J. A power primer. Psychol Bull 1992;112:155-159.
8. Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd revised edn. Orlando, FL: Academic Press, 1977.
The article by Macpherson et al[1] relies on surveys from 111 sites around East York (Toronto), and some questions remain about these surveys. Data from two reports provide confusing indications of the level of cycling. In 2001[2], figures were published for the hourly rate for several years; by comparison, in 2003[3], counts for 8 years were provided based on 1 hour of observation at each site. An hourly rate is calculated based on the 111 sites and 1 hour per site: 'A' divided by 111. The table below shows the data:
Table 1
* data from 2003: 568 wearing helmets out of 1227 is 46%, not 45% as quoted in ref 1.
Robinson [4] stated
"The Canadian study had 111 pre-selected sites, each recorded for one hour, but weather conditions were not reported (though elsewhere 1999 was described as a particularly sunny summer; A K Macpherson, personal communication). Table 1 in the Macpherson et al paper[2] shows that, in some years, some sites were recorded more than once. Moreover, observations were not at the same time of day and day of the week each year (A K Macpherson, personal communication)"
A number of questions arise:
1) Can extra count details be added to the table for 1999 and 2001?
2) Why were the counts for the years 1993 to 1997 quite different in the two published reports?
3) Why should the total hours of surveys calculated vary from 112 hours to 425 hours?
4) Why were the observation hours not a multiple of 111, as per the number of sites?
5) Which survey details would be more likely to reflect the true level of cycling activity, 2001 or 2003, if either?
6) Could there have been a 17% drop in cycling (2003 data: average count pre-law 1275, post-law 1059)?
7) Can other data be added to the table?
8) How reliable are the surveys for indicating the overall level of cycling activity for those aged up to 19 years?
9) Helmet use of 46% before legislation is identical to the 46% recorded in 2001, which suggests no appreciable effect from legislation.
Extra information would help provide a clearer picture and would be most appreciated.
References
1. Macpherson AK, Macarthur C, To TM, et al. Economic disparity in bicycle helmet use by children six years after the introduction of legislation. Inj Prev 2006;12:231-235
2. Macpherson AK, Parkin PC, To TM. Mandatory helmet legislation and children’s exposure to cycling. Inj Prev 2001;7:228–30.
3. Parkin PC, Khambalia A, Kmet L, et al. Influence of socio-economic status on the effectiveness of bicycle helmet legislation for children: a prospective observational study. Pediatrics 2003;112:e192
4. Robinson DL. Helmet laws and cycle use [research letter]. Inj Prev 2003;9:380-381.
Injury Prevention recently explored firearm issues,
introducing what might be called the “Fabricius Method”
of analysis. Invented by ASU professor William
Fabricius with his 12-year-old son John Denton, it
works simply enough. They counted gunfire stories in
one newspaper, and concluded guns are rarely used
for anything good. I imagine many heartily embrace this
conclusion.
Newspaper reports, however, are an easily impeachable,
incomplete data set lacking any controls. They are
selective, commercially driven, an arbitrary batch of
anecdotes. Scientific, statistically valid conclusions
cannot be thus derived. Additionally, newspaper bias
on guns is demonstrably great.[1]
Fabricius-and-son “found” Maricopa County had two
defensive gun uses (DGUs), seven gunshot suicides
and 81 gunshot incidents in 103 days. However, police
precincts locally receive gunshot reports in the
thousands. Official Arizona mortality reports suggest
161 gunshot suicides [2] during the study period, not
seven. If Fabricius’ count is 23 times too low, as
suicides imply, two DGUs represent 46 lives
saved/crimes prevented. Similar factors are posted on
my website, gunlaws.com.
If the team had used USA Today instead of a
community newspaper, the Fabricius Method would
have found zero lives saved and zero crimes prevented
by gunfire, for the entire country, for an entire year
(2001).[3] That is not science.
It is as if they compared obituaries and births, and
concluded America is terminal. The Fabricius Method
would find a preponderance of Blacks are athletes,
entertainers or criminals.
Fabricius-and-son derived hurtful, anti-human-rights
conclusions without support. They denigrated 13
scholarly reports that uniformly conflict with their
ill-advised non-science.[4]
Injury Prevention injured itself by publishing such
unprofessional work. A retraction is warranted, with
support for this methodology and its spurious
conclusions disavowed.
Fabricius should make clear whether ASU endorses
his work, as he implies, or extricate that fine university
from this humiliating Bellesiles-like debacle.
I appeal to you: Do not let your personal desire to reach
"The Fabricius Conclusion" (guns are bad)
compromise your professional judgment about "The
Fabricius Method" (counting local news stories is a
valid measure of firearms activity).
Alan Korwin
Author: Gun Laws of America
References
1. Lott Jr. JR, The Bias Against Guns.
Washington, DC: Regnery Publishing Inc. 2003; Bovard
J, Lost Rights. New York, NY: Palgrave - St. Martins -
Griffin 2000; Goldberg B, Bias. Washington, DC:
Regnery Publishing Inc.; Kates Jr. DB, and Kleck G, The
Great American Gun Debate: Essays on Firearms and
Violence. San Francisco: Pacific Research Institute for
Public Policy 1991.
2. Arizona Dept. of Health Services Mortality Report,
Suicide Deaths by Gender, Means of Injury and Year,
Arizona Residents, 1992 - 2002.
3. Lott Jr. JR, The Bias Against Guns. Washington, DC:
Regnery Publishing Inc. 2003: 40.
4. Kleck G, Gertz M. “Armed Resistance to Crime: The
Prevalence and Nature of Self-Defense with a Gun”.
Journal of Criminal Law and Criminology 1995.
Conflict of Interest Statement
I have written/co-written seven books on gun laws in
America, including the unabridged guide to federal gun
law (“Gun Laws of America”), and belong to:
The Brady Campaign to End Gun Violence
The National Rifle Association
The American Civil Liberties Union
The Arizona Civil Liberties Union
Gun Owners of America
The Society of Professional Journalists
The Arizona Book Publishing Association
and numerous other groups.