
Validity of smoke alarm self-report measures and reasons for over-reporting
Rebecca Stepnitz1, Wendy Shields1, Eileen McDonald2, Andrea Gielen2

  1. Department of Health Policy and Management, Johns Hopkins School of Public Health, Baltimore, Maryland, USA
  2. Department of Health Behavior and Society, Johns Hopkins School of Public Health, Baltimore, Maryland, USA

Correspondence to Rebecca Stepnitz, 3621 44th Avenue South, Minneapolis, MN 55406, USA; step0310{at}umn.edu

Abstract

Objectives Many residential fire deaths occur in homes with no or non-functioning smoke alarms (SAs). Self-reported SA coverage is high, but studies have found varying validity for self-report measures. The authors aim to: (1) determine over-reporting of coverage, (2) describe socio-demographic correlates of over-reporting and (3) report reasons for over-reporting.

Methods The authors surveyed 603 households in a large, urban area about fire safety behaviours and then tested all SAs in the home. 23 participants who over-reported their SA coverage were telephoned and asked why they had misreported.

Results Full coverage was reported in 70% of households but observed in only 41%, with a low positive predictive value (54.2%) for the self-report measure. Most over-reporters assumed alarms were working because they were mounted or did not think a working alarm in a basement or attic was needed to be fully protected.

Conclusions If alarms cannot be tested, researchers or those counselling residents on fire safety should carefully probe self-reported coverage. Our findings support efforts to equip more homes with hard-wired or 10-year lithium battery alarms to reduce the need for user maintenance.

  • Smoke alarms
  • fire prevention
  • self-report validity
  • interventions
  • burn
  • public health
  • education
  • behavioural
  • information tech
  • community
  • child
  • counselling
  • evaluation
  • psychological
  • violence


Background and introduction

In 2010, residential fires were responsible for an estimated 13 800 injuries and 2640 deaths among civilians in the USA.1 In all, 40% of residential fire deaths between 2003 and 2006 occurred in homes with no smoke alarms (SAs), and another 23% occurred in homes where an alarm was present but was not functioning due to missing, disconnected or dead batteries.2 SAs, when present and functioning, have demonstrated their effectiveness in reducing the risk of death by a half in the event of a fire.3 The National Fire Protection Association recommends that residential homes have an SA in every bedroom, outside every sleeping area and on every level of the home.4

A national cross-sectional random-digit dial telephone survey found that 93% of surveyed households reported having at least one working SA on every level of their home.5 However, this estimate, like most national and many local estimates, is based on self-reported data, and studies using observation to validate such reports have had varied results. One study in King County, Washington, found high positive predictive values (PPVs) for self-reports of most safety behaviours including a PPV of 89% for having SAs on every level of the home (the observation did not assess whether alarms were functioning).6 Another study in a low-income area of Oklahoma found that although 66% of households reported having any working SAs, when tested, only 49% of households had them,7 while another study in Dane County, Wisconsin, found a 90% PPV using the same measure.8 Earlier research conducted by two of the present authors with low-income families in Baltimore found a low PPV of 52% for self-reports of having at least one working SA. The PPV was even lower (26%) when comparing self-reports of having a working alarm on every level with observed coverage.9

Low PPVs, indicating over-reporting of SA presence and functional status, are troubling for both research and practical reasons. They raise the methodological concern of social desirability and undermine the internal validity of studies that rely on self-report. Perhaps more importantly, they suggest that many individuals may have a false sense of security in their SA coverage when they are, in reality, out of compliance with safety recommendations. We could find no studies that investigated the extent to which social desirability bias is the explanation for over-reporting having working SAs. During a survey conducted in randomly selected households in a large urban area, we had the opportunity to ask about SA coverage, test alarms for functionality and follow-up with over-reporters. The aims of this paper are to: (1) determine over-reporting of SA coverage, (2) describe socio-demographic correlates of over-reporting and (3) report reasons given by residents for over-reporting.

Methods

Data collection

Between July and December 2009, we completed interviews and home observations with 603 households selected from 12 census tracts in East Baltimore, Maryland, a relatively low-income urban area. In three waves, 3503 of the 9826 households in the study area were randomly selected, first contacted via mail with a project letter and then visited by interviewers during daylight hours, Monday through Saturday. Selected households were visited up to five times before being deemed unresponsive. Random selection waves occurred approximately 2 months apart, after most households from the prior selection had been resolved (surveyed, refused to be surveyed or deemed unresponsive).

From the 3503 addresses, we excluded 193 that refused via telephone in response to the project letter and another 39 that participated in a pilot test of the survey application. Data collectors attempted to visit the remaining 3271 addresses in person and excluded another 2144 for various reasons (vacant, not residential addresses, apartment complexes the Fire Department was not responsible for, no eligible resident). Of the remaining 1127 eligible households, 515 (45.7%) refused participation. Initial surveys were conducted with 612 participants (54.3% of eligible households); nine with incomplete data were removed, leaving a final sample of 603 completed surveys (see figure 1).

Interviews were conducted inside the home with an English speaking adult resident who gave informed consent to participate. During the last 2 months of the household survey, which followed the third wave of random selections, all (n=30) respondents who over-reported their SA coverage (15% of the 199 total respondents who completed the initial survey at that time) were recontacted by telephone and 23 completed the follow-up interview. One respondent refused the follow-up interview; the remaining six could not be reached. The study was approved by the Johns Hopkins Institutional Review Board.

Measures of socio-demographics, SA coverage and follow-up

During the initial survey, socio-demographic measures were queried for descriptive purposes and self-reported by respondents. Racial and ethnic categories were the same as those used in the 2000 US Census. Respondents were asked questions about their fire safety-related behaviours. After the questions were completed, data collectors tested all SAs in the home by pressing the ‘test’ button to determine whether the battery was functioning. Respondents were made aware of the home observation during the consent process, before the interview began. Self-reported SA coverage, defined here as having a working SA on every level of the home including the attic or basement, was measured by asking ‘Do you have a working smoke alarm on every level of your home?’ without further prompting. All initial survey measures were pilot tested with study area community members to improve wording and help ensure that respondents understood the questions.

The telephone follow-up consisted of four questions aimed at better understanding reasons for over-reporting and prompting suggestions for improving the wording of questions to reduce over-reporting (questions are included in the results, table 1).

Table 1

Comparison of self-reported and observed smoke alarm coverage

Analysis

PPV and negative predictive value (NPV) were calculated for responses to the question ‘Do you have a working smoke alarm on every level of your home?’ Among those who reported having working SAs on every level, PPV was calculated as the percent who were observed to have working SAs on every level (ie, percent that were true ‘yes’ responses). Among those who reported not having working SAs on every level, NPV was calculated as the percent who were observed to not have working SAs on every level (ie, percent that were true ‘no’ responses). Specificity was calculated as the percent of people observed to not have a working SA on every level who reported not having an SA on every level. Sensitivity was calculated as the percent of people observed to have a working SA on every level who reported having a working SA on every level. Bivariate analyses comparing socio-demographic characteristics of over-reporters versus correct reporters were conducted using SPSS V.17.0. Responses to the two follow-up questions that elicited yes/no responses were tallied. Responses to the two more open-ended follow-up questions were coded according to common themes that arose during a preliminary review of the collected follow-up responses.
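The four validity measures are simple ratios over a 2×2 table of self-report versus observation. A minimal sketch follows; the cell counts used in the example are approximate reconstructions from the percentages reported in the Results (178 over-reporters, 17 under-reporters, 559 analysed households), not figures taken directly from the paper's tables:

```python
def validity_measures(tp, fp, fn, tn):
    """Validity of a yes/no self-report against an observed 'gold standard'.

    tp: reported yes, observed yes (correct 'yes')
    fp: reported yes, observed no  (over-reporters)
    fn: reported no,  observed yes (under-reporters)
    tn: reported no,  observed no  (correct 'no')
    """
    return {
        "ppv": tp / (tp + fp),          # true 'yes' among all 'yes' reports
        "npv": tn / (tn + fn),          # true 'no' among all 'no' reports
        "sensitivity": tp / (tp + fn),  # 'yes' reports among observed-covered homes
        "specificity": tn / (tn + fp),  # 'no' reports among observed-uncovered homes
    }

# Approximate cell counts reconstructed from the Results section:
m = validity_measures(tp=213, fp=178, fn=17, tn=151)
# m["ppv"] is about 0.54 and m["npv"] about 0.90, in line with the
# reported 54.2% and 90%.
```

Note that PPV and NPV depend on the prevalence of observed coverage in the sample, whereas sensitivity and specificity do not, which is one reason validation studies in different populations report such different PPVs.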

Results

Characteristics of respondents

Respondents were mostly female subjects (70%), the head of the household (81%), and black or African American (62%). Approximately half of respondents rented rather than owned their homes, 86% had a high school diploma or more years of education, and 49% had a per capita income of $10 000 or less.

Self-reported SA behaviour

A total of 44 respondents did not know their SA status and were excluded from the analysis (observed coverage status was unknown for one; of the remaining 43, 13, or 30.2%, had full SA coverage upon observation). Of the remaining 559 respondents, 70% reported having a working SA on every level of their home.

Validity of self-report and characteristics of over-reporters

By observation, 41% of households had a working SA on every level of the home. In total, 65% of the sample (n=364) correctly reported their SA coverage and 35% (n=195) did not. The self-reported measure of SA coverage showed low validity with a PPV of 54.2% (95% CI 49.2% to 59.2%) when compared with the ‘gold standard’ of a data collector testing the alarms (see table 2). The NPV was 90% (95% CI 85.5% to 94.5%). The sensitivity was 92.5% (95% CI 89.1% to 95.9%) and the specificity was 46.2% (95% CI 40.8% to 51.6%). The 17 respondents who reported that they did not have working SAs but who actually did were excluded from further analyses.
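The 95% CIs quoted above are consistent with a simple Wald (normal-approximation) interval, p ± 1.96·√(p(1 − p)/n). As a sketch, assuming the PPV denominator is roughly the 391 households that reported full coverage (an approximation inferred from 70% of 559; the paper does not state the exact n):

```python
import math

def wald_ci_95(p, n):
    """95% Wald confidence interval for a proportion p estimated from n observations."""
    half_width = 1.96 * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# PPV of 54.2% with an assumed denominator of 391 'yes' reports:
lo, hi = wald_ci_95(0.542, 391)
# (lo, hi) is close to the reported interval of 49.2% to 59.2%.
```

The Wald interval is the textbook default for proportions of this size; for proportions near 0 or 1, or small n, a Wilson interval would be preferable.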

Table 2

Socio-demographic characteristics of the sample by smoke alarm coverage reporting status

The 178 over-reporting respondents were more likely to be black or African American (χ2=10.96, p=0.001) and not the head of the household (χ2=7.219, p=0.004) than the 364 respondents who correctly reported their SA coverage (see table 3). Female subjects and those living in a household earning $10 000 or less per capita were somewhat more likely to be over-reporters, although this difference was not statistically significant at p=0.05.
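The comparison above is a standard chi-square test of independence on a 2×2 table (over-reporter vs correct reporter, by characteristic). The paper's actual cell counts are in its table and are not reproduced here, so the counts below are purely illustrative; for a 2×2 table [[a, b], [c, d]] the statistic reduces to the shortcut formula χ² = N(ad − bc)² / ((a+b)(c+d)(a+c)(b+d)):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction)
    for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Illustrative (hypothetical) counts only: rows are over-reporters vs
# correct reporters, columns a characteristic split such as black vs not.
chi2 = chi_square_2x2(120, 58, 190, 174)
# chi2 exceeds 3.84, the critical value at df=1 and p=0.05, so this
# hypothetical table would show a significant association.
```

In practice one would use a library routine (e.g. a contingency-table test in a statistics package, as the authors did with SPSS) rather than the shortcut formula, but the arithmetic is the same.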

Table 3

Follow-up questions exploring reasons for over-reporting smoke alarm coverage

Reasons for over-reporting

There were no significant differences in the socio-demographic characteristics between the 23 follow-up respondents and the 178 over-reporters in the total sample. Twelve (52%) of the follow-up respondents said they assumed all of their alarms were working because they were mounted and/or not beeping to signal the need for a battery change (see table 3). Another five (22%) said they thought they had adequate coverage even though they knew one or more levels of the home were not covered by a working SA (often the basement or attic). The remainder of the responses related to forgetting to put alarms back up and assuming that someone else had made sure they were working.

When asked for suggestions on changing the wording of the self-report measure questions to increase response accuracy, nine (39%) had no suggestions. Another nine recommended prompting the respondent to check the alarms or think more carefully about whether they had working alarms on every level. These suggestions included asking ‘do you have a working smoke alarm on every level, including your basement?’ and ‘when were the batteries last changed?’ Four respondents (17%) suggested that the data collector encourage honesty by assuring respondents that the survey would not get them in trouble with Social Services, or simply by telling people to be honest or straightforward.

When asked if they felt like they had to tell the data collector they had working SAs on every level because they knew they should have them, eight (35%) said ‘yes’. When asked if having the data collector in the home had made them feel pressured, all 23 follow-up respondents said ‘no’.

Unprompted, four respondents (17%) reported that since the original interview when they were informed by the data collector that they did not have a working alarm on every level, they had installed the needed alarms and batteries or had made plans to acquire alarms.

Discussion

This study adds to the literature on validity of self-report for measuring SA behaviour in populations. The PPVs described in the literature to date have clearly varied (from 26% to 90%), which could result from differing measures of SA coverage, some (the present study included) with stricter definitions. The most comparable study reviewed above (conducted by Chen, Gielen and McDonald, also in Baltimore City) found a PPV half that found by the present study (26% compared with 54.2%).9 The discrepancy is most likely due to the setting of the initial interview: by telephone in advance of the home observation in the 2003 study, as opposed to in the home at the time of the observation in the present study. The validity of respondent-reported home safety behaviours has been shown to decrease when questions are asked over the telephone rather than in the home.7,8,10 Overall, self-report of SA coverage has shown troublingly low validity, and research that needs to rely on self-report to evaluate programmes or monitor trends should bear in mind the degree of error in such measurement, especially with regard to which population groups may be at increased risk for incorrectly reporting their levels of protection.

Despite our small number of follow-up respondents, the follow-up interviews do shed some additional light on these findings. The majority of those respondents we were able to follow up with seem to have given the wrong information because they assumed the alarms were working for a variety of reasons. These respondents also had helpful suggestions for improving self-report measures in the future. In particular, asking respondents to test the alarm during the interview, as well as better use of probes and encouragement, are options that would be well worth trying in future survey research. A quarter of respondents, indicating that they were aware of their SA coverage, gave wrong information because they did not think they actually needed a working alarm on every level. Even though a very specific question about SA coverage was asked, it was interpreted by many respondents as ‘do you feel you have enough working smoke alarms?’ It is concerning that 35% of follow-up respondents indicated that they had reported greater than actual coverage because they knew that they should. This social desirability bias occurred despite the fact that respondents had been informed before the interview began that data collectors would be testing alarms in the home as part of the interview.

Although we interviewed a large and diverse sample, respondents were sampled from an urban inner city area that has high rates of poverty. Our sample reflected this, with about half of respondents reporting a per capita income of $10 000 per year or less. Therefore, results should not be generalised to populations with very different characteristics. We also had a small number of participants in the qualitative follow-up survey. Nevertheless, the results shed new light on the interesting methodological challenges of obtaining valid self-report measures of important safety behaviours. In collecting data on home safety behaviours, we would recommend that researchers be mindful of the ways in which survey questions might be interpreted and answered in unintended ways. Such suggestions as asking residents to test alarms over the phone could be adapted to other safety behaviours to promote more accurate reporting. It is, however, important to note that the focus of the larger study was promoting complete SA coverage; reasons for under-reporting, which may have further implications for safety research design, were not explored here.

The results also have practical utility for promoting adequate SA protection. Given our findings that the high rates of over-reporting are likely due to inattention and incorrect perceptions regarding ‘safe’ SA coverage within the home, public education efforts should use messages that raise the salience of SA functionality. It seems that some people think SAs can be forgotten about once they are installed, and they would benefit from reminders about checking alarms as well as changing the batteries. Clinicians who counsel their patients about the need for SAs should consider how they can incorporate better assessment questions to guide their counselling. For example, paediatric anticipatory guidance about safety often starts with asking the parent if they are performing the behaviour (eg, Do you have smoke alarms?). As found in an earlier study, giving appropriate advice depends on having the correct information.9 Our results suggest that clinicians can largely rely on a negative answer to such assessment questions, but they should carefully probe any positive answer. Clinicians should ask about whether SAs are present specifically in the basement or attic, where some may feel they are not needed. The same implications apply to others who provide individually focused education, such as public education firefighters and other public health practitioners engaging in fire safety counselling.

Furthermore, the lack of awareness or confusion shown by the over-reporting respondents in our follow-up sample regarding their SA coverage suggests that outfitting homes with more permanent alarms could be the best option to ensure that more households are protected from fire deaths. Hard-wired or sealed 10-year lithium battery alarms outfitted with a hush feature can be screwed into the wall or ceiling, making it easier to silence nuisance alarms and harder to remove the unit. Maintenance of lithium battery alarms is less demanding than that of the more common alkaline battery alarms, which require new batteries approximately every 6 months. When installed in homes, lithium battery alarms remain functioning longer than alkaline battery alarms.11,12 Given the low PPV seen for self-reporting SA coverage, efforts to equip homes with more permanent and tamper-proof alarms may prove worthwhile.

What is already known on this subject

  • Smoke alarms reduce the risk of death in a fire.

  • Self-reported use of smoke alarms is high.

  • Validity of self-report measures has varied.

What this study adds

  • Insight into why residents over-report smoke alarm coverage.

  • Highlights the need for more permanent alarms that do not require 6-month maintenance.


Footnotes

  • Funding This work was supported by the Centers for Disease Control and Prevention grant number R18 CE001339 and by the National Institute of Child Health and Human Development grant number R01 HD059216-04.

  • Competing interests None.

  • Patient consent This was not a medical study. All respondents signed a consent form approved by the Johns Hopkins Bloomberg School of Public Health Institutional Review Board.

  • Ethics approval Approval provided by Johns Hopkins Bloomberg School of Public Health Institutional Review Board.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data sharing statement Additional explanatory data may be available upon request.