Objective To measure the validity of safety behaviours, safety equipment use and hazards reported on a questionnaire by parents/carers with children aged under 5 years participating in a series of home safety case-control studies.
Methods The questionnaire measured safety behaviours, safety equipment use and hazards being used as exposures in five case-control studies. Responses to questions were compared with observations made during a home visit. The researchers making observations were blind to questionnaire responses.
Results In total, 162 families participated in the study. Overall agreement between reported and observed values of the safety practices ranged from 48.5% to 97.3%. Only three safety practices (stair gate at the top of stairs, stair gate at the bottom of stairs, stairs are carpeted) had substantial agreement based on the κ statistic (κ=0.65, 0.72, 0.74, respectively). Sensitivity was high (≥70%) for 19 of the 30 safety practices, and specificity was high (≥70%) for 20 of the 30 practices. Overall, for 24 safety practices, a higher proportion of respondents over-reported than under-reported safe practice (negative predictive value > positive predictive value). For six safety practices, a higher proportion of respondents under-reported than over-reported safe practice (negative predictive value < positive predictive value).
Conclusions This study found that the validity of self-reports varied with safety practice. Questions with a high specificity will be useful for practitioners for identifying households who may benefit from home safety interventions and will be useful for researchers as measures of exposures or outcomes.
This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 3.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/3.0/
Self-administered questionnaires have been extensively used in injury prevention research and evaluation.1 ,2 They can be used as the sole research instrument (eg, in descriptive epidemiological studies) or as just one tool within research, such as randomised controlled trials (RCTs) or case-control studies.3 Questionnaires have also been used by practitioners, for example, to identify those who may benefit from home safety interventions.4 However they are used, it is crucial that care is taken with planning and that they are rigorously designed.5–9
A key issue in survey research is validity, and concerns have been raised that self-reported safety practices might overestimate safe behaviour.10 ,11 Measures with few false positives will be useful for practitioners for identifying those who may benefit from home safety interventions and for researchers, as high levels of specificity have been found to minimise bias in estimates of treatment effects in trials.12 More recently, different types of survey methods have been tested, including face-to-face interviews, telephone interviews and researcher-administered questionnaires, with considerable variation in their findings.13–20 It is important to note that these studies also varied in terms of topics covered, number of questions, timing of the observations in relation to the self-report, and settings for the self-report.
There have been few empirical studies that validated self-administered questionnaires in child home injury prevention. One small study (n=64) investigating a range of home safety topics found a fairly high degree of consistency between self-reported and observed practices.21 Another small study (n=30) of poison prevention practices found that the sensitivity, specificity and predictive value of self-reported possession, safe storage of, and exposure to substances varied between substances.22 It is of note that both studies took place within one city in England, and findings need confirmation from larger studies and with different populations.
Accordingly, our study aimed to validate a questionnaire comparing self-reported practices with home observations. The questionnaire was completed by parents or carers with children aged under five participating in five large multicentre case-control studies of home safety.23 The critical function of the questionnaire in the research was to measure safety behaviours, safety equipment use and hazards, which were used as exposures in the case-control studies investigating modifiable risk factors for falls, poisoning and scalds.
The case-control studies recruited cases aged 0–4 years attending emergency departments (ED), minor injury units, or admitted to hospital in National Health Service (NHS) trusts with stairway falls, falls on the same level, falls from furniture, poisonings or suspected poisonings, or thermal injuries where the injury occurred in the home or garden.23 Cases were recruited from NHS trusts in and around four study centres in Nottingham, Bristol, Norwich and Newcastle-upon-Tyne, with recruitment commencing in June 2010. Controls were matched to the cases by age and sex, and recruited from general practices in the same study areas.
Data on exposures (safety behaviours, safety equipment use and hazards) and on potential confounding factors (sociodemographic and economic information, child health, development and behaviour, maternal mental health, parenting daily hassles) were collected by parent-completed questionnaires.23 Three age-specific questionnaires were developed (age 0–12 months, 13–36 months and 37–59 months). Case questionnaires were piloted on 11 parents/carers of children attending EDs at participating NHS Trusts, and control questionnaires were piloted on 29 parents/carers attending local children's centres. Questionnaires were checked for comprehension and ease of completion by a lay research advisor. For the case-control studies, most questionnaires were posted to participants and returned by post, but among cases a small number were completed in the ED with a researcher. All parents participating in the case-control studies were asked whether they would be interested in taking part in further research.
The 78 questionnaire items whose answers could be ascertained by observation were assessed during home visits to a subset of case-control study participants who expressed interest in taking part in further research. Potential participants were contacted by phone, and visits were organised as soon as possible following receipt of the completed questionnaire. Respondents receiving home visits were given a £5 shopping voucher to thank them for their time.
A checklist was designed for completion during the home visit including exposures relating to the kitchen, stairs, use of infant equipment, and safe storage of medicines and household products. Home visits were conducted by one or two members from four research centre teams, blind to participants’ questionnaire responses. Participants were not told that they were taking part in a validation study: the home visit was explained in terms of finding out more about home safety generally.
Participants were asked to guide the researcher(s) on a tour of their home as required by the checklist, during which observations were made of the relevant safety behaviours, safety equipment use and hazards. This included, where appropriate, measurements of stair steepness and width of the biggest gap between banister rails. As well as conducting observations of current exposures, researchers asked about changes pertaining to exposures which had been made within the previous 3 months.
Data were entered into a Microsoft Access database, verified by double data entry and analysed using Stata SE 11. For the statistical analysis the answers to some questions were combined to categorise certain practices as ‘assumed to be safe’ or ‘potentially unsafe’, giving 31 binary exposures in total (shown in tables 2–4); for example, all medicines stored safely (yes/no), all household products stored safely (yes/no), all medicines and household products stored safely (yes/no). Storage was categorised as ‘safe’ if either there were no medicines/household products in the house, or if they were all stored at adult eye level or above, and/or in locked cupboards, cabinets, drawers or fridges. There were two additional safety variables: the width of the largest gap between banister rails, and stair steepness (categorised as ‘too steep’ or ‘not too steep’ using questionnaire responses, and expressed as a ratio of stair height divided by stair depth using measurements from the home visit).
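As an illustration, the ‘safe storage’ rule above can be expressed as a small predicate. This is a hypothetical sketch, not the study's actual database code; the field names are illustrative assumptions:

```python
def all_stored_safely(items):
    """Return True if storage is 'safe' under the study's rule: either
    there are no medicines/household products in the house, or every
    item is stored at adult eye level or above and/or in a locked
    cupboard, cabinet, drawer or fridge."""
    # all() over an empty list is True, which matches the
    # 'no medicines/household products in the house' case
    return all(item["eye_level_or_above"] or item["locked"] for item in items)
```

Note that the rule is vacuously satisfied by a household with no stored substances, which is exactly how the study treats that case.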
The sample size was calculated based on an estimated sensitivity of 80% (the number of participants reporting a specific exposure divided by the number observed to have the exposure). Assuming a minimum of 20% of participants had the exposure, and to estimate the sensitivity with a 95% CI of ±20%, then 16 exposed participants would be needed and so 80 home visits were required. This would enable a specificity of 80% to be estimated to within ±9.8%, assuming 80% of participants did not have the exposure. Since sensitivity and specificity may vary between cases and controls, 80 cases and 80 controls were required across the four study centres.
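The arithmetic behind these figures can be checked with the usual normal-approximation half-width for a proportion. This is a sketch under the assumption that a normal approximation was used; the paper does not state the exact formula:

```python
import math

def ci_half_width(p, n, z=1.96):
    """Approximate 95% CI half-width for a proportion p estimated
    from n observations (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Sensitivity of 80% estimated from 16 exposed participants:
# 1.96 * sqrt(0.8 * 0.2 / 16) = 0.196, i.e. roughly +/-20%
sens_hw = ci_half_width(0.80, 16)

# Specificity of 80% estimated from 64 unexposed participants
# (80% of 80 home visits): 1.96 * sqrt(0.8 * 0.2 / 64) = 0.098, i.e. +/-9.8%
spec_hw = ci_half_width(0.80, 64)
```

Both half-widths reproduce the ±20% and ±9.8% figures quoted above.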
We calculated a number of different statistics to compare reported values with values observed at the home visit. For binary variables, overall percentage agreement and κ coefficients with 95% exact CIs were calculated. κ values less than zero were considered to indicate poor agreement, 0–0.20 slight agreement, 0.21–0.40 fair agreement, 0.41–0.60 moderate agreement, 0.61–0.80 substantial agreement and 0.81–1.00 almost perfect agreement.24 Sensitivity, specificity, and positive and negative predictive values (with 95% exact CIs) were calculated assuming observed values were the ‘true’ values (see figure 1).
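All of these measures derive from the 2×2 table cross-classifying reported against observed practice. A minimal sketch of the calculations (the cell counts in the usage line are illustrative, not study data):

```python
def validation_stats(tp, fp, fn, tn):
    """Validation measures from a 2x2 table, treating observed values
    as the 'true' values.
    tp: reported safe, observed safe    fp: reported safe, observed unsafe
    fn: reported unsafe, observed safe  tn: reported unsafe, observed unsafe"""
    n = tp + fp + fn + tn
    po = (tp + tn) / n  # overall percentage agreement (as a proportion)
    # chance-expected agreement from the row and column margins
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return {
        "agreement": po,
        "kappa": (po - pe) / (1 - pe),  # chance-corrected agreement
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

stats = validation_stats(tp=40, fp=10, fn=5, tn=45)
```

The exact CIs reported in the paper would require additional machinery (eg, Clopper-Pearson intervals); the sketch shows only the point estimates.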
For the analysis of the widest gap between stair banister railings, the median difference between observed and reported values was calculated with its IQR, and a Wilcoxon signed-rank test was performed. Stair steepness (defined as the height:depth ratio) was calculated from measurements taken at the home visit, and mean values were compared between those reporting their stairs as too steep and those who did not, using an unpaired t test.
The primary analysis compared responses on the questionnaires with the home observations. However, respondents may have made changes to their safety practices after completing the questionnaire and before the home visit. To assess whether differences between reported and observed practices may have arisen due to this, we asked about any changes made in the last 3 months during the home visit, and created a modified value for each exposure to reflect self-reported exposure at the time of the visit. If for any exposure the percentage of people reporting a change in the previous 3 months was more than 20% for any cell within the table comparing reported and observed values, the numbers were adjusted to accommodate an assumed change in reported values from yes to no and vice versa, and PPVs and NPVs were recalculated.
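One way to implement this adjustment is to move discordant respondents who reported a recent change into the cell matching their reported status at the time of the visit, then recompute the predictive values. This is a sketch of our reading of the procedure, with illustrative counts:

```python
def recalc_predictive_values(tp, fp, fn, tn, fp_changed=0, fn_changed=0):
    """Recalculate PPV and NPV after adjusting for changes reported
    in the previous 3 months.
    fp_changed: respondents who reported the practice but were observed
      without it, and who said it had changed since the questionnaire
      (their reported value is flipped yes -> no, so they move to tn).
    fn_changed: the converse (reported no, observed yes, practice added
      since the questionnaire; reported value flipped no -> yes, move to tp)."""
    tp, fp = tp + fn_changed, fp - fp_changed
    fn, tn = fn - fn_changed, tn + fp_changed
    return tp / (tp + fp), tn / (tn + fn)  # (PPV, NPV)

ppv, npv = recalc_predictive_values(40, 10, 5, 45, fp_changed=4, fn_changed=2)
```

Under this scheme only the off-diagonal (discordant) cells shrink, so the adjusted PPV and NPV can only move towards better apparent agreement.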
A total of 162 participants (81 controls and 81 cases) received home visits. The period of time between receipt of the questionnaire and the visit being carried out varied between 1 and 92 days, with a median of 29 days. Table 1 shows the characteristics of families participating in the home observations and of case and control study participants who did not have a home observation. For most characteristics, there appeared to be no significant difference between those participating in the home observations and the cases and controls who did not. However, respondents with children aged under 1 year were less likely, and those with children aged 13–36 months were more likely, to take part in the home observations. Respondents with male children, those in single parent families, and those in households with more adults out of work were also more likely to participate in the home observations.
In table 2, κ coefficients ranged from 0.20 (slight agreement) to 0.74 (substantial agreement) for safety practices relating to falls. However, there was substantial agreement for only three practices (has a stair gate at the top of stairs, has a stair gate at the bottom of stairs, and stairs are carpeted), whereas for nine practices the agreement was moderate or lower. Sensitivity was high (≥70%) for 8 of the 12 practices and specificity was high (≥70%) for 10 of the 12 practices.
There was no statistically significant difference (p=0.08) in measured stair steepness (stair height:depth ratio) between those reporting their stairs were too steep (n=23, mean=0.87, SD=0.21) and those who did not (n=121, mean=0.82, SD=0.09). Observed banister gaps were significantly larger than reported gaps (n=55, p=0.002). The median reported gap was 3.0 inches (IQR=2.0–4.0 inches) compared with a median observed gap of 3.8 inches (IQR=3.5–4.3 inches).
Table 3 shows that the κ coefficients ranged from −0.03 to 0.54 for safety practices relating to poisonings. Only two practices had moderate agreement (κ values of 0.41–0.60): medicines kept in a fridge (κ=0.54) and all household products stored at adult eye level or above (κ=0.48). Sensitivity and specificity were high (≥70%) for 8 of the 15 practices.
In table 4, the κ coefficients indicated that two of the three safety practices relating to scalds had moderate agreement (kettle kept at back of kitchen surface, and safety gate across kitchen doorway) and only one had slight agreement (has cordless kettle or curly flex, κ=0.13). Sensitivity was high (≥70%) for all three practices and specificity was high (≥70%) for two of the three practices.
Further analysis was undertaken to determine whether recent changes in practice could account for the discrepancies between reported and observed exposures. Findings using the adjusted figures were broadly similar to those from the main analysis.
Out of the 30 safety practices for which predictive values could be calculated, for 24 practices a higher proportion of respondents over-reported than under-reported safe practice (NPV exceeds PPV). For six practices a higher proportion of respondents under-reported than over-reported safe practice (PPV exceeds NPV).
This research provides evidence about the validity of self-reported practices from a home safety questionnaire and indicates which questions could be reliably used in future research and practice. It is the largest study of its kind, and shows that for this questionnaire, the sensitivity, specificity and positive and negative predictive values of self-reported practices vary between safety practices. Eighteen of the 30 practices had at least a fair degree of chance-corrected agreement, but only three practices had substantial agreement (has a stair gate at the top of stairs, has a stair gate at the bottom of stairs, and stairs are carpeted). Poison prevention practices appeared to have poorer agreement than falls or scald prevention practices. Overall, more respondents over-reported safe practice than under-reported it, but despite this, two-thirds of the questions had a high specificity. These questions will be useful for practitioners for identifying households with unsafe practices who may benefit from home safety interventions, and for researchers who wish to use measures of unsafe practice as exposures or outcomes.
Sensitivity was high (≥70%) for 19 out of 30 safety practices. Questions with high sensitivity will be useful for practitioners to identify families who do not need home safety interventions because they already have safe practices, and for researchers wishing to use measures of safe practice as exposures or outcomes. Questions with high sensitivity and specificity, of which there were 10 in our study, will result in the least misclassification of exposures in observational studies.25 Only seven out of 30 of the observed practices had a high PPV (≥70%). It is of note that six of these related to a standard item of equipment, for example, a cordless kettle or safety gate. Previous studies have also found that responses about safety devices were reported more accurately than those about practices not requiring devices.14 ,17 One study also reported that some parents may have ‘experienced some confusion’ over certain safety devices.14 This was noted by our researchers in relation to some items; for example, some families were unsure what furniture corner covers and stationary play centres were. Future questions on these will need more description, and possibly pictures, to reduce such confusion.
Previous studies have suggested potential reasons for disagreements between self-reports and observations, some of which may be applicable to our study.11 ,16 ,20 ,22 These include providing socially desirable responses, such as for safe storage of medicines, and having inaccurate perceptions about safety. Disagreements could also be due to changes to safety practices, or movement of hazards or safety equipment such as baby walkers or stair gates, between questionnaire completion and observation. Our analysis allowing for such changes did not substantially alter our results, suggesting this is unlikely to explain our findings.
Previous validation studies have reported varied findings.14 ,16 ,17 ,20 ,22 There are substantial methodological differences between these studies and our current study which may explain some of the variation. For example, one study compared only five practices, and the researchers read the questions to the families in their homes and then immediately observed the home,17 whereas another undertook telephone interviews about four practices, which researchers followed up with observations 2–4 weeks later.20
The most comparable study to the research reported in this paper is our earlier study.21 That study reported greater consistency between self-reported data and observations (the PPV was 78% or above for 15 of the 16 practices). The two studies were similar in terms of using self-completion questionnaires covering a range of home safety topics; observations were undertaken at least a day after the self-reports; and researchers observing were blind to questionnaire responses. Although the questionnaire we used in the current study was based on the earlier one, it had notable differences in terms of size, overall design and the complexity of certain questions. For example, the later questionnaire assessed a much larger number of safety practices, safety equipment use and hazards (approximately 16 pages of questions compared with 7 pages), had more questions per page, had less white space, less vertical flow, and had more complex matrix questions.6 ,26 These differences may explain some of the variations in the findings from the two studies.
This study had a number of strengths including being a large multisite study, with researchers at each site who were trained for the home observations, and blind to questionnaire responses at the time of the observations. Parents were not informed that the purpose of the visit was to validate a questionnaire, but they may have remembered their responses to the questionnaire, and this may have influenced their behaviour before or on the day of the observation.
Our comparison of the characteristics of participants and non-participants suggests some differences between these two groups. Although we found no significant difference for most characteristics, respondents with children aged under 1 year were less likely, and those with children aged 13–36 months were more likely, to participate in home observations. Respondents with male children, those in single parent households and those in households with a higher number of adults out of work were more likely to take part in home observations. This suggests our findings are likely to be generalisable to higher-risk households, which is important, as these are likely to be targeted for home safety interventions and are likely to be the population of interest for injury researchers. Practitioners and researchers wishing to use our questions in populations at low risk of injury should consider further validation of our questions.
κ Statistics were used in this study as they can provide a general measure of agreement corrected for agreement due to chance, but we were aware that they are influenced by the prevalence of the exposure and are susceptible to bias.27 ,28 Additionally, we calculated a range of validation measures comparing reported safety practices with observed values, so that these can be used to inform a range of further studies. A final potential limitation of the validation of the questionnaire is that it was not possible to undertake observations immediately after questionnaire completion. However, we endeavoured to keep the time period between the respondents completing the questionnaire and the home visits as short as was practical. Additionally, our analyses suggest changes made between questionnaire completion and home visit do not explain disagreements between observed and self-reported safety practices.
The motivation for this study was to validate exposures used in a series of case-control studies. Our findings suggest there will be less misclassification of exposures for falls prevention and scalds prevention practices than for poison prevention practices. Where exposures are misclassified, and where this does not occur differentially between cases and controls, measures of association between exposures and injury will be biased towards unity.25 This would mean that any significant associations we found were conservative estimates of the ‘true’ association. An assessment of the extent to which under-reporting and over-reporting of safety practices varies between cases and controls will be reported elsewhere.
For some safety practices, further research is needed to develop better measures, for example, in relation to poison prevention practices, particularly those investigating safe storage. The use of more individual questions rather than matrix questions should be examined, as this format may make the questionnaire easier for respondents to understand and complete.
In conclusion, future home safety researchers and practitioners using self-completed questionnaires or home safety checklists may wish to use some of the questions that were part of our research tool. In choosing questions, they should take heed of our results and be cognisant of the known principles of questionnaire design.6 ,7 ,9 ,26 Questions with a high specificity will be useful for practitioners who want to identify families who do not have certain safe practices and who would benefit from home safety interventions, and for researchers wishing to use them as measures of exposure or outcome.
What is already known on this subject
The validity of self-reported data is an important issue in injury research.
Different types of research tools have been validated in different settings.
Few studies have attempted to validate postal questionnaires.
What this study adds
This study found that the validity of self-reports from a questionnaire varied with safety practice and indicates which questions could be reliably used in future research and practice.
Our questions with a high specificity will be useful for practitioners who want to target families with home safety interventions.
The authors are indebted to the study participants who completed the questionnaire and allowed us to visit them in their homes.
Contributors DK had the initial idea for the study. MW trained the researchers and coordinated the sites. PB and CC conducted analyses. MW and PB led the writing of the article. All authors meet the uniform requirements for manuscripts submitted to medical journals.
Funding This study was funded by the National Institute for Health Research (NIHR) under its Programme Grants for Applied Research funding scheme (RP-PG-0407-10231). The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.
Competing interests None.
Ethics approval Ethical approval has been provided by the Nottingham 1 Ethics Committee (reference number=09/H0407/14).
Provenance and peer review Not commissioned; externally peer reviewed.