Balancing rigor and the real world
The systematic review of the literature “Community based programs to prevent poisoning in children 0–15 years”, published in this issue, prompted a debate between reviewers and authors (see p 43). This process revealed a number of important issues that warrant further analysis and comment. In this editorial I try to identify these issues to provide a foundation for further action.
The reviewers raised two basic questions. One suggested that the paper be rejected because it only found four articles of adequate scientific merit. The authors replied that a paper’s merit should be judged on the technical competence of the review and not on the results. They argued that a decision based on the number of articles found would amount to a publication bias against new or poorly funded fields.
The second issue concerned the notion of community based intervention. It was suggested that this field did not have sufficient coherence or precision to make it possible to generalise interventions. In response, the authors noted that the characteristics of community based models are well known. They include shared ownership of a problem and its solution by community members as well as experts. They pointed to the need for “a distinction between efficacy trials of specific counter-measures where the individual is the unit of study vs effectiveness trials involving community interventions”. While locked cabinets may be efficacious, community based programs that aim to reduce poisoning by encouraging widespread use of lockable cabinets are not necessarily effective. Because making a difference at the population level is the focus of injury prevention efforts, we need to move from “what works” to “how to make it work on a large scale”.
The to and fro between reviewers and authors raised a number of issues. Two are at the tip of an iceberg of debate about theory and method. As is true of many previous debates in public health, the arguments are coloured by values and preconceptions; in this case, about what constitutes evidence and what constitutes an intervention. Until this is resolved research involving more recently established fields of study that do not fit classical research paradigms may be barred from publication.
Rather than becoming a third party in the original discussion I want to try to clarify some of the underlying issues. In 1991 I wrote an article to define the characteristics of community based injury prevention.1 Nixon et al used my definition: “The community-based model for injury prevention is an explicit approach to achieving reductions in the incidence of injury at the population level by application of multiple countermeasures, and multiple strategies in the context of community-defined problems, and community-owned solutions”. In other words, a community based strategy is not a specific single intervention, but a set of processes to facilitate effective implementation of one or more interventions. In each community setting the intervention may differ because communities differ, as does the nature of the problem.
The focus of community based injury prevention is to develop a systems response that matches prevention strategies to the cultural, social, and political setting. A population outcome is the goal. Such a broad focus means that the strict control of intervention, subjects, and analysis required for a true experiment or clinical trial is impossible. This sends shudders down the spines of those brought up in the empirical tradition, and it is tempting to write off community based interventions as too unstable, too difficult to evaluate, and too hard to replicate.
Other public health systems with more traditional professional affiliations might be viewed in a similar light, but are less often questioned because of their connections to core clinical fields. It appears that we are not yet ready to treat the outcomes of complex systems as a subject of research, and that provenance rather than science may determine what is questioned and what is not.
The reviewers’ response to Nixon et al implied, politely but scathingly, that community based programs have neither form nor substance. The contribution of such programs is also questioned because of their diversity, because they lack rigorous evidence of effectiveness, or because it is more difficult to generalise from their results.
The fundamental question for community based injury prevention is “does it work in the real world?” If the answer is “yes” then we must develop evidence about what factors are necessary for success to be replicated. Unlike clinical treatments, community based programs are replicated on the basis of principles rather than rigid prescriptions. This raises several broad questions:
How do we classify, define, and compare the different approaches used?
How do we define outcomes and at what level should these outcomes be measured?
How do we choose problem identification, solution selection, and implementation strategies?
How do we identify and measure any general or synergistic effects between multiple interventions in the same community?
Nixon et al’s paper only addresses the first broad question but opens the way to other issues. The authors found few high quality studies linking community based programs to the desired outcomes. This is different from evidence of no effect, and their conclusion, quite correctly, is that more research is needed. In contrast, the reviewers raised questions about clarity of definition but instead of considering how they might be addressed, dismissed the field as unworthy. We must be clear about the different levels and types of evidence for community based interventions and understand the reasons for both the lack of clarity and of evidence. These are threefold: (1) An agreed language and concept structure has yet to emerge in this complex environment. (2) The settings in which community based prevention take place are not easily controlled. (3) Community based interventions are rarely funded to a level that supports rigorous evaluation.
Evaluation of community programs moves out of the comfort zone of traditional research designs. The research and evaluation questions that need to be asked are different, and the methods are not those that have become the gold standard for the evaluation of individually based interventions. Currently acceptable methods focus on experimental, quasiexperimental, or case control designs. The requirements for these methods are hard to meet for community based programs. A true experiment, in which the unit of intervention is the population, would require randomisation of populations. This is financially impractical but not theoretically impossible. A community intervention with a matched community control is far more feasible but still challenging because, unlike individuals, communities vary widely in characteristics related to exposure to risk. Even where matching populations are found, the final comparison comes down to a single case and control design, and a critical reviewer can easily dismiss the results. Time series designs have also been used. These are useful, provided there are long term stable patterns of incidence in more than one community.
It is rare to find a situation where there is sufficient funding, expertise, and multiple communities available to use accepted methods with sufficient rigour to produce high quality results. This means that reviewers are likely to advise journals not to publish the results. But failure to meet rigorous standards is not due to an inherent lack of expertise or professionalism; it simply reflects the nature of the field and the state of play of research methods. The rare circumstances where the criteria for rigorous evaluation can be met tend to occur in affluent communities and where the interventions are limited and carefully controlled. The Holy Grail is sufficient understanding of the dynamics to be able to effectively apply the methods in any setting, including those with few material resources. The methods used in community based evaluations could change so that these studies can result in publications that do not encounter the usual criticisms. Unfortunately, this would mean that the principles of cooperation, multiple intervention, and adaptation that lie at the heart of these interventions would be undermined.
Research methods are required that allow us to better judge what mix of strategies work with which populations so that communities can select effective prevention strategies. A broad based child poisoning prevention strategy needs to be fine tuned to the range of exposures in the community, to the education of parents and professionals, the attitudes of government, and local service delivery networks. Research needs to address all these issues.
The literature must stimulate discussion on the definitions, values, and principles of community based prevention. It should promote debate on the definition of boundaries and scope and the measurement of efficacy and effectiveness. It should promote the development and testing of new methods and report on the progress made. Above all, it should report on best available practice with critical and insightful comment on problems with methods and conclusions.
This means that at times the literature will need to include reports with insufficient evidence and reports of the results of studies where theory suggests that sample sizes, control groups, and data integrity are less than ideal, but are the best possible. Learning in public health is best promoted by the critical sharing of evidence, not by the censorship of evidence that is less than perfect.