Education And Debate

Should journals publish systematic reviews that find no evidence to guide practice? Examples from injury research

BMJ 2000; 320 doi: https://doi.org/10.1136/bmj.320.7231.376 (Published 05 February 2000) Cite this as: BMJ 2000;320:376
  Phil Alderson, deputy director (palderson@cochrane.co.uk),a
  Ian Roberts, directorb

  a UK Cochrane Centre, NHS Research and Development Programme, Oxford OX2 7LG
  b Child Health Monitoring Unit, Institute of Child Health, University College, London WC1N 1EH

  Correspondence to: P Alderson

    Many systematic reviews are inconclusive and reinforce the message that there is clinical uncertainty. Phil Alderson and Ian Roberts argue that journals should make a point of publishing such reviews rather than waiting for reviews that show marked benefit or harm. Some experts disagree, but we failed to persuade them to commit their views to print.

    Studies with dramatic findings make interesting reading. Journal editors understandably want to publish articles that their readers will enjoy. This is one cause of publication bias, where research with less dramatic results tends to be published in journals with a smaller circulation, if indeed it is published at all. Systematic reviews are no less vulnerable to this bias than other types of research. Should journals resist this pressure and make a point of publishing systematic reviews even if all they show is continuing clinical uncertainty? The answer will depend on the importance we attach to demonstrating uncertainty in medical practice.

    Summary points

    Denying uncertainty does not benefit patients and may increase health service costs

    More large scale randomised trials need to be conducted based on the “uncertainty principle”

    Systematic reviews with more dramatic results tend to be methodologically weaker

    Publication bias against reviews which show uncertainty may create incentives for poor quality reviews

    Admitting uncertainty helps clarify treatment options and stimulates further research



    Censoring uncertainty does not benefit patients

    Worldwide, several million people are treated each year for severe head injury.1 Over a million of them die, and many more are permanently disabled. In many places, hyperventilation, mannitol, drainage of cerebrospinal fluid, barbiturates, and corticosteroids are routinely used in the intensive care management of severely head injured patients, yet none of these interventions has been reliably shown to reduce death or disability. Indeed, on the basis of the currently available randomised evidence, it is impossible to refute either a moderate increase or a moderate decrease in the risk of death or disability.2 Not surprisingly, use of these treatments varies widely.3-5 A 1996 survey of treatments for raised intracranial pressure in 44 neurosurgical units in the United Kingdom and Ireland found that hyperventilation was used in 89% of units, drainage of cerebrospinal fluid in 69% of units, barbiturates in 69% of units, and corticosteroids in 14% of units.3 Given the uncertainty about the effectiveness of these interventions, this variation in practice might be seen as a large but poorly controlled experiment. Patients taking part in this experiment enjoy none of the benefits of treatment decisions being vetted by a research ethics committee. No useful clinical information will result from their participation. In this instance it is hard to see how the censoring of clinical uncertainty serves the interests of current or future patients.

    Greater openness about uncertainty would challenge the prevailing health care culture of uncontrolled experimentation on the many and controlled experimentation on the few. Reliable answers to many important therapeutic questions require large scale randomised controlled trials.6 Acknowledging uncertainty is a prerequisite for such trials, and the “uncertainty principle” can be used to simplify trial entry criteria and make large trials possible. According to this principle, a patient can be entered into a randomised controlled trial if, and only if, the responsible clinician and the patient are substantially uncertain which of the trial treatments would be most appropriate. Publication of systematic reviews showing uncertainty stimulates and facilitates such trials, as in the case of the current Medical Research Council trial of corticosteroids in head injury.7

    More trials and larger trials would undoubtedly bring some surprises. Human albumin solution was widely regarded as safe and effective in the fluid management of critically ill patients until a systematic review of the evidence from randomised trials challenged this. The review raised clinical uncertainty despite 50 years of use of human albumin in medicine.8 Uncertainty is the lifeblood of clinical research, and the censorship of clinical uncertainty can only pave the way for pallid research agendas reflecting commercial interests.

    Reasons for publication bias

    Reviews are more likely to have dramatic findings if their methods are weak. Journal prejudice against reviews that reveal uncertainty may be one reason why the characteristics of reviews published in the Cochrane Database of Systematic Reviews differ from those published in print journals. Compared with Cochrane reviews, samples of reviews published in print journals have shown more evidence of publication bias9 and less evidence of methodological rigour.10 For example, failure to search the non-English literature introduces bias because studies with dramatic results appear preferentially in the English language literature. If clinical journals prefer reviews with dramatic findings, they may be creating incentives to do poor quality reviews, which is in no one's best interests. Such reviews may mislead readers into believing that they have found evidence that should influence clinical practice.

    Benefits of admitting uncertainty

    The uncertainty demonstrated in systematic reviews can help clarify the options available to clinicians and patients. It can stimulate more, and better, research and so help to resolve that uncertainty. Uncertainty should not be hidden away as an embarrassment. We should be willing to admit that "we don't know" so that the evidence base of health care can be improved for future generations.

    The censorship of uncertainty is the enemy of evidence based care. The opportunity cost of the illusion of clinical certainty, whether measured in terms of clinical trials never conducted, morbidity and mortality resulting from the use of inappropriate forms of care, or the health resources squandered, might easily exceed the annual health care budget. Confidence in the face of ignorance may be the hallmark of a traditional medical education, but journal editors need not collude in this.

    Acknowledgments

    We thank Iain Chalmers for helpful comments.

    Footnotes

    • Competing interests: PA and IR are involved in the Cochrane Collaboration.

    References

    1.
    2.
    3.
    4.
    5.
    6.
    7.
    8.
    9.
    10.