How to Read a Paper: The Basics of Evidence Based Medicine.
Barry Pless, Editor


It may appear odd for an editor to review a book with this title. But after reading it I was convinced it was appropriate for me to do so and I have even resolved to review in a later issue a companion text, How to Write a Paper. One justification for urging you to read this book is that knowing what others look for when reading a paper should have a strong influence on what you do, if and when you decide to write a paper. And we, like all other journals, are in a constant quest for more and better papers.

I confess that the subtitle “The basics of evidence based medicine” almost put me off. I'm one of those who, as David Weatherall describes in the foreword, is “slightly hurt by the concept”. I bristle at the phrase because it suggests that those of us who work in areas not dominated by randomized trials are “frivolous”. But the author, Trisha Greenhalgh, assures me that much of the hype about “evidence based medicine” (EBM) is packaging. Indeed, if the basic skills involved in EBM boil down to “assessing the scientific validity and practical relevance of articles found in the medical literature”, then I have no quarrel. This is what I trust our reviewers do, what our authors try to do, and what our readers should do.

Having said that, we can always use help, and this book is abundantly helpful. Although its core is a synthesis of elements found in most introductory epidemiological or biostatistical texts, it is more than the sum of its parts. It is impossible to imagine even the most sophisticated scientist not benefiting from a thorough read.

I was especially impressed with the chapter entitled “Getting your bearings (what is this paper about?)”. This includes a box entitled “Common reasons why papers are rejected for publication” that is so on-target that I urge authors preparing papers for Injury Prevention (or any other journal) to place a photocopy of this box over their desk and read it daily. The rest of this chapter reviews basic research designs, explains them clearly, and creates a “hierarchy of evidence”. Chapter 4 moves on to the heart of the matter, “Assessing methodologic quality”. This includes a sensible discussion of critically important questions about the most revealing element of any paper—its Methods section. The chapter entitled “Statistics for the non-statistician” is neatly balanced: not so little that it does a disservice to the value of well chosen, well executed statistical tests, and not so much that it could add to the fear of readers who consider themselves innumerate (as, incredibly, Greenhalgh herself professes to be!).

Several of the chapters that follow are variations on these themes. The chapter on drug trials includes another of the author's tongue-in-cheek boxes, this one listing “Ten tips for the pharmaceutical industry on how to present their products in the best light”. Apart from this bit of amusement, however, the chapter offers little more that is relevant for most readers of Injury Prevention. The same applies, in part, to the chapter on diagnostic or screening tests. The concluding chapters, on guidelines, economic analysis, and qualitative research, are perhaps of secondary interest for those involved in injury prevention, but it is worth noting that we have published one paper on the basics of economic analysis and another that uses a qualitative approach.

But from this point to the end, things pick up again, at least in terms of issues of interest for readers of this journal. The chapter on systematic reviews and meta-analysis almost convinced me that this new wave, which we are bound to encounter more often in the future, is of value. The author is right: meta-analysis is a term that exemplifies the “fear and loathing” many of us feel towards “evidence based medicine”. But I concluded my reading of this chapter much the wiser and, I meekly admit, with less loathing.

It would be unfortunate if scientific readers ignored this book believing they know it all (I didn't; they don't), and sadder still, if non-scientific readers passed up this opportunity to educate themselves painlessly. The appendices are useful and the index is excellent. The price is right and the author writes like an angel.

I have only two fears about endorsing this book so wholeheartedly. First, that I will have nothing original left to say when I finally get to review How to Write a Paper. Second, that readers who digest its contents and apply their skills to papers we publish will find many flaws. They will and they should. But I have yet to read, write, or publish the perfect paper—one that could satisfy all the “evidence based” criteria. Our authors do the best they can—as do editors—and, all in all, I think we do extremely well.