From the editor’s desk
PAGE CHARGES: AN UNWELCOME NECESSITY
It is with considerable regret—and not without some disagreement among the Editorial Board—that we announce the initiation of page charges for authors. Beginning 1 January 2004, authors will be asked to pay US$150 for each complete printed page. This decision was reached only after months of deliberation and is a reflection of the economic realities of journal publishing in the 21st century. In this case, it reflects in part our well-intentioned but, in retrospect, misguided venture to make the complete journal accessible online at no cost. Not surprisingly, many previous subscribers succumbed to simple economics. The result was a loss of vital income for a journal that does not attract advertisers or other sources of revenue.
The financial consequences were such that, quite understandably, our publishers reassessed the situation. Among the various scenarios, one element that held promise for balancing the books was page charges. This was agreed with two important exceptions: the first is an exemption for readers in low-income countries, who will continue to receive this and all other BMJ journals online at no cost. The second, perhaps more controversial, is a waiver of charges for authors who subscribe to the print or online editions.
This makes sense for two reasons. The goal of any contributor is to see the product of their work in print, preferably in respected and widely read journals. Whether they themselves read such journals regularly, if at all, is seemingly secondary. Yet, when I teach students how to write for publication, I stress the immense value of being totally familiar with their target journals. At the very least I insist they strictly abide by the instructions for authors. Little alienates editors more than receiving a paper that has been prepared for another journal without the necessary adaptations for the journal to which it is eventually submitted. The benefits of familiarity with the journal of choice, preferably as a regular reader (that is, subscriber), should be evident. Hence this concession may ensure submissions better tailored to our readership as well as our structural and stylistic requirements, and simultaneously attract more subscribers because it is by far the more appealing financial option.
The page charge decision was not made lightly and we wish we did not have to make it at all. But it is now widely acknowledged that the business of publishing journals has become increasingly challenging.1 Publishers are not charities and ours has been more charitable than most. There are limits, however, and the loss resulting from the free online meal could not be sustained indefinitely. The economics of the choice for authors should be compelling; even a subscription to the print edition at $150 per year for six issues is much less costly than the $600 a four-page paper would entail. The web-only subscription at $40 per year is an unparalleled bargain; we urge aspiring authors and all readers to take advantage of it.
By way of a benchmark of sorts, Potter notes that most Elsevier journals charge between $1000 and $6900 for one-year institutional subscriptions and that the organic chemistry journal, Tetrahedron, is a snip at $20 763. The Public Library of Science intends to charge authors $2000 to cover reviewing and editing of articles. John Hoey, the editor of the Canadian Medical Association Journal, estimates it costs about $5000 “to edit and illustrate an article and get it online”.1 Put that in your pipe and smoke it, and while you do, ponder our generous proposal!
REFERENCE
1
BRIDGING FROM RESEARCH TO PRACTICE
This topic has arisen repeatedly over the past 10 years and with good reason. It is one of the great unresolved issues, not just for injury prevention researchers but for almost everyone in applied research. I will not, therefore, apologize for raising the question again: what needs to be done to ensure that solid research findings are put into practice? In a guest editorial Moller (p 2) suggests that one step is for journals such as this one to become more flexible in their interpretation of scientific standards so that more community-based interventions, flawed though they may be, can be published. He makes some important points and, in fairness to ourselves, we have moved far in the direction in which Moller is trying to push us. We certainly agree that one of the reasons why it is so difficult to cross the bridge is that essential planks are missing. What works in the artificial settings where efficacy studies are conducted will not necessarily work equally well, if at all, in the real world where effectiveness studies are performed. Moller’s point is that those studies, especially if the unit of intervention is a community, are likely to be so flawed that reviewers will object strongly to their publication. This is what makes being an editor so much fun!
These issues aside, there are other matters to consider. I mentioned previously a set of papers in the American Journal of Public Health that caught my attention and admiration. One of these is highly pertinent to the question of “translational research” (a term used differently by investigators in different fields). This paper, by Glasgow, Lichtenstein, and Marcus, asks: Why don’t we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition.1 I urge readers concerned with this topic (and most should be) to read the original because my précis will fail to do justice to the authors’ ideas.
In brief, they make some of the same points as Moller, but add others that they believe lie at the heart of the matter: “the logic and assumptions behind the design of efficacy and effectiveness research trials”. They note that the widely held belief that the products of successful efficacy studies are the best candidates for effectiveness and dissemination studies may be entirely wrong. Their argument is situated in a framework of five phases of intervention research proposed by Greenwald and Cullen.2 It focuses on the nature of phase 4—effectiveness studies intended “to measure the impact of an intervention when it is tested within a population that is representative of the intended target audience”. One key element is generalizability to intended program users, followed by large-scale implementation.
As the commentary notes, however, this seemingly logical sequence often fails to materialize and truly successful effectiveness trials are scarce. Thus, the American Journal of Public Health paper argues that the model is flawed, at least in part, because researchers engaged in each phase have distinctively different values and methods. They suggest a new model to give “balanced emphasis to internal and external validity”. For details of the model, RE-AIM, see the original paper.1 It concludes with four recommendations aimed at researchers, editors, and funding bodies, which I quote verbatim:
- Researchers should pay increased attention to moderating factors in both efficacy and effectiveness research.
- Realize that public health impact involves more than just efficacy.
- Include external validity reporting criteria in author guidelines.
- Increase funding for research focused on moderating variables, external validity, and robustness.
Each of the recommendations includes a detailed list of concrete suggestions. We all need to consider these fully and move rapidly to adopt those we agree with. This is required reading for everyone who shares the growing concern that too many measures that have been shown to be capable of reducing injuries fail to reach the bedside or the communities for which they are intended.