Author's note: This paper is written in the first person for two reasons. First, I teach writers to use the “active voice” whenever possible and must practise what I preach. Second, and more germane, this is a highly personal account of my impressions of an area of injury prevention that I make no pretence of knowing well. It is an account of a voyage of discovery and cannot be dignified by the trappings of the objective assessment that typical scientific writing conveys.
When I was invited to give this wrap-up talk, I was flattered but wondered what someone whose main interest is not occupational injuries might have to say that could be of interest to the audience. In light of this uncertainty, I decided to base this presentation largely on my experiences with this field as editor of this journal.
I was invited when our family was preparing to celebrate both Passover and Easter. At the Passover meal it is customary for the youngest to ask four questions, the theme of which is rhetorical, along the lines of “Why is this night different from all others in the year?” Then the answers are given, each beginning with “On all other nights we do such and such, but on this night we do thus and such”. Together the questions and answers summarise the Passover ritual.
It struck me that this was the sort of question I asked myself after receiving the invitation. I wondered in what way occupational injuries differ from most other injuries such that I knew so little about them. I also wondered what I could say about publishing material pertaining to these injuries that might be different from publishing studies about seat belt use, for example. Consequently, I decided that I would frame this presentation around four questions, albeit not quite of biblical proportions. To these I have only three answers, hence the title.
The first question is: Is there any fundamental difference between occupational injuries and safety and all other areas of injury prevention, and if so, what is it?
The second is where should occupational safety research be published? Or, put differently, how should anyone decide which journal, or type of journal, should have the privilege of publishing their work? Where might it have the greatest effect?
The third is how to get good research published.
And the last is: Does having good research published in good journals really make a difference? If not, what needs to be done to change this?
(1) Is occupational safety different from other areas of injury prevention?
The first is the tough, almost existential question about whether there is a fundamental difference between occupational injuries, or occupational safety, and all other categories of injury prevention, and if there is, what is the nature of that difference.
On the one hand, the simple answer is that there is no difference, or should not be, because all injuries are health problems. An injury is an injury whether it occurs at work or during non-work activities. It matters little whether it comes from being hit by a car or a tractor, or from falling from a ladder at home or at work. Moreover, all are important because of their numbers, their potential seriousness, their preventability, and their cost.
On the other hand, it is equally reasonable to argue that occupational safety differs in several important respects. To begin with, I assume the nature of the data available is, or should be, better than what is customarily available for many other types of injuries for at least three reasons. First, because these data are obtained systematically; second, because the population at risk is usually well defined; and third, because it should more often be possible to estimate exposures than in many other areas of injury epidemiology. (These statements apply to data collected at or by the workplace; national data rarely identify work injuries, and one estimate from the National Health Interview Survey is that 20% of “at work” injuries are not captured by traditional workplace based systems; G Smith, personal communication.)
Secondly, I also assume, perhaps naively, that research in occupational safety is often different because it involves a more direct connection between the researcher and the sponsor. Put another way, there may more often be a direct financial payoff from preventive research to the employer. In contrast, if highway safety improves, it is not the transport sector that benefits financially but the health care sector (through decreased medical care expenses). It is probably not an exaggeration to suggest that there are no other fields of injury prevention where the benefits of a safety measure accrue directly to the sponsor of the research. (In saying this I am assuming that the research is supported by the employer or by an insurance company.) In spite of this compelling argument, it appears there is little corporate sponsored workplace research in the United States, Liberty Mutual being a notable exception. Nevertheless, the point remains: experience in another “closed system”—the Indian Health Service—shows how an investment in prevention research can result in substantial savings, both monetary and personal. This is also the experience of the Workmen's Compensation systems that fund research in some Canadian provinces.
Third, regulations or laws may make it easier to enforce rules in the workplace than the same rules would be in the community at large. Against this are pressures in the workplace and, for many workers, forces beyond their control that serve to increase risk.
Finally, another interesting difference is that in general it should be much easier to evaluate preventive interventions because the workplace provides more of a closed laboratory. In such a setting, policies and practices as well as control technology can be tested under better conditions.
(2) Where should occupational safety research be published?
The second question is how anyone working in this field decides on a journal in which they would like to see their work published. Stated more generally, how does one choose the journal most appropriate for a particular paper?
This seemingly simple question has a complex answer. It is safe to begin with the self-evident assumption that authors would always prefer to publish in the “best”, that is, the most prestigious, peer reviewed journal with a high impact factor, rather than in some inferior beast. Setting aside any doubts I may have about the value and validity of impact factors, clearly, by this measure or some other, certain journals are more prestigious than others. So why not aim high? In fact, I often urge students and colleagues to put a first class journal at the top of their list of preferences. Although submitting any paper to a world class journal should not be done with any great expectation that it will be accepted, it is reasonable to assume such a journal will provide expert reviews. These, in turn, should help improve your paper so that it has a better chance of success when you send it to a more realistic target, some notches below in the pecking order.
There is also a loftier principle that ideally should be considered when choosing a journal: deciding where it will do the most good. It is always wise to at least consider who the target audience is when choosing a journal, and it is not too much to expect that the target audience includes those best able to make good use of your findings. (This principle is based on the perhaps misguided assumption that people actually read what you write and are eager to respond to what you recommend.) If target audiences do matter, however, an argument can be made that the fewer journals there are, the more likely it is that the target will be reached. It is also possible to make the opposite argument—that more niche, or specialty, journals are needed.
The third option is to choose the journal where you realistically have the best chance of having your paper accepted with the least hassle—probably the most popular of the options.
In light of these points, I was prompted to try to discover where occupational safety papers are actually published. To answer this I examined each paper in a supplement on occupational safety published by the American Journal of Preventive Medicine last year1 and listed every journal cited in the references. I then categorised them as follows: (1) basic science journals, mostly psychology; (2) general journals, usually medical or public health, but not specific to occupational safety; (3) mainstream occupational health and safety journals; (4) specialized journals; and (5) other, which includes proceedings, book chapters, doctoral dissertations, and other unpublished formats (see table 1).
Table 1 Journals in which occupational safety papers have been published
The main conclusion prompted by this exercise is that in occupational safety (as is no doubt true of other injury prevention fields) no journal dominates; there is no focus. A grand total of 46 different journals (perhaps not all of which are peer reviewed) cannot be good for the field because no one can keep up with such a widely scattered body of literature. But the solution is not to suggest that all these journals cease publishing or that all these papers could appear in Injury Prevention.
In his discussion of the systematic reviews that were at the heart of the American Journal of Preventive Medicine issue, Beahler described some of the challenges. Of the 41 871 titles uncovered, only 1356 were potentially eligible, and of these, only 207, or 15%, made it to the final review. One reason most failed is that they lacked outcome measures. Another was that much of the relevant literature was not published in peer reviewed journals and would not, therefore, be included in a Medline search. Thus, other sources, such as NIOSHTIC, would be needed.
So a case can be made for trying harder to concentrate the good material in fewer reputable, interdisciplinary, and preferably international journals. This case rests on the belief that most scientific endeavors are enhanced when they expand their disciplinary boundaries. In injury prevention, as in so much else, we invariably do better when we can learn from colleagues in other professions. For example, epidemiologists do not possess the only key to truth. By the same token, we also have much to learn from colleagues in other parts of the world. So, whenever possible, international journals are preferred to those that are parochial.
Another possible (and probably inevitable) solution is to make greater use of web publishing. Whether I like this prospect or not, the web is bound to become part of the answer for those who can't afford to subscribe to more than two or three journals or who cannot make frequent trips to the library. For that matter, I am certain that no library has all 46 of the journals listed.
(3) How to get good studies published
Next there is the question of how to get work in this field published in any respectable journal. The simple answers are to ask interesting and important questions, do solid research, present it well, and choose your journal carefully.
It would not be appropriate to say much about how to do good research: all readers should know as much as (or more than) I do about design, measurement, and analysis—the “DMA” that is the cornerstone of any scientific inquiry. The self-evident point about good science aside, however, the more salient and problematic issue is how to present the science well. There is much to be said about this and, indeed, there are many excellent books on the topic.2–5
For the past several years I have given a two semester, two credit course on scientific presentations: essentially, how to write better. Table 2 is a distillate of the general suggestions I present about how to make writing palatable. Make no mistake: although great science will probably always get published no matter how badly it is presented, it may be difficult to see how good it is if it is really badly written. More importantly, most science is neither great nor awful but rather falls in a large intermediate gray zone where how well it is written can tip the balance in your favor, or in the opposite direction.
Table 2 Basic tips on how to get published
To summarise, the first cardinal rule is to know your audience: that is, the journal to which you intend to submit, and that means at least checking the instructions for authors and following them to the letter. Nothing upsets an editor more than papers from authors who have not taken the trouble to read the journal, let alone check its web site for instructions. Being a subscriber is a great help, but I assure you there is no need for paranoia; no editor I know of checks the subscriber list when a new paper arrives.
The next rule is to revise, revise, and revise. Cut everything that is not essential to making your meaning clear; it is almost always the case that the shorter a paragraph or a paper, the clearer it is. We have a 3000 word limit and you would be amazed at how much better a 5000 word paper becomes when the authors are forced to prune mercilessly. If you can't find words or phrases that can be sacrificed because you are so attached to what you have written, have a critical colleague read it or, better still, your spouse.
Another suggestion is to write from the inside out. Start with the results, perhaps with the tables and graphs, then write the methods, and leave the tough stuff, the introduction and discussion, for the end, when you are feeling confident and relaxed. Make sure that what is in tables and graphs is not repeated in extenso in the text; just comment on the findings.
There is much more that could be said, but there is one final rule: be persistent (within limits, of course) and learn to read the coded messages in editors' letters. What may appear to be a rejection may actually be an invitation to revise and resubmit, and if you decide it is, be sure to take all the reviewers' comments to heart.
(4) Does what you publish make a difference?
The last question, put simply, is whether doing good research and publishing in “good” journals really makes a difference in saving lives or reducing injuries. This is the question to which I have no answer.
But I must offer some caveats. Much depends on the type of research. From my perspective as an investigator, as well as that of an editor, I am convinced that there is a hierarchy in research. Any study that takes us closer to real answers will always be preferred to one that merely defines the problem. This is as true for occupational safety as it is for injury prevention as a whole. Studies using stronger designs are invariably more convincing than those using weaker ones. Above all, there is a pressing need for more solid evaluation studies of the preventive measures we believe to be effective.
I also have a political bias about such issues and still labor under the possibly false belief that good policies (and that means good politics, and often legislation) actually can solve many of the problems in injury prevention. In fact, again naively, I also assume that policymaking in the context of occupational safety is a lot simpler than in many other areas of injury prevention because of the relationship I mentioned previously.
This is one reason why I have advocated that, with respect to injury prevention, health departments should, in effect, be first among equals. Health departments in general, and public health in particular, should have the major responsibility in the injury prevention arena, and I say this with no disrespect to any other government body with a stake in the issue. In one respect, the National Institute for Occupational Safety and Health is part of a public health department at the highest level. Injuries are similar to other preventable diseases; it is health that pays the bills; and prevention is cost effective. In light of this, no other government department has as much to gain or lose by sponsoring or spurning effective preventive opportunities. If this view were heeded, health departments would have oversight responsibility and the power and resources to influence the agendas and spending of other government departments.
But this is probably just a pipe dream. The reality is that this is the question to which I do not know the answer. I suspect, however, that in the absence of a stronger role for health departments, whatever difference is made by a publication is much less than most of us would like to believe. For example, the evidence suggests that many of the best steps toward prevention involve regulations or legislation, and, of course, their enforcement. This, in turn, means getting the research to policymakers and here we have a huge problem. It is the gap between research and action; what some have called information transfer or simply research implementation. In injury prevention it has been estimated that there would be 30% fewer deaths if all we now know to be efficacious were implemented fully.6
Because implementation so often involves policymaking, I have struggled with this issue for years: how to cross the bridge between science and policymaking. I even did a study in which I tried to tease out the factors responsible for the success or failure of Canadian investigators working on child health issues with a policy related component.7 I discovered that it was not more information but personal contacts that mattered most. Establishing such contacts between implementers and researchers was not easy, however. Consequently, we proposed the creation of a research broker: someone whose job is to identify, early on, who the consumer of any piece of research is and, as often as necessary, to bring the researcher into direct contact with that consumer.
A slice of occupational research studies
Returning to the caveat that much of the answer to the final question depends on the type of research, we come to the tough love message: it seems we are doing too few intervention studies. To support this point I embarked on another exercise. I carefully reviewed the program for the National Occupational Injury Research Symposium (NOIRS) conference and, with some trepidation, attempted to determine the nature of each study being presented, based only on the titles. I then classified the studies into a hierarchy that reflects the values I place on each type (see table 3).
Table 3 Nature of research presented at NOIRS 2000
This resulted in three main groups. Group A included all the studies that were essentially descriptive. This included 151 (75%) of the 200 papers in the program. (This is not to suggest that descriptive studies are bad science, unimportant, or trivial. Up to a point, they are essential to all that follows and there are times, albeit rare, when a descriptive study alone may lead directly to some form of prevention.)
Of the descriptive studies, 19 used surveillance and 73 used other methods of data gathering such as surveys. Included in this category are studies that went beyond description to include risk factors or some other analytic component, such that relationships between or among variables were examined (22) and papers that described programs or processes (37).
Group B, methodologic, included 11 describing research methods, for example, the case-crossover papers, and 19 in “technology”, for example, how to create a non-slip surface. The latter are analogous to helmet design or airbags in other aspects of injury prevention.
Group C, interventions, included a few pay-dirt papers: those most editors would fall over themselves for because they describe and, ideally, evaluate preventive interventions. Yet there were only 17 of these—less than 10%—and only two may have been randomised trials.
A few cautions: first, the classification may be inaccurate, but if there was any doubt, I leaned toward assigning a presentation to the highest category. Second, the program for a meeting like NOIRS may be a poor sample from which to draw conclusions about the field it represents. It is noteworthy, however, that the results seem similar to the current state of play in many other areas of injury prevention.
Nevertheless, my reaction to this exercise was surprise and disappointment. I had expected many more intervention studies, for the reasons given earlier. And it appears I was not alone. Several contributors to the American Journal of Preventive Medicine issue made similar observations. Beahler noted that the systematic reviews included little evaluation research and few randomized trials, and that many reports were not in Medline.8 Rosenstock and Thacker wrote that “many (studies) evaluated effective interventions” but added that often the effective elements were not determined, the durability and generalizability of the findings were unclear, and many topics were excluded because there were no evaluations.9 The concern about too few evaluations of interventions was also raised in two special topic papers.10,11
The scarcity of intervention studies may reflect the relative “newness” of the field of injury prevention in general and occupational safety in particular. The modern era in this field is only about 20 years old. Intervention studies assume we know what to do and that sufficient funding is available to implement and evaluate. Although clearly we need to know more about what works in many areas, in too many others we do know, and it is only the funding that is missing.
Ignorance of implementation strategies cannot be used to explain these disappointing findings. Apart from the broker idea, there are many other methods that seem worth exploring. Nevertheless, it is still not proven that publishing in journals (good, bad, or otherwise) makes a difference in getting the results enacted. Policymakers probably rarely read journals. But if they did, it seems reasonable to assert that good data are important. “Although political considerations will always play a prominent role in policy development, politics that has to contend with the results of good science should produce better policy than politics based on poor science or no science at all”.12
In this context we can ask whether it matters where you publish your results, especially if they describe or evaluate interventions, and doubly so if they point strongly to a positive result. We can hope that in such cases good publications do make a difference, especially if they are properly packaged. Nevertheless, that difference may well be far less than most realise or expect.
Why so cautious an outlook?
First, in spite of my earlier claim that occupational safety research should be easier to implement than research in many other areas of injury prevention, the yields may not be large enough to prompt action. Second, we need to be realistic about the kind of publications that trigger change. In most fields, injury prevention and occupational safety among them, too few publications address prevention. Finally, those that do are spread over many journals, one result of which is that there is little cross-fertilisation across disciplines.
Conclusion
Whether the label is injury prevention or occupational safety, it seems inescapable that we are doing far too little intervention research. Instead, much of what is being done is fiddling around the edges, too often repeating what is already well known.
Would a randomized controlled trial of a workplace program that substantially reduced falls or burns get published in Injury Prevention or any other premier journal? Most certainly. Even if the results were negative it would be prized. Would it trigger action? Perhaps not, but it may have a better chance of doing so if there were a broker to translate the findings or help the researcher to explain them.
Would it help more if the findings appeared in Injury Prevention or, more generally, in fewer mainstream journals? It may well do so. We try to hammer home a user friendly take-home message by asking that every original article conclude with a section entitled “Implications for prevention”. However, this is small potatoes; what matters most is the quality of the science and, in the context of this presentation, the willingness to conduct more evaluations of what appears to be effective in reducing injuries.