Background: Duplication should be avoided in research and only effective intervention programs should be implemented.
Objective: To arrive at a consensus among injury control investigators and practitioners on the most important research questions for systematic review in the area of injury prevention.
Design: Delphi survey.
Methods: A total of 34 injury prevention experts were asked to submit questions for systematic review. These were collated, and experts then ranked them on importance and on the availability of research.
Results: Twenty one experts generated 79 questions. The prevention areas generating the greatest number of questions were fires and burns, motor vehicle, and violence (other than intimate partner); the fewest were other interventions (which included Safe Communities) and risk compensation. These were ranked by mean score. There was good agreement between the mean score and the proportion of experts rating questions as important or very important. Nine of the top 24 questions were rated as having some to a substantial amount of research available, and 15 as having little research available.
Conclusions: The Delphi technique provided a useful means to develop consensus on injury prevention research needs and questions for systematic review.
- systematic review
- Delphi survey
Injury control as a field is beginning to enter maturity. Over the last four decades, thousands of research articles have been published, and fatalities for most injuries have declined in industrialized countries. Resources both for research studies and implementation of intervention programs, however, are still scarce and must be used wisely. This requires that duplication be avoided in research and that only effective intervention programs be implemented.
One of the keys to accomplishing both of these goals is the systematic review of intervention research. Systematic reviews attempt to include the complete universe of research done on a topic, examine the quality of that research, and determine whether there is evidence for an effect, evidence for no effect, or inadequate evidence by which to make a judgment. In the last case, systematic reviews can serve as an important guide to both investigators and funding agencies on important research questions that should be further investigated.
Since quality systematic reviews are time consuming to conduct, priorities for conducting these reviews should be established. These priorities include the importance of the research question (in terms of morbidity and mortality), the availability of information with which to conduct a review, and the degree of uncertainty that experts in the field have about the research question. This study was undertaken to address the last criterion by determining the degree of uncertainty of experts in the field about a topic and asking the experts to rank the importance of different injury control topics for review. It used a Delphi design to arrive at a consensus among injury control investigators and practitioners on the most important research questions for review in the area of injury prevention.
Web based Delphi design
This study utilized a modified web based Delphi method to collect expert opinion and achieve consensus on research questions in the field of injury prevention. The Delphi method was used because it affords participants anonymity and, through iteration over several rounds, the opportunity to change their minds in private.1 Anonymity also eliminates pressure from dominant individuals within groups. The Delphi method thus helps to minimize the effects of group interactions and to maximize the ability to elicit expert knowledge.2,3
We used a web based Delphi as an economical and efficient method for the survey, avoiding the time spent on data entry and conventional mailing. A list of research questions for systematic review was determined through three rounds of the Delphi surveys. All three rounds were posted electronically on a web site hosted by the University of Washington. Individuals located nationally and internationally were able to participate in group consensus and determination of research priorities using this system.
Experts were selected on the basis of prior published research in the area of injury prevention or nomination by directors of injury prevention programs. These experts were chosen to represent the broad field of injury prevention, ranging from investigators to program managers, and to be representative of the English speaking world of injury prevention experts. They were chosen from various membership lists, including the editorial board of Injury Prevention, ISCAIP (International Society for Child and Adolescent Injury Prevention), and the directors of injury control research centers. Separate experts were chosen for consideration of systematic reviews in acute care of trauma and rehabilitation of trauma. There is no requisite number of experts in a Delphi survey; commonly, between 20 and 40 people are included.4 We invited 34 experts from seven different countries to participate in the first round. Each expert was contacted through an email invitation letter and provided with a unique URL link to the web site and their own survey. In the majority of cases, surveys were completed online and the data were directly downloaded into a database. Because of the length of the surveys in the second and third rounds, experts were provided with the option of printing out a hard copy and faxing it back to us. In such cases, the data were entered electronically by the research coordinator. The web based system further enabled us to track non-respondents. Reminder emails were sent to those experts who had not submitted responses within four weeks. Upon completion of each round the data were transferred from the web system to an Excel spreadsheet for analysis.
We provided all experts with a list of 64 systematic review topics that had been published in 16 different injury prevention areas. The topics were available online at the Harborview Injury Prevention Research Center web site (www.hiprc.org). This information was provided to avoid generating questions that had already been subjected to systematic review and to establish a common base of knowledge for all participants. Imbalance in topic knowledge among participants has been a prior criticism of the Delphi method.5–7 Delphi critics contend that some experts, because they are less informed or familiar with the research available in particular topic areas, may not be as qualified or experienced to vote on or provide an opinion in that area. Published systematic reviews were identified through extensive directed literature searches and communication with health agencies and professional organizations including the National Center for Injury Prevention and Control, the Cochrane Injuries Group, and the US Community Preventive Services Task Force.
Delphi round 1: question generation
The first round consisted of open ended question generation. Each of the 34 experts was asked to provide up to 10 injury prevention research questions that would be appropriate for systematic review. Each expert was provided with “tips for building research questions”, which asked them to define five elements for each question. These included the research topic, patient population or problem, intervention, comparison intervention (if applicable), and outcome. This technique was utilized to minimize ambiguity on the part of the investigators, and to collect research questions that were more specific and appropriate for systematic review. Experts were asked to select topics for frequent and severe injury problems, and prevention strategies that have been evaluated, but for which clear conclusions about the size of the intervention effect are currently not available. The five elements were then combined to form a research question. The experts were specifically told to focus on prevention, and not the acute care of trauma patients or their rehabilitation.
Participants were given one month to formulate questions. Investigators then gathered, reviewed, and posted on the web a modified list of questions for the second round. Questions that were similar in nature were combined, and questions that focused only on risk factors for injury without studying an intervention were eliminated.
Delphi round 2: rating
All 34 experts were included in the second round whether or not they submitted questions in the first round; they were given one month to respond. In round 2, each expert rated the relative importance of each topic on a five point scale, which considered importance, frequency of the problem, morbidity/mortality, and its priority for review. A score of 5 (very important) was defined as a very common injury problem, with very significant morbidity/mortality, and/or very high priority for review. At the other end of the scale, a score of 1 (very unimportant) was defined as a very uncommon problem, with very low morbidity/mortality, and/or very low priority for review. Experts were offered the opportunity to comment on the research questions, and to clarify or provide additional comments to support any of their ratings. These comments were reviewed only by the investigators. The scores were collected via the web or by fax, and analyzed.
There are no established rules for determining consensus within the Delphi method.6,8 Our goal was to achieve consensus over the most important questions, as well as to develop a list of questions that was reasonable in length and scope. Questions were ranked by average score with questions in the highest tertile selected for final consensus scoring. We also calculated standard deviations and the proportion of respondents rating each question as important or very important.
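The round 2 scoring rules described above amount to simple summary statistics per question. As a minimal sketch, using entirely hypothetical ratings and question labels rather than the actual survey data, the ranking and tertile selection could be computed as follows:

```python
import statistics

# Hypothetical round 2 ratings: question -> expert scores on the 1-5 scale.
# (Illustrative only; the actual survey responses are not reproduced here.)
ratings = {
    "Q1": [5, 4, 5, 4, 4],
    "Q2": [3, 2, 4, 3, 3],
    "Q3": [5, 5, 4, 5, 4],
}

summary = []
for question, scores in ratings.items():
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    # Proportion of experts rating the question important or very important (4 or 5).
    prop_important = sum(s >= 4 for s in scores) / len(scores)
    summary.append((question, mean, sd, prop_important))

# Rank questions by mean score, highest first.
summary.sort(key=lambda row: row[1], reverse=True)

# Select the top tertile for final consensus scoring (round 3).
cutoff = max(1, len(summary) // 3)
top_tertile = summary[:cutoff]
```

With the real data the tertile cutoff would fall at roughly a third of the 79 questions; here it selects one of the three hypothetical questions.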
Delphi round 3: final consensus rating
In round 3, the remaining questions were presented to the experts for final rating; respondents were again given one month. The questions were sent only to the 28 experts who responded to the second survey. For each question, the average score and the per cent of experts voting 4 or 5 in the second round were presented for consideration during final scoring. Experts were again asked to rate each question in terms of importance, and also to indicate the amount of research they believed was available for a systematic review on each question. This was measured on a three point scale: 3, a substantial amount of research available; 2, some research; and 1, very little research conducted to date.
Of the 34 experts who were sent the initial survey, 21 (62%) provided questions in the first round. There were 126 questions generated, with a mean (SD) of 6 (3.9) questions per person. These were combined into 79 distinct questions for the second round of the survey. Twenty eight experts (82%) participated in the second round and 26 (76%) in the third round.
The 79 questions were grouped into 21 areas of research (table 1). The prevention areas generating the greatest number of questions were fires and burns, motor vehicle, and violence (other than intimate partner); the fewest were other interventions (which included Safe Communities) and risk compensation.
In the second round, there was a strong correlation between the mean score and the proportion of experts who scored a question as important or very important (r=0.956). In the top tertile, consisting of 27 questions, the scores ranged from 4.42 to 2.82, and the standard deviation was <1 for 78% of these questions.
The final ranking of the top tertile questions and the mean score for the amount of information available for systematic review is shown in table 2. Two questions were removed from the top tertile because a review had already been done or a review was in progress, and two questions were combined into one. The final list consisted of 24 questions. Nine of these 24 questions were rated as having some to a substantial amount of research available, and 15 as having little research available.
The complete list of questions and their scores is available on the Harborview Injury Prevention and Research Center web site (www.HIPRC.org).
The electronic Delphi design proved to be an economical and feasible method to solicit questions and develop consensus on research topics for systematic review. While it has the disadvantage of not including face-to-face discussion, it avoids dominance of the group by one or a small number of individuals, and allows equal input from all. In the top tertile of questions, there was good agreement on the importance of the questions. Unfortunately, many of these questions were viewed as having little published evidence available for review.
There are limitations that must be considered. The intent of the survey, as stated in the directions to the participants, was to solicit research topics for systematic review. Appropriate questions for review are those that are important and for which there are studies that need to be critiqued and summarized. Nevertheless, many participants suggested questions that, while important, were more appropriate for primary research than for review because of the lack of sufficient prior studies.
We attempted to get a representative sample of injury control investigators and individuals who focus on injury prevention program implementation as participants in this study. We tried to achieve inclusiveness in the group of experts and avoided having the panel dominated by either investigators or program managers. The group was international, and represented diverse interests.
Given the scarce resources available for injury prevention research and program implementation in all countries of the world, focus of research on important, previously unanswered questions is necessary. Systematic reviews should be done on available research to guide both new research and prevention efforts. We believe this survey will be of use to both investigators and practitioners in working to accomplish these goals.
The authors would like to acknowledge Jeff Coben, Carolyn DiGuiseppi, Andrea Gielen, David Grossman, Mike Hayes, David Hemenway, Arthur Kellerman, R Krishnan, Liz McLouglin, Jim Mercy, Jim Nixon, Robyn Norton, Joan Ozanne-Smith, Rick Pain, Corrie Peak-Asa, Barry Pless, Ian Roberts, Carol Runyan, Richard Schieber, Ian Scott, Jonathan Shepherd, Jo Sibert, David Sleet, Dan Sosin, David Stone, and Craig Zwerling for their contribution of time and ideas as members of the survey's expert panel.
Funded by grant number R49/CCR W2570-16 from the CDC.