Background: The Canadian Hospitals Injury Reporting and Prevention Program (CHIRPP) is an emergency department-based injury surveillance system that was devised in Canada and has been in operation since 1990. CHIRPP was imported to Glasgow’s Royal Hospital for Sick Children at Yorkhill in 1996 and ran for 10 years.
Objective: To critically review CHIRPP at Yorkhill (Y-CHIRPP). The following two key questions were posed. (1) Did Y-CHIRPP fail, and, if so, why? (2) What generalisable lessons can be learned about injury surveillance?
Methods: A retrospective qualitative review of Y-CHIRPP was carried out. In gathering information, the aims were to: (a) describe the processes involved in running Y-CHIRPP; (b) identify changes made to that process over the 10 years; (c) determine the strengths and weaknesses of Y-CHIRPP.
Results: Taken together, and with reference to the WHO attributes of a good surveillance system, the findings suggest that Y-CHIRPP largely met the criteria of simplicity, flexibility, and acceptability. Criteria that were not, or only intermittently, met were reliability, utility, sustainability, and timeliness.
Conclusions: Y-CHIRPP was, at best, a partial success. To maintain the viability of an injury surveillance system and to secure the long-term support of hospital staff, it is important that the system is perceived as an injury prevention service tool and not a research method. Experience with Y-CHIRPP suggests that injury surveillance requires three supporting posts: an emergency department staff member, a data analyst, and someone with responsibility for developing and/or lobbying for the implementation of preventive measures.
The World Health Organization (WHO) has described injury surveillance as a cyclical process.1 It has been defined as: the systematic ongoing collection, collation, and analysis of injury data and the timely dissemination of information to those who take preventive action.2 The emergency department (ED) is a logical location for surveillance of injuries, although there are many other potential sources of injury data, including police records, in-patient hospital data, trauma registries, ambulance records, and primary healthcare clinics.
Injury surveillance in EDs has been tried with some success in several countries, including England, Wales, Canada, and Australia.3–5 This paper offers a critical review of a long-running, but now discontinued, ED-based injury surveillance system in Glasgow, UK, with the aim of drawing practical lessons of relevance to all concerned with injury surveillance. Specifically, we posed the following two key questions. (1) Did the injury surveillance system fail, and, if so, why? (2) What generalisable lessons can be learned from the Glasgow experience?
THE CANADIAN HOSPITALS INJURY REPORTING AND PREVENTION PROGRAM (CHIRPP)
CHIRPP is an ED-based injury surveillance system devised in Canada that has been in operation since 1990.4 6 7 It currently collects data about childhood injuries from 14 hospitals in seven provinces and one territory and has been subjected to a formal evaluation.7
History of the Yorkhill CHIRPP
CHIRPP was imported to Glasgow’s Royal Hospital for Sick Children at Yorkhill in 1996 (after a 2-year pilot project) and ran for 10 years. Yorkhill is the largest children’s hospital in Scotland and the third largest in the UK. The aims of the Yorkhill version of CHIRPP, hereafter referred to as Y-CHIRPP, included: to monitor injuries, including any rise in cause-specific injuries; to monitor injury-related health improvement measures; to generate data to facilitate injury prevention initiatives.
We retrospectively reviewed all retained written material relating to Y-CHIRPP, including correspondence, reports, peer-reviewed publications, and material relating to injury prevention strategies resulting from Y-CHIRPP. People involved in the running of Y-CHIRPP, including staff who had since moved positions or retired, were identified and interviewed in person or by telephone by one of us (DS). The interviews were semi-structured and aimed to determine the processes involved in establishing and operating Y-CHIRPP together with the interviewees’ views of the strengths and weaknesses of the system. We did not interview injured children or their parents.
In gathering information, our aim was threefold: (a) to describe the processes involved in the running of Y-CHIRPP; (b) to identify changes made; (c) to determine the strengths and weaknesses of Y-CHIRPP.
Evaluating strengths and weaknesses of Y-CHIRPP
According to WHO, a good injury surveillance system should have seven attributes: simplicity, flexibility, acceptability, reliability, utility, sustainability, and timeliness. In describing and evaluating the processes involved in Y-CHIRPP, we refer to the steps of an injury surveillance system as described by WHO.1 For each step, the strengths and weaknesses of Y-CHIRPP are discussed, and the overall performance of the system is summarised.
In evaluating the system, we depended on the documentation and interviewee responses available to us. Accordingly, some of the information gathered was necessarily incomplete. We acknowledge that we were obliged to adopt a highly subjective approach. Our account of the processes involved is qualitative, and our judgments of Y-CHIRPP against the WHO criteria were dichotomous (pass/fail).
Installing CHIRPP at Yorkhill
The installation of CHIRPP at Yorkhill has been described in detail previously.3 The Canadian CHIRPP team supported the establishment of CHIRPP at Yorkhill by visiting the site and through subsequent correspondence.
The costs of installing Y-CHIRPP were modest. These included the recruitment of a full-time clerical officer to code and enter data, the purchase of a dedicated desktop computer, software, and telephone line, and printing costs for forms required for data collection. The original CHIRPP form was modified to reflect local requirements (eg, postcode rather than zip code). The CHIRPP software was installed with assistance from the Canadian CHIRPP team.
Figure 1 shows key processes in the data collection. Data collection began with the delivery of forms to the ED by the data clerk and proceeded with the help of the triage nurse, ED nurses, and physicians. Approximately 10 000 cases per annum were entered on to the database.
The data collection form was short and easy to complete, requiring around 3–5 min for the parent or patient and a further 30–60 s for the physician or nurse to complete. Except for the patient-identifying information, information collected did not duplicate that collected elsewhere.
Completion rates of over 90% were achieved in the first few months of the pilot phase, subsequently settling at 70–80%. Local public health officials welcomed the arrival of CHIRPP.
ED staff seemed ambivalent about the system, resulting in variable data-capture rates. There was no consistently applied mechanism for assuring data quality, in terms of either completeness or accuracy. In the early years, the capture rate was monitored monthly, but this practice was ultimately discontinued. The software was originally DOS-based and inflexible. Later, the system was converted to Microsoft Access, allowing data managers to make changes to both the questionnaire and the data entry and manipulation stages.
Y-CHIRPP was reviewed periodically by a small steering committee (see below), and minor (mainly administrative) modifications were made to its operation. In addition, unplanned changes (such as turnover of personnel) occurred.
Initially, ED reception staff were responsible for identifying eligible patients (ie, those with an injury or ingestion) and distributing the Y-CHIRPP forms. It became apparent that the reception staff were uncomfortable accepting responsibility for reporting injuries that required a degree of clinical insight (as well as additional work). Subsequently, the ED triage nurse adopted this role.
Maintaining high capture rates—to ensure that as close as possible to 100% of injured children attending the ED were included—proved a major challenge. Several remedies were attempted including returning incomplete forms to staff and rewarding physicians who had high completion rates with “prizes.” Generally, these initiatives were time-limited in their impact.
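The monthly capture-rate monitoring mentioned above amounts to a simple ratio check. A minimal sketch of that calculation follows; the function names, the 70% alert threshold, and the monthly figures are illustrative assumptions, not details of the original system:

```python
# Illustrative sketch of monthly capture-rate monitoring.
# "Capture rate" = Y-CHIRPP forms completed / eligible injury attendances.
# Function names, data, and the 70% threshold are assumptions for illustration.

def capture_rate(forms_completed, eligible_attendances):
    """Return the capture rate as a percentage."""
    if eligible_attendances == 0:
        return 0.0
    return 100.0 * forms_completed / eligible_attendances

def flag_low_months(monthly_counts, threshold=70.0):
    """Return the months whose capture rate falls below the threshold."""
    return [month for month, (forms, eligible) in monthly_counts.items()
            if capture_rate(forms, eligible) < threshold]

# Hypothetical monthly figures: (forms completed, eligible attendances).
counts = {"Jan": (820, 1000), "Feb": (640, 1000), "Mar": (930, 1200)}
print(flag_low_months(counts))  # only Feb (64%) falls below the 70% threshold
```

A check of this kind is cheap to run, which is one reason the abandonment of monthly monitoring (noted above) is hard to justify on workload grounds alone.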
Entering and processing the data
Coding of free text
The original data collection form used free text to capture information about the incident resulting in the injury. The text was subsequently coded, allowing the data to be interrogated more easily. This information was then grouped into defined variable categories representing the sequence of events leading to the injury. Table 1 shows an example of coding for an injury in which a child playing in the school playground was knocked over and hit the floor while fighting with another child.
Although the data entry clerk tended to use a relatively small number of codes, the total number available ran into thousands, resulting in large frequency tables that required further manipulation to produce meaningful data. Inevitably, uncommon injuries were inconsistently coded. Several of the categories used for the coded variables were not mutually exclusive, and the distinction between the data coded in the “mechanism” and “breakdown” variables was not consistently drawn, rendering their interpretation problematic.
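The kind of coded record described above, and the ambiguity between the "mechanism" and "breakdown" variables, can be sketched as follows. The field names follow the variables mentioned in the text; the code values themselves are hypothetical:

```python
# Illustrative sketch of a coded injury record of the kind described above.
# Field names follow the variables mentioned in the text ("mechanism",
# "breakdown"); the code values are hypothetical.

from dataclasses import dataclass

@dataclass
class InjuryRecord:
    narrative: str   # original free text from the form
    activity: str    # what the child was doing
    breakdown: str   # the event that started the injury sequence
    mechanism: str   # how the injury was actually sustained

record = InjuryRecord(
    narrative="Knocked over while fighting in school playground, hit floor",
    activity="playing/fighting",
    breakdown="struck by another child",
    mechanism="fall onto ground",
)

# The ambiguity noted in the text: the same narrative could plausibly be
# coded the other way round, with the blow as the mechanism and the fall
# as the breakdown, so frequency tables built on either field are hard
# to interpret or compare.
```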
In 2003, because of mounting work pressures on the data clerk, the original free text form was converted into tick boxes to reduce the time required for data entry. The smaller amount of free text allowed forms to be entered on to the database via an optical scanner. The result of converting to tick boxes was that the breadth and depth of information captured was reduced.
Within Yorkhill, a busy ED providing care for 10 000–15 000 injured children per year, a part-time data clerk was sufficient for the data processing stage. The clerk was line-managed by a senior administrator and integrated into the clinical audit office of the hospital, allowing periods of sickness and annual leave to be covered by colleagues.
All ED medical and nursing staff were trained in how to complete their respective Y-CHIRPP tasks. This was also viewed as an opportunity to secure staff support for the program.
The personnel requirements for data entry and processing were minimal, and management support for the data clerk was strong in the early years. The active involvement of ED staff in data collection, rather than a dedicated CHIRPP-funded post, restrained the costs of the system.
A Y-CHIRPP steering committee was established. This comprised a local general practitioner, an ED consultant, a senior nurse, a data manager, and a health promotion officer from the local health authority. The group met on a semi-regular basis to discuss the management of the system.
Training and motivating busy and often overworked ED staff required much time and resourcefulness. The rotation of ED trainee physicians every 6 months posed particular challenges. Consequently there were periods of poor capture rates and data quality. Some clinical staff viewed the surveillance system as research rather than an injury prevention tool, and this may have reduced their motivation.
Interpreting data and dissemination
The generation of tables and charts was carried out by data analysts based in the clinical audit office of the hospital. Absolute and relative frequencies of all injuries in relation to selected demographic and time variables were produced intermittently. These were reported to the Y-CHIRPP steering committee and ED staff. Basic frequency distributions could be produced within a few weeks of the data collection.
Although there was no forum for regular dissemination of results to external audiences, some findings were reported in peer-reviewed journals,8–10 a single issue of a Y-CHIRPP newsletter was published, and data were periodically presented to stakeholders.
Data were also disseminated in response to ad hoc requests, mainly from health professionals and community safety staff (including police officers) seeking postcode-specific frequencies by age, sex, and/or injury type. These data were requested mainly to assess local injury burdens and to inform injury prevention campaigns. Occasionally, data were used to support funding requests by academics or others. Several student projects were based on Y-CHIRPP data and methodology.
The reporting of results and use of data for prevention was not the designated responsibility of any particular staff or committee member but was performed on an ad hoc basis.
It was difficult to define the population served by the hospital and thus to determine a denominator for calculating injury rates. This made comparison of injury risk with that found in other hospitals or regions epidemiologically problematic (although potential users seldom raised this as an obstacle to action).
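The denominator problem described above is easy to illustrate with crude arithmetic. The numerator (roughly 10 000 injury attendances per year) is from the text; the two candidate catchment populations are hypothetical:

```python
# Illustrative arithmetic for the denominator problem described above.
# The numerator (~10 000 injury attendances/year) is taken from the text;
# the candidate catchment populations are hypothetical.

def annual_rate_per_1000(cases, population):
    """Crude annual injury rate per 1000 children."""
    return 1000.0 * cases / population

cases = 10_000  # approximate annual Y-CHIRPP attendances (from the text)

# With no defined catchment, the "rate" depends entirely on which
# denominator is assumed:
print(annual_rate_per_1000(cases, 120_000))  # ~83 per 1000 children
print(annual_rate_per_1000(cases, 200_000))  # 50 per 1000 children
```

Because the choice of denominator can shift the apparent rate this much, comparisons with other hospitals or regions remain epidemiologically problematic.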
Using the results to plan prevention strategies
Anecdotal accounts, along with the many ad hoc requests for data, suggest that the system was used by the wider injury prevention community and was perceived as a useful source of data. The existence of Y-CHIRPP appeared to raise the profile of injury prevention in Glasgow and Scotland during the period of its operation.
The Y-CHIRPP team pro-actively developed six child safety initiatives based around leaflet and poster campaigns. These were funded by the National Health Service (NHS) and disseminated widely throughout Scotland. Anecdotal reports suggest that they were well received, although no formal evaluation was performed.
Little is known about the details of any initiatives developed by non-NHS agencies that received Y-CHIRPP data. No formal mechanism existed for feedback of the views of data users. The impact of Y-CHIRPP on risk-taking behavior or on injury incidence rates in the population is unknown.
There was no formal evaluation strategy, apart from the intermittent monitoring of the capture rates. In addition, several reports8 10 and peer-reviewed articles3 11 made reference to the performance of Y-CHIRPP.
The overall system
These various strengths and weaknesses of the different stages of the system, taken together, were reviewed with reference to the WHO attributes of a good surveillance system.1 The findings may be summarised as follows.
The data collection forms were easy to complete for both the patient and the medical staff. Basic analyses could be performed relatively quickly. Data coding was cumbersome because of the large number of available codes, and this complicated both the data entry and analysis. We judged that the criterion of simplicity was, however, largely met.
Once the system had been transferred from its original hardware and software, data managers could (and did) change the format of the data collection form. We judged that the criterion of flexibility was largely met.
Parents and the wider injury prevention community seemed generally supportive of Y-CHIRPP. ED staff were less supportive of the system, some perceiving the program to be a research project rather than an injury prevention tool. We judged that the criterion of acceptability was partly met.
Data quality was compromised by low capture rates at weekends and in the evenings. Even during working hours, high capture rates were not consistently maintained. We judged that the criterion of reliability was not met.
Utility and sustainability
The lack of dedicated personnel and/or resources to report results and plan prevention measures resulted in the under-exploitation of the data. There was no formal mechanism for users of the data to feed back to the Y-CHIRPP team. The lack of formal evaluation of either processes or outcomes may have undermined data quality. The costs of collecting, entering, and processing the data were modest (around £20 000 per annum), largely because of the reliance on ED staff for data collection. This extra workload proved unsustainable, particularly in the light of the apparently minimal preventive impact of the data. We judged that this dual criterion was not met.
Basic analyses could be produced within a few weeks of the patient’s visit to the ED if personnel were available. In practice, tables and charts were generated only sporadically or in response to specific requests. We judged that the criterion of timeliness was partly met.
We conducted a retrospective, qualitative review of the processes involved in establishing and operating an injury surveillance system in Scotland’s largest children’s hospital. Judged by the WHO criteria, we suggest that Y-CHIRPP was, at best, only a partial success during its existence.
Why did Y-CHIRPP ultimately fail and what lessons can we draw from its 10-year existence? Running Y-CHIRPP was a major undertaking demanding the enthusiasm and hard work of a small number of committed people. Virtually all of this effort focused on the data collection process rather than the analysis and utilisation of the data. Arguably, this proved Y-CHIRPP’s undoing. To maintain the viability of an injury surveillance system and to secure the long-term support of ED staff, our experience suggests that it is essential that the system is perceived as an injury prevention service tool and not a research method. The most effective way to achieve this is to demonstrate the use of the data for injury prevention initiatives.
The key challenges
The major challenges to effective injury surveillance faced by Y-CHIRPP were threefold: ensuring data completeness, optimising the use of data, and evaluating the effectiveness of the system. Those involved in installing and running Y-CHIRPP recognised the nature of those challenges but were unable to overcome them because of practical constraints, particularly limited resources.
Data completeness need not achieve 100%, but it is important that most injured children attending the ED are captured by the system. Exclusive reliance by Y-CHIRPP on busy ED staff to collect data resulted in periods of poor data capture.
Given that the pattern of injuries has been shown to be fairly consistent over time, periodic surveillance may be sufficient for most preventive purposes and almost certainly more cost-effective than continuous surveillance.8 On the other hand, if the aim of surveillance is to identify clusters of specific injuries, as a result, for example, of the marketing of a new consumer product, or to evaluate injury prevention initiatives, periodic surveillance would be less appropriate.
Prevention is the ultimate raison d’être of injury surveillance. Data collection is a necessary first step, but it is not sufficient. Thereafter, the data need to be analysed, interpreted, disseminated, and used for prevention. The last of these stages requires close engagement with external bodies and professionals. The absence of dedicated posts to support all of these key activities is likely to result in a failure to realise the full potential of injury surveillance.
Evaluation has to be an ongoing process to ensure that standards of data quality are maintained, that coding practices are appropriate, and that the data are appropriately analysed, interpreted, and utilised.
The above challenges may become insurmountable obstacles if key stages in the surveillance process rely on clinical or clerical staff who regard the system as lying outside either their competence or contractual responsibilities.
We reviewed the operation of an injury surveillance system (CHIRPP) at a large children’s hospital in Scotland (Royal Hospital for Sick Children, Yorkhill), which ran from 1996 to 2006.
Our findings suggest that the injury surveillance system largely met the WHO criteria of simplicity, flexibility, and acceptability. Criteria that were not, or only intermittently, met were reliability, utility, sustainability, and timeliness.
To maintain the viability of an injury surveillance system and to secure the long-term support of hospital staff, it is important that the system is perceived as an injury prevention service tool and not a research method.
We suggest that injury surveillance requires three supporting posts: an emergency department staff member, a data analyst, and someone with responsibility for developing and/or lobbying for the implementation of preventive measures.
Need for adequate resources and capacity
From our experience of Y-CHIRPP, we suggest that injury surveillance requires the support of three clearly delineated posts: an ED staff member, a data analyst, and someone responsible for developing and/or lobbying for the implementation of preventive measures in the population. A full-time staff member in the ED responsible for injury surveillance would facilitate data collection and allow rigorous data quality checks to be carried out. We recognise, however, that the 24 hour presence of a dedicated surveillance team member in the ED is unrealistic and that there will remain some reliance on clinical staff to collect data outside normal working hours. For this to work, senior clinical ED staff need to demonstrate leadership to their colleagues through their commitment to the system. The availability of a skilled data analyst would ensure the continuous generation of outputs from the system and provide regular and ad hoc (ie, response mode) reports to the local and wider injury prevention community. The final but crucial link in the chain is a dedicated staff member with the remit of exploiting the data for injury prevention practice and/or policy making. Such a post requires considerable expertise and could be either internally or externally based. In either case, close and productive liaison between the ED-based surveillance system and outside agencies is an essential ingredient of success.
In summary, Y-CHIRPP operated, with only partial success, at Scotland’s largest children’s hospital over a 10-year period, beyond which it proved unsustainable. The lessons learned from this pioneering experiment in injury surveillance should prove useful to others, in the UK and elsewhere, contemplating establishing an ED-based injury surveillance system in their locality.
IMPLICATIONS FOR PREVENTION
Injury surveillance is widely regarded as an important preventive tool. ED-based injury surveillance is conceptually simple but operationally challenging. The three activities of data collection, analysis, and prevention are interdependent and all are crucial to the ultimate success of the enterprise. In resource terms, this implies that three clearly delineated posts are required to support these three roles. An excessive focus on one at the expense of the others will undermine the sustainability of the system.
The Y-CHIRPP experiment could not have occurred without the participation of a large number of people at Yorkhill Hospital. These include: Mr N Doraiswamy (Consultant, emergency department), Mr M Jamieson (Medical Director), Ms E Meenagh (Clinical Effectiveness Coordinator), Mr S Beaton (Clinical Effectiveness Office), Sister Hammett (emergency department), Dr J Paton (Chair, Clinical Effectiveness Committee), Ms A Morrison (PEACH Unit), and many others too numerous to name. We are indebted to all of them, as well as to the parents, carers, and children attending the emergency department, for enabling us to learn so much about childhood injuries and their causes over such a relatively short time period. We also acknowledge the key roles played by Dr I B Pless of McGill University along with his colleagues in Health Canada in introducing us to CHIRPP and in helping us to launch our Scottish version of the system.
See Commentary, p 220
Competing interests: None.