
122 Quasi-experimentation: study designs for analyzing policy effects

Shabbar Ranapurwala¹,², Kate Fitch¹, Julie Kafka³

¹Department of Epidemiology, University of North Carolina, Chapel Hill, USA
²Injury Prevention Research Center, University of North Carolina, Chapel Hill, USA
³Department of Health Behavior, University of North Carolina, Chapel Hill, USA


Injuries and violence are among the leading causes of death for individuals under 55 years of age and account for the largest loss of person-years of life compared with other causes of death. Further, non-fatal injuries carry high financial and societal costs, including emergency care, lost productivity, and long-term disability. Consequently, lawmakers and regulators are quick to institute policies and laws intended to prevent injuries and deaths (e.g., opioid prescribing limits, firearm laws, motorcycle helmet laws, and laws restricting cellphone use while driving). Further, natural events such as hurricanes or a pandemic, or rapid socioeconomic shifts (e.g., stock market crashes), may suddenly affect the physical, mental, social, or financial health of populations and may exacerbate injuries and violence. Accurately understanding the impact of public policy is extremely important so that we can identify both intended and unintended policy effects. This is even more important in injury and violence prevention, where many policy interventions are built on observational data because ethical considerations make randomized trials difficult to conduct (e.g., for firearm policies). Hence, it is very important to apply the best available quasi-experimental designs to evaluate policies, to understand the assumptions underlying these methods, and to avoid erroneous results and interpretations. In this workshop, we will share lessons learnt from conducting multiple policy evaluations. The workshop will be divided into didactic and skills-based sections covering data organization and analysis. We will begin by expounding the theoretical basis of four methods commonly used in policy evaluation: 1) difference-in-differences (DiD), 2) (controlled) interrupted time series (CITS), 3) synthetic controls, and 4) combining CITS with synthetic controls.
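To give a concrete sense of the first of these designs, the sketch below simulates a minimal two-group, two-period difference-in-differences setup and recovers the policy effect from the coefficient on the group-by-period interaction. All group labels, sample sizes, and effect sizes here are invented for illustration and do not come from any study discussed in the workshop:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-group, two-period data (all values are assumptions):
# "treated" units are exposed to a policy whose true effect on the
# outcome (e.g., an injury rate) is -5 in the post-policy period.
n = 2000
treated = rng.integers(0, 2, n)          # 1 = policy jurisdiction
post = rng.integers(0, 2, n)             # 1 = after policy enactment
true_effect = -5.0
y = (20.0 + 3.0 * treated + 2.0 * post   # baseline group/period differences
     + true_effect * treated * post      # the policy effect itself
     + rng.normal(0.0, 1.0, n))          # noise

# DiD as OLS: y ~ 1 + treated + post + treated:post.
# The interaction coefficient is the DiD estimate of the policy effect.
X = np.column_stack([np.ones(n), treated, post, treated * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
did_estimate = beta[3]

# Equivalent "difference of differences" of the four cell means.
t, p = treated.astype(bool), post.astype(bool)
did_means = ((y[t & p].mean() - y[t & ~p].mean())
             - (y[~t & p].mean() - y[~t & ~p].mean()))
```

Because the model is saturated, the regression and cell-mean versions agree exactly; the regression form is what generalizes to covariates and multiple periods. The key identifying assumption, discussed in the workshop, is parallel trends between the two groups absent the policy.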
We will then use real-world, data-driven examples from injury and violence prevention to show the potential benefits and pitfalls of each method under different data and circumstances. We will walk attendees through multiple real-world examples from the published literature (our own and colleagues’ publications). We will examine how these methods may produce similar or different results and lead to different interpretations of the same data. Attendees will learn to critically evaluate which methods to use and when. They will also learn how to examine and address potential effect measure modification by race, sex, age, and other factors in these quasi-experimental studies. Finally, we will give attendees hands-on experience by sharing test datasets and allowing them to conduct their own analyses. We will share our publicly available website (currently under development), which attendees can access long after the workshop as a resource. This workshop will provide attendees with the theoretical and practical skills to apply these methods in various settings and to explain their choice of methods to both scientific and lay audiences.
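As a companion sketch for the interrupted time series design named above, the following segmented regression simulates a monthly outcome series with an assumed immediate level drop and a gradual slope change at the policy date, then estimates both. The series length, policy month, and effect sizes are hypothetical, chosen only to make the mechanics visible:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative monthly series (values are assumptions, not real data).
n_months = 120
policy_month = 60
months = np.arange(n_months, dtype=float)
post = (months >= policy_month).astype(float)      # 1 after the policy
time_since = np.where(post == 1.0, months - policy_month, 0.0)

true_level_change = -8.0                           # assumed immediate drop
true_slope_change = 0.2                            # assumed trend change
y = (100.0 + 0.5 * months                          # pre-policy trend
     + true_level_change * post
     + true_slope_change * time_since
     + rng.normal(0.0, 1.0, n_months))

# Segmented regression: y ~ 1 + time + post + time_since.
# beta[2] is the level change at the interruption; beta[3] the slope change.
X = np.column_stack([np.ones(n_months), months, post, time_since])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_change_est, slope_change_est = beta[2], beta[3]
```

A controlled ITS (CITS) extends this by fitting the same segmented model to an unexposed comparison series and differencing the estimated changes, which helps rule out co-occurring shocks; in practice, autocorrelation in the errors should also be modeled.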
