Abstract
Background We used panoramic images and neural networks to measure street-level built environment features relevant to pedestrian safety.
Methods Street-level features were identified from a systematic literature search and local experience in Bogotá, Colombia (the study location). Google Street View panoramic images were sampled from 10,810 intersection and street-segment locations, including 2,642 where pedestrian collisions occurred during 2015–2019; the most recent, nearest available image (<25 meters) was selected for each sampled intersection or segment. Human raters annotated image features, which were then used to train neural networks. Performance of the neural networks and the human raters was compared across all features using mean Average Recall (mAR) and mean Average Precision (mAP). Feature prevalence was compared between pedestrian and non-pedestrian collision locations.
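For concreteness, the following minimal sketch shows how per-feature detections could be scored against human annotations using mAP and mAR. It assumes a PyTorch/torchmetrics evaluation stack and uses hypothetical boxes, scores, and class labels; the abstract does not specify the tooling actually used.

import torch
from torchmetrics.detection.mean_ap import MeanAveragePrecision

# One hypothetical image: model detections vs. human-rater annotations.
preds = [{
    "boxes": torch.tensor([[10.0, 20.0, 110.0, 220.0]]),  # xyxy, pixels
    "scores": torch.tensor([0.87]),
    "labels": torch.tensor([3]),  # e.g., class 3 = "streetlight" (assumed)
}]
target = [{
    "boxes": torch.tensor([[12.0, 18.0, 115.0, 225.0]]),
    "labels": torch.tensor([3]),
}]

metric = MeanAveragePrecision(box_format="xyxy", iou_type="bbox")
metric.update(preds, target)
results = metric.compute()
print(results["map"], results["mar_100"])  # mean AP; mean AR at up to 100 detections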
Results Thirty features were identified, related to the roadway (e.g., medians), crossing areas (e.g., crosswalks), traffic control (e.g., pedestrian signals), and the roadside (e.g., trees); streetlights were the most frequently detected object (N=10,687 images). Neural networks achieved a mAR of 15.4, versus 25.4 for human raters, and a mAP of 16.0. Bus lanes, pedestrian signals, and pedestrian bridges were significantly more prevalent at pedestrian collision locations, whereas speed bumps, school zones, sidewalks, trees, potholes, and streetlights were significantly more prevalent at non-pedestrian collision locations.
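As an illustration of the prevalence comparison, the sketch below tests one feature's prevalence at pedestrian versus non-pedestrian collision locations. The counts are invented and the abstract does not name the statistical method; a chi-squared test on a 2x2 contingency table is simply one plausible choice.

from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = location type, columns = feature present/absent.
table = [[150, 450],    # pedestrian collision locations
         [200, 1200]]   # non-pedestrian collision locations

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")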
Conclusion Neural networks have substantial potential to obtain timely, accurate built environment data crucial to improving road safety. Training images must be well annotated to ensure accurate and complete object detection.
Learning Outcomes 1) Describe how neural networks can be used for road safety research; 2) Describe the challenges of using neural networks.