Distributed acoustic sensing (DAS) is a technology that repurposes telecommunication fibers to record ground vibrations with meter-scale resolution. DAS records vibrations from sources such as earthquakes, vehicles, and footsteps at a spatial density that is difficult to achieve with an array of traditional seismometers. DAS is therefore well suited to recording human activities because fibers are ubiquitous in urban environments. However, the complexity of urban environments and the large volume of data recorded by DAS require array-based, automated methods to efficiently detect and categorize human activities. We evaluate how well three machine learning models (k-nearest neighbors, convolutional neural networks, and recurrent-convolutional neural networks) identify various activities recorded by DAS. Our findings reveal that both the k-nearest neighbors and neural network methods perform well in high signal-to-noise ratio (SNR) settings; however, their accuracy decreases at SNRs below 4. We also explore the spatial sensitivity of DAS to a human walking by back-projecting the recorded vibrations from footsteps. This reveals a site-specific footstep sensitivity distance threshold of 24 m for DAS, which is consistent with footstep thresholds observed for traditional seismometers. Furthermore, we apply Kalman filtering to the back-projected locations of all recorded human activities. The filtered locations provide a spatiotemporal path for each activity, which can be directly compared with GPS ground-truth recordings. By combining the machine learning classifications with the activity locations, we construct a multi-dimensional model of moving human activities. Our study demonstrates the potential of DAS for accurately identifying and locating human activities, providing a valuable tool for facility monitoring under future arms control treaties.
SNL is managed and operated by NTESS under DOE NNSA contract DE-NA0003525. SAND2025-00332A