Segmentation of Motion Artifacts in Wearable PPG Signals Using Lightweight Neural Networks

Marco Bolpagni; Silvia Gabrielli
2025-01-01

Abstract

The widespread adoption of wearable health monitoring devices has highlighted the need for robust algorithms to detect and mitigate motion artifacts in photoplethysmographic (PPG) signals. This article introduces a novel approach to motion artifact segmentation, supported by the creation of a new dataset. PPG signals from publicly available repositories were curated and enhanced with precise segmentation masks, enabling rigorous model training and evaluation. Three deep learning architectures—UNet, LSTM-UNet, and Atrous-UNet—were assessed using a comprehensive set of metrics, including Dice score, intersection over union (IoU), and average Hausdorff distance (AHD), to assess model performance beyond standard evaluations. This multimetric approach underscored the importance of addressing diverse aspects such as boundary precision and overall robustness. Atrous-UNet emerged as the most effective model, offering a balance of high performance and computational efficiency suitable for real-time deployment. The models were validated on real-world data from open-source wearable devices, EmotiBit and Bangle.js 2 (accuracy above 80% for both), demonstrating generalizability across varying hardware and environmental conditions. Comparisons with expert and nonexpert human annotators revealed that the models significantly outperformed both groups in reliability and detection consistency. The models were optimized using quantization techniques to enable deployment on resource-constrained devices. Although this introduced some performance losses, the models retained robust artifact detection capabilities with an accuracy of over 78%.
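The overlap metrics named in the abstract (Dice score and IoU) have standard definitions for binary masks. The sketch below is not the authors' evaluation code — it is a minimal illustration of how these two metrics are conventionally computed on a predicted versus reference artifact mask, with small epsilon terms assumed to guard against empty masks; the AHD boundary metric is omitted for brevity.

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice coefficient between two binary masks (1 = artifact sample)."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-8):
    """Intersection over union (Jaccard index) for binary masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# Hypothetical example: predicted vs. reference artifact mask
# over a 10-sample PPG window.
pred   = [0, 1, 1, 1, 0, 0, 1, 0, 0, 0]
target = [0, 1, 1, 0, 0, 0, 1, 1, 0, 0]
print(round(dice_score(pred, target), 3))  # 0.75
print(round(iou_score(pred, target), 3))   # 0.6
```

Dice weights the intersection twice and so is more forgiving of small boundary disagreements than IoU, which is why papers typically report both alongside a boundary-distance metric such as AHD.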
File in this record:
Segmentation_of_Motion_Artifacts_in_Wearable_PPG_Signals_Using_Lightweight_Neural_Networks.pdf (open access; post-print; Creative Commons license; Adobe PDF, 2.85 MB)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11582/366107