
Joint Learning Framework for Roads Semantic Segmentation from VHR and VGI Data

Usmani, Munazza; Bovolo, Francesca; Napolitano, Maurizio
2023-01-01

Abstract

Automated road segmentation is considered an essential aspect of urban development and planning. However, automatically extracting road information from remote sensing imagery with manual labeling is still challenging due to the diversity of road network structures. We propose a deep learning method that jointly learns from very high-resolution remote sensing images and volunteered geographic information and can handle complex scenarios. We modified the channel attention residual U-Net model with a weighted loss technique to address the problem of unbalanced training data and increase accuracy. Experimental results indicate that U-Net, Residual U-Net, and Attention U-Net achieved overall accuracies of 0.91, 0.93, and 0.97, respectively, while the proposed method, with an overall accuracy of 0.99, performs best among the tested U-Net variants.
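To illustrate the kind of weighted loss the abstract refers to, below is a minimal sketch in PyTorch of a class-weighted binary cross-entropy for road segmentation, where road pixels are much rarer than background. This is not the authors' implementation; the helper `make_weighted_loss`, the `road_fraction` parameter, and the example value 0.05 are assumptions for illustration only.

```python
# Minimal sketch (assumed, not the paper's exact loss): up-weight the rare
# road class in binary cross-entropy to counter unbalanced training data.
import torch
import torch.nn as nn

def make_weighted_loss(road_fraction: float) -> nn.Module:
    """Return a BCE-with-logits loss that up-weights the minority road class.

    road_fraction: estimated fraction of pixels labeled as road (e.g. 0.05).
    """
    # Positive (road) pixels are weighted by the inverse class-frequency ratio.
    pos_weight = torch.tensor([(1.0 - road_fraction) / road_fraction])
    return nn.BCEWithLogitsLoss(pos_weight=pos_weight)

# Usage: logits from a U-Net-style model, binary road mask as target.
criterion = make_weighted_loss(road_fraction=0.05)
logits = torch.randn(2, 1, 256, 256)                      # (batch, 1, H, W)
target = torch.randint(0, 2, (2, 1, 256, 256)).float()    # road mask
loss = criterion(logits, target)
print(loss.item())
```

In this sketch, the per-pixel weighting simply reflects the inverse class frequency; in practice the weight would be tuned on the training set.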
2023
979-8-3503-2010-7

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11582/342467