Unsupervised change-detection based on convolutional-autoencoder feature extraction
Bergamasco, Luca; Saha, Sudipan; Bovolo, Francesca
2019-01-01
Abstract
Rapid identification of areas affected by changes is a challenging task in many remote sensing applications. Sentinel-1 (S1) images provided by the European Space Agency (ESA) can be used to monitor such situations due to their high temporal and spatial resolution and insensitivity to weather conditions. Though a number of deep-learning-based methods have been proposed in the literature for change detection (CD) in multi-temporal SAR images, most of them require labeled training data. Collecting sufficient labeled multi-temporal data is not trivial; however, S1 provides abundant unlabeled data. To this end, we propose a solution for CD in multi-temporal S1 images based on unsupervised training of deep neural networks (DNNs). Unlabeled single-time image patches are used to train a multilayer convolutional autoencoder (CAE) in an unsupervised fashion by minimizing the reconstruction error between the reconstructed output and the input. The trained multilayer CAE is used to extract multi-scale features from both the pre-change and post-change images, which are then analyzed for CD. The multi-scale features are fused according to a detail-preserving, scale-driven approach to generate change maps that retain fine spatial detail. The experiments conducted on an S1 dataset from Brumadinho, Brazil, confirm the effectiveness of the proposed method.
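The abstract outlines a three-step pipeline: unsupervised training of a multilayer CAE on unlabeled single-date patches by minimizing reconstruction error, extraction of multi-scale encoder features from the pre-change and post-change images, and fusion of the per-scale differences into a change map. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes a 2-channel S1 input (e.g., VV/VH), a two-scale encoder, and a simple averaging of upsampled per-scale difference magnitudes in place of the paper's detail-preserving scale-driven fusion. All layer sizes, the `MultilayerCAE` class, and the fixed threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultilayerCAE(nn.Module):
    """Small convolutional autoencoder; the encoder activation at each
    depth is used as a feature map at a different scale (illustrative)."""
    def __init__(self, in_channels=2):  # assumption: 2 SAR channels (VV, VH)
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_channels, 16, 3, padding=1),
                                  nn.ReLU(), nn.MaxPool2d(2))
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1),
                                  nn.ReLU(), nn.MaxPool2d(2))
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU())
        self.dec1 = nn.ConvTranspose2d(16, in_channels, 2, stride=2)

    def forward(self, x):
        f1 = self.enc1(x)                    # scale-1 features
        f2 = self.enc2(f1)                   # scale-2 features
        recon = self.dec1(self.dec2(f2))     # reconstructed input
        return recon, (f1, f2)

def train_cae(model, loader, epochs=10, lr=1e-3):
    """Unsupervised training: minimize reconstruction error on
    unlabeled single-date image patches."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    model.train()
    for _ in range(epochs):
        for patches in loader:               # patches: (B, C, H, W) tensor
            recon, _ = model(patches)
            loss = loss_fn(recon, patches)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

def change_map(model, pre_img, post_img, threshold=0.5):
    """Compare multi-scale features of the two dates; fuse the upsampled
    per-scale difference magnitudes by averaging (placeholder for the
    paper's scale-driven fusion) and threshold to get a binary map."""
    model.eval()
    with torch.no_grad():
        _, feats_pre = model(pre_img)
        _, feats_post = model(post_img)
    size = pre_img.shape[-2:]
    diffs = []
    for fp, fq in zip(feats_pre, feats_post):
        d = torch.norm(fp - fq, dim=1, keepdim=True)   # per-pixel feature distance
        diffs.append(F.interpolate(d, size=size, mode="bilinear",
                                   align_corners=False))
    fused = torch.stack(diffs).mean(dim=0)             # naive multi-scale fusion
    return (fused > threshold).squeeze(1)               # binary change map
```

A usage sketch, assuming `loader` yields unlabeled single-date S1 patches and `pre_img`, `post_img` are co-registered (1, 2, H, W) tensors: `model = train_cae(MultilayerCAE(), loader)` followed by `cm = change_map(model, pre_img, post_img)`. The threshold on the fused difference magnitude would in practice be selected by an unsupervised criterion rather than fixed a priori.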