
XChange: An Explainable Dynamic Convolutional Autoencoder for Unsupervised Change Detection

Luca Bergamasco
Writing – Original Draft Preparation
;
Francesca Bovolo
Writing – Review & Editing
2025-01-01

Abstract

Explainable artificial intelligence (XAI) approaches have recently been studied to improve the interpretability of increasingly complex deep learning (DL) methods for remote sensing (RS) applications. Most XAI approaches are designed for supervised DL methods; few interpret unsupervised models, which are challenging to explain due to the lack of semantic information. Change detection (CD) methods automatically identify changes between RS images acquired over the same geographical area at different times. Most CD methods are unsupervised, since gathering labeled multitemporal data is challenging. Unsupervised CD is an almost unexplored task for XAI, and so is the use of XAI for computational efficiency optimization. In this article, we propose an XAI approach for unsupervised CD tasks. The proposed method forces a convolutional autoencoder (CAE) to learn explainable hidden-layer features for CD using a greedy layerwise training that retains only the features providing change information. It also automatically adapts the model depth to the spatial resolution and spatial context information of the input image. New convolutional layers are added in the greedy layerwise process as long as they provide information according to the Kullback–Leibler (KL) divergence measure; if a new layer learns insufficient information, the process stops. A multiscale CD method then retrieves the change maps. We test the interpretability of the proposed method on three CD datasets composed of bitemporal multispectral images acquired by Landsat-8 and Sentinel-2 for detecting burned and deforested areas.
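The depth-adaptation rule described in the abstract can be sketched as a simple stopping loop: keep growing the network while each newly added layer's feature distribution differs enough, in KL-divergence terms, from the previous layer's. This is a minimal illustration, not the article's implementation; the function names, the histogram representation of layer features, and the threshold value are all assumptions made for the sketch.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence KL(p || q) between two discrete distributions
    (e.g., normalized histograms of hidden-layer feature values)."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def grow_depth(feature_histograms, threshold=0.05):
    """Greedy layerwise growth: starting from one layer, add the next
    layer only while its feature distribution still carries enough new
    information (KL divergence from the previous layer's distribution
    at or above `threshold`); otherwise stop and return the depth."""
    depth = 1
    for prev, new in zip(feature_histograms, feature_histograms[1:]):
        if kl_divergence(new, prev) < threshold:
            break  # new layer learns insufficient information: stop
        depth += 1
    return depth

# Toy example: the third layer's features barely differ from the
# second layer's, so growth stops at depth 2.
h1 = np.array([0.70, 0.20, 0.10])
h2 = np.array([0.20, 0.50, 0.30])
h3 = np.array([0.21, 0.50, 0.29])
print(grow_depth([h1, h2, h3]))  # prints 2
```

In the article, this criterion is applied during greedy layerwise CAE training, so each candidate layer is trained before its contribution is measured; the sketch above only captures the stopping logic.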
Files in this record:

File: Dynamic_CD_TGRS-compressed.pdf (Adobe PDF, 630.12 kB) — authorized users only; request a copy

Type: Pre-print
License: NOT PUBLIC - Private/restricted access

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11582/363149