Building Change Detection in VHR SAR Images via Unsupervised Deep Transcoding
Saha, Sudipan; Bovolo, Francesca
2021-01-01
Abstract
Building change detection (CD), important for its application in urban monitoring, can be performed in near real time by comparing prechange and postchange very-high-spatial-resolution (VHR) synthetic-aperture-radar (SAR) images. However, multitemporal VHR SAR images are complex: they show high spatial correlation, are prone to shadows, and have an inhomogeneous signature. Spatial context needs to be taken into account to effectively detect changes in such images. Recently, convolutional-neural-network (CNN)-based transfer learning techniques have shown strong performance for CD in VHR multispectral images. However, their direct use for SAR CD is impeded by the absence of labeled SAR data and, thus, of pretrained networks. To overcome this, we exploit the availability of paired unlabeled SAR and optical images to train for the suboptimal task of transcoding SAR images into optical images using a cycle-consistent generative adversarial network (CycleGAN). The CycleGAN consists of two generator networks: one for transcoding SAR images into the optical image domain and the other for projecting optical images into the SAR image domain. After unsupervised training, the generator transcoding SAR images into optical ones is used as a bitemporal deep feature extractor to extract optical-like features from bitemporal SAR images. Thus, deep change vector analysis (DCVA) and fuzzy rules can be applied to identify changed buildings (new/destroyed). We validate our method on two data sets made up of pairs of bitemporal VHR SAR images over the cities of L'Aquila and Trento (Italy).
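The transcode-then-compare pipeline described in the abstract can be summarized in code. The sketch below is a minimal illustration only, assuming a PyTorch-style generator: the class and function names (TranscoderG, dcva_change_map) and all hyperparameters are hypothetical and are not the authors' implementation. A real CycleGAN generator is deeper (typically ResNet blocks) and is trained unsupervised with adversarial and cycle-consistency losses, and the paper additionally applies fuzzy rules on the DCVA output to separate new from destroyed buildings.

    # Hypothetical sketch: a SAR->optical generator reused as a bitemporal
    # deep feature extractor, followed by deep change vector analysis (DCVA).
    # Names and hyperparameters are illustrative, not the authors' method.
    import torch
    import torch.nn as nn

    class TranscoderG(nn.Module):
        """Toy SAR->optical generator; a real CycleGAN uses ResNet blocks."""
        def __init__(self, in_ch=1, out_ch=3, feat=32):
            super().__init__()
            self.enc = nn.Sequential(
                nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
                nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
            )
            self.dec = nn.Conv2d(feat, out_ch, 3, padding=1)

        def forward(self, x, return_features=False):
            f = self.enc(x)                 # optical-like deep features
            y = torch.tanh(self.dec(f))     # transcoded optical-like image
            return (y, f) if return_features else y

    def dcva_change_map(g, sar_t1, sar_t2, quantile=0.95):
        """DCVA: magnitude of the deep feature difference, thresholded."""
        with torch.no_grad():
            _, f1 = g(sar_t1, return_features=True)   # prechange features
            _, f2 = g(sar_t2, return_features=True)   # postchange features
        mag = torch.linalg.vector_norm(f2 - f1, dim=1)  # per-pixel magnitude
        thr = torch.quantile(mag.flatten(), quantile)   # simple global threshold
        return mag > thr                                # binary change map

    g = TranscoderG()  # in practice: trained with cycle-consistency losses
    t1 = torch.rand(1, 1, 64, 64)  # prechange SAR patch (toy data)
    t2 = torch.rand(1, 1, 64, 64)  # postchange SAR patch (toy data)
    change = dcva_change_map(g, t1, t2)
    print(change.float().mean().item())  # fraction of pixels flagged changed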