CNN-based burned area mapping using radar and optical data
Bovolo, Francesca
2021-01-01
Abstract
In this paper, we present an in-depth analysis of the use of convolutional neural networks (CNNs), a deep learning method widely applied in remote sensing studies in recent years, for burned area (BA) mapping combining radar and optical datasets acquired by the Sentinel-1 and Sentinel-2 on-board sensors, respectively. Combining active and passive datasets into a seamless, wall-to-wall, cloud-cover-independent mapping algorithm significantly improves existing methods based on either sensor type alone. Five areas were used to determine the optimum model settings and sensor integration, whereas five additional ones were utilised to validate the results. The optimum CNN dimension and data normalisation were conditioned by the observed land cover class and data type (i.e., optical or radar). Increasing network complexity (i.e., number of hidden layers) only increased computing time, without any accuracy enhancement when mapping BA. The use of an optimally defined CNN within a joint active/passive data combination allowed for (i) BA mapping with accuracy similar to, or slightly higher than, that achieved in previous approaches based on Sentinel-1 (Dice coefficient, DC, of 0.57) or Sentinel-2 (DC of 0.70) alone and (ii) wall-to-wall mapping by eliminating information gaps due to cloud cover, typically observed for optical-based algorithms.
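The accuracy figures above are reported as Dice coefficients (DC), a standard overlap measure between a predicted and a reference burned-area mask: DC = 2|P ∩ R| / (|P| + |R|), ranging from 0 (no overlap) to 1 (perfect agreement). A minimal sketch of how DC is computed on binary masks (the toy masks below are hypothetical, for illustration only; they are not data from the paper):

```python
import numpy as np

def dice_coefficient(pred, ref):
    """Dice coefficient between two binary masks:
    DC = 2 * |P intersect R| / (|P| + |R|)."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    # Convention: two empty masks agree perfectly.
    return 2.0 * intersection / denom if denom else 1.0

# Toy 4x4 burned/unburned masks (hypothetical example)
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
ref = np.array([[1, 1, 0, 0],
                [1, 0, 0, 0],
                [0, 0, 0, 0],
                [0, 0, 0, 0]])

print(dice_coefficient(pred, ref))  # 2*3 / (4+3) ≈ 0.857
```

For binary masks, DC is equivalent to the F1 score of the burned class, which is why it is widely used to benchmark BA mapping algorithms where the burned class is a small fraction of the scene.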