Automatically extracting road networks from very high-resolution (VHR) images is a classical task in remote sensing. This paper presents a novel method for extracting road networks from VHR remotely sensed images of complex urban scenes. Inspired by image segmentation, edge detection, and object skeleton extraction, we develop a multitask convolutional neural network (CNN), called RoadNet, to simultaneously predict road surfaces, edges, and centerlines; to our knowledge, this is the first work of its kind in this field. RoadNet addresses seven important issues in this vision problem: 1) automatically learning multiscale and multilevel features [gained by deeply supervised nets (DSN) providing integrated direct supervision] to cope with roads in various scenes and at various scales; 2) holistically training the aforementioned tasks in a cascaded end-to-end CNN model; 3) correlating the predictions of road surfaces, edges, and centerlines within a single network model to improve the multitask prediction; 4) designing an elaborate architecture and loss function, by which the well-trained model produces approximately single-pixel-width road edges/centerlines without nonmaximum-suppression postprocessing; 5) cropping and bilinear blending to handle large VHR images with finite computing resources; 6) introducing rough, simple user interaction to obtain the desired predictions in challenging regions; and 7) establishing a benchmark data set consisting of a series of VHR remote sensing images with pixelwise annotation. Unlike previous works, we pay particular attention to challenging situations in which there are many shadows and occlusions along the road regions. Experimental results on two benchmark data sets demonstrate the superiority of the proposed approach.
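Point 5 of the abstract, cropping a large VHR image into tiles and blending the overlapping per-tile predictions bilinearly, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `predict_tile` is a hypothetical stand-in for the trained network, the prediction is assumed to be a single-channel map, and the tent-shaped weights are one common choice for bilinear blending; the sketch also assumes the image dimensions are covered exactly by the tile/stride combination.

```python
import numpy as np

def tile_weights(h, w):
    """2-D tent (bilinear) weights: near 1 at the tile centre, near 0 at the border.

    A small epsilon keeps every weight strictly positive so the
    normalization below never divides by zero.
    """
    wy = 1.0 - np.abs(np.linspace(-1.0, 1.0, h))
    wx = 1.0 - np.abs(np.linspace(-1.0, 1.0, w))
    return np.outer(wy, wx) + 1e-6

def predict_large_image(image, predict_tile, tile=256, stride=128):
    """Slide a tile window over `image`, run `predict_tile` on each crop,
    and blend the overlapping predictions with bilinear weights.

    `predict_tile` is a hypothetical callable mapping a (tile, tile)
    crop to a (tile, tile) prediction map.
    """
    H, W = image.shape[:2]
    acc = np.zeros((H, W), dtype=np.float64)   # weighted prediction sum
    norm = np.zeros((H, W), dtype=np.float64)  # accumulated weights
    w = tile_weights(tile, tile)
    for y in range(0, H - tile + 1, stride):
        for x in range(0, W - tile + 1, stride):
            pred = predict_tile(image[y:y + tile, x:x + tile])
            acc[y:y + tile, x:x + tile] += pred * w
            norm[y:y + tile, x:x + tile] += w
    return acc / np.maximum(norm, 1e-6)
```

Because each output pixel is a weighted average over all tiles that cover it, seams at tile borders are smoothed away, which matters when the network's predictions are less reliable near crop edges.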
|Title:||RoadNet: Learning to Comprehensively Analyze Road Networks in Complex Urban Scenes from High-Resolution Remotely Sensed Images|
|Publication date:||2019|
|Item type:||1.1 Journal article|