Co-registering Laser Scanning Point Clouds and Photogrammetric Images with Deep Learning Multi-Modal Matching

Morelli, Luca; Perda, Giulio; Trybała, Paweł; Rigon, Simone; Sutherland, Neil; Remondino, Fabio
2024-01-01

Abstract

The integration of laser scanning and photogrammetry has become a critical approach in architectural and civil surveying, leveraging the geometric precision of Terrestrial Laser Scanners (TLS) and the high-quality textures achievable through photogrammetric surveys. Despite these advances, challenges persist in efficiently merging these data sources, particularly due to limitations in sensor integration and varying levels of Ground Sampling Distance. This study presents a novel data fusion methodology, operating at raw and intermediate levels, that bypasses the need for data pre-alignment, sensor trajectories, or coloured point clouds. The approach employs deep learning-based matchers to achieve automated co-registration of RGB images and TLS data, offering advantages such as global registration, multi-modal matching, direct scaling and referencing, and enhanced sensor fusion during the photogrammetric bundle adjustment. Additionally, the method enables the direct orientation of single images and texture mapping without requiring dense point clouds. The pipeline is validated in an architectural surveying scenario, demonstrating its efficacy in comparison with commercial solutions.
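
To illustrate the kind of multi-modal matching the abstract describes, the sketch below matches an intensity image rendered from a TLS scan against an RGB survey photo with an off-the-shelf learned matcher (Kornia's LoFTR, used here only as a stand-in for the paper's deep learning matchers). The file names and the confidence threshold are hypothetical; the resulting 2D correspondences on the TLS render would then be back-projected to scan 3D points to scale and reference the photogrammetric block.

```python
# Minimal, illustrative sketch: multi-modal matching between a TLS intensity
# render and an RGB photo with a learned matcher (Kornia LoFTR).
# File names and the confidence threshold are hypothetical examples.
import torch
import kornia as K
import kornia.feature as KF


def load_gray(path: str) -> torch.Tensor:
    """Load an image as a [1, 1, H, W] grayscale tensor in [0, 1]."""
    img = K.io.load_image(path, K.io.ImageLoadType.RGB32)[None, ...]  # [1, 3, H, W]
    return K.color.rgb_to_grayscale(img)


# Hypothetical inputs: an intensity image rendered from a TLS scan position
# and an RGB photo from the photogrammetric survey.
tls_render = load_gray("tls_scan_intensity.png")
rgb_photo = load_gray("survey_photo.jpg")

matcher = KF.LoFTR(pretrained="outdoor")
with torch.inference_mode():
    out = matcher({"image0": tls_render, "image1": rgb_photo})

# Keep confident matches; each kept pixel in the TLS render corresponds to a
# known 3D scan point, yielding 2D-3D correspondences usable for registration
# and as additional observations in the bundle adjustment.
mask = out["confidence"] > 0.5  # hypothetical threshold
kpts_tls = out["keypoints0"][mask]
kpts_rgb = out["keypoints1"][mask]
print(f"{kpts_tls.shape[0]} putative TLS-image correspondences")
```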

Use this identifier to cite or link to this document: https://hdl.handle.net/11582/357027