
Multi-Modal and Multi-Sensor Photogrammetric Data Fusion Exploiting a New Repository for Infrared Thermography Datasets

Neil Sutherland; Luca Morelli; Fabio Remondino
2025-01-01

Abstract

InfraRed Thermography 3D-Data Fusion (IRT-3DDF), an emerging field of research combining 2D thermal images with 3D models, has demonstrated its capability to visualise the performance of historic buildings and the behaviour of materials under varying environmental conditions. However, for 3D thermal models to become viable tools in the assessment of architectural heritage, fully-automatic IRT-3DDF methods capable of producing both geometrically- and radiometrically-accurate models require greater investigation. Therefore, using a new repository of multi-modal, multi-sensor and multi-platform datasets, this paper presents a fully-automatic IRT-3DDF method that employs deep learning-based multi-modal image matching, fusing multiple aerial and terrestrial sensors through combined bundle block adjustments. Results demonstrate the successful orientation of several multi-sensor datasets using pre-trained neural networks, achieving sub-centimetre geometric accuracy and radiometrically consistent thermal models suitable for building diagnostics. Future work will assess the generalisability of the proposed pipeline across additional datasets, expanding its application to broader conservation, repair and maintenance (CRM) practices.
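The abstract's deep learning-based multi-modal image matching typically rests on comparing dense descriptors produced by a pre-trained network for the RGB and thermal views. The sketch below illustrates only the generic matching step commonly used with such descriptors, mutual nearest-neighbour filtering; it is not the paper's actual pipeline, and the descriptor arrays are toy stand-ins for real network outputs.

```python
import numpy as np

def mutual_nn_match(desc_a: np.ndarray, desc_b: np.ndarray) -> np.ndarray:
    """Return (i, j) index pairs where descriptor i in A and descriptor j
    in B are mutual nearest neighbours under Euclidean distance."""
    # Pairwise squared distances between the two descriptor sets.
    d2 = (
        (desc_a ** 2).sum(axis=1)[:, None]
        + (desc_b ** 2).sum(axis=1)[None, :]
        - 2.0 * desc_a @ desc_b.T
    )
    nn_ab = d2.argmin(axis=1)  # best candidate in B for each A descriptor
    nn_ba = d2.argmin(axis=0)  # best candidate in A for each B descriptor
    # Keep only pairs that agree in both directions (mutual check).
    keep = nn_ba[nn_ab] == np.arange(len(desc_a))
    return np.stack([np.arange(len(desc_a))[keep], nn_ab[keep]], axis=1)

# Toy 128-D descriptors standing in for pre-trained network features:
# the "thermal" set is a noisy copy of the "RGB" set, so most mutual
# matches should pair corresponding rows.
rng = np.random.default_rng(0)
rgb_desc = rng.normal(size=(50, 128))
thermal_desc = rgb_desc + 0.05 * rng.normal(size=(50, 128))
matches = mutual_nn_match(rgb_desc, thermal_desc)
print(matches.shape)  # each row is one (i, j) correspondence
```

In a real IRT-3DDF workflow, the surviving correspondences would feed a combined bundle block adjustment that orients the thermal and RGB images in a common frame; mutual nearest-neighbour filtering is shown here because it is a standard, network-agnostic way to turn learned descriptors into tie points.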

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11582/369887