Unveiling large-scale historical contents with V-SLAM and markerless mobile AR solutions

Torresani, A.; Rigon, S.; Farella, E. M.; Menna, F.; Remondino, F.
2021

Abstract

Augmented Reality (AR) is already transforming many fields, from medical applications to industry, entertainment and heritage. In its most common form, AR expands reality with virtual 3D elements, providing users with an enhanced and enriched experience of the surroundings. Until now, most research has focused on techniques based on markers or on GNSS/INS positioning. These approaches require either the preparation of the scene or a strong satellite signal to work properly. In this paper, we investigate the use of visual-based methods, i.e., methods that exploit distinctive features of the scene estimated with Visual Simultaneous Localization and Mapping (V-SLAM) algorithms, to determine and track the user's position and attitude. The detected features, which encode the visual appearance of the scene, can be saved and later used to track the user in successive AR sessions. Existing AR frameworks such as Google ARCore, Apple ARKit and Unity AR Foundation have recently introduced visual-based localization, but they mainly target small-scale scenarios. We propose a new Mobile Augmented Reality (MAR) methodology that exploits OpenVSLAM to extend the application range of Unity AR Foundation and better handle large-scale environments. The proposed methodology is successfully tested in both controlled and real-case large heritage scenarios. Results are also available in this video: https://youtu.be/Q7VybmiWIuI.
Files in this item:
File: isprs-archives-XLVI-M-1-2021-761-2021_compressed.pdf
Access: open access
License: PUBLIC - Creative Commons 2.1
Size: 849 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: http://hdl.handle.net/11582/328286