Augmented Reality (AR) is already transforming many fields, from medical applications to industry, entertainment and heritage. In its most common form, AR expands reality with virtual 3D elements, providing users with an enhanced and enriched experience of their surroundings. Until now, most research has focused on techniques based on markers or on GNSS/INS positioning. These approaches require either prior preparation of the scene or a strong satellite signal to work properly. In this paper, we investigate the use of visual-based methods, i.e., methods that exploit distinctive features of the scene estimated with Visual Simultaneous Localization and Mapping (V-SLAM) algorithms, to determine and track the user's position and attitude. The detected features, which encode the visual appearance of the scene, can be saved and later reused to track the user in subsequent AR sessions. Existing AR frameworks such as Google ARCore, Apple ARKit and Unity AR Foundation have recently introduced visual-based localization, but they mainly target small-scale scenarios. We propose a new Mobile Augmented Reality (MAR) methodology that exploits OPEN-V-SLAM to extend the application range of Unity AR Foundation and better handle large-scale environments. The proposed methodology is successfully tested in both controlled and real large-scale heritage scenarios. Results are also shown in this video: https://youtu.be/Q7VybmiWIuI.
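The relocalization idea summarized above, saving the scene's visual feature descriptors during a first session and matching them against features detected in a later session, can be sketched in a simplified form. The binary descriptors and the Hamming-distance nearest-neighbour matcher below are illustrative assumptions, not the paper's actual pipeline (which relies on a V-SLAM map):

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match_descriptors(saved, query, max_dist=2):
    """Match each query descriptor to its nearest saved descriptor.

    Returns (query_index, saved_index) pairs whose Hamming distance is
    at most max_dist; a high match count suggests the user is revisiting
    a previously mapped part of the scene, enabling relocalization.
    """
    matches = []
    for qi, qd in enumerate(query):
        best_si, best_d = min(
            ((si, hamming(qd, sd)) for si, sd in enumerate(saved)),
            key=lambda t: t[1],
        )
        if best_d <= max_dist:
            matches.append((qi, best_si))
    return matches

# Toy "map" of descriptors saved from a first AR session (hypothetical
# values) and slightly perturbed descriptors seen in a later session.
saved_map = [0b10110010, 0b01001101, 0b11110000]
query = [0b10110011, 0b00001111]
print(match_descriptors(saved_map, query))  # → [(0, 0), (1, 1)]
```

Real systems match high-dimensional descriptors (e.g., ORB) against thousands of mapped landmarks, then verify geometric consistency before accepting a pose.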
Title: Unveiling large-scale historical contents with V-SLAM and markerless mobile AR solutions
Publication date: 2021
Type: 4.1 Contribution in conference proceedings
Files in this product:
isprs-archives-XLVI-M-1-2021-761-2021_compressed.pdf (Open Access, Creative Commons 2.1)