Segmentation of Drivable Areas in GPS-Denied and Unstructured Orchard Environments

Federico Girlanda; Farhad Shamsfakhr; Massimo Vecchio; Fabio Antonelli
2025-01-01

Abstract

Autonomous navigation in agricultural environments such as vineyards and orchards typically depends on sensor fusion, integrating GPS, inertial measurement units (IMUs), Light Detection and Ranging (LiDAR), and stereo cameras for precise navigation. However, resource-limited edge devices and environmental factors like seasonal changes or GPS signal loss underscore the need for more efficient sensing solutions that can complement current technologies and reduce measurement uncertainties. These challenges, along with economic constraints and limited technical expertise, hinder the widespread adoption of robotic systems in farming. In this work, we explore and compare various deep learning-based segmentation methods for accurately detecting drivable areas in unstructured, GPS-denied orchard environments. Particular emphasis is placed on deploying these methods to edge devices, where efficient model inference is critical. We analyze three deep learning-based segmentation methods: a lightweight DeepLabv3-inspired model and the two latest YOLO (You Only Look Once) versions for segmentation. Our results demonstrate the feasibility of deploying these models on edge devices such as the NVIDIA Jetson Orin Nano and the superior performance of the YOLO models, which achieve high terrain segmentation precision and season-robust real-time inference on previously unseen data.
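The terrain segmentation precision the abstract reports is typically computed per pixel by comparing a predicted drivable-area mask against a ground-truth mask. A minimal illustrative sketch of such pixel-wise metrics (not the authors' evaluation code; mask layout and metric choice are assumptions):

```python
def mask_metrics(pred, gt):
    """Per-pixel precision and IoU for binary drivable-area masks.

    pred, gt: 2D lists of 0/1, where 1 marks a drivable pixel.
    Illustrative only; a real pipeline would use tensor ops.
    """
    tp = fp = fn = 0
    for pred_row, gt_row in zip(pred, gt):
        for p, g in zip(pred_row, gt_row):
            if p and g:
                tp += 1          # correctly predicted drivable
            elif p and not g:
                fp += 1          # predicted drivable, actually not
            elif g and not p:
                fn += 1          # missed drivable pixel
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    return precision, iou

# Toy 2x3 masks: tp=2, fp=1, fn=1 -> precision 2/3, IoU 0.5
pred = [[1, 1, 0], [0, 1, 0]]
gt = [[1, 0, 0], [0, 1, 1]]
precision, iou = mask_metrics(pred, gt)
```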

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11582/364487