Recovering the Sight to Blind People in Indoor Environments with Smart Technologies
Mohamed Lamine Mekhalfi;
2016-01-01
Abstract
Assistive technologies for blind people are growing rapidly, providing useful tools to support daily activities and improve social inclusion. Most of these technologies focus on helping blind people navigate and avoid obstacles; others emphasize assisting them in recognizing surrounding objects. Very few, however, couple both aspects (i.e., navigation and recognition). To address these needs, this paper describes an innovative prototype that enables a blind user to (i) move autonomously and (ii) recognize multiple objects in public indoor environments. It incorporates lightweight hardware components (a camera, an IMU, and laser sensors), all mounted on a reasonably sized integrated device worn on the chest. It requires the indoor environment to be ‘blind-friendly’, i.e., prior information about the environment must be prepared and loaded into the system beforehand. Its algorithms are mainly based on advanced computer vision and machine learning approaches, and the interaction between the user and the system is carried out through speech recognition and synthesis modules. The prototype allows the user to (i) walk across the site to reach a desired destination while avoiding static and mobile obstacles, and (ii) ask the system, through vocal interaction, to list the prominent objects in the user's field of view. We illustrate the performance of the proposed prototype through experiments conducted in a blind-friendly indoor space set up at our Department premises.
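As a purely illustrative sketch (not the authors' implementation, which is not detailed in the abstract), the vocal interaction loop described above could be organized as a simple command dispatcher: a recognized utterance is mapped either to a navigation request or to a scene-description request, and the result is returned through speech synthesis. All function names below (handle_command, list_prominent_objects, navigate_to, and the stubbed speech I/O) are hypothetical placeholders for the modules named in the abstract.

```python
# Hypothetical sketch of the vocal interaction loop: recognized speech
# is dispatched to either the navigation capability or the object-
# recognition capability, and the answer is spoken back to the user.
# Speech I/O and perception are stubbed out for self-containment.

def recognize_speech() -> str:
    """Stub for the speech-recognition module (would wrap a real ASR engine)."""
    return "list objects"  # e.g. the user asks what is in the field of view


def synthesize_speech(text: str) -> None:
    """Stub for the speech-synthesis module (would wrap a real TTS engine)."""
    print(f"[TTS] {text}")


def list_prominent_objects() -> list[str]:
    """Stub for the multi-object recognition module (camera + CV/ML model)."""
    return ["door", "chair", "stairs"]


def navigate_to(destination: str) -> str:
    """Stub for the navigation module (prior map + IMU + laser sensors)."""
    return f"Guiding you to the {destination}; obstacles will be announced."


def handle_command(utterance: str) -> str:
    """Map a recognized utterance to one of the two system capabilities."""
    utterance = utterance.lower().strip()
    if utterance.startswith("go to "):
        return navigate_to(utterance.removeprefix("go to "))
    if utterance in ("list objects", "what is around me"):
        return "I can see: " + ", ".join(list_prominent_objects())
    return "Sorry, I did not understand the request."


if __name__ == "__main__":
    synthesize_speech(handle_command(recognize_speech()))
```

In a real system the stubs would be replaced by the prototype's actual sensing, recognition, and speech modules; the dispatcher structure is shown only to make the two capabilities (navigation and recognition) concrete.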