
Design of multimodal interaction with mobile devices. Challenges for visually impaired and elderly users

Michela Ferron; Nadia Mana; Ornella Mich
2018-01-01

Abstract

This paper presents two early studies investigating the design of multimodal interaction, based on voice commands and mid-air gestures, with mobile technology specifically designed for visually impaired and elderly users. The studies were carried out on a new device that enables enhanced speech recognition (interpreting lip movements) and mid-air gesture interaction on Android devices (smartphone and tablet PC). The initial findings and the challenges raised by these novel interaction modalities are discussed. They centre mainly on feedback and feedforward, the avoidance of false positives, and issues of point of reference or orientation regarding the device and the mid-air gestures.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11582/313170