In this paper we present the multimodal data collected to develop a system capable of influencing the behavior of small groups in an informal, non-goal-oriented conversation scenario. The prototype system looks like a table in a museum cafeteria and is designed to induce the people sitting around it to talk about their visit to the museum. To this end, the system provides visual cues that foster the participants’ engagement in the conversation. The cues are contextualized by automatically monitoring the group dynamics and by continuously planning and executing minimalist strategies based on the participants’ speaking activity and visual attention. In the paper we briefly describe the system, its main components, and its functionalities. We then present the two data collections carried out to gather multimodal data for tuning the system’s basic perceptual modules (the voice activity detector and the face tracker) and for improving the presentation engine that renders the visual cues.
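The abstract describes the planning loop only at a high level: perceptual modules (a voice activity detector and a face tracker) feed per-participant observations to a planner that triggers minimalist visual cues. The sketch below is an illustrative reconstruction under that description only; the module names, the engagement heuristic, the weights, and the cue action are assumptions made for exposition, not the authors' implementation.

```python
# Hypothetical sketch of a cue-planning step driven by speaking activity (VAD)
# and visual attention (face tracker), as outlined in the abstract.
# All names, weights, and thresholds are illustrative assumptions.

from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class ParticipantState:
    """Per-participant observations over a short sliding window."""
    speaking_ratio: float   # fraction of the window with detected voice activity
    attention_ratio: float  # fraction of the window spent looking toward the group/table


def engagement_score(state: ParticipantState) -> float:
    """Naive engagement estimate: weighted mix of speaking and attending."""
    return 0.6 * state.speaking_ratio + 0.4 * state.attention_ratio


def plan_cue(states: Dict[str, ParticipantState],
             threshold: float = 0.25) -> Optional[str]:
    """Pick the least engaged participant and, if below threshold, return a
    symbolic cue action for a (hypothetical) presentation engine."""
    if not states:
        return None
    pid, state = min(states.items(), key=lambda kv: engagement_score(kv[1]))
    if engagement_score(state) < threshold:
        return f"highlight_exhibit_image_near({pid})"
    return None


if __name__ == "__main__":
    # One snapshot of a three-person group: P3 is silent and looking away.
    window = {
        "P1": ParticipantState(speaking_ratio=0.55, attention_ratio=0.8),
        "P2": ParticipantState(speaking_ratio=0.30, attention_ratio=0.7),
        "P3": ParticipantState(speaking_ratio=0.02, attention_ratio=0.1),
    }
    print(plan_cue(window))  # -> highlight_exhibit_image_near(P3)
```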
Title: Multimodal Corpora for an Automatic System Fostering Participants’ Engagement in Informal Conversations around a Museum Café Table
Authors:
Publication date: 2011
Handle: http://hdl.handle.net/11582/48403
Appears in type: 4.1 Contribution in Conference Proceedings