Modeling Context Is Like Talking Pictures
Not, Elena;Strapparava, Carlo;Stock, Oliviero;Zancanaro, Massimo
2000-01-01
Abstract
The discussion reported in this paper derives from our experience in designing and implementing two location-aware adaptive systems: HyperAudio and HIPS. Both systems are hand-held electronic museum guides that adapt their behavior to that of the individual visitor. The common idea behind both is to create an augmented museum where the main interaction modality is physical movement: the visitor's movements are tracked by the system and interpreted as implicit input. While moving through the augmented museum, the visitor simultaneously explores the associated information space. Data are organized as an adaptive hypertext: each node has a set of markers describing its content and form, and links are labeled. Markers and labels are analyzed and used at run time to compose presentations on the fly. A presentation consists of an audio message and a set of suggested links; both are adapted depending on the context. Adaptation takes into account the physical space, the history of interaction, and the user model (or, in a broader sense, the visit model). Particularly important in our scenario is the interpretation of the visitor's physical position and movements, seen as implicit interactions with the system. For example, when the visitor approaches a new exhibit, a description is automatically provided; after a long stay in front of an object, a further description is proposed. The interaction is thus implicit, since there is no intentionality on the part of the visitor to communicate his/her position or interest. The selection of the content and the linguistic form of the description are context sensitive. 
Phrases containing direct references to the space ("in front of you", "this is"), references to already seen objects ("you saw previously", "you just saw"), or suggestions of new exhibits ("located behind you", "on the opposite wall") are introduced, on the basis of the context, to enhance the effectiveness of the presentation.
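The two mechanisms the abstract describes can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the function names, the dwell-time threshold, and the event model are all hypothetical, chosen only to show how visitor movements might be mapped to implicit input and how a deictic phrase might be selected from the visit context.

```python
# Illustrative sketch (hypothetical names, not from the paper):
# interpreting the visitor's position and dwell time as implicit input,
# and choosing a context-sensitive spatial phrase for the audio message.

DWELL_THRESHOLD = 30.0  # seconds assumed to count as a "long stay"

def interpret_event(exhibit, dwell_seconds, seen_exhibits):
    """Map an implicit spatial event to a presentation action."""
    if exhibit not in seen_exhibits:
        # Approaching a new exhibit: provide a description automatically.
        seen_exhibits.add(exhibit)
        return f"intro:{exhibit}"
    if dwell_seconds >= DWELL_THRESHOLD:
        # Long stay in front of an object: propose a further description.
        return f"follow_up:{exhibit}"
    return None  # no new presentation is triggered

def deictic_phrase(exhibit, facing, seen_exhibits):
    """Pick a spatial reference based on the visit context."""
    if exhibit in seen_exhibits:
        return "you just saw"
    return "in front of you" if exhibit == facing else "located behind you"
```

A visit would then be a stream of position events: each event either triggers a presentation or is silently recorded, and the phrase chosen for the audio text depends on what the visitor has already seen and is currently facing.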