
Last Utterance Proactivity Prediction in Task-oriented Dialogues

Brenna, Sofia; Magnini, Bernardo
2024-01-01

Abstract

While current LLMs achieve excellent performance on information-seeking tasks, their conversational abilities when participants need to collaborate to jointly achieve a communicative goal (e.g., booking a restaurant, scheduling an appointment, etc.) are still far from those exhibited by humans. Among the various collaborative strategies, in this paper we focus on proactivity, i.e., when a participant offers useful information that was not explicitly requested. We propose a new task, called last utterance proactivity prediction, aimed at assessing the capacity of an LLM to detect proactive utterances in a dialogue. In the task, a model is given a small portion of a dialogue (that is, a dialogue snippet) and asked to determine whether the last utterance of the snippet is proactive or not. There are several benefits in using dialogue snippets: (i) they are more manageable than full dialogues, reducing complexity; (ii) several phenomena in dialogue, including proactivity, depend on a short context, which allows a model to learn from snippets rather than full dialogues; and (iii) dialogue snippets make it easier to experiment on balanced datasets, overcoming the skewed distribution of proactivity in whole dialogues. In this paper, we first introduce a dataset for the last utterance proactivity prediction task. The dataset is then used to instruct an LLM to classify proactivity. We run a series of experiments showing that predicting proactive utterances in a dialogue is feasible in a few-shot configuration, paving the way towards models that are able to generate proactive utterances as humans do.
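To make the task setup concrete, below is a minimal sketch of how the few-shot configuration described in the abstract could be framed as a classification prompt. The instruction wording, the dialogue snippets, and the labels are invented for illustration; they are not taken from the paper's dataset or its actual prompts.

# Sketch: last utterance proactivity prediction as few-shot prompting.
# All examples below are hypothetical and only illustrate the task format.

FEW_SHOT_EXAMPLES = [
    {
        "snippet": [
            "User: I'd like to book a table for two tonight.",
            "Agent: Sure, we have a slot at 7 pm. The restaurant also has a vegetarian menu.",
        ],
        # The agent volunteers the vegetarian menu unprompted -> proactive.
        "label": "proactive",
    },
    {
        "snippet": [
            "User: What time does the restaurant open?",
            "Agent: It opens at 6 pm.",
        ],
        # The agent only answers the explicit question -> not proactive.
        "label": "not proactive",
    },
]

INSTRUCTION = (
    "An utterance is proactive when the speaker offers useful information "
    "that was not explicitly requested. Given a dialogue snippet, decide "
    "whether its LAST utterance is proactive. "
    "Answer 'proactive' or 'not proactive'."
)

def build_prompt(target_snippet: list[str]) -> str:
    """Assemble a few-shot prompt: instruction, labelled examples, then the target snippet."""
    parts = [INSTRUCTION, ""]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append("\n".join(ex["snippet"]))
        parts.append(f"Label: {ex['label']}")
        parts.append("")
    parts.append("\n".join(target_snippet))
    parts.append("Label:")
    return "\n".join(parts)

if __name__ == "__main__":
    snippet = [
        "User: Can you book me a taxi to the station?",
        "Agent: Done. By the way, your train is delayed by 20 minutes.",
    ]
    # In an actual experiment the prompt would be sent to an LLM;
    # here we only print it to show the format.
    print(build_prompt(snippet))

Because the snippet is short, such a prompt stays well within typical context limits, which reflects benefit (i) of working with dialogue snippets rather than full dialogues.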
Files in this product:
No files are associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11582/357528