A Demonstration of Refinement Acting, Planning and Learning System Using Operational Models

Malik Ghallab; Paolo Traverso
2021-01-01

Abstract

We demonstrate a system with integrated acting, planning and learning algorithms that uses hierarchical operational models to perform tasks in dynamically changing environments. In AI research, synthesizing a plan of action has typically relied on descriptive models of actions, which abstractly specify what might happen as a result of an action and are tailored for efficiently computing state transitions. Executing the planned actions, however, requires operational models, in which rich computational control structures and closed-loop online decision-making specify how to perform an action in a nondeterministic execution context, react to events and adapt to an unfolding situation. Deliberative actors, which integrate acting and planning, have typically had to use both kinds of models together, which raises problems in developing the different models, verifying their consistency, and smoothly interleaving acting and planning. As an alternative, we demonstrate an acting and planning engine in which both planning and acting use the same operational models, which rely on hierarchical task-oriented refinement methods offering rich control structures. The system also includes learning strategies that guide both the actor and the planner.
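
To make the idea of a shared operational model concrete, the following is a minimal sketch, under assumptions of our own, of what a hierarchical refinement method might look like in Python. The names State, move_to, grasp and m_fetch are hypothetical and are not the authors' actual API; the point is only that one method body, with ordinary control flow, can drive both acting and simulation-based planning.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class State:
    # World state as a simple attribute map: locations of robots/objects, and what is held.
    loc: Dict[str, str] = field(default_factory=dict)
    holding: str = ""

def move_to(state: State, robot: str, place: str) -> bool:
    # Primitive command: move the robot; in this toy model it always succeeds.
    state.loc[robot] = place
    return True

def grasp(state: State, robot: str, obj: str) -> bool:
    # Primitive command: succeeds only if the robot and the object are co-located.
    if state.loc.get(robot) == state.loc.get(obj):
        state.holding = obj
        return True
    return False

def m_fetch(state: State, robot: str, obj: str) -> bool:
    # Refinement method for the task fetch(robot, obj): tests, early returns
    # and reaction to command failure decide online how the task is performed.
    if state.holding == obj:                      # task already achieved
        return True
    if state.loc.get(robot) != state.loc.get(obj):
        if not move_to(state, robot, state.loc[obj]):
            return False                          # react to failure
    return grasp(state, robot, obj)

if __name__ == "__main__":
    s = State(loc={"r1": "hall", "cup": "kitchen"})
    print(m_fetch(s, "r1", "cup"), s.holding)     # -> True cup

Because the method body manipulates only the State object it is given, the same code can be run against the live state when acting, or against copies of the state when planning by simulation, which is the sense in which a single operational model serves both roles.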

Use this identifier to cite or link to this document: https://hdl.handle.net/11582/327750