Rackham

Supervision for Human-Robot Interaction

"historical" background

The very first supervision system running on Rackham (see the ICAR05 publication) was just a (not so simple) state-based reactive system. This system ran for quite some time and gave us the opportunity to test the functional modules, but it turned out to be too restrictive as the system evolved.

Based on this analysis, we have built a new supervision system.

main ideas

Our aim was to build a supervision system based on joint intention theory that takes human-robot interaction into account at both the planning and execution levels. To this end, each human entering the robot's field of perception is treated as an agent with whom the robot can collaborate, or at least interact. For homogeneity, the robot itself is also considered an agent. Each agent is characterized by its abilities (the types of tasks it can perform) and its potential commitment to the task at hand; once the agent is involved, observers help follow its involvement and its state with respect to the task (a sketch of this agent model is given after the list below). This information is then used by the robot to achieve its task and to adapt it to human action, reaction or lack of reaction. Two other considerations have been taken into account:

  • multi-modality: in HRI, it is often the case that a given piece of information can arrive through several channels (e.g. a human can give the same order by gesture, by speech, through a touchscreen, or by a combination of these modalities).

  • re-usability: new functionalities become available throughout the project, so our supervision system is built in a modular way in order to incrementally integrate new robot abilities.
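
To make the agent model above concrete, here is a minimal sketch of how an agent could be represented. The real supervisor is written in Open-PRS (see the integration section below); this Python fragment is only illustrative, and every identifier in it (Agent, Commitment, can_perform, ...) is an assumption made for the example rather than part of the actual system.

    from __future__ import annotations

    from dataclasses import dataclass, field
    from enum import Enum, auto


    class Commitment(Enum):
        """Possible commitment of an agent towards the task at hand (illustrative values)."""
        UNKNOWN = auto()     # no information yet
        PROPOSED = auto()    # a task has been proposed to the agent
        COMMITTED = auto()   # the agent is involved in the task
        WITHDRAWN = auto()   # the agent has given up the task


    @dataclass
    class Agent:
        """An interaction agent: the robot itself or a human entering its field of perception."""
        name: str
        abilities: set[str]                  # types of tasks the agent can perform
        commitment: Commitment = Commitment.UNKNOWN
        observers: list[str] = field(default_factory=list)  # monitors following involvement and state

        def can_perform(self, task_type: str) -> bool:
            return task_type in self.abilities

In this picture, both the robot and a visitor would simply be agents, differing only in their abilities and in the observers attached to them.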

integration

We have developed and implemented an agent-based supervision system that handles tasks in terms of individual tasks (only the robot is involved), joint tasks (the robot and another agent are involved) and activities, which correspond to low-level functionalities that are not decomposed further. Each task is defined by a plan and dedicated monitors. A plan is a succession of sub-tasks and/or activities. Monitors determine whether a task is unachieved, achieved, impossible, irrelevant or stopped.
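
The following sketch, in the same illustrative spirit (the real task plans are hand-coded Open-PRS procedures, and all names here are hypothetical), summarizes this task vocabulary: activities, tasks defined by a plan and monitors, and the five statuses a monitor can report.

    from __future__ import annotations

    from dataclasses import dataclass, field
    from enum import Enum, auto
    from typing import Union


    class TaskStatus(Enum):
        """The five statuses a monitor can assign to a task."""
        UNACHIEVED = auto()
        ACHIEVED = auto()
        IMPOSSIBLE = auto()
        IRRELEVANT = auto()
        STOPPED = auto()


    @dataclass
    class Activity:
        """Low-level functionality; not decomposed any further."""
        name: str


    @dataclass
    class Task:
        """An individual task (robot only) or a joint task (robot plus another agent)."""
        name: str
        agents: list[str]                    # ["robot"] alone, or ["robot", "visitor"], ...
        plan: list[Union[Task, Activity]] = field(default_factory=list)  # succession of sub-tasks and/or activities
        monitors: list[str] = field(default_factory=list)                # each monitor reports a TaskStatus
        status: TaskStatus = TaskStatus.UNACHIEVED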

Consequently, the system can be controlled at several levels at the same time. If something is detected at a given level, the system can take it into account at that level by applying adapted solutions and, when necessary, propagating events towards higher or lower levels. In the current implementation, the supervisor is written in Open-PRS. The task plans are hand-coded (a library of pre-defined tasks) and only the robot is able to propose a task; however, the supervisor is built with future extensions involving on-line task planning in mind. Concerning joint tasks, their associated plans include several steps that follow the joint intention scheme (a sketch is given after the list):

  • PRETASK: concerning the establishment of the joint task between a person and the robot,

  • INTASK: concerning the progress of the defined task,

  • POSTTASK: concerning the end of the task. Joint intention theory requires that all agents be informed of the end of the task,

  • CHECKTASK: concerning the monitoring of the agents' commitment to the task.

Each of these *TASK steps can itself be defined as an individual task, a joint task or an activity.
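
As an illustration of this scheme, a joint task such as "Guide visitor" could be assembled from these four phases, reusing the hypothetical Task and Activity classes from the sketch above. The phase names follow the text, but the activities listed inside each phase are invented for the example; the real plans are Open-PRS procedures.

    # Assumes the illustrative Task and Activity classes defined in the previous sketch.
    def make_guide_visitor_task() -> Task:
        pretask = Task("PRETASK: establish the guided tour with the visitor",
                       agents=["robot", "visitor"],
                       plan=[Activity("ask visitor whether guidance is wanted")])
        intask = Task("INTASK: carry out the guided tour",
                      agents=["robot", "visitor"],
                      plan=[Activity("move to exhibit"), Activity("present exhibit")])
        posttask = Task("POSTTASK: inform all agents that the task has ended",
                        agents=["robot", "visitor"],
                        plan=[Activity("announce end of tour")])
        checktask = Task("CHECKTASK: monitor the visitor's commitment",
                         agents=["robot", "visitor"],
                         plan=[Activity("track visitor presence")])
        return Task("Guide visitor",
                    agents=["robot", "visitor"],
                    plan=[pretask, intask, posttask, checktask])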

results

This supervision scheme has been tested on Rackham during its latest stays at the Space City museum in Toulouse (video). The root plan of the robot was: "Initialisation", "Looking for a visitor" and "Guide visitor". Initialisation was done only once; afterwards the robot switched between the last two tasks. In the video, we mostly show the transition between "Looking for a visitor" and "Guide visitor". During the "Looking for a visitor" task, the robot searches in front of it with the help of face detection and behind it with the help of the touch screen. When it finds someone, the robot turns to face the person (also turning the camera if needed). During the first part (PRETASK) of the "Guide visitor" task, the robot asks the visitor whether he or she wants to be guided. Over four days, Rackham ran on average one hour per day (the only time limitation was the presence of a Space City animator). It spent about 1/4 of the time in "Looking for a visitor" and 3/4 of the time in "Guide visitor".

Compared to the previous system, it is easier to add new capabilities to the robot and to program new tasks involving interaction. In addition, monitoring is facilitated by the layered levels, which are all reactive.

contact

This is a very short and non-exhaustive presentation of the system; more information can be found on the publication page. If you are interested in it, or would like to make a suggestion or proposal, please feel free to contact: Aurélie Clodic (aclodic _at_ laas.fr)