Jido

Supervision for Human-Robot Interaction: SHARY

Context

A human-robot interaction context implies that humans and robots must communicate in order to share common knowledge about the task to be performed. When executing a collaborative task, the robot supervision system must not only carry out the task itself but also monitor human activity, so as to remain permanently responsive by detecting incoming communication acts and producing the appropriate answers.

During task execution, the supervision system must take into account both the robotic and the HRI context. It is responsible for producing legible and safe behavior by providing appropriate responses to execution errors and human requests. We therefore try to determine whether and when the robot should interact with and/or take initiative towards the human while they are both trying to achieve a common task. The human and the robot share the same environment; they are often close to each other and perceive each other's activity. The challenge is to equip the robot with suitable context-dependent abilities that make it capable of achieving tasks in the vicinity of and/or in interaction with a human partner.

Based on this analysis, we have built a new supervision system called SHARY (Supervisor adapted to human-robot interactions).

main ideas

Our main idea was to build a supervision system based on joint intention theory that takes human-robot interaction into account at both the planning and the execution level. To this end, each human entering the robot's field of perception is considered as an agent with whom the robot can collaborate, or at least interact. For homogeneity, the robot itself is also considered as an agent. Each agent is characterized by its abilities (the task types it can perform) and its potential commitment to the task at hand; once an agent is involved, observers help follow its involvement and its state with respect to the task. This information is then used by the robot to achieve its task and to adapt it to human action, reaction or lack of reaction. Two other considerations have also been taken into account.
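To make this agent model concrete, the following is a minimal, purely illustrative sketch in Python (the actual system is written in Open-PRS); all class, attribute and task-type names here are assumptions introduced for this example:

# Purely illustrative sketch: the real SHARY supervisor is written in Open-PRS.
# Class and attribute names below are assumptions made for this example.
from dataclasses import dataclass, field
from enum import Enum, auto

class Commitment(Enum):
    UNINVOLVED = auto()   # the agent has not (yet) committed to the task
    COMMITTED = auto()    # the agent has agreed to take part in the task
    DISENGAGED = auto()   # the agent abandoned the task or lost interest

@dataclass
class Agent:
    name: str
    abilities: set = field(default_factory=set)      # task types the agent can perform
    commitment: Commitment = Commitment.UNINVOLVED   # updated by observers during execution

    def can_perform(self, task_type: str) -> bool:
        return task_type in self.abilities

# The robot itself is modelled as just another agent.
robot = Agent("Jido", abilities={"give-object", "navigate"}, commitment=Commitment.COMMITTED)
human = Agent("visitor", abilities={"take-object"})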

integration

We have developed and implemented an agent-based supervision system that deals with tasks in terms of individual tasks (only the robot is involved), joint tasks (the robot and another agent are involved) and activities, which correspond to low-level functionalities that are not further decomposed. Each task is defined by a plan and dedicated monitors. A plan corresponds to a succession of sub-tasks and/or activities. Monitors serve to state whether a task is unachieved, achieved, impossible, irrelevant or stopped.
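The task taxonomy just described can be sketched as follows; this is an illustration in Python under the assumptions stated in the comments, not the Open-PRS encoding:

# Illustrative sketch of the task taxonomy described above; not the Open-PRS encoding.
from dataclasses import dataclass, field
from enum import Enum, auto

class TaskState(Enum):            # the five states reported by the monitors
    UNACHIEVED = auto()
    ACHIEVED = auto()
    IMPOSSIBLE = auto()
    IRRELEVANT = auto()
    STOPPED = auto()

@dataclass
class Activity:
    """Low-level functionality; not decomposed any further."""
    name: str

@dataclass
class Task:
    name: str
    agents: list                                  # one agent: individual task; several: joint task
    plan: list = field(default_factory=list)      # succession of sub-tasks and/or activities
    monitors: list = field(default_factory=list)  # callables that each report a TaskState
    state: TaskState = TaskState.UNACHIEVED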

Consequently, the system can be controlled at different levels at the same time. If something is detected at a given level, the system is able to take it into account at that level by applying adapted solutions and, when necessary, propagating events towards higher or lower levels. In the current implementation, the supervisor is written in Open-PRS. The task plans are hand-coded (a library of pre-defined tasks) and only the robot is able to propose a task. However, the supervisor is built with future extensions involving on-line task planning in mind.
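The multi-level control described above can be pictured with the small sketch below; the propagation policy shown (handle the event locally if an adapted solution exists, otherwise escalate it to the parent task) is an assumption chosen for illustration, not the actual SHARY mechanism:

# Minimal sketch of multi-level event handling (illustration only).
class TaskNode:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.solutions = {}                 # event name -> adapted recovery procedure

    def handle(self, event):
        if event in self.solutions:         # an adapted solution exists at this level
            self.solutions[event](self)
            return True
        if self.parent is not None:         # otherwise propagate towards the higher level
            return self.parent.handle(event)
        return False                        # no level could deal with the event

# Example: a "grasp" sub-task escalates a failure to its parent "give-object" task.
give = TaskNode("give-object")
give.solutions["arm-failure"] = lambda task: print("suspending", task.name)
grasp = TaskNode("grasp", parent=give)
grasp.handle("arm-failure")                 # handled one level up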

System Implementation

As stated previously, the goal of the SHARY supervision system is to execute tasks in a human-robot interaction context. The figure above describes the robot's general design, based on a three-layer architecture (Alami1998). Human communication acts are perceived via dedicated perception modules: a face detection system is able to identify people recorded in a database, a laser-based positioning system can determine human position and orientation as well as basic movement behaviors, and finally a touch screen can inform the supervisor about pressed buttons.
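As a hedged illustration of how such perception events might reach the supervisor (the actual module interfaces are not detailed here, so the event format and names below are assumptions):

# Illustration only: funnelling perception events to the supervisor's event queue.
from dataclasses import dataclass

@dataclass
class PerceptionEvent:
    source: str     # "face-detection", "laser-positioning" or "touch-screen"
    data: dict      # e.g. {"person": "known-user"} or {"button": "yes"}

supervisor_queue = []

def notify_supervisor(event: PerceptionEvent) -> None:
    """Every detected communication act is queued for the supervisor to interpret."""
    supervisor_queue.append(event)

notify_supervisor(PerceptionEvent("touch-screen", {"button": "yes"}))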

The SHARY supervision system consists of two main components: a "task and robot knowledge database" and a "task refinement and execution engine". The "task and robot knowledge database" is composed of:

Besides the contextual environment information, this robot knowledge is completely independent from the execution context. It contains uninstantiated methods that can be used on different robotic platforms with no modification, except for atomic tasks, which are usually robot-specific. During execution, this knowledge is used and instantiated in order to produce a context-dependent plan. The second component of SHARY is the "task refinement and execution engine", which embeds the decision mechanisms responsible for:

SHARY is implemented using the openPRS (Ingrand1996) procedural reasoning system. The figure above gives an example of how the communication library and one of the get Recipe functions are encoded. SHARY is currently used on the Jido robotic platform.
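Since the actual Open-PRS encoding cannot be reproduced here, the sketch below only illustrates, in Python, the idea of a recipe function that instantiates a stored, uninstantiated method with the current execution context; the function name, context keys and plan content are assumptions:

# Illustration only: instantiating a stored method ("recipe") with the current
# execution context to obtain a context-dependent plan.
RECIPES = {
    "give-object": lambda ctx: [
        ("move-arm-towards", ctx["human_position"]),
        ("wait-for-grasp", ctx["object"]),
        ("release", ctx["object"]),
    ],
}

def get_recipe(task_type, context):
    """Return a plan for this task type, instantiated with the execution context."""
    return RECIPES[task_type](context)

plan = get_recipe("give-object", {"human_position": (1.2, 0.4), "object": "bottle"})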

results

SHARY is currently running on the Jido robotic platform, controlling the execution of more than twenty functional modules in order to perform collaborative tasks, essentially fetch-and-carry tasks in a human environment. The scenario we are aiming at pictures Jido autonomously grasping objects and offering them to humans in its surroundings. The figure above shows two experiments in which Jido wants to give the bottle it is holding to the person in front of it.

For this experiment, Jido is set up to give a bottle to a person. Jido starts by monitoring the environment until a person stays long enough in front of it. Using speech synthesis, Jido then asks the person whether he is interested in having the bottle. The person's agreement is detected by Jido, which then starts realizing the task. In the nominal case, the person keeps his face oriented towards the robot and grabs the object; this is detected using force sensors mounted on the arm. Jido also monitors the human face and is capable of detecting a loss of attention from the person. This case is illustrated in the bottom sequence of pictures, where Jido suspends its "give-object" task by moving back its arm. When the user focuses his attention on Jido again, the robot resumes the task.
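The interaction sequence above can be summarized as a small state machine; the sketch below is only illustrative, and the state names and transition triggers are assumptions rather than the actual SHARY encoding:

# Illustration of the "give-object" scenario as a state machine (not the SHARY code).
from enum import Enum, auto

class GiveObject(Enum):
    MONITORING = auto()     # waiting for a person to stay long enough in front of the robot
    PROPOSING = auto()      # asking, via speech synthesis, whether the person wants the bottle
    HANDING_OVER = auto()   # arm extended, force sensors waiting to detect the grasp
    SUSPENDED = auto()      # attention lost: arm moved back, task on hold
    ACHIEVED = auto()

TRANSITIONS = {
    (GiveObject.MONITORING, "person-present"): GiveObject.PROPOSING,
    (GiveObject.PROPOSING, "agreement-detected"): GiveObject.HANDING_OVER,
    (GiveObject.HANDING_OVER, "object-grasped"): GiveObject.ACHIEVED,
    (GiveObject.HANDING_OVER, "attention-lost"): GiveObject.SUSPENDED,
    (GiveObject.SUSPENDED, "attention-regained"): GiveObject.HANDING_OVER,
}

def step(state, event):
    return TRANSITIONS.get((state, event), state)   # irrelevant events leave the state unchanged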

related publications

Supervision and Motion Planning for a Mobile Manipulator Interacting with Humans

Sisbot, E. Akin; Clodic, Aurélie; Alami, Rachid; Ransan, Maxime. 3rd ACM/IEEE International Conference on Human-Robot Interaction (HRI 2008).

Human-robot collaborative task achievement requires adapted tools and algorithms for both decision making and motion computation. The human presence as well as its behavior must be considered and actively monitored at the decisional level for the robot to produce synchronized and adapted behavior. Additionally, having a human within the robot range of action introduces security constraints as well as comfort considerations which must be taken into account at the motion planning and control level. This paper presents a robotic architecture adapted to human-robot interaction and focuses on two tools: a human-aware manipulation planner and a supervision system dedicated to collaborative task achievement.


Planning human centered robot activities

Montreuil, Vincent; Clodic, Aurelie; Ransan, Maxime; Alami, Rachid. ISIC. IEEE International Conference on Systems, Man and Cybernetics, 2007.

This paper addresses high-level robot planning issues for an interactive cognitive robot that has to act in presence or in collaboration with a human partner. We describe a task planner called HATP (for Human Aware Task Planner). HATP is especially designed to handle a set of human-centered constraints in order to provide "socially acceptable" plans that are oriented toward collaborative task achievement. We provide an overall description of HATP and discuss its main structure and algorithmic features.


The management of mutual beliefs for human robot interaction

Clodic, Aurelie; Ransan, Maxime; Alami, Rachid; Montreuil, Vincent. IEEE International Conference on Systems, Man, and Cybernetics (SMC 2007).

Human-robot collaborative task achievement requires the robot to reason not only about its current beliefs but also about the ones of its human partner. In this paper, we introduce a framework to manage shared knowledge for a robotic system. In a first part, we define which beliefs should be taken into account; we then explain a manner to achieve them using communication schemes. Several examples are presented to illustrate the purpose of belief management, including a real experiment demonstrating a 'give object' task between the Jido robotic platform and a human.


A study of interaction between dialog and decision for human-robot collaborative task achievement

Clodic, Aurelie; Alami, Rachid; Montreuil, Vincent; Li, Shuyin; Wrede, Britta; Swadzba, Agnes. The 16th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2007).

Human-robot collaboration requires both communicative and decision-making skills of a robot. To enable flexible coordination and turn-taking between human users and a robot in joint tasks, the robot's dialog and decision-making mechanisms have to be synchronized in a meaningful way. In this paper, we propose an integration framework to combine the dialog and the decision-making processes. With this framework, we investigate various task negotiation situations for a social robot in a fetch-and-carry scenario. For the technical realization of the framework, the interface specification between the dialog and the decision-making systems is also presented. Further, we discuss several challenging issues identified in our integration effort that should be addressed in the future.

contact

This is a very short and non-exhaustive presentation of the system; more information can be found on the publications page. If you are interested in this work or would like to make a suggestion or proposal, please feel free to contact: Aurélie Clodic (aclodic@laas.fr) or Maxime Ransan (mransan@laas.fr)