Jido

Interaction

For now, the interactions are mainly established through the following components: the dynamic "obstacle" detectors (aspect and sono), the isy face detector, the 3D animated face with speech synthesis, the displays and inputs of the touch screen, and control of the robot's lights. While the first two detect the presence or departure of people, the others let the robot "express" itself and thus establish exchanges. The speech synthesis is greatly enriched by a 3D animated head displayed on the screen. This talking head, or clone, was developed by the Institut de la Communication Parlée (http://www.icp.inpg.fr). The clone is based on a very accurate articulatory 3D model of the postures of a speaker, with realistic synthetic rendering thanks to 3D texture projection. From a given text, the speech synthesizer produces coordinated voice and facial movements (jaw, teeth, lips, etc.).
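The following sketch illustrates, in Java, how such text-driven coordination of voice and face can be organized: the synthesizer returns time-aligned articulatory targets that drive the clone's postures. The class and record names, and the viseme representation, are assumptions for illustration, not the ICP synthesizer's actual API.

```java
// Hypothetical sketch: coordinating text-to-speech with clone animation.
// Viseme and synthesize() are illustrative stand-ins, not the real interface.
import java.util.List;

public class TalkingHeadDemo {

    /** One articulatory key frame: a facial posture with its onset time. */
    record Viseme(String posture, long onsetMillis) {}

    /** Stand-in for the synthesizer: a real one would also emit the audio;
     *  here we only fake the time-aligned articulatory targets. */
    static List<Viseme> synthesize(String text) {
        return List.of(new Viseme("open-jaw", 0),
                       new Viseme("rounded-lips", 120),
                       new Viseme("closed-lips", 300));
    }

    public static void main(String[] args) {
        // Drive the 3D face from the synthesizer's aligned output, so that
        // jaw, teeth and lip movements stay in sync with the voice.
        for (Viseme v : synthesize("Welcome to the exhibition.")) {
            System.out.printf("t=%4d ms -> face posture: %s%n",
                              v.onsetMillis(), v.posture());
        }
    }
}
```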

The directions of the head and of the eyes can be dynamically controlled. This capability is important, as it allows the robot to reinforce an interaction by looking towards the interlocutor's face detected by isy, or to point out an object or a part of the exhibition it is currently describing. The clone appears on the touch screen each time the robot has to speak. Meaningful messages have been prepared for the various situations the robot may encounter and for the places to be described during the visit.
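A minimal sketch of such gaze control is given below: the pixel position of a face reported by the detector is converted into pan/tilt angles for the head and eyes. The camera resolution and field-of-view values are assumptions, as are the class names.

```java
// Hypothetical sketch: steering the clone's gaze towards a detected face.
// Resolution and field-of-view constants are assumed, not taken from the paper.
public class GazeControlDemo {

    /** Pixel position of a detected face in the eye-camera image. */
    record Face(int x, int y) {}

    static final int IMG_W = 640, IMG_H = 480;      // assumed camera resolution
    static final double FOV_H = 60.0, FOV_V = 45.0; // assumed field of view (deg)

    /** Convert an image position into pan/tilt angles for the head and eyes. */
    static double[] gazeAngles(Face f) {
        double pan  = (f.x() / (double) IMG_W - 0.5) * FOV_H;  // left/right
        double tilt = (0.5 - f.y() / (double) IMG_H) * FOV_V;  // up/down
        return new double[] { pan, tilt };
    }

    public static void main(String[] args) {
        Face interlocutor = new Face(480, 200); // face reported by the detector
        double[] a = gazeAngles(interlocutor);
        System.out.printf("look at: pan=%.1f deg, tilt=%.1f deg%n", a[0], a[1]);
    }
}
```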

The robot interface, written in Java, is made of independent components, or microGUIs, directly controlled by the supervisor through a dedicated communication channel. The available microGUIs are: a map of the environment showing the current robot position and trajectory, the local "aspect" map displayed as a radar, the image of the "eye" camera with the faces currently detected by isy, the clone or talking head, pop-up warning messages, top messages, and the localization window (init).
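A short Java sketch of this dispatch pattern follows: each microGUI is an independent component that the supervisor addresses by name over a single channel. The MicroGui interface and the message format are assumptions for illustration, not the actual interface of the robot's code.

```java
// Hypothetical sketch of the microGUI dispatch: each widget is an independent
// component addressed by name; the interface and commands are assumptions.
import java.util.Map;

public class MicroGuiDemo {

    /** Every microGUI reacts to commands sent by the supervisor. */
    interface MicroGui { void handle(String command); }

    public static void main(String[] args) {
        // Registry of independent components, keyed by the name the
        // supervisor uses on the dedicated communication channel.
        Map<String, MicroGui> guis = Map.of(
            "map",     cmd -> System.out.println("map: "    + cmd),
            "aspect",  cmd -> System.out.println("radar: "  + cmd),
            "clone",   cmd -> System.out.println("clone: "  + cmd),
            "warning", cmd -> System.out.println("pop-up: " + cmd));

        // A supervisor message is routed to exactly one microGUI; the others
        // are unaffected, which keeps the components independent.
        String target = "clone", command = "say Welcome to the exhibition";
        guis.getOrDefault(target, c -> System.err.println("unknown microGUI"))
            .handle(command);
    }
}
```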