Laboratoire d’Analyse et d’Architecture des Systèmes
M.FONTMARTY, F.LERASLE, P.DANES
RAP
Conference with proceedings: Reconnaissance des Formes et Intelligence Artificielle (RFIA 2010), Caen (France), 18-22 January 2010, 8p., N° 10199
Distributable
M.FONTMARTY, P.DANES, F.LERASLE
RAP
Conference with proceedings: IEEE International Conference on Image Processing (ICIP 2009), Cairo (Egypt), 7-11 November 2009, pp.2553-2556, N° 09459
Distributable
M.FONTMARTY, F.LERASLE, P.DANES
RAP
Conference with proceedings: IEEE International Conference on Image Processing (ICIP 2009), Cairo (Egypt), 7-11 November 2009, pp.4101-4104, N° 09458
Distributable
T.GERMA, F.LERASLE, N.OUADAH, V.CADENAT, M.DEVY
RAP, CDTA d'Alger
Conference with proceedings: International Conference on Intelligent Robots and Systems (IROS 2009), St. Louis (USA), 11-15 October 2009, pp.5591-5596, N° 09623
Distributable
J.BROCHARD, B.BURGER, A.HERBULOT, F.LERASLE
OLC, RAP
Conference with proceedings: Workshop on Improving Human Robot Communication, Toyama (Japan), 28 September 2009, 4p., N° 09626
Distributable
M.FONTMARTY, F.LERASLE, P.DANES
RAP
Conference with proceedings: IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2009), Toyama (Japan), 27 September - 2 October 2009, pp.829-834, N° 09457
Distributable
B.BURGER, F.LERASLE, I.FERRANE
RAP, IRIT-UPS
Conference with proceedings: IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2009), Toyama (Japan), 27 September - 2 October 2009, pp.699-704, N° 09905
Distributable
M.GOLLER, T.KERSCHER, J.M.ZOLLNER, R.DILLMANN, M.DEVY, T.GERMA, F.LERASLE
Karlsruhe, RAP
Conference with proceedings: 14th International Conference on Advanced Robotics (ICAR 2009), Munich (Germany), 22-26 June 2009, 6p., N° 09254
Distributable
M.VALLEE, B.BURGER, D.ERTL, F.LERASLE, J.FALB
Vienna, RAP
Conference with proceedings: 14th International Conference on Advanced Robotics (ICAR 2009), Munich (Germany), 22-26 June 2009, pp.130-135, N° 09572
Distributable
T.GERMA, F.LERASLE, T.SIMON
RAP, IUT Figeac
Journal article: International Journal of Pattern Recognition and Artificial Intelligence, Vol.23, N°3, pp.591-616, May 2009, N° 09074
Distributable
Abstract:
This paper deals with video-based face recognition and tracking from a camera mounted on a mobile robot companion. Every person must be identified before being authorized to interact with the robot, while continuous tracking is required to estimate the person's approximate position. A first contribution relates to experiments with still-image-based face recognition methods, conducted to determine which combinations of image projection and classifier perform best on the face database acquired from our robot. Our approach, based on Principal Component Analysis (PCA) and Support Vector Machines (SVM) improved by genetic-algorithm optimization of the free parameters, is found to outperform conventional appearance-based holistic classifiers (eigenface and Fisherface) used as benchmarks. Relative performances are analyzed by means of Receiver Operating Characteristics, which systematically provide optimized classifier free-parameter settings. Finally, for the SVM-based classifier, we propose a non-dominated sorting genetic algorithm to obtain optimized free-parameter settings. The second and central contribution is the design of a complete still-to-video face recognition system, dedicated to the previously identified person, which integrates face verification as an intermittent feature, and shape and clothing color as persistent cues, in a robust and probabilistically motivated way. The particle filtering framework is well suited to this context, as it facilitates the fusion of different measurement sources. Automatic target recovery, after full occlusion or temporary disappearance from the field of view, is provided by positioning the particles according to face classification probabilities in the importance function. Moreover, the multi-cue fusion in the measurement function proves more reliable than any individual cue. Evaluations on key sequences acquired by the robot during long-term operation in crowded and continuously changing indoor environments demonstrate the robustness of the tracker in such natural settings. Mixing all these cues enables our video-based face recognition system to work under the wide range of conditions encountered by the robot during its movements. The paper concludes with a discussion of possible extensions.
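As a rough illustration of the first contribution, the sketch below chains a PCA projection with an SVM classifier and tunes the SVM free parameters by search. It is a minimal sketch only: synthetic random vectors stand in for the robot-acquired face database, and a plain random search over (C, gamma) stands in for the paper's non-dominated sorting genetic-algorithm optimization.

```python
# Minimal PCA + SVM face classification sketch (hypothetical data and tuning).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 vectors of 32x32 "pixels", 5 identities.
X = rng.normal(size=(200, 32 * 32))
y = rng.integers(0, 5, size=200)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Project onto a low-dimensional eigenface-style subspace.
pca = PCA(n_components=20).fit(X_train)
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

# Random search over the SVM free parameters (C, gamma), standing in for
# the genetic-algorithm optimization described in the paper.
best_clf, best_acc = None, -1.0
for _ in range(30):
    C = 10.0 ** rng.uniform(-2, 3)
    gamma = 10.0 ** rng.uniform(-4, 1)
    clf = SVC(C=C, gamma=gamma).fit(Z_train, y_train)
    acc = clf.score(Z_test, y_test)
    if acc > best_acc:
        best_clf, best_acc = clf, acc

print(f"best held-out accuracy: {best_acc:.2f}")
```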
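The second contribution can be caricatured in a few lines as well: a particle filter whose weight update multiplies a persistent cue likelihood (shape/clothing color) with an intermittent face-recognition likelihood, and whose importance function occasionally draws particles around the face classifier's output so the target can be recovered after occlusion. Everything below is a hypothetical placeholder (simulated static target, Gaussian likelihoods), not the paper's actual measurement models.

```python
# Sketch of a multi-cue particle filter with detector-driven importance sampling.
import numpy as np

rng = np.random.default_rng(1)
N, W, H = 200, 640, 480
target = np.array([320.0, 240.0])  # simulated person position (placeholder)

particles = rng.uniform([0.0, 0.0], [W, H], size=(N, 2))

def persistent_likelihood(p):
    """Placeholder for the shape/clothing-color cue (always available)."""
    d = np.linalg.norm(p - target, axis=1)
    return np.exp(-0.5 * (d / 40.0) ** 2)

def face_likelihood(p, visible):
    """Placeholder for the intermittent face-recognition cue."""
    if not visible:
        return np.ones(len(p))  # no detection this frame: cue is uninformative
    d = np.linalg.norm(p - target, axis=1)
    return np.exp(-0.5 * (d / 20.0) ** 2)

for t in range(50):
    face_visible = rng.random() < 0.3  # face only intermittently detected

    # Prediction: random-walk dynamics.
    particles += rng.normal(scale=10.0, size=particles.shape)

    # Importance function: when a face is classified, reposition a fraction
    # of the particles around the detection, enabling recovery after occlusion.
    if face_visible:
        k = N // 10
        particles[:k] = target + rng.normal(scale=15.0, size=(k, 2))

    # Measurement update: multiplicative multi-cue fusion.
    w = persistent_likelihood(particles) * face_likelihood(particles, face_visible)
    w = (w + 1e-12) / (w + 1e-12).sum()

    # Resample (multinomial) according to the fused weights.
    particles = particles[rng.choice(N, size=N, p=w)]

print("estimate:", particles.mean(axis=0), "ground truth:", target)
```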