A robotic system must perceive the environment in which a task (navigation, manipulation) has to be executed. For this purpose, it must first build several representations of the environment and exploit these models during task execution, in order to locate itself, to detect unforeseen events, or to recognize and locate objects. RAP's work over the last years has mainly concerned three topics:
- 1) the mapping of indoor or outdoor environments,
- 2) perception for object grasping,
- 3) and, more recently, obstacle detection.
Mapping of environments (PhDs: J.Sola, 2007; A.Zureiki, 2008)
RAP has contributed to Simultaneous Localization And Mapping (SLAM), especially EKF-SLAM. A mobile robot incrementally builds a stochastic map by fusing, with an Extended Kalman Filter, features extracted from successive views. A Bearing-Only SLAM method has been proposed for mapping a static environment. 2D interest points are detected and tracked in successive images acquired by the on-board camera: a 2D pixel only constrains the corresponding 3D point to lie on an optical cone. The main contribution concerns the undelayed initialization of the 3D point associated with a newly detected 2D pixel. First, in 2005, a Federated Information Sharing (FIS) method was proposed to initialize a 3D point without delay, as a set of Gaussian vectors in the stochastic map, as soon as it is perceived in one image, and to progressively select a single estimate using subsequent observations of the same point [IROS2005]. The figure hereafter shows the incremental reconstruction of a 3D point along a conic optical ray: on the left, hypothesis selection using 4 views acquired by the mobile camera; on the right, an image with expectation ellipses for points (blue) and for rays (red).
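The idea of undelayed, multi-hypothesis initialization can be sketched as follows (function names, parameters, and the pruning rule below are illustrative, not the FIS implementation): a new pixel spawns several Gaussian depth hypotheses along its optical ray, in a geometric series so that their uncertainty regions tile the ray, and each subsequent observation reweights the hypotheses until only one remains.

```python
import numpy as np

def init_ray_hypotheses(cam_pos, ray_dir, d_min=0.5, d_max=20.0, alpha=0.3, beta=3.0):
    # Hypothetical parameters: depths form a geometric series d_min, beta*d_min, ...
    # with standard deviation growing linearly with depth (alpha * d), so the
    # Gaussians tile the optical ray from d_min to d_max.
    hyps, d = [], d_min
    while d <= d_max:
        hyps.append({"mean": cam_pos + d * ray_dir, "sigma": alpha * d, "weight": 1.0})
        d *= beta
    for h in hyps:
        h["weight"] = 1.0 / len(hyps)
    return hyps

def reweight_and_prune(hyps, project, z, pix_sigma=15.0, tau=1e-3):
    # Multiply each hypothesis weight by the likelihood of the new observation z
    # (a pixel in another view), renormalize, and drop negligible hypotheses.
    for h in hyps:
        r = np.linalg.norm(z - project(h["mean"]))
        h["weight"] *= np.exp(-0.5 * (r / pix_sigma) ** 2)
    total = sum(h["weight"] for h in hyps)
    if total > 0:
        for h in hyps:
            h["weight"] /= total
    return [h for h in hyps if h["weight"] > tau]
```

With the camera at the origin looking along z, a single observation from a second viewpoint with some baseline is typically enough to kill all but the hypothesis nearest the true depth.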

The same results are now obtained using the Inverse Depth representation proposed by other authors in 2006. Our approach has then been extended to cope with different configurations:
- stereovision (Bi-Cam SLAM) [ICRA2007],
- dynamic environments (SLAM with Moving Object Tracking, SLAMMOT),
- observations acquired from several cameras mounted on different robots (SLAM-multi).
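The inverse-depth representation mentioned above stores a landmark as the camera position where it was first seen, two viewing angles, and the inverse rho of its depth, which keeps the measurement equation nearly linear even for distant points (rho close to 0). A minimal sketch of the six-parameter convention (the angle convention below is one common choice, not necessarily the one used in our code):

```python
import numpy as np

def inverse_depth_to_xyz(x0, y0, z0, theta, phi, rho):
    # Anchor position (x0, y0, z0), azimuth theta, elevation phi, inverse depth rho.
    # The landmark lies at the anchor plus (1 / rho) times the unit ray direction.
    m = np.array([np.cos(phi) * np.sin(theta),
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])
    return np.array([x0, y0, z0]) + m / rho
```

For example, an anchor at (1, 0, 0) looking straight ahead (theta = phi = 0) with rho = 0.25 places the landmark 4 m in front of the anchor.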
Recently, the same initialization strategy has been applied to the reconstruction of 3D segments [IROS2009]. Ongoing work concerns learning a robot trajectory within the same stochastic-map framework.
Another line of work has aimed at building a textured, surface-based 3D map from 2D and 3D data acquired by a camera and a 3D laser range finder mounted on a mobile robot. Interest points or 2D segments are extracted from images, while 3D planar patches are segmented from range data. A SLAM framework [ICINCO2008] has been proposed to fuse successive perceptions of these planes, and of the points and segments supported by them; texture is then mapped onto the planes.
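Segmenting planar patches from range data rests on fitting a plane to a set of 3D points; a minimal least-squares sketch via SVD (a standard technique, not necessarily the segmentation method of [ICINCO2008]):

```python
import numpy as np

def fit_plane(points):
    # Least-squares plane n·x = d through an (N, 3) array of points:
    # the unit normal n is the direction of smallest spread of the centered
    # points, i.e. the last right-singular vector of their SVD.
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    return n, float(n @ centroid)
```

The normal is defined up to sign; a segmentation pipeline would typically orient it toward the sensor before fusing successive observations of the same plane.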
Perception for object grasping (PhDs: A.Restrepo, 2005; M.Cottret, 2007; F.Trujillo, 2008)
RAP has worked mainly on object detection [IAS2006], 3D mapping [ICIP2004] and recognition using an appearance representation [VISAPP2009]. The detection of unknown objects that could be used as landmarks was done with a classical saliency map. The 3D mapping of unknown objects and the appearance-based recognition of known ones are based on interest points extracted by classical detectors (Harris, SIFT, Kadir…) or by a new Harris-Laplace detector for color images, using a camera mounted on a manipulator and moved around the object. The 3D reconstruction is obtained by nonlinear optimization (sparse bundle adjustment). An active recognition strategy has been proposed, using entropy maximization to select optimal camera positions for validating competing hypotheses about the objects detected in images.
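The view-selection step can be sketched as follows, assuming a discrete set of object hypotheses and, for each candidate viewpoint, a discrete observation likelihood table (names and the exact gain formulation here are illustrative; the papers should be consulted for the actual criterion): the next view is the one whose expected observation most reduces the entropy of the hypothesis distribution.

```python
import numpy as np

def entropy(p):
    # Shannon entropy, in bits, of a discrete distribution.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def best_view(prior, view_likelihoods):
    # prior: P(hypothesis), shape (H,).
    # view_likelihoods: one (Z, H) table per candidate view, P(obs z | hyp h, view).
    # Returns the view maximizing the expected information gain
    # H(prior) - E_z[ H(posterior | z) ].
    prior = np.asarray(prior, dtype=float)
    h0 = entropy(prior)
    gains = []
    for L in view_likelihoods:
        pz = L @ prior                  # predictive distribution over observations
        g = h0
        for z, p_z in enumerate(pz):
            if p_z > 0:
                g -= p_z * entropy(L[z] * prior / p_z)  # Bayes posterior for obs z
        gains.append(g)
    return int(np.argmax(gains)), gains
```

With two equiprobable hypotheses, a viewpoint whose observations perfectly separate them yields a gain of 1 bit, while an uninformative viewpoint yields 0, so the discriminative one is chosen.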
Obstacle detection (PhDs: G.Aviña, 2005; V.Lemonde, 2005)
The problem consists of detecting moving obstacles from data acquired by a robot. Stereovision-based and monocular methods were initially proposed; ongoing work addresses obstacle detection from a belt of cameras (how to build a robot-centered occupancy grid from N cameras mounted around a robot?) or from a single moving camera (how to segment several moving obstacles from the background, using a set of detected and tracked interest points?). A new project is starting on obstacle detection from multi-spectral images acquired from an airplane moving on taxiways.
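One standard way to approach the N-camera question is to let each camera contribute its own occupancy evidence and combine the contributions in log-odds form, which reduces multi-sensor fusion to a sum; a minimal sketch, assuming each camera already yields a per-cell occupancy probability expressed in the robot-centered grid:

```python
import numpy as np

def fuse_occupancy(per_camera_grids):
    # Each grid holds P(occupied) per cell as seen by one camera.
    # Assuming independent sensors: sum the log-odds, then convert back.
    log_odds = np.zeros_like(per_camera_grids[0], dtype=float)
    for grid in per_camera_grids:
        p = np.clip(grid, 1e-6, 1.0 - 1e-6)   # avoid log(0)
        log_odds += np.log(p / (1.0 - p))
    return 1.0 / (1.0 + np.exp(-log_odds))
```

Two cameras that each report a cell occupied with probability 0.8 fuse to about 0.94, while a cell both report at 0.5 stays at 0.5: agreement reinforces evidence, ignorance leaves it unchanged.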
On-going PhDs
- D.Marquez, on SLAM and convoy navigation.
- JM.Codol (with the NAVONTIME company and the MRS research group at LAAS), on hybrid navigation from GPS and vision.
- B.Coudrin (with the NOOMEO company), on 3D modelling from a portable sensor.
- B.Ducarouge (with Ecole des Mines Albi), on thermal metrology from IR cameras.
- J.Harvent (with Ecole des Mines Albi), on non-destructive testing from stereovision.
- Y.Raoui (with University Mohammed V Agdal, Rabat, Morocco), on visual navigation from vision and RFID tags.
- W.Aït Farès (with University Mohammed V Agdal, Rabat, Morocco), on scene analysis and object tracking for an assistant robot.
- N.Sallem, on vision for manipulation tasks.
- A.Gonzalez (with the MRS research group at LAAS), on navigation assistance for airplanes in all weather conditions.
- D.Almanza, on moving object detection and identification from a mobile camera.