RAP - Sensor-based Motion

Multi-sensor based Long Range Navigation in Poorly Known Environments

The reactive navigation of a robot towards a target has been considered as a sequence of vision- and laser-based servoing tasks.  A first contribution concerned the reconstruction of pointwise visual features to be used for feedback when they cannot be extracted due to occlusions.  The reconstructed values were computed as the solutions to the open-loop model (the differential equation relating the velocity screw to the feature velocities), initialized with the last extracted features and an estimate of their (unknown) depths.  The depth estimates were themselves produced by an original predictor-corrector scheme.  Compared with the literature, our strategy is particularly suitable for real-time application thanks to its light computational cost and its reliability.  It was successfully validated by experiments [DurandPetiteVille_IROS2010] [DurandPetiteVille_ICINCO2010] (A. Durand Petiteville’s thesis).
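The open-loop prediction of an occluded point feature can be sketched as follows.  This is a minimal illustration, not the scheme of the cited work: it uses the standard interaction matrix of a normalized image point, explicit Euler integration, and the classical depth dynamics of a static point observed by a moving camera; the function names and step sizes are placeholders.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def predict_features(s0, Z0, v, dt, steps):
    """Open-loop prediction of an occluded point feature.

    Integrates s_dot = L(s, Z) v together with the depth dynamics of a
    static 3D point, Z_dot = -v_z - w_x * y * Z + w_y * x * Z, by explicit
    Euler, starting from the last extracted feature s0 and depth estimate Z0.
    v is the camera velocity screw (v_x, v_y, v_z, w_x, w_y, w_z).
    """
    s, Z = np.asarray(s0, dtype=float).copy(), float(Z0)
    for _ in range(steps):
        x, y = s
        s = s + dt * interaction_matrix(x, y, Z) @ v
        Z = Z + dt * (-v[2] - v[3] * y * Z + v[4] * x * Z)
    return s, Z
```

For a pure forward translation the product x·Z stays constant for a static point, which gives a quick sanity check of the integration.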

To enable navigation in a poorly known environment cluttered with occluding and non-occluding, static and/or dynamic obstacles, we decomposed the problem into interconnected processes and defined a structure suited to the application.  The involved processes are: a supervisor, which triggers suitable local sensor-based controllers depending on the available data; an environment model based on a topological map; and an action process, which synthesizes the controllers and sequences them.  Similar ideas were used and experimentally validated for coordinated human/robot navigation, where a robot must stay behind a moving tutor continuously tracked by vision and RFID [DurandPetiteVille_ECCCDC2011].
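The supervisor's role of triggering local controllers from the available data can be sketched as a simple priority-based selector.  This is a hedged illustration only: the controller names, data fields, and velocity commands below are hypothetical placeholders, and the architecture of the cited work is considerably richer.

```python
def supervise(data, controllers):
    """Return the first local controller whose sensory preconditions hold.

    controllers is an ordered list of (name, is_applicable, compute) triples;
    earlier entries have higher priority.
    """
    for name, is_applicable, compute in controllers:
        if is_applicable(data):
            return name, compute(data)
    return "stop", (0.0, 0.0)  # safe fallback: null velocity command

# Hypothetical example: prefer visual servoing when the target is seen,
# otherwise fall back to laser-based obstacle avoidance.
controllers = [
    ("visual_servoing", lambda d: d.get("target_visible", False),
     lambda d: (0.3, 0.05)),       # placeholder (v, w) command
    ("obstacle_avoidance", lambda d: d.get("laser_ok", False),
     lambda d: (0.1, -0.2)),       # placeholder (v, w) command
]
```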

[Figure: Sensor-based motion, long-range navigation tolerant to signal loss and occlusions]


Dual-arm Visual Servoing

The aim is to develop vision-based coordinated control of two robotic arms for object manipulation.  Our approach involves three steps: an open-loop model relating the motion of both arms to the visual features in the cameras’ image planes; a decomposition of the task into subtasks, each performed by a local image-based controller; and a sequencing of these controllers ensuring the continuity of the control signals.
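One classical way to obtain continuity of the control signal when sequencing local controllers is to blend the outgoing and incoming commands over a short transition window.  The sketch below illustrates that idea only; it is not the specific sequencing scheme of the thesis, and the window length `T` is an assumed tuning parameter.

```python
import numpy as np

def blended_command(u_prev, u_next, t, t_switch, T):
    """Convex blend of two local controller outputs over a window of length T.

    Returns u_prev before the switch, u_next after t_switch + T, and a
    linear interpolation in between, so the overall control signal is
    continuous at the switching time.
    """
    a = min(max((t - t_switch) / T, 0.0), 1.0)
    return (1.0 - a) * np.asarray(u_prev, dtype=float) + a * np.asarray(u_next, dtype=float)
```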

Ongoing work along these lines aims at capping a pen.  Visual features have been selected from an eye-in-hand and an eye-to-hand camera so as to account for the relative pose of the two end-effectors.  The local controllers and their sequencing have been validated in simulation (R. Fleurmond’s thesis).


Multicriteria Analysis of Visual Servos

For a time, we pursued our work on the “multicriteria” analysis of visual servos (i.e., including constraints such as target visibility, actuators’ saturations, and exclusion of 3D areas) [Danes_BookChapter2010].  Starting from a transcription of the problem as the stability analysis of a nonlinear “rational” system under rational constraints, we showed how a “multicriteria basin of attraction” can be computed on the basis of piecewise-biquadratic Lyapunov functions and feasibility/optimization programs subject to linear matrix inequalities (LMIs).  More recent results concerned the design of less conservative solutions at reasonable computational cost (S. Durola’s thesis).
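The flavor of such LMI-based basin estimation can be conveyed by a generic piecewise-quadratic sketch.  This is only an illustrative simplification, not the piecewise-biquadratic formulation of the cited work: the partition cells, S-procedure multipliers, and symbols below are generic placeholders.

```latex
% Generic sketch: a Lyapunov certificate that is quadratic on each cell
% X_i of a state-space partition, with the constraints encoded by the
% S-procedure terms S_{ik} and multipliers tau_{ik} >= 0.
\begin{align}
  V(x) &= x^\top P_i\, x, \qquad x \in \mathcal{X}_i, \\
  P_i &\succ 0, \qquad
  A_i^\top P_i + P_i A_i + \sum_k \tau_{ik}\, S_{ik} \prec 0, \\
  \widehat{\mathcal{B}} &= \{\, x : V(x) \le c \,\} \subseteq \mathcal{X}_{\mathrm{adm}},
\end{align}
```

Here the sublevel set $\widehat{\mathcal{B}}$, kept inside the admissible set $\mathcal{X}_{\mathrm{adm}}$ defined by the visibility, saturation, and 3D-exclusion constraints, provides the estimate of the multicriteria basin of attraction; all three conditions are feasibility problems in the matrix variables, hence solvable as LMIs.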