Within this theme, related to control theory, RAP researchers have studied the analysis and control of mobile robots, mainly through visual servoing, possibly relying on the detection and tracking functions developed in the two other themes. Two approaches have been developed, using either:
- 1) the “task function” paradigm for navigation in cluttered scenes (in particular in human environments), or
- 2) Lyapunov theory and Linear Matrix Inequalities (LMIs) for the analysis and synthesis of visual servos subject to several constraints.
Vision-based navigation in cluttered scenes (PhD: D. Folio, 2007)
This first approach addresses both multi-sensor-based navigation and visual servoing of mobile robots. A first objective was to perform navigation tasks in unknown environments on the basis of visual and range sensors. The considered environments are assumed to be static and may be cluttered with occluding and non-occluding obstacles. The first results showed that avoiding both occlusions and collisions over-constrains the robot motion, and is therefore not the most suitable strategy.
Below, on the left, a result obtained with the first kind of method: the task fails when the occlusion occurs. On the right, using the second kind of method, the task is successfully performed despite the occlusions.
We have therefore developed new techniques that allow occlusions to occur when the success of the mission requires it (figure 2) [IROS2008]. We have proposed a set of methods to reconstruct the visual features when they are unavailable, and compared the obtained solutions from different points of view (accuracy, computation speed, etc.). Both experimental and simulation results have shown the validity of the proposed approaches.
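The core idea behind such reconstruction methods is to predict the motion of the lost visual features from the known camera velocity, through the classical interaction-matrix model ṡ = L(s, Z) v. A minimal sketch of this idea (hypothetical values; a single point feature, pure translational camera motion, Euler integration — not the actual methods of [IROS2008]) could look like:

```python
import numpy as np

def predict_feature(s, Z, v, dt):
    """One Euler step of the point-feature dynamics s_dot = L(s, Z) v,
    for a purely translational camera velocity v = (vx, vy, vz)."""
    x, y = s
    vx, vy, vz = v
    # Translational part of the interaction matrix of a normalized image point
    x_dot = -vx / Z + x * vz / Z
    y_dot = -vy / Z + y * vz / Z
    Z_dot = -vz                      # depth of a static point under translation
    return (x + x_dot * dt, y + y_dot * dt), Z + Z_dot * dt

# Simulate an occlusion: the camera moves at 0.2 m/s along its optical axis
s, Z = (0.1, 0.1), 1.0
v, dt = (0.0, 0.0, 0.2), 1e-3
for _ in range(1000):                # 1 s of "blind" prediction
    s, Z = predict_feature(s, Z, v, dt)

# Ground truth after 1 s: Z = 0.8, hence x = y = 0.1 / 0.8 = 0.125
print(s, Z)
```

The predicted feature can then be fed to the visual servoing loop until the real measurement becomes available again.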
More recently, we have addressed the problem of executing vision-based navigation tasks with respect to humans in populated environments. This work was initiated through the Commrob project, whose overall objective was to design a trolley able to assist a user in a shopping center. The system was equipped with a camera mounted on a pan-tilt unit, and the user carried an RFID tag to enhance the detection process. The considered task consisted in making the trolley autonomously follow a user, detected by an embedded human tracker or, when he or she is not visible, through the RFID tag. We have developed a complete multi-sensor-based control strategy relying on the visual data provided by the tracker when they are available, and on the RFID data when the user cannot be detected. This strategy was experimented on the Rackham robot and validated through numerous experiments [accepted IROS09, CVIU09]. Following these results, we have improved the robustness of the vision-based controller by mixing sliding-mode techniques with the task function formalism classically used to design visual servoing controllers. As previously, this control law was experimented on Rackham, and the obtained results have demonstrated the efficiency of the chosen technique with respect to more conventional approaches [accepted ECMR09]. We have also studied the sensitivity of the vision-based control law with respect to errors on the camera calibration parameters, the depth, and visual data noise [ACL296].
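The multi-sensor strategy can be summarized as a switching feedback: the controller is fed with the visual measurement when the tracker returns one, and falls back on the coarser RFID bearing otherwise. A minimal sketch of this selection logic (hypothetical names and gains, simple proportional heading control — the actual controllers are the task-function ones described above) might be:

```python
from typing import Optional

K_VISION, K_RFID = 1.0, 0.4   # hypothetical gains: trust vision more than RFID

def heading_command(visual_bearing: Optional[float],
                    rfid_bearing: Optional[float]) -> float:
    """Angular velocity steering the robot toward the tracked user.
    Prefers the visual tracker; falls back on the RFID bearing."""
    if visual_bearing is not None:
        return K_VISION * visual_bearing
    if rfid_bearing is not None:
        return K_RFID * rfid_bearing
    return 0.0                # user lost by both sensors: stop turning

print(heading_command(0.2, 0.5))    # vision available: use it
print(heading_command(None, 0.5))   # user occluded: fall back on RFID
print(heading_command(None, None))  # user lost: zero command
```

The key design choice is that the switch is made on data availability, so the robot keeps following the user through visual occlusions instead of aborting the task.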
Multicriteria visual servoing
This second activity concerns the design of generic methods for the “multicriteria” analysis and synthesis of visual servos, i.e. taking into account all the constraints of the problem: convergence, visibility, actuator saturations, exclusion of 3D areas, singularity avoidance, etc. We have proposed a sound and versatile approach, which consists in recasting the multicriteria analysis (resp. synthesis) of many vision-based positioning schemes as the stability analysis (resp. stabilization) of a nonlinear rational system subject to rational constraints. Advanced control strategies have then been sought, expressing the stability/stabilization conditions as the minimization of a convex criterion subject to Linear Matrix Inequalities (LMIs). Such optimization problems are convex and can be solved with dedicated off-the-shelf software at moderate computational cost.
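In the simplest, quadratic-stability setting, such an LMI feasibility problem amounts to finding P ≻ 0 with AᵀP + PA ≺ 0 for a (linearized) closed-loop matrix A; for a fixed Q ≻ 0 this reduces to solving the Lyapunov equation AᵀP + PA = -Q. A toy illustration (hypothetical system matrix, NumPy-only vectorization in place of a dedicated SDP solver) could be:

```python
import numpy as np

# Hypothetical stable closed-loop matrix (eigenvalues -1 and -2)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
n = A.shape[0]
Q = np.eye(n)                        # any Q > 0 works

# Solve the Lyapunov equation A^T P + P A = -Q by vectorization:
# row-major vec(A^T P + P A) = (kron(A.T, I) + kron(I, A.T)) vec(P)
M = np.kron(A.T, np.eye(n)) + np.kron(np.eye(n), A.T)
P = np.linalg.solve(M, -Q.flatten()).reshape(n, n)

# P > 0 certifies stability via the quadratic Lyapunov function V(x) = x^T P x
print(P)                             # here P = [[1.25, 0.25], [0.25, 0.25]]
print(np.linalg.eigvalsh(P))         # all eigenvalues positive
```

For the rational systems and constraints considered in this activity, this basic feasibility test is replaced by genuine LMI optimization, typically handled by an SDP solver.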
The first results were developed within the framework of quadratic stability. As the symmetry and convexity properties of the resulting ellipsoidal invariant sets are penalizing in this robotics context, nontrivial extensions were proposed in [European Journal of Control, 2006].
In collaboration with Daniel F. Coutinho, associate professor in control at PUCRS, Porto Alegre, Brazil, alternative solutions were then developed to convert the Lyapunov-based analysis conditions into LMI problems. Interestingly, these enabled the use of more involved Lyapunov functions, e.g. biquadratic [ROCOND2006] or piecewise biquadratic [CDC2006], leading to much less conservative conclusions [ICRA2009].
Prior to the research period reported herein, it had been shown that our statement of the visual servoing problem induces a striking duality, in the sense of the duality between control and estimation, between visual servoing and vision-based localization. A new approach to the set-membership filtering of rational systems has been developed [ACTI1414], to be applied to the vision-based localization problem.
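In set-membership filtering, the noises are only assumed bounded, and the filter propagates a set guaranteed to contain the state. A minimal one-dimensional interval sketch (hypothetical scalar dynamics and bounds, much simpler than the rational-system machinery of [ACTI1414]) could read:

```python
def sm_filter_step(lo, hi, u, w_bound, y, v_bound, a=0.9):
    """One predict/correct step of a scalar set-membership filter.
    Dynamics:    x+ = a*x + u + w,  |w| <= w_bound
    Measurement: y  = x + v,        |v| <= v_bound
    (lo, hi) is an interval guaranteed to contain the state."""
    # Prediction: image of the interval through the dynamics, inflated by w
    lo_p = a * lo + u - w_bound
    hi_p = a * hi + u + w_bound
    # Correction: intersect with the set consistent with the measurement
    lo_c = max(lo_p, y - v_bound)
    hi_c = min(hi_p, y + v_bound)
    return lo_c, hi_c

lo, hi = -10.0, 10.0          # very coarse prior
lo, hi = sm_filter_step(lo, hi, u=1.0, w_bound=0.1, y=0.5, v_bound=0.2)
print(lo, hi)                 # the measurement shrinks the interval
```

The guaranteed-enclosure property is what makes this estimation scheme the natural dual of the constrained (LMI-based) stabilization problems above.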
PhD theses in progress on these topics:
- A. Durand Petiteville, on vision-based navigation in cluttered areas.
- S. Durola, on multicriteria visual servoing.
- M. Ndiaye, on the control of a dual-arm robot.