Defenses for autonomously-adapting systems

Autonomous adaptation is a desirable property for systems whose resource availability changes dynamically (e.g., as in peer-to-peer and mobile computing settings) or that need to evolve in a dynamic environment (e.g., as in autonomous robotics). However, truly autonomous systems, which are capable of choosing and executing high-level actions without direct human supervision, are not yet sufficiently dependable to be deployed in critical applications. Indeed, several threats need to be considered in such systems:

  • a level of adversity beyond the capability of the autonomous adaptation mechanisms;

  • lack of precision in the perception of the system state and environment;

  • faults and other deficiencies in the design of the autonomous adaptation mechanisms, such as heuristics or other design compromises introduced to allow computational tractability.

The counterpart of autonomous adaptation is thus the confidence that can be placed in the underlying mechanisms. In particular, what defenses can be provided as countermeasures against the threats introduced by autonomous adaptation?

We have addressed this issue in the context of autonomous robots, in collaboration with the RIS research group at LAAS. To date, we have carried out two investigations in this direction.

First, we designed, implemented and validated a temporal planner able to tolerate design faults in its declarative domain model and search heuristics [1, 2]. The fault-tolerant planner is built from several elementary planners acting as a redundant set, coordinated by a simple, and thus presumably fault-free, management module called FTplan. Various error detection mechanisms can be associated with FTplan to detect planner errors either before plan execution (planning timeout, plan analyzer) or during plan execution (plan action termination checks, plan goal satisfaction checks). A prototype fault-tolerant planner was developed and evaluated by fault injection.
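The coordination scheme can be illustrated with a minimal sketch: a management module tries each elementary planner in turn and accepts a plan only if it is produced before a timeout and passes a plan analyzer. The function names, plan representation, and detection thresholds below are illustrative assumptions, not the actual FTplan interface.

```python
import time

def analyze_plan(plan, goal):
    """Illustrative plan analyzer: accept only non-empty plans ending in the goal."""
    return bool(plan) and plan[-1] == goal

def ftplan(planners, goal, timeout_s=5.0):
    """Coordinate a redundant set of elementary planners (a minimal sketch).

    Each planner is invoked in turn; its plan is accepted only if it is
    produced within the timeout (error detection: planning timeout) and
    passes the plan analyzer (error detection: plan analyzer).
    """
    for planner in planners:
        start = time.monotonic()
        plan = planner(goal)                       # invoke one elementary planner
        if time.monotonic() - start > timeout_s:   # planning timeout exceeded
            continue                               # discard and try the next planner
        if plan is not None and analyze_plan(plan, goal):
            return plan                            # plan passed pre-execution checks
    return None  # all redundant planners failed; caller must react safely

# Example: a faulty planner returning no plan, backed up by a correct one.
faulty = lambda goal: None
correct = lambda goal: ["navigate", goal]
```

In this sketch the error detection before plan execution masks the design fault of the first planner; the in-execution checks (action termination, goal satisfaction) would be hooked into the plan executive rather than into this coordination loop.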

Second, we have laid the foundations of “safety modes”, a method for structuring real-time checking of safety constraints for multi-functional robots [3, 4]. The idea of safety modes stems from the observation that autonomous robots need to be able to adapt their functionality according to the current situation and current system objectives, so the set of safety constraints to be satisfied can also change dynamically (or else the most restrictive constraints would need to be applied continuously, which could severely impact performance in non-critical situations). For each safety mode, we define two types of constraints: permissiveness constraints, which specify the authorized domains of variation of key physical variables (e.g., robot speed), and context constraints, which must be satisfied by the environment (e.g., no humans are within a given distance of the robot) if the corresponding permissiveness constraint is to be admitted.
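The pairing of permissiveness and context constraints can be sketched as follows. The data structures, numeric limits, and fallback value are illustrative assumptions, not the published formalism.

```python
from dataclasses import dataclass
from typing import Callable

RESTRICTIVE_SPEED = 0.3  # m/s; an assumed conservative fallback limit

@dataclass
class SafetyMode:
    """One safety mode: a permissiveness constraint on a key physical
    variable (here, an authorized speed domain [0, max_speed]), admitted
    only while the associated context constraint on the environment holds."""
    name: str
    max_speed: float                       # permissiveness constraint
    context_ok: Callable[[dict], bool]     # context constraint

def check_safety(mode, speed, env):
    """Admit the permissive speed limit only if the context constraint
    holds; otherwise fall back to the most restrictive limit."""
    limit = mode.max_speed if mode.context_ok(env) else RESTRICTIVE_SPEED
    return speed <= limit

# Example: fast motion is authorized only when no human is within 5 m.
fast = SafetyMode("fast_transit", max_speed=2.0,
                  context_ok=lambda env: env["nearest_human_m"] > 5.0)
```

The point of the mode structure is visible in the example: the same commanded speed of 1.5 m/s is safe when the context constraint holds (no human nearby) and unsafe when it does not, without having to apply the restrictive limit continuously.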

In the near future, our research on this theme will focus on ensuring the safety of autonomous robot systems through the use of safety monitors. Different types of monitors can be used to provide multiple lines of defense:

a) monitors that are physically independent of the main control channel;

b) monitors intercepting interactions within the main control channel;

c) high-level monitors capable of reasoning about the safety of long-term plans of autonomous action.

A systematic approach to using such safety monitors requires methods and tools [see “8. Risk analysis for autonomously-adapting systems”] for identifying the safety rules to be checked, and the corresponding languages and safety monitor design patterns.
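A type-(b) monitor, which intercepts interactions within the main control channel, can be sketched as a simple veto filter on outgoing commands. The rule and command structures below are illustrative assumptions.

```python
def monitored_send(command, rules, actuator):
    """A minimal sketch of a monitor intercepting the main control channel:
    each command is checked against the safety rules of the current mode,
    and any violating command is vetoed instead of reaching the actuator."""
    for rule in rules:
        if not rule(command):
            return "vetoed"   # block the unsafe command; trigger a safe reaction
    actuator(command)         # command satisfies all rules; forward it
    return "sent"

# Example safety rule (assumed limit): never command a speed above 2.0 m/s.
speed_rule = lambda cmd: cmd.get("speed", 0.0) <= 2.0

# The actuator is modeled here as a simple log of forwarded commands.
forwarded = []
```

Identifying which rules belong in such a filter, and expressing them in a suitable language, is precisely what the methods and tools mentioned above are meant to support.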

Key publications

[1] B. Lussier, M. Gallien, J. Guiochet, F. Ingrand, M.-O. Killijian, D. Powell. “Fault Tolerant Planning for Critical Robots.” In 37th Annual IEEE/IFIP Int. Conf. on Dependable Systems and Networks (DSN 2007), pp. 144-53, Edinburgh, UK, 2007.

[2] B. Lussier, M. Gallien, J. Guiochet, F. Ingrand, M.-O. Killijian, D. Powell. “Planning with Diversified Models for Fault-Tolerant Robots.” In 17th. Int. Conf. on Automated Planning and Scheduling (ICAPS), pp. 216-23, Providence, Rhode Island, USA, 2007.

[3] J. Guiochet, D. Powell, E. Baudin, J.-P. Blanquart. “Online Safety Monitoring using Safety Modes.” In 6th IARP-IEEE/RAS-EURON Joint Workshop on Technical Challenges for Dependable Robots in Human Environments, Pasadena, CA, USA, 2008.

[4] J. Guiochet, D. Powell, E. Baudin, J. P. Blanquart. “Surveillance en ligne de la sécurité basée sur les modes de sécurité.” In 30° Congrès sur la Maîtrise des Risques et la Sûreté de Fonctionnement (Lambda-Mu), p. 7, Avignon, 7-9 octobre 2008.