Risk analysis for autonomously adapting systems

Recent advances in robotics have opened up many opportunities to deploy autonomous systems in close contact with humans, supporting various activities of daily life. In such contexts, it is crucial to identify the potential threats arising from physical interactions between the system and humans, and to assess the associated risks to safety and reliability. Because of the complexity of human-system interactions, rigorous and systematic approaches are needed to assist developers in: i) identifying significant threats and implementing efficient protection mechanisms to cope with them, and ii) elaborating a sound argumentation to justify the level of safety that the system can achieve. The risk analysis method HAZOP-UML that we developed is a guided method to identify potential occurrences of harm, their causes and their severity [GMGP10]. The results of the risk analysis are then used as input to build an argumentation about system safety, using a modeling language for safety cases (Goal Structuring Notation) [GDHKP13]. These approaches have been experimented with and successfully applied to real case studies in a European project (FP7-PHRIENDS) and a national project (ANR-MIRAS).
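To make this workflow concrete, the following minimal sketch (in Python) illustrates how a HAZOP-UML style deviation table could be represented and how each significant deviation could feed a GSN-style safety goal. The guide words, table attributes, class names and the walking-assistance example entry are illustrative assumptions, not the actual tables, tool support or GSN models from [GMGP10] and [GDHKP13].

# Illustrative sketch only: hypothetical data structures showing how a
# HAZOP-UML style deviation table could feed a GSN-style safety argument.
# Guide words, attributes and the example entry are assumptions, not the
# actual HAZOP-UML tables or GSN models from the cited papers.
from dataclasses import dataclass, field
from typing import List

GUIDE_WORDS = ["No", "More", "Less", "As well as", "Other than", "Early", "Late"]

@dataclass
class Deviation:
    """One row of a HAZOP-UML style table: a guide word applied to a
    UML use-case element, with its possible cause, harm and severity."""
    use_case: str
    element: str          # e.g. a message, precondition or postcondition
    guide_word: str
    deviation: str
    possible_cause: str
    harm: str
    severity: str         # e.g. "catastrophic", "serious", "minor"
    safety_recommendation: str

@dataclass
class GsnGoal:
    """A GSN goal supported by solutions (evidence), here the safety
    recommendations produced by the risk analysis."""
    identifier: str
    statement: str
    solutions: List[str] = field(default_factory=list)

def build_safety_goals(deviations: List[Deviation]) -> List[GsnGoal]:
    """Turn each significant deviation into a goal whose supporting
    solution is the corresponding safety recommendation."""
    goals = []
    for i, dev in enumerate(deviations, start=1):
        goals.append(GsnGoal(
            identifier=f"G{i}",
            statement=f"Harm '{dev.harm}' in use case '{dev.use_case}' is mitigated",
            solutions=[dev.safety_recommendation],
        ))
    return goals

if __name__ == "__main__":
    # Hypothetical entry for a walking-assistance robot use case.
    table = [
        Deviation(
            use_case="Assist user while standing up",
            element="Precondition: user is holding the handles",
            guide_word="No",
            deviation="Robot starts moving while the user is not holding the handles",
            possible_cause="Grip sensor failure or premature start command",
            harm="User falls",
            severity="serious",
            safety_recommendation="Interlock motion on a validated grip-detection signal",
        )
    ]
    for goal in build_safety_goals(table):
        print(goal.identifier, "-", goal.statement, "| evidence:", goal.solutions)

In practice, each goal would be further decomposed with strategies and context elements in the GSN diagram; the point of the sketch is only to show how risk analysis outputs can be traced into the safety argument.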

 

Figure: A model-based safety analysis and argumentation process

 

Publications

[GDHKP13] J. Guiochet, Q.-A. Do Hoang, M. Kaaniche and D. Powell, Model-Based Safety Analysis of Human-Robot Interactions: the MIRAS Walking Assistance Robot, International Conference on Rehabilitation Robotics (ICORR 2013), Seattle, USA, 2013.

[GMGP10] J. Guiochet, D. Martin-Guillerez and D. Powell, Experience with a Model-Based User-Centered Risk Assessment for Service Robots, International Symposium on High Assurance Systems Engineering (HASE 2010), San Jose, USA, 1-4 November 2010, 10p.

 
