TrustMeIA is a cluster («chantier») that federates the activities related to confidence in autonomous systems carried out in research labs in Toulouse (France). It is supported by the RTRA STAE foundation. TrustMeIA aims at building a network through actions such as: workshops, seminars, collaborative platforms, and collaborative internship grants.
Issues related to autonomous systems are found today in many application areas, such as aeronautics, transportation, agriculture, manufacturing, and medicine. One of the major obstacles to the deployment of these systems is the trust that can be placed in them. Indeed, the building blocks used in their design rely more and more on decision-making mechanisms or algorithms derived from Artificial Intelligence (AI) that can deviate from acceptable behavior, and thus induce unwanted effects such as accidents.
Several features of autonomous systems mean that many techniques used in the development of critical systems must be reconsidered (including dependability techniques). For example, the non-determinism of these systems (even in simulation) and the absence of rigorous specifications or behavioral models make verification techniques such as testing or model checking harder to apply.
The TrustMeIA objective is to bring together researchers and industry practitioners in the fields of autonomous systems and computer science to initiate open discussions, open publications, open platforms, and collaborative projects in the domain of trusted autonomous systems. Other aspects related to trust, such as societal, legal, or ethical ones, may also be part of the TrustMeIA actions.
Topics (not limited to)
- Decision making under uncertainty and incompleteness: probabilistic and non-probabilistic approaches
- Validation, confidence estimation and certification (Formal methods for expression and verification of requirements, testing, assurance cases, etc.)
- New software architectures for safe autonomous systems (integrity levels management, isolation of components in intelligence architectures, runtime verification, monitoring)
- Transparency and explainability of perception, inference, actions
- Security and autonomous systems
- Legal, ethical, societal aspects, superintelligence issues
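As a small illustration of one topic above, runtime verification can be sketched as a monitor that checks a safety property over a stream of system states observed during execution. This is a minimal, hypothetical example: the state fields and the property ("slow down when an obstacle is near") are invented for illustration, not taken from any TrustMeIA system.

```python
# Minimal runtime-verification sketch: a monitor scans a trace of
# observed states and reports the first violation of a safety property.
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class State:
    speed: float          # current speed (m/s) -- hypothetical field
    obstacle_dist: float  # distance to nearest obstacle (m) -- hypothetical field

def safe(s: State, speed_limit: float = 2.0, min_dist: float = 5.0) -> bool:
    """Safety property (illustrative): speed stays below a limit
    whenever an obstacle is closer than min_dist."""
    return s.obstacle_dist >= min_dist or s.speed <= speed_limit

def monitor(trace: Iterable[State]) -> Optional[int]:
    """Return the index of the first violating state, or None if the
    whole trace satisfies the property."""
    for i, s in enumerate(trace):
        if not safe(s):
            return i
    return None

trace = [State(1.0, 10.0), State(3.0, 8.0), State(3.0, 4.0)]
print(monitor(trace))  # → 2 (third state is too fast near an obstacle)
```

A monitor like this runs alongside the system rather than replacing offline verification; it flags deviations at execution time, which is one way to cope with the non-determinism mentioned earlier.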