Laboratoire d’analyse et d’architecture des systèmes
R.WANG, J.GUIOCHET, G.MOTET, W.SCHON
Revue Scientifique : Safety Science, 34p., Décembre 2017, https://doi.org/10.1016/j.ssci.2017.11.012 , N° 17460
Railway standard EN50129 clarifies the safety acceptance conditions of safety-related electronic systems for signalling. It requires the use of a structured argumentation, named a Safety Case, to present the fulfilment of these conditions. As guidance for building the Safety Case, this standard provides the structure of high-level safety objectives and recommendations of development techniques according to different Safety Integrity Levels (SIL). Nevertheless, the rationale connecting these techniques to the high-level safety objectives is not explicit. The proposed techniques stem from experts' belief in their effectiveness and efficiency in achieving the underlying safety objectives. So, how should one formalize and assess this belief? And, as a result, how much confidence can we have in the safety of railway systems when these standards are used? To address these questions, the paper successively treats two aspects: 1) making the safety assurance rationale explicit by modelling the Safety Case with GSN (Goal Structuring Notation) according to the EN5012x standards; 2) proposing a quantitative framework based on Dempster-Shafer theory to formalize and assess the confidence in the Safety Case. A survey amongst safety experts is carried out to estimate the confidence parameters. With these results, application guidance for this framework is provided based on the Wheel Slide Protection (WSP) system.
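The core operation behind such Dempster-Shafer confidence frameworks is the combination of mass functions from several experts. The sketch below illustrates Dempster's rule of combination on a two-element frame of discernment; the mass assignments and labels are illustrative, not values from the paper.

```python
# Minimal sketch of Dempster's rule of combination for two expert opinions
# on a claim, with frame of discernment {accept, reject}. All mass values
# below are invented for illustration.

from itertools import product

FRAME = frozenset({"accept", "reject"})

def combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) with Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # mass assigned to disjoint subsets
    k = 1.0 - conflict                   # normalise by non-conflicting mass
    return {s: v / k for s, v in combined.items()}

# Expert 1 is fairly confident the claim holds; Expert 2 is less certain.
m1 = {frozenset({"accept"}): 0.7, FRAME: 0.3}
m2 = {frozenset({"accept"}): 0.5, frozenset({"reject"}): 0.2, FRAME: 0.3}

m12 = combine(m1, m2)
belief_accept = m12.get(frozenset({"accept"}), 0.0)
plausibility_accept = sum(v for s, v in m12.items() if "accept" in s)
print(f"belief={belief_accept:.3f}, plausibility={plausibility_accept:.3f}")
```

The belief/plausibility interval quantifies the residual uncertainty left after combining the two opinions, which is the kind of confidence measure the framework attaches to Safety Case claims.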
J.DUCHENE, C.LE GUERNIC, E.ALATA, V.NICOMETTE, M.KAANICHE
TSF, INRIA Rennes
Revue Scientifique : Journal of Computer Virology and Hacking Techniques, 27p., Décembre 2017, doi 10.1007/s11416-016-0289-8 , N° 17109
Communication protocols enable structured information exchanges between different entities. A description, at different levels of detail, is necessary for many applications, such as interoperability or security audits. When such a description is not available, one can resort to protocol reverse engineering to infer the format of exchanged messages or a model of the protocol. During the past 12 years, several tools have been developed to automate, entirely or partially, the protocol inference process. Each of these tools has been developed with a specific application goal for the inferred model, leading to specific needs and thus different strengths and limitations. After identifying key challenges, the paper presents a survey of protocol reverse engineering tools developed in the last decade. We consider tools focusing on the inference of the format of individual messages or of the grammar of sequences of messages. Finally, we propose a classification of these tools according to different criteria, aimed at providing relevant insights into the techniques each tool uses, and how it compares to the others, for the classification of messages and the inference of their format or of the grammar of the protocol. This classification also makes it possible to identify technical areas that have not been sufficiently explored so far and require further development in the future.
L.MASSON, J.GUIOCHET, H.WAESELYNCK, K.CABRERA CASTILLOS, S.CASSEL, M.TORNGREN
TSF, Uppsala, KTH
Rapport LAAS N°17416, Décembre 2017, 8p.
Robots and autonomous systems have become a part of our everyday life; guaranteeing their safety is therefore a crucial issue. Among the possible methods for guaranteeing safety, monitoring is widely used, but few methods exist to generate the safety rules implemented by such monitors. In particular, since these systems operate with few human interventions, it is necessary to build safety monitors that do not excessively constrain the system's ability to perform its tasks. In this paper, we propose a method that takes the system's desired tasks into account when specifying monitor strategies, and we apply it to a case study. We show that we can synthesize a larger number of strategies and facilitate reasoning about the trade-off between safety and functionality.
Manifestation avec acte : International Conference on Network and Service Management ( CNSM ) 2017 du 26 novembre au 30 novembre 2017, Tokyo (Japon), Novembre 2017, 16p. , N° 18004
Despite the efforts made by both the research community and industry to invent new methods against distributed denial of service attacks, they remain a major threat on the Internet. Those attacks are numerous and can, in the most serious cases, prevent the targeted system from answering any request from its clients. Detecting such attacks means dealing with several difficulties, such as their distributed nature or the several evasion techniques available to attackers. The detection process also has a cost, which includes both the resources needed to perform the detection and the work of the network administrator. In this paper we introduce AATAC (Autonomous Algorithm for Traffic Anomaly Detection), an unsupervised DDoS detector that focuses on reducing the computational resources needed to process the traffic. It models the traffic using a set of regularly created snapshots. Each new snapshot is compared to this model using a k-NN based measure to detect significant deviations from the usual traffic profile. These snapshots are also used to provide the network administrator with an explicit and dynamic view of the traffic when an anomaly occurs. Our evaluation shows that AATAC is able to efficiently process real traces with low computational resource requirements, while achieving efficient detection with a low number of false positives.
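The snapshot/k-NN idea can be sketched as follows: the traffic model is a set of past snapshots (feature vectors), and a new snapshot is flagged when its distance to its k-th nearest neighbour in the model exceeds a threshold. The features, distance, and threshold below are assumptions for illustration, not AATAC's actual ones.

```python
# Illustrative k-NN deviation measure over traffic snapshots.
# A snapshot here is a pair (packets/s, distinct source IPs), scaled to [0, 1].

import math

def knn_score(snapshot, model, k=3):
    """Distance from `snapshot` to its k-th nearest neighbour in `model`."""
    dists = sorted(math.dist(snapshot, past) for past in model)
    return dists[min(k, len(dists)) - 1]

# Model built from regularly created snapshots of normal traffic.
model = [(0.30, 0.20), (0.32, 0.22), (0.28, 0.19), (0.31, 0.21), (0.29, 0.20)]

normal = (0.30, 0.21)
flood = (0.95, 0.90)   # sudden surge in volume and source diversity

threshold = 0.1
print(knn_score(normal, model) > threshold)  # normal traffic: not flagged
print(knn_score(flood, model) > threshold)   # DDoS-like snapshot: flagged
```

Because the model is just a bounded set of snapshots, scoring a new one is cheap, which matches the paper's focus on low computational cost.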
J.M.Larré, K.CABRERA CASTILLOS, J.GUIOCHET
Rapport LAAS N°17404, Novembre 2017, 18p.
M.LI, G.ZHU, Y.SAVARIA, M.LAUER
Ecole Montréal, TSF
Revue Scientifique : IEEE Transactions on Industrial Informatics, Vol.13, N°5, pp.2118-2129, Octobre 2017, DOI: 10.1109/TII.2017.2732345 , N° 17317
AFDX is a safety-critical network in which a redundancy management mechanism is employed to enhance the reliability of the network. However, as stated in the ARINC664-P7 standard, there still exists a potential problem: redundant transmissions may fail due to sequence inversion in the redundant channels. In this paper, we explore this phenomenon and provide its mathematical analysis. It is revealed that the variable jitter and the transmission latency difference between two successive frames are the two main sources of sequence inversion. Thus, two methods are proposed and investigated to mitigate the effects of jitter pessimism, which can eliminate the potential risk. A case study is carried out and the obtained results confirm the validity and applicability of the developed approaches.
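The failure mode can be illustrated with a deliberately simplified redundancy manager (this is not the ARINC 664-P7 algorithm itself): frames carry sequence numbers, the first valid copy from either channel wins, and copies with a sequence number not greater than the last accepted one are dropped as redundant or stale. Under jitter-induced sequence inversion, both copies of a frame can be dropped.

```python
# Simplified illustration of how sequence inversion defeats a
# first-valid-copy-wins redundancy manager. Invented scheme for illustration.

def redundancy_manager(arrivals):
    """arrivals: list of (channel, seq_no) in reception order.
    Returns the sequence numbers delivered to the application."""
    last_accepted = -1
    delivered = []
    for channel, seq in arrivals:
        if seq > last_accepted:      # first valid copy wins
            delivered.append(seq)
            last_accepted = seq
        # otherwise the frame is treated as a redundant/stale copy and dropped
    return delivered

# Nominal case: both channels deliver frames 1 and 2 in order.
nominal = [("A", 1), ("B", 1), ("A", 2), ("B", 2)]
print(redundancy_manager(nominal))        # [1, 2]

# Inversion: jitter delays both copies of frame 1 past channel B's copy of
# frame 2, so frame 1 is rejected as stale on both channels and never
# reaches the application, despite having been received twice.
inverted = [("B", 2), ("A", 1), ("A", 2), ("B", 1)]
print(redundancy_manager(inverted))       # [2]
```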
C.BERTERO, M.ROY, C.SAUVANAUD, G.TREDAN
Manifestation avec acte : International Symposium on Software Reliability Engineering ( ISSRE ) 2017 du 23 octobre au 26 octobre 2017, Toulouse (France), Octobre 2017, 10p. , N° 17295
Event logging is a key source of information on a system's state. Reading logs provides insights into its activity, helps assess its correct state, and allows problems to be diagnosed. However, reading does not scale: with the ever-rising number of machines and the growing complexity of systems, auditing systems' health based on logfiles is becoming overwhelming for system administrators. This observation has led to many proposals automating the processing of logs. However, most of these proposals still require some human intervention, for instance by tagging logs, parsing the source files generating the logs, etc. In this work, we target minimal human intervention for logfile processing and propose a new approach that considers logs as regular text (as opposed to related works that seek to best exploit the little structure imposed by log formatting). This approach allows us to leverage modern techniques from natural language processing. More specifically, we first apply a word embedding technique based on Google's word2vec algorithm: logfiles' words are mapped to a high-dimensional metric space, which we then exploit as a feature space using standard classifiers. The resulting pipeline is very generic, computationally efficient, and requires very little intervention. We validate our approach by seeking stress patterns on an experimental platform. Results show a strong predictive performance (≈ 90% accuracy) using three out-of-the-box classifiers.
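The shape of the pipeline can be sketched in a few lines: embed each word, average the word vectors to embed a log line, then classify in the embedding space. The tiny hand-written 2-d "embedding" below is only a stand-in for word2vec, which would learn the vectors from the log corpus, and the nearest-centroid rule stands in for the off-the-shelf classifiers used in the paper.

```python
# Toy sketch of embed-then-classify log processing. Vectors and vocabulary
# are invented; a real pipeline learns them with word2vec.

import math

# Stand-in 2-d word embedding: stress-related words cluster together,
# and so do normal-operation words.
EMB = {
    "cpu": (0.9, 0.1), "load": (0.8, 0.2), "timeout": (0.85, 0.15),
    "ok": (0.1, 0.9), "started": (0.15, 0.85), "connected": (0.2, 0.8),
}

def embed_line(line):
    """Embed a log line as the average of its known words' vectors."""
    vecs = [EMB[w] for w in line.lower().split() if w in EMB]
    if not vecs:
        return (0.0, 0.0)
    return tuple(sum(c) / len(vecs) for c in zip(*vecs))

def centroid(lines):
    vecs = [embed_line(l) for l in lines]
    return tuple(sum(c) / len(vecs) for c in zip(*vecs))

train_stress = ["cpu load timeout", "cpu timeout"]
train_normal = ["service started ok", "client connected ok"]
centroids = {"stress": centroid(train_stress), "normal": centroid(train_normal)}

def classify(line):
    v = embed_line(line)
    return min(centroids, key=lambda label: math.dist(v, centroids[label]))

print(classify("cpu load high timeout"))   # stress
print(classify("worker started ok"))       # normal
```

Note that nothing here parses the log format: unknown words are simply ignored, which is what lets the approach treat logs as plain text.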
R.WANG, J.GUIOCHET, G.MOTET
Manifestation avec acte : International Conference on Computer Safety, Reliability and Security ( SafeComp ) 2017 du 12 septembre au 15 septembre 2017, Trento (Italie), Septembre 2017, 14p. , N° 17189
Confidence in safety-critical systems is often justified by safety arguments. The excessive complexity of today's systems introduces more uncertainties into the review of these arguments. This paper proposes a framework to support argumentation assessment based on experts' decisions, and their confidence in those decisions, for the lowest-level claims of the arguments. Expert opinion is extracted and converted into a quantitative model based on Dempster-Shafer theory. Several types of argument and associated formulas are proposed. A preliminary validation of this framework is carried out through a survey of safety experts.
J.ROUX, E.ALATA, V.NICOMETTE, M.KAANICHE
Manifestation avec acte : European Dependable Computing Conference ( EDCC ) 2017 du 04 septembre au 08 septembre 2017, Genève (Suisse), Septembre 2017, 4p. , N° 17230
Nowadays, more and more Internet-of-Things (IoT) smart products, interconnected through various wireless communication technologies (Wi-Fi, Bluetooth, Zigbee, Z-Wave, etc.), are integrated into daily life, especially in homes, factories, cities, etc. Such IoT technologies have become very attractive, with a large variety of new services offered to improve the quality of life of end-users or to create new economic markets. However, the security of such connected objects is a real concern due to weak or flawed security designs, configuration errors or imperfect maintenance. Moreover, the vulnerabilities discovered in IoT products are often difficult to eliminate because, most of the time, they cannot be patched easily. Therefore, protection mechanisms are needed to mitigate the potential risks induced by such objects in private and public connected areas. In this paper, we propose a novel approach to detect potential attacks in smart places (e.g. smart homes) by detecting deviations from legitimate communication behavior, in particular at the physical layer. The proposed solution is based on profiling and monitoring the Radio Signal Strength Indication (RSSI) associated with the wireless transmissions of the connected objects. A machine learning algorithm based on neural networks is used to characterize legitimate communications and to identify suspicious scenarios. We show the feasibility of this approach and discuss some possible application cases.
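The intuition behind RSSI profiling can be sketched with a deliberately simple statistical stand-in: learn each device's typical signal-strength distribution during a learning phase, then flag transmissions whose RSSI is unusually far from that profile (e.g. a spoofed device transmitting from a different location). The paper uses a neural network; a mean/standard-deviation threshold is used here only to keep the sketch self-contained, and all sample values are invented.

```python
# Simplified RSSI-profiling sketch (statistical stand-in for the paper's
# neural network approach). RSSI samples are in dBm and are illustrative.

from statistics import mean, stdev

def build_profile(rssi_samples):
    """Learning phase: summarise a device's legitimate RSSI distribution."""
    return {"mean": mean(rssi_samples), "std": stdev(rssi_samples)}

def is_suspicious(profile, rssi, n_sigma=3.0):
    """Monitoring phase: flag transmissions far outside the learnt profile."""
    return abs(rssi - profile["mean"]) > n_sigma * profile["std"]

# RSSI observed for a legitimate smart-home sensor during learning.
learning = [-62, -60, -63, -61, -59, -62, -61, -60]
profile = build_profile(learning)

print(is_suspicious(profile, -61))   # legitimate-looking transmission
print(is_suspicious(profile, -35))   # much stronger signal: flagged
```

Because RSSI depends on transmitter position and power, a physical-layer profile of this kind is hard for a remote attacker to reproduce, which is what makes the approach attractive for unpatchable devices.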
A.KRITIKAKOU, T.MARTY, M.ROY
INRIA Rennes, TSF
Revue Scientifique : ACM Transactions on Design Automation of Electronic Systems, Vol.23, N°2, 13p., Septembre 2017 , N° 17377
In real-time mixed-criticality systems, Worst-Case Execution Time (WCET) analysis is required to guarantee that timing constraints are respected, at least for high-criticality tasks. However, the WCET is pessimistic compared to the real execution time, especially on multicore platforms. As WCET computation considers the worst-case scenario, whenever a high-criticality task accesses a shared resource on a multi-core platform, all cores are assumed to use that resource concurrently. This pessimism in WCET computation leads to a dramatic underutilization of the platform resources, or even to failing to meet the timing constraints. To increase resource utilization while preserving the guarantees for high-criticality tasks, previous works proposed a run-time control system to monitor and decide when the interferences from low-criticality tasks can no longer be tolerated. However, in these initial approaches, the points where the controller is executed were statically predefined. In this work, we propose a dynamic run-time control which adapts its observations to on-line temporal properties, further increasing the dynamism of the approach and mitigating the unnecessary overhead implied by existing static approaches. Our dynamic adaptive approach makes it possible to control the ongoing execution of tasks based on run-time information, and further increases the gains in resource utilization compared with static approaches.
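A schematic sketch of the run-time control idea follows, with all parameters invented for illustration: at each observation point the controller compares the remaining worst-case execution time of the critical task, assuming full interference, with the time left before the deadline. If the safety condition no longer holds, low-criticality tasks are suspended; otherwise the available slack decides how far away the next observation point can be placed, which is the dynamic adaptation.

```python
# Schematic run-time controller for a mixed-criticality task (illustrative
# parameters; not the paper's actual control law).

def next_check(now, deadline, remaining_wcet_isolation, interference_factor=1.5):
    """Return (suspend_low_crit, next_observation_time).

    remaining_wcet_isolation: remaining WCET of the critical task in isolation;
    interference_factor: assumed worst-case inflation under full interference.
    """
    remaining_wcet = remaining_wcet_isolation * interference_factor
    slack = (deadline - now) - remaining_wcet
    if slack <= 0:
        return True, now              # no margin left: suspend and recheck now
    # Dynamic adaptation: the larger the slack, the later the next check,
    # which reduces monitoring overhead when the task is ahead of schedule.
    return False, now + slack / 2

# Plenty of slack early in the execution...
print(next_check(now=0, deadline=100, remaining_wcet_isolation=40))
# ...but interference has eaten the margin near the deadline.
print(next_check(now=70, deadline=100, remaining_wcet_isolation=25))
```

Spacing observation points by a function of the measured slack is what distinguishes this dynamic scheme from statically predefined controller invocations.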