RAP - Integrated Sensors
Embedded Sensor for Obstacle Detection, Identification and Tracking
The main goal is to co-design a hardware-software implementation of well-known vision-based obstacle detection, identification and tracking algorithms suited to realistic robotic contexts. This leads to the design of smart vision sensors whose task is to extract information from the scene and communicate it to a higher-level processing resource. Such sensors are highly constrained in terms of size, power consumption, cost and throughput. Advanced Driver Assistance Systems (ADAS) and Autonomous Vehicles constitute application fields in which throughput is a limiting factor for vehicle speed and reactivity (D. Botero's thesis). The research has been carried out along three distinct methodologies: detection by pixel classification; detection by geometric transformation; detection by movement analysis. One of the key aspects of development is hardware prototyping on FPGAs (Field Programmable Gate Arrays), for which algorithm-level and hardware-level optimizations are prevalent (A. Alhamwi's and F. Brenot's theses).
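The pixel-classification methodology can be illustrated with a minimal offline sketch: each pixel is labeled as an obstacle when its color deviates from a reference ground color, and the labeled region is summarized by a bounding box. This is only an illustrative toy classifier, not the actual algorithms studied in the theses cited above; the function names and threshold are assumptions.

```python
import numpy as np

def classify_obstacle_pixels(frame, ground_color, threshold=40.0):
    """Label a pixel as obstacle (True) when its color deviates from the
    reference ground color by more than `threshold` (Euclidean distance)."""
    diff = frame.astype(np.float64) - np.asarray(ground_color, dtype=np.float64)
    distance = np.linalg.norm(diff, axis=-1)  # per-pixel color distance
    return distance > threshold

def bounding_box(mask):
    """Return (row_min, row_max, col_min, col_max) of the detected blob,
    or None when no obstacle pixel is present."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    return (int(rows.min()), int(rows.max()), int(cols.min()), int(cols.max()))

# Synthetic 8x8 RGB frame: uniform gray "ground" with a red "obstacle" patch.
frame = np.full((8, 8, 3), 120, dtype=np.uint8)
frame[2:5, 3:6] = (200, 30, 30)
mask = classify_obstacle_pixels(frame, ground_color=(120, 120, 120))
box = bounding_box(mask)  # (2, 4, 3, 5)
```

Per-pixel classifiers of this kind map naturally to FPGA pipelines, since each pixel is processed independently as it streams off the sensor.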
Hardware Accelerator for Vision Based Localization
This topic explores the design of hardware architectures to speed up existing vision-based localization techniques such as SLAM, by providing an image-filtering front-end that feeds the filtering/optimization back-end with high-level information. We have been working on a prototype which implements the whole set of image-filtering operations for EKF SLAM. This prototype provides real-time, low-latency information on a vehicle embedding a single camera and an inertial measurement unit. It connects programmable logic (FPGA) with an embedded processor (PowerPC). A co-design flow was followed to partition the application between the two resources and to evaluate the communication costs at run-time. This helped to modify the existing C-SLAM software to take advantage of the available processing parallelism.
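For each feature the front-end extracts, the filtering back-end performs an EKF correction step. A minimal sketch of that step is shown below in simplified linear form (in EKF-SLAM the measurement model is nonlinear and H is its Jacobian); the state, matrices and values are illustrative, not those of the C-SLAM prototype.

```python
import numpy as np

def ekf_update(x, P, z, H, R):
    """One Kalman correction step: fuse a front-end measurement z into
    the state estimate x with covariance P, measurement matrix H, noise R."""
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy 2-D state [position, velocity]; the "front-end" observes position only.
x = np.array([0.0, 1.0])
P = np.eye(2)
H = np.array([[1.0, 0.0]])   # observe position
R = np.array([[0.1]])        # measurement noise
z = np.array([0.5])          # measurement delivered by the front-end
x, P = ekf_update(x, P, z, H, R)
```

The heavy, regular work (image filtering, feature extraction) runs on the FPGA, while this low-dimensional matrix algebra remains on the embedded processor.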
SLAM co-processor
Dataflow based Design Methodology for Hardware-Software Co-design
We have developed a methodology that guides the designer through the application prototyping process to produce a valid hardware-software implementation of a given algorithm. This methodology is integrated into the PREESM software and generates SystemC prototypes from a dataflow description at each step of the design flow. It helps explore the optimization space for a given application on a given architecture [Pelcat_LNEE2013].
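The dataflow model underlying this flow can be illustrated with a toy actor that fires once its input FIFO holds enough tokens, consuming and producing tokens at fixed rates. This is a hedged sketch of the general dataflow principle, not PREESM's actual API; the class and names are invented for illustration.

```python
from collections import deque

class Actor:
    """A dataflow actor fires when its input FIFO holds at least
    `consume` tokens; firing consumes them and produces one result."""
    def __init__(self, name, consume, fn):
        self.name, self.consume, self.fn = name, consume, fn
        self.inbox = deque()  # input FIFO

    def can_fire(self):
        return len(self.inbox) >= self.consume

    def fire(self):
        tokens = [self.inbox.popleft() for _ in range(self.consume)]
        return self.fn(tokens)

# Minimal pipeline: a source streams samples into a summing actor
# that fires for every 2 tokens consumed.
summer = Actor("sum", consume=2, fn=sum)
results = []
for sample in [1, 2, 3, 4]:
    summer.inbox.append(sample)
    if summer.can_fire():
        results.append(summer.fire())
# results == [3, 7]
```

Because firing rules and token rates are explicit, such a description can be analyzed statically to partition actors between hardware and software and to bound the FIFO (communication) costs.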
The “Embedded Audition for Robotics” (“EAR”) Sensor
In connection with robot audition, a new System-on-a-Programmable-Chip (SoPC) architecture was designed for our “EAR” smart auditory sensor. A soft processor and custom hardware modules designed for application-specific operations (co-processors for intensive operations, timers, communications…) were implemented on an FPGA. This enables efficient resource handling; progressive, flexible and fast development (C/C++-based SDKs, MATLAB/HDL toolboxes…); and validation at each stage of the workflow. One cycle of MUSIC-MAICE detection-localization now takes less than 25 ms. This architecture has also been adapted to binaural audition and MEMS microphones [Lunati_IROS2012].
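The MUSIC step at the heart of the detection-localization cycle can be sketched offline with NumPy for a uniform linear array: estimate the sensor covariance, extract the noise subspace, and scan steering vectors for directions orthogonal to it. The array geometry, snapshot count and noise level below are illustrative, not those of the EAR sensor.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 6, 200            # sensors, snapshots
d_over_lambda = 0.5      # half-wavelength element spacing
true_deg = 20.0          # simulated source direction

def steering(theta_deg):
    """Steering vector of a uniform linear array for direction theta."""
    m = np.arange(M)
    phase = -2j * np.pi * d_over_lambda * m * np.sin(np.radians(theta_deg))
    return np.exp(phase)

# Simulate one narrowband source plus sensor noise; form the covariance.
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = np.outer(steering(true_deg), s) + noise
R = X @ X.conj().T / N

# Noise subspace: eigenvectors of the M-1 smallest eigenvalues (one source).
eigvals, eigvecs = np.linalg.eigh(R)   # eigenvalues in ascending order
En = eigvecs[:, :-1]

# MUSIC pseudospectrum: peaks where the steering vector is orthogonal to En.
angles = np.arange(-90.0, 90.0, 0.5)
spectrum = [1.0 / np.linalg.norm(En.conj().T @ steering(a)) ** 2 for a in angles]
estimate = float(angles[int(np.argmax(spectrum))])
```

The eigendecomposition and the pseudospectrum scan are the compute-intensive parts that the SoPC's co-processors are there to accelerate.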