In this group, we focus on the development of embedded systems with shared-resource constraints. Shared resources may be functional, such as computing units or communication media, but also non-functional, such as time or energy. Our main activity is the design (analysis, development, implementation) of embedded systems with guarantees on shared-resource constraints, including guarantees on safety and performance.
What kind of shared resources?
What kind of guarantees?
What kind of design?
The SharedResources group studies the development of embedded systems with shared-resource constraints in the following areas:
Certified Micro-Architecture Aware Optimized Compilation
We design efficient instruction schedulers in the CompCert certified compiler. Our optimisations first invoke an untrusted oracle that finds a fast instruction schedule with respect to the latency constraints and the resource constraints (on computation units) of the (multiple-issue) pipeline of the processor. Then, a certified checker verifies that this schedule preserves the semantics (by certified symbolic execution and other static analyses). We apply such scheduling both before and after register allocation. We currently target Kalray VLIW, ARM, and RISC-V cores.
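The oracle-and-checker division of labour can be sketched as follows. This is a toy Python illustration of the translation-validation pattern, not CompCert's actual Coq implementation; the instruction representation and function names are invented for the example:

```python
# Toy translation validation: an untrusted scheduler reorders
# instructions, and a trusted checker accepts the result only if
# symbolic execution yields the same final state as the original.
# Instructions are triples (dest, op, operands), invented for this sketch.

def symbolic_exec(instrs):
    """Interpret instructions over symbolic values; return the final
    symbolic register state."""
    state = {}
    def val(x):
        return state.get(x, x)  # registers not yet written stay symbolic
    for dest, op, args in instrs:
        state[dest] = (op,) + tuple(val(a) for a in args)
    return state

def checker(original, scheduled):
    """Trusted part: validate the oracle's output a posteriori."""
    return symbolic_exec(original) == symbolic_exec(scheduled)

original = [("r1", "add", ("a", "b")),
            ("r2", "mul", ("c", "d")),
            ("r3", "sub", ("r1", "r2"))]

# A correct schedule: the first two instructions are independent.
good = [original[1], original[0], original[2]]
# A wrong schedule: it reads r1 and r2 before they are written.
bad = [original[2], original[0], original[1]]

print(checker(original, good))  # True: semantics preserved
print(checker(original, bad))   # False: schedule rejected
```

A real checker must also account for memory accesses, control flow, and the pipeline's resource constraints; the point of the pattern is only that the scheduling oracle itself need not be trusted or proved correct.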
Distributed fault-tolerant algorithms
Distributed computing concerns systems made of autonomous communicating entities. The main differences between such distributed systems and centralized ones are the absence of a common clock, asynchronism, and the fact that each computing entity has only partial access to the global state of the system. Distributed systems model a vast variety of networks, from LANs (Local Area Networks) to large-scale networks such as the Internet. Modern distributed systems can be large-scale, dynamic, and/or resource-constrained. These characteristics make them more vulnerable to faults. Moreover, the scale of these systems, as well as the adversarial environments where they may be deployed, limits the possibility of human intervention to repair them. Fault tolerance, i.e., the ability of a distributed algorithm to withstand faults, is then mandatory. In this context, our research focuses on self-stabilization, a versatile lightweight technique for withstanding transient faults in a distributed system.
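As a concrete illustration of self-stabilization (a classic textbook example, not one of the group's own algorithms), here is a small Python simulation of Dijkstra's K-state token ring: started from an arbitrary, possibly corrupted configuration, the ring converges on its own to a legitimate configuration in which exactly one privilege (the "token") circulates.

```python
import random

def privileged(states, i, K):
    """Machine i holds a privilege iff its guard is enabled."""
    n = len(states)
    if i == 0:  # the distinguished root machine
        return states[0] == states[n - 1]
    return states[i] != states[i - 1]

def move(states, i, K):
    """Fire the move of a privileged machine i."""
    n = len(states)
    if i == 0:
        states[0] = (states[n - 1] + 1) % K
    else:
        states[i] = states[i - 1]

def run(states, K, steps, rng):
    """Central daemon: at each step, pick one privileged machine at
    random and let it move. At least one machine is always privileged."""
    n = len(states)
    for _ in range(steps):
        enabled = [i for i in range(n) if privileged(states, i, K)]
        move(states, rng.choice(enabled), K)

rng = random.Random(0)
n, K = 5, 6                                     # stabilization needs K >= n
states = [rng.randrange(K) for _ in range(n)]   # arbitrary corrupted start
run(states, K, 500, rng)                        # far beyond the O(n^2) bound
tokens = sum(privileged(states, i, K) for i in range(n))
print(tokens)  # 1: exactly one privilege circulates after convergence
```

Convergence holds for any daemon, not just a random one: whatever sequence of moves is chosen, after finitely many steps the system reaches, and stays in, the set of legitimate configurations.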
Executable High-Level Hardware Models for Software Development and Evaluation
We have been working with STMicroelectronics since 2002 on the notion of Transaction-Level Models (TLM) for hardware platforms. This level of detail is: 1) sufficiently detailed to enable software development long before the actual hardware is available; 2) sufficiently abstract and simple to allow for efficient simulations. TLM has been used for various purposes, ranging from functional validation to early energy-consumption estimation. Our current topic is using TLM to help programmers take full advantage of a hardware platform.
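The idea behind TLM can be illustrated with a toy model in Python (industrial TLM platforms are typically written in SystemC; the classes below are invented for the example): a memory access is a single function call routed through a bus model, rather than a cycle-accurate exchange of signals, which is what makes simulation fast enough for software development.

```python
class RAM:
    """Abstract memory device: state only, no timing or signal detail."""
    def __init__(self, size):
        self.mem = bytearray(size)
    def read(self, offset):
        return self.mem[offset]
    def write(self, offset, value):
        self.mem[offset] = value

class Bus:
    """Transaction-level bus: routes each read/write transaction to
    the device mapped at the target address."""
    def __init__(self):
        self.mappings = []  # (base, size, device)
    def bind(self, base, size, device):
        self.mappings.append((base, size, device))
    def _route(self, addr):
        for base, size, dev in self.mappings:
            if base <= addr < base + size:
                return dev, addr - base
        raise ValueError(f"bus error: no device at {addr:#x}")
    def read(self, addr):
        dev, off = self._route(addr)
        return dev.read(off)
    def write(self, addr, value):
        dev, off = self._route(addr)
        dev.write(off, value)

# Software under development can run against this model before the
# hardware exists:
bus = Bus()
bus.bind(0x8000_0000, 0x1000, RAM(0x1000))
bus.write(0x8000_0010, 42)
print(bus.read(0x8000_0010))  # 42
```

Timing or energy annotations can then be attached to transactions to obtain early estimates without paying the cost of a cycle-accurate model.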
Timing analysis and implementation of critical applications on multi-core platforms
We focus on the implementation of time-critical applications. We work on the whole design workflow, from the specification of time-critical applications to timing analysis with guaranteed static bounds on execution times and delays. The main ongoing work focuses on:
Online Efficient Scheduling for Multi-Core and Multi-Thread Systems
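As one illustration of the kind of question studied here (a minimal sketch under simplifying assumptions, not one of the group's actual schedulers): a discrete-time simulation of global Earliest-Deadline-First (EDF) on identical cores, checking whether a periodic task set meets all deadlines over a given horizon. The task parameters and the unit-time granularity are invented for the example.

```python
def global_edf(tasks, cores, horizon):
    """Simulate global EDF. tasks is a list of (period, wcet) pairs with
    implicit deadlines (deadline = period). Returns True iff no job
    misses its deadline within the horizon (in time units)."""
    jobs = []  # mutable entries [deadline, task_id, remaining_work]
    for t in range(horizon):
        # release one job per task at each multiple of its period
        for tid, (period, wcet) in enumerate(tasks):
            if t % period == 0:
                jobs.append([t + period, tid, wcet])
        # every pending job has work left, so reaching its deadline is a miss
        if any(t >= deadline for deadline, _, _ in jobs):
            return False
        # run the `cores` pending jobs with the earliest deadlines
        jobs.sort()
        for job in jobs[:cores]:
            job[2] -= 1
        jobs = [j for j in jobs if j[2] > 0]
    return True

print(global_edf([(4, 1), (4, 1), (4, 2)], cores=1, horizon=20))  # True
print(global_edf([(2, 2), (2, 2)], cores=1, horizon=20))          # False
print(global_edf([(2, 2), (2, 2)], cores=2, horizon=20))          # True
```

Even this simple model hints at why multi-core timing guarantees are hard: multiprocessor scheduling exhibits anomalies absent from the single-core case (for instance, the Dhall effect, where global EDF misses deadlines on task sets of low total utilization).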