If I visited an exotic destination, I would surely do some sightseeing. I would cycle around and use Google Maps for navigation. At most crossroads, I would need to know which direction to turn. Other apps running on my phone might delay the directions provided by Google Maps, so I could mistakenly take a wrong turn and possibly be eaten by some large animal.
This happens because, on all modern multicore processors, applications running on different cores compete for shared resources. This is not a problem for most applications, but it can be for time-sensitive ones. If we design such time-sensitive programs, we need to know what to expect in terms of interference so that we can accommodate the delays. The most common approach for evaluating delays in multicore systems is to run the time-sensitive application alongside the most aggressive enemy programs and measure how much damage they can do. In our RTAS 2020 paper, we explore a technique for automatically uncovering the most aggressive enemy programs.
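The basic measurement is simple in principle: time the application alone, then time it again while an enemy runs on another core, and compare. As a minimal runnable sketch, here is a toy version in Python where both the "victim" and the "enemy" are placeholder memory-heavy loops (the real workloads would be native programs, and each would be pinned to its own core):

```python
import multiprocessing as mp
import time

def victim(n=2_000_000):
    # Toy "victim": streams over a large list, so its run time is
    # sensitive to cache and memory-bus contention. Only the
    # traversal itself is timed.
    data = list(range(n))
    start = time.perf_counter()
    sum(data)
    return time.perf_counter() - start

def enemy(stop):
    # Toy "enemy": hammers memory until told to stop. This Python
    # stand-in just keeps the sketch self-contained.
    buf = bytearray(8 * 1024 * 1024)
    pattern = b"x" * len(buf[::4096])
    while not stop.is_set():
        buf[::4096] = pattern

if __name__ == "__main__":
    # Best-of-three to reduce noise in each condition.
    baseline = min(victim() for _ in range(3))
    stop = mp.Event()
    attacker = mp.Process(target=enemy, args=(stop,))
    attacker.start()
    contended = min(victim() for _ in range(3))
    stop.set()
    attacker.join()
    print(f"slowdown: {contended / baseline:.2f}x")
```

The slowdown ratio is the quantity of interest: the more aggressive the enemy, the larger it gets.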
Now let us go over the high-level view of our approach. We create a parameterizable enemy template and try to find the parameters that make it most aggressive. To discover those parameters, we use a search strategy together with a so-called victim program, designed to be the perfect training dummy for the enemy template. The victim makes heavy use of the shared resource, which makes it very vulnerable. The search strategy tries one set of parameters, then another, and another, until it discovers the set that slows down the victim program the most.
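The search loop above can be sketched in a few lines. The parameter names (`buffer_kb`, `stride_bytes`) are illustrative stand-ins for the knobs a real cache-enemy template might expose, random search stands in for whatever search strategy is used, and the slowdown measurement is replaced by a toy surrogate so the sketch actually runs:

```python
import random

# Hypothetical parameter space for a cache-enemy template; real
# templates expose similar knobs, but these names are invented here.
PARAM_SPACE = {
    "buffer_kb": [64, 256, 1024, 4096],
    "stride_bytes": [8, 64, 512, 4096],
}

def measure_slowdown(params):
    # Placeholder for building an enemy with `params`, running it
    # alongside the victim, and returning the victim's slowdown.
    # A toy surrogate keeps the sketch self-contained.
    return params["buffer_kb"] / 4096 + params["stride_bytes"] / 4096

def random_search(iterations=50, seed=0):
    # Try parameter sets one after another, keeping the set that
    # slows the victim down the most.
    rng = random.Random(seed)
    best_params, best_slowdown = {}, 0.0
    for _ in range(iterations):
        params = {k: rng.choice(v) for k, v in PARAM_SPACE.items()}
        slowdown = measure_slowdown(params)
        if slowdown > best_slowdown:
            best_params, best_slowdown = params, slowdown
    return best_params, best_slowdown

best, score = random_search()
print(best, score)
```

Swapping `random_search` for a smarter strategy only changes how the next parameter set is picked; the measure-and-compare loop stays the same.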
A problem we encountered is that measuring interference can be tricky. For example, if the processor gets very hot, its frequency is automatically decreased to let it cool down and prevent damage to the hardware. This is a normal mechanism in every modern processor, but it makes comparing the aggressiveness of enemies quite difficult. We therefore had to cool down the processor between measurements, to make sure the frequency had not been decreased as a result of handling a previous enemy.
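On Linux boards, one way to implement such a cooldown is to poll the temperature that the kernel exposes under sysfs and wait until it drops below a threshold before starting the next measurement. The sysfs path and the 50 °C threshold below are illustrative; both vary by board:

```python
import time

def read_temp_millic(zone=0):
    # Linux exposes temperatures in millidegrees Celsius via sysfs;
    # the zone number and path differ between boards.
    with open(f"/sys/class/thermal/thermal_zone{zone}/temp") as f:
        return int(f.read())

def wait_until_cool(threshold_c=50.0, poll_s=1.0, read_temp=read_temp_millic):
    # Block until the reported temperature drops below `threshold_c`,
    # so a measurement never starts on a processor still throttled
    # from running the previous enemy.
    while read_temp() / 1000.0 >= threshold_c:
        time.sleep(poll_s)
```

Calling `wait_until_cool()` between enemy evaluations keeps each measurement starting from a comparable thermal state.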
After discovering the most aggressive enemy for each shared resource, we need to explore how a mixture of these enemies impacts all shared resources at once. This mix forms an environment for which no other environment can cause more interference across all shared resources. We call this mixture of enemies, each attacking a different resource, the hostile environment.
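Operationally, the hostile environment is just the tuned enemies running together for the duration of a measurement. A minimal sketch, with toy Python stand-ins for the per-resource enemies (real ones would be the tuned native programs, each pinned to its own core):

```python
import multiprocessing as mp
import time
from contextlib import contextmanager

def cache_enemy(stop):
    # Toy stand-in for the tuned cache enemy: strides through a buffer.
    buf = bytearray(4 * 1024 * 1024)
    pattern = b"x" * len(buf[::64])
    while not stop.is_set():
        buf[::64] = pattern

def bus_enemy(stop):
    # Toy stand-in for the tuned memory-bus enemy: bulk copies.
    src = bytes(4 * 1024 * 1024)
    while not stop.is_set():
        bytearray(src)

@contextmanager
def hostile_environment(enemies):
    # Run one enemy per shared resource for the duration of the
    # `with` block, then shut them all down cleanly.
    stop = mp.Event()
    procs = [mp.Process(target=e, args=(stop,)) for e in enemies]
    for p in procs:
        p.start()
    try:
        yield
    finally:
        stop.set()
        for p in procs:
            p.join()

if __name__ == "__main__":
    with hostile_environment([cache_enemy, bus_enemy]):
        time.sleep(0.2)  # the victim measurement would go here
```

Timing an application inside the `with` block then gives its execution time under the worst interference the technique has found.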
We tried our technique on a few commercially available development boards. Their accessible price makes them fairly popular in the embedded-systems world. On all the boards, applications running alongside the hostile environment showed substantial delays in their execution time. However, the size of the delays varied significantly. Even boards that share the same processor architecture differ in implementation, making their sensitivity to interference quite different. Microarchitectural differences between the boards likely explain this; however, we do not know the exact mechanism, since low-level details are generally not available for commercial systems.