Round-robin scheduling

One of the easiest ways to conceptualize actual task execution is with round-robin scheduling. In round-robin scheduling, the scheduler gives each task a small slice of time on the processor. As long as a task has work to perform, it will execute during its slice; as far as the task is concerned, it has the processor entirely to itself. The scheduler takes care of all of the complexity of switching in the appropriate context for the next task:

These are the same three tasks that were shown previously, except that instead of a theoretical conceptualization, each iteration through the tasks' loops is enumerated over time. Because the round-robin scheduler assigns equal time slices to each task, the task with the shortest loop (Task 1) has executed nearly six iterations, whereas the task with the longest loop (Task 2) has only made it through its first iteration. Task 3 has executed three iterations of its loop.

An extremely important distinction between a super loop executing the same functions and a round-robin scheduler executing them as tasks is this: Task 3 completed its moderately tight loop before Task 2 did. When the super loop was running functions in a serial fashion, Function 3 wouldn't even have started until Function 2 had run to completion. So, while the scheduler isn't providing us with true parallelism, each task is getting its fair share of CPU cycles, and a task with a shorter loop will complete iterations more often than a task with a longer loop.
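This behavior depends on the tasks being created at the same priority, so the scheduler time-slices between them. The following is a minimal sketch of how three such tasks might be set up in FreeRTOS; the task names and loop bodies are hypothetical stand-ins, since the original loops aren't shown here:

```c
/* Minimal sketch: three equal-priority FreeRTOS tasks that the
 * round-robin scheduler will time-slice between. Task bodies are
 * placeholders for the loops discussed above. */
#include "FreeRTOS.h"
#include "task.h"

static void task1(void *pvParameters)   /* shortest loop */
{
    (void)pvParameters;
    for (;;) {
        /* short unit of work */
    }
}

static void task2(void *pvParameters)   /* longest loop */
{
    (void)pvParameters;
    for (;;) {
        /* long unit of work */
    }
}

static void task3(void *pvParameters)   /* moderate loop */
{
    (void)pvParameters;
    for (;;) {
        /* moderate unit of work */
    }
}

int main(void)
{
    /* Equal priorities: the scheduler gives each ready task one
     * tick's worth of CPU time in turn. */
    xTaskCreate(task1, "Task1", configMINIMAL_STACK_SIZE, NULL, tskIDLE_PRIORITY + 1, NULL);
    xTaskCreate(task2, "Task2", configMINIMAL_STACK_SIZE, NULL, tskIDLE_PRIORITY + 1, NULL);
    xTaskCreate(task3, "Task3", configMINIMAL_STACK_SIZE, NULL, tskIDLE_PRIORITY + 1, NULL);

    vTaskStartScheduler();  /* does not return if the kernel starts */
    for (;;);               /* only reached if there was insufficient heap */
}
```

Because all three tasks share a priority and never block, the scheduler simply rotates through them one time slice at a time, producing the interleaved execution described above.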

All of this switching does come at a (slight) cost: the scheduler needs to run every time there is a context switch. In this example, the tasks are not explicitly calling the scheduler. In the case of FreeRTOS running on an ARM Cortex-M, the scheduler will be called from the SysTick interrupt (more details can be found in Chapter 7, The FreeRTOS Scheduler). A considerable amount of effort goes into making the scheduler kernel extremely efficient so that it takes as little time as possible to run, but the fact remains that it will run periodically and consume CPU cycles. On most systems, this small overhead is neither noticeable nor significant, but it can become an issue. For example, if a design is on the extreme edge of feasibility, with extremely tight timing requirements and very few spare CPU cycles, the added overhead may not be desirable (or the scheduler may not be necessary at all) if the super loop/interrupt approach has been carefully characterized and optimized. However, it is best to avoid this type of situation wherever possible: on even a moderately complex system, it is extremely easy to overlook a combination of stacked-up interrupts (or nested conditionals that occasionally take longer) that causes the system to miss a deadline.
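As a concrete illustration, this tick-driven, time-sliced behavior is controlled by a few settings in FreeRTOSConfig.h. The following fragment shows the relevant options; the tick rate chosen here is an illustrative assumption (a common default), not a requirement:

```c
/* Illustrative FreeRTOSConfig.h fragment: the settings that produce
 * round-robin behavior among equal-priority tasks. The tick interrupt
 * (SysTick on ARM Cortex-M) fires at configTICK_RATE_HZ and gives the
 * scheduler its chance to switch tasks, which is where the per-tick
 * overhead described above comes from. */
#define configUSE_PREEMPTION    1     /* the tick may preempt the running task  */
#define configUSE_TIME_SLICING  1     /* rotate equal-priority tasks each tick  */
#define configTICK_RATE_HZ      1000  /* 1 ms time slice; a common choice       */
```

Raising configTICK_RATE_HZ gives finer-grained slicing, but at the cost of running the scheduler (and paying the context-switch overhead) more often; this is exactly the trade-off discussed above.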
