Parallel Tasks and Scheduling

As the complexity of the system increases and the software has to manage multiple peripherals and events at the same time, it becomes more convenient to rely on an operating system to coordinate and synchronize all the different operations. Separating the application logic into different threads offers a few important architectural advantages: each component performs its designed operation within its own execution unit, and it can release the CPU while it is suspended, waiting for input or for a timeout event.

In this chapter, the mechanisms used to implement a multithreaded embedded operating system are examined through the development of a minimalistic operating system, tailored to the reference platform and written step by step from scratch, providing a working scheduler to run multiple tasks in parallel.

The scheduler's internals are mostly implemented within system service calls, and its design impacts system performance and features such as support for different priority levels and for the time constraints of real-time tasks. Some of the possible scheduling policies, for different contexts, are explained and implemented in the example code.

Running multiple threads in parallel implies that resources can be shared, and that concurrent access to the same memory is possible. Most microprocessors designed to run multithreaded systems provide primitive functions, accessible through specific assembly instructions, for implementing locking mechanisms such as semaphores. The example operating system exposes mutex and semaphore primitives that threads can use to regulate access to shared resources.

By introducing memory protection mechanisms, it is possible to separate the resources based on their addresses and let the kernel supervise all the operations involving the hardware through a system call interface. Most real-time embedded operating systems prefer a flat model with no segmentation, to keep the kernel code as small as possible, with a minimal API, and to optimize the resources available for the applications. The example kernel will show us how to create a system call API to centralize the control of the resources, using physical memory segmentation to protect the resources of the kernel, the system control block, the mapped peripherals, and the other tasks, with the purpose of increasing the level of safety of the system.

This chapter is split into the following sections:

  • Task management
  • Scheduler implementation
  • Synchronization
  • System resource separation