Application code

The role of the application code is to coordinate, from the highest layer of the project design, all the modules involved, and to orchestrate the heuristics of the system. A clean, well-designed main module allows us to keep a clear view of all the macroscopic blocks of the system, how they relate to each other, and the timing of execution of the various components.

Bare-metal applications are built around a main endless loop function, which is in charge of distributing the CPU time among the entry points of the underlying libraries and drivers. The execution happens sequentially, so the code cannot be suspended, except by interrupt handlers. For this reason, all the functions and library calls invoked from the main loop are supposed to return as fast as possible, because stall points hidden inside other modules may compromise the reactivity of the system, or even block forever, with the risk of never returning to the main loop. Ideally, in a bare-metal system, every component is designed to interact with the main loop using the event-driven paradigm, with a main loop constantly waiting for events, and mechanisms to register callbacks to wake up the application on specific events.
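As an illustration, the following minimal sketch shows this pattern on a generic Cortex-M target. The driver entry points (clock_init, gpio_init, uart_init), the interrupt handler name, and the callbacks are hypothetical placeholders for whatever the lower layers actually provide:

#include <stdbool.h>

/* Hypothetical initialization entry points exported by lower-level drivers */
extern void clock_init(void);
extern void gpio_init(void);
extern void uart_init(void);

/* Event flags: set in interrupt context, polled from the main loop */
static volatile bool button_pressed = false;
static volatile bool uart_rx_ready = false;

/* Callbacks invoked by the main loop when the corresponding event is flagged */
static void on_button_event(void) { /* e.g., toggle an LED */ }
static void on_uart_rx_event(void) { /* e.g., parse the received command */ }

/* Device-specific interrupt handler (the name depends on the vector table);
 * a UART interrupt handler would set uart_rx_ready in the same way */
void EXTI0_IRQHandler(void)
{
    /* clear the peripheral's pending flag here, then signal the event */
    button_pressed = true;
}

int main(void)
{
    clock_init();
    gpio_init();
    uart_init();
    while (1) {                    /* endless main loop */
        if (button_pressed) {
            button_pressed = false;
            on_button_event();     /* must return quickly */
        }
        if (uart_rx_ready) {
            uart_rx_ready = false;
            on_uart_rx_event();
        }
        /* optional: sleep until the next interrupt; a real design must take
         * care of events raised between the checks above and this point */
        __asm__ volatile ("wfi");
    }
}

Because every callback returns promptly, the loop spends most of its time waiting, waking only when an interrupt flags new work to dispatch.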

The advantage of the bare-metal, single-thread approach is that synchronization among threads is not needed, all the memory is accessible by any function in the code, and it is not necessary to implement complex mechanisms, such as context and execution model switches.

If multiple tasks are meant to run on top of an operating system, each task should be confined as much as possible within its own module, and explicitly export its start function and public variables as global symbols. In this case, tasks can sleep and call blocking functions, which should implement the OS-specific blocking mechanisms. Thanks to the flexibility of the Cortex-M CPU, there are different degrees of thread and process separation that can be activated on the system. The CPU offers multiple tools to facilitate the development of multithreading systems with separation among tasks: multiple execution modes, kernel-specific registers, privilege separation, and memory-segmentation techniques. These options allow architects to define both complex systems, more oriented toward general-purpose applications, which offer privilege separation and memory segmentation among processes, and smaller, simpler, more straightforward systems, which do not need these features because they are generally designed for a single purpose.
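For example, assuming a FreeRTOS-like kernel, a task module might look like the following sketch; the module name, the sensor_read_blocking() driver call, and the shared variable are hypothetical:

/* sensor_task.h: the only symbols the module exports */
void sensor_task_start(void);
extern volatile int sensor_last_value;

/* sensor_task.c: everything else remains private to the module */
#include "FreeRTOS.h"
#include "task.h"

volatile int sensor_last_value;

/* Hypothetical blocking driver call provided by a lower layer */
extern int sensor_read_blocking(void);

static void sensor_task(void *arg)
{
    (void)arg;
    for (;;) {
        /* Blocking here is allowed: the scheduler suspends this task
         * and runs the others until new data or the delay expires. */
        sensor_last_value = sensor_read_blocking();
        vTaskDelay(pdMS_TO_TICKS(100));
    }
}

void sensor_task_start(void)
{
    xTaskCreate(sensor_task, "sensor", configMINIMAL_STACK_SIZE + 128,
                NULL, tskIDLE_PRIORITY + 1, NULL);
}

The rest of the system only sees sensor_task_start() and the exported variable, which keeps the task confined to its own module while the kernel handles the blocking and scheduling details.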

Selecting an execution model that is based on non-privileged threads results in a much more complex implementation of the context switches in the system, and may impact the latency of the real-time operations, which is why bare-metal, single-threaded solutions are still preferred for most real-time applications.
