Chapter 7. Real-Time Framework Implementation

Let us change our traditional attitude to the construction of programs. Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.

— Donald E. Knuth

In this chapter I describe the implementation of a lightweight real-time framework called QF. As shown in Figure 7.1, QF is the central component of the QP event-driven platform, which also includes the QEP hierarchical event processor (described in Part I of this book) as well as the preemptive run-to-completion (RTC) kernel (QK) and the software tracing instrumentation (QS).


Figure 7.1. QP Components (in gray) and their relationship to the target hardware, board support package (BSP), and the application.

The focus of this chapter is on the generic, platform-independent QF source code. I devote Chapter 8 entirely to describing the platform-specific code that depends on the specific processor, compiler, and operating system/RTOS (including the case where QF is used without an RTOS).

I describe QF in top-down fashion, beginning with an overview of the QF features, then presenting the framework structure, both logical (partitioning into classes) and physical (partitioning into files). In the remaining bulk of the chapter I explain the implementation of the QF services. As usual, I mostly refer to the C source code (located in the <qp>\qpc directory in the accompanying code). I mention the C++ version (located in the <qp>\qpcpp directory) only when the differences from C become important.

Key Features of the QF Real-Time Framework

QF is a generic, portable, scalable, lightweight, real-time framework designed specifically for the domain of real-time embedded systems (RTES). QF can manage up to 63 concurrently executing active objects, which are encapsulated tasks (each embedding a state machine and an event queue) that communicate with one another asynchronously by sending and receiving events. (This does not mean that your application is limited to 63 state machines. Each active object can manage an unlimited number of stateful components, as described in the “Orthogonal Component” state pattern in Chapter 5.)

The “embedded” design mindset means that QF is efficient in both time and space. Moreover, QF uses only deterministic algorithms, so you can always determine the upper bound on the execution time, the maximum interrupt disabling time, and the required stack space of any given QF service. QF also does not call any external code, not even the standard C or C++ libraries. In particular, QF does not use the standard heap (malloc() or the C++ operator new). Instead, the framework leaves to the clients the instantiation of any framework-derived objects and the initialization of the framework with the memory it needs for operation. All this memory could be allocated statically in hard real-time applications, but you could also use the standard heap or any combination of memory allocation mechanisms in other types of applications.

Source Code

The companion Website to this book at www.quantum-leaps.com/psicc2/ contains the complete source code for all QP components, including QF. I hope that you will find the source code very clean and consistent. The code has been written in strict adherence to the coding standard documented at www.quantum-leaps.com/doc/AN_QL_Coding_Standard.pdf.

All QP source code is “lint-free.” The compliance was checked with the PC-lint/FlexeLint static analysis tool from Gimpel Software (www.gimpel.com). The QP distribution includes the <qp>\qpc\ports\lint subdirectory, which contains the batch script make.bat for compiling all the QP components with PC-lint.

The QP source code is also 98 percent compliant with the Motor Industry Software Reliability Association (MISRA) Guidelines for the Use of the C Language in Vehicle-Based Software [MISRA 98]. MISRA created these standards to improve the reliability and predictability of C programs in critical automotive systems. Full details of this standard can be obtained directly from the MISRA Website at www.misra.org.uk. The PC-lint configuration used to analyze QP code includes the MISRA rule checker.

Finally and most important, I believe that simply giving you the source code is not enough. To gain real confidence in event-driven programming, you need to understand how a real-time framework is ultimately implemented and how the different pieces fit together. This book, and especially this chapter, provides this kind of information.

Portability

All QF source code is written in portable ANSI-C, or in the Embedded C++ subset (defined online at www.caravan.net/ec2plus/) in the case of QF/C++, with all processor-specific, compiler-specific, or operating system-specific code abstracted into a clearly defined platform abstraction layer (PAL).

In the simplest standalone configurations, QF runs on a “bare-metal” target CPU, completely replacing the traditional RTOS. As shown in Figure 7.1, the QP event-driven platform includes the simple nonpreemptive “vanilla” scheduler as well as the fully preemptive kernel QK. To date, the standalone QF configurations have been ported to over 10 different CPU architectures, ranging from 8-bit (e.g., 8051, PIC, AVR, 68HC(S)08), through 16-bit (e.g., MSP430, M16C, x86-real mode), to 32-bit architectures (e.g., traditional ARM, ARM Cortex-M3, ColdFire, Altera Nios II, x86).

The QF framework can also work with a traditional OS/RTOS to take advantage of the existing device drivers, communication stacks, middleware, or any legacy code that requires a conventional “blocking” kernel. To date, QF has been ported to six major operating systems and RTOSs, including Linux (POSIX) and Win32.

As you'll see in the course of this chapter, high portability is the main challenge in writing a widely usable real-time framework like QF. Coming up with an efficient PAL that correctly captures all possible platform variances required many iterations and actual ports of the framework to several CPUs, operating systems, and compilers. (I describe porting the QF framework in Chapter 8.) The www.quantum-leaps.com Website contains a steadily growing number of QF ports, examples, and documentation.

Scalability

All components of the QP event-driven platform, especially the QF real-time framework, are designed for scalability so that your final application image contains only the services that you actually use. QF is designed for deployment as a fine-granularity object library that you statically link with your applications. This strategy puts the onus on the linker to eliminate any unused code automatically at link time instead of burdening the application programmer with configuring the QF code for each application at compile time.

As shown in Figure 7.2, a minimal QP/C or QP/C++ system requires some 8KB of code space (ROM) and about 1KB of data space (RAM), which leaves enough room for meaningful application code and data. This code size corresponds to the footprint of a typical, small, bare-bones RTOS application, except that the RTOS approach typically requires more RAM for the stacks.

Note

A typical, standalone QP configuration with QEP, QF, and the “vanilla” scheduler or the QK preemptive kernel, with all major features enabled, requires around 2-4KB of code. Obviously you need to budget additional ROM and RAM for your own application code and data. Figure 7.2 shows the application footprint.


Figure 7.2. RAM/ROM footprints of QP, QP-nano, and other RTOS/OS. The chart shows approximate total system size as opposed to just the RTOS/OS footprints. Note the logarithmic axes.

However, the event-driven approach scales down even further, beyond the reach of any conventional RTOS. To address still smaller systems, a reduced QP version called QP-nano implements a subset of features provided in QP/C or QP/C++. QP-nano has been specifically designed to enable active object computing with hierarchical state machines on low-end 8- and 16-bit embedded MCUs. As shown in Figure 7.2, a meaningful QP-nano application starts from about 100 bytes of RAM and 2KB of ROM. I describe QP-nano in Chapter 11.

On the opposite end of the complexity spectrum, QP applications can also scale up to very big systems with gigabytes of RAM and multiple or multicore CPUs. Large-scale applications, such as various servers, often have large numbers of stateful components to manage, so the efficiency per component becomes critical. It turns out that the lightweight, event-driven, state machine-based approach scales up easily and offers many benefits over the traditional thread-per-component paradigm.

Support for Modern State Machines

As shown in Figure 7.1, the QF real-time framework is designed to work closely with the QEP hierarchical event processor (Chapter 4). The two components complement each other in that QEP provides the UML-compliant state machine implementation, whereas QF provides the infrastructure of executing such state machines concurrently.

The design of QF leaves a lot of flexibility, however. You can configure the base class for derivation of active objects to be either the QHsm hierarchical state machine (Section 4.5 in Chapter 4), the simpler QFsm nonhierarchical state machine (Section 3.6 in Chapter 3), or even your own base class not defined in the QEP event processor. The latter option allows you to use QF with your own event processor.

Direct Event Posting and Publish-Subscribe Event Delivery

The QF real-time framework supports direct event posting to specific active objects with first-in, first-out (FIFO) and last-in, first-out (LIFO) policies. QF also supports the more advanced publish-subscribe event delivery mechanism, as described in Section 6.4 in Chapter 6. Both mechanisms can coexist in a single application.

Zero-Copy Event Memory Management

Perhaps the most valuable feature provided by the QF real-time framework is the efficient “zero-copy” event memory management, as described in Section 6.5 in Chapter 6. QF supports event multicasting based on the reference-counting algorithm, automatic garbage collection for events, efficient static events, “zero-copy” event deferral, and up to three event pools with different block sizes for optimal memory utilization.

Open-Ended Number of Time Events

QF can manage an open-ended number of time events (timers). QF time events are extensible via structure derivation (inheritance in C++). Each time event can be armed as a one-shot or a periodic timeout generator. Only armed (active) time events consume CPU cycles.

Native Event Queues

QF provides two versions of native event queues. The first version is optimized for active objects and contains a portability layer to adapt it for either blocking kernels, the simple cooperative “vanilla” kernel (Section 6.3.7), or the QK preemptive kernel (Section 6.3.8 in Chapter 6). The second native queue version is a simple “thread-safe” queue not capable of blocking and designed for sending events to interrupts as well as storing deferred events. Both native QF event queue types are lightweight, efficient, deterministic, and thread-safe. They are optimized for passing just the pointers to events and are probably smaller and faster than full-blown message queues available in a typical RTOS.

Native Memory Pool

QF provides a fast, deterministic, and thread-safe memory pool. Internally, QF uses memory pools as event pools for managing dynamic events, but you can also use memory pools for allocating any other objects in your application.

Built-in “Vanilla” Scheduler

The QF real-time framework contains a portable, cooperative “vanilla” kernel, as described in Section 6.3.7 of Chapter 6. Chapter 8 presents the QF port to the “vanilla” kernel.

Tight Integration with the QK Preemptive Kernel

The QF real-time framework can also work with a deterministic, preemptive, nonblocking QK kernel. As described in Section 6.3.8 in Chapter 6, run-to-completion kernels, like QK, provide preemptive multitasking to event-driven systems at a fraction of the cost in CPU and stack usage compared to traditional blocking kernels/RTOSs. I describe QK implementation in Chapter 10.

Low-Power Architecture

Most modern embedded microcontrollers (MCUs) provide an assortment of low-power sleep modes designed to conserve power by gating the clock to the CPU and various peripherals. The sleep modes are entered under software control and exited upon an external interrupt.

The event-driven paradigm is particularly suitable for taking advantage of these power-savings features because every event-driven system can easily detect situations in which the system has no more events to process, called the idle condition (Section 6.3.7). In both standalone QF configurations, either with the cooperative “vanilla” kernel or with the QK preemptive kernel, the QF framework provides callback functions for handling the idle condition. These callbacks are carefully designed to place the MCU into a low-power sleep mode safely and without creating race conditions with active interrupts.

Assertion-Based Error Handling

The QF real-time framework consistently uses the Design by Contract (DbC) philosophy described in Section 6.7 in Chapter 6. QF constantly monitors the application by means of assertions built into the framework. Among others, QF uses assertions to enforce the event delivery guarantee, which immensely simplifies event-driven application design.

Built-in Software Tracing Instrumentation

As described in Section 6.8 in Chapter 6, a real-time framework can use software-tracing techniques to provide more comprehensive and detailed information about the running application than any traditional RTOS. The QF code contains the software-tracing instrumentation so it can provide unprecedented visibility into the system. Nominally the instrumentation is inactive, meaning that it does not add any code size or runtime overhead. But by defining the macro Q_SPY, you can activate the instrumentation. I devote all of Chapter 11 to software tracing.

Note

The QF code is instrumented with QS (Q-Spy) macros to generate software trace output from active object execution. However, the instrumentation is disabled by default and for better clarity will not be shown in the listings discussed in this chapter. Refer to Chapter 11 for more information about the QS software-tracing implementation.

QF Structure

Figure 7.3 shows the main QF classes and their relation to the application-level code, such as the “Fly ‘n’ Shoot” game example from Chapter 1. Like all real-time frameworks, QF provides the central base class QActive for derivation of active object classes. The QActive class is abstract, which means that it is not intended for direct instantiation but rather only for derivation of concrete active object classes, such as Ship, Missile, and Tunnel shown in Figure 7.3. (Concrete class is the OOP term denoting a class that has no abstract operations or protected constructors and therefore can be instantiated, as opposed to an abstract class, which cannot.)


Figure 7.3. QEP event processor, QF real-time framework, and the “Fly ‘n’ Shoot” application.

By default, the QActive class derives from the QHsm hierarchical state machine class defined in the QEP event processor (Chapter 4). This means that by virtue of inheritance active objects are HSMs and inherit the init() and dispatch() state machine interface. QActive also contains a thread of execution and an event queue, which can be native QF classes or can come from the underlying RTOS.

QF uses the same QEvent class for representing events as the QEP event processor. Additionally, the framework supplies the time event class QTimeEvt, with which the applications make timeout requests.

QF also provides several services to applications that are not shown in the class diagram in Figure 7.3. These additional QF services include generating new dynamic events (Q_NEW()), publishing events (QF_publish()), the native QF event queue class (QEQueue), the native QF memory pool class (QMPool), and the built-in cooperative “vanilla” kernel (see Chapter 6, Section 6.3.7).

QF Source Code Organization

Listing 7.1 shows the platform-independent directories and files comprising the QF real-time framework in C. The structure of the C++ version is almost identical except that the implementation files have the .cpp extension.

Listing 7.1. Platform-independent QF source code organization

qpc                   - QP/C root directory (qpcpp for QP/C++)
|
+-doxygen             - QP/C documentation generated with Doxygen
| +-html              - “QP/C Reference Manual” in HTML format
| | +-index.html      - The starting HTML page for the “QP/C Reference Manual”
| | +- . . .
| +-Doxyfile          - Doxygen configuration file to generate the Manual
| +-qpc.chm           - “QP/C Reference Manual” in CHM Help format
| +-qpc_rev.h         - QP/C revision history
|
+-include             - QP platform-independent header files
| +-qf.h              - QF platform-independent interface
| +-qequeue.h         - QF native event queue facility
| +-qmpool.h          - QF native memory pool facility
| +-qpset.h           - QF native priority set facility
| +-qvanilla.h        - QF native “vanilla” cooperative kernel interface
|
+-qf                  - QF real-time framework
| +-source            - QF platform-independent source code (*.c files)
| | +-qf_pkg.h        - internal interface for the QF implementation
| | +-qa_defer.c      - definition of QActive_defer()/QActive_recall()
| | +-qa_ctor.c       - definition of QActive_ctor()
| | +-qa_fifo.c       - definition of QActive_postFIFO()
| | +-qa_fifo_.c      - definition of QActive_postFIFO_()
| | +-qa_get_.c       - definition of QActive_get_()
| | +-qa_lifo.c       - definition of QActive_postLIFO()
| | +-qa_lifo_.c      - definition of QActive_postLIFO_()
| | +-qa_sub.c        - definition of QActive_subscribe()
| | +-qa_usub.c       - definition of QActive_unsubscribe()
| | +-qa_usuba.c      - definition of QActive_unsubscribeAll()
| | +-qeq_fifo.c      - definition of QEQueue_postFIFO()
| | +-qeq_get.c       - definition of QEQueue_get()
| | +-qeq_init.c      - definition of QEQueue_init()
| | +-qeq_lifo.c      - definition of QEQueue_postLIFO()
| | +-qf_act.c        - definition of QF_active_[]
| | +-qf_gc.c         - definition of QF_gc_()
| | +-qf_log2.c       - definition of QF_log2Lkup[]
| | +-qf_new.c        - definition of QF_new_()
| | +-qf_pool.c       - definition of QF_poolInit()
| | +-qf_psini.c      - definition of QF_psInit()
| | +-qf_pspub.c      - definition of QF_publish()
| | +-qf_pwr2.c       - definition of QF_pwr2Lkup_[]
| | +-qf_tick.c       - definition of QF_tick()
| | +-qmp_get.c       - definition of QMPool_get()
| | +-qmp_init.c      - definition of QMPool_init()
| | +-qmp_put.c       - definition of QMPool_put()
| | +-qte_arm.c       - definition of QTimeEvt_arm_()
| | +-qte_ctor.c      - definition of QTimeEvt_ctor()
| | +-qte_darm.c      - definition of QTimeEvt_disarm()
| | +-qte_rarm.c      - definition of QTimeEvt_rearm()
| | +-qvanilla.c      - “vanilla” cooperative kernel implementation
| |
| +-lint              - QF options for lint
| | +-opt_qf.lnt      - PC-lint options for linting QF
|
+-ports               - Platform-specific QP ports
| +- . . .
+-examples            - Platform-specific QP examples
| +- . . .

The QF source files typically contain just one function or one data structure definition per file. This design aims at deploying QF as a fine-granularity library that you statically link with your applications. Fine granularity means that the QF library consists of several small, loosely coupled modules (object files) rather than a single module that contains all the functionality. For example, a separate module qa_lifo.c implements the QActive_postLIFO() function; therefore, if your application never calls this function, the linker will not pull in the qa_lifo.obj module. This strategy puts the burden on the linker to do the “heavy lifting” of automatically eliminating any unused code at link time, rather than on the application programmer to configure the QF code for each application at compile time.

Critical Sections in QF

QF, just like any other system-level software, must protect certain sequences of instructions against preemptions to guarantee thread-safe operation. The sections of code that must be executed indivisibly are called critical sections.

In an embedded system environment, QF uses the simplest and most efficient way to protect a section of code from disruptions, which is to lock interrupts on entry to the critical section and unlock interrupts at the exit from the critical section. In systems where locking interrupts is not allowed, QF can employ other mechanisms supported by the underlying operating system, such as a mutex.

Note

The maximum time spent in a critical section directly affects the system's responsiveness to external events (interrupt latency). All QF critical sections are carefully designed to be as short as possible and are of the same order as critical sections in any commercial RTOS. Of course, the length of critical sections depends on the processor architecture and the quality of the code generated by the compiler.

To hide the actual critical section implementation method available for a particular processor, compiler, and operating system, the QF platform abstraction layer includes two macros, QF_INT_LOCK() and QF_INT_UNLOCK(), to lock and unlock interrupts, respectively.

Saving and Restoring the Interrupt Status

The most general critical section implementation involves saving the interrupt status before entering the critical section and restoring the status upon exit from the critical section. Listing 7.2 illustrates the use of this critical section type.

Listing 7.2. Example of the “saving and restoring interrupt status” policy

    {
(1)     unsigned int lock_key;
        . . .
(2)     lock_key = get_int_status();
(3)     int_lock();
        . . .
(4)     /* critical section of code */
        . . .
(5)     set_int_status(lock_key);
        . . .
    }

  • (1) The temporary variable lock_key holds the interrupt status across the critical section.

  • (2) Right before entering the critical section, the current interrupt status is obtained from the CPU and saved in the lock_key variable. Of course, the name of the actual function to obtain the interrupt status can be different in your system. This function could actually be a macro or inline assembly statement.

  • (3) Interrupts are locked using the mechanism provided by the compiler.

  • (4) This section of code executes indivisibly because it cannot be interrupted.

  • (5) The original interrupt status is restored from the lock_key variable. This step unlocks interrupts only if they were unlocked at step 2. Otherwise, interrupts remain locked.

Listing 7.3 shows example QF macro definitions for the “saving and restoring interrupt status” policy.

Listing 7.3. QF macro definitions for the “saving and restoring interrupt status” policy

(1) #define QF_INT_KEY_TYPE     unsigned int
(2) #define QF_INT_LOCK(key_)   do { \
        (key_) = get_int_status(); \
        int_lock(); \
    } while (0)
(3) #define QF_INT_UNLOCK(key_) set_int_status(key_)

  • (1) The macro QF_INT_KEY_TYPE denotes a data type of the “interrupt key” variable, which holds the interrupt status. Defining this macro in the qf_port.h header file indicates to the QF framework that the policy of “saving and restoring interrupt status” is used, as opposed to the policy of “unconditional locking and unlocking interrupts” described in the next section.

  • (2) The macro QF_INT_LOCK() encapsulates the mechanism of interrupt locking. The macro takes the parameter key_, into which it saves the interrupt lock status.

Note

The do {…} while (0) loop around the QF_INT_LOCK() macro is the standard practice for syntactically correct grouping of instructions. You should convince yourself that the macro can be used safely inside the if-else statement (with the semicolon after the macro) without causing the “dangling-else” problem. I use this technique extensively in many QF macros.

The main advantage of the “saving and restoring interrupt status” policy is the ability to nest critical sections. The QF real-time framework is carefully designed to never nest critical sections internally. However, nesting of critical sections can easily occur when QF functions are invoked from within an already established critical section, such as an interrupt service routine (ISR). Most processors lock interrupts in hardware upon the interrupt entry and unlock upon the interrupt exit, so the whole ISR is a critical section. Sometimes you can unlock interrupts inside ISRs, but often you cannot. In the latter case, you have no choice but to invoke QF services, such as event posting or publishing, with interrupts locked. This is exactly when you must use this type of critical section.

Unconditional Locking and Unlocking Interrupts

The simpler and faster critical section policy is to always unconditionally unlock interrupts in QF_INT_UNLOCK(). Listing 7.4 provides an example of the QF macro definitions to specify this type of critical section.

Listing 7.4. QF macro definitions for the “unconditional interrupt locking and unlocking” policy

(1) /* QF_INT_KEY_TYPE not defined */
(2) #define QF_INT_LOCK(key_)     int_lock()
(3) #define QF_INT_UNLOCK(key_)   int_unlock()

  • (1) The macro QF_INT_KEY_TYPE is not defined in this case. The absence of the QF_INT_KEY_TYPE macro indicates to the QF framework that the interrupt status is not saved across the critical section.

  • (2) The macro QF_INT_LOCK() encapsulates the mechanism of interrupt locking. The macro takes the parameter key_, but this parameter is not used in this case.

  • (3) The macro QF_INT_UNLOCK() encapsulates the mechanism of unlocking interrupts. The macro always unconditionally unlocks interrupts. The parameter key_ is ignored in this case.

The policy of “unconditional locking and unlocking interrupts” is simple and fast, but it does not allow nesting of critical sections, because interrupts are always unlocked upon exit from a critical section, regardless of whether interrupts were already locked on entry.

The inability to nest critical sections does not necessarily mean that you cannot nest interrupts. Many processors are equipped with a prioritized interrupt controller, such as the Intel 8259A Programmable Interrupt Controller (PIC) in the PC or the Nested Vectored Interrupt Controller (NVIC) integrated inside the ARM Cortex-M3. Such interrupt controllers handle interrupt prioritization and nesting before the interrupts reach the processor core. Therefore, you can safely unlock interrupts at the processor level, thus avoiding nesting of critical sections inside ISRs. Listing 7.5 shows the general structure of an ISR in the presence of an interrupt controller.

Listing 7.5. General structure of an ISR in the presence of a prioritized interrupt controller

(1) void interrupt ISR(void) {  /* entered with interrupts locked in hardware */
(2)     Acknowledge the interrupt to the interrupt controller (optional)
(3)     Clear the interrupt source, if level triggered
(4)     QF_INT_UNLOCK(dummy);   /* unlock the interrupts at the processor level */
(5)     Handle the interrupt, use QF calls, e.g., QF_tick(), Q_NEW() or QF_publish()
(6)     QF_INT_LOCK(dummy);     /* lock the interrupts at the processor level */
(7)     Write the End-Of-Interrupt (EOI) instruction to the interrupt controller
(8) }

  • (1) Most processors enter the ISR with interrupts locked in hardware.

  • (2) The interrupt controller must be notified about entering the interrupt. Often this notification happens automatically in hardware before vectoring (jumping) to the ISR. However, sometimes the interrupt controller requires a specific notification from the software. Check your processor's datasheet.

  • (3) You need to explicitly clear the interrupt source, if it is level triggered. Typically you do it before unlocking interrupts at the CPU level, but a prioritized interrupt controller will prevent the same interrupt from preempting itself, so it really does not matter if you clear the source before or after unlocking interrupts.

  • (4) Interrupts are explicitly unlocked at the CPU level, which is the key step of this ISR. Enabling interrupts allows the interrupt controller to do its job, that is, to prioritize interrupts. At the same time, enabling interrupts terminates the critical section established upon the interrupt entry. Note that this step is only necessary when the hardware actually locks interrupts upon the interrupt entry (e.g., the ARM Cortex-M3 leaves interrupts unlocked).

  • (5) The main ISR body executes outside the critical section, so QF services can be safely invoked without nesting critical sections.

Note

The prioritized interrupt controller remembers the priority of the currently serviced interrupt and allows only interrupts of higher priority than the current priority to preempt the ISR. Lower- and same-priority interrupts are locked at the interrupt controller level, even though the interrupts are unlocked at the processor level. The interrupt prioritization happens in the interrupt controller hardware until the interrupt controller receives the end-of-interrupt (EOI) instruction.

Internal QF Macros for Interrupt Locking/Unlocking

The QF platform abstraction layer (PAL) uses the interrupt locking/unlocking macros QF_INT_LOCK(), QF_INT_UNLOCK(), and QF_INT_KEY_TYPE in a slightly modified form. The PAL internally defines the parameterless macros shown in Listing 7.6. Please note the trailing underscores in the internal macros’ names.

Listing 7.6. Internal macros for interrupt locking/unlocking (file <qp>\qpc\qf\source\qf_pkg.h)

#ifndef QF_INT_KEY_TYPE     /* simple unconditional interrupt locking/unlocking */
    #define QF_INT_LOCK_KEY_
    #define QF_INT_LOCK_()      QF_INT_LOCK(ignore)
    #define QF_INT_UNLOCK_()    QF_INT_UNLOCK(ignore)
#else                       /* policy of saving and restoring interrupt status */
    #define QF_INT_LOCK_KEY_    QF_INT_KEY_TYPE intLockKey_;
    #define QF_INT_LOCK_()      QF_INT_LOCK(intLockKey_)
    #define QF_INT_UNLOCK_()    QF_INT_UNLOCK(intLockKey_)
#endif

The internal macros QF_INT_LOCK_KEY_, QF_INT_LOCK_(), and QF_INT_UNLOCK_() enable writing the same code for the case when the interrupt key is defined and when it is not. The following code snippet shows the usage of the internal QF macros. Convince yourself that this code works correctly for both interrupt-locking policies.

void QF_service_xyz(arguments) {
    QF_INT_LOCK_KEY_
    . . .
    QF_INT_LOCK_();
    . . .
    /* critical section of code */
    . . .
    QF_INT_UNLOCK_();
}
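To convince yourself, here is a compilable model of the “saving and restoring” policy (the simulated interrupt flag and the port-level macro definitions below are assumptions made for this sketch only):

```c
#include <assert.h>

/* Simulated processor interrupt-enable flag (an assumption of this sketch) */
static int int_enabled = 1;

/* Hypothetical port-level macros implementing the "save and restore" policy */
#define QF_INT_KEY_TYPE      int
#define QF_INT_LOCK(key_)    ((key_) = int_enabled, int_enabled = 0)
#define QF_INT_UNLOCK(key_)  (int_enabled = (key_))

/* Internal QF macros, exactly as in the second branch of Listing 7.6 */
#define QF_INT_LOCK_KEY_     QF_INT_KEY_TYPE intLockKey_;
#define QF_INT_LOCK_()       QF_INT_LOCK(intLockKey_)
#define QF_INT_UNLOCK_()     QF_INT_UNLOCK(intLockKey_)

static int was_locked_inside; /* records the interrupt state in the critical section */

void QF_service_xyz(void) {
    QF_INT_LOCK_KEY_                    /* allocates the key on the stack */
    QF_INT_LOCK_();
    was_locked_inside = !int_enabled;   /* critical section of code */
    QF_INT_UNLOCK_();                   /* restores the saved interrupt status */
}
```

The same source also compiles under the unconditional policy, because there QF_INT_LOCK_KEY_ expands to nothing and the locking macros ignore the key.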

Active Objects

As shown in Figure 7.3, the QF real-time framework provides the base structure QActive for deriving application-specific active objects. QActive combines the following three essential elements:

  • • It is a state machine (derives from QHsm or some other class with a compatible interface).

  • • It has an event queue.

  • • It has an execution thread with a unique priority.

Listing 7.7 shows the declaration of the QActive base structure and related functions.

Listing 7.7. The QActive base class for derivation of active objects (file <qp>\qpc\include\qf.h)

(1) #ifndef QF_ACTIVE_SUPER_
(2)     #define QF_ACTIVE_SUPER_                QHsm
(3)     #define QF_ACTIVE_CTOR_(me_, initial_)  QHsm_ctor((me_), (initial_))
(4)     #define QF_ACTIVE_INIT_(me_, e_)        QHsm_init((me_), (e_))
(5)     #define QF_ACTIVE_DISPATCH_(me_, e_)    QHsm_dispatch((me_), (e_))
(6)     #define QF_ACTIVE_STATE_                QState
    #endif

    typedef struct QActiveTag {
(7)     QF_ACTIVE_SUPER_ super;            /* derives from QF_ACTIVE_SUPER_ */
(8)     QF_EQUEUE_TYPE eQueue;              /* event queue of active object */
    #ifdef QF_OS_OBJECT_TYPE
(9)     QF_OS_OBJECT_TYPE osObject;      /* OS-object for blocking the queue */
    #endif
    #ifdef QF_THREAD_TYPE
(10)    QF_THREAD_TYPE thread;     /* execution thread of the active object */
    #endif
(11)    uint8_t prio;                  /* QF priority of the active object */
(12)    uint8_t running;            /* flag indicating if the AO is running */
    } QActive;

(13) void QActive_start(QActive *me, uint8_t prio,
                        QEvent const *qSto[], uint32_t qLen,
                        void *stkSto, uint32_t stkSize,
                        QEvent const *ie);
(14) void QActive_postFIFO(QActive *me, QEvent const *e);
(15) void QActive_postLIFO(QActive *me, QEvent const *e);
(16) void QActive_ctor(QActive *me, QState initial);
(17) void QActive_stop(QActive *me);
(18) void QActive_subscribe(QActive const *me, QSignal sig);
(19) void QActive_unsubscribe(QActive const *me, QSignal sig);
(20) void QActive_unsubscribeAll(QActive const *me);
(21) void QActive_defer(QActive *me, QEQueue *eq, QEvent const *e);
(22) QEvent const *QActive_recall(QActive *me, QEQueue *eq);
(23) QEvent const *QActive_get_(QActive *me);

  • (1) The macro QF_ACTIVE_SUPER_ specifies the ultimate base class for deriving active objects. This macro lets you define (in the QF port) any base class for QActive as long as the base class supports the state machine interface. (See Chapter 3, “Generic State Machine Interface.”)

  • (2) When the macro QF_ACTIVE_SUPER_ is not defined in the QF port, the default is the QHsm class provided in the QEP hierarchical event processor.

  • (3) The macro QF_ACTIVE_CTOR_() specifies the name of the base class constructor.

  • (4) The macro QF_ACTIVE_INIT_() specifies the name of the base class init() function.

  • (5) The macro QF_ACTIVE_DISPATCH_() specifies the name of the base class dispatch() function.

  • (6) The macro QF_ACTIVE_STATE_ specifies the type of the parameter for the base class constructor.

By defining the QF_ACTIVE_XXX_ macros to refer to your own class, you can eliminate the dependencies between the QF framework and the QEP event processor. In other words, you can replace QEP with your own event processor, perhaps based on one of the techniques discussed in Chapter 3, or not based on state machines at all (e.g., you might want to try protothreads [Dunkels+ 06]).

  • (7) The first member super specifies the base class for QActive (see the sidebar “Single Inheritance in C” in Chapter 1).

  • (8) The type of the event queue member eQueue is platform-specific. For example, in the standalone QF configurations, the macro QF_EQUEUE_TYPE is defined as the native QF event queue QEqueue (see Section 7.8). However, when QF is based on an external RTOS, the event queue might be implemented with a message queue of the underlying RTOS.

  • (9) The data member osObject is used in some QF ports to block the native QF event queue. The osObject data member is necessary when the underlying OS does not provide an adequate queue facility, so the native QF queue must be used. In that case the osObject data member holds an OS-specific primitive to efficiently block the native QF event queue when the queue is empty. See Chapter 8, “POSIX QF Port,” for an example of using the osObject data member.

  • (10) The data member thread is used in some QF ports to hold the thread handle associated with the active object.

  • (11) The data member prio holds the priority of the active object. In QF, each active object has a unique priority. The lowest possible task priority is 1 and higher-priority values correspond to higher-urgency active objects. The maximum allowed active object priority is determined by the macro QF_MAX_ACTIVE, which currently cannot exceed 63.

  • (12) The data member running is used in some QF ports to represent whether the active object is running. In these ports, writing zero to the running member causes exit from the active object's event loop and cleanly terminates the active object thread.

  • (13) The function QActive_start() starts the active object thread. This function is platform-specific and is explained in Section 7.4.3.

  • (14) The function QActive_postFIFO() is used for direct event posting to the active object's event queue using the FIFO policy.

  • (15) The function QActive_postLIFO() is used for direct event posting to the active object's event queue using the LIFO policy.

  • (16) The function QActive_ctor() is the “constructor” of the QActive class. This constructor has the same signature as the constructor of QHsm or QFsm (see Section 4.5.1 in Chapter 4). In fact, the main job of the QActive constructor is to initialize the state machine base class (the member super).

For example, a QF port could define the QF_ACTIVE_XXX_ macros for a custom base class as follows:

#define QF_ACTIVE_SUPER_             MyClass
#define QF_ACTIVE_CTOR_(me_, ini_)   MyClass_ctor((me_), (ini_))
#define QF_ACTIVE_INIT_(me_, e_)     MyClass_init((me_), (e_))
#define QF_ACTIVE_DISPATCH_(me_, e_) MyClass_dispatch((me_), (e_))
#define QF_ACTIVE_STATE_             void*

  • (17) The function QActive_stop() stops the execution thread of the active object. This function is platform-specific and is discussed in Chapter 8. Not all QF ports need to define this function.

Note

In the C++ version, the QActive constructor is protected. This prevents direct instantiation of the QActive class, since it is intended only for derivation (the abstract class).

Note

In the C++ version, QActive::stop() is not equivalent to the active object destructor. The function merely causes the active object thread to eventually terminate, which might not happen immediately.

  • (18-20) The functions QActive_subscribe(), QActive_unsubscribe(), and QActive_unsubscribeAll() are used for subscribing and unsubscribing to events. I discuss these functions in the upcoming Section 7.6.2.

  • (21,22) The functions QActive_defer() and QActive_recall() are used for efficient (“zero copy”) deferring and recalling of events, respectively. I describe these functions in the upcoming Section 7.5.4.

  • (23) The function QActive_get_() is used to remove one event at a time from the active object's event queue. This function is used only inside QF and never at the application level. In some QF ports the function QActive_get_() can block. I describe this function in the upcoming Section 7.4.2.
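The single-inheritance relationship in Listing 7.7(7) can be illustrated with a small, self-contained sketch (the stripped-down QHsm/QActive stand-ins and the Ship attribute below are assumptions for illustration only):

```c
#include <assert.h>
#include <stdint.h>

/* Stripped-down stand-ins for the QP types (assumptions for this sketch) */
typedef struct QHsmTag    { int state; } QHsm;
typedef struct QActiveTag { QHsm super; uint8_t prio; } QActive;

/* Application-specific active object derived from QActive by embedding it
 * as the FIRST member (the "Single Inheritance in C" idiom from Chapter 1) */
typedef struct ShipTag {
    QActive super;      /* inherits QActive (and, transitively, QHsm) */
    uint16_t score;     /* hypothetical class-specific attribute */
} Ship;

static Ship l_ship;     /* static instance of the active object */
```

Because super is the first member, &l_ship, &l_ship.super, and &l_ship.super.super are all the same address, which is exactly why a Ship pointer can be safely passed to any function expecting a pointer to QActive or QHsm.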

Internal State Machine of an Active Object

As shown in Figure 7.3, every concrete active object, such as Ship, Missile, or Tunnel in the “Fly ‘n’ Shoot” game example from Chapter 1, is a state machine because it derives indirectly from the QHsm base class or a class that supports a generic state machine interface (see the data member super in Listing 7.7(7)). Derivation means simply that every pointer to QActive or a structure derived from QActive can always be safely used as a pointer to the base structure QHsm. Such a pointer can therefore always be passed to any function designed to work with the state machine structure. At the application level, you can mostly ignore the other aspects of your active objects and view them predominantly as state machines. In fact, your main job in developing a QF application consists of elaborating the state machines of your active objects.

Event Queue of an Active Object

Event queues are essential components of any event-driven system because they reconcile the asynchronous production of events with the RTC semantics of their consumption. An event queue makes the corresponding active object appear to always be responsive to events, even though the internal state machine can accept events only between RTC steps. Additionally, the event queue provides buffer space that protects the internal state machine from bursts in event production that can, at times, exceed the available processing capacity.

You can view the active object's event queue as an outer rind that provides an external interface for injecting events into the active object and at the same time protects the internal state machine during RTC processing. To perform these functions, the event queue must allow any thread of execution (as well as an ISR) to asynchronously post events, but only one thread—the local thread of the active object—needs to be able to remove events from the queue. In other words, the event queue in QF needs multiple-write but only single-read access.

From the description so far, it should be clear that the event queue is quite a sophisticated mechanism. One end of the queue—the end where producers insert events—is obviously shared among many tasks and interrupts and must provide an adequate mutual exclusion mechanism to protect the internal consistency of the queue. The other end—the end from which the local active object thread extracts events—must provide a mechanism for blocking the active object when the queue is empty. In addition, an event queue must manage a buffer of events, typically organized as a ring buffer.

As shown in Figure 6.13 in Chapter 6, the “zero copy” event queues do not store actual events, only pointers to event instances. Typically these pointers point to event instances allocated dynamically from event pools (see Section 7.5.2), but they can also point to statically allocated events. You need to specify the maximum number of event pointers that a queue can hold at any one time when you start the active object with the QActive_start() function (see the next section). The correct sizing of event queues depends on many factors and generally is not a trivial task. I discuss sizing event queues in Chapter 9.

Many commercial RTOSs natively support queuing mechanisms in the form of message queues. Standard message queues are far more complex than required by active objects because they typically allow multiple-write as well as multiple-read access (the QF requires only single-read access) and often support variable-length data (not only pointer-sized data). Usually message queues also allow blocking when the queue is empty and when the queue is full, and both types of blocking can be timed out. Naturally, all this extra functionality, which you don't really need in QF, comes at an extra cost in CPU and memory usage. The QF port to the μC/OS-II RTOS described in Chapter 8 provides an example of an event queue implemented with a message queue of an RTOS. The standalone QF ports to x86/DOS and ARM Cortex-M3 (used in the “Fly ‘n’ Shoot” game from Chapter 1) provide examples of using the native QF event queue. I discuss the native QF active object queue implementation in Section 7.8.3.
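The multiple-write/single-read discipline of a “zero copy” queue can be sketched as a ring buffer that stores only event pointers (a deliberately minimal model with made-up names; it omits the critical sections and the blocking mechanism of the real native QF queue described in Section 7.8):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct { int sig; } Evt;   /* stand-in for QEvent (assumption) */

enum { QLEN = 4 };                 /* ring buffer of event POINTERS only */
static Evt const *ring[QLEN];
static uint8_t head, tail, used;

/* multiple-write end: in real QF this runs inside a critical section */
static int q_post(Evt const *e) {
    if (used == QLEN) { return 0; }          /* queue full */
    ring[head] = e;
    head = (uint8_t)((head + 1) % QLEN);
    ++used;
    return 1;
}

/* single-read end: only the AO's own thread calls this */
static Evt const *q_get(void) {
    Evt const *e;
    if (used == 0) { return NULL; }          /* would block in a real port */
    e = ring[tail];
    tail = (uint8_t)((tail + 1) % QLEN);
    --used;
    return e;
}
```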

Thread of Execution and Active Object Priority

Every QF active object executes in its own thread. The actual control flow within the active object thread depends on the multitasking model actually used, but the event processing always consists of the three essential steps shown in Listing 7.8.

Listing 7.8. The three steps of an active object thread

(1) QEvent const *e = QActive_get_(a);     /* get the next event for AO ‘a’ */
(2) QF_ACTIVE_DISPATCH_(&a->super, e);              /* dispatch to the AO ‘a’ */
(3) QF_gc(e);      /* determine if event ‘e’ is garbage and collect it if so */

  • (1) The event is extracted from the active object's event queue by means of the function QActive_get_(). This function might block in blocking kernels. In Section 7.8.3 I describe the implementation of QActive_get_() for the native QF active object queue. In Chapter 8 I describe the QActive_get_() implementation when a message queue of an RTOS is used instead of the native QF event queue.

  • (2) The event is dispatched to the active object's state machine for processing (see Listing 7.7(5) for the definition of the QF_ACTIVE_DISPATCH_() macro).

Note

Step 2 constitutes the RTC processing of the active object's state machine. The active object's thread continues only after step 2 completes.

In the presence of a traditional RTOS (e.g., VxWorks) or a multitasking operating system (e.g., Linux), the three event processing steps just explained are enclosed by the usual endless loop, as shown in Figure 6.12(A) in Chapter 6. Under a cooperative “vanilla” kernel (Figure 6.12(B)) or an RTC kernel (Figure 6.12(C)), the three steps are executed in one-shot fashion for every event.

The QActive_start() function creates the active object's thread and notifies QF to start managing the active object. A QF application needs to call the QActive_start() function on behalf of every active object in the system. In principle, active objects can be started and stopped (with QActive_stop()) multiple times during the lifetime of the application. However, in most cases, all active objects are started just once during the system initialization.

The QActive_start() function is one of the central elements of the framework, but obviously it strongly depends on the underlying multitasking kernel. Listing 7.9 shows the pseudocode of QActive_start().

Listing 7.9. QActive_start() function pseudocode

(1) void QActive_start(QActive *me,
(2)                    uint8_t prio,                  /* the unique priority */
(3)                    QEvent const *qSto[], uint32_t qLen,  /* event queue */
(4)                    void *stkSto, uint32_t stkSize,    /* per-task stack */
(5)                    QEvent const *ie)       /* the initialization event */
    {
(6)     me->prio = prio;                             /* set the QF priority */
(7)     QF_add_(me);                   /* make QF aware of this active object */
(8)     QF_ACTIVE_INIT_(me, ie);         /* execute the initial transition */
(9)     Initialize the event queue object ‘me->eQueue’ using qSto and qLen
(10)    Create and start the thread ‘me->thread’ of the underlying kernel
    }

  • (1) The argument ‘me’ is the pointer to the active object being started.

  • (2) The argument ‘prio’ is the priority you assign to the active object. In QF, every active object must have a unique priority, which you assign at startup and cannot change later. QF uses a priority numbering system in which priority 1 is the lowest and higher numbers correspond to higher priorities.

Note

You can think of QF priority 0 as corresponding to the idle task, which has the absolute lowest priority not accessible to the application-level tasks.

Note

The “initialization event” ‘ie’ gives you an opportunity to provide some information to the active object, which is only known later in the initialization sequence (e.g., a window handle in a GUI system). Note that the active object constructor runs even before main() (in C++), at which time you typically don't have all the information to initialize all aspects of an active object.

Note

This design allows the initialization event (passed to QActive_start() as the ‘ie’ pointer) to be allocated on the stack of the caller. Note that the initialization event is not recycled.

Event Management in QF

QF implements the efficient “zero-copy” event delivery scheme, as described in Section 6.5 in Chapter 6. QF supports two kinds of events: (1) dynamic events managed by the framework, and (2) other events (typically statically allocated) not managed by QF. For each dynamic event, QF keeps track of the reference counter of the event (to know when to recycle the event) as well as the event pool from which the dynamic event was allocated (to recycle the event back to the same pool).

Event Structure

QF uses the same event representation as the QEP event processor described in Part I. Events in QF are represented as instances of the QEvent structure (shown in Listing 7.10), which contains the event signal sig and a byte dynamic_ to represent the internal “bookkeeping” information about the event.

Listing 7.10. QEvent structure defined in <qp>\qpc\include\qevent.h

typedef struct QEventTag {                      /* QEvent base structure */
    QSignal sig;                  /* public signal of the event instance */
    uint8_t dynamic_;  /* attributes of a dynamic event (0 for static event) */
} QEvent;

As shown in Figure 7.4, the QF framework uses the QEvent.dynamic_ data byte in the following way.4 I avoid using bit fields because they are not quite portable. Also, the use of bit fields would be against the required MISRA rule 111. The six least-significant bits [0..5] represent the reference counter of the event, which has the dynamic range of 0..63. The two most significant bits [6..7] represent the event pool ID of the event, which has the dynamic range of 1..3. The pool ID of zero is reserved for static events, that is, events that do not come from any event pool. With this representation, a static event has a unique, easy-to-check signature (QEvent.dynamic_ == 0). Conversely, the signature (QEvent.dynamic_ != 0) unambiguously identifies a dynamic event.
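The bit allocation in Figure 7.4 can be captured with a few hypothetical helper functions (these helpers are not part of QF, which manipulates the dynamic_ byte directly with shifts and masks):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helpers mirroring Figure 7.4 (not part of QF):
 * bits [0..5] hold the reference counter, bits [6..7] the pool ID */
static uint8_t make_dynamic(uint8_t poolId, uint8_t refCtr) {
    return (uint8_t)((poolId << 6) | (refCtr & 0x3F));
}
static uint8_t pool_id(uint8_t dyn) { return (uint8_t)(dyn >> 6); }
static uint8_t ref_ctr(uint8_t dyn) { return (uint8_t)(dyn & 0x3F); }
```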

Note

The QEvent data member dynamic_ is used only by the QF framework for managing dynamic events (see the following section). For every static event, you must initialize this member to zero. Otherwise, the QEvent.dynamic_ data member should never be of interest to the application code.

Allocation of bits in the QEvent.dynamic_ byte.

Figure 7.4. Allocation of bits in the QEvent.dynamic_ byte.

Dynamic Event Allocation

Dynamic events allow reusing the same memory over and over again for passing different events. QF allocates such events dynamically from one of the event pools managed by the framework. An event pool in QF is a fixed-block-size heap, also known as a memory partition or memory pool.

The most obvious drawback of a fixed-block-size heap is that it does not support variable-sized blocks. Consequently, the blocks have to be oversized to handle the biggest possible allocation. A good compromise to avoid wasting memory is to use not one but a few heaps with blocks of different sizes. QF can manage up to three event pools (e.g., small, medium, and large events, like shirt sizes).

Event pools require initialization through QF_poolInit() function shown in Listing 7.11. An application may call this function up to three times to initialize up to three event pools in QF.

Listing 7.11. Initializing an event pool to be managed by QF (file <qp>\qpc\qf\source\qf_pool.c)

/* Package-scope objects ------------------------------------------------*/
(1) QF_EPOOL_TYPE_ QF_pool_[3];                    /* allocate 3 event pools */
(2) uint8_t QF_maxPool_;                 /* number of initialized event pools */
/*..........................................................................*/
(3) void QF_poolInit(void *poolSto, uint32_t poolSize, QEventSize evtSize) {
                       /* cannot exceed the number of available memory pools */
(4)     Q_REQUIRE(QF_maxPool_ < (uint8_t)Q_DIM(QF_pool_));
               /* please initialize event pools in ascending order of evtSize */
(5)     Q_REQUIRE((QF_maxPool_ == (uint8_t)0)
                  || (QF_EPOOL_EVENT_SIZE_(QF_pool_[QF_maxPool_ - 1]) < evtSize));
               /* perform the platform-dependent initialization of the pool */
(6)     QF_EPOOL_INIT_(QF_pool_[QF_maxPool_], poolSto, poolSize, evtSize);
(7)     ++QF_maxPool_;                                      /* one more pool */
}

  • (1) The macro QF_EPOOL_TYPE_ represents the QF event pool type. This macro lets the QF port define a particular memory pool (fixed-size heap) implementation that might be already provided with the underlying kernel or RTOS. If QF is used standalone or if the underlying RTOS does not provide an adequate memory pool, the QF framework provides the efficient native QMPool class. Note that an event pool object is quite small because it does not contain the actual memory managed by the pool (see Section 7.9).

  • (2) The variable QF_maxPool_ holds the number of pools actually used, which can be 0 through 3.

Note

All QP components, including the QF framework, consistently assume that variables without an explicit initialization value are initialized to zero upon system startup, which is a requirement of the ANSI-C standard. In embedded systems, this initialization step corresponds to clearing the .BSS section. You should make sure that in your system the .BSS section is indeed cleared before main() is called.

  • (3) According to the general policy of QF, all memory needed for the framework operation is provided to the framework by the application. Therefore, the first parameter ‘poolSto’ of QF_poolInit() is a pointer to the contiguous chunk of storage for the pool. The second parameter ‘poolSize’ is the size of the pool storage in bytes, and finally, the last parameter ‘evtSize’ is the maximum event size that can be allocated from this pool.

Note

The number of events in the pool might be smaller than the ratio poolSize/evtSize because the pool might choose to internally align the memory blocks. However, the pool is guaranteed to hold events of at least the specified size evtSize.

Note

Subsequent calls to the QF_poolInit() function must be made with progressively increasing values of the evtSize parameter.

Listing 7.12 shows the implementation of the QF_new_() function, which allocates a dynamic event from one of the event pools managed by QF. The basic policy is to allocate the event from the first pool that has a block size big enough to fit the requested event size.

Listing 7.12. Simple policy of allocating an event from the smallest event-size pool (file <qp>\qpc\qf\source\qf_new.c)

(1) QEvent *QF_new_(QEventSize evtSize, QSignal sig) {
        QEvent *e;
                     /* find the pool id that fits the requested event size... */
        uint8_t idx = (uint8_t)0;
(2)     while (evtSize > QF_EPOOL_EVENT_SIZE_(QF_pool_[idx])) {
            ++idx;
(3)         Q_ASSERT(idx < QF_maxPool_);  /* cannot run out of registered pools */
        }
(4)     QF_EPOOL_GET_(QF_pool_[idx], e);        /* get e -- platform-dependent */
(5)     Q_ASSERT(e != (QEvent *)0);         /* pool must not run out of events */
(6)     e->sig = sig;                             /* set signal for this event */
                                    /* store the dynamic attributes of the event:
                                     * the pool ID and the reference counter == 0 */
(7)     e->dynamic_ = (uint8_t)((idx + 1) << 6);
(8)     return e;
    }

  • (1) The function QF_new_() allocates a dynamic event of the requested size ‘evtSize’ and sets the signal ‘sig’ in the newly allocated event. The function returns a pointer to the event.

  • (2) This while loop scans through the QF_pool_[] array starting from pool id = 0 in search of a pool that would fit the requested event size. Obtaining the event size of a pool is a platform-specific operation because various RTOSs that support fixed-size heaps might report the event size in a different way. This platform dependency is hidden in the QF code by the indirection layer of the macro QF_EPOOL_EVENT_SIZE_().

  • (3) This assertion fires when the while loop runs out of the event pools, which means that the requested event is too big for all initialized event pools.

  • (4) The macro QF_EPOOL_GET_() obtains a memory block from the pool found in the previous step.

  • (5) The assertion fires when the pool returns the NULL pointer, which indicates depletion of this pool.

Note

The QF framework treats the inability to allocate an event as an error. The assertions in lines 3 and 5 are part of the event delivery guarantee policy. It is the application designer's responsibility to size the event pools adequately so that they never run out of events.
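The ascending-order precondition of QF_poolInit() and the first-fit pool selection of QF_new_() can be modeled in isolation (a sketch with simplified bookkeeping; only the block sizes are tracked, not the actual pool storage):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model of the QF pool bookkeeping (block sizes only; no real
 * storage management -- the names loosely mirror the QF code) */
static uint16_t pool_evtSize[3];
static uint8_t  maxPool;

static void poolInit_model(uint16_t evtSize) {
    assert(maxPool < 3);                          /* cf. Listing 7.11, line (4) */
    assert(maxPool == 0
           || pool_evtSize[maxPool - 1] < evtSize);  /* ascending, cf. line (5) */
    pool_evtSize[maxPool] = evtSize;
    ++maxPool;
}

/* First-fit selection, as in the while loop of Listing 7.12 */
static uint8_t pick_pool(uint16_t evtSize) {
    uint8_t idx = 0;
    while (evtSize > pool_evtSize[idx]) {
        ++idx;
        assert(idx < maxPool);          /* event too big for all pools */
    }
    return idx;
}
```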

Typically, you will not use QF_new_() directly but through the Q_NEW() macro defined as follows:

#define Q_NEW(evtT_, sig_) ((evtT_ *)QF_new_(sizeof(evtT_), (sig_)))

The Q_NEW() macro dynamically creates a new event of type evtT_ with the signal sig_. It returns a pointer to the event already cast to the event type (evtT_ *). Here is an example of dynamic event allocation with the macro Q_NEW():

MyEventXYZ *e_xyz = Q_NEW(MyEventXYZ, XYZ_SIG);    /* dynamically allocate */
/* NOTE: no need to check for validity of the event pointer */
e_xyz->foo = …;                             /* fill the event parameters… */
QF_publish((QEvent *)e_xyz);                          /* publish the event */

The assertions inside QF_new_() guarantee that the pointer is valid, so you don't need to check the pointer returned from Q_NEW(), unlike the value returned from malloc(), which you should check.

Note

In C++, the Q_NEW() macro does not invoke the constructor of the event. This is not a problem for the QEvent base struct and simple structs derived from it. However, you need to keep in mind that subclasses of QEvent should not introduce virtual functions because the virtual pointer won't be set up during the dynamic allocation through Q_NEW().5 A simple solution would be to use the placement new() operator inside the Q_NEW() macro to enforce full instantiation of an event object, but it is currently not used, for better efficiency and compatibility with older C++ compilers, which might not support placement new().

Automatic Garbage Collection

Most of the time, you don't need to worry about recycling dynamic events, because QF does it automatically when it detects that an event is no longer referenced.

Note

The explicit garbage collection step is necessary only in the code that is out of the framework's control, such as ISRs receiving events from “raw” thread-safe queues (see upcoming Section 7.8.4).

QF uses the standard reference-counting algorithm to keep track of the outstanding references to each dynamic event managed by the framework. The reference counter for each event is stored in the six least significant bits of the event attribute dynamic_. Note that the data member dynamic_ of a dynamic event cannot be zero because the two most significant bits of the byte hold the pool ID, with valid values of 1, 2, or 3.

The reference counter of each event is always updated and tested in a critical section of code to prevent data corruption. The counter is incremented whenever a dynamic event is inserted into an event queue. The counter is decremented by the QF garbage collector, which is called after every RTC step (see Listing 7.8(3)). When the reference counter of a dynamic event drops to zero, the QF garbage collector recycles the event back to the event pool number stored in the two most significant bits of the dynamic_ attribute.

(1) void QF_gc(QEvent const *e) {
(2)     if (e->dynamic_ != (uint8_t)0) {           /* is it a dynamic event? */
(3)         QF_INT_LOCK_KEY_
(4)         QF_INT_LOCK_();
(5)         if ((e->dynamic_ & 0x3F) > 1) {  /* isn't this the last reference? */
(6)             --((QEvent *)e)->dynamic_;   /* decrement the reference counter */
(7)             QF_INT_UNLOCK_();
            }
(8)         else {     /* this is the last reference to this event, recycle it */
(9)             uint8_t idx = (uint8_t)((e->dynamic_ >> 6) - 1);
(10)            QF_INT_UNLOCK_();
(11)            Q_ASSERT(idx < QF_maxPool_);         /* index must be in range */
(12)            QF_EPOOL_PUT_(QF_pool_[idx], (QEvent *)e);
            }
        }
    }

  • (1) The function QF_gc() garbage-collects one event at a time.

  • (2) The function checks the unique signature of a dynamic event. The garbage collector handles only dynamic events.

  • (3) The critical section status is allocated on the stack (see Section 7.3.3).

  • (4) Interrupts are locked to examine and decrement the reference count.

  • (5) If the reference count (lowest 6 bits of the e->dynamic_ byte) is greater than 1, the event should not be recycled.

  • (6) The reference count is decremented. Note that the const attribute of the event pointer is “cast away,” but this is safe after checking that this must be a dynamic event (and not a static event possibly placed in ROM).

  • (7) Interrupts are unlocked for the if-branch.

  • (8) Otherwise, reference count is becoming zero and the event must be recycled.

  • (9) The pool ID is extracted from the two most significant bits of the e->dynamic_ byte and decremented by one to form the index into the QF_pool_[] array.

  • (10) Interrupts are unlocked for the else branch. It is safe at this point because you know for sure that the event is not referenced by anybody else, so it is exclusively owned by the garbage collector thread.

  • (11) The index must be in the expected range of initialized event pools.

  • (12) The macro QF_EPOOL_PUT_() recycles the event to the pool QF_pool_[idx]. The explicit cast removes the const attribute.
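The decision logic of QF_gc() can be exercised in isolation with a small model operating directly on a dynamic_ byte (the gc_model() function and the recycled_to variable are assumptions for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Model of the QF_gc() decision logic operating on the dynamic_ byte
 * (bits [0..5] = reference counter, bits [6..7] = pool ID).
 * recycled_to records the pool index of the last recycled event. */
static int recycled_to = -1;

static void gc_model(uint8_t *dyn) {
    if (*dyn != (uint8_t)0) {                       /* dynamic event? */
        if ((*dyn & 0x3F) > 1) {                    /* not the last reference */
            --(*dyn);                               /* just decrement the counter */
        }
        else {                                      /* last reference: recycle */
            recycled_to = (*dyn >> 6) - 1;          /* pool index from pool ID */
        }
    }
}
```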

Deferring and Recalling Events

Event deferral comes in very handy when an event arrives in a particularly inconvenient moment but can be deferred for some later time, when the system is in a much better position to handle the event (see “Deferred Event” state pattern in Chapter 5). QF supports very efficient event deferring and recalling mechanisms consistent with the “zero-copy” policy.

QF implements explicit event deferring and recalling through QActive class functions QActive_defer() and QActive_recall(), respectively. These functions work in conjunction with the native “raw” event queue provided in QF (see upcoming Section 7.8.4). Listing 7.13 shows the implementation.

Listing 7.13. QF event deferring and recalling (file <qp>\qpc\qf\source\qa_defer.c)

    void QActive_defer(QActive *me, QEQueue *eq, QEvent const *e) {
        (void)me;                 /* avoid compiler warning about ‘me’ not used */
(1)     QEQueue_postFIFO(eq, e);      /* increments ref-count of a dynamic event */
    }
    /*........................................................................*/
(2) QEvent const *QActive_recall(QActive *me, QEQueue *eq) {
(3)     QEvent const *e = QEQueue_get(eq);   /* get an event from deferred queue */
        if (e != (QEvent *)0) {                            /* event available? */
            QF_INT_LOCK_KEY_
(4)         QActive_postLIFO(me, e);   /* post it to the front of the AO's queue */
(5)         QF_INT_LOCK_();
(6)         if (e->dynamic_ != (uint8_t)0) {          /* is it a dynamic event? */
(7)             Q_ASSERT((e->dynamic_ & 0x3F) > 1);
(8)             --((QEvent *)e)->dynamic_;   /* decrement the reference counter */
            }
(9)         QF_INT_UNLOCK_();
        }
(10)    return e;  /* pass the recalled event to the caller (NULL if not recalled) */
    }

  • (1) The function QActive_defer() posts the deferred event into the given “raw” queue ‘eq.’ The event posting increments the reference counter of a dynamic event, so the event is not recycled at the end of the current RTC step (because it is referenced by the “raw” queue).

  • (2) The function QActive_recall() attempts recalling an event from the provided “raw” thread-safe queue ‘eq.’ The function returns the pointer to the recalled event or NULL if the provided queue is empty.

  • (3) The event is extracted from the queue. The “raw” queue never blocks and returns NULL if it is empty.

  • (4) If an event is available, it is posted using the last-in, first-out (LIFO) policy into the event queue of the active object. The LIFO policy is employed to guarantee that the recalled event will be the very next to process. If other already queued events were allowed to precede the recalled event, the state machine might transition to a state where the recalled event would no longer be convenient.

  • (5) Interrupts are locked to decrement the reference counter of the event, to account for removing the event from the “raw” thread-safe queue.

  • (6) The unique signature of a dynamic event is checked.

  • (7) The reference counter must be at this point at least 2 because the event is referenced by at least two event queues (the deferred queue and the active object's queue).

  • (8) The reference counter is decremented by one to account for removing the event from the deferred queue.

  • (9) Interrupts are unlocked.

  • (10) The recalled event pointer is returned to the caller.

Note

Even though you can “peek” inside the event right at the point it is recalled, you should typically handle the event only after it arrives through the active object's queue. See the “Deferred Event” state pattern in Chapter 5.

Event Delivery Mechanisms in QF

QF supports only asynchronous event exchange within the application, meaning that the producers post events into event queues, but do not wait for the actual processing of the events. QF supports two types of asynchronous event delivery:

  • 1 The simple mechanism of direct event posting, when the producer of an event directly posts the event to the event queue of the consumer active object.

  • 2 A more sophisticated publish-subscribe event delivery mechanism, where the producers of events publish them to the framework and the framework then delivers the events to all active objects that had subscribed to this event.

Direct Event Posting

QF supports direct event posting through the QActive_postFIFO() and QActive_postLIFO() functions. These functions depend on the active object's employed queue mechanism. In the upcoming Section 7.8.3, I show how these functions are implemented when the native QF active object queue is used. In Chapter 8, I demonstrate how to implement these functions to use a message queue of a traditional RTOS.

Note

Direct event posting should not be confused with event dispatching. In contrast to asynchronous event posting through event queues, direct event dispatching is a simple synchronous function call. Event dispatching occurs when you call the QHsm_dispatch() function, as in Listing 7.8(2), for example.

Direct event posting is illustrated in the “Fly ‘n’ Shoot” example from Chapter 1, when an ISR posts a PLAYER_SHIP_MOVE event directly to the Ship active object:

  • QActive_postFIFO(AO_Ship, (QEvent *)e);    /* post event ‘e’ to the Ship AO */

Note that the producer of the event (the ISR) in this case needs to “know” the recipient (Ship) only through an “opaque pointer” of type QActive*; the specific definition of the Ship active object structure is not required. The AO_Ship pointer is declared in the game.h header file as:

  • extern QActive * const AO_Ship;            /* opaque pointer to the Ship AO */

The Ship structure definition is in fact entirely encapsulated in the ship.c module and is inaccessible to the rest of the application. I recommend using this variation of the “opaque pointer” technique in your applications.

Publish-Subscribe Event Delivery

QF implements publish-subscribe event delivery through the following services:

  • • The function QF_psInit() to initialize the publish-subscribe mechanism

  • • Functions QActive_subscribe(), QActive_unsubscribe(), and QActive_unsubscribeAll() for active objects to subscribe and unsubscribe to particular event signals

  • • The function QF_publish() for publishing events

Delivering events is the most frequently performed function of the framework; therefore, it is important to implement it efficiently. As shown in Figure 7.5, QF uses a lookup table indexed by the event signal to efficiently find all subscribers to a given signal. For each event signal index (e->sig), the lookup table stores a subscriber list. A subscriber list (typedef’d to QSubscrList) is just a densely packed bitmask where each bit corresponds to the unique priority of the active object. If the bit is set, the corresponding active object is the subscriber to the signal, otherwise the active object is not the subscriber.

Signal to Subscriber-List lookup table QF_subscrList_[].

Figure 7.5. Signal to Subscriber-List lookup table QF_subscrList_[].

The actual size of the QSubscrList bitmask is determined by the macro QF_MAX_ACTIVE, which specifies the maximum number of active objects in the system (the current range of QF_MAX_ACTIVE is 1..63). The subscriber list type QSubscrList is typedef’d in Listing 7.14.

Listing 7.14. The QSubscrList structure (file <qp>\qpc\include\qf.h)

  • typedef struct QSubscrListTag {

  •        uint8_t bits[((QF_MAX_ACTIVE - 1) / 8) + 1];

  • } QSubscrList;

To reduce the memory taken by the subscriber lookup table, you can reduce the number of published signals as well as the number of potential subscribers QF_MAX_ACTIVE. Typically, however, the table is quite small. For example, the table for a complete real-life GPS receiver application with 50 different signals and up to eight active objects costs only 50 bytes of RAM.

Note

Not all signals in the system are published. To conserve memory, you can enumerate the published signals before other nonpublished signals and thus arrive at a lower limit for the number of published signals.

Before you can publish any event, you need to initialize the subscriber lookup table by calling the function QF_psInit(), which is shown in Listing 7.15. This function simply initializes the pointer to the lookup table QF_subscrList_ and the number of published signals QF_maxSignal_.

Listing 7.15. QF_psInit() (file <qp>\qpc\qf\source\qf_psini.c)

  • QSubscrList *QF_subscrList_;       /* initialized to zero per C-standard */

  • QSignal QF_maxSignal_;                /* initialized to zero per C-standard */

  • void QF_psInit(QSubscrList *subscrSto, QSignal maxSignal) {

           QF_subscrList_ = subscrSto;

           QF_maxSignal_  = maxSignal;

    }

Active objects subscribe to signals through QActive_subscribe(), shown in Listing 7.16.

Listing 7.16. QActive_subscribe() function (file <qp>\qpc\qf\source\qa_sub.c)

  • (1) void QActive_subscribe(QActive const *me, QSignal sig) {

    (1)        uint8_t p = me->prio;

  • (2)         uint8_t i = Q_ROM_BYTE(QF_div8Lkup[p]);

    (2)        QF_INT_LOCK_KEY_

  • (3)         Q_REQUIRE(((QSignal)Q_USER_SIG <= sig)

    (3)                           && (sig < QF_maxSignal_)

    (3)                           && ((uint8_t)0 < p) && (p <= (uint8_t)QF_MAX_ACTIVE)

    (3)                           && (QF_active_[p] == me));

    (3)        QF_INT_LOCK_();

  • (4)         QF_subscrList_[sig].bits[i] |= Q_ROM_BYTE(QF_pwr2Lkup[p]);

    (4)        QF_INT_UNLOCK_();

    (4) }

  • (1) The function QActive_subscribe() subscribes a given active object ‘me’ to the event signal ‘sig.’

  • (2) The index ‘i’ represents the byte index into the multibyte QSubscrList bitmask (see Listing 7.14). The array QF_div8Lkup[] is a lookup table that stores the precomputed values of the expression QF_div8Lkup[p] = (p - 1)/8, where 0 < p < 64. The QF_div8Lkup[] lookup table is defined in the file <qp>\qpc\qf\source\qf_pwr2.c and occupies 64 bytes of ROM.

Note

Obviously, you don't want to use precious RAM for storing constant lookup tables. However, some compilers for Harvard architecture MCUs (e.g., GCC for AVR) cannot generate code for accessing data allocated in the program space (ROM), even though the compiler can allocate constants in ROM. The workaround for such compilers is to explicitly add assembly code to access data allocated in the program space. The macro Q_ROM_BYTE() retrieves a byte from the given ROM address. This macro is transparent (i.e., copies its argument) for compilers that can correctly access data in ROM.

I don't explicitly discuss the mirror function QActive_unsubscribe(), but it is virtually identical to QActive_subscribe() except that it clears the appropriate bit in the subscriber bitmask. Note that both QActive_subscribe() and QActive_unsubscribe() require an active object as the first parameter ‘me,’ which means that only active objects are capable of subscribing or unsubscribing to events.

The QF real-time framework implements event publishing with the function QF_publish() shown in Listing 7.17. This function performs efficient “zero-copy” event multicasting. QF_publish() is designed to be callable from both the task level and the interrupt level.

Listing 7.17. QF_publish() function (file <qp>\qpc\qf\source\qf_pspub.c)

  • (1) void QF_publish(QEvent const *e) {

    (1)         QF_INT_LOCK_KEY_

    (1)            /* make sure that the published signal is within the configured range */

  • (2)         Q_REQUIRE(e->sig < QF_maxSignal_);

    (2)        QF_INT_LOCK_();

    (2)        if (e->dynamic_ != (uint8_t)0) {              /* is it a dynamic event? */

    (2)              /*lint -e1773                            Attempt to cast away const */

  • (3)              ++((QEvent *)e)->dynamic_;   /* increment reference counter, NOTE01 */

    (3)        }

    (3)        QF_INT_UNLOCK_();

  • (4) #if (QF_MAX_ACTIVE <= 8)

    (4)       {

  • (5)              uint8_t tmp = QF_subscrList_[e->sig].bits[0];

  • (6)              while (tmp != (uint8_t)0) {

  • (7)                     uint8_t p = Q_ROM_BYTE(QF_log2Lkup[tmp]);

  • (8)                      tmp &= Q_ROM_BYTE(QF_invPwr2Lkup[p]);   /* clear subscriber bit */

  • (9)                      Q_ASSERT(QF_active_[p] != (QActive *)0);  /* must be registered */

  •                                     /* internally asserts if the queue overflows */

  • (10)                     QActive_postFIFO(QF_active_[p], e);

    (10)            }

    (10)       }

  • (11) #else

    (11)        {

  • (12)              uint8_t i = Q_DIM(QF_subscrList_[0].bits);

    (12)             do {               /* go through all bytes in the subscription list */

    (12)                   uint8_t tmp;

    (12)                   --i;

  • (13)                   tmp = QF_subscrList_[e->sig].bits[i];

    (13)                   while (tmp != (uint8_t)0) {

    (13)                          uint8_t p = Q_ROM_BYTE(QF_log2Lkup[tmp]);

    (13)                                 tmp &= Q_ROM_BYTE(QF_invPwr2Lkup[p]);/*clear subscriber bit */

  • (14)                           p = (uint8_t)(p + (i << 3));         /* adjust the priority */

    (14)                                 Q_ASSERT(QF_active_[p] != (QActive *)0);/*must be registered*/

    (14)                                     /* internally asserts if the queue overflows */

    (14)                                     QActive_postFIFO(QF_active_[p], e);

    (14)                             }

    (14)                      } while (i != (uint8_t)0);

    (14)                 }

    (14) #endif


  • (15)        QF_gc(e);                      /* run the garbage collector, see NOTE01 */

    (15) }

  • (1) The function QF_publish() publishes a given event ‘e’ to all subscribers.

  • (2) The precondition checks that the published signal is in the initialized range (see Listing 7.15).

  • (3) The reference counter of a dynamic event is incremented in a critical section. This protects the event from being prematurely recycled before it reaches all subscribers.

    (3) The QF_publish() function must ensure that the event is not recycled by a subscriber before all the subscribers receive the event. For example, consider the following scenario: A low-priority active object dynamically allocates an event with Q_NEW() and publishes it by calling QF_publish() in its own thread of execution. In the course of multicasting the event, QF_publish() posts the event to a high-priority active object, which immediately preempts the current thread and starts processing the event. After the RTC step, the high-priority active object calls the garbage collector (see Listing 7.8(3)). If QF_publish() did not increment the event counter in step 3, the counter would be only 1 because the event has only been posted once. The high-priority active object would recycle the event. After resuming the low-priority thread, the QF_publish() might want to keep posting the event to some other subscribers, but the event would be already recycled.

  • (4) The conditional compilation is used to distinguish the simpler and faster case of single-byte QSubscrList (see Listing 7.14).

  • (5) The entire subscriber bitmask is placed in a temporary byte.

  • (6) The while loop runs over all 1 bits set in the subscriber bitmask until the bitmask becomes empty.

  • (7) The log-base-2 lookup quickly determines the most significant 1 bit in the bitmask, which corresponds to the highest-priority subscriber. The structure of the lookup table QF_log2Lkup[tmp], where 0 < tmp <= 255, is shown in Figure 7.6. The QF_log2Lkup[] lookup table is defined in the file <qp>\qpc\qf\source\qf_log2.c and occupies 256 bytes of ROM.

    The binary logarithm lookup table QF_log2Lkup[] maps byte value to the most significant 1-bit number (bits are numbered starting with 1 for the LSB).

    Figure 7.6. The binary logarithm lookup table QF_log2Lkup[] maps byte value to the most significant 1-bit number (bits are numbered starting with 1 for the LSB).

Note

To avoid priority inversions, the event is multicast starting from the highest-priority subscriber.

Time Management

QF manages time through time events, as described in Section 6.6.1 of Chapter 6. In the current QF version, time events cannot be dynamic and must be allocated statically. Also, a time event must be assigned a signal upon instantiation (in the constructor) and the signal cannot be changed later. This latter restriction prevents unexpected changes of the time event while it still might be held inside an event queue.

Time Event Structure and Interface

QF represents time events as instances of the QTimeEvt class (see Figure 7.3). QTimeEvt, as all events in QF, derives from the QEvent base structure. Typically, you will instantiate the QTimeEvt structure directly, but you might also further derive more specialized time events from it to add some more data members and/or specialized functions that operate on the derived time events. Listing 7.18 shows the QTimeEvt class, that is, the QTimeEvt structure declaration and the functions to manipulate it.

Listing 7.18. QTimeEvt structure and interface (file <qp>\qpc\include\qf.h)

  •           typedef struct QTimeEvtTag {

  • (1)        QEvent super;                                    /* derives from QEvent */

  • (2)        struct QTimeEvtTag *prev;/* link to the previous time event in the list */

  • (3)        struct QTimeEvtTag *next;    /* link to the next time event in the list */

  • (4)        QActive *act;         /* the active object that receives the time event */

  • (5)        QTimeEvtCtr ctr;         /* the internal down-counter of the time event */

  • (6)        QTimeEvtCtr interval;       /* the interval for the periodic time event */

    (6) } QTimeEvt;

  • (7) void QTimeEvt_ctor(QTimeEvt *me, QSignal sig);

  • (8) #define QTimeEvt_postIn(me_, act_, nTicks_) do { \

    (8)       (me_)->interval = (QTimeEvtCtr)0; \

    (8)        QTimeEvt_arm_((me_), (act_), (nTicks_)); \

    (8) } while (0)

  • (9) #define QTimeEvt_postEvery(me_, act_, nTicks_) do { \

    (9)         (me_)->interval = (nTicks_); \

    (9)          QTimeEvt_arm_((me_), (act_), (nTicks_)); \

    (9) } while (0)

  • (10) uint8_t QTimeEvt_disarm(QTimeEvt *me);

  • (11) uint8_t QTimeEvt_rearm(QTimeEvt *me, QTimeEvtCtr nTicks);

    (11) /* private helper function */

  • (12) void QTimeEvt_arm_(QTimeEvt *me, QActive *act, QTimeEvtCtr nTicks);

  • (1) The QTimeEvt structure derives from QEvent.

  • (2,3) The two pointers ‘prev’ and ‘next’ are used as links to chain the time events into a bidirectional list (see Figure 7.7).

    Armed QTimeEvt objects linked in a bidirectional linked list and disarmed time events outside the list.

    Figure 7.7. Armed QTimeEvt objects linked in a bidirectional linked list and disarmed time events outside the list.

  • (4) The active object pointer ‘act’ stores the recipient of the time event.

  • (5) The member ‘ctr’ is the internal down-counter decremented in every QF_tick() invocation (see the next section). The time event is posted when the down-counter reaches zero.

  • (6) The member ‘interval’ is used for the periodic time event (it is set to zero for the one-shot time event). The value of the interval is reloaded to the ‘ctr’ down-counter when the time event expires, so the time event keeps timing out periodically.

  • (7) Every time event must be initialized with the constructor QTimeEvt_ctor(). You should call the constructor exactly once for every time event object before arming the time event. The most important action performed in this function is assigning a signal to the time event. You can reuse the time event any number of times, but you should not change the signal. This is because a pointer to the time event might still be held in an event queue and changing the signal could lead to subtle and hard-to-find errors.

  • (8) The macro QTimeEvt_postIn() arms a time event ‘me_’ to fire once in ‘nTicks_’ clock ticks (a one-shot time event). The time event gets directly posted (using the FIFO policy) into the event queue of the active object ‘act_.’ After posting, a one-shot time event is automatically disarmed and can be reused for one-shot or periodic timeout requests.

  • (9) The macro QTimeEvt_postEvery() arms a time event ‘me_’ to fire periodically every ‘nTicks_’ clock ticks (periodic time event). The time event gets directly posted (using the FIFO policy) into the event queue of the active object ‘act_’. After posting, the periodic time event gets automatically rearmed to fire again in the specified ‘nTicks_’ clock ticks.

  • (10) The function QTimeEvt_disarm() explicitly disarms any time event (one-shot or periodic). The time event can be reused immediately after the call to QTimeEvt_disarm(). The function returns the status of the disarming operation: 1 if the time event has been actually disarmed and 0 if the time event has already been disarmed.

  • (11) The function QTimeEvt_rearm() reloads the down-counter ‘ctr’ with the specified number of clock ticks. The function returns the status of the time event at the moment of rearming: 1 if the time event was still armed and 0 if it was disarmed. In the latter case, the QTimeEvt_rearm() function arms the time event anew.

  • (12) The helper function QTimeEvt_arm_() inserts the time event into the linked list of armed timers. This function is used in the QTimeEvt_postIn() and QTimeEvt_postEvery() macros.

Note

An attempt to arm an already armed time event (one-shot or periodic) raises an assertion. If you're not sure that the time event is disarmed, call the QTimeEvt_disarm() function before reusing the time event.

Figure 7.7 shows the internal representation of armed and disarmed time events. QF chains all armed time events in a bidirectional linked list. The list is scanned from the head at every system clock tick. The list is not sorted in any way. Newly armed time events are always inserted at the head. When a time event gets disarmed, either automatically when a one-shot timer expires or explicitly when the application calls QTimeEvt_disarm(), the time event is simply removed from the list. Removing an object from a bidirectional list is a quick, deterministic operation. In particular, the list does not need to be rescanned from the head. Disarmed time events remain outside the list and don't consume any CPU cycles.

The System Clock Tick and the QF_tick() Function

To manage time events, QF requires that you invoke the QF_tick() function from a periodic time source called the system clock tick (see Chapter 6, “System Clock Tick”). The system clock tick typically runs at a rate between 10Hz and 100Hz.

Listing 7.19 shows the implementation of QF_tick(). This function is designed to be called from both the interrupt context and the task-level context, in case the underlying OS/RTOS does not give you access to the clock-tick ISR or you want to keep the ISRs very short. QF_tick() must always run to completion and never preempt itself. In particular, if QF_tick() runs in an ISR context, the ISR must not be allowed to preempt itself. In addition, QF_tick() should not be called from two different ISRs that could potentially preempt each other. When executed in a task context, QF_tick() should be called by only one task, ideally the highest-priority task.

Listing 7.19. QF_tick() function (file <qp>\qpc\qf\source\qf_tick.c)

  •         void QF_tick(void) {                                          /* see NOTE01 */

                  QTimeEvt *t;

                  QF_INT_LOCK_KEY_

  • (1)        QF_INT_LOCK_();

  • (2)        t = QF_timeEvtListHead_;       /* start scanning the list from the head */

  • (3)        while (t != (QTimeEvt *)0) {

  • (4)                if (--t->ctr == (QTimeEvtCtr)0) {   /* is time evt about to expire? */

  • (5)                      if (t->interval != (QTimeEvtCtr)0) { /* is it periodic timeout? */

  • (6)                             t->ctr = t->interval;               /* rearm the time event */

    (6)                     }

  • (7)                      else { /* one-shot timeout, disarm by removing it from the list */

  • (8)                             if (t == QF_timeEvtListHead_) {

  • (9)                                   QF_timeEvtListHead_ = t->next;

    (9)                             }

  • (10)                            else {

  • (11)                                   if (t->next != (QTimeEvt *)0) {  /* not the last event? */

  • (12)                                         t->next->prev = t->prev;

  •                                             }

  • (13)                                   t->prev->next = t->next;

    (13)                           }

  • (14)                            t->prev = (QTimeEvt *)0;         /* mark the event disarmed */

    (14)                      }

  • (15)                          QF_INT_UNLOCK_();/* unlock interrupts before calling QF service */

    (15)                     /* postFIFO() asserts internally that the event was accepted */

  • (16)                       QActive_postFIFO(t->act, (QEvent *)t);

    (16)                }

  • (17)                 else {

    (17)                       static uint8_t volatile dummy;

  • (18)                        QF_INT_UNLOCK_();

  • (19)                        dummy = (uint8_t)0;   /* execute a few instructions, see NOTE02 */

    (19)               }

  • (20)                 QF_INT_LOCK_();        /* lock interrupts again to advance the link */

  • (21)                 t = t->next;

    (21)         }

  • (22)         QF_INT_UNLOCK_();

    (22) }

  • (1) Interrupts are locked before accessing the linked list of time events.

  • (2) The internal QF variable QF_timeEvtListHead_ holds the head of the linked list.

  • (3) The loop continues until the end of the linked list is reached (see Figure 7.7).

  • (4) The down-counter of each time event is decremented. When the counter reaches zero, the time event expires.

  • (5) The ‘interval’ member is nonzero only for a periodic time event.

  • (6) The down-counter of a periodic time event is simply reset to the interval value. The time event remains armed in the list.

  • (7) Otherwise the time event is a one-shot and must be disarmed by removing it from the list.

  • (8-13) These lines of code implement the standard algorithm of removing a link from a bidirectional list.

  • (14) A time event is internally marked as disarmed by writing NULL to the ‘prev’ link.

  • (15) Interrupts can be unlocked after the bookkeeping of the linked list is done.

  • (16) The time event posts itself to the event queue of the active object.

  • (17) The else branch is taken when the time event is not expiring on this tick.

  • (18) Interrupts can be unlocked.

  • (19) On many CPUs, the interrupt unlocking takes effect only on the next machine instruction, which happens here to be an interrupt lock instruction (line (20)). The assignment of the volatile ‘dummy’ variable requires a few machine instructions, which the compiler cannot optimize away. This ensures that the interrupts actually get unlocked so that the interrupt latency stays low.

Note

The critical section lasts for only one time event, not for the whole list.

Arming and Disarming a Time Event

Listing 7.20 shows the helper function QTimeEvt_arm_() for arming a time event. This function is used inside the macros QTimeEvt_postIn() and QTimeEvt_postEvery() for arming a one-shot or periodic time event, respectively.

Listing 7.20. QTimeEvt_arm_() (file <qp>\qpc\qf\source\qte_arm.c)

  •        void QTimeEvt_arm_(QTimeEvt *me, QActive *act, QTimeEvtCtr nTicks) {

                  QF_INT_LOCK_KEY_

                  Q_REQUIRE((nTicks > (QTimeEvtCtr)0)  /* cannot arm a timer with 0 ticks */

                                      && (((QEvent *)me)->sig >= (QSignal)Q_USER_SIG)/*valid signal */

  • (1)                                   && (me->prev == (QTimeEvt *)0)   /* time evt must NOT be used */

    (1)                                && (act != (QActive *)0));  /* active object must be provided */

    (1)               me->ctr = nTicks;

  • (2)                me->prev = me;                                 /* mark the timer in use */

    (2)               me->act = act;

    (2)              QF_INT_LOCK_();

  • (3)               me->next = QF_timeEvtListHead_;

  • (4)               if (QF_timeEvtListHead_ != (QTimeEvt *)0) {

  • (5)                     QF_timeEvtListHead_->prev = me;

    (5)              }

  • (6)               QF_timeEvtListHead_ = me;

  •                        QF_INT_UNLOCK_();

           }

  • (1) The preconditions include checking that the time event is not already in use. An armed time event always has its ‘prev’ pointer set to a non-NULL value.

  • (2) The ‘prev’ pointer is initialized to point to self, to mark the time event in use (see also Figure 7.7).

  • (3) Interrupts are locked to insert the time event into the linked list. Note that until that point the time event is not armed, so it cannot unexpectedly change due to asynchronous tick processing.

  • (3-6) These lines of code implement the standard algorithm of inserting a link into a bidirectional list at the head position.

Listing 7.21 shows the function QTimeEvt_disarm() for explicitly disarming a time event.

Listing 7.21. QTimeEvt_disarm() (file <qp>\qpc\qf\source\qte_darm.c)

  •         uint8_t QTimeEvt_disarm(QTimeEvt *me) {

                   uint8_t wasArmed;

                   QF_INT_LOCK_KEY_

  • (1)        QF_INT_LOCK_();

  • (2)        if (me->prev != (QTimeEvt *)0) {   /* is the time event actually armed? */

    (2)              wasArmed = (uint8_t)1;

  • (3)              if (me == QF_timeEvtListHead_) {

  • (4)                    QF_timeEvtListHead_ = me->next;

    (4)             }

  • (5)              else {

  • (6)                     if (me->next != (QTimeEvt *)0) {   /* not the last in the list? */

  • (7)                            me->next->prev = me->prev;

    (7)                    }

  • (8)                     me->prev->next = me->next;

    (8)             }

  • (9)               me->prev = (QTimeEvt *)0;        /* mark the time event as disarmed */

  •                }

  •                else {                                  /* the time event was not armed */

  •                       wasArmed = (uint8_t)0;

  •                }

  •                QF_INT_UNLOCK_();

  • (10)        return wasArmed;

    (10) }

  • (1) Critical section is established right away.

  • (2) The time event is still armed if the ‘prev’ pointer is not NULL.

  • (3-8) These lines of code implement the standard algorithm of removing a link from a bidirectional list (compare also Listing 7.19(8-13)).

  • (9) A time event is internally marked as disarmed by writing NULL to the ‘prev’ link.

  • (10) The function returns the status: 1 if the time event was still armed at the time of the call and 0 if the time event was disarmed before the function QTimeEvt_disarm() was called. In other words, the return value of 1 ensures the caller that the time event has not been posted and never will be, because disarming takes effect immediately. Conversely, the return value of 0 informs the caller that the time event has been posted to the event queue of the recipient active object and was automatically disarmed.

The status information returned from QTimeEvt_disarm() could be useful in the state machine design. For example, consider a state machine fragment shown in Figure 7.8. The entry action to “stateA” arms a one-shot time event me->timer1. Upon expiration, the time event generates signal TIMER1, which causes some internal or regular transition. However, another event, say BUTTON_PRESS, triggers a transition to “stateB.” The events BUTTON_PRESS and TIMER1 are inherently set up to race each other and so it is possible that they arrive very close in time. In particular, when the BUTTON_PRESS event arrives, the TIMER1 event could potentially follow very shortly thereafter and might get queued as well. If that happens, the state machine receives both events. This might be a problem if, for example, the next state tries to reuse the time event for a different purpose.

Reusing a one-shot time event.

Figure 7.8. Reusing a one-shot time event.

Figure 7.8 shows the solution. The exit action from “stateA” stores the return value of QTimeEvt_disarm() in the extended state variable me->g1. Subsequently, the variable is used as a guard condition on transition TIMER1 in “stateB.” The guard allows the transition only if the me->g1 flag is set. When the flag is zero, it means that a stale TIMER1 event was already posted before disarming took effect. In this case the TIMER1 event merely sets the flag and is otherwise ignored; only the next TIMER1 instance represents the true timeout requested in “stateB.”

Native QF Event Queue

Many RTOSs natively support message queues, which provide a superset of functionality needed for event queues of active objects. QF is designed up front for easy integration of such external message queues. However, in case no such support exists or the available implementation is inefficient or inadequate, QF provides a robust and efficient native event queue that you can easily adapt to virtually any underlying operating system or kernel.

The native QF event queues come in two flavors, which share the same data structure (QEQueue) and initialization but differ significantly in behavior. The first variant is the event queue specifically designed and optimized for active objects (see Section 7.8.3). The implementation omits several commonly supported features of traditional message queues, such as variable-size messages (native QF event queues store only pointers to events), blocking on a full queue (QF event queue cannot block on insertion), and timed blocking on empty queues (QF event queues block indefinitely), to name just a few. In exchange, the native QF event queue implementation is small and probably faster than any full-blown message queue of an RTOS.

The other, simpler variant of the native QF event queue is a generic “raw” thread-safe queue not capable of blocking but useful for thread-safe event delivery from active objects to other parts of the system that lie outside the framework, such as ISRs or device drivers. I explain the “raw” queue in Section 7.8.4.

The QEQueue Structure

The QEQueue structure is used in both variants of the native QF event queues. Figure 7.9 shows the relationships between the various elements of the QEQueue structure and the ring buffer managed by the event queue. The available queue storage consists of the external, user-allocated ring buffer plus an extra location frontEvt inside the QEQueue structure itself. The QEQueue event queue holds only pointers to events (QEvent *), not the actual event instances.

As indicated by the dashed lines in Figure 7.9, all outgoing events must pass through the frontEvt data member. This extra location outside the ring buffer optimizes queue operation by allowing it to frequently bypass the buffering, because queues very often alternate between empty and nonempty states with just one event present at a time. In addition, the frontEvt pointer serves as a queue status indicator, with the NULL value of frontEvt indicating that the queue is empty. The indices head, tail, and end are relative to the ring pointer. Events are always extracted from the buffer at the tail index. New events are typically inserted at the head index. Inserting events at the head and extracting from the tail corresponds to FIFO queuing (the postFIFO() operation). QEQueue also allows inserting new events at the tail, which corresponds to LIFO queuing (the postLIFO() operation). Either way, the tail always decrements when an event is extracted, as does the head index when an event is inserted. The head and tail indices must “wrap around” to end once they reach 0. The effect is a counterclockwise movement of the indices around the ring buffer, as indicated by the arrow in Figure 7.9. Other data members of the QEQueue structure include the current number of free events in the buffer (nFree) and the minimum number of free events ever present in the buffer (nMin). The nMin member tracks the worst-case queue utilization (the low-watermark of free events in the queue), which provides a valuable data point for fine-tuning the ring buffer size.

The relationship between the elements of the QEQueue structure and the ring buffer.

Figure 7.9. The relationship between the elements of the QEQueue structure and the ring buffer.

Listing 7.22 shows the declaration of the QEQueue structure. The QEQueueCtr data type determines the dynamic range of the queue indices and counters. It is typedef’ed to uint8_t, uint16_t, or uint32_t, depending on the macro QF_EQUEUE_CTR_SIZE. You can define the macro QF_EQUEUE_CTR_SIZE in the QF port file qf_port.h in the correct port directory. If the macro is not defined, the default size of 1 byte is assumed, which results in the QEQueueCtr data type being typedef’ed to uint8_t (up to 255 events in the ring buffer).

Listing 7.22. QEQueue structure (file <qp>\qpc\include\qequeue.h)

    #ifndef QF_EQUEUE_CTR_SIZE
        #define QF_EQUEUE_CTR_SIZE 1
    #endif
    #if (QF_EQUEUE_CTR_SIZE == 1)
        typedef uint8_t QEQueueCtr;
    #elif (QF_EQUEUE_CTR_SIZE == 2)
        typedef uint16_t QEQueueCtr;
    #elif (QF_EQUEUE_CTR_SIZE == 4)
        typedef uint32_t QEQueueCtr;
    #else
        #error "QF_EQUEUE_CTR_SIZE defined incorrectly, expected 1, 2, or 4"
    #endif

    typedef struct QEQueueTag {
        QEvent const *frontEvt;   /* pointer to event at the front of the queue */
        QEvent const **ring;         /* pointer to the start of the ring buffer */
        QEQueueCtr end;  /* offset of the end of the ring buffer from the start */
        QEQueueCtr head;         /* offset to where next event will be inserted */
        QEQueueCtr tail;        /* offset of where next event will be extracted */
        QEQueueCtr nFree;           /* number of free events in the ring buffer */
        QEQueueCtr nMin;    /* minimum number of free events ever in the buffer */
    } QEQueue;

Initialization of QEQueue

Listing 7.23 shows the event queue initialization function QEQueue_init(). The function takes the preallocated contiguous storage for the ring buffer (an array of QEvent* pointers, qSto[]) and the length of the buffer qLen, which is the number of preallocated event pointers. The function sets the QEQueue data members to emulate an empty event queue. The body of the function is not protected with a critical section because the application should never access a queue before it is initialized.

Listing 7.23. QEQueue_init() (file <qp>\qpc\qf\source\qeq_init.c)

    void QEQueue_init(QEQueue *me, QEvent const *qSto[], QEQueueCtr qLen) {
        me->frontEvt = (QEvent *)0;                   /* no events in the queue */
        me->ring     = &qSto[0];
        me->end      = qLen;
        me->head     = (QEQueueCtr)0;
        me->tail     = (QEQueueCtr)0;
        me->nFree    = qLen;                             /* all events are free */
        me->nMin     = qLen;                              /* the minimum so far */
    }

Note that you can initialize an event queue with parameters qSto == NULL and qLen == 0. Such an event queue will still be able to hold one event because the frontEvt location also counts toward the queue capacity.

The Native QF Active Object Queue

The QEQueue structure is not quite complete to serve as the event queue of an active object because it does not provide any data member for implementing blocking of the active object thread when the queue is empty. Such a mechanism is always platform-specific and typically it is an operating-system primitive, such as a semaphore, a condition variable in POSIX, or a Win32 object that can be used with the WaitForSingleObject() and SetEvent() Win32 APIs.

The OS-specific blocking primitive is intentionally not included in the QEQueue structure to enable using this structure for both the active object event queue and the generic “raw” thread-safe queue discussed in the next section. Instead, the OS-specific thread-blocking primitive is included directly in the level QActive structure as the data member osObject of type QF_OS_OBJECT_TYPE (Listing 7.7(9)). The macro QF_OS_OBJECT_TYPE is obviously part of the platform abstraction layer and is defined differently for different QF ports. The osObject member is initialized in the platform-specific QActive_start() function (see Listing 7.9). You can think of the native QF active object queue as the aggregate of the QEQueue structure and the QActive.osObject data member.

The interface of the active object event queue consists of three functions: QActive_postFIFO(), QActive_postLIFO(), and QActive_get_(). The implementation of these functions is located in the files qa_fifo.c, qa_lifo.c, and qa_get_.c, respectively. These files should be included in the QF port only when the port uses the native event queue. Otherwise, the functions QActive_postFIFO(), QActive_postLIFO(), and QActive_get_() should be implemented differently, perhaps with the RTOS-specific message queue.

Listing 7.24 shows how the QActive_get_() function extracts events from the queue. This function is called only from the thread routine of the active object that owns this queue (see Listing 7.8(1)). You should never call QActive_get_() from the application-level code (hence the trailing underscore in the function name).

Listing 7.24. Extracting events from the event queue with QActive_get_() (file <qp>\qpc\qf\source\qa_get_.c)

    (1) QEvent const *QActive_get_(QActive *me) {
            QEvent const *e;
            QF_INT_LOCK_KEY_

            QF_INT_LOCK_();
    (2)     QACTIVE_EQUEUE_WAIT_(me);      /* wait for event queue to get an event */
    (3)     e = me->eQueue.frontEvt;
    (4)     if (me->eQueue.nFree != me->eQueue.end) {  /* any events in the buffer? */
                                                       /* remove event from the tail */
    (5)         me->eQueue.frontEvt = me->eQueue.ring[me->eQueue.tail];
    (6)         if (me->eQueue.tail == (QEQueueCtr)0) {   /* need to wrap the tail? */
    (7)             me->eQueue.tail = me->eQueue.end;                /* wrap around */
                }
    (8)         --me->eQueue.tail;
    (9)         ++me->eQueue.nFree;       /* one more free event in the ring buffer */
            }
    (10)    else {
    (11)        me->eQueue.frontEvt = (QEvent *)0;           /* queue becomes empty */
    (12)        QACTIVE_EQUEUE_ONEMPTY_(me);
            }
            QF_INT_UNLOCK_();
    (13)    return e;
        }

  • (1) The function QActive_get_() returns a read-only (const) pointer to an event that has been previously posted to the active object ‘me.’ The function always returns a valid event pointer.

  • (2) In some QF ports, the function must block until an event arrives. Blocking is always a platform-specific operation and the function handles it through the platform-specific macro QACTIVE_EQUEUE_WAIT_(). Note that this macro is invoked from within a critical section. The macro might unlock interrupts momentarily, but it must restore the critical section before it returns. I describe the implementation of the QACTIVE_EQUEUE_WAIT_() macro for POSIX threads in Chapter 8.

  • (3) At this point the queue cannot be empty anymore, because it either was not empty to begin with or it just received an event after the wait. The event pointer at the front of the queue (frontEvt) is copied into the temporary ‘e’ for delivery to the caller.

  • (4) If not all entries in the ring buffer are free (nFree != end), the buffer must contain some events.

  • (5) The event pointer is copied from the tail index in the buffer to the front event.

  • (6) The tail index is checked for a wraparound.

  • (7) If wraparound is required, the tail index is moved to the end of the buffer. This makes the buffer circular.

  • (8) The tail index is always decremented, including just after the wraparound. I've chosen to decrement the tail (and also the head) index because it leads to a more efficient implementation than incrementing the indices. The wraparound occurs in this case at zero rather than at the end. Comparing a variable to a constant zero is more efficient than any other comparison.

  • (9) The nFree counter is incremented to account for freeing one event in the buffer.

  • (10) Otherwise the queue is becoming empty.

  • (11) The front event is set to NULL.

  • (12) Additionally, a platform-specific macro QACTIVE_EQUEUE_ONEMPTY_() is called. The job of this macro is to inform the underlying kernel that the queue is becoming empty, which is required in some QF ports. I show the implementation of this macro for the QF port to the cooperative “vanilla” kernel discussed in Chapter 8 as well as in the QF port to the preemptive run-to-completion QK kernel that I cover in Chapter 10.

Listing 7.25 shows the implementation of the QActive_postFIFO() queue operation. This function is used for posting an event directly to the recipient active object. Direct event posting can be performed from any part of the application, including interrupts.

Listing 7.25. Inserting events into the event queue with QActive_postFIFO() (file <qp>\qpc\qf\source\qa_fifo.c)

         void QActive_postFIFO(QActive *me, QEvent const *e) {
             QF_INT_LOCK_KEY_
    (1)      QF_INT_LOCK_();
             if (e->dynamic_ != (uint8_t)0) {              /* is it a dynamic event? */
    (2)          ++((QEvent *)e)->dynamic_;       /* increment the reference counter */
             }
    (3)      if (me->eQueue.frontEvt == (QEvent *)0) {               /* empty queue? */
    (4)          me->eQueue.frontEvt = e;                  /* deliver event directly */
    (5)          QACTIVE_EQUEUE_SIGNAL_(me);               /* signal the event queue */
             }
    (6)      else {         /* queue is not empty, insert event into the ring-buffer */
                    /* the queue must be able to accept the event (cannot overflow) */
    (7)          Q_ASSERT(me->eQueue.nFree != (QEQueueCtr)0);
                                     /* insert event into the ring buffer (FIFO) */
    (8)          me->eQueue.ring[me->eQueue.head] = e;
    (9)          if (me->eQueue.head == (QEQueueCtr)0) {   /* need to wrap the head? */
    (10)             me->eQueue.head = me->eQueue.end;                /* wrap around */
                 }
    (11)         --me->eQueue.head;
    (12)         --me->eQueue.nFree;                 /* update number of free events */
    (13)         if (me->eQueue.nMin > me->eQueue.nFree) {
    (14)             me->eQueue.nMin = me->eQueue.nFree;        /* update min so far */
                 }
             }
             QF_INT_UNLOCK_();
         }

  • (1) The whole function body runs in a critical section.

  • (2) The reference count of a dynamic event is incremented to account for another outstanding reference to the event.

Note

Incrementing the reference count is essential and must be performed in every QActive_postFIFO() implementation, including implementations not based on the native QF event queue but, for example, on a message queue of an RTOS.

Note

The QF framework treats the inability to post an event as an error. This assertion is part of the event delivery guarantee policy. It's the application designer's responsibility to size the event queues adequately for the job at hand.

The QF framework also provides the QActive_postLIFO() queue operation. I don't discuss the code (located in the file <qp>\qpc\qf\source\qa_lifo.c) because it is very similar to QActive_postFIFO(), except that the event is inserted at the tail index rather than the head.

The “Raw” Thread-Safe Queue

The QEQueue structure can be used directly as the native QF “raw” thread-safe queue. The basic operations of the “raw” thread-safe queue are QEQueue_postFIFO(), QEQueue_postLIFO(), and QEQueue_get(). None of these functions can block. This type of queue is employed for deferring events (see Section 7.5.4) and also can be very useful for passing events from active objects to ISRs, as shown in Listing 7.26.

Listing 7.26. Using the “raw” thread-safe queue to send events to an ISR

        /* Application header file -----------------------------------------------*/
        #include "qequeue.h"

    (1) extern QEQueue APP_isrQueue;                          /* global "raw" queue */

    (2) typedef struct IsrEvtTag { /* event with parameters to be passed to the ISR */
            QEvent super;
            . . .
        } IsrEvt;

        /* ISR module ------------------------------------------------------------*/
    (3) QEQueue APP_isrQueue;                      /* definition of the "raw" queue */

        void interrupt myISR() {
            QEvent const *e;
            . . .
    (4)     e = QEQueue_get(&APP_isrQueue);    /* get an event from the "raw" queue */
    (5)     if (e != (QEvent *)0) {                             /* event available? */
    (6)         /* process the event e (could be dispatching to a state machine) */
                . . .
    (7)         QF_gc(e);                           /* explicitly recycle the event */
            }
            . . .
        }

        /* Active object module --------------------------------------------------*/
        QState MyAO_stateB(MyAO *me, QEvent const *e) {
            switch (e->sig) {
                . . .
                case SOMETHING_INTERESTING_SIG: {
                    IsrEvt *pe = Q_NEW(IsrEvt, ISR_SIG);
                    pe->… = …                       /* set the event attributes */
    (8)             QEQueue_postFIFO(&APP_isrQueue, (QEvent *)pe);
                    return (QState)0;
                }
                . . .
            }
            return (QState)&MyAO_stateA;
        }

        /* main module -----------------------------------------------------------*/
        static QEvent *l_isrQueueSto[10];  /* allocate a buffer for the "raw" queue */

        main() {
            . . .
                                                   /* initialize the "raw" queue */
    (9)     QEQueue_init(&APP_isrQueue, l_isrQueueSto, Q_DIM(l_isrQueueSto));
            . . .
        }

  • (1) In the application header file, you declare the external “raw” event queue so that various parts of the code can access the queue.

  • (2) In the same header file, you'll typically also declare all event types that the raw queue can accept.

  • (3) In the ISR module, you define the “raw” queue object.

  • (4) Inside an ISR, you call QEQueue_get() to get an event.

Note

The function QEQueue_get() uses internally a critical section of code. If you are using the simple unconditional interrupt-locking policy (see Section 7.3.2), you must be careful not to call QEQueue_get() with interrupts locked, as might be the case inside an ISR.

The actual implementation of the QEQueue functions QEQueue_postFIFO(), QEQueue_postLIFO(), and QEQueue_get() is very straightforward since no platform-specific macros are necessary. All these functions are reentrant because they preserve the integrity of the queue by using critical sections of code.

Native QF Memory Pool

In Section 7.5.2, I introduced the concept of an event pool—a fixed block–size heap specialized to hold event instances. Some RTOSs natively support such fixed block–size heaps (often called memory partitions or memory pools). However, many platforms don't. This section explains the native QF implementation of a memory pool based on the QMPool structure.

The native QF memory pool is generic and can be used in your application for storing any objects, not just events. All native QF memory pool functions are reentrant and deterministic. They can be used in any parts of the code, including the ISRs. Figure 7.10 explains the relationships between the various elements of the QMPool structure and the memory buffer managed by the memory pool.

The relationship between QMPool structure elements and the memory buffer.

Figure 7.10. The relationship between QMPool structure elements and the memory buffer.

The native QF memory pool requires the actual pool storage to be allocated externally and provided to the pool upon initialization. Internally, the memory pool tracks memory by dividing it into blocks of the requested size and linking all unused memory blocks in a singly-linked list (the free list originating at the QMPool.free pointer). This technique is standard for organizing stack-like data structures, where the structure is accessed from one end only (LIFO). QMPool also uses a handy trick to link free blocks together in the free list without consuming extra storage for the pointers. Because the blocks in the free list are not used for anything else, QMPool can reuse the blocks as linked list pointers. This use implies that the block size must be big enough to hold a pointer [Lafreniere 98, Labrosse 02]. Listing 7.27 shows the declaration of the QMPool structure.

Listing 7.27. QMPool structure (file <qp>\qpc\include\qmpool.h)

        typedef struct QMPoolTag {
    (1)     void *free;               /* the head of the linked list of free blocks */
    (2)     void *start;                   /* the start of the original pool buffer */
    (3)     void *end;                               /* the last block in this pool */
    (4)     QMPoolSize blockSize;                  /* maximum block size (in bytes) */
    (5)     QMPoolCtr nTot;                               /* total number of blocks */
    (6)     QMPoolCtr nFree;                     /* number of free blocks remaining */
    (7)     QMPoolCtr nMin;     /* minimum number of free blocks ever in this pool */
        } QMPool;

  • (1) The only data member strictly required for allocating and freeing blocks in the pool is the head of the free list ‘free.’ The other data members are for making the memory pool operations more robust.

  • (2,3) The start and end pointers are used as delimiters of the valid range of memory blocks managed by this pool. I have specifically added them to enable writing an assertion to ensure that every memory block returned to the pool is in the range of memory managed by the pool.

  • (4) The member blockSize holds the block size of this pool in bytes.

  • (5) The member nTot holds the total number of blocks in the pool. This member allows me to assert the invariant that the number of free blocks in the pool at any given time cannot exceed nTot.

  • (6) The member nFree holds the current number of free blocks in the pool.

  • (7) The member nMin holds the lowest number of free blocks ever present in the pool.

The QMPoolSize data type is typedef’ed as uint8_t, uint16_t, or uint32_t, configurable by the macro QF_MPOOL_SIZ_SIZE. The dynamic range of the QMPoolSize data type determines the maximum size of blocks that can be managed by the native QF event pool. Similarly, the QMPoolCtr data type is typedef’ed as uint8_t, uint16_t, or uint32_t, depending on the macro QF_MPOOL_CTR_SIZE. The dynamic range of the QMPoolCtr data type determines the maximum number of blocks that can be stored in the pool. The macros QF_MPOOL_SIZ_SIZE and QF_MPOOL_CTR_SIZE should be defined in the QF port file qf_port.h in the correct port directory. If the macros are not defined, the default size of 2 is assumed for both of them, which results in QMPoolSize and QMPoolCtr data types typedef’ed to uint16_t.

Initialization of the Native QF Memory Pool

You must initialize a memory pool before you can use it by calling the function QMPool_init(), to which you provide the pool storage, the size of the storage, and the block size managed by the pool. A general challenge in writing this function is portability, because storage allocation is intrinsically machine-dependent [Kernighan 88]. Perhaps the trickiest aspect here is the proper and optimal alignment of the blocks within the contiguous memory buffer. In particular, the alignment of blocks must be such that every free block can be treated as a pointer to the next block. The code in Listing 7.28 illustrates how to control the machine dependencies, at the cost of extensive but careful typecasting.

Listing 7.28. Initialization of the QF memory pool with QMPool_init() (file <qp>\qpc\qf\source\qmp_init.c)

         void QMPool_init(QMPool *me, void *poolSto,
                          uint32_t poolSize, QMPoolSize blockSize)
         {
             QFreeBlock *fb;
             uint32_t corr;
             uint32_t nblocks;
             /* The memory block must be valid
             * and the poolSize must fit at least one free block
             * and the blockSize must not be too close to the top of the dynamic range
             */
    (1)      Q_REQUIRE((poolSto != (void *)0)
    (2)          && (poolSize >= (uint32_t)sizeof(QFreeBlock))
    (3)          && ((QMPoolSize)(blockSize + (QMPoolSize)sizeof(QFreeBlock))
                     > blockSize));

             /*lint -e923                    ignore MISRA Rule 45 in this expression */
    (4)      corr = ((uint32_t)poolSto & ((uint32_t)sizeof(QFreeBlock) - (uint32_t)1));
    (5)      if (corr != (uint32_t)0) {                         /* alignment needed? */
    (6)          corr = (uint32_t)sizeof(QFreeBlock) - corr;/*amount to align poolSto*/
    (7)          poolSize -= corr;                 /* reduce the available pool size */
             }
             /*lint -e826 align the head of free list at the free block-size boundary*/
    (8)      me->free = (void *)((uint8_t *)poolSto + corr);

             /* round up the blockSize to fit an integer # free blocks, no division */
    (9)      me->blockSize = (QMPoolSize)sizeof(QFreeBlock);  /* start with just one */
    (10)     nblocks = (uint32_t)1;    /* # free blocks that fit in one memory block */
    (11)     while (me->blockSize < blockSize) {
    (12)         me->blockSize += (QMPoolSize)sizeof(QFreeBlock);
    (13)         ++nblocks;
             }
    (14)     blockSize = me->blockSize;      /* use the rounded-up value from now on */

             /* the pool buffer must fit at least one rounded-up block */
    (15)     Q_ASSERT(poolSize >= (uint32_t)blockSize);

             /* chain all blocks together in a free-list… */
    (16)     poolSize -= (uint32_t)blockSize;/*don't link the last block to the next */
    (17)     me->nTot  = (QMPoolCtr)1;         /* the last block already in the pool */
    (18)     fb        = (QFreeBlock *)me->free;/*start at the head of the free list */
    (19)     while (poolSize >= (uint32_t)blockSize) {     /* can fit another block? */
    (20)         fb->next = &fb[nblocks];   /* point the next link to the next block */
                 fb = fb->next;                         /* advance to the next block */
                 poolSize -= (uint32_t)blockSize;  /* reduce the available pool size */
                 ++me->nTot;                /* increment the number of blocks so far */
             }
    (21)     fb->next  = (QFreeBlock *)0;            /* the last link points to NULL */
    (22)     me->nFree = me->nTot;                            /* all blocks are free */
    (23)     me->nMin  = me->nTot;              /* the minimum number of free blocks */
    (24)     me->start = poolSto;          /* the original start of this pool buffer */
    (25)     me->end   = fb;                          /* the last block in this pool */
         }

Note

Many CPU architectures put special requirements on the proper alignment of pointers. For example, the ARM processor requires a pointer to be allocated at an address divisible by 4. Other CPUs, such as the Pentium, can accept pointers allocated at odd addresses but perform substantially better when pointers are aligned at addresses divisible by 4.

To achieve better portability and optimal alignment of blocks, the QF memory pool implementation uses internally a helper structure QFreeBlock, which represents a node in the linked list of free blocks. QFreeBlock is declared as follows in the file <qp>\qpc\qf\source\qf_pkg.h:

    typedef struct QFreeBlockTag {
        struct QFreeBlockTag *next;              /* link to the next free block */
    } QFreeBlock;

  • (1) The precondition requires a valid pointer to the pool storage.

  • (2) The pool size must be able to fit at least one free block. Later, after aligning the pool storage and rounding up the block size, this assertion will be strengthened (see label (15)).

  • (3) The argument blockSize must not be too close to the upper limit of its dynamic range, to avoid unexpected wraparound at rounding up the block size.

  • (4) The expression assigned to corr computes the misalignment of the provided storage poolSto with respect to free block (pointer) size boundary.

  • (5) Nonzero value of corr indicates misalignment of the pool storage.

  • (6) Now corr holds the correction needed to align poolSto.

  • (7) The available pool size is reduced by the amount of the correction.

  • (8) The head of the free list is set at the start of the pool storage aligned at the nearest free block boundary.

  • (9-14) To achieve alignment of all blocks in the pool, I round up the specified blockSize to the nearest integer multiple of the QFreeBlock size. With the head of the free list already aligned at the QFreeBlock size and all blocks being integer multiples of the QFreeBlock size, I can be sure that every block is aligned as well. Note that instead of computing the number of free blocks going into the blockSize as (nblocks = (blockSize + sizeof(QFreeBlock) – 1)/sizeof(QFreeBlock) + 1), I compute the value iteratively in a loop. I decided to do this to avoid integer division, which would be the only division in the whole QP code base. On many CPUs division requires a sizable library function, and I didn't want to pull in this code.

  • (15) For the correctness of the following algorithm, the pool must fit at least one rounded-up block.

  • (16) The very last memory block is not linked to the next, so it is excluded.

  • (17) The total count of blocks in the pool starts with one, to account for the last block.

  • (18) The free block pointer starts at the head of the free list.

  • (19) The loop chains all free blocks in a single-linked list until the end of the provided buffer storage.

  • (20) The next link is set to point nblocks QFreeBlock units ahead, where nblocks is the number of QFreeBlock units per rounded-up memory block, computed in steps (10)-(13).

  • (21) The last link is explicitly pointed to NULL.

  • (22,23) Initially, all blocks in the pool are free.

  • (24) The original pool buffer pointer is stored in me->start.

  • (25) The pointer to the last block is stored in me->end.

Obtaining a Memory Block from the Pool

The implementation of the QMPool_get() function shown in Listing 7.29 is straightforward. The function returns a pointer to a new memory block or NULL if the pool runs out of blocks. This means that when you use QMPool directly as a general-purpose memory manager, you must validate the pointer returned from QMPool_get() before using it in your code. Note, however, that when QF uses QMPool internally as the event pool, the framework asserts that the pointer is valid (see Listing 7.12(3) in Section 7.5.2). QF considers running out of events in an event pool as an error.

Listing 7.29. Obtaining a block from a pool with QMPool_get() (file <qp>\qpc\qf\source\qmp_get.c)

    void *QMPool_get(QMPool *me) {
        QFreeBlock *fb;
        QF_INT_LOCK_KEY_

        QF_INT_LOCK_();
        fb = (QFreeBlock *)me->free;                /* get a free block or NULL */
        if (fb != (QFreeBlock *)0) {                   /* free block available? */
            me->free = fb->next;     /* adjust list head to the next free block */
            --me->nFree;                                 /* one less free block */
            if (me->nMin > me->nFree) {
                me->nMin = me->nFree;            /* remember the minimum so far */
            }
        }
        QF_INT_UNLOCK_();
        return fb;            /* return the block or NULL pointer to the caller */
    }

Recycling a Memory Block Back to the Pool

Listing 7.30 shows the QMPool_put() function for recycling blocks back to the pool. The most interesting aspects of this implementation are the preconditions. The assertion at label (1) makes sure that the recycled block pointer lies within the range of the memory buffer managed by the pool (see Figure 7.10). The assertion at label (2) checks that the number of free blocks is less than the total number of blocks (a new block is just about to be inserted into the pool).

Listing 7.30. Recycling a block back to the pool with QMPool_put() (file <qp>\qpc\qf\source\qmp_put.c)

  •      void QMPool_put(QMPool *me, void *b) {

               QF_INT_LOCK_KEY_

  • (1)      Q_REQUIRE((me->start <= b) && (b <= me->end)       /*  must be in range */

  • (2)                && (me->nFree < me->nTot));  /* # free blocks must be < total */

  •             QF_INT_LOCK_();

               ((QFreeBlock *)b)->next = (QFreeBlock *)me->free;/* link into free list */

               me->free = b;                       /* set as new head of the free list */

               ++me->nFree;                        /* one more free block in this pool */

               QF_INT_UNLOCK_();

    }

Note

The C standard guarantees meaningful pointer comparisons, such as the precondition (1) in Listing 7.30, only if compared pointers point to the same array. Strictly speaking, this is only the case when the pointer ‘b’ is indeed in range. When pointer ‘b’ is out of range, the comparison might not be meaningful, and theoretically the precondition might not catch the foreign block being recycled into the pool.

Native QF Priority Set

The QF native priority set is generally useful for representing sets of up to 64 elements numbered 1..64. For example, you can use such a set to represent groups of GPS satellites (numbered 1..32) or any other elements. The set provides deterministic and efficient operations for inserting, removing, and testing elements as well as for finding the largest element in the set. The latter operation is very helpful for quickly determining the highest-priority active object ready to run, and I use it inside the cooperative “vanilla” kernel (see Section 7.11) and also inside the preemptive run-to-completion QK kernel (see Chapter 10). The QF priority set implementation is adapted from the algorithm described in [Sathe 88] and [Labrosse 02]. Listing 7.31 shows the declaration of the QPSet64 data structure.

Listing 7.31. QPSet64 structure

  • typedef struct QPSet64Tag {

           uint8_t bytes;          /* condensed representation of the priority set */

           uint8_t bits[8];           /* bitmasks representing elements in the set */

    • } QPSet64;

Figure 7.11 graphically summarizes the semantics of QPSet64 data members. The bits of the array QPSet64.bits[8] correspond to the set elements as follows:

  • QPSet64.bits[0] represents elements 1..8

  • QPSet64.bits[1] represents elements 9..16

  • . . .

  • QPSet64.bits[7] represents elements 57..64

Dependency between QPSet64.bits[] and QPSet64.bytes.

Figure 7.11. Dependency between QPSet64.bits[] and QPSet64.bytes.

In addition, to speed up access to the bitmasks, the redundant summary of the bitmasks is stored in the member QPSet64.bytes with the following semantics of the bits:

  • bit 0 in QPSet64.bytes is 1 when any bit in QPSet64.bits[0] is 1

  • bit 1 in QPSet64.bytes is 1 when any bit in QPSet64.bits[1] is 1

  • . . .

  • bit 7 in QPSet64.bytes is 1 when any bit in QPSet64.bits[7] is 1

With this data representation, all operations on the set are fast and deterministic, meaning that the operations always take the same number of CPU cycles to execute, regardless of how many elements are in the set. All QPSet64 operations are implemented “inline” as macros to avoid the overhead of a function call.

For example, determining whether the set is not empty is remarkably simple:

  • #define QPSet64_notEmpty(me_) ((me_)->bytes != (uint8_t)0)

Also, finding the largest element in the set is deterministic and looks as follows:

  • #define QPSet64_findMax(me_, n_) do { \

          (n_) = (uint8_t)(QF_log2Lkup[(me_)->bytes] - 1); \

          (n_) = (uint8_t)(((n_) << 3) + QF_log2Lkup[(me_)->bits[n_]]); \

  • } while(0)

The QPSet64_findMax() macro assumes that the set ‘me_’ is not empty. It assigns the number of the largest element in the set to the parameter ‘n_’. The algorithm uses the binary logarithm lookup table (see Figure 7.6) twice: first on the QPSet64.bytes summary bitmask, to find the index of the highest nonzero byte in the bits[] array, and a second time on that QPSet64.bits[n_] bitmask, to determine the most significant 1 bit within the byte. The largest set element is the bit number returned by the second lookup (a number in the range 1..8) plus the byte index multiplied by 8, to account for the byte position in the bits[] array. For example, if the largest element is 42, the first lookup yields byte index 5 and the second lookup yields bit number 2 within bits[5], so the macro computes (5 << 3) + 2 == 42.

Inserting an element ‘n_’ into the set ‘me_’ is implemented as follows:

  • #define QPSet64_insert(me_, n_) do { \

          (me_)->bits[QF_div8Lkup[n_]] |= QF_pwr2Lkup[n_]; \

          (me_)->bytes |= QF_pwr2Lkup[QF_div8Lkup[n_] + 1]; \

  • } while(0)

Finally, here is the macro for removing an element ‘n_’ from the set ‘me_’:

  • #define QPSet64_remove(me_, n_) do { \

          (me_)->bits[QF_div8Lkup[n_]] &= QF_invPwr2Lkup[n_]; \

           if ((me_)->bits[QF_div8Lkup[n_]] == (uint8_t)0) { \

                 (me_)->bytes &= QF_invPwr2Lkup[QF_div8Lkup[n_] + 1]; \

           } \

  • } while(0)

Native Cooperative “Vanilla” Kernel

QF contains a simple cooperative “vanilla” kernel, which works as I described in Section 6.3.7 in Chapter 6. The “vanilla” kernel is implemented in two files: the qvanilla.h header file located in the <qp>\qpc\include\ directory and the qvanilla.c source file found in the <qp>\qpc\qf\source\ directory.

The “vanilla” kernel operates by constantly polling all event queues of active objects in an endless loop. The kernel always selects the highest-priority active object ready to run, which is the highest-priority active object with a nonempty event queue (see Figure 6.8 in Chapter 6). The scheduler maintains the global status of all event queues in the application in the priority set called the QF_readySet_. As shown in Figure 7.12, QF_readySet_ represents a “ready-set” of all nonempty event queues in the system. For example, an element number ‘p’ is present in the ready-set if and only if the event queue of the active object with priority ‘p’ is nonempty. With this representation, posting an event to an empty queue with priority ‘p’ inserts the element number ‘p’ to the QF_readySet_ set. Conversely, retrieving the last event from the queue with priority ‘q’ removes the element number ‘q’ from the ready-set QF_readySet_.

Representing state of all event queues in the QF_readySet_ priority set.

Figure 7.12. Representing state of all event queues in the QF_readySet_ priority set.

7.11.1 The qvanilla.c Source File

Listing 7.32 shows the complete implementation of the “vanilla” kernel.

Listing 7.32. The “vanilla” kernel implementation (<qp>\qpc\qf\source\qvanilla.c)

  • (1) #include "qf_pkg.h"

    (1) #include "qassert.h"

    (1) Q_DEFINE_THIS_MODULE(qvanilla)

    (1) /* Package-scope objects -----------------------------------------------*/

  • (2) QPSet64 volatile QF_readySet_;   /* QF-ready set of active objects */

    (2) /*.....................................................................*/

    (2) void QF_init(void) {

    (2)        /* nothing to do for the “vanilla” kernel */

    (2) }

    (2) /*.....................................................................*/

  • (3) void QF_stop(void) {

    (3)        /* nothing to cleanup for the “vanilla” kernel */

    (3)        QF_onCleanup();                                     /* cleanup callback */

    (3) }

    (3) /*.....................................................................*/

  • (4) void QF_run(void) {                                           /* see NOTE01 */

    (4)        uint8_t p;

    (4)        QActive *a;

    (4)        QEvent const *e;

    (4)        QF_INT_LOCK_KEY_

  • (5)         QF_onStartup();                       /* invoke the QF startup callback */

  • (6)         for (;;) {                                       /* the background loop */

  • (7)                QF_INT_LOCK_();

  • (8)                if (QPSet64_notEmpty(&QF_readySet_)) {

  • (9)                      QPSet64_findMax(&QF_readySet_, p);

  • (10)                      a = QF_active_[p];

  • (11)                      QF_INT_UNLOCK_();

  • (12)                      e = QActive_get_(a);    /* get the next event for this AO */

  • (13)                      QF_ACTIVE_DISPATCH_(&a->super, e);  /* dispatch to the AO */

  • (14)                      QF_gc(e); /* determine if event is garbage and collect it if so */

    (14)              }

  • (15)               else {                        /* all active object queues are empty */

    (15) #ifndef QF_INT_KEY_TYPE

  • (16)                      QF_onIdle();                                      /* see NOTE02 */

    (16) #else

  • (17)                      QF_onIdle(intLockKey_);                           /* see NOTE02 */

    (17) #endif                                                   /* QF_INT_KEY_TYPE */

    (17)              }

    (17)       }

    (17) }

    (17) /*.....................................................................*/

  • (18) void QActive_start(QActive *me, uint8_t prio,

    (18)                                   QEvent const *qSto[], uint32_t qLen,

    (18)                                   void *stkSto, uint32_t stkSize,

    (18)                                   QEvent const *ie)

    (18) {

  • (19)         Q_REQUIRE(((uint8_t)0 < prio) && (prio <= (uint8_t)QF_MAX_ACTIVE)

    (19)                           && (stkSto == (void *)0));   /* does not need per-actor stack */

    (19)       (void)stkSize;         /* avoid the “unused parameter” compiler warning */

  • (20)         QEQueue_init(&me->eQueue, qSto, (QEQueueCtr)qLen);/* initialize QEQueue */

  • (21)         me->prio = prio;           /* set the QF priority of this active object */

  • (22)         QF_add_(me);                     /* make QF aware of this active object */

  • (23)         QF_ACTIVE_INIT_(&me->super, ie);          /* execute initial transition */

  •            }

  •            /*.....................................................................*/

  •            void QActive_stop(QActive *me) {

  •                   QF_remove_(me);

  •            }

  • (1) As every QF source file, the qvanilla.c file includes the wider “package-scope” QF interface qf_pkg.h, located in <qp>\qpc\qf\source\. The qf_pkg.h header file includes the platform-specific QF port header file qf_port.h, but it additionally defines some internal macros and objects shared only internally within QF.

  • (2) QF_readySet_ priority set represents the ready-set of the scheduler. I declared it volatile to inform the compiler to never cache this variable because it can change unexpectedly in interrupts (e.g., when ISRs post or publish events).

  • (3) The function QF_stop() stops execution of the QF framework. In the case of the “vanilla” kernel, this function has nothing to do except invoke the QF_onCleanup() callback function to give the application a chance to clean up and exit to the underlying operating system (e.g., consider a “vanilla” kernel running on top of DOS). I summarize all QF callback functions in Section 8.1.8 in Chapter 8.

  • (4) Applications call the function QF_run() from main() to transfer the control to the framework. This function implements the entire “vanilla” kernel.

  • (5) The QF_onStartup() callback function configures and starts interrupts. This function is typically implemented at the application level (in the BSP). I summarize all QF callback functions in Section 8.1.8 in Chapter 8.

  • (6) This is the event loop of the “vanilla” kernel.

  • (7) Interrupts are locked to access the QF_readySet_ ready-set.

  • (8) If the ready-set QF_readySet_ is not empty, the “vanilla” kernel has some events to process.

  • (9) The QF priority set quickly discovers the highest-priority nonempty event queue, as described in Section 7.10.

  • (10) The active object pointer ‘a’ is resolved through the QF_active_[] priority-to-active object lookup table maintained internally by QF.

  • (11) Interrupts can be unlocked.

  • (12-14) These are the three steps of the active object thread (see Listing 7.8).

  • (15) The else branch is taken when all active object event queues are empty, which is by definition the idle condition of the “vanilla” kernel.

  • (16,17) The “vanilla” kernel calls the QF_onIdle() callback function to give the application a chance to put the CPU to a low-power sleep mode or to perform other processing (e.g., software-tracing output; see Chapter 11). The QF_onIdle() function is typically implemented at the application level (in the BSP). Note that the signature of QF_onIdle() depends on the critical section mechanism you choose. The function takes no parameters when the simple “unconditional interrupt unlocking” policy is used but needs the interrupt status parameter when the “saving and restoring interrupt status” policy is used (see Section 7.3).

Note

Most MCUs provide software-controlled low-power sleep modes, which are designed to reduce power dissipation by gating the clock to the CPU and various peripherals. To ensure a safe transition to a sleep mode, the “vanilla” kernel calls QF_onIdle() with interrupts locked. The QF_onIdle() function must always unlock interrupts internally, ideally atomically with the transition to a sleep mode.

Figure 7.13 shows a typical execution scenario in the “vanilla” kernel. As long as events are available, the event loop calls various active objects to process the events in run-to-completion fashion. When all event queues run out of events, the event loop calls the QF_onIdle() function to give the application a chance to switch the MCU to a low-power sleep mode. The “vanilla” kernel must invoke QF_onIdle() with interrupts locked. If interrupts were unlocked after the event loop determines that the ready-set is empty (Listing 7.32(8)), but before calling QF_onIdle() (where the switch to the low-power mode actually takes place), an interrupt could preempt the event loop at this exact point and an ISR could post new events to active objects, thus invalidating the idle condition.

Entering low-power sleep modes in the “vanilla” kernel.

Figure 7.13. Entering low-power sleep modes in the “vanilla” kernel.

Due to the simplistic nature of the “vanilla” kernel, the event loop always resumes exactly at the point it was interrupted, so the event loop would enter the low-power sleep mode even though events would be waiting for processing! The MCU would then remain stopped for a nondeterministic period of time until the next interrupt wakes it up. Thus unlocking interrupts before transitioning to a low-power state opens a time window for a race condition between any enabled interrupt and the transition to the low-power mode.

Entering a sleep mode while interrupts are disabled poses a chicken-and-egg problem for waking the system up, because only an interrupt can terminate the low-power sleep mode. To operate in the “vanilla” kernel, the MCU must allow entering the low-power sleep mode and enabling the interrupts at the same time, without creating the race condition described above.

Many MCUs indeed allow such an atomic transition to the sleep mode. Other MCUs support multiple levels of disabling interrupts and can accomplish low-power transitions with interrupts disabled at one level. Yet another class of MCUs doesn't provide any way of entering the low-power mode with interrupts disabled and requires different approaches. Refer to the ESD article “Use an MCU's low-power modes in foreground/background systems” [Samek 07b] for an overview of safe sleep mode transitions in various popular MCUs.

7.11.2 The qvanilla.h Header File

The qvanilla.h header file, shown in Listing 7.33, integrates the “vanilla” kernel into the QF framework. The most important function of this header file is to codify the updates to the ready-set (QF_readySet_) as events are posted and removed from the active object event queues.

Listing 7.33. The “vanilla” kernel interface (<qp>\qpc\include\qvanilla.h)

  •           #ifndef qvanilla_h

             #define qvanilla_h

  • (1) #include "qequeue.h"    /* "Vanilla" kernel uses the native QF event queue  */

  • (2) #include "qmpool.h"     /* "Vanilla" kernel uses the native QF memory pool  */

  • (3) #include "qpset.h"      /* "Vanilla" kernel uses the native QF priority set */

    (3)                     /* the event queue and thread types for the "Vanilla" kernel */

  • (4) #define QF_EQUEUE_TYPE              QEQueue

    (4)                                              /* native QF event queue operations */

  • (5) #define QACTIVE_EQUEUE_WAIT_(me_) \

    (5)        Q_ASSERT((me_)->eQueue.frontEvt != (QEvent *)0)

  • (6) #define QACTIVE_EQUEUE_SIGNAL_(me_) \

    (6)        QPSet64_insert(&QF_readySet_, (me_)->prio)

  • (7) #define QACTIVE_EQUEUE_ONEMPTY_(me_) \

    (7)        QPSet64_remove(&QF_readySet_, (me_)->prio)

    (7)                                               /* native QF event pool operations */

  • (8) #define QF_EPOOL_TYPE_                         QMPool

  • (9) #define QF_EPOOL_INIT_(p_, poolSto_, poolSize_, evtSize_) \

    (9)        QMPool_init(&(p_), poolSto_, poolSize_, evtSize_)

  • (10) #define QF_EPOOL_EVENT_SIZE_(p_)      ((p_).blockSize)

  • (11) #define QF_EPOOL_GET_(p_, e_)            ((e_) = (QEvent *)QMPool_get(&(p_)))

  • (12) #define QF_EPOOL_PUT_(p_, e_)            (QMPool_put(&(p_), e_))

  • (13) extern QPSet64 volatile QF_readySet_;    /** QF-ready set of active objects */

  •            #endif                                                         /*qvanilla_h*/

  • (1) The “vanilla” kernel uses the native QF event queue, so it needs to include the qequeue.h header file.

  • (2) The “vanilla” kernel uses the native QF memory pool, so it needs to include the qmpool.h header file.

  • (3) The “vanilla” kernel uses the native QF priority set, so it needs to include the qpset.h header file.

  • (4) The “vanilla” kernel uses QEQueue as the event queue for active objects (see also Listing 7.7(8)).

  • (5) The “vanilla” kernel never blocks. It calls QActive_get_() only when it knows for sure that the event queue contains at least one event (see Listing 7.32(12)). Since this is a certainty in this type of kernel, the QACTIVE_EQUEUE_WAIT_() macro (see Listing 7.24(2)) simply asserts that the event queue is indeed not empty.

  • (6) The macro QACTIVE_EQUEUE_SIGNAL_() is called from QActive_postFIFO() and QActive_postLIFO() when an event is posted to an empty queue (see Listing 7.25(5)). This is exactly when the priority of the active object needs to be inserted into the ready-set QF_readySet_. Note that QF_readySet_ is modified within a critical section.

  • (7) The macro QACTIVE_EQUEUE_ONEMPTY_() is called from QActive_get_() when the queue is becoming empty (see Listing 7.24(12)). This is exactly when the priority of the active object needs to be removed from the ready-set QF_readySet_. Note that QF_readySet_ is modified within a critical section.

  • (8-12) The “vanilla” kernel uses QMPool as the QF event pool. The platform abstraction layer (PAL) macros are set to access the QMPool operations (see Section 7.9).

  • (13) The QF_readySet_ is declared as volatile because it can change asynchronously in an ISR.

QP Reference Manual

The source code available from the companion Website to this book at www.quantumleaps.com/psicc2/ contains the complete “QP Reference Manual” in HTML and CHM-Help formats (see Figure 7.14). The Reference Manual has been generated by Doxygen (www.doxygen.org), which is an open-source documentation-generation system for C, C++, Java, and other languages. The HTML documentation is found in <qp>\qpc\doxygen\html\, while the CHM Help format is located in <qp>\qpc\qpc.chm.

Screen shots of the “QP/C Reference Manual,” which is available in HTML and CHM Help formats.

Figure 7.14. Screen shots of the “QP/C Reference Manual,” which is available in HTML and CHM Help formats.

Note

The “QP/C++ Reference Manual” for the QP/C++ version is located in <qp>\qpcpp\.

The “QP Reference Manual” is quite detailed. Every file, structure (class), function, macro, and typedef is documented. The Doxygen tool does a superb job in cross-referencing the manual, so you can find information quickly. The manual also contains plenty of examples and useful code snippets that you can conveniently cut and paste into your own code. Finally, if you choose to modify the QP source code, you can regenerate the Doxygen documentation by yourself because I provide the Doxyfile.

Summary

QF is a generic, portable, scalable, lightweight, deterministic, real-time framework for embedded systems. QF supports many advanced features, such as “zero-copy” event management, publish-subscribe event delivery, and automatic garbage collection for events. All QF code is written in portable ANSI-C (or the Embedded C++ subset in the case of QF/C++), with processor-specific, compiler-specific, and operating system-specific code abstracted into a clearly defined platform abstraction layer (PAL). High portability was one of the main concerns and challenges in designing and implementing QF.

QF can run on “bare-metal” CPUs, MCUs, and DSPs, completely replacing a traditional RTOS. The framework can also work with a conventional OS/RTOS to take advantage of the existing device drivers, communication stacks, and legacy code.

Overall, QF represents a minimal and efficient realization of the active object–based computing model. The framework design internally avoids all potentially risky or nondeterministic programming techniques, but it does not limit the application designer to only real-time embedded systems. The QF framework has a small memory footprint of about 2-4KB, including the QEP hierarchical event processor, which is of the same order as a small, bare-bones RTOS. QF has been successfully used in hundreds of commercial products in consumer, medical, industrial, wireless, networking, research, defense, robotics, automotive, space exploration, and many other application types worldwide.
