5.1. The Kernel

The kernel is the core of the eCos system. The kernel provides the standard functionality expected in an RTOS, such as interrupt and exception handling, scheduling, threads, and synchronization. These standard functional components that comprise the kernel are fully configurable under the eCos system to meet your specific needs. The eCos kernel is implemented in C++, allowing applications written in C++ to interface directly with the kernel; however, no official C++ kernel API is provided.

There is a configuration option that enables a C kernel API. The kernel API functions are defined in the file kapi.h. The eCos kernel also supports interfacing to standard μITRON and POSIX compatibility layers. Further information about the compatibility layers can be found in Chapter 8, Additional Functionality and Third-Party Contributions.

Several criteria were the focus of eCos kernel development to allow it to meet its real-time goals:

  • Interrupt latency— the time taken to respond to an interrupt and begin execution of an ISR is kept low and deterministic.

  • Dispatch latency— the time taken from when a thread becomes ready to run to the point it begins execution is kept low and deterministic.

  • Memory footprint— the memory resources required for both code and data are kept minimal and deterministic for a given system configuration. Dynamic memory allocation is configurable in the core components to ensure that the embedded system does not run out of memory.

  • Deterministic kernel primitives— the execution of kernel operations is predictable, allowing an embedded system to meet real-time requirements.

The performance measurements of these real-time criteria can be found in the online documentation at:

http://sources.redhat.com/ecos/docs.html

Unlike most APIs, the eCos kernel API does not return standard error codes from its functions. Error return codes help ensure that an application is using the functions correctly; however, in an embedded system, processing error return codes can cause a number of problems, such as consuming valuable processor cycles and code space to check each return value. In addition, in an embedded system there is typically no way to recover from certain errors, so the application would be halted anyway.

Instead, the eCos kernel provides assertions that can be enabled or disabled within the eCos package. Typically, assertions are enabled during debugging, allowing the kernel functions to perform certain error checking. If a problem is discovered, an assertion failure is reported and the application is terminated. This allows you to debug the problem using the various debugging facilities provided. After the debug process is complete, assertions can be disabled in the kernel package. This approach has several advantages: it limits error-checking overhead within the kernel functions, eliminates the need for application error checking, and, if an error occurs, halts the application so the problem can be debugged immediately rather than relying on the check of a return code. Additional information about assertions and tracing can be found in Chapter 7, Other eCos Architecture Components.

The kernel components offer methods to ease debugging. One such method, enabled by configuration options, is kernel instrumentation. Instrumentation allows the kernel to invoke routines whenever certain events occur; these routines write event records into a circular buffer for later analysis. Each event record includes a timestamp, a record type, and other supporting data that can be used for kernel debugging or analysis of time-critical kernel events.

5.1.1. Kernel Directory Structure

The kernel package is located in the repository under the kernel subdirectory. A snapshot of the kernel source file directories, located under kernel/current/src, is shown in Figure 5.1. The common subdirectory consists of the implementations for the clock, exception, thread, and timer classes, as well as the kernel C API.

Figure 5.1. eCos kernel source files directory structure.


The debug subdirectory includes the interface calls from a ROM monitor into the kernel, allowing thread-level debugging. The instrumentation code, which allows kernel event logging, is found under the instrmnt subdirectory. The intr subdirectory contains the kernel interrupt handling class implementation.

Next, the scheduler code, be it bitmap, lottery (which is experimental), or multilevel queue, is in the sched subdirectory. Finally, the sync subdirectory contains the semaphore, flag, message box, condition variable, and mutex synchronization primitive class implementations.

5.1.2. Kernel Startup

The kernel startup procedure is invoked from the HAL after all hardware initialization is complete, as shown in Figure 2.3 in Chapter 2. The last step in Figure 2.3 is to call cyg_start, which is the beginning of the kernel startup procedure. The kernel startup procedure is contained in one core function, cyg_start, which calls other default startup functions to handle various initialization tasks. These default functions are placeholders that you can override with the initialization needed for a specific application, simply by providing a function of the same name in the application code.

NOTE

The function cyg_start can also be overridden; however, this should rarely be done. The kernel startup procedure provides sufficient override points for the installation of application-specific initialization code.


Code Listing 5.1 shows the function prototypes used to override the different kernel startup routines.

Code Listing 5.1. Kernel startup function prototypes.
void cyg_start( void );

void cyg_prestart( void );

void cyg_package_start( void );

void cyg_user_start( void );

The kernel startup procedure is shown in Figure 5.2. The core kernel startup function, cyg_start, is located in the file startup.cxx under the infra package subdirectory.

Figure 5.2. Kernel startup procedure.


The next function called from within cyg_start is cyg_prestart. This default function is located in the file prestart.cxx under the infra subdirectory. The prestart function does not perform any initialization tasks. cyg_prestart is the first override point in the kernel startup process should any initialization need to be performed prior to other system initialization.

Next, cyg_package_start is called. This function is located in the file pkgstart.cxx under the infra subdirectory. The cyg_package_start function allows other packages, such as μITRON and ISO C library compatibility, to perform their initialization prior to invoking the application's start function. For example, if the μITRON compatibility layer package is configured for use in the system, the function cyg_uitron_start is called for initialization. It is possible to override this function if your own package needs initialization; however, you must be sure to invoke the initialization code for the packages included in the configuration.
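
For example, a minimal sketch of overriding cyg_package_start might look like the following. The routine my_package_init is a hypothetical initialization function for your own package, and the extern declaration of cyg_uitron_start stands in for the proper μITRON package header; the call preserves the initialization that the default implementation would have performed when the μITRON package is configured.

extern void cyg_uitron_start( void );   /* normally declared by the uITRON package headers */

void my_package_init( void )
{
    /* Hypothetical initialization for your own package would go here. */
}

void cyg_package_start( void )
{
    /* Perform initialization for your own package. */
    my_package_init();

    /* Invoke the initialization code for the other packages included
       in the configuration, as the default implementation would. */
    cyg_uitron_start();
}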

The function cyg_user_start is invoked next. This is the normal application entry point. A default for this function, which does not perform any tasks, is provided in the file userstart.cxx under the infra subdirectory; therefore, it is not necessary to provide this function in your application. The cyg_user_start function is used instead of a main function.

NOTE

The function main can be used as the user application starting point if the ISO C library compatibility package is included in the configuration. To accommodate this, the ISO C library provides a default cyg_user_start function that is called if none is supplied by the user application. This default cyg_user_start function creates a thread that then calls the user application main function.


It is recommended that cyg_user_start be used to perform any application-specific initialization: create threads, create synchronization primitives, set up alarms, and register any necessary interrupt handlers. It is not necessary to invoke the scheduler from the user start function, since this is done when cyg_user_start returns. Code Listing 6.1, found in Chapter 6, is an example of a cyg_user_start routine that creates a thread.
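
As a preview, the following fragment is a minimal sketch of such a routine; the thread entry function, stack size, name, and priority shown here are arbitrary example values.

#include <cyg/kernel/kapi.h>

#define THREAD_STACK_SIZE    ( 4096 )           /* example stack size */

static cyg_thread    thread_obj;                /* space for the thread object */
static cyg_handle_t  thread_handle;             /* thread handle */
static unsigned char thread_stack[ THREAD_STACK_SIZE ];

/* Example thread entry point. */
static void thread_entry( cyg_addrword_t data )
{
    for ( ;; )
    {
        /* Application work is performed here. */
        cyg_thread_delay( 100 );
    }
}

void cyg_user_start( void )
{
    /* Create the thread; the scheduler is started automatically when
       this function returns. */
    cyg_thread_create( 10,                      /* scheduling priority   */
                       thread_entry,            /* entry point           */
                       0,                       /* entry data            */
                       "Example Thread",        /* thread name           */
                       thread_stack,            /* stack base            */
                       THREAD_STACK_SIZE,       /* stack size            */
                       &thread_handle,          /* returned handle       */
                       &thread_obj );           /* thread data structure */

    /* Make the thread ready to run once the scheduler starts. */
    cyg_thread_resume( thread_handle );
}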

The final step in the kernel startup procedure is to invoke the scheduler. The scheduler that is started, either multilevel queue or bitmap, depends on the configuration option settings under the kernel component package.

NOTE

Code running during initialization executes with interrupts disabled and the scheduler locked. Enabling interrupts or unlocking the scheduler is not allowed because the system is in an inconsistent state at this point.

Since the scheduler is not started until cyg_user_start returns, it is important that kernel services requiring a running scheduler are not used within this routine. Initializing kernel primitives, such as a semaphore, is acceptable; however, posting or waiting on a semaphore would cause undefined behavior and possibly system failure.
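
As a brief sketch of this restriction, the following cyg_user_start fragment initializes a hypothetical semaphore, which is acceptable, while the commented-out wait call must not be made at this point.

#include <cyg/kernel/kapi.h>

static cyg_sem_t data_ready_sem;        /* hypothetical semaphore */

void cyg_user_start( void )
{
    /* Initializing a kernel primitive is acceptable here. */
    cyg_semaphore_init( &data_ready_sem, 0 );

    /* Waiting on the semaphore requires the scheduler, which has not
       been started yet, so the following call must NOT be made here: */
    /* cyg_semaphore_wait( &data_ready_sem ); */
}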


5.1.3. The Scheduler

The core of the eCos kernel is the scheduler. The jobs of the scheduler are to select the appropriate thread for execution, provide mechanisms for these executing threads to synchronize, and control the effect of interrupts on thread execution. This section does not describe the implementation details of the different schedulers, but instead gives a basic understanding of how the existing eCos schedulers operate and the configuration options available.

Interrupts are not disabled during execution of the scheduler code, which keeps interrupt latency low.

A counter within the scheduler determines whether the scheduler is free to run or disabled. If the lock counter is nonzero, scheduling is disabled; when the lock counter returns to zero, scheduling resumes. As described in Chapter 3, the HAL default interrupt handler routine modifies the lock counter to prevent rescheduling from taking place during execution of the ISR. Threads also have the ability to lock and unlock the scheduler.

NOTE

It is important to use the kernel API functions to lock and unlock the scheduler rather than accessing the lock variable directly.


On some occasions, it might be necessary for a thread to lock the scheduler in order to access data shared with another thread or a DSR. The lock and unlock functions are atomic operations handled by the kernel. Item List 5.1 lists the supported kernel scheduler API functions; a brief example of their use follows the list.

Item List 5.1. Kernel Scheduler API Functions
Syntax:
void
cyg_scheduler_start(
 void
 );

Context: Init
Parameters: None
Description: Starts the scheduler, bitmap or multilevel queue, according to the configuration options selected. This call also enables interrupts.
Syntax:
void
cyg_scheduler_lock(
 void
 );

Context: Thread/DSR
Parameters: None
Description: Locks the scheduler, preventing any other threads from executing. This function increments the scheduler lock counter.
Syntax:
void
cyg_scheduler_unlock(
 void
 );

Context: Thread/DSR
Parameters: None
Description: This function decrements the scheduler lock counter. Threads are allowed to execute when the scheduler lock counter reaches 0.
Syntax:
cyg_ucount32
cyg_scheduler_read_lock(
 void
 );

Context: Thread/DSR
Parameters: None
Description: Returns the current state of the scheduler lock.
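
As a brief example of these functions, the following fragment, a sketch only, locks the scheduler while a thread updates a counter shared with a DSR; shared_count and update_shared_count are hypothetical names.

#include <cyg/kernel/kapi.h>

volatile cyg_uint32 shared_count = 0;   /* hypothetical data shared with a DSR */

void update_shared_count( void )
{
    /* Lock the scheduler so that no other thread, and no DSR, runs
       while the shared data is modified. */
    cyg_scheduler_lock();

    shared_count++;

    /* Unlock the scheduler; normal scheduling resumes once the lock
       counter returns to zero. */
    cyg_scheduler_unlock();
}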

eCos supports two different schedulers that implement distinct policies. The eCos kernel is built using only a single scheduler at any one time. The schedulers are:

  • Multilevel queue

  • Bitmap

NOTE

A third scheduler exists in the eCos repository called the lottery scheduler, which is located in the file lottery.cxx under the kernel/current/src/sched subdirectory. The lottery scheduler is currently an experimental implementation and is not shown in any configuration options. Only the multilevel queue and bitmap schedulers are actively supported. To use the lottery scheduler, hand-editing of the eCos system configuration is needed to include the lottery code implementation.


5.1.3.1. Multilevel Queue Scheduler

The multilevel queue scheduler allows the execution of multiple threads at each of its priority levels. The number of priority levels is a configuration option from 1 to 32, corresponding to priority numbers 0 (highest priority) to 31 (lowest priority). The scheduler allows preemption between the different priority levels.

Symmetric Multi-Processing (SMP) is only supported when using the multilevel queue scheduler. Additional information about SMP support under eCos can be found in Chapter 8.

Preemption is a context switch halting execution of a lower priority thread, thereby allowing a higher priority thread to execute. The multilevel queue scheduler also allows timeslicing within a priority level.

Timeslicing allows each thread at a given priority to execute for a specified amount of time, which is controlled by a configuration option. The queue implementation for the multilevel scheduler uses doubly linked circular lists to chain together threads within a priority level and threads at different priority levels.

In Figure 5.3, we see the multilevel scheduling queue representation along with an example of thread execution using this scheduler.

Figure 5.3. Multilevel queue scheduler thread operation.


In the scenario shown in Figure 5.3, three threads—Thread A, Thread B, and Thread C—are created at priority levels 0, 0, and 30, respectively. The state of the scheduler queue after thread creation is shown in Figure 5.3. For this scenario, timeslicing is enabled. The timeline is a snapshot that starts with Thread C executing.

Next, Thread A becomes ready to run, causing Thread C to be preempted, and a context switch occurs. During the execution of Thread A, Thread B also becomes ready to run. Thread A continues until its timeslice period expires. Then, another context switch occurs, allowing Thread B to run. Thread B completes within its given timeslice period; the de-scheduling of a thread can happen for various reasons, for example, waiting on a mutex that is not free or delaying for a specified amount of time. Since Thread A has the highest priority of the threads waiting to execute, a context switch occurs and it runs next. After Thread A has completed, a context switch takes place, allowing Thread C to execute.

5.1.3.2. Bitmap Scheduler

The bitmap scheduler allows the execution of threads at multiple priority levels; however, only a single thread can exist at each priority level. This simplifies the scheduling algorithm and makes the bitmap scheduler very efficient. The number of priority levels is a configuration option from 1 to 32, corresponding to priority numbers 0 (highest priority) to 31 (lowest priority).

NOTE

When using the bitmap scheduler, it is a fatal error to assign two threads the same priority level. An assertion failure is raised if the eCos image is built with assertion support.


The scheduling queue is either an 8-, 16-, or 32-bit value, depending on the number of priority levels selected. A bit in the scheduling queue represents each priority level. The scheduler allows preemption between the different priority levels. Since only one thread is allowed at each priority level, timeslicing is irrelevant and is disabled as a configuration option when using the bitmap scheduler.

Figure 5.4 illustrates an example of thread execution using the bitmap scheduler.

Figure 5.4. Bitmap scheduler thread operation.


In Figure 5.4, there are three threads created at different priority levels: Thread A—priority 0 (highest), Thread B—priority 1, and Thread C—priority 30 (lowest). The state of the bitmap scheduler queue after the threads are created is shown above the thread execution timeline. The timeline is a snapshot of thread execution starting with Thread C running. Next, Thread A and Thread B become ready to run, causing a context switch in which Thread C is preempted. Thread A executes next because it has the highest priority of the waiting threads. When Thread A completes, a context switch takes place, enabling Thread B to execute. After Thread B completes, Thread C can finish its processing.

As we can see by comparing the execution timelines in Figures 5.3 and 5.4, the bitmap scheduler implements a much simpler scheduling policy, whereas the multilevel queue scheduler offers more options for thread operation. The decision of which scheduler to use depends on the specific needs of the application.

5.1.3.3. Priority Levels

Both schedulers support thread priority levels. The priority level determines which of the threads that are ready to run executes next. Since the bitmap scheduler only allows a single thread per priority level, the number of priority levels determines the total number of possible threads in the system. The number of threads possible with the multilevel queue scheduler is independent of the number of priority levels; its maximum number of threads is limited only by the memory resources available.

The maximum number of priority levels allowed is 32. A smaller priority value corresponds to a higher thread priority. Item List 5.2 lists the kernel API functions for manipulating the priority level of a given thread; a brief usage example follows the list.

Item List 5.2. Kernel Priority Level API Functions
Syntax:
void
cyg_thread_set_priority(
 cyg_handle_t thread,
 cyg_priority_t priority
 );

Context: Thread
Parameters:

thread— handle to the thread.

priority— priority level to set for the thread.

Description: Sets the thread to the specified priority level. The valid range for the priority value is determined by the configuration option settings. The number of priority levels can be configured from 1 to 32, where lower values represent higher thread priorities.
Syntax:
cyg_priority_t
cyg_thread_get_current_priority(
 cyg_handle_t thread
 );

Context: Thread/DSR
Parameters:

thread— handle to the thread.

Description: Returns the current priority level of the specified thread. This value might differ from the priority set for the thread, during creation or by a cyg_thread_set_priority call, because the thread's priority might have been temporarily boosted while it is using a mutex.
Syntax:
cyg_priority_t
cyg_thread_get_priority(
 cyg_handle_t thread
 );

Context: Thread/DSR
Parameters:

thread— handle to the thread.

Description: Returns the priority set for the specified thread. The value returned is the one last used in a call to cyg_thread_set_priority or the value set when the thread was created.
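
As a brief usage sketch of these functions, the following fragment temporarily raises a thread's priority and later restores it. The handle thread_handle is assumed to have been filled in by a previous call to cyg_thread_create (see Chapter 6), and the priority value 5 is an arbitrary example.

#include <cyg/kernel/kapi.h>

/* Assumed to have been filled in by a previous cyg_thread_create call. */
extern cyg_handle_t thread_handle;

void boost_thread_priority_example( void )
{
    cyg_priority_t old_priority;

    /* Save the priority currently set for the thread. */
    old_priority = cyg_thread_get_priority( thread_handle );

    /* Raise the thread to a higher priority (a lower number). */
    cyg_thread_set_priority( thread_handle, 5 );

    /* ... perform time-critical work here ... */

    /* Restore the original priority. */
    cyg_thread_set_priority( thread_handle, old_priority );
}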

5.1.3.4. Scheduler Configuration

The scheduler configuration options are located under the Kernel Schedulers component within the eCos Kernel package. The configuration options allow you to tailor the resources used by the scheduler according to the specific needs of the application. Item List 5.3 details the configuration options available, as well as the different suboptions.

Item List 5.3. Kernel Scheduler Configuration Options
Option Name: Multilevel Queue Scheduler
CDL Name: CYGSEM_KERNEL_SCHED_MLQUEUE
Description: Enables the multilevel queue scheduler implementation.
Option Name: Bitmap Scheduler
CDL Name: CYGSEM_KERNEL_SCHED_BITMAP
Description: Enables the bitmap scheduler implementation.
Option Name: Number of Priority Levels
CDL Name: CYGNUM_KERNEL_SCHED_PRIORITIES
Description: Specifies the number of available priority levels. This number determines the queue size for the specified scheduler. For the bitmap scheduler, this number also determines the total number of threads possible. Valid values for this option are 1 to 32, with the default set to 32. A suboption allows selection of the de-queue method; when enabled, threads of equal priority are de-queued oldest first. This suboption is disabled by default.
Option Name: Scheduler Timeslicing
CDL Name: CYGSEM_KERNEL_SCHED_TIMESLICE
Description: Enables timeslicing mode for the multilevel queue scheduler. The scheduler checks whether another thread at the same priority level is ready to run; if so, a context switch takes place after the timeslice period (the number of clock ticks between timeslices, selectable as a suboption) expires. This option is enabled by default for the multilevel queue scheduler. Another suboption allows timeslicing to be enabled or disabled dynamically on a per-thread basis.
Option Name: Enable ASR Support
CDL Name: CYGSEM_KERNEL_SCHED_ASR_SUPPORT
Description: Controls Asynchronous Service Routine (ASR) support. An ASR is a function called from the scheduler after it has released the scheduler lock. This is typically used by compatibility layer packages, such as POSIX.
