The eCos kernel provides the mechanisms for the threads in the system to communicate with each other and to synchronize access to shared resources. The mechanisms provided by the eCos kernel are mutexes, semaphores, condition variables, flags, message boxes, and spinlocks.
The kernel also provides API functions that allow applications to make use of these synchronization mechanisms. The synchronization API functions come in blocking and nonblocking forms.
Blocking function calls, such as cyg_semaphore_wait, halt execution of the thread until the API function can complete successfully. Nonblocking function calls, such as cyg_semaphore_trywait, attempt to complete successfully; however, if the API function is not successful, a return code indicates the status of the call so the thread can proceed with its execution.
Another type of blocking call, blocking with timeout, also exists for certain synchronization mechanisms. These API functions, such as cyg_semaphore_timed_wait, halt execution of the thread for at most a specified period of time while attempting to complete successfully. If the function does not complete successfully before the timeout period expires, it returns an unsuccessful status.
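These three calling styles can be compared directly using the counting semaphore API covered later in this chapter. The following sketch assumes a semaphore sem that has already been initialized; error handling is omitted for brevity:

```c
// Blocking: does not return until the semaphore is acquired.
cyg_semaphore_wait( &sem );

// Nonblocking: returns immediately; the return value indicates
// whether the semaphore was acquired.
if ( cyg_semaphore_trywait( &sem ) )
{
    // Acquired; use the resource...
}

// Blocking with timeout: the parameter is an absolute time, so
// cyg_current_time() + 50 converts a relative 50-tick timeout
// into the absolute tick count the function expects.
if ( cyg_semaphore_timed_wait( &sem, cyg_current_time( ) + 50 ) )
{
    // Acquired within 50 ticks; use the resource...
}
```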
The first synchronization mechanism provided by eCos is the mutex. A mutex (mutual exclusion object) allows multiple threads to share a resource serially. The resource can be an area of memory or a piece of hardware, such as a Direct Memory Access (DMA) controller.
A mutex is similar to a binary semaphore in that it has only two states—locked and unlocked. However, there are a couple of differences between a binary semaphore and a mutex. A mutex can provide protection against priority inversion, whereas a binary semaphore does not. Priority inversion is discussed further later in this section.
A mutex also has the concept of an owner, and only the owner can unlock the mutex. A binary semaphore does not have this requirement; it is possible for one thread to lock a binary semaphore and another thread to unlock it. Once a mutex is locked, it should not be locked again; this might cause undefined behavior. A thread that attempts to lock a mutex that is currently owned by another thread will block until the owner unlocks the mutex.
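For cases where blocking on a held mutex is not acceptable, the kernel also offers a nonblocking lock call, cyg_mutex_trylock. A brief sketch, assuming a mutex mut that is already initialized:

```c
if ( cyg_mutex_trylock( &mut ) )
{
    // We now own the mutex; access the shared resource...

    cyg_mutex_unlock( &mut );
}
else
{
    // The mutex is owned by another thread; do other
    // work and try again later...
}
```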
One issue that arises in real-time systems when using mutexes is priority inversion. Priority inversion occurs when a high priority thread is incorrectly prevented from executing by a low priority thread. An example of this is when the high priority thread is waiting on a mutex that is currently owned by the low priority thread. Then, an unrelated medium priority thread preempts the low priority thread, preventing the high priority thread from executing at its proper priority level.
eCos provides two solutions to the priority inversion problem, selectable as configuration options. The first is the priority ceiling protocol, in which every thread that acquires the mutex has its priority raised to a preconfigured value, set through a configuration option. One disadvantage of this protocol is that the priority levels of the threads using the mutex must be known ahead of time so the proper ceiling value can be set. Another disadvantage is that if the ceiling value is set too high, unrelated threads with priority levels below the ceiling can be locked out from executing, possibly causing real-time deadlines to be missed. Furthermore, the priority ceiling protocol is applied even when no priority inversion is occurring.
A more elegant solution eCos provides is the priority inheritance protocol. This protocol raises the priority of the thread that owns the mutex to the highest priority of all threads waiting for the mutex, and it takes effect only while a higher priority thread is actually waiting. The drawback of this protocol is that synchronization calls are more expensive, because the scheduler must apply the inheritance protocol on each call.
The configuration options for the mutex synchronization primitive can be found under the Synchronization Primitives component within the eCos Kernel package. The main configuration option, Priority Inversion Protection Protocols (CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL), controls the inversion protection used for mutex operations. This option enables or disables the use of the priority inversion protocols for mutexes; eliminating priority inversion protection reduces code and data sizes. Currently, eCos defines one algorithm for protection against priority inversion, called Simple, which is only available with the multilevel queue scheduler. The Simple algorithm is designed to be fast and deterministic. The priority protocol used within the Simple algorithm is set by configuration suboptions. The suboption that specifies the default inversion protocol is Default Priority Inversion Protocol, which can be set to INHERIT, CEILING, or NONE. Item List 6.3 lists the configuration suboptions for the mutex priority inversion protocol.
Option Name | Enable Priority Inheritance Protocol |
CDL Name | CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL_INHERIT |
Description | Enables the priority inheritance protocol, which causes the owner of a mutex to execute at the highest priority of all threads waiting for the mutex. The default value for this option is enabled. |
Option Name | Enable Priority Ceiling Protocol |
CDL Name | CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL_CEILING |
Description | Enables the priority ceiling protocol, which causes the owner of a mutex to execute at a preset priority level. The default value for this option is enabled. The suboption Default Priority Ceiling specifies the priority level for the ceiling. The mutex boosts its owner to this priority level while the mutex is held. The default value for this suboption is 0, which is the maximum priority level. |
Option Name | No Priority Inversion Protocol |
CDL Name | CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL_NONE |
Description | Allows mutexes to be created with no priority inversion protection. This option is necessary for the run-time selection of a priority inversion protocol. The default value for this option is enabled. |
Option Name | Default Priority Inversion Protocol |
CDL Name | CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL_DEFAULT |
Description | Defines the default inversion protocol to use when mutexes are created if a protocol is not specified. The possible values for this option are INHERIT, CEILING, or NONE. The default value for this option is INHERIT. |
Option Name | Specify Mutex Priority Inversion Protocol At Runtime |
CDL Name | CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL_DYNAMIC |
Description | Allows the priority inversion protocol used by a mutex to be specified when the mutex is created. The default value for this option is enabled. |
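When run-time selection is enabled, the protocol can be chosen per mutex after it is created. The following sketch shows how this might look; the protocol enumeration values follow the kernel API, while the mutex name and the ceiling priority of 5 are assumptions chosen for illustration:

```c
cyg_mutex_t mut;

cyg_mutex_init( &mut );

// Select the priority ceiling protocol for this mutex
// and set the ceiling priority level to 5.
cyg_mutex_set_protocol( &mut, CYG_MUTEX_CEILING );
cyg_mutex_set_ceiling( &mut, 5 );
```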
The eCos kernel provides API functions for creating and manipulating mutexes. The mutex functions available in the kernel API are listed in Item List 6.4.
Code Listing 6.2 is an example showing two threads using a mutex. The thread and mutex initializations are left out in this example to focus on the use of the mutex.
 1    #include <cyg/kernel/kapi.h>
 2    #include <cyg/io/io.h>
 3
 4    cyg_io_handle_t port_handle;
 5    cyg_mutex_t mut_shared_port;
 6
 7    //
 8    // Thread A.
 9    //
10    void thread_a( cyg_addrword_t index )
11    {
12       int result;
13       int write_length = 6;
14       unsigned char write_buffer[ 6 ] = {4,21,4,20,6,28};
15
16       // Run this thread forever.
17       while ( 1 )
18       {
19          // Get the mutex.
20          cyg_mutex_lock( &mut_shared_port );
21
22          // Write data to the I/O port.
23          result = cyg_io_write( port_handle,
24                                 &write_buffer[ 0 ],
25                                 &write_length );
26
27          // Release the mutex.
28          cyg_mutex_unlock( &mut_shared_port );
29
30          // Get more data to send to the port...
31       }
32    }
33
34    //
35    // Thread B.
36    //
37    void thread_b( cyg_addrword_t index )
38    {
39       int result;
40       int read_length = 3;
41       unsigned char read_buffer[ 3 ];
42
43       // Run this thread forever.
44       while ( 1 )
45       {
46          // Get the mutex.
47          cyg_mutex_lock( &mut_shared_port );
48
49          // Read data from the I/O port.
50          result = cyg_io_read( port_handle,
51                                &read_buffer[ 0 ],
52                                &read_length );
53
54          // Release the mutex.
55          cyg_mutex_unlock( &mut_shared_port );
56
57          // Process the data read from the port...
58       }
59    }
As we can see in Code Listing 6.2, Thread A and Thread B both use the same hardware I/O port. The mut_shared_port mutex protects the port so that only one thread accesses the port at a time. In this example, we assume Thread A acquires the mutex first by executing its cyg_mutex_lock function call on line 20. Thread A is then able to write out its data to the I/O port on line 23. While Thread A is writing out to the port, Thread B executes. However, Thread B must wait when it reaches its cyg_mutex_lock function call, shown on line 47, since the mutex is already owned by Thread A.
After Thread A finishes writing out its data, the function call cyg_mutex_unlock, on line 28, releases the mutex. Then, Thread B becomes the owner of the mutex and is allowed to access the port. Finally, after Thread B reads its data from the I/O port, on line 50, it releases the mutex with the call cyg_mutex_unlock on line 55.
A semaphore is a synchronization mechanism that contains a count indicating whether a resource is locked or available. There are two types of semaphores: counting and binary. Binary semaphores are similar to counting semaphores; however, their count is never incremented past a value of one. A binary semaphore is therefore in either a locked or an unlocked state.
Counting semaphores can be in multiple states depending on their count value. A counting semaphore object contains a count that is incremented when a thread posts to the semaphore and decremented when a thread completes a wait on the semaphore. When the count is greater than zero and threads are waiting, the highest priority waiting thread runs first. Counting semaphores are often used when a higher priority thread or DSR that has received data needs to signal another thread to continue processing the data at a lower priority.
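This DSR-to-thread pattern can be sketched as follows; posting to a semaphore is safe from DSR context. The device-specific work of the DSR is omitted, and the names used here are illustrative:

```c
cyg_sem_t sem_data_ready;

// DSR: runs after the ISR completes; signals that data has arrived.
void my_dsr( cyg_vector_t vector, cyg_ucount32 count, cyg_addrword_t data )
{
    cyg_semaphore_post( &sem_data_ready );
}

// Processing thread: runs at a lower priority than the DSR.
void processing_thread( cyg_addrword_t index )
{
    while ( 1 )
    {
        // Sleep until the DSR signals that data is available.
        cyg_semaphore_wait( &sem_data_ready );

        // Process the received data...
    }
}
```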
The eCos kernel provides API functions for creating and manipulating semaphores. These functions, defined in Item List 6.5, operate on counting semaphores; the kernel API does not provide binary semaphores.
Syntax: |
void
cyg_semaphore_init(
cyg_sem_t *sem,
cyg_ucount32 val
);
|
Context: | Init/Thread |
Parameters: | sem—pointer to the semaphore object. val—initial count value for the semaphore. |
Description: | Initializes a semaphore with a count value specified in the val parameter. |
Syntax: |
void
cyg_semaphore_destroy(
cyg_sem_t *sem
);
|
Context: | Thread |
Parameters: | sem—pointer to the semaphore object. |
Description: | Destroys a semaphore. It is important that no threads are waiting on the semaphore when this function is called; otherwise, the behavior is undefined. |
Syntax: |
void
cyg_semaphore_wait(
cyg_sem_t *sem
);
|
Context: | Thread |
Parameters: | sem—pointer to the semaphore object. |
Description: | When the semaphore count is zero, the thread calling this function will wait for the semaphore. When the semaphore count is nonzero, the count will be decremented and the thread calling this function will continue. |
Syntax: |
cyg_bool_t
cyg_semaphore_trywait(
cyg_sem_t *sem
);
|
Context: | Thread/DSR |
Parameters: | sem—pointer to the semaphore object. |
Description: | Attempts to decrement the semaphore count. If the semaphore count is greater than zero, the count is decremented and TRUE is returned. If the count is zero, the semaphore is unchanged and FALSE is returned. In either case, the thread does not block waiting for the semaphore. |
Syntax: |
cyg_bool_t
cyg_semaphore_timed_wait(
cyg_sem_t *sem,
cyg_tick_count_t abstime
);
|
Context: | Thread |
Parameters: | sem—pointer to the semaphore object. abstime—absolute time, in clock ticks, to wait for the semaphore. |
Description: | Attempts to decrement a semaphore count. This function is only available when the Allow Per-Thread Timers configuration option is enabled. If the semaphore count is greater than zero, the count is decremented and TRUE is returned. If the count is zero, the function call waits for the amount of time specified in the abstime parameter. If the timeout occurs before the semaphore count can be decremented, FALSE is returned and the current thread continues to run. The abstime parameter is an absolute time measured in clock ticks. The following shows how to use a relative wait time:

cyg_semaphore_timed_wait( &sem, cyg_current_time( ) + 100 );

In this example, the thread waits for the semaphore for up to 100 ticks from the present time. |
Syntax: |
void
cyg_semaphore_post(
cyg_sem_t *sem
);
|
Context: | Thread/DSR |
Parameters: | sem—pointer to the semaphore object. |
Description: | Increments the semaphore count. If a thread is waiting on the specified semaphore, it is awakened. |
Syntax: |
void
cyg_semaphore_peek(
cyg_sem_t *sem,
cyg_count32 *val
);
|
Context: | Thread/DSR |
Parameters: | sem—pointer to the semaphore object. val—pointer to the location that receives the current semaphore count. |
Description: | Returns the current semaphore count in the variable pointed to by the parameter val. |
Code Listing 6.3 is a simple example of the creation of a semaphore for use by two threads.
 1    #include <cyg/kernel/kapi.h>
 2    #include <cyg/infra/diag.h>
 3
 4    #define THREAD_A_STACK_SIZE ( 2048 / sizeof(int) )
 5    #define THREAD_B_STACK_SIZE ( 2048 / sizeof(int) )
 6
 7    cyg_sem_t sem_get_data;
 8    int thread_a_stack[ THREAD_A_STACK_SIZE ];
 9    int thread_b_stack[ THREAD_B_STACK_SIZE ];
10    cyg_handle_t thread_a_handle;
11    cyg_handle_t thread_b_handle;
12    cyg_thread thread_a_obj;
13    cyg_thread thread_b_obj;
14
15    //
16    // Thread A.
17    //
18    void thread_a( cyg_addrword_t index )
19    {
20       // Run this thread forever.
21       while ( 1 )
22       {
23          // Delay for 1000 ticks.
24          cyg_thread_delay( 1000 );
25
26          // Display a message.
27          diag_printf( "Thread A: Signal Thread B!\n" );
28
29          // Signal Thread B to run.
30          cyg_semaphore_post( &sem_get_data );
31       }
32    }
33
34    //
35    // Thread B.
36    //
37    void thread_b( cyg_addrword_t index )
38    {
39       // Run this thread forever.
40       while ( 1 )
41       {
42          // Wait for the signal from Thread A.
43          cyg_semaphore_wait( &sem_get_data );
44
45          // Display a message.
46          diag_printf( "Thread B: Got the signal!\n" );
47       }
48    }
49
50    //
51    // Main starting point for the application.
52    //
53    void cyg_user_start(
54       void)
55    {
56       // Initialize the get data semaphore to 0.
57       cyg_semaphore_init( &sem_get_data, 0 );
58
59       // Create Thread A.
60       cyg_thread_create(
61          12,
62          thread_a,
63          0,
64          "Thread A",
65          &thread_a_stack,
66          THREAD_A_STACK_SIZE,
67          &thread_a_handle,
68          &thread_a_obj );
69
70       // Create Thread B.
71       cyg_thread_create(
72          12,
73          thread_b,
74          0,
75          "Thread B",
76          &thread_b_stack,
77          THREAD_B_STACK_SIZE,
78          &thread_b_handle,
79          &thread_b_obj );
80
81       // Let the threads run when the scheduler starts.
82       cyg_thread_resume( thread_a_handle );
83       cyg_thread_resume( thread_b_handle );
84    }
In Code Listing 6.3, we can see that the function cyg_user_start, beginning on line 53, initializes the sem_get_data semaphore on line 57.
NOTE
This might seem obvious; however, it is worth pointing out nonetheless. It is important to initialize any semaphores, and any other synchronization mechanisms, prior to creating and resuming the threads that use them. Undefined behavior results if this rule is not followed. Careful consideration should also be given to the initial values of synchronization mechanisms to ensure that threads run at the proper times.
When the semaphore is initialized, a parameter passed in determines the initial value of the semaphore count; we can see on line 57 that this value is 0. If the count value is initialized to zero, all threads waiting on the semaphore continue to wait until a post to the semaphore occurs, which increments the count value. If the count value is initialized to a value greater than zero, the scheduler determines which waiting threads run, based on each thread's priority level, until the semaphore count value reaches zero. Each time a wait function succeeds, the semaphore count value is decremented by one.
Thread A executes a delay of 1000 ticks (line 24), outputs a message (line 27), and then posts to the semaphore (line 30). The post wakes Thread B, which returns from the semaphore wait call on line 43, outputs a message (line 46), and then returns to the waiting state (line 43).
Another available synchronization mechanism is the condition variable. Condition variables are used together with mutexes to allow multiple threads safe access to shared data. Typically, a single thread produces the data, and one or more threads wait for the data to become available. When the data is available, the producing thread can signal either a single waiting thread or, with a broadcast signal, all waiting threads. The waiting threads can then process the data as needed. Item List 6.6 lists the kernel API condition variable control functions.
eCos contains two configuration options for condition variables. These are located in the Synchronization Primitives component within the eCos Kernel package. The first configuration option is Condition Variable Timed-Wait Support (CYGMFN_KERNEL_SYNCH_CONDVAR_TIMED_WAIT), which allows the cyg_cond_timed_wait kernel API function to be used by applications. This option is enabled by default.
The second configuration option is Condition Variable Explicit Mutex Wait Support (CYGMFN_KERNEL_SYNCH_CONDVAR_WAIT_MUTEX), which permits a thread to provide a different mutex in a call to the wait functions. In the default case, condition variables are created with a statically associated mutex. This configuration option is enabled by default.
Code Listing 6.4 shows an example using a condition variable. The thread, condition variable, and mutex initializations are left out in this example to focus on the use of the condition variable.
 1    #include <cyg/kernel/kapi.h>
 2    #include <cyg/infra/cyg_type.h>
 3
 4    unsigned char buffer_empty = true;
 5    cyg_mutex_t mut_cond_var;
 6    cyg_cond_t cond_var;
 7
 8    //
 9    // Thread A.
10    //
11    void thread_a( cyg_addrword_t index )
12    {
13       // Run this thread forever.
14       while ( 1 )
15       {
16          // Acquire data into the buffer...
17
18          // There is data in the buffer now.
19          buffer_empty = false;
20
21          // Get the mutex.
22          cyg_mutex_lock( &mut_cond_var );
23
24          // Signal the condition variable.
25          cyg_cond_signal( &cond_var );
26
27          // Release the mutex.
28          cyg_mutex_unlock( &mut_cond_var );
29       }
30    }
31
32    //
33    // Thread B.
34    //
35    void thread_b( cyg_addrword_t index )
36    {
37       // Run this thread forever.
38       while ( 1 )
39       {
40          // Get the mutex.
41          cyg_mutex_lock( &mut_cond_var );
42
43          // Wait for the data and the condition variable signal.
44          while ( buffer_empty == true )
45          {
46             cyg_cond_wait( &cond_var );
47          }
48
49          // Get the buffer data...
50
51          // The data in the buffer has been processed.
52          buffer_empty = true;
53
54          // Release the mutex.
55          cyg_mutex_unlock( &mut_cond_var );
56
57          // Process the data in the buffer...
58       }
59    }
In Code Listing 6.4, Thread A is acquiring data that is processed by Thread B. First, Thread B executes. On line 41, Thread B acquires the mutex associated with the condition variable. Next, since there is no data in the buffer to process, and buffer_empty is true on initialization (line 4), Thread B calls cyg_cond_wait on line 46. This call to cyg_cond_wait does two things—first, it suspends Thread B waiting for the condition variable to be set, and second, it unlocks the mutex mut_cond_var.
Now, an event occurs causing Thread A to execute and acquire data into a buffer, as we see on line 16. Next, buffer_empty is set to false on line 19. Thread A then locks the mutex (line 22), signals the condition variable (line 25), and then unlocks the mutex (line 28).
Next, Thread B is able to run because the condition variable is signaled from Thread A. Before returning from cyg_cond_wait, the mutex, mut_cond_var, is locked and owned by Thread B. Now, Thread B can get the data buffer (line 49) and set the buffer_empty flag to true (line 52). Finally, the mutex is released by Thread B on line 55 and the data in the buffer is processed, as we see on line 57.
It is important to understand a couple of issues relating to the code in Code Listing 6.4. First, the mutex unlock and the wait inside the call to cyg_cond_wait, on line 46, are performed atomically; therefore, no other thread is allowed to run between the unlock and the wait. If this sequence were not atomic, Thread B could miss the signal from Thread A even though data was in the buffer. Why? Thread B calls cyg_cond_wait, which first checks whether the condition variable is set; in this case, it is not. Next, the mutex is released within the cyg_cond_wait call. If Thread A could execute at this point, it would put data into the buffer and signal the condition variable (line 25) before Thread B had begun waiting. Thread B would then proceed to wait, even though the condition variable had already been signaled, and the signal would be lost.
Another issue to keep in mind is that the call to cyg_cond_wait by Thread B is in a while loop, on lines 44 through 47. This ensures that the condition Thread B is waiting on still holds after returning from the condition wait call. Take the case where other threads are waiting on the same condition. Another thread might be queued to obtain the mutex before Thread B, and might therefore be signaled, wake up first, and consume the data. When Thread B finally runs, the condition it was waiting for no longer holds. The while loop around the condition wait ensures the condition is rechecked before a thread proceeds.
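The initializations omitted from Code Listing 6.4 might look like the following sketch, which covers the default case where the mutex is statically associated with the condition variable at creation time; the function name is an assumption for illustration:

```c
cyg_mutex_t mut_cond_var;
cyg_cond_t cond_var;

void init_cond_var( void )
{
    // Initialize the mutex before the condition variable
    // that is associated with it.
    cyg_mutex_init( &mut_cond_var );
    cyg_cond_init( &cond_var, &mut_cond_var );
}
```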
Flags are synchronization mechanisms represented by a 32-bit word. Each bit in the flag represents a condition, which allows a thread to wait for a single condition or a combination of conditions. The waiting thread specifies whether all of the conditions, or any one of them, must be met before it wakes up. The signaling thread sets or clears bits according to specific conditions so the appropriate thread can be executed. The kernel API functions for creating and controlling flags are detailed in Item List 6.7.
Code Listing 6.5 shows an example using the kernel API for flags. The thread and flag initializations are left out in this example to focus on the use of flags.
 1    #include <cyg/kernel/kapi.h>
 2
 3    cyg_flag_t flag_var;
 4
 5    //
 6    // Thread A.
 7    //
 8    void thread_a( cyg_addrword_t index )
 9    {
10       // Run this thread forever.
11       while ( 1 )
12       {
13          // Delay for 1000 ticks.
14          cyg_thread_delay( 1000 );
15
16          // Set the appropriate flag bits to signal Thread B.
17          cyg_flag_setbits( &flag_var, 1 );
18       }
19    }
20
21    //
22    // Thread B.
23    //
24    void thread_b( cyg_addrword_t index )
25    {
26       // Run this thread forever.
27       while ( 1 )
28       {
29          // Wait for the appropriate bits to be set in the flag.
30          cyg_flag_wait( &flag_var,
31                         3,
32                         CYG_FLAG_WAITMODE_OR |
33                         CYG_FLAG_WAITMODE_CLR
34          );
35       }
36    }
Code Listing 6.5 shows a basic example of how Thread A uses the flag, flag_var declared on line 3, to signal Thread B. Thread B waits on the flag_var flag using the cyg_flag_wait function call, as shown on line 30. The second parameter, on line 31, specifies the bit pattern Thread B is waiting for—in this case 3, which selects the two low-order bits of the flag. The mode parameters, on lines 32 and 33, specify the conditions for wake up. In this case, CYG_FLAG_WAITMODE_OR means that Thread B wakes up if either of the two bits in the pattern is set in the flag. The mode parameter CYG_FLAG_WAITMODE_CLR indicates that all bits in the flag are cleared when the condition is met. Thread A sets the low-order bit in flag_var using the function call cyg_flag_setbits with a value of 1, as shown on line 17. Since Thread B is waiting for either of the two low-order bits to be set, Thread B is then awakened.
Another synchronization mechanism provided by eCos is the message box, also called a mailbox. Message boxes provide a means for two threads to exchange information. Typically, one thread produces messages and sends them to another thread for processing. Message boxes also give threads a way to communicate more than a single byte of information. Item List 6.8 describes the kernel API message box functions.
There are two configuration options for the message box synchronization mechanism. These are located under the Synchronization Primitives component within the eCos Kernel package. The first configuration option is Message Box Blocking Put Support (CYGMFN_KERNEL_SYNCH_MBOXT_PUT_CAN_WAIT). This option, which is enabled by default, allows the put and timed put function calls to be used when sending messages.
The second configuration option, Message Box Queue Size (CYGNUM_KERNEL_SYNCH_MBOX_QUEUE_SIZE), determines the number of messages that can be queued in a message box. The valid values for this option are 1 to 65535, with a default of 10 messages.
Code Listing 6.6 shows an example using a message box to exchange data between two threads. The thread and message box initializations are left out in this example.
 1    #include <cyg/kernel/kapi.h>
 2
 3    cyg_handle_t mbox_handle;
 4
 5    // Thread A.
 6    //
 7    void thread_a( cyg_addrword_t index )
 8    {
 9       // Run this thread forever.
10       while ( 1 )
11       {
12          // Delay for 1000 ticks.
13          cyg_thread_delay( 1000 );
14
15          // Send a message to Thread B.
16          cyg_mbox_put( mbox_handle, (void *)12 );
17       }
18    }
19
20    //
21    // Thread B.
22    //
23    void thread_b( cyg_addrword_t index )
24    {
25       void *message;
26
27       // Run this thread forever.
28       while ( 1 )
29       {
30          // Wait for the message.
31          message = cyg_mbox_get( mbox_handle );
32
33          // Make sure we received the message before attempting
34          // to process it.
35          if ( message != NULL )
36          {
37             // Process the message.
38          }
39       }
40    }
Code Listing 6.6 shows an example of Thread A sending a message to Thread B using a message box. Thread A places the message—in this case, the number 12—in the message box using the handle mbox_handle, as shown on line 16. Thread B retrieves the data and stores it in the local variable message using cyg_mbox_get, as we see on line 31. On line 35, we verify that there is valid data in the message variable before proceeding to process the data.
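Besides the blocking cyg_mbox_get used in the listing, the message box API also offers nonblocking and timed retrieval. A brief sketch, assuming mbox_handle refers to a message box that has already been created:

```c
void *message;

// Nonblocking: returns NULL immediately if no message is waiting.
message = cyg_mbox_tryget( mbox_handle );

// Timed: waits until the absolute time is reached, then returns
// NULL if no message has arrived. Adding 100 to the current time
// yields a 100-tick relative timeout.
message = cyg_mbox_timed_get( mbox_handle, cyg_current_time( ) + 100 );
```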
The eCos kernel provides an additional synchronization mechanism, the spinlock, for applications running on symmetric multiprocessing (SMP) systems. The other synchronization mechanisms also work on SMP systems. Additional information about SMP support within eCos can be found in Chapter 8, Additional Functionality and Third-Party Contributions.
A spinlock is basically a flag that a processor can check prior to executing a particular piece of code. If the spinlock is not locked, the processor can set the flag and continue executing the thread. If the spinlock is locked, the thread spins in a tight loop continually checking the flag until it is released. Spinlocks operate at a lower level than other synchronization mechanisms and the implementation is hardware specific. Some processors offer a test-and-set instruction for implementing a spinlock.
A thread that fails to acquire a spinlock is not suspended; therefore, it is important that spinlocks are held only for a short period of time, typically on the order of 10 or 12 instructions. It is also important to understand the consequences of using a spinlock. For example, because a processor performs no useful work while waiting for a spinlock, a thread holding a spinlock must not be preempted; otherwise, another processor could spin for a timeslice period or longer waiting for the spinlock. To avoid this, the kernel spinlock API, shown in Item List 6.9, provides functions that disable interrupts on the processor where the thread is executing.
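The interrupt-disabling form of the spinlock calls can be sketched as follows; the function names follow the kernel spinlock API, while the shared counter and wrapper functions are assumptions chosen for illustration:

```c
cyg_spinlock_t lock;
volatile int shared_counter;

void init_lock( void )
{
    // Create the spinlock in the unlocked state.
    cyg_spinlock_init( &lock, false );
}

void increment_counter( void )
{
    cyg_addrword_t int_state;

    // Spin until the lock is acquired, disabling interrupts on
    // this processor so the holder cannot be preempted.
    cyg_spinlock_spin_intsave( &lock, &int_state );

    // Keep the critical section very short.
    shared_counter++;

    // Release the lock and restore the interrupt state.
    cyg_spinlock_clear_intsave( &lock, int_state );
}
```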
NOTE
Spinlocks should only be used in SMP systems and are not appropriate for single-processor systems. The problems of using spinlocks on a single-processor system can be illustrated in an example. Let's take the case where a high priority thread attempts to acquire a spinlock held by a lower priority thread. The high priority thread loops forever waiting for the lower priority thread to release the spinlock. However, since the lower priority thread never gets a chance to run, it can never release the spinlock; hence, a deadlock arises. Another deadlock scenario could arise if an interrupt attempted to acquire a spinlock previously acquired by a thread.