Alternate spinlock APIs

The standard spinlock operations discussed so far are suitable for protecting shared resources that are accessed only from process context kernel paths. However, there are scenarios where a shared resource or data structure might be accessed from both the process and interrupt context code of a kernel service. For instance, consider a device driver that contains both process context and interrupt context routines, each programmed to access a shared driver buffer to carry out the appropriate I/O operations.

Let's presume that a spinlock is used to protect the driver's shared resource, and that all routines of the driver (both process and interrupt context) that access the shared resource are programmed with appropriate critical sections using the standard spin_lock() and spin_unlock() operations. This strategy ensures protection of the shared resource by enforcing mutual exclusion, but it can hard-lock the CPU at random times when interrupt path code contends for a lock already held by a process context path on the same CPU. To understand this, let's assume the following events occur in this order:

  1. A process context routine of the driver acquires the lock (using the standard spin_lock() call).
  2. While the critical section is executing, an interrupt occurs and is routed to the local CPU, preempting the process context routine and handing the CPU over to the interrupt handler.
  3. The interrupt context path of the driver (the ISR) starts and attempts to acquire the lock (using the standard spin_lock() call), and begins spinning, waiting for the lock to become available.

Because the ISR never returns, the preempted process context can never resume and release the lock, and the CPU is hard locked with a spinning interrupt handler that never yields.
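The following is a minimal sketch of this deadlock-prone pattern, assuming a hypothetical driver with a lock demo_lock, a shared buffer demo_buf, and routines demo_write() and demo_isr() (none of these names come from the kernel sources):

#include <linux/spinlock.h>
#include <linux/interrupt.h>
#include <linux/string.h>

static DEFINE_SPINLOCK(demo_lock);   /* hypothetical driver lock   */
static char demo_buf[64];            /* hypothetical shared buffer */

/* Process context path: takes the lock without masking interrupts */
static void demo_write(const char *src, size_t len)
{
	spin_lock(&demo_lock);
	memcpy(demo_buf, src, len);
	/* an interrupt arriving here on this CPU runs demo_isr(),
	 * which spins on demo_lock forever: hard lockup */
	spin_unlock(&demo_lock);
}

/* Interrupt context path: contends for the same lock */
static irqreturn_t demo_isr(int irq, void *dev_id)
{
	spin_lock(&demo_lock);  /* never succeeds if the lock holder
	                           was preempted on this CPU */
	demo_buf[0] = 0;
	spin_unlock(&demo_lock);
	return IRQ_HANDLED;
}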

To prevent such occurrences, the process context code needs to disable interrupts on the local processor while it takes the lock. This ensures that an interrupt can never preempt the current context until the critical section completes and the lock is released. Note that interrupts can still occur, but they are routed to other available CPUs, on which the interrupt handler can spin until the lock becomes available. The spinlock interface provides an alternate locking routine, spin_lock_irqsave(), which disables interrupts on the current processor along with kernel preemption. The following snippet shows the routine's underlying code:

unsigned long __lockfunc __raw_##op##_lock_irqsave(locktype##_t *lock)
{
	unsigned long flags;

	for (;;) {
		preempt_disable();
		local_irq_save(flags);
		if (likely(do_raw_##op##_trylock(lock)))
			break;
		/* lock is contended: restore interrupts and preemption
		 * so the CPU does not spin with interrupts disabled */
		local_irq_restore(flags);
		preempt_enable();

		/* signal the lock holder that a waiter is spinning */
		if (!(lock)->break_lock)
			(lock)->break_lock = 1;
		while (!raw_##op##_can_lock(lock) && (lock)->break_lock)
			arch_##op##_relax(&lock->raw_lock);
	}
	(lock)->break_lock = 0;
	return flags;
}

local_irq_save() is invoked to disable hard interrupts on the current processor; notice how, on failure to acquire the lock, the saved interrupt state is restored by calling local_irq_restore(). Note that a lock taken with spin_lock_irqsave() needs to be released with spin_unlock_irqrestore(), which drops the lock and then restores the interrupt state and kernel preemption on the current processor.
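As a usage illustration, here is a minimal sketch of the safe pattern, reusing the hypothetical demo_lock and demo_buf names from the earlier sketch:

static void demo_write_safe(const char *src, size_t len)
{
	unsigned long flags;

	/* disables IRQs and preemption on this CPU, saving IRQ state */
	spin_lock_irqsave(&demo_lock, flags);
	memcpy(demo_buf, src, len);
	/* releases the lock, then restores IRQ state and preemption */
	spin_unlock_irqrestore(&demo_lock, flags);
}

With this version, demo_isr() can no longer preempt the critical section on the local CPU; if the interrupt fires on another CPU, the handler simply spins there until the lock is released.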

Similar to hard interrupt handlers, soft interrupt context routines, such as softirqs, tasklets, and other bottom halves, can also contend for a lock held by process context code on the same processor. This can be prevented by disabling the execution of bottom halves while acquiring the lock in process context. spin_lock_bh() is another variant of the locking routine that takes care of suspending the execution of interrupt context bottom halves on the local CPU.

void __lockfunc __raw_##op##_lock_bh(locktype##_t *lock)
{
	unsigned long flags;

	/*
	 * Careful: we must exclude softirqs too, hence the
	 * irq-disabling. We use the generic preemption-aware
	 * function:
	 */
	flags = _raw_##op##_lock_irqsave(lock);
	local_bh_disable();
	local_irq_restore(flags);
}

local_bh_disable() suspends bottom half execution on the local CPU. To release a lock acquired through spin_lock_bh(), the caller needs to invoke spin_unlock_bh(), which releases the spinlock and re-enables bottom halves on the local CPU.
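For illustration, here is a minimal sketch of this pattern, again with hypothetical names, for a process context path that shares data with a tasklet or softirq:

static void demo_update(void)
{
	spin_lock_bh(&demo_lock);    /* BH execution suspended on this CPU */
	demo_buf[0]++;
	spin_unlock_bh(&demo_lock);  /* releases lock, re-enables BHs */
}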

The following is a summary of the kernel's spinlock API interface:

Function                   Description
spin_lock_init()           Initializes the spinlock
spin_lock()                Acquires the lock; spins on contention
spin_trylock()             Attempts to acquire the lock; returns nonzero on success and zero on contention, without spinning
spin_lock_bh()             Acquires the lock, suspending BH routines on the local processor; spins on contention
spin_lock_irqsave()        Acquires the lock, suspending interrupts on the local processor and saving the current interrupt state; spins on contention
spin_lock_irq()            Acquires the lock, suspending interrupts on the local processor; spins on contention
spin_unlock()              Releases the lock
spin_unlock_bh()           Releases the lock and enables bottom halves on the local processor
spin_unlock_irqrestore()   Releases the lock and restores local interrupts to their previous state
spin_unlock_irq()          Releases the lock and enables interrupts on the local processor
spin_is_locked()           Returns the state of the lock: nonzero if the lock is held, zero if it is available
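To round off the summary, here is a short sketch of dynamic initialization with spin_lock_init() and a non-spinning acquisition attempt with spin_trylock(); the demo_dyn_lock, demo_setup(), and demo_try_update() names are, again, hypothetical:

#include <linux/errno.h>

static spinlock_t demo_dyn_lock;          /* dynamically initialized lock */

static void demo_setup(void)
{
	spin_lock_init(&demo_dyn_lock);   /* done once, before first use */
}

static int demo_try_update(void)
{
	if (!spin_trylock(&demo_dyn_lock)) /* zero return: lock contended */
		return -EBUSY;             /* bail out instead of spinning */
	demo_buf[0]++;
	spin_unlock(&demo_dyn_lock);
	return 0;
}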