Preemptible kernel locks

Making the majority of kernel locks preemptible is the most intrusive change that PREEMPT_RT makes, and this code remains outside of the mainline kernel.

The problem occurs with spinlocks, which are used for much of the kernel locking. A spinlock is a busy-wait mutex: in the contended case it does not require a context switch, so it is very efficient as long as the lock is held for a short time. Ideally, a spinlock should be held for less than the time it would take to reschedule twice. The following diagram shows threads running on two different CPUs contending for the same spinlock. CPU0 gets it first, forcing CPU1 to spin, waiting until it is unlocked:

(Figure: two threads, on CPU0 and CPU1, contending for the same spinlock)

The thread that holds the spinlock cannot be preempted, since the newly scheduled thread might enter the same code and deadlock when it tries to take the same spinlock. Consequently, in mainline Linux, locking a spinlock disables kernel preemption, creating an atomic context. This means that a low-priority thread holding a spinlock can prevent a high-priority thread from being scheduled.

Note

The solution adopted by PREEMPT_RT is to replace almost all spinlocks with rt-mutexes. A mutex is slower than a spinlock, but it is fully preemptible. Not only that, but rt-mutexes implement priority inheritance, so they are not susceptible to unbounded priority inversion.
