Runqueue

Conventionally, the runqueue contains all the processes that are contending for CPU time on a given CPU core (a runqueue is per-CPU). The generic scheduler looks into the runqueue whenever it is invoked to pick the next best runnable task. Maintaining a single common runqueue for all runnable processes is not feasible, since each scheduling class deals with specific scheduling policies and priorities.
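
Since runqueues are per-CPU instances, the scheduler core reaches them through per-CPU accessors rather than through any global list. The following minimal sketch is modeled on the accessor macros found in kernel/sched/sched.h; it is meant to illustrate the idea, not to be a complete listing:

DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);

/* runqueue of a given CPU */
#define cpu_rq(cpu)    (&per_cpu(runqueues, (cpu)))
/* runqueue of the CPU we are currently executing on */
#define this_rq()      this_cpu_ptr(&runqueues)
/* runqueue a given task is queued on */
#define task_rq(p)     cpu_rq(task_cpu(p))
/* task currently running on a given CPU */
#define cpu_curr(cpu)  (cpu_rq(cpu)->curr)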

The kernel addresses this by bringing its design principles to the fore. Each scheduling class defines the layout of its runqueue data structure as best suits its policies. The generic scheduler layer implements an abstract runqueue structure with common elements, which serves as the runqueue interface; this structure is extended with the class-specific runqueues. In other words, every scheduling class embeds its runqueue into the main runqueue structure. This is a classic design hack, which lets every scheduler class choose an appropriate layout for its runqueue data structure.

The following code snippet of struct rq (runqueue) will help us comprehend the concept (elements related to SMP have been omitted from the structure to keep our focus on what's relevant):

struct rq {
    /* runqueue lock: */
    raw_spinlock_t lock;

    /*
     * nr_running and cpu_load should be in the same cacheline because
     * remote CPUs use both these fields when doing load calculation.
     */
    unsigned int nr_running;
#ifdef CONFIG_NUMA_BALANCING
    unsigned int nr_numa_running;
    unsigned int nr_preferred_running;
#endif
    #define CPU_LOAD_IDX_MAX 5
    unsigned long cpu_load[CPU_LOAD_IDX_MAX];
#ifdef CONFIG_NO_HZ_COMMON
#ifdef CONFIG_SMP
    unsigned long last_load_update_tick;
#endif /* CONFIG_SMP */
    unsigned long nohz_flags;
#endif /* CONFIG_NO_HZ_COMMON */
#ifdef CONFIG_NO_HZ_FULL
    unsigned long last_sched_tick;
#endif
    /* capture load from *all* tasks on this cpu: */
    struct load_weight load;
    unsigned long nr_load_updates;
    u64 nr_switches;

    struct cfs_rq cfs;
    struct rt_rq rt;
    struct dl_rq dl;

#ifdef CONFIG_FAIR_GROUP_SCHED
    /* list of leaf cfs_rq on this cpu: */
    struct list_head leaf_cfs_rq_list;
    struct list_head *tmp_alone_branch;
#endif /* CONFIG_FAIR_GROUP_SCHED */

    unsigned long nr_uninterruptible;

    struct task_struct *curr, *idle, *stop;
    unsigned long next_balance;
    struct mm_struct *prev_mm;

    unsigned int clock_skip_update;
    u64 clock;
    u64 clock_task;

    atomic_t nr_iowait;

#ifdef CONFIG_IRQ_TIME_ACCOUNTING
    u64 prev_irq_time;
#endif
#ifdef CONFIG_PARAVIRT
    u64 prev_steal_time;
#endif
#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
    u64 prev_steal_time_rq;
#endif

    /* calc_load related fields */
    unsigned long calc_load_update;
    long calc_load_active;

#ifdef CONFIG_SCHED_HRTICK
#ifdef CONFIG_SMP
    int hrtick_csd_pending;
    struct call_single_data hrtick_csd;
#endif
    struct hrtimer hrtick_timer;
#endif
    ...
#ifdef CONFIG_CPU_IDLE
    /* Must be inspected within a rcu lock section */
    struct cpuidle_state *idle_state;
#endif
};

You can see how the scheduling classes (cfs, rt, and dl) embed their class-specific runqueues into the generic runqueue; a short sketch of how a class reaches its embedded runqueue follows the list below. Other elements of interest in the runqueue are:

  • nr_running: This denotes the number of processes in the runqueue
  • load: This denotes the current load on the queue (all runnable processes)
  • curr and idle: These point to the task_struct of the current running task and the idle task, respectively. The idle task is scheduled when there are no other tasks to run.
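
To make the embedding concrete, here is a simplified, illustrative snippet of how a scheduling class derives its class-specific runqueue from the generic struct rq it is handed. The helper name rq_cfs() is hypothetical, but the &rq->cfs access mirrors what the CFS code in kernel/sched/fair.c effectively does:

/* Illustrative only: a scheduling class reaches its own runqueue, which is
 * embedded by value in the generic struct rq. */
static inline struct cfs_rq *rq_cfs(struct rq *rq)    /* hypothetical helper */
{
    return &rq->cfs;    /* class-specific runqueue embedded in struct rq */
}

/* The RT and deadline classes similarly operate on &rq->rt and &rq->dl. */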