3. Outline
• Introduction of scheduler
• Scheduler History
– Round-Robin Scheduler
– O(N)
– O(1)
• Completely Fair Scheduler
• Real Time Scheduler
4. Introduction of Scheduler
• Scheduler
– Determines which process runs when there are
multiple runnable processes.
• Linux Scheduler history

  Linux Version    | Scheduler
  Before 2.4       | Round-Robin Scheduler
  2.4              | O(N)
  2.5.17 ~ 2.6.23  | O(1)
  2.6.23 ~ now     | Completely Fair Scheduler
5. Round Robin Scheduler
struct task_struct {
    long counter;
    long priority;
    …
}
• Algorithm
– Init (on fork): p->counter = current->counter >> 1
– At each tick: current->counter--
– When current->counter == 0, the system picks the thread
with the highest counter to run.
– When all threads’ counters are 0, reset each counter:
p->counter = (p->counter >> 1) + p->priority
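The accounting above can be sketched in a few lines of C. This is a minimal model, not the kernel code: `struct task`, `tick`, `recalculate`, and `pick_next` are illustrative names standing in for the 2.x timer interrupt and `schedule()` logic.

```c
#include <assert.h>

/* Minimal sketch of the round-robin accounting: a task's remaining
 * time slice lives in `counter`; when every runnable task has
 * exhausted it, counters are refilled from the (halved) leftover
 * plus the static priority. */
struct task {
    long counter;   /* remaining ticks in this epoch */
    long priority;  /* static priority */
};

/* One timer tick charged to the running task. */
void tick(struct task *current)
{
    if (current->counter > 0)
        current->counter--;
}

/* Refill rule from the slide: counter = (counter >> 1) + priority.
 * A task that slept keeps half of its unused slice as a bonus. */
void recalculate(struct task *tasks, int n)
{
    for (int i = 0; i < n; i++)
        tasks[i].counter = (tasks[i].counter >> 1) + tasks[i].priority;
}

/* Pick the runnable task with the highest counter (-1 if all are 0). */
int pick_next(const struct task *tasks, int n)
{
    int best = -1;
    long best_counter = 0;
    for (int i = 0; i < n; i++)
        if (tasks[i].counter > best_counter) {
            best_counter = tasks[i].counter;
            best = i;
        }
    return best;
}
```

Note how a task that exhausted its slice (counter 0) restarts at exactly its priority, while a sleeper carries over half its leftover, which is what gave interactive tasks an edge.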
6. O(N) Scheduler
• Algorithm
struct task_struct {
    …
    long counter;
    long nice;
    …
}
– All runnable tasks are kept in one global run list.
– The time slice depends on priority and CONFIG_HZ.
– When picking the next task, choose the task with the
greatest weight among those with p->counter != 0:
weight = p->counter + (20 - nice)
– After all tasks have used up their time slices, recalculate
the counters:
#if HZ < 200
#define TICK_SCALE(x) ((x) >> 2)
#elif HZ < 400
#define TICK_SCALE(x) ((x) >> 1)
…
#endif
#define NICE_TO_TICKS(nice) (TICK_SCALE(20-(nice))+1)
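A worked example of the macros above, with HZ fixed at 100 so only the first branch applies. The `weight` helper is a simplified sketch of the selection rule from this slide; the real 2.4 `goodness()` also adds small bonuses (e.g. for sharing the current `mm` or last running on this CPU), which are omitted here.

```c
#include <assert.h>

/* Worked example of the slide's macros with HZ fixed at 100,
 * so TICK_SCALE(x) is (x >> 2). */
#define HZ 100
#define TICK_SCALE(x) ((x) >> 2)
#define NICE_TO_TICKS(nice) (TICK_SCALE(20 - (nice)) + 1)

/* Simplified goodness-style weight: remaining ticks plus a static
 * bonus of (20 - nice); a task with no slice left is not eligible. */
long weight(long counter, long nice)
{
    if (counter == 0)
        return 0;
    return counter + (20 - nice);
}
```

So at HZ=100 a nice-0 task gets (20 >> 2) + 1 = 6 ticks (60 ms), nice -20 gets 11 ticks, and nice 19 gets a single tick.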
7. O(1) Scheduler (1)
• This scheduler uses two priority arrays per
processor to keep track of that processor’s
ready tasks
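The two-array design can be sketched as follows. This is a toy model (per-priority task lists are reduced to counters, and a plain loop stands in for `sched_find_first_bit()` over the priority bitmap), but it shows why both picking the next task and refilling slices are O(1): the scan is over a fixed 140 slots, and the refill is a pointer swap.

```c
#include <assert.h>
#include <string.h>

#define MAX_PRIO 140  /* 0..99 real-time, 100..139 normal */

/* Toy model of one runqueue's two priority arrays: `active` holds
 * tasks that still have time slice, `expired` holds tasks that
 * used theirs up. Each slot just counts tasks at that priority. */
struct prio_array {
    int count[MAX_PRIO];  /* stand-in for the per-priority task lists */
};

struct runqueue {
    struct prio_array arrays[2];
    struct prio_array *active, *expired;
};

void rq_init(struct runqueue *rq)
{
    memset(rq, 0, sizeof(*rq));
    rq->active = &rq->arrays[0];
    rq->expired = &rq->arrays[1];
}

/* Find the highest ready priority; a fixed-size scan, so O(1)
 * regardless of how many tasks are queued. */
int highest_prio(const struct prio_array *a)
{
    for (int p = 0; p < MAX_PRIO; p++)
        if (a->count[p])
            return p;
    return -1;
}

/* When the active array empties, swap the two arrays: a pointer
 * exchange, with no per-task recalculation loop. */
void swap_arrays(struct runqueue *rq)
{
    struct prio_array *tmp = rq->active;
    rq->active = rq->expired;
    rq->expired = tmp;
}
```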
9. O(1) Scheduler (3)
• The bonus is derived from the task’s sleep time.
• MAX_SLEEP_AVG is 1000 ms; MAX_BONUS is 10
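A sketch of how those two constants combine: the average sleep time, capped at MAX_SLEEP_AVG, is mapped linearly onto 0..MAX_BONUS. (In the O(1) scheduler the dynamic priority is then shifted by bonus - MAX_BONUS/2, so heavy sleepers, i.e. interactive tasks, gain priority and CPU hogs lose it.)

```c
#include <assert.h>

#define MAX_SLEEP_AVG 1000  /* ms, from the slide */
#define MAX_BONUS 10

/* Map average sleep time (ms) linearly onto 0..MAX_BONUS,
 * clamping out-of-range inputs. */
int sleep_bonus(int sleep_avg_ms)
{
    if (sleep_avg_ms < 0)
        sleep_avg_ms = 0;
    if (sleep_avg_ms > MAX_SLEEP_AVG)
        sleep_avg_ms = MAX_SLEEP_AVG;
    return sleep_avg_ms * MAX_BONUS / MAX_SLEEP_AVG;
}
```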
10. CFS Scheduler: Concept(1)
• "Ideal multi-tasking CPU" is a (non-existent :-))
CPU which can run each task at precise equal
speed and equal share.[1]
[1] Documentation/scheduler/sched-design-CFS.txt
11. CFS Scheduler: Concept(2)
• A real CPU is not like this: it runs one task at a time,
which over short intervals is obviously not fair.
• So the concept of “virtual runtime” is
introduced.
Picture is from: Completely Fair Scheduler, Linux Journal, Issue #184, August 2009
12. CFS Scheduler: Virtual Runtime (1)
• The virtual runtime of a task specifies when its
next time slice would start execution on the ideal
multi-tasking CPU.[1]
• CFS tries to maintain an equal virtual runtime for
each task in a CPU’s run_queue at all times.
– Reason: tasks would execute simultaneously and no
task would ever get "out of balance" from the "ideal"
share of CPU time.[1]
• CFS always tries to run the task with the smallest
virtual runtime value.
[1] Documentation/scheduler/sched-design-CFS.txt
13. CFS scheduler: Virtual Runtime (2)
• One scheduling period shared by all runnable tasks:
P = sched_latency (1)
• Time slice for task i on the real processor:
slice_i = P * weight_i / Σ weight_j (2)
• Virtual runtime advance while task i runs
(weight_nice0 is the weight of a nice-0 task):
vruntime_i += delta_exec * weight_nice0 / weight_i (3)
• According to (2) and (3), over one full period every task
gains the same virtual runtime:
Δvruntime_i = slice_i * weight_nice0 / weight_i
            = P * weight_nice0 / Σ weight_j (4)
14. A demo: understanding virtual runtime
• Thread 1: weight 2 / Thread 2: weight 5
• Period clock: P = 10 ms (HZ: 100)

  Clock Sequence | Virtual Runtime 1 | Virtual Runtime 2
  0              | 0                 | 0
  1              | 1/2 * P           | 0
  2              | 1/2 * P           | 1/5 * P
  3              | 1/2 * P           | 2/5 * P
  4              | 1/2 * P           | 3/5 * P
  5              | 1 * P             | 3/5 * P
  6              | 1 * P             | 4/5 * P
  7              | 1 * P             | 1 * P
  …              | …                 | …
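The table can be reproduced with a tiny simulation. Assumptions (mine, not the slide's): each clock step runs the thread with the smallest virtual runtime for one slot, charging it P/weight of virtual runtime; ties go to thread 1, matching the table; vruntime is tracked in units of P/10 so the arithmetic stays integral.

```c
#include <assert.h>

/* Tiny two-thread simulation of the demo. Virtual runtime is kept
 * in units of P/10, so running one clock step charges a thread
 * 10/weight units (= P/weight). */
struct thread { int weight; int vruntime; };

/* Run the min-vruntime thread for one step (ties favor thread 0,
 * i.e. "Thread 1" in the table); returns which thread ran. */
int step(struct thread t[2])
{
    int who = (t[1].vruntime < t[0].vruntime) ? 1 : 0;
    t[who].vruntime += 10 / t[who].weight;
    return who;
}
```

Stepping through reproduces the rows above: thread 1 runs at steps 1 and 5 (its vruntime jumps by 1/2·P each time), thread 2 runs the other steps (jumps of 1/5·P), and both meet at 1·P after step 7.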
18. Real Time Scheduler
• The real-time scheduler has to ensure system-wide
strict real-time priority scheduling (SWSRPS):
only the N highest-priority tasks are running at
any given point in time, where N is the number of
CPUs.
• Frequent task balancing can introduce cache
thrashing and contention for global data (such as
runqueue locks) and can degrade throughput.
• Two policies
– SCHED_RR
– SCHED_FIFO
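Both policies are visible from userspace. SCHED_FIFO runs a task until it blocks, yields, or is preempted by a higher priority; SCHED_RR adds round-robin time slicing among tasks of equal priority. Their static priority range can be queried without privileges:

```c
#include <assert.h>
#include <sched.h>
#include <stdio.h>

/* Print the static priority range of the two real-time policies
 * (on Linux, typically 1..99 for both). */
void show_rt_ranges(void)
{
    printf("SCHED_FIFO: %d..%d\n",
           sched_get_priority_min(SCHED_FIFO),
           sched_get_priority_max(SCHED_FIFO));
    printf("SCHED_RR:   %d..%d\n",
           sched_get_priority_min(SCHED_RR),
           sched_get_priority_max(SCHED_RR));
}
```

Actually switching a task to SCHED_FIFO/SCHED_RR via `sched_setscheduler()` requires root or CAP_SYS_NICE, which is why this sketch only queries the ranges.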
20. Overview of RT scheduler Algorithm
• The scheduler has to address several scenarios:
– Where to place a task optimally on wakeup (that is,
pre-balance).
– What to do with a lower-priority task when it wakes
up but is on a runqueue running a task of higher
priority.
– What to do with a low-priority task when a
higher-priority task on the same runqueue wakes up and
preempts it.
– What to do when a task lowers its priority and thereby
causes a previously lower-priority task to have the
higher priority.
More: http://www.linuxjournal.com/magazine/real-time-linux-kernel-scheduler
24. Outline
• Objective
• How to balance among cores
– Hierarchy & Key Data Structures
• Scenarios of balance
25. Objective
1. Prevent processors from being idle while other
processors still have tasks waiting to execute[1]
2. Keep the difference in the number of ready tasks
on all processors as small as possible[1]
• Addition: try to save power while the load is light.[2]
[1] Chun-Yu Lai, Performance Evaluation of Linux Kernel Load Balancing Mechanisms , 2006
[2] Suresh Siddha, Chip Multi Processing aware Linux Kernel Scheduler , 2006 Linux Symposium
26. Hierarchy
• Scheduling Domain: Each scheduling domain
spans a number of CPUs.
• Scheduling Group: Each scheduling domain
must have one or more CPU groups, which are
organized as a circular one-way linked list.
• Balancing within a scheduling domain occurs
between groups.
More information: http://lwn.net/Articles/80911/
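The circular, singly linked group list can be walked with the usual ring idiom: start at the domain's first group and follow `next` until it wraps back to the start, visiting every group exactly once. A minimal sketch (`struct group` and `count_groups` are illustrative names, not kernel symbols):

```c
#include <assert.h>

/* Toy stand-in for sched_group: a node on a circular singly
 * linked ring. */
struct group {
    struct group *next;
    int id;
};

/* Count the groups on the ring starting from `first`; the do/while
 * shape visits each node exactly once, including a one-node ring. */
int count_groups(struct group *first)
{
    int n = 0;
    struct group *g = first;
    do {
        n++;
        g = g->next;
    } while (g != first);
    return n;
}
```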
28. Key members of sched_domain
struct sched_domain {
    /* These fields must be setup */
    struct sched_domain *parent;   /* top domain must be null terminated */
    struct sched_domain *child;    /* bottom domain must be null terminated */
    struct sched_group *groups;    /* the balancing groups of the domain */
    …
    unsigned int busy_factor;      /* less balancing by factor if busy */
    unsigned int imbalance_pct;    /* no balance until over watermark */
    …
    int flags;                     /* see SD_* */
    …
    unsigned long last_balance;    /* init to jiffies; units in jiffies */
    unsigned int balance_interval; /* initialise to 1; units in ms */
    unsigned int span_weight;
    unsigned long span[0];
};
/* sched-domains (multiprocessor balancing) declarations: */
#ifdef CONFIG_SMP
#define SD_LOAD_BALANCE    0x0001 /* Do load balancing on this domain. */
#define SD_BALANCE_NEWIDLE 0x0002 /* Balance when about to become idle */
#define SD_BALANCE_EXEC    0x0004 /* Balance on exec */
#define SD_BALANCE_FORK    0x0008 /* Balance on fork, clone */
#define SD_BALANCE_WAKE    0x0010 /* Balance on wakeup */
#define SD_WAKE_AFFINE     0x0020 /* Wake task to waking CPU */
#define SD_SHARE_CPUPOWER  0x0080 /* Domain members share cpu power */
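These SD_* values are bit flags OR-ed into `sched_domain.flags`, and balancing code tests them with a plain mask, as in the idle-balance loop shown later (`sd->flags & SD_LOAD_BALANCE`). A small illustration (`wants_newidle_balance` is an illustrative helper, not a kernel function):

```c
#include <assert.h>

/* Flag values copied from the slide. */
#define SD_LOAD_BALANCE    0x0001
#define SD_BALANCE_NEWIDLE 0x0002
#define SD_BALANCE_EXEC    0x0004
#define SD_WAKE_AFFINE     0x0020

/* A domain is considered for new-idle balancing only when both
 * SD_LOAD_BALANCE and SD_BALANCE_NEWIDLE are set in its flags. */
int wants_newidle_balance(int flags)
{
    return (flags & SD_LOAD_BALANCE) && (flags & SD_BALANCE_NEWIDLE);
}
```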
29. Key members of sched_group
struct sched_group {
    struct sched_group *next;  /* must be a circular list */
    …
    unsigned int group_weight;
    struct sched_group_power *sgp;
    …
    unsigned long cpumask[0];
};
struct sched_group_power {
    …
    unsigned int power;
    …
};
31. CFS Load Balancing: How to
• load_balance offloads tasks from the busiest
runqueue of the busiest group (the one with the
most runnable tasks), preferring tasks that are:
– inactive (likely to be cache-cold)
– high priority
• load_balance skips tasks that are:
– currently running on a CPU
– not allowed to run on the current CPU (as indicated by
the cpus_allowed bitmask in the task_struct)
– still cache-warm on their current CPU
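The three skip rules can be sketched as a predicate, loosely modeled on the kernel's `can_migrate_task()` but with simplified, illustrative fields (`running`, `cpus_allowed` as a plain bitmask, and a boolean `cache_hot` standing in for the cache-warmth heuristic):

```c
#include <assert.h>

/* Simplified view of a migration candidate. */
struct mtask {
    int running;                /* currently executing on a CPU */
    unsigned long cpus_allowed; /* bit i set = may run on CPU i */
    int cache_hot;              /* ran recently; migration is costly */
};

/* Returns 1 if the task may be pulled to dest_cpu, applying the
 * three skip rules from the slide in order. */
int can_migrate(const struct mtask *p, int dest_cpu)
{
    if (p->running)
        return 0;
    if (!(p->cpus_allowed & (1UL << dest_cpu)))
        return 0;
    if (p->cache_hot)
        return 0;
    return 1;
}
```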
32. How busiest is the busiest group?
• Within the current-level domain, the group with the
highest average load is the busiest group.
– If the current processor is idle, the busiest group
must additionally have more running threads than
the number of cores in that group.
– Else …
• If a busiest group is found, this domain is
unbalanced.
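A sketch of that search, under stated assumptions: it only implements the idle-CPU rule from this slide (the "Else" branch is elided there, so the non-idle case here simply takes the highest average load). `struct grp` and `find_busiest` are illustrative names, not kernel symbols.

```c
#include <assert.h>

/* Simplified per-group statistics. */
struct grp {
    long avg_load;   /* average load of the group */
    int nr_running;  /* runnable threads in the group */
    int nr_cores;    /* cores in the group */
};

/* Return the index of the busiest group, or -1 if none qualifies
 * (the domain is already balanced). When the local CPU is idle, a
 * group only qualifies if it has more runnable threads than cores,
 * i.e. it holds work that cannot all run inside the group. */
int find_busiest(const struct grp *g, int n, int local_idle)
{
    int best = -1;
    long best_load = 0;
    for (int i = 0; i < n; i++) {
        if (local_idle && g[i].nr_running <= g[i].nr_cores)
            continue;
        if (g[i].avg_load > best_load) {
            best_load = g[i].avg_load;
            best = i;
        }
    }
    return best;
}
```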
33. Restore balance
• How much load to actually move to equalize the
imbalance:
(1)
(2)
(3)
• Offload min(imbalance_x) from the busiest
runqueue in the busiest group to restore balance.
• The busiest runqueue is the one with the maximum
load weight in the busiest group.
34. Load Balancing: idle balancing
• Idle balancing
– In schedule(), if this CPU is about to become idle,
attempt to pull one task from the busiest CPU:
for_each_domain(this_cpu, sd) {
if (!(sd->flags & SD_LOAD_BALANCE))
continue;
pulled_task = load_balance(this_cpu, this_rq,
sd, CPU_NEWLY_IDLE, &balance);
if (pulled_task)
break;
}
35. Load Balancing: Periodic balancing
• On a timer tick, if the current time is after rq->next_balance,
trigger SCHED_SOFTIRQ.
• The current processor starts from the lowest-level
scheduling domain and walks up the domain hierarchy,
deciding at each level whether rebalancing is needed:
– interval = sd->balance_interval;
  if (idle != CPU_IDLE) interval *= sd->busy_factor;
– rebalance when the current time > sd->last_balance + interval
  and the current domain is unbalanced
– if rebalancing is needed, pull tasks from the busiest runqueue to
  the current runqueue.
• After one round of periodic balancing, rq->next_balance is
updated to the current time + the highest-level interval.
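The per-domain interval test above can be sketched as follows (`struct dom` and `balance_due` are illustrative; the kernel does this inside `rebalance_domains()` with jiffies arithmetic):

```c
#include <assert.h>

/* The three sched_domain fields involved in the interval test. */
struct dom {
    unsigned long last_balance;    /* time of last balance attempt */
    unsigned int balance_interval; /* base interval */
    unsigned int busy_factor;      /* stretch factor when not idle */
};

/* A domain is due for rebalancing once `now` passes
 * last_balance + interval, where the interval is stretched by
 * busy_factor on a non-idle CPU so busy CPUs balance less often. */
int balance_due(const struct dom *sd, unsigned long now, int cpu_idle)
{
    unsigned long interval = sd->balance_interval;
    if (!cpu_idle)
        interval *= sd->busy_factor;
    return now > sd->last_balance + interval;
}
```

The busy_factor stretch is the design point: an idle CPU rechecks every balance_interval, while a busy CPU waits busy_factor times longer, trading balance accuracy for less cache and lock traffic.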
36. Other Methods to keep Balance
• Exec balancing (SD_BALANCE_EXEC)
– Where to place a task on exec
• Fork balancing (SD_BALANCE_FORK)
– Where to place a newly spawned thread
• Wake balancing (SD_BALANCE_WAKE)
– Where to place the wakee thread
• ILB balancing
37. Exec balancing
• Search for the idlest group, from the highest-level
scheduling domain down to the lowest-level domain.
– The idlest group is the one with the minimum avg_load
– Meet
• Search for the idlest CPU within the idlest group.
– The idlest CPU is the one with the minimum avg_load in the idlest group
• Pack this task into a work item and add it to the
&per_cpu(cpu_stopper, cpu) list.
• Wake up the stopper->thread running on the idlest
CPU
38. Fork Balancing
• In do_fork, select the idlest CPU and insert this
thread into the runqueue of that CPU.
39. Wake Balancing
• The waker is currently running on CPU X; the wakee
last ran on CPU Y.
• If this_cpu_load + wakee_weight <= prev_cpu_load, the target CPU is close
to X; else close to Y.
• Starting from the last-level-cache domain, choose an
idle CPU. If there is no idle CPU, choose X or Y.
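The affine-wakeup decision reduces to one comparison, sketched below (`wake_target` is an illustrative helper; X is the waker's CPU, Y the wakee's previous CPU). If X's load plus the wakee's weight would still not exceed Y's load, waking near X lets the wakee share the waker's warm cache; otherwise it stays near its own previous cache on Y.

```c
#include <assert.h>

/* Choose between the waker's CPU (cpu_x) and the wakee's previous
 * CPU (cpu_y), per the load comparison on the slide. */
int wake_target(long this_cpu_load, long wakee_weight,
                long prev_cpu_load, int cpu_x, int cpu_y)
{
    if (this_cpu_load + wakee_weight <= prev_cpu_load)
        return cpu_x;  /* X can absorb the wakee: wake affine */
    return cpu_y;      /* otherwise stay near the previous CPU */
}
```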
40. Idle Load Balance(1)
• When one of the busy CPUs notices that idle
rebalancing may be needed, it kicks the idle
load balancer, which then does idle load
balancing on behalf of all the idle CPUs:
– now >= nohz.next_balance
– the number of running tasks > 2
– nohz.nr_cpus is not zero.
41. Idle Load Balance(2)
• Routine
– Find an idle-load-balancer (ilb) CPU and send an
IPI_RESCHEDULE IPI to it
– After the ilb CPU wakes up from the IPI:
• it does idle balancing for itself
• it helps the other idle processors do load balancing
• if it pulls tasks for another processor, it sends IPI_RESCHEDULE to it
– Update nohz.next_balance to the ilb CPU’s
next_balance