3. INTRODUCTION
• A major task of an operating system is to manage a collection of
processes. In some cases, a single process may consist of a set of
individual threads.
• In both situations, a system with a single CPU, or a multiprocessor
system with fewer CPUs than processes, has to divide CPU time among
the different processes / threads that are competing to use it. This
activity is called CPU scheduling.
4. BASIC TERMINOLOGIES
CPU Scheduling
▪ A process consists of a cycle of CPU execution and I/O execution.
▪ Normally, every process begins with a CPU burst, which may be followed by an
I/O burst, then another CPU burst, then another I/O burst, and so on;
eventually the process ends with a final CPU burst.
I/O Bound
▪ If the CPU bursts are relatively short compared to the I/O bursts, then the
process is said to be I/O bound. For example, a typical data processing task
involves reading a record, some minimal computation and writing a record.
CPU Bound
▪ If CPU bursts are relatively long compared to I/O bursts, a process is said to be
CPU bound.
5. CPU scheduling may take place:
• When a process completes its execution
• When a process voluntarily leaves the CPU to perform an I/O operation or
to wait for an event
• When a process enters the ready state, from either the new or the waiting
state, and it is a high-priority process
CPU scheduling is of two types: preemptive and non-preemptive.
6. PREEMPTIVE AND NON-PREEMPTIVE
SCHEDULING
There are four conditions under which CPU scheduling may take place. They
are:
1) When a process switches from the running state to the waiting state.
2) When a process switches from the running state to the ready state.
3) When a process switches from the waiting state to the ready state.
4) When a process terminates.
7. CONTINUE
▪ When scheduling takes place only under conditions 1 and 4, it is called
non-preemptive.
▪ All other scheduling is preemptive.
8. CPU SCHEDULING TERMINOLOGY
▪ Burst Time / Execution Time / Running Time:- the time a process requires
on the CPU
▪ Waiting Time:- the time a process spends in the ready state waiting for the CPU
▪ Arrival Time:- the time at which a process enters the ready state
▪ Finish Time:- the time at which a process completes and exits the system
▪ Turnaround Time:- the total time a process spends in the system
▪ Response Time:- the time between a process entering the ready queue and
being scheduled on the CPU for the first time
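These quantities are related by simple arithmetic. As a minimal sketch (not from the slides; the function name and the sample numbers are illustrative), they can be computed like this in Python:

```python
# Sketch: computing the terms above for one process, assuming we already
# know its arrival time, first-scheduled time, finish time, and burst time.
def metrics(arrival, first_run, finish, burst):
    turnaround = finish - arrival      # total time spent in the system
    waiting = turnaround - burst       # time spent waiting in the ready state
    response = first_run - arrival     # delay until first CPU allocation
    return turnaround, waiting, response

# Hypothetical process: arrives at 0, first scheduled at 2,
# finishes at 10, needs 6 time units of CPU.
print(metrics(0, 2, 10, 6))  # (10, 4, 2)
```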
9. SCHEDULING CRITERIA
There are many scheduling algorithms and various criteria to judge their
performance. Some Criteria are as follows:
• CPU utilization: the CPU should be kept as busy as possible performing
useful work. CPU utilization is especially important in real-time
systems and multiprogrammed systems.
• Throughput: the number of processes completed in a specified time
period. Throughput increases for short processes and decreases when
processes are large.
• Turnaround Time: the total amount of time needed to execute a
process.
• Waiting Time: the total amount of time the process has spent waiting.
It equals the turnaround time minus the actual execution time.
• Response Time: the amount of time between the submission of a request
and the production of the first response.
10. A CPU scheduling algorithm should try to maximize or minimize the following criteria:
Maximize:- CPU utilization, throughput
Minimize:- turnaround time, waiting time, response time
11. INTERVAL TIMER
Timer interruption is a technique that is closely related to preemption.
When a process gets the CPU, a timer may be set to a specified interval.
Both timer interruption and preemption force a process to yield the
CPU before its CPU burst is complete.
However, it is helpful to distinguish timer interruption from preemption
caused by higher priority processes becoming ready for two reasons:
▪ Timer interruption is a function of the particular process’s own
behavior. It is independent of the rest of the system.
▪ Almost all multiprogrammed operating systems use some form of timer
to prevent a process from tying up the system forever, but preemption
in favor of a higher-priority process is a feature that may or may not be
included in a given operating system.
12. DISPATCHER
The dispatcher is the module that actually gives control of the CPU to the
process selected by the CPU scheduler. It is another part of the scheduling system.
The functions of dispatcher module are as follows:
Context switching
Switching to user mode
Jumping to the proper location in the user program to restart it
The dispatcher should be very fast because it is invoked every time a
process takes control of the CPU. The time the dispatcher takes to stop
one process and start another is called the dispatch latency.
13. SCHEDULING ALGORITHMS
Below is a list of some well-known scheduling algorithms:-
First Come First Served (FCFS) Scheduling: non-preemptive
Shortest Job First (SJF) Scheduling: preemptive or non-preemptive
Priority Scheduling: preemptive or non-preemptive
Round Robin Scheduling: preemptive
Each scheduling algorithm has its own criteria for choosing the next job that
will run on the CPU. Since the CPU scheduler needs to be fast, the actual
algorithms are typically not very complex.
14. CONTINUE
Timelines:-
Scheduling is based on the information that is available at a given
time. We need some way to represent the state of the system, the
processes in it, and how both change over time. Gantt charts are used for
this purpose.
Gantt Chart:-
A Gantt chart is a horizontal bar divided into one segment per scheduled
process, labelled with the process name. The start time appears at the left
edge, each segment boundary marks the end time of one process and the start
of the next, and the overall end time appears at the right edge.
15. FIRST COME FIRST SERVED SCHEDULING
First Come First Served is the simplest CPU scheduling algorithm: the
process that enters the ready queue first gets the CPU first. The FCFS
algorithm is easy to understand.
A drawback to the FCFS algorithm is that the processes may have
to wait for excessively long amounts of time.
Example 1:
Process Burst Time
P1 24
P2 3
P3 3
16. ▪ Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:
▪ Waiting time for P1 = 0; P2 = 24; P3 = 27
▪ Average waiting time: (0 + 24 + 27)/3 = 17
▪ Waiting Time: Finish Time – Burst Time – Arrival Time
▪ Turnaround Time= Finish Time – Arrival Time
| P1 (0–24) | P2 (24–27) | P3 (27–30) |
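The schedule above can be reproduced with a small sketch (illustrative Python, not from the slides; it assumes all processes arrive at time 0, as in this example): under FCFS each process simply waits for the total burst time of everything ahead of it.

```python
# Sketch of FCFS with all processes arriving at time 0,
# reproducing the example above (P1=24, P2=3, P3=3).
def fcfs(bursts):
    """Return per-process waiting times for the given arrival order."""
    waiting, clock = [], 0
    for burst in bursts:
        waiting.append(clock)  # each process waits for all earlier ones
        clock += burst
    return waiting

w = fcfs([24, 3, 3])       # arrival order P1, P2, P3
print(w, sum(w) / len(w))  # [0, 24, 27] 17.0
```

Calling `fcfs([3, 3, 24])` (order P2, P3, P1) reproduces the next slide's result: waiting times [0, 3, 6] and an average of 3.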
17. CONTINUE
Suppose that the processes arrive in the order
P2 , P3 , P1
▪ The Gantt chart for the schedule is:
▪ Waiting time for P1 = 6; P2 = 0; P3 = 3
▪ Average waiting time: (6 + 0 + 3)/3 = 3
▪ Much better than previous case
▪ Convoy effect: short processes get stuck waiting behind a long process
| P2 (0–3) | P3 (3–6) | P1 (6–30) |
18. SHORTEST JOB FIRST SCHEDULING
Associate with each process the length of its next CPU burst. Use these
lengths to schedule the process with the shortest time.
Two schemes:
• Non-preemptive: once the CPU is given to a process, it cannot be
preempted until it completes its CPU burst.
• Preemptive: if a new process arrives with a CPU burst length less
than the remaining time of the currently executing process, preempt. This
scheme is known as Shortest-Remaining-Time-First (SRTF).
SJF is optimal – gives minimum average waiting time for a given set
of processes
19. NON-PREEMPTIVE SJF EXAMPLE
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
▪ SJF (non-preemptive)
▪ Average waiting time = (0 + 6 + 3 + 7)/4 = 4
| P1 (0–7) | P3 (7–8) | P2 (8–12) | P4 (12–16) |
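This schedule can be checked with a sketch (illustrative Python, not from the slides): whenever the CPU is free, pick the ready process with the shortest burst and run it to completion.

```python
# Sketch of non-preemptive SJF for the example above.
# Each process is a tuple (name, arrival_time, burst_time).
def sjf(processes):
    remaining = list(processes)
    clock, waiting = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= clock]
        if not ready:                     # CPU idle until the next arrival
            clock = min(p[1] for p in remaining)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])
        waiting[name] = clock - arrival   # time spent in the ready state
        clock += burst                    # run to completion (no preemption)
        remaining.remove((name, arrival, burst))
    return waiting

w = sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(w, sum(w.values()) / 4)  # {'P1': 0, 'P3': 3, 'P2': 6, 'P4': 7} 4.0
```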
20. PREEMPTIVE SJF EXAMPLE
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
▪ SJF (preemptive)
▪ Average waiting time = (9 + 1 + 0 +2)/4 = 3
| P1 (0–2) | P2 (2–4) | P3 (4–5) | P2 (5–7) | P4 (7–11) | P1 (11–16) |
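The preemptive variant can be sketched by stepping one time unit at a time and always running the ready process with the least remaining work (illustrative Python, not from the slides):

```python
# Sketch of preemptive SJF (SRTF) for the same four processes.
def srtf(processes):
    """processes: list of (name, arrival, burst); returns waiting times."""
    left = {name: burst for name, _, burst in processes}
    arrival = {name: arr for name, arr, _ in processes}
    bursts = {name: burst for name, _, burst in processes}
    finish, clock = {}, 0
    while left:
        ready = [n for n in left if arrival[n] <= clock]
        if not ready:                           # CPU idle until an arrival
            clock += 1
            continue
        n = min(ready, key=lambda n: left[n])   # shortest remaining time
        left[n] -= 1                            # run for one time unit
        clock += 1
        if left[n] == 0:
            finish[n] = clock
            del left[n]
    # waiting time = turnaround time - burst time
    return {n: finish[n] - arrival[n] - bursts[n] for n in finish}

w = srtf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(w, sum(w.values()) / 4)  # average waiting time 3.0
```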
21. PRIORITY SCHEDULING
Another way to schedule jobs is to pick the job that has the highest priority. This
requires that each process have a priority associated with it. The priority is generally
an integer with some well-defined range, e.g. 1 to 10. The CPU is allocated to the process with
the highest priority.
Priority scheduling can be either non-preemptive or preemptive. With
preemption, a high-priority job can remove a low-priority job from the CPU and take
over.
Non-Preemptive Priority Scheduling:-
| P2 (0–1) | P5 (1–6) | P1 (6–16) | P3 (16–18) | P4 (18–19) |
The average waiting time is: (0 + 1 + 6 + 16 + 18)/5 = 8.2
22. PREEMPTIVE PRIORITY SCHEDULING
Preemptive Priority Scheduling Example:-
| P1 (0–2) | P2 (2–3) | P1 (3–4) | P3 (4–6) | P1 (6–13) | P4 (13–14) | P5 (14–19) |
Shortest Job First scheduling is also a form of priority scheduling. In the
case of Shortest Job First scheduling, the priority is defined as the predicted next
CPU burst.
One major problem with priority-based scheduling is that it may not be
fair: some low-priority processes may never get the chance to execute because
higher-priority processes keep taking the CPU. One solution to this problem is
to implement aging. The process of gradually increasing the priority of a
process as it waits is known as aging.
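Aging can be sketched as follows (illustrative Python, not from the slides; the function name, the boost interval, and the convention that a smaller number means higher priority are all assumptions):

```python
# Sketch of aging: periodically improve the priority of every waiting
# process so that low-priority processes are not starved.
# Convention here: a smaller number means a higher priority.
def age(ready_queue, boost=1, floor=0):
    """Improve (decrease) the priority value of each waiting process."""
    for proc in ready_queue:
        proc["priority"] = max(floor, proc["priority"] - boost)

queue = [{"name": "P1", "priority": 9}, {"name": "P2", "priority": 3}]
for _ in range(4):   # e.g. run once per scheduling interval
    age(queue)
print(queue)         # P1 has aged from 9 to 5, P2 from 3 to 0
```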
23. ROUND ROBIN SCHEDULING
Each process gets a small unit of CPU time (time quantum), usually
10-100 milliseconds. After this time has elapsed, the process is
preempted and added to the end of the ready queue.
If there are n processes in the ready queue and the time quantum
is q, then each process gets 1/n of the CPU time in chunks of at
most q time units at once. No process waits more than (n-1)q
time units.
Performance
• If q is very large, RR degenerates to FIFO (FCFS).
• If q is very small, context switches dominate; q must be large with
respect to the context-switch time, otherwise the overhead is too high.
24. ROUND ROBIN EXAMPLE WITH
QUANTUM TIME =20
Process Burst Time
P1 53
P2 17
P3 68
P4 24
▪ The Gantt chart is:
▪ Typically, higher average turnaround than SJF, but better response
| P1 (0–20) | P2 (20–37) | P3 (37–57) | P4 (57–77) | P1 (77–97) | P3 (97–117) | P4 (117–121) | P1 (121–134) | P3 (134–154) | P3 (154–162) |
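The chart above can be reproduced with a sketch (illustrative Python, not from the slides; it assumes all four processes are ready at time 0):

```python
# Sketch of Round Robin with quantum 20 for the example above.
from collections import deque

def round_robin(bursts, quantum):
    """bursts: dict name -> burst time; returns (name, start, end) slices."""
    queue = deque(bursts)            # all processes ready at time 0
    left = dict(bursts)
    clock, slices = 0, []
    while queue:
        name = queue.popleft()
        run = min(quantum, left[name])
        slices.append((name, clock, clock + run))
        clock += run
        left[name] -= run
        if left[name] > 0:
            queue.append(name)       # back to the end of the ready queue
    return slices

sched = round_robin({"P1": 53, "P2": 17, "P3": 68, "P4": 24}, 20)
print(sched)  # first slice ('P1', 0, 20), last slice ('P3', 154, 162)
```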
25. MULTILEVEL QUEUE
Ready queue is partitioned into separate queues:
foreground (interactive)
background (batch)
Each queue has its own scheduling algorithm
▪ foreground – RR
▪ background – FCFS
Scheduling must be done between the queues
▪ Fixed-priority scheduling (i.e., serve all from foreground, then from background).
Possibility of starvation.
▪ Time slice: each queue gets a certain amount of CPU time which it can schedule
amongst its processes; e.g., 80% to foreground in RR and 20% to background
in FCFS.
27. MULTILEVEL FEEDBACK QUEUE
▪ A process can move between the various queues; aging can be
implemented this way
▪ Multilevel-feedback-queue scheduler defined by the following
parameters:
▪ number of queues
▪ scheduling algorithms for each queue
▪ method used to determine when to upgrade a process
▪ method used to determine when to demote a process
▪ method used to determine which queue a process will enter when that process
needs service
29. EXAMPLE OF MULTILEVEL FEEDBACK
QUEUE
Three queues:
▪ Q0 – RR with time quantum 8 milliseconds
▪ Q1 – RR time quantum 16 milliseconds
▪ Q2 – FCFS
Scheduling
▪ A new job enters queue Q0 which is served FCFS. When it gains CPU, job receives 8
milliseconds. If it does not finish in 8 milliseconds, job is moved to queue Q1.
▪ At Q1 job is again served FCFS and receives 16 additional milliseconds. If it still
does not complete, it is preempted and moved to queue Q2.
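This three-queue scheme can be sketched as follows (illustrative Python, not from the slides; it simplifies by assuming all jobs arrive at time 0 and by not preempting a lower queue when a new job arrives):

```python
# Sketch of the three-queue MLFQ above: Q0 (RR, q=8), Q1 (RR, q=16),
# Q2 (FCFS). A job that exhausts its quantum is demoted one level.
from collections import deque

def mlfq(bursts):
    """bursts: dict name -> burst; returns (level, name, start, end) slices."""
    quanta = [8, 16, None]                 # None = run to completion (FCFS)
    queues = [deque(bursts), deque(), deque()]
    left, clock, log = dict(bursts), 0, []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)
        name = queues[level].popleft()
        q = quanta[level]
        run = left[name] if q is None else min(q, left[name])
        log.append((level, name, clock, clock + run))
        clock += run
        left[name] -= run
        if left[name] > 0:                 # demote to the next lower queue
            queues[level + 1].append(name)
    return log

# Hypothetical jobs: A needs 30 ms (demoted twice), B needs 5 ms (done in Q0).
print(mlfq({"A": 30, "B": 5}))
```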
30. MULTIPLE-PROCESSOR SCHEDULING
▪ CPU scheduling more complex when multiple CPUs are available
▪ Homogeneous processors within a multiprocessor
▪ Load sharing.
▪ Asymmetric multiprocessing – only one processor accesses the system
data structures, alleviating the need for data sharing
31. REAL-TIME SCHEDULING
▪ Hard real-time systems – required to complete a critical task within a
guaranteed amount of time
▪ Soft real-time computing – requires that critical processes receive
priority over less fortunate ones
32. THREAD SCHEDULING
• Local Scheduling – How the threads library decides
which thread to put onto an available LWP
• Global Scheduling – How the kernel decides which
kernel thread to run next