OPERATING SYSTEMS
UNIT-II
PROCESS SCHEDULING AND
SYNCHRONIZATION
UNIT-II PROCESS SCHEDULING AND SYNCHRONIZATION
CPU scheduling: scheduling criteria - scheduling algorithms - multiple-processor
scheduling - real-time scheduling - algorithm evaluation. Case study: process
scheduling in Linux. Process synchronization: the critical-section problem -
synchronization hardware - semaphores - classic problems of synchronization -
critical regions - monitors. Deadlock: system model - deadlock characterization -
methods for handling deadlocks - deadlock prevention - deadlock avoidance -
deadlock detection - recovery from deadlock.
Chapter 6: CPU Scheduling
1. Basic Concepts
2. Scheduling Criteria
3. Scheduling Algorithms
4. Multiple-Processor Scheduling
5. Real-Time Scheduling
Alternating Sequence of CPU And I/O Bursts
Histogram of CPU-burst Times
CPU Scheduler
Selects from among the processes in memory that are ready to execute, and allocates
the CPU to one of them
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
Scheduling under 1 and 4 is nonpreemptive
All other scheduling is preemptive
Dispatcher
Dispatcher module gives control of the CPU to the process selected by the short-term
scheduler; this involves:
switching context
switching to user mode
jumping to the proper location in the user program to restart that program
Dispatch latency – time it takes for the dispatcher to stop one process and start
another running
6.2 Scheduling Criteria
 CPU utilization – keep the CPU as busy as possible
 Throughput – # of processes that complete their execution per time unit
 Turnaround time – amount of time to execute a particular process
 Waiting time – amount of time a process has been waiting in the ready queue
 Response time – amount of time it takes from when a request was submitted
until the first response is produced, not output (for time-sharing environment)
6.3 Scheduling Algorithms
First-Come, First-Served (FCFS) Scheduling
Algorithm
Step 1:
Get the number of process and arrival time and CPU burst time of each
process
Step 2: Schedule the processes according to arrival time basis
Step 3: Calculate starting time, finishing time, waiting time and turnaround time
Step 4: Calculate average waiting time and average turn around time
Step 5: display all the values
Process Burst Time
P1 24
P2 3
P3 3
Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt chart for the schedule is:
| P1 | P2 | P3 |
0 24 27 30
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
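The waiting-time arithmetic above can be checked mechanically. The following is a minimal C sketch (not part of the original notes); it assumes all three processes arrive at time 0, in the order given, with the burst times from the example:

#include <stdio.h>

/* Minimal FCFS sketch: processes arrive at time 0 in the given order. */
int main(void) {
    int burst[] = {24, 3, 3};            /* P1, P2, P3 */
    int n = 3;
    int waiting[3], turnaround[3];
    int time = 0;
    double total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        waiting[i] = time;               /* waits until all earlier bursts finish */
        time += burst[i];
        turnaround[i] = time;            /* completion time, since arrival is 0 */
        total_wait += waiting[i];
        total_tat += turnaround[i];
    }
    for (int i = 0; i < n; i++)
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, waiting[i], turnaround[i]);
    printf("Average waiting time = %.2f\n", total_wait / n);      /* 17.00 */
    printf("Average turnaround time = %.2f\n", total_tat / n);
    return 0;
}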
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order
P2 , P3 , P1
The Gantt chart for the schedule is:
| P2 | P3 | P1 |
0 3 6 30
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
Much better than previous case
Convoy effect – short processes stuck behind a long process
Shortest-Job-First (SJF) Scheduling
Associate with each process the length of its next CPU burst. Use these lengths to
schedule the process with the shortest time
Two schemes:
non preemptive – once CPU given to the process it cannot be preempted until
completes its CPU burst
preemptive – if a new process arrives with CPU burst length less than
remaining time of the currently executing process, preempt. This scheme is known as
Shortest-Remaining-Time-First (SRTF)
SJF is optimal – gives minimum average waiting time for a given set of processes
Algorithm
Step 1:
Get the number of process and arrival time and CPU burst time of each
process
Step 2:
Schedule the processes according to minimum burst time
Step 3: Assign the minimum arrival time among all the processes as the CPU start
time.
Step 4: Calculate start time, finishing time, turn around time and waiting time
Step 5: Calculate average waiting time and average turn around time
Step 6: display the all calculated values
Example of Non-Preemptive SJF
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
SJF (non-preemptive) Gantt chart:
| P1 | P3 | P2 | P4 |
0 7 8 12 16
Average waiting time = (0 + 6 + 3 + 7)/4 = 4
Example of Preemptive SJF
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
SJF (preemptive) Gantt chart:
| P1 | P2 | P3 | P2 | P4 | P1 |
0 2 4 5 7 11 16
Average waiting time = (9 + 1 + 0 +2)/4 = 3
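As an illustrative sketch (not from the original notes), the non-preemptive SJF schedule above can be reproduced by repeatedly picking, among the processes that have already arrived, the one with the shortest burst. The arrival and burst values are the ones from the example:

#include <stdio.h>

/* Minimal non-preemptive SJF sketch using the example data above. */
int main(void) {
    int n = 4;
    int arrival[] = {0, 2, 4, 5};
    int burst[]   = {7, 4, 1, 4};
    int done[4] = {0};
    int waiting[4];
    int time = 0;
    double total_wait = 0;

    for (int finished = 0; finished < n; finished++) {
        int pick = -1;
        for (int i = 0; i < n; i++) {
            if (done[i] || arrival[i] > time) continue;
            if (pick < 0 || burst[i] < burst[pick]) pick = i;
        }
        if (pick < 0) { time++; finished--; continue; }   /* CPU idle: advance time */
        waiting[pick] = time - arrival[pick];
        time += burst[pick];
        done[pick] = 1;
        total_wait += waiting[pick];
    }
    for (int i = 0; i < n; i++)
        printf("P%d: waiting=%d\n", i + 1, waiting[i]);
    printf("Average waiting time = %.2f\n", total_wait / n);       /* 4.00 */
    return 0;
}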
Determining Length of Next CPU Burst
Can only estimate the length
Can be done by using the length of previous CPU bursts, using exponential averaging
Prediction of the Length of the Next CPU Burst
Exponential averaging:
1. tn = actual length of the nth CPU burst
2. τn+1 = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τn+1 = α tn + (1 − α) τn
Examples of Exponential Averaging
α = 0
τn+1 = τn
Recent history does not count
α = 1
τn+1 = tn
Only the actual last CPU burst counts
If we expand the formula, we get:
τn+1 = α tn + (1 − α) α tn−1 + … + (1 − α)^j α tn−j + … + (1 − α)^(n+1) τ0
Since both α and (1 − α) are less than or equal to 1, each successive term has less
weight than its predecessor
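As a small illustration (not in the original notes), the recurrence can be coded in a few lines of C. The value alpha = 0.5, the initial guess tau0 = 10, and the burst sequence below are assumptions chosen only to show the calculation:

#include <stdio.h>

/* Exponential averaging: tau_next = alpha * t + (1 - alpha) * tau. */
static double predict_next(double alpha, double t, double tau) {
    return alpha * t + (1.0 - alpha) * tau;
}

int main(void) {
    double alpha = 0.5;
    double tau = 10.0;                            /* assumed initial guess tau0 */
    double bursts[] = {6, 4, 6, 4, 13, 13, 13};   /* assumed burst history */

    for (int i = 0; i < 7; i++) {
        tau = predict_next(alpha, bursts[i], tau);
        printf("after burst %d (t=%.0f): next prediction = %.2f\n",
               i + 1, bursts[i], tau);
    }
    return 0;
}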
Priority Scheduling
A priority number (integer) is associated with each process
The CPU is allocated to the process with the highest priority (smallest integer =
highest priority)
Preemptive
nonpreemptive
SJF is a priority scheduling where priority is the predicted next CPU burst time
Problem ⇒ Starvation – low-priority processes may never execute
Solution ⇒ Aging – as time progresses, increase the priority of the process
Algorithm
Step 1:
Get the number of processes, arrival time and CPU burst time and priority of
each process
Step 2: schedule the processes according to highest priority
Step 3: Calculate the starting time, finishing time, waiting time turn around time
Step 4: Calculate the average waiting time and average turn around time
Step 5: Display all calculated values
Example of priority scheduling algorithm
Process CPU burst time priority
P1 20 1
P2 5 3
P3 10 2
P4 15 3
The Gantt chart for the schedule is:
| P1 | P3 | P2 | P4 |
0 20 30 35 50
Round Robin (RR)
Each process gets a small unit of CPU time (time quantum), usually 10-100
milliseconds. After this time has elapsed, the process is preempted and added to the
end of the ready queue.
If there are n processes in the ready queue and the time quantum is q, then each
process gets 1/n of the CPU time in chunks of at most q time units at once. No
process waits more than (n-1)q time units.
Performance
q large ⇒ FIFO
q small ⇒ q must be large with respect to the context-switch time, otherwise overhead
is too high
Algorithm
Step 1:
Get the number of processes, arrival time and CPU burst time and priority of
each process
Step 2: Get the time slice with which processes are to be executed.
Step 3: Schedule the processes according to arrival time
Step 4: Execute each schedule process for given time slice
Step 5:pre-empt the process after the expiry of the time slice and move it to the
tail of the queue.
Step 6: Calculate the starting time, finishing time, waiting time turn around time
Step 7: Calculate the average waiting time and average turn around time
Step 8: Display all calculated values
Process Burst Time
P1 53
P2 17
P3 68
P4 24
The Gantt chart (time quantum = 20) is:
| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3 |
0 20 37 57 77 97 117 121 134 154 162
Typically, higher average turnaround than SJF, but better response
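The chart above can be reproduced with a short simulation. The sketch below (an illustration, not from the notes) assumes all four processes arrive at time 0 and uses a quantum of 20; scanning the processes in a fixed order is equivalent to a FIFO ready queue under that assumption:

#include <stdio.h>

/* Minimal round-robin sketch: all processes arrive at time 0, quantum = 20. */
int main(void) {
    int n = 4, quantum = 20;
    int burst[]     = {53, 17, 68, 24};
    int remaining[] = {53, 17, 68, 24};
    int finish[4];
    int time = 0, left = n;

    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            printf("t=%3d..%3d : P%d\n", time, time + slice, i + 1);
            time += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) { finish[i] = time; left--; }
        }
    }
    for (int i = 0; i < n; i++)
        printf("P%d: turnaround=%d waiting=%d\n",
               i + 1, finish[i], finish[i] - burst[i]);
    return 0;
}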
Time Quantum and Context Switch Time
Multilevel Queue
Ready queue is partitioned into separate queues:
foreground (interactive)
background (batch)
Each queue has its own scheduling algorithm
foreground – RR
background – FCFS
Scheduling must be done between the queues
Fixed priority scheduling; (i.e., serve all from foreground then from
background). Possibility of starvation.
Time slice – each queue gets a certain amount of CPU time which it can
schedule amongst its processes; e.g., 80% to foreground in RR and
20% to background in FCFS
Multilevel Queue Scheduling
Multilevel Feedback Queue
A process can move between the various queues; aging can be implemented this way
Multilevel-feedback-queue scheduler defined by the following parameters:
1. number of queues
2. scheduling algorithms for each queue
3. method used to determine when to upgrade a process
4. method used to determine when to demote a process
5. method used to determine which queue a process will enter when that
process needs service
Example of Multilevel Feedback Queue
Three queues:
Q0 – RR with time quantum 8 milliseconds
Q1 – RR time quantum 16 milliseconds
Q2 – FCFS (first-come, first-served)
Scheduling
A new job enters queue Q0 which is served FCFS. When it gains CPU, job
receives 8 milliseconds. If it does not finish in 8 milliseconds, job is moved to
queue Q1.
At Q1 job is again served FCFS and receives 16 additional milliseconds. If it
still does not complete, it is preempted and moved to queue Q2.
Multilevel Feedback Queues
6.4 Multiple-Processor Scheduling
CPU scheduling more complex when multiple CPUs are available
Homogeneous processors within a multiprocessor
Load sharing
Asymmetric multiprocessing – only one processor accesses the system data structures,
alleviating the need for data sharing
6.5 Real-Time Scheduling
Hard real-time systems – required to complete a critical task within a guaranteed
amount of time
Soft real-time computing – requires that critical processes receive priority over less
fortunate ones
Chapter 7: Process Synchronization
1. The Critical-Section Problem
2. Synchronization Hardware
3. Semaphores
4. Classic Problems of Synchronization
5. Monitors
6. Synchronization Examples
Producer
while (true) {
/* produce an item and put in nextProduced */
while (count == BUFFER_SIZE)
; // do nothing
buffer [in] = nextProduced;
in = (in + 1) % BUFFER_SIZE;
count++;
}
Consumer
while (true) {
while (count == 0)
; // do nothing
nextConsumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
count--;
/* consume the item in nextConsumed */
}
Race Condition
1. count++ could be implemented as
register1 = count
register1 = register1 + 1
count = register1
2. count-- could be implemented as
register2 = count
register2 = register2 - 1
count = register2
3. Consider this execution interleaving with “count = 5” initially:
S0: producer execute register1 = count {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = count {register2 = 5}
S3: consumer execute register2 = register2 - 1 {register2 = 4}
S4: producer execute count = register1 {count = 6 }
S5: consumer execute count = register2 {count = 4}
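The lost-update interleaving above is easy to observe on a real machine. The following POSIX-threads sketch (an illustration, not part of the notes) has one thread incrementing and one thread decrementing a shared counter with no synchronization; the final value is frequently not the expected 0:

#include <pthread.h>
#include <stdio.h>

/* Unsynchronized shared counter: demonstrates the count++/count-- race.
 * Compile with something like: gcc -pthread race.c   (file name is illustrative) */
static long count = 0;

static void *producer_like(void *arg) {
    (void)arg;
    for (long i = 0; i < 1000000; i++)
        count++;                        /* read-modify-write, not atomic */
    return NULL;
}

static void *consumer_like(void *arg) {
    (void)arg;
    for (long i = 0; i < 1000000; i++)
        count--;
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, producer_like, NULL);
    pthread_create(&t2, NULL, consumer_like, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final count = %ld (expected 0)\n", count);
    return 0;
}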
7.1 The Critical-Section Problem
Consider a system consisting of n processes {P0, P1, …, Pn−1}. Each process has a segment
of code, called a critical section, in which the process may be changing common
variables, updating a table, writing a file, and so on. The important feature of the system is
that, when one process is executing in its critical section, no other process is allowed to
execute in its critical section.
Solution to Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections
2.Progress - If no process is executing in its critical section and there exist some
processes that wish to enter their critical section, then the selection of the processes that
will enter the critical section next cannot be postponed indefinitely
3.Bounded Waiting - A bound must exist on the number of times that other processes
are allowed to enter their critical sections after a process has made a request to enter its
critical section and before that request is granted
Race Condition
The situation where several processes access and manipulate shared data concurrently.
The final value of shared data depends upon which process finishes last.
The Critical-Section Problem
_ n processes all competing to use some shared data
_ Each process has a code segment, called critical section, in
which the shared data is accessed.
_ Problem – ensure that when one process is executing in its
critical section, no other process is allowed to execute in its
critical section.
_ Structure of process Pi
do
{
entry section
critical section
exit section
remainder section
}while(1);
Initial Attempts to Solve Problem
_ Only 2 processes, P0 and P1
_ General structure of process Pi (other process Pj )
_ Processes may share some common variables to synchronize
their actions.
do
{
entry section
critical section
exit section
remainder section
}while(1);
Algorithm 1
_ Shared variables:
– var turn: (0..1);
initially turn = 0
– turn = i ⇒ Pi can enter its critical section
_ Process Pi
do
{
while turn != i do no-op;
critical section
turn := j;
remainder section
}while(1);
_ Satisfies mutual exclusion, but not progress.
Algorithm 2
_ Satisfies mutual exclusion, but not progress requirement.
_ Shared variables
– var flag: array [0..1] of boolean;
initially flag[0] = flag[1] = false.
_ Process Pi
do
{
flag[i] := true;
while flag[j] do no-op;
critical section
flag[i] := false;
remainder section
}
while(1);
_ Satisfies mutual exclusion, but not progress requirement.
Algorithm 3
_ Combined shared variables of algorithms 1 and 2.
_ Process Pi
do
{
flag[i] := true;
turn := j;
while (flag[j] and turn=j) do no-op;
critical section
flag[i] := false;
remainder section
} while(1);
_ Meets all three requirements; solves the critical-section
problem for two processes.
Peterson’s Solution
Two process solution
Assume that the LOAD and STORE instructions are atomic; that is, cannot be
interrupted.
The two processes share two variables:
int turn;
Boolean flag [2]
The variable turn indicates whose turn it is to enter the critical section.
The flag array is used to indicate if a process is ready to enter the critical section.
flag[i] = true implies that process Pi is ready!
Algorithm for Process Pi
do {
   flag[i] = TRUE;
   turn = j;
   while (flag[j] && turn == j)
      ;   // busy wait
   // CRITICAL SECTION
   flag[i] = FALSE;
   // REMAINDER SECTION
} while (TRUE);
7.2 Synchronization Hardware
 Assume that each process executes at a nonzero speed
 No assumption concerning relative speed of the N processes
 Many systems provide hardware support for critical section code
 Uniprocessors – could disable interrupts
 Currently running code would execute without preemption
 Generally too inefficient on multiprocessor systems
 Operating systems using this not broadly scalable
 Modern machines provide special atomic hardware instructions
 Atomic = non-interruptible
 Either test memory word and set value
 Or swap contents of two memory words
TestAndSet Instruction
 Definition:
boolean TestAndSet (boolean *target)
{
boolean rv = *target;
*target = TRUE;
return rv;
}
Solution using TestAndSet
Shared boolean variable lock, initialized to FALSE.
Solution:
while (true) {
while ( TestAndSet (&lock ))
; /* do nothing */
// critical section
lock = FALSE;
// remainder section
}
Swap Instruction
Definition:
void Swap (boolean *a, boolean *b)
{
boolean temp = *a;
*a = *b;
*b = temp;
}
Solution using Swap
 Shared Boolean variable lock initialized to FALSE; Each process has a local
Boolean variable key.
 Solution:
while (true) {
key = TRUE;
while ( key == TRUE)
Swap (&lock, &key );
 // critical section
lock = FALSE;
// remainder section
}
7.3 Semaphore
1. Semaphore S – integer variable
2. Two standard operations modify S: wait() and signal()
a. Originally called P() and V()
3. Less complicated
4. Can only be accessed via two indivisible (atomic) operations
wait (S) {
   while (S <= 0)
      ;   // no-op (busy wait)
   S--;
}
signal (S) {
   S++;
}
Semaphore as General Synchronization Tool
Counting semaphore – integer value can range over an unrestricted domain
Binary semaphore – integer value can range only between 0
and 1; can be simpler to implement
Also known as mutex locks
Can implement a counting semaphore S as a binary semaphore
Provides mutual exclusion
Semaphore S; // initialized to 1
wait (S);
Critical Section
signal (S);
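The same wait/critical-section/signal pattern can be written with POSIX semaphores. This is a minimal sketch under the assumption that sem_init, sem_wait and sem_post are available (they belong to POSIX, not to the pseudocode above):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

/* Mutual exclusion with a binary semaphore S initialized to 1. */
static sem_t S;
static int shared = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&S);        /* wait(S)   */
        shared++;            /* critical section */
        sem_post(&S);        /* signal(S) */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&S, 0, 1);      /* binary semaphore, initial value 1 */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d (expected 200000)\n", shared);
    sem_destroy(&S);
    return 0;
}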
Semaphore Implementation
Must guarantee that no two processes can execute wait () and signal () on the same
semaphore at the same time
Thus, implementation becomes the critical-section problem where the wait and signal
code are placed in the critical section.
Could now have busy waiting in critical section implementation
But implementation code is short
Little busy waiting if critical section rarely occupied
Note that applications may spend lots of time in critical sections and therefore this is
not a good solution.
Semaphore Implementation with no busy waiting
With each semaphore there is an associated waiting queue. Each entry in a
waiting queue has two data items:
value (of type integer)
pointer to next record in the list
Two operations:
block – place the process invoking the operation on the appropriate waiting
queue.
wakeup – remove one of processes in the waiting queue and place it in the
ready queue.
Semaphore Implementation with no Busy waiting (Cont.)
Implementation of wait:
wait (S){
value--;
if (value < 0) {
add this process to waiting queue
block(); }
}
Implementation of signal:
signal (S){
value++;
if (value <= 0) {
remove a process P from the waiting queue
wakeup(P); }
}
Deadlock and Starvation
Deadlock – two or more processes are waiting indefinitely for an event that can be
caused by only one of the waiting processes
Let S and Q be two semaphores initialized to 1
P0 P1
wait (S); wait (Q);
wait (Q); wait (S);
. .
. .
. .
signal (S); signal (Q);
signal (Q); signal (S);
Starvation – indefinite blocking. A process may never be removed from the
semaphore queue in which it is suspended.
7.4 Classical Problems of Synchronization
1. Bounded-Buffer Problem
2. Readers and Writers Problem
3. Dining-Philosophers Problem
Bounded-Buffer Problem
N buffers, each can hold one item
Semaphore mutex initialized to the value 1
Semaphore full initialized to the value 0
Semaphore empty initialized to the value N.
Bounded Buffer Problem (Cont.)
The structure of the producer process
while (true) {
// produce an item
wait (empty);
wait (mutex);
// add the item to the buffer
signal (mutex);
signal (full);
}
Bounded Buffer Problem (Cont.)
The structure of the consumer process
while (true) {
wait (full);
wait (mutex);
// remove an item from buffer
signal (mutex);
signal (empty);
// consume the removed item
}
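Putting the empty, full and mutex semaphores together, the following is a hedged, self-contained POSIX sketch of the bounded buffer. The buffer size, the number of items and the single-producer/single-consumer structure are illustrative assumptions, not part of the notes:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFFER_SIZE 5
#define ITEMS 20                          /* illustrative item count */

static int buffer[BUFFER_SIZE];
static int in = 0, out = 0;
static sem_t empty_slots;                 /* counts empty slots, starts at BUFFER_SIZE */
static sem_t full_slots;                  /* counts filled slots, starts at 0 */
static sem_t mutex;                       /* binary semaphore protecting the buffer */

static void *producer(void *arg) {
    (void)arg;
    for (int item = 0; item < ITEMS; item++) {
        sem_wait(&empty_slots);           /* wait(empty)  */
        sem_wait(&mutex);                 /* wait(mutex)  */
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
        sem_post(&mutex);                 /* signal(mutex) */
        sem_post(&full_slots);            /* signal(full)  */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full_slots);            /* wait(full)   */
        sem_wait(&mutex);                 /* wait(mutex)  */
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&mutex);                 /* signal(mutex) */
        sem_post(&empty_slots);           /* signal(empty) */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, BUFFER_SIZE);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}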
Readers-Writers Problem
A data set is shared among a number of concurrent processes
Readers – only read the data set; they do not perform any updates
Writers – can both read and write.
Problem – allow multiple readers to read at the same time. Only one single writer can
access the shared data at the same time.
Shared Data
Data set
Semaphore mutex initialized to 1.
Semaphore wrt initialized to 1.
Integer readcount initialized to 0.
Readers-Writers Problem (Cont.)
The structure of a writer process
while (true) {
wait (wrt) ;
// writing is performed
signal (wrt) ;
}
Readers-Writers Problem (Cont.)
The structure of a reader process
while (true) {
wait (mutex) ;
readcount ++ ;
if (readcount == 1) wait (wrt) ;
signal (mutex) ;
// reading is performed
wait (mutex) ;
readcount -- ;
if (readcount == 0) signal (wrt) ;
signal (mutex) ;
}
Dining-Philosophers Problem
Shared data
Bowl of rice (data set)
Semaphore chopstick [5] initialized to 1
Dining-Philosophers Problem (Cont.)
The structure of Philosopher i:
While (true) {
wait ( chopstick[i] );
wait ( chopStick[ (i + 1) % 5] );
// eat
signal ( chopstick[i] );
signal (chopstick[ (i + 1) % 5] );
// think
}
Problems with Semaphores
Incorrect use of semaphore operations:
signal (mutex) …. wait (mutex)
wait (mutex) … wait (mutex)
Omitting wait (mutex) or signal (mutex) (or both)
7. 6 Monitors
A high-level abstraction that provides a convenient and effective mechanism for
process synchronization
Only one process may be active within the monitor at a time
monitor monitor-name
{
// shared variable declarations
procedure P1 (…) { …. }
…
procedure Pn (…) {……}
initialization code (….) { … }
}
Schematic view of a Monitor
condition x, y;
Two operations on a condition variable:
x.wait () – a process that invokes the operation is
suspended.
x.signal () – resumes one of processes (if any) that
invoked x.wait ()
Monitor with Condition Variables
Solution to Dining Philosophers
monitor DP
{
enum {THINKING, HUNGRY, EATING} state[5];
condition self [5];
void pickup (int i) {
state[i] = HUNGRY;
test(i);
if (state[i] != EATING) self[i].wait();
}
void putdown (int i) {
state[i] = THINKING;
// test left and right neighbors
test((i + 4) % 5);
test((i + 1) % 5);
}
void test (int i) {
if ( (state[(i + 4) % 5] != EATING) &&
(state[i] == HUNGRY) &&
(state[(i + 1) % 5] != EATING) ) {
state[i] = EATING ;
self[i].signal () ;
}
}
initialization_code() {
for (int i = 0; i < 5; i++)
state[i] = THINKING;
}
}
Solution to Dining Philosophers (cont)
Each philosopher i invokes the operations pickup()
and putdown() in the following sequence:
dp.pickup(i)
EAT
dp.putdown(i)
Monitor Implementation Using Semaphores
Variables
semaphore mutex; // (initially = 1)
semaphore next; // (initially = 0)
int next-count = 0;
Each procedure F will be replaced by
wait(mutex);
…
body of F;
…
if (next-count > 0)
signal(next)
else
signal(mutex);
Mutual exclusion within a monitor is ensured.
Monitor Implementation
For each condition variable x, we have:
semaphore x-sem; // (initially = 0)
int x-count = 0;
The operation x.wait can be implemented as:
x-count++;
if (next-count > 0)
signal(next);
else
signal(mutex);
wait(x-sem);
x-count--;
The operation x.signal can be implemented as:
if (x-count > 0) {
next-count++;
signal(x-sem);
wait(next);
next-count--;
}
Chapter 8: Deadlocks
1. The Deadlock Problem
2. System Model
3. Deadlock Characterization
4. Methods for Handling Deadlocks
5. Deadlock Prevention
6. Deadlock Avoidance
7. Deadlock Detection
8. Recovery from Deadlock
Deadlock definition
A set of processes is in deadlock state when every process in the set is waiting for an
event that can be caused by only another process in the set.
The Deadlock Problem
 A set of blocked processes each holding a resource and waiting to acquire a
resource held by another process in the set.
Example
 System has 2 disk drives.
 P1 and P2 each hold one disk drive and each needs another
one.
Example
semaphores A and B, initialized to 1
P0 P1
wait (A); wait(B)
wait (B); wait(A)
Bridge Crossing Example
 Traffic only in one direction.
 Each section of a bridge can be viewed as a resource.
 If a deadlock occurs, it can be resolved if one car backs up (preempt resources
and rollback).
 Several cars may have to be backed up if a deadlock occurs.
 Starvation is possible.
8.1 System Model:
 Computer systems are full of resources that can be used by processes; examples of
resources are the CPU, memory space, I/O devices, etc.
 Each process utilizes a resource as follows:
Request:
If the request cannot be granted immediately then the requesting process must wait
until it can acquire the resources.
Use: The process can operate on the resource.
Release:
The process releases the resource.
8.2 Deadlock Characterization
Deadlock can arise if four conditions hold simultaneously
The following are the necessary condition for deadlock to occur
 Mutual exclusion: only one process at a time can use a resource.
 Hold and wait: a process holding at least one resource is waiting to acquire
additional resources held by other processes.
 No preemption: a resource can be released only voluntarily by the process
holding it, after that process has completed its task.
 Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that
P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is
held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a
resource that is held by P0.
Resource-Allocation Graph
Deadlock can be described in terms of directed graph called as system resource
allocation graph. This graph consists of
A set of vertices V and set of edges E.
V is partitioned into two types:
P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.
R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.
Request edge – a directed edge Pi → Rj implies that process Pi has requested an instance of
resource type Rj and is currently waiting for that resource.
Assignment edge – a directed edge Rj → Pi implies that an instance of resource type Rj has
been allocated to Pi.
The following notations are used:
 Process Pi (drawn as a circle)
 Resource type with 4 instances (drawn as a rectangle with one dot per instance)
 Pi requests an instance of Rj: request edge Pi → Rj
 Pi is holding an instance of Rj: assignment edge Rj → Pi
Example of a Resource Allocation Graph
The RAG depicts the following situation:
1. The sets P, R and E:
P = {P1, P2, P3}
R = {R1, R2, R3, R4}
E = {P1→R1, P2→R3, R1→P2, R2→P2, R2→P1, R3→P3}
2. Resource instances:
1 instance of R1
2 instances of R2
1 instance of R3
3 instances of R4
3. Process states:
P1 is holding an instance of R2 and is waiting for an instance of R1.
P2 is holding an instance of R1 and an instance of R2 and is waiting for an instance of R3.
P3 is holding an instance of R3.
Given a resource-allocation graph, it can be shown that if the graph contains no cycles,
then no process in the system is deadlocked. If there is a cycle, a deadlock may exist.
Consider the RAG above. Suppose P3 requests an instance of type R2. Since the request cannot
be granted, a request edge P3→R2 is added. At this point there are two cycles in the graph:
Cycle 1: P1 → R1 → P2 → R3 → P3 → R2 → P1
Cycle 2: P2 → R3 → P3 → R2 → P2
Thus processes P1, P2 and P3 are deadlocked.
Process P2 is waiting for the resource R3, which is held by P3. P3, on the other hand, is
waiting for either P1 or P2 to release R2. In addition, P1 is waiting for process P2 to
release resource R1. So there is a deadlock. The following graph illustrates this situation.
Resource Allocation Graph With A Deadlock
Graph With A Cycle But No Deadlock
Basic Facts
 If graph contains no cycles ⇒ no deadlock.
 If graph contains a cycle ⇒
o if only one instance per resource type, then deadlock.
o if several instances per resource type, possibility of deadlock.
8.3 Methods for Handling Deadlocks
There are three different methods for dealing with the deadlock problem.
1. A protocol can be used to ensure that the system will never enter into a
deadlock state.
2. The system can be allowed to enter deadlock state and then recover
3. The problem can be ignored with the assumption that deadlock never occurs
in the system.
8.4 Deadlock Prevention
It is a set of methods for ensuring that at least one of the necessary
conditions cannot hold.
 Mutual Exclusion – not required for sharable resources; must hold for non-
sharable resources. A process never needs to wait for a sharable resource.
 Hold and Wait – must guarantee that whenever a process requests a resource, it
does not hold any other resources.
o Require process to request and be allocated all its resources before it
begins execution, or allow process to request resources only when the
process has none.
o Low resource utilization; starvation possible.
 One protocol requires each process to request and be allocated all its resources
before it begins execution. For example, consider a process that copies data from a
tape drive to a disk file and then prints it. Under this protocol all three resources
(tape drive, disk file, printer) are held by the process from the beginning of its
execution until the printing of the file; it will hold the printer for its entire
execution even though it needs the printer only at the end.
 The second protocol allows a process to request resources only when it has released
all the resources it currently holds. For example, the process first requests the tape
drive and disk file, then releases both before making a fresh request for the disk file
and printer.
Disadvantages: Low resource utilization; starvation possible.
No Preemption –
If a process that is holding some resources requests another resource that
cannot be immediately allocated to it, then all resources currently being held
are released.
Preempted resources are added to the list of resources for which the process is
waiting.
Process will be restarted only when it can regain its old resources, as well as
the new ones that it is requesting.
Circular Wait – impose a total ordering of all resource types, and require that
each process requests resources in an increasing order of enumeration.
E.g. F(tape drive) = 1, F(disk) = 5, F(printer) = 12
A process can request Rj only if F(Rj) > F(Ri), where
Ri is any resource it currently holds and Rj is the requested resource.
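A small sketch of how the ordering rule can be checked at request time is shown below. The resource numbering follows the example above; the request_allowed helper and the highest_held bookkeeping are illustrative assumptions, not part of the notes:

#include <stdio.h>

/* Circular-wait prevention by resource ordering:
 * F(tape drive) = 1, F(disk) = 5, F(printer) = 12, as in the example above. */
enum { TAPE = 1, DISK = 5, PRINTER = 12 };

/* A request for Rj is allowed only if F(Rj) is greater than F(Ri) for every
 * resource Ri the process currently holds (tracked here by its maximum). */
static int request_allowed(int highest_held, int requested) {
    return requested > highest_held;
}

int main(void) {
    int highest_held = 0;                          /* holds nothing yet */

    if (request_allowed(highest_held, TAPE)) highest_held = TAPE;
    if (request_allowed(highest_held, DISK)) highest_held = DISK;
    printf("request printer after disk: %s\n",
           request_allowed(highest_held, PRINTER) ? "allowed" : "denied");
    printf("request tape after disk:    %s\n",
           request_allowed(highest_held, TAPE) ? "allowed" : "denied");
    return 0;
}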
8.5 Deadlock Avoidance
Deadlock prevention algorithms prevent deadlocks by restraining how requests can be
made. The restraints ensure that at least one of the necessary conditions for deadlock
cannot occur and hence there is no deadlock.
An alternative method requires that the system have some additional information about
how resources will be requested. With complete knowledge of the sequence of requests and
releases of each process, it can be decided for each request whether or not the current
request can be satisfied or the process must wait, in order to avoid a possible future deadlock.
 Simplest and most useful model requires that each process declare the
maximum number of resources of each type that it may need.
 The deadlock-avoidance algorithm dynamically examines the resource-
allocation state to ensure that there can never be a circular-wait condition.
 Resource-allocation state is defined by the number of available and allocated
resources, and the maximum demands of the processes.
Safe state:
 When a process requests an available resource, system must decide if
immediate allocation leaves the system in a safe state.
 System is in a safe state if there exists a sequence <P1, P2, …, Pn> of ALL the
processes in the system such that, for each Pi, the resources that Pi can still
request can be satisfied by the currently available resources plus the resources held by
all the Pj, with j < i.
 That is:
If Pi resource needs are not immediately available, then Pi can wait until all Pj
have finished.
When Pj is finished, Pi can obtain needed resources, execute, return allocated
resources, and terminate.
When Pi terminates, Pi +1 can obtain its needed resources, execute ,return
allocated resources and terminate.
Basic Facts
If a system is in safe state ⇒ no deadlocks.
If a system is in unsafe state ⇒ possibility of deadlock.
Avoidance ⇒ ensure that a system will never enter an unsafe state.
Safe, Unsafe , Deadlock State
Avoidance algorithms
 Single instance of a resource type. Use a resource-allocation graph
 Multiple instances of a resource type. Use the banker’s algorithm
Resource-Allocation Graph Scheme
 Claim edge Pi → Rj indicates that process Pi may request resource Rj;
represented by a dashed line.
 Claim edge converts to request edge when a process requests a resource.
 Request edge converted to an assignment edge when the resource is allocated to
the process.
 When a resource is released by a process, assignment edge reconverts to a claim
edge.
 Resources must be claimed a priori in the system.
Resource-Allocation Graph
Unsafe State In Resource-Allocation Graph
Resource-Allocation Graph Algorithm
 Suppose that process Pi requests a resource Rj
 The request can be granted only if converting the request edge to an assignment
edge does not result in the formation of a cycle in the resource allocation graph
Banker’s Algorithm
 This algorithm is applicable to a system which has multiple instances of a
resource type.
 It is called by this name since the concept was originally used in banks for
allocating cash to customers.
 When a new process enters the system, it must declare the maximum number
of instances of each resource type it may need. This number may not exceed
the total number of resources in the system.
 When a process requests a set of resources, the system must determine whether the
allocation leaves the system in a safe state. If it does, the resources are allocated;
otherwise the process must wait until some other process releases enough resources.
Data Structures for the Banker’s Algorithm
Let n = number of processes, and m = number of resources types.
 Available: Vector of length m. If available [j] = k, there are k instances of
resource type Rj available.
 Max: n x m matrix. If Max [i,j] = k, then process Pi may request at most k
instances of resource type Rj.
 Allocation: n x m matrix. If Allocation[i,j] = k then Pi is currently allocated k
instances of Rj.
 Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj
to complete its task.
Need [i,j] = Max[i,j] – Allocation [i,j].
Notation
Given two vectors x and y of length n, we say that x ≤ y (less than or equal) if and
only if x[i] ≤ y[i] for all i = 1, 2, …, n.
We treat each row of the matrices Allocation and Need as a vector and refer to them
as Allocationi and Needi, respectively. The vector Allocationi specifies the
resources currently allocated to process Pi; the vector Needi specifies the additional
resources that process Pi may still request to complete its task.
Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
Work = Available
Finish[i] = false for i = 0, 1, …, n − 1.
2. Find an i such that both:
(a) Finish[i] = false
(b) Needi ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
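The safety algorithm translates almost line for line into code. The sketch below (an illustration, not from the notes) hard-codes the 5-process, 3-resource snapshot used in the worked example later in this section and reports a safe sequence if one exists:

#include <stdio.h>

#define N 5                  /* processes P0..P4 */
#define M 3                  /* resource types A, B, C */

/* Snapshot at time T0 from the Banker's-algorithm example below. */
static int available[M] = {3, 3, 2};
static int allocation[N][M] = {
    {0,1,0}, {2,0,0}, {3,0,2}, {2,1,1}, {0,0,2}
};
static int need[N][M] = {
    {7,4,3}, {1,2,2}, {6,0,0}, {0,1,1}, {4,3,1}
};

int main(void) {
    int work[M], finish[N] = {0}, order[N], count = 0;
    for (int j = 0; j < M; j++) work[j] = available[j];   /* Work = Available */

    int progress = 1;
    while (progress) {
        progress = 0;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            int ok = 1;                                   /* check Need_i <= Work */
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { ok = 0; break; }
            if (ok) {
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];          /* Work = Work + Allocation_i */
                finish[i] = 1;
                order[count++] = i;
                progress = 1;
            }
        }
    }
    if (count == N) {
        printf("system is in a safe state; safe sequence:");
        for (int i = 0; i < N; i++) printf(" P%d", order[i]);
        printf("\n");
    } else {
        printf("system is NOT in a safe state\n");
    }
    return 0;
}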
Resource-Request Algorithm for Process Pi
Let Requesti be the request vector for process Pi. If Requesti[j] = k, then process Pi wants
k instances of resource type Rj. When process Pi makes a request for resources, the following
actions are taken:
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error
condition, since the process has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise Pi must
wait, since the resources are not available.
3. Pretend to allocate the requested resources to Pi by
modifying the state as follows:
Available = Available – Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi – Requesti;
 If safe ⇒ the resources are allocated to Pi.
 If unsafe ⇒ Pi must wait, and the old resource-allocation state is restored.
Example of Banker’s Algorithm
 5 processes P0 through P4;
o 3 resource types:
 A (10 instances), B (5 instances), and C (7 instances).
 Snapshot at time T0:
Allocation Max Available
A B C A B C A B C
P0 0 1 0 7 5 3 3 3 2
P1 2 0 0 3 2 2
P2 3 0 2 9 0 2
P3 2 1 1 2 2 2
P4 0 0 2 4 3 3
 The content of the matrix Need is defined to be Max – Allocation.
Need
A B C
P0 7 4 3
P1 1 2 2
P2 6 0 0
P3 0 1 1
P4 4 3 1
Running the safety algorithm (Work initially = Available = 3 3 2):
Process   Allocation   Max     Need = Max − Allocation   Work = Work + Allocation   Finish
          A B C        A B C   A B C                     A B C
P0        0 1 0        7 5 3   7 4 3                                                F
P1        2 0 0        3 2 2   1 2 2                     5 3 2                      T
P2        3 0 2        9 0 2   6 0 0                                                F
P3        2 1 1        2 2 2   0 1 1                     7 4 3                      T
P4        0 0 2        4 3 3   4 3 1                     7 4 5                      T
P0        0 1 0        7 5 3   7 4 3                     7 5 5                      T
P2        3 0 2        9 0 2   6 0 0                     10 5 7                     T
The system is in a safe state since the sequence < P1, P3, P4, P2, P0> satisfies safety
criteria.
Example: P1 Request (1,0,2)
Check that Requesti ≤ Available; that is, (1,0,2) ≤ (3,3,2) ⇒ true.
Allocation Need Available
A B C A B C A B C
P0 0 1 0 7 4 3 2 3 0
P1 3 0 2 0 2 0
P2 3 0 2 6 0 0
P3 2 1 1 0 1 1
P4 0 0 2 4 3 1
Executing safety algorithm shows that sequence < P1, P3, P4, P0, P2> satisfies
safety requirement.
Can request for (3,3,0) by P4 be granted?
Can request for (0,2,0) by P0 be granted?
8.6 Deadlock Detection
If a system does not employ either a deadlock-prevention or a deadlock-avoidance algorithm,
then a deadlock situation may occur. In this case the system must provide:
An algorithm that examines the state of the system to determine whether a
deadlock has occurred.
An algorithm to recover from the deadlock.
Single Instance of Each Resource Type
If all resources have only a single instance, then a variant of the resource-allocation graph,
called the wait-for graph, can be used for detection.
In a wait-for graph an edge from Pi to Pj implies that Pi is waiting for Pj to release a
resource that Pi needs. An edge Pi → Pj exists in the wait-for graph if and only if the
corresponding resource-allocation graph contains the two edges Pi → Rq and Rq → Pj for some
resource Rq. A cycle in the wait-for graph indicates a deadlock situation.
Pi → Pj if Pi is waiting for Pj.
Periodically invoke an algorithm that searches for a cycle in the graph. If there is a
cycle, there exists a deadlock.
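For the single-instance case the detection step is just cycle detection on the wait-for graph. A depth-first-search sketch is shown below (illustrative only: the adjacency matrix, its size and the example edges are assumptions, not taken from the notes):

#include <stdio.h>

#define N 5                               /* number of processes (illustrative) */

/* waits_for[i][j] = 1 means Pi is waiting for Pj to release a resource.
 * Example edges: P0->P1, P1->P2, P2->P0 form a cycle (deadlock). */
static int waits_for[N][N] = {
    {0,1,0,0,0},
    {0,0,1,0,0},
    {1,0,0,0,0},
    {0,0,0,0,0},
    {0,0,0,0,0},
};

static int color[N];                      /* 0 = unvisited, 1 = on DFS stack, 2 = done */

static int dfs(int u) {
    color[u] = 1;
    for (int v = 0; v < N; v++) {
        if (!waits_for[u][v]) continue;
        if (color[v] == 1) return 1;      /* back edge: cycle, hence deadlock */
        if (color[v] == 0 && dfs(v)) return 1;
    }
    color[u] = 2;
    return 0;
}

int main(void) {
    int deadlock = 0;
    for (int i = 0; i < N && !deadlock; i++)
        if (color[i] == 0)
            deadlock = dfs(i);
    printf(deadlock ? "cycle found: deadlock exists\n"
                    : "no cycle: no deadlock\n");
    return 0;
}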
Resource-Allocation Graph and Wait-for Graph
Resource-Allocation Graph Corresponding wait-for graph
Several Instances of Each Resource Type
The algorithm to detect deadlock in a system with several instances of resources
type uses the following data structures.
Let n=number of processes and m=number of resource types.
 Available: A vector of length m indicates the number of available resources
of each type.
 Allocation: An n x m matrix defines the number of resources of each type
currently allocated to each process.
 Request: An n x m matrix indicates the current request of each process. If
Request[i][j] = k, then process Pi is requesting k more instances of resource
type Rj.
Detection Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
(a) Work = Available
(b) For i = 1, 2, …, n, if Allocationi ≠ 0 then Finish[i] = false; otherwise Finish[i] = true.
2. Find an index i such that both:
(a) Finish[i] == false
(b) Requesti ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.
4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlocked state.
Moreover, if Finish[i] == false, then Pi is deadlocked.
The algorithm requires an order of O(m × n²) operations to detect whether the
system is in a deadlocked state.
Example of Detection Algorithm
Five processes P0 through P4; three resource types
A (7 instances), B (2 instances), and C (6 instances).
Snapshot at time T0:
Allocation Request Available
A B C A B C A B C
P0 0 1 0 0 0 0 0 0 0
P1 2 0 0 2 0 2
P2 3 0 3 0 0 0
P3 2 1 1 1 0 0
P4 0 0 2 0 0 2
Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i.
Example (Cont.)
P2 requests an additional instance of type C.
Request
A B C
P0 0 0 0
P1 2 0 1
P2 0 0 1
P3 1 0 0
P4 0 0 2
State of system?
Can reclaim the resources held by process P0, but there are insufficient resources to fulfill
the other processes' requests.
A deadlock exists, consisting of processes P1, P2, P3, and P4.
Detection-Algorithm Usage
When, and how often, to invoke the detection algorithm depends on:
How often is a deadlock likely to occur?
How many processes will need to be rolled back? (one for each disjoint cycle)
If detection algorithm is invoked arbitrarily, there may be many cycles in the resource
graph and so we would not be able to tell which of the many deadlocked processes
“caused” the deadlock.
8.7 Recovery from Deadlock:
When deadlock detection determines that deadlock exists, several alternatives exist.
 One possibility is to inform the operator that a deadlock has occurred and let
the operator deal with the deadlock manually.
 Another possibility is to recover from the deadlock automatically.
There are two options for breaking the deadlock:
a) Abort one or more processes to break the circular wait.
b) Preempt some resources from one or more of the deadlocked processes.
Process Termination
To eliminate deadlocks by aborting a process, the system reclaims all resources
allocated to the terminated process. This can be done in one of two ways:
 Abort all deadlocked processes.
 Abort one process at a time until the deadlock cycle is eliminated.
Recovery from Deadlock: Resource Preemption
To eliminate deadlocks using resource preemption, we successively preempt
some resources from processes and give these resources to other processes until the deadlock
cycle is broken. If preemption is required to deal with deadlocks, then three
issues need to be addressed.
 Selecting a victim – Which process’s resources are to be preempted? Minimize
cost.
 Rollback – What should be done with the preempted process? Should it continue or
not?
Return to some safe state and restart the process from that state.
 Starvation – the same process may always be picked as the victim; include the number of
rollbacks in the cost factor.
16 marks questions
1. Explain about the deadlock prevention methods?
2. Explain the deadlock avoidance with the help of banker’s algorithm
3. Explain the deadlock detection algorithm for multiple instances of a resource
type
4. Explain the deadlock detection algorithm for single instance of resource type?
5. Problems on deadlock detection and deadlock avoidance
**************************************************
Más contenido relacionado

La actualidad más candente

CPU Scheduling in OS Presentation
CPU Scheduling in OS  PresentationCPU Scheduling in OS  Presentation
CPU Scheduling in OS Presentationusmankiyani1
 
Window scheduling algorithm
Window scheduling algorithmWindow scheduling algorithm
Window scheduling algorithmBinal Parekh
 
OS - CPU Scheduling
OS - CPU SchedulingOS - CPU Scheduling
OS - CPU Schedulingvinay arora
 
Operating System 5
Operating System 5Operating System 5
Operating System 5tech2click
 
Comparison Analysis of CPU Scheduling : FCFS, SJF and Round Robin
Comparison Analysis of CPU Scheduling : FCFS, SJF and Round RobinComparison Analysis of CPU Scheduling : FCFS, SJF and Round Robin
Comparison Analysis of CPU Scheduling : FCFS, SJF and Round RobinUniversitas Pembangunan Panca Budi
 
Processor / CPU Scheduling
Processor / CPU SchedulingProcessor / CPU Scheduling
Processor / CPU SchedulingIzaz Roghani
 
Process Scheduling
Process SchedulingProcess Scheduling
Process Schedulingvampugani
 
Process scheduling algorithms
Process scheduling algorithmsProcess scheduling algorithms
Process scheduling algorithmsShubham Sharma
 
Windows process-scheduling
Windows process-schedulingWindows process-scheduling
Windows process-schedulingTalha Shaikh
 
CPU scheduling
CPU schedulingCPU scheduling
CPU schedulingAmir Khan
 
Process scheduling in Light weight weight and Heavy weight processes.
Process scheduling in Light weight weight and Heavy weight processes.Process scheduling in Light weight weight and Heavy weight processes.
Process scheduling in Light weight weight and Heavy weight processes.Shreya Kumar
 
CPU Scheduling algorithms
CPU Scheduling algorithmsCPU Scheduling algorithms
CPU Scheduling algorithmsShanu Kumar
 
Operating Systems: Process Scheduling
Operating Systems: Process SchedulingOperating Systems: Process Scheduling
Operating Systems: Process SchedulingDamian T. Gordon
 

La actualidad más candente (19)

5 Process Scheduling
5 Process Scheduling5 Process Scheduling
5 Process Scheduling
 
CPU Scheduling in OS Presentation
CPU Scheduling in OS  PresentationCPU Scheduling in OS  Presentation
CPU Scheduling in OS Presentation
 
cpu scheduling OS
 cpu scheduling OS cpu scheduling OS
cpu scheduling OS
 
Window scheduling algorithm
Window scheduling algorithmWindow scheduling algorithm
Window scheduling algorithm
 
OS - CPU Scheduling
OS - CPU SchedulingOS - CPU Scheduling
OS - CPU Scheduling
 
Operating System 5
Operating System 5Operating System 5
Operating System 5
 
Comparison Analysis of CPU Scheduling : FCFS, SJF and Round Robin
Comparison Analysis of CPU Scheduling : FCFS, SJF and Round RobinComparison Analysis of CPU Scheduling : FCFS, SJF and Round Robin
Comparison Analysis of CPU Scheduling : FCFS, SJF and Round Robin
 
Processor / CPU Scheduling
Processor / CPU SchedulingProcessor / CPU Scheduling
Processor / CPU Scheduling
 
Process Scheduling
Process SchedulingProcess Scheduling
Process Scheduling
 
Process scheduling algorithms
Process scheduling algorithmsProcess scheduling algorithms
Process scheduling algorithms
 
OSCh6
OSCh6OSCh6
OSCh6
 
Scheduling
SchedulingScheduling
Scheduling
 
Process Scheduling
Process SchedulingProcess Scheduling
Process Scheduling
 
Windows process-scheduling
Windows process-schedulingWindows process-scheduling
Windows process-scheduling
 
CPU scheduling
CPU schedulingCPU scheduling
CPU scheduling
 
Process scheduling in Light weight weight and Heavy weight processes.
Process scheduling in Light weight weight and Heavy weight processes.Process scheduling in Light weight weight and Heavy weight processes.
Process scheduling in Light weight weight and Heavy weight processes.
 
Sa by shekhar
Sa by shekharSa by shekhar
Sa by shekhar
 
CPU Scheduling algorithms
CPU Scheduling algorithmsCPU Scheduling algorithms
CPU Scheduling algorithms
 
Operating Systems: Process Scheduling
Operating Systems: Process SchedulingOperating Systems: Process Scheduling
Operating Systems: Process Scheduling
 

Destacado

Mp Os Survey
Mp Os SurveyMp Os Survey
Mp Os Surveyallankliu
 
OS Database Security Chapter 6
OS Database Security Chapter 6OS Database Security Chapter 6
OS Database Security Chapter 6AfiqEfendy Zaen
 
Advanced Operating System- Introduction
Advanced Operating System- IntroductionAdvanced Operating System- Introduction
Advanced Operating System- IntroductionDebasis Das
 
OS Process and Thread Concepts
OS Process and Thread ConceptsOS Process and Thread Concepts
OS Process and Thread Conceptssgpraju
 
Distributed system notes unit I
Distributed system notes unit IDistributed system notes unit I
Distributed system notes unit INANDINI SHARMA
 
8. mutual exclusion in Distributed Operating Systems
8. mutual exclusion in Distributed Operating Systems8. mutual exclusion in Distributed Operating Systems
8. mutual exclusion in Distributed Operating SystemsDr Sandeep Kumar Poonia
 
16. Concurrency Control in DBMS
16. Concurrency Control in DBMS16. Concurrency Control in DBMS
16. Concurrency Control in DBMSkoolkampus
 
Operating System-Threads-Galvin
Operating System-Threads-GalvinOperating System-Threads-Galvin
Operating System-Threads-GalvinSonali Chauhan
 
Multiple processor (ppt 2010)
Multiple processor (ppt 2010)Multiple processor (ppt 2010)
Multiple processor (ppt 2010)Arth Ramada
 
Security & protection in operating system
Security & protection in operating systemSecurity & protection in operating system
Security & protection in operating systemAbou Bakr Ashraf
 
Unit 1 architecture of distributed systems
Unit 1 architecture of distributed systemsUnit 1 architecture of distributed systems
Unit 1 architecture of distributed systemskaran2190
 

Destacado (15)

Mp Os Survey
Mp Os SurveyMp Os Survey
Mp Os Survey
 
Concurrency control
Concurrency controlConcurrency control
Concurrency control
 
OS Database Security Chapter 6
OS Database Security Chapter 6OS Database Security Chapter 6
OS Database Security Chapter 6
 
Advanced Operating System- Introduction
Advanced Operating System- IntroductionAdvanced Operating System- Introduction
Advanced Operating System- Introduction
 
Chapter 14 - Protection
Chapter 14 - ProtectionChapter 14 - Protection
Chapter 14 - Protection
 
Processes and threads
Processes and threadsProcesses and threads
Processes and threads
 
OS Process and Thread Concepts
OS Process and Thread ConceptsOS Process and Thread Concepts
OS Process and Thread Concepts
 
Distributed system notes unit I
Distributed system notes unit IDistributed system notes unit I
Distributed system notes unit I
 
8. mutual exclusion in Distributed Operating Systems
8. mutual exclusion in Distributed Operating Systems8. mutual exclusion in Distributed Operating Systems
8. mutual exclusion in Distributed Operating Systems
 
Multiprocessor system
Multiprocessor system Multiprocessor system
Multiprocessor system
 
16. Concurrency Control in DBMS
16. Concurrency Control in DBMS16. Concurrency Control in DBMS
16. Concurrency Control in DBMS
 
Operating System-Threads-Galvin
Operating System-Threads-GalvinOperating System-Threads-Galvin
Operating System-Threads-Galvin
 
Multiple processor (ppt 2010)
Multiple processor (ppt 2010)Multiple processor (ppt 2010)
Multiple processor (ppt 2010)
 
Security & protection in operating system
Security & protection in operating systemSecurity & protection in operating system
Security & protection in operating system
 
Unit 1 architecture of distributed systems
Unit 1 architecture of distributed systemsUnit 1 architecture of distributed systems
Unit 1 architecture of distributed systems
 

Similar a Unit iios process scheduling and synchronization

Similar a Unit iios process scheduling and synchronization (20)

Ch6
Ch6Ch6
Ch6
 
CH06.pdf
CH06.pdfCH06.pdf
CH06.pdf
 
Ch05
Ch05Ch05
Ch05
 
cpu sechduling
cpu sechduling cpu sechduling
cpu sechduling
 
Cpu_sheduling.pptx
Cpu_sheduling.pptxCpu_sheduling.pptx
Cpu_sheduling.pptx
 
Process management in os
Process management in osProcess management in os
Process management in os
 
Ch5
Ch5Ch5
Ch5
 
Preemptive process example.pptx
Preemptive process example.pptxPreemptive process example.pptx
Preemptive process example.pptx
 
OS_Ch6
OS_Ch6OS_Ch6
OS_Ch6
 
Scheduling algo(by HJ)
Scheduling algo(by HJ)Scheduling algo(by HJ)
Scheduling algo(by HJ)
 
Cpu scheduling
Cpu schedulingCpu scheduling
Cpu scheduling
 
Ch05 cpu-scheduling
Ch05 cpu-schedulingCh05 cpu-scheduling
Ch05 cpu-scheduling
 
CPU Scheduling
CPU SchedulingCPU Scheduling
CPU Scheduling
 
Distributed Operating System_2
Distributed Operating System_2Distributed Operating System_2
Distributed Operating System_2
 
chapter 5 CPU scheduling.ppt
chapter  5 CPU scheduling.pptchapter  5 CPU scheduling.ppt
chapter 5 CPU scheduling.ppt
 
Cpu scheduling(suresh)
Cpu scheduling(suresh)Cpu scheduling(suresh)
Cpu scheduling(suresh)
 
ch6.ppt
ch6.pptch6.ppt
ch6.ppt
 
Csc4320 chapter 5 2
Csc4320 chapter 5 2Csc4320 chapter 5 2
Csc4320 chapter 5 2
 
Os..
Os..Os..
Os..
 
Operating Systems Third Unit - Fourth Semester - Engineering
Operating Systems Third Unit  - Fourth Semester - EngineeringOperating Systems Third Unit  - Fourth Semester - Engineering
Operating Systems Third Unit - Fourth Semester - Engineering
 

Más de donny101

Unit vos - File systems
Unit vos - File systemsUnit vos - File systems
Unit vos - File systemsdonny101
 
Unit ivos - file systems
Unit ivos - file systemsUnit ivos - file systems
Unit ivos - file systemsdonny101
 
Unit iiios Storage Management
Unit iiios Storage ManagementUnit iiios Storage Management
Unit iiios Storage Managementdonny101
 
Unit 1os processes and threads
Unit 1os processes and threadsUnit 1os processes and threads
Unit 1os processes and threadsdonny101
 

Más de donny101 (9)

Unit v
Unit vUnit v
Unit v
 
Unit iv
Unit ivUnit iv
Unit iv
 
Unit iii
Unit iiiUnit iii
Unit iii
 
Unit ii
Unit   iiUnit   ii
Unit ii
 
Unit 1
Unit  1Unit  1
Unit 1
 
Unit vos - File systems
Unit vos - File systemsUnit vos - File systems
Unit vos - File systems
 
Unit ivos - file systems
Unit ivos - file systemsUnit ivos - file systems
Unit ivos - file systems
 
Unit iiios Storage Management
Unit iiios Storage ManagementUnit iiios Storage Management
Unit iiios Storage Management
 
Unit 1os processes and threads
Unit 1os processes and threadsUnit 1os processes and threads
Unit 1os processes and threads
 

Último

Kuwait City MTP kit ((+919101817206)) Buy Abortion Pills Kuwait
Kuwait City MTP kit ((+919101817206)) Buy Abortion Pills KuwaitKuwait City MTP kit ((+919101817206)) Buy Abortion Pills Kuwait
Kuwait City MTP kit ((+919101817206)) Buy Abortion Pills Kuwaitjaanualu31
 
Standard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power PlayStandard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power PlayEpec Engineered Technologies
 
Generative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPTGenerative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPTbhaskargani46
 
Thermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VThermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VDineshKumar4165
 
Block diagram reduction techniques in control systems.ppt
Block diagram reduction techniques in control systems.pptBlock diagram reduction techniques in control systems.ppt
Block diagram reduction techniques in control systems.pptNANDHAKUMARA10
 
Tamil Call Girls Bhayandar WhatsApp +91-9930687706, Best Service
Tamil Call Girls Bhayandar WhatsApp +91-9930687706, Best ServiceTamil Call Girls Bhayandar WhatsApp +91-9930687706, Best Service
Tamil Call Girls Bhayandar WhatsApp +91-9930687706, Best Servicemeghakumariji156
 
DC MACHINE-Motoring and generation, Armature circuit equation
DC MACHINE-Motoring and generation, Armature circuit equationDC MACHINE-Motoring and generation, Armature circuit equation
DC MACHINE-Motoring and generation, Armature circuit equationBhangaleSonal
 
Thermal Engineering -unit - III & IV.ppt
  • 5. OPERATING SYSTEMS ii- 5 P2 , P3 , P1 The Gantt chart for the schedule is: Waiting time for P1 = 6; P2 = 0; P3 = 3 Average waiting time: (6 + 0 + 3)/3 = 3 Much better than previous case Convoy effect short process behind long process Shortest-Job-First (SJF) Scheduling Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time Two schemes: non preemptive – once CPU given to the process it cannot be preempted until completes its CPU burst preemptive – if a new process arrives with CPU burst length less than remaining time of current executing process, preempt. This scheme is know as the Shortest-Remaining-Time-First (SRTF) SJF is optimal – gives minimum average waiting time for a given set of processes Algorithm Step 1: Get the number of process and arrival time and CPU burst time of each process Step 2: Schedule the processes according to minimum burst time Step 3: Assign the minimum arrival time among all the processes as the CPU start time. Step 4: Calculate start time, finishing time, turn around time and waiting time Step 5: Calculate average waiting time and average turn around time Step 6: display the all calculated values P1P3P2 63 300
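The steps above can be turned into a small simulation. The following C sketch is illustrative only (it is not part of the original notes): the process data is the same four-process example used in the next slide, and the helper logic simply picks, among the processes that have already arrived, the one with the shortest burst, then accumulates waiting and turnaround times.

#include <stdio.h>

/* Minimal non-preemptive SJF sketch; arrival and burst times are the
 * illustrative values of the worked example that follows. */
int main(void) {
    int n = 4;
    int arrival[] = {0, 2, 4, 5};     /* arrival times     */
    int burst[]   = {7, 4, 1, 4};     /* CPU burst lengths */
    int done[4]   = {0, 0, 0, 0};
    int wait[4], turnaround[4];
    int time = 0, completed = 0;
    double total_wait = 0, total_tat = 0;

    while (completed < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)   /* shortest burst among arrived jobs */
            if (!done[i] && arrival[i] <= time &&
                (pick == -1 || burst[i] < burst[pick]))
                pick = i;
        if (pick == -1) { time++; continue; }   /* CPU idle: nothing has arrived */

        wait[pick] = time - arrival[pick];      /* time spent in the ready queue */
        time += burst[pick];                    /* non-preemptive: run to completion */
        turnaround[pick] = time - arrival[pick];
        done[pick] = 1;
        completed++;
        total_wait += wait[pick];
        total_tat  += turnaround[pick];
    }
    for (int i = 0; i < n; i++)
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, wait[i], turnaround[i]);
    printf("average waiting=%.2f average turnaround=%.2f\n",
           total_wait / n, total_tat / n);
    return 0;
}

Run on this data, the sketch reproduces the schedule of the non-preemptive SJF example below, giving an average waiting time of 4.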
• 6. OPERATING SYSTEMS ii- 6
Example of Non-Preemptive SJF
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
SJF (non-preemptive) Gantt chart: P1 0–7, P3 7–8, P2 8–12, P4 12–16
Average waiting time = (0 + 6 + 3 + 7)/4 = 4
Example of Preemptive SJF
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
SJF (preemptive, SRTF) Gantt chart: P1 0–2, P2 2–4, P3 4–5, P2 5–7, P4 7–11, P1 11–16
Average waiting time = (9 + 1 + 0 + 2)/4 = 3
Determining Length of Next CPU Burst
Can only estimate the length. Can be done by using the length of previous CPU bursts, using exponential averaging.
• 7. OPERATING SYSTEMS ii- 7
Prediction of the Length of the Next CPU Burst
Exponential averaging is defined as follows:
1. t(n) = actual length of the nth CPU burst
2. τ(n+1) = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τ(n+1) = α t(n) + (1 - α) τ(n)
Examples of Exponential Averaging
α = 0: τ(n+1) = τ(n), so recent history does not count.
α = 1: τ(n+1) = t(n), so only the actual last CPU burst counts.
If we expand the formula, we get:
τ(n+1) = α t(n) + (1 - α) α t(n-1) + … + (1 - α)^j α t(n-j) + … + (1 - α)^(n+1) τ(0)
Since both α and (1 - α) are less than or equal to 1, each successive term has less weight than its predecessor.
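As a quick illustration of τ(n+1) = α t(n) + (1 - α) τ(n), the short C fragment below (not part of the original notes) predicts successive bursts from a list of observed burst lengths; the burst history, α = 0.5 and the initial guess τ(0) = 10 are assumed illustrative values.

#include <stdio.h>

/* Exponential averaging: tau(n+1) = alpha*t(n) + (1-alpha)*tau(n).
 * The burst history below is illustrative, not measured data. */
int main(void) {
    double alpha = 0.5;                 /* weight of the most recent burst */
    double tau = 10.0;                  /* initial prediction tau(0)       */
    double t[] = {6, 4, 6, 4, 13, 13, 13};
    int n = sizeof t / sizeof t[0];

    for (int i = 0; i < n; i++) {
        printf("predicted=%.2f  actual=%.0f\n", tau, t[i]);
        tau = alpha * t[i] + (1.0 - alpha) * tau;   /* update the estimate */
    }
    printf("next prediction=%.2f\n", tau);
    return 0;
}

Because each new observation is blended with the old estimate, a single unusually long burst only partially shifts the prediction, which is the point of the weighting argument above.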
• 8. OPERATING SYSTEMS ii- 8
Priority Scheduling
A priority number (integer) is associated with each process.
The CPU is allocated to the process with the highest priority (smallest integer = highest priority).
Priority scheduling can be preemptive or nonpreemptive.
SJF is a priority scheduling where the priority is the predicted next CPU burst time.
Problem: Starvation – low-priority processes may never execute.
Solution: Aging – as time progresses, increase the priority of the process.
Algorithm
Step 1: Get the number of processes, arrival time, CPU burst time and priority of each process
Step 2: Schedule the processes according to highest priority
Step 3: Calculate the starting time, finishing time, waiting time and turn around time
Step 4: Calculate the average waiting time and average turn around time
Step 5: Display all calculated values
Example of priority scheduling algorithm
Process CPU burst time Priority
P1 20 1
P2 5 3
P3 10 2
P4 15 3
Gantt chart: P1 0–20, P3 20–30, P2 30–35, P4 35–50
Round Robin (RR)
Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Performance
q large ⇒ FIFO
• 9. OPERATING SYSTEMS ii- 9
q small ⇒ q must still be large with respect to the context-switch time, otherwise the overhead is too high.
Algorithm
Step 1: Get the number of processes, arrival time and CPU burst time of each process
Step 2: Get the time slice with which the processes are to be executed
Step 3: Schedule the processes according to arrival time
Step 4: Execute each scheduled process for the given time slice
Step 5: Pre-empt the process after the expiry of the time slice and move it to the tail of the queue
Step 6: Calculate the starting time, finishing time, waiting time and turn around time
Step 7: Calculate the average waiting time and average turn around time
Step 8: Display all calculated values
Example (time quantum = 20)
Process Burst Time
P1 53
P2 17
P3 68
P4 24
The Gantt chart is: P1 0–20, P2 20–37, P3 37–57, P4 57–77, P1 77–97, P3 97–117, P4 117–121, P1 121–134, P3 134–154, P3 154–162
Typically, RR gives higher average turnaround than SJF, but better response.
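The RR schedule above can be reproduced with a compact C sketch (illustrative only, not from the notes). It assumes all four processes arrive at time 0 and simply cycles over the process table in index order; a real dispatcher would keep a FIFO ready queue, but for this example the fixed cyclic order produces the same Gantt chart.

#include <stdio.h>

/* Round-Robin sketch: quantum = 20, bursts 53, 17, 68, 24, all arriving at time 0. */
int main(void) {
    int burst[]     = {53, 17, 68, 24};
    int remaining[] = {53, 17, 68, 24};
    int n = 4, quantum = 20, time = 0, left = n;
    int finish[4];

    while (left > 0) {
        for (int i = 0; i < n; i++) {           /* cycle through the processes */
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            printf("t=%3d  P%d runs for %d\n", time, i + 1, slice);
            time += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) { finish[i] = time; left--; }
        }
    }
    for (int i = 0; i < n; i++)                 /* waiting = turnaround - burst */
        printf("P%d: turnaround=%d waiting=%d\n",
               i + 1, finish[i], finish[i] - burst[i]);
    return 0;
}

The printed time stamps match the chart boundaries 0, 20, 37, 57, 77, 97, 117, 121, 134, 154, 162.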
  • 10. OPERATING SYSTEMS ii-10 Time Quantum and Context Switch Time Multilevel Queue Ready queue is partitioned into separate queues: foreground (interactive) background (batch) Each queue has its own scheduling algorithm foreground – RR background – FCFS Scheduling must be done between the queues
  • 11. OPERATING SYSTEMS ii-11 Fixed priority scheduling; (i.e., serve all from foreground then from background). Possibility of starvation. Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; i.e., 80% to foreground in RR 20% to background in FCFS Multilevel Queue Scheduling Multilevel Feedback Queue A process can move between the various queues; aging can be implemented this way Multilevel-feedback-queue scheduler defined by the following parameters: 1. number of queues 2. scheduling algorithms for each queue 3. method used to determine when to upgrade a process 4. method used to determine when to demote a process 5. method used to determine which queue a process will enter when that process needs service Example of Multilevel Feedback Queue Three queues: Q0 – RR with time quantum 8 milliseconds
  • 12. OPERATING SYSTEMS ii-12 Q1 – RR time quantum 16 milliseconds Q2 – FCFS(first come First served algorithm) Scheduling A new job enters queue Q0 which is served FCFS. When it gains CPU, job receives 8 milliseconds. If it does not finish in 8 milliseconds, job is moved to queue Q1. At Q1 job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2. Multilevel Feedback Queues 6.4 Multiple-Processor Scheduling CPU scheduling more complex when multiple CPUs are available Homogeneous processors within a multiprocessor Load sharing Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing
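A minimal sketch of the three-queue feedback policy described above (Q0: RR with q = 8, Q1: RR with q = 16, Q2: FCFS) is given below. It is a deliberate simplification, not the notes' own algorithm: all jobs are assumed to arrive at time 0 and each job is demoted at most once per level, so a single level-by-level scan is enough; the job lengths are hypothetical.

#include <stdio.h>

/* Demotion-only MLFQ sketch: Q0 (RR, q=8), Q1 (RR, q=16), Q2 (FCFS).
 * All jobs arrive at time 0; burst values are illustrative. */
int main(void) {
    int burst[] = {5, 20, 40};          /* remaining CPU demand per job        */
    int n = 3, time = 0;
    int quantum[] = {8, 16, 0};         /* 0 means "run to completion" (FCFS)  */

    for (int level = 0; level < 3; level++) {
        for (int i = 0; i < n; i++) {
            if (burst[i] == 0) continue;            /* already finished */
            int q = quantum[level];
            int slice = (q == 0 || burst[i] < q) ? burst[i] : q;
            printf("t=%3d  level %d runs J%d for %d\n", time, level, i + 1, slice);
            time += slice;
            burst[i] -= slice;
            if (burst[i] == 0)
                printf("t=%3d  J%d finished\n", time, i + 1);
            /* otherwise the job is demoted and will be served again when the
               scan reaches the next (lower-priority) level */
        }
    }
    return 0;
}

The 5-unit job never leaves Q0, the 20-unit job is finished in Q1, and only the 40-unit job sinks to the FCFS queue, which is the behaviour the slide describes.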
• 13. OPERATING SYSTEMS ii-13
6.5 Real-Time Scheduling
Hard real-time systems – required to complete a critical task within a guaranteed amount of time.
Soft real-time computing – requires that critical processes receive priority over less critical ones.
Chapter 7: Process Synchronization
1. The Critical-Section Problem
2. Synchronization Hardware
3. Semaphores
4. Classic Problems of Synchronization
5. Monitors
6. Synchronization Examples
Producer
while (true) {
/* produce an item and put in nextProduced */
while (count == BUFFER_SIZE)
; // do nothing
buffer[in] = nextProduced;
in = (in + 1) % BUFFER_SIZE;
count++;
}
Consumer
while (true) {
while (count == 0)
; // do nothing
nextConsumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
count--;
/* consume the item in nextConsumed */
}
Race Condition
1. count++ could be implemented as
register1 = count
• 14. OPERATING SYSTEMS ii-14
register1 = register1 + 1
count = register1
2. count-- could be implemented as
register2 = count
register2 = register2 - 1
count = register2
3. Consider this execution interleaving with count = 5 initially:
S0: producer executes register1 = count {register1 = 5}
S1: producer executes register1 = register1 + 1 {register1 = 6}
S2: consumer executes register2 = count {register2 = 5}
S3: consumer executes register2 = register2 - 1 {register2 = 4}
S4: producer executes count = register1 {count = 6}
S5: consumer executes count = register2 {count = 4}
7.1 The Critical-Section Problem
Consider a system consisting of n processes {P0, P1, …, Pn-1}. Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file and so on. The important feature of the system is that, when one process is executing in its critical section, no other process is allowed to execute in its critical section.
Solution to Critical-Section Problem
1. Mutual Exclusion – If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.
2. Progress – If no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely.
3. Bounded Waiting – A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Race Condition
The situation where several processes access and manipulate shared data concurrently. The final value of the shared data depends upon which process finishes last.
The Critical-Section Problem
n processes all compete to use some shared data. Each process has a code segment, called the critical section, in which the shared data is accessed.
Problem – ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.
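The count++ / count-- interleaving shown above can be reproduced on a real machine. The following POSIX-threads sketch is illustrative (it is not part of the notes): one thread plays the producer's count++, the other the consumer's count--, with no synchronization at all. Compiled with gcc -pthread, the final count is frequently not zero.

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

static long count = 0;          /* shared variable, deliberately unprotected */

static void *increment(void *arg) {         /* models the producer's count++ */
    (void)arg;
    for (int i = 0; i < ITERATIONS; i++)
        count++;                /* load, add, store: the steps can interleave */
    return NULL;
}

static void *decrement(void *arg) {         /* models the consumer's count-- */
    (void)arg;
    for (int i = 0; i < ITERATIONS; i++)
        count--;
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, decrement, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* With no mutual exclusion the result is usually different from 0. */
    printf("final count = %ld (expected 0)\n", count);
    return 0;
}

Protecting the increment and decrement with any of the mechanisms that follow (Peterson's solution, TestAndSet, semaphores, monitors) makes the result deterministic again.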
• 15. OPERATING SYSTEMS ii-15
Structure of process Pi:
do {
entry section
critical section
exit section
remainder section
} while(1);
Initial Attempts to Solve the Problem
Only 2 processes, P0 and P1.
General structure of process Pi (the other process is Pj).
Processes may share some common variables to synchronize their actions.
do {
entry section
critical section
exit section
remainder section
} while(1);
Algorithm 1
Shared variables:
var turn: (0..1); initially turn = 0
turn = i ⇒ Pi can enter its critical section
Process Pi:
do {
while turn ≠ i do no-op;
critical section
turn := j;
remainder section
} while(1);
Satisfies mutual exclusion, but not progress.
Algorithm 2
Shared variables:
var flag: array [0..1] of boolean; initially flag[0] = flag[1] = false
flag[i] = true ⇒ Pi is ready to enter its critical section
Process Pi:
do {
flag[i] := true;
while flag[j] do no-op;
critical section
flag[i] := false;
remainder section
}
• 16. OPERATING SYSTEMS ii-16
while(1);
Satisfies mutual exclusion, but not the progress requirement.
Algorithm 3
Combines the shared variables of algorithms 1 and 2.
Process Pi:
do {
flag[i] := true;
turn := j;
while (flag[j] and turn = j) do no-op;
critical section
flag[i] := false;
remainder section
} while(1);
Meets all three requirements; solves the critical-section problem for two processes.
Peterson's Solution
Two-process solution.
Assume that the LOAD and STORE instructions are atomic; that is, they cannot be interrupted.
The two processes share two variables:
int turn;
boolean flag[2];
The variable turn indicates whose turn it is to enter the critical section. The flag array is used to indicate if a process is ready to enter the critical section: flag[i] = true implies that process Pi is ready.
Algorithm for Process Pi
while (true) {
flag[i] = TRUE;
turn = j;
while (flag[j] && turn == j)
; // do nothing
CRITICAL SECTION
flag[i] = FALSE;
REMAINDER SECTION
}
• 17. OPERATING SYSTEMS ii-17
7.2 Synchronization Hardware
Assume that each process executes at a nonzero speed; no assumption is made concerning the relative speed of the n processes.
Many systems provide hardware support for critical-section code.
Uniprocessors – could disable interrupts: currently running code would execute without preemption. Generally too inefficient on multiprocessor systems; operating systems using this approach are not broadly scalable.
Modern machines provide special atomic hardware instructions (atomic = non-interruptible):
Either test a memory word and set its value,
Or swap the contents of two memory words.
TestAndSet Instruction
Definition:
boolean TestAndSet (boolean *target) {
boolean rv = *target;
*target = TRUE;
return rv;
}
Solution using TestAndSet
Shared boolean variable lock, initialized to FALSE.
Solution:
while (true) {
while (TestAndSet(&lock))
; // do nothing
// critical section
lock = FALSE;
• 18. OPERATING SYSTEMS ii-18
// remainder section
}
Swap Instruction
Definition:
void Swap (boolean *a, boolean *b) {
boolean temp = *a;
*a = *b;
*b = temp;
}
Solution using Swap
Shared boolean variable lock, initialized to FALSE; each process has a local boolean variable key.
Solution:
while (true) {
key = TRUE;
while (key == TRUE)
Swap(&lock, &key);
// critical section
lock = FALSE;
// remainder section
}
7.3 Semaphore
1. Semaphore S – integer variable
2. Two standard operations modify S: wait() and signal()
a. Originally called P() and V()
3. Less complicated
4. Can only be accessed via two indivisible (atomic) operations
wait (S) {
while (S <= 0)
; // no-op
• 19. OPERATING SYSTEMS ii-19
S--;
}
signal (S) {
S++;
}
Semaphore as General Synchronization Tool
Counting semaphore – integer value can range over an unrestricted domain.
Binary semaphore – integer value can range only between 0 and 1; can be simpler to implement. Also known as mutex locks.
Can implement a counting semaphore S as a binary semaphore.
Provides mutual exclusion:
Semaphore S; // initialized to 1
wait (S);
Critical Section
signal (S);
Semaphore Implementation
Must guarantee that no two processes can execute wait() and signal() on the same semaphore at the same time.
Thus, the implementation itself becomes a critical-section problem, where the wait and signal code are placed in the critical section.
The implementation could still use busy waiting, but the implementation code is short and there is little busy waiting if the critical section is rarely occupied.
Note that applications may spend lots of time in critical sections, and therefore busy waiting is not a good general solution.
Semaphore Implementation with no busy waiting
With each semaphore there is an associated waiting queue. Each entry in a waiting queue has two data items:
value (of type integer)
pointer to the next record in the list
Two operations:
block – place the process invoking the operation on the appropriate waiting queue.
wakeup – remove one of the processes in the waiting queue and place it in the ready queue.
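Before the block()/wakeup() implementation that follows, here is a hedged usage sketch: POSIX unnamed semaphores (sem_wait/sem_post, available on Linux) are one concrete realization of wait()/signal(), and a binary semaphore initialized to 1 acts as the mutex lock described above. The code is illustrative and compiles with gcc -pthread.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t s;                  /* binary semaphore, initialized to 1 */
static int shared = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&s);            /* wait(S): entry section  */
        shared++;                /* critical section        */
        sem_post(&s);            /* signal(S): exit section */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    sem_init(&s, 0, 1);          /* 0 = shared between threads, initial value 1 */
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("shared = %d (expected 200000)\n", shared);
    sem_destroy(&s);
    return 0;
}

Unlike the race-condition demo earlier, the final value here is always 200000, because the semaphore serializes access to the shared variable.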
  • 20. OPERATING SYSTEMS ii-20 Semaphore Implementation with no Busy waiting (Cont.) Implementation of wait: wait (S){ value--; if (value < 0) { add this process to waiting queue block(); } } Implementation of signal: Signal (S){ value++; if (value <= 0) { remove a process P from the waiting queue wakeup(P); } } Deadlock and Starvation Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes Let S and Q be two semaphores initialized to 1 P0 P1 wait (S); wait (Q); wait (Q); wait (S); . . . . . . signal (S); signal (Q); signal (Q); signal (S); Starvation – indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended. 7.4 Classical Problems of Synchronization 1. Bounded-Buffer Problem 2. Readers and Writers Problem 3. Dining-Philosophers Problem
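The two-semaphore deadlock scenario sketched above (P0 and P1 acquiring S and Q in opposite orders) can also be written out directly. The POSIX sketch below is illustrative, not from the notes; the sleep() call only widens the timing window so that the deadlock is very likely, and the program may simply hang when run, which is exactly the point.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

static sem_t S, Q;               /* both initialized to 1 */

static void *p0(void *arg) {
    (void)arg;
    sem_wait(&S);                /* wait(S) */
    sleep(1);                    /* give P1 time to grab Q */
    sem_wait(&Q);                /* wait(Q): blocks if P1 already holds Q */
    sem_post(&Q);
    sem_post(&S);
    return NULL;
}

static void *p1(void *arg) {
    (void)arg;
    sem_wait(&Q);                /* wait(Q) */
    sleep(1);
    sem_wait(&S);                /* wait(S): blocks if P0 already holds S */
    sem_post(&S);
    sem_post(&Q);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    sem_init(&S, 0, 1);
    sem_init(&Q, 0, 1);
    pthread_create(&a, NULL, p0, NULL);
    pthread_create(&b, NULL, p1, NULL);
    pthread_join(a, NULL);       /* with opposite acquisition orders these joins */
    pthread_join(b, NULL);       /* typically never return: a circular wait      */
    puts("no deadlock this time");
    return 0;
}

Acquiring both semaphores in the same global order in every thread removes the circular wait; that observation reappears as a prevention technique in the deadlock chapter.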
  • 21. OPERATING SYSTEMS ii-21 Bounded-Buffer Problem N buffers, each can hold one item Semaphore mutex initialized to the value 1 Semaphore full initialized to the value 0 Semaphore empty initialized to the value N. Bounded Buffer Problem (Cont.) The structure of the producer process while (true) { // produce an item wait (empty); wait (mutex); // add the item to the buffer signal (mutex); signal (full); } Bounded Buffer Problem (Cont.) The structure of the consumer process while (true) { wait (full); wait (mutex); // remove an item from buffer signal (mutex); signal (empty); // consume the removed item } Readers-Writers Problem
• 22. OPERATING SYSTEMS ii-22
A data set is shared among a number of concurrent processes.
Readers – only read the data set; they do not perform any updates.
Writers – can both read and write.
Problem – allow multiple readers to read at the same time, but only a single writer may access the shared data at any one time.
Shared Data
Data set
Semaphore mutex initialized to 1
Semaphore wrt initialized to 1
Integer readcount initialized to 0
Readers-Writers Problem (Cont.)
The structure of a writer process:
while (true) {
wait (wrt);
// writing is performed
signal (wrt);
}
Readers-Writers Problem (Cont.)
The structure of a reader process:
while (true) {
wait (mutex);
readcount++;
if (readcount == 1)
wait (wrt);
signal (mutex);
// reading is performed
wait (mutex);
readcount--;
if (readcount == 0)
signal (wrt);
signal (mutex);
}
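The reader and writer entry/exit sections above map directly onto POSIX semaphores. The sketch below is illustrative (not from the notes): it is the "first readers-writers" variant, in which the first reader locks out writers and the last reader lets them back in, so writers can starve if readers keep arriving. Compile with gcc -pthread.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex, wrt;         /* both initialized to 1        */
static int readcount = 0;        /* number of readers inside     */
static int data = 0;             /* the shared data set          */

static void *writer(void *arg) {
    (void)arg;
    sem_wait(&wrt);              /* writers need exclusive access */
    data++;                      /* writing is performed          */
    sem_post(&wrt);
    return NULL;
}

static void *reader(void *arg) {
    (void)arg;
    sem_wait(&mutex);
    if (++readcount == 1)        /* first reader locks out writers */
        sem_wait(&wrt);
    sem_post(&mutex);

    printf("read %d\n", data);   /* reading is performed, shared among readers */

    sem_wait(&mutex);
    if (--readcount == 0)        /* last reader lets writers back in */
        sem_post(&wrt);
    sem_post(&mutex);
    return NULL;
}

int main(void) {
    pthread_t r[3], w;
    sem_init(&mutex, 0, 1);
    sem_init(&wrt, 0, 1);
    pthread_create(&w, NULL, writer, NULL);
    for (int i = 0; i < 3; i++) pthread_create(&r[i], NULL, reader, NULL);
    pthread_join(w, NULL);
    for (int i = 0; i < 3; i++) pthread_join(r[i], NULL);
    return 0;
}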
  • 23. OPERATING SYSTEMS ii-23 Dining-Philosophers Problem Shared data Bowl of rice (data set) Semaphore chopstick [5] initialized to 1 Dining-Philosophers Problem (Cont.) The structure of Philosopher i: While (true) { wait ( chopstick[i] ); wait ( chopStick[ (i + 1) % 5] ); // eat signal ( chopstick[i] );
• 24. OPERATING SYSTEMS ii-24
signal (chopstick[(i + 1) % 5]);
// think
}
Problems with Semaphores
Semaphores must be used correctly; typical programming errors are:
signal (mutex) …. wait (mutex) (operations in the wrong order)
wait (mutex) … wait (mutex) (waiting twice instead of wait/signal)
Omitting wait (mutex) or signal (mutex) (or both)
7.6 Monitors
A high-level abstraction that provides a convenient and effective mechanism for process synchronization.
Only one process may be active within the monitor at a time.
monitor monitor-name {
// shared variable declarations
procedure P1 (…) { …. }
…
procedure Pn (…) { …… }
Initialization code (….) { … }
}
  • 25. OPERATING SYSTEMS ii-25 Schematic view of a Monitor condition x, y; Two operations on a condition variable: x.wait () – a process that invokes the operation is suspended. x.signal () – resumes one of processes (if any) that invoked x.wait ()
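Outside a monitor-aware language, the same behaviour is usually obtained with a mutex plus a condition variable. The sketch below is an illustrative analogy, not the notes' own construct: the pthread mutex provides the one-process-at-a-time property of the monitor, pthread_cond_wait releases the lock while the caller is suspended (like x.wait()), and pthread_cond_signal plays the role of x.signal(). Compile with gcc -pthread.

#include <pthread.h>
#include <stdio.h>

/* A tiny "monitor": the mutex gives mutual exclusion on entry,
 * the condition variable plays the role of condition x. */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  x = PTHREAD_COND_INITIALIZER;
static int ready = 0;

static void *waiter(void *arg) {
    (void)arg;
    pthread_mutex_lock(&m);              /* enter the monitor             */
    while (!ready)                       /* x.wait(): suspend, release m  */
        pthread_cond_wait(&x, &m);
    printf("resumed after signal\n");
    pthread_mutex_unlock(&m);            /* leave the monitor             */
    return NULL;
}

static void *signaler(void *arg) {
    (void)arg;
    pthread_mutex_lock(&m);
    ready = 1;
    pthread_cond_signal(&x);             /* x.signal(): wake one waiter   */
    pthread_mutex_unlock(&m);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, waiter, NULL);
    pthread_create(&b, NULL, signaler, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}

The while loop around the wait mirrors the usual monitor discipline: the condition is re-checked when the waiter resumes.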
• 26. OPERATING SYSTEMS ii-26
Monitor with Condition Variables
Solution to Dining Philosophers
monitor DP {
enum {THINKING, HUNGRY, EATING} state[5];
condition self[5];
void pickup (int i) {
state[i] = HUNGRY;
test(i);
if (state[i] != EATING)
self[i].wait();
}
void putdown (int i) {
state[i] = THINKING;
// test left and right neighbors
test((i + 4) % 5);
test((i + 1) % 5);
  • 27. OPERATING SYSTEMS ii-27 } void test (int i) { if ( (state[(i + 4) % 5] != EATING) && (state[i] == HUNGRY) && (state[(i + 1) % 5] != EATING) ) { state[i] = EATING ; self[i].signal () ; } } initialization_code() { for (int i = 0; i < 5; i++) state[i] = THINKING; } } Solution to Dining Philosophers (cont) void test (int i) { if ( (state[(i + 4) % 5] != EATING) && (state[i] == HUNGRY) && (state[(i + 1) % 5] != EATING) ) { state[i] = EATING ; self[i].signal () ; } } initialization_code() { for (int i = 0; i < 5; i++) state[i] = THINKING; } } Each philosopher I invokes the operations pickup() and putdown() in the following sequence: dp.pickup (i) EAT
  • 28. OPERATING SYSTEMS ii-28 dp.putdown (i) Monitor Implementation Using Semaphores Variables semaphore mutex; // (initially = 1) semaphore next; // (initially = 0) int next-count = 0; Each procedure F will be replaced by wait(mutex); … body of F; … if (next-count > 0) signal(next) else signal(mutex); Mutual exclusion within a monitor is ensured. Monitor Implementation For each condition variable x, we have: semaphore x-sem; // (initially = 0) int x-count = 0; The operation x.wait can be implemented as: x-count++; if (next-count > 0) signal(next); else signal(mutex); wait(x-sem); x-count--; The operation x.signal can be implemented as: if (x-count > 0) { next-count++; signal(x-sem); wait(next); next-count--; } Chapter 8: Deadlocks
• 29. OPERATING SYSTEMS ii-29
1. The Deadlock Problem
2. System Model
3. Deadlock Characterization
4. Methods for Handling Deadlocks
5. Deadlock Prevention
6. Deadlock Avoidance
7. Deadlock Detection
8. Recovery from Deadlock
Deadlock definition
A set of processes is in a deadlock state when every process in the set is waiting for an event that can be caused only by another process in the set.
The Deadlock Problem
A set of blocked processes, each holding a resource and waiting to acquire a resource held by another process in the set.
Example
System has 2 disk drives.
P1 and P2 each hold one disk drive and each needs the other one.
Example: semaphores A and B, initialized to 1
P0: wait (A); wait (B);
P1: wait (B); wait (A);
Bridge Crossing Example
• 30. OPERATING SYSTEMS ii-30
Traffic only in one direction.
Each section of a bridge can be viewed as a resource.
If a deadlock occurs, it can be resolved if one car backs up (preempt resources and roll back).
Several cars may have to be backed up if a deadlock occurs.
Starvation is possible.
8.1 System Model
Computer systems are full of resources that can be used by processes; examples of resources are the CPU, memory space, I/O devices, etc.
Each process utilizes a resource as follows:
Request: If the request cannot be granted immediately, then the requesting process must wait until it can acquire the resource.
Use: The process can operate on the resource.
Release: The process releases the resource.
8.2 Deadlock Characterization
Deadlock can arise if four conditions hold simultaneously. The following are the necessary conditions for deadlock to occur:
Mutual exclusion: only one process at a time can use a resource.
Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.
No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task.
Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn-1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.
• 31. OPERATING SYSTEMS ii-31
Resource-Allocation Graph
Deadlock can be described in terms of a directed graph called the system resource-allocation graph. This graph consists of a set of vertices V and a set of edges E.
V is partitioned into two types:
P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.
R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.
Request edge – a directed edge Pi → Rj implies that process Pi is requesting an instance of resource type Rj and is currently waiting for that resource.
Assignment edge – a directed edge Rj → Pi implies that an instance of resource type Rj has been allocated to Pi.
The following notations are used in the graph figures: a process Pi is drawn as a circle; a resource type with several instances is drawn as a rectangle with one dot per instance; Pi requests an instance of Rj (request edge Pi → Rj); Pi is holding an instance of Rj (assignment edge Rj → Pi).
Example of a Resource Allocation Graph
• 32. OPERATING SYSTEMS ii-32
The RAG depicts the following situation:
1. The sets P, R and E:
P = {P1, P2, P3}
R = {R1, R2, R3, R4}
E = {P1→R1, P2→R3, R1→P2, R2→P2, R2→P1, R3→P3}
2. Resource instances:
1 instance of R1
2 instances of R2
1 instance of R3
3 instances of R4
3. Process states
Given a resource-allocation graph, it can be shown that if the graph contains no cycles, then no process in the system is deadlocked. If there is a cycle, a deadlock may exist.
Consider the RAG above. If P3 requests an instance of type R2, the request cannot be granted, so a request edge P3→R2 is added. At this point there are 2 cycles in the system:
Cycle I: P1→R1→P2→R3→P3→R2→P1
Cycle II: P2→R3→P3→R2→P2
Thus processes P1, P2 and P3 are deadlocked. Process P2 is waiting for the resource R3, which is held by P3. P3, on the other hand, is waiting for either P1 or P2 to release R2. In addition, P1 is waiting for process P2 to release resource R1. So there is a deadlock, as the resource-allocation graph shown below illustrates.
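For detection, the cycle check itself is a plain graph problem. The sketch below is illustrative only: it hard-codes the adjacency of the seven nodes above (P1, P2, P3, R1–R4), including the extra request edge P3 → R2, and runs a depth-first search that reports a back edge as a cycle. When every resource has a single instance a cycle means deadlock; with multiple instances (R2 here has two) it only signals a possible deadlock, although, as argued above, these processes are in fact deadlocked.

#include <stdio.h>

#define N 7   /* nodes: 0=P1 1=P2 2=P3 3=R1 4=R2 5=R3 6=R4 */

static int adj[N][N];
static int color[N];   /* 0 = unvisited, 1 = on the current DFS path, 2 = done */

static int dfs(int u) {
    color[u] = 1;
    for (int v = 0; v < N; v++)
        if (adj[u][v]) {
            if (color[v] == 1) return 1;          /* back edge: cycle found */
            if (color[v] == 0 && dfs(v)) return 1;
        }
    color[u] = 2;
    return 0;
}

int main(void) {
    int edges[][2] = { {0,3}, {1,5}, {2,4},        /* request edges Pi -> Rj    */
                       {3,1}, {4,1}, {4,0}, {5,2} };/* assignment edges Rj -> Pi */
    for (unsigned i = 0; i < sizeof edges / sizeof edges[0]; i++)
        adj[edges[i][0]][edges[i][1]] = 1;

    int cycle = 0;
    for (int u = 0; u < N && !cycle; u++)
        if (color[u] == 0) cycle = dfs(u);

    printf(cycle ? "cycle found: deadlock possible\n" : "no cycle: no deadlock\n");
    return 0;
}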
  • 33. OPERATING SYSTEMS ii-33 Resource Allocation Graph With A Deadlock Graph With A Cycle But No Deadlock Basic Facts  If graph contains no cycles  no deadlock.  If graph contains a cycle  o if only one instance per resource type, then deadlock.
  • 34. OPERATING SYSTEMS ii-34 o if several instances per resource type, possibility of deadlock. 8.3 Methods for Handling Deadlocks There are three different methods for dealing with the deadlock problem. 1. A protocol can be used to ensure that the system will never enter into a deadlock state. 2. The system can be allowed to enter deadlock state and then recover 3. The problem can be ignored with the assumption that deadlock never occurs in the system. 8.4 Deadlock Prevention It is a set of methods for ensuring that atleast one of the necessary conditions cannot hold.  Mutual Exclusion – not required for sharable resources; must hold for non- sharable resources. A process never needs to wait for sharable resource.  Hold and Wait – must guarantee that whenever a process requests a resource, it does not hold any other resources. o Require process to request and be allocated all its resources before it begins execution, or allow process to request resources only when the process has none. o Low resource utilization; starvation possible.  One protocol that can be used to require each process to request and be allocated all its resources before it begins execution for ex. consider a process copies from tape drive->Disk->printer, according to this protocol all three requested resources are held by the processes from the beginning to the printing of the file. It will hold the printer for its entire execution even though it needs the printer only at the end.  Second protocol allows requesting resources only when the process has released all the resources it was currently allocated. For eg..First the process request tape drive and disk, then releases both before a fresh request is made for disk and printer. Disadvantages: Low resource utilization; starvation possible. No Preemption – If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources currently being held are released.
  • 35. OPERATING SYSTEMS ii-35 Preempted resources are added to the list of resources for which the process is waiting. Process will be restarted only when it can regain its old resources, as well as the new ones that it is requesting. Circular Wait – imposes a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration. Eg.F(Tape Drive)=1,F(Disk)=5,F(Printer)=12 A process can request only if F(Rj)>F(Ri)where Ri is the current resource and Rj is the request resource. 8.5 Deadlock Avoidance Deadlock prevention algorithms prevent deadlocks restraining how request can be made.The restrains ensure that atleast one of the necessary condition for deadlock cannot occur and hence there is no deadlock. An alternative method requires that the system has some additional information about how resources are requested. with a complete knowledge of the sequence of requests and releases of each process it can be decided for each request whether or not the current request can be satisfied or the process must wait to avoid possible future deadlock.  Simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need.  The deadlock-avoidance algorithm dynamically examines the resource- allocation state to ensure that there can never be a circular-wait condition.  Resource-allocation state is defined by the number of available and allocated resources, and the maximum demands of the processes. Safe state:  When a process requests an available resource, system must decide if immediate allocation leaves the system in a safe state.  System is in safe state if there exists a sequence <P1, P2, …, Pn> of ALL the processes is the systems such that for each Pi, the resources that Pi can still request can be satisfied by currently available resources + resources held by all the Pj, with j < i.  That is: If Pi resource needs are not immediately available, then Pi can wait until all Pj have finished. When Pj is finished, Pi can obtain needed resources, execute, return allocated resources, and terminate. When Pi terminates, Pi +1 can obtain its needed resources, execute ,return allocated resources and terminate.
• 36. OPERATING SYSTEMS ii-36
Basic Facts
If a system is in a safe state ⇒ no deadlocks.
If a system is in an unsafe state ⇒ possibility of deadlock.
Avoidance ⇒ ensure that the system will never enter an unsafe state.
(Figure: safe, unsafe and deadlock state spaces)
Avoidance algorithms
Single instance of each resource type: use a resource-allocation graph.
Multiple instances of a resource type: use the banker's algorithm.
Resource-Allocation Graph Scheme
Claim edge Pi → Rj indicates that process Pi may request resource Rj; represented by a dashed line.
A claim edge converts to a request edge when the process requests the resource.
A request edge converts to an assignment edge when the resource is allocated to the process.
  • 37. OPERATING SYSTEMS ii-37  When a resource is released by a process, assignment edge reconverts to a claim edge.  Resources must be claimed a priori in the system. Resource-Allocation Graph Unsafe State In Resource-Allocation Graph Resource-Allocation Graph Algorithm
  • 38. OPERATING SYSTEMS ii-38  Suppose that process Pi requests a resource Rj  The request can be granted only if converting the request edge to an assignment edge does not result in the formation of a cycle in the resource allocation graph Banker’s Algorithm  This algorithm is applicable to a system which has multiple instances of a resource type.  It is called by the name since the concept was initially used in banks to allocate cash to customers.  When a new process enters the system, it must declare the maximum number of instances of each resource type it may need. This number may not exceed the total number of resources in the system.  When a process request a resource the system in safe state, If it will the resources are allocated otherwise the process must wait until some other process release enough resources. Data Structures for the Banker’s Algorithm Let n = number of processes, and m = number of resources types.  Available: Vector of length m. If available [j] = k, there are k instances of resource type Rj available.  Max: n x m matrix. If Max [i,j] = k, then process Pi may request at most k instances of resource type Rj.  Allocation: n x m matrix. If Allocation[i,j] = k then Pi is currently allocated k instances of Rj.  Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task. Need [i,j] = Max[i,j] – Allocation [i,j]. Notation Given x and y are vectors of length n, we say that x<=y(less than or equal) if and only if x[i]<=y[i] for all i=1,2…n. We treat each row in the matrices allocation and need as vectors and refer to them as allocation and need, respectively. Therefore the vector allocation specifies the resources currently allocated to process pi the vector needi specifies the additional resources that process pi may still request to complete its task. Safety Algorithm Let Work and Finish be vectors of length m and n, respectively. Initialize: 1. Work = Available
  • 39. OPERATING SYSTEMS ii-39 Finish [i] = false for i = 0, 1, …, n- 1. 2. Find and i such that both: (a) Finish [i] = false (b) Needi  Work If no such i exists, go to step 4. 3. Work = Work + Allocationi Finish[i] = true go to step 2. 4.If Finish [i] == true for all i, then the system is in a safe state. Request = request vector for process Pi. If Requesti [j] = k then process Pi wants k Resource-Request Algorithm for Process Pi When a request for resources is made by process pi the following actions are taken.If request[j]=k the process pi wants k instances of resources type Rj 1. If Requesti  Needi go to step 2. Otherwise, raise error condition, since process has exceeded its maximum claim. 2. If Requesti  Available, go to step 3. Otherwise Pi must wait, since resources are not available. 3. Pretend to allocate requested resources to Pi by modifying the state as follows: Available = Available – Request; Allocationi = Allocationi + Requesti; Needi = Needi – Requesti;  If safe  the resources are allocated to Pi.  If unsafe  Pi must wait, and the old resource-allocation state is restored Example of Banker’s Algorithm  5 processes P0 through P4; o 3 resource types:  A (10 instances), B (5instances), and C (7 instances).  Snapshot at time T0: Allocation Max Available A B C A B C A B C P0 0 1 0 7 5 3 3 3 2 P1 2 0 0 3 2 2 P2 3 0 2 9 0 2 P3 2 1 1 2 2 2 P4 0 0 2 4 3 3  The content of the matrix Need is defined to be Max – Allocation. Need
  • 40. OPERATING SYSTEMS ii-40 A B C P0 7 4 3 P1 1 2 2 P2 6 0 0 P3 0 1 1 P4 4 3 1 Process Allocation A B C Max A B C Need Max- Allocation A B C Work=Work +Allocation A B C 3 3 2 Finish P0 0 1 0 7 5 3 7 4 3 F P1 2 0 0 3 2 2 1 2 2 5 3 2 T P2 3 0 2 9 0 2 6 0 0 F P3 2 1 1 2 2 2 0 1 1 7 4 3 T P4 0 0 2 4 3 3 4 3 1 7 4 5 T P0 0 1 0 7 5 3 7 4 3 7 5 5 T P2 3 0 2 9 0 2 6 0 0 10 5 7 T The system is in a safe state since the sequence < P1, P3, P4, P2, P0> satisfies safety criteria. Example: P1 Request (1,0,2) Check that Request  Available (that is, (1,0,2)  (3,3,2)  true. Allocation Need Available A B C A B C A B C P0 0 1 0 7 4 3 2 3 0 P1 3 0 2 0 2 0 P2 3 0 1 6 0 0 P3 2 1 1 0 1 1 P4 0 0 2 4 3 1 Executing safety algorithm shows that sequence < P1, P3, P4, P0, P2> satisfies safety requirement.
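The safety check on this snapshot can be written out directly. The C sketch below is illustrative, not from the notes: the Allocation, Max and Available data are taken from the example above (5 processes, resource types A, B, C), Need is computed as Max - Allocation, and the loop repeatedly looks for a process whose Need fits in Work, reclaiming its allocation when it "finishes". Note that more than one safe sequence may exist; scanning in index order, this sketch finds <P1, P3, P4, P0, P2>.

#include <stdio.h>

#define NP 5   /* processes P0..P4          */
#define NR 3   /* resource types A, B, C    */

int main(void) {
    int alloc[NP][NR] = { {0,1,0}, {2,0,0}, {3,0,2}, {2,1,1}, {0,0,2} };
    int max[NP][NR]   = { {7,5,3}, {3,2,2}, {9,0,2}, {2,2,2}, {4,3,3} };
    int avail[NR]     = { 3, 3, 2 };
    int need[NP][NR], finish[NP] = {0}, seq[NP], count = 0;
    int work[NR];

    for (int i = 0; i < NP; i++)              /* Need = Max - Allocation */
        for (int j = 0; j < NR; j++)
            need[i][j] = max[i][j] - alloc[i][j];
    for (int j = 0; j < NR; j++) work[j] = avail[j];   /* Work = Available */

    int progress = 1;
    while (progress) {
        progress = 0;
        for (int i = 0; i < NP; i++) {
            if (finish[i]) continue;
            int ok = 1;                        /* is Need_i <= Work ? */
            for (int j = 0; j < NR; j++)
                if (need[i][j] > work[j]) { ok = 0; break; }
            if (ok) {                          /* Pi can run; reclaim its allocation */
                for (int j = 0; j < NR; j++) work[j] += alloc[i][j];
                finish[i] = 1;
                seq[count++] = i;
                progress = 1;
            }
        }
    }

    if (count == NP) {
        printf("safe sequence:");
        for (int i = 0; i < NP; i++) printf(" P%d", seq[i]);
        printf("\n");
    } else {
        printf("system is not in a safe state\n");
    }
    return 0;
}

The same routine, run on the tentative state produced by a resource request (Available reduced, Allocation and Need adjusted), answers the "can this request be granted?" questions that follow.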
• 41. OPERATING SYSTEMS ii-41
Can the request for (3,3,0) by P4 be granted? Can the request for (0,2,0) by P0 be granted?
8.6 Deadlock Detection
If a system does not employ either a deadlock-prevention or a deadlock-avoidance algorithm, then a deadlock situation may occur. In this case the system must provide:
an algorithm that examines the state of the system to determine whether a deadlock has occurred, and
an algorithm to recover from the deadlock.
Single Instance of Each Resource Type
If all resources have only a single instance, then a variant of the RAG called the wait-for graph can be used for detection. In a wait-for graph, an edge from Pi to Pj implies that Pi is waiting for Pj to release a resource that Pi needs. An edge Pi → Pj exists in the wait-for graph if and only if the corresponding RAG contains the two edges Pi → Rq and Rq → Pj for some resource Rq. A cycle in the wait-for graph indicates a deadlock situation.
Pi → Pj if Pi is waiting for Pj.
Periodically invoke an algorithm that searches for a cycle in the graph. If there is a cycle, there exists a deadlock.
(Figure: a resource-allocation graph and its corresponding wait-for graph)
Several Instances of Each Resource Type
The algorithm to detect deadlock in a system with several instances of each resource type uses the following data structures. Let n = number of processes and m = number of resource types.
  • 42. OPERATING SYSTEMS ii-42  Available: A vector of length m indicates the number of available resources of each type.  Allocation: An n x m matrix defines the number of resources of each type currently allocated to each process.  Request: An n x m matrix indicates the current request of each process. If Request [ij] = k, then process Pi is requesting k more instances of resource type. Rj. Detection Algorithm 1.Let Work and Finish be vectors of length m and n, respectively Initialize: (a) Work = Available (b)For i = 1,2, …, n, if Allocationi  0, then Finish[i] = false;otherwise, Finish[i] = true. 2. Find an index i such that both: (a) Finish[i] == false (b) Requesti  Work 3.If no such i exists, go to step 4. Work = Work + Allocationi Finish[i] = true go to step 2. 4.If Finish[i] == false, for some i, 1  i  n, then the system is in deadlock state. Moreover, if Finish[i] == false, then Pi is deadlocked. Algorithm requires an order of O(m x n2) operations to detect whether the system is in deadlocked state. Example of Detection Algorithm Five processes P0 through P4; three resource types A (7 instances), B (2 instances), and C (6 instances). Snapshot at time T0: Allocation Request Available A B C A B C A B C P0 0 1 0 0 0 0 0 0 0 P1 2 0 0 2 0 2 P2 3 0 3 0 0 0 P3 2 1 1 1 0 0 P4 0 0 2 0 0 2 Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i. Example (Cont.)
  • 43. OPERATING SYSTEMS ii-43 P2 requests an additional instance of type C. Request A B C P0 0 0 0 P1 2 0 1 P2 0 0 1 P3 1 0 0 P4 0 0 2 State of system? Can reclaim resources held by process P0, but insufficient resources to fulfill other processes; requests. Deadlock exists, consisting of processes P1, P2, P3, and P4. Detection-Algorithm Usage When, and how often, to invoke depends on: How often a deadlock is likely to occur? How many processes will need to be rolled back? one for each disjoint cycle If detection algorithm is invoked arbitrarily, there may be many cycles in the resource graph and so we would not be able to tell which of the many deadlocked processes “caused” the deadlock. 8.7 Recovery from Deadlock: When deadlock detection determines that deadlock exists, several alternatives exist.  One possibility is to inform the operator that a deadlock has occurred and let the operator deal with the deadlock manually.  Another possibility is to recover from dead lock automatically. There are two options for breaking the deadlock a) Break the circular wait. b) Preempt some resources from one or more deadlocked processes. Process Termination To eliminate deadlocks by aborting a process the system reclaims all resources allocated to the terminate process. There are two ways which can be done.  Abort all deadlocked processes.  Abort one process at a time until the deadlock cycle is eliminated. Recovery from Deadlock: Resource Preemption To eliminate deadlocks using resource preemption, successively preempt some processes and give these resources to other processes until the deadlock cycle is broken. If preemption is required to deal with deadlocks then three issues need to be addressed.  Selecting a victim – Which process’s resources are to be preempted? Minimize cost.
  • 44. OPERATING SYSTEMS ii-44  Rollback –What should be done to the preempted processes? It should continue or not? Return to some safe state, restart process for that state.  Starvation – same process may always be picked as victim, include number of rollback in cost factor. 16 marks questions 1. Explain about the deadlock prevention methods? 2. Explain the deadlock avoidance with the help of banker’s algorithm 3. Explain the deadlock detection algorithm for multiple instances of a resource type 4. Explain the deadlock detection algorithm for single instance of resource type? 5. Problem from deadlock detection and deadlock avoidance **************************************************