Operating System
Processes, CPU Scheduling, and Process Synchronization
Processes
Process Concept, Process Scheduling,
Operations on Processes
Process Concept
• A process is an instance of a program in execution.
• Batch systems work in terms of "jobs". Many modern process
concepts are still expressed in terms of jobs (e.g., job scheduling),
and the two terms are often used interchangeably.
• On a single-user system such as Microsoft Windows, a user may be able to run several
programs at one time: a word processor, a web browser, and an e-mail package. Even if
the user can execute only one program at a time, the operating system may need to
support its own internal programmed activities, such as memory management. In many
respects, all these activities are similar, so we call all of them processes.
The process
• The text section comprises the compiled program code, read in from non-
volatile storage when the program is launched.
• The data section stores global and static variables, allocated and initialized
prior to executing main.
• The heap is used for dynamic memory allocation, and is managed via calls
to new, delete, malloc, free, etc.
• The stack is used for local variables. Space on the stack is reserved for
local variables when they are declared (at function entrance or elsewhere,
depending on the language), and the space is freed up when the variables
go out of scope. Note that the stack is also used for function return values,
and the exact mechanisms of stack management may be language specific.
• Note that the stack and the heap start at opposite ends of the process's free space and grow towards each
other. If they should ever meet, then either a stack overflow error will occur, or else a call to new or malloc
will fail due to insufficient memory available.
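A minimal C sketch (not from the slides; all names are illustrative) of where typical objects live in this layout:

```c
#include <stdlib.h>

int counter = 0;                    /* data section: global variable */

int square(int x) {                 /* compiled code lives in the text section */
    int result = x * x;             /* stack: local variable */
    return result;
}

int main(void) {
    int *p = malloc(sizeof *p);     /* heap: dynamic allocation */
    *p = square(++counter);
    free(p);                        /* heap space returned */
    return 0;
}
```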
Process State
As a process executes, it changes state. The state of a process is
defined in part by the current activity of that process. Each process
may be in one of the following states:
 New. The process is being created.
 Running. Instructions are being executed.
 Waiting. The process is waiting for some event to occur (such as an
I/O completion or reception of a signal).
 Ready. The process is waiting to be assigned to a processor.
 Terminated. The process has finished execution.
Process Scheduling
• The two main objectives of the process scheduling system are to keep
the CPU busy at all times and to deliver "acceptable" response times
for all programs, particularly for interactive ones.
• The process scheduler must meet these objectives by implementing
suitable policies for swapping processes in and out of the CPU.
(Note that these objectives can be conflicting. In particular, every time the system
steps in to swap processes it takes up time on the CPU to do so, which is thereby
"lost" from doing any useful productive work.)
Scheduling Queues
• All processes are stored in the job
queue.
• Processes in the Ready state are
placed in the ready queue.
• Processes waiting for a device to
become available or to deliver data
are placed in device queues. There
is generally a separate device queue
for each device.
• Other queues may also be created
and used as needed.
Schedulers
• A long-term scheduler is typical of a batch system or a very heavily loaded
system. It runs infrequently (such as when one process ends, selecting another
to be loaded in from disk in its place) and can afford to take the time
to implement intelligent and advanced scheduling algorithms.
• The short-term scheduler, or CPU scheduler, runs very frequently, on the
order of once every 100 milliseconds, and must very quickly swap one process out of
the CPU and swap in another one.
• Some systems also employ a medium-term scheduler. When system loads
get high, this scheduler will swap one or more processes out of the ready
queue for a few seconds, in order to allow smaller, faster jobs to
finish up quickly and clear the system.
Operations on Processes
1.Process Creation
• Processes may create other processes through appropriate system calls, such as fork or spawn. The process
which does the creating is termed the parent of the other process, which is termed its child.
• Each process is given an integer identifier, termed its process identifier, or PID. The parent PID (PPID) is
also stored for each process.
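A minimal sketch of process creation with fork() on a POSIX system (not part of the original slides):

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();             /* duplicate the calling process */

    if (pid < 0) {
        perror("fork");             /* creation failed */
    } else if (pid == 0) {          /* child sees a return value of 0 */
        printf("child:  PID=%d PPID=%d\n", getpid(), getppid());
    } else {                        /* parent sees the child's PID */
        printf("parent: PID=%d created child PID=%d\n", getpid(), pid);
    }
    return 0;
}
```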
2.Process termination
 Processes may request their own termination by making
the exit() system call, typically returning an int. This int is passed
along to the parent if it is doing a wait(), and is typically zero on
successful completion and some non-zero code in the event of
problems.
 When a process terminates, all of its system resources are freed up,
open files flushed and closed, etc. The process termination status and
execution times are returned to the parent if the parent is waiting for
the child to terminate, or eventually returned to init if the process
becomes an orphan.
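A sketch of this exit()/wait() handshake on a POSIX system; the status value 42 is an arbitrary illustrative code:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();

    if (pid == 0) {
        exit(42);                   /* child terminates with a status code */
    } else if (pid > 0) {
        int status;
        wait(&status);              /* parent blocks until the child terminates */
        if (WIFEXITED(status))
            printf("child exited with status %d\n", WEXITSTATUS(status));
    }
    return 0;
}
```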
CPU Scheduling
Basic Concept, Scheduling Criteria, Scheduling Algorithms,
Multiple-Processor Scheduling.
CPU Scheduling
o The objective of multiprogramming is to have some process running at all times,
to maximize CPU utilization. The idea is relatively simple. A process is executed
until it must wait, typically for the completion of some I/O request.
o In a simple computer system, the CPU then just sits idle. All this waiting time is
wasted; no useful work is accomplished. With multiprogramming, we try to use
this time productively. Several processes are kept in memory at one time. When
one process has to wait, the operating system takes the CPU away from that
process and gives the CPU to another process.
o This pattern continues. Every time one process has to wait, another process can
take over use of the CPU. Scheduling of this kind is a fundamental operating-
system function. Almost all computer resources are scheduled before use. The
CPU is, of course, one of the primary computer resources. Thus, its scheduling is
central to operating-system design.
Preemptive Scheduling and Non-Preemptive Scheduling
CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for
example, as the result of an I/O request or an invocation of wait for the
termination of one of the child processes)
2. When a process switches from the running state to the ready state (for
example, when an interrupt occurs)
3. When a process switches from the waiting state to the ready state (for
example, at completion of I/O)
4. When a process terminates
When scheduling takes place only under circumstances 1 and 4, we say that the
scheduling scheme is nonpreemptive or cooperative; otherwise, it is preemptive. Under
nonpreemptive scheduling, once the CPU has been allocated to a process, the process
keeps the CPU until it releases the CPU either by terminating or by switching to the
waiting state. This scheduling method was used by Microsoft Windows 3.x; Windows 95
introduced preemptive scheduling, and all subsequent versions of Windows operating
systems have used preemptive scheduling.
Scheduling Criteria
Different CPU scheduling algorithms have different properties, and the choice of a particular
algorithm may favor one class of processes over another. In choosing which algorithm to
use in a particular situation, we must consider the properties of the various algorithms.
Many criteria have been suggested for comparing CPU scheduling algorithms. Which
characteristics are used for comparison can make a substantial difference in which algorithm
is judged to be best. The criteria include the following:
 CPU utilization. We want to keep the CPU as busy as possible. Conceptually, CPU utilization
can range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly
loaded system) to 90 percent (for a heavily used system).
 Throughput. If the CPU is busy executing processes, then work is being done. One measure of
work is the number of processes that are completed per time unit, called throughput. For long
processes, this rate may be one process per hour; for short transactions, it may be 10 processes per
second.
 Turnaround time. From the point of view of a particular process, the important criterion is
how long it takes to execute that process. The interval from the time of submission of a process to
the time of completion is the turnaround time. Turnaround time is the sum of the periods spent
waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
 Waiting time. The CPU scheduling algorithm does not affect the amount of time during which
a process executes or does I/O; it affects only the amount of time that a process spends waiting in
the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.
 Response time. In an interactive system, turnaround time may not be the best criterion.
Often, a process can produce some output fairly early and can continue computing new results
while previous results are being output to the user. Thus, another measure is the time from the
submission of a request until the first response is produced. This measure, called response time, is
the time it takes to start responding, not the time it takes to output the response.
Scheduling Algorithms
The operating system can use various algorithms to schedule
processes on the processor efficiently. The following algorithms
can be used to schedule jobs.
1. First Come First Serve
It is the simplest algorithm to implement. The process with the
earliest arrival time gets the CPU first: the earlier the arrival
time, the sooner the process gets the CPU. It is a non-preemptive
type of scheduling.
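A small sketch (burst values assumed for illustration) computing FCFS waiting and turnaround times for processes that all arrive at time 0:

```c
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};            /* illustrative CPU bursts */
    int n = sizeof burst / sizeof burst[0];
    int completion = 0;

    for (int i = 0; i < n; i++) {        /* FCFS: run in arrival order */
        int waiting = completion;        /* time spent in the ready queue */
        completion += burst[i];          /* run to completion, no preemption */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, waiting, completion);
    }
    return 0;
}
```

With these bursts, P1 waits 0, P2 waits 24, and P3 waits 27, showing how one long job ahead in the queue inflates every later process's waiting time.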
2. Round Robin
In the Round Robin scheduling algorithm, the OS defines a time
quantum (slice). All the processes are executed in a cyclic
way: each process gets the CPU for a small amount of time
(the time quantum) and then goes back to the ready queue
to wait for its next turn. It is a preemptive type of scheduling.
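A sketch of this cycle, with an assumed quantum of 4 and the same illustrative bursts as above:

```c
#include <stdio.h>

int main(void) {
    int remaining[] = {24, 3, 3};        /* remaining burst per process */
    int n = 3, quantum = 4, clock = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {    /* cycle through the ready queue */
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            clock += slice;              /* process i holds the CPU */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                printf("P%d finishes at t=%d\n", i + 1, clock);
                done++;
            }
        }
    }
    return 0;
}
```

Unlike FCFS, the short jobs P2 and P3 finish early (t=7 and t=10) instead of waiting behind the long job.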
3. Shortest Job First
The job with the shortest burst time gets the CPU first: the
shorter the burst time, the sooner the process gets the CPU. It
is a non-preemptive type of scheduling.
4. Shortest remaining time first
It is the preemptive form of SJF. In this algorithm, the OS
schedules jobs according to their remaining execution time.
5. Priority based scheduling
In this algorithm, a priority is assigned to each process.
The higher the priority, the sooner the process gets the
CPU. If two processes have the same priority, they are
scheduled according to their arrival time.
6. Highest Response Ratio Next
In this scheduling algorithm, the process with the highest response
ratio, computed as (waiting time + burst time) / burst time, is
scheduled next. This reduces starvation in the system.
Multiple-Processor Scheduling
Multiple-processor (multiprocessor) scheduling focuses on designing
the scheduling function for a system that contains more than one
processor. Multiple CPUs share the load (load sharing) in
multiprocessor scheduling so that various processes run simultaneously. In
general, multiprocessor scheduling is complex compared to single-
processor scheduling. In the simplest case the processors are
identical, so any process can run on any processor at any
time.
The multiple CPUs in the system are in close communication and
share a common bus, memory, and other peripheral devices, so we can
say that the system is tightly coupled. These systems are used when we
want to process a bulk amount of data, and they are used mainly in
applications such as satellite control and weather forecasting.
Multiprocessor systems may be heterogeneous (different kinds of CPUs)
or homogenous (the same CPU).
Approaches to Multiple Processor Scheduling
There are two approaches to multiple processor scheduling in the
operating system: Symmetric Multiprocessing and Asymmetric
Multiprocessing
1.Symmetric Multiprocessing: In this approach, each processor
is self-scheduling. All processes may be in a common ready queue,
or each processor may have its own private queue of ready processes.
Scheduling proceeds by having the scheduler for each
processor examine the ready queue and select a process to
execute.
2.Asymmetric Multiprocessing: It is used when all the scheduling
decisions and I/O processing are handled by a single processor
called the Master Server. The other processors execute only
the user code. This is simple and reduces the need for data sharing,
and this entire scenario is called Asymmetric Multiprocessing.
Processor Affinity
Processor Affinity means a process has an affinity for the processor
on which it is currently running. When a process runs on a specific
processor, there are certain effects on the cache memory. The data
most recently accessed by the process populate the cache for the
processor. As a result, successive memory access by the process is
often satisfied in the cache memory.
1.Soft Affinity: When an operating system has a policy of keeping a
process running on the same processor but not guaranteeing it will
do so, this situation is called soft affinity.
2.Hard Affinity: Hard affinity allows a process to specify a subset of
processors on which it may run. Linux implements soft
affinity, but it also provides system calls such as sched_setaffinity() that
support hard affinity.
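A minimal Linux sketch of hard affinity using sched_setaffinity(); pinning to CPU 0 is an arbitrary choice for illustration:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);                      /* start with an empty CPU set */
    CPU_SET(0, &set);                    /* allow only CPU 0 */

    if (sched_setaffinity(0, sizeof(set), &set) != 0) {  /* pid 0 = caller */
        perror("sched_setaffinity");
        return 1;
    }
    printf("process now restricted to CPU 0\n");
    return 0;
}
```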
Load Balancing
• Load balancing keeps the workload evenly
distributed across all processors in an SMP system. Load balancing is
necessary only on systems where each processor has its own private
queue of processes eligible to execute; otherwise, one or more
processors may sit idle while other processors have high workloads,
along with lists of processes awaiting the CPU.
• On systems with a common run queue, load balancing is usually
unnecessary, because an idle processor immediately extracts a runnable
process from the common run queue.
• There are two general approaches to load balancing:
Push Migration: In push migration, a specific task periodically checks the load on each processor. If it finds an
imbalance, it evens the load by moving (pushing) processes from
overloaded processors to idle or less busy ones.
Pull Migration: Pull migration occurs when an idle processor pulls a waiting task from a busy
processor for its own execution.
Symmetric Multithreading
SMP systems allow several threads to run concurrently by providing multiple physical
processors. An alternative strategy is to provide multiple logical rather than physical
processors. Such a strategy is known as symmetric multithreading (or SMT); it has also
been termed hyperthreading technology on Intel processors.
The idea behind SMT is to create
multiple logical processors on the same
physical processor, presenting a view of
several logical processors to the
operating system, even on a system
with only a single physical processor.
Each logical processor has its own
architecture state, which includes
general-purpose and machine-state
registers.
Process
Synchronization
Background, The Critical-Section
Problem, Synchronization Hardware,
Semaphores, Classical Problems of Synchronization.
Process Synchronization
o Process synchronization means coordinating the execution of processes
so that no two processes access the same shared resources and data at
the same time. It is required in a multiprocessing system where multiple
processes run together and more than one process tries to gain access
to the same shared resource or data at the same time.
o Without synchronization, changes made by one process may not be
reflected when another process accesses the same shared data. It is
therefore necessary that processes be synchronized with each other, as
this helps avoid inconsistency of shared data.
o A situation where several processes access and manipulate the same
data concurrently, and the outcome of the execution depends on the
particular order in which the access takes place, is called a race condition.
o To prevent race conditions, concurrent processes must be synchronized.
o To prevent race conditions, concurrent processes must be synchronized.
The Critical-Section Problem
Consider a system consisting of n processes {P0, P1, ...,
Pn-1}. Each process has a segment of code, called a
critical section, in which the process may be changing
common variables, updating a table, writing a file, and
so on. The important feature of the system is that, when
one process is executing in its critical section, no other
process is to be allowed to execute in its critical section.
That is, no two processes are executing in their critical
sections at the same time.
The critical-section problem is to design a protocol that
the processes can use to cooperate. Each process must
request permission to enter its critical section. The
section of code implementing this request is the entry
section. The critical section may be followed by an exit
section. The remaining code is the remainder section.
The general structure of a typical process Pi is sketched below; in the textbook's
Figure 6.1, the entry section and exit section are enclosed in boxes to highlight
these important segments of code.
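A sketch of that general structure in C-style pseudocode (the sections are placeholders, not a working protocol):

```c
void process_Pi(void) {
    while (1) {
        /* entry section: request permission to enter the critical section */

        /* critical section: change common variables, update a table, ... */

        /* exit section: announce that the critical section is free */

        /* remainder section: the rest of the process's code */
    }
}
```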
Solution to Critical-Section Problem
1. Mutual Exclusion. If process Pi is executing in its critical section,
then no other processes can be executing in their critical sections.
2. Progress. If no process is executing in its critical section and there
exist some processes that wish to enter their critical section, then
the selection of the processes that will enter the critical section next
cannot be postponed indefinitely.
3. Bounded Waiting. A bound must exist on the number of times
that other processes are allowed to enter their critical sections after
a process has made a request to enter its critical section and before
that request is granted.
• Assume that each process executes at a nonzero speed
• No assumption concerning relative speed of the n processes.
Synchronization Hardware
o Synchronization hardware is a hardware-based solution to the
critical-section problem. In our earlier discussion of the critical
section, we saw how multiple processes sharing common resources
must be synchronized to avoid inconsistent results.
o This hardware-based solution introduces hardware instructions that
can be used to resolve the critical-section problem effectively.
Hardware solutions are often simpler and also improve the efficiency
of the system.
o Hardware synchronization provides two kinds of hardware
instructions: TestAndSet and Swap. We will discuss each of
these instructions briefly.
TestAndSet Hardware Instruction
• The TestAndSet() hardware instruction is an atomic instruction. Atomic means
both the test operation and the set operation are executed in one machine
cycle, at once. If two different processes execute TestAndSet()
simultaneously, each on a different CPU, they are executed sequentially in
some arbitrary order.
• The TestAndSet() instruction can be defined as in the code below:
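The slide's code image is not reproduced here; the following is the standard textbook-style definition, sketched in C (on real hardware, the whole function executes as a single atomic instruction):

```c
#include <stdbool.h>

bool test_and_set(bool *target) {
    bool rv = *target;      /* "test": remember the old value */
    *target = true;         /* "set": unconditionally set the lock */
    return rv;              /* executed atomically by the hardware */
}

/* Mutual exclusion using it; `lock` is shared and initialized to false: */
bool lock = false;

void enter_region(void) {
    while (test_and_set(&lock))
        ;                   /* busy-wait while the old value was true */
}

void leave_region(void) {
    lock = false;           /* exit section: release the lock */
}
```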
Swap Hardware Instruction
• Like the TestAndSet() instruction, the Swap() hardware instruction is
also an atomic instruction, with the difference that it operates on
the two variables provided as its parameters.
• The structure of the Swap() instruction is:
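Again, the slide's code image is missing; this is the standard definition, sketched in C (the hardware performs it atomically):

```c
#include <stdbool.h>

void swap(bool *a, bool *b) {
    bool temp = *a;         /* exchange the two variables */
    *a = *b;
    *b = temp;
}

/* Mutual exclusion using Swap(); `lock` is shared and initialized to
   false, while `key` is local to each process: */
bool lock = false;

void enter_region(void) {
    bool key = true;
    while (key)
        swap(&lock, &key);  /* loop until we swap in a false lock value */
}

void leave_region(void) {
    lock = false;
}
```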
Semaphores
• Semaphores are integer variables that are used to solve the
critical-section problem by means of two atomic operations, wait
and signal, that are used for process synchronization.
• The definitions of wait and signal are as follows:
Wait: The wait operation decrements the value of its argument S if it is
positive. If S is zero or negative, the caller must wait until S becomes
positive before the decrement can be performed.
Signal: The signal operation increments the value of its argument S.
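A busy-waiting sketch of these two operations (the classical definition; both must execute atomically, which plain C alone does not guarantee):

```c
typedef struct { int value; } semaphore;

void wait_op(semaphore *S) {    /* wait(S), also written P(S) */
    while (S->value <= 0)
        ;                       /* busy-wait until S is positive */
    S->value--;
}

void signal_op(semaphore *S) {  /* signal(S), also written V(S) */
    S->value++;
}
```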
Types of Semaphores
There are two main types of semaphores: counting semaphores and binary
semaphores. Details about these are given as follows:
 Counting Semaphores: These are integer-valued semaphores with an unrestricted
value domain. They are used to coordinate resource access, where the
semaphore count is the number of available resources. When a resource is added,
the semaphore count is incremented, and when a resource is removed, the
count is decremented.
 Binary Semaphores: Binary semaphores are like counting semaphores, but their
value is restricted to 0 and 1. The wait operation only succeeds when the semaphore is 1,
and the signal operation succeeds when the semaphore is 0. It is sometimes easier to
implement binary semaphores than counting semaphores.
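A sketch using POSIX semaphores, one real implementation of counting semaphores; the initial count of 3 is an assumed number of available resources:

```c
#include <semaphore.h>
#include <stdio.h>

int main(void) {
    sem_t sem;
    sem_init(&sem, 0, 3);   /* 0 = shared among threads; 3 resources free */

    sem_wait(&sem);         /* wait: acquire a resource (count 3 -> 2) */
    puts("resource acquired");
    sem_post(&sem);         /* signal: release it (count 2 -> 3) */

    sem_destroy(&sem);
    return 0;
}
```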
Classical Problems of Synchronization
In this section, we present a number of synchronization problems as examples of a large class
of concurrency-control problems. These problems are used for testing nearly every newly
proposed synchronization scheme. In our solutions to the problems, we use semaphores for
synchronization.
The classical problems of synchronization are as follows:
1.Bounded-Buffer problem
2.Sleeping barber problem
3.Dining Philosophers problem
4.Readers and writers problem
Bounded-Buffer problem
• Also known as the Producer-Consumer problem. In this problem,
there is a buffer of n slots, and each slot is capable of
storing one unit of data. There are two processes operating
on the buffer – Producer and Consumer. The producer
tries to insert data and the consumer tries to remove data.
• If the processes run simultaneously without coordination, they
will not yield the expected output.
• The solution to this problem is to create two semaphores, one full
and the other empty, to keep track of the concurrent processes.
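A sketch of that two-semaphore solution using POSIX threads and semaphores; the buffer size of 5 and item count of 10 are assumed values:

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5
int buffer[N];
int in = 0, out = 0;                     /* next slot to fill / to empty */
sem_t empty, full;                       /* count free and filled slots */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty);                /* wait for a free slot */
        pthread_mutex_lock(&mutex);
        buffer[in] = item;
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&full);                 /* announce a filled slot */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 10; i++) {
        sem_wait(&full);                 /* wait for a filled slot */
        pthread_mutex_lock(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&empty);                /* announce a free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty, 0, N);              /* all N slots start free */
    sem_init(&full, 0, 0);               /* no slots start filled */
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```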
Sleeping Barber Problem
• This problem is based on a hypothetical barbershop with one
barber.
• When there are no customers, the barber sleeps in his chair. If a
customer enters, he wakes up the barber and sits in the customer
chair. If no chair is empty, customers wait in the waiting room.
Dining Philosopher’s problem
• This problem states that K philosophers are sitting
around a circular table, with one chopstick placed between
each pair of philosophers. A philosopher is able to eat if
he can pick up the two chopsticks adjacent to him.
• This problem deals with the allocation of limited resources.
Readers and Writers Problem
• This problem occurs when many threads of execution try to access
the same shared resources at the same time. Some threads may read,
and some may write. In this scenario, we may get faulty outputs.
BIBLIOGRAPHY
02124802020 GAURAV KUMAR
02224802020 GAURAV KUMAR
02324802020 HARI OM SINGH
02424802020 HARSHIT GUPTA
02524802020 HARSHIT VASHISHT
02624802020 HEENA SATI
02724802020 HEMANT KUMAR YADAV
02824802020 HIMANSHU
02924802020 KISHAN KUMAR
03024802020 KSHITIZ PAL
03124802020 KUMAR SHIUBHAM
03224802020 KUNAL GUPTA
03324802020 KUNAL SINGH KORANGA
03424802020 KUSHAGRA SINGH
03524802020 LUV KUMAR
03624802020 MOHD AMIR HUSSAIN
03724802020 MOHIT KUMAR
03824802020 MOHIT KUMAR
03924802020 MOHIT KUMAR
04024802020 MUKUL GUPTA
Group 2 students who made a great contribution to this Unit 2 presentation
Sources
www.javatpoint.com
www.geeksforgeeks.org
www.cs.uic.edu
www.binaryterms.com
www.tutorialspoint.com
TEXTBOOK: Silberschatz and Galvin, "Operating System Concepts", John Wiley & Sons, 7th Ed., 2005